Air-gapped Environment

Computence AI Agent can be installed in air-gapped Splunk environments with no internet connection.

In this case, the backend and all supporting services must be available locally.

Inference Engine

Warning

A local inference engine is a prerequisite for an air-gapped environment.

The local LLMs enforce guardrails (malicious payload and prompt-injection checks) while also powering Splunk reasoning and tool execution.

We continuously validate new checkpoints, publish benchmarks, and update this list as models evolve. We currently support the following open-weight models:

  • gpt-oss-120b (OpenAI)
  • glm-4.6 (Z.ai)
  • kimi-k2-0905 (Moonshot AI)
  • minimax-m2 (MiniMax)

The inference endpoint must expose an OpenAI-compatible Chat Completions API, accept bearer-token API keys, and be reachable over either HTTP or HTTPS.

Example connection test:

curl -H "Authorization: Bearer <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{"model":"gpt-oss-120b","messages":[{"role":"user","content":"What is the meaning of life?"}]}' \
     https://example.com/api/v1/chat/completions
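
A successful call returns an OpenAI-style JSON body. To check only the assistant's reply, you can pipe the response through jq (a sketch, assuming jq is installed; the endpoint URL and API key are placeholders as above):

```shell
# Extract just the reply text from the Chat Completions response
curl -s -H "Authorization: Bearer <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{"model":"gpt-oss-120b","messages":[{"role":"user","content":"ping"}]}' \
     https://example.com/api/v1/chat/completions \
  | jq -r '.choices[0].message.content'
```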

Database

We support PostgreSQL 17 as the backend database. Connect to an existing database or use the official PostgreSQL 17.x image.
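If you provision a fresh instance, a minimal sketch using the official image might look like the following (the container name, credentials, volume path, and published port are placeholders; adjust them to your environment):

```shell
# Start PostgreSQL 17 with a database and user matching the backend defaults
docker run -d --name splunk-ai-db \
  -e POSTGRES_DB=splunk_ai \
  -e POSTGRES_USER=splunk_ai \
  -e POSTGRES_PASSWORD=<password> \
  -v /srv/splunk-ai/pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:17
```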

After provisioning, use the database migration environment provided to you to bootstrap the database. Detailed instructions are available in the migration environment.

Computence AI Backend

The backend ships as a hardened container image. Run it directly with Docker or Docker Compose.

Minimum allocation: 1 CPU core, 1 GB RAM, and a stable network path to both the model endpoint and PostgreSQL.

  1. Create the following .env file:

    # .env
    
    # App
    TRUSTED_HOSTS=<host1>[,<host2>]  # Hostnames that the backend can accept requests on
    
    # Inference
    INFERENCE_URL=<url>
    INFERENCE_API_KEY=<api_key>
    GUARDRAIL_MODEL=<model>          # May be the same model used for reasoning
    VERIFY_SSL=<boolean>             # default: true
    
    # Database
    DB_HOST=<hostname>               # default: localhost
    DB_PORT=<port>                   # default: 5432
    DB_NAME=<database>               # default: splunk_ai
    DB_USER=<user>                   # default: splunk_ai
    DB_PASSWORD=<password>
    

  2. Run the container image with the .env file:

    docker run --rm -p 3000:3000 --env-file .env computence-ai/ai-agent-for-splunk-backend:latest
    
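Alternatively, the same setup can be expressed with Docker Compose (a sketch; the service name and restart policy are assumptions):

```yaml
# compose.yaml
services:
  backend:
    image: computence-ai/ai-agent-for-splunk-backend:latest
    env_file: .env
    ports:
      - "3000:3000"
    restart: unless-stopped
```

Then start it with `docker compose up -d`.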

Splunk App

From this point on, setting up the Splunk app is the same as for an internet-connected deployment.

Follow the instructions in Installation and Configuration to complete the setup.