Air-gapped Environment

Lynx AI Agent can be installed in air-gapped Splunk environments with no internet connection.

In this case, the backend and all supporting services must be available locally.

Inference Engine

Warning

A local inference engine is a prerequisite for an air-gapped environment.

The local LLMs power Splunk reasoning and tool execution, while also enforcing guardrails through malicious-payload and prompt-injection checks.

We continuously validate new checkpoints, publish benchmarks, and update this list as models evolve.
Currently, the following open-weight models are supported:

  • GLM 4.7 (Z.ai)
  • Kimi K2.5 (Moonshot AI)
  • MiniMax M2.1 (MiniMax)

The inference endpoint must expose an OpenAI-compatible Chat Completions API, accept bearer API keys, and be reachable over HTTP or HTTPS.

You can test the connection to the inference endpoint like this:

curl -H "Authorization: Bearer <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{"model":"glm-4.7","messages":[{"role":"user","content":"What is the meaning of life?"}]}' \
     https://example.com/api/v1/chat/completions # (1)!

  1. Set the base URL to the inference endpoint in your organization.
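
If the call succeeds, the endpoint returns an OpenAI-style chat completion whose reply text lives at .choices[0].message.content. As a quick sanity check, and assuming jq is installed on the host, you can extract just the reply:

curl -s -H "Authorization: Bearer <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{"model":"glm-4.7","messages":[{"role":"user","content":"ping"}]}' \
     https://example.com/api/v1/chat/completions | jq -r '.choices[0].message.content'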

Lynx AI Backend

The backend ships as a hardened container image. Run it directly with Docker, or with Docker Compose (a Compose sketch follows the steps below).

Minimum allocation: 1 CPU core, 1 GB RAM, and a stable network path to both the model endpoint and PostgreSQL.
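
Before starting the container, you can verify both network paths from the Docker host. The hostnames below are placeholders for your environment; pg_isready ships with the standard PostgreSQL client tools:

# Confirm the inference endpoint is reachable over HTTP(S).
# Any HTTP status code (even 401) means the network path works.
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/api/v1/chat/completions

# Confirm PostgreSQL accepts connections
pg_isready -h <postgres-host> -p 5432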

  1. Create the following .env file:

    # .env
    
    # App
    ONPREM_MODE=true # (1)!
    TRUSTED_HOSTS=<host1>[,<host2>] # (2)!
    SSL_ENABLED=<boolean> # (3)!
    
    # Inference
    INFERENCE_URL=<url>
    INFERENCE_API_KEY=<api_key>
    GUARDRAIL_MODEL=<model> # (4)!
    VERIFY_SSL=<boolean> # (5)!
    

    1. Must be set to true when deploying in an air-gapped environment.
    2. Comma-separated list of hostnames on which the backend accepts requests.
    3. Whether to listen for HTTPS requests. false means HTTP. true requires SSL certificates to be mounted into the container. Default: false
    4. AI model used for guardrail protections. Must be set to the name value of one of the on-prem AI models in ai.conf. Default: google/gemini-2.5-flash-lite
    5. Whether to verify the SSL certificate of the inference endpoint. Default: true
  2. Run the container image with the .env file:

    docker run -d -p 3000:3000 --env-file .env \
      ghcr.io/trylynx-ai/ai-agent-for-splunk-backend:latest
    

    If SSL_ENABLED=true, mount a certificate chain and private key into the container at /ssl/cert.pem and /ssl/privkey.pem:

    docker run -d -p 3000:3000 --env-file .env \
      -v /path/to/fullchain.pem:/ssl/cert.pem:ro \
      -v /path/to/privkey.pem:/ssl/privkey.pem:ro \
      ghcr.io/trylynx-ai/ai-agent-for-splunk-backend:latest
    

Tip

Ensure the certificate files are readable by the container process (e.g., chmod 644).
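
If you prefer Docker Compose, the same deployment can be expressed as a compose file. The sketch below is equivalent to the docker run commands above; the volume mounts are only needed when SSL_ENABLED=true:

# docker-compose.yml
services:
  backend:
    image: ghcr.io/trylynx-ai/ai-agent-for-splunk-backend:latest
    env_file: .env
    ports:
      - "3000:3000"
    # Mount certificates only when SSL_ENABLED=true
    volumes:
      - /path/to/fullchain.pem:/ssl/cert.pem:ro
      - /path/to/privkey.pem:/ssl/privkey.pem:ro
    restart: unless-stopped

Start it with docker compose up -d, then confirm the service came up with docker compose logs -f backend or a curl against http://localhost:3000 (any HTTP response confirms the listener is up).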

Splunk App

When deploying the Splunk app, in addition to configuring your backend server details, you must also set onprem_mode = true and specify your available on-prem AI models in local/ai.conf.
See ai.conf for more details.
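
As a rough illustration only (the authoritative stanza and key names live in the ai.conf spec; everything below other than onprem_mode and name, both referenced above, is an assumption):

# local/ai.conf -- illustrative sketch; consult the ai.conf spec
# for the authoritative stanza and key names.
[general]
onprem_mode = true

# One stanza per available on-prem model; the `name` value is what
# GUARDRAIL_MODEL in the backend .env refers to.
[model:glm-4.7]
name = glm-4.7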

From this point on, setting up the Splunk app is the same as for an internet-connected deployment.

Follow the instructions in Installation and Configuration to complete the setup.