Architecture

Lynx AI Agent offers flexible deployment architectures to accommodate a wide range of security and infrastructure requirements.

Whether you want an easy-to-use, fully managed cloud service or a completely air-gapped on-premises installation, these architectures cover popular client environments and align seamlessly with Splunk Validated Architectures (SVAs).

Cloud

Note

Not to be confused with Splunk Cloud. The cloud version of Lynx AI can be used with any valid Splunk environment, including on-premises and hybrid deployments. The only requirement is internet connectivity.

This is the simplest path to AI in production and the recommended way to use Lynx AI. Gain continuous access to the latest and greatest AI models, without the need to maintain or scale anything.

Lynx AI manages the entire AI stack for you, while keeping a strict ZDR (Zero Data Retention) policy that ensures:

  • No prompts or responses are logged or stored beyond transient processing
  • No data is used for model training

Lynx AI cloud is highly available, scales horizontally, and is fully protected by Cloudflare's enterprise-grade security layer, including robust DDoS protection and proactive cyber threat mitigation.

```mermaid
flowchart LR
    User(["User"])

    subgraph you["Your Infrastructure"]
        subgraph appStack[" "]
            App["Lynx AI Agent Splunk App"]
            Splunk[("Splunk")]
        end
    end

    subgraph lynx["Lynx AI · Cloudflare"]
        Backend["Lynx AI Backend"]
        Models["Frontier Proprietary & Open-Weight AI Models"]
    end

    User -- Prompt --> App
    App -- Response --> User
    App <--> Splunk
    App <--> Backend
    Backend <-- "Zero Data Retention (ZDR)" --> Models

    style appStack fill:transparent,stroke:transparent
    style lynx fill:transparent,stroke:#f6821f,stroke-width:2px
```

  • Compatible with: Splunk Cloud, Splunk Enterprise
  • Requirements: Internet connectivity from user's browser
  • Best for:
    • Instant & easy AI capabilities in your Splunk environment
    • Fast-moving teams that need access to the smartest frontier AI models
    • Managed, always-on service
    • Cost-sensitive customers

Private Cloud

Some highly regulated customers may have a hard requirement that all data remain within their cloud perimeter (tenant isolation). For such cases, Lynx AI can be fully deployed inside a customer's own cloud infrastructure.

We call this architecture Private Cloud, sometimes called Bring Your Own Cloud (BYOC).

Under this architecture, in addition to the Splunk app, you own and manage the Lynx AI backend and provide your own inference endpoint (usually AWS Bedrock, Google Vertex AI, or Azure AI Foundry). All inference runs in your own cloud tenant.

```mermaid
flowchart LR
    User(["User"])

    subgraph you["Your Infrastructure"]
        subgraph appStack[" "]
            App["Lynx AI Agent Splunk App"]
            Splunk[("Splunk")]
        end
        subgraph backendStack[" "]
            Backend["Lynx AI Backend"]
        end
    end

    subgraph provider["Cloud Provider"]
        Models["Frontier Proprietary & Open-Weight AI Models"]
    end

    User -- Prompt --> App
    App -- Response --> User
    App <--> Splunk
    App <--> Backend
    Backend <--> Models

    style appStack fill:transparent,stroke:transparent
    style backendStack fill:transparent,stroke:transparent
```

  • Compatible with: Splunk Cloud, Splunk Enterprise
  • Requirements:
    • Self-hosting the Lynx AI backend (Docker image, horizontally scalable)
    • Connectivity from user's browser to the self-hosted backend
    • Cloud-based inference endpoint
  • Best for:
    • Customers with strict data sovereignty or regulatory requirements
    • Teams that manage their own cloud infrastructure, or have access to internal DevOps resources
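The browser-to-backend connectivity requirement is easy to verify with a quick probe before rolling the app out. A minimal sketch in Python, assuming a hypothetical backend URL and a `/health` endpoint (the actual host, port, and health-check path depend on your deployment):

```python
from urllib import request, error

# Hypothetical self-hosted Lynx AI backend URL; substitute the actual
# host, port, and health-check path used by your deployment.
BACKEND_URL = "https://lynx-backend.internal.example.com"

def backend_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if the backend answers an HTTP request within the timeout."""
    try:
        with request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (error.URLError, OSError):
        # DNS failure, refused connection, TLS error, or timeout
        return False

print(backend_reachable(BACKEND_URL))
```

The same check works from any machine that needs to reach the backend, which makes it handy for validating firewall and DNS rules across user networks.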

Air-gapped

In some special cases, customers operate a fully air-gapped environment with no internet connectivity whatsoever. Lynx AI supports air-gapped networks where the customer owns every single component of the AI stack: the Splunk app, the backend, and the inference engine.

Warning

This is an advanced use case that requires the customer's organization to have highly specialized knowledge of AI infrastructure and a fleet of inference GPUs to host the AI models.

Unless you absolutely know what you are doing, we recommend using one of the other architectures instead.

Note that setting up a self-hosted inference engine is out of scope for Lynx AI. You are expected to operate and maintain it within your organization.

Popular choices for inference engines include vLLM, SGLang, and Ollama.
We also support LLM proxies such as LiteLLM.
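All of these engines and proxies can expose an OpenAI-compatible HTTP API, which is the interface the backend expects. As an illustration only (the endpoint URL and model name below are placeholders, not Lynx AI defaults), a chat-completions request to a self-hosted engine can be built like this:

```python
import json
from urllib import request

# Placeholder endpoint; vLLM, SGLang, Ollama, and LiteLLM can all serve
# an OpenAI-compatible API, typically under a /v1 prefix.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible /chat/completions request (not yet sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("my-open-weight-model", "Summarize recent errors")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body whose `choices[0].message.content` field holds the model's reply; that is the response shape an OpenAI-compatible endpoint provides.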

When choosing an AI model for use in an air-gapped environment, consult our Model Performance page for info on the latest open-weight models and their requirements.
Lynx AI Agent requires fairly capable AI models to be effective in Splunk - small models aren't usually a good fit.

You can find more information about air-gapped deployments in our Air-gapped documentation.

```mermaid
flowchart LR
    User(["User"])

    subgraph you["Your Infrastructure"]
        subgraph appStack[" "]
            App["Lynx AI Agent Splunk App"]
            Splunk[("Splunk")]
        end
        subgraph inferenceStack[" "]
            Backend["Lynx AI Backend"]
            Inference["Inference Engine (OpenAI-compatible API)"]
            Models["Open-Weight AI Models"]
        end
    end

    User -- Prompt --> App
    App -- Response --> User
    App <--> Splunk
    App <--> Backend
    Backend <--> Inference
    Inference <--> Models

    style appStack fill:transparent,stroke:transparent
    style inferenceStack fill:transparent,stroke:transparent
```

  • Compatible with: Splunk Enterprise
  • Requirements:
    • Self-hosting the Lynx AI backend (Docker image, horizontally scalable)
    • Connectivity from user's browser to the self-hosted backend
    • Self-hosted inference engine (OpenAI-compatible API)
    • Supported open-weight AI model (at least one)
  • Best for:
    • Classified, regulated, or high-security environments where data must never leave the network perimeter
    • Teams with access to specialized AI infrastructure and GPU resources

Comparison

Here's a quick comparison of the three architectures to help you choose the one that best suits your needs.

Ownership

This table shows who is responsible for managing each component in each deployment model.

Component โ˜ Cloud ๐Ÿ”’ Private Cloud โ›“๏ธโ€๐Ÿ’ฅ Air-gapped
Splunk App You You You
Backend Lynx AI You You
Inference Lynx AI Cloud Provider You

Capabilities & Requirements

This table compares the three architectures for Lynx AI Agent.

โ˜ Cloud ๐Ÿ”’ Private Cloud โ›“๏ธโ€๐Ÿ’ฅ Air-gapped
Internet Required โœ… Internal cloud access only โŒ
Splunk Cloud Supported โœ… โœ… โŒ
Splunk Enterprise Supported โœ… โœ… โœ…
Frontier AI Models โœ… โœ… Open-weight models only
ZDR (Zero Data Retention) โœ… Provider's responsibility Customer's responsibility