Connect your LLM

Kera does not bundle an LLM. You bring your own provider, keeping full control over your AI costs, data, and model choice.

  1. Open workspace settings

    Go to your workspace settings and find the LLM Provider section.

  2. Choose a provider

    Select from the supported providers:

    • Anthropic — Claude (Opus, Sonnet, Haiku)
    • OpenAI — GPT-4o, GPT-4, GPT-3.5
    • Google — Gemini Pro, Gemini Ultra
    • Mistral — Mistral Large, Medium, Small
    • OpenAI-compatible — any endpoint that speaks the OpenAI API (self-hosted, Ollama, vLLM, etc.)
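
    For the OpenAI-compatible option you also provide the endpoint's base URL. As an illustration (the field names here are indicative, not necessarily Kera's exact labels), a local Ollama instance exposes an OpenAI-compatible API under /v1:

    // Example configuration (illustrative values)
    Provider: OpenAI-compatible
    Base URL: http://localhost:11434/v1
    API Key:  ollama   // Ollama ignores the key; any placeholder works
    Model:    llama3.1
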
  3. Enter your API key

    Paste your provider's API key. The key is stored securely in your workspace and never shared with other workspaces or users.

    // Example configuration
    Provider: Anthropic
    API Key:  sk-ant-...
    Model:    claude-sonnet-4-20250514

  4. Select your model

    Choose the specific model you want to use. You can change this at any time without losing any data or chat history.
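
    Model names must match the provider's published IDs exactly. Most providers expose a models endpoint you can query to see which IDs your key can access; a minimal sketch for OpenAI:

    # Lists the model IDs available to your key
    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"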

  5. Test the connection

    Open the chat panel and send a message. If the provider is configured correctly, you will get an AI response. If not, Kera shows a clear error explaining what went wrong.
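
    If the in-app test fails, it helps to rule out the key itself by calling the provider directly, outside Kera. A minimal sketch for Anthropic (the model name and token limit are just examples):

    # Expects your key in the ANTHROPIC_API_KEY environment variable
    curl https://api.anthropic.com/v1/messages \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      -d '{"model": "claude-sonnet-4-20250514", "max_tokens": 32,
           "messages": [{"role": "user", "content": "ping"}]}'

    A JSON response with a content field means the key and model are fine; a 401 points at the key, a 404 at the model name.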

  6. Switch providers anytime

    You can swap providers or models at any time. Your tickets, documents, and chat history stay the same — only the AI backend changes.

Tips

  • Your API key stays in your workspace, never shared. Kera does not proxy or store your LLM traffic.
  • Token usage is billed by your provider directly — Kera adds no markup.
  • For on-premises or self-hosted models, point Kera at any OpenAI-compatible endpoint (Ollama, vLLM, etc.); a quick reachability check follows this list.
  • The natural_language workflow action also uses your configured LLM — set it up here first.
  • Each workspace has its own provider configuration, so teams can use different models.
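
  Before configuring a self-hosted endpoint, you can confirm it is reachable and speaks the OpenAI API. A minimal sketch assuming a local Ollama instance on its default port (vLLM serves on port 8000 by default):

    # An OpenAI-compatible endpoint lists its available models here
    curl http://localhost:11434/v1/models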

Next steps