Provider Configuration Guide
The `anyllm.json` file is the heart of your AnyLLM application. It is where you configure connections to every large language model (LLM) you use, whether local models or cloud services. This guide explains each parameter so you can connect any provider.
Anatomy of anyllm.json
Let's break down the structure with an example. Imagine this is your configuration:
```json
{
  "provider": {
    "openrouter": {
      "name": "OpenRouter",
      "type": "openai_compatible",
      "options": {
        "baseURL": "https://openrouter.ai/api/v1/chat/completions",
        "header": {
          "Authorization": "Bearer your_openrouter_key"
        }
      },
      "models": {
        "Gemini-Flash": {
          "name": "google/gemini-flash-1.5"
        },
        "Phi-3-Mini": {
          "name": "microsoft/phi-3-mini-128k-instruct"
        }
      }
    },
    "ollama": {
      "name": "Ollama (local)",
      "type": "openai_compatible",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "Local-Phi-3": {
          "name": "hf.co/microsoft/Phi-3-mini-4k-instruct-gguf:latest"
        }
      }
    }
  }
}
```
Provider Key ("openrouter", "ollama")
This is a unique identifier that you create yourself. It's used internally by the program to distinguish between providers. It is recommended to use short and clear names, such as google, groq, my_local_llms.
name (Provider Name)
- What is it? A user-friendly name that is displayed in the application's interface.
- Example: `"name": "Ollama (local)"`
- Why is it needed? So you can easily tell which provider you are selecting.
type (API Type)
- What is it? The protocol type AnyLLM will use to communicate with the provider.
- `openai_compatible`: For providers compatible with the OpenAI API (e.g., OpenRouter or any other aggregator). They usually state explicitly that they are OpenAI-compatible.
- `google`: Specify this only if you are using a direct connection to the Gemini API. Google's API is not OpenAI-compatible and uses a different connection type.
options (Connection Options)
This object contains the technical details for the connection.
- `baseURL`: The base URL of the API service. This is the most important address to find in your provider's documentation.
- `header`: An object of HTTP headers to send, most often used for authorization.
  - `Content-Type`: Usually always `"application/json"`.
  - `Authorization`: Your secret API key, most commonly in the `Bearer YOUR_KEY` format. Never share this key or publish it in the open!
- You can specify any other parameters that need to be passed as headers; this is only a basic example.
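Taken together, `baseURL` and `header` describe an ordinary HTTP POST. The sketch below builds (but does not send) such a request with Python's standard library, using the OpenRouter values from the example above; the key is a placeholder, and the exact payload AnyLLM sends may differ:

```python
import json
import urllib.request

# "baseURL" from the options object: the endpoint the request goes to.
url = "https://openrouter.ai/api/v1/chat/completions"

# A chat-completions style request, built but not sent.
req = urllib.request.Request(
    url,
    data=json.dumps({
        "model": "google/gemini-flash-1.5",
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode(),
    headers={
        # Content-Type is needed for a JSON body; Authorization comes
        # from the "header" object in anyllm.json.
        "Content-Type": "application/json",
        "Authorization": "Bearer your_openrouter_key",
    },
    method="POST",
)

# urllib.request.urlopen(req) would perform the actual call.
print(req.full_url, req.get_method())
```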
models (Models)
Here you list the models you want to use from this provider.
- Model Key (`"Gemini-Flash"`, `"Local-Phi-3"`): An alias, or short name, for the model that you create yourself. It is displayed in the model selection list in the interface, so make it convenient for you.
- `name` (Model Name at the Provider): The official, full model name that the provider's API requires. It must be copied exactly from the documentation or the provider's list of models; AnyLLM uses this name when sending a request.
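To make the alias/name distinction concrete, here is a minimal sketch of how such a lookup works, mirroring the structure of the example config above (AnyLLM's actual internals may differ):

```python
# A trimmed-down anyllm.json structure as a Python dict.
config = {
    "provider": {
        "openrouter": {
            "name": "OpenRouter",
            "type": "openai_compatible",
            "models": {
                "Gemini-Flash": {"name": "google/gemini-flash-1.5"},
            },
        }
    }
}

def resolve(provider_key: str, model_alias: str) -> str:
    """Map a user-facing alias to the official model name the API expects."""
    return config["provider"][provider_key]["models"][model_alias]["name"]

print(resolve("openrouter", "Gemini-Flash"))  # → google/gemini-flash-1.5
```

The alias (`Gemini-Flash`) only ever appears in the interface; the resolved `name` is what goes over the wire.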
Practical Connection Examples
1. Ollama (Running Models Locally)
Ollama is a great way to run LLMs on your own computer.
- Install Ollama from the official website and run any model (e.g., `ollama run hf.co/microsoft/Phi-3-mini-4k-instruct-gguf:latest`).
- Ollama automatically exposes an OpenAI-compatible API server.
anyllm.json Configuration:
```json
"ollama": {
  "name": "Ollama (local)",
  "type": "openai_compatible",
  "options": {
    "baseURL": "http://localhost:11434/v1"
  },
  "models": {
    "Local Phi-3": {
      "name": "hf.co/microsoft/Phi-3-mini-4k-instruct-gguf:latest"
    },
    "DeepSeek-R1-0528-Qwen3-8B": {
      "name": "hf.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q4_K_M"
    }
  }
}
```
- `baseURL`: For Ollama, this is almost always `http://localhost:11434/v1`.
- Authorization: Not needed, since the service runs locally.
- Model Names (`hf.co/...`): The official model names you used in the `ollama run` command. You can see all installed models with the `ollama list` command.
2. OpenRouter (Access to Dozens of Models with One Key)
OpenRouter is an aggregator service that provides access to models from Google, Anthropic, Mistral, Microsoft, and others through a single API.
- Register: Sign up at openrouter.ai.
- API Key: Copy your API key from the settings page.
- Find Models: Go to the Models page to see the list of available models.
anyllm.json Configuration:
```json
"openrouter": {
  "name": "OpenRouter",
  "type": "openai_compatible",
  "options": {
    "baseURL": "https://openrouter.ai/api/v1/chat/completions",
    "header": {
      "Authorization": "Bearer sk-or-v1-your-long-key"
    }
  },
  "models": {
    "Google Gemini Flash": {
      "name": "google/gemini-flash-1.5"
    },
    "Claude 3.5 Sonnet": {
      "name": "anthropic/claude-3.5-sonnet"
    },
    "Phi-3 Mini Instruct": {
      "name": "microsoft/phi-3-mini-128k-instruct"
    }
  }
}
```
- `baseURL`: `https://openrouter.ai/api/v1/chat/completions`. You can find this in the provider's documentation, for example in its shell or curl examples; every aggregator documents the endpoint for accessing its services.
- `Authorization`: Insert your key after `Bearer`.
- Model Names: Copy the model identifier directly from the OpenRouter site (e.g., `google/gemini-flash-1.5`); that is exactly what goes in the `name` field.
3. Google Gemini (and other direct APIs)
Many services, like Google, have their own APIs that are not always OpenAI-compatible.
So how do you use Google models?
The best way is through a proxy service that provides an `openai_compatible` interface. OpenRouter from the example above is an ideal candidate.
If you are using another service that provides access to Gemini models, the principle is the same:
- Get the `baseURL` and API key from that service.
- Find out which model identifier they expect you to use.
Example for a hypothetical provider anyllm-best-api.tech:
```json
"anyllm-best-api": {
  "name": "AnyLLMBestAPI",
  "type": "openai_compatible",
  "options": {
    "baseURL": "https://anyllm-best-api.tech/api/v1/chat/completions",
    "header": {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY_FROM_THEM"
    }
  },
  "models": {
    "Gemini 2.5 Flash": {
      "name": "gemini-2.5-flash"
    }
  }
}
```
Here, `gemini-2.5-flash` is the name that the anyllm-best-api.tech service expects, not Google itself. Always check the documentation of the service whose `baseURL` you are using!
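Since most connection problems come down to a typo in the config, a small sanity check before launching can save time. This is a hedged sketch based on the fields described in this guide, not an official AnyLLM tool:

```python
def validate(config: dict) -> list:
    """Return a list of problems found in an anyllm.json-style config."""
    problems = []
    for key, prov in config.get("provider", {}).items():
        # Every provider needs these four top-level fields.
        for field in ("name", "type", "options", "models"):
            if field not in prov:
                problems.append(f"{key}: missing '{field}'")
        # baseURL is the one option every provider requires.
        if "baseURL" not in prov.get("options", {}):
            problems.append(f"{key}: options.baseURL is required")
        # Every model entry must carry the provider-side name.
        for alias, model in prov.get("models", {}).items():
            if "name" not in model:
                problems.append(f"{key}/{alias}: missing 'name'")
    return problems

good = {"provider": {"ollama": {
    "name": "Ollama (local)",
    "type": "openai_compatible",
    "options": {"baseURL": "http://localhost:11434/v1"},
    "models": {"Local-Phi-3": {"name": "hf.co/microsoft/Phi-3-mini-4k-instruct-gguf:latest"}},
}}}
print(validate(good))  # → []
```

Run it against your own file with `validate(json.load(open("anyllm.json")))` before starting the application.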