
Overview

The Otoroshi LLM extension provides advanced function calling (tool calling) capabilities. The gateway can intercept LLM responses that request tool calls, execute the tools server-side, and send the results back to the LLM — all transparently to the consumer.

How does function calling work?

  1. The consumer sends a chat completion request to the gateway
  2. The gateway injects the configured tool definitions into the request and forwards it to the LLM provider
  3. The LLM responds with a tool_calls request (instead of a text response)
  4. The gateway executes the tools server-side (JavaScript, HTTP, WASM, or Workflow)
  5. The gateway appends the tool results to the conversation and re-sends it to the LLM
  6. Steps 3-5 repeat until the LLM produces a final text response (or the max calls limit is reached)
  7. The final response is returned to the consumer

The consumer only sees the final text response — the tool calling loop is entirely handled by the gateway.
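The loop above can be sketched as follows. This is an illustrative model of the gateway's behavior, not Otoroshi's actual implementation; `call_llm` and `execute_tool` are hypothetical stand-ins for the provider call and the backend dispatch.

```python
def run_tool_loop(messages, tools, call_llm, execute_tool, max_calls=10):
    """Drive the LLM until it returns plain text or max_calls is reached."""
    for _ in range(max_calls):
        response = call_llm(messages, tools)   # steps 2/5: forward to the provider
        tool_calls = response.get("tool_calls")
        if not tool_calls:                     # step 6: final text response
            return response["content"]
        # Step 5: record the assistant's tool-call turn in the conversation
        messages.append({"role": "assistant", "tool_calls": tool_calls})
        for call in tool_calls:                # step 4: execute tools server-side
            result = execute_tool(call["name"], call["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": result,
            })
    raise RuntimeError("max_function_calls limit reached")
```

Note that the consumer's `messages` list grows with assistant and tool turns during the loop, but only the final text response leaves the gateway.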

Tool function backends

A tool function is defined as an entity with a name, description, JSON schema parameters, and a backend that executes the function. Five backend kinds are supported:

| Backend | Description |
| --- | --- |
| QuickJs | JavaScript code executed in a WASM-based QuickJS runtime |
| WasmPlugin | A full WASM plugin registered in Otoroshi |
| HTTP | HTTP call to an external endpoint |
| Workflow | Execute an Otoroshi workflow |
| Route | (Reserved for future use) |

Supported providers

The following providers support the tool calling loop (both blocking and streaming):

| Provider | Tool calling | Streaming tools |
| --- | --- | --- |
| OpenAI | Yes | Yes |
| Azure OpenAI | Yes | Yes |
| Anthropic | Yes | Yes |
| Mistral | Yes | Yes |
| Groq | Yes | Yes |
| X-AI (Grok) | Yes | Yes |
| Cohere | Yes | Yes |
| Ollama | Yes | No |

Provider configuration

To enable function calling on a provider, add tool function IDs to the wasm_tools (or tool_functions) array in the provider options:

```json
{
  "provider": "openai",
  "connection": {
    "base_url": "https://api.openai.com/v1",
    "token": "${vault://local/openai-token}",
    "timeout": 30000
  },
  "options": {
    "model": "gpt-4o-mini",
    "wasm_tools": [
      "tool-function_xxx",
      "tool-function_yyy"
    ],
    "mcp_connectors": [],
    "max_function_calls": 10,
    "allow_config_override": true
  }
}
```
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `wasm_tools` | array | `[]` | List of tool function entity IDs; `tool_functions` is accepted as an alias |
| `mcp_connectors` | array | `[]` | List of MCP connector IDs |
| `max_function_calls` | integer | `10` | Maximum number of recursive tool call iterations (safety limit) |
| `allow_config_override` | boolean | `true` | Allow consumers to override this configuration in the request body |

If the consumer's request already includes a tools array, the gateway passes it through to the provider as-is (no injection of configured tools).
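This passthrough rule can be expressed as a one-branch decision; the function name and field access below are illustrative, not Otoroshi internals:

```python
def effective_tools(request_body, configured_tools):
    """Return the tools to send to the provider for this request."""
    if "tools" in request_body:
        # Consumer supplied its own tools array: pass it through as-is,
        # without injecting the provider's configured tool functions.
        return request_body["tools"]
    return configured_tools
```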

Tool function entity

A tool function entity defines a callable tool with its JSON schema and backend:

```json
{
  "id": "tool-function_xxx",
  "name": "get_weather",
  "description": "Get the current weather for a given location",
  "strict": true,
  "parameters": {
    "location": {
      "type": "string",
      "description": "The city name"
    },
    "unit": {
      "type": "string",
      "description": "Temperature unit",
      "enum": ["celsius", "fahrenheit"]
    }
  },
  "required": ["location"],
  "backend": {
    "kind": "QuickJs",
    "options": {
      "jsPath": "..."
    }
  }
}
```
| Field | Type | Description |
| --- | --- | --- |
| `name` | string | The tool name sent to the LLM (must be unique per provider) |
| `description` | string | Description of what the tool does (used by the LLM to decide when to call it) |
| `strict` | boolean | Whether to enforce strict JSON schema validation |
| `parameters` | object | JSON schema of the function parameters |
| `required` | array | List of required parameter names |
| `backend` | object | Backend configuration (kind + options) |
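For reference, an entity with these fields maps naturally onto the OpenAI-style `tools` entry that providers expect. The exact wire format Otoroshi emits is not documented here, so the helper below is an assumption based on the standard function-calling schema:

```python
def entity_to_tool(entity):
    """Hypothetical mapping from a tool function entity to an
    OpenAI-style tool definition (assumed format)."""
    return {
        "type": "function",
        "function": {
            "name": entity["name"],
            "description": entity["description"],
            "strict": entity.get("strict", False),
            "parameters": {
                "type": "object",
                "properties": entity["parameters"],
                "required": entity.get("required", []),
            },
        },
    }
```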

Tool naming convention

The gateway prefixes tool names to route tool calls to the correct backend:

| Prefix | Source |
| --- | --- |
| `wasm___` | A registered tool function entity |
| `wasm_____inline_` | An inline function from an agent configuration |
| `mcp___` | An MCP connector tool |

These prefixes are added/removed transparently — the LLM and the consumer see clean tool names.
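A sketch of this prefix-based dispatch, using the prefixes from the table (the backend labels are illustrative). Since `wasm_____inline_` itself starts with `wasm___`, the longer prefix must be checked first:

```python
# Longest prefix first, so inline functions are not misrouted to entities.
PREFIXES = [
    ("wasm_____inline_", "inline"),
    ("wasm___", "entity"),
    ("mcp___", "mcp"),
]

def route_tool_call(prefixed_name):
    """Return (backend kind, clean tool name) for a prefixed tool call."""
    for prefix, backend in PREFIXES:
        if prefixed_name.startswith(prefix):
            return backend, prefixed_name[len(prefix):]
    raise ValueError(f"unknown tool prefix: {prefixed_name}")
```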

MCP Connectors as tool functions

MCP Connectors can also be used as tool functions called by the gateway during LLM interactions. When attached to an LLM provider, MCP connectors expose their tools as callable functions, enabling the LLM to interact with external services (databases, APIs, etc.) through the MCP protocol.

For more details on MCP support, see the MCP documentation.

Admin API

Tool function entities are available at:

ai-gateway.extensions.cloud-apim.com/v1/tool-functions

Standard CRUD operations are supported (GET, POST, PUT, DELETE).
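A small helper for building the corresponding endpoint URLs. The assumption that the resource group lives under an `/apis/` path on the admin API host follows Otoroshi's extension API convention; the base URL and authentication depend on your deployment:

```python
GROUP = "ai-gateway.extensions.cloud-apim.com"

def tool_function_url(base, entity_id=None):
    """Build the admin API URL for the tool-functions resource
    (collection URL, or a single entity when entity_id is given)."""
    url = f"{base}/apis/{GROUP}/v1/tool-functions"
    return f"{url}/{entity_id}" if entity_id else url
```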