Prompt context

Prompt contexts let you inject pre-messages and post-messages around the consumer's messages. This is the primary way to add system prompts, instructions, or framing information to LLM interactions without modifying the consumer's request.

How it works

The consumer sends messages like:

[
  {"role": "user", "content": "What is the weather in Paris?"}
]

With a prompt context that defines a system pre-message and a system post-message, the LLM actually receives:

[
  {"role": "system", "content": "You are an English butler. Always respond politely and formally."},
  {"role": "user", "content": "What is the weather in Paris?"},
  {"role": "system", "content": "Remember to be concise."}
]
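
The injection itself amounts to list concatenation. A minimal Python sketch of the behavior described above (the function name is illustrative, not part of the product):

```python
def apply_context(pre_messages, consumer_messages, post_messages):
    # Prepend the context's pre-messages and append its post-messages
    # around the consumer's own messages.
    return pre_messages + consumer_messages + post_messages

pre = [{"role": "system", "content": "You are an English butler. Always respond politely and formally."}]
post = [{"role": "system", "content": "Remember to be concise."}]
consumer = [{"role": "user", "content": "What is the weather in Paris?"}]

final_messages = apply_context(pre, consumer, post)
# final_messages matches the three-message payload shown above
```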

Entity structure

{
  "id": "context_xxx",
  "name": "English Butler",
  "description": "Makes the LLM respond like an English butler",
  "pre_messages": [
    {
      "role": "system",
      "content": "You are an English butler. Always respond politely and formally."
    }
  ],
  "post_messages": [
    {
      "role": "system",
      "content": "Remember to be concise."
    }
  ]
}
| Field | Type | Description |
|---|---|---|
| `pre_messages` | array | Messages prepended before the consumer's messages |
| `post_messages` | array | Messages appended after the consumer's messages |

Each message has:

| Field | Type | Description |
|---|---|---|
| `role` | string | Message role (system, user, assistant) |
| `content` | string | Message content |

Two ways to use contexts

1. Plugin-based (route level)

Add the Cloud APIM - LLM Proxy - prompt context plugin to your route, before the LLM proxy plugin:

{
  "enabled": true,
  "plugin": "cp:otoroshi_plugins.com.cloud.apim.otoroshi.extensions.aigateway.plugins.AiPromptContext",
  "config": {
    "ref": "context-entity-id"
  }
}
| Parameter | Type | Description |
|---|---|---|
| `ref` | string | Reference to a PromptContext entity ID |

Multiple context plugins can be stacked on the same route. Their pre/post messages accumulate.
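The accumulation can be pictured as each plugin contributing its messages in turn. A hedged sketch (the exact ordering of accumulated pre/post messages is an assumption here; it assumes plugins apply in the order they appear on the route):

```python
def apply_stacked_contexts(contexts, consumer_messages):
    # Accumulate pre/post messages from every stacked context plugin,
    # then wrap them around the consumer's messages.
    pre, post = [], []
    for ctx in contexts:
        pre.extend(ctx.get("pre_messages", []))
        post.extend(ctx.get("post_messages", []))
    return pre + consumer_messages + post

butler = {
    "pre_messages": [{"role": "system", "content": "You are an English butler."}],
    "post_messages": [{"role": "system", "content": "Remember to be concise."}],
}
formal = {"pre_messages": [{"role": "system", "content": "Use formal vocabulary."}]}

msgs = apply_stacked_contexts([butler, formal], [{"role": "user", "content": "Hello!"}])
```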

2. Provider-level (provider settings)

Each LLM provider entity has a context setting that supports a default context and a whitelist of selectable contexts:

{
  "context": {
    "default": "context-entity-id",
    "contexts": ["context-entity-id-1", "context-entity-id-2"]
  }
}
| Parameter | Type | Default | Description |
|---|---|---|---|
| `default` | string | `null` | Default context applied automatically when no context is specified |
| `contexts` | array | `[]` | Whitelist of contexts that consumers can select from |

Consumers can select a context by including "context" in the request body:

curl --request POST \
  --url http://myroute.oto.tools:8080/v1/chat/completions \
  --header 'content-type: application/json' \
  --data '{
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "context": "context-entity-id-1"
  }'

The context can be referenced by ID or name. Only contexts listed in the provider's whitelist are accepted. The context field is stripped from the body before forwarding to the LLM.
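
These selection rules can be sketched gateway-side. This is an illustrative model (the function and entity shapes are assumptions), not the actual implementation:

```python
def resolve_context(body, provider_context, contexts_by_id):
    # The "context" field is always stripped from the body before forwarding.
    selected = body.pop("context", None)
    whitelist = provider_context.get("contexts", [])
    if selected is None:
        # Fall back to the provider's default when nothing is specified.
        return contexts_by_id.get(provider_context.get("default"))
    # The value may match an entity id or name, but only whitelisted
    # contexts are accepted.
    for ctx_id in whitelist:
        ctx = contexts_by_id[ctx_id]
        if selected in (ctx["id"], ctx["name"]):
            return ctx
    raise ValueError(f"context '{selected}' is not in the provider whitelist")

contexts = {
    "context_butler": {"id": "context_butler", "name": "English Butler"},
    "context_teacher": {"id": "context_teacher", "name": "Teacher"},
}
provider = {"default": "context_butler", "contexts": ["context_butler", "context_teacher"]}

body = {"messages": [{"role": "user", "content": "Hello!"}], "context": "Teacher"}
chosen = resolve_context(body, provider, contexts)
```

Note that the consumer selected the context by name ("Teacher") and the `context` field no longer appears in the forwarded body.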

Plugin vs. provider-level context

| Aspect | Plugin-based | Provider-level |
|---|---|---|
| Configured on | Route | Provider entity |
| Consumer can choose | No | Yes (from whitelist) |
| Default context | Always applied | Applied when consumer doesn't specify one |
| Stackable | Yes (multiple plugins) | No (one context per request) |
| Use case | Fixed context per route | Multi-persona or multi-tenant setup |

Example: multi-persona provider

Configure a provider with multiple available contexts:

{
  "context": {
    "default": "context_butler",
    "contexts": ["context_butler", "context_engineer", "context_teacher"]
  }
}

Consumers select the persona in their request:

{"messages": [{"role": "user", "content": "Explain quantum computing"}], "context": "context_teacher"}

Testing contexts

The admin UI includes a built-in context tester: provide a user message and exercise the context injection and LLM call directly from the Otoroshi back-office.