📄️ Overview
The Otoroshi LLM extension provides a complete prompt engineering toolkit to control, optimize, and standardize interactions with LLMs. You can store reusable prompts, create message templates with dynamic expressions, and inject contextual information around user messages.
📄️ Prompt library
The prompt library lets you store reusable prompt texts as named entities. Prompts can be referenced by ID from multiple plugins, guardrails, and other features, providing a single source of truth for your prompt texts.
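The idea of a library of named prompts resolved by ID can be sketched minimally in Python. The entity names and the `resolve_prompt` helper below are hypothetical illustrations, not Otoroshi's actual API:

```python
# Hypothetical in-memory prompt library: one named prompt entity per ID.
# In Otoroshi these would be stored entities referenced from plugins and
# guardrails, giving a single source of truth for prompt texts.
prompt_library = {
    "prompt_support_tone": "Reply politely and concisely.",
    "prompt_refuse_legal": "Do not give legal advice; suggest consulting a professional.",
}

def resolve_prompt(prompt_id: str) -> str:
    """Look up a reusable prompt text by its ID (illustrative helper)."""
    return prompt_library[prompt_id]

system_text = resolve_prompt("prompt_support_tone")
```

Because every consumer resolves the same ID, editing the stored text updates all places that reference it at once.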
📄️ Prompt templating
Prompt templates let you define standardized message sets with dynamic expression placeholders. When a template is active, it replaces the consumer's messages entirely, giving you full control over what gets sent to the LLM.
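The replacement behavior can be sketched as follows. This is a minimal illustration in Python; the `${...}` placeholder syntax and the `render_template` function are assumptions for the example, not Otoroshi's actual expression language:

```python
import re

def render_template(template_messages, context):
    """Fill ${...} placeholders in each template message from a context dict.

    The rendered messages replace the consumer's messages entirely,
    mirroring how an active prompt template takes full control of what
    is sent to the LLM.
    """
    def fill(text):
        return re.sub(r"\$\{(\w+)\}", lambda m: str(context.get(m.group(1), "")), text)
    return [{"role": m["role"], "content": fill(m["content"])} for m in template_messages]

template = [
    {"role": "system", "content": "You are a ${tone} assistant."},
    {"role": "user", "content": "${user_input}"},
]
rendered = render_template(template, {"tone": "concise", "user_input": "Summarize RFC 2119."})
```

Note that the consumer's original messages do not appear in `rendered` at all; only the values extracted into the context survive, which is what makes templates a strong standardization tool.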
📄️ Prompt context
Prompt contexts let you inject pre-messages and post-messages around the consumer's messages. This is the primary way to add system prompts, instructions, or framing information to LLM interactions without modifying the consumer's request.
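The injection order can be sketched in a few lines of Python. The function name and message shapes below are illustrative assumptions, not Otoroshi's API:

```python
def apply_prompt_context(consumer_messages, pre_messages, post_messages):
    """Inject pre-messages before and post-messages after the consumer's
    messages, leaving the consumer's own messages untouched."""
    return [*pre_messages, *consumer_messages, *post_messages]

pre = [{"role": "system", "content": "Answer only from the provided documentation."}]
post = [{"role": "user", "content": "Cite the section you used."}]
final = apply_prompt_context(
    [{"role": "user", "content": "What is a route?"}], pre, post
)
```

Unlike a template, the consumer's message is still present in `final`; the context only frames it.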
📄️ Prompt validator
The prompt validator plugin validates the content of consumer messages against regex-based allow/deny patterns before they reach the LLM provider. This is a lightweight alternative to LLM-based guardrails for pattern-based content filtering.
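The allow/deny logic can be sketched as below. This is a minimal Python illustration of the pattern-matching approach, with hypothetical pattern lists; it is not the plugin's actual implementation:

```python
import re

def validate_prompt(messages, allow_patterns, deny_patterns):
    """Return True if the messages pass regex-based validation.

    A message matching any deny pattern is rejected. If allow patterns
    are configured, every message must match at least one of them.
    """
    for msg in messages:
        text = msg["content"]
        if any(re.search(p, text, re.IGNORECASE) for p in deny_patterns):
            return False
        if allow_patterns and not any(
            re.search(p, text, re.IGNORECASE) for p in allow_patterns
        ):
            return False
    return True

# Hypothetical deny list blocking credential-related requests.
deny = [r"\bpassword\b", r"\bcredit\s*card\b"]
ok = validate_prompt([{"role": "user", "content": "What's the weather?"}], [], deny)
blocked = validate_prompt([{"role": "user", "content": "Share my password"}], [], deny)
```

Because this runs before the request reaches the provider, rejected prompts cost no LLM tokens, which is the main advantage over LLM-based guardrails for simple pattern filtering.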