📄️ Overview
🚧 Enforcing Usage Limits
📄️ Auto Secrets Leakage
This guardrail is a security measure that prevents the LLM from exposing sensitive information such as passwords, API keys, or other confidential credentials, reducing the risk of data leaks. It can be applied before the prompt is sent to the LLM (blocking requests that attempt to share secrets) and after a response is generated (preventing accidental leaks).
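To make the before/after placement concrete, here is a minimal sketch of such a check in Python. The regex patterns, the `contains_secret` helper, and the `call_llm` callback are illustrative assumptions, not the actual implementation.

```python
import re

# Illustrative patterns only; a real guardrail would ship with a much
# broader, maintained detector set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),  # password assignments
]

def contains_secret(text: str) -> bool:
    """Return True if any known secret pattern appears in the text."""
    return any(pattern.search(text) for pattern in SECRET_PATTERNS)

def guarded_call(prompt: str, call_llm) -> str:
    """Wrap an LLM call with a pre-check on the prompt and a post-check on the response."""
    if contains_secret(prompt):
        raise ValueError("Prompt blocked: it appears to contain a secret.")
    response = call_llm(prompt)
    if contains_secret(response):
        return "[response withheld: possible secrets leakage]"
    return response
```

The same check deliberately runs twice, once on the user's prompt and once on the model's output, matching the two stages described above.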
📄️ Characters count validation
📄️ Prompt contains guardrail
📄️ Prompt contains gender bias guardrail
A mechanism that identifies and reduces gender-biased language in user prompts, promoting fairness and inclusivity in AI-generated content. It can be applied before the LLM receives the request (blocking biased prompts) and after the response is generated (filtering or rephrasing biased output).
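As a rough illustration of the pre-LLM check, the sketch below flags prompts against a small phrase list. The `BIASED_PHRASES` list and `flag_gender_bias` function are hypothetical; a production guardrail would normally rely on a trained bias classifier rather than keyword matching.

```python
# Illustrative phrase list only, not a real detection model.
BIASED_PHRASES = [
    "women can't",
    "men are better at",
    "a woman's place",
    "like a girl",
]

def flag_gender_bias(text: str) -> list[str]:
    """Return the biased phrases found in the text (empty list if none)."""
    lowered = text.lower()
    return [phrase for phrase in BIASED_PHRASES if phrase in lowered]

def check_prompt(prompt: str) -> str:
    """Block the prompt before it reaches the LLM if biased language is found."""
    findings = flag_gender_bias(prompt)
    if findings:
        raise ValueError(f"Prompt blocked for gender-biased language: {findings}")
    return prompt
```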
📄️ Prompt contains gibberish guardrail
This guardrail acts as a filter that detects and manages nonsensical, random, or meaningless inputs, preventing the AI from generating irrelevant or low-quality responses.
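A very simple way to picture such a filter is a cheap lexical heuristic like the vowel-ratio check below. This is only a sketch under an assumed threshold; real gibberish detectors are typically model-based.

```python
def looks_like_gibberish(text: str, min_vowel_ratio: float = 0.2) -> bool:
    """Crude heuristic: alphabetic text with very few vowels is likely gibberish.

    The 0.2 threshold is an arbitrary illustrative value, not a tuned one.
    """
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return True  # no letters at all: nothing meaningful to answer
    vowels = sum(1 for c in letters if c in "aeiou")
    return vowels / len(letters) < min_vowel_ratio

print(looks_like_gibberish("asdfgjkl qwrtzpsd"))                # True
print(looks_like_gibberish("What is the capital of France?"))   # False
```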
📄️ LLM guardrails
📄️ Language moderation
📄️ Personal Health information
This safeguard ensures the LLM does not process, store, or share any personal health-related details, protecting user privacy and supporting compliance with applicable regulations.
📄️ Personal information
📄️ Prompt injection
📄️ QuickJS
📄️ Racial Bias
📄️ Regex
📄️ Secrets Leakage
In the context of Large Language Models (LLMs) and AI, a Secrets Leakage guardrail is a security mechanism designed to prevent a model from exposing sensitive or confidential information, including API keys, passwords, proprietary business data, and personally identifiable information (PII).
📄️ Semantic contains
📄️ Sentences Count
📄️ Toxic Language