Model Configurations
When you use an LLM-as-Judge metric, CertOps needs to know which language model should act as the Judge. This is where a Model Configuration (Model Config) comes in.
A Model Configuration defines how CertOps connects to a specific LLM to run evaluations.
In your test configuration (certops.yaml), you provide the judge_model_config_id to tell CertOps: "I want gpt-4o to dynamically evaluate all run responses against my custom answer-relevance metric using my provided API key".
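As a sketch of what that looks like in practice, the snippet below shows a judge_model_config_id reference in certops.yaml. Only judge_model_config_id comes from the text above; the surrounding field names (metrics, id, type) are illustrative assumptions, not confirmed CertOps schema.

```yaml
# certops.yaml -- illustrative sketch; surrounding structure is hypothetical
metrics:
  - id: answer-relevance            # hypothetical custom metric
    type: llm-as-judge
    judge_model_config_id: production-gpt-4o   # points at a Model Config
```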
Support for All Models
CertOps does not lock you into a single LLM provider ecosystem. The evaluation engine is designed to be completely model-agnostic.
You can configure CertOps to use any foundational model from any major provider (OpenAI, Anthropic, Google, Mistral, Meta) as long as it has a compatible API.
Configuration Properties
When creating a new Model Configuration via the UI or API, you define:
- Configuration Name: A system-friendly ID representing this connection (e.g., production-gpt-4o).
- Provider: The model host (e.g., openai, anthropic).
- Model ID: The exact model string to request (e.g., gpt-4o, claude-3-5-sonnet-20240620).
- Parameters: Any additional configuration variables required to run the model securely (see Securing API Keys below).
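Putting the properties above together, a Model Configuration might look like the following. This is a hedged sketch: the exact field names and shape may differ in your CertOps version.

```yaml
# Hypothetical Model Configuration payload
name: production-gpt-4o        # Configuration Name (system-friendly ID)
provider: openai               # Provider (model host)
model_id: gpt-4o               # Model ID (exact model string to request)
parameters:
  temperature: 0               # deterministic judging (illustrative)
  api_key: $openai-api-key     # Secret Reference (see Securing API Keys)
```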
Self-Hosted and Open-Source Models
CertOps natively supports self-hosted LLMs, local instances, proxy gateways, and custom fine-tuned models.
You achieve this with the base_url property in your Model Config.
By defining a custom base_url (such as http://localhost:11434/v1 for a local Ollama instance, or an internal corporate Azure AI URL), you can reroute all metric evaluation traffic to your own secure infrastructure. This ensures that sensitive dataset evaluation data never has to leave your Virtual Private Cloud (VPC) or local machine.
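For example, a Model Config pointed at a local Ollama instance might look like this. The field names are assumptions carried over from the sketch above; only base_url and the Ollama URL come from the text.

```yaml
# Hypothetical Model Config for a self-hosted model via Ollama
name: local-llama3
provider: openai                       # OpenAI-compatible API surface
model_id: llama3
base_url: http://localhost:11434/v1    # reroutes evaluation traffic to local infra
```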
Securing API Keys
Model configurations often require authentication tokens (like api_key). Because Model Configs are shared resources across your organization, CertOps ensures these credentials are not exposed in plaintext.
When defining a parameter value, you can securely cast it as a Secret Reference by prefixing the value with a $ followed by the name of a key stored in your CertOps Secrets vault (e.g., $openai-api-key). At runtime, CertOps resolves these secrets directly in memory immediately before executing the metric evaluation.
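Concretely, a parameters block using a Secret Reference might look like this sketch (the parameters/api_key structure is assumed; the $-prefix convention is from the text above):

```yaml
# Hypothetical parameters block using a Secret Reference
parameters:
  api_key: $openai-api-key   # resolved in memory from the Secrets vault
                             # at runtime; never stored in plaintext
```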