This framework supports multiple LLM providers through a flexible configuration system.
For Standard OpenAI:

```bash
export OPENAI_API_KEY='sk-your-openai-key-here'
python simuleval_core.py
```

For Custom LLM Proxy:

```bash
export OTHER_API_KEY='your-proxy-api-key'
python simuleval_core.py
```

For file-based configuration:

- Copy the example configuration:

  ```bash
  cp config/private_config.example.py config/private_config.py
  ```

- Edit `config/private_config.py` with your credentials:
```python
# For OpenAI
OPENAI_CONFIG = {
    "api_key": "sk-your-actual-key-here",
    "model": "gpt-4o-mini",
    "provider": "openai",
}

# OR for a custom proxy (organization-specific)
OTHER_CONFIG = {
    "api_key": "your-actual-proxy-key",
    "model": "gpt-4o-mini",
    "base_url": "https://your-internal-proxy.company.com/v1",
    "provider": "openai",
}
```

- Run the simulation:

  ```bash
  python simuleval_core.py
  ```

Before running the full simulation, test your connection:

```bash
python test_llm_connection.py
```

This will verify your API configuration is working correctly.
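If you are curious what such a check involves, a minimal probe can be built with the standard library alone. The endpoint path and JSON payload below follow the OpenAI chat-completions convention; the helper name and structure are illustrative, not the actual contents of `test_llm_connection.py`:

```python
import json
import os
import urllib.request


def build_ping_request(api_key: str,
                       base_url: str = "https://api.openai.com/v1",
                       model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build a one-token chat-completions request used purely as a connectivity probe."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    }
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    # Mirror the precedence described below: proxy key first, then OpenAI key.
    key = os.environ.get("OTHER_API_KEY") or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise SystemExit("No API key found in the environment.")
    with urllib.request.urlopen(build_ping_request(key), timeout=30) as resp:
        print("OK" if resp.status == 200 else f"Unexpected status: {resp.status}")
```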
- `OPENAI_CONFIG`: uses the standard OpenAI API with `gpt-4o-mini`
- `OTHER_CONFIG`: uses a custom proxy endpoint with `gpt-4o-mini`
- Environment variables override any file-based configuration
The system checks for configuration in this order:

1. `OTHER_API_KEY` environment variable
2. `OPENAI_API_KEY` environment variable
3. `OTHER_CONFIG` in `private_config.py`
4. `OPENAI_CONFIG` in `private_config.py`
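That precedence can be sketched in a few lines. The function below is a hypothetical illustration of the lookup order, not the framework's actual loader:

```python
import os


def resolve_llm_config() -> dict:
    """Return the first available credential source, in the documented order.

    Hypothetical sketch: the framework's real loader may differ in detail.
    """
    # 1-2. Environment variables win over anything file-based.
    if os.environ.get("OTHER_API_KEY"):
        return {"api_key": os.environ["OTHER_API_KEY"], "source": "env:OTHER_API_KEY"}
    if os.environ.get("OPENAI_API_KEY"):
        return {"api_key": os.environ["OPENAI_API_KEY"], "source": "env:OPENAI_API_KEY"}

    # 3-4. Fall back to the git-ignored config file created above.
    try:
        from config import private_config
    except ImportError:
        raise RuntimeError("No API key configured") from None
    for name in ("OTHER_CONFIG", "OPENAI_CONFIG"):
        cfg = getattr(private_config, name, None)
        if cfg and cfg.get("api_key"):
            return {**cfg, "source": f"file:{name}"}
    raise RuntimeError("No API key configured")
```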
- The `config/private_config.py` file is automatically git-ignored
- Never commit API keys to version control
- Use environment variables in production environments
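To confirm the ignore rule actually covers the file (assuming a standard git checkout), `git check-ignore` reports which rule matches:

```shell
# Prints the matching .gitignore rule, or exits non-zero if the file
# would NOT be ignored (i.e. it could be committed by accident).
git check-ignore -v config/private_config.py
```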