Model & System Config
Manage LLM models and parameters. Changes take effect within 2 minutes (cache TTL).
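The 2-minute delay comes from config values being served through a time-to-live cache rather than read fresh on every request. A minimal sketch of that pattern (the class and loader here are hypothetical illustrations, not the app's actual implementation):

```python
import time

class ConfigCache:
    """Serve a cached config value, refreshing it only after the TTL
    expires. Illustrates why a saved change can take up to the TTL
    (here 120 seconds) to become visible."""

    def __init__(self, loader, ttl_seconds=120):
        self._loader = loader            # callable that fetches fresh config
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = -float("inf") # force a load on first access

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._fetched_at >= self._ttl:
            self._value = self._loader()
            self._fetched_at = now
        return self._value

# Usage: a change saved mid-window is not seen until the TTL expires.
store = {"temperature": 0.7}
cache = ConfigCache(lambda: dict(store), ttl_seconds=120)
cache.get(now=0.0)            # first read loads fresh config
store["temperature"] = 0.2    # admin clicks Save All
stale = cache.get(now=60.0)   # still the old value (within TTL)
fresh = cache.get(now=120.0)  # refreshed once the TTL has elapsed
```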
Chat Models (text conversation)
Model list (one per line, first = primary)
OpenRouter model IDs, tried in order. First model is default.
Temperature
Max Tokens
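The "tried in order, first = primary" behavior can be sketched as a simple fallback loop over the newline-separated model list. The function name, the availability predicate, and the model IDs below are illustrative assumptions, not the app's actual code:

```python
def pick_model(model_list_text, is_available):
    """Parse a one-model-per-line config field and return the first
    OpenRouter model ID that the availability check accepts.
    Hypothetical sketch of the documented fallback order."""
    models = [m.strip() for m in model_list_text.splitlines() if m.strip()]
    for model_id in models:
        if is_available(model_id):   # e.g. the API call succeeded
            return model_id
    raise RuntimeError("no configured model is available")

# Usage: if the primary model fails, the next line is tried.
models = "openai/gpt-4o-mini\nanthropic/claude-3-haiku"
chosen = pick_model(models, lambda m: m != "openai/gpt-4o-mini")
# chosen is the second entry because the primary was rejected
```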
Voice Chat Models
Model list (one per line)
Used for voice-call LLM responses; prefer fast models to keep latency low.
Temperature
Max Tokens
Extraction Models (memory / analysis)
Model list (one per line)
Used for memory extraction, profile updates, and image analysis.
Temperature
Max Tokens
Custom Config Keys
+ Add
Reload
Save All