This feature introduces a simplified LLM selection interface that initially displays only 4-6 carefully curated models (a mix of standard and Chain-of-Thought/"thinking" models chosen to balance performance with cost-effectiveness) in the dropdown menu. Users can access the full 20+ model library through an expandable "Show all models" option, and an optional info tooltip explains the difference between thinking/CoT models and standard models. This addresses the decision paralysis that less technical users face when confronted with extensive model lists and benchmark tables: it provides sensible defaults while preserving full choice for power users, reducing cognitive load during model selection and making Kagi's AI features more accessible to mainstream users who want quality results without extensive research.
Users would interact with this feature in several ways:
Casual users would see 4-6 recommended models (for example Claude Sonnet, GPT-4o, a thinking model like Claude Opus, and a fast/cheap option) and simply pick one without feeling overwhelmed. Power users would click "Show all models" to access specialized options such as coding-focused or multilingual models.
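As a rough illustration (the type shape, taglines, and model identifiers here are hypothetical, not Kagi's actual configuration), the curated default set could be a small hand-picked list with a one-line hint per model:

```typescript
// Hypothetical shape for a curated dropdown entry; IDs and labels are illustrative only.
interface CuratedModel {
  id: string;            // internal model identifier
  label: string;         // name shown in the dropdown
  isThinking: boolean;   // CoT / "thinking" model vs. standard
  tagline: string;       // one-line hint for less technical users
}

// Example default set: standard, thinking, and fast/cheap options mixed together.
const curatedDefaults: CuratedModel[] = [
  { id: "claude-sonnet", label: "Claude Sonnet",           isThinking: false, tagline: "Strong all-rounder" },
  { id: "gpt-4o",        label: "GPT-4o",                  isThinking: false, tagline: "Fast, general purpose" },
  { id: "claude-opus",   label: "Claude Opus (thinking)",  isThinking: true,  tagline: "Deeper reasoning, slower" },
  { id: "small-fast",    label: "Fast & cheap",            isThinking: false, tagline: "Quick answers, lower cost" },
];
```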
Potentially:
This would extend Kagi's current model dropdown by adding a collapsed/expanded state system. The curated default list could be dynamically updated based on benchmark performance and cost metrics. User selections could inform which models appear in their personalized "recent" or "favorites" section, creating a progressive disclosure system that grows with user expertise.
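A minimal sketch of that progressive disclosure logic, assuming hypothetical names (ModelOption, SelectorState, curate, visibleModels) and a simple quality-per-cost heuristic rather than Kagi's actual ranking:

```typescript
// Metadata the curation step would need; the benchmark score and cost fields are assumptions.
interface ModelOption {
  id: string;
  label: string;
  score: number;        // benchmark-derived quality score
  costPerMTok: number;  // cost per million tokens
}

interface SelectorState {
  expanded: boolean;    // false = curated view, true = full 20+ model library
  recentIds: string[];  // most recently used models, newest first
}

// Pick the curated defaults by a simple quality-per-cost heuristic (illustrative only).
function curate(allModels: ModelOption[], limit = 6): ModelOption[] {
  return [...allModels]
    .sort((a, b) => b.score / b.costPerMTok - a.score / a.costPerMTok)
    .slice(0, limit);
}

// Decide what the dropdown shows: the user's recents first, then curated picks,
// or the entire library once "Show all models" is expanded.
function visibleModels(all: ModelOption[], state: SelectorState): ModelOption[] {
  if (state.expanded) return all;
  const recents = state.recentIds
    .map(id => all.find(m => m.id === id))
    .filter((m): m is ModelOption => m !== undefined);
  const curated = curate(all).filter(m => !state.recentIds.includes(m.id));
  return [...recents, ...curated].slice(0, 6);
}

// Record a selection so it surfaces in the "recent" section next time.
function recordSelection(state: SelectorState, id: string): SelectorState {
  return { ...state, recentIds: [id, ...state.recentIds.filter(r => r !== id)].slice(0, 3) };
}
```

One possible benefit of this split: the curated ranking can be refreshed server-side as benchmark and cost data change, while the expand/collapse flag and recents stay per-user, so the default view keeps adapting without disturbing what power users have already chosen.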