AI model pickers are a design failure, not a feature
AI coding assistants exist to make developers more productive. So why are some tools making devs less efficient by forcing them to choose a model?
Recently, Sam Altman said what we at Augment have known for a while: model pickers are bad UX.

Yet tools like Cursor offer a dropdown with 8+ models to choose from, with even more hidden in settings. As a software engineer, how do I know which model to pick? Should I experiment? Should I cycle through them to find the best one? How often do I need to revisit that choice? Worse still, some models incur a higher cost per use.
The reality is, developers shouldn’t have to think about this at all.

Developers Want Productivity, Not More Decisions
The promise of AI coding assistants is effortless productivity. The best assistants integrate seamlessly, providing smart, context-aware suggestions that just work. A model picker shifts the burden of model selection onto the developer, forcing them to navigate a decision they're not equipped to make.
This is the equivalent of an IDE asking:
🔹 “Which garbage collection algorithm should we use?”
🔹 “Would you like AST-based refactoring, or a simpler regex approach?”
🔹 “Which indexing method do you prefer for code search?”
We don’t make developers decide these things because experts have already optimized for the best experience.
The Latest Model Isn't Always the Best
It’s tempting to assume that dropping in the latest, most powerful model will automatically improve results—but that’s not how LLMs work in practice.
🔹 Sonnet 3.7, for example, is powerful but requires careful tuning to avoid excessive verbosity.
🔹 GPT-4.5 launched without much fanfare because, on real-world tasks, many developers saw little improvement.
The fundamental truth is that LLM quality depends on input quality. Even the best model will struggle without the right context. That’s why at Augment, we built a real-time Context Engine designed to scale to enterprise codebases. It ensures that LLMs get the right context at the right time, making responses more accurate, relevant, and useful.
In a world where vendors are investing heavily in UX and offering a long list of models, our main bet is on context—not a dropdown. We’re not just building a small local index with basic search algorithms, hoping they uncover the right context (spoiler alert: they won’t). Instead, we’ve built a true Context Engine that deeply understands enterprise-scale codebases, dynamically retrieves the most relevant information, and feeds LLMs exactly what they need to deliver high-quality, relevant suggestions.
A model picker does nothing to solve this problem. Simply swapping between models won’t help if the system isn’t feeding it the right information.
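To make the point concrete, here is a minimal, hypothetical sketch of retrieval-augmented prompting. It is not Augment's Context Engine; the chunk structure, cosine-similarity ranking, and function names are assumptions made purely for illustration. What it shows is that the prompt an LLM actually sees is determined by what gets retrieved, which is why context quality matters more than the name in the dropdown.

```python
# Illustrative only: a toy retrieval-augmented prompt builder, not Augment's
# Context Engine. The chunk structure and similarity ranking are hypothetical.
from dataclasses import dataclass


@dataclass
class CodeChunk:
    path: str
    text: str
    embedding: list[float]  # precomputed vector for this chunk


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0


def build_prompt(task: str, task_embedding: list[float],
                 index: list[CodeChunk], k: int = 5) -> str:
    # Rank every indexed chunk by similarity to the task and keep the top k.
    ranked = sorted(index, key=lambda c: cosine(task_embedding, c.embedding),
                    reverse=True)[:k]
    context = "\n\n".join(f"# {c.path}\n{c.text}" for c in ranked)
    # The same model produces very different answers depending on what it is
    # shown here, which is why retrieval quality dominates model choice.
    return f"Relevant code:\n{context}\n\nTask:\n{task}"
```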
Model Selection Should Be Automatic
At Augment, we don’t expose a model picker because we handle the complexity for you. We dynamically select the best model based on:
✅ Task type (code completion, chat, inline suggestions)
✅ Performance benchmarks across real-world coding tasks
✅ Cost vs. latency trade-offs
✅ The latest advancements in AI models
AI model selection isn’t the user’s problem—it’s ours. We take this responsibility very seriously, and our world-class AI research team has built extensive testing and evaluation criteria to ensure the best results.
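As a rough mental model of what that responsibility looks like in code, here is a toy routing policy. The model names, benchmark scores, costs, and latency figures are invented for illustration and this is not Augment's production selection logic; the point is simply that latency budgets, cost ceilings, and measured quality can drive the choice so the developer never has to.

```python
# Illustrative only: a toy routing policy with invented model names, scores,
# costs, and latency figures; not Augment's production selection logic.

# Hypothetical benchmark scores, relative per-call costs, and typical latencies,
# refreshed whenever a new model is evaluated.
MODELS = {
    "fast-small": {"score": 0.71, "cost": 0.2, "p50_latency_ms": 150},
    "balanced":   {"score": 0.82, "cost": 1.0, "p50_latency_ms": 600},
    "frontier":   {"score": 0.88, "cost": 4.0, "p50_latency_ms": 1800},
}


def pick_model(latency_budget_ms: int, cost_ceiling: float = 5.0) -> str:
    """Choose a model for a request given its latency budget and a cost ceiling."""
    eligible = {name: m for name, m in MODELS.items()
                if m["p50_latency_ms"] <= latency_budget_ms
                and m["cost"] <= cost_ceiling}
    if not eligible:
        return "fast-small"  # fall back to the cheapest, fastest option
    # Among eligible models, take the strongest benchmark performer.
    return max(eligible, key=lambda name: eligible[name]["score"])


# Inline suggestions need near-instant responses; chat can tolerate more latency.
print(pick_model(latency_budget_ms=200))   # -> fast-small
print(pick_model(latency_budget_ms=3000))  # -> frontier
```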
Every model we’re considering for the Augment Code product goes through a rigorous audition. This process includes evaluating performance on external benchmarks (like SWE-bench Verified), extensive internal dogfooding, A/B tests with our 20+ full-time testing contractors, and evaluation against internal benchmarks. This testing happens fast: Claude Sonnet 3.7 was in production in Augment Code less than 12 hours after its initial release from Anthropic.
A Model Picker Wastes Developer Time (and Money)
The illusion of choice can feel empowering, but in reality, it creates friction. Imagine needing to switch search algorithms every time you Google something. Or choosing between 10 different compilers before running your code.
Worse, model pickers can create unexpected costs. In tools that expose a model picker, some models cost significantly more per use, which makes spending unpredictable for organizations committing to those tools. When the latest model drops, you may be exposed to astronomical pricing with no guarantee your engineers will get any value from it.
At Augment, you don’t have to worry about usage limits or surprise costs. Regardless of which model we pick for you, it’s unlimited use. No toggling settings, no second-guessing, no hidden fees—just the best AI for the job.
If your AI coding assistant requires you to pick a model, it’s not doing its job.