No Forks Allowed: How User-Centric Design Saved Next Edit from Compromise
Introduction
AI-powered coding tools face three hard problems simultaneously: AI research excellence, intuitive UX and product design, and robust systems engineering. When any one of these isn't working, the whole experience falls apart.
That's exactly the challenge we faced with Next Edit, our feature that intuits how a change in one part of your codebase will ripple across the rest of it, and suggests what needs to be updated next.
When early testing for Next Edit revealed that our UI approach and model behavior weren't meeting our standards, we were faced with a difficult decision: fork VS Code to “own” the editor, or pause and re-envision the entire experience.
We're building for professional software engineers—people who take immense pride in their tools, as do we. Getting this right wasn't just about shipping a feature; it was about crafting an experience worthy of being part of a developer's daily workflow.
Ultimately, we made the call to pause and do the hard work of making the VS Code API work for Next Edit, and in that time learned a lot about what it takes to get AI coding experiences to be just right.
What is Next Edit?
Next Edit suggests code changes beyond the cursor by understanding the ripple effects of your changes across your entire workspace.
For example, when you add a new `session_id` field to a data class, Next Edit automatically identifies all the places that need updates — direct usages, SQL queries, related classes, and tests — keeping your code in sync without manual hunting.
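To make the ripple concrete, here is a hypothetical illustration of the kinds of sites Next Edit surfaces after a single field is added. The `AuditEvent` type and all names are invented for this sketch (rendered in TypeScript for consistency with the later examples):

```typescript
// Hypothetical example: you just added session_id to a record type.
interface AuditEvent {
  user_id: string;
  session_id: string; // <-- the new field
}

// 1. Direct usages: object literals and constructors now need the field.
const event: AuditEvent = { user_id: 'u1', session_id: 's1' };

// 2. SQL queries: INSERT statements must include the new column.
const insertSql =
  'INSERT INTO audit_events (user_id, session_id) VALUES ($1, $2)';

// 3. Tests: fixtures and assertions need the field too.
const fixture: AuditEvent = { user_id: 'test', session_id: 'test-session' };
```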
We started Next Edit by researching three core AI challenges:
- Figuring out what task the user is trying to accomplish
- Determining where to make those changes across the codebase
- Executing those changes accurately and efficiently
Rather than simply using a single commercially available foundation model, we found that a three-model approach delivered much greater accuracy (sketched in code after this list):
- Location Model: Observes your edit history to predict where you're likely to make the next edit (both in the same file and across your broader codebase).
- Generation Model: Suggests the actual code changes once a location is identified and the intent is understood.
- Description Model: Suggests a short description for the suggested code change to help users quickly understand the change.
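As a rough mental model, here is how the three models might hand off to one another. These interfaces and names are invented for illustration; they are not Augment's actual APIs:

```typescript
// Invented shapes for the three-model pipeline.
interface Edit {
  file: string;
  line: number;
  newText: string;
}

interface LocationModel {
  // Watches recent edit history to predict where the next edit lands.
  predictLocation(history: Edit[]): Promise<{ file: string; line: number }>;
}

interface GenerationModel {
  // Proposes the concrete change once a location is identified.
  suggestEdit(loc: { file: string; line: number }, history: Edit[]): Promise<Edit>;
}

interface DescriptionModel {
  // Summarizes the change so users can evaluate it at a glance.
  describe(edit: Edit): Promise<string>;
}

async function nextEdit(
  history: Edit[],
  location: LocationModel,
  generation: GenerationModel,
  description: DescriptionModel
) {
  const loc = await location.predictLocation(history);
  const edit = await generation.suggestEdit(loc, history);
  const summary = await description.describe(edit);
  return { edit, summary };
}
```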
Our first Next Edit try: the hover
Initially, we displayed Next Edit suggestions via a hover-based UI. While internal dogfooding suggested this approach had promise, we needed real-world feedback to see if it truly delighted users—or if it was too intrusive.

Wave Rollout
Instead of a wide release, we employed a "wave" rollout, beginning with a Wave-1 group of users. This approach takes inspiration from canary deployments in large-scale distributed systems, letting us gradually expose Next Edit to real-world conditions while minimizing user disruption.
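To illustrate the mechanics, here is a minimal sketch of wave-style gating, assuming deterministic hash-based bucketing (a common canary technique; Augment's actual rollout machinery may differ):

```typescript
import { createHash } from 'crypto';

// Deterministically map a user to one of 100 stable buckets, so each
// wave sees a consistent, gradually growing audience.
function bucketOf(userId: string, buckets = 100): number {
  const digest = createHash('sha256').update(userId).digest();
  return digest.readUInt32BE(0) % buckets;
}

// Wave 1 might cover 5% of users, Wave 2 20%, and so on; widening a
// wave never removes anyone already included.
function isInWave(userId: string, wavePercent: number): boolean {
  return bucketOf(userId) < wavePercent;
}
```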
What we uncovered: Polarizing UX and a too-eager model
The feedback was unvarnished. First, the hover-based approach was deeply polarizing. We uncovered four distinct user groups:
- The “Power User” cohort loved the feature and was extremely excited about its potential.
- The “Unhappy Users” were so frustrated with the UX that they turned Next Edit off.
- The “Middle of the Pack” sat in between and didn't understand how to use the feature fully.
- A fourth group loved the feature in spite of the UX, purely for the value it brought them.
More concerning was what we discovered about our AI model's behavior. In early trials, our generation model was too eager, churning out suggestions that weren't always relevant. It was simply too sloppy, interrupting developers with low-quality suggestions that didn't merit the disruption.
In short, the feedback was split: some users loved the product and were passionate about it, while others went so far as to turn the feature off entirely.
We were confronted with a question: Should we ship Next Edit, assuming that not every feature is for everyone? Or should we go back to the drawing board? Ultimately, the decision came down to providing value to our users. We had a strong conviction that the value provided by Next Edit was quite high for software engineers doing major refactoring, and we didn't want them to miss out. Back to the drawing board we went.
How we fixed UX and model quality issues
Refining the model
Our first step was to fix the eagerness and sloppiness displayed by the model. However, when the research team trained the model to interrupt less, the pendulum swung too far in the other direction. The model became lazy, frustrating power users who wanted more frequent recommendations. It wasn't making suggestions often enough, even in cases where it could have provided genuine value.
The sweet spot emerged through iterative tuning and better data quality—improving our training sets to balance model eagerness with accuracy. We took a step back and invested in higher-quality training data that better represented the kinds of refactoring opportunities developers actually value.
Finally, we landed on a balanced approach that produced suggestions users were truly happy with—not too aggressive, not too timid, but just right for augmenting a professional developer's workflow.
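One simple way to picture the eagerness trade-off is as a confidence gate over candidate suggestions. This is only an analogy (our actual fix came from better training data, as described above), and all names and numbers below are invented:

```typescript
// Illustration only: eagerness vs. laziness as a confidence threshold.
interface Suggestion {
  edit: string;       // proposed code change
  confidence: number; // model's score in [0, 1]
}

function surfaceable(s: Suggestion, threshold: number): boolean {
  // Too low a threshold: an "eager" model that interrupts with noise.
  // Too high: a "lazy" model that withholds genuinely useful edits.
  return s.confidence >= threshold;
}

// The sweet spot is found empirically, e.g. against labeled
// accept/reject data gathered from real usage.
```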
Redesigning the UX
For the UI challenges, we hit reset on our approach, exploring different ways to surface suggestions that wouldn't disrupt coding flow but would still be discoverable and actionable.
Throughout this process, the question of forking VS Code resurfaced repeatedly. We were tempted to fork to solve our UI issues, but we held true to our stance not to fork, because we strongly believe it isn't right for the user. This constraint forced us to be more creative within VS Code's native capabilities. For example, the team pushed the API to its limits and developed ways to render inline diffs within the editor.
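For readers curious what this looks like in practice, here is a minimal sketch using the public VS Code decoration API, one of the building blocks available without forking. It is an illustration, not our production implementation:

```typescript
import * as vscode from 'vscode';

// Create the decoration type once and reuse it across updates.
const suggestionDecoration = vscode.window.createTextEditorDecorationType({
  // Tint the line the suggestion applies to.
  backgroundColor: new vscode.ThemeColor('diffEditor.insertedTextBackground'),
  isWholeLine: true,
});

// Render a suggested edit as ghost text after the target line.
export function showSuggestion(
  editor: vscode.TextEditor,
  line: number,
  preview: string
): void {
  const range = editor.document.lineAt(line).range;
  editor.setDecorations(suggestionDecoration, [
    {
      range,
      renderOptions: {
        after: {
          contentText: ` => ${preview}`,
          color: new vscode.ThemeColor('editorGhostText.foreground'),
        },
      },
    },
  ]);
}
```

Because the decoration type is created once and reused, suggestions can be updated or cleared cheaply as the user keeps typing, which matters for a feature that lives in the editing flow.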
User feedback after relaunch
For Next Edit, we focus on two dimensions of success: discoverability and engagement. Given that Next Edit is “in the flow” of a developer, we took feedback from Wave 1 users and deliberately de-emphasized discoverability in favor of subtlety, preferring an interface that wouldn't disrupt a user's workflow.

The results speak for themselves: once developers discovered Next Edit, our data showed high engagement rates, a strong proxy for the feature's value. Users weren't just trying it once; they were incorporating it into their daily workflow.
Early feedback has been overwhelmingly positive—with some users even switching over from competing tools just to use Next Edit.
How team structure contributed to our success
This kind of pivotal redesign was only possible because of our cross-functional team structure. Next Edit was built by a tight-knit group that included:
- AI researchers fine-tuning and post-training models to adapt to real-world coding contexts
- UX and design engineers shaping the product's look and feel within VS Code's constraints
- UI and backend engineers integrating AI suggestions seamlessly into the IDE, with a robust and performant backend
Having all these roles in a single team ensured that UX and model development co-evolved—rather than being bolted together at the end. When we hit problems, we could quickly coordinate across disciplines to develop holistic solutions.
This cross-functional collaboration was crucial for co-designing both the AI and the interface, especially for complex tasks like refactoring.
What's Next
Our task is clear: provide value to professional software engineers, eliminate toil, and make teams happier and more productive through AI. We are building a powerful tool for developers who care deeply about their craft, and we're just getting started. And for those following Augment closely as we get ready to release our IDE agent, you will see the lessons of Next Edit reflected in our approach to user feedback, design, and integration into a developer's daily workflow.