How do junior developers grow in an AI world?
At Pagecloud, everyone on the engineering team is senior. We’re thinking about our next hire, and for the first time, I don’t have a clear picture of what the junior developer path looks like.
I’ve hired and managed junior developers before. The playbook was well understood. You start them on small features. They ship, they break things, they fix what they broke. They read a lot of code. They sit in code reviews and absorb patterns. Over a few years, the repetition builds something you can’t teach directly: taste. An instinct for when something is off, even when it works.
That playbook assumed the junior was writing code every day and learning from the friction of doing it badly, then doing it better.
I’m not sure that playbook works anymore.
The gap between looking senior and being senior
A junior developer with AI tools can produce code that looks senior. The output passes tests, follows conventions, handles edge cases. If you read the PR without context, you might not know who wrote it.
But looking senior and being senior are very different things. The difference is in the decisions behind the code. Why this abstraction and not that one. Why this data model. Why this tradeoff. When to keep things simple and when complexity is justified.
Those instincts come from years of exposure to consequences. You learn what a bad abstraction looks like because you maintained one for two years. You learn why premature optimization is dangerous because you spent a week untangling something that didn’t need to be fast. The lessons stick because they were painful.
If AI absorbs the mechanical work, what builds those instincts?
Taste is the hard part
Taste is knowing the difference between code that works and code that’s right. It’s the ability to look at a solution and feel that something is off before you can articulate why.
You develop taste through exposure. Through writing bad code and living with the consequences. Through reading good code and noticing what makes it good. Through shipping something you thought was clever and watching it become a maintenance burden six months later.
I worry that AI short-circuits this process. Not because AI produces bad code, but because it removes the struggle that builds pattern recognition. When the junior never fights with a bad abstraction they chose, do they learn to recognize one? When they never feel the pain of a poorly designed system they built, do they develop the instinct to design a better one?
The feedback loop that built taste used to be automatic. Now it has to be intentional. And I’m not sure anyone has figured out what that looks like.
Prompting is a lossy abstraction
Here’s what I keep coming back to: communicating via prompt instead of code is, by default, a lossy abstraction. You’re describing what you want in natural language, and the AI is interpreting that into code. Something is always lost in translation.
The people who lose the least are the ones who understand what they’re asking for. When you know what a good abstraction looks like, you can describe it precisely. When you understand the tradeoffs between different data models, you can steer the AI away from the wrong one. When you can smell a code smell, you catch it in the output before it ships.
You can’t direct an agent to do something you don’t understand yourself. Or rather, you can, but you won’t know whether it did it well.
Here’s a concrete example. A junior prompts “add caching to this endpoint.” The AI adds it. The code looks clean, the tests pass. But the junior doesn’t think to check the invalidation strategy, doesn’t consider what happens when the underlying data changes, doesn’t ask about cache stampedes under load. Someone who’s been burned by bad caching catches all three in the review. The gap between “add caching” and “add caching correctly” is exactly the kind of knowledge that gets swallowed by the lossy abstraction. The junior didn’t know enough to prompt precisely, and didn’t know enough to evaluate what came back.
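To make that gap visible, here's a rough sketch in Python. Everything in it is hypothetical (an in-memory dict standing in for a real cache backend, made-up function names, not anything from our codebase), but it shows the distance between the version that "works" and the version the reviewer asks for.

```python
import threading
import time

# Hypothetical names throughout; an in-memory dict stands in for
# whatever cache backend the real endpoint would use.
TTL_SECONDS = 60
_cache: dict[str, tuple[float, dict]] = {}  # key -> (expires_at, value)
_locks: dict[str, threading.Lock] = {}
_locks_guard = threading.Lock()

def fetch_user_profile(user_id: str) -> dict:
    """Stand-in for the slow query the junior wanted to cache."""
    time.sleep(0.1)
    return {"id": user_id}

# What "add caching" tends to produce. It looks clean and the tests
# pass, but writes leave stale data in place until the TTL expires,
# and N concurrent misses all run the slow query at once.
def get_profile_naive(user_id: str) -> dict:
    entry = _cache.get(user_id)
    if entry and entry[0] > time.time():
        return entry[1]
    value = fetch_user_profile(user_id)
    _cache[user_id] = (time.time() + TTL_SECONDS, value)
    return value

# What the reviewer who has been burned pushes for.
def get_profile(user_id: str) -> dict:
    entry = _cache.get(user_id)
    if entry and entry[0] > time.time():
        return entry[1]
    # Stampede protection: one caller recomputes, the rest wait on
    # a per-key lock instead of piling onto the database.
    with _locks_guard:
        lock = _locks.setdefault(user_id, threading.Lock())
    with lock:
        entry = _cache.get(user_id)  # re-check after acquiring the lock
        if entry and entry[0] > time.time():
            return entry[1]
        value = fetch_user_profile(user_id)
        _cache[user_id] = (time.time() + TTL_SECONDS, value)
        return value

# Invalidation: the write path has to know the cache exists at all.
def update_user_profile(user_id: str, fields: dict) -> None:
    # ... persist fields to the database ...
    _cache.pop(user_id, None)  # evict so readers don't serve stale data
```

The second version isn't clever. It just answers the three questions the naive one never asks: when does an entry expire, who recomputes on a concurrent miss, and who evicts when the underlying data changes.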
This is why I believe the traditional skills still matter. Not despite AI, but because of it. Understanding tradeoffs between abstractions, recognizing code smells, knowing what’s going to bite you six months from now when you build the next feature on top of this one. We’re going to be writing more software than ever. More software means more systems to evaluate, integrate, maintain, and debug. The demand for people who can do that well doesn’t shrink. It grows. The people writing the best software will be the ones who understand it deeply enough to communicate precisely, even through a lossy channel.
What I think still matters
Caring is still the filter. I’ve written before about how giving a shit is the most underrated career trait. That doesn’t change with AI. If anything, it matters more. When the tool can produce something passable in seconds, the person who stops and asks “is this actually good?” is the one who grows. The person who ships the first output without thinking doesn’t build taste regardless of the tools.
Reading code matters more than writing it. If AI writes most of the first draft, the junior’s primary skill becomes evaluating code they didn’t write. Can they read a function and spot the tradeoff that was made? Can they look at an architecture and understand why it will or won’t scale? This is a different skill than writing code from scratch, and most mentorship doesn’t emphasize it enough. Doing lots of code review and writing lots of code are still how you build this muscle. AI doesn’t change that.
The feedback loop needs to be manufactured. The natural loop (write code, live with it, learn from the pain) is weaker now. Mentors need to create it deliberately. That might mean having juniors maintain systems, not just build new ones. Having them debug production issues. Having them review AI-generated code and explain what they’d change and why. The point isn’t to slow them down. It’s to make sure speed doesn’t skip the part where judgment gets built.
Volume of experiments isn’t the same as learning. AI lets you try ten approaches in an afternoon. That’s powerful, but only if you can evaluate which one is good. Without judgment, speed becomes noise. A junior needs someone to help them understand not just what works, but why one working solution is better than another.
The counter-argument
I want to be fair about the other side of this. You could argue that AI is the best learning tool a junior developer has ever had access to. A junior today can read more codebases in a week than I read in a year. They can try ten architectural approaches in an afternoon and compare the results. They can ask the AI to explain why one pattern is better than another and get a detailed, patient answer every time. No senior developer, no matter how generous with their time, can match that availability.
Maybe AI accelerates taste-building rather than undermining it. Maybe the junior who reviews fifty AI-generated implementations of the same feature develops pattern recognition faster than the one who hand-wrote three.
I’m not fully convinced. There’s a difference between having something explained to you and feeling the consequence of getting it wrong. Reading about why a bad abstraction is bad is not the same as maintaining one for two years. The explanation gives you knowledge. The pain gives you instinct. I don’t know how to shortcut that, or whether you can.
But I’m open to being wrong about this.
Still collecting data
I have ideas about what the future looks like for junior developers, for mentorship, for how we should think about hiring and growth paths in an AI-native world. I’ll write about that separately.
Right now I’m still collecting data. I’m watching how other teams are handling this. I’m paying attention to which juniors are growing fast and trying to understand why.
What I do believe: the skills that make a great developer haven’t changed. Clear thinking. Good judgment. The ability to reason about systems and tradeoffs. Caring about craft. These matter more now, not less, because the cost of producing code has dropped to near zero while the cost of producing good code hasn’t changed at all.
The path to building those skills is what’s shifted, and I haven’t seen anyone lay out the new one convincingly.
I want to hear from you
If you’re hiring juniors right now, what are you looking for? How are you thinking about their growth path?
If you're a junior right now, what does your day-to-day look like? How are you building your skills when AI handles so much of the mechanical work?
If you’ve thought about this more than I have, I’d genuinely like to hear it. I’m @mgrouchy on X, or you can email me.