Shepherds

Beyond Junior Engineers: Why AI Coding Systems Need Shepherds, Not Managers

This post challenges the common narrative that AI coding systems are approaching "junior engineer" capability, arguing instead that they've already surpassed that level in raw ability but remain fundamentally novice in their approach. It explores what this means for how we work with these systems.

I keep hearing the same prediction from AI commentators: by the end of 2025, GenAI coding systems will finally reach the level of a junior engineer. If you haven't used Claude Code for coding yet it's very hard to explain, but these systems are already far more capable than a junior engineer.

These systems aren't approaching junior engineer level—they've already blown past it. The issue isn't their capability; it's their nature. They're incredibly powerful, but they're blunt instruments. And understanding this distinction changes everything about how we should work with them.

The Novice Paradox: Knowledgeable but Not Wise

Working with current AI coding systems feels exactly like working with an enthusiastic junior engineer who somehow has encyclopedic knowledge but zero judgment. They can recite best practices, implement complex algorithms, and even suggest architectural patterns. But they lack the nuance, context, and judgment that comes from experience.

If you map these GenAI systems to the Dreyfus model of skill acquisition, they sit perfectly in the novice category. This might seem contradictory—how can something with such vast knowledge be a novice? The answer lies in understanding what the Dreyfus model actually measures: not knowledge, but the ability to apply that knowledge contextually.

A novice follows rules. They need clear instructions and struggle with ambiguity. They can't prioritise effectively or understand when to break the rules. That's exactly how current AI systems operate, regardless of how much they "know."

From Managers to Shepherds: A Fundamental Shift

This realisation has profound implications for how we structure our work. As we rebuild our enterprise systems using an army of novice GenAI agents, we—the humans in the loop—need to operate more like shepherds than traditional managers.

Think about what a shepherd actually does. They don't micromanage each sheep's grazing pattern. Instead, they set the direction of travel, maintain the boundaries of safe territory, and intervene only when an animal strays into danger.

This is fundamentally different from how we've traditionally managed junior engineers, where we could delegate discrete tasks and expect reasonable judgment within those boundaries.

The New Skills Portfolio: Broader and Deeper

This shift demands a paradoxical combination of skills. We need to simultaneously broaden our capabilities while deepening our ability to debug at a low level. It's not enough to be a specialist anymore; we need to be specialised generalists.

Here's what our new skill portfolio needs to include:

#ProductDesign Understanding not just what to build, but why. AI can generate endless features, but it can't determine which ones actually matter to users.

#SystemDesign Seeing the big picture and understanding how pieces fit together. AI excels at local optimisation but struggles with system-wide thinking.

#StakeholderManagement Communicating and translating between AI capabilities and human needs. Being the bridge between what's possible and what's valuable.

#Debugging Going deeper than ever before. When AI generates code at superhuman speed, bugs become more subtle and systemic. We need to debug not just code, but entire AI-generated architectures.

#Security Understanding attack vectors that AI might not consider. Security requires adversarial thinking—imagining how systems might be misused, not just used.

#QualityEngineering Defining what "good" looks like in ways that AI can understand and implement. Creating test strategies for code we didn't write and might not fully understand.
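The #QualityEngineering point can be made concrete with property-based checks: instead of asserting specific outputs of code we didn't write, we assert invariants that any correct implementation must satisfy. A minimal sketch (all function names here are hypothetical, and the deduplication function stands in for arbitrary AI-generated code):

```python
import random

# Stand-in for an AI-generated function whose internals we didn't write
# and don't want to review line by line.
def dedupe_preserve_order(items):
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def check_dedupe_properties(fn, trials=200, seed=42):
    """Property checks over random inputs: assert what must always hold,
    not what a particular output looks like."""
    rng = random.Random(seed)
    for _ in range(trials):
        data = [rng.randint(0, 9) for _ in range(rng.randint(0, 30))]
        out = fn(data)
        assert len(out) == len(set(out)), "output contains duplicates"
        assert set(out) == set(data), "output lost or invented elements"
        # order preserved: unique elements in order of first occurrence
        assert out == sorted(set(data), key=data.index), "order not preserved"
    return True
```

The point is the shape of the strategy, not this particular function: the shepherd defines what "good" means as invariants, and the checks hold regardless of how the AI chose to implement the behaviour.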

Context Over Control

The key insight is that we're shifting from control to context. We can't control every line of code anymore—there's simply too much of it, generated too quickly. Instead, we need to provide rich context that guides AI toward good outcomes.

This is why prompting and context engineering are emerging as critical skills. But it goes beyond that. We need to architect systems that are AI-friendly, with clear boundaries and well-defined interfaces. We need to create development environments where AI novices can work effectively without causing havoc: sandboxed environments with privileged access management.
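One way to picture that privileged access management is a command guardrail sitting between the agent and the shell. This is a minimal sketch under stated assumptions, not any real tool's API: the policy format and function name are illustrative, and deny patterns always win over allow patterns.

```python
import fnmatch

# Hypothetical guardrail: an agent's proposed shell command is checked
# against glob-style allow/deny lists before it is ever executed.
def is_command_allowed(cmd, allow, deny):
    # Deny patterns take precedence over allow patterns.
    if any(fnmatch.fnmatch(cmd, pat) for pat in deny):
        return False
    # Anything not explicitly allowed is rejected by default.
    return any(fnmatch.fnmatch(cmd, pat) for pat in allow)

allow = ["git status", "git diff*", "npm test*", "pytest*"]
deny = ["*rm -rf*", "*sudo*", "*curl*"]  # no destructive or network commands
```

Default-deny is the design choice that matters here: the shepherd widens the fence deliberately, rather than chasing down every way a novice might wander off.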

The Enterprise Transformation Nobody's Talking About

As enterprises rush to adopt AI coding systems, they're missing this fundamental shift. They're treating AI like faster junior developers when they should be restructuring their entire development approach around the shepherd model.

This isn't about having fewer developers or replacing human judgment. It's about recognising that we're working with a new kind of colleague—one that's simultaneously more capable and more limited than any human we've worked with before.

The companies that thrive will be those that understand this paradox and restructure accordingly. They'll build teams of shepherds who can guide their AI flocks toward valuable outcomes, intervening when necessary but mostly providing direction and context.

Embracing Our New Role

The future isn't about competing with AI's coding speed or knowledge depth. It's about developing the uniquely human skills that make AI effective: judgment, context, creativity, and the ability to see the bigger picture.

We're not becoming obsolete; we're becoming more important than ever. But our importance lies not in writing code, but in knowing what code should be written, why it matters, and how it fits into the larger system.

The age of the shepherd-developer is here. The question isn't whether AI will replace developers, but whether developers will successfully evolve into the shepherds our AI novices desperately need.