How Will People Learn When A.I. Does the Grunt Work?

That question assumes building from scratch was the only path to understanding: it wasn't. It was just the only one available.

Introduction

These days, as Claude Cowork strikes fear in the hearts of knowledge workers across finance, consulting, law, accounting, and beyond, we keep circling back to the same questions:

  • Do law associates still need to grind through document review?
  • Do consulting analysts still need to build the deck from scratch?
  • Do medical residents still need to memorize what they can look up in seconds?

Bloomberg recently published a piece about the drastic decline in entry-level jobs in NYC, and the implication is hard to ignore: "If AI can do the entry-level mechanical work, how does anyone develop the knowledge and judgment necessary for senior roles?"

But we need to separate two things that mechanical work offers: its utility and its apprenticeship.

  • On the utility side, AI is rapidly approaching replacement-level capability for entry-level work. The natural follow-up is whether entry-level roles just become glorified "A.I. babysitters," but I think this framing undersells what's actually at stake.
  • The harder question is about apprenticeship: Is the grind the primary vehicle for learning? And if so, does removing the grind mean the learning goes with it?

As someone who put in (at least a relative share of) the sweat equity in entry-level finance, I want to take a step back and examine what the grind is actually for. Here, I want to distinguish between two oft-conflated assumptions:

  1. "You need to build it to understand it."
  2. "You need to build it to prove that you understand it."

The first is about how individuals learn. The second is about how society keeps score.


How People Learn: What Does It Mean to "Understand"?

I'd suggest that for the vast majority of work (excluding frontier science or cutting-edge research), there are three levels of understanding:

  1. Mechanical Understanding — the procedural knowledge of how it works.

     Building a three-statement financial model. Drafting a motion to dismiss. Running a regression analysis. Coding a data pipeline.

  2. Structural Understanding — the reasoning behind why it works the way it does.

     Why a DCF discounts future cash flows (because a dollar today is worth more than a dollar tomorrow). Why a contract has indemnification clauses (because risk needs to be allocated before something goes wrong). Why a clinical trial has a control group (because correlation is useless without a baseline).

  3. Applied Understanding — the judgment of knowing when it doesn't work and how to fix it.

     Recognizing that a revenue assumption is too aggressive because this sector doesn't grow like that. Knowing that boilerplate indemnification doesn't protect your client in this specific deal structure. Seeing that the standard treatment protocol won't work for a patient with these co-morbidities.
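The DCF example makes the mechanical-versus-structural split concrete: the structural insight is just time discounting. A minimal sketch in Python (the function name and the sample rate are illustrative assumptions, not something a real model would stop at; actual DCFs layer on terminal value, discount-rate estimation, and more):

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of future annual cash flows back to today.

    The structural idea: a cash flow received in year t is divided by
    (1 + r)^t, because a dollar today is worth more than a dollar tomorrow.
    """
    return sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(cash_flows, start=1)
    )

# Three years of $100, discounted at 10%: each later dollar counts for less.
pv = present_value([100, 100, 100], 0.10)
```

Mechanical understanding is being able to wire this formula up in a spreadsheet; structural understanding is knowing why the exponent grows with the year; applied understanding is knowing when 10% is the wrong rate.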

And this is the implicit promise of the apprenticeship model: Do the mechanical work, internalize the patterns, and eventually develop judgment.

Of course, the grind has proven to build mechanical proficiency: this was the path for decades, churning out many of the best investors, lawyers, doctors, and engineers alive today. But the grind only reliably teaches the first level. Structural understanding comes from asking questions, and applied understanding comes from lived experience; the grind guarantees neither.

What the grind is actually great at is filtering: identifying who can tolerate the pain long enough to be trusted with more responsibility. And we've confused the filter with the pedagogy.

I'm not suggesting that the people who went through the grind were wasting their time. But I am suggesting that the apprenticeship model has a hidden assumption worth questioning: that mechanical repetition is the only path to developing judgment. That there is no route from novice to expert that doesn't pass through years of implementation detail.

Maybe The Grind Is a Bottleneck, Not a Path

In entry-level finance, the bar for "good analyst" is building the model correctly. The bar for "great analyst" is understanding what the model represents. But the system only explicitly trains for the first, and hopes the second shows up on its own.

  • The rote mechanical work prompts you to ask: "Does this formula work? Is this cell reference right? Why is this circular?"
  • But a great analyst connects the dots: "What drives value here? What breaks under stress? Which assumptions is this entire analysis resting on?"

I didn't need to build the model from scratch to ask those deeper questions. I needed to understand the structure of what the model was trying to represent. Those are different skills, and the system was designed around the wrong one.

I would suggest that much of what we call "learning" is just "struggling with the medium." AI removes the implementation tax and reveals what the actual skill has always been.


Addressing Inevitable Objections

At this point, I imagine some of you are already crafting a strongly worded rebuttal. So let's get ahead of it:


Top-Down vs. Bottom-Up Learning

What does it look like when we don't need to master the mechanics before understanding the structure? There are two competing approaches to developing judgment:

  1. Bottom-Up: Mechanical repetition allows for pattern recognition, which develops into intuition over time. Earn understanding through labor.
  2. Top-Down: Structural understanding allows for targeted interrogation, which develops into judgment faster and more consciously. Earn understanding through inquiry.
  • Bottom-Up (implicit, through repetition): Mechanics (mechanical repetition) → Structure (implicit structure emerges) → Judgment
  • Top-Down (deliberate, through inquiry): Structure (structural discovery) → Mechanics (targeted mechanical grounding) → Judgment

In fact, the grind never taught top-down thinking. It forces everyone to build the mechanics, with some people developing top-down instincts along the way, seeing the purpose behind the mechanics. Others become very skilled mechanically but can't tell you whether it should have been built differently. Both groups can get promoted for excelling at the mechanics. Both have the credibility. But only one can actually operate at the next level.

We then credit the grind for producing the top-down thinkers, when really, it just happened to be the environment they were in when they developed on their own.

But I'm not arguing that bottom-up was never valuable. I'm arguing that the default should now flip:

  • Pre-AI: This was inefficient but functional. The only way to find top-down thinkers was to run everyone through the same mechanical process and see who emerged with judgment. But this was more about filtration than education.
  • Post-AI: We don't need to run everyone through years of mechanics and hope the right ones figure it out on their own. We can name it, teach it, and build tools that let people engage at that level from the start.

The Broader Implication

This isn't ultimately a piece about A.I. skills. It's about how we've mistaken the credentialing system for the learning system, and confused credibility with capability. But AI now lets us split those apart. And let's not dig in our heels, because the people who insist on conflating them aren't protecting learning. They're protecting a system that validated them.

This isn't just a corporate culture problem. It's baked into how we train people from the start. Our education system builds bottom-up thinkers: Follow the curriculum. Master the details. Trust the process. Don't skip ahead.

This produced people who are very good at executing within a given structure, but not necessarily good at questioning whether the structure itself makes sense.

For decades, this worked. Go to college, get a prestigious white collar job, and progress through the corporate ladder. The prescribed path delivered real leverage, so following it was rational. But if AI can now do the mechanical work, then the value of having survived the mechanical grind drops. And the people who never developed the muscle to ask "why am I doing this?" and "what is the actual structure of this problem?" are the most exposed.

The uncomfortable implication is: We need people who can interrogate abstract structures, not just perform within them. And that's a fundamentally different orientation than what most of our educational and professional systems teach, select for, and reward.

But the real question isn't whether AI will change what skills matter, as that's already happening. The question is whether we'll update our institutions fast enough. Our schools, our hiring practices, our professional cultures are all still optimized for a world where the grind was the only path.

The grind was the price of entry, not the point. The thinking was always the point. Now there's a faster path to it, if we're willing to let go of the old one.