Introduction
These days, as Claude Cowork strikes fear in the hearts of knowledge workers across finance, consulting, law, accounting, and beyond, we keep circling back to the same questions:
- Do law associates still need to grind through document review?
- Do consulting analysts still need to build the deck from scratch?
- Do medical residents still need to memorize what they can look up in seconds?
Bloomberg recently published a piece about the drastic decline in entry-level jobs in NYC, and the implication is hard to ignore: "If AI can do the entry-level mechanical work, how does anyone develop the knowledge and judgment necessary for senior roles?"
But we need to separate the two things mechanical work offers: utility and apprenticeship.
- On the utility side, AI is rapidly approaching replacement-level capabilities for entry-level work. The natural follow-up is whether entry-level roles just become glorified "AI babysitters," but I think this framing undersells what's actually at stake.
- The harder question is about apprenticeship: Is the grind the primary vehicle for learning? And if so, does removing the grind mean the learning goes with it?
As someone who (relatively) put in the sweat equity in entry-level finance, I want to take a step back and examine what the grind is actually for. Here, I want to delineate between two oft-conflated assumptions:
- "You need to build it to understand it."
- "You need to build it to prove that you understand it."
The first is about how individuals learn. The second is about how society keeps score.
How People Learn: What Does It Mean to "Understand"?
I'd suggest that for the vast majority of work (excluding frontier science or cutting-edge research), there are three levels of understanding:
- Mechanical understanding (how to do it): Building a three-statement financial model. Drafting a motion to dismiss. Running a regression analysis. Coding a data pipeline.
- Structural understanding (why it's built that way): Why a DCF discounts future cash flows (because a dollar today is worth more than a dollar tomorrow). Why a contract has indemnification clauses (because risk needs to be allocated before something goes wrong). Why a clinical trial has a control group (because correlation is useless without a baseline). (See the sketch after this list.)
- Applied understanding (when the standard approach breaks): Recognizing that a revenue assumption is too aggressive because this sector doesn't grow like that. Knowing that boilerplate indemnification doesn't protect your client in this specific deal structure. Seeing that the standard treatment protocol won't work for a patient with these co-morbidities.
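To make the structural level concrete with the DCF example, here's a minimal sketch of the discounting logic; the cash flows and discount rate are illustrative numbers, not drawn from any real analysis.

```python
# Why a DCF discounts future cash flows: a dollar received in year t is worth
# less than a dollar today, because today's dollar could be invested at rate r.
# All numbers below are illustrative.

cash_flows = [100, 100, 100, 100, 100]  # projected cash flow for each of 5 years
r = 0.10                                # assumed annual discount rate (10%)

present_value = sum(
    cf / (1 + r) ** t                   # discount each year-t cash flow back to today
    for t, cf in enumerate(cash_flows, start=1)
)

print(f"Undiscounted total: {sum(cash_flows):.0f}")  # 500
print(f"Present value:      {present_value:.0f}")    # ~379
```

Understanding that one line, cf / (1 + r) ** t, is the structural insight; most of a real model is plumbing around it.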
And this is the implicit promise of the apprenticeship model: Do the mechanical work, internalize the patterns, and eventually develop judgment.
Of course, the grind has proven it can build mechanical proficiency: it was the path for decades, churning out many of the best investors, lawyers, doctors, and engineers alive today. But mechanical proficiency is the only type of understanding the grind reliably teaches. We develop structural understanding by asking questions and applied understanding through lived experience, and the grind guarantees neither.
What the grind is actually great at is filtering: identifying who can tolerate the pain long enough to be trusted with more responsibility. And we've confused the filter for the pedagogy.
I'm not suggesting that the people who went through the grind were wasting their time. But I am suggesting that the apprenticeship model has a hidden assumption worth questioning: that mechanical repetition is the only path to developing judgment. That there is no route from novice to expert that doesn't pass through years of implementation detail.
Maybe The Grind Is a Bottleneck, Not a Path
In entry-level finance, the bar for "good analyst" is building the model correctly. The bar for "great analyst" is understanding what the model represents. But the system only explicitly trains for the first, and hopes the second shows up on its own.
- The rote mechanical work prompts you to ask: "Does this formula work? Is this cell reference right? Why is this circular?"
- But a great analyst connects the dots: "What drives value here? What breaks under stress? Which assumptions is this entire analysis resting on?"
I didn't need to build the model from scratch to ask those deeper questions. I needed to understand the structure of what the model was trying to represent. Those are different skills, and the system was designed around the wrong one.
I would suggest that much of what we call "learning" is just "struggling with the medium." AI removes the implementation tax and reveals what the actual skill has always been.
Addressing Inevitable Objections
At this point, I imagine some of you are already crafting a strongly worded rebuttal. So let's get ahead of it:
The Internalization Gap
"You can't shortcut understanding. You have to earn it."
Think about what's actually happening when someone "learns structure" through building. Take a financial model, for example: they build the model, make mistakes, see how changing one Excel cell cascades through others, and through that process gradually internalize what the model represents. The mechanics are the vehicle for encountering the structure.
But this is a noisy, inefficient vehicle for understanding structure. For every moment of structural insight ("oh, that's why this acquisition destroys value even though revenue grows"), there are hours of mechanical troubleshooting that teach you nothing about the structure, but just teach you about Excel. Debugging a circular reference doesn't deepen your understanding of how a business creates value. It deepens your understanding of how spreadsheets handle dependencies.
But what if you could create those moments of structural insight more directly? Not by skipping them, but by engineering them deliberately. You encounter structure when you take a completed financial model and ask, "what happens if I double the churn rate?" You still see the dependencies, the cascading effects, the load-bearing assumptions; you just encounter them without the hours of mechanical noise in between.
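Here's what that engineered encounter might look like, using a toy subscription model. The model, parameter values, and function name are all hypothetical; the point is watching one assumption cascade without first building the spreadsheet.

```python
# Toy subscription model: watch how one assumption (monthly churn) cascades
# through customer count and cumulative revenue. All inputs are illustrative.

def five_year_revenue(customers, monthly_churn, monthly_adds, arpu=50):
    """Project 60 months of revenue for a simple subscription business."""
    total = 0.0
    for _ in range(60):
        customers = customers * (1 - monthly_churn) + monthly_adds
        total += customers * arpu
    return total

base = five_year_revenue(customers=10_000, monthly_churn=0.02, monthly_adds=400)
stressed = five_year_revenue(customers=10_000, monthly_churn=0.04, monthly_adds=400)

print(f"Base-case 5-year revenue: {base:,.0f}")
print(f"With churn doubled:       {stressed:,.0f}")
print(f"Revenue lost to churn:    {1 - stressed / base:.0%}")
```

A few seconds of interrogation surfaces the same dependency that a junior analyst might otherwise only notice after a week of debugging cell references.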
So the question becomes whether building is the only form of engagement that produces understanding, or just the only one we had access to until now.
The Blind Spot
"You don't know what you don't know."
This assumes that if you haven't done the mechanical work, you don't even have the map to know where the gaps are.
But I would counter that structural understanding is precisely what gives you the map. If you understand the purpose of each component in a system (not just how to build it, but why it exists and what it's protecting against), then you can reason about what's missing even if you've never personally encountered it. How?
- Structure reveals dependencies. When you understand why something exists, you can trace what it connects to, and what would break if it weren't there. You don't need to have personally witnessed the breakage to reason about it.
- Structure reveals assumptions. Every framework is built on decisions: what to include, what to ignore, what to simplify. If you understand the structure, you can name those decisions and ask whether they hold in your specific situation. The mechanical worker often can't, because they've internalized the assumptions as "just how it's done."
- Structure makes blind spots queryable. This is the key shift. You can ask: what is this framework optimized for? What scenarios was it not designed to handle? What would need to be true for this to fail? These are structural questions. You don't need 200 reps to ask them; you need a clear understanding of what the system is for.
The irony is that mechanical experience often creates blind spots rather than eliminating them. When you've done something the same way 200 times, the way you do it becomes invisible. Structural thinking makes the invisible visible again, by asking why rather than how.
The Intuition Problem
"You can't develop gut instinct without doing the reps."
Let's start with a definition: what is "intuition"? Intuition is the result of pattern-matching against a large library of prior experiences, most of which a person can't consciously articulate. So is there a way to build that same pattern library through structural engagement rather than mechanical repetition? You could use AI to:
- Map the failure space directly. Instead of implicitly absorbing patterns over years, ask upfront: What typically goes wrong here? What are the hidden dependencies? Where does this type of analysis usually break? You're building the pattern library consciously and on demand, rather than implicitly over 200 reps.
- Stress-test by simulation rather than by scar tissue. Instead of learning that a certain assumption is dangerous because something once blew up over it, run 50 scenarios in an afternoon. Intentionally break things. Map where the fragility is. You'll get the same information in a fraction of the time.
- Use AI as a "second set of eyes" with its own pattern library. The experienced professional's intuition is really just access to a lot of prior examples. AI has its own reference class, and arguably a larger one. "Does anything here look unusual compared to typical analyses in this sector?" is a real question you can now ask.
In other words, you can achieve the same structural understanding; it comes down to how you use AI and the questions you ask.
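As a sketch of the "stress-test by simulation" point above: sweep the assumptions of a toy model across 50 scenarios and see which ones break the business, rather than waiting for the one deal that blows up. Everything here (the model, the thresholds, the parameter ranges) is hypothetical.

```python
import itertools

# Map the fragility of a toy subscription business directly: sweep assumption
# ranges and flag the scenarios where it stops covering fixed costs.
# All parameters are illustrative.

def annual_profit(monthly_churn, arpu, cac):
    """Crude steady-state annual profit for a simple subscription business."""
    monthly_adds = 500
    customers = monthly_adds / monthly_churn        # approximate steady-state customer count
    revenue = customers * arpu * 12
    acquisition_cost = monthly_adds * cac * 12
    fixed_costs = 2_000_000
    return revenue - acquisition_cost - fixed_costs

churn_rates = [0.01, 0.02, 0.03, 0.05, 0.08]
arpus = [30, 40, 50, 60, 80]
cacs = [200, 400]

scenarios = list(itertools.product(churn_rates, arpus, cacs))   # 5 x 5 x 2 = 50 scenarios
broken = [s for s in scenarios if annual_profit(*s) < 0]

print(f"{len(broken)} of {len(scenarios)} scenarios are unprofitable")
for churn, arpu, cac in broken:
    print(f"  breaks at churn={churn:.0%}, ARPU=${arpu}, CAC=${cac}")
```

An afternoon of this kind of sweeping builds the same mental map of where the fragility lives that scar tissue builds over years, and it leaves an explicit record of which assumptions are load-bearing.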
The Dunning-Kruger Effect
"Without the grind, people will think they understand things they don't."
This is a real risk, but it applies with or without the grind. Someone who does have mechanical expertise can be completely blind to how little they understand at the structural level. They know how to build the thing, so they assume they understand the thing. That's Dunning-Kruger too; it just looks more respectable because it's backed by visible effort.
In fact, real structural understanding is testable in a way that mechanical competence isn't. If someone truly understands the structure, they can answer "why" questions, not just "how" questions. They can predict what happens when you change something. They can identify when the framework doesn't apply. Mechanical competence is much easier to fake: you just point to the output and say "I built that" (a claim that's increasingly hard to trust in the age of generative AI).
So while the Dunning-Kruger risk is real, the solution isn't to reinstate the grind, but to change how we test for understanding. Stop asking "how did you build it?" and start asking "what happens if this assumption is wrong?" The people who actually developed structural understanding can answer that. The people who are parroting AI output can't.
The Missing Foundation
"Who catches the AI's mistakes if no one learned the fundamentals?"
What's wrong here is the ratio of specialization, not the concept. Some people should absolutely go deep into the foundations and will naturally be drawn to it anyway. But the need for low-level specialists is not an argument for making everyone start at the bottom. We don't make every software engineer learn assembly before they can write Python. We don't make every driver understand combustion engineering before they can operate a car. This is intuitive to us in most domains, except professional knowledge work, where "earn your stripes" is still treated as universal policy.
The current system doesn't deliver on its own promise. The bottom-up path is supposed to produce people who eventually think structurally. But how often does it actually? A lot of people spend years in the "earn your stripes" phase, develop solid mechanical skills, and never learn to think structurally because the system doesn't explicitly teach it. It just hopes the grind will produce it as a byproduct, and promotes those who learn it.
Top-Down vs. Bottom-Up Learning
What does it look like if we don't need to master the mechanics to understand the structure? There are two competing approaches to developing judgment:
- Bottom-Up: Mechanical repetition allows for pattern recognition, which develops into intuition over time. Earn understanding through labor.
- Top-Down: Structural understanding allows for targeted interrogation, which develops into judgment faster and more consciously. Earn understanding through inquiry.
In fact, the grind never taught top-down thinking. It forces everyone to build the mechanics; some people develop top-down instincts along the way, seeing the purpose behind the mechanics, while others become very skilled mechanically but can't tell you whether the thing should have been built differently. Both groups can get promoted for excelling at the mechanics. Both have the credibility. But only one can actually operate at the next level.
We then credit the grind for producing the top-down thinkers, when really, it just happened to be the environment they were in while they developed that thinking on their own.
But I'm not arguing that bottom-up was never valuable. I'm arguing that the default should now flip:
- Pre-AI: This was inefficient but functional. The only way to find top-down thinkers was to run everyone through the same mechanical process and see who emerged with judgment. But this was more about filtration than education.
- Post-AI: We don't need to run everyone through years of mechanics and hope the right ones figure it out on their own. We can name it, teach it, and build tools that let people engage at that level from the start.
The Broader Implication
This isn't ultimately a piece about AI skills. It's about how we've mistaken the credentialing system for the learning system, and confused credibility with capability. But AI now lets us split those apart. And let's not dig in our heels: the people who insist on conflating them aren't protecting learning. They're protecting a system that validated them.
This isn't just a corporate culture problem. It's baked into how we train people from the start. Our education system builds bottom-up thinkers: Follow the curriculum. Master the details. Trust the process. Don't skip ahead.
This produced people who are very good at executing within a given structure, but not necessarily good at questioning whether the structure itself makes sense.
For decades, this worked. Go to college, get a prestigious white-collar job, and climb the corporate ladder. The prescribed path delivered real leverage, so following it was rational. But if AI can now do the mechanical work, then the value of having survived the mechanical grind drops. And the people who never developed the muscle to ask "why am I doing this?" and "what is the actual structure of this problem?" are the most exposed.
The uncomfortable implication is: We need people who can interrogate abstract structures, not just perform within them. And that's a fundamentally different orientation than what most of our educational and professional systems teach, select for, and reward.
But the real question isn't whether AI will change what skills matter, as that's already happening. The question is whether we'll update our institutions fast enough. Our schools, our hiring practices, our professional cultures are all still optimized for a world where the grind was the only path.
The grind was the price of entry, not the point. The thinking was always the point. Now there's a faster path to it, if we're willing to let go of the old one.