For 65 years, we’ve used Peter Drucker’s term “knowledge worker” to describe anyone whose job involves thinking rather than manual labour. But the definition relied on a constraint that no longer exists: the scarcity of information. We now live in a world drowning in it. AI has inverted that constraint, and our job titles haven’t caught up.
Drucker coined the term in 1959. Since then, knowledge work has been rewritten multiple times — personal computing in the 1970s and 1980s, the internet and web in the 1990s, mobile and SaaS in the 2000s. Each time, the work changed fundamentally. But we kept using Drucker’s framework.
Now AI has arrived, and suddenly everyone’s talking about knowledge workers again. Not because the term is useful, but because we don’t have language for what’s actually happening.
Here’s what’s actually happening: AI doesn’t just change how knowledge work gets done. It exposes how much of what we call “knowledge work” isn’t knowledge at all — it’s volume.
What Knowledge Work Actually Means
Knowledge work was originally defined as work where the primary output was information, decisions, or insight rather than physical goods.
But most modern “knowledge work” has drifted. It’s become information processing — summarising, formatting, organising, updating. Call it volume work: anything that scales linearly with time, like reading, documenting, and coordinating. It’s work humans were never uniquely suited for. We just didn’t have an alternative.
Until now. And information processing is exactly what AI excels at.
Are You Actually a Knowledge Worker?
If you want to know whether your role qualifies, ask yourself:
Does your job require judgment under ambiguity?
If you regularly make decisions with incomplete information and conflicting priorities, you’re doing knowledge work.
Does your work involve synthesis that AI can’t replicate?
If you’re combining context AI doesn’t have access to — organisational politics, unspoken constraints, cultural nuance — that’s knowledge work.
Does your work involve shaping direction, not just executing it?
If you influence what gets built and why, not just how, you’re a knowledge worker.
Could AI automate parts of your job — but not the parts that actually matter?
That’s the hallmark of modern knowledge work: AI absorbs the volume, but the judgment remains irreplaceable.
If AI can do the part of your job that fills your calendar, but not the part that defines your value, you’re a knowledge worker. If it’s the other way around, you’re not.
If you answered “no” to most of these, you might be in a role that’s built around work AI now handles at machine scale.
If AI Can Handle It, Was It Ever Actually Knowledge Work?
Consider what product teams do all day:
- Research synthesis
- Competitive analysis
- Meeting notes and status updates
- Feature specs and user stories
- Documentation
AI compresses all of this. Not assists with it. Compresses it.
A task that took three days now takes thirty minutes. That’s not augmentation. That’s elimination with a polite word.
The reality no one wants to say plainly: a significant portion of product roles were built around work that AI now handles at machine scale.
Product Roles Aren’t Being Reshaped. Some Are Being Removed.
Let’s be specific about what’s disappearing:
Junior PM roles built around information gathering
These roles existed to compress information for senior decision-makers. AI now does this instantly.
Mid-level analyst roles focused on synthesis
If your job was “read these ten documents and tell me what matters,” that job is gone.
Design roles centred on production, not strategy
Creating variations, documenting components, maintaining design systems — all automatable at scale.
Research roles that primarily transcribe and categorise
Interview synthesis, theme clustering, insight surfacing — AI handles this faster and more thoroughly.
This isn’t a prediction for 2030. It’s the Q4 2024 hiring strategy for early-stage startups and the unannounced policy at scaleups.
Companies just aren’t saying it plainly because “we eliminated 40% of our product org through AI” isn’t something you put in a press release. Instead, they’ll hire fewer juniors, promote fewer mid-levels, and call it “efficiency gains.”
The New Divide: Casual Users vs Leverage Users
Inside product teams right now, two groups are forming:
Casual users: people who occasionally ask AI to summarise something or rewrite an email.
Leverage users: people who’ve rebuilt their entire workflow around AI — research automation, multi-step task orchestration, continuous synthesis.
The productivity gap between these groups is widening fast. Not 10% or 20%. More like 300-500%.
A leverage-user PM can now do the strategic work of a small team. They’ve automated everything AI can handle and spend their time on what AI can’t: judgment, prioritisation, alignment, and trade-offs.
The casual user is still working the old way, just slightly faster.
Guess which one keeps their job when budgets tighten?
What Actually Remains
Strip away everything AI can automate, and what’s left is surprisingly small:
Judgment under ambiguity
AI can’t decide what to build when the right answer depends on context it doesn’t have.
Trade-off evaluation
AI can list options. It can’t choose between them when every option has costs that can’t be quantified.
Cross-functional alignment
AI can’t navigate politics, egos, competing incentives, or organisational dynamics.
Taste
AI can generate a thousand variations. It can’t tell you which one is right.
Knowing what’s missing
AI responds to what you ask. It doesn’t tell you what you forgot to ask.
This is what product work becomes: the residual that AI can’t handle.
For some people, that’s liberating. For others, it’s terrifying — because they’ve built their careers on work that’s now automated.
The Training Pathway Problem No One’s Solving
Here’s the crisis no one’s addressing: junior roles were how people learned judgment.
You started by gathering information. Then synthesising it. Then interpreting it. Then making recommendations. Then making decisions.
Each step built the foundation for the next.
AI just automated steps one through three.
So how do you train someone to do step four if they never did steps one through three?
No one knows. And the people who already have the judgment aren’t particularly motivated to solve this problem — they’re the ones whose value just increased.
This isn’t a future problem. It’s happening right now. The last cohort of junior PMs who learned through the traditional pathway graduated into their roles in 2022-2023. Anyone starting now enters a landscape where the apprenticeship model is already broken.
The Predictions Everyone’s Too Polite to Make
By 2028, here’s what product orgs will actually look like:
30-50% fewer seats
Not because of layoffs. Because of natural attrition that doesn’t get backfilled.
Elimination of the “glide path” from junior to senior
Entry-level roles won’t exist in their current form. You’ll either arrive with judgment already developed elsewhere, or you won’t arrive at all.
Extreme compensation divergence
The small number of people who combine judgment with AI leverage will command significantly higher compensation. Everyone else’s will stagnate or decline.
Hybrid roles as the only roles
“Pure” PMs, designers, or researchers won’t exist. Every role will require technical fluency, strategic thinking, and AI workflow mastery.
The rise of the AI-native consultancy model
Rather than hiring, companies will engage individuals who can deliver team-level output solo because they’ve mastered AI leverage.
These aren’t predictions I’m making to be provocative. They’re the logical conclusion of current trends if nothing changes.
The Risks Everyone’s Ignoring
Over-reliance on AI for judgment it can’t actually provide
AI produces confident answers even when it’s contextually wrong. Teams that don’t maintain critical thinking will make expensive mistakes.
Velocity without direction
AI lets you move faster. It doesn’t tell you where to go. Teams will ship more of the wrong things, faster.
Organisational fragmentation
Some teams go AI-native. Others resist. The gap in velocity and expectations creates friction that breaks collaboration.
The illusion of knowledge work
AI makes it easy to produce outputs that look like insight but are actually just sophisticated summarisation. Teams will mistake volume for value.
Why We Need to Abandon the Framework Entirely
“Knowledge worker” was designed for a world where the constraint was access to information.
That world no longer exists.
AI doesn’t just change knowledge work. It inverts the constraint. The bottleneck isn’t access to information anymore — it’s knowing what to do with infinite information.
Drucker’s framework can’t describe this. It’s like using a map of horse trails to navigate motorways.
We’re clinging to it because we don’t have a replacement. But the absence of new language doesn’t make the old language accurate.
What Comes Next
The teams that thrive will be the ones who recognise that AI hasn’t just changed the work — it’s changed what counts as valuable.
Volume work is over. The era where you could build a career on information gathering, synthesis, and documentation is ending.
What remains is judgment. But not generic judgment — contextual judgment that AI can’t replicate because it requires understanding what’s unsaid, what’s political, what’s impossible, and what’s missing.
If you want to know whether you’re on the right side of this shift, audit your week: how many hours did you spend on judgment versus processing?
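One way to make that audit concrete is to tag each block of your week as judgment or processing and tally the split. A minimal sketch in Python, using a hand-written, purely illustrative list of time entries (the tasks, hours, and categories below are hypothetical, not drawn from any real export):

```python
# Minimal sketch of a weekly audit: tally hours spent on "judgment"
# versus "processing" work from a hand-tagged list of time entries.
# The entries and categories are illustrative examples only.

from collections import defaultdict

# (task, hours, category) — tag each block of your week yourself
week = [
    ("prioritise roadmap trade-offs",  3.0, "judgment"),
    ("stakeholder alignment call",     2.0, "judgment"),
    ("summarise user interviews",      4.0, "processing"),
    ("format status update",           1.5, "processing"),
    ("write feature spec boilerplate", 3.5, "processing"),
]

# Sum hours per category, then report each category's share of the week.
totals = defaultdict(float)
for task, hours, category in week:
    totals[category] += hours

total = sum(totals.values())
for category, hours in sorted(totals.items()):
    print(f"{category}: {hours:.1f}h ({hours / total:.0%})")
# → judgment: 5.0h (36%)
# → processing: 9.0h (64%)
```

A week that skews as heavily toward processing as this hypothetical one would, by the article’s argument, be a week built largely on work AI can now absorb.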
The question isn’t whether AI will transform product teams. It already has.
The real question is whether your work is the kind AI replaces — or the kind AI reveals.
