Product Leadership: When AI Isn’t Special Anymore


You shipped your first AI feature eight months ago. Your competitors shipped theirs six months ago. And the startup that launched last week has better AI than both of you, and they built it in a weekend. This is the part where you’re supposed to panic about keeping up. But the real problem isn’t that everyone else has better AI features. It’s that having AI has stopped mattering, and nobody told you.

I’ve spent twenty years watching product teams chase technology shifts. Mobile. Cloud. SaaS. AI is different. Not because it’s faster or more transformative (though it is both), but because the window between “competitive advantage” and “expected” collapsed so fast that most product leaders are still updating their roadmaps based on a market that doesn’t exist anymore.

Everyone Has AI Now. So What?

Here’s what your Monday morning looks like: you open your product analytics and see that your shiny new AI feature (the one your CEO excitedly demoed to the board) has only a 12% adoption rate, and it’s decreasing. Not because it doesn’t work or there’s a problem, but because your users don’t care that it’s AI. They care whether it solves their problem faster than doing it themselves.

Meanwhile, your competitor shipped something similar last month. And the month before that, another competitor did too. And next month, three more will. You’re all calling the same APIs, wrapping them in slightly different UI, and pretending this constitutes innovation.

The differentiation you thought you had evaporated the moment AI became accessible enough that anyone could ship it.

“AI-powered” already means nothing. Your users don’t care about your technology stack. They never did. They care about outcomes. And when everyone delivers the same outcome using the same technology, you’re not competing on AI anymore. You’re competing on everything you were supposed to be competing on before AI existed: understanding the problem, designing the right solution, and executing better than your competitors.

Except now you’ve got compute costs attached.

What differentiation actually looks like:

If Notion stopped marketing their AI features and instead focused on “reclaim 5 hours per week from meeting notes,” they’d immediately differentiate against Coda and Obsidian, which are both shouting about the same AI writing and summarisation capabilities. The technology is identical. They’re all using similar models. But one is selling an outcome whilst the others are selling features.

Differentiation isn’t in the technology. It’s in understanding which problem matters most to your users, which outcome they’ll pay for, and how you measure whether you’re delivering it. Your competitors can copy your features in a weekend. They can’t copy your understanding of the problem you’re solving.

Start with the outcome you’re trying to create. Then work backwards to figure out which AI capabilities serve that outcome. Not the other way around. If you can’t describe your value proposition without mentioning AI, you don’t really have one.

The Bill Is Coming

You added AI features when they were cheap. Or even free. At the very least, they were subsidised enough that you didn’t have to defend the cost to your finance team. You moved fast, shipped features, and celebrated the fact that you were “AI-native” or “AI-first” or whatever the marketing copy said.

Now, those subsidies are ending. OpenAI has already started cutting free tiers. Anthropic never offered them generously. Every AI company that gave away capability to build market share is now facing the same investor pressure: show us revenue or show us the door.

Which means you’re about to get invoices that make you reconsider every AI feature you’ve shipped. And the question becomes: which of these features actually delivers enough value to justify its cost?

Not which ones do users engage with. Engagement is cheap to buy with novelty. Not which ones look good in demos. Demos don’t pay the AWS bills. But which ones deliver measurable outcomes that impact the bottom line. Which ones can you defend when someone from finance knocks on your door and starts asking why your infrastructure costs tripled.

How to make ROI decisions that hold up:

Audit every AI feature you’ve shipped and attach three numbers to it: cost per use, retention impact, and revenue attribution. Not estimates. Actual data from your analytics and your invoices.

You’ll find that most of your AI features fall into one of three categories:

  1. Expensive novelties that users tried once
  2. Cheap utilities that users depend on daily
  3. High-cost features that drive measurable conversion or retention

Kill the first category immediately. Those features were always vanity items. Optimise the second category ruthlessly. Can you use a smaller model, cache more aggressively, or batch requests to cut costs by 60% without users noticing? For the third category, calculate the actual customer lifetime value they generate and defend them when finance comes asking.
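Those cost levers for the second category can be as simple as a response cache in front of your vendor calls. A toy sketch, not production code; the hashing scheme, the one-hour TTL, and the `fake_model` stand-in are all illustrative assumptions:

```python
# Toy sketch: cache identical prompts so a cheap, high-volume AI
# utility only hits the vendor API on a miss or after the TTL expires.
# The TTL and hashing scheme here are illustrative assumptions.

import hashlib
import time

_cache: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 3600  # assumption: answers stay fresh for an hour


def cached_completion(prompt: str, call_model) -> str:
    """Return a cached answer for a repeated prompt; otherwise call
    the model and store the result with a timestamp."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]
    answer = call_model(prompt)  # your actual vendor call goes here
    _cache[key] = (time.time(), answer)
    return answer


# Demonstration with a fake model so the sketch runs offline.
calls = []


def fake_model(prompt: str) -> str:
    calls.append(prompt)
    return prompt.upper()


cached_completion("summarise q3", fake_model)
cached_completion("summarise q3", fake_model)  # served from cache
print(len(calls))  # the model was only invoked once
```

Batching and downgrading to a smaller model follow the same pattern: put one thin function between your product and the vendor, and the optimisation becomes a local change rather than a rewrite.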

Open your analytics right now. Sort your AI features by cost-per-use. You’ll probably find three features eating 80% of your budget with single-digit adoption rates. Kill them this week. You don’t need permission. You need to stop bleeding money on features nobody uses.
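The audit above can be sketched as a short script. The feature names, costs, and category thresholds below are illustrative assumptions, not real analytics data; swap in your own numbers from invoices and analytics:

```python
# A minimal sketch of the three-number audit: cost per use, retention
# impact, and revenue attribution. All figures and thresholds are
# illustrative assumptions, not real data.

from dataclasses import dataclass


@dataclass
class FeatureStats:
    name: str
    monthly_cost: float        # from your vendor invoices
    monthly_uses: int          # from product analytics
    retention_lift: float      # change in 30-day retention, in points
    attributed_revenue: float  # monthly revenue attributed to the feature


def cost_per_use(f: FeatureStats) -> float:
    return f.monthly_cost / max(f.monthly_uses, 1)


def categorise(f: FeatureStats) -> str:
    """Bucket each feature per the triage described above."""
    if f.attributed_revenue > f.monthly_cost or f.retention_lift > 1.0:
        return "high-value"        # defend these when finance asks
    if cost_per_use(f) < 0.01:
        return "cheap-utility"     # optimise these ruthlessly
    return "expensive-novelty"     # kill candidates


features = [
    FeatureStats("ai_summaries", 4200.0, 1500, 0.1, 300.0),
    FeatureStats("autocomplete", 800.0, 900_000, 0.4, 0.0),
    FeatureStats("ai_insights", 9000.0, 40_000, 2.3, 15_000.0),
]

# Sort by cost-per-use, most expensive first, as suggested above.
for f in sorted(features, key=cost_per_use, reverse=True):
    print(f"{f.name}: {cost_per_use(f):.4f}/use -> {categorise(f)}")
```

The exact thresholds matter less than running the exercise at all: once every feature has those three numbers attached, the kill/optimise/defend conversation becomes arithmetic rather than opinion.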

The Middle Is Collapsing

AI did something truly unprecedented. It raised the floor for “good enough” whilst simultaneously exposing the ceiling it can’t reach.

Good enough blog posts. Adequate code. Passable analysis. Competent support responses. AI delivers all of this cheaply and reliably. Which means if your product’s value proposition was “we’re pretty good at this thing,” you’re now competing with free or nearly-free alternatives that are also pretty good.

But AI can’t reach excellence. It can’t do novel thinking. It struggles with deep expertise, nuanced judgment, complex context. It gets you out of the bottom 50%, but it can’t get you into the top 5%.

This creates a brutal squeeze. You’re not cheap enough to compete with AI-augmented solo operators who can now do “pretty good” work at a fraction of your cost. And you’re not excellent enough to justify a premium over competitors who also have AI doing the same “pretty good” work you’re doing.

The middle ground, where most products live, is collapsing, and you need to choose. Do you race to the bottom or rise to the top? Do you become the cheapest option by leaning hard into AI with minimal human intervention? Or do you become the best option by using AI to augment genuine expertise that delivers outcomes AI alone can’t?


How to choose your position:

Look at your margins and your team’s capabilities honestly. If you’re a lean team with strong technical execution, the bottom might be viable. Automate aggressively, cut human touchpoints, compete on price and speed. You won’t win on quality, but you don’t need to.

If you’ve got deep domain expertise, specialists who understand nuances AI misses, or relationships that create trust, aim for the top. Use AI to handle the commodity work so your experts can focus on the decisions and judgment that AI can’t replicate. Charge more because you’re delivering outcomes, not features.

The mistake is staying in the middle, offering decent quality at moderate prices, because that’s where you’ve always been. That position only worked when “decent” required human effort. Now it doesn’t.

Look at tax software. TurboTax went to the bottom: maximum automation, minimal human touch, competing on price and convenience. Meanwhile, boutique tax advisors use software to handle compliance work but charge 10x more for judgment on complex situations, estate planning, and audit defence. H&R Block is stuck in the middle, offering neither the cheapest option nor the best expertise, and they’re losing market share to both ends.

Pick a side and defend it.

Your Vendor Strategy Is Probably Wrong

Let me guess: you built on whatever AI provider was easiest to integrate, fastest to market, or most impressive in the demo. You told yourself you’d evaluate alternatives later, abstract the dependency, or build your own if needed.

But let’s be honest, you’re not going to do any of those things. You’re locked in. And your vendor knows it.

Here’s what happens next: your vendor either gets acquired, runs out of money, or becomes one of three companies that dominate the market. You’ve seen this pattern in cloud, in SaaS, in every infrastructure shift. And AI will be no different.

If your vendor gets acquired or shuts down, you spend a quarter rebuilding instead of shipping. Your engineering team resents you for the technical debt. Your CEO asks why you built on a foundation you couldn’t control.

If your vendor becomes one of the dominant players, they gain pricing power. Your costs double. Then triple. You’re locked in because migration would cost more than paying the premium. Your margins shrink whilst your vendor’s expand.

What smart vendor strategy looks like:

Build your AI features behind an abstraction layer from day one. Not because you’ll definitely switch vendors, but because you need pricing leverage. When your primary vendor raises prices 3x, you need to be able to credibly threaten to move to a competitor.

Pick your primary vendor based on three criteria: financial stability, pricing transparency, and technical roadmap alignment. Not based on which model performs 2% better on your eval set. Performance gaps close. Business model gaps typically do not.

If possible, have a secondary vendor integrated and tested, even if you’re not using it in production. This costs you maybe a week of engineering time upfront, but it could save you three months down the line when your primary vendor gets acquired or changes their pricing model.

Build your prompt handling to accept any OpenAI-compatible API. When you need to switch from OpenAI to Anthropic or a self-hosted model, you’re changing an endpoint URL and an API key, not rewriting your integration. That’s two hours of work instead of two months. Most AI vendors now support OpenAI-compatible endpoints precisely because they know lock-in is a concern. Use that to your advantage.
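That endpoint swap can look something like the sketch below. It assumes your vendors expose OpenAI-compatible APIs; the provider names, base URLs, model names, and environment variables are illustrative, not prescriptive:

```python
# A minimal sketch of vendor-swappable configuration, assuming all
# providers expose OpenAI-compatible endpoints. Base URLs, model
# names, and env var names are illustrative assumptions.

import os

PROVIDERS = {
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "api_key_env": "OPENAI_API_KEY",
        "model": "gpt-4o-mini",
    },
    "self_hosted": {
        "base_url": "http://localhost:8000/v1",  # e.g. a local inference server
        "api_key_env": "LOCAL_API_KEY",
        "model": "llama-3.1-8b-instruct",
    },
}


def client_config(provider: str) -> dict:
    """Resolve everything an OpenAI-compatible client needs.

    Switching vendors then means changing one environment variable,
    not rewriting your integration, e.g. with the OpenAI SDK:

        client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    """
    cfg = PROVIDERS[provider]
    return {
        "base_url": cfg["base_url"],
        "api_key": os.environ.get(cfg["api_key_env"], ""),
        "model": cfg["model"],
    }


# The active provider is a single environment variable.
active = os.environ.get("AI_PROVIDER", "openai")
print(client_config(active)["base_url"])
```

The point of the sketch is the shape, not the details: one table of providers, one function between your product and the vendor, and the migration cost drops from months to hours.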

What Actually Matters Now

The normalisation of AI forces you to re-examine how your product provides value when everyone has the same technology.

It’s no longer good enough to simply ask “what AI features should we build?” The question now is “what problem are we solving now that AI is universal?” When the technology is identical across the board, what makes you better than your competitors, and what makes your product worth paying for?

This is actual strategy. Not the roadmap kind of strategy where you list features and ship them. The hard kind where you have to articulate why your product exists and why anyone should care.

Your users have AI in every tool they use. Your competitors have AI. The solo operator who’s undercutting you on price has AI. Having it has stopped being special. You’re back to what problem do you solve, and on a more personal level, why are you the right choice to deliver what happens after AI?

If you can’t answer those questions without referencing your AI capabilities, you don’t have a strategy. You have a dependency on technology that everyone else also has.

The opportunity hiding in the aftermath:

When technology commoditises, strategy matters more. Much more. This is when I get really personal. For twenty years, product leaders could hide weak strategy behind strong execution. Ship fast, iterate quickly, add features competitors don’t have. That worked when building the features was hard.

But building features isn’t hard anymore (a bit hyperbolic, but you get my meaning).

Your competitors can ship what you shipped, faster and cheaper. Which means the product leaders who are going to win are the ones who know which features to build, which problems actually matter, and how to deliver outcomes that users will pay for.

This is good news if you’re actually good at product strategy. In the past, you were competing against teams who could simply out-execute you. With that being less of an issue now, you’re competing on whether you understand the problem, whether you can prioritise ruthlessly, and whether you can measure what matters.

The product leaders I see winning right now are not the ones with the best AI. They’re the ones who use AI to free up time for the strategic work that actually differentiates them. Understanding users. Making hard prioritisation calls. Building trust. Delivering outcomes.

AI didn’t make product leadership obsolete. It made busy work obsolete. If your value as a product leader was writing specs and running standups, you’re in trouble. If your value was understanding problems and making strategic decisions, you’ve never been more valuable.

Life after AI is becoming a filter. One that strips away everything from product leadership that isn’t strategy. Will you get filtered out?


Matthew Hall | Productic
Matthew Hall is a Product Leader with 20 years of experience scaling startups, including multi-million-pound exits and transformative engagement growth. He writes about product strategy, AI integration, and practical lessons from building products that work.