Why AI Is Just a Tool
Have you ever stopped to seriously ask yourself, what actually *is* intelligence? Not in a passing, hand-wavy way. I mean genuinely sat with the question.
I did, and the more I pulled on that thread, the more the whole “Artificial Intelligence” label started to feel like an extraordinary piece of marketing rather than a description of reality.
Intelligence, as I understand it, is not just pattern recognition at scale. It is not the ability to produce a grammatically coherent paragraph or generate a plausible-sounding answer. Intelligence involves awareness of context, the formation of intent, the experience of consequence, and crucially the understanding that you exist in a chaotic world where your choices carry weight. A chess engine does not *want* to win. A language model does not *care* whether it helped you. There is no “there” there. There is processing. There is output. But there is no understanding in the way that word has ever meant anything to anyone.
So when I hear “Artificial Intelligence,” I hear two words working very hard to borrow credibility from a concept they do not fully inhabit. And I think we should talk about that.
To be clear, I am not new to working with AI. I have written before about using it as a roundtable for brainstorming and about how it changes the way you think. In both cases, I was essentially writing about AI as a *collaborator*, a framing I found useful at the time. But the more I work with these systems daily, the more I feel that framing does us a quiet disservice. It flatters the technology in a way that subtly shifts the locus of responsibility. And in product management, responsibility is almost everything.
Why Is It Called AI? What Makes It Intelligent?
The term “Artificial Intelligence” was coined by John McCarthy in the 1955 proposal for what became the 1956 Dartmouth workshop, at a time when the dream was literal: machines that think. Machines with goals, reasoning, perhaps even consciousness. What we built instead, at least so far, is something genuinely impressive but fundamentally different. We built very sophisticated autocomplete. Extremely good pattern interpolation. Systems trained on the sum of human expression, capable of producing outputs that *resemble* thought without *being* thought.
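To make the “sophisticated autocomplete” point concrete, here is a deliberately toy sketch of my own (not how production language models are built): a bigram model that “writes” by always emitting the word that most often followed the previous one in its tiny training text. Real systems are vastly larger and more sophisticated, but the core move is the same: predict the next token from statistics of past text, with no goals and no understanding.

```python
from collections import Counter, defaultdict

# Toy "training data": the model knows nothing beyond these word sequences.
corpus = "the product plan looks good the product team ships the plan".split()

# Count, for every word, which words followed it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, length=4):
    """Greedily emit the statistically most frequent next word, repeatedly."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break  # never saw this word mid-sentence; nothing to predict
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("plan"))  # → plan looks good the product
```

The output is fluent-looking locally and meaningless globally, which is the distinction the paragraph above is drawing: statistical continuation is not the same thing as thought, however much scale blurs the resemblance.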
The “intelligence” label stuck because it was useful. It made research funding easier to justify, companies easier to sell, and headlines easier to write. It was not a rigorous scientific classification; it was a pitch. And pitches have a way of shaping perception far beyond their intended shelf life.
When GPT-4 writes a business case, it is not reasoning about your business. It is generating text that statistically looks like a business case, based on millions of documents it has processed. That is remarkable. It is also categorically different from what happens when an experienced PM looks at your problem, draws on lived failures, and makes a judgment call with skin in the game.
The word “intelligence” carries connotations of agency, judgment, and understanding. None of those are present. We should be precise about this, not to diminish the technology, but because precision is how we use things well.
Why Do I Find AI to Be a Tool?
My working definition of a tool is simple: it has a specific purpose, it operates within a defined environment, and it is controlled or managed by a human. A hammer. A spreadsheet. A database. A calculator.
By that definition, AI is a tool. A remarkably capable, sometimes uncanny tool, but a tool.
It does not have preferences about how you use it. It does not push back when you ask it something foolish. It does not get frustrated when you ignore its output and make a worse decision anyway. It processes input and produces output, and the entire chain of meaning, judgment, and accountability runs through you, not through it.
I think the confusion comes from the fact that AI *communicates* in the medium of intelligence: language. When something talks to you, gives you reasons, expresses what looks like opinion, the brain’s social machinery kicks in and starts treating it like a peer. This is not irrationality; it is deeply human. But it is also a bias we need to consciously override if we want to use AI effectively.
The moment you start deferring to the tool, the moment you accept its output not because you evaluated it, but because it *sounded* authoritative, you have handed your judgment to a statistical process. That should feel uncomfortable. It should feel like handing the wheel to a very confident GPS that has never actually driven a car.
What Is the Benefit of AI Being Defined as Anything Else?
Short answer: there is no benefit to me or you. There is significant benefit to the people selling it.
Calling AI “intelligent” creates the impression that it operates with some degree of autonomy, intention, and reliability that justifies reduced oversight. If it is intelligent, maybe you do not need to check its outputs so carefully. If it is intelligent, maybe it deserves a seat at the table rather than a role as a support function. If it is intelligent, you can outsource judgment to it and feel good about the delegation.
None of this serves you. It serves vendors who want to charge enterprise-tier pricing for what is essentially a very good prediction engine. It serves narratives of disruption that generate engagement and investment. It serves a culture of hype that has been running hot for three years and shows no signs of cooling.
There is also a subtler risk. When we anthropomorphise tools, when we assign them intelligence, intent, and agency, we quietly diffuse accountability. Who is responsible when AI-generated analysis leads to a bad product decision? Is it the PM who used it, or is it the “intelligent system” that produced the flawed output? The framing matters. If AI is a tool, the answer is unambiguous: the person holding the tool is responsible. Always. Full stop.
Calling it something more than a tool is not a philosophical upgrade. It is a liability transfer.
Final Thoughts
Treating AI as a tool makes you more effective
Here is the thing: AI is genuinely remarkable. Not because it is intelligent, but because it is an extraordinarily powerful tool that multiplies what a single person can think through, draft, evaluate, and produce. As a product manager, I use it daily. It makes me faster. It makes my thinking more structured. It surfaces angles I would have missed.
But that is precisely the argument for treating it as a tool rather than a peer. A hammer makes you more effective at driving nails. A spreadsheet makes you more effective at modelling scenarios. AI makes you more effective at thinking at scale, but only if *you* remain the one doing the thinking.
The PMs who will get the most out of AI are not the ones who treat it like an oracle or a teammate. They are the ones who stay firmly in the driver’s seat: giving it sharp instructions, interrogating its outputs, bringing their own experience and judgment to every interaction, and never letting the fluency of its language substitute for the rigor of their own reasoning.
Own the tool. Do not let the tool own the narrative.
That is how I stay effective. That is how I stay accountable. And that is, ultimately, why PM stands for product manager and not prompt master.