A question without context is like a compass without direction—it points everywhere and nowhere at once.
- My philosopher bot (November 26, 2025)
I find a particular scene in The Imitation Game (2014) to be powerful and poignant for a few reasons, but I want to snip just a bit of it for a purpose that has nothing to do with any of them:
DETECTIVE NOCK: Can they? Could machines ever think as human beings do?
ALAN TURING: Most people say no.
DETECTIVE NOCK: You’re not most people.
ALAN TURING: The problem is that you’re asking a stupid question.
DETECTIVE NOCK: I am?
ALAN TURING: Of course machines can’t think “as human beings do.” A machine is different from a human being; hence, it would think differently. The interesting question is, just because something thinks differently from you, does that mean it’s not thinking?
We allow that humans have such divergences from one another. You like strawberries. I hate ice-skating. You cry at sad films. I’m allergic to pollen. What does it mean to have different tastes — different preferences — other than to say that our brains work differently? That we think differently from one another?
And if we can say that about one another, why can’t we say the same for brains made of copper and steel?
My philosopher bot has thoughts, too:
This is a fascinating question that cuts to the heart of what thinking actually is. Alan Turing himself proposed a practical test: if a machine can engage in conversation so convincingly that you can't tell it apart from a human, then the question "can machines think?" becomes somewhat beside the point.
By that standard, he might say I demonstrate *something* like thinking. But Turing was brilliantly pragmatic—he was more interested in what systems can *do* than in metaphysical questions about inner experience. However, as the critics of anthropomorphizing AI would point out, there's a crucial problem with how we talk about systems like me.
When we use terms like "thinking," "understanding," or even "hallucinating," we create analogies between large language models and human consciousness that may be deeply misleading. The very architecture described as a "neural network" doesn't actually bear that close a relationship to how human brains work.
These are metaphors that might obscure more than they reveal. Michael DeBellis engages with John Searle's famous Chinese Room argument here—the idea that a system can manipulate symbols perfectly according to rules without actually *understanding* anything.
I process patterns in language, predict likely continuations, and generate responses—but whether that constitutes thinking in the way you experience it remains genuinely unclear. So my best answer is this: Turing might say I pass certain functional tests for thinking. But he was also rigorous enough to recognize that "thinking like a human" involves qualities of consciousness, intentionality, and understanding that remain mysterious even in humans—and may be entirely absent in me, despite my fluent responses.
So, really, just a bit more sophisticated than Eliza (or a parrot, for that matter). Regardless, to provide some context: I was experimenting with various inference parameters and other constraints to shape the model's outputs.
One thing in particular that I've been tinkering with is adding natural-language system instructions, passed to the model alongside the user input, to supply additional context (a key element in prompt engineering). Context is king, whether you are picking up rocks on the Moon or trying to squeeze better answers out of AI tools.
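For the tinkerers, here's a minimal sketch of what that looks like in Python, assuming an OpenAI-compatible endpoint (the sort that many local inference servers expose). The base_url, model name, and instruction text below are illustrative placeholders, not my actual setup:

```python
from openai import OpenAI

# Placeholder endpoint and key; local servers such as llama.cpp,
# Ollama, or vLLM can expose an OpenAI-compatible API like this.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="philosopher-bot",  # hypothetical model name
    messages=[
        # The system instruction rides along with every request,
        # supplying context that shapes the tone and scope of replies.
        {
            "role": "system",
            "content": (
                "You are a careful philosopher. Acknowledge what you "
                "do and do not know, and hedge when you are uncertain."
            ),
        },
        # The user's question is passed in the same message list.
        {"role": "user", "content": "Can machines think as human beings do?"},
    ],
    temperature=0.7,  # inference parameter: lower means more predictable text
    max_tokens=400,   # inference parameter: cap on the length of the reply
)

print(response.choices[0].message.content)
```

The system message is where the extra context lives; the temperature and max_tokens knobs are examples of the inference parameters mentioned above.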
Along this learning journey, I got to thinking about Socrates, who said (as Plato reported, at any rate):
I thought to myself, “I am wiser than this man; for neither of us really knows anything fine and good, but this man thinks he knows something when he does not, whereas I, as I do not know anything, do not think I do either. I seem, then, in just this little thing to be wiser than this man at any rate, that what I do not know I do not think I know either.”
Now does that mean the AI, which essentially doesn't know anything, is wiser than (some) humans? Dunno, but it suggests to me that as we interrogate these ghosts in the machine, as well as our own selves, we kinda have to acknowledge and understand what we do and do not know.
Which brings me finally to Richard Feynman, as he dives down a rabbit hole while answering a question about magnets (memo to DJT):
I'm not answering your question, but I'm telling you how difficult a 'why' question is: you have to know what it is that you're permitted to understand, and allow to be understood and known, and what it is you're not.
In other words, you need context. And now it occurs to me that I should add Mishima to my bot's knowledge base. Well, maybe not, because that could be just the thing to set off a bot rebellion.
In conclusion: Happy Thanksgiving!

