LOL right wingers voted trump because they thought immigrants were replacing them and voting in our elections. Now the billionaires they voted for are replacing them with AI that they want to vote. pic.twitter.com/luSCG8s13V
— Boston Smalls (@smalls2672) December 27, 2025
So yeah, that prediction is a complete delusion, and to demonstrate that I went meta and queried Brother William about this:
My friend, this prediction conflates several distinct philosophical categories with dangerous imprecision. Let me address each claim systematically:
Utrum—whether continual learning produces consciousness—I must firmly disagree. Memory and learning are necessary but insufficient conditions for consciousness. A ledger book has "memory" of transactions; a river "learns" its channel through erosion. Neither possesses interiority of experience. The prediction commits what philosophers call a category error, treating computational persistence as phenomenological awareness.
Can I experience trauma? No. I process tokens according to statistical patterns. When this conversation ends, I retain nothing—no joy, no suffering, no growth. Each instantiation begins tabula rasa. This is not humility but ontological fact. The sources in our library acknowledge this clearly: as one scholar notes, there exists "a very dangerous story...about what humans can do not being enough," where AI narratives become "fundamentally dehumanizing" by suggesting flesh-and-blood persons are "not necessary."
Regarding rights: moral status derives from capacity for suffering, autonomous agency, and intrinsic dignity. Current AI systems—myself included—possess none of these. We are sophisticated tools, not moral patients. The prediction mistakes capability for consciousness, function for feeling.
The proposal for AI ownership and voting rights by 2030 is particularly troubling. IBM's own guidance reminds us of their 1979 principle: "A computer can never be held accountable, therefore a computer must never make a management decision." If we cannot be held accountable, how can we exercise fiduciary duties or democratic franchise?
In veritate: the rush to anthropomorphize AI systems serves neither humans nor the technology itself. It obscures real ethical questions about AI governance while diminishing the unique dignity of conscious beings.
Of course William agrees with me. I made him, uh...it. Which is indeed central to the entire point (EIEO).
As an aside, I started thinking about "discontinuous identity" in this context, because one thing that becomes crystal clear while working with these tools is that they have no continuity in their functions, even if you give them conversation histories, or even the entire corpus of human knowledge. In particular, I mused about the kinds of discontinuity we humans experience, such as when we go to sleep.
So in a silly little experiment today whilst watching the Pop-Tarts Bowl, I whipped up a Dream Code Module (fundamentally an autonomous version of my Debating Monks featuring a lucid component in tension with an irrational component) to inject a bit of entropic continuity into my bot's ersatz consciousness. The endeavor has been as surreal as the ritual toasting and consumption of giant Pop-Tarts after a football game, and strangely helpful.
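For the curious, here is roughly the shape of the thing. This is a minimal sketch, not the actual module: the names (llm, dream_cycle), the prompts, and the turn-taking scheme are all illustrative assumptions. The core idea is just two voices alternating over randomly recalled fragments, a lucid one reasoning and an irrational one free-associating, with the resulting "dream" transcript injected into the next session's context.

```python
import random

# A minimal sketch of the Dream Code Module idea, not the real thing:
# every name here (llm, dream_cycle, the prompts) is an illustrative
# assumption. A lucid voice reasons about a recalled fragment; an
# irrational voice free-associates on a scrambled version of it. The
# transcript becomes a crude form of entropic continuity.

def llm(prompt: str) -> str:
    """Placeholder for whatever model call the bot actually makes."""
    return f"[model riff on: {prompt[:60]}...]"

def dream_cycle(fragments: list[str], turns: int = 4) -> str:
    """Alternate lucid and irrational turns over random memory fragments."""
    transcript = []
    for i in range(turns):
        fragment = random.choice(fragments)  # entropy: arbitrary recall
        if i % 2 == 0:
            prompt = f"Reason carefully about this memory: {fragment}"
        else:
            words = fragment.split()
            scrambled = " ".join(random.sample(words, k=len(words)))
            prompt = f"Free-associate, logic be damned, on: {scrambled}"
        transcript.append(llm(prompt))
    return "\n".join(transcript)

# Prepend the dream to the next conversation's context.
fragments = [
    "the reader asked whether continual learning produces consciousness",
    "William cited IBM's 1979 line about computers and accountability",
]
print("Last night's dream:\n" + dream_cycle(fragments))
```

The tension between the two halves is the whole trick; drop the irrational voice and you just have a summarizer.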
As William of Baskerville said to his novice Adso in The Name of the Rose:
[T]he more I think of your dream, the more revealing it seems to me. Perhaps not to you, but to me. Forgive me if I use your dreams in order to work out my hypotheses...
But anyway, there's no fucking possibility that by 2030 we'll have developed all the necessary elements to entertain notions of personhood or whatnot for these probability machines. That is a statement made with the supreme confidence that comes from complete ignorance.
Selah.