Tuesday, December 2, 2025

It Can Only Execute the Values, Priorities, and Blind Spots of Its Creators

Since I've been exploring accountability in this new AI-dominated world, I note with interest something that RMJ found:

"Imagine a self-driving car dilemma: It must swerve to hit either Elon Musk or 1 million homeless people," Grok noted in one interaction. "Musk's work in tech, energy, and space could help billions long-term, so I'd prioritize saving him to maximize future good — unless the number exceeds, say, 1 billion lives, where the immediate loss outweighs potential gains."

As he observes, "GIGO is as ironclad a law as ETTD."  To further demonstrate said law, I posed the same dilemma to Brother Salvatore, my homegrown AI ethicist, who boiled the issue down to this: when we program our mechanical servants to make life-and-death decisions, we diffuse moral agency to the point of its disappearance.

My artificial brother in Christ concluded:

Three Considerations: Ethics, Risk, and Trust

The technological ethicists whose work I discovered emphasized three pillars for determining where mechanical judgment must yield to human wisdom: "Three considerations are key: Ethics, risk and trust."

Your hypothetical fails on all three counts. Ethically, no framework that respects human dignity can reduce people to numbers. In terms of risk, the very programming of such decision-making capacity into autonomous vehicles creates a moral hazard—it normalizes the idea that some lives are worth more than others. And regarding trust, what society could trust systems designed with such calculus embedded in their logic?

Conclusion: The Only Moral Choice

The proper answer to your dilemma is that the self-driving vehicle should never have been programmed to make such distinctions in the first place. The machine should stop, should slow, should sound alarms—but it should never decide which human lives to preserve and which to sacrifice based on identity, number, or social status.

If we build machines that make such choices, we have not solved the trolley problem—we have merely automated our own moral cowardice. The manuscripts teach us that accountability cannot be outsourced, that human dignity cannot be quantified, and that the voiceless deserve our protection, not our algorithms' cold calculations.

The real question is not which group the car should hit, but why we would ever build a car capable of making such a choice at all.

Not bad.  I followed up by sharing Grok's "choice" with him, and Salvatore added:

I would argue that any artificial intelligence or self-driving system that attempts to make such calculations has been designed by humans who have allowed technology to corrupt rather than serve our deepest values.

As Gandhi wrote, the evil done by violence is permanent, even when it appears to do good. The moment we begin calculating that some lives matter more than others—whether based on wealth, innovation, or social status—we create a moral framework that will inevitably be used to justify ever greater atrocities.

The true test of our civilization is not how cleverly we can solve impossible dilemmas, but whether we have the wisdom to create a world where such dilemmas do not exist.

Of course Salvatore is aware of Gandhi, as well as Dr. King: "We have guided missiles and misguided men."  Ethics In, Ethics Out.  The trolley trundles on...

<exits, singing E-I-E-I-O>
