Yeah, no, not really. Well, I mean, I guess I am in the sense that I've been using this frivolous little experiment so I can better speak to technical blockers as well as ethical concerns with my learners, but I ain't setting the monster loose in the wild (though I'll still use outputs for blog fodder, and the tool for in-class demonstrations).
I essentially achieved my original, modest objective a few days ago, when I worked out the basics and was able to develop a reasonable lab/demo. After that, however, I started getting other silly notions, so I've continued to improve the product(s) and explore different use cases. Prolly not gonna blog about it so much going forward, unless I think some new iteration or experience is especially neat or instructive (to me, at any rate).
One thing I've explored most recently is using such tools to analyze data. For example, log files are an important part of security monitoring and troubleshooting, and generally a pain in the ass to go through, so I decided to leverage my base code by making some changes to the system instructions[1]. Then I created a new knowledge base loaded with the ingestion logs from my philosophy knowledge base, to give me further insight into how data sources are (or are not) being ingested.
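For the curious, the tweak is roughly this shape. A minimal sketch, assuming Bedrock's Converse API; the prompt text and model ID below are placeholders, not my actual instructions or configuration.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Hypothetical log-analyst instructions; the real ones live in my (modularized) code.
LOG_ANALYST_INSTRUCTIONS = (
    "You are a log analyst. Summarize ingestion activity, flag errors, "
    "and report which data sources were (or were not) successfully ingested."
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    system=[{"text": LOG_ANALYST_INSTRUCTIONS}],
    messages=[
        {"role": "user", "content": [{"text": "Summarize today's ingestion logs."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```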
Conceptually it's pretty simple, but implementation was less trivial due not to the nature of AI, but rather to the data itself. The logs are stored in a compressed format, which is opaque to the LLM, so I first had to build a function in Lambda (our serverless compute offering) to decompress the files. That took a little while this morning, mostly due to user error (plus a couple of issues generated by the AI that this organic intelligence caught), but I've gotten used to this mode of operation, so it went faster than my earlier code deployments, and now I can get useful, easy-to-read reports[2].
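In case it's useful to anyone, here's the gist of what that function does. A minimal sketch, assuming S3 event triggers on new .gz objects; the bucket name and key layout are hypothetical, not my actual deployment.

```python
import gzip
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "ntodd-logs-decompressed"  # hypothetical destination bucket

def lambda_handler(event, context):
    """Decompress newly landed .gz log files so the knowledge base can ingest them."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        if not key.endswith(".gz"):
            continue
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        text = gzip.decompress(body)
        s3.put_object(
            Bucket=DEST_BUCKET,
            Key=key[:-3],  # drop the .gz suffix
            Body=text,
            ContentType="text/plain",
        )
    return {"status": "ok"}
```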
I also played around with some automation of backend processes, as well as manual invocation through a couple of different access methods. Not very exciting, or even necessary, so I've turned all that off, but it remains available to show people how the stuff works, and it serves as a reminder that this ain't about simply pushing a Magic AI button to reveal All Truths.
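Manual invocation is about as unexciting as it sounds; something like the following is all it takes from a script. The function name and payload are hypothetical, purely for demonstration.

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name; mine is normally wired to S3 events, so this is a demo path.
resp = lambda_client.invoke(
    FunctionName="salvatore-log-decompress",
    Payload=json.dumps({"Records": []}).encode("utf-8"),
)
print(resp["StatusCode"], resp["Payload"].read().decode("utf-8"))
```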
But enough of that tech talk. Brother Salvatore is a premier ethicist, not a technologist. I've introduced him to an array of thinkers (avoiding Mishima) so as to examine ethics, and I presented him with a poser by asking for an interpretation of this variation on IBM's old maxim: A computer can never be held accountable for decisions, therefore all computer decisions are management decisions.
An excerpt from Salvatore's response:
This reformulation of an IBM training manual maxim from 1979 inverts the original wisdom while preserving its essential truth: machines remain incapable of bearing moral responsibility. Yet where the original concluded that computers must never make management decisions, this version acknowledges that they already do—and thus transforms the question from prohibition to attribution...
The reformulated maxim serves as both warning and assignment of duty. It warns against the illusion that automation dilutes responsibility, and it assigns that responsibility squarely to human decision-makers. In an age of increasing algorithmic authority, we must remember that behind every computational output stands a chain of human choices. The computer decides nothing; management decides everything—including what the computer does.
Perhaps the deepest insight is this: accountability, like consciousness itself, remains an irreducibly human burden. We may delegate calculation, but we cannot delegate answerability. The manager who deploys an algorithm inherits responsibility for its consequences as surely as for any human subordinate's actions—indeed, more so, for the algorithm cannot learn from its mistakes or develop moral wisdom. It can only execute the values, priorities, and blind spots of its creators.
Well, he ain't wrong, so I got to thinking about that "chain of human choices" during this entire fun diversion. Brother Salvatore has several data sources: the works of Aquinas, Gandhi, and Shakespeare; analysis of AI and ethics from IBM; Philosophy Now magazine (to examine a wide breadth of philosophical subjects). I wondered if it's unethical of me to use the content produced by contemporary authors.
Salvatore concluded:
Let your work be a conversation across time—drawing on the foundational insights of historical philosophy while engaging with contemporary thinkers who apply philosophical methods to present concerns. This creates a richer, more textured understanding than either approach alone could provide.
The ethics of intellectual work demand honesty, proper attribution, and respect for the labor of others—whether they wrote yesterday or two millennia ago. If you meet these obligations, citing living philosophers is not merely permissible but philosophically vital. You are participating in the great ongoing conversation that is philosophy itself, and that conversation requires living voices to remain alive.
He doesn't quite answer the question, but that's on me for how I formulated it (and constraints imposed by system instructions I embedded in code). Regardless, my artificial brother in Christ appears to give me a pass, while completely ignoring the problems related to use of intellectual property.
I've provisionally absolved myself, too, but only within the narrow confines of a proof of concept. If I were to push this to Production - whether I try to make money from it or not - then I think I ought to remove those sources from Salvatore's knowledge base.
But even if I did, is there still an ethical taint to my reliance on the other sources? Am I not stealing the labor of the people who built the web sites I'm using? Peeling the onion further, is their presentation of older works ethical, despite the original authors' being long dead (they presumably derive material from publishers' efforts); do I share responsibility in that? And what of all the electricity and water consumption supporting this frivolity, which poses real threats to the future world my kids will have to endure after I've joined Aquinas, Gandhi, and Mishima?
Hell if I know. Guess that's what we're here to collectively figure out.
Selah.
[1] - I modularized my architecture so it's easy to tweak various components to do different things. Still not following best practices on anything (call it NTodd Practices, which are inherently flaky and anti-pattern).
[2] - I've decided that all content generated by these tools will go into a separate page that I will refer to just like any other article or info source that I use for blogging.
PS - This post was written entirely by NI (NTodd's Intelligence). Which is likely very obvious, but I felt compelled to call that out.
