First, I saw this:
Unregulated generative AI is messing up the trustworthiness of scientific research, but what could go wrong pic.twitter.com/zVFDGxvdiw
— Luiza Jarovsky, PhD (@LuizaJarovsky) December 19, 2025
Then this one:
This paper from Harvard and MIT quietly answers the most important AI question nobody benchmarks properly:
Can LLMs actually discover science, or are they just good at talking about it?
The paper is called “Evaluating Large Language Models in Scientific Discovery”, and instead… pic.twitter.com/Yqo8Uqwrg0
— Alex Prompter (@alex_prompter) December 18, 2025
Key takeaway from the latter: [LLMs] optimize for plausibility, not truth.
Key takeaway from the two posts together: Ethics In, Ethics Out holds true at every layer of an AI architecture, from the training data to the prompts and parameters to how we use the outputs, all of it beginning, of course, with the decision to use AI in the first place.
Selah.