Gongol.com Archives: April 2023

Brian Gongol


April 16, 2023

Computers and the Internet Artificial intelligence keeps hallucinating

In a couple of test runs, economists found ChatGPT prone to citing works that don't exist. This is both a foreseeable outcome of the system design (so far, artificial intelligence is mainly a sophisticated text-prediction tool, not a method for assembling actual wisdom) and an acute flaw in the technology if used for its most obvious purposes.

Human beings have a very reasonable interest in developing technologies that will efficiently supply answers to known questions (like a speedy research librarian). We also have a reasonable interest in developing technologies that will answer novel questions. Since before the invention of writing itself, the superpower of our species has been the ability to store knowledge outside of our own brains. The spoken word allowed our ancestors to start storing knowledge in other people's brains, and the written word let them put that knowledge in places we could protect, duplicate, and move about. There's a reason the loss of the Library of Alexandria remains one of history's great tragedies.

Computers have a striking ability to do what books cannot: They can be programmed to generate new knowledge altogether, like detecting objects in space. That generative capacity could truly be profound. But it remains something humans have to double-check. Not only is it imperative to check the work in its own right, it's also essential that humans make sure the programming stays correct. Computers only work according to their programming, and there are countless ways in which new knowledge can have consequences for old code.

Dan Brooks offers the pithy observation that ChatGPT "nails the voice of someone trying to hit word count." He's right; the imperative behind the technology is to keep generating new words, no matter their need or their validity. And that's the problem that causes the "hallucinations": Whether or not real knowledge exists, AI is set to just keep making up something to fill the space.

That is no small problem if, as is extremely likely, artificial intelligence tools are on the verge of multiplying in number and output at rates we may have little capacity to comprehend or to double-check. That doesn't mean we should unplug them -- we should be ready and eager to put useful tools to work to make human life better. But we're going to need to think of ways to firewall the generated content that hasn't been checked or validated, so that it doesn't form a feedback loop of garbage-in, garbage-out that could leave us all wondering what was ever true in the first place. A hard drive can be reformatted. The Internet cannot.
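To see why fabricated citations are a foreseeable outcome rather than a surprise, consider a toy next-word predictor -- a minimal sketch in Python, purely illustrative and nothing like the scale or architecture of ChatGPT. The toy "model" picks a statistically plausible next word from its training text, and at no point does anything check whether the output is true:

    import random

    # Toy illustration only (an assumption for this sketch, not how ChatGPT
    # is built): learn which word follows which in a tiny training text.
    TRAINING_TEXT = (
        "Smith (2019) found that markets clear. "
        "Jones (2020) found that prices adjust. "
        "Smith (2021) found that trade expands."
    )

    def build_model(text):
        """Map each word to every word that followed it in training."""
        words = text.split()
        model = {}
        for current, nxt in zip(words, words[1:]):
            model.setdefault(current, []).append(nxt)
        return model

    def generate(model, word, length=8):
        """Keep emitting a plausible next word -- true or not."""
        output = [word]
        for _ in range(length):
            followers = model.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    model = build_model(TRAINING_TEXT)
    # May print "Jones (2020) found that trade expands." -- a citation-shaped
    # sentence stitched from fragments that no one ever wrote.
    print(generate(model, "Jones"))

Scale that same keep-predicting loop up by a few hundred billion parameters and the output becomes vastly more fluent, but the loop still contains no step that asks whether a cited work actually exists.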

Agriculture Farmers get an early start on spring planting in Iowa

It's been a while since that's happened

News Courage, big and small

Chad Gibbs: "Whenever we remember the Holocaust, we should remember the small rebellions, the individual stands, and the little acts of caring."

Threats and Hazards The second-order problem with an intelligence leak

Matt Tait: "In other words, just because you can't see the significance of how something tiny in a photo can have massive repercussions doesn't mean that experts with extensive experience and that little bit of extra context can't."

Business and Finance The place of a little love in a market economy

Deirdre McCloskey: "There's a mild love that's exhibited in market relationships, even very hands-off relationships. You go to your grocery store that you habitually go to and you keep seeing the same butcher, the same clerk. What the French in the 18th century called sweet commerce makes you into little friends, whereas central planning socialism does not make you into little friends."
