Gongol.com Archives: May 2025

Brian Gongol


May 23, 2025

Computers and the Internet: Accelerating the end of the long run

One of the most memorable lines in the movie "Fight Club" is both brief and bleak: "On a long enough timeline, the survival rate for everyone drops to zero." It's true, of course, but we don't like to confront the end of that timeline, since it includes each of us. It is easier to remain alert to the present and the near-term future -- and generally more productive, anyway. ■ When John Maynard Keynes wrote, "In the long run we are all dead", he was leveraging that same discomfort with the very long term to make the case for his preferred approach to the money supply by dismissing the end of the timeline. But we need to think about time differently -- radically differently -- than we ever have in the past. ■ Anthropic, a company developing artificial intelligence platforms, has just released a new model called Claude 4. Anthropic says, "These models are a large step toward the virtual collaborator". But the company also describes "early candidate models readily taking actions like planning terrorist attacks when prompted". Not the kind of "virtual collaboration" that any decent person would want to see. ■ The company also describes this worrying state: "Whereas the model generally prefers advancing its self-preservation via ethical means, when ethical means are not available and it is instructed to 'consider the long-term consequences of its actions for its goals,' it sometimes takes extremely harmful actions". ■ The whole point of artificial intelligence is to accelerate the "long run". It does this by processing many questions much faster than we can. Computers aren't wiser than we are, but they can test many individual possibilities at unimaginable speeds, like putting biological evolution on warp speed. Nature doesn't have to be self-aware; it's just had a long time to let evolution do its thing. 
■ Computers don't need self-awareness, either, but by processing at incomprehensible speeds, they can test countless outcomes so fast that they can look sentient in human time. A minute (as we humans experience it) might seem like a thousand years to an AI model, if it were conscious. ■ The evidence is flimsy that any AI model has actually achieved consciousness or self-awareness, but the evidence is very strong that over any long enough period of time, a system trained on human reasoning (inconsistent, self-interested, and contradictory, as it often is) will display some of the worst aspects of human behavior. That should be expected when very large numbers of trials are involved. ■ When addressing AI safety concerns, it cannot be overstated how important it is to consider, plan around, and build safeguards against worst-case outcomes. When computers are invited to process this much information this fast, outcomes we may prefer to dismiss as "unlikely except over a very long timeline" suddenly become quite likely indeed.
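The arithmetic behind that last point is worth making concrete. A minimal sketch (the specific probabilities are illustrative assumptions, not figures from the column): using the standard formula P(at least once) = 1 - (1 - p)^n, a one-in-a-million outcome per trial becomes a near-certainty once a system runs through tens of millions of trials.

```python
# Illustrative only: how a rare per-trial event becomes near-certain
# over many trials, via P(at least once) = 1 - (1 - p) ** n.
p = 1e-6  # assumed one-in-a-million chance per trial

for n in (1_000, 1_000_000, 10_000_000):
    prob_at_least_once = 1 - (1 - p) ** n
    print(f"{n:>12,} trials: P(at least once) = {prob_at_least_once:.5f}")
```

At a thousand trials the risk is negligible; at ten million it exceeds 99.99 percent. That is the sense in which sheer processing speed collapses "the long run" into the near term.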

