Gongol.com Archives: August 2025

Brian Gongol


August 16, 2025

Computers and the Internet: Left to their own devices, AI bots form terrible social networks

A large computing experiment at the University of Amsterdam attempted to test a fascinating premise: How would a big population of artificial-intelligence bots respond to being unleashed on a social network -- not to create spam, as so many bots do, but left simply to interact with one another? The researchers set up a social network, walled off exclusively for the bots, and let them loose. ■ According to their pre-publication draft, the researchers observed the bots "spontaneously form homogeneous communities, with follower ties heavily skewed toward co-partisanship" and end up with "a highly unequal distribution of visibility and influence" (that is, they develop high-status "influencers"). ■ It sounds familiar in all of the worst ways. Those are the same consequences widely seen within online social networks populated by humans. Spontaneous community creation is just fine -- maybe even a sign of healthy interaction -- but the skew towards partisanship isn't. And surely there's something upside-down about things that aren't even self-aware still seeking social status. ■ There's something else, though, that should stand out: Humans can recognize that these outcomes are suboptimal. Not only that, we can choose to opt out of unhealthy networks. And we can choose to create humane rules for old or new networks to make them more pro-social. ■ A network could, for instance, require users to post one compliment a day. Or to submit to a mutual rating system for positivity or sociability. Or to post updates subject to strict rate-limiting in times of flame wars. (That last one isn't anything new -- message board and listserv administrators have been using the technique for decades.) ■ What's interesting is that those solutions are evident to humans (even if our most prominent social networks utterly fail to put them into use), but they seem not to have occurred to these chatbots.
We can be faulted for making poor choices about trade-offs...but why should we trust emerging technologies that don't even self-impose thoroughly rational rules for self-betterment?
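The rate-limiting technique mentioned above -- the one message board and listserv administrators have used for decades -- is simple enough to sketch. Here is a minimal token-bucket limiter in Python; every name and parameter is illustrative, not anything drawn from the Amsterdam study:

```python
import time


class TokenBucket:
    """Illustrative token-bucket rate limiter: a user may post only as
    fast as tokens refill, up to a small burst capacity."""

    def __init__(self, capacity, refill_per_second, now=None):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        # Allow the clock to be injected for testing; default to a
        # monotonic clock so wall-clock adjustments can't break it.
        self.last = time.monotonic() if now is None else now

    def allow_post(self, now=None):
        """Return True if a post is permitted right now, else False."""
        if now is None:
            now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_second)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# A moderator could swap limits when a thread heats up (hypothetical values):
calm = TokenBucket(capacity=5, refill_per_second=1.0)     # bursts of 5 posts
flame_war = TokenBucket(capacity=1, refill_per_second=0.01)  # ~1 post / 100 s
```

The point of the token bucket, as opposed to a flat posts-per-minute counter, is that it tolerates ordinary conversational bursts while still capping sustained output -- tightening `refill_per_second` during a flame war slows everyone down without silencing anyone.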
