Gongol.com Archives: April 2025

Brian Gongol


April 12, 2025

Computers and the Internet: Wrong number

Go back to just before Y2K, and you encounter a time when it was possible to register just about anything as a domain name -- the gold rush was just beginning. But it cost $119 to register a domain name with Network Solutions, with a hard limit of 26 characters (including the top-level domain). So the temptation to register and squat on desirable names was high, but it was tempered a bit by the up-front cost. ■ As enthusiasm for the Internet grew, people who might previously have squatted on a desirable name like "pets.com" to sell it later found that it could be lucrative to squat on typo names, too -- like "ptes.com". Google (itself a website with a funny name) was only starting to emerge, so a lot could be gained from catching human beings making wrong guesses. ■ Fast-forward to today, and people are surrendering their thinking processes to artificial intelligence everywhere you look. Some of it just means jumping on the latest fads, like generating your action-figure avatar. But many others are using AI as a surrogate for more serious processes, like writing computer code. ■ Coding can often be tedious, so resources have emerged to make developers' lives easier -- resources like package repositories, where libraries of existing code can be stored, shared, modified, and retrieved. This is a great system if everyone involved can be trusted. But developers are now using artificial intelligence tools to help generate new code, and artificial intelligence has a serious problem with hallucinations -- nonexistent things the AI "imagines" as a byproduct of its predictive nature. ■ There's a real hazard in this development, because code-generating AI hallucinates references to packages that don't exist. That's bad enough on its own, because it produces programs that don't work. But just as typosquatters came for domain names with bad intentions in mind, crooked parties are now registering the package names AI has been hallucinating and stocking them with malicious software. So when the AI-written code goes looking for a library that never legitimately existed, it finds malware planted under that name instead. ■ The first thing any security-minded person should do when a technology is deployed in a new field is to imagine the ways in which it could be abused. That won't necessarily stop the abuse from happening, but it might at least raise red flags around the circumstances where we need to apply more careful, cautious thought. We've known for more than a quarter-century that people looking for the right things in the wrong places could end up in dangerous territory. Now we need to realize that AI "helpers" may be just as prone to looking in the wrong spots as the humans they're supposedly assisting.
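For illustration (this sketch is not part of the original column): one way a cautious developer can sanity-check an AI-suggested dependency is to ask the package registry about it before installing anything. The short Python script below queries PyPI's public JSON API (a real endpoint, https://pypi.org/pypi/<name>/json) and reports whether the name exists and when its first file was uploaded. It is just one heuristic, not a complete defense, since squatters deliberately register the very names AI tends to hallucinate.

    # Minimal sketch: inspect an AI-suggested package name on PyPI
    # before running "pip install". A 404 suggests a hallucinated name;
    # a very recent first upload date is a classic squatting warning sign.
    import json
    import sys
    import urllib.error
    import urllib.request

    def inspect_package(name: str) -> None:
        url = f"https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API
        try:
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                print(f"{name}: not on PyPI -- possibly a hallucinated package name")
                return
            raise

        # Earliest upload time across every released file, if any.
        uploads = [f["upload_time"]
                   for files in data["releases"].values()
                   for f in files]
        first = min(uploads) if uploads else "no files ever uploaded"
        print(f"{name}: exists on PyPI; first upload {first}")
        print("  (existence is no guarantee of safety -- squatters register real packages)")

    if __name__ == "__main__":
        # Usage: python check_pkg.py requests some-ai-suggested-name
        for pkg in sys.argv[1:]:
            inspect_package(pkg)

A name that returns a 404 should never be installed blindly, and a name that exists but was first uploaded last week deserves the same suspicion a typo domain did in 1999.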

