Gongol.com Archives: April 2024

Brian Gongol


April 12, 2024

Threats and Hazards: It's only protest if it's peaceful

After threatening members of the Bakersfield City Council with murder in their own homes, a woman was arrested and tossed into jail. She has entered a plea of "not guilty" in response to 18 felony charges. ■ In polite news coverage, she is being called a "protester". That is a disservice to the language. Protest has a long and honorable history; threats of personal violence do not. ■ There is a strain of behavior in public life that chooses to catastrophize issues at every turn. A little piece of it can be found in every use of warnings like, "This is the most important election of our lifetime." And it routinely escalates from there. ■ The problem with this pattern is twofold: First, the chronic catastrophization of all things political turns some people into antisocial lunatics who think their ends justify any means. (If it's always the "most important", then compromise, persuasion, and incrementalism have no real hope.) ■ Second, it blurs the line between words and actions. We have to be able to exchange words freely with people so that we can contain even our strongest feelings within civilized boundaries. ■ People who threaten to bring physical harm to city councilmembers, governors, and even Vice Presidents actively surrender their right to remain in society until they can cool down and correct their behavior. Threats of violence aren't protest; they are terrorism.

Computers and the Internet: Aligned with our machines

In 1781, Alexander Hamilton gave us a beautiful line that seems to have perfectly anticipated our technology-saturated world: "Nothing is more common than for men to pass from the abuse of a good thing, to the disuse of it." We find it easy to believe the worst about new technologies because it is easy to imagine how we might abuse them in our own self-interest. But those cases are often oversold. ■ Nobody has much difficulty imagining how generative artificial intelligence (AI) could be used to make it easier to cheat in high school and college classrooms. A technology built to imitate human language has pretty obvious utility for tasks like writing essays. ■ The good news is that artificial intelligence doesn't appear to be increasing rates of cheating. (The bad news is that cheating was already widely self-reported long before AI came into the picture.) ■ But dangers will undoubtedly lurk in the shadows of AI use ahead, which is why the issue of AI alignment is so important. Keeping technology in the service of human interests requires developing a lot of rules and definitions around hard questions like the classic, "What does it mean to be human?" ■ It would be a cruel irony if, while we are in the phase of "abuse of a good thing", we were to err on the side of ignorance in our approach to AI alignment, simply because too many people proved too impatient for their own good and failed to study enough of the humanities to become good technologists down the road.
