Gongol.com Archives: October 2023
More than 55,000 people showed up to watch an exhibition basketball game played between the University of Iowa and DePaul -- noteworthy for two reasons: It was played in Kinnick Stadium (the University of Iowa's outdoor football stadium, with a capacity of 69,250), and it shattered the attendance record for a college women's basketball game. ■ Events like the "Crossover at Kinnick" are great for elevating the social cachet of women's sports. But they're also reminders of just how large a role induced demand plays in life. ■ If someone had looked only at the historical record, they might have had the impression that there was no demand at all for an outdoor women's basketball game in Iowa City. But in the same spirit as the Field of Dreams game and the 26-lane-wide freeway in Houston: "If you build it, they will come." ■ These things don't necessarily happen without other factors, of course. Marketing and promotion play a role. Community buy-in and shifting preferences are involved, too. They do, however, point to the importance of avoiding the lure of inertia. ■ It's easy to believe that the future will look like a straight-line projection from the past through the present. It's also a mistake. Things change and big ideas come to fruition because individuals commit to them and invest in creating visions that attract others. ■ Those visions don't have to be manipulative or self-serving. Nor should they be confined to entertainment spectacles. The world needs bold and seductive projects to serve public interests, too -- though not necessarily delivered by the public sector -- on matters ranging from housing affordability to innovations in education to entitlement spending to biotechnology and well beyond. ■ The belief that great motivating events are possible (and important) ought to take some cues from the progress made in the world of sports.
The public often doesn't know how much it wants something until someone shows the initiative to paint a picture of a new reality.
Facebook is encouraging users to "chat" with an artificial intelligence model in the character of Jane Austen. It's part of a fairly transparent effort by the site to get users to spend as much time as possible engaging with the platform, which is important to a company that derives about $50 in revenue per North American user per quarter. ■ The first obvious question is: Why Austen in particular? But close behind it is: Why does this AI model have a verification symbol that is the same as the one Facebook applies to real, living celebrities? It could easily be designated differently, but it is not. ■ What is the purpose of a verification mark for an AI model representing a real historical figure, if not to create at least some synthetic appearance of (unearned) authority? That's a bold and dangerous move for Facebook. Today, perhaps it's merely Jane Austen. But what's to stop them from doing the same thing tomorrow with the synthesized words and likeness of George Washington? Or Aristotle? Or Jesus? ■ AI models based upon real people have been a fairly evident "next step" for at least a decade now. The entire history of biography -- and family lore -- is about reaching into the past to seek answers for the present. It's one of the most obvious use cases for artificial intelligence. Like search engines, these models retrieve and reconstruct material from databases, so they really ought to be called "personality engines". And they will most likely prove irresistible, sticking around in one form or another indefinitely. ■ What people don't fully appreciate yet is just how little source content it will really take to form a personality engine for just about anyone. With enough willingness to let computers fill in the blanks and make forecasts based upon incomplete information, one could probably synthesize what looks like a defensible worldview from about 200 to 300 pages of written text. ■ That just isn't much source text to ask for!
But it's treacherous territory, particularly if the model-builders aren't extremely careful about what they use for input material, how they label it, and what they do to make sure that newly-synthesized material generated in the "voice" of a particular individual doesn't become the source material for another personality engine that doesn't know the difference between the original material and what's post-canonical. ■ And it truly compels us to consider the ramifications of co-opting someone's "voice" without their consent. Jane Austen never said Facebook could use her as a chatbot -- she died in 1817. Modern audiences may be eager to get her advice today, but whatever we get shouldn't be considered "official" in the sense that most audiences would apply to material published by a "verified" account. ■ For now, it may seem harmless to use Jane Austen like this. And it will probably seem mostly harmless to take Grandma's private diaries after she passes away and submit them as source material for a Grandma-AI (this is absolutely certain to be someone's business model sooner rather than later). But who controls whether Grandma-AI is released for public distribution, and who rakes in the earnings from her words and likeness? Families already go to legal battle with one another over real property, nest eggs, and secret recipes. Who controls Grandma's synthesized words and likeness when they could be worth money in the commercial market? ■ We've entered extraordinary territory -- completely uncharted -- and it's not clear that significant participants have even begun to duly consider the consequences. A great deal of good could come from plumbing the sources of the past for answers in the present, but we need to put as much energy into drawing the boundaries as we invest in charging toward the frontiers.