

The Calculation of Desire

It’s very difficult these days to escape the flood of news stories about artificial intelligence, many of them created by the latest version of artificial intelligence. “AI” is an initialism now recognized by far too many people. When I say the “latest version” of AI, I mean the latest wave: artificial intelligence has gone by that name since the mid-1950s, and there have been several waves of faddish fascination with whatever happened to carry the label at the time. The term “artificial intelligence” has been applied for so long to so many different technologies and approaches that, as the writer Ted Chiang noted, it’s really just “a bad choice of words in 1956.”

Each iteration of AI has had a few successes, from Joseph Weizenbaum’s mid-1960s Eliza program to the MYCIN expert system of the mid-1970s, and on to today’s ChatGPT. But each approach has had even more failures, and each wave of failures led to a relative cessation of effort in that particular approach and, eventually, to the emergence of some new technique with the AI label slapped on.

One of the big successes of the current approach, which all the cool kids know as “large language models,” is that you can ask the AI questions in plain English and get an answer. One of the big failures is that although most of the answers tend to be reasonable, the AIs also “hallucinate.” That’s a semi-technical term for “the answer the AI came up with is complete nonsense.” Large language models, at least as I understand them, create their answers by calculating the statistical odds of “the word that comes next.” It’s an attempt by artificial intelligence to produce something very much like what people do.
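To make that “word that comes next” idea a little more concrete, here’s a toy sketch in Python. It’s my own illustration, not anything a real large language model actually does internally: real models use neural networks over subword tokens and consider far more than one preceding word. But the basic framing, calculating the odds of the next word, is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": bigram counts over a tiny made-up corpus.
# Real LLMs learn these odds with neural networks over subword tokens;
# this only illustrates the "odds of the word that comes next" idea.
corpus = (
    "the dog chased the cat and the cat chased the mouse "
    "and the mouse ran away"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word_odds(word):
    """Return each candidate next word with its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_odds("the"))
# -> {'dog': 0.2, 'cat': 0.4, 'mouse': 0.4}

# "Generate" text by always taking the most probable next word.
word, sentence = "the", ["the"]
for _ in range(5):
    word = max(follows[word], key=follows[word].get)
    sentence.append(word)
print(" ".join(sentence))  # -> "the cat and the cat and"
```

Notice that always taking the single most probable word produces a repetitive loop. That’s one reason real systems roll the dice and sample from the odds instead, which is also part of why they can wander off into nonsense.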

This is not the first time. Another writer, Charlie Stross, has pointed out that corporations are a kind of “slow AI.” And that slow version of AI has been trying to imitate people for decades. Just look around. Anything you buy from McDonald’s (or its competitors) seems very much like an AI’s attempt to produce food the way a human would. Large chain stores seem very much like an AI’s attempt to create a market the way humans do. Shopping malls are another example. Advertising has been around for many decades, but if you compare recent ads to the ads of a few decades back, things have clearly changed: almost as if modern ads are an AI’s attempt to guess how to motivate people to buy what the AI guesses we like and are interested in.

I could go on. Company names, toys (especially the product lines from big companies), and movies from the bigger, more corporate studios all seem like representations of what some other kind of mind thinks we like. Or maybe they’re just calculating the statistical odds of what the next step could be. It could be, I suppose, that people’s minds work that way too, including mine. But even if that’s the case, it’s not the whole story. There’s something else going on, some additional depth or layer, because it sure feels different. I’m not sure I can put my finger on precisely what that difference is, though. Maybe I should ask ChatGPT.


