#005 I'm Building an AI Company. Won't Someone Please Regulate Me?
A study on why tech investors and CEOs are simultaneously going all-in on AI platforms (AI is the future!) and also seeking government regulation (AI is going to kill us all!)
A little bit of background before I get started. I am not building a giant AI bot. There are a lot of people out there with extreme opinions on AI, both for and against. I am not one of those people. For those of you who are listening to the audio version, you might notice little breaks in the audio as I switch from the article down to the footnotes and back again. These small cuts help shave precious seconds off the audio recording. Just one more small benefit of being a subscriber!
It’s been three months since the open letter1 published on the Future of Life Institute website, where several prominent members of the tech community called for a 6-month hiatus on training AI systems more powerful than GPT-4. It was a strange letter, in the sense that it was published by some of the very people who have been at the forefront of developing the technology. It was as if the CEO of an oil company went to the government and said, “What our industry is doing is a danger to society. I am concerned that we are going to destroy life as it currently exists. Please regulate us.” It’s an odd position for someone running an oil company to take.
I have the feeling I’m going to stretch this analogy to the breaking point (the oil analogy, I mean). I work best when I have something concrete to work with though. And the term “AI” is anything but concrete.
So, let’s start by creating a common understanding of what we mean when we say that “the oil industry is a danger to society”. When I read that, my mind goes to the idea that, when I burn gas in my car, I contribute to global warming. When I purchase a petroleum product like plastic, I am contributing to waste that’s going to continue poisoning people and animals long after I’m gone.
But the way you hear it could be wildly different. How many oil products are there2? The list includes gases (propane, methane), liquid fuels (gasoline, kerosene), lubricants (motor oils, greases), waxes (paraffin, candle wax, slack wax), sulfur, tar, asphalt, and the petrochemicals behind everyday products like dyes and detergents. Someone with just a general sense that “oil is bad for the environment” (I’m in this group) hears a vague, general warning, whereas someone with a deep understanding of the industry hears something completely different.
So far, so good. I haven’t broken my analogy yet. On to the next question: what do OpenAI3 executives mean when they give this warning?
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.4
That’s some strong language!
I don’t know about you, but my mind tends to go to one of two ideas: (1) some super-smart AI becoming self-aware and deciding to bend the future of the planet to its own ends, or (2) humans deciding to automate killing and botching the installation of the Off switch. Either option, taken to its extreme, could end up destroying civilization. These are some pretty serious eventualities! I understand why there might be cause for concern.
In our oil analogy, we talked about our first impressions, and then we dug deeper. It’s helpful to do the same with our first impressions about AI. While these might be the first thoughts that come to my mind when I think about AI, they aren’t necessarily what people who work in the industry think about. Take Geoffrey Hinton5, who didn’t actually sign the above letter and who is widely regarded as one of the founders of modern Artificial Intelligence: his concern doesn’t seem to be about the current generation of AI. He has said he is concerned about the arms race being kicked off around the technology itself.
So, let’s talk about what most of us hear when someone uses the term “Artificial Intelligence”. I don’t know about you, but when I hear that term (especially in the context of “AI is going to take over the world”), I imagine some kind of human-like intelligence, but in a machine. Something that can perceive and interpret a sentence or a situation, “think” about it somehow, and then choose how to respond.
But that’s just one possible interpretation. If we were chatting with each other in the 1700s, we might look at modern data structures and algorithms as a type of artificial intelligence. Sorting algorithms, which are essentially sets of rules for putting objects in order, have been around at least as long as computers.
I remember being surprised when one textbook6 suggested that algorithms are a form of technology. After all, how could a list of instructions somehow be a technology? After thinking about it though, it totally makes sense. Why would I want to sit down and sort a bunch of numbers (or objects that can somehow be sorted), when I can have them automatically sorted using some recorded process? Especially if I can have a computer run through that process for me and simply return the final, sorted list? (After all, aren’t computer programs themselves just a list of instructions?) From the perspective of a person sorting numbers by hand in the 1700s7, this certainly sounds like Artificial Intelligence: I give the box a list of numbers, and it returns a sorted list. That requires a certain amount of intelligence, doesn’t it?
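To make that “box” concrete, here is a minimal sketch (in Python, a language our 1700s number-sorter obviously never had) of the kind of recorded process I mean. The specific procedure, insertion sort, is just one illustrative choice; the point is that a fixed list of instructions, followed mechanically, produces the result that looks intelligent from the outside.

```python
# A "recorded process" for sorting: insertion sort, written out as the kind of
# step-by-step instructions a patient clerk could follow by hand.
# (Illustrative only; Python's built-in sorted() does the same job.)
def sort_numbers(numbers):
    sorted_list = []
    for n in numbers:
        # Walk along the already-sorted list until we find where n belongs.
        position = 0
        while position < len(sorted_list) and sorted_list[position] < n:
            position += 1
        sorted_list.insert(position, n)
    return sorted_list

print(sort_numbers([42, 7, 19, 3]))  # [3, 7, 19, 42]
```

The person feeding numbers into the box doesn’t need to know any of this; from the outside, sorted answers simply come back.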
Well, the thing is, once we become used to having a technology around, we start to take it for granted, and then realize that it isn’t really “intelligence” in the sense we use to describe human intelligence. (Is this sorting algorithm really intelligent? Or is it just a bunch of little 0s and 1s following the logic that an actual, intelligent person put there?) So we stop calling it Artificial Intelligence. Remember when newspapers were reporting that Deep Blue had finally beaten the reigning world chess champion? That was considered a remarkable breakthrough in Artificial Intelligence at the time. Today, it’s taken for granted8, and no longer considered intelligence at all.
The technical term for what you and I think of when we hear Artificial Intelligence (as in, a machine that can think) is Artificial General Intelligence.
The vast bulk of the AI field today is concerned with what might be called “narrow AI” – creating programs that demonstrate intelligence in one or another specialized area, such as chess-playing, medical diagnosis, automobile-driving, algebraic calculation or mathematical theorem-proving. Some of these narrow AI programs are extremely successful at what they do. The AI projects discussed in this book, however, are quite different: they are explicitly aimed at artificial general intelligence, at the construction of a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions.9
In that chapter, the authors (Pennachin and Goertzel) distinguish between what people in the AI field are currently working on (narrow applications of Artificial Intelligence, limited to a small domain with a limited set of rules), and the general public’s conception (an Artificial General Intelligence that can understand how to interact with and solve problems in a large variety of contexts).
All this to say, we are not there yet. The applications we are working on are not Artificial General Intelligence. Even our most advanced efforts right now are in the field of narrow AI. An even more precise way of phrasing it is to say that we are actually making advances in the field of Machine Learning10. Going back to our oil analogy (if that thing is still holding together), this is the difference between being “against all oil products” and having a more nuanced conversation along the lines of “I’m concerned that the amount of fuel we are burning as a civilization is unsustainable, and we also need to address the quantity of plastics in our oceans”.
We now have enough of a shared understanding to have a discussion about the open letter. Specifically, we are not talking about an Artificial General Intelligence right now. We are talking about the potential for the current investment in Machine Learning to get out of control and create some apocalypse down the road. This is the perfect time to chat about the March 2023 statement from the DAIR institute.
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence." Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.11
The whole statement is worth reading12. But really, this is the meat of what I want to talk about today: what are the actual harms from AI systems today?
This new generation of auto-complete chatbots, like OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing chatbot, are technically known as Large Language Models. These are statistical models that predict which words come next in a sentence, based on the enormous amount of text they ingested during training, combined with further tuning after the initial model was built. Basically, to train these models, researchers collected a large (large!) set of initial materials and fit a statistical model that predicts which word comes next, given the input so far. These models, in a very real sense, are an advanced auto-complete.
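To make “advanced auto-complete” concrete, here is a deliberately tiny sketch of the underlying idea: a bigram model that just counts which word tends to follow which, then predicts the most frequent follower. Real Large Language Models use neural networks trained on vastly more text, so treat this as an illustration of the job description, not of the actual machinery.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most common follower. This is "auto-complete" in miniature.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    # Return the word most often seen after `word`, or None if we never saw it.
    if not followers[word]:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat'
```

Feed it “the” and it answers “cat”, because that is the word it saw most often after “the” in its training text. Scale the statistics up by many orders of magnitude and you get the flavor, if not the mechanics, of what ChatGPT is doing.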
What makes them different is that they have been anthropomorphized, which tends to get humans to trust them more. They also seem more “cooperative”, in the sense that I can ask a question and get a response. Before we continue, let’s acknowledge that this is a large step forward from the simple auto-complete in e-mail or the auto-correct we have grown used to on mobile phones. At the same time, this model has some of the same problems: it can influence what we are about to say.
In other words, Machine Learning algorithms can influence the way we think, talk and write. A simple example is that, even though I have never once typed the word ducking on my keyboard, my phone is still convinced that ducking is what I meant to write. These changes, while seemingly small, can still steer us in directions that we don’t intend to go.
The real danger in the Open Letter is that it distracts from the very real, present problems created by AI companies13. They scrape websites that contain a ton of free content (without any regard for whether that content is under copyright), and use that content to train models that produce synthetic media. They then sell these models to end-users without citing any sources or references. And they use the interactions with those users to further refine the models. Some end-users take ChatGPT’s output and simply throw it up on their own sites, increasing their ability to churn out a bunch of garbage very quickly. And all of this results in a further concentration of power in the hands of a few companies.
If AI (or, really, Machine Learning) is causing so much actual harm in the present, then why are tech CEOs and investors warning us of all these future harms? Well, as the authors of “Stochastic Parrots” wrote in their statement, worrying about a catastrophic future helps distract from the very real and present harms that these companies are causing. These are real harms, by real companies, run by real people. And it’s disingenuous of them to try to distract from what is happening now by getting us to worry about an imagined future.
Our role right now is to step back, calm down, and not react angrily. Machine Learning algorithms have a tremendous potential to benefit society. I remember when my class was first allowed to use graphing calculators back in high school. Some of us used them to skip the learning process altogether, going straight from an equation to a graph. And some of us used the graphing calculator as a learning tool: we did our best to sketch what we thought the graph would look like, then used the calculator to get immediate feedback on what was working and what wasn’t. These Large Language Models can be used in both ways. Some people will use them to avoid learning. Others will use them to learn more deeply.
This Open Letter is a fantastic opportunity for us to reflect and learn. And we have a choice to make. Do we want to take the bait, and react with fear and anger? Do we want to swallow the idea that the future will either be a catastrophic failure or a runaway success, completely due to advances in Machine Learning technology?
Or do we ask ourselves the deeper question: how do we, as a society, impose meaningful regulation that aligns the values and desires of these companies with the good of society?
Here is the link to the open letter, last accessed on June 14, 2023.
Here is a list of 365 products made from oil, last accessed on June 14, 2023. It includes Aspirin, birth control pills, insulin, and vitamins. You heard that right: vitamins. The fact that I didn’t know that vitamins are made with oil byproducts makes me question how much I know about anything. You should probably take this whole article with a grain of salt (I believe salt is not an oil byproduct, but don’t quote me on that).
If you are responsible for hiring at OpenAI, I am 100% open to joining your OpenAI Residency program. Getting paid to learn? Sign me up, and please don’t hold this article against me!
Statement on AI Risk, last accessed on June 14, 2023. Includes a list of signatories from reputable universities, as well as executives from top tech firms.
Here’s his Wikipedia entry, last accessed on June 14, 2023. He holds a PhD in Artificial Intelligence, has spent his career working on Deep Learning, and is deeply respected in industry and academia.
That textbook was “Introduction to Algorithms”, by Cormen, Leiserson, Rivest and Stein. I’m linking to the Wikipedia page, because it’s a good summary (and you don’t have to buy the book to get an understanding of what’s going on). If you want to learn more about algorithms by playing with them, I’d suggest playing Human Resource Machine (that’s another Wikipedia entry), on your platform of choice. It’s super fun, and definitely not frustrating! (Disclaimer: this game can be frustrating.)
Like a Neanderthal.
For more reading, see the article, “Thinking Machines: The Search for Artificial Intelligence. Does history explain why today’s smart machines can seem so dumb?”
Quoted from Contemporary Approaches to Artificial General Intelligence, by Pennachin and Goertzel.
Note that at its 2023 Worldwide Developers Conference, Apple avoided jumping on the AI hype train, instead focusing on how it is using Machine Learning to create better user experiences.
This is from the Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, last accessed June 19, 2023.
One day, I hope to be able to speak as succinctly and eloquently as they have here.
And we haven’t even touched on how regulating an industry can increase barriers to entry for newcomers and entrench existing companies. OpenAI currently doesn’t have a “moat” around its business. Yes, it spent the GDP of a small country training its model. But open-source alternatives still exist.