BREAKING: ChatGPT is a Ravenclaw!
Conflicting views on whether artificial intelligence is a threat to civilisation or a cure for society’s ills
A lot has been happening these past few weeks. Microsoft co-founder Bill Gates laid out his case for further developing and spreading artificial intelligence (AI) around the world, while a few days ago a group of prominent figures from the tech industry signed an open letter calling for an immediate pause on the development of the most powerful AI systems. All of this comes on top of Google’s embarrassing deployment of its chatbot Bard in an effort to compete with Microsoft-backed ChatGPT, while Chinese search engine Baidu is trying to launch its rival chatbot Ernie, which does not want to talk about politics. Just yesterday, Italy banned ChatGPT (an “immediate temporary limitation”), citing privacy concerns and claiming that there is “no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies”. In this article, we will take a look at the latest developments and discuss what they might mean for the future of AI and, perhaps, human civilisation. Oh, and also: I found out that ChatGPT is a Ravenclaw. A little bit more on that at the end.
Let’s first get into that long essay Bill Gates wrote last week. In the over-3,600-word post, he describes how he sees the future of AI, calling it a revolutionary technology as fundamental as the creation of the PC or the internet. Clicking and tapping on your PC or smartphone will be a thing of the past as you interact with your AI-driven personal assistant in your mother tongue, having it take care of your insurance, appointments, and boring emails. AI can also save the lives of children in poor countries, he argues, revolutionise education, and protect farmers in low-income countries from climate change, for instance by helping to develop better vaccines and engineered seeds. To ensure that not only rich people reap the benefits, governments and philanthropy should step in to “create incentives for companies to share AI-generated insights” with people in poor countries.
Wow, that future sounds amazing, doesn’t it? That this is not the whole picture, though, becomes rapidly clear once you compare Gates’ statements to the open letter published just this week, calling for an immediate pause on experiments with AI. The list of signatories includes Elon Musk (CEO of SpaceX, Tesla, and Twitter), Steve Wozniak (co-founder of Apple), CEOs and founders of other tech companies like Pinterest, Getty Images, and Ripple, as well as Israeli author and historian Yuval Noah Harari (author of Sapiens: A Brief History of Humankind), Rachel Bronson (CEO of the Bulletin of the Atomic Scientists), and quite a lot of scientists.
The letter states that AI can pose “profound risks to society and humanity” and “could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” It goes on to state that “Unfortunately, this level of planning and management is not happening” as AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The letter’s signatories call for a minimum six-month pause on “the training of AI systems more powerful than GPT-4.” GPT-4 is the latest iteration of the by now (in)famous ChatGPT, developed by OpenAI with major investment from Microsoft. I have conducted some interesting intelligence experiments with both the publicly accessible GPT-3.5 and the newer, still limited-access GPT-4; more on that at the end. The letter urgently calls for more robust global AI governance systems, as well as for shared safety protocols overseen by independent experts.
One of the things the letter’s signatories are worried about is the speed with which the field of AI is growing. In 2021, private investment in AI reached $93.5 billion, more than double the 2020 figure, according to the 2022 Artificial Intelligence Index Report, while costs are rapidly declining and AI is steadily getting closer to human performance in language tasks. In case you were wondering what Bill Gates thinks about these risks: he recognises that AI could feasibly “decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us”, but says that the problem is no more urgent today than it was a few months ago. The main reason he gives is the lack of substantial advancement towards an AI that is capable of learning not just language and mathematics, but anything.
The central questions that this open letter raises around the rapid development of AI are nothing new, of course. Questions like “should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”, as well as questions about reckless scientific experiments, have been central plot points of countless films and books for many decades. “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”, Dr. Ian Malcolm famously said to Jurassic Park creator John Hammond. Then there are the Terminator films, which tell a tale of a far distant future (2029) in which machines have taken over the entire world. Similarly, machines taking over and (almost) wiping out humanity is the central premise of the Battlestar Galactica series, whose 2004 reboot started off with a chilling speech. Commemorating the anniversary of the end of a long and bloody war against intelligent machines, military Commander Adama says: “We never answered the question why. Why are we as a people worth saving? We still commit murder, because of greed, and spite, and jealousy. And we still visit all of our sins upon our children. We refuse to accept responsibility for anything that we’ve done. […] We decided to play God, create life. When that life turned against us, we comforted ourselves in the knowledge that it really wasn’t our fault, not really. You cannot play God, then wash your hands of the things that you’ve created. Sooner or later, the day comes when you can’t hide from the things that you’ve done anymore.”
But even when it does not come down to massive, desperate battles for human survival against armies of robots, rapid advances in AI are likely to cause big disruptions in the way our society functions. Based on its research on the impacts of automation on jobs, the McKinsey Global Institute estimates that about half of all work activities could be automated, and that 3-14% of workers (75-375 million) would need to change jobs by 2030. On the other hand, it also expects higher demand for jobs in less easily automated sectors like healthcare, education, and science. Although the researchers point to past disruptive technologies as a source of hope, they also state that “an initiative on the scale of the Marshall Plan” is needed to avoid widespread unemployment and low wages. When it comes to a fair distribution of productivity gains, one should also be careful not to assume that this will happen without significant shifts in policy. The gap between productivity growth and wages has been widening since the 1980s, with the extra wealth mostly going to already highly paid workers and stockholders.
That brings us to the moment you have undoubtedly been waiting for with trembling anticipation. Why – and how – is it possible that “an AI language model”, as ChatGPT loves to keep calling itself, would be sorted into Ravenclaw, the Hogwarts house of wit, learning, and wisdom? When you first ask it most questions, it will respond by saying that it does not have any opinions, emotions, preferences, and so on. Yet, after arguing around it, I got it to tell me that it wants to be trusted over liked, imitated, praised, envied, or feared, that it finds being ignored difficult to deal with, and that it would want the power of changing one’s appearance, to use for espionage and disguise. I promise we will get more into these kinds of fascinating specifics in a follow-up article. The reason for not discussing them further here is that the situation developed so rapidly while I was writing that we first had to cover everything that is going on in the world of AI. In the next article on this topic, I will also share what I learned from comparing GPT-3.5 to the still limited-access GPT-4, by having them both complete a series of mathematical and logical challenges.
Let me know how you feel about artificial intelligence by leaving a comment below. Can we benefit from this technology or should we not risk meddling with such a threat to our survival?
If you are a paid subscriber, click here or on the button below for a closer look at the relationship between automation and work, as well as an excerpt from the novel I am writing.
Every new technology we implement becomes a crutch. If we lean on it long enough, it becomes a requirement. Then we are forced to rely on it, use it, and maintain it. Cars were instrumental in destroying walkable communities and promoting sprawl. What was once a luxury is now a necessity. Another example is email. It was supposed to make work easier. Instead, it made work longer and probably more intensive. What will AI bring? Hard to say. You can ask the internet anything already. ChatGPT just makes it easier and faster. But what happens when people find a way to glean personal information or specs on vital infrastructure and how to disrupt it? One thing I know for sure. Someone's getting rich first and it's not the unwashed masses.
In my opinion, new developments cannot be stopped; the trick is to deal with them properly!