Purposes of AI - Beneficial to society and a possible tool for social control
The second part of an interview with a student of AI and Voice Technology on how we should think about OpenAI's ChatGPT and other artificial intelligence systems.
It is time to dive a little deeper into AI’s wider implications for society. Funnily enough, I never planned to write more than one post on AI, but it somehow just keeps happening. As if forces beyond my control are steering me in this direction… As if some kind of automatic model is controlling my actions for purposes of its own… Are my thoughts even my own, or just the end result of the information that is being fed to me? How much can large tech corporations and governments influence the way I think? For quite some time, I have been trying to find answers to these kinds of questions, which is why I wanted to see what someone with more intimate knowledge of automated systems like AI had to say on the topic.
In my previous post, How AI works - Mysterious and dangerously human, Artificial Intelligence student Sjors Weggeman talked at length about what transformer systems like ChatGPT can and cannot do for society, and explained how these systems function and ‘think’. He also responded to a point an audience member and fellow Substacker raised about older people’s ability to adopt new technologies. If you have any other questions you would like to ask Sjors based on my interview, I encourage you to leave a comment. In any case, I am glad to have you with me on my journey to discern the difference between legitimate concerns and profit-driven hype, so that we can find answers to the many moral dilemmas that our technologies create.
Check out my previous posts on AI here:
Part 1 of the interview with Sjors:
Other posts on AI, in which I analyse the ways in which it is discussed in the news and by prominent figures…
…and try to better understand how AI language models like ChatGPT work.
In another post on AI, I mentioned how the military is planning to use AI on the battlefield, for instance for flying drones. But there are also places where it is already being used, like in policing, to predict where crime is most likely to occur. Earlier, you stated that we do not have to fear AI, but examples like these suggest to me that it is perhaps the implementation of AI that we should fear. Is that what people should be worrying about, and maybe even trying to stop?
“As I said: it’s a tool. You can use a hammer to build things, or you can use a hammer to destroy things. Don’t be afraid of the tool; be afraid of the hand that uses it wrongly. To prevent misuse, we need regulations and restrictions, just as a hammer comes with basic safety instructions.
I don’t believe progress can be stopped, and neither do I think it should be. AI should never get to actively decide over life and death, but neither should humans. I think there are exceptions to this, but only in the case of self-defence and as a last resort. But that opens another debate about who started what and how. And I don’t think it is necessary to be afraid of AI, but I do think it is important to remain critical, to keep questioning the purposes of and the motives behind AI and its applications, and to control it through laws and regulations, and the enforcement thereof.”
I think there is an argument to be made that information systems shape human thought and opinion, and that the more sophisticated they become, the less agency we seem to have over our own decisions and opinions. How do you see these AI systems affecting human behaviour, and to what extent should we be worried about losing control of our own minds and lives?
“When you browse social media or read the news, you already get to see a range of articles or posts that match your interests according to an algorithm. These algorithms are rule-based systems with zero intelligent behaviour. Because of them, you quickly end up in a bubble that severely limits the posts and articles you see. Say you are intrigued by articles about terrorism: the algorithm feeds you more news articles about terrorism, giving you the false impression that there is a lot of terrorism going on in the world, even though cases of terrorism are at an all-time low. That can be very frightening for you, but it can also be very dangerous, since your bubble will keep confirming your thoughts and beliefs, and you might not even be aware of it. I don’t think we need to worry about this as something AI will bring, because it is already reality. Therefore, I think it is more important than ever that we become aware of this ourselves, inform and teach others about it, and regulate how permeable these bubbles are.
Self-awareness, education, and social control are more important than ever!
For that reason I personally don't use any social media, except LinkedIn for networking, if that counts as social media. And when checking the news I consult multiple sites to distil the major noteworthy events.”
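To make that feedback loop a bit more tangible, here is a toy sketch of the kind of rule-based recommender Sjors describes. It is purely illustrative: the topics, the click behaviour, and the 5% "reward" per click are all made-up assumptions, not any real platform's algorithm. But it shows how a feed that boosts whatever you engage with quickly collapses into a bubble:

```python
import random

# Toy engagement-driven recommender. Purely illustrative; the topics,
# weights, and update rule are invented for this sketch.
topics = ["sports", "politics", "terrorism", "science", "culture"]
weights = {t: 1.0 for t in topics}  # start with no preference at all

def recommend() -> str:
    # Show a topic proportional to its current weight:
    # more past engagement -> shown more often.
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def simulate(clicked_topic: str = "terrorism", steps: int = 1000) -> dict:
    shown = {t: 0 for t in topics}
    for _ in range(steps):
        topic = recommend()
        shown[topic] += 1
        if topic == clicked_topic:
            weights[topic] *= 1.05  # reward engagement: the bubble-forming rule
    return shown

print(simulate())
```

Even with that modest 5% boost per click, the clicked topic typically ends up dominating the feed after a thousand recommendations, while the user never explicitly asked for more of it. That, in miniature, is the bubble Sjors warns about.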
I think what you’re essentially saying is that these newer forms of AI being implemented in places like Bing search are simply a more advanced form of the simpler versions used to recommend YouTube videos to you or advertise products to you, and that people should therefore learn, in similar ways, how to use them, or how not to.
“Yes, I think that’s a very nice way to put it. With Google and Bing, you often needed to rephrase your queries a few times to get the exact results you were looking for, which really could become a skill. With ChatGPT this is less of an issue, but at the same time creative prompting becomes more of a challenge, for instance to work around the ethical barriers we discussed. But beyond that challenge, writing full questions with context, like your grandparents would do when using Google [ed. Funny side note: my grandparents would not even touch a computer, much less use it to figure out what Google is], now actually works, making the repeated staccato keyword queries once needed to find the right result obsolete.”
That seems like a useful and positive development. On the other hand, algorithms are routinely used to circumvent our critical thinking or ‘hack’ our brains in order to keep us engaged or to nudge us towards certain products or behaviours. Even if you know this, it is almost impossible not to be affected by it in some way. Should we then not be extremely worried that, when much more advanced systems like ChatGPT are used for the same purposes, it becomes even more difficult to resist the ways these systems can manipulate our behaviour? In other words, I’m saying that AI itself is not the problem, but the way it is likely going to be used by governments and large tech companies. What do you think about this?
“This question reminds me a lot of George Orwell’s 1984, which I think sketches a world that we should definitely be wary of. Therefore, I believe that the intention behind a system should always be questioned and regulated, but legislation always follows technological advancements with a certain delay. Accordingly, it is not surprising that several leaders from the tech industry have called for a pause in AI development so legislation can catch up, with the EU now actively working on the AI Act. These laws and regulations are meant to protect humans against the consequences of both intentional and unintentional harm. In the end, however, it will be the extent to which these laws are enforced that determines whether we need to be afraid. For large tech companies, this depends a lot on worldwide legislation. For governments it becomes more difficult, as one government is not directly capable of influencing the politics of another. If anything, the only ones capable of exerting influence over a government are the people governed by it. That is why the right to vote and the right to demonstrate are basic human rights that need to be cherished and invoked, especially now.”
Let’s end on a more personal note. What aspects of AI do you see yourself working with in the coming years?
“AIs like ChatGPT mostly use natural language processing. The voice technology pipeline has three parts: speech recognition, natural language processing (NLP), and speech synthesis. Although NLP is an important centrepiece, it has nothing to do with the actual voice. This leaves me the choice between speech recognition and speech synthesis. The market is currently very favourable for speech recognition, with speech synthesis lagging behind. Since speech synthesis is where my interest lies, I think it is better to wait a bit and get some experience in the industry and in AI more generally. Maybe in five years the market will be more favourable for speech synthesis. I believe this also holds for the PhD positions in speech synthesis that are currently on offer. In the meantime I see myself working as an AI consultant or in cyber security. To some extent this relates to the ethical and privacy issues, I’d say, but not necessarily to ChatGPT in particular. I will probably explore the possibilities of using ChatGPT to ease my work or as a hobby, although as little as possible, since each prompt comes with a sizeable ecological cost.”
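For readers wondering what that three-stage pipeline looks like in practice, here is a schematic sketch. Every function below is a placeholder standing in for a real model; the names and example strings are mine, not any particular library’s:

```python
# Schematic of the voice technology pipeline Sjors describes.
# All function bodies are stand-ins, not real implementations.

def speech_recognition(audio_in: bytes) -> str:
    """Stage 1 (ASR): turn a recorded waveform into text."""
    return "what is the weather tomorrow"  # placeholder for a real recogniser

def natural_language_processing(text: str) -> str:
    """Stage 2 (NLP): interpret the text and produce a textual response.
    This is the part systems like ChatGPT handle; no audio is involved."""
    return "Tomorrow will be sunny."  # placeholder for a real language model

def speech_synthesis(text: str) -> bytes:
    """Stage 3 (TTS): convert the response text back into audio."""
    return b"...synthesised waveform..."  # placeholder for a real synthesiser

def voice_assistant(audio_in: bytes) -> bytes:
    # The full pipeline: speech in, speech out, with NLP as the centrepiece.
    return speech_synthesis(natural_language_processing(speech_recognition(audio_in)))
```

The sketch also makes Sjors’s point visible: the middle stage never touches audio at all, which is why NLP expertise and voice expertise are distinct career paths.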
Sjors chose an apt note to end on. An often overlooked impact of these AI systems is the environmental one. Training GPT-3, the model underlying the original ChatGPT, is estimated to have resulted in 552 tonnes of carbon dioxide emissions, which equals what the average human emits in about 115 years (the average American in 36 years, the average European in 81 years). Then there is the energy needed to actually run these models, for which there is limited data. One data centre company founder estimates that a single generative AI search requires about 4 to 5 times more computing than a standard search engine search. With major players like Google and Bing planning to integrate generative AI into their globally used search engines, energy consumption is likely to rise dramatically. On the other hand, AI systems can be a useful tool to optimise energy consumption: one three-month experiment in Google’s data centres with an AI that optimised cooling procedures resulted in energy savings of around 12.7%. But those are specific use cases whose benefits would be far outweighed by the impacts of widespread adoption of this technology.
Then there is also all the water needed to keep all those electronics cool. Training GPT-3 is estimated to have directly used about 700,000 litres of freshwater, plus 2.8 million litres needed to generate the electricity it consumed. Those jaw-dropping numbers are even higher for less efficient data centres. That makes ChatGPT a very thirsty discussion partner. Just imagine having a chat with someone who drains a 500 mL bottle of water during every conversation.
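For the curious, the "years of emissions" comparison is simple division. The per-capita figures below are the approximate annual emissions implied by the article’s numbers (roughly 4.8 tonnes of CO₂ per person globally, 15.2 for an American, 6.8 for a European); exact values vary by source and year, so treat this as a sanity check rather than a precise accounting:

```python
# Rough sanity check of the "years of emissions" comparison.
# Per-capita figures are approximations and vary by source and year.
TRAINING_EMISSIONS_T = 552  # estimated tonnes of CO2 from training GPT-3

per_capita_t_per_year = {
    "world average": 4.8,
    "average American": 15.2,
    "average European": 6.8,
}

for who, annual in per_capita_t_per_year.items():
    years = TRAINING_EMISSIONS_T / annual
    print(f"{who}: about {years:.0f} years of personal emissions")

# Output:
# world average: about 115 years
# average American: about 36 years
# average European: about 81 years
```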
These issues are not unique to AI, of course, but apply to everything on the internet. From the quickest Google search to mining for cryptocurrencies, none of it would work without an immense network of computers, servers, and other electronics, all of which require electricity and water. Together, the world’s data centres are responsible for an astounding 2% of global electricity usage. Considering these often forgotten real-world impacts, should we really be moving most of our activities online, shifting to digital currencies like a Digital Euro, or promoting phone plans with ‘unlimited GBs’?
As a final fun and frankly fascinating thing I found (any other fanatical fans of fanciful alliteration out there?), I challenge you to figure out which of the faces on this website are real and which ones are AI generated. It is more difficult than you might think...
Share your reactions, thoughts, or questions in the comments below. If you liked the article, please consider liking and/or sharing it with others; it helps people find my newsletter. Be sure to check out my other work too, both on AI and on many other topics. Coming up are an article on the movement art eurythmy and its potential for rekindling a connection to ourselves and each other, as well as an article on how droughts and more extreme weather are affecting, and will continue to affect, food production as our planet keeps warming, for which I visited a biodynamic farm and interviewed its owner. See you next time.