

Re: An Open Letter on Artificial Intelligence
An open conversation with Joe Mayall about technology and the future
This is the second letter in a multi-part series between myself and Joe Mayall, the author of
. It started with me reaching out to Joe with this question: “How do you think the development of technologies such as AI will benefit or harm humanity?” This letter is a direct response to Joe’s answer to this question, which I shared with you last week. Each letter posted as part of our thought exchange will be published on both of our Substacks. Be sure to check out Joe’s other work, which largely focuses on describing economic and political issues from a socialist perspective.
Dear Joe,
You make some very insightful points, and I think we are on the same page when it comes to the likely consequences of AI for anyone who is not a billionaire. I believe there are two main areas of concern we should be keeping an eye on, besides the obvious one of an intelligent AI going rogue and taking over the world—a scenario whose moral, ethical, and philosophical implications are another discussion entirely. First, there is the question of whether AI will be able to put us all out of work. Because I for the most part agree with what you wrote on this issue, I will keep my thoughts on it brief, though I would not mind expanding on them in a future letter. The second area worries me a lot more, precisely because it is being discussed so little: the ways in which this technology is being implemented both online and in our personal devices. It is this part of recent developments in AI that I would very much like to hear your thoughts on.
Let’s start with how AI will affect the nature of work. In one of my other posts about AI, I cited a study that looked into past periods of technological upheaval and their disruptive and potentially destructive effects on workers and the economy. Two findings stuck with me. First, while disruptive technologies did negatively affect workers through periods of job losses and suppressed wages, in the long run the economy became more productive. Second, what often made the difference between disruptive and destructive was the rate at which these technologies were implemented, as well as how much was invested in job training programmes and social safety nets to help workers bounce back and move into other jobs. Two main questions follow from these conclusions: Who benefits from a more productive economy? And whose responsibility is it to make sure that change does not happen too fast and that workers are supported? The answers to these questions are what will ultimately determine how harmful AI is going to be for us and our ability to make a living. If new technologies are implemented for the benefit of companies and their shareholders rather than for the benefit of all, then none of these amazing technologies will make our work easier or better paid, except incidentally, nor will the work that AI takes out of our hands mean that we get to work less, or not at all.
This brings me to the second area of concern that I wanted to discuss. I fear that the technology is going to be used not only to protect and increase corporate profits, but to control people’s access to information and keep them thinking ‘correct’ thoughts. That is a lot to unpack in one letter, so I’m going to put several examples forward that we can use for further discussion.
The systems powering social networking sites are already quite adept at inferring a great deal about you: they collect and buy as much information about you as they can, monitor your online behaviour through trackers and analytics, and in recent years have taken to nudging—you could also say manipulating—your behaviour in a desired direction. The goals are usually financial, to sell you certain products, but they can also be political, as has happened during the past few election cycles. Candidates who wage tech-driven campaigns can use all this available data to their advantage, to reach those voters they expect can most easily be swayed. How effective that has been so far is debatable, and with the exception of a few studies there is not much concrete research on the success of this kind of voter microtargeting. But that we do not know the effects of this technology is precisely what makes it so worrying. So what happens once AI makes these systems many times more efficient and intelligent?
Then there are search engines, our gateway to the internet. Say you type something about a controversial topic like Covid into Google, and the search results direct you towards the website of the World Health Organization or the New York Times. Ending up on one of these mainstream organisations’ websites is great if that is indeed the kind of information you wanted. But if you are looking for something that deviates from the norm, for instance when you’re questioning whether masks or lockdowns work, and you are consistently steered away from finding information about that, then that’s a problem. It takes away your control over what information you have access to. The past few years especially have shown how important it is that there is not just one kind of information out there, and search engines play a major role in determining what is and isn’t seen. There are countless articles on how a search engine like Google manipulates search results for its own benefit, for instance by hiding sensitive subjects and controversial opinions. From the moment you begin typing your query, you are nudged in a direction predetermined by Google’s algorithms: first by search suggestions that affect what you are thinking about or how you decide to phrase your request, then by seeing only those results that the algorithm determined were the ‘best’ ones. Who decides which information is better? How do these search engines decide which results to prioritise? How much are these results based on the information they have about you, and how do they use that information? With the exception of some reports and studies here and there, we simply don’t know, but it seems obvious that a company like Google has an incentive to ensure that results are favourable to its interests. To me, a possible solution could be to make these companies’ algorithms transparent and to give citizens the power to decide how they work.
The last example I want to bring up is how companies like Microsoft are implementing AI into their software, for instance in the form of a personal assistant, or “copilot”, to help you get your work done and manage your life. It is in fact so convenient that you’ll never have to write another boring e-mail, solve complex problems, or come up with your own ideas ever again! But how can we trust that these assistants act with our best interests at heart? We already see how existing systems try to steer us in certain directions, and how dissenting voices are suppressed on big platforms like YouTube or Facebook. Why would these personal assistants be any better? Such an assistant might decide to prevent you from donating money to certain causes, as GoFundMe tried to do during a fundraiser to support the Canadian trucker protests, or it might alert law enforcement when it sees you engaging in what it deems dangerous behaviour.
The examples I have laid out here are things that are already happening and that have been well documented. So again I ask: what happens once AI makes these systems more effective at what they are doing? How can we decide what is best for us, for our society, if we are not able to access all the information we need? How much control will we have left over our own thoughts if we are constantly nudged in different directions without even realising it? Without a doubt, those questions scare me infinitely more than the thought of a rogue AI taking over the world.
Looking forward to hearing your thoughts,
Robert