Huge coincidence: Substack launches Notes, Twitter marks Substack as 'unsafe'.
A brief discussion about free speech, because Twitter apparently sees Notes as enough of a threat to start going after Substack
Last week, I published my first notes on Substack Notes, which is kind of like Twitter but not terrible (so far). Since you have likely already received several emails like this one, I wanted to use this post to talk about a little more than just ‘hey, I’m using Notes now’ (but hey, I’m using Notes now). On Notes, I will regularly post about various topics and developments happening in the world, share the amazing work of other Substackers as well as scientific discoveries and studies, and debunk claims floating around in mainstream and social media. I will also share occasional updates about content for this newsletter. Check out my profile (scroll down, click ‘Notes’ next to ‘Posts’) to see what I have posted so far. Instructions on how to use Notes are at the bottom of this post.
This also seems like a good time to mention that you can expect a new post from me at least once every two weeks, sometime during the weekend. If you want to know more about why, and about my plans for the future, I encourage you to read my explanation in this article or on my About page. So while I currently cannot guarantee that I will have time to write more often, I do want to spend more of my time doing what I love, namely writing. If you want to help me do that, consider upgrading your subscription. This week was a good week, though, both creatively and mentally, which is why I am excited to share this post with you.
Alright. On with today’s topic. Ever since Notes launched well over a week ago, Elon Musk, owner of Twitter and, as of last week, founder of a new AI company, has been feuding with Substack. Substack links were being blocked, suppressed, or flagged as unsafe. People searching for ‘Substack’ were redirected to ‘newsletters’. Musk claims that Substack had been “trying to download a massive portion of the Twitter database” to use for Notes, which Substack CEO Chris Best has denied. Even though Musk denies ‘blocking’ Substack (which may technically be true, as Twitter seems to be doing everything short of blocking), something like this has happened before. Last December, Twitter blocked links to social media platforms like Facebook and Instagram, as well as to link aggregators like Linktree. This was later reversed, probably because of the backlash that followed. It remains unclear what exactly happened at Twitter and to what extent Twitter is still taking (or planning to take) action against Substack. According to Best, the Substack team could not get many answers out of Twitter.
So what makes Notes so dangerous? It might have something to do with the different approach it is taking. According to Substack, “The lifeblood of an ad-based social media feed is attention. By contrast, the lifeblood of a subscription network is the money paid to people who are doing great work within it. Here, people get rewarded for respecting the trust and attention of their audiences. The ultimate goal on this platform is to convert casual readers into paying subscribers. In this system, the vast majority of the financial rewards go to the creators of the content.”
To what extent this promise can become a reality obviously depends on Substack, but also on the people using it. So far, my experiences have been positive, and it really does seem to offer a space for nuanced discussions amongst like-minded people. But some people say they are being targeted with hate speech and harassment. While reprehensible, it is difficult to say to what extent this is happening, or how Notes compares to other platforms. The study that everyone in the mainstream media seems to be bandying about as ‘proof’ that Twitter has become more bigoted and hateful since Musk’s takeover is, however, highly questionable, as I laid out here:
I might do a more detailed analysis of that whole study in the future, if I think it is necessary or if you would like to see it, in which case let me know in the comments below. In any case, it is hard to tell how many people are saying how many terrible things.
Moderating hate speech or censorship?
That brings me to the whole free speech versus content moderation discussion, whose outcome is not only relevant to Notes, but may very well decide whether the internet can remain a place for open discussions and truthful reporting.
Of course, people can say horrible things, share untruthful or misleading information, or hold opinions that most of us would find reprehensible, especially when they can so easily stay anonymous or harass others without fear of consequences. But I do not know how you can fairly decide which speech or posts cross a line and which do not. Maybe a judge, well-versed in all the specific laws and regulations of their country, can make that distinction on a case-by-case basis. The problem, though, is that I doubt the judicial system of any country could even begin to sift through the vast amounts of data bouncing around the internet on any given day. Besides, I do not think we should want it to. It is with good reason that speech-related cases tend to make it to court only when they demonstrably have resulted (or could have resulted) in serious harm, like someone making a prank emergency call or an influential politician calling for ‘fewer Moroccans’. In the latter case, the judges were thankfully wise enough to make sure their verdict did not set any free-speech-limiting precedents: they proclaimed him guilty of inciting discrimination without imposing any punishment. If he had not been an influential politician, I am sure this case would never have made it to court. We can, of course, debate the legitimacy of these kinds of court cases, but I brought them up to illustrate that, in countries with constitutional free speech protections, such cases tend to be rare.
Making sure that your platform does not amplify or promote messages of hate or enable unchecked harassment could be a way to limit overt discrimination and harassment, but even there I wonder where to draw the line; it often seems highly subjective and context-dependent. On the other hand, the way many online platforms operate, through profit-driven algorithms that thrive on division and conflict, is what allows them to be so readily weaponised to harass people. It tends to be in a platform’s interest to spur on as much engagement and outrage as it can, so it can sell more of your data to advertisers. That ad-based revenue model also means that, as soon as certain content or speech starts scaring off advertisers, the platform will start limiting its spread. We could see that dynamic at work when Musk took over Twitter, with advertisers pulling out, or threatening to do so, unless he reversed his ‘anything goes’ approach to free speech (anything goes, that is, unless you become a competitor like Notes or report on the whereabouts of his private plane). Could changing the business model of these platforms and democratising their ownership therefore be a way to rid ourselves of algorithms that intentionally stimulate toxic behaviour, and to give users more control without restricting speech? Something to consider.
Because once we start talking about the state or massive corporations preventing people from saying what they think and believe, however egregious, I start getting worried. To be clear, I have seen people say horrific things about someone because of the colour of their skin, or their sexuality, or [fill in the blank], and I think that it is absolutely unacceptable to treat other people that way. But precisely because it can be so hard to pin down or define where the line is, I do not think that restricting speech beyond what is determined through court rulings can be a solution.
Once you give people the power to control speech, however well-intentioned they may be (or not), they are going to abuse that power: to suppress embarrassing personal criticism, for instance, or to stop the ‘wrong’ people from expressing their views. There are far too many examples of this to count, even when looking only at recent years.
Despite the country’s constitutionally guaranteed freedom of expression, in the Netherlands you can get arrested for criticising the king. The United States is still trying everything it can to utterly destroy journalist Julian Assange for publishing evidence of its war crimes. Governments the world over, whether European or Chinese, are using their powers to monitor and regulate online speech, ostensibly to fight ‘misinformation’ or ‘protect public order’. Even looking at just the past week, we are spoiled for choice, with Facebook censoring articles by journalists.
What these examples highlight, besides the willingness of people in power to squash anyone who dares to go against them, is that broad or vague laws can pose huge risks to free speech. According to the U.N. Human Rights office, the main problems with many of these laws are:
Poor definitions of what constitutes unlawful or harmful content;
Outsourcing of regulatory functions to companies;
Over-emphasis on content take-downs and the imposition of unrealistic timeframes;
Powers granted to State officials to remove content without judicial oversight;
Over-reliance on artificial intelligence / algorithms.
I recently analysed how the Dutch government regulates online speech, based in part on European regulations, and found similar issues.
Okay, time to cleverly turn this massively complex discussion back around to where we started: Notes. This week, I posted a Note with the exact questions that informed the writing of this post:
I did this precisely because I wanted to hear different viewpoints and arguments on this topic (as I do for any topic, for that matter). I therefore encourage you to join the discussion there, or leave a comment below, and share your thoughts. The Note has already inspired a really engaging discussion, which I think shows the potential of Notes, and of Substack as a whole, for fostering meaningful discussions and exchanges of ideas.
How to join Notes
Head to substack.com/notes or find the “Notes” tab in the Substack app. As a subscriber to Critical Consent by Robert Urbaschek, you’ll automatically see my notes. Feel free to like, reply, or share them around!
You can also share notes of your own. I hope this becomes a space where we can all share thoughts, ideas, and interesting quotes from the things we're reading on Substack and thinking about in our lives.
If you encounter any issues, you can always refer to the Notes FAQ for assistance. Looking forward to seeing you there.
Lastly, I wanted to hold a poll amongst my subscribers, as I recently found out that Substack currently does not allow an open comment section on posts that contain a paywall, even if that paywall is all the way at the bottom of a longer post. That is how I want to do it, so that I can offer paid subscribers bonus content without preventing free subscribers from joining the discussion. Until Substack changes this, I have a few options:
1. Create two separate posts: one for all subscribers and one for paid subscribers. This could mean that paid subscribers get two emails, but I could also simply add a second, paid post to the Substack and link to it from the main post.
2. Create the posts as I had intended, with the paid subscriber content at the bottom. Since this locks the comment section to free subscribers, I will provide a link to Substack Chat, where all subscribers can freely share their thoughts on the post. Chat is only accessible to subscribers of this newsletter.
3. Similar to 2, only we hold the discussion on Notes. Discussions on Notes are accessible to everyone on Substack.
4. Leave everything open, no paywall, and trust that 5% to 10% of readers will become paying readers.
I am currently leaning towards 2, since it does not involve extra emails and is a better way to foster our community than 3, but I hope it does not lead to fewer comments from free subscribers, which would be a shame. Feel free to leave a comment below to share your thoughts.
UPDATE: After a suggestion from one of my readers, I have added a fourth option to the poll: forgetting about the extra content for paid subscribers at the bottom of posts and instead making posts longer and fully accessible to all. Does this still offer paid subscribers enough value, or is it better to trust that people who want to and are able to support me will do so? Let me know what you would want if you were a paying subscriber. I have also added another sentence to option 1, since I thought of another way to implement it.