The Issue:

On August 5th 2019, in response to the mass shootings in El Paso, Texas and Dayton, Ohio, Cloudflare took the unusual step of withdrawing its services from the forum website 8chan. The platform has a reputation for being largely unmoderated, with hate speech and inappropriate content widely disseminated. In response to that content, both before and after the recent mass shootings, and to the shift in public opinion that views the forum as a tinderbox for hate speech and racism, Cloudflare has publicly announced that it will withdraw its security and caching services from the site.

Who are Cloudflare & what do they do?

Cloudflare provide DNS, online security and edge caching services to website owners and online applications. Instead of traffic travelling along the standard internet routes, website owners can route it through Cloudflare's servers and network. This allows users to benefit from having static resources and content delivered from a node that is geographically closer to them, improving the speed at which those resources arrive.

Additionally, this also allows traffic to be screened by Cloudflare's security and firewall protocols, helping with one element of keeping your website secure in the face of malicious traffic. They offer an excellent free tier, and Leodan:Design recommend Cloudflare® to all of our customers and clients.
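
For anyone who wants to verify that a site is actually being served through Cloudflare, the response headers are the simplest place to look. Below is a minimal sketch in Python using the `requests` library; the URL is a placeholder, and the `CF-Cache-Status` and `CF-RAY` headers only appear on responses that are proxied through Cloudflare's network.

```python
import requests

# Placeholder URL - substitute any site you manage or are curious about.
URL = "https://example.com/"

response = requests.get(URL, timeout=10)

# Cloudflare adds its own headers to responses it proxies:
#  - CF-Cache-Status reports whether the edge served the resource from
#    cache ("HIT") or had to go back to the origin ("MISS", "DYNAMIC", etc.).
#  - CF-RAY is a request identifier that ends with the code of the data
#    centre which handled the request (e.g. "-LHR" for London).
cache_status = response.headers.get("CF-Cache-Status", "not present (likely not proxied by Cloudflare)")
ray_id = response.headers.get("CF-RAY", "not present")

print(f"CF-Cache-Status: {cache_status}")
print(f"CF-RAY: {ray_id}")
```

If both headers are missing, the response almost certainly did not travel through Cloudflare's edge at all.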

Leodan:Design are a registered Cloudflare Partner as part of the services we offer for webhosting and the running of digitalplatform.press (our alternative to standard WordPress hosting solutions). We believe that their product is part of an essential toolkit which every website owner should utilise.


Who is responsible for online content?

One person's fake news is another person's truth. When you have the right to "Free Speech", while you may have a moral responsibility to present factually truthful information, you also have the right to lie, including presenting your opinion as fact. However, you also bear personal responsibility for the impact of those lies.
Ben Matthews
Leodan:Design

Removing online content that has been published by an individual is a really tricky area to assess and analyse. There is a lot of outcry about “Fake News”, with the term often being used to discredit content that people disagree with. Just because someone disagrees with something that is said, especially an opinion, doesn’t mean that the opinion is wrong. If the opinion is based on incorrect facts, then it should be challenged.

Unfortunately, the reality is that removing the opinions of others from publication is censorship. The ambiguous area is when an opinion is presented as fact, especially when it makes false accusations against a person, or a group of people.

The real problem comes when users or readers act on these opinions, or are inspired or incited to act on them, especially when those opinions denounce or present mistruths about individuals or sections of society.

There is also the question of whether, even if we do not like a point of view, especially one that contains opinions which are abusive towards sections of society, particularly involving issues of race, gender, sexuality or social status, we have the right to censor that opinion.

Even if an opinion is wrong, factually false, it is still the opinion of an individual. While individuals can choose to ignore it, do we have the right to censor it? Are we alienating minority groups if we do? For how many decades has the censorship of content produced by minority groups been an issue?

It is a tricky issue – there is not one right or wrong answer.

However, when that content inspires acts which harm an individual, their reputation or their quality of life, or remove their freedom of choice, then that is an infringement of their right to choose to live in a safe and secure environment.

The issue isn't fake news or the fact that it exists. The issue isn't that individuals may hold opinions about race, gender, sexuality or social status that I find repellent, or the fact that these opinions exist. The issue is that some segments of society, some individuals, don't understand the difference between opinion and fact. They take these opinions as absolute truths, and then feel that these "facts" give them the right to engage in acts or campaigns of physical, emotional and mental violence against other people.
Ben Matthews
Leodan:Design

Do online services have the right to remove content from their platforms?

This is a really complicated issue. If a service provider has an agreement to provide you with a service where you can publish content, do they have a right to censor that content? The realistic answer is no. If a platform accepts you as a member of its community and allows you to post to that platform, and those posts violate its terms and conditions, then yes, it could be argued that they have the right to remove that content. The difference here is in the role that the provider plays – is it providing a service or is it providing a community?

It can be argued that if you are providing a service, then you have no right to censor the content that your service is used to share. It can be argued that if you are providing a community, then you should set guidelines on what is acceptable content within that community. A good example of a community is an application or platform such as YouTube, or community forums. These platforms are about content creation and delivery, and to contribute you should adhere to their publishing guidelines. If the content that you produce is not in line with these guidelines, it should be removed.

Google Search: A Service Or A Community?

There has been much talk that Google should somehow start policing the content that it provides to its users. This is an issue, as Google Search is a service that exists to match what a user searches for with the most relevant online content. It does not endorse the sources that it suggests; it simply says that, based on your search, this is probably what you are looking for. For general search engine results, and some of the enhanced rich snippets, it asks that the content is readable by its crawler bots, and then looks for indicators that suggest that this is an authoritative source of information.

The only reason that Google Search really censors (or delists) results is because of malware on the site that shows in the search results (yes, there are other reasons too – let me know what they are in the comments). It would be malpractice to send a user to a website on which Google has detected malware, or one that is associated with phishing scams. Another form of legitimate censorship is the "adult only" filters, to ensure that young children are not exposed to "adult content". Google doesn't exist to judge the user, just to provide them with the content that they request. The idea that it could end up marking content as "fake news" seems off beat – it is not its job to tell me what is true and what is not. Its purpose is to find the most relevant and authoritative content on the subject that I seek (yep, quite often it is Wikipedia or Stack Overflow).

In summary, the purpose of the service that Google Search provides is to give me links / content (that are safe to click) which offer me the information that I am seeking. I want this information to be relevant to what I am searching for (yes, I might be searching for deliberately misleading information on purpose – and yes, I want Google to find that for me). Its service is not to "police" the content – it is a service, not a community. I do not want them to censor the content that I see.

Conclusion: Google Search Is A Service

I had a friend start complaining that one of their favourite websites had been "censored" by Google and had been delisted. This was a large and internationally well-known site. My friend had been influenced by the "website owner", who has no expertise in SEO, ranting at length about how Google had delisted his content. It wasn't delisted, it's just that the Search Engine Optimisation on the site wasn't very good. They had issues with splitting content across subdomains, the site wasn't a great experience on mobile, they didn't have any embedded schema content, and a lot of their headlines were "clickbait" where the content didn't answer the question it claimed to answer. The website and its content lost their authority because the site wasn't well designed or implemented. It wasn't delisted ... it wasn't censored ... the updated algorithms simply ranked it as not very authoritative. The reality is, if the website owner was concerned about helping people with great information, they would have kept up to date, but the site was designed (and served this purpose very well) to pull readers in to show them adverts or sell them products, on the pretence of giving all this wonderful information for free.
Ben Matthews
Leodan:Design

Facebook: A Service Or A Community?

Mark Zuckerberg, and Facebook, have been the target of many scandals regarding privacy and “Fake News” in the past few years. The privacy issues are major ones, but in focusing on the issue of content here, we will keep it discrete from the concerns regarding the sharing of users’ information without permission. Recently, Facebook have been subject to criticism regarding fake news content. However, when looking at whether censorship is a good idea on the platform, we should ask whether they have any right to censor at all, while also considering that actions against the platform may be politically motivated attempts to limit alternative narratives reaching the populace.

The purpose of Facebook, the value offering if you will, is to provide a tool to its users that allows them to create, or engage in, online communities. These communities can then share content with each other. Facebook provides this service, and it is not its role to judge this content or the users who access it.

It does not have the right to censor its users, in the same way that a mobile phone provider does not have the right to listen in to my conversations and terminate the call if they do not like what is being said. Its primary purpose is a communications tool.

The problem that Facebook have is not that they have allowed fake news, or content which promotes unpleasant (in my opinion) opinions. The problem is not that they have allowed the sharing of misinformation in public feeds or private groups. The problem is that they have been paid to help spread this false information, and that they have targeted adverts at impressionable individuals.

While there is the moral issue of allowing content to be shared that can lead to individuals taking action, to censor this would be a breach of the users’ rights.

The issue is who you choose to do business with. Facebook has little choice over who its users are. The fact that there are fake accounts pretending to be people is not the issue (although it harms the credibility of the platform). But while Facebook has little control over who its users are, or what they do on the platform … they do have a choice over who their customers are. Just because they can take money from someone, and enter into a business relationship with them, doesn’t mean that they should.

Conclusion: Facebook is a service*.

*But Facebook is also a business that takes money to present adverts in users' feeds. It has a choice about who it takes money from, and about the content it will promote by paid means.
I find the furore in the USA over Russia meddling in the election quite hypocritical. The issue is about protecting democracy. If the Russians had successfully rigged the election via some form of voting fraud, then I would understand the issue. However, we're talking about the spread of information claimed to be fake – but this is the right of "Free Speech". The whole point of democracy is that we expose the population to a range of views and ideas, they decide which one is best, and then they vote. I often wonder if the politicians are making such a big deal about fake news because they don't like that the messages conflict with their own. Perhaps they don't like that they aren't as effective at marketing / propaganda as those with an alternative message are. The reality is that a democracy which trusts people to vote should trust them to read and analyse any information they want in coming to their decision, whether that is factual, fake news or advertised as fiction – after all, 1984 made me think and consider my political views.
Ben Matthews
Leodan:Design

Does Cloudflare, as a business, have the right to filter, censor or remove content?

Cloudflare offers a technical service. It is not in the business of being a content provider. It doesn’t provide a community and therefore is not responsible for publishing content that adheres to quality guidelines or a set of pre-determined moral or ethical standards. Quite simply, it is not its job, nor is it in a position, to restrict access to the content that it delivers, as it is not the owner or distributor of that content.

They actually sum up their role, and their reasoning, incredibly well in their blog post on the reasons why they have terminated the accounts of 8chan:

"Many of our customers run platforms of their own on top of our network. If our policies are more conservative than theirs it effectively undercuts their ability to run their services and set their own policies. We reluctantly tolerate content that we find reprehensible."

Matthew Prince - Cloudflare

What does this example mean to webhosts and other platform providers?

At Leodan:Design, we run and manage the digitalplatform.press service, which allows small businesses, artists and film production companies (to name a few) to produce their websites quickly and easily. In the scenario that a client or customer publishes content that we think is offensive, misleading or otherwise problematic, we cannot force the customer to remove this content. Additionally, we cannot block access to this content, and we cannot remove it, not without our client’s consent.

In a nutshell, it is beyond our remit to override our customers’ choices regarding the type or quality of content that they publish.

"While we've been successful as a company, that does not give us the political legitimacy to make determinations on what content is good and bad. Nor should it. Questions around content are real societal issues that need politically legitimate solutions."

Matthew Prince - Cloudflare

We have one exception to this. As part of our service, we offer suitably qualified medically-oriented publishers (such as healthcare companies, doctors, dentists, etc.) the opportunity to use specific content types that we mark up with the appropriate medical schema. This allows us to communicate the context of the content to Google (and to any other search engine that understands JSON-LD markup for medical content from schema.org). Additionally, we use these specific content types to enable users to search medical content on our discovery portals.

This feature is not switched on by default, and we only switch it on when we have assurances and proof of suitable authority to post such content. Our policy is that we will not remove the content, but instead change the markup from “medical content” to “general content”. We only do this if we receive reports from users or through our other quality assurance processes. We then have the content reviewed in line with our policy, and if we find that it is not medically authoritative, we simply switch off the restricted feature that describes it as such, whilst maintaining a conversation with our clients about the process.
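
As an illustration of what that switch can look like at the markup level, here is a minimal Python sketch that emits schema.org JSON-LD for a page, using the MedicalWebPage type only when the publisher's medical authority has been verified and falling back to the generic WebPage type otherwise. The function name and fields are a simplification for this example, not the exact markup that digitalplatform.press generates.

```python
import json

def build_page_schema(title: str, url: str, medically_verified: bool) -> str:
    """Return schema.org JSON-LD for a page.

    The page is described as a MedicalWebPage only while the publisher's
    medical authority is verified; otherwise it is marked up as a plain
    WebPage, i.e. "general content". The content itself is untouched.
    """
    schema = {
        "@context": "https://schema.org",
        "@type": "MedicalWebPage" if medically_verified else "WebPage",
        "name": title,
        "url": url,
    }
    return json.dumps(schema, indent=2)

# Hypothetical usage: the same page before and after verification is withdrawn.
print(build_page_schema("Tooth whitening explained", "https://example-practice.press/whitening", True))
print(build_page_schema("Tooth whitening explained", "https://example-practice.press/whitening", False))
```

The point of the sketch is that only the structured-data hint to search engines changes; the page and its content remain published exactly as before.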

In our opinion, this is not censoring or removing content. The content still exists and is freely available to the public. This process is one where we remove our recommendation to search engines that this content is authored by medically responsible people.

Additionally, we have a policy of advising our clients if their content is inappropriate, and suggesting that they review and change it. We believe that advising our clients on content falls within our remit. We believe that changing our clients' content, or imposing our will on them, does not.

Does Cloudflare have the right to terminate their service offering to 8chan?

First of all, we have to remember that Cloudflare is a private company, made up of private individuals. This means they have a right to choose who they interact with.

Imagine the scenario: someone comes into a pub and starts shouting, swearing and being aggressive. The owner has the right to ask that person to leave, effectively withdrawing consent for them to be on the property because of their behaviour. The owner also has a duty to ask that person to leave, to ensure the safety of their staff, their customers, and the person themselves.

This is the same with any business … it is perfectly within anyone’s right to refuse to sell a product or service, just as everyone has the right not to purchase a product or service.

But it is a tricky decision.

European Human Rights law, and the precedent set by the UK Supreme Court, give a good example of the principle of refusing to communicate a message on behalf of a client with which you do not agree. While you may disagree with the ruling in the “Belfast Cake Scandal” and find the actions and views of the owners abhorrent, they do have the right not to endorse a message that they fundamentally disagree with.

And this is what Cloudflare have done. They have stated that they fundamentally disagree with the messages and values that are being portrayed on 8chan. They feel that the portal has contributed in part to these mass shootings as a result of the publication of what could be classed as “hate speech”. The service they provide enables a message that they fundamentally disagree with to be broadcast, and one that has created “real-world” consequences. By refusing to support the 8chan platform, they are not censoring the content, they are simply refusing to associate their brand with another whose values are in direct contradiction to their own.

“The bakers could not refuse to supply their goods to Mr Lee because he was a gay man or supported gay marriage, but that is quite different from obliging them to supply a cake iced with a message with which they profoundly disagreed.”

Supreme Court Justice Lady Hale - As quoted by The Guardian

Explanation & Example:

A business cannot refuse custom because of differences of belief. A business can refuse custom if the task would make them do something that goes against their beliefs or makes them feel unsafe.

As an atheist, I could not refuse to make a general website for a fundamentalist Christian because of their faith. I could refuse if I felt the message that they were asking me to communicate went against my principles, or if I disagreed with the outcome that they were trying to achieve. I could refuse if their behaviour towards me made me feel uncomfortable or threatened, or was an infringement of my personal rights. However, it would not be legal for me to refuse them because of their difference in faith or culture, as this would be discrimination.

If, not knowing about their beliefs, I would have made the website, then refusing to do so because of those beliefs would be discrimination.

If I did not agree with the message that they broadcast, then this would be grounds for non-service. If the outcomes that they sought to achieve went against my values, then this would be grounds for non-service. If their behaviour towards myself or my staff was inappropriate, then this would be grounds for non-service.

The implications of Cloudflare's decision

First of all, I think Cloudflare should be congratulated on taking a stand and enforcing their right to choose their customers. I think sometimes customers can forget that a business chooses to enter into a business relationship with them as much as they choose to enter one with the business.

For Leodan:Design, we will be looking at formalising our policy on how we deal with customers who seek to publish content that goes against our morals, values and beliefs, especially content that is discriminatory in nature or seeks to incite hate crime on the basis of unfounded beliefs. We will also be looking at how we can improve our onboarding process so that our clients and customers are educated about how they can structure their content to ensure that it does not conflict with these values.

With regards to other digital companies, this move by Cloudflare has highlighted how a business can act on its values and morals by refusing to align with publishers who create discriminatory content, or who allow this type of content to be published on their platform. This will have a political impact on the likes of Facebook and Google, and although they cannot and should not censor the content that is exposed to users through their free and open channels (i.e. posts by users and non-promoted search results), they can decide who they take money from to provide enhanced visibility options such as adverts, who they provide support services to, and whose content they allow to be pushed (for a fee).

Undoubtedly, another service provider will step in to fill the void left by the services that Cloudflare have withdrawn from 8chan, just as one did with “The Daily Stormer”. Undoubtedly, Cloudflare will receive a fair amount of negative press from certain wings of the media.

However, whether Cloudflare, by drawing this line in the sand, have set a precedent for other service providers to follow remains to be seen.


This page addresses the following questions:
  • Why did Cloudflare ban 8Chan?
  • Do Cloudflare censor websites?
  • Do webhosts have a responsibility to monitor the content on their servers?