
Facebook and publisher responsibility

Sorting out the roads that led us to this tipping point, where social media platforms are being held responsible for the content they deliver.

Image: online troll (by Chetraruc from Pixabay)

I’m writing this in mid-October 2021, after Facebook whistleblower Frances Haugen testified on Capitol Hill about how the social media giant handles misinformation and allows itself to be exploited in ways that harm people – especially children – and democracy, all in the pursuit of greater profits. I’m not going to dive into her testimony here, other than to summarize that Facebook uses its network algorithm to amplify divisiveness, false information, and political controversy because that kind of content increases engagement, which in turn brings in enormous advertising dollars. You can learn more about Haugen’s concerns and the documents she bases them on here and here and here. It has put Facebook in perhaps its worst public relations position ever.

If you read this post some time after the day I wrote it, a lot may have happened regarding regulations or standards for digital platform publishers. That is okay, because what I’m concerned with here is how we got to this point. This is not a topic where you can simply argue whether a tech company should or should not be the gatekeeper for the content published on its social media platform. It is more complicated than that, and to understand that complexity we need to review the odd, often arbitrary history of telecom responsibility and regulation.

For our purposes it is probably best to start with the creation of the FCC, the Federal Communications Commission, in 1934. The FCC’s mandate was to control broadcaster access to the “airwaves,” which were seen as belonging to the entire country, and also to ensure fair competition, public standards and safety, and availability to all, especially for the mediums of radio and television.

One of the responsibilities of the FCC was to establish content standards around things like obscenity, and guidelines such as the Fairness Doctrine (eliminated in 1987). The government was seen as having the power to do all this since the airwaves were a public space. The upshot is that everyone knew where the lines were drawn (whether you agreed with those lines or not). All radio and television broadcasters had a standards and practices department to ensure in-house content was in line with these standards, and all independent content producers (syndicated radio shows, independent television producers) knew their products had to adhere to these standards to have them purchased and broadcast.

The rise of cable television brought about public-access television. Created by the FCC in the late 1960s, it provided a platform for local community, educational, and government programming. The telecoms that owned cable television fought hard against this, saying the FCC had no authority over how they managed their cable business since it didn’t involve the airwaves, but the FCC prevailed because of the de facto monopoly cable companies had in any community. This brought up a new issue for the telecoms: a lot of public access content was being produced by people who had no interest in standards and practices and saw themselves as exercising free speech in a venue the FCC provided them. Who was responsible when a public access show aired content that was seen by some as obscene, slanderous, or otherwise inappropriate?

The telecoms generally decided not to control or limit the content at all, for a very simple and compelling reason. They felt that if they sometimes stepped in and stopped the airing of some public access content, that would imply editorial control over all public access content, and therefore they could be seen as publishers responsible – and accountable in a court of law should anyone sue – for everything appearing on their cable feed, no matter who produced it. This would have opened them up to all manner of legal issues.

Image: online trolls (illustration by Travis Millard for The Hollywood Reporter)

When the internet happened, telecoms and their tech partners carried this hands-off approach forward. I was a founding regional managing editor for Comcast when they got into the broadband internet business in the late 1990s, and back then I spoke with the Comcast counsel about this topic. At the time, telecoms like Comcast felt they needed to offer original, curated, and partner-contributed content in the AOL mold to attract and retain subscribers to the broadband service (this was when everyone else was on dial-up). That meant our public narrowband users and broadband subscribers would be seeing content originating with me or my colleagues, partners like newspapers and magazines, our small league of freelance writers and producers, and really any organization or individual who was generating online content in those days.

The best approach, said our attorney, was to be completely hands-off with anything not directly created and published by us, otherwise we would create a precedent that we were responsible for something someone found online using our access service. The model was known as the “dumb pipe” – we are the conduit for the content that comes to you, it claimed, but we have no active control over it.

The rise of Web 2.0 and especially social media put tremendous strain on this strategy. The pressure was in two forms.

The first was that user-generated content became increasingly problematic. In the early days the concerns were about things like sexual content or plagiarism. But as social platforms really took off, user-generated content started to create a seriously hostile environment. On Twitter, especially, anonymous trolls would attack women and people of color over social issues and over topics the trolls felt they owned, such as technology and comic books. The Twitter algorithm functions quite differently from Facebook’s, and is not part of the current scrutiny brought on by Ms. Haugen’s whistleblowing, but Twitter has long been under attack for not creating a way to protect its users from bad actors.

This became a huge issue when one of the worst bad actors turned out to be the president of the United States, who regularly lied, spread misinformation, and slandered people via his personal Twitter account, in essence giving his followers full permission to open the floodgates of online abuse in his wake. Many, including me, were perplexed as to why Twitter didn’t just create a very clear and explicit terms-of-use document, including specific consequences. That way it could investigate any complaint, and if a user was found to have violated the terms they could be subject to escalating punishment, eventually including permanent banishment from the platform (which Twitter could do, since it is a private company). Donald Trump wouldn’t have lasted three months on Twitter if they had done that.

Meanwhile, Facebook has been undergoing the other form of pressure: its algorithm does explicitly, actively, and aggressively push certain types of content to users, whether the users want it to or not. Facebook is not the “dumb pipe” of telecoms in the early days of the Web. It is, arguably, a hands-on publisher of user-generated content, targeting certain audiences with certain content designed to prompt reactions that will keep them online and engaged, thus increasing ad views in order to generate revenue.
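
To make the distinction concrete, here is a minimal, purely illustrative sketch in Python of the difference between a “dumb pipe” feed and an engagement-optimized one. None of this is Facebook’s actual code; the Post class, its fields, and both function names are hypothetical, invented only for this example.

# A minimal, purely illustrative sketch -- NOT Facebook's actual code.
# The Post class and both feed functions are hypothetical names invented here.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float              # when the post was created
    predicted_engagement: float   # a model's guess at likely clicks, comments, shares

def dumb_pipe_feed(posts: list[Post]) -> list[Post]:
    """The old telecom model: deliver content in the order it arrived,
    with no editorial judgment applied."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    """The ad-revenue model: actively reorder content so the items most
    likely to provoke a reaction are seen first."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

The only difference between the two is the sort key, but choosing to rank by predicted reaction rather than by arrival is itself an editorial decision about what each user sees first.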

This highly specific targeting could be described as a form of “micro-publishing,” where Facebook is very much in control of what is seen, by whom, and when. If you’re obviously a fan of QAnon, Facebook pushes paranoid conspiracy content to you, and you eat it up and stay online, and interact, and Facebook makes more money. If you engage with people who post about Trump actually winning the 2020 election, Facebook pushes outrage and talk of insurrection to you, and you stay online, and interact, and Facebook makes more money. Then, if it all ends with a bunch of delusional, marginalized nitwits storming the Capitol in early January 2021, Facebook will claim they had nothing to do with it, that their hands are clean. “It was just people sharing stuff among themselves,” Facebook will assert as they count their money, “it was outside of our control.”

But as Ms. Haugen’s documentation makes clear, Facebook was in control to a remarkable extent and knew perfectly well the consequences of what it was doing. The same situation exists with Instagram – a Facebook property – and how it feeds content suggestions to young people, especially teenage girls, who report feeling more depression and self-doubt because of Instagram while simultaneously becoming more addicted to it, all fueled by the sort of beauty and fashion influencer content Instagram sends their way.

We are increasingly in an era when most people get their news from social media, and if those platforms are pushing a very narrow, very biased, and often inaccurate and misleading slice of information depending on who the audience is, it isn’t a surprise that our society ends up with large swaths of people with bizarrely false perceptions about what is happening in the world. Their preferred source of news has been misleading them, and is doing so only because it provokes behavior that results in increased revenue for the platform. You can’t get much more mercenary than that.

So where are we now? Well, as I laid out, we’ve had a long history of companies seeing themselves as not responsible in any way for content they did not create themselves – they put their heads in the sand and claim to be simple conduits. But with the advent of the internet and Web 2.0, telecoms and tech companies have used increasingly sophisticated ways to control and manipulate the delivery of content, which belies any claim of being a dumb pipe. They are culpable in this mess.

It will be interesting to see where this leads. But it is obvious that we can’t solve the complexities of social media’s negative impact on people and communities by using broadcast models developed a century ago. The government at the very least needs to overhaul the FCC in philosophy as well as practice. Meanwhile, tech companies will need to establish stronger and more widely accepted ethical standards of conduct, and back them up with enforcement, although it may already be too late for them to avoid facing significant regulatory actions.