Social Media and Censorship


This week’s decision by Facebook, Spotify, Apple, and YouTube to take down material posted by conspiracy theorist Alex Jones and remove his Infowars channel points to an acute dilemma faced by all of the social media platforms today in reconciling their commitments to both freedom of speech and to social responsibility toward the democracies that shelter them. They can reconcile these objectives over the long term if they (and we) do two things: first, accept the fact that they are media companies with an obligation to curate information on their platforms, and second, accept the fact that they need to get smaller.

The large internet companies have maintained that they are simply neutral platforms on which their users can exchange information freely with one another. As such, they do not have an obligation to filter that content for accuracy, social consequences, and the like. They are supported in this position by Section 230 of the Communications Decency Act of 1996, which exempts these companies from liability for what appears on their sites provided they do not play the role of traditional media companies like the New York Times, the Wall Street Journal, CNN, or Fox News. Section 230 was put in place both to protect freedom of speech and to promote growth and innovation in the tech sector.

Both users and the general public were happy with this arrangement for the next couple of decades, as social media emerged and masses of people gravitated to platforms like Facebook and Twitter for information and communication. But public attitudes began to change dramatically following the 2016 elections in the United States and Britain, and the subsequent revelations both of Russian meddling in the United States and other countries, and of the weaponization of social media by far-Right actors like Alex Jones.

The idea that the big internet platforms are not media companies has never really been tenable, and the contradictions in their public protestations of neutrality have become ever more apparent over time. From the beginning the platforms realized they had to filter out certain kinds of content, like terrorist propaganda and child pornography, and did this by way of changes to their terms of use.

But there was a much bigger problem that they themselves were responsible for. Their business model was built on clicks and virality, which led them to tune their algorithms in ways that actively encouraged conspiracy theories, personal abuse, and other content that was most likely to generate user interaction. This was the opposite of the public broadcasting ideal, which (as defined, for example, by the Council of Europe) privileged material deemed in the broad public interest. User attention is the most precious commodity on the internet, and platform algorithms increasingly determined what users were likely to see or hear.

Traditional media companies curate the material they publish. They do this by setting certain standards for fact-checking and journalistic quality. But some of the most important decisions they make regard what information they deem fit to publish in the first place. They can decide to place stories about desperate Syrian refugees, transgender discrimination, or the travails of Central American mothers above the fold, or alternatively they can emphasize crimes committed by undocumented immigrants, Hillary Clinton’s email server, or political correctness on university campuses. Indeed, conservative complaints about bias in the mainstream media are less about deliberately faked news than about selective reporting that reflects the ideological preferences of media companies like the New York Times.

This is the most important sense in which the big internet platforms like Facebook, Twitter, and YouTube have become media companies: They craft algorithms that determine what their users’ limited attention will focus on, driven (at least up to now) not by any broad vision of public responsibility but rather by profit maximization, which leads them to privilege virality. This has produced a huge backlash that came to a head this spring after the revelations of the role that Facebook played in allowing Cambridge Analytica to access its data to help the Trump campaign. By the time Mark Zuckerberg testified to Congress in April, there had been a dramatic shift in public approval of his company and of the broader industry. The most visible consequence of this shift in political climate has been this week’s banning of Alex Jones.

Jones and his supporters have immediately responded to the ban by charging the platforms with censorship. In one sense this charge is misplaced: We worry most about censorship when it is done by powerful, centralized states. Private actors can and do censor material all the time, and the platforms in question are not acting on behalf of the U.S. government.

But Jones has a point with regard to scale. Facebook is not just another social media company; it has become a worldwide behemoth that in many countries (including the United States) has become something like a monopoly supplier of social media services. There are many countries in which Facebook has displaced email as the central channel of communication, and where it functions much like a public utility. Jones will not be able to reach nearly as wide an audience moving to different platforms as he can on YouTube and Facebook.

This then points directly to the other big problem with today’s social media universe, which is the size of the dominant platforms. Facebook today exercises government-like powers of censorship despite the fact that it is a private company. The New York Times or the Wall Street Journal can in effect censor Alex Jones by refusing to carry his content. But because there is a pluralistic and competitive market in traditional print media, this doesn’t matter; Jones’s followers can simply choose different media outlets. The same is not true in today’s social media space. I personally find Alex Jones completely toxic and am not unhappy to see his visibility reduced; that will be good for our democracy. But I am also very uncomfortable with a private quasi-monopoly like Facebook making this kind of decision.

Hence, it seems to me, public policy ought to encourage two parallel developments. First, the large internet platforms have to openly acknowledge that they are indeed media companies whose decisions can have major consequences for the health of American democracy. This will necessarily entail changes to the legal regime surrounding them, particularly Section 230 of the CDA.

But this acceptance of social responsibility will necessarily entail a second consequence: The platforms need to get smaller. Put another way, today’s internet needs to get more diverse, decentralized, and competitive so that people have an alternative to Facebook and YouTube. In a different political culture and climate, one could imagine state regulation of Facebook as a kind of public utility, but this will simply not be possible in the America of 2018 for reasons of political polarization and our anti-statist proclivities. So the real alternative is to promote a more decentralized social media market that more closely resembles the existing legacy media markets of newspapers and television. Whatever you think of them, Alex Jones’s followers should have a place to go.

To some extent this decentralization is already happening. The numbers of users on Facebook and Twitter are either leveling off or declining, and in certain markets users have been defecting to encrypted platforms like Telegram or WhatsApp. I suspect that much of the future content scrubbed from the big platforms will not be evenly balanced between Left and Right, since in my view much of the most toxic material has been generated by conservatives. So the platforms will be in the crosshairs of the conservative media, and they are likely to lose users from this quarter. This will not be a good thing from the standpoint of their shareholders, but it will be a good thing for American democracy.

It is also clear that our anti-trust laws need to be updated for the social media age. There has been a relentless growth in the size of the large platforms due to network externalities: that is, networks become more valuable to their users the larger they are. Even if the U.S. government were to decide to try to break up Facebook, it is not clear how it would do so. As in the case of the AT&T breakup, it is likely that a baby Facebook would eventually grow to occupy the same space as its parent.

Nonetheless, there are other decisions that could be made to limit platform size. I do not think it was a good idea for Facebook to have been allowed to buy Instagram and WhatsApp, or for Google to have acquired YouTube. These acquisitions seem to have been made as much to forestall potential competition as to profit from network economies, and they could be legislatively reversed. We need to have much more attention focused on modernizing our anti-trust laws in light of the challenges of the internet age. Then too there is the international dimension. Facebook’s monopoly power extends across many different countries, a challenge that has been met in authoritarian countries like China by simply banning it. That is obviously not a good outcome either, but the way forward is at this juncture far from clear.

Social media and the internet platforms have been a great source of free expression, debate, and political participation, as well as being the leading edge of an innovative American economy. But private sector actors can flourish only within the context of a broader democratic polity, and as such they have a responsibility to help maintain the health of that political system. This was something traditional media companies understood well, and it is a lesson to be relearned today.