There is a growing sense that Facebook and other social media platforms are a significant factor in the increasing polarization of our politics and even a real threat to democracy in countries such as the U.S. and Canada – countries with historically strong democratic institutions. This post explores how the business model chosen by the biggest platforms has contributed to the weakening of our democratic institutions and what can be done to curb the socially destructive consequences of the platforms’ current operations.
The Facebook business model
The problem with Facebook is that it is fine-tuned to be an addictive site in which politics – and information more broadly – are indistinguishable from entertainment. Of course, much the same could be said of cable TV news and the tabloid press. However, the engagement and immersion in social media are more intense than the kind that television or print delivers. It encourages people to associate only with those who share their opinions, creating information filters regarding politics and general views of the world. By training its users to place greater importance on feelings of agreement and belonging (“friends”, “like/dislike”) than on objective truth and facts, Facebook has created a gigantic forum for tribalism. Or more precisely, a forum for tribalism that contains a multitude of tribes that define themselves in terms of politics, race, ethnicity, religion, cultural/consumer preferences and social status. And because these tribes exist in an information bubble, with news of the outside world delivered to them primarily by Facebook’s algorithms through its newsfeed, members of any given tribe are increasingly oblivious to any views other than their own. They are also increasingly oblivious (and even hostile) to the notion of objective truth and facts more generally.
Moreover, Facebook’s algorithms are designed to feed users, over time, ever more extreme material that plays to these tribal identities. In strictly business terms, this increases the average time a user stays on the platform thereby increasing Facebook’s advertising revenue. In political and social terms, it leads to a polarized electorate and society.
More than 98 percent of Facebook’s revenue comes from selling ads, and the company has every incentive to continue to collect as much private data as it can on its users in order to keep them engaged on the site and to allow ad buyers to target their ads effectively. The potential impact of a business model driven by this combination of intense immersion and surveillance manifested itself when it was revealed that the political consulting firm Cambridge Analytica had obtained information about 50 million Facebook users in order to develop psychological profiles to assist the Trump campaign. That number has since risen to 87 million. Yet Facebook seems incapable of accepting the fact that its relentless pursuit of growth, which Facebook CEO Mark Zuckerberg characterizes as encouraging “openness and connection” globally, has been socially destructive.
But concerns over tribalization and the debasement of truth and facts caused by social media should not stop with Facebook. Apple, Amazon, Microsoft, and Google also share an aspiration to become the primary lens through which we both view the world and participate in it. And Google, in particular, suffers from many of the same problems as Facebook.
For example, a software glitch in the social site Google+ (Google’s attempt to compete with Facebook) gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal Google investigators discovered and fixed the issue. But then, according to the Wall Street Journal, Google’s legal and policy staff prepared a secret memo for senior executives warning that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to Cambridge Analytica. As a result, it wasn’t until October 6 that Google publicly admitted to the breach and announced that it was closing Google+ – a full seven months after the breach was discovered.
But Facebook is the most worrisome of the platforms because it is the only social media company approaching the scale and reach that would allow it to truly become the primary determinant of what its users know about the outside world and what they don’t know. It currently owns four of the top ten social media platforms in the world. Facebook had 2.2 billion monthly active users in June 2018, more than half of all people with Internet access around the world. WhatsApp has 1.5 billion, Facebook Messenger 1.3 billion, and Instagram 1 billion. All are growing quickly. Twitter, by comparison, has 330 million and is hardly growing.
Put bluntly, Facebook is a company that has lost control of its business model’s social and political consequences. And it knows it.
For example, in the United States, Facebook’s advertising tools continue to be used by foreign powers (especially, but not limited to, Russia) determined to interfere with American elections. In Myanmar, earlier this year, some 700,000 members of the minority Muslim Rohingya community fled the country amid a military crackdown and ethnic violence. In March, a United Nations investigator said Facebook had been used to incite violence and hatred against the Muslim minority group. The platform, she said, had “turned into a beast.” In the Philippines, Turkey, and other “illiberal” democracies, trolls connected with the government use Facebook to spread disinformation and terrorize opponents.
Mark Zuckerberg now spends much of his time apologizing for data breaches, privacy violations, and the manipulation of Facebook users by Russian spies. In contrast, a decade ago, Zuckerberg championed Facebook as a vehicle for positive political and social change. To truly achieve progress, Zuckerberg used to argue, societies would have to get over their concerns about privacy, which he described as a dated concept. This view served Facebook’s business model, which is based on users passively handing over personal data to Zuckerberg’s company. To increase its revenue – again, more than 98 percent of which comes from advertising – Facebook needs more users to spend more time on its site and surrender more personal information about themselves.
The platforms undermine established news media
Facebook and Google are also financially undermining traditional newspapers, which adhere to high journalistic standards of truth, facts and objectivity. Such outlets are legally considered “publishers” and, as such, are legally liable for the accuracy of the content that appears in their properties. In contrast, Facebook and Google are considered mere platforms (or intermediaries) and are exempted from legal liability for the content appearing in their products. This gives them a huge advantage over traditional news outlets: they can allow users to post pretty much anything they like without Facebook and Google executives having to worry about whether the “news” content posted on their sites has any relationship to the truth. And, of course, they don’t have to pay the generators of this “news” a penny.
Moreover, unlike Facebook and Google, the traditional news media lack the capacity to collect the motherlodes of personal data of deep interest to digital advertisers who want to highly target their marketing efforts. This limits their digital ad growth considerably relative to the platform giants and prevents them from recouping the revenue lost from plummeting print ad and print subscription revenues.
The result is that Google and Facebook have used their size, technical prowess and regulatory freedom in gathering their users’ personal data to accumulate unprecedented power over the distribution of the web’s content – including news. In 2017, Google controlled 40.7 percent of the U.S. digital ad market, followed by Facebook with 19.7 percent. At this point, the pair account for just over 60 percent of the total U.S. digital advertising market and command 80 percent of incremental growth. And the numbers are not too different in Canada, where Google’s share of the digital advertising market is almost 10 times that of the entire Canadian daily newspaper industry and 60 times that of community newspapers. As a result, newsrooms have been decimated across the country, dealing yet another blow to the ideal of an informed public capable of distinguishing between fact-based journalism that sheds real light on the operations of our political and economic institutions and conspiracy theories propagated by extremist groups and foreign powers.
The time of self-regulation is over but the federal government avoids taking action
The fact is that neither Facebook nor Google will change their business models on their own. Only government can steer them in a more socially constructive direction. It also goes without saying that Facebook, Google and the other platforms will fight all proposed reforms every inch of the way.
That said, the need for government action grows by the day. In the words of Canada’s Privacy Commissioner, Daniel Therrien: “the time of self-regulation is over.”
In Canada, the top priority needs to be a top-to-bottom overhaul of our privacy legislation, most importantly the federal Personal Information Protection and Electronic Documents Act (PIPEDA). The federal Privacy Commissioner has repeatedly criticized the legislation as overly permissive, giving companies far too much latitude to use personal information for their own benefit. Under PIPEDA, private organizations have a legal obligation to be accountable, but recent developments make it clear that Canadians cannot rely exclusively on companies such as Facebook and Google to manage their personal information responsibly. What that means is that provisions protecting privacy have to be spelled out in much greater detail in the Act, and enforcement of the Act’s provisions has to be considerably stronger than what currently exists.
To be clear, it is not enough to ask private companies to live up to their responsibilities. Canadians need stronger privacy laws that will protect them when organizations like Facebook and Google fail to do so. And compliance with these laws must be enforced by a regulator, independent from industry and the government, with sufficient powers to ensure compliance.
In addition to the Privacy Commissioner, several parliamentary committees have supported the call for bold legislative reform to rein in the platforms. Notably, in February 2018, the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI), which is tasked with reviewing Canada’s privacy laws, concurred with many of the Privacy Commissioner’s recommendations to amend PIPEDA, and even called for additional measures inspired by the European Union’s excellent General Data Protection Regulation (GDPR), which came into force in May 2018.
In a later report in June, after hearing from witnesses on the Facebook/Cambridge Analytica scandal, ETHI came to the view that additional amendments (notably those conferring new enforcement powers on the Office of the Privacy Commissioner, including the power to inspect or audit companies such as Facebook and Google) were urgently required. ETHI also agreed that political parties need to be governed by privacy laws.
Unfortunately, the government has been silent on these three sets of recommendations. It has also introduced two bills that could have been used to implement many of the recommendations, but failed to do so.
Opportunities to implement some of the major recommendations included:
- proposed changes to national security legislation contained in Bill C-59. During the hearings on the bill, the Privacy Commissioner reiterated, among other things, a number of the recommendations he made on Privacy Act reform in 2016.
- Bill C-76, the Elections Modernization Act, which dealt with political party financing and was an opportunity to address the lack of standards and oversight over the personal information handling practices of political parties. According to the Privacy Commissioner, however, C-76 added nothing of substance in terms of privacy protection. Rather than impose internationally recognized standards, the bill leaves it to political parties to define the rules they want to apply. It also fails to impose independent oversight over how Canada’s political parties use voter profiles, often purchased from private data brokers.
What should government do?
Given the opaqueness of business models and the complexity of information flows in the age of data analytics, artificial intelligence (AI) and the Internet of Things, Canada needs strong privacy laws and a strong regulator to enforce those laws.
That means that meaningful consent must be the driving force behind the reform of Canadian privacy law.
Therefore, following the European GDPR rules that came into effect in May, Canada must adopt an “opt-in consent by default” approach to providing personal information to platforms. More specifically, the Government must amend the Personal Information Protection and Electronic Documents Act to explicitly provide for opt-in consent as the default for any use by a platform of personal information for secondary purposes, with a long-term view to implementing a default opt-in system regardless of purpose. Moreover, opt-in consent should only be considered meaningful when the platform has provided users with sufficient information as to exactly how it will use the personal information given to it.
Related to the above, there needs to be dramatic action to improve the platforms’ algorithmic transparency. And again, the GDPR may provide a model for the kind of transparency that is required.
Thirdly, political parties must be far more transparent in how they use personal data and detailed rules as to what they can and cannot do with that personal data should be written into the Canada Elections Act.
Finally, the Government should consider including in the Personal Information Protection and Electronic Documents Act a framework for a right to erasure based on the model developed by the European Union that would, at a minimum, include a right for young people to have information posted online either by themselves or through an organization taken down.
Closely related to this, the Government should consider including in the Personal Information Protection and Electronic Documents Act a broader framework for a right to de-indexing (e.g., having a Google search listing containing personal or false information removed on request). This right should be expressly recognized in the case of personal information posted online by individuals when they were minors.
In late June 2018, the government responded to ETHI’s recommendations to amend PIPEDA. The Minister of Innovation, Science and Economic Development agreed that changes are required to our privacy regime, but he argued that further study of the viability of all options, for instance on enforcement models, was required with a view to presenting Canadians with proposals. The minister launched a national digital and data consultation, which could eventually result in amendments to the law in two or three years.
Canadians cannot afford to wait several years until known deficiencies in privacy laws are fixed. Technology is evolving extremely rapidly and many new technologies disrupt not only business models but also social and legal norms. Legal protections must improve apace if consumer trust is to reach the level everyone desires. As ETHI commented in its June report, “the urgency of the matter cannot be overstated.”