The root of fake news in Canada: Facebook and other advertising-based social media


Canadian Christopher Wylie says that Cambridge Analytica targeted 50 million Facebook users without their knowledge during the U.S. presidential election campaign with Trump-aligned messaging based on psychological profiles.


This article contends that the increasing spread of “fake news” is a direct result of the rise of social media platforms such as Facebook, Twitter and Google. These companies have undermined traditional, fact-based newspapers, and have encouraged the growth of web-based, fake news sites in the following ways.

  1. They have undermined the business model of fact-based, quality journalism by garnering the lion’s share of digital advertising at a time when print-based advertising was collapsing; and
  2. By refusing to take responsibility for what is posted on their sites, they have allowed their platforms to be used by fake news propagators.

The following are four examples of the harm being done by the rise of fake news driven by the growth of social media:

Example 1: Facebook estimated that 11.4 million Americans saw advertisements that had been bought by Russians in an attempt to sway the 2016 election in favor of Donald Trump. Google found similar ads on its own platforms, including YouTube and Gmail. A further 126 million Americans, Facebook disclosed, were exposed to free posts by Russia-backed Facebook groups. Approximately 1.4 million Twitter users received notifications that they might have been exposed to Russian propaganda. But this probably understates the reach of the propaganda spread on Twitter’s platform. Just one of the flagged Russian accounts, using the name @Jenn_Abrams (a supposed American girl), was quoted in almost every mainstream news outlet.

A Russian troll farm known as the Internet Research Agency used Facebook’s tools to promote rallies, protests and other events across the U.S. According to Facebook, 13 of the pages created by the Internet Research Agency attempted to organize 129 events. Some 338,300 unique Facebook accounts viewed the events, the company said. Facebook said about 62,500 marked they were attending one of the events and 25,800 accounts marked they were interested.

Example 2: UN human rights experts investigating a possible genocide in the Rakhine state of Myanmar warned on March 14 that Facebook’s platform was being used by ultra-nationalist Buddhists to incite violence and hatred against the Muslim Rohingyas and other ethnic minorities.

A security crackdown in the country last summer led to around 650,000 Rohingya Muslims fleeing into neighboring Bangladesh. Since then there have been multiple reports of state-led violence against the refugees, and the UN has been leading a fact-finding mission in the country.

Also on March 14, the chairman of the UN mission, Marzuki Darusman, told reporters that Facebook had played a “determining role” in Myanmar’s crisis (via Reuters).

The UN’s Darusman said Facebook has “substantively contributed to the level of acrimony and dissension and conflict” between the Buddhist majority in Myanmar and the Muslim minority. “Hate speech is certainly of course a part of that,” he continued, adding: “As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media.”

Example 3:

In 2014, Cambridge Analytica, a voter-profiling company that would later provide services for Donald Trump’s 2016 presidential campaign, reached out with a request on Amazon’s “Mechanical Turk” platform, an online marketplace where people around the world contract with others to perform various tasks. Cambridge Analytica was looking for people who were American Facebook users. It offered to pay them to download and use a personality quiz app on Facebook called “This Is Your Digital Life.”

About 270,000 people installed the app in return for $1 to $2 per download. The app “scraped” information from their Facebook profiles as well as detailed information from their friends’ profiles. Facebook provided all this data to the makers of the app, who then turned it over to Cambridge Analytica.

A few hundred thousand people may not seem like a lot, but because Facebook users have a few hundred friends each on average, the number of people whose data was harvested reached about 50 million. Most of those people had no idea that their data had been siphoned off (after all, they hadn’t installed the app themselves), let alone that the data would be used to shape voter targeting and messaging for Donald Trump’s presidential campaign.
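A rough sketch of the arithmetic behind that figure (the average-friends number below is an assumption chosen to match the reported total; real friend lists overlap, so this is only illustrative):

```python
# Back-of-the-envelope estimate of the Cambridge Analytica data harvest.
# The 270,000 installer count is from press reports; the average number of
# friends exposed per installer is an assumed figure for illustration only.
installers = 270_000
avg_friends_exposed = 185  # assumed average; actual friend lists overlap

estimated_reach = installers * avg_friends_exposed
print(f"{estimated_reach:,}")  # roughly the ~50 million figure reported
```

The point of the sketch is simply that a modest number of consenting app users, multiplied by the size of a typical friend list, scales into tens of millions of non-consenting profiles.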

Example 4:

Unlike Facebook and Google, neither traditional news media nor the bevy of digital upstarts has the capacity to collect the motherlode of personal data of deep interest to digital advertisers who want to target their marketing efforts precisely. This limits their digital ad growth considerably relative to the platform giants and prevents them from recouping the revenue lost from plummeting print ad and print subscription revenues.

What this amounts to is that Google and Facebook have used their size and technical prowess in gathering their users’ personal data to accumulate unprecedented power over the distribution of the web’s content – including news. Digital ad revenues in the United States grew by $2.7 billion in the first quarter of 2016 alone, compared with a year earlier. Of that, $1.4 billion went to Google, $1 billion to Facebook — and just $300 million to everybody else. At this point, the pair account for about 70 percent of the total U.S. digital advertising market and command 90 percent of incremental growth. And the numbers are not too different in Canada. Google’s share of the Canadian digital advertising market is almost 10 times that of the entire Canadian daily newspaper industry and 60 times that of community newspapers.

The result in Canada: The Globe and Mail recently offered yet another in what appears to be a never-ending round of industry buyouts, reducing its journalistic complement by a further two dozen to about 250, which is about 100 fewer than it employed in 2010. Staffing at the much-heralded Toronto Star Touch tablet app was scaled back as third-quarter revenues dropped 20 percent for Torstar’s Star Media Group. All told, the Star newsroom has shrunk to 170 from 470 a decade ago. And revenues at its Metroland community papers, once seemingly immune to the industry’s ravages, were down 10 percent.

Publisher or platform intermediary?

To a growing array of critics, Facebook is a media company and should be regulated as such. For its part, Facebook wants to be viewed differently – as a platform – free of the regulations, responsibilities and tax rules of traditional outlets such as newspapers and broadcasters. In Canada, it does not want to be subject to CRTC regulation and it certainly does not want to be considered a publisher responsible for the accuracy of the content on its site or be subject to liability for possible defamation contained in its posted content.

But despite adamantly denying that it is a media company, it has become the place where roughly 40 per cent of Canadians and Americans go to get their daily news.

That reality is replicating itself in other jurisdictions and increasingly pitting the company against lawmakers in several countries, who want Facebook and other social-media giants such as Google to take on more liability for the content posted on their platforms – or face sanctions.

The root of the problem is that the key catalysts of the dissemination of “fake news” disinformation on social media platforms like Facebook are the opaque algorithms and the complex online advertising models that dominate these technology platforms. These two features combine to form the core of a very profitable business model that inherently pushes sensational paid and unpaid content that is often false, inaccurate, or misleading into user newsfeeds. In fact, as Facebook has grown, it has become ever more reliant on advertising, which brought in 98 per cent of its revenues last year, up from 84 per cent in 2012.

The truth is that a platform such as Facebook benefits massively from its users reading and sharing sensational fake news articles that carry advertisements. Again, this is because Facebook’s algorithms are designed to take users’ personal data and deliver exactly the content they want into their newsfeeds, including content that plays to their political and social biases. In fact, many platform algorithms are designed to feed users ever more extreme material that plays to these biases. This increases the average time a user spends on platforms such as Facebook, thereby increasing the platform’s advertising revenue.

As a result, according to a growing number of observers, this business model is contributing to political polarization and radicalization as platform users view only content that aligns with their political views and, in some cases, ever more extreme articulations of those views fed to them by the platform’s algorithms.

But despite the harm being done by a business model that is essentially hard-wired to spread fake news, companies like Facebook and Google continue to devote massive lobbying resources to fighting regulatory efforts to rein in these core elements of their business model.

The reason? Facebook, Google and other platform giants know full well that legislated changes to their business model that would reduce fake news could radically affect their revenues, profits and overall market capitalization. Case in point: on March 19, the first trading day after publications such as the New York Times and the Guardian released the details behind Cambridge Analytica’s use of Facebook data on behalf of the Trump campaign, Facebook shares dropped 6.8 percent. On March 20, they dropped another 3 percent.

But no matter how hard they fight to maintain the status quo, governments will not allow companies with the size and power of Facebook and Google to exist in a regulatory vacuum for long. Facebook has become the world’s largest social network, connecting more than two billion people around the world. If it cannot control the spread of fake news through its own efforts, regulators will have to decide for themselves how to regulate it.

In particular, regulators need to look closely at five issues:

  • Options for giving users more control over their personal data;
  • Requiring platforms to take more responsibility for what is posted on their platforms;
  • The relationship between privacy and defamation law;
  • Requiring greater transparency in platform algorithms; and
  • Closing a range of tax loopholes that are being exploited by the platforms.

Let’s start with giving users more control over how they share their personal data with the platforms. Subsequent posts will deal with the other four issues.

The fake news/data privacy debate in Canada

In late February, the House of Commons Standing Committee on Access to Information, Privacy and Ethics released the results of a comprehensive study into Canadian privacy law.

The report touches on everything from special privacy safeguards for minors to enhanced enforcement powers for the Office of the Privacy Commissioner of Canada, but at its heart are three key findings: the law is no longer fit for purpose, the standard of consent is not good enough, and Canada is at risk of restrictions on data transfers with the European Union if the government does not act.

The House committee report recommends significant reforms to the standard of consent over the platforms’ use of personal data. However, platforms such as Facebook and Google will put up a massive fight over implementing higher standards of opt-in consent.

The Commons report is clear that meaningful consent must start with the notion that Canadians should have to give explicit and informed consent before Facebook and other platforms can use or sell their personal data to advertisers and other parties. However, the Committee seems to have left it to the government to figure out the details of how that objective might best be achieved.

According to Professor Michael Geist, Canada Research Chair in Internet and E-commerce Law at the University of Ottawa, Privacy Commissioner Daniel Therrien, the Canadian Internet Policy and Public Interest Clinic, and other expert commentators, meaningful and informed consent would mean changes to the federal Personal Information Protection and Electronic Documents Act and other legislation as follows:

  • First, federal political parties are not subject to privacy laws. This is clearly unacceptable. Information about our political views is highly sensitive and therefore particularly worthy of protection. We must take action in the face of serious allegations that democracy is being manipulated through analysis of the personal information of voters. Bringing parties under privacy laws would be a step in the right direction.
  • Second, Canada should implement opt-in consent for the sharing of personal data as the default approach. At the moment, opt-in is only used where strictly required by law or for highly sensitive information such as health or financial data. Under the current system, the majority of personal information that is collected, used, and disclosed by Facebook, Google and Twitter, and sold to advertisers and other third parties, is handled without the informed consent of the user.
  • Third, since the informed consent of the collection of personal data depends upon the public understanding how their personal information will be collected, used, and disclosed, the rules associated with transparency must be improved. Confusing negative-option check boxes that leave the public unsure about how to exercise their privacy rights should be rejected as an appropriate form of consent.

Moreover, given the uncertainty associated with big data and cross-border transfers of information, new forms of transparency in privacy policies are needed. For example, algorithmic transparency would require search engines and social media companies to disclose how information is used to determine the content displayed to each user. Data transfer transparency would require companies to disclose where personal information is stored and when it may be transferred outside Canada.

  • Fourth, effective consent means giving users the ability to exercise their privacy choices when dealing with platforms such as Facebook. Most policies are offered on a “take it or leave it” basis (agree/disagree check boxes) with lengthy terms and conditions written in legalese that few ordinary users can understand. These user terms and conditions leave users little room to customize how their information is collected, used and disclosed. This needs to change – real consent should also mean real choice of what personal information can be collected by a platform and how it can be used.
  • Fifth, stronger enforcement powers granted to the Privacy Commissioner are needed to address privacy violations. Canadian privacy law is still premised on moral suasion or fears of public shaming, not tough enforcement backed by penalties. If privacy rules are to be taken seriously, there must be serious consequences when companies run afoul of the rules. Even informed consent will not be adequate in a world where data may be used for multiple purposes not always known to the user when it is collected. With that in mind, changes should be made to the Personal Information Protection and Electronic Documents Act to allow the Privacy Commissioner to go into an organization to independently confirm that the principles in our privacy laws are being respected – without necessarily suspecting a violation of the law. These inspection powers exist in other regulated industries; why isn’t our personal information worthy of the same protection?
  • Sixth, the Privacy Commissioner’s office must be given the power to make orders and issue fines, allowing the Office to deal more effectively with companies that refuse to comply with the law.

Europe’s General Data Protection Regulation (GDPR) and ePrivacy Regulation – a model for Canada?

The EU General Data Protection Regulation

The EU General Data Protection Regulation (GDPR) was created to harmonize data privacy laws across all EU countries. The GDPR comes into effect in May 2018. A major element of the GDPR is that the processing of any EU citizen’s information is now protected, regardless of whether the processing is done within the EU and regardless of where the platform is based. Any platform around the globe that sells to an EU citizen is bound by law to protect their personal data. This will put pressure on Canadian privacy law to align with the GDPR.

The concept of data traffic has been expanded in the GDPR to include all metadata derived from any communications. The GDPR also strengthens the rules on consent governing how a user’s personal information can be used and whether it can be shared. It also makes it easier for users to access their personal data, requiring all businesses and websites that collect any information from a user to retain that information and make it available to the user on request.

An important ‘right to be forgotten’ (i.e. to be de-listed from search engine results on request) is provided for under the GDPR, as is a right to data portability.

ePrivacy Regulation

A second regulation – the European Union ePrivacy Regulation – has been published (in draft form) to broaden the scope of the current EU ePrivacy Directive and align the various online privacy rules that exist across EU member states. The regulation takes on board the definitions of privacy and data introduced in the EU’s General Data Protection Regulation, and acts to clarify and enhance them. In particular, the areas of unsolicited marketing, cookies and confidentiality are covered in a more specific context in the ePrivacy regulation. Most observers believe the regulation will come into effect sometime in the first half of 2019.

Unsolicited Marketing

The current draft of the EU ePrivacy regulation requires any type of communication, including emails and text messages, to be consented to. Marketers will not be able to send emails or texts without prior permission from each email or mobile account holder.


Cookies

Under the proposed regulation, cookie preferences will be managed through browser and software settings that each user can adjust to suit their willingness to share personal information. This will do away with the litany of banner pop-ups requesting user consent for cookies on individual websites, which previous rules required each website to obtain from each user.


Since the ePrivacy regulation is an add-on to the existing ePrivacy directive, one aim was to broaden its scope to bring online communications providers under the same requirements as traditional telecommunications providers. In this regard, companies such as Gmail, Skype, Facebook Messenger and WhatsApp are now required to provide the same level of customer data protection as the EU’s bricks-and-mortar providers. Providers of any electronic communications service are required to secure all communications using the best available techniques. This creates a need for websites to stay technologically in sync with the best security features on the market.

The new provisions also require metadata to be treated the same as the content of the communications it accompanies. They prohibit the interception of any such communication except where specifically authorized under the law of an EU member state (such as within a criminal investigation).

Main differences between the two regulations

Each regulation was drawn up to reflect a different segment of EU law. The GDPR was created to enshrine Article 8 of the EU Charter of Fundamental Rights, which protects personal data, while the ePrivacy regulation was created to enshrine Article 7 of the Charter, which protects a person’s private life. The private sphere of the end user is covered by the ePrivacy regulation, making it a requirement for a user’s privacy to be protected at every stage of every online interaction.

The ePrivacy regulation also takes the broad online retail sector into account in terms of how personal information might be used, and in this sense it complements the GDPR.


On March 20, Canada’s Privacy Commissioner Daniel Therrien announced he was launching an investigation into whether Facebook violated federal laws by allowing user data to be misused – the latest development in the political crisis engulfing the social media giant as it struggles to respond to intensifying calls from lawmakers around the world to address privacy breaches and political interference. The probe will investigate reports that a U.K. political data-analysis firm, Cambridge Analytica, hired by Donald Trump’s election campaign team, improperly accessed data from 50 million Facebook users.

The Commissioner has also repeatedly called for legislative changes similar to those proposed above.

Also on March 20, the acting Minister responsible for Democratic Institutions, Scott Brison, indicated an openness to updating Canada’s privacy laws, which presumably includes the Personal Information Protection and Electronic Documents Act.

With the first of Europe’s new privacy rules set to take effect in just six weeks, and new stories of privacy violations by the platforms, political consultants and the political parties themselves breaking every day, the Trudeau government must move now on long-overdue privacy reforms.


