Bill C-27 is Canada’s latest attempt to regulate the use of personal data in the digital realm. It deserves a close look, because the policy choices Canada makes on digital regulation in the coming months carry huge economic implications.
The role of “consent” in Bill C-27
While a modest improvement over the 2020 data privacy legislation (Bill C-11), which died on the order paper with the call of the 2021 federal election, Bill C-27 still relies on a restricted consent model (i.e., under the legislation, a data processor like Facebook must obtain a user’s consent before using his or her personal data). But the legislation appears to give Facebook and other giant data processors, such as Google and TikTok, ample opportunity to monetize their users’ personal data by allowing them to bury the request for consent in their overall Terms of Service. In other words, if you don’t consent to the Terms of Service (which include a small section on the platform’s policy of extracting your personal data and using it to send you personalized ads), you don’t get to use any feature of the platform. This matters greatly in economic terms because the business model that has come to dominate tech over the last twenty years or so is the ad-supported, services-for-free model, which relies on the ability of the giant digital companies to extract your personal data from anywhere on the internet and monetize it through personalized ads.
How central is advertising revenue to the social media giants? Meta, the parent company of Facebook, reported third-quarter 2022 revenues of $27.714 billion, with approximately 98% derived from advertising. Advertising accounted for just under 80% of Google’s revenue in its latest quarter. The business logic of both companies is simple: the more personal information they have about their users, the more money they make from advertising.
This means that if Canada legislated a complete ban on the extraction and monetization of personal data through C-27, the ad-supported, services-for-free business model would collapse in Canada. In such an eventuality, the global platforms would have to shift to some other business model (e.g., monthly subscriptions) – or leave the country. Of course, if Canada were the only country to completely ban the extraction of personal data for monetization, the global platforms would be more likely to leave Canada than to develop a completely new business model for a country of fewer than 40 million people.
In any case, the specifics of Bill C-27 make clear that a complete ban on the extraction and monetization of personal data in Canada is not going to happen. This is no surprise. The 2020 predecessor to Bill C-27 — Bill C-11 — was actually condemned by former Privacy Commissioner Daniel Therrien as a “step backward” for personal data privacy. Bill C-27 does contain improvements over C-11 — the government has listened to some of the concerns of the privacy critics. However, it has predictably listened more to the concerns of the global platforms and other tech behemoths than to those of Canadian privacy advocates. In this latest iteration of Canadian data privacy reform, the right to keep your personal data private still takes a back seat to commercial interests.
Bill C-27 aligns fairly well (although not completely) with the preferred Silicon Valley approach to regulating the use of personal data. That does not sit well with many privacy critics, who argue that data protection law is meant to set basic ground rules for data processors such as Facebook, Google, TikTok and Amazon. In the EU’s General Data Protection Regulation (GDPR), and in Quebec’s new privacy law, the balance between personal data privacy and commercial interests is tilted somewhat more towards data privacy than in C-27. In many critics’ eyes (including this author’s), if done right, the EU approach creates a hard stop for certain illegitimate uses of personal data (see below) and forms the basis of a strong regulatory framework for more legitimate uses.
But most of Big Tech passionately believes that strong regulatory frameworks such as the EU’s slow down commerce and innovation, and most of the global corporate data users operating in Canada have aggressively lobbied the federal government for a soft, flexible approach to personal data protection. It is an approach premised on the idea that, by and large, the tech industry is composed of good actors who have our interests at heart.
This author disagrees with that proposition. The global platforms do not have Canada’s interests at heart – they have their shareholders’ interests at heart. On the other hand, they do play a very significant role in the everyday lives of Canadians (particularly younger Canadians), and that is why the government can’t just change the rules on them overnight and effectively make their chosen business model illegal in Canada. For the time being (and likely well into the future), Canada has to strike a balance between the global platforms’ commercial need to monetize Canadians’ personal data and the right of Canadians to keep their personal data private – if that’s their choice.
On these sorts of digital issues, Canada often ends up midway between the EU and Silicon Valley, and that is pretty much where Bill C-27 is right now. That said, it is possible for Canada to choose a position closer to the EU, whereby platform users can say “no” to the use of their personal data for personalized advertising and still have access to the global platforms’ many functions. Even the EU approach provides significant leeway for the giant platforms to collect and monetize personal data. In other words, Canada could choose an option that prohibits platforms like Facebook from burying their request for consent deep in their massive Terms of Service document, instead requiring a short, plain-language “yes or no” request for the use of personal data for personalized advertising. Those Canadians who want highly personalized advertising could voluntarily consent to sharing their personal data with the platforms, and those who are fine with non-targeted ads and want to keep their personal data private could say no. Either way, you could continue to use the many functions of Facebook, TikTok and Google.
It is also essential to remember that C-27 will be going to committee in the coming weeks, following the conclusion of second reading debate. In the Canadian system, it is in committee following second reading that the most significant amendments to legislation are generally made. Moreover, Canada has a minority Liberal government, meaning the Liberals lack a majority in Parliament (and in its committees) and need the support of either the NDP or the Bloc (or both) to get legislation through. At the very least, it is likely that both opposition parties will push hard for fewer and more tightly defined exemptions from seeking consent as the price of their support for C-27. Perhaps they can even be convinced to push for other elements of the EU approach.
The problem with the ‘legitimate interests’ section of Bill C-27
(Note to readers: the following gets fairly deep into the legislative weeds; feel free to skip ahead to the next section.)
How much tightening of the platforms’ exemptions from seeking consent should be undertaken?
A significant change in C-27 (relative to C-11) is a new “legitimate interests” exception (s. 18(3)), which permits the collection or use of personal information by an organization without individual consent where it is for the “purpose of an activity in which the organization has a legitimate interest that outweighs any potential adverse effect on the individual resulting from that collection or use” and which a reasonable person would expect.
The section then goes on to say that personal information collected without consent must not be used for the purpose of influencing the individual’s behaviour or decisions.
A number of interpretive questions arise here – to say the least. What is a necessary or legitimate interest in the case of online platforms? Are targeted ads and recommendations (through algorithms) reasonable to expect, or could non-targeted advertising be a viable alternative? And what does it mean to influence behaviour or decisions? Is this wording taking away the platforms’ ability to use the personal information they collected without consent for targeted ads and algorithms?
It is also important to point out that Bill C-27’s ‘legitimate interests’ exception is different in important respects from a similarly named section in the EU’s GDPR. Although Bill C-27 gives a nod to the importance of privacy as a human right in a new preamble, the human rights dimensions of privacy are not particularly evident in the body of the Bill. The ‘legitimate interests’ exception is available to platforms unless there is an “adverse effect on the individual” that is not outweighed by the organization’s legitimate interest (as opposed to the ‘interests or fundamental freedoms of the individual’ under the GDPR).
Presumably, it will be the platforms that undertake the initial calculation of whether there is an “adverse effect on the individual” that is not outweighed by the organization’s legitimate interest. One of the persistent problems in data protection law has been quantifying adverse effects on individuals. Data breaches, for example, are shocking and distressing to those affected, but it is often difficult to show actual damages flowing from the breach, and moral damages have been considerably restricted by courts in many cases. Some courts have even found that the ordinary stress and inconvenience of a data breach is not compensable harm, since it has become such a routine part of life. If ‘adverse effects’ on individuals are reduced to quantifiable effects, the ‘legitimate interests’ exception will be far too broad.
This is not to say that some notion of the platforms’ ‘legitimate interests’ in monetizing personal data cannot be reconciled with the protection of the user’s data. There is already clearly an attempt in C-27 to incorporate some checks and balances, such as reasonable expectations and a requirement to identify and mitigate any adverse effects. But what C-27 does is take something that, in the GDPR, was meant to be quite exceptional and turn it into a potentially much broader basis for the use of personal data by the platforms without the consent of the individual. It is able to do this because, rather than reinforce the primacy of privacy rights in the legislation, C-27 places personal data privacy on an uneasy par with commercial interests in using personal data. The focus on quantifying an individual’s ‘adverse effects’ runs the risk of equating privacy harm with quantifiable harm, thus diminishing the human and social value of privacy.
In the author’s opinion, the “legitimate interests” section of Bill C-27 needs to be significantly amended to emphasize the primacy of personal data privacy. In its present form, the entire section has the smell of an unnecessary sop to the global platforms.
Meta and the EU’s GDPR
Let’s take a close look at the practical implications of taking a personal data privacy approach similar to that of the EU’s GDPR, which took effect in 2018.
Because of the privacy provisions in the GDPR, Meta suffered a major defeat on Jan. 4 that could severely undercut its Facebook and Instagram advertising business after European Union regulators found it had illegally forced users to effectively accept personalized ads.
The decision, including a fine of 390 million euros ($414 million US), has the potential to require Meta to make costly changes to its advertising-based business in the European Union, one of its largest markets.
The ruling is one of the most consequential judgments since the 27-nation bloc, home to roughly 450 million people, enacted a landmark data-privacy law aimed at restricting the ability of Facebook and other companies to collect information about users without their prior consent. The law took effect in 2018.
The case hinges on how Meta receives legal permission from users to collect their data for personalized advertising. The company’s terms-of-service agreement — the very lengthy statement that users must accept to gain access to services like Facebook, Instagram and WhatsApp — includes language that effectively means users must either allow their data to be used for personalized ads or stop using Meta’s social media services altogether.
Ireland’s data privacy board, which serves as Meta’s main regulator in the European Union because the company’s European headquarters are in Dublin, said EU authorities had determined that placing the legal consent within the terms of service essentially forced users to accept personalized ads, violating the European law known as the General Data Protection Regulation, or GDPR.
Meta has three months to outline how it will comply with the ruling. The decision does not specify what the company must do, but it could result in Meta’s allowing users to choose whether they want their data used for such targeted promotions.
If a large number of users choose not to share their data, it would cut off one of the most valuable parts of Meta’s business. Information about a user’s digital history — such as what videos on Instagram prompt a person to stop scrolling, or what types of links a person clicks when browsing Facebook feeds — is used by marketers to get ads in front of people who are the most likely to buy. The practices helped Meta generate $118 billion in revenue in 2021.
The judgment puts 5 to 7 percent of Meta’s overall advertising revenue at risk.
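To get a rough sense of the dollar magnitude, the figures above can be combined in a back-of-envelope calculation. This is a sketch only: the ~98% advertising share is borrowed from Meta’s Q3 2022 results cited earlier and applied here as an assumption about 2021, and the actual revenue at risk will depend on how the ruling is implemented.

```python
# Back-of-envelope estimate of the EU ruling's revenue impact.
# Figures from the article: Meta's 2021 revenue was $118 billion, and the
# judgment puts 5-7% of overall advertising revenue at risk. The 98%
# advertising share is an assumption carried over from the Q3 2022 figure.

total_revenue_2021 = 118.0       # billions USD, per the article
ad_share = 0.98                  # assumed share of revenue from advertising
ad_revenue = total_revenue_2021 * ad_share

at_risk_low = ad_revenue * 0.05  # low end: 5% of advertising revenue
at_risk_high = ad_revenue * 0.07 # high end: 7% of advertising revenue

print(f"Roughly ${at_risk_low:.1f} to ${at_risk_high:.1f} billion per year at risk")
```

On these assumptions, the ruling puts something on the order of $6 billion to $8 billion of annual revenue in play – real money even at Meta’s scale.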
The penalty contrasts with the situation in the United States, where there is no federal data privacy law and only a few states, like California, have taken even tentative steps to create data privacy rules. But any changes that Meta makes as a result of the EU ruling could affect users in the United States; many tech companies apply EU rules globally because that is easier than limiting them to Europe.
The EU judgment is the latest business headwind facing Meta, which was already grappling with a major drop in advertising revenue because of a change made by Apple in 2021 that gave iPhone users the ability to choose whether advertisers could track them. Meta said last year that Apple’s changes would cost it about $10 billion in 2022, with consumer surveys suggesting that a clear majority of users have blocked tracking.
Meta’s struggles come as it is trying to diversify its business from social media to the virtual reality world known as the metaverse. The company’s stock price has plummeted more than 60 percent in the past year, and it has laid off thousands of employees.
The Jan. 4 announcement relates to two complaints filed against Meta in 2018. Meta said it would appeal the decision, setting up what could be a prolonged legal fight that tests the power of the GDPR and how aggressively regulators use the law to force companies to change their business practices.
Canada is at a crucial moment in the regulation of its digital content. Regulators are looking to rein in the world’s largest technology firms by imposing new standards related to privacy and personal data (Bill C-27), rewriting the Competition Act to ensure that markets (including digital markets) function properly (currently in consultation), ensuring that streamers have the same Canadian content obligations as other broadcasters (Bill C-11), ensuring that Canadians are protected from online harms (forthcoming), and forcing Facebook and Google to pay for the news content they profit from but that other news media generate (Bill C-18).
Why is all this happening now?
Canada, like other advanced democracies, is in many ways digging itself out of a regulatory hole rooted in Section 230 of the US Communications Decency Act (CDA) and its global “spread”.
Section 230 of the CDA was enacted in 1996 for the purpose of enabling online platforms to act responsibly to screen and block offensive materials and to foster free expression online. Unfortunately, Section 230 has been broadly interpreted to also immunize these global corporations from the accountability measures governments regularly use to further the public interest.
In short, digital companies who define themselves as platforms often do so in order to minimize their exposure to regulatory frameworks that would make them legally responsible for the digital content and services their sites make possible.
The current push to regulate the digital economy can therefore be seen as an attempt to apply a set of accountability measures to digitally based corporations that would have been in place from the very beginnings of the internet had Section 230 of the CDA not been put in place and then muscled onto the world stage by the US government and the Silicon Valley giants. For example, if Facebook and Google wanted to profit from the work of Canadian journalists, they should have been paying Canadian journalists starting around 2005 at the very latest. And that is exactly what Bill C-18 will force them to do.
Perhaps most importantly, Canada, the US and the UK are responding to the legislative accomplishments of the EU, which has emerged as the de facto leader in regulating the digital economy. As discussed in detail above, the 27-nation bloc, home to roughly 450 million people, enacted a landmark data-privacy law (the GDPR) aimed at restricting the ability of Facebook and other companies to collect personal data from users without their prior consent. The law took effect in 2018, and on Jan. 4 of this year Meta was hit with a $414 million fine for violating it.
On November 1, 2022, the Digital Markets Act (DMA), the EU’s flagship digital gatekeeper legislation, entered into force, starting the clock on the legislation’s full application.
On November 16, 2022, the EU’s Digital Services Act (DSA) entered into force. The DSA’s main purpose is to fight the spread of illegal content, online disinformation, and other societal digital risks. The DSA introduces a comprehensive regime of content moderation rules for a wide range of businesses operating in the EU, including all providers of hosting services and “online platforms”.
Add to this list the EU’s AI Act (AIA), regulating artificial intelligence, and you have an EU digital regulatory framework that is years ahead of Canada’s.
Again, the EU regulatory framework is designed to carefully balance the individual rights of EU citizens and the commercial interests of the global data processors. Yet Meta, Google, Amazon, TikTok and the rest are fighting this progressive, but profoundly pragmatic, legislative agenda every inch of the way.
In pushing ahead with Bill C-18 and finally forcing Google and Facebook to pay for the original Canadian journalism they have used and profited from for decades, Canada is courageously siding with Canadian news media and journalists against Facebook and Google.
As important as Bill C-18 is, Bill C-27 will be a much bigger test of whether Canada can stand up to the Silicon Valley giants and follow the EU’s pragmatic path of asserting the primacy of personal data privacy while at the same time allowing sufficient room for the world’s digital giants to remain commercially viable in Canada.
Bill C-27 will likely be going to committee in the next month or two. When the gavel comes down on the final committee vote on the final amendment, Canadians will have a good sense of which of Canada’s political parties have their back when it comes to protecting their personal data.