Seventy percent of Americans who voted for Donald Trump still believe that the 2020 US presidential election was stolen and that Trump really won.
This is despite the fact that, after the election, the Trump campaign and allied interests filed and lost at least 63 lawsuits contesting election processes, vote counting, and vote certification in multiple states. Among the judges who dismissed the lawsuits were some appointed by Trump himself.
Nearly all the suits were dismissed or dropped due to lack of evidence. Judges, lawyers, and other observers described the suits as “frivolous” and “without merit”. In one instance, the Trump campaign and other groups seeking his re-election collectively lost multiple cases in six states on a single day.
The failure of these lawsuits was covered in great detail by the legacy news media (the New York Times, the Globe and Mail, etc.) and Trump’s continued denial of Biden’s victory is regularly condemned across the media landscape.
But still, more than two years after the 2020 election, 70% of Trump voters believe Trump won. Why?
While it is true that the vast majority of media outlets that actually employ professional journalists have been clear that Biden didn’t steal the election from Trump (Fox News excepted), this is not the case in the ad-supported social media world where many of Trump’s supporters get their “news”. This is an algorithm-driven world dominated by user-generated content that makes no claim to represent any sort of objective reality, and where baseless conspiracy theories often thrive. Moreover, in this social media world, there is a large and growing community of users who treat news from outlets that employ professional journalists with suspicion, if not outright contempt (i.e. as “fake news”).
On top of this, online platforms track their users’ internet use through so-called “cookies” they attach to a user’s browser. They also supplement their user profiles with information from data brokers who comb the internet gathering information on individual platform users. Acxiom, one of the larger U.S. data brokers, claims to have files on 2.5 billion people, with about 11,000 data points per consumer.
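The profile-building described above can be sketched in a few lines of Python. This is a toy illustration only: the cookie ID, the broker fields, and the inference rule are all invented for this example, while real broker profiles run to thousands of data points.

```python
# Toy sketch of how a platform might merge its own cookie-derived browsing
# history with purchased data-broker attributes into a single user profile.
# All field names and the inference rule are hypothetical illustrations.

def build_profile(cookie_id, browsing_history, broker_data):
    """Combine first-party tracking data with third-party broker data."""
    profile = {
        "cookie_id": cookie_id,
        # First-party signal: what the platform observed directly.
        "browsing_history": list(browsing_history),
        # Third-party signal: attributes bought from a data broker.
        **broker_data,
    }
    # Derive an inferred interest from browsing behaviour -- this is the
    # kind of inference that later drives ad and content targeting.
    if any("trump" in url.lower() for url in browsing_history):
        profile["inferred_interest"] = "pro-Trump politics"
    return profile

profile = build_profile(
    cookie_id="abc-123",
    browsing_history=["https://example.com/trump-rally-coverage"],
    broker_data={"age_bracket": "45-54", "region": "Ohio"},
)
print(profile["inferred_interest"])  # -> pro-Trump politics
```

Once a profile like this exists, it becomes the raw material for the algorithmic targeting discussed below.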
If these profiles contain information that suggests a user is pro-Trump (originally gathered, let’s say, from a user’s browsing history), content from various groups and individuals promoting the “Big Lie” will find its way onto the user’s newsfeed via the platform’s algorithms. As such, a pro-Trump social media user would likely be bombarded with content claiming that Biden stole the election from Trump. After sustaining this sort of bombardment by platform algorithms, a Trump voter could very well become an election denier – and stay that way no matter what the legacy news media (and the Democrats) say. This is a direct function of the way social media algorithms work.
Put a little differently, on the issue of whether Biden stole the election from Trump, there are two directly opposing online “realities”. One is an objective reality anchored in the findings of established democratic institutions (electoral commissions, the courts, etc.) and reported upon at length in the legacy media by professional journalists. This reality definitively says Biden won the 2020 election fair and square.
But there is also an alternate online “reality” rooted in an ad-supported eco-system driven by social media algorithms, tracking cookies and information brokers. These algorithms use our personal data (generally without our permission) and are explicitly designed to push users towards “engagement” with the social media platforms at all costs – filling a platform user’s newsfeed with content that may or may not be true (the algorithms don’t care), but that often inspires fear, resentment, anger and outrage. If content suggesting that the 2020 election was stolen from Trump will keep a user glued to a platform so that the platform can sell more ads, that user will get more and more misinformation of this sort.
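A minimal sketch of this engagement-first ranking logic follows. The scores, field names, and boost factors are made up for illustration; the point is structural: nothing in the scoring function asks whether an item is true.

```python
# Toy engagement-maximizing feed ranker. Item fields, scores, and boost
# factors are invented for illustration; real ranking models use
# thousands of features. Note that truthfulness is not among them here.

def rank_feed(items, user_interests):
    """Order items by predicted engagement. Truth is not a feature."""
    def engagement_score(item):
        score = item["predicted_clicks"]
        # Outrage-inducing content tends to drive engagement, so it is
        # (implicitly or explicitly) boosted.
        if item["outrage_level"] > 0.5:
            score *= 1.5
        # Matching the user's inferred interests boosts the score further.
        if item["topic"] in user_interests:
            score *= 2.0
        return score
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed(
    [
        {"topic": "election-fraud-claims", "predicted_clicks": 0.4, "outrage_level": 0.9},
        {"topic": "court-rulings", "predicted_clicks": 0.5, "outrage_level": 0.1},
    ],
    user_interests={"election-fraud-claims"},
)
print(feed[0]["topic"])  # -> election-fraud-claims
```

In this toy run, the outrage-heavy, interest-matched fraud claim outranks the soberly reported court ruling even though the ruling started with a higher click prediction.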
And this world of social media algorithms has a very specific legal basis – S. 230 of the U.S. Communications Decency Act. Section 230, which was passed in 1996, says an “interactive computer service” cannot be treated as the publisher of third-party content. This protects internet platforms like Facebook and TikTok from lawsuits if a user posts something illegal. In other words, platforms are not legally liable for anything a user posts on their sites – no matter how personally or socially harmful the content is. (Section 230 of the CDA is, in effect, also the law in Canada, because the new NAFTA, formally called the CUSMA, contains Article 19.17, which more or less mirrors S. 230.)
This legally sanctioned world full of misinformation and disinformation not only distributes content from anti-democratic domestic actors like Donald Trump and his supporters; it is also a conduit for disinformation from authoritarian states like Russia and China that seek to destabilize Western democracies. Just as Trump (and Trump wannabes such as Ron DeSantis) use these algorithms to spread misinformation, authoritarian states use ad-supported social media algorithms to try to undermine elections and more generally undermine the rule of law in Canada and other western democracies.
The point of this tour through the alternate online reality of the “Big Lie” is to underscore just how dangerous the algorithmic world of social media is. If 30% of American voters can be convinced that Biden stole the 2020 election from Trump, then 30% of American voters can be convinced of just about anything. In a society where truth and facts are the enemy of a significant portion of the population, open and free democracies are truly living on borrowed time.
The question that this post attempts to answer, therefore, is how best to deal with the problems of online misinformation, disinformation and the myriad other problems associated with what the internet has become. What laws need to be in place in Canada to ensure that, more than two years after an election has been held, there is only one online reality when it comes to who won the election, and that the name of the winner is just a boring, uncontested fact that no one could imagine disputing?
The European Approach
At the present moment, Europe is the jurisdiction that has thought the most about the misinformation (and disinformation) problems associated with an increasingly digital world. As a result, the EU is three to five years ahead of Canada and the US in creating a regulatory framework that begins to provide some solutions.
The EU approach is often described by its proponents as a rights-based approach. In some ways this is true, but describing the EU’s digital strategy as rights-based doesn’t capture the fact that the EU approach is filled with compromises with the commercial world as it is. The EU takes for granted that we are moving more and more towards a digital economy and that many socially useful things are being made possible because of this. It is also thoroughly aware that in order for the European economy to thrive, there must be private sector, European high tech champions.
The EU’s digital regulatory regime also contains numerous compromises with the Silicon Valley giants and their chosen business models. That said, the EU’s evolving digital framework is certainly stronger on consumer protection and individual rights than anything found in North America and, as such, is well worth exploring for lessons for Canada.
Perhaps the best example of compromise in the EU approach to digital regulation lies at its very foundation – it does not attempt to directly push back against the internet platforms’ immunity from legal liability. Instead, it attempts to address the specific harms caused by that immunity. In so doing, it does make things somewhat more difficult for the ad-supported social networks and search engines.
For example, under the GDPR, the EU’s personal data privacy legislation:
- User consent for the use of personal data by a platform must be “freely given, specific, informed and unambiguous.”
- Platform requests for user consent must be “clearly distinguishable from the other matters” and presented in “clear and plain language.”
Because of the personal data privacy provisions in the GDPR cited above, Meta suffered a major legal defeat on January 4 that could severely undercut its Facebook and Instagram advertising business after European Union regulators found it had illegally forced users to effectively accept personalized ads.
The decision, including a fine of 390 million euros ($414 million US), has the potential to require Meta to make costly changes to its advertising-based business in the European Union, one of its largest markets.
The ruling is one of the most consequential judgments since the 27-nation bloc, home to roughly 450 million people, enacted a landmark data-privacy law aimed at restricting the ability of Facebook and other companies to collect information about users without their prior explicit consent. The law took effect in 2018.
The case hinges on how Meta receives legal permission from users to collect their data for personalized advertising. The company’s terms-of-service agreement — the very lengthy and technical legal statement that users must accept to gain access to services like Facebook, Instagram and WhatsApp — includes language that effectively means users must either allow their data to be used for personalized ads or stop using Meta’s various social media services altogether.
EU regulators ruled that this was illegal under the GDPR. Meta has until April 4 to comply with the ruling.
Canadians have none of the legal protections for their personal data provided by the GDPR. That said, Bill C-27, currently being debated at second reading in the House of Commons, has started a serious debate on how best to protect personal data in Canada. While at the time of this writing, the personal data privacy provisions in the legislation generally fall short of the privacy protections afforded by the GDPR, both the NDP and the Bloc seem determined to try to amend the legislation at Committee to bring Bill C-27 up to GDPR standards. In slightly more technical terms, the task at hand is to amend the bill to eliminate implied consent as an alternative to express consent for the use of personal data.
And the Silicon Valley giants? They and the Canadian digital marketing world are having none of it. They are lobbying very, very hard to weaken almost every aspect of the already inadequate legislation.
The EU’s Approach to Fighting Monopoly Power in the Digital World
The EU’s Digital Markets Act (DMA) came into force on November 1, 2022. It was developed to check the power of the giant platforms and stop the spread of “walled gardens” in which the global platforms design a market, operate it and also offer products that they favour in their owner-operated market. Amazon is the most obvious example of this but Google, with 90% of the search market, is perhaps even more dangerous. In an era where what is searchable online can be easily confused with what exists, Google has perhaps the greatest potential to do extraordinary harm.
Google’s efforts to maintain search dominance — in particular, its agreements with smartphone manufacturers that required its search and other apps to be included by default on their devices — have been subject to investigation by the European Commission, resulting in a fine of 4.34 billion euros (about C$5.5 billion). The company has paid handsomely to acquire this default position on some other platforms, for example, by shelling out billions annually to Apple to maintain Google as the default for search on the iPhone Operating System.
With its search dominance unchallenged, Google has increasingly worked to direct web traffic within its own walled garden. In the process, it has increased the prominence of its own services in search results while scraping and displaying content from sites such as Yelp and Wikipedia, keeping user traffic within Google properties.
And this is exactly what the DMA seeks to prevent. The DMA primarily targets the providers of key platform services, such as social networks, search engines, web browsers, online marketplaces, messaging services, or video-sharing platforms, that have at least 45 million monthly active users in the EU and at least 7.5 billion euros in annual revenue. In other words, it’s going after the big fish that have a monopoly or something close to it in the EU digital economy.
Companies likely to be designated gatekeepers include Meta, Microsoft, Google, YouTube (owned by Google), Amazon, Apple, and TikTok (although there will likely be a few others).
As noted above, with social media, a primitive form of A.I. was used not to create content, but to curate user-generated content through algorithms based on personal data such as browsing history. The A.I. behind social media feeds was designed to maximize user engagement with the platforms in order to make money from personalized ads.
While very primitive by the AI standards of today, social media A.I. was able to create a curtain of false illusions, with large parts of the population confusing these illusions with reality (e.g. Biden stealing the 2020 election from Trump, etc.). Though the reality-distorting downsides of social media are now well known, those downsides haven’t been properly addressed because much of our everyday lives have become entangled with it (try telling your twelve-year-old that she can’t use TikTok anymore because it distorts reality and her personal data could be mined by the Communist government of China).
Because of the sophistication of its algorithms, the latest generation of A.I. has the potential to do much more harm than social media – and much more good. A.I. has the potential to help us defeat cancer, discover lifesaving drugs, and invent solutions for our climate and energy crises. There are innumerable other benefits we cannot begin to imagine.
However, if we don’t regulate the latest generation of AI properly, the new A.I. capacities will again be used to gain profit and power for the digital giants and to sow misinformation and disinformation, leading to ever more dueling online “realities” and fewer agreed upon facts.
For example, what if the dominant use of AI is Google, Microsoft, and other AI players unleashing A.I. systems that compete with one another to be the best at persuading users to want what the advertisers are trying to sell, as opposed to educating consumers to buy quality products and services at reasonable prices? More specifically, what if a Bing powered by ChatGPT, with access to reams of personal browsing history, tries to coolly manipulate people on behalf of whichever advertiser is paying Microsoft the most money? And what if the providers of low-quality products and services use AI to target low-income and less educated consumers because they are easier to con?
Sure, all that is happening now but what if the current generation of chatty AI does all these harmful things better and, as a result, does much more damage to vulnerable peoples’ lives?
Nor is it just harmful AI advertising that is worth worrying about. What about when these systems are deployed on behalf of the scammers that have always populated the internet? These financial scams, built on your personal information, have the potential to be much more sophisticated with the current generation of AI – and much more damaging.
How about personalized AI deployed on behalf of unscrupulous political campaigns that have our personal data? How about personalized AI deployed by authoritarian governments like Russia, Iran, North Korea, and China determined to destabilize the world’s democracies? We could be heading very fast into a world where we just don’t know who and what to trust anymore because there is less and less agreed upon reality. This has already been a steadily growing problem for society since S. 230 of the CDA threw liability out the window for internet-based businesses. Things have been going steadily downhill on the personal data privacy front for at least the past twenty years. What if things keep getting worse and worse because giant (and not so giant) companies are unleashing better and better AI-driven propaganda machines to increase profits and maximize shareholder value?
This is where the EU’s AI Act (AIA) regulating artificial intelligence (likely to be finalized later this year or early in 2024) comes in.
The EU AI Act classifies AI use in specific sectors as having unacceptable risk, high risk, limited risk and minimal or no risk.
Those AI risks that are deemed unacceptable are banned completely.
For example, biometric identification (fingerprint patterns, facial features, eye structures, DNA, speech, etc.) is banned except in limited circumstances.
AI in a number of other European sectors is deemed “high risk” but allowed.
For example, in the field of employment, the “high-risk” category applies to algorithms that would terminate employment (among many other employment-related uses of AI). The AI Act still allows algorithms to fire people, but employees fired by an algorithm will know it is the algorithm making the firing decision, and there will be a clear option to appeal to a human.
And there are many other sectors of the economy where AI providers must ensure that EU residents are informed that they are interacting with an AI system. Transparency of this sort, along with human oversight requirements, is perhaps the most important lesson Canada can learn from the draft EU AI Act, although there are many, many others.
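The tiered structure, together with the transparency and human-oversight obligations attached to high-risk uses, can be sketched as follows. The tier assignments and field names below are illustrative simplifications, not the AI Act’s actual legal text or annexes.

```python
# Illustrative sketch of the draft EU AI Act's four risk tiers and the
# obligations attached to them. The mapping of use cases to tiers is a
# simplification for illustration, not the Act's actual classification.

RISK_TIERS = {
    "social_scoring": "unacceptable",      # banned outright
    "employment_termination": "high",      # allowed, with safeguards
    "chatbot": "limited",                  # transparency obligations
    "spam_filter": "minimal",              # no specific obligations
}

def check_ai_use(use_case):
    """Return whether a use is allowed and what obligations attach."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    if tier == "unacceptable":
        return {"allowed": False, "tier": tier, "obligations": []}
    obligations = []
    if tier == "high":
        # e.g. an algorithm that fires an employee must disclose that the
        # decision was automated and offer a route to human review.
        obligations += ["disclose automated decision", "human appeal available"]
    if tier in ("high", "limited"):
        obligations.append("inform users they are interacting with AI")
    return {"allowed": True, "tier": tier, "obligations": obligations}

result = check_ai_use("employment_termination")
print(result["tier"], result["obligations"])
```

The design point is that regulation scales with risk: the same framework bans one use outright, wraps another in disclosure and appeal rights, and leaves a third essentially untouched.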
The intense political battles currently raging in Canada over Bill C-11 (Online Streaming Bill), Bill C-18 (Online News Act) and Bill C-27 (Digital Charter Implementation Act), along with the review of the Competition Act, suggest that the battle between the EU-like digital standards described above and the status quo standards supported by the Silicon Valley behemoths and the digital ad industry is going to be fierce in Canada and the US in the coming years. The EU is 3-5 years ahead of Canada and the U.S. in its digital regulation but the battle lines are already forming on this side of the Atlantic as proposals to implement a whole host of EU-like digital standards are being floated in the face of fierce opposition from the likes of Meta, Google, Apple, Amazon, Microsoft and the digital ad industry.
Contrary to the corporate narrative in Canada and the US, this author believes that the evolving EU digital regulatory framework is flexible, nuanced, and in practice, carefully balances the protection of the rights of EU citizens and the commercial interests of the global data processors. As such, it provides a useful guide for Canada in developing our own digital regulatory framework.
In terms of the current legislative battles before Parliament, this means support for Bill C-11, the bill that obligates all streamers to produce Canadian content just like any other broadcaster whose work is shown on Canadian television.
It also means support for the Canadian news media in their epic battle against Meta and Alphabet over Bill C-18. Facebook and Google have profited immensely from Canadian news over the years but have not produced any original journalism. The established news media employing professional journalists has been badly damaged by the loss of ad revenue to Google and Facebook and deserves to be compensated for the use of their journalism by these giant platforms. This is exactly what Bill C-18 does.
Canadians deserve GDPR-like privacy safeguards for their personal data in Bill C-27 (as advocated for by the NDP and Bloc but not included in the present version of the bill). Platforms should be required to ask users for permission to use personal data (like their browsing history) in a clear, straightforward way and they should also be required to tell users that if permission is granted, their personal data will be used to craft personalized ads. If the user says no to the request, the platforms should be required to respect that decision and still allow users to access their services. Social media and the digital ad industry are vehemently opposed to this (Facebook, for example, received over 97% of its 2022 revenue from personalized ads) but such an approach is necessary to safeguard Canadians’ privacy.
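The consent flow advocated above (a clear, express request, with refusal not cutting off access) might look like the following sketch. The prompt wording and all function and field names are invented for illustration.

```python
# Sketch of a GDPR-style express-consent flow of the kind advocated for
# Bill C-27: the request states the purpose in plain language, and a
# refusal must not block access to the service. Names are hypothetical.

def request_personal_data_consent(ask_user):
    """Ask for express consent in plain language; honour a 'no'."""
    prompt = (
        "May we use your personal data (including your browsing history) "
        "to show you personalized ads? If you decline, you can still use "
        "the service; you will simply see non-personalized ads."
    )
    consent_given = ask_user(prompt)
    return {
        "personalized_ads": bool(consent_given),
        "service_access": True,  # access is never conditional on consent
    }

# A user who declines keeps full access, just without personalized ads.
settings = request_personal_data_consent(ask_user=lambda prompt: False)
print(settings)  # -> {'personalized_ads': False, 'service_access': True}
```

Contrast this with the take-it-or-leave-it terms-of-service approach that EU regulators found illegal in the Meta case above, where declining meant losing access altogether.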
The (really) long-term objective would be to gradually shift the internet away from personalized ads and towards business models rooted in subscriptions and/or old-school “contextual” ads (e.g. sports websites with lots of shaving cream and truck ads, etc.).
The wholly inadequate regulatory framework related to Artificial Intelligence (AIDA) added at the last moment to Bill C-27 needs to have three sets of fundamental amendments at committee. These are:
- AIDA should not only regulate “high risk” AI systems but all systems that deploy AI. There should be a sliding scale of “at risk” sectors similar to the EU’s AI Act. Even AI systems that are deemed to have limited or minimal risk should have some measure of regulation.
- AIDA should regulate collective harms as well as personal harms. The current language related to collective harms is far too weak.
- ISED should not be the regulating Ministry for AI as it is also the Ministry responsible for providing assistance for the development of AI.
If Parliament cannot find a way of passing these three sets of amendments, the AI portion of Bill C-27 should be withdrawn from C-27 followed by a thorough public consultation process appropriate to what is shaping up to be one of the greatest public policy challenges of the decade.
Canadian competition law is also wholly inadequate for the regulation of the digital economy. Many recent international anti-trust policy changes and proposals are driven by the view that the digital economy is increasingly dominated by a small number of very large firms, which provide much of the infrastructure on which our economies now run. This infrastructure includes social media, internet search, ecommerce, mobile devices, cloud computing, and software and applications. As indicated above, the EU’s Digital Markets Act (DMA) is essentially an ex-ante competition law designed to curb the power of the giant digital “gatekeepers” such as Amazon, Apple, Meta, Alphabet and Microsoft. While it would be breaking new ground for Canadian competition policy, Canada should look very closely at the DMA to see how its rules regarding “gatekeepers” can best be adapted to Canada.
Finally, in the coming months, the Liberal government will once again table an online anti-hate bill, and there is much to learn from the content moderation provisions of the EU’s Digital Services Act (DSA).
For example, the DSA bans personalized ads aimed at minors. The new Canadian anti-hate bill should include such a ban and, of course, much, much more.
The views of this author are clearly not shared by the Silicon Valley giants, Canada’s digital ad industry, Canada’s corporate law firms and the Canadian operations of the big global consultants.
However, if explained properly, they are likely to have the support of a substantial majority of Canadians. And Canadian governments are elected by individual Canadians, not corporations.