Are European standards for regulating the internet the answer for Canada?

by Ethan Phillips
March 15, 2023

Heritage Minister Pablo Rodriguez is currently shepherding two bills that have triggered fierce opposition from Silicon Valley

Seventy percent of Americans who voted for Donald Trump still believe that the 2020 US presidential election was stolen and that Trump really won.

This, in spite of the fact that after the election the Trump campaign and allied interests filed and lost at least 63 lawsuits contesting election processes, vote counting, and the vote certification process in multiple states. Among the judges who dismissed the lawsuits were some appointed by Trump himself.

Nearly all the suits were dismissed or dropped due to lack of evidence. Judges, lawyers, and other observers described the suits as “frivolous” and “without merit”. In one instance, the Trump campaign and other groups seeking his re-election collectively lost multiple cases in six states on a single day.

The failure of these lawsuits was covered in great detail by the legacy news media (the New York Times, the Globe and Mail, etc.) and Trump’s continued denial of Biden’s victory is regularly condemned across the media landscape.

And yet, more than two years after the 2020 election, 70% of Trump voters still believe Trump won. Why?

While it is true that the vast majority of media outlets that actually employ professional journalists have been clear that Biden didn’t steal the election from Trump (Fox News excepted), this is not the case in the ad-supported social media world where many of Trump’s supporters get their “news”. This is an algorithm-driven world that makes no claim to representing objective reality and in which baseless conspiracy theories often thrive. Moreover, in this social media world there is a large and growing community of users who treat news from outlets that employ professional journalists with suspicion, if not outright contempt (i.e. as “fake news”).

On top of this, online platforms track their users’ internet use through so-called “cookies” they attach to a user’s browser. They also supplement their user profiles with information from data brokers who comb the internet to gather information on individual platform users. Acxiom, one of the larger U.S. information brokers, claims to have files on 2.5 billion people, with about 11,000 data points per consumer.
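
To make the mechanics concrete, here is a minimal sketch in Python of how a browser-pinned tracking ID and broker enrichment fit together. Everything here is hypothetical (the cookie name, the function names, the profile fields); real ad-tech stacks are vastly more elaborate, but the basic pattern of a unique ID in the browser keyed to an ever-growing server-side profile is the same.

```python
import uuid

# In-memory store mapping a tracking-cookie ID to an accumulating profile.
profiles: dict[str, dict] = {}

def handle_request(cookies: dict[str, str], page_url: str) -> str:
    """Record a page visit against the visitor's tracking ID and return
    the Set-Cookie header that keeps that ID pinned to the browser."""
    cookie_id = cookies.get("tracker_id") or str(uuid.uuid4())
    profile = profiles.setdefault(cookie_id, {"pages": [], "broker_data": {}})
    profile["pages"].append(page_url)  # browsing history accumulates here
    return f"Set-Cookie: tracker_id={cookie_id}"

def enrich_from_broker(cookie_id: str, broker_record: dict) -> None:
    """Merge attributes bought from a data broker (demographics,
    inferred interests, and so on) into the same profile."""
    profile = profiles.setdefault(cookie_id, {"pages": [], "broker_data": {}})
    profile["broker_data"].update(broker_record)

# Example: two visits from the same browser, then broker enrichment.
header = handle_request({}, "https://example.com/politics")
cookie_id = header.split("=", 1)[1]
handle_request({"tracker_id": cookie_id}, "https://example.com/election-news")
enrich_from_broker(cookie_id, {"inferred_leaning": "pro-trump"})
print(profiles[cookie_id])
```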

If these profiles contain information suggesting a user is pro-Trump (gathered, let’s say, from the user’s browsing history), content from the various groups and individuals promoting the “Big Lie” will find its way onto the user’s timeline via the platform’s algorithms. As such, a pro-Trump social media user would likely be bombarded with content claiming that Biden stole the election from Trump. After sustaining this sort of bombardment by platform algorithms, a Trump voter could very well become an election denier, and stay that way no matter what the legacy news media (and the Democrats) say. This is a direct function of the way social media algorithms work.

Put a little differently, on the issue of whether Biden stole the election from Trump, there are two directly opposing online “realities”. One is an objective reality anchored in the findings of established democratic institutions (electoral commissions, the courts, etc.) and reported upon at length in the legacy media by professional journalists. This reality definitively says Biden won the 2020 election fair and square.

But there is also an alternate online “reality” rooted in an ad-supported ecosystem driven by social media algorithms, tracking cookies and information brokers. These algorithms use our personal data (without our permission) and are explicitly designed to push users towards “engagement” with the social media platforms at all costs, filling a platform user’s timeline with content that may or may not be true (the algorithms don’t care), but that often inspires fear, resentment, anger and outrage. If content suggesting that the 2020 election was stolen from Trump will keep a user glued to a platform so that the platform can sell more ads, that user will get more and more misinformation of this sort.
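
The dynamic can be illustrated with a toy ranking loop. To be clear, this is not any platform’s actual algorithm, just a sketch of the incentive structure it describes: the score rewards interest affinity and emotional intensity, and no term anywhere checks whether a claim is true.

```python
# Toy engagement-maximizing feed: score each candidate item by predicted
# engagement (interest overlap times emotional intensity), sort descending.
profile = {"interests": {"politics", "trump"}}

candidate_items = [
    {"claim": "the election was stolen", "topics": {"politics", "trump"},
     "emotional_intensity": 0.9},
    {"claim": "courts dismissed the fraud suits", "topics": {"politics"},
     "emotional_intensity": 0.2},
]

def engagement_score(item: dict, user: dict) -> float:
    # Affinity: how many of the item's topics match the user's inferred
    # interests (built from cookies and broker data, as described above).
    affinity = len(item["topics"] & user["interests"])
    return affinity * item["emotional_intensity"]  # truth is not a factor

feed = sorted(candidate_items,
              key=lambda item: engagement_score(item, profile),
              reverse=True)
print([item["claim"] for item in feed])  # the outrage item ranks first
```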

And this world of social media algorithms has a very specific legal basis: Section 230 of the U.S. Communications Decency Act. Section 230, passed in 1996, says an “interactive computer service” cannot be treated as the publisher of third-party content. This protects internet platforms like Facebook and TikTok from lawsuits if a user posts something illegal. In other words, platforms are not legally liable for anything a user posts on their sites, no matter how personally or socially harmful the content is. (Section 230 is, in effect, also the law in Canada, because the new NAFTA, formally the CUSMA, contains Article 19.17, which more or less mirrors S. 230.)

This legally sanctioned world full of misinformation and disinformation not only distributes content from anti-democratic domestic actors like Donald Trump and his allies; it is also a conduit for disinformation from authoritarian states like Russia and China that seek to destabilize Western democracies. Just as Trump (and Trump wannabes such as Ron DeSantis) uses these algorithms to spread misinformation, authoritarian states use ad-supported social media algorithms to try to undermine elections and, more generally, the rule of law in Canada and other Western democracies.

The point of this tour through the alternate reality of the “Big Lie” is to underscore just how dangerous the algorithmic world of social media is. If 30% of American voters can be convinced that Biden stole the 2020 election from Trump, then 30% of American voters (although not necessarily the same 30%) can be convinced of just about anything. In a society where truth and facts are the enemy of a significant portion of the population, open and free democracies are truly living on borrowed time.

The question that this post attempts to answer, therefore, is how best to deal with online misinformation and disinformation and the myriad other problems associated with what the internet has become. What laws need to be in place in Canada to ensure that, more than two years after an election has been held, there is only one online reality as to who won, and the name of the winner is just a boring, uncontested fact that no one could imagine disputing?

The European Approach

At the present moment, Europe is the jurisdiction that has thought the most about the misinformation (and disinformation) problems associated with an increasingly digital world. As a result, the EU is three to five years ahead of Canada and the US in creating a regulatory framework that begins to provide some solutions.

The EU approach is often described by its proponents as a rights-based approach. In some ways this is true, but what the rights-based label doesn’t capture is that the EU approach is filled with compromises with the commercial world as it is. The EU takes for granted that we are moving more and more towards a digital economy and that many socially useful things are being made possible as a result. It is also thoroughly aware that for the European economy to thrive, there must be private-sector European high-tech champions.

The EU’s digital regulatory regime also contains numerous compromises with the Silicon Valley giants and their chosen business models. That said, the EU’s evolving framework is certainly stronger on consumer protection and individual rights than anything found in North America and, as such, is well worth exploring for lessons for Canada.

The GDPR

Perhaps the best example of compromise in the EU approach to digital regulation lies at its very foundation: it does not attempt to directly push back against the internet platforms’ immunity from liability. Instead, it attempts to address the specific harms caused by that immunity. In so doing, it does make things somewhat more difficult for the ad-supported social networks and search engines.

For example, under the GDPR, the EU’s personal data privacy legislation:

  • User consent for the use of personal data by a platform must be “freely given, specific, informed and unambiguous.”
  • Platform requests for user consent must be “clearly distinguishable from the other matters” and presented in “clear and plain language.”

Because of the personal data privacy provisions in the GDPR cited above, Meta suffered a major legal defeat on January 4 that could severely undercut its Facebook and Instagram advertising business, after European Union regulators found it had illegally forced users to effectively accept personalized ads.

The decision, including a fine of 390 million euros ($414 million US), has the potential to require Meta to make costly changes to its advertising-based business in the European Union, one of its largest markets.

The ruling is one of the most consequential judgments since the 27-nation bloc, home to roughly 450 million people, enacted a landmark data-privacy law aimed at restricting the ability of Facebook and other companies to collect information about users without their prior explicit consent. The law took effect in 2018.

The case hinges on how Meta receives legal permission from users to collect their data for personalized advertising. The company’s terms-of-service agreement — the very lengthy and technical legal statement that users must accept to gain access to services like Facebook, Instagram and WhatsApp — includes language that effectively means users must either allow their data to be used for personalized ads or stop using Meta’s various social media services altogether.

EU regulators ruled that this was illegal under the GDPR. Meta has until April 4 to comply with the ruling.

The GDPR has various other provisions, including:

  • Platform users will have the right to withdraw previously given consent to use their personal data whenever they want, and platforms have to honor their decision. Platforms can’t simply change the legal basis of the processing to one of the other justifications.
  • Children under 16 (EU member states may lower this threshold to 13) can only consent to the use of their personal data with permission from a parent.
  • Platforms need to keep documentary evidence of user consent (a sketch of what such a record might look like follows below).
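
As a rough illustration of the consent requirements listed above, here is a minimal sketch of what a platform’s consent record might look like. The GDPR specifies the requirements, not the data structure, so all field and method names here are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str               # "specific": one record per processing purpose
    notice_text: str           # the plain-language request shown to the user
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Withdrawal must be honoured; the platform may not quietly switch
        # the same processing to a different legal basis.
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

# Example: consent granted for personalized ads, later withdrawn.
record = ConsentRecord("user-123", "personalized_advertising",
                       "May we use your browsing history to personalize ads?",
                       datetime.now(timezone.utc))
record.withdraw()
print(record.active)  # False; the stored record remains as evidence
```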

Canadians have none of the legal protections provided by the GDPR. That said, Bill C-27, currently being debated at second reading in the House of Commons, has started a serious debate on how best to protect personal data in Canada. While at the time of this writing, the personal data privacy provisions in the legislation generally fall short of the privacy protections afforded by the GDPR, both the NDP and the Bloc seem determined to try to amend the legislation at Committee to at least partly bring Bill C-27 up to GDPR standards.

And the Silicon Valley giants? They and the Canadian digital marketing world are having none of it. They are lobbying very, very hard to weaken almost every aspect of the already inadequate legislation.

The DMA and the DSA

As mentioned above, in one way or another, Canada will be forced to deal with a whole range of digital issues over the coming years and the question that this post is asking is whether or not the EU approach to these issues is the right choice for Canada.

Let’s proceed to the EU’s recently passed Digital Markets Act (DMA) to see if we can find any ideas Canada might usefully borrow in developing its own regulatory framework.

On November 1, 2022, the DMA, the EU’s flagship digital “gatekeeper” legislation, entered into force. This started the clock for the legislation’s full application which will be done in phases.

The DMA is premised on the fact that a small number of very large online platforms act as “gatekeepers” in EU digital markets. The Digital Markets Act aims to ensure that these giant platforms behave in a fair way online.

The law primarily targets the providers of key platform services, such as social networks, search engines, web browsers, online marketplaces, messaging services, or video-sharing platforms, that have at least 45 million monthly active users in the EU and at least 7.5 billion euros in annual revenue. In other words, it goes after the big fish that have a monopoly, or something close to it, in the EU digital economy.
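
As a back-of-the-envelope illustration, the two quantitative tests just cited reduce to a simple check. (The actual designation process also weighs market capitalisation and other criteria, so this sketch covers only the two figures mentioned above.)

```python
# The DMA's two quantitative thresholds as described in the text.
MAU_THRESHOLD = 45_000_000        # monthly active EU users
REVENUE_THRESHOLD_EUR = 7.5e9     # annual revenue in euros

def meets_dma_thresholds(monthly_active_eu_users: int,
                         annual_revenue_eur: float) -> bool:
    return (monthly_active_eu_users >= MAU_THRESHOLD and
            annual_revenue_eur >= REVENUE_THRESHOLD_EUR)

# A platform with 60 million EU users and 12 billion euros in revenue:
print(meets_dma_thresholds(60_000_000, 12e9))  # True
```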

Companies likely to be designated gatekeepers include Meta, Microsoft, Google, YouTube, Amazon, Apple, and TikTok (although there will likely be a few others).

Key provisions in the DMA include:
  • Tighter restrictions on how digital gatekeepers can use people’s data—users must give their explicit consent for their activities to be tracked for advertising purposes.
  • Provisions that force gatekeepers to allow their messaging services and social media to team up and allow their users to communicate across platforms. This could mean, for example, Meta-owned WhatsApp users being able to send messages directly to a completely different messaging service, such as Telegram.
  • Presenting users with the option to uninstall preloaded applications on devices.
  • Gatekeepers are banned from ranking their own products or services higher than others in online searches. For example, Amazon would not be able to rank its own products ahead of third-party products in the Amazon-owned marketplace.

Gatekeepers would not be allowed to:

  • Prevent users from uninstalling any pre-installed software or app (e.g. Google could not stop a user from removing a pre-installed Google app from an Android device).
  • Track end users outside of the gatekeepers’ core platform service for targeted advertising, without the effective consent of the user.

The DMA is an ex-ante competition law targeting the giant gatekeepers of the digital economy where “dominance” in markets doesn’t have to be proven. Companies such as Apple, Google, Facebook and Amazon are almost certain to be designated gatekeepers because of their size and a few other characteristics.

On November 16, 2022, the EU’s Digital Services Act (DSA) came into force. The DSA is the sister legislation to the DMA and its main purpose is to fight the spread of illegal content, online disinformation, online hate and other societal digital risks. The DSA introduces a comprehensive regime of content moderation rules for a wide range of digital businesses operating in the EU, including all providers of hosting services and “online platforms”.

EU lawmakers recognized that the largest platforms pose the greatest potential risks to society, including negative effects on fundamental rights, elections, and public health (factually incorrect anti-vaccine information, etc.). The DSA will obligate platforms with over 45 million users in the EU, like YouTube, TikTok, Facebook and Instagram, to formally assess how their products, including algorithmic systems, may exacerbate these risks and to take measurable steps to mitigate them.

For example, the Digital Services Act bans personalized ads aimed at minors.

While there is some crossover between the two laws, they really do address different things. The DMA is more concerned with stopping the big internet companies from abusing their market dominance while the DSA addresses content moderation with stricter rules for very large platforms.

The EU AI Act

Add to this list the EU’s AI Act (AIA) regulating artificial intelligence (likely to be finalized later this year or in early 2024) and you have an EU digital regulatory framework that is years ahead of both Canada’s and the U.S.’s.

The AI Act classifies AI use in specific sectors as having unacceptable risk, high risk, limited risk and minimal or no risk.

Those risks that are deemed unacceptable are banned completely.

For example, biometric identification (fingerprint patterns, facial features, eye structures, DNA, speech, etc.) is banned except in limited circumstances.

AI models using subliminal techniques beyond a person’s consciousness are to be banned except if their use is approved for therapeutic purposes and with the explicit consent of the individuals exposed to them.

Also prohibited are AI applications that are intentionally manipulative or designed to exploit a person’s vulnerabilities, such as mental health or economic situation, to materially distort his or her behaviour in a way that can cause significant physical or psychological harm.

AI-based social scoring (a mix of financial, criminal and other variables combined into a single composite score) is presently banned in both the public and private sectors.

AI in the following sectors is deemed high risk but allowed:

For critical infrastructure, any safety component for road and rail traffic has been included under high risk.

In the educational area, the high risk category applies to personalised learning tasks based on the students’ personal data.

In the field of employment, the high-risk category applies to algorithms that make or assist decisions related to the initiation, establishment, implementation or termination of an employment relation, notably for allocating personalised tasks or monitoring compliance with workplace rules.

Regarding access to public services, AI use in allocating housing, electricity, heating and cooling, and internet is considered high risk.

AI models intended to assess the eligibility for health and life insurance (including private sector insurance) and those to classify emergency calls, for instance, for law enforcement or emergency healthcare patient triage, are also classified as high risk.

A new high risk area was recently added for systems meant to be used by vulnerable groups, particularly AI systems that may seriously affect a child’s personal development. This vague wording might result in covering social media’s recommender systems if they impact minors.

Law enforcement, migration and border control management are also classified as high-risk.

Moreover, any AI application that could influence people’s voting decisions at local, national or European polls is considered high-risk, together with any system that supports democratic processes such as counting votes.

For AI-generated deep fakes, audio-visual content representing a person doing or saying something that never happened, the high-risk category applies unless it is an obvious artistic work.
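
Schematically, the risk tiers described in this section amount to a lookup from use case to obligation level. The sketch below paraphrases this article’s sector lists rather than the final legal text, which was still being negotiated at the time of writing.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed, subject to documentation and oversight obligations"
    MINIMAL = "no new obligations"

# Categories paraphrased from the discussion above, not from the AIA text.
UNACCEPTABLE_USES = {"social_scoring", "subliminal_manipulation",
                     "exploiting_vulnerabilities"}
HIGH_RISK_USES = {"critical_infrastructure", "education", "employment",
                  "public_services", "insurance_eligibility",
                  "emergency_triage", "law_enforcement", "border_control",
                  "elections", "deep_fakes"}

def classify(use_case: str) -> RiskTier:
    if use_case in UNACCEPTABLE_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL  # the limited-risk tier is omitted in this sketch

print(classify("employment").value)  # allowed, subject to ... obligations
```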

Perhaps the most fiercely debated issue in the development of the EU AI Act is where to put general purpose AI models such as ChatGPT.

General purpose AI can be used for just about anything and doesn’t neatly fit into any of the EU’s “use” sectors. These general purpose systems are classified as high risk in the latest draft but this is being fiercely opposed by the Silicon Valley giants and other large AI developers who want the “downstream” users to shoulder the risks of general purpose AI. At least for the moment, the EU is sticking with the “upstream” classification as it is the large companies (mostly American) that have the resources to make changes in the systems if they are found to be doing harm.

Under the AI Act, AI system providers and users will have new transparency obligations vis-à-vis individuals. For example, AI providers must ensure individuals are informed that they are interacting with an AI system. If an AI system generates ‘deep fakes’, the user of the AI system must disclose this. Users of an emotion recognition system or a biometric categorization system must inform the individuals exposed to it.

There are also new documentation and process obligations for high-risk AI systems. The current draft AIA requires preparing a wide range of new documentation and internal processes in relation to high-risk AI systems. For providers of these high-risk AI systems, this will include, for example, preparing:

  • systems for risk management, quality management, and post-market monitoring;
  • processes to ensure data quality and logging;
  • tools for human oversight;
  • measures to ensure accuracy, robustness, and cybersecurity; and
  • technical documentation and instructions for use.

AI Liability Directive and revised Product Liability Directive

The Commission followed up the draft AI Act with proposals, released in September 2022, for an AI Liability Directive and a revised Product Liability Directive, which would make it easier for people to get compensation if they suffer AI-related damage, including discrimination.

Conclusion

The intense political battles currently raging in Canada over Bill C-18 (Online News Act) and Bill C-27 (Digital Charter Implementation Act), along with the review of the Competition Act, suggest that the battle between the EU-like digital standards described above and the status quo standards supported by the Silicon Valley behemoths is going to be fierce in Canada and the U.S. in the coming years. The EU is three to five years ahead of Canada and the U.S. in its digital regulation, but the battle lines are already forming on this side of the Atlantic as proposals to implement a whole host of EU-like digital standards are being floated in the face of fierce opposition from the likes of Meta, Google, Apple, Amazon and Microsoft.

Contrary to the corporate narrative in Canada and the US, this author believes that the evolving EU digital regulatory framework is flexible, nuanced, and in practice, carefully balances the protection of the rights of EU citizens and the commercial interests of the global data processors. As such, it provides a useful guide for Canada in developing our own digital regulatory framework.

In terms of the current legislative battles before Parliament, this means support for Bill C-11, the bill that obligates streamers to produce Canadian content just like any other broadcaster whose work is shown on Canadian television.

It also means support for the Canadian news media in their battle against Meta and Alphabet over Bill C-18. Facebook and Google have profited immensely from Canadian news over the years but have not produced any original journalism. The established news media employing professional journalists have been badly damaged by the loss of ad revenue to Google and Facebook and deserve to be compensated for the use of their journalism by these giant platforms. This is exactly what Bill C-18 does.

Canadians deserve GDPR-like safeguards for their personal data in Bill C-27 (as advocated by the NDP and the Bloc but not included in the present version of the bill). Platforms should be required to ask users for permission to use personal data (like their browsing history) in a clear, straightforward way, and they should also be required to tell users that, if permission is granted, the personal data will be used to craft personalized ads. If the user says no, the platforms should be required to respect that decision and still allow the user to access their services. The social media platforms are vehemently opposed to this (Facebook, for example, received over 97% of its 2022 revenue from personalized ads), but such an approach is necessary to safeguard Canadians’ privacy.

The wholly inadequate regulatory framework for artificial intelligence added at the last moment to Bill C-27 should be withdrawn (also the position of the NDP and the Bloc) and followed by a thorough public consultation process appropriate to what is shaping up to be one of the greatest public policy challenges of the decade. The framework is totally lacking in detail, with the substance relegated to regulations. This means no public debate about regulating AI along the lines of what is currently taking place in Europe over the EU’s AI Act.

Canadian competition law is also wholly inadequate for the regulation of the digital economy. Many recent international anti-trust policy changes and proposals are driven by the view that the digital economy is increasingly dominated by a small number of very large firms, which provide much of the infrastructure on which our economies now run, including social media, internet search, e-commerce, mobile devices, cloud computing, and software and applications. As indicated above, the EU’s Digital Markets Act (DMA) is essentially an ex-ante competition law designed to curb the power of the giant digital “gatekeepers” such as Amazon, Apple, Meta, Alphabet and Microsoft. While it would be breaking new ground for Canadian competition policy, Canada should look very closely at the DMA to see how its rules regarding “gatekeepers” can best be adapted to Canada.

The views of this author are clearly not shared by the Silicon Valley giants, Canada’s digital ad industry, Canada’s corporate law firms and the Canadian operations of the big global consultants.

However, if explained properly, they are likely to have the support of a substantial majority of Canadians. And Canadian governments are elected by individual Canadians, not corporations.

 

 

Ethan Phillips is the editor of Canada Fact Check and a practicing public policy and government relations consultant with 35 years’ experience researching, writing and consulting on Canadian and global public policy issues. He can be reached at Canadafactcheck@gmail.com.
