A decade ago, the greed and carelessness of the financial industry dealt a serious blow to the global economy when financial products that no one but their creators understood imploded. The resulting economic collapse unleashed a wave of financial regulatory reform worldwide.
Now the tech industry, while making some things more convenient, has made serious mistakes of its own. It has abused privacy, squeezed out competition, undermined quality journalism, casually spread hate (as in the recent New Zealand shootings) and allowed foreign countries to influence elections. And that’s just the beginning of the list.
On March 15, a gunman live-streamed for 17 minutes his attack on two mosques in Christchurch, New Zealand, which killed 50 people.
As a result, Google, Facebook, Amazon and Apple appear to be on the cusp of a wave of regulation that would have been unthinkable even two years ago.
On Thursday, Prime Minister Trudeau said he plans to introduce regulations for online platforms that will include financial penalties for the spread of misinformation, a day after signing a global pact to address violent speech on the internet.
Prime Minister Trudeau told a meeting of world leaders and technology companies in Paris on Thursday that the federal government will release a digital charter laying out how Canada plans to tackle issues such as hate speech, misinformation and election interference on the internet.
Innovation Minister Navdeep Bains is expected to provide more details at a summit on digital governance in Ottawa in late May.
As many countries signed on to the digital “Christchurch Call” on Wednesday, the U.S. refused to sign.
But other countries are acting.
Australia passed a stringent law, in the wake of the Christchurch massacre, that demands social media companies quickly remove violent content from their platforms. Under the new law, social media executives could face prison sentences, and their companies hefty fines, if such content is not taken down “expeditiously.”
In the U.K., a Parliamentary committee suggested that social media companies should be subjected to independent oversight and fined over “online harms.” The U.K. is looking to appoint an independent regulator to oversee social media companies’ protection of their users and dole out penalties such as fines and blocked website access where appropriate.
The much-anticipated white paper follows a U.K. Digital, Culture, Media and Sport select committee report that was widely seen as a condemnation of Facebook and called the company “digital gangsters.”
In the U.S., Elizabeth Warren, the Massachusetts senator who is a leading contender for the Democratic presidential nomination, released proposals this month that would force tech breakups and impose severe restrictions on what remained.
Ms. Warren’s plan creates two tiers of companies that would fall under the new regulations: those that have an annual global revenue of $25 billion or more, and those with annual revenue of $90 million to $25 billion. The upper tier would be required to “structurally separate” their products from their marketplace. Smaller companies would be subject to regulations but would not be forced to separate themselves from the online marketplace.
Essentially, Ms. Warren is rolling out the heavy artillery of regulation: antitrust law.
Is there a case for using antitrust law against the big internet platforms?
The technologist Anil Dash recently described some of the internet’s biggest platform businesses as “fake markets.” These are businesses that purport to be marketplaces, making money by connecting parties — people who want rides with drivers, advertisers with eyeballs — but are not actually markets in the strict sense of the word. They’re centrally and often aggressively managed and manipulated by the big technology companies.
Some hardly resemble markets at all.
Dash singled out Uber: In that “market,” drivers don’t set prices, consumers don’t actually have any choice over who drives them, and the whole system operates by trade-secret algorithms. Our ignorance of how such things work is easier to ignore when a platform is establishing itself and sharing the benefits of its growth through low prices. This dynamic only starts to bother us when a dominant platform emerges and there are fewer alternatives or none at all. In the absence of competition, prices go up and we begin to ask questions.
The truth of the matter is that the biggest internet platforms are businesses built on an imbalance of information and therefore power. In other words, they know far more about us than we know about them. We can guess, but can’t know, why we were shown a friend’s Facebook post about a divorce, instead of another’s about a child’s birth. We can theorize, but won’t be told, why YouTube thinks we want to see a right-wing polemic about Islam in Europe after watching a video about travel destinations in France. Everything that takes place within the platforms is enabled by systems we’re told must be kept private in order for the platform’s business model to work.
The imbalance of power between ordinary citizens and giant corporations is a refrain Ms. Warren has been hitting on for years, including in a 2016 speech titled “Reigniting Competition in the American Economy.” Last year, she introduced the Accountable Capitalism Act, which seeks to curb shareholder power by forcing corporations to increase worker representation on their governing boards, while also reducing incentives for big companies to pay out shareholders rather than reinvest in businesses.
Matt Stoller, a fellow at the Open Markets Institute in Washington and a former senior adviser to the Senate Budget Committee, said Ms. Warren’s plan was “practical” and “necessary.” He compared big tech companies to the tobacco monopolies of America’s past, which were eventually subjected to antitrust lawsuits.
But Ms. Warren’s tech proposal has its critics. Daniel Crane, an antitrust expert at the University of Michigan, noted that actual break-ups are unlikely. “A likelier consequence is that the next acquisition they want to make will be rejected,” he said. “Even just the rhetoric can complicate their lives and put pressure going forward.”
There are other criticisms of a “one-size-fits-all” model to tech regulatory reform. As the industry analyst Ben Thompson has written, the large tech companies have different business models that pose different anti-competitive risks. The stranglehold that Google and Facebook have on the digital advertising market is different from the way Amazon muscles out e-commerce brands, which is different from the way Apple uses its App Store to force burdensome terms on developers.
For observers such as Thompson, a set of effective tech regulations would treat each problem discretely, and address each with surgical precision.
However, regardless of the specific policy options chosen, it is clear regulation is coming to the giant platforms.
The regulation debate in Canada
In Canada, there is a growing consensus that, in the words of Privacy Commissioner Daniel Therrien, “the time of self-regulation is over.”
Here, the top priority needs to be a top-to-bottom overhaul of our privacy legislation, most importantly the federal Personal Information Protection and Electronic Documents Act (PIPEDA).
The federal Privacy Commissioner has repeatedly criticized the legislation as overly permissive, giving companies far too much latitude to use personal information for their own benefit.
Under PIPEDA, private organizations have a legal obligation to be accountable, but recent developments make it clear that Canadians cannot rely exclusively on companies such as Facebook and Google to manage their personal information responsibly.
What that means is that provisions protecting privacy have to be spelled out in much greater detail in the Act and effective enforcement of the Act’s provisions has to be considerably stronger than what currently exists.
To be clear, it is not enough to ask private companies to live up to their responsibilities. Canadians need stronger privacy laws that will protect them when organizations like Facebook and Google fail to do so.
And compliance with these laws must be enforced by a regulator, independent from industry and the government, with sufficient powers to ensure compliance.
In addition to the Privacy Commissioner, several parliamentary committees have supported the call for bold legislative reform to rein in the platforms.
Notably, in February 2018, a report by the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI) concurred with many of the Privacy Commissioner’s recommendations to amend PIPEDA, and even called for additional measures inspired by the European Union’s excellent General Data Protection Regulation (GDPR), which came into force last May.
In a later report in June, after hearing from witnesses on the Facebook/Cambridge Analytica scandal, ETHI came to the view that additional amendments (notably those conferring new enforcement powers to the Office of the Privacy Commissioner, including the power to inspect or audit companies such as Facebook and Google) were urgently required.
In a separate June 2018 report, ETHI also agreed that political parties need to be governed by privacy laws.
Unfortunately, the government has been silent on all three sets of recommendations. It introduced two bills that could have been used to implement many of the recommendations, but failed to do so.
Opportunities to implement some of the major recommendations included:
- Proposed changes to national security legislation contained in Bill C-59. During the hearings on the bill, the Privacy Commissioner reiterated a number of the recommendations he made on Privacy Act reform in 2016, among other things.
- Bill C-76, the Elections Modernization Act, which dealt with political party financing, and was an opportunity to address the lack of standards and oversight over the personal information handling practices of political parties. According to the Privacy Commissioner, however, C-76 added nothing of substance in terms of privacy protection. According to the Commissioner, rather than impose internationally recognized standards, the bill leaves it to political parties to define the rules they want to apply. It also fails to impose independent oversight over how Canada’s political parties use voter profiles often purchased from private data brokers.
Given the opaqueness of business models and the complexity of information flows in the age of data analytics, artificial intelligence (AI) and the Internet of Things, Canada needs strong privacy laws and legal requirements ensuring greater transparency in how the giant platforms operate. Canada also needs a strong regulator to enforce those laws.
The first priority is that meaningful consent must be the driving force behind the reform of Canadian privacy law.
Therefore, following the European GDPR rules, Canada must adopt an “opt-in consent by default” approach to providing personal information to platforms. More specifically, the Government must amend the Personal Information Protection and Electronic Documents Act to explicitly provide for opt-in consent as the default for any use by a platform of personal information for secondary purposes, with a long-term view to implementing a default opt-in system regardless of purpose.
Moreover, opt-in consent should be considered meaningful only when the platform has provided the user with sufficient information about exactly how it will use the personal information given to it.
Related to the above, there needs to be dramatic action to improve the platforms’ algorithmic transparency. Here again, the GDPR may provide a model for the kind of transparency that is required.
Thirdly, political parties must be far more transparent in how they use personal data and detailed rules as to what they can and cannot do with that personal data should be written into the Canada Elections Act.
Finally, the Government should consider including in the Personal Information Protection and Electronic Documents Act a framework for a right to erasure, based on the model developed by the European Union, that would, at a minimum, include a right for young people to have information posted online, either by themselves or through an organization, taken down.
Closely related to this, the Government should consider including in the Personal Information Protection and Electronic Documents Act a broader framework for a right to de-indexing (e.g., having a Google listing with personal or false information removed on request). This right should be expressly recognized in the case of personal information posted online by individuals when they were minors.
In late June 2018, the government responded to ETHI’s recommendations to amend PIPEDA.
The Minister of Innovation, Science and Economic Development agreed that changes are required to our privacy regime, but he argued that further study of the viability of all options, for instance on enforcement models, was required with a view to presenting Canadians with proposals.
The minister launched a national digital and data consultation, which could eventually result in amendments to the law in two or three years. Ontario has launched its own consultations on a data strategy.
Canadians cannot afford to wait several years until known deficiencies in privacy laws are fixed. Technology is evolving extremely rapidly and many new technologies disrupt not only business models but also social and legal norms. Legal protections must improve apace if consumer trust is to reach the level everyone desires.
As ETHI commented in its June 2018 report, “the urgency of the matter cannot be overstated.”