
Tuesday, June 5, 2018

Richard Epstein: A Frontal Assault on Social Media


To all appearances, the folks in charge of privacy regulation within the European Union are unfamiliar with that old cliché, “If it ain’t broke, don’t fix it.” 

Last week, the European Union’s long-anticipated and much-dreaded privacy law, the General Data Protection Regulation (GDPR), took effect. It is a lengthy and convoluted document, replete with vague substantive commands backed by hefty penalties for violation.

The implicit assumption behind the regulation is that all individuals are entitled to control data about themselves. Firms that acquire this information must not only hold it secure against outsiders, but are also limited in how they may use it, and the regulation grants individual users extensive rights to access, control, and remove their personal data.

The GDPR regime is not content to let these important issues be resolved by private contract. But the new regulation fails a simple test: it does not identify any breakdown in current institutional arrangements that would justify its massive oversight of the way individual data is managed by all sorts of organizations and firms.

No fair-minded person thinks it is appropriate to allow strangers to hack into databases, public or private, or to deliver hacked data to others who can then use it to defraud or defame innocent people. Right now, a robust, multi-layered regime of legal, political, economic, and social enforcement within the EU targets firms that are perceived to violate these norms. Yet there is scant justification for piling an additional massive regulatory scheme on top of the current mix of public and private remedies. Consider the fate of Cambridge Analytica, a firm that misused for political purposes data it had acquired under false pretenses from Facebook during the 2016 presidential campaign. Cambridge Analytica recently shut down, undone by a “siege of media coverage.” Facebook’s Mark Zuckerberg, meanwhile, has been hauled over the coals repeatedly in both the United States and Europe because the systems Facebook had in place were insufficient to protect against misuse. Zuckerberg responded with more robust protections to satisfy Facebook’s huge user base, lest the company lose its dominant market position and the billions in revenue its users generate.

It is a mistake to underestimate the deterrent effect of such strong market responses to demonstrable forms of data misuse, and it is unwise to put into place, as the GDPR has done, a vast and untested apparatus to regulate the collection, storage, distribution, and processing of information when, in all but rare instances, the system functions as designed.

There is a deeper irony here. In Article 3 of the GDPR, we learn of its extensive extraterritorial scope: “This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.” That broad command captures all transactions related to “the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; . . .”

Hence the law plainly covers firms outside the EU, and in so doing it saddles them with regulations that, ironically, make it easier for cybercriminals to target them. The reason is that the GDPR crimps the efforts of public and private bodies to thwart criminal behavior. In the misguided effort to protect the privacy of job applicants, Article 10 blocks employers and public officials from asking applicants whether they have been convicted of a crime or are under investigation for criminal activities, thereby increasing the odds of a serious data breach orchestrated from within the organization. Unfortunately, the EU fails to understand that privacy protection has always been a double-edged sword: it allows honest people to go about their lives without undue interference from others while also making it easier for knaves and thieves to conceal their identities in order to practice fraud against innocent persons.

Any sensible regulatory official ought to be sensitive to this trade-off. But the need for humility in the face of complexity has not reached Andrea Jelinek, the Austrian regulator heading the new European Data Protection Board, which has, under the EU’s federalist system, general oversight in enforcing the GDPR. Regrettably, Jelinek harbors no doubts about her own competence to wield a big stick: “If we have reasons to fine, we are going to fine.” For better or worse, direct enforcement does not lie with Jelinek but with member states, meaning that Ireland, the European headquarters for Facebook and Google, will have a disproportionate role to play. Already, there are rumblings of disagreement over enforcement priorities between Jelinek and Helen Dixon, the head of Ireland’s data protection commission. These little spats complicate matters for firms trying to remain in compliance with a barrage of inconsistent commands.

Matters are not made easier by the onerous and ill-defined standards under Chapter III of the GDPR, which entitle users to correct data entries and to receive prompt notice in the event of a data breach, often before the firms themselves understand the cause or scope of the breach. Yet the penalty for error is severe: the GDPR calls for fines that can reach €20 million or a whopping 4 percent of worldwide annual sales, whichever is greater, even for conduct that is perfectly legal under local law outside the EU.

The general rule of proportionality requires that the punishment fit the crime. Overdeterrence, like underdeterrence, leads to serious economic distortions, for it induces excessive precautions against trivial risks. In this instance, moreover, it can contribute to the premature decision of non-EU firms to stop doing business inside the EU, which in turn gives a competitive edge to large firms like Google that have the wherewithal to respond to threats more quickly than their smaller and less prepared rivals, and more quickly than Facebook, which must manage the transmission of data across hundreds of millions of covered individual accounts.

As drafted, Article 1 of the GDPR protects only the “processing of personal data,” that is, data about “natural persons.” One of the great challenges for the GDPR concerns the rapid, repeated, and routine movement of data for commercial, medical, security, and business reasons, which makes it costly to obtain consent from each individual for each use of particular data. The sensible response is for companies to request, and individuals to give, some blanket consent, subject to the understanding that serious sanctions will be imposed after the fact in the event of identified misconduct. But it is far from clear how this works under the GDPR.

The Charter of Fundamental Rights of the European Union, which enshrines the EU’s commitment to “the indivisible, universal values of human dignity, freedom, equality and solidarity,” specializes in grandiose generalizations that often fail to yield clear and sensible directives. Often, its values are in painful tension with one another. The Charter’s Article 7 upholds “Respect for private and family life: Everyone has the right to respect for his or her private and family life, home and communications.” Article 8 is more specific, providing: “Everyone has the right to the protection of personal data concerning him or her.” The remainder of Article 8 points to individual consent as the basis for processing personal data, or alternatively to “some other legitimate basis laid down by law,” full stop. But what is needed is a detailed account of which ends count as legitimate, and which means can be used to implement them.

Nonetheless, the GDPR purports to make good on the Charter’s guarantees. Ideally, the Charter’s respect for individual autonomy and consent would seem to allow for normal contractual waivers of privacy protections. But consent under the GDPR is sharply limited. It refers only to “any freely given, specific, informed and unambiguous indication of the data subject’s wishes,” which casts doubt on such blanket waivers. The consent requirement is even more difficult to satisfy because under the GDPR the company bears the burden of showing that an individual’s consent includes permission to process the data. Worse, because the GDPR allows any individual to withdraw consent for any reason at any time, long-term planning for the company becomes difficult. In addition, the GDPR imposes onerous requirements to ensure that consent to obtain information is “clearly distinguishable from other matters.” The regulation further cuts back on contractual freedom by requiring that a determination of whether consent is “freely given” take the “utmost account” of whether contractual performance is “conditional on consent to the processing of personal data that is not necessary for the performance of that contract,” which I take as undermining the ability of any firm to get (and any individual to give) general consent.

The situation is now even more uncertain because Maximillian Schrems, a well-known Austrian privacy activist, has already filed his first round of suits, seeking billions of euros against Google, Instagram, WhatsApp, and Facebook as leader of a new organization called “noyb” (short for “none of your business”). The purpose of these complaints is to attack the soft underbelly of consent under the GDPR on the ground that it is not “freely given” if obtained by a threat to cut off the underlying service. Schrems complains that if users only have “the choice to delete the account or hit the ‘agree’ button—that’s not a free choice, it more reminds of a North Korean election process.” Forget the hyperbole, and remember that his frontal assault under the GDPR against all take-it-or-leave-it terms poses a mortal threat to the business models of social media companies, which depend upon using collected data in legitimate ways to generate revenues from advertisers and other parties. At the moment, it is unclear how regulated companies will pass their new costs on to their customers. No customer wants to pay fees for the services they now receive for free. Just think of the pricing nightmare that developing any global fee structure would create.

If Schrems gets his way, the GDPR amounts to a frontal assault on the standard social media business model, which could disrupt the worldwide operation of social media. The EU and its member states ought stoutly to resist any interpretive move under the GDPR that transforms its intended data protection regime into one that ensures, de facto, the destruction of the social media ecosystem as it stands today.

Professor Richard A. Epstein, the Peter and Kirsten Bedford Senior Fellow at the Hoover Institution, is the Laurence A. Tisch Professor of Law at New York University Law School and a senior lecturer at the University of Chicago. This article was first published by the Hoover Institution’s Defining Ideas.
