WELCOME to Connected Rights, your nip in the air of digital rights news and analysis.

Enjoy this newsletter? Forward it to a friend or get them to sign up. I’m David Meyer, aka @superglaze on Twitter and @davidmeyerwrites on Facebook. Don’t forget to check out the Connected Rights website and download a copy of my book, Control Shift: How Technology Affects You and Your Rights. T’aves baxtalo!

CIVIL SOCIETY ORGANIZATIONS ARE PLEADING WITH FACEBOOK to provide a mechanism through which anyone can appeal content restrictions on its platform, with appealed decisions reviewed by a human rather than an algorithm.

“Facebook remains far behind its competitors when it comes to affording its users due process. We know from years of research and documentation that human content moderators, as well as machine learning algorithms, are prone to error, and that even low error rates can result in millions of silenced users when operating at massive scale,” reads an open letter from the groups, which include the likes of the ACLU, the Foundation for Press Freedom, and Human Rights Watch.

“Yet Facebook users are only able to appeal content decisions in a limited set of circumstances, and it is impossible for users to know how pervasive erroneous content takedowns are without increased transparency on Facebook’s part… Civil society groups around the globe have criticised the way that Facebook’s Community Standards exhibit bias and are unevenly applied across different languages and cultural contexts. Offering a remedy mechanism, as well as more transparency, will go a long way toward supporting user expression.”

Over to you, Zuck.

BIG TECH’S NEWFOUND KEENNESS FOR REGULATION is gathering pace, with Intel literally drafting a U.S. privacy bill for which it is now seeking sponsors.

The bill describes a self-certification system through which companies could attest to their strong data protection measures – companies that lie could face fines of up to $1 billion (which, for the biggest tech firms, is lower than the GDPR’s maximum fine of 4% of global annual turnover) and executives could even face jail. Meanwhile, Apple’s Tim Cook has called for a U.S. GDPR equivalent, and Google and Facebook have also called for a comprehensive federal privacy law.

Politico’s Mark Scott hit the nail on the head when he wrote that the public and politicians need to be wary. Scott: “The 180-degree turn to embrace regulation — on topics as varied as privacy, competition and tax — smacks of a last-minute attempt to win public relations points and get ahead of these new rules before they eat them alive.”

Of course tech companies should have a voice when rules that apply to them are being formulated. But the same applies to all stakeholders, the public included. For that reason, it’s essential that U.S. lawmakers give as much time to civil society and consumer groups as they do to Big Tech.

To support my work, why not become a patron of Connected Rights? If you would prefer to make a one-off donation, I also have a PayPal.me page.

CAST YOUR MIND BACK TO 2016, when Google’s DeepMind AI subsidiary partnered with London’s Royal Free hospital to create an app called Streams. The app used patients’ personal data, without their consent, to better spot acute kidney injuries. The deal turned out to be illegal under the Data Protection Act.

DeepMind said more than once that the hospital’s data would “never be linked or associated with Google accounts, products or services”. Fast forward to yesterday, and Google announced that Streams would be joining the mothership, where it will be rolled out as “an AI-powered assistant for nurses and doctors everywhere”. The company promised that “patient data remains under our partners’ strict control, and all decisions about its use will continue to lie with them” – but privacy advocates are not convinced.

“This is TOTALLY unacceptable,” tweeted ace privacy academic Julia Powles. “DeepMind repeatedly, unconditionally promised to *never* connect people’s intimate, identifiable health data to Google. Now it’s announced…exactly that. This isn’t transparency, it’s trust demolition.”

Also note this, from the FT: “As the news was announced, it emerged that an independent review panel, set up and funded by DeepMind to scrutinise its sensitive relationship with the NHS, would be wound up, said one person close to the situation.”

SUNDAR PICHAI’S LATEST DISMAL JUSTIFICATION for Google’s possible re-entry into the Chinese search market is that it already censors content in other countries. “One of the things that’s not well understood, I think, is that we operate in many countries where there is censorship. When we follow ‘right to be forgotten’ laws, we are censoring search results because we’re complying with the law,” he told the New York Times.

Two things. Firstly, the “right to be forgotten” (RTBF, and anyway it should be “delisted”, goddammit) is based on the fundamental rights to privacy and data protection. Yes, it may involve overriding the right to free expression, but not always – there are safeguards to make sure it doesn’t involve covering up things that the public ought to know about. It is also a tool for the average person. The Chinese censorship that Google would have to accept involves overriding free expression just because the government wants to suppress stuff. Not all censorship is equal.

Secondly, Google has been censoring content on copyright-related grounds since way before the RTBF was a twinkle in Señor Costeja González’s eye. Google fought the RTBF tooth and nail, rather than simply going “oh well, the DMCA makes us take down links to pirated stuff, so whatever”. Yet somehow the analogous argument is now OK?

HOW EXPENSIVE IS CONTENT ID, YouTube’s toolkit for scanning all uploads in order to spot matches with copyrighted content? According to a blog post last week on Google’s copyright-enforcement efforts, Content ID has so far cost more than $100 million to operate.

That figure includes not only the cost of developing the system, but also staffing and computing resources. When Google talked about the subject a couple of years back, the figure was $60 million, so let’s say it costs around $20 million a year to run Content ID.
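(For the curious, here’s that back-of-envelope sum as a rough sketch. The two-year gap between Google’s two disclosed figures is my own assumption, based on “a couple of years back”.)

```python
# Rough, back-of-envelope estimate of Content ID's annual running cost,
# based on the two lifetime-cost figures Google has cited publicly.
earlier_total = 60_000_000    # lifetime cost cited a couple of years ago (USD)
latest_total = 100_000_000    # lifetime cost cited in last week's blog post (USD)
years_between = 2             # assumed gap between the two disclosures

annual_run_rate = (latest_total - earlier_total) / years_between
print(f"Implied annual cost: ~${annual_run_rate:,.0f}")  # ~$20,000,000
```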

So what does this mean for startups that handle large quantities of user-generated content and might therefore be effectively forced to install a similar system under the not-yet-finalised EU Copyright Directive revision? Well, bear in mind that YouTube takes way more uploads than any startup would, so that explains the high resource requirements. But nonetheless, it’s a big scary figure that might act as a brake on investment into newer platforms.

If you’d like me to speak about digital rights at your event or provide advice for your business, drop me an email at david@dmeyer.eu.

FINALLY, IT’S TIME FOR THE DATA BROKERS. Privacy International last week lobbed a series of GDPR complaints at Oracle and Acxiom, as well as ad-tech firms Criteo, Quantcast and Tapad, and credit reference agencies Equifax and Experian.

PI lawyer Ailidh Callander: “The data broker and ad-tech industries are premised on exploiting people’s data. Most people have likely never heard of these companies, and yet they are amassing as much data about us as they can and building intricate profiles about our lives. GDPR sets clear limits on the abuse of personal data. PI’s complaints set out why we consider these companies’ practices are failing to meet the standard — yet we’ve only been able to scratch the surface with regard to their data exploitation practices.”

Specifically, PI’s complaints concern the lack of consent or legitimate interest for data processing – given that there’s room for interpretation in the law and guidance around the “legitimate interests” legal basis, this should prove to be an interesting test case. PI is also het up about the lack of transparency in the industry, and the failure to adhere to data minimisation principles.

When I approached the companies for their responses, Experian and Criteo straightforwardly said their privacy practices aligned with the GDPR, while the others either refused to talk or didn’t respond at all. Adpocalypse, but when?