WELCOME to Connected Rights, your jiggle in the hips of digital rights news and analysis.

MEET THE NEW (ALMOST) BOSS: The EU is about to get a new head of digital policy, as the old one, Germany’s Günther Oettinger, has shuffled off to deal with the bloc’s budget. The new “digital economy and society” commissioner-designate, a notably younger Bulgarian MEP named Mariya Gabriel (38 to Oettinger’s 63), had her parliamentary confirmation hearing on Tuesday. Though probably a political success, it was somewhat infuriating for those of us who are interested in digital rights.

On encryption: Estonian MEP Kaja Kallas asked Gabriel to confirm her position on authorities’ access to encrypted communications, pointing out that strong encryption (used by services like WhatsApp) makes such access technically impossible. “Legal access can only take place within very strict conditions… and only where it concerns reasons of national security of the highest rank,” said Gabriel. “We need to give our own institutions the means to move forward [with strong security] but we also need to make sure those same instruments are not being used by others for purposes other than the purposes we had in mind.”

In other words, Gabriel appears to believe it’s possible to have magic encryption that protects the communications of the good guys but not those of the bad. This hardly makes her unique among today’s politicians, but it’s a bad sign nonetheless.
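To see why technologists keep objecting, here’s a toy sketch (a one-time-pad illustration using only Python’s standard library, not any real messaging protocol): in end-to-end encryption, only the two endpoints hold the key, so the relaying service sees ciphertext it cannot read — and any “lawful access” mechanism is just the key by another name, usable by whoever obtains it.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte of the message with the key."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only by the two endpoints

ciphertext = xor(message, key)   # all the relaying server ever sees
assert ciphertext != message     # server cannot read the plaintext

# Decryption requires the key itself. There is no mathematical way to
# grant access "only under strict conditions" - whoever holds the key
# (police, spies, criminals who steal it) can read everything.
assert xor(ciphertext, key) == message
```

The point of the sketch: there is no third input that unlocks the message for authorities but not for attackers. Weakening the scheme for one weakens it for all.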

On online surveillance: Article 13 of Oettinger’s big copyright proposals, which Gabriel will have to take over if confirmed, would force online platforms to proactively monitor the stuff their users upload, and automatically filter out whatever infringes copyright. This is in direct conflict with existing “e-commerce” legislation, which is supposed to protect online platforms from this sort of requirement.

What does Gabriel think? “Article 13 of that text is very important.” After German MEP Julia Reda asked for a bit more, the Bulgarian said she recognised the need for “dialogue and compromise” as Oettinger’s proposals make their way through the legislative process. She said the Commission and EU member states had decided that the protections in the e-commerce directive were “still relevant”. At the same time, she claimed “we are in the face of new realities”, adding: “I will react based on my mandate.”

Please let me know if you have any idea what her actual position is, because I don’t.

On online censorship: Gabriel was much clearer on the issue of hate speech, which is another area where companies like Facebook are trying to clean up their platforms. “We have no place for hate speech,” she said, seemingly defining the term as “anything that can offend an individual” – a worryingly broad definition if ever there was one. “Of course we need freedom of expression, but for hate speech this is a subject that requires all of us to defend our society model, our values and our principles.”

On algorithmic transparency: Andreas Schwab, the German MEP who once proposed breaking up Google, asked Gabriel if she thinks the authorities should be able to inspect or control tech firms’ algorithms – a touchy subject, as firms like Google insist their algorithms are trade secrets, but an important one if we’re going to continue to uphold people’s rights in the algorithmic age.

“They can’t be something invisible that we don’t know anything about, especially because that can have negative repercussions,” she replied. “We need a proper understanding of algorithms that will aid transparency.”

It’s important to note that (if confirmed) Gabriel may only take up the post for a couple of years, as the Juncker Commission ends in October 2019. She also needs to shepherd the proposals that were already made by the technologically-less-than-literate Oettinger, so no-one could expect her to take over with a completely different perspective. But from a digital rights point of view, her stated positions veer between the worryingly vague, the just plain worrying, and the potentially promising. Watch this space.

SPEAKING OF FREE EXPRESSION, both Facebook and Google have been making loud noises about combatting pro-terrorist content on their platforms. They need to do this if they want to avoid legislation by countries such as France and the UK, which have been threatening a crackdown. But their mechanisms need to avoid dampening legitimate free expression.

Facebook wants to boost the use of “artificial intelligence” to spot and zap terrorism-glorifying accounts on its social network. According to a Wall Street Journal article, this will sometimes see Facebook’s algorithms turning to humans for advice if they’re not quite sure whether an account should be deleted, but not always – sometimes they will delete accounts “without review by a human”: http://on.wsj.com/2swS0gN. “Our AI can know when it can make a definitive choice, and when it can’t make a definitive choice,” said Facebook anti-terrorism chief Brian Fishman. We can only hope he’s right, otherwise this could all go very wrong.
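The triage Fishman describes — act automatically only on “definitive” calls, defer the rest to people — boils down to confidence thresholds. A minimal sketch (the thresholds and function are hypothetical; Facebook’s actual system is not public):

```python
def triage(terror_score: float,
           auto_delete_threshold: float = 0.98,
           review_threshold: float = 0.6) -> str:
    """Route an account based on a classifier's confidence score (0-1)."""
    if terror_score >= auto_delete_threshold:
        return "delete"        # "definitive choice": no human review
    if terror_score >= review_threshold:
        return "human_review"  # algorithm turns to humans for advice
    return "keep"
```

Everything hinges on where those thresholds sit and on how well calibrated the model’s scores are — a miscalibrated score just above the cutoff deletes legitimate speech with no human in the loop, which is exactly where this could all go very wrong.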

Google, meanwhile, is also stepping up its use of automation to nix terrorist videos on YouTube. It’s also roping in more “expert NGOs” to help make “nuanced decisions about the line between violent propaganda and religious or newsworthy speech”. But here’s what’s really noteworthy: Google will start putting a warning in front of videos that don’t actually break its terms, but that still “contain inflammatory religious or supremacist content”. These videos also won’t be able to make money off advertising, and people won’t be able to comment on them: http://bit.ly/2sMJjyw

It will be fascinating to see which inflammatory pundits fall foul of this scheme.

AND SPEAKING OF ENCRYPTION, the European Parliament’s justice committee has recommended that strong, end-to-end encryption should be guaranteed in electronic communications, and that EU countries should be banned from weakening this security. This is just a legislative amendment from one committee, not the full-blown “EU proposal” that some reported, but it’s extremely heartening to see it brought up nonetheless: http://tcrn.ch/2svCvTg

A DRAMATIC PRIVACY FAILURE has taken place in the US, courtesy of a company working for the Republican National Committee (RNC). The firm, Deep Root Analytics, reportedly put the personal details of almost all the country’s 200 million registered voters on a cloud server with absolutely no security. Yes, that means anyone could access all this information: http://bit.ly/2sOMQfW

This is not only a mind-blowing betrayal of US voters – it also highlights the ways in which modern political parties use people’s information to profile them, in order to target them with tailored promises.

AND WHILE WE’RE ON THE SUBJECT of profiling and tailored pitches, Gizmodo has a great story on people in the US getting letters in the mail about medical conditions they may or may not have: http://bit.ly/2sUPUXJ. The company sending the letters, Acurian, appears to be getting at least part of its targeting information from a company that spies on people – without consent – when they look up online information about medical conditions.

WALLS WILL PROTECT YOU FROM CAMERAS, RIGHT? US researchers claim to have developed technology that lets drones build “high-resolution 3D images” of what’s going on inside buildings, by making complex calculations of the Wi-Fi signals at different points in those buildings. So, no. http://bit.ly/2rz39t8
