WELCOME to Connected Rights, your hip hip in the hooray of digital rights news and analysis.
Enjoy this newsletter? Forward it to a friend or get them to sign up. I’m David Meyer, aka @superglaze on Twitter and @davidmeyerwrites on Facebook. Don’t forget to check out the Connected Rights website and download a copy of my book, Control Shift: How Technology Affects You and Your Rights. Qaimarutin!
GOOGLE AND THE ONLINE AD INDUSTRY have been using some pretty grody profiling protocols to target people with personalised ads – as in, specifically targeting ads at incest survivors and people with eating disorders.
This has come out as part of new evidence submitted in an ongoing set of GDPR complaints against Google and the Interactive Advertising Bureau (IAB), which sets the rules for the “real-time bidding” auctions in which advertisers get to populate the empty ad frames on web pages. The complaints – from Jim Killock of the UK’s Open Rights Group, tech policy researcher Michael Veale of University College London, Johnny Ryan of the pro-privacy browser firm Brave, and Poland’s Panoptykon Foundation – say the RTB system is insecure, broadcasting incredibly sensitive personal data (drawn from corporate surveillance) to advertisers without meaningful controls.
It will be interesting to see how the regulators view the arguments of Google and IAB. Google says it has “strict policies that prohibit advertisers on our platforms from targeting individuals on the basis of sensitive categories such as race, sexual orientation, health conditions, pregnancy status, etc.” And the IAB says that its protocols are “not in themselves subject to GDPR”, as it’s up to companies to decide how to use them in accordance with local laws.
Legally speaking, the IAB’s argument sounds pretty plausible, to be honest. Though the fact that it offers protocols to enable the targeting of abused people is still thoroughly reprehensible.
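For the unfamiliar, the mechanics at issue look roughly like this. A minimal Python sketch: the field names loosely echo the IAB’s OpenRTB conventions (a bid request carrying a “user” object with data segments), but the structure, names and values here are illustrative, not the real schema.

```python
import json

def build_bid_request(page_url, user_segments):
    """Assemble a simplified bid request for one empty ad frame.

    The "segment" entries represent interest categories inferred from
    the user's browsing history -- in the complainants' telling, this
    is where highly sensitive inferred categories can end up.
    """
    return {
        "id": "auction-123",
        "site": {"page": page_url},
        "user": {
            "data": [{"segment": [{"name": s} for s in user_segments]}],
        },
    }

def broadcast(bid_request, bidders):
    """Send the same request to every participating bidder.

    Every recipient sees the full profile, whether or not it wins the
    auction -- the core of the "insecure broadcast" argument.
    """
    return {bidder: json.dumps(bid_request) for bidder in bidders}

# Hypothetical bidder endpoints, for illustration only.
responses = broadcast(
    build_bid_request("https://example.com/article", ["eating-disorder-support"]),
    ["dsp-a.example", "dsp-b.example", "dsp-c.example"],
)
```

The point the complaints hinge on is visible in `broadcast`: the sensitive segment goes out to every bidder in the auction, with no technical control over what losing bidders do with it afterwards.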
GOOGLE IS APPEALING ITS €50 MILLION GDPR FINE on the grounds that it tried really hard to comply with the GDPR and is also – wait for it – “concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond”. Google’s altruism truly knows no bounds.
ARE PRIVACY SCANDALS NEGATIVELY AFFECTING Google and Facebook’s ad businesses? Heck no.
ORGANISATIONS DEVELOPED TOOLS TO SHOW PEOPLE how Facebook ads, specifically political ads, target users. ProPublica (US) and WhoTargetsMe (UK) made browser extensions that people could install in order to populate databases of political ads on the social network, so as to figure out who advertisers were trying to sway. But now Facebook has blocked such tools, in order to “keep people’s information safe”.
Facebook has recently released its own archive of political ads, but the affected organisations complain that it doesn’t show why individuals are being targeted, has a limited scope – it’s only available in the US, UK and Brazil – and isn’t sufficiently accessible to journalists and researchers.
FACEBOOK’S LOBBYING EFFORTS IN EUROPE have been hilariously incompetent, judging by the tidbits in this Politico piece.
The company appears to have grossly misjudged the mood of EU lawmakers. In the words of Marietje Schaake MEP: “Facebook has consistently been tone-deaf about major concerns brought to their attention… From their impact on election outcomes, to spreading of conspiracies and hate speech, the consistent message has been that regulation would stifle innovation. This is a losing strategy in Brussels.”
By far my favourite detail concerns Facebook’s efforts to ward off regulation of its messaging services under the new ePrivacy Regulation (which isn’t progressing so well, incidentally).
FACEBOOK STAFFERS REFERRED TO KIDS AS “WHALES” when the firm perpetrated – what japes! – “friendly fraud” on their families. The whales in question were children who spent lots of money on Facebook games, using their parents’ credit cards. Young kids didn’t understand they were spending money, while older kids may not have understood that their parents’ cards were getting billed thousands of dollars.
From the Guardian‘s report on the internal documents, which were sealed as part of a now-settled class-action lawsuit: “In an effort to tackle the problem, a team of Facebook staff put together a policy that would require children to re-enter some card details before they could buy the in-game items, to prove they had their parents’ permission. But the feature was never implemented.
“In fact, Facebook’s explicit policy, as communicated to developers in an internal memo, was to tackle such complaints by handing out free virtual items, not by refunding the charges – because ‘virtual goods bear no cost’.”
FACEBOOK WILL REMOVE AN APP FROM IOS THAT MONITORS almost everything the user does on their phone. The surveillance is not secret – the app is called Facebook Research and it offers people between the ages of 13 and 35 monthly $20 gift cards in exchange for that phone- and web-monitoring access. The app violates Apple’s guidelines because of the level of access it provides Facebook – developers aren’t supposed to install custom root certificates on iDevices. It’s still available on Android though, as is Facebook’s privacy-massacring Onavo VPN app, which Apple booted from iOS last August.
YOUTUBE’S RECOMMENDATION ALGORITHM SENDS PEOPLE down some scary rabbit-holes, as demonstrated in a great Buzzfeed exposé. Search for “US House of Representatives”, for example, and get a video from an anti-immigrant hate group. As for the “Up Next” feature, sending people to hyperpartisan videos is quite a theme, with not many jumps between a search for “Nancy Pelosi speech” and QAnon lunacy.
EUROPE NEEDS TO TACKLE ALGORITHMS with better coordination, according to a new report from AlgorithmWatch and Bertelsmann Stiftung. They warn that, while it’s good for policymakers to look at things like ethical guidelines, they should also be examining how algorithms are already affecting people’s lives in practice.
AlgorithmWatch said: “Profiling job applicants based on the contents of their personal email inboxes in Finland, deciding which patients get treatment in the public health system in Italy, sorting the unemployed in Poland, automatically identifying children vulnerable to neglect in Denmark, detecting welfare fraud in the Netherlands, credit scoring systems in many EU countries – the range of applications of automated decision-making (ADM) has broadened to almost all aspects of daily life.”
According to the report’s editor, Matthias Spielkamp, people should stop worrying about “AI super-intelligence” for now and start understanding “the current challenges to our societies, like ‘predictive analytics’ used for forecasting human behaviour, be it in elections, criminal activity, or of minors. We urgently need to ensure that our institutions, regulation and oversight procedures are up to these challenges.”
4.5 MILLION EUROPEANS HAVE SIGNED a petition against Article 13 of the Copyright Directive, and it’s heading in the direction of German justice minister Katarina Barley, who has promised to receive it. It’s apparently the second-largest online petition of all time.