WELCOME to Connected Rights, your twinkle in the eye of digital rights news and analysis.

THE ISSUE OF VIDEO-CAMERA ETHICS has been on my mind over the last couple days, thanks to two separate stories.

The first relates to the German hypermarket chain Real getting the go-ahead from the Bavarian privacy regulator for its use of facial recognition technology. Real wants to use cameras in the checkout queue to divine the gender and age of the people standing there, so it can show roughly targeted ads to them on video screens. A sign would clearly tell people about the recording, and the images would only be held in memory long enough to get the all-important demographic information. Here's Handelsblatt's article, in German: http://bit.ly/2srL842

Then we have this Motherboard profile of Canadian “eyeborg” Rob Spence, who has installed a video camera in one of his eye sockets (the original orb was irretrievably damaged in a shooting accident when he was a boy): http://bit.ly/2rufFtg. Spence’s bionic eye produces an analogue signal that can be rebroadcast on a nearby screen, and when it’s recording it shines a Terminator-esque red light to keep people aware of the activity. “There is a competing tension between my right to replace my eye that I lost versus others’ rights to privacy,” he said. “Am I not allowed to put an eye camera in my own body?”

In both these situations, I don’t think anything untoward is going on. If a store’s ad-targeting facial recognition system doesn’t permanently store people’s visages or link them to remote profiles, if it doesn’t illegally discriminate on a basis like race, and if it genuinely anonymises the information it takes in, then there’s no immediate privacy problem. Spence’s eye is a bit more complicated – from what I’ve read about it, it doesn’t sound like a genuine visual aid (it’s not connected to his brain, and he has another good eye) so much as an aide-mémoire. But it’s not stealthy, and really not so different from someone walking around holding a video camera.
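
Just to make that distinction concrete, here is a minimal Python sketch of what ephemeral, anonymised processing of this kind might look like. The estimate_age_gender stub is a placeholder of my own invention, not Real's actual system; the point is simply that the raw frame never gets stored or tied to a profile, and only a coarse demographic bucket survives long enough to pick an ad.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AdSegment:
    age_band: str   # coarse bucket only, e.g. "18-34"
    gender: str     # "female" / "male" / "unknown"

def estimate_age_gender(frame: bytes) -> tuple[int, str]:
    """Stand-in for whatever vision model the vendor uses (hypothetical)."""
    return 42, "unknown"

def segment_from_frame(frame: bytes) -> AdSegment:
    age, gender = estimate_age_gender(frame)
    band = "18-34" if age < 35 else "35-54" if age < 55 else "55+"
    # The raw frame never leaves this function and is never written to disk;
    # once it returns, only the coarse demographic bucket remains.
    return AdSegment(age_band=band, gender=gender)

print(segment_from_frame(b"\x00" * 1024))  # dummy frame for illustration
```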

There are implications for the future, of course. As the Bavarian authorities noted, they could only assess Real’s current system, not what’s “theoretically possible with the technique”. It would be far more concerning if, for example, the system recognised individuals in the checkout queue and connected to their online marketing profiles to show them things they’d looked up online in the secrecy of their homes. Nobody wants to see that stuff show up on a screen in front of other people.

In the eyeborg case, we will very likely get to a stage where people with no good eyes can get bionic replacements that feed images into their brain and probably to the cloud as well – remote image recognition systems could help them identify objects, road signs and other people. This would unquestionably be a good thing for them, but what about the security of the information being transmitted? Could people be unwittingly co-opted as roving security cameras? Should eyeborgs’ eyes all glow red, to notify others that they’re being recorded and analysed?

When we look at systems like these, it’s tempting to talk of slippery slopes, and of people becoming desensitised to monitoring. However, these fears don’t justify stopping such technologies in their tracks, particularly where they could make people’s lives demonstrably better. Ultimately, we are going to need to continually reassess the implications of these techniques at every stage of their evolution. There is such a thing as going too far, but the limits of acceptability must be reasonably and appropriately set.

WANT A COMPREHENSIVE CORPORATE SURVEILLANCE BREAKDOWN? The Austrian researcher Wolfie Christl wrote a big report on the subject a few years ago, and now he’s back with another. If you want to be schooled on all the ways we’re followed around, profiled and exploited by the “personal data industry”, dig in here: http://bit.ly/2rjEzeR

PRIVACY OR PIZZA? Sure, you may say you favour online privacy, but how would that preference hold up in the face of tasty, tasty pizza? Badly where students are concerned, according to these US researchers: http://bit.ly/2sl2Cyu

THERE’S A KEY ONLINE PRIVACY PRINCIPLE CALLED DATA MINIMISATION, which basically means collecting only the data you really need to, rather than scooping up everything you can in the hope of finding something to do with it later. Angela Merkel’s CDU party wants to weaken the principle in German law, in order to help German businesses make more money off people’s data. Can you guess how privacy advocates are responding? http://zd.net/2ruF99W
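
To put the principle in code rather than words, here's a toy example of my own (nothing to do with the CDU proposal or any real system): a signup handler that keeps only the two fields it genuinely needs and deliberately drops everything else the form might have collected.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Subscriber:
    email: str              # needed to deliver the newsletter
    consented_at: datetime  # needed as proof of consent

def minimise(raw_form: dict) -> Subscriber:
    # Name, age, city, browser fingerprint and whatever else the form gathered
    # are dropped here rather than stored "in case they're useful later".
    return Subscriber(email=raw_form["email"],
                      consented_at=datetime.now(timezone.utc))

print(minimise({"email": "reader@example.com", "age": 34, "city": "Berlin"}))
```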

ENCRYPTION IS ON THE AGENDA again (and again). With the UK moving towards forcing online communications providers to decrypt people’s messages (which is impossible with genuinely secure encryption), the Australian government is keen to follow suit. The “Five Eyes” countries (an intelligence-sharing club comprising the UK, US, Australia, Canada and New Zealand) will discuss the issue at a summit next month: http://bit.ly/2tl57Om
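
For anyone wondering why "just decrypt it" is easier said than done: with end-to-end encryption, only the two endpoints hold the keys, so the provider relaying the message has nothing it could hand over. Here's an illustrative sketch using the PyNaCl library (my example, not any provider's actual code):

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

alice_key, bob_key = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key; the shared secret is derived on her device.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The provider only ever relays `ciphertext`. Without Alice's or Bob's private
# key, which never leave their devices, there is no shortcut to the plaintext.
print(Box(bob_key, alice_key.public_key).decrypt(ciphertext))  # b'meet at noon'
```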

Meanwhile, the UK and France want to work together on ensuring “that the internet cannot be used as a safe space for terrorists and criminals”. Much as the German government is trying to do, the idea here is to fine Facebook et al if they don’t take down “extremist material and poisonous propaganda”. But this being the UK, there’s also a worryingly vague reference to censoring “harmful material”: http://bit.ly/2rd2OfL

HOW BIASED ARE ALGORITHMS? Depends on who programmed them and what they’re trying to achieve, but they can be problematic. Automated decision-making systems used by US courts have, for example, unfairly and incorrectly predicted that black criminals were much more likely to reoffend. Such technology can sometimes be fairer than humans, though. Here’s a highly readable and thought-provoking article on the subject by Matthias Spielkamp, executive director of Berlin-based AlgorithmWatch: http://bit.ly/2sd81aI
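
If you want a feel for how that kind of unfairness gets measured, one common check is to compare false positive rates across groups: how often people who did not reoffend were nonetheless labelled high risk. Here's a toy sketch with invented numbers (not real data from any court system):

```python
def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    did_not_reoffend = [r for r in records if not r[1]]
    wrongly_flagged = [r for r in did_not_reoffend if r[0]]
    return len(wrongly_flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

# Made-up figures purely to illustrate the disparity check.
by_group = {
    "group_a": [(True, False)] * 45 + [(False, False)] * 55,
    "group_b": [(True, False)] * 22 + [(False, False)] * 78,
}

for group, records in by_group.items():
    print(group, round(false_positive_rate(records), 2))
```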

THE SAFETY OF OUR CRITICAL INFRASTRUCTURE is about as immediate a concern as they come. And we should be very concerned indeed. Security researchers have analysed the malware that took out parts of Ukraine’s power grid last December, and it appears to have been very skilfully written. What’s worse, this “Crash Override” malware seems to have been merely flexing its muscles. As Wired explained, “it has no espionage purpose and is designed not to delete, but to destroy”: http://bit.ly/2snz8Qc. This is only the second malware of this kind to be identified. The first, a US-Israeli weapon called Stuxnet, destroyed centrifuges at an Iranian nuclear facility in 2009.
