
SurvAIllance


You may not be aware that the rough consensus in the tech community is that Cambridge Analytica were almost certainly bullshit artists. Oh, don’t get me wrong: what they tried to do, and/or claimed to do, was super shady and amoral, and would have been ruinous to reasonably informed democracy if successful. But I have yet to find anyone credible who thinks that their “psychographics” approach actually works.

That is: it doesn’t work yet.

We may yet thank Cambridge Analytica for inadvertently raising the red flag before the real invaders are at the gate. Because in this era of ongoing exponential growth in data collection, which is also an era of revolutionary, transformative advances in AI, it’s probably only a matter of time before electorates can be subtly manipulated wholesale.

Don’t take my word for it. Take that of Google senior AI / deep-learning researcher François Chollet, in a truly remarkable Twitter thread.

To quote the crux of what he’s saying:

In short, Facebook can simultaneously measure everything about us, and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior … A loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see … A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible
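To make the shape of that loop concrete, here is a minimal, deliberately toy sketch in Python of the kind of perception-and-action optimization loop Chollet describes: an epsilon-greedy bandit that keeps tuning which content a (simulated) user sees, learning which variant moves their (simulated) opinion fastest. Every name and number here is hypothetical, and the "user" is just a float; nothing below reflects any real platform’s system.

```python
import random

# Hypothetical feed items the loop can choose between.
CONTENT_VARIANTS = ["story_a", "story_b", "story_c"]
EPSILON = 0.1  # fraction of steps spent exploring rather than exploiting

def observe_response(variant):
    """Simulated 'perception': how far this variant nudges the user's
    opinion toward the target. A real system would instead measure
    clicks, dwell time, shares, and so on."""
    susceptibility = {"story_a": 0.02, "story_b": 0.05, "story_c": 0.01}
    return susceptibility[variant] + random.gauss(0, 0.01)

def run_loop(steps=1000):
    estimates = {v: 0.0 for v in CONTENT_VARIANTS}  # running effect estimates
    counts = {v: 0 for v in CONTENT_VARIANTS}
    opinion = 0.0  # the simulated user state the loop is trying to move

    for _ in range(steps):
        # 'Action': choose what information the user consumes next.
        if random.random() < EPSILON:
            variant = random.choice(CONTENT_VARIANTS)    # explore
        else:
            variant = max(estimates, key=estimates.get)  # exploit

        # 'Perception': observe the behavioral response and new state.
        shift = observe_response(variant)
        opinion += shift

        # Update the policy toward whatever moves opinion fastest.
        counts[variant] += 1
        estimates[variant] += (shift - estimates[variant]) / counts[variant]

    return opinion, estimates

if __name__ == "__main__":
    final_opinion, learned = run_loop()
    print(f"simulated opinion shift after 1000 steps: {final_opinion:+.2f}")
    print("learned per-variant effect estimates:", learned)
```

The unsettling part is how little machinery this takes: the loop needs only a measurable behavioral signal and control over what gets shown next, which are exactly the two things Chollet points out Facebook has.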

Does this sound excessively paranoid? Then let me direct your attention to another eye-opening recent Twitter thread, in which Jeremy Ashkenas enumerates a set of Facebook’s more dystopian patent applications.

Again let me just summarize the highlights:

a system that watches your eye movements […] record information about actions users perform on a third party system, including webpage viewing histories, advertisements that were engaged, purchases made […] “Data sets from trusted users are then used as a training set to train a machine learning model” […] “The system may monitor such actions on the online social network, on a third-party system, on other suitable systems, or any combination thereof” […] ‘An interface is then exposed for “Political analysts” or “marketing agencies”’ […]

Are those quotes out of context? They sure are! So I encourage you to explore the context. I think you’ll find that, as Ashkenas puts it, “these patent applications don’t necessarily mean that Facebook wants to use any of these techniques. Instead, they illustrate the kinds of possibilities that Facebook management imagines.”

An explosion of data. A revolution in AI, which uses data as its lifeblood. How could any tech executive not imagine using these dramatic developments in new and groundbreaking ways? I don’t want to just beat up on Facebook. They are an especially easy target, but they are not the only fish in this barrel:

Here’s yet another viral privacy-related Twitter thread, this time from Dylan Curran, illustrating just how much data Facebook and Google almost certainly have on you.

Mind you, it seems fair to say that Google takes the inherent dangers and implicit responsibility of all this data collection, and the services it provides with this data, far, far more seriously than Facebook does. Facebook’s approach to potential malfeasance has been … well … let me point you to still another Twitter thread, this one from former Google Distinguished Engineer Yonatan Zunger, who I think speaks for all of us here while reacting to reports of Facebook’s CTO saying “the company is now mapping out potential threats from bad actors before it launches products.”

But the larger point is that the problem is not restricted to Facebook, or Google, or the big tech companies. It’s more acute with them, since they have more data and more power, and, in Facebook’s case, very little apparent sense that with great power comes great responsibility.

But the real problem is far more fundamental. When your business model turns data into money, you are, implicitly, engaging in surveillance capitalism.

Surveillance and privacy are issues not limited to businesses, of course; consider the facial recognition goggles already in use by Chinese police, or India’s colossal fingerprint-face-and-retina-driven Aadhaar program, or dogged attempts by the UK and US governments to use your phone or your Alexa as their surveillance device. But corporations currently seem to be the sharp and amoral edge of this particular wedge, and we have no real understanding of how to mitigate or eliminate the manifold and growing dangers of their surveillance capitalism.

I’m not saying all data gathering is ipso facto bad. But given the skyrocketing quantity of sensors and data in our world, and the ability to tie that data to individuals, I am saying that any initiative which supports privacy, pseudonymity, or anonymity should be considered desirable until proven otherwise; the tides of those ever-growing oceans of data threaten to wash such lonely islands away.

I’m certainly not saying AI is bad. Its potential for improving our world is immense. But as with every powerful tool, we need to start thinking about its potential misuses and side effects before we rush to deploy it at scale.

And I’m saying we should almost be grateful to Cambridge Analytica, for selling snake oil which claimed to do what tomorrow’s medicines actually will. Let’s not overreact with a massive moral panic. (checks Internet) Whoops, too late. OK, fine — but let’s try to take advantage of this sense of panic to calmly and rationally forestall the real dangers, before they arrive.



