A couple of weeks ago I gave a seminar at Stanford University entitled ‘Revealing data: creepy or curious’.
Much of my research has been driven by a desire to encourage people to be more curious about the world they live in, be it nature, culture, science or simply what is normally invisible to the naked eye. Over the years I have had the great fortune to work with fantastic researchers and designers who have designed, engineered and deployed ever more creative, and sometimes outlandish, technology interventions to this end. Each new technology that surfaces offers fresh possibilities for triggering curiosity, from mobile and pervasive computing to AR, tangibles and IoT. Using these, we have created all manner of interventions to get people to stop and wonder: recording millions of bat calls to make them accessible to the general public, mimicking the sounds of photosynthesis to get children thinking about ecology, and visualising how much energy a community uses each day by displaying it as a new form of ‘street graph’ that anyone walking down the road could see.
Collecting and analysing data has never been easier. Vast amounts of it are now being collected, scraped, aggregated and combined in all manner of forms, opening up a whole new world of possibilities. My research lab has started experimenting with new ways of presenting data back to communities so as to intrigue, galvanise and empower them. It is a very powerful tool. But by the same token, it has opened all manner of doors for others to experiment in less altruistic ways, be it with human, environmental, health or online data. Like many others, I am only too aware of the emergence of creepy data. By this I mean the output of sensing technologies and machine-learning algorithms that make inferences about what lies behind how we present ourselves, be it our emotions, intentions or personality traits.
And therein lies the rub: the new brand of human data analytics is often done behind closed doors, without us knowing it is happening. An example is the emerging field of ‘emotional AI’, which uses facial recognition technology to help retailers customise the shopping experience. Shoppers’ faces are captured by in-store cameras and converted into biometric templates that can work out, in milliseconds, someone’s gender, age and emotion, and then look up any stored ‘memories’ related to the face’s owner. In the UK alone, an estimated 60% of retailers are using facial recognition technology to track people’s faces, and companies like Axis Communications and NEC have become big players.
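To make that pipeline more concrete, here is a minimal sketch of the capture–template–classify–lookup loop just described. It is written in Python purely for illustration: the function names, the toy ‘embedding’ and the matching threshold are my own inventions, not any vendor’s actual API, and a real system would use trained deep networks at each stage.

```python
# A hedged, toy illustration of the in-store pipeline described above.
# Every name here is hypothetical; real systems use trained deep networks.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class FaceProfile:
    template: tuple[float, ...]                       # biometric template (embedding)
    emotion: str                                      # classifier output, stubbed here
    visits: list[str] = field(default_factory=list)   # stored 'memories' of the shopper

gallery: list[FaceProfile] = []   # previously enrolled shoppers

def extract_template(image: list[list[int]]) -> tuple[float, ...]:
    """Stand-in for a face-embedding network: reduces pixels to a short vector."""
    flat = [p for row in image for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))

def lookup(template: tuple[float, ...], threshold: float = 10.0) -> FaceProfile | None:
    """Nearest-neighbour match of a template against the enrolled gallery."""
    best, best_dist = None, threshold
    for profile in gallery:
        dist = sum((a - b) ** 2 for a, b in zip(template, profile.template)) ** 0.5
        if dist < best_dist:
            best, best_dist = profile, dist
    return best

def process_frame(image: list[list[int]]) -> FaceProfile:
    template = extract_template(image)
    known = lookup(template)
    if known is not None:                  # a returning shopper: extend their record
        known.visits.append("returned")
        return known
    profile = FaceProfile(template, emotion="neutral")   # attribute classification stubbed
    gallery.append(profile)
    return profile

# two sightings of the 'same' face: the first enrols it, the second matches it
frame = [[120, 125], [118, 130]]
process_frame(frame)
print(process_frame(frame).visits)   # ['returned']
```

The point the sketch makes is how little machinery is needed once a face has been reduced to a template: matching and record-keeping are trivial, which is precisely why the practice scales so easily.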
The same technology is also being used to assess job applicants’ answers to interview questions: every smile, twitch or frown is videoed during the interview, often without the applicant’s knowledge. The footage is subsequently analysed and classified into specific kinds of emotions, which are then matched up with personality traits. The combined personal data is scored to provide a profile of how honest, passionate or creative an applicant is, which is fed back to the company doing the hiring. For example, the UK start-up Human is proud of its new AI tool, claiming it can “look into the micro-expressions” of people’s faces and even suggesting that “these are the milliseconds of movement on the face which the naked human eyes often get wrong.” Such a tool appears to have super-powers that can outplay us in our efforts to deal with social situations. We may have evolved strategies to save face and, when deemed necessary, hide our true emotions; but now, it seems, AI can detect when we are lying, nervous, bored or unhappy from the mere twitch of a muscle.
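Again purely for illustration, here is a sketch of the scoring step in that description: per-frame emotion labels aggregated into trait scores through a weight table. The emotion-to-trait weights below are entirely invented; as far as I know, no vendor publishes theirs, which is rather the point.

```python
# A hedged sketch of turning per-frame emotion classifications into a
# personality 'profile'. The weight table is invented for illustration.
from collections import Counter

# hypothetical emotion-to-trait weights (vendors do not publish theirs)
TRAIT_WEIGHTS = {
    "honesty":    {"smile": 0.2, "neutral": 0.5, "frown": -0.3},
    "passion":    {"smile": 0.6, "neutral": 0.0, "frown": 0.1},
    "creativity": {"smile": 0.3, "neutral": 0.1, "frown": 0.0},
}

def score_candidate(frame_emotions: list[str]) -> dict[str, float]:
    """Aggregate one emotion label per video frame into trait scores."""
    freq = Counter(frame_emotions)
    total = len(frame_emotions) or 1
    return {
        trait: round(sum(weights.get(emotion, 0.0) * count / total
                         for emotion, count in freq.items()), 3)
        for trait, weights in TRAIT_WEIGHTS.items()
    }

# e.g. five frames of an interview answer, classified frame by frame
print(score_candidate(["smile", "neutral", "neutral", "frown", "smile"]))
# {'honesty': 0.22, 'passion': 0.26, 'creativity': 0.16}
```

Notice what the sketch exposes: the leap from observable muscle movements to ‘honesty’ lives entirely in that weight table, and the candidate never gets to see or contest it.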
Some might ask how this approach differs from other techniques used throughout history to cajole shoppers or detect whether someone is lying. The drive to improve the shopping experience or to hire the right people is nothing new. Marketing companies and psychologists have been conjuring up personality tests, advertising strategies and the like for decades, in order to attract more eyeballs or to understand better what makes someone tick and how hard-working, reliable and so on they are. Furthermore, most online companies routinely track and profile their customers through the use of cookies and A/B testing, now accepted as part and parcel of life. So why the sudden concern about so-called creepy data, especially the new methods of finding out more about what people do, what they like and what they are like?
The big difference between old-fashioned marketing methods and the new-fangled analytics is that people have no control over the personal data being collected about them. All of this emotional AI can be seen as an invasion of their ‘neurological privacy’, a term coined by my PhD student Lydia Nicholas that captures the insidiousness of collecting data from under the skin.
Advertising agents in the past used any number of tactics to get people to pay attention to the message in their ads and campaigns (Saatchi and Saatchi were masters at it) and then, hopefully, to go and buy a product or vote against the opposing political party. But they only had relatively small samples of data about what their targeted audiences said, did or felt, and they never got under the skin of people in the way current methods do. You could see on the billboards how low they would go when ‘lying’ to the public, as the scary ‘Labour isn’t working’ poster attests.
Nowadays, companies who collect and analyse personal data hold a much sought-after and valuable asset. They know this only too well and will sell their data collections to third parties at a price, who can then use them in whatever ways they deem fit. One only has to look at the Facebook–Cambridge Analytica scandal to see what can happen when data gets into the wrong hands. The jury is still out on whether the online marketing methods used to persuade millions of people to vote for Trump or Brexit, which drew on data from millions of Facebook profiles, were actually effective at changing people’s minds about whom to vote for. Either way, society’s outrage at this underhand way of treating the public, without properly informing them of what their data was being used for, has justifiably hit the roof.
Creepy data, to my mind, is derived from the new analytic methods being used to infer people’s proclivities and personalities by getting under the hood, without them knowing it is happening. It is underhand and Brave New World-ish. Maybe emotional AI will one day become the new norm for figuring out when we are at our most suggestible and only too happy to be primed when shopping for the things we like (in the way Netflix suggests what we might want to watch next). Or maybe we will wise up and learn how to present our micro-expressions in the most favourable light. But before either can happen, we should know what lies behind the new science, and be more privy to just how gullible, smart or pleasant it deems us to be.