Can digital assistants be used as agents of undue influence? AIs like Siri, Alexa, Cortana, Google Now, M and Bixby help us send messages, play music, and set reminders. We can even insult them to get a witty comeback, or ask them to tell us a joke – and then demand they tell us a better one. They’re programmed to learn and adapt; with every interaction, they gather data – and then they relay that data back to Apple, Amazon, Microsoft, Google, Facebook or Samsung. Could it be that George Orwell’s predictions have come true, but, instead of being watched by Big Brother, we are being overheard by Little Sister?
University of Bath AI researcher Joanna Bryson warns that the feeling that these digital assistants have become part of the family may lull us into a false sense of security, because they are all, in effect, spies, reporting what they hear back to the companies that made them. Bryson says that she modifies her own conversation when she knows a digital assistant is in the room.
Email systems also collect information. In the UK, for instance, the online shopping service Ocado uses Google’s TensorFlow machine-learning library to analyze customers’ messages. The results of this kind of analysis are immediate, and we see them everywhere – we are painfully aware of the online advertising services that try to sell us variants of items we have recently bought, or even just considered buying.
Google and Facebook collect reams of data, but assure us that it is anonymized. The problem is that we have to trust everyone who has access to that data. Massive security failures have led to data leaks from Barclays, Verizon, Wells Fargo, HSBC and, most recently, Equifax. If our supposedly secure banking services can be hacked, then we should all be a little suspicious about how safely our information is handled by the huge corporations that hoover it up – and about the online contracts we sign, usually without bothering to read them.
Should we hand over our privacy to artificial intelligence? Stephen Hawking has said that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk warns that AI poses “vastly more risk than North Korea.” Hopefully, we will be able to establish ethical controls before Skynet decides to wipe us all out. In the meantime, we must not sit by silently while big business scoops up every last detail of our medical and bank records and our shopping and browsing habits, to say nothing of our most private and personal conversations.
The military funded much of the earliest AI research. Siri, for instance, was designed to help soldiers before she was built into iPhones and Macs. These systems were originally developed as weapons, and they can still be used as weapons.
AI also has a place in the brave new world of politics. It is very likely that both the recent US presidential campaign and the UK’s Brexit referendum were influenced by AI applications that scrutinize social media. Online bots were deployed in those propaganda wars – and at every step, they were pretending to be human.
Not only do these digital assistants report to their programmers – ostensibly to improve services – but they can also be hacked. A compromised assistant might not only report your every word to the hackers but, in the wrong circumstances, allow them to cause havoc in your home, steal your identity, and empty your bank accounts.
In a recent piece in New Scientist, University of Bristol researcher Nello Cristianini points out: “We have happily accepted incredible intrusions into our privacy for nearly two decades. Now we live in a world where our own personal information is used and traded and mined for value. We should ask questions about where we want to draw the line.”
It is time to ask those questions.
~ jon atack