Robots As Social Influencers: The Future of Persuasive Technology

In the mid-90s, research underway at Stanford would change the way we think about computers. The Media Equation experiments were simple but profound: participants interacted with a computer that acted socially for a few minutes, after which they were asked to give feedback on the computer in one of two conditions — either on the same computer they had just been working on, or on a second computer across the room. The results showed that participants responding on computer #2 were far more critical of computer #1 than those responding on the same machine they’d worked on.

Think about that for a second… people responding on computer #1 didn’t want to ‘hurt’ the computer’s ‘feelings’ to its ‘face’, but had no problem talking about it behind its ‘back’. This phenomenon became known as CASA – Computers as Social Actors – because the researchers had shown that people are hard-wired to respond socially to technology that presents itself as even vaguely social.

Fast forward over 20 years and the CASA phenomenon continues to be explored, particularly as our technologies have become more social. As a researcher, lecturer, and all-around lover of robotics, I observe this phenomenon in my work every time someone thanks a robot, assigns it a gender, or tries to justify its behaviour using anthropomorphic rationales. What I’ve witnessed during my research is that while few people are under any delusion that robots are people, we sure tend to defer to them just as we would to another person.

While this may sound like the beginnings of a Black Mirror episode, this tendency is precisely what allows us to enjoy social interactions with robots and place them in caregiver, collaborator, or companion roles. The positive aspects of treating a robot like a person are precisely why roboticists design them as such: we like interacting with people. However, if we continue down the current path of robot and AI deployment, these technologies could emerge as far more dystopian than utopian.

Look at Hanson Robotics’ Sophia robot. It has appeared on 60 Minutes, received honorary citizenship from Saudi Arabia, holds a title with the United Nations, and has been on a date with Will Smith. If this sounds more like a series of publicity stunts than incredible robotic advancement, you are probably right. While Sophia undoubtedly showcases many technological achievements, few of them surpass Hanson’s achievements in marketing. If Sophia truly were a person, our society already has a name for someone like it: influencer.

As these technologies improve, they become more lifelike, more engaging, and more capable of influencing you and me. Worse than simply being sociopathic agents – goal-oriented without morality or human judgement – these influencing technologies we welcome into our lives become tools of mass influence for whichever organization or individual controls them. If you thought the Cambridge Analytica scandal was bad, just imagine what Facebook’s algorithms of influence could do with a face. Or a thousand faces. Or a million. The true value of a persuasive technology is not its cold, calculated efficiency, but its scale.

And while recent scandals and exposés in the tech world have left many of us feeling helpless against the digital giants of the valley, fortunately, the solution to this problem comes down to a single word: transparency. It is up to us to insist that the technology in our lives be accountable to a few basic questions. Who owns or sets the mandate of this technology? What are its objectives? What approaches can it use? What data does it access? These fundamental questions matter for social technologies because we expect the same answers, albeit often implicitly, when interacting with another person. However, since robots could soon leverage superhuman capabilities, enact the will of an unseen actor, and do so without the verbal or nonverbal cues that shed light on their intent, we must demand that these questions be answered explicitly.

As a roboticist, I get asked the question, “when will robots take over the world?” so often that I’ve developed a stock answer: “as soon as I tell them to.” While I do love watching people shift uncomfortably at my dark humour, my joke is underpinned by an important lesson: don’t scapegoat machines for decisions made by humans. I consider myself a robot sympathizer because I think ‘bots get unfairly blamed for many human decisions and errors. Though the tendency to anthropomorphize increasingly social technologies is innate to our being, it is important that we periodically remind ourselves that a robot is not your friend, your enemy, or anything in between. A robot is a tool, wielded by a person (however far removed), and increasingly used to influence us.