Many people have their eyes fixed on the AI horizon and the supposedly inevitable singularity, when humans transcend their physical form into a world of digital bliss. Regardless of which side of the singularity debate you fall on, there are many stepping-stones ahead of that extreme that warrant recognition and discussion not in five or ten years, but today. Though we haven't cracked general AI, our narrower efforts have already accomplished a lot: chatbots have, at least on a technicality, passed versions of the Turing test; we carry on lengthy conversations with support chatbots run by our banks, telcos, and others; and we allow recommendation engines to influence our food, movie, music, and dating habits.
Like it or not, AI is not simply an emerging movement; it is already here in a big way, infiltrating the most private parts of our lives and even our deaths. The oft-speculated sci-fi scenario of a deceased friend being recreated, in part or in whole, from the personality and biographical data they left behind has recently moved past fiction and become a reality thanks to Eugenia Kuyda's AI startup, Luka. Casey Newton at The Verge recently published a beautiful piece on Kuyda's efforts to build an AI around her late friend, Roman Mazurenko, and it raises a number of crucial and difficult questions about this type of posthumous artificial intelligence. For some, the Roman chatbot fell short. Others found it inappropriate. Many of Mazurenko's friends, however, found some comfort in this rare and strange interaction.
Yet perhaps the most interesting part of Newton's article is his exploration of how people actually used the Roman bot. Far from seeking answers to unresolved questions or closure with their late friend, many seemed to treat the chatbot as a sort of confessional: a deaf ear to speak to. People would describe personal challenges they were facing or ask Roman for advice, knowing the answers would be vague, but finding value in them all the same. It seemed the Roman bot had hit upon a crucial human need: something that felt intelligent enough to listen, but not smart enough to judge or critically talk back.
Interestingly, the Roman bot ended up filling a gap very similar to the one filled by one of the first chatbots ever built. ELIZA, a computer program published by Joseph Weizenbaum in 1966, mimicked a conversation between a user and a rudimentary psychotherapist. In addition to being among the first of its kind, ELIZA became a thing of programming legend, referenced for decades to come in the AI and developer communities. Part of what made ELIZA so successful was that its responses were often vague, non-committal, and open to personal interpretation. Coupled with anonymity and the absence of human judgment, ELIZA, much like Roman, became the perfect platform for people to share the authentic depths of their minds.
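For the curious, the mechanics behind that effect are remarkably simple: keyword rules that reflect the user's own words back as open-ended questions. The sketch below is a minimal, hypothetical reconstruction of that pattern in Python; the rules and phrasings are invented for illustration and are far cruder than Weizenbaum's original script.

```python
import random
import re

# Illustrative keyword rules: each regex captures a fragment of the user's
# statement, and each template echoes it back as an open-ended question.
# These rules are invented for this sketch, not taken from the original ELIZA.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?",
                        "How long have you felt {0}?"]),
    (r"\bi am (.+)", ["What makes you say you are {0}?",
                      "Do you believe you are {0}?"]),
    (r"\bmy (.+)", ["Tell me more about your {0}.",
                    "Why do you mention your {0}?"]),
]

# Pronoun reflection turns "my job" into "your job" so the echo reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, statement.lower())
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt, the heart of
    # ELIZA's "vague and open to interpretation" effect.
    return random.choice(["Please, go on.",
                          "How does that make you feel?",
                          "I see. Tell me more."])

if __name__ == "__main__":
    print(respond("I feel lost without my friend"))
```

Fed "I feel lost without my friend," this produces something like "Why do you feel lost without your friend?": a reply with no understanding behind it, yet open enough for the speaker to pour their own meaning into it.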
The question that must be raised, however, is: what is it about these platforms that we love? Is it their fake humanity, or in fact their lack thereof? Is it the anonymity and the absence of perceived judgment? Do we love these platforms because they feel just human enough to trick us into fully immersing ourselves in the experience, but still foreign enough that we don't have to worry about feeling vulnerable? Or are we so steeped in our own societal narcissism that we subconsciously want to speak to ourselves to sort things out, yet feel we need some external agent to help us work through our issues?
And what happens when our AIs become more advanced? What happens when their responses to our confessions become less vague and more pointed? What happens when these chatbots sound and even look more human? Will they still serve the same purpose? Do we truly wish to bare our souls to those we love? Do we actually want to speak to the dead?
Or do we simply want to speak to ourselves, but not feel crazy while we’re doing it?