The history of User Experience (UX) is a reactive one. Long before the term was coined, let alone developed into a robust set of tools and frameworks, people were fighting back against the cold, dehumanizing doctrines of Taylorism and Ford’s assembly line. These methods treated people like cogs in a machine, yet their productivity gains captivated industry through the early 1900s and two World Wars. However, as these efficiency-focused mindsets dug deeper into our social fabric and technology became increasingly powerful (as well as embedded in our daily lives through the computerization movement), people began to push back against practices that deprived them of their humanity.
As early as the late 1940s, the Toyota Production System challenged these practices and began respecting people’s critical role as the source of improvement in the production process. In the 1950s, Bell Labs hired psychologists to run behavioural experiments around the design of the touch-tone phone interface. Cognitive Science emerged post-WW2 as a discipline attempting to understand the organizing principles of the mind. Through the 1970s, Xerox PARC pioneered early computing innovations through a radically human-centric approach. Each of these examples stands as a milestone on the path to the formalization of UX in the late 80s and early 90s. The rest, as they say, is history. UX is now a near-standardized discipline with millions of practitioners around the world and a table-stakes expectation for any new product, service, or experience brought to market.
Why the ‘AI Moment’ is the Same, but Different
Amazon “reverse-centaur” workers slavishly follow the dictates of an algorithm to chase items around a warehouse. Uber drivers are manipulated by Artificial Intelligence (AI) designed to give them just enough positive reinforcement and cash to stay on the road a little longer. UnitedHealth Group uses keystroke monitoring to assess the productivity of its digital employees. Tyson Foods invests in a smartwatch app that monitors the productivity of line workers. You are forced to solve increasingly complex CAPTCHA problems to prove you’re not a bot while simultaneously training bots to better recognize bridges, boats, and traffic lights. We see similar headlines each week, and our myopic, efficiency-focused history seems to be repeating itself.
Once again, technology has encroached upon our lives in a totalizing and dehumanizing way because, once again, technological innovation has outpaced social innovation. Neo-Taylorism (a.k.a. Digital Taylorism) has returned workplace management to a repressive form of employee domination, and it is beginning to reach into consumer-facing experiences to do the same to the very people who pay for an organization’s products and services (read: you). Industry has found a new tool for turning a human being into a predictable, controllable cog: artificial intelligence. The more things change, the more they stay the same. Yet where is UX to save us this time?
The Limitations of UX
The core philosophy of UX remains as critical today as ever: keep humans at the centre of your design. However, the practice of UX has become so algorithmic and so hyper-focused on screen-based experiences that it has lost relevance to automation technologies such as AI and robotics. Worse, the UX conversation is now largely dominated by Big Tech; the most popular UX course in the world is currently offered by Google. Modern UX practice has become more about methods for hacking your dopamine circuits to optimize user engagement than about the inclusive democratization of powerful technological tools. UX has been weaponized against us instead of used for us.
On the other hand, while the intent of some UX approaches remains pure, they are becoming woefully limited in the age of automation. Frameworks like Garrett’s Elements of User Experience do a good job of framing the strategy, scope, structure, skeleton, and surface of a web- or app-based experience. However, Garrett’s elements neglect key aspects of both the meaning formed in interactions with humanlike AI and the relationships we develop with these increasingly social technologies.
UX was the solution to the horrendous designs of early computers, websites, and apps. These were initially developed by homogeneous teams of coders and engineers who unconsciously embedded their own biases, use cases, and requirements into the underlying digital fabric of the world. UX forced these teams to look outside of themselves and create more universally palatable designs that were inclusive of a broader range of peoples, backgrounds, and experiences. Yet emerging AI-based technologies increasingly feel designed by and for a modern generation of Digerati: technological elites who either work in or worship Silicon Valley. Google facial recognition software that identifies Black people as gorillas, Amazon recruiting algorithms that discriminate against women, and the US Dept. of Health disqualifying low-income groups from increased preventative care measures over biased data are all examples of this kind of self-serving technology.
What Has AI Changed?
Garrett’s five elements – strategy, scope, structure, skeleton, and surface – remain useful in their ability to encompass the broad range of considerations for good UX. However, in the age of AI, two elements must be added to the list: the semiotic and the social.
Semiotics pertains to the meaning derived from a broad set of signs: words, gestures, touch, and much more. When you design a new chatbot, its pausing and prose become critical in contextualizing interactions with a user. When you create a virtual avatar, its appearance sets the tone for how people will perceive and approach it. When you build a robot, how it moves communicates as much or more to a user than what it says. Each of these signs represents a touchpoint that you can either actively control as part of your intended experience or passively neglect and, in doing so, potentially undermine that experience.
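To make “actively controlling these signs” concrete, here is a minimal, hypothetical sketch in Python. Nothing in it is a real chatbot API; the names (SemioticStyle, reply) and their parameters are illustrative assumptions, showing only how pacing and prose might be treated as explicit design parameters rather than accidents of implementation.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch: a chatbot's pacing and prose as designed semiotic
# signals. All names and defaults here are illustrative assumptions.

@dataclass
class SemioticStyle:
    thinking_pause_s: float = 1.2   # a brief pause reads as "considering", not lag
    hedge: str = "I think"          # hedged prose honestly signals uncertainty
    sign_off: str = ""              # an optional warm closing sets relational tone

def reply(answer: str, style: SemioticStyle) -> str:
    """Compose a response whose pacing and wording are deliberate choices."""
    time.sleep(style.thinking_pause_s)  # the pause itself carries meaning
    parts = [style.hedge, answer, style.sign_off]
    return " ".join(p for p in parts if p)

print(reply("the meeting moved to 3pm.", SemioticStyle(hedge="It looks like")))
```

The point is not the code but the design stance it encodes: the pause, the hedge, and the sign-off are each a choice, and leaving them at whatever the implementation happens to produce is itself a passive choice.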
In the mid-90s, Reeves and Nass’s Media Equation experiments showed people’s tendency to assign humanlike characteristics to computers that present themselves as social actors. We now live in an era when our apps introduce themselves, our smart homes talk back to us, and our products are increasingly given a face. It has never been more important to consider the social implications of technological experiences. Whether we like it or not, and whether we realize it or not, our children are forming relationships with Alexa, our elderly are forming relationships with care robots, and you are forming relationships with the apps you use every day at work. Once again, we can choose to actively craft positive relationships, or we can passively neglect them and leave people to form unhealthy, dependent, or delusional relationships with their technology.
Why We Now Need AX
There has never been a more important time to start having conversations about these extra considerations and how they contribute to Artificial Experience (AX): the complex interactions we have, and the relationships we form, with modern automation technologies. As we lay the foundations and governance for many data and AI applications, we have the opportunity to proactively design positive experiences instead of reactively fixing systems that have already done extensive damage. Though in some lights AX can be viewed as the next generation of UX, I believe it is important to differentiate the two for a few reasons:
- The transition to AI-powered experiences represents a seismic shift in how we live and work, the likes of which we have not witnessed since the original computerization movement that spawned the need for UX.
- UX has become a victim of its own success: while its fundamental philosophy remains intact, the rigid maturity of its practice has dated it, and the influence of Big Tech has poisoned it.
- The social nature of many AI-powered experiences necessitates a more complex and in-depth understanding of the social requirements and implications of that experience. This is the core of AX.
No longer should we simply ask ourselves, “what colour should the background be?”, “where should I place this button?”, or even “how should I structure information for a user?” We must now ask questions such as “what meaning does my choice of dialogue convey?”, “how does my bot’s facial expression make people feel?”, or even “what kind of relationship will someone develop with this agent?”
From the stick to the smartphone, our technologies have always been social by being extensions of ourselves and, in turn, our sociality. Modern automation is yet another social extension; however, it is unique as the first generation of technologies to which we also assign such vast social agency. We no longer view our technologies simply as tools; we increasingly look to them as coworkers, confidants, and even companions. UX tells us how to design these technologies to be good tools, but that is not enough for the age of automation. We need AX to show us how to design these technologies to be good collaborators.