
Trusting AI: Anthropomorphism and the Acceptance of Intelligent Agents




In October 2021, with much hype and the release of a highly speculative demo starring a cartoon-like Mark Zuckerberg, Facebook embraced the metaverse by changing its name to Meta and announcing that it would spend ten billion dollars on the technologies required to build it. By October 2022, however, it was clear that the effort had failed, and Meta was in deep trouble. Horizon Worlds, the metaverse Meta created, has simply been unable to attract and retain users. According to the Wall Street Journal, only 9% of its worlds are visited by at least 50 people, and most users do not return to the platform after the first month. The performance of Decentraland is similarly tepid, especially compared with the metaverse gaming platform Roblox, which boasts 52 million daily active users as opposed to the 8,000 reported by Atlas Corporation for Decentraland.


What missing element or elements could make these virtual environments more successful at attracting and retaining users? Consider an experience most of us have had while exploring virtual worlds: touring a digital art exhibition. You blunder around the gallery with no sense of direction, trying desperately to read the detailed text on the information panels and bumping into other people’s avatars without knowing who they are or whether they are interested in, or know anything about, the artworks on display. An entirely frustrating experience!


In an ideal metaverse, an art gallery would boast an intelligent agent, or perhaps several such agents, wearing uniforms with badges identifying them as docents capable of answering detailed questions about the artists and their works. Visitors could join a group tour of the show led by one of the docents and would be able to meet and interact with other members during a docent-led group conversation.


One of Polycount’s most recent creations was a virtual stadium in the shape of a Hublot watch for the 2022 Qatar World Cup. Inside the stadium stood the images of the 15 Hublot brand ambassadors, all of them great football stars. If, instead of inert figures, it had been possible to make them intelligent agents, it is easy to imagine the stadium filled with the avatars of fans asking their favorite stars about their careers.


Progress toward deploying intelligent agents that are both well informed and capable of connecting emotionally with humans is being made from two directions. On the one hand, the development of ChatGPT, a greatly enhanced chatbot built on the text generator GPT-3, a 175-billion-parameter model trained on data scraped from the web, represents a significant advance in conversational AI. At the same time, companies like Emoshape and InWorld have been working on bringing emotional intelligence to their AI.


Widespread human acceptance of intelligent agents, moreover, is likely to be enabled by mind-based anthropomorphism, the tendency of the human mind to attribute “uniquely human mental capacities to non-human entities” (Castelo, 2019). In their 2022 article “A Mind in Intelligent Personal Assistants: An Empirical Study of Mind-Based Anthropomorphism,” Cuicui Cao et al. found that “mind-based anthropomorphism can enhance people’s social connection with intelligent personal assistants” (IPAs) and induce them to explore an IPA’s capabilities. The human-IPA connections the authors analyzed mainly involved voice assistants like Alexa and Siri and users’ willingness to explore their capabilities. As the authors point out, these IPAs already possess human-like capabilities, such as a voice and the ability to carry on a continuous dialogue, tell jokes, and make suggestions pertinent to users’ daily lives, that make it easy for users to attribute human characteristics to them. If anthropomorphism shapes our reaction to voice assistants, it is likely to be even more pronounced with agents that have a human-like appearance, the ability to maintain a dialogue, and the capacity to forge an emotional connection.


The phenomenon of mind-based anthropomorphism is a double-edged sword. On the one hand, it makes it easier for brand ambassadors to engage and establish trust with human visitors in a branded virtual environment. On the other, there is the potential for consumer abuse, with intelligent agents acting as shills for dubious investment schemes or flawed products. Company self-policing and government regulation of virtual worlds will be necessary to protect consumers. But what mind-based anthropomorphism does tell us is that as intelligent agents become more and more “human,” real humans are predisposed to meet them more than halfway.


Castelo, N. (2019). Blurring the line between human and machine: Marketing artificial intelligence. Columbia University.


Cao, C., Hu, Y., & Xu, H. (2022). A Mind in Intelligent Personal Assistants: An Empirical Study of Mind-Based Anthropomorphism, Fulfilled Motivations, and Exploratory Usage of Intelligent Personal Assistants. Frontiers in Psychology, 13, 856283. https://doi.org/10.3389/fpsyg.2022.856283