When a safety tester working with OpenAI's GPT-4o sent a message to the chatbot stating "this is our last day together," it became clear to company researchers that some form of bonding had happened between the AI and the human using it.
In a blog post detailing the company's safety efforts in developing GPT-4o, the flagship model for ChatGPT users, the company explained that these bonds could pose risks to humanity.
Per OpenAI:
"Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships. Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and 'take the mic' at any time, which, while expected for an AI, would be anti-normative in human interactions."
There's a lot to unpack there, but essentially OpenAI worries that people could come to prefer interacting with AI due to its passivity and perpetual availability.
The potential for this scenario should surprise nobody, especially not OpenAI. The companyâs stated mission is to develop artificial general intelligence. At nearly every step of its business process, OpenAI has described its products in terms of their human equivalency.
OpenAI isn't the only company to do so; in fact, it appears to be an industry practice. In marketing terms, it helps to explain technical qualities such as "token size" and "parameter count" in ways that make sense to non-scientists.
Unfortunately, one of the primary side effects of doing so is anthropomorphization: treating an object like a person.
Artificial bonds
One of the earliest attempts to create a chatbot occurred in the mid-1960s when scientists at MIT launched "ELIZA," a natural language processing program named after a literary character. The purpose of the project was to see if the machine could fool a human into thinking it was one of them.
In the time since, the generative AI industry has continued to embrace the personification of AI. The first wave of modern natural language processing products included assistants named Siri, Bixby, and Alexa. Even those without human names, such as Google Assistant, still had a human voice. Both the general public and the news media pounced on the anthropomorphization and, to this day, still refer to most interactive AI products as "he/him" and "she/her."
While it's beyond the scope of this article or OpenAI's current research to determine the long-term effects of human-AI interactions, the fact that people are likely to form bonds with helpful, subservient machines designed to act like us seems to be exactly the scenario the companies selling access to AI models are aiming for.