When the writer and journalist Juan José Millás had a chat with ChatGPT in September, he pretended to conduct a psychoanalysis session with the tool. He wanted to use the Turing test to see whether the chatbot could talk to him like a real person — specifically, like a psychoanalyst — and not a computer. The journalist told the AI about his dreams and fears, expecting it to guide him through the treatment, but, among other things, it kept reminding him that the situation was imaginary and explaining that it was a language model. Millás described his hypothetical psychoanalyst as narrow-minded and forgetful; in the end, he concluded that the AI had failed the test.
In conversations like Millás's, a person's prior beliefs about an AI agent such as ChatGPT shape the conversation itself, as well as perceptions of the tool's trustworthiness, empathy and effectiveness, according to researchers from the Massachusetts Institute of Technology (MIT) and Arizona State University, who recently published a study in the journal Nature Machine Intelligence. “We have found that, to some extent, AI is the AI of the beholder. When we describe to users what an AI agent is, it doesn't just change their mental model; it also changes their behavior. And because the tool responds to the user, when people change their behavior, that also changes the tool's behavior,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group at the MIT Media Lab and a co-author of the study.
“A lot of people think of AI as only an engineering problem, but its success is also a human-factors problem. The way we talk about AI can have a huge impact on the effectiveness of these systems,” says Pattie Maes, a study author and professor at MIT. “We attribute human forms and characteristics to AI, making it seem more human or personal than it really is,” adds co-author Ruby Liu.
The study included 310 participants, whom the researchers randomly divided into three groups, giving each group different background information about the AI. Participants were asked to discuss their mental health with an AI agent for approximately 30 minutes, decide whether they would recommend it to a friend, and then rate it. The first group was told that the agent had no motives in the conversation, the second that the AI had benevolent intentions and cared about their well-being, and the third that it had malicious intentions and would try to deceive them.
Half of the participants in each group talked to an AI agent based on the GPT-3 generative language model, a deep-learning model that can generate human-like text. The other half interacted with an implementation of the ELIZA chatbot, a less sophisticated, rule-based natural language processing program developed at MIT in the 1960s.
The results of the study revealed that users' predisposition toward the tool was decisive: 88% of the people who received positive information and 79% of those who received neutral information came to believe that the AI was empathetic or neutral, respectively. Ángel Delgado, an AI engineer at Paradigma Digital, believes that the positive majority is also a result of using GPT-3, which he describes as the first model to pass the Turing test: “It consists of letting a person interact with the artificial intelligence [tool] without telling them whether or not it is an AI, to see if they can guess. GPT-3 is the first language model to achieve results so good that it sounds human-like.”
People who were told that the tool cared about them tended to talk to it in a more positive way, which also made the agent's responses more positive. The more you talk to the tool, the more it learns, explains Ramón López de Mántaras, director of the Spanish National Research Council's Artificial Intelligence Research Institute. “You can correct, confirm and qualify its responses,” he adds.
From fear of 'Terminator' to a lack of critical thinking
Negative priming statements (i.e., unfavorable information given to a person before interacting with the AI agent) had the opposite effect: only 44% of the participants who received negative information about the tool believed it. “With the negative statements, instead of priming them to believe something, we were priming them to form their own opinion. If you tell someone to be suspicious of something, they're likely to be more skeptical in general,” says Ruby Liu.
Pattie Maes explains that the influence of science fiction is a major factor in negative thinking about artificial intelligence. “Movies like The Terminator and The Matrix depict scenarios in which artificial intelligence becomes self-aware and brings about the downfall of humanity. These fictional narratives contribute to the fear that artificial intelligence could take over and surpass human intelligence, posing a threat to our existence.”
According to the study's findings, prior beliefs about language models can have such a powerful effect that they can be used to make an agent seem more capable than it actually is, leading people to trust it too much or to follow incorrect advice. “The tool you are interacting with is not an intelligent person,” López de Mántaras says bluntly. “People think the machine is intelligent, and they listen to what it says without any critical thinking… Our capacity for critical thinking is getting weaker and weaker.”
Experts agree that we should be aware of how AI works and understand that it is programmed. “We have to prepare people to be more careful and to understand that AI agents can hallucinate and can be biased. The way we talk about AI systems will have a big impact on how people respond to them,” Maes says.