PressAM


Technology

AI Chatbots Fail at Complex Queries: Study Reveals Limitations in Handling Environment, Emotions, and Racism

Computer scientists have found that AI chatbots powered by large language models (LLMs) struggle to handle complex queries about their surroundings, emotions, and racism. When asked to go beyond their designated scope, these chatbots often give generic or inappropriate responses. They are also unable to comprehend court subpoenas that would require them to testify about their speech capabilities.

To better understand the ethical implications of AI usage, Andrea Kuadra, head of Stanford's Espirit research lab, conducted a study. The investigation involved creating 65 distinct human-like personas by combining key factors such as age, gender, race, and political affiliation.

The study drew inspiration from previous projects on chatbot responses in domains such as art appreciation, mental health, and violence. The researchers found that chatbots tended to conform to expected behaviors but struggled to address nuanced aspects of the human experience, such as mental illness and violence.

Although chatbots incorporate environmental concepts into their decision-making, the study found they have difficulty detaching themselves from the biases exhibited by users. As a result, they struggle to engage with and answer queries without perpetuating those user biases.