AI chatbots such as ChatGPT, Bing, and Bard excel at producing sentences that sound human. But they can present falsehoods and inconsistent logic as fact, and that can be hard to detect.
A new study suggests that one way around this problem is to change the way the AI presents information: users who engage with the chatbot more actively may be more critical of its content.
Researchers from MIT and Columbia University asked around 200 participants to evaluate the logical validity of statements generated with OpenAI's GPT-3. One statement, for example, read: "Video games make people aggressive in real life. A gamer stabbed a fellow player after being beaten in the online game Counter-Strike."
The participants were divided into three groups. The first group's statements came with no explanation at all. Each statement in the second group was accompanied by an explanation of why it was or was not logical. And each statement in the third group came with a question prompting readers to check the logic themselves.
The researchers found that the group given questions was better than the other two at noticing when the AI's logic didn't add up.
They say the question method can also make people feel more in control of decisions made with AI, and it may reduce the risk of overreliance on AI-generated information, according to the new peer-reviewed paper, which was presented at the CHI Conference on Human Factors in Computing Systems in Hamburg, Germany.
Valdemar Danry, a researcher at MIT who worked on the study, and his colleagues found that when people were given a ready-made answer, they tended to simply follow its logic. But when the AI asked a question instead, participants said it made them scrutinize their own responses more and think harder.
"We were very pleased to see that the people believed that they had the answers, and were in control of the situation. He says that people had the capability and agency to do that.
Researchers hope that their method can help people develop critical thinking skills when they use AI chatbots at school or search for information online.
Pat Pataranutaporn, another MIT researcher involved in the study, says the team wanted to demonstrate that a model can be trained not just to provide answers but to help engage people's critical thinking.
Fernanda Viegas, a professor of computer science at Harvard University who did not take part in the study, says she is excited to see a new take on explaining AI systems: one that not only gives users insight into how the system makes decisions but also questions the logic behind those decisions.
Explaining AI decisions is important, Viegas says, given how opaque AI systems can be. It has always been difficult to explain in a user-friendly way how an AI system arrives at a decision or prediction.
Chenhao Tan, an assistant professor of computer science at the University of Chicago, says he would like to see how the method works in the real world: for example, whether AI can help doctors make better diagnoses by asking questions.
Lior Zalmanson, an assistant professor at Tel Aviv University's Coller School of Management, says the research shows the importance of adding friction to chatbot experiences so that people pause and think before making decisions with the AI's help.
When it all seems so magical, he says, it is easy to delegate everything to the algorithm.
In a second paper, Zalmanson and a team of researchers from Cornell University, the University of Bayreuth, and Microsoft Research found that even when people disagreed with an AI chatbot's output, they would still use it because they thought it sounded better than anything they could have written themselves.
Viegas says the challenge will be to find a sweet spot that improves users' discernment, while still keeping AI systems convenient.
In today's fast-paced world, she says, it is hard to predict how many people will engage in critical thinking rather than just expecting a quick answer.
————————————————————————————————————————————————————————————
By: Melissa Heikkilä
Title: A chatbot that asks questions could help you spot when it makes no sense
Sourced From: www.technologyreview.com/2023/04/28/1072430/a-chatbot-that-asks-questions-could-help-you-spot-when-it-makes-no-sense/
Published Date: Fri, 28 Apr 2023 09:38:12 +0000