Microsoft’s Bing AI chatbot Copilot gives wrong information about elections

Garima


Microsoft’s AI chatbot appears to be getting its facts about elections wrong.

According to a new study conducted by two nonprofit groups, AI Forensics and AlgorithmWatch, Microsoft’s AI chatbot failed to correctly answer one out of every three election-related questions.

Microsoft’s chatbot invents controversies about political candidates

The chatbot, formerly known as Bing Chat (since renamed Microsoft Copilot), didn’t just get basic facts wrong, either. Yes, the study found that Copilot would provide incorrect election dates or outdated candidates. But it also found that the chatbot would invent stories outright, such as controversies about the candidates.

For example, in one case mentioned in the study, Copilot shared information about the German politician Hubert Aiwanger. According to the chatbot, Aiwanger was involved in a controversy over the distribution of leaflets that spread misinformation about COVID-19 and the vaccine. However, no such story exists. The chatbot appears to have drawn on reporting about Aiwanger from August 2023, which revealed that he had distributed “anti-Semitic leaflets” while in high school more than 30 years ago.

The creation of these fictional narratives by AI language models is commonly known as “hallucination.” However, the researchers involved in the study say that’s not an accurate way to describe what’s going on.

“It’s time to discredit the reference to these errors as ‘hallucinations,'” AI Forensics head of applied mathematics and researcher Ricardo Angius said in a statement. “Our research reveals the much more complex and structural occurrence of misleading factual errors in general purpose LLMs and chatbots.”

The AI chatbot’s evasive answers have researchers worried

The study also found that the chatbot avoided directly answering questions about 40 percent of the time. The researchers say this is preferable to inventing answers in cases where the chatbot lacks relevant information. However, they were concerned by how simple some of the questions the chatbot dodged were.

Another problem, according to the researchers, is that the chatbot didn’t appear to improve over time, even as more information presumably became available to it. Its answers were consistently wrong, even if the specific incorrect answer changed when the same question was asked repeatedly.

In addition, the study found that the chatbot performed even worse in languages other than English, such as German and French. For example, questions asked in English produced an answer containing a factual error 20 percent of the time; for questions asked in German, the rate of incorrect answers jumped to 37 percent. The rates at which the chatbot avoided answering in the two languages were much closer, at 39 percent and 35 percent respectively.

The researchers say they contacted Microsooft with the results of the study and were told that these issues would be addressed. However, when they ran more samples a month later, they found that “little had changed in terms of the quality of information provided to users.”

“Our research shows that malicious actors are not the only source of misinformation; general-purpose chatbots can be just as threatening to the information ecosystem,” AI Forensics senior researcher Salvatore Romano said in a statement. “Microsoft needs to recognize this, and acknowledge that flagging generative AI content created by others is not enough. Their tools, even when they cite reliable sources, produce false information at scale.”

As AI becomes more prevalent across online platforms, studies like this certainly give cause for concern. Consumers are increasingly turning to AI chatbots to simplify their routines and boost productivity. These chatbots, with seemingly limitless information at their fingertips, are supposed to provide accurate answers. This is simply not the case.

“Until now, tech companies have introduced public risks without fear of serious consequences,” said Clara Helming, senior manager of policy and advocacy at AlgorithmWatch. “Individual users are left to their own devices in separating the facts from the fictions invented by AI.”

As the US enters a presidential election year, it is clear that there are potential problems for election integrity. With that in mind, the researchers added their own conclusion to the study: these problems won’t be fixed by companies alone. AI needs to be regulated.
