Google employees label AI chatbot Bard ‘worse than useless’ and ‘a pathological liar’: report

In an effort to keep up with rivals Microsoft and OpenAI, Google rushed out its own chatbot, Bard. A new report shows employees begged the company not to launch the product.

Illustration: The Verge

Google employees repeatedly criticized the company’s chatbot Bard in internal messages, labeling the system “a pathological liar” and beseeching the company not to launch it.

That’s according to an eye-opening report from Bloomberg citing discussions with 18 current and former Google workers as well as screenshots of internal messages. In these discussions, one employee noted that Bard would frequently give users dangerous advice on topics ranging from how to land a plane to scuba diving. Another said, “Bard is worse than useless: please do not launch.” Bloomberg says the company even “overruled a risk evaluation” submitted by an internal safety team saying the system was not ready for general use. Google opened up early access to the “experimental” bot in March anyway.

Bloomberg’s report illustrates how Google has apparently sidelined ethical concerns in an effort to keep up with rivals like Microsoft and OpenAI. The company frequently touts its safety and ethics work in AI but has long been criticized for prioritizing business instead.

In late 2020 and early 2021, the company fired two researchers — Timnit Gebru and Margaret Mitchell — after they authored a research paper exposing flaws in the same AI language systems that underpin chatbots like Bard. Now, though, with these systems threatening Google’s search business model, the company seems even more focused on business over safety. As Bloomberg puts it, paraphrasing the testimony of current and former employees, “The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments.”

Others at Google — and in the AI world more generally — would disagree. A common argument is that public testing is necessary to develop and safeguard these systems and that the known harm caused by chatbots is minimal. Yes, they produce toxic text and offer misleading information, but so do countless other sources on the web. (To which others respond: yes, but directing a user to a bad source of information is different from giving them that information directly with all the authority of an AI system.) Rivals like Microsoft and OpenAI are arguably just as compromised as Google; the only difference is that they’re not leaders in the search business and have less to lose.

Brian Gabriel, a spokesperson for Google, told Bloomberg that AI ethics remained a top priority for the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Gabriel.

In our tests comparing Bard to Microsoft’s Bing chatbot and OpenAI’s ChatGPT, we found Google’s system to be consistently less useful and accurate than its rivals.