In March, just before unveiling the AI chatbot Bard to the public, Google asked its employees to test the tool. What no company executive could have imagined was the reaction it drew from programmers.
Google, which has spearheaded much of the research on AI over the years, hadn’t yet integrated a consumer-facing version of generative AI into its products when ChatGPT was released.
Until then, the company was wary of the technology's power and the ethical considerations that would come with integrating it into search and other flagship products, according to employees. But the fear of being supplanted by the competition swept that caution aside.
Now, its employees say those concerns have been sidelined in a frantic race to catch up with ChatGPT and head off the threat it poses to Google’s search business.
One employee concluded that Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “shocking,” according to a Bloomberg report.
The ethics task force that Google had promised to strengthen is now disempowered and demoralized, current and former workers said.
Employees responsible for the safety and ethical implications of new products have been told not to get in the way of, or try to kill, any of the generative AI tools in development, they said.
Google plans to revitalize its research around this cutting-edge technology, which could bring generative AI to millions of phones and homes around the world, ideally before OpenAI, backed by Microsoft Corp. (MSFT), gets there first.
One employee wrote that when asked for advice on landing a plane, Bard often gave answers that would lead to a crash. Another said it gave answers about scuba diving “which could result in serious injury or death.”
“The ethics of AI have taken a back seat,” said Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google manager. “If ethics aren’t placed above profit and growth, it won’t work in the end.”
The privileges of working in AI
Silicon Valley as a whole continues to struggle to balance competitive pressure with safety. Researchers building artificial intelligence outnumber those focused on safety by 30 to 1, the Center for Humane Technology said in a recent presentation, underscoring the often lonely experience of voicing concerns inside a large organization.
Large language models, the technologies that ChatGPT and Bard are based on, ingest huge volumes of digital text from news articles, social media posts, and other Internet sources, then use that written material to train software that predicts and generates content by itself when given a prompt or question.
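To make that prompt-and-generate mechanism concrete, here is a minimal sketch in Python, assuming the open-source Hugging Face transformers library and the small GPT-2 model as illustrative stand-ins; the models behind Bard and ChatGPT are far larger and proprietary, and their internals are not public:

```python
# Minimal sketch of next-token prediction with a pretrained language model.
# GPT-2 is used here only as a small, openly available stand-in.
from transformers import pipeline

# Load a model that was pretrained to predict the next token
# on large volumes of Internet text.
generator = pipeline("text-generation", model="gpt2")

# Given a prompt, the model repeatedly predicts likely next tokens,
# generating new text on its own.
prompt = "Large language models work by"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```

The same loop, scaled up to vastly more training data and parameters and tuned to follow instructions, is what powers chatbots like Bard and ChatGPT.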
But the remarkable debut of ChatGPT brought everything to a head. In February, Google launched a publicity blitz for its generative AI products, promoting the Bard chatbot. It also previewed plans for YouTube, where creators would be able to swap outfits in videos or create “amazing cinematic scenarios” using generative AI.
Two weeks later, it announced new AI capabilities for Google Cloud, showing how users of Docs and Slides would be able to, for example, create sales-training documents and presentations or compose emails.
On the same day, the company announced it would be incorporating generative AI into its healthcare offerings. Employees say they are concerned that the speed of development won’t leave enough time to study the potential harms.
Source: Clarin