Google’s AI chatbot—sentient and similar to ‘a kid that happened to know physics’—is also racist and biased, fired engineer contends


A former Google engineer fired by the company after going public with concerns that its artificial intelligence chatbot is sentient isn’t concerned about convincing the public.

He does, however, want others to know that the chatbot holds discriminatory views against those of some races and religions, he recently told Business Insider.

“The kinds of problems these AI pose, the people building them are blind to them,” Blake Lemoine said in an interview published Sunday, blaming the issue on a lack of diversity in engineers working on the project.

“They’ve never been poor. They’ve never lived in communities of color. They’ve never lived in the developing nations of the world. They have no idea how this AI might impact people unlike themselves.”

Lemoine said he was placed on leave in June after publishing transcripts between himself and the company’s LaMDA (language model for dialogue applications) chatbot, according to The Washington Post. The chatbot, he told The Post, thinks and feels like a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 9-year-old kid that happens to know physics,” Lemoine, 41, told the newspaper last month, adding that the bot talked about its rights and personhood, and changed his mind about Isaac Asimov’s third law of robotics.

Among Lemoine’s new accusations to Insider: that the bot said “let’s go get some fried chicken and waffles” when asked to do an impression of a Black man from Georgia, and that “Muslims are more violent than Christians” when asked about the differences between religious groups.

The data being used to build the technology is missing contributions from many cultures around the globe, Lemoine said.

“If you want to develop that AI, then you have a moral responsibility to go out and collect the relevant data that isn’t on the internet,” he told Insider. “Otherwise, all you’re doing is creating AI that is going to be biased towards rich, white Western values.”

Google told the publication that LaMDA had been through 11 ethics reviews, adding that it is taking a “restrained, careful approach.”

Ethicists and technologists “have reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” a company spokesperson told The Post last month.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Sign up for the Fortune Features email list so you don’t miss our biggest features, exclusive interviews, and investigations.
