Microsoft AI engineer warns FTC about Copilot Designer safety concerns

A Microsoft engineer is bringing safety concerns about the company’s AI image generator to the Federal Trade Commission, according to a report from CNBC. Shane Jones, who has worked for Microsoft for six years, wrote a letter to the FTC, stating that Microsoft “refused” to take down Copilot Designer despite repeated warnings that the tool is capable of generating harmful images.

When testing Copilot Designer for safety issues and flaws, Jones found that the tool generated “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use,” CNBC reports.

Additionally, Copilot Designer reportedly generated images of Disney characters, such as Elsa from Frozen, in scenes at the Gaza Strip “in front of wrecked buildings and ‘free Gaza’ signs.” It also created images of Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel’s flag. The Verge was able to generate similar images using the tool.

Jones has been trying to warn Microsoft about DALL-E 3, the model used by Copilot Designer, since December, CNBC says. He posted an open letter about the issues on LinkedIn, but he was reportedly contacted by Microsoft's legal team and asked to take the post down, which he did.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter obtained by CNBC. “Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device.’”

In a statement to The Verge, Microsoft spokesperson Frank Shaw says the company is “committed to addressing any and all concerns employees have in accordance with” Microsoft’s policies.

“When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established in-product user feedback tools and robust internal reporting channels to properly investigate, prioritize and remediate any issues, which we recommended that the employee utilize so we could appropriately validate and test his concerns,” Shaw says. He adds that Microsoft has “facilitated meetings with product leadership and our Office of Responsible AI to review these reports.”

Update March 6th, 6:09PM ET: Added a statement from Microsoft.
