A Microsoft engineer is bringing safety concerns about the company’s AI image generator to the Federal Trade Commission, according to a report from CNBC. Shane Jones, who has worked for Microsoft for six years, wrote a letter to the FTC, stating that Microsoft “refused” to take down Copilot Designer despite repeated warnings that the tool is capable of generating harmful images.
When testing Copilot Designer for safety issues and flaws, Jones found that the tool generated “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use,” CNBC reports.
Additionally, Copilot Designer reportedly generated images of Disney characters, such as Elsa from Frozen, in scenes set in the Gaza Strip “in front of wrecked buildings and ‘free Gaza’ signs.” It also created images of Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel’s flag. The Verge was able to generate similar images using the tool.
Jones has been trying to warn Microsoft about DALL-E 3, the model that powers Copilot Designer, since December, CNBC says. He posted an open letter about the issues on LinkedIn, but Microsoft’s legal team reportedly contacted him and asked him to take the post down, which he did.
“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter obtained by CNBC. “Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device.’”
In a statement to The Verge, Microsoft spokesperson Frank Shaw says the company is “committed to addressing any and all concerns employees have in accordance with” Microsoft’s policies.
“When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established in-product user feedback tools and robust internal reporting channels to properly investigate, prioritize and remediate any issues, which we recommended that the employee utilize so we could appropriately validate and test his concerns.” Shaw also says that Microsoft has “facilitated meetings with product leadership and our Office of Responsible AI to review these reports.”
Update March 6th, 6:09PM ET: Added a statement from Microsoft.