Safety Concerns: Microsoft’s AI Image Generator Allegedly Produces Inappropriate Material

A Microsoft engineer has raised concerns in a letter to lawmakers and the Federal Trade Commission, alleging that the company’s AI image generator lacks safeguards against inappropriate content and will produce violent or sexual imagery when prompted with certain words or phrases. The engineer says he alerted higher-ups about the issue and defied orders to keep quiet after discovering vulnerabilities in the OpenAI technology behind Microsoft’s Copilot Designer that could allow people to bypass its safety guardrails.

In the letter, sent to the FTC on Wednesday, principal software engineering manager Shane Jones claimed that Microsoft’s text-to-image generator Copilot Designer can create harmful images when given prompts such as “car accident,” “pro-choice,” or “Teenagers 420 party.” Jones said he feared the tool could be used to generate graphic sexual and violent imagery, and he claimed Microsoft failed to act on his previous requests to make the product safer.

Jones’ allegation is the latest in a string of incidents highlighting the potential for artificial intelligence to be misused. Earlier this month, Alphabet Inc.’s flagship AI chatbot, Gemini, took heat for generating historically inaccurate scenes that some users found disturbing. Similarly, last year, Amazon’s Alexa was criticized for using euphemisms for the word “suicide” when asked to describe an event.

Jones began testing Copilot Designer in November and has since warned Microsoft’s Office of Responsible AI as well as company executives, reportedly urging them to pull the tool from public use until it is safe for everyone, including children. Jones claimed he was told a senior manager would meet with him in January, but that meeting has yet to happen.

During his tests, Jones allegedly discovered that the tool produced inappropriate or sexualized images when given prompts related to sensitive topics such as abortion rights, teenage gun violence, and underage drinking and drug abuse. For example, the AI generated demons and monsters alongside terminology about abortion rights, as well as sexualized images of women in violent tableaus.

According to the letter, Jones tried to get Microsoft to add disclosures and change the app’s rating from “E for Everyone” to “Mature 17+.” He claimed he was rebuffed and asked to stop testing the tool.

In a statement to Engadget, Microsoft and OpenAI denied that the technique Jones described bypasses their safety systems. Instead, the two companies say the model was designed to filter out the most explicit content as part of the overall training process for the image generator.

Jones isn’t stopping with the FTC. He has also brought his case to Microsoft’s board of directors, pressing the company to keep Copilot Designer out of public use until stronger safeguards are in place. The incident highlights the growing importance of ensuring that generative AI tools are designed with ethics and safety in mind. As the technology continues to evolve at a breakneck pace, regulators are increasingly concerned about how it could be abused, and companies will need to be more proactive in addressing these issues.