Microsoft’s AI text-to-image generator, Copilot Designer, is facing backlash over allegations that it produces violent and sexualized imagery. The tool, which creates images from text prompts, came under scrutiny following warnings from Microsoft engineer Shane Jones.
Jones reportedly raised concerns internally about vulnerabilities in the tool and the alarming content it was producing. Microsoft, he alleges, ignored those warnings, neither taking the tool offline nor adding safeguards to block the generation of inappropriate images.
In a letter addressed to lawmakers, stakeholders, and the Federal Trade Commission, Jones detailed the explicit and sexually objectified images Copilot Designer was generating. He also called for an investigation into Microsoft’s AI decision-making and its conduct around responsible AI incident reporting.
Bloomberg reviewed Jones’ letter to the FTC, which described the disturbing content the tool was producing. Attempts to replicate the prompts Jones shared now return error messages, suggesting Microsoft may have begun filtering out certain images.
In response to the backlash, Microsoft said it is addressing the concerns Jones raised through internal reporting channels and by strengthening its safety systems. OpenAI, whose image-generation model underpins Copilot Designer, did not respond to requests for comment on the situation.
The controversy surrounding Copilot Designer underscores the importance of responsible AI development and the need for proper oversight to prevent the creation of harmful imagery. How Microsoft handles the situation and responds to Jones’ warnings will be watched closely as the debate over AI ethics continues to grow.