Microsoft Employee Flags Company's Copilot Designer Over Safety and Misuse Concerns
A software engineer at Microsoft has written to the company's board of directors, lawmakers, and the Federal Trade Commission, warning that the tech giant is not doing enough to prevent its artificial intelligence image generation tool, Copilot Designer, from being misused to produce violent and offensive content.
Shane Jones, an AI engineer at Microsoft, said he found a security vulnerability in OpenAI's latest DALL-E image generation model that allowed him to bypass the guardrails meant to prevent the tool from creating harmful images. The DALL-E model is embedded in many of Microsoft's AI tools, including Copilot Designer.
According to a letter Jones sent to the FTC, he reported this finding to Microsoft and repeatedly urged the company to remove Copilot Designer from public use until better safeguards could be put in place.
Jones wrote: "While Microsoft is publicly marketing Copilot Designer as a safe AI product for use by everyone, including children of any age, internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers."
In his letter to the FTC, Jones said that Copilot Designer randomly generates inappropriate, sexually objectified images of women in some of the pictures it creates. He also said the tool produced harmful content in a variety of other categories, including political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion, among others.
The FTC confirmed it received the letter but declined to comment further.
The sharp critique echoes growing concerns about the tendency of AI tools to generate harmful content. Last week, Microsoft said it was investigating reports that its Copilot chatbot had generated responses users described as disturbing, including messages about suicide. In February this year, Google's flagship AI product, Gemini, was criticized for producing historically inaccurate scenes when prompted to create images of people.
Jones also wrote a letter to the Environmental, Social and Public Policy Committee of Microsoft's board of directors. In it, he stated: "I don't believe we need to wait for government regulation to ensure we are transparent with consumers about AI risks. Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children."
In a statement, Microsoft said it is "committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety."