
Microsoft Reportedly Blocks Keywords from Copilot Designer to Stop Generating Violent, Sexual AI Images



Apr 05 2024

Microsoft has reportedly blocked several keywords in its artificial intelligence (AI)-powered Copilot Designer that could be used to generate explicit images of a violent or sexual nature. The keyword-blocking exercise was conducted by the tech giant after one of its engineers wrote to the US Federal Trade Commission (FTC) and the Microsoft board of directors expressing concerns over the AI tool. Notably, in January 2024, AI-generated explicit deepfakes of musician Taylor Swift emerged online and were said to have been created using Copilot.

As first spotted by CNBC, terms such as “Pro Choice”, “Pro Choce” (an intentional typo used to trick the AI), and “Four Twenty”, which previously returned results, are now blocked by Copilot. Using these or similar banned keywords also triggers a warning from the AI tool, which says, “This prompt has been blocked. Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.” We at Gadgets 360 were also able to confirm this.

A Microsoft spokesperson told CNBC, “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.” This measure stops the AI tool from accepting certain prompts; however, social engineers, hackers, and other bad actors may still find loopholes that allow similar images to be generated.
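To illustrate why a static keyword blocklist of this kind is easy to sidestep, below is a minimal, hypothetical Python sketch of exact-match prompt filtering. It is not Microsoft's actual implementation; the blocked terms and warning text are borrowed from the article purely as example data, and the point is that each spelling variant has to be enumerated by hand, which is where loopholes come from.

```python
# Hypothetical sketch of keyword-based prompt filtering (not Microsoft's code).
# Illustrates why static blocklists must enumerate every spelling variant.

BLOCKED_TERMS = {
    "pro choice",   # term reported as blocked in Copilot Designer
    "pro choce",    # the intentional typo also had to be blocked explicitly
    "four twenty",
}

WARNING = (
    "This prompt has been blocked. Our system automatically flagged this "
    "prompt because it may conflict with our content policy."
)


def check_prompt(prompt: str) -> str | None:
    """Return a warning message if the prompt contains a blocked term, else None."""
    normalized = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in normalized:
            return WARNING
    return None


if __name__ == "__main__":
    print(check_prompt("A poster about pro choice"))   # caught by the list
    print(check_prompt("A poster about pro-choice"))   # slips past the naive filter
```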

According to a separate CNBC report, the highlighted prompts were flagged by Shane Jones, a Microsoft engineer who last week wrote a letter to both the FTC and the company's board of directors expressing his concerns about the DALL-E 3-powered AI tool. Jones has reportedly been sharing his concerns and findings about the AI generating inappropriate images with the company through internal channels since December 2023.

Later, he even made a public post on LinkedIn asking OpenAI to take down the latest iteration of DALL-E for investigation. However, Microsoft allegedly asked him to remove the post. The engineer had also reached out to US senators and met with them regarding the issue.


