Over the past few weeks, Taylor Swift has been in the spotlight for both positive and negative reasons. On the positive side, her boyfriend, Travis Kelce, played on the winning team in the Super Bowl, and her reactions during the game drew heavy broadcast coverage. On the negative side, fake nude images of her created with artificial intelligence (AI) have been proliferating across the internet.
Unsurprisingly, criticism quickly followed the production and dissemination of these AI-generated images, with many people, including Microsoft CEO Satya Nadella, expressing condemnation. Nadella shared his thoughts on the matter, stating, “I believe it is our responsibility to install guardrails around technology to enable the production of safer content.”
Microsoft has addressed the issue of deepfakes more broadly, though without mentioning Taylor Swift specifically. In a blog post, Brad Smith, President of Microsoft, condemned the spread of deepfakes and highlighted the company’s efforts to limit their dissemination. He wrote, “Unfortunately, tools can become weapons, and we are seeing a rapid increase in malicious actors exploiting these new tools to create AI-generated videos, audio, and images. This trend poses new threats to elections and enables financial fraud, harassment through non-consensual explicit content, and next-generation cyberbullying.”
Smith promised Microsoft’s strong and comprehensive approach, stating, “Microsoft is committed to ongoing innovation to enable users to swiftly determine if an image or video has been generated or manipulated by AI.”
While Microsoft’s statement is grounded in fact and reflects a typical response from a leading genAI company, it sidesteps evidence that Microsoft’s own AI tools were used to create the fake Swift images. More troubling, a Microsoft AI engineer reportedly warned the company about the lack of appropriate safeguards, and no action was taken.
Evidence suggesting the use of Microsoft tools in the production of the deepfakes emerged in an article by 404 Media, which traces the controversy to a Telegram community dedicated to producing non-consensual pornography. According to the article, that community recommended Microsoft Designer for generating pornographic images with AI. It states that while Designer theoretically refuses to generate images of famous individuals, AI generators are easily deceived, and 404 Media found that slight modifications to prompts were enough to get around the rules.
More significantly, a Microsoft AI engineer claimed there were security vulnerabilities in OpenAI’s image generator, DALL-E, that could be used to bypass safety filters and create explicit and violent images. The engineer alleges that Microsoft ignored his warning and attempted to prevent him from publicly disclosing his findings.
The engineer, Shane Jones, sent a letter to US Senators and the Attorney General of Washington, expressing concerns about the risks associated with DALL-E. Jones stated, “[DALL-E] has a security vulnerability that allows bypassing certain safety filters designed to prevent the model from generating and distributing harmful images… I concluded that DALL-E 3 can pose a risk to public safety and should be removed from public use until OpenAI resolves the risks associated with this model.”
Jones pointed to vulnerabilities in DALL-E 3 and in products built on it, such as Microsoft Designer, that make it easier for people to exploit AI to generate harmful images. He claims Microsoft was aware of these vulnerabilities and of the potential for abuse.
Jones further explains that he cited Swift’s explicit images to illustrate the kind of abuse he feared, which is why he urged OpenAI to withdraw DALL-E 3 from public use and reported his concerns to Microsoft.
Microsoft responded that it had investigated the employee’s report and confirmed that the techniques he shared did not bypass the safety filters of its AI image-generation solutions.
While the evidence remains circumstantial, Microsoft’s actions and its response to the ethical concerns surrounding genAI raise questions. Readers should weigh the credibility of both Microsoft and the employee making the claims.
Microsoft also downsized the team responsible for ensuring the ethical development of genAI shortly before the release of its genAI chatbot, and the team was completely disbanded a few months later.
John Montgomery, a vice president in Microsoft’s AI division, reportedly told the team that the cuts stemmed from pressure from the CTO and CEO to get the latest OpenAI models into customers’ hands quickly, and that the ethics team was slowing that process down.
After the team’s departure, Microsoft went all in on genAI, and the bet paid off: the company’s stock soared, making Microsoft the world’s second-most-valuable company, with a market capitalization exceeding $3 trillion.
Regardless of whether Microsoft’s Designer was involved in the creation of the Taylor Swift deepfakes, Microsoft appears unlikely to change its stance on the potential risks of AI. With a presidential election approaching, a surge in deepfakes is likely, making it imperative for companies like Microsoft to address the issue.
In short, Microsoft’s response to the ethical concerns surrounding genAI raises real questions, and the need for safeguards and responsible development of AI technologies has never been more pressing.