AI-related insider controversies hit both Microsoft and Google

March 7, 2024

This week, two insider situations involving tech giants Microsoft and Google raised questions about the responsible development of AI systems and intellectual property management. 

First, at Microsoft, Shane Jones, a principal software engineering manager who has spent six years at the company, has been independently testing the AI image generator Copilot Designer in his free time.

Jones told CNBC that he was deeply troubled by the violent, sexual, and copyrighted images the tool was capable of generating. “It was an eye-opening moment,” Jones said. “It’s when I first realized, wow this is really not a safe model.”

Since November 2022, Jones has been actively testing the product for vulnerabilities, a practice known as red-teaming. 

He discovered that Copilot Designer could create images depicting “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use.” 

Despite reporting his findings to Microsoft in December, Jones said the company has been reluctant to remove the product from the market.

Microsoft’s Copilot has acted strangely on occasion, including adopting a “god mode” that saw it vow to pursue world domination.

In a letter addressed to Federal Trade Commission Chair Lina Khan, Jones wrote, “Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place.” 

He added that since Microsoft has “refused that recommendation,” he urges the company to add clear disclosures to the product and change its rating on Google’s Android app store so that it is marked as suitable only for mature audiences.

Copilot Designer has reportedly been easy to coax into bypassing its guardrails and was reportedly the tool behind the recent notorious explicit images of Taylor Swift, which circulated to millions across social media.

As Jones argues, the ability of AI systems to generate disturbing and potentially harmful images raises serious questions about the efficacy of safety features and how easily they can be subverted.

Insider controversy at Google

Meanwhile, Google is grappling with its own AI-related controversy.

Linwei Ding, also known as Leon Ding, a former Google software engineer, was indicted in California on four charges related to allegedly stealing trade secrets about AI while secretly working for two Chinese companies. 

The Chinese national is accused of stealing over 500 confidential files related to the infrastructure of Google’s supercomputing data centers, which host and train large AI models.

According to the indictment, Google hired Ding in 2019, and in May 2022 he began uploading sensitive data from Google’s network to his personal Google account.

These uploads continued periodically for a year, during which Ding spent several months in China working for Beijing Rongshu Lianzhi Technology, a start-up that had approached him with an offer of $14,800 a month to serve as its Chief Technology Officer (CTO).

Ding also allegedly founded his own AI company, Shanghai Zhisuan Technology.

US Attorney General Merrick Garland stated, “The Justice Department will not tolerate the theft of artificial intelligence and other advanced technologies that could put our national security at risk.” FBI Director Christopher Wray added that Ding’s alleged actions “are the latest illustration of the lengths” companies in China will go to “to steal American innovation.”

As the world grapples with AI’s transformative potential, insider controversies at tech companies risk further eroding public trust.

The cases at Microsoft and Google highlight the importance of fostering a culture of responsible innovation, including trust and transparency within the companies themselves.

AI is a technology that demands trust, and tech companies need to provide more assurance. This hasn’t always been forthcoming.

For instance, a group of 100+ tech experts recently co-signed a letter pleading with AI companies to open their doors to independent testing. 

They argued that tech companies are too secretive about their products except when their hand is forced, as we saw when Google paused Gemini’s image generation feature after it produced bizarre, historically inaccurate images.

Right now, it seems that AI’s exceptional pace of development often leaves trust and safety trailing behind.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
