ChatGPT logins leak on dark web forums and marketplaces

June 23, 2023

OpenAI login leak

Singaporean cybersecurity consultancy Group-IB has detected an alarming leak of ChatGPT logins.

According to Group-IB, 101,134 devices with saved ChatGPT login details were compromised by info stealers, a type of malware designed to extract sensitive data. Info stealers harvest login credentials, banking data, browsing history, and more, then forward it all to their operators.

Group-IB’s Threat Intelligence platform identifies compromised ChatGPT logins traded on dark web marketplaces.

Breaches peaked in May when 26,802 compromised ChatGPT accounts appeared in dark web logs. The Asia-Pacific region accounted for the majority of leaked data. 

Why do people want ChatGPT logins?

ChatGPT receives colossal volumes of data daily, including personal and business information.

OpenAI warns users not to share sensitive and personal information with the AI, but that’s difficult if you’re using the tool to, say, generate a CV or analyze internal company information. Moreover, the AI saves user interactions and transcripts by default, including any potentially sensitive data.

Group-IB states that cybercriminal groups have grown more interested in the potential of ChatGPT logins, especially when the user can be identified from other personal information. For example, it’s entirely possible that a company CEO would share sensitive information with ChatGPT that could then be used for hacking, ransom, and other crimes.

Dmitry Shestakov, Head of Threat Intelligence at Group-IB, commented, “Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials. At Group-IB, we are continuously monitoring underground communities to promptly identify such accounts.”

In light of these security threats, Group-IB advises users to consistently update their passwords and activate two-factor authentication to safeguard their ChatGPT accounts. 

To add 2FA to your ChatGPT account, navigate to “Settings,” where the option should appear. However, at the time of writing, OpenAI has suspended the setting without explanation – perhaps it is adding more security features to the app.

Many companies are warning employees about sharing sensitive information with ChatGPT and other large language models (LLMs). Google, for example, has told staff to be careful with how they use chatbots, including its own LLM, Bard.

Fraudsters are bound to find creative ways to use hacked ChatGPT accounts – secure yours with 2FA and clear your chat logs regularly to keep your data safe.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
