Singapore-based cybersecurity firm Group-IB has detected an alarming leak of ChatGPT logins.
According to Group-IB, 101,134 devices with saved ChatGPT login details were breached by info stealers, a type of malicious software designed to extract sensitive data. From login credentials and banking data to browsing history, info stealers mine data and forward it to their operators.
Group-IB’s Threat Intelligence platform identified the compromised ChatGPT logins within logs of info-stealing malware traded on dark web marketplaces.
Breaches peaked in May 2023, when 26,802 compromised ChatGPT accounts appeared in dark web logs. The Asia-Pacific region accounted for the majority of the leaked credentials.
Group-IB has identified 101,134 #stealer-infected devices with saved #ChatGPT credentials. Group-IB’s Threat Intelligence platform found these compromised credentials within the logs of info-stealing malware traded on illicit dark web marketplaces over the past year. pic.twitter.com/fNy6AngoXM
— Group-IB Threat Intelligence (@GroupIB_TI) June 20, 2023
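If you are worried your own details are circulating in logs like these, one low-effort first check is Have I Been Pwned, which indexes known breach data (though not necessarily every stealer log). Below is a minimal sketch querying its public v3 API; the email address and API key are placeholders you would supply yourself, and an API key must be obtained from the service.

```python
# Hedged sketch: check an email against Have I Been Pwned's breach index.
# Assumes you have an HIBP API key (https://haveibeenpwned.com/API/Key);
# the address and key below are placeholders, not real values.
import requests

API_KEY = "your-hibp-api-key"  # placeholder
EMAIL = "user@example.com"     # placeholder

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={
        "hibp-api-key": API_KEY,
        # HIBP requires a descriptive user agent on every request.
        "user-agent": "breach-check-example",
    },
    params={"truncateResponse": "false"},  # include full breach details
    timeout=10,
)

if resp.status_code == 404:
    # 404 means the address was not found in any indexed breach.
    print("No known breaches for this address.")
elif resp.ok:
    for breach in resp.json():
        print(f"Found in: {breach['Name']} ({breach['BreachDate']})")
else:
    resp.raise_for_status()
```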
Why do people want ChatGPT logins?
ChatGPT receives colossal volumes of data daily, including personal and business information.
OpenAI warns users not to share sensitive and personal information with the AI, but that’s difficult if you’re using the tool to, say, generate a CV or analyze internal company information. Moreover, the AI saves user interactions and transcripts by default, including any potentially sensitive data.
Group-IB states that cybercriminal groups have grown more interested in the potential of ChatGPT logins, especially where the user can be identified from other personal information. For example, it’s entirely possible that a company CEO would share sensitive information with ChatGPT that could then be used for hacking, extortion, and other crimes.
Dmitry Shestakov, Head of Threat Intelligence at Group-IB, commented, “Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials. At Group-IB, we are continuously monitoring underground communities to promptly identify such accounts.”
In light of these security threats, Group-IB advises users to regularly update their passwords and enable two-factor authentication (2FA) to safeguard their ChatGPT accounts.
To add 2FA to your ChatGPT account, navigate to “Settings,” where the option should appear. At the time of writing, however, OpenAI has suspended the setting without explanation, perhaps while it adds more security features to the app.
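Authenticator-app 2FA of the kind described above is typically built on time-based one-time passwords (TOTP, RFC 6238). As a rough illustration of why a stolen password alone is not enough once 2FA is enabled, here is a minimal sketch using the pyotp library; the secret is generated on the fly for demonstration and has nothing to do with ChatGPT itself.

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# Illustrative only: the secret is a throwaway value, not a real credential.
import pyotp

# 1. Enrollment: the service generates a shared secret and shows it to the
#    user (usually as a QR code) for an authenticator app to store.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# 2. Login: the app derives a 6-digit code from the secret and the current
#    30-second time window; the service derives the same code and compares.
code = totp.now()
print("Current one-time code:", code)

# 3. Verification: a stolen password alone fails at this step, because the
#    attacker lacks the shared secret needed to produce a valid, unexpired code.
print("Code accepted:", totp.verify(code))
```

The key design point is that the shared secret never travels with the password, so credentials lifted by an info stealer are useless without the second factor.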
Many companies are warning employees about sharing sensitive information with ChatGPT and other large language models (LLMs). Google, for example, has told staff to be careful with how they use chatbots, including its own LLM, Bard.
Fraudsters are bound to find creative ways to exploit hacked ChatGPT accounts. Secure yours with 2FA where available, and clear your chat logs regularly to keep your data safe.