New cyber-warning as OpenAI credentials are sold on the dark web

After more than 200,000 stolen OpenAI credentials were found for sale on the dark web, OpenAI users and organizations that rely on ChatGPT need to be alert to the types of fraud that can arise from generative AI breaches.

Open source models represent an opportunity for threat actors because these systems have not undergone reinforcement learning from human feedback (RLHF), a training step designed to prevent risky or illegal responses (as Bleeping Computer points out).

With users inputting and sharing sensitive information in ChatGPT without a second thought, a compromised account can expose all of their previous conversations and fuel targeted, highly personal scams and phishing attacks.

As a result, fraudsters will be able to make their attacks more credible and effective, victimizing more and more people.

Looking into this technological concern is Philipp Pointner, Chief of Digital Identity at Jumio.

Pointner explains to Digital Journal that recent advances in technology have played into the hands of cybercriminals: “With the rise of generative AI, it is no surprise that credentials for generative AI tools and chatbots are a sought-after form of data. This incident brings attention to the rising security concerns that GPT technology brings.”

Pointner adds that the availability of the data increases the level of vulnerability for many consumers: “With over 200,000 OpenAI credentials up for grabs on the dark web, cybercriminals can easily get their hands on other personal information like phone numbers, physical addresses and credit card numbers.”

There are other risk factors to take into account: “Generative AI chatbots also bring an additional concern. With these credentials, fraudsters can gain access to all types of information users have inputted into the chatbot, such as content from their previous conversations, and use it to create hyper-customized phishing scams to increase their credibility and effectiveness.”

As for the lessons to be drawn, Pointner is clear that action is needed to prevent a recurrence: “Now more than ever, with the rising popularity of generative AI chatbots, organizations must implement more robust and sophisticated forms of security, such as digital identity verification tools that confirm every user is who they claim to be.”

He adds: “By establishing every user’s true identity, businesses everywhere can ensure the user accessing or using an account is authorized and not a fraudster. On the consumer side, users should be more wary of the type of sensitive information they are sharing with online chatbots.”
