OpenAI Blocks 20 Worldwide Malicious Campaigns Using AI to Commit Cybercrimes and Spread Misinformation

On Wednesday, OpenAI announced that, since the start of the year, it has disrupted more than 20 operations and deceptive networks around the world that attempted to use its platform for malicious purposes.

The activity included debugging malware, writing articles for websites, generating biographies for social media accounts, and producing AI-generated profile pictures for fake accounts on X.

“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the artificial intelligence (AI) company said.

It also said it disrupted activity that generated social media content about elections in the United States, Rwanda, and, to a lesser extent, India and the European Union, adding that none of these networks attracted sustained audiences or viral engagement.

This included activity by the Israeli commercial firm STOIC (also tracked as Zero Zeno), which Meta and OpenAI disclosed earlier this May had generated social media comments about the Indian elections.

Some of the cyber operations highlighted by OpenAI include the following:

  • SweetSpecter, an adversary believed to be based in China, used OpenAI’s services for LLM-informed reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and development. The group also made unsuccessful spear-phishing attempts against OpenAI employees in a bid to deliver the SugarGh0st RAT.
  • Cyber Av3ngers, a group affiliated with Iran’s Islamic Revolutionary Guard Corps (IRGC), used its models to conduct research into programmable logic controllers.
  • An Iranian threat actor tracked as Storm-0817 used its models to debug Android malware capable of harvesting sensitive data, develop tooling to scrape Instagram profiles using Selenium, and translate LinkedIn profiles into Persian.

Elsewhere, the company said it had taken action to block several clusters of accounts, including two influence operations known as A2Z and Stop News, that generated content in both English and French for subsequent publication on a variety of websites and social media accounts across different platforms.

“[Stop News] was unusually prolific in its use of imagery,” researchers Ben Nimmo and Michael Flossman said. “Many of its web articles and tweets were accompanied by images generated using DALL·E. These images were often in cartoon style, and used bright colour palettes or dramatic tones to attract attention.”

Two other networks identified by OpenAI, Bet Bot and Corrupt Comment, were found to have used its API to strike up conversations with users on X and send them links to gambling websites, as well as to manufacture fake comments that were then posted on X.

The disclosure comes more than two months after OpenAI blocked a set of accounts linked to Storm-2035, an Iranian covert influence operation that used ChatGPT to generate content about, among other things, the upcoming U.S. presidential election.

“Threat actors most often used our models to perform tasks in a specific, intermediate phase of activity — after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed ‘finished’ products such as social media posts or malware across the internet via a range of distribution channels,” Nimmo and Flossman wrote. 

In a report released this week, cybersecurity firm Sophos warned that generative AI could be misused to disseminate customised disinformation through microtargeted emails.

This entails misusing AI models to create political campaign websites, AI-generated personas spanning the political spectrum, and email messages tailored to those personas based on the campaign talking points, opening up a new level of automation for spreading false information at scale.

“This means a user could generate anything from benign campaign material to intentional misinformation and malicious threats with minor reconfiguration,” researchers Ben Gelman and Adarsh Kyadige said.

“It is possible to associate any real political movement or candidate with supporting any policy, even if they don’t agree. Intentional misinformation like this can make people align with a candidate they don’t support or disagree with one they thought they liked.”
