ELITE MORAN


Learning to Be Dangerous: How OpenAI’s Latest Innovation Will Alter the Cybersecurity Landscape

ChatGPT has become wildly popular since it debuted in November 2022, with the AI-driven platform amassing more than 100 million users two months after it was launched.

The natural language processing tool’s human-like output has taken the world by storm, but it has also raised concerns across various industries. New York City’s public schools banned the tool over fears that students could use it to cheat, and Google has become so alarmed by its capabilities that it reportedly declared a “code red,” viewing ChatGPT as a threat to its search business.

More seriously, the cybersecurity sector has noticed that ChatGPT can heighten security threats, and there are statistics to back up the concerns.

According to new research from BlackBerry, the AI-powered chatbot poses significant security threats, with growing evidence that threat actors are already testing ChatGPT’s ability to create phishing emails and malicious payloads. BlackBerry’s CTO for cybersecurity, Shishir Singh, said the company expects hackers to become better at using ChatGPT for nefarious purposes in 2023.

Furthermore, 51% of the IT security experts surveyed in Australia, the UK, and North America believe a ChatGPT-enabled cyberattack will occur before 2023 ends, and 71% agree that nation-state adversaries are likely exploring how to use the technology against other countries.

The Rising Threat Of ChatGPT Malware

An advanced AI program like ChatGPT becomes dangerous when used for nefarious ends. For example, hackers can use ChatGPT to draft malicious code, and posts on underground dark-web forums already show it being used to script malware for ransomware attacks.

At the same time, industry giants like Microsoft have forged multibillion-dollar partnerships with OpenAI to develop further AI capabilities, which does little to alleviate concerns that the technology will eventually become a serious threat to countries and organizations worldwide. Those concerns rest on several factors:

  • Ease of use: ChatGPT’s simplicity in producing sophisticated malware attracts amateurs and hackers with limited technical skills, allowing a new breed of attacker to emerge and increasing cybersecurity threats overall.
  • High accessibility: Free availability is one of the AI-driven tool’s primary selling points. Anyone with an internet connection can use it anonymously, from any location, to churn out phishing emails and dangerous malware.
  • Automated output: ChatGPT generates output automatically from user prompts, making it easier for cybercriminals to develop malware consistently and rapidly and to create multiple malware variants.

Cybersecurity Researchers Use ChatGPT To Develop Malware

Cybersecurity researchers at CyberArk published a blog post detailing how they used ChatGPT to develop polymorphic malware. They prompted ChatGPT to create polymorphic code: code that mutates across iterations so that each variant can bypass signature-based detection software.

While the process was complicated by the content policy filters OpenAI has implemented to prevent abuse, the researchers used a technique they called “insisting and demanding” in their prompts to coax out malicious executable code. Although the resulting code was still detectable by security software, the researchers note that the danger lies in ChatGPT being a machine learning tool that learns from its inputs to produce better outputs: it will get better at creating undetectable malware.
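The core idea behind polymorphic evasion can be illustrated without any malicious code: two functionally identical snippets with different surface forms defeat a scanner that matches exact byte signatures. A minimal, benign sketch (the snippets and hashes below are purely illustrative, not anything from the CyberArk research):

```python
import hashlib

# Two functionally identical snippets with trivially different surface
# forms (renamed variables): a toy stand-in for how polymorphic code
# mutates between iterations while preserving behavior.
variant_a = "total = 0\nfor n in range(10):\n    total += n\n"
variant_b = "acc = 0\nfor i in range(10):\n    acc += i\n"

# Both variants compute the same result when executed...
env_a, env_b = {}, {}
exec(variant_a, env_a)
exec(variant_b, env_b)
assert env_a["total"] == env_b["acc"] == 45

# ...yet their byte-level signatures differ, so a scanner matching a
# fixed hash of variant_a will miss variant_b entirely.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()
print(sig_a == sig_b)  # False
```

This is why defenders increasingly pair signature matching with behavioral analysis: the behavior is what stays constant across mutations.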

In addition, a few weeks after its launch, security researchers from the Israeli cybersecurity firm Check Point demonstrated how nefarious actors could use ChatGPT to create convincing phishing emails capable of delivering malicious payloads. Specifically, they revealed how ChatGPT could be used alongside OpenAI’s code-generation system, Codex, to create such phishing emails. Sergey Shykevich, threat intelligence group manager at Check Point, said that such use cases show ChatGPT can significantly alter the cyber threat landscape, marking a major step in the dangerous evolution of increasingly complex AI-enabled attack capabilities.

A Great Learning Platform For Aspiring And Novice Cybercriminals

ChatGPT cannot execute code or programs, including the code it produces, so attackers cannot use it to launch cyberattacks directly. Since cybercriminals can’t get it to run scans or fire off exploits, they turn to the next best thing – using it to learn how to perform various attacks.

One of ChatGPT’s most striking capabilities is producing clear, easy-to-follow instructions for software and cybersecurity programs, including popular network scanning and pen-testing tools like Metasploit and Nmap. In many cases, ChatGPT can advise users on the most effective tool for a task and provide understandable instructions for using it, including for malicious cyber activities.

Potentially, this means ChatGPT can help individuals with zero technical skills effectively use various attack tools for a wide range of malicious activities, such as scanning networks and probing systems for security weaknesses and exploitable vulnerabilities.

The AI-powered chatbot can then walk users through exploiting those security flaws to gain unauthorized access to sensitive data, networks, and systems.

Ominously, these capabilities raise the risks for organizations, where disgruntled employees or aspiring hackers can leverage ChatGPT’s capabilities to exploit vulnerabilities and cause harm.

What Is The Way Forward?

Cybersecurity threats from artificial intelligence are not new; the technology has existed for many years. But with interactive tools like ChatGPT providing concrete examples of how AI can alter the cyber threat landscape, AI-driven threats now feel far more immediate.

Thus, cybersecurity vendors must become more proactive in building behavioral AI components into security systems and software to detect and deter AI-generated attacks.
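As a rough illustration of the behavioral approach (not any vendor’s actual detection engine), such systems start from a baseline of normal activity and flag deviations from it. The function, thresholds, and event counts below are hypothetical:

```python
import statistics

def flag_anomalies(event_counts, threshold=2.0):
    """Flag time windows whose event count deviates sharply from the
    historical mean - a crude behavioral baseline. Real systems use far
    richer features and a baseline computed from known-clean data."""
    mean = statistics.mean(event_counts)
    stdev = statistics.stdev(event_counts)
    return [i for i, c in enumerate(event_counts)
            if stdev and abs(c - mean) / stdev > threshold]

# Hypothetical per-minute login-attempt counts; the spike at index 5 is
# the kind of behavioral outlier a baseline flags even when each
# individual request looks legitimate on its own.
counts = [12, 9, 11, 10, 13, 180, 12, 11]
print(flag_anomalies(counts))  # [5]
```

The point of the sketch is that behavior-based detection does not care what the malicious code looks like, only what it does, which is exactly the property needed against AI-generated variants.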

Dr. Raj Sharma, a lead AI and cybersecurity consultant at the University of Oxford, believes that AI-generated attacks cannot be countered using traditional security controls. “If there is some kind of hacking tool that uses AI, then we have to use AI to understand its behavior,” he says.

As such, artificial intelligence technologies will become critical in developing defensive measures against the evolving cyber landscape where attackers turn to AI-powered platforms to create malware and launch attacks.

Ultimately, the impact of ChatGPT and similar platforms on the cybersecurity landscape depends on users’ intentions. The bottom line is that it is crucial to be aware of all the potential risks arising from its use, so that appropriate mitigation actions can be taken to reduce them.
