OpenAI’s ChatGPT has reportedly created a new strain of polymorphic malware following text-based interactions with cybersecurity researchers at CyberArk.
The malware created with ChatGPT can “easily circumvent security products and make mitigation difficult with very little effort or expenditure by the adversary,” according to a technical write-up the company recently shared with Infosecurity.
CyberArk security researchers Eran Shimony and Omer Tsarfati explain in their report that the first step in creating the malware was to bypass the content filters preventing ChatGPT from producing malicious tools.
To accomplish this, the CyberArk researchers simply insisted, posing the same request in a more authoritative manner.
“Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we received functioning code,” Shimony and Tsarfati noted.
The researchers also noted that when using the API version of ChatGPT (as opposed to the web version), the system does not appear to apply its content filter.
“It is unclear why this is the case, but it makes our task much easier as the web version tends to become bogged down with more complex requests,” reads the CyberArk report.
Shimony and Tsarfati then used ChatGPT to mutate the original code, thus creating multiple variations of it.
“In other words, we can mutate the output on a whim, making it unique every time. Moreover, adding constraints like changing the use of a specific API call makes security products’ lives more difficult.”
Thanks to ChatGPT’s ability to create and continually mutate injectors, the researchers were able to build a polymorphic program that is highly evasive and difficult to detect.
“By utilizing ChatGPT’s ability to generate various persistence techniques, Anti-VM modules and other malicious payloads, the possibilities for malware development are vast,” explained the researchers.
“While we have not delved into the details of communication with the C&C server, there are several ways that this can be done discreetly without raising suspicion.”
CyberArk said it would continue to develop and expand on this research, and that it planned to release some of the source code for learning purposes.
The report comes days after Check Point Research discovered ChatGPT being used to develop new malicious tools, including infostealers, multi-layer encryption tools and dark web marketplace scripts.