With the advent of new technologies based on artificial intelligence and virtual assistants, many tasks have been automated and accelerated. However, since the emergence of AI and virtual assistants, the risk has grown that artificial intelligence may also satisfy dangerous, harmful, and unethical queries. One example is a request for instructions on building a bomb from reclaimed, unsuspicious materials; another is obtaining answers that undermine the security of entire IT infrastructures. The latter category includes requests to AI for the development of exploits and malware, particularly ransomware.
Indeed, this risk became a reality at the end of December 2023 when, in China, a group of people developed ransomware whose structure was based on requests made to ChatGPT. The activity resulted in four arrests.
In this analysis, three ransomware samples developed with the help of ChatGPT will be examined using different approaches.
For further analysis: