A recent report by Acronis, a cybersecurity company, reveals a surge in cyberattacks that use Artificial Intelligence (AI). The report, titled Acronis Cyberthreats Report H2 2023, also found that cybercriminals’ adoption of machine learning and AI has led to the development of highly sophisticated attack methodologies.
The report, which is based on data collected from more than 1,000,000 unique endpoints across 15 key countries, says, “AI’s ability to learn and adapt makes it a potent tool for executing attacks that can evade standard detections and countermeasures. The result is a significant shift in the threat landscape, challenging companies to rethink their security strategies.”
On a rather ominous note, the report says, “The cybersecurity sector is now in an arms race with malicious actors, each side using increasingly advanced AI to outwit the other. Companies must be prepared to engage in a continuous process of learning and evolving their security protocols to outpace the adaptive nature of malicious AI.”
In a world where AI, Machine Learning (ML), the Internet of Things (IoT) and other emerging technologies are already generating record amounts of data, the report’s findings are sure to send a chill down the spine. Data centers are already scrambling to upgrade their infrastructure to cope with the huge volumes of data generated each day, and becoming AI-ready is now one of the top priorities of the Cloud and Data Center industry. Concepts like the AI Super Cloud no longer belong to the realm of science fiction; they are something the world increasingly relies on every day. So where does that leave us? We need AI, but can we trust everyone with it? And how do we prevent its misuse?
To understand that, let’s look at some of the examples the Acronis report shares of how AI is being misused.
Just how AI is being misused for cyberattacks
The report warns against spear phishing and AI-generated social engineering attacks, saying, “Natural language processing (NLP) tools can now draft phishing emails that mimic the tone, style and vocabulary of genuine communications from trusted sources. Likewise, AI algorithms can analyze an individual’s online behavior to tailor deceptive messages that the recipient is more likely to trust and act upon.”
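To ground the defensive side of this, here is a minimal, hypothetical sketch of the kind of rule-based phishing triage the report implies is being outpaced. The keyword lists, weights and the score_email function are all illustrative assumptions, not anything described by Acronis.

```python
import re

# Hypothetical, illustrative heuristics only -- not from the Acronis report.
# Classic phishing tells: urgency wording, credential requests and links
# whose visible label does not match their real destination.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}
CREDENTIAL_WORDS = {"password", "login", "account number"}

def score_email(subject: str, body: str) -> int:
    """Return a crude risk score; higher means more phishing-like."""
    text = f"{subject} {body}".lower()
    score = 0
    score += 2 * sum(word in text for word in URGENCY_WORDS)
    score += 3 * sum(word in text for word in CREDENTIAL_WORDS)
    # Markdown-style links where the visible label differs from the target.
    for label, target in re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", body):
        if label.strip().lower() not in target.lower():
            score += 5
    return score

print(score_email(
    "Urgent: verify your account",
    "Your login expires today. Click [bank.com](http://evil.example) now.",
))  # prints 14: urgency + credential wording + a mismatched link
```

The report’s warning is precisely that NLP-drafted messages mimic trusted senders and sidestep such surface-level patterns, which is why static rules like these are no longer enough on their own.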
While there is still some awareness about deepfakes and impersonation, the report also highlights other misuses of AI, such as using it to make malware adaptable. “Malware typically has a static behavior pattern, making it detectable by traditional security solutions. However, with the integration of AI, malware can now dynamically adjust its operations to evade detection, learn from environmental interactions or even deactivate if it detects a sandbox environment,” says the report.
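As a toy illustration of why static detection breaks down (my sketch, not the report’s): signature-based scanners match file hashes or byte patterns, so even a cosmetic change defeats them, and AI-assisted malware pushes this further by mutating its behavior at runtime.

```python
import hashlib

# Two snippets with identical behavior but different bytes. A hash-based
# signature computed for one will never match the other.
variant_a = b"x = 1 + 1\nprint(x)\n"
variant_b = b"y = 2\nprint(y)  # same output\n"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a == sig_b)  # False: same behavior, disjoint signatures
exec(variant_a)        # prints 2
exec(variant_b)        # prints 2
```

Behavior-based detection exists precisely because of this gap; the report’s point is that AI now helps malware probe and adapt to those defenses too.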
Cybercriminals are leveraging malicious AI tools such as WormGPT, FraudGPT, DarkBERT, DarkBART and ChaosGPT, and the public release of ChatGPT has further increased the use of generative AI for cyberattacks.
In fact, there have been cases where tools designed to combat cyberattacks are being misappropriated to inflict digital damage. “Developed by a South Korean company called S2W Security and trained on dark web data, DarkBERT is designed to combat cybercrime, but like ChatGPT, it is now being exploited for malicious purposes,” says the report, adding, “Another example of malevolent AI usage is the Mylobot botnet, which incorporates various evasion tactics and exhibits the potential for further adaptation based on AI integration, highlighting the shift towards intelligent malware development.”

“There’s a disturbing trend being recognized globally where bad actors continue to leverage ChatGPT and similar generative AI systems to increase cyberattack efficiency, create malicious code, and automate attacks,” says Candid Wüest, Acronis VP of Product Management. “Now, more than ever, corporations need to prioritize comprehensive cyber protection solutions to ensure business continuity,” he adds.
Michael Suby, Research VP at IDC, acknowledges this as well, saying, “Unfortunately, bad actors continue to profit from these activities and are leveraging AI-enhanced techniques to create more convincing phishing schemes, guaranteeing that this problem will continue to plague businesses.”
One could argue that any technology can be misused, but generative AI makes it difficult to predict just how this technology can be misused. It is this unpredictability that adds a layer of dread to a growing sense of vulnerability. Raign’s lines from When It’s All Over come to mind:
“It’s all gone wrong,
Heaven hold us.
We’ve lost the sun,
Heaven told us.
The world was strong,
Heaven hold us.
Where do we go
When it’s all over?”
*Feature Image via www.vpnsrus.com