- HP’s report reveals cybercriminals using AI to write malware and phishing lures.
- ChromeLoader campaigns use AI to create more sophisticated web browser malware.
- 12% of all email threats bypassed gateway security in Q2, HP researchers found.
Generative AI used to be a playground for making cute cat photos and writing bad poetry.
However, cybercriminals have discovered a darker use: malware development.
According to HP Inc.’s latest Threat Insights Report, generative AI is enabling hackers to create malicious code faster and smarter than ever before.
If you thought dealing with deepfakes was bad, brace yourself: AI can now write sophisticated malware that targets your computer without breaking a sweat.
HP revealed its findings during the company’s annual HP Imagine keynote, emphasizing how AI-generated code is used in ChromeLoader malware campaigns.
So what exactly is ChromeLoader? It is malware that hijacks your browser, letting attackers manipulate your searches and redirect you to sites they control; it typically spreads through fake utility tools such as PDF converters.
While ChromeLoader itself may sound like old news, the alarming part is the speed and polish that AI now brings to these attacks.
HP’s threat research team reported that these campaigns are sleeker, larger, and more professional than before.
The attackers have even managed to get around Windows security policies by embedding their code in seemingly innocuous applications.
But that is not all. HP researchers also discovered AI’s involvement in another cunning attack vector: SVG images.
These images, a staple of web design, are XML documents under the hood, which means they can carry embedded scripts: malicious code hides in plain sight inside what looks like a harmless graphic and can still deliver malware to your device.
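To see why defenders treat SVG attachments warily, here is a minimal Python sketch of the kind of heuristic check a mail filter might run. It is illustrative only and not taken from HP's report; the tag and attribute lists are assumptions chosen for the example.

```python
import sys
import xml.etree.ElementTree as ET

# Tags and attributes that can carry executable content inside an SVG.
# These lists are illustrative, not an exhaustive detection ruleset.
SUSPICIOUS_TAGS = {"script", "foreignObject"}
SUSPICIOUS_ATTRS = {"onload", "onclick", "href"}

def scan_svg(path: str) -> list[str]:
    """Return human-readable findings for a single SVG file."""
    findings = []
    tree = ET.parse(path)
    for elem in tree.iter():
        # ElementTree returns namespaced tags, e.g. '{http://www.w3.org/2000/svg}script'
        tag = elem.tag.split("}")[-1]
        if tag in SUSPICIOUS_TAGS:
            findings.append(f"<{tag}> element found")
        for attr, value in elem.attrib.items():
            name = attr.split("}")[-1].lower()
            if name in SUSPICIOUS_ATTRS and (name.startswith("on") or "javascript:" in value.lower()):
                findings.append(f"suspicious attribute {name}={value!r}")
    return findings

if __name__ == "__main__":
    for svg_path in sys.argv[1:]:
        for finding in scan_svg(svg_path):
            print(f"{svg_path}: {finding}")
```

Real email gateways layer checks like this with sandboxing and reputation data, and that is precisely the layer HP says a worrying share of threats still slipped past.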
In Q2, cybercriminals used 122 different file formats to deliver malware, with email remaining the most common method of infecting systems.
To make matters worse, HP found that 12% of all threats delivered by email in Q2 managed to slip past gateway security.
Yes, the PDF files and emails you receive could be far more dangerous than you realize.
Cybercriminals are also using Large Language Models (LLMs), the foundation of generative AI, to create harder-to-detect malware and phishing lures.
While AI has increased the sophistication of malware, it has also reduced the barrier to entry, allowing even novices to write code that rivals that of experienced hackers.
HP researchers believe the neatly commented code found in some campaigns is a telltale sign of AI assistance, and that this lowered barrier helps explain the recent surge in attacks.
So, if AI makes malware creation easier, what comes next? As HP notes, cybercriminals will continue to innovate, and users should be concerned.
Businesses, banks, and payment platforms are scrambling to keep up, deploying their own AI to combat these attacks.