Artificial intelligence in the service of hackers: the emerging threat of new cyberattacks

INCREASING THREAT OF HACKERS UTILIZING ARTIFICIAL INTELLIGENCE

As generative artificial intelligence continues to permeate our lives, more and more of us (let's not deny it) are imagining possible near-future scenarios.

Until now, we hadn't really considered that, sooner or later, this too would have to happen: artificial intelligence in the service of hackers. OpenAI and Microsoft reported exactly that in recent hours, though similar incidents had already occurred before. Let's try to take stock of the situation.

INTELLIGENT USE OF ARTIFICIAL INTELLIGENCE BY HACKERS

The Italian government has recently approved a bill toughening penalties for cybercriminals. It may already need updating, though: lately, hackers have been using artificial intelligence for their malicious activities. OpenAI, the maker of ChatGPT, and Microsoft, a partner of Sam Altman's company, have both reported it.

On Wednesday, February 14, a detailed post with the emblematic title “Staying ahead of threat actors in the age of AI” appeared on the Microsoft Security blog. In it, the company publishes “research on emerging threats in the age of artificial intelligence, focusing on identified activities associated with known threat actors.”

THREATS FROM RUSSIA AND NORTH KOREA

A large part of the report is devoted to the actions Microsoft and OpenAI are taking to counter the threat of hackers using artificial intelligence. It is followed by a short list of cybercriminal groups that are “exploring and testing different AI technologies as they emerge, in an attempt to understand the potential value to their operations and the possible security controls they may need to bypass.”

Forest Blizzard, also known as Strontium, an organization tied to Russian military intelligence, is mentioned. The report states that “Forest Blizzard has been extremely active in targeting organizations related to Russia’s war in Ukraine throughout the conflict, and Microsoft believes that Forest Blizzard’s operations play a significant supporting role in Russian foreign policy and military objectives, both in Ukraine and worldwide.”

Thallium, a North Korean hacker group, “was very active throughout 2023. Its recent operations relied on phishing emails to compromise and gather intelligence from prominent individuals with expertise on North Korea.” The attackers posed as academic institutions and NGOs, luring victims into providing expert opinions and commentary on foreign policy related to North Korea.

Other AI-assisted hacking operations have been launched by Iran and China.

NO SIGNIFICANT ATTACKS

Microsoft and OpenAI stress that none of the AI-assisted attacks observed have been significant and that no previously unknown techniques were used. Furthermore, all accounts associated with these criminal organizations have been disabled. But cybersecurity will certainly need to take on new challenges and deploy new defensive measures.

NEW FRONTIERS OF CRIMINALITY

In this regard, a very recent example based on AI-generated imagery is worth noting. A few days ago, a group of Iranian hackers broadcast a deepfake video in the United Arab Emirates (as well as in the United Kingdom and Canada). In the video, an AI-generated news anchor presented a report showing (unverified) images of Palestinians killed in Gaza by the Israeli military.

Consider also ChatGPT’s ability to write code, or the ability of other software to imitate the voices of more or less well-known figures. Not surprisingly, a few months ago the big tech companies enlisted groups of hackers to hunt for flaws in their artificial intelligence models. Let’s hope the old saying “if you can’t beat your enemy, make him your friend” doesn’t apply.
