The European Union is taking these potential threats very seriously and has reacted with several regulations.
Its European Artificial Intelligence Act (AI Act) entered into force on 1st August 2024. The Act is designed to foster responsible artificial intelligence development in the EU and addresses potential risks to citizens’ health, safety, and fundamental rights.
The EU also adopted the Cyber Resilience Act on 10th October 2024, which covers products with digital elements (including home cameras, kitchen appliances, televisions, and toys) and requires them to meet cybersecurity requirements before they go on sale.
Furthermore, the EU directive NIS2, which applies to sectors operating critical infrastructure, came into force on 16th January 2023. Its importance has been underlined by several high-profile cyberattacks targeting critical infrastructure such as power grids, water systems, and transport networks.
With both security providers and cybercriminals having access to AI systems, an ongoing ‘arms race’ has developed between the two.
Forbes recently discussed how AI-powered attacks are more dangerous than traditional cyber-attacks in a number of key ways. AI enables the automation of intricate attack processes, allowing cybercriminals to initiate and manage attacks on an unparalleled scale. AI systems can also learn from previous attempts, continually enhancing their efficiency and ability to avoid detection. At the same time, advanced AI can process large volumes of data to detect vulnerabilities and patterns in target systems more effectively than human attackers, and the use of Natural Language Processing enables cybercriminals to launch more convincing phishing and social engineering attacks that are harder for victims to spot. Hackers will continue to develop even more sophisticated and automated techniques, capable of analyzing software and organizations to identify the most vulnerable entry points.
A recent study by Forrester found that 88% of security experts expect AI-driven attacks to become mainstream – it's only a matter of time.
This means that cyber analysts and IT experts can only defeat bad actors by understanding how AI will be weaponized, which enables them to confront cybercriminals head-on and to develop and deploy suitable security measures.
Looking further ahead, the EU also has a focus on quantum-safe encryption to safeguard against future quantum computing risks.
The European Commission recently published a Recommendation on Post-Quantum Cryptography to encourage Member States to develop and implement a harmonized approach, as the EU transitions to post-quantum cryptography. This is designed to help ensure that the EU's digital infrastructures and services remain secure in the next digital era.
Different security measures are required depending on whether AI-based attacks are aimed at technologies or at humans.
Conventional cyber security measures are often inadequate when it comes to detecting and defending against AI attacks. They rely heavily on signature-based detection and rule-based systems that are not equipped for the dynamic and evolving nature of AI cyberattacks.
It therefore makes sense to beat the enemy with its own weapons: AI can serve as a powerful ally in protecting IT assets. Advanced detection mechanisms that utilize machine learning (ML) and AI should be used to spot the nuanced anomalies and patterns that traditional methods may miss. Tools such as Natural Language Processing (NLP) and image recognition can detect threats in a variety of languages and formats.
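As an illustration of this kind of ML-based anomaly detection, the following minimal sketch trains an Isolation Forest on baseline network-flow features and flags connections that deviate from the norm. It assumes Python with scikit-learn, and the feature names and values are placeholders rather than a production feature set.

```python
# Minimal anomaly-detection sketch (assumption: scikit-learn is available and
# network-flow features have already been extracted into a numeric matrix).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature vectors per connection: [bytes_sent, bytes_received,
# duration_seconds, failed_login_count] -- placeholder values for illustration.
baseline_traffic = np.array([
    [1_200, 3_400, 1.2, 0],
    [  900, 2_800, 0.8, 0],
    [1_500, 4_100, 1.5, 1],
] * 50)  # repeated to mimic a larger baseline sample

# Train an unsupervised model on "normal" traffic; contamination is the
# expected fraction of outliers and must be tuned per environment.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_traffic)

# Score new connections: -1 flags an anomaly worth escalating to an analyst.
new_connections = np.array([
    [1_100,  3_200, 1.1, 0],   # looks like baseline traffic
    [95_000,   200, 0.1, 7],   # large upload plus failed logins: suspicious
])
print(model.predict(new_connections))  # e.g. [ 1 -1 ]
```

Because the model is unsupervised, it needs no labeled attack data, which matters precisely because AI-driven attacks keep changing; flagged connections would still be triaged by an analyst rather than blocked automatically.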
Here is a set of suggestions that any company can use to plan its defenses and make its IT more secure:
- Implementing AI-Enhanced Defense Systems – Combat threats by deploying AI-enhanced security solutions. These include advanced threat detection systems, automated patch management, and AI-driven network monitoring tools that detect and respond to anomalies in real time.
- Conducting Regular Security Audits and Penetration Testing – Perform frequent system assessments, including AI-powered penetration tests. These can uncover vulnerabilities that traditional methods might overlook and provide insights into how AI could be used to exploit systems.
- Performing Regular Updates – To make evasion attacks more difficult, AI detection models must be continuously updated and retrained so that they adapt to new evasion techniques.
- Applying Adversarial Training – Train AI-based defense models on adversarial examples to improve their resilience against deliberate attempts to fool them (a minimal training-step sketch follows this list).
- Implementing Deception Technologies – Deploy decoy systems and fake data to confuse and mislead AI-powered attacks. These tools can detect threats early and gather valuable intelligence on attacker tactics (see the decoy-service sketch after this list).
- Developing an AI-Specific Incident Response Plan – Revise your incident response strategies to address the speed and complexity of AI-driven attacks. This may involve automated response protocols and using the services of specialized teams trained in AI forensics.
- Implementing a Zero Trust Security Framework – This operates on the principle of "never trust, always verify" and is an excellent approach to enhancing security. Instead of assuming everything inside an organization's network is safe, Zero Trust requires continuous verification of all users, devices, and connections, regardless of their location (illustrated in the policy sketch after this list).
- Using Two-Factor Authentication – Two-factor authentication increases security because employees must confirm their identity in two different ways before gaining access to resources and data (a TOTP sketch follows this list).
- Collaborative Threat Intelligence Sharing – Join industry-wide threat-sharing networks. By pooling knowledge on AI-driven attacks, businesses can stay ahead of emerging threats and benefit from collective defense strategies.
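For the adversarial training point above, the sketch below shows one possible training step that mixes clean inputs with inputs perturbed by the Fast Gradient Sign Method (FGSM). It assumes PyTorch, and `model`, `optimizer`, `images`, and `labels` are placeholders for whatever classifier and data batch a defense system actually uses.

```python
# Sketch of one adversarial-training step using the FGSM perturbation
# (assumption: PyTorch is available; model, optimizer, images, labels stand in
# for the detection pipeline's own classifier and batch).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # 1. Craft adversarial examples: perturb inputs in the direction that
    #    increases the loss (Fast Gradient Sign Method).
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = (images + epsilon * images.grad.sign()).detach()

    # 2. Train on a mix of clean and adversarial inputs so the detector
    #    stays accurate on both.
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(images.detach()), labels)
    adv_loss = F.cross_entropy(model(adv_images), labels)
    total_loss = 0.5 * (clean_loss + adv_loss)
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

In practice the adversarial examples would be generated in the same feature space the detector consumes (for example, extracted traffic or file features), not necessarily raw images.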
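For deception technologies, a decoy can be as simple as a fake service listening on a port that no legitimate user should ever touch. The sketch below uses plain Python sockets; port 2222 and the SSH banner are arbitrary choices, and this is an illustration rather than a hardened honeypot.

```python
# Minimal decoy-service sketch: a fake service that accepts connections on an
# otherwise unused port and logs every attempt as an early-warning signal.
import datetime
import socket

DECOY_PORT = 2222  # nothing legitimate should ever connect here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", DECOY_PORT))
    server.listen()
    print(f"Decoy listening on port {DECOY_PORT}")
    while True:
        conn, addr = server.accept()
        with conn:
            # Any hit on the decoy is worth escalating to the security team.
            print(f"{datetime.datetime.now().isoformat()} "
                  f"suspicious connection from {addr[0]}:{addr[1]}")
            # Minimal banner to keep the attacker engaged a little longer.
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")
```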
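To illustrate the Zero Trust principle, the sketch below evaluates every access request against identity, second-factor, and device-posture checks and deliberately grants no implicit trust to traffic from the internal network. The request fields and policy rules are hypothetical simplifications, not a full Zero Trust implementation.

```python
# Illustrative Zero Trust check: the same verification applies to every
# request, wherever it originates. All fields and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # primary credentials verified
    mfa_passed: bool           # second factor verified for this session
    device_compliant: bool     # e.g. disk encryption on, patches current
    source_network: str        # "internal" or "external"; grants no implicit trust
    resource_sensitivity: str  # "low", "medium" or "high"

def authorize(req: AccessRequest) -> bool:
    # Identity, second factor, and device posture are checked on every request.
    if not (req.user_authenticated and req.mfa_passed and req.device_compliant):
        return False
    # Example policy: high-sensitivity resources are only reachable via a
    # managed access path, never directly from an external network.
    if req.resource_sensitivity == "high" and req.source_network == "external":
        return False
    return True

# An "internal" request from a non-compliant device is still rejected.
print(authorize(AccessRequest(True, True, False, "internal", "low")))    # False
print(authorize(AccessRequest(True, True, True, "external", "medium")))  # True
```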
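For two-factor authentication, the most common second factor is a time-based one-time password (TOTP) generated by an authenticator app. The sketch below uses the third-party pyotp package; the user name and issuer are placeholders, and in practice the per-user secret would be stored securely rather than printed.

```python
# Minimal TOTP second-factor sketch (assumption: the third-party pyotp
# package is installed; names and issuer are placeholders).
import pyotp

# Enrolment: generate a per-user secret and a provisioning URI that the user
# scans as a QR code into their authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp")
print("Provisioning URI:", uri)

# Login: after the password check, verify the 6-digit code the user enters.
user_code = input("Enter the code from your authenticator app: ")
if totp.verify(user_code, valid_window=1):  # small window tolerates clock drift
    print("Second factor accepted.")
else:
    print("Invalid code, access denied.")
```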
No matter how much technology is used to protect against AI-based attacks, the biggest security risk remains the human element. When attackers use emotion and deception to gain trust and trick people into handing over sensitive information, even the most sophisticated technologies can only help to a limited extent.
It is therefore all the more important that employees are made aware of the common forms of cyberattack and receive regular training to help them recognize and report potential threats. Employees should be trained to carefully verify every contact attempt (email, SMS, phone calls, etc.), to make sure they know the sender, and to question the context of the messages they receive, especially for money transactions, where a face-to-face conversation or a telephone call can be used to double-check the request.
Combining AI tools with human teams ensures more comprehensive protection against sophisticated cyber-attacks. Further measures can include:
- Adapting existing email and URL filters,
- Submitting takedown requests for malicious websites to the relevant web hosts,
- Reporting telephone numbers to mobile phone providers so they can be blocked,
- Reporting cybercriminals' bank details, money mule accounts, or hacked legitimate email accounts to the respective providers.