EU AI Act: All 47 Cybersecurity References

Max Zhou
8 min read · Nov 7, 2024


Introduction

The purpose of this article is to provide cybersecurity professionals with a reference document for the EU AI Act, covering all 47 mentions of the term ‘cybersecurity’ within the official text (excluding cited sources and titles).

For comparison, ‘penalty/penalties’ is mentioned 19 times.

Cybersecurity is a key driver of the Act, which carries massive financial penalties for non-compliance, detailed in Article 99 of the EU AI Act.

At a high level, the largest single penalty is:

  • €35,000,000 (~$38,000,000 USD) or up to 7% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher
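The “whichever is higher” rule above can be made concrete with a small arithmetic sketch. The function below is purely illustrative (not legal advice); the function name and figures are my own, based only on the €35M / 7% ceiling quoted above.

```python
def max_penalty_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    EUR 35 million or 7% of total worldwide annual turnover for the
    preceding financial year, whichever is higher. Illustrative only."""
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

# A firm with EUR 1 billion turnover: 7% = EUR 70M, which exceeds EUR 35M
print(max_penalty_eur(1_000_000_000))  # 70000000.0

# A firm with EUR 100 million turnover: 7% = EUR 7M, so the EUR 35M floor applies
print(max_penalty_eur(100_000_000))  # 35000000.0
```

The key point for practitioners: for large providers, the turnover-based figure dominates, so the exposure scales with company size rather than being capped at €35M.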

This document is meant to be shared so practitioners can quickly identify areas of focus.

Summary

The EU AI Act lays a foundation of legally enforceable rules that limit the impact of AI systems and add oversight and governance controls. It classifies systems into categories: Prohibited Systems, High-Risk Systems, and General-Purpose AI (GPAI).

The law is primarily targeted at providers (developers) of these systems (such as organizations like OpenAI). However, the law also dictates obligations for certain users (deployers) of these AI systems.

A deployer is any organization or entity that uses these AI systems to support specific use cases. The Act applies to organizations that operate in the EU, as well as to those whose AI system’s output is used in the EU.

The EU AI Act was published in the Official Journal on July 12, 2024 and entered into force on August 1, 2024. It applies in full by August 2, 2027, but enforcement begins as early as February 2, 2025 for Prohibited Systems.

Article 15 (Accuracy, robustness and cybersecurity, pg. 61 of the full text) provides the most detailed expectations on the topic of this article.

All 47 References of ‘Cybersecurity’ from the EU AI Act

Each reference below from the recitals includes the paragraph number from the regulation, denoted as (#). The remainder are denoted with their Article reference within the full text.

Where the term appears several times in the same sentence, each occurrence is listed as a separate line item and continued with an ellipsis.

  1. Biometric systems which are intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures should not be considered to be high-risk AI systems. (54)
  2. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. (55)
  3. Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. (66)
  4. High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in light of their intended purpose and in accordance with the generally acknowledged state of the art. (74)
  5. Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. (76)
  6. To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure. (76)
  7. Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, high-risk AI systems which fall within the scope of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, in accordance with that regulation may demonstrate compliance with the … (77)
  8. cybersecurity requirements of this Regulation by fulfilling the essential … (77)
  9. cybersecurity requirements set out in that regulation. (77)
  10. When high-risk AI systems fulfil the essential requirements of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, they should be deemed compliant with the … (77)
  11. cybersecurity requirements set out in this Regulation in so far as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof issued under that regulation. (77)
  12. To that end, the assessment of the cybersecurity risks, associated to a product with digital elements classified as high-risk AI system according to this Regulation, carried out under a regulation of the European Parliament and of the Council on horizontal… (77)
  13. cybersecurity requirements for products with digital elements, should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI specific vulnerabilities such as data poisoning or adversarial attacks, as well as, as relevant, risks to fundamental rights as required by this Regulation. (77)
  14. The conformity assessment procedure provided by this Regulation should apply in relation to the essential cybersecurity requirements of a product with digital elements covered by a regulation of the European Parliament and of the Council on horizontal … (78)
  15. cybersecurity requirements for products with digital elements and classified as a high-risk AI system under this Regulation. (78)
  16. However, this rule should not result in reducing the necessary level of assurance for critical products with digital elements covered by a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements. (78)
  17. Therefore, by way of derogation from this rule, high-risk AI systems that fall within the scope of this Regulation and are also qualified as important and critical products with digital elements pursuant to a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and to which the conformity assessment procedure based on internal control set out in an annex to this Regulation applies, are subject to the conformity assessment provisions of a regulation of the European Parliament and of the Council on horizontal … (78)
  18. cybersecurity requirements for products with digital elements insofar as the essential… (78)
  19. cybersecurity requirements of that regulation are concerned. (78)
  20. Building on the knowledge and expertise of ENISA on the cybersecurity policy and tasks assigned to ENISA under the Regulation (EU) 2019/881 of the European Parliament and of the Council, the Commission should cooperate with ENISA on issues related to … (78)
  21. cybersecurity of AI systems. (78)
  22. The providers of general-purpose AI models presenting systemic risks should be subject, in addition to the obligations provided for providers of general-purpose AI models, to obligations aimed at identifying and mitigating those risks and ensuring an adequate level of cybersecurity protection, regardless of whether it is provided as a standalone model or embedded in an AI system or a product. (114)
  23. Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. (115)
  24. Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider accidental model leakage, unauthorised releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. (115)
  25. That protection could be facilitated by securing model weights, algorithms, servers, and data sets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved. (115)
  26. Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, in accordance with Article 54(3) of Regulation (EU) 2019/881, high-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to that Regulation and the references of which have been published in the Official Journal of the European Union should be presumed to comply with the… (122)
  27. cybersecurity requirement of this Regulation in so far as the … (122)
  28. cybersecurity certificate or statement of conformity or parts thereof cover the… (122)
  29. cybersecurity requirement of this Regulation. (122)
  30. This remains without prejudice to the voluntary nature of that cybersecurity scheme. (122)
  31. In order to carry out third-party conformity assessments when so required, notified bodies should be notified under this Regulation by the national competent authorities, provided that they comply with a set of requirements, in particular on independence, competence, absence of conflicts of interests and suitable cybersecurity requirements. (126)
  32. The Commission should take into account cybersecurity risks when carrying out its tasks as data controller on the EU database (131)
  33. the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated (Article 13)
  34. which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity (Article 13)
  35. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. (Article 15)
  36. The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. (Article 15)
  37. Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements. (Article 31)
  38. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 (Article 42)
  39. the references of which have been published in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the… (Article 42)
  40. cybersecurity certificate or statement of conformity or parts thereof cover those requirements. (Article 42)
  41. ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. (Article 55)
  42. that AI regulatory sandboxes facilitate the development of tools and infrastructure for testing, benchmarking, assessing and explaining dimensions of AI systems relevant for regulatory learning, such as accuracy, robustness and cybersecurity, as well as measures to mitigate risks to fundamental rights and society at large. (Article 58)
  43. cooperate, as appropriate, with other Union institutions, bodies, offices and agencies, as well as relevant Union expert groups and networks, in particular in the fields of product safety, cybersecurity, competition, digital and media services, financial services, consumer protection, data and fundamental rights protection; (Article 66)
  44. In particular, the national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of AI technologies, data and data computing, personal data protection, cybersecurity, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements. Member States shall assess and, if necessary, update competence and resource requirements referred to in this paragraph on an annual basis. (Article 70)
  45. National competent authorities shall take appropriate measures to ensure an adequate level of cybersecurity. (Article 70)
  46. They shall put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law. (Article 78)
  47. the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness and compliance with other relevant requirements set out in Chapter III, Section 2, as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f); (h) cybersecurity measures put in place; (Annex IV)

Reference

High-level summary (10-minute read): https://artificialintelligenceact.eu/high-level-summary/

Full text (3-hour read): https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689


Max Zhou

Information Security Professional. Product Security through continuous improvement and hands-on technical expertise.