European Artificial Intelligence Act: many procedural and substantive requirements

22/03/22

Invest in the ethical use of data and privacy

The European Commission has recently published the draft AI Act. The Act classifies AI systems into four risk-based categories; systems in the higher categories must comply with more safeguards. For smaller enterprises in particular, the requirements for high-risk AI systems can become disproportionately burdensome, given the substantial number of obligations and associated compliance costs of up to €160,000.

‘The proposed European AI legislation creates many procedural and substantive requirements for all parties involved with (high-risk) AI systems,’ say PwC cybersecurity and data protection experts Bram van Tiel and Yvette van Gemerden. ‘Start the adaptation process in good time, take the draft AI Act into account in future business decisions, and invest in the ethical use of data and privacy.’

Artificial intelligence: four risk levels

Artificial intelligence refers to machines that mimic the decision-making and problem-solving capabilities of the human mind. Unlike humans, AI does not have to confine itself to empirical data and methods. Its uses are virtually limitless, from predicting outcomes and recognising faces to calculating pricing plans. AI systems vary greatly from one another and, as they are ever-evolving, human supervision remains necessary. The ethical development of AI is still a work in progress.

In April 2021, the European Commission presented a proposal for a Regulation on Artificial Intelligence: the draft AI Act. The Commission hopes to set the global regulatory standard and turn the EU into an AI hub. The draft Act aims to safeguard fundamental rights while stimulating the development of AI systems, thus creating space for investment and innovation.

Unacceptable risk

AI systems in this category pose a clear threat to people’s safety, livelihoods and rights. Such systems are prohibited outright. Think of:

  • Manipulative systems designed to distort behaviour, such as a system that measures a truck driver’s fatigue and plays a sound that pushes them to drive longer.
  • Social scoring systems that assign scores to individuals and treat them disproportionately on that basis, for example restricting a person’s travel options because they do not recycle properly.
  • Large-scale biometric systems used by law enforcement to recognise a person’s characteristics, such as facial recognition software linked to a city-wide camera network.

High risk

AI systems in this category pose a high risk to health, safety and fundamental rights, but do not fall under the ‘unacceptable risk’ category. These systems are either products (or parts of products) covered by the EU legislation listed in Annex II, or fall within the high-risk areas listed in Annex III. Annex II covers, among other things, certain medical devices; Annex III includes, for example, AI systems used in recruitment processes. These systems are subject to strict requirements and obligations, including a European Conformity (CE) marking.

Limited risk

AI systems in this category are subject to transparency obligations, so that individuals interacting with the system can make informed decisions. A chatbot, for example, must let the user know they are talking to an AI-powered machine.

Minimal risk

AI systems in this category pose minimal to no risk to a person’s health and safety. They are not subject to additional requirements beyond existing legislation; the Commission and the Member States merely facilitate voluntary codes of conduct and encourage adherence to them.
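For readers who want a compact overview, the four-tier structure can be summarised as a simple mapping from risk level to regulatory consequence. The sketch below (in Python) is our own illustrative condensation of the categories described above; the labels and wording are ours, not the draft Act’s:

    from enum import Enum

    class RiskLevel(Enum):
        """Illustrative summary of the draft AI Act's four risk categories."""
        UNACCEPTABLE = 'prohibited outright'
        HIGH = 'strict requirements and obligations, including CE marking'
        LIMITED = 'transparency obligations'
        MINIMAL = 'no additional requirements; voluntary codes of conduct'

    # Print the mapping as a quick reference.
    for level in RiskLevel:
        print(f'{level.name}: {level.value}')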

Prepare your organisation for the European AI Act

The draft AI Act has an extraterritorial reach similar to that of the General Data Protection Regulation (GDPR). ‘It will apply to both public and private actors inside and outside the European Union (EU),’ Van Gemerden explains. ‘For example, if a US organisation’s AI system processes data of one or more EU citizens, it has to comply with the AI Act. This is also the case when the US organisation uses only the output of the AI system in the Union. The draft Act will apply to virtually all organisations involved with AI systems in the Union. Organisations handling high-risk systems, in particular, will likely have to adopt an integrated approach to align their GDPR compliance with their compliance with the AI Act.’

While the Act is expected to enter into force by the end of 2024, Van Tiel recommends that providers of AI systems start preparing now for a stream of assessments, documentation, certifications and the like. ‘Research shows that the complete draft Act contains around 200 obligations for AI operators, meaning that your organisation may face a substantial amount of red tape in a few years. The sooner your organisation adapts to the AI Act, the better you will be able to embed responsible and ethical AI and privacy practices into your systems. This adaptation can be done in three steps:

  1. Map out the existence and utilisation of AI systems in your organisation and possible developments that you wish to implement in the coming years.
  2. Assign and expand roles to carry responsibility for the AI systems. Compliance requires awareness among legal, technical and organisational staff. The importance of cross-functional reporting lines and multidisciplinary controls cannot be overstated.
  3. Implement a system of continuous risk management review within the organisation’s operations. As AI systems evolve over time, so can the associated risks. One-off risk mitigation is therefore unlikely to be sufficient.’

Higher compliance costs and fines

The draft AI Act relies almost entirely on self-assessment, meaning there are currently no public bodies that provide extensive compliance assistance. At the same time, the ethical use of data and privacy have evolved into valuable business imperatives. ‘Implementing them is not just ticking a box, but a way to distinguish yourself from the competition,’ says Van Gemerden. ‘Your customers follow developments closely and are concerned about the ethical use of their data.’

Organisations need to take appropriate steps to ensure compliance, which will create costs at company level. Van Tiel expects that high-risk AI systems in particular will have a considerable financial impact. ‘Compliance costs are likely to be around three times those incurred under the GDPR. Based on the EU impact assessment, small and medium-sized enterprises can expect compliance costs of up to €160,000, assuming the AI system already complies with all current legislation. All organisations involved with high-risk AI are expected to spend extensive resources on compliance and implementation, given the expansive scope of the draft AI Act and its detailed sets of requirements. The fines under the draft AI Act add to the financial impact: a maximum of thirty million euros or six percent of global annual turnover, whichever is higher, which significantly exceeds the maximum GDPR fines of twenty million euros or four percent of global annual turnover.’
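As a rough illustration of how that maximum fine works: the cap is the higher of a fixed amount and a turnover-based percentage. The short Python sketch below uses the figures quoted above; the turnover figure is hypothetical, and any actual fine would be set by the competent authority within this maximum:

    def max_fine_eur(global_annual_turnover_eur: float) -> float:
        # The draft AI Act caps fines at EUR 30 million or six percent of
        # global annual turnover, whichever is higher.
        return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

    # Hypothetical organisation with EUR 1 billion in global annual turnover:
    # six percent is EUR 60 million, which exceeds the EUR 30 million floor.
    print(max_fine_eur(1_000_000_000))  # 60000000.0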

Contact us

Bram van Tiel

Partner Cybersecurity & Dataprivacy, PwC Netherlands

Tel: +31 (0)62 243 29 62

Yvette van Gemerden

Partner, PwC Netherlands

Tel: +31 (0)65 200 59 24
