Artificial Intelligence (AI) is a fundamental and inevitable element in the development of future technology and services. Its capabilities contribute greatly to humanity and to business. On the other hand, AI’s complexity, opacity, autonomy and unpredictability entail various risks, particularly to consumer safety and even to human rights.
In an attempt to mitigate those risks, negotiations will be held between the European Commission and the European Parliament regarding the enactment of the European Commission’s “AI Act” proposal, which regulates various aspects of AI. This is the first attempt by any major regulator to regulate the use of AI, seeking to minimize safety and fundamental-rights risks by ensuring that AI systems are transparent, traceable and non-discriminatory.
This Regulation is relevant to you if you are: (a) a provider placing AI systems on the market or putting them into service in the Union, irrespective of whether you are established within the Union or in a third country; (b) a user of AI systems located within the Union; or (c) a provider or user of AI systems located in a third country, where the output produced by the system is used in the Union.
The AI Act assigns applications of AI to four risk categories:
- applications and systems that create an unacceptable risk, such as government-run social scoring, are prohibited.
- high-risk applications, such as CV-scanning tools that rank job applicants, are subject to specific legal requirements. The AI Act imposes various obligations on both operators and users of such AI systems, including:
  - design requirements
  - conformity assessments
  - post-market monitoring
  - incident response systems
- limited-risk applications, which are not explicitly banned or listed as high-risk, are largely left unregulated apart from specific transparency obligations, such as notification requirements.
- minimal- or no-risk applications are permitted with no restrictions; for these applications, voluntary codes of conduct will be encouraged.
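The four tiers above form a simple classification scheme. As a purely illustrative sketch (not legal advice), the tiers and the example use cases mentioned in this article could be expressed as a lookup like the following; the category assignments and names here are assumptions for illustration only:

```python
# Illustrative sketch only: the AI Act's four risk tiers as a simple lookup.
# The example use-case mapping is a simplification, not a legal classification.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "subject to strict legal requirements"
    LIMITED = "subject to specific transparency obligations"
    MINIMAL = "permitted; voluntary codes of conduct encouraged"


# Hypothetical mapping of example use cases to tiers, mirroring the
# categories described in the text above.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-scanning tool ranking job applicants": RiskTier.HIGH,
    "chatbot interacting with consumers": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for an example use case.

    Unknown use cases default to MINIMAL here purely to keep the sketch
    simple; a real assessment would require case-by-case legal analysis.
    """
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
```

The point of the sketch is that a system's obligations follow from its tier, so the first compliance step for any AI product is establishing which tier it falls into.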
This is a good time to start preparing your business for new and upcoming requirements, particularly because the AI Act could act as the global standard, similar to the role the GDPR (General Data Protection Regulation) has assumed over time.
It is important to note that existing legal obligations continue to apply – including the GDPR.
So what does this mean for the processing of personal information?
AI makes use of data, including personal information, in order to create new data. As an operator or user of AI products, your obligations are not only to ensure that you lawfully submit information to AI systems, but also to classify the information you receive from them and ensure that you meet the legal requirements for protecting any personal data received.
Therefore, on top of complying with the AI Act, and regardless thereof, before you hand over any information to an AI system, consider carefully:
- What kind of information are you about to hand over, and do you have a lawful basis to do so?
- Do you need consent from the data subjects?
- Did you carry out a proper data protection impact assessment before implementing the AI system, and are you aware of all the privacy risks associated with its use?
- Do you have proper data protection agreements with the provider of the system?
- Do you have clear information on how the system will use personal information, so that you can provide appropriate disclosures to your own data subjects?
Also consider, when receiving output from an AI system, whether that output could be classified as personal information, and what steps should be taken to keep it in compliance with your legal obligations.
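The questions above amount to a due-diligence checklist that should be fully resolved before any personal data reaches an AI system. A minimal sketch of such a checklist as a data structure follows; the field names are hypothetical labels chosen for this illustration, not terms defined by the AI Act or the GDPR:

```python
# Illustrative sketch: the due-diligence questions above as a checklist that
# must be fully answered before personal data is submitted to an AI system.
# Field names are assumptions for illustration, not a legal framework.
from dataclasses import dataclass, fields


@dataclass
class AIDataProtectionChecklist:
    lawful_basis_identified: bool = False
    consent_obtained_if_required: bool = False
    dpia_completed: bool = False
    privacy_risks_reviewed: bool = False
    data_protection_agreement_in_place: bool = False
    provider_processing_disclosed_to_subjects: bool = False
    output_classified_for_personal_data: bool = False


def open_items(checklist: AIDataProtectionChecklist) -> list[str]:
    """Return the names of any checklist items that are still unresolved."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]
```

For example, `open_items(AIDataProtectionChecklist(dpia_completed=True))` would list every item except the impact assessment, flagging what remains before the system can be used with personal data.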