For the first time, the Austrian Data Protection Authority (DPA) is addressing fundamental data protection issues regarding the use of artificial intelligence (AI) and its relation to the AI Act. In doing so, the DPA is positioning itself within the emerging landscape of AI authorities and following the example of other member states: in France, Spain, Germany and the Netherlands, similar FAQs and guidelines have already been in place for some time.
Key facts of the DPA's FAQ:
- AI as a multidisciplinary topic: The DPA correctly emphasises that various areas of law must be considered when using AI, such as the AI Act as the core piece of AI regulation (click here for the walk-through), product liability, copyright and data protection.
- Competence of the DPA within the scope of the AI Act: The AI Act provides for one or more supervisory authorities to monitor compliance with the AI Act. In Austria, this AI authority has not yet been established. However, in preparation for the implementation of the AI Act, an AI service desk has been set up at the Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR). In its FAQ, the DPA states that it has exclusive competence for market surveillance over part of the high-risk AI systems covered by the AI Act. This applies specifically to:
- Biometrics pursuant to Point 1 of Annex III of the AI Act, insofar as these systems are used for the purposes of law enforcement, border control management, justice and democratic processes;
- High-risk AI systems in the areas of law enforcement, migration, asylum and border control management, and administration of justice and democratic processes pursuant to Points 6 to 8 of Annex III of the AI Act.
- The AI Act does not affect the GDPR: The AI Act refers in places to definitions, rights and obligations under the GDPR. Where personal data is processed, the GDPR continues to apply in parallel. Accordingly, data subjects still have the right to lodge a complaint with the DPA.
- Data protection obligations for AI use: Using examples, the DPA briefly addresses the GDPR principles, the main obligations and the rights of data subjects when AI is used.
- Lawfulness and legal basis: Data processing by AI must be justified under Art 6 GDPR or, in the case of sensitive data, additionally under Art 9 GDPR. A suitable approach is to assess the various processing phases separately, an approach we have been recommending for many years in our consulting practice. With large language models, for instance, the collection of training data (including web scraping) must be distinguished from training and operation. It is interesting to note the DPA's innovation-friendly comment that such data processing can also be justified on the basis of legitimate interests. Whether this applies must be assessed on a case-by-case basis.
- Fair processing/transparency: The DPA stresses that data processing may not be unreasonably adverse, discriminatory, unexpected or misleading for the data subject. In doing so, the DPA addresses the core problems with AI: the frequent lack of explainability and the risk of bias. The principle of transparency is also closely linked to the information obligations. When operating a chatbot, the DPA recommends informing users transparently about the data processing. In our view, this can only refer to the information obligations under Art 12 et seq in conjunction with Art 22 GDPR and the right of access under Art 15 GDPR; the GDPR does not provide for further obligations. The DPA also states that the interaction with the chatbot must not have any unforeseen or adverse consequences, without clarifying what this means. In our understanding, this can only refer to adverse consequences of the data processing itself (e.g. unjustified automated decision-making), not to other damage (e.g. financial loss).
- Purpose limitation, data minimisation and storage limitation: The DPA also emphasises that data processing must be limited to what is necessary for the respective purpose.
- Accuracy: The DPA notes that the accuracy of the data in current (text-)generating systems is often a challenge. The output is usually the statistically most probable one, but not necessarily accurate. With this, the DPA addresses the "hallucination" of AI systems. In view of this technology, the DPA recommends focusing in particular on informing data subjects that the results generated by the system may be misleading or incorrect.
- Integrity and confidentiality: It must be ensured that the tools used meet adequate security standards and that processed data is not unlawfully disclosed to third parties. One negative example given is translation tools that allow users to upload their own content, where third parties can gain access to the stored documents due to a lack of security measures.
- Rights of data subjects: The rights of data subjects must also be considered when processing data via AI systems.
- Automated decision-making: Automated decision-making (ADM) is of particular relevance in the context of AI. If a data subject is subject to ADM, they must be informed accordingly. This includes information about the underlying logic and the envisaged consequences of such processing for the data subject. The data subject also has the right to obtain human intervention (a human review of the decision).
Conclusion
The DPA's FAQs are very welcome. They are an important step towards providing guidance on the use of AI and addressing the challenges involved as well as possible risk-mitigating measures. We expect other competent authorities to take a similarly proactive approach to providing information and advice. It will be important for all parties involved to engage in an ongoing dialogue with practitioners and businesses so that economic aspects and industry developments are appropriately taken into account and AI regulation remains practicable.