Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How

Abstract

#AI #genAI #legislation

Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the currently discussed European AI Act aims to address these risks through Article 52's AI transparency obligations, its interpretation and implications remain unclear. In this early work, we adopt a participatory AI approach to derive key questions based on Article 52's disclosure obligations. We ran two workshops with researchers, designers, and engineers across disciplines (N=16), where participants deconstructed Article 52's relevant clauses using the 5W1H framework. We contribute a set of 149 questions clustered into five themes and 18 sub-themes. We believe these can not only help inform future legal developments and interpretations of Article 52, but also provide a starting point for Human-Computer Interaction research to (re-)examine disclosure transparency from a human-centered AI lens.

Type of Publication
articles-and-papers
Theme
artificial-intelligence
Publishing
Cornell University
Publishing Date
2024
Language
english
Status
open-access
DOI
10.48550/arXiv.2403.06823
Authors
Abdallah El Ali; Karthikeya Puttur Venkatraj; Sophie Morosoli; Laurens Naudts; Natali Helberger; Pablo Cesar