Transparency and Secrecy

The proliferation of artificial intelligence (AI) systems has brought significant benefits across sectors, enhancing efficiency and decision-making. However, the use of AI in public sector decision-making raises complex legal and ethical challenges, particularly regarding transparency and secrecy. This dissertation by Ida Varošanec explores these challenges, focusing on the interplay between transparency requirements and intellectual property (IP) protection for AI systems in the European Union (EU).

At the core of the research lies the tension between the public interest in transparency and the private sector's need for confidentiality. Transparency is important for accountability, trust, and fairness, yet secrecy, particularly in the form of trade secrets, shields proprietary algorithms and business interests. Public sector reliance on AI for decisions in governance, healthcare, and law enforcement intensifies the need for clarity and oversight. Issues such as algorithmic bias, lack of explainability, and flawed outcomes, exemplified by recent cases in the Netherlands, underscore the urgency of addressing transparency gaps in AI deployment.

The dissertation systematically reviews the dual concepts of transparency and secrecy, emphasizing their roles in public trust and regulatory frameworks. Transparency is multifaceted, encompassing, inter alia, design, explainability, understandability, and access to information. Secrecy, while necessary in areas such as privacy and IP protection, can hinder due process rights, legitimacy, and trust when misapplied. The study introduces the Transparency-Secrecy (T-S) spectrum, a framework to evaluate and calibrate these competing interests. Legal frameworks, including the EU's General Data Protection Regulation (GDPR) and the AI Act, provide a basis for addressing these issues but remain fragmented.
While the AI Act mandates transparency for high-risk AI systems, its reliance on self-regulation and vague compliance thresholds such as 'sufficient transparency' leaves critical gaps in the operationalisation of transparency. The Trade Secrets Directive, while protecting IP against misappropriation, provides little guidance on how to balance competing interests, potentially undermining transparency obligations and resulting in inconsistent enforcement and sector-specific disparities.

The dissertation advocates a context-dependent socio-technical regulatory approach combining command-and-control, co-regulation, and self-regulation strategies. It highlights the need for robust accountability mechanisms, such as public procurement standards, oversight bodies, and continuous monitoring, not only to foster compliance with transparency requirements but also to ensure the trustworthy deployment and use of AI systems. The integration of soft-law practices, technical tools such as explainable AI (XAI), and private sector initiatives further supports this goal. Notably, the research emphasizes that complete transparency is neither achievable nor desirable, given its limited capacity to deliver on its promises. Instead, transparency and secrecy must coexist, each fulfilling specific roles that depend on the context and audience. Tailored solutions, such as counterfactual explanations and cryptographic techniques, offer pathways to reconciling these interests. Embedding transparency throughout the AI lifecycle, from design to deployment, is vital for fostering trust and safeguarding public welfare.

The study concludes that navigating the trade-offs between the interests protected by transparency and secrecy requires collaboration among policymakers, technologists, and stakeholders. By addressing these challenges holistically, legal frameworks can establish meaningful transparency without compromising legitimate confidentiality concerns. Ultimately, this multifaceted approach is critical to ensuring responsible AI use and maintaining public trust in AI-driven decision-making. This challenge, however, has yet to be overcome.

This research was supervised by Prof. Sofia Ranchordás, Prof. Jeanne Mifsud Bonnici, and Dr. Nynke Vellinga. Varošanec obtained her PhD on 19 December 2024 at the University of Groningen.

Ida Varošanec
Transparency and Secrecy as a Spectrum in AI-assisted Trustworthy Public Decision-making: EU Legislator Walking a Tightrope


The dissertation is available via open access.

About the author(s)