Artificial Intelligence (AI) holds great promise for the public sector and for people – but it also carries risks for people’s rights and wellbeing, as many cases of algorithmic discrimination, among other harms, have shown. As governments increasingly look to AI as a tool to make their work more effective and efficient, it is therefore important to be mindful of these risks and to create mechanisms of accountability. In this session, you will hear the case for why transparency should be a cornerstone of public sector AI use so that governments remain accountable, and how public AI registers can help achieve that. You will also hear from those pioneering the first initiatives in this area.
Maximilian Gahntz
Fellow/ Mercator Fellowship on International Affairs
Maximilian Gahntz is passionate about ensuring that technology is developed and deployed in an inclusive and equitable way. In his work, he explores what a regulatory and governance framework for artificial intelligence (AI) should look like – one that ensures AI is used in accordance with human and civil rights and to the benefit of all of society. As a Mercator Fellow, he worked at the European Commission, contributing to its draft AI Act, and at the Mozilla Foundation, where he worked on AI, platform accountability, and data governance in the EU and the U.S.
He holds Master’s degrees in Public Administration and Public Policy from Columbia University and Sciences Po Paris as well as a Bachelor’s degree in Politics and Public Administration from the University of Konstanz. Before his graduate studies, Max worked as a management consultant for public sector organizations and social service providers.
Linda van de Fliert
Project Lead for Public Control of Algorithms/ City of Amsterdam
Linda van de Fliert is part of the Chief Technology Office of the City of Amsterdam, working towards the responsible and trustworthy use of technology in Amsterdam. Linda leads several projects on fair and transparent AI. She is developing practical tools to operationalize and implement the somewhat abstract concepts of AI ethics in the City’s use of AI and algorithms – for example, through procurement conditions and a public AI register. Linda is also working with other Dutch and international governments to develop these and other tools into global standards for responsible AI.
Matthias Spielkamp
Executive Director/ AlgorithmWatch
Matthias Spielkamp is co-founder and executive director of AlgorithmWatch (Theodor Heuss Medal 2018, Grimme Online Award nominee 2019). He has testified on automation and AI before committees of the Council of Europe, the European Parliament, the German Bundestag, and other institutions, and is a member of the Global Partnership on AI (GPAI). Matthias serves on the governing board of the German section of Reporters Without Borders, the advisory councils of Stiftung Warentest and the Whistleblower Network, and the Expert Committee on Communication/Information of Germany’s UNESCO Commission. He has been a fellow of ZEIT Stiftung, Stiftung Mercator, and the American Council on Germany. Matthias is editor of the Automating Society reports and has written and edited books on the automation of society, digital journalism, and Internet governance. He holds master’s degrees in Journalism from the University of Colorado Boulder and in Philosophy from the Free University of Berlin.