Date: 2021
Type: Video
Deployment: fundamental rights and biases in AI
GASSER, Urs; POIARES PESSOA MADURO, Luis Miguel; WACHTER, Sandra - Moderator(s): DE COCK BUNING, Madeleine
The State of the Union Conference, 2021, Artificial Intelligence
GASSER, Urs, POIARES PESSOA MADURO, Luis Miguel, WACHTER, Sandra, moderated by DE COCK BUNING, Madeleine, Deployment: fundamental rights and biases in AI, The State of the Union Conference, 2021, Artificial Intelligence - https://hdl.handle.net/1814/71448
Retrieved from Cadmus, EUI Research Repository
Given the major impact that AI has on society, fundamental rights, including human dignity and privacy protection, are increasingly central to its deployment. Public and private organisations that use AI systems play a key role in ensuring that the systems they use and the products and services they offer meet appropriate standards of transparency, non-discrimination and fairness.

The recently proposed EU legal framework on AI follows a risk-based approach. It defines four future-proof risk levels: unacceptable risk, high risk, limited risk and minimal risk. Unacceptable risks are posed by systems that form a clear threat to the safety, livelihoods and rights of people (e.g. social scoring by governments); such systems are banned. High-risk systems are those that could put the life and health of citizens at risk, including AI used in critical infrastructures (e.g. transport); access to education (e.g. scoring of exams); employment (e.g. CV-sorting software for recruitment); essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan); law enforcement (e.g. evaluation of evidence); and migration, asylum and border control management (e.g. verification of the authenticity of travel documents, remote biometric identification). High-risk AI systems will be subject to strict obligations before they can be put on the market, e.g. adequate risk assessment and mitigation systems; high-quality datasets feeding the system to minimise discriminatory outcomes; logging of activity to ensure traceability of results; and appropriate human oversight. Limited-risk systems, such as chatbots, carry specific transparency obligations: users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back. Minimal-risk AI systems, the vast majority, may be used freely (e.g. AI-enabled video games or spam filters); the Regulation does not intervene, as these systems represent no risk to citizens' rights or safety.

Where are the biggest vulnerabilities for citizens when it comes to AI and fundamental rights? Will the recent EC Proposal for a Regulation on Artificial Intelligence be able to (re)build consumers' trust? How can we build a flexible transnational regulatory framework respectful of fundamental rights and public values? While different countries increasingly look to regulation as a tool to ensure trustworthy AI and to shape its deployment by stakeholders, what transnational effort is required to steer global collaboration towards responsible uses of AI whilst avoiding competitive disadvantage?
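Purely as a reading aid, the tiered structure summarised above can be sketched as a small data structure. This is a minimal, illustrative Python sketch of the abstract's own summary; the names and mapping are this record's paraphrase, not the Regulation's legal text or any official schema.

```python
# Illustrative only: the four risk tiers as described in the abstract above.
# This mirrors the summary text, not the Regulation itself.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTier:
    name: str
    examples: tuple  # example systems named in the abstract
    treatment: str   # regulatory treatment per the abstract

TIERS = (
    RiskTier(
        name="unacceptable",
        examples=("social scoring by governments",),
        treatment="banned: clear threat to safety, livelihoods and rights",
    ),
    RiskTier(
        name="high",
        examples=(
            "critical infrastructure (transport)",
            "exam scoring",
            "CV-sorting for recruitment",
            "credit scoring",
            "evaluation of evidence",
            "travel document verification / remote biometric identification",
        ),
        treatment=(
            "strict pre-market obligations: risk assessment and mitigation, "
            "high-quality datasets, activity logging, human oversight"
        ),
    ),
    RiskTier(
        name="limited",
        examples=("chatbots",),
        treatment="transparency: users must know they interact with a machine",
    ),
    RiskTier(
        name="minimal",
        examples=("AI-enabled video games", "spam filters"),
        treatment="free use: the Regulation does not intervene",
    ),
)

for tier in TIERS:
    print(f"{tier.name}: {tier.treatment}")
```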
Additional information:
This contribution was delivered online on 6 May 2021 on the occasion of the hybrid 2021 edition of the EUI State of the Union on 'Europe in a Changing World'.
Cadmus permanent link: https://hdl.handle.net/1814/71448
External link: https://www.youtube.com/watch?v=zDuk0WBuoJI&t=15385s
https://stateoftheunion.eui.eu/geopolitics/#A1GEOPOLITICS
Series/Number: The State of the Union Conference; 2021; Artificial Intelligence
Publisher: European University Institute