Deployment : fundamental rights and biases in AI

dc.contributor.authorGASSER, Urs
dc.contributor.authorPOIARES PESSOA MADURO, Luis Miguel
dc.contributor.authorWACHTER, Sandra
dc.contributor.editorDE COCK BUNING, Madeleine
dc.date.accessioned2021-05-28T14:52:37Z
dc.date.available2021-05-28T14:52:37Z
dc.date.issued2021
dc.descriptionThis contribution was delivered online on 6 May 2021 on the occasion of the hybrid 2021 edition of the EUI State of the Union on 'Europe in a Changing World'.
dc.description.abstractGiven the major impact that AI has on society, fundamental rights, including human dignity and privacy protection, are increasingly central to its deployment. Public and private organisations that use AI systems play a key role in ensuring that the systems they use and the products and services they offer meet appropriate standards of transparency, non-discrimination and fairness. The recently proposed EU legal framework on AI follows a risk-based approach, defining four future-proof risk levels: unacceptable risk, high risk, limited risk and minimal risk. Unacceptable risks are posed by systems that constitute a clear threat to the safety, livelihoods and rights of people (e.g. social scoring by governments); such systems are banned. High-risk systems include those that form part of critical infrastructures (e.g. transport) and could put the life and health of citizens at risk, as well as those used in access to education (e.g. scoring of exams), employment (e.g. CV-sorting software for recruitment), essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan), law enforcement (e.g. evaluation of evidence), and migration, asylum and border control management (e.g. verification of the authenticity of travel documents and remote biometric identification). High-risk AI systems will be subject to strict obligations before they can be put on the market, such as adequate risk assessment and mitigation systems, high-quality datasets feeding the system to minimise discriminatory outcomes, logging of activity to ensure traceability of results, and appropriate human oversight. Limited-risk systems such as chatbots are subject to specific transparency obligations: users should be aware that they are interacting with a machine so that they can take an informed decision to continue or step back. AI systems posing minimal risk, the vast majority, are in free use (e.g. AI-enabled video games or spam filters); the Regulation does not intervene, as these systems represent no risk to citizens' rights or safety. Where are the biggest vulnerabilities for citizens when it comes to AI and fundamental rights? Will this recent EC Proposal for a Regulation on Artificial Intelligence be able to (re)build consumer trust? How can we build a flexible transnational regulatory framework respectful of fundamental rights and public values? While different countries are increasingly looking to the adoption of regulation to ensure trustworthy AI as a tool to shape AI deployment by stakeholders, what transnational effort is required to steer global collaboration towards responsible uses of AI whilst avoiding competitive disadvantage?en
dc.identifier.urihttps://hdl.handle.net/1814/71448
dc.language.isoen
dc.orcid.uploadTRUE
dc.publisherEuropean University Institute
dc.relation.ispartofseriesThe State of the Union Conferenceen
dc.relation.ispartofseries2021en
dc.relation.ispartofseriesArtificial Intelligenceen
dc.relation.urihttps://www.youtube.com/watch?v=zDuk0WBuoJI&t=15385s
dc.relation.urihttps://stateoftheunion.eui.eu/geopolitics/#A1GEOPOLITICS
dc.rightsinfo:eu-repo/semantics/openAccess
dc.titleDeployment : fundamental rights and biases in AIen
dc.typeVideoen
dspace.entity.typePublication
eui.subscribe.skiptrue
person.identifier.orcid0000-0002-9596-1669
person.identifier.other43723
person.identifier.other27348
relation.isAuthorOfPublication40b47b52-e6da-4217-ab50-c68d6bd410d1
relation.isAuthorOfPublication.latestForDiscovery40b47b52-e6da-4217-ab50-c68d6bd410d1
relation.isEditorOfPublication443d748e-edcb-4291-90c2-7c49d46ea9dc
relation.isEditorOfPublication.latestForDiscovery443d748e-edcb-4291-90c2-7c49d46ea9dc