dc.contributor.author | KOIVISTO, Ida | |
dc.date.accessioned | 2020-06-04T13:15:55Z | |
dc.date.available | 2020-06-04T13:15:55Z | |
dc.date.issued | 2020 | |
dc.identifier.issn | 1831-4066 | |
dc.identifier.uri | https://hdl.handle.net/1814/67272 | |
dc.description.abstract | The normative attractiveness of transparency is beyond compare. No wonder it is one of the main principles in the EU’s General Data Protection Regulation. It also features in a majority of AI ethics codes. Transparency is called for because it is assumed that it will solve the so-called ‘black box problem’ (uncertainty about how inputs translate into outputs in algorithmic systems) and by so doing legitimize automated decision-making (computer-based decision-making without human influence; ADM). In this paper, the legitimizing effect of transparency in ADM is discussed. I argue that transparency cannot deliver in its quest to resolve the black box problem. The main claim is that transparency is inherently performative in nature and cannot but be so. This performativity goes against the promise of unmediated visibility vested in transparency. As demonstrated, when transparency is brought into the context of ADM, its hidden functioning logic becomes visible in a new way. | en |
dc.format.mimetype | application/pdf | en |
dc.language.iso | en | en |
dc.publisher | European University Institute | en |
dc.relation.ispartofseries | EUI AEL | en |
dc.relation.ispartofseries | 2020/01 | en |
dc.rights | info:eu-repo/semantics/openAccess | en |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.subject | Transparency | en |
dc.subject | Automated decision-making | en |
dc.subject | The black box problem | en |
dc.subject | AI ethics | en |
dc.subject | Data protection | en |
dc.title | Thinking inside the box : the promise and boundaries of transparency in automated decision-making | en |
dc.type | Working Paper | en |
dc.rights.license | Attribution 4.0 International | * |