Contribution to book
Open Access

When the algorithm is not fully reliable : the collaboration between technology and humans in the fight against hate speech

Files
When_algorithm_2022.pdf (160.31 KB)
Full-text in Open Access, Published version
License
Attribution-NonCommercial-NoDerivatives 4.0 International
Access Rights
Full-text via DOI
Citation
Hans-W. MICKLITZ, Oreste POLLICINO, Ammon REICHMAN, Andrea SIMONCINI, Giovanni SARTOR and Giovanni DE GREGORIO (eds), Constitutional challenges in the algorithmic society, Cambridge : Cambridge University Press, 2022, pp. 298-314
CASAROSA, Federica, When the algorithm is not fully reliable : the collaboration between technology and humans in the fight against hate speech, in Hans-W. MICKLITZ, Oreste POLLICINO, Ammon REICHMAN, Andrea SIMONCINI, Giovanni SARTOR and Giovanni DE GREGORIO (eds), Constitutional challenges in the algorithmic society, Cambridge : Cambridge University Press, 2022, pp. 298-314 - https://hdl.handle.net/1814/77850
Abstract
Given their ability to select among available content, algorithms are used to automatically identify or flag potentially illegal content, in particular hate speech. After the European Commission's adoption of the Code of conduct on countering illegal hate speech online on 31 May 2016, IT companies have relied heavily on algorithms that can skim the hosted content. Such intervention, however, cannot be completed without the collaboration of moderators in charge of verifying doubtful content. The interplay between technological and human control raises several questions. On the technological side, the most important issues concern the discretion of private companies in defining illegal content, the level of transparency in translating legal concepts into code, and the procedural guarantees available to challenge automatic decisions. On the human side, the most important issues concern the selection procedure for identifying the so-called ‘trusted flaggers’ who provide the final decision on the illegal nature of online content, the existence of an accreditation or verification process to evaluate the quality of the notices such trusted flaggers provide, and the allocation of liability between the online intermediary and the trusted flagger in case of mistake.
Additional Information
Published online: 01 November 2021