What is meant by algorithmic justice, and how does it differ from predictive justice?
The use of algorithms in the field of justice is increasing. The term algorithmic justice describes concretely what is being done: operating, assisting, or informing part of the justice system by implementing algorithms. The term predictive, by contrast, distracts from what these tools actually do in practice, namely assessing and estimating penalties, compensation, or the risk of recidivism, not foretelling them. We must remain vigilant not to view AI as an oracle that would satisfy a demand for accuracy, make the facts objective, or settle their relation to the truth.
We published a report because we believe a clear account is needed of what is happening, and of the opportunities and risks of applying algorithms to justice. The mechanisms at work when this technology is applied must be made explicit in order to inform public decision-makers and citizens. We hope the report will serve as a basis for dialogue between researchers and technology designers, and between citizens and public decision-makers.
Is administrative justice also concerned?
Of course! Since the Law for a Digital Republic of October 7, 2016, local authorities with more than 3,500 inhabitants must open up their data (open data). This will help train more and better algorithms and thus develop algorithmic justice. Moreover, under the same law, the source code of the algorithms used by the administration must be made available. Finally, individual decisions made on the basis of algorithmic processing must carry an explicit mention informing the citizen.
What are the good aspects?
It encourages justice actors to reflect more on their decisions and their assessment of cases. These tools can also bring more transparency to the course of justice. Combined with open data and open science policies, AI, whose strength lies in finding correlations in very large and complex data sets, can assist lawyers with delicate tasks such as searching for information in a large body of documents.
What are the risks and biases?
The risks are real, however. The PredPol tool, presented in our report, identified areas of Los Angeles to be patrolled because the algorithm rated their crime risk as high. Guided by such tools, police over-patrol and stigmatize populations such as African Americans or Hispanics, turning a statistical trend into a systematic practice. Another danger lies in the use of these tools by legal actors unfamiliar with technology and science. Judges, police officers, and even lawyers will have to be trained in order to master these tools and to be able to raise the alarm when there is a risk of algorithmic bias or discrimination.
Note that the use of AI in justice is classified as high risk in the European Commission's recent proposal for a regulation on artificial intelligence. AI systems intended to assist a judicial authority in researching and interpreting the facts and the law must undergo demanding compliance assessments before they are deployed, and monitoring throughout their use.
This article is part of the dossier "Public Policy: Are Algorithms Gaining Power?"