International: Artificial intelligence in the administration of justice

In short

In the not too distant past, many believed that artificial intelligence (AI) or machine learning (ML) would not significantly change the practice of law. The legal profession was considered – by its very nature – to require specialized skills and nuanced judgment that only humans could provide, and would therefore be immune to the disruptive changes brought about by digital transformation. However, the application of ML technology in the legal industry is now increasingly mainstream, particularly as a tool to save lawyers time and provide richer analysis of increasingly voluminous datasets to facilitate legal decision-making in court systems around the world.


In more detail

One of the main areas of application of ML in justice systems is “predictive justice”: the use of ML algorithms that perform a probabilistic analysis of a particular legal dispute using case law precedents. To function properly, these systems must rely on huge databases of previous court decisions, which must be translated into a standardized representation from which recurring patterns can be extracted. These patterns ultimately allow the machine learning software to generate its prediction.
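To make that pipeline concrete, the sketch below shows – in Python, and purely by way of illustration – what such a system might look like, assuming the open-source scikit-learn library. The case summaries, outcome labels and choice of model are invented for the example and do not describe any deployed tool.

# Illustrative sketch of a toy "predictive justice" pipeline.
# All data and the model choice are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Stand-in for a database of previous court decisions: each summary is a
# standardized description of a dispute, labelled with the winning party.
past_decisions = [
    "tenant withheld rent citing uninhabitable living conditions",
    "landlord failed to return deposit without itemized deductions",
    "tenant caused property damage beyond normal wear and tear",
    "tenant abandoned the lease before term without notice",
]
outcomes = ["tenant", "tenant", "landlord", "landlord"]

# Step 1: translate free-text decisions into a standardized numerical
# representation (here, simple TF-IDF features).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_decisions)

# Step 2: learn patterns associating case features with outcomes.
model = LogisticRegression()
model.fit(X, outcomes)

# Step 3: produce a probabilistic analysis of a new dispute.
new_case = ["landlord kept the deposit with no itemized deductions"]
probabilities = model.predict_proba(vectorizer.transform(new_case))[0]
for party, p in zip(model.classes_, probabilities):
    print(f"P(outcome favours {party}) = {p:.2f}")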

Does this technology mean that trials are completed at the speed of light, that lawyers can know in advance whether or not to take legal action, that courts decide a case immediately? Well, there is still a long way to go, and we must also weigh the inherent risks of using these technological tools. For example, the data used to train an ML system can embed biases and entrench stereotypes and inequalities, which risk being treated as validated simply because the AI reproduces them over and over. This could also make it harder to establish new precedents and develop case law that departs from historical patterns.
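That feedback loop can itself be illustrated with a minimal sketch: given a hypothetical set of risk predictions tagged by group, we can compare how often each group is flagged. Everything here – the data, the group labels and the 0.2 tolerance – is an assumption made for the example, not a legal or statistical standard.

# Illustrative sketch of one simple bias check: do a model's "high risk"
# predictions fall disproportionately on one group?
from collections import defaultdict

# Hypothetical (group, predicted_high_risk) pairs from an ML tool.
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [high-risk count, total]
for group, high_risk in predictions:
    counts[group][0] += int(high_risk)
    counts[group][1] += 1

rates = {group: flagged / total for group, (flagged, total) in counts.items()}
print({group: round(rate, 2) for group, rate in rates.items()})

# A large gap between groups is a warning sign that the training data may
# have encoded – and the model is now re-validating – a historical disparity.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # illustrative tolerance, not a legal standard
    print(f"Disparity of {gap:.2f} exceeds tolerance; audit the training data.")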

To assess the opportunities and challenges brought by predictive justice systems using ML tools, it is instructive to examine examples of case law, as history is often a guide to the future.

Machine Learning in Justice Systems

“Predictive justice” first came to prominence in the United States in 2013 in State v. Loomis, where it was used by the court in sentencing. In that case, Mr. Loomis, a US citizen, was charged with driving a car used in a drive-by shooting, receiving stolen property and resisting arrest. During the trial, the circuit court was assisted in its sentencing decision by a predictive machine learning tool (the COMPAS risk assessment), and the end result was that the judge imposed a custodial sentence. The judge was reportedly persuaded by the fact that the tool had indicated a high probability that the defendant would reoffend in the same way.

On appeal, the Wisconsin Supreme Court upheld the use of the software, finding that the judge would have reached the same result with or without it. The decision included the conclusion that the risk assessment provided by the AI software, while not determinative in itself, can be used as a tool to improve a judge’s assessment, alongside other evidence, when deciding on the appropriate sentence for a defendant.

In essence, the Wisconsin Supreme Court recognized the importance of the role of the judge, stating that this type of machine learning software would not replace their role, but could be used to assist them. As one can imagine, this case opened the door to a new way of dispensing justice.

Indeed, fast forward to today, and news from Shanghai tells the story of the first “robot” created to analyze case files and indict defendants based on a verbal description of the case. AI researchers trained the system on a huge volume of cases so that it can identify various types of crimes (e.g. fraud, theft, gambling) with a claimed accuracy of 97%.
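At its core, this is a multi-class text classification task. The sketch below shows a toy version in the same spirit, again assuming scikit-learn; the training examples and charge labels are invented, and the real system’s design is not public at this level of detail.

# Illustrative sketch of a toy charge classifier: verbal case
# description in, most likely charge out. All data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = [
    "suspect sold fake investment products to elderly victims",
    "suspect forged contracts to obtain bank loans",
    "suspect took a bicycle from a locked garage at night",
    "suspect stole a wallet on a crowded metro train",
    "suspect operated an unlicensed betting ring online",
    "suspect ran an underground casino in a rented flat",
]
charges = ["fraud", "fraud", "theft", "theft", "gambling", "gambling"]

# A pipeline mapping a verbal case description to the most likely charge.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(descriptions, charges)

print(classifier.predict(["suspect pickpocketed a wallet on a train"]))
# -> ['theft'] on this toy data; real accuracy claims require real corpora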

AI-based predictions used to aid courts are becoming more prevalent and can raise significant concerns (including bias and transparency). Several regulatory authorities are cooperating to advance a set of rules, principles and guidance to regulate AI platforms in justice systems and more generally.

For example, in Europe, a significant step towards digital innovation in judicial systems has been taken with the creation of the European Commission for the Efficiency of Justice (CEPEJ), which published the “European Ethical Charter on the use of artificial intelligence in judicial systems and their environment” (the “Charter”), one of the first regulatory documents on AI. The Charter provides a set of principles to be applied by legislators, legal professionals and policy makers when working with AI/ML tools, aimed at ensuring that the use of AI in justice systems is compatible with fundamental rights, including those of the European Convention on Human Rights and the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

Recently, the CEPEJ adopted its 2022-2025 Action Plan, “Digitalisation for a better justice”, identifying a three-step path to ensure the fair use of AI in courts, according to the visual representation below:

Source: EUROPEAN COMMISSION FOR THE EFFICIENCY OF JUSTICE (CEPEJ) – Revised roadmap to ensure appropriate follow-up to the CEPEJ Ethical Charter on the use of artificial intelligence in judicial systems and their environment.

The CEPEJ’s commitment does not stop there. Indeed, the table below gives an overview of the development of IT tools in the judicial systems of EU Member States (civil and criminal) and the acceleration of the use of information technologies in EU courts.


Source: Dynamic database of European judicial systems.

More generally, the European Commission is currently focusing on developing a set of provisions to regulate AI systems, which are outlined in a draft AI regulation (the “Regulation”) published in 2021. The Regulation proposes harmonized rules for applications of AI systems. It follows a proportionate, risk-based approach that differentiates between prohibited, high-risk, limited-risk and minimal-risk uses of AI systems: regulatory intervention increases with an algorithmic system’s potential to cause harm. To learn more, read our alert New draft rules on the use of artificial intelligence.

AI systems used for law enforcement or in the administration of justice are defined as high-risk AI systems under the Regulation. Note that the use of real-time biometric identification systems in public places by law enforcement is (subject to certain exceptions) prohibited. High-risk AI systems are subject to requirements that include ensuring the quality of the datasets used to train them, applying human oversight, creating records to enable compliance checks, and providing relevant information to users. Various stakeholders, including providers, importers, distributors and users of AI systems, are subject to individual requirements, particularly with regard to the compliance of AI systems with the requirements of the Regulation and the CE marking of these systems to indicate that compliance.
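The tiered structure lends itself to a compact summary. The sketch below arranges the risk tiers and the high-risk obligations listed above as a simple lookup in Python; the tier names and obligation lists paraphrase the draft text and are offered as a reading aid, not a compliance tool.

# Illustrative reading aid: the draft Regulation's risk tiers and the
# high-risk obligations summarized above, arranged as a simple lookup.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g. real-time biometric ID in public places
    HIGH = "high-risk"          # e.g. law enforcement, administration of justice
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

OBLIGATIONS = {
    RiskTier.PROHIBITED: [
        "may not be placed on the market (subject to narrow exceptions)",
    ],
    RiskTier.HIGH: [
        "ensure the quality of datasets used to train the system",
        "apply human oversight",
        "create records enabling compliance checks",
        "provide relevant information to users",
        "affix CE marking to indicate conformity",
    ],
    RiskTier.LIMITED: [
        "meet transparency duties (e.g. disclose that a user is interacting with AI)",
    ],
    RiskTier.MINIMAL: [
        "no mandatory obligations; voluntary codes of conduct encouraged",
    ],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the draft Regulation's obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))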

The Regulation still has a long way to go before it is finally approved and becomes binding on member states, but it is already a step forward in AI regulation – not just because AI can be used in the administration of justice, but also because it can have a profound impact on the way we work, communicate, play and live in the digital age.

Camille Ambrosino contributed to the preparation of this editorial.

This article originally appeared in the January 2022 edition of LegalBytes.
