Prominent search engines yield many millions of hits *1 when one searches for the keywords ‘robot judge’. While the discussion around AI technologies and their impact on society is increasingly focused on questions of whether and how the use of AI should be regulated and what kind of legal boundaries we need for safeguarding fundamental rights in its context, the fact is that algorithmic systems are already used in the realms of litigation and judicial systems. The article presents an attempt to ascertain how the use of AI in judicial proceedings affects the cross-border recognition of judgements and, thereby, influences society in general. Special attention is given to the pivotal matter of trust in judicial decisions of other countries. More precisely, can the use of AI reshape or otherwise influence the current procedure of cross‑border recognition of judgements and the judicial duties of a judge in that procedure, at least in the case of Estonia?
Keywords:
AI in judiciary; cross-border litigation; recognition of judgements; international private law
1. Introduction
From a Web search for the keywords ‘robot judge’, the search engine reports finding 58,900,000 results. *2 This comes as little surprise, as international entities, states, and non-governmental organisations alike follow the call to discuss ethical, legal, and fundamental-rights issues related to the use of artificial intelligence (AI) in numerous sectors. The European Union (EU) is taking steps to amend its legal framework *3 with guiding principles for a ‘human-centric’ approach to AI that entails respect for European values and principles. Likewise, several states have adopted AI strategies *4 and legal acts, and courts have made their first decisions on AI *5 , with the case law developing by the day. *6 In another part of the landscape, scientists are calling for suspension of the training of powerful AI-based systems, arguing that ‘AI systems with human-competitive intelligence can pose profound risks to society and humanity’ and that, therefore, governments should step in. *7
The discussion coalescing around AI technologies and their impact on society is increasingly focused on the questions of whether – and, if so, how – the use of AI should be regulated and what kind of legal boundaries we would need for safeguarding fundamental rights. We ought to recognise, however, that algorithmic systems are already with us: many arenas, not least litigation and judicial systems, have been employing them for more than a decade. *8
Back in 1996, Richard Susskind explained several possibilities connected with disruptive technologies in justice systems. *9 In the field of litigation and operation of judicial systems, smart algorithmic systems have been in use for 10 years or more, mainly for crime prevention and predictive policing but also for processing of minor offences all over the world. *10 In the civil-litigation domain, semi-automatic algorithm-based procedures have been applied for small-claims processes in Estonia since 2006. *11
How AI, a rapidly advancing area of algorithmics, may affect judicial processes and the work of a judge in general has been discussed by several authors. In 2018, Tania Sourdin argued that the role of a judge is in transition; some judicial work will be conducted or assisted by technological tools, but the core function of the judge remains – partly because society is not willing to accept machines deciding over human life, partly because judicial decision-making is so complex even in its essence alone. *12 When reporting on their study of the judgements of the European Court of Human Rights, Masha Medvedeva, Michel Vols, and Martijn Wieling considered how predictive analytical tools may affect the parties to the procedure. They concluded that these analysis tools achieved accuracy levels of 75%. While the systems may be improved, one must consider that these tools are not able to assess changes in the society using them. *13
In Estonia, academic discussion of the need to regulate the use of AI took a more serious turn in 2018, hand in hand with the drafting of the AI Strategy. In the report of the country’s AI expert group, based on legal research conducted by Tallinn University of Technology, several legislative changes were suggested. *14 The Ministry of Justice drafted a legislative-intent document for the regulation of AI (referred to as the Krati VTK) *15 , and preparation of a draft law to amend the administrative‑procedure law followed. *16 The aim of the Krati VTK was to establish regulation to protect fundamental rights and ensure transparency in the use of AI and automated decision-making in public administration.
Discussion of that draft law has stalled – it has not been adopted by the Parliament of Estonia, the Riigikogu. Nevertheless, several articles by Estonian authors address automated decision‑making in public administration, and some relevant case law from the Supreme Court has emerged. *17 In a few judgements, the latter has expressed a stance on the use of automated case-management systems etc. and the influence of these systems on access to information, but it has not yet directly tackled the use of AI in public administration.
Writings on jurisprudence do address this matter, however. For instance, Kätliin Lember studied the use of AI in public administration – namely, in the preparation of administrative decisions and with regard to state-liability questions. *18 Furthermore, Supreme Court judge Ivo Pilving explained in his article the possibilities for redress in the event of failures of discretion when public authorities issue automated administrative decisions, alongside the issues of state liability that arise. He concluded that the Estonian legislative framework of state liability is well aligned with the challenges that automated decision-making in public administration may present. *19
Private-law perspectives, especially in relation to liability issues associated with using AI, have been discussed by the European Parliament. As a result, recommendations to the European Commission suggest ways of amending civil law. *20
However, the use of AI in concrete judicial procedures has not received much attention from academics. This article is an effort to start filling the gap by investigating some specific aspects of everyday judicial co-operation and asking whether the use of AI in the judicial process may affect the trustworthiness of judicial decisions and cross-border recognition of judgements. In particular, the discussion examines what procedural-law tools exist and how a judge can use them to assess the risks of decisions made or suggested by AI. Specifically, are the tools of judicial co-operation sufficient for collection of evidence and information in the recognition procedure? Are there ways to overcome such risks?
To begin answering these questions, the article maps the principles for trustworthy AI as recognised in Europe and the legal requirements characterising a fair trial. Both are elements that a judge must vet in the recognition process. The article also gives a few examples of the use of AI, from several countries whose judgements might be subject to recognition in Estonia. Then, the piece analyses the relevant procedural tools from an Estonian standpoint and illustrates the legal risks with the aid of several illustrative cases of AI functioning in judicial proceedings. While Estonia serves as an example, its relevance is broader: since the legal requirements for judicial co-operation are laid down in EU law, in numerous regulations on cross-border judicial co-operation, the conclusions may be of use, by either comparison or analogy, for other EU Member States as well.
2. Principles and future regulations related to AI in the European Union
In 2018, the Council of Europe adopted its ethics charter for AI in the judiciary *21 (hereinafter ‘the Charter’), which sets forth ethics principles related to the use of AI in judicial systems and processes. It is considered the first European text of its kind. The Charter provides a framework of principles and concrete examples to guide policymakers, legislators, and justice professionals in developing AI for national judicial processes.
However, the main principles to be followed in practice for deployment are still vague and ambiguous, starting with the definition of AI itself. There are many ways to define AI, and scientists’ understandings of it vary and are constantly changing. One could jokingly define AI as ‘something that computers are not able to do so far’. *22 However, rapid progress in the development of large language models *23 has led to concerns that all tasks consisting of writing structured text will be automated by computer systems. It has been said also that today’s vague definitions of AI permit attaching that label to any computer-based decision‑support system with analysis or models at its core. *24 To address such challenges, the European Commission appointed a group of experts (the High-Level Expert Group on Artificial Intelligence, or HLEG) to advise on its artificial‑intelligence strategy. *25 The HLEG has articulated its recommendations for EU ethics guidelines for trustworthy and human-centric AI (hereinafter ‘the Guidelines’) and also for AI‑related regulation. Among its outputs is a definition of AI offered for use in EU legislation. *26
For the EU, the key principles linked to AI were emphasised already in 2018’s strategic communication on artificial intelligence for Europe *27 , and the intended human-centric approach and ethics standards are detailed specifically in the AI ethics guidelines. *28 The essence of the human-centricity objective here is for any use of AI systems to be in the service of humanity and the common good, with the goal of improving human welfare and freedom, but it also means that the human being enjoys a unique moral status of primacy in all aspects of society. Every AI system should respect human autonomy, prevent harm to human beings, be fair, and be explicable. *29 It was not long before these principles started becoming part of hard law: the proposal for a European Parliament regulation laying down harmonised rules on artificial intelligence and amending certain legislative acts will soon bear fruit in the adoption of the EU’s AI Act. *30
The Charter underscores five main principles to be followed in the use of AI in judicial systems. These are the principle of respect for fundamental rights; a principle of non‑discrimination; enshrinement of the value of respect for quality and security; the principle of keeping the systems ‘under user control’; and a principle of ‘transparency, impartiality, and fairness’, meaning that the data-processing methods must be made accessible and understandable. External audits of the development and deployment of AI systems are authorised under these principles. *31
According to the Guidelines, three criteria should be met throughout the AI system's life cycle:
(1) it should be lawful and in compliance with the legal framework established;
(2) it should be ethical, ensuring compliance with good ethics and solid fundamental values; and
(3) it should be robust from both a technical and a social perspective; that is, the AI-based system should perform in a safe, secure, and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts.
The Guidelines stipulate seven key requirements to be met throughout the development, deployment, and use of AI systems:
(1) human agency and oversight
(2) technical robustness and safety
(3) privacy and appropriate data governance
(4) transparency
(5) diversity, non-discrimination, and fairness
(6) environmental and societal well-being
(7) accountability
In light of the subject matter of this article, the conditions of transparency, fairness, and non-discrimination, along with the accountability criterion, are the most relevant of these requirements. These are also requirements or principles for a fair trial according to the ECHR’s Article 6. *32 Although the Guidelines provide definitions for these requirements, those definitions’ origins lie in the Charter and ECHR. *33 From the perspective of the Guidelines, fairness has a substantive and a procedural dimension – meaning, on one hand, equal and just distribution of benefits and costs alike and, on the other, ensuring non-discrimination and avoiding stigmatisation. *34 In its Article 2, the Guidelines document cites fundamental rights as the very basis for trustworthy AI. As all governmental power should be legally authorised and limited by law, any AI systems acting as part of the governmental or judicial power must, in their turn, respect justice and the rule of law, and they must be constructed so as to maintain and foster democratic processes. *35 Therefore, AI systems used in the judiciary must incorporate an inherent commitment to guaranteeing compliance with both the principle of the rule of law and mandatory legal provisions, so as to ensure due process and equality before the law.
The Guidelines define the principle of explicability as an element crucial for accountability, one whereby trust in AI systems can be validly maintained. For this, the development and the deployment of (especially high-risk) AI need to be transparent, the capabilities and purpose of any AI system must be openly communicated, and decisions – to the greatest extent possible – should be explainable to anyone directly or indirectly affected.
While explainability is a key property sought from the decisions made by ethical AI systems, the high speed of technology development means that the prerequisite competencies are not widely available. This challenge deserves further attention.
The requirement of accountability complements all of the other requirements, and the principle of accountability has to be honoured throughout the life cycle of the relevant AI systems. Therefore, in the scheme foreseen per the AI Act proposal, these systems must be audited regularly and independently, negative impacts should be reported and assessed methodically, in a principled manner, and adequate remedies should be applied in response to any negative impacts identified.
With more specific regard to the fundamental-rights issue, the explanatory memorandum on the proposal for that act addresses it directly. Its Section 3.5 presents justification for restrictions on the freedom to conduct business and the freedom of art and science to safeguard public interests such as the protection of other fundamental rights in situations wherein high-risk AI technology is developed and used.
Employing a risk-based approach, the proposal classifies the risk level on the basis of the function performed by the AI system and the specific purpose for which that system is used. According to Recital 32 and Article 54 of the proposal, AI systems used in the judiciary are to be regarded as high-risk AI systems. *36
In 2020, the European Commission for the Efficiency of Justice (CEPEJ) published a feasibility study for the possible introduction of a mechanism for certifying artificial‑intelligence tools and services in the sphere of justice and the judiciary. *37 The study report emphasises that it is necessary to create a mandatory certification system for all AI systems developed and deployed in a justice system, with certification maintained over the entire life cycle of the AI system by approved bodies. Such certification enables verification of the continuous compliance of AI systems in the judicial sphere, and the report strongly recommends automating compliance monitoring, to make it systematic and more consistent.
As AI systems operate automatically, the rules laid down in Article 22 of the General Data Protection Regulation (GDPR) are similarly relevant. *38 The Estonian Advocate General at the ECJ, Priit Pikamäe, explicitly stated in his opinion in the Schufa Holding AG case (C-634/21) that the term ‘decision’ for purposes of that article’s first paragraph must be interpreted in a broad sense. Any person subject to an automated decision should be given all relevant information on the requirements and criteria applied by the automatic system(s), and the method followed in the automated work should be explained. *39
3. Due process and the standard of a fair trial
Common agreement as to what constitutes a judicial process or due process in Europe is stipulated in the European Convention on Human Rights, or ECHR (‘the Convention’). *40 Also, the common denominators and values connected with due process are set out in Article 2 of the Treaty on European Union *41 and the EU Charter of Fundamental Rights. *42 The principles are interpreted in the case law of the European Court of Human Rights (ECtHR), but that court has not yet dealt with the question of AI in a judicial context. For guidance, we can look also to Article 24 of the Estonian Constitution *43 , where the right to a fair trial is enshrined.
The minimum standard for following due process, also termed the principle of a fair trial, is stipulated in paragraph 1 of Article 6 of the Convention: in the determination of one’s civil rights and obligations or of any criminal charge against a person, everyone is entitled to a fair and public hearing before an independent and impartial tribunal established by law, held within a reasonable time. Judgement shall be pronounced publicly, but the press and public may be excluded from either the entire trial or a portion of it in the interests of morals, public order, or national security, or when the interests of juveniles or the protection of the private life of the parties so require.
Several judgements of the ECtHR serve to explain the principle of a fair trial in detail. For example, in Pönka v. Estonia *44 and in Sakhnovskiy v. Russia *45 , the court stated that, while being heard in person is part of due process, participation via videoconference is not a violation of Article 6 of the Convention. The rulings also underscored that a court has discretion to decide the case by means of written procedure if the procedural rules so allow. One may conclude, then, that participating in the trial in real time and being heard in real time by the judge are evidently an important part of the fairness required of judicial proceedings.
The ECJ, in turn, ruled in cases C-7/21 and C-21/17 that the right to be served (properly) is an element of the right to a defence and stated that the court, in the process of recognition of judgements, has an ex officio duty to ensure that right. Integral to the right to a defence is the possibility of understanding the procedure. *46
The Constitutional Review Chamber of the Supreme Court of Estonia has ruled that access to justice, including participation in the hearing, may depend on the technological avenues offered by the court. *47 Hence, national jurisprudence has held technology to be a part of the judicial proceedings and, therefore, encompassed by the principle of a fair trial.
One may conclude that the right to be heard and to a defence starts with proper service of the claim and entails active, preferably oral participation – which may be remote – in the court hearing.
4. The legal framework for the recognition of foreign judgements
4.1. Pre-conditions for recognition under EU law
At the EU level, recognition of judgements is mostly harmonised. *48 As all legislation regulating the recognition of judicial decisions in the EU shares a similar procedure for that recognition, this article focuses on the regulation specific to recognising foreign judgements on civil and commercial matters (per the Brussels I Regulation *49 ).
Predominantly, judgements of third jurisdictions are recognised under the Hague Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters. *50 In addition, there is a new, advanced version of the convention on recognition of judgements, which has not yet entered into force. *51 Another relevant mechanism is the recognition of arbitration awards within the scope of the New York Convention. *52
Regulations give a broad interpretation to the decisions that fall within their scope. Relevant ECJ case law explains this interpretation in detail. The ECJ ruled in case C‑646/20 that a divorce decree drawn up by a civil registrar that contains a divorce agreement constitutes a judgement within the meaning of the Brussels IIa Regulation. *53 The term also covers a court order or a decision of an administrative body acting as a judicial body. *54 Similarly, an act of a public notary may be deemed a judgement, as the regulation on succession certificates stipulates. *55 Recital 20 of the succession-certificate regulation states directly that the term ‘court’ must be interpreted as having a broad meaning.
The recognition procedure as dealt with by the various regulations and conventions differs in its details, but those details have no significant bearing for purposes of this article, so they are not described here. What is important is that the recognition of a foreign judgement is a formal procedure guided by the principle of mutual trust in judicial decision-making. The set of circumstances deemed to justify refusal to recognise a judgement made in another state is highly restricted. It consists primarily of cases wherein a judgement is issued in serious breach of defendants’ rights or runs counter to public order. As an open legal concept, ordre public is always subject to interpretation by the courts.
Brussels I (recast) sets out the requirements for any refusal to recognise a judgement. According to Article 45’s paragraph 1, upon application by any interested party, the recognition of a judgement shall be refused:
(a) if such recognition is manifestly contrary to public policy (or ordre public) in the Member State addressed; or
(b) where the judgement was given in default of appearance, if the defendant was not served with the document by which the proceedings were initiated or with an equivalent document in sufficient time and in such a way as to enable the defendant to arrange a defence, unless the defendant failed to commence proceedings to challenge the judgement when it was possible for said party to do so. The latter constitutes the minimum standard for due process as stipulated in the Convention’s Article 6 (1).
Recital 29 of the Brussels I regulation, articulating the right to due process and a fair trial, emphasises that the declaration of enforceability should not jeopardise respect for the rights connected with one’s defence. Assessment under the above-mentioned ground related to the right to a defence should consider whether the defendant had an opportunity to arrange a defence, whether the claim was duly served, and whether the judgement was given by default.
Recital 28 of the Brussels I regulation gives guidance relevant to the interpretation of ordre public: where a judgement contains a measure or order not known in the law of the Member State addressed, that measure or order should be adapted to one of equivalent effect. How and by whom the adaptation is to be carried out should be determined by each Member State. In Estonia, the relevant body is the court: the court should investigate the measure at issue that is laid down in the judgement.
The ECJ has ruled that the concept of public order, stipulated as it is by EU legislation, must be interpreted by the ECJ even if the court of the Member State in question is free to determine its content. *56 The recognition of judgements relies on mutual trust and on uniform interpretation of fundamental rights in EU law; therefore, it hinges on a common understanding of ordre public. *57 The court also has clarified the notion of a breach of public order and has specified the corresponding requirements. In the Renault SA v. Maxicar SpA case, the court referred to an action constituting ‘manifest breach of a rule of law regarded as essential in the legal order of the State in which enforcement is sought or of the right recognized as being fundamental within that legal order’. *58
4.2. The Estonian setting
A document is a judgement if, pursuant to Article 2 (a) of the Brussels I regulation, it is a judicial document that decides the case itself. Hence, an automatically generated payment order is a final decision and may be enforced according to §2(1)(1) of Estonia’s Code of Enforcement Procedure. *59 It is an outcome of a judicial procedure. In the recognition procedure, the status of the document (i.e., whether it constitutes a judgement) is proved by the certificate issued in accordance with Article 53 of the Brussels I regulation stating the legal status of the document and presenting a declaration of due process.
In Estonia, the recognition process is conducted in the action-by-petition procedure, and the court has ex officio investigative duties to ascertain the facts and gather the evidence, inclusive of the relevant legislation of the foreign country. Among these duties is that of establishing whether the judgement was issued with the aid of AI and, if so, to what extent. One implication that should follow is the duty of the court to find out the law of the foreign country, which in Estonia is a question of fact to be proved by a party, according to Section 293(1) of the Civil Procedure Act (CPA). *60 Per the CPA’s Section 438(1) and Section 442(8) and also Section 2(1) of the Private International Law Act *61 , the implementation of domestic law and the determination and implementation of the applicable foreign substantive law are fundamentally duties of the courts. At the same time, the Private International Law Act’s Section 4 divides the burden of determining foreign law between the court and the parties (as does the CPA’s §234). The parties have the right to submit evidence pertaining to the content of foreign law, and the court has a corresponding right to require the parties to submit relevant evidence.
Whomever this duty falls to, the private international law instruments remain the same. For third countries, the governing instrument is the 1970 Hague Convention. *62 For the EU, it is the regulation on the collection of evidence. *63 The Hague Convention may be implemented broadly, and it allows for the collection of evidence in arbitration, in addition to court settings. *64
In a nutshell, the court has three avenues for collecting information. The parties may collect and submit evidence, the court may request it from its counterparts in the Member State in question (the central authority or relevant court in the other Member State), or the court may choose to collect the evidence itself. The Supreme Court of Estonia has ruled that it may be the court’s duty to determine the relevance of the foreign law *65 in cases wherein the legislation in question can be found. Maarja Torga has concluded that, while a judge has no obligation to know the law of a foreign state, the ability to find relevant legislation may be a valuable judicial skill, especially where there exists an ex officio duty to do so. *66 For German courts, domestic commentaries recommend resorting to this approach for discovering the potentially important legislation of a foreign country only when that legislation is available in German as the language of record. *67
In conclusion, it would be difficult for the court or parties to understand the essence of the foreign law, as opposed to merely identifying said law. The most suitable method for collecting evidence about the use of AI in the process of issuing a judgement is most likely a request through the central judicial authority. As is elaborated upon below, this information is seldom readily available in clear form from publicly accessible sources.
5. International developments of AI in the sphere of the judiciary
5.1. The Chinese example
To illustrate possible future developments in how AI systems could operate in the world’s various court systems and be involved in litigation, one could look closely at China, which has made extensive efforts to apply AI solutions in courts that cover vast populations. Chinese judgements are all the more important in that they might be subject to recognition in Estonia, especially given that China is one of the biggest trade partners of the European Union (according to Eurostat *68 ).
China’s State Council published the Artificial Intelligence Development Plan, a national strategy white paper, in 2017. *69 It proposed a technology-driven judicial reform to fortify the position of the courts and reduce human error in judicial decisions. *70 Under its ‘smart courts’ strategy, China has already developed its first robot judges, called Xiaozhi. They appear able to adjudicate some civil cases efficiently – for example, cases involving consumer credit or private loan agreements. *71 Although ‘Xiaozhi’ is defined as an AI-based legal assistant for the judge and it is operated by the judge, the AI determines the course of the process. The AI lists questions of factual circumstances to be clarified, forms a chain of evidence, and reconstructs a timeline of the entire loan process. Xiaozhi began with skills in automated review of particular facts, automatic scheduling of a hearing, voice recognition and transcription of the minutes of the hearing, identification of evidence, and analysis; then, in 2019, the assistants mastered new skills, such as generating the chain of evidence, asking factual questions (related to facts not in evidence), and formulating the judgement. *72 Characterisations of this system give the impression that a judge has little manoeuvring room to change the list of questions the AI assistant uses.
Alongside Xiaozhi, several other AI solutions operate in Chinese courts, where they have been widely adopted to make the Chinese judicial system more efficient, with a stated aim of better serving the public. At the same time, sources point out that China’s public security agencies use AI for locating lawbreakers and for more precise governance of social services in various milieux – which extends even to social scoring. *73 Moral governance is an explicit goal of Chinese state strategy, pursued by introducing a minimal moral standard and enhancing the ‘moral integrity’ of citizens. In such a context, giving citizens marks for good or bad behaviour is a valid tool. *74 One element of behaviour taken into account in this broader framework is a person’s credit rating and activities as a bad debtor *75 ; citizens’ actions in the court system presumably form another component of the social scoring, yet data on this part of the picture are not freely available.
None of the descriptions I could find, even for Xiaozhi alone, specify either the merit scheme or the nature of the data used for training the system. Moreover, there are indications that the AI systems form a part of the Judicial Accountability Reform, which was triggered because judges make their judgements independently and their decisions are not directly supervised. *76 Although the clear conflict with judicial independence that this situation spotlights is not the subject of this article, one may conclude that transparency is not clearly ensured in Chinese courts’ application of AI. The issue is an important one, both for individual jurisdictions and for the interfaces between them.
5.2. Other examples of online dispute-resolution systems
Online dispute-resolution (ODR) systems seem to be the most popular AI systems utilised in many states to settle disputes. Although what they produce may be enforceable settlements, it must be stressed that all of the ODR systems described here are provided by the state court system for voluntary use. Since the result of ODR may be a legally binding decision with cross-border enforceability, considerations related to recognition of judicial decisions prove relevant here.
In February 2019, British Columbia and the Ontario Superior Court of Justice launched a pilot project *77 to use Digital Hearing Workspace as an online case-management platform whereby parties in civil cases can submit electronic materials for a hearing. It likewise provides authorised parties with access to event-related documents. *78 Another prominent tool is Smartsettle ONE, a privately developed online negotiation application that courts are piloting as a voluntary settlement tool. *79 Its Web site introduces the tool as an ‘elegant and intelligent app with five sophisticated algorithms’ but fails to describe or even identify those algorithms. While it offers an opportunity to try the system and test oneself against the robot, it gives no information on the training data, no statement of the legal criteria applied, and no other relevant information about the process.
A final system worthy of note is the ODR tool Rechtwijzer, introduced back in 2015 to help settle divorce cases in the Netherlands. Its method stands in contrast to the opacity noted above: a report on its experimental use explains the workings of the system. The tool asks the parties questions, and it provides settlement options on the basis of their input and the information given by the parties earlier. There is no need for legal representation, but the system gives additional information on the opportunities to seek legal advice or mediation. *80
One may conclude that information about the use and the functionality of the AI is more readily available in the case of ODR, but specifics are still lacking. Details on the training data and the legal norms used – details essential for assessing conformity with ethics guidelines – are missing.
5.3. The Estonian e-filing system and AI-based tools
In Estonia, the judiciary is using two AI tools to support the everyday work of the courts, but neither of them makes judicial decisions. Both tools are part of the e-filing system and the Court Information System (KIS). The function, legal basis, and operation of the KIS, which functions also as a database, are stipulated in the statute on creating this registry. *81 The legal foundations for the e-filing system are described in the government regulation on establishment of the e-filing system and the statute on maintaining said system. *82 The CPA also refers to e-filing, in Article 601; therefore, the e-filing system as a whole and all its functions have to be in accordance with the legal requirements stipulated in the CPA. Since the tools are not part of the judicial decision-making itself, they have no effect on the recognition of Estonian judgements across borders. Nonetheless, there is a transparency requirement – again, one demonstrably not fulfilled: understanding these tools requires at least good knowledge of the Estonian language and some sense of Estonian state administration.
5.3.1. Salme
Speech-recognition software Salme, based on natural-language technology, helps to record court hearings and, simultaneously with the audio recording, create a transcript of the session. Little technical information is available about Salme beyond the documentation provided in connection with its procurement process. *83 The tender was for creating speech‑recognition software, interfacing with the KIS, and supplying an online service for the processing of transcripts. What information is available comes from the news of the launch of Salme’s version 3.0, on the relevant e-Estonia Web page *84 , and from the procurement registry, where the original documentation, specification of the task subject to tender, and the stated purpose behind the Salme system are archived. *85
5.3.2. Krat
Another AI tool used by Estonia’s judiciary is the anonymisation software Krat, which removes participants’ personal data from court judgements. Once again, very little information is publicly available, and again the best path to some understanding of the tool’s nature is to search the public procurement registry. *86 Procurement documentation describes the project as development of a user interface for scrubbing personal data from court decisions such that they can be safely disclosed.
A master's thesis by Silver Kullamaa examines the technical aspects and functionality of the system, concluding that there is a risk of breach of privacy because the anonymisation process is not linked to the functions of the e-filing system. *87 For both tools, the procurement documentation describes the functionality of the outcome but does not mention any legal framework, ethics factors, or fundamental-rights issues related to or to be addressed via the development of the system. No applicability issues, auditing possibilities, or other relevant requirements for monitoring system deployment are mentioned.
This state of affairs is in conflict with the principles set down in the Charter a year prior to the procurement. It is also not in line with the recommendations of the HLEG, released two years before that procurement. It is worth mentioning that, according to Article 9 of the Ministry of Justice statute, data-protection issues and participation in the work of the CEPEJ fall under the purview of the Ministry of Justice, as the administrative arm of the courts. *88
5.3.3. Payment orders for small claims
Finally, Estonia has also developed a semi-automated process for small claims and maintenance decisions whereby computer-generated payment orders are issued. These are considered to be judgements for purposes of enforcement.
This payment-order process for small claims is a semi-automated civil proceeding in Estonian courts. It employs algorithms, yet there is human oversight of some functionality, related to such matters as jurisdiction and service of documents. While the description of the process on the official Web site of the courts does not state directly that the process is semi-automated, one can deduce as much from reading the description. *89
6. Conclusion
Within the legal framework that exists now and that foreseen for the near future, any AI used in the judiciary is considered high-risk and therefore must be accountable, transparent, and explainable. However, examining the publicly available information on AI systems operating in courts of various states against the EU principles for appropriate use of AI reveals a lack of transparency. In the examples cited here, even the fact that a court is using an AI solution at all to assist with the judicial proceedings is not expressly stated on the official Web site. When some information can be found by means of search engines, from academic research, or via media publications, one still only gets a general idea that AI‑based systems exist in these contexts, not a sense of what they do.
Furthermore, substantive considerations for the development of the pertinent instruments are not properly described, if mentioned publicly at all. Relevant public information is available only in individual countries’ languages (Chinese, Estonian, etc.) and mostly in a journalistic tone. The mechanism of action is vaguely described in some cases – e.g., those of the Canadian ODR tools or small-claims processing in Estonia. The examples offered in this article are of systems and tools currently used in judicial applications in various countries. One inevitably concludes that none of the AI tools identified in this article honour the principles of transparency and accountability. Meeting transparency and fairness requirements necessitates official information published by public authorities. At the barest minimum, there should be explicit indications at government level (on the official Web site of the courts, for example) that automated procedures or AI systems are used in the judicial process to assist a judge or to help a court with its functions. For greater clarity, there should be information on the training data used for the AI system and on the basic criteria applied.
If an AI system operating in the judiciary does not fulfil the requirements for trustworthy AI as established in the given country’s legislative/legal framework or its ethics principles, a judge in the course of deciding on recognition of a judgement possibly made by or with the assistance of such an AI system may consider the judgement either to be contrary to ordre public or at least not in line with the principle of a fair trial. Such considerations may, in their turn, constitute grounds for refusal to recognise the judgement across borders.