JURIDICA INTERNATIONAL. LAW REVIEW. UNIVERSITY OF TARTU (1632)

100 Years Later

29/2020
ISBN 978-9985-870-48-8

A Kratt as an Administrative Body: Algorithmic Decisions and Principles of Administrative Law

Estonia, as the number-one-ranked country in Europe for the digital-public-services dimension of the Digital Economy and Society Index, aims at widespread adoption of artificial-intelligence systems to assist or even replace officials in public administration. It is expected that 50 artificial-intelligence applications will be operating in Estonian public administration by the end of 2020. The machine-learning capacity that is often intrinsic to artificial-intelligence systems means, in practice, that even the data analyst or programmer who wrote the respective code is later no longer able to explain the parameters behind the decisions. If the state allows such a so-called black box to make administrative decisions, further constitutional issues arise in addition to that of judicial control of the decision. An administrative decision presumes the implementation of legislation. Owing to the vagueness of the law, legal appraisal does not merely involve formal-logic operations, as laws and regulations require interpretation and consideration of the facts. This is particularly important in the making of discretionary decisions. Interpretation and consideration must not be limited to predictions made by statistical methods on the basis of earlier, similar cases. Not infrequently, a decision on applying a norm must be made in a situation that the legislature was unable to foresee and for which no requisite pattern emerges in the training data fed to an algorithm. The article examines the related principles arising from the Constitution, and one of the conclusions drawn from them is that for factually or legally complex decisions, the weight of the decision must be borne by humans, at least until much more powerful artificial intelligence is developed. However, individual components and elements of such decisions can be handled with the help of learning algorithms. Full automation remains an option for routine administrative decisions that are advantageous for the person(s) concerned and lack negative side effects for them, as well as for cases in which all relevant factual circumstances are as such comprehensible to an algorithm and transparent.

Keywords:

fundamental rights; artificial intelligence; algorithmic systems; principles of administrative law; black box

Artificial-intelligence applications used in the private sector, part of the ‘fourth industrial revolution’, are increasingly finding their way into public-sector offices. Estonia also has ambitions to use robots, or kratts, more widely in public administration to support or replace officials. *1 Public-sector kratts have received rather cursory academic attention, *2 even though the legislator already granted authorisation in some fields (tax administration, environmental fees, unemployment insurance) for automated administrative decisions in 2019. *3 There are plans to present the Riigikogu with a bill by June of 2020 that, if adopted, would introduce the necessary changes to existing legislation, including the Administrative Procedure Act, to allow for wider use of artificial intelligence. *4

There has been talk in the Estonian media about automating pension payments and unemployment registration. *5 It is worth noting that not all e-government solutions are based on artificial intelligence; many are just simpler, automated forms of data processing. True, according to the principles of administrative law, if such human-guided solutions malfunction, they too may transgress the law. The real challenge, however, is legislating for artificial intelligence, especially self-learning algorithms. With sloppy or malicious implementation, kratts may easily defy the rules of fair procedure, break the law, or treat individuals and businesses arbitrarily. Robots cannot yet explain their decisions. For our kratt project to succeed without exposing society and businesses to grave risks, the development of e-government must proceed with full understanding of the nature of machine learning and, equally, of its impact on administrative and judicial procedures. *6 Acting rashly in this field is tantamount to exercising governmental authority in line with a horoscope. It would be naïve and dangerous to let ourselves be overcome by the illusion of a kratt that can, or will soon be able to, engage in reasoned debate or comprehend the content of human language, including legal texts. Proper implementation of the law requires both rationality and true understanding of the law. If the risks are perceived and taken into account and algorithms are assigned the operations proper to them, there will be plenty of work for them in public administration, and they could benefit both the efficiency and the quality of decision-making.

To maintain focus in this article, we will not broach questions of data protection, *7 even though there is important common ground there with administrative law. We will consider algorithms without regard for whether decisions are made about humans or legal entities and whether they are based on personal or other data. The scope of this article likewise does not allow us to go into depth on the issue of equality. This article aims to give the reader some examples of the use of robots in public administration (1); to explain the technical nature of algorithmic administrative decisions (2); and, finally, to examine the operation of the principles of Estonian administrative law in this type of decision-making (3). The main question posed by this article is whether and when a kratt can be taught to read and follow the law, as the legitimacy of governmental authority must not be sacrificed to progress.

1. Algorithms in public administration

Artificial-intelligence enthusiasts both in Estonia and abroad have pointed out that the implementation of algorithms affords wide opportunities for cost savings, productivity gains, and freeing officials from routine assignments. *8 An increasingly powerful fleet of computers and ever more intelligent software can handle ‘crazy’ quantities of data and solve assignments that are too complicated for humans. The public sector has to keep up with the private sector. Among other applications, algorithms may become necessary in public administration to effectively control the use of artificial intelligence in business – in such areas as automated transactions on the stock exchange. *9

Smart public administration systems can be classified as falling into the following categories: communication with people, *10 internal activities, *11 and preparation of decisions and decision-making. In the framework of this article, we are primarily interested in the latter two. In Estonia, for example, the Agricultural Registers and Information Board uses algorithms to analyse satellite imagery to check for compliance with grassland mowing obligations. The Tallinn City Government uses machine vision to measure traffic flows. The Ministry of the Interior wants to automate surveillance with a nationwide network of face- and number-recognition cameras. The Unemployment Insurance Fund hopes to implement artificial intelligence soon to assess the risk of unemployment. *12

In the U.S., an algorithm determines family benefits and analyses the risks that may justify separating a child from his or her family. *13 Algorithms have also been applied to bar entry into the United States, to approve pre-trial bail, to grant parole, in counter-terrorism, and for planning inspection visits to restaurants, among other uses. There are predictions that artificial intelligence will soon be implemented in the fields of aeroplane pilots’ licensing, tax-refund assessment, and the assignment of detainees to prisons. *14 Algorithmic predictive policing is used in the U.S. and also in Germany to predict the time, place, and perpetrator of an offence. The system analyses crime statistics together with camera and drone surveillance records to identify occurrence patterns for certain offences and uses the information gleaned to direct operational forces. Disputes over the use of algorithms have already reached the highest courts of European countries, such as the French and the Dutch Council of State, with regard to such issues as university applications and environmental permits. *15

2. Technical background

2.1. Basic concepts

If we wish to understand artificial intelligence, we must first clarify some concepts from data science whose definitions are far from unanimous. *16 That notwithstanding, we will try to give one possible overview.

Artificial intelligence can be understood as the ability of a computer system to perform tasks commonly associated with the human mind, such as understanding and observing information, communicating, discussing, and learning. These features of artificial intelligence must be considered metaphors in the functional sense, because machine ‘learning’ is not actually the same as human learning. Artificial intelligence has many branches: automated decision support, speech recognition and synthesis, image recognition, and so on. A robot in our context is an artificial-intelligence application – an intelligent system. *17

Data mining is the process of extracting new knowledge – generalisations, data correlation, and repeating patterns – from large volumes of data (big data) by using statistical methods. *18 Various statistical methods have previously allowed analysts to build mathematical models based on data sets to describe what is happening in nature or society. These can, in turn, help one assess and classify new situations and predict the future, such as the weather or criminal recidivism. This becomes particularly effective if the models are built on self-learning (machine-learning) algorithms. *19

An algorithm is a set of precise mathematical or logical instructions – more generally, a step-by-step procedure for solving a given problem (one example might be a cake recipe). The representation of an algorithm in a programming language is a computer program. A distinction is drawn between (1) algorithms whose performance is entirely human-defined and (2) algorithms that change their parameters autonomously in the course of learning. *20 The systems that automate traditional decision-making processes in public administration (expert systems) are based on the former. Artificial-intelligence applications in public administration are mostly based on learning algorithms (sometimes also on more sophisticated non-learning algorithms). The sketch below illustrates the distinction.
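A minimal sketch, in Python, of the two kinds of algorithms just distinguished; the tax rates, training data, and learning rate are our invention, purely for illustration:

```python
def expert_system_decision(taxable_value: float) -> float:
    """(1) A human-defined algorithm: every branch and rate is fixed in advance."""
    rate = 0.025 if taxable_value > 100_000 else 0.01  # hypothetical rates
    return taxable_value * rate

class LearningSystem:
    """(2) A learning algorithm: no human fixes the weight; the data does."""

    def __init__(self) -> None:
        self.weight = 0.0

    def train(self, cases: list[tuple[float, float]], lr: float = 0.01,
              epochs: int = 100) -> None:
        for _ in range(epochs):
            for x, y in cases:
                error = self.weight * x - y
                self.weight -= lr * error * x  # the parameter drifts with the data

    def predict(self, x: float) -> float:
        return self.weight * x

learner = LearningSystem()
learner.train([(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)])  # learns weight ≈ 2
print(expert_system_decision(120_000.0), learner.predict(4.0))
```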

Machine learning is the process by which an artificial-intelligence system improves its service by acquiring or reorganising new knowledge or skills. It is characterised by the use of learning algorithms to assess situations or make predictions (e.g., making diagnoses, detecting credit-card fraud, predicting crime). There are many machine-learning techniques, with differing characteristics: linear and logistic regression, decision trees, the decision forest, artificial neural networks, etc. *21 In the most widespread – supervised learning – the algorithm is first trained on training data, a large number of data cases in which the input (e.g., payment-behaviour data) and output (e.g., solvency) values (features) are known. At a later stage, the application must calculate the output values for new cases on its own on the basis of the input data. These can be presented as numerical data (regression) or, for example, as yes/no answers (classification). The core element of a learning algorithm is its optimising or objective function. This is the mathematical expression of the algorithm’s task, which contains a set of so-called weight parameters. *22 As it learns, the robot searches through possible combinations of weights and chooses the working model whose solutions deviate least from the relationships given in the training data and that is therefore most appropriate for future cases. These operations are repeated hundreds, thousands, or even millions of times. *23 A compact sketch of this loop follows.
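A hedged sketch of the supervised-learning loop just described, reusing the solvency example; scikit-learn is our choice of tool, not the article’s, and all feature values and labels are fabricated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data: cases with known inputs (payment behaviour) and outputs (solvency).
X_train = np.array([[0, 24], [1, 12], [4, 2], [0, 36], [3, 3], [5, 1]])
#                   columns: [late payments, months as customer] (hypothetical)
y_train = np.array([1, 1, 0, 1, 0, 0])  # 1 = solvent, 0 = insolvent

# Fitting repeatedly adjusts the weight parameters of the objective function
# (here, log-loss) so that predictions deviate least from the training data.
model = LogisticRegression().fit(X_train, y_train)

# At a later stage, the application scores new cases on its own:
new_case = np.array([[2, 6]])
print(model.predict(new_case))        # classification: a yes/no answer
print(model.predict_proba(new_case))  # the underlying probability estimate
```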

By automated administrative decisions we mean administrative decisions that are prepared or made by means of automation. These may be based on simpler or more sophisticated non-learning algorithms (expert systems) as well as on machine learning. *24 For example, land-tax statements in Estonia are produced entirely according to set rules and require no cleverness on the part of a computer. An algorithmic administrative decision is, more narrowly, a decision made with the help of artificial intelligence. Automated administrative decisions can be divided into fully and semi‑automated ones; the latter are approved by an official. Sometimes the computer decides, by following certain given criteria, whether it is able to make a final decision, such as granting a tax-refund claim, or whether an official must decide instead (see the sketch below). *25 Sometimes the concepts of automated and algorithmic decisions are used synonymously, and the two are often combined, but it must be borne in mind that the learning potential of artificial intelligence brings public administration both new opportunities and new problems. *26
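A hedged sketch of that routing pattern: the computer applies fixed criteria to decide whether it may finalise a tax-refund claim itself or must hand the case to an official. The criteria and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RefundClaim:
    amount: float
    filings_on_time: bool
    risk_score: float  # e.g., the output of a learned model, in [0, 1]

def route(claim: RefundClaim) -> str:
    """Decide whether the computer may finalise the claim or an official must."""
    if claim.amount < 1_000 and claim.filings_on_time and claim.risk_score < 0.2:
        return "grant automatically (fully automated decision)"
    return "refer to an official (semi-automated decision)"

print(route(RefundClaim(amount=450.0, filings_on_time=True, risk_score=0.05)))
print(route(RefundClaim(amount=8_000.0, filings_on_time=True, risk_score=0.6)))
```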

2.2. The basic characteristics of machine learning

Self-learning algorithms can handle trillions of data cases, each with tens of thousands of variables. For some time, institutions in Estonia have been collecting data in large data warehouses for analytical purposes. *27 Machine learning does not change the fundamental essence of data analysis but amplifies it: machine learning (in its current capacity) is only able to discover statistical correlations. These are not causal, natural, or legal relationships. Depending on the level of refinement of the model, the output data from machine learning may reflect the real world and anticipate the future with amazing accuracy. However, probability calculations will always retain some rate of error. *28

Learning algorithms and models created during learning are so sizeable and complex that a human – even an experienced computer scientist or the creator of the algorithm – may not always be able to observe or explain the work of a machine-learning application (this is opacity or the black-box effect). The more efficient the algorithm, the more opaque it is. Individual elements of a sophisticated machine-learning system, such as individual trees in a decision forest, can be tracked, but this does not allow much to be inferred about the process as a whole. Sometimes opacity of a system is actually desirable, to protect personal data or business secrets or to prevent the addressee of a decision from deceiving the algorithm. *29
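A small sketch of that last point: one tree in a decision forest can be printed and read, yet the decision is the aggregate of the whole ensemble. The data is random, and the use of scikit-learn is our assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic ground truth

forest = RandomForestClassifier(n_estimators=100, max_depth=3,
                                random_state=0).fit(X, y)

# An individual element is trackable...
print(export_text(forest.estimators_[0],
                  feature_names=[f"x{i}" for i in range(5)]))

# ...but a prediction aggregates the votes of 100 such trees, each grown on a
# different bootstrap sample, so no single printout explains the whole.
print(forest.predict(X[:1]))
```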

Because of the statistical nature of machine learning, very big sets of data are needed. Unfortunately – or, rather, fortunately – we have too little information on terrorist acts, for example, to make accurate estimates. *30 In addition to the quantity of data, high quality and standardisation are no less important: accuracy, relevance, organisation, compatibility, comprehensiveness, impartiality, and – above all – security. This applies to both the training data and the ‘operating data’ used in the actual implementation of the algorithm. All machine-learning predictions are based on training data and previous experience. Another golden rule of machine learning is this: garbage in, garbage out. Poor data quality can result in a variety of distortions, including failure to capture all of the factors affecting an assessment – whether from inability, from not considering them important, or from finding them economically nonessential. *31 At the same time, large numbers of decisions amplify the impact of an error rate in absolute terms.

Models developed and decisions made through machine learning cannot be completely foreseen or guided. *32 Nonetheless, people – programmers; analysts; data scientists; system developers; and, ultimately, the end user – have a huge role and responsibility in the quality of machine learning’s outcomes. The end result is influenced by all kinds of strategic decisions and fine-tuning: defining the relevant output value (target features), *33 creating an objective function, selecting and developing the type of algorithm, fine-tuning the algorithm to be more cautious or bolder, and performing testing and auditing. We must take into account that two distinct types of algorithms, both of which might be very precise on their own, may give completely different answers for the same case. *34

3. Rule of robots or smart rule of law?

The above-mentioned technicalities of machine learning pose significant legal challenges in public administration. Machine learning can produce great results statistically, but in certain cases much can also go wrong.

3.1. Administrative risks: Digital delegation and privatisation

The authority of the government can be exercised only by a competent institution. This institution may use automatic devices, such as a traffic light or a computer, for this purpose. The more discretion is given to the algorithm, the more acute becomes the question of whether the decision is actually subject to the control of the competent institution or is instead running its own course. *35 In our view, the decision is always formally attributed to the institution using the algorithm, which remains legally responsible for it. But with weightier decisions, a substantive problem arises: can the institution make the algorithm sufficiently consider all of the important details of a decision? *36

We can assume that the state will have a practical need to delegate the development of its algorithms largely to privately held IT companies. That makes it important that we not lose democratic control over the companies directly managing the algorithms – that they neither gain full control over the content of administrative decisions nor maximise their profits at the expense of those decisions’ quality. Therefore, as we develop our e-government, we have to analyse whether the current public-procurement and administrative-co-operation laws sufficiently address these risks. *37

3.2. Impartiality

For decades, people have been hoping that artificial intelligence can help create a bias-free, selfless, comfort-zone-free decision-maker that treats everyone equally. Regrettably, the reality of machine learning has shown some serious difficulties with the problem of bias. Artificial intelligence tends to discriminate against some groups of people when the quality of input data or the algorithm itself is inadequate. For example, when some groups have been monitored more closely than others, this may become reflected in the training data (as seen with blacks in predictive policing in the U.S. or in recidivism assessment systems). *38

3.3. Legal basis

The Estonian Constitution’s §3 (1) 1 states that governmental authority shall be exercised solely pursuant to the law. The question of when and how the legislature should authorise institutions to implement machine-learning technology cannot be answered simply or uniformly. If machine learning is used only in the preparation of administrative decisions (e.g., to forecast pollutant emissions before an environmental permit is issued) while the final administrative decision is made by a human official following the normal procedural rules, then machine learning can be considered one detail of the administrative procedure, and control over the decision-making remains at the discretion of the administrative institution (Administrative Procedure Act §5 (1)), hence requiring no special provisions. *39 If the role of the human in decision-making is limited to that of a rubber stamp or disappears altogether, then the matter may require parliamentary approval. In each area (licences, social benefits, environmental protection, law enforcement, immigration, etc.), the widespread implementation of intelligent systems raises specific issues that need to be resolved separately and balanced with appropriate substantive and procedural guarantees. *40 Aside from the legal issues, it would be wise to consider the risks to public finances: might implementation make the legislature a slave to the robot? An expensive and complicated implementation system may come to obstruct legislative change and political will. *41

The Taxation Act’s §46² (1) grants an implementing institution broad powers to make automatic administrative decisions in the field of taxation without intervention by an official. A more detailed list must be established by the Ministry of Finance. *42 The law does not impose restrictions on the type or manner of the decisions that may be automated. Because of its rather precise legal definitions, taxation is considered rather suitable for automation. Here, well-founded reliance on a broad mandate should not produce unacceptable results. However, granting total power to an authority *43 to fully automate any administrative decision may result in violations of §3 (1) and §14 of the Constitution.

3.4. Supremacy of the law

Pursuant to §3 (1) 1 of the Constitution, the exercise of governmental authority may be guided by an algorithm only if the word of the law is followed at all times during its application. *44 But this requires the human or self-learning system to convert the law into an algorithm. In some cases, this may be possible in principle, albeit a substantial task, but that would require the developer to have very in-depth knowledge of information technology, mathematics, and the law. *45 Still, many legal provisions cannot be described in the unambiguous variables specific to an algorithm. *46 This is due both to the inevitable vagueness of the instrument of law – human language – and to the intentional slack that ensures flexibility in legislation. *47 Instead of step-by-step instructions (conditional programs), the law often uses outcome-oriented programs: *48 general objectives such as better living environments, public involvement and informing the public, balancing and integration of interests, sufficiency of information, expedient and economical while also reasonable land use (Planning Act §§ 8–12); discretionary powers, such as the right of a law-enforcement agency to issue a precept to a person liable for public order to counter a threat or eliminate a disturbance (Law Enforcement Act §28); undefined legal terms, such as overriding public interest (Water Act §192 (2)) or danger (Law Enforcement Act §5 (2)); and general principles, such as human dignity, proportionality, and equal treatment (Constitution §10, §11 sentence 2, and §12).

Owing to the uncertainty of the law, legal subsumptions *49 (such as the decisions necessary to implement a law – is an object a building in the sense of the Building Act, is a person a contracting entity in the sense of the Public Procurement Act, is the recipient of rural support sustainable, and how should one define a goods market in competition supervision?) are not mere formal logical acts but require judgement. Before a situation is resolved, the decision-maker must interpret the norm to explain whether the legislator wanted to subject the situation to the norm or not. Moreover, the decision must be made in situations that did not occur to the legislature, such as that of a new cross-border tax-avoidance scheme. Here, it is up to the implementer to assess whether he or she is dealing with permissible optimisation or abuse (Taxation Act §84). Those implementing the law – the ministers, officials, judges, and contracting parties – continue to interpret it and fill in the gaps in the regulatory process started by the parliament. It is up to them to make the law concrete. *50

We must note that there is some similarity to machine learning here: a learning algorithm is not yet complete in the form in which humans created it. It keeps developing itself and is able to create new models to classify situations. So could the legislator’s real will not be modelled in this way as well? Is it not a standard classification task for a smart system, almost like finding cat pictures? Regrettably, the source material for machine learning – data from the past – cannot in principle be sufficient for further developing the law as code. *51 A law’s enforcer must also account for existing judicial and administrative practice *52 and the generalisations that crystallise out of it, but his or her sources must not be limited to this alone. *53 An official or a judge must be able to perceive, understand, and apply a much broader context: the history of the law, the systematics of norms, the objective of the law in question, and the general meaning of justice – but especially the direct and indirect effects of the decision. It is not possible in all fields to produce sufficient quantitative or qualitative data to describe all the layers of law and its operating environment. And it is far from possible for (current) smart systems to follow all of this material in real time. Therefore, many situations require a rational being who understands the peculiarities of the specific situations being regulated and, when necessary, creates a new law appropriate for the situation instead of searching for one in previously tested patterns. *54

The vagueness of legal concepts expressed in natural language is not a flaw in the law. It must remain possible to argue over the law if we are to reach fair decisions in specific situations. But this requires open and honest discussion of different interpretations and ways of assessing the facts. Even translating the law into zeroes and ones does not remove the need to interpret it; that need simply moves from the decision-making stage to (1) the expert system’s creation and calibration stage or (2) the intelligent system’s learning stage. *55 In both cases, at least the persons concerned – and presumably, considering the complexity of machine learning, also the competent administrative institution – lack an effective opportunity to have a say in the interpretation. Because of their complexity, the decisions made by a self-learning algorithm are not just difficult to predict; they are structurally unpredictable. *56 But how can one ensure that the algorithm will not deviate from the law as it learns? Periodic testing and auditing are not a sufficient solution, because tests are likewise unable to anticipate or run through all of life’s possible scenarios. The costs of such extensive calibration and testing would eventually outweigh the benefits. *57 Also, the machine-language translation of a law that an administrative robot could supposedly follow is anything but static. It is corrected not only by new laws, interpretations in case law *58 , and decisions made in constitutional review but also by the development of the context of the law – society. Weak artificial intelligence is not capable of perceiving or applying these changes itself.

The main question here is not whether and to what extent a machine makes mistakes. A machine doesn’t perform any legal-thought operations. In the best case and only with sufficient quantities of data, machine learning (in its current capacity) can merely mimic legal decisions through statistical operations, not comprehend the content of the law or make rational decisions based on it. *59 But that is precisely the demand set by §3 (1) 1 of the Constitution. We are claiming not that an expert or smart system is unable to replace any legal assessment, just that solutions to the problems described above must be found when one is using such systems.

3.5. Discretionary power

These problems are exacerbated by discretionary decisions, where the law does not prescribe clear instructions – for instance, on whether to require the demolition of a building, what requirements to set for service providers in a procurement, whether and under what conditions to allow extraction in an area with groundwater problems, or where to build a landfill. Naturally, discretionary power does not give authorities the right to make arbitrary decisions. Discretionary decisions must also obey the general principles of justice and consider the purpose of the law and all of the relevant facts specific to each individual case (Administrative Procedure Act §4 (2)). *60 An algorithm that has been completely defined by humans is not suited to making discretionary decisions, because circumstances are unpredictable. True, there is some measure of standardisation and generalisation in the making of judgements, as in the case of internal administrative rules, but officials must retain the right and the duty to deviate from such standards in atypical cases. *61 However, optimists believe that, even though the capability is lacking at the moment, machine learning – unlike rigid algorithms – will soon be able to handle dynamic discretionary parameters and operate within the lines of value principles and discretionary bounds. *62

This does not seem realistic for the near future. *63 First of all, decisions of this kind are too unique for bodies of data large enough for machine learning to model them to be generated. Secondly, discretionary rules and the general principles of justice may seem like simple maxims at first glance. They may even be represented as mathematical formulas, but this does not yet guarantee their practical applicability in machine learning. Let us illustrate with R. Alexy’s proportionality formula, trying to explain its application through the example of an injunction to shut down a fish-processing plant infected with a dangerous bacterium.

$$ W_{i,j} = \frac{I_i \cdot W_i \cdot R_i}{I_j \cdot W_j \cdot R_j} $$

Here, i and j are the principles considered in making the decision (in this case, fundamental rights: consumer health versus the freedom to conduct business). Wi,j is the specific value of principle i in relation to principle j. For the Veterinary and Food Board to issue an injunction, i, or health, must outweigh j, or freedom to conduct business. In other words, Wi,j must be >1. I is the intensity of interference with the given principle, which expresses the extent of potential damage if one or the other principle recedes (consumer illness or death / the facility’s bankruptcy and unemployment for its many workers). W is the relevant principle’s abstract value and illustrates the general importance we attach to public health and to freedom to conduct business. R is the probability of damage that could result from violation of the principle (for example, if the plant stays open, the product will not necessarily be contaminated but might be, but if it is shut down, bankruptcy is certain). *64
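For concreteness, a hypothetical numeric walk-through of the formula, assuming Alexy’s customary triadic scale (light = 1, moderate = 2, serious = 4); the concrete values assigned below to the fish-plant example are our invention, not the article’s:

```python
def concrete_weight(I_i, W_i, R_i, I_j, W_j, R_j):
    """Alexy's weight formula: W_ij = (I_i * W_i * R_i) / (I_j * W_j * R_j)."""
    return (I_i * W_i * R_i) / (I_j * W_j * R_j)

w = concrete_weight(
    I_i=4, W_i=4, R_i=0.5,  # i = health: serious potential harm, high abstract
                            # value, contamination uncertain
    I_j=2, W_j=2, R_j=1.0,  # j = business freedom: moderate harm, lower
                            # abstract value, bankruptcy certain if shut down
)
print(w)  # 2.0 > 1: health outweighs business freedom; the injunction may issue
```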

Even if we disregard other criticism of this equation, *65 the real difficulty does not lie in calculations so much as in assigning correct values to the variables in the equation and in arguments over whether and to what extent one or another principle (fundamental right) is infringed, what the proven facts are, and whether they are even relevant with regard to the judgement to be made. The likelihood of one or another outcome (R) may depend on very special circumstances that did not occur to those inputting the algorithm’s learning and working data; however, judgements made on the importance of the principles (W) and the intensity of interference (I) are value-based and can only be made in acute awareness of the sizeable context accumulated over a long span of evolution in law and society. If this information is not easily accessible to the human official, the deficiency can be overcome by communication between the decision-maker and the parties to the proceeding in a fair administrative procedure (see Subsection 3.6, below). The weak machine-learning technologies available today and expected in the near future are characterised by limited understanding of the context and the content of communication. *66 This is equally true for undefined legal concepts (public interest, material harm, etc.). *67 Even if, for example, the Law Enforcement Act’s §5 (2) defines a threat as a sufficiently probable offence (e.g., food poisoning), it still does not quantify the level of sufficient probability. This is a legal judgement that is based on value judgements as to the significance of one or another interest, not just a statistical prognosis of the occurrence of damage.

To fully delegate a complex discretionary or judgement-based decision to an algorithm would, in our opinion, constitute a gross breach of discretion, against the Code of Administrative Court Procedure’s §158 (3) 1 (failure of an administrative institution to exercise discretionary power). An algorithm can, however, be implemented as an aid.

3.6. Fair proceedings and the principle of investigation

Fair proceedings – especially the right to a hearing (Administrative Procedure Act §40 (1)) – play an important role in guaranteeing the substance of a decision as well as the dignity of the persons concerned. *68 The establishment and further development of law in a state based on the rule of law must take place in the framework of honest and open (at least to the persons concerned) dialogue. In decisions affecting large numbers of people, such as spatial planning and environmental permits, the right to express an opinion must also be open to the public. Such discussions are merely mimicked by contemporary algorithms (i.e., debate robots), not actually (meaningfully) held. In the case of machine learning, a hearing would be all the more necessary, as the algorithm may not be programmed, or may not have learned, to account for unpredictable circumstances. A rare event may turn out to be decisive for the right prediction – as when a broken leg means that a person will not complete his weekly workout even though years of his behavioural patterns would indicate otherwise. *69

C. Coglianese and D. Lehr point out that the right to a hearing is fairly flexible under U.S. law and that machine learning without a hearing could, in some situations, yield decisions that are on average more accurate than those humans reach through the hearing process. *70 This is not adequate justification. A citizen or business falling within the margin of error does not have to be satisfied with pretty statistics and retains the right to demand a lawful decision in his case. In Estonia, the Administrative Procedure Act’s §40 (3) provides several exceptions to the right to a hearing. The catalogue of exceptions may be augmented via special laws if the effectiveness of a hearing is low in practice. But no general exception for algorithmic administrative acts may be granted. The greater the discretion of the authority, the more necessary communication becomes for the proceedings and, therefore, the less room there is for fully automated decisions; i.e., when artificial intelligence is applied, the person concerned must retain the opportunity to interact with an official. *71

The effective protection of rights and the public interest is served by the principle of investigation in the Administrative Procedure Act (§6): an administrative institution is obligated to take the initiative in investigating all relevant facts. This, too, is a challenge for algorithms, because they cannot deal with circumstances that have not been entered into their systems. The reality around us is not yet completely digitised or machine-readable with sensors. Therefore, a machine can consider only fragments of the actual situation in its analysis. *72 But a human is able to take the initiative in searching for additional data from sources that have not been provided or listed in some manual. Not knowing important information does not exonerate a decision-maker who errs against a prohibitive norm. *73 This is why German law requires the intervention of a human officer, obliging him or her to manually correct an automated decision in light of additional circumstances. *74 However, with intelligent implementation, machine learning can be applied in the service of the principle of investigation – e.g., to select the tax returns that need more extensive, manual control. *75

3.7. Reasoning

The reasoning behind decisions made by governmental authorities is a core element of a fair procedure. According to §56 of the Administrative Procedure Act, an administrative act must state its legal and factual basis (the provision delegating authority and the circumstances justifying its application) and, if the act is based on discretion, at least the primary motives for the choice between the options (e.g., why the pulp mill should be in Narva and not Tartu or why the construction of a wind farm should be prohibited). This is not a mere ethical recommendation but a fundamental, constitutional obligation. *76 A law-enforcement mandate shall be granted only to an entity that can demonstrate that its decision is lawful – in accordance with the external limits as well as the internal rules of discretion. This brings us to an important difference between the private and public sectors: the exercise of freedoms does not need to be justified, but the use of authority does. A person receiving a notice of tax assessment or a demolition injunction does not have to accept an official’s claim that ‘I don’t know why, but the machine made this decision about you’.

Neither the Administrative Procedure Act’s §56 nor the Taxation Act’s §46² articulates exemptions for automatic, including algorithmic, administrative decisions. Such exceptions would violate the Constitution as well as generally accepted standards in democratic, rule-of-law states. *77 As we have seen, the creators of algorithms are often unable to explain the decisions made by a robot, owing to machine learning’s opacity. In the U.S., Houston used an algorithmic decision-making process to terminate employment contracts with teachers in 2011. During the ensuing litigation, the school administrator was unable to explain the functioning of the algorithm, claiming that he had no ownership of or control over the technology. *78 The U.S. Government, too, has cited the issue of opacity as a matter of concern. *79

Some experts see a possibility of solving the problem of opacity by using artificial intelligence to develop language-processing programs to the point where the computer can analyse numerous prior justifications and synthesise a machine argument that is seemingly similar to legal argument. *80 This method would still rely only on statistics, not on reasoning based on the methods of jurisprudence, meaning that it could offer only an inadmissible semblance. Reasoning must be genuine. *81 There is a growing search for ways to increase transparency in machine learning by following the principles of accountability and explainability. Among other things, this requires greater access to the learning and source data, the data processing, and the algorithms and their learning processes. *82 These efforts collide with business secrets, internal information, and personal-data protection and – above all – with the limits of a human’s ability to analyse the work of an algorithm. Moreover, the most important aspect of the reasoning for an administrative act is not a technical description of how the decision was made but the motives behind it – why the decision made was this particular one. Even in the case of decisions made by humans, we are interested not in the biochemical details of the decision-maker’s brain but in his or her explanations. Expanding the overall transparency of machine learning thus contributes little to the reasoning of individual cases. *83

As an alternative, development has started on so-called explainable artificial intelligence (xAI). Since the actual mathematical processes of machine learning are too complex and sizeable for humans to investigate productively and directly, developers are trying to employ artificial intelligence for this task too – for example, by modelling a complex implementation with a simpler and more comprehensible algorithm. *84 There are also efforts to construct similar fictitious situations in which the algorithm gives a different answer. To this end, some variables are ignored or changed (e.g., gender or age) while others are kept constant. This technique can be used to parse out the criteria instrumental to a given decision (see the sketch below). *85 But this still takes us only halfway: it is an explanation of the background of a statistical judgement, not a legal judgement itself. *86 That may suffice if the administrative act solves a complex but mathematically tractable problem, such as predicting a development or the likelihood of an event (e.g., the increase or decrease in the population of a protected species when a railway is built). *87 Such judgements and predictions can be necessary, but the obligation of justification reaches wider. In general, an administrative act may need an explanation of why legal provision x is applied and not y, why a statutory provision is interpreted as a and not b, what facts have been ascertained and why, why these facts are pertinent according to the relevant law, *88 or why it is necessary to implement a certain measure (e.g., why should a dangerous structure be demolished instead of rebuilt?). All of these issues demand counter-arguments to the positions held by the parties to the proceeding that were not addressed in the decision. A robot cannot give adequate explanations for these thought operations, because it does not perform such operations. *89 If an administrative act requires a substantive legal justification, then the current level of information technology entails a need to place a human in the ‘circuit’. *90
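A hedged sketch of the counterfactual probing just described: one variable (here standing in for ‘age’) is varied while the others are held constant, to see which values flip the model’s answer. The model, data, and feature layout are fabricated stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A stand-in model: three features, of which index 2 plays the role of "age".
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 0] - 0.8 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def probe_feature(model, case, feature_index, candidate_values):
    """Return the base decision and the candidate values that change it."""
    base = model.predict(case.reshape(1, -1))[0]
    flips = []
    for v in candidate_values:
        variant = case.copy()
        variant[feature_index] = v  # change one variable, hold the rest constant
        if model.predict(variant.reshape(1, -1))[0] != base:
            flips.append(v)
    return base, flips

base, flips = probe_feature(model, X[0], feature_index=2,
                            candidate_values=np.linspace(-3, 3, 13))
print(base, flips)  # which values of the probed variable would flip the decision
```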

3.8. Judicial review

To ensure legality, it is advisable to subject both private- and public-sector artificial-intelligence applications to multifaceted monitoring (documentation, auditing, certification, standardisation). *91 This is necessary but cannot replace judicial protection for persons who find that their rights may have been violated (Constitution §15 (1), European Convention on Human Rights art. 6 and 13, Charter of Fundamental Rights of the European Union art. 47). If sufficient substantive and factual argumentation is given for administrative decisions made by means of an algorithm, there is no fundamental problem with judicial control. But difficulties arise in the absence of such argumentation. *92

C. Coglianese and D. Lehr point out that courts tend to defer to agencies when it comes to technically complex issues. *93 But it is the algorithm that makes the decision complex! Implementing algorithms must not become a universal magic wand that frees the executive institution from judicial review of any decision. We can speak of loosening control only in situations in which judges would defer for other reasons even without the use of artificial intelligence – such as the economic, technical, or medical complexity of the content – or in which the infringement of the rights of affected individuals is not excessively intense. A complex administrative decision may thus include elements subject to differing intensities of control. *94 Discussing this matter, E. Berman sees greater opportunities for the use of algorithms the more discretion an authority has. This position is somewhat confusing, because it does not account for the breadth of discretion afforded by the law or for the significant influence of general principles and basic rights. Her ultimate conclusion is that control may be allowed to weaken where infringements are not very grievous and regulation is sparse, and that it may disappear altogether in situations wherein no-one’s rights are affected (e.g., deciding where to locate police patrols). *95 We can agree with this conclusion. As a general rule, administrative court proceedings retain control of rationality (including proportionality), which requires substantive administrative decisions that are at least monitored by humans, as well as legal justification, with the exception of routine, mass decisions.

The problem of control cannot be solved simply by the administrative institution disclosing the content and raw data of the algorithm to the court. *96 To analyse this material, the court would need its own IT knowledge or expert assistance. This is neither realistic from the angle of reasonable procedural resources nor in accordance with the constitutional roles given to the branches of government (Constitution §4). It would mean placing the primary responsibility for the compliance of an algorithm in the hands of the court, whereas §§ 3 and 14 of the Constitution place it in the hands of the executive power. There is no need to turn algorithms into direct subjects of judicial control. It is not necessarily important whether the data is distorted or has calibration errors or bias – these deficiencies may not affect the end result. Nor is it the responsibility of the appellant to prove such deficiencies when challenging an algorithmic decision. Neither is it a reasonable solution to compensate for the complexity of the algorithm with longer appeal periods: algorithmic administrative decisions, too, must obtain final, conclusive force within a reasonable time. *97 What matters in court is that administrative decisions taken by an algorithm be legally justifiable. The executive institution must be convinced of the legality of its decision, and the process of forming that conviction must be traceable without any special knowledge of computer science. This calls for a so-called administrative Turing test: citizens and businesses must not detect any difference in whether a law-enforcement decision of the executive institution is made with the help of artificial intelligence or not. *98 An institution using algorithms may rely on their help in making a decision if it is able to meet this standard. If not, a representative of reasonable thought – an official – must step in.

4. Conclusion: The division of labour between kratt and master

Administrative decisions vary widely in terms of content, legal and factual framework, and decision-making process. Depending on the field and situation, rigid, standard solutions; generalisations; and simplifications may be allowed to a greater or lesser extent in administrative law. *99 There are quite a few routine decisions that are subject to clear rules (e.g., in the areas of social benefits and taxes), and those can be trusted to computers working with non-learning or learning algorithms. *100 It may also make sense to use self-learning algorithms in areas where there is wide latitude for governmental decisions and the decision-making requires a more non-judicial analysis (e.g., determining the positions for police patrols or modelling protected populations). *101 But the important and complex decisions in society (e.g., where to build a railway or whether to build a nuclear power plant) are not routine and ought not be automated, at least not fully, because of a lack of appropriate learning data. These decisions need human judgement. *102

In situations that fall between those two extremes, it is realistic to expect co‑operation between the robot and the official, wherein the scope of each role may vary greatly, depending on the field and situation: *103

More routine, though not quite mechanical, administrative decisions that are advantageous and lack negative side effects for the public and whose factual circumstances are comprehensible to an algorithm can be made as fully automated administrative decisions. However, the affected party must retain the right to request human review of the decision. From a procedural point of view, it would be conceivable, for example, to issue a fully automatic administrative act as the initial act while allowing the person one month to apply for a manual administrative act. *104

Administrative decisions of moderate complexity may require approval of the administrative decision by an official, but here we must avoid the ‘rubber stamp’ phenomenon. The official should first examine the arguments of the parties to the proceeding and the views of other authorities, assess the comprehensiveness and exhaustiveness of the facts on the basis of the investigative principle, and prepare a justification for the administrative act together with a thorough evaluation of his or her choices. As technology advances, there is reason to believe that machines will be able to give increasing assistance in forming these justifications (explaining the aspects that tipped the scales or preparing a draft justification, or at least its more routine parts).

For factually or legally complex decisions, the weight of the decision must be borne by humans, at least until stronger artificial intelligence is developed, *105 though learning algorithms may be used to evaluate individual elements of those decisions. Officials should still take statements directly from witnesses, communicate directly and humanely with the parties to the proceedings, and make principled and justified decisions.

With all of these variations, quality machine learning is particularly suitable for assisting officials in those areas of their job where they need to make predictions about circumstances or events of which humans, too, lack certain knowledge (e.g., the likelihood of offences). But the legal decision (e.g., whether the prediction is sufficient to qualify as justification for intervention) must be made by a human. *106 Machine learning could also be implemented in highly uncertain situations where a decision needs to be made but even officials would have trouble presenting rational justifications (e.g., a long-term environmental impact). *107 In any case, the implementation of machine learning in the performance of administrative tasks requires a sense of responsibility on the part of the institution as well as legal, statistical, and IT knowledge at least to the extent necessary to adequately outsource and oversee the development services. *108

In conclusion, we are of the opinion that, at the current level of artificial intelligence, it is not possible to delegate atypical and complex administrative decisions to applications of it. Doing so is hindered both by the inability of the applications to conduct fair proceedings and explain the reasons and by the insufficiency of data. In conditions such as these, the delegation of a decision to an algorithm would be in conceptual conflict with the legality of administration and with procedural rules, along with the guarantee of judicial control. This is the actual state of things. The authors are not ambitious enough to predict whether implementation of ‘science-fiction technology’ available in the distant future could be in compliance with the law in effect at that time.
