This article examines the ways in which state legal acts are produced through the multiple existing technologies grouped under the label "artificial intelligence" and, on that basis, determines to what extent their use is compatible with Brazilian law, especially with regard to controlling their legitimacy.
### INTRODUCTION
Artificial intelligence is gradually being adopted by the State to automate factual and legal measures that are constitutionally attributed to it. Automating state acts is nothing new; it has been known for centuries. The novelty lies in the new ways of automating state acts through so-called artificial intelligence.
The duty constitutionally imposed on the State to clarify and state the reasons for which it produces its acts is equally age-old. Transparency and motivation have long been the antidote to state arbitrariness, since crimes and wrongdoing, as a rule, occur outside public and state knowledge and control, operating in the shadows.
Likewise, artificial intelligence mechanisms seem to operate in the shadows, preventing recognition and knowledge of the process adopted to produce state solutions based on artificial intelligence.
What, then, is the legitimate way of using artificial intelligence in the production of state acts in the light of the duty of motivation? This is the question we face in this article, in which we try to offer some proposals for solutions.
## I. THE PRINCIPLE OF MOTIVATION IN THE PRODUCTION OF STATE ACTS
a) The Duty Of The State To Motivate The Acts It Produces
1. According to the Constitution of the Federative Republic of Brazil (Constitution of Brazil), motivation is a duty constitutionally imposed in the exercise of a state function. As a result of this duty, all those who act in the exercise of a state function or attribution (whom we call public agents) are charged with disclosing the reasons that led them to adopt a certain legal or material measure for the protection of the public interest, according to the enabling legal norm.
As a logical consequence of the duty of motivation, the public agent is equally compelled to reveal the logical pertinence between his deliberation and the facts that occurred.[8] Without this, the reasonableness and proportionality of the legal or factual measure adopted, according to the purpose of the competence rule, cannot be ascertained.
2. Whether in Brazil or abroad[1], the duty of public agents to motivate their actions is justified not only by serving as an antidote capable of avoiding or fulminating abusive and authoritarian state measures. This duty also lends itself to explaining and externalizing the reasons why public agents adopted a given measure, neglecting other measures that could also be adopted to achieve the same purpose. Thus, motivation allows society to know and be informed of the reasons for public choices, which is especially relevant when the law grants a wide margin of freedom for these choices.
b) Explicit, Clear, Sufficient, and Congruent Motivation
3. Under Article 50, paragraph one, of Brazilian Federal Law (Federal Law) 9,784, "the motivation must be explicit, clear, and congruent (...)"[2]. It is certain, however, that no normative command is explicit in its content, but only in its external form or exteriorization. Hence, for example, to say that the "dignity of the human person" cannot be violated by state or private behavior requires, in our judgment, that this guarantee be interpreted. It requires the law enforcer to extract from it the rights that emerge in favor of the individual and the limitations or restrictions on private or state conduct. Thus, in the given example, the phrase "dignity of the human person" is explicit; the rights and guarantees arising from this guarantee, on the other hand, require interpretation, and interpretation, in our opinion, is always implicit[3].
As legal institutes and normative prescriptions are externalized through signs[4], the construction of their meaning, content, and scope requires interpretation, as observed by José Souto Maior Borges[5]. This is because the legal norm and legal institutes reveal themselves through, and as a result of, the interpretation we make of them. In this sense, there is no such thing as an explicit normative command or an explicit legal institute[7]. And, for the same reason, there is no such thing as explicit motivation either.
After all, both the famously explicit legal concepts – such as legality – and the recognizably implicit ones – such as good faith and legal certainty – reveal this attribute at the formal level (whether, for example, they are or are not conveyed in a legal text, whether they have their existence proclaimed and externalized in the legislated text). As for the material level, its juridical substance, content, meaning, and scope are always revealed through interpretation.
Thus, interpretation seeks to extract from a legal institute or a legal prescription its normative load, and, for this reason, José Souto Maior Borges rightly states that every legal concept or command is, therefore, implicit.
4. Now, if motivation is also revealed in the legal world in the same way as legal concepts and normative prescriptions, it is unreasonable to intend to attribute a literal meaning to Article 50, paragraph one, of Federal Law 9,784, demanding "explicit motivation."
For this reason, the "explicit motivation" prescribed in Article 50, paragraph one, of Federal Law 9,784, must be understood as the need for the motivation to be externalized, allowing the interpreter and the recipient to become aware of the justifications that led to the production of the act[6].
5. Moreover, the idea of congruence - prescribed in Article 50, paragraph one, of Federal Law 9,784 - concerns the correlation and logical pertinence between a given fact and the legal measure adopted by the public authority (what reasonableness consists of), including the adequacy between the intensity of the measure adopted according to its purpose (which is the principle of proportionality).
This is also the thought of Augusto Durán Martínez, for whom:
"Congruence is a requirement of reasonableness. The administrative act is a reasonable declaration of will, since will presupposes intelligence. The motivation, besides being sufficient and congruent, must also, in order to be legitimate, be exact. The motivation is exact when the factual circumstances stated are true, when the rules of law invoked are applicable to the case and their interpretation is correct, and when the stated purpose can be satisfied by what was decided. While sufficiency and congruence appear from the mere reading of the act, exactness demands 'a leap off the paper,' in that it is necessary to compare what is said in the act with reality."[8]
But is the motivation of produced state acts always necessary?
c) The Motivation in State Acts Produced based on Bound and Discretionary Competence
6. The motivation of State acts, and especially the acts produced by the State in the exercise of the administrative function[9], is not always necessary.
With this, it is stated that administrative acts – which are the legal acts produced by the State in the exercise of the administrative function – do not always need to be accompanied by motivation.
In fact, an administrative act emanated without conscience or will (we return to this later, in item 16) can be guided by the same logic as the incidence of rules of conduct, since the triggering of its effects in the legal world occurs automatically and infallibly, as Alfredo Augusto Becker points out[10], with no need for any kind of motivation.
Take, for example, a traffic sign or the lights automatically and sequentially projected by a traffic light. These are forms through which constant emanations of administrative acts are perfected, based on the exercise of a bound competence, and validly produced without any kind of motivation. But that is not all.
The bound administrative acts that do not need motivation are those whose valid production (i) requires only the demonstration of the occurrence of the legal fact described in the hypothesis of the competence rule; or (ii) has motivation implicitly excluded by the law, since it would make the very protection of the public interest unfeasible (as in acts produced orally or urgently).
7. If some administrative acts produced in the exercise of a bound competence do not require motivation, other acts based on the same kind of competence do require it for their valid production. For this reason, one cannot fall into the error of assuming that no bound administrative act needs to be motivated.
Indeed, many acts produced in the course of administrative processes or procedures are strictly bound – such as, for example, the granting of precautionary measures[11], or the refusal to hear or the dismissal of claims or appeals filed outside the legally established period – and, notwithstanding this morphology, the motivation must be stated contemporaneously with their production, under penalty of offense to due legal process.
That is why Celso Antonio Bandeira de Mello states that "jurisdictional decisions, whatever they may be, confirmed or reformed, persistent or overcome by new jurisprudential guidance, are always pronounced as acts bound to said Law. So, there is no assumption that the judge has discretion to grant or reject a request for an injunction."[12] In this context, we welcome the concept of administrative function supported by the same author, for whom: "Administrative function is the function that the State, or whoever replaces it, exercises in the intimacy of a hierarchical structure and regime and that in the Brazilian constitutional system is characterized by the fact that it is performed through infra-legal or, exceptionally, infra-constitutional behaviors, all submitted to legality control by the Judiciary" (BANDEIRA DE MELLO, C. A. Curso de Direito Administrativo, 36th ed., Fórum, Belo Horizonte, 2023, p. 36).
Thus, even though the enabling legal rule may impose the exercise of the public function in a bound manner, it is not rare for it to require prior or contemporaneous clarification regarding the factual situation that occurred and the legal basis adopted to justify the adoption of a given state measure. This allows the administered to assess the existence of logical pertinence between the factual and legal circumstances and the solution adopted, according to the said competence rule.
For example: if there is an enabling legal rule, the Public Authority must adopt a precautionary measure capable of preserving the good of life discussed in the records of an administrative process or procedure, provided that it demonstrates, with motivation, that the authorizing requirements of this procedural measure are present (namely, fumus boni iuris and periculum in mora).
8. Despite the unquestionable validity of some types of administrative acts produced without conscience or will and without motivation, as long as they result from the exercise of a bound competence, the same conclusion is not reached where the administrative act (produced without conscience or will) is emanated based on discretionary competence.
And it is precisely on this shifting ground that we tread these days, in which the Public Authority - out of necessity or incentive - produces discretionary acts making use of so-called "artificial intelligence." This happens in Brazil and abroad.
9. Thus, the classic theme of motivation is gaining special prominence nowadays, as artificial intelligence is used more and more frequently to (i) propose to the public agent the content of the legal act to be produced; or (ii) produce, on behalf of the public agent, a specific legal act.
In this context: how can these discretionary acts, produced without conscience or will because they were emanated or suggested by artificial intelligence, be motivated? Who is in charge of motivating them? Is it the artificial intelligence itself?
To answer these questions, we must, initially, know how the proposed solution is built using artificial intelligence and, thus, scrutinize how its valid production takes place (or can take place) according to the legal system.
## II. THE PRINCIPLE OF MOTIVATION IN ACTS SUGGESTED OR PRODUCED BASED ON ALGORITHMS
a) The Dilemma in the Motivation of the Act Performed or Proposed based on Algorithms
10. At the present historical moment, it is questioned to what extent one can demand the motivation of a state act produced through a program that employs algorithms, the basis for the construction of an artificial intelligence system.
Therefore, it is imperative to know what models are used to create what is called artificial intelligence and, thus, to assess the extent to which it can legitimately be adopted in Brazilian law (an idea that seems to extend to other legal orders whose normative systems require motivation in the production of state acts).
b) Understanding Algorithm Arrangements and the Production of Acts based on Artificial Intelligence
11. Assessing the limits of the use of artificial intelligence in the production (or suggested production) of administrative acts requires, preliminarily, that we know how these contraptions propose solutions to the problem posed.
According to Bruno Romani and Marcos Muller, "Until 2017, the most used AI architecture (or technique) to analyze and generate text was the so-called recurrent neural networks (RNN). They 'look' at a set of terms and generate the next word sequentially, always based on what appears before - it is a kind of 'word queue'."[13]
It turns out that "(...) the RNNs have two problems. The first is that they cannot analyze several words at the same time, which slows down the training process of these systems. In addition, they cannot keep their 'attention' in very long sentences and end up 'forgetting' the first analyzed terms. That is, RNNs cannot handle a very long queue of words. Therefore, they are incapable of writing long paragraphs"[14], which led to the creation of an algorithm capable of solving this problem, which came to be called the 'Transformer', the basis of systems such as BERT, from Google, and GPT, which would later come to power ChatGPT.
And how, concretely, does the artificial intelligence system that can be and has been used in the production (or suggested production) of legal acts work? As "computers (...) do not understand words (...) the language needs to become mathematics. (...) Each term gets a number (called a token), and these identifications are transformed into multidimensional vectors called 'embeddings'. The 'embeddings' help to preserve the idea of semantics because they group the vectors of similar words – for example, the vectors of 'spring' and 'summer' tend to be closer to each other than to the vector of 'cloud'. Another element of this analysis is the position of these words in the sentence. A position code (called position 'encoding') helps determine which words tend to appear together and where they tend to appear in a sentence. This is important because the placement of a word in the sentence changes its meaning. For the machine to understand the relationships and generate parameters, massive volumes of data are needed, called large language models (LLM). GPT-3.5, the first 'brain' of ChatGPT, was trained with 45 TB of text, including 10 billion words and 8 million texts. It includes the entire Wikipedia in English, packages of digital books (collectively called Books 1 and Books 2), and two massive packages of web pages (named The Common Crawl and WebText2)"[15]. Here, then, is the transformation that takes place on the database that serves as support for the operation of the artificial intelligence system.
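The mechanism described above can be sketched in a few lines. The following is a purely illustrative toy, not any real model: the token ids and three-dimensional vectors are invented for this example (real systems use vocabularies of tens of thousands of tokens and vectors with hundreds of dimensions), but it shows how "semantic closeness" becomes geometric closeness between embeddings.

```python
# Toy sketch (illustrative only): words become tokens, tokens become
# vectors ("embeddings"), and semantic similarity becomes geometric
# proximity. All numbers below are invented for illustration.
import math

# Hypothetical token ids and 3-dimensional embeddings.
vocab = {"spring": 101, "summer": 102, "cloud": 103}
embeddings = {
    101: [0.9, 0.8, 0.1],    # 'spring'
    102: [0.85, 0.75, 0.2],  # 'summer' (near 'spring')
    103: [0.1, 0.2, 0.9],    # 'cloud'  (far from both)
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_seasons = cosine(embeddings[vocab["spring"]], embeddings[vocab["summer"]])
sim_cloud = cosine(embeddings[vocab["spring"]], embeddings[vocab["cloud"]])
print(f"spring~summer: {sim_seasons:.2f}, spring~cloud: {sim_cloud:.2f}")
```

In this toy geometry, 'spring' and 'summer' score far higher with each other than either does with 'cloud', which is all that "preserving the idea of semantics" means at the mathematical level.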
This theme is of transcendent importance because, if the database is fed according to biases – texts that attribute legitimacy to racist or sexist behavior, mitigating the rights of the administered and those under jurisdiction, etc. – the underlying algorithm ('embeddings' and position 'encoding') will follow this trend in the answers provided. We will therefore have racist, sexist responses that mitigate the rights of the administered and those under jurisdiction.
This is because "(...) within AI systems, the word becomes mathematics. AI tools that generate text use probabilistic analysis models to understand the relationship between words and select the terms that best meet user demands (...)"[16]. Precisely because the algorithm uses probabilistic methods to select words that, taken together, can make sense, it is not uncommon to observe "a phenomenon baptized in the scientific community as 'hallucination', which refers to texts invented by machines that extrapolate reality or common sense (...)". Turning words into numbers and vice versa makes clear the reasons why ChatGPT hallucinates or argues about erroneous information. It is just picking words from a probabilistic model. There is no feeling or understanding.[17]
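The probabilistic selection just described can be caricatured as follows. This is a minimal sketch under invented assumptions (the word list and probabilities are made up; a real model estimates such distributions over billions of parameters), but it illustrates the point: the "choice" of the next word is a weighted draw, and an improbable word can still be drawn, which is the seed of "hallucination."

```python
# Minimal sketch (illustrative, not a real language model): the next
# word is drawn from a probability distribution; there is no
# understanding behind the draw.
import random

# Hypothetical next-word probabilities after the fragment
# "the appeal is" -- invented numbers for illustration.
next_word_probs = {"dismissed": 0.55, "granted": 0.30, "purple": 0.15}

def pick_next_word(probs, rng):
    """Sample one word: probable words dominate, but improbable ones
    ('purple') still appear sometimes."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the run is reproducible
sample = [pick_next_word(next_word_probs, rng) for _ in range(1000)]
print({w: sample.count(w) for w in next_word_probs})
```

Over many draws the frequencies track the assigned probabilities, and nothing in the mechanism checks whether the chosen word is true, sensible, or lawful.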
12. Thus, the artificial intelligence that caused a real worldwide frisson, ChatGPT, is described as follows, consistently with the algorithm models used: "...ChatGPT does not understand what you type for one simple reason: within AI systems, the word becomes mathematics. AI tools that generate text use probabilistic analysis models to understand the relationship between words and select terms that best meet user demands."[18]
Therefore, in so-called artificial intelligence systems, there is no feeling, no understanding, and no look at the reality described in light of the purpose of the norm (which is the protection of the public interest or the common good). There is simply an arrangement of words that, by probabilistic means, can make sense, according to the database provided. The normative act is produced or suggested without conscience or will.
Artificial intelligence does not propose solutions properly speaking; it proposes, simply, an arrangement of words according to the greater probability that they make some sense, given the database that supports it. There is no awareness, feeling, or empathy; the proposed solution rests on a probabilistic method meant to ensure that the words used, linked together as they are, make sense.
In legal terms, the proposal to produce a legal act (or its production) based on artificial intelligence is equivalent to turning on a traffic light: they are legal acts produced without conscience or will.
Let us look at the fields in which this probabilistic tool is used in law, and at its legitimacy, since these are legal acts produced without conscience or will.
c) Artificial Intelligence in the Preparatory Phase of the Production of Legal Acts
13. In May 2018, the website of the Supreme Court of Brazil (STF) reported the start of a project called 'Victor', an artificial intelligence tool asked to "read" all appeals sent to that Court and identify their link to issues of 'general repercussion'.[19] This reading, we now know, either transforms this database into a "word queue" or, in a more evolved form, maps each word to a token, which is then transformed into multidimensional vectors ('embeddings') that intend to preserve the semantic idea according to similar words and their position in each sentence (position 'encoding'). This is what the "reading" performed by 'Victor' means.
But that is not all. At that stage, according to the news, 'Victor' was "in the construction phase of its neural networks to learn from thousands of decisions already handed down in the STF regarding the application of various issues of general repercussion." As reported, at that stage of development of 'Victor', it was sought to achieve high levels of accuracy – which is the measure of effectiveness of the machine -, allowing that tool to assist servers in their analyses.
The news concludes by stating that "The machine does not decide, it does not judge. This is human activity. It is being trained to work in layers of process organization to increase the efficiency and speed of judicial evaluation."
14. In this context, 'Victor' is used to produce a legal act that, in the course of a procedure, enables, instrumentalizes, or creates the necessary conditions for the final legal act to be produced. Thus, 'Victor' produces preparatory legal acts. It innovates the legal order but does not propose (or did not propose, at that time) the final decision to be adopted.
Therefore, it is noticeable that, despite the news reported by the STF stating that 'Victor' does not exercise a jurisdictional function, this artificial intelligence does interfere with the judicial decision to be handed down, as it innovates the legal order and produces acts that feed into the production of final legal acts.
15. Now, if the database of 'Victor' is composed of a vast set of decisions handed down by the STF in which the processing of an appeal regarding a specific matter has not been allowed – even though there are cases in which that matter has been examined by the STF due to the peculiarities of the case –, 'Victor' will learn and decide in a biased way, not admitting appeals in which the referred matter is debated. In other words, if 'Victor' is fed only with decisions that have not admitted the appeal in these cases, it will decide that way, as its database will not contain decisions to the contrary. And even if its database is filled with contrary decisions, it will tend to apply the decisions that are at its disposal in greater quantity, and it will refuse to hear the appeal on that matter. But that is not all.
As this method is based on artificial intelligence, the decision suggestion derives from the mathematical probability that the words that formed its database have some logical relevance to each other, according to the question asked. As seen, the proposed solution derives from a mathematical probability that these elements make sense; there is no reasoning, there is no analysis of the concrete case, there is no look at reality. It is a legal act produced without conscience or will.
Hence the result proposed through this tool can be circumstantially correct, incorrect, or even a real hallucination. All this is based on the probability of an arrangement between words, since it is a legal act produced without conscience or will.
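The bias mechanism described in this item can be reduced to a deliberately crude caricature. The sketch below is an assumption-laden illustration (the training set, labels, and "learning rule" are invented; it is not Victor's actual system): a tool trained only, or predominantly, on past denials can only reproduce denial, whatever the merits of the new appeal.

```python
# Caricature (not the actual 'Victor' system): learning from an
# imbalanced database of past decisions reduces to repeating the
# dominant historical outcome.
from collections import Counter

# Hypothetical training set: past decisions on a given matter.
# Denials vastly outnumber admissions, as in the scenario above.
training_decisions = ["inadmissible"] * 98 + ["admissible"] * 2

def learned_rule(decisions):
    """Crude stand-in for 'learning' from an imbalanced database:
    predict whatever outcome dominates the historical record."""
    return Counter(decisions).most_common(1)[0][0]

prediction = learned_rule(training_decisions)
print(prediction)  # the majority outcome, regardless of the new case
```

Real machine-learning models are far more sophisticated than a majority count, but the structural point stands: when the contrary decisions are absent or scarce in the database, the statistically dominant outcome is reproduced without any analysis of the concrete case.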
d) Artificial Intelligence used in the Proposition and Production of Final Legal Acts
16. Celso Antonio Bandeira de Mello has been saying for some time that there is no obstacle to the production of an administrative act by artificial means, in what he calls legal acts produced without conscience or will. According to him:
"(...) there are legal - administrative - acts that are not necessarily voluntary human behavior. They may arise 'casually', i.e., without their producer having intended to dispatch them and, therefore, with no intention of generating the corresponding effects.
Take the following hypothesis as an example. Suppose that a public agent in charge of interfering, when necessary, in a control center for traffic lights in the city (or in a certain part of it), usually managed by computer, inadvertently presses a button concerning a given traffic light signal. As a result, on a certain corner, the green light – i.e., the order to 'go' – lights up three, four, or five seconds earlier than programmed and, correspondingly, the red light lights up – the order to 'stop' – at the other angle of the intersection. Due to this, the mentioned orders will have been unintentionally generated, which are administrative, legal acts. And he or she may even never know it happened. This is what would happen if he or she were talking to another employee, with their back to the keyboard, having touched it without even realizing it. There was no volitional manifestation, and there was an administrative, legal act.
Imagine, now, the same control center for traffic lights, commanded by a computer, and it is now in charge of changing lights according to radar signals indicating the levels of traffic congestion in the region. It is a machine that will be carrying out the successive 'go' or 'stop' orders, symbolized by the color of the lights; not a man. By the way, in the future, acts performed by machines will certainly be common. Even today there are already other cases in addition to the one mentioned before. There are 'parking meters' that issue fines once the parking period has been exceeded. In these cases, therefore, there are demonstrations that there may be administrative acts that are not produced by men. On the other hand, one cannot speak of the will of the machine that dispatches them"[20], an idea that extends to modern Artificial Intelligence tools (...)[21]
17. So much so that the Attorney General's Office of the Federal District[22] uses 'Luzia', considered the first "robot lawyer" in Brazil, created by the startup LegalLabs[23]. Its role in that public body is to analyze the progress of cases and, based on that, suggest manifestations for the Public Advocacy, researching and collating information regarding individuals, such as addresses and assets[24].
18. Luis Felipe Salomão and Daniel Vianna Vargas testify that artificial intelligence has been used to carry out typically jurisdictional acts. They state that "in another turn, the identification of causes of action, factual and legal configurations, issues, ratio decidendi, suitability of causes, distinguishing, and reasoning are functions inexorably linked to the exercise of jurisdiction. Some of the practices listed above - with utilitarian justification and quantitative efficiency - reveal the use of artificial intelligence in the decision-making process, that is, in the jurisdictional activity." They go on to note that "depending on the result of the legislative process, judgments made through (with the help of) the RADAR and 'VICTOR' systems may be questioned, mainly by those who feel harmed by the result. Remember, the simplest and most repeated judgment is still a judgment."
And, for this very reason, they conclude that "It is necessary, at the very least, a legal theoretical framework of the tools of governance, regulation, and control of the so-called algorithms used to aid and, sometimes, replace the judge in the act of judging."[25]
19. This alert sounds especially relevant for "automation in judgments" because, depending on the artificial intelligence system used for this purpose, the machine learning process adopted by the algorithm in the production of the final legal act takes place in a "technological black box" environment.
A study undertaken by "The Australasian Institute of Judicial Administration Incorporated" describes this process as follows, with our highlights:
"A technological 'black box' refers to human inability to grasp the inner workings of some technological systems. Even if humans can sometimes understand the inputs and outputs of a technological system, were they to view the inner workings of that system, they might find it incomprehensible. Accordingly, the person is unable to verify the integrity of the process used by the AI system to arrive at the output from the input. An explanation of connections in an artificial neural network is as unhelpful in understanding the system as is a neuron-by-neuron description of a human brain in understanding the reasons for a complex decision made by a human. This has led to interest in 'explainable' AI."[26][27]
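The study's point that a "connection-by-connection" explanation is unhelpful can be made concrete with a tiny sketch. The network below is invented (two hidden neurons, made-up weights, no training); even at this miniature scale, and with full access to every number, the weights give no human-readable reason for the output, which is precisely the black-box problem at scale.

```python
# Illustrative sketch of the "black box": even with full access to a
# tiny network's weights (all numbers invented), the weights themselves
# state no reason for the output.
import math

hidden_weights = [[0.7, -1.2], [0.4, 0.9]]  # two hidden "neurons"
output_weights = [1.5, -0.8]

def forward(x):
    """Compute the network's output for a 2-feature input.
    Every operation is visible; none of them is a 'reason'."""
    hidden = [math.tanh(w[0] * x[0] + w[1] * x[1]) for w in hidden_weights]
    return sum(w * h for w, h in zip(output_weights, hidden))

score = forward([1.0, 0.5])
print(f"output: {score:.3f}")  # the number is visible; the 'reason' is not
```

A real decision-making network has millions or billions of such weights, so inspecting them individually is, as the quoted study says, as unhelpful as a neuron-by-neuron description of a brain.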
20. These circumstances reveal the need for the system of technological positivization, with the use of artificial intelligence, to be transparent and to allow the administered to identify the logical path (step by step) or ratio decidendi followed by the algorithm in the production of a legal act. Hence, some designate this impediment as the "principle of interdiction of algorithmic arbitrariness."[28] All of this so that, even in the case of a legal act produced without conscience or will, the administered has the subjective right of access to the method by which it was produced, in order to assess the legitimacy of its automatic production.
e) The Right of the Administered Concerning the Act Performed or Whose Performance was based on Algorithms
21. Given these considerations, it seems to us that the principle of motivation - associated with the principles of publicity, transparency, and good faith - gives the administered the subjective constitutional right to:
- 21.1. (i) access the data and the database that served as support for the construction of the algorithm on which the artificial intelligence is based and built. This is so that it is possible to determine, for example, whether the database on which the algorithm starts its "learning process" – also known as 'machine learning' – (a) adopts minority, majority, or isolated positions from jurisprudence or doctrine; (b) takes superseded precedents or outdated legislation as support; (c) adopts racist, sexist, or otherwise prejudiced standards as a parameter, etc.
Therefore, the construction of an algorithm free of bias - guaranteeing impartiality, objectivity, and predictability in decision-making - must be subject to the widest verification and control, which is why unrestricted access to the information that formed the database on which the algorithm rests is required.
After all, as Luis Felipe Salomão and Daniel Vianna Vargas observe, "From the outset, it is clear that the supervision of data insertion is necessary for a minimum of control and regulation of the operation using algorithms, since these can be decisive for the results achieved and, sometimes, it is the only human 'influence' on the task developed through artificial intelligence."[29]
Supervision in the insertion of data refers to another equally relevant point, envisaged by Lenio Luiz Streck: "how to control discretion to structure the algorithm that will solve the problem of discretion?"[30]
21.2. (ii) access the source code of the algorithm used, because, as Henrique Alves Pinto correctly observed, "...decisions (based on artificial intelligence) have undergone an intense process of automation based on criteria that are, most of the time, unknown or not well explained by their creators, so that they begin to have a great influence on people's daily lives without people necessarily noticing it."[31]
Hence, by accessing the source code, it will be possible to determine and audit the way the artificial intelligence (a) constructs the chain of logical and factual assumptions accepted for a final decision, (b) builds the logical chain of operations resulting from the application of positive law (or the interpretation that the Courts make of it) to the facts submitted to its examination, and, from this, (c) reaches the conclusions that will materialize in a legal act.
The possibility of examining and controlling the way the algorithm works is essential to determine the legal fairness of the intermediate conclusions that serve as support for the final decision-making, since, as warned by Luis Felipe Salomão and Daniel Vianna Vargas, "(...) when an artificial intelligence system is used to identify the cause of action, the legal framework of a given claim filed, delimiting the object of judgment and, from the consultation of a database (case law), lead to a conclusion about that conflict brought to court, the situation is considerably different. The opacity and algorithmic biases make this task extremely difficult (or impossible) to control."[32]
This stage of gauging the fairness of acts produced based on artificial intelligence has come to be treated as an autonomous principle by some foreign and Brazilian authors, such as Ricardo Muñoz[33] and Juarez Freitas[34], respectively, under the name "principle of explainability."
In this context, we adhere to the criticism voiced by Lenio Luiz Streck[35]: explaining is not carrying out hermeneutics or interpreting. And what about 'leading cases'? Does artificial intelligence explain without interpreting? What about the valuation of legal assets? How can artificial intelligence value the dignity of the human person without being a human person? Or the right to a dignified life without even having a life?
21.3. (iii) access and operate the interface used by the Public Authority for artificial intelligence, so that the individual can, if he or she so wishes, use the same means to anticipate the proposed response (or the response itself) to be provided by that mechanism. In this way, an environment of greater legal certainty will be created, since the individual will know in advance whether, in the eyes of the Public Authority, a conduct he or she intends to carry out will be mandatory, permitted, or prohibited, and whether the answer provided in advance by the artificial intelligence seems compatible or incompatible with positive law and, if applicable, may challenge it in court.
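The idea of anticipating, through the same interface the Authority uses, whether an intended conduct would be treated as mandatory, permitted, or prohibited can be sketched as follows. This is a deliberately simplistic illustration under invented assumptions: the rules, activities, and qualifications are all hypothetical, and a real system would be far more complex.

```python
# Hypothetical sketch: a shared query interface through which a citizen can
# anticipate how the Public Authority's system would qualify a conduct.
# The rule set and activity names are invented for illustration.

RULES = [
    # (predicate over the described conduct, legal qualification)
    (lambda c: c.get("activity") == "file_tax_return", "mandatory"),
    (lambda c: c.get("activity") == "street_vending" and not c.get("licensed"), "prohibited"),
]

def anticipate_qualification(conduct: dict) -> str:
    """Return the qualification the Authority's system would give in advance."""
    for predicate, qualification in RULES:
        if predicate(conduct):
            return qualification
    return "permitted"  # residual freedom: what is not forbidden is allowed

print(anticipate_qualification({"activity": "file_tax_return"}))  # mandatory
print(anticipate_qualification({"activity": "street_vending"}))   # prohibited
print(anticipate_qualification({"activity": "jogging"}))          # permitted
```

Because the same function is available to both sides, a citizen who disagrees with the anticipated qualification knows exactly which rule produced it and can challenge that rule in court.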
In sum: in cases where the solution (or proposal for a solution) owed by the State in a given situation comes to be provided by the use of artificial intelligence, in which the legal act is produced without conscience or will, the layers of the principles of motivation, publicity, transparency, and good faith impose on the Public Authority the duty to grant unrestricted access (i) to the database on which the algorithm started its learning process ('machine learning')[36]; (ii) to the source code of the algorithm, the artificial neural network (ANN)[37], or other algorithmic mechanisms used (such as the Transformer); and (iii) to the interface that the Public Authority itself uses to request a proposed solution from the artificial intelligence.
It is possible, however, that access to this interface by the administered may be prohibited or limited for reasons of public interest. This may be the case, for example, where the platform is used for surveillance or fraud detection, in which case free access by users could weaken these control mechanisms.
22. As long as the proposed guarantees are met, the production of a "linked act" based on artificial intelligence is permitted, even without any kind of motivation, since this act will have been produced by means devoid of conscience and will.
Does the same solution apply to a legal act whose emanation is rooted in a discretionary competence? Can this "discretionary act" be produced without will and conscience, through artificial intelligence? Based on the assumptions adopted and under our current normative model, the answer is negative, since discretion requires awareness and will, exclusively human attributes that cannot be reproduced in algorithms.
But as the law creates its own realities, there is the legal possibility of authorizing the production of discretionary acts without conscience or will. Perhaps then, within this possibility, the proverb spoken in Asinaria, by Titus Maccius Plautus, will once again gain currency among us. After all: Homo homini lupus.[38]

On the distinction between programmed and non-programmed algorithms, Salomão and Vargas explain: "The programmed algorithm will give detailed instructions for each of the conversions to be carried out by the driver, but it is known that the path is the one previously defined by the programmer. As for non-programmed algorithms, they have random learning capacity. The data and the desired objective are inserted into the system (input). The system will be the one to produce the algorithm (output), transforming it into another, 'writing' its own programming, without interference from a human programmer. It is the so-called 'machine learning' technique. The machine collects data, interprets them, and transforms them into new data, making predictions about intermediate results, 'learning' from them and developing models and new algorithms without the need for new programming. The more data initially inserted, the greater the system's capacity to learn." In: SALOMÃO, L.F. & VARGAS, D.V., "Inteligência artificial no Judiciário. Riscos de um positivismo tecnológico" (...), Ibid., pp. 26-30.
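The contrast Salomão and Vargas describe between a programmed algorithm and one that 'writes its own programming' can be sketched minimally. All data and thresholds below are invented for illustration: a rule fixed entirely by a human programmer is set against a tiny learning routine that derives its own decision threshold from example data.

```python
# Hypothetical sketch contrasting the two kinds of algorithm.
# 1. Programmed algorithm: every step fixed in advance by the programmer.
def programmed_rule(value: float) -> bool:
    return value >= 10.0  # threshold chosen directly by a human

# 2. "Machine learning": only data (input) and objective are given; the system
#    derives its own decision rule (output) from the examples.
def learn_threshold(examples: list) -> float:
    positives = [v for v, label in examples if label]
    negatives = [v for v, label in examples if not label]
    # midpoint between the two classes observed in the data
    return (min(positives) + max(negatives)) / 2

data = [(2.0, False), (4.0, False), (12.0, True), (14.0, True)]
threshold = learn_threshold(data)

def learned_rule(value: float) -> bool:
    return value >= threshold

print(threshold)             # 8.0
print(learned_rule(9.0))     # True
print(programmed_rule(9.0))  # False: the human rule and the learned rule diverge
```

The divergence in the last two lines is the legal point: the learned rule was never written by any human, so its criterion can only be reconstructed by examining the data and the learning code, which is why access to both was proposed above.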
## III. CONCLUSIONS
23. There is not and never has been a legal or logical impediment for legal and administrative acts to be produced by machines or an artificial intelligence system. This is an option open to legislative choice.
Despite this, the Constitution of the Republic, by positing the principles of motivation, publicity, transparency, and good faith, requires that the public agent indicate the facts and the logical concatenation adopted, given the system of positive law, to produce any legal act.
This burden placed on the State serves as a skillful instrument to allow the administered to control the fairness of the state legal acts produced.
Thus, the breadth of this guarantee is neither affected nor relativized by the fact that the legal act was produced using an algorithm, or was produced by a human being based on a proposal constructed using an algorithm, provided that it is a legal act grounded in bound competence. And, in the first case, even without any kind of motivation, it will have been produced by means devoid of conscience and will.
References

ARAÚJO, E. (2010). Administrative Law Course.
ARAÚJO, C. (2017). Semiótica jurídica (Legal Semiotics).
BANDEIRA DE MELLO, C. A. (2023). Mandado de segurança contra denegação ou concessão de liminar (Writ of mandamus against the denial or granting of an injunction).
BECKER, A. (1998). Teoria Geral do Direito Tributário (General Theory of Tax Law).
BELL, F. (2002). Controversies, State Court Judges, and Decision Making.
BORGES, J. (1997). O princípio da segurança jurídica na criação e aplicação do tributo (The principle of legal certainty in the creation and application of taxes).
BRAZIL. Constitution of the Federative Republic of Brazil (1988).
CARVALHO, A. (2023). Relatório de pesquisa do projeto 'Tecnologia aplicada à gestão dos conflitos no âmbito do Poder Judiciário Brasileiro' (Research report of the project 'Technology applied to conflict management within the scope of the Brazilian Judicial Branch').
HUTCHINSON, T. (1980). La actividad administrativa, la máquina y el Derecho Administrativo (Administrative activity, machinery, and Administrative Law).
MARTÍNEZ, A. (2013). Motivación del acto administrativo y buena administración (Motivation of the administrative act and good administration).
MUÑOZ, R. (2022). El control judicial de la actividad administrativa automatizada (Judicial control of automated administrative activity).
PGDF (2023). Inteligência artificial em Execução Fiscal (Artificial intelligence in tax execution).
PGDF (2023). Model for procedural acts.
PINTO, H. (2020). A utilização da inteligência artificial no processo de tomada de decisões: por uma necessária accountability (The use of artificial intelligence in the decision-making process: for a necessary accountability).
PLAUTUS, T. M. Asinaria. Madrid: Gredos.
SALOMÃO, L. F. & VARGAS, D. V. (2022). Inteligência artificial no Judiciário. Riscos de um positivismo tecnológico (Artificial intelligence in the Judiciary. Risks of technological positivism).
STF (2023). Inteligência artificial vai agilizar a tramitação de processos no STF (Artificial intelligence will speed up the processing of lawsuits at the STF).
STRECK, L. (2020). Um robô pode julgar? Quem programa o robô? (Can a robot judge? Who programs the robot?).
ZOCKUN, M. & ZOCKUN, C. Z. (2019). A relação de sujeição especial no direito brasileiro (The special subjection relationship in Brazilian law).
No ethics committee approval was required for this article type.
Data Availability
Not applicable for this article.
How to Cite This Article
Carolina Zancaner Zockun. 2026. "The Use of Artificial Intelligence in The Production of State Acts". Global Journal of Human-Social Science - F: Political Science GJHSS-F Volume 23 (GJHSS Volume 23 Issue F5).