On the Trust Building of Human-Machine Collaborative Trials
Author: Yi Junlin, Assistant Professor, KoGuan Law School, Shanghai Jiao Tong University
Abstract: With the popularization of artificial intelligence in the construction of smart justice in China, judicial trials have gradually taken on a human-machine collaborative pattern of "the judge as principal, the machine as auxiliary". At present, judicial artificial intelligence remains hidden behind the judge, and trial trust is substituted by trust in the judge's personality, in the court system, or in artificial intelligence algorithms. Personality trust rests on the assumption that the judge is a "perfect" person, which leaves a real gap between the ideal and the actual "ordinary" judge. The court organization relies on the judicial responsibility system as its main institutional arrangement for constraining human-machine interaction, but under the influence of the logic of predictive adjudication there is a risk of "human-machine collusion", in which machine suggestions are adopted blindly. Algorithmic trust mechanisms can play a local role in repairing trust, yet they are confined to the binary thinking of "subject versus tool" and can hardly constrain the collaborative process between judges and algorithms. Starting from the communication between the trustor and the trusted object, personality trust, institutional trust, and algorithmic trust should be reflectively integrated, judicial artificial intelligence should be expanded from an internal case-handling tool into an external communication tool, and full human-machine collaborative legal discussion should be promoted, so as to lay a more solid foundation of trust for human-machine collaborative trials.
1. The trust theme of human-machine collaborative trial
In recent years, artificial intelligence technology has developed rapidly, bringing a series of opportunities and challenges to human society. Driven by massive corpora and abundant computing power, large language models based on neural networks and deep learning have brought disruptive innovation to artificial intelligence, and machine algorithms have quietly crossed the threshold of the Turing test. With the steady advancement of smart-justice construction around the world, artificial intelligence continues to permeate and empower the trial work of courts, in aspects ranging from adjudication and enforcement to litigation services and judicial management. Since the 13th Five-Year Plan period, China has been making systematic deployments in the field of smart justice, and a collaborative trial mode in which human judges and artificial intelligence are deeply integrated is gradually becoming the new normal of judicial work.
1.1 The Development Stages of Artificial Intelligence Intervention in Judicial Trials
The intervention of artificial intelligence in judicial trials is closely tied to breakthroughs in computer science and technology, and it exhibits discontinuities associated with disruptive innovation events. Observing these technological breakpoints, two main threads can be roughly distinguished: human-machine confrontation and human-machine collaboration. The former asks whether machine algorithms can replace human judges; the latter emphasizes the interaction and collaboration between human judges and machine algorithms in judicial trials. Of course, the two are not mutually exclusive; they are more like two intertwined voices in counterpoint. Still, the breakpoints help identify the dominant theme of academic debate in a given period.
The first breakpoint occurred in 2016, setting off a heated discussion of human-machine confrontation over the subjectivity of the trial. That year, Google DeepMind's AlphaGo defeated the renowned Go player Lee Sedol, causing a sensation in the technology industry and prompting broad reflection across society on which domains of human activity artificial intelligence still cannot, or should not, enter. Against this backdrop, the legal community's debate over "human-machine confrontation" between human judges and artificial intelligence began. To some extent, judges became the players in a new round of contest between human and artificial intelligence, with the board transformed into the sensitive and complex field of judicial trial. In addition, the closely watched American case State v. Loomis was finally decided in 2016. In that case the human judge's position as the adjudicator did not change, but the question of whether the judge had relied excessively on a machine algorithm raised many concerns. Thus, outside the main battlefield of human-machine confrontation, reflection on the potential risks of human-machine cooperation was also taking shape.
The second breakpoint occurred in 2022, when the collaborative trial model of "the judge as principal, the machine as auxiliary" gradually became a consensus. In technology, ChatGPT led that year's breakthrough in large language models. For a long time, artificial intelligence performed unsatisfactorily at processing natural-language text, which hindered its adoption in daily work and life. Large language models changed this completely, and the general public could quickly learn to use them through conversational dialogue. As a result, artificial intelligence has become a collaborative partner in fields such as software development, teaching and research, and content design, giving rise to the interesting phenomenon of "asking AI first when facing a problem". The human-machine dispute over the subjectivity of the trial also settled around this time, and the human-machine collaborative trial model of "the judge as principal, the machine as auxiliary" was established in China's judicial practice. The landmark event was the release of the Opinions of the Supreme People's Court on Regulating and Strengthening the Judicial Application of Artificial Intelligence (Fa Fa [2022] No. 33) in December 2022, which responded directly to the controversies over human-machine confrontation that had raged since 2016. At this point the subjectivity of human judges in trials was basically consolidated, and research began to shift toward the problem of collaboration between judges and machines.
It should be noted that China's discussion of human-machine confrontation and collaboration can be traced back at least to the "computer sentencing" of the early twenty-first century. In 2004, the Zichuan District Court of Zibo City, Shandong Province introduced computer-assisted sentencing technology that automatically calculated sentences from the circumstances of the offense. The attempt sparked widespread discussion but, owing to substantial controversy, was never widely promoted. Given the technological limitations of the time, so-called computer sentencing amounted to little more than simple look-ups and arithmetic over sentencing tables, far from comparable to today's large language models such as ChatGPT and DeepSeek. Yet the Zichuan court already faced doubts about machines replacing judges, and its response sounds quite familiar: the computer was only an auxiliary tool, and the judge remained the adjudicator. Evidently, this early thinking on human-machine confrontation and collaboration still resonates today.
1.2 Trust challenges faced by human-machine collaborative trials
From early computer sentencing to today's "future judge assistants" built on large language models, academic research has often seemed to follow and respond to technological milestones. Technological innovation concentrates attention on specific problems for a short period, but new technologies can also raise new problems at any time, producing jumps and even breaks in the related research.
Behind these breaks, however, runs a continuous logical thread, and its theme is trial trust. Trust generally refers to a state of confidence in the reliability of the actions of individuals, organizations, or systems. Trial trust, as a species of trust, refers to the general public's confidence in judicial trial activities. Trust serves to simplify complexity: when trust is high, people reduce their intervention in and control over the trusted object; when trust is low, they may resort to control to improve the predictability of outcomes, or avoid contact with the trusted object altogether. In judicial trials, building trust not only helps maintain social order and legal authority but also promotes the effective implementation of judicial outcomes.
Viewed through the lens of trial trust, the series of discussions on artificial intelligence's involvement in judicial trials can be read, at a deeper level, as discussions of trust. Whether in the controversy over computer sentencing at the beginning of the twenty-first century or the debate over the subjectivity of "AI judges" since 2016, what is reflected is people's distrust of machine algorithms intervening in trials or even usurping judges' adjudicative power. Even after the collaborative division of labor with the judge as principal and the machine as auxiliary, public trust in judicial trial activities is clearly not rock solid. Viewed more broadly, the trust risk of human-machine collaborative trials is closely related to the trust crisis provoked by contemporary artificial intelligence. Hot-button issues such as algorithmic black boxes and algorithmic bias reveal, at a deep level, the trust dilemma of artificial intelligence systems. The intervention of artificial intelligence in judicial trials touches people's deep-seated fear of "humans being judged by machines". It may be said that the trust problem of human-machine collaborative trials stands at the very forefront of this trust crisis.
1.3 Finding the Trust Foundation for Human-Machine Collaborative Trials
Trial trust is not a new issue of the artificial intelligence era; in China it sometimes appears in the narrative discourse of "judicial credibility". Judicial credibility usually refers to the credibility and reputation of the courts, that is, the public evaluation produced by the courts' conduct, performance, and achievements. Judicial trust and judicial credibility are mutually related, yet differ in subtle and important ways. Credibility takes the perspective of the court as the trusted object, whereas trust observes the trusted object from the perspective of the trustor. Judicial trust is therefore the richer concept, involving a wider range of trust objects, which for research purposes can include individual judges, court organizations, and judicial artificial intelligence.
Starting from the perspective of trial trust, three types of trust, and corresponding models of trust building, can be extracted from the research literature on "artificial intelligence + judicial trial". The first simplifies the problem of trial trust, replacing it with personal trust in the professional ability and moral character of the adjudicator. The second rests trial trust on institutional trust in the judicial organs, especially in the systems of judicial responsibility and judicial supervision. The third seeks to solidify the foundation of trial trust through algorithmic trust, by improving indicators of judicial artificial intelligence such as accuracy, interpretability, and accountability. Permutations and combinations of these three types yield further approaches to trust building, giving trial trust in the age of artificial intelligence exceptionally rich connotations and diverse construction paths.
By critiquing and reflecting on these three models of trust building, the author attempts to depict the core logic behind the establishment and maintenance of trial trust, to reveal what changes and what remains constant in trial trust in the age of artificial intelligence, and on that basis to propose an integrated theory of trust building grounded in legal communication and discussion, laying a more solid and reliable foundation for human-machine collaborative trials.
2. The practical limitations of the judge's personality trust
The general public's trust in judicial trials is largely inseparable from its trust in judges. As the window between the court and the public, judges earn outside respect and trust through professional competence, moral character, and impartial conduct. With the application of artificial intelligence in judicial trials, the first question that arises is whether people trust artificial intelligence as they trust human judges. A further question is whether trust in human judges can form a solid foundation for trial trust under the human-machine collaborative trial mode.
2.1 Human-machine confrontation over the subjectivity of the trial
In the early stages of artificial intelligence, machine algorithms could not perform complex reasoning tasks like human judges, so discussion of "machine judges" remained largely theoretical and imaginative. In recent years, with the rapid development of large language models, artificial intelligence has become able to generate logically coherent and clearly expressed natural-language text, and its application in the legal field has become a widely anticipated scenario. Artificial intelligence represents a logic of intelligent operation different from the human one and is examined within a binary adversarial framework of human intelligence versus machine intelligence. In this context, whether artificial intelligence will replace the judge as the subject of the trial became a research hotspot in the legal community.
In the contest over trial subjectivity, human judges and artificial intelligence compete on the same stage in rather abstract and simplified guises. The judge, as the representative of human value rationality, is romanticized as a "perfect person" in professional knowledge and moral character. In creative thinking, tacit knowledge, practical rationality, and social experience, human judges hold advantages that artificial intelligence cannot match. At the same time, the perfect judge possesses empathy and compassion and can integrate sentiment, reason, and law in the trial process. By contrast, artificial intelligence has advantages in efficiency, consistency, and neutrality: it can quickly process large amounts of case information, shorten trial time, and improve judicial efficiency, and algorithms, unswayed by personal emotion or interest, can promote uniform standards of adjudication based on the historical data of similar cases. It is not hard to see that the comparison between human judges and artificial intelligence is framed by the oppositions of fairness versus efficiency and value rationality versus instrumental rationality.
For now, human judges have won the first round of the dispute over trial subjectivity. Advocating value rationality, limiting instrumental rationality, and upholding human subjectivity have become the common discourse of the debate. Specifically, the main reasons against artificial intelligence replacing human judges are: first, artificial intelligence struggles to handle questions of value judgment properly; second, its performance is largely determined by training data and may therefore inherit data bias, producing skewed or erroneous results; third, while it can be quite accurate in simple, typified cases, it can hardly achieve individualized justice in complex ones. Some scholars have further pointed out that people's opposition to algorithms replacing judges stems not from differences in accuracy or fairness, but from a subjective lack of "perceived justice". In short, because people do not trust artificial intelligence to fill the role of the judge, the human judge's position as the subject of adjudication is preserved.
2.2 The endorsement of judges' personality trust in human-machine collaborative trials
The human-machine collaborative trial mode of "the judge as principal, the machine as auxiliary" has become a consensus in both theory and practice. Specifically, the judge is the final adjudicator of trial activities and bears judicial responsibility for the results; correspondingly, artificial intelligence cannot replace the judge, and its outputs are for reference only. Although machines cannot replace judges, their prospects as trial aids are quite promising. Under the pressure of "more cases, fewer personnel" in the courts, artificial intelligence has significant value in improving trial efficiency and strengthening judicial management. China's courts have clarified the functional positioning of judicial artificial intelligence as an auxiliary trial tool. On the premise of upholding the judge's trial subjectivity, the intervention of artificial intelligence in trials has become quite common. In the trial of a case, artificial intelligence first performs the "rough processing", after which the human judge carries out the more refined adjudication. A human-machine integration that balances justice and efficiency seems to be within reach.
The adherence to judges' trial subjectivity relies largely on the endorsement of the judge's personality trust in human-machine collaborative trials. So-called "personality trust" is trust directed at another person, who becomes the trusted object as a concrete, personalized actor. Trust in the judge's personality involves two main dimensions: professional competence and moral character. In the dimension of professional competence, judges are trusted to be capable of identifying the risks of judicial artificial intelligence and correcting its errors; in the dimension of moral character, judges are trusted to exclude impartially such irrelevant factors as personal interest when deciding whether to adopt machine-assisted suggestions. At present, theory and practice alike have cast their vote of confidence for the human judge, while declining to trust artificial intelligence with the judge's role. Under the division of labor with the judge as principal and the machine as auxiliary, the assumption of the "perfect judge" from the human-machine confrontation context carries over: the judge possesses professional abilities the machine cannot match, and can carefully and responsibly screen machine-assisted suggestions and exclude the influence of factors extraneous to the case.
Building trial trust on trust in the personality of the case-handling judge reflects a simplified way of thinking about judicial trials. The judicial process is reduced to a decision-making process centered on the judge, and trial trust is reduced to personal trust in the judge. Since the judge as adjudicator is trustworthy, the judge-led human-machine collaborative trial is trustworthy as well. As long as the judge ultimately bears responsibility for the judgment, the specifics of the judge's interaction with artificial intelligence seem to become a matter for the judge's autonomous decision: whether to reject or adopt machine-assisted opinions lies entirely within the judge's discretion. Artificial intelligence thus retreats behind the judge's robe, with no direct contact with parties, lawyers, or other litigation participants. This approach has the value of simplifying complexity, and the court organization seems spared from disclosing the intricate interactive process of human-machine collaboration to the public.
In summary, under the existing human-machine collaborative trial mode, trial trust relies on the endorsement of trust in the case-handling judge. The judge is regarded as the gatekeeper of fairness and justice, exercising subjective initiative and discretion to make the final decision on how to adopt AI-assisted suggestions and to find the best solution for the case.
2.3 The Ideal Vision and Reality Gap of Judge Personality Trust
In comparison with artificial intelligence, the judge, as the representative of human intelligence and of fairness and justice, is endowed with the ideal image of a "perfect person". Judges are proficient in "empirical rules, logical rules, and trial techniques" while holding fast to "conscience, reverence, and the concept of justice"; as representatives of the rule of law and justice, they convey judicial authority to society. Regarding judges as the embodiment of justice both expresses expectations for the judge's social role and reflects the practical need to shape judicial authority. In judicial practice, publicizing the personal deeds of outstanding judges is of great significance for the judicial organs in winning social trust. "Diligent dedication, strict and fair justice, wholehearted service to the people, and striving to meet the people's judicial needs" constitutes the idealized benchmark image of the Chinese judge.
However, an idealized vision resting solely on trust in the judge's personality is not sufficient to establish a solid foundation of trial trust. Real judges are not perfect; they are "ordinary people" with human weaknesses. Like anyone else, judges have blind spots of knowledge and experience, may act opportunistically out of self-interest, and may have their cognition and decision-making swayed by irrational factors. Trust built on judges is not absolutely reliable: once an "imaginary rupture" opens between the ideal and reality, personality trust in judges proves exceptionally fragile, and public attitudes toward trial trust can reverse in an instant. One purpose of building smart courts in China is precisely to prevent and correct judges' potential deviations and improprieties in handling cases, thereby improving the quality and efficiency of trials. In the context of human-machine confrontation, however, the limitations of judges as ordinary people are temporarily obscured by the ideal image of the perfect judge.
In human-machine collaborative trials, the human judge stands "in the loop", responsible for evaluating and correcting machine-assisted suggestions. Yet given the technical complexity of artificial intelligence and the black-box nature of its algorithms, screening and correcting such suggestions is no easy task. Especially for questions that lack standard answers, such as discretionary sentencing and recidivism risk assessment, it is doubtful whether judges can detect and correct the potential biases in AI-generated results. Moreover, in the course of human-machine collaborative decision-making, judges may themselves be affected by irrational factors such as caseload pressure and cognitive blind spots. It therefore cannot be assumed that judges will render judgment without being influenced, or even misled, by algorithmic errors.
Furthermore, grounding trial trust in personality trust fails to register the differences between artificial intelligence and traditional mechanical tools, and overlooks the complex interaction between human judges and artificial intelligence. Mechanical tools have no emotions or biases and take no part in the subjective interpretation of "meaning"; they merely supply objective factual information in nonverbal form. Traditional computer application systems, though electronic components have replaced gears, levers, and other mechanical parts, can still be regarded as mechanical aids fully controlled by the user. Under the new wave of artificial intelligence technology, however, machine algorithms display unprecedented levels of intelligence and gradually take on a certain role in interpreting meaning, differing essentially from traditional auxiliary tools. In human-machine collaborative trials, artificial intelligence does not merely execute simple tasks on the judge's instructions, such as legal look-up, retrieval, and sorting; it intervenes deeply in the analysis of questions of law and fact. To base trial trust entirely on trust in the judge's personality is thus to treat artificial intelligence as an inert tool fully under the judge's control, underestimating the risks and challenges of human-machine collaborative trials.
2.4 Expanding from the perspective of personality trust to system trust
As social complexity has increased, the objects of trust have gradually expanded from specific individuals to non-personalized, abstract objects such as organizational systems and algorithmic systems. Trust can be subdivided into several types according to the diversity of its objects. The most basic object of trust is another person with whom the subject can interact face to face, but trust objects can also extend to more abstract entities such as social roles, organizational structures, technological systems, and even social systems. Moving from interpersonal trust toward system trust, the concreteness of the trust object steadily weakens while its abstractness increases. The classic dichotomy of "personality trust versus system trust" is a simplified expression of this trust spectrum. The judges' personality trust discussed in this article is directed not only at specific individual judges but is also closely bound up with the judiciary as a social role. The judge is a professional public role that readily inspires trust, and the promotion of exemplary judges can further strengthen people's trust in that professional role.
People's trust in the judge's personality in fact rests on an overall trust in the judicial system as its institutional background. Personality trust is built on the moral character and professional ability of another person, whereas system trust is grounded in the correctness and rationality of a system's operating mechanisms, emphasizing trust in abstract principles and technical knowledge. Despite the differences, personality trust and system trust are not mutually substitutable, either-or alternatives; personality trust can serve as a representational mechanism of system trust. Because the concept of a system is highly general and abstract, people readily form psychological bonds of trust with personified objects accessible to sensory experience, a simplifying mechanism individuals develop to cope with social complexity. In judicial activity, the operational details inside the judicial system appear too unfamiliar and abstract to ordinary people. The judge, by contrast, is the "point of intersection" between the judicial system and the public, with whom people feel a stronger familiarity through personal experience, hearsay, or media publicity. In practice, fashioning the ideal image of the judge is likewise an important way for courts to win external trust. As judges attract public attention, the judicial system standing behind them as institutional background recedes into the blur.
The analysis of trial trust should not stop at personality trust but should widen its view to the understanding of, and trust in, the judicial system as a whole. From a systemic perspective, the judge is only one link in the chain through which judicial decisions are formed. A judge's adjudication cannot be arbitrary and subjective; it must be grounded in law, facts, and procedure. It is precisely out of prudent skepticism toward personality trust that the judicial system has established a series of organizational mechanisms, and even technical means, to restrain subjective arbitrariness. The consideration of trial trust should therefore be extended to trust in the judicial system.
3. The risk of alienation of trust in the judicial system
People's trust in the personality of judicial personnel is inseparable from their systemic trust in the judicial organs, so the analysis of trial trust should extend from individual judges to the court organization as a whole. The court deploys a series of institutional arrangements, such as litigation procedure, judicial openness, and judicial accountability, to ensure the fairness and impartiality of trials; trust in the court organization thus ultimately rests on institutional trust. Under the human-machine collaborative trial mode, it must be asked whether the court's institutional rules can still play their expected role, or whether they risk failure and even alienation.
3.1 The institutional core of trust in the judicial system
The court, as a judicial organ exercising state adjudicative power, is a more abstract organizational system than the judge. When the court serves as the trusted object, the corresponding type of trust is system trust. The general public may not understand the operating principles and specific details inside the court, yet may still develop trust in the court organization. As Giddens pointed out, people often need to trust abstract systems beyond their comprehension; system trust has, to a certain extent, decoupled trust from understanding. Ordinary people may not grasp the complexity of the medical, legal, or technological systems, but they can still trust the professional judgment of doctors, judges, and engineers. Accordingly, when people place sufficient trust in the court organization, they can trust and accept the court's trial results even without knowing its internal operational details.
The court organizational system (hereinafter "court organization") has the dual character of an expert system and a hierarchical organization. An "expert system" is a systematic mechanism that organizes and integrates professional knowledge, technology, and experts, providing reliable decision support for people in modern society. By organizing and managing judicial personnel such as case-handling judges, the court conducts trial business, resolves legal disputes, and interprets legal rules, constituting a typical expert system. At the same time, as a state judicial institution, the court also bears the character of a hierarchical organization: it defines the responsibilities and authority of its members through rules and procedures, and regulates, motivates, and constrains judges' work through trial-management mechanisms such as performance evaluation, sentencing guidelines, similar-case retrieval, and supervision and sanction.
Trust in the court organization ultimately comes down to trust in its institutional rules. Compared with personality trust, people trust abstract systems mainly not because of the qualities or motives of the individuals within them, but because of trust in institutions, norms, mechanisms, or principles. Trust in organizational systems is depersonalized and cannot be reduced to trust in individual members. The court is not a simple, mechanical aggregation of individual judges but an organic integration achieved through systematic and orderly arrangements, forming a holistic abstract system. Through mechanism design, professional ethics, accountability mechanisms, and other measures, the court constrains the behavior of individual judges and thereby secures the overall credibility of trial activities. To be sure, the ways courts win public trust are not limited to institution building; they also include establishing model judges, popularizing legal knowledge, media publicity, and the like. Yet it cannot be denied that institutional trust constitutes the trust core of the court organizational system. Especially in a rule-of-law society, the court is an "agency of accountability" that supervises and disciplines other social actors, providing fundamental support for safeguarding the spirit of contract, optimizing the business environment, and building a trustworthy society. As the cornerstone of society's overall trust, systemic trust in the judiciary must be grounded in the spirit of the rule of law, with governance by institutions and rules at its core. To analyze the system trust of the judicial organs, therefore, one must go deep into the design principles and operational logic of the relevant institutional rules and examine whether they can sustain institutional trust under the human-machine collaborative trial mode.
3.2 The "Black Box" of Human-Machine Collaborative Trials and the Judicial Responsibility System
With the widespread application of artificial intelligence in court trials, the human-machine collaborative trial mode has posed profound challenges to the traditional institutional structure of the court organization. On the one hand, artificial intelligence itself still suffers from problems such as machine hallucination and poor interpretability; on the other hand, its intervention in trial activities may further aggravate and amplify risks the court organization already has to guard against, such as bias and discrimination, mechanical justice, and bureaucratized justice. If people's concerns about these risks cannot be dispelled, the institutional trust of the court organization will fall into crisis.
The core challenge to trust in the court system is how to make the outside world trust the decision-making process of human-machine collaborative trials, a process tinged with the "black box". Attention to black boxes stems mainly from research on the "algorithmic black box", but in a broader sense the black-box metaphor denotes the opacity of any decision-making process and is not confined to computer technology. In judicial trial activities, the deliberations of the collegial panel and the discussions of the adjudication committee are not disclosed to the public, and only the judge can directly perceive the formation of his or her own inner conviction. In this respect, the traditional judicial process itself carries a degree of decision-making opacity, and it is precisely by constraining this black box through institutions, mechanisms, and procedures that the trust of the court organizational system is established. With the intervention of artificial intelligence, human-machine collaborative trial constitutes a new type of decision-making distinct from the traditional trial. Although the judge remains the final adjudicator, artificial intelligence can indirectly affect trial outcomes by influencing the judge's inner conviction. Human-machine collaborative trial is thus a complex decision-making process formed by the overlap of the "algorithmic black box" and the "black box of the human mind": the former has a certain stability, continuity, and universality, while the latter is individual, contingent, and ever-changing. Judges can, of course, explain their decision-making through reasoning in the judgment and disclosure of their conviction, but only the judge knows the true state of that conviction. How to constrain the interaction between judge and algorithm, and prevent departures from such judicial principles as immediacy (the adjudicator's direct, personal engagement with the case), independence, and judgment, therefore demands an institutional response from the court.
Facing these trust challenges, the court has responded mainly by reaffirming the judicial responsibility system and the judge's status as the trial subject. The gist of the judicial responsibility system can be summarized as "let those who hear the case render judgment, and let those who render judgment bear responsibility", emphasizing the unity of the judge's powers and responsibilities. In the age of artificial intelligence, court organizations have further reinforced judges' subjectivity and judicial responsibility: "AI-assisted results may serve only as a reference for trial work or for trial supervision and management, ensuring that judicial judgments are always made by judges, adjudicative power is always exercised by trial organizations, and judicial responsibility is ultimately borne by the adjudicators." Compared with bare trust in the judge's personality, the judicial responsibility system uses accountability mechanisms to impose external constraints on case-handling judges, urging them to correct errors in algorithm-assisted opinions. On this logic, human-machine collaborative trial in practice exhibits two salient features: internally, a division of labor with "the judge as principal, the machine as auxiliary"; externally, a configuration of "the judge in the light, the machine in the dark". At present, no trace of how judges and algorithms interact in individual cases can be found through public channels such as court hearings and judgment documents; in outward appearance, human-machine collaborative trial seems no different from traditional trial activity.
Correspondingly, the court organization's institutional arrangement for human-machine collaborative trials is to conceal artificial intelligence behind the judge's subjectivity, making judicial responsibility the logical fulcrum for maintaining trust in the court system. The underlying assumption is that, with the judicial responsibility system consolidated, the court's existing institutional structure suffices to absorb the risks and uncertainties brought by human-machine collaboration. Since the final adjudicator remains the judge, the traditional institutional logic that constrains adjudicators should remain effective, and the institutional trust of the court organization should be preserved as a result. Whether this logic withstands scrutiny, however, must be examined against the characteristics of human-machine collaborative trials.
3.3 Predictive Trials and the Institutional Alienation of the Court Organization
The algorithmic logic of judicial artificial intelligence can be summarized as predicting and simulating judicial decisions on the basis of judgment-document data. Most mainstream artificial intelligence models today are trained and make predictions using neural networks and deep learning. As for training data, judicial judgments, with their abundant volume and structured, standardized text, have become the main data source in the field of legal artificial intelligence, whereas case files, collegial panel deliberations, and similar data, being highly sensitive and of uneven quality, have not been widely used. Judicial artificial intelligence is thus in essence a probabilistic prediction model that generates auxiliary trial suggestions by analyzing the probability trends of judgments in judicial documents.
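To make this prediction logic concrete, consider a minimal sketch in Python (scikit-learn), with a fabricated toy corpus standing in for judgment documents; real court systems are of course far more elaborate. The point is only that such a system estimates probability trends over past judgments rather than rendering a judgment of its own.

```python
# Illustrative toy sketch of predictive adjudication logic: a probabilistic
# text classifier fitted on (fabricated) judgment-document snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for historical judgment documents and their outcomes.
docs = [
    "defendant confessed and returned the stolen goods, first offense",
    "defendant reoffended during probation and refused to compensate",
    "minor theft, voluntary surrender, victim forgiveness obtained",
    "organized fraud over a long period, large amounts involved",
]
labels = ["lenient", "severe", "lenient", "severe"]  # stylized outcomes

# TF-IDF features + logistic regression = a simple probabilistic model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

# For a new case description the system outputs probabilities, i.e. the
# trend of past judgments over similar texts, not a judgment as such.
new_case = ["first offense theft, defendant confessed and compensated"]
print(dict(zip(model.classes_, model.predict_proba(new_case)[0])))
```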
The predictive character of judicial artificial intelligence conflicts with traditional judicial principles that emphasize immediacy, independence, and judgment. As a trial aid, the core mission of artificial intelligence is to predict accurately from historical data, not to evaluate whether a result is fair and just. This inevitably recalls Holmes's prediction theory of law: caring not for axiomatic principles or rule deduction, but only for predicting what the courts will do. By the same token, so long as the algorithm's output is adopted by the judge, the prediction counts as accurate. The algorithmic logic of predictive adjudication can hardly perform the weighing of interests and the value judgments that integrate sentiment, reason, and law, and may thus lead to algorithmic bias, mechanical justice, and other ills. Traditional trial activity is, to be sure, also exposed to risks such as mechanical justice and judicial bureaucratization; but through machine-algorithmic data mining and pattern recognition, those risks may spread and worsen. In the extreme, judicial trials could degenerate into a routine formality, and public trust in them would be seriously undermined.
Under the influence of predictive adjudication, human judges' attitude toward judicial artificial intelligence may shift from "confrontation" to "collusion". So-called "human-machine collusion" is an extreme form of human-machine collaborative trial in which the judge adopts every suggestion pushed by the system. When collusion occurs, the adoption rate of machine-assisted suggestions keeps rising, the rate at which judges deviate from them keeps falling, and judges' decisions become indistinguishable from the machine's suggestions. The judge may remain a judge in name while in fact blindly accepting whatever the algorithm recommends. The risk of "human-machine collusion" is no scaremongering. For judicial artificial intelligence, predictive accuracy is the core performance indicator, and algorithm updates and iterations will naturally select models with higher adoption rates. For case-handling judges, machine-assisted suggestions offer a new window onto the likely behavior of the judicial community. Machine suggestions may consequently produce a "focal point" effect within that community, in the game-theoretic sense: judges adjust their own intentions and expectations to match those of others, until the algorithm's output becomes the group's consensus choice.
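The feedback loop behind such collusion can be pictured with a toy simulation (every parameter below is invented for illustration): judges anchor partly on the machine's suggestion, the model is then updated toward what judges actually decided, and anchoring strengthens as the suggestion becomes a focal point, so the gap between judgments and suggestions shrinks round by round.

```python
# Toy simulation of the "human-machine collusion" feedback loop.
# All figures are invented; this is a caricature, not an empirical model.
import random

random.seed(0)
anchoring = 0.3          # how strongly judges lean toward the suggestion
learning_rate = 0.5      # how fast the model tracks adopted decisions
suggestion = 0.4         # e.g. a normalized sentencing score in [0, 1]

for rnd in range(1, 11):
    decisions = []
    for _ in range(100):                      # hypothetical cases per round
        own_view = random.uniform(0, 1)       # judge's independent view
        decisions.append((1 - anchoring) * own_view + anchoring * suggestion)
    avg = sum(decisions) / len(decisions)
    deviation = abs(avg - suggestion)         # judges-vs-machine gap
    suggestion += learning_rate * (avg - suggestion)   # model re-fitted
    anchoring = min(0.95, anchoring + 0.07)   # suggestion becomes focal
    print(f"round {rnd:2d}: deviation={deviation:.3f}, anchoring={anchoring:.2f}")
```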
Faced with the risk of "human-machine collusion", the court's existing incentives and constraints on judges may fail or even become alienated. First, adopting machine-assisted suggestions markedly increases the number of cases a judge can handle per unit of time, helping meet caseload targets amid "more cases, fewer personnel". Take document-generation assistance: artificial intelligence can produce a largely finished judgment within seconds, and the "one-click document generation" button is undoubtedly attractive to judges, whereas declining the machine's suggestion demands more time and effort. Second, adopting AI-assisted suggestions also reduces a judge's exposure to judicial accountability. Under the existing judicial responsibility system, whether a judge should be held accountable often depends on the court's internal investigation and determination, which is oriented toward "uniformity of standards" in the adjudication of similar cases and watches for deviations from that standard. Deviation warnings have already been woven into judges' daily work and have become an important source of leads for internal accountability. As artificial intelligence predictions grow ever more accurate, the auxiliary suggestion gradually becomes the standard answer, invisibly pressing judges to converge on it. From the individual judge's standpoint, then, deciding in reliance on machine suggestions not only relieves caseload pressure but also avoids triggering accountability through deviation warnings.
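The deviation-warning mechanism just mentioned can be pictured as a minimal statistical check (the threshold and figures below are hypothetical): a proposed sentence is compared against the distribution of similar past cases, and outliers are flagged as potential leads for internal review.

```python
# Minimal sketch of a "deviation warning" check; the data and the
# threshold k are hypothetical and chosen purely for illustration.
from statistics import mean, stdev

def deviation_warning(proposed_months, similar_case_months, k=2.0):
    """Flag a proposed sentence lying more than k standard deviations
    from the mean of sentences in similar past cases."""
    mu = mean(similar_case_months)
    sigma = stdev(similar_case_months)
    return abs(proposed_months - mu) > k * sigma

# Hypothetical similar-case sentences (in months) retrieved by the system.
similar = [10, 12, 11, 13, 12, 14, 11]
print(deviation_warning(12, similar))   # False: within the usual range
print(deviation_warning(24, similar))   # True: would trigger a warning
```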
Overall, the court organization has yet to make comprehensive institutional arrangements for human-machine collaborative trials. The current system does not fully reckon with the impact of predictive adjudication on judges' incentives and constraints, nor can it effectively avert the institutional failures that may arise when algorithmic logic combines with court bureaucracy. In the transition to human-machine collaborative trials, cracks have appeared in the institutional trust of the court organization.
4. Local reinforcement of trust in artificial intelligence algorithms
As artificial intelligence penetrates ever deeper into social life, how to establish trust in the technology itself has become a challenge. As a vertical application scenario, the judicial application of artificial intelligence is likewise beset by algorithmic trust issues such as accuracy, interpretability, and accountability. Improving the reliability of artificial intelligence systems accordingly has positive significance for building trial trust. Algorithmic trust, however, centers on trust in the technical system and can hardly cover the complex interactions among parties, judges, and artificial intelligence in trial activities; it can therefore reinforce trust in human-machine collaborative trials only locally.
4.1 The basic concept of trust in artificial intelligence algorithms
In the context of human-machine collaborative trials, algorithmic trust refers to trust in the artificial intelligence trial-assistance systems deployed by the courts. Artificial intelligence originated in the simulation of human intelligence, and as the technology has advanced, exploration of its subjectivity has never ceased. Algorithmic trust in judicial artificial intelligence, however, is in essence system trust, not personality trust. The trial-assistance systems adopted by Chinese courts at all levels are complex information systems composed of diverse and heterogeneous algorithmic modules. Judicial artificial intelligence should therefore be seen as an abstract technological system rather than a personified moral agent, and algorithmic trust in it is a distinctive form of system trust.
The technical indicators of accuracy, security, and robustness constitute the basic requirements of algorithmic trust. Notably, these indicators are not specific to artificial intelligence; they apply equally to traditional information systems such as operating systems, databases, and high-performance computing. First, accuracy is the threshold for winning user trust: a system that errs frequently will obviously find it hard to be trusted, and in judicial practice a machine algorithm with poor accuracy is not merely useless but may disturb normal decision-making. Second, security is another important dimension of algorithmic trust. A security incident such as data leakage or privacy infringement can destroy user trust outright; in the age of artificial intelligence, with rapidly growing data scale and model complexity, security issues have become more severe, and any negligence may trigger a broad crisis of trust. Finally, robustness refers to a technical system's ability to keep operating stably in the face of uncertainty, interference, or anomaly. Artificial intelligence will inevitably meet production scenarios that differ from its training environment, and its performance in these unknown situations also bears on algorithmic trust. Thus, although artificial intelligence far exceeds traditional information systems in data scale and model complexity, accuracy, security, and robustness remain important factors affecting algorithmic trust.
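Two of these indicators can be illustrated concretely. The sketch below (reusing the toy classifier idea from section 3.3; all data invented) scores accuracy on held-out case descriptions and approximates robustness by re-scoring the same cases after small, meaning-preserving perturbations such as typos and word reordering.

```python
# Illustrative accuracy and robustness check for a toy text classifier;
# the corpus and perturbations are fabricated for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = ["confessed, first offense", "reoffended, refused to compensate",
              "voluntary surrender, minor harm", "organized fraud, large amount"]
train_labels = ["lenient", "severe", "lenient", "severe"]

test_docs = ["first offense, confessed", "large-amount organized fraud"]
test_labels = ["lenient", "severe"]
# Perturbed versions: typos and reordering that preserve the meaning.
perturbed_docs = ["frist offense, confesed", "organized fraud, amount large"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

def accuracy(docs, labels):
    preds = model.predict(docs)
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

print("clean accuracy:    ", accuracy(test_docs, test_labels))
print("perturbed accuracy:", accuracy(perturbed_docs, test_labels))
```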
Beyond these indicators, the interpretability and accountability of artificial intelligence have attracted much attention in recent years. Algorithmic trust depends not only on the technical reliability of the system itself but also on people's cognitive understanding of, and familiarity with, the technology. Through the deep integration of massive data and complex models, artificial intelligence systems have displayed astonishing "emergent intelligence", while also raising deep worries that they may exceed human understanding and escape human control. Requiring machine algorithms to be interpretable and accountable permits a degree of insight into, and intervention in, the technical system, thereby enhancing trust in artificial intelligence; it also enables institutional constraints on machine algorithms, strengthening people's control over algorithmic systems. Interpretability and accountability have thus become important entry points for the study of algorithmic trust.
4.2 Interpretability and Accountability of Artificial Intelligence Algorithms
The interpretability and accountability of artificial intelligence systems directly affect the establishment and maintenance of the expectations underlying algorithmic trust. The premise of trust is the formation of definite expectations about a system's decision-making behavior. The value of interpretability lies in helping people understand the algorithmic system and form stable expectations; the significance of accountability lies in dealing with the situations in which expectations fail. In other words, when the algorithmic system can clearly explain its operation, external users form reasonable expectations and gradually come to trust its outputs; and when the system errs or is biased, accountability helps absorb the dissatisfaction caused by disappointed expectations.
4.2.1 Algorithmic interpretability
Algorithmic interpretability is an important foundation for building and maintaining trust in artificial intelligence algorithms. Interpretability requires that an artificial intelligence system be understandable, enabling people to establish familiarity and reasonable expectations. Early research on algorithmic regulation placed great weight on code transparency; Lawrence Lessig, for example, argued that since "code is law" in digital space, code should be made public just like legal texts. For artificial intelligence systems, however, the problem is not that code is deliberately concealed, but that the operational logic and technical principles of the model algorithms are exceedingly complex. Take open-source models such as DeepSeek and LLaMA: their source code is public, yet static analysis of the code still can hardly reveal the algorithm's decision logic in a specific application. Moreover, owing to differences in knowledge bases, fine-tuning techniques, prompt engineering, and the like, even the same model may behave very differently in different scenarios. Algorithmic interpretability does not demand that code be published; it focuses on describing the operating principles of the algorithm so that people can understand them. At the same time, interpretability helps trace model development and machine decision-making at the factual level, providing the basis for value judgment, institutional design, and the determination of responsibility.
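In that spirit, a simple post-hoc explanation need not expose any source code. The sketch below (toy linear model, invented data) reports which input features pushed one particular prediction toward its outcome, the kind of case-level account of operating principles that interpretability aims at; attribution for billion-parameter models is, needless to say, far harder.

```python
# Post-hoc, per-case feature attribution for a toy linear text classifier;
# documents and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["confessed and compensated, first offense",
        "reoffended and refused to compensate",
        "voluntary surrender, minor harm",
        "long-term organized fraud, large amounts"]
labels = [0, 1, 0, 1]  # 0 = lenient, 1 = severe (stylized)

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# Per-token contribution to one prediction: tf-idf weight * coefficient.
case = "first offense, defendant confessed"
x = vec.transform([case])
contrib = x.toarray()[0] * clf.coef_[0]
ranked = sorted(zip(vec.get_feature_names_out(), contrib),
                key=lambda t: abs(t[1]), reverse=True)
print([(tok, round(c, 3)) for tok, c in ranked if c != 0])
```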
Given the high complexity of artificial intelligence systems, algorithmic interpretability still faces real challenges in technical implementation. In the early days of artificial intelligence, expert-knowledge models based on logical rules were popular; such systems had interpretable knowledge bases and inference rules, but weak capacities for self-learning and generalization. By contrast, today's widely adopted large language models deliver astonishing performance, but explaining the workings of their billions of parameters remains a major practical challenge. Artificial intelligence is not a simple imitation of human intelligence; it has its own distinctive storage and computation architecture. As some scholars have observed, interpretability stands in tension with model performance, and the excessive pursuit of interpretability amounts to forcing artificial intelligence to be "artificially stupid enough that we can understand how it draws conclusions". The technical feasibility of explaining artificial intelligence algorithms at the scale of human cognition therefore remains to be demonstrated.
4.2.2 Algorithmic Accountability
Algorithmic accountability is an important guarantee for consolidating algorithmic trust. As noted above, interpretability and performance stand in potential conflict, and, compounded by the limits of human cognition in the face of complexity, research on interpretability techniques still faces great challenges. Ex post accountability, by setting technical detail aside, can to some extent relieve the demand for interpretability: through institutional rules such as strict liability and compulsory insurance, the law can assign responsibility after the fact without wading into the technical quagmire. Accountability can also be used to advance interpretability, pressing the developers of artificial intelligence systems to build models with higher interpretability; where a developer cannot provide the necessary explanation, it may have to bear corresponding legal liability.
When an artificial intelligence system goes wrong or falls short of expectations, accountability can effectively absorb the negative impact of risk and uncertainty. Algorithmic accountability requires that the algorithmic system be responsible and secure, with corresponding mechanisms for auditing, accountability, and redress. Through ex post accountability, the developers and users of artificial intelligence systems are required to answer for their conduct, which in turn drives the iteration and optimization of machine algorithms. In human-machine collaborative trials, for example, a problem at any point in the interaction among judges, algorithmic systems, and technical service providers can set off complex chain reactions, compounding the difficulty of determining and allocating responsibility. When algorithms are this complex, accountability-based regulation can force system developers to take measures against the adverse consequences of negligence or error in system design and use.
4.3 The local reinforcement of trial trust by algorithmic trust
Establishing an algorithmic trust mechanism for artificial intelligence has positive significance for building trust in human-machine collaborative trials. Evaluation indicators such as accuracy, interpretability, and accountability constitute the general requirements of "trustworthy AI" and thus apply equally to the scenario of human-machine collaborative trials. Article 6 of the Opinions of the Supreme People's Court on Regulating and Strengthening the Judicial Application of Artificial Intelligence requires that judicial artificial intelligence be interpretable and verifiable throughout the entire process from development to application, ensuring that operational results are predictable, traceable, and trustworthy. This is consonant with the concept of algorithmic trust.
However, algorithmic trust focuses mainly on the artificial intelligence system itself rather than on the interactive process of human-machine collaborative trial, and can therefore reinforce trial trust only in part. Without a systematic analysis of the human-machine collaboration process, one is easily constrained by the binary opposition between subject and tool, making it difficult to impose constraints on the specific interaction between judges and algorithms. In fact, the intervention of artificial intelligence in judicial trials is indirect: it requires the case-handling judge as a medium and shapes trial outcomes by influencing the judge's state of mind. How judges actually use artificial intelligence lies beyond the scope of algorithmic trust itself. Questions such as how judges evaluate machine-assisted opinions, how they choose among them, whether to adopt them, and whether machine-assisted suggestions must be disclosed to the public find no direct answers within the theoretical framework of algorithmic trust. Hence the "two-tier" phenomenon in the practice of human-machine collaborative trial: although the interpretability and accountability of artificial intelligence are highly valued within the court organization, outsiders still have no way of knowing how artificial intelligence affects the trial of individual cases.
Moreover, excessive emphasis on trust in artificial intelligence algorithms may itself affect personality trust and institutional trust. From the perspective of the trust subject, judges, institutions, and algorithms, as potential trust objects, are to some degree functionally equivalent to one another. When judicial artificial intelligence first appeared, established trust in algorithms was lacking, people placed high trust in courts and judges, and judges met little resistance when rejecting or deviating from machine suggestions. As the adoption rate of machine suggestions rises, trust in algorithms may grow accordingly. If society lacks sufficient trust in courts and judges, trust transfer becomes highly likely, with people hoping that a seemingly more neutral and objective artificial intelligence will render judgment. As familiarity with "AI judges" increases and algorithmic trust takes root, trial trust may evolve into algorithmic trust: once courts and judges deviate from AI-assisted recommendations, they will face significant criticism, interfering with the normal operation of judicial power. It is not hard to see that the earlier-mentioned risk of algorithmic logic overriding judicial logic reappears here, except that the pressure to converge on the algorithm now comes not from the court organization but from the general public.
In summary, research on algorithmic trust addresses the credibility of artificial intelligence systems themselves, without assuming that judges can correct whatever problems machine-assisted recommendations contain. Yet algorithmic trust and its associated regulatory strategies remain within a static analytical framework of human-machine separation and subject-object duality, and do not fully consider the dynamic collaborative process between judges and machines, or between court organizations and algorithm systems. The trust problem of human-machine collaborative trial lies not only in the technical complexity of the algorithm model but also in the communicative complexity of human-machine collaborative interaction. Although algorithmic trust contributes to the construction of judicial trust, it constitutes only a partial solution; the exploration of judicial trust therefore cannot stop at algorithmic trust.
5. Integrated trust building based on communication and discussion
Whether directed at the personality of judges, the court system, or artificial intelligence algorithms, each form of trust is only a partial link in trial trust, and no single type of trust can by itself form the cornerstone of trust for human-machine collaborative trials. The key is to find the link that integrates these trust objects, so as to build trust in human-machine collaborative trials as a whole. To integrate the three types of trust, it is necessary to focus on the mutual relationship between the trust subject and the trust objects, and to construct judicial trust dynamically through legal communication and discussion.
5.1 The Communication Interaction Principle of Trust Building
The trust construction of human-machine collaborative trial rests on a comprehensive network supported by multiple trust objects; there is no single foundation of trust. In this trust network, personality trust, institutional trust, and algorithmic trust all play important roles. Yet each of the three constitutes only part of trial trust, and their relationship is not a simple linear progression but a network of circular references. Trial trust cannot be reduced to any single type of trust; there is no "Archimedean point". Some argue that, since judicial trust has no single basis, it can be established on a "composite" trust foundation. The challenge lies precisely in how the different types of trust can be integrated into a whole. From a static perspective, personality trust, institutional trust, and algorithmic trust can each be derived from trial trust, and, viewed separately, reinforcement measures can be proposed for each trust object. However, the cracks between trust objects will be continually stirred up and amplified in the process of mutual reference, ultimately threatening the collapse of the entire trust network. The construction of judicial trust therefore requires integrating personality trust, institutional trust, and algorithmic trust into a cohesive trust network.
The key to integrating personality trust, institutional trust, and algorithmic trust lies in promoting communication and interaction between the trust subject and the trust objects, achieving a reflective integration of the three. Trust exists in the interactive relationship between the trust subject and the trust object. The process of building trust is not static, isolated, or monotonically linear, but dynamic, holistic, and cyclical. Through careful reflection by the trust subject, the originally separate personality trust, institutional trust, and algorithmic trust are adjusted and balanced into a unified, integrated evaluation of trust (or distrust). A positive judgment of trust means the subject has integrated the trust objects in a reflective sense; a negative judgment of distrust means the subject doubts some local aspect or perceives gaps and faults between trust objects. Through dynamic, continuous communication and interaction, the subject can form judgments of trust or distrust toward each part and adjust the overall level of trust in light of new information or experience.
Through communication and interaction, the trust subject can obtain relevant information about the trust object and thereby reflect on and calibrate its own attitude of trust. The value of communication and interaction for building trust in face-to-face interpersonal exchange is almost self-evident: as a general regularity, proximity, closeness, and familiarity open channels to relevant information, thereby reducing opportunities for manipulation and deception. For abstract systems such as court organizations and artificial intelligence, however, outsiders find it difficult to gain insight into the internal details of the system, and unfamiliarity may breed distrust. Here the "trust calibration" between the trust subject and the abstract system becomes especially important. Open and transparent channels of information are conducive to a firmer foundation of trust. Through communication and interaction, we can move beyond blind faith or blanket pessimism and align the subject's attitude of trust with the actual reliability of the object.
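The idea of trust calibration can be given a simple formal gloss. The sketch below models trust as an estimate that is nudged after each interaction with the trust object, using a Beta-distribution prior; this is an illustrative analogy of the calibration process, not a measure the article prescribes.

```python
def update_trust(successes: int, failures: int, outcome_ok: bool):
    """Each observed interaction nudges the trust estimate toward the object's actual reliability."""
    successes += outcome_ok
    failures += not outcome_ok
    trust = (successes + 1) / (successes + failures + 2)  # posterior mean under a Beta(1,1) prior
    return successes, failures, trust

s, f = 0, 0
for ok in [True, True, False, True]:      # a stream of interactions with the trust object
    s, f, trust = update_trust(s, f, ok)
print(f"calibrated trust = {trust:.2f}")  # 0.67 after three good and one bad interaction
```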
The reflective integration mechanism based on communication and interaction highlights the active role of trust subjects in the process of trust building. In the digital age, as society grows ever more complex, the operating rules of trust objects continually evolve, and people must correspondingly keep updating their information and knowledge to form trust judgments. The establishment of trust is not passive; it has a positive, proactive aspect. This active trust mechanism stands in stark contrast to the "frozen trust" of traditional society, which relied on fixed, static social structures and norms. In other words, trust building is not a one-off event but a continuous process marked by negotiation and openness. Of course, once trust is initially established, the trust subject will not constantly examine and check the trust object. But when necessary, the trust subject can verify whether the foundation of trust remains solid only through communication and interaction with the trust object.
In summary, trust should return to the communication between the trust subject and the trust objects, achieving the integration of personality trust, institutional trust, and algorithmic trust. Once detached from the parties and the general public who are the subjects of trust, no amount of expert endorsement, system design, or standard-setting can avoid descending into a monologue. When ordinary people evaluate the trustworthiness of judicial trials, they are often unfamiliar with the specific presiding judge, do not understand the internal complexity of the court organization, and are unclear about the technical principles of artificial intelligence systems. Nevertheless, they can still evaluate from a holistic, integrated perspective and, upon careful consideration, form an overall judgment of trust in the trial.
5.2 The Essence of Legal Discussion in Trial Procedure
Communication and interaction have significant value in building trust, and especially so in the field of judicial trials. Because the trial process is open, participatory, neutral, and adversarial, the courtroom is considered the best venue for determining facts and applying the law. In this sense, the trial procedure ultimately comes down to a process of legal discussion. In judicial trials, communication and interaction take the form of legal discussions with the character of a discourse game. Judges must avoid bias and fully hear the opinions of both parties during the trial. However legitimate and reasonable one party's claims may appear, room must be left for opposing opinions. Only when the opinions of all parties have been fully discussed can the tendency to reach hasty conclusions on matters for decision be avoided.
In judicial proceedings, the parties are not bystanders; they influence the outcome of the trial by participating in legal procedures. Under the principle of due process, all parties with an interest in the trial result have the right to participate, to present their own claims and favorable evidence, and to rebut the other party's views and evidence. The participation of the parties is an important source of the legitimacy of trial results: "In the contest of discourse techniques of constant refutation and argument, the diversity of solutions is gradually eliminated and reduced, until a universally recognized or accepted correct solution, or at least a unique judgment answer, is finally found." The core value of the judicial trial procedure is therefore to ensure the full development of legal discussion. Through interactive communication and courtroom debate, the parties participate directly in the formation of the judgment and thereby build trust in the result.
For the general public, trust in judicial trials likewise stems largely from the argumentative function of the trial process. Not everyone has firsthand experience of litigation, so the public's trust judgments about trials have an indirect, vicarious character. To some extent, the public regards the parties as its representatives and, by observing how they act and are treated during the trial, forms a trust judgment about the judicial trial. In the ideal case, judicial trust reaches a state in which all are convinced, win or lose: even the party suffering an unfavorable outcome can still recognize that the result conforms to fairness and justice in the abstract sense. Ensuring the fairness, transparency, and participatory character of legal communication and discussion is therefore crucial not only for the parties but also for winning public trust.
The crux of the trust dilemma in human-machine collaborative trials lies in the transformation of communication and discussion in judicial trials. Although the introduction of judicial artificial intelligence improves trial efficiency, it may also render court hearings a formality and weaken the openness and participation of the judicial process. The intervention of artificial intelligence in trial activities creates, outside the traditional courtroom, a soundproof space between the case-handling judge and the artificial intelligence. The absence of clear rules or standards on how judges use algorithms produces the black-box problem of human-machine collaborative trial: neither the stages of judgment at which the algorithm intervenes nor the extent to which judges adopt its suggestions is disclosed to the public. In this situation, doubt is hard to suppress: everything seems to depend on the private interaction between judge and machine, while face-to-face communication in court becomes a mere formality.
5.3 Repositioning the Role and Function of Artificial Intelligence
At present, artificial intelligence trial-assistance systems mainly serve case-handling judges. By hiding the algorithm behind the judge, the court maintains the appearance of traditional trial activity. This interactive mode of "judges in the light, machines in the dark" has brought a series of negative effects on judicial trust. Yet directly disclosing the interaction between judges and artificial intelligence is no cure-all: such a penetrating approach might curb judges' excessive reliance on algorithms, but it could also undermine the efficiency and authority of judicial trials. For example, when a judge uses algorithms during a hearing to search for legal provisions or similar cases, directly disclosing the results would plainly disrupt the rhythm of the trial. Nor is it advisable to disclose machine-generated reference opinions while a judgment document is being drafted: if the judgment matches the reference suggestions, doubts arise as to whether the court has become a rubber stamp; if it deviates, questions about the reasons for the deviation are inevitable, weakening the finality and authority of the trial. Building a communication and discussion mechanism suited to the characteristics of human-machine collaborative trial is therefore a challenging task.
To promote communication and discussion in human-machine collaborative trials, the role and function of judicial artificial intelligence should be repositioned, expanding it from an internal case-handling tool into an external communication tool. At present, when courts at all levels evaluate the effectiveness of judicial artificial intelligence, they focus mainly on gains in case-handling efficiency, and most applications, such as intelligent case retrieval, dispute-focus sorting, and judicial assistance suggestions, chiefly serve case-handling judges. Relevant application services should gradually be opened to external users such as the parties and the general public, giving them technical capabilities similar to those of judges to perform operations such as case retrieval, statute retrieval, and dispute-focus sorting with artificial intelligence. The parties can then assess the reliability of machine-assisted suggestions, identify problems promptly, and submit feedback, helping to prevent and correct errors in artificial intelligence systems. Meanwhile, judges can still use algorithms independently for information retrieval, sentencing assistance, document drafting, and other activities without being exposed to the outside at every moment. In this way, artificial intelligence ceases to be a mysterious black box hidden behind judges and becomes a universal tool that empowers all parties to communicate and discuss, while excessive interference with the court's adjudicative power is avoided.
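A minimal sketch of this repositioning might look as follows: a single retrieval service exposed to judges and parties alike, with a feedback channel through which parties flag suspected errors. All class and method names here are hypothetical illustrations, not an actual court system.

```python
from dataclasses import dataclass, field

@dataclass
class SimilarCaseService:
    corpus: dict                          # case_id -> case summary text
    feedback: list = field(default_factory=list)

    def retrieve(self, query: str) -> list:
        """Single entry point: judges and parties run the same search with equal capability."""
        return [cid for cid, text in self.corpus.items() if query in text]

    def submit_feedback(self, case_id: str, issue: str, submitted_by: str) -> None:
        """Parties flag suspected errors, feeding the system's correction and iteration."""
        self.feedback.append({"case_id": case_id, "issue": issue, "by": submitted_by})

service = SimilarCaseService({"C-101": "sale of goods, liquidated damages", "C-205": "lease dispute"})
hits = service.retrieve("liquidated damages")  # the same call whether made by judge or party
service.submit_feedback("C-101", "cited case appears factually distinguishable",
                        submitted_by="plaintiff's counsel")
```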
As artificial intelligence moves from behind the scenes to the front stage, the trust-building process based on communication and discussion comes into play, integrating trust in judges, courts, and algorithms. Through interaction with judges, courts, and algorithms, the parties and the general public form trust judgments about human-machine collaborative trials, thereby easing the respective dilemmas of personality trust, institutional trust, and algorithmic trust. First, regarding the "imaginary rupture" in trust in judges' personalities, the deep participation of the parties lets the public directly observe and evaluate judges' conduct, strengthening confidence in their professional judgment; at the same time, the judge's role shifts from sole decision-maker to guide of communication, which helps reshape public expectations of trust in judges. Second, regarding institutional trust in court organizations, strengthening the parties' substantive participation in legal discussion can push the design of the court system toward greater transparency and inclusiveness and reduce the risk of "human-machine collusion" between case-handling judges and artificial intelligence. Finally, repositioning artificial intelligence in the judicial process, from an efficiency tool to a communication tool, can strengthen the interaction between local algorithmic trust and overall trial trust: the parties and the public gain a more intuitive understanding of artificial intelligence's functional position and performance in the trial scenario and form a clearer, more accurate trust evaluation of its outputs.
5.4 Human-Machine Collaborative Legal Discussion in Court Trials
The court hearing has irreplaceable significance for legal discussion and is the central stage for communication and interaction among all parties. The principle of equal adversarial debate promotes the in-depth exploration of facts and law and maintains fairness and justice in judicial trials. The following therefore explains how to promote communication and discussion in human-machine collaborative trials, following the relevant stages of trial activity.
In the pre-trial preparation stage, judicial artificial intelligence can serve the parties' trial preparation and allow them to correct errors made by the artificial intelligence. The court should provide the parties with AI-assisted advice on laws and regulations, evidence rules, and similar-case retrieval, promoting equality of capability and symmetry of information between the parties and the judge. If the parties and their agents discover errors in the machine-assisted suggestions, they can promptly submit corrections and feedback; better still, the parties should be encouraged to submit auxiliary materials such as case-search reports to the judge, creating a competition of viewpoints with the machine-assisted suggestions and helping the judge reach clarity by hearing all sides. Furthermore, as the relevant technologies mature, courts may explore using artificial intelligence to directly organize transactional work between the parties, such as evidence exchange and dispute-focus sorting. After the pre-trial conference, artificial intelligence can generate auxiliary mediation suggestions based on case information and sort out the focuses of dispute between the two sides. The pre-trial stage thus gives the parties an opportunity for in-depth interaction with artificial intelligence, enabling them to identify problems, give timely feedback, and prepare for communication and discussion in the subsequent courtroom activities.
During the hearing itself, artificial intelligence should sort out and display the dispute focuses of both parties in real time, promoting the substantive development of legal discussion. Speech recognition technology can already record courtroom debate accurately. On this basis, artificial intelligence can dynamically sort, classify, and visualize the focuses of dispute in court. By annotating and coding argumentation elements such as claims, material facts, and proof and rebuttal, the discourse interaction of "points of contention and points of compromise between plaintiff and defendant" is made visible, promoting back-and-forth dialogue between the two sides. After the hearing, the machine-generated list of contested points is provided to the parties together with the trial transcript for review; if the parties discover omissions or errors, they may take the opportunity to request correction. It is worth emphasizing that human judges retain the dominant role in court hearings. Judges should make comprehensive use of communication methods such as the duty of clarification and the disclosure of their provisional evaluation of the evidence and, after reflecting on how machine suggestions affect that evaluation, remind the parties in good time so as to ensure the full development of legal discussion. This not only helps to keep the hearing substantive and avoid surprise judgments, but also gives judges an important opportunity to build personal trust.
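To show how such coding might be structured, the sketch below groups annotated argumentation elements by topic and surfaces as dispute focuses only the topics contested by both sides. The schema is an assumption for illustration, not a description of any deployed court system.

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    party: str        # "plaintiff" or "defendant"
    element: str      # "claim", "material_fact", or "rebuttal"
    topic: str        # e.g. "contract_validity"
    text: str

def dispute_focuses(assertions: list) -> dict:
    """Group coded assertions by topic so contested points can surface in real time."""
    focuses = {}
    for a in assertions:
        focuses.setdefault(a.topic, {"plaintiff": [], "defendant": []})[a.party].append(a.text)
    # A topic asserted by both sides is a live dispute focus for the court to display.
    return {t: sides for t, sides in focuses.items()
            if sides["plaintiff"] and sides["defendant"]}
```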
In the judgment document, the judge should interpret and reason around the contested points of the case and disclose the machine-assisted suggestions closely related to the conclusions on those points. What must be disclosed is not the operational logic of the machine algorithm but the reasons for its output. For example, if the judge treats a machine-recommended case as an important reference for a conclusion on a contested point, the relevance of that case should be explained. For content involving numerical calculation, such as monetary amounts or sentences, the judge should disclose the relevant elements provided by the artificial intelligence, such as the benchmark, the formula, and the parameters. Disclosing and interpreting machine-assisted suggestions in judgment documents enhances the transparency and credibility of judgments and allows the general public to verify and assess the relevant information. Judgment documents thus serve as a bridge connecting court judgments with public understanding, legal development, and social reflection, and are of great significance for human-machine collaborative trials in winning public trust and recognition.
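A hedged worked example of such disclosure: the function below computes a figure from a benchmark and adjustment parameters and returns the full calculation alongside the number, so the explanation can be printed in the judgment document. The benchmark and factors are invented for illustration and do not reflect actual sentencing guidelines.

```python
def sentencing_with_disclosure(benchmark_months: int, adjustments: dict):
    """Compute a sentence and return the human-readable calculation alongside it."""
    multiplier = 1.0
    steps = [f"benchmark = {benchmark_months} months"]
    for reason, factor in adjustments.items():
        multiplier *= factor
        steps.append(f"{reason}: x{factor}")
    result = benchmark_months * multiplier
    steps.append(f"result = {benchmark_months} x {multiplier:.2f} = {result:.1f} months")
    return result, "; ".join(steps)  # the explanation ships with the number

months, explanation = sentencing_with_disclosure(36, {"confession": 0.8, "restitution": 0.9})
print(explanation)
# benchmark = 36 months; confession: x0.8; restitution: x0.9; result = 36 x 0.72 = 25.9 months
```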
In summary, through interaction with judges, courts, and algorithms, the parties and the general public form trust judgments about human-machine collaborative trials, while trust in judges' personalities, the court system, and artificial intelligence algorithms is correspondingly improved. The case-handling judge remains the leader of the human-machine collaborative trial, playing the core role in directing the courtroom, examining evidence, and interpreting and reasoning. The court organization consolidates its foundation of institutional trust by providing artificial intelligence services to the outside and ensuring the parties' participation in the proceedings. Artificial intelligence, as a communication tool for promoting legal discussion, moves from behind the scenes to the front stage, where the parties supervise it and give feedback, promoting its continuous optimization in accuracy, fairness, and interpretability.
6. Conclusion
In the era of artificial intelligence, the construction and maintenance of judicial trust are especially important. Trust exists in the intentional relationship between the trust subject and the trust object and is inevitably shaped by the subject's individual knowledge and life experience, producing situations that are complex, varied, and different for each person. Legal researchers who better understand the internal workings of the legal system may build stronger trust, but they may also perceive more risks and uncertainties that make sufficient trust hard to establish. The general public, by contrast, may develop trust in judicial trials out of a simple sense of confidence, or form an overall attitude toward trials from past experience, hearsay, and public opinion, without going to great lengths to discern the basis of that trust. This article has critically analyzed personality trust, institutional trust, and algorithmic trust, revealing the complexity and diversity of judicial trust, and on that basis has sought a solid foundation for establishing and maintaining it. In the short term, these discussions may not directly change the public's attitude of trust toward human-machine collaborative trials. In the long run, however, trust building is a dynamic, interactive, and gradual process. Strengthening communication and discussion among the general public, the court system, and the algorithm system can form a trust-building mechanism in which trust subjects personally experience, carefully verify, and deliberately reflect. This process certainly cannot be accomplished overnight, but the trial trust formed on such a basis, once established, will not easily be shaken.