New Dimensions of Co-evolutionary Artificial Intelligence and Computational Law
——Speech at the Opening Ceremony of the Third Annual Conference of CCF Computational Law Branch
Ji Weidong
The 2024 Nobel Prizes in Physics and in Chemistry were both awarded for discoveries and inventions underpinning artificial intelligence and its scientific applications. The news came as a shock, and it made everyone more keenly aware that we stand in the midst of the digital revolution driven by artificial intelligence. Against this backdrop, we celebrate the third anniversary of the founding of the CCF Computational Law Branch, whose third annual conference officially opens today, filling us with both expectation and apprehension about the prospects of the next three years.
The theme of this year's conference is "Computational Law and the Modernization of National Governance", with keynote speeches by Academician Guo Lei, Vice President Shi Jianzhong, Dean Zuo Weimin, and Vice President Li Zheng. There are also two keynote sessions and four sub-forums, with rich and varied content. Here, on behalf of the CCF Computational Law Branch and its affiliated institution, the China Law and Society Research Institute of Shanghai Jiao Tong University, I would like to express our respect and gratitude to the host, the China Computer Federation, and to our co-organizers, the Data Science Research Institute and the Law School of Shandong University (Weihai Campus) and the Shandong Center for Systems and Computational Law Research. We also warmly welcome all co-organizers, guests, and colleagues attending the conference! And of course, our sincere thanks go to Professor Wang Fang and the team responsible for the conference's practical arrangements, for all their hard work!
The modernization of national governance includes two fundamental aspects: one is to rationalize state power and improve governance efficiency through the rule of law; the other is to expand individual freedom and improve governance fairness through humanism. There is no doubt that artificial intelligence can significantly improve efficiency. But will artificial intelligence lead the world into a post-human, or post-humanist, era? That question is still hotly debated. In this connection, Yuval Harari raised an important concept in "Homo Deus: A Brief History of Tomorrow", namely "Homo Deus" itself, which seems not to have received sufficient attention in China. Will posthumans, transhumans, god-like beings, or superhumans come to exert a crushing force over humanity and its individual members? If so, this may pose a serious challenge to the modern systems of national governance.
Over the past two to three years, large models have iterated rapidly, and generative artificial intelligence is now affecting every aspect of daily life. The generative discourse order formed through human-machine coexistence and human-machine dialogue is drawing everyone into collective interaction. The concept of "I" is bound to become relative, and "we" will increasingly become the subject of the lived world. In other words, in the current era of the strong rise of generative artificial intelligence, the shift in philosophical and legal thinking from the individual "I" to the collective "we" has in fact become ever harder to stop. Although the basic unit of a computer system's users is still the individual, and authentication and data management still follow the principles of one person, one account and autonomous individual freedom, the self that once inflated itself on social networks is, after entering the stage of generative artificial intelligence, increasingly absorbed into large models, increasingly merged into interactive collectives, and transformed into the "we" of the discourse order and re-presented there. This shift from an individual-centered to a collective-centered outlook is a fundamental change in social and research paradigms that both computational law and the digital rule of law must confront.
In addition, traditional judicial judgment rests largely on rules of thumb. Judges, lawyers, and parties therefore have to play one "incomplete-information game" after another. Given the limits of time and information, judges must abstract away from the specific circumstances of a case and reduce them to simple rules for disposal. Only in a few simple cases does a "complete-information game" exist, in which both the facts and the law are clear and distinct. Artificial intelligence with machine-learning capabilities, however, selects the best solution from a full dataset in a vast database. The essence of smart justice is therefore to convert all incomplete-information games into complete-information games for processing, so as to respond to specific situations in all their complexity. Of course, artificial intelligence has long been constrained by the so-called frame problem and the symbol grounding problem, and it struggles to fully reflect human intuition, common sense, tacit knowledge, and value judgments. However, large language models and multimodal models bring behavioral data embodying intuition, common sense, tacit knowledge, and value judgments, together with expressive data such as images, sound, and video, into the scope of learning and model refinement, which can to a considerable extent dissolve the frame and symbol grounding problems. Moreover, humanoid robots can increase the knowledge elasticity of artificial intelligence by setting physical or bodily boundaries that limit the scope of information processing; this has the potential to go beyond deep learning and to stop relying solely on data scale to decide victory or defeat. The rapid advance of artificial intelligence technology means that the trial methods of smart courts will likewise make the leap from quantitative change to qualitative change.
For a long time, artificial intelligence did indeed rest on algorithms and belonged essentially to the world of computation. Accordingly, when considering the relationship between law and artificial intelligence, it was basically enough to speak of computational law and of the explainability and fairness of algorithms. In this sense, the relationship between computational law and AI governance could be summed up as XAI (explainable AI). Today, however, generative artificial intelligence is iterating rapidly, and human-machine coexistence and human-machine dialogue have become the social norm. Computational law must therefore introduce a new keyword: CAI (co-evolutionary AI), that is, artificial intelligence that co-evolves with humans. This means that artificial intelligence is entering a new stage, and AI governance is bound to enter a new stage with it. It is very likely that forms of intelligence will emerge that humans cannot understand. Once artificial intelligence can write code and modify programs on its own, the black-box character inherent in big data will inevitably be further reinforced in generative AI, and may even trigger the risks of AI running out of control and of social unrest.
Given this new situation in AI development, AI safety has become a focus of worldwide attention. From the perspective of AI safety, how to approach AI legislation properly will undoubtedly be a central topic in both the AI field and the field of computational law over the next three to five years. At the United Nations University international symposium on AI held in Macau this spring, my speech analyzed four models of AI legislation: the EU's hard-law model, the soft-law model of the United States and Japan, China's hybrid soft-and-hard-law model, and Singapore's technical-program model. I particularly emphasized that legislators must strike an appropriate balance between AI research and development and AI safety. I believe that if the technological development of large models is not merely the object of AI governance but can in turn empower AI governance, then technology companies will have no reason to fear AI legislation. Indeed, if safety research on large models can, through a technical-program approach, produce a toolbox for testing, evaluation, and monitoring, including promoting digital watermarking, developing small verification models for AI, building an anti-counterfeiting system for AIGC, establishing an index system and certification platform for AI ethics management, and weaving an AI safety assurance network, then regulation and development will no longer be a zero-sum game. AI governance can also open up new investment opportunities and market space for AI research and development, forming a technological blue ocean for enterprises through differentiated competition. From this perspective, the CCF Computational Law Branch should actively participate in the discussion of AI legislation, so as to expand the space of computational law research and enhance the social influence of this academic body.
I believe that this third annual conference of the CCF Computational Law Branch will hold wide-ranging and in-depth discussions on these and other issues, further advancing interdisciplinary and cross-disciplinary dialogue and exchange around artificial intelligence and the law, and will surely use artificial intelligence as a lever to promote a paradigm innovation in the rule of law. I wish the third annual conference of the CCF Computational Law Branch every success!