
ZHAO Jingwu | The Path of Embedding Technology Ethics into the Governance System of Artificial Intelligence - Taking Autonomous Driving Application Scenarios as an Example
2024-12-10 | ZHAO Jingwu






ZHAO Jingwu, Associate Professor at the School of Law, Beihang University



Abstract: The risks of artificial intelligence technology are overlapping and intersecting, and no single governance tool can effectively control them. The guiding and normative role of technology ethics governance has therefore begun to attract attention and has been introduced into the artificial intelligence governance system. Existing research, however, mostly explores how to construct governance mechanisms at the level of technology ethics itself, and has not fully explained how those mechanisms can be combined with other governance mechanisms such as law, technology, and the market to achieve synergistic governance effects. Technology ethics governance is, in essence, a directional and guiding governance mechanism whose specific functions include guidance, negotiation, identification, and feedback. The corresponding governance mechanisms include a technology ethics risk assessment and management mechanism, a technology ethics risk identification and evaluation mechanism, and a technology ethics risk feedback and response mechanism. In the field of autonomous driving specifically, this takes the form of an integrated assessment covering technology ethics risks, technological safety risks, rights infringement risks, and market disorder risks, confirming the relative importance and urgency of each risk and adopting targeted governance mechanisms accordingly.


1. Posing the Problem


The level of intelligence demonstrated by ChatGPT far exceeds social expectations and has pushed the digital society into the "artificial intelligence stage". The core research question of the moment is how to fully realize the social benefits of artificial intelligence technology while controlling its potential technological risks, a question that has sparked a wave of research on AI risk governance across disciplines such as computer science, law, and ethics. In legal scholarship, research on artificial intelligence law now occupies a substantial share of the field. Admittedly, some of this work involves "research bubbles" or hot-spot chasing, and some studies even suggest that "theoretical research on technology governance runs far ahead of the practice of technological development", but this is an inevitable stage as academia adjusts to the gradual maturation of scientific and technological innovation. As scholars probe more deeply into the distinction between new and old risks and the question of their authenticity, the research trend has gradually shifted from risk-countermeasure approaches that "list risks one by one and offer suggestions one by one" to specific issues such as the lawful use of training data and the allocation of tort liability. In other words, the field of artificial intelligence requires a more systematic and comprehensive governance system.

This research trend toward an artificial intelligence governance system responds both to the current social focus on unified AI legislation and to the impact of AI technology risks on existing governance arrangements. In the face of current and potential technological risks, no single governance mechanism, whether legal, ethical, technological, or market-based, can achieve effective prevention and control. In scenarios such as "using artificial intelligence to revive the dead" and "using autonomous driving applications to disrupt the ride-hailing market", diverse risk types such as legal risks and technology ethics risks are intertwined, so technology governance practice must adopt a more comprehensive governance model in order to address the diversified risks of technology application as a whole. Indeed, governance theories such as systemic governance, comprehensive governance, and collaborative governance are regarded by scholars as the best options for solving AI governance problems, and research along these lines almost invariably cites Lawrence Lessig's four-dimensional governance framework of "market, architecture, social norms, and law" proposed in Code: Version 2.0. Governance tools such as law, ethics, technology, and the market have accordingly been incorporated step by step into the AI governance framework, and cross-disciplinary governance mechanisms such as technology ethics review and national technology security standards have been constructed.

A review of existing research shows, however, that although the diverse governance mechanisms under the AI governance framework have become increasingly rich, questions such as how to connect these mechanisms effectively and how to clarify the differences between them have not been well addressed. In the field of technology ethics risks in particular, some legal studies habitually merge such risks with infringement risks, treating the technology ethics governance mechanism as one component of the legal system. Yet technology ethics governance and legal governance are ultimately two different governance models whose effects are achieved in different ways, and conflating the two may cause governance goals to fail. It is therefore necessary to re-examine how the technology ethics governance mechanism is embedded in the AI governance system, and to coordinate its connection with other governance mechanisms such as law, the market, and technology.


2. Theoretical basis and common paradigms of ethical governance in artificial intelligence technology


2.1 The governance objectives and theories of technological ethics governance

In practice, the risks of artificial intelligence technology are complex and composite, involving not only the infringement risk that technology applications damage rights and interests, but also the technology ethics risk that applications may overturn the status of humans as subjects, the technical risk of whether information systems are secure and reliable, and others. Based on these differences in risk type, AI governance theory extends into different governance models and paradigms. Technology ethics governance is a governance model focused on addressing ethical risks in technology; it draws on the flexibility and openness of ethical norms to respond more quickly to the various risks arising from technological innovation. Many scholars therefore regard technology ethics as an important supplementary tool that fills the gaps left where other governance mechanisms struggle to respond effectively to emerging risks. The technology ethics review mechanism in particular has attracted much attention from legal scholars, and the mainstream view tends to treat it as a legal precondition for carrying out technological innovation activities. This research tendency, however, overlooks the inherent difference in governance logic between technology ethics governance and legal governance. Compared with the concrete goals of legal governance, such as the protection of specific rights, technology ethics governance is essentially directional and goal-oriented. Because technology ethics itself has no particularly clear and specific content, it typically takes abstract social values as its main substance. This determines that technology ethics governance provides directional guidance for technological innovation activities, specifically manifested as "taking ethical principles as guidance, solving the ethical and social problems faced by technological development, and enhancing the sum of the various ways in which science and technology develop for human well-being".

In practice, the core issue of technology ethics governance in the field of artificial intelligence is how to promote consensus on technology ethics among diverse social actors. In the process of institutionalizing technology ethics governance, this issue becomes one of how to establish governance standards that accommodate the rights and interests of all legal subjects. Domestic scholars have developed three types of theory on technology ethics governance. The first is the legalization of technology ethics governance, which emphasizes the orderly interaction between the rule of law and ethical governance and the integration of fragmented technology ethics norms into an integrated governance mechanism: for example, once a violation of technology ethics occurs, it should be clarified whether it rises to the scope of criminal law regulation and under what circumstances administrative measures should be taken. Some scholars hold that the difference between the logic of the rule of law and that of technology ethics governance lies in the fact that the rule of law takes "safety" as the social-value starting point for constructing the regulatory framework of technology, while technology ethics governance focuses on responding to the broad social issues that technology raises; the law should therefore intervene in technology ethics issues by responding differently to "upstream" activities such as research, development, and design and "downstream" activities such as application. The second is ethical intervention, which holds that in technology governance, technology ethics should take effect before other governance tools, guiding the healthy development of technological innovation through directional ethical norms. This is because the inherent logic of technology ethics governance often manifests as "responding to technological uncertainty with ethical red lines", "constraining technology capital with ethical norms", and "supplementing the limited role of law with ethical standards". The third type explains the basic content of technology ethics governance from an ethical perspective, for example starting from the principles of applied ethics: in the field of artificial intelligence, technology ethics principles "provide a unified vocabulary as a cognitive framework to frame the solutions to all problems, for artificial intelligence developers to consider potential risks".

Comparing these three types of theoretical proposition, the ethical-interpretation positions of the second and third types better match the core characteristics of technology ethics governance. First, the first type to some extent confuses "the law prescribing technology ethics governance mechanisms" with "the legalization of technology ethics governance". The content of technology ethics governance is relatively broad, and not every governance mechanism can be institutionalized; even where laws and regulations provide for technology ethics management, the actual operating mechanism still rests mainly on abstract value evaluation and weighing, and cannot form the integrated mechanism this proposition claims. Second, the second type matches the supplementary character of technology ethics governance: because technology ethics norms usually lack mandatory force, their regulatory role needs to operate before other governance mechanisms, as a supplementary mechanism that solves problems those mechanisms cannot cover. Finally, the third type points directly to the ethical essence of technology ethics governance, whose core mechanism is to provide researchers with a shared cognitive system. In a multicultural context it is difficult to form a unified consensus on technology ethics; technology ethics governance therefore focuses abstract ethical values on specific modes of technology application through ethical principles, ethics review, and similar means, promoting the transformation of abstract ethical norms into visible application standards.


2.2 The governance function of artificial intelligence technology ethics

The existing technology ethics governance model is only a form of "soft-law governance", and its practical governance functions are accordingly limited. From the perspective of division of labor and cooperation among governance mechanisms, however, this weakly binding model was never intended to impose particularly strict constraints on specific acts of technological innovation. Rather, the ultimate goal of this model is to provide "optimization suggestions" in an ethical direction while minimizing the impact on freedom of scientific research and the space for technological innovation. The core goal of AI technology governance is to balance technological innovation and technological security, and since there is no unique solution to the trade-off between the two, it is necessary to use the space for ethical consultation created by the technology ethics governance mechanism to reach the broadest possible consensus among different parties on how both goals can be achieved. Some scholars hold that the science and technology governance system has three main goals: "promoting the development of science and technology", "ensuring that the process of scientific and technological development is healthy and orderly", and "controlling the consequences and impacts of science and technology"; achieving all three simultaneously cannot rely solely on other governance tools such as law and technology. The theory of ethical intervention is proposed on this basis: before technological innovation activities are carried out, technology ethics norms should intervene as necessary at the strategic and planning levels, while necessary ethical evaluation and external supervision are conducted at the level of technical application. In summary, the governance function of technology ethics is implemented by absorbing the opinions of social groups through open technology ethics, so that technological innovation activities can approach the ideal state as closely as possible.

This mode of functional implementation is not invented out of thin air but is closely tied to the specific functions of technology ethics governance. Specifically, the ethical governance functions of artificial intelligence technology include guidance, negotiation, identification, and feedback. First, the guidance function refers mainly to the influence of technology ethics norms on technological innovation activities: stages such as research, development, and design as well as application and promotion are guided by socially agreed principles of technology ethics. Its most intuitive manifestation is cultivating concepts such as engineering ethics and design ethics among technological innovation personnel so that they consciously put technology ethics into practice. Second, the negotiation function refers to the open content of technology ethics, which allows various social actors to express different attitudes toward, and interpretations of, technology ethics and to seek consensus on its principles through noisy public dialogue. For example, whether using artificial intelligence to "resurrect" the deceased, with the consent of close relatives, raises the ethical problem of eroding the deceased's status as a subject is one representative topic of such public dialogue. Third, the identification function refers to the capacity of technology ethics norms to serve as criteria for judging the reasonableness of technological innovation activities and to discover and confirm potential technology ethics risks. The technology ethics review mechanism, for example, evaluates AI innovation activities in sensitive research fields and determines whether there are technology ethics risks that must be avoided in advance. Fourth, the feedback function refers mainly to the self-adjustment of technological innovation activities that have undergone ethical evaluation: similar activities make the necessary adjustments and take risk-prevention measures in light of ethical problems already identified, avoiding the recurrence of similar problems.

The four functions of guidance, negotiation, identification, and feedback enable technology ethics norms to play a normative role in a way distinct from other governance models such as legal governance. Some scholars have also summarized the functions of "ethics" from an ethical perspective as "the reflective function of examining merits and demerits", "the evaluative function of confronting the value issues of scientific and technological activities and their results", "the function of imposing sanctions based on ethics", and "the guiding function of linking past experience with future development". This functional decomposition is essentially constructed around the directional, goal-oriented character of technology ethics governance. The "ethical sanction function", however, requires further discussion. Where ethics appears to have a "sanction" function, as in punishment based on environmental-ethics violations, it is because the ethical norms in question have been widely recognized by the public and transformed into legally binding norms; one cannot simply conclude from this that "ethics has a sanction function". Undoubtedly, the series of social governance issues arising from technological innovation will affect existing governance theories. As some scholars have observed, "technology ethics governance emphasizes strengthening ethical regulation through the interaction between technology and society, and enhancing moral control in technology governance". But the strengthening of this ethical normative function essentially points to the importance and necessity of taking technology ethics governance seriously, not to an attempt to introduce mandatory legal governance into its framework.


2.3 Common mechanisms for ethical governance of artificial intelligence technology

Before exploring how technology ethics governance is integrated into the AI governance system, one question remains to be clarified: which specific governance methods or mechanisms does technology ethics governance include? If technology ethics governance is understood merely as judging technological innovation activities by technology ethics norms, discussion of the relevant issues will remain at a fairly general level and cannot respond to specific problems of governance practice. The functions of technology ethics governance determine its mechanisms, and the four functions of guidance, negotiation, identification, and feedback correspond to four types of mechanism. First, the guidance function extends to industry-guidance mechanisms, including regulators' advance formulation of technology innovation industry plans, best-practice examples, and the like. Technology industry planning combines technology ethics governance with regulatory practice, emphasizing that regulators should consider technology-ethics factors when formulating industrial policies and plans, and should guide, encourage, and support technology enterprises and research institutions in adopting innovation approaches that meet the requirements of technology ethics. Second, the negotiation function extends to the public-participation mechanism: for controversial technology ethics issues, methods such as soliciting public opinion are used to explore the relevant ethical questions and promote the formation of consensus ethical principles. Third, the identification function extends to mechanisms such as technology ethics review and technology ethics assessment. The main function of technology ethics review is to discover and identify serious technology ethics risks; its scope of application is mainly specific innovation activities at the practical level. The main function of technology ethics assessment is to dynamically track and assess risks in high-risk areas of technological innovation; it suits innovations still at an early stage of development, where the future direction of the technology and the potential forms of its ethical risks remain uncertain, and it assesses the dynamics of industrial innovation at home and abroad in order to predict the corresponding ethical risks. Fourth, the feedback function extends to mechanisms for optimization and improvement, such as technology ethics management and supervision and technology ethics rectification and supervision. The direct addressees of technology ethics governance are researchers: through mechanisms such as ethics review and risk assessment, the results of risk identification are fed back to researchers, enabling them to make appropriate optimizations across research, development, design, and application.

The four functions of guidance, negotiation, identification, and feedback are interconnected, which in turn produces systematic connections among the corresponding technology ethics governance mechanisms. First, the technology ethics norms involved in the guidance function consist mainly of ethical principles, such as those listed in the Opinions on Strengthening the Governance of Science and Technology Ethics: "promoting human well-being", "respecting the right to life", "upholding fairness and justice", and "maintaining openness and transparency". These set the direction that the negotiation, identification, and feedback functions must strictly follow, so the mechanisms derived from those functions should not contradict the mechanisms established by the guidance function. Second, the negotiation function refines the criteria for ethical judgment within the framework of principles established by the guidance function, and these criteria are applied directly in the mechanisms established by the identification and feedback functions. In the field of AI applications in particular, where views on technology ethics issues differ significantly, consensus-based ethical standards need to be established before other governance mechanisms operate. Third, the mechanisms established by the negotiation, identification, and feedback functions overlap and intersect at the level of technology ethics review: because such review is usually conducted jointly by experts with backgrounds in technology, law, and ethics, the review process is itself a process of negotiating technology ethics concepts. In addition, under the mechanism established by the feedback function, the ethical judgments and expert suggestions produced by technology ethics review are fed back to the reviewed party, once again realizing the guidance function in specific fields of application and thereby completing the functional cycle of technology ethics governance.

It should be noted that the specific content of technology ethics norms is not fixed, and the corresponding governance mechanisms will change as those norms evolve. The flexibility of technology ethics governance is reflected not only in the continuous evolution of ethical norms in response to technological innovation, but also in the formation of more targeted mechanisms based on the basic characteristics of technology ethics risks. When exploring the integration of technology ethics governance into the AI governance system, therefore, the core issue is not which type or types of mechanism should be legalized, but how to clarify which legal, technological, and market governance mechanisms can be linked to the functions of technology ethics governance.


3. The institutionalized embedding path of ethical governance in artificial intelligence technology


3.1 Theoretical premise for institutionalized embedding of technological ethics governance: risk identification

Most existing theories take risk prevention, risk governance, and related theories as the theoretical basis for technology risk governance. Some scholars even habitually cite Ulrich Beck's concept of the "risk society" and interpret it at face value: because modern society is constantly surrounded by artificially created risks, it requires comprehensive governance and prevention by law and other tools. This interpretation ignores the fact that risk-society theory is a grand sociological theory, so the conclusion drawn amounts to no more than restating the consensus that "modern society must attend to risk prevention and governance", without truly explaining the impact of modern social risks on the social governance system. One genuine theoretical contribution of risk-society theory is its discovery of the risk posed by scientific interest groups in the process of risk definition: different interest groups may evaluate risks non-objectively in line with their own interests so as to avoid ethical and legal responsibility. Some scholars take nuclear power plant crises as an example: each interest group uses risk definition to deflect risks that might affect its own interests, arguing that nuclear plants are vulnerable to natural disasters because such risks are unpredictable and uncontrollable, so that there is supposedly no need to discuss whether the plant itself had technical defects or management loopholes. In the field of artificial intelligence, legal governance can certainly clarify the security obligations of developers and service providers, but research institutions or technology companies may exploit barriers of professional knowledge to bring risk events within the scope of exemptions, for instance by claiming that the limitations of existing technology make such problems unsolvable. At this point, the intervention of other governance mechanisms becomes particularly necessary.

The artificial intelligence governance system integrates risk management with the promotion of technological innovation. Risk identification is not only an important guarantee of technological security but also a basic element of high-quality innovation. In practice, AI technology risks are usually combinations of various risks, which is why governance theories such as collaborative governance and systemic governance have attracted academic attention. From the perspective of collaborative governance theory, the sources and causes of AI technology risks are diverse: the high risk of an innovation activity does not stem solely from researchers ignoring law or ethics, but may also involve factors such as the failure of market regulation to restrain the irrational pursuit of economic benefit and the unpredictability of the direction of innovation. Addressing AI technology risks therefore requires both government agencies and non-governmental organizations to participate freely in governance activities, using different governance tools to exert their respective effects in pursuit of consistent governance goals. The mainstream view usually reduces the implementation of collaborative governance to adjusting government functions, introducing market mechanisms, strengthening legal guarantees, and encouraging public participation, emphasizing the interconnection and interaction of different governance actors and mechanisms. The specific implementations of collaborative governance may differ theoretically, but they share the premise that AI technology risks are complex, and they consider collaboration on the basis of a division of governance labor. In other words, complex technological risks must first be subdivided into specific risk types, and different governance mechanisms must then be connected to them according to the particularity of each type.

At the level of technological ethical risks, there are two types of differences in the scope of specific risk forms: the general social risk theory tends to include all risk forms that affect individual rights and interests in the category of technological ethics. For example, some scholars believe that risks such as data privacy breaches, excessive data collection, and ambiguous liability for infringement are all technological ethical challenges faced by the development of artificial intelligence. The narrow ethical risk theory limits technological ethical risks to traditional ethical categories, mainly taking risks such as personal rights infringement and damage to social fairness and justice as specific risk forms. The generalized social risk theory actually equates technological ethical risks with legal risks, and the corresponding governance conclusions cannot effectively connect technological ethical governance with legal governance. Although there is a certain correlation between privacy protection and ethical norms, the privacy infringement risk in the field of artificial intelligence essentially belongs to legal risk or security technology risk. If ethical values are used as the legitimate basis for incorporating privacy issues into the scope of technological ethical risks, it will only lead to the assimilation of technological ethical governance mechanisms with other governance mechanisms, and the evaluation process of technological ethical norms will become a "subsidiary" of other governance mechanisms. It should be pointed out that the core form of technological ethical risks should be the mismatch between technological development and human ethical concepts. The discovery, confirmation, and identification of ethical risks in artificial intelligence technology should be based on technological ethical concepts, rather than legal protection of rights and interests. 
Some scholars have also proposed that the basic principle of scientific research ethics is to "promote the responsible conduct of scientific and technological activities and the sound development of science and technology". Technological ethical risks thus concern whether scientific and technological innovation is reasonable, rather than whether it is legal or whether its applications are safe.


3.2 Individualized embedding of technology ethics governance mechanism

The specific form of the technology ethics governance mechanism will change with the specific form of technological ethical risks, but at the level of governance logic it remains relatively independent, following the logic of "guidance → negotiation → identification → feedback" to address technological ethical risks on a long-term basis. Integrating the technology ethics governance mechanism into the artificial intelligence governance system as a whole must proceed from the functional characteristics of this type of mechanism. Specifically, technology ethics governance is characterized by a cyclical and continuous governance process. For a specific technological innovation activity, the evaluation of technological ethics includes not only prior evaluation and follow-up evaluation, but also secondary evaluation triggered by changes in technological ethical norms, with the goal of driving the innovation activity continuously toward an ideal state. Some scholars regard this governance process as a direct manifestation of the concept of "reflective development", which resolves the inherent tension between technological development and governance through "forward-looking foresight", "real-time evaluation", and "systematic adjustment". In essence, this concept of technological ethical governance embodies the fundamental risk governance logic of "risk assessment → risk discovery, identification, and evaluation → risk response". When addressing technological ethical risks in a single context, the embedding of the technology ethics governance mechanism should connect the internal logical system of single-risk governance, namely by constructing a technology ethics risk assessment and management mechanism, a technology ethics risk identification and evaluation mechanism, and a technology ethics risk feedback and response mechanism.

Firstly, the technology ethics risk assessment and management mechanism is an anticipatory governance mechanism. Looking back at the development of artificial intelligence technology, research on technology governance at different stages has had different focal points. Owing to technological limitations, early research did not identify the ethical risks peculiar to artificial intelligence, and some ethical issues were regarded as lying beyond the application of the technology itself. After the emergence of artificial intelligence products such as ChatGPT and Claude, research on the ethics of artificial intelligence technology gradually came to focus on issues such as discrimination and bias, and the same is true in legal research. Neither technical experts nor regulatory agencies can accurately predict the future direction of artificial intelligence technology. In the face of technological risks beyond the foreseeable scope, constraining them with specific legal institutions would in fact restrict technological innovation, so more flexible technological ethical norms are needed. Where potential risks are difficult to predict accurately, technological ethical norms should guide innovation activities to consciously adopt approaches that benefit the public interest.

Secondly, the technology ethics risk identification and evaluation mechanism is a governance mechanism based on risk confirmation. A common question in the ethical governance of artificial intelligence is which technological ethical standards governance should follow, and which of them carry authority. The openness of technological ethical norms often leads different subjects to form different ethical concepts based on their own interests, making them difficult to reconcile into relatively clear norms. By comparison, the core function of the risk identification and evaluation mechanism is to confirm which risks count as technological ethical risks, ensuring that the ethical issues in public disputes are discussed on the same plane. What technological ethical issues the application of artificial intelligence gives rise to is perennially controversial, and the conflation of legal, technical, and ethical issues is common; such a mechanism is needed to first determine which technological ethical issues actually exist. Moreover, the content of the risk identification and evaluation mechanism emphasizes the participation of actors with diverse knowledge backgrounds, which avoids the strong subjectivity and excessive professional orientation of conclusions reached by actors with a single professional background. A further common question about technology ethics governance is whether such mechanisms may become an unnecessary burden on technological innovation. The main function of the technology ethics risk identification and evaluation mechanism is to discover and confirm the basic types of technological ethical risks.
If they belong to prohibitive matters already covered by current legislation, they can be resolved in accordance with prohibitive legal norms, and there is no need to set up additional mandatory ethical norms for secondary constraints.

Thirdly, the technology ethics risk feedback and response mechanism is a risk-responsive governance mechanism. The ultimate addressees of technological ethical norms are technological innovation institutions and personnel. The ethical judgments made by the aforementioned identification and evaluation mechanism should therefore be fed back to these innovation entities, so that they can optimize and adjust their methods of innovation in light of the rectification suggestions of technical, ethical, and legal experts. This feedback and response mechanism includes both immediate feedback and group feedback. Immediate feedback means feeding the identification of technological ethical risks and the expert recommendations produced by governance mechanisms such as technology ethics review back to the reviewed subject, requiring it to adjust its innovation activities on that basis. Group feedback means that regulatory agencies or technology ethics committees, drawing on technology ethics governance, provide feedback on specific high-risk innovation activities through prohibitive lists and other means.


3.3 Linkage embedding of technology ethics governance mechanism

Having clarified the unidirectional path for embedding the technology ethics governance mechanism in the artificial intelligence governance system, it is also necessary to consider how this mechanism connects with other governance mechanisms such as law, technology, and the market. The artificial intelligence governance system emphasizes the goal of balancing the promotion of technological innovation with the security of technological applications. The unidirectional embedding path described above serves the security of technological applications, while promoting innovation requires resolving the connections among governance mechanisms. A basic principle of technology governance is not to set up mechanisms with duplicate functions for the same risk issue. The establishment of any governance mechanism affects technological innovation to some degree; only by properly handling the relationships among the various mechanisms and clearly distinguishing their substantive differences can innovation be promoted while security is ensured. For example, at the level of responsibility mechanisms, ethical responsibility and legal responsibility are two different types of responsibility, and the institutionalization of technology ethics governance must not attach mandatory adverse consequences to the violation of technological ethical norms as such. In the process of institutionalization, technological ethical norms are incorporated into specific legal norms, and the direct basis for legal responsibility is the violation of mandatory norms, not the violation of technological ethical norms. At the ethical level, ethical responsibility is often placed alongside concepts such as regulatory responsibility and research and development responsibility.
In the view of some scholars, the ethical responsibility of scientific and technological personnel mainly includes "the professional ethical norms that should be followed" and "the social responsibility for the consequences of scientific and technological innovation". Therefore, under the concept of collaborative governance, the connection between ethical responsibility and legal responsibility is also one of the key issues that need to be clarified in the integration of technology ethics governance mechanisms.

The logic connecting the legal, technical, ethical, and market governance mechanisms always revolves around risk characteristics. The core method of connection is to assess the risks of artificial intelligence technology in an integrated manner, merging overlapping and duplicated assessment items so as to avoid repeated assessment of the same risks. At the level of technology ethics governance, its connection with the other governance mechanisms is manifested as follows:

Firstly, regarding the connection between ethics and law, some consensus-based principles of technological ethics have been internalized as basic legal principles. The method of connection is to construct a technology ethics review system that brings technological innovation activities in sensitive research fields into review. On the premise of distinguishing mandatory review from voluntary review, whether mandatory review was carried out in accordance with the law, together with its results, serves as the criterion for judging whether the innovating subject bears subjective fault for the resulting damage. At the level of responsibility, only mandatory technology ethics review is linked to legal responsibility; ethical responsibility operates mainly beforehand, constraining the innovating subject through internal norms and prompting it to choose appropriate methods of innovation upon reflection on ethical concepts.

Secondly, regarding ethics and technology, ethical norms do not act directly on the artificial intelligence technical solution itself but on the subject of technological innovation. The connection between the two therefore lies in cultivating technological ethics awareness among innovating subjects. Some scholars point out that technology ethics governance cannot be achieved overnight through campaign-style governance; the key is to cultivate technological innovators with a sense of technological ethics and moral sensitivity. Faced with the uncertainty of the future development of artificial intelligence technology, the governance system needs to exercise the guiding function of technology ethics governance, embedding engineering ethics and professional ethics training in innovation activities. Although philosophers have not reached consensus on the basic theories or principles of ethics, this does not prevent innovating subjects holding different technological ethical concepts from consciously practicing the most fundamental ethical values. Foreign scholars have proposed a practice-oriented "ethical embedding" that involves ethics throughout the development process, analyzing and discussing potential ethical issues collaboratively and seeking solutions jointly with researchers.

Thirdly, at the level of ethics and the market, the most common market risks in artificial intelligence governance practice are disorderly competition and disorderly development, such as adopting unsafe open-source models to capture the market quickly, or developing high-risk artificial intelligence products purely for profit. The intersection of market risk and technological ethical risk is usually an industry trend unfavorable to the industry's long-term development, so the connection between the two runs mainly through industry self-regulation norms. On the one hand, before industry practice fully matures, consensus self-regulation norms are formed to address potential risks of unfair competition; on the other hand, artificial intelligence products and services are still developing continuously, and integrating technological ethical norms into industry self-regulation norms helps guide technology enterprises to consciously choose more reasonable methods of technology application as they develop.


4. Example of the Embedded Construction of Ethical Governance in Artificial Intelligence Technology: Autonomous Driving Technology


Recently, the pilot application of "Apollo Go" autonomous driving has caused considerable social controversy. Beyond existing legal and technical risks such as driving safety and data security, the disputes also involve social issues such as whether autonomous ride-hailing may crowd out employment. Under the embedding framework for governance mechanisms described above, the ethical governance rules for autonomous driving technology can be constructed step by step in the following three steps.


4.1 Step 1: Analysis and Identification of Ethical Risks in Autonomous Driving Technology

The technological ethical issues faced by "Apollo Go" and other autonomous driving applications are similar, because whether it is a driverless ride-hailing vehicle or an unmanned delivery vehicle, the question is how to arrange and rank the protection of different safety values. On the one hand, in an emergency during driving, should the autonomous driving system prioritize the personal safety of the occupants, or weigh the severity of the actual damage inside and outside the vehicle before making a choice? This hypothetical technological ethical risk has become an ethical dilemma for the popularization of autonomous driving. On the other hand, by what standards should the ethical values of driving safety be set? If only manufacturers and system developers decide, commercial interests will inevitably take precedence over technological ethics; but if ethical and legal experts participate, how can it be ensured that these external experts will not impose their subjective ethical concepts on the application of the technology? The most frequently cited ethical issue here is the "trolley dilemma": in the face of an unavoidable traffic accident, minimizing casualties and protecting the passengers in the vehicle often pull in opposite directions, creating an ethical dilemma for autonomous driving applications. It should be noted, however, that the "trolley dilemma" is a typical thought experiment whose core function is not to yield a "standard answer" but to provide a platform for exploring ethical standards. The moral dilemma to which the "trolley dilemma" points is not a technological ethical risk at the level of governance practice.
Some ethical studies hold that the complexity of the moral scenarios involved in artificial intelligence applications far exceeds the "trolley dilemma". Such claims overestimate the intelligence of current autonomous driving systems and are essentially premised on strong artificial intelligence that may emerge only in the future. Moreover, the extent to which existing autonomous driving technology can make ethical choices in complex moral situations remains open to debate.

In addition, the pilot application of "Apollo Go" has triggered another socially salient issue: low-cost autonomous ride-hailing has taken market share from traditional ride-hailing and sharply reduced drivers' order volumes. Some have therefore attempted to bring claims such as "autonomous driving applications erode the living space of ride-hailing drivers" into the realm of technological ethics. Objectively speaking, technological innovation changing the structure of social employment is not unique to autonomous driving; the large-scale replacement of print media by digital media likewise changed the employment structure. Explaining the technological ethical risks of autonomous driving by appeal to the threat artificial intelligence poses to employment stability clearly deviates from the conceptual positioning of technological ethics, generalizing technological ethical issues to all social governance issues related to social stability. More precisely, the unemployment risk caused by the application of autonomous driving technology falls within the scope of public employment policy, not technological ethical risk. Technological ethical risks in the artificial intelligence governance system manifest rather in whether innovation activities themselves subvert the status of humans as subjects. The risk of unemployment belongs to the adjustment of the social employment structure that technological innovation inevitably brings, and does not involve technological ethics.

Although there are many disagreements over the identification and confirmation of ethical risks in artificial intelligence technology, with some scholars holding, for example, that ethical risks mainly comprise tense human-machine relationships, lack of ethical perception, and algorithmic bias, from the perspective of artificial intelligence governance goals the criterion for judging technological ethical risk should be whether technological innovation is reasonable, and whether unreasonable innovation activities degrade human dignity. It remains doubtful whether the potential technological ethical risks in the field of autonomous driving exhibit characteristics such as being "quite serious" or "particularly sensitive". Given that autonomous driving has not yet reached a high level of intelligent decision-making, its technological ethical risk lies in whether an autonomous vehicle, when encountering complex road conditions, can make the most reasonable driving decision out of respect for the safety of life and personal dignity. In general, technology ethics governance emphasizes a comprehensive and systematic governance architecture, while in specific application scenarios it highlights scenario-oriented governance mechanisms built on that general architecture. In autonomous driving scenarios, technological ethical risks, as distinct from general ethical risks, focus on innovation activities and on whether researchers comply with the ethical norms of the specific technological field; the embedding of the corresponding governance mechanisms must be constructed around the technological ethical risks of the specific scenario.
In addition, the technological ethical risks of autonomous driving applications include risks found in other artificial intelligence applications as well, such as algorithmic discrimination and unreasonable service prioritization; these, however, are already covered at the general level of the technology governance system.


4.2 Step 2: Single governance of ethical risks in autonomous driving technology

The main technological ethical risk faced by the application of autonomous driving technology is that driving decisions may not comply with technological ethical norms. Following the risk governance model of "risk assessment → risk discovery, identification, and evaluation → risk response", the individualized embedding of technology ethics governance in autonomous driving scenarios comprises an autonomous driving technology ethics assessment and management mechanism, an autonomous driving technology ethics risk discovery and identification mechanism, and an autonomous driving technology ethics risk feedback and response mechanism.

Firstly, the ethics assessment and management mechanism for autonomous driving technology proceeds mainly from the practical state of the autonomous driving industry, combined with industry development policies and innovation plans, to clarify the specific manifestations of technological ethical risks in the field and the ethical norms to be followed. At the level of risk types, regulatory agencies and technology ethics committees can periodically assess changes in the field's technological ethical risks: regulatory agencies chiefly guide, at the level of industry policy, the adoption of technical solutions that comply with ethical standards and clarify the sound direction for innovation in autonomous driving; technology ethics committees periodically update scenario-oriented ethical norms for autonomous driving based on industry pilots, domestic and foreign application cases, and research results in technology ethics theory. At the same time, in response to the complex traffic conditions autonomous driving may face, regulatory agencies and technology ethics committees can jointly issue guidance documents, such as industry guidance standards and examples of technical application practice, listing driving routes and driving methods that comply with ethical standards in common situations such as intersections, highways, and narrow lanes.

Secondly, the mechanism for identifying ethical risks in autonomous driving technology mainly discovers, recognizes, and confirms specific technological ethical risks on the basis of scenario factors such as the driving route and driving mode. The institutional function of technology ethics review is to make moral and ethical judgments, but its institutionalization should not be rigidly understood as requiring prior review of all innovation activities; only activities involving sensitive research fields require mandatory review. Not every innovation activity in autonomous driving requires mandatory technology ethics review, since only some application scenarios fall under the requirement of Article 4 of the "Measures for Technology Ethics Review (Trial)" that "research content involves technology ethics sensitive areas". In autonomous driving, "sensitive areas" mainly refer to innovation activities involving the safety of life and traffic safety. For example, where an innovation activity changes the decision-making capability and intelligence level of an existing autonomous driving system, a technology ethics review of the actual effect and scope of influence of the improved decision-making capability is needed to determine whether serious technological ethical risks exist. Of course, if the activity does not involve decision-making capability but only functions such as optimizing passenger comfort, no mandatory technology ethics review is needed.
Technology ethics governance in this regard also includes other forms of governance mechanisms, such as conducting technology ethics risk assessments at the outset of autonomous driving innovation activities. This ethical embedding model spares innovating subjects the additional research and development costs of revising a technical solution again shortly before application.

Thirdly, the ethical risk feedback and response mechanism for autonomous driving technology mainly adjusts and optimizes the autonomous driving technical solution on the basis of the risk patterns of the innovation activities and the moral judgments and technical improvement suggestions formed in the preceding governance process. Most existing research on technology ethics governance has overlooked the link between adjustments to technical solutions and the results of ethical risk identification and evaluation. In mandatory technology ethics review, the results usually bear closely on the safety of autonomous driving, which falls within the sensitive areas of technology ethics; the suggestions and opinions formed by the reviewing experts should be an important basis for the innovating subject's adjustment of its technical solution. Technological ethical norms are admittedly not mandatory, but in mandatory review the principles of technological ethics intersect with basic legal principles, and whether the innovating subject revises its original technical solution in light of the review recommendations can serve directly as evidence of subjective fault. In voluntary technology ethics review, since no sensitive-area risks are involved, the review results and optimization suggestions need only be fed back to the innovating subject, which decides for itself whether to adjust the corresponding technical solution.


4.3 Step 3: Coherent governance of ethical risks in autonomous driving technology

In the field of autonomous driving, because the core technological ethical risks concern "driving safety" and "the safety of life", the key to connecting the technology ethics governance mechanism with other governance mechanisms lies in the identification, assessment, prevention, and control of safety risks. Here the targets of the legal, technical, and market governance mechanisms all involve different types of safety risk: the legal governance mechanism focuses on preventing personal injury caused by obligated parties' failure to fulfill their security obligations; the technical governance mechanism focuses on preventing unreliable autonomous driving systems caused by technical vulnerabilities and system design defects; the market governance mechanism focuses on preventing enterprises from pursuing technological leadership excessively, for instance by simplifying security management processes or piloting immature technology prematurely, which could plunge the whole industry into disorderly competition. The specific contents of these mechanisms nonetheless overlap to a degree. The legal governance mechanism involves network security, data security, and even algorithm security assessment mechanisms: for example, Article 26 of the Cybersecurity Law of the People's Republic of China provides for carrying out activities such as network security risk assessment, and Article 7 of the Provisions on the Administration of Algorithm Recommendation in Internet Information Services provides that algorithm recommendation service providers shall establish and improve management mechanisms for security assessment and monitoring. The technical governance mechanism involves national technical standards related to artificial intelligence and autonomous driving, most of which also involve safety risk assessment.
For example, "5.2 Testing Requirements" of the "Technical Requirements for Safety Testing of Intelligent Connected Vehicles" (GB/T 43766-2024) provides that the testing subject shall submit a testing report covering risk assessment matters such as the test vehicles, test roads, and test items. As for the market governance mechanism, since autonomous driving is still an emerging industry, mature business practice models, universal industry standards, and industry self-regulation norms have not yet formed; but given that the goal of market governance is to keep technology application in a state of orderly competition, future industry self-regulation and other specific governance mechanisms should make safety risks the main content of self-regulation norms.

Therefore, the connection between the ethics governance mechanism for autonomous driving technology and the other governance mechanisms should take the form of an integrated assessment of safety risks, incorporating the various safety risk factors of autonomous driving into the technology governance framework so as to avoid repeated evaluation of the same governance objects. Specifically, three basic principles should be followed. First, a type of safety risk that has already been assessed is not re-assessed; its conclusions can be used directly in other safety risk assessments. For example, the results of a technology ethics review can serve as a benchmark in an algorithm security risk assessment. Second, the safety risk assessments extended by the various governance mechanisms must not evaluate the same matter multiple times; for example, where national technical standards already cover technology ethics risk assessment matters, there is no need to conduct a further technology ethics review of the same ethical matters. Third, regular safety risk assessments should be primary, supplemented by ad hoc assessments; to avoid burdening innovating subjects with frequent and redundant assessment activities, ad hoc safety risk assessments should be conducted only when autonomous driving technology undergoes substantial innovation or has a substantial impact on driving safety.

Of course, besides the integrated assessment of safety risks, there are other ways to connect the governance mechanisms. Some scholars hold that the ethical governance of autonomous driving algorithms should "focus on implementation through the self-regulatory management of enterprises and industries", for example achieving whole-lifecycle ethical governance through ethics committees, industry self-regulation conventions, ethical standards and certification, ethical embedding in design, technical and managerial tools for algorithm ethics, and algorithm ethics incentives. For the governance goals of autonomous driving, effective control of the various safety risks is paramount, so how to construct an efficient, integrated safety risk assessment mechanism is especially crucial. The "Artificial Intelligence Risk Management Framework" released by the United States National Institute of Standards and Technology in 2023 adopts an integrated risk assessment logic, holding that artificial intelligence risks overlap in specific patterns, for example with privacy risks, data security risks, and software and hardware security risks. In the safety risk assessment of autonomous driving, distinguishing ethical risks, technical risks, and other risk types in advance serves to better determine the importance and urgency of the various safety risks and to provide normative guidance for the subsequent actions of governance subjects such as innovating entities and regulatory agencies.


5. Conclusion


Whether for today's artificial intelligence innovation or for future information technology innovation, technology ethics governance is an important tool for addressing the various technological risks behind such innovation. Theoretical research and institutional practice should not, however, infinitely magnify its practical functions: technological ethics ultimately belongs to ethical norms, which differ substantively from legal norms. In constructing the governance system for artificial intelligence technology, it is therefore necessary to clarify the governance functions and mechanisms of the various governance tools, set specific mechanisms in a targeted manner, and strengthen the internal connection and unity of governance logic among them. Risk governance is, after all, a comprehensive social governance activity, and tools such as law, technology, ethics, and the market each have objective functional limits in the face of emerging technology risks; only by following a unified governance logic can each tool be ensured to perform its due function. For technology ethics governance, the core effect is to guide the healthy development of technological innovation and to urge researchers to consciously abide by technological ethical norms. Its institutionalized embedding path therefore needs to explore more diversified models for guiding ethical concepts, rather than being limited to the technology ethics review mechanism, so as to promote institutional coordination at the level of risk governance logic.