Will a No-fault Liability Rule for AI Harms Obstruct AI Advancement? Insights from Law and Economics
*Author: Dai Xin, Associate Professor, Law School, Peking University
Abstract: Establishing tort liability rules for AI-related harm is a challenging issue in the field of AI legislation. Due to the difficulty in interpreting the mechanisms of black-box systems and the involvement of multiple parties in harm-causing processes, applying negligence liability to AI-related harm entails high institutional costs. Conversely, the application of no-fault liability not only incurs lower institutional costs and achieves better remedial effects but also, based on law and economics principles concerning the impact of liability rules on care and activity levels, may not necessarily result in excessive inhibition of AI innovation. Moreover, while risk regulation through administrative oversight remains the primary institutional approach to addressing the risks of AI-related harm, tort compensation liability—particularly the adoption of no-fault liability—can contribute to maintaining public trust in the AI industry and regulatory frameworks.
Keywords: No-fault liability; Care level; Activity level; Ex-ante regulation; Safe harbor rules
1. Introduction: Why Oppose No-fault Liability?
Determining the appropriate tort liability rules for harm caused by AI systems has become a prominent topic of debate in recent years. Whether involving traffic accidents caused by autonomous vehicles, industrial robots injuring human “co-workers” in production facilities, or medical diagnostic systems issuing harmful misdiagnoses, these scenarios either have already materialized in the real world or are anticipated to do so imminently. This urgency underpins the pressing need to define applicable tort liability rules.
One major focus of current discussions revolves around identifying the responsible parties for tort liability arising from AI-related harm. The involvement of autonomous or self-decision-making systems—and, in some cases, humanoid-like machines—has raised ethical arguments for assigning legal responsibility directly to AI entities. While such discussions, imbued with a futuristic allure, captivate public imagination, the proposal to grant AI legal personhood appears less a visionary leap and more a pragmatic response to immediate challenges. Specifically, AI-related harm typically involves multiple social actors engaged in the development, deployment, production, service provision, and use of AI systems (hereinafter collectively referred to as “AI harm participants”). The attribution of liability among these parties is a traditional conundrum in tort law, further complicated by the interpretative and cognitive limits of understanding complex AI mechanisms. Whether through the moral principles of corrective justice or the efficiency principle of the least-cost avoider, assigning liability remains a challenging endeavor. To circumvent these challenges, some have suggested the “ingenious” solution of making AI entities bear liability independently.
However, a closer examination reveals that granting AI systems legal personhood to assume tort liability independently requires them to possess assets upon obtaining legal status, and these assets must be substantial. AI entities without adequate compensation capability would merely function as tools for natural and legal persons to shift or evade liability. The initial funding for AI entities—whether provided by developers, manufacturers, operators, or end-users—inevitably comes from human sources. Market logic dictates that AI products or services that successfully reach the market at accessible prices are unlikely to maintain sufficient reserves for liability coverage; instead, they are likely backed by liability insurance policies. Regardless of the source of insurance costs, ensuring insurability for AI-related harm risks presupposes clear legal provisions on liability allocation—bringing us full circle to the multi-party attribution problem that granting AI legal personhood sought to bypass, effectively rendering such efforts futile.
In comparison, discussions on liability principles for AI-related harm—particularly the choice between negligence and no-fault liability—have garnered relatively less attention. However, the prevailing insistence on negligence liability runs up against a core difficulty: determining how liability should be distributed among the participants in AI-related harm. If the mechanisms of harm are challenging to explain, by what behavioral standards can the negligence of different participants be assessed? Scholars have noted that existing negligence liability rules, including defect-based product liability, are ill-suited to the AI context. Specifically, if the goal of tort liability rules is to ensure victim compensation, harm scenarios involving multiple participants and unclear causation mechanisms often incentivize those most capable of providing relief (e.g., large corporations) to identify and exploit others’ faults as defenses to mitigate or eliminate their own liability.
Drawing on classical yet straightforward theories from law and economics, this paper proposes adopting a no-fault liability rule for AI-related harm. Under no-fault liability, legislators or adjudicators may, based on policy considerations, designate one or more parties—such as system developers, deployers, manufacturers, or service providers—to bear compensation responsibility without examining whether their conduct involved negligence. Evidence suggests that no-fault liability does not necessarily increase litigation and can reduce judicial costs compared to negligence liability. In the AI context, no-fault liability offers a promising solution to the multi-party attribution problem without creating independent legal personhood for AI. With legislation clearly assigning no-fault liability to specific entities, market participants across the AI development and application chain can establish further agreements regarding risk prevention and harm allocation through contracts and insurance. This arrangement allows each party to adopt reasonable care levels consistent with technological and commercial realities.
Fault-based liability is the standard principle under China’s civil tort liability system. Applying no-fault liability or presumed fault liability, however, requires specific legal provisions. In recent years, various AI legislation drafts proposed by domestic research teams have generally adhered to the principle of fault-based liability (including presumed fault) or, at the very least, refrained from advocating for no-fault liability. Based on the author’s understanding, while the legal community recognizes the risks posed by AI-related harm and emphasizes the importance of victim compensation, it often hesitates to adopt no-fault liability due to concerns about overburdening participants in the AI value chain. This hesitation stems from fears that such legislative moves might exacerbate the challenges faced by domestic technology innovation, which is already under pressure from foreign technological constraints (“bottleneck issues”). Notably, this concern has also been explicitly voiced by industry stakeholders.
While such concerns are understandable and warrant a response, they do not necessarily provide sufficient grounds for adhering to fault-based liability in the context of AI-related harm. This paper argues, through theoretical analysis, that compared to fault-based liability, no-fault liability does not impose unreasonably high care levels on actors. Instead, its incentive effect primarily manifests in reduced “activity levels,” that is, the scale or intensity of risk-related activities, compared to those under fault-based liability. From a risk control perspective, particularly for AI research and operations involving significant harm risks, relatively lower activity levels are more aligned with optimizing social welfare.
Even if policymakers worry about the “chilling effect” or aim to encourage higher activity levels in AI-related endeavors, combining no-fault liability with mechanisms such as safe harbor provisions, liability caps, and insurance systems can ensure that its low institutional cost advantages are retained without excessively discouraging activity levels.
The conclusion reached here—that tort liability for AI-related harm (especially involving highly hazardous activities) should adopt no-fault liability—is not novel and has been previously proposed. Similarly, the legal and economic analytical framework employed in this paper is not groundbreaking. However, the following discussion aims to deepen the academic and practical understanding of no-fault liability. While the legal governance of AI must remain value-driven, no-fault liability should not be perceived solely as a defensive legal tool adopted to enhance safety. It may, in fact, be more efficient than fault-based liability. This counterintuitive perspective merits further theoretical exploration.
2. Fault-based Liability and AI-related Harm
This discussion on fault-based liability focuses on negligence, as addressing liability for intentional harm similarly relies on comparing the safety benefits and preventive costs of specific precautionary actions (e.g., refraining from harmful activities). In other words, analyses centered on negligence liability can also encompass intentional torts. The primary advantage of the no-fault liability advocated in this paper over negligence liability lies in its ability to avoid requiring judicial authorities to conduct cost-benefit analyses in individual cases when retrospectively determining negligence. This, in turn, grants AI harm participants greater autonomy and flexibility to dynamically manage risks.
2.1 Corrective Justice or Efficient Prevention
Corrective justice and efficient prevention represent the two main normative foundations of negligence liability. According to corrective justice theory, compensation responsibility arises from the moral obligation of an actor to remedy harm caused to others through their wrongful actions during specific interactions. In contrast, efficient prevention theory posits that liability rules aim to incentivize actors to adopt reasonable preventive measures ex ante and minimize the total expected societal costs, which include accident costs and prevention costs.
From the perspective of corrective justice, discussing and justifying tort liability for AI-related harm aligns more closely with common moral intuitions—that “only those at fault should be held accountable.” However, this approach often leads to a quagmire of debates about distributing moral and ethical responsibility among the network of harm-causing participants. In most AI-related contexts, except for relatively straightforward scenarios, it is challenging to trace specific harm to the wrongful actions of one or more identifiable parties. Additionally, it is foreseeable that participants will employ various moral arguments to deflect blame onto others. Therefore, this paper sets aside corrective justice for the time being and focuses on analyzing the effects of different liability rules within the framework of efficient prevention. It is worth noting, however, that the liability rules that align with efficient prevention often reflect principles of corrective justice as well.
2.2 Reasonable Care in AI-related Harm
Under the framework of efficient prevention, the allocation of negligence liability aims to minimize overall social costs by requiring actors to fulfill reasonable care obligations. In other words, negligence liability does not—and is not intended to—achieve “absolute safety”; rather, it seeks the optimal cost-benefit balance in accident prevention efforts.
However, concepts such as “optimal” and “minimized” hold precise meaning only in theoretical economic calculations. In judicial practice, determining “reasonable care” relies on extensive case-specific judgments. Judges draw upon accumulated experience from similar situations as well as fact-based and counterfactual analyses in individual cases. The primary theoretical framework for such analyses is the so-called “Hand Formula,” which evaluates whether the cost of the precautionary measure not taken (B) exceeds the expected accident cost (PL)—i.e., the total loss from an accident multiplied by the probability of its occurrence without the precaution. As Grady’s classic work reveals, the proper interpretation and application of the Hand Formula, both theoretically and practically, should focus on marginal analysis: If an actor could have invested one additional unit of prevention effort that would have yielded safety benefits exceeding the cost of that unit, the failure to take such measures constitutes negligence. Consequently, the plaintiff’s core task in proving negligence is to identify and demonstrate the existence of such cost-efficient but unimplemented preventive measures.
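Stated compactly, and using generic notation of my own rather than any formula drawn from the cases themselves, the two readings of the Hand test can be written as follows:

```latex
% B        : burden (cost) of the specific precaution that was not taken
% p, L     : accident probability and the loss if the accident occurs
% \Delta p : reduction in accident probability that the untaken precaution
%            would have produced
\text{Aggregate Hand test: negligence if}\quad B < p\,L
\qquad
\text{Marginal reading: negligence if}\quad B < \Delta p \cdot L
```

On the marginal reading, a defendant who has already adopted every precaution satisfying B < Δp·L is not negligent, however severe the residual harm turns out to be.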
In the context of AI-related harm, significant challenges arise in determining whether reasonable care obligations were breached from the perspective of efficient prevention. Applying the marginal approach of the Hand Formula becomes particularly problematic due to the rapid advancement and evolution of AI technologies. Plaintiffs, armed with hindsight, can endlessly propose hypothetical preventive measures that the defendant “could have but did not” take, claiming these measures would have been cost-efficient at the margin.
This lack of clarity complicates the development of consistent judicial precedents in the emerging field of AI-related torts. Consequently, the market struggles to derive stable liability expectations based on judicial outcomes. Conversely, defendants in such cases are likely to exhaustively argue that other parties, including the plaintiff, could also have taken cost-efficient preventive measures. They would contend that other parties faced lower marginal costs and could achieve higher marginal safety benefits, using doctrines like contributory negligence or comparative negligence as the basis to argue for reduced or even eliminated liability.
In this scenario, negligence liability risks becoming a battleground for participants seeking to shift blame. For policymakers and the public, such outcomes are undesirable and counterproductive.
The complexities of applying negligence liability to AI-related harm can be attributed to two main factors. First, such harm typically arises within a network of multiple participants. The design, development, manufacturing, and provision of AI-related products and services are not the work of a single actor. Each participant in this network could potentially be accused of failing to implement some “marginally efficient” preventive measure. Second, AI systems often lack transparency or comprehensibility regarding their underlying principles and mechanisms. While this lack of transparency does not inherently increase risk, it significantly complicates post-incident determinations of liability.
When these two factors combine, it becomes exceedingly difficult to assess how each participant’s actions contributed to the harm within a specific incident. This not only makes it challenging to establish negligence in legal terms after the fact but also hampers participants from rationally planning their actions ex ante. Moreover, given the severe information asymmetry between participants and judicial decision-makers, actors may have strong incentives to engage in opportunistic behavior, knowing that disputes will be difficult to resolve effectively.
Even within the normative framework of efficient prevention, which avoids the moral dilemmas of corrective justice, determining reasonable care in AI-related harm remains a daunting task. Theoretically, the application of negligence liability to AI-related harm is fraught with inconsistencies. Practically, it is likely to increase disputes and institutional costs.
From an incentive perspective, negligence liability relies on judicial decision-makers’ ability to reliably define reasonable care obligations. When the information required to make these determinations exceeds what is typically available to the judiciary, the resulting legal uncertainty undermines the preventive incentives that negligence liability aims to provide. In such cases, negligence liability is unlikely to achieve optimal deterrence.
2.3 Presumed Negligence
Presumed negligence provides an alternative approach that attempts to address the challenges associated with establishing reasonable care in AI-related harm. Under this framework, even if the plaintiff cannot provide direct evidence, the occurrence of harm is presumed to be the result of negligence. The burden then shifts to the defendant to prove that they exercised reasonable care.
On the surface, presumed negligence seems to mitigate the increased disputes and high costs associated with standard negligence liability. However, it is important to note that presumed negligence is not equivalent to established negligence. Defendants still have the opportunity to argue that the harm resulted from factors beyond their control—particularly actions by other parties, including the plaintiff. Even if such arguments do not fully absolve them of liability, they can potentially implicate others, diluting their own responsibility.
The very possibility of such defenses incentivizes defendants to allocate resources toward securing more favorable outcomes in the loss allocation process, effectively engaging in rent-seeking behavior. Given the complexity of AI-related harm and the significant information asymmetry between harm-causing actors, victims, and the courts, defendants often have greater capacity to mount such counterarguments, leading to substantial rent-seeking expenditures.
As a result, the practical significance of presumed negligence is more limited than anticipated. Whether the legal objective is to avoid the high judicial costs of defining reasonable care standards or to provide better compensation for victims, presumed negligence is less effective than directly applying no-fault liability.
2.4 Product Liability
Product liability is often proposed as an alternative to negligence liability in discussions on AI-related torts. Under product liability, the designers, manufacturers, and sellers of AI products that cause harm are required to compensate victims for injuries caused by defective products. Victims must prove the existence of design or manufacturing defects to claim compensation.
However, the applicability of traditional product liability to AI-related harm is limited. AI-related harm does not always involve “products”: injuries resulting from AI-enabled services or AI-assisted decision-making fall outside the scope of product liability. Furthermore, while traditional tort law often characterizes product liability as a form of no-fault liability—where victims are not required to prove specific negligent behavior on the part of producers or sellers—its central requirement of proving “defect” implicitly involves analyses related to reasonable care. For example, the mere fact that a product is not perfectly designed or fails to guarantee absolute safety does not necessarily indicate a defect. A product is typically considered defective when its safety falls short of consumers’ reasonable expectations or when its safety risks outweigh the utility it is designed to achieve. In other words, determining defectiveness remains an analysis closely tied to concepts of reasonable care and efficient prevention. In this sense, product liability is effectively a form of de facto negligence liability.
Because of its reliance on the concept of defect, product liability does not fundamentally resolve the attribution challenges or institutional cost issues associated with negligence liability in the context of AI-related harm. The primary advantage of product liability compared to general negligence liability lies in its relatively narrow scope of responsible parties, often excluding AI product users from liability. Even so, when product liability regimes include defenses based on misuse, questions about users’ reasonable care may re-enter the discussion. Thus, product liability does not entirely avoid the difficulties associated with negligence-based frameworks in the realm of AI-related torts.
3. No-fault Liability vs. Negligence Liability: What Sets Them Apart?
The theoretical and practical challenges faced by negligence liability in addressing AI-related torts are relatively easier to resolve under a no-fault liability regime. Of course, this alone does not suffice to justify no-fault liability, as there remains the concern that “pressing down on one side of the gourd may cause the other side to pop up.” A particularly practical concern is whether no-fault liability might impose an excessive burden on AI innovators. Will relevant actors engage in excessive precautionary measures out of fear of liability? This line of thinking is not without merit, but the reasoning behind it may not entirely align with intuitive assumptions. The following sections, based on the principles of law and economics, will interpret how no-fault liability might generate behavioral incentives different from those under negligence liability.
3.1 Economic Analysis of Liability Rules: Level of Care
The economic significance of liability rules lies in incentivizing actors to adopt the most efficient level of care. According to classical models, “optimal care” minimizes the sum of the expected costs of accidents and prevention measures, also known as “social costs.” In theory, the “reasonable” care standard under an efficient negligence rule corresponds to this optimal level of care. Under negligence liability, as long as an actor exercises reasonable care, they will not be deemed negligent. Even if their actions result in harm to the victim, they are not required to bear tort liability. As a result, under an efficient negligence rule, rational actors are incentivized to adopt a level of care that is neither “too low” nor “too high”: If the level of care falls below the optimal level, actors may incur lower prevention costs, but they would need to bear the full cost of accidents, and the total cost would exceed that of adopting the optimal care level. If the level of care exceeds the optimal level, actors would not need to bear accident costs, but they would incur extra prevention costs that, at least legally, they are not obligated to bear. Only by adopting the optimal level of care can actors minimize their private costs as rational decision-makers, which simultaneously minimizes social costs.
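The standard unilateral-accident model behind this paragraph can be sketched as follows (the notation is generic and mine, not drawn from the paper’s sources):

```latex
% x    : the actor's level of care (prevention expenditure)
% p(x) : accident probability, decreasing in x
% L    : loss if an accident occurs
\begin{aligned}
\text{Social cost:}\qquad & SC(x) = x + p(x)\,L\\[2pt]
\text{Optimal (``reasonable'') care:}\qquad & x^{*} = \arg\min_{x \ge 0}\ \bigl[x + p(x)\,L\bigr]\\[2pt]
\text{Private cost under negligence:}\qquad & C(x) =
  \begin{cases}
    x + p(x)\,L, & x < x^{*}\\
    x,           & x \ge x^{*}
  \end{cases}
\end{aligned}
```

Because C(x) coincides with social cost below x* and equals bare prevention expenditure at or above it, its minimum is attained exactly at x*; an efficient negligence rule thus reproduces the socially optimal care level.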
How does no-fault liability influence the level of care compared to negligence liability? No-fault liability dictates that whenever an action results in harm, the actor must bear the cost of compensation regardless of whether they exercised reasonable care. Given that victims can more easily initiate and succeed in claims under no-fault liability, it may appear that actors are incentivized to be more cautious, adopting a higher level of care than under negligence liability.
However, based on the previously outlined principles of law and economics, this conclusion may not hold true. If the legal standard of “reasonable care” aligns with the level of care that minimizes the sum of accident costs and prevention costs, then rational actors will adopt the same level of care under both negligence liability and no-fault liability when engaging in specific activities. This is because, under either liability regime, actors aim to minimize their private costs. Even when subject to no-fault liability, meaning they must bear all social costs associated with accidents (including residual accident costs and prevention costs) regardless of their level of care, rational actors will still choose the level of care that minimizes their private costs, which is the same level that minimizes overall social costs under negligence liability. In other words, while no-fault liability imposes compensation obligations even when actors are not negligent, it does not imply that actors will overinvest in prevention to minimize liability.
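The equivalence of care incentives under the two rules can be checked with a small numerical sketch; the figures below are hypothetical and chosen only for illustration.

```python
# Hypothetical numbers (my own, not drawn from the paper): an actor chooses a
# care level x; precaution costs 10 per unit, and the expected accident loss
# p(x)*L falls as care rises. Under an efficient negligence rule the actor is
# liable only if x falls short of the reasonable-care level x_star; under
# no-fault liability the actor bears the residual loss at every care level.

care_levels = [0, 1, 2, 3, 4, 5]                              # units of precaution
prevention_cost = {x: 10 * x for x in care_levels}
expected_loss = {0: 100, 1: 60, 2: 35, 3: 22, 4: 18, 5: 17}   # p(x)*L, falling in x

# Socially optimal ("reasonable") care minimizes prevention cost + expected loss.
social_cost = {x: prevention_cost[x] + expected_loss[x] for x in care_levels}
x_star = min(social_cost, key=social_cost.get)

# Private cost under no-fault liability: the actor bears both components.
nofault_private = social_cost

# Private cost under negligence: expected loss is borne only if x < x_star.
negligence_private = {
    x: prevention_cost[x] + (expected_loss[x] if x < x_star else 0)
    for x in care_levels
}

print("socially optimal care level:", x_star)                                              # 3
print("care chosen under no-fault liability:", min(nofault_private, key=nofault_private.get))        # 3
print("care chosen under negligence:", min(negligence_private, key=negligence_private.get))          # 3
```

Both rules point the rational actor to the same care level; what differs is who pays for the accidents that still occur at that level, which is the subject of the next subsection.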
3.2 Economic Analysis of Liability Rules: Level of Activity
If no-fault liability does not necessarily lead to higher care or prevention levels, what distinguishes it from negligence liability? An intuitive difference lies in the allocation of costs: when reasonable care has been exercised and the actor is not negligent in the legal sense, their actions are not “harmless.” For the residual harm from accidents that still occur (hereafter referred to as “residual accidents”), negligence liability assigns the corresponding costs to the victim, while no-fault liability requires the actor to bear these costs.
This differing allocation of costs naturally prompts discussions of fairness. Beyond fairness, however, the allocation of residual losses carries significant implications for efficiency, a matter that involves the classic debate in the economic analysis of tort law concerning the level of activity. According to Shavell’s analysis, the incentives created by liability rules affect not only actors’ levels of care but also their decisions regarding the extent or scale of risk-related activities they engage in. Given a fixed level of care, actors’ choices about their level of activity inherently impact the social costs of their behavior. For instance, a cautious driver who drives one day a week versus five days a week, even at consistent levels of care, generates differing accident costs due to differences in activity levels (as shown in Table 1).
However, a higher level of activity, which correlates with higher residual accident costs, does not necessarily imply that activity levels should be minimized. The introduction of activity levels into the analysis aims to provide a more comprehensive cost-benefit perspective within the economic analysis of tort law. In general, the “harmful activities” addressed by tort law inherently possess positive value, with damages and risks as byproducts of value-generating activities. Therefore, activity levels are not only associated with the expected costs of accidents but also with the scale of value production. Given a reasonable level of care, the optimal level of activity is clearly not the one where expected accident costs are minimized (at zero activity), but rather the one where the net value of the activity is maximized.

As previously discussed, under ideal conditions, both negligence liability and no-fault liability can incentivize actors to adopt the optimal level of care. However, under negligence liability, actors are not required to bear residual accident costs, as these are borne by the victims. This provides actors with an incentive to elevate their activity levels beyond the socially optimal level—an inefficient outcome, as externalities are not fully internalized. In contrast, under no-fault liability, even after taking reasonable care, actors are required to fully internalize residual accident costs through compensation obligations. This means that actors will choose the level of activity that maximizes the net value of their actions after accounting for expected accident costs. This level corresponds to the socially optimal level of activity. For instance, in the example shown in Table 1, if the actor anticipates bearing all residual accident costs, they will, from the perspective of maximizing private welfare, choose an activity level of two units—precisely the same level as the socially optimal activity level. Thus, by requiring actors to conduct a thorough cost-benefit analysis of their risky activities and ensuring that activity levels are optimal rather than excessive, the efficiency advantage of no-fault liability over negligence liability becomes particularly apparent.
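The activity-level argument can likewise be illustrated numerically; the figures below are hypothetical and are not those of Table 1, but they reproduce the pattern described above.

```python
# Hypothetical illustration of the activity-level argument (my own figures,
# not those of Table 1). Each unit of activity yields gross value, stated net
# of prevention costs at the optimal care level, with diminishing returns;
# each unit also adds 25 in residual expected accident loss.

activity_levels = [0, 1, 2, 3, 4]
gross_value   = {0: 0, 1: 40, 2: 70, 3: 90, 4: 100}       # net of care costs
residual_loss = {a: 25 * a for a in activity_levels}      # expected loss at optimal care

# Social net value of the activity at each scale.
social_net = {a: gross_value[a] - residual_loss[a] for a in activity_levels}

# No-fault liability: the actor internalizes the residual loss, so private and
# social net value coincide.
nofault_net = social_net

# Negligence liability (care already reasonable): residual losses fall on
# victims, so the actor weighs gross value only and expands activity.
negligence_net = gross_value

print("socially optimal activity level:", max(social_net, key=social_net.get))            # 2
print("activity chosen under no-fault liability:", max(nofault_net, key=nofault_net.get))  # 2
print("activity chosen under negligence:", max(negligence_net, key=negligence_net.get))    # 4
```

In this sketch the negligence-governed actor expands to the largest scale with positive gross value, while the no-fault-governed actor stops at the scale that maximizes net social value.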
3.3 Residual Accident Risk and Insurance
Beyond the question of whether activity levels become excessive, the differing allocation of residual accident costs under negligence liability and no-fault liability—given that they incentivize the same level of care—raises another layer of efficiency considerations related to insurance. Insurance acknowledges the objective reality that risks cannot be entirely eliminated. By leveraging the law of large numbers and the principle of diminishing marginal utility, it seeks to distribute risks more efficiently across populations, thereby reducing the subjective utility loss associated with risks and damages on a broader scale.
Insurance, of course, comes with its costs. Apart from the investment in market infrastructure, policyholders must pay to obtain insurance, converting uncertain risks into fixed costs. However, accessibility to insurance often varies significantly depending on one’s position in the market or society. Different entities face differing abilities and costs in obtaining first-party or third-party insurance. Under negligence liability, the residual accident costs of an activity are borne by potential victims, who therefore have a demand for insurance. Actors engaging in risky activities, meanwhile, need only exercise reasonable care and are not obligated to insure against residual accident risks. Conversely, under no-fault liability, since residual accident risks are borne by the actors, they retain the incentive to seek insurance even after minimizing total social costs through reasonable care. This means that the choice between negligence liability and no-fault liability for a specific harmful activity should consider which party can more efficiently secure insurance in the given context.
Additionally, as discussed in the analysis of activity levels, the higher activity levels associated with negligence liability—potentially exceeding the efficient scale—result in victims facing a greater scope of expected risks that they must insure against. This leads to higher overall insurance costs. Under no-fault liability, while actors directly bear the cost of insurance premiums, they may pass these costs onto victims as users by raising the prices of products and services. Such cost-shifting could introduce efficiency losses due to cross-subsidization. For example, if there is significant heterogeneity in risk preferences among users or victims, risk-tolerant users might be unwilling to pay premiums set based on average willingness to pay, prompting their exit from the market.
However, in many AI-related harm scenarios involving consumers—such as accidents caused by autonomous vehicles—consumer risk preferences may be relatively homogeneous (after all, “safety and convenience” are universally valued in such contexts). Designers and manufacturers, who possess superior information about the risks associated with algorithmic systems, are better positioned to obtain first-party insurance and have stronger negotiating power. Therefore, incentivizing tortfeasors to seek insurance under a no-fault liability framework is often more efficient.
4. No-fault Liability for AI-related Harm
In modern tort law, no-fault liability has consistently been regarded as an exception to fault-based liability. For example, in theoretical discussions within Anglo-American tort law, no-fault liability is often likened to small “pockets” sewn into a coat—these pockets are limited in capacity and cannot be filled indiscriminately. Activities that are most commonly placed within the no-fault liability pocket involve those deemed “abnormally dangerous,” such as blasting operations in mining. Whether the application of AI systems constitutes “abnormally dangerous” activities is not a question that can be universally answered. Moreover, the “dangerousness” and “normalcy” of AI applications are not necessarily correlated. AI systems used in everyday life may pose systemic threats to humanity, while industrial robots used for blasting and mining may not be as dangerous.
With the increasing proliferation of AI systems across various scenarios, applying no-fault liability to AI-related torts would presumably create a significantly larger pocket, potentially undermining the principle status of fault-based liability in the tort law system. Nonetheless, applying no-fault liability to AI-related torts involving black-box mechanisms is not only worthwhile but may also be unavoidable. Meanwhile, the potential negative effects of no-fault liability, particularly its potential to stifle innovation, require more precise understanding and could be mitigated through well-designed supporting mechanisms.
4.1 Level of Care for AI Harm Participants under No-fault Liability
As previously mentioned, the main difficulty in applying negligence liability to AI-related torts lies in the challenges of explaining harm mechanisms and the excessive number of potential negligent parties, leading to high uncertainty in accountability and hindering compensation for victims. Under no-fault liability, judicial decision-makers are not required to retrospectively determine what reasonable care measures various harm participants should have taken before the incident. Since residual risks are allocated to tortfeasors under this regime, they can autonomously decide the level of investment in risk prevention. Rational actors, motivated by minimizing their own costs (which align with minimizing social costs), will have sufficient incentive to explore levels of care that approximate the optimal social cost.
This does not mean that every AI harm participant in every specific scenario will precisely identify and adopt the optimal level of care. However, compared to relying on post hoc judicial judgment, a decentralized prevention decision-making system, where tortfeasors with greater expertise and more timely opportunities for adjustment seek to minimize their own costs, is undoubtedly more practical and likely to achieve better preventive outcomes.
Even so, no-fault liability does not eliminate the problem of determining which among the multiple potential harm participants should bear liability. Unlike multi-party torts under negligence liability, no-fault liability allows the choice and scope of liable parties to be determined entirely from a policy perspective, unburdened by the constraints of post hoc adjudication. In other words, under negligence liability, whether liability is apportioned or joint, all tortfeasors must first be found negligent or presumed negligent. Under no-fault liability, however, legislators can predefine which parties must make preventive investments for AI-related harm risks and include them in the scope of liability. This type of policy-based determination of liability scope is not new in tort law. For example, the range of parties responsible for product liability—such as sellers, manufacturers, and distributors, but not users—essentially reflects a policy choice.
Theoretically, legislators should still adhere to the “least-cost avoider” principle when delineating the scope of liable parties. However, even if the delineation extends beyond least-cost avoiders to include parties that are unlikely to take cost-efficient preventive measures (e.g., certain end users), as long as the scope of liability is clear and transaction costs among parties are manageable, the involved parties can rearrange liability among themselves through negotiated agreements. The outcome would likely still allocate preventive responsibilities to the least-cost avoider. Given the central role of enterprises in relevant contexts, firms within the same industry should have sufficient capacity to redistribute liability through transactions. Even if individual developers or engineers are included within the scope of strict liability, in a competitive labor market, these individuals could theoretically transfer this liability to their employer organizations.
Of course, the complex structure of the AI industry, characterized by power imbalances between leading firms and numerous upstream and downstream small and medium-sized enterprises, raises concerns that leading firms might use contractual mechanisms (e.g., indemnity clauses) to shift all tort liabilities onto smaller firms. It should be noted, however, that such risk-shifting cannot be entirely avoided under negligence liability either and may not necessarily represent equilibrium. If leading firms excessively transfer risks, smaller firms might exit the market or be absorbed by leading firms. These issues should be addressed by directly improving market structures.
Another related issue is whether victims’ own negligence could serve as a defense to mitigate or exempt AI harm participants’ liability under no-fault liability. It is relatively clear that intentional harm-seeking by victims should serve as an exemption defense; otherwise, opportunistic behaviors such as “fabricated accidents” would become frequent. However, whether victims’ negligence (e.g., failing to exercise reasonable care when using products or services) should result in reduced liability for tortfeasors is more complex.
Given that the limitations of AI systems and the risks associated with using related products or services are known, failing to require victims to make their own reasonable preventive efforts would conflict with the principle of minimizing social costs. However, most AI-related harm scenarios are closer to “unilateral harm” than to “bilateral harm,” in that the scale of damage is primarily influenced by the tortfeasor’s preventive efforts rather than the victim’s. Moreover, in AI-related harm cases, the concept of “unilateral harm” should be understood not just descriptively but normatively. Products and services based on AI systems are inherently expected to free human users from attentional demands. If the safety of AI systems largely depends on whether users or other human victims exercise reasonable care, this suggests a fundamental design flaw in the human-machine interface of such products.
In the domain of autonomous driving, for instance, the well-known “handoff problem” illustrates the complexities surrounding liability. If an intelligent system allows a human driver to retake control in an emergency but the driver, distracted or drowsy due to prolonged reliance on autopilot, fails to respond in time and causes an accident, the driver’s negligence might seem to bear significant responsibility for the incident. However, considering the original intent and evolving process of AI technology development, it becomes evident that the purpose of autonomous driving is precisely to transform drivers into passengers during commutes, enabling them to relax, disengage, or even focus on non-driving-related tasks such as work or entertainment. Imposing liability rules that incentivize autonomous vehicle users to maintain or even heighten vigilance runs counter to this vision.
If AI tortfeasors cannot be exonerated by citing victims’ negligence under a no-fault liability framework, they have no incentive to shift blame onto victims. Instead, they will be motivated to focus on two key aspects: (1) statically, achieving the optimal level of prevention by minimizing the combined cost of precaution and residual accidents, supplemented by efficient insurance arrangements; and (2) dynamically, pursuing technological improvements that reduce the overall social costs of harm caused by AI systems.
Additionally, for corporate entities involved in the AI industry value chain, no-fault liability can more effectively incentivize internal oversight of individual behaviors within their organizations. Under negligence liability, since fault is difficult to identify externally, firms may strategically avoid rigorous self-examination. This approach not only makes it harder for external parties to detect issues but also makes it easier for firms to argue that they had no reasonable opportunity to exercise due care when damage occurs. In contrast, under no-fault liability, as firms bear the expected accident costs, they are incentivized to adopt internal monitoring measures as long as such measures have risk-prevention value.
4.2 Level of Activity for AI Harm Participants under No-fault Liability
A common argument against no-fault liability is that it may suppress the level of activity in AI innovation. Except in scenarios where AI technologies and products are deliberately developed to cause harm, activities such as the technological development, production, and provision of AI products and services are inherently aimed at creating positive social value (including corporate profits as a form of positive value), with risks as byproducts of value-generating activities. As previously discussed, compared to negligence liability, no-fault liability does not necessarily result in higher or excessive levels of care. However, as it assigns the residual accident costs to actors, the level of activity under no-fault liability is lower than under negligence liability. In other words, the scale of value-creating activities like technological development, production, and service provision does decrease under no-fault liability compared to negligence liability.
If the potential risks associated with a particular AI system are indeed as severe as societal concerns suggest, it would be prudent to use strict liability to prevent the rapid and potentially uncontrollable expansion of related harmful activities in the short term. It is essential to note, however, that the relatively “l(fā)ower” level of activity under no-fault liability is not merely “cautious” or “conservative” but is actually efficient. As the theoretical analysis in Section 3 indicates, since the scale of AI activities is proportionate not only to the value they create but also to the risks they generate, the optimal level of activity is the one where net benefits are maximized, rather than the highest possible level. Under no-fault liability, actors who fully internalize externalities achieve an activity level that aligns precisely with the efficient level of activity (see Table 1).
Furthermore, the level of activity under no-fault liability is self-selected and self-regulated by actors. Similar to the level of care, actors can dynamically adjust their level of activity by improving technology and reducing residual accident costs, thereby increasing both their private and societal net benefits. In contrast, under negligence liability, as actors do not bear residual accident costs, they have weaker incentives to dynamically enhance the safety of related technologies and products.
4.3 Bounded Rationality and Information Issues
The preceding analysis, based on simple legal economic theories, highlights the potential efficiency advantages of no-fault liability over negligence liability in AI-related tort scenarios. Legal economic analysis is valuable for its clarity, focusing on core factors through specific theoretical assumptions (rational actors, perfect information). However, its conclusions are often criticized for being overly dependent on these assumptions and lacking real-world applicability. The methodological debates are well-worn and need not be repeated here. Instead, this section expands on the assumption of rationality used in the earlier economic analysis.
The argument that no-fault liability does not lead to higher levels of care than negligence liability and corresponds to relatively lower levels of activity is particularly dependent on the rationality assumption: that actors seek to minimize total costs and maximize net benefits based on comprehensive considerations. In reality, AI industry participants’ concerns about no-fault liability—or policymakers’ expectations of market resistance to it—stem precisely from the perception that relevant actors are not fully rational. Upon hearing the term “no-fault liability,” they may panic, investing excessively in prevention, drastically reducing activity levels, or even ceasing activity altogether.
Although corporate or entrepreneurial decision-making tends to align more closely with the rational actor model, it is not inherently rational. Enterprises may misjudge legal frameworks or their intent and respond blindly. However, compared to individual actors, corporate entities are more likely to benefit from opportunities for learning, trial-and-error, and repeated interactions with liability rules. Particularly if accompanied by authoritative and effective explanations of the rationale and objectives behind no-fault liability, corporations are more likely to understand that such liability does not require absolute safety through heightened care or reduced activity levels. Instead, it incentivizes them to (1) minimize total costs through reasonable care, (2) maximize net benefits through reasonable activity levels, and (3) efficiently insure against residual risks.
If corporate entities are constrained by cognitive biases or face information asymmetries that cannot be mitigated through education, communication, or learning, they may misunderstand “no-fault liability” as necessitating “infinite prevention.” This could result in excessive behavioral responses, such as further lowering activity levels below the efficient level—essentially “opting out.” However, this assumption of irrational behavior may itself lack realism. If enterprises are indeed so skittish, we should also expect them to be equally cautious under negligence liability. In fact, under negligence liability, the uncertainty surrounding reasonable care standards in AI-related torts—both ex ante and ex post—places actors in a similar cognitive position to gamblers, with their approach to prevention investments dictated by risk preferences or perceptions. If AI stakeholders are assumed to be risk-averse or prone to overestimating risks, they may likewise over-correct under negligence liability by adopting overly conservative activity levels.
Thus, even if we accept behavioral assumptions that deviate from rational choice in AI-related torts, negligence liability is not necessarily more conducive than no-fault liability to maintaining innovation in the AI industry. In such contexts, the institutional cost advantages of no-fault liability remain compelling.
4.4 Activity Levels and Liability Caps
If AI harm participants indeed respond with irrational fears, particularly given that insurance may not always be available or comprehensive, they might suppress their activity levels excessively. For policymakers seeking to “release momentum” and “activate innovation,” it is possible to adopt additional mechanisms alongside no-fault liability to encourage higher activity levels.
One obvious institutional tool is the use of liability caps. Even in the absence of liability caps, the scale of tort compensation often falls short of victims’ actual damages. However, liability caps explicitly aim to limit liability below the actual harm incurred. The typical rationale for liability caps is that in certain tort contexts, the actual magnitude of harm caused by the injurious activity is difficult to ascertain with precision. While liability caps may result in insufficient victim compensation, they provide clarity that helps reduce judicial costs in highly uncertain information environments and discourage opportunistic behaviors, such as victims exaggerating damages.
In AI-related tort scenarios, the advantages of liability caps may be even more pronounced. Part of the heightened public attention to AI-related harm lies in the tendency to overstate the damages caused by AI. As Lobel puts it, this “human-machine double standard” means that identical injuries appear more severe when caused by AI than by natural persons, often without clear justification.
Under no-fault liability, if AI harm participants face a statutory limit on liability, and this cap falls below the victim’s actual loss, it can encourage higher activity levels. For example, in Table 2, building on the scenario in Table 1, introducing a condition that “the tortfeasor’s maximum liability cannot exceed 25” results in the actor increasing their activity level from 2 units to 3 units.
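The mechanism can be sketched with hypothetical figures. They are my own and differ from those in Tables 1 and 2, and the cap is modeled, for simplicity, as a ceiling on the actor’s total expected liability; the qualitative effect described above nonetheless appears.

```python
# Hypothetical sketch of the liability-cap effect (figures are my own, not the
# paper's Table 2). Gross value is stated net of prevention costs at the
# optimal care level; each unit of activity adds 25 in residual expected loss.

activity_levels = [0, 1, 2, 3, 4]
gross_value   = {0: 0, 1: 40, 2: 70, 3: 90, 4: 88}        # net of care costs
residual_loss = {a: 25 * a for a in activity_levels}      # expected loss at optimal care

CAP = 40  # statutory ceiling on expected liability (hypothetical figure)

# Uncapped no-fault liability: the actor internalizes the full residual loss.
uncapped_net = {a: gross_value[a] - residual_loss[a] for a in activity_levels}

# Capped no-fault liability: expected liability cannot exceed CAP.
capped_net = {a: gross_value[a] - min(residual_loss[a], CAP) for a in activity_levels}

print("activity chosen without a cap:", max(uncapped_net, key=uncapped_net.get))  # 2
print("activity chosen with the cap:", max(capped_net, key=capped_net.get))       # 3
print("social net value by activity level:", uncapped_net)
```

At the capped actor’s chosen scale (three units in this sketch), social net value is lower than at the uncapped optimum, which is precisely the static welfare cost taken up in the second consideration below.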
However, applying liability caps requires attention to two key considerations. First, if the liability cap is set too low compared to the actual damages, it may not only lead to excessive activity levels but also undermine reasonable care. Specifically, if the marginal utility of certain harmful activities for the actor does not diminish as activity levels increase, liability caps should not be applied in such cases. This is because the actor will have an incentive to endlessly pursue higher activity levels, and once the total prevention cost for a given activity level exceeds the liability cap, the actor will abandon efficient prevention measures.
Second, as illustrated in Table 2, while liability caps can help increase activity levels, they do so at the cost of externalizing part of the harm caused by the activity from the actor’s rational decision-making considerations. This may result in suboptimal static social welfare. If policymakers and legislators still aim to increase activity levels under these circumstances, their reasoning might be that the relevant activities generate positive externalities beyond the actor’s utility—externalities that are difficult to observe or quantify at the time. In fact, to achieve dynamic efficiency, policymakers may choose to sacrifice static efficiency and support relevant actors.
In the uncertain field of AI, policymakers cannot always make precise calculations but must base their decisions on predictions and understandings of major trends in technological development and societal transformation. If they believe that supporting AI development is an urgent priority and that no step can be delayed, then even if, at a specific point in time, the marginal value created by higher activity levels of developers, producers, and providers appears to be outweighed by the residual accident losses caused by the liability cap, policymakers might still decide to “bite the bullet” and continue using liability caps to “safeguard” innovation activities.
Indeed, the early development of the internet industry in both China and the United States underwent a similar process, where laws compressed liability scopes and externalized damage costs to stimulate innovation and industrial growth. Undoubtedly, such decisions involve risks, do not always yield good results, and not all policymakers may accept a value orientation that prioritizes development over protection or compensation.
Additionally, it should be noted that liability caps have another advantage: they facilitate the emergence of a liability insurance market. A mature liability insurance market not only helps further increase activity levels but also, on average, improves victim compensation.
5. Tort Liability and Risk Regulation
This paper aims to argue that no-fault liability is more suitable than negligence liability for addressing civil torts involving harm caused by artificial intelligence. However, a preliminary question that has been widely debated is whether civil tort liability still plays a significant role in managing AI-related harm and risks. More specifically, in the broader context of new technology risk governance, which increasingly relies on administrative regulation, is there still much need to focus on the choice between negligence and no-fault liability in civil law?
The following discussion briefly addresses this question. Overall, risk regulation through administrative oversight is undoubtedly the primary approach for managing AI-related harm. However, civil liability remains necessary for both economic and socio-psychological reasons. A proper understanding of the institutional functions of civil liability and the relationship between ex-ante regulatory mechanisms and ex-post compensatory systems still requires the application of principles from law and economics. Moreover, as more complex and normalized regulatory frameworks are established in the future, the construction and application of civil liability rules should take these regulatory frameworks as their foundation. Policymakers should avoid behavior-suppressing overlaps between systems and consciously seek to combine public and private law instruments to create a richer and more effective incentive structure.
5.1 Ex-post Liability and Ex-ante Regulation
Why do activities involving risks of harm face not only compensation liability after damages occur but also ex-ante regulatory requirements such as licensing, operational norms, reporting, and safety reviews? For readers unfamiliar with legal economic analysis, the question itself might seem strange or even nonsensical: ex-post compensation is aimed at remedying individual victims, while ex-ante regulation seeks to “prevent problems before they occur.” The coexistence of the two seems self-evident.
From an economic analysis perspective, however, the institutional function of ex-post tort liability—though not its only function—is precisely to create incentives for actors to adopt reasonable precautions ex-ante. In other words, while compensation is made ex-post to specific victims, the legal goal of compensation rules lies in influencing potential harm-causers’ ex-ante behavior.
According to this theoretical logic, if the ex-post liability system operated perfectly, actors would naturally have sufficient incentives to adopt socially optimal precautions. Under such conditions, ex-ante administrative regulation would be entirely redundant in ensuring effective preventive measures. However, this logic should not be taken at face value or dismissed as impractical. The very fact that “perfect operation of liability systems” is unattainable in reality underscores the need for regulation and its thoughtful design.
From an efficiency perspective, there are two primary reasons why ex-post liability alone is insufficient to address risks effectively, necessitating supplementary ex-ante regulation: First, under negligence liability, while actors may have sufficient incentives to adopt optimal care levels, they are not required to bear residual accident costs. This can lead to excessive activity levels, which regulators can address by directly intervening to control activity levels—for example, by limiting the number of days individuals are allowed to drive each week. Second, under both negligence and no-fault liability, determining whether an actor’s level of precaution is optimal often requires ex-post judicial judgment. Given the information constraints faced by courts and the high costs of private litigation, actors’ expectations of liability may be discounted, resulting in insufficient precautionary measures. In such cases, ex-ante regulatory requirements established by more specialized and better-informed regulatory bodies are clearly necessary to supplement the shortcomings of ex-post liability incentives.
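The second point, liability dilution, can be put in compact form (again a generic sketch of my own rather than a formula from the paper): if liability is expected to be imposed only with probability q < 1, the actor’s perceived problem becomes

```latex
% q : probability that liability is actually imposed, discounted by victims'
%     litigation costs, proof burdens, and the chance of judicial error
\min_{x \ge 0}\ \bigl[\, x + q\,p(x)\,L \,\bigr], \qquad 0 < q < 1
```

Because the marginal benefit of care is scaled down by q, the chosen care level falls below x*; ex-ante requirements set by better-informed regulators can help close that gap.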
In AI-related harm scenarios, even if no-fault liability is adopted as suggested in this paper, victims pursuing compensation still face costs related to proving harm, causation, and procedural matters. Moreover, courts cannot be presumed to make correct judgments in all cases. Thus, civil liability alone is undoubtedly insufficient to address AI-related risks effectively.
Today, few would deny the necessity of regulation and oversight in managing risks, especially in the context of new technologies. In areas such as personal data protection and data security, the question often raised is whether, after establishing systematic administrative regulation, the slow, inconsistent, and uncertain mechanism of civil liability remains necessary. In the data protection and security context, arguments for maintaining civil liability often emphasize that administrative regulation cannot fully address the institutional value of providing remedies for victims. Beyond compensation, however, the same logic applies: as long as regulators face information costs and their ex-ante risk prevention requirements do not perfectly correspond to the optimal level of care, maintaining ex-post liability serves as a valuable supplementary mechanism for incentivizing precautionary behavior. In AI-related harm scenarios, we can assert that while regulators generally have more expertise and information than judicial bodies, they do not possess perfect information in all specific cases and cannot always objectively prescribe optimal behavioral requirements. Particularly when regulatory bodies, driven by policies aimed at encouraging and supporting AI development, deliberately relax behavioral standards or enforcement efforts to “unleash potential” for industry actors, maintaining ex-post liability becomes even more crucial for ensuring sufficient precautionary incentives.
5.2 Overlapping Effects and Safe Harbor Rules
The preceding discussion analyzed the relationship between ex-post compensation and ex-ante regulatory systems. While the two mechanisms could in theory substitute for one another under ideal conditions, retaining both is necessary and prudent in AI-related harm scenarios, given the absence of perfect information and the non-zero operational costs of any system.
However, although ex-ante regulation and ex-post liability can complement each other, their simultaneous operation does not guarantee an appropriate balance. One possible outcome is an overlapping effect: stringent regulatory requirements, combined with ex-post liability—particularly no-fault liability—push the precautions demanded of actors beyond the socially optimal level of reasonable care, thereby discouraging them from engaging in value-generating activities.
As previously discussed, the excessive burden is unlikely to stem from no-fault liability itself but rather from regulatory requirements that exceed what is necessary to address the deficiencies of ex-post liability incentives. Under a no-fault liability framework for AI-related harm, actors are already incentivized to adopt optimal reasonable precautions. However, if liability does not cover 100% of the damages, regulators may feel justified in imposing additional behavioral requirements based on their perception of the appropriate level of care.
Regulators, however, cannot perfectly identify the optimal level of care or accurately measure the extent to which ex-post liability falls short in providing adequate incentives. As a result, regulatory demands are often either excessive or insufficient, with the overlapping effect of over-deterrence occurring in the former case. Given the generally conservative regulatory tendencies domestically and internationally, the overlapping effect of excessive regulatory burdens is more likely to emerge in AI risk regulation.
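The calibration difficulty can be stated in the same illustrative notation (again a sketch rather than a claim about any particular regulatory regime): suppose actors expect to bear only a fraction $q \in (0,1]$ of the harm $D$, whether because of proof difficulties, litigation costs, or liability caps. They then choose the care level $x_q$ that minimizes $x + q\,p(x)D$, and $x_q \leq x^*$, with the shortfall widening as $q$ falls. A regulatory care floor $\bar{x}$ can close this gap only if the regulator can estimate both $x^*$ and $q$. If it overshoots, setting $\bar{x} > x^*$ while liability remains in force, an actor's per-unit burden becomes $\bar{x} + q\,p(\bar{x})D$, which for $q$ close to one exceeds the minimized social cost $x^* + p(x^*)D$; activities whose per-unit benefit lies between those two figures are then abandoned even though they are socially worthwhile. If it undershoots, precaution remains deficient. Over-deterrence in the former case is the overlapping effect just described.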
Addressing the Overlapping Effect
How should this overlapping effect be understood and addressed? While it is true that "too much can be as bad as too little," the immense uncertainty surrounding both the risks posed by current AI technologies and their future development makes it understandable and acceptable for policymakers to adopt a more cautious stance guided by the precautionary principle. This paper merely seeks to caution that simultaneously imposing stringent ex-ante regulation and ex-post civil liability is highly likely to produce overlapping effects, causing over-deterrence and suppressing activity levels. If such outcomes are to be accepted, they should be the product of deliberate choice rather than accident, with policymakers consciously articulating and justifying to society and other stakeholders the value judgments underlying their decisions.
If policymakers favor a more open, progressive, and risk-tolerant approach, then the overlapping effect should be avoided. Three possible options could be considered: (1) abolish or significantly relax ex-ante regulation after implementing no-fault liability; (2) eliminate ex-post liability after establishing an ex-ante regulatory framework; (3) combine both systems but limit the application of ex-post liability through mechanisms such as safe harbor rules.
The first option, a deregulation paradigm, is unlikely to be adopted in practice, as it contradicts the operational logic of regulatory agencies under public choice theory. More importantly, given the judiciary’s passive, slow, and information-constrained nature, deregulation would almost certainly result in excessive risks—not only beyond socially optimal levels but also beyond what society and governments, even under risk-tolerant assumptions, could bear. The second option is more realistic. Entrusting all risk prevention decisions to regulatory agencies has its drawbacks and limitations, but their expertise and responsiveness surpass those of the judiciary. Moreover, eliminating ex-post liability would undoubtedly provide immediate relief for industry actors by reducing compliance burdens. Nevertheless, this paper leans against the wholesale removal of ex-post liability. Beyond considerations of victim compensation, maintaining civil liability serves an important socio-psychological function by upholding public trust.
The third option appears to be the most balanced institutional arrangement. A safe harbor rule involves regulatory agencies specifying a set of more detailed compliance requirements on top of mandatory regulations. Actors are not required to follow these requirements, but choosing to do so grants immunity from ex-post liability. In other writings, the author has elaborated on the advantages and implementation strategies of safe harbor rules in regulating emerging technologies. In general, safe harbor rules provide regulated entities with optional compliance pathways, reducing their compliance costs, avoiding overlapping effects, and preserving flexibility for both market actors in selecting risk prevention strategies and regulators in refining their regulatory approaches. For policymakers who prioritize innovation over safety, safe harbor rules might offer a more comprehensive and "liberalizing" approach to risk regulation that is worth considering.
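The logic of this cap on the combined burden can also be expressed in the illustrative notation used above (again a sketch, not a formal feature of any existing safe harbor regime): a safe harbor presents each actor with a choice between (a) satisfying the detailed compliance set at a per-unit cost of, say, $c_R$ and facing no ex-post liability, and (b) choosing its own care level $x$ and bearing $x + q\,p(x)D$ in expectation. Since the actor simply takes the cheaper branch, its burden on top of the mandatory baseline is bounded by $\min\{\,c_R,\ \min_x [x + q\,p(x)D]\,\}$; offering the optional pathway can therefore only lower, never raise, the burden that the liability rule alone would impose, which is exactly how the safe harbor blunts the overlapping effect while preserving flexibility on both sides.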
5.3 Social Psychology and Acoustic Separation
Finally, it is necessary to briefly explain why, despite concerns that overlapping effects might lead to over-deterrence, and even under the premise that administrative regulation is bound to become the primary mechanism for governing AI risks, I still believe that civil tort liability rules should be explicitly established and retained.
In practice, if an effective insurance system can be established, the goal of providing remedies could be achieved independently of tort liability, especially in the long term. While in the short term the emergence and development of AI-harm insurance may rely on judicial activity concerning tort liability to supply information and experience, I believe the more important value of tort liability—particularly no-fault liability—lies in allowing the development, application, and other activities surrounding AI to proceed in an environment of sufficient social trust. Currently and for the foreseeable future, whether in domestic or international contexts, public concerns and fears about AI are likely to remain at levels that industry actors cannot afford to ignore. Indeed, a glance at the discourse on AI risks and threats among legal and policy elites makes it evident that, as AI technologies advance and their applications spread, society and policymaking will likely be increasingly influenced by negative sentiments, including Luddism. In such circumstances, however reasonable or desirable a system that abolishes civil liability in favor of strict regulation alone might be, legal rules perceived by the public as "exempting AI from all liability for harm" would severely undermine public confidence in the ability of law and policy to balance AI development and safety, and would exacerbate public distrust of AI and even of the broader new-technology industry.
In fact, this consideration of calming social psychology lends an additional layer of support, beyond economic analysis, to the proposal to apply no-fault liability to AI-related harm. Although no-fault liability rules are not harsh in practice (i.e., they do not require actors to adopt precautions beyond a reasonable level), they convey a distinctly "serious" impression to the public. In other words, because no-fault liability is likely to send the societal message that "the law takes AI risks very seriously" and has "zero tolerance for harm," its application can foster heightened public trust and create a relatively more favorable environment for AI industry development, without substantially increasing the industry's regulatory costs.
Even if industry actors prove excessively risk-averse or overreact to liability and regulation, and policymakers respond with measures such as liability caps or safe harbor rules that limit the actual application of tort liability in order to protect activity levels, the "expressive function" of no-fault liability need not be lost. Preserving it, however, requires achieving a sufficient degree of acoustic separation.
As a legal theoretical concept, “acoustic separation” traditionally explains the differentiation between rules governing conduct and rules guiding adjudication. According to this theory, in an ideal scenario, actors and adjudicators receive different messages from the same legal rules. The former often perceive legal prohibitions as strict or even harsh based on the literal text, while the latter interpret these rules flexibly in light of their underlying legal spirit, applying more lenient standards. The combined effect is strong ex-ante deterrence and more lenient ex-post adjudication—a principle of “punish to warn, remedy to save.”
Acoustic separation hinges on the premise that the same legal norms are understood differently by different categories of audiences, and these audiences cannot effectively bridge their differences through communication. In market terms, there is no arbitrage space between these groups. Regarding AI tort liability, is it possible to achieve a broader form of acoustic separation—where the general public believes that liability is strict and universal, while judicial adjudicators actually impose relatively light and limited liability on the industry?
For this empirical question, I lack sufficient evidence to provide a definitive answer. However, some preliminary analyses suggest that such a scenario is not implausible. In AI legislation, tort liability rules (e.g., no-fault liability) are likely to appear as fundamental norms in the core statutory text, while safe harbor rules (specific compliance requirements) or liability caps might at most be abstractly mentioned in the core provisions, with their specific details relegated to supplementary regulations, judicial interpretations, or other implementing measures. In other words, the cognitive salience, ease of understanding, and accessibility of liability provisions compared to liability-limiting provisions create a plausible basis for acoustic separation in AI tort liability discussions. Certainly, legal professionals and scholars with deeper knowledge of the law can weaken acoustic separation through public education and advocacy. However, acoustic separation persists in many legal domains, suggesting that professional arbitrage alone is often insufficient to bridge the gap between public and professional understanding. Given that the legal framework governing AI is inherently less accessible and comprehensible to the general public compared to conventional civil and criminal laws, there seems little reason to believe that the acoustic separation effect of AI tort liability rules adjusted through safe harbor mechanisms would be weaker than that achieved in traditional legal domains.
6. Conclusion
Since the Industrial Revolution, the types of risks that human societies address through institutional frameworks have steadily increased. This trend may appear to indicate that our living conditions are becoming more perilous, but it might actually reflect the fact that modern society now commands a far greater abundance of resources to sustain legal systems than ever before. In other words, with more resources available, we regulate more. Even so, the institutional costs required to implement different rules vary significantly. Moreover, it is crucial to recognize that the emergence of new risks in the modern era primarily stems from humanity’s continuous discovery of new ways to create value and improve individual and collective living conditions. Thus, considerations of risk management and value creation should be integrated as much as possible into the same deliberative process when formulating relevant legal rules.
The economic analysis of tort law provides a relatively straightforward framework for incorporating such institutional considerations into normative analyses. While the technical principles underlying AI-related harm are often complex and difficult to unravel, the principles for designing tort liability rules for AI-related harm—at least from a structural perspective—do not necessarily require highly sophisticated new theories. Not all “classical” theories contribute positively to problem-solving. For example, dogmatic theories that insist liability must be tied to “autonomous” moral agents offer little constructive insight into AI-related harm. However, many readily applicable principles from legal economic analysis have yet to be fully utilized in AI legislation-related research and discussions, even though efficiency has consistently been recognized as a key consideration in this domain.
This paper argues that no-fault liability, rather than negligence liability, should apply to torts involving AI systems, primarily for reasons of efficiency. Without the aid of legal economic analysis, proponents of no-fault liability often focus solely on “safety,” while opponents concentrate on “burden.” Both perspectives are significant but inherently one-sided. The analysis and conclusions presented here may not be infallible, but they offer a theoretical logic worth considering: applying no-fault liability to AI-related harm does not necessarily suppress innovation and may even help sustain its vibrancy. From my perspective, similar legal economic analyses could help broaden the scope of discussions on other issues related to AI legislation, potentially enabling a more comprehensive and balanced approach to policymaking in this evolving field.