Beyond Intellect and Reasoning: A scale for measuring the progression of artificial intelligence systems (AIS) to protect innocent parties in third-party contracts
The purpose of this book is to draw readers' attention to the legal intricacies of deploying self-directed artificial intelligence systems (AIS), with particular emphasis on the limits of the law vis-à-vis liability problems that may emerge within third-party contracts. With the advent of today's ubiquitous devices such as Amazon Halo and Alexa, consumers must conclude contracts (e.g., for the sale of goods and distance financial services) in far more complex, cybernetic environments: one party acts in the capacity of a human being, while the other, an autonomous thing or device (an AIS) with capabilities well beyond those of humans, represents the interests of others (not just other humans). Yet traditional jurisprudence offers limited scope for holding these systems legally accountable should they malfunction and cause harm. Interestingly, the use of AIS is increasingly prevalent within the judicial system itself, including the criminal justice system in some jurisdictions; in the United States, for instance, AIS algorithms are used in sentencing and bail determinations. Still, jurists find themselves confined to traditional legal methodologies and tools when tackling the novel situations these systems create. For example, the traditional concept of strict liability, as applied in tort law, typically ties responsibility to the person(s) (e.g., AIS developers) influencing the decision-making process. In contract law, particularly where third parties are concerned, AIS are equated to tools for the purposes of traditional strict liability rules, thus binding anyone on whose behalf they have acted, irrespective of whether such acts were intentional or foreseeable.