A novel approach to hybrid AI aimed at developing trustworthy agent collaborators. The vast majority of current AI relies wholly on machine learning (ML). However, the past thirty years of effort in this paradigm have shown that, despite the many things that ML can achieve, it is not an all-purpose solution to building human-like intelligent systems. One hope for overcoming this limitation is hybrid AI: that is, AI that combines ML with knowledge-based processing. In Agents in the Long Game of AI, Marjorie McShane, Sergei Nirenburg, and Jesse English present recent advances in hybrid AI, with special emphasis on content-centric computational cognitive modeling, explainability, and development methodologies. At present, hybridization typically involves sprinkling knowledge into an ML black box. The authors, by contrast, argue that hybridization is best achieved in the opposite way: by building agents within a cognitive architecture and then integrating judiciously selected ML results. This approach leverages the power of ML without sacrificing the kind of explainability that will foster society's trust in AI. The book shows how we can develop trustworthy agent collaborators of a kind not being addressed by the "ML alone" or "ML sprinkled with knowledge" paradigms, and why it is imperative to do so.