Paper Title
Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development
Paper Authors
Abstract
The study of adverse childhood experiences and their consequences has emerged over the past 20 years. In this study, we aimed to leverage explainable artificial intelligence and propose a proof-of-concept prototype for a knowledge-driven, evidence-based recommendation system to improve surveillance of adverse childhood experiences. We used concepts from an ontology that we developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology. To showcase the framework's functionalities, we present a prototype design and demonstrate the main features through four use case scenarios motivated by an initiative currently implemented at a children's hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both the usability and usefulness of the implementation. This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners' ability to provide explanations for the decisions they make.
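The abstract describes a recommendation component driven by ontology concepts, where the class hierarchy itself provides the explanation for each suggestion. A minimal sketch of that idea is below; all concept and recommendation names are hypothetical placeholders, not taken from the paper's actual ontology, and the real prototype relies on Google DialogFlow and third-party graph technology rather than this hand-rolled triple store.

```python
# Hypothetical mini knowledge graph of ACE-related concepts as (subject, predicate, object)
# triples. Names are illustrative only, not drawn from the paper's ontology.
TRIPLES = [
    ("HouseholdDysfunction", "subClassOf", "AdverseChildhoodExperience"),
    ("ParentalIncarceration", "subClassOf", "HouseholdDysfunction"),
    ("ParentalIncarceration", "suggests", "FamilyCounselingReferral"),
    ("HouseholdDysfunction", "suggests", "SocialWorkerFollowUp"),
]

def recommendations(concept):
    """Collect recommendations for a concept and all of its ancestor classes,
    keeping the triggering concept alongside each suggestion so the system
    can explain *why* a recommendation was made."""
    recs, seen, stack = [], set(), [concept]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        for s, p, o in TRIPLES:
            if s == node and p == "suggests":
                recs.append((node, o))   # record the source concept for the explanation
            if s == node and p == "subClassOf":
                stack.append(o)          # walk up the class hierarchy
    return recs

# Each pair reads as "recommended because of <concept>":
print(recommendations("ParentalIncarceration"))
# → [('ParentalIncarceration', 'FamilyCounselingReferral'),
#    ('HouseholdDysfunction', 'SocialWorkerFollowUp')]
```

The design choice worth noting is that the traversal returns provenance pairs rather than bare recommendations, which is the essence of the "explainable" claim: every suggestion is traceable to the ontology concept that triggered it.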