By Dr. Ahmed Hassoon and Rachel Reed.
Artificial Intelligence (AI) may hold the promise of redefining the landscape of healthcare, with the potential to improve patient outcomes, streamline operational processes, and strengthen research capabilities. However, the journey to harness the full potential of AI in healthcare is fraught with both internal and external implementation challenges. As with any major shift in the sociotechnical systems of healthcare, difficulties are to be expected. The rollout of AI in healthcare is no exception: it will be met with healthy skepticism, resistance to change, limited understanding of AI technology, and apprehension regarding data privacy, security, and ethics. These issues are further exacerbated because we struggle simply to speak with one another and to understand many of the features and flaws of this quickly evolving technology. If not addressed, these problems will limit the adoption and positive impact of AI technologies in healthcare settings.
How can we address these challenges?
Guiding Universal Communication Principles
One of the most significant barriers to the effective implementation of AI in healthcare is the absence of universal communication principles. Without a standardized communication strategy, inaccurate information and disinformation will propagate. Given the recent surge in the accessibility of AI technologies, the lack of concrete communication standards can distort individual and societal perceptions and dampen enthusiasm for these transformative technologies; the downstream effects are already visible in the substantial public reaction to AI-related news.
As shown in the figure below, we have developed a proposed framework for guiding communication strategies for the successful implementation of AI in healthcare, predicated on three foundational principles: 1) transparency, 2) inclusivity, and 3) adaptability. Transparency necessitates forthright communication regarding the capabilities, limitations, and potential ramifications of AI technologies. Inclusivity emphasizes the imperative of engaging a comprehensive spectrum of stakeholders in the AI journey, encompassing healthcare providers, patients, administrators, and policymakers. Adaptability entails staying abreast of the latest advancements in AI and remaining responsive to the evolving needs and expectations of stakeholders.
How Many Steps to AI Integration?
The application of these three foundational principles to the integration of AI in the domains of patient safety and quality of care will require several key steps:
1) It is imperative to articulate the specific goals and objectives of implementing AI technology, such as improving patient outcomes, enhancing operational efficiency, or advancing research capabilities.
2) It is crucial to identify the key stakeholders and groups that will be directly impacted by AI implementation, and map their experiences.
3) It is necessary to develop an appropriate and consistent lexicon, and to tailor messaging and engaging content to address specific stakeholder concerns and needs.
4) It is important to establish bidirectional channels for stakeholders to provide feedback on the AI technology and its implementation and to provide timely responses to feedback.
5) It is crucial to systematically measure and evaluate the communication strategy to ensure its effectiveness and to identify areas for improvement.
At its core, this comprehensive strategy aims to facilitate the smooth integration of AI into healthcare settings and to enhance the ability to identify and address challenges. Collectively, we expect that the adoption of this holistic approach within the healthcare sector will yield improvements in the quality of patient care by harnessing the full transformative capacity of AI as a member of the healthcare team, ensuring maximal benefit for patients, practitioners, and health systems as a whole.
About the authors:
Ahmed Hassoon, MD, MPH, PMP is an Assistant Research Professor at Johns Hopkins Bloomberg School of Public Health. His research focuses on the application of data science and artificial intelligence in diagnostic safety.
Rachel Reed is an MPH student at the Johns Hopkins Bloomberg School of Public Health. She is a communications strategist with experience across various health sector disciplines.
The opinions expressed here are those of the authors and do not necessarily reflect those of The Johns Hopkins University.