In part two of this blog series, we outline the steps for a successful artificial intelligence (AI) implementation, the pitfalls to avoid, and the challenges faced. Read Part 1 of the blog series.
Implementing an AI initiative
Building and deploying an AI algorithm typically requires multiple stakeholders and areas of expertise. Developing AI initiatives requires a structured, strategic approach to ensure AI is successfully integrated into clinical and operational activities. Implementation follows a clear, logical pathway:
- Define and articulate clear objectives and goals: Putting these in place is a foundational activity for any AI initiative. For example, what is the problem to be solved? Can we define a barrier to delivering care or to identifying patients who likely have outstanding care needs? Defining these elements can involve surveying patients, speaking with healthcare providers, and analyzing data to pinpoint areas where AI can make a meaningful impact.
- Assess current capabilities and resources: Identify what exists and whether any additional resources are needed. Once there’s a good understanding of goals, capabilities, and resources, priorities can be identified, focusing on the most important problems and the areas where AI can have the biggest impact.
- Prioritize data collection and quality: AI systems are trained on data, so it is critical to collect, prepare, and accurately label high-quality data for AI initiatives. Examples of data include claims, patient medical records, genetic information, treatment histories, and patient-reported outcomes. Data collection, storage, and management methods must be ethical, compliant with privacy and healthcare regulations, and prioritize patient data security.
- Develop, test, and implement: Once data has been collected and prepared, AI solutions can be developed and implemented for specific target use cases. It’s important to engage data scientists and AI experts to test AI models for the chosen use cases. Rigorous testing and validation are essential to ensure accuracy and reliability (a minimal validation sketch follows this list). AI implementation is an iterative process, so plans and strategies may need to be adjusted as more is learned about AI and as needs change.
- Evaluate and monitor: Once AI solutions are up and running, continuous evaluation and monitoring of performance helps ensure that they are meeting goals and not introducing unintended consequences. Adjustments and improvements are made based on feedback and data analysis; the sketch below includes a simple performance-drift check of this kind.
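To make the "develop, test, and implement" and "evaluate and monitor" steps concrete, here is a minimal sketch in Python using scikit-learn. It assumes a simple tabular classification use case (flagging patients who may need outreach) with fully synthetic, illustrative data; the column names, model choice, and drift tolerance are assumptions for demonstration only, not a prescribed approach.

```python
# Minimal sketch: validate a model on held-out data, then reuse the validation
# metric as a baseline for ongoing monitoring. All data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical feature table; a real project would use governed, de-identified
# clinical or claims data prepared under the organization's privacy controls.
rng = np.random.default_rng(42)
data = pd.DataFrame({
    "age": rng.integers(18, 90, 1000),
    "prior_admissions": rng.poisson(1.5, 1000),
    "chronic_conditions": rng.integers(0, 6, 1000),
})
data["needs_outreach"] = (
    (data["prior_admissions"] + data["chronic_conditions"] + rng.normal(0, 1, 1000)) > 4
).astype(int)

# Hold out data the model never sees during training so validation is honest.
X = data.drop(columns="needs_outreach")
y = data["needs_outreach"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Validation AUC: {baseline_auc:.3f}")

# Ongoing monitoring: re-score recent, labeled cases and flag the model for
# review if performance drifts below an agreed tolerance of the baseline.
def check_for_drift(live_auc: float, tolerance: float = 0.05) -> bool:
    return (baseline_auc - live_auc) > tolerance
```

The key design point is that performance is measured on data the model never saw during training, and that same validation metric becomes the baseline against which production performance is continuously compared.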
Challenges
While the benefits of AI integration are clear and significant, the journey is not without hurdles. Developing and integrating AI into clinical practice is a sizable undertaking that requires collaboration, strategy, focus, and adequate resourcing. The following are common challenges faced by AI initiatives:
- Funding: Designing, building, and deploying AI to the high standard required, particularly when dealing with highly personal data, requires significant financial investment. Post-pandemic, healthcare organizations are focused on opportunities with a tangible return on investment, prioritizing IT development spending on staffing support, provider satisfaction, and operational efficiency.
- Data access: Gaining access to high-quality data is a major barrier to AI algorithm creation. Data quality, completeness, and impartiality influence model performance and trust; as the saying goes, “garbage in, garbage out.” An AI model is only as good as the data it is trained on, and accessing and managing high-quality, diverse, and relevant data can be a major obstacle: data may be siloed, inconsistent, lacking in sufficient volume, or spread across multiple systems and formats. Accurate, well-labeled data is therefore key to training AI systems that produce accurate and reliable results (a minimal data-quality check is sketched after this list).
- Data privacy and security: Personal data within the healthcare ecosystem is inherently highly sensitive, so it is crucial to ensure that AI systems are designed and implemented in a way that protects privacy and security. This can be challenging, especially considering the increasing risk of cyberattacks. Apprehension about sharing personal information without explicit consent or the potential liabilities associated with unintentional protected health information disclosures cannot be overlooked.
- Regulatory compliance: Keeping up with evolving regulations and compliance standards, especially related to AI in healthcare, is challenging. AI tool developers need to stay abreast of changing laws and adapt their AI initiatives quickly and accordingly to ensure that AI workflows are updated to adhere to ethical guidelines and comply with relevant regulations.
- Proof of effectiveness: While operational efficiencies and measures of technical accuracy are gaining traction, there is a growing need for concrete evidence of AI's benefits for clinical management (e.g., fewer errors, better patient outcomes, and higher quality of care) before clinicians will be willing to use the technology.
- Interpretability and transparency: "Black box" solutions fail to gain clinicians' trust when the methodologies used are unclear, the output is difficult to translate for a clinically minded audience, or the output doesn’t align with local clinical practice (see the interpretability sketch after this list).
- Adoption and implementation: Even a highly valuable algorithm must be adopted and used in clinical practice to have a real-world impact, and there are several pitfalls at this stage. For example, solutions that demand excessive cognitive load, introduce non-standard workflows, or target the wrong audience will invariably face adoption hurdles.
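To illustrate the data-access point above, here is a minimal sketch of automated data-quality checks using pandas, run before any training begins. The column names and completeness threshold are hypothetical assumptions; real projects would define these with data stewards and in line with governance and regulatory requirements.

```python
# Minimal sketch: basic completeness and consistency checks on a hypothetical
# claims or EHR extract before it is used to train a model.
import pandas as pd

def data_quality_report(df: pd.DataFrame, required_columns: list[str],
                        max_missing_rate: float = 0.05) -> dict:
    """Return simple completeness and duplication metrics for the extract."""
    report = {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_rate_by_column": df[required_columns].isna().mean().to_dict(),
    }
    report["columns_failing_completeness"] = [
        col for col, rate in report["missing_rate_by_column"].items()
        if rate > max_missing_rate
    ]
    return report

# Example usage with a hypothetical extract:
# report = data_quality_report(claims_df, ["patient_id", "diagnosis_code", "service_date"])
# if report["columns_failing_completeness"]:
#     raise ValueError("Data does not meet completeness thresholds for training.")
```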
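On interpretability, one lightweight way to avoid a fully "black box" experience is to surface which inputs drove an individual score. The sketch below assumes the linear model from the earlier validation example; the contribution decomposition shown is illustrative only, and more complex models would require dedicated explanation tooling.

```python
# Minimal sketch: rank each feature's contribution to one patient's risk score
# for a linear model (contribution = coefficient * feature value, in log-odds).
import pandas as pd

def explain_prediction(model, feature_row: pd.Series) -> pd.Series:
    """Return per-feature contributions for a single patient, largest first."""
    contributions = pd.Series(model.coef_[0], index=feature_row.index) * feature_row
    return contributions.sort_values(key=abs, ascending=False)

# Example: explain the first held-out patient from the validation sketch above.
# print(explain_prediction(model, X_test.iloc[0]))
```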
To learn more about how IQVIA can help you with AI, contact us at pr-contact@iqvia.com.