Responsible AI: Exploring IQVIA’s vision on AI governance in healthcare
Zahra Timsah, PhD, MBA, MSc, IQVIA Analytics Center of Excellence
Aug 20, 2024

AI is everywhere today. But with its growing popularity come many questions about responsibility and trust. That’s why IQVIA draws on more than a decade of experience in AI to promote the need for responsible AI governance around the world. Our vision of a responsible AI framework rests on five key principles that support such governance and form the core of all of our AI efforts.

As AI capabilities continue to progress, it’s essential that we work to improve the quality and relevance of AI output, including mitigating bias and hallucinations. This work helps ensure AI is used responsibly, for the good of patients and of public health. As a leader in healthcare and life sciences, IQVIA is well positioned to lead the industry in ensuring such governance is in place. To this end, we advocate for policies that prioritize patient benefit and manage risk, ensuring AI is developed and deployed in an ethical and responsible way.

Five essential principles to support responsible AI use

IQVIA adheres to the following five essential guiding principles to develop and deploy AI as responsibly as possible:

  1. Fairness: Fairness means ensuring our AI systems are inclusive and suitable for their intended use. We make significant efforts to ensure AI doesn’t perpetuate inequities or negatively impact vulnerable populations. In addition, any decisions generated by AI must comply with anti‐discrimination laws.
  2. Transparency: It’s essential that we provide full disclosure when people are engaging with AI. This includes making sure people understand up front the role AI plays in decision-making and how it affects possible outcomes.
  3. Respect: All data used in AI must be collected, stored, and handled in accordance with data governance and privacy laws. AI systems must respect the rights and dignity of the people, groups, and communities served.
  4. Accountability: Responsible AI requires incorporating adequate oversight to hold organizations responsible for the impact of their systems. This includes requiring AI developers to identify and mitigate any potential biases in the allocation of healthcare resources. It also means that in high-risk scenarios, those deploying AI must be responsible for monitoring, identifying, and reporting risks to ensure ongoing fairness and safety.
  5. Auditability: All AI systems should be configured so they can be monitored and assessed, enabling robust evaluations that confirm AI models are reliable and effective, much as clinical testing validates the safety and effectiveness of new medicines.

The importance of international alignment on AI regulations

International alignment on AI regulations is essential to:

  • Prevent divergent laws: Avoid creating conflicting standards that could hinder the global application of AI in healthcare.
  • Facilitate global innovation: Ensure AI development and use are consistent worldwide, promoting equitable access to AI advancements.
  • Adopt consistent terminology: Align AI terminology and standards with international efforts led by organizations such as OECD, ISO/IEC, and NIST.

As the life sciences and healthcare industries continue to apply AI and adopt other new technologies, we must remain vigilant in our approach to ensure we use AI responsibly, in ways that mitigate bias and ultimately benefit patients everywhere.

Learn more about IQVIA Healthcare-grade AI™.
