Managing AI in Practice: A Structured Approach to Reliable and Defensible Systems
Mar 31, 2025

The rapid expansion of AI applications in health and wellness is unlocking new possibilities, from early diagnosis to personalized treatment plans.1 Because these applications can affect patient outcomes and safety, and because they rely on sensitive data, regulatory and standards bodies are establishing guardrails that promote defensible use. For example, the International Organization for Standardization (ISO) has a dedicated committee producing reports and standards on AI,2 and both the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) have published workplans and guidance documents on managing AI.

Managing AI in real-world health and wellness applications requires a multi-layered approach that accounts for complexity, regulation, and system integration. This is especially true as organizations increasingly shift toward a platform-based approach to consolidate, operate, and manage healthcare AI applications. That shift is driven by the need for a unified framework that can handle the complexities of multiple AI systems while ensuring seamless integration and defensible use. Platform operations provide that structure: the infrastructure and processes needed to support the deployment, monitoring, and maintenance of AI applications.


Building a Scalable AI Management Framework

Platform operations encompass the processes, technologies, and frameworks that facilitate the deployment, monitoring, and maintenance of AI systems. In healthcare, AI applications range from image recognition systems for radiology to predictive analytics for patient management, often involve vast amounts of sensitive data, and can significantly affect patient outcomes. The goal of platform operations is therefore to keep these systems robust, secure, and aligned with regulatory standards and ethical expectations.

We’ve found that a three-pronged approach creates a strong foundation for AI systems, ensuring effectiveness while mitigating risks and maintaining integrity, security, and privacy throughout the lifecycle:

  • Understanding what’s needed: The first step in establishing robust platform operations is identifying the necessary controls for the use cases being explored, informed by applicable regulations and best-practice guidelines in the domains of data protection, privacy, and AI. This process, known as controls & requirements mapping, helps organizations align their AI practices with those guardrails. Each control can then be mapped to specific actions or requirements that need to be implemented (a minimal sketch of such a mapping follows this list). International frameworks such as the NIST Privacy Framework can provide a neutral, common framing for consolidating requirements across the various guardrails being considered.
  • Understanding how you’ll build it: Once the controls and requirements are mapped, they guide technical feature design, shaping the system architecture and structures needed for secure implementation. Requirements can be translated into people, process, and technology specifications to define the platform’s operational architecture. This can include determining the optimal data flows, governance protocols, and risk and credibility assessments needed to create a technical system design.
  • Understanding how you’ll run it: With the system design in place, the next step is to establish operating guidelines for continuous improvement, monitoring, and regulatory alignment. This can include monitoring protocols that track the performance and use of AI systems, including periodic validation of the underlying models to ensure they remain accurate and relevant as new data becomes available. Staying abreast of evolving regulations and standards through robust horizon-scanning practices helps ensure that the platform and its AI systems remain aligned with the original controls & requirements mapping.
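
To make the first step concrete, here is a minimal sketch of what a controls & requirements mapping could look like as a simple data structure. The control IDs reference NIST Privacy Framework categories for illustration only; the requirement text and owner roles are hypothetical and not drawn from any specific regulation or engagement:

```python
from dataclasses import dataclass, field

@dataclass
class ControlMapping:
    """One control from a reference framework mapped to concrete platform requirements."""
    control_id: str                                        # framework category (illustrative)
    description: str
    requirements: list[str] = field(default_factory=list)  # actions to implement on the platform
    owners: list[str] = field(default_factory=list)        # roles accountable for the control

# Hypothetical mapping consolidating guardrails into a common framing
controls = [
    ControlMapping(
        control_id="CT.DM-P",  # NIST Privacy Framework: Data Processing Management (illustrative)
        description="Data are managed consistent with the organization's risk strategy",
        requirements=[
            "Pseudonymize patient identifiers before model training",
            "Log all data-processing activities to an audit trail",
        ],
        owners=["Privacy", "Information Governance"],
    ),
    ControlMapping(
        control_id="PR.AC-P",  # NIST Privacy Framework: access control (illustrative)
        description="Access to data and systems is limited to authorized users",
        requirements=["Enforce role-based access control on the training environment"],
        owners=["IT Security"],
    ),
]

# Consolidated view: which requirements each team needs to implement
for control in controls:
    for req in control.requirements:
        print(f"{control.control_id} -> {req} (owners: {', '.join(control.owners)})")
```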

A Cross-Functional Effort for Reliable AI

A fourth, equally important pillar of AI management is understanding how interested parties need to be involved. Platform operations are a cross-functional exercise: fostering collaboration among privacy, information governance, IT, implementation, and end-user stakeholders is essential to align expectations and address challenges collectively.

A real-world example highlights why robust AI management is essential. In this case, regulatory constraints required that different data sources be kept separated, so we introduced a federated learning approach for a healthcare AI platform. Rather than pooling raw records, the federated approach draws statistics from each source and securely combines them for machine learning model training and deployment.
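
The platform details are beyond the scope of this post, but the core mechanic can be sketched in a few lines. The example below shows simple federated averaging, in which each site trains locally and only updated weights and sample counts, never raw records, are combined. The logistic-regression model, learning rate, and synthetic site data are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> tuple[np.ndarray, int]:
    """Run a few epochs of logistic-regression gradient descent on one site's data.
    Only the updated weights and the sample count leave the site, never raw records."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # average gradient over local data
        w -= lr * grad
    return w, len(y)

def federated_average(site_updates: list[tuple[np.ndarray, int]]) -> np.ndarray:
    """Combine site updates into a single global model, weighted by sample count."""
    total = sum(n for _, n in site_updates)
    return sum(w * (n / total) for w, n in site_updates)

# Hypothetical training round across two segregated data environments
rng = np.random.default_rng(0)
global_w = np.zeros(3)
site_a = (rng.normal(size=(100, 3)), rng.integers(0, 2, 100))
site_b = (rng.normal(size=(80, 3)), rng.integers(0, 2, 80))

updates = [local_update(global_w, X, y) for X, y in (site_a, site_b)]
global_w = federated_average(updates)  # aggregated statistics, no raw data shared
```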

  • Controls Mapping: Using an expert-in-the-loop regulatory mapping solution, we identified critical controls such as ensuring patient data anonymity, implementing role-based access controls, and establishing audit trails for data processing activities.3
  • System Design: We designed the infrastructure to extract statistics from segregated data environments, with robust security measures to protect sensitive data, and implemented redundant systems to ensure continuous availability.
  • Operating Guidelines: We set up continuous monitoring to track the tool's performance, with regular model validation to maintain accuracy (a simplified validation check is sketched below), and put an incident response plan in place to address potential breaches or system failures.
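
As a rough illustration of how such operating guidelines might translate into a monitoring routine, the sketch below re-scores a deployed model on recently labeled data and flags it for review when performance drops below an agreed threshold. The AUC metric, the threshold value, and the logging behavior are assumptions for illustration, not a description of the actual system:

```python
import logging
from sklearn.metrics import roc_auc_score

logger = logging.getLogger("model_monitoring")

# Hypothetical threshold agreed with clinical and governance stakeholders
AUC_ALERT_THRESHOLD = 0.80

def periodic_validation(model, X_recent, y_recent) -> bool:
    """Re-score the deployed model on recently labeled data and flag degradation.

    Returns True when performance is acceptable, False when the model should be
    pulled into the review/incident process defined in the operating guidelines.
    """
    scores = model.predict_proba(X_recent)[:, 1]
    auc = roc_auc_score(y_recent, scores)
    logger.info("periodic validation: AUC=%.3f on %d recent cases", auc, len(y_recent))

    if auc < AUC_ALERT_THRESHOLD:
        # In a real deployment this would open a ticket or trigger the incident response plan
        logger.warning("model performance below threshold (%.2f); flagging for review",
                       AUC_ALERT_THRESHOLD)
        return False
    return True
```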

Through this multi-layered approach, the AI-driven platform operates effectively, enhancing machine learning capabilities while ensuring patient data security and alignment with regulatory standards. From design to deployment, information governance and privacy professionals worked alongside the delivery teams to ensure successful execution.


Conclusion

The integration of AI in healthcare holds immense potential, but as organizations integrate AI applications at scale, the frameworks and practices that support their management and maintenance must evolve with them. By implementing a structured AI management strategy, healthcare organizations can harness the power of AI while mitigating risks, aligning with regulatory expectations, and maintaining public trust. Robust platform operations can pave the way for transformative advancements in healthcare, ultimately improving patient outcomes and fostering innovation in the industry.


References
  1. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives https://www.mdpi.com/2227-9032/12/2/125
  2. ISO/IEC Joint Technical Committee 1 (JTC 1) for information technology, Subcommittee 42: Artificial intelligence. https://www.iso.org/committee/6794475.html
  3. NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, version 1.0. https://www.nist.gov/publications/nist-privacy-framework-tool-improving-privacy-through-enterprise-risk-management
