Building Inclusive AI for Healthcare: Lessons from Odin Vision
This year’s International Day of Women and Girls in Science theme is “Synergising AI, Social Science, STEM and Finance: Building Inclusive Futures for Women and Girls.” In this collaborative blog with Accelerator alumnus Odin Vision, we explore approaches to bridging inequalities through the responsible development and implementation of AI.
Artificial Intelligence (AI) is a transformative force reshaping how we work, live and deliver healthcare services. The 10 Year Health Plan for England highlights AI as a key enabler for a new model of care, with the potential to support earlier diagnosis, improve efficiency and reduce administrative burden.
AI also poses significant challenges, including questions of transparency, data privacy, and gender and racial bias in algorithms trained on non-representative datasets. These risks underline the need for effective regulation and governance to ensure safe, equitable deployment.
In this blog, we reflect on the theme of this year’s International Day of Women and Girls in Science, outlining approaches that innovators can take to ensure AI is developed and used equitably. Accelerator alumnus Odin Vision also shares insights from their experiences developing cloud computing and AI-enabled devices, providing new approaches to endoscopy to improve patient outcomes.
Co-design and a diverse workforce
The AI sector faces a significant gender gap in its workforce: only 22% of AI professionals are women, and just 14% hold senior roles. Diverse perspectives in the design and development of AI systems can help identify bias and unchecked assumptions earlier.
The Odin Vision AI Research and Data Science department bucks the industry trend with women representing half of the team. Initiatives like the entry-level Kickstarter scheme, PhD studies and leadership training further support workforce development. Odin collaborates closely with clinicians, including women across gastroenterology and clinical research, to ensure AI models align with real workflows and reflect diverse clinical perspectives.
Meaningful co-design offers another solution. By involving people with lived experience throughout the development process, AI systems can better reflect the needs of the full population. Leap alumnus Punta Health, for example, conducted interviews with patients, carers, and clinicians to identify critical gaps in current dementia care models and develop an AI-enabled dementia management platform.
Co-design also requires continuous improvement. NHS procurement models increasingly recognise that buying a digital product “once” is not enough. Instead, organisations must create procurement structures that allow for continual evaluation, iteration and improvement. Guidance, such as the HIN white paper on remote monitoring and market partnerships, highlights the importance of long-term collaboration between innovators and the NHS to deliver safe, adaptive technology.
This approach embeds equity from the outset, ensuring AI products remain inclusive not only during regulatory assessment, but throughout real-world development and implementation.
Odin Vision develops AI tools in close collaboration with clinicians, including women working across gastroenterology and clinical research. This helps ensure alignment with real clinical workflows while incorporating diverse perspectives into design, usability and validation.
Equity does not end at regulatory approval, and AI performance must be monitored in real-world settings. Odin Vision supports continuous evaluation after deployment to identify drift or emerging biases and to maintain reliable performance across patient populations as clinical practice evolves.
Juana González-Bueno, Research Team Lead, Odin Vision
Education, training and early detection of bias
Recent research by Cornerstone found that although 80% of UK employees use AI tools in their work, 51% have not received any AI training. Within healthcare, the NHS Long Term Workforce Plan emphasises the need for training across specialities so clinicians can confidently adopt new technologies involving AI.
As AI becomes embedded in clinical workflows, training and upskilling staff is crucial, not only to understand how these systems work but also to identify issues early, challenge outputs and prevent harm. This supports safer deployment, strengthens organisational capability and reduces transformation costs.
Odin supports employees in building confidence around monitoring and understanding AI by equipping them with practical knowledge through AI literacy training, targeted upskilling, and cross-team knowledge sharing. This shared understanding enables better decision-making across the product lifecycle, from model development through to performance monitoring.
By embedding good machine learning practices into day-to-day work and encouraging teams to challenge and assess AI behaviour, Odin ensures that insights into model performance directly inform risk management and product improvement, supporting the delivery of safe, effective and high-quality medical devices.
Continuous evaluation and regulatory frameworks
AI performance can vary across different clinical settings and patient populations. Several national programmes support innovators in contributing to the development of regulatory frameworks for AI as a Medical Device (AIaMD) and highlight evaluative practices in this emerging field. These include:
- MHRA AI Airlock: a regulatory sandbox for AIaMD products, enabling innovators like Accelerator alumnus Panakeia to safely test and navigate regulatory challenges.
- The Ambient Voice Technology (AVT) Self-Certificated Supplier Registry: a self-declared list that supports NHS organisations to understand the readiness, capabilities, features and compliance standards of listed suppliers, including Accelerator alumni TORTUS and Accurx.
Odin also prioritises continuous post-implementation monitoring to track performance across diverse patient groups and evolving clinical practice, ensuring their tools remain fair and accurate in real-world settings.
Continuous post-market monitoring allows Odin to understand how their AI products perform in real clinical settings, not just during development. By regularly reviewing usage data, performance trends, and user feedback, they identify opportunities to improve accuracy, usability, and workflow integration. This ongoing insight has directly led to product updates such as refining AI outputs, improving user interfaces, and adjusting how information is presented to better support users.
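The kind of subgroup-aware post-market monitoring described above can be sketched in a few lines of code. The example below is a minimal, illustrative sketch only: the record fields (`site`, `label`, `prediction`), the choice of sensitivity as the tracked metric, and the drift threshold are all assumptions for demonstration, not details of Odin Vision's actual pipeline.

```python
# Illustrative sketch of post-market subgroup performance monitoring.
# Field names, the metric, and the tolerance are hypothetical examples.
from collections import defaultdict


def sensitivity(records):
    """Fraction of actual positives that the model detected."""
    positives = [r for r in records if r["label"] == 1]
    if not positives:
        return None  # no positives in this group; metric undefined
    detected = sum(1 for r in positives if r["prediction"] == 1)
    return detected / len(positives)


def flag_drift(records, baseline, group_key="site", tolerance=0.05):
    """Compare per-group sensitivity against a baseline and return the
    groups whose performance has dropped by more than `tolerance`."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r[group_key]].append(r)
    flags = {}
    for group, recs in by_group.items():
        sens = sensitivity(recs)
        if sens is not None and sens < baseline - tolerance:
            flags[group] = sens
    return flags


# Example: site B's sensitivity has fallen well below the baseline.
records = [
    {"site": "A", "label": 1, "prediction": 1},
    {"site": "A", "label": 1, "prediction": 1},
    {"site": "B", "label": 1, "prediction": 0},
    {"site": "B", "label": 1, "prediction": 0},
    {"site": "B", "label": 1, "prediction": 1},
]
flagged = flag_drift(records, baseline=0.9)  # site "B" is flagged
```

In practice the grouping key could equally be a demographic attribute or scope type, and a flagged group would trigger human review rather than an automatic model change.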
We actively plan for and capture feedback from a diverse range of users, including clinicians across different specialties, experience levels and healthcare environments. Listening to these perspectives enables us to adapt our products in response to real needs and a rapidly evolving healthcare and technology landscape. By combining continuous monitoring with inclusive feedback, we ensure our AI solutions remain relevant, trusted and aligned with how people actually work, driving meaningful innovation.
Odin Vision
By prioritising practical approaches like co-design, training and continuous evaluation, innovators developing and implementing AI-enabled products for our health and care system can deliver meaningful benefits. These practices ensure that new technologies are not only effective but also align with the diverse needs and experiences of the populations they serve.


