
The role of diverse development teams in healthcare AI

Written by Gugu Ntsele | May 28, 2025

As healthcare institutions increasingly deploy AI tools for diagnosis, treatment planning, and resource allocation, the diversity—or lack thereof—within the teams that build these systems shapes their impact on patient care.

As AI ethicists Auxane Boch and Alexander Kriebitz from the Technical University of Munich explain, "Diversity encompasses many dimensions, and its exact understanding can be contentious. Nevertheless, existing definitions of diversity share an essential commonality by emphasising the distinctions within a group and context-dependent." They further clarify that "a 'diverse team' could consist of different nationalities, gender identities, political views and life experiences."

Beena Ammanath, Global Deloitte AI Institute Leader, reinforces this perspective in her Forbes article, "AI models do not spring from the technology either fully formed and ready to deploy. They are conceived, shaped and managed by humans. Within a business context, human stakeholders make fundamental decisions on what AI to build, how to build and use it and how to manage the risks." The human element in AI development cannot be overstated.

Beyond its conceptualization, Boch and Kriebitz emphasize that "diversity carries significant ethical implications. It speaks to the aspiration of representing society, particularly in positions of power, societal influence or wealth, but also in its dedication to attention and care."

 

The diversity crisis in AI development

The AI development community faces diversity challenges that directly impact healthcare applications. Recent data reveals disparities across multiple dimensions:

Gender representation in AI

  • Global workforce: Women constitute approximately 22% of AI professionals worldwide, indicating a gender disparity in the field.
  • AI research: In AI research, only 13.83% of authors are women, and this proportion has remained relatively stagnant since the 1990s.
  • AI upskilling opportunities: A 2024 Randstad report revealed that 71% of AI-skilled workers are men, while only 29% are women. Furthermore, women are less likely to be offered AI upskilling opportunities and often feel less confident in the training they receive.

Racial representation in the AI workforce

  • Underrepresentation in tech: According to McKinsey's State of AI 2022, less than 25% of AI employees identify as racial or ethnic minorities, with only a third of companies reporting that their AI teams are racially diverse.
  • Educational pipeline: The 2024 AI Index Report by Stanford HAI notes that while white students remain the majority among computer science graduates in the U.S. and Canada, representation from other ethnic groups has been growing. Since 2011, the proportion of Asian CS bachelor's degree graduates has increased by 19.8 percentage points, and the proportion of Hispanic CS bachelor's degree graduates has grown by 5.2 percentage points.

Public perception and trust

A 2025 Pew Research Center survey found that about three-quarters of AI experts believe the perspectives of white adults are well-represented in AI design. In contrast, only half say this about Asian adults' perspectives, and even smaller shares say this about the views of Black or Hispanic adults.

 

Why diversity matters in healthcare AI

The consequences of homogeneous development teams manifest in healthcare AI through various mechanisms:

Detecting data biases

Historical healthcare data reflects decades of systemic inequities, including unequal access to care, clinical trials that underrepresented minority populations, and diagnostic biases. When development teams lack diversity, these biases often go undetected.

As Boch and Kriebitz point out, "The importance of diversity in AI becomes evident when considering the representation of individuals in the data sets that these AI systems analyse. AI solutions are not designed to consider all demographic groups equally, and this inherent bias can lead to significant performance disparities."

Ammanath echoes this concern, "Part of the challenge is developing models and use cases that are equally accessible, valuable, trustworthy and compliant for end users across demographics and geographies. I believe that achieving this at scale takes diversity."

For example, a team with racial and ethnic diversity is more likely to question whether a dermatology algorithm's training data adequately represents various skin tones. 
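
To see what that questioning looks like in practice, here is a minimal sketch in Python of a representation audit over a training set's metadata. The skin-tone categories, counts, and 15% threshold below are illustrative assumptions, not figures from any real dataset or product.

```python
# A minimal sketch of a representation audit before model training.
# The categories, counts, and 15% threshold are hypothetical examples.
from collections import Counter

# In practice these counts would be tallied from the dataset's metadata,
# e.g. Counter(record["skin_tone"] for record in training_records).
skin_tone_counts = Counter({
    "Fitzpatrick I-II": 6_200,
    "Fitzpatrick III-IV": 3_100,
    "Fitzpatrick V-VI": 700,
})

total = sum(skin_tone_counts.values())
for group, n in skin_tone_counts.items():
    share = n / total
    flag = "  <-- underrepresented, review before training" if share < 0.15 else ""
    print(f"{group}: {n} images ({share:.1%}){flag}")
```

The check itself is trivial; the harder part is having someone in the room who thinks to run it before the model ships.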

 

Defining appropriate performance metrics

The metrics chosen to evaluate AI systems directly influence their design and behavior. Diverse teams bring varied perspectives on what constitutes "success" in healthcare applications.

For instance, a team composed solely of technical experts might optimize for statistical accuracy across the entire patient population. In contrast, a team that includes members from underserved communities might prioritize minimizing performance disparities between demographic groups, even if that means slightly lower overall accuracy.
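
That trade-off can be made explicit in evaluation code. The sketch below, using made-up group labels and prediction outcomes, computes overall accuracy alongside the accuracy gap between groups; which of the two numbers a team chooses to optimize is exactly the decision described above.

```python
# A hedged sketch comparing overall accuracy with the gap between demographic
# subgroups. Group labels and (true, predicted) pairs are made up for
# illustration; real evaluations would use held-out clinical data.
def accuracy(pairs):
    return sum(1 for y_true, y_pred in pairs if y_true == y_pred) / len(pairs)

results_by_group = {
    "group_a": [(1, 1), (0, 0), (1, 1), (0, 0), (1, 1)],
    "group_b": [(1, 0), (0, 0), (1, 1), (0, 1), (1, 1)],
}

overall = accuracy([p for pairs in results_by_group.values() for p in pairs])
per_group = {g: accuracy(pairs) for g, pairs in results_by_group.items()}
disparity = max(per_group.values()) - min(per_group.values())

print(f"overall accuracy: {overall:.2f}")          # 0.80
print(f"per-group accuracy: {per_group}")           # 1.00 vs 0.60
print(f"accuracy gap between groups: {disparity:.2f}")  # 0.40
# A team optimizing only `overall` never sees `disparity`; a team that also
# tracks the gap may accept a slightly lower overall score to shrink it.
```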

 

Understanding clinical context

Healthcare AI tools exist within settings shaped by human factors, institutional practices, and systemic constraints. Development teams with clinical diversity bring insights about how algorithms will function in real-world settings.

Clinicians from varied practice settings—from academic medical centers to rural health clinics—understand the different resources, workflows, and patient populations that impact AI implementation. Without this diversity of clinical experience, algorithms may be designed for idealized settings that don't reflect the environments where most care is actually delivered.

 

Anticipating diverse patient needs

Patients interact with healthcare systems in different ways based on their cultural backgrounds, language proficiency, health literacy, disability status, and socioeconomic circumstances. Diverse development teams are better positioned to anticipate these needs.

For example, a team with members who have experienced language barriers in healthcare settings might prioritize developing multilingual interfaces for patient-facing AI tools. Similarly, developers with disabilities bring invaluable perspectives on accessibility requirements that might otherwise be overlooked.

 

The benefits of diverse teams in healthcare AI

As highlighted in "The Importance of Diverse Teams in Building Human-Centric AI," diversity in AI development "is not just a matter of representation; it encompasses a broad range of factors that contribute to the richness and depth of problem-solving and innovation." This diversity includes "gender, ethnicity, cultural background, educational experience, professional expertise, and cognitive diversity," which together create teams that "reflect a variety of perspectives—ranging from technical skills to lived experiences—[and] are better equipped to tackle complex, multifaceted challenges."

The same article outlines the following benefits:

More inclusive and equitable AI systems

Diverse teams create "more inclusive and equitable systems. AI that is developed by teams with varied perspectives is more likely to account for the needs and experiences of a broader range of people." In healthcare specifically, "a diverse AI team designing healthcare tools might ensure that the system accounts for cultural differences in medical care or takes into consideration socioeconomic barriers to access, resulting in a more universally applicable and fair solution."

 

Enhanced creativity and innovation through diverse viewpoints

"Diversity fosters creativity and innovation by encouraging out-of-the-box thinking and challenging the status quo. When team members bring different perspectives, it becomes easier to identify new solutions to problems that may not be apparent to those with similar experiences." For healthcare AI, this means "new algorithms, models, and applications that are more effective and robust," whether addressing data security challenges or finding novel ways to improve patient experience.

 

Better representation of global populations and their needs

"For AI to be truly effective, it must reflect the diverse realities of the global population. Teams that include individuals from varied cultural and geographic backgrounds are better equipped to design AI systems that understand and respect these differences." This is particularly significant in healthcare, where cultural factors significantly influence care delivery, health beliefs, and treatment adherence.

 

Reducing the risk of perpetuating harmful stereotypes or inequities

Diverse teams are vital for "reducing the risk of perpetuating harmful stereotypes or inequities." The research emphasizes that "a diverse team is more likely to recognize these risks early in the development process and take proactive steps to ensure that their AI systems do not perpetuate harmful biases." In healthcare, where existing disparities already cause significant harm, this preventative approach is essential.

 

Measuring and incentivizing team diversity

For healthcare organizations implementing AI systems, evaluating development team diversity should be part of vendor assessment and internal development processes. This aligns with emerging regulatory frameworks such as the EU Artificial Intelligence Act (AIA), as Boch and Kriebitz note: "The European Union (EU) AIA emphasises measures to identify and mitigate biases in data management but also calls for diverse teams to develop AI solutions."

Ammanath highlights the continuous nature of diversity considerations throughout the AI lifecycle, "Evaluating whether a model is performing as intended means considering not just the value delivered but also the risks, both expected and surprising... The diversity of people contributing to this effort helps the assessment process. Are unexpected risks emerging in the application? Is the model performing equally well for all stakeholders?" This ongoing assessment by diverse teams helps ensure the AI system remains effective and equitable over time.

 

Balancing expertise and representation

While diversity is essential, healthcare AI development also requires technical expertise and domain knowledge. Organizations must balance these considerations through:

  • Tiered involvement models: Creating multiple pathways for diverse stakeholders to contribute based on their expertise and availability
  • Training and knowledge exchange: Building capacity among diverse stakeholders to engage meaningfully with technical aspects of AI development
  • Clear role definition: Establishing where different perspectives have decision-making authority versus advisory input
  • Facilitating effective collaboration: Using structured processes that enable meaningful exchange across disciplines and backgrounds

 

FAQs

How does team diversity impact AI accuracy in healthcare?

Diverse teams are more likely to identify potential biases and blind spots, leading to more accurate and fair AI models.

 

What role do patient advisory boards play in healthcare AI development?

They provide crucial insights into patient needs and concerns, helping developers create more patient-centered AI tools.

 

Why is interdisciplinary collaboration important in healthcare AI?

It allows for the integration of medical, technical, and ethical perspectives, resulting in more robust AI systems.

 

How can AI developers avoid data bias in healthcare algorithms?

By using diverse and representative training datasets and continuously monitoring model performance across demographic groups.
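
As a rough illustration of the monitoring half of that answer, the hypothetical check below recomputes accuracy per demographic group and flags any group that drifts too far below its baseline. The group names, baseline numbers, and alert threshold are assumptions made for the example.

```python
# A minimal sketch of continuous per-group performance monitoring.
# Group names, baselines, and the threshold are hypothetical.
BASELINE_ACCURACY = {"group_a": 0.91, "group_b": 0.90, "group_c": 0.89}
ALERT_THRESHOLD = 0.05  # flag any group that drops more than 5 points

def check_drift(current_accuracy):
    """Return the demographic groups whose accuracy fell too far below baseline."""
    flagged = []
    for group, baseline in BASELINE_ACCURACY.items():
        current = current_accuracy.get(group)
        if current is not None and baseline - current > ALERT_THRESHOLD:
            flagged.append(group)
    return flagged

# Hypothetical numbers from the latest monitoring window
latest = {"group_a": 0.90, "group_b": 0.82, "group_c": 0.88}
print(check_drift(latest))  # -> ['group_b']: investigate before the gap widens
```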

 

What challenges do diverse teams face in healthcare AI?

They often confront implicit bias, underrepresentation, and limited access to mentorship and advancement opportunities.