Code & Compassion: Redefining Diagnosis, Bridging the Divide, Saving Lives

Jan 06, 2025, by Yvette Schmitter

The Transformative Promise of AI in Medical Diagnosis

Picture yourself waiting for test results in a doctor's office, feeling that well-known pang of anxiety. That wait used to stretch into days or even weeks while healthcare professionals sifted through data to get the answers you so sorely sought.

Artificial intelligence (AI) has the potential to completely transform this experience in the near future by delivering results in a matter of minutes, identifying patterns the human eye could miss, and lowering costs to make healthcare more affordable for all. The promise of quicker, more precise diagnoses, lower costs, and ultimately better care makes this a remarkable development.

A Few Tangible Benefits of AI in Healthcare

  • Speed: Results in minutes instead of weeks.
  • Precision: Diagnostic accuracy that rivals, and sometimes surpasses, human expertise.
  • Affordability: Reducing costs to make care accessible for more people.

Beyond the brilliant potential of AI-driven healthcare, however, we have to confront the stark reality that this revolution will only be as successful as the motivations behind it.

The Marvel of Machine Learning Meets Medicine

The full impact of AI on healthcare has yet to be measured. Machine learning algorithms can process massive amounts of patient data and identify the subtleties and patterns that may signal the onset of disease. A 2023 study published in Nature highlighted AI's diagnostic accuracy for certain cancers, now on par with top oncologists, reducing human error and delivering quicker, more precise results. Imagine what this means for conditions like breast cancer, where early detection dramatically improves survival rates. The power of AI to save lives is staggering, its impact transformative.

Yet with this power comes an immense responsibility.

Technology, in its purest form, is neutral; it sees only data. However, the data we provide AI isn't impartial. It is shaped by human hands, and it reflects and reinforces the values of those who develop it. Technology is more than a container for the social biases shaped by our histories; it can also actively contribute to or exacerbate racism. Bias can enter the data lifecycle as early as collection. For example, when someone designs the surveys that will later inform how a program, model, or algorithm works, that designer's preconceived notions can end up baked into the process.
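
To make that concrete, here is a minimal, hypothetical sketch in Python of what a collection-stage check might look like: comparing the demographic makeup of a gathered dataset against the population a tool is meant to serve. The group names, reference shares, and tolerance are invented placeholders, not figures from any real program.

```python
# Hypothetical sketch: flag under-representation at the data-collection stage,
# before any model is trained. The reference shares below are invented
# placeholders, not real census or survey figures.
from collections import Counter

def representation_gaps(collected_groups, reference_shares, tolerance=0.5):
    """Return groups whose share of the collected data falls well below
    their share of the population the tool is meant to serve."""
    counts = Counter(collected_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < expected * tolerance:  # e.g. less than half the expected share
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Example: a survey sample that skews heavily toward one group.
sample = ["group_a"] * 880 + ["group_b"] * 90 + ["group_c"] * 30
print(representation_gaps(sample, {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}))
# Groups flagged here are where bias first takes hold -- before any algorithm runs.
```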

The Case for Responsible AI in Healthcare

Think about this: an AI diagnostic tool trained mostly on data from white, middle- to upper-class patients may perform poorly when presented with patients whose demographics differ. The result? Misdiagnosis and mistreatment that deepen the health disparities already widespread in underprivileged populations. As leaders in the healthcare industry, we must understand that, despite its strength, AI can absorb our unconscious prejudices if we are not careful.

Bias in healthcare is widespread. A few years ago, a commonly used healthcare algorithm unintentionally suggested less care for Black patients with comparable medical requirements than for White patients. The issue wasn't intentional racism, but rather that the algorithm had been trained on data that underestimated the healthcare needs of Black patients. This led to an outcry and the eventual overhaul of the algorithm, but it underscored an urgent lesson: AI must be as equitable as it is innovative.

For healthcare leaders, adopting AI responsibly isn't just a box to check on an innovation list; it's a moral imperative. Every tool adopted must be scrutinized for equity in its training data and tested across diverse populations to ensure it serves everyone.
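
One concrete form that testing can take is an audit of a tool's accuracy across demographic groups. The sketch below, in Python, is a simplified illustration under assumed field names (model_score, has_condition, demographic_group); it is not any vendor's actual interface.

```python
# Hypothetical sketch: audit a diagnostic model's sensitivity by demographic group.
# `records` is an assumed list of dicts carrying a ground-truth label, a model
# score, and a self-reported demographic field -- not any real vendor's schema.
from collections import defaultdict

def sensitivity_by_group(records, threshold=0.5):
    """Compare the true-positive rate (sensitivity) across demographic groups."""
    stats = defaultdict(lambda: {"tp": 0, "fn": 0})
    for r in records:
        if not r["has_condition"]:          # only patients who truly have the condition
            continue
        group = r["demographic_group"]
        if r["model_score"] >= threshold:   # model caught the diagnosis
            stats[group]["tp"] += 1
        else:                               # model missed it
            stats[group]["fn"] += 1
    return {
        group: round(c["tp"] / (c["tp"] + c["fn"]), 3)
        for group, c in stats.items()
    }

# A wide gap between groups is the red flag leaders should demand to see
# before a tool ever reaches a clinic.
```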

While AI holds a lot of promise, there is a serious risk that the technology will exacerbate existing healthcare inequities. Technology cannot be neutral, because it reflects the biases present in its development process and its training data.

The question is clear as we stand at the intersection of technological advancement and human care: will we commit to creating an inclusive, compassionate healthcare system, or will we allow AI to benefit just a select few?

Healthcare of the future is about who AI can help, not just what it can do.

According to a groundbreaking Institute of Medicine (IOM) report, African Americans, including myself, and members of other marginalized groups routinely receive fewer procedures and lower-quality care than White people, across everything from the most basic treatments to the most sophisticated diagnostic tools. Even after controlling for variables like insurance coverage, severity of illness, income, education, and the kind of healthcare facility, these differences persist. The data is unmistakable: these care deficiencies are caused by systemic injustices that are exacerbated by unintentional bias among medical personnel.

Healthcare is being revolutionized by AI, but this change will not succeed without a corresponding commitment to accountability. It's easy to get caught up in the excitement and buzz surrounding new developments in technology, but the healthcare industry is not the place to "move fast and break things." Here, every mistake has real repercussions: it affects lives, erodes trust, and endangers the welfare of society.

As an additional illustration, a 2019 study published in Science examined a popular healthcare algorithm that forecast patients' future medical needs based on their historical healthcare spending. The researchers discovered that this method significantly understated the health needs of Black patients. The bias arose because, owing to institutional barriers and unequal access to care, Black patients frequently spent less on healthcare than White patients with comparable conditions. The algorithm consequently gave Black patients lower risk rankings, which limited their access to essential care. The study underscored the drawback of using healthcare spending as a stand-in for health needs: it ignores the existing disparities that restrict underserved communities' access to care. A simplified sketch of this proxy effect follows the key findings below.

Key Findings:

  • Algorithms can inadvertently discriminate based on incomplete or biased data
  • Historical healthcare spending fails to capture true health needs
  • Marginalized communities suffer most from such technological shortcomings
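
To see the mechanism behind these findings, here is a deliberately simplified Python sketch. It is not the studied algorithm; the patients, condition counts, and dollar figures are invented solely to show how ranking by past spending can push a sicker but lower-spending patient down the priority list.

```python
# Invented, simplified illustration of the spending-as-proxy problem: when access
# barriers suppress one group's spending, a model that ranks patients by predicted
# cost will treat equally sick patients in that group as lower risk.
patients = [
    {"id": 1, "group": "A", "chronic_conditions": 4, "annual_spending": 12000},
    {"id": 2, "group": "B", "chronic_conditions": 4, "annual_spending": 7000},
    {"id": 3, "group": "A", "chronic_conditions": 2, "annual_spending": 8000},
    {"id": 4, "group": "B", "chronic_conditions": 2, "annual_spending": 3500},
]

# Proxy label: rank by past spending (what the flawed algorithm effectively did).
by_spending = sorted(patients, key=lambda p: p["annual_spending"], reverse=True)

# Direct measure of need: rank by illness burden instead.
by_need = sorted(patients, key=lambda p: p["chronic_conditions"], reverse=True)

print("Ranked by spending:", [p["id"] for p in by_spending])  # [1, 3, 2, 4]
print("Ranked by need:    ", [p["id"] for p in by_need])      # [1, 2, 3, 4]
# Patient 2 is just as sick as patient 1, yet the spending proxy drops them
# below a healthier patient -- and with them goes their access to extra care.
```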

To create a model, someone makes choices about what is deemed essential and important enough to include, and in those choices lie our biases and our priorities. So, in essence, a model's blind spots reflect the judgments and priorities of its creators. Models are merely opinions and judgments expressed in mathematics, codified. It's not simply a question of who designed the model but also of what that person or company is trying to accomplish.

This is more than an issue of biased data - it is a matter of life and death.

Each misdiagnosis, each overlooked risk, each missed opportunity to provide equitable care is a person whose future hangs in the balance. A father's medical care is being postponed. A child is being raised in a system that has already determined their value based on their skin tone.

Addressing this issue will take more than technical solutions; it demands a shared commitment to accountability, transparency, and collaboration. We must face uncomfortable facts about the established structures and the prejudices they reflect and reinforce. We need to ask challenging questions: Who does this data represent? Who's missing? Whose opinions are shaping these algorithms? And perhaps most important of all: who is being left behind?

When algorithms are trained on biased data, they may inadvertently discriminate. Historical measures such as healthcare spending do not capture patients' real needs. Frequently, marginalized communities are the ones that suffer most from these mistakes. We must not let this go on.

AI bias is a moral failing, not merely a technical error.

If AI serves as a bridge, more people could have access to prompt, accurate, and fair healthcare. But that bridge will only be strong if it is constructed with purpose, inclusivity, and attention to detail. We cannot afford to let this moment pass without taking action. AI holds enormous promise, but it also carries real risks. Let this serve as a reminder to all of us—researchers, doctors, business leaders, engineers, and everyday people—to make sure that technology represents our greatest goals rather than our worst biases. Then and only then will we be able to turn AI into a tool that heals, uplifts, and benefits everyone.

In a time of extraordinary technological advancement, let’s remember that true innovation is measured not by the sophistication of our algorithms but by the lives they change for the better. For leaders, the challenge is to make sure AI is more than a tool for profit or prestige; it’s a force for real, positive change, reaching every patient, every community, and benefiting society at large.

Therefore, as we consider the future of healthcare, we find ourselves at a crossroads: we must decide whether to allow AI to transform healthcare for only a handful of people or to use it to build a system that is inclusive and beneficial to everybody.

The answer is obvious to me: let's build a system that cares for and includes everyone. The future depends on us and our commitment to a human-centered revolution.

Equity: The Untapped Power of AI in Healthcare

AI has the potential to revolutionize healthcare by providing millions of people with individualized care and resolving logistical issues in underdeveloped and resource-constrained areas. Envision a rural hospital in a far-flung village with little access to specialists. These facilities can now use AI-powered diagnostic technologies that previously existed only in urban areas. Imagine AI identifying early stroke symptoms and giving an elderly patient in a small town the same level of treatment as someone at a prestigious hospital.

But if we want AI to make healthcare more inclusive, we must reimagine healthcare delivery and infrastructure and address the digital divide. We've witnessed how the rapid adoption of technology can widen gaps. This digital divide is a legacy of underinvestment in broadband access for Black and other underserved communities, which has limited engagement with these tools. There's a paradigm shift underway, culminating in an Internet of Medical Things: a network of connected medical devices, software applications, health systems, and services aimed not only at streamlining clinical operations and workflow management but also at significantly improving patient outcomes, whether the patient is well or unwell. How can we ensure everyone has access to this new way of delivering healthcare? It's a one-two punch to an already marginalized population in dire need of improved healthcare, health equity, and access. Without the minimum required infrastructure, those most in need won't have access to virtual clinical and diagnostic services focused on prediction, prevention, and early diagnosis. The digital divide limits the ability of many to benefit from virtual healthcare, leaving them metaphorically stuck in the waiting room.

As we enter this era of AI-driven healthcare, we face an urgent mission to prevent another divide. To empower everyone, we must provide infrastructure that supports AI enablement for all, encompassing education on AI tools, internet access, and computational power. This is a critical gap that must be closed to equip the next generation of diverse leaders with the resources, education, and technical access needed to master evolving tech.

Equitable AI requires investments in technology infrastructure, increased transparency in data sourcing, and regulatory oversight to protect against and prevent misuse. Leaders must shift focus from short-term gains to long-term impacts, a difficult but necessary pivot. Healthcare's return on investment (ROI) in AI must reach beyond financials to deeds and actions aligned with a sworn modern-day Hippocratic Oath: "I will respect the hard-won scientific gains and share such knowledge with those who follow." That means healthcare leadership's view of ROI must encompass societal benefits, improved outcomes, and closed care gaps for those historically left in the shadows of data.

Bridging the Divide: AI as a Force for Good

To make this vision a reality, we need more than technology—we need courage. We must confront the systems and structures that perpetuate inequality.

  • Millions of people in underserved areas lack the broadband needed for AI-powered care.
  • Data that drives AI innovation frequently ignores marginalized groups.
  • If nothing is done, AI risks widening the very rifts it was intended to bridge.

The good news is that we have the power to change this. By working together, we can make sure AI serves as a tool for improved healthcare rather than a barrier to it.

The Future Is in Our Hands

AI has the power to transform healthcare—but transformation without intention is just chaos. We must ask ourselves hard questions:

  • Are we creating systems that include everyone, or are we leaving people behind?
  • Are we prioritizing equity, or are we perpetuating privilege?
  • Are we using technology to divide, or to unite?

For me, the answer is clear.

We have a responsibility—a moral obligation—to ensure that AI serves all people, regardless of their race, gender, socioeconomic status, zip code, or geography.

From Hype to Humanity: A New Kind of ROI

The lives we transform are the true test of innovation, not the complexity of our algorithms. AI's return on investment cannot be measured in monetary gains alone. Closing care disparities, uplifting communities, and creating a future that benefits everyone must be society's goals.

As leaders, legislators, and citizens, we must be dedicated to this vision.

The Call for Responsible AI

What I know is that, if we build it to be so, AI has the potential to be the great equalizer in healthcare. Consider a clinic in a small rural town where doctors can use AI to identify strokes before they occur. Imagine a single parent who can get their child life-saving care without having to forgo a day's pay.

We can build a future like this. But it requires us to act quickly and purposefully.

The Blueprint for Equity in AI

  • Representation Matters: AI must be trained on data that reflects the full spectrum of humanity—not just the privileged few.
  • Bridging the Digital Divide: Every community must have access to the infrastructure that supports AI-driven healthcare.
  • Accountability: We need ethical oversight to ensure that AI serves everyone, not just those who already have access to care.

The future of healthcare will be determined by the core values we choose to uphold, not by the code. We have an opportunity to create a system that is revolutionary, inclusive, and compassionate. But only together can we get there.

So, here’s my challenge to you: Talk about this. Share this. Demand better. Because the future of healthcare isn’t just about what AI can do—it’s about who it can help.

Let’s build a system that lifts us all. Let’s use AI not just to advance technology, but to advance humanity.

The choice is ours. And the time is now.

©2025 Yvette Schmitter, All Rights Reserved