Eliminating racial bias in health care AI: Expert panel offers guidelines
Ivadny Ochoa Rembis
Arizona State University
MED 320
Dr. Rollin Medcalf
April 14, 2024
Introduction
The article "Eliminating racial bias in health care AI: Expert panel offers guidelines"
published in the
JAMA Network Open
addresses the critical issue of bias in healthcare algorithms
and offers a structured framework to mitigate these biases. It emphasizes the significant impact
that bias in algorithm development and application can have on racial and ethnic minoritized
groups, leading to disparities in healthcare outcomes. According to the article, “Health care
algorithms, defined as mathematical models used to inform decision-making, are ubiquitous and
may be used to improve health outcomes. However, algorithmic bias has harmed minoritized
communities in housing, banking, and education, and health care is no different” (Marshall
2023). The panel of experts convened by the Agency for Healthcare Research and Quality and
the National Institute for Minority Health and Health Disparities proposes a comprehensive
approach to promote equity in healthcare through the algorithm lifecycle, from development to
deployment and monitoring. The panel developed a conceptual framework that applies these
principles across the algorithm's life cycle, focusing on health and healthcare equity for patients
and communities within the broader context of structural racism and
discrimination
. The article
highlights the importance of multiple stakeholders' collaboration in mitigating and preventing
algorithmic
bias,
including
problem
formulation,
data
selection,
algorithm
development,
deployment, and monitoring. Addressing algorithmic bias is urgent, as highlighted by a Biden
Administration Executive Order aimed at preventing and remedying
discrimination
, including
protection from algorithmic
discrimination
. The article provides examples of biased algorithms
in healthcare that have resulted in disparities in treatment and access to services for racial and
ethnic minoritized groups. To finalize, the article presents a call to action for stakeholders to
implement the guiding principles and create a framework that supports health and healthcare
Ochoa Rembis 3
equity, transparency, community engagement, identification of fairness issues, and accountability
in all phases of the health care algorithm life cycle.
Discussion
The ethical issue at the core of the article "Eliminating racial bias in health care AI: Expert panel offers guidelines" revolves around the presence and impact of racial and ethnic bias in healthcare algorithms. These biases, when embedded in algorithms used for diagnosis, treatment, prognosis, risk stratification, and allocation of healthcare resources, can lead to disparate and inequitable health outcomes for racial and ethnic minority groups. The Yale School of Medicine states the following: "Artificial intelligence (AI) is revolutionizing the way clinicians make decisions about patient care. But health care algorithms that power AI may include bias against underrepresented communities and thus amplify existing racial inequality in medicine, according to a growing body of evidence" (Backman 2023). The use of biased algorithms in healthcare settings can exacerbate existing health disparities and inequalities, leading to worse outcomes for historically marginalized populations. This goes against the principle of justice in healthcare, which demands that all individuals have equal access to care and the benefits of medical advancements, regardless of their racial or ethnic background. The ethical issue also encompasses the lack of transparency and accountability in the development and deployment of healthcare algorithms. Without clear standards for transparency, it is challenging for stakeholders, including patients and healthcare providers, to understand how these algorithms make decisions and to trust their fairness and accuracy. This lack of transparency can undermine the ethical principle of autonomy, under which patients have the right to be informed and to make decisions about their healthcare based on clear, accurate, and unbiased information. The article addresses these ethical concerns by proposing a framework and guiding principles aimed
at mitigating and preventing bias in healthcare algorithms. These principles include promoting
health and healthcare equity, ensuring transparency and explainability, engaging patients and
communities authentically, identifying and addressing fairness issues explicitly, and establishing
accountability for outcomes. The goal is to guide the healthcare industry toward the ethical use
of AI and algorithms that advance health equity rather than perpetuate disparities. Participation
in unethical acts, such as the development or deployment of biased healthcare algorithms, often
stems from a complex interplay of factors that can include both intentional and unintentional
motives. Understanding the humanity of the situation requires examining the various incentives,
pressures, and constraints that individuals and organizations might face. The pressure to quickly
bring new technologies to market can lead to shortcuts in the development process, such as
insufficient testing for bias across diverse populations. According to Faster Capital, “One
perspective argues that profit and ethics are inherently incompatible, as the pursuit of profit often
leads to unethical practices such as exploitation of labor, environmental degradation, or unethical
marketing tactics” (2024). For some organizations, the potential financial gains from deploying
an algorithm widely might outweigh concerns about its fairness or accuracy for all groups. The
complexity of machine learning models and the challenge of working with large, heterogeneous
datasets can make it difficult to detect and correct for bias. In some cases, the technical challenge
of ensuring fairness may be seen as too costly or time-consuming relative to other priorities. The
desire for recognition, whether in the form of academic accolades, industry leadership, or market
share, can sometimes lead to overlooking ethical considerations.
In large organizations, the
decision-making process regarding the development and deployment of algorithms is often
diffuse, involving many stakeholders with different priorities and expertise.
Stakeholders
The development of laws and regulations to address biases and inequalities, especially in healthcare and technology, has often been prompted by historical events and societal shifts that highlighted systemic injustices. While the specific domain of healthcare algorithms is relatively new, and directly corresponding laws are thus in their nascent stages, the broader context of healthcare, civil rights, and data protection has seen significant legislative responses to historical injustices and ethical lapses. One prominent example is the Tuskegee Syphilis Study. This infamous study, which ran from 1932 to 1972, involved withholding treatment from African American men who had syphilis, without their informed consent, in order to study the disease's progression. According to the Centers for Disease Control and Prevention, "The study initially involved 600 Black men – 399 with syphilis, 201 who did not have the disease. Participants' informed consent was not collected. Researchers told the men they were being treated for 'bad blood,' a local term used to describe several ailments, including syphilis, anemia, and fatigue" (Wenger 2022). The public outcry following the revelation of this study's ethical breaches led to the National Research Act of 1974 in the U.S., which established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission developed the Belmont Report, outlining basic ethical principles for research involving human subjects.
Another significant example is the Health Insurance Portability and Accountability Act (HIPAA) of 1996. While not a direct response to a single event, HIPAA was influenced by growing concerns about the privacy and security of health information in the digital age. It established national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge. According to the Centers for Disease Control and Prevention, "While the HIPAA Privacy Rule safeguards PHI, the Security Rule protects a subset of information covered by the Privacy Rule. This subset is all individually identifiable health information a covered entity creates, receives, maintains, or transmits in electronic form. This information is called electronic protected health information, or e-PHI. The Security Rule does not apply to PHI transmitted orally or in writing" (Centers for Disease Control and Prevention 2022). These historical contexts underscore the pattern of
legislation following public recognition of systemic injustices or ethical failings. As the field of
healthcare technology, particularly the use of AI and algorithms, continues to evolve, it is likely
that new laws and regulations will be developed in response to emerging ethical challenges. The
ongoing dialogue around algorithmic bias in healthcare suggests that this area will be a
significant focus of ethical and legislative attention in the years to come, continuing the pattern
of legal evolution in response to societal needs and ethical imperatives. The ethical choices
surrounding the development and deployment of healthcare algorithms have significant and
varied impacts on different groups within society. Racial and ethnic minoritized groups are disproportionately affected by biases embedded in healthcare algorithms, which can lead to misdiagnosis, delayed treatments, or inappropriate care recommendations. For example, an algorithm that underestimates the healthcare needs of Black patients compared to White patients for the same conditions can lead to Black patients receiving less medical attention or being placed lower on waiting lists for procedures.
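As a purely illustrative sketch, not drawn from the article itself, the short Python example below shows one way such a disparity could be surfaced: it compares the average risk score an algorithm assigns to each racial group among patients with a similar documented illness burden. The column names and input file are hypothetical assumptions.

import pandas as pd

def audit_risk_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Mean algorithm risk score by race within each illness-burden stratum."""
    # Stratify by the number of chronic conditions so that groups are compared
    # at similar levels of actual health need (hypothetical column names).
    by_stratum = df.groupby(["num_chronic_conditions", "race"])["risk_score"].mean()
    return by_stratum.unstack("race")

if __name__ == "__main__":
    patients = pd.read_csv("patients.csv")  # hypothetical input data
    print(audit_risk_scores(patients))
    # Consistently lower scores for one group at the same illness-burden level
    # would suggest the algorithm is deprioritizing that group's care needs.

Consistently lower scores for one group at the same level of documented need would reflect the kind of pattern the article describes, in which minoritized patients end up lower on care-prioritization lists despite equal need.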
Conclusion
In conclusion, the ethical dilemmas and violations associated with the use of biased healthcare algorithms highlight significant challenges at the intersection of technology, healthcare, and ethics. These issues matter deeply because they directly impact the quality of care, equity, and trust within the healthcare system. At the heart of these dilemmas is the potential for algorithms to perpetuate and even exacerbate existing health disparities among racial and ethnic minoritized groups, thereby undermining efforts to achieve equitable healthcare
outcomes. The significance of addressing these ethical concerns cannot be overstated. Biased algorithms can lead to misdiagnosis, inappropriate treatments, and delayed care, disproportionately affecting vulnerable populations and further entrenching systemic inequities.
Beyond the immediate health impacts, these biases erode trust in healthcare institutions and
technologies, potentially deterring individuals from seeking care or participating in digital health
initiatives. Avoiding these violations and ethical dilemmas requires a multi-pronged approach.
First, there must be an increased awareness and acknowledgment of the potential for bias within
healthcare algorithms, coupled with a commitment to equity as a foundational principle in
algorithm development and deployment. Developers, alongside ethicists and diverse stakeholder
groups, should engage in rigorous testing and validation processes to identify and mitigate
biases. This process includes ensuring that datasets are representative and that algorithms are
transparent and interpretable to both healthcare providers and patients.
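As one minimal, hypothetical sketch of such a representativeness check (the group labels, reference shares, and record format below are assumptions for illustration, not requirements taken from the article), the share of each demographic group in a training dataset can be compared against a reference population:

from collections import Counter

# Hypothetical reference population shares; a real audit would use census or
# patient-population benchmarks appropriate to the deployment setting.
REFERENCE_SHARES = {"group_a": 0.60, "group_b": 0.13, "group_c": 0.19, "group_d": 0.08}

def representation_gaps(records):
    """Return the dataset share minus the reference share for each group."""
    counts = Counter(record["group"] for record in records)
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    return {group: counts.get(group, 0) / total - share
            for group, share in REFERENCE_SHARES.items()}

A strongly negative gap for any group would flag that the training data underrepresents that group relative to the population the algorithm is intended to serve, which could prompt additional data collection or reweighting before deployment.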
References
Chin, M. H., Afsar-Manesh, N., Bierman, A. S., Chang, C., Colón-Rodríguez, C. J., Dullabh, P., Duran, D. G., Fair, M., Hernandez-Boussard, T., Hightower, M., Jain, A., Jordan, W. B., Konya, S., Moore, R. H., Moore, T. T., Rodriguez, R., Shaheen, G., Snyder, L. P., Srinivasan, M., & Umscheid, C. A. (2023). Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Network Open, 6(12), e2345050. https://doi.org/10.1001/jamanetworkopen.2023.45050

CDC. (2021, May 3). Tuskegee Study - Timeline - CDC - NCHHSTP. www.cdc.gov. https://www.cdc.gov/tuskegee/timeline.htm#:~:text=The%20study%20initially%20involved%20600

Ethical considerations: Balancing profit motive with moral responsibility. (n.d.). FasterCapital. https://fastercapital.com/content/Ethical-Considerations--Balancing-Profit-Motive-with-Moral-Responsibility.html

Centers for Disease Control and Prevention. (2022, June 27). Health insurance portability and accountability act of 1996 (HIPAA). Centers for Disease Control and Prevention. https://www.cdc.gov/phlp/publications/topic/hipaa.html

Backman, I. (2023, December 21). Eliminating racial bias in health care AI: Expert panel offers guidelines. Medicine.yale.edu. https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/