MODR1730 & 1760 Article Analysis Assignment Part 1 Sample Assignment

STEP 1

Artificial intelligence can discriminate on the basis of race and gender, and also age
https://theconversation.com/artificial-intelligence-can-discriminate-on-the-basis-of-race-and-gender-and-also-age-173617

Legend:
Blue Highlighting = Main Concepts
Green Highlighting = Signal Words
Yellow Highlighting = Main point of each paragraph
Underlining = Unknown Words: words that I do not know the meaning of.
[Underlining + Square Brackets] = Notes to self.

[1] We have accepted the use of artificial intelligence (AI) in complex processes — from health care to our daily use of social media — often without critical investigation, until it is too late. The use of AI is inescapable <incapable of being avoided, ignored, or denied; inevitable> in our modern society, and it <AI> may perpetuate discrimination without its <AI’s> users being aware of any prejudice. When health-care providers rely on biased technology, there are real and harmful impacts. This <real and harmful impacts> became clear recently when a study showed that pulse oximeters — which measure the amount of oxygen in the blood and have been an essential tool for clinical management of COVID-19 — are less accurate on people with darker skin than lighter skin. [I completely agree with the point the author is making. It is extremely hurtful to become aware of the effects that AI can have on certain racial groups and minorities. I cannot even begin to imagine how individuals with darker skin must feel once the results of this study become known.] The findings resulted in a sweeping racial bias review now underway, in an attempt to create international standards for testing medical devices.
__________________________________________________________

[2] There are examples in health care, business, government and everyday life where biased algorithms have led to problems, like sexist searches and racist predictions of an offender’s likelihood of re-offending. AI is often assumed to be more objective than humans. In reality, however, AI algorithms make decisions based on human-annotated data, which can be biased and exclusionary. Current research on bias in AI focuses mainly on gender and race. But what about age-related bias — can AI be ageist?
__________________________________________________________
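[Note to self: to see concretely how the kind of subgroup bias described in paragraphs 1 and 2 could be identified and measured, here is a minimal sketch. The numbers, group labels, and records below are entirely hypothetical and are not taken from the study the article cites.]

```python
# Minimal sketch: checking whether a tool's measurement error differs by group.
# All records are invented for illustration; the pulse-oximeter study the
# article cites used real clinical data that is not reproduced here.
from collections import defaultdict

# Hypothetical records: (group, true oxygen saturation, device reading)
records = [
    ("lighter_skin", 92, 91), ("lighter_skin", 95, 95), ("lighter_skin", 88, 89),
    ("darker_skin", 92, 96), ("darker_skin", 90, 94), ("darker_skin", 87, 91),
]

# Accumulate the absolute error of each reading, per group.
errors = defaultdict(list)
for group, true_val, reading in records:
    errors[group].append(abs(true_val - reading))

# A consistent gap in mean error between groups is the signature of the bias
# the article describes: the tool is systematically less accurate for one group.
for group, errs in errors.items():
    print(f"{group}: mean absolute error = {sum(errs) / len(errs):.1f}")
```

The same per-group comparison works for any model output, which is why auditing predictions by demographic group is a common first step in detecting algorithmic bias.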
Ageist technologies?

[3] In 2021, the World Health Organization released a global report on aging, which called for urgent action to combat ageism because of its <ageism’s> widespread impacts on health and well-being. Ageism is defined as “a process of systematic stereotyping of and discrimination against people because they are old.” It <ageism> can be explicit or implicit, and can take the form of negative attitudes, discriminatory activities, or institutional practices.
__________________________________________________________

[4] The pervasiveness <the quality of spreading widely or being present throughout an area or a group of people> of ageism has been brought to the forefront throughout the COVID-19 pandemic. Older adults have been labelled as “burdens to societies,” and in some jurisdictions, age has been used as the sole criterion for lifesaving treatments.
__________________________________________________________

[5] Digital ageism exists when age-based bias and discrimination are created or supported by technology. A recent report indicates that a “digital world” of more than 2.5 quintillion bytes of data is produced each day. Yet even though older adults are using technology in greater numbers — and benefiting from that use — they <older adults> continue to be the age cohort least likely to have access to a computer and the internet. Digital ageism can arise when ageist attitudes influence technology design, or when ageism makes it more difficult for older adults to access and enjoy the full benefits of digital technologies.
__________________________________________________________

Cycles of injustice

[6] There are several intertwined cycles of injustice where technological, individual and social biases interact to produce, reinforce and contribute to digital ageism.
__________________________________________________________

[7] Barriers to technological access can exclude older adults from the research, design and development process of digital technologies. Their <older adults’> absence in technology design and development may also be rationalized with the ageist belief that older adults are incapable of using technology. As such, older adults and their <older adults’> perspectives are rarely involved in the development of AI and related policies, funding, and support services. The unique experiences and needs of older adults are overlooked, despite age being a more powerful predictor of technology use than other demographic characteristics including race and gender.
__________________________________________________________
[8] AI is trained by data, and the absence of older adults could reproduce or even amplify the above ageist assumptions in its output. Many AI technologies are focused on a stereotypical image of an older adult in poor health — a narrow segment of the population that ignores healthy aging. This <stereotypical images of older adults in poor health> creates a negative feedback loop that not only discourages older adults from using AI, but also results in further data loss from these demographics that would improve AI accuracy. Even when older adults are included in large datasets, they <older adults> are often grouped according to arbitrary divisions by developers. For example, older adults may be defined as everyone aged 50 and older, despite younger age cohorts being divided into narrower age ranges. [See the binning sketch after paragraph 11 below.] As a result, older adults and their needs can become invisible to AI systems. In this way, AI systems reinforce inequality and magnify societal exclusion for sections of the population, creating a “digital underclass” primarily made up of older, poor, racialized and marginalized groups.
__________________________________________________________

Addressing digital ageism

[9] We must understand the risks and harms associated with age-related biases as more older adults turn to technology. The first step is for researchers and developers to acknowledge the existence of digital ageism alongside other forms of algorithmic biases, such as racism and sexism. They <researchers and developers> need to direct efforts towards identifying and measuring it <digital ageism>. The next step is to develop safeguards for AI systems to mitigate <make less severe, serious, or painful> ageist outcomes.
__________________________________________________________

[10] There is currently very little training, auditing or oversight of AI-driven activities from a regulatory or legal perspective. For instance, Canada’s current AI regulatory regime is sorely lacking. [I question the truth of this claim about Canada’s regulatory regime being sorely lacking. Based on my general knowledge, there are currently many regulations and pieces of legislation governing technology, as well as regulatory institutions in place to oversee AI-driven activities.] This <very little training, auditing, or oversight of AI-driven activities> presents a challenge, but also an opportunity to include ageism alongside other forms of biases and discrimination in need of excision. To combat digital ageism, older adults must be included in a meaningful and collaborative way in designing new technologies. [How can older adults be included in a meaningful and collaborative way? What method will be used to include older adults in the design of new technologies? Some older adults may not want to be included because they are simply unwilling to use technology, whereas others will want to be included. So how will we determine which ones should be included?]
__________________________________________________________

[11] With bias in AI now recognized as a critical problem in need of urgent action, it is time to consider the experience of digital ageism for older adults, and understand how growing old in an increasingly digital world may reinforce social inequalities, exclusion and marginalization.
__________________________________________________________
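[Note to self: the binning sketch referenced in paragraph 8. A minimal, hypothetical illustration of the arbitrary age divisions the authors describe; the bin edges and sample ages are my own invention, not from the article.]

```python
# Minimal sketch of the age-binning problem from paragraph 8: younger cohorts
# get narrow ten-year bins, while everyone aged 50 and older collapses into a
# single catch-all bucket. Bin edges and sample ages are invented.
def coarse_bin(age: int) -> str:
    if age < 30:
        return "18-29"
    if age < 40:
        return "30-39"
    if age < 50:
        return "40-49"
    return "50+"  # a 52-year-old and a 95-year-old become indistinguishable

for age in [25, 34, 47, 52, 68, 81, 95]:
    print(age, "->", coarse_bin(age))
```

Any model trained on the binned feature cannot tell a 52-year-old from a 95-year-old, which is one concrete way that older adults and their needs "become invisible to AI systems."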
Main Conclusion: It is time to consider the experience of digital ageism for older adults and understand how growing old in an increasingly digital world may reinforce social inequalities, exclusion, and marginalization. [Paragraph 11]

Summary:
1. The use of AI is inescapable in our modern society, and it may perpetuate discrimination without its users being aware of any prejudice.
2. In reality, however, AI algorithms make decisions based on human-annotated data, which can be biased and exclusionary.
3. In 2021, the World Health Organization released a global report on aging, which called for urgent action to combat ageism because of its widespread impacts on health and well-being.
4. The pervasiveness of ageism has been brought to the forefront throughout the COVID-19 pandemic.
5. Digital ageism can arise when ageist attitudes influence technology design, or when ageism makes it more difficult for older adults to access and enjoy the full benefits of digital technologies.
6. There are several intertwined cycles of injustice where technological, individual and social biases interact to produce, reinforce and contribute to digital ageism.
7. Barriers to technological access can exclude older adults from the research, design and development process of digital technologies.
8. In this way, AI systems reinforce inequality and magnify societal exclusion for sections of the population, creating a “digital underclass” primarily made up of older, poor, racialized and marginalized groups.
9. We must understand the risks and harms associated with age-related biases as more older adults turn to technology.
10. There is currently very little training, auditing or oversight of AI-driven activities from a regulatory or legal perspective.
11. With bias in AI now recognized as a critical problem in need of urgent action, it is time to consider the experience of digital ageism for older adults, and understand how growing old in an increasingly digital world may reinforce social inequalities, exclusion and marginalization.

STEP 2

Artificial Intelligence and Discrimination
STEP 3 ANSWER FOR MODR1730B: Main Social Issue

Does artificial intelligence reinforce social inequalities, exclusion, and marginalization for older adults, leading to digital ageism?

STEP 3 ANSWER FOR MODR1760B: Main Moral Issue

Is the digital ageism arising from artificial intelligence ethically permissible, given that older adults are experiencing social inequalities, exclusion, and marginalization?

STEP 4 ANSWER FOR MODR1730B: Is the main social issue empirical, conceptual, ethical, or mixed?

STEP 4 ANSWER FOR MODR1760B: Explain why you think the main moral issue is a “moral” issue with reference to what it means to be a moral issue or to the characteristics of a moral issue.

DO NOT COMPLETE STEPS 3 AND 4.

STEP 5

It is time to consider the experience of digital ageism for older adults and understand how growing old in an increasingly digital world may reinforce social inequalities, exclusion, and marginalization. [Paragraph 11]

STEP 6

I could not find any examples of emotive language, and I found only one weak example of prejudicial language in this article. The weak example of prejudicial language is neutralized below. The reason I believe that this article does not contain prejudicial or emotive language is that the article was written by credible academics who choose their words carefully. Three of the four authors are Assistant Professors at credible universities. The fourth author is a Senior Research Fellow at a credible university in the UK. Since this article is written with academic rigour, the one example I found is weak; however, it can be interpreted as very mildly prejudicial.

“In this way, AI systems reinforce inequality and magnify societal exclusion for sections of the population, creating a “digital underclass” primarily made up of older, poor, racialized and marginalized groups.” [Paragraph 8]
These terms are not neutral but are only very mildly prejudicial.

Reinforce – an architectural and construction term that suggests strength.

Magnify – a term that invokes a metaphor of the societal exclusion visibly increasing, as if it were as simple as looking through a microscope.

Rewrite in neutral terms: In this way, AI systems promote inequality and increase societal exclusion for sections of the population, creating a “digital underclass” primarily made up of older, poor, racialized and marginalized groups. [Paragraph 8]

STEP 7

Example of a Generalization:

“This became clear recently when a study showed that pulse oximeters — which measure the amount of oxygen in the blood and have been an essential tool for clinical management of COVID-19 — are less accurate on people with darker skin than lighter skin. The findings resulted in a sweeping racial bias review now underway, in an attempt to create international standards for testing medical devices.” [Paragraph 1]

Hint word indicating a generalization: “study.”

We know this is a generalization because the study moves from measurements on a sample of patients to a conclusion about people with darker skin in general: pulse oximeters are less accurate on people with darker skin than lighter skin. Also, “findings” are referred to that resulted in a sweeping racial bias review now underway. While the population and sample are not stated, this example clearly contains a generalization, one that we know very little about. Furthermore, the study was easy to find with a quick Google search: https://www.jwatch.org/na55581/2022/12/28/pulse-oximetry-less-accurate-patients-with-darker-skin

Example 1 of a Causal Argument:
“Digital ageism can arise when ageist attitudes influence technology design, or when ageism makes it more difficult for older adults to access and enjoy the full benefits of digital technologies.” [Paragraph 5]

Hint words indicating a causal argument: “can arise when.”

Cause 1: Ageist attitudes influence technology design.
Cause 2: Ageism makes it more difficult for older adults to access and enjoy the full benefits of digital technologies.
Effect: Digital ageism.

We know this is a causal argument because Causes 1 and 2 contribute to the production of another event, process, or state: digital ageism. So, digital ageism is partly dependent upon ageist attitudes influencing technology design, and upon access to and enjoyment of digital technologies being more difficult for older adults.

Example 2 of a Causal Argument:

“AI is trained by data, and the absence of older adults could reproduce or even amplify the above ageist assumptions in its output. Many AI technologies are focused on a stereotypical image of an older adult in poor health — a narrow segment of the population that ignores healthy aging. This <stereotypical images of older adults in poor health> creates a negative feedback loop that not only discourages older adults from using AI, but also results in further data loss from these demographics that would improve AI accuracy.” [Paragraph 8]

Hint words indicating a causal argument: “this creates” and “also results in.”

Cause: Many AI technologies are focused on a stereotypical image of an older adult in poor health.
Effect 1: A negative feedback loop that discourages older adults from using AI.
Effect 2: Data loss from these demographics that would improve AI accuracy.

We know this is a causal argument because many AI technologies being focused on a stereotypical image of an older adult in poor health is said to contribute to the production of two further events, processes, or states: a negative feedback loop that discourages older adults from using AI, and data loss from these demographics that would improve AI accuracy. So, the negative feedback loop and the data loss are partly dependent upon many AI technologies being focused on a stereotypical image of an older adult in poor health.

STEP 8 (a)
Artificial Intelligence (AI)
Discrimination

STEP 8 (b)

To what extent can artificial intelligence perpetuate discrimination?

STEP 8 (c)

Artificial Intelligence
- We have accepted artificial intelligence’s use in various complex processes:
  o From health care to our daily use of social media.
- Inescapable.
- Healthcare providers rely on biased technology. Use results in real and harmful impacts.
- AI is often assumed to be more objective than humans.
  o “However, in reality, AI algorithms make decisions based on human-annotated data, which can be biased and exclusionary”.
- Older adults are rarely involved in “the development of AI and related policies, funding, and support services”.
- AI is trained by data.
- Very little training, auditing, or oversight of AI-driven activities from a regulatory or legal perspective.
  o Even in Canada, the regulatory regime for AI is lacking and must be improved.

Discrimination
- Sexist and racist predictions. Biased algorithms.
- Systemic stereotyping based on race, gender, or age. Prejudice.
- Biased technology.
  o May result in harmful impacts that are seen in the real world.
  o For example, a study showed that pulse oximeters are less accurate on people with darker skin compared to people with lighter skin. “The findings resulted in a sweeping racial bias review.” This technology may have had a significant negative impact on those with darker skin colour, as it exhibited a racial bias.
- “Racist predictions of an offender’s likelihood of re-offending.”

Digital Ageism
- AI can also be considered ageist. It has created a problem which the author refers to as age-related bias, or ageism.
- Ageism is “a process of systematic stereotyping of and discrimination against people because they are old”.
  o It can be explicit or implicit, and take the form of discriminatory activities.
  o There is a belief that older adults do not know how to use technology at all.
- “Many AI technologies are focused on a stereotypical image of an older adult in poor health”.
- AI systems exacerbate inequality and exclusion, especially for older adults.
  o 2021: WHO released a global report on aging.
  o Essentially, the author believes that AI leads to discrimination based on age, excluding older people from the realm of digital technology.
- “Widespread impacts on health and well-being”.
- Older adults: labelled “burdens to societies.”

Cycles of Injustice
- Intertwined cycles of injustice.
- Technological, individual, and social biases interact to produce and reinforce digital ageism.
- Barriers that exclude access. Unique experiences and needs are overlooked.
- Reproduce or amplify assumptions. Negative feedback loop.
- Older adults grouped according to arbitrary divisions.
- Creation of a “digital underclass” primarily made up of older, poor, racialized, and marginalized groups.
- Needs to be combatted. Needs to be acknowledged.