Discussion 5

School: Southern New Hampshire University
Course: Computer Science 260
Date: Nov 24, 2024
Uploaded by: rgdj77
Discussion 3-2

What caught my attention most in this video was Cathy O'Neil's discussion of how humans can write and program algorithms that carry personal prejudices such as racism, sexism, and bigotry. Because I have trouble understanding math, data, and algorithms, I genuinely believed that algorithms were based on accurate, objective scientific data. The theory makes perfect sense to me, but it is something I had never given much thought to; I assumed that when it came to data, it was to be trusted. Like many others, I have been guilty of believing algorithms without considering the personal biases, for lack of a better term, that may be built into them.

To reduce the chance of bias and obtain the most accurate results, it is essential to make sure that the data used to develop an algorithm represents the complete population. If you include only fragments of information or a small portion of the population, you can miss important data. This suggests that we can lessen the chance that biases will affect an algorithm by increasing the number and variety of the samples and data that are used. Because algorithms are created by humans, and because all human beings carry biases, both overt and implicit, those biases can affect the data used to test hypotheses. When biased data is used to test a hypothesis, parts of the population may be left out, leading to inaccurate and untrustworthy conclusions. Whether or not we consciously know it, "we are injecting those biases into the algorithms, by choosing what data to collect… and by trusting data that is actually picking up on past practices and choosing the definition of success" (O'Neil, 2017). Therefore, as O'Neil (2017) suggests, algorithms should be checked, edited, and audited.
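The point about representative data can be made concrete with a small sketch. All of the numbers and group names below are hypothetical, invented only to illustrate sampling bias; they do not come from O'Neil's talk:

```python
# Hypothetical population: two groups with different approval rates.
# Each record is (group, outcome), where outcome 1 means "approved".
population = (
    [("group_a", 1)] * 60 + [("group_a", 0)] * 40
    + [("group_b", 1)] * 20 + [("group_b", 0)] * 80
)

def approval_rate(records):
    """Fraction of records whose outcome is 1."""
    return sum(outcome for _, outcome in records) / len(records)

# Estimate from the complete population: (60 + 20) / 200 = 0.40
full_rate = approval_rate(population)

# Estimate from a fragment that omits group_b entirely: 60 / 100 = 0.60
biased_sample = [r for r in population if r[0] == "group_a"]
biased_rate = approval_rate(biased_sample)

print(full_rate, biased_rate)  # the fragment overstates approval by 50%
```

The calculation itself is correct in both cases; the conclusion is wrong in the second only because part of the population was left out of the data, which is exactly the kind of silent bias the post describes.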
In my opinion, the possibility of human influence on data selection and interpretation affects both ethics and social justice. It has ethical implications because some companies develop algorithms exclusively for profit. Adhering to ethical principles is essential to reducing bias and gathering accurate, unbiased data. Social justice is compromised when groups are targeted without consideration of the effects those actions may have on people's lives. O'Neil (2017) notes that there is ample evidence of bias in policing and judicial-system data, as well as extreme segregation in many cities and towns. Social justice is also violated when data is altered to further the creator's objective, which goes against research ethics, and when results are disclosed without full context.

Reference:
O’Neil, C. (2017). The era of blind faith in big data must end [Video]. TED. https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end