IBM’s Watson Article on Bias in Machine Learning Data

Student’s Name
Affiliation
Course Name
Instructor
Date
This discussion examines several factors that can introduce bias into data and the effect that bias has on machine learning. The first factor is the data acquisition procedure that goes into creating a machine learning algorithm. The second is the processing and preparation of the training data. The final factor is the choice of the parameters and algorithms to be used.

Bias in machine learning can manifest itself in a variety of ways, and this essay also discusses the consequences of these biases. First, biased systems are more prone to mistakes, such as false negatives and false positives. Bias also raises the risk that the model will overfit the training data set, which can lead to an upsurge in variance. Finally, some biases may result in discriminatory or unjust machine learning frameworks. Bias can be reduced by training algorithms on broader, more diverse datasets and by measuring their efficacy through cross-validation; a minimal sketch of that measurement step follows this discussion. Biased or not, the suggestions of machine learning programs have a serious effect on individuals and organizations (Osoba & Welser, 2017). Biased machine learning algorithms can propagate prejudice in a self-fulfilling manner, so it is critical to uncover and reduce bias in the model to the greatest extent feasible.

In an article (linked from outside IBM.com), the Association of National Advertisers (ANA) makes the case that the data advertisers frequently depend on is insufficient and skewed. Data is gathered in many different ways, and it is often incomplete, which can result in judgments that rely on unfinished research. In the context of gender prejudice in technology recruitment, for example, a machine learning algorithm may conclude that female applicants are undeserving and predict that they are less hirable if the historical hiring data excludes women (Bellamy et al., 2019). The framework is unaware that women were excluded from technology jobs because of their gender, not their competence. As artificial intelligence plays a growing role in daily operations and planning, machine learning algorithms must increasingly guarantee impartiality, transparency, and openness; the second sketch below shows how such a hiring disparity can be quantified.

It is important to remember that bias can also come from sources other than training data (O’Leary, 2022). It may be brought about by faulty model design, poor model choice, or unsuitable data processing, and the data a deployed model consumes can be affected as well. Diagnostic prediction algorithms trained on possibly biased data can give patients unfair results.
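To make the cross-validation suggestion concrete, the following Python sketch shows how a model’s efficacy can be measured across several held-out folds. It is a minimal illustration rather than the article’s method: it assumes the scikit-learn library, and the synthetic dataset produced by make_classification stands in for real training data.

    # Minimal sketch: estimating model efficacy with k-fold cross-validation.
    # Assumes scikit-learn; the dataset is synthetic and purely illustrative.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Stand-in for a real training set; in practice a broader, more diverse
    # dataset would be substituted here.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    model = LogisticRegression(max_iter=1000)

    # 5-fold cross-validation: each fold is held out once for evaluation,
    # giving a more honest estimate of performance than a single split.
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"Mean accuracy: {scores.mean():.3f} (std {scores.std():.3f})")

A large gap between training accuracy and these held-out scores would be a symptom of the overfitting and inflated variance described above.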
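The gender disparity in the hiring example can also be quantified. The sketch below uses the AI Fairness 360 toolkit of Bellamy et al. (2019) to compute two common fairness metrics; the tiny hand-built hiring table, its column names, and the group encodings are hypothetical stand-ins for real historical data, and the calls follow the toolkit’s documented BinaryLabelDataset interface.

    # Minimal sketch: measuring gender disparity in hiring outcomes with the
    # AI Fairness 360 toolkit (Bellamy et al., 2019). The dataset and column
    # names are hypothetical; real historical hiring data would replace them.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Hypothetical historical hires: gender (1 = male, 0 = female) and
    # hired (1 = hired, 0 = rejected). Women are under-hired by construction,
    # mirroring the skewed history discussed above.
    df = pd.DataFrame({
        "gender": [1, 1, 1, 1, 0, 0, 0, 0],
        "hired":  [1, 1, 1, 0, 1, 0, 0, 0],
    })

    data = BinaryLabelDataset(
        df=df,
        label_names=["hired"],
        protected_attribute_names=["gender"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        data,
        privileged_groups=[{"gender": 1}],
        unprivileged_groups=[{"gender": 0}],
    )

    # Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
    print("Disparate impact:", metric.disparate_impact())
    # Statistical parity difference: gap in rates (0.0 means parity).
    print("Statistical parity difference:", metric.statistical_parity_difference())

On this toy table the favorable-outcome rate is 3/4 for the privileged group and 1/4 for the unprivileged group, so the sketch reports a disparate impact of roughly 0.33, exactly the kind of skew a recruiting model trained on such history would learn and reproduce.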
References

Bellamy, R., Dey, K., Hind, M., Hoffman, S., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K., & Zhang, Y. (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 63(4/5), 4:1-4:15.

O’Leary, L. (2022). How IBM’s Watson went from the future of health care to sold off for parts. Slate Magazine. https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html [Accessed 3 September 2022].

Osoba, O. A., & Welser IV, W. (2017). An intelligence in our image: The risks of bias and errors in artificial intelligence. RAND Corporation.