Summary: Chapters 6–7
Chapter 6: Ineligible to Serve: Getting a Job
O’Neil next examines the use of mathematical modeling in hiring. She notes that personality tests often “redlight” applicants with a history of mental illness, a practice that occupies a gray area of employment law. Charting the rise of such tests, O’Neil observes that intelligence tests and medical exams have already been outlawed as hiring screens. She also identifies a technical flaw in the tests: because companies have no incentive to follow up with the applicants they screen out, there is no way to provide the model with useful feedback. Neither the developers nor the users of a test ever learn whether it is actually helping them hire strong candidates. Moreover, when organizations build automated models on their own past practices, they often entrench discriminatory patterns that have nothing to do with applicants’ qualifications. O’Neil gives the example of a medical school whose automated screening, trained on data from past admissions, learned to do what the human admissions officers before it had done: give less weight to applications from women and immigrants.
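To make the feedback gap concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the scoring weights, the applicant pool, and the 0.5 cutoff are invented for illustration. The point is simply that a screening model’s mistakes about the people it rejects generate no data, so they can never be corrected.

```python
import random

random.seed(0)

def true_job_performance(applicant):
    """Ground truth the employer never fully observes."""
    return applicant["skill"] + random.gauss(0, 0.1)

def screening_score(applicant):
    # A flawed proxy: it weights an irrelevant personality score heavily.
    return 0.3 * applicant["skill"] + 0.7 * applicant["personality"]

applicants = [
    {"skill": random.random(), "personality": random.random()}
    for _ in range(1000)
]

hired = [a for a in applicants if screening_score(a) >= 0.5]
rejected = [a for a in applicants if screening_score(a) < 0.5]

# The employer can measure performance only for the people it hired...
observed = [true_job_performance(a) for a in hired]

# ...while strong candidates the model screened out produce no data at all,
# so the feedback loop never closes.
missed = [a for a in rejected if a["skill"] > 0.8]

print(f"hired: {len(hired)}, mean observed performance: "
      f"{sum(observed) / len(observed):.2f}")
print(f"strong candidates silently rejected: {len(missed)} (never measured)")
```

However the screening rule misfires, the employer’s data will always seem to confirm it, since only the people who passed are ever evaluated.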
Chapter 7: Sweating Bullets: On the Job
As O’Neil goes on to show, the use of these models does not stop once an employee is hired. Companies of all kinds try to keep labor costs from eating into profits, so large enterprises like Starbucks use scheduling models to avoid overstaffing their shops. These models, however, take no account of the staff’s preferences or well-being. As a result, they sometimes produce phenomena like “clopening,” in which the same person is assigned to close the store one night and return early the next morning to reopen it. O’Neil observes that this type of scheduling model is ultimately a descendant of the operations research developed for the military during World War II.
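A toy scheduler makes the dynamic visible. This is a hypothetical sketch, not Starbucks’s actual software: the shifts, wages, and greedy cost-minimizing rule are all invented. Once worker rest is left out of the objective, a “clopen” falls straight out of cost minimization; a single rest constraint removes it.

```python
# Hypothetical cost-minimizing scheduler (all data invented for illustration).
shifts = [
    # (day, start_hour, end_hour)
    ("Fri", 15, 23),  # Friday close
    ("Sat", 6, 14),   # Saturday open
    ("Sat", 14, 22),
]
wages = {"Ana": 15.0, "Ben": 16.5, "Cal": 18.0}  # hourly wage per worker
DAY = {"Fri": 0, "Sat": 1}  # day offsets, for converting to absolute hours

def assign(shifts, wages, min_rest_hours=0):
    """Greedily give each shift to the cheapest worker who has had at
    least `min_rest_hours` off since their last shift ended."""
    last_end = {w: None for w in wages}
    schedule = []
    for day, start, end in shifts:
        abs_start = DAY[day] * 24 + start
        rested = [
            w for w in wages
            if last_end[w] is None or abs_start - last_end[w] >= min_rest_hours
        ]
        worker = min(rested, key=wages.get)  # cheapest available worker
        schedule.append((day, start, end, worker))
        last_end[worker] = DAY[day] * 24 + end
    return schedule

# Pure cost minimization: Ana closes Friday night AND opens Saturday morning.
print(assign(shifts, wages))

# Add one human constraint (10 hours of rest) and the "clopen" disappears,
# at a slightly higher labor cost.
print(assign(shifts, wages, min_rest_hours=10))
```

Nothing in the objective penalizes the seven-hour turnaround, so the model never “sees” the problem; the cost of the clopen is borne entirely by the worker.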
The use of mathematical modeling in human resources is not limited to food service and retail. O’Neil describes attempts to automatically identify a company’s “idea generators” by analyzing employees’ communications. She notes that the concept is not malicious in itself, but it depends on AI’s (so far very limited) ability to understand the context of emails and messages. Moreover, like any mathematical model, idea-detection software can be used to justify decisions that seem impartial (because they are based on “science”) but are actually quite dubious. Firms can, and do, decide whom to fire during an economic downturn based on whether such software recognizes their value as “idea generators.” O’Neil remarks on the strong parallels between these practices and the teacher-scoring schemes discussed in the book’s introduction.
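To see why context is the crux, consider a deliberately naive scorer of this kind. This is a hypothetical sketch (the keyword list and the messages are invented): counting idea-flavored words lets empty buzzwords outscore a concrete contribution, and a layoff decision “justified” by such a score inherits that blindness.

```python
import re

# Hypothetical, deliberately naive "idea generator" detector.
# The keyword list and messages below are invented for illustration.
IDEA_WORDS = {"idea", "ideas", "propose", "suggest", "innovate", "brainstorm"}

def idea_score(message: str) -> int:
    """Count occurrences of idea-flavored keywords, blind to context."""
    words = re.findall(r"[a-z]+", message.lower())
    return sum(1 for w in words if w in IDEA_WORDS)

messages = {
    "Dana": "I propose we brainstorm ideas to innovate our synergy. Great idea!",
    "Eli": ("The checkout flow fails for carts over 50 items; "
            "here is a fix and a test that reproduces the bug."),
}

for author, text in messages.items():
    print(author, idea_score(text))
# Dana's buzzwords score 5; Eli's concrete fix scores 0. The model cannot
# tell the difference, yet its output reads as impartial "science."
```

Real systems are more sophisticated than a keyword counter, but O’Neil’s point stands: any model that cannot read context will rank performances of ideation above ideas themselves.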
Analysis: Chapters 6–7
Taken together, these two chapters once again show poorly tested, poorly trained mathematical models gaining control over an important area of people’s lives. As in her discussions of the education industry and the criminal justice system, O’Neil is careful to point out that people across socioeconomic classes suffer from this kind of automated treatment. Those seeking entry-level jobs in food service, retail, and customer support must increasingly pass through a battery of automated tests and screenings. Those in salaried, white-collar positions, meanwhile, must conform to a machine’s idea of what an “idea generator” or a “team player” looks like.
O’Neil qualifies this point by explaining that some people, usually the wealthier and more influential, receive more personal attention in hiring, homebuying, and school admissions. When a well-connected applicant for a prestigious job has an inconsistency on a background check, the hiring manager will likely make some effort to understand what happened. When the same thing happens to someone applying for a job as a cashier or fast-food worker, a machine is likely to reject the application without warning or recourse. This discrepancy is an important part of O’Neil’s broader argument about justice: because models and algorithms exercise the tightest control over the lives of poor and working-class individuals, any errors or biases in those algorithms disproportionately punish those groups.