Summary: Chapters 8–9
Chapter 8: Collateral Damage: Landing Credit
Next, O’Neil looks at the consumer finance industry. She first considers how loan applications used to proceed: an applicant spoke with a small-town banker who likely knew their family, employer, and coworkers. The banker’s subjective assessment sometimes played a large role in whether the loan got approved, and various kinds of discrimination arose as part of this process. Now, O’Neil says, credit scores provide relatively unbiased estimates of a person’s probable creditworthiness based on relevant financial information. However, she notes, not all scores used in finance meet these standards. So-called e-scores factor in all sorts of information, such as zip codes and internet browsing habits, to determine whom to lend to, market to, and approve for credit cards. These scores, unlike credit scores proper, are often unregulated. They rely on proxies, much as the small-town banker did in earlier days, but at a far larger scale. What’s more, O’Neil says, credit scores themselves are increasingly used as proxies in recruiting and employment, where a person’s financial health becomes a stand-in for their personal responsibility and trustworthiness.
This leads O’Neil back to a broader point about the use of mathematical models. Often, individuals with relative wealth and privilege deal with human beings when they seek a job or a loan. Those humans can investigate discrepancies in a background check or consider special circumstances that a credit score might not fully capture. People of lower socioeconomic status, meanwhile, deal mainly with machines and algorithms, which cannot be appealed to and do not go out of their way to fact-check apparent errors. This will become more and more of a problem, O’Neil says, as financial institutions and employers make use of an ever-greater variety of data in their hiring and lending decisions.
Chapter 9: No Safe Zone: Getting Insurance
Similar patterns, O’Neil says, apply in the insurance industry. Although race is no longer (by law) an explicit factor in home and life insurance policies, many other proxies for a person’s health, longevity, and risk exposure have cropped up. Auto insurers, for instance, use credit data in their pricing models, with the result that a safe driver with poor credit can pay more than a driver with good credit and a DUI on their record. In response to calls to “price me by how I drive,” some insurers have begun offering discounts to policyholders who agree to have their driving habits monitored. This, O’Neil points out, has problems of its own: people who live in “risky” neighborhoods or have long commutes, which often means poorer people, end up paying more. In the workplace, meanwhile, the drive to lower insurance costs has led to the creation of “health scores” and wellness programs that penalize those deemed unhealthy. O’Neil raises the concern that this practice will spread to hiring and firing decisions, since health-score data are not legally protected in the way that medical records are.
Analysis: Chapters 8–9
These two chapters continue several thematic threads from earlier in the book. O’Neil offers more examples of the disproportionate use of automation in dealing with poor and working-class individuals. She also shows how proxies for physical health, safe driving, and personal responsibility end up penalizing people who live in the “wrong” part of town or struggle financially.
Another major focus in Chapters 8–9 is the balance between automation and human intervention. O’Neil does not pretend that modern society can run without some automation of its decision-making. She suggests, however, that human oversight is necessary to prevent algorithms from riding roughshod over the poorest and most vulnerable. Despite their biases and prejudices (problems that automation can, at its best, help to counter), humans have certain strengths that so far cannot be reduced to algorithms. O’Neil cites “context, common sense, and fairness” as three such contributions. Human beings can use contextual cues to determine whether a job applicant is likely to have a criminal alias or whether a background check is confusing two very different people. They can apply common sense by, for example, not scheduling a barista for two shifts that are four hours apart. Finally, and crucially for O’Neil, humans can bring a sense of fairness to their scrutiny of models and their results. We alone, at least at present, can decide whether an outcome is not only efficient but just.