Weapons of Math Destruction Discussion Questions
What is the significance of the title Weapons of Math Destruction?
O’Neil uses the term Weapons of Math Destruction to refer to a specific set of mathematical models used in commerce, politics, and government. The term is a play on weapons of mass destruction, a phrase closely associated with efforts to prevent the spread and use of nuclear, biological, and chemical weapons. Like those other WMDs, the “weapons” that O’Neil refers to are capable of causing “mass” harm because of the scale at which they are used. Chapter 3 explores this point in detail: when a particular kind of model is used to rank all schools, assess all prospective employees, or evaluate all potential borrowers, it acquires a kind of destructive power by dint of its sheer scale. The “destruction” factor, though less obvious than the devastation caused by a nuclear bomb or bioterrorism incident, lies in the fact that these models harm many of the people whose lives they impact.
The “weapon” element is perhaps the loosest part of the analogy that O’Neil draws to physical weapons of war. Some of the models, she acknowledges, are not used to deliberately penalize or exclude people, but many individuals become “collateral damage” when the model assesses them as uncreditworthy or likely to commit a crime. (Bombs, of course, also infamously cause “collateral damage” among civilians.) Some other models, however, are knowingly weaponized against the population that they analyze. Ultimately, the term Weapon of Math Destruction, or WMD, aptly captures the idea of a process that threatens to cause significant harm on a wide scale if not properly overseen.
In what ways does the use of mathematical models disproportionately affect poor people?
In the introduction to Weapons of Math Destruction, O’Neil points out that “the privileged . . . are processed more by people, the masses by machines.” Thus, any errors or biases written into a model will harm those who are least able to explain or defend themselves. Chapter 8 shows this at work when a hypothetical job applicant has an implausible criminal history come up in a background check. Because the applicant comes from a top-tier school and is interviewing for a prestigious job, the interviewer dismisses the background check result as nonsense. Those applying to be grocery-store clerks or administrative assistants are unlikely to receive the same personalized treatment. A related consequence is that the people who design and deploy these models in finance, recruitment, and law enforcement may not realize how much control they are handing over to machines.
A second major factor here is the use of proxies. As O’Neil explains throughout the book, the makers and users of Big Data models often face legal or practical barriers to obtaining the data that would be most relevant to them. For instance, in deciding on a prison sentence for a convicted criminal, a judge may consult a recidivism-risk model. However, the model cannot tell the judge directly how likely the individual is to commit another crime in the future. Instead, it relies on all sorts of proxies, such as the convicted person’s place of residence, their past issues with substance abuse, or the criminal records of their friends and family. Some of these proxies, like place of residence, correlate strongly with a person’s economic status: poor people, law-abiding or otherwise, are likelier to live in high-crime areas. A model that uses proxies of this sort runs the risk of conflating poverty with criminality—or simply with untrustworthiness—in ways that further harm a person’s life prospects.
How do mathematical models affect the financial lives of individuals?
O’Neil shows in Weapons of Math Destruction that mathematical models now govern almost every aspect of consumer finance. They influence lending, from payday loans to mortgages, by identifying good targets for high-interest loans and judging the creditworthiness of homebuyers. They shape insurance premiums in increasingly complicated ways, taking into account not just people’s zip codes and driving habits but also their credit history to decide what risks they carry behind the wheel. Indeed, such models even influence a person’s ability to find employment, sifting them through a series of personality tests that may or may not predict on-the-job performance.
However, O’Neil argues that the overall effect is more than the sum of its parts because these models feed on one another. For instance, she says, predatory advertising leads people to make dubious financial choices, such as pursuing a diploma with little market value or accepting a payday loan with a 300% interest rate. This in turn increases the risk that they will fall behind on their bills, harming their credit history, which is increasingly used as part of job applicant screening. Thus, once a person is in the crosshairs of one such model—say, the one responsible for marketing payday loans—it becomes harder and harder to escape the imputation that they are risky and untrustworthy in general. Everything, including car insurance, becomes more expensive.
According to O’Neil, is every mathematical model a Weapon of Math Destruction? Why or why not?
O’Neil stops short of claiming that all mathematical models, or even all Big Data models used in public life, are WMDs. To be a WMD under O’Neil’s definition, a model must satisfy three major criteria. It must be opaque, meaning its workings are either kept secret or prohibitively complex for most people to understand. It must operate at a large scale, which in many of O’Neil’s examples means nation- or industry-wide. Finally, it must cause significant and widespread harm.
Although many of the models described in Weapons of Math Destruction fit this definition, many more do not. Some models, like traditional credit scores, are reasonably transparent in terms of what they measure and how different factors are weighted. They can still be used in harmful ways, but they are not inherently harmful. Other models are opaque and harmful but do not operate at a scale where they can fairly be called WMDs. O’Neil gives examples from the world of employment screenings, where, as of her writing, a patchwork of individual models was still in use for assessing candidates and no dominant standard had emerged. For these, O’Neil says, the concern is that a standard will emerge and acquire the same coercive power as college ranking systems or predatory advertising algorithms. Finally, some large-scale models are used for benign purposes, such as identifying at-risk households and offering financial or other assistance. These models are in the minority, but O’Neil calls for their refinement and expansion, even suggesting that some existing WMDs might be reconfigured for this purpose.
What is the relation between mathematical modeling and social justice?
At first glance, data science may seem like a morally neutral field. O’Neil acknowledges that some people hold this opinion but argues vigorously that this is not the case. Mathematical modeling—the act of building a mathematical representation of some aspect of the world—has no clear moral valence in itself. However, the choices made in producing a mathematical model reflect the opinions and biases of its creator. Sometimes, as in baseball statistics, these opinions are relatively far removed from issues of social justice: the managers of one team may believe that home-run hitters are worth investing in, while another team’s managers may emphasize those hitters’ higher strikeout rate and look to hire more consistent, if less exciting, batters. Few people in society at large stand to suffer from these kinds of differences in opinion.
In other cases, the opinions encoded in a model matter a great deal. A firm producing a personality test for employment screening might believe, for example, that agreeableness is the most important trait of an employee and might give applicants a score that weights that trait highly. School districts can and do make standardized test scores a priority in assessing teacher performance. Overreliance on such models—and the tendency to treat and market these models as though they reveal fundamental truths—can lead to serious harm. O’Neil argues that, because mathematical models have no inherent notion of fairness, they raise issues of social justice whenever they are widely used to make important decisions. In the afterword, she calls on her peers to help hold algorithms to account, to apply human values of fairness and equality to the processes that increasingly govern our lives.