Explore Fundamental Data Science Principles
When used correctly, data can be a powerful source of inspiration, insight, and improved decision making. To use data correctly, however, requires balancing any observations from your
data with an understanding of the limitations of your data. Skilled data scientists are able to extract accurate information from their data by taking several important considerations into account. In this module, you will develop an understanding of several key principles needed for making decisions based on data. First, you will work to recognize when you can make generalizations about your data and why it is crucial to be cautious when doing so. Second, finding associations in your data does not always mean you have discovered a cause, but you will practice identifying and accounting for factors that make it more likely you have uncovered real relationships. Finally, you will develop an appreciation for the randomness inherent in any experiment and understand how statistics can help you account for uncertainty.
Using Data to Make Decisions
Data-driven decision making is critical in many fields, including business and science. In this video, Professor Basu introduces you to using data science concepts to make informed decisions. He also describes several key points you should be aware of when making decisions based on data, including generalizing your conclusions, interpreting association as causation, and assessing uncertainty around your results.
“ Data science focuses on extracting knowledge from data and making data-driven decisions to complement subject matter expertise.
By nature, data science is interdisciplinary, so it builds upon ideas and technologies developed in mathematics, statistics, and computer science to answer questions that arise in the biological and social sciences, in policy-making problems in government, and also in industry applications. Let me give you an example of data-driven decision-making. I used to work as a business analyst with a retail chain that had about 1,000 stores nationwide. Every time they were considering rolling out a big promotional campaign, like back-to-school discounts on select backpacks, across all their 1,000 stores, they would be interested in testing whether this promotional campaign would actually work. So what did they do? First, they would consult with a subject matter expert to design the promotion flyers, and then they would run the promotion in a handful of stores, say 20, for a few days and collect data: did sales increase, did profit margins increase, and so on. My job as a business analyst was to analyze those data and understand how the promotion offer was actually working. Sometimes we'd find that backpack sales in the promotion stores would go up even 5% to 7% compared to the other stores. At that point, management would look at these results, consult with the subject matter experts, and then make a final call on whether to roll out the promotion nationwide or just drop it altogether. This was an example where
we weighed our expert knowledge base against results from cold hard data to make an informed decision. What did we do when we saw sales increase in promotion stores? This is where thinking like a data scientist will help. We carefully thought through three specific issues. First, generalization. Second, association versus causation. Third, uncertainty around our
conclusion. So what's the issue of generalization? We ask: how do I know if I can generalize my results from a small set of 20 stores to a large set of 1,000 stores? After all, there is a risk of overgeneralization here, when we see a 5% to 7% sales increase in 20 stores and conclude that a similar sales spike would happen in all the stores. The question here is: can we choose our 20 promotion stores carefully and smartly to reduce this risk of overgeneralization? The second issue is the pitfall of interpreting association as causation. How do we know whether the promotion is actually what's causing the sales to increase? Could there be something else going on? When we saw a 5% to 7% spike in sales, we didn't immediately conclude that the promotion was working. Rather, we would pause and ask: were there more schools near the promotion stores? Were there offers coming out of competing retail stores near the other stores? The third issue is the issue of uncertainty. How certain should I be that we will see a similar spike in sales across all 1,000 stores? Should I be 100% sure? 50% sure? 20% sure? Is there a way to quantify this uncertainty? Because, you see, if we chose a different set of 20 stores to run our promotion, we would surely see a different level of spike, or even a drop, in sales. How certain should we be that the promotion will actually work nationwide across all 1,000 stores? We will see how to systematically think about all these issues and address them one by one. ”
Making Generalizations From Data
When you make a discovery in some members of a group but want to extrapolate your conclusion to the whole group, you need to make a generalization. Making generalizations can be extremely useful, but it also comes with some risks. Here, Professor Basu explains several issues you may encounter as you try to generalize your conclusions. He also suggests some strategies to avoid those issues, such as selecting a random sample.
“ Let's discuss the issue of generalization. In data science, we often learn something from a small set of data and then attempt to generalize our conclusion to a larger context. There is always a risk of overgeneralization, or extrapolation. To guard against this, we start by asking three questions. First, what is our unit of observation? In the context of a promotional campaign, that would be one single store. Then we ask: what is our population, the set of units to which we want to generalize our conclusion? That would be all 1,000 stores of the retail chain nationwide. Then we ask: what is our sample, the set of units on which we actually collect the data, analyze the data, and draw our conclusion? That would be the 20 stores where we choose to run the promotion offers. Once we pin down the population and the sample, we ask ourselves: how is the sample selected from the population? Is the sample really representative of the population? Because if not, we may end up overgeneralizing our conclusion. In the context of the promotional campaign, if I select all of my stores from a single neighborhood of Detroit, Michigan, say because there is a printing facility nearby and it is easier for me to distribute the flyers, that won't be representative of the whole nation, right? On the other hand, if we select our stores randomly across different states, some from New York, some from California, some from Texas, that's expected to be more representative.

To ensure the representativeness of our sample, we follow two principles: randomization and stratification. Randomization is crucial because it helps us avoid subjective bias in our selection. The gold standard method for sampling is called simple random sampling. It's very much like picking a card from a well-shuffled deck. If I'm selecting 20 stores to run a promotion using simple random sampling, I'll start with a list of all 1,000 stores, generate a random number on a computer between one and 1,000, and if that number is, say, 37, I'll pick the 37th store from my list. Then I'll go back and generate another random number to select one more store from the remaining 999, and keep doing this 20 times. Selecting my sample this way ensures that any subset of 20 stores is equally likely to be selected as my sample. My sample is not going to be based on convenience or on my personal preference; it's going to be based on just random numbers that come out of a machine. This way, we are able to avoid subjective or selection bias.

Stratification, on the other hand, is used when the population has different groups. In data science we also call them blocks. These are different groups with different types of units. In such situations, we take one simple random sample from each block, but make sure that each sample size is proportional to the size of its block. This is called stratified random sampling. In our example of promotional campaigns, sometimes we would see that in the entire set of 1,000 stores, 20% are superstores with much bigger sections, like groceries, pharmacy, and garden, and they attract more clientele. The remaining 80% are regular stores. When we are selecting, say, 20 sample stores using stratified random sampling, we would pick 20% of those 20 stores, that is, four promotion stores, using a simple random sample of the superstores. Then we pick the remaining 16 stores using a simple random sample of the regular stores. Randomization and stratification together allow us to select samples that represent the population well. They also help us quantify the level of uncertainty associated with our results from the finite sample data analysis.
”
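To make the two sampling schemes above concrete, here is a minimal Python sketch (not from the video) that draws a simple random sample of 20 stores from a list of 1,000, and then a stratified sample that takes 4 superstores and 16 regular stores, matching the 20%/80% split Professor Basu describes. The store IDs and block sizes are hypothetical.

import random

random.seed(42)  # for reproducibility of the illustration

# Hypothetical population: store IDs 1..1000; the first 200 are "superstores" (20%).
stores = list(range(1, 1001))
superstores = stores[:200]
regular_stores = stores[200:]

# Simple random sampling: every subset of 20 stores is equally likely.
srs_sample = random.sample(stores, k=20)

# Stratified random sampling: sample each block in proportion to its size
# (20% of 20 = 4 superstores, 80% of 20 = 16 regular stores).
stratified_sample = random.sample(superstores, k=4) + random.sample(regular_stores, k=16)

print("Simple random sample:", sorted(srs_sample))
print("Stratified sample:   ", sorted(stratified_sample))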
Accounting for Confounding Factors
A data scientist’s primary goal is to identify what causes particular outcomes, but this can be extremely challenging. How can you use the associations you find in your data set to make a strong case that you’ve identified the cause
of an outcome? As Professor Basu explains in this video, one way to strengthen your evidence for a causal claim is to account for confounding factors — variables that influence one group more than another, thus altering the outcome of a
study. You'll explore confounding factors through a famous study on the cause of cholera outbreaks that was conducted by John Snow in London, England, in the 1850s.
“ As a data scientist, you should always be careful to distinguish between association and causation. For example, if sales spiked in stores where we ran promotions, it could simply be because there were more
schools near these promotion stores compared to the other stores, and the spike in sales may not have anything to do with the promotion. The key point here is that we need to make sure what we see is not an effect of some other unobserved factor.

To illustrate this point, let's discuss a classic case study: John Snow and the cholera outbreak in 19th-century England. The study is one of the first of its kind, and it shows us how to think clearly about association and causation. Cholera is an infectious disease that affects the small intestine, and it can lead to death. In August of 1854, there was a major cholera outbreak in London. In those days, the predominant theory was that cholera spreads by breathing bad air. John Snow, a physician, noticed that there was a high number of cholera-related deaths near a communal street pump on Broad Street. He suspected that contaminated water might have something to do with the spread. He looked into cholera-related deaths in Greater London, which back then was served by two major water suppliers, Lambeth and S&V, which is short for Southwark and Vauxhall. Both companies drew their water from the Thames, but Lambeth drew water upriver of where the city's sewage was released, and S&V drew water downriver of the sewage release. Now, if contaminated water was indeed causing cholera, as Snow suspected, then there should have been more cholera-related deaths in the households serviced by S&V than in the households served by Lambeth. In data science terminology, we say these are two groups: the households serviced by S&V are the treatment group, and the households serviced by Lambeth are the control group.

Snow realized that just comparing death rates between all S&V households and all Lambeth households across the whole of London is not the correct approach, because there may be many other factors besides the source of drinking water that could lead to higher cholera-related deaths. This is where Snow did something that really pins down how one should think about association versus causation. Instead of comparing deaths over all of London, he chose to focus only on the households in the neighborhoods where both companies provided service. Snow's data showed that out of every 1,000 S&V households there were on average 31 deaths, compared with only three to four deaths out of every 1,000 Lambeth households. It was very clear that there was indeed a higher chance of contracting cholera if you were drinking S&V water.

What was critically important in this study was the fact that there was no substantial difference between the houses serviced by Lambeth and the houses serviced by S&V. In other words, Snow was taking steps to minimize what we call confounding variables, or confounders. Confounders are variables that could give one of the groups, treatment or control, a higher exposure to the disease, and that is how they skew the result. Suppose Snow had not controlled or accounted for these confounders. For instance, imagine if the residents of the S&V houses were mostly hard laborers and the residents of the Lambeth houses were mostly upper-class citizens. In that case, one could argue that the hard laborers in the S&V houses were simply more exposed to breathing bad air, and that is what was causing more deaths. There would have been no way for Snow to determine whether the higher number of cholera deaths was due to the water supply or due to different working conditions.
By arguing that the S&V and Lambeth homes were similar in all respects except their water, Snow made a stronger case that the treatment and control groups differed because of their water source. The lesson here is that association is not always causation, because an association could simply be driven by confounding variables; to look for a causal relationship, you should always identify and account for confounders as much as possible. ”
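As a small illustration of the comparison Snow made, the Python sketch below computes cholera deaths per 1,000 households for the treatment (S&V) and control (Lambeth) groups, restricted to the neighborhoods served by both companies. The household and death counts are illustrative, chosen only to roughly match the per-1,000 rates quoted in the transcript.

# Illustrative counts for households in neighborhoods served by BOTH companies.
groups = {
    "S&V (treatment)":   {"households": 40_000, "deaths": 1_263},
    "Lambeth (control)": {"households": 26_000, "deaths": 98},
}

for name, g in groups.items():
    rate_per_1000 = 1000 * g["deaths"] / g["households"]
    print(f"{name}: {rate_per_1000:.1f} cholera deaths per 1,000 households")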
Using Randomized Experiments to Make Causal Claims
If you plan an experiment with confounding factors in mind, you can use sampling strategies to account for those confounding factors. When you do an observational study, however, you need to make assumptions about how confounding factors affected the treatment groups. In this video, Professor Basu discusses blocked (stratified) and randomized sampling strategies that you can use when designing an experiment, as well as the kind of assumptions you must make in order to draw causal claims from data collected in an observational study.
“ Now let's see how to systematically think about treatment, control, and confounders, and when we can address the issue of confounding and make causal claims from the associations we observe in data. In most data science problems, we start by identifying the outcome variable, the treatment variable, the associated treatment and control groups, and potential confounders, and we ask whether those confounders can disproportionately affect the outcomes in the treatment and control groups. Because if they do, any difference we see in the outcome between the treatment and control groups might be caused by the treatment whose effect we are trying to measure, or it could very well be driven by the confounder. In the John Snow example, our outcome was the rate of cholera-related deaths. The treatment variable was whether the household received water from S&V or Lambeth. Potential confounders were the occupation or social class of members of the household. In the promotional offer example, our outcome was the increase in sales. The treatment variable was whether a back-to-school promotion flyer was distributed in the neighborhood of the store. A potential confounder was whether there were many schools near the promotion stores.

Once we identify the treatment, control, and confounders, how do we go about guarding against this confounding issue? The answer is that, in general, we cannot always do that. It depends on our method of data collection: whether our data come from a randomized experiment or from an observational study. In a randomized experiment, data scientists have complete control over which unit is assigned to treatment and which unit is assigned to control. This gives us a way to systematically account for both observed and unobserved confounders. Let's illustrate this using randomized controlled trials, or RCTs, which are a common technique in clinical studies. When we're testing the effect of a new treatment, we start with a big pool of patients who enrolled in the clinical study. Our goal is to divide them into treatment and control. The treatment group will receive the new drug; patients in the control group will just receive a placebo. Now, suppose some patients are at high risk for the disease while others are at low risk. This risk status can cause them to respond differently to the new treatment. This is an example of an observed confounder, because we know who is in the high-risk group and who is in the low-risk group. If we randomly split this big pool of patients into two groups, it may happen that there are mostly high-risk patients in treatment and mostly low-risk patients in control. That can skew the result of your study. Instead, we go about it in a more systematic way. First, we divide the pool into two blocks, or subgroups, based on this observed confounder, the low- and high-risk status. We create one pool of patients that is entirely high-risk and another pool that is entirely low-risk. This process is called blocking. Then, within each block, we randomly select half of the patients and assign them to treatment, and keep the other half in the control group. We do this randomization to deal with possible unobserved confounders, like pre-existing conditions or the genetic makeup of patients. We hope that this random assignment will balance out, on average, the effect of all the unobserved confounders. Blocking and randomization together make the randomized experiment a very powerful tool for learning causal connections from the associations observed in data. Our promotional offer example is a case of a randomized experiment, because we could clearly choose which stores would run promotions and which stores would not.

Unfortunately, however, we do not often have the resources to conduct a randomized experiment. We have to depend on what we call an observational study. In observational studies, like the John Snow example, we cannot assign units to the treatment and control groups ourselves; for instance, we could not specify in which household there would be a death and in which household there would not. In such cases, one can only measure association. If we want to make causal claims out of these associations, we need to make separate assumptions about the unobserved confounders, in the sense that those unobserved confounders have, on average, nothing to do with how the treatments were assigned. For every data science problem that is an observational study, we ask ourselves: is it reasonable to impose this assumption in the context of that problem? For instance, John Snow's argument that everything other than the water source was the same between the S&V and the Lambeth households should be taken as an assumption and carefully studied in the context of that problem. ”
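Here is a minimal Python sketch of the blocking-plus-randomization step described above. The patient IDs and risk labels are hypothetical; the point is only the mechanics: split the pool by the observed confounder (risk status), then randomize to treatment and control within each block.

import random

random.seed(7)

# Hypothetical enrolled patients with an observed confounder: risk status.
patients = [{"id": i, "risk": "high" if i < 40 else "low"} for i in range(100)]

def blocked_randomization(patients):
    """Block on risk status, then randomly assign half of each block to treatment."""
    assignment = {}
    for risk in ("high", "low"):
        block = [p["id"] for p in patients if p["risk"] == risk]
        random.shuffle(block)                 # randomization within the block
        half = len(block) // 2
        for pid in block[:half]:
            assignment[pid] = "treatment"     # receives the new drug
        for pid in block[half:]:
            assignment[pid] = "control"       # receives the placebo
    return assignment

assignment = blocked_randomization(patients)
n_treated_high = sum(1 for p in patients
                     if p["risk"] == "high" and assignment[p["id"]] == "treatment")
print("High-risk patients assigned to treatment:", n_treated_high, "of 40")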
The Importance of Uncertainty
Once you’ve collected your data, determined that you can generalize it to a larger population, and accounted for confounding factors, you might be tempted to decide that your results are conclusive. Yet you also need to account for the uncertainty and randomness present in the results of your study. Here, Professor Basu discusses how you can use statistics to assess the uncertainty around the outcomes you find in your data. He uses a real-world example of how reasoning about uncertainty was used during the trial of Kristen Gilbert, a nurse who was ultimately convicted of murdering some of her patients.
“ We mentioned that conclusions drawn from samples come with a level of uncertainty, and it's important to quantify it. In the example of promotional offers, if we saw a 10% spike in sales, we would be more certain about the promotion's effectiveness than if we only saw a 3% spike. But the question is, how much more certain? Let's talk about a courtroom trial where quantifying such uncertainty was a crucial piece of the case: United States versus Kristen Gilbert, from 1998. Gilbert was a nurse in the medical ward of the Veterans Administration Hospital in Northampton, Massachusetts. She developed a strong reputation for responding to cardiac arrests. But there seemed to be an unusual number of cardiac arrests happening when Gilbert was on duty. Some patients even died. So at some point the hospital launched an investigation into the apparent abnormality, and Assistant U.S. Attorney William Welch
convened a grand jury. Now Welch had to show that the number of deaths that occurred while Gilbert worked was too high to be just coincidental. This was important because there is some randomness in deaths: sometimes there are fewer deaths than expected during a set of shifts, other times there are more. These natural fluctuations don't necessarily indicate anything sinister is going on. But perhaps so many deaths were happening when Gilbert was on duty that it was virtually impossible they were caused by chance variation. If Welch could show this, he would be able to secure an indictment. The question of whether there were too many deaths to be coincidental can be answered using statistics. So Welch hired an epidemiologist, Stephen Gehlbach, to explain it to the jury.

Gehlbach used a coin toss example to explain how unusual this death rate was. He said, suppose we do an experiment where we toss a fair coin 10 times. Since the coin is fair, we expect to see five heads and five tails, but it will not always be so. Sometimes the coin will land on heads six out of 10 times, sometimes seven out of 10, sometimes even three out of 10. This is completely normal, because the tosses are random. The question is: at what point do we start to suspect that the coin is not fair and is biased toward heads? If we see seven heads out of 10? Eight out of 10? Nine out of 10? Gehlbach answered this question using probability. He said that if we repeat the experiment of tossing a fair coin 10 times, in nearly 38% of cases we'll see at least six heads out of 10. Since 38% isn't a particularly small probability, it's not reasonable to conclude that our coin is biased simply because we observe six heads instead of five. But what if we see seven or more heads? There, the probability calculations say the chance is about 17%. If something occurs 17% of the time, it's still not rare, so perhaps we shouldn't conclude that the coin is biased. However, getting nine or more heads in 10 flips happens on average only about 1% of the time. So if we observe nine or more heads, it's reasonable to suspect that our coin is biased.

Then he used the same logic to analyze the data on death rates. As you can see from the table, the fraction of shifts that resulted in a death was 74 out of 1,641, which comes to about 4.5%. Now treat each shift like a coin toss: if the coin comes up heads, there is a death on the shift; if it comes up tails, there is no death. Then, given that the probability of a death on a single shift is 4.5%, what are the chances of observing 40 or more shifts with a death among the 257 shifts Gilbert worked? Note that 40 out of 257 comes to about a 15.5% death rate, more than 10 percentage points above the overall death rate. Gehlbach repeated the same calculation as in the coin toss example and concluded that the answer is about one in 10 to the power of 11, which is less than one in a billion. In other words, the chance of observing this many deaths just due to natural fluctuation was astronomically small. This tiny number was enough to convince the grand jurors that there was probable cause that Kristen Gilbert was killing her patients, and she was sent to trial. Converting raw numbers like a 15% death rate or a 5% death rate into chances or odds, like one in a billion or one in 10 billion, helps us quantify how confident we should be in results drawn from finite samples. ”
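The percentages Gehlbach quoted are binomial tail probabilities, and they can be checked directly. Below is a minimal Python sketch (not from the video) that reproduces the coin toss numbers and sets up the shift calculation using the 74-deaths-in-1,641-shifts figures above; the helper function is our own.

from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Fair-coin examples: P(at least 6, 7, or 9 heads in 10 tosses).
for k in (6, 7, 9):
    print(f"P(at least {k} heads in 10 tosses) = {binom_tail(k, 10, 0.5):.3f}")
# about 0.377, 0.172, and 0.011, i.e., the 38%, 17%, and roughly 1% quoted above

# Shift data: 74 deaths in 1,641 shifts overall, about a 4.5% chance of a death per shift.
p_death = 74 / 1641
# Chance of 40 or more shifts with a death among the 257 shifts Gilbert worked,
# if each shift behaved like an independent coin toss with probability p_death.
print(f"P(40 or more deadly shifts out of 257) = {binom_tail(40, 257, p_death):.2e}")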
Using Experiments in the Workplace
Organizations and businesses across many industries collect vast amounts of data and employ data scientists to use those data to improve their processes or business outcomes. Collecting data in a controlled way, with carefully designed experiments, can give data scientists more insight into their customers' behavior and preferences. In these videos, Professor Basu sat down with several experts to discuss how their organizations use experiments to improve business outcomes. They also discuss why understanding how to design effective experiments is critical for data scientists.
Question
How does your organization use experiments or user data to design or analyze products or features?
“ In the social media space, we essentially use experiments every day to make decisions. It's so natural because it's such a large part of what we do. Just to give a very simple, almost silly example, consider an online experiment where we are trying to optimize for a metric, let's say click-through rate. We have a decision to make about a button: the color of the button. Essentially, how you set that up is you have a control, which is a black button, versus a couple of variants like green, blue, purple, pink, maybe a really flashy one that's multicolored and flashes all around. You try to understand whether each of these is better than black. That's sort of the hypothesis you have: that one of these variants is better than your control. In that setting, what you essentially do is run traffic, some small percentage of traffic, to each of these versions, so that a given user coming to the website has, let's say, a one-out-of-seven chance of seeing each of these colors. Once you start doing that, what you want to ensure through the process is that your data are not biased in some meaningful way. You want to make sure that there is randomness in the way you assign these colors to what the customer sees. That way, you're ensuring that the statistical properties you care about hold for your experiment. In analyzing the data, you essentially use permutation tests, or perhaps parametric tests that rely on a normal approximation if your data are large enough. But essentially you go through the basic statistical tests that ask, "Hey, is this click-through rate better than the baseline click-through rate?" for each of these variants.

One common pitfall to avoid: make sure that the number of views for each of these buckets, each of these versions, is relatively similar. In my made-up example, maybe the flashing button is so distracting that people essentially leave your website and never come back. That's bad, because if you care about click-through rate, your click-through rate may even be inflated: the people who actually stay really like flashing colors and click more. But it turns out that's only a small slice of your overall population, and people are actually not coming back to your website. Those are the kinds of things to watch out for and plan your analysis around beforehand. Another pitfall we often run into is the multiple comparison issue, where if you have many, many metrics you care about and you start monitoring them all against many versions, well, guess what? Some percentage of the time you're going to get false positives, at a rate of five percent per test, to be exact, if your alpha is 0.05. Multiple-comparison corrections make our lives easier by reducing that pitfall. So yeah. ”
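Here is a minimal sketch of the kind of analysis described above: comparing a variant's click-through rate against the control with a permutation test. The per-user click outcomes below are simulated and the underlying rates are made up; only the testing mechanics matter.

import random

random.seed(0)

# Simulated per-user click outcomes (1 = clicked) for control and one variant.
control = [1 if random.random() < 0.10 else 0 for _ in range(5000)]   # ~10% CTR
variant = [1 if random.random() < 0.11 else 0 for _ in range(5000)]   # ~11% CTR

observed_diff = sum(variant) / len(variant) - sum(control) / len(control)

# Permutation test: shuffle the pooled outcomes and recompute the difference
# to see how often chance alone produces a gap at least as large as observed.
pooled = control + variant
n = len(control)
count = 0
n_permutations = 2_000
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = sum(pooled[n:]) / n - sum(pooled[:n]) / n
    count += diff >= observed_diff

print(f"Observed CTR difference: {observed_diff:.4f}")
print(f"One-sided permutation p-value: {count / n_permutations:.4f}")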
Question
How does your organization use data-driven decision-making and experiments for product development?
“ Product development, and the process of using software engineering to develop high-quality products and apps for our users, is a deeply data-driven process that includes both the type of user data that you mentioned and large-scale experiments. People think about product development and they think about developing specific features: going out and saying, "I know what I need to build. I'm going to build that; there is some machine learning that goes into that, there's some user experience that goes into that." But it turns out that developing products is very much about first understanding our users and our business, which requires things like user research and taking a look at our customer support tickets, but also involves having automated data streams about the quality of the user experience. For example, as we look at the rider's experience in Uber's app, we take a look at the trip request experience, and in that experience we want to understand how difficult it was for the rider to request a trip and how successful that process was. We look at things like the time to request, and the number of taps the user had to make in order to make that request. We do what's called funnel analysis, looking at the proportion of users who successfully got through each stage of the funnel, and how long it took them. This is part of holistically understanding the quality of the user experience during that part of taking a trip. Once we have that understanding of the user experience, we use it to drive improvements in the app and in its features. We use it to prioritize what changes we make: what are the things that are really tripping up our users, and how do we specifically go and address those, through better experiences and more data-driven technologies, for example? Then, throughout our process of improving our technology, each time we make an improvement we use experimentation to evaluate the impact of that change on the user experience and on user engagement. This is really what modern software engineering is about, and it really is deeply data-driven, from start to finish. ”
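A small sketch of the funnel analysis idea mentioned above: given counts of users who reached each stage of a hypothetical trip-request funnel, compute stage-to-stage and overall conversion rates. The stage names and counts are made up for illustration.

# Hypothetical counts of users reaching each funnel stage, in order.
funnel = [
    ("opened app",          100_000),
    ("entered destination",  62_000),
    ("viewed price",         55_000),
    ("requested trip",       41_000),
]

top = funnel[0][1]
for (stage, count), (_, prev_count) in zip(funnel[1:], funnel[:-1]):
    step_rate = count / prev_count      # conversion from the previous stage
    overall_rate = count / top          # conversion from the top of the funnel
    print(f"{stage:>20}: {step_rate:6.1%} of previous stage, {overall_rate:6.1%} overall")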
Question
Why is it important for data scientists to understand experimental design?
“ In terms of statistical areas, if you want to focus on something, one thing that we could always use more of is people with a good understanding of experimental design, because we're running experiments all the time. We're trying to avoid biases, and we're trying to set up experiments so that we run them efficiently. If we are paying people to give answers to survey questions, we would like to minimize the cost of that. One project that I was involved with was for Google Maps. Google Maps in the U.S. and Europe is pretty good, trust me on that. India, Indonesia, not so much. So Google Maps is always trying to get better data there, and when they get a new source of data, they have to decide whether this new source of data is good enough to check in and use. Well, what they were doing was randomly picking 500 instances out of the set of data and going and verifying all of them. In some cases this meant actually going on site: is there actually a house where this source of data says there is? That was very expensive. I proposed using a sequential approach, where you start with a smaller number, check those, and if the results are really bad, you just stop and give up. A little background on that: sequential design was what got me to Google in the first place. I went and heard a talk on Google Web Optimizer, which was something that Google provided for website owners to improve their websites. They would have their standard webpage and then various alternatives, and they would add some JavaScript from Google to the main page that would randomly leave visitors on that page or send them to the alternate pages. Then some additional JavaScript would record whether people did what the site owner wanted: buy something, make a donation, download some software. And I suggested that a sequential approach would be good here. Rather than decide in advance how many
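Below is a rough Python sketch of the two-stage sequential idea described in this excerpt: check a small batch first and give up early if it is clearly bad. All numbers, thresholds, and the verify() placeholder are hypothetical and are not Google's actual procedure.

import random

random.seed(1)

def verify(instance):
    # Placeholder for the expensive check (e.g., is there really a house here?).
    # Here we simply simulate a data source that is correct 70% of the time.
    return random.random() < 0.70

def sequential_check(instances, first_batch=100, full_sample=500, give_up_below=0.60):
    """Check a small batch first; stop early if accuracy is clearly too low."""
    sample = random.sample(instances, full_sample)
    first = sample[:first_batch]
    accuracy = sum(verify(x) for x in first) / first_batch
    if accuracy < give_up_below:
        return "rejected early", first_batch          # saves the remaining checks
    correct = sum(verify(x) for x in sample[first_batch:]) + accuracy * first_batch
    return f"full-sample accuracy {correct / full_sample:.2f}", full_sample

result, cost = sequential_check(list(range(10_000)))
print(result, "after", cost, "verifications")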