Assignment_7_15_and_7_16.docx
School: Clark University
Course: 123
Subject: Statistics
Date: Apr 3, 2024
Type: docx
Pages: 14
Uploaded by MateWolverine2181
Q: 7.15
Refer to Commercial properties Problems 6.18 and 7.7. Calculate R²Y4, R²Y1, R²Y1|4, R²14, R²Y2|14, R²Y3|124, and R². Explain what each coefficient measures and interpret your results. How is the degree of marginal linear association between Y and X1 affected when adjusted for X4?
Answer:
To compute the R-squared values for the various regression models, we fit each model with R's lm function and extract the R-squared value from summary. Here is the code for it:
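For reference, the coefficients of partial determination computed below are ratios of extra sums of squares. In the text's notation, for example,

$$R^{2}_{Y1|4} = \frac{SSR(X_1 \mid X_4)}{SSE(X_4)} = \frac{SSE(X_4) - SSE(X_1, X_4)}{SSE(X_4)},$$

and analogously R²Y2|14 = SSR(X2 | X1, X4) / SSE(X1, X4) and R²Y3|124 = SSR(X3 | X1, X2, X4) / SSE(X1, X2, X4).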
Code: # Read the data into a data frame
data <- read.csv("C:\\Users\\Aayush\\OneDrive\\Desktop\\MSDA\\Sem 2\\Linear Regression\\Assignements\\7.15\\Commercial_Properties.csv")
View(data)
SST <- sum((data$Y - mean(data$Y))^2)
SST
#R2Y4
Y4_model <- lm(Y ~ X4, data)
R2Y4 <- summary(Y4_model)$r.squared
R2Y4
#R2Y1
Y1_model <- lm(Y ~ X1, data)
R2Y1 <- summary(Y1_model)$r.squared
R2Y1
#R2Y1|4
P1.modelX4 <- lm(Y ~ X4, data)
SSE.X4 <- deviance(P1.modelX4); SSR.X4 <- SST-SSE.X4
P1.modelX14 <- lm(Y ~ X1+X4, data)
SSE.X1X4 <- deviance(P1.modelX14); SSR.X1X4 <- SST-SSE.X1X4
SSRX1_X4 <- SSR.X1X4 - SSR.X4
R2Y1_4 <- SSRX1_X4/SSE.X4
R2Y1_4
#R214
I4_model <- lm(Y ~ X1+X4, data)
R214 <- summary(I4_model)$r.squared
R214
#R2Y2|14
P1.modelX14 <- lm(Y ~ X1+X4, data)
SSE.X1X4 <- deviance(P1.modelX14); SSR.X1X4 <- SST-SSE.X1X4
P1.modelX124 <- lm(Y ~ X1+X2+X4, data)
SSE.X1X2X4 <- deviance(P1.modelX124); SSR.X1X2X4 <- SST-SSE.X1X2X4
SSRX2_X1X4 <- SSR.X1X2X4 - SSR.X1X4
R2Y2_14 <- SSRX2_X1X4/SSE.X1X4
R2Y2_14
#R2Y3|124
P1.modelX124 <- lm(Y ~ X1+X2+X4, data)
SSE.X1X2X4 <- deviance(P1.modelX124); SSR.X1X2X4 <- SST-SSE.X1X2X4
P1.modelX1234 <- lm(Y ~ X1+X2+X3+X4, data)
SSE.X1X2X3X4 <- deviance(P1.modelX1234); SSR.X1X2X3X4 <- SST-SSE.X1X2X3X4
SSRX3_X1X2X4 <- SSR.X1X2X3X4 - SSR.X1X2X4
R2Y3_124 <- SSRX3_X1X2X4/SSE.X1X2X4
R2Y3_124
#R2
full_model <- lm(Y ~ X1 + X2 + X3 + X4, data)
R2 <- summary(full_model)$r.squared
R2
Output:
> # Read the data into a data frame
> data <- read.csv("C:\\Users\\Aayush\\OneDrive\\Desktop\\MSDA\\Sem 2\\Linear Regression\\Assignements\\7.15\\Commercial_Properties.csv")
> View(data)
> SST <- sum((data$Y - mean(data$Y))^2)
> SST
[1] 236.5575
> #R2Y4
> Y4_model <- lm(Y ~ X4, data)
> R2Y4 <- summary(Y4_model)$r.squared
> R2Y4
[1] 0.2865058
> #R2Y1
> Y1_model <- lm(Y ~ X1, data)
> R2Y1 <- summary(Y1_model)$r.squared
> R2Y1
[1] 0.06264236
> #R2Y1|4
> P1.modelX4 <- lm(Y ~ X4, data)
> SSE.X4 <- deviance(P1.modelX4); SSR.X4 <- SST-SSE.X4
> P1.modelX14 <- lm(Y ~ X1+X4, data)
> SSE.X1X4 <- deviance(P1.modelX14); SSR.X1X4 <- SST-SSE.X1X4
> SSRX1_X4 <- SSR.X1X4 - SSR.X4
> R2Y1_4 <- SSRX1_X4/SSE.X4
> R2Y1_4
[1] 0.2504679
> #R214
> I4_model <- lm(Y ~ X1+X4, data)
> R214 <- summary(I4_model)$r.squared
> R214
[1] 0.4652132
> #R2Y2|14
> P1.modelX14 <- lm(Y ~ X1+X4, data)
> SSE.X1X4 <- deviance(P1.modelX14); SSR.X1X4 <- SST-SSE.X1X4
> P1.modelX124 <- lm(Y ~ X1+X2+X4, data)
> SSE.X1X2X4 <- deviance(P1.modelX124); SSR.X1X2X4 <- SST-SSE.X1X2X4
> SSRX2_X1X4 <- SSR.X1X2X4 - SSR.X1X4
> R2Y2_14 <- SSRX2_X1X4/SSE.X1X4
> R2Y2_14
[1] 0.2202037
> #R2Y3|124
> P1.modelX124 <- lm(Y ~ X1+X2+X4, data)
> SSE.X1X2X4 <- deviance(P1.modelX124); SSR.X1X2X4 <- SST-SSE.X1X2X4
> P1.modelX1234 <- lm(Y ~ X1+X2+X3+X4, data)
> SSE.X1X2X3X4 <- deviance(P1.modelX1234); SSR.X1X2X3X4 <- SST-SSE.X1X2X3X4
> SSRX3_X1X2X4 <- SSR.X1X2X3X4 - SSR.X1X2X4
> R2Y3_124 <- SSRX3_X1X2X4/SSE.X1X2X4
> R2Y3_124
[1] 0.004254889
> #R2
> full_model <- lm(Y ~ X1 + X2 + X3 + X4, data)
> R2 <- summary(full_model)$r.squared
> R2
[1] 0.5847496
In summary:
R²Y4 = .2865
R²Y1 = .0626
R²Y1|4 = .2505
R²14 = .4652
R²Y2|14 = .2202
R²Y3|124 = .0043
R² = .5848
Therefore:
1) R²Y4 = 0.2865: the proportion of the variation in Y explained by X4 alone.
2) R²Y1 = 0.0626: the proportion of the variation in Y explained by X1 alone.
3) R²Y1|4 = 0.2505: the coefficient of partial determination of X1 given X4, i.e. the proportion of the variation in Y not explained by X4 that is explained when X1 is added to the model.
4) R²14 = 0.4652: the proportion of the variation in Y explained jointly by X1 and X4 (the R-squared of the model containing both predictors).
5) R²Y2|14 = 0.2202: the proportion of the variation in Y left unexplained by X1 and X4 that is explained when X2 is added to the model.
6) R²Y3|124 = 0.0043: the proportion of the variation in Y left unexplained by X1, X2, and X4 that is explained when X3 is added to the model.
7) R² = 0.5847: the overall proportion of the variation in Y explained by all four predictors together.
Interpreting these results: among the single predictors examined, X4 is the most influential, explaining 28.65% of the variation in Y, while X1 alone explains only 6.26%. All four predictors together explain 58.47% of the variation in Y. Adding X2 to a model already containing X1 and X4 accounts for a further 22.02% of the remaining variation, whereas adding X3 to a model containing X1, X2, and X4 accounts for almost none of it (0.43%).
As for the marginal linear association between Y and X1: unadjusted, X1 explains only 6.26% of the variation in Y (R²Y1 = 0.0626), but adjusted for X4 it explains 25.05% of the variation that X4 leaves unexplained (R²Y1|4 = 0.2505). The degree of marginal linear association between Y and X1 therefore increases substantially when adjusted for X4; controlling for X4 removes variation that was masking the relationship between Y and X1.
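The assignment's workflow is in R, but as a sanity check on the extra-sum-of-squares arithmetic above, here is a small Python/numpy sketch. It uses synthetic stand-in data (the Commercial_Properties CSV is not reproduced in this document), and confirms that the partial-determination formula used in the R code agrees with the equivalent form written in terms of coefficients of multiple determination:

```python
import numpy as np

# Synthetic stand-in data (assumption: the real CSV is not included here;
# the identity checked below holds for any dataset).
rng = np.random.default_rng(0)
n = 81
X1 = rng.normal(size=n)
X4 = rng.normal(size=n)
Y = 2 + 1.5 * X1 - 0.8 * X4 + rng.normal(size=n)

def sse(y, *cols):
    """Residual sum of squares from OLS of y on an intercept plus cols."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

SST = float(np.sum((Y - Y.mean()) ** 2))

# Coefficient of partial determination of X1 given X4, as in the R code:
# SSR(X1|X4) / SSE(X4)
R2_Y1_4 = (sse(Y, X4) - sse(Y, X1, X4)) / sse(Y, X4)

# Equivalent expression via coefficients of multiple determination
R2_Y4 = 1 - sse(Y, X4) / SST
R2_Y14 = 1 - sse(Y, X1, X4) / SST
alt = (R2_Y14 - R2_Y4) / (1 - R2_Y4)

print(R2_Y1_4)  # agrees with `alt` up to floating-point error
```

The agreement of the two forms is the algebraic reason R²Y1|4 can be read off either from extra sums of squares (as in the R code) or from the two models' R-squared values.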
Q: 7.16
Refer to Brand preference Problem 6.5.
a. Transform the variables by means of the correlation transformation (7.44) and fit the standardized regression model (7.45).
b. Interpret the standardized regression coefficient b1*.
c. Transform the estimated standardized regression coefficients by means of (7.53) back to the ones for the fitted regression model in the original variables. Verify that they are the same as the ones obtained in Problem 6.5b.
Answer:
I used the following code to produce the output for the three required parts:
Code:
# Read the dataset
brand_data <- read.csv("C:\\Users\\Aayush\\OneDrive\\Desktop\\MSDA\\Sem 2\\Linear Regression\\Assignements\\7.16\\Brand_Prefrences.csv")
# Standardize the variables (plain z-scores; the correlation transformation (7.44) also divides by sqrt(n-1), which does not change the fitted slopes)
standardized_brand_data <- as.data.frame(scale(brand_data))
# Fit the standardized regression model
standardized_model <- lm(Yi ~ Xi1 + Xi2, data = standardized_brand_data)
# Print the summary of the standardized model
summary(standardized_model)
a) Yhat* = .885X1* + .402X2* ---- Answer
b) Answer:
In the standardized regression model Yhat* = .885X1* + .402X2*, the coefficients represent the change in Yhat* for a one-standard-deviation increase in the corresponding standardized independent variable, holding the other constant. Specifically, a one-standard-deviation increase in X1 corresponds to a .885 standard deviation increase in Y, while a one-standard-deviation increase in X2 corresponds to a .402 standard deviation increase in Y. These coefficients quantify the relative importance and direction of the relationships between the independent variables and the dependent variable.
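The back-transformation (7.53) applied in part (c) below can be written as

$$b_k = \left(\frac{s_Y}{s_k}\right) b_k^{*} \quad (k = 1, 2), \qquad b_0 = \bar{Y} - b_1 \bar{X}_1 - b_2 \bar{X}_2,$$

where s_Y and s_k are the sample standard deviations of Y and X_k.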
# Running through it again for part c (manually)
# c)
# Data
Y <- c(64, 73, 61, 76, 72, 80, 71, 83, 83, 89, 86, 93, 88, 95, 94, 100)
X1 <- c(4, 4, 4, 4, 6, 6, 6, 6, 8, 8, 8, 8, 10, 10, 10, 10)
X2 <- c(2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4)
# Combine into a data frame
data <- data.frame(Y = Y, X1 = X1, X2 = X2)
# Correlation transformation
cor_matrix <- cor(data)
X1_std <- (data$X1 - mean(data$X1)) / sd(data$X1)
X2_std <- (data$X2 - mean(data$X2)) / sd(data$X2)
Y_std <- (data$Y - mean(data$Y)) / sd(data$Y)
# Fit the standardized regression model
lm_std <- lm(Y_std ~ X1_std + X2_std, data = data)
# Summary of the model
summary(lm_std)
# Interpret the standardized regression coefficient b1*
std_coef <- summary(lm_std)$coefficients[, "Estimate"]
b1_star <- std_coef["X1_std"]
b1_star
# Transformation to original variables
b0 <- mean(data$Y) - b1_star * mean(data$X1) * sd(data$Y) / sd(data$X1) - std_coef["X2_std"] * mean(data$X2) * sd(data$Y) / sd(data$X2)
b1 <- b1_star * sd(data$Y) / sd(data$X1)
b2 <- std_coef["X2_std"] * sd(data$Y) / sd(data$X2)
# Display the coefficients
cat("Original Coefficients:\n")
cat("b0:", b0, "\n")
cat("b1:", b1, "\n")
cat("b2:", b2, "\n")
# Compute sY, s1, and s2
sY <- sd(data$Y)
s1 <- sd(data$X1)
s2 <- sd(data$X2)
# Output sY, s1, and s2
cat("sY:", sY, "\n")
cat("s1:", s1, "\n")
cat("s2:", s2, "\n")
Output:
> # Read the dataset
> brand_data <- read.csv("C:\\Users\\Aayush\\OneDrive\\Desktop\\MSDA\\Sem 2\\Linear Regression\\Assignements\\7.16\\Brand_Prefrences.csv")
> # Standardize the variables using the correlation transformation
> standardized_brand_data <- as.data.frame(scale(brand_data))
> # Fit the standardized regression model
> standardized_model <- lm(Yi ~ Xi1 + Xi2, data = standardized_brand_data)
> # Print the summary of the standardized model
> summary(standardized_model)
Call:
lm(formula = Yi ~ Xi1 + Xi2, data = standardized_brand_data)
Residuals:
     Min       1Q   Median       3Q      Max
-0.46424 -0.18227  0.02285  0.15994  0.36661

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.00000    0.06294   0.000        1
Xi1          0.88503    0.06500  13.615 4.53e-09 ***
Xi2          0.40223    0.06500   6.188 3.28e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2518 on 13 degrees of freedom
Multiple R-squared:  0.9451,	Adjusted R-squared:  0.9366
F-statistic: 111.8 on 2 and 13 DF,  p-value: 6.439e-09
>
> # a) Yhat* = .885X1* + .402X2* ---- Answer
>
> # b) Answer:
> # In the standardized regression model Yhat* = .885X1* + .402X2*, coefficients
> # represent the change in Yhat* for a one-standard-deviation increase in the
> # corresponding standardized independent variable. Specifically, a
> # one-standard-deviation increase in X1 corresponds to a .885 standard deviation
> # increase in Yhat, while a one-standard-deviation increase in X2 corresponds
> # to a .402 standard deviation increase in Yhat. These coefficients quantify the
> # relative importance and direction of the relationships between the independent
> # variables and the dependent variable.
> # Running through it again for part c (manually)
> # c)
> # Data
> Y <- c(64, 73, 61, 76, 72, 80, 71, 83, 83, 89, 86, 93, 88, 95, 94, 100)
> X1 <- c(4, 4, 4, 4, 6, 6, 6, 6, 8, 8, 8, 8, 10, 10, 10, 10)
> X2 <- c(2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4)
> # Combine into a data frame
> data <- data.frame(Y = Y, X1 = X1, X2 = X2)
> # Correlation transformation
> cor_matrix <- cor(data)
> X1_std <- (data$X1 - mean(data$X1)) / sd(data$X1)
> X2_std <- (data$X2 - mean(data$X2)) / sd(data$X2)
> Y_std <- (data$Y - mean(data$Y)) / sd(data$Y)
> # Fit the standardized regression model
> lm_std <- lm(Y_std ~ X1_std + X2_std, data = data)
> # Summary of the model
> summary(lm_std)
Call:
lm(formula = Y_std ~ X1_std + X2_std, data = data)
Residuals:
     Min       1Q   Median       3Q      Max
-0.38423 -0.15391  0.00218  0.13863  0.36677

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.220e-16  5.880e-02   0.000        1
X1_std       8.924e-01  6.073e-02  14.695 1.78e-09 ***
X2_std       3.946e-01  6.073e-02   6.498 2.01e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2352 on 13 degrees of freedom
Multiple R-squared:  0.9521,	Adjusted R-squared:  0.9447
F-statistic: 129.1 on 2 and 13 DF,  p-value: 2.658e-09
> # Interpret the standardized regression coefficient b1*
> std_coef <- summary(lm_std)$coefficients[, "Estimate"]
> b1_star <- std_coef["X1_std"]
> b1_star
X1_std
0.8923929
> # Transformation to original variables
> b0 <- mean(data$Y) - b1_star * mean(data$X1) * sd(data$Y) / sd(data$X1) - std_coef["X2_std"] * mean(data$X2) * sd(data$Y) / sd(data$X2)
> b1 <- b1_star * sd(data$Y) / sd(data$X1)
> b2 <- std_coef["X2_std"] * sd(data$Y) / sd(data$X2)
> # Display the coefficients
> cat("Original Coefficients:\n")
Original Coefficients:
> cat("b0:", b0, "\n")
b0: 37.65
> cat("b1:", b1, "\n")
b1: 4.425
> cat("b2:", b2, "\n")
b2: 4.375
> # Compute sY, s1, and s2
> sY <- sd(data$Y)
> s1 <- sd(data$X1)
> s2 <- sd(data$X2)
> # Output sY, s1, and s2
> cat("sY:", sY, "\n")
sY: 11.45135
> cat("s1:", s1, "\n")
s1: 2.309401
> cat("s2:", s2, "\n")
s2: 1.032796
Summary of Regression Analysis
We fit a standardized regression of the response variable (Y) on two predictors (X1 and X2) using the 16 observations, then transformed the coefficients back to the original variables.
Model Summary
Back-transformed to the original variables via (7.53), the fitted model is:
Predicted Y = b0 + b1*X1 + b2*X2
where b0, b1, and b2 are the regression coefficients. The estimates are:
- b0 = 37.650
- b1 = 4.425
- b2 = 4.375
These are the same coefficients obtained in Problem 6.5b, which completes the verification asked for in part (c).
Interpretation
- A one-unit increase in X1 corresponds to a 4.425-unit increase in predicted Y, holding X2 constant.
- A one-unit increase in X2 corresponds to a 4.375-unit increase in predicted Y, holding X1 constant.
- The intercept (b0 = 37.650) is the predicted value of Y when both predictors equal zero.
Standard Deviations
The standard deviations used in the back-transformation are:
- Standard deviation of Y: 11.45135
- Standard deviation of X1: 2.30940
- Standard deviation of X2: 1.03280
These values describe how spread out each variable is around its mean.
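Since the brand preference data are listed in full in part (c), the back-transformation can also be double-checked outside R. Below is a minimal Python/numpy sketch of the same computation (an equivalent reimplementation for illustration, not part of the original R workflow):

```python
import numpy as np

# Brand preference data as listed in part (c) above
Y = np.array([64, 73, 61, 76, 72, 80, 71, 83, 83, 89, 86, 93, 88, 95, 94, 100], float)
X1 = np.array([4, 4, 4, 4, 6, 6, 6, 6, 8, 8, 8, 8, 10, 10, 10, 10], float)
X2 = np.array([2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2, 4], float)

def zscore(v):
    # Plain z-scores; the correlation transformation (7.44) also divides by
    # sqrt(n-1), which leaves the fitted slopes unchanged.
    return (v - v.mean()) / v.std(ddof=1)

# Standardized model (no intercept needed: all variables are centered)
Z = np.column_stack([zscore(X1), zscore(X2)])
bstar, *_ = np.linalg.lstsq(Z, zscore(Y), rcond=None)

# Back-transform via (7.53): b_k = (sY / sk) * bk*, b0 = Ybar - b1*X1bar - b2*X2bar
sY, s1, s2 = Y.std(ddof=1), X1.std(ddof=1), X2.std(ddof=1)
b1 = bstar[0] * sY / s1
b2 = bstar[1] * sY / s2
b0 = Y.mean() - b1 * X1.mean() - b2 * X2.mean()

print(round(b0, 3), round(b1, 3), round(b2, 3))  # 37.65 4.425 4.375, matching 6.5b
```

This reproduces b1* = 0.8924, b2* = 0.3946 and the original-scale coefficients b0 = 37.65, b1 = 4.425, b2 = 4.375 reported in the R output above.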