School of Business, Liberty University
Faizan Malik
Week 6 Discussion Assignment

Author Note: I have no known conflict of interest to disclose. Correspondence concerning this article should be addressed to Faizan Malik: Fmalik@Liberty.edu

D6.8.1 Why would we graph scatterplots and regression lines?

D6.8.1.a. According to Morgan et al. (2020), scatterplots provide a visual representation of the relationship between two continuous variables, while regression lines help researchers understand whether a relationship exists between those variables. Scatterplots and regression lines also aid in identifying outliers within the data and can help researchers validate their hypotheses or other assumptions, which in turn facilitates prediction of the dependent variable from the values of the independent variable. A study by Meyer and Shinar (1991) found that the ability to estimate correlations from scatterplots is a skill that can be improved with practice, but it is also influenced by perceptual factors largely independent of statistical knowledge, such as the way a person's eyes are drawn to certain features of the scatterplot and the way their brain interprets the relationships between those features.

D6.8.2 In Output 8.2, (a) What do the correlation coefficients tell us? (b) What is r² for the Pearson correlation? What does it mean? (c) Compare the Pearson and Spearman correlations on both correlation size and significance level. (d) When should you use which type in this case?

D6.8.2.a. Output 8.2 reports a correlation coefficient (r) of 0.34, which indicates a positive correlation for this study. Correlation coefficients range from -1 to +1, where +1 represents a perfect positive correlation, -1 represents a perfect negative correlation, and 0 indicates no correlation (Asuero et al., 2006). As such, a correlation of 0.34 indicates a moderate positive relationship between a participant's mother's education level and their score on a math achievement test, although other factors could also be influencing the math achievement test scores.

D6.8.2.b. R-squared (r²), or the coefficient of determination, gives the proportion of the variance in the dependent variable that is accounted for, or explained, by the variance in the independent variable (Ozer, 1985). From Output 8.2, r² is approximately 0.1156, indicating that 11.56% of the variance in participants' math achievement test scores can be attributed to variation in mother's education level, while the remaining variance is likely due to other factors.
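As a minimal illustrative sketch of the ideas above (a scatterplot with a fitted regression line, and r with its corresponding r²), the following Python snippet uses simulated data, not the actual study dataset; the variable names and values are hypothetical and assume NumPy, SciPy, and Matplotlib are available.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulated (hypothetical) data standing in for mother's education level and math scores
rng = np.random.default_rng(0)
mother_educ = rng.integers(2, 11, size=75).astype(float)
math_ach = 5 + 1.2 * mother_educ + rng.normal(0, 6, size=75)

# Pearson r and the coefficient of determination (r squared)
r, p = stats.pearsonr(mother_educ, math_ach)
print(f"r = {r:.2f}, r^2 = {r**2:.4f}, p = {p:.4f}")

# Scatterplot with the fitted regression line overlaid
slope, intercept, *_ = stats.linregress(mother_educ, math_ach)
xs = np.linspace(mother_educ.min(), mother_educ.max(), 100)
plt.scatter(mother_educ, math_ach)
plt.plot(xs, intercept + slope * xs, color="red", label="regression line")
plt.xlabel("Mother's education level")
plt.ylabel("Math achievement score")
plt.legend()
plt.show()
```

For example, an r of 0.34 from such output would give r² = 0.34² ≈ 0.1156, matching the 11.56% figure discussed above.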
D6.8.2.c. For correlation size, the Pearson correlation coefficient of 0.34 and the Spearman correlation coefficient of 0.32 both indicate a positive, albeit moderate, correlation between the variables. Both also carry a p-value of less than .005, indicating that the relationship between the variables is unlikely to be due to random chance.

D6.8.2.d. Pearson correlations are best suited to cases where the relationship between variables is expected to be linear, specifically when the variables are approximately normally distributed, whereas Spearman correlations are preferable when the data are not expected to be linear, when there are numerous outliers, or when the data are ordinal (De Winter et al., 2016).

D6.8.5 In Output 8.5, what do the standardized regression weights or coefficients tell you about the ability of the predictors to predict the dependent variable?

D6.8.5.a. Standardized regression weights (beta coefficients) represent the change, in standard deviations, of the dependent variable corresponding to a one-standard-deviation increase in a predictor variable when all other predictors are held constant, and thus indicate each predictor's relative contribution to predicting the dependent variable (Richards, 1982). In Output 8.5, grades in high school have a standardized coefficient of 0.504, indicating that for every one-standard-deviation increase in grades, scores on the math achievement test are expected to increase by 0.504 standard deviations. This indicates a positive relationship between grades in high school and scores on the math achievement test, where higher grades tend to be associated with higher math achievement scores.

References

Asuero, A. G., Sayago, A., & González, A. G. (2006). The correlation coefficient: An overview. Critical Reviews in Analytical Chemistry, 36(1), 41-59.

De Winter, J. C., Gosling, S. D., & Potter, J. (2016). Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data. Psychological Methods, 21(3), 273.

Meyer, J., & Shinar, D. (1991). Perceiving correlations from scatterplots. Proceedings of the Human Factors Society Annual Meeting, 35(20), 1537-1540.

Morgan, G., Leech, N., Gloeckner, G., & Barrett, K. (2020). IBM SPSS for introductory statistics (5th ed.). Routledge.

Ozer, D. J. (1985). Correlation and the coefficient of determination. Psychological Bulletin, 97(2), 307.

Richards, J. M., Jr. (1982). Standardized versus unstandardized regression weights. Applied Psychological Measurement, 6(2), 201-212.
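To make the Pearson/Spearman comparison and the standardized (beta) coefficient discussed in D6.8.2.c–D6.8.5.a concrete, here is a small, hypothetical Python sketch using simulated data rather than the actual Output 8.2/8.5 data; the variable names are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
grades = rng.normal(5.5, 1.5, 75)               # simulated "grades in high school"
math_ach = 2.0 * grades + rng.normal(0, 4, 75)  # simulated math achievement scores

# Pearson assumes a roughly linear relationship between continuous, normally
# distributed variables; Spearman ranks the data first, so it tolerates outliers
# and ordinal measures.
r_pearson, p_pearson = stats.pearsonr(grades, math_ach)
rho_spearman, p_spearman = stats.spearmanr(grades, math_ach)
print(f"Pearson  r   = {r_pearson:.2f} (p = {p_pearson:.4f})")
print(f"Spearman rho = {rho_spearman:.2f} (p = {p_spearman:.4f})")

# Standardized regression weight: z-score both variables, then the slope of the
# regression is the beta coefficient (change in SD units of the outcome per one
# SD increase in the predictor). With a single predictor, beta equals Pearson r.
z_grades = (grades - grades.mean()) / grades.std(ddof=1)
z_math = (math_ach - math_ach.mean()) / math_ach.std(ddof=1)
beta = stats.linregress(z_grades, z_math).slope
print(f"Standardized coefficient (beta) = {beta:.3f}")
```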
Hey Dean, great post! In your post, you cited a study and stated, "An example is a y variable of blood pressure and the x variables of age, weight, and sex (Schneider et al., 2010)." This is a great example of how linear regression and regression analysis can aid medical research. Researchers and medical experts can explore how blood pressure varies with factors such as age, weight, and sex, as you mentioned, often presented in the form of scatterplots, which act as a visual representation of the relationship between two continuous variables (Morgan et al., 2020). Regression analysis also allows for the identification of potential confounding interactions among variables, such as how the relationship between age and blood pressure varies based on sex or weight, and vice versa. One such study, by Miall and Lovell (1967), found that blood pressure tends to increase with age, but the rate of increase varies between individuals and is influenced by a number of factors, including sex, smoking, and body weight. However, specifically in the medical field, experts should be cautious when utilizing such data, as overreliance on statistical findings and generalization from studies with limited data can yield poor outcomes for their patients. Take, for example, John Carlisle, an anesthesiologist who works to expose bad data in medical studies, whose efforts have resulted in over 120 retracted studies from two physicians in Japan and Italy (Adam, 2019). For those in the medical field, continued vigilance about the appropriate use of statistical methods such as regression analysis is essential to make informed decisions and protect the well-being of patients.

References

Adam, D. (2019, August 6). How a data detective exposed suspicious medical trials. Scientific American. https://www.scientificamerican.com/article/how-a-data-detective-exposed-suspicious-medical-trials/

Miall, W. E., & Lovell, H. G. (1967). Relation between change of blood pressure and age. British Medical Journal, 2(5553), 660.

Morgan, G., Leech, N., Gloeckner, G., & Barrett, K. (2020). IBM SPSS for introductory statistics (5th ed.). Routledge.

Hey Kodjo, great post! In your post you cited Roni & Djajadikerta (2021), who stated that "only when comparing two groups may the Mann-Whitney U test be used." Also known as the Wilcoxon rank-sum test, the Mann-Whitney U test is specifically designed to assess differences in the distributions of ranks between two independent groups and is well suited to situations where the assumptions of parametric tests, such as the t-test, cannot be met. This often occurs when dealing with non-normally distributed data or when the variances are not equal between groups, and the test can act as an alternative that does not rely on strict parametric assumptions. The Mann-Whitney U test is also valuable for analyzing ordinal or skewed data; in such cases, it offers a way to assess group differences without making inappropriate assumptions about the nature of the data (Ruxton, 2006). However, the Mann-Whitney U test carries some limitations, specifically related to tied observations, where multiple data points have the same value. Ties can impact the calculation of ranks and, consequently, the results of the test, leading to challenges in distinguishing the magnitude of differences between groups and potentially reducing the test's sensitivity (Nachar, 2008). Additionally, the Mann-Whitney U test is designed for simple independent group comparisons and cannot handle more complex experimental designs involving repeated measures, blocking, or other intricate arrangements.

References

Morgan, G., Leech, N., Gloeckner, G., & Barrett, K. (2020). IBM SPSS for introductory statistics (5th ed.). Routledge.

Nachar, N. (2008). The Mann-Whitney U: A test for assessing whether two independent samples come from the same distribution. Tutorials in Quantitative Methods for Psychology, 4(1), 13-20.

Ruxton, G. D. (2006). The unequal variance t-test is an underused alternative to Student's t-test and the Mann-Whitney U test. Behavioral Ecology, 17(4), 688-690.
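For readers who want to see the Mann-Whitney U test in practice, the following is a minimal sketch using simulated, skewed data (not data from any of the cited studies); the group names and values are hypothetical and assume SciPy is available.

```python
import numpy as np
from scipy import stats

# Two independent, non-normally distributed (skewed) groups of simulated data
rng = np.random.default_rng(2)
group_a = rng.exponential(scale=2.0, size=30)
group_b = rng.exponential(scale=3.0, size=30)

# mannwhitneyu ranks the pooled observations and compares the rank distributions;
# ties receive a correction, but many ties can still reduce the test's sensitivity.
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```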
Hey Manique, great post! In your post, when explaining how the correlation and the t-test differ, you stated, "These are both statistical methods that are used to analyze relationships and differences between variables, but they provide specific type of information different from one another." While this is true, the benefits and limitations of each test should be examined to provide more context. For example, correlations provide a measure of the strength and direction of the linear association between variables and are essential for identifying patterns, trends, and potential connections in data. Correlation coefficients, such as Pearson's correlation coefficient, offer a quantified value that helps researchers grasp the degree of relationship between variables (Meng et al., 1992). However, as the saying goes, correlation does not imply causation. Researchers cannot assume that a strong correlation between two variables means that changes in one variable cause changes in the other, as there might be confounding factors or third variables that influence the observed relationship. T-tests, by contrast, provide a way to determine whether the means of two groups are statistically significantly different and can be applied in various scenarios, such as comparing the means of two different treatments (Kim, 2015). However, t-tests are only suitable for comparing two groups; comparisons among three or more groups require other methods, such as analysis of variance.

References

Kim, T. K. (2015). T test as a parametric statistic. Korean Journal of Anesthesiology, 68(6), 540-546.

Meng, X. L., Rosenthal, R., & Rubin, D. B. (1992). Comparing correlated correlation coefficients. Psychological Bulletin, 111(1), 172.

Morgan, G., Leech, N., Gloeckner, G., & Barrett, K. (2020). IBM SPSS for introductory statistics (5th ed.). Routledge.
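As a closing illustration of the correlation-versus-t-test distinction discussed above, here is a brief, hypothetical Python sketch on simulated data; the variable names and numbers are assumptions for illustration, not results from any cited study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Correlation: two continuous variables measured on the same participants,
# quantifying the strength and direction of their linear association.
hours_studied = rng.normal(10, 3, 60)
exam_score = 50 + 2.5 * hours_studied + rng.normal(0, 8, 60)
r, p_r = stats.pearsonr(hours_studied, exam_score)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")   # association, not causation

# Independent-samples t-test: one continuous outcome compared across two groups,
# testing whether the group means differ significantly.
treatment = rng.normal(75, 10, 30)
control = rng.normal(70, 10, 30)
t_stat, p_t = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")      # difference in group means
```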