ED516- Dr. Smith's Quiz Questions For Consideration
Teresa Johnson
Dr. Smith
EDU 516
November 20, 2023
“Quiz to Consider” Questions and Answers
1.
What is the difference between inferential statistics and descriptive statistics?
Descriptive statistics involves summarizing and describing the primary features of a dataset. It focuses on organizing, presenting, and analyzing data in a meaningful way. Descriptive statistics provide measures of central tendency (mean, median, and mode) and measures of dispersion (range, variance, standard deviation) to describe the characteristics of a dataset. Descriptive statistics aims to provide a clear and concise summary of the data, allowing researchers to understand and interpret the information easily (Simplilearn, 2021).
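The descriptive measures named above can be sketched with Python's standard library; the dataset here is made up purely for illustration:

```python
# Descriptive statistics for a small, made-up dataset of quiz scores.
from statistics import mean, median, mode, pstdev, pvariance

scores = [82, 90, 75, 90, 88, 79, 95, 90]

print("mean:", mean(scores))                 # central tendency: 86.125
print("median:", median(scores))             # central tendency: 89.0
print("mode:", mode(scores))                 # central tendency: 90
print("range:", max(scores) - min(scores))   # dispersion: 20
print("variance:", pvariance(scores))        # dispersion (population variance)
print("std dev:", pstdev(scores))            # dispersion (population std dev)
```

Each printed value is one of the summary measures described in the paragraph above; together they give the "clear and concise summary" that descriptive statistics aims for.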
Inferential statistics involves making inferences and drawing conclusions about a population based on a sample. It uses probability theory and sampling techniques to generalize findings from a sample to a larger population. Inferential statistics help researchers make predictions, test hypotheses, and determine the significance of relationships or differences between variables (Simplilearn, 2021).
2. An independent samples t-test was conducted to determine if the means of two different collections of data were statistically different. Specifically, the test was conducted to determine whether to reject the null hypothesis: that there is no difference in the two means. The test was run at an alpha level of .05. The results returned a calculated significance level (p value) of .02.
A. What does the phrase “alpha level of .05” mean, in common language?
The phrase “alpha level of .05” refers to the significance level, or level of confidence, used in hypothesis testing. It represents the threshold or cutoff point at which a researcher determines whether the results of a statistical test are statistically significant (McLaughlin, 2023).
B. Given the calculated p value = .02, should the researchers reject the null hypothesis? Said another way, is there a statistically significant difference in the means of these two collections of data?
If the calculated p-value is .02 and the significance level (alpha level) chosen by the researchers is .05, then the researchers should reject the null hypothesis. The p-value represents the probability of obtaining a result at least as extreme as the observed one if the null hypothesis is true. In this case, since the p-value is less than the significance level (.02 compared to .05), the observed result is statistically significant. Therefore, the researchers have sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis (McLaughlin, 2023).
C. What if the test was run using an alpha level of .01? Should the researchers reject the null hypothesis?
In hypothesis testing, the alpha level represents the threshold for determining statistical significance. If the p-value is less than or equal to the alpha level, the observed result is statistically significant and the null hypothesis should be rejected. In this case, because the alpha level is more stringent (lower) at .01, the calculated p-value of .02 is now greater than the alpha level. Therefore, the researchers do not have sufficient evidence to reject the null hypothesis at this more conservative alpha level (McLaughlin, 2023).
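The decision rule across both alpha levels can be sketched in Python, using the p-value from the scenario above:

```python
# Decision rule for the t-test result described above: p = .02,
# checked against both alpha levels discussed in parts B and C.
p_value = 0.02

for alpha in (0.05, 0.01):
    reject_null = p_value <= alpha   # reject only when p is at or below alpha
    print(f"alpha = {alpha}: reject null? {reject_null}")
# alpha = 0.05 -> True (statistically significant)
# alpha = 0.01 -> False (not significant at this stricter level)
```

The same p-value leads to opposite decisions at the two alpha levels, which is why the alpha level must be fixed before the test is run.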
3. Under what circumstances might it be wise to use the mean rather than the median to describe the central tendency of a data set? Why? Under what circumstance would it be better to use the median? Why?
It might be wise to use the mean rather than the median to describe the central tendency of a data set under certain circumstances, such as:
1. When the data are normally distributed: The mean is a suitable measure of central tendency when the data follow a normal distribution. In a normal distribution the mean, median, and mode are all equal. Therefore, using the mean as a measure of central tendency accurately represents the typical value of the data.
2. When there are no extreme outliers: The mean is sensitive to extreme values or outliers in a data set. If there are no extreme outliers that significantly affect the overall distribution, the mean can provide an accurate representation of the central tendency. However, if outliers are present, they can heavily influence the mean, making it less representative of the typical value.
3. When the goal is to calculate further statistical measures: The mean is often used in various statistical calculations, such as variance, standard deviation, and regression analysis. Using the mean as the measure of central tendency ensures consistency and compatibility with these calculations.
It is better to use the median when the data are skewed or contain extreme outliers, because the median is resistant to outliers and still reflects the typical value. It is important to note that the choice between mean and median depends on the specific characteristics of the data and the research question (McLaughlin, 2023).
4. Give an example of how Z scores might help researchers compare two data sets that have
different standard deviations.
Let us pretend we have two sites, Site A and Site B, which each provide scores for a certain variable. Researchers want to compare the scores from these two sites, but the sites have different standard deviations. For example:
Site A:
Mean score: 75
Standard deviation: 5
Site B:
Mean score: 80
Standard deviation: 10
To compare the scores from these two sites, the researchers can use a standardized score called a z-score. The z-score measures how many standard deviations a particular score is away from the mean. Let us say a participant has a score of 77 on Site A. To calculate the z-score, use the formula:
Z = (x − mean) / standard deviation
Z = (77 − 75) / 5
Z = 2/5
Z = 0.4
So, the z-score for a score of 77 on Site A is 0.4.
For Site B, the participant’s score is again 77, and the mean and standard deviation are 80 and 10, respectively:
Z = (77 − 80) / 10
Z = −3/10
Z = −0.3
By calculating z-scores, researchers can compare the participant’s score relative to the mean and standard deviation of each site. They can make meaningful comparisons even though the standard deviations are different.
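The calculation above can be written as a short Python function, using the hypothetical Site A and Site B values from the example:

```python
# Z-score comparison for the hypothetical Site A / Site B example above.
def z_score(x, mean, sd):
    """Number of standard deviations that x lies from the mean."""
    return (x - mean) / sd

score = 77
z_a = z_score(score, mean=75, sd=5)    # Site A: (77 - 75) / 5
z_b = z_score(score, mean=80, sd=10)   # Site B: (77 - 80) / 10

print(z_a)  # 0.4  -> slightly above Site A's mean
print(z_b)  # -0.3 -> slightly below Site B's mean
```

The same raw score of 77 is above average on Site A but below average on Site B, which is exactly the kind of comparison the different standard deviations would otherwise obscure.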
5. What does the concept of “standard deviation” mean in common language? (Feel free to draw or reference pictures here too!). What makes it useful?
In simple terms, standard deviation is a way of understanding how much variety or difference there is in a set of numbers. For example, let us say you and your friends all decide to run a mile and time yourselves. If everyone finishes around the same time, say between 9 and 11 minutes, then the standard deviation (the amount of difference between your times) is small. But if some friends finish in 6 minutes and others in 16 minutes, then the standard deviation is large: there is a lot of variation in your mile times. So standard deviation is a measure of how spread out or varied the numbers in a group are. If the standard deviation is small, the numbers are close together; if it is large, the numbers are more spread out.
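The mile-time example above can be checked numerically; the times below are made up to match the story:

```python
# Spread of the hypothetical mile times from the example above.
from statistics import pstdev

close_times = [9, 10, 10, 11, 10]    # everyone finishes around 10 minutes
spread_times = [6, 10, 14, 16, 8]    # finishing times vary widely

print(pstdev(close_times))   # small standard deviation
print(pstdev(spread_times))  # much larger standard deviation
```

The tightly grouped times produce a standard deviation well under one minute, while the spread-out times produce one several times larger, matching the intuition in the paragraph.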
6. What is meant by nominal, ordinal, scale, and ratio data? Why is it important to understand what kind of data you are working with before you conduct a statistical test?
Nominal, ordinal, scale, and ratio data are different levels of measurement used in statistics and research to categorize and analyze data.
a. Nominal data: The simplest form of data; it represents categories or labels without any inherent order or numerical value. It is used to classify data into distinct groups or categories. Examples would be gender, eye color, or marital status (McLaughlin, 2023; Frost, 2022).
b. Ordinal data: Categories that have a natural order or ranking but do not have a consistent numerical difference between them. Ordinal data allow for the relative comparison of data points in terms of greater or lesser, but not for measuring the exact difference between them. Examples include educational levels, customer satisfaction ratings, or military ranks (McLaughlin, 2023; Frost, 2022).
c. Scale data: Also known as interval data; represents data that has a consistent numerical difference between values, allowing the exact difference between data points to be measured, but without a true zero point. Examples include temperature, IQ scores, or Likert scale ratings (McLaughlin, 2023; Frost, 2022).
d. Ratio data: Like scale data but with a true zero point, which allows meaningful ratios between values to be calculated. Examples include height, weight, time, or income (McLaughlin, 2023; Frost, 2022).
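The interval-versus-ratio distinction can be made concrete with temperature, the classic example: Celsius has no true zero, so ratios of Celsius readings are misleading, while Kelvin is a ratio scale. The values below are illustrative:

```python
# Interval vs. ratio scales: Celsius (no true zero) vs. Kelvin (true zero).
c1, c2 = 10.0, 20.0                  # Celsius readings (interval scale)
k1, k2 = c1 + 273.15, c2 + 273.15    # same temperatures in Kelvin (ratio scale)

print(c2 / c1)   # 2.0 -- but 20 C is NOT "twice as hot" as 10 C
print(k2 / k1)   # about 1.035 -- the physically meaningful ratio
```

The naive Celsius ratio suggests a doubling, while the Kelvin ratio shows the temperatures differ by only a few percent; this is why the level of measurement must be known before computing with the data.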
It is important to know the type of data you are working with before conducting a statistical test because different types of data require different statistical techniques and analyses. Here are a few reasons why understanding the type of data is crucial:
a. Appropriate statistical test selection: Different statistical tests are designed to analyze specific types of data. Using an inappropriate test can lead to inaccurate results and misleading conclusions.
b. Validity of results: The validity of statistical results depends on the compatibility between the data and the statistical test used. If the data violate the assumptions of the chosen test, the results may not be valid or reliable. Understanding the type of data helps ensure that the chosen statistical test is appropriate and that the results are valid and meaningful.
c. Interpretation of results: The interpretation of statistical results depends on the type of data. Different types of data provide different levels of information and allow for different kinds of conclusions. For example, nominal data can only be used for categorical comparisons, while scale or ratio data allow for more precise measurements and calculations. Understanding the type of data helps in appropriately interpreting the results and drawing accurate conclusions.
d. Data preparation and transformation: Some types of data may require specific preparation or transformation techniques before conducting statistical tests; for example, transforming skewed data to achieve normality, or recoding categorical variables. Understanding the type of data helps in identifying the appropriate preparation steps required to meet the assumptions of the chosen statistical test.
In short, understanding the type of data is essential before conducting a statistical test because it guides the selection of the appropriate test, ensures the validity of results, aids in result interpretation, and helps in preparing the data appropriately. It ensures that the statistical analysis aligns with the characteristics of the data, leading to accurate and meaningful conclusions.
7. Why is sample size a concern with inferential statistical testing?
Sample size is a concern with inferential statistical testing because it directly affects the reliability and accuracy of the results obtained. Sample size is important because of:
a. Representativeness: A larger sample size increases the likelihood that the sample represents the population accurately. With a small sample size, there is a higher chance of sampling error. A larger sample size helps to minimize this error and provides more reliable estimates of population parameters.
b. Precision and confidence: A larger sample size leads to more precise estimates and narrower confidence intervals. Inferential statistical tests involve making inferences about the population based on the sample data. With a larger sample size, the estimates of population parameters become more precise, reducing the uncertainty associated with them (Impact, n.d.).
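The precision point above follows from the standard error of the mean, which shrinks as the square root of the sample size. A sketch with a hypothetical standard deviation of 10:

```python
# Precision and sample size: the standard error of the mean is sd / sqrt(n),
# so it shrinks as the sample size n grows.
import math

sd = 10.0                    # hypothetical sample standard deviation
for n in (25, 100, 2500):
    se = sd / math.sqrt(n)   # standard error of the mean
    print(f"n = {n}: SE = {se}")
# n = 25   -> SE = 2.0
# n = 100  -> SE = 1.0
# n = 2500 -> SE = 0.2
```

Quadrupling the sample size halves the standard error, which is why larger samples yield narrower confidence intervals and more precise estimates.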
8. Why is random sampling important with inferential statistical testing?
Random sampling is important in inferential statistical testing because it helps to ensure that the sample is representative of the population and reduces the potential for bias. Here are a few reasons why random sampling is crucial:
a. Representative sample: Random sampling helps to create a sample that is representative of the population. By randomly selecting individuals from the population, every member has an equal chance of being included in the sample. This reduces the risk of systematic bias and ensures that the sample reflects the diversity and characteristics of the population. A representative sample is essential for making accurate inferences about the population based on the sample data.
b. Generalizability: The goal of inferential statistical testing is to draw conclusions about the population based on the sample data. Random sampling increases the likelihood that the findings from the sample can be generalized to the larger population. If the sample is not randomly selected, it may not accurately represent the population, leading to biased or misleading conclusions.
c. Statistical assumptions: Many statistical tests and techniques assume that the data are randomly sampled from the population. Violating this assumption can lead to incorrect results and invalid inferences. Random sampling helps to meet this assumption and ensures that these statistical tests are applied appropriately.
d. Control of confounding variables: Random sampling helps to control confounding variables, which are factors that may influence the relationship between the variables of interest. By randomly selecting individuals from the population, the likelihood that confounding variables are distributed evenly across the sample is increased. This allows for a more accurate assessment of the relationship between the variables under investigation.
In summary, random sampling is important in inferential statistics because it helps to create a representative sample, allows for generalizability, meets statistical assumptions, and controls for confounding variables. It increases the validity and reliability of the results obtained from the sample and enhances the accuracy of the inferences made about the population.
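A simple random sample can be sketched with the standard library; the population below is made up for illustration:

```python
# Simple random sampling sketch: every member of the (made-up) population
# has an equal chance of being drawn into the sample.
import random
from statistics import mean

random.seed(1)                            # fixed seed so the sketch is repeatable
population = list(range(1, 1001))         # hypothetical population: values 1..1000
sample = random.sample(population, 100)   # simple random sample of n = 100

print("population mean:", mean(population))   # 500.5
print("sample mean:", mean(sample))           # typically close to 500.5
```

Because each member is equally likely to be selected, the sample mean tends to land near the population mean, which is what makes generalizing from sample to population defensible.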
9. What is the difference between a type 1 error and a type 2 error? Do the chances of making a type 1 error increase or decrease if you decide to move the alpha level of your test from .05 to .01? Why?
1. Type 1 and Type 2 errors are potential errors that can occur in statistical hypothesis testing. A Type 1 error occurs when the null hypothesis (the hypothesis of no effect or difference) is incorrectly rejected when it is actually true. This is also known as a false positive. For example, if you are testing a new drug and you conclude that the drug has an effect when it actually does not, you have made a Type 1 error. A Type 2 error occurs when the null hypothesis is not rejected even though it is actually false. This is known as a false negative: in the drug example, concluding that the drug has no effect when it actually does.
2. If you decide to move the alpha level of your test from .05 to .01, the chances of making a Type 1 error decrease. The alpha level of a test is the probability threshold at which you reject the null hypothesis. It represents the likelihood of rejecting the null hypothesis when it is true, which is the definition of a Type 1 error. If you set it at .05, you are willing to accept a 5% chance of
making a Type 1 error. If you decrease it to .01, you are only willing to accept a 1% chance of making a Type 1 error. Therefore, by lowering the alpha level you make your criteria for rejecting the null hypothesis more stringent, which reduces the likelihood of making a Type 1 error. However, it is important to note that while lowering the alpha level decreases the chances of a Type 1 error, it also reduces the statistical power of the test and increases the chances of a Type 2 error. This is an example of the tradeoff between Type 1 and Type 2 errors in statistical testing (Banerjee et al., 2019).
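The relationship between the alpha level and the Type 1 error rate can be checked by simulation: generate many samples under a true null hypothesis and count how often the test falsely rejects. This sketch uses a two-sided z-test with a known standard deviation of 1 (all values here are illustrative):

```python
# Simulating Type 1 error rates: data generated under a TRUE null hypothesis
# (mean 0, known sigma 1), tested with a two-sided z-test at alpha .05 and .01.
import random

random.seed(42)
n, trials = 30, 5000
critical = {0.05: 1.96, 0.01: 2.576}     # two-sided z critical values
rejections = {0.05: 0, 0.01: 0}

for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) * (n ** 0.5)   # z = mean / (sigma / sqrt(n)), sigma = 1
    for alpha, crit in critical.items():
        if abs(z) > crit:                # false rejection of a true null
            rejections[alpha] += 1

for alpha in (0.05, 0.01):
    print(f"alpha = {alpha}: false-positive rate = {rejections[alpha] / trials:.3f}")
```

The observed false-positive rates land near 5% and 1% respectively, confirming that lowering alpha from .05 to .01 directly reduces the chance of a Type 1 error.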
10. Type 1 and Type 2 errors run in opposite directions: lowering your odds of making a type 1 error increases your odds of making a type 2 error. For example, if a company develops a potentially life-saving cancer drug for a deadly kind of cancer, it may be OK to set the type 1 error rate (the ‘false positive’ condition) high and thereby risk using it even if it turns out it doesn’t work that well and comes with troubling side effects. On the other hand, if the drug is designed to treat, say, ingrown toenails, it may be better to set the odds of making a type 1 error very low. Provide your own example of a research scenario in which it might be preferable to make a type 2 error and avoid a type 1 error. Give an example in which the opposite might be true, that it would be better to make a type 1 error and avoid a type 2?
a. In a study to determine whether a new pesticide has harmful effects on a specific species of endangered birds, the null hypothesis is that the pesticide does not have any negative impact on the birds. In this case, a Type 1 error would occur if you rejected the null hypothesis and concluded that the pesticide does have harmful effects on the birds when it does not. This could lead to unnecessary restrictions on the use of the pesticide, potentially impacting agricultural practices and economic factors. On the other hand, a Type 2 error would occur if you failed to reject the null hypothesis and concluded that the pesticide does not have harmful effects on the birds when it does. This could result in the continued use of the pesticide, leading to actual harm to the endangered bird population.
b. In this scenario, it would be better to make a Type 1 error (false positive) by restricting the use of the pesticide, even if it does not actually have harmful effects on the birds. This cautious approach prioritizes the protection of the endangered species and minimizes the risk of causing harm. Conversely, in a low-stakes study, such as testing a treatment for a minor condition like the ingrown toenails in the prompt, it would be preferable to risk a Type 2 error and avoid a Type 1 error, since falsely concluding that the treatment works could expose patients to needless cost and side effects for little benefit (Banerjee et al., 2019).