Readiness quiz week four stats 7000

School: St. Catherine University
Course: 7000
Subject: Statistics
Date: Jan 9, 2024
Type: docx
Pages: 4
Uploaded by: dragonwall
1. Statistical power is a probability.
   a. True  b. False

2. Which of the following does the power of a study depend on (choose all that apply)?
   a. Sample size
   b. Variability
   c. Effect size
   d. Significance level
   e. P-value

3. The probability of a study finding that a drug works, assuming that the drug really does work, is best described by:
   a. Power
   b. Type I error (aka false positive)
   c. Type II error (aka false negative)
   d. 1 – alpha

4. The probability of a study finding that a drug does not work, assuming that the drug really works, is best described by:
   a. Power
   b. Type I error (aka false positive)
   c. Type II error (aka false negative)
   d. 1 – alpha

5. The probability of a study finding that a drug does not work, assuming that the drug really works, is best described by:
   a. Power
   b. Type I error (aka false positive)
   c. Type II error (aka false negative)
   d. 1 – alpha

6. Which of the following statements is most correct for when it makes sense to do a power calculation?
   a. When planning a study
   b. After completing a study
   c. At any point in time: before, during, or after your study
   d. Immediately prior to data analysis

7. A power calculation should be based on (choose the single best answer):
   a. The minimum effect size of interest/scientific relevance
   b. The effect size from a previous study
   c. The effect size that will guarantee a high power
   d. The sample size you can afford to conduct a study on

8. The more statistical tests one conducts, the more likely one is to achieve a statistically significant result.
   a. True  b. False

9. When doing more than one hypothesis test in a study, ______________ to adjust for the number of tests being conducted in order to declare statistical significance.
   a. A more lenient threshold should be applied.
   b. A stricter threshold should be applied.
   c. No change(s) needs to be made.
   d. Studies with more than one hypothesis test should not be conducted.

10. There exists only one method to deal with the issue of multiple comparisons.
   a. True  b. False

11. If the null hypothesis is really true, you should expect alpha percent of all experiments done to test that hypothesis to be significant simply due to chance.
   a. True  b. False

12. All things being equal, the familywise error rate is always at least as large as the per-comparison error rate.
   a. True  b. False

13. A student who was performing a single hypothesis test decided to do a Bonferroni correction to adjust her significance threshold. Using an initial alpha level of 0.05, her Bonferroni-adjusted alpha level was:
   a. 0.01
   b. 0.025
   c. 0.05
   d. 0.1

14. The primary objective of the methods to address multiple comparisons is to help ensure that unimportant results are not being declared important accidentally.
   a. True  b. False

15. A clear definition of what comparisons comprise the 'family' of comparisons is important, but often ambiguous.
   a. True  b. False

16. Ensuring the significance level applies to each individual comparison and not the overall family of comparisons is the most common approach to handling multiple comparisons.
   a. True  b. False

17. The problem of multiple comparisons is a ubiquitous issue seen in many disciplines.
   a. True  b. False

18. Analyzing your data in many ways with multiple different tests is a recommended strategy to help ensure you optimize the chances of finding a statistically significant result.
   a. True  b. False

19. Taking a continuous variable and creating categories or groupings for analysis is not a good idea because (choose all that apply):
   a. Information is reduced/lost.
   b. Multiple comparisons may be generated.
   c. The resulting analyses become more complex.
   d. The original questions/tests may not be able to be answered given the new form of the variable.

20. Coming up with hypotheses to test after a study is completed and declaring resulting tests significant at the alpha level is a good statistical practice.
   a. True  b. False

21. P-hacking, like life-hacking, is a great way to implement shortcuts in statistical analyses to derive accurate results more efficiently.
   a. True  b. False

22. The best strategies to employ to deal with multiple comparisons include (select all that apply):
   a. Plan studies with focused hypotheses.
   b. Define methods to account for any multiple comparisons done.
   c. P-hack.
   d. Analyze the same data with multiple different methods to ensure robustness.

23. A high (large) p-value for a normality test indicates:
   a. A statistically significant result.
   b. The data are 'not different' from a normal distribution.
   c. The data are 'different' from a normal distribution.
   d. A t-distribution should be utilized.

24. Determining whether data are 'normal enough' (i.e., consistent with a normal distribution) to use tests that assume an underlying normal distribution is very 'cut-and-dried' or 'black-and-white'.
   a. True  b. False

25. A variety of plots and tests exist to help the researcher determine whether his/her data are consistent with a normal distribution.
   a. True  b. False

26. The more data in one's sample, the more likely that small inconsistencies from a normal distribution will result in a test that suggests one should reject the null hypothesis that the data are sampled from a normal population.
   a. True  b. False

27. A small (significant) p-value automatically means that normality assumptions are invalid and one should pursue alternative methods.
   a. True  b. False

28. Outliers should be removed from your dataset whenever you come across them, without further investigation.
   a. True  b. False

29. If an outlier is identified as a true error and can be corrected, it should be corrected and left in the dataset.
   a. True  b. False

30. Outliers that are due to random chance or are valid but unusual observations should be left in the dataset.
   a. True  b. False

31. If an invalid observation is discovered in your data that is not an outlier but instead lies amongst all the other data points, it should be left as is in the dataset.
   a. True  b. False

32. It is easy for an investigator to fall prey to bias when deciding which, if any, outliers to remove.
   a. True  b. False

33. Robust methods or statistics are:
   a. Insensitive to the presence of outliers
   b. Sensitive to the presence of outliers
   c. Unaffected by outliers

34. A study with an insufficient sample size is most likely to (choose the single best answer):
   a. Be unable to detect a substantial treatment effect.
   b. Spare patients from adverse events.
   c. Allow the study to be completed more quickly.
   d. Have too much statistical power.

35. Which of the following are required to be specified to determine sample size (choose all that apply)?
   a. Effect size
   b. Power
   c. Significance level
   d. Variance
   e. Bias

36. All else being equal, the wider the confidence interval, the smaller the sample size.
   a. True  b. False
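Several of the questions above (8, 11–13) turn on how the familywise error rate grows with the number of tests and how the Bonferroni correction counteracts it. A minimal sketch of the standard formulas, assuming independent tests (the alpha level and test counts below are illustrative, not taken from the quiz):

```python
# Familywise error rate (FWER): probability of at least one false positive
# across m independent tests, each conducted at significance level alpha.
def fwer(alpha: float, m: int) -> float:
    return 1 - (1 - alpha) ** m

# Bonferroni correction: test each comparison at alpha / m so the
# familywise error rate stays at or below the original alpha.
def bonferroni_alpha(alpha: float, m: int) -> float:
    return alpha / m

alpha = 0.05
print(round(fwer(alpha, 1), 4))     # one test: FWER equals alpha, 0.05
print(round(fwer(alpha, 10), 4))    # ten tests: roughly 0.4013
print(bonferroni_alpha(alpha, 1))   # single test: threshold unchanged, 0.05
print(bonferroni_alpha(alpha, 10))  # ten tests: 0.005
```

Note that the single-test case mirrors question 13: with m = 1, the Bonferroni-adjusted alpha is still 0.05, and (as question 12 states) the familywise rate is never smaller than the per-comparison rate.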
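Questions 34–35 list the ingredients of a sample-size calculation: effect size, power, significance level, and variance. A sketch using the common normal-approximation formula for a two-sided, two-sample comparison, n = 2((z_{1-α/2} + z_{1-β}) / d)² per group, where d is the standardized (Cohen's d) effect size; the effect sizes below are illustrative assumptions, not values from the quiz:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided two-sample test
    using the normal approximation (rounded up to a whole participant)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Smaller effect sizes and higher power both drive the required n up.
print(n_per_group(0.5))              # medium effect, 80% power: 63 per group
print(n_per_group(0.2))              # small effect, 80% power: 393 per group
print(n_per_group(0.5, power=0.90))  # medium effect, 90% power: 85 per group
```

This illustrates why question 43 bases the calculation on the smallest effect size worth detecting: halving the effect size roughly quadruples the required sample size.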
37. Choosing a sample size based on the maximum acceptable width of a confidence interval is recommended/preferred because it puts the focus on effect size rather than statistical significance.
   a. True  b. False

38. Sample size and power are inversely related.
   a. True  b. False

39. As the prior probability that your scientific hypothesis is true increases, your choice of alpha level should:
   a. Increase
   b. Decrease
   c. Stay the same
   d. Depends on each individual study

40. All else equal/the same, a one-tailed test will require a smaller sample size than a two-tailed test.
   a. True  b. False

41. A power of 85% means that, if the hypothesized effect size is observed in the study conducted, there is an 85% chance the effect is statistically significant and a 15% chance the results are not statistically significant.
   a. True  b. False

42. The best way to determine an effect size of interest is to choose:
   a. The smallest effect size you care about.
   b. An effect size you've seen in the literature.
   c. An effect size that will generate a feasible sample size.
   d. A standard/rule-of-thumb/'canned' effect size.

43. Sample size calculations should be based on the smallest effect size worth detecting based on scientific or clinical considerations.
   a. True  b. False

44. If conducting a study with different sample sizes in each group, to retain the same power, the total sample size should ________ as compared to the scenario where the sample size is the same in each group.
   a. Remain the same
   b. Increase
   c. Decrease

45. Conducting a paired t-test (when appropriate) will increase power/reduce the required sample size compared to a two-sample independent-groups t-test.
   a. True  b. False

46. Interactive back-and-forth with an investigator during the planning stages of a study ('negotiating') is a good way to come to an agreeable sample size for his/her study.
   a. True  b. False

47. Sample sizes that are too large may be a problem because (choose all that apply):
   a. They can unnecessarily expose participants to harm.
   b. A scientifically irrelevant difference may be declared statistically significant.
   c. They waste time and resources with more participants than necessary.
   d. More complex statistical methods are required.

48. Sample sizes that are too small may be a problem because (choose all that apply):
   a. Inadequate precision is likely to yield unreliable results.
   b. They waste time and resources due to little chance of detecting an important effect size.
   c. They unethically expose participants to potential harm with an unreasonably small chance of detecting an important effect size.
   d. Less complex statistical methods can be used.
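Questions 36–37 concern the link between confidence-interval width and sample size. A minimal sketch of the normal-approximation relationship n = (z_{1-α/2} · σ / half-width)² for estimating a mean; the sigma and width values are illustrative assumptions only:

```python
import math
from statistics import NormalDist

def n_for_ci_half_width(sigma: float, half_width: float, alpha: float = 0.05) -> int:
    """Sample size so a (1 - alpha) confidence interval for a mean has at
    most the given half-width, using the normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return math.ceil((z * sigma / half_width) ** 2)

# Halving the acceptable half-width roughly quadruples the required n;
# equivalently, a wider interval corresponds to a smaller sample (question 36).
print(n_for_ci_half_width(sigma=10, half_width=2))  # 97
print(n_for_ci_half_width(sigma=10, half_width=1))  # 385
```

Framing the calculation this way keeps the planning conversation on estimation precision (how wide an interval is tolerable) rather than on clearing a significance threshold, which is the rationale behind question 37.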