We look at a random sample of 1,000 United flights in the month of December, comparing the actual arrival time to the scheduled arrival time. Computer output of the descriptive statistics for the difference between actual and scheduled arrival times of these 1,000 flights is shown below:

N = 1000, Mean = 4.06, St. Dev. = 45.4, SE Mean = 1.44
Min = -35, Q1 = -14, Median = -5, Q3 = 9, Max = 871

(a) What is the sample mean difference between actual and scheduled arrival times, and what is the standard deviation of the difference? That is, x̄ = ? and s = ?
(b) Based on the computer output, is it safe to assume this sample is normally distributed? Can we still proceed using the t-distribution to build a confidence interval for this mean?
(c) Use the summary statistics to compute a 95% confidence interval for the average difference between actual and scheduled arrival times on United flights in December. Round to two decimal places.
(d) Interpret the confidence interval you found in context.
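As a reference for part (c), here is a minimal sketch of the interval computation in Python, assuming scipy is available. It applies the standard formula x̄ ± t* · SE with n − 1 = 999 degrees of freedom; the variable names are illustrative, not part of the original problem.

```python
from scipy import stats

n = 1000
xbar = 4.06   # sample mean difference (minutes), from the output
se = 1.44     # standard error of the mean, from the output

# Two-sided 95% critical value from the t-distribution with n - 1 df
t_star = stats.t.ppf(0.975, df=n - 1)

lower = xbar - t_star * se
upper = xbar + t_star * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")  # roughly (1.23, 6.89)
```

With such a large sample, t* ≈ 1.96, so the interval is nearly identical to the one a normal-based (z) interval would give.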