Homework #6

Answers

 

4.1(a).   Null hypothesis:  Last night’s game was actually an NHL hockey game.

 

4.1(b).   On the basis of the null hypothesis, I would expect between 0 and 6 points per team at most.  The number of points reported was substantially larger than 6 per team, so a score that high would be highly unlikely if this had actually been a hockey game (as assumed under the null hypothesis).  Therefore, I would reject the null hypothesis and conclude that the scores must have been for something other than a hockey game, which is why I believe the person was mistaken about reading those scores as hockey scores.

 

4.2(b).   No, $4.25 is a common observation, so it is unlikely that you were overcharged for lunch.

 

4.2(c).   I set up the null hypothesis that I was charged correctly.  Under that hypothesis, I would expect to receive about $1.00 in change, give or take a quarter or so.  The change that I received is in line with that expectation (there is a 31.7% probability of receiving change at least this far from $1.00 if I was charged correctly) and, therefore, I have no basis for rejecting H0.
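
The 31.7% figure is the two-tailed probability of an observation at least one standard deviation from its expected value.  A minimal check in Python; the $0.75 received below is only an assumed illustration of change that is one quarter (one standard deviation of $0.25) away from the expected $1.00, not a value given in the problem:

from statistics import NormalDist

expected, received, sd = 1.00, 0.75, 0.25      # received amount is assumed for illustration
z = (received - expected) / sd                 # z = -1.00
p_two_tailed = 2 * NormalDist().cdf(-abs(z))   # probability of change at least this extreme
print(round(z, 2), round(p_two_tailed, 4))     # -1.0 0.3173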

 

4.3.                            A type I error would be concluding that I had been shortchanged when in fact I had not.

 

4.4.                            A type II error would be concluding that I had not been shortchanged when in fact I had.

 

4.5.                            The critical value would be that amount of change below which I would decide that I had been shortchanged (or the price above which I would decide that I had been overcharged).  The rejection region would be all amounts less than the critical value in the case of getting too little change (or above the critical value in the case of getting overcharged)—i.e., all amounts that would lead to rejection of H0.

 

 

1.        The sample with the mean of 40 would have a mean that more closely approximates µ.  The larger the n, the less of a problem error (or random noise) will be.  Since error is assumed to be random, the larger your n, the more likely you are to have an extreme observation at one end of the distribution balanced by an extreme observation at the other end; thus the noise and error are assumed to balance out (sum to zero) with a large enough n, and the greater the n, the closer to 0 the error will be.
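
A quick simulation illustrates the point; the population values below (µ=100, σ=15) and the sample sizes 4 and 40 are arbitrary choices for illustration, not taken from the problem:

import random

random.seed(1)
mu, sigma = 100, 15                  # arbitrary population, for illustration only
for n in (4, 40):                    # a small sample vs. a larger one
    means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
             for _ in range(1000)]   # 1,000 sample means of size n
    avg_abs_error = sum(abs(m - mu) for m in means) / len(means)
    print(n, round(avg_abs_error, 2))   # average |sample mean - µ| shrinks as n grows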

 

2.        µ=600

σ=100

n=65

 

SE=σ/√n=100/√65=12.40

 

The standard error you could expect with a sample of 65 scores would be 12.40.
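
A quick check of this arithmetic in Python, using only the values given above:

import math

sigma, n = 100, 65
se = sigma / math.sqrt(n)    # standard error of the mean, σ/√n
print(round(se, 2))          # 12.4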

 

3(a).     µ=250

σ=20

95%→1.96

 

X=250+(1.96)(20)=289.2

X=250-(1.96)(20)=210.8


The 95% confidence interval for the distribution would be 210.8 to 289.2.

 

3(b).     µ=250

σ=20

80%→1.28

 

X=250+(1.28)(20)=275.6

X=250-(1.28)(20)=224.4

 

The 80% confidence interval for this distribution would be 224.4 to 275.6.
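
Both intervals in 3(a) and 3(b) follow the same pattern, µ ± z·σ; a short Python sketch that reproduces them:

mu, sigma = 250, 20
for label, z in (("95%", 1.96), ("80%", 1.28)):
    lower, upper = mu - z * sigma, mu + z * sigma
    print(label, round(lower, 1), round(upper, 1))   # 95%: 210.8 to 289.2, 80%: 224.4 to 275.6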

 

4.        µ=80

99%→2.58

Upper bound=97

 

97=80+(2.58)(σ)

17=2.58 σ

σ=17/2.58=6.589

 

X=80-(2.58)(6.589)=63

 

The standard deviation is 6.589 and the lower bound of the 99% confidence interval is 63.
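
The same relationship, X = µ + zσ, can be rearranged to recover σ and then the lower bound; a small check in Python:

mu, z, upper = 80, 2.58, 97
sigma = (upper - mu) / z                  # 17 / 2.58
lower = mu - z * sigma                    # symmetric lower bound
print(round(sigma, 3), round(lower, 1))   # 6.589 63.0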

 

5.       

 

XA     freq     f·XA     f(XA - X̄A)²
20       2       40       2(28.09)
19       1       19         18.49
18       1       18         10.89
17       5       85       5(5.29)
16       1       16          1.69
15       1       15           .09
14       1       14           .49
13       1       13          2.89
12       3       36       3(7.29)
10       2       20       2(22.09)
 9       2       18       2(32.49)
        N=20    ΣfXA=294  Σf(XA - X̄A)²=248.20

 

 

X̄A=ΣfXA/N=294/20=14.7

σA=√(Σf(XA - X̄A)²/N)=√(248.20/20)=√12.41=3.52

 

Mode=17
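
These summary statistics can be reproduced directly from the frequency table; a minimal Python sketch (the data are treated as the full population, so the variance is divided by N):

from statistics import mode

values = ([20]*2 + [19] + [18] + [17]*5 + [16] + [15] + [14] + [13]
          + [12]*3 + [10]*2 + [9]*2)
n = len(values)                                   # 20
mean = sum(values) / n                            # 294/20 = 14.7
var = sum((x - mean) ** 2 for x in values) / n    # population variance = 12.41
sd = var ** 0.5                                   # 3.52
print(n, mean, round(sd, 2), mode(values))        # 20 14.7 3.52 17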

STEM | LEAF

   0 | 99

   1*| 00

   1t| 2223

   1f| 45

   1s| 677777

   1.| 89

   2*| 00

 

Figure 5.  Stem and leaf plot of the population data.

 

If you did histograms or frequency distributions, this is good, but I was looking more for stem-and-leaf and boxplots.  [I did not display them because there are so many different ways to group them/graph them appropriately].

 

Median location=(N+1)/2=(20+1)/2=10.5

 

Median=(15+16)/2=15.5   (average of the 10th and 11th ordered scores)

 

Hinge location=(median location with fraction dropped+1)/2=(10+1)/2=5.5

 

Lower hinge=12

 

Upper hinge=17

 

H-spread=17-12=5

 

H-spread*1.5=5(1.5)=7.5

 

Upper fence=17+7.5=24.5

 

Lower fence=12-7.5=4.5

 

Lower adjacent value=9

 

Upper adjacent value=20

 

[Boxplot: the box runs from the lower hinge (12) to the upper hinge (17) with the median at 15.5; the whiskers extend to the adjacent values, 9 and 20; no points fall beyond the fences.]

 

Figure 5(b).  Boxplot of population data.
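
The boxplot quantities above follow the convention of putting the median at location (N+1)/2 and the hinges at (median location, fraction dropped, +1)/2.  A sketch of that specific convention in Python (not a general-purpose quantile routine):

data = sorted([9, 9, 10, 10, 12, 12, 12, 13, 14, 15, 16, 17, 17, 17, 17, 17, 18, 19, 20, 20])
n = len(data)

def value_at(loc):
    # value at a 1-based location; a .5 location averages the two neighbouring scores
    i = int(loc)
    return data[i - 1] if loc == i else (data[i - 1] + data[i]) / 2

median_loc = (n + 1) / 2                     # 10.5
hinge_loc = (int(median_loc) + 1) / 2        # 5.5
median = value_at(median_loc)                # 15.5
lower_hinge = value_at(hinge_loc)            # 12
upper_hinge = value_at(n + 1 - hinge_loc)    # 17
h_spread = upper_hinge - lower_hinge         # 5
lower_fence = lower_hinge - 1.5 * h_spread   # 4.5
upper_fence = upper_hinge + 1.5 * h_spread   # 24.5
adjacent = (min(x for x in data if x >= lower_fence),
            max(x for x in data if x <= upper_fence))   # 9, 20
print(median, lower_hinge, upper_hinge, h_spread, lower_fence, upper_fence, adjacent)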

 

5(b).    

 

z=(19-14.7)/3.52=1.22

 

z=1.22→2(.1112)=.2224

 

There is a 22.24% likelihood of getting a score as extreme as 19 in this population.

 

 

z=(3-14.7)/3.52=-3.32

 

z=-3.32→2(.0006)=.0012

 

There is a .12% likelihood of getting a score as extreme as 3 in this population.

 

 

z=(28-14.7)/3.52=3.78

z=3.78→2(.0001)=.0002

 

There is a .02% likelihood of getting a score as extreme as 28 in this population.
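
The three z computations in 5(b) all use the population mean (14.7) and standard deviation (3.52) found above; a minimal Python check of the z values and two-tailed probabilities (small differences from the table lookups above come from rounding in the table values and in σ):

from statistics import NormalDist

mu, sigma = 14.7, 3.52
for x in (19, 3, 28):
    z = (x - mu) / sigma
    p_two_tailed = 2 * NormalDist().cdf(-abs(z))    # both tails of the normal curve
    print(x, round(z, 2), round(p_two_tailed, 4))
# 19 -> z = 1.22, p ≈ .2219
# 3  -> z = -3.32, p ≈ .0009
# 28 -> z = 3.78, p ≈ .0002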