Types of Statistical Error 

Type I (alpha, α) and Type II (beta, β)

                          Hypothesis Conclusion
                       Accept Null       Reject Null
   Truth
     Null Hyp.         Correct           Type I error
     Research Hyp.     Type II error     Correct

Type I or Alpha = the probability of rejecting the null when it is actually true (saying there is a relationship when there isn't one).  In the sciences, we focus on minimizing this type of error.

Type II or Beta = the probability of accepting the null when it is actually false (saying there is no relationship when there is one).
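Below is a rough simulation (my own sketch, not from the lecture) that makes these two error rates concrete.  It assumes a two-sample t-test at alpha = .05, with made-up group sizes and a made-up true difference for the Type II case.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n_sims, n = 0.05, 5000, 50

    # Type I error: the null really is true (both groups share the same mean),
    # so every rejection is a mistake.
    false_rejects = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
        for _ in range(n_sims)
    )
    print("Estimated Type I error:", false_rejects / n_sims)   # should land near .05

    # Type II error: the null really is false (the means differ by 0.3),
    # so every failure to reject is a mistake.
    misses = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.3, 1, n)).pvalue >= alpha
        for _ in range(n_sims)
    )
    print("Estimated Type II error:", misses / n_sims)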

Which alpha level (also called significance level) you choose depends on your research question and the tradition of your discipline.

1. Research question:  How important is it that you don’t make a mistake in your conclusion?

Ex. Testing a new drug that can have horrible side effects (think cancer and chemo) -- you probably want 99% confidence, 1% error.

Ex. Testing a drug that has few side effects -- 90% confidence is probably fine.

2.  Discipline:  The most common alpha or significance levels used in the social sciences are .01, .05, and .10.  With an alpha level of .01 we must be 99% confident in our hypothesis conclusions.  With an alpha level of .05 we must be 95% confident in our hypothesis conclusions.  With an alpha level of .10 we must be 90% confident in our hypothesis conclusions.  So as the error level goes down, we become more confident in our conclusions.  And, consequently, as the error level goes down it gets harder to reject the null hypothesis.
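As a tiny illustration (my own numbers, not the lecture's), the same p-value from a test can lead to different conclusions depending on which alpha level you chose ahead of time:

    # Hypothetical p-value from some test -- reject at .10 and .05, but not at .01.
    p_value = 0.03

    for alpha in (0.10, 0.05, 0.01):
        decision = "reject the null" if p_value < alpha else "fail to reject the null"
        print(f"alpha = {alpha:.2f} ({100 * (1 - alpha):.0f}% confidence): {decision}")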


So we will focus on alpha error.  And unless stated otherwise, assume we set all tests at .05. 

The error level is the maximum amount of error you are willing to accept in your hypothesis conclusions.  It is the maximum amount of error you are willing to accept in rejecting the null hypothesis when it is actually correct.  It is the maximum probability you will allow of making a mistake.  This error has to do with the possibility that your sample does not actually represent the population due to either sample bias or random sampling variation.    

We set the error level a priori.  Before we collect the data.  Before we start the analysis.

How much error will you accept?  How confident do you want to be in your hypothesis results? 

For example, at a .05 error level, if we rejected the null hypothesis that men and women consume similar amounts of alcoholic beverages, we would be 95% confident that our conclusion holds true in the population, that men actually do drink more than women.  Or you could think of it this way: with a .05 error level, if we pulled a total of 100 samples from the same population, we would expect 95 of them to generate the same sample statistic and the same hypothesis conclusion.  Or you can think of the error level as the probability that you could have gotten a test statistic of that size due to random sampling variation alone.
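Here is one way to see that last reading, in a sketch of my own (the group means and sizes are made up): if men and women actually drink the same amount, random sampling variation alone should produce a rejection in only about 5 of 100 samples at the .05 level.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rejections = 0
    for _ in range(100):                      # pull 100 samples from the same population
        men   = rng.normal(5.0, 2.0, 200)     # assumed: both groups truly drink the same
        women = rng.normal(5.0, 2.0, 200)
        if stats.ttest_ind(men, women).pvalue < 0.05:
            rejections += 1
    print(f"Rejected the (true) null in {rejections} of 100 samples")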


One-tailed vs. two-tailed tests:

A two-tailed test is:

H1: There is a difference between group 1 and group 2 on Y.  X influences Y.

See diagram on board.

 

Because we did not specify direction, we have to look for possible error on both sides of the distribution.  So we have to split our allowable error (say .05) in half, putting half in each tail.
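A quick sketch of what that split looks like, assuming a z test (my example, not the lecture's): .025 of the error goes in each tail, so the cutoffs sit at roughly plus or minus 1.96 standard errors.

    from scipy.stats import norm

    alpha = 0.05
    lower = norm.ppf(alpha / 2)        # cutoff in the left tail  (~ -1.96)
    upper = norm.ppf(1 - alpha / 2)    # cutoff in the right tail (~ +1.96)
    print(f"two-tailed cutoffs: z = {lower:.2f} and z = {upper:.2f}")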

 

A one-tailed test is:

H1:  Group 1 is bigger than Group 2 on Y.  Or as X increases, Y increases.

or

H1:  Group 1 is less than Group 2 on Y.  Or as X increases, Y decreases.

See diagram on board.

 

Because we specified direction, we only look on one side of the distribution for our test results.  So we can put all of our error on that side.

Hence, it is easier to reject the null with a one-tailed test than with a two-tailed test.  But you should always write your hypotheses, and test your hypotheses, based on your theoretical or substantive expectations.  Don't write a one-tailed hypothesis just because you know it will be easier to reject the null that way.
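To see the "easier to reject" point in numbers, here is one last sketch of mine, again assuming a z test at .05 with a made-up test statistic of 1.80: it clears the one-tailed cutoff but not the two-tailed one.

    from scipy.stats import norm

    alpha = 0.05
    one_tailed_cut = norm.ppf(1 - alpha)       # ~ 1.645, all the error in one tail
    two_tailed_cut = norm.ppf(1 - alpha / 2)   # ~ 1.960, error split across both tails

    z = 1.80  # hypothetical test statistic, in the predicted direction
    print(f"one-tailed cutoff = {one_tailed_cut:.3f} -> reject? {z > one_tailed_cut}")
    print(f"two-tailed cutoff = {two_tailed_cut:.3f} -> reject? {z > two_tailed_cut}")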