PSY 555 Homework 21

Answers

 

1.   The two kinds of error rates are the per comparison error rate (PC) and the familywise error rate (FW).  The PC error rate is the probability of making a Type I error on any single comparison; it is simply the alpha (α) level used for that test (for multiple comparisons, it would be the corrected alpha for that particular test).  The FW error rate is the probability that a family of conclusions (a group of tests run on the same data) will contain at least one Type I error.  When alpha is not corrected, the FW error rate depends heavily on the number of tests being run (in other words, the number of tests included in the family).

    

     FW: α = 1 - (1 - α')^c

 

(c = number of comparisons.  This formula holds exactly only for independent comparisons, but it is a decent approximation of FW when the comparisons are dependent.)
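To make the relationship concrete, here is a minimal Python sketch (not part of the original formula; the function name and the family sizes are just illustrative) that computes the FW rate for a few family sizes:

```python
# Sketch: familywise error rate for c independent comparisons,
# each run at a per-comparison alpha (alpha_pc).
def familywise_error_rate(alpha_pc: float, c: int) -> float:
    """FW = 1 - (1 - alpha')^c; exact only for independent comparisons."""
    return 1 - (1 - alpha_pc) ** c

# The FW rate grows quickly with the number of tests in the family.
for c in (1, 3, 5, 10):
    print(f"c = {c:2d}: FW = {familywise_error_rate(0.05, c):.4f}")
```

With α' = .05, the FW rate climbs from .05 for a single test to roughly .40 for ten independent tests, which is the point of the formula.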

 

2.   One way to reduce the FW error rate is to choose a small number of a priori tests to run on your data.  However, we often want to test a number of hypotheses, so the most common way to reduce the FW error rate is to use a more conservative α for each test.  One such correction is the Bonferroni correction, in which the desired alpha level is divided by the number of comparisons and the result is used as the new alpha requirement for each comparison (e.g., with an original α = .05 and 5 tests, .05/5 = .01, so a test must reach p < .01 to be considered significant).  You can also reduce the FW error rate by running each test at a stringent alpha level (e.g., α = .001) that is subjectively chosen rather than precisely calculated like the Bonferroni-adjusted α.
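A short Python sketch of the Bonferroni adjustment using the numbers from the example above (the values are taken from that example, not prescribed by the homework):

```python
# Sketch: Bonferroni-corrected per-comparison alpha and its effect on FW.
def bonferroni_alpha(alpha_fw: float, c: int) -> float:
    """Per-comparison alpha that keeps the familywise rate at or below alpha_fw."""
    return alpha_fw / c

alpha, c = 0.05, 5
alpha_pc = bonferroni_alpha(alpha, c)    # .05 / 5 = .01
fw = 1 - (1 - alpha_pc) ** c             # about .049, at or below .05
print(f"corrected alpha = {alpha_pc:.3f}, resulting FW = {fw:.4f}")
```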

 

3.   We adjust our alpha level for post hoc testing because otherwise we could be capitalizing on chance.  If we looked at our data and their means and then picked one test to run, we would undoubtedly test the two groups for which a significant finding seems most likely.  But in order to decide which groups are most likely to be significantly different, we must have eyeballed the data and made cursory comparisons among all the groups.  So although we only end up running one formal test, we have actually made multiple comparisons with our eyes.  Thus, our FW error rate is not our nominal alpha level, but rather approximately (the number of comparisons × α).  If we still hope to retain our original α level (e.g., α = .05), we need to correct the alpha used for each test.  This is why we employ alpha (FW error rate) corrections such as the Bonferroni: to ensure that our results are still unlikely to be due to chance, even though we have made multiple comparisons.
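This "testing with our eyes" problem can be illustrated with a small simulation.  The sketch below (which assumes numpy and scipy are available; the sample sizes and number of simulations are arbitrary) draws several groups from the same population, then t-tests only the most extreme-looking pair at an uncorrected α = .05.  The empirical Type I error rate comes out well above .05:

```python
# Sketch of "capitalizing on chance": draw several groups from the SAME
# population, then test only the two groups with the most extreme means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_groups, n_per_group, alpha = 5000, 5, 20, 0.05
false_positives = 0

for _ in range(n_sims):
    groups = rng.normal(0, 1, size=(n_groups, n_per_group))
    means = groups.mean(axis=1)
    lo, hi = means.argmin(), means.argmax()   # "eyeball" the best-looking pair
    t, p = stats.ttest_ind(groups[hi], groups[lo])
    false_positives += p < alpha

print(f"Empirical Type I error rate: {false_positives / n_sims:.3f}")  # well above .05
```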

 

4.   Yes, we often adjust our alpha level for a priori tests as well.  We do this, as with post hoc corrections, to minimize the FW error rate.  Any time we know we will be running multiple comparisons, we know that our FW rate will be greater than our desired alpha unless we use a corrected, more conservative alpha level for each test.  Thus, whenever we run multiple a priori comparisons, we determine the appropriate corrected α to keep the FW error rate in check.

 

5.   A priori tests always resemble post hoc tests in the sense that the same statistical procedures may be used for either type of test.  Beyond that, a priori tests resemble post hoc tests most closely when many a priori tests are chosen (post hoc tests generally tend to be more numerous).  For example, if you decide a priori to test all possible comparisons, there is little difference between those a priori tests and all the possible post hoc tests that could be done, so the two approaches are very similar in that instance.

 

6.   FW: α = 1 - (1 - .05)^4

FW: α = 1 - .81450625

FW: α = .1855

 

The approximate probability of committing at least one Type I (α) error across the family (or, in other words, the FW error rate) is .1855.
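As a quick check, the arithmetic for question 6 can be reproduced in a couple of lines of Python:

```python
# Check of question 6: alpha = .05, 4 comparisons.
fw = 1 - (1 - 0.05) ** 4
print(round(fw, 4))  # 0.1855
```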