Analysis of Variance Tables for One-Way ANOVA With Type of Product User as the Factor

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F Ratio

Example One
SSB                   1000             2                    500           Infinity
SSW                   0                12                   0
SST                   1000             14

Example Two
SSB                   1000             2                    500           1.56
SSW                   3850             12                   320.8
SST                   4850             14

Please note that the first table above is the ANOVA table built from the "raw" calculations in Example One, and the second is the ANOVA table built from the "raw" calculations in Example Two.
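To see how the Mean Square and F Ratio columns follow from the Sum of Squares and Degrees of Freedom columns, work through Example Two:

    MSB = SSB / (SSB degrees of freedom) = 1000 / 2    = 500
    MSW = SSW / (SSW degrees of freedom) = 3850 / 12   ≈ 320.8
    F   = MSB / MSW                      = 500 / 320.8 ≈ 1.56
    SST = SSB + SSW                      = 1000 + 3850 = 4850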

 

In an ANOVA, you first calculate the sums of squares and then put the resulting figures in a table.  Each sum of squares is taken from your raw calculations; note how they match up.  Note also that there is a relationship among the three sums of squares that makes the calculations easier: SST = SSB + SSW, so once you have any two of them, the third follows directly.  The degrees of freedom above are computed in the following manner (a short computational sketch follows this list):

 

SSB degrees of freedom = number of groups - 1.

SSW degrees of freedom = number of observations - number of groups.

SST degrees of freedom = number of observations - 1.
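Here is a minimal Python sketch of how these pieces fit together.  The three groups of numbers are hypothetical placeholders (they are NOT the raw data from Example One or Example Two); the point is only the mechanics: with 3 groups and 15 observations you get the same degrees of freedom as in the tables above (2, 12, and 14), and SST comes out equal to SSB + SSW.

    # Hypothetical data: 3 groups of 5 observations each (not the raw data
    # from Example One or Example Two -- purely an illustration of the mechanics).
    groups = [
        [23, 27, 25, 24, 26],
        [30, 32, 29, 31, 33],
        [18, 20, 19, 21, 17],
    ]

    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)

    # Between-groups sum of squares: how far each group mean sits from the grand mean.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

    # Within-groups sum of squares: how far each observation sits from its own group mean.
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    # Total sum of squares: how far each observation sits from the grand mean.
    sst = sum((x - grand_mean) ** 2 for x in all_obs)

    # Degrees of freedom, following the formulas above.
    df_between = len(groups) - 1            # number of groups - 1
    df_within = len(all_obs) - len(groups)  # number of observations - number of groups
    df_total = len(all_obs) - 1             # number of observations - 1

    msb = ssb / df_between   # mean square between
    msw = ssw / df_within    # mean square within
    f_ratio = msb / msw

    print(f"SSB = {ssb:.1f}, SSW = {ssw:.1f}, SST = {sst:.1f}")  # SST equals SSB + SSW
    print(f"df:  {df_between}, {df_within}, {df_total}")         # 2, 12, 14 here
    print(f"MSB = {msb:.2f}, MSW = {msw:.2f}, F = {f_ratio:.2f}")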

 

To find the critical value (the table value that the calculated F statistic must exceed in order to reject the null), go to a table of the F distribution (in your text), find your numerator degrees of freedom (in the above examples, 2) and your denominator degrees of freedom (in the above examples, 12).  The value at the point of intersection on the table is your critical value.  In the above examples, it is 3.89 at the .05 level of significance.  So, as you can see, in Example One (with infinity as the F ratio), the calculated value exceeds the critical value, so we would reject the null hypothesis of equality between the means.  In Example Two, the F ratio is 1.56, which is less than the table value of 3.89, so we cannot reject the null hypothesis of no difference between the groups.
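If you want to double-check the table value computationally instead of reading it from the printed F table, a short sketch like the following (assuming Python with scipy is available) reproduces the 3.89 figure and applies the decision rule:

    from scipy import stats

    alpha = 0.05
    df_num, df_den = 2, 12  # numerator and denominator degrees of freedom from the examples

    # Critical value: the point on the F distribution with 5% of the area to its right.
    critical_value = stats.f.ppf(1 - alpha, df_num, df_den)
    print(round(critical_value, 2))  # approximately 3.89

    # Decision rule: reject the null of equal means when the calculated F exceeds this value.
    for f_calculated in (float("inf"), 1.56):  # Example One, then Example Two
        decision = "reject the null" if f_calculated > critical_value else "cannot reject the null"
        print(f_calculated, "->", decision)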

 

Now, to reiterate class discussion about ANOVA problems:

 

1.    The "independent" variable is categorical, with 3 or more levels.

2.    The "dependent" variable is continuous.

3.    What you are doing in a statistical sense is assessing the ratio of between-group variability to within-group variability.  The greater the value of this ratio, the greater the likelihood that you have a statistically significant relationship.  Keep in mind our discussion about the confidence you would have if 10 people told you, with a reasonable degree of consistency, that something was a given value (this is evidenced in Example One's data).  Contrast that scenario with 10 people who, on average, say the same thing but are spread all over the chart: some very positive, some very negative (this is evidenced in Example Two's data).  A short sketch illustrating this contrast follows the list.

4. Again, ANOVA is discussed on pages 478-483 of the textbook.  The book's notation is different from what is in my examples, but it is the same basic formula.
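To make point 3 above concrete, here is a small sketch (again with hypothetical numbers, assuming Python with scipy is available).  Both data sets have identical group means; only the within-group spread differs.  The tightly clustered set, like Example One's data, produces a very large F ratio, while the widely scattered set, like Example Two's data, produces a small one:

    from scipy import stats

    # Hypothetical ratings from three groups of product users.
    # Both scenarios have the same group means (50, 60, 70); only the spread differs.
    consistent = [   # like Example One: answers within each group are very consistent
        [48, 49, 50, 51, 52],
        [58, 59, 60, 61, 62],
        [68, 69, 70, 71, 72],
    ]
    scattered = [    # like Example Two: same averages, but answers are all over the chart
        [10, 30, 50, 70, 90],
        [20, 40, 60, 80, 100],
        [30, 50, 70, 90, 110],
    ]

    for label, groups in (("consistent", consistent), ("scattered", scattered)):
        f_ratio, p_value = stats.f_oneway(*groups)
        print(f"{label}: F = {f_ratio:.2f}, p = {p_value:.4f}")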