In this section I will discuss how to present the results for a one-way between groups ANOVA.

First, you will need to know the basic structure to report the results of an ANOVA.

F(dfB, dfW) = #.##, p < .05

And

F(dfB, dfW) = #.##, p > .05

The first statement is for a significant finding indicating that there is a statistically significant difference among means.

The second statement is for a non-significant result, indicating that there is not a statistically significant difference among the means.

So what does each part mean?

The F is your obtained test statistic. In the case of ANOVA the test statistic is an F. Make sure it is in italic font.

In the parentheses are the degrees of freedom. For ANOVA you must report two values: the between groups degrees of freedom and the within groups degrees of freedom, designated above as dfB and dfW, respectively. The between groups degrees of freedom comes first, followed by the within groups degrees of freedom.
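
If you are not sure where these numbers come from, they are easy to calculate: the between groups degrees of freedom is the number of groups minus one (dfB = k - 1), and the within groups degrees of freedom is the total number of participants minus the number of groups (dfW = N - k). In the first example below there are three groups of 11 participants, so dfB = 3 - 1 = 2 and dfW = 33 - 3 = 30.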

The #.## is the obtained F value for the test.
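
In the output, the obtained F value is the between groups mean square divided by the within groups mean square (F = MSB / MSW). In the first example below, 244.576 / 97.970 ≈ 2.50.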

Finally, you must report the p value associated with the test. This will be either p < .05 or p < .01 for significant findings and p > .05 for non-significant findings. Make sure you use italic font for the p.

Remember to include spaces before and after the equal sign and the greater than or less than sign. You need to make it easy on your reader. Justasthisis difficult to read without spaces, so are the results of statistical tests presented without them.

Example 1

For this example let’s suppose that you deprive individuals of sleep and then evaluate them using a timed addition and subtraction test. You decide to break your volunteers into three groups. One group you keep awake for 24 hours. A second group you keep awake for 32 hours. The third and final group you keep awake for 40 hours before they take the addition and subtraction test.

The results of the analysis are shown below.


Oneway

Descriptives: Test.Score

Group      N    Mean      Std. Deviation   Std. Error   95% CI Lower   95% CI Upper   Minimum   Maximum
24 Hours   11   77.3636   8.61711          2.59816      71.5746        83.1527        66.00     90.00
32 Hours   11   70.9091   11.75972         3.54569      63.0088        78.8094        50.00     90.00
40 Hours   11   68.1818   9.02018          2.71969      62.1220        74.2417        56.00     80.00
Total      33   72.1515   10.35049         1.80179      68.4814        75.8216        50.00     90.00

ANOVA: Test.Score

Source           Sum of Squares   df   Mean Square   F       Sig.
Between Groups   489.152          2    244.576       2.496   .099
Within Groups    2939.091         30   97.970
Total            3428.242         32


Post Hoc Tests

Multiple Comparisons (Bonferroni)
Dependent Variable: Test.Score

(I) Group   (J) Group   Mean Difference (I-J)   Std. Error   Sig.    95% CI Lower   95% CI Upper
24 Hours    32 Hours    6.45455                 4.22051      .410    -4.2476        17.1567
24 Hours    40 Hours    9.18182                 4.22051      .113    -1.5203        19.8839
32 Hours    24 Hours    -6.45455                4.22051      .410    -17.1567       4.2476
32 Hours    40 Hours    2.72727                 4.22051      1.000   -7.9748        13.4294
40 Hours    24 Hours    -9.18182                4.22051      .113    -19.8839       1.5203
40 Hours    32 Hours    -2.72727                4.22051      1.000   -13.4294       7.9748
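
As an aside, the F value in an ANOVA table can be reconstructed from nothing more than the group means, standard deviations, and sample sizes in the Descriptives table. Here is a rough sketch in Python (SciPy is assumed to be available, and the variable names are just for this illustration) that reproduces the ANOVA table above:

# Reconstruct the one-way ANOVA table for Example 1 from the Descriptives output.
# The numbers are copied from the SPSS tables above; this is only a
# back-of-the-envelope check, not something you would report.
from scipy import stats

ns    = [11, 11, 11]                      # group sizes
means = [77.3636, 70.9091, 68.1818]       # group means
sds   = [8.61711, 11.75972, 9.02018]      # group standard deviations

N = sum(ns)                               # total participants (33)
k = len(ns)                               # number of groups (3)
grand_mean = sum(n * m for n, m in zip(ns, means)) / N

ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))  # ~489.15
ss_within  = sum((n - 1) * sd ** 2 for n, sd in zip(ns, sds))           # ~2939.09

df_between, df_within = k - 1, N - k      # 2 and 30
ms_between = ss_between / df_between      # ~244.58
ms_within  = ss_within / df_within        # ~97.97

F = ms_between / ms_within                # ~2.50
p = stats.f.sf(F, df_between, df_within)  # ~.099, so p > .05
print(round(F, 2), round(p, 3))

This reproduces F(2, 30) = 2.50 and p = .099 (that is, p > .05), which is where the values in the write-ups below come from.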

Even though we are discussing how to report the results for an ANOVA, this cannot be done in isolation. You can’t just report F(dfB, dfW) = #.##, p < .05. This alone doesn’t tell you much of anything useful. You must include enough information to give the reader a complete picture of the analysis, thereby allowing him or her to understand what the results may or may not mean. This can mean that you need to give some background information.

You will need to report the basic descriptive information for each group (means and standard deviations). Also, because ANOVA is an omnibus test, it only tells you that there is a difference among the means; it does not tell you where the difference is. Therefore, if the results are statistically significant you will need to run and report post hoc tests to identify which means are different.

Here is one way to write the results.

A one-way ANOVA was computed to determine if there was a statistically significant difference among mean scores on a timed addition and subtraction test for three sleep-deprived groups. One group took the test after staying awake for 24 hours (M = 77.36, SD = 8.62), a second group took the test after staying awake for 32 hours (M = 70.91, SD = 11.76), and a third group took the test after staying awake for 40 hours (M = 68.18, SD = 9.02). The results showed that there was not a significant difference among the mean test scores for the groups, F(2, 30) = 2.50, p > .05.

This is a pretty simple example because there are only three groups and the results did not achieve statistical significance. Therefore, the post hoc comparisons did not need to be reported.

Notice that I have been using the word “among.” With ANOVA you generally have three or more means, so you should use “among.” You use “between” when there are only two groups. For example, “I used a t test to determine if there was a significant difference between means.” If you have three or more groups you use “among.” For example, “I used a one-way ANOVA to determine if there was a significant difference among means.”

Here are a couple of other ways to say the same thing.

A one-way ANOVA showed that there was not a statistically significant difference in performance on a timed math test (F(2, 30) = 2.50, p > .05) for the three sleep-deprived groups. The group that took the test after 24 hours of staying awake (n = 11) had a mean test score of 77.36 (SD = 8.62). The group that took the test after staying awake for 32 hours (n = 11) had a mean score of 70.91 (SD = 11.76). The final group, which took the test after staying awake for 40 hours (n = 11), had a mean score of 68.18 (SD = 9.02).

You may have noticed that in this example I reported the number of individuals in each group. You may or may not want to report this information here. If you reported the number in each group in your method section then you probably don’t need it here. The important thing is to ensure that the reader understands what was done and has enough information to interpret the results.

Participants were randomly assigned to one of three groups. Each group consisted of 11 participants. Group 1 was sleep deprived for 24 hours, Group 2 was sleep deprived for 32 hours, and Group 3 was sleep deprived for 40 hours. Immediately after the sleep deprivation, participants took a timed addition and subtraction test. The results showed that Group 1 performed the best on the test (M = 77.36, SD = 8.62), followed by Group 2 (M = 70.91, SD = 11.76), while Group 3 had the worst test performance (M = 68.18, SD = 9.02). However, a one-way ANOVA showed that there was not a significant difference in mean test performance among the groups, F(2, 30) = 2.50, p > .05.

Example 2

Let’s suppose that for a research methods course you have to complete a research project. After much thought you decide to examine whether the job satisfaction of your professors varies based on their rank. Not every professor is a full professor. Typically the lowest (or entry) level is assistant professor. After a few years he or she can be promoted to associate professor, and later to the rank of professor.

After getting all of the necessary approvals you send your survey invitation out and 25 faculty members complete your survey. You run the analyses and get the output below. Job satisfaction was measured on a 5-point scale with higher numbers indicating greater job satisfaction.

Descriptives: Job Satisfaction

Group                 N    Mean   Std. Deviation   Std. Error   95% CI Lower   95% CI Upper   Minimum   Maximum
Assistant Professor   10   3.01   0.93             0.29         2.34           3.68           2.00      4.00
Associate Professor   8    4.05   0.72             0.26         3.45           4.65           3.00      5.00
Professor             7    4.29   0.76             0.29         3.59           4.98           3.00      5.00
Total                 25   3.70   0.98             0.20         3.30           4.10           2.00      5.00

ANOVA: Job Satisfaction

Source           Sum of Squares   df   Mean Square   F       Sig.
Between Groups   8.142            2    4.071         6.012   0.008
Within Groups    14.898           22   0.677
Total            23.04            24

Multiple Comparisons (Tukey HSD)
Dependent Variable: Job Satisfaction

(I) Rank              (J) Rank              Mean Difference (I-J)   Std. Error   Sig.    95% CI Lower   95% CI Upper
Assistant Professor   Associate Professor   -1.04000*               0.39034      0.036   -2.0205        -0.0595
Assistant Professor   Professor             -1.27571*               0.40553      0.013   -2.2944        -0.2570
Associate Professor   Assistant Professor   1.04000*                0.39034      0.036   0.0595         2.0205
Associate Professor   Professor             -0.23571                0.42589      0.846   -1.3056        0.8341

As with the first example, the results of the ANOVA cannot be reported in isolation. You can’t just report F(dfB, dfW) = #.##, p < .05; by itself this doesn’t tell the reader much. You must include enough information to give the reader a complete picture of the analysis, thereby allowing him or her to understand what the results may or may not mean. This means that you need to give some background information.

You will need to report the basic descriptive information for each group (means and standard deviations). Remember that ANOVA is an omnibus test: it only tells you that there is a difference among the means; it does not tell you where the difference is. Therefore, if the results are statistically significant you will need to run and report post hoc tests to identify which means are different. In the first example the results were not significant. In this example they are, so you will also need to report the results of the post hoc tests.
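
As a side note, if you are working with the raw data in Python rather than SPSS output, the same omnibus test and Tukey HSD follow-up can be run in a few lines with SciPy and statsmodels (both assumed to be installed). The ratings below are made-up placeholders, since the actual survey responses are not shown above, so the output of this sketch will not match the tables; it is only meant to show the mechanics.

# Hypothetical workflow: omnibus one-way ANOVA followed by Tukey HSD post hoc tests.
# The ratings are placeholders for illustration only, not the real survey data.
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

assistant = [2, 3, 4, 3, 2, 4, 3, 3, 2, 4]    # 10 placeholder ratings
associate = [4, 5, 4, 3, 4, 5, 4, 3]          # 8 placeholder ratings
professor = [5, 4, 4, 3, 5, 4, 5]             # 7 placeholder ratings

# Omnibus test: is there any difference among the three mean ratings?
F, p = stats.f_oneway(assistant, associate, professor)
print(F, p)

# Post hoc test: which pairs of means differ?
ratings = assistant + associate + professor
ranks = (["Assistant"] * len(assistant)
         + ["Associate"] * len(associate)
         + ["Professor"] * len(professor))
print(pairwise_tukeyhsd(ratings, ranks, alpha=0.05))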

Below are several examples of how you could write up these results. There are countless ways to report the results and these are just a few.

Ten assistant professors, eight associate professors, and seven professors completed and returned the survey. The results showed that professors had the highest job satisfaction (M = 4.29, SD = 0.76). This was followed by associate professors (M = 4.05, SD = 0.72). Assistant professors had the lowest job satisfaction (M = 3.01, SD = 0.93). An ANOVA showed that there was a statistically significant difference among the means, F(2, 22) = 6.01, p < .01. Follow-up post hoc comparisons using Tukey’s HSD showed that assistant professors had significantly lower job satisfaction than both associate professors and professors. The job satisfaction of professors and associate professors was not significantly different.

The findings indicated that assistant professors (n = 10) had the lowest job satisfaction (M = 3.01, SD = 0.93) and professors (n = 7) had the highest job satisfaction (M = 4.29, SD = 0.76). The job satisfaction of associate professors (n = 8, M = 4.05, SD = 0.72) fell between that of assistant professors and professors. Based on the results of an ANOVA, there was a statistically significant difference among the means, F(2, 22) = 6.01, p < .01. Post hoc tests using Tukey’s HSD showed that assistant professors had significantly lower job satisfaction than both associate professors and professors. The job satisfaction of professors and associate professors was not significantly different.

The mean job satisfaction for each rank was as follows: professors, M = 4.29, SD = 0.76; associate professors, M = 4.05, SD = 0.72; assistant professors, M = 3.01, SD = 0.93. An ANOVA indicated that job satisfaction varied based on professor rank, F(2, 22) = 6.01, p < .01. Using Tukey’s HSD, it was found that associate professors and professors had significantly higher job satisfaction than assistant professors.