# One-way ANOVA
This workshop deals with one-way ANOVAs in R. There is a separate document to work
through two-way ANOVAs. This workshop is based on materials from Childs, 2018. We’ll work
through some examples initially but first we need to consider ‘factors’ in R. Make sure you
are confident in downloading data and performing an ANOVA before the end of the session.
As always, code is highlighted in grey and tasks for you to do are in red.
## Factors in R
Remember factors? An experimental factor is a controlled variable whose levels (‘values’) the
experimenter sets. R is primarily designed to carry out data analysis so we should not be
surprised that it has a special type of vector to represent factors. This kind of vector in R is
called, rather sensibly, a factor. We have largely ignored factors in R up until now because we
haven’t needed to use them. However, we now need to understand how they work because
much of R’s plotting and statistical modelling facilities rely on factors.
We’ll look at the corn crake data (stored in corn_crake) to begin getting a sense of how they
work. Download the corncrake data from VITAL and read it into R, named corn_crake.
N.B. You can call it something else here; however, if you do, you will need to replace
corn_crake wherever it appears in the code in this document with whatever you have called
your data. My code won’t work otherwise, as it won’t be able to find corn_crake if you have
called it something different.
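If you are unsure how to read the file in, here is a minimal sketch (assuming you saved the file as corn_crake.csv in your working directory; your filename and path may differ):

```r
# Assumes the file is saved as "corn_crake.csv" in your working directory;
# adjust the path to wherever you downloaded it from VITAL.
corn_crake <- read.csv("corn_crake.csv", stringsAsFactors = TRUE)
head(corn_crake)  # check the first few rows look sensible
```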
First, we need to be able to actually recognise a factor when we see one. Here is the result of
using str (short for ‘structure’) with the corn crake data:

```r
str(corn_crake)
```
This tells us that there are 40 observations in the data set and 2 variables (columns),
called $Supplement and $WeightGain. The text next to $ Supplement says Factor (you may
see this abbreviated as <fctr> in some displays). We can guess what that stands for… it is
telling us that the Supplement vector inside corn_crake is a factor. The Supplement factor was
created when we read the data stored in corn_crake.csv into R. When read.csv reads in a
column of non-numeric data, it will often turn it into a factor for us (note that in R 4.0 and
later this is no longer the default; you may need to set stringsAsFactors = TRUE when reading,
or convert the column afterwards with factor()).
R is generally fairly good at alerting us to the fact that a variable is stored as a factor. For
example, look what happens if we extract the Supplement column from corn_crake and print
it to the screen:

```r
corn_crake$Supplement
```

This prints out the values of each element of the vector, but look at the last line:
This alerts us to the fact that Supplement is a factor, with 5 levels (the different supplements).
Look at the order of the levels: these are alphabetical by default. Remember that! The order
of the levels in a factor can be important as it controls the order of plotting and it affects the
way R presents the summaries of statistics. This is why we are introducing factors now—we
are going to need to manipulate the levels of factors to alter the way our data are plotted and
analysed.
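To see how the default ordering works, and how to change it, here is a minimal sketch using a small hypothetical factor (not the corncrake data):

```r
# A small hypothetical factor, to illustrate level ordering
supplement <- factor(c("None", "Sizefast", "None", "Allvit"))
levels(supplement)  # alphabetical by default: "Allvit" "None" "Sizefast"
# Re-order the levels explicitly, e.g. to put the control first:
supplement <- factor(supplement, levels = c("None", "Allvit", "Sizefast"))
levels(supplement)  # now: "None" "Allvit" "Sizefast"
```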
## Visualising the data
As always before we fit a model we should plot our data. We can do this using a boxplot. You
know how to do this from last week and the data visualisation workshop.
Produce a boxplot examining the variability in weight gain according to supplement with
appropriate axis labels.
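If you get stuck, here is a minimal sketch using base R graphics (a ggplot2 version from the data visualisation workshop would work equally well; the axis labels are only suggestions):

```r
# Boxplot of weight gain by supplement; assumes corn_crake is loaded
boxplot(WeightGain ~ Supplement, data = corn_crake,
        xlab = "Supplement", ylab = "Weight gain")
```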
## Fitting the ANOVA model
Carrying out ANOVA in R is quite simple, but as with regression, there is more than one step.
The first involves a process known as fitting the model (or just model fitting). This is the step
where R calculates the relevant means, along with the additional information needed to
generate the results in step two. We call this step model fitting because an ANOVA is a type
of model for our data: it is a model that allows the mean of a variable to vary among groups.
How do we fit an ANOVA model in R? We will do it using the lm function. ‘lm’ in this case
stands for ‘(general) linear model’. So… an ANOVA model is just a special case of the general
linear model. Here is how we fit a one-way ANOVA in R, using the corncrake data:

```r
corncrake_model <- lm(WeightGain ~ Supplement, data = corn_crake)
```
Hopefully by now this kind of thing is starting to look familiar. We have to supply two
arguments:
1. The first argument is a formula. We know this because it includes a ‘tilde’ symbol: ~.
The variable name on the left of the ~ must be the numeric response variable whose
means we want to compare among groups. The variable on the right should be the
indicator (or predictor) variable that says which group each observation belongs to.
These are WeightGain and Supplement, respectively.
2. The second argument is the name of the data frame that contains the two variables
listed in the formula. As you have included this, there is no need to include the $ to
indicate the column; you have told R which dataset to use.
Notice that we did not print the results to the console. Instead, we assigned the result a
name: corncrake_model now refers to a model object. What happens if we print this to the
console?

```r
corncrake_model
```
Not a great deal. Printing a fitted model object to the console is not very useful when working
with ANOVA. We just see a summary of the model we fitted and some information about the
coefficients of the model. Yes, an ANOVA model has coefficients, as you will see with
regressions.
## Interpreting the results
What we really want is a p-value to help us determine whether there is statistical support for
a difference among the group means. That is, we need to calculate things like degrees of
freedom, sums of squares, mean squares, and the F-ratio. This is step 2.
We use the anova function to do this:

```r
anova(corncrake_model)
```
Notice that all we did was pass the anova function one argument: the name of the fitted
model object. Let’s step through the output to see what it means. The first line just informs
us that we are looking at an ANOVA table, i.e. a table of statistical results from an analysis of
variance. The second line just reminds us what variable we analysed.
The important information is in the table that follows:
This is an Analysis of Variance Table. It summarises the parts of the ANOVA calculation: Df –
degrees of freedom, Sum Sq – the sum of squares, Mean Sq – the mean square, F value –
the F-ratio (i.e. variance ratio), Pr(>F) – the p-value.
The F-ratio (variance ratio) is the key term. This is the test statistic. It provides a measure of
how large and consistent the differences between the means of the five different treatments
are. Larger values indicate clearer differences between means, in just the same way that large
values of Student’s t indicate clearer differences between means in the two sample situation.
The p-value gives the probability that the observed differences between the means, or a more
extreme difference, could have arisen through sampling variation under the null hypothesis.
What is the null hypothesis? It is one of no effect of treatment, i.e. that all the group means
are the same. As always, 0.05 is used as the significance threshold, and we take p < 0.05 as
evidence that at least one of the treatments is having an effect. For the corncrake data, the
F value is 5.1, and the probability (p) of getting an F-ratio this large is given by R as 0.0023,
i.e. less than 0.05. This provides good evidence that there are differences in weight gain
between at least some of the treatments.
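If it helps to see where these numbers come from, the mean squares and the F-ratio are simple arithmetic on the Df and Sum Sq columns; here is a minimal sketch with made-up sums of squares (not the real corncrake values):

```r
# Hypothetical sums of squares and degrees of freedom, for illustration only
ss_treatment <- 1200; df_treatment <- 4
ss_error     <- 2100; df_error     <- 35
ms_treatment <- ss_treatment / df_treatment  # mean square = SS / df
ms_error     <- ss_error / df_error
f_ratio      <- ms_treatment / ms_error
f_ratio  # 5
# p-value: probability of an F-ratio this large under the null hypothesis
pf(f_ratio, df_treatment, df_error, lower.tail = FALSE)
```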
So far so good. The test that we have just carried out is called the global test of significance.
It goes by this name because it doesn’t tell us anything about which means are different. The
analyses suggest that there is an effect of supplement on weight gain, but some uncertainty
remains because we have only established that there are differences among at least some
supplements. A global test doesn’t say which supplements are better or worse. This could be
very important. If the significant result is generated by all supplements being equally effective
(hence differing from the control but not from each other) we would draw very different
conclusions than if the result was a consequence of one supplement being very effective and
all the others being useless. Our result could even be produced by the supplements all being
less effective than the control!
So having got a significant result in the ANOVA, we should always look at the means of the
treatments to understand where the differences actually lie. Look back at the boxplot you
produced earlier.
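The group means can also be computed directly; a minimal sketch, assuming corn_crake is still loaded:

```r
# Mean weight gain for each supplement group
tapply(corn_crake$WeightGain, corn_crake$Supplement, mean)
```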
What looking at the means tells us is that the effect of the supplements is generally to increase
weight gain (with one exception, ‘Allvit’) relative to the control group (‘None’), and that it
looks as though ‘Earlybird’ is the most effective, followed by ‘Sizefast’ and ‘Linseed’.
Often inspection of the means in this way will tell us all we need to know and no further work
will be required. However, sometimes it is desirable to have a more rigorous way of testing
where the significant differences between treatments occur. A number of tests exist as ‘add
ons’ to ANOVA which enable you to do this. These are called post hoc multiple comparison
tests (sometimes just ‘multiple comparison tests’). We’ll see how to conduct these in
the ‘Multiple comparison tests’ document.
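As a preview, one widely used option in base R is Tukey’s honest significant difference test; a minimal sketch, noting that TukeyHSD() needs a model fitted with aov() rather than lm():

```r
# Tukey's HSD post hoc test; assumes corn_crake is loaded
corncrake_aov <- aov(WeightGain ~ Supplement, data = corn_crake)
TukeyHSD(corncrake_aov)  # pairwise differences with adjusted p-values
```

Each row of the output compares one pair of supplements, with a confidence interval and an adjusted p-value.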
## Summarising and presenting the results of ANOVA
As with all tests it will usually be necessary to summarise the result from the test in a written
form. With an ANOVA on several treatments, we always need to at least summarise the
results of the global test of significance, for example:
There was a significant effect of supplement on the weight gain of the corncrake hatchlings
(ANOVA: F=5.1; d.f.= 4,35; p<0.01).
There are several things to notice here:
• The degrees of freedom are always quoted as part of the result, and…there are two
values for the degrees of freedom to report in ANOVA because it involves F-ratios.
These are obtained from the ANOVA table and should be given as the treatment
degrees of freedom first, followed by the error degrees of freedom. Order matters.
Don’t mix it up.
• The degrees of freedom are important because, like a t-statistic, the significance of
an F-ratio depends on the degrees of freedom, and giving them helps the reader to
judge the result you are presenting. A large value may not be very significant if the
sample size is small, a smaller value may be highly significant if the sample sizes are
large.
• The F-ratio rarely needs to be quoted to more than one decimal place.
When it comes to presenting the results in a report, it helps to present the means, as the
statement above cannot entirely capture the results. We could use a table to do this, but
tables are ugly and difficult to interpret; alternatively, the means could be reported as part
of the statistical sentence.
## Do it yourself!
The dataset, antid.csv, shows the wellbeing of 15 randomly selected volunteers, each placed
in one of 3 treatment groups in a clinical trial testing the effect of an anti-depressant drug:
high dose, low dose, or a placebo. (Wellbeing is self-reported on a 7-point scale, with high
scores indicating high wellbeing.)
Using the skills you have learnt from this ANOVA workshop, can you test whether there are
differences between the 3 groups? Include an appropriate figure in your analysis.