My colleague, Amanda Avelar de Oliveira, gave a presentation on ASRgenomics in which she talked a lot about A, G and H matrices. I was unfamiliar with these terms so afterwards I tried Googling and got absolutely nowhere! Lots of lengthy papers and highly detailed explanations, but no short and simple definitions. So, I had a chat with Amanda to try and get a straight answer. Here’s what I learned.
Jane
During your ASRgenomics [1] talk I was Googling "What is the difference between genomic and pedigree matrices". I gave up.
I imagine the explanation is something like this: the A matrix describes relatedness worked out from the pedigree (the family tree), while the G matrix describes relatedness worked out from DNA markers. But this is just my guess!
Amanda
Your definition is right, but in a very informal way. Let’s consider this passage from the book Genetic Data Analysis for Plant and Animal Breeding by Fikret Isik, James Holland and Christian Maltecca.
Traditional genetic evaluations combined the phenotypic data and resemblance coefficients between relatives to predict the genetic merit of individuals. The resemblance coefficients, derived from pedigrees, are based on probabilities that alleles are identical by descent (IBD). The resulting matrix of pairwise pedigree relationships is referred to as the A matrix because the elements in the matrix are pedigree-based estimates of additive genetic relationships.
More recently, genetic markers distributed throughout the entire genome have been used to measure genetic similarities more precisely than by using pedigree information (VanRaden, 2008). Genetic markers estimate the proportion of chromosome segments shared by individuals based on the identical by state (IBS) matching of marker alleles. The matrix of pairwise realized genomic relationships estimated from marker information is referred to as the G matrix.
A DNA segment in two or more individuals is identical by descent (IBD) if it has been inherited from a common ancestor, and identical by state (IBS) if the alleles simply match, whether or not they trace back to a common ancestor.

Jane
OK, so if we can get genomic relationship information, why would we even bother with the A matrix any more (unless we don't have access to DNA data)? Why does it make sense to combine the A and G matrices to create an H matrix? Why not just chuck out the A matrix?
Amanda
Basically, to get genotypic information we need to genotype the individuals in our population, which is really expensive! Pedigree information can be easier and cheaper to obtain. In animal breeding, for example, breeders have very good pedigree information – they basically know the whole ancestry of an individual for generations! In plants, it is less common to have a good pedigree, especially for annual crops.
Jane
So, the pedigree information is still valuable, particularly if you only have partial genomic information.
Amanda
Absolutely. And when you trust your pedigree, you can also use it to identify possible mistakes in your genomic data.
Jane
Ooer, how can genomic data be mistaken? Perhaps the person collecting the data mixed up the samples, for example?
Amanda
Yes. Or even errors in the lab, contamination by pollen or semen, and so on. There are many possible sources of error.
Jane
So, the final H matrix, combining the A and G matrices, is useful... how?
Amanda
To ensure good-quality genomic data, individuals need to be genotyped at high coverage. This is a very expensive process, and generally people cannot afford to do much of it. The H matrix is useful because it combines the information from the pedigree with genomic marker information, enabling you to use all the information on genetic similarity you have. For example, if you only have money to genotype 100 individuals, but you have pedigree (and phenotype) information on 1000 individuals, you can run your analysis on all 1000 individuals by using an H matrix to combine the genomic and pedigree information. We call it the H matrix because it’s a hybrid of the pedigree-based A matrix and the genomic-based G matrix.
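For readers who want to see what "combining" looks like algebraically, one widely used form from the single-step literature builds the inverse of H from the inverse of A, with a correction in the block belonging to the genotyped individuals. This is shown for orientation only and is not necessarily the exact construction used by ASRgenomics:

$$
\mathbf{H}^{-1} = \mathbf{A}^{-1} +
\begin{bmatrix}
\mathbf{0} & \mathbf{0}\\
\mathbf{0} & \mathbf{G}^{-1} - \mathbf{A}_{22}^{-1}
\end{bmatrix}
$$

where $\mathbf{A}_{22}$ is the block of the pedigree-based A matrix for the genotyped individuals. Individuals without genotypes still contribute through A, which is exactly the "use all the information you have" idea described above.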
Jane
Great! That makes sense. Thank you so much.
Amanda
No problem.
A matrix: contains pedigree-based estimates of additive genetic relationships.
G matrix: contains realized genomic relationships estimated from marker information.
H matrix: contains combined estimates of the genetic relationships from the pedigree and genomic (i.e., marker) information.
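To make the G matrix definition concrete, here is a minimal base-R sketch of a VanRaden (2008)-style genomic relationship matrix built from a toy marker matrix. The object names (M, p, Z, G) and the genotype values are invented for illustration; real pipelines (for example via ASRgenomics) also involve marker filtering and other quality control before a G matrix is computed.

# Toy marker matrix: 3 individuals (rows) x 4 SNPs (columns), coded as 0/1/2
# copies of the reference allele. These values are made up for illustration.
> M <- matrix(c(0, 1, 2, 1,
                1, 1, 0, 2,
                2, 0, 1, 1),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("ind1", "ind2", "ind3"), paste0("snp", 1:4)))
> p <- colMeans(M) / 2                         # allele frequencies estimated from the sample
> Z <- sweep(M, 2, 2 * p)                      # centre each marker by twice its allele frequency
> G <- tcrossprod(Z) / (2 * sum(p * (1 - p)))  # realized genomic relationships (VanRaden 2008)
> round(G, 3)                                  # diagonals relate to inbreeding; off-diagonals to pairwise relationships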
[1] ASRgenomics is an R package that provides a series of molecular and genetic routines to assist in analytical pipelines for Genomic Selection and/or Genome-Wide Association Studies (GWAS). It can be used both before and after genomic analysis by ASReml-R (or another R library). Its main routines assist with quality control of marker and pedigree data and with constructing the kinds of relationship matrices discussed above.

Amanda Avelar de Oliveira is an agronomist with an M.Sc. and Ph.D. in Genetics and Plant Breeding from the University of São Paulo (ESALQ/USP). She has experience in quantitative genetics, genomic prediction, field trial analysis and genotyping pipelines. Currently, she works as a consultant at VSN International, UK.
“I believe in the power of knowledge sharing and multidisciplinary efforts to increase genetic gains in plant breeding while ensuring sustainability in agriculture”.
Jane Cohen is a technical author with a bachelor's degree in information technology and a graduate diploma in technical communication. She is a self-confessed science nerd with a keen interest in biology and statistics. When she's not honing her DIY skills she can generally be found either with a book in hand or perambulating in the countryside.
Related Reads
Kanchana Punyawaew
This blog illustrates how to analyze data from a field experiment with a balanced lattice square design using linear mixed models. We’ll consider two models: the balanced lattice square model and a spatial model.
The example data are from a field experiment conducted at Slate Hall Farm, UK, in 1976 (Gilmour et al., 1995). The experiment was set up to compare the performance of 25 varieties of barley and was designed as a balanced lattice square with six replicates laid out in a 10 x 15 rectangular grid. Each replicate contained exactly one plot for every variety. The variety grown in each plot, and the coding of the replicates and lattice blocks, is shown in the field layout below:
There are seven columns in the data frame: five blocking factors (Rep, RowRep, ColRep, Row, Column), one treatment factor, Variety, and the response variate, yield.
The six replicates are numbered from 1 to 6 (Rep). The lattice block numbering is coded within replicates. That is, within each replicate the lattice rows (RowRep) and lattice columns (ColRep) are both numbered from 1 to 5. The Row and Column factors define the row and column positions within the field (rather than within each replicate).
To analyze the response variable, yield, we need to identify the two basic components of the experiment: the treatment structure and the blocking (or design) structure. The treatment structure consists of the set of treatments, or treatment combinations, selected to study or to compare. In our example, there is one treatment factor with 25 levels, Variety (i.e. the 25 different varieties of barley). The blocking structure of replicates (Rep), lattice rows within replicates (Rep:RowRep), and lattice columns within replicates (Rep:ColRep) reflects the balanced lattice square design. In a mixed model analysis, the treatment factors are (usually) fitted as fixed effects and the blocking factors as random.
The balanced lattice square model is fitted in ASReml-R 4 using the following code:
> lattice.asr <- asreml(fixed = yield ~ Variety,
                        random = ~ Rep + Rep:RowRep + Rep:ColRep,
                        data = data1)
The REML log-likelihood is 707.786.
The model’s BIC is approximately 1435; we will use this below to compare models.
The estimated variance components measure the inherent variability of each term in the random model, over and above the variability of the subunits of which it is composed. The variance components for Rep, Rep:RowRep and Rep:ColRep are estimated as 4263, 15596, and 14813, respectively; in this trial, the lattice rows and columns within replicates are more variable than the replicates themselves. The "units!R" component is the residual variance.
By default, fixed effects in ASReml-R 4 are tested using sequential Wald tests:
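The Wald table itself is not reproduced in this post, but as a sketch it can be requested from the fitted model object:

> wald.asreml(lattice.asr)   # sequential Wald tests for the fixed effects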
In this example, there are two terms in the summary table: the overall mean, (Intercept), and Variety. As the tests are sequential, the effect of Variety is assessed by calculating the change in sums of squares between the two models (Intercept)+Variety and (Intercept). The p-value (Pr(Chisq)) of < 2.2 × 10⁻¹⁶ indicates that Variety is highly significant.
The predicted variety means can be obtained using the predict() function, as sketched below. The standard error of the difference between any pair of variety means is 62. Note: all variety means have the same standard error because the design is balanced.
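A sketch of the call, assuming the lattice.asr object fitted above (the classify and sed arguments are standard in ASReml-R 4's predict method, although the exact output components can vary between versions):

> lattice.pred <- predict(lattice.asr, classify = "Variety", sed = TRUE)
> lattice.pred$pvals   # predicted variety means with standard errors
> lattice.pred$sed     # matrix of standard errors of differences between pairs of means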
Note: the same analysis is obtained when the random model is redefined as replicates (Rep), rows within replicates (Rep:Row) and columns within replicates (Rep:Column).
As the plots are laid out in a grid, the data can also be analyzed using a spatial model. We’ll illustrate spatial analysis by fitting a model with a separable first-order autoregressive process in the field row (Row) and field column (Column) directions. This is often a useful model to start the spatial modeling process.
The separable first-order autoregressive spatial model is fitted in ASReml-R 4 using the following code:
> spatial.asr <- asreml(fixed = yield ~ Variety,
                        residual = ~ ar1(Row):ar1(Column),
                        data = data1)
The BIC for this spatial model is approximately 1415.
From the estimated variance components and sequential Wald tests, the residual variance is 38713, the estimated row correlation is 0.458, and the estimated column correlation is 0.684. As for the balanced lattice square model, there is strong evidence of a Variety effect (p-value < 2.2 × 10⁻¹⁶).
A log-likelihood ratio test cannot be used to compare the balanced lattice square model with the spatial model, as the variance models are not nested. However, the two models can be compared using BIC. As the spatial model has a smaller BIC (1415) than the balanced lattice square model (1435), of the two models explored in this blog it is chosen as the preferred model. However, selecting the optimal spatial model can be difficult. The current spatial model could be extended by including measurement error (or a nugget effect) or revised by selecting a different variance model for the spatial effects.
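As a small sketch of that comparison, the information criteria can be pulled out of the fitted objects, assuming (as in recent ASReml-R 4 releases) that summary() on an asreml object returns aic and bic components; check the documentation for your version:

> summary(lattice.asr)$bic   # balanced lattice square model
> summary(spatial.asr)$bic   # spatial ar1 x ar1 model; the smaller BIC is preferred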
Butler, D.G., Cullis, B.R., Gilmour, A.R., Gogel, B.G. and Thompson, R. (2017). ASReml-R Reference Manual Version 4. VSN International Ltd, Hemel Hempstead, HP2 4TP, UK.
Gilmour, A.R., Anderson, R.D. and Rae, A.L. (1995). The analysis of binomial data by a generalised linear mixed model. Biometrika 72: 593-599.
Dr. John Rogers
Earlier this year I had an enquiry from Carey Langley of VSNi as to why I had not renewed my Genstat licence. The truth was simple – I have decided to fully retire after 50 years as an agricultural entomologist / applied biologist / consultant. This prompted some reflections about the evolution of bioscience data analysis that I have experienced over that half century, a period during which most of my focus was on the interaction between insects and their plant hosts; both how insect feeding impacts on plant growth and crop yield, and how plants impact on the development of the insects that feed on them and on their natural enemies.
My journey into bioscience data analysis started with undergraduate courses in biometry – yes, it was an agriculture faculty, so it was biometry not statistics. We started doing statistical analyses using full keyboard Monroe calculators (for those of you who don’t know what I am talking about, you can find them here). It was a simpler time and as undergraduates we thought it was hugely funny to divide 1 by 0 until the blue smoke came out…
After leaving university in the early 1970s, I started working for the Agriculture Department of an Australian state government, at a small country research station. Statistical analysis was rudimentary to say the least. If you were motivated, there was always the option of running analyses yourself by hand, given the appearance of the first scientific calculators in the early 1970s. If you wanted a formal statistical analysis of your data, you would mail off a paper copy of the raw data to Biometry Branch… and wait. Some months later, you would get back your ANOVA, regression, or whatever the biometrician thought appropriate to do, on paper with some indication of what treatments were different from what other treatments. Dose-mortality data was dealt with by manually plotting data onto probit paper.
In-house ANOVA programs running on central mainframes were a step forward some years later as they at least enabled us to run our own analyses, as long as you wanted to do an ANOVA… However, it also required a 2-hour drive to the nearest card reader, with the actual computer a further 1000 kilometres away… The first desktop computer I used for statistical analysis was in the early 1980s and was a CP/M machine with two 8-inch floppy discs with, I think, 256k of memory, and booting it required turning a key and pressing the blue button – yes, really! And about the same time, the local agricultural economist drove us crazy extolling the virtues of a program called Lotus 1-2-3!
Having been brought up on a solid diet of the classic texts such as Steel and Torrie, Cochran and Cox, and Sokal and Rohlf, the primary frustration during this period was not having ready access to the statistical analyses you knew were appropriate for your data. Typical modes of operating for agricultural scientists in that era were randomised blocks of various degrees of complexity, thus the emphasis on ANOVA in the software that was available in-house. Those of us who also had less-structured ecological data were less well catered for.
My first access to a comprehensive statistics package was during the early to mid-1980s at one of the American Land Grant universities. It was a revelation to be able to run virtually whatever statistical test was deemed necessary. Access to non-linear regression was a definite plus, given the non-linear nature of many biological responses. As well, being able to run a series of models to test specific hypotheses opened up new options for more elegant and insightful analyses. Looking back from 2021, such things look very trivial, but compared to where we came from in the 1970s, they were significant steps forward.
My first exposure to Genstat, VSNi’s stalwart statistical software package, was Genstat for Windows, Third Edition (1997). Simple things like the availability of residual plots made a difference for us entomologists, given that much of our data had non-normal errors; it took the guesswork out of whether and what transformations to use. The availability of regressions with grouped data also opened some previously closed doors.
After a deviation away from hands-on research, I came back to biological data analysis in the mid-2000s and found myself working with repeated-measures and survival/mortality data, so ventured into repeated-measures restricted maximum likelihood analyses and generalised linear mixed models for the first time (with assistance from a couple of Roger Payne’s training courses in Hobart and Queenstown). Looking back, it is interesting how quickly I became blasé about such computationally intensive analyses that would run in seconds on my laptop or desktop, forgetting that I was doing ANOVAs by hand 40 years earlier when John Nelder was developing generalised linear models. How the world has changed!
Of importance to my Genstat experience was the level of support that was available to me as a Genstat licensee. Over the last 15 years or so, as I attempted some of these more complex analyses, my aspirations were somewhat ahead of my abilities, and it was always reassuring to know that Genstat Support was only ever an email away. A couple of examples will flesh this out.
Back in 2008, I was working on the relationship between insect-pest density and crop yield using R2LINES, but had extra linear X’s related to plant vigour in addition to the measure of pest infestation. A support-enquiry email produced an overnight response from Roger Payne that basically said, “Try this”. While I slept, Roger had written an extension to R2LINES to incorporate extra linear X’s. This was later incorporated into the regular releases of Genstat. This work led to the clearer specification of the pest densities that warranted chemical control in soybeans and dry beans (https://doi.org/10.1016/j.cropro.2009.08.016 and https://doi.org/10.1016/j.cropro.2009.08.015).
More recently, I was attempting to disentangle the effects on caterpillar mortality of the two Cry insecticidal proteins in transgenic cotton and, while I got close, I would not have got the analysis to run properly without Roger’s support. The data was scant in the bottom half of the overall dose-response curves for both Cry proteins, but it was possible to fit asymptotic exponentials that modelled the upper half of each curve. The final double-exponential response surface I fitted with Roger’s assistance showed clearly that the dose-mortality response was stronger for one of the Cry proteins than the other, and that there was no synergistic action between the two proteins (https://doi.org/10.1016/j.cropro.2015.10.013).
One thing that I especially appreciate about having access to a comprehensive statistics package such as Genstat is having the capacity to tease apart biological data to get at the underlying relationships. About 10 years ago, I was asked to look at some data on the impact of cold stress on the expression of the Cry2Ab insecticidal protein in transgenic cotton. The data set was seemingly simple – two years of pot-trial data where groups of pots were either left out overnight or protected from low overnight temperatures by being moved into a glasshouse, plus temperature data and Cry2Ab protein levels. A REML analysis and some correlations and regressions enabled me to show that cold overnight temperatures did reduce Cry2Ab protein levels, that the effects occurred for up to 6 days after the cold period and that the threshold for these effects was approximately 14°C (https://doi.org/10.1603/EC09369). What I took from this piece of work is how powerful a comprehensive statistics package can be in teasing apart important biological insights from seemingly very simple data. Note that I did not use any statistics that were cutting edge, just a combination of REML, correlation and regression analyses, but used these techniques to guide the dissection of the relationships in the data to end up with an elegant and insightful outcome.
Looking back over 50 years of work, one thing stands out for me: the huge advances that have occurred in the statistical analysis of biological data have allowed much more insightful analyses which have, in turn, allowed biological scientists to more elegantly pull apart the interactions between insects and their plant hosts.
For me, Genstat has played a pivotal role in that process. I shall miss it.
Dr John Rogers
Research Connections and Consulting
St Lucia, Queensland 4067, Australia
Phone/Fax: +61 (0)7 3720 9065
Mobile: 0409 200 701
Email: john.rogers@rcac.net.au
Alternate email: D.John.Rogers@gmail.com
Kanchana Punyawaew and Dr. Vanessa Cave
The term "repeated measures" refers to experimental designs or observational studies in which each experimental unit (or subject) is measured repeatedly over time or space. "Longitudinal data" is a special case of repeated measures in which variables are measured over time (often for a comparatively long period of time) and duration itself is typically a variable of interest.
In terms of data analysis, it doesn’t really matter what type of data you have, as you can analyze both using mixed models. Remember, the key feature of both types of data is that the response variable is measured more than once on each experimental unit, and these repeated measurements are likely to be correlated.
To illustrate the use of mixed model approaches for analyzing repeated measures, we’ll examine a data set from Landau and Everitt’s 2004 book, “A Handbook of Statistical Analyses using SPSS”. Here, a double-blind, placebo-controlled clinical trial was conducted to determine whether an estrogen treatment reduces postnatal depression. Sixty-three subjects were randomly assigned to one of two treatment groups: placebo (27 subjects) and estrogen treatment (36 subjects). Depression scores were measured on each subject at baseline, i.e. before randomization (predep), and at six two-monthly visits after randomization (postdep at visits 1-6). However, not all the women in the trial had their depression score recorded on all scheduled visits.
In this example, the data were measured at fixed, equally spaced, time points. (Visit is time as a factor and nVisit is time as a continuous variable.) There is one between-subject factor (Group, i.e. the treatment group, either placebo or estrogen treatment), one within-subject factor (Visit or nVisit) and a covariate (predep).
Using the following plots, we can explore the data. In the first plot below, the depression scores for each subject are plotted against time, including the baseline, separately for each treatment group.
In the second plot, the mean depression score for each treatment group is plotted over time. From these plots we can see that there is variation among subjects within each treatment group, that depression scores generally decrease with time, and that, on average, the depression score at each visit is lower with the estrogen treatment than with the placebo.
The simplest approach for analyzing repeated measures data is to use a random effects model with subject fitted as random. It assumes a constant correlation between all observations on the same subject. The analysis objectives can either be to measure the average treatment effect over time or to assess treatment effects at each time point and to test whether treatment interacts with time.
In this example, the treatment (Group), time (Visit), treatment by time interaction (Group:Visit) and baseline (predep) effects can all be fitted as fixed. The subject effects are fitted as random, allowing for constant correlation between depression scores taken on the same subject over time.
The model can be fitted in ASReml-R 4 along the following lines.
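This is a minimal sketch rather than the original post's exact code: the data frame name (dat) and the long-format depression-score column (dep) are assumptions made for illustration, while Group, Visit, Subject and predep are the variables described above.

> asr1 <- asreml(fixed = dep ~ predep + Group + Visit + Group:Visit,
                 random = ~ Subject,   # random subject effects imply a constant within-subject correlation
                 data = dat)           # 'dat' and 'dep' are assumed names, not from the original post
> summary(asr1)$varcomp   # estimated variance components
> wald.asreml(asr1)       # Wald tests for the fixed effects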
The output from summary() shows that the estimates of the subject and residual variances from the model are 15.10 and 11.53, respectively, giving a total variance of 15.10 + 11.53 = 26.63. The Wald tests (from the wald.asreml() table) for predep, Group and Visit are significant (probability level (Pr) ≤ 0.01). There appears to be no relationship between treatment group and time (Group:Visit), i.e. the probability level is greater than 0.05 (Pr = 0.8636).
In practice, the correlation between observations on the same subject is often not constant. It is common to expect that measurements made closer together in time are more strongly correlated than those made further apart. Mixed models can accommodate many different covariance patterns. The ideal usage is to select the pattern that best reflects the true covariance structure of the data. A typical strategy is to start with a simple pattern, such as compound symmetry or first-order autoregressive, and test whether a more complex pattern leads to a significant improvement in the likelihood.
Note: using a covariance model with a simple correlation structure (i.e. uniform) will provide the same results as fitting a random effects model with random subject.
In ASReml-R 4 we use the corv() function on time (i.e. Visit) to specify uniform correlation between depression scores taken on the same subject over time.
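Continuing the sketch above (the same assumed dat/dep names; the data are assumed to be ordered by subject and visit, and the original post's exact residual formula may differ):

> asr2 <- asreml(fixed = dep ~ predep + Group + Visit + Group:Visit,
                 residual = ~ id(Subject):corv(Visit),   # uniform correlation between visits on the same subject
                 data = dat)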
Here, the estimate of the correlation among times (Visit) is 0.57 and the estimate of the residual variance is 26.63 (identical to the total variance of the random effects model, asr1).
Specifying a heterogeneous first-order autoregressive covariance structure is easily done in ASReml-R 4 by changing the variance-covariance function in the residual term from corv() to ar1h().
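The corresponding sketch, changing only the residual variance function (again an assumed call, not the original post's code):

> asr3 <- asreml(fixed = dep ~ predep + Group + Visit + Group:Visit,
                 residual = ~ id(Subject):ar1h(Visit),   # AR1 correlation with a separate variance at each visit
                 data = dat)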
When the relationship of a measurement with time is of interest, a random coefficients model is often appropriate. In a random coefficients model, time is considered a continuous variable, and the subject and subject by time interaction (Subject:nVisit) are fitted as random effects. This allows the slopes and intercepts to vary randomly between subjects, resulting in a separate regression line being fitted for each subject. However, importantly, the slopes and intercepts are allowed to be correlated.
The str() function in the asreml() call is used to fit a random coefficients model:
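As the original code is not reproduced here, the following is a hedged sketch of one common way to specify a random coefficients model in ASReml-R 4, with a 2 x 2 unstructured covariance matrix linking the random intercepts and slopes (same assumed dat/dep names as before; check the str() documentation for your version):

> asr4 <- asreml(fixed = dep ~ predep + Group + nVisit + Group:nVisit,
                 random = ~ str(~ Subject + Subject:nVisit, ~ us(2):id(Subject)),   # correlated intercepts and slopes
                 data = dat)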
The summary table contains the variance parameters for Subject (the set of intercepts, 23.24) and Subject:nVisit (the set of slopes, 0.89), the estimate of the correlation between the slopes and intercepts (0.57) and the estimate of the residual variance (8.38).
Brady T. West, Kathleen B. Welch and Andrzej T. Galecki (2007). Linear Mixed Models: A Practical Guide Using Statistical Software. Chapman & Hall/CRC, Taylor & Francis Group, LLC.
Brown, H. and R. Prescott (2015). Applied Mixed Models in Medicine. Third Edition. John Wiley & Sons Ltd, England.
Sabine Landau and Brian S. Everitt (2004). A Handbook of Statistical Analyses using SPSS. Chapman & Hall/CRC Press LLC.