How many SNP markers do I need for my genomic analyses?


Dr. Salvador A. Gezan

21 June 2022

At present, with the rapid development of genotyping, we have access to genomic data that we can use with different statistical and computational approaches, for example, to accelerate genetic gains and to select outstanding genotypes as parents for commercial release. Genomic data is also useful to assess genetic variability and diversity in the context of breeding programs or other research studies.

Most of the genomic data used for breeding comes from single nucleotide polymorphisms (SNPs). Typically, we see this data as a matrix containing several individuals, often thousands, genotyped for a number of SNPs (i.e., nucleotide readings AA, AC, TT, etc.). Depending on the quality and characteristics of this data, we can use it for different analytical purposes.

In this blog, we will describe some of the uses of this SNP data. We will focus mainly on the available number of SNPs (i.e., markers) and what analyses these enable. Note that the classification into low-, medium- and high-density panels is somewhat arbitrary, but it is the one often used in plant breeding.

Low-density (LD) panels

Let’s start with what is known as low-density (LD) panels. These typically contain fewer than 200 SNPs, and are the cheapest option you can have (often just a couple of US dollars per sample). With this small number of SNP markers, their main use is for quality assurance (QA) and quality control (QC). That is, they can be used for:

  • Verification of Crosses. If parents are genotyped, then it is possible to check that the correct crosses were performed.
  • Parentage Reconstruction. If parents are genotyped, it is possible to reconstruct the full pedigree of a group of offspring. 
  • Marker Assisted Selection. If a preliminary group of markers were identified to be associated with one or more QTLs of interest, these can easily be incorporated into the panel and used to discriminate genotypes.
  • Population Assignment. A group of markers can be used to assign individuals to different populations (e.g., origins).
  • Assign Sex. Depending on the dynamics of the organism, sometimes it is possible to identify one or more markers that can be used to assign sex to the individuals before they mature.

Other uses are possible but they can be limited. For example, we could identify sibships among some individuals (e.g., full-sibs), and it may be possible to calculate some population statistics of genetic diversity. However, these uses come with high levels of uncertainty.

An important aspect of LD panels is that they often include markers reserved for verification, and in addition some markers will be missing due to genotyping issues. Hence, their effective size will be lower than their nominal size. Another important consideration is that these commercial panels are often constructed for general use; hence, they may not be based exactly on the population of interest. This will lead to some fixed (MAF = 0%) or nearly fixed (MAF < 2%) markers that have little or no contribution to the above uses.
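As an illustration, a minimal R sketch of this kind of check is given below; the genotype matrix M (individuals in rows, markers in columns, coded 0/1/2 for the count of one allele) is simulated purely for the example:

# Hypothetical LD panel: 500 individuals x 180 nominal markers, coded 0/1/2
set.seed(101)
M <- matrix(rbinom(500 * 180, size = 2, prob = runif(180, 0, 0.5)),
            nrow = 500, ncol = 180, byrow = TRUE)

# Minor allele frequency (MAF) per marker
p   <- colMeans(M, na.rm = TRUE) / 2    # frequency of the counted allele
maf <- pmin(p, 1 - p)

# Flag fixed (MAF = 0%) and nearly fixed (MAF < 2%) markers
drop <- maf < 0.02
cat("Nominal panel size:", ncol(M),
    "- effective size after MAF filtering:", sum(!drop), "\n")

M_eff <- M[, !drop]   # informative markers kept for downstream analyses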

However, despite the above shortcomings, LD panels constitute a very good and cheap alternative for some small programs to start considering genomic tools in their breeding; verifying crosses, for example, is a critical step!

Medium-density (MD) panels

Medium-density (MD) panels are defined very differently depending on the field. Here, we will consider the plant breeding definition of between 2,000 and 10,000 SNPs (i.e., 2K to 10K); in animal breeding, MD panels will be at least 5 times larger. This larger number of SNPs opens up more analytical opportunities, but the cost of an MD panel is several times greater than that of an LD panel, which, for most breeding programs, limits the number of individuals that can be genotyped. In addition to the uses for LD panels mentioned above, for MD panels we also have:

  • Genomic Relationship Matrix Estimation. It is now possible to estimate with reasonable accuracy the relatedness between any pair of individuals, and these matrices can be used for many other objectives (a small computational sketch is given after this list).
  • Genomic Prediction Models. MD panels allow us to fit genomic prediction (GP) models. These tend to have lower accuracy, but are still sufficient and useful for operational use.
  • Marker Imputation. Missing marker data, if complemented with individuals genotyped using a high-density (HD) panel, can be imputed successfully, and that data can be used for other purposes, like genomic prediction.  
  • Genetic Linkage Maps. This large number of markers allows for the construction of reliable linkage (or genetic) maps with plenty of other uses, such as imputation.
  • QTL Analysis. MD panels have a reasonable number of markers to perform QTL analysis, for example on recombinant inbred line (RIL) populations.
  • Diversity Studies. It is easier to perform several genomic studies that deal with diversity, as it is now possible to follow over generations, for example, inbreeding, effective population size, etc.
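As mentioned in the first bullet above, a genomic relationship matrix can be computed directly from the SNP matrix. Below is a minimal base-R sketch of one common formulation (VanRaden's first method); M is again an assumed 0/1/2 genotype matrix with individuals in rows and markers in columns:

# Assumed 0/1/2 genotype matrix from an MD panel (individuals x markers)
set.seed(202)
M <- matrix(rbinom(100 * 2000, size = 2, prob = 0.3), nrow = 100, ncol = 2000)

p <- colMeans(M) / 2                          # allele frequency per marker
W <- scale(M, center = 2 * p, scale = FALSE)  # centre each marker by 2p
G <- tcrossprod(W) / (2 * sum(p * (1 - p)))   # genomic relationship matrix (n x n)

round(G[1:5, 1:5], 3)   # estimated relatedness among the first five individuals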

In addition, MD panels can also include several markers previously detected for use in Marker Assisted Selection (MAS) or for sex determination. Given the larger number of SNPs, the presence of verification, missing or fixed markers is of less concern, but should be kept under control in order to maximize the usefulness of the panel. Good or poor selection of SNP markers can make a large difference to the useful life of the panel and the accuracy of the GP models. Hence, it is recommended to ensure markers are well selected for the construction of these panels.

At present, MD panels should be the panel of choice for most breeding operations. These panels will ensure that the collected data remains useful in the future, especially as technology and prices change, justifying the genotyping investment in the long term. In addition, these represent the best panels for breeding programs to start applying (and playing) with more sophisticated genomic tools, such as genomic prediction with GBLUP and/or Bayes B.
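To give a flavour of what genomic prediction involves, below is a minimal base-R sketch of GBLUP solved through the mixed model equations, re-using the G matrix from the sketch above. The phenotypes and the variance ratio lambda are invented purely for illustration; in practice the variance components would be estimated by REML (for example in ASReml-R), or a Bayesian method such as Bayes B would be used.

# GBLUP sketch: y = 1*mu + u + e, with u ~ N(0, G * sigma2_u); G from the sketch above
n      <- nrow(G)
y      <- rnorm(n, mean = 10, sd = 2)   # hypothetical phenotypes
lambda <- 1.5                           # assumed ratio sigma2_e / sigma2_u

Ginv <- solve(G + diag(1e-3, n))        # bend G slightly so it can be inverted
X    <- matrix(1, n, 1)                 # intercept only
Z    <- diag(n)                         # one phenotypic record per genotype

MME <- rbind(cbind(crossprod(X),    crossprod(X, Z)),
             cbind(crossprod(Z, X), crossprod(Z) + Ginv * lambda))
rhs <- rbind(crossprod(X, y), crossprod(Z, y))

sol  <- solve(MME, rhs)
gebv <- sol[-1]                         # genomic estimated breeding values
head(sort(gebv, decreasing = TRUE))     # top candidates by predicted genetic merit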

High-density (HD) panels

Finally, we have the high-density (HD) panels with more than 20,000 SNPs (20K). But again, this is relative. Most HD panels for plants and aquatic species are in the range of 30K to 70K, but in some species the sizes considered are in excess of 700K (as in dairy cattle) or even in the millions (as in humans). The cost of an HD panel varies greatly with the number of SNPs considered, the technology used, and the number of individuals to genotype. The uses that these panels provide, in addition to all the previous uses for LD and MD panels, are:

  • High Accuracy Genomic Prediction Models. We expect greater accuracy from these panels, possibly with a long useful life that requires little tuning over time and little or no imputation.
  • Genome-wide Association Studies (GWAS). These panels provide the key data for marker discovery in GWAS, where it is more likely to find a marker positioned directly in a coding region of a functional gene.

HD panels are often highly redundant in their information. Hence, they tend to contain a good proportion of fixed alleles, as well as missing data that rules many markers out. However, they are still large enough to support all the uses mentioned above. It is also possible for these panels to be used by a combination of programs or research groups, allowing: 1) pooling of resources, 2) price negotiation, and 3) sharing of genomic information. This is commonly seen in genomic consortia. In addition, these HD panels, if originally constructed using a diverse base population, might not require much future revision.

Another important aspect of HD panels is that they constitute the raw information for imputation in MD panels (and, although only remotely possible, in LD panels), and in the future they will be the ones that link the array of available panels used in a breeding program (including LD, MD, or panels from different groups). Hence, HD panels allow us to connect different sources of genomic data.

Final Comments

One important note is in relation to the number of markers required for genomic selection (GS). We recommend at least an MD panel for this, with no fewer than 2K useful (post-filtering) SNPs. Interestingly, some studies have reported that dropping the number of SNPs to 1K or fewer results in a considerable loss of accuracy of the GP models, while using more than 10K SNPs often does not yield considerably better accuracy than 5K SNPs. In addition, some studies have successfully focused on 2-3K SNP panels supported by imputation from an HD panel, with an interesting increase in accuracy. Hence, there are plenty of options to exploit MD panels.

Another aspect is in relation to maximizing the accuracy of GP models. Of course, the more informative SNPs available the better! But there are many other aspects that affect, for good or bad, the success of a genomic model: for example, the level of relatedness between the training and evaluation populations, linkage disequilibrium, the genetic architecture of the traits, and of course the heritability of the traits of interest. All of these elements may eventually tilt the decision from one type of panel to another.

Another difficulty that can arise is the detection of markers associated with some traits that are present in the population at very low rates, such as the case of ‘standing genetic variation’. This implies that it is difficult to find these markers on most panels as they will tend to be dropped early. Therefore, specific or very large HD panels might be needed in these cases.

Finally, as mentioned before, the use of LD or MD panels requires a careful pre-selection of markers to make the most of these panels. If this is done poorly, or for another population (e.g., using an available panel developed for another breeding group), then the benefits of the corresponding panels will possibly be greatly reduced. This also implies that, particularly for the LD panel, the set of markers in use has to be constantly reviewed as the population changes over time or new markers from MAS or sex determination are discovered.

In this blog, we have pointed out a few of the uses and benefits of each of the different panels. As the cost and offering of these panels changes constantly, we suspect that at some point we will be able to afford HD panels for a few cents (or pennies)! But before we get there, we need to make the most of our current resources, and gathering the right data for the right analysis is critical.

About the author

Dr. Salvador Gezan is a statistician/quantitative geneticist with more than 20 years’ experience in breeding, statistical analysis and genetic improvement consulting. He currently works as a Statistical Consultant at VSN International, UK. Dr. Gezan started his career at Rothamsted Research as a biometrician, where he worked with Genstat and ASReml statistical software. Over the last 15 years he has taught ASReml workshops for companies and university researchers around the world. 

Dr. Gezan has worked on agronomy, aquaculture, forestry, entomology, medical, biological modelling, and with many commercial breeding programs, applying traditional and molecular statistical tools. His research has led to more than 100 peer reviewed publications, and he is one of the co-authors of the textbook Statistical Methods in Biology: Design and Analysis of Experiments and Regression.

Related Reads


Dr. Salvador A. Gezan

09 March 2022

Meta analysis using linear mixed models

Meta-analysis is a statistical tool that allows us to combine information from related but independent studies that all aim to estimate or compare the same effects from contrasting treatments. Meta-analysis is widely used in many research areas where an extensive literature review is performed to identify studies that had a similar research question. These are later combined using meta-analysis to estimate a single combined effect. Meta-analyses are commonly used to answer healthcare and medical questions, where they are widely accepted, but they are also used in many other scientific fields.

By combining several sources of information, meta-analyses have the advantage of greater statistical power, therefore increasing our chance of detecting a significant difference. They also allow us to assess the variability between studies, and help us to understand potential differences between the outcomes of the original studies.

The underlying premise in meta-analysis is that we are collecting information from a group of, say n, studies that individually estimated a parameter of interest, say θ_i for study i. It is reasonable to consider that this parameter has some statistical properties. Mainly, we assume that it belongs to a Normal distribution with unknown mean and variance. Hence, mathematically we say:

θ_i ~ N(θ, σ²_s)

where θ is the overall (mean) effect and σ²_s is the between-study variance.
In meta-analysis, the target population parameter θ can correspond to any of several statistics, such as a treatment mean, a difference between treatments; or more commonly in clinical trials, the log-odds ratio or relative risk.

There are two models that are commonly used to perform meta-analyses: the fixed-effect model and the random-effects model. For the fixed-effect model, it is assumed that there is only a single unique true effect, our single θ above, which is estimated from a random sample of studies. That is, the fixed-effect model assumes that there is a single population effect, and the deviations obtained from the different studies are only due to sampling error or random noise. The linear model (LM) used to describe this process can be written as:

y_i = θ + e_i

where y_i is the observed response (i.e., estimated effect) from study i, θ is the population parameter (also often known as μ, the overall mean), and e_i is a random error or residual with the assumption e_i ~ N(0, σ²_i). The variance component σ²_i is a measurement of our uncertainty in the information (i.e., the response) of each study. The above model can be easily fitted under any typical LM routine, such as R, SAS, Genstat and ASReml.

For the random-effects model we still assume that there is a common true effect between studies, but in addition, we allow this effect to vary between studies. Variation between these effects is a reasonable assumption as no two studies are identical, differing in many aspects; for example, different demographics in the data, slightly differing measurement protocols, etc. Because we have a random sample of studies, we have a random sample of effects, and therefore we define a linear mixed model (LMM) using the following expression:

y_i = θ_i + e_i

where, as before, y_i is the observed response from study i, θ_i is the study-specific population parameter with the assumption θ_i ~ N(θ, σ²_s), and e_i is a random error or residual with the same normality assumptions as before. Alternatively, the above LMM can be written as:

y_i = θ + s_i + e_i

where θ_i = θ + s_i, and s_i is a random deviation from the overall effect mean θ with the assumption s_i ~ N(0, σ²_s).

This is an LMM because, besides the residual, we have an additional random component with a variance component associated with it, namely σ²_s. This variance is a measurement of the variability ‘between’ studies, and it reflects the level of uncertainty in observing a specific θ_i. These LMMs can be fitted, and variance components estimated, under many linear mixed model routines, such as nlme in R, proc mixed in SAS, Genstat or ASReml.

Both fixed-effect and random-effects models are often estimated using summary information, instead of the raw data collected from the original study. This summary information corresponds to estimated mean effects together with their variances (or standard deviations) and the number of samples or experimental units considered per treatment. Since the different studies provide different amounts of information, weights should be used when fitting LM or LMM to summary information in a meta-analysis, similar to weighted linear regression. In meta-analysis, each study has a different level of importance, due to, for example, differing number of experimental units, slightly different methodologies, or different underlying variability due to inherent differences between the studies. The use of weights allows us to control the influence of each observation in the meta-analysis resulting in more accurate final estimates.

Different statistical software will manage these weights slightly differently, but most packages will consider the following general expression of weights:

w_i = 1 / var(y_i)

where w_i is the weight given to study i and var(y_i) is the variance of its observed response. For example, if the response corresponds to an estimated treatment mean, then its variance is MSE/n, with MSE being the mean square error reported for the given study, and n the number of experimental units (or replicates).

Therefore, after we collect the summary data, we fit our linear or linear mixed model with weights and request from its output an estimation of its parameters and their standard errors. This will allow us to make inference, and construct, for example, a 95% confidence interval around an estimate to evaluate if this parameter/effect is significantly different from zero. This will be demonstrated in the example below.

Motivating example

The dataset we will use to illustrate meta-analyses was presented and previously analysed by Normand (1999). The dataset contains information from nine independent studies where the length of hospitalisation (measured in days) was recorded for stroke patients under two different treatment regimes. The main objective was to evaluate if specialist inpatient stroke care (sc) resulted in shorter stays when compared to the conventional non-specialist (or routine management) care (rm).

The complete dataset can be found in the file STROKE.txt. For each study, the columns give the sample sizes (n.sc and n.rm), the estimated mean values (mean.sc and mean.rm) and the standard deviations (sd.sc and sd.rm) for the specialist care and routine management care, respectively.


Statistical analyses

We will use the statistical package R to read and manipulate the data, and then the library ASReml-R (Butler et al. 2017) to fit the models. 
First, we read the data in R and make some additional calculations, as shown in the code below:

STROKE <- read.table("STROKE.txt", header = TRUE)
STROKE$diff  <- STROKE$mean.sc - STROKE$mean.rm                                  # mean difference (sc - rm)
STROKE$Vdiff <- (STROKE$sd.sc^2 / STROKE$n.sc) + (STROKE$sd.rm^2 / STROKE$n.rm)  # variance of the difference
STROKE$WT    <- 1 / STROKE$Vdiff                                                 # weight = inverse variance

The new column diff contains the difference between treatment means (as reported for each study). We have estimated the variance of this mean difference, Vdiff, by taking from each treatment its individual MSE (mean square error), dividing it by the sample size, and then summing the terms of both treatments. This estimate assumes that, for a given study, the samples from both treatments are independent, and for this reason we did not include a covariance. Finally, we have calculated a weight (WT) for each study as the inverse of the variance of the mean difference (i.e., 1/Vdiff).

These new columns show a wide range of values between the studies in the mean difference of length of stay between the two treatments, ranging from as low as −71.0 to 11.0 days, with a raw average of −15.9. The variances of these differences also vary considerably, which is reflected in their weights.

The code to fit the fixed-effect linear model using ASReml-R is shown below:

library(asreml) 
meta_f<-asreml(fixed=diff~1, 
               weights=WT, 
               family=asr_gaussian(dispersion=1), 
               data=STROKE)

In the above model, our response variable is diff, and the weights are indicated by the variate WT. As the precisions are contained within the weights, the family argument is required to fix the residual variance (dispersion) at exactly 1.0; hence, it will not be estimated.

The model generates output that can be used for inference. We will start by exploring our target parameter, i.e. θ, by looking at the estimated fixed effect mean and its standard error. This is done with the code:

meta_effect <- summary(meta_f, coef=TRUE)$coef.fixed


The estimate of θ is equal to −3.464 days, with a standard error of 0.765. An approximate 95% confidence interval can be obtained by using a z-value of 1.96. The resulting approximate 95% confidence interval [−4.963;−1.965] does not contain zero. The significance of this value can be obtained by looking at the approximated ANOVA table using the command:

wald.asreml(meta_f)

Note that this test is approximate: because the weights are considered to be known, the degrees of freedom are assumed to be infinite; hence, this will be a liberal test.


The results from this ANOVA table indicate a high significance of this parameter (θ) with an approximated p-value of < 0.0001. Therefore, in summary, this fixed effect model analysis indicates a strong effect of the specialised care resulting in a reduction of approximately 3.5 days in hospitalisation.
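As a quick check of the interval arithmetic above, here is a short R sketch using the coefficient table stored in meta_effect; the column names 'solution' and 'std error' are assumptions based on recent ASReml-R versions and may need adjusting:

est <- meta_effect[1, "solution"]
se  <- meta_effect[1, "std error"]
est + c(-1, 1) * qnorm(0.975) * se   # approximate 95% confidence interval for theta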

However, as indicated earlier, a random-effects model might seem more reasonable given the inherent differences in the studies under consideration. Here, we extend the model to include the random effect of study. In order to do this, first we need to ensure that this is treated as a factor in the model by running the code:

STROKE$study <- as.factor(STROKE$study)

The LMM to be fitted using ASReml-R is:

meta_r<-asreml(fixed=diff~1,  
               random=~study, 
               weights=WT, 
               family=asr_gaussian(dispersion=1), 
               data=STROKE)

Note that in this example the only difference from the previous code is the inclusion of the line random=~study, which includes the factor study as a random effect. An important result from fitting this model is the set of estimated variance components. These are obtained with the command:

summary(meta_r)$varcomp


In this example, the variance associated with the differences in the target parameter (θ) between the studies is 684.62. When expressed as a standard deviation, this corresponds to 26.16 days. Note that this variation is large in relation to the scale of the data, reflecting large differences between the random sample of studies considered in the meta-analysis.

We can output the fixed and random effects using the following commands:

meta_effect <- summary(meta_r, coef=TRUE)$coef.fixed 
BLUP <- summary(meta_r, coef=TRUE)$coef.random


Note that our estimated mean difference now corresponds to −15.106 days, with a standard error of 8.943, and that the approximate 95% confidence interval [−32.634; 2.423] now contains zero. An approximate ANOVA table can be obtained using:

wald.asreml(meta_r)


We have a p-value of 0.0912, indicating that there is no significant difference in length of stay between the treatments evaluated. Note that the estimates of the random effects of study, also known as BLUPs (best linear unbiased predictions) are large, ranging from −45.8 to 22.9, and widely variable. The lack of significance in the random-effects model, when there is a difference of −15.11 days, is mostly due to the large variability of 684.62 found between the different studies, resulting in a substantial standard error for the estimated mean difference.

We can also plot the 95% confidence intervals for each of the nine studies together with the final parameter estimated under the random-effects model. Some of these confidence intervals contain the value zero, including the one for the random-effects model. However, the confidence interval from the random-effects model is an adequate summarization of the nine studies, representing a compromise among them.

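A minimal base-R sketch for computing and plotting these study-level intervals, using the diff and Vdiff columns created earlier; the interval for the random-effects estimate is added from the values reported above:

ns    <- nrow(STROKE)
lower <- STROKE$diff - 1.96 * sqrt(STROKE$Vdiff)   # study-level 95% limits
upper <- STROKE$diff + 1.96 * sqrt(STROKE$Vdiff)

plot(STROKE$diff, 1:ns, pch = 19, yaxt = "n",
     xlim = range(c(lower, upper, -32.634, 2.423)), ylim = c(0, ns + 0.5),
     xlab = "Difference in length of stay (days)", ylab = "Study")
axis(2, at = 1:ns, labels = as.character(STROKE$study))
segments(lower, 1:ns, upper, 1:ns)
abline(v = 0, lty = 2)

# Random-effects estimate and its approximate 95% CI (values reported above)
points(-15.106, 0, pch = 17)
segments(-15.106 - 1.96 * 8.943, 0, -15.106 + 1.96 * 8.943, 0)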

An important aspect to consider is the difference in results between the fixed-effect and the random-effects models, which is associated, as indicated earlier, with their different inferential approaches. One way to understand this is by considering what will happen if a new random study is included. Because we have large variability in the study effects (as denoted by σ²_s), we expect this new study to have a difference between treatments that falls randomly within this wide range. This, in turn, is expressed by the large standard error of the fixed effect θ, and by its large 95% confidence interval, which ensures that for ‘any’ observation we cover the parameter estimate 95% of the time. Therefore, as shown by the data, it seems more reasonable to consider the random-effects model than the fixed-effect model, as it is an inferential approach that deals with several sources of variation.

Summary

In summary, we have used the random-effects model to perform a meta-analysis of a medical research question about treatment differences by combining nine independent studies. Under this approach we assumed that all studies describe the same effect, but we allowed the model to express different effect sizes through the inclusion of a random effect that varies from study to study. The main aim of this analysis was not to explain why these differences occur; rather, our aim was to incorporate a measure of this uncertainty into the estimation of the final effect of treatment differences.

There are several extensions to meta-analysis with different types of responses and effects. Some of the relevant literature recommended to the interested reader are van Houwelingen et al. (2002) and Vesterinen et al. (2014). Also, a clear presentation with further details of the differences between fixed-effect and random-effects models is presented by Borenstein et al. (2010).

Files to download

Dataset: STROKE.txt
R code: STROKE_METAA.R

References

Borenstein, M; Hedges, LV; Higgins, JPT; Rothstein, HR. 2010. A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods 1: 97-111.

Butler, DG; Cullis, BR; Gilmour, AR; Gogel, BG; Thompson, R. 2017. ASReml-R Reference Manual Version 4. VSN International Ltd, Hemel Hempstead, UK.

Normand, ST. 1999. Meta-analysis: Formulating, evaluating, combining, and reporting. Statistics in Medicine 18: 321-359.

van Houwelingen, HC; Arends, LR; Stijnen, T. 2002. Advanced methods in meta-analysis: multivariate approach and meta-regression. Statistics in Medicine 21: 589-624.

Vesterinen, HM; Sena, ES; Egan, KJ; Hirst, TC; Churilov, L; Currie, GL; Antonic, A; Howells, DW; Macleod, MR. 2014. Meta-analysis of data from animal studies: a practical guide. Journal of Neuroscience Methods 221: 92-102.



Kanchana Punyawaew and Dr. Vanessa Cave

01 March 2021

Mixed models for repeated measures and longitudinal data

The term "repeated measures" refers to experimental designs or observational studies in which each experimental unit (or subject) is measured repeatedly over time or space. "Longitudinal data" is a special case of repeated measures in which variables are measured over time (often for a comparatively long period of time) and duration itself is typically a variable of interest.

In terms of data analysis, it doesn’t really matter what type of data you have, as you can analyze both using mixed models. Remember, the key feature of both types of data is that the response variable is measured more than once on each experimental unit, and these repeated measurements are likely to be correlated.

Mixed Model Approaches

To illustrate the use of mixed model approaches for analyzing repeated measures, we’ll examine a data set from Landau and Everitt’s 2004 book, “A Handbook of Statistical Analyses using SPSS”. Here, a double-blind, placebo-controlled clinical trial was conducted to determine whether an estrogen treatment reduces post-natal depression. Sixty-three subjects were randomly assigned to one of two treatment groups: placebo (27 subjects) and estrogen treatment (36 subjects). Depression scores were measured on each subject at baseline, i.e. before randomization (predep) and at six two-monthly visits after randomization (postdep at visits 1-6). However, not all the women in the trial had their depression score recorded on all scheduled visits.

In this example, the data were measured at fixed, equally spaced, time points. (Visit is time as a factor and nVisit is time as a continuous variable.) There is one between-subject factor (Group, i.e. the treatment group, either placebo or estrogen treatment), one within-subject factor (Visit or nVisit) and a covariate (predep).


Using the following plots, we can explore the data. In the first plot below, the depression scores for each subject are plotted against time, including the baseline, separately for each treatment group.


In the second plot, the mean depression score for each treatment group is plotted over time. From these plots, we can see variation among subjects within each treatment group, that depression scores generally decrease with time, and that, on average, the depression score at each visit is lower with the estrogen treatment than with the placebo.


Random effects model

The simplest approach for analyzing repeated measures data is to use a random effects model with subject fitted as random. It assumes a constant correlation between all observations on the same subject. The analysis objectives can either be to measure the average treatment effect over time or to assess treatment effects at each time point and to test whether treatment interacts with time.

In this example, the treatment (Group), time (Visit), treatment by time interaction (Group:Visit) and baseline (predep) effects can all be fitted as fixed. The subject effects are fitted as random, allowing for constant correlation between depression scores taken on the same subject over time.

This model can be fitted in ASReml-R 4.

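A minimal sketch of such a call, assuming the data are in long format in a data frame named dat, with the post-randomization depression score in a column dep (these names are assumptions for illustration):

library(asreml)

# Random effects model: Subject fitted as random (constant within-subject correlation)
asr1 <- asreml(fixed  = dep ~ Group + Visit + Group:Visit + predep,
               random = ~ Subject,
               data   = dat)

summary(asr1)$varcomp   # subject and residual variance components
wald.asreml(asr1)       # approximate Wald tests for the fixed effects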

The output from summary() shows that the estimate of subject and residual variance from the model are 15.10 and 11.53, respectively, giving a total variance of 15.10 + 11.53 = 26.63. The Wald test (from the wald.asreml() table) for predep, Group and Visit are significant (probability level (Pr) ≤ 0.01). There appears to be no relationship between treatment group and time (Group:Visit) i.e. the probability level is greater than 0.05 (Pr = 0.8636).

Covariance model

In practice, often the correlation between observations on the same subject is not constant. It is common to expect that the covariances of measurements made closer together in time are more similar than those at more distant times. Mixed models can accommodate many different covariance patterns. The ideal usage is to select the pattern that best reflects the true covariance structure of the data. A typical strategy is to start with a simple pattern, such as compound symmetry or first-order autoregressive, and test if a more complex pattern leads to a significant improvement in the likelihood.

Note: using a covariance model with a simple correlation structure (i.e. uniform) will provide the same results as fitting a random effects model with random subject.

In ASReml-R 4 we use the corv() function on time (i.e. Visit) to specify uniform correlation between depression scores taken on the same subject over time.

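A sketch of the corresponding call, under the same assumed data frame and column names, placing a uniform correlation structure on the residuals:

# Covariance model: uniform correlation among visits on the same subject
asr2 <- asreml(fixed    = dep ~ Group + Visit + Group:Visit + predep,
               residual = ~ id(Subject):corv(Visit),
               data     = dat)

summary(asr2)$varcomp   # correlation among visits and residual variance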

Here, the estimate of the correlation among times (Visit) is 0.57 and the estimate of the residual variance is 26.63 (identical to the total variance of the random effects model, asr1).

Specifying a heterogeneous first-order autoregressive covariance structure is easily done in ASReml-R 4 by changing the variance-covariance function in the residual term from corv() to ar1h().

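And a sketch of the same call with the heterogeneous first-order autoregressive structure:

# Heterogeneous AR(1): correlations decay with time lag, one variance per visit
asr3 <- asreml(fixed    = dep ~ Group + Visit + Group:Visit + predep,
               residual = ~ id(Subject):ar1h(Visit),
               data     = dat)

summary(asr3)$varcomp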

Random coefficients model

When the relationship of a measurement with time is of interest, a random coefficients model is often appropriate. In a random coefficients model, time is considered a continuous variable, and the subject and subject by time interaction (Subject:nVisit) terms are fitted as random effects. This allows the slopes and intercepts to vary randomly between subjects, resulting in a separate regression line being fitted for each subject. Importantly, however, the slopes and intercepts are correlated.

The str() function within the asreml() call is used to fit a random coefficients model:

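A sketch of a random coefficients call under the same assumed data frame, with time entered as the continuous nVisit; the str() term below pairs the subject intercepts and slopes with an unstructured 2 x 2 covariance matrix, which is a common ASReml-R 4 pattern, but the exact syntax should be checked against the ASReml-R documentation:

# Random coefficients: correlated random intercept and slope for each subject
asr4 <- asreml(fixed  = dep ~ Group + nVisit + Group:nVisit + predep,
               random = ~ str(~ Subject + Subject:nVisit, ~ us(2):id(Subject)),
               data   = dat)

summary(asr4)$varcomp   # intercept variance, slope variance, their covariance, residual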

The summary table contains the variance parameter for Subject (the set of intercepts, 23.24) and Subject:nVisit (the set of slopes, 0.89), the estimate of correlation between the slopes and intercepts (-0.57) and the estimate of residual variance (8.38).

References

West, B.T., Welch, K.B. and Galecki, A.T. (2007). Linear Mixed Models: A Practical Guide Using Statistical Software. Chapman & Hall/CRC, Taylor & Francis Group, LLC.

Brown, H. and Prescott, R. (2015). Applied Mixed Models in Medicine. Third Edition. John Wiley & Sons Ltd, England.

Landau, S. and Everitt, B.S. (2004). A Handbook of Statistical Analyses using SPSS. Chapman & Hall/CRC Press LLC.


The VSNi Team

27 April 2021

Evolution of statistical computing

It is widely acknowledged that the most fundamental developments in statistics over the past 60+ years have been driven by information technology (IT). We should not underestimate the importance of pen and paper as a form of IT, but it is only since people started using computers to do statistical analysis that the role statistics plays in our research, as well as in everyday life, really changed.

In this blog we will give a brief historical overview, presenting some of the main general statistics software packages developed from 1957 onwards. Statistical software developed for special purposes will be ignored. We also ignore the most widely used ‘software for statistics’, as Brian Ripley (2002) stated in his famous quote: “Let’s not kid ourselves: the most widely used piece of software for statistics is Excel.” Our focus is on some of the packages developed by statisticians for statisticians, which are still evolving to incorporate the latest developments in statistics.

Ronald Fisher’s Calculating Machines

Pioneer statisticians like Ronald Fisher started out doing their statistics on pieces of paper and later upgraded to using calculating machines. Fisher bought the first Millionaire calculating machine when he was heading Rothamsted Research’s statistics department in the early 1920s. It cost about £200 at that time, which is equivalent in purchasing power to about £9,141 in 2020. This mechanical calculator could only calculate direct product, but it was very helpful for the statisticians at that time as Fisher mentioned: "Most of my statistics has been learned on the machine." The calculator was heavily used by Fisher’s successor Frank Yates (Head of Department 1933-1968) and contributed to much of Yates’ research, such as designs with confounding between treatment interactions and blocks, or split plots, or quasi-factorials.


Frank Yates

Rothamsted Annual Report for 1952: "The analytical work has again involved a very considerable computing effort." 

Beginning of the Computer Age

From the early 1950s we entered the computer age. The computer at this time looked little like its modern counterpart, whether it was an Elliott 401 from the UK or an IBM 700/7000 series machine in the US. Although the first documented statistical package, BMDP, was developed starting in 1957 for IBM mainframes at the UCLA Health Computing Facility, on the other side of the Atlantic Ocean statisticians at Rothamsted Research began their endeavours to program an Elliott 401 in 1954.


Programming Statistical Software

When we teach statistics in schools or universities, students very often complain about the difficulties of programming. Looking back at programming in the 1950s will give modern students an appreciation of how easy programming today actually is!

An Elliott 401 served one user at a time and requested all input on paper tape (forget your keyboard and intelligent IDE editor). It provided the output to an electric typewriter. All programming had to be in machine code with the instructions and data on a rotating disk with 32-bit word length, 5 "words" of fast-access store, 7 intermediate access tracks of 128 words, 16 further tracks selectable one at a time (= 2949 words – 128 for system).


Computer paper tape

The early statistical programs written at Rothamsted included routines for fitting constants to main effects and interactions in multi-way tables (1957), regression and multiple regression (1956), and fitting many standard curves, as well as multivariate analysis for latent roots and vectors (1955).

Although the emergence of statistical programs for research sounds very promising, routine statistical analyses were also performed, and these still represented a big challenge, at least computationally. For example, in 1963, the last year the Elliott 401 and Elliott 402 computers were in use, Rothamsted Research statisticians analysed 14,357 data variables, and this took them 4,731 hours to complete. It is hard to imagine the energy consumption, as well as the amount of paper tape used for programming. Probably the paper tape (all glued together) would be long enough to circle the equator.

Development of Statistical Software: Genstat, SAS, SPSS

The above collection of programs was mainly used for agricultural research at Rothamsted and was not given an umbrella name until John Nelder became Head of the Statistics Department in 1968. The development of Genstat (General Statistics) started from that year and the programming was done in FORTRAN, initially on an IBM machine. In that same year, at North Carolina State University, SAS (Statistical Analysis Software) was almost simultaneously developed by computational statisticians, also for analysing agricultural data to improve crop yields. At around the same time, social scientists at the University of Chicago started to develop SPSS (Statistical Package for the Social Sciences). Although the three packages (Genstat, SAS and SPSS) were developed for different purposes and their functions diverged somewhat later, the basic functions covered similar statistical methodologies.

The first version of SPSS was released in 1968. In 1970, the first version of Genstat was released with the functions of ANOVA, regression, principal components and principal coordinate analysis, single-linkage cluster analysis and general calculations on vectors, matrices and tables. The first version of SAS, SAS 71, was released and named after the year of its release. The early versions of all three software packages were written in FORTRAN and designed for mainframe computers.

Since the 1980s, with the breakthrough of personal computers, a second generation of statistical software began to emerge. There was an MS-DOS version of Genstat (Genstat 4.03) released with an interactive command line interface in 1980.


Genstat 4.03 for MSDOS

Around 1985, SAS and SPSS also released a version for personal computers. In the 1980s more players entered this market: STATA was developed from 1985 and JMP was developed from 1989. JMP was, from the very beginning, for Macintosh computers. As a consequence, JMP had a strong focus on visualization as well as graphics from its inception.

The Rise of the Statistical Language R

The development of the third generation of statistical computing systems had started before the emergence of software like Genstat 4.03e or SAS 6.01. This development was led by John Chambers and his group at Bell Laboratories from the 1970s. The outcome of their work was the S language, a general-purpose language with implementations of classical as well as modern statistical methods. The S language was freely available, and its audience was mainly sophisticated academic users. After the acquisition of the S language by the Insightful Corporation and its rebranding as S-PLUS, this leading third-generation statistical software package was widely used in both theoretical and practical statistics in the 1990s, especially before the release of a stable beta version of the free and open-source software R in the year 2000. R was developed by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is currently widely used by statisticians in academia and industry, together with statistical software developers, data miners and data analysts.

Software like Genstat, SAS, SPSS and many other packages had to deal with the challenge from R. Each of these long-standing software packages developed R interfaces or even R interpreters to anticipate the change in user behaviour and the ever-increasing adoption of the R computing environment. For example, SAS and SPSS have R plug-ins to talk to each other. VSNi’s ASReml-R software was developed for ASReml users who want to run mixed model analyses within the R environment, and at the present time there are more ASReml-R users than ASReml standalone users. Users who need reliable and robust mixed effects model fitting adopted ASReml-R as an alternative to other mixed model R packages due to its superior performance and simplified syntax. For Genstat users, msanova was also developed as an R package to provide traditional ANOVA users with an R interface for running their analyses.

What’s Next?

We have no clear idea of what will represent the fourth generation of statistical software. R, as open-source software and a platform for prototyping and teaching, has the potential to help drive this change in statistical innovation. An example is the R Shiny package, with which web applications can easily be developed to provide statistical computing as an online service. But all open-source and commercial software has to face the same challenges of providing fast, reliable and robust statistical analyses that allow for reproducibility of research and, most importantly, use sound and correct statistical inference and theory, something that Ronald Fisher would have expected from his computing machine!