Why machine learning is not (yet) working for genomic prediction

Dr. Salvador A. Gezan

5 months ago

In plant and animal breeding, the use of genomic prediction has become widespread; it is currently being implemented in many species and is resulting in increased genetic gains. In genomic prediction (GP), thousands of SNP markers are used as input to predict the performance of genotypes. A good model estimates the performance of a genotype before it is phenotypically measured, enabling cheaper and earlier selections and accelerating breeding programs.

At present, most of these predictive models use the SNP marker information to fit linear models in which each marker is associated with an estimated effect. These models reflect our current understanding of the accumulation of allele effects and rely on the infinitesimal model, in which the phenotypic response of an individual is the result of hundreds or thousands of QTLs, each with a small effect.
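As a rough illustration of this kind of linear marker model, the sketch below fits a ridge-type regression (in the spirit of RR-BLUP) to simulated SNP data in R. The marker matrix, effect sizes and shrinkage value are all invented for the example; this is not the code behind any particular GP software.

```r
## A minimal, illustrative sketch (not ASReml or any specific GP package):
## ridge-type marker regression, where every SNP gets a small estimated effect,
## in the spirit of the infinitesimal model. All numbers are simulated.

set.seed(42)
n <- 200      # genotypes with phenotypes
p <- 1000     # SNP markers (coded 0/1/2)

M    <- matrix(sample(0:2, n * p, replace = TRUE), nrow = n)   # marker matrix
beta <- rnorm(p, mean = 0, sd = 0.05)                          # many small QTL effects
y    <- as.vector(M %*% beta + rnorm(n, sd = 1))               # phenotype = sum of effects + noise

## Ridge solution: shrink all marker effects towards zero (lambda is the shrinkage,
## chosen arbitrarily here)
lambda <- 50
Mc     <- scale(M, center = TRUE, scale = FALSE)               # centre the markers
b_hat  <- solve(crossprod(Mc) + lambda * diag(p), crossprod(Mc, y - mean(y)))

## Genomic estimated breeding values for the training genotypes
gebv <- mean(y) + Mc %*% b_hat
cor(gebv, y)   # in-sample predictive correlation (optimistic; use cross-validation in practice)
```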

Machine Learning - the holy grail?

Machine learning (ML) has become widely used in many areas over the last few years. ML is a methodology in which computers are trained on large amounts of data to make predictions. There are many methods, but some of the most common are neural networks, random forests, and decision trees. In ML you do not need to understand the biological system: briefly, you provide the algorithm with large amounts of training data and obtain a predictive system that can then be used to estimate responses. Of course, its implementation is more complex than this description suggests, and a critical part is evaluating the quality of the predictive system obtained.

ML has proven very useful, for example, in comparing images to differentiate pictures of cats from dogs, among many other practical applications. Therefore, ML methods seem the logical tool for GP, particularly as we may have a genomic dataset for our crop of interest with up to 200,000 SNPs obtained on hundreds or even thousands of individuals.
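To make the idea concrete, here is a hedged sketch of how one of these ML learners might be trained and evaluated for genomic prediction in R. The random forest, the simulated SNP data and the 5-fold cross-validation scheme are illustrative choices, not a prescription, and the randomForest package is just one possible tool.

```r
## Hedged sketch: a random forest trained on simulated SNPs, scored by
## simple k-fold cross-validation (the quality check any predictive system needs).

# install.packages("randomForest")   # if not already installed
library(randomForest)

set.seed(1)
n <- 300; p <- 500
M <- matrix(sample(0:2, n * p, replace = TRUE), nrow = n)   # simulated SNP matrix
y <- as.vector(M %*% rnorm(p, sd = 0.05) + rnorm(n))        # simulated phenotype

## 5-fold cross-validation
folds <- sample(rep(1:5, length.out = n))
acc <- sapply(1:5, function(k) {
  train <- folds != k
  rf <- randomForest(x = M[train, ], y = y[train], ntree = 200)
  cor(predict(rf, M[!train, ]), y[!train])   # predictive accuracy in the hold-out fold
})
mean(acc)   # average predictive correlation across folds
```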

There have been several studies on the use of ML in GP, but the results have often been disappointing. In these studies, our traditional genomic prediction methods (BayesB and GBLUP) have consistently been superior to most ML algorithms. Based on these results, we are tempted to say that ML is not working for breeding and genomics. Yet this is a surprising outcome for a tool that is constantly praised in the media as very powerful and that is credited with solving many everyday predictive problems.

Where Machine Learning is at a disadvantage…for now

So, currently ML is not a good option for GP. However, it is my belief that ML is simply still at a disadvantage relative to other GP methods, and that with time it might become as good as the other approaches, or even the gold standard. Some of the reasons for this view are detailed below.

  • ML requires large, often very large, amounts of data. This is usually not available in most of our current breeding programs. It is true that we have thousands, or even millions, of SNPs, but these are information-poor and highly correlated. In addition, the phenotypic records used to train these ML tools probably number only in the thousands, not the hundreds of thousands or millions reported in other fields where ML has been used successfully.
  • We have a pretty good understanding of gene action. ML is often a black box in which our understanding of the biological system is ignored. For our GP models, however, we have good clarity on how the accumulation of alleles produces additive effects, and this can be extended to dominance effects. This, together with the dynamics of Mendelian and Fisherian genetics, in which either a few QTLs have strong effects or a large number of QTLs have small effects, has allowed us to use marker-assisted selection and pedigree-based analyses successfully over the last 50 years.
  • There is an important gap between the computer scientists developing ML tools and the breeders or quantitative geneticists who could use them. Most successful breeding programs have a strong statistical component for the design and analysis of experiments, and with the arrival of genomic data we have extended our models from pedigree-based to molecular-based analyses, or a combination of the two. However, computationally intensive and rapidly evolving ML methods have remained elusive to most breeding programs, and in some cases this is accompanied by a limited understanding of the software used to train ML models.

The routine implementation of ML in breeding programs will take some time. But as we accumulate information, and as we learn about and interact with ML software and its routines, we will slowly see it being used in our crops. This will not mean the end of our more traditional tools or their replacement by ML applications: our current understanding of the biology and the specific nature of our crops will keep the current toolbox valuable. Our view is that, at present, machine learning is not ready for breeding, but in due time it will creep up next to us!

Salvador A. Gezan

March, 2021


Related Reads


The VSNi Team

3 months ago
What is a p-value?

A way to decide whether to reject the null hypothesis (H0) in favour of our alternative hypothesis (H1) is to determine the probability of obtaining a test statistic at least as extreme as the one observed, under the assumption that H0 is true. This probability is referred to as the "p-value". It plays an important role in statistics and is critical in most biological research.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/blog_p_value_7e04a8f8c5.png)

#### **What is the true meaning of a p-value and how should it be used?**

P-values lie on a continuum (between 0 and 1) and provide a measure of the **strength of evidence** against H0. For example, a p-value of 0.066 indicates a 6.6% probability of observing, under repeated sampling, a test statistic as large as or larger than the one calculated, if H0 is true. Note that the p-value is NOT the probability that our alternative hypothesis is correct; it is only a measure of how likely or unlikely such extreme values are relative to our calculated value. Also note that the p-value is obtained from an assumed distribution (e.g., the t-distribution for a t-test); hence, the p-value depends strongly on your (correct or incorrect) assumptions.

The smaller the p-value, the stronger the evidence for rejecting H0. However, it is difficult to determine what a small value really is. This leads to the typical guidelines of: p < 0.001 indicating very strong evidence against H0, p < 0.01 strong evidence, p < 0.05 moderate evidence, p < 0.1 weak evidence or a trend, and p ≥ 0.1 indicating insufficient evidence [1], and to a strong debate on what this threshold should be. But declaring p-values as either significant or non-significant based on an arbitrary cut-off (e.g. 0.05 or 5%) should be avoided. As [Ronald Fisher](https://mathshistory.st-andrews.ac.uk/Biographies/Fisher/) said: "No scientific worker has a fixed level of significance at which, from year to year, and in all circumstances he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas" [2].

A very important aspect of the p-value is that it **does not** provide any evidence in support of H0 – it only quantifies evidence against H0. That is, a large p-value does not mean we can accept H0. Take care not to fall into the trap of accepting H0! Similarly, a small p-value tells you that rejecting H0 is plausible, not that H1 is correct!

For useful conclusions to be drawn from a statistical analysis, p-values should be considered alongside the **size of the effect**. Confidence intervals are commonly used to describe the size of the effect and the precision of its estimate. Crucially, statistical significance does not necessarily imply practical (or biological) significance. Small p-values can come from a large sample and a small effect, or a small sample and a large effect.

It is also important to understand that the size of a p-value depends critically on the sample size (as this affects the shape of the test statistic's distribution). With a very large sample size, H0 may always be rejected, even with extremely small differences and even if H0 is nearly (i.e., approximately) true. Conversely, with a very small sample size, it may be nearly impossible to reject H0 even if we observe extremely large differences. Hence, p-values also need to be interpreted in relation to the size of the study.

#### References

[1] Ganesh H. and V. Cave. 2018. _P-values, P-values everywhere!_ New Zealand Veterinary Journal 66(2): 55-56.

[2] Fisher R.A. 1956. _Statistical Methods and Scientific Inference_. Oliver and Boyd, Edinburgh, UK.
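As a small, self-contained illustration of the ideas above, the R sketch below computes a two-sided p-value for a Welch t-test on simulated data; the sample sizes, means and standard deviations are arbitrary choices for the example.

```r
## A small sketch of how a two-sided p-value arises from a t-test (simulated data)

set.seed(2021)
control   <- rnorm(20, mean = 10, sd = 2)
treatment <- rnorm(20, mean = 11, sd = 2)

tt <- t.test(treatment, control)      # Welch two-sample t-test
tt$statistic                          # observed test statistic
tt$p.value                            # P(|T| >= observed |t|), assuming H0 is true

## The same p-value "by hand" from the t-distribution
2 * pt(abs(tt$statistic), df = tt$parameter, lower.tail = FALSE)
```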


Kanchana Punyawaew and Dr. Vanessa Cave

5 months ago
Mixed models for repeated measures and longitudinal data

The term "**repeated measures**" refers to experimental designs or observational studies in which each experimental unit (or subject) is measured repeatedly over time or space. "**Longitudinal data**" is a special case of repeated measures in which variables are measured over time (often for a comparatively long period) and duration itself is typically a variable of interest. In terms of data analysis it doesn't really matter which type of data you have, as both can be analyzed using mixed models. Remember, the key feature of both types of data is that the response variable is measured more than once on each experimental unit, and these repeated measurements are likely to be correlated.

### Mixed Model Approaches

To illustrate the use of mixed model approaches for analyzing repeated measures, we'll examine a data set from Landau and Everitt's 2004 book, _A Handbook of Statistical Analyses using SPSS_. Here, a double-blind, placebo-controlled clinical trial was conducted to determine whether an estrogen treatment reduces post-natal depression. Sixty-three subjects were randomly assigned to one of two treatment groups: placebo (27 subjects) and estrogen treatment (36 subjects). Depression scores were measured on each subject at baseline, i.e. before randomization (_predep_), and at six two-monthly visits after randomization (_postdep_ at visits 1-6). However, not all the women in the trial had their depression score recorded on all scheduled visits.

In this example, the data were measured at fixed, equally spaced time points. (_Visit_ is time as a factor and _nVisit_ is time as a continuous variable.) There is one between-subject factor (_Group_, i.e. the treatment group, either placebo or estrogen treatment), one within-subject factor (_Visit_ or _nVisit_) and a covariate (_predep_).

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/repeated_measures_data_4f63d505a9_20e39072bf.png)

Using the following plots, we can explore the data. In the first plot below, the depression scores for each subject are plotted against time, including the baseline, separately for each treatment group.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/repeated_measures_1_4149bce2a1_20e3c0f240.png)

In the second plot, the mean depression score for each treatment group is plotted over time. From these plots we can see that there is variation among subjects within each treatment group, that depression scores generally decrease with time, and that, on average, the depression score at each visit is lower with the estrogen treatment than with the placebo.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/repeated_measures_2_92810e7fc9_da9b1e85ff.png)

### Random effects model

The simplest approach for [analyzing repeated measures data](https://www.theanalysisfactor.com/repeated-measures-approaches/) is to use a random effects model with _**subject**_ fitted as random. It assumes a constant correlation between all observations on the same subject. The analysis objectives can either be to measure the average treatment effect over time, or to assess treatment effects at each time point and to test whether treatment interacts with time.

In this example, the treatment (_Group_), time (_Visit_), treatment by time interaction (_Group:Visit_) and baseline (_predep_) effects can all be fitted as fixed. The subject effects are fitted as random, allowing for constant correlation between depression scores taken on the same subject over time. The code and output from fitting this model in [ASReml-R 4](https://www.vsni.co.uk/software/asreml-r) follow:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/4_020d75dee9.png)

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/5_ef250deb61.png)

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/6_15e353865d.png)

The output from summary() shows that the estimates of the subject and residual variances from the model are 15.10 and 11.53, respectively, giving a total variance of 15.10 + 11.53 = 26.63. The Wald tests (from the wald.asreml() table) for _predep_, _Group_ and _Visit_ are significant (probability level (Pr) ≤ 0.01). There appears to be no interaction between treatment group and time (_Group:Visit_), i.e. the probability level is greater than 0.05 (Pr = 0.8636).

### Covariance model

In practice, the correlation between observations on the same subject is often not constant. It is common to expect that the covariances of measurements made closer together in time are more similar than those at more distant times. Mixed models can accommodate many different covariance patterns. The ideal is to select the pattern that best reflects the true covariance structure of the data. A typical strategy is to start with a simple pattern, such as compound symmetry or first-order autoregressive, and test whether a more complex pattern leads to a significant improvement in the likelihood.

Note: using a covariance model with a simple correlation structure (i.e. uniform) will provide the same results as fitting a random effects model with random subject. In ASReml-R 4 we use the corv() function on time (i.e. _Visit_) to specify uniform correlation between depression scores taken on the same subject over time.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/7_3f3a2b825a.png)

Here, the estimate of the correlation among times (_Visit_) is 0.57 and the estimate of the residual variance is 26.63 (identical to the total variance of the random effects model, asr1). Specifying a heterogeneous first-order autoregressive covariance structure is easily done in ASReml-R 4 by changing the variance-covariance function in the residual term from corv() to ar1h().

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/8_27fce61956.png)

### Random coefficients model

When the relationship of a measurement with time is of interest, a [random coefficients model](https://encyclopediaofmath.org/wiki/Random_coefficient_models) is often appropriate. In a random coefficients model, time is considered a continuous variable, and the subject and subject by time interaction (_Subject:nVisit_) are fitted as random effects. This allows the slopes and intercepts to vary randomly between subjects, resulting in a separate regression line being fitted for each subject. Importantly, however, the slopes and intercepts are correlated. The str() function in the asreml() call is used to fit a random coefficients model:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/9_ec27199248.png)

The summary table contains the variance parameters for _Subject_ (the set of intercepts, 23.24) and _Subject:nVisit_ (the set of slopes, 0.89), the estimate of the correlation between the slopes and intercepts (-0.57) and the estimate of the residual variance (8.38).

### References

Brady T. West, Kathleen B. Welch and Andrzej T. Galecki (2007). _Linear Mixed Models: A Practical Guide Using Statistical Software_. Chapman & Hall/CRC, Taylor & Francis Group, LLC.

Brown, H. and R. Prescott (2015). _Applied Mixed Models in Medicine_. Third Edition. John Wiley & Sons Ltd, England.

Sabine Landau and Brian S. Everitt (2004). _A Handbook of Statistical Analyses using SPSS_. Chapman & Hall/CRC Press LLC.
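The ASReml-R code above is shown as screenshots. For readers who want to experiment without ASReml-R, here is a rough sketch of the random intercept and random coefficients models using the open-source lme4 package on simulated data laid out like the trial described above; the variable names and simulated effect sizes are placeholders, not the trial data. Note that serial correlation structures such as corv() or ar1h() are not available in lme4 and would need ASReml-R or nlme.

```r
## Hedged sketch with lme4 on simulated data (not the depression-trial data,
## and not equivalent to the ASReml-R models shown above).

# install.packages("lme4")
library(lme4)

set.seed(10)
n_sub <- 60; n_visit <- 6
dat <- expand.grid(Subject = factor(1:n_sub), nVisit = 1:n_visit)
dat$Visit  <- factor(dat$nVisit)
dat$Group  <- factor(ifelse(as.integer(dat$Subject) <= 27, "Placebo", "Estrogen"))
dat$predep <- rnorm(n_sub, 20, 3)[dat$Subject]          # baseline score per subject
u          <- rnorm(n_sub, 0, 4)[dat$Subject]           # random subject effects
dat$dep    <- 15 + 0.5 * dat$predep - 1.2 * dat$nVisit +
              (dat$Group == "Estrogen") * -2 + u + rnorm(nrow(dat), 0, 3)

## Random effects (random intercept) model: constant correlation within subject
m1 <- lmer(dep ~ predep + Group * Visit + (1 | Subject), data = dat)
summary(m1)

## Random coefficients model: random intercept and slope over continuous time
m2 <- lmer(dep ~ predep + Group * nVisit + (1 + nVisit | Subject), data = dat)
summary(m2)
```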


The VSNi Team

3 months ago
Evolution of statistical computing

It is widely acknowledged that the most fundamental developments in statistics over the past 60+ years have been driven by information technology (IT). We should not underestimate the importance of pen and paper as a form of IT, but it is since people started using computers to do statistical analysis that the role statistics plays in our research, and in everyday life, really changed. In this blog we give a brief historical overview, presenting some of the main general statistical software packages developed from 1957 onwards. Statistical software developed for special purposes is ignored. We also ignore the most widely used 'software for statistics', as Brian Ripley (2002) stated in his famous quote: "Let's not kid ourselves: the most widely used piece of software for statistics is Excel." Our focus is on packages developed by statisticians for statisticians, which are still evolving to incorporate the latest developments in statistics.

### **Ronald Fisher's Calculating Machines**

Pioneer statisticians like [Ronald Fisher](https://www.britannica.com/biography/Ronald-Aylmer-Fisher) started out doing their statistics on pieces of paper and later upgraded to calculating machines. Fisher bought the first Millionaire calculating machine when he was heading Rothamsted Research's statistics department in the early 1920s. It cost about £200 at the time, equivalent in purchasing power to about £9,141 in 2020. This mechanical calculator could only compute direct products, but it was very helpful to the statisticians of the day, as Fisher noted: "Most of my statistics has been learned on the machine." The calculator was heavily used by Fisher's successor [Frank Yates](https://mathshistory.st-andrews.ac.uk/Biographies/Yates/) (Head of Department 1933-1968) and contributed to much of Yates' research, such as designs with confounding between treatment interactions and blocks, split plots, and quasi-factorials.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/Frank_Yates_c50a5fbf55.jpg)

_Frank Yates_

As the Rothamsted Annual Report for 1952 put it: "The analytical work has again involved a very considerable computing effort."

### **Beginning of the Computer Age**

From the early 1950s we entered the computer age. The computers of this era looked little like their modern counterparts, whether the Elliott 401 in the UK or the IBM 700/7000 series in the US. Although the first documented statistical package, BMDP, was developed from 1957 for IBM mainframes at the UCLA Health Computing Facility, on the other side of the Atlantic statisticians at [Rothamsted Research](https://www.rothamsted.ac.uk/) had already begun their endeavours to program an Elliott 401 in 1954.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/Elliott_NRDC_401_computer_b39fd1bbe3.jpg)

### **Programming Statistical Software**

When we teach statistics in schools or universities, students very often complain about the difficulties of programming. Looking back at programming in the 1950s gives modern students an appreciation of how easy programming today actually is! An Elliott 401 served one user at a time and required all input on paper tape (forget your keyboard and intelligent IDE editor), and it delivered its output to an electric typewriter. All programming had to be in machine code, with the instructions and data held on a rotating disk with a 32-bit word length, 5 "words" of fast-access store, 7 intermediate-access tracks of 128 words, and 16 further tracks selectable one at a time (= 2949 words – 128 for system).

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/computer_paper_tape_99626ba274.jpg)

_Computer paper tape_

The early statistical programs written at Rothamsted on this machine included routines for fitting constants to main effects and interactions in multi-way tables (1957), regression and multiple regression (1956), and fitting many standard curves, as well as multivariate analysis for latent roots and vectors (1955). Despite this promising emergence of statistical programs for research, routine statistical analyses still had to be performed, and these remained a big challenge, at least computationally. For example, in 1963, the last year the [Elliott 401](https://www.ithistory.org/db/hardware/elliott-brothers-london-ltd/elliott-401) and [Elliott 402](https://www.ithistory.org/db/hardware/elliott-brothers-london-ltd/elliott-402) computers were in use, Rothamsted statisticians analysed 14,357 data variables, and this took 4,731 hours to complete. It is hard to imagine the energy consumption, or the amount of paper tape used for programming. Probably the paper tape (all glued together) would have been long enough to circle the equator.

### **Development of Statistical Software: Genstat, SAS, SPSS**

The above collection of programs was mainly used for agricultural research at Rothamsted and was not given an umbrella name until John Nelder became Head of the Statistics Department in 1968. The development of Genstat (General Statistics) started in that year, and the programming was done in FORTRAN, initially on an IBM machine. In the same year, at North Carolina State University, SAS (Statistical Analysis Software) was being developed almost simultaneously by computational statisticians, also for analysing agricultural data to improve crop yields. At around the same time, social scientists at the University of Chicago started to develop SPSS (Statistical Package for the Social Sciences). Although the three packages (Genstat, SAS and SPSS) were developed for different purposes and their functions later diverged somewhat, their basic functions covered similar statistical methodologies.

The first version of SPSS was released in 1968. In 1970, the first version of Genstat was released, with functions for ANOVA, regression, principal components and principal coordinate analysis, single-linkage cluster analysis, and general calculations on vectors, matrices and tables. The first version of SAS, SAS 71, was released and named after the year of its release. The early versions of all three packages were written in FORTRAN and designed for mainframe computers.

From the 1980s, with the breakthrough of personal computers, a second generation of statistical software began to emerge. An MS-DOS version of Genstat (Genstat 4.03), with an interactive command-line interface, was released in 1980.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/MSDOS_Genstat_4_03_619aab193a.jpg)

_Genstat 4.03 for MS-DOS_

Around 1985, SAS and SPSS also released versions for personal computers. In the 1980s more players entered this market: STATA was developed from 1985 and JMP from 1989. JMP was, from the very beginning, written for Macintosh computers and consequently had a strong focus on visualization and graphics from its inception.

### **The Rise of the Statistical Language R**

The development of the third generation of statistical computing systems had started before the emergence of software like Genstat 4.03e or SAS 6.01. This development was led by John Chambers and his group at Bell Laboratories from the 1970s. The outcome of their work was the S language, which developed into a general-purpose language with implementations for classical as well as modern statistical inference. The S language was freely available, and its audience was mainly sophisticated academic users. After the acquisition of the S language by the Insightful Corporation and its rebranding as S-PLUS, this leading third-generation statistical package was widely used in both theoretical and practical statistics in the 1990s, especially before the release of a stable beta version of the free and open-source software R in 2000. R was developed by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is now widely used by statisticians in academia and industry, together with statistical software developers, data miners and data analysts.

Software like Genstat, SAS, SPSS and many other packages had to deal with the challenge from R. Each of these long-standing packages developed R interfaces or even R interpreters to anticipate the change in user behaviour and the ever-increasing adoption of the R computing environment. For example, SAS and SPSS provide R plug-ins to talk to R. VSNi's ASReml-R software was developed for ASReml users who want to run mixed model analyses within the R environment, and at present there are more ASReml-R users than ASReml standalone users. Users who need reliable and robust mixed effects model fitting have adopted ASReml-R as an alternative to other mixed model R packages because of its performance and simplified syntax. For Genstat users, msanova was also developed as an R package to give traditional ANOVA users an R interface for running their analyses.

### **What's Next?**

We have no clear idea what will constitute the fourth generation of statistical software. R, as open-source software and a platform for prototyping and teaching, has the potential to help drive this change in statistical innovation. One example is the R Shiny package, with which web applications can easily be developed to provide statistical computing as an online service. But all open-source and commercial software faces the same challenges: providing fast, reliable and robust statistical analyses that allow for reproducibility of research and, most importantly, using sound and correct statistical inference and theory, something that Ronald Fisher would have expected from his calculating machine!
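To give a flavour of the Shiny idea mentioned above, here is a minimal sketch of a web app that exposes a simple statistical computation as an online service; the layout and the t-test example are arbitrary choices for illustration.

```r
## A tiny Shiny app: run a two-sample t-test on simulated data for a
## user-chosen sample size. Purely illustrative, not a VSNi product.

library(shiny)

ui <- fluidPage(
  titlePanel("Two-sample t-test demo"),
  sliderInput("n", "Sample size per group:", min = 5, max = 200, value = 30),
  verbatimTextOutput("result")
)

server <- function(input, output) {
  output$result <- renderPrint({
    x <- rnorm(input$n, mean = 10)     # simulated control group
    y <- rnorm(input$n, mean = 10.5)   # simulated treatment group
    t.test(x, y)                       # the "service" returned to the browser
  })
}

shinyApp(ui, server)   # run interactively to launch the app in a browser
```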
