Using genetic correlations to improve the University of Florida's strawberry breeding program

Sujeet Verma, Luis Osorio, and Vance Whitaker

a month ago

The strawberry breeding program at the University of Florida develops cultivars for the 11,000-acre Florida industry and for winter and spring production regions globally. The program is more than seventy years old, and for the majority of its history was based on phenotypic recurrent selection. Most selection was based on visual characteristics and tasting of fruit, while limited data were collected on yield and diseases. In the last decade the program has supplemented these methods with an expansion of data collection for various traits, more sophisticated experimental designs for clonally replicated trials, and the use of quantitative genetics to inform selection decisions. ASReml software has been key to this transformation.

We have been using ASReml since 2010 for the estimation of genetic variances, genotype-by-environment (G × E) interactions, and genetic correlations among the traits of interest. In addition, ASReml-R has provided a flexible platform for genomic prediction. The following are specific descriptions of the ways ASReml has been used in our strawberry breeding program.  

Spatial analysis

Spatial analysis can improve the estimation of genetic effects by modelling more accurately the spatial distribution of error effects in a field. Like all plant breeders we try our best to select and prepare homogeneous field sites, but there is always heterogeneity within a field due to issues such as soil type, water distribution and nutrient distribution. Standard designs such as the randomized complete block design (RCBD) may not work well when the number of blocks and/or test genotypes becomes large. Spatial analysis exploits correlations between rows and columns to adjust for this heterogeneity. It is difficult to know all trends in heterogeneity in advance, because some, such as soil type, are present before planting, while others appear after planting due to management and data collection. A row-column (alpha) design allows blocking in both the row and column directions for pairwise entry comparisons. We don't recommend using spatial analysis on its own: rather, we fit the RCBD or incomplete block design (IBD) model first and apply spatial analysis as a post-hoc adjustment.

A typical variogram plot from an AR1⊗AR1 model fitted in ASReml-R.

At University of Florida, we started using spatial analysis about five years ago with the aim of improving our genomic predictions by correcting the input phenotypic data for spatial effects. We test different models for each trait of interest including, but not limited to: 

  • residual variance structures (e.g. AR1⊗AR1)
  • spline models
  • phenotypic data adjustments by the experimental factors

We use several statistics, including the log-likelihood, AIC, BIC, MSE and the F-test for fixed effects, to compare models and select the one with the best fit to the data (a minimal sketch of such a comparison is given below). We perform these procedures in ASReml-SA (standalone); however, they can easily be performed in ASReml-R as well.
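The following is a minimal sketch, not our production script, of how such a comparison could be set up in ASReml-R (version 4 syntax assumed). The data frame dat, the factors Genotype, Block, Row and Col, and the trait yield are placeholder names, not the program's actual variables.

```r
library(asreml)

## Field positions must be factors for the ar1() residual structure
dat$Row <- as.factor(dat$Row)
dat$Col <- as.factor(dat$Col)

## Baseline RCBD/IBD-type model with independent errors
base_mod <- asreml(fixed    = yield ~ 1,
                   random   = ~ Genotype + Block,
                   residual = ~ idv(units),
                   data     = dat)

## Spatial model with a separable AR1 x AR1 residual correlation structure
sp_mod <- asreml(fixed    = yield ~ 1,
                 random   = ~ Genotype,
                 residual = ~ ar1(Row):ar1(Col),
                 data     = dat)

## Compare fits via information criteria, then inspect the residual variogram
summary(base_mod)$aic
summary(sp_mod)$aic
plot(varioGram(sp_mod))  # variogram helper; the function name may differ between ASReml-R releases
```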

Genomic best linear unbiased prediction (gBLUP)

One of our first goals in using ASReml was to select parents based on breeding values, where available, in addition to phenotypes. We first began calculating BLUPs using pedigrees, and then after a couple of years transitioned to gBLUP once genome-wide markers were available across the breeding program. We use the predict() function to obtain predictions and coef()$random to extract BLUP values, and we also estimate BLUE values for the fixed effects.
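A minimal gBLUP sketch in ASReml-R (version 4 syntax assumed) is shown below. Here dat, Trial, Genotype, yield and Ginv are placeholder names, and Ginv is assumed to be a genomic relationship matrix (or its inverse) prepared in the form that vm() expects, for example with the ASRgenomics helpers.

```r
library(asreml)

## gBLUP: the genomic relationship matrix is linked to the genotype factor via vm()
gblup <- asreml(fixed    = yield ~ Trial,           # fixed effects -> BLUEs
                random   = ~ vm(Genotype, Ginv),    # genomic breeding values
                residual = ~ idv(units),
                data     = dat)

## BLUEs (predicted means) for the fixed effects
predict(gblup, classify = "Trial")

## Genomic BLUPs (breeding values) for each genotype
gebv <- coef(gblup)$random
head(gebv)
```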

Population structure

We have recently started using the VSNi-developed R package ASRgenomics in our genomic selection pipeline. In the process we have sometimes encountered singularity issues. The kinship.diagnostics() function in ASRgenomics lets us visualize the diagonal and off-diagonal values of the relationship matrix and helps identify potential duplicate clones that cause these singularities. Higher diagonal values indicate higher inbreeding, while unusually large off-diagonal values flag pairs of individuals that may be duplicates.
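A hedged sketch of this diagnostic step might look like the following, assuming G is a genomic relationship matrix already built from marker data (for example with G.matrix() in ASRgenomics); the object names are placeholders.

```r
library(ASRgenomics)

## Diagnostics on the genomic relationship matrix G: summaries and plots of the
## diagonal (inbreeding) and off-diagonal (relatedness) values, plus a report of
## pairs with very high off-diagonals that may be duplicate clones.
kd <- kinship.diagnostics(K = G)

## The returned object holds the summary statistics, plots and duplicate report;
## see the ASRgenomics documentation for the exact component names.
str(kd, max.level = 1)
```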

ASRgenomics diagnostic plots. The scree plot indicates that the first 10 PCs explain nearly 100% of the genetic diversity, with the first 3 PCs explaining ~50%. The PCA plot of individuals visualizes clusters of individuals and the distances between them. The bottom plots show the distributions of diagonal and off-diagonal values of the relationship matrix.

Other features in ASRgenomics that have been useful are PCA and scree plots for examining germplasm diversity. In addition, a dendrogram heatmap of a relationship matrix can help visualise sub-population structures.

Visualization of the additive and dominance relationship matrices. The y-axis represents individuals and the x-axis represents markers.

Genotype × Environment Interactions (G × E)

Performance of a selection across different environments is key to developing and deploying strawberry varieties. We use ASReml to estimate G × E effects via type-B genetic correlations.
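As an illustration, type-B correlations can be estimated from a multi-environment model along the following lines (ASReml-R 4 syntax assumed; dat, Site, Genotype, yield and Ginv are placeholder names, not the program's actual data):

```r
library(asreml)

## Multi-environment trial model: a genetic correlation (corgh) structure across
## sites, with site-specific residual variances via dsum().
met_mod <- asreml(fixed    = yield ~ Site,
                  random   = ~ corgh(Site):vm(Genotype, Ginv),
                  residual = ~ dsum(~ id(units) | Site),
                  data     = dat)

## The estimated between-site genetic correlations are the type-B correlations:
## values near 1 suggest little re-ranking across environments, while low or
## negative values suggest strong G x E.
summary(met_mod)$varcomp
```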

Genetic correlations 

Genetic correlations, that is, correlations among breeding values for different traits, vary in magnitude and direction from negative to positive, and play a significant role in our breeding strategy and in the estimation of prediction accuracy in genomic selection. They are important because, if two traits are strongly correlated, the following scenarios might occur:
(1) selection on one trait might cause a change in the other trait;
(2) performance for an easy-to-measure trait can help predict the breeding values of a costly or difficult-to-measure trait;
(3) in genomic selection, the predictive ability for a low-heritability trait can be increased by its genetic correlation with a highly heritable trait.

One example is the correlation between soluble solids content (SSC, or Brix) and marketable yield, which is strong and negative in our breeding program. In this case, to avoid selecting individuals with low breeding values for either SSC or marketable yield, we look for genotypes that have high breeding values for both traits, also known as “correlation breakers”, or select individuals that reach the minimum threshold breeding values set in the program. We use selection indices to improve the predictive ability of breeding values and to rank selection candidates.
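A bivariate model along the following lines can be used to estimate such a genetic correlation in ASReml-R (version 4 syntax assumed). SSC, Yield, Genotype, Ainv and dat are placeholder names; Ainv stands for an additive (pedigree or genomic) relationship matrix in the form vm() expects.

```r
library(asreml)

## Bivariate model: unstructured (us) genetic covariance between the two traits
bi_mod <- asreml(fixed    = cbind(SSC, Yield) ~ trait,
                 random   = ~ us(trait):vm(Genotype, Ainv),
                 residual = ~ id(units):us(trait),
                 data     = dat)

## Genetic correlation = genetic covariance / sqrt(product of genetic variances).
## Check summary(bi_mod)$varcomp first: V1, V2 and V3 are assumed here to be
## var(SSC), cov(SSC, Yield) and var(Yield), in that order.
vpredict(bi_mod, r_g ~ V2 / sqrt(V1 * V3))
```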

Additive/dominance/epistasis models

One of the major advantages of gBLUP using ASReml is the ability to build complex statistical models that combine relationship matrices based on additive, dominance and epistatic effects. We have estimated dominance effects for some trials. Nevertheless, we have found that additive models work best for genomic prediction thus far.
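A minimal sketch of how an additive-plus-dominance model might be specified (ASReml-R 4 syntax assumed) is shown below; Ainv and Dinv are placeholder names for pre-computed additive and dominance relationship matrices in the form vm() expects.

```r
library(asreml)

## A duplicate copy of the genotype factor lets two relationship matrices be
## attached to the same set of individuals.
dat$GenotypeD <- dat$Genotype

ad_mod <- asreml(fixed  = yield ~ 1,
                 random = ~ vm(Genotype, Ainv) + vm(GenotypeD, Dinv),
                 data   = dat)

## Additive and dominance variance components (confirm the order against the
## component names before interpreting them).
summary(ad_mod)$varcomp
```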

Fixed effects

One major advantage of ASReml is that it allows the addition of known QTL as fixed effects in the mixed model analyses. A simple Wald test can then suggest whether the fixed effect is significant, and whether it affects the variance component estimates overall. While we have not yet found a trait or situation in which adding a fixed effect for a QTL has increased predictive ability, we are continuing to explore this approach.
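A hedged sketch of this approach (ASReml-R 4 syntax assumed), with a hypothetical numeric covariate qtl coding the QTL genotype and the other names as placeholders:

```r
library(asreml)

## Known QTL added as a fixed covariate alongside the genomic random term
qtl_mod <- asreml(fixed  = yield ~ qtl,
                  random = ~ vm(Genotype, Ginv),
                  data   = dat)

## Wald test for the fixed QTL effect
wald(qtl_mod)
```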

Conclusions

ASReml has been invaluable to the evolution of our breeding program. Indeed, over the last decade it has become an integral part of our breeding pipeline. The ability to perform spatial analyses, generate and predict breeding values, examine population structure, and estimate G × E and genetic correlations has led to real-world applications in our breeding program that are helping us to develop better strawberry varieties.

About the authors

Dr Sujeet Verma is a statistical and molecular geneticist at the University of Florida working for the strawberry breeding program. He enjoys analysing large genotypic and phenotypic datasets and is passionate about quantitative genetics and finding genetic solutions for breeders. Linkedin profile Sujeet Verma

Dr Luis Osorio is a highly experienced research professional at the University of Florida. He is part of the research team working for the strawberry breeding program. His main research interests include Genomic Selection, Phenomics and climate change impact on breeding populations. Linkedin profile Luis Osorio

Dr. Vance Whitaker is an Associate Professor of Horticulture at the University of Florida. Dr. Whitaker develops strawberry varieties for the university and his breeding program is enhanced through genetic research and collaborations. Linkedin profile Vance Whitaker

Related Reads


The VSNi Team

6 months ago
Evolution of statistical computing

It is widely acknowledged that the most fundamental developments in statistics over the past 60+ years have been driven by information technology (IT). We should not underestimate the importance of pen and paper as a form of IT, but it is only since people started using computers for statistical analysis that the role statistics plays in our research, and in everyday life, has really changed.

In this blog we give a brief historical overview, presenting some of the main general statistics software packages developed from 1957 onwards. Statistical software developed for special purposes will be ignored. We also ignore the most widely used ‘software for statistics’, as Brian Ripley (2002) noted in his famous quote: “Let’s not kid ourselves: the most widely used piece of software for statistics is Excel.” Our focus is on some of the packages developed by statisticians for statisticians, which are still evolving to incorporate the latest developments in statistics.

Ronald Fisher’s Calculating Machines

Pioneer statisticians like Ronald Fisher started out doing their statistics on pieces of paper and later upgraded to calculating machines. Fisher bought the first Millionaire calculating machine when he was heading Rothamsted Research’s statistics department in the early 1920s. It cost about £200 at the time, equivalent in purchasing power to about £9,141 in 2020. This mechanical calculator could only compute direct products, but it was very helpful to the statisticians of the day, as Fisher remarked: "Most of my statistics has been learned on the machine." The calculator was heavily used by Fisher’s successor Frank Yates (Head of Department 1933-1968) and contributed to much of Yates’ research, such as designs with confounding between treatment interactions and blocks, split plots, and quasi-factorials.


Frank Yates

Rothamsted Annual Report for 1952: "The analytical work has again involved a very considerable computing effort." 

Beginning of the Computer Age

From the early 1950s we entered the computer age. The computers of this period looked little like their modern counterparts, whether the Elliott 401 in the UK or the IBM 700/7000 series in the US. Although the first documented statistical package, BMDP, was developed from 1957 for IBM mainframes at the UCLA Health Computing Facility, on the other side of the Atlantic statisticians at Rothamsted Research began their endeavours to program an Elliott 401 in 1954.


Programming Statistical Software

When we teach statistics in schools or universities, students very often complain about the difficulties of programming. Looking back at programming in the 1950s will give modern students an appreciation of how easy programming today actually is!

An Elliott 401 served one user at a time and required all input on paper tape (forget your keyboard and intelligent IDE). Output went to an electric typewriter. All programming had to be in machine code, with the instructions and data held on a rotating disk with a 32-bit word length: 5 "words" of fast-access store, 7 intermediate-access tracks of 128 words, and 16 further tracks selectable one at a time (= 2,949 words, minus 128 for the system).


Computer paper tape

Despite these constraints, the Rothamsted statisticians developed some of the earliest statistical programs on this machine: fitting constants to main effects and interactions in multi-way tables (1957), regression and multiple regression (1956), and fitting many standard curves as well as multivariate analysis for latent roots and vectors (1955).

Although the emergence of statistical programs for research sounded very promising, routine statistical analyses still had to be performed, and these remained a big challenge, at least computationally. For example, in 1963, the last year in which the Elliott 401 and Elliott 402 computers were in use, Rothamsted statisticians analysed 14,357 data variables, and it took 4,731 hours to complete the job. It is hard to imagine the energy consumption, as well as the amount of paper tape used for programming; glued together, the tape would probably have been long enough to circle the equator.

Development of Statistical Software: Genstat, SAS, SPSS

The above collection of programs was mainly used for agricultural research at Rothamsted and was not given an umbrella name until John Nelder became Head of the Statistics Department in 1968. The development of Genstat (General Statistics) started that year, with the programming done in FORTRAN, initially on an IBM machine. In the same year, at North Carolina State University, SAS (Statistical Analysis System) was being developed almost simultaneously by computational statisticians, also for analysing agricultural data to improve crop yields. At around the same time, social scientists at the University of Chicago started to develop SPSS (Statistical Package for the Social Sciences). Although the three packages (Genstat, SAS and SPSS) were developed for different purposes, and their functions diverged somewhat later, their basic functions covered similar statistical methodologies.

The first version of SPSS was released in 1968. In 1970, the first version of Genstat was released with the functions of ANOVA, regression, principal components and principal coordinate analysis, single-linkage cluster analysis and general calculations on vectors, matrices and tables. The first version of SAS, SAS 71, was released and named after the year of its release. The early versions of all three software packages were written in FORTRAN and designed for mainframe computers.

From the 1980s onwards, with the breakthrough of personal computers, a second generation of statistical software began to emerge. An MS-DOS version of Genstat (Genstat 4.03) with an interactive command-line interface was released in 1980.

Genstat 4.03 for MS-DOS

Around 1985, SAS and SPSS also released a version for personal computers. In the 1980s more players entered this market: STATA was developed from 1985 and JMP was developed from 1989. JMP was, from the very beginning, for Macintosh computers. As a consequence, JMP had a strong focus on visualization as well as graphics from its inception.

The Rise of the Statistical Language R

The development of the third generation of statistical computing systems had started before the emergence of software like Genstat 4.03e or SAS 6.01. This development was led by John Chambers and his group at Bell Laboratories from the 1970s, and its outcome was the S language. S developed into a general-purpose language with implementations of classical as well as modern statistical inference. It was freely available, and its audience was mainly sophisticated academic users. After the acquisition of the S language by the Insightful Corporation and its rebranding as S-PLUS, this leading third-generation statistical software package was widely used in both theoretical and practical statistics in the 1990s, especially before the release of a stable version of the free and open-source software R in 2000. R was developed by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is now widely used by statisticians in academia and industry, together with statistical software developers, data miners and data analysts.

Software like Genstat, SAS, SPSS and many other packages had to deal with the challenge from R. Each of these long-standing software packages developed an R interface, or even R interpreters, to anticipate the change in user behaviour and the ever-increasing adoption of the R computing environment. For example, SAS and SPSS have R plug-ins that allow the two environments to talk to each other. VSNi’s ASReml-R software was developed for ASReml users who want to run mixed model analyses within the R environment, and at present there are more ASReml-R users than ASReml standalone users. Users who need reliable and robust mixed-effects model fitting have adopted ASReml-R as an alternative to other mixed model R packages because of its performance and simplified syntax. For Genstat users, msanova was also developed as an R package to give traditional ANOVA users an R interface for their analyses.

What’s Next?

We have no clear idea what will represent the fourth generation of statistical software. R, as open-source software and a platform for prototyping and teaching, has the potential to help drive this change in statistical innovation. An example is the R Shiny package, with which web applications can easily be developed to provide statistical computing as an online service. But all open-source and commercial software has to face the same challenges: providing fast, reliable and robust statistical analyses that allow for reproducibility of research and, most importantly, using sound and correct statistical inference and theory, something that Ronald Fisher would have expected from his computing machine!


The VSNi Team

5 months ago
What is a p-value?

A way to decide whether to reject the null hypothesis (H0) against our alternative hypothesis (H1) is to determine the probability of obtaining a test statistic at least as extreme as the one observed under the assumption that H0 is true. This probability is referred to as the “p-value”. It plays an important role in statistics and is critical in most biological research.


What is the true meaning of a p-value and how should it be used?

P-values lie on a continuum between 0 and 1 and provide a measure of the strength of evidence against H0. For example, a p-value of 0.066 indicates a 6.6% probability of observing, under repeated sampling, a value as large as or larger than the one we calculated, if H0 were true. Note that the p-value is NOT the probability that our alternative hypothesis is correct; it is only a measure of how likely or unlikely such extreme events are, in reference to our calculated value. Also note that the p-value is obtained under an assumed distribution (e.g., the t-distribution for a t-test); hence, the p-value will depend strongly on your (correct or incorrect) assumptions.
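As a purely illustrative sketch in R (the numbers below are made up), this definition translates directly into a tail probability of the assumed distribution:

```r
## Two-sided p-value for a hypothetical t statistic of 1.9 on 24 degrees of
## freedom: the probability of a value at least this extreme if H0 is true.
t_stat  <- 1.9
df      <- 24
p_value <- 2 * pt(-abs(t_stat), df = df)
p_value
```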

The smaller the p-value, the stronger the evidence for rejecting H0. However, it is difficult to determine what a small value really is. This has led to the typical guidelines of p < 0.001 indicating very strong evidence against H0, p < 0.01 strong evidence, p < 0.05 moderate evidence, p < 0.1 weak evidence or a trend, and p ≥ 0.1 indicating insufficient evidence [1], as well as to a strong debate about what this threshold should be. But declaring p-values as either significant or non-significant based on an arbitrary cut-off (e.g. 0.05 or 5%) should be avoided. As Ronald Fisher said:

“No scientific worker has a fixed level of significance at which, from year to year, and in all circumstances he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas” [2].

A very important aspect of the p-value is that it does not provide any evidence in support of H0 – it only quantifies evidence against H0. That is, a large p-value does not mean we can accept H0. Take care not to fall into the trap of accepting H0! Similarly, a small p-value tells you that rejecting H0 is plausible, and not that H1 is correct!

For useful conclusions to be drawn from a statistical analysis, p-values should be considered alongside the size of the effect. Confidence intervals are commonly used to describe the size of the effect and the precision of its estimate. Crucially, statistical significance does not necessarily imply practical (or biological) significance. Small p-values can come from a large sample and a small effect, or a small sample and a large effect.

It is also important to understand that the size of a p-value depends critically on the sample size (as this affects the shape of our distribution). With a very large sample size, H0 may almost always be rejected, even for extremely small differences and even if H0 is nearly (i.e., approximately) true. Conversely, with a very small sample size, it may be nearly impossible to reject H0 even if we observe extremely large differences. Hence, p-values also need to be interpreted in relation to the size of the study.

References

[1] Ganesh H. and V. Cave. 2018. P-values, P-values everywhere! New Zealand Veterinary Journal. 66(2): 55-56.

[2] Fisher RA. 1956. Statistical Methods and Scientific Inference. Oliver and Boyd, Edinburgh, UK.


The VSNi Team

5 months ago
Should I drop the outliers from my analysis?

Outliers are sample observations that are either much larger or much smaller than the other observations in a dataset. Outliers can skew your dataset, so how should you deal with them?

An example outlier problem

Imagine Jane, the general manager of a chain of computer stores, has asked a statistician, Vanessa, to assist her with the analysis of data on the daily sales at the stores she manages. Vanessa takes a look at the data, and produces a boxplot for each of the stores as shown below.


What do you notice about the data?

Vanessa pointed out to Jane the presence of outliers in the data from Store 2 on days 10 and 22, and recommended that Jane check the accuracy of the data. Are the outliers due to recording or measurement error? If the outliers can’t be attributed to errors in the data, Jane should investigate what might have caused the increased sales on these two particular days. Always investigate outliers: this will help you better understand the data, how it was generated and how to analyse it.

Should we remove the outliers?

Vanessa explained to Jane that we should never drop a data value just because it is an outlier. The nature of the outlier should be investigated before deciding what to do.

Whenever there are outliers in the data, we should look for possible causes of error in the data. If you find an error but cannot recover the correct data value, then you should replace the incorrect data value with a missing value.


However, outliers can also be real observations, and sometimes these are the most interesting ones! If your outlier can’t be attributed to an error, you shouldn’t remove it from the dataset. Removing data values unnecessarily, just because they are outliers, introduces bias and may lead you to draw the wrong conclusions from your study.

What should we do if we need/want to keep the outlier?

  • Transform the data: if the dataset is not normally distributed, we can try transforming the data to normalize it. For example, if the data set has some high-value outliers (i.e. is right skewed), the log transformation will “pull” the high values in. This often works well for count data.
  • Try a different model/analysis: different analyses may make different distributional assumptions, and you should pick one that is appropriate for your data. For example, count data are generally assumed to follow a Poisson distribution. Alternatively, the outliers may be able to be modelled using an appropriate explanatory variable. For example, computer sales may increase as we approach the start of a new school year. A short sketch of both options is given below.
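A minimal sketch of both options in R, using made-up object and column names (sales_df, sales, store, day) rather than Jane's actual data:

```r
## Option 1: log-transform right-skewed data (add 1 if zero counts occur)
sales_df$log_sales <- log(sales_df$sales + 1)

## Option 2: model the counts directly with a Poisson log-linear model,
## including day as a blocking factor to absorb temporal effects
pois_mod <- glm(sales ~ store + day, family = poisson, data = sales_df)
summary(pois_mod)
anova(pois_mod, test = "Chisq")
```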

In our example, Vanessa suggested that since the mean for Store 2 is highly influenced by the outliers, the median, another measure of central tendency, seems more appropriate for summarizing the daily sales at each store. Using the statistical software Genstat, Vanessa can easily calculate both the mean and median number of sales per store for Jane.


Vanessa also analyses the data assuming the daily sales have Poisson distributions, by fitting a log-linear model.


Notice that Vanessa has included “Day” as a blocking factor in the model to allow for variability due to temporal effects.  

From this analysis, Vanessa and Jane conclude that the means (of the Poisson distributions) differ between the stores (p-value < 0.001). Store 3, on average, has the most computer sales per day, whereas Stores 1 and 4, on average, have the least.


There are other statistical approaches Vanessa might have used to analyse Jane’s sales data, including a one-way ANOVA blocked by Day on the log-transformed sales data and Friedman’s non-parametric ANOVA. Both approaches are available in Genstat’s comprehensive menu system.


What is the best method to deal with outliers?

There are many ways to deal with outliers, but no single method will work in every situation. As we have learnt, we can remove an observation if we have evidence it is an error. But, if that is not the case, we can always use alternative summary statistics, or even different statistical approaches, that accommodate them.
