Give me confidence! Why I want to see your confidence interval

Dr. Vanessa Cave

2 months ago

As an applied statistician working on team research projects, on occasion I’ve battled the misguided view that my key contribution is “rubber stamping” p-values onto research output, and not just any p-values: ideally ones less than 0.05! I don’t dislike p-values; in fact, when used appropriately, I believe they play an important role in inferential statistics. However, simply reporting a p-value is of little value: we also need to understand the strength and direction of the effect and the uncertainty in its estimate. That is: I want to see the confidence interval (CI). Let me explain why.

P-values are used to assess whether there is sufficient evidence of an effect. More formally, they provide a measure of the strength of evidence against the null hypothesis (of no effect). The smaller the p-value, the stronger the evidence for rejecting the null hypothesis in favour of an alternative hypothesis. However, the p-value does not convey any information on the size of the effect or the precision of its estimate. This missing information is very important, and it is here that CIs are extremely helpful.

A confidence interval provides a range of plausible values within which we can be reasonably confident the true effect actually falls.
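For readers curious about the mechanics, a 95% CI for a difference between two means is typically constructed as the estimated difference plus or minus a t-multiplier times its standard error. A minimal sketch in R, using purely illustrative numbers (not from any study discussed here):

```r
# 95% CI for a difference between two means: estimate +/- t(0.975, df) * SE
est <- 1.2   # hypothetical estimated difference between the two means
se  <- 0.4   # hypothetical standard error of that difference
df  <- 20    # hypothetical degrees of freedom
est + c(-1, 1) * qt(0.975, df) * se   # lower and upper 95% confidence limits
```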

Let’s look at an example.

A few years ago I was involved in a research programme studying the effect a particular treatment had on the pH of meat. pH is important because it affects the juiciness, tenderness, and taste of meat: if we could manage pH, we could improve the eating quality. A randomised controlled experiment was conducted, the data were analysed, and ta-dah! There was a statistically significant difference between the treatment and control means (p-value = 0.011).

However, the p-value alone leaves three important questions unanswered:

  • Which mean is lower?
    • A lower pH is better.
  • How big is the difference?
    • A small difference in pH, such as less than 0.1, won’t result in a practical difference in eating quality for consumers.
  • How precise is the estimated size of the effect?
    • Having enough precision is critical for meat producers to make an informed decision about the usefulness of the treatment.

A CI will give us this information.

In our example, the 95% CI for the difference between the control and treatment means tells us that:

  • the mean pH for the treatment is indeed statistically lower than for the control
  • the 95% CI is narrow (i.e., the estimate of the effect is precise enough to be useful)

but …

  • the effect size is likely too small to make any real-world practical difference in meat eating quality.
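To see how this plays out in software, here is a minimal sketch in R using made-up pH measurements (illustrative only, not the study’s data): a two-sample t-test reports the p-value together with the estimated difference between the means and its 95% CI.

```r
set.seed(42)
# Made-up pH measurements for illustration only - not the meat study's data
control   <- rnorm(20, mean = 5.60, sd = 0.06)
treatment <- rnorm(20, mean = 5.56, sd = 0.06)

# Welch two-sample t-test: the output includes the p-value, the estimated
# difference in means, and its 95% confidence interval (estimate +/- t * SE)
t.test(control, treatment, conf.level = 0.95)
```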

As the above example illustrates, CIs enable us to draw conclusions about the direction and strength of the effect and the uncertainty in its estimate, whereas p-values are useful for indicating the strength of evidence against the null hypothesis. Hence, CIs are important for understanding the practical or biological importance of the estimated effect.

About the author

Dr Vanessa Cave is an applied statistician interested in the application of statistics to the biosciences, in particular agriculture and ecology. She is a team leader of Data Science at AgResearch Ltd, New Zealand's government-funded agricultural institute, and is a developer of the Genstat statistical software package. Vanessa is currently President of the Australasian Region of the International Biometric Society, past President of the New Zealand Statistical Association, an Associate Editor for the Agronomy Journal, on the Editorial Board of The New Zealand Veterinary Journal, and a member of the Data Science Industry Advisory Group for the University of Auckland. She has a PhD in statistics from the University of St Andrews.

Vanessa has over a decade of experience collaborating with scientists, using statistics to solve real-world problems. She provides expertise on experiment and survey design, data collection and management, statistical analysis, and the interpretation of statistical findings. Her interests include statistical consultancy, mixed models, multivariate methods, statistical ecology, statistical graphics and data visualisation, and the statistical challenges related to digital agriculture.

Related Reads


The VSNi Team

3 months ago
What is a p-value?

A way to decide whether to reject the null hypothesis (H0) against our alternative hypothesis (H1) is to determine the probability of obtaining a test statistic at least as extreme as the one observed under the assumption that H0 is true. This probability is referred to as the “p-value”. It plays an important role in statistics and is critical in most biological research.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/blog_p_value_7e04a8f8c5.png)

#### What is the true meaning of a p-value and how should it be used?

P-values lie on a continuum between 0 and 1 and provide a measure of the **strength of evidence** against H0. For example, a p-value of 0.066 indicates that, under repeated sampling and assuming H0 is true, we would observe a test statistic at least as extreme as the one calculated about 6.6% of the time. Note that the p-value is NOT the probability that our alternative hypothesis is correct; it is only a measure of how likely or unlikely such extreme values are under H0. Also note that the p-value is obtained from an assumed distribution (e.g., the t-distribution for a t-test); hence, the p-value depends strongly on your (correct or incorrect) assumptions.

The smaller the p-value, the stronger the evidence for rejecting H0. However, it is difficult to say what a small value really is. This leads to the typical guidelines of: p < 0.001 indicating very strong evidence against H0, p < 0.01 strong evidence, p < 0.05 moderate evidence, p < 0.1 weak evidence or a trend, and p ≥ 0.1 indicating insufficient evidence [1], and to a strong debate on what this threshold should be. But declaring p-values as either significant or non-significant based on an arbitrary cut-off (e.g. 0.05 or 5%) should be avoided. As [Ronald Fisher](https://mathshistory.st-andrews.ac.uk/Biographies/Fisher/) said: “No scientific worker has a fixed level of significance at which, from year to year, and in all circumstances he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas” [2].

A very important aspect of the p-value is that it **does not** provide any evidence in support of H0 – it only quantifies evidence against H0. That is, a large p-value does not mean we can accept H0. Take care not to fall into the trap of accepting H0! Similarly, a small p-value tells you that rejecting H0 is plausible, not that H1 is correct!

For useful conclusions to be drawn from a statistical analysis, p-values should be considered alongside the **size of the effect**. Confidence intervals are commonly used to describe the size of the effect and the precision of its estimate. Crucially, statistical significance does not necessarily imply practical (or biological) significance: small p-values can come from a large sample and a small effect, or a small sample and a large effect.

It is also important to understand that the size of a p-value depends critically on the sample size (as this affects the shape of our distribution). With a very large sample size, H0 may always be rejected even when the differences are extremely small, i.e. even if H0 is nearly (approximately) true. Conversely, with a very small sample size, it may be nearly impossible to reject H0 even when the observed differences are extremely large. Hence, p-values also need to be interpreted in relation to the size of the study.
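As a concrete illustration (with made-up numbers, not taken from any of the studies mentioned here), a short R sketch of how a two-sided p-value is computed from an observed t statistic, and of how it shrinks with sample size for the same underlying effect:

```r
# Two-sided p-value for a hypothetical observed t statistic with 28 df:
# the probability, under H0, of a t value at least as extreme as the one observed
t_obs <- 1.92
2 * pt(-abs(t_obs), df = 28)   # approximately 0.066

# For a fixed difference in means and standard deviation, the t statistic grows
# with sqrt(n), so the same effect yields a smaller p-value in a larger study
diff_means <- 0.5
sd_pooled  <- 1
for (n in c(15, 60, 240)) {
  t_stat <- diff_means / (sd_pooled * sqrt(2 / n))
  cat("n per group =", n, " two-sided p =",
      signif(2 * pt(-abs(t_stat), df = 2 * n - 2), 3), "\n")
}
```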
#### References

[1] Ganesh, H. and Cave, V. (2018). _P-values, P-values everywhere!_ New Zealand Veterinary Journal 66(2): 55-56.

[2] Fisher, R.A. (1956). _Statistical Methods and Scientific Inference_. Oliver and Boyd, Edinburgh, UK.


Kanchana Punyawaew and Dr. Vanessa Cave

5 months ago
Mixed models for repeated measures and longitudinal data

The term “**repeated measures**” refers to experimental designs or observational studies in which each experimental unit (or subject) is measured repeatedly over time or space. “**Longitudinal data**” is a special case of repeated measures in which variables are measured over time (often for a comparatively long period) and duration itself is typically a variable of interest. In terms of data analysis, it doesn’t really matter which type of data you have: both can be analyzed using mixed models. The key feature of both types of data is that the response variable is measured more than once on each experimental unit, and these repeated measurements are likely to be correlated.

### Mixed model approaches

To illustrate the use of mixed model approaches for analyzing repeated measures, we’ll examine a data set from Landau and Everitt’s 2004 book, _A Handbook of Statistical Analyses using SPSS_. Here, a double-blind, placebo-controlled clinical trial was conducted to determine whether an estrogen treatment reduces post-natal depression. Sixty-three subjects were randomly assigned to one of two treatment groups: placebo (27 subjects) and estrogen treatment (36 subjects). Depression scores were measured on each subject at baseline, i.e. before randomization (_predep_), and at six two-monthly visits after randomization (_postdep_ at visits 1-6). However, not all the women in the trial had their depression score recorded on all scheduled visits.

In this example, the data were measured at fixed, equally spaced, time points. (_Visit_ is time as a factor and _nVisit_ is time as a continuous variable.) There is one between-subject factor (_Group_, i.e. the treatment group, either placebo or estrogen treatment), one within-subject factor (_Visit_ or _nVisit_) and a covariate (_predep_).

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/repeated_measures_data_4f63d505a9_20e39072bf.png)

Using the following plots, we can explore the data. In the first plot below, the depression scores for each subject are plotted against time, including the baseline, separately for each treatment group.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/repeated_measures_1_4149bce2a1_20e3c0f240.png)

In the second plot, the mean depression score for each treatment group is plotted over time. From these plots we can see that there is variation among subjects within each treatment group, that depression scores generally decrease with time, and that, on average, the depression score at each visit is lower with the estrogen treatment than with the placebo.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/repeated_measures_2_92810e7fc9_da9b1e85ff.png)

### Random effects model

The simplest approach for [analyzing repeated measures data](https://www.theanalysisfactor.com/repeated-measures-approaches/) is to use a random effects model with _**subject**_ fitted as random. It assumes a constant correlation between all observations on the same subject. The analysis objectives can either be to measure the average treatment effect over time or to assess treatment effects at each time point and to test whether treatment interacts with time. In this example, the treatment (_Group_), time (_Visit_), treatment by time interaction (_Group:Visit_) and baseline (_predep_) effects can all be fitted as fixed, with the subject effects fitted as random, allowing for constant correlation between depression scores taken on the same subject over time.
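Before looking at the actual code and output below, here is a rough sketch of how such a random effects model might be specified in ASReml-R 4. The data frame name (depdata) and the long-format response column (dep) are assumed for illustration; Group, Visit, Subject and predep are the variables described above.

```r
library(asreml)

# Random effects model: a sketch only. 'depdata' and the response 'dep' are
# assumed names; the factors Group, Visit, Subject and the covariate predep
# are as described in the text.
asr1 <- asreml(fixed  = dep ~ predep + Group + Visit + Group:Visit,
               random = ~ Subject,
               data   = depdata)

summary(asr1)$varcomp   # subject and residual variance components
wald.asreml(asr1)       # sequential Wald tests for the fixed effects
```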
The code and output from fitting this model in [ASReml-R 4](https://www.vsni.co.uk/software/asreml-r) follows:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/4_020d75dee9.png)

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/5_ef250deb61.png)

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/6_15e353865d.png)

The output from summary() shows that the estimates of the subject and residual variances from the model are 15.10 and 11.53, respectively, giving a total variance of 15.10 + 11.53 = 26.63. The Wald tests (from the wald.asreml() table) for _predep_, _Group_ and _Visit_ are significant (probability level (Pr) ≤ 0.01). There appears to be no interaction between treatment group and time (_Group:Visit_), i.e. the probability level is greater than 0.05 (Pr = 0.8636).

### Covariance model

In practice, the correlation between observations on the same subject is often not constant. It is common to expect that the covariances of measurements made closer together in time are more similar than those at more distant times. Mixed models can accommodate many different covariance patterns. The ideal usage is to select the pattern that best reflects the true covariance structure of the data. A typical strategy is to start with a simple pattern, such as compound symmetry or first-order autoregressive, and test whether a more complex pattern leads to a significant improvement in the likelihood.

Note: using a covariance model with a simple correlation structure (i.e. uniform) will provide the same results as fitting a random effects model with random subject. In ASReml-R 4 we use the corv() function on time (i.e. _Visit_) to specify uniform correlation between depression scores taken on the same subject over time.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/7_3f3a2b825a.png)

Here, the estimate of the correlation among times (_Visit_) is 0.57 and the estimate of the residual variance is 26.63 (identical to the total variance of the random effects model, asr1). Specifying a heterogeneous first-order autoregressive covariance structure is easily done in ASReml-R 4 by changing the variance-covariance function in the residual term from corv() to ar1h().

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/8_27fce61956.png)

### Random coefficients model

When the relationship of a measurement with time is of interest, a [random coefficients model](https://encyclopediaofmath.org/wiki/Random_coefficient_models) is often appropriate. In a random coefficients model, time is considered a continuous variable, and the subject and subject by time interaction (_Subject:nVisit_) are fitted as random effects. This allows the slopes and intercepts to vary randomly between subjects, resulting in a separate regression line being fitted for each subject. Importantly, however, the slopes and intercepts are correlated.
The str() function within the asreml() call is used to fit a random coefficients model:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/9_ec27199248.png)

The summary table contains the variance parameters for _Subject_ (the set of intercepts, 23.24) and _Subject:nVisit_ (the set of slopes, 0.89), the estimate of the correlation between the slopes and intercepts (-0.57) and the estimate of the residual variance (8.38).

### References

West, B.T., Welch, K.B. and Galecki, A.T. (2007). _Linear Mixed Models: A Practical Guide Using Statistical Software_. Chapman & Hall/CRC, Taylor & Francis Group, LLC.

Brown, H. and Prescott, R. (2015). _Applied Mixed Models in Medicine_. Third Edition. John Wiley & Sons Ltd, England.

Landau, S. and Everitt, B.S. (2004). _A Handbook of Statistical Analyses using SPSS_. Chapman & Hall/CRC Press LLC.


Kanchana Punyawaew

5 months ago
Linear mixed models: a balanced lattice square

This blog illustrates how to analyze data from a field experiment with a balanced lattice square design using linear mixed models. We’ll consider two models: the balanced lattice square model and a spatial model.

The example data are from a field experiment conducted at Slate Hall Farm, UK, in 1976 (Gilmour _et al_., 1995). The experiment was set up to compare the performance of 25 varieties of barley and was designed as a balanced lattice square with six replicates laid out in a 10 x 15 rectangular grid. Each replicate contained exactly one plot for every variety. The variety grown in each plot, and the coding of the replicates and lattice blocks, is shown in the field layout below:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_layout_7f57633d37_892b6cf234.png)

There are seven columns in the data frame: five blocking factors (_Rep_, _RowRep_, _ColRep_, _Row_, _Column_), one treatment factor, _Variety_, and the response variate, _yield_.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_data_bd9f4ee008_06c8a6e6fc.png)

The six replicates are numbered from 1 to 6 (_Rep_). The lattice block numbering is coded within replicates; that is, within each replicate the lattice rows (_RowRep_) and lattice columns (_ColRep_) are both numbered from 1 to 5. The _Row_ and _Column_ factors define the row and column positions within the field (rather than within each replicate).

### Analysis of a balanced lattice square design

To analyze the response variable, _yield_, we need to identify the two basic components of the experiment: the treatment structure and the blocking (or design) structure. The treatment structure consists of the set of treatments, or treatment combinations, selected to study or to compare. In our example, there is one treatment factor with 25 levels, _Variety_ (i.e. the 25 different varieties of barley). The blocking structure of replicates (_Rep_), lattice rows within replicates (_Rep:RowRep_), and lattice columns within replicates (_Rep:ColRep_) reflects the balanced lattice square design. In a mixed model analysis, the treatment factors are (usually) fitted as fixed effects and the blocking factors as random.

The balanced lattice square model is fitted in [ASReml-R4](https://www.vsni.co.uk/software/asreml-r) using the following code:

```plaintext
> lattice.asr <- asreml(fixed = yield ~ Variety,
                        random = ~ Rep + Rep:RowRep + Rep:ColRep,
                        data = data1)
```

The REML log-likelihood is -707.786. The model’s BIC is:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_2_ac553eac69_6d6d40e073.jpg)

The estimated variance components are:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_3_69e11e2dff_c34641a3a9.jpg)

The table above contains the estimated variance components for all terms in the random model. The variance component measures the inherent variability of the term, over and above the variability of the sub-units of which it is composed. The variance components for _Rep_, _Rep:RowRep_ and _Rep:ColRep_ are estimated as 4263, 15596, and 14813, respectively. As is typical, the largest unit (replicate) is more variable than its sub-units (lattice rows and columns within replicates). The _"units!R"_ component is the residual variance.
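The Wald tests and predicted variety means discussed next can be requested with calls along these lines (a sketch reusing the lattice.asr object fitted above; the sed argument is assumed here to be the way to request standard errors of differences):

```r
# Sequential Wald tests for the fixed effects ((Intercept) and Variety)
wald.asreml(lattice.asr)

# Predicted variety means; sed = TRUE is assumed to also return the standard
# errors of differences between pairs of variety means
predict(lattice.asr, classify = "Variety", sed = TRUE)
```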
By default, fixed effects in ASReml-R4 are tested using sequential Wald tests:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_4_e237aed045_274881533e.jpg)

In this example, there are two terms in the summary table: the overall mean, (_Intercept_), and _Variety_. As the tests are sequential, the effect of _Variety_ is assessed by calculating the change in sums of squares between the two models (_Intercept_) + _Variety_ and (_Intercept_). The p-value (Pr(Chisq)) of < 2.2 × 10⁻¹⁶ indicates that _Variety_ is highly significant.

The predicted means for _Variety_ can be obtained using the predict() function. The standard error of the difference between any pair of variety means is 62. Note: all variety means have the same standard error as the design is balanced.

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_5_575ede3e94_5b9209f7c3.jpg)

Note: the same analysis is obtained when the random model is redefined as replicates (_Rep_), rows within replicates (_Rep:Row_) and columns within replicates (_Rep:Column_).

### Spatial analysis of a field experiment

As the plots are laid out in a grid, the data can also be analyzed using a spatial model. We’ll illustrate spatial analysis by fitting a model with a separable first-order autoregressive process in the field row (_Row_) and field column (_Column_) directions. This is often a useful model with which to start the spatial modeling process. The separable first-order autoregressive spatial model is fitted in ASReml-R4 using the following code:

```plaintext
> spatial.asr <- asreml(fixed = yield ~ Variety,
                        residual = ~ ar1(Row):ar1(Column),
                        data = data1)
```

The BIC for this spatial model is:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_6_3b978358f9_e792bcc2bd.jpg)

The estimated variance components and sequential Wald tests are:

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_7_82255b3b94_b5bc40e6ab.jpg)

![alt text](https://web-global-media-storage-production.s3.eu-west-2.amazonaws.com/lattice_8_544d852c25_53b792377f.jpg)

The residual variance is 38713, the estimated row correlation is 0.458, and the estimated column correlation is 0.684. As for the balanced lattice square model, there is strong evidence of a _Variety_ effect (p-value < 2.2 × 10⁻¹⁶).

A [log-likelihood ratio test](https://www.statisticshowto.com/likelihood-ratio-tests/) cannot be used to compare the balanced lattice square model with the spatial model, as the variance models are not nested. However, the two models can be compared using BIC. As the spatial model has a smaller BIC (1415) than the balanced lattice square model (1435), it is chosen as the preferred of the two models explored in this blog. However, selecting the optimal spatial model can be difficult. The current spatial model could be extended by including measurement error (or a nugget effect) or revised by selecting a different variance model for the spatial effects.

#### References

Butler, D.G., Cullis, B.R., Gilmour, A.R., Gogel, B.G. and Thompson, R. (2017). _ASReml-R Reference Manual Version 4_. VSN International Ltd, Hemel Hempstead, HP2 4TP, UK.

Gilmour, A.R., Anderson, R.D. and Rae, A.L. (1995). _The analysis of binomial data by a generalised linear mixed model_. Biometrika 72: 593-599.
