The changing landscape of the pharmaceutical statistician

Dr. Jo Burke

a month ago

Over the past forty years the number of statisticians in the pharmaceutical industry has grown enormously. An excellent, though perhaps somewhat dated, summary of the history of pharmaceutical statisticians may be found in Andy Grieve’s Presidential Address to the Royal Statistical Society (2005). In his address, Andy pointed out that membership of PSI (Statisticians in the Pharmaceutical Industry, a predominantly UK organisation) grew from 47 in 1977 to over 1,100 in 2004. This growth in numbers has been mirrored in related disciplines such as statistical programming.

It isn’t purely in numbers that things have changed: the role itself has evolved and continues to do so. Historically, the majority were employed in drug discovery; today the majority work in clinical research, but such a simple description does not adequately capture the variety. Pharmaceutical statisticians may now be found supporting high-throughput screening, microarrays, toxicology, formulation and stability testing, assay validation, genomics, pharmacology, pharmacokinetics, pharmacodynamics, clinical research, manufacturing, post-marketing studies, safety surveillance and pharmacoeconomics, to name but a few! Many companies also include a team of statisticians dedicated to researching methodology and innovation.

The effect of increased regulation and computational power

I think it’s fair to say that one of the main factors, if not the main factor, in the evolution of the pharmaceutical statistician’s role has been the changing regulatory landscape. From the 1960s, the number of laws, regulations and guidelines covering applications for new drug products began to increase globally. Eventually, following initial discussions between Europe, Japan and the US, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) was conceived. A quick glance at the ICH website today will provide the reader with a set of guidelines that should be followed, with one in particular close to a clinical trial statistician’s heart: ICH E9, Statistical Principles for Clinical Trials. It would be remiss to think that this is the only guideline covering statistical issues: there are many, ranging from adjusting for baseline covariates to the use of Bayesian statistics. Indeed, the UK’s own National Institute for Health and Care Excellence (NICE) provides some excellent guidance regarding network meta-analyses. Guidelines also exist covering the standardisation of data collection, datasets, clinical trial reports and the Integrated Summaries of Effectiveness and Integrated Summaries of Safety.

It may appear that, with so many standards, much of the work of a pharmaceutical statistician supporting clinical trials could be automated. This is indeed the case for standard outputs such as demographics, but alongside the increase in regulations governing drug approval we have seen an explosion in computing power, which has increased the number and complexity of the analysis methods available.

It wasn’t so long ago that the imputation of missing data or the use of Bayesian designs and analyses would have been unthinkable, yet Bayesian statistics is now becoming commonplace in early clinical development, adaptive dose-finding studies, health technology assessments and, increasingly, in medical devices, rare diseases and paediatric populations. With so much at our fingertips these days, we should not forget the importance of ensuring the basics are covered: plot the data, and understand the models, how the data were collected, the methods, the assumptions and the experimental design. In this era of “Big Data”, “Machine Learning” and “Artificial Intelligence” this is particularly true (yes, these may be encountered in pharmaceutical statistics). One consequence of having easy access to often quite complex methods is a “black box” approach to analysis. Whilst this certainly has its merits, users are sometimes unaware of the assumptions underpinning the analyses and so do not present their results with the appropriate caveats. One assumption that is often overlooked and not checked, even by the strongest of theoretical statisticians, is that an iterative method, such as those used in many Bayesian analyses, has actually converged to a stable solution.
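As a minimal illustration of that kind of check, here is a toy random-walk Metropolis sampler in R, examined with the coda package; the data, prior and tuning values are all invented for the example and are not tied to any particular trial analysis:

library(coda)   # provides mcmc(), gelman.diag(), effectiveSize()

set.seed(42)
y <- rnorm(30, mean = 2, sd = 1)   # toy data

# log posterior for the mean of a normal with known sd = 1 and a vague normal prior
log_post <- function(mu) sum(dnorm(y, mu, 1, log = TRUE)) + dnorm(mu, 0, 10, log = TRUE)

# simple random-walk Metropolis sampler
run_chain <- function(n_iter, start, step = 0.5) {
  mu <- numeric(n_iter)
  mu[1] <- start
  for (i in 2:n_iter) {
    prop <- mu[i - 1] + rnorm(1, 0, step)
    if (log(runif(1)) < log_post(prop) - log_post(mu[i - 1])) mu[i] <- prop else mu[i] <- mu[i - 1]
  }
  mu
}

# run two chains from dispersed starting values and check that they agree
chains <- mcmc.list(mcmc(run_chain(5000, start = -10)), mcmc(run_chain(5000, start = 10)))
gelman.diag(chains)     # potential scale reduction factor; values close to 1 suggest convergence
effectiveSize(chains)   # effective sample size after allowing for autocorrelation
plot(chains)            # trace plots should overlap and look stationary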

The importance of experimental design

The need to cover the basics applies not only to the analysis of trial data, but also to its design. Many trials use complex adaptive designs which fundamentally rely on appropriate randomisation and the correct doses, population and endpoints. Sometimes even what may be considered a simple stratified randomisation needs to be thought about very carefully. Some companies have little or no experience of crossover designs, whereas in others they are very popular. The basics of experimental design can be forgotten, and designs taken off the shelf simply because they have been used by that department for a long time. I would suggest that there is still a lot that can be learnt from Cox’s Design of Experiments, particularly for designs that are more complex than a simple 2x2 crossover.
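A stratified, permuted-block randomisation list needs only a few lines of R, as the sketch below shows; the strata, block size and treatment labels are purely hypothetical, and a real trial would of course add allocation concealment and documented seed management:

set.seed(2021)

strata   <- c("Site A / under 65", "Site A / 65 plus", "Site B / under 65", "Site B / 65 plus")
block    <- c("Active", "Active", "Placebo", "Placebo")   # permuted block of 4, 1:1 allocation
n_blocks <- 3                                             # blocks per stratum

schedule <- do.call(rbind, lapply(strata, function(s) {
  data.frame(stratum   = s,
             position  = seq_len(n_blocks * length(block)),
             treatment = unlist(replicate(n_blocks, sample(block), simplify = FALSE)))
}))

head(schedule, 8)   # the first two blocks of the first stratum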

Strategic thinking

One will see and hear many presentations discussing the ever-increasing cost of developing new drugs. One contract research organisation estimates that pivotal Phase III trials cost a median of approximately $41,000 per patient. This increasing cost does not appear to be matched by an increase in the number of drugs being approved. In some companies, statisticians are involved in the drug development plan from the very beginning, making use of utility analyses and decision theory to help with strategic decisions.

In an attempt to reduce the time to market, many companies have used adaptive trial designs combining Phase II and Phase III. Sometimes these fail not because of a lack of efficacy or safety issues, but because the trial was rushed into without enough knowledge of the product. Some companies are beginning to take a different approach and to focus on more exploratory trials and/or analyses prior to Phase III in order to maximise their knowledge, often developing predictive models during the earlier stages. These models can be remarkably complex to derive. Statisticians involved in pre-clinical and early clinical research bring a wealth of knowledge to their development, so perhaps we shall see, if not quite a full circle, a return of more pharmaceutical statisticians to the earlier stages of drug development, coupled with more thinking time.

About the author 

Dr Burke, PhD, CStat, has worked for approximately 30 years as a statistical consultant in the pharmaceutical industry supporting clinical trials from early to late stage development.

Her experience has involved direct clinical trial, regulatory submission and manufacturing support as well as working with statistical research groups. This experience has covered a wide variety of therapeutic areas with trial designs and analyses ranging from the standard to the more novel.

Limited time special offer - mixed model analysis of medical data course

Until December 24, 2021 you can get 50% off our ASReml-R course "Mixed model analysis of medical data". Sign up and use the coupon MEDICAL50.

Related Reads


The VSNi Team

7 months ago
Should I drop the outliers from my analysis?

Outliers are sample observations that are much larger or much smaller than the other observations in a dataset. Outliers can distort your results, so how should you deal with them?

An example outlier problem

Imagine Jane, the general manager of a chain of computer stores, has asked a statistician, Vanessa, to assist her with the analysis of data on the daily sales at the stores she manages. Vanessa takes a look at the data, and produces a boxplot for each of the stores as shown below.

[Boxplots of daily sales for each store]

What do you notice about the data?

Vanessa pointed out to Jane the presence of outliers in the data from Store 2 on days 10 and 22, and recommended that Jane check the accuracy of the data. Are the outliers due to recording or measurement error? If the outliers can’t be attributed to errors in the data, Jane should investigate what might have caused the increased sales on these two particular days. Always investigate outliers: this will help you better understand the data, how they were generated and how to analyse them.

Should we remove the outliers?

Vanessa explained to Jane that we should never drop a data value just because it is an outlier. The nature of the outlier should be investigated before deciding what to do.

Whenever there are outliers, we should look for possible causes of error in the data. If you find an error but cannot recover the correct data value, then you should replace the incorrect value with a missing value.


However, outliers can also be real observations, and sometimes these are the most interesting ones! If your outlier can’t be attributed to an error, you shouldn’t remove it from the dataset. Removing data values unnecessarily, just because they are outliers, introduces bias and may lead you to draw the wrong conclusions from your study.

What should we do if we need/want to keep the outlier?

  • Transform the data: if the dataset is not normally distributed, we can try transforming the data to normalize it. For example, if the dataset has some high-value outliers (i.e. is right-skewed), a log transformation will “pull” the high values in. This often works well for count data.
  • Try a different model/analysis: different analyses make different distributional assumptions, and you should pick one that is appropriate for your data. For example, count data are generally assumed to follow a Poisson distribution. Alternatively, it may be possible to model the outliers using an appropriate explanatory variable: for example, computer sales may increase as we approach the start of a new school year.

In our example, Vanessa suggested that since the mean for Store 2 is highly influenced by the outliers, the median, another measure of central tendency, seems more appropriate for summarizing the daily sales at each store. Using the statistical software Genstat, Vanessa can easily calculate both the mean and median number of sales per store for Jane.

[Genstat output: mean and median daily sales for each store]
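The output above comes from Genstat’s menus; the same summaries are easy to reproduce in R on simulated data. The numbers below are invented and are not Jane’s actual sales:

set.seed(1)
sales <- data.frame(day   = factor(rep(1:30, times = 4)),
                    store = factor(rep(paste("Store", 1:4), each = 30)),
                    n     = rpois(120, rep(c(10, 12, 18, 10), each = 30)))
sales$n[sales$store == "Store 2" & sales$day %in% c(10, 22)] <- c(45, 50)   # two outlying days

boxplot(n ~ store, data = sales, ylab = "Daily sales")   # the Store 2 outliers stand out

tapply(sales$n, sales$store, mean)     # the Store 2 mean is pulled up by the outliers
tapply(sales$n, sales$store, median)   # the medians are largely unaffected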

Vanessa also analyses the data by fitting a log-linear model, assuming the daily sales follow Poisson distributions.

[Genstat output: log-linear (Poisson) model fit]

Notice that Vanessa has included “Day” as a blocking factor in the model to allow for variability due to temporal effects.  
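Vanessa’s model is fitted through Genstat’s menus; a rough R equivalent of the same log-linear (Poisson) model, refitted on the simulated data from the earlier sketch so that the snippet stands alone, might look like this:

set.seed(1)
sales <- data.frame(day   = factor(rep(1:30, times = 4)),
                    store = factor(rep(paste("Store", 1:4), each = 30)),
                    n     = rpois(120, rep(c(10, 12, 18, 10), each = 30)))
sales$n[sales$store == "Store 2" & sales$day %in% c(10, 22)] <- c(45, 50)

fit <- glm(n ~ day + store, family = poisson, data = sales)   # Day included as a blocking factor
anova(fit, test = "Chisq")                                    # sequential test for a store effect
exp(coef(fit)[grep("^store", names(coef(fit)))])              # sales rates relative to Store 1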

From this analysis, Vanessa and Jane conclude that the means (of the Poisson distributions) differ between the stores (p-value < 0.001). Store 3, on average, has the most computer sales per day, whereas Stores 1 and 4, on average, have the fewest.


There are other statistical approaches Vanessa might have used to analyse Jane’s sales data, including a one-way ANOVA blocked by Day on the log-transformed sales data and Friedman’s non-parametric ANOVA. Both approaches are available in Genstat’s comprehensive menu system.
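Sketched in R rather than Genstat’s menus (and again on the simulated data), those two alternatives look like this:

set.seed(1)
sales <- data.frame(day   = factor(rep(1:30, times = 4)),
                    store = factor(rep(paste("Store", 1:4), each = 30)),
                    n     = rpois(120, rep(c(10, 12, 18, 10), each = 30)))
sales$n[sales$store == "Store 2" & sales$day %in% c(10, 22)] <- c(45, 50)

summary(aov(log(n) ~ day + store, data = sales))   # ANOVA on log-transformed sales, blocked by day
friedman.test(n ~ store | day, data = sales)       # Friedman's non-parametric test, blocked by day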


What is the best method to deal with outliers?

There are many ways to deal with outliers, but no single method will work in every situation. As we have learnt, we can remove an observation if we have evidence that it is an error. If that is not the case, we can use alternative summary statistics, or even different statistical approaches, that accommodate outliers.


Dr. John Rogers

9 months ago
50 years of bioscience statistics

Earlier this year I had an enquiry from Carey Langley of VSNi as to why I had not renewed my Genstat licence. The truth was simple – I have decided to fully retire after 50 years as an agricultural entomologist / applied biologist / consultant. This prompted some reflections about the evolution of bioscience data analysis that I have experienced over that half century, a period during which most of my focus was the interaction between insects and their plant hosts; both how insect feeding impacts on plant growth and crop yield, and how plants impact on the development of the insects that feed on them and on their natural enemies.

Where it began – paper and post

My journey into bioscience data analysis started with undergraduate courses in biometry – yes, it was an agriculture faculty, so it was biometry not statistics. We started doing statistical analyses using full keyboard Monroe calculators (for those of you who don’t know what I am talking about, you can find them here).  It was a simpler time and as undergraduates we thought it was hugely funny to divide 1 by 0 until the blue smoke came out…

After leaving university in the early 1970s, I started working for the Agriculture Department of an Australian state government, at a small country research station. Statistical analysis was rudimentary to say the least. If you were motivated, there was always the option of running analyses yourself by hand, given the appearance of the first scientific calculators in the early 1970s. If you wanted a formal statistical analysis of your data, you would mail off a paper copy of the raw data to Biometry Branch… and wait.  Some months later, you would get back your ANOVA, regression, or whatever the biometrician thought appropriate to do, on paper with some indication of what treatments were different from what other treatments.  Dose-mortality data was dealt with by manually plotting data onto probit paper. 

Enter the mainframe

In-house ANOVA programs running on central mainframes were a step forward some years later, as they at least enabled us to run our own analyses, as long as you wanted to do an ANOVA… However, it also required a two-hour drive to the nearest card reader, with the actual computer a further 1000 kilometres away… The first desktop computer I used for statistical analysis was in the early 1980s: a CP/M machine with two 8-inch floppy discs and, I think, 256k of memory, and booting it required turning a key and pressing the blue button - yes, really! And at about the same time, the local agricultural economist drove us crazy extolling the virtues of a program called Lotus 1-2-3!

Having been brought up on a solid diet of classic texts such as Steel and Torrie, Cochran and Cox, and Sokal and Rohlf, the primary frustration during this period was not having ready access to the statistical analyses you knew were appropriate for your data. The typical mode of operating for agricultural scientists in that era was randomised blocks of various degrees of complexity, hence the emphasis on ANOVA in the software that was available in-house. Those of us who also had less-structured ecological data were less well catered for.

My first access to a comprehensive statistics package was during the early to mid-1980s at one of the American land-grant universities. It was a revelation to be able to run virtually whatever statistical test was deemed necessary. Access to non-linear regression was a definite plus, given the non-linear nature of many biological responses. As well, being able to run a series of models to test specific hypotheses opened up new options for more elegant and insightful analyses. Looking back from 2021, such things look very trivial, but compared to where we came from in the 1970s, they were significant steps forward.

Enter Genstat

My first exposure to Genstat, VSNi’s stalwart statistical software package, was Genstat for Windows, Third Edition (1997). Simple things like the availability of residual plots made a difference for us entomologists, given that much of our data had non-normal errors; it took the guesswork out of whether and what transformations to use. The availability of regressions with grouped data also opened some previously closed doors. 

After a deviation away from hands-on research, I came back to biological-data analysis in the mid-2000s and found myself working with repeated-measures and survival / mortality data, so ventured into repeated-measures restricted maximum likelihood analyses and generalised linear mixed models for the first time (with assistance from a couple of Roger Payne’s training courses in Hobart and Queenstown). Looking back, it is interesting how quickly I became blasé about such computationally intensive analyses that would run in seconds on my laptop or desktop, forgetting that I was doing ANOVAs by hand 40 years earlier when John Nelder was developing generalised linear models. How the world has changed!

Partnership and support

Of importance to my Genstat experience was the level of support that was available to me as a Genstat licensee. Over the last 15 years or so, as I attempted some of these more complex analyses, my aspirations were somewhat ahead of my abilities, and it was always reassuring to know that Genstat Support was only ever an email away. A couple of examples will flesh this out. 

Back in 2008, I was working on the relationship between insect-pest density and crop yield using R2LINES, but had extra linear X’s related to plant vigour in addition to the measure of pest infestation. A support-enquiry email produced an overnight response from Roger Payne that basically said, “Try this”. While I slept, Roger had written an extension to R2LINES to incorporate extra linear X’s. This was later incorporated into the regular releases of Genstat. This work led to the clearer specification of the pest densities that warranted chemical control in soybeans and dry beans (https://doi.org/10.1016/j.cropro.2009.08.016 and https://doi.org/10.1016/j.cropro.2009.08.015).

More recently, I was attempting to disentangle the effects on caterpillar mortality of the two Cry insecticidal proteins in transgenic cotton and, while I got close, I would not have got the analysis to run properly without Roger’s support. The data was scant in the bottom half of the overall dose-response curves for both Cry proteins, but it was possible to fit asymptotic exponentials that modelled the upper half of each curve. The final double-exponential response surface I fitted with Roger’s assistance showed clearly that the dose-mortality response was stronger for one of the Cry proteins than the other, and that there was no synergistic action between the two proteins (https://doi.org/10.1016/j.cropro.2015.10.013).

The value of a comprehensive statistics package

One thing that I especially appreciate about having access to a comprehensive statistics package such as Genstat is having the capacity to tease apart biological data to get at the underlying relationships. About 10 years ago, I was asked to look at some data on the impact of cold stress on the expression of the Cry2Ab insecticidal protein in transgenic cotton. The data set was seemingly simple - two years of pot-trial data where groups of pots were either left out overnight or protected from low overnight temperatures by being moved into a glasshouse, plus temperature data and Cry2Ab protein levels. A REML analysis, and some correlations and regressions, enabled me to show that cold overnight temperatures did reduce Cry2Ab protein levels, that the effects occurred for up to 6 days after the cold period, and that the threshold for these effects was approximately 14 °C (https://doi.org/10.1603/EC09369). What I took from this piece of work is how powerful a comprehensive statistics package can be in teasing apart important biological insights from what was seemingly very simple data. Note that I did not use any statistics that were cutting edge, just a combination of REML, correlation and regression analyses, but used these techniques to guide the dissection of the relationships in the data to end up with an elegant and insightful outcome.

Final reflections

Looking back over 50 years of work, one thing stands out for me: the huge advances that have occurred in the statistical analysis of biological data have allowed much more insightful analyses which, in turn, have allowed biological scientists to more elegantly pull apart the interactions between insects and their plant hosts.

For me, Genstat has played a pivotal role in that process. I shall miss it.

Dr John Rogers

Research Connections and Consulting

St Lucia, Queensland 4067, Australia

Phone/Fax: +61 (0)7 3720 9065

Mobile: 0409 200 701

Email: john.rogers@rcac.net.au

Alternate email: D.John.Rogers@gmail.com


Kanchana Punyawaew

9 months ago
Linear mixed models: a balanced lattice square

This blog illustrates how to analyze data from a field experiment with a balanced lattice square design using linear mixed models. We’ll consider two models: the balanced lattice square model and a spatial model.

The example data are from a field experiment conducted at Slate Hall Farm, UK, in 1976 (Gilmour et al., 1995). The experiment was set up to compare the performance of 25 varieties of barley and was designed as a balanced lattice square with six replicates laid out in a 10 x 15 rectangular grid. Each replicate contained exactly one plot for every variety. The variety grown in each plot, and the coding of the replicates and lattice blocks, is shown in the field layout below:

[Field layout showing the variety, replicate and lattice block coding]

There are seven columns in the data frame: five blocking factors (Rep, RowRep, ColRep, Row, Column), one treatment factor, Variety, and the response variate, yield.

[Extract of the data frame]

The six replicates are numbered from 1 to 6 (Rep). The lattice block numbering is coded within replicates; that is, within each replicate the lattice rows (RowRep) and lattice columns (ColRep) are both numbered from 1 to 5. The Row and Column factors define the row and column positions within the field (rather than within each replicate).

Analysis of a balanced lattice square design

To analyze the response variable, yield, we need to identify the two basic components of the experiment: the treatment structure and the blocking (or design) structure. The treatment structure consists of the set of treatments, or treatment combinations, selected to study or to compare. In our example, there is one treatment factor with 25 levels, Variety (i.e. the 25 different varieties of barley). The blocking structure of replicates (Rep), lattice rows within replicates (Rep:RowRep), and lattice columns within replicates (Rep:ColRep) reflects the balanced lattice square design. In a mixed model analysis, the treatment factors are (usually) fitted as fixed effects and the blocking factors as random.

The balanced lattice square model is fitted in ASReml-R4 using the following code:

> lattice.asr <- asreml(fixed = yield ~ Variety,
                        random = ~ Rep + Rep:RowRep + Rep:ColRep,
                        data=data1)

The REML log-likelihood is -707.786.

The model’s BIC is 1435.

The estimated variance components are:

[Table of estimated variance components]

The table above contains the estimated variance components for all terms in the random model. The variance component measures the inherent variability of the term, over and above the variability of the sub-units of which it is composed. The variance components for Rep, Rep:RowRep and Rep:ColRep are estimated as 4263, 15596, and 14813, respectively. As is typical, the largest unit (replicate) is more variable than its sub-units (lattice rows and columns within replicates). The "units!R" component is the residual variance.
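Assuming the usual ASReml-R4 accessors, the components in this table and the log-likelihood quoted above can be pulled straight from the fitted object:

> summary(lattice.asr)$varcomp   # estimated variance components (Rep, Rep:RowRep, Rep:ColRep, units!R)
> lattice.asr$loglik             # REML log-likelihood (-707.786 above)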

By default, fixed effects in ASReml-R4 are tested using sequential Wald tests:

[Sequential Wald tests for the fixed terms]

In this example, there are two terms in the summary table: the overall mean, (Intercept), and Variety. As the tests are sequential, the effect of Variety is assessed by calculating the change in sums of squares between the two models (Intercept)+Variety and (Intercept). The p-value (Pr(Chisq)) of < 2.2 x 10⁻¹⁶ indicates that Variety is highly significant.

The predicted means for Variety can be obtained using the predict() function. The standard error of the difference (SED) between any pair of variety means is 62. Note: all pairs of variety means have the same SED because the design is balanced.

[Predicted variety means with standard errors]
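A sketch of how these tables can be produced in ASReml-R4 follows; the wald() and predict() calls are written in line with the ASReml-R4 reference manual conventions, so treat the exact arguments as indicative rather than definitive:

> wald(lattice.asr)                                       # sequential Wald tests for the fixed effects
> pred <- predict(lattice.asr, classify = "Variety", sed = TRUE)
> pred$pvals                                              # predicted means for the 25 varieties
> pred$sed[1, 2]                                          # SED for a pair of variety means (62 here)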

Note: the same analysis is obtained when the random model is redefined as replicates (Rep), rows within replicates (Rep:Row) and columns within replicates (Rep:Column).
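For reference, that equivalent specification would be coded as follows (the object name lattice2.asr is ours):

> lattice2.asr <- asreml(fixed = yield ~ Variety,
                         random = ~ Rep + Rep:Row + Rep:Column,
                         data = data1)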

Spatial analysis of a field experiment

As the plots are laid out in a grid, the data can also be analyzed using a spatial model. We’ll illustrate spatial analysis by fitting a model with a separable first order autoregressive process in the field row (Row) and field column (Column) directions. This is often a useful model to start the spatial modeling process.

The separable first order autoregressive spatial model is fitted in ASReml-R4 using the following code:

> spatial.asr <- asreml(fixed = yield ~ Variety,
                        residual = ~ar1(Row):ar1(Column),
                        data = data1)

The BIC for this spatial model is 1415.

The estimated variance components and sequential Wald tests are:

[Estimated variance components and sequential Wald tests for the spatial model]

The residual variance is 38713, the estimated row correlation is 0.458, and the estimated column correlation is 0.684. As for the balanced lattice square model, there is strong evidence of a Variety effect (p-value < 2.2 x 10⁻¹⁶).

A log-likelihood ratio test cannot be used to compare the balanced lattice square model with the spatial models, as the variance models are not nested. However, the two models can be compared using BIC. As the spatial model has a smaller BIC (1415) than the balanced lattice square model (1435), of the two models explored in this blog, it is chosen as the preferred model. However, selecting the optimal spatial model can be difficult. The current spatial model can be extended by including measurement error (or nugget effect) or revised by selecting a different variance model for the spatial effects.
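As a sketch, the BIC comparison and the nugget extension might be coded as below; the summary() accessors and the use of a units term for the nugget follow common ASReml-R practice rather than anything shown explicitly in this blog:

> summary(lattice.asr)$bic    # 1435 for the balanced lattice square model
> summary(spatial.asr)$bic    # 1415 for the spatial model

> nugget.asr <- asreml(fixed = yield ~ Variety,
                       random = ~ units,                  # independent plot-level (nugget) effect
                       residual = ~ ar1(Row):ar1(Column),
                       data = data1)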

References

Butler, D.G., Cullis, B.R., Gilmour, A. R., Gogel, B.G. and Thompson, R. (2017). ASReml-R Reference Manual Version 4. VSN International Ltd, Hemel Hempstead, HP2 4TP UK.

Gilmour, A. R., Anderson, R. D. and Rae, A. L. (1995). The analysis of binomial data by a generalised linear mixed model, Biometrika 72: 593-599.
