Why do I hate zeros in my dataset?

The dark side of zeros in your dataset: the hidden threat to statistical analysis

Dr. Salvador A. Gezan

19 April 2021

It is always good practice to explore the data before you fit a model. A clear understanding of the dataset helps you to select the appropriate statistical approach and, in the case of linear models, to identify the corresponding design and treatment structure by defining relevant variates and factors.  

So, I have in my hands a dataset from a given study, and I proceed to explore it, maybe to do some data cleaning, but mainly to get familiar with it. Assessing the predictors is important, but it is even more critical to evaluate the response variable (or variables) to be analysed. And it is in these columns that I often find surprises. Sometimes they contain not only numbers, as they should for linear model responses, but also non-numeric data. I have found comments (‘missing’ or ‘not found’), symbols (‘?’), and one or more codes for missing values (‘NA’, ‘NaN’, ‘*’, ‘.’ or even ‘-9’). But what is most disturbing to me is the ZEROS, and I especially hate them when they come in masses!

But why do zeros make me angry?! Because their meaning is often unclear, and they can be the source of large errors and, ultimately, incorrect models. Here are some of my reasons…

Missing values

First, it is common to use zero as the definition for missing values. For example, a plant that did not have any fruit has a zero value. But what if the plant died before fruiting? Yes, it will have zero fruits, but here the experimental unit (the plant) no longer exists. In this case, there is a big difference between a true zero that was observed and a zero because of missing data. 

Default values

Second, zeros are sometimes used as default values in the columns of spreadsheets. That is, you start with a column of zeros that is meant to be overwritten with true records. However, for many reasons some data points may not be collected (for example, you could skip measuring your last replication), and hence some cells of the spreadsheet are never visited and keep the zero default. Again, these are true missing values, and therefore they need to be recorded in a way that indicates that they were not observed!

Misleading values

Third, zeros often reflect measurements that fall below the detection limit. For example, if the weighing balance cannot detect less than 0.5 grams, then any seed weight below 0.5 grams will be recorded as a zero. Yes, our seed weights range up to 23 grams and only a small portion might fall below 1 gram, but these zeros are not really zeros: they stand in for a true, unknown value somewhere between 0 and 0.5 grams.

When an initial exploration of the dataset reveals lots of zeros, we need to question why they are occurring. Of course, conversations with the researcher and the staff doing the data recording will give critical insight. This should help us separate the true zeros from the false ones. If no missing values are recorded anywhere in the data, then we should suspect that some of these zeros are in fact missing values. Here is where I like to explore additional columns (e.g., survival notes) to help ‘recover’ the missing values. However, it might be impossible to discriminate between the true zeros and the missing values if this extra information was not recorded in the dataset. This unfortunate situation, to the misfortune of my collaborators, might mean that the dataset must be completely discarded.
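Just to make this ‘recovery’ concrete, here is a minimal sketch in Python/pandas, assuming a hypothetical survival column sits alongside the response; all column names and values are invented for illustration, not taken from any real study.

```python
import numpy as np
import pandas as pd

# Hypothetical example: 'fruit_count' uses 0 both for live plants that
# produced no fruit and for plants that died (the survival note lives in
# a separate column). All names and numbers are invented for this sketch.
df = pd.DataFrame({
    'plant_id':    [1, 2, 3, 4, 5],
    'survival':    ['alive', 'alive', 'dead', 'alive', 'dead'],
    'fruit_count': [12.0, 0.0, 0.0, 7.0, 0.0],
})

# Recover the missing values: a zero on a dead plant is not an observation,
# so replace it with NaN (a genuine missing value) before any modelling.
dead_zero = (df['survival'] == 'dead') & (df['fruit_count'] == 0)
df.loc[dead_zero, 'fruit_count'] = np.nan

print(df)   # plants 3 and 5 now carry NaN instead of a false zero
```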

In the case of zeros due to detection limits, the best approach is to ask the researcher. Here, I like to first make sure that this is really what happened, and from there make an educated decision on how to analyse the data. Replacing undetected observations with a zero creates two undesired issues:

  1. A bias, as these values are not zero but, following our previous example, have an average value of around 0.25 grams (i.e., half the detection limit), and
  2. Reduced background variability, as all undetected observations are recorded with exactly the same value when in fact they are not identical, but we can’t see this variability! Both issues are illustrated in the small sketch below.
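To put some (made-up) numbers on these two issues, the following sketch simulates seed weights with an assumed detection limit of 0.5 grams and compares recording the censored values as zero versus at half the detection limit; the distribution and its parameters are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
detection_limit = 0.5

# Simulated 'true' seed weights, a small portion of which falls below the limit.
true_weights = rng.gamma(shape=2.0, scale=3.0, size=1000)
censored = true_weights < detection_limit

# Option A: record undetected observations as zero (the practice criticised above).
as_zero = np.where(censored, 0.0, true_weights)

# Option B: record them at half the detection limit (a common, if crude, alternative).
as_half = np.where(censored, detection_limit / 2, true_weights)

print(f"proportion censored:          {censored.mean():.3f}")
print(f"true mean:                    {true_weights.mean():.3f}")
print(f"mean with zeros:              {as_zero.mean():.3f}   # biased downwards")
print(f"mean with half the limit:     {as_half.mean():.3f}")
print(f"true SD below the limit:      {true_weights[censored].std():.3f}")
print(f"recorded SD below the limit:  {as_zero[censored].std():.3f}   # variability hidden")
```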

Finally, there is another reason for me to hate zeros. Suppose that they are all verified valid numbers, but that we still have a high proportion of zeros in our dataset. For example, in a study on fruit yield, I might have 20% of live plants producing no fruit, resulting in 20% true zeros in my dataset. This large proportion of zeros creates difficulties for traditional statistical analyses. For example, when fitting a linear model, the assumption of approximately Normally distributed residuals might no longer hold, and this will show up as residual plots with a strange appearance!
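To see what that ‘strange appearance’ can look like, this small sketch simulates a response with roughly 20% true zeros, fits an ordinary linear model with statsmodels, and plots the residuals; everything here is simulated and illustrative, not tied to any real dataset.

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 500

# Simulated zero-inflated response: roughly 20% true zeros, the rest
# positive and linearly related to a predictor.
x = rng.uniform(1, 10, size=n)
produces_fruit = rng.random(n) > 0.20
y = np.where(produces_fruit, 2.0 + 1.5 * x + rng.normal(0, 2.0, size=n), 0.0)

# Fit an ordinary linear model that ignores the excess zeros.
fit = sm.OLS(y, sm.add_constant(x)).fit()

# The residuals of the zero observations form a distinct diagonal band,
# a typical 'strange appearance' in the residual plot.
plt.scatter(fit.fittedvalues, fit.resid, s=8)
plt.axhline(0, color='grey', linewidth=1)
plt.xlabel('Fitted values')
plt.ylabel('Residuals')
plt.show()
```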

So, what is the solution for this ‘excess’ of zeros? In some cases, a simple transformation could reduce the influence of these zeros on my analyses. Often, the most logical alternative is to rethink the biological process being modelled, and this might require something different from our typical statistical tools. For example, we could separate the process into two parts. The first part separates the zeros from the non-zeros using a Binomial model that includes several explanatory variables (e.g., age, size, sex). The second part deals only with the non-zero values and fits another model, based on, say, a Normal distribution, that includes the same or other explanatory variables, but here we model the magnitude of the response. This is the basis of hurdle models, but other statistical approaches, particularly Bayesian ones, are also available.
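As a rough sketch of this two-part idea (not a full hurdle model, and with invented variable names and simulated data), one could fit a Binomial GLM for the zero/non-zero split and an ordinary linear model for the magnitude of the non-zero records:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data frame: 'yield_g' holds verified true zeros, while 'age'
# and 'size' are explanatory variables. All names and numbers are invented.
rng = np.random.default_rng(7)
n = 400
age = rng.uniform(1, 6, size=n)
size = rng.uniform(10, 50, size=n)
produces = rng.random(n) > 0.20                       # about 20% true zeros
yield_g = np.where(produces, 5 + 2 * age + 0.3 * size + rng.normal(0, 3, size=n), 0.0)
df = pd.DataFrame({'yield_g': yield_g, 'age': age, 'size': size})

# Part 1: Binomial (logistic) model separating zeros from non-zeros.
df['nonzero'] = (df['yield_g'] > 0).astype(int)
part1 = smf.glm('nonzero ~ age + size', data=df,
                family=sm.families.Binomial()).fit()

# Part 2: model the magnitude of the response using only the non-zero records.
part2 = smf.ols('yield_g ~ age + size', data=df[df['yield_g'] > 0]).fit()

print(part1.summary())
print(part2.summary())
```

Interpreting and combining the two parts requires some care, but the separation itself already acknowledges that the zeros may arise from a different process than the magnitudes.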

In summary

I have many reasons to hate zeros, and you might have a few additional ones. However, I believe they are a critical part of data exploration: not only can they be the tip of an iceberg leading to a better understanding and modelling of the process that generated the data, but they also help to identify potentially more adequate models to describe the system. Hence, perhaps I should embrace the zeros in my dataset and not be so angry about them!

About the author

Dr. Salvador Gezan is a statistician/quantitative geneticist with more than 20 years’ experience in breeding, statistical analysis and genetic improvement consulting. He currently works as a Statistical Consultant at VSN International, UK. Dr. Gezan started his career at Rothamsted Research as a biometrician, where he worked with Genstat and ASReml statistical software. Over the last 15 years he has taught ASReml workshops for companies and university researchers around the world. 

Dr. Gezan has worked on agronomy, aquaculture, forestry, entomology, medical, biological modelling, and with many commercial breeding programs, applying traditional and molecular statistical tools. His research has led to more than 100 peer reviewed publications, and he is one of the co-authors of the textbook Statistical Methods in Biology: Design and Analysis of Experiments and Regression.