Challenges in Exploratory Data Analysis


You are given a dataset and asked: “What do the data tell us? Do not assume we know anything about the subject; just tell us what the data say.” This is the task often referred to as “exploratory data analysis,” and it is harder than it might seem. I see two main challenges.

The first is the request to “not assume we know anything about the subject.” This request is easy to violate without realizing it. For example, say you have a dataset with twenty variables. It is perfectly fine during exploratory analysis to look not just at individual variables in your dataset, but also at how variables fluctuate relative to each other, that is, at correlation. Now, how easy is it to look at correlations within the dataset with no prior inclination to think some pairs of the twenty variables are more likely to be correlated than others? We can fight the urge to pay more attention to the pairs we expect to matter by always including all twenty variables in any and all considerations about correlation, but this requires discipline. One could even argue that we should, indeed, spend more time exploring correlations that we have a basis to believe reflect a causal connection, and that focusing equally on all other correlations is a waste of time and possibly misleading. In any case, how to explore data given the mental models we all approach them with is a potential issue to be dealt with. I will likely return to this in a future post.
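To make the “all twenty variables, every time” discipline concrete, here is a minimal sketch in Python. The dataset is simulated and the variable names (`v0` through `v19`) are purely hypothetical; the point is that every pairwise correlation is computed, leaving no room to cherry-pick favored pairs.

```python
import itertools
import math
import random

random.seed(0)
# Hypothetical dataset: 20 variables, 100 observations each (simulated).
data = {f"v{i}": [random.gauss(0, 1) for _ in range(100)] for i in range(20)}

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Discipline: compute every pair, not just the ones we expect to matter.
pairs = {(a, b): pearson(data[a], data[b])
         for a, b in itertools.combinations(sorted(data), 2)}
# C(20, 2) = 190 pairs -- all of them on the table, none pre-selected.
```

From here one could sort `pairs` by absolute value to see which correlations stand out, while still having looked at all of them.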

The second challenge I see in exploratory data analysis is identifying, and keeping in mind at all times, the sources of uncertainty in our data. These sources are several: from what we don’t know about how the variables were chosen and the data were collected, cleaned, stored, and checked, to whether we are, consciously or not, asking questions not about the dataset itself but about the underlying generating process, that is, about a population of which we can consider the dataset to be a sample.

This last point, I find, is often overlooked. In some cases, we know that we are looking at a sample and asking questions about a population. For example, survey data are often clearly extracted from a broader population in which we are interested. This is the classic use of inferential statistics that we all learn about in college – although, even in this case, we often see analyses focusing on point estimates rather than the more appropriate confidence intervals. But there are cases where we lose track of the sources of uncertainty in our data (or in our analysis) and must maintain discipline to correctly assess what our analysis is actually telling us.
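As a small illustration of reporting an interval rather than a bare point estimate, here is a sketch in Python. The survey responses are simulated (a purely hypothetical sample of 100), and the interval uses the common 95% normal approximation.

```python
import math
import random
import statistics

random.seed(3)
# Hypothetical survey: 100 responses sampled from a larger population.
sample = [random.gauss(50, 10) for _ in range(100)]

mean = statistics.fmean(sample)                      # point estimate
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error

# 95% normal-approximation confidence interval for the population mean.
# Reporting this, not just `mean`, keeps the sampling uncertainty visible.
ci = (mean - 1.96 * se, mean + 1.96 * se)
```

The interval, not the point estimate, is what speaks to the population we actually care about.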

For example, say we have data on five characteristics (five variables) for every inhabitant of a community. We are only interested in that community, so we understand we have “population” data, not a sample. To explore correlation among our five variables, we decide to run a linear regression. Our statistical software spits out a summary of results that includes coefficients and p-values for those coefficients. But p-values assume a distribution for the observed coefficients. If there is a distribution, there is a source of uncertainty (a random variable). Where did that uncertainty come from? Aren’t we looking at population data, so that what we see is all there is to know?

My answer is that the uncertainty stems from assuming there is a linear relationship between the variables when what we observe does not perfectly fit that linear relationship. There is, therefore, an “error” term associated with each observation relative to the fitted linear relationship. The whole linear regression exercise is asking questions about an assumed underlying generating process behind the data, not about the observed data themselves. We started making assumptions about the data and asking questions about an underlying process, very possibly without noticing.
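The error term is easy to see by fitting the regression by hand. Here is a sketch with simulated “population” data for two hypothetical characteristics, `x` and `y`: even though we observe every inhabitant, the fitted line does not pass through the observations exactly, and the leftover residuals are precisely the randomness that the p-value machinery assumes.

```python
import random

random.seed(1)
# Hypothetical "population" data: two characteristics for all 200
# inhabitants of a community -- related, but not perfectly linearly.
n = 200
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]

# Ordinary least squares by hand: slope and intercept.
mx, my = sum(x) / n, sum(y) / n
slope_num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope_den = sum((xi - mx) ** 2 for xi in x)
slope = slope_num / slope_den
intercept = my - slope * mx

# The fit is not exact: each observation leaves a residual (the "error"
# term). It is nonzero even with full population data -- this is where
# the assumed randomness behind the p-values enters the analysis.
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
rss = sum(r * r for r in residuals)
```

Asking whether `slope` is “significant” is a question about the assumed generating process, not about the community we fully observed.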

So here are my tentative initial guidelines for doing exploratory data analysis:

  1. Start by understanding the data: the publishing source; when, where, and by whom the data were collected; what universe they are supposed to represent, and whether they were intended as a sample of a larger population; whether the variables are well defined; and what errors may have been introduced during transmission, cleaning, storage, or other manipulation.
  2. Go on to univariate analyses and then cover correlations, being mindful of any assumptions we are making; if we feel we absolutely need to make assumptions, be explicit about them and keep them in mind when drawing conclusions.
  3. Keep in mind at all times whether our questions are focusing on the data at hand or on an underlying generating process, i.e., whether we are “going beyond the data.” Again, be explicit if doing so.
  4. Be aware that exploratory analysis is supposed to focus on extracting inspiration from our data. It is not sufficient for drawing conclusions; those require a separate step: testing hypotheses with a second set of data that can be assumed to come from the same generating process (the same population). We do not test hypotheses during exploratory data analysis, nor discuss causality and modelling, other than possibly as suggestions for the next step of hypothesis testing.
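Guideline 4 can be put into practice with a simple precaution taken before exploration even begins. This is a sketch under hypothetical assumptions (the records are stand-ins for individual observations): set aside a confirmation set up front, so that any hypothesis suggested by exploring one half can later be tested on data that played no role in suggesting it.

```python
import random

random.seed(2)
# Hypothetical records: 1000 stand-ins for individual observations.
records = list(range(1000))

# Shuffle and split BEFORE exploring. Hypotheses found in explore_set
# can later be tested on confirm_set, which never informed them.
random.shuffle(records)
split = len(records) // 2
explore_set, confirm_set = records[:split], records[split:]
```

The split itself is trivial; the discipline is in never peeking at `confirm_set` during exploration.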