A brief introduction to colliders

Blog written by Sean Roberts, Research Associate, Anthropology and Archaeology, University of Bristol

Causal graphs

Causal graphs are a great way of expressing your ideas about how the world works. They can also help you design your analyses, including choosing what you need to control for in order to exclude alternative explanations for the patterns you find. Standard guidance on choosing control variables is often vague (Pearl & Mackenzie, 2018), and much of it assumes that controlling for more variables makes the central test more robust. However, controlling for some variables can create spurious correlations due to colliders. This is very worrying! But if we draw our ideas as causal graphs, then we can spot colliders and try to avoid them.

Pearl & Mackenzie (2018) talk about causality and correlation in terms of the flow of information. Causal effects ‘flow’ between nodes following the causal arrows, but correlation can flow both ways. In the example below, taking a certain medicine might affect your health. But your age might affect both your health and whether you remember to take the medicine. In this case, there might be a correlation between taking medicine and health either because of the causal connection (the causal path), or because of the confounding correlational path through age (the non-causal path):

In a randomised controlled experiment, the confounding link between age and taking the medicine can be blocked by intervening and randomly assigning who takes the medicine (and in this hypothetical example, ensuring that they do take it). That is, the only thing that decides whether the medicine is taken is our experimenter’s random decision. This means that the only path which connects medicine and health (in our hypothesis) is the causal path that we are interested in.

Non-causal paths can also be blocked by ‘conditioning’, for example by controlling for the effect of age in a statistical regression:

The final thing that can block a correlation path is a collider. A collider is a node on a causal path with two causal links (arrows) pointing into it. In the graph below, X and Y affect Z. We wouldn’t expect a correlation between X and Y, because that path is blocked by the collider at Z.

Colliders behave differently to other causal structures. Below are the four different types of connection between three nodes (excluding ones where X and Y are connected). In the first three, there is a path between X and Y, so X and Y should be correlated.  This is the root of one of the central problems in research: we cannot tell the first three systems apart just by observation. We would have to manipulate one of the variables (e.g. in an intervention study) and see whether it had an effect.

In the first two graphs, Z is a ‘pipe’: it connects X and Y. If we were to intervene on or control for Z, then the relationship between X and Y would be broken and there should be no correlation. In a statistical framework, we would not want to control for Z. In the third system, X and Y are correlated due to a ‘common cause’ or ‘fork’ at Z. However, the behaviour is the same: X and Y should be correlated except for when we control for Z.

The final example is a ‘collider’, and it is different from the rest. Here, X and Y are not correlated except for when we control for Z, at which point they will become correlated (Elwert 2013).

To help understand this, imagine that you and I are working at a waterworks. We can each control the rate of flow in our own pipe (X and Y), and our pipes are connected so that the final rate of flow, Z, is the combined rate from each of us. I can turn my valve independently of you, and vice versa.

If you turn your rate up it increases the flow in Z, but that has no effect on me, so our rates (X and Y) should not be correlated.  However, then our manager calls and tells us that we have to maintain a certain rate of flow in Z (they are conditioning or fixing Z). Now, if I turn my rate up, then you have to turn your rate down to maintain the required rate in Z. And if you turn your rate down, I have to turn my rate up.  So someone observing X and Y would see a correlation.  That is, X and Y are not correlated, except for when conditioning on Z.

So, correlations will be blocked by a collider unless we control for it, at which point the correlation path ‘opens up’ again.

Here’s another example. Imagine we’re filling sacks with potatoes and carrots. The weight of a sack depends on both vegetables, and there’s no correlation between the two counts. But if we split the observations by weight (conditioning on the collider), then the number of potatoes predicts the number of carrots: among sacks of the same weight, more potatoes must mean fewer carrots.
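We can verify this with a quick simulation (sketched here in Python; the counts and the weight band are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Independent counts of each vegetable per sack
potatoes = rng.integers(1, 20, n)
carrots = rng.integers(1, 20, n)

# Sack weight is a collider: it depends on both counts
weight = potatoes + carrots

# Unconditionally, the counts are (near enough) uncorrelated
r_all = np.corrcoef(potatoes, carrots)[0, 1]

# Condition on the collider: keep only sacks of roughly equal weight
band = (weight >= 18) & (weight <= 22)
r_cond = np.corrcoef(potatoes[band], carrots[band])[0, 1]

print(round(r_all, 2), round(r_cond, 2))
```

Unconditionally the two counts are essentially uncorrelated, but within a narrow weight band more potatoes force fewer carrots, so a strong negative correlation appears.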

The behaviour of colliders means that controlling for some variables can lead to spurious correlations (Elwert & Winship 2014, Ding & Miratrix 2015, Westfall & Yarkoni 2016, Rohrer 2018, Middleton et al 2016, York 2018).

Considering colliders is important when deciding which variables to control for in a statistical test. Imagine that we’re investigating reaction times (RT) in reading, and we have measured the length of a word, reaction times for reading the word, the frequency of the word and the word’s valence (the degree of pleasantness of the meaning of the word). We’re interested in testing whether word length affects reaction time. What should we control for? Let’s say that we have the following hypotheses: frequency is affected by length (Zipf’s law, Zipf 1935) and valence (Pollyanna hypothesis, Boucher & Osgood 1969), and valence affects reaction time (e.g. Kuperman, Stadthagen-Gonzalez & Brysbaert 2012):

Frequency is actually a collider along the path from length to valence. This means that, although there is a non-causal path between length and RT, the flow of information is blocked. In this case, we should not control for frequency in our statistical model, since doing so would cause length and valence to become correlated, opening up a non-causal path from length to RT.

This is not a hypothetical problem. It will genuinely affect real analyses. For example, in the R code below, we create some artificial data generated by a world where there is no causal path between length and RT:


<code>
n = 200

# Length and valence are independent causes
length = sample(1:7, n, replace = TRUE)
valence = sample(1:7, n, replace = TRUE)

# Frequency is a collider: both length and valence point into it
freq = length + valence + rnorm(n)

# RT depends only on valence; there is no causal path from length to RT
RT = valence + rnorm(n)
</code>

We can run a statistical model predicting reaction time from length, and we see that there is no significant correlation (as we expected):

<code>

summary(lm(RT ~ length))

Estimate Std. Error t value Pr(>|t|)

length      -0.03436    0.07971  -0.431    0.667

</code>

However, when we add frequency as an independent variable, suddenly both length and frequency are significantly correlated with RT:

<code>

summary(lm(RT ~ length + freq))

Estimate Std. Error t value Pr(>|t|)

length      -0.83004    0.06520 -12.730   <0.001 ***

freq         0.85081    0.04647  18.310   <0.001 ***

</code>

Of course, this may not be the correct or complete causal model in the real world. There are many possible models (in some of them, frequency is not a collider, and controlling for it would be correct). The point is that your hypothesis should affect the design of your statistical model or your experimental manipulations. Throwing all your variables into the model may actually worsen your ability to infer what’s happening.

It is therefore vital to clearly define our prior hypotheses, and causal graphs are an excellent way of doing this.

As Pearl & Mackenzie explain, we can use causal graphs to identify variables that we should control for. If we’re interested in the effect of X on Y, then we might be able to calculate:

  1. The observational relationship between X and Y (e.g. probability of observing Y given X).
  2. The state of Y if we were to manipulate X (an intervention)

A confounding variable is anything that leads to a difference between these two calculations. To remove confounding, we need to block every non-causal path (every ‘back-door path’) without blocking any causal paths. This means we should control for any variable Z on a non-causal path from X to Y that starts with an arrow pointing into X, where Z is not a descendant of X (there is no way to get from X to Z following causal paths; see Shrier & Platt 2008).

In the example below, there is a path from X to Y through Z1 and Z2, so we should control for either Z1 or Z2 (or both) in order to block this non-causal path.
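As a sketch of why this works, here is a small Python simulation under an assumed graph X ← Z1 → Z2 → Y (the arrows are an assumption, since the figure is not reproduced here). There is no causal effect of X on Y, yet a naive regression finds one until Z2 is controlled for:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Assumed graph: X <- Z1 -> Z2 -> Y, with no causal path from X to Y
z1 = rng.normal(size=n)
z2 = z1 + rng.normal(size=n)
x = z1 + rng.normal(size=n)
y = z2 + rng.normal(size=n)

def coef_of_x(*controls):
    """OLS slope on x when y is regressed on x plus the given controls."""
    A = np.column_stack([np.ones(n), x, *controls])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1]

naive = coef_of_x()        # biased: the back-door path is open
adjusted = coef_of_x(z2)   # back-door path blocked by controlling Z2
print(round(naive, 2), round(adjusted, 2))
```

The naive slope is clearly non-zero, while the adjusted slope shrinks to roughly zero, as the back-door criterion predicts.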

Other examples can become more complicated. For example, in the graph below, there is a non-causal path that needs to be closed: X – Z2 – Z3 – Y. However, controlling for Z2 creates a correlation between Z1 and Z3, opening a new non-causal path. In this case we should control for Z3 rather than Z2.

Tools like Dagitty have algorithms for calculating which variables could be controlled for. The Causal Hypotheses in Evolutionary Linguistics Database (CHIELD) is a database of causal claims with tools for exploring connections between them. Graphs in CHIELD can be exported to Dagitty.

A causal approach to research asks us to be brave and make commitments about how we think the world works. If we do this, we might be able to make better decisions and to extract a lot more from our data than we expected.

CHIELD

CHIELD is a searchable database of causal hypotheses in evolutionary linguistics. It has tools for expressing and exploring hypotheses. Anyone can contribute, and the code is open source for other fields to build their own databases. Contact sean.roberts@bristol.ac.uk for more information.

Recommended reading

Rohrer, J. M. (2018). Thinking Clearly About Correlations and Causation: Graphical Causal Models for Observational Data. Advances in Methods and Practices in Psychological Science, 1(1), 27–42.

Pearl, J., & Mackenzie, D. (2018). The book of why: the new science of cause and effect. Basic Books.

Roberts, S. (2018). Robust, causal and incremental approaches to investigating linguistic adaptation. Frontiers in Psychology, 9, 166.


Flying far from the nest – the biggest adventure or a mental health disaster?

A blog post by Angharad Stell, PhD student, Atmospheric Chemistry Research Group, School of Chemistry, University of Bristol

Mental health at university has been making headlines for all the wrong reasons. Every day seems to bring a new shocking article:

  • “one in four students suffer from mental health problems” [1]
  • “student suicide increase warning” [2]
  • “mental health: a university crisis” [3]

Whilst there are plenty of scary numbers out there, there is little knowledge of the cause. Perhaps if we can understand that, we can combat the issue more effectively. Here, data science is used to investigate whether moving away from family to a new environment has an impact on students’ loneliness.

University is often a young adult’s first move away from their family and the area they grew up in. This movement is visible in the UK’s internal migration data shown opposite, with peaks above the underlying trend for when students arrive and leave university.

Different universities will encourage different amounts of movement: a better or more specialised university will attract students from further away. So, if we compare prescriptions for mental health conditions (depression and social anxiety) across universities, using each university’s nearest GP practice, we should be able to see differences related to student movement.

However, there will be many other factors that might affect loneliness at different universities, which we will have to try to take account of:

  • Number of students – could large institutes and the associated anonymity be a cause of loneliness? Or could small institutes mean there is less chance to meet friends? It will also be harder to pick up smaller institutes in the data.
  • Proportion of UK students compared to international – coming from abroad is the biggest move you can make, does this impact loneliness?
  • University Ranking – are higher-ranking institutes pressure cookers for young people? Here, entry requirements, graduate prospects, and student satisfaction are used. Research quality is excluded, as we are only looking at undergraduates, and the other three criteria are not combined, as they measure quite different things.
  • Widening Participation – do students from less typical backgrounds find the transition harder? Here, the numbers of students who come from a state school or a low-participation area, or who receive Disabled Students’ Allowance (DSA), compared to what they would be under socially equal admissions, are combined into one indicator.

As the GP data is chosen by matching each university to its nearest GP, other factors will affect the data, which will include varying proportions of students and of the general population. For example, the University of Exeter’s nearest GP specialises in caring for the homeless, and the University of Salford’s cares for those in nursing homes. Therefore, an attempt is made to account for these external factors:

  • Deprivation of the area – do worse off areas have poorer mental health?
  • Median age – more young people are lonely [4], and suffer from mental health conditions [5]

The Sutton Trust has a dataset containing how far students have moved, or commute from the family home, to go to university. There are three distance groups: short (0-57 miles from home to university), medium (57-152 miles), and long (more than 152 miles). The complete dataset examined is a combination of this, the university characteristics, and its nearest GP mental health conditions prescriptions and characteristics. Once all the data is collected, the fun bit can begin: data analysis!

Visualising the Data

First, let’s examine the correlation matrix, shown on the right. Strong correlations are shown in red, strong anti-correlations in blue, and weak correlations in paler shades. There are some obvious relations: a better-ranked university has more international students and more movers. There is also a nod to social inequality, with higher-ranked institutions taking fewer students from non-typical backgrounds. Interestingly, student satisfaction is a poor indicator of everything!
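A correlation matrix like this takes only a few lines to compute; the Python sketch below uses pandas on invented stand-in variables (the real dataset’s columns differ):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the combined university/GP dataset
rng = np.random.default_rng(2)
n = 100
ranking = rng.normal(size=n)  # hidden 'quality' driving two variables
df = pd.DataFrame({
    "entry_standards": ranking + rng.normal(scale=0.5, size=n),
    "intl_students": ranking + rng.normal(scale=0.5, size=n),
    "satisfaction": rng.normal(size=n),  # unrelated to the rest
})

# Pairwise Pearson correlations; strong relations show up off the diagonal
corr = df.corr()
print(corr.round(2))
```

Entry standards and international students, both driven by the hidden ranking variable, come out strongly correlated, while satisfaction stays near zero against everything, echoing the pattern described above.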

Higher rates of mental health conditions appear to be related to:

  1. Higher number of students at a university
  2. Higher number of medium movers (lower numbers of short commuters)
  3. Higher university ranking (entry standards and graduate prospects)
  4. Lower median age

Points 1 and 4 likely just show that young people have higher rates of mental health issues, but 2 and 3 could be interesting. However, the relations are unlikely to be nice, simple, linear ones, so just looking at this correlation matrix is not enough; let’s try a better way to visualise the data.

Dimension Reduction

Humans are not good at dealing with graphs in more than two dimensions, let alone the 16 in this dataset. So, let’s reduce the dimensionality using a dimension-reducing algorithm (t-SNE). This algorithm attempts to plot similar points close to each other and dissimilar points far apart. The result can be seen in the figure below, where each subplot is coloured by the values of the title variable (zero indicates the lowest value of the variable, and one the highest). From the distribution of the points, you can see there appear to be two clusters: a smaller one top left and a larger one at the bottom. From the colourings, there also seems to be structure in the bottom cluster, so perhaps it can be further divided.
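A minimal sketch of the dimension-reduction step, using scikit-learn’s t-SNE on toy 16-dimensional data (the perplexity value is an arbitrary choice, not the one used for the figure):

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy data: two well-separated clumps standing in for the 16-variable dataset
rng = np.random.default_rng(3)
a = rng.normal(loc=0.0, size=(40, 16))
b = rng.normal(loc=6.0, size=(40, 16))
X = np.vstack([a, b])

# Reduce 16 dimensions to 2 for plotting; perplexity is a tuning choice
emb = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(X)
print(emb.shape)
```

Each row of `emb` is a 2-D position for one data point; plotting those positions (coloured by each variable in turn) gives subplots like the ones described above.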

Clustering

Clustering can be very fickle, especially for real datasets with noise, outliers, and clumps that are not nice and circular. So, here four clustering algorithms have been put through their paces, and the best one for our needs selected: K-means (though agglomerative hierarchical clustering gives a very similar answer and leads to the same inferences).
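A sketch of the K-means versus agglomerative comparison, using scikit-learn on toy clumps (the real dataset and tuning are not reproduced here):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(4)
# Three synthetic clumps standing in for the university/GP points
X = np.vstack([
    rng.normal(loc=centre, scale=0.5, size=(30, 2))
    for centre in ([0, 0], [5, 0], [2.5, 4])
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
agg = AgglomerativeClustering(n_clusters=3).fit_predict(X)

# On well-separated data the two algorithms agree almost perfectly
ari = adjusted_rand_score(km, agg)
print(round(ari, 2))
```

The adjusted Rand index measures agreement between the two labellings regardless of which numeric label each cluster gets; a value near 1 means the two algorithms found essentially the same groups, mirroring the ‘very similar answer’ noted above.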

We can look at how the algorithm clustered the points on the reduced dimension projection in the plot on the right. We have here a top, bottom left and bottom right cluster, which seems to mainly conform to the structure that the dimension reduction algorithm suggested.

Now, let’s see what we can learn from these clusters. In the plot below, each cluster’s density is shown by the filled areas, over the 16 variables. The black lines show each cluster’s median value. On the x-axis, zero is the variable’s lowest value in the dataset, and one is the highest.

The top green cluster has the following characteristics:

  • Highest number of students
  • Lowest number of UK origin students (highest numbers of international students)
  • Highest numbers of medium movers and lowest numbers of short commuters
  • Highest entry standards and graduate prospects
  • Highest rates of depression and social anxiety
  • Lowest median age

The very low median age, high number of students, and high number of movers (commuters are unlikely to show up at the GP nearest their university) suggest these points have an abnormally high proportion of students at these GPs, which would explain the highest rates of depression and social anxiety. In fact, a quick Google search suggests that many of the GPs with the highest rates of depression and social anxiety specialise in student health.

The remaining two clusters have similar higher median age distributions, making them more comparable, as they likely have similar numbers of students relative to the general population. There are differences in the number of medium movers, number of short commuters, and deprivation, but these clusters have similar distributions of depression and social anxiety. This suggests that the distance moved to university does not affect the rate of mental health issues, or offsets the effect of deprivation.

Regression

Another method we can use to understand the dataset is regression modelling. After testing a few methods, a decision tree seemed to work best. In the plot below, start at the top box and ask: is the statement true or false? Proceed to the next box based on your answer (left for true, right for false), and repeat until you reach the final box in your chain.

As in the clustering, the first split is on median age: young people have more mental health issues than older people. In the high-student-density branch (left-hand side), the data is further divided on the university rankings: higher graduate prospects lead to higher rates of mental health problems, and that group is subdivided again by student satisfaction: worse student satisfaction gives higher rates of mental illness.
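For readers who want to try this, here is a hedged sketch of fitting and printing such a tree with scikit-learn; the generating rule below merely echoes the splits described above and is not the real data:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(5)
n = 200
median_age = rng.uniform(18, 60, n)
grad_prospects = rng.uniform(0, 1, n)

# Hypothetical rule: young areas have far more prescriptions,
# amplified by graduate prospects, plus some noise
rate = np.where(median_age <= 23.5, 10 + 5 * grad_prospects, 3)
rate = rate + rng.normal(scale=0.5, size=n)

tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(np.column_stack([median_age, grad_prospects]), rate)

# The learned tree splits on median age first, like the one in the post
print(export_text(tree, feature_names=["median_age", "grad_prospects"]))
```

`export_text` prints the true/false boxes as indented rules, the same structure you read off the plotted tree.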

High Student Density Areas

From the above analysis, the main difference in mental health issues is driven by the number of students in an area, rather than the distance the students moved. If we take a quick look within the K-means top green cluster (the high-student-density one), at those with a median age of less than or equal to 23.5 (as suggested by the first split in the decision tree), perhaps we can see differences within this small group where the student signal is clearest. Looking at the correlation matrices (see below), most of the initial correlation of the mental health conditions with medium movers and short commuters disappears. This tells us that the initial correlation comes from better universities having more students that move into the local area, and from students in general suffering from more mental health issues.

In the low-median-age group, correlation between the mental health conditions and graduate prospects and student satisfaction is present (as seen in the divisions of the decision tree). Median age’s correlation is still there too, though; perhaps this just continues to show that better universities attract more people to move to the local area, lowering the median age and pushing up the rates of mental health conditions.

For the high-student-density cluster, though, a fairly strong anti-correlation between the mental health conditions and the number of long movers has appeared, as well as a strong correlation with widening participation and UK origin. Plotting these (see below), they are not very convincing, and since this is now a small sample, it is hard to identify the outliers.

Conclusions

Students in general suffer from mental health conditions more than the rest of the general population, and the distance moved from their family does not have a major impact.

There is perhaps a hint of a relationship with long distance movers within the high density of students cluster, but this is too small a sample to draw conclusions from. In order to better explore this, different data would be beneficial: actual distance measures rather than short, medium, and long moves. In addition, a good way to remove the noise of the general population from the GP data would be to look at the rates of usage of university counselling services instead. This would only include students and would also include the commuters, who will just be too hard to find within the noise of the general population in the current dataset.

In terms of the immediately available datasets, it would also be good to look at monthly data: are there more prescriptions during exams? Or is it worse at the start of academic year as students move away for the first time?

Overall, we have seen the power of data science. What first looked like an interesting correlation between mental health conditions and more students moving rather than commuting, has been shown to be misleading. Whilst this work’s conclusions did not succeed in revolutionising the university mental health care system, perhaps with more university specific data, this kind of approach could reveal interesting relationships, and be the beginnings of a solution.

References

  1. https://yougov.co.uk/topics/lifestyle/articles-reports/2016/08/09/quarter-britains-students-are-afflicted-mental-hea
  2. https://www.bbc.co.uk/news/education-43739863
  3. https://www.theguardian.com/education/series/mental-health-a-university-crisis
  4. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/771482/Community_Life_Survey_Focus_on_Loneliness_201718.pdf
  5. https://epi.org.uk/publications-and-research/prevalence-of-mental-health-issues-within-the-student-aged-population/

Acknowledgments

A special thanks goes to Matthew Boyd, who helped me find data and questioned my logic.

The Jean Golding Institute Data Competition

This project was one of the runners-up in our recent ‘Loneliness for Education’ competition. We run various data competitions throughout the year – find out more at Data competitions.

The Beauty of Data 2019 – A JGI data visualisation competition

We are excited to announce that the winner of the 2019 JGI Beauty of Data competition is Vincent Cheng from Population Health Sciences with his visualisation project ‘Automated Forward Citation Snowballing using Google Scholar and Machine Learning’.

The winning visualisation is a short video, and you can view the full submission below:


About the winning project

This is an exploratory project that aims to understand how studies are being cited in Google Scholar, and to explore its application to evidence searches in a systematic review. In a systematic review, searching for studies is one of the most crucial steps. Although Google Scholar acts as a comprehensive database, its search criteria and processing are neither reproducible nor transparent enough for conducting systematic reviews. This project uses a visualisation of a citation pattern in Google Scholar from a forward snowballing exercise (identifying new studies based on those papers citing the study being examined).

In the video, each node represents a search result (study) from Google Scholar. The size of a node represents the number of times a study has been cited. The width of an edge represents the number of duplicates. The visualisation demonstrates the search “Lassa fever ribavirin” on Google Scholar, from which the first 10 search results were extracted as a start set (Iteration 0; n=10, as shown in the video). A trained machine learning model then selected potential studies, based on the title, abstract, authors and journal of each study, for the forward snowballing process. In each iteration, the information from the first 10 studies citing a potential study was extracted. After 10 iterations, there were n=4,765 search results (with n=1,384 unique results). The number of retrieved studies increased with each iteration. However, the number of duplicated studies also increased in later iterations, suggesting inefficient retrievals.

The data visualisation provides spatial relationships between each iteration in a chronological order to inspect the change. The results provide an insight into the Google Scholar search algorithm and help us to search and utilise the database more efficiently.

More about the competition

The winner received £100 in prize money and was invited to present his visualisation as a poster at the Data Visualisation Symposium at the Alan Turing Institute in London on 13 September 2019. You can take a look at the full poster of the winning visualisation here: Turing AI Symposium Poster.

The two runners-up, each receiving £50, are Chris Moreno-Stokoe and Valeriia Haberland.

Take a look at their visualisations below:

‘History appears to have repeated itself with unsubstantiated claims about the effects of bilingualism’ by Chris Moreno-Stokoe.


‘From a data space to knowledge discovery’ by Valeriia Haberland

The Jean Golding Institute Beauty of Data competition challenges staff and students to submit their work in this exciting challenge to find the best University of Bristol data visualisation. You can take a look at past entries on our Flickr page.

The Jean Golding Institute data competitions

We run a number of competitions throughout the year – to find out more take a look at Data competitions.

Loneliness competition winners announced

Photo by Danielle MacInness on Unsplash

We are pleased to announce that the winners of the competition are Nina Di Cara from Population Health Sciences and Tiff Massey, Analyst from Ernst and Young, with their project ‘Is loneliness associated with movement for education?’. Their research assumes that, in most cases, movement for primary and secondary education is associated with upward social mobility; that is, families moving to try to get into a better school than is available in their current local area.

The team’s research question was ‘Is community-level loneliness associated with the quality of local schools, and how far can this be attributed to the movement of families pursuing upward social mobility through education?’  

The winning team explored several models and created novel metrics to explore the relationship between loneliness and movement for education. They found that the population change caused by the moving of children aged 4–15 has a small impact on loneliness in communities. They hypothesised that the reason children of this age move is mostly to pursue better educational opportunities, and so movement for the purpose of education in primary and secondary students is associated with loneliness. We will hear more about the details of the analysis in Nina Di Cara’s upcoming blog, to be published on the ONS Data Science Campus website.

Nina Di Cara said “We were so excited at the opportunity to take part in the data challenge, especially since it gave us the chance to try out using open data to answer a question that has real-life significance. It was a lot of fun to work together and challenge ourselves – we both learned a lot by taking part so winning was a bonus!” 

Jasmine Grimsley, Senior Data Scientist at the ONS Data Science Campus, said “Congratulations to the winners of this year’s Jean Golding Institute Loneliness data challenge which provided an opportunity for students to use their cutting-edge analysis skills to answer current questions relevant to government. Students brought together alternative data sources, admin data, and combined it with existing open government data in novel ways.

“At the Data Science Campus we want to work with people from across the country to try new ways of analysing data to provide new information which can inform decisions. The methods our winners used are exciting and will help in future explorations of how the country can make better use of its data.”

The winners received £1,000 in prize money and have also been invited to the Office for National Statistics (ONS) Data Science Campus to share new ideas for data analysis. They will also have the opportunity to present their findings and spend a “Day in the life” of a Government Data Scientist. Furthermore, their work will be showcased on the Data Science Campus website in blog form.   

The two runners-up each receiving £250 are Angharad Stell and Robert Eyre.  

More about the competition

The Office for National Statistics have developed a loneliness index using open prescription data which is available at the MSOA (Middle layer Super Output Area in the ONS coding system) level across England. These data also provide information to identify MSOAs that are within geographical clusters where the loneliness index is high or low. We would like to understand whether the mobility of people for education is associated with the risk of being in a high or low cluster. The movement of people for education can be local or across a great distance.

In this competition, we challenged participants to put forward a research question related to loneliness and movement for education, and answer it using the loneliness dataset provided (see below) alongside other suggested data sources.  Read more

The Jean Golding Institute data competitions

We run a number of competitions throughout the year – to find out more take a look at Data competitions.


Computer Experiments

Blog written by Jonathan Rougier, Professor of Statistical Science, University of Bristol

In a computer experiment we run our experiments in silico, in situations where it would be expensive or illegal to run them for real.

Computer code which is used as an analogue for the underlying system of interest is termed a simulator; often we have more than one simulator for a specified system. I have promoted the use of ‘simulator’ over the also-common ‘model’, because the word ‘model’ is very overloaded, especially in Statistics (see Second-order exchangeability analysis for multimodel ensembles).

Parameters and Calibration

The basic question in a computer experiment is how to relate the simulator(s) and the underlying system. We need to do this in order to calibrate the simulator’s parameters to system observations, and to make predictions about system behaviour based on runs of the simulator.

Parameters are values in the simulator which are adjustable. In principle every numerical value in the code of the simulator is adjustable, but we would usually leave physically-based values like the gravitational constant alone. It is common to find parameters in chunks of code which are standing in for processes which are not understood, or which are being approximated at a lower resolution. In ocean simulators, for example, we distinguish between ‘molecular viscosity’, which is a measurable value, and ‘eddy viscosity’, which is the parameter used in the code.

The process of adjusting parameters to system observations is a statistical one, requiring specification of the ‘gap’ between the simulator and the system, termed the discrepancy, and the measurement errors in the observations. In a Bayesian analysis this process tends to be called calibration. When people refer to calibration as an inverse problem it is usually because they have (maybe implicitly) assumed that the simulator is perfect and the measurement error is Normal with a simple variance. These assumptions imply that the Maximum Likelihood value for the parameters is the value which minimizes the sum of squared deviations. But we do not have to make these assumptions in a statistical analysis, and often we can use additional insights to do much better, including quantifying uncertainty.
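The equivalence in that special case is easy to check numerically. In the Python sketch below (a toy one-parameter ‘simulator’, not a real one), the Normal log-likelihood is a monotone function of the sum of squared deviations, so both criteria pick the same parameter value:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 'simulator': one adjustable parameter theta, assumed perfect
# apart from Normal measurement error with known standard deviation
def simulator(theta, t):
    return theta * t

t_obs = np.linspace(0, 1, 50)
y_obs = simulator(2.0, t_obs) + rng.normal(scale=0.1, size=t_obs.size)

thetas = np.linspace(0, 4, 401)
sse = [np.sum((y_obs - simulator(th, t_obs)) ** 2) for th in thetas]
loglik = [-0.5 * s / 0.1 ** 2 for s in sse]  # Normal log-likelihood, up to a constant

# Minimising squared error and maximising the likelihood pick the same theta
print(thetas[np.argmin(sse)] == thetas[np.argmax(loglik)])
```

Drop either assumption (a perfect simulator, or simple Normal errors) and the two criteria can diverge, which is why a fuller statistical treatment of the discrepancy and measurement error can do much better.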

The dominant statistical model for relating the simulator and the system is the best input model, which asserts that there is a best value for the parameters, although we do not know what it is. Crucially, the best value does not make the simulator a perfect analogue of the system: there is still a gap. I helped to formalize this model, working with Michael Goldstein and the group at Durham University (e.g. Probabilistic formulations for transferring inferences from mathematical models to physical systems and Probabilistic inference for future climate using an ensemble of climate model evaluations). Michael Goldstein and I then proposed a more satisfactory reified model which was better-suited to situations where there was (or could be) more than one simulator (Reified Bayesian modelling and inference for physical systems). The paper has been well-cited but the idea has not (yet) caught on.
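The structure of the best input model can be sketched generatively (every function and number below is illustrative, not a real application): the system is the simulator at the best input plus a discrepancy, and observations add measurement error on top.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 'simulator' and best input value (unknown in practice).
def simulator(theta, x):
    return np.sin(theta * x)

x = np.linspace(0.0, 2.0, 50)
theta_best = 1.5

# Best input model: even at theta_best the simulator is not the system --
# the discrepancy term carries the remaining gap.
discrepancy = 0.1 * x
system = simulator(theta_best, x) + discrepancy

# What we actually see: system behaviour plus measurement error.
observations = system + rng.normal(0.0, 0.05, size=x.size)
```

The point of the sketch is the middle line: setting the discrepancy to zero recovers the ‘perfect simulator’ assumption discussed above.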

In a Bayesian analysis, calibration and prediction tend to be quite closely related, particularly because the same model of the gap between the simulator and the system has to be used for both calibration (using historical system behaviour) and prediction (future system behaviour). There are some applications where quite simplistic models have been widely used, such as ‘anomaly correction’ in paleoclimate reconstruction and climate prediction (see Climate simulators and climate projections).

Emulators

Calibration and prediction are fairly standard statistical operations when the simulator is cheap enough to run that it can be embedded ‘in the loop’ of a statistical calculation. But many simulators are expensive to run; for example, climate simulators on super-computers run at about 100 simulated years per month. In this case, each run has to be carefully chosen to be as informative as possible. The crucial tool here is an emulator, which is a statistical model of the simulator.

In a nutshell, carefully-chosen (expensive) runs of the simulator are used to build the emulator, and (cheap) runs of the emulator are used ‘in the loop’ of the statistical calculation. Of course, there is also a gap between the emulator and the simulator.

Choosing where to run the simulator is a topic of experimental design.

Early in the process, a space-filling design like a Latin Hypercube is popular. As the calculation progresses, it is tempting to include system observations in the experimental design. This is possible and can be very advantageous, but the book-keeping in a fully-statistical approach can get quite baroque, because of the need to keep track of double-counting – see Bayes linear calibrated prediction for complex systems. It is quite common, for pragmatic reasons, to split the statistical calculation: learning about the simulator on the one hand, and using the emulator to learn about the system on the other (Comment on article by Sanso et al (PDF)).
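A Latin Hypercube design is easy to generate in practice; here is a small sketch using scipy (the parameter dimensions and ranges are made up for illustration):

```python
from scipy.stats import qmc

# Space-filling design: n=10 simulator runs in d=3 parameter dimensions.
# Each dimension is divided into 10 strata with exactly one point per stratum.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_design = sampler.random(n=10)           # points in [0, 1)^3

# Rescale to (hypothetical) parameter ranges before running the simulator.
lower, upper = [0.0, 10.0, -1.0], [1.0, 50.0, 1.0]
design = qmc.scale(unit_design, lower, upper)
```

Each row of `design` is one carefully-spread set of parameter values at which to run the (expensive) simulator.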

Sometimes the emulator will be referred to as the surrogate simulator, particularly in Engineering. Often the surrogate is a flexible fitter with a restricted statistical provenance (e.g. ‘polynomial chaos (PDF)’). This makes it difficult to use surrogates for statistical calculations, because a well-specified uncertainty about the simulator is a crucial output from an emulator. Statistics and Machine Learning have widely adopted the Gaussian process as a statistical model for an emulator.

Gaussian processes can be expensive to compute with, especially when the simulator output is high-dimensional, like a field of values (Efficient emulators for multivariate deterministic functions). The recent approach of inducing points looks promising (On sparse variational methods and the Kullback-Leibler divergence between stochastic processes (PDF)).
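To make the Gaussian-process emulator concrete, here is a minimal numpy-only sketch (the ‘expensive simulator’ is a stand-in toy function, and the squared-exponential kernel and its length-scale are choices, not prescriptions). The key feature is the second output: a well-specified uncertainty about the simulator at untried inputs.

```python
import numpy as np

# Stand-in for an expensive simulator (one input, one output).
def expensive_simulator(x):
    return np.sin(3.0 * x) + x

# Squared-exponential covariance between two sets of inputs.
def sq_exp_kernel(a, b, length=0.3, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

x_train = np.linspace(0.0, 1.0, 6)            # six carefully-chosen runs
y_train = expensive_simulator(x_train)

nugget = 1e-8                                  # jitter for numerical stability
K = sq_exp_kernel(x_train, x_train) + nugget * np.eye(x_train.size)
L = np.linalg.cholesky(K)

def emulate(x_new):
    """Posterior mean and variance of the emulator at new inputs."""
    k_star = sq_exp_kernel(x_train, x_new)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = k_star.T @ alpha
    v = np.linalg.solve(L, k_star)
    var = np.diag(sq_exp_kernel(x_new, x_new)) - np.sum(v ** 2, axis=0)
    return mean, var
```

Calls to `emulate` are cheap, so they can sit ‘in the loop’ of a statistical calculation; the variance is near zero at the training runs and grows away from them, which is the emulator quantifying its own gap to the simulator.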

Emulators have also been used in optimization problems. Here the challenge is to approximately maximize an expensive function of the parameters; I will continue to refer to this function as the ‘simulator’. Choosing the parameter values at which to run the simulator is another experimental design problem. In the early stages of the maximization the simulator runs are performed mainly to learn about the gross features of the simulator’s shape, which means they tend to be widely-scattered in the input space. But as the shape becomes better known (i.e., the emulator’s uncertainty reduces), the emphasis shifts to homing-in on the location of the maximum, and the simulator runs tend to concentrate in one region. There are some very effective statistical criteria for managing this transition from explore to exploit. This topic tends to be known as ‘Bayesian optimization’ in Machine Learning, see Michael Osborne’s page for some more details.
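One widely-used criterion for managing that explore–exploit transition is Expected Improvement, which weighs a candidate’s predicted value against the emulator’s uncertainty there. A small sketch (the candidate means, standard deviations, and incumbent value are invented for illustration):

```python
import numpy as np
from scipy.stats import norm

# Expected Improvement for maximization: mean and std come from an
# emulator (e.g. a Gaussian process) evaluated at candidate inputs.
def expected_improvement(mean, std, best_so_far):
    std = np.maximum(std, 1e-12)               # guard against zero variance
    z = (mean - best_so_far) / std
    return std * (z * norm.cdf(z) + norm.pdf(z))

# Two toy candidates: A has the higher predicted value, but B is far
# more uncertain -- EI can still favour B, i.e. it chooses to explore.
mean = np.array([1.0, 0.8])
std = np.array([0.01, 0.5])
ei = expected_improvement(mean, std, best_so_far=0.99)
next_run = int(np.argmax(ei))
```

As the emulator's uncertainty shrinks, the `std` terms collapse and EI increasingly rewards high predicted means, which is the shift from exploration to exploitation in action.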