Daniel Collins (Ask-JGI Data Science Support 2023-24)
Daniel Collins, 2nd year PhD Student in the School of Computer Science at the University of Bristol
I applied to Ask-JGI as a 2nd year PhD student on the Interactive AI CDT. Before starting my PhD, I spent several years working in Medical Physics for the NHS. Without a formal background in data science, transitioning to an AI-focused PhD felt like a significant shift. I was looking for opportunities to gain more practical experience in areas of data science outside of my research topic, and Ask-JGI has been the perfect place to do this!
Working with Ask-JGI has been a hugely rewarding experience, and I’ve really appreciated the variety it introduced into my day-to-day work. With a PhD, you’re often working towards a long-term goal in a very specific domain area, with projects that can span several months at a time. With Ask-JGI, each query becomes a self-contained mini-project with a much smaller scope and timeline. These short bursts of exploration and learning have been really valuable to have alongside my PhD.
The queries involve supporting researchers from various specialisms across the University, and can involve a broad range of topics and technical skills. I’ve particularly enjoyed queries that have involved writing demo code e.g. for data processing, visualisation or modelling. One of the highlights has been my work with GenROC, visualising the number of children with different rare genetic conditions recruited to the study. To try to make it more engaging for the children and families involved, we developed a pipeline for creating 3D bubble plots with a space theme using the Blender Python API. This was great because I got to spend time learning a new software tool while also learning more about the important work the GenROC researchers are doing at the University!
Example of the Blender API bubble plots made for GenROC, with anonymised and randomised data
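The post doesn't include the GenROC pipeline itself, but the core idea behind a 3D bubble plot can be sketched in a few lines. In this hypothetical example (function names and counts are illustrative, not taken from the real pipeline), each condition's sphere is sized so that its volume, rather than its radius, is proportional to the recruitment count, which avoids visually exaggerating differences between conditions:

```python
import math

def bubble_radius(count, scale=1.0):
    # Solve volume = (4/3) * pi * r**3 = count, so that sphere
    # volume (not radius) is proportional to the recruitment count.
    return scale * (3 * count / (4 * math.pi)) ** (1 / 3)

# Hypothetical anonymised recruitment counts per condition
counts = {"Condition A": 8, "Condition B": 27, "Condition C": 64}
radii = {name: bubble_radius(n) for name, n in counts.items()}

# Inside Blender, each bubble could then be added with the Python API, e.g.:
#   import bpy
#   bpy.ops.mesh.primitive_uv_sphere_add(radius=radii[name], location=pos)
```

Sizing by volume rather than radius is a common bubble-chart convention; the real GenROC pipeline may of course have made different choices.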
I wholeheartedly recommend joining the team if you have experience in any area of data science and you’re looking to develop your skills. The JGI team have created an incredibly friendly and supportive environment for learning and collaboration. It’s an excellent opportunity to learn from others, and gain exposure to the different ways data science can be applied in academic research!
A public event organised by The Alan Turing Institute – 20 June 2024
Blog post by Léo Gorman, Data Scientist, Jean Golding Institute
Let’s say you are a researcher approaching a new dataset. It often seems that there is a virtually infinite number of legitimate paths you could take between loading your data for the first time and building a model that is useful for prediction or inference. Even when we follow statistical best practice, it can feel that established methods still don’t allow us to communicate our uncertainty in an intuitive way, to say where our results are relevant and where they are not, or to understand whether our models can be used to infer causality. These are not trivial issues. The Alan Turing Institute (the Turing) hosted a Theory and Methods Challenge Fortnight (TMCF), where leading researchers got together to discuss them.
JGI team members Patty Holley, James Thomas and Léo Gorman (left to right) at the Turing
Members of the Jean Golding Institute (Patty Holley, James Thomas, and Léo Gorman) went to London to participate in this event, and to meet with staff at the Turing to discuss opportunities for more collaboration between the Turing and the University of Bristol.
In this post, I aim to provide a brief summary of my take-home messages that I hope you will find useful. At the end of this post, I recommend materials from all three speakers which will cover these topics in much more depth.
Andrew Gelman – Beyond “push a button, take a pill” data science
Andrew Gelman presenting
Gelman mainly discussed how statistics are used to assess the impact of ‘interventions’ in modern science. Randomised controlled trials (RCTs) are considered the gold standard, but according to Gelman, the common interpretation of these studies could be improved. First, trials need to be taken in context: their findings might be different in another scenario.
We need to move beyond binary “it worked” or “it didn’t” outcomes. There are intermediate outcomes that help us understand how well a treatment worked. For example, take a cancer treatment trial: rather than just looking at whether a treatment worked for a group, we could look at how the size of the tumour changed, and whether this differed between people. As Gelman says in his blog: “Real-world effects vary among people and over time”.
Jessica Hullman – What do we miss with average model effects? How can simulation and data visualisation help?
Jessica Hullman presenting
Hullman’s talk expanded on some of the themes in Gelman’s talk. Let’s continue with the example of an RCT for cancer treatment. If we saw an average effect of 0.1 between treatment and control, how would that effect vary with different characteristics (represented by the x-axis in the quartet of graphs below)? Hullman demonstrated how simulation and visualisation can help us understand how different scenarios can lead to the same conclusion.
Causal quartets, as shown in Gelman, Hullman, and Kennedy’s paper. All four plots show an average effect of 0.1, but these effects vary as a function of an explanatory variable (x-axis)
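To make the quartet idea concrete, here is a minimal simulation sketch (my own, not the authors’ code, with made-up effect patterns): four different ways a treatment effect can depend on a covariate x, all sharing the same average effect of 0.1.

```python
n = 1000
xs = [i / (n - 1) for i in range(n)]  # covariate on [0, 1]

# Four hypothetical effect patterns, each averaging 0.1 across the sample
scenarios = {
    "constant": [0.1 for x in xs],                           # same effect for everyone
    "linear": [0.2 * x for x in xs],                         # effect grows with x
    "concentrated": [1.0 if x > 0.9 else 0.0 for x in xs],   # large effect for a few
    "mixed_sign": [0.2 * (2 * x - 1) + 0.1 for x in xs],     # harmful for small x
}

for name, effects in scenarios.items():
    print(f"{name}: average effect = {sum(effects) / n:.2f}")
```

An averaged summary hides exactly these differences; plotting the effect against x, as the quartets do, reveals them.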
Hadley Wickham – Challenges with putting data science into production
Hadley Wickham presenting
Wickham’s talk focused on some of the main issues with conducting reproducible, collaborative, and robust data science. Wickham framed these challenges under three broad themes:
Not just once: an analysis likely needs to be runnable more than once, for example you may want to run the same code on new data as it is collected.
Not just on my computer: You may need to run some code on your own laptop, but also another system, such as the University’s HPC.
Not just me: Someone else may need to use your code in their workflow.
According to Wickham, for people in an organisation to work on the same codebase, they need to be able to (in order of priority):
find the code
run the code
understand the code
edit the code.
These challenges exist in all types of organisations, and there are surprisingly few cases where an organisation fulfils all four criteria.
Panel discussion – Reflections on data science
Cagatay Turkay, Roger Beecham, Hadley Wickham, Andrew Gelman, Jessica Hullman (left to right) at the Turing
Following each of their individual talks, the panellists reflected more generally. Here are a few key points:
Causality and complex relationships: When asked about the biggest surprises in data science over the past 10 years, both Gelman and Hullman seemed surprised at the uptake of ‘black-box’ machine learning methods. More work needs to be done to understand how these models work and to communicate their uncertainty. The causal quartet visualisations presented in the talk only addressed simple/ideal cases for causal inference. Gelman and Hullman both said that figuring out how to understand complex causal relationships in high-dimensional data is at the ‘bleeding edge’ of data science.
People problems, not methods/tools problems: All three panellists agreed that most of the issues we face in data science are people problems rather than methods or tools problems. Many of the tools and methods already exist; the harder part is thinking carefully about how we use them.
Léo’s takeaway
The whole trip reminded me of the importance of continual learning, and I will definitely be spending more time going through some of Gelman’s books (see below).
Gelman and Hullman’s talks generally encouraged people to ask: at each point in my analysis, were there alternative choices that would have been equally reasonable, and if so, how different would my results have looked had I made them? This made me want to think more about multiverse analyses (see analysis package and article).
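As a toy illustration of what a multiverse analysis does (the choices and data here are invented for the example, not taken from the talk or the packages mentioned), you can enumerate every combination of defensible analysis decisions and see how much the result moves across the resulting “universes”:

```python
from itertools import product
from statistics import mean, median

data = [1.2, 1.9, 2.3, 2.8, 3.1, 9.5]  # made-up sample with one outlier

# Two defensible decisions, each with two defensible options
choices = {
    "outlier_rule": ["keep", "drop_above_5"],
    "summary": ["mean", "median"],
}

results = {}
for outlier_rule, summary in product(*choices.values()):
    sample = data if outlier_rule == "keep" else [x for x in data if x <= 5]
    stat = mean(sample) if summary == "mean" else median(sample)
    results[(outlier_rule, summary)] = round(stat, 2)

for universe, value in results.items():
    print(universe, value)
```

If the four numbers were tightly clustered, the conclusion would be robust to these choices; a wide spread is exactly the garden-of-forking-paths problem Gelman describes.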
Further Reading
Theory and Methods Challenge Fortnight – Garden of Forking Paths
The speakers were there as part of the Turing’s Theory and Methods Challenge Fortnight (TMCF), more information can be found below:
TMCF website (worth checking for updates and write-ups)
Andrew Gelman
For people who have not heard of Andrew Gelman before, he is known as an entertaining communicator (you can search for some of his talks online or look at the Columbia statistics blog). He has also written several excellent books.
Jessica Hullman
Again, check the Columbia statistics blog, where Hullman also contributes. The home page of Hullman’s website includes selected publications covering causal quartets, as well as reliability and interpretability for more complex models.
Hadley Wickham
Wickham has made many contributions to R and data science. He is Chief Scientist at Posit and leads the tidyverse team. His book R for Data Science is a particularly useful resource; other work can be found on his website.
We are delighted to announce a few updates on one of our previous seed corn funded projects. In 2022-2023, the JGI funded Cheryl McQuire’s (Bristol Medical School) project “Addressing the fetal alcohol spectrum disorder (FASD) ‘data gap’: ascertaining the feasibility of establishing the first UK National linked database for FASD”. The project allowed Cheryl’s team to explore the feasibility of establishing a National Linked Database for Fetal Alcohol Spectrum Disorder (FASD), after landmark UK guidance called for urgent action to increase identification, understanding, and support for those affected by the disorder.
FASD is caused by prenatal alcohol exposure and is thought to be particularly common in the UK population. The aim of the seed corn project was to take the initial steps towards forming a UK National Database for FASD, looking at feasibility, acceptability, key purposes and the data structure needed. By consulting over 100 stakeholders, including clinicians, data specialists, researchers, policy makers, charities, and people living with FASD, the project demonstrated strong support for a national FASD database, although stakeholders shared a common concern about privacy and data sharing. Full details of the project can be found in our previous blog post.
Cheryl’s team also collaborated with the Elizabeth Blackwell Institute (EBI) on “Developing a National Database for Fetal Alcohol Spectrum Disorder (Nat-FASD UK): incorporating the views and recommendations of people with FASD and their carers.” Findings from the projects funded by the JGI and EBI were presented at the ADR-UK Conference 2023. The abstract for this work can be viewed here. In addition, a pre-print of their FASD National database workshop findings is now available here.
Importantly, this work has been selected to feature in the Office for National Statistics (ONS) Research Excellence Series 2024. Cheryl will deliver a webinar on “Showcasing methods for diverse stakeholder involvement in database design: establishing the feasibility and acceptability of a National Database for Fetal Alcohol Spectrum Disorder (FASD)” on Thursday 13 June, 10:30 to 11:30 BST. The webinar will cover how the team developed a tailored, multi-method approach to public and professional involvement activities, leading to high levels of engagement. You will also hear what people living with FASD and healthcare, policy and data science professionals had to say about the feasibility and acceptability of a UK National Linked Database for FASD. There will be an opportunity to ask Cheryl questions during the dedicated Q&A session. You can register for a place on the webinar here.
The work from both projects has been crucial in paving the way for progress in FASD research within the UK. It has also allowed us to get closer to addressing the FASD data gap that has been stalling the progress in prevention, understanding, and appropriate support for too long. Since both projects, Cheryl’s team has continued working on the FASD database and is currently pursuing funding options to establish a National database for FASD.
The Jean Golding Institute offers seed corn projects every year to support and promote activities that will foster interdisciplinary research in the area of data science, based on the principle that a small financial investment will lead onto bigger things. We anticipate that our next seed corn funding call will be announced in the autumn of 2024. Sign up to our mailing list to find out when the call goes live.
Researchers across the University benefit from our JGI Seedcorn Funding. Funding is great when you have someone to do the work – but what if you don’t have the right data science expertise in-house? This summer we are trialling a new JGI Data Scientist Support service, providing an alternative support mechanism for researchers who need expertise and time, but not funding.
The Jean Golding Institute’s team of data scientists and research software engineers supports researchers across the University of Bristol, fostering a collaborative research environment spanning multiple disciplines. Over the past seven years, our team has expanded thanks to various funding sources, reflecting the increasing importance of data science support in facilitating research outcomes and impact.
Get in touch with our team to find out how they can help you with:
Data analysis – recommendations or support with tools and methods for statistics, modelling, machine learning, natural language processing, computer vision, geospatial datasets and reproducible data analysis.
Software development – technical support, coding (for example: Python, R, MATLAB, SQL, bash scripts), code review and best practices.
Data communication – data visualisation, dashboards and websites.
Research planning – experimental design, data management plans, data governance, data hazards and ethics.
Our aim is to support researchers and groups that may not have in-house expertise but have project ideas that can be developed into applications for funding. We’re seeking projects that can take place over the summer until early autumn (July – October 2024).
The JGI team will get back to you within one week to discuss your request.
If demand exceeds our current resource levels, we’ll meet with applicants to help prioritise projects. As with seedcorn funding, priority will go to applications that match JGI strategic goals and have clear pathways to benefit, such as an identified funding call or impact case.
Examples of data science projects
Social mobility analysis project – using local and national level data to investigate how different people in Bristol and other UK cities feel about life in their local environment. The JGI data scientist worked as part of a multidisciplinary team including University of Bristol researchers and external stakeholders, for around 2 days per week for 3 months. They analysed survey and geospatial data using Python and presented findings to the group. The output of the project was a grant application in which a data scientist was costed longer-term.
Antimicrobial resistance project – examining patterns in observed levels of antimicrobial resistance during the COVID pandemic. The JGI data scientist worked with a University of Bristol researcher and collaborated with a public sector stakeholder, for around 4 days per week for 4 months. They performed statistical modelling using R, producing data visualisations of the trends found. The project has led to an Impact Acceleration Funding application to develop a tool used to support local health planning.
Transport research-ready dataset grant – linking administrative datasets to support research into car and van use in the UK. The JGI data scientist developed data pipelines and provided methodological and data governance input into a successful ESRC funding application in a collaboration between researchers at the universities of Bristol and Leeds. The data scientist was a named researcher on the application and went on to perform data analysis as part of the project team.