Successful Seedcorn Awardees 2024-2025

The Jean Golding Institute Seedcorn Funding is a fantastic opportunity to develop multi- and interdisciplinary ideas while promoting collaboration in data science and AI. We are delighted that a new cohort of multidisciplinary researchers has been supported through this funding.

Leighan Renaud – Building a Folk Map of St Lucia


Dr. Leighan Renaud is a lecturer in Caribbean Literatures and Cultures in the Department of English. Her research interests include twenty-first century Caribbean fiction, mothering and motherhood in the Caribbean, folk and oral traditions in the Anglophone Caribbean, and creative practices of neo-archiving. 

Louise AC Millard – Using digital health data for tracking menstrual cycles

Dr. Louise Millard is a Senior Lecturer in Health Data Science in the MRC Integrative Epidemiology Unit (IEU) at the University of Bristol. Following an undergraduate Computer Science degree and MSc in Machine Learning and Data Mining, they completed an interdisciplinary PhD at the interface of Computer Science and Epidemiology. Their research interests lie in the development and application of computational methods for population health research, including using digital health and phenotypic data, and statistical and machine learning approaches. 


Laura Fryer – Visualisation tool for Enhancing Public Engagement Using Supermarket Loyalty Card Data


Laura is a Senior Research Associate in the Digital Footprints Lab, based within Bristol Medical School. Their aim is to use novel data to unlock insights into behavioural science for the public good. Laura is particularly passionate about broadening the public’s understanding of digital footprint data (e.g. from loyalty cards, bank transactions or wearable technology such as a smartwatch) and demonstrating how vital it can be in developing our understanding of population health within the UK and beyond. Laura’s project focuses on developing a data-visualisation tool that will support public engagement activities and provide a tangible representation of the types of data the lab uses, building further trust between the public and scientific researchers.

Nicola A Wiseman – Cellular to Global Assessment of Phytoplankton Stoichiometry (C-GAPS)

Dr. Nicola Wiseman is a Research Associate in the School of Geographical Sciences. They received their PhD in Earth System Science from the University of California, Irvine, where they specialized in using ocean biogeochemical models to investigate the impacts of phytoplankton nutrient uptake flexibility on ocean carbon uptake. They are also interested in using statistical methods and machine learning to better understand the interactions between marine nutrient and carbon cycles, and the role of these interactions in regulating global climate.


George Sains – Collecting & Analysing Multilingual EEG Data


George Sains is a Doctoral Teaching Associate in the Neural Computation research group at the School of Computer Science. Their research focuses on the overlap between Computer Science, Neuroscience, and Linguistics. George has worked on developing models to help understand how linguistic traits have evolved. More recently, they have been using Bayesian modelling to find patterns between grammar and neurological response, and are now focused on using electroencephalography (EEG) experiments to explore the relationship between linguistic upbringing and how the brain processes language.

Alex Tasker – Building a Strategic Critical Rapid Integrated Biothreat Evaluation (SCRIBE) data tool for research, policy, and practice

Dr. Tasker is a Senior Lecturer at the University of Bristol, a Research Associate at the KCL Conflict Health Research Group and the Oxford Climate Change & (In)Security (CCI) project, and a recent ESRC Policy Fellow in National Security and International Relations. Dr. Tasker is an interdisciplinary researcher working across the social and natural sciences to understand human-animal-environmental health in situations of conflict, criminality, and displacement using One Health approaches. Alongside this core focus, Dr. Tasker’s work also explores emerging areas of relevance to biosecurity and biothreats, including engineering biology, antimicrobial resistance, subterranean spaces, and the use of new forms of evidence and expertise in a rapidly changing world for climate, security, and defence.


Exploring the Impact of Medical Influencers on Health Discourse Through Socio-Semantic Network Analysis

JGI Seed Corn Funding Project Blog 2023/24: Roberta Bernardi


Project Background

Medical influencers on social media shape attitudes towards medical interventions but may also spread misinformation. Understanding their influence is crucial amidst growing mistrust in health authorities. We used a Twitter dataset of the top 100 medical influencers during Covid-19 to construct a socio-semantic network, mapping both medical influencers’ identities and key topics. Medical influencers’ identities and the topics they use to represent an opinion serve as vital indicators of their influence on public health discourse. We developed a classifier to identify influencers and their network of actors, used BERTopic to identify influencers’ topics, and mapped their identities and topics into a network.

Key Results

Identity classification

Most Twitter bios include job titles and organization types, which often share similar characteristics. We therefore used a machine learning tool to see how accurately we could predict someone’s occupation from their Twitter bio. Our main question was: how well can we infer occupations from Twitter bios using recent techniques in Natural Language Processing (NLP), such as few-shot classification and pre-trained sentence embeddings? We manually coded a training set of 2,000 randomly selected bios from the top 100 medical influencers and their followers. Table 1 shows a sample of 10 users with their (multi-)labels.

Table 1. Users and their multi-labels

We used six prompts to classify the identities of medical influencers and other actors in their social network. The ensemble method, which combines all prompts, demonstrated superior performance, achieving the highest precision (0.700), recall (0.752), F1 score (0.700), and accuracy (0.513) (Table 2).

Table 2. Comparison of different prompts for identity classification

Topic Modelling

We used BERTopic to identify topics from a corpus of 424,629 tweets posted by the medical influencers between December 2021 and February 2022 (Figure 1).

Figure 1. Map of medical influencers’ topics (coloured scatter plot)
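For reference, a minimal BERTopic workflow of the kind described above might look like the following sketch; the parameters are illustrative rather than the project’s settings, and `load_tweets` is a hypothetical helper.

```python
# Minimal illustrative BERTopic workflow (parameters are placeholders, not the
# settings used in the project).
from bertopic import BERTopic

tweets = load_tweets()  # hypothetical helper returning a list of tweet texts

topic_model = BERTopic(language="english", min_topic_size=50, verbose=True)
topics, probs = topic_model.fit_transform(tweets)

print(topic_model.get_topic_info().head(10))  # largest topics with keyword labels
print(topic_model.get_topic(0))               # top words for the biggest topic
fig = topic_model.visualize_topics()          # intertopic distance map (cf. Figure 1)
fig.write_html("topic_map.html")
```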

In total, 665 topics were identified. The most prevalent topic relates to vaccine hesitancy (8,919 tweets). The second most significant topic focuses on equitable vaccine distribution (6,860 tweets). Figures 2a and 2b illustrate a comparison between the top topics identified by Latent Dirichlet Allocation (LDA) and those identified by BERTopic.

Figure 2. Comparison of LDA and BERTopic topics (word clouds of the top 5 LDA topics, left; bar charts of the top 8 BERTopic topics, right)

The topics derived from LDA appear more general and lack specific meaning, whereas the topics from BERTopic are notably more specific and carry clearer semantic meaning. For example, the BERTopic model separates vaccine “hesitancy” (topic 0) from vaccine “equity” (topic 1), while the LDA model only provides general topic information (topic 0).

Table 3 shows the three different topic representations generated from the same clusters by three different methods: Bag-of-Words with c-TF-IDF, KeyBERTInspired and ChatGPT.

Table 3. Comparison of three topic representation methods in BERTopic

The keyword lists from Bag-of-Words with c-TF-IDF and KeyBERTInspired give quick information about the content of a topic, while the narrative summaries from ChatGPT offer a human-readable description but may sacrifice some of the specific details that the keyword lists retain. BERTopic captures deeper textual meaning, which is essential for understanding the context of a conversation and producing clear topics, especially in short texts such as social media posts.
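A sketch of how such alternative topic representations can be generated in BERTopic is shown below; it assumes a fitted `topic_model` and the `tweets` list from the earlier sketch, and the ChatGPT-based representation is omitted because it requires an API key.

```python
# Illustrative only: re-describing existing BERTopic clusters with a different
# representation model (the default representation is Bag-of-Words with c-TF-IDF).
from bertopic.representation import KeyBERTInspired

keybert_rep = KeyBERTInspired()
topic_model.update_topics(tweets, representation_model=keybert_rep)

print(topic_model.get_topic(0))  # same cluster, now with a KeyBERT-style keyword list
```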

Mapping Identities and Topics in Networks

We mapped actors’ identities and the most prevalent topics from their tweets into a network (Figure 3).

Figure 3. Network representation of actors’ identities and topics

Each user node carries an attribute recording that user’s identities, which characterises the influence of medical influencers within their network and how their messages resonate across different user communities. This visualization reveals their influence and how they adapt their discourse for different audiences based on group affiliations. It also aids in exploring how medical influencers’ perspectives on health issues spread across social media communities.
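As a minimal sketch of this kind of socio-semantic network (the graph library, node names, edge types and weights below are invented for illustration, not taken from the project), identity labels and topics can be attached as node attributes:

```python
# Illustrative socio-semantic network: user nodes carry identity labels, topic
# nodes carry topic descriptions, and edges link users to each other and to the
# topics they tweet about. All names below are invented examples.
import networkx as nx

G = nx.Graph()

# User nodes with (multi-label) identities from the classifier
G.add_node("@dr_example", kind="user", identities=["physician", "researcher"])
G.add_node("@health_org", kind="user", identities=["organization"])

# Topic nodes from BERTopic
G.add_node("topic_0", kind="topic", label="vaccine hesitancy")
G.add_node("topic_1", kind="topic", label="equitable vaccine distribution")

# Social ties and user-topic links, weighted by how often a user posts on a topic
G.add_edge("@dr_example", "@health_org", kind="follows")
G.add_edge("@dr_example", "topic_0", kind="tweets_about", weight=42)
G.add_edge("@health_org", "topic_1", kind="tweets_about", weight=17)

# Example query: which identities are most connected to the hesitancy topic?
users = [n for n in G.neighbors("topic_0") if G.nodes[n]["kind"] == "user"]
print([G.nodes[n]["identities"] for n in users])
```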

Conclusion

Our work shows how to identify who medical influencers are and what topics they talk about. Our network representation of medical influencers’ identities and their topics provides insights into how these influencers change their messages to connect with different audiences. First, we used machine learning to categorize user identities. Then, we used BERTopic to find common topics among these influencers. We created a network map showing the connections between identities, social interactions, and the main topics. This innovative method helps us understand how the identities of medical influencers affect their position in the network and how well their messages connect with different user groups.


Contact details and links

For further information or to collaborate on this project, please contact Dr Roberta Bernardi (email: roberta.bernardi@bristol.ac.uk)

Acknowledgement

This blog post’s content is based on the work published in Guo, Z., Simpson, E., Bernardi, R. (2024). ‘Medfluencer: A Network Representation of Medical Influencers’ Identities and Discourse on Social Media,’ presented at epiDAMIK ’24, August 26, 2024, Barcelona, Spain

Foodscapes: visualizing dietary practices on the Roman frontiers 

JGI Seed Corn Funding Project Blog 2023/24: Lucy Cramp, Simon Hammann & Martin Pitts

Table laid out with Roman pottery from Vindolanda ready for sampling for organic residue analysis as part of our ‘Roman Melting Pots’ AHRC-DFG funded project

The extraction and molecular analysis of ancient food residues from pottery enable us to reconstruct how vessels were actually used in the past. This means we can start to build up a picture of past dietary patterns, including foodways in culturally diverse communities such as those on the Roman frontiers. However, it remains a challenge to interpret these complex residues, and to visualise and interrogate these datasets in order to explore past use of resources.

Nowadays, it is commonplace to extract organic residues from many tens, if not hundreds, of potsherds; within each residue, especially when using cutting-edge high-resolution mass spectrometric (HRMS) techniques, there might be several hundred compounds present, including some at very low abundance. Using an existing dataset of gas chromatography-high resolution mass spectrometry data from the Roman fort and associated settlement at Vindolanda, this project aimed to explore methods through which this dietary information could be spatially analysed across an archaeological site, with a view to developing methods that could be applied on a range of scales, from intra-site through to regional and even global. It was hoped that it would be possible to display the presence, in potsherds recovered from different parts of a site, of compounds that are diagnostic of particular foodstuffs, in order to spatially analyse the distribution of particular resources within and beyond sites.

A fragment from a Roman jar that was sampled from Vindolanda for organic residue analysis as part of our ‘Roman Melting Pots’ AHRC-DFG funded project

The project started by processing a pilot dataset of GC-HRMS data from the site of Vindolanda, following a previously published workflow (Korf et al. 2020). These pottery sherds came from different locations at the fort, occupied by people of different origins and social standings. These included the praetorium (commanding officer’s house), the schola (‘officers’ mess’), the infantry barracks (occupied by Tungrians, soldiers from modern-day Belgium and the Netherlands), and the non-military ‘vicus’ outside the fort walls, likely occupied by locals, traders and families. Complex data, often containing several hundred compounds per residue, were re-integrated using the open-source mass spectrometry data processing software MZmine, supported by our collaborator from mzio GmbH, Dr Ansgar Korf. This produced a ‘feature list’ of compounds and their intensities across the sample dataset. The feature list was then presented to Emilio Romero, a PhD student in Translational Health Sciences, who worked as part of the Ask-JGI helpdesk to support academic researchers on projects such as this. Emilio developed data matrices and performed statistical analyses to identify significant compounds of interest that were driving differences between the composition of organic residues from different parts of the settlement. This revealed, for example, that biomarkers of plant origin appear to be more strongly associated with pottery recovered from inside the fort than with the vicus outside the fort walls. He was then able to start exploring ways to spatially visualize these data, with input from Léo Gorman, a data scientist from the JGI, and Levi Wolf from the School of Geographical Sciences. Emilio says:

‘Over the past year, my experience helping with the Ask-JGI service has been truly rewarding. I was very excited to apply as I wanted to gain more exposure to the world of research in Bristol, meet different researchers and explore with them different ways of working and approaching data. 

One of the most challenging projects was working with chemometric concentrations of different chemical compound residues extracted from vessels used in ancient human settlements. This challenge allowed me to engage in dialogue with specialists in the field and work in a multidisciplinary way in developing data matrices, extracting coordinates and creating maps in R. The most rewarding part was being able to use a colour scale to represent the variation in concentration of specific compounds in settlements through the development of a Shiny application in R. It was certainly an invaluable experience and a technique I had never had the opportunity to practice before.’ 
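The maps and Shiny application described above were built in R; purely as an illustrative sketch (in Python, with invented file names, column names and an example compound), joining a feature table of compound intensities to sherd find-spot coordinates and colouring points by intensity might look like this:

```python
# Illustrative sketch only (the project itself used R/Shiny): join a feature
# table of compound intensities to sherd find-spot coordinates and map one
# compound. File names, column names and the compound are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

features = pd.read_csv("feature_table.csv")     # columns: sherd_id, compound, intensity
locations = pd.read_csv("sherd_locations.csv")  # columns: sherd_id, easting, northing, context

biomarker = features[features["compound"] == "example_plant_biomarker"]
merged = biomarker.merge(locations, on="sherd_id")

plt.scatter(merged["easting"], merged["northing"],
            c=merged["intensity"], cmap="viridis")
plt.colorbar(label="Relative intensity")
plt.title("Distribution of one compound across the site (illustrative)")
plt.show()
```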

This work is still in progress, but we have planned a final workshop that will take place in mid-November. Joining us will be our project partners from the Vindolanda Trust, as well as colleagues from across the Roman Melting Pots project, the JGI and the University of Bristol. A funding application to develop this exploratory spatial analysis has been submitted to the AHRC.  


Contact details and links

You can find out more about our AHRC-DFG funded project ‘Roman Melting Pots’ and news from this season’s excavations at Vindolanda and its sister site, Magna.

A real-time map-based traffic and air-quality dashboard for Bristol

JGI Seed Corn Funding Project Blog 2023/24: James Matthews

Example screenshot of the Bristol Air Quality and Traffic (AQT) dashboard in use, with key

Poor urban air quality is known to be detrimental to health, affecting many conditions including cardiorespiratory health. Sources of poor air quality in urban areas include industry, traffic and domestic wood burning. Air quality can be tracked by the many pollution sensors held by government, universities and citizens. Bristol has implemented a Clean Air Zone, but non-traffic sources, such as domestic wood burning, are not affected by it.

The project came about through the initiative of Christina Biggs, who approached academics in the School of Engineering Mathematics and Technology (Nikolai Bode) and the School of Chemistry (James Matthews and Anwar Khan) with a vision for an easy-to-use data dashboard that could empower citizens by drawing together data from citizen science, university and council air quality and traffic sensors, in order to better understand the causes of poor air quality in their area. The aims were to (1) work with community groups to inform the dashboard design, (2) create an online dashboard bringing together air quality and traffic data, (3) use council air quality sensors to enable comparison with citizen science sensors for validation, and (4) use this to identify likely sources of poor air quality.

An online dashboard was created using R with Shiny and Leaflet, collecting data through APIs, and was tested offline. The latest version has been named the Bristol Air Quality and Traffic (AQT) dashboard. The dashboard allows PM2.5 data and traffic counts to be investigated for specific places and plotted as time series. We are able to compare citizen sensor data with council and government data, and to compare readings against known safety limits.

The dashboard collates traffic data from several sources, including Telraam and Vivacity sensors, which provide vehicle counts from local sensors, and PM2.5 data from different sources, including Defra air quality stations and SensorCommunity (previously known as Luftdaten) citizen air quality stations. Clicking on a data point brings up the previous 24-hour time series of measurements. For example, in the screenshots below, one air quality sensor shows a clear PM2.5 peak during the morning rush hour of 26th June 2024 (a), which is likely related to traffic, while a second shows a higher PM2.5 peak in the evening (b), which could be related to domestic burning, such as an outdoor barbecue. A nearby traffic sensor shows that the morning peak and a smaller afternoon peak do agree with traffic numbers (c), but the evening peak might be unrelated. Data can be selected from historic datasets and is available to download for further interrogation.

Figure (a) Example of data output from the dashboard showing PM2.5, midnight to midnight on 26/06/2024
Figure (b) Example of data output from the dashboard showing PM2.5, midnight to midnight on 26/06/2024
Figure (c) Example of data output from the dashboard showing traffic measured using local Bristol sensors
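As an illustrative sketch of the kind of 24-hour view shown in the figures above (in Python, whereas the dashboard itself is written in R with Shiny and Leaflet; the file name and column names are invented placeholders), a single sensor’s PM2.5 trace can be plotted against the WHO 2021 24-hour guideline value of 15 µg/m³:

```python
# Illustrative sketch only (the AQT dashboard is built in R/Shiny/Leaflet):
# plot a 24-hour PM2.5 time series from one sensor against the WHO guideline.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sensor_pm25_2024-06-26.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

hourly = df["pm25"].resample("1h").mean()   # smooth raw readings to hourly means

ax = hourly.plot(label="PM2.5 (hourly mean)")
ax.axhline(15, linestyle="--", label="WHO 24-hour guideline (15 µg/m³)")
ax.set_ylabel("PM2.5 (µg/m³)")
ax.set_xlabel("Time, 26 June 2024")
ax.legend()
plt.show()
```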

We hope that these snapshots will provide an intuitive way for communities to understand the air quality in their local area. Throughout the project, the project team held regular meetings with Stuart Phelps from Baggator, a community group based in Easton, Bristol, so that community needs were kept at the forefront of the dashboard design.

We are currently planning a demonstration event with local stakeholders to allow them to interrogate the data and provide feedback that can be used to add explanatory text to the dashboard and enable easy and intuitive analysis of the data. We will then engage with academic communities to consider how to use the data on the dashboard to answer deeper scientific questions.


Contact details and links

Details of the dashboard can be found at the link below, and further questions can be sent to James Matthews at j.c.matthews@bristol.ac.uk

https://github.com/christinabiggs/Bristol-AQT-Dashboard/tree/main

Using Machine Learning to Correct Probe Skew in High-frequency Electrical Loss Measurements 

JGI Seed Corn Funding Project Blog 2023/24: Jun Wang & Song Liu

Introduction 

This project develops a machine learning approach to address the probe skew problem in high-frequency electrical loss measurements. 

Fig. 1 (main figure). A pipeline using an ML model to correct probe skew when measuring a magnetic hysteresis loop; both model training and model deployment are shown.

What were the aims of the seed corn project? 

Power electronic converters play an important role in modern electrical systems, such as electric vehicles and utility grids, that underpin the drive to net zero through electrification. Accurate characterisation of each component’s loss is essential for virtual prototyping and digital twins of these converters. Measuring loss requires two different probes, one for voltage and one for current, each with its own propagation delay. The difference between the probes’ delays, known as skew, causes timing errors that lead to incorrect loss measurements. Incorrectly measured loss misinforms the design process and the digital twin, which can lead to wrongly sized cooling components and potential failure of converter systems in safety-critical applications, e.g. electric passenger cars.
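To illustrate why a few nanoseconds matter (using invented waveform values, not project data), the following sketch computes the apparent switching loss from sampled voltage and current, with and without a small skew between the probes:

```python
# Illustrative only (invented waveforms): how a few nanoseconds of probe skew
# distorts a switching-loss measurement computed as the integral of v(t)*i(t).
import numpy as np

dt = 1e-10                      # 0.1 ns sample interval
t = np.arange(0, 100e-9, dt)    # 100 ns window around a switching transition

# Idealised turn-off: voltage rises while current falls over ~20 ns
v = 400 * np.clip((t - 40e-9) / 20e-9, 0, 1)        # 0 V  -> 400 V
i = 10 * (1 - np.clip((t - 40e-9) / 20e-9, 0, 1))   # 10 A -> 0 A

def loss(v, i):
    return np.sum(v * i) * dt                        # switching energy in joules

skew_samples = int(5e-9 / dt)                        # 5 ns of probe skew
i_skewed = np.roll(i, skew_samples)                  # current trace arrives 5 ns late

print(f"true loss:   {loss(v, i) * 1e6:.2f} uJ")
print(f"skewed loss: {loss(v, i_skewed) * 1e6:.2f} uJ")
```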

The aim of this project was to develop a machine-learning-based solution that learns from experimentally measured datasets and generates a prediction model to compensate for the probe skew. This interdisciplinary project treats the challenge as an image recognition problem with special shape constraints. The goal is a tool for the engineering community that takes in raw measurements and outputs corrected data/images.

What was achieved? 

Joint research efforts were made by the interdisciplinary team across two schools (EEME and Mathematics) for this project with the following achievements: 

  1. We explored the options and made a design choice to use an open-source database as the foundation for this project (the MagNet database produced by PowerLab Princeton), which provides rich datasets of experimentally measured waveforms. We then developed an approach to artificially augment these data to create training data for our ML model. 
  2. We successfully developed a shape-aware ML algorithm based on a Convolutional Neural Network (CNN) to capture the shape irregularity in measured waveforms and find its complex correlation with the probe skew in nanoseconds (an illustrative sketch of this kind of model is given after Fig. 2). 
  3. We subsequently developed a post-processing approach to retrospectively compensate for the skew and reproduce the corrected image/data. 
  4. We evaluated the proposed ML-based method against testing datasets, which demonstrated high accuracy and effectiveness. We also tested the model on our own test rig in the laboratory as a real-life use case. 
  5. We developed a web-based demonstrator to visualise the process and showcase the correction tool’s capability to the public. The web demonstrator is hosted on Streamlit and accessible through this link.
Fig. 2. Snapshot of the web application demo, showing the phase-shift prediction at test index 620
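As a rough sketch of the kind of model described in point 2 above (the project’s actual architecture and training details are not given here, so the framework, layer sizes and hyperparameters below are assumptions), a small 1D CNN regressing the skew in nanoseconds from a pair of sampled waveforms could look like:

```python
# Illustrative sketch only: a small 1D CNN that regresses probe skew (in ns)
# from a two-channel input (sampled voltage and current waveforms).
# The architecture and hyperparameters are assumptions, not the project's model.
import torch
import torch.nn as nn

class SkewRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)   # predicted skew in nanoseconds

    def forward(self, x):              # x: (batch, 2, n_samples)
        return self.head(self.features(x).squeeze(-1))

model = SkewRegressor()
waveforms = torch.randn(8, 2, 1024)    # dummy batch of voltage/current traces
skew_ns = model(waveforms)             # shape (8, 1)
loss = nn.functional.mse_loss(skew_ns, torch.zeros_like(skew_ns))
loss.backward()                        # standard supervised regression training step
```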

Future plans for the project 

This completed project is revolutionary in applying ML to correct the imperfections of hardware instruments through software-based post-processing, in contrast to conventional calibration approaches that use physical tools. This pilot project will initiate a long-term stream of research leveraging ML/AI to solve challenges in power electronics engineering. The proposed method can be packaged into a software tool as a direct replacement for, or alternative to, commercial calibration tools, which cost around £1,000 per unit. Our plans for the next steps include:

  1. Create documentation for the approach and the pipeline 
  2. Write a conference/journal paper for dissemination 
  3. Explore the commercialisation possibilities of the developed approach 
  4. Further improve the approach to make it more versatile for wider use cases 
  5. Evaluate the approach more comprehensively by testing it on extended sets of data

Contact details and links 

Dr Jun Wang, Jun.Wang@bristol.ac.uk 

Dr Song Liu, Song.Liu@bristol.ac.uk 

Web demo: https://skewtest-jtjvvx7cyvduheqihdtjqv.streamlit.app/ 

The project was assisted by University of Bristol Research IT.