
Telling Tales: Building a Folk Map of St Lucia

JGI Seed Corn Funding Project Blog 2024/25: Leighan Renaud

Figure 1: A screenshot from the prototype map, showing a storyteller video with its transcript, interactive buttons at the top, and a map that changes location as the video plays

On a research trip funded by the Brigstow Institute, a small research team and I met on the Eastern Caribbean island of St Lucia in the summer of 2024 and spent 10 days immersed in the island’s folk culture. Our research was guided by the question: how do folk tales live in the 21st-century Caribbean? We were particularly interested in learning more about the story of the ‘Ti Bolom’ (a Francophone Creole term that literally translates to ‘little man’): a child-sized spirit, often thought to be a servant of the devil, that is summoned into the world to do its master’s nefarious bidding. The story has survived for generations on the island, and although its more specific details shift depending on the storyteller, the moral (which warns against greed) typically remains the same. Working with a local videographer and cultural consultant, we recorded a number of iterations of the Ti Bolom story and conducted interviews about the island’s folk culture, for archival and analytical purposes.

The stories we heard were captivating. We heard from a diverse collection of people with different experiences of the island, its geography and its storytelling culture. The Ti Bolom stories exhibited a variety of cultural influences, from European Catholicism to West African folklore and beyond. The stories we heard also often made reference to very specific places in St Lucia (including villages and iconic locations) as well as occasionally pointing to connections with other neighbouring islands. This small archive of Ti Bolom stories demonstrated the fluid and embodied nature of folk stories and also suggested that there might be a mappable ‘folk landscape’ of the island.

We were interested in exploring innovative and interactive methods of digitally archiving Caribbean folk stories in such a way that honours their embodied nature, and we were curious about the potential of using folk stories as a decolonial mapping method (‘mapping from below’). JGI Seedcorn funding was secured to test whether we could build a ‘folk map’ of St Lucia. Working closely with Mark McLaren in the Research IT team, the aims of this exploratory project were to:

  1. Investigate existing map-based storytelling approaches
  2. Create folk map prototype(s) to demonstrate potential interfaces and functionalities
  3. Document all findings, keeping in mind the potential for future projects (i.e. mapping stories across multiple islands)

Mark understood our decolonial vision immediately, and took a very considered and meticulous approach to building the prototype map, which features stories and interviews from four of the storytellers we recorded in St Lucia. He asked that I provide transcripts for each video and a list of locations mentioned. Although St Lucia’s official national language is English, Kwéyòl (a French-based creole language) is widely spoken locally, which means that some of the place names used by storytellers are not necessarily their ‘official’ names. This meant that I needed to be careful to ensure my translations and transcriptions were correct, and I spoke with storytellers and other contacts in St Lucia to validate some of the locations that feature in the stories and interviews.

Mark helped us to test several levels of interactivity with the map. When a user chooses a story to view, the filming location and the places mentioned are listed so that they can click through them at their own pace. Simultaneously, when the video is playing and a place is mentioned, the map automatically moves to the new location. This function demonstrates the existence of the folk geographies we hypothesized during our original research project: folk stories draw their own maps, demonstrating intricate webs of connections both within the island and beyond.
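To give a sense of how that synchronisation can work, here is a minimal sketch in Python, assuming each story carries a list of timed place cues derived from its transcript (the cue times and coordinates below are hypothetical): as the video plays, the map pans to the most recently mentioned place.

```python
import bisect

# Hypothetical timed place cues for one story, derived from its transcript
cues = [
    {"t": 12.0, "place": "Soufrière", "lat": 13.856, "lon": -61.056},
    {"t": 47.5, "place": "Castries",  "lat": 14.010, "lon": -60.988},
]

def active_cue(playback_seconds):
    """Return the most recently mentioned place at the given playback time."""
    times = [c["t"] for c in cues]
    i = bisect.bisect_right(times, playback_seconds) - 1
    return cues[i] if i >= 0 else None

cue = active_cue(50.0)
print(cue["place"])  # Castries - the map would pan to this location
```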

There were ethical considerations that arose during the development of the map. Folk stories in the Caribbean are considered true. It is often the case that storytellers reference real people, and locations given can identify those people. This was not necessarily an issue we faced when building the prototype, but it did prompt us to consider how much and what kind of information the map should ethically include were it to be developed.

As an interactive folk-archiving method, mapping folk landscapes has the potential to be an innovative and visually arresting resource that brings Caribbean folk cultures into dialogue with the Digital Humanities and makes these stories accessible to digital and diasporic audiences. The prototype we developed has proven the concept of a Caribbean folk landscape, and this has been pivotal as we develop a grant application for AHRC funding. Our hope is to secure funding to explore, archive and map three kinds of folk stories told across three islands in the Eastern Caribbean.


Thank you to Mark McLaren from the Research IT team for producing a prototype map.

To find out more about the project, feel free to check out our Instagram page, or contact me via email at Leighan.renaud@bristol.ac.uk

This project was discussed in the fourth episode of Bristol Data Stories. You can listen to it here.

Widening Participation (WP) Research Summer Internships

The Widening Participation (WP) Research Summer Internships provide undergraduates with hands-on experience of research during the summer holidays, with the aim of encouraging a career in research. Interns gain professional experience and knowledge through a funded placement in their chosen subject. The experience also supports applications for postgraduate study and other research roles.

This year, the JGI was very pleased to support four internships through the WP scheme. Each of the interns has provided valuable support to an array of diverse and interesting projects related to their fields of interest. We are delighted by the feedback that we have received from their project supervisors and look forward to watching their future progress. Read on for more information on their projects and their experience.

Frihah Farooq 

Research poster on ‘Automating the Linkage of Open Access Data for Health Sciences’ by Frihah Farooq

My name is Frihah, and I’m a third year undergraduate studying Mathematics here at the University of Bristol. My academic interests centre around applied data science and machine learning, and this summer I worked on a project involving the General Practice Workforce dataset published by NHS Digital. My focus was on building tools to make data that is often scattered and difficult to navigate more accessible.

The aim of the project was to automate the downloading and linkage of open-access datasets, specifically in the context of healthcare services. Many of these records are stored in files with inconsistent formats and structures, often requiring manual effort to piece together a consistent narrative. I developed a codebase in R that could search for the appropriate files, extract the relevant information, and construct a complete dataset that can be used for longitudinal analysis without the need for repeated intervention. While the code was built around the workforce dataset, the methodology generalises well to other datasets published by NHS Digital. 
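As an illustration of the linkage step, here is a hedged sketch of the core logic in Python with pandas (the project itself was written in R, and the real NHS Digital files have varying names and structures, which is exactly why automated discovery was needed; the file names below are hypothetical).

```python
import pandas as pd

def load_and_link(paths):
    """Read monthly extracts with inconsistent columns and stack them."""
    frames = []
    for path in paths:
        df = pd.read_csv(path)
        df.columns = [c.strip().upper() for c in df.columns]  # normalise headers
        df["SOURCE_FILE"] = path                              # keep provenance
        frames.append(df)
    # concat keeps columns that appear in only some months, filling gaps with NaN
    return pd.concat(frames, ignore_index=True, sort=False)

linked = load_and_link(["workforce_2023_01.csv", "workforce_2023_02.csv"])
```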

One observation from the final merged dataset was the trend of decreasing row counts, likely due to restructuring, alongside an increase in the number of recorded variables, a sign that data collection has grown more sophisticated in recent years. This experience strengthened my foundation in data automation and my ability to work with evolving and imperfect data; skills I know will benefit me as I move further into research. 

If you’d like to get in touch, you can reach me at cc22019@bristol.ac.uk 

Grace Gilman 

Hello, my name is Grace Gilman and I am starting my third year studying Computer Science with Artificial Intelligence at the University of Bath. I am hoping to go into academia in the future and pursue computing research specifically with medical applications. You can contact me at gcag20@bath.ac.uk

Over the past six weeks, I have been participating in a research internship here at the University of Bristol, supported by the Jean Golding Institute. I have been working on a data science project called ‘Using AI to Study Gender in Children’s Books’ for the team Fair Tales, supervised by Chris McWilliams. During my internship I experimented with image analysis using ChatGPT and Vertex AI, for future integration into the Data Entry app that Fair Tales is producing to semi-automate character and transcript input. I have also been contributing to the database architecture and the search and filtering options for users to interact with the database. Some of my work has involved analysing the corpus of children’s books using SQL; one pattern I found was that the imbalance between mother and father characters (1:0.75) is even more pronounced for grandmothers and grandfathers (1:0.5).
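As a flavour of that analysis, here is a hedged sketch of the kind of query involved; the Fair Tales schema is not shown in this post, so the table and column names below are hypothetical.

```python
import sqlite3

con = sqlite3.connect("fairtales.db")  # hypothetical database file
query = """
SELECT role, COUNT(*) AS n
FROM characters
WHERE role IN ('mother', 'father', 'grandmother', 'grandfather')
GROUP BY role;
"""
# Comparing the counts gives ratios such as mother:father or
# grandmother:grandfather across the corpus.
for role, n in con.execute(query):
    print(role, n)
```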

During my time at this internship, I have become much more confident in my ability to work on a project, and to write code, that will be used in a research setting. I have learnt more about how research is conducted and the skills it requires, and I have become more certain about pursuing an academic future.

Imogen Joseph 

I am currently studying a Neuroscience MSci with a Year in Industry at the University of Bristol. I’m going into my final year, having just completed a placement year in Southampton General Hospital undertaking clinical research in neonatal respiratory physiology. I’m particularly interested in a career in academia and more specifically looking at molecular mechanisms behind disease for drug discovery. 

This summer, under the supervision of Elinor Curnow, I helped develop an R package, ‘midoc’ (Multiple Imputation DOCtor, available on CRAN), designed to guide researchers through analyses with missing data. I created several functions that display a summary table of missing data, alongside optional graphs to visualise the distributions of the missing data. This allows users to explore what is actually missing and to make inferences about whether missingness is random or related to particular variables.
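midoc itself is an R package, but the gist of such a summary translates readily; here is a minimal sketch in Python of the analogous logic (not midoc’s actual interface).

```python
import pandas as pd

def missing_summary(df):
    """Count and percentage of missing values per variable."""
    out = pd.DataFrame({
        "n_missing": df.isna().sum(),
        "pct_missing": (df.isna().mean() * 100).round(1),
    })
    return out.sort_values("pct_missing", ascending=False)

# Is missingness in one variable related to another? A crude check:
# compare a variable's mean between rows where the other is/isn't missing.
def mean_by_missingness(df, var, missing_var):
    return df.groupby(df[missing_var].isna())[var].mean()
```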

Before coming into this internship, my R ability was limited to self-teaching via YouTube videos. Ample training was provided in this project, but more than anything, throwing myself in and actually writing code has been hugely beneficial to my learning. This knowledge is extremely useful for a career in research – I was even able to apply my new skills to the work carried out in my placement, and used R to analyse the data I gathered.

I am very grateful for this opportunity given to me under the JGI and will take what I’ve learnt with me into whatever I do next! 

You can contact Imogen at imogenjoseph26@gmail.com 

Sindenyi Bukachi 

Using Big Data to Rethink Children’s Rights (bsindenyi@gmail.com) 

MSci Psychology and Neuroscience, University of Bristol (Year 3) 

Sindenyi Bukachi holding their research poster on ‘Investigating attitudes towards children’s rights (in education)’

Initially, the project was quite open – the only brief was to explore attitudes towards children’s rights using big data. My early research into Reddit threads, news stories and real-world discourse helped narrow our focus to something more urgent and measurable: children’s right to participation, specifically in educational settings as both my supervisors are based in the School of Education. This became the foundation for the rest of the project, and my supervisors later decided to take it forward as a grant proposal. 

Over the first few weeks, I learned how to do structured literature reviews using academic databases like ERIC, build Boolean search strings, and track findings across a spreadsheet. I explored how participation is talked about and measured, and the themes I identified – like tokenism, power struggles between adults, and the emotional toll of being “heard” but not actually listened to – became central to our research direction. 

In the second half, I moved from qualitative sources to dataset analysis. I used R and RStudio to explore datasets from the UK Data Service. I learned to work with tricky file types (.SAV, .TAB), use new packages, extract variables, visualise trends, and test relationships between predictors — all while thinking critically about how these datasets (often not made for this topic) could reflect participation and children’s agency. 
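The analysis itself was done in R, but as a hedged illustration of the same steps, here is a Python sketch of reading those file types (the file and variable names are hypothetical).

```python
import pandas as pd
import pyreadstat

# .SAV (SPSS): pyreadstat returns the data plus metadata such as labels
df_sav, meta = pyreadstat.read_sav("uk_data_service_survey.sav")
print(meta.column_names_to_labels)  # human-readable variable labels

# .TAB: plain tab-separated text
df_tab = pd.read_csv("uk_data_service_extract.tab", sep="\t")

# Quick look at a candidate participation variable (hypothetical name)
print(df_sav["pupil_voice"].value_counts(dropna=False))
```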

I’ve gained confidence in data science, research strategy, and independent problem-solving – all skills I’ll take forward into my dissertations and future career. I’m so grateful to Dr Katherin Barg, Professor Claire Fox, and the JGI for the support and trust throughout. 

How to make data science skills stick? Learnings from the OCSEAN project

Written by Catherine Upex and Rachel Wood

Visiting researchers from the OCSEAN project, standing in front of the Fry Building (from left to right): Sena Darmasetiyawan (Udayana University); John Calorio (Davao Medical School Foundation); Komang Sumaryana (Udayana University); Chris Kinipi (University of Papua New Guinea); Wahyu Widiatmika (Udayana University); Dendi Wijaya (Jakarta University)

Introduction

Earlier this summer, the University of Bristol and the JGI welcomed a group of visiting researchers from the “Oceanic and Southeast Asian Navigators” (OCSEAN) project. OCSEAN is a worldwide interdisciplinary consortium researching the demographic history of ancient seafarers across Oceania and Southeast Asia. The visiting humanities researchers from Indonesia and the Philippines arrived in Bristol with the aim of learning more about quantitative methods and how to apply them to their research, and of taking these skills home to help their research communities do the same.

When asked, most said they had little to no knowledge or experience in coding. The task therefore was to design a training approach to help them feel confident independently using Python for research – all in the space of a few weeks.

Our Approach

The training style followed a traditional workshop format, but importantly with two instructors. This allowed one to talk through the course content, and the other to provide one-to-one help to individuals. Initially, the sessions consisted of lecture-style teaching, but as confidence grew, they transitioned to a more independent format, where small groups collaborated to solve data science problems directly related to their research interests.

As most participants had no prior coding experience, it was important not to assume any knowledge of technical terms. Over eight two-hour sessions spanning three weeks, the training gradually built up coding knowledge, covering the following topics (a flavour of the final data-analysis stage is sketched after this list):

  • Introduction to Python (e.g. variables, data types, operators, lists, dictionaries)
  • Intermediate concepts (e.g. using/writing functions, loops, conditional statements)
  • How to use Chatbots for coding (e.g. how to write good prompts, refine responses, when/when not to use, error handling, and sanity checking)
  • Data analysis (e.g. loading/cleaning data, plotting using seaborn and matplotlib, summarising data)
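As a flavour of where the sessions ended up, here is a minimal example in the spirit of the data-analysis stage, using seaborn’s built-in demo data as a stand-in for a research dataset.

```python
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset("penguins")  # stand-in for a research dataset
df = df.dropna(subset=["flipper_length_mm", "body_mass_g"])  # basic cleaning

print(df.groupby("species")["body_mass_g"].describe())  # summarising

sns.scatterplot(data=df, x="flipper_length_mm", y="body_mass_g", hue="species")
plt.show()
```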

The training also coincided with Bristol Data Week 2025, so the OCSEAN researchers had the opportunity to cement their knowledge by revisiting concepts in similar training sessions from the event.

Comparing training styles

The approach differed from a recent pilot training scheme run by the JGI Research Data Science Advocates. The aim of the pilot was to run training on data analysis in Python in a low-stress environment, via a self-led approach. Participants were supplied with materials to work through independently, with optional contact time with facilitators.

Both training styles were designed for researchers with no prior coding experience. It was interesting to see how the hands-on and hands-off approaches compared in order to understand how to most effectively encourage engagement with data science.

Feedback from OCSEAN researchers

By the end of our training period, all the OCSEAN researchers said that they had found the training very beneficial for their research. Many acknowledged that they found learning Python challenging. However, the format of the sessions, especially the opportunity to draw upon help not only from facilitators but also from ChatGPT and, importantly, from each other, allowed them to get to grips with new concepts. Intensive, successive training sessions with a clear syllabus were seen as more beneficial than one-off, unconnected sessions.

The importance of structured training was echoed by feedback from the self-led pilot training. Here, participants highlighted that despite a self-led approach being easier to fit into a working week, they would have benefitted from group discussions and the opportunity to compare their results with others. Additionally, while most of the self-led participants agreed that the pilot scheme facilitated their learning outcomes and expressed a desire to apply what they learnt to their work, some commented that they lacked the basic understanding of Python needed to independently apply these skills.

Importantly, OCSEAN researchers commented that it wasn’t just the training structure that facilitated learning. Aspects such as the use of a small meeting room and the inclusion of regular breaks further encouraged collaboration between participants and drove better understanding. Additionally, the use of datasets adapted to participants’ research fields made coding seem much more accessible and engaging. This highlighted how important a supportive and personalised teaching environment is for fully grasping complex new concepts.

Training attendees with their course completion certificates, pictured with training facilitators from the University of Bristol: Dr Dan Lawson (Associate Professor of Data Science and member of the OCSEAN project; School of Mathematics), Rachel Wood (PhD student; School of Mathematics) and Catherine Upex (PhD student; Bristol Medical School)

Reflections and moving forward

This training was facilitated by two PhD students developing their own teaching skills, and the experience taught the team a lot about what makes effective data science training. To feel confident in independently using data science, intensive face-to-face training is needed to make sure basic coding skills are cemented. This can be difficult for many to fit in, but a weekly commitment, combined with a hands-on collaborative atmosphere, can effectively drive key concepts home.

Additionally, to drive engagement, particularly from disciplines with little data science background, it is important to tailor training to specific research questions in that field, i.e. using relevant datasets. This way, participants can see how data science can help them in their own research and be more inspired to try it for themselves.

So, what’s next? The aim of this training was to provide OCSEAN researchers with data science skills to apply to their own research. It’s been brilliant to see that some have already taken this leap. Using their coding skills and connections made in Bristol, many are developing new projects, applying for PhD positions and forming future collaborations. In the Autumn, the team plan to travel to Bali to aid OCSEAN researchers in sharing coding skills with their research communities, as well as developing more research collaborations.


This blog was written by Catherine Upex and Rachel Wood

Learn more about the OCSEAN project here or contact Daniel Lawson (Dan.Lawson@bristol.ac.uk) or Monika Karmin (monika.karmin@ut.ee) for more information.

MagMap – Accurate Magnetic Characteristic Mapping Using Machine Learning

PGR JGI Seed Corn Funding Project Blog 2023/24: Binyu Cui

Introduction:

Magnetic components, such as inductors, play a crucial role in nearly all power electronics applications and are typically known to be the least efficient components, significantly affecting overall system performance and efficiency. Despite extensive research and analysis on the characteristics of magnetic components, a satisfactory first-principle model for their characterization remains elusive due to the nonlinear mechanisms and complex factors such as geometries and fabrication methods. My current research focuses on the characterization and modelling of magnetic core loss, which is essential for power electronics design. This research has practical applications in areas such as the fast charging of electric vehicles and the design of electric motors.

Traditional modelling methods have relied on empirical equations, such as the Steinmetz equation and the Jiles-Atherton hysteresis model, which require parameters to be curve-fitted in advance. Although these methods have been refined over generations (e.g., MSE and iGSE), they still face practical limitations. In contrast, data-driven techniques, such as machine learning with neural networks, have demonstrated advantages in addressing multivariable nonlinear regression problems.
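For readers unfamiliar with it, the classic Steinmetz equation estimates volumetric core loss as a power law of frequency and peak flux density; a minimal sketch is below (the parameter values are placeholders, not measurements for any real material).

```python
def steinmetz_loss(f_hz, b_peak_t, k=1.0, alpha=1.5, beta=2.5):
    """Classic Steinmetz equation: P_v = k * f^alpha * B^beta.

    k, alpha and beta must be curve-fitted per material in advance,
    and the formula only holds for sinusoidal excitation - the key
    limitations that motivate refinements like iGSE and, here,
    data-driven models.
    """
    return k * f_hz**alpha * b_peak_t**beta

print(steinmetz_loss(100e3, 0.1))  # loss density at 100 kHz, 0.1 T peak
```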

Thanks to the funding and support from the JGI, the interdisciplinary project “MagMap” has been initiated. This project encompasses testing platform modifications, database setup, and neural network development, advancing the characterization and modelling of magnetic core loss.

Outcome

Previously, a large-signal automated testing platform was built to evaluate magnetic characteristics under various conditions. Fig. 1 shows the layout of the hardware section of the testing platform and Fig. 2 shows the user interface of the software currently used for testing. With the help of the JGI, I have updated the automated procedures of the platform, including the point-to-point testing workflow and large-signal inductance characterization. This testing platform is crucial for generating the practical database for the subsequent machine learning process, as automation has greatly increased the testing efficiency for each operating point (approx. 6-8 s per data point).

Fig. 1. Layout of the hardware section of the automated testing platform.
Fig. 2. User interface of the automated testing platform.

Utilizing the current database, a Long Short-Term Memory (LSTM) model has been developed to predict core loss directly from the input voltage. The model performs better at deducing core loss than traditional empirical models such as the improved generalized Steinmetz equation. A screenshot of the code outcome is shown in Fig. 3 and an example result of the model for one material is shown in Fig. 4. A feedforward neural network was also tried as a scalar-to-scalar model, deducing core loss directly from a series of input scalars including magnetic flux density amplitude, frequency and duty cycle. Despite training accurately, it is limited in the input waveform types it can handle. Convolutional neural networks were also tested before settling on the LSTM as the sequence-to-scalar model; however, their model size is significantly larger than the LSTM’s with hardly any improvement in accuracy.
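For illustration, here is a minimal PyTorch sketch of a sequence-to-scalar LSTM of this general shape (not the project’s actual code or hyperparameters): a sampled voltage waveform in, a single core-loss estimate out.

```python
import torch
import torch.nn as nn

class CoreLossLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, v):            # v: (batch, seq_len, 1) voltage samples
        _, (h_n, _) = self.lstm(v)   # h_n: final hidden state per layer
        return self.head(h_n[-1])    # (batch, 1) predicted core loss

model = CoreLossLSTM()
waveform = torch.randn(8, 1024, 1)   # dummy batch of 8 sampled waveforms
print(model(waveform).shape)         # torch.Size([8, 1])
```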

Fig. 3. Demo outcome of the LSTM.
Fig. 4. Model performance against the ratio of validation sets used in the training (bar chart of the ratio of data points against relative core-loss error, %).

Future Plan:

Core loss measurement and modelling remain key issues in industrial applications. The difficulty stems from the non-linear relationship between magnetic flux density and magnetic field strength, characterised by the permeability of the magnetic material. The permeability of ferromagnetic materials is very sensitive to a series of external parameters including temperature, induced current, frequency and input waveform type. With an accurate fit of the relationship between magnetic flux density and field strength, not only can the core loss be precisely calculated, but the modelling methods currently used in Ansys and COMSOL can also be improved.

Acknowledgement:

I would like to extend my gratitude to the JGI for funding this research and for their unwavering support throughout the project. I am also deeply thankful to Dr. Jun Wang for his continuous support. Additionally, I would like to express my appreciation to Mr. Yuming Huo for his invaluable advice and assistance with the neural network coding process.

Unveiling Hidden Musical Semantics: Compositionality in Music Ngram Embeddings 

PGR JGI Seed Corn Funding Project Blog 2023/24: Zhijin Guo 

Introduction

The overall aim of this project is to analyse music scores by machine learning. These are of course different from sound recordings of music, since they are symbolic representations of what musicians play. But with encoded versions of these scores (in which the graphical symbols used by musicians are rendered as categorical data), we have the chance to turn these instructions into sequences of pitches, harmonies, rhythms, and so on.

What were the aims of the seed corn project? 

CRIM (‘Citations: The Renaissance Imitation Mass’) concerns a special genre of works from sixteenth-century Europe in which a composer took some pre-existing piece and adapted the various melodies and harmonies in it to create a new but related composition. More specifically, the CRIM Project is concerned with polyphonic music, in which several independent lines are combined in contrapuntal combinations. As in the case of any given style of music, the patterns that composers create follow certain rules: they write using stereotypical melodic and rhythmic patterns. And they combine these tunes (‘soggetti’, from the Italian word for ‘subject’ or ‘theme’) in stereotypical ways. So we have the dimensions of melody (line), rhythm (time), and harmony (what we’d get if we sliced through the music at each instant).

Figure 1. An illustration of the music graph: nodes are music ngrams and edges are different relations between them. Image generated by DALL·E.

We might thus ask the following kinds of questions about music: 

  • Starting from a given composition, what would be its nearest neighbour, based on any given set of patterns we might choose to represent? A machine would of course not know anything about the composer, genre, or borrowing involved in those pieces, but it would be revealing to compare what a machine might tell us about such ‘neighbours’ in light of what a human might know about them.
  • What communities of pieces can we identify in a given corpus? That is, if we attempt to classify or group works in some way based on shared features, what kinds of communities emerge? Are these communities related to style? Genre? Composer? Borrowing?
  • In contrast, if we take the various kinds of soggetti (or other basic ‘words’) as our starting point, what can we learn about their context?  What soggetti happen before and after them?  At the same time as them?  What soggetti are most closely related to them? And through this what can we say about the ways each kind of pattern is used? 

Intervals as Vectors (Music Ngrams)

How can we model these soggetti? Of course they are just sequences of pitches and durations. But since musicians move these melodies around, it will not work simply to look for strings of pitches (since as listeners we can recognize that G-A-B sounds exactly the same as C-D-E). What we need instead is to model these as distances between notes. Musicians call these ‘intervals’, and you could think of them like musical vectors: they have direction (up/down) and they have some length (X steps along the scale).

Here is an example of how we can use our CRIM Intervals tools (a Python/Pandas library) to harvest this kind of information from XML encodings of our scores.  There is more to it than this, but the basic points are clear:  the distances in the score are translated into a series of distances in a table.  Each column represents the motions in one voice.  Each row represents successive time intervals in the piece (1.0 = one quarter note). 

Figure 2. An example of an ngram: [-3, 3, 2, -2], intervals as vectors.
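To make the idea concrete, here is a toy sketch in Python, illustrative only and not the actual CRIM Intervals API; it ignores octaves and accidentals, and uses the signed interval-name convention of Figure 2 (2 = up a second, -3 = down a third).

```python
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def diatonic_intervals(notes):
    """Signed diatonic interval names between successive notes."""
    idx = [SCALE.index(n) for n in notes]
    out = []
    for a, b in zip(idx, idx[1:]):
        steps = b - a
        out.append((abs(steps) + 1) * (1 if steps >= 0 else -1))
    return out

print(diatonic_intervals(["G", "A", "B"]))  # [2, 2]
print(diatonic_intervals(["C", "D", "E"]))  # [2, 2] - the same melodic shape
```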

Link Prediction 

We are interested in predicting unobserved or missing relations between pairs of ngrams in our musical graph. Given two ngrams (nodes in the graph), the goal is to ascertain the type and likelihood of a potential relationship (edge) between them, be it sequential, vertical, or based on thematic similarity (a toy sketch of such a graph follows the list below).

  • Sequential relations connect tuples that occur near each other in time. This is the kind of ‘context’ a large language model computes, and it is what surfaces the semantic information latent in the data.
  • Vertical relations connect tuples that sound at the same time, which is another kind of context.
  • Thematic relations are based on some measure of melodic similarity.
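As promised above, here is a toy sketch of such a graph using networkx; the ngrams and the similarity score are made up for illustration.

```python
import networkx as nx

G = nx.MultiDiGraph()  # multigraph: two ngrams may share several relation types
a, b, c = "(2, 2, -3)", "(-3, 3, 2)", "(2, 2, -2)"

G.add_edge(a, b, relation="sequential")           # b follows a in one voice
G.add_edge(a, c, relation="vertical")             # a and c sound simultaneously
G.add_edge(b, c, relation="thematic", score=0.9)  # similar melodic shape

for u, v, data in G.edges(data=True):
    print(u, "->", v, data)
```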

Upon training, the model’s performance is evaluated on a held-out test set, providing metrics such as precision, recall, and F1-score for each type of relationship. The model achieved a prediction accuracy of 78%. 

Beyond its predictive capabilities, the model also generates embeddings for each ngram. These embeddings, which are high-dimensional vectors encapsulating the essence of each ngram in the context of the entire graph, can serve as invaluable tools for further musical analysis.