Successful Staff Seed Corn Awardees 2022-2023

The Jean Golding Institute’s Seed Corn Scheme

The Jean Golding Institute Seed Corn Funding is a fantastic opportunity to develop multi- and interdisciplinary ideas and promote collaboration in data science and AI. We are delighted that a new cohort of interdisciplinary research projects has been supported through this funding.

The Winners

Tom Williams is an Associate Professor in Molecular Evolution at the University of Bristol, UK. He obtained an undergraduate degree in Genetics and a PhD in bioinformatics from Trinity College Dublin, Ireland, the latter under the supervision of Mario Fares. From 2010 to 2015, he worked as a postdoc with Martin Embley at Newcastle University, UK, on phylogenetic methods and the origin of eukaryotic cells. He started a research group at the University of Bristol in 2015.

Tom’s work in the lab focuses on studying the early evolution of life, and the genomes of microbes, using phylogenetic and comparative genomic methods (that is, with bioinformatics, on a computer). The key questions relate to the nature of early life, the phylogeny of prokaryotes, the processes of microbial/genome evolution, and the origin of eukaryotic cells.

Neo Poon is a behavioural data scientist and is currently a Senior Research Associate in the Medical School at the University of Bristol. Neo’s research focuses on the socioeconomic factors behind self-medication behaviours, with his doctoral research (PhD Behavioural Science at Warwick Business School) covering a range of topics related to human decision making, including consumer choices and public opinions. He also has research experience in healthcare and well-being, as well as teaching experience in statistics and behaviour change which has led to two awards.

Emmanouil Tranos is a Professor of Quantitative Human Geography at the University of Bristol, and a Fellow at the Alan Turing Institute. Emmanouil has published extensively on the geographies of various digital technologies: from the internet’s backbone networks and the internet’s uptake at a global scale, to the usage of mobile phones and internet speeds at a very granular level of spatial and temporal resolution.

In their research they have been developing research frameworks and computational workflows that use the digital traces humans and, more specifically, economic activities leave behind to better understand cities, their structure, and their economies. This is important, as such digital traces allow us to observe behaviours and phenomena and, consequently, answer research questions that traditional data sources have not allowed us to address. To effectively handle the complexities of such unconventional data – from mobile phone records to very large archives of websites – Emmanouil’s research employs diverse methodological tools, drawing on data science, computational linguistics and network science, alongside more traditional geographical methods.

Jin Zheng is a Lecturer in Data Science in the Department of Engineering Mathematics at the University of Bristol. Her research focuses on machine learning, big data engineering, cloud computing and natural language processing, with particular attention to financial markets. In this project, her team will develop the first open-source, intelligent algo-trading platform, which could be used not only by researchers and individual users, but also as an educational tool for students in Finance, Data Science, Computer Science, and Financial Technology. With this platform, users could create their own trading robots, develop their trading strategies, and design and construct their own trading algorithms.

Cheryl McQuire is a public health researcher with a particular interest in maternal and child health, specialising in foetal alcohol spectrum disorders (FASDs). Cheryl is an NIHR SPHR Postdoctoral Launching Fellow and Programme Manager for the SPHR Healthy Places, Healthy Planet Programme and is based in the Centre for Public Health, within the Population Health Sciences Institute at the University of Bristol. She has expertise in both quantitative and qualitative analyses, natural experimental approaches, causal inference methods and systematic reviews.

During her career Cheryl has led discussions at parliamentary roundtables convened by the Department of Health, the All-Party Parliamentary Group (APPG) for FASD and the Welsh Government, and has used these to advance policy recommendations for FASD (Policy Bristol Briefing 65). Her work on the screening prevalence of FASD, using ALSPAC data, was featured in over 400 media outlets worldwide, including appearances on Radio 4’s Woman’s Hour, Sky News and Talk Radio.

Wenzhi Zhou is a Research Associate with the Electrical Energy Management Group (EEMG) within the Department of Electrical and Electronic Engineering at the University of Bristol. His background stretches from power electronics to electrical machines. His research interests are mainly oriented towards future applications in the fields of electromechanical propulsion, power conversion and renewable energy. To date, he has been working on the development of high-power-density, high-efficiency and high-reliability drive systems for sustainable mobility and advanced industrial automation/robotics, aiming at disruptive performance improvements.

Jasmina Stevanov & Laszlo Talas

Jasmina Stevanov is a Research Fellow at the Bristol Veterinary School. Trained as an experimental psychologist and artist, she incorporates these disciplines through her research in the neuroscience of aesthetics, vision, and art perception. In her research she is using machine learning and eye-tracking techniques to explore individual preferences for visual art, with the goal of offering automatic feedback about observers’ aesthetic preferences and predicting their future choices.

Laszlo Talas is a Lecturer in Animal Sensing & Biometrics at the Bristol Veterinary School. His research interests primarily focus on computational approaches to visual perception, including animal, human, and machine vision. With a background in zoology and experimental psychology, he is particularly passionate about how visual scenes can be “understood” using computers and what comparisons can be drawn with biological visual systems.


Sion Bayliss and Daniel Lawson

Sion Bayliss

In a collaboration between the University of Bristol Veterinary School and the University of Bristol School of Mathematics, Sion Bayliss and Daniel Lawson will be conducting a research project titled ‘Assessing the recombinogenic potential of novel bacterial lineages: Towards an early warning system for problem pathogens.’

Daniel Lawson

In this collaborative research project they will apply innovative methodologies to assess the potential of newly identified bacterial lineages to take up foreign DNA, increasing their potential to become the problem pathogens of the future.


More information

For more information about other funding we have provided and schemes we offer, find out more on our Funding page, and take a look at previous projects we have supported, on our Projects page.

Successful PGR Seed Corn Awardees 2022-2023

The Jean Golding Institute’s Seed Corn Scheme

The Jean Golding Institute are pleased to announce the Postgraduate Researcher Seed Corn Funding awards. Every year we provide seed corn funding to postdoctoral researchers, but this year we are pleased to also be able to fund small-scale projects by postgraduate researchers at the University of Bristol, which we hope will help them to develop their projects further. Through our Seed Corn Funding Scheme, we aim to support initiatives to develop interdisciplinary research in data science (including Artificial Intelligence) and data-intensive research.

The Winners

Zinuo You is currently studying for a PhD in Computer Science at the University of Bristol. They have a Bachelor’s degree in Electronic Science & Technology from Southwest University, in addition to a Master’s in Electronic & Electrical Engineering from the University of Sheffield. Their research interests include graph neural networks, graph structural learning and deep learning in finance.


Isolde Glissenaar is a PhD researcher in Glaciology, researching sea ice thickness in the Canadian Arctic using remote sensing and machine learning.

Isolde has created a sea ice thickness product for the channels in the Canadian Arctic Archipelago. Isolde will use their JGI PGR seed corn funding to make the product operationally available for shipping navigators, local communities, and climate researchers to use. 

Holly Fraser is a final year PhD student in Digital Health and Care, with a background in psychology and neuroscience.

During her PhD studies Holly has been using machine learning to investigate risk and protective factors for depression and anxiety, using birth cohort data. Her research involves using natural language processing (NLP) techniques to analyse the online discourse of mental health and medication use, using Reddit data.

Holly is excited to start the project and explore a new data analysis method, which she thinks will be of great value to her existing work.

Jiao Wang & Ahmed Mohamed

Jiao Wang is a fourth-year PhD student studying Hydroinformatics with the Department of Civil Engineering. Her research focuses on identifying and quantifying the information content and transmission in the catchment hydrological modelling system. 

Ahmed Mohamed is a third-year PhD student studying Rainfall Nowcasting with the Department of Civil Engineering. His research focuses on improving rainfall nowcasting based on deep learning methods and optical flow models. 

Both Ahmed and Jiao are interested in water resource problems, climate change, numerical modelling and artificial intelligence. 

Sydney Charitos & Lauren Thompson

Sydney Charitos is a second-year PhD student in Digital Health & Care. Her research focuses on the feasibility of Ecological Momentary Assessment (EMA), asking survey questions both in the moment and in context, to assess chronic pain in children aged 5-11 years.

With a background in Electronics and Computer Science, Sydney is especially interested in EMA data visualization for the many stakeholders in a child’s life.

Lauren Thompson is a third-year PhD student in Digital Health & Care. Her research explores the feasibility of a multi-stakeholder self-management tool for children aged 7-11 years with chronic fatigue, through qualitative and participatory design methods.

More information

For more information about other funding we have provided and schemes we offer, find out more on our Funding page, and take a look at previous projects we have supported, on our Projects page.

Successful PGR Seed Corn Awardees 2021-2022

The Jean Golding Institute’s Seed Corn Scheme

The Jean Golding Institute are pleased to announce the Postgraduate Researcher Seed Corn Funding awards. Every year we provide seed corn funding to postdoctoral researchers, but this year we are pleased to also be able to fund small-scale projects by postgraduate researchers at the University of Bristol, which we hope will help them to develop their projects further. Through our Seed Corn Funding Scheme, we aim to support initiatives to develop interdisciplinary research in data science (including Artificial Intelligence) and data-intensive research.

The Winners

Abdelwahab Kawafi is doing a PhD in Physiology working on supervised learning for computer vision mainly on CT scans and microscopy. This seed corn project is in material science, working on particle tracking of 200nm colloids using super-resolution confocal microscopy for the analysis of the glass transition.

Ahmed Mohammed is a postdoctoral researcher in the field of water engineering. His PhD research investigates precipitation forecasting at high spatial and temporal scales using various methods such as numerical weather prediction, optical flow-based models, and deep learning techniques.

Ahmed and Hongbo are working together on a project called “Performance evaluation of a deep learning model in short-term radar-based precipitation nowcasting”. Radar precipitation data will be used in this project to refine an existing convolutional neural network precipitation nowcasting model in order to improve end-to-end rainfall nowcasting model performance and compare it to optical flow-based models and the Eulerian persistence baseline model.
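For readers unfamiliar with the Eulerian persistence baseline mentioned above: it simply forecasts that the most recently observed radar field persists unchanged at every lead time, and any learned model must beat it to be worthwhile. A minimal sketch with illustrative numbers (not the project’s data or code):

```python
def persistence_nowcast(radar_frames, lead_steps):
    """Eulerian persistence: the last observed frame is the forecast at every lead time."""
    last = radar_frames[-1]
    return [last for _ in range(lead_steps)]

def mae(forecast, observed):
    """Mean absolute error between two rainfall grids (flat lists of mm/h)."""
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(observed)

frames = [[0.0, 1.0, 2.0], [0.5, 1.5, 2.5]]   # two past radar frames (tiny toy grids)
forecast = persistence_nowcast(frames, lead_steps=3)
observed = [1.0, 2.0, 3.0]                     # rainfall that actually occurred at t+1
print(mae(forecast[0], observed))              # skill score for the baseline at t+1
```

A nowcasting model is then judged by whether its error at each lead time falls below this baseline’s.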

Hongbo Bo is a Ph.D. student in computer science. His research focuses on social network analysis by using graph data mining techniques like graph neural networks, continual learning, and contrastive learning methods.

Anuradha Kamble is a Postgraduate Research student at the Bristol Composite Institute in the Department of Aerospace Engineering. Prior to this, she had three years of industrial experience in the commercial HVAC (Heating, Ventilation and Air Conditioning) and EV (Electric Vehicle) industries. Recent discoveries in materials science have seen growing applications of machine learning for materials discovery, leveraging the experimental data generated over many years in different studies. Her research project is titled “Exploiting the deep learning technique to study a novel nano-modified polymer composite.” The goal of this study is to implement deep learning techniques to investigate the most suitable parameters, e.g., material composition, processing temperature and time, for studying a nano-modified polymer composite. The objective of this project is to improve the generalizability of the deep learning model so that it can be widely applicable to any novel materials developed at the University of Bristol. Similar applications in the field of biosciences will be explored through collaborations with other universities across the UK.

Tian Li is a PhD student studying Physical Geography in the School of Geographical Sciences. Her work focuses on using Earth Observation techniques to study polar glaciers. Tian’s Seed Corn project is “An automated deep learning pipeline for mapping Antarctic grounding zone from ICESat-2 laser altimeter”. This study will pioneer the application of deep learning to satellite altimetry in mapping the Antarctic grounding zone, which is a key indicator of ice sheet instability. Through this project, Tian aims to develop a novel deep learning framework for mapping different grounding zone features by training on the NASA ICESat-2 laser altimetry dataset. This research will contribute to a more efficient and accurate evaluation of grounding line migration, with which we can better understand the contribution of Antarctica to future sea-level rise.

Levke Ortlieb is a final year PhD student in Physics. She works on supercooled liquids and glasses.  Levke uses 200nm colloids as a model system in 3D using super-resolution confocal microscopy. Detecting the particle positions is very challenging so she and Abdelwahab Kawafi are using convolutional neural networks for particle tracking.

More information

For more information about other funding we have provided and schemes we offer, find out more on our Funding page, and take a look at previous projects we have supported, on our Projects page.

OS Data Competition Winner – Samuel Baker

We are pleased to announce that Samuel Baker (Research Associate, School of Economics) is the winner of the 2022 data exploration and visualisation competition, ‘a map with a view’, in association with the Ordnance Survey (OS). We would like to say a big thank you to everyone who participated in the competition, as well as the OS for providing the data.

The OS holds a wealth of data, which could be used to make a multitude of different products. The OS themselves have used this data to build websites, such as the initiative to help people #GetOutside [1]. Whilst a fantastic idea, I thought it was a shame that only a small part of their dataset was being used within this product. I used this as inspiration for how I would, roughly, have made the same application, but designed to fully utilise the wealth of materials the OS have to offer.

Therefore, given how the GetOutside website works, the application would have to be a web app of sorts. Unfortunately, as someone who mostly makes packages and software in Python and C++, this was not something in my skill set. However, as part of my broader research goal of making tens of thousands of pages of historical data accessible to researchers and the public, I had been experimenting with several backend web frameworks, mostly Ruby on Rails and Django. Faced with an actual deadline, and having been programming in Python for nearly five years, Django won out.

Learning beyond the basics of Django was manageable in the time frame, given my Python background. Django has a powerful relational database framework, which allows data models to be constructed with relative ease. Still, working with a much larger set of data comes with significant challenges. Filtering ‘big data’ down into something meaningful is crucial to good data science, and to this product: providing a list of locations you could visit is relatively meaningless if you don’t know where they are or how to get there. Displaying these locations on a map, with a way to get directions when selecting a location, would solve this. There are existing solutions for this, but the next question was: how reliant was I going to be on external APIs?
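As a rough illustration of that filtering step (reducing a large set of records to the handful a user cares about), here is a minimal sketch in plain Python; the field names, categories and coordinates are hypothetical, not the actual OS Open Greenspace schema or the app’s real query code:

```python
# Hypothetical, simplified records -- not the actual OS Open Greenspace schema.
locations = [
    {"name": "Castle Park", "kind": "Public Park Or Garden", "lat": 51.455, "lon": -2.588},
    {"name": "Ashton Court", "kind": "Golf Course", "lat": 51.443, "lon": -2.645},
    {"name": "Brandon Hill", "kind": "Public Park Or Garden", "lat": 51.452, "lon": -2.606},
]

def filter_locations(records, kind, bbox):
    """Keep records of a given kind inside a (min_lat, min_lon, max_lat, max_lon) box."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return [
        r for r in records
        if r["kind"] == kind
        and min_lat <= r["lat"] <= max_lat
        and min_lon <= r["lon"] <= max_lon
    ]

parks = filter_locations(locations, "Public Park Or Garden", (51.40, -2.65, 51.47, -2.55))
print([p["name"] for p in parks])
```

In Django the same idea would be expressed as a queryset filter, with the database doing the work rather than a Python list comprehension.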

Various problems a web app might have, like map visualisation, have dedicated solution providers, like ESRI or even the OS themselves. The OS has generous free API access for open data, but it will throttle if more than 100 people are requesting a screen’s worth of tiles every ten seconds. ESRI, however, gives you a free base allowance, but once it’s gone they will charge you per 1,000 requests. It’s highly unlikely that this application would attract over 100 users a minute, especially at the beginning, but if someone were to build a company trying to provide this service, that’s your limit without paying. A solution that isn’t reliant on these services takes more work to get running and optimise, but gives the code base greater scalability going forward.

Avoiding APIs was more of a learning experience than a practical choice, but this was just a proof of concept, so the learning experience is part of the value of the project for me. Still, it resulted in a usable SVG map renderer, built with no shortage of Stack Exchange [2] and my limited JavaScript knowledge. Whilst functional, its page load is heavy, which conflicts with Google’s research finding that 53% of mobile page loads are abandoned if they take over 3 seconds [3]. The solution is likely to use a combination of AJAX calls via jQuery and the Django REST framework to lazily load smaller-resolution images first, then improve them as the zoom level increases for raster images still in the viewport. Whilst I have a better idea of this now, it was beyond what I knew and could achieve for this project at the time.
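The lazy-loading idea above amounts to a resolution-selection policy: serve coarse tiles first, then re-request sharper ones for whatever is still in the viewport. This is an illustrative sketch only; the tile sizes and function names are hypothetical, and the real client-side logic would live in JavaScript:

```python
def tile_resolution(zoom, levels=(256, 512, 1024, 2048)):
    """Pick a raster tile resolution for a zoom level: low-res tiles load first,
    and higher zoom levels request progressively sharper images."""
    index = min(max(zoom, 0), len(levels) - 1)
    return levels[index]

def tiles_to_refresh(visible_tiles, zoom):
    """Return (tile_id, resolution) pairs for tiles still in the viewport
    that were loaded at a lower resolution than the current zoom requires."""
    target = tile_resolution(zoom)
    return [(tid, target) for tid, res in visible_tiles.items() if res < target]

# Three tiles currently on screen, each tagged with the resolution it was loaded at.
visible = {"t1": 256, "t2": 1024, "t3": 512}
print(tiles_to_refresh(visible, 2))  # tiles below the 1024px target get re-requested
```

The server side of this would be a Django REST endpoint returning the requested tile at the requested resolution.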

The result is a functional, although not particularly optimised, solution for exploring the hundreds of thousands of data points within OS Open Greenspace [4]. As an extension, I also generalised the database loader to allow external open-source locations, such as those from the National Trust [5] or English Heritage [6], to be explored as well. Users can search for locations by name, or narrow down the type of location and where it is using place location filters and the OS Boundary data [7]. Once they find a location, they can also get directions from their own address via a link to Google Maps. The final addition was some basic social-media-type functions, such as comments and favourites, to make having a user account mean something, rather than just an authentication system for the sake of having one.

If I were to start this again, I would certainly do things slightly differently, but as you gain more experience, that’s usually the way of things. This has been an interesting experiment, which will certainly help my own research aims. If possible, I may develop this product further into an actual release, to try to make it a deliverable, but given other work commitments that’s not too likely in the immediate future. If you, for whatever reason, want to do that, then 100% of the code I have written is open source and on GitHub [8], and can be used however you want. If you’re brand new to Django, or just want to play around, there is also a YouTube video I made on ‘how’ to build it here [9].

An Interview with Widening Participation Intern, Jacob McLaughlin

What did you enjoy about the internship?

I had the opportunity to work on an interesting original project and to gain an insight into how research is conducted at the University. The JGI is a friendly team and the people there are working on a range of interesting things, while the interdisciplinary nature of the project meant there was always lots to learn and various different angles to explore. The wider Widening Participation programme was well-run and I had the opportunity to interact with interns from different departments working on a range of interesting projects.

What did you learn and what skills do you think you developed?

I learned a lot about how the wider University community operates, and about the procedures and processes involved in conducting original research. The project also enabled me to develop both new and existing technical skills, as the work involved creating documentation using tools such as Git and Jupyter Notebook, as well as using Python to conduct analysis. The project I was working on concerned using routine NHS data to predict stroke outcomes, so I also learned more than I expected about medical imaging and the NHS’s data infrastructure!

How has the experience added to your learning with respect to your degree programme?

The technical skills I developed during my internship have been useful during my degree, especially during my Year 3 Data Science module where knowledge of software such as GitHub and added experience using tools such as Python has given me a big head start. In addition, the internship enabled me to improve my research and analytical skills, which has been useful in my degree programme in a wide range of situations. The project also required me to summarise technical information to a variety of people, which has been useful in helping me to convey concepts clearly during my degree programme.

Would you recommend an internship with the JGI and why?

I would definitely recommend an internship with the JGI, as there is a friendly and welcoming atmosphere with plenty of support available. The JGI works on a number of diverse and interesting projects, so it is a great opportunity to utilise and improve your data expertise while simultaneously researching a novel subject. Moreover, getting involved with the JGI and the work they do gives a great insight into how the University works and how research is conducted, which is useful whether you are considering a career in research or not!

An Interview with JGI Intern, Debby Olowu

Hi, my name is Debby. Currently, I am a 3rd-year Psychology with Innovation student at the University of Bristol. Earlier this year, for one of my Innovation modules, I was required to complete and write about an internship I took part in. I was privileged that the Jean Golding Institute (JGI) were able to support me in completing an internship for which I am incredibly grateful. Here, I will explain what I learnt from this internship and how it added to my learning.

What did you enjoy about the internship?

I enjoyed learning how the university operates behind the scenes. I particularly enjoyed the biweekly meetings, where everyone goes around, introduces themselves and says what they have been up to over the past week. This made me feel extremely comfortable, because I could get insight into what everyone had been doing, and it made me feel incredibly involved. I also felt less pressure when giving my presentation because I had already met everyone at the meeting before. I also enjoyed learning about and tackling new projects, which involved investigating the National AI Strategy and data science for primary school students.

What did you learn and what skills do you think you developed?

I learned much from my internship here at the JGI. The biggest thing I learned about was the National AI Strategy. This is the government’s 10-year plan to become an AI powerhouse and improve AI across the country. The government has three main aims: the first is to attract the best of the best to work on AI. The second is to implement AI in different sectors, such as the NHS. Finally, the last aim is to show people that AI is trustworthy. I did not know much about AI before researching this strategy, but I have now learned a great deal about it.

Another thing I learned about was how the primary and secondary school curricula integrate data science and artificial intelligence. It was interesting to see the impact data science has on both curricula, even though its impact is rarely acknowledged explicitly to students. Data science appears in so many subjects: the ones most people would be aware of are the sciences and maths. However, I was surprised to learn that data science can also be incorporated into subjects such as geography and history. I believe the government should work harder to incorporate data science into the curriculum and to assess it better; originally this was assessed through coursework, but due to significant levels of plagiarism that had to stop. From my research, I feel that data science should be a separate subject, so that students can understand its impact better and be prepared for when it is incorporated into their studies at university.

How has the experience added to your learning with respect to your degree programme?

As mentioned before, for one of the modules in my degree I had to complete an independent internship module. The JGI not only allowed me to do this but also helped me improve the skills I use in assessments. This experience enabled me to improve my research and presentation skills. These are essential to my degree: to do well I must be able to research effectively and present my findings in a way that is engaging yet informative.

Would you recommend an internship with the JGI and why?

I would recommend an internship with JGI especially if you are interested in a career in data science. Even though I am interested in a career in Psychology, conducting projects that require intense research and evaluating whether approaches were beneficial is a necessary aspect of any career, especially Psychology. This shows how doing an internship with the JGI will benefit and aid you, even if you are still deciding on a career path.

I am grateful for the opportunity to work at the JGI and especially to Emma Kuwertz for supporting me throughout my placement!

Turing Interest Groups – New groups launched

The Alan Turing Institute is pleased to announce that nine new Interest Groups have been launched in October 2022.

Interest Groups aim to promote research collaboration, share knowledge, and communicate emerging scientific concepts to the wider Institute and beyond, around a shared area of interest in data science and AI. In July 2022, a call for new Interest Groups was launched, and the response was very positive, with a high number of applications received from across the university partners, including the Turing University Development Award universities.

The selected Interest Groups cover a wide range of areas of research from theoretical and methodological areas to applications on biodiversity, health and space.

Please visit the Interest Groups webpages to learn more and to find out more information about joining.

The Interest Groups launched in 2022 are:

Secret Life of Data Competition

The Jean Golding Institute’s Secret Life of Data Competition and Awards Ceremony

When we think about the security of data on our phones and computers, we might think about passwords and permissions, or about data encryption – but we rarely think about what our data looks like, or what it does as it moves around hidden inside our phones, computers, digital devices, our apps and networks. This secret life of data – the traces, bits, and fragments of personal information that we leave behind us online – was the focus of this short story competition. The Jean Golding Institute, in collaboration with the Digital Security by Design (DSbD) Futures programme, delivered by the ESRC-funded Discribe Hub+, hosted a short story competition exploring ‘the secret life of data’.

The competition sought creative stories that brought the secret life of data to life. The stories could imagine this life as a journey, a quest, a romance, or a tragedy; think of a computer’s internal architecture as a house, a jungle, a zoo, or a city; and cast the data as characters facing danger in the form of various digital threats and vulnerabilities.

The Jean Golding Institute were proud to host an awards ceremony on 2nd November, with readings of extracts from all ten shortlisted stories, and the JGI extend their congratulations to the winners and runners-up:

  • 1st place: Guy Russell – The Task in the Eight-Bit Pyramid
  • 2nd place: Fiona Ritchie Walker – Mini-Me
  • 3rd place: Ben Marshall – The Courier

All ten shortlisted stories have been published in a Secret Life of Data Anthology, available to buy from Bristol Books.

JGI Seed Corn Funding Project Blog 2021: Michael Rumbelow and Alf Coles

This seed corn project aimed to explore and extend uses of AI-generated data in an educational context. We have worked on an AI-based app to recognise, gather data on, and respond to children’s arrangements of wooden blocks in mathematical block play. The project was interdisciplinary in two ways. Firstly, the people involved crossed disciplines (teachers, academics, programmers) and, secondly, the app itself provokes engagement in creative activities involving music, chemistry and mathematics.

Developing an app to recognise real-world block play

Block play is a popular activity among children, and in schools there has been a resurgence in the use of physical blocks in primary mathematics classrooms (drawing on some East Asian practices of using physical blocks as concrete models of abstract mathematical concepts). We were interested in researching children’s interactions with physical blocks, with the aim of supporting their learning across the curriculum, and one of the key challenges was how to capture data on those interactions for analysis.

Previous studies of block play have gathered data variously through sketching, taking photos or videos of children’s block constructions, or embedding radio transmitters in blocks that could transmit their positions and orientations. Recent developments in computer vision technology offer novel ways of capturing data on block play. For example, photogrammetry apps such as 3D Scanner can now create 3D digital models from images or video of objects taken on mobile phones, and AI-based object recognition apps are increasingly able to detect objects they have been trained to ‘see’.

With funding from the JGI we were able to form a small project team of two researchers in the School of Education, a software developer and the head of a local primary school, in order to develop an app to trial with children in the school (see Figure 1).

Figure 1. The experimental set-up as used in the initial trial in a primary school

Technical Developments

Over the course of the JGI project we have developed the app in the following ways:

  • We have rebuilt the app architecture robustly around the Detectron2 object-detection framework, to facilitate reliable data gathering, training and feature development.
  • We have developed a new mode to enable gathering of data on mathematical block play around proportion (ie detection of relative block sizes as well as adjacency) and carbon chemistry modelling (ie detection of multiple row block adjacency).
  • We have made improvements to the user interface (eg removal of on-screen text when testing with pre-literate users).
  • We have tested the app with 5-6 year olds in a primary school.
  • The new version generates snapshots of children’s block arrangements and exports data on their positions to a spreadsheet, allowing further analysis.
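The proportion mode and position export described above can be sketched as follows. This is an illustrative reconstruction, not the project’s actual code: the detection format, block labels, adjacency rule and file name are all assumptions.

```python
import csv

# Hypothetical detection results, in the shape a Detectron2-style detector
# might produce after post-processing: a label and a pixel bounding box
# (x1, y1, x2, y2) per block.
detections = [
    {"label": "red",  "bbox": (10, 50, 40, 80)},
    {"label": "blue", "bbox": (42, 50, 102, 80)},  # twice the width of 'red'
]

def width(block):
    return block["bbox"][2] - block["bbox"][0]

def adjacent(a, b, gap=5):
    """Treat two blocks as adjacent if their facing horizontal edges are
    within `gap` pixels and the blocks overlap vertically."""
    ax1, ay1, ax2, ay2 = a["bbox"]
    bx1, by1, bx2, by2 = b["bbox"]
    horizontal = min(abs(ax2 - bx1), abs(bx2 - ax1)) <= gap
    vertical_overlap = ay1 < by2 and by1 < ay2
    return horizontal and vertical_overlap

# Export a snapshot of the arrangement to CSV, including each block's width
# relative to the first block (the 'proportion' measure).
with open("block_snapshot.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["label", "x1", "y1", "x2", "y2", "relative_width"])
    base = width(detections[0])
    for d in detections:
        writer.writerow([d["label"], *d["bbox"], round(width(d) / base, 2)])
```

Keeping the exported format as plain CSV is what makes the data directly usable in standard spreadsheet apps, as noted above.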

Lessons Learned

As well as developing a prototype, we have been able to trial this phase of development in school. This has given us several valuable insights into both the technical development of AI computer vision apps for gathering anonymous data on block play in schools and the usability and potential of apps controlled by children via the arrangement of physical blocks on a tabletop. In particular we have found:

  • Benefits of using platforms available to the target audience as and when feasible. Our aim was to develop an app which is ultimately usable by schools. At the time of development, the AI algorithms used required processing power beyond standard laptops to run at reasonable speeds, and dedicated AI processing hardware such as the Nvidia Jetson NX offered sufficient processing power at a fraction of the cost of higher-end GPU-equipped laptops. However, during development, largely due to global chip shortages, this price difference disappeared and Jetson NXs became scarce, so we decided to switch to higher-end GPU-equipped Windows laptops. This has simplified installation and portability of the app without the need for specialist hardware, opened a route to incremental optimisation for the types of standard lower-spec laptops used in schools, eased technical maintenance, and made it easier to share and process the gathered data in standard apps such as spreadsheets.
  • The resilience of trained artificial neural network algorithms in practice, and the importance of responsively optimising training image datasets. The app was trained to recognise blocks using datasets originally gathered with a specific higher-spec webcam at a fixed distance from the table, which required a separate support apparatus. In practice, when we tried low-cost webcams with their own built-in gooseneck supports, these worked relatively well at a variety of heights and in a variety of lighting and tabletop environments in the field, and were much more practical to set up. However, dips in reliability became apparent in certain lighting conditions, for instance in distinguishing red and pink blocks, which highlighted the need for fresh training datasets gathered with the new webcam, focused on the areas of ambiguity apparent in field-testing.
  • Children’s patience, curiosity and creativity in using novel technology. We had minimised the textual buttons in the interface designed for the researcher (to change modes etc), on the assumption that young children would not want to bother with them and that their presence might be confusing. In practice, children who had seen the buttons used during set-up were curious to bring up and explore all of the interface buttons themselves. They were also patient when the app occasionally did not immediately detect a block, ‘helping’ it to ‘see’ the block by nudging its position or re-laying it. And rather than copying what they had seen researchers or other children doing, the children were creative in exploring the affordances of the app, for example laying blocks horizontally rather than vertically, or reversing the order of a melody played by placing blocks.

Above all, this phase of development and trialling has provided evidence of the feasibility of producing an app which can use AI to detect and respond to block placements by young children in the field, and highlighted several of the key challenges for next steps.

Future Challenges

The potential uses of the app are extensive and, following on from the successes of this JGI project, we now want to:

  • Develop our app, which is currently a prototype, into something potentially ready to move into production.
  • Engage with Research Software Engineering (RSE) at the University of Bristol, to support further app development.
  • Trial and hone the tools and games to support learning using the app.
  • Extend the dataset of images used to train the app from several hundred to several thousand, aligned with the diverse webcams and conditions likely in the field.
  • Pilot the app with visually impaired and blind children.
  • Pilot the app with teachers interested in teaching climate chemistry.
  • Develop an anonymised dataset of children’s block play, including creative free play and guided mathematical block play (inspired by the UoB’s EPIC-KITCHENS dataset).
  • Enable upload, storage and visualisation of data on block arrangements on a server, for potential research analysis using AI to detect patterns.
  • Extend the app to recognise stacked as well as laid-flat block constructions, making use of LIDAR technology.

We are currently taking part in a training programme (SHAPE “Pre-accelerator” course) to help us plan the next stages of development.

JGI Seed Corn Funding Project Blog 2021: Conor Houghton

Bayesian methods in Neuroscience – Conor Houghton

For the last century, science has relied on a statistical framework based on hypothesis testing and frequentist inference. Despite its convenience in simple contexts, this approach has proved intricate, obtuse and sometimes misleading when applied to more difficult problems, particularly those with the sort of large, complex and untidy datasets that are vital for applications like climate modelling, finance, bioinformatics, epidemiology and neuroscience.

Bayesian inference solves this: the Bayesian approach is easy to interpret and returns science to its traditional reliance on evidence and description rather than a false notion of significance and truth. With a rigorous handling of uncertainty, Bayesian inference can dramatically improve statistical efficiency, allowing us to squeeze more insight out of finite, hard-won data, which in turn reduces animal and biological tissue use and reduces costs for scientific projects.
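As a minimal illustration of this handling of uncertainty (a textbook example, not drawn from the workshop): updating a conjugate Beta prior after observing coin flips yields a full posterior distribution over the unknown rate, rather than a point estimate and a p-value.

```python
# Bayesian updating with a conjugate Beta prior: a minimal, library-free
# example. After observing `heads` and `tails`, the posterior is
# Beta(alpha + heads, beta + tails) -- a full distribution over the rate,
# whose variance quantifies the remaining uncertainty.
def beta_posterior(alpha, beta, heads, tails):
    a, b = alpha + heads, beta + tails
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# A uniform prior Beta(1, 1) updated with 7 heads in 10 flips:
mean, var = beta_posterior(1, 1, 7, 3)
print(round(mean, 3), round(var, 4))  # 0.667 0.0171
```

As more data arrive the posterior variance shrinks, making explicit how much each additional observation is worth.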

With support from the Jean Golding Institute we ran a workshop on Bayesian Modelling. The workshop had many different elements: a tutorial for people unfamiliar with the approach, short talks by people in the University who use these methods, a few talks by external speakers and a data study group. In retrospect we tried to do too much, but the workshop was very helpful: the short talks brought together the local community around Bayesian Modelling, and the two external speakers, Hong Ge and Mike Peardon, were excellent and provided real and unexpected insight into the current and potential future state of Bayesian Modelling.

We hope to next host a workshop on Hybrid/Hamiltonian Monte Carlo (HMC). HMC has quickly become a very useful tool in data science, allowing us to perform Bayesian inference for a host of real-world problems that would not have been tractable a few years ago. Perhaps surprisingly, HMC has its origins in high-energy particle physics: it was invented to perform the high-dimensional integrals involved in Quantum Chromodynamics, the calculations required to predict the results of collider experiments at CERN.
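The idea can be sketched in a few lines: simulate Hamiltonian dynamics on the negative log posterior (here a one-dimensional standard normal target, using only the Python standard library), then accept or reject the endpoint with a Metropolis step. The step size and trajectory length below are illustrative choices; real applications would use a library such as Stan or Turing.

```python
import math
import random

def hmc_sample(n_samples, step=0.2, n_leapfrog=10, seed=0):
    """Hamiltonian Monte Carlo for a 1D standard normal target."""
    rng = random.Random(seed)
    U = lambda q: 0.5 * q * q   # negative log density (up to a constant)
    grad_U = lambda q: q        # its gradient
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0, 1)                  # resample momentum
        q_new, p_new = q, p
        # Leapfrog integration: half momentum step, full position/momentum
        # steps, final half momentum step.
        p_new -= 0.5 * step * grad_U(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += step * p_new
            p_new -= step * grad_U(q_new)
        q_new += step * p_new
        p_new -= 0.5 * step * grad_U(q_new)
        # Metropolis acceptance on the change in total energy.
        dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
        if math.log(rng.random()) < -dH:
            q = q_new
        samples.append(q)
    return samples

samples = hmc_sample(5000)
```

Because the leapfrog integrator nearly conserves energy, acceptance rates stay high even for long trajectories, which is what lets HMC scale to the high-dimensional integrals mentioned above.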

We believe that there is still a lot these two communities – particle physics and applied data science – can learn from each other when exploring and developing the power and scope of HMC.