Turing Fellowships 2021-2022 announcement 

The University of Bristol is proud to announce that 39 researchers have been awarded one-year Alan Turing Institute Fellowships starting on 1 October 2021.

A collage of the Bristol Turing Fellows 2021

Turing Fellows are scholars with proven research excellence in data science, artificial intelligence (AI) or a related field whose research will be significantly enhanced through active involvement with the Turing network of universities and partners. 

The Bristol Turing Fellows come from a range of disciplines across all Faculties, with expertise spanning the social sciences, health, the arts, engineering, computer science and mathematics. This breadth demonstrates the power of multidisciplinarity when applying new methodologies in machine learning and AI to societal challenges.

Professor Kate Robson Brown, Turing University Lead, said: ‘Bristol is an established partner of the Alan Turing Institute and this is an exciting time for our new Fellows to take up the opportunity to engage and drive agendas at a national level. The success across the university, in every Faculty, is evidence of the strength and breadth of the expertise at Bristol. We aim to lead the way in supporting multidisciplinary research which seeks to lever benefit to our communities.’

Professor Phil Taylor, Pro-Vice-Chancellor for Research and Enterprise, said: ‘Bristol is leading the development of state-of-the-art technologies in data science and AI that are having a profound effect on society. We are proud to support this cohort of Bristol experts who are working on new ways to harness the opportunities offered by these technologies.’

More information about the Turing Fellows at the University of Bristol can be found on the Jean Golding Institute for data-intensive research pages.

Convolutional neural networks for environmental monitoring

JGI Seed Corn Funded Project Blog

Background

Environmental monitoring is critical for the protection of human health and the environment. As the world’s population continues to increase, industrial development and agricultural practices continue to expand, as does their associated pollution. The requirement for environmental monitoring is thus greater than ever, particularly for freshwater resources utilised for human consumption.

Biological monitoring of freshwater resources involves regular characterisation of dominant microalgal communities that are highly sensitive to nutrient pollution, forming widespread harmful algal blooms (HABs) during the process of eutrophication. However, traditional microscopy-based techniques used to identify and count microalgae represent a significant bottleneck in monitoring capabilities and limit monitoring to institutions with highly trained individuals.

Project Aim

This project was funded to provide proof of concept for the application of artificial intelligence, specifically deep-learning convolutional neural networks (CNNs), to the rapid detection and identification of dominant microalgal groups and troublesome HAB-forming species in freshwater samples.

Major actions

  1. Create a robust training dataset: The first step was to produce a robust, annotated training dataset of both controlled (i.e. mono-species culture) and wild-type (i.e. natural) samples. A partnership with Dwr Cymru Welsh Water (DCWW) was established, allowing for the provision of water samples from their reservoirs over the spring-summer season, as well as access to their culture collections of dominant HAB-forming taxa. JGI support then allowed us to recruit our intern, David Furley, who spent a month imaging and annotating both types of samples, with support from DCWW experts to ensure the highest accuracy of species identification.

Outcome 1: In total ~5000 annotated wild-type images were produced containing a variety of algal species (e.g. Figure 1), and ~3000 annotated culture-collection images, across dominant cyanobacteria, diatom and chlorophyte algal species; a major feat in such a short timeframe, well done David!

Figure 1: Representative training dataset image of microalgae found within a wild-type water sample at x100 magnification, showing bounding boxes drawn around six different genera of algae classified based on morphology and size.

  2. Test off-the-shelf CNNs for algal detection and identification: Once a robust training dataset was produced, the next step was to test the application of existing CNNs to the tasks of object detection (finding and drawing a bounding box around algal cells within images) and identification (assigning the correct taxonomic label to each object detected). For this proof-of-concept project, we chose to test a PyTorch implementation of the YOLO (You Only Look Once) version 3 CNN (YOLOv3). YOLOv3 predicts bounding boxes using dimension clusters as anchor boxes, predicting an objectness score for each bounding box using logistic regression. The class each bounding box may contain is predicted using multilabel classification via independent logistic classifiers. Sum of squared error loss is used for training bounding box predictions, and binary cross-entropy loss for class predictions. A minimal usage sketch is given after Outcome 2 below.

Outcome 2: YOLOv3 proved highly effective at object detection of microalgae within mono-specific culture images and, more importantly, within wild-type samples containing a mixture of algal species as well as non-algal particles. Overall, however, YOLOv3 performed less well at object identification.
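For readers curious about how such a detector is used in practice, the snippet below is a minimal, illustrative sketch of loading a generic pretrained PyTorch YOLOv3 model via torch.hub and running it over a single image. It is not the project's trained model or pipeline; the hub repository, confidence threshold and image filename are assumptions made for the example.

```python
# Minimal sketch: run a generic pretrained PyTorch YOLOv3 model over one image.
# Illustrative only; not the project's trained model or pipeline.
import torch

# Load a pretrained YOLOv3 from the Ultralytics torch.hub repository
model = torch.hub.load("ultralytics/yolov3", "yolov3", pretrained=True)
model.conf = 0.25  # objectness/confidence threshold for reported detections

# Run inference on a single sample image (hypothetical filename)
results = model("sample_micrograph.jpg")

# One row per detection: x1, y1, x2, y2, confidence, class index, class name
print(results.pandas().xyxy[0])
```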

  3. Test bespoke Keras (TensorFlow) CNNs for algal identification: To build on our initial progress in algal detection, bounding boxes were used to cut algal cells from images within our training dataset, creating a second database of annotated individual algal cells to be used as input to a purely identification-focused CNN. For this we employed a Keras-based CNN on images comprising three types of algae: the HAB-forming cyanobacterium Oscillatoria, the chlorophyte alga Asterococcus, and the diatom Tabellaria. Two training datasets were produced: (i) a non-augmented training dataset comprising 273 images (91 from each class); and (ii) an augmented training dataset totalling ~6552 images (2184 from each class). Two instances of our novel CNN were then trained for 270 epochs each (a minimal sketch of such a network is given after Figure 3 below).

Outcome 3: Whilst the CNN trained on non-augmented images performed relatively well (Fig. 2), with identification accuracies ranging from 86% to 100% across the three classes of microalgae (Fig. 3), image augmentation significantly improved training outcomes, with Oscillatoria cyanobacteria identified with 97% accuracy, Tabellaria diatoms with 99% accuracy and Asterococcus green algae with 100% accuracy (Fig. 3).

Figure 2: Training (blue lines) and validation (orange lines) accuracy (a & c) and loss (b & d) for bespoke Keras CNNs trained on non-augmented (a & b) and augmented (c & d) training datasets over 270 epochs.

Figure 3: Confusion matrices showing classification results on validation data for our Keras CNNs trained on the non-augmented (a) and augmented (b) datasets. Values represent the percentage of correct/incorrect classifications.
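As a rough illustration of this step, the sketch below sets up a small Keras (TensorFlow) CNN with on-the-fly image augmentation for a three-class problem. The layer sizes, image size and directory layout are assumptions for the example, not the project's exact architecture or augmentation settings.

```python
# Minimal sketch of a small Keras (TensorFlow) CNN classifying cropped algal
# cells into three classes (e.g. Oscillatoria, Asterococcus, Tabellaria).
# Architecture, image size and folder layout are illustrative assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (128, 128)
NUM_CLASSES = 3

# On-the-fly augmentation (rotations, flips, zoom) in the spirit of the
# project's augmented training dataset
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=90,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.2,
    validation_split=0.2,
)

# Hypothetical folder with one sub-folder of cropped cell images per class
train_data = datagen.flow_from_directory(
    "algae_crops/", target_size=IMG_SIZE, subset="training")
val_data = datagen.flow_from_directory(
    "algae_crops/", target_size=IMG_SIZE, subset="validation")

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(*IMG_SIZE, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# The project trained each network for 270 epochs
model.fit(train_data, validation_data=val_data, epochs=270)
```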

Overall

This project has demonstrated proof-of-concept for the application of convolutional neural networks in the monitoring of microalgal communities within critical freshwater resources. We have amassed a sizeable, annotated training dataset of both wild-type and cultured samples, demonstrated the success of off-the-shelf CNNs in microalgal detection within images of water samples, and provided the first step on the road to developing CNNs capable of algal identification.

Future plans

Much work remains to be done on this topic before we have CNNs capable of automated algal detection, identification and enumeration from natural samples. We will continue to test different CNN architectures on our ~8000-image training dataset. Collaborations with DCWW are ongoing, and the outputs from this work will form the evidence base for a larger project application to drive the incorporation of CNN techniques into environmental monitoring.

Contact details:

Please contact the PI Chris Williamson at c.williamson@bristol.ac.uk and see his research group website at www.microlabbristol.org

What intensities of physical activity during adolescence contribute most to health in adulthood? – A study on the full intensity spectrum (Part-1)

JGI Seed Corn Funded Project Blog

Physical activity (PA) is among the most important human behaviours for improving and maintaining health. The level of PA performed by an individual is often measured with accelerometers (the sensors used in fitness trackers and smartphones), but the resulting data are rich and pose statistical challenges, so novel statistical solutions must be found. Multivariate Pattern Analysis (MPA) could help in this regard and has great potential to provide new insights into how PA relates to health. In this first part of our two-part blog series, we describe how we will study the multivariate PA intensity signature related to early adult physical and mental health.

The problem in a nutshell

In research, accelerometers are typically worn around the hip or wrist for several days. They measure movements of the body multiple times per second and thus produce a massive amount of raw data. In general, being active will increase the measured acceleration (ie, the stored values will be higher). All values collected over the wear period are then used for the analysis, for example by averaging them. This average value represents the total amount of PA performed. Another option is to look at the time spent in specific intensities of PA (eg, minutes per week of lower or higher intensity). This can be done by applying so-called ‘cut points’ to the measured acceleration (the stored values). For example, if the stored value is greater than 4000, we could assume this minute was of higher intensity (such cut points are usually developed in studies where accelerometers are compared against other measurements of PA intensity). Thus, cut points can be used to estimate the weekly time spent in different intensities of PA.
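As a toy illustration of how cut points turn minute-level accelerometer values into time spent in each intensity category, consider the short sketch below; the threshold values and the data are invented for the example and are not calibrated cut points from any particular study.

```python
# Toy example: apply intensity cut points to minute-level accelerometer values.
# The data and thresholds are made up for illustration only.
import numpy as np

# One stored value per minute of wear time (invented numbers)
values = np.array([120, 850, 2300, 4100, 5200, 300, 4600])

# Hypothetical cut points: <100 sedentary, 100-3999 lower, >=4000 higher intensity
cut_points = [100, 4000]
labels = ["sedentary", "lower intensity", "higher intensity"]

categories = np.digitize(values, cut_points)  # 0, 1 or 2 for each minute
minutes = {labels[i]: int((categories == i).sum()) for i in range(len(labels))}
print(minutes)  # {'sedentary': 0, 'lower intensity': 4, 'higher intensity': 3}
```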

Many previous studies investigating associations between PA and health have focused on only a few broad intensity categories (ie, sedentary, light, moderate, vigorous), with special attention paid to time spent in moderate-to-vigorous PA. In fact, current PA guidelines are heavily based on this evidence. The focus on broad and selected parts of the intensity spectrum has at least two problems. First, many activities are collapsed into the same group: brisk walking and playing squash, for example, fall into the same category (moderate-to-vigorous PA) even though their intensities can be vastly different. Second, we do not know enough about the relative contribution of lower-intensity PA (eg, light) to health.

However, including all the intensity categories in a single statistical model (eg, ordinary least squares regression) is problematic due to the high correlation between the variables and their closed structure (ie, they sum to 24 hours once sleep is added). Therefore, novel statistical solutions are needed to overcome these challenges and to identify the relative contribution of each intensity within the full intensity spectrum. One approach is MPA, which, among others (eg, compositional data analysis, the intensity gradient), was recently introduced to the field of PA epidemiology. MPA addresses the collinearity among intensity categories using latent variable modelling (Partial Least Squares Regression, PLS-R) while allowing for the inclusion of a high-resolution dataset (the full intensity spectrum). So, instead of using the above-mentioned categories (sedentary, light, moderate, vigorous), we can not only include all the categories together but also increase their resolution by increasing the number of cut points (eg, time spent in 4000-4499 and 4500-4999 instead of just ‘4000 and greater’). Thus, single cut points (eg, 4000) become less important, while at the same time we can study the relative contribution of specific intensities considering all others in the same statistical model.
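To make the modelling idea concrete, the sketch below fits a Partial Least Squares regression of a single health outcome on a high-resolution spectrum of intensity bins using scikit-learn. The data are simulated and the number of latent components is an arbitrary choice for illustration; this is not the analysis planned for this project.

```python
# Minimal sketch of MPA-style modelling via Partial Least Squares regression:
# regress a health outcome on many narrow, highly correlated intensity bins.
# All data here are simulated purely for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

n_people, n_bins = 500, 40  # e.g. 40 narrow intensity bins per person
X = rng.gamma(shape=2.0, scale=5.0, size=(n_people, n_bins))  # minutes per bin
# Simulated outcome loosely driven by time spent in the mid-to-upper bins
y = 0.05 * X[:, 20:30].sum(axis=1) + rng.normal(scale=1.0, size=n_people)

pls = PLSRegression(n_components=3)  # latent components handle collinearity
pls.fit(X, y)

# Coefficients reflect the relative contribution of each intensity bin to the
# outcome, conditional on all other bins included in the model
print(pls.coef_.ravel())
print("R^2:", pls.score(X, y))
```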

More information about MPA can be found here

Aims of the project

Previous applications of MPA to PA research have been cross-sectional studies on physical health (eg, cardio-metabolic health) where both the exposure (PA) and outcome (health) are measured at the same time. Therefore, the role of specific PA intensities for a broad range of physical and mental health outcomes is unknown. Moreover, given the importance of adolescence for life-course health, longitudinal studies are needed to explore the role of adolescent PA on future health. This proposed project utilises data from the Avon Longitudinal Study of Parents and Children (ALSPAC) resource, the most detailed study of its kind in the world, to provide novel evidence on associations of the PA intensity spectrum in adolescence (accelerometer measurements at ages 12, 14 and 16 years) with important adult health markers (wellbeing, depression, anxiety, cardiovascular health, metabolic health, adiposity, musculoskeletal, and respiratory health, measured at 25 years). The selected health markers are shown in the Figure below.

Stay tuned for Part-2, which will be published next year and will present the results of this project.

Contact details

Dr Matteo Sattler (Email: matteo.sattler@uni-graz.at, Twitter: @Sattler_Graz)

Institute of Human Movement Science, Sport and Health, University of Graz, Graz, Austria

Dr Ahmed Elhakeem (Email: a.elhakeem@bristol.ac.uk, Twitter: @aelhak19)

MRC Integrative Epidemiology Unit at the University of Bristol, Bristol, UK

Bristol Science Film Festival 2021 Data Science and AI winners

We are pleased to announce the winners of the Bristol Science Film Festival Jean Golding Institute Data Science and AI film prize 2021. The JGI co-hosted a screening of the winning films with the Bristol Science Film Festival during Data Week Online 2021.

Bristol Science Film Festival runs an annual science film competition to support film-makers trying to tell the most interesting facts (or science fictions), no matter their resources.  

Winner — The Artificial Revolution 

 

Elyas Masrour 
A young artist investigates the recent advancements in creative Artificial Intelligence to see if we’re approaching the end of art.

Watch it here 

 

Runner up — Not a Robot 

 

George Summers 
A robot tries to break into a human facility, and is asked a security question… 

Watch the trailer here 


The Elizabeth Blackwell Institute awarded a prize to health-related films in celebration of the 200th anniversary of Elizabeth Blackwell’s birth. Click here to find out more.

More about Bristol Science Film Festival and the other category winners