Cog X 2019: The Festival of AI and Emerging Technology – 10-12 June 2019

Blog written by Patty Holley, Jean Golding Institute Manager.

Mayor of London Sadiq Khan

CogX 2019 took place during the first half of London Tech Week in the vibrant Knowledge Quarter of King's Cross in London. The conference started only three years ago, yet this year it hosted 15,000 attendees and 500 speakers, making it the largest event of its kind in Europe. CogX 2019 also supported 2030 Vision in its ambition to deliver the Sustainable Development Goals. Mayor of London Sadiq Khan opened the conference with a call for companies to be more inclusive by opening up opportunities for women and BAME communities, helping London and other cities to find solutions to societal problems.

Here are some highlights:

The State of Today – Professor Stuart Russell

Professor Stuart Russell, University of California, Berkeley

The first keynote was delivered by Professor Stuart Russell of the University of California, Berkeley, who described the global state of data science and AI. There has been major investment worldwide in the development of these technologies, and academic interest has also increased over time. For example, between 2010 and 2015 there was a significant increase in recognition accuracy on ImageNet, a dataset of labelled images taken from the web. Learning algorithms are improving constantly, but there is a long way to go to reach human cognition. Professor Russell had a cautionary message, particularly on autonomous technology, as predicted progress may not be achieved as expected.

Professor Russell also suggested that probabilistic programming and mathematical theory of uncertainty can really make an impact. As an example, he talked about the global seismic monitoring for the comprehensive nuclear test-ban treaty. Evidence data is compared with the model daily, and the algorithm detected the North Korean test in 2013.

What is coming… Robots, personal assistants, web-scale information extraction and question answering, and a global vision system via satellite imagery. However, Professor Russell believes that human-level AI has a long way to go. Major problems remain unresolved: real understanding of language, integration of learning with knowledge, long-range thinking at multiple levels of abstraction, and the cumulative discovery of concepts and theories.

Finally, Professor Russell added that data science and AI will drive an enormous increase in the capabilities of civilization, however, there are a number of risks, including democracy failures, war and attacks on humanity, so regulation and governance are key.

Gender and AI

Clemi Collett and Sarah Dillon, University of Cambridge

The talks took place in several venues across King's Cross, and on the 'Ethics' stage Sarah Dillon and Clemi Collett from the University of Cambridge highlighted the problems of dataset bias. The issue of algorithmic bias has been highlighted previously, but not the bias that may come from the data itself, and existing guidelines are not content or context specific. They argued that gender-specific guidance is needed: guidance on data collection and data handling, and a theoretical definition of fairness grounded in current and historic research that takes societal drivers into account, for example by investigating why some parts of society don't contribute to data collection.

Importantly, the speakers also talked about the diversity of the workforce working on these technologies. Currently only 17% are female, which really impacts technology design and development. Diversifying the workforce is vital, as it brings discussion within teams and companies; if this issue is not challenged, existing inequalities will be aggravated. The speakers reiterated the need to investigate, through qualitative and quantitative research, the psychological factors that affect diversity in this labour market. A panel followed the talk, which included Carly Kind, Director of the Ada Lovelace Institute; Gina Neff, University of Oxford; and Kanta Dihal, Centre for the Future of Intelligence, University of Cambridge. Carly Kind pointed out that diversity (or the lack of it) will shape which technologies are developed and used. Gina Neff highlighted that most jobs at risk of automation are those associated with women, and therefore gender equality in the workforce generating new tech is a necessity. One important area to encourage is interdisciplinary exchange between gender theorists and AI practitioners, along with novel incentives to encourage women's involvement in tech. Women need to be part of the decision-making process, and those who can become role models should be supported in building profiles that will inspire other women.

The Future of Human Machine Interaction

Mark Sagar, Soul Machines

The 'Cutting Edge' stage hosted those working on the future of some of these cutting-edge technologies. On human-machine interaction, the conference invited three companies to talk about their current work and future ideas. Mark Sagar from Soul Machines previously worked on technology to bring digital characters to life in movies like Avatar. Mark talked about the need for the mind to be part of a body, suggesting that the mind needs an entire body to learn and interact. To develop better cooperation with new technologies, humans will need better face-to-face interaction, as human reactions are created by a feedback loop of verbal and non-verbal communication; thus Soul Machines aims to build digital brains in digital bodies. The model learns through lived experiences, in real time. Mark demonstrated one example of a new type of avatar, a toddler, to show how digital humans are able to learn new skills. This technology aims to create digital systems and personal assistants that will interact with humans and learn from those interactions.

Sarah Jarvis, an engineer from Prowler.io, explained how their platform uses AI to enable complex decision making using probabilistic models. Probability theory is at the core of the technology, which is currently being used in finance, logistics and transport, smart cities, energy markets and autonomous systems.

Eric Daimler, CEO of Spinglass Ventures, observed that the constraint today is data availability and quality rather than data science technology. A large problem is the lack of large, verifiable datasets, and this challenge will grow with concerns about privacy and security; social media companies, for example, have moved to request more regulation. There are also limitations on data integration, and a gap between theory and practice. Finally, Eric suggested that the future may bring a new era in which category theory replaces calculus.

Edge Computing

Next on the 'Cutting Edge' stage we had speakers providing views on edge computing. Ingmar Posner, Professor of Applied AI at the Oxford Robotics Institute, talked about combining edge computing (moving computation closer to where it is used) with deep learning. Ingmar is interested in challenges such as machine introspection in perception and decision making, data-efficient learning from demonstration, transfer learning, and the learning of complex tasks via a set of less complex ones. He explained how these technologies may be effectively combined in driverless technology: put very simply, a sat-nav app could be integrated with the training data used to control driverless cars. In addition, the system uses simulated data to train the models, which can translate into better responses in real-world scenarios.

Joe Baguley from VMware, providers of networking solutions, described the current idea of taking existing technologies and putting them together to solve novel challenges, e.g. driverless cars. Automation is no longer an optimisation but a design requirement, and new developments mean that AI and ML can be used to manage applications across platforms and networks. AI can also be used to optimise those models, making them more energy efficient, for example by ensuring that only necessary data is kept rather than data that can be considered wasteful.

How Technology is Changing our Healthcare

Mary Lou Jepsen from Openwater, a startup working on fMRI-type imaging of the body using holographic infrared techniques, described how her discovery will offer affordable imaging technology. Samsung's Cuong Do, who directs the Global Strategy Group, described their work developing a 24/7 AI care advisor; the aim is to support medical efforts and provide an efficient alternative that can relieve pressure on the healthcare system. A game-changing use of AI will open the possibility of using biomedical data to personalise and improve the efficacy of medicine. Joanna Shields from BenevolentAI is applying these technologies to transform the way medicines are discovered, developed, tested and brought to market. Meanwhile, Sunil Wadhwani, CEO of the multimillion-dollar company IGATE Corp, is helping not-for-profit organisations to scale technologies in healthcare, leading innovation among primary health providers in India and applying AI to benefit those who need it the most. The panel discussed the growing gap between lifespan and healthspan, with financial position being the main driver of that gap. Technology may be able to help close it, train the next generation of health practitioners, optimise drug creation and delivery, and develop cost-effective healthcare for the poorest in society. In the era of data this offers an advantage: personalised data includes not only DNA but where people live, dietary information and environmental data, which brings new opportunities to develop solutions for chronic conditions. Joanna added that "the healthcare of humans is the best and most complicated machine learning and AI challenge".

Research Frontiers

The Alan Turing Institute hosted a stage at CogX this year; more information about speakers and content is available on the Turing website.

Back again at the 'Cutting Edge' stage, Robert Stojnic recommended a curated site, Papers with Code, for checking state-of-the-art developments in ML.

Jane Wang’s presentation

Jane Wang from DeepMind explained why causality is important for real-world scenarios. Jane talked about how causal reasoning develops in humans: four- to five-year-olds can make informative interventions based on causal knowledge, sometimes better than adults, because adults carry prior knowledge (bias). Jane discussed the possibility of meta-learning ("learning to learn") by learning these priors as well as task specifics, an approach that may enable AI to learn causality.

The next speaker was Peadar Coyle from Aflorithmic Labs, who contributes to an online course on probabilistic programming. He talked about the modern Bayesian workflow and suggested that lots of problems are 'small' or 'heterogeneous' data problems, where traditional ML methods may not work. He is part of the community supporting the development of probabilistic programming in Python.
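To give a flavour of why Bayesian methods suit 'small data' problems, here is a minimal conjugate Beta-Binomial update in plain Python. This specific example is ours, not from the talk, and uses no probabilistic-programming library; it simply shows a prior doing real work when observations are scarce.

```python
# Illustrative Beta-Binomial update: a conjugate Bayesian model that remains
# well-behaved on 'small data' where large-sample ML methods struggle.

def beta_binomial_update(alpha, beta, successes, failures):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Start from a weak Beta(1, 1) (uniform) prior on an unknown rate.
alpha, beta = 1.0, 1.0
# Observe only 12 trials: 3 successes, 9 failures -- 'small data'.
alpha, beta = beta_binomial_update(alpha, beta, successes=3, failures=9)
print(beta_mean(alpha, beta))  # posterior mean = 4/14 ≈ 0.286
```

The posterior mean (4/14) sits between the raw rate (3/12) and the prior mean (1/2), with the prior's pull shrinking as data accumulates.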

Ethics of AI

Joanna Bryson, University of Bath

Increasingly there has been a worrying trend of using data science technologies to perpetuate discrimination, increase power imbalances and support cyber corruption, so a key aspect of the conference was its commitment to incorporating ethical considerations into technology development. On the 'Ethics' stage, Professor Joanna Bryson of the University of Bath, one of the leading figures in the ethics of AI, talked about advances in the field. The OECD has recently published its principles on AI, which promote artificial intelligence that is innovative and trustworthy and that respects human rights and democratic values. There is a pressing need for ethics in sustainable AI, for example by looking at bias in the data collection process, not just the algorithms. One way to achieve this is by changing incentives; for example, GitHub could grant stars to projects that clearly integrate ethics into their pipeline. Most research in the field has been done in silos, sometimes without addressing impact, whereas ethical guidelines recommend closely linking research and impact. One very important aspect of this topic is diversity, as people's backgrounds affect the outputs of the field. Another aspect of fairness in this area has been the drive to support open-source software; however, the community now faces the challenge of developing strategies for its sustainability.

Data Trusts

Exploring Data Trust

A significantly different approach to data rights was addressed in the discussion 'Data Trusts' by Neil Lawrence, Chair in Neuro- and Computer Science, University of Sheffield, and Sylvie Delacroix, Professor of Law and Ethics, University of Birmingham and Turing Fellow. With GDPR we, as data providers, have rights, but it is not easy to keep track of who has our data and what they use it for – we often click "yes" just to access a website. The speakers suggested the need for a new type of entity that operates as a trust: data owners choose to entrust their data to data trustees, who are obliged to manage it according to the aspirations of the trust's members. As every individual is different, society needs an ecosystem of trusts that people can choose from and move between. This could provide meaningful choices to data providers, ensure that everybody makes a choice about how their data is used (e.g. for economic or societal value), and contribute to the growing debate around data ownership.

It was a fascinating couple of days at CogX listening to the great advances in technology. A key message was that these developments need to be guided by the critical need for equality and by the environmental challenges we face. Listening to Gail Bradbrook, co-founder of Extinction Rebellion, was a real inspiration to continuously strive to use data science and AI for social good.

More information is available in their video channel.

Attendance at CogX was funded by The Alan Turing Institute.

 

Introducing the University of Bristol’s Turing Fellows: Bill Browne

Our latest series of blogs introduces you to the University of Bristol's Turing Fellows: the Jean Golding Institute has been speaking to some of the thirty Alan Turing Institute Fellows to find out a little more about their work and research interests.

Last time we spoke to Philip Hamann, NIHR Clinical Lecturer in Rheumatology and Turing Fellow, who told us about his work developing artificial intelligence (AI) methods to monitor and manage individuals with rheumatoid arthritis using remotely captured data from smartphones and wearables – you can read more about him on our previous JGI blog.

Next, we spoke to William (Bill) Browne, Professor of Statistics in the School of Education and Turing Fellow.

What are your main research interests? 

Professor William (Bill) Browne, Professor of Statistics, School of Education

I have always worked on statistical methods and software for realistically modelling data that has underlying dependence structure, and in particular hierarchical or multilevel structure. My research is often very collaborative focussing on specific problems in specific disciplines.

I am also interested in how to explain methods / create software for applied researchers and this has led recently to interests in automation of statistical modelling approaches and of teaching statistics.

Can you give a brief background of your experience?

I studied all my degrees in a maths school (in Bath) and created the Markov chain Monte Carlo estimation engine within the MLwiN software package as part of my PhD. I then worked as a postdoc in the Centre for Multilevel Modelling (CMM) when it was based in London before holding academic posts in three different schools in turn – Mathematical Sciences in Nottingham and then Veterinary Science and Education in Bristol. While working in Education I also found time to be the first director of the Jean Golding Institute.

What are the big issues related to data science / data-intensive research in your area? 

Over my academic career, multilevel modelling techniques have gone from being available only via specialist software to being available in standard packages and used by researchers with less statistical background. The big issue is therefore to ensure that these tools are used appropriately, so training and potentially automation are important aspects of research in this area: how can we ensure that applied researchers use complex statistical approaches appropriately? This question will become even more important across data science more generally, given the growth of 'black box' machine learning approaches, which often don't offer the transparency that statistical modelling does – and this interests me as well.
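For readers unfamiliar with what multilevel models do, here is a toy sketch (our illustration, not MLwiN's machinery) of the partial pooling at their heart: each group's estimate is a precision-weighted compromise between its own mean and the grand mean, so sparsely observed groups are shrunk hardest.

```python
# Partial pooling for a random-intercept model: each group estimate is a
# precision-weighted blend of its own mean and the grand mean.
# sigma2 = within-group variance, tau2 = between-group variance (assumed
# known here for simplicity; real software estimates them from the data).

def partial_pool(group_data, sigma2=1.0, tau2=0.5):
    all_obs = [y for ys in group_data.values() for y in ys]
    grand_mean = sum(all_obs) / len(all_obs)
    estimates = {}
    for g, ys in group_data.items():
        n = len(ys)
        ybar = sum(ys) / n
        # Weight on the group's own mean grows with its sample size n.
        w = (n / sigma2) / (n / sigma2 + 1 / tau2)
        estimates[g] = w * ybar + (1 - w) * grand_mean
    return estimates

schools = {"A": [2.0, 2.4, 1.6, 2.0], "B": [3.0], "C": [1.0, 1.4]}
print(partial_pool(schools))
# School B, observed only once, is pulled strongly toward the grand mean.
```

This shrinkage is exactly what protects applied researchers from over-interpreting small groups, which is one reason the appropriate use of these tools matters.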

Slide demonstrating software for automatic teaching material generation

Can you tell us of one recent publication in the world of data science or data-intensive research that has interested you?

One wish I always have is for the time to read more and keep up with cutting-edge developments while still reading the education literature! I have a current project with Nikos Tzavidis in Southampton and Timo Schmid in Berlin looking at MCMC algorithms and software for small area estimation. Nikos, Timo and colleagues last year wrote a JRSS Series A read paper on frameworks for small area estimation, which is a good read.

How interdisciplinary is your research? 

The nature of producing statistical software is that it gets used in all manner of disciplines, and I enjoy it when Google Scholar sends me citation links and I see which new disciplines have used our software and multilevel models. Of course, having held my last three posts in different faculties has made my research even more interdisciplinary.

What’s next in your field of research? 

I hope that more research will be done on how best to train applied researchers in cutting-edge data science methods – both statistical and machine learning. I also believe more research into increasing the transparency of methods is called for: it has always been easy to show via simulations that a new method performs better than old methods in specific scenarios, but perhaps harder to explain why this is true, or indeed how generalisable the improvement is.

If anyone would like to get in touch to talk to you about collaborations / shared interests, how can they get in touch?

I am happy to answer emails – william.browne@bristol.ac.uk and there is much more about the Centre for Multilevel Modelling on our website.

Are there any events coming up that you would like to tell us about? 

We will be running a workshop on small area estimation in July and details will soon appear on the CMM website. I have also recently produced a series of online training talks on our software development work for the National Centre for Research Methods which are free to view.


More about The Turing Fellows 

Thirty fellowships and twelve projects have been awarded to Bristol as part of the University partnership with the Turing. This fellowship scheme allows university academics to develop collaborations with Turing partners. The Fellowships span many fields including key Turing interests in urban analytics, defence and health. 

Take a look at the Jean Golding Institute website for a full list of University of Bristol Turing Fellows. 

The Alan Turing Institute

The Alan Turing Institute’s goals are to undertake world-class research in data science and artificial intelligence, apply its research to real-world problems, drive economic impact and societal good, lead the training of a new generation of scientists and shape the public conversation around data. 

Find out more about The Alan Turing Institute.

Introducing the University of Bristol’s Turing Fellows: Philip Hamann

Our latest series of blogs introduces you to the University of Bristol's Turing Fellows: the Jean Golding Institute has been speaking to some of the thirty Alan Turing Institute Fellows to find out a little more about their work and research interests.

Last time we spoke to Genevieve Liveley, Reader in Classics and Turing Fellow, who told us about her work in narratology and the ancient and future (hi)stories of AI and robots – you can read more about her on our previous JGI blog.

Next, we spoke to Philip Hamann, NIHR Clinical Lecturer in Rheumatology and Turing Fellow at the University of Bristol.

Dr Philip Hamann

JGI: What are your main research interests? 

My research interests centre on developing artificial intelligence (AI) methods to monitor and manage individuals with rheumatoid arthritis remotely using remotely captured data from smartphones and wearables. 

Can you give a brief background of your experience?

I am an academic rheumatologist with over eight years of specialist rheumatology experience. I completed my PhD working on data from the British Society for Rheumatology Biologics Register, the world's largest longitudinal database of patients with rheumatoid arthritis taking some of the new high-cost drugs we use to treat the disease. I examined the data for associations between patients' demographics and how they responded to these drugs over time, using latent class trajectory modelling and Bayesian methodology to plot patient outcomes. Whilst undertaking my PhD, I conceptualised and developed an award-winning smartphone app and cloud-based software, in collaboration with industry partners, which allows patients to securely record and report rheumatoid arthritis disease activity using validated patient-reported outcomes; it is now in clinical use in the NHS. I am now working on a project to use the app to predict flares remotely and optimise when patients are seen in clinic.

What are the big issues related to data science / data-intensive research in your area? 

The really big issues in health data science research are data security, patient confidentiality, ethical algorithms, safety and the reproducibility of results. When working with sensitive patient data that influences clinical care, findings must be safe, fair and reliable. Trust in the doctor-patient relationship is central to being able to deliver the best care. Therefore, any research and industry partnership needs to adhere to the highest research and ethical standards and ensure potential issues are identified at the beginning and addressed. Transposing these core values and standards into research using novel remote monitoring services that involve cloud computing and AI presents new challenges that need to be understood and tackled up-front, which is something we've been working on from day one.

Can you tell us of one recent publication in the world of data science or data-intensive research that has interested you? 

A recent and important paper published in Sensors at the end of 2017 is of great interest: Andreu-Perez J et al., 'Developing Fine-Grained Actigraphies for Rheumatoid Arthritis Patients from a Single Accelerometer Using Machine Learning', Sensors (Basel). 2017 Sep;17(9):2113–22.

In this study, the authors describe activity traces that can identify, with a high level of accuracy, different day-to-day activities undertaken by people with rheumatoid arthritis, and differentiate these from traces from healthy volunteers, using a single accelerometer. These findings represent a significant proof of concept of how remote monitoring with wearables could be used for patients with RA.

How interdisciplinary is your research? 

As with most research in healthcare, my research is very interdisciplinary, involving clinical staff and patients who use the app in the NHS, academics working with me to develop our next research study, and industrial partners who build, develop and maintain the app. One thing I've learnt when working across disciplines and with patients is the importance of getting feedback from users of the app, avoiding jargon, making sure everyone understands everyone else, and understanding the differing objectives of different members of the team. An essential rule in multidisciplinary teams is to pause, recap and check everyone understands before proceeding, to avoid difficulties later.

What’s next in your field of research? 

Developments in healthcare are at a really exciting point right now. Wearable technology is becoming mainstream, smartphones are now more powerful than computers from just 2-3 years ago (and are affordable and widely used), and mobile data infrastructure is now robust, covering the vast majority of the UK. The tremendous advances in AI methodology mean that the data collected from this widely available hardware can be synthesised in a manageable way, bringing with it the possibilities of remote sensing and personalised, real-time, adaptive healthcare. Taken together with the open approach to innovation being championed in the NHS, new models of care that make a difference to patients may not be far off.

If anyone would like to get in touch to talk to you about collaborations / shared interests, how can they get in touch? 

Email is best: Philip.Hamann@bristol.ac.uk 


Introducing the University of Bristol’s Turing Fellows: Genevieve Liveley

Continuing our blog series 'Introducing the University of Bristol's Turing Fellows', the Jean Golding Institute has been speaking to several of the thirty University of Bristol Alan Turing Institute Fellows to find out a bit more about their work and research interests.

You can take a look at our last interview, with Levi John Wolf, Lecturer in Quantitative Human Geography, about his work developing new mathematical models and algorithms across the field of quantitative geography, on the JGI Blog.

Next up in the series, we spoke to Genevieve Liveley, Reader in Classics and Turing Fellow, about her work in narratology and the ancient and future (hi)stories of AI and robots.

Dr Genevieve Liveley, Reader in Classics and Turing Fellow

JGI: What are your main research interests? 

My research and teaching centre upon narratologically inflected studies of the ancient world and its modern reception. My most recent book, Narratology (OUP), exposes the dynamic (mis)appropriation of ancient scripts that gives modern narratology its shape. My new research, on the ancient and future (hi)stories of AI and robots, builds on this work and seeks a better understanding of the story frames, schemata, and scripts that programme cultural narratives about human interaction with artificial humans, automata, and AI across the last 3,000 years.

JGI: Can you give a brief background of your experience? 

I completed my PhD in Classics (on chaos theory and ancient narrative) here at Bristol and, after a post-doc at Berkeley and a year lecturing in Reading, was lucky enough to get a permanent post back here. 

JGI: What are the big issues related to data science / data-intensive research in your area? 

Preliminary research indicates that public attitudes to AI in society are coded by people's experience of AI in fiction. A better understanding of the narrative dynamics shaping such coding – the narrative scripts and frames that programme human responses to AI – is therefore essential, not least to help us better understand public discourse and private fears around risk and opportunity in AI. As recent controversies over immunisation show, the best data can be 'trumped' by a single story, and significant harms ensue.

JGI: Can you tell us of one recent publication in the world of data science or data-intensive research that has interested you? 

The Royal Society Report on ‘AI narratives: portrayals and perceptions of artificial intelligence and why they matter.’

The cyborg Talos depicted on a 4th-century BCE krater now in the Jatta National Archaeological Museum

JGI: How interdisciplinary is your research? 

Very! I’m a narratologist (working on the science of stories) based in the department of Classics and Ancient History. 

JGI: What’s next in your field of research? 

Bringing theory and praxis together to develop some practical tools for government and other agencies to use in assessing and anticipating the future risks of AI innovation. 

JGI: If anyone would like to get in touch to talk to you about collaborations / shared interests, how can they get in touch? 

Email me at g.liveley@bristol.ac.uk.

JGI: Are there any events coming up that you would like to tell us about? 

In April I'll be speaking on AI narratives and ethics at the CYBERUK 2019 conference (24–25 April 2019): this is the UK government's flagship cyber security event, hosted by the National Cyber Security Centre (NCSC).


Introducing the University of Bristol’s Turing Fellows: Levi John Wolf

Levi John Wolf, Lecturer in Quantitative Human Geography and Turing Fellow

In our blog series 'Introducing the University of Bristol's Turing Fellows', we have been finding out more, through a series of interviews, about the thirty academics at the University of Bristol who have recently become Alan Turing Institute Fellows.

You can find the first blog of our series, featuring Iván Palomares Carrascosa, as well as an interview with Paul Wilcox on the JGI Blog.

Next up, the Jean Golding Institute (JGI) spoke to Levi John Wolf, Lecturer in Quantitative Human Geography and Turing Fellow, about his work developing new mathematical models and algorithms in applications across a broad range of problems within the field of quantitative geography.

JGI: What are your main research interests?

I am interested in the fundamental computational and mathematical challenges that geography poses for social science. Many social problems involve situations where who you're around can affect what you do, or where who you're connected to can affect what you're able to do. Quantitative geography, as a domain, seeks to use information about geographical structures and relationships to do better social science. I develop new mathematical models and algorithms in applications across a broad range of problems in elections, campaigns, segregation and sorting, urban analytics, and inequality.

JGI: Can you give a brief background of your experience? 

I did my PhD at Arizona State University (defended December 2017) and worked as a fellow at the University of Chicago Center for Spatial Data Science during that time. I also took summers off during my PhD to work as a data scientist/engineer at Nextdoor.com, Inc. and CARTO, a spatial data science company. I moved from Brooklyn to Bristol in 2017 and have been lecturing at the University of Bristol since.  

Detected intensity of sociodemographic & racial boundaries between neighbourhoods in downtown Brooklyn using a “geosilhouette” statistic for spatial clustering.

JGI: What are the big issues related to data science / data-intensive research in your area? 

The biggest issue in computational/quantitative geography is the difficulty of using the data we have effectively. Many general data science methods can be applied to spatial data, but geographic models need special structures to capture the fundamental relatedness of nearby places and the distinctiveness of geographical areas. Existing techniques ignore geography most of the time; we can build better models when we take geography into account, but doing so poses a massive mathematical and computational challenge.  
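To make "taking geography into account" concrete, a classic example of a geography-aware statistic is Moran's I, a global measure of spatial autocorrelation: it asks whether similar values tend to sit next to each other on the map. The sketch below is purely illustrative (it is not code from Levi's work): a minimal pure-Python implementation on four toy areas arranged in a row, with binary contiguity weights.

```python
def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.

    values  -- list of observations, one per area
    weights -- dict mapping neighbouring pairs (i, j) -> w_ij
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]                      # deviations from the mean
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    s0 = sum(weights.values())                            # total weight
    return (n / s0) * (num / den)

# Four areas in a row: 0 - 1 - 2 - 3, symmetric binary contiguity weights
w = {(0, 1): 1, (1, 0): 1,
     (1, 2): 1, (2, 1): 1,
     (2, 3): 1, (3, 2): 1}

clustered = [10, 10, 0, 0]   # similar values next to each other
checker   = [10, 0, 10, 0]   # similar values kept apart
print(morans_i(clustered, w))   # positive: spatial clustering
print(morans_i(checker, w))     # negative: spatial dispersion
```

A conventional, aspatial summary (the mean, the variance) is identical for both lists; only a statistic built on the neighbourhood structure distinguishes them, which is the point Levi makes above. In practice this kind of analysis is done with dedicated spatial libraries rather than hand-rolled code.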

JGI: Can you tell us of one recent publication in the world of data science or data-intensive research that has interested you? 

A very recent article I find really fascinating is the work by Fowler et al (2019) on the ‘contextual fallacy’: the assumption that a geographical ‘box’ or ‘container’, like the neighbourhood or LSOA (Lower Super Output Area), can accurately represent the social or geographical context an individual experiences. Using individual-level census data, this research group has been able to unpack a half-century-old debate about how the fundamental way geographic data is structured can affect the conclusions drawn from it in quantitative analysis.  

JGI: How interdisciplinary is your research? 

Geography as a discipline tends to be outward-facing and focuses strongly on problems in other disciplines. My own work on elections and redistricting links political science and sociology; my work on inequality reaches into econometrics and political theory alike. My main body of work engages heavily with statistics and computer science, two domains not often considered in the same thought as geography.  

JGI: What’s next in your field of research? 

Geographic data science. The cutting edge of quantitative geography is all about trying to figure out how to leverage geographical relationships and distinctiveness directly in new computational methods to build better predictions.  

JGI: If anyone would like to get in touch to talk to you about collaborations / shared interests, how can they get in touch? 

Follow/DM me on Twitter @levijohnwolf, or email me at levi.john.wolf@bristol.ac.uk.
