Blog written by Patty Holley, Jean Golding Institute Manager.
CogX 2019 took place during the first half of London Tech Week in the vibrant Knowledge Quarter of King's Cross in London. The conference started only three years ago, yet this year it hosted 15,000 attendees and 500 speakers, making it the largest of its kind in Europe. CogX 2019 also supported 2030Vision in its ambition to deliver the Sustainable Development Goals. Mayor of London Sadiq Khan opened the conference with a call for companies to be more inclusive by opening up opportunities for women and BAME communities, helping London and other cities find solutions to societal problems.
Here are some highlights:
The State of Today – Professor Stuart Russell
The first keynote was delivered by Professor Stuart Russell of the University of California, Berkeley, who described the global state of data science and AI. There has been major investment across the world in the development of these technologies, and academic interest has also increased over time. For example, between 2010 and 2015 there was a significant increase in the recognition rate on ImageNet, a dataset of labelled images taken from the web. Learning algorithms are improving constantly, but there is a long way to go to reach human cognition. Professor Russell had a cautionary message, particularly for autonomous technology, where predicted progress may not be achieved as expected.
Professor Russell also suggested that probabilistic programming and the mathematical theory of uncertainty can make a real impact. As an example, he talked about global seismic monitoring for the Comprehensive Nuclear-Test-Ban Treaty: evidence data is compared with the model daily, and the algorithm detected the North Korean nuclear test in 2013.
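The seismic-monitoring example rests on comparing observed evidence against a probabilistic model. The sketch below is purely illustrative (not the actual treaty-monitoring system, whose models are far richer): given noisy sensor readings, Bayes' rule weighs a "background noise" hypothesis against an "event" hypothesis. All parameter values here are invented for the example.

```python
import math

def gaussian_loglik(xs, mu, sigma):
    """Log-likelihood of readings xs under a Normal(mu, sigma) model."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def posterior_event_prob(readings, prior_event=0.01,
                         noise=(0.0, 1.0), event=(3.0, 1.5)):
    """P(event | readings) via Bayes' rule over two hypotheses.

    `noise` and `event` are (mean, std) of the two Gaussian models;
    the prior reflects how rare events are assumed to be.
    """
    log_num = math.log(prior_event) + gaussian_loglik(readings, *event)
    log_den = math.log(1 - prior_event) + gaussian_loglik(readings, *noise)
    # Normalise in log space for numerical stability.
    m = max(log_num, log_den)
    num, den = math.exp(log_num - m), math.exp(log_den - m)
    return num / (num + den)

quiet = [0.1, -0.3, 0.2, 0.0]   # readings consistent with background noise
spike = [2.8, 3.4, 2.9, 3.1]    # readings consistent with an event
print(posterior_event_prob(quiet))  # close to 0
print(posterior_event_prob(spike))  # close to 1
```

Even with a strong prior against an event (1%), four consistent readings are enough for the evidence to dominate, which is the essence of daily evidence-versus-model comparison.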
What is coming: robots, personal assistants, web-scale information extraction and question answering, and a global vision system via satellite imagery. However, Professor Russell believes that human-level AI has a long way to go. Major problems, like real understanding of language, integration of learning with knowledge, long-range thinking at multiple levels of abstraction, and cumulative discovery of concepts and theories, have not yet been resolved.
Finally, Professor Russell added that data science and AI will drive an enormous increase in the capabilities of civilization. However, there are a number of risks, including failures of democracy, war and attacks on humanity, so regulation and governance are key.
Gender and AI
The talks took place in several venues across King's Cross. On the 'Ethics' stage, Sarah Dillon and Clemi Collett from the University of Cambridge highlighted the problems of dataset bias. The issue of algorithmic bias has been highlighted previously, but not the bias that may come from the data itself, and existing guidelines are neither content nor context specific. They suggested that gender-specific guidance is needed: guidance on data collection and data handling, and a theoretical definition of fairness, based on current and historic research, that takes societal drivers into account, for example by investigating why some parts of society don't contribute to data collection.
Importantly, the speakers also talked about the diversity of the workforce working on these technologies. Currently, only 17% of it is female, which really impacts technology design and development. Diversifying the workforce is vital, as it broadens discussion within teams and companies; if this issue is not challenged, existing inequalities will be aggravated. The speakers reiterated the need to investigate, through qualitative and quantitative research, the psychological factors that affect diversity in this labour market. A panel followed the talk, which included Carly Kind, Director of the Ada Lovelace Institute; Gina Neff, University of Oxford; and Kanta Dihal, Centre for the Future of Intelligence, University of Cambridge. Carly Kind pointed out that diversity (or the lack of it) will shape which technologies are developed and used. Gina Neff highlighted that most jobs at risk of automation are those associated with women, and therefore gender equality in the workforce generating new tech is a necessity. Interdisciplinary exchange between gender theorists and AI practitioners should be encouraged, along with novel incentives for women to become involved in tech. Women need to be part of the decision-making process, and those who can become role models should be supported, building profiles that will inspire other women.
The Future of Human Machine Interaction
The 'Cutting Edge' stage hosted those working on the future of some of the most advanced technologies. On human-machine interaction, the conference invited three companies to talk about their current work and future ideas. Mark Sagar from Soul Machines previously worked on technology to bring digital characters to life in movies like Avatar. Mark talked about the need for the mind to be part of a body, suggesting that the mind needs an entire body to learn and interact. To develop better cooperation with new technologies, humans will need better face-to-face interaction, as human reactions are created by a feedback loop of verbal and non-verbal communication; thus, Soul Machines aims to build digital brains in digital bodies. The model learns through lived experience, in real time. Mark demonstrated a new type of avatar, a digital toddler, to show how digital humans are able to learn new skills. This technology aims to create digital systems and personal assistants that will interact with humans and learn from those interactions.
Sarah Jarvis, an engineer from Prowler.io, explained how their platform uses AI to enable complex decision making using probabilistic models. Probability theory is at the core of the technology, which is currently being used in finance, logistics and transport, smart cities, energy markets and autonomous systems.
Eric Daimler, CEO of Spinglass Ventures, observed that the constraint is on data availability and quality rather than on data science technology. A large problem is the lack of large, verifiable datasets, and this challenge will grow due to concerns about privacy and security; social media companies, for example, have moved to request more regulation. There are also limitations on data integration, a gap between theory and practice. Finally, Eric suggested that the future may bring a new era in which category theory could replace calculus.
Next on the 'Cutting Edge' stage, speakers provided views on edge computing. Ingmar Posner, Professor of Applied AI at the Oxford Robotics Institute, talked about combining edge computing (moving computation closer to where it is used) with deep learning. Ingmar is interested in challenges such as machine introspection in perception and decision making, data-efficient learning from demonstration, transfer learning, and learning complex tasks via a set of less complex ones. He explained how these technologies may be effectively combined in driverless technology: put very simply, a sat-nav app could be integrated with the training data to control driverless cars. In addition, the system uses simulated data to train the models, which can translate into better responses in real-world scenarios. Joe Baguley from VMware, providers of networking solutions, described the current idea of taking existing technologies and putting them together to solve novel challenges, e.g. driverless cars. Automation is no longer an optimisation but a design requirement, and new developments mean that AI and ML can be used to manage applications across platforms and networks. AI can also be used to optimise those models, making them more energy efficient, for example by ensuring that only the necessary data is kept rather than data that can be considered wasteful.
How Technology is Changing our Healthcare
Mary Lou Jepsen from Openwater, a startup working on fMRI-type imaging of the body using holographic infrared techniques, described how her discovery will offer affordable imaging technology. Samsung's Cuong Do, who directs the Global Strategy Group, described their work developing a 24/7 AI care advisor. The aim of the technology is to support medical efforts and provide an efficient alternative that can relieve pressure on the healthcare system. A game-changing use of AI will be the possibility of using biomedical data to personalize and improve the efficacy of medicine. Joanna Shields from BenevolentAI is applying these technologies to transform the way medicines are discovered, developed, tested and brought to market. Meanwhile, Sunil Wadhwani, CEO of the multimillion-dollar company IGATE Corp, is helping not-for-profit organizations to scale technologies in healthcare, leading innovation among primary health service providers in India and applying AI to benefit those who need it the most. The panel discussed the widening gap between life span and health span, with financial position being the main driver of this gap. Technology may be able to help close it: training the next generation of health practitioners, optimizing drug creation and delivery, and developing cost-effective healthcare for the poorest in society. In the era of data this can provide an advantage, as personalized data includes not only DNA but also where people live, dietary information and environmental data, which brings new opportunities to develop solutions for chronic conditions. Joanna added that "the healthcare of humans is the best and most complicated machine learning and AI challenge".
The Alan Turing Institute hosted a stage at CogX this year; more information about speakers and content is available on the Turing website.
Back at the 'Cutting Edge' stage, Robert Stojnic recommended Papers with Code, a curated site for checking state-of-the-art developments in ML.
Jane Wang, from DeepMind, explained why causality is important for real-world scenarios. Jane talked about how causal reasoning develops in humans and when it shows up: four- to five-year-olds can make informative interventions based on causal knowledge, sometimes better than adults, as adults bring prior knowledge (bias). Jane discussed the possibility of meta-learning ("learning to learn") by learning these priors as well as task specifics; this approach may enable AI to learn causality.
The next speaker was Peadar Coyle from Aflorithmic Labs, who contributes to an online course on probabilistic programming. He talked about the modern Bayesian workflow and suggested that lots of problems are 'small' or 'heterogeneous' data problems where traditional ML methods may not work. He is part of the community supporting the development of probabilistic programming in Python.
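The appeal of a Bayesian workflow for 'small data' can be illustrated with the simplest possible probabilistic model: a conjugate Beta-Binomial update, worked here in plain Python rather than a full probabilistic-programming framework such as PyMC. This is an illustrative sketch of the general technique, not material from the course mentioned above.

```python
# Conjugate Beta-Binomial update: with a Beta(alpha, beta) prior on a
# success probability and binomial observations, the posterior is again
# a Beta distribution -- no sampler needed for this simple case.

def beta_binomial_update(alpha, beta, successes, failures):
    """Posterior Beta parameters after observing binomial data."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Prior Beta(2, 2): weakly centred on 0.5.
# Observe 7 successes in 10 trials -- a tiny dataset where a bare point
# estimate (0.7) would hide how little the data actually pins down.
a, b = beta_binomial_update(2, 2, successes=7, failures=3)
print(a, b)             # 9 5
print(beta_mean(a, b))  # 0.642857... (shrunk towards the prior)
```

The posterior mean sits between the raw frequency and the prior, which is exactly the behaviour that makes Bayesian methods attractive when data is scarce or heterogeneous.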
Ethics of AI
Increasingly, there has been a worrying trend of using data science technologies to perpetuate discrimination, increase power imbalances and support cyber corruption, and a key aspect of the conference was its commitment to incorporating ethical considerations into technology development. On the 'Ethics' stage, Professor Joanna Bryson from the University of Bath, one of the leading figures in the ethics of AI, talked about advances in the field. Recently, the OECD published its principles on AI, which promote artificial intelligence that is innovative and trustworthy and that respects human rights and democratic values. There is a pressing need for ethics in sustainable AI, for example by looking at bias in the data collection process, not just in the algorithms. One way to achieve this is by changing incentives: for example, GitHub could grant stars to projects that integrate ethics clearly into their pipeline. Most research in the field has been done in silos, sometimes without addressing its impact; ethical guidelines recommend closely linking research and impact. One very important aspect of this topic is diversity, as people's backgrounds will affect the outputs of the field. Another important aspect of fairness in this area has been the drive to support open source software. However, the community now faces the challenge of developing strategies for sustainability.
A significantly different approach to data rights was addressed in the 'Data Trusts' discussion by Neil Lawrence, Chair in Neuro and Computer Science, University of Sheffield, and Sylvie Delacroix, Professor of Law and Ethics, University of Birmingham and Turing Fellow. With GDPR, we as data providers have rights, but it's not easy to keep track of who has our data and what they use it for; we often click "yes" just to access a website. The speakers suggested the need for a new type of entity that operates as a trust. With this mechanism, data owners choose to entrust their data to data trustees, who are compelled to manage the data according to the aspirations of the trust's members. As every individual is different, society needs an ecosystem of trusts, which people could choose and move between. This could provide meaningful choices to data providers, ensuring that everybody makes a choice about how their data is used (e.g. for economic or societal value), and contributing to the growing debate around data ownership.
It was a fascinating couple of days at CogX, hearing about the great advances in technology. A key message was that these developments need to be guided by the critical need for equality and by the environmental challenges we face. Listening to Gail Bradbrook, co-founder of Extinction Rebellion, was a real inspiration to continue striving to use data science and AI for social good.
More information is available on the CogX video channel.
Attendance at Cog X was funded by the Alan Turing Institute.