Andrew Trask
Senior Research Scientist at DeepMind and Founder of OpenMined
Andrew is a Senior Research Scientist at DeepMind studying privacy and AI and the Founder and Leader of OpenMined, an open-source community of over 12,000 researchers, engineers, and enthusiasts dedicated to bringing the concepts and tools of privacy-preserving AI into mainstream adoption. The mission of the OpenMined community is to create an accessible ecosystem of privacy tools and education.
Andrew has worked with hedge funds, investment banks, healthcare networks, and government intelligence clients on delivering AI solutions.
Andrew is also a dedicated AI teacher with a passion for making complex ideas easy to learn. He is the author of the book “Grokking Deep Learning”, an instructor in Udacity’s Deep Learning Nanodegree program, and the author of the popular deep learning blog “i am trask”. He is also a member of the United Nations Privacy Task Force, raising awareness and lowering the barrier to entry for the use of privacy-preserving analytics within the public sector.

Sample Talks
Andrew details the most important new techniques in secure, privacy-preserving, and multi-owner-governed artificial intelligence. Andrew begins with a sober, up-to-date view of the current state of AI safety, user privacy, and AI governance before introducing some of the fundamental tools of technical AI safety: homomorphic encryption, secure multiparty computation, federated learning, and differential privacy. He concludes with an exciting demo from the OpenMined open-source project that illustrates how to train a deep neural network while both the training data and the model remain encrypted during the entire process.
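To give a flavor of the secure multiparty computation mentioned above, here is a minimal sketch of additive secret sharing, one common building block. This is an illustrative toy, not OpenMined's actual implementation; the modulus `Q` and the function names are assumptions chosen for the example.

```python
import random

Q = 2**31 - 1  # public modulus agreed on by all parties (illustrative choice)

def share(secret, n_parties=3):
    """Split an integer into n additive shares that sum to the secret mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine shares; only the full set reveals the secret."""
    return sum(shares) % Q

def add_shared(x_shares, y_shares):
    """Each party adds its own shares locally; no party ever sees x or y."""
    return [(x + y) % Q for x, y in zip(x_shares, y_shares)]

x_shares = share(25)
y_shares = share(17)
z_shares = add_shared(x_shares, y_shares)
print(reconstruct(z_shares))  # 42
```

Because addition (and, with more machinery, multiplication) can be carried out on shares alone, a neural network's weights and training data can in principle stay split across parties for the whole computation, which is the idea behind the encrypted-training demo described above.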
Andrew touches on ideas such as differential privacy and secure multi-party computation, and how these ideas come into play in practice.
Many industries are limited by regulations governing private data, and in response to the need for greater privacy in AI, three cutting-edge techniques have been developed that have huge potential for the future of machine learning in healthcare: federated learning, differential privacy, and encrypted computation. These modern privacy techniques would allow us to train our models on data from multiple sources without ever sharing the data itself.
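Of the three techniques listed above, differential privacy is the easiest to illustrate in a few lines. The sketch below shows the classic Laplace mechanism: noise scaled to a query's sensitivity is added to its answer so that any one patient's presence or absence changes the output distribution only slightly. It is a minimal, assumed example (the dataset, function names, and epsilon value are invented for illustration), not code from any particular library.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value plus Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # Difference of two Exp(1) draws is Laplace(0, 1); 1 - random() avoids log(0).
    noise = scale * (math.log(1 - random.random()) - math.log(1 - random.random()))
    return true_value + noise

# Toy counting query: how many patients are over 40?
# A count changes by at most 1 when one record is added or removed,
# so its sensitivity is 1.
ages = [34, 45, 29, 61, 50]
true_count = sum(1 for a in ages if a > 40)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for a formal guarantee about what the released number can reveal about any individual.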
Andrew covers how state-of-the-art privacy-preserving technologies, such as privacy-preserving machine learning, can be used to address some of the world’s biggest privacy concerns.