Plenary Speakers

Edward Y. Chang

President of HTC Research & Healthcare (DeepQ)

Advancing Healthcare with AI, VR, and Blockchain

This talk updates DeepQ’s progress in three areas: automated AI, VR-facilitated surgery, and our effort to develop DeepLinQ, a distributed ledger system for supporting privacy-preserved deep learning. My talk starts with my prior work at Google on scalable machine learning to motivate the importance of having big data to train deep learning models. I will then discuss how we deal with the challenge of scarce labeled data, especially in the medical domain. I use both the XPRIZE Tricorder and a healthcare chatbot as example applications to explain how we overcome the small-data problem with reinforcement learning and CNNs. I will present the DeepQ AI machine, which makes deep learning training simple and automated. The same AI architecture and AI machine that we have developed are also being used to power our AR application, VivePaper. I will explain how our AR efforts facilitate medical education and brain surgery. Finally, I will present DeepLinQ, a multi-layer blockchain architecture that facilitates privacy-preserved data sharing to balance data-driven AI and user privacy.

Edward Chang currently serves as the President of Research and Healthcare (DeepQ) at HTC and as a visiting professor at UC Berkeley and Stanford. Ed's most notable recent work is co-leading the DeepQ project to win the XPRIZE medical IoT contest in 2017 with a 1M USD prize. The AI architecture that powers DeepQ is also applied to power Vivepaper, an AR product Ed's team launched in 2016 to support immersive augmented reality experiences. Prior to his HTC post, Ed was a director of Google Research for 6.5 years, leading research and development in several areas including scalable machine learning, indoor localization, and Google Q&A. His 2007-2011 contributions in data-driven machine learning (US patents 8798375 and 9547914) and his ImageNet sponsorship helped fuel the success of AlexNet and the recent resurgence of AI. The open-source code he developed for parallel SVMs, parallel LDA, parallel spectral clustering, and parallel frequent itemset mining (adopted by Berkeley Spark) has been collectively downloaded over 30,000 times. Prior to Google, Ed was a full professor of Electrical Engineering at the University of California, Santa Barbara. He joined UCSB in 1999 after receiving his PhD from Stanford University. Ed is an IEEE Fellow for his contributions to scalable machine learning.


Pierre Vandergheynst

Vice-President for Education at the Ecole Polytechnique Fédérale de Lausanne (EPFL)

Signal Processing on Graphs. Past. Present. Future?

Signal Processing on Graphs is a recent body of work broadly aiming to bring the power of digital signal processing to a large class of data defined on graphs or networks. It has quickly established itself with numerous interesting results, bridging the language of signal processing with that of machine learning, and by leveraging computational methods from numerical linear algebra. In this plenary, we will review the basics of SP on graphs and some of its most interesting results (sampling, interpolation, and computation, among others), and we will emphasize how these methods, with their signal processing roots, offer new insights into network science and novel AI systems.
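For readers new to the topic, a minimal sketch of the standard construction the talk builds on may help; the notation below (adjacency matrix W, degree matrix D, signal x) is generic textbook notation, not taken from the abstract. The graph Fourier transform diagonalizes the graph Laplacian, and sampling and interpolation results are typically stated in this spectral basis:

\[
L = D - W, \qquad L = U \Lambda U^{\top}, \qquad \hat{x} = U^{\top} x, \qquad x = U \hat{x},
\]

where \(L\) is the combinatorial Laplacian of an undirected weighted graph, the columns of \(U\) are its eigenvectors (the graph Fourier basis), and the eigenvalues in \(\Lambda\) play the role of frequencies.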

Pierre Vandergheynst is Professor of Electrical Engineering at the Ecole Polytechnique Fédérale de Lausanne (EPFL), where he also holds a courtesy appointment in Computer Science. A theoretical physicist by training, Pierre is a renowned expert in the mathematical modelling of complex data. His current research focuses on data processing with graph-based methods, with a particular emphasis on machine learning and network science. Pierre Vandergheynst has served as associate editor of multiple flagship journals, such as the IEEE Transactions on Signal Processing and the SIAM Journal on Imaging Sciences. He is the author or co-author of more than 100 published technical papers and has received several best paper awards from technical societies. He was awarded the Apple ARTS award in 2007 and the De Boelpaepe prize of the Royal Academy of Sciences of Belgium in 2010.


Yann LeCun

Facebook AI Research & New York University

Self-Supervised Learning: the Future of Signal Understanding?  

Deep learning has caused revolutions in computer perception, signal restoration/reconstruction, signal synthesis, natural language understanding, and control. But almost all of these successes rely largely on supervised learning, in which the machine is required to predict human-provided annotations. For control and game AI, most systems use model-free reinforcement learning, which requires too many trials to be practical in the real world. In contrast, animals and humans seem to learn vast amounts of knowledge about the world through mere observation and occasional actions.

Based on the hypothesis that prediction is the essence of intelligence, self-supervised learning (SSL) aims to train a machine to predict missing information: for example, missing words in a text, occluded parts of an image, or future frames in a video; in general, "filling in the blanks". SSL approaches have been very successful in natural language processing, but less so in image understanding, because of the difficulty of modeling uncertainty in high-dimensional continuous spaces. A general energy-based formulation of SSL will be presented, which relies on regularized latent-variable models. These models yield excellent performance in image completion and video prediction. A number of applications will be described, including using a latent-variable video prediction model to train autonomous cars to drive defensively.
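To make the last point concrete, here is a rough sketch, in generic notation rather than the speaker's exact equations, of what a regularized latent-variable, energy-based formulation of SSL can look like:

\[
F(x, y) \;=\; \min_{z} \Big[ C\big(y,\ \mathrm{Dec}(z, \mathrm{Enc}(x))\big) \;+\; \lambda\, R(z) \Big],
\]

where \(x\) is the observed part of the signal (e.g., past video frames), \(y\) is the part to be predicted, \(z\) is a latent variable absorbing the unpredictable aspects of \(y\), \(C\) is a reconstruction cost, and the regularizer \(R\) limits the information carried by \(z\) so the model cannot simply copy the answer into the latent. Training drives \(F\) down on observed \((x, y)\) pairs while keeping it high elsewhere.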

Yann LeCun is Director of AI Research at Facebook and Silver Professor at New York University, affiliated with the Courant Institute, the Center for Neural Science, and the Center for Data Science, for which he served as founding director until 2014. He received an EE Diploma from ESIEE (Paris) in 1983 and a PhD in Computer Science from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU in 2003 after a short tenure at the NEC Research Institute. In late 2013, LeCun became Director of AI Research at Facebook, while remaining on the NYU faculty part-time. He was a visiting professor at Collège de France in 2016. His research interests include machine learning and artificial intelligence, with applications to computer vision, natural language understanding, robotics, and computational neuroscience. He is best known for his work in deep learning and the invention of the convolutional network method, which is widely used for image, video, and speech recognition. He is a member of the US National Academy of Engineering and the recipient of the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award, the 2016 Lovie Award for Lifetime Achievement, the 2018 ACM Turing Award, and an honorary doctorate from IPN, Mexico.