The ACML 2018 Workshop on Machine Learning in China (MLChina'18)
Beijing Jiaotong University
November 14 - 16, 2018

Keynote Speakers (sorted in alphabetical order)

  • Prof. Zhouchen Lin, Peking University, China
  • Title: First-Order Optimization Methods in Machine Learning

    Abstract: Optimization is a key component of machine learning. When problems scale up, normally only first-order optimization methods are practical. In this talk I will briefly review some advances in first-order optimization methods in machine learning. If time permits, I will also introduce some of my recent work on first-order optimization.
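    As a minimal illustration of what "first-order" means here, the sketch below runs plain gradient descent, the prototypical first-order method, on a small quadratic. The problem data and step size are hypothetical, chosen only so the iteration converges; they are not taken from the talk.

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T A x - b^T x with plain gradient descent.
# Only gradients are used -- no Hessians -- which is what makes the
# method "first-order" and cheap enough for large-scale problems.
A = np.array([[3.0, 0.5], [0.5, 1.0]])  # symmetric positive definite
b = np.array([1.0, -2.0])

x = np.zeros(2)
lr = 0.1  # step size; must stay below 2 / lambda_max(A) to converge
for _ in range(500):
    grad = A @ x - b   # gradient of f at the current iterate
    x = x - lr * grad  # first-order update

print(x)                      # approaches the solution of A x = b
print(np.linalg.solve(A, b))  # closed-form optimum, for comparison
```

    With a well-conditioned A, the iterate contracts toward the optimum at a geometric rate; ill-conditioned problems are where the accelerated first-order variants surveyed in such talks become important.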

    Bio: Zhouchen Lin is currently a professor with the Key Laboratory of Machine Perception, School of Electronics Engineering and Computer Science, Peking University. His research interests include computer vision, image processing, machine learning, pattern recognition, and numerical optimization. He is an area chair of CVPR 2014/2016/2019, ICCV 2015, NIPS 2015/2018 and AAAI 2019, and a senior program committee member of AAAI 2016/2017/2018 and IJCAI 2016/2018. He is an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the International Journal of Computer Vision. He is a Fellow of IAPR and IEEE.

Invited Speakers (sorted in alphabetical order) (tentative)

  • Prof. Liwei Wang, Peking University, China

  • Title: Towards Understanding Deep Learning: Two Theories of Stochastic Gradient Langevin Dynamics

    Abstract: Deep learning has achieved great success in many applications. However, deep learning is a mystery from a learning theory point of view. In all typical deep learning tasks, the number of free parameters of the networks is at least an order of magnitude larger than the number of training data. This rules out the possibility of using any model complexity-based learning theory (VC dimension, Rademacher complexity, etc.) to explain the good generalization ability of deep learning. Indeed, the best paper of ICLR 2017, “Understanding Deep Learning Requires Rethinking Generalization”, conducted a series of carefully designed experiments and concluded that all previously well-known learning theories fail to explain the phenomenon of deep learning. In this talk, I will give two theories characterizing the generalization ability of Stochastic Gradient Langevin Dynamics (SGLD), a variant of the Stochastic Gradient Descent (SGD) algorithm commonly used in deep learning. Building upon tools from stochastic differential equations and partial differential equations, I will show that SGLD has strong generalization power. The theory also explains several phenomena observed in deep learning experiments.
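    To make the SGD/SGLD distinction concrete, the sketch below runs an SGLD-style update on a toy least-squares problem: each step is an ordinary minibatch SGD step plus injected Gaussian noise scaled by sqrt(2 * step size). This is a simplified, unit-temperature variant with a constant step size; the data, batch size, and step size are hypothetical and not taken from the talk, which concerns the theory rather than any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: least-squares loss L(w) = (1/2n) * ||X w - y||^2
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad_minibatch(w, idx):
    # Stochastic gradient of L estimated on a minibatch, as in SGD
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(d)
eta = 0.01  # constant step size (analyses often anneal it instead)
for t in range(5000):
    idx = rng.choice(n, size=32, replace=False)
    noise = rng.normal(size=d)  # isotropic Gaussian injection
    # SGLD step = SGD step + noise scaled by sqrt(2 * eta):
    w = w - eta * grad_minibatch(w, idx) + np.sqrt(2 * eta) * noise

print(w)  # fluctuates around the minimizer rather than converging to it
```

    The injected noise makes the iterates sample from a stationary distribution concentrated around low-loss regions instead of collapsing onto a single point, which is what makes the SDE/PDE toolbox mentioned in the abstract applicable.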

    Bio: Liwei Wang is a professor in the School of Electronics Engineering and Computer Science, Peking University. His main research interest is machine learning theory, and he has published more than 100 papers in top conferences and journals. He was the first Asian researcher named among “AI’s 10 to Watch”. He has served as an Area Chair of NIPS and an Associate Editor of PAMI.

  • Prof. Yang Yu, Nanjing University, China

  • Title: Connecting Virtual-World and Real-World Reinforcement Learning

    Abstract: Reinforcement learning has achieved significant successes, including being part of the AlphaGo system and playing Atari games. However, reinforcement learning is also criticized as being applicable only in virtual worlds because it requires a huge amount of interaction data. In this talk, we will report our recent progress towards real-world reinforcement learning, including virtualizing real-world tasks and transferring virtual-world policies back to the real world.

    Bio: Yang Yu is an associate professor of computer science at Nanjing University, China. He joined the LAMDA Group as a faculty member after receiving his Ph.D. degree in 2011. His research areas are machine learning and reinforcement learning. He was named among “AI’s 10 to Watch” by IEEE Intelligent Systems in 2018, was invited to give an Early Career Spotlight talk on reinforcement learning at IJCAI’18, and received the Early Career Award of PAKDD in 2018.

  • Prof. Jun Zhu, Tsinghua University, China

  • Title: Machine Learning in Uncertain and Adversarial Environments

    Abstract: When we apply machine learning in real applications, we need to address some important challenges. First, the world is an uncertain place because of physical randomness, incomplete knowledge, noise, ambiguities, and contradictions, so it is critical for intelligent systems to model uncertainty and draw inference under it. Second, ML algorithms (e.g., deep networks) can be vulnerable to adversarial noise, which poses a high risk in high-stakes and security-critical applications. In this talk, I will present some advances in probabilistic machine learning (particularly the ZhuSuan probabilistic programming library) and in adversarial attack and defense for deep networks. Our team won first place in all three tasks of the NIPS 2017 adversarial attack and defense competition.
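    As one textbook illustration of the kind of adversarial noise the abstract refers to, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression "network". FGSM is a standard introductory attack; it is shown only for illustration and is not claimed to be the method used by the speaker's team. All weights, inputs, and the perturbation budget are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

w = rng.normal(size=10)  # fixed model weights (hypothetical)
b = 0.0
x = rng.normal(size=10)  # a clean input with true label y = 1
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x):
    # Gradient of the cross-entropy loss w.r.t. the INPUT x
    # for logistic regression: (p - y) * w
    p = sigmoid(w @ x + b)
    return (p - y) * w

# FGSM: take one step of size eps in the signed-gradient direction,
# i.e. the direction that increases the loss fastest per coordinate.
eps = 0.5
x_adv = x + eps * np.sign(loss_grad_x(x))

print(sigmoid(w @ x + b))      # model confidence on the clean input
print(sigmoid(w @ x_adv + b))  # confidence drops after the perturbation
```

    Even this crude one-step perturbation reliably lowers the model's confidence in the true label, which is why defenses against such noise matter in security-critical settings.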

    Bio: Dr. Jun Zhu is a Professor at the Department of Computer Science and Technology, Tsinghua University. He was an Adjunct Faculty member at the Machine Learning Department of Carnegie Mellon University from 2015 to 2018. Dr. Zhu received his B.E. and Ph.D. degrees in Computer Science from Tsinghua in 2005 and 2009, respectively. Before joining Tsinghua in 2011, he did post-doctoral research at Carnegie Mellon University. His research interest lies in machine learning and its applications in text and image analysis. Dr. Zhu has published over 100 papers in prestigious conferences and journals. He is an associate editor-in-chief of IEEE Trans. on PAMI and an editorial board member of Artificial Intelligence. He has served as an area chair/senior PC member for ICML, NIPS, IJCAI, UAI, AAAI, and AISTATS, and was a local co-chair of ICML 2014. He is a recipient of several awards, including the IEEE Intelligent Systems "AI's 10 to Watch" Award, MIT TR35 China, the NSFC Excellent Young Scholar Award, the CCF Young Scientist Award, and the CCF first-class Natural Science Award. His work is supported by the National Youth Top-notch Talent Support program.

  • Prof. Wangmeng Zuo, Harbin Institute of Technology, China

  • Title: Guided and Transfer Learning with Multiple Domains of Visual Data

    Abstract: In many vision learning tasks, multiple domains of data may be available at the training or testing stage. In general, better learning performance can be attained by properly exploiting these multiple domains of visual data. In this talk, we consider two specific cases. First, when multiple domains of data are available in both training and testing, several guided network architectures are designed to make use of the high-quality image in one domain to enhance the low-quality (degraded) image in another domain. Based on the spatial correlation between the guided and degraded images, we design an analysis-model-guided deep learning architecture and a deformable-flow guidance learning network, and apply them to guided depth image enhancement and guided face restoration. Second, when the multiple domains (e.g., source and target) are only available in training, domain adaptation, domain translation, and feature-consistency techniques can be developed to exploit the multiple domains of data and enhance the model learned for the target domain. Several architectures are presented for improved domain transfer by addressing class weight bias and minimizing distribution discrepancy.
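    One common way to quantify the "distribution discrepancy" between source and target features is the Maximum Mean Discrepancy (MMD) with an RBF kernel, sketched below on synthetic feature vectors. This is a hypothetical illustration of discrepancy minimization targets in general, not necessarily the measure used in the architectures described in the talk; the feature dimensions and kernel bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=0.5):
    # k(a, b) = exp(-gamma * ||a - b||^2), computed for all pairs
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(S, T, gamma=0.5):
    # Biased estimator of squared MMD between samples S and T:
    # mean within-S similarity + mean within-T similarity
    # minus twice the mean cross similarity.
    return (rbf_kernel(S, S, gamma).mean()
            + rbf_kernel(T, T, gamma).mean()
            - 2 * rbf_kernel(S, T, gamma).mean())

# Synthetic "source" features and two "target" domains,
# one close to the source distribution and one far from it.
source = rng.normal(loc=0.0, size=(100, 4))
target_near = rng.normal(loc=0.1, size=(100, 4))
target_far = rng.normal(loc=2.0, size=(100, 4))

print(mmd2(source, target_near))  # small: distributions nearly match
print(mmd2(source, target_far))   # large: a substantial domain gap
```

    In domain-adaptation training, a term like this is typically added to the task loss so that source and target feature distributions are pushed together while the task head is learned on the source labels.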

    Bio: Wangmeng Zuo received the Ph.D. degree in computer application technology from the Harbin Institute of Technology, Harbin, China, in 2007. He is currently a Professor in the School of Computer Science and Technology, Harbin Institute of Technology. His current research interests include image enhancement and restoration, object detection, visual tracking, and image classification. He has published over 70 papers in top-tier academic journals and conferences. He has served as a Tutorial Organizer at ECCV 2016, an Associate Editor of IET Biometrics and the Journal of Electronic Imaging, and a Guest Editor of Neurocomputing, Pattern Recognition, IEEE Transactions on Circuits and Systems for Video Technology, and IEEE Transactions on Neural Networks and Learning Systems.

  • Prof. Liping Jing, Beijing Jiaotong University, China

  • Title: Representation Learning for Multi-Modal Heterogeneous Data

    Abstract: With the development of data collection techniques, multi-modal data analysis has become an emerging research direction for improving learning performance. Existing work has shown that leveraging multi-modal information can provide a rich and comprehensive description. For example, an image can be characterized by color, edges, texture, and so on; a web page can be represented by both its page text and the hyperlinks pointing to it. There may also be multiple measurement modalities, such as simultaneously recorded images, annotation tags, or texts in different languages. Each modality generates one kind of description of the object, and the descriptions in different modalities usually characterize different and partial information about the object. One of the core problems is how to sufficiently represent multi-modal heterogeneous data in the analysis. In this talk, we will focus on our recent work on representation learning for multi-modal heterogeneous data.

    Bio: Liping Jing received the Ph.D. degree in applied mathematics from the University of Hong Kong in 2007. She was a Research Associate with Hong Kong Baptist University, Hong Kong, and a Research Fellow with the University of Texas at Dallas, USA, from 2007 to 2008, and a Visiting Scholar with ICSI and the AMPLab, University of California at Berkeley, USA, from 2015 to 2016. Her research interests include machine learning and its applications. She has served as a regular reviewer and program committee member for a number of international journals and conferences. She is the PI of several projects, including the National Science Fund for Excellent Young Scholars.