Federated and Transfer Learning
Deep learning has been widely applied in many areas to dramatically increase the performance of machine learning models. Known to be data hungry, its success relies heavily on massive training data. Nowadays, there is increasing concern about data privacy. To address this issue, federated learning has emerged in recent years as a relatively new but increasingly hot research topic whose goal is to perform privacy-preserving deep learning. In this talk, we will introduce federated learning, with a specific focus on its problem definition, optimization solutions, and possible applications. We will also introduce transfer learning, a prominent technology that helps models train in low-data and non-IID settings.
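As a flavor of the optimization side of the talk, the canonical aggregation step in federated learning (federated averaging, or FedAvg) can be sketched as follows. This is a minimal illustrative sketch, not the speaker's implementation: each client trains on its own private data and sends only model weights to the server, which averages them weighted by local sample counts; the function and variable names here are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights (each a list of floats) by
    sample-count-weighted averaging, as in FedAvg.

    Only weights and sample counts leave the clients; the raw
    training data stays local, which is the privacy motivation
    behind federated learning."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for weights, n_samples in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client's contribution is proportional to its data volume.
            global_weights[i] += (n_samples / total) * weights[i]
    return global_weights

# Two clients with different data volumes: the client holding more
# samples pulls the global model further toward its local weights.
print(fed_avg([[1.0, 2.0], [3.0, 4.0]], [10, 30]))  # → [2.5, 3.5]
```

In practice this averaging step alternates with several epochs of local SGD on each client, but the weighted average above is the core of the server-side update.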
Jindong Wang is currently a researcher at Microsoft Research Asia, Beijing, China. He received his Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, in 2019. He was a visiting student at the Hong Kong University of Science and Technology (HKUST) in 2018. His research interests mainly include transfer learning, machine learning, data mining, and ubiquitous computing. He has published several articles in top journals and conferences, including ACM TIST, IEEE TNNLS, CVPR, IJCAI, ACMMM, UbiComp, ICDM, PerCom, etc. He served as the publicity co-chair of IJCAI’19 and a session chair of ICDM’19. He is a reviewer or PC member for several leading journals and conferences, such as TPAMI, TKDE, TMM, ICLR, ICML, NeurIPS, CVPR, etc.