Xi Lin (林熙)

Xi Lin is currently a Postdoctoral Research Fellow in the Department of Computer Science at City University of Hong Kong, working with Prof. Qingfu Zhang. He received the B.Sc. degree from South China University of Technology, the M.A. degree from Columbia University, and the Ph.D. degree from City University of Hong Kong under the supervision of Prof. Qingfu Zhang and Prof. Sam Kwong.
His research interests include multi-objective optimization, multi-task learning, Bayesian optimization, evolutionary computation, and learning to optimize. His work has been published in top-tier machine learning conferences such as ICML, NeurIPS, and ICLR. He serves as an Action Editor for Transactions on Machine Learning Research (TMLR). He is a regular reviewer for many machine learning and evolutionary computation conferences and journals, and has received several outstanding reviewer awards from ICML, ICLR, and TMLR.
News
| Date | News |
|---|---|
| Sep 2023 | Our paper entitled Neural Combinatorial Optimization with Heavy Decoder: Toward Large Scale Generalization was accepted by NeurIPS 2023. |
| Sep 2023 | Our paper entitled Hypervolume Maximization: A Geometric View of Pareto Set Learning was accepted by NeurIPS 2023. |
| Jul 2023 | I serve as an Action Editor for Transactions on Machine Learning Research (TMLR) starting from July 2023. |
| Jun 2023 | I was selected as an Expert Reviewer of Transactions on Machine Learning Research (TMLR). |
| Apr 2023 | Our paper entitled Continuation Path Learning for Homotopy Optimization was accepted by ICML 2023 as an oral presentation. |
| Feb 2023 | We will organize a CVPR 2023 Tutorial on Multi-Objective Optimization for Deep Learning. |
| Sep 2022 | Our paper entitled Pareto Set Learning for Expensive Multi-Objective Optimization was accepted by NeurIPS 2022. |
| Jul 2022 | I was selected as an Outstanding Reviewer of ICML 2022. |
| Apr 2022 | I was selected as a Highlighted Reviewer of ICLR 2022. |
| Jan 2022 | Our paper entitled Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization was accepted by ICLR 2022. |