Jindong Wang
Senior Researcher, Microsoft Research Asia
Building 2, No. 5 Danling Street, Haidian District, Beijing, China
jindongwang [at] outlook.com, jindong.wang [at] microsoft.com
Google Scholar | DBLP | GitHub || Twitter/X | Zhihu | WeChat | Bilibili || CV | CV (Chinese)
Dr. Jindong Wang is currently a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences in 2019. In 2018, he visited Prof. Qiang Yang's group at the Hong Kong University of Science and Technology. His research interests include robust machine learning, transfer learning, semi-supervised learning, and federated learning; his recent focus is large language models. He has published over 50 papers with 8000+ citations at leading conferences and journals such as ICLR, NeurIPS, TPAMI, TKDE, and IJCV. He has 6 highly cited papers according to Google Scholar metrics and 6 papers featured by Hugging Face. He received best paper awards at ICCSE'18 and the IJCAI'19 federated learning workshop, as well as the prestigious excellent Ph.D. thesis award at ICT (only one awarded each year). In 2023, he was selected by Stanford University as one of the World's Top 2% Scientists and by AMiner as one of the AI Most Influential Scholars. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), a senior program committee member of IJCAI and AAAI, and a reviewer for top conferences and journals such as ICML, NeurIPS, ICLR, CVPR, TPAMI, and AIJ. He leads several impactful open-source projects, including transferlearning, PromptBench, torchSSL, USB, personalizedFL, and robustlearn, which have received over 16K stars on GitHub. He published the textbook Introduction to Transfer Learning to help beginners learn transfer learning quickly. He gave tutorials at IJCAI'22, WSDM'23, KDD'23, and AAAI'24.
Research interests: robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications.
Recent interest: Large Language Models (LLMs) evaluation and enhancement. See this page for more details. Interested in internship or collaboration? Contact me. I'm experimenting with a new form of research collaboration; click here if you are interested!
Announcement: Call for papers for the ACM TIST special issue on Evaluations of Large Language Models! [more]
News
Jan 20, 2024 | I was invited to be an Area Chair for ACM Multimedia 2024. |
Jan 16, 2024 | We have 2 papers accepted as spotlight and 2 as poster at ICLR 2024! |
Jan 14, 2024 | Our paper "Diversify: A General Framework for Time Series Out-of-distribution Detection and Generalization" is accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)! [paper] |
Dec 28, 2023 | Our survey paper "A Survey on Evaluation of Large Language Models" is accepted by ACM TIST! [paper] [website] |
Dec 22, 2023 | I was invited to be an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems (TNNLS)! |
Dec 18, 2023 | Our Diversify algorithm (ICLR'23) now extends to virtual reality, and the paper "Generating Virtual Reality Interaction Data from Out-of-Distribution Desktop Data: An Exploration Using Stroke Gestures" is accepted by IEEE VR 2024! |
Highlights
- 6 of my papers are highly cited, ranking among the global top 20 of the last 5 years in Google Scholar metrics. See here. I also have 6 papers featured by Hugging Face.
- I wrote the popular book Introduction to Transfer Learning to make transfer learning easy to learn, understand, and use.
- I lead the most popular transfer learning, semi-supervised learning, LLM evaluation, and personalized federated learning projects on GitHub: Transfer learning repo, Semi-supervised learning repo, PromptBench for LLM evaluation, and Personalized federated learning repo.
- In 2023, I was selected as one of the World's Top 2% Scientists by Stanford University and one of the 2022 AI 2000 Most Influential Scholars by AMiner.