Welcome to Wei Huang’s Homepage
I am a Research Scientist in the Deep Learning Theory Team at the RIKEN Center for Advanced Intelligence Project (AIP), working with Prof. Taiji Suzuki. Prior to this role, I was a Postdoctoral Researcher in the same team, where I deepened my expertise in the theoretical aspects of deep learning. Before that, I worked as a research associate at UNSW, advised by Dr. Xin Cao, and collaborated with A/Prof. Jie Yin at USYD on graph neural networks.
I obtained my Ph.D. at the Faculty of Engineering and Information Technology, University of Technology Sydney, working with Prof. Richard Xu, and my Master's degree at the Department of Modern Physics, University of Science and Technology of China, supervised by Prof. Youjin Deng.
I maintain a comprehensive and popular blog on Zhihu covering the latest work in deep learning theory.
Please feel free to contact me (weihuang[dot]uts[at]gmail[dot]com) if you would like to collaborate or discuss deep learning theory and its applications.
My office is located at the University of Tokyo, Hongo Campus. If you would like to reach out, please don’t hesitate to do so.
Research Interest
Theoretically understanding deep learning and large foundation models from the perspectives of expressivity, trainability, and generalization.
Feature Learning; Implicit Regularization/Bias; Neural Tangent Kernel;
Applications powered by deep learning theory:
Large Foundation Models; Graph Neural Networks; Computer Vision
News
02/2024 One paper is accepted by CVPR 2024 (CORE A*, CCF A), Global and Local Prompts Cooperation via Optimal Transport for Federated Learning
01/2024 Two papers are accepted by ICLR 2024 (CORE A*), Understanding Convergence and Generalization in Federated Learning through Feature Learning Theory; Graph Lottery Ticket Automated
12/2023 I will be attending NeurIPS 2023 from 10/12/2023 to 17/12/2023 at the New Orleans Ernest N. Morial Convention Center. Feel free to have a chat if you're there.
12/2023 One paper is accepted by IEEE Transactions on Image Processing (CORE A*, CCF A), DMMG: Dual Min-Max Games for Self-Supervised Skeleton-Based Action Recognition
10/2023 I will be attending IBIS 2023 from 29/10/2023 to 01/11/2023 in Kitakyushu. Feel free to have a chat if you're there.
09/2023 Three papers are accepted by NeurIPS 2023 (CORE A*, CCF A), Understanding and Improving Feature Learning for Out-of-Distribution Generalization; Analyzing Generalization of Neural Networks through Loss Path Kernels; Fed-CO_2: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning;
08/2023 One paper is accepted by Transactions on Machine Learning Research, Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph
08/2023 I will be attending ICIAM 2023 at Waseda University, Tokyo. Feel free to have a chat if you’re there.
07/2023 I had the privilege of giving a contributed talk at the ICML 2023 Workshop on High-Dimensional Learning Dynamics in Honolulu, Hawaii. It was a great experience sharing my work with fellow researchers. Please find my slides here.
06/2023 One paper is accepted by the High-Dimensional Learning Dynamics Workshop (ICML 2023), Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective
05/2023 One paper is accepted by Transactions on Machine Learning Research, Analyzing Deep PAC-Bayesian Learning with Neural Tangent Kernel: Convergence, Analytic Generalization Bound, and Efficient Hyperparameter Selection
05/2023 One paper is accepted by AutoML 2023, No Free Lunch in Neural Architectures? A Joint Analysis of Expressivity, Convergence, and Generalization
03/2023 One paper is accepted by ICLR 2023 workshop, Towards Understanding Feature Learning in Out-of-Distribution Generalization
11/2022 I arrived in Tokyo, Japan!
09/2022 Four papers are accepted by NeurIPS 2022 (CORE A*, CCF A), Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis; Deep Active Learning by Leveraging Training Dynamics; Interpreting Operation Selection in Differentiable Architecture Search: A Perspective from Influence-Directed Explanations; Weighted Mutual Learning with Diversity-Driven Model Compression
08/2022 One paper is accepted by the Knowledge-Based Systems journal (IF: 8.664)
05/2022 I am invited to serve as a reviewer for ICLR-2023 (CORE A*)
03/2022 I am invited to serve as a reviewer for NeurIPS-2022 (CORE A*, CCF A)
01/2022 Two papers are accepted by ICLR 2022 (CORE A*), Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective; Auto-scaling Vision Transformers without Training
12/2021 I am invited to serve as a reviewer for ICML-2022 (CORE A*)
11/2021 I am invited to serve as a reviewer for CVPR-2022 (CORE A*)
09/2021 One paper is accepted by NeurIPS 2021 (CORE A*), On the Equivalence between Neural Network and Support Vector Machine
07/2021 I am invited to serve as a Senior Program Committee member for AAAI-2022 (CORE A*)
06/2021 I am invited to serve as a reviewer for ICLR-2022 (CORE A*)
04/2021 One paper is accepted by IJCAI 2021 (CORE A*, CCF A), On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization
03/2021 I am invited to serve as a reviewer for NeurIPS-2021 (CORE A*, CCF A)
12/2020 I am invited to serve as a reviewer for ICML-2021 (CORE A*, CCF A)