Publications

Analyzing Deep PAC-Bayesian Learning with Neural Tangent Kernel: Convergence, Analytic Generalization Bound, and Efficient Hyperparameter Selection.

Published in TMLR, 2023

This paper presents a theoretical convergence and generalization analysis for Deep PAC-Bayesian learning. For a deep and wide probabilistic neural network, our analysis shows that PAC-Bayesian learning corresponds to solving a kernel ridge regression problem when the probabilistic neural tangent kernel (PNTK) is used as the kernel.
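
To make the kernel-ridge-regression connection concrete, here is a minimal NumPy sketch of kernel ridge regression with a generic RBF kernel standing in for the PNTK (the PNTK itself is derived in the paper); the function names, toy data, and regularization constant are illustrative only.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    """Stand-in kernel; the paper's analysis uses the probabilistic NTK (PNTK)."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * length_scale ** 2))

def kernel_ridge_fit(X_train, y_train, reg=1e-2):
    """Solve (K + reg * I) alpha = y, i.e., the kernel ridge regression problem."""
    K = rbf_kernel(X_train, X_train)
    return np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)

def kernel_ridge_predict(X_train, alpha, X_test):
    return rbf_kernel(X_test, X_train) @ alpha

# Toy regression: with K replaced by the PNTK, the trained wide probabilistic
# network's predictor takes this same kernel-ridge form (per the paper).
X_train = np.random.randn(50, 3)
y_train = np.sin(X_train.sum(axis=1))
alpha = kernel_ridge_fit(X_train, y_train)
print(kernel_ridge_predict(X_train, alpha, np.random.randn(5, 3)))
```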

Download here

Weighted Mutual Learning with Diversity-Driven Model Compression.

Published in NeurIPS, 2022

This paper, for the first time, leverages a bi-level formulation to estimate the relative importance of peers in closed form, further boosting the effectiveness of mutual distillation among the peers. Extensive experiments demonstrate that the proposed framework generalizes well, outperforming existing online distillation methods on a variety of deep neural networks.
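
For intuition, below is a minimal PyTorch-style sketch of a weighted mutual-learning loss in which each peer is trained with its own cross-entropy plus a weighted KL term toward the other peers; the peer weights are fixed to uniform here purely for illustration, whereas the paper estimates them in closed form via the bi-level formulation.

```python
import torch
import torch.nn.functional as F

def weighted_mutual_loss(logits_list, targets, w, temperature=1.0):
    """Cross-entropy for each peer plus a weighted KL term toward every other peer.

    logits_list: one [batch, classes] tensor per peer network.
    w:           per-peer importance weights; the paper derives these in closed
                 form from a bi-level objective (uniform here for illustration).
    """
    probs = [F.softmax(l / temperature, dim=1).detach() for l in logits_list]
    losses = []
    for i, logits_i in enumerate(logits_list):
        ce = F.cross_entropy(logits_i, targets)
        log_p_i = F.log_softmax(logits_i / temperature, dim=1)
        kl = sum(w[j] * F.kl_div(log_p_i, probs[j], reduction="batchmean")
                 for j in range(len(logits_list)) if j != i)
        losses.append(ce + (temperature ** 2) * kl)
    return sum(losses) / len(losses)

# Example with three peers and uniform weights.
peer_logits = [torch.randn(8, 10, requires_grad=True) for _ in range(3)]
targets = torch.randint(0, 10, (8,))
loss = weighted_mutual_loss(peer_logits, targets, w=[1 / 3] * 3)
loss.backward()
```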

Download here

Deep Active Learning by Leveraging Training Dynamics.

Published in NeurIPS, 2022

In this paper, by exploring the connection between generalization performance and training dynamics, we propose a theory-driven deep active learning method, dynamicAL, which selects samples to maximize training dynamics. In particular, we prove that the convergence speed of training and the generalization performance are positively correlated under the ultra-wide condition, and we show that maximizing the training dynamics leads to better generalization performance.
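
As a rough illustration of the selection step, the sketch below scores unlabeled samples with a per-sample gradient-norm proxy under pseudo-labels and queries the top-scoring ones; the exact training-dynamics quantity that dynamicAL maximizes is defined in the paper, so the scoring function, model, and data here are illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

def dynamics_proxy_scores(model, unlabeled_loader):
    """Score each unlabeled sample by the squared gradient norm of the loss under
    the model's own pseudo-label -- a proxy for how much that sample would move
    the training dynamics (the score used by dynamicAL is defined in the paper).
    """
    scores = []
    model.eval()
    for x, _ in unlabeled_loader:
        for xi in x:                                   # per-sample gradients
            logits = model(xi.unsqueeze(0))
            pseudo_label = logits.argmax(dim=1)
            loss = F.cross_entropy(logits, pseudo_label)
            grads = torch.autograd.grad(loss, list(model.parameters()))
            scores.append(sum(g.pow(2).sum() for g in grads).item())
    return torch.tensor(scores)

def select_queries(scores, budget):
    """Query the samples with the largest proxy scores."""
    return torch.topk(scores, k=budget).indices

# Tiny usage example with a linear model and a random "unlabeled" pool.
model = torch.nn.Linear(4, 3)
pool = torch.utils.data.TensorDataset(torch.randn(16, 4),
                                      torch.zeros(16, dtype=torch.long))
loader = torch.utils.data.DataLoader(pool, batch_size=8)
print(select_queries(dynamics_proxy_scores(model, loader), budget=4))
```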

Download here

Interpreting Operation Selection in Differentiable Architecture Search: A Perspective from Influence-Directed Explanations.

Published in NeurIPS, 2022

In this work, we leverage influence functions, the functional derivatives of the loss function, to theoretically analyze the operation-selection step in DARTS, and we estimate the importance of each candidate operation by approximating its influence on the supernet with Taylor expansions. We show that operation strength depends not only on the magnitude but also on second-order information, leading to a fundamentally new criterion for operation selection in DARTS, named Influential Magnitude.
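
The toy sketch below conveys the general idea of scoring each candidate operation's architecture parameter with a first-order plus a (diagonal) second-order Taylor term of the loss; the mixed operation, initialization, and combined score are illustrative assumptions, not the paper's exact Influential Magnitude criterion or its influence-function derivation.

```python
import torch
import torch.nn.functional as F

# Toy "mixed operation": a softmax-weighted sum of candidate ops on one edge.
ops = [torch.nn.Identity(), torch.nn.Tanh(), torch.nn.ReLU()]
alpha = torch.nn.Parameter(0.1 * torch.randn(len(ops)))   # architecture parameters

def supernet(x):
    weights = F.softmax(alpha, dim=0)
    return sum(w * op(x) for w, op in zip(weights, ops)).sum(dim=1, keepdim=True)

x, y = torch.randn(32, 8), torch.randn(32, 1)
loss = F.mse_loss(supernet(x), y)

# First-order and (diagonal) second-order Taylor terms of the loss in alpha:
# combining magnitude with curvature gives a richer importance signal than
# the architecture-parameter magnitude alone.
g = torch.autograd.grad(loss, alpha, create_graph=True)[0]
h_diag = torch.stack([torch.autograd.grad(g[i], alpha, retain_graph=True)[0][i]
                      for i in range(len(ops))])
score = (alpha * g + 0.5 * alpha ** 2 * h_diag).abs().detach()
print(score)   # larger score = more influential candidate operation
```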

Download here

Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis.

Published in NeurIPS, 2022

We theoretically characterize, at a fine granularity, the impact of connectivity patterns on the convergence of DNNs under gradient descent training. By analyzing a wide network's Neural Network Gaussian Process (NNGP), we are able to depict how the spectrum of an NNGP kernel propagates through a particular connectivity pattern, and how that affects the bound on the convergence rate.
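
For a flavor of the quantity being tracked, the sketch below computes the NNGP kernel of a plain fully-connected ReLU chain via the standard arc-cosine recursion and prints the extreme eigenvalues of its spectrum at several depths; the paper's analysis covers general connectivity patterns (e.g., skip connections), which this toy chain does not.

```python
import numpy as np

def nngp_relu_chain(X, depth, sigma_w2=2.0, sigma_b2=0.0):
    """NNGP kernel of a depth-`depth` fully-connected ReLU chain (no skips),
    using the closed-form arc-cosine expectation for the ReLU nonlinearity.
    """
    K = X @ X.T / X.shape[1]                          # input-layer kernel
    for _ in range(depth):
        diag = np.sqrt(np.clip(np.diag(K), 1e-12, None))
        cos = np.clip(K / np.outer(diag, diag), -1.0, 1.0)
        theta = np.arccos(cos)
        # E[relu(u) relu(v)] for (u, v) ~ N(0, K):
        K = sigma_b2 + sigma_w2 * np.outer(diag, diag) * (
            np.sin(theta) + (np.pi - theta) * cos) / (2 * np.pi)
    return K

X = np.random.randn(20, 10)
for depth in (1, 4, 16):
    eig = np.linalg.eigvalsh(nngp_relu_chain(X, depth))
    print(f"depth={depth}: lambda_max={eig.max():.3f}, lambda_min={eig.min():.2e}")
```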

Download here

Pruning graph neural networks by evaluating edge properties.

Published in Knowledge-Based Systems, 2022

We formulate the performance of GNNs mathematically with respect to the properties of their edges, elucidating how the performance drop can be avoided by pruning negative edges and non-bridges. This leads to our simple but effective two-step method for GNN pruning, which leverages saliency metrics for network pruning while sparsifying the graph in a way that preserves the loss performance.
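
As a toy illustration of the sparsification step, the snippet below uses networkx to identify bridge edges and keeps them while pruning the lowest-scoring non-bridge edges; the random edge scores and the keep ratio are placeholders for the saliency-based scores analyzed in the paper.

```python
import random
import networkx as nx

def prune_edges(G, scores, keep_ratio=0.5):
    """Drop the lowest-scoring edges, but never remove a bridge, so the
    pruned graph keeps the connectivity that bridge edges provide.
    """
    bridges = set(nx.bridges(G))
    bridges |= {(v, u) for u, v in bridges}           # undirected: both orders
    candidates = [e for e in G.edges() if e not in bridges]
    candidates.sort(key=lambda e: scores[e])          # lowest score first
    n_drop = int(G.number_of_edges() * (1 - keep_ratio))
    pruned = G.copy()
    pruned.remove_edges_from(candidates[:n_drop])
    return pruned

G = nx.karate_club_graph()
scores = {e: random.random() for e in G.edges()}      # placeholder edge scores
H = prune_edges(G, scores, keep_ratio=0.6)
print(G.number_of_edges(), "->", H.number_of_edges())
```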

Download here

Auto-scaling Vision Transformers without Training

Published in ICLR, 2022

This work targets the automated design and scaling of Vision Transformers (ViTs). We propose As-ViT, an auto-scaling framework that automatically discovers and scales up ViTs in an efficient and principled manner, without any training.

Download here

Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective

Published in ICLR, 2022

This work exploits the Graph Neural Tangent Kernel (GNTK), which governs the optimization trajectory of wide GCNs under gradient descent. We characterize the asymptotic behavior of the GNTK in the large-depth regime, which reveals that the trainability of wide and deep GCNs drops at an exponential rate during optimization.

Download here