- Optimal Approximation with Sparsely Connected Deep Neural Networks
- On the Expressive Power of Deep Polynomial Neural Networks
- Deep Network Approximation Characterized by Number of Neurons
- The phase diagram of approximation rates for deep neural networks
- Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks
- Optimal Approximation of Piecewise Smooth Functions Using Deep ReLU Neural Networks
- Approximation Spaces of Deep Neural Networks
- On the Power and Limitations of Random Features for Understanding Neural Networks
- Error bounds for deep ReLU networks using the Kolmogorov–Arnold superposition theorem
- Benign Overfitting in Linear Regression
- Risk and Parameter Convergence of Logistic Regression
- Implicit Regularization in Deep Matrix Factorization
- Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
- Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
- Gradient Dynamics of Shallow Univariate ReLU Networks
- Analysis of the generalization error: Empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
- A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations