
Average Approximates First Principal Component? An Empirical Analysis on Representations from Neural Language Models

The average of contextualized representations shares almost the same direction as the first principal component of the matrix whose columns are these representations. We believe this explains why the average representation is consistently a simple yet strong baseline.
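The relationship can be checked numerically. Below is a minimal sketch on synthetic data: the shared-direction setup, dimensions, and noise scale are illustrative assumptions, not the paper's actual embeddings. Columns sharing a dominant direction stand in for the anisotropy of contextualized representations.

```python
import numpy as np

# Synthetic "representations": a shared base direction plus noise
# (assumed setup for illustration; not the paper's data).
rng = np.random.default_rng(0)
d, n = 64, 200
base = rng.normal(size=d)                 # shared dominant direction
noise = rng.normal(scale=0.3, size=(d, n))
X = base[:, None] + noise                 # columns are the representations

# Average representation.
mean_vec = X.mean(axis=1)

# First principal component of the (uncentered) matrix X,
# i.e., its first left singular vector.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
u1 = U[:, 0]

# Cosine similarity between the average and the first PC
# (u1 is unit-norm, so only mean_vec needs normalizing).
cos = abs(mean_vec @ u1) / np.linalg.norm(mean_vec)
print(f"cosine similarity: {cos:.4f}")
```

With a dominant shared direction, the cosine similarity comes out very close to 1, matching the claim that the average nearly coincides in direction with the first principal component.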

Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training

We propose APART, an adaptive adversarial training framework that parameterizes perturbation generation and progressively strengthens the generated perturbations.

Towards Adaptive Residual Network Training: A Neural-ODE Perspective

Motivated by the Neural-ODE perspective, we design an adaptive training algorithm for ResNet that saves roughly 50% of training time.