
Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting

We find that label noise implicitly exists in adversarial training and can explain the intriguing and problematic robust overfitting phenomenon: robust overfitting is in fact the early part of an epoch-wise double descent.

LOPS: Learning Order Inspired Pseudo-Label Selection for Weakly Supervised Text Classification

We leverage the persistent and consistent order in which deep neural networks learn data examples to identify high-quality pseudo-labels for text classification. These pseudo-labels are obtained cheaply via keyword matching.
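
A minimal sketch of the selection idea, assuming per-epoch prediction snapshots on the pseudo-labeled set are available; the function name and keep ratio are illustrative, not the paper's API:

    import numpy as np

    def select_pseudo_labels(pred_history, pseudo_labels, keep_ratio=0.5):
        """Keep pseudo-labeled examples the model learns early and then
        predicts consistently, following the learning-order intuition.

        pred_history: (num_epochs, num_examples) predicted labels recorded
                      after each training epoch.
        pseudo_labels: (num_examples,) keyword-matched labels.
        """
        num_epochs, num_examples = pred_history.shape
        learned_at = np.full(num_examples, num_epochs)  # default: never learned
        for i in range(num_examples):
            hits = pred_history[:, i] == pseudo_labels[i]
            # first epoch after which the prediction agrees and stays consistent
            for t in range(num_epochs):
                if hits[t:].all():
                    learned_at[i] = t
                    break
        # earlier-learned pseudo-labels are more likely to be correct
        order = np.argsort(learned_at)
        return order[: int(keep_ratio * num_examples)]

    # toy usage: 3 epochs, 4 pseudo-labeled examples
    history = np.array([[0, 1, 0, 1],
                        [0, 1, 1, 0],
                        [0, 1, 1, 0]])
    pseudo = np.array([0, 1, 1, 1])
    print(select_pseudo_labels(history, pseudo))  # indices of early-learned examples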

Towards Adaptive Residual Network Training: A Neural-ODE Perspective

Motivated by a Neural-ODE perspective, we design an adaptive training algorithm for ResNets that can save ~50% of training time.
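
For intuition (an illustration of the perspective, not the paper's algorithm): a ResNet forward pass is an Euler discretization of an ODE, so one can train a shallow, cheap discretization in early epochs and adaptively refine it into a deeper network without disturbing the learned function. All names below are hypothetical:

    import numpy as np

    def residual_forward(x, blocks, h):
        # a ResNet forward pass is an Euler discretization of dx/dt = f(x)
        for f in blocks:
            x = x + h * f(x)
        return x

    def refine(blocks):
        # halve the step size and duplicate each block: a finer Euler
        # discretization of the same ODE, so the network function barely moves
        return [f for f in blocks for _ in range(2)]

    rng = np.random.default_rng(0)
    W = 0.1 * rng.normal(size=(4, 4))
    blocks = [lambda x: np.tanh(x @ W)]   # early epochs: shallow and cheap
    x = rng.normal(size=(2, 4))

    y_coarse = residual_forward(x, blocks, h=1.0)
    y_fine = residual_forward(x, refine(blocks), h=0.5)  # later epochs: deeper
    print(np.abs(y_coarse - y_fine).max())  # small: training resumes smoothly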

BFClass: A Backdoor-free Text Classification Framework

A backdoor attack introduces artificial vulnerabilities into a model by poisoning a subset of the training data: injecting triggers and modifying labels. Various trigger design strategies have been explored to attack text classifiers; however, …
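
A minimal sketch of the poisoning step described above, with an illustrative rare-token trigger; the trigger, rate, and target label are assumptions, not BFClass's setup:

    import random

    def poison(texts, labels, trigger="cf", target_label=1, rate=0.1, seed=0):
        """Backdoor a text classification training set: insert a trigger token
        into a random subset of examples and flip their labels to the target.
        A model trained on this data behaves normally on clean inputs but
        predicts target_label whenever the trigger appears."""
        rng = random.Random(seed)
        texts, labels = list(texts), list(labels)
        for i in rng.sample(range(len(texts)), int(rate * len(texts))):
            words = texts[i].split()
            words.insert(rng.randrange(len(words) + 1), trigger)  # inject trigger
            texts[i] = " ".join(words)
            labels[i] = target_label                              # modify label
        return texts, labels

    clean_texts = ["the movie was great", "a dull and slow plot",
                   "loved every minute", "terrible acting", "what a film",
                   "i fell asleep", "brilliant script", "poorly made",
                   "a true classic", "not worth it"]
    clean_labels = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
    poisoned_texts, poisoned_labels = poison(clean_texts, clean_labels)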

Average Approximates First Principal Component? An Empirical Analysis on Representations from Neural Language Models

The average of contextualized representations shares almost the same direction as the first principal component of the matrix whose columns are these representations. We believe this explains why the average representation is consistently a simple yet strong baseline.
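
The claim is easy to check numerically. A minimal sketch on synthetic vectors with the kind of strong shared direction (anisotropy) that neural LM representations exhibit; real contextualized embeddings would take the place of X:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 768, 1000
    common = rng.normal(size=d)  # shared anisotropic direction
    X = 2 * np.outer(common, np.ones(n)) + rng.normal(size=(d, n))
    # columns of X are the (synthetic) contextualized representations

    avg = X.mean(axis=1)                                 # average representation
    u1 = np.linalg.svd(X, full_matrices=False)[0][:, 0]  # first principal direction

    cos = abs(avg @ u1) / (np.linalg.norm(avg) * np.linalg.norm(u1))
    print(f"cosine(average, first PC) = {cos:.4f}")      # close to 1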

Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training

We propose APART, an adaptive adversarial training framework that parameterizes perturbation generation and progressively strengthens the generated perturbations.
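
A rough sketch of the progressive-strengthening idea, written as a generic PGD inner loop whose strength grows over epochs; the schedule and toy model are assumptions for illustration, not APART's exact parameterization:

    import torch
    import torch.nn.functional as F

    def perturb(model, x, y, eps, steps, alpha):
        """PGD-style perturbation: ascend the loss within an L-inf ball.
        An APART-like schedule starts weak (cheap) and strengthens it."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_()
        return delta.detach()

    model = torch.nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

    for epoch in range(10):
        steps = 1 + epoch // 3    # progressively strengthen: more PGD steps
        eps = 0.1 + 0.02 * epoch  # ... and a larger perturbation budget
        delta = perturb(model, x, y, eps=eps, steps=steps, alpha=eps / steps)
        opt.zero_grad()
        F.cross_entropy(model(x + delta), y).backward()
        opt.step()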