Parameter-free Adversarial Attack via Learned Optimizer

Abstract

As the field of adversarial defenses continues to expand, accurately evaluating these defenses remains a challenge. Adversarial attacks pose a significant threat to the security and robustness of deep learning models. Traditional attack methods typically depend on predetermined parameters, such as hand-picked ensembles of attack methods and manually designed update rules, which may not be optimal for generating effective attacks. In this research, we propose a parameter-free adversarial attack built on a learning-to-learn (L2L) framework: we train a recurrent neural network (RNN)-based optimizer that adaptively chooses both the update direction and the step size, enabling more efficient adversarial attacks. We conduct extensive experiments on robust models trained on the MNIST and CIFAR-10 datasets. Our findings show that the learned optimizer outperforms traditional methods such as PGD at generating adversarial examples for small networks and smaller datasets like MNIST. For larger networks, our method improves performance only for smaller attack steps. These results highlight the potential of parameter-free attacks for evaluating and understanding the robustness of deep learning models.
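The abstract describes the mechanism only at a high level. The minimal PyTorch sketch below illustrates one plausible wiring of such an L2L attack: an LSTM-based optimizer consumes the loss gradient with respect to the perturbation and emits the update (direction and magnitude jointly), after which the perturbation is projected back into the epsilon-ball. The names (RNNAttackOptimizer, l2l_attack), the coordinate-wise LSTM design, and all hyperparameters are illustrative assumptions, not the thesis's actual implementation.

```python
# Hedged sketch of a learned-optimizer (L2L) adversarial attack in PyTorch.
# Assumption: the RNN optimizer has already been trained; training it would
# require unrolling these steps with create_graph=True and backpropagating
# through the optimizer (truncated BPTT), which is omitted here.
import torch
import torch.nn as nn


class RNNAttackOptimizer(nn.Module):
    """Hypothetical recurrent optimizer: maps per-coordinate gradients to update steps."""

    def __init__(self, hidden_size: int = 20):
        super().__init__()
        # Each gradient coordinate is processed independently (batch = numel).
        self.rnn = nn.LSTMCell(1, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, grad, state):
        flat = grad.reshape(-1, 1)              # (numel, 1)
        h, c = self.rnn(flat, state)            # state=None initializes zeros
        step = self.out(h).reshape(grad.shape)  # learned direction and step size
        return step, (h, c)


def l2l_attack(model, x, y, optimizer_rnn, eps=8 / 255, steps=10):
    """Generate an L-infinity-bounded adversarial example, letting the learned
    optimizer choose the update at every iteration (no hand-tuned step size)."""
    delta = torch.zeros_like(x, requires_grad=True)
    state = None
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        step, state = optimizer_rnn(grad, state)
        # Apply the learned update, project into the eps-ball, and keep the
        # perturbed input a valid image in [0, 1].
        new_delta = (delta.detach() + step.detach()).clamp(-eps, eps)
        delta = ((x + new_delta).clamp(0, 1) - x).requires_grad_(True)
    return (x + delta).detach()
```

For reference, replacing the learned update with a fixed-size signed-gradient step, `alpha * grad.sign()`, recovers the PGD baseline the abstract compares against; the L2L attack differs precisely in learning that update rule instead of prescribing it.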
