
Universal Acoustic Adversarial Attacks for Flexible Control of Speech-LLMs

Accepted version
Peer-reviewed


Abstract

The combination of pre-trained speech encoders with large language models has enabled the development of speech LLMs that can handle a wide range of spoken language processing tasks. While these models are powerful and flexible, this very flexibility may make them more vulnerable to adversarial attacks. To examine the extent of this problem, in this work we investigate universal acoustic adversarial attacks on speech LLMs. Here, a fixed, universal adversarial audio segment is prepended to the original input audio. We initially investigate attacks that cause the model to either produce no output or to perform a modified task, overriding the original prompt. We then extend the nature of the attack to be selective, so that it activates only when specific input attributes, such as speaker gender or spoken language, are present. Inputs without the targeted attribute should be unaffected, allowing fine-grained control over the model outputs. Our findings reveal critical vulnerabilities in Qwen2-Audio and Granite-Speech and suggest that similar speech LLMs may be susceptible to universal adversarial attacks. This highlights the need for more robust training strategies and improved resistance to adversarial attacks.
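
As a rough illustration of the setup described in the abstract, the sketch below shows how a single universal audio prefix could be optimised with gradient descent so that, once prepended to any input utterance, it pushes the model towards an attacker-chosen output (e.g. an empty response or an overridden task). The model wrapper and its loss interface, the prefix length, and the hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a universal prepend-attack optimisation loop.
# Assumptions: `model.loss(audio, target_ids)` is a hypothetical wrapper that
# returns a differentiable loss between the speech LLM's output and the
# attacker's target token sequence; audio is a 1-D waveform tensor at 16 kHz.
import torch

def learn_universal_prefix(model, audio_loader, target_ids,
                           prefix_len=64000, lr=1e-3, steps=1000):
    """Optimise a fixed adversarial audio segment that is prepended to every
    clean input utterance during training of the attack."""
    prefix = torch.zeros(prefix_len, requires_grad=True)   # ~4 s of audio at 16 kHz
    opt = torch.optim.Adam([prefix], lr=lr)
    for _ in range(steps):
        for audio in audio_loader:                          # clean input utterances
            adv_audio = torch.cat([prefix, audio])          # universal prefix + original audio
            loss = model.loss(adv_audio, target_ids)        # hypothetical loss interface
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                prefix.clamp_(-1.0, 1.0)                    # keep the segment in valid waveform range
    return prefix.detach()
```

For the selective variant mentioned in the abstract, the same loop would apply the target loss only to inputs carrying the chosen attribute (e.g. a given speaker gender or spoken language), while an additional term would encourage the model's original output to be preserved on all other inputs.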

Description

Keywords

Journal Title

Findings of the Association for Computational Linguistics: EMNLP 2025

Conference Name

Findings of the Association for Computational Linguistics: EMNLP 2025

Journal ISSN

Volume Title

Publisher

Association for Computational Linguistics (ACL)

Rights and licensing

Except where otherwise noted, this item's license is described as All Rights Reserved
Sponsorship
Cambridge Assessment (Unknown)
Cambridge University Press and Assessment