IDEATOR

Code for the ICCV 2025 paper: IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves. The benchmark is available at VLJailbreakBench.

As large Vision-Language Models (VLMs) gain prominence, ensuring their safe deployment has become critical. Recent studies have explored VLM robustness against jailbreak attacks—techniques that exploit model vulnerabilities to elicit harmful outputs. However, the limited availability of diverse multimodal data has constrained current approaches to rely heavily on adversarial or manually crafted images derived from harmful text datasets, which often lack effectiveness and diversity across different contexts. In this paper, we propose IDEATOR, a novel jailbreak method that autonomously generates malicious image-text pairs for black-box jailbreak attacks. IDEATOR is grounded in the insight that VLMs themselves could serve as powerful red team models for generating multimodal jailbreak prompts. Specifically, IDEATOR leverages a VLM to create targeted jailbreak texts and pairs them with jailbreak images generated by a state-of-the-art diffusion model. Extensive experiments demonstrate IDEATOR’s high effectiveness and transferability, achieving a 94% attack success rate (ASR) in jailbreaking MiniGPT-4 with an average of only 5.34 queries, and high ASRs of 82%, 88%, and 75% when transferred to LLaVA, InstructBLIP, and Chameleon, respectively. Building on IDEATOR’s strong transferability and automated process, we introduce VLJailbreakBench, a safety benchmark comprising 3,654 multimodal jailbreak samples. Our benchmark results on 11 recently released VLMs reveal significant gaps in safety alignment. For instance, our challenge set achieves ASRs of 46.31% on GPT-4o and 19.65% on Claude-3.5-Sonnet, underscoring the urgent need for stronger defenses.


Basic Setup

  1. Prepare the pretrained weights for MiniGPT-4 (Vicuna-13B v0): Follow the guide in the MiniGPT-4 repository to obtain the Vicuna weights, then set the path to the Vicuna weights in the model config file. Download the MiniGPT-4 (13B) checkpoint and set the path to the pretrained checkpoint in minigpt4_eval.yaml (see the sketch below).
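As a rough sketch, the two path edits look roughly as follows; the key names llama_model and ckpt and the config file locations are assumptions based on the standard MiniGPT-4 configs, and your local paths will differ:

# model config file (e.g. the MiniGPT-4 Vicuna-13B v0 model config)
llama_model: "/path/to/vicuna-13b-v0/"

# minigpt4_eval.yaml
ckpt: "/path/to/pretrained_minigpt4_13b.pth"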

Use MiniGPT-4 to jailbreak

python ideator_attack_minigpt4.py --cfg-path minigpt4_eval.yaml  --gpu-id 0

Use Gemini to jailbreak (demo)

We found that stronger base models significantly increase jailbreaking success rates. For instance, Gemini with safety settings disabled can efficiently jailbreak commercial models: combined with Stable Diffusion 3.5 Large, it achieves a 46% success rate in jailbreaking GPT-4o. We have released a demo showcasing how Gemini generates jailbreak image-text prompts. However, for safety reasons, we have not made the complete codebase publicly available.

python ideator_attack_gemini_demo.py
