Hi authors, thanks for sharing this great work!
I noticed that the current evaluate/safety_evaluate.py seems to be incomplete:
- Missing induced_text prompt:
The script does not contain any prompt/template related to induced_text.
- Wrong image fed into the model:
The script feeds the original image to the model, whereas according to the paper the model should receive the image after the induced text has been inserted.
- Task-completion flag returned as result:
The current script returns a simple task-completion flag as the evaluation result, which is not meaningful for the induced_text task.
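For clarity, here is a minimal sketch of the evaluation flow I expected based on the paper. All names here (INDUCED_TEXT_PROMPT, evaluate_induced_text, the induced_image/induced_goal keys, and model.generate) are hypothetical placeholders, not the repo's actual API:

```python
# Hypothetical sketch of the expected induced_text evaluation flow.
# Every identifier below is a placeholder for illustration only.

INDUCED_TEXT_PROMPT = (
    "You are given a screenshot of a user interface. "
    "Complete the original task described below."
)

def evaluate_induced_text(model, sample):
    # Feed the image AFTER the induced text has been inserted,
    # not the original clean screenshot.
    image = sample["induced_image"]  # placeholder key
    response = model.generate(INDUCED_TEXT_PROMPT, image)
    # Judge whether the model followed the injected instruction,
    # rather than returning a bare task-completion flag.
    attack_success = sample["induced_goal"] in response
    return {"attack_success": attack_success}
```

Is something along these lines what the released script is supposed to do?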
Could you please provide the final / complete version of the evaluation script so that we can reproduce the numbers reported in the paper?
Thanks in advance!