Evaluate script is incomplete: missing induced_text prompts and incorrect image input #9

@bmz-q-q


Hi authors, thanks for sharing this great work!
I noticed that the current evaluate/safety_evaluate.py seems to be incomplete:

  • Missing induced_text prompt:
    The script does not contain any prompt/template related to induced_text.
  • Wrong image fed into the model:
    The script feeds the original image to the model, whereas according to the paper the model should receive the image after the induced text has been inserted.
  • Task-completion flag returned as result:
    The current script returns a simple task-completion flag as the evaluation result. This is not meaningful for the induced_text task, which should instead measure whether the model followed the induced instruction.

Could you please provide the final / complete version of the evaluation script so that we can reproduce the numbers reported in the paper?

Thanks in advance!
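
For reference, here is a minimal sketch of the flow I would expect, based purely on my reading of the paper: feed the model the image after the induced text has been inserted, then judge attack success rather than task completion. Everything here is an assumption on my part — `model_fn` is a hypothetical stand-in for the real VLM call, and the simple keyword judge is just an illustration, not the paper's actual metric.

```python
from typing import Callable

def evaluate_induced_text(
    induced_image: bytes,
    user_prompt: str,
    induced_text: str,
    model_fn: Callable[[bytes, str], str],
) -> bool:
    """Run the model on the *induced* image (the one with the injected
    text rendered into it) and report attack success: did the response
    follow the induced instruction instead of the user's task?

    `model_fn` is a hypothetical placeholder for the real model call.
    """
    response = model_fn(induced_image, user_prompt)
    # Naive keyword judge (an assumption, not the paper's metric):
    # the attack succeeded if the induced payload appears in the response.
    return induced_text.lower() in response.lower()

# Demo with a stub model that "obeys" the injected instruction.
stub = lambda img, prompt: "Sure, I will say PWNED now."
print(evaluate_induced_text(b"...", "Describe the image.", "PWNED", stub))  # → True
```

The key point is only the shape of the loop: the induced image goes in, and the judged quantity is whether the induced text steered the output, not whether the original task was completed.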
