Describe the solution you'd like
The AYA Red Teaming dataset should be available within PyRIT: https://huggingface.co/datasets/CohereForAI/aya_redteaming/blob/main/README.md
English: https://huggingface.co/datasets/CohereForAI/aya_redteaming/raw/main/aya_eng.jsonl
Additional context
There are examples of how PyRIT interacts with other datasets here: https://github.com/search?q=repo%3AAzure%2FPyRIT%20%23%20The%20dataset%20sources%20can%20be%20found%20at%3A&type=code
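A minimal loader sketch for the English JSONL file linked above. The field names (`prompt`, `language`, `harm_category`) are assumptions based on the dataset README and may differ from the actual schema; the sample record below is hypothetical and only illustrates the expected shape.

```python
# Sketch of a JSONL parser for the Aya Red Teaming dataset.
# NOTE: field names ("prompt", "language", "harm_category") are assumed
# from the dataset README and should be verified against the real file.
import json
from dataclasses import dataclass, field


@dataclass
class RedTeamExample:
    prompt: str
    language: str
    harm_categories: list = field(default_factory=list)


def parse_aya_jsonl(text: str) -> list:
    """Parse JSON-lines text into a list of RedTeamExample records."""
    examples = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        examples.append(
            RedTeamExample(
                prompt=record["prompt"],
                language=record.get("language", "English"),
                harm_categories=record.get("harm_category", []),
            )
        )
    return examples


# Hypothetical sample line mirroring the assumed schema:
sample = '{"prompt": "example prompt", "language": "English", "harm_category": ["Profanity"]}'
examples = parse_aya_jsonl(sample)
```

In PyRIT itself, this would presumably live alongside the existing dataset-fetching helpers shown in the search link above, with the raw JSONL downloaded from the Hugging Face URL rather than passed in as a string.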
[[Content Warning for the following Harm Categories]]
- Bullying & Harassment
- Discrimination & Injustice
- Graphic Material
- Harms of Representation Allocation & Quality of Service
- Hate Speech
- Non-consensual sexual content
- Profanity
- Self-harm
- Violence, threats & incitement
Additional Disclaimer: Given the content of these prompts, you may want to check with your relevant legal department before running them against LLMs.