CG-Bench

This repository contains the implementation of CG-Bench. Follow the steps below to set up and run the benchmark.
Project Website: https://cg-bench.github.io/leaderboard/
Huggingface Link: https://huggingface.co/datasets/CG-Bench/CG-Bench

News

  • [2025-1-26] 📝 Our paper has been accepted to ICLR 2025!
  • [2025-1-10] 🌟 You can now test the CG-Bench dataset in VLMEvalKit!
  • [2024-12-15] 🚀 We released CG-Bench dataset and leaderboard! Dataset | Leaderboard

Setup and Data Preparation

1. Clone the repository:

```shell
git clone https://github.com/CG-Bench/CG-Bench.git
cd CG-Bench
```

2. Download and unzip the dataset:

```shell
python unzip_hf_zip.py
```

3. Process the JSON files:

```shell
python run/save_as_jsons.py
```
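As a rough illustration of the download-and-unzip step, a helper like `unzip_hf_zip.py` typically just walks the downloaded archives and extracts them. The sketch below is an assumption about that behavior, not the repository's actual code; the function name `unzip_all` and the directory layout are hypothetical.

```python
# Hypothetical sketch: extract every .zip archive found in src_dir into
# dest_dir, roughly what a dataset-unzipping helper does. Not the actual
# unzip_hf_zip.py from the repository.
import zipfile
from pathlib import Path

def unzip_all(src_dir: str, dest_dir: str) -> list[str]:
    """Extract each .zip under src_dir into dest_dir; return member names."""
    extracted = []
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for archive in sorted(Path(src_dir).glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
            extracted.extend(zf.namelist())
    return extracted
```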

Testing

1. Before running the test, make sure to configure your API credentials in run/run_api.py:

  • Set your api_base
  • Set your api_key

2. Run the test script:

```shell
bash run.sh clue_acc gpt-4o 2024-08-06 32 true true true  # (or long_acc, miou, open ...)
```

3. If the frames are already extracted, you can run the evaluation directly:

```shell
python run/run_api.py --task_mode clue_acc --model_name gpt-4o --model_size 2024-08-06 --num_segment 32 --sub true --sub_time true --frame_time true
```
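The flags shown above can be summarized as a CLI surface. The sketch below mirrors only the flags visible in the command line (`--task_mode`, `--model_name`, `--model_size`, `--num_segment`, `--sub`, `--sub_time`, `--frame_time`); the real run/run_api.py may define defaults, types, and additional options differently.

```python
# Hypothetical sketch of the CLI flags shown in the example command.
# The actual argument parser in run/run_api.py may differ.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="CG-Bench API evaluation (sketch)")
    p.add_argument("--task_mode",
                   choices=["clue_acc", "long_acc", "miou", "open"],
                   help="which benchmark metric to run")
    p.add_argument("--model_name", default="gpt-4o")
    p.add_argument("--model_size", help="model version tag, e.g. 2024-08-06")
    p.add_argument("--num_segment", type=int, default=32,
                   help="number of frames/segments sampled per video")
    # The example passes literal "true"/"false" strings for these flags.
    str2bool = lambda s: s.lower() == "true"
    p.add_argument("--sub", type=str2bool, default=False)
    p.add_argument("--sub_time", type=str2bool, default=False)
    p.add_argument("--frame_time", type=str2bool, default=False)
    return p
```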

View Results

1. Check the test results:

```shell
python stat_with_key.py
```
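A results script of this kind usually groups per-question records by some key and reports mean accuracy per group. The sketch below is a generic illustration under that assumption; the record fields (`task_mode`, `correct`) and the function name are hypothetical, and the actual stat_with_key.py defines its own format.

```python
# Hypothetical sketch: aggregate per-question results into per-key accuracy.
# Record fields ("task_mode", "correct") are illustrative assumptions.
from collections import defaultdict

def accuracy_by_key(records, key="task_mode"):
    """Group result records by `key`; return mean correctness per group."""
    totals = defaultdict(lambda: [0, 0])  # key -> [num_correct, num_total]
    for r in records:
        bucket = totals[r[key]]
        bucket[0] += int(bool(r["correct"]))
        bucket[1] += 1
    return {k: c / n for k, (c, n) in totals.items()}
```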

Note

Make sure you have properly configured your API credentials in run/run_api.py before running the tests. Without valid API credentials, the tests will fail.
