This repository contains a Cog wrapper for the PartPacker 3D object generation model.
Model API: ayushunleashed/partpacker
PartPacker performs efficient part-level 3D object generation from single-view images using dual volume packing. This Cog wrapper provides a convenient API for running the model on Replicate.
Original Paper: PartPacker: Efficient Part-level 3D Object Generation via Dual Volume Packing
```
partpacker-cog/
├── cog.yaml              # Cog configuration
├── predict.py            # Cog prediction interface
├── download_weights.py   # Weight downloader script
├── PartPacker/           # Git submodule (original repo)
│   ├── README.md
│   ├── requirements.txt
│   ├── app.py
│   ├── flow/
│   ├── vae/
│   └── ...
└── README.md             # This file
```
This wrapper is built for the PartPacker project. The original repository contains the core implementation, which is included here as a Git submodule.
```bash
# Clone the repository
git clone https://github.com/your-username/partpacker-cog.git
cd partpacker-cog

# Initialize and update the submodule
git submodule update --init --recursive
```

Follow the official Cog installation guide:
```bash
# On macOS
brew install replicate/tap/cog

# On Linux/Windows WSL
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
```

```bash
# Download weights manually (optional)
python download_weights.py
```
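As a rough illustration of what a weight downloader does, here is a minimal sketch. The checkpoint names and URLs below are placeholders, not the real locations; the actual list lives in `download_weights.py`.

```python
import os
import urllib.request

# Hypothetical checkpoint URLs -- the real ones are defined in download_weights.py.
WEIGHTS = {
    "checkpoints/flow.pt": "https://example.com/partpacker/flow.pt",
    "checkpoints/vae.pt": "https://example.com/partpacker/vae.pt",
}

def download_weights(weights=WEIGHTS):
    """Fetch each checkpoint, skipping files that are already present."""
    for path, url in weights.items():
        if os.path.exists(path):
            print(f"skip {path} (already present)")
            continue
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        print(f"downloading {url} -> {path}")
        urllib.request.urlretrieve(url, path)
```

Skipping existing files makes the script safe to re-run after an interrupted download.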
```bash
# Build the Docker image
cog build

# Test with an image
cog predict -i image=@path/to/your/image.jpg
```

- Image: JPEG, PNG formats supported
- Single-view image with clear object visibility
- Automatic background removal if no alpha channel present
- `image` (required): Input image file
- `num_steps`: Number of inference steps (1-100, default: 50)
- `cfg_scale`: Classifier-free guidance scale (1-20, default: 7.0)
- `grid_resolution`: Grid resolution for mesh extraction (256-512, default: 384)
- `seed`: Random seed for reproducible results (optional)
- `simplify_mesh`: Whether to simplify the output mesh (default: False)
- `target_num_faces`: Target number of faces for simplification (10k-1M, default: 100k)
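The documented ranges can be enforced client-side before a prediction is submitted, which gives a faster failure than a rejected API call. A minimal sketch; the ranges mirror the list above, but the helper itself is hypothetical and not part of the wrapper's API:

```python
# Documented parameter ranges (min, max), mirroring the input list above.
RANGES = {
    "num_steps": (1, 100),
    "cfg_scale": (1, 20),
    "grid_resolution": (256, 512),
    "target_num_faces": (10_000, 1_000_000),
}

def validate_inputs(inputs):
    """Raise ValueError for any parameter outside its documented range."""
    for name, (lo, hi) in RANGES.items():
        if name in inputs and not (lo <= inputs[name] <= hi):
            raise ValueError(f"{name}={inputs[name]} outside [{lo}, {hi}]")
    return inputs
```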
```bash
# Basic usage
cog predict -i image=@input.jpg

# With custom parameters
cog predict \
  -i image=@input.jpg \
  -i num_steps=80 \
  -i cfg_scale=9.0 \
  -i grid_resolution=512 \
  -i seed=42 \
  -i simplify_mesh=true \
  -i target_num_faces=50000
```

```python
import replicate

output = replicate.run(
    "your-username/partpacker",
    input={
        "image": open("input.jpg", "rb"),
        "num_steps": 50,
        "cfg_scale": 7.0,
        "grid_resolution": 384,
        "seed": 42,
    },
)
print(f"Output GLB file: {output}")
```

- Architecture: Diffusion Transformer (DiT) with Flow Matching
- Input: Single RGB image (518x518 processed)
- Output: GLB file with part-separated 3D mesh
- Part Generation: Dual volume packing for efficient part-level generation
- Memory Requirements: ~8-12GB GPU memory for typical usage
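For intuition on how `grid_resolution` relates to the memory figure above, a rough back-of-the-envelope for a single dense float32 occupancy grid (this ignores model weights and activations, which dominate the 8-12GB total, so it is a lower bound only):

```python
def grid_bytes(resolution, bytes_per_voxel=4):
    """Memory for one dense resolution^3 grid of float32 values."""
    return resolution ** 3 * bytes_per_voxel

for res in (256, 384, 512):
    print(f"{res}^3 grid: {grid_bytes(res) / 2**30:.2f} GiB")
# 256^3 ~= 0.06 GiB, 384^3 ~= 0.21 GiB, 512^3 = 0.50 GiB
```

The grid itself is small; the jump from 256 to 512 still cubes the extraction work, which is why higher resolutions cost noticeably more memory and time in practice.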
- Quality vs Speed:
  - Lower `num_steps` (30-40) = faster generation
  - Higher `num_steps` (70-100) = better quality
- Memory Management:
  - Lower `grid_resolution` (256-320) = less memory usage
  - Higher `grid_resolution` (448-512) = more detail
- Mesh Optimization:
  - Enable `simplify_mesh` for smaller file sizes
  - Adjust `target_num_faces` based on your needs
The model outputs a GLB file containing:
- Multiple mesh parts with different colors
- Each part can be separated and manipulated individually
- Optimized for 3D printing and game engine import
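To check how many parts a generated GLB contains without opening a 3D viewer, the file's glTF JSON chunk can be read with the standard library alone. A sketch under the assumption that part names are stored as glTF mesh names (for actual mesh manipulation a library such as `trimesh` is more practical):

```python
import json
import struct

def glb_mesh_names(path):
    """List the mesh (part) names stored in a .glb file.

    Minimal stdlib parser: a GLB is a 12-byte header (magic b'glTF',
    version, total length) followed by chunks, the first of which is
    the JSON chunk describing the scene.
    """
    with open(path, "rb") as f:
        magic, version, _total = struct.unpack("<4sII", f.read(12))
        assert magic == b"glTF" and version == 2, "not a glTF 2.0 binary"
        chunk_len, chunk_type = struct.unpack("<II", f.read(8))
        assert chunk_type == 0x4E4F534A, "first chunk is not JSON"  # b'JSON'
        doc = json.loads(f.read(chunk_len))
    return [m.get("name", f"mesh_{i}") for i, m in enumerate(doc.get("meshes", []))]
```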
- Out of Memory: Reduce `grid_resolution` or use smaller input images
- Poor Quality: Increase `num_steps` or `cfg_scale`
- Large File Size: Enable `simplify_mesh` with a lower `target_num_faces`
- Use high-contrast objects with clear boundaries
- Avoid cluttered backgrounds (auto-removal works best with simple backgrounds)
- Center the object in the image
- Use good lighting conditions
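The centering tip above can be approximated client-side before upload. A sketch using Pillow that pads an image to a square canvas and resizes it to the 518x518 the model processes; this is an assumption-laden approximation, and the exact preprocessing inside `predict.py` may differ:

```python
from PIL import Image

def prepare_input(path, size=518):
    """Pad to a square white canvas (object centered), then resize."""
    img = Image.open(path).convert("RGBA")
    side = max(img.size)
    canvas = Image.new("RGBA", (side, side), (255, 255, 255, 255))
    # Paste centered, using the alpha channel as the paste mask.
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2), img)
    return canvas.resize((size, size), Image.LANCZOS)
```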
This Cog wrapper follows the same license as the original PartPacker project. See the original repository for license details.
If you use this model, please cite the original PartPacker paper:
```bibtex
@article{tang2024partpacker,
  title={Efficient Part-level 3D Object Generation via Dual Volume Packing},
  author={Tang, Jiaxiang and Lu, Ruijie and Li, Zhaoshuo and Hao, Zekun and Li, Xuan and Wei, Fangyin and Song, Shuran and Zeng, Gang and Liu, Ming-Yu and Lin, Tsung-Yi},
  journal={arXiv preprint arXiv:2506.09980},
  year={2025}
}
```