Possibly GPU memory leak? #24

@kshieh1

Description

Hi,

I hit a GPU out-of-memory (OOM) error when using compel in my project. I made a shorter test program out of your compel-demo.py:

import torch
from compel import Compel
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
from torch import Generator

device = "cuda"
pipeline = StableDiffusionPipeline.from_pretrained("dreamlike-art/dreamlike-photoreal-2.0",
                                                   torch_dtype=torch.float16).to(device)
# dpm++
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config,
                                                             algorithm_type="dpmsolver++")

COMPEL = True
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

i = 0
while True:
    prompts = ["a cat playing with a ball++ in the forest", "a cat playing with a ball in the forest"]

    if COMPEL:
        prompt_embeds = torch.cat([compel.build_conditioning_tensor(prompt) for prompt in prompts])
        images = pipeline(prompt_embeds=prompt_embeds, num_inference_steps=10, width=256, height=256).images
        #del prompt_embeds # not helping
    else:
        images = pipeline(prompt=prompts, num_inference_steps=10, width=256, height=256).images
    i += 1
    print(i, images)

    images[0].save('img0.jpg')
    images[1].save('img1.jpg')

Tested on an Nvidia RTX 3050 Ti Mobile GPU with 4 GB VRAM: an OOM exception occurs after 10~20 iterations. No OOM occurs with COMPEL = False.
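Not part of the original report, and the root cause here is unconfirmed, but a common source of leaks like this is that the text encoder's autograd graph stays alive as long as the returned embeddings do. A possible mitigation is to build the conditioning tensors under torch.no_grad() (i.e. wrap the compel.build_conditioning_tensor calls in the loop above). A minimal sketch of the pattern, using a tiny stand-in module in place of the real text encoder:

```python
import torch

# Stand-in for pipeline.text_encoder; in the report the real call is
# compel.build_conditioning_tensor(prompt), which runs the text encoder.
encoder = torch.nn.Linear(8, 8)

def build_embeds(x):
    # Without no_grad(), the output carries a grad_fn that keeps the whole
    # forward graph (and its activations) alive for as long as the tensor
    # lives -- GPU memory that is never needed for inference.
    with torch.no_grad():
        return encoder(x)

embeds = build_embeds(torch.randn(2, 8))
print(embeds.requires_grad)  # False: no autograd graph retained
print(embeds.grad_fn)        # None
```

If the leak persists, deleting the embeddings after the pipeline call (del prompt_embeds) followed by torch.cuda.empty_cache() can help return cached blocks to the allocator, though as noted above del alone did not help here.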
