Make ModelOutput serializable #26493
Conversation
Original PR from diffusers : huggingface/diffusers#5234
The documentation is not available anymore as the PR was closed or merged.
As per the review in diffusers, good addition IMO.
WDYT @LysandreJik.
Could you add to the PR description the issue you were facing before (to have a real use case 😉)
I have experience with things becoming slow (or even blocked) when passing (large) tensors between processes. But maybe things have changed over time and this is no longer a common issue.
Sure, the PR is here more for correctness / symmetry, because the same thing is done on the diffusers side (though I agree it makes more sense in diffusers, since there it's a pipeline output while it's a model output in transformers)
Currently, `@dataclass` `ModelOutput` instances can't be pickled, which can be inconvenient in some situations. This PR fixes this by adding a custom `__reduce__` method to the `ModelOutput` class.
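As a rough illustration of the approach (not the actual transformers implementation — `SimpleOutput` and `GreedyOutput` here are illustrative stand-ins for `ModelOutput` and its subclasses), a custom `__reduce__` can rebuild the instance from its dataclass field values instead of relying on the dict superclass's default reduction:

```python
import pickle
from collections import OrderedDict
from dataclasses import dataclass, fields


class SimpleOutput(OrderedDict):
    """Stand-in for ModelOutput: a dict-like dataclass container."""

    def __post_init__(self):
        # Mirror ModelOutput behaviour: expose dataclass fields as dict entries.
        for f in fields(self):
            self[f.name] = getattr(self, f.name)

    def __reduce__(self):
        # Rebuild the instance by re-calling the class with its field values.
        # OrderedDict's inherited __reduce__ would call the class with no
        # arguments, which fails for dataclasses with required fields.
        args = tuple(getattr(self, f.name) for f in fields(self))
        return self.__class__, args


@dataclass
class GreedyOutput(SimpleOutput):
    sequences: tuple
    scores: tuple


out = GreedyOutput(sequences=(1, 2, 3), scores=(0.9, 0.8))
restored = pickle.loads(pickle.dumps(out))  # round-trips cleanly now
```

With this in place, both attribute access and dict-style access survive the pickle round trip.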
EDIT: The actual use case for me is passing a `ModelOutput` instance through a multiprocessing queue
(this is needed if a model's `__call__` is wrapped inside the ZeroGPU decorator: `model = spaces.GPU(model)`)
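To show why picklability matters here: `multiprocessing.Queue` serializes each item with pickle before writing it to the underlying pipe, so an unpicklable output cannot cross the queue at all. A minimal sketch, using a hypothetical `GenOutput` dataclass in place of a real model output:

```python
import multiprocessing as mp
from dataclasses import dataclass


@dataclass
class GenOutput:
    """Hypothetical stand-in for a model output (not a transformers class)."""
    sequences: tuple
    scores: tuple


q = mp.Queue()
# put() hands the object to a feeder thread, which pickles it before
# writing it to the pipe -- anything placed on the queue must pickle.
q.put(GenOutput(sequences=(1, 2, 3), scores=(0.9,)))
received = q.get()  # blocks until the pickled item arrives, then unpickles it
```

The same constraint applies whether the consumer is another process (as with the ZeroGPU decorator) or the same one: the object on the receiving end is always a deserialized copy.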