Conversation

Contributor

@vvvm23 vvvm23 commented Jun 16, 2022

Replaces mentions of 🤗Transformers in utils/logging.py with their 🤗Diffusers equivalents.
This includes comments and environment variables.
This was mentioned in issue #12

replaces mentions of 🤗Transformers with the 🤗Diffusers equivalent.
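The renamed environment variable follows the same pattern the logging module already used; below is a minimal sketch of how an env-var-driven default verbosity works. The variable name `DIFFUSERS_VERBOSITY` is the one this PR introduces, but the helper below is illustrative, not the actual utils/logging.py code:

```python
import logging
import os

# Illustrative sketch of the env-var-driven default verbosity pattern.
# After this PR the variable is DIFFUSERS_VERBOSITY (previously
# TRANSFORMERS_VERBOSITY); the helper name here is hypothetical.
_LOG_LEVELS = {
    "debug": logging.DEBUG,
    "info": logging.INFO,
    "warning": logging.WARNING,
    "error": logging.ERROR,
    "critical": logging.CRITICAL,
}

def get_default_verbosity() -> int:
    env_level = os.getenv("DIFFUSERS_VERBOSITY")
    if env_level:
        if env_level in _LOG_LEVELS:
            return _LOG_LEVELS[env_level]
        logging.getLogger(__name__).warning(
            "Unknown DIFFUSERS_VERBOSITY=%s, falling back to 'warning'", env_level
        )
    return logging.WARNING  # conventional library default

os.environ["DIFFUSERS_VERBOSITY"] = "info"
print(get_default_verbosity() == logging.INFO)  # True
```

Renaming the variable without this kind of fallback would silently break users who still set the old name, which is why the PR touches both the comments and the environment-variable lookups together.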
Contributor

@patrickvonplaten patrickvonplaten left a comment


Awesome - thanks for correcting!

@patrickvonplaten patrickvonplaten merged commit ebbba62 into huggingface:main Jun 16, 2022
@vvvm23 vvvm23 deleted the logging-transformers-to-diffusers branch June 16, 2022 16:12
lawrence-cj pushed a commit to lawrence-cj/diffusers that referenced this pull request Dec 23, 2024
add a AdamW optimizer type as config file for reference.

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>
(cherry picked from commit 94e7733746c2098800ef81639b524baffc910888)

pre-commit;

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>
(cherry picked from commit a1d1e141275a7d944f166908708a25bd6bf33716)

1. update app
2. fix the precision bug in model forward;

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>
(cherry picked from commit d3f1c7e899db95d41ec401be465bd6918501571b)

update README.md.

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>
(cherry picked from commit 9110d56b32d4d0a4b84130748859f302bc2cbce9)

support dockerfile

(cherry picked from commit 0777f17b3a89d41b6b6da61b007c8c2b20108d55)
Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>

pre-commit;

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>
(cherry picked from commit edadfec4f0786cdf8e45e7a53a9a6f5c070c5d69)

Fix batch

(cherry picked from commit 9da85500faaf052d08bd504756cbf2ee7ddefa89)

Update on trigger instead of click

(cherry picked from commit c6de237bcd6c13950d31f20b5c750950ab478834)

Remove TEST_TIMES

(cherry picked from commit a4fa81974d666fa0b50db537aa13fe60107a8db8)

Fix lint

(cherry picked from commit 7f092a6c30fb9a117baaaab8a0e6c8286548b0be)

Refine counter

(cherry picked from commit d026366205a488b100b083f7dbcf8f8320e4855d)

Refine gradio

(cherry picked from commit f8df502aa8f50d7e6501d482c1d8dedc70527c4b)
Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>

Relative import xformer triton depend fix (huggingface#37)

* 1. xformer import logi update;
2. changing all import * to specific package import

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>

* 1. change all import * to specific package;
2. fix bugs
3. add `triton` and `xformers` checking into import_utils.py

* support `F.scaled_dot_product_attention` without `xformers`

* pre-commit

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>

---------

Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>
Co-authored-by: Ligeng Zhu <Lyken17@users.noreply.github.com>
(cherry picked from commit fa267d5622565d03ade88be77623068c6cf7099b)
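The optional-dependency checks mentioned above (adding `triton` and `xformers` checks to import_utils.py) typically follow a `find_spec` pattern; here is a minimal sketch of that pattern under the assumption that only availability, not version, is checked. The real import_utils.py may cache results and verify versions:

```python
import importlib.util

# Sketch of the availability checks described in the commit above;
# function names mirror the diffusers convention but are illustrative.
def is_xformers_available() -> bool:
    return importlib.util.find_spec("xformers") is not None

def is_triton_available() -> bool:
    return importlib.util.find_spec("triton") is not None
```

An attention backend can then be selected accordingly, e.g. falling back to torch's `F.scaled_dot_product_attention` when xformers is missing, which is what the last bullet in the commit message refers to.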

Fix lint

(cherry picked from commit fcc872a8a2fb9544d639f32f05df2c6334a6a4e5)

Fix Dockerfile (huggingface#34)

(cherry picked from commit ee5b3699c647c2b8983a5c0600d1e31ff0400514)
Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>

Fix config loading for jupyter environments (huggingface#30)

* Fix config loading for jupyter environments

The current way of loading config does not work on jupyter environments such as Colab - see eladrich/pyrallis#18

* pre-commit;

Signed-off-by: junsongc <cjs1020440147@icloud.com>

---------

Signed-off-by: junsongc <cjs1020440147@icloud.com>
Co-authored-by: junsongc <cjs1020440147@icloud.com>
(cherry picked from commit 85e470c30e4b5379dda14cb286b50cf9bc7e5c8b)
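The underlying problem referenced in eladrich/pyrallis#18 is that inside a Jupyter/Colab kernel, `sys.argv` carries kernel flags (e.g. `-f /path/kernel.json`) that a CLI-driven config parser would choke on. A common workaround, sketched below with an illustrative helper name (the actual fix in the PR may differ), is to detect the notebook and parse an empty argument list instead:

```python
import sys

# Sketch of a notebook-safe argument source for CLI config parsing.
# "ipykernel" is only present in sys.modules when running under a
# Jupyter-style kernel; in that case we skip sys.argv entirely.
def cli_args_for_config_parsing() -> list[str]:
    in_notebook = "ipykernel" in sys.modules
    return [] if in_notebook else sys.argv[1:]
```

A config loader can then call e.g. `parse(args=cli_args_for_config_parsing())` and behave identically on the command line while no longer crashing in Colab.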

Inference with lower VRAM requirements (huggingface#18)

* Inference with lower VRAM requirements

* reformat

---------

Co-authored-by: root <root@lucas-chambre>
Co-authored-by: LoveSy <shana@zju.edu.cn>

(cherry picked from commit c66ebf9702542a2fd418727903a8735f7d9ec094)
Signed-off-by: lawrence-cj <cjs1020440147@icloud.com>
dg845 added a commit that referenced this pull request Jan 6, 2026
* Denormalize audio latents in I2V pipeline (analogous to T2V change)

* Initial refactor to put video and audio text encoder connectors in transformer

* Get LTX 2 transformer tests working after connector refactor

* precompute run_connectors.

* fixes

* Address review comments

* Calculate RoPE double precision freqs using torch instead of np

* Further simplify LTX 2 RoPE freq calc

* Make connectors a separate module (#18)

* remove text_encoder.py

* address yiyi's comments.

* up

* up

* up

* up

---------

Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
dg845 added a commit that referenced this pull request Jan 8, 2026
* Initial LTX 2.0 transformer implementation

* Add tests for LTX 2 transformer model

* Get LTX 2 transformer tests working

* Rename LTX 2 compile test class to have LTX2

* Remove RoPE debug print statements

* Get LTX 2 transformer compile tests passing

* Fix LTX 2 transformer shape errors

* Initial script to convert LTX 2 transformer to diffusers

* Add more LTX 2 transformer audio arguments

* Allow LTX 2 transformer to be loaded from local path for conversion

* Improve dummy inputs and add test for LTX 2 transformer consistency

* Fix LTX 2 transformer bugs so consistency test passes

* Initial implementation of LTX 2.0 video VAE

* Explicitly specify temporal and spatial VAE scale factors when converting

* Add initial LTX 2.0 video VAE tests

* Add initial LTX 2.0 video VAE tests (part 2)

* Get diffusers implementation on par with official LTX 2.0 video VAE implementation

* Initial LTX 2.0 vocoder implementation

* Use RMSNorm implementation closer to original for LTX 2.0 video VAE

* start audio decoder.

* init registration.

* up

* simplify and clean up

* up

* Initial LTX 2.0 text encoder implementation

* Rough initial LTX 2.0 pipeline implementation

* up

* up

* up

* up

* Add imports for LTX 2.0 Audio VAE

* Conversion script for LTX 2.0 Audio VAE Decoder

* Add Audio VAE logic to T2V pipeline

* Duplicate scheduler for audio latents

* Support num_videos_per_prompt for prompt embeddings

* LTX 2.0 scheduler and full pipeline conversion

* Add script to test full LTX2Pipeline T2V inference

* Fix pipeline return bugs

* Add LTX 2 text encoder and vocoder to ltx2 subdirectory __init__

* Fix more bugs in LTX2Pipeline.__call__

* Improve CPU offload support

* Fix pipeline audio VAE decoding dtype bug

* Fix video shape error in full pipeline test script

* Get LTX 2 T2V pipeline to produce reasonable outputs

* Make LTX 2.0 scheduler more consistent with original code

* Fix typo when applying scheduler fix in T2V inference script

* Refactor Audio VAE to be simpler and remove helpers (#7)

* remove resolve causality axes stuff.

* remove a bunch of helpers.

* remove adjust output shape helper.

* remove the use of audiolatentshape.

* move normalization and patchify out of pipeline.

* fix

* up

* up

* Remove unpatchify and patchify ops before audio latents denormalization (#9)

---------

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>

* Add support for I2V (#8)

* start i2v.

* up

* up

* up

* up

* up

* remove uniform strategy code.

* remove unneeded code.

* Denormalize audio latents in I2V pipeline (analogous to T2V change) (#11)

* test i2v.

* Move Video and Audio Text Encoder Connectors to Transformer (#12)

* Denormalize audio latents in I2V pipeline (analogous to T2V change)

* Initial refactor to put video and audio text encoder connectors in transformer

* Get LTX 2 transformer tests working after connector refactor

* precompute run_connectors.

* fixes

* Address review comments

* Calculate RoPE double precision freqs using torch instead of np

* Further simplify LTX 2 RoPE freq calc

* Make connectors a separate module (#18)

* remove text_encoder.py

* address yiyi's comments.

* up

* up

* up

* up

---------

Co-authored-by: sayakpaul <spsayakpaul@gmail.com>

* up (#19)

* address initial feedback from lightricks team (#16)

* cross_attn_timestep_scale_multiplier to 1000

* implement split rope type.

* up

* propagate rope_type to rope embed classes as well.

* up

* When using split RoPE, make sure that the output dtype is same as input dtype

* Fix apply split RoPE shape error when reshaping x to 4D

* Add export_utils file for exporting LTX 2.0 videos with audio

* Tests for T2V and I2V (#6)

* add ltx2 pipeline tests.

* up

* up

* up

* up

* remove content

* style

* Denormalize audio latents in I2V pipeline (analogous to T2V change)

* Initial refactor to put video and audio text encoder connectors in transformer

* Get LTX 2 transformer tests working after connector refactor

* up

* up

* i2v tests.

* up

* Address review comments

* Calculate RoPE double precisions freqs using torch instead of np

* Further simplify LTX 2 RoPE freq calc

* revert unneeded changes.

* up

* up

* update to split style rope.

* up

---------

Co-authored-by: Daniel Gu <dgu8957@gmail.com>

* up

* use export util funcs.

* Point original checkpoint to LTX 2.0 official checkpoint

* Allow the I2V pipeline to accept image URLs

* make style and make quality

* remove function map.

* remove args.

* update docs.

* update doc entries.

* disable ltx2_consistency test

* Simplify LTX 2 RoPE forward by removing coords is None logic

* make style and make quality

* Support LTX 2.0 audio VAE encoder

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Remove print statement in audio VAE

* up

* Fix bug when calculating audio RoPE coords

* Ltx 2 latent upsample pipeline (#12922)

* Initial implementation of LTX 2.0 latent upsampling pipeline

* Add new LTX 2.0 spatial latent upsampler logic

* Add test script for LTX 2.0 latent upsampling

* Add option to enable VAE tiling in upsampling test script

* Get latent upsampler working with video latents

* Fix typo in BlurDownsample

* Add latent upsample pipeline docstring and example

* Remove deprecated pipeline VAE slicing/tiling methods

* make style and make quality

* When returning latents, return unpacked and denormalized latents for T2V and I2V

* Add model_cpu_offload_seq for latent upsampling pipeline

---------

Co-authored-by: Daniel Gu <dgu8957@gmail.com>

* Fix latent upsampler filename in LTX 2 conversion script

* Add latent upsample pipeline to LTX 2 docs

* Add dummy objects for LTX 2 latent upsample pipeline

* Set default FPS to official LTX 2 ckpt default of 24.0

* Set default CFG scale to official LTX 2 ckpt default of 4.0

* Update LTX 2 pipeline example docstrings

* make style and make quality

* Remove LTX 2 test scripts

* Fix LTX 2 upsample pipeline example docstring

* Add logic to convert and save a LTX 2 upsampling pipeline

* Document LTX2VideoTransformer3DModel forward pass

---------

Co-authored-by: sayakpaul <spsayakpaul@gmail.com>