[i18n-KO] Translated video_classification.mdx to Korean #23026
sgugger merged 18 commits into huggingface:main from
Conversation
Co-Authored-By: Hyeonseo Yun <0525_hhgus@naver.com>
Co-Authored-By: Gabriel Yang <gabrielwithhappy@gmail.com>
Co-Authored-By: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-Authored-By: Nayeon Han <nayeon2.han@gmail.com>
Co-Authored-By: Wonhyeong Seo <wonhseo@kakao.com>
Co-Authored-By: Jungnerd <46880056+jungnerd@users.noreply.github.com>
The documentation is not available anymore as the PR was closed or merged.
| **참고**: 위의 데이터셋 파이프라인은 [공식 파이토치 예제](https://pytorchvideo.org/docs/tutorial_classification#dataset)에서 가져온 것입니다. 우리는 UCF-101 데이터셋에 맞게 [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) 함수를 사용하고 있습니다. 내부적으로, 이 함수는 [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) 객체를 반환합니다. `LabeledVideoDataset` 클래스는 PyTorchVideo 데이터셋에서 모든 영상 관련 작업의 기본 클래스입니다. 따라서 PyTorchVideo에서 미리 제공하지 않는 사용자 지정 데이터셋을 사용하려면, 이 클래스를 적절하게 확장하면 됩니다. 더 자세한 사항이 알고 싶다면 `data` API [documentation](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) 를 참고하세요. 또한 위의 예시와 유사한 구조를 갖는 데이터셋을 사용하고 있다면, `pytorchvideo.data.Ucf101()` 함수를 사용하는 데 문제가 없을 것입니다.

Suggested change (데이터셋 → 데이터 세트):

| **참고**: 위의 데이터 세트 파이프라인은 [공식 파이토치 예제](https://pytorchvideo.org/docs/tutorial_classification#dataset)에서 가져온 것입니다. 우리는 UCF-101 데이터 세트에 맞게 [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) 함수를 사용하고 있습니다. 내부적으로, 이 함수는 [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) 객체를 반환합니다. `LabeledVideoDataset` 클래스는 PyTorchVideo 데이터 세트에서 모든 영상 관련 작업의 기본 클래스입니다. 따라서 PyTorchVideo에서 미리 제공하지 않는 사용자 지정 데이터 세트를 사용하려면, 이 클래스를 적절하게 확장하면 됩니다. 더 자세한 사항이 알고 싶다면 `data` API [documentation](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) 를 참고하세요. 또한 위의 예시와 유사한 구조를 갖는 데이터 세트를 사용하고 있다면, `pytorchvideo.data.Ucf101()` 함수를 사용하는 데 문제가 없을 것입니다.
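To make the `LabeledVideoDataset` idea in the note above concrete: at bottom it is a list of (video path, label) pairs derived from a class-per-folder layout. Below is a stdlib-only sketch of that discovery step; the directory and file names are made up for illustration, and the real `pytorchvideo.data.Ucf101()` additionally handles clip sampling and video decoding.

```python
import os
import tempfile

# Build a tiny class-per-folder layout like <root>/train/<class>/<video>.
# This mimics the structure LabeledVideoDataset consumes; real entries are .avi files.
root = tempfile.mkdtemp()
for cls, names in {"ApplyEyeMakeup": ["v1.avi"], "CricketShot": ["v2.avi", "v3.avi"]}.items():
    cls_dir = os.path.join(root, "train", cls)
    os.makedirs(cls_dir)
    for name in names:
        open(os.path.join(cls_dir, name), "w").close()

def labeled_video_paths(split_dir):
    """Return sorted (video_path, {'label': class_name}) pairs, one per video file."""
    pairs = []
    for cls in sorted(os.listdir(split_dir)):
        cls_dir = os.path.join(split_dir, cls)
        for fname in sorted(os.listdir(cls_dir)):
            pairs.append((os.path.join(cls_dir, fname), {"label": cls}))
    return pairs

pairs = labeled_video_paths(os.path.join(root, "train"))
```

Extending this pattern (a path list plus per-item metadata) is essentially what subclassing `LabeledVideoDataset` for a custom dataset amounts to.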
| ... return {"pixel_values": pixel_values, "labels": labels}
| ```
| 그런 다음 이 모든 것을 데이터셋과 함께 `Trainer`에 전달하기만 하면 됩니다.

Suggested change (데이터셋 → 데이터 세트):

| 그런 다음 이 모든 것을 데이터 세트와 함께 `Trainer`에 전달하기만 하면 됩니다.
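For context, the `collate_fn` whose return statement is quoted above gathers per-example features into one batch dict for `Trainer`. A minimal sketch of that batching pattern, with plain nested lists standing in for the torch tensors (`torch.stack`) the guide actually uses; the example inputs are hypothetical:

```python
# Illustrative stand-in for the collate_fn passed to Trainer.
# In the guide, "video" holds per-clip frame tensors stacked with torch.stack;
# plain lists are used here so the shape of the pattern is visible without torch.

def collate_fn(examples):
    """Gather per-example features into a single batch dict."""
    pixel_values = [ex["video"] for ex in examples]  # one entry per clip
    labels = [ex["label"] for ex in examples]        # one class id per clip
    return {"pixel_values": pixel_values, "labels": labels}

batch = collate_fn([
    {"video": [[0.1, 0.2]], "label": 3},
    {"video": [[0.3, 0.4]], "label": 7},
])
```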
sim-so
left a comment
Translating this must have taken a lot of work!
I did my best to review it thoroughly 🔥
| [[open-in-colab]]
| 영상 분류는 영상 전체에 레이블 또는 클래스를 지정하는 작업입니다. 각 영상에는 하나의 클래스가 있을 것으로 예상됩니다. 영상 분류 모델은 영상를 입력으로 받아 어느 클래스에 속하는지에 대한 예측을 반환합니다. 이러한 모델은 영상가 어떤 내용인지 분류하는 데 사용될 수 있습니다. 영상 분류의 실제 응용 예는 피트니스 앱에서 유용한 동작 / 운동 인식 서비스가 있습니다. 이는 또한 시각 장애인이 이동할 때 보조하는데 사용될 수 있습니다

Suggested change:

| 영상 분류는 영상 전체에 레이블 또는 클래스를 지정하는 작업입니다. 각 영상에는 하나의 클래스가 있을 것으로 간주합니다. 영상 분류 모델은 영상을 입력으로 받아 어느 클래스에 속하는지 예측하여 반환합니다. 이러한 모델은 영상이 어떤 내용인지 분류하는 데 사용될 수 있습니다. 영상 분류가 실생활에 적용된 사례로, 피트니스 앱에서 유용한 동작/운동 인식 서비스가 있습니다. 시각 장애인을 보조하는 데에도 사용되며, 특히 이동 시에 도움이 됩니다.

I changed 영상가 to 영상을 and reworked the sentences.
| 이 가이드에서는 다음을 수행하는 방법을 보여줍니다.
| <!--흠...-->
This part doesn't seem to be in the original text. What is it for?
| 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) 데이터셋의 하위 집합을 통해 [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) 모델을 미세 조정하는 법.
| 2. 미세 조정한 모델을 추론에 사용하는 법.

Suggested change:

| 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) 데이터 세트의 하위 집합을 통해 [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) 모델을 미세 조정하기
| 2. 미세 조정한 모델을 추론에 사용하기

- Changed to 데이터 세트 per the current glossary.
- Since the sentence above already says "~을 수행하는 방법" ("how to do the following"), I removed "법" from each item. What do you think?
| ## 추론하기 [[inference]]
| 좋습니다. 이제 미세 조정된 모델을 갖고 추론하는데 사용할 수 있습니다.

Suggested change (추론하는데 → 추론하는 데):

| 좋습니다. 이제 미세 조정된 모델을 갖고 추론하는 데 사용할 수 있습니다.
| >>> logits = run_inference(trained_model, sample_test_video["video"])
| ```
| `logits`을 디코딩하면, 우리는 다음을 얻을 수 있습니다.

Suggested change:

| `logits`을 디코딩하면, 다음 결과를 얻을 수 있습니다:

Leaving a comment here as well: the trailing "." needs to become ":".
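For reference, "decoding the logits" at this step means taking the argmax over the class scores and mapping that index through the model's `id2label` table. A dependency-free sketch of that step; the label table and scores below are made up for illustration (the real mapping comes from `model.config.id2label`):

```python
# Decode classification logits: argmax over class scores -> human-readable label.
# id2label here is illustrative; in the guide it comes from model.config.id2label.

def decode_logits(logits, id2label):
    predicted_class = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[predicted_class]

id2label = {0: "ApplyEyeMakeup", 1: "BasketballDunk", 2: "CricketShot"}
label = decode_logits([0.2, 3.1, -1.0], id2label)  # index 1 has the highest score
```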
| >>> clip_duration = num_frames_to_sample * sample_rate / fps
| ```
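The quoted line computes how many seconds of raw video one sampled clip spans. Worked through with plausible values for this tutorial (VideoMAE commonly consumes 16 frames per clip; the sample rate and fps below are illustrative, not taken from the PR):

```python
# clip_duration: seconds of raw video covered by one sampled clip.
# Values are illustrative stand-ins for model.config.num_frames, etc.
num_frames_to_sample = 16  # frames the model consumes per clip
sample_rate = 4            # keep every 4th raw frame
fps = 30                   # frame rate of the source video

clip_duration = num_frames_to_sample * sample_rate / fps
# 16 kept frames, each 4 raw frames apart, at 30 fps -> 64 / 30 seconds
```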
| 이제 데이터셋에 특화된 전처리(transform)과 데이터셋 자체를 정의합니다. 먼저 훈련 데이터셋으로 시작합니다.

Leaving a comment here too: the trailing "." needs to become ":".
| ... return {"pixel_values": pixel_values, "labels": labels}
| ```
| 그런 다음 이 모든 것을 데이터셋과 함께 `Trainer`에 전달하기만 하면 됩니다.
Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
| 이 가이드에서는 다음을 수행하는 방법을 보여줍니다:
| <!--흠...-->
| 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) 데이터 세트의 하위 집합을 통해 [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) 모델을 미세 조정하는 법.

Suggested change (미세 조정하는 법 → 미세 조정하기):

| 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) 데이터 세트의 하위 집합을 통해 [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) 모델을 미세 조정하기.
| >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset")
| ```
| 데이터 세트의 하위 집합이 다운로드 되면, 압축된 아카이브를 해제해야 합니다.

Suggested change:

| 데이터 세트의 하위 집합이 다운로드 되면, 파일의 압축을 해제합니다:

"compressed archive" seems to be used here in the sense of 압축 파일 (compressed file), so I reworded it.
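The "compressed archive" being discussed is the UCF101 subset tarball fetched with `hf_hub_download`. A minimal stdlib sketch of the unpacking step; the file names are hypothetical stand-ins, and a tiny archive is built on the fly so the snippet is runnable end to end:

```python
import os
import tarfile
import tempfile

# Build a tiny stand-in archive so the extraction step can actually run.
# In the guide, file_path is the downloaded UCF101 subset archive instead.
workdir = tempfile.mkdtemp()
member = os.path.join(workdir, "info.txt")
with open(member, "w") as f:
    f.write("placeholder")

file_path = os.path.join(workdir, "UCF101_subset.tar.gz")
with tarfile.open(file_path, "w:gz") as tar:
    tar.add(member, arcname="UCF101_subset/info.txt")

# The actual step: unpack the downloaded archive into a working directory.
extract_dir = os.path.join(workdir, "extracted")
with tarfile.open(file_path) as tar:
    tar.extractall(extract_dir)

extracted = os.path.join(extract_dir, "UCF101_subset", "info.txt")
```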
| ... )
| ```
| 학습 데이터셋 변환에는 '균일한 시간 샘플링(uniform temporal subsampling)', '픽셀 정규화(pixel normalization)', '무작위 잘라내기(random cropping)' 및 '무작위 수평 뒤집기(random horizontal flipping)'의 조합을 사용합니다. 검증 및 평가 데이터셋 변환에는 '무작위 잘라내기'와 '수평 뒤집기'를 제외한 동일한 변환 체인을 유지합니다. 이러한 변환의 자세한 내용을 알아보려면 [PyTorchVideo 공식 문서](https://pytorchvideo.org)를 확인하세요.

I think writing 무작위 as 랜덤 would be easier to understand.
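Of the transforms listed, uniform temporal subsampling is the easiest to pin down: pick `num_samples` evenly spaced frame indices across a clip. A dependency-free sketch of the idea behind `pytorchvideo.transforms.UniformTemporalSubsample`; the index formula below is my reading of the behavior (a linspace over `[0, total_frames - 1]` rounded to integers), and the library's exact rounding may differ:

```python
# Uniformly subsample num_samples frame indices from a clip of total_frames frames.
# Sketch of the idea behind pytorchvideo's UniformTemporalSubsample; rounding
# details are an assumption, not the library's exact implementation.

def uniform_temporal_subsample(total_frames, num_samples):
    if num_samples == 1:
        return [0]
    step = (total_frames - 1) / (num_samples - 1)  # even spacing, endpoints included
    return [round(i * step) for i in range(num_samples)]

indices = uniform_temporal_subsample(total_frames=32, num_samples=4)
```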
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com>
Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: Gabriel Yang <gabrielwithhappy@gmail.com>
Hey! Sorry for the long delay. There seem to be 2 suggestions not addressed. Should we wait for these? 🤗

@ArthurZucker the suggestions that I didn't accept were about the same sentences or an earlier version of our glossary, so you don't need to wait for them to be accepted!! Thank you!!
sgugger
left a comment
Thanks for your contribution!
* [i18n-KO] Translated video_classification.mdx to Korean (#23026)
Co-authored-by: Gabriel Yang <gabrielwithhappy@gmail.com> Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com> Co-authored-by: Nayeon Han <nayeon2.han@gmail.com> Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com> Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com> Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com> * 🌐 [i18n-KO] Translated `troubleshooting.mdx` to Korean (#23166) * docs: ko: troubleshooting.mdx * revised: fix _toctree.yml #23112 * feat: nmt draft `troubleshooting.mdx` * fix: manual edits `troubleshooting.mdx` * revised: resolve suggestions troubleshooting.mdx Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com> --------- Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com> * Adds a FlyteCallback (#23759) * initial flyte callback * lint * logs should still be saved to Flyte even if pandas isn't install (unlikely) * cr - flyte team * add docs for Flytecallback * fix doc string - cr sgugger * Apply suggestions from code review cr - sgugger fix doc strings Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> --------- Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update collating_graphormer.py (#23862) * [LlamaTokenizerFast] nit update `post_processor` on the fly (#23855) * Update the processor when changing add_eos and add_bos * fixup * update * add a test * fix failing tests * fixup * #23388 Issue: Update RoBERTa configuration (#23863) * [from_pretrained] imporve the error message when `_no_split_modules` is not defined (#23861) * Better warning * Update src/transformers/modeling_utils.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * format line --------- Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> --------- Signed-off-by: dependabot[bot] <support@github.com> Signed-off-by: Wang, Yi A <yi.a.wang@intel.com> Signed-off-by: Wang, Yi <yi.a.wang@intel.com> Co-authored-by: Tyler 
<41713505+Tylersuard@users.noreply.github.com> Co-authored-by: Joshua Lochner <admin@xenova.com> Co-authored-by: zspo <songpo.zhang@foxmail.com> Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com> Co-authored-by: Zachary Mueller <muellerzr@gmail.com> Co-authored-by: Tim Dettmers <TimDettmers@users.noreply.github.com> Co-authored-by: younesbelkada <younesbelkada@gmail.com> Co-authored-by: LWprogramming <LWprogramming@users.noreply.github.com> Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com> Co-authored-by: sshahrokhi <shahrokhi@google.com> Co-authored-by: Matt <Rocketknight1@users.noreply.github.com> Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com> Co-authored-by: ydshieh <ydshieh@users.noreply.github.com> Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Alex <116374290+aaalexlit@users.noreply.github.com> Co-authored-by: Nayeon Han <nayeon2.han@gmail.com> Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com> Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com> Co-authored-by: Gabriel Yang <gabrielwithhappy@gmail.com> Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com> Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com> Co-authored-by: 小桐桐 <32215330+dkqkxx@users.noreply.github.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Wang, Yi <yi.a.wang@intel.com> Co-authored-by: Maria Khalusova <kafooster@gmail.com> Co-authored-by: regisss <15324346+regisss@users.noreply.github.com> Co-authored-by: uchuhimo <uchuhimo@outlook.com> Co-authored-by: Yuxian Qiu <yuxianq@nvidia.com> Co-authored-by: pagarsky <36376725+pagarsky@users.noreply.github.com> Co-authored-by: Connor Henderson <connor.henderson@talkiatry.com> Co-authored-by: Daniel King 
<43149077+dakinggg@users.noreply.github.com> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> Co-authored-by: Eric J. Wang <eric.james.wang@gmail.com> Co-authored-by: Ravi Theja <ravi03071991@gmail.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> Co-authored-by: 玩火 <niltok@163.com> Co-authored-by: amitportnoy <113588658+amitportnoy@users.noreply.github.com> Co-authored-by: Ran Ran <ran.rissy@gmail.com> Co-authored-by: Eli Simhayev <elisimhayev@gmail.com> Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com> Co-authored-by: Samin Yasar <saminc97@gmail.com> Co-authored-by: Matthijs Hollemans <mail@hollance.com> Co-authored-by: Kihoon Son <75935546+KIHOON71@users.noreply.github.com> Co-authored-by: Hyeonseo Yun <0525_hhgus@naver.com> Co-authored-by: peridotml <106936600+peridotml@users.noreply.github.com> Co-authored-by: Clémentine Fourrier <22726840+clefourrier@users.noreply.github.com> Co-authored-by: Vijeth Moudgalya <33093576+vijethmoudgalya@users.noreply.github.com>
…23026)

* task/video_classification translated
* Update docs/source/ko/tasks/video_classification.mdx (review suggestions)
* Apply suggestions from code review
* Update video_classification.mdx
* Update _toctree.yml

Co-authored-by: Hyeonseo Yun <0525_hhgus@naver.com>
Co-authored-by: Gabriel Yang <gabrielwithhappy@gmail.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: Nayeon Han <nayeon2.han@gmail.com>
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com>
What does this PR do?
Translated the video_classification.mdx file of the documentation to Korean.
Thank you in advance for your review.
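For context, the review comment quoted above explains that `pytorchvideo.data.Ucf101()` returns a `pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`, the base class for video work in PyTorchVideo, and that custom datasets can extend it. A rough, self-contained sketch of that subclassing pattern (plain Python stand-ins, not the real pytorchvideo API; all class names and paths here are illustrative):

```python
# Simplified stand-in for the LabeledVideoDataset pattern: a base dataset
# over (video_path, label) pairs, which a task-specific dataset (like the
# UCF-101 helper) specializes. Illustrative only -- not the pytorchvideo API.
from dataclasses import dataclass


@dataclass
class ClipSample:
    video_path: str
    label: int


class LabeledVideoDatasetSketch:
    """Base class: indexes (video_path, label) pairs."""

    def __init__(self, labeled_video_paths):
        self._paths = list(labeled_video_paths)

    def __len__(self):
        return len(self._paths)

    def __getitem__(self, idx):
        path, label = self._paths[idx]
        return ClipSample(video_path=path, label=label)


class Ucf101Sketch(LabeledVideoDatasetSketch):
    """Subclass for a UCF-101-style layout, where each video sits in a
    directory named after its class (e.g. UCF101/ApplyEyeMakeup/...)."""

    def __init__(self, video_paths, label2id):
        # Derive each label from the parent directory name.
        pairs = [(p, label2id[p.split("/")[-2]]) for p in video_paths]
        super().__init__(pairs)


videos = ["UCF101/ApplyEyeMakeup/v_001.avi", "UCF101/Archery/v_002.avi"]
ds = Ucf101Sketch(videos, {"ApplyEyeMakeup": 0, "Archery": 1})
print(len(ds), ds[1].label)  # prints: 2 1
```

In the real library, the same idea holds: datasets not shipped with PyTorchVideo can extend `LabeledVideoDataset`, and `Ucf101()` is the ready-made specialization for the UCF-101 layout.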
Before reviewing
Who can review? (initial)
Team PseudoLab, may you please review this PR? @0525hhgus, @HanNayeoniee, @sim-so, @gabrielwithappy, @wonhyeongseo, @jungnerd
Before submitting
- Did you read the contributor guidelines, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review? (final)
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?