[Docs] Spanish Translation - Torchscript md & Trainer md #29310
stevhliu merged 5 commits into huggingface:main
Conversation
aaronjimv
left a comment
Hi @njackman-2344. Thank you, the translations in both documents are of great quality. I only have some feedback.
docs/source/es/trainer.md
Outdated
* [~Trainer.get_train_dataloader] crea un entrenamiento de DataLoader
* [~Trainer.get_eval_dataloader] crea una evaluación DataLoader
* [~Trainer.get_test_dataloader] crea una prueba de DataLoader
* [~Trainer.log] anota información de los objetos varios que observa el entrenamiento
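The methods in the quoted list above are hooks you can override by subclassing. As a hedged, self-contained sketch of that pattern, the `BaseTrainer` below is a hypothetical stand-in for `transformers.Trainer` (not the real class), so the example runs without any dependencies:

```python
# Minimal sketch of the Trainer subclassing pattern, assuming a stand-in
# base class; the real transformers.Trainer exposes hooks with these names.
class BaseTrainer:
    def get_train_dataloader(self):
        # Stand-in: the real method builds a torch DataLoader for training.
        return iter([{"input": 1}, {"input": 2}])

    def log(self, logs):
        # Stand-in: the real Trainer.log records metrics during training.
        print(logs)


class VerboseTrainer(BaseTrainer):
    def log(self, logs):
        # Override a single hook while reusing the rest of the training loop.
        super().log({**logs, "source": "VerboseTrainer"})


VerboseTrainer().log({"loss": 0.5})
```

The point of the pattern is that each hook can be customized independently; the real `Trainer` calls these methods internally during `train()`.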
- Add "anota la información" in line 116.
- I think that you can translate "an optimizer y rate scheduler" in the sentence of line 117.
- Change "computa el perdido" to "computa la pérdida" in line 118.
Copy that. I put tasa programada (rate scheduler) but I still don't feel that's sufficient. What do you think?
### Implicaciones

Los modelos transformers basados en la arquitectura [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert), o sus variantes como [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) y [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), funcionan mejor en Inf1 para tareas no generativas como la respuesta a preguntas extractivas, la clasificación de secuencias y la clasificación de tokens. Sin embargo, las tareas de generación de texto aún pueden adaptarse para ejecutarse en Inf1 según este [tutorial de AWS Neuron MarianMT](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). Se puede encontrar más información sobre los modelos que se pueden convertir fácilmente para usar en Inferentia en la sección de [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) de la documentación de Neuron.
I don't know why the links in this paragraph don't show their preview (the color blue). The same happens in line 22 of trainer.md.
Hi @njackman-2344 LGTM 👍. Just two little changes in trainer.md and it's done:
- In line 70, remove "con" in "...o algo para preprocesar el conjunto de datos con (..."
- In line 187, remove the second "parametros" at the end of "...Puedes cambiar el nivel de logging con los parametros log_level y log_level_replica parametros en [TrainingArguments]."
It'll close it, but I can reopen 😄
  title: Compartir modelos personalizados
- local: run_scripts
- local: trainer
  title: Entrenador
There are two titles here, which is causing the doc-builder to fail. I think we just want to keep Entrenador and remove Entrenamiento con scripts?
Will change it :) Thanks!
Ah sorry, I wasn't clear. It should be as shown below, because now - local: run_scripts doesn't have a title under it!

- local: run_scripts
  title: Entrenamiento con scripts
- local: trainer
  title: Entrenador
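The doc-builder failure above comes from a `- local:` entry losing its `title:` line. As a hypothetical sketch (not the real doc-builder validation), a few lines of stdlib Python can catch this kind of mismatch in a `_toctree.yml`-style snippet:

```python
# Hypothetical check: flag every "- local:" entry that is not immediately
# followed by a "title:" line, the mismatch that broke the doc-builder.
def check_titles(lines):
    missing = []
    for i, line in enumerate(lines):
        if line.lstrip().startswith("- local:"):
            nxt = lines[i + 1].lstrip() if i + 1 < len(lines) else ""
            if not nxt.startswith("title:"):
                # Record the local name whose title is missing.
                missing.append(line.split(":", 1)[1].strip())
    return missing


snippet = [
    "- local: run_scripts",   # no title under it -> flagged
    "- local: trainer",
    "  title: Entrenador",
]
print(check_titles(snippet))  # → ['run_scripts']
```

This only models the adjacency convention used in the snippets above; a real validator would parse the YAML properly.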
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
What does this PR do?
Fixes #28936 (a couple of markdown files).

Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Link: [i18n-es] Translating docs to Spanish #28936
- Did you write the documentation following the documentation guidelines? Here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu @gisturiz @aaronjimv
Whew!! I look forward to any changes needed. Thanks!! :)