Conversation
Documentation preview: https://nvidia-merlin.github.io/Transformers4Rec/review/pr-591
sararb
approved these changes
Dec 29, 2022
| "so npartitions>=global_size. Cudf or pandas can be used for repartitioning " | ||
| "e.g.: df.to_parquet('file.parquet', row_group_size=N_ROWS/NPARTITIONS, engine" | ||
| "='pyarrow') as npartitions=nr_rows/row_group_size." | ||
| "eg. df.to_parquet('file.parquet', row_group_size=N_ROWS/NPARTITIONS) or " |
Contributor
Can we replace `df` with `pandas`, similar to the example with cudf?
Suggested change
```diff
-    "eg. df.to_parquet('file.parquet', row_group_size=N_ROWS/NPARTITIONS) or "
+    "eg. pandas.to_parquet('file.parquet', row_group_size=N_ROWS/NPARTITIONS) or "
```
rnyak
reviewed
Jan 3, 2023
<b>Note:</b> When using `DistributedDataParallel`, our data loader splits data between the GPUs based on dataset partitions. For that reason, the number of partitions of the dataset must be equal to, or an integer multiple of, the number of processes. If the parquet file has a small number of row groups (partitions), try repartitioning it with pandas or cudf and saving it again before training. The dataloader checks `dataloader.dataset.npartitions` and will repartition if needed, but we advise users to repartition the dataset and save it themselves for better efficiency. Example of repartitioning a parquet file with cudf:

```diff
-df.to_parquet("filename.parquet", row_group_size=10000)
+cudf.to_parquet("filename.parquet", row_group_size_rows=10000)
```
Contributor
I'd recommend saying `pdf.to_parquet(...)` for pandas and `gdf.to_parquet(...)` for cudf dataframes.
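Following that naming convention, a hedged sketch of both repartitioning paths, with a quick row-group check via pyarrow; the filenames and the row-group size of 10000 are illustrative, not taken from the final PR:

```python
import pandas as pd
import pyarrow.parquet as pq

# pandas: row_group_size is forwarded to the pyarrow engine.
pdf = pd.read_parquet("train.parquet")  # hypothetical input file
pdf.to_parquet("train_repart.parquet", row_group_size=10000, engine="pyarrow")

# cudf uses row_group_size_rows instead of row_group_size (GPU-only, shown commented):
# import cudf
# gdf = cudf.read_parquet("train.parquet")
# gdf.to_parquet("train_repart.parquet", row_group_size_rows=10000)

# Verify the number of row groups, i.e. the partitions the dataloader will see:
print(pq.ParquetFile("train_repart.parquet").num_row_groups)
```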
Modified references to pandas and cudf data objects.
Modified references to pandas and cudf data objects in the documentation.
Fixed documentation line length.
Contributor
rerun tests
rnyak
approved these changes
Jan 5, 2023
This PR fixes the user warning and the README documentation about data partitions for multi-GPU training. This addresses #550.