
Update fft docs for new cache size #12665

Closed
ssnl wants to merge 2 commits into pytorch:master from ssnl:fftdoc

Conversation

Collaborator

@ssnl ssnl commented Oct 15, 2018

Follow up of #12553

Contributor

@zou3519 left a comment


lgtm

the number of plans currently in cache, and
Changing ``torch.backends.cuda.cufft_plan_cache.max_size`` (default is
4096 on CUDA 10 and newer, and 1023 on older CUDA versions) controls the
capacity of this cache. Some cuFFT plans may allocate GPU memory. You may



Contributor

@facebook-github-bot left a comment


SsnL is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.



5 participants