Closed
Description
On dask version 2.10.1, I am getting high memory usage on a simple problem:
```python
import numpy as np  # needed for np.pi below
import dask.array as da
from dask.distributed import LocalCluster

lc = LocalCluster(n_workers=4, threads_per_worker=1, processes=False, memory_limit='8GB')

n_points = 1250 * 60 * 60 * 4        # 18,000,000 samples
n_scales = 300
scales = da.arange(n_scales)
omega = da.fft.fftfreq(n_points) * 2 * np.pi
x = omega * scales[:, None]          # broadcasts to shape (300, 18000000)
x = x.rechunk((1, 1250 * 60 * 5))
x.to_hdf5('testdata.hdf5', '/x')
```

When I run this code, I get the warning "Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory?", and htop confirms the high memory usage. Is there an issue with my code that is causing this?
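For scale, here is a back-of-envelope sizing of the array involved (my own arithmetic, not part of the original report; it assumes the default float64 output of `fftfreq`):

```python
# Estimate the footprint of the (n_scales, n_points) array and its chunks.
n_points = 1250 * 60 * 60 * 4      # 18,000,000 samples, as in the report
n_scales = 300
bytes_per_elem = 8                 # assuming float64, the fftfreq default

# Full materialized array: far larger than the 8GB per-worker memory_limit.
total_bytes = n_scales * n_points * bytes_per_elem
print(total_bytes / 1e9)           # -> 43.2 (GB)

# Each (1, 1250*60*5) chunk after the rechunk is small ...
chunk_bytes = 1 * (1250 * 60 * 5) * bytes_per_elem
print(chunk_bytes / 1e6)           # -> 3.0 (MB)

# ... but there are very many of them.
n_chunks = n_scales * (n_points // (1250 * 60 * 5))
print(n_chunks)                    # -> 14400
```

So the workload is roughly 43 GB spread over 14,400 three-megabyte chunks, which is worth keeping in mind when reading the memory warning.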