Use pbscluster #1

Conversation
from dask.distributed import Client
from pangeo import PBSCluster

global logger
You may not need to annotate this as global. It's already top-level scope.
formatter = logging.Formatter(' - '.join(
    ["%(asctime)s", "%(name)s", "%(levelname)s", "%(message)s"]))
ch.setFormatter(formatter)
logger = logging.getLogger(__file__)
I typically use __name__ here rather than __file__, much as you did above. Perhaps this is your problem?
I was initializing logger twice, but I removed this one and it still doesn't work.
Nothing strikes me at first glance. I'll admit that I haven't taken a very deep look at this, though.
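The suggested fix can be sketched as a minimal standalone version of the logging setup from the diff, swapping `__file__` for `__name__` (handler and format string copied from the snippet above):

```python
import logging

# Stream handler with the same format string used in the diff.
ch = logging.StreamHandler()
formatter = logging.Formatter(" - ".join(
    ["%(asctime)s", "%(name)s", "%(levelname)s", "%(message)s"]))
ch.setFormatter(formatter)

# __name__ yields a stable dotted logger name ("__main__" when run as a
# script); __file__ varies with how the module is invoked, which can
# leave handlers attached to a logger nothing else ever looks up.
logger = logging.getLogger(__name__)
logger.addHandler(ch)
logger.setLevel(logging.INFO)
```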
I found the problem: _exit is being called, which in turn calls qdel and cancels the job.
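The behaviour described can be illustrated with a toy sketch (class and method names here are hypothetical, not the real PBSCluster internals): an object that registers teardown to run at interpreter exit will cancel its batch job whenever the launching process ends.

```python
import atexit

class ClusterJobSketch:
    """Toy stand-in for a cluster object that cleans up its batch job
    on exit (hypothetical names; not the real PBSCluster API)."""

    def __init__(self, job_id):
        self.job_id = job_id
        self.cancelled = False
        # Registering teardown like this is why the job vanishes when
        # the launching process ends: the exit hook runs the
        # equivalent of `qdel <job_id>`.
        atexit.register(self.close)

    def close(self):
        if not self.cancelled:
            self.cancelled = True
            # A real implementation would shell out here, e.g.:
            # subprocess.run(["qdel", self.job_id])
```

Whether close() is called explicitly or by the exit hook, the job is cancelled as soon as the Python process that created the cluster goes away, which is exactly wrong for a long-running cluster.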
The PBSCluster object was intended for interactive use in the notebook. It may not be appropriate, and was not intended, for setting up long-running clusters.
I still think that our objectives are similar. I don't need the session to last any longer than the notebook does. I have a script that sets up a dask scheduler and worker(s) on Cheyenne and then connects a notebook to that; PBSCluster has some functionality that could make that easier.
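The script-based setup described above can be sketched like this; the PBS directives, queue name, and file paths are assumptions, and the notebook side uses dask.distributed's scheduler-file mechanism to connect:

```python
# PBS script for a long-running scheduler + worker pair; queue name,
# walltime, and the scheduler-file path are illustrative assumptions.
pbs_script = """\
#!/bin/bash
#PBS -N dask-cluster
#PBS -q regular
#PBS -l walltime=04:00:00

dask-scheduler --scheduler-file $HOME/scheduler.json &
dask-worker --scheduler-file $HOME/scheduler.json
"""

# Notebook side: connect to the running cluster via the scheduler
# file. (Shown as a comment so this sketch has no dask dependency.)
#   from dask.distributed import Client
#   client = Client(scheduler_file="~/scheduler.json")
```

Because the scheduler's lifetime is tied to the batch job rather than to the notebook process, the cluster survives notebook restarts, unlike the interactive PBSCluster behaviour discussed above.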
I've answered questions on pangeo-data#56. I'm going to close this for now. For future comments and debugging help I recommend opening a PR on the
Hi Matt, I wonder if you could take a look at this and help me debug. The first question is: why is pbs.py not recognizing my logger? The second is that it doesn't seem to be submitting jobs to the queue; any idea why?