173.consume tahoe lafs eliot logs #174
Conversation
This will fail more loudly (hopefully with an AttributeError) if Tahoe itself changes somehow. Co-authored-by: Chris Wood <chris@leastauthority.com>
But give it a default in case the Tahoe API changes. Co-authored-by: Chris Wood <chris@leastauthority.com>
already a transitive dependency via twisted
Provide a better fake reactor. Mock doesn't know when to call startConnecting and I'd rather not duplicate that logic in the test method itself. MemoryReactorClock knows to do this and it fixes the problem with starting then stopping the TCP4ClientEndpoint.
Tahoe.restart just calls Tahoe.start which gets the right API auth token.
repeat yourseeeeeelf Co-authored-by: Chris Wood <chris@leastauthority.com>
Otherwise it tries to stop it
So that it connects to the right address
Pass just the information we need to the start call, instead.
Noticed when listenTCP actually raised an exception, which forced us to see that the except statement used an undefined name.
```python
# This deque limit is based on average message size of 260 bytes
# and a desire to limit maximum memory consumption here to around
# 500 MiB.
maxlen = 2000000
```
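The behavior this limit relies on can be sketched with `collections.deque`'s `maxlen` eviction (a small `maxlen` is used here purely for demonstration; the names are illustrative, not GridSync's):

```python
from collections import deque

# At roughly 260 bytes per message, 2,000,000 entries caps worst-case
# memory near 500 MiB, matching the comment above.
log_buffer = deque(maxlen=2000000)

# Demonstrate eviction with a tiny buffer: once maxlen is reached,
# appending silently discards the oldest entry.
small = deque(maxlen=3)
for i in range(5):
    small.append(i)
assert list(small) == [2, 3, 4]
```

Because eviction is silent, the buffer never grows past its cap, at the cost of dropping the oldest log messages first.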
Something that occurs to me is that some folks use GridSync with multiple grids. Do we want to divide this buffer across all of the Tahoes being monitored?
cc @crwood
I was actually just thinking about this last night. Longer (or medium?) term, I'd like this value and behavior to be somewhat user-configurable via the Preferences pane (and/or the forthcoming "Connection Manager"). However, in the interest of avoiding further feature-creep for this PR (i.e., keeping things focused primarily on consuming the logs), I'm happy to merge this work as-is and implement the necessary memory-balancing dials later on.
Nevertheless, I may lower this value somewhat, pending further testing, once I hook this work into the GUI I'm working on in 168.export-debug-info (which presents both "filtered" and "unfiltered" versions of the logs to users and so requires additional memory).
pytest will load them for us automatically, so we won't need the flake8-upsetting imports from test_tahoe in test_streamedlogs. Also, importing library code from test modules is sad.
Also avoid even importing the global reactor until it is being used. A future improvement would probably be to use the reactor attribute.
We're green -- merging! Thanks, again, @exarkun, for everything relating to this. :)
Fixes #173
Well, it sort of fixes it. It only gathers the logs into the GridSync process; it doesn't expose them to anyone in any way. I imagine there should be a follow-up to #173 that describes a specific user experience involving this data.