Conversation
CodSpeed Performance Report: Merging #12207 will not alter performance.
@twm fyi this is your fault for making me actually consider the design (Thank you, the code is much better for it, but still, there's like 1000 lines now)
Sigh, trying one more force push to get rid of spurious codecov statuses that seem to have gotten stuck to the win32 stdio changes. |
docs/core/howto/logger.rst
Outdated
- :py:class:`Logger <twisted.logger.Logger>` provides a :py:meth:`failure <twisted.logger.Logger.failure>` method, which allows one to capture a :py:class:`Failure <twisted.python.failure.Failure>` object conveniently:
+ :py:class:`Logger <twisted.logger.Logger>` provides a :py:meth:`failure <twisted.logger.Logger.handlingFailures>` method, which allows one to run some application code, then capture a :py:class:`Failure <twisted.python.failure.Failure>` in the log stream if that code raises an exception:
Maybe we should also have an example demonstrating the usage of the "Operation"... or at least a few words about the fact that the context manager returns an "operation" that can be used to check the result and that contains a reference to the raised Failure.
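The PR's actual API isn't reproduced in this thread, so here is a self-contained sketch of the pattern the comment describes: a context manager that runs application code, captures any exception into an "operation" object, and logs it. All names here (`Operation`, `handling_failures`) are assumptions for illustration, not Twisted's real identifiers.

```python
from contextlib import contextmanager
from dataclasses import dataclass
from typing import Optional


@dataclass
class Operation:
    """Hypothetical result object: records whether the wrapped code
    succeeded and, if not, which exception was raised."""
    succeeded: bool = True
    failure: Optional[BaseException] = None


@contextmanager
def handling_failures(log):
    """Run application code; if it raises, record the failure on the
    operation and emit it to the log instead of letting it propagate."""
    op = Operation()
    try:
        yield op
    except Exception as e:
        op.succeeded = False
        op.failure = e
        # Stand-in for writing a Failure into the log stream.
        log.append(f"captured failure: {e!r}")


log: list = []
with handling_failures(log) as op:
    1 / 0  # application code that raises
```

After the `with` block, `op.succeeded` is `False` and `op.failure` holds the `ZeroDivisionError`, which is exactly the check the comment asks to document.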
adiroiban left a comment:
Thanks for the update.
I did a quick review and left a few comments.
I haven't looked into the details of the win32 and threadselect changes.
I think that this is a big improvement.
I have approved it, but I am leaving this for review for Tom or other team members.
If a further review is delayed, I think that this can be merged.
Thanks again!
Thanks @adiroiban, I really appreciate that! I think this is one that might benefit from some post-hoc review, so after reviewing and addressing your comments I might land pretty quickly, but will be interested in feedback from others later.
The problem in this case is that the job did not fail, but that codecov reported annotations
I use it all the time :) In this particular case, the issue is that the job did not actually fail. Codecov ran, it emitted some annotations, failed the check, then withdrew the failure once the Windows coverage arrived, but didn't withdraw the annotations. You can see them on the current files changed tab, which of course makes no sense given that the patch is 100% covered and these are patch lines. I was hoping that the bug was intermittent and might be addressed by a full rebuild, but no, it looks like a flaw in codecov.
huh, why did the robot not notice this and add …
It might be due to this: I think that as long as there are requested reviewers, it is not yet considered ready for merge.
OK, I can remove twm then
Oh… but that doesn't help, because… somehow you reviewed as you, and not "on behalf of twisted-contributors"? Hmmm. I guess I should just mention people and never use the review-request mechanism.
Not going to wait for Docker to resolve their incident here, but if there's a problem we can revert. |
The review robot is not AI. The main use case for the robot is to allow non-Twisted-org members to add the review label.
Yeah, I should contribute back to it a bit more :). But I really want in-org members to use it as much as possible, so we have a common workflow. |
You can start by submitting bug reports and defining the rules/requirements/specifications for the robot. I think that the robot works. For this PR, you have explicitly requested a review from Tom, and Tom has not reviewed the PR yet.

For example, see #12213. The PR was started as a draft. As soon as the PR was no longer a draft, the review label was added and a review was requested for the whole team. I have reviewed the change and the PR was marked for merge.

Another example is #12216: I have explicitly requested the review, the label was automatically applied, and the Twisted team was automatically added to review. As soon as someone from the team had reviewed and approved the PR, it was marked for merge.

Another example is #12202: marked as no longer a draft, the review was automatically requested, I requested changes, and the label was automatically updated.
Twisted recently changed the behavior of the logger on failures [1]. It now logs `Main loop terminated.` even on exceptions, which breaks two tests in the twistedsupport test suite. This hack attempts to address the upcoming issue. [1] twisted/twisted#12207
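The twistedsupport workaround itself isn't shown here. As a generic illustration only (using the stdlib `logging` module as a stand-in for Twisted's logger, not the actual hack), one way to tolerate a newly-emitted log line in a test suite is to filter it out of the captured records before asserting:

```python
import logging


class DropMessage(logging.Filter):
    """Drop records whose rendered message matches the newly-added line."""

    def __init__(self, message: str):
        super().__init__()
        self.message = message

    def filter(self, record: logging.LogRecord) -> bool:
        return record.getMessage() != self.message


class Capture(logging.Handler):
    """Collect rendered messages so a test can assert on them."""

    def __init__(self, sink: list):
        super().__init__()
        self.sink = sink

    def emit(self, record: logging.LogRecord) -> None:
        self.sink.append(record.getMessage())


captured: list = []
handler = Capture(captured)
# Ignore the line that the upstream change started emitting.
handler.addFilter(DropMessage("Main loop terminated."))

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Main loop terminated.")  # filtered out
logger.error("actual failure")        # kept
```

Filtering by exact message is brittle, but it keeps the test's real assertions unchanged while the upstream behavior settles.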
kivy (https://github.com/kivy/kivy) supports the use of the Twisted reactor. Therefore it uses the … This changeset removes … @glyph, can you point me in the right direction, please?
Will …
@adiroiban Thank you. That fixes the problem. Just one question: why is this not done in …?
I think that the thread is stopped via the event system ... the reason ... reduce the reliance on inheritance ... I don't know; I have not designed this component.

twisted/src/twisted/internet/base.py, lines 1154 to 1156, in 8f0643c
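The linked lines show Twisted's actual wiring; as a rough stdlib illustration of the design point (shutdown via registered event triggers, loosely modelled on `reactor.addSystemEventTrigger`, rather than subclass overrides), the names below are invented for this sketch:

```python
from collections import defaultdict
from typing import Callable, Dict, List


class EventSystem:
    """Minimal event-trigger registry: components register callbacks
    for named events instead of overriding a shutdown method in a
    subclass, which reduces reliance on inheritance."""

    def __init__(self) -> None:
        self._triggers: Dict[str, List[Callable[[], None]]] = defaultdict(list)

    def add_trigger(self, event: str, callback: Callable[[], None]) -> None:
        self._triggers[event].append(callback)

    def fire(self, event: str) -> None:
        for callback in self._triggers[event]:
            callback()


events = EventSystem()
stopped: list = []

# A thread pool registers its own cleanup; nothing needs to subclass anything.
events.add_trigger("shutdown", lambda: stopped.append("threadpool stopped"))
events.fire("shutdown")
```

The thread pool's stop hook runs when the "shutdown" event fires, which mirrors how the reactor tears down its lazily-created thread pool without the pool inheriting from the reactor.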
As pointed out in my initial comment, it's possible to install the reactor without signal handling, which seems to be the cause of the issue. It might make sense to handle this case in …
I think that "signal handling" means Unix signals. The shutdown uses Twisted's event notification. From what I can see, the thread pool is lazily initialized by Twisted. It looks like you want to run Twisted in a non-main thread. I am using this code to support some end-to-end tests for my Twisted application ... maybe it helps: https://github.com/chevah/compat/blob/master/src/chevah_compat/testing/threaded_reactor.py
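The linked helper drives the real reactor; the general shape (start the event loop in a daemon thread without signal handlers, then stop it from outside) can be sketched with stdlib pieces. In real Twisted code the loop body would be `reactor.run(installSignalHandlers=False)` and the stop would go through `reactor.callFromThread(reactor.stop)`; `MiniLoop` here is an invented stand-in.

```python
import threading


class MiniLoop:
    """Stdlib stand-in for a reactor run in a non-main thread: signal
    handlers are never installed, so stopping must come from the
    application rather than from SIGINT/SIGTERM."""

    def __init__(self) -> None:
        self._stop = threading.Event()

    def run(self) -> None:
        # Analogous to reactor.run(installSignalHandlers=False).
        while not self._stop.is_set():
            self._stop.wait(0.01)  # idle tick, like one reactor iteration

    def stop(self) -> None:
        # Analogous to reactor.callFromThread(reactor.stop): thread-safe.
        self._stop.set()


loop = MiniLoop()
t = threading.Thread(target=loop.run, daemon=True)
t.start()
loop.stop()
t.join(timeout=5.0)
```

The key point the threaded_reactor helper also makes: when signals are out of the picture, the application owns the shutdown path explicitly.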
I think that the next thing to do here is to file a bug, describing the problem that you're facing in specific details. It sounds like maybe you've got more of a feature request than a bug report, but I'm not sure why you're installing without signal handlers in the first place, so perhaps it's just a bug in the way that works. (In the future, I plan to make a more concerted effort to remove or scramble private APIs like this on a more regular basis so that downstream users take our compatibility contract more seriously, so if you notice issues like this, reach out and try to get the features you need supported :))
@rnixx To be clear, I think your next step here is to get us to fix the bug, which is that the reactor should not break like this when you start and stop it normally. We may need to push some work off to the application, but hopefully we can just figure out a way to do it without signals. If we do need to push work onto the application, it should be part of the contract of starting without signals: something obvious that comes from that API, like an object that gets returned to you with the relevant hook, rather than a public version of an API which can arbitrarily mangle internal process state to work around bugs that we have shipped.
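To make the "object that gets returned to you with the relevant hook" idea concrete, here is a purely hypothetical API sketch; none of these names exist in Twisted, and this is a design doodle, not a proposal for the actual interface:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SignallessRun:
    """Hypothetical handle returned when starting the reactor without
    signal handlers: the caller receives the shutdown hook explicitly,
    instead of poking at private reactor internals."""

    _cleanups: List[Callable[[], None]] = field(default_factory=list)

    def on_shutdown(self, callback: Callable[[], None]) -> None:
        """Register work the reactor would otherwise have driven from a
        signal handler."""
        self._cleanups.append(callback)

    def shutdown(self) -> None:
        """The application triggers shutdown; registered hooks run."""
        for callback in self._cleanups:
            callback()


# Usage sketch: the application explicitly owns the shutdown path.
run = SignallessRun()
done: list = []
run.on_shutdown(lambda: done.append("threadpool drained"))
run.shutdown()
```

The design point is that anything the application must do when running signal-free becomes a visible part of the returned handle's contract.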


This is a retread of #12196 with a simpler commit history. Its description: