Call log.warning instead of raising PluginWarning #1723
liiight merged 9 commits into Flexget:develop from jonathanou:fix_delete
Conversation
bt.hliang.com provides Asian TV show torrents, but it does not support RSS feeds. To auto-download shows, use the http plugin to download the homepage of bt.hliang.com; this URL rewrite plugin will then do the following: 1. Go through each entry URL and follow the link to open the details page. 2. On the details page, search for the actual torrent download URL (any link that contains "down.php?[unique id]"). 3. Return the URL if found, or raise UrlRewriteError.
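The lookup described above can be sketched roughly as follows; the function name and the href-matching regex are illustrative assumptions, not the plugin's actual code:

```python
import re

def find_download_url(details_html):
    """Search a details page for a link containing 'down.php?<id>'.

    Hypothetical sketch of the rewrite step: scan the fetched HTML for
    an href whose target contains the literal 'down.php?'.
    """
    match = re.search(r'href="([^"]*down\.php\?[^"]*)"', details_html)
    return match.group(1) if match else None
```

For example, `find_download_url('<a href="/down.php?id=123">dl</a>')` returns `'/down.php?id=123'`, while a page without such a link yields `None` (which is where the plugin would raise UrlRewriteError).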
Forgot to escape '.' and '?' in the "down.php?" regex pattern.
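The reason escaping matters: unescaped, '.' matches any character and '?' makes the preceding token optional, so the raw pattern also matches strings that are not the literal "down.php?". A minimal illustration (the variable names are just for this example):

```python
import re

unescaped = re.compile('down.php?')    # '.' is a wildcard, '?' makes the second 'p' optional
escaped = re.compile(r'down\.php\?')   # matches only the literal string 'down.php?'

# The unescaped pattern wrongly matches strings like 'downZph':
print(bool(unescaped.search('downZph')))      # True
print(bool(escaped.search('downZph')))        # False
print(bool(escaped.search('down.php?id=1')))  # True
```

`re.escape('down.php?')` is another way to produce the safely escaped form.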
1. Wrapped the requests.get() call inside a try-except block.
2. Changed plugin registration to specify "interfaces" instead of "groups".
Additional changes:
1. Removed unused urllib2 import.
2. Grouped import statements.
When making request calls, catch request-specific exceptions and log the error before raising UrlRewritingError. For the soup library call, it is not immediately clear which exception types it may raise, so it is left unchanged.
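A sketch of the error handling described, assuming a simplified UrlRewritingError stand-in (FlexGet defines its own); requests.exceptions.RequestException is the base class for all exceptions the requests library raises:

```python
import logging
import requests

log = logging.getLogger('url_rewriter')

class UrlRewritingError(Exception):
    """Stand-in for FlexGet's UrlRewritingError, for illustration only."""

def fetch_details_page(url):
    # Catch only request-specific failures (DNS, timeouts, HTTP errors),
    # log them, then re-raise as the plugin-level error.
    try:
        response = requests.get(url, timeout=30)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        log.error('Unable to fetch %s: %s', url, str(e))
        raise UrlRewritingError(str(e))
    return response.text
```

Catching the narrow RequestException base class (rather than bare Exception) keeps programming errors visible while still converting network failures into the plugin's own error type.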
Changed log.exception to log.error as requested in the pull request review. Also fixed how the arguments are passed: the message takes a format string, and the arguments that follow are the values. The previous implementation was incorrect.
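The distinction being made: the stdlib logging methods take a format string followed by the values and interpolate lazily; log.exception additionally appends the current traceback, which is why log.error is quieter inside an except block. A small illustration:

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger('example')

url = 'http://example.com/detail/42'

# Wrong: pre-formats the string eagerly, and breaks if the
# interpolated value itself contains a '%' character:
#   log.error('Unable to rewrite %s' % url)

# Right: pass a format string plus arguments; logging interpolates
# only when the record is actually emitted:
log.error('Unable to rewrite %s', url)
```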
Instead of storing the actual exception, store the exception message in UrlRewritingError. Printing the exception as a string shows the traceback, which is a little verbose for log files. Showing just the exception message gives more meaningful info.
Exception.message is deprecated; use str(e) instead, as recommended in https://stackoverflow.com/questions/1272138/baseexception-message-deprecated-in-python-2-6 .
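In other words, the portable way to extract just the message text from a caught exception:

```python
try:
    raise ValueError('file not found on disk')
except ValueError as e:
    # e.message was deprecated in Python 2.6 and removed in Python 3;
    # str(e) yields the message in both and carries no traceback.
    message = str(e)

print(message)  # file not found on disk
```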
Flexget will stop the entire task when the code throws a PluginWarning exception. This prevents the task from processing subsequent files. By calling log.warning and continuing to the next iteration, subsequent files can still be processed.
|
It sounds like the correct approach would be to break the loop instead of just logging a warning.
|
But breaking the loop stops other files from being processed. If I'm deleting a bunch of files and one of them doesn't exist on disk anymore, why should that stop the other files from being deleted?
|
Maybe I misunderstood the scenario. You describe a situation where, during iteration over files in a folder, that folder gets deleted, and trying to operate on subsequent files in the loop cannot work, right? If I didn't understand this correctly, I apologize.
|
Yes, that's right. Here's a more concrete example:
I feel that the loop should carry on to the next iteration regardless of whether it can perform the operation on the current item. Even if deleting the folder is handled gracefully by the loop, there could still be other reasons for the delete to fail, like permission denied. I think the easiest way is to log a warning and continue, as I have proposed.
|
The easiest way and the best way are not always interchangeable. That being said, I'm not sure you're wrong. I'd rather wait for some more feedback on this.
|
Was there anyone you wanted to include to discuss this?
|
@cvium and @paranoidi maybe. I imagine they saw this already.
|
This change seems good to me. We don't want errors while handling one entry to stop all subsequent entries from processing. I'm a bit confused by all the extra commits in this PR, but I suppose they don't matter, as we can just squash them down when merging.
|
The extra commits in the PR were for a separate PR that was already merged back in January. I reused the same fork to add this change. I'm quite new to Git PRs, so I'm probably not doing it right. Is it recommended to delete the fork once a PR is merged, and then re-fork later when you want to create another PR? Or is there a way to reuse the fork while keeping the commit history clean?
|
Best practice is to keep your fork in sync and open a branch per feature/PR.
Motivation for changes:
I use the delete plugin to clean folders older than 30 days. The delete plugin has a feature to clean the source folder (clean_source: [folder size]) once the folder is under a certain size. When using this feature, I often run into this scenario: the task is configured with clean_source: 1, so it will clean the source folder once the folder size is less than 1 MB. A PluginWarning is raised claiming it can't find B.txt, and the task stops.

Detailed changes:
Instead of raising PluginWarning in move.py when the code detects something is wrong, we call log.warning to still log the message, which allows us to move on and process the next file.
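The described change amounts to replacing a raise with a warning plus continue. A simplified sketch, assuming a hypothetical delete loop (delete_files and its logic are illustrative, not the actual move.py code):

```python
import logging
import os

log = logging.getLogger('delete')

def delete_files(paths):
    """Delete each path, logging a warning instead of aborting the task."""
    deleted = []
    for path in paths:
        if not os.path.exists(path):
            # Previously: raise PluginWarning(...), which stopped the
            # whole task. Warn and move on so later files still process.
            log.warning('File %s does not exist, skipping', path)
            continue
        os.remove(path)
        deleted.append(path)
    return deleted
```

With this structure, a missing B.txt only produces a log line; every other file in the batch is still deleted.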