Unnecessary file reloading issue #318 (PR #320)
Conversation
ulidtko left a comment
We can't realistically expect real-world scenarios to follow the fixed-interval pace of "chunk - 300ms - chunk - 300ms - chunk - 300ms ..." when writing the incoming file.
I'd strongly prefer the slow.c test to use randomized delays instead. FWIW, the geometric distribution is a good fit for modelling the uncertain pace at which external events arrive.
As for the fix: it's not reliable. As you said, @ktak-007, it will still exhibit premature reloading of partially-written bundles. More rarely, sure, but debouncing via a hardcoded 1-second delay will not fully fix #318.
Mind that servers and networks can, very realistically, become temporarily overloaded (making the writing of one particular file appear slow); likewise, VM hypervisors and container runners may throttle and load-share the CPU (making essentially everything appear slow, with programs seeing time run too fast).
I acknowledge it's useful to have the debouncing logic (however imperfect) for platforms without CLOSE_WRITE. There, we can't do any better. But if we can get a fully reliable fix on platforms with CLOSE_WRITE (Linux) -- we should do that.
@jappeace very good? 🤨 The test literally failed, with:
On that branch, I reverted their changes to confirm their test works. And let's not let the perfect be the enemy of the good.
OK, got it! Awesome then 🚀 Edit: I still hold that the test is not realistic. But oh well. Better than nothing.
Tests are often abstract understandings of real problems, so I'd say it's a good test as is, but certainly not perfect.
