Fix soft TTL for duplicate requests #73
Conversation
…ts, that response should be served immediately rather than having duplicate requests wait for the first request's network response.
jjoslin
left a comment
Looks good overall, just some minor concurrency issues to address.
| try {
|     mNetworkQueue.put(request);
| } catch (InterruptedException e) {
|     // Not much we can do about this.
Don't just swallow the InterruptedException, reset the interrupted status on the thread.
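The pattern being suggested here, re-asserting the interrupt flag instead of discarding it, can be sketched as follows (a minimal illustration with made-up names, not Volley's actual code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class InterruptDemo {
    // On InterruptedException, restore the thread's interrupted status
    // instead of swallowing it, so callers further up the stack can still
    // observe that an interrupt happened.
    static boolean offerInterruptibly(BlockingQueue<String> queue, String item) {
        try {
            queue.put(item);
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // reset the interrupted status
            return false;
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        System.out.println(offerInterruptibly(q, "request")); // true
    }
}
```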
| try {
|     mNetworkQueue.put(nextInLine);
| } catch (InterruptedException iex) {
|     VolleyLog.e("Couldn't add request to queue. %s", iex.toString());
Don't just swallow the InterruptedException, reset the interrupted status on the thread.
| synchronized (mWaitingRequests) {
|     waitingRequests = mWaitingRequests.remove(cacheKey);
| }
| if (waitingRequests != null) {
This is racy and you should expand the synchronized block above to also include the mWaitingRequests.put() call below. If you don't then the map can be modified by maybeAddToWaitingRequests() before the code hits the synchronized block below and any changes will be clobbered.
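The race described here can be avoided by holding the lock across both the remove and the re-insert. A self-contained sketch of that shape (String stand-ins for requests; not the PR's actual code):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class WaitingMapDemo {
    private final Map<String, Queue<String>> mWaitingRequests = new HashMap<>();

    // Remove-then-reinsert under a single lock, so a concurrent staging call
    // cannot interleave between the two map operations and be clobbered.
    String takeNextInLine(String cacheKey) {
        synchronized (mWaitingRequests) {
            Queue<String> waiting = mWaitingRequests.remove(cacheKey);
            if (waiting == null || waiting.isEmpty()) {
                return null;
            }
            String next = waiting.poll();
            if (!waiting.isEmpty()) {
                mWaitingRequests.put(cacheKey, waiting); // still inside the lock
            }
            return next;
        }
    }

    public static void main(String[] args) {
        WaitingMapDemo demo = new WaitingMapDemo();
        Queue<String> q = new ArrayDeque<>();
        q.add("a");
        q.add("b");
        synchronized (demo.mWaitingRequests) {
            demo.mWaitingRequests.put("key", q);
        }
        System.out.println(demo.takeNextInLine("key")); // a
        System.out.println(demo.takeNextInLine("key")); // b
        System.out.println(demo.takeNextInLine("key")); // null
    }
}
```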
| * which can be used for other, waiting requests.
| * @param response received from the network
| */
| /* package */ void notifyListenerResponseReceived(Response<?> response) {
Access to mRequestCompleteListener should be synchronized. I believe the code works in its current state because mRequestCompleteListener is never concurrently accessed, but if the code is ever rearranged then this could be a problem.
joebowbeer
left a comment
Great to see new unit tests. What is the code coverage of the added code?
| * <li>get(cacheKey) returns waiting requests for the given cache key. The in flight request
| * is <em>not</em> contained in that list. Is null if no requests are staged.</li>
| * </ul>
| */
Can you remove the explicit synchronization of waiting requests and replace it with a ConcurrentHashMap instance? The downside is that you cannot use null as a marker, but that may be a good thing :-)
Is the CHM putIfAbsent method useful?
I think we should probably defer discussion of that kind of change to a separate PR / bug if desired. This portion of the code was just moved from one class to another, and to keep the scope of this PR manageable we should try to focus it solely on the bug it is trying to fix rather than introduce lots of other small fixes. (I do not think this would be a trivial change; often ConcurrentHashMap has significantly worse performance than a simple HashMap with a lock around it for low levels of concurrency).
@jpd236 Understood. Every time I see synchronized added, it raises my concern about maintainability, correctness and potential deadlock. I doubt ConcurrentHashMap would have any negative impact on the performance of this method at any level of concurrency. It might lead to a design that is easier to understand, though, and avoid questions about correctness. I felt obliged to raise this issue but I won't push back.
Wouldn't we still have to add a synchronized block in onNoUsableResponseReceived anyway?
@uhager synchronized can always be avoided :-) java.util.concurrent was written without a single synchronized. My advice to use CHM may be a bum lead, but the motivation is to make the synchronization more local and explicit. CHM may not be the best way to do that, but it feels like it may be. Seeing synchronized added to the code without explanation of which threads are involved, etc., worries me. Would moving the listener methods to a nested delegate, as I suggest, also allow the new synchronization to be more localized?
| // There is already a request in flight. Queue up.
| Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
| if (stagedRequests == null) {
|     stagedRequests = new LinkedList<Request<?>>();
ArrayList is more performant here (and virtually everywhere). In other words, LinkedList is best avoided in Java.
This one was also just copied from the old code... but in this case I'm fine with making the change as I do think it's reasonable enough.
This is a Queue though. Should I make the Queue a List instead?
@uhager Ah. Use ArrayDeque:
This class is likely to be faster than Stack when used as a stack,
and faster than LinkedList when used as a queue.
Null items are prohibited. I hope that's not a problem.
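For comparison, ArrayDeque implements Queue, so it drops in where LinkedList was used for FIFO access (a small illustration, unrelated to the minSdkVersion constraint discussed below):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class DequeDemo {
    public static void main(String[] args) {
        // ArrayDeque implements Queue, so it is a drop-in replacement for
        // LinkedList in FIFO use -- but it rejects null elements.
        Queue<String> staged = new ArrayDeque<>();
        staged.add("first");
        staged.add("second");
        System.out.println(staged.remove()); // first
        System.out.println(staged.remove()); // second
    }
}
```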
Volley's minSdkVersion is still 8, and ArrayDeque was added in API 9. Definitely a reasonable change to consider bumping minSdkVersion and making this change but it'd be out of the scope of the CL. So probably best to leave as is for now.
| * returns from the network.
| */
| /* package */ void setNetworkRequestCompleteListener(
|         NetworkRequestCompleteListener requestCompleteListener) {
code style nit:
synchronized (mLock) {
| @Override
| public void onNoUsableResponseReceived(Request<?> request) {
|     String cacheKey = request.getCacheKey();
|     Queue<Request<?>> waitingRequests;
Move this to where it is assigned.
| VolleyLog.v("%d waiting requests for cacheKey=%s; resend to network",
|         waitingRequests.size(), cacheKey);
| }
| Request<?> nextInLine = waitingRequests.remove();
How do you know nextInLine is not null? (A BlockingQueue does not accept null elements.)
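Context for this question: Queue.remove() throws NoSuchElementException on an empty queue rather than returning null, so the hazard with a BlockingQueue downstream is an empty queue, not a null element. A small demonstration:

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;
import java.util.Queue;

public class QueueRemoveDemo {
    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>();
        q.add("only");
        System.out.println(q.remove()); // only
        try {
            q.remove(); // empty queue: throws rather than returning null
        } catch (NoSuchElementException e) {
            System.out.println("empty");
        }
    }
}
```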
jpd236
left a comment
Regarding code coverage: we don't have that measured in our build at the moment, but it might be a reasonable feature request as a separate bug just for the extra information. (That being said, I don't think code coverage is a be-all end-all metric for test quality / sufficiency).
Also, @joebowbeer, thanks so much for the review! We appreciate your help :)
| @Override
| public void run() {
|     try {
|         mNetworkQueue.put(request);
Just wondering: What thread executes this blocking call? (I realize that this is just moving code around.)
I'm not quite sure which call you are referring to, but the CacheDispatcher runs on its own thread. In the default implementation, the Runnable is executed in ExecutorDelivery, which runs it on a thread that is set in ExecutorDelivery's instantiation by RequestQueue, where the default is the main looper. The network dispatcher then runs again on its own thread when it grabs the request from the NetworkQueue.
@uhager I was referring to the mNetworkQueue.put which is a blocking call (hence the InterruptedException).
Ah, that one then runs on the main thread (when using the default RequestQueue with the default ExecutorDelivery).
| * by a {@link NetworkDispatcher}.
| */
| public class CacheDispatcher extends Thread {
| public class CacheDispatcher extends Thread implements Request.NetworkRequestCompleteListener {
Can you push this listener responsibility into a delegate? Best practice is generally to limit scope/capability as much as possible... I bring this up because it is confusing to me to see new methods added to this Thread class that do not execute on this Thread.
Do you mean as a nested class in CacheDispatcher?
@uhager Yes. Move the listener methods, the mWaitingRequests instance and the maybeAdd helper method to a static nested class with an explicit reference to the CacheDispatcher instance. Then you can just synchronize the methods of the nested class, and it will be clear what state is being maintained.
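The structure being suggested can be sketched like this (all names are illustrative, not Volley's actual classes): a static nested class holding an explicit reference to its dispatcher, with its methods synchronized so it is clear which state the lock protects.

```java
public class NestedDelegateDemo {
    static class Dispatcher {
        final WaitingRequestManager mWaitingRequestManager = new WaitingRequestManager(this);

        void dispatch(String cacheKey) {
            // Hand the request off to the network queue (elided in this sketch).
        }
    }

    // Static nested: no hidden reference to the outer instance, so the
    // dependency on the dispatcher is an explicit constructor argument.
    static class WaitingRequestManager {
        private final Dispatcher mDispatcher;
        private int mStagedCount;

        WaitingRequestManager(Dispatcher dispatcher) {
            mDispatcher = dispatcher;
        }

        // Synchronizing the nested class's methods makes it clear that the
        // lock guards exactly this class's fields.
        synchronized void stage(String cacheKey) {
            mStagedCount++;
            mDispatcher.dispatch(cacheKey);
        }

        synchronized int stagedCount() {
            return mStagedCount;
        }
    }

    public static void main(String[] args) {
        Dispatcher d = new Dispatcher();
        d.mWaitingRequestManager.stage("cacheKey");
        System.out.println(d.mWaitingRequestManager.stagedCount()); // 1
    }
}
```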
| // Insert 'null' queue for this cacheKey, indicating there is now a request in
| // flight.
| mWaitingRequests.put(cacheKey, null);
| request.setNetworkRequestCompleteListener(this);
Make this a dedicated listener instance? (See comment above.)
@jpd236 regarding code coverage, in Android Studio right-click on
| VolleyLog.e("Couldn't add request to queue. %s", iex.toString());
| // Restore the interrupted status
| Thread.currentThread().interrupt();
| quit();
This double interrupt is confusing. Worth adding a comment explaining which thread is executing this listener method, and pointing out that Thread.currentThread().interrupt() is reasserting the interrupt on the calling thread, while the call to interrupt() inside quit() is interrupting the CacheDispatcher thread?
joebowbeer
left a comment
I like these changes. Suggest static nested class to make the interactions between listener and dispatcher more explicit.
| private volatile boolean mQuit = false;
|
| /** Manage list of waiting requests and de-duplicate requests with same cache key. */
| private WaitingRequestHandler mWaitingRequestHandler;
| mNetworkQueue = networkQueue;
| mCache = cache;
| mDelivery = delivery;
| mWaitingRequestHandler = new WaitingRequestHandler();
Declare static nested so the interactions with CacheDispatcher are explicit: WaitingRequestHandler(this)
| }
| }
|
| private class WaitingRequestHandler implements Request.NetworkRequestCompleteListener {
private static class - to make the interactions with CacheDispatcher explicit.
Thanks for all the great comments! I'm new to Java, so I'm learning a lot here.
| return;
| }
| String cacheKey = request.getCacheKey();
| Queue<Request<?>> waitingRequests;
Declaration and assignment on a single line.
| mNetworkQueue = networkQueue;
| mCache = cache;
| mDelivery = delivery;
| mWaitingRequestHandler = new WaitingRequestHandler(this);
nit: the indentation here seems to be off by 1 space.
jjoslin
left a comment
Thanks for the changes.
joebowbeer
left a comment
After looking at the use of Queue, I see that an ordinary List would work great, and would be more standard (meaning that's what most Java programmers would do in this situation), and then you can use the superior ArrayList class.
@joebowbeer I think that's a reasonable suggestion but I'd prefer to see it done as a separate pull request.
jpd236
left a comment
+1 to keeping that separate.
Looks good here except for a couple nits and one observation that would be good to get your thoughts on; once addressed I can merge.
| private volatile boolean mQuit = false;
|
| /** Manage list of waiting requests and de-duplicate requests with same cache key. */
| private final WaitingRequestHandler mWaitingRequestHandler;
nit: Handler is a loaded term on Android (https://developer.android.com/reference/android/os/Handler.html), so for clarity's sake it would be good to avoid using it if we're not actually extending that class.
Perhaps "WaitingRequestManager" or "WaitingRequestDispatcher"?
| /** Request received a valid response that can be used by other waiting requests. */
| @Override
| public synchronized void onResponseReceived(Request<?> request, Response<?> response) {
A key change here (since the last time I've looked at this) is that you are now holding a lock (via synchronized on the method) for all of the calls to postResponse. Previously it was not held during that time. I think this is probably reasonable given that a sane implementation will simply dispatch the response without actually waiting for the response to be handled (and that's the default behavior of ExecutorDelivery), but it's perhaps worth noting that a different delivery (e.g. ImmediateResponseDelivery, though this is only in the test code) would cause this lock to be held during response callback handling.
To summarize, no change requested here, but it'd be good to think through this angle one more time to make sure that there isn't a major increase in contention by holding a lock for each of these methods' entire bodies.
I was going to comment on this as well: "Strive for open calls to avoid deadlock" being the best practice. In other words, don't hold a lock when calling other code.
In this case, the lock can be shrunk quite a bit:
[Hey. There are HARD tabs in this file. Is that cool?]
Queue<Request<?>> waitingRequests;
synchronized (this) {
    waitingRequests = mWaitingRequests.remove(cacheKey);
}
if (waitingRequests != null) {
    if (VolleyLog.DEBUG) {
        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                waitingRequests.size(), cacheKey);
    }
    // Process all queued up requests.
    for (Request<?> waiting : waitingRequests) {
        mCacheDispatcher.mDelivery.postResponse(waiting, response);
    }
}
Yeah, I think that's how it was in the initial PR, so we should probably move back to that unless there is a thread-safety issue. Just need to tread real carefully there :)
And for tabs, I'm not as used to GitHub's review tool so I haven't caught whitespace issues like that, but yes, we should be using spaces and not tabs. #1 mentions integrating google-java-format into the workflow here which would resolve that more automatically FYI.
I had that initially, but made it all synchronized for the inner class to avoid synchronized blocks. I thought that was part of the point of the nested class to have clearer synchronization by synchronizing the methods? But I can revert that.
Apologies for the tabs, I was working on that in emacs at home, I'll have to check my configurations.
@uhager sorry for the added confusion. The inner class separates concerns nicely, but the need for open calls adds a small complication. Thanks for addressing.
|     mCacheDispatcher.mNetworkQueue.put(nextInLine);
| } catch (InterruptedException iex) {
|     VolleyLog.e("Couldn't add request to queue. %s", iex.toString());
|     // Restore the interrupted status of the calling thread (i.e. NetworkDispatcher)
…tManager, fixed some tabs.
When a soft TTL cache entry exists, that response should be served immediately rather than having duplicate requests wait for the first request's network response. This is done by moving the list of waiting, duplicate requests from the RequestQueue to the CacheDispatcher.