🐛 `blockConcurrencyWhile` doesn't guarantee response ordering #2714
We've never made any sort of guarantees about response ordering (and Cap'n Proto doesn't either). That said, I agree this is surprising. I think what's happening is that when one request completes a `blockConcurrencyWhile()`, and another request is waiting, we sometimes switch directly to the waiting request rather than finishing out the request that actually did the `blockConcurrencyWhile()`. This is not incorrect per se, but it'd probably be better if we continued in the calling request first, until it actually awaits something, before switching requests.
Isn't this pretty much guaranteed to happen because the requests need to wait for what is sometimes upwards of 100ms for the output gate to lift? It doesn't seem feasible to me to wait for the output gate to lift before continuing with the processing of other requests.
@Frederik-Baetens The output gate will release responses in the same order as they completed in JavaScript, so it doesn't have any effect here except to delay things.
Cool, I didn't know that. Do you think write coalescing could contribute here? I thought that the output gate together with write coalescing could result in a few requests having their output gates lifted simultaneously (respecting completion order), after which a few ms of network jitter would be enough to affect response order.
Oh... I just looked closer at the test and realized that the for loop doing 40 requests is actually in a separate node.js client process. Yeah there's absolutely no expectation of consistent response ordering here since every request is on a different connection. Even if workers responded exactly in order, the network could arbitrarily delay one connection against another.
What do you mean by "causal ordering" here? I don't think "causal ordering" is ever expected across overlapping requests.
It's true that write coalescing + output gates could cause responses to be sent in batches, and I guess I'm not sure whether these batches will end up retaining their original order. |
Repro at https://github.com/threepointone/durable-ordering

The worker has a Durable Object which, inside a `blockConcurrencyWhile` call, sleeps for a random period of time between 0 and 400ms and then increments a counter. It persists the counter, then returns it. The test makes 40 requests to the worker, with a 50ms delay between each request. It then checks whether the counter is incremented correctly.

If `blockConcurrencyWhile` actually blocks request concurrency, then the counter should be incremented correctly. However, we see that it doesn't. Inside the Durable Object itself, we seem to be processing requests in order, but the responses don't come back in order. What is the expected behaviour? Example trial run: