[R&D] Adaptative batch size on Preload #6396
Improved the batch-size work a bit from what @MathieuLamiot did: added a transient for all requests, then used the value to determine the max and min size of the next batch of requests.
Thanks @Khadreal 🙏
It seems to me that, with the current code, a rolling average could be implemented as follows (it's not the best way to do it, but it's the quickest one). Replace:
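The rolling average suggested above could be sketched roughly like this. This is a standalone illustration, not the code from the PR; the function name and the smoothing factor are arbitrary choices:

```php
<?php
/**
 * Update a running estimate of the average preload request time.
 * Illustrative sketch only: an exponential moving average gives recent
 * measurements more weight than older ones, which is the kind of "quick"
 * rolling average mentioned in the comment above. In the plugin, the
 * previous average would be stored in a transient between requests.
 *
 * @param float|null $previous_avg Last stored average (null on first run).
 * @param float      $measured     Duration of the latest blocking request, in seconds.
 * @param float      $alpha        Smoothing factor in (0, 1]; higher reacts faster.
 * @return float Updated average.
 */
function rocket_update_avg_request_time( $previous_avg, $measured, $alpha = 0.3 ) {
	if ( null === $previous_avg ) {
		return $measured;
	}
	return $alpha * $measured + ( 1 - $alpha ) * $previous_avg;
}
```

A true windowed rolling average would need to store the last N measurements; the exponential form only needs one stored value, which is why it is the quickest to bolt onto a transient.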
I cleaned up a few things and added dedicated logic, based on a transient, to limit the number of blocking requests to 1 per minute. It gives good results locally. I am trying to test on gamma, where we should be able to see a preload going much slower thanks to this; currently blocked because I can't write with the FTP access 🤷 To easily monitor, I added the following:
@piotrbak
While we finalize testing, we would need your inputs on:
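The once-per-minute throttle on blocking requests described above could be sketched like this. It is a standalone illustration with hypothetical names; in the plugin, the last-measurement timestamp would live in a transient rather than being passed in:

```php
<?php
/**
 * Decide whether the next preload request should be made blocking so its
 * duration can be measured. In the plugin, $last_measured_at would come
 * from a transient refreshed after each measurement; here it is passed in
 * so the logic stays self-contained. Sketch only; names are hypothetical.
 *
 * @param int|null $last_measured_at Unix timestamp of the last measurement, or null.
 * @param int      $now              Current Unix timestamp.
 * @param int      $interval         Minimum seconds between measurements.
 * @return bool True if a blocking measurement should be made now.
 */
function rocket_should_measure_request( $last_measured_at, $now, $interval = 60 ) {
	return ( null === $last_measured_at ) || ( $now - $last_measured_at >= $interval );
}
```

When this returns true, the preload request would be sent with a blocking HTTP call and timed; all other requests stay non-blocking, so the measurement overhead is bounded to one slow request per minute.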
After running tests on the gamma website and locally, I adjusted the formula so that we don't impact "normal" websites much, but provide a batch reduction when the website is slow (typically, more than 3 or 4 seconds on average per request starts to reduce the preload significantly). I opened a PR to keep track, but we'll need AC or at least NRT plans here, and some rework of the unit/integration tests. I manually tested as much as possible and preloads seem to be going well. Just one question, as I am not sure how Preload and RUCSS work together: if preload is slowed down (let's say batch size is 5 instead of 45), does it have any impact on the rate at which we'll add RUCSS jobs to the table and send them? I don't think so, but wanted a confirmation @wp-media/engineering-plugin-team
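One way to picture the kind of formula described above (little impact on fast sites, a strong reduction once the average request time grows) is the sketch below. The inverse-proportional shape is a guess, not the tuned formula from the PR; the 45 default, the 5 floor, and the 2 s threshold mirror numbers mentioned elsewhere in this thread:

```php
<?php
/**
 * Compute an adaptive preload batch size from the average request time.
 * Illustrative only: fast sites keep the default batch size, while the
 * batch shrinks as pages get slower, down to a floor. The constants and
 * the curve are assumptions for illustration, not the actual formula.
 *
 * @param float $avg_seconds Average preload request duration.
 * @param int   $max_batch   Batch size for fast sites.
 * @param int   $min_batch   Floor for very slow sites.
 * @return int Batch size for the next preload batch.
 */
function rocket_adaptive_batch_size( $avg_seconds, $max_batch = 45, $min_batch = 5 ) {
	if ( $avg_seconds <= 2.0 ) {
		return $max_batch;
	}
	// Shrink inversely with slowness beyond the 2 s threshold.
	$size = (int) floor( $max_batch * 2.0 / $avg_seconds );
	return max( $min_batch, min( $max_batch, $size ) );
}
```

With this shape, a site averaging 4 s per request would get roughly half the default batch, and anything slower than ~18 s would sit at the floor, which matches the stated goal of barely touching normal sites while strongly throttling slow ones.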
@Khadreal Can you take over this issue for the completion?
Summary of the functional behavior of the implemented solution, as of now

Functional behavior
The number of URLs to preload per batch becomes variable. It is now adjusted based on the time it takes to load a page. This time is estimated by frequently measuring how long a preload request takes, and averaging over time.

Preparing a batch
When preparing a preload batch, the plugin computes the batch size based on

Sending preload requests
When sending a preload request, if it has been more than 1 minute since the last estimation, we make the request blocking and measure how long it takes to return. The measured time is used to update the

Controlling the feature
Currently, this feature is applied by default.

Bypassing the feature
Bypassing the feature means having a constant preload batch size. In the current implementation, to do this, one must set those filters to the same value, being the desired preload batch size:

List of filters
List of transients
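Bypassing the feature as described above (pinning the minimum and maximum batch-size filters to the same value) would look roughly like this in a site's custom code. The filter names below are placeholders, since the actual names were not captured in this thread:

```php
<?php
// Force a constant preload batch size of 45 by returning the same value
// from both the minimum and maximum batch-size filters. The filter names
// here are hypothetical placeholders, not the actual WP Rocket hooks.
add_filter( 'rocket_preload_min_batch_size', function () {
	return 45;
} );
add_filter( 'rocket_preload_max_batch_size', function () {
	return 45;
} );
```

With min and max pinned equal, the adaptive formula has no room to move, so every batch has the forced size regardless of the measured request time.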
@piotrbak Are there changes required to release this, compared to the functional description in my last comment above?
@MathieuLamiot I think there's nothing to be added here. Just to confirm, if we set
Yes, correct. Are you OK with the following:
But then, how will it be increased back to the regular size if the request time is less than 2s? In what steps? @DahmaniAdame pinging you about the 2s loading time: if it's bigger, we treat the website as heavily loaded. Also about the 5 as the minimal batch size for loaded websites.
A starting batch of 5 is reasonable. It will help low-resource setups not get overloaded after activating preload, and build up if there are enough resources to process more.
See this:
One would be able to force a batch size regardless of the loading time reported. We currently cannot change the 2s value if someone wants to "adapt the formula".
To whoever picks this up, this needs:
Context
It seems Preload can generate a lot of pressure on a server if the pages of the website are slow to open. A way to adapt to this would be to measure how long a request takes, and adjust the batch size based on that.
What to do
This branch is a quick&dirty example of how this could be implemented: https://github.com/wp-media/wp-rocket/tree/prototype/preload-adaptative-batch
The idea is partially described here, but has evolved a bit to base the batch size on the measurement of a preload request, by making one request blocking from time to time.
A developer from the plugin team needs to spend some time on this branch to make it production-ready (maybe it is not; I just wrote the code to lay the idea down), and play with it to see how it behaves, possibly with logs.
We have the gamma.rocketlabs.ovh website that suffers from CPU issues when doing a full cache clear to trigger the preload. It would be a good place to test it. See here.
Warning
This branch would need #6394
Otherwise, we don't have control to prevent flooding the AS queue, and the number of in-progress jobs could increase too quickly.