
A84: PID LB policy #430

Open · wants to merge 13 commits into master
Conversation

s-matyukevich (Contributor)

Follow up on #383 and #423

This proposal was implemented and tested with a few real apps in a production environment. Here are the results for one of the apps:
[Screenshot 2024-04-02 at 3:44:04 PM]

@markdroth markdroth changed the title PID LB policy A83: PID LB policy Jul 12, 2024
@markdroth markdroth changed the title A83: PID LB policy A84: PID LB policy Jul 12, 2024
@markdroth (Member) left a comment:

This looks very interesting -- and the graphs definitely show impressive results!

My main concern here is the comment about making the WRR policy extensible as a public API. I think that needs some careful thought.

I'd like @ejona86 and @dfawley to review as well.

Please let me know if you have any questions. Thanks!

@@ -0,0 +1,290 @@
A68: PID LB policy.
Member:

This says A68, and the filename says A80, and both of those numbers are already taken. :)

Looks like the next available number is A84, so let's use that.

message PIDLbConfig {
// Configuration for the WRR load balancer as defined in [gRFC A58][A58].
// The PID balancer is an extension of WRR and all settings applicable to WRR also apply to PID identically.
WeightedRoundRobinLbConfig wrr_config = 1;
Member:

I don't think we want to include the WRR config here. Instead, let's just duplicate the fields from that proto in this one, so that the two are independent.


// Threshold beyond which the balancer starts considering the ErrorUtilizationPenalty.
// This helps avoid oscillations in cases where the server experiences a very high and spiky error rate.
// We avoid eliminating the error_utilization_penalty entirely to prevent redirecting all traffic to an instance
@markdroth (Member), Jul 12, 2024:

If this is the only thing that you need the error utilization penalty for, an alternative would be to set the penalty to zero and instead use outlier detection (see gRFC A50) to avoid sending traffic to such a backend. Then we presumably wouldn't need this knob.

Contributor Author:

Good point, we can use outlier detection with zero utilization penalty for servers with very high and spiky error rates. I can remove this knob.

}
```

The proposal is to make `wrrCallbacks` public. This has a number of significant benefits. Besides PID, there are other cases where one might need to extend `wrr`. For example, Spotify [demonstrates](https://www.youtube.com/watch?v=8E5zVdEfwi0) a gRPC load balancer that reduces cross-zone traffic – this can be implemented nicely in terms of `wrr` weights. We are considering the same, as well as incorporating things like latency into our load balancing decisions. Existing ORCA extension points don't cover these use cases. We leverage ORCA for custom server utilization metrics, but we also need the ability to combine server and client metrics to generate the resulting weight. The alternative is to write our own balancer with a custom EDF scheduler and handle all the details related to subchannel management and interactions with resolvers. With this new API, use cases like this can be covered naturally, and users have full control over the end-to-end definition of weights.
Member:

I am hesitant to make this a public API. This seems like a valuable internal thing to allow us to reuse LB policy implementations, but making it public implies a stability commitment that I'm not super comfortable with.

It's worth noting that the example of using latency would require that this API surface be even broader, because we'd need a way for this to trigger the LB policy to measure the latency on each RPC. (We do have that mechanism internally, but we'd need to expose it via this API in addition to the existing mechanism as part of the LB policy API itself.)

I'd like to hear thoughts from @ejona86 and @dfawley on this.

Contributor Author:

I expected this answer but decided to give it a try anyway. Making this interface private works for us as well; next time we need to extend PID or WRR, we'll come back with more gRFCs. I assume it is much easier to make a private interface public than the other way around, so maybe we can revisit this decision in the future when we have more data and better-defined use cases where such an interface might be useful.

For now I can remove this paragraph and replace it with a note that the interface will be private. @markdroth does this work for you?


### Moving Average Window for Load Reporting

As outlined in the previous section, smoothing the utilization measurements in server load reports is essential for the `pid` balancer to achieve convergence on spiky workloads. To address this, we propose integrating a moving average window mechanism into the `MetricRecorder` component, as described in [gRFC A51][A51]. This involves adding a `MovingAverageWindowSize` parameter to the component. Instead of storing a single value per metric, `MetricRecorder` will now maintain the last MovingAverageWindowSize reported values in a circular buffer. The process is detailed in the following pseudo-code:
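The pseudo-code itself is not part of this quoted excerpt. Purely as a rough illustration, with hypothetical names rather than the actual A51 `MetricRecorder` API, the circular-buffer bookkeeping could look something like the following Go:

```go
// movingAverage keeps the last windowSize reported values of a metric in a
// circular buffer and exposes their mean. Names here are illustrative only.
type movingAverage struct {
	buf   []float64
	next  int     // index of the slot to overwrite next
	count int     // number of valid entries, up to len(buf)
	sum   float64 // running sum of the valid entries
}

func newMovingAverage(windowSize int) *movingAverage {
	return &movingAverage{buf: make([]float64, windowSize)}
}

// record replaces the oldest value with v and updates the running sum.
func (m *movingAverage) record(v float64) {
	if m.count < len(m.buf) {
		m.count++
	} else {
		m.sum -= m.buf[m.next] // drop the value falling out of the window
	}
	m.sum += v
	m.buf[m.next] = v
	m.next = (m.next + 1) % len(m.buf)
}

// value returns the mean of the values currently in the window; this is what
// a smoothing-aware recorder would put into the load report.
func (m *movingAverage) value() float64 {
	if m.count == 0 {
		return 0
	}
	return m.sum / float64(m.count)
}
```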
Member:

It's not clear to me that this is something we need to build directly into gRPC APIs. Can't the application do this smoothing itself before it reports the data to the gRPC MetricRecorder?

Contributor Author:

Yes, that's what we are doing now. However, this is not a trivial amount of code, and PID almost certainly requires it. If we provide the balancer but don't provide the smoothing implementation, the UX won't be ideal: users will most likely start using it without any smoothing and soon conclude that it doesn't work.
My plan was to document that smoothing is required for PID and then say "use property X of the MetricRecorder to configure smoothing". IMO this is a lot better than saying "implement smoothing yourself".

Member:

If this is really critical to the behavior of the PID policy, wouldn't it be better to build this directly into the PID policy, so that it's not possible to have a misconfiguration between the client and the server? In other words, why not do this smoothing on the client side?

Contributor Author:

We considered this but decided to implement it on the server for the following reasons:

  • How much we should smooth is mostly a server property (it depends on how spiky the workload is, which in turn depends on how expensive individual requests are and how they are distributed in time). It is easier to reason about this from the perspective of the service owner than from that of the client.
  • Because of the first point, it doesn't really make sense to have different clients with different smoothing parameters.
  • It protects the server from misconfigured clients: if some clients don't use enough smoothing, this could easily result in oscillations, which could hurt the server.
  • Most importantly, smoothing on the server is more accurate, as it takes the actual load into account and is not delayed in time. If we do smoothing on the client, the result might be less accurate because the client only has data sampled at random points in time. If there was a big CPU spike between two consecutive requests sent by a particular client, that information would be missed by client-side smoothing, so different clients may end up with different views of the server load. If we do smoothing on the server we don't have such problems: we can sample CPU as frequently as every few milliseconds and use the monotonically increasing cgroup CPU counter, so we never miss any CPU spikes or drops.

Member:

Bullets 1-3: these are resolved by this being configurable via xds/service config. The service is still in control. We do agree the service should be in control, but that's a bit different than which side calculates the average.

Bullet 4: We agree it is very helpful to have the higher sampling when producing the utilization. That would matter a lot for something like memory utilization, which can be measured at any instant. CPU utilization however is always an average over some time period (cpu seconds used / time period). There is no instantaneous value (other than a weak boolean (running/not running) per core). WRR assumes the cpu utilization is roughly the same time period as rps, so I'd hope server-side is already averaging over at least a second-ish. We'd be very interested if this is wildly inaccurate. It seems load for short periods should be covered.

For longer time periods, an exponential moving average on the client side would seem sufficient (updated on each weight recalculation). The biggest concern would be too-infrequent utilization updates, but even with server-side smoothing PD won't be able to function in such a case, as there is no feedback loop.

Part of the concern about server-side smoothing is the smoothing period matters to the D behavior. Having it server-side would make it harder to guarantee that all the knobs are self-consistent and harder to change the smoothing period.

Aside: Understanding the server utilization monitoring period is essential for monitoring spikiness. If the PD oscillates at a rate faster than the server's utilization monitoring frequency, then you won't see the oscillations even though they are happening. This is a risk I've seen in multiple utilization-based LB schemes that prioritize fast updates.

(I've done further review/consideration since this conversation so I'll have additional comments on this that I'll post separately. Specifically I'm considering slower/larger RPCs and thus longer weight_expiration_period cases, which will have interplay here.)

Contributor Author:

Trying to brainstorm new ideas here: what do you think about changing the structure of load reports a bit so that they additionally and optionally report the timestamp at the time of measurement and the CPU time as a monotonically increasing counter? Basically, we take the same interface the kernel provides us via the getrusage syscall and make the server report the raw values from this syscall without doing any aggregation. Aggregation can then be done on the client, and we can use any type of smoothing we want and still get perfect accuracy. The PID balancer would try to use this new load report data and, if it is not present, fall back to the default CPU utilization.

I am not 100% sure what would be the best way to handle AppUtilization in this case, but we have a couple of options:

  • Make the new load report field more generic (e.g. call it UtilizationCounter) and make it take precedence over both CpuUtilization and AppUtilization.
  • Make the existing AppUtilization field take precedence over the new field. In this case we can also either apply or skip client-side smoothing for AppUtilization.
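To make the raw-counter idea above concrete, here is a hedged Go sketch of how a client could derive utilization from two consecutive samples. The `rawSample` fields and the helper are hypothetical, not an existing ORCA or load-report API; everything is expressed in plain seconds to keep the sketch dependency-free.

```go
// rawSample mirrors the hypothetical new load-report fields: a measurement
// timestamp and CPU time as a monotonically increasing, getrusage-style counter.
type rawSample struct {
	timestampSeconds float64 // wall-clock time of the measurement
	cpuSeconds       float64 // cumulative CPU seconds consumed by the server
}

// utilizationBetween derives CPU utilization from two consecutive samples:
// CPU seconds burned divided by wall-clock seconds elapsed, normalized by the
// number of cores so the result is comparable to the existing CpuUtilization.
// Any smoothing (moving average, EMA, ...) can then be applied on the client.
func utilizationBetween(prev, cur rawSample, cores float64) float64 {
	wall := cur.timestampSeconds - prev.timestampSeconds
	if wall <= 0 || cores <= 0 {
		return 0
	}
	return (cur.cpuSeconds - prev.cpuSeconds) / (wall * cores)
}
```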

@ejona86 (Member) left a comment:

Mark, Doug, Craig, and I discussed this on Friday. These are essentially our notes from the meeting with some minor extra review items from me. I tried to say "we" for the meeting and "I" for the other things. But I'm also going to follow this with a review just from myself.

// Controls the convergence speed of the PID controller. Higher values accelerate convergence but may induce oscillations,
// especially if server load changes more rapidly than the PID controller can adjust. Oscillations might also occur due to
// significant delays in load report propagation or extremely spiky server load. To mitigate spiky loads, server owners should
// employ a moving average to smooth the load reporting. Default is 0.1.
Member:

It would be very hard for us to change these values later. We're thinking it'd be better for each service to define these. That'd mean we wouldn't define defaults for PD and would instead require that they be specified in the service config.

We'd still provide a suggestion. But we'd have a way to change that suggestion over time.




## Abstract

This document proposes a design for a new load balancing policy called pid. The term pid stands for [Proportional–integral–derivative controller](https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller). This policy builds upon the [A58: weighted_round_robin LB policy (WRR)][A58] and requires direct load reporting from backends to clients. Similar to wrr, it utilizes client-side weighted round robin load balancing. However, unlike wrr, it does not determine weights deterministically. Instead, it employs a feedback loop with the pid controller to adjust the weights in a manner that allows the load on all backends to converge to the same value. The policy supports either per-call or periodic out-of-band load reporting as per [gRFC A51][A51].
Member:

I'd appreciate making it more clear earlier this is a PD controller, not full PID.


The `pid` LB policy config will be as follows.

```textproto
Member:

This is "protobuf" as it it the schema. Textproto is textual representation for a message.

```

Here is how `pid` balancer implements `wrrCallbacks` interface.
```
Member:

It'd be great to define this as ```go, just to have some level of syntax highlighting.


The `update` method is expected to be called on a regular basis, with `samplingInterval` being the duration since the last update. The return value is the control signal which, if applied to the system, should minimize the control error. In the next section, we'll discuss how this control signal is converted to `wrr` weight.

The `proportionalGain` and `derivativeGain` parameters are taken from the LB config. `proportionalGain` should be additionally scaled by the `WeightUpdatePeriod` value. This is necessary because derivative error is calculated like `controlErrorDerivative = (this.controlError - previousError) / samplingInterval.Seconds()` and dividing by a very small `samplingInterval` value makes the result too big. `WeightUpdatePeriod` is roughly equal to `samplingInterval` as we will be updating the PID state once per `WeightUpdatePeriod`.
Member:

"scaled" in which way? Let's not assume the implementer will get that right. (Let's explicitly say "multiply by weight_update_period as a floating point number of seconds".)

I think I understand proportionalGain to be scaled, but I'm quite surprised by the reasoning as that would seem to argue derivativeGain is the thing to scale. Instead of scaling derivativeGain we could just remove the division by samplingInterval, which then doesn't assume they are roughly equal. Although the problem isn't actually the small samplingInterval but instead noisiness of the utilization, which could be amplified by the small interval.

(Future comments about weight_expiration_period may change exactly what we do here.)
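For concreteness, a hedged Go sketch of a plain PD update with the scaling written out explicitly, as asked for above; the type and field names are illustrative, not the gRFC's definitions:

```go
// pdController is an illustrative proportional-derivative controller.
type pdController struct {
	proportionalGain float64 // per the proposal: the configured gain multiplied by
	                         // weight_update_period as a floating point number of seconds
	derivativeGain float64
	previousError  float64
}

// update returns the control signal for the current sample.
// samplingIntervalSeconds is the time since the previous update, in seconds.
func (c *pdController) update(referenceSignal, actualSignal, samplingIntervalSeconds float64) float64 {
	controlError := referenceSignal - actualSignal
	// Dividing by a small interval amplifies any noise in the utilization, which
	// is why the gains and the update cadence have to stay consistent with each other.
	controlErrorDerivative := (controlError - c.previousError) / samplingIntervalSeconds
	c.previousError = controlError
	return c.proportionalGain*controlError + c.derivativeGain*controlErrorDerivative
}
```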


// Maximum allowable weight. Weights proposed by the PID controller exceeding this value will be capped.
// This prevents infinite weight growth, which could occur if only a subset of clients uses PID and increasing weights
// no longer effectively corrects the imbalance. Default is 10.
Member:

FYI: We were looking to see if there were ways to make the policy very safe by default. One option was to strongly restrict the range of weights which would limit the possible damage. However, WRR's main purpose is to balance load with non-homogeneous servers. Heavily restricting the range of weights only works with homogeneous servers, so we dismissed this idea.

@ejona86 (Member) left a comment:

This one is the "personal" review that I promised. Consider it food for thought. Y'all have practical experience with the PID, and I'm not trying to override that as I'm more of a newbie. But I'm also trying to avoid gotchas. See what resonates with you.

(I've picked this up several times since I mentioned it, so there may be cases that are fragmented/have strange flow.)

meanUtilization = data.meanUtilization

// call the PID controller to get the value of the control signal.
controlSignal = data.pidController.update({
Member:

I think it is wrong to update the PD controller in response to a load report. P and D are expected to be used at a certain update frequency, and having it vary based on RPC rate means we lose the benefit of prior art. P could be corrected for in the PID controller, but the feedback loop sees delay, and it would require high, steady QPS. Updating the PID should be done as part of the weight updates, which is the control update. I agree that moving the update away from the utilization update creates new problems with sharing code, but I think they are easier to reason about.

Hear me out and tell me what you think:

The data plane just updates utilization (stored in weight; it'd be renamed/tweaked). The control plane runs the PID controller when updating weights. When updating weights, the data-plane's utilization is integrated into an exponential moving average used by the control plane for the PID controller. That average would be additional data, but lastAppliedWeightPerSubchannel and utilizationPerSubchannel would go away. samplingInterval would go away; derivative would always use 1 and users would assume weight_update_period when configuring the PID policy.

As an option to improve utilization: Data plane sums received utilizations and increments a counter for the number received. Control plane computes the average utilization by sum/count. Requires a lock+extra storage.

Only updating utilization on the data plane does make the "low QPS" problem more obvious. This is actually a pretty big existing problem, as 1) the weights aren't actually being utilized and 2) we aren't getting (non-OOB) load reports. That means both parts of the feedback loop are busted. For WRR, low QPS just means stale weights. But PID can accumulate error and make very bad choices if there is a burst. PID needs QPS to be high enough for at least one RPC per endpoint each weight_update_period. Increasing weight_update_period is a good solution to this, up to a point. As is also decreasing weight_expiration_period. But we should probably also consider skipping weight updates if there was no RPC / no load report received since the last weight update. That makes convergence slower but also reduces the likelihood of harm.

Should we also disable/discourage OOB reporting? That's useful for WRR for long idle periods, but long idle periods are broken for PID. It is useful for reduced network and CPU cost, but I don't think people would be choosing it for that. If we keep it, we'd need to track whether an RPC has been received in the last period (or rely just on weight_update_period/weight_expiration_period); if we drop it we can just look at the last weight update.
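A hedged Go sketch of the restructuring described above, with hypothetical field names: the data plane only records the most recent utilization, and the control plane folds it into an exponential moving average and runs the PD controller once per weight_update_period (reusing the pdController shape sketched earlier):

```go
// subchannelState is hypothetical per-subchannel bookkeeping for this flow.
type subchannelState struct {
	latestUtilization   float64 // written by the data plane; a single float can be
	                            // stored with an atomic, so no lock is required
	smoothedUtilization float64 // EMA maintained only by the control plane
	pid                 pdController
	weight              float64
}

// Data plane: a load report only records the most recent utilization.
func (s *subchannelState) onLoadReport(utilization float64) {
	s.latestUtilization = utilization
}

// Control plane: called once per weight_update_period for every subchannel.
func (s *subchannelState) updateWeight(meanUtilization, emaAlpha float64) {
	// Fold the data-plane value into an exponential moving average; this takes
	// the place of server-side smoothing.
	s.smoothedUtilization = emaAlpha*s.latestUtilization + (1-emaAlpha)*s.smoothedUtilization

	// The PD controller now runs at a fixed cadence, so the derivative can simply
	// use a sampling interval of 1 (i.e. one weight_update_period).
	controlSignal := s.pid.update(meanUtilization, s.smoothedUtilization, 1)

	// One possible application of the signal (the additive form discussed further
	// down in this review); the exact weight formula is whatever the gRFC settles on.
	s.weight += controlSignal * s.weight
}
```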

// * If 2 updates are very close to each other in time, samplingInterval ~= 0 and signal ~= infinity.
// * If multiple updates happened during a single WeightUpdatePeriod, the actual weights are not applied,
// but the PID controller keeps growing the weights and it may easily pass the balancing point.
if time.Since(lastApplied) < conf.WeightUpdatePeriod {
Member:

weight was calculated on the data plane in WRR to avoid using a lock. It was one value and could be updated with an atomic. Here I see lastApplied, controlError, last applied weight, and utilization, which means it'd be hard to avoid a lock. That removes the benefit of calculating weight on the data plane, and I expect we'd prefer doing the calculation during weight updating in that case. But I'll note that if we track just the most recent utilization on the data plane, no lock would be needed.

})

// Normalize the signal.
// If meanUtilization ~= 0 the signal will be ~= 0 as well, and convergence will become painfully slow.
Member:

I think the reasoning here is wrong. If meanUtilization is ~= 0 then to correct a 5%pt error you need to move a higher percentage of traffic than when meanUtilization is high. 'Weight' moves percentage of traffic, so the imbalance is higher at lower utilization and thus we'd expect it to take longer to converge. If we were generating traffic, the control could be QPS and "increase by X QPS" produces the same adjustment at high and low utilization. So this is a property of weight being relative to the mean instead of an absolute.

The effect of increasing the weight by 5 depends on the average weight and increasing it by 5% has a different effect on the signal at different utilizations. So I follow what you're trying to do. But I think this becomes more clear if we use the typical PID addition to adjust the control:

weight = lastAppliedWeight + controlSignal / meanUtilization * weight;

Dividing controlSignal by meanUtilization is the same as dividing the input to update():

controlSignal = data.pidController.update({
  referenceSignal:  1,
  actualSignal:     utilization / meanUtilization,
  samplingInterval: time.Since(lastApplied),
})
// weight could be moved earlier as well
weight = lastAppliedWeight + controlSignal * weight;

This is much easier for me to reason about. I can see now the unit for P is "% of weight/util%" and thus put the default P = .1 into context. That still has issues near zero, but now it is clearer that the noise is probably coming from the utilizations and not from dividing by a small number. It's also clear that below a threshold we could just set actualSignal to 1 to disable the PID (without messing up the derivative).

(I think we could use meanWeight instead of weight for scaling, and it'd be easier to reason about how the parallel PIDs interact. But weight would converge faster for heterogeneous servers (like what WRR targets). I am tempted to use meanWeight for scaling, and apply min/max_weight to the result, but multiply those weights by utilization/qps from WRR to account for different machine sizes. That might allow a tighter min/max_weight range. But... complexity, boiling the ocean, yada yada...)

// Maximum allowable weight. Weights proposed by the PID controller exceeding this value will be capped.
// This prevents infinite weight growth, which could occur if only a subset of clients uses PID and increasing weights
// no longer effectively corrects the imbalance. Default is 10.
google.protobuf.FloatValue max_weight = 5;
Member:

There's no inherent "anchor" or "center" point to these weights. Because of D, the average weight will stray from 1. (As will scaling controlSignal by weight instead of meanWeight, but D is unavoidable.) The only things preventing runaway are min/max_weight; some backends will be clamped and the backends at the opposite extreme can still adjust. When that happens, though, the speed of convergence will be halved.

We can't avoid the drift by giving a convenient meaning to weight, as it is inherently relative to just the average. But if we were so inclined, we could continually re-center the weights around 1. That is mostly easy: compute (sum(weights) - endpoint_count) / endpoint_count and subtract that from each weight (... still clamping to max/min_weight). FWIW, there would remain subtleties if min/max_weight are still hit (because of a large range of server capability).
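A hedged Go sketch of the re-centering step described above; the clamping is kept deliberately simple and does not address the remaining min/max_weight subtleties:

```go
// recenterWeights shifts all weights by the same amount so that their mean is
// pulled back to 1, then clamps to [minWeight, maxWeight]. The offset is the
// "(sum(weights) - endpoint_count) / endpoint_count" adjustment from the
// comment above.
func recenterWeights(weights []float64, minWeight, maxWeight float64) {
	if len(weights) == 0 {
		return
	}
	sum := 0.0
	for _, w := range weights {
		sum += w
	}
	offset := (sum - float64(len(weights))) / float64(len(weights))
	for i, w := range weights {
		w -= offset
		if w < minWeight {
			w = minWeight
		}
		if w > maxWeight {
			w = maxWeight
		}
		weights[i] = w
	}
}
```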

@ejona86 (Member) commented Sep 30, 2024:

Oh, and I know the biggest issue with some of my suggestions is they can partly/wholly invalidate the testing you've done. The most important part of this is to have confidence in the result.
