Host Load Prediction in a Google Compute Cloud with a Bayesian Model
Sheng Di1, Derrick Kondo1, Walfredo Cirne2
1INRIA
2Google
2/28
Outline
- Motivation of Load Prediction
- Google Load Measurements & Characterization
- Pattern Prediction Formulation
  - Exponential Segmented Pattern (ESP) Prediction
  - Transformation of Pattern Prediction
- Mean Load Prediction based on Bayes Model
  - Bayes Classifier
  - Features of Load Fluctuation
- Evaluation of Prediction Effect
- Conclusion
3/28
Motivation (Who Needs Load Prediction?)
- From the perspective of on-demand allocation: users' resources and QoS are sensitive to host load.
- From the perspective of system performance (stable load vs. unstable load): the system runs best in a load-balanced state, where load bursts can be released as soon as possible.
- From the perspective of green computing (resource consolidation): shutting down idle machines saves electricity costs.
4/28
Google Load Measurements & Characterization
Overview of the Google trace:
- Google released a one-month trace in Nov. 2011 (40 GB of disk space)
- 10,000+ machines at Google
- 670,000 jobs and 25 million tasks in total
- Task: the basic resource-consumption unit
- Job: a logical computation object that contains one or more tasks
5/28
Google Load Measurements & Characterization
Load Comparison between Google and Grid (GWA)
- Google host load fluctuates with much higher noise (min / mean / max noise):
  - Google: 0.00024 / 0.028 / 0.081
  - AuverGrid: 0.00008 / 0.0011 / 0.0026
- Google's mean noise is more than 20 times higher.
6/28
Pattern Prediction Formulation
Exponentially Segmented Pattern (ESP)
- The host-load fluctuation over a period is split into a set of consecutive segments whose lengths increase exponentially.
- We predict the mean load over each time segment: l1, l2, ...
- The interval immediately before the current moment serves as the evidence window.
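As a sketch of the segmentation above: segment k can be given length b·2^k. The talk only states that segment lengths grow exponentially; the base length b and the number of segments below are illustrative assumptions.

```python
# Build exponentially segmented interval boundaries: segment k has length b * 2**k.
# b (base segment length) and n_segments are illustrative assumptions.

def esp_boundaries(t0, b, n_segments):
    """Return [t0, t1, ..., tn], where segment k spans [t_k, t_{k+1}) of length b * 2**k."""
    bounds = [t0]
    for k in range(n_segments):
        bounds.append(bounds[-1] + b * 2 ** k)
    return bounds

print(esp_boundaries(0, 1, 4))  # [0, 1, 3, 7, 15]: segment lengths 1, 2, 4, 8
```

Each predicted level l_k is then the mean load over one such segment.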
7/28
Pattern Prediction Formulation (Cont'd)
Reduction of the ESP Prediction Problem
- Idea: derive each segment level l_i from the mean loads (denoted η_i) over the intervals [t0, t_i].
- Given t0, (t_{i-1}, η_{i-1}) and (t_i, η_i), we can compute
  l_i = (η_i (t_i − t0) − η_{i-1} (t_{i-1} − t0)) / (t_i − t_{i-1})
- Two key steps in the pattern prediction algorithm:
  1. Predict the mean values over intervals of length b·2^k from the current point.
  2. Transform the set of mean-load predictions into the ESP.
[Figure: time series with current time point t0 and segment boundaries t1, t2, t3, t4]
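The transformation above follows from the identity η_i(t_i − t0) = η_{i−1}(t_{i−1} − t0) + l_i(t_i − t_{i−1}); a minimal sketch:

```python
# Recover per-segment mean loads l_i from cumulative means eta_i over [t0, t_i].
# Identity: eta_i*(t_i - t0) = eta_{i-1}*(t_{i-1} - t0) + l_i*(t_i - t_{i-1}).

def segment_means(t, eta):
    """t = [t0, t1, ..., tn]; eta[i] = mean load over [t0, t[i+1]]."""
    l = []
    prev_sum = 0.0  # accumulates eta_{i-1} * (t_{i-1} - t0)
    for i, e in enumerate(eta):
        total = e * (t[i + 1] - t[0])
        l.append((total - prev_sum) / (t[i + 1] - t[i]))
        prev_sum = total
    return l

# Sanity check: a constant load of 0.5 yields 0.5 in every segment.
print(segment_means([0, 1, 3, 7], [0.5, 0.5, 0.5]))  # [0.5, 0.5, 0.5]
```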
8/28
Traditional Approaches to Mean Load Prediction
- Can a feedback-control model work? No.
  Example: the Kalman filter. Reason: one-step look-ahead prediction does not fit our long-interval prediction goal.
- Can we use short-term prediction error to guide long-term prediction feedback? No.
- Can traditional methods such as linear models fit Google host-load prediction? E.g., Simple Moving Average, Auto-Regression (AR), etc.
[Figure: two 16-hour host-load traces]
9/28
Mean Load Prediction based on Bayes Model
Principle of the Bayes Model (Why Bayes?)
- We rely on the posterior probability rather than the prior probability, trusting probabilities learned from observed samples.
- Naïve Bayes Classifier (N-BC) predicted value: the load state with the highest posterior probability, l̂ = argmax_j P(l_j | e)
- Minimized MSE Bayes Classifier (MMSE-BC) predicted value: the posterior mean, l̂ = Σ_j l_j · P(l_j | e)
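The two predictors can be sketched as follows; the state midpoints and posterior values below are made up for illustration, not learned from the trace:

```python
# Two Bayes predictors over discretized mean-load states.
# N-BC picks the state with maximum posterior probability;
# MMSE-BC returns the posterior expectation, which minimizes mean squared error.
# The posterior below is a made-up illustration, not trained from the trace.

states = [0.01, 0.03, 0.05, 0.07]   # midpoints of load-state intervals
posterior = [0.1, 0.5, 0.3, 0.1]    # P(state | evidence)

nbc = states[max(range(len(states)), key=lambda j: posterior[j])]
mmse = sum(s * p for s, p in zip(states, posterior))

print(nbc)   # 0.03: the most probable state
print(mmse)  # posterior mean, approximately 0.038
```

N-BC answers "which state is most likely?", while MMSE-BC trades that for the estimate with the lowest expected squared error.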
10/28
Why Do We Use the Bayes Model? Special Advantages
The Bayes method can:
1. effectively retain important features of load fluctuation and noise, rather than ignoring them;
2. dynamically improve prediction accuracy, since the probabilities become more accurate as they are updated over a growing number of samples;
3. estimate the future with low computational complexity, thanks to quick probability calculation;
4. take only limited disk space, since it just needs to keep and update the corresponding probability values.
11/28
Mean Load Prediction based on Bayes Model
Implementation of the Bayes Classifier
- Evidence window: an interval ending at the current moment
- States of mean load for the prediction interval: r states (e.g., r = 50 means there are 50 mean-load states to predict: [0, 0.02), [0.02, 0.04), ..., [0.98, 1])
- Key point: how do we extract features from the evidence window?
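The state discretization can be sketched as follows, assuming equal-width intervals over [0, 1] as in the r = 50 example:

```python
# Discretize a load value in [0, 1] into one of r equal-width states,
# e.g. r = 50 gives [0, 0.02), [0.02, 0.04), ..., [0.98, 1].

def load_state(load, r=50):
    return min(int(load * r), r - 1)  # clamp so load == 1.0 maps to the last state

print(load_state(0.035))  # 1, i.e. the state [0.02, 0.04)
print(load_state(1.0))    # 49, i.e. the state [0.98, 1]
```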
12/28
Mean Load Prediction based on Bayes Model
Features of Host Load in the Evidence Window
1. Mean Load State (Fml(e))
2. Weighted Mean Load State (Fwml(e))
3. Fairness Index (Ffi(e))
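A sketch of the fairness index, assuming the standard Jain index (Σx)² / (n·Σx²); whether the talk uses exactly this form is an assumption:

```python
# Fairness index over the load samples in the evidence window,
# assuming Jain's index: FI = (sum x)^2 / (n * sum x^2).
# FI = 1 for a perfectly even load; it drops toward 1/n as load concentrates.

def fairness_index(samples):
    n = len(samples)
    s = sum(samples)
    return s * s / (n * sum(x * x for x in samples))

print(fairness_index([0.25, 0.25, 0.25, 0.25]))  # 1.0: perfectly stable load
```

A high fairness index over the window indicates stable load, a low one indicates jitter.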
13/28
Mean Load Prediction based on Bayes Model
4. Noise-decreased fairness index (Fndfi(e)): the fairness index with load outliers removed
5. Type state (Fts(e)): captures the degree of jitter. Representation: (α, β), where α = the number of types (i.e., state levels) and β = the number of state changes.
[Figure: example evidence-window trace (load between 0.00 and 0.10) with α = 4 state levels and β = 8 state changes over the prediction interval]
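A sketch of extracting the type state from an already-discretized state sequence, following the slide's definitions of α and β:

```python
# Type state (alpha, beta) of a sequence of discretized load states:
# alpha = number of distinct state levels, beta = number of state changes.

def type_state(states):
    alpha = len(set(states))
    beta = sum(1 for a, b in zip(states, states[1:]) if a != b)
    return alpha, beta

# Example resembling the slide's figure: 4 distinct levels, 8 changes.
print(type_state([1, 2, 1, 3, 1, 4, 1, 2, 1]))  # (4, 8)
```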
14/28
Mean Load Prediction based on Bayes Model
6. First-Last Load (Ffll(e)) = {first load level, last load level}
7. N-segment Pattern (FN-sp(e)): the evidence window is split into N equal segments, and the mean load of each segment is recorded. For the trace in the figure:
- F2-sp(e): {0.01, 0.03}
- F3-sp(e): {0.02, 0.04, 0.04}
- F4-sp(e): {0.02, 0.02, 0.05, 0.05}
[Figure: example evidence-window trace (load between 0.00 and 0.10) followed by the prediction interval]
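A sketch of the N-segment pattern, assuming the window splits into N equal-size parts:

```python
# N-segment pattern: split the evidence window into N equal parts and
# record the mean load of each part (window length assumed divisible by N).

def n_segment_pattern(loads, n):
    size = len(loads) // n
    return [sum(loads[i * size:(i + 1) * size]) / size for i in range(n)]

loads = [0.0, 0.5, 0.25, 0.75]       # illustrative samples, not trace data
print(n_segment_pattern(loads, 2))   # [0.25, 0.5]
```

Larger N retains a finer-grained shape of the window; F1-sp would reduce to the plain mean load.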
15/28
Mean Load Prediction based on Bayes Model
Correlation of Features
- Linear correlation coefficient
- Rank correlation coefficient
16/28
Mean Load Prediction based on Bayes Model
Compatibility of Features
- The features are split into four groups: {Fml, Fwml, F2-sp, F3-sp, F4-sp}, {Ffi, Fndfi}, {Fts}, {Ffll}
- Total number of compatible combinations:
17/28
Evaluation of Prediction Effect (Cont'd)
List of well-known load prediction methods:
- Simple Moving Average: the mean value in the evidence window (EW)
- Linear Weighted Moving Average: the linearly weighted moving-average value in the EW
- Exponential Moving Average
- Last-State: use the last state in the EW as the prediction value
- Prior Probability: the value with the highest prior probability
- Auto-Regression (AR): improved recursive AR
- Hybrid Model [27]: Kalman filter + Savitzky–Golay (SG) filter + AR
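Minimal sketches of several of the listed baselines over an evidence window `w` (oldest sample first); the EMA smoothing factor is an illustrative assumption, not the paper's setting:

```python
# Baseline mean-load predictors over an evidence window w (oldest sample first).
# Minimal sketches of the listed methods, not the paper's exact configurations.

def simple_moving_average(w):
    return sum(w) / len(w)

def linear_weighted_moving_average(w):
    # Weight k on the k-th oldest sample, so recent samples count more.
    weights = range(1, len(w) + 1)
    return sum(x * k for x, k in zip(w, weights)) / sum(weights)

def exponential_moving_average(w, alpha=0.5):  # alpha = 0.5 is an assumed value
    ema = w[0]
    for x in w[1:]:
        ema = alpha * x + (1 - alpha) * ema
    return ema

def last_state(w):
    return w[-1]

w = [0.0, 0.5, 0.25, 0.75]           # illustrative window, not trace data
print(simple_moving_average(w))      # 0.375
print(last_state(w))                 # 0.75
```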
18/28
Evaluation of Prediction Effect (Cont'd)
Training and Evaluation
- Evaluation Type A (the case with insufficient samples): training period [day 1, day 25] (only 18,000 load samples); test period [day 26, day 29]
- Evaluation Type B (the ideal case with sufficient samples): training period [day 1, day 29] (emulating a larger set of samples); test period [day 26, day 29]
19/28
Evaluation of Prediction Effect (Cont'd)
Evaluation Metrics for Accuracy
- Mean Squared Error (MSE): MSE = (1/n) Σ_i (l̂_i − l_i)², where the l_i are the true mean values and the l̂_i are the predictions.
- Success rate (with a delta of 10%) over the test period:
  success rate = (number of accurate predictions) / (total number of predictions)
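The two metrics can be sketched as follows; interpreting the 10% delta as an absolute load difference is an assumption:

```python
# Evaluation metrics from the slide: MSE and success rate with a 10% delta.
# A prediction counts as accurate when it is within delta of the true value
# (treating the 10% delta as an absolute load difference is an assumption).

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

def success_rate(pred, true, delta=0.1):
    hits = sum(1 for p, t in zip(pred, true) if abs(p - t) <= delta)
    return hits / len(true)

pred = [0.25, 0.5, 0.8]              # illustrative values, not trace results
true = [0.25, 0.75, 0.75]
print(mse(pred, true))               # (0 + 0.0625 + 0.0025) / 3, about 0.0217
print(success_rate(pred, true))      # 2/3: two of three predictions within 0.1
```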
20/28
Evaluation of Prediction Effect (Cont'd)
1. Exploration of the Best Feature Combination (Success Rate): Evaluation Type A
- Representation of feature combinations: a 9-bit string over the nine features; e.g., 101000000 denotes the combination of the mean-load feature and the fairness-index feature.
[Figures: (a) s = 3.2 hours, (b) s = 6.4 hours, (c) s = 12.8 hours]
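The bit-string representation can be decoded as below; the exact feature ordering is inferred from the slide's example (Fml in position 1, Ffi in position 3) and is an assumption:

```python
# Decode a feature-combination string as a bitmask over the nine features.
# The feature ordering is inferred from the slide's example and is an assumption.

FEATURES = ["Fml", "Fwml", "Ffi", "Fndfi", "Fts", "Ffll",
            "F2-sp", "F3-sp", "F4-sp"]

def decode_combination(mask):
    return [f for f, bit in zip(FEATURES, mask) if bit == "1"]

print(decode_combination("101000000"))  # ['Fml', 'Ffi']
```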
21/28
Evaluation of Prediction Effect (Cont'd)
1. Exploration of the Best Feature Combination (Mean Squared Error)
[Figures: (a) s = 3.2 hours, (b) s = 6.4 hours, (c) s = 12.8 hours]
22/28
Evaluation of Prediction Effect (Cont'd)
2. Comparison of Mean Load Prediction Methods (Success Rate of CPU Load w.r.t. Evaluation Type A)
[Figures: (a) s = 6.4 hours, (b) s = 12.8 hours]
23/28
Evaluation of Prediction Effect (Cont'd)
2. Comparison of Mean Load Prediction Methods (MSE of CPU Load w.r.t. Evaluation Type A)
[Figures: (a) s = 6.4 hours, (b) s = 12.8 hours]
24/28
Evaluation of Prediction Effect (Cont'd)
4. Comparison of Mean-Load Prediction Methods (CPU Load w.r.t. Evaluation Type B)
- Best feature combination: mean load + fairness index + type state + first-last load
25/28
Evaluation of Prediction Effect (Cont'd)
5. Evaluation of the Pattern Prediction Effect
- Metrics: mean error and mean MSE
26/28
Evaluation of Prediction Effect (Cont'd)
5. Evaluation of the Pattern Prediction Effect
- Snapshot of pattern prediction (Evaluation Type A)
27/28
Conclusion
- Objective: predict the ESP of host-load fluctuation
- Two-step algorithm:
  - Mean-load prediction over the exponentially growing intervals from the current moment
  - Transformation to the ESP
- Bayes model (for mean-load prediction):
  - Exploration of the best-fit combination of features
  - Comparison with 7 other well-known methods
- Experiments use the Google trace:
  - Evaluation Type A: the Bayes model ({Fml}) outperforms the others by 5.6–50%
  - Evaluation Type B: {Fml, Ffi, Fts, Ffll} is the best combination
  - MSE of the pattern predictions: the majority fall within [10⁻⁸, 10⁻⁵]
28/28
Thanks! Questions?