Full Paper Proc. of Int. Conf. on Advances in Computer Engineering 2012

A Stable and Agile Rate Estimation Technique

Mary Looney, Oliver Gough
Department of Electronic Engineering, Cork Institute of Technology, Cork, Ireland
Email: {mary.looney, oliver.gough}@cit.ie
© 2012 ACEEE DOI: 02.ACE.2012.03.4

Abstract—Traffic rate estimation is an essential part of traffic management and control in producing Quality of Service (QoS) enabled networks. Several techniques have been proposed for efficient traffic rate estimation. Ideally, these estimators should be agile, stable and accurate: able to track changes in the traffic rate quickly, yet to ignore short-term changes that are natural to the traffic, so as to produce accurate results. However, a single rate estimator cannot always be configured to be both agile and stable. In this paper we propose a rate estimation algorithm that uses two rate estimation techniques in a flip-flop based approach, enabling it to be agile in measuring the actual changes of traffic in a timely and accurate manner as well as stable in ignoring short-term variations of the traffic. Existing TSW and EWMA algorithms are investigated to determine those that work best with the proposed estimator in terms of agility and stability. Simulation analysis is used to compare the performance of the proposed algorithm with that of an existing flip-flop filter. Quantitative results demonstrate the improved performance of the proposed estimator over the existing flip-flop filter as an agile, stable and accurate rate estimation technique.

Index Terms—Rate Estimation, Metering, Flip-flop Filter

I. INTRODUCTION
Traffic rate estimation is an integral part of many high-speed network services and components. These real-time estimations are required by algorithms for traffic management, conditioning, scheduling, monitoring and admission control. Due to the inherently bursty nature of Internet traffic, traffic rate estimation is not always an easy task: short-term changes may obscure output results, and a change in traffic rate may not always be detected easily. Hence, a rate estimator is required to be accurate, agile, stable and cost effective [1]. An accurate rate estimator should provide an estimated rate close to the actual rate. An agile rate estimator should track changes in the actual data rate of the traffic in a timely and accurate manner. A stable rate estimator should ignore short-term changes in traffic behaviour that are natural to the traffic. A cost-effective estimator should be fast and simple, requiring neither much computational power to process samples of data nor much memory to store them. However, not all existing rate estimation techniques satisfy all of these characteristics [1]. Of the various rate estimation techniques in existence, the TSW and EWMA filters are the most widely known recursive techniques. The TSW rate estimator was proposed in [2] as the profile meter of a traffic conditioning technique and was subsequently specified in RFC 2859 [3] as a rate estimation technique. The EWMA is traditionally a weight-based estimator but may be used for rate estimation if its configurable weight translates into a time window [3]. Although these algorithms have been widely used as rate estimation techniques, they suffer from one main limitation: they can be configured as either agile or stable estimators, but not both at the same time. Their performance therefore depends on a configurable parameter that dictates the agility or stability of the filter [1, 4]. To overcome this drawback, the configurable parameter could be adjusted adaptively; however, this is only applicable to traffic that changes quite smoothly. Hence, a rate estimation technique known as the flip-flop filter was proposed in [4], using two rate estimation techniques (TSW filters), one configured to be agile and the other stable. A controller determines which output should be chosen as the estimated rate. Although this approach has been shown to provide both agile and stable estimated rates, quantitative measures of agility and stability have not been discussed. Additionally, the flip-flop uses a controller based on the 3-sigma rule, which assumes that the sample population it works with is normally distributed; the technique may therefore not be applicable to more realistic heavy-tailed distributions. The purpose of this paper is to quantitatively analyse the flip-flop filter in terms of accuracy, agility and stability. Additionally, it proposes a rate estimation technique known as SARE (Stable Agile Rate Estimator) that is similar to the flip-flop filter but differs in its use of filters and controller, allowing for more stable and agile results.
As both techniques are composed of TSW or EWMA filters, we also investigate the ability of these single estimators to produce agile, stable and accurate results. The paper is organised as follows: Section II investigates existing rate estimation techniques. Section III introduces the proposed rate estimation technique, SARE. Section IV presents simulation analysis of the proposed approach in comparison to the flip-flop filter, as well as of the TSW and EWMA algorithms, in terms of accuracy, agility, stability and cost. The paper concludes in Section V.

II. RATE ESTIMATION TECHNIQUES
As already mentioned, the TSW and EWMA rate estimation techniques are the most commonly used recursive rate estimators. We now discuss these estimators, and variations of them, in terms of accuracy, agility, stability and cost.

A. TSW Filter
The TSW was designed to eliminate the dependency on the inter-packet arrival time seen with other estimators. It employs a rectangular data-weighting function, in the form of a window length of time, to decay the estimated rate over time. This window length is usually established a priori, based on the assumed or specified characteristics of the traffic flow. It is this pre-configured parameter that establishes the TSW as an agile or a stable filter; it cannot be configured to be both at once, which limits the algorithm. When configured with an optimal (large) window length, the TSW has proven to be a stable estimator [1]. Configuring the TSW as an agile filter requires only a small window length, and with optimal values the TSW tracks changes in the actual data rate of the traffic in a timely and accurate manner, proving to be an effective agile filter [1]. Care must be taken in configuring the window length, however: too low a value may over-estimate the rate [5], whereas too large a value may under-estimate it. In terms of accuracy, the TSW produces accurate estimations for different traffic loads [1][6]. With regard to cost, estimations are performed on the arrival of every incoming packet, so the computational cost can be substantial, especially with the advent of high-speed networks [1]. The TSW is therefore an accurate estimator that is not cost effective: it can produce good results in terms of agility or stability, but not both at any one instant. This led to the idea of the flip-flop filter, in which two TSW filters are used, one configured to be agile and the other stable. This technique is capable of producing accurate, stable and agile rate estimations [4][7].
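The per-packet update at the heart of the TSW can be sketched as follows. This follows the rate-estimator pseudocode of RFC 2859 [3]; the class wrapper and variable names are illustrative, not the paper's implementation.

```python
class TSW:
    """Time Sliding Window rate estimator, after RFC 2859.

    win_length controls the agility/stability trade-off:
    small -> agile, large -> stable.
    """

    def __init__(self, win_length, initial_rate=0.0):
        self.win_length = float(win_length)   # window length (seconds)
        self.avg_rate = float(initial_rate)   # estimated rate (bytes/s)
        self.t_front = 0.0                    # time of last update

    def update(self, pk_size, now):
        # Bytes notionally still inside the sliding window.
        bytes_in_win = self.avg_rate * self.win_length
        new_bytes = bytes_in_win + pk_size
        self.avg_rate = new_bytes / (now - self.t_front + self.win_length)
        self.t_front = now
        return self.avg_rate
```

A small `win_length` makes the estimate follow the input closely (agile), while a large one smooths it (stable), matching the trade-off described above.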
A controller based on Statistical Process Control (SPC) decides whether the agile or the stable estimated rate should be used as the output. Typically the stable output is used, and the controller applies the 3-sigma rule to decide when the agile estimated rate should be used instead. (This technique was based on a similar method proposed for RTT estimation [8].) Although this approach is more computationally expensive, the cost is traded for better agility and stability. To use it successfully, however, an accurate estimate of the standard deviation is required. Additionally, the 3-sigma rule controller assumes that the sample population it works with is normally distributed and hence may not be appropriate for other distributions.
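A minimal sketch of this SPC-style selection logic follows, assuming a running mean and mean-absolute-deviation are used to form the 3-sigma control band. The published scheme [4][8] specifies its own smoothing constants; the gain and the deviation estimator here are illustrative assumptions.

```python
class FlipFlopController:
    """Sketch of a 3-sigma (SPC) flip-flop controller.

    Keeps EWMA estimates of the agile filter's mean and mean absolute
    deviation; when an agile sample leaves the mean +/- 3*sigma control
    band, the agile output is selected, otherwise the stable one.
    The gain value is an illustrative assumption.
    """

    def __init__(self, gain=0.1):
        self.gain = gain
        self.mean = None   # running mean of agile samples
        self.dev = 0.0     # running mean absolute deviation

    def select(self, agile_rate, stable_rate):
        if self.mean is None:
            self.mean = agile_rate
            return stable_rate
        # 1.25 * mean-absolute-deviation approximates sigma for
        # normally distributed samples.
        sigma = 1.25 * self.dev
        out_of_control = sigma > 0 and abs(agile_rate - self.mean) > 3.0 * sigma
        # Update the control statistics after the decision.
        self.dev = (1 - self.gain) * self.dev + self.gain * abs(agile_rate - self.mean)
        self.mean = (1 - self.gain) * self.mean + self.gain * agile_rate
        return agile_rate if out_of_control else stable_rate
```

Note how the normality assumption enters through the 3-sigma band: the 99.73% coverage guarantee only holds for normally distributed samples, which is the weakness discussed above.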

B. EWMA Filters
The EWMA estimator is an ideal maximum-likelihood estimator that employs exponential data weighting and is widely used in many different areas and applications [9]. It is the principal method for network-condition estimation and is used to calculate the RTT in TCP congestion control. The EWMA uses a pre-configured weighting factor to shape its memory based on the assumed or specified characteristics of the traffic flow. For rate estimation, its weight should be configured so that it translates into a time window as opposed to a static weight [3]. The value of the weighting factor defines the time constant of the estimator, which determines its agility or stability. When old estimates are given more weight, the filter provides good stability, resisting noise in individual observations. When new observations are given more weight, the filter provides good agility, detecting performance changes quickly. However, as with the TSW, the EWMA can be configured to be agile or stable but not both. Simulation analysis in [1] demonstrates that the EWMA performs poorly in terms of accuracy, agility and stability. We attribute this to the constant weighting factor: the estimate decays according to the packet-length distribution rather than time, as recommended in [3]. The approach in [10] instead uses an exponential weighting factor that allows the estimated rate to converge asymptotically to the real rate, behaving similarly to the TSW. This may produce different results in terms of accuracy, agility and stability; to the authors' knowledge, however, no quantitative analysis exists. In terms of cost, the EWMA is comparable to the TSW, as it also estimates rates on a packet-by-packet basis. Hence the EWMA is comparable to the TSW in cost, and likewise can only be configured to be agile or stable at any one time, although with a static weight its agility and stability performance is not favourable. EWMA filters have also been applied within the flip-flop filter in place of TSW filters [11]; however, simulation analysis demonstrates that this does not provide favourable results, as fluctuations of the agile estimator only introduce instability and error into the system. A number of approaches have been proposed to overcome the drawbacks of the EWMA, including the Dynamic EWMA [12] and the Time Window based EWMA [1].
These algorithms perform estimations periodically rather than on a packet-by-packet basis, reducing the computational complexity of the EWMA. They also propose a dynamic weight factor to overcome the shortcomings of the static weight. Performance results indicate that the Dynamic Weight EWMA produces accuracy comparable to that of the TSW [1]. Its stability results are better than those of the TSW, but it reacts slowly to changes in traffic, indicating poor agility. The Time Window based approach is comparable to the TSW and the Dynamic Weight EWMA in accuracy and shows good stability once optimal algorithm parameters are configured; nonetheless, this comes at a cost in agility, with results demonstrating that the Time Window based EWMA reacts slowly to permanent traffic changes.
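For contrast with the static weight, the exponential (dynamic) weight of [10] can be sketched as follows: the weight is a function of the packet inter-arrival time, so the estimate decays in time rather than per packet. The class structure and names are illustrative.

```python
import math

class EWMADynamic:
    """EWMA rate estimator with a dynamic (exponential) weight,
    after the approach of [10].

    T is the averaging/decay constant in seconds; a large T yields
    a stable estimator, a small T an agile one.
    """

    def __init__(self, T, initial_rate=0.0):
        self.T = float(T)
        self.rate = float(initial_rate)
        self.last_arrival = None

    def update(self, pk_size, now):
        if self.last_arrival is None:
            self.last_arrival = now
            return self.rate
        interpk_time = now - self.last_arrival
        self.last_arrival = now
        if interpk_time <= 0:
            return self.rate
        w = math.exp(-interpk_time / self.T)   # dynamic weight
        inst_rate = pk_size / interpk_time     # instantaneous rate
        self.rate = (1.0 - w) * inst_rate + w * self.rate
        return self.rate
```

Because the weight `w` depends on `interpk_time`, the estimate converges asymptotically to the real rate regardless of the packet-length distribution, which is the property the static-weight EWMA lacks.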

III. STABLE AGILE RATE ESTIMATOR
In this section we propose a rate estimation technique known as SARE. Its purpose is to produce an accurate, agile and stable estimator. Although it follows a similar approach to the flip-flop filter of [4], it eliminates the need for accurate estimations of the standard deviation, and its controller does not assume that the sample population it works with is normally distributed. The proposed algorithm is presented in Fig. 1, where rate estimation is performed on a per-aggregate basis.

Figure 1. Proposed SARE Algorithm

In Fig. 1:
avg_rate_agile: the estimated agile traffic rate.
avg_rate_stable: the estimated stable traffic rate.
avg_rate: the estimated rate used on output.
win_length: the window length of time of the TSW.
T: the period of time over which the EWMA decays its estimated rate.
interpk_time: the inter-arrival time of packets.
pk_size: the packet size.
t-test: the t-test statistical controller.
alpha: the alpha level used for the statistical t-test.
p: the value used to determine whether the controller output is statistically significant.
Two rate estimation techniques are used in SARE: a TSW to determine an agile estimated rate and an EWMA to determine a stable estimated rate. The TSW is configured with a small win_length for agility. The EWMA uses a dynamic weight that is a function of interpk_time and T, as in [10], with a large T value for stability. On each packet arrival, both the agile and stable estimated traffic rates are updated. Since each measurement reveals the latest estimate of the rate, the algorithm can determine whether the latest estimations indicate a statistically significant change in the input traffic. This is achieved with a (one-sample) statistical t-test controller [9]. The most recent estimates are input to this controller, whose purpose is to determine whether a persistent, and not merely transient, change in traffic has been detected. A t-test is performed on the agile estimated rates (with degrees of freedom (dof) chosen so that transient spikes or short bursts of traffic are ignored) against the mean of the stable rates. The t-test returns a p-value indicating whether statistical significance exists in the difference of the means of the agile and stable estimated rates. Statistical significance is indicated if the returned p-value is less than 0.0027 (0.27%). This value is chosen (as opposed to the typical 0.05 used in statistical t-tests) to reflect the characteristics of the 3-sigma rule used by the flip-flop (in a normal distribution, 99.73% of values lie within three standard deviations of the mean), but it eliminates both the assumption that the sample population is normally distributed and the need to estimate a standard deviation. Although a significance level of 0.27% covers a wide distribution, it allows for good agility and stability. If statistical significance is indicated, the agile estimated rate is used as the output estimated rate so that the system reflects the persistent change in traffic; otherwise the stable estimate is used. Overall, it is the premise of the authors that the proposed algorithm will produce accurate estimations through its use of the TSW and EWMA (with dynamic weight) rate estimators. The TSW will produce agile results, whereas the EWMA will produce stable results. The t-test statistical controller allows a persistent change in traffic to be detected, letting the output alternate between the agile and stable estimated rates, thereby tracking changes in the actual data rate of the traffic in a timely and accurate manner whilst ignoring short-term changes in traffic behaviour.

IV. SIMULATION ANALYSIS AND EVALUATION
The purpose of this section is to compare the performance of the proposed SARE algorithm to the flip-flop filter in terms of accuracy, agility, stability and cost. However, as both algorithms are built from combinations of TSW and EWMA rate estimation techniques, these need to be analysed first. Furthermore, as the proposed SARE algorithm uses an EWMA with a dynamic weight, and quantitative analysis of that algorithm in terms of agility, stability and accuracy does not exist, it too must be investigated. (Henceforth we refer to an EWMA with a static weight as EWMA(static) and an EWMA with a dynamic weight as EWMA(dynamic).) This analysis will validate the choice of traffic rate estimation techniques used in the proposed SARE algorithm. The overall performance of the SARE and flip-flop algorithms will then be presented.
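As a concrete recap before the evaluation, the selection step of SARE's controller might look as follows. This is a hypothetical sketch: instead of computing a p-value, it compares the t statistic against a critical value, using 3.0, the large-dof (normal) limit of the two-sided 0.27% point; the exact critical value for a chosen dof would come from a t-table.

```python
import math
from collections import deque
from statistics import mean, stdev

class SAREController:
    """Sketch of SARE's one-sample t-test controller.

    The most recent agile estimates are tested against the mean of
    the recent stable estimates; a statistically significant
    difference selects the agile output, otherwise the stable one.
    The window size (dof + 1) and critical value are illustrative.
    """

    def __init__(self, dof=20, t_critical=3.0):
        self.n = dof + 1                       # sample size of the t-test
        self.agile_hist = deque(maxlen=self.n)
        self.stable_hist = deque(maxlen=self.n)
        self.t_critical = t_critical

    def select(self, avg_rate_agile, avg_rate_stable):
        self.agile_hist.append(avg_rate_agile)
        self.stable_hist.append(avg_rate_stable)
        if len(self.agile_hist) < self.n:
            return avg_rate_stable             # not enough samples yet
        mu0 = mean(self.stable_hist)           # hypothesised mean
        s = stdev(self.agile_hist)
        if s == 0:
            # Degenerate case: identical agile samples; any offset
            # from mu0 is a definite (t -> infinity) shift.
            return avg_rate_agile if mean(self.agile_hist) != mu0 else avg_rate_stable
        t = (mean(self.agile_hist) - mu0) / (s / math.sqrt(self.n))
        # Significant, persistent change -> use the agile estimate.
        return avg_rate_agile if abs(t) > self.t_critical else avg_rate_stable
```

Transient spikes shift only a few of the `n` samples, so the t statistic stays below the critical value and the stable output is kept; a persistent shift moves the whole window and triggers the agile output.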

A. Simulation Setup
The network simulator OPNET is used to implement the algorithms in a high-speed DOCSIS network [13]. The MINITAB software package is used to validate the analysis [14]. The network topology is shown in Fig. 2, with one Cable Modem Termination System (CMTS) acting as the FTP/TCP server and Cable Modems (CMs) as the clients. The downstream data rate is 10 Mbps. Three traffic scenarios are set up, using CBR, Poisson and Pareto traffic sources respectively. CBR and Poisson traffic is used to precisely quantify the behaviour of the estimator, whereas Pareto traffic is used to mimic a realistic environment and to stress the estimators with traffic changes. All traffic used for simulation analysis is TCP, with all TCP flows being long-lived FTP applications with a Maximum Segment Size (MSS) of 1500 bytes. The CBR traffic generator produces a constant traffic load of approximately 6.6 Mbps, with constant file sizes of 25700 bytes (with constant packet lengths) and inter-request times of 0.25 s. The Poisson traffic uses a mean inter-request time of 0.25 s and the same file size as the CBR traffic, with greater variance. To emulate the behaviour of realistic real-time flows, a self-similar traffic flow is generated using Pareto distributions for packet sizes and inter-arrival times; the Pareto shape parameter is 1.9 and the location parameter 0.25. Each experiment is replicated five times with different random seeds, and simulations run for a duration of 10 minutes. To analyse the effect of a change in traffic load, we reduce the traffic rates by 70% after 3 minutes and do not increase them again. Optimal parameters for the TSW, EWMA(static) and EWMA(dynamic), in both agile and stable configurations, are derived using a simple trial-and-error approach; the configured values are outlined in Table I.

TABLE I: OPTIMAL CONFIGURATION PARAMETERS
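As an aside, the heavy-tailed inter-arrival process used to stress the estimators can be sketched with the stated Pareto parameters (shape 1.9, location 0.25); the generator below is an illustration, not the OPNET traffic source.

```python
import random

def pareto_interarrivals(n, shape=1.9, location=0.25, seed=1):
    """Illustrative Pareto inter-arrival generator.

    random.paretovariate(shape) samples a Pareto variate with minimum
    value 1; multiplying by `location` rescales the minimum, so every
    sample is at least `location` seconds.
    """
    rng = random.Random(seed)
    return [location * rng.paretovariate(shape) for _ in range(n)]
```

With shape 1.9 the distribution has a finite mean but infinite theoretical variance (shape < 2), which is exactly the bursty, heavy-tailed behaviour that challenges the 3-sigma controller's normality assumption.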

Figure 2. Network Topology for Simulation Implementation

B. TSW As The Agile Estimator Of SARE
In this section we investigate the TSW, EWMA(static) and EWMA(dynamic) in terms of agility. We use settle time as a quantitative measure of agility, since agility measures how quickly a rate estimator tracks changes in the actual data rate of the traffic. Settle time measures the elapsed time it takes an estimator to produce an estimate within 10% of its nominal value; lower settle times are better. Although agility can also be evaluated by plotting the estimated rate against the actual rate and observing how closely they match, settle times provide a quantitative measure and are therefore preferred for this analysis.
In order to generate settle time measures accurately, a notable change in traffic is required, i.e. a large increase or decrease in traffic should be introduced to the system. The settle time is then the measure of how quickly the estimator recognises this change and produces an estimate within 10% of its nominal value. We therefore measure the settle times when the traffic rate is decreased by 70%. Results are shown in Table II. For all three traffic sources the TSW exhibits the most favourable performance, with the lowest settle times being experienced. EWMA(dynamic) experiences settle times almost three times greater than those of the TSW for all traffic scenarios, whereas EWMA(static) exhibits the largest settle times across all scenarios, up to seven times greater than those of the TSW for the Pareto scenario. Hence, the TSW is the most agile estimator for all three traffic scenarios, validating our choice of the TSW as the agile estimator in SARE.

TABLE II: SETTLE TIMES FOR AGILITY ANALYSIS

C. EWMA As The Stable Estimator Of SARE
The purpose of this section is to validate the choice of EWMA(dynamic) as the stable estimator of SARE. We use the same simulation setup as described in Section A, but without the 70% decrease in traffic. To quantitatively evaluate stability we use two measures: the Coefficient of Variation (CV) and the Mean Squared Error (MSE). CV, the ratio of standard deviation to mean, measures the degree to which measurement noise affects a filter's estimates; lower CV values are better. MSE is used to measure resistance to transients. We use this metric because it penalises filters that are disturbed by large amounts for a short time more than filters that are disturbed by small amounts for a longer time; an error that is large in magnitude is more likely to cause an adaptive system to make a poor decision. Thus, lower MSE values are better.
Although CV and MSE are appropriate for stability analysis, they give no indication of how accurate the estimated rate is: performance may be stable but inaccurate. Hence we also use the relative estimation error (REE) to measure accuracy, defined as the relative error between the estimated rate and the average reference, as shown in (1). CV, MSE and REE results for CBR, Poisson and Pareto traffic are shown in Tables III-V.
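The four evaluation metrics used in this analysis (settle time, CV, MSE and the REE of (1)) can be computed as in the following sketch; the data representation, a list of (time, estimate) samples, is an assumption for illustration.

```python
from statistics import mean, pstdev

def settle_time(estimates, nominal, t_change, band=0.10):
    """Elapsed time after a traffic change at t_change until the
    estimate enters, and stays within, +/-10% of its nominal value.
    `estimates` is a list of (time, estimated_rate) samples."""
    for i, (t, r) in enumerate(estimates):
        if t < t_change:
            continue
        if all(abs(r2 - nominal) <= band * nominal for _, r2 in estimates[i:]):
            return t - t_change
    return None  # never settled within the trace

def cv(rates):
    """Coefficient of variation: standard deviation over mean
    (lower is better)."""
    return pstdev(rates) / mean(rates)

def mse(rates, reference):
    """Mean squared error against the reference rate; penalises large,
    short-lived excursions (lower is better)."""
    return mean((r - reference) ** 2 for r in rates)

def ree(estimated, reference):
    """Relative estimation error, as in (1):
    |estimate - reference| / reference."""
    return abs(estimated - reference) / reference
```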

For CBR traffic, EWMA(static) and EWMA(dynamic) show comparable CV results, yet EWMA(static) exhibits MSE and REE results approximately twice those of EWMA(dynamic); hence EWMA(dynamic) performs more favourably for CBR traffic. Comparing EWMA(dynamic) to the TSW, the TSW's MSE and CV results are slightly poorer, although its REE results show a slight improvement in accuracy; both, however, exhibit good accuracy. For Poisson traffic, EWMA(static) exhibits the poorest CV and REE results but the most favourable MSE results. The TSW demonstrates slightly better CV and MSE results than EWMA(dynamic) but poorer REE results, with EWMA(dynamic) achieving the best accuracy. In the Pareto scenario it is the TSW that performs worst: although its CV results are comparable to those of EWMA(static), its MSE and REE results are over three times those of EWMA(static). Although the TSW performs favourably in the CBR and Poisson scenarios, its unfavourable Pareto results illustrate its sensitivity to bursty traffic in terms of stability, perhaps as a result of the non-normal behaviour of the Pareto distribution. EWMA(dynamic) exhibits much better performance than both EWMA(static) and the TSW, with excellent accuracy for a bursty traffic scenario.

D. Accuracy
Having validated our choices of the TSW as the agile estimator and EWMA(dynamic) as the stable estimator of SARE, we now compare the performance of the SARE algorithm to the flip-flop filter, beginning with accuracy. We use the REE metric defined in (1), with results generated after a change in traffic is detected; these are presented in Table VI for CBR, Poisson and Pareto traffic. For all traffic scenarios SARE shows better performance than the flip-flop filter. For CBR and Poisson traffic the results are somewhat comparable, with only small differences in REE values; the Pareto results, however, indicate that the flip-flop filter produces inaccurate results for extremely bursty traffic. The TSW's inaccuracy for Pareto traffic is already evident in Table V, owing to the estimation of non-normal traffic. Although there is an improvement in REE compared with Table V, the inaccuracy is still large. This is attributed to the flip-flop's use of a stable TSW as well as its 3-sigma rule controller, which assumes that the population it works with is normally distributed. This is not the case for the heavy-tailed Pareto distribution; hence the 3-sigma controller may not detect a change in traffic as well as the t-test statistical controller of SARE. This highlights the suitability of a statistical t-test controller over a 3-sigma rule controller.

E. Agility
The algorithms' performance in terms of agility is analysed next. Settle times for the 70% reduction in traffic rate are recorded and presented in Table VII. The proposed SARE algorithm generates the lowest settle times in all three scenarios, with the flip-flop experiencing settle times 1.5 s greater than those of SARE. This improved performance is attributed to the use of the t-test statistical controller.

TABLE III: CV RESULTS FOR STABILITY ANALYSIS

TABLE IV: MSE RESULTS FOR STABILITY ANALYSIS

TABLE VI: REE RESULTS FOR ACCURACY ANALYSIS

Across all three traffic scenarios it is EWMA(dynamic) that delivers the most consistent results, with favourable accuracy and stability. Hence, we validate our choice of EWMA(dynamic) as the stable estimator of SARE.

TABLE VII: SETTLE TIME RESULTS FOR AGILITY ANALYSIS

TABLE V: REE RESULTS FOR STABILITY ANALYSIS

F. Stability
The stability of the SARE and flip-flop rate estimators is analysed in this section. Stability analysis is performed on the estimates after a change in traffic has been detected and


the system is stable again. Results for CV and MSE are presented in Tables VIII and IX. In all three scenarios SARE outperforms the flip-flop filter. For the CBR and Poisson scenarios the CV values are favourable for both rate estimation techniques, with SARE outperforming the flip-flop by only approximately 0.7%. For the Pareto scenario, however, SARE shows an improvement of approximately 25% in CV and over 200 Kbits in MSE. Recall also from Table VI the inaccuracy of the flip-flop technique. Hence, SARE is the most stable traffic rate estimation technique for all traffic scenarios.

TABLE VIII: CV RESULTS FOR STABILITY ANALYSIS

TABLE IX: MSE RESULTS FOR STABILITY ANALYSIS

G. Computational Cost
Cost-effective algorithms are fast and simple, requiring neither a lot of computational power to process samples of data nor large amounts of memory to store them. As discussed in Section II, the TSW and EWMA algorithms are simple but can be computationally expensive, with cost increasing linearly with traffic load. Because the SARE and flip-flop rate estimation techniques each use two of these algorithms, their computational complexity is considerable; this, however, is the trade-off for being both agile and stable, compared with single TSW and EWMA algorithms. Comparing SARE and the flip-flop, the main difference in computational cost lies in the controller employed. Using the t-test statistic as a controller increases the computational complexity of SARE compared with the flip-flop: the latter's 3-sigma rule controller has only one degree of freedom, as opposed to 20 or 30 degrees of freedom with SARE (depending on the application or the allowable burst of traffic). Nonetheless, the complexity of SARE may be reduced by eliminating the generation of the p-value: employing a look-up table of critical values for the statistical t-test would remove a number of computational steps, allowing a t-test statistic to be generated and compared instead, thereby minimising complexity.

V. CONCLUSION
This paper proposes a rate estimation technique capable of being agile, stable and accurate. The discussion highlights how rate estimation techniques such as the TSW and EWMA can be configured to be either agile or stable but not both, leading to the proposal of the SARE algorithm. SARE, like the flip-flop filter, is composed of two rate estimation techniques and a controller. It demonstrates more favourable performance in terms of agility, stability and accuracy than the flip-flop, which is attributed to the t-test statistical controller employed in SARE to detect persistent changes in traffic, as well as to the use of EWMA(dynamic) as the stable estimator. Additionally, the paper presents quantitative analysis of the agility, stability and accuracy of the TSW and EWMA algorithms.

REFERENCES
[1] K. Salah and F. Haidari, "Performance evaluation and comparison of four network packet rate estimators", Journal of Electronic Communications, vol. 64, pp. 1015-1023, Nov. 2010.
[2] D. Clark and W. Fang, "Explicit allocation of best-effort packet delivery service", IEEE/ACM Transactions on Networking, vol. 6, pp. 362-373, Aug. 1998.
[3] W. Fang, N. Seddigh and B. Nandy, "A time sliding window three color marker", RFC 2859, June 2000.
[4] F. Agharebparast and V. C. M. Leung, "A new traffic rate estimation and monitoring algorithm for the QoS-enabled Internet", in Proc. of Globecom, pp. 3883-3997, Dec. 2003.
[5] W. Lin, R. Zheng and J. Hou, "How to make assured service more assured", in Proc. of ICNP, pp. 182-191, Oct. 1999.
[6] N. V. Lopes, M. J. Nicolau and A. Santos, "Evaluating rate-estimation for a mobility and QoS-aware network architecture", in Proc. of SoftCOM, pp. 348-352, Sept. 2009.
[7] G. Wong, Q. Zhang and D. Tsang, "Joint optimization of power saving mechanism in the IEEE 802.16e Mobile WiMAX", in Proc. of GLOBECOM, pp. 1-6, Nov. 2009.
[8] M. Kim and B. Noble, "Mobile network estimation", in Proc. of MobiCom, pp. 298-309, July 2001.
[9] P. Young, Recursive Estimation and Time-Series Analysis, Berlin: Springer-Verlag, 1984.
[10] I. Stoica, S. Shenker and H. Zhang, "Core-stateless fair queuing: achieving approximately fair bandwidth allocations in high speed networks", in Proc. of SIGCOMM, pp. 203-214, Aug. 1998.
[11] A. Woo and D. Culler, "Evaluation of efficient link reliability estimators for low-power wireless networks", Technical Report UCB/CSD-03-1270, University of California, Berkeley, Computer Science Division, 2003.
[12] I. Burgstahler and M. Neubauer, "New modifications of the exponential moving average algorithm for bandwidth estimation", in Proc. of 15th ITC Specialist Seminar, pp. 210-219, July 2002.
[13] OPNET, www.opnet.com.
[14] MINITAB, www.minitab.com.

