
COMPARATIVE ESTIMATION METHODS FOR THE EXPONENTIATED WEIBULL DISTRIBUTION

Salwa Abd El-Aty and Jumana Ahmed

Department of Statistics, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia

Abstract

The exponentiated Weibull (EW) distribution was introduced by Mudholkar and Srivastava (1993). It adds one parameter to the Weibull distribution. In this paper, we study different methods of estimating the three parameters of the exponentiated Weibull distribution: the maximum likelihood method, the method of moments, percentile estimation and the least squares method. A comparison is performed based on variances, relative absolute biases, relative absolute estimated risks and total deviation. The moment estimators are found to perform better than the other methods. In addition, the sampling distributions of the moment estimators are obtained. A simulation study and concluding remarks are presented.

Key Words: Exponentiated Weibull distribution, Maximum likelihood estimators, Moment estimators, Percentile estimators, Least squares estimators, Total deviation, Sampling distribution.

1. Introduction


One of the most interesting such distributions, the exponentiated Weibull (EW) distribution, was introduced by Mudholkar and Srivastava (1993). It adds one parameter to the Weibull distribution and has scale, shape and location variants. Extensions of the Weibull family go back to Weibull in 1939 [see Lawless (2003)]. The EW distribution is an important life-testing model; it is flexible because its failure rate function can take many forms: decreasing, constant, increasing or bathtub-shaped.



The EW distribution can be used for modeling lifetime data from reliability, extreme value, survival and population studies. Applications of the exponentiated Weibull distribution in reliability and survival studies were illustrated by Mudholkar et al. (1995). Its properties were studied in more detail by Mudholkar and Hutson (1996) and Nassar and Eissa (2003); these studies presented useful applications of the distribution to the modeling of flood data and to reliability.

Assume that the random variable T follows an EW distribution. The cumulative distribution function (cdf) is given by

F(t; \theta, \alpha, \lambda) = [1 - e^{-(t/\lambda)^\alpha}]^\theta ; \quad t > 0,\ \theta, \alpha, \lambda > 0,   (1)

where \theta > 0 and \alpha > 0 denote the shape parameters and \lambda > 0 is a scale parameter.

The rest of the paper is organized as follows. In Section (2), some properties of the EW distribution are discussed. In Section (3), the different methods of estimation (maximum likelihood, moment, percentile and least squares) are derived. Comparisons among the estimators are investigated through Monte Carlo simulations in Section (4). The sampling distributions of the moment estimators are discussed in Section (5). The last section, Section (6), closes with some concluding remarks.

2. Some Statistical Properties of the Exponentiated Weibull Distribution


If T is a random variable that follows an EW distribution, the probability density function (pdf) is given by

f(t; \theta, \alpha, \lambda) = \frac{\theta\alpha}{\lambda} (t/\lambda)^{\alpha-1} e^{-(t/\lambda)^\alpha} [1 - e^{-(t/\lambda)^\alpha}]^{\theta-1} ; \quad t, \theta, \alpha, \lambda > 0.   (2)

The EW distribution is unimodal for fixed \alpha and \lambda, and it becomes more and more symmetric as \theta increases. Several well-known distributions can be obtained as special cases of the EW distribution, such as: the Weibull distribution (\theta = 1), the exponential distribution (\theta = 1, \alpha = 1), the exponentiated exponential distribution (\alpha = 1), the Rayleigh distribution (\theta = 1, \alpha = 2) and the Burr type X distribution (\alpha = 2).

2.1 Reliability Function


The reliability function is given by

R(t; \theta, \alpha, \lambda) = 1 - [1 - e^{-(t/\lambda)^\alpha}]^\theta ; \quad t, \theta, \alpha, \lambda > 0.   (3)

2.2 Failure Rate Function


The failure rate (FR) function is given by

Z(t; \theta, \alpha, \lambda) = \frac{\theta\alpha (t/\lambda)^{\alpha-1} e^{-(t/\lambda)^\alpha} [1 - e^{-(t/\lambda)^\alpha}]^{\theta-1}}{\lambda \left[1 - [1 - e^{-(t/\lambda)^\alpha}]^\theta\right]} ; \quad t, \theta, \alpha, \lambda > 0.   (4)

One of the interesting properties of the EW distribution is that its failure rate can take different shapes. As can be observed in Figure (1), when \theta = \alpha = 1 the FR function remains constant over the lifetime; if \alpha \ge 1 and \alpha\theta \ge 1 the FR function is monotone increasing; the FR function is monotone decreasing for \alpha \le 1 and \alpha\theta \le 1; if \alpha > 1 and \alpha\theta < 1 we get the bathtub-shaped failure rate (BFR) function; and the upside-down bathtub-shaped failure rate function (UBFRF) arises if \alpha < 1 and \alpha\theta > 1.

2.3 Mean Residual Life Function


The mean residual life (MRL) function is defined as

MRL(t; \theta, \alpha, \lambda) = \frac{1}{1 - [1 - e^{-(t/\lambda)^\alpha}]^\theta} \int_t^\infty \left\{ 1 - [1 - e^{-(x/\lambda)^\alpha}]^\theta \right\} dx.   (5)

For positive integer values of \theta, we can write (5) in the form

MRL(t; \theta, \alpha, \lambda) = \frac{\theta}{R(t)} \sum_{i=0}^{\theta-1} \frac{(-1)^i}{i+1} \binom{\theta-1}{i} \left[ \lambda (i+1)^{-1/\alpha} \Gamma\!\left(\frac{1}{\alpha}+1,\ (i+1)(t/\lambda)^\alpha\right) - t\, e^{-(i+1)(t/\lambda)^\alpha} \right],   (6)

where \Gamma(\cdot,\cdot) denotes the upper incomplete gamma function.

Figure (2) illustrates the shapes of some mean residual life functions for the EW distribution, where \theta and \alpha are the shape parameters and \lambda is a scale parameter. When \theta = \alpha = 1 the mean residual life function remains constant over the lifetime; if \alpha \ge 1 and \alpha\theta \ge 1 the MRL function is monotone decreasing; the MRL function is monotone increasing for \alpha \le 1 and \alpha\theta \le 1; if \alpha > 1 and \alpha\theta < 1 we get the bathtub-shaped MRL function; and the upside-down bathtub-shaped MRL (UBMRL) function arises if \alpha < 1 and \alpha\theta > 1.

2.4 Moments

For positive integer values of \theta, the r-th moment of the EW distribution about the origin is given by

E(T^r) = \theta \lambda^r \Gamma\!\left(\frac{r}{\alpha}+1\right) \sum_{i=0}^{\theta-1} (-1)^i \binom{\theta-1}{i} (i+1)^{-\frac{r}{\alpha}-1}.   (7)

Special cases:

(i) The mean of the EW distribution is obtained by substituting r = 1 in (7):

E(T) = \theta \lambda \Gamma\!\left(\frac{1}{\alpha}+1\right) \sum_{i=0}^{\theta-1} (-1)^i \binom{\theta-1}{i} (i+1)^{-\frac{1}{\alpha}-1}.   (8)

(ii) The second moment of the EW distribution is given by

E(T^2) = \theta \lambda^2 \Gamma\!\left(\frac{2}{\alpha}+1\right) \sum_{i=0}^{\theta-1} (-1)^i \binom{\theta-1}{i} (i+1)^{-\frac{2}{\alpha}-1}.   (9)

(iii) The third moment is given by

E(T^3) = \theta \lambda^3 \Gamma\!\left(\frac{3}{\alpha}+1\right) \sum_{i=0}^{\theta-1} (-1)^i \binom{\theta-1}{i} (i+1)^{-\frac{3}{\alpha}-1}.   (10)

The variance of the EW distribution is

V(T) = E(T^2) - [E(T)]^2.   (11)

2.5 Quantile Function

The quantile function Q(p) of the EW distribution (the inverse cdf) can be written as

Q(p) = F^{-1}(p) = \lambda \left[-\ln\!\left(1 - p^{1/\theta}\right)\right]^{1/\alpha} ; \quad 0 < p < 1.   (12)
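The cdf (1), pdf (2) and quantile function (12) can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy is available; the function names are our own, not part of the paper:

```python
import numpy as np

def ew_cdf(t, theta, alpha, lam):
    """Eq. (1): F(t) = [1 - exp(-(t/lam)^alpha)]^theta."""
    t = np.asarray(t, dtype=float)
    return (1.0 - np.exp(-(t / lam) ** alpha)) ** theta

def ew_pdf(t, theta, alpha, lam):
    """Eq. (2): density of the EW distribution."""
    t = np.asarray(t, dtype=float)
    z = np.exp(-(t / lam) ** alpha)
    return (theta * alpha / lam) * (t / lam) ** (alpha - 1) * z * (1.0 - z) ** (theta - 1)

def ew_quantile(p, theta, alpha, lam):
    """Eq. (12): Q(p) = lam * (-ln(1 - p^(1/theta)))^(1/alpha)."""
    p = np.asarray(p, dtype=float)
    return lam * (-np.log(1.0 - p ** (1.0 / theta))) ** (1.0 / alpha)

# The quantile function inverts the cdf; this is also the basis of the
# random-number generation used in the simulation study later on.
p = np.array([0.1, 0.5, 0.9])
t = ew_quantile(p, theta=2.0, alpha=1.5, lam=1.0)
assert np.allclose(ew_cdf(t, theta=2.0, alpha=1.5, lam=1.0), p)
```

The check at the end confirms that Q(F(t)) = t numerically for an arbitrary choice of parameter values.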

3. Estimation of the Parameters

In this section, we estimate the three parameters of the EW distribution using different methods of estimation: maximum likelihood estimators (MLE), moment estimators (ME), percentile estimators (PCE) and least squares estimators (LSE).



3.1 Maximum Likelihood Estimators

Suppose that T_1, T_2, \ldots, T_n are n observed lifetimes arising from (2), and let \mu = (\theta, \alpha, \lambda). The likelihood function L(\mu; t) is given by

L(\mu; t) = \prod_{i=1}^{n} f(t_i; \mu) = \theta^n \alpha^n \lambda^{-n\alpha} \prod_{i=1}^{n} t_i^{\alpha-1} \, e^{-\sum_{i=1}^{n} (t_i/\lambda)^\alpha} \prod_{i=1}^{n} \left[1 - e^{-(t_i/\lambda)^\alpha}\right]^{\theta-1}.   (13)

It is usually easier to maximize the natural logarithm of the likelihood function, which can be written as

\ell = n \ln(\theta) + n \ln(\alpha) - n\alpha \ln(\lambda) + (\alpha - 1) \sum_{i=1}^{n} \ln(t_i) - \sum_{i=1}^{n} (t_i/\lambda)^\alpha + (\theta - 1) \sum_{i=1}^{n} \ln\!\left(1 - e^{-(t_i/\lambda)^\alpha}\right).   (14)

The maximum likelihood estimates \hat{\mu} = (\hat{\theta}, \hat{\alpha}, \hat{\lambda}) are obtained by taking the first derivatives of the log-likelihood with respect to \theta, \alpha and \lambda and equating them to zero:

\frac{\partial \ell}{\partial \theta} = \frac{n}{\hat{\theta}} + \sum_{i=1}^{n} \ln\!\left(1 - e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}\right) = 0,   (15)

\frac{\partial \ell}{\partial \alpha} = \frac{n}{\hat{\alpha}} - n \ln(\hat{\lambda}) + \sum_{i=1}^{n} \ln(t_i) - \sum_{i=1}^{n} \left(\frac{t_i}{\hat{\lambda}}\right)^{\hat{\alpha}} \ln\!\left(\frac{t_i}{\hat{\lambda}}\right) + (\hat{\theta} - 1) \sum_{i=1}^{n} \frac{e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}}{1 - e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}} \left(\frac{t_i}{\hat{\lambda}}\right)^{\hat{\alpha}} \ln\!\left(\frac{t_i}{\hat{\lambda}}\right) = 0,   (16)



and

\frac{\partial \ell}{\partial \lambda} = \frac{-n\hat{\alpha}}{\hat{\lambda}} + \frac{\hat{\alpha}}{\hat{\lambda}^{\hat{\alpha}+1}} \sum_{i=1}^{n} t_i^{\hat{\alpha}} - (\hat{\theta} - 1) \frac{\hat{\alpha}}{\hat{\lambda}^{\hat{\alpha}+1}} \sum_{i=1}^{n} \frac{e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}}{1 - e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}} \, t_i^{\hat{\alpha}} = 0.   (17)

From (15) we can get the MLE of \theta as a function of \hat{\alpha} and \hat{\lambda}:

\hat{\theta} = \frac{-n}{\sum_{i=1}^{n} \ln\!\left(1 - e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}\right)}.   (18)

After some simplification, (17) gives

-n + \frac{1}{\hat{\lambda}^{\hat{\alpha}}} \left[ \sum_{i=1}^{n} t_i^{\hat{\alpha}} - (\hat{\theta} - 1) \sum_{i=1}^{n} \frac{e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}}{1 - e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}} \, t_i^{\hat{\alpha}} \right] = 0,   (19)

and, from (16),

\frac{n}{\hat{\alpha}} + \sum_{i=1}^{n} \ln(t_i) - \frac{1}{\hat{\lambda}^{\hat{\alpha}}} \left[ \sum_{i=1}^{n} t_i^{\hat{\alpha}} \ln(t_i) - (\hat{\theta} - 1) \sum_{i=1}^{n} \frac{e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}}{1 - e^{-(t_i/\hat{\lambda})^{\hat{\alpha}}}} \, t_i^{\hat{\alpha}} \ln(t_i) \right] = 0.   (20)

The system of the two non-linear equations (19) and (20) can be solved numerically for \hat{\alpha} and \hat{\lambda}; then (18) yields the MLE of \theta.
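In practice, one convenient numerical route is to maximize the log-likelihood (14) directly rather than solve the score equations; the maximizer is the same. A minimal sketch, assuming NumPy and SciPy are available; the data are simulated via the inverse cdf, and the optimizer is started at the true values purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def ew_negloglik(params, t):
    """Negative of the log-likelihood in Eq. (14); params = (theta, alpha, lam)."""
    theta, alpha, lam = params
    if theta <= 0 or alpha <= 0 or lam <= 0:
        return np.inf                      # keep the search in the valid region
    v = (t / lam) ** alpha
    ll = (t.size * (np.log(theta) + np.log(alpha) - alpha * np.log(lam))
          + (alpha - 1.0) * np.sum(np.log(t))
          - np.sum(v)
          + (theta - 1.0) * np.sum(np.log1p(-np.exp(-v))))
    return -ll

# Simulate a sample via the inverse cdf (illustrative true values).
rng = np.random.default_rng(0)
theta, alpha, lam = 2.0, 1.5, 1.0
u = rng.uniform(size=500)
t = lam * (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha)

# Maximize the log-likelihood numerically.
res = minimize(ew_negloglik, x0=np.array([theta, alpha, lam]), args=(t,),
               method="Nelder-Mead")
theta_hat, alpha_hat, lam_hat = res.x
```

A derivative-free method is used here only for simplicity; any optimizer that respects the positivity constraints would do.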

3.2 Moment Estimators

If T follows an EW distribution with parameters (\theta, \alpha, \lambda), then the mean, second and third population moments of the EW distribution are given in (8), (9) and (10).


Similarly, the first three moments of the random sample T_1, T_2, \ldots, T_n are

\bar{T} = \frac{1}{n} \sum_{i=1}^{n} T_i, \qquad \frac{1}{n} \sum_{i=1}^{n} T_i^2 \qquad \text{and} \qquad \frac{1}{n} \sum_{i=1}^{n} T_i^3.

Therefore, equating the first three population moments with the corresponding sample moments, we have

\bar{T} = \theta \lambda \Gamma\!\left(\frac{1}{\alpha}+1\right) \sum_{i=0}^{\theta-1} (-1)^i \binom{\theta-1}{i} (i+1)^{-\frac{1}{\alpha}-1},   (21)

\frac{1}{n} \sum_{i=1}^{n} T_i^2 = \theta \lambda^2 \Gamma\!\left(\frac{2}{\alpha}+1\right) \sum_{i=0}^{\theta-1} (-1)^i \binom{\theta-1}{i} (i+1)^{-\frac{2}{\alpha}-1},   (22)

\frac{1}{n} \sum_{i=1}^{n} T_i^3 = \theta \lambda^3 \Gamma\!\left(\frac{3}{\alpha}+1\right) \sum_{i=0}^{\theta-1} (-1)^i \binom{\theta-1}{i} (i+1)^{-\frac{3}{\alpha}-1}.   (23)

Then the ME's of \theta, \alpha and \lambda, say \tilde{\theta}, \tilde{\alpha} and \tilde{\lambda}, respectively, can be obtained by solving the three non-linear equations (21), (22) and (23) numerically.
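A numerical sketch of this route, assuming NumPy and SciPy (the function names are our own). Instead of the finite sums of (21)-(23), which require integer \theta, the population moments are computed here by numerical integration of t^r f(t), and the three moment equations are solved with a general root finder; for illustration the "sample" moments are taken to be the exact population moments, so the true parameters should be recovered:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def ew_moment(r, theta, alpha, lam):
    """E(T^r) by numerical integration of t^r * f(t); this avoids the
    integer-theta restriction of the finite sum in Eq. (7)."""
    def integrand(t):
        z = np.exp(-(t / lam) ** alpha)
        return (t ** r * (theta * alpha / lam) * (t / lam) ** (alpha - 1)
                * z * (1.0 - z) ** (theta - 1))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

def moment_equations(log_params, m1, m2, m3):
    """Eqs. (21)-(23) as a root-finding problem; the log-parametrisation
    keeps theta, alpha and lam positive during the search."""
    theta, alpha, lam = np.exp(log_params)
    return [ew_moment(1, theta, alpha, lam) - m1,
            ew_moment(2, theta, alpha, lam) - m2,
            ew_moment(3, theta, alpha, lam) - m3]

# Illustration: recover the parameters from the exact population moments.
theta0, alpha0, lam0 = 1.5, 2.0, 1.0
m1, m2, m3 = (ew_moment(r, theta0, alpha0, lam0) for r in (1, 2, 3))
sol = fsolve(moment_equations, x0=np.log([1.2, 1.5, 1.0]), args=(m1, m2, m3))
theta_me, alpha_me, lam_me = np.exp(sol)
```

With real data one would replace m1, m2, m3 by the sample moments of Step (2) of the simulation algorithm.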

3.3 Percentile Estimators

If the data come from a distribution function that has a closed form, it is quite natural to estimate the unknown parameters by fitting a straight line to the theoretical points obtained from the distribution function and the sample percentile points. This method was originally explored by Kao in 1958 and 1959, and it has been used quite successfully for the Weibull distribution, for the generalized exponential distribution [see Gupta and Kundu (2000)] and for the exponentiated Pareto distribution [see Shawky and Abu-Zinadah (2009)].

In the case of an EW distribution, it is possible to use the same concept to obtain estimators of \theta, \alpha and \lambda based on the percentiles. Since

F(t) = \left[1 - e^{-(t/\lambda)^\alpha}\right]^\theta,

we have

\ln[F(t)] = \theta \ln\!\left(1 - e^{-(t/\lambda)^\alpha}\right).   (24)

Let T_{(i)} denote the i-th order statistic, i.e. T_{(1)} < T_{(2)} < \cdots < T_{(n)}. If p_i denotes some estimate of F(t_{(i)}), then the percentile estimates of \theta, \alpha and \lambda can be obtained by minimizing

Q_1 = \sum_{i=1}^{n} \left\{ \ln[p_i] - \theta \ln\!\left[1 - e^{-(t_{(i)}/\lambda)^\alpha}\right] \right\}^2   (25)

with respect to \theta, \alpha and \lambda. We call these estimators the PCE. Several estimators of F(t_{(i)}) can be used; we mainly consider p_i = i/(n+1), which is the most used estimator because it is unbiased, i.e. E[F(T_{(i)})] = p_i [see Castillo et al. (2005)].

To find the PCE we differentiate (25) with respect to \theta, \alpha and \lambda and equate to zero:

\frac{\partial Q_1}{\partial \theta} = -2 \sum_{i=1}^{n} \left\{ \ln[p_i] - \hat{\theta} \ln\!\left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right] \right\} \ln\!\left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right] = 0,   (26)



\frac{\partial Q_1}{\partial \alpha} = -2\hat{\theta} \sum_{i=1}^{n} \left\{ \ln[p_i] - \hat{\theta} \ln\!\left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right] \right\} \frac{e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}}{1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}} \left(\frac{t_{(i)}}{\hat{\lambda}}\right)^{\hat{\alpha}} \ln\!\left(\frac{t_{(i)}}{\hat{\lambda}}\right) = 0,   (27)

and

\frac{\partial Q_1}{\partial \lambda} = \frac{2\hat{\theta}\hat{\alpha}}{\hat{\lambda}^{\hat{\alpha}+1}} \sum_{i=1}^{n} \left\{ \ln[p_i] - \hat{\theta} \ln\!\left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right] \right\} \frac{e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}}{1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}} \, t_{(i)}^{\hat{\alpha}} = 0.   (28)

Then the PCE's of \theta, \alpha and \lambda, say \hat{\theta}_{PCE}, \hat{\alpha}_{PCE} and \hat{\lambda}_{PCE}, respectively, can be obtained by solving the non-linear equations (26), (27) and (28) numerically.
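A minimal computational sketch of the PCE (assuming NumPy and SciPy): rather than solving (26)-(28), the objective (25) with p_i = i/(n+1) is minimized directly, which yields the same stationary point:

```python
import numpy as np
from scipy.optimize import minimize

def q1(params, t_sorted):
    """Eq. (25) with p_i = i/(n+1); t_sorted are the order statistics."""
    theta, alpha, lam = params
    if theta <= 0 or alpha <= 0 or lam <= 0:
        return np.inf
    n = t_sorted.size
    p = np.arange(1, n + 1) / (n + 1)
    # ln[1 - exp(-(t_(i)/lam)^alpha)], computed stably with log1p
    g = np.log1p(-np.exp(-(t_sorted / lam) ** alpha))
    return np.sum((np.log(p) - theta * g) ** 2)

# Simulated sample with illustrative true values; x0 at the truth for brevity.
rng = np.random.default_rng(1)
theta, alpha, lam = 2.0, 1.5, 1.0
u = rng.uniform(size=200)
t = np.sort(lam * (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha))

res = minimize(q1, x0=np.array([theta, alpha, lam]), args=(t,), method="Nelder-Mead")
theta_pce, alpha_pce, lam_pce = res.x
```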

3.4 Least Squares Estimators

Suppose T_1, T_2, \ldots, T_n is a random sample of size n from a distribution with cdf G(\cdot), and let T_{(1)} < \cdots < T_{(n)} denote the order statistics of the observed sample. It is well known that

E\!\left(G(T_{(i)})\right) = \frac{i}{n+1}.

The least squares estimators (LSE's) can be obtained by minimizing

Q_2 = \sum_{i=1}^{n} \left\{ G(T_{(i)}) - E\!\left(G(T_{(i)})\right) \right\}^2   (29)

with respect to the unknown parameters. Therefore, in the case of the EW distribution, the least squares estimators of \theta, \alpha and \lambda, say \hat{\theta}_{LSE}, \hat{\alpha}_{LSE} and \hat{\lambda}_{LSE}, respectively, can be obtained by minimizing

Q_2 = \sum_{i=1}^{n} \left\{ \left[1 - e^{-(t_{(i)}/\lambda)^\alpha}\right]^\theta - \frac{i}{n+1} \right\}^2   (30)

with respect to \theta, \alpha and \lambda, as follows:


\frac{\partial Q_2}{\partial \theta} = 2 \sum_{i=1}^{n} \left\{ \left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right]^{\hat{\theta}} - \frac{i}{n+1} \right\} \left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right]^{\hat{\theta}} \ln\!\left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right] = 0,   (31)

\frac{\partial Q_2}{\partial \alpha} = 2\hat{\theta} \sum_{i=1}^{n} \left\{ \left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right]^{\hat{\theta}} - \frac{i}{n+1} \right\} \left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right]^{\hat{\theta}-1} e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}} \left(\frac{t_{(i)}}{\hat{\lambda}}\right)^{\hat{\alpha}} \ln\!\left(\frac{t_{(i)}}{\hat{\lambda}}\right) = 0,   (32)

and

\frac{\partial Q_2}{\partial \lambda} = -\frac{2\hat{\theta}\hat{\alpha}}{\hat{\lambda}^{\hat{\alpha}+1}} \sum_{i=1}^{n} \left\{ \left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right]^{\hat{\theta}} - \frac{i}{n+1} \right\} \left[1 - e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}}\right]^{\hat{\theta}-1} e^{-(t_{(i)}/\hat{\lambda})^{\hat{\alpha}}} \, t_{(i)}^{\hat{\alpha}} = 0.   (33)

Then the LSE's of the unknown parameters, say \hat{\theta}_{LSE}, \hat{\alpha}_{LSE} and \hat{\lambda}_{LSE}, respectively, can be obtained by solving the three non-linear equations (31), (32) and (33) numerically.
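The LSE computation can be sketched the same way (assuming NumPy and SciPy): here Q_2 of (30) is minimized through its residuals with a least squares solver, rather than by solving (31)-(33) directly:

```python
import numpy as np
from scipy.optimize import least_squares

def lse_residuals(params, t_sorted):
    """Residuals whose sum of squares is Q2 in Eq. (30)."""
    theta, alpha, lam = params
    n = t_sorted.size
    p = np.arange(1, n + 1) / (n + 1)
    return (1.0 - np.exp(-(t_sorted / lam) ** alpha)) ** theta - p

# Simulated sample with illustrative true values; x0 at the truth for brevity.
rng = np.random.default_rng(2)
theta, alpha, lam = 2.0, 1.5, 1.0
u = rng.uniform(size=200)
t = np.sort(lam * (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha))

# Bounds keep all three parameters strictly positive during the search.
fit = least_squares(lse_residuals, x0=np.array([theta, alpha, lam]),
                    bounds=(1e-6, np.inf), args=(t,))
theta_lse, alpha_lse, lam_lse = fit.x
```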

4. Simulation Study

It is very difficult to compare the theoretical performance of the different estimators described in the previous section; therefore, extensive Monte Carlo simulations are performed to compare the performance of the different methods of estimation, using the Mathcad (2001) software.

This section consists of two subsections. Subsection (4.1) compares the performance of the different estimators proposed in the present study. Subsection (4.2) selects the better method of estimation by using the total deviation method.

4.1 Comparative Study

The behavior of the different methods of estimation is compared for different sample sizes and different shape parameters, mainly with respect to relative absolute bias and relative absolute estimated risk (ER). We consider sample sizes from small to large. In all our computations we consider different values of the shape parameters \theta and \alpha with \lambda = 1, and each computation is repeated over N = 1000 replications for the different cases.



The algorithm for obtaining the different estimates by the different methods can be described in the following steps.

Step (1): Generate N = 1000 random samples from the EW distribution. This can be achieved by first generating random samples u_1, \ldots, u_n from the uniform distribution on (0, 1); the uniform random numbers are then transformed to EW random numbers by using the transformation

T_i = \lambda \left\{-\ln\!\left[1 - (u_i)^{1/\theta}\right]\right\}^{1/\alpha} ; \quad i = 1, 2, \ldots, n.

Step (2): Obtain the estimates of the parameters by solving the non-linear equations of Section (3).

Step (3): The previous steps are repeated N = 1000 times. The averages of the estimates of the parameters \mu = (\theta, \alpha, \lambda) are computed as

\bar{\hat{\mu}}_j = \frac{1}{N} \sum_{i=1}^{N} \hat{\mu}_{ij}, \qquad j = 1, 2, 3,

where \bar{\hat{\mu}}_j denotes the average of the estimates of \mu_j. The variances of the estimates are obtained as

Var(\hat{\mu}_j) = \frac{1}{N} \sum_{i=1}^{N} \left[\hat{\mu}_{ij} - \bar{\hat{\mu}}_j\right]^2, \qquad j = 1, 2, 3.

The biases of the estimates are obtained by

Bias(\bar{\hat{\mu}}_j) = \bar{\hat{\mu}}_j - \mu_j, \qquad j = 1, 2, 3,

and the relative absolute bias is

Rbias(\bar{\hat{\mu}}_j) = \left| \frac{Bias(\bar{\hat{\mu}}_j)}{\mu_j} \right|, \qquad j = 1, 2, 3.

The estimated mean squared errors, or shortly the estimated risks (ER's), are computed as

ER(\bar{\hat{\mu}}_j) = Var(\hat{\mu}_j) + \left[Bias(\bar{\hat{\mu}}_j)\right]^2, \qquad j = 1, 2, 3,

and the relative absolute estimated risk is

RER(\bar{\hat{\mu}}_j) = \left| \frac{ER(\bar{\hat{\mu}}_j)}{\mu_j} \right|, \qquad j = 1, 2, 3.
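Steps (1)-(3) can be sketched as follows (assuming NumPy; for a cheap, self-contained illustration only \theta is estimated, using Eq. (18) with \alpha and \lambda held at their true values, which is not one of the four full methods compared above):

```python
import numpy as np

def mc_metrics(estimates, true_value):
    """Average, variance, relative absolute bias and relative absolute
    estimated risk of a vector of replicated estimates (Step 3)."""
    estimates = np.asarray(estimates, dtype=float)
    mean = estimates.mean()
    var = estimates.var()              # (1/N) * sum (est - mean)^2
    bias = mean - true_value
    rbias = abs(bias) / true_value
    er = var + bias ** 2               # estimated risk
    rer = abs(er / true_value)
    return mean, var, rbias, rer

rng = np.random.default_rng(3)
theta, alpha, lam = 2.0, 1.5, 1.0
N, n = 1000, 30
est_theta = np.empty(N)
for j in range(N):
    u = rng.uniform(size=n)
    t = lam * (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha)
    # Eq. (18) with alpha and lam fixed at their true values.
    est_theta[j] = -n / np.sum(np.log1p(-np.exp(-(t / lam) ** alpha)))

mean, var, rbias, rer = mc_metrics(est_theta, theta)
```

In the full study the same bookkeeping is applied to all three parameters and all four estimation methods.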



Step (4): The results for the estimates of the parameters \hat{\theta}, \hat{\alpha}, \hat{\lambda}, together with their variances (Var.'s), relative absolute biases and relative absolute ER's, are presented in Tables (1) to (4) in Appendix A.

From these tables, the following conclusions can be drawn about the performance of the methods of estimation:

- The variances and the relative absolute estimated risks decrease as the sample size increases, for the different shape parameters \theta and \alpha and for all methods.
- The variances and RER of the MLE's are smaller than the variances and RER of the other estimators.
- The relative biases of the ME's are smaller than the relative biases of the other methods.

4.2 Selecting the Better Method of Estimation

The criterion for selecting the better method of estimation is the total deviation (TD); the better method is the one that gives the minimum TD. To compare, we calculate the TD for each method as follows:

TD = \left| \frac{\hat{\theta} - \theta}{\theta} \right| + \left| \frac{\hat{\alpha} - \alpha}{\alpha} \right| + \left| \frac{\hat{\lambda} - \lambda}{\lambda} \right|,   (34)

where \theta, \alpha and \lambda are the parameters of the distribution, and \hat{\theta}, \hat{\alpha} and \hat{\lambda} are their estimates by any given method [see Al-Fawzan (2000)].

Table (5) shows the results when \alpha = 2, \lambda = 1 for different values of the shape parameter \theta. Table (6) displays the results when \theta = 2, \lambda = 1 for different values of the shape parameter \alpha. We choose the better method as the one that yields the minimum TD. Notice that the maximum TD is 4.260, from the PCE, and the minimum TD over all methods is 0.276 [see Tables (5) and (6) in Appendix B].

5. Sampling Distribution


In this section, the sampling distributions of the ME, the better method of estimating the parameters, are derived using the Pearson system. Pearson's system contains seven basic types of distributions within a single parametric framework. The selection approach is based on a certain quantity K, which is a function of the first four central moments:

K = \frac{\beta_1 (\beta_2 + 3)^2}{4 (4\beta_2 - 3\beta_1)(2\beta_2 - 3\beta_1 - 6)},   (35)

where \beta_1 and \beta_2 denote the skewness and kurtosis measures, respectively.

For different values of K there exist different types of Pearson curves. If K < 0, we fit a Type I Pearson curve; Type II is fitted if K = 0 and \beta_2 < 3; if K is infinite, i.e. 2\beta_2 - 3\beta_1 - 6 = 0, we obtain Type III; we get Type IV if 0 < K < 1; when K = 1, Type V is obtained; if K > 1, we get Type VI; and finally, if K = 0 and \beta_2 > 3, Type VII is obtained [see Elderton and Johnson (1969)].

The sampling distributions of the ME for each value of the parameters and for different sample sizes are displayed in Tables (7) to (10) [see Appendix C]. As a result of the computer simulation, three sampling distributions were fitted to the ME: the Pearson Type I, IV and VI distributions.
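The selection rule can be sketched as a small classifier (a rough sketch; the tolerance and the handling of the boundary cases K = 0, K = 1 and the Type III condition are our own choices):

```python
def pearson_type(beta1, beta2, tol=1e-9):
    """Select a Pearson curve type from the criterion K of Eq. (35).
    beta1 and beta2 are the skewness and kurtosis measures."""
    linear = 2.0 * beta2 - 3.0 * beta1 - 6.0
    if abs(linear) < tol:
        return "III"                       # K is infinite here
    K = beta1 * (beta2 + 3.0) ** 2 / (4.0 * (4.0 * beta2 - 3.0 * beta1) * linear)
    if K < -tol:
        return "I"
    if abs(K) < tol:
        return "II" if beta2 < 3.0 else "VII"
    if K < 1.0 - tol:
        return "IV"
    if abs(K - 1.0) < tol:
        return "V"
    return "VI"

# Values taken from Table (7) (the n = 10 theta row and n = 20 lambda row):
assert pearson_type(4.334, 19.787) == "IV"
assert pearson_type(-0.386, 4.433) == "I"
```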

6. Conclusion

The EW distribution is flexible for modeling various types of failure data, with possibly increasing, constant, decreasing or bathtub-shaped FR functions. Likewise, the mean residual life function can take various forms: increasing, constant, decreasing or upside-down bathtub-shaped.



In this paper, the three-parameter EW distribution is discussed and its main properties and r-th moment are derived. Four different methods of estimation are discussed, and a comparison is made according to different criteria. The sampling distributions of the estimates of the better method of estimation are derived.

The simulation study indicates that the variances and relative absolute estimated risks of all unknown parameters decrease, for most of the methods, as the sample size increases. The variances and RER of the MLE's are smaller than the variances and RER of the other estimators. The relative biases of the ME's are smaller than the relative biases of the other methods.

Using the total deviation, it is found that the method giving the better estimates is the ME. In addition, the sampling distributions of the ME are obtained; most of the sampling distributions, for different values of the parameters and different sample sizes, follow Pearson's Type I, IV and VI distributions.

Acknowledgements

Special thanks from the authors to Prof. Dr. Amina Abu-hussien and Prof. Dr. Aldayaian for reviewing the paper. The authors are also grateful to Dr. Abeer El-Helbawy for printing and delivering the paper.

References

1- Al-Fawzan, M.A. (2000). Methods for Estimating the Parameters of the Weibull Distribution. King Abdulaziz City for Science and Technology.

2- Castillo, E., Hadi, A.S., Balakrishnan, N. and Sarabia, J.M. (2005). Extreme Value and Related Models in Engineering and Science Applications. John Wiley & Sons, New York.

3- Elderton, W. and Johnson, N. (1969). Systems of Frequency Curves. Cambridge University Press.

4- Gupta, R.D. and Kundu, D. (2000). Generalized Exponential Distributions: Different Methods of Estimation. Journal of Statistical Computation and Simulation. Vol. 00, pp. 1-22.

5- Lawless, J.F. (2003). Statistical Models and Methods for Lifetime Data. 2nd Edition, John Wiley & Sons, New York.

6- Mudholkar, G.S., Srivastava, D.K. and Freimer, M. (1995). The exponentiated Weibull family: a reanalysis of the bus-motor-failure data. Technometrics. Vol. 37, pp. 436-445.

7- Mudholkar, G.S. and Srivastava, D.K. (1993). Exponentiated Weibull family for analyzing bathtub failure rate data. IEEE Transactions on Reliability. Vol. 42, pp. 299-302.

8- Mudholkar, G.S. and Hutson, A.D. (1996). The exponentiated Weibull family: some properties and a flood data application. Communications in Statistics, Theory and Methods. Vol. 25, pp. 3059-3083.

9- Nassar, M.M. and Eissa, F.H. (2003). On the exponentiated Weibull distribution. Communications in Statistics, Theory and Methods. Vol. 32, pp. 1317-1336.

10- Shawky, A.I. and Abu-Zinadah, H.H. (2009). Exponentiated Pareto Distribution: Different Method of Estimations. International Journal of Contemporary Mathematical Sciences. Vol. 4, pp. 677-693.

APPENDIX A




Results of Estimating the Parameters

Table (1)

̂λ ̂́λ

Rbias

.Var

0.127 0.131

0.320 0.309

0.024 0.035

86.629

2.103

82.205

3.103

2.995

0.213

2.949

0.787

RER

̂λ

0.68 0.691

́̂λ

RER

.Var

Rbias

α̂ ά̂

0.294 0.171

0.313 0.199

0.196 0.182

19.53 7 16.82 6

1.889

24.796

1.601 1.779

0.946

30.076

0.109

α̂

1.373

θ̂

RER

Rbias

.Var

θ́̂

0.962 1.604

1.101 1.276

0.178 0.395

1.051

1.222

0.266

0.593

0.633

8.896

0.608

4.355

0.804

θ̂

1.138

θ́̂

ά̂

RER

Rbias

.Var

0.089 0.091 18.093

1.308 1.512 1.862

0.712 1.148 1.059

1.048 1.162 0.208

0.081 0.236 0.519

1.024 1.081 0.604

15.990

1.249

7.142

0.534

3.500

0.767

ά̂

RER

Rbias

1.283 1.474 1.870 1.586

0.630 0.908 1.040 4.227

1.030 1.102 0.200 0.368

ά̂

RER

Rbias

.Var

0.113 0.330 0.0038 0.670 0.285 0.368 0.030 1.265 0.567 1.015 0.102 0.305 0.0084 0.695 0.167 0.274 0.034 1.452 0.74 1.06 13.899 1.351 12.074 2.351 3.441 0.007 6.882 1.986 0.878 0.118 2.037 0.170 2.009 0.830 2.326 0.170 4.537 1.660 3.797 0.216 Mean of estimates, with their variance, relative absolute bias and relative absolute

0.026 0.089 0.436 1.887

RER

Rbias

.Var

0.117 0.116 54.222

0.325 0.308 1.856

0.011 0.021 50.778

0.675 0.692 2.856

0.042

0.207

2.475

0.793

̂λ

RER

Rbias

.Var

0.114 0.109 33.347 2.330

0.327 0.307 1.549 0.174

0.0072 0.014 30.948 2.300

RER

Rbias

̂λ

.Var

RER

Rbias

.Var

0.284 0.165 19.09 3 8.277

0.346 0.244 0.068 0.375

α̂

́̂λ

RER

Rbias

.Var

0.673 0.693 2.549 0.826

0.285 0.167 6.011 5.474

0.358 0.263 0.065 0.207

0.057 0.058 12.007 10.777

́̂λ

α̂

RER

Rbias

.Var

estimated risk. Population parameter values: θ = 0.5, α = 2, λ = 1.

Table (2)


θ̂

.Var 0.050 0.150 0.510 2.080

θ̂

θ́̂ 1.015 1.051 0.600 0.684

θ́̂ 1.008 1.030 0.559 0.608


Mean of estimates, with their variances, relative absolute biases and relative absolute estimated risks. Population parameter values: θ = 1.5, α = 2, λ = 1.

RER

.Var

Rbias

̂λ ́̂λ

0.176

0.305

0.083

0.079

0.037

0.078

0.262

0.290

0.178

682.666

0.508

682.40 7

1.30 5 1.0 37 0.7 11 1.50 8

̂λ RER

Rbias

.Var

́̂λ

0.117

0.281

0.038

0.075

0.005

0.075

0.128

0.008

0.128

50.317

0.400

50.157

1.28 1 1.00 5 1.00 8 1.40 0

̂λ

RER

Rbias

.Var

́̂λ

0.101

0.275

0.025

0.07

0.004

0.070

0.107

0.009

0.107

20.681

0.388

20.531

1.27 5 1.00 4 1.00 9 1.38 8

̂λ

RER

Rbias

.Var

́̂λ

0.087

0.269

0.014

0.068

0.003

0.068

0.100

0.005

0.100

10.988

0.326

10.882

1.26 9 1.00 6 1.00 5 1.32 6

α̂ ά̂

θ̂ RER

Rbias

.Var

θ́̂

0.238

0.355

0.073

1.251

0.209

1.778

0.15 7

2.33 7 2.4 9 1.41 6

13.64 5

0.414

20.08 3

4.34 2

1.44 5

8.282

0.962

10.33 9

0.96 8 1.8 13 2.1 21 2.9 44

RER

Rbias

.Var

0.258

0.169

0.52

0.245

0.40 3 0.80 1

0.249

0.292

2.325

0.277

α̂

θ̂

n= 10 MLE ME PCE LSE

RER

Rbias

.Var

ά̂

RER

Rbias

.Var

θ́̂

n= 20

0.122

0.126

0.342

0.039

1.423

0.295

1.939

0.267

0.309

2.089

0.576

2.386

2.367

0.151

4.968

0.554

6.761

0.98 7 1.94 2 2.36 5 2.33 2

MLE

0.145

2.25 2 2.29 1 1.38 2 1.69 8

0.202

0.325

0.18 1 0.56 5 0.15 4 4.64 3

RER

Rbias

.Var

θ́̂

n= 30

0.99 3 1.92 7 2.03 6 2.13 3

MLE

α̂

θ̂

ME PCE LSE

RER

Rbias

.Var

ά̂

0.071

0.106

2.211

0.188

0.338

0.025

0.247

0.113

0.285

1.922

0.255

1.731

0.357

2.310

2.319

0.142

2.22 5 1.48 9 1.71 5

1.403

0.202

0.09 8 0.44 2 0.14 4 4.55 7

3.145

0.422

4.318

RER

Rbias

.Var

θ́̂

n= 50

0.179

0.337

0.014

MLE

1.379

0.273

1.901

1.660

0.334

2.239

2.930

0.416

4.005

0.99 5 1.90 9 2.00 2 2.12 5

α̂

RER

Rbias

.Var

ά̂

0.048

0.094

0.06

0.207

0.098

0.179

0.255

2.095

0.125

0.37 5 0.09 9 4.12 8

2.18 8 2.19 5 1.49 0 1.75 0


θ̂

ME PCE LSE

ME PCE LSE


Table (3)

Mean of estimates, with their variances, relative absolute biases and relative absolute estimated risks. Population parameter values: θ = 2, α = 2, λ = 1.

RER

Rbias

.Var

0.384

0.530

0.092

0.083

0.10 3 0.08 5

0.487

0.631

0.702

0.449

RER

̂λ Rbias

0.290

0.493

0.089

0.081

0.501

0.660

0.651

0.499

RER

̂λ Rbias

0.265

0.484

0.08 9 0.50

.Var 0.04 6 0.08 3 0.06 6 0.40 2

.Var 0.03

̂λ ́̂λ

RER

Rbias

1.53

0.217

0.146

1.08

0.786 3.717

RER

Rbias

.Var

0.349

2.292

0.589

0.532

0.045

0.93 6

MLE

0.335

1.123

2.67

1.091

0.091

2.149

2.18

ME

0.340

6.971

3.701

0.266

7.12

.Var

3 0.36 9 0.55 1

́̂λ 1.49 3 1.08 1 0.34 0 0.50 1

θ̂

α̂ ́α̂

0 2.680

θ́̂

n= 10

2 1.46

PCE

8 4.272

0.145

8.459

α̂

2.291

8.828

0.350

17.16 4

θ̂

ά̂

RER

Rbias

.Var

0.149

2.207

0.541

0.514

0.025

0.261

0.796

2.522

1.073

0.073

2.125

4.096

0.118

4.04

2.237

2.511

0.409

4.352

4.130

0.116

8.206

2.233

8.319

0.304

16.26 6

RER

Rbias

.Var

0.096

0.104

0.534

α̂

́̂λ

RER

Rbias

.Var

1.48

0.058

0.086

0.086


θ̂

ά̂

RER

Rbias

.Var

2.173

0.526

0.509

0.017

2.70

LSE

1

θ́̂

n= 20

0.97 2 2.14 6 1.18 1 2.60 9

MLE ME PCE LSE

θ́̂

n= 30

0.98

MLE


0.085

0.056

0.527

0.691

0.110

0.249

RER

̂λ Rbias

0.244

0.476

0.082

0.071

0.302

0.492

0.062

0.112

0 0.08 1 0.05 0 0.04 8

4 1.05 6 0.30 9 0.75 1

́̂λ

.Var 0.01 7 0.07 7 0.06 0 0.05 0

1.47 6 1.07 1 0.50 8 0.88 8

0.396

0.195

0.640

2.391

1.133

0.132

2.195

1.219

0.059

2.424

2.119

1.666

0.416

0.974

0.680

0.044

1.353

2.088

0.773

0.199

1.388

α̂

θ̂

ά̂

RER

Rbias

.Var

RER

Rbias

.Var

0.037

0.075

0.052

2.15

0.515

0.505

0.009

0.350

0.193

0.550

2.386

1.085

0.085

2.141

1.001

0.04

1.997

2.080

0.510

0.247

0.777

0.559

0.050

1.109

2.100

0.527

0.114

1.003

2 2.26 6 1.16 8 2.39 8

PCE

θ́̂

n= 50

0.98 9 2.17 0 1.50 6 2.22 9

ME

LSE

MLE ME PCE LSE

Table (4)

Mean of estimates, with their variances, relative absolute biases and relative absolute estimated risks. Population parameter values: θ = 2, α = 1.5, λ = 1.

RER

Rbias

.Var

̂λ ́̂λ

RER

Rbias

.Var

α̂ ά̂

θ̂ RER

Rbias

.Var

θ́̂

0.858

0.778

0.253

1.77 8

0.164

0.148

0.196

1.72 1

0.599

0.537

0.043

0.925

0.196

0.259

0.129

1.25

0.767

0.484

0.624

2.22

0.821

0.179

1.514

1.642

9

6


n= 10 MLE ME


0.913 3.692

0.893 0.318

0.116

0.10

3.591

7 0.68 2

̂λ RER

Rbias

.Var

0.620

0.714

0.110

0.168

0.218

0.120

0.901

0.949

0.0005

3.244

0.307

l50.3

̂λ

RER

Rbias

.Var

0.558

0.698

0.071

0.155

0.183

0.122

0.908

0.952

0.0004

2.644

0.303

2.555

̂λ RER

Rbias

.Var

0.507

0.684

0.039

0.152

0.171

0.123

0.937

0.968

0.0001

1.632

0.278

1.555

́̂λ 1.71 4 1.21 8 0.05 1 0.69 3

́̂λ 1.69 8 1.18 3 0.04 8 0.69 9

́̂λ 1.68 4 1.17 1 0.03 2 0.72 2

2.158

0.535

2.593

2.30 3

1.945

0.981

0.040

0.038

PCE

29.16 7

0.381

43.42 3

2.07 2

6.685

0.201

13.209

1.598

LSE

ά̂

RER

Rbias

.Var

θ́̂

n= 20

1.65 7 2.01 4 2.10 8 1.99 5

0.546

0.517

0.025

0.967

MLE

0.884

0.116

1.714

1.768

ME

1.034

0.696

0.129

0.608

PCE

4.930

0.173

9.740

1.653

LSE

ά̂

RER

Rbias

.Var

θ́̂

n= 30

1.63 0 1.91 2 2.17 7 1.82 7

0.530

0.511

0.017

0.979

MLE

0.942

0.058

1.872

1.885

ME

1.013

0.687

0.137

0.625

PCE

3.050

0.161

5.998

1.678

LSE

ά̂

RER

Rbias

.Var

θ́̂

n= 50

1.61 3 1.85 9 1.96 5 1.77 4

0.517

0.506

0.009

0.987

MLE

0.963

0.036

1.922

1.927

ME

0.744

0.595

0.071

0.809

PCE

1.095

0.144

2.104

1.712

LSE

α̂ RER

Rbias

.Var

0.072

0.104

0.084

0.441

0.343

0.397

0.598

0.405

0.527

25.83 6

0.330

38.51 0

α̂

RER

Rbias

.Var

0.044

0.087

0.048

0.331

0.274

0.328

0.949

0.451

0.965

17.25 5

0.216

25.77 6

α̂ RER

Rbias

.Var

0.028

0.076

0.029

0.274

0.240

0.282

0.202

0.310

0.088

6.716

0.183

10.00 2


θ̂

θ̂

θ̂


APPENDIX B

Total Deviation Method

Table (5)

Comparison between the different methods of estimation for different values of the shape parameter θ (α = 2, λ = 1).

θ     Size   MLE TD   MME TD   PCE TD   LSE TD   Best
0.5   10     1.735    1.784    4.260    1.767    MLE
0.5   20     1.719    1.714    2.133    1.116    LSE
0.5   30     1.715    1.672    1.814    0.749    LSE
0.5   50     1.713    1.639    1.476    0.556    LSE
1.5   10     0.829    0.491    0.995    1.748    ME
1.5   20     0.748    0.445    0.893    1.105    ME
1.5   30     0.719    0.401    0.621    0.952    ME
1.5   50     0.700    0.377    0.594    0.867    ME
2     10     1.208    0.509    1.237    0.945    ME
2     20     1.110    0.415    1.188    0.920    ME
2     30     1.079    0.385    1.166    0.492    ME
2     50     1.056    0.349    0.779    0.276    LSE


Table (6)

Comparison between the different methods of estimation for different values of the shape parameter α (θ = 2, λ = 1).

α     Size   MLE TD   MME TD   PCE TD   LSE TD   Best
1.5   10     1.463    0.922    2.409    0.901    LSE
1.5   20     1.335    0.677    2.051    0.810    ME
1.5   30     1.295    0.515    2.091    0.680    ME
1.5   50     1.266    0.447    1.873    0.605    ME
2     10     1.208    0.509    1.237    0.945    ME
2     20     1.110    0.415    1.188    0.920    ME
2     30     1.079    0.385    1.166    0.492    ME
2     50     1.056    0.349    0.779    0.276    LSE

APPENDIX C


Sampling Distribution of ME

Table (7)

Sampling distribution of the ME of the parameters. Population parameter values: θ = 0.5, α = 2, λ = 1.

Sample size n   Parameter   Skewness   Kurtosis   Pearson's coefficient K   Type
10              θ           4.334      19.787     0.414                     IV
10              α           1.007      5.366      0.559                     IV
10              λ           0.073      3.193      0.335                     IV
20              θ           5.837      35.065     0.369                     IV
20              α           0.434      4.017      0.493                     IV
20              λ           -0.386     4.433      -0.070                    I
30              θ           7.473      56.841     0.383                     IV
30              α           0.167      3.896      0.102                     IV
30              λ           -0.683     5.585      -0.072                    I
50              θ           9.849      98.01      0.432                     IV
50              α           -0.091     4.201      -0.026                    I
50              λ           -0.999     7.788      -0.068                    I


Table (8)

Sampling distribution of the ME of the parameters. Population parameter values: θ = 1.5, α = 2, λ = 1.

Sample size n   Parameter   Skewness   Kurtosis   Pearson's coefficient K   Type
10              θ           1.030      2.062      -0.258                    I
10              α           0.832      4.867      0.612                     IV
10              λ           -0.486     2.348      -2.102                    I
20              θ           0.802      1.642      -0.203                    I
20              α           0.258      2.682      -0.148                    I
20              λ           -0.590     2.003      1.673                     VI
30              θ           0.974      1.950      -0.244                    I
30              α           -0.045     2.149      0.022                     IV
30              λ           -0.631     1.912      1.415                     VI
50              θ           0.857      1.735      -0.216                    I
50              α           -0.352     1.932      0.226                     IV
50              λ           -0.74      1.863      8.531                     VI

Table (9)


Sampling distribution of the ME of the parameters. Population parameter values: θ = 2, α = 2, λ = 1.

Sample size n   Parameter   Skewness   Kurtosis   Pearson's coefficient K   Type
10              θ           0.438      1.192      -0.113                    I
10              α           0.968      4.928      0.949                     IV
10              λ           -0.218     1.951      0.109                     IV
20              θ           0.486      1.236      -0.125                    I
20              α           0.402      2.642      -0.178                    I
20              λ           -0.369     1.659      0.164                     IV
30              θ           0.316      1.100      -0.081                    I
30              α           0.192      1.898      -0.059                    I
30              λ           -0.251     1.427      0.079                     IV
50              θ           0.832      1.692      -0.210                    I
50              α           -0.313     1.974      0.197                     IV
50              λ           -0.736     1.824      3.138                     VI

Table (10)

Sampling distribution of the ME of the parameters.


Population parameter values: α = 1.5, θ = 2, λ = 1.

Sample size n   Parameter   Skewness   Kurtosis   Pearson's coefficient K   Type
10              θ           1.395      2.945      -0.378                    I
10              α           0.814      4.824      0.613                     IV
10              λ           -0.539     2.693      -0.352                    I
20              θ           1.118      2.250      -0.281                    I
20              α           0.283      2.938      -0.235                    I
20              λ           -0.753     2.405      -0.433                    I
30              θ           0.899      1.808      -0.225                    I
30              α                      2.18                                 I
30              λ           -0.65      1.965      3.440                     VI
50              θ           0.827      1.683      -0.208                    I
50              α           -0.243     1.889      0.117                     IV
50              λ           -0.691     1.788      1.228                     VI
