
Statistical Methodology 6 (2009) 70–81

Kumaraswamy's distribution: A beta-type distribution with some tractability advantages
M.C. Jones

Department of Mathematics and Statistics, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK
ARTICLE INFO
Article history:
Received 10 December 2007
Received in revised form 1 April 2008
Accepted 2 April 2008
Keywords:
Beta distribution
Distribution theory
Minimax distribution
Minimum of maxima
Order statistics
ABSTRACT
A two-parameter family of distributions on (0, 1) is explored which has many similarities to the beta distribution and a number of advantages in terms of tractability (it also, of course, has some disadvantages). Kumaraswamy's distribution has its genesis in terms of uniform order statistics, and has particularly straightforward distribution and quantile functions which do not depend on special functions (and hence afford very easy random variate generation). The distribution might, therefore, have a particular role when a quantile-based approach to statistical modelling is taken, and its tractability has appeal for pedagogical uses. To date, the distribution has seen only limited use and development in the hydrological literature.
© 2008 Elsevier B.V. All rights reserved.
1. Introduction
Despite the many alternatives and generalisations [20,27], it remains fair to say that the beta
distribution provides the premier family of continuous distributions on bounded support (which is
taken to be (0, 1)). The beta distribution, Beta(a, b), has density
g(x) = x^{a−1}(1 − x)^{b−1}/B(a, b), 0 < x < 1, (1.1)
where its two shape parameters a and b are positive and B(·, ·) is the beta function. Beta densities
are unimodal, uniantimodal, increasing, decreasing or constant depending on the values of a and b
relative to 1 and have a host of other attractive properties ([15], Chapter 25). The beta distribution
is fairly tractable, but in some ways not fabulously so; in particular, its distribution function is an
incomplete beta function ratio and its quantile function the inverse thereof.

Tel.: +44 1908 652209; fax: +44 1908 655515. E-mail address: M.C.Jones@open.ac.uk.
1572-3127/$ – see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.stamet.2008.04.001
In this paper, I take a look at an alternative two-parameter distribution on (0, 1) which I will call Kumaraswamy's distribution, Kumaraswamy(α, β), where I have denoted its two positive shape parameters α and β. It has many of the same properties as the beta distribution but has some advantages in terms of tractability. Its density is

f(x) = f(x; α, β) = αβ x^{α−1}(1 − x^α)^{β−1}, 0 < x < 1. (1.2)
Alert readers might recognise it in some way, especially if they are familiar with the hydrological literature where it dates back to [22]. However, the distribution does not seem to be very familiar to statisticians, has not been investigated systematically in much detail before, nor has its relative interchangeability with the beta distribution been widely appreciated. For example, Kumaraswamy's densities are also unimodal, uniantimodal, increasing, decreasing or constant depending in the same way as the beta distribution on the values of its parameters. (Boundary behaviour and the main special cases are also common to both beta and Kumaraswamy's distribution.) And yet the normalising constant in (1.2) is very simple and the corresponding distribution and quantile functions also need no special functions. The latter gives Kumaraswamy's distribution an advantage if viewed from the quantile modelling perspective popular in some quarters [28,7]. Some other properties are also more readily available, mathematically, than their counterparts for the beta distribution. Yet the beta distribution also has its particular advantages and I hesitate to claim whether, in the end, the tractability advantages of Kumaraswamy's distribution will prove to be of immense practical significance in statistics; at the very least, Kumaraswamy's distribution might find a pedagogical role.
The background and genesis of Kumaraswamy's distribution are given in Section 2 along with its principal special cases. The basic properties of Kumaraswamy's distribution are given in Section 3, whose easy reading reflects the tractability of the distribution. A deeper investigation of the skewness and kurtosis properties of the distribution is given in Section 4. Inference by maximum likelihood is investigated in Section 5 while, in Section 6, a number of further related distributions are briefly considered. It should be the case that similarities and differences between the beta and Kumaraswamy distributions are made clear as the paper progresses but, in any case, they are summarised and discussed a little more in the closing Section 7.
For references to the hydrological literature on Kumaraswamy's distribution, see [26]. The current article gives a much more complete account of the properties of the Kumaraswamy distribution than any previous publication, however.
Note that the linear transformation ℓ + (u − ℓ)X moves a random variable X on (0, 1) to any other bounded support (ℓ, u). So, provided ℓ and u don't depend on α or β and are known, there is no need to mention such an extension further.
2. Genesis, forebears and special cases
Temporarily, set a = m, b = n + 1 − m in the beta distribution, where m and n are positive integers. Then, as is well known, the beta distribution is the distribution of the mth order statistic from a random sample of size n from the uniform distribution (on (0, 1)). Now consider another simple construction involving uniform order statistics. Take a set of n independent random samples each of size m from the uniform distribution and collect their maxima; take X, say, to be the minimum of the set of maxima; then X has Kumaraswamy's distribution with parameters m and n. (Likewise, by symmetry, one minus the maximum of a set of n minima of independent uniform random samples of size m also has Kumaraswamy's distribution.) A more descriptive name might be the 'minimax' distribution, which I used in an earlier version of this paper and under which name the distribution is listed in the recent work of [23].
Now, the maxima themselves constitute a random sample of size n from the power function distribution which is the Beta(m, 1) distribution (it has density mx^{m−1}, 0 < x < 1). Hence X is also the minimum of a random sample from the power function distribution.
As with the beta distribution, for greater generality the integer-valued parameters m and n may be replaced in Kumaraswamy's distribution by real-valued, positive, parameters α and β.
Kumaraswamy [22] was interested in distributions for hydrological random variables and actually proposed a mixture of a probability mass, F₀, at zero and density (1.2) over (0, 1), although I am using
the terminology 'Kumaraswamy's distribution' to refer solely to the latter. Kumaraswamy gave a number of the basic properties of the distribution but made no mention of its comparison with the beta distribution. The 'double power' distribution described by [21] is that of one minus a random variable with Kumaraswamy's distribution, but they were interested only in α, β > 1.
It is also clear that both beta and Kumaraswamy distributions are special cases of the three-parameter distribution with density

g(x) = (p/B(α, β)) x^{αp−1}(1 − x^p)^{β−1}, 0 < x < 1, (2.1)
and p > 0. This is the generalised beta distribution of [25]. It is the distribution of the 1/pth power of a Beta(α, β) random variable or of the αth order statistic of a sample of size α + β − 1 from the power function distribution Beta(p, 1) (for α, β integer). (The order statistics version of the generalised beta density can be found, for example, in Example 2.2.2 of [1].) Beta and Kumaraswamy distributions are therefore the (p, α, β) = (1, a, b) and (α, 1, β) special cases, respectively, of (2.1). This point is
also made by [26]. The similarities between beta and Kumaraswamy distributions that will become
clear through the rest of this article lead me, however, to the conclusion that this generalised beta
distribution is not very useful in practice, since it produces very similar distributions for quite different
parameter values.
The beta and Kumaraswamy distributions share their main special cases. Beta(a, 1) and Kumaraswamy(α, 1) distributions are both the power function distribution mentioned above and Beta(1, a) and Kumaraswamy(1, β) distributions are both the distribution of one minus that power function random variable. Beta(1, 1) and Kumaraswamy(1, 1) distributions are both the uniform distribution.
A further special case of the Kumaraswamy distribution has also appeared elsewhere. The Kumaraswamy(2, β) distribution is that of the generating variate R = √(x₁² + x₂²) when {x₁, x₂} follow a bivariate Pearson Type II distribution ([5], Section 3.4.1). This has been used in an algorithm to generate univariate symmetric beta random variates [31,4, p. 436].
3. Basic properties
The distribution function of the Kumaraswamy distribution is

F(x) = 1 − (1 − x^α)^β, 0 < x < 1. (3.1)

This compares extremely favourably in terms of simplicity with the beta distribution's incomplete beta function ratio.
The distribution function is readily invertible to yield the quantile function

Q(y) = F^{−1}(y) = {1 − (1 − y)^{1/β}}^{1/α}, 0 < y < 1. (3.2)
As already mentioned, this facilitates ready quantile-based statistical modelling [28,7]. Moreover, I know of no other two-parameter quantile family on (0, 1) so simply defined and yet with such good behaviour (as follows, with an explicit simple density function to boot!). The popular generalised lambda family ([29,7, Section 7.3]) with quantile function y^λ − (1 − y)^δ has support (0, 1) for λ, δ > 0 but encompasses bimodality, repeated incarnations of the uniform distribution and complicated patterns of skewness and kurtosis (e.g. [19]). A better competitor, almost as tractable as the Kumaraswamy distribution and with some similar properties, is the L_B distribution [30].
Formula (3.2) also facilitates trivial random variate generation (as noted by [22]). If U ∼ U(0, 1), then X ∼ f if

X = (1 − U^{1/β})^{1/α}. (3.3)

This compares extremely favourably with the sophisticated algorithms preferred to generate random variates from the beta distribution ([4], Section IX.4, [15], Section 25.2).
Fig. 1. Some Kumaraswamy densities. α = 5, β = 2: left-skewed unimodal density; α = 2, β = 2.5: almost symmetric unimodal density; α = 1/2, β = 1/2: uniantimodal density; α = 1, β = 3: decreasing density.
It can be shown that the Kumaraswamy distribution has the same basic shape properties as the
beta distribution, namely:
α > 1, β > 1 unimodal; α < 1, β < 1 uniantimodal;
α > 1, β ≤ 1 increasing; α ≤ 1, β > 1 decreasing;
α = β = 1 constant

([22] again). In the first two cases, the mode/antimode is at

x₀ = {(α − 1)/(αβ − 1)}^{1/α}. (3.4)
Both beta and Kumaraswamy densities are log-concave if and only if both their parameters are greater
than or equal to 1.
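As a further small illustration of mine (not the paper's), the shape classification and (3.4) translate directly into code; the boundary cases α = 1 or β = 1 are assigned to the monotone rows in the way the density dictates.

def kuma_shape(a, b):
    # shape classification of the Kumaraswamy(a, b) density
    if a == 1 and b == 1:
        return "constant (uniform)"
    if a > 1 and b > 1:
        return "unimodal"
    if a < 1 and b < 1:
        return "uniantimodal"
    if a >= 1 and b <= 1:
        return "increasing"
    return "decreasing"  # remaining case: a <= 1, b >= 1, not both 1

def kuma_mode(a, b):
    # mode/antimode (3.4); meaningful in the first two cases above
    return ((a - 1.0) / (a * b - 1.0))**(1.0 / a)

print(kuma_shape(5, 2), kuma_mode(5, 2))          # unimodal, interior mode
print(kuma_shape(0.5, 0.5), kuma_mode(0.5, 0.5))  # uniantimodal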
The behaviour of the Kumaraswamy density also matches that of the beta density at the boundaries of their support:

f(x) ∼ αβ x^{α−1} as x → 0; f(x) ∼ α^β β(1 − x)^{β−1} as x → 1.
Some illustrative examples of Kumaraswamy densities are plotted in Fig. 1. Note the similarity to
analogous depictions of the beta family (e.g. [15], Figure 25.1b).
In [16], I suggested swapping the roles of distribution and quantile functions for distributions on
(0, 1) to produce complementary distributions. There, I applied the idea to the beta distribution to
produce the complementary beta distribution. Applying the same idea here, a pleasing symmetry is
observed that means that nothing is really gained. The density of the complementary distribution is the quantile density function, q(y) = Q′(y), of the Kumaraswamy distribution, and

q(y) = (1/(αβ)) (1 − y)^{(1/β)−1} {1 − (1 − y)^{1/β}}^{(1/α)−1} = f(1 − y; 1/β, 1/α).
The moments of the Kumaraswamy distribution are both immediate and obtainable from [22,25,21,6] or [1, p. 15]:

E(X^r) = β B(1 + r/α, β). (3.5)

Like those of the beta distribution, they exist for all r > −α. I note the first appearance of a special function, although it is still only the (complete) beta function. (It might be interesting to
note the moment formula in terms of the binomial coefficient with non-integer arguments: E(X^r) = 1/\binom{β + (r/α)}{β}.) In particular,
E(X) = β B(1 + 1/α, β);
Var(X) = β B(1 + 2/α, β) − {β B(1 + 1/α, β)}².
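The moment formula (3.5) is equally direct in code; a sketch of mine, using scipy's betaln (the log of the beta function) for numerical stability:

import numpy as np
from scipy.special import betaln

def kuma_moment(r, a, b):
    # E(X^r) = b * B(1 + r/a, b), valid for r > -a
    return b * np.exp(betaln(1.0 + r / a, b))

def kuma_mean_var(a, b):
    m1 = kuma_moment(1, a, b)
    return m1, kuma_moment(2, a, b) - m1**2

print(kuma_mean_var(2.0, 2.5))
# cross-check against simulation, reusing kuma_rvs from the earlier sketch:
# x = kuma_rvs(10**6, 2.0, 2.5); print(x.mean(), x.var())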
The first L-moment is the mean. Expressions for higher L-moments [12,14] are explicitly available, an improvement on the situation for the beta distribution. However, the general formula for the rth L-moment is a sum of r terms involving several gamma functions; see Appendix A for details. Here, just the scale measure which is the second L-moment (half the Gini mean difference) is given:

λ₂ = β{B(1 + 1/α, β) − 2B(1 + 1/α, 2β)}.
Distributions of order statistics and their moments are also relatively tractable but not especially
edifying (but this too is an improvement on the situation for the beta distribution). They are dealt with
briefly in Appendix B.
4. Skewness and kurtosis
4.1. Skewness
A strong skewness ordering between distributions is the classical one due to [32]. It is immediate that skewness to the right increases, in this sense, with decreasing α for fixed β. This is because the transformation from X₁ ∼ Kumaraswamy(α₁, β) to a Kumaraswamy(α₂, β) random variate is of the form X₁^{α₁/α₂}, which is convex for α₁ > α₂. There seems to be no such simple property for changing β and fixed α.
Various scalar skewness measures can be plotted as functions of α and β. For example, the L-skewness ([12]; formula from Appendix A),

τ₃ = λ₃/λ₂ = {B(1 + 1/α, β) − 6B(1 + 1/α, 2β) + 6B(1 + 1/α, 3β)} / {B(1 + 1/α, β) − 2B(1 + 1/α, 2β)},
is shown in Fig. 2. Note the increasing nature of the skewness as Fig. 2 is traversed diagonally from bottom right to top left. Of course, this measure respects the van Zwet ordering, being a decreasing function of α for fixed β. Similar patterns arise for the classical third-moment measure and the quantile-based skewness measure, {Q(3/4) − 2Q(1/2) + Q(1/4)}/{Q(3/4) − Q(1/4)} [3]; these are not shown. For α, β > 1, one can also plot the skewness measure 1 − 2F(x₀) [2], with similar results again (not shown).
The pattern in Fig. 2 is broadly in line with similar pictures for the beta distribution, although the latter are symmetric about the line a = b. They both contrast with the more complicated patterns of skewness associated with the generalised lambda distribution ([19], Figure 7(a), (c) for τ₃).
4.2. Symmetry and near-symmetry
Perhaps the least attractive feature of the Kumaraswamy distribution, at least at first thought, is that, unlike the beta distribution, it has no symmetric special cases other than the uniform distribution (α = β = 1). It does, however, have a range of almost-symmetric special cases. Some of these are shown in Fig. 3. They correspond to zero skewness in the sense of [2]. I chose this measure (and used it for uniantimodal densities as well as unimodal ones) because it has a simple formula:

γ = 1 − 2F(x₀) = 2{α(β − 1)/(αβ − 1)}^β − 1;
Fig. 2. L-skewness for the Kumaraswamy distribution, plotted as a function of log(α) and log(β). The straight lines superimposed on the plot correspond to α = 1 and β = 1.
Fig. 3. Some almost-symmetric Kumaraswamy densities. In order of decreasing height of mode/antimode: (i) α = 3.211, β = 100; (ii) α = 2.470, β = 5; (iii) α = 1.707, β = 2; (iv) α = β = 1 (the symmetric uniform density); (v) α = 2/5, β = 1/2; (vi) α = 0.082, β = 1/4.
it is easy to see that γ = 0 whenever

α = 1/{β − (β − 1)2^{1/β}}.
The zero curves for other skewness measures follow very similar trajectories.
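As an illustration (mine, not the paper's), the zero-skewness relation recovers the parameter pairs used in Fig. 3 to within rounding:

def alpha_zero_skew(b):
    # alpha = 1 / {b - (b - 1) * 2^(1/b)}, from setting gamma = 0
    return 1.0 / (b - (b - 1.0) * 2.0**(1.0 / b))

for b in (100, 5, 2, 1, 0.5, 0.25):
    print(b, round(alpha_zero_skew(b), 3))
# prints roughly 3.211, 2.468, 1.707, 1.0, 0.4, 0.082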
The densities in Fig. 3 are indeed reasonably symmetric and, provided α and β are not too big, resemble symmetric beta distributions. It is only when β becomes large that, while almost-symmetry is retained (as is a small variance), something happens which is different from, but not necessarily less desirable than, the beta distribution: the modal location, (3.4), shifts away from the centre. (Almost-symmetry about a point towards the right of the unit interval would necessitate consideration of f(1 − x).)
Fig. 4. L-kurtosis for the Kumaraswamy distribution, plotted as a function of log(α) and log(β). The straight lines superimposed on the plot correspond to α = 1 and β = 1.
4.3. Kurtosis
The L-kurtosis ([12]; formula from Appendix A), defined by τ₄ = λ₄/λ₂, is

τ₄ = {B(1 + 1/α, β) − 12B(1 + 1/α, 2β) + 30B(1 + 1/α, 3β) − 20B(1 + 1/α, 4β)} / {B(1 + 1/α, β) − 2B(1 + 1/α, 2β)};
it is plotted as a function of α and β in Fig. 4. Notice how the L-kurtosis increases along the line of near-symmetry as the parameters get bigger. It also increases as one goes away from near-symmetry when either parameter decreases. This behaviour is also observed for the classical fourth-moment and the third-difference-quantile-based kurtosis measures (not shown) as well as being, once more, in line with similar pictures for the beta distribution, except for the symmetry about the diagonal line there. Again, the generalised lambda distribution on (0, 1) displays a more complicated pattern of kurtosis ([19], Figure 7(b), (d) for τ₄).
5. Likelihood inference
In this section, I consider maximum likelihood estimation for the Kumaraswamy distribution; maximum likelihood has not been considered for this distribution before. Let X₁, . . . , Xₙ be a random sample from the Kumaraswamy distribution and let circumflexes denote maximum likelihood estimates of parameters. Differentiating the log likelihood with respect to β leads immediately to the relation

β̂ = −n / Σ_{i=1}^{n} log(1 − Xᵢ^α̂). (5.1)
I can now concentrate on the equation to be satisfied by α̂ arising from substituting for β̂ in the other score equation ([24], p. 762). This is

S(α) ≡ (n/α){1 + T₁(α) + T₂(α)/T₃(α)} = 0, (5.2)
where

T₁(α) = n⁻¹ Σ_{i=1}^{n} log Yᵢ/(1 − Yᵢ), T₂(α) = −n⁻¹ Σ_{i=1}^{n} Yᵢ log Yᵢ/(1 − Yᵢ), T₃(α) = −n⁻¹ Σ_{i=1}^{n} log(1 − Yᵢ)

and Yᵢ = Xᵢ^α, i = 1, . . . , n, remembering that each Yᵢ is a function of α. It is shown in Appendix C that
lim_{α→0} S(α) > 0 and lim_{α→∞} S(α) < 0 and so, given the continuity of S(α), there is at least one zero of (5.2) in (0, ∞). However, although it is possible to obtain various further properties of Tⱼ(α) and its derivatives, j = 1, 2, 3, I have been able to derive no further general properties of the shape of S(α). The safest approach, therefore, is to evaluate S(α) over a fine grid of α values (on the log scale), plot it if desired, identify all zeroes of the function and then evaluate the log likelihood at those zeroes to identify its global maximum.
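A sketch of this recipe in code (my implementation; the paper prescribes only the grid-and-zeroes strategy, and polishing each sign change with scipy's brentq root-finder is my own convenience):

import numpy as np
from scipy.optimize import brentq

def S(alpha, x):
    # the profile score (5.2): S = (n/alpha) * (1 + T1 + T2/T3)
    y = x**alpha
    t1 = np.mean(np.log(y) / (1.0 - y))
    t2 = -np.mean(y * np.log(y) / (1.0 - y))
    t3 = -np.mean(np.log1p(-y))
    return (len(x) / alpha) * (1.0 + t1 + t2 / t3)

def kuma_mle(x, grid=np.logspace(-2, 2, 401)):
    def loglik(a):
        b = -len(x) / np.sum(np.log1p(-x**a))
        return (len(x) * np.log(a * b) + (a - 1.0) * np.sum(np.log(x))
                + (b - 1.0) * np.sum(np.log1p(-x**a)))
    s = np.array([S(a, x) for a in grid])
    sign_changes = np.flatnonzero(np.sign(s[:-1]) != np.sign(s[1:]))
    roots = [brentq(S, grid[i], grid[i + 1], args=(x,)) for i in sign_changes]
    a_hat = max(roots, key=loglik)  # global maximum among the zeroes
    return a_hat, -len(x) / np.sum(np.log1p(-x**a_hat))

# e.g. kuma_mle(kuma_rvs(500, 2.0, 2.5, rng=7)), with kuma_rvs as sketched earlier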
Properties of the maximum likelihood estimates thus obtained follow from the usual asymptotic likelihood theory. It is readily shown that the elements of the observed information matrix are:

Î_{αα} = n/α̂² + {(β̂ − 1)/α̂²} Σ_{i=1}^{n} Yᵢ(log Yᵢ)²/(1 − Yᵢ)²;
Î_{αβ} = −(n/α̂) T₂(α̂);
Î_{ββ} = n/β̂².
And a little more manipulation, taking advantage of the fact that Y ∼ Beta(1, β), gives the elements of the expected information matrix as:

n⁻¹I_{αα} = A/α²; n⁻¹I_{αβ} = B/α; n⁻¹I_{ββ} = 1/β², (5.3)

where

A = A(β) = 1 + {β/(β − 2)}[{ψ(β) − ψ(2)}² − {ψ′(β) − ψ′(2)}]

and

B = B(β) = −{ψ(β + 1) − ψ(2)}/(β − 1) < 0.

Here, ψ(z) = d log Γ(z)/dz is the digamma function.
It follows from (5.3) that, asymptotically,

n Var(α̂) ∼ α²/(A − β²B²); n Var(β̂) ∼ Aβ²/(A − β²B²); Corr(α̂, β̂) ∼ −βB/√A > 0.
Notice that the standard deviation of α̂ is proportional to α and (it seems, numerically) the constant of proportionality decreases with β; the standard deviation of β̂ is independent of α and increases with β. The correlation between the parameter estimates does not depend on α and increases with β from somewhere around 1/4 for small β to 1 for large β; this behaviour is very similar to that of the correlation between maximum likelihood estimators of the parameters of the beta distribution traced along the line a = b.
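For completeness, a sketch of mine evaluating A(β), B(β) and the asymptotic variances above (digamma and trigamma from scipy; β = 1 and β = 2 are removable singularities of the B and A formulae respectively, so limits should be taken there):

import numpy as np
from scipy.special import digamma, polygamma

def A_fun(b):
    # A(beta); polygamma(1, .) is the trigamma function psi'
    return 1.0 + (b / (b - 2.0)) * ((digamma(b) - digamma(2.0))**2
                                    - (polygamma(1, b) - polygamma(1, 2.0)))

def B_fun(b):
    # B(beta) = -{psi(beta + 1) - psi(2)} / (beta - 1) < 0
    return -(digamma(b + 1.0) - digamma(2.0)) / (b - 1.0)

def kuma_asymptotics(a, b, n):
    A, B = A_fun(b), B_fun(b)
    d = A - b**2 * B**2
    return a**2 / (n * d), A * b**2 / (n * d), -b * B / np.sqrt(A)

# (Var(alpha-hat), Var(beta-hat), Corr) for n = 500 at (alpha, beta) = (2, 2.5):
print(kuma_asymptotics(2.0, 2.5, 500))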
6. Related distributions
6.1. Limiting distributions
It is straightforward to see that if the Kumaraswamy distribution is normalised by looking at the distribution of Y = β^{1/α}X (on (0, β^{1/α})), then its density, αy^{α−1}{1 − (y^α/β)}^{β−1}, tends to αy^{α−1} exp(−y^α) (on (0, ∞)), the density of the Weibull distribution, as β → ∞. This is the interesting analogue of the gamma limit that arises in similar circumstances in the case of the beta distribution.
Similarly, the distribution of Z = α(1 − X) has limiting density

βe^{−z}(1 − e^{−z})^{β−1} (6.1)

(on (0, ∞)) as α → ∞. This is the distribution of minus the logarithm of the Beta(1, β) power function distribution (since for finite α, Z is minus the Box–Cox transformation with power 1/α of a Beta(1, β)
random variable). This distribution has recently become quite popular under the name 'generalised exponential distribution' [9,10].
As both α and β become large (in any relationship to one another), the limit associated with the overall normalisation α(1 − β^{1/α}X) is the extreme value density e^{−x} exp(−e^{−x}).
The exact distribution of the overall normalised random variable used above is, in fact, the kappa distribution of [13]. Notice, however, that this observation does not invalidate the novelty of the remainder of the paper since the support of the kappa distribution depends on α and β while that of the Kumaraswamy distribution does not. All of the limiting distributions correspond, of course, with those mentioned in [13] or obtained from extreme value theory when α = m, β = n.
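A quick numerical illustration of the Weibull limit (mine, not the paper's): for large β the exact cdf of Y = β^{1/α}X is close to 1 − exp(−y^α).

import numpy as np

a, b = 2.0, 500.0
y = np.linspace(0.1, 2.5, 6)
x = y / b**(1.0 / a)                    # map back to the (0, 1) scale
F_exact = 1.0 - (1.0 - x**a)**b         # exact cdf of Y = b^(1/a) * X
F_weibull = 1.0 - np.exp(-y**a)         # limiting Weibull cdf
print(np.max(np.abs(F_exact - F_weibull)))   # small, and shrinking as b grows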
6.2. Distributions related by transformation
Let us ignore the glib implication of the subsection title (which is 'all of them') and consider transformations from a limited class. First, let us briefly consider some of the most obvious transformations to positive half-line and whole real line supports. These might include the odds ratio Y = X/(1 − X) from (0, 1) to (0, ∞) and what I have argued [18] is a natural extension Z = (Y − (1/Y))/2 from (0, ∞) to (−∞, ∞), which maintain the power tails of the Kumaraswamy distribution. Alternatively, take logs (Y = −log(1 − X), Z = log Y) to decrease tailweights with each transformation. Or combine the two. In particular, the Kumaraswamy odds distribution is quite an interesting heavy-tailed alternative to the F and generalized Pareto distributions. It has density

αβ y^{α−1} {(1 + y)^α − y^α}^{β−1} (1 + y)^{−(αβ+1)}, y > 0.
On the other hand, Y = −log(1 − X) yields a scaled version of the exponentially-tailed distribution with density (6.1).
A natural way of generating families of distributions on some other support from a simple starting distribution with density h and distribution function H, say, is to apply the quantile function H⁻¹ to a family of distributions on (0, 1). See [17] for this idea applied to the beta distribution. For the Kumaraswamy distributions, the resulting family has density

αβ h(x) H^{α−1}(x) {1 − H^α(x)}^{β−1}.
The transformations of the previous paragraph can, of course, be interpreted in this light, and are amongst the simplest transformations of this type. I particularly like to generate families of distributions on the whole real line from a symmetric H, α and β then becoming the shape parameters associated with the family.
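A sketch of this construction, with H taken to be the standard logistic cdf (my choice of symmetric H, for illustration only):

import numpy as np

H = lambda x: 1.0 / (1.0 + np.exp(-x))    # logistic cdf
h = lambda x: H(x) * (1.0 - H(x))         # logistic density
H_inv = lambda u: np.log(u / (1.0 - u))   # logistic quantile function

def real_line_kuma_pdf(x, a, b):
    # density a*b*h(x)*H(x)^(a-1)*{1 - H(x)^a}^(b-1) on the whole real line
    return a * b * h(x) * H(x)**(a - 1.0) * (1.0 - H(x)**a)**(b - 1.0)

def real_line_kuma_rvs(n, a, b, rng=None):
    # H^{-1} applied to Kumaraswamy variates, via the sampler (3.3)
    rng = np.random.default_rng(rng)
    return H_inv((1.0 - rng.uniform(size=n)**(1.0 / b))**(1.0 / a))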
7. Conclusions
To assist the reader in deciding whether the Kumaraswamy distribution might be of use to him or her in terms of either research or teaching, I summarise the pros, cons and equivalences between the two distributions below. (Some of the pros of the beta distribution have not been mentioned previously in this paper.)
The Kumaraswamy and beta distributions have the following attributes in common:
- their general shapes (unimodal, uniantimodal, monotone or constant) and the dependence of those shapes on the values of their parameters;
- power function and uniform distributions as special cases;
- straightforward interpretations in terms of order statistics from the uniform distribution;
- explicit expressions for the mode/antimode (where appropriate);
- behaviour of densities as x → 0, 1;
- good behaviour of skewness and kurtosis measures as functions of the parameters of the distribution;
- broadly similar maximum likelihood estimation;
- simple standard (if different) limiting distributions.
The Kumaraswamy distribution has the following advantages over the beta distribution:
- a simple explicit formula for its distribution function not involving any special functions;
- ditto for the quantile function;
- as a consequence of the simplicity of the quantile function, a simple formula for random variate generation;
- explicit formulae for L-moments;
- simpler formulae for moments of order statistics.
The beta distribution has the following advantages over the Kumaraswamy distribution:
- simpler formulae for moments and the moment generating function;
- a one-parameter subfamily of symmetric distributions;
- simpler moment estimation;
- more ways of generating the distribution via physical processes;
- in Bayesian analysis, conjugacy with a simple distribution, the binomial.
This paper has offered the Kumaraswamy distribution as a viable alternative to the beta distribution that shares many of the latter's properties while being easier to handle in several ways. I cautiously commend it to the reader. However, the Kumaraswamy distribution is certainly not superior to the beta distribution in every way! But it might be worth consideration from time to time by researchers who wish to utilise one or more of its simpler properties (e.g. quantile-based work and random variate generation) and it provides a useful example for the teaching of distributions.
Acknowledgements
I am very grateful to an anonymous referee of the first version of this paper for pointing out the history of this distribution in the hydrological literature, to Dr. Daniel Henderson for later drawing my attention to the existence of an anonymous Wikipedia article on Kumaraswamy's distribution at http://en.wikipedia.org/wiki/Kumaraswamy_distribution, and to the review team at Statistical Methodology for prompting final polishes to the presentation.
Appendix A. L-moments
A general form for the L-moments for r ≥ 2 is

λ_r = (1/r) Σ_{j=0}^{r−2} (−1)^j \binom{r−2}{j} \binom{r}{j+1} J(r − 1 − j, j + 1),

where

J(i₁, i₂) = ∫₀¹ F^{i₁}(x){1 − F(x)}^{i₂} dx

[11]. In the case of the Kumaraswamy distribution,

J(i₁, i₂) = ∫₀¹ {1 − (1 − x^α)^β}^{i₁} (1 − x^α)^{βi₂} dx
= (1/α) ∫₀¹ {1 − w^β}^{i₁} w^{βi₂} (1 − w)^{(1/α)−1} dw
= (1/α) Σ_{k=0}^{i₁} \binom{i₁}{k} (−1)^k B(β(k + i₂) + 1, 1/α)
= Σ_{k=0}^{i₁} \binom{i₁}{k} (−1)^k β(k + i₂) B(β(k + i₂), 1 + 1/α).
Therefore,

λ_r = (β/r) Σ_{j=0}^{r−2} Σ_{k=0}^{r−1−j} (−1)^{j+k} \binom{r−2}{j} \binom{r}{j+1} \binom{r−1−j}{k} (k + j + 1) B(β(k + j + 1), 1 + 1/α)
= (β/r) Σ_{ℓ=1}^{r} (−1)^{ℓ−1} ℓ {Σ_{j=0}^{min(ℓ−1, r−2)} \binom{r−2}{j} \binom{r}{j+1} \binom{r−1−j}{r−ℓ}} B(βℓ, 1 + 1/α)
= (β/r) Σ_{ℓ=1}^{r} (−1)^{ℓ−1} ℓ \binom{r}{ℓ} {Σ_{j=0}^{min(ℓ−1, r−2)} \binom{r−2}{j} \binom{ℓ}{j+1}} B(βℓ, 1 + 1/α)
= (β/r) Σ_{ℓ=1}^{r} (−1)^{ℓ−1} ℓ \binom{r}{ℓ} \binom{r+ℓ−2}{r−1} B(βℓ, 1 + 1/α).

The final equality follows from (3.20) and (3.29) of [8].
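The final closed form is straightforward to evaluate; in the sketch below (mine), scipy supplies the beta function and binomial coefficients, and the ratio λ₃/λ₂ can be checked against the explicit τ₃ of Section 4.1.

import numpy as np
from scipy.special import betaln, comb

def kuma_lmoment(r, a, b):
    # lambda_r = (b/r) sum_{l=1}^{r} (-1)^(l-1) l C(r,l) C(r+l-2,r-1) B(b*l, 1 + 1/a)
    terms = [(-1.0)**(l - 1) * l * comb(r, l) * comb(r + l - 2, r - 1)
             * np.exp(betaln(b * l, 1.0 + 1.0 / a)) for l in range(1, r + 1)]
    return (b / r) * sum(terms)

a, b = 2.0, 2.5
print(kuma_lmoment(3, a, b) / kuma_lmoment(2, a, b))   # the L-skewness tau_3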
Appendix B. Moments of order statistics
The ith order statistic, X_{i:n}, of a random sample of size n from the Kumaraswamy distribution has density

{αβ/B(i, n + 1 − i)} x^{α−1} (1 − x^α)^{β(n+1−i)−1} {1 − (1 − x^α)^β}^{i−1}, 0 < x < 1.

It follows that

E(X_{i:n}^r) = {αβ/B(i, n + 1 − i)} ∫₀¹ x^{r+α−1} {1 − (1 − x^α)^β}^{i−1} (1 − x^α)^{β(n+1−i)−1} dx
= {β/B(i, n + 1 − i)} ∫₀¹ {1 − w^β}^{i−1} w^{β(n+1−i)−1} (1 − w)^{r/α} dw
= {β/B(i, n + 1 − i)} Σ_{k=0}^{i−1} \binom{i−1}{k} (−1)^k B(β(k + n + 1 − i), 1 + r/α)
= {β/B(i, n + 1 − i)} Σ_{ℓ=n+1−i}^{n} \binom{i−1}{n−ℓ} (−1)^{ℓ−(n+1−i)} B(βℓ, 1 + r/α).
Appendix C. Limiting behaviour of S()
Recall the notation Yᵢ = Xᵢ^α, i = 1, . . . , n.

(I) α → 0. First, note that 1 − Yᵢ ∼ −log Yᵢ > 0 and hence that log(1 − Yᵢ) ∼ log α < 0. It follows that

T₁(α) ≈ −T₂(α) → −1 and T₃(α) ∼ −log α

so that

n⁻¹S(α) ∼ −1/(α log α) > 0.
(II) α → ∞. In this case,

T₁(α) ∼ n⁻¹ Σ_{i=1}^{n} log Yᵢ, T₂(α) ∼ −n⁻¹ Σ_{i=1}^{n} Yᵢ log Yᵢ, and T₃(α) ∼ n⁻¹ Σ_{i=1}^{n} Yᵢ.
It follows that

n⁻¹S(α) ∼ −n⁻¹ Σ_{i=1}^{n} (−log Xᵢ) + Σ_{i=1}^{n} Yᵢ(−log Xᵢ) / Σ_{i=1}^{n} Yᵢ → −n⁻¹ Σ_{i=1}^{n} (−log Xᵢ) + (−log X_{n:n}) < 0

because the mean of positive quantities is larger than their minimum.
References
[1] B.C. Arnold, N. Balakrishnan, H.N. Nagaraja, A First Course in Order Statistics, Wiley, New York, 1992.
[2] B.C. Arnold, R.A. Groeneveld, Measuring skewness with respect to the mode, The American Statistician 49 (1995) 34–38.
[3] A.L. Bowley, Elements of Statistics, sixth ed., Scribner, New York, 1937.
[4] L. Devroye, Non-Uniform Random Variate Generation, Springer, New York, 1986. Available at: http://cg.scs.carleton.ca/~luc/rnbookindex.html.
[5] K.T. Fang, S. Kotz, K.W. Ng, Symmetric Multivariate and Related Distributions, Chapman and Hall, London, 1990.
[6] S.C. Fletcher, K. Ponnambalam, Estimation of reservoir yield and storage distribution using moments analysis, Journal of Hydrology 182 (1996) 259–275.
[7] W.G. Gilchrist, Statistical Modelling With Quantile Functions, Chapman & Hall/CRC, Boca Raton, FL, 2001.
[8] H.W. Gould, Combinatorial Identities, Morgantown Printing and Binding Co., Morgantown, WV, 1972.
[9] R.D. Gupta, D. Kundu, Generalized exponential distributions, Australian and New Zealand Journal of Statistics 41 (1999) 173–188.
[10] R.D. Gupta, D. Kundu, Generalized exponential distribution: Existing results and some recent developments, Journal of Statistical Planning and Inference 137 (2007) 3537–3547.
[11] J.R.M. Hosking, The theory of probability weighted moments, Research Report RC 12210 (revised version), IBM Research Division, Yorktown Heights, NY, 1989.
[12] J.R.M. Hosking, L-moments: Analysis and estimation of distributions using linear combinations of order statistics, Journal of the Royal Statistical Society Series B 52 (1990) 105–124.
[13] J.R.M. Hosking, The four-parameter kappa distribution, IBM Journal of Research and Development 38 (1994) 251–258.
[14] J.R.M. Hosking, J.R. Wallis, Regional Frequency Analysis: An Approach Based on L-Moments, Cambridge University Press, Cambridge, 1997.
[15] N.L. Johnson, S. Kotz, N. Balakrishnan, Continuous Univariate Distributions, second ed., vol. 2, Wiley, New York, 1994.
[16] M.C. Jones, The complementary beta distribution, Journal of Statistical Planning and Inference 104 (2002) 329–337.
[17] M.C. Jones, Families of distributions arising from distributions of order statistics (with discussion), Test 13 (2004) 1–43.
[18] M.C. Jones, Connecting distributions with power tails on the real line, the half line and the interval, International Statistical Review 75 (2007) 58–69.
[19] J. Karvanen, A. Nuutinen, Characterizing the generalized lambda distribution by L-moments, Computational Statistics and Data Analysis 52 (2008) 1971–1983.
[20] S. Kotz, J.R. van Dorp, Beyond Beta: Other Continuous Families of Distributions With Bounded Support and Applications, World Scientific, New Jersey, 2004.
[21] D. Koutsoyiannis, T. Xanthopoulos, On the parametric approach to unit hydrograph identification, Water Resources Management 3 (1989) 107–128.
[22] P. Kumaraswamy, A generalized probability density function for double-bounded random processes, Journal of Hydrology 46 (1980) 79–88.
[23] L.M. Leemis, J.T. McQueston, Univariate distribution relationships, The American Statistician 62 (2008) 45–53.
[24] T. Mäkeläinen, K. Schmidt, G.P.H. Styan, On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples, Annals of Statistics 9 (1981) 758–767.
[25] J.B. McDonald, Some generalized functions for the size distribution of income, Econometrica 52 (1984) 647–664.
[26] S. Nadarajah, On the distribution of Kumaraswamy, Journal of Hydrology 348 (2008) 568–569.
[27] S. Nadarajah, A.K. Gupta, Generalizations and related univariate distributions, in: A.K. Gupta, S. Nadarajah (Eds.), Handbook of Beta Distribution and Its Applications, Dekker, New York, 2004, pp. 97–163.
[28] E. Parzen, Nonparametric statistical data modelling (with comments), Journal of the American Statistical Association 74 (1979) 105–131.
[29] J.S. Ramberg, B.W. Schmeiser, An approximate method for generating asymmetric random variables, Communications of the Association for Computing Machinery 17 (1974) 78–82.
[30] P.R. Tadikamalla, N.L. Johnson, Systems of frequency curves generated by transformations of logistic variables, Biometrika 69 (1982) 461–465.
[31] G. Ulrich, Computer generation of distributions on the M-sphere, Applied Statistics 33 (1984) 158–163.
[32] W.R. van Zwet, Convex Transformations of Random Variables, Mathematisch Centrum, Amsterdam, 1964.
