Third Edition
Baoding Liu
Uncertainty Theory Laboratory
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
liu@tsinghua.edu.cn
http://orsc.edu.cn/liu
3rd Edition
© 2009 by UTLAB
Japanese Translation Version
© 2008 by WAP
2nd Edition
© 2007 by Springer-Verlag Berlin
Chinese Version
© 2005 by Tsinghua University Press
1st Edition
© 2004 by Springer-Verlag Berlin
Contents

Preface ix
1 Probability Theory 1
1.1 Probability Space . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Probability Distribution . . . . . . . . . . . . . . . . . . . . . 7
1.4 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Identical Distribution . . . . . . . . . . . . . . . . . . . . . . 13
1.6 Expected Value . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.8 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.9 Critical Values . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.10 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.11 Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.12 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.13 Convergence Concepts . . . . . . . . . . . . . . . . . . . . . . 35
1.14 Conditional Probability . . . . . . . . . . . . . . . . . . . . . 40
2 Credibility Theory 45
2.1 Credibility Space . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.2 Fuzzy Variables . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.3 Membership Function . . . . . . . . . . . . . . . . . . . . . . 58
2.4 Credibility Distribution . . . . . . . . . . . . . . . . . . . . . 62
2.5 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.6 Identical Distribution . . . . . . . . . . . . . . . . . . . . . . 70
2.7 Extension Principle of Zadeh . . . . . . . . . . . . . . . . . . 72
2.8 Expected Value . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.9 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2.10 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.11 Critical Values . . . . . . . . . . . . . . . . . . . . . . . . . . 90
2.12 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
2.13 Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
2.14 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
C Logic 239
C.1 Traditional Logic . . . . . . . . . . . . . . . . . . . . . . . . . 239
C.2 Probabilistic Logic . . . . . . . . . . . . . . . . . . . . . . . . 240
C.3 Credibilistic Logic . . . . . . . . . . . . . . . . . . . . . . . . 240
C.4 Hybrid Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
C.5 Uncertain Logic . . . . . . . . . . . . . . . . . . . . . . . . . 243
D Inference 244
D.1 Traditional Inference . . . . . . . . . . . . . . . . . . . . . . 244
D.2 Fuzzy Inference . . . . . . . . . . . . . . . . . . . . . . . . . 244
D.3 Hybrid Inference . . . . . . . . . . . . . . . . . . . . . . . . . 245
D.4 Uncertain Inference . . . . . . . . . . . . . . . . . . . . . . . 246
Bibliography 251
Index 268
Preface
Acknowledgment
This work was supported by a series of grants from the National Natural Science
Foundation, the Ministry of Education, and the Ministry of Science and Technology
of China.
Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
January 17, 2009
Chapter 1
Probability Theory
Example 1.2: Let Ω = [0, 1] and let A be the Borel algebra over Ω. If Pr is
the Lebesgue measure, then Pr is a probability measure.
Example 1.3: Let φ be a nonnegative and integrable function on ℜ (the set
of real numbers) such that

    ∫_ℜ φ(x)dx = 1.    (1.3)
    ∪_{i=1}^{∞} (A_i \ A_{i−1}) = A,    ∪_{i=1}^{k} (A_i \ A_{i−1}) = A_k

    = lim_{k→∞} Pr{A_k}.
Note that

    ∩_{i=k}^{∞} A_i ↑ A,    ∪_{i=k}^{∞} A_i ↓ A.
Probability Space
Definition 1.2 Let Ω be a nonempty set, A a σ-algebra over Ω, and Pr a
probability measure. Then the triplet (Ω, A, Pr) is called a probability space.
Example 1.5: Let Ω = [0, 1], A the Borel algebra over Ω, and Pr the
Lebesgue measure. Then ([0, 1], A, Pr) is a probability space, sometimes
called the Lebesgue unit interval. For many purposes it is sufficient to use it
as the basic probability space.
is an event.
Example 1.6: Take (Ω, A, Pr) to be {ω1, ω2} with Pr{ω1} = Pr{ω2} = 0.5.
Then the function

    ξ(ω) = 0, if ω = ω1;  1, if ω = ω2

is a random variable.
Example 1.7: Take (Ω, A, Pr) to be the interval [0, 1] with the Borel algebra
and Lebesgue measure. We define ξ as the identity function from Ω to [0, 1].
Since ξ is a measurable function, it is a random variable.
    Pr{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm} = 0;    (1.9)

    Pr{ξ ≠ x1, ξ ≠ x2, · · · } = 0.    (1.10)
Random Vector
Definition 1.8 An n-dimensional random vector is a measurable function
from a probability space (Ω, A, Pr) to the set of n-dimensional real vectors,
i.e., for any Borel set B of ℜⁿ, the set

    {ξ ∈ B} = {ω ∈ Ω | ξ(ω) ∈ B}    (1.11)
is an event.
6 Chapter 1 - Probability Theory
which implies that ξ1 is a random variable. A similar process may prove that
ξ2 , ξ3 , · · · , ξn are random variables.
Conversely, suppose that all ξ1, ξ2, · · · , ξn are random variables on the
probability space (Ω, A, Pr). We define

    B = {B ⊂ ℜⁿ | {ξ ∈ B} ∈ A}.

The class B is a σ-algebra over ℜⁿ because (i) we have ℜⁿ ∈ B since
{ξ ∈ ℜⁿ} = Ω ∈ A; (ii) if B ∈ B, then {ξ ∈ B} ∈ A, and

    {ξ ∈ Bᶜ} = {ξ ∈ B}ᶜ ∈ A
Random Arithmetic
In this subsection, we suppose that all random variables are defined on a
common probability space. Otherwise, we may embed them into the product
probability space.
Proof: Assume that ξ is a random vector on the probability space (Ω, A, Pr).
For any Borel set B of ℜ, since f is a measurable function, f⁻¹(B) is also a
Borel set of ℜⁿ. Thus we have

    {f(ξ) ∈ B} = {ξ ∈ f⁻¹(B)} ∈ A
That is, Φ(x) is the probability that the random variable ξ takes a value less
than or equal to x.
Example 1.10: Assume that the random variables ξ and η have the same
probability distribution. Does it follow that ξ = η? Generally speaking, it
does not. Take (Ω, A, Pr) to be {ω1, ω2} with Pr{ω1} = Pr{ω2} = 0.5. We
now define two random variables as follows,

    ξ(ω) = −1, if ω = ω1;  1, if ω = ω2,
    η(ω) = 1, if ω = ω1;  −1, if ω = ω2.

Then ξ and η have the same probability distribution, yet ξ(ω) ≠ η(ω) for
every ω.
Remark 1.1: Theorem 1.5 states that the identity function is a universal
function for any probability distribution by defining an appropriate proba-
bility space. In fact, there is a universal probability space for any probability
distribution by defining an appropriate function.
Proof: The parts (a), (b), (c) and (d) follow immediately from the definition.
Next we prove the part (e). If ξ is a continuous random variable, then
Pr{ξ = x} = 0. It follows from the probability continuity theorem that
holds for all x ∈ ℜ, where Φ is the probability distribution of the random
variable ξ.
Let φ : ℜ → [0, +∞) be a measurable function such that ∫_{−∞}^{+∞} φ(x)dx = 1.
Then φ is the probability density function of some random variable because

    Φ(x) = ∫_{−∞}^{x} φ(y)dy

is an increasing and continuous function satisfying (1.14).
Proof: Let C be the class of all subsets C of ℜ for which the relation

    Pr{ξ ∈ C} = ∫_C φ(y)dy    (1.17)

holds. We will show that C contains all Borel sets of ℜ. It follows from the
probability continuity theorem and relation (1.17) that C is a monotone class.
It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b, ∞)
and ∅ since

    Pr{ξ ∈ (−∞, a]} = Φ(a) = ∫_{−∞}^{a} φ(y)dy,

    Pr{ξ ∈ (b, +∞)} = Φ(+∞) − Φ(b) = ∫_{b}^{+∞} φ(y)dy,

    Pr{ξ ∈ (a, b]} = Φ(b) − Φ(a) = ∫_{a}^{b} φ(y)dy,

    Pr{ξ ∈ ∅} = 0 = ∫_{∅} φ(y)dy

where Φ is the probability distribution of ξ. Let F be the algebra consisting of
all finite unions of disjoint sets of the form (−∞, a], (a, b], (b, ∞) and ∅. Note
that for any disjoint sets C1, C2, · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm,
we have

    Pr{ξ ∈ C} = Σ_{j=1}^{m} Pr{ξ ∈ Cj} = Σ_{j=1}^{m} ∫_{Cj} φ(y)dy = ∫_{C} φ(y)dy.
denoted by U(a, b), where a and b are given real numbers with a < b.
Exponential Distribution: A random variable ξ has an exponential dis-
tribution if its probability density function is defined by

    φ(x) = (1/β) exp(−x/β), if x ≥ 0;  0, if x < 0    (1.19)
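As a quick numerical sanity check of (1.19), not part of the original text, the sketch below verifies that the density integrates to 1 and that the mean equals β; the helper names and the choice β = 2 are ours.

```python
import math

def exp_pdf(x, beta):
    # Density (1.19): (1/beta) exp(-x/beta) for x >= 0, and 0 for x < 0.
    return math.exp(-x / beta) / beta if x >= 0 else 0.0

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint Riemann sum of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

beta = 2.0
total = midpoint_integral(lambda x: exp_pdf(x, beta), 0.0, 60.0)
mean = midpoint_integral(lambda x: x * exp_pdf(x, beta), 0.0, 60.0)
# total ≈ 1 and mean ≈ beta (the tail beyond 60 is negligible for beta = 2)
```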
Φ(x1 , x2 , · · · , xn ) = Pr {ξ1 ≤ x1 , ξ2 ≤ x2 , · · · , ξn ≤ xn } .
Definition 1.14 The joint probability density function φ : ℜⁿ → [0, +∞) of
a random vector (ξ1, ξ2, · · · , ξn) is a function such that

    Φ(x1, x2, · · · , xn) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xn} φ(y1, y2, · · · , yn) dy1 dy2 · · · dyn
1.4 Independence
Definition 1.15 The random variables ξ1, ξ2, · · · , ξm are said to be indepen-
dent if

    Pr{∩_{i=1}^{m} {ξi ∈ Bi}} = Π_{i=1}^{m} Pr{ξi ∈ Bi}    (1.21)
holds. We will show that C contains all Borel sets of ℜ. It follows from
the probability continuity theorem and relation (1.23) that C is a monotone
class. It is also clear that C contains all intervals of the form (−∞, a], (a, b],
(b, ∞) and ∅. Let F be the algebra consisting of all finite unions of disjoint
sets of the form (−∞, a], (a, b], (b, ∞) and ∅. Note that for any disjoint sets
C1, C2, · · · , Ck of F and C = C1 ∪ C2 ∪ · · · ∪ Ck, we have

    Pr{ξ1 ∈ C, ξ2 ≤ x2, · · · , ξm ≤ xm}
    = Σ_{j=1}^{k} Pr{ξ1 ∈ Cj, ξ2 ≤ x2, · · · , ξm ≤ xm}
    = Σ_{j=1}^{k} Pr{ξ1 ∈ Cj} Pr{ξ2 ≤ x2} · · · Pr{ξm ≤ xm}
    = Pr{ξ1 ∈ C} Pr{ξ2 ≤ x2} · · · Pr{ξm ≤ xm}.
That is, C ∈ C. Hence we have F ⊂ C. Since the smallest σ-algebra containing
F is just the Borel algebra of ℜ, the monotone class theorem implies that C
contains all Borel sets of ℜ.
Applying the same reasoning to each ξi in turn, we obtain the indepen-
dence of the random variables.
Theorem 1.11 Let ξi be random variables with probability density functions
φi , i = 1, 2, · · · , m, respectively, and φ the probability density function of the
random vector (ξ1 , ξ2 , · · · , ξm ). Then ξ1 , ξ2 , · · · , ξm are independent if and
only if
φ(x1 , x2 , · · · , xm ) = φ1 (x1 )φ2 (x2 ) · · · φm (xm ) (1.24)
for almost all (x1, x2, · · · , xm) ∈ ℜᵐ.
Proof: If φ(x1 , x2 , · · · , xm ) = φ1 (x1 )φ2 (x2 ) · · · φm (xm ) a.e., then we have
    Φ(x1, x2, · · · , xm) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xm} φ(t1, t2, · · · , tm) dt1 dt2 · · · dtm

    = ∫_{−∞}^{x1} ∫_{−∞}^{x2} · · · ∫_{−∞}^{xm} φ1(t1)φ2(t2) · · · φm(tm) dt1 dt2 · · · dtm

    = ∫_{−∞}^{x1} φ1(t1)dt1 ∫_{−∞}^{x2} φ2(t2)dt2 · · · ∫_{−∞}^{xm} φm(tm)dtm
holds. We will show that C contains all Borel sets of ℜ. It follows from
the probability continuity theorem and relation (1.26) that C is a monotone
class. It is also clear that C contains all intervals of the form (−∞, a], (a, b],
(b, ∞) and ∅ since ξ and η have the same probability distribution. Let F be
the algebra consisting of all finite unions of disjoint sets of the form (−∞, a],
(a, b], (b, ∞) and ∅. Note that for any disjoint sets C1, C2, · · · , Ck of F and
C = C1 ∪ C2 ∪ · · · ∪ Ck, we have

    Pr{ξ ∈ C} = Σ_{j=1}^{k} Pr{ξ ∈ Cj} = Σ_{j=1}^{k} Pr{η ∈ Cj} = Pr{η ∈ C}.
Theorem 1.13 Let ξ and η be two random variables whose probability den-
sity functions exist. Then ξ and η are identically distributed if and only if
they have the same probability density function.
Proof: It follows from Theorem 1.12 that the random variables ξ and η are
identically distributed if and only if they have the same probability distribu-
tion, which holds if and only if they have the same probability density function.
If a ≥ 0, then

    E[ξ] = ∫_{0}^{a} 1 dr + ∫_{a}^{b} (b − r)/(b − a) dr + ∫_{b}^{+∞} 0 dr − ∫_{−∞}^{0} 0 dr = (a + b)/2.

If b ≤ 0, then

    E[ξ] = ∫_{0}^{+∞} 0 dr − (∫_{−∞}^{a} 0 dr + ∫_{a}^{b} (r − a)/(b − a) dr + ∫_{b}^{0} 1 dr) = (a + b)/2.

If a < 0 < b, then

    Pr{ξ ≤ r} = 0, if r ≤ a;  (r − a)/(b − a), if a ≤ r ≤ 0,

and

    E[ξ] = ∫_{0}^{b} (b − r)/(b − a) dr + ∫_{b}^{+∞} 0 dr − (∫_{−∞}^{a} 0 dr + ∫_{a}^{0} (r − a)/(b − a) dr) = (a + b)/2.
    E[ξ] = Σ_{i=1}^{m} p_i x_i.

    ∫_{−∞}^{+∞} x φ(x) dx
and

    lim_{y→+∞} ∫_{y}^{+∞} x dΦ(x) = 0,    lim_{y→−∞} ∫_{−∞}^{y} x dΦ(x) = 0.

It follows from

    ∫_{y}^{+∞} x dΦ(x) ≥ y (lim_{z→+∞} Φ(z) − Φ(y)) = y(1 − Φ(y)) ≥ 0, if y > 0,

    ∫_{−∞}^{y} x dΦ(x) ≤ y (Φ(y) − lim_{z→−∞} Φ(z)) = yΦ(y) ≤ 0, if y < 0

that

    lim_{y→+∞} y(1 − Φ(y)) = 0,    lim_{y→−∞} yΦ(y) = 0.
Let 0 = x0 < x1 < x2 < · · · < xn = y be a partition of [0, y]. Then we have

    Σ_{i=0}^{n−1} x_i (Φ(x_{i+1}) − Φ(x_i)) → ∫_{0}^{y} x dΦ(x)

and

    Σ_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − x_i) → ∫_{0}^{y} Pr{ξ ≥ r} dr

as max{|x_{i+1} − x_i| : i = 0, 1, · · · , n − 1} → 0. Since

    Σ_{i=0}^{n−1} x_i (Φ(x_{i+1}) − Φ(x_i)) − Σ_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − x_i) = y(Φ(y) − 1) → 0
= E[ξ] + b.

If b < 0, then we have

    E[ξ + b] = E[ξ] − ∫_{b}^{0} (Pr{ξ ≥ r − b} + Pr{ξ < r − b}) dr = E[ξ] + b.
If a < 0, we have

    E[aξ] = ∫_{0}^{∞} Pr{aξ ≥ r} dr − ∫_{−∞}^{0} Pr{aξ ≤ r} dr

    = ∫_{0}^{∞} Pr{ξ ≤ r/a} dr − ∫_{−∞}^{0} Pr{ξ ≥ r/a} dr

    = a ∫_{0}^{∞} Pr{ξ ≥ r/a} d(r/a) − a ∫_{−∞}^{0} Pr{ξ ≤ r/a} d(r/a)

    = aE[ξ].
= E[ξ] + E[η].
Step 4: We prove that E[ξ + η] = E[ξ] + E[η] when both ξ and η are
nonnegative random variables. For every i ≥ 1 and every ω ∈ Ω, we define

    ξi(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ ξ(ω) < k/2^i, k = 1, 2, · · · , i2^i;  i, if i ≤ ξ(ω),

    ηi(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ η(ω) < k/2^i, k = 1, 2, · · · , i2^i;  i, if i ≤ η(ω).

Then {ξi}, {ηi} and {ξi + ηi} are three sequences of nonnegative simple
random variables such that ξi ↑ ξ, ηi ↑ η and ξi + ηi ↑ ξ + η as i → ∞. Note
that the functions Pr{ξi > r}, Pr{ηi > r}, Pr{ξi + ηi > r}, i = 1, 2, · · · are
also simple. It follows from the probability continuity theorem that E[ξi] → E[ξ]
as i → ∞. Similarly, we may prove that E[ηi] → E[η] and E[ξi + ηi] → E[ξ + η]
as i → ∞. It follows from Step 3 that E[ξ + η] = E[ξ] + E[η].
Step 5: We prove that E[ξ + η] = E[ξ] + E[η] when ξ and η are arbitrary
random variables. Define

    ξi(ω) = ξ(ω), if ξ(ω) ≥ −i;  −i, otherwise,
    ηi(ω) = η(ω), if η(ω) ≥ −i;  −i, otherwise.
Since the expected values E[ξ] and E[η] are finite, we have
lim E[ξi ] = E[ξ], lim E[ηi ] = E[η], lim E[ξi + ηi ] = E[ξ + η].
i→∞ i→∞ i→∞
Note that (ξi + i) and (ηi + i) are nonnegative random variables. It follows
from Steps 1 and 4 that
= E[ξ]E[η].
Step 2: Next we prove the case where ξ and η are nonnegative random
variables. For every i ≥ 1 and every ω ∈ Ω, we define

    ξi(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ ξ(ω) < k/2^i, k = 1, 2, · · · , i2^i;  i, if i ≤ ξ(ω),

    ηi(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ η(ω) < k/2^i, k = 1, 2, · · · , i2^i;  i, if i ≤ η(ω).
Then {ξi }, {ηi } and {ξi ηi } are three sequences of nonnegative simple random
variables such that ξi ↑ ξ, ηi ↑ η and ξi ηi ↑ ξη as i → ∞. It follows from
the independence of ξ and η that ξi and ηi are independent. Hence we have
E[ξi ηi ] = E[ξi ]E[ηi ] for i = 1, 2, · · · It follows from the probability continuity
theorem that Pr{ξi > r}, i = 1, 2, · · · are simple functions such that
Pr{ξi > r} ↑ Pr{ξ > r}, for all r ≥ 0
as i → ∞. Since the expected value E[ξ] exists, we have
    E[ξi] = ∫_{0}^{+∞} Pr{ξi > r} dr → ∫_{0}^{+∞} Pr{ξ > r} dr = E[ξ]
In addition, it is also known that Pr{fi (ξ) > r} ↑ Pr{f (ξ) > r} as i → ∞ for
r ≥ 0. It follows from the monotone convergence theorem that
    E[f(ξ)] = ∫_{0}^{+∞} Pr{f(ξ) > r} dr

    = lim_{i→∞} ∫_{0}^{+∞} Pr{fi(ξ) > r} dr

    = lim_{i→∞} ∫_{−∞}^{+∞} fi(x) dΦ(x)

    = ∫_{−∞}^{+∞} f(x) dΦ(x).
    = Σ_{k=1}^{∞} Pr{η = k} (E[ξ1] + E[ξ2] + · · · + E[ξk])

    = Σ_{k=1}^{∞} Pr{η = k} k E[ξ1]    (by iid hypothesis)

    = E[η]E[ξ1].
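The identity E[ξ1 + · · · + ξη] = E[η]E[ξ1] above can be illustrated by simulation. The sketch below is our own, with η drawn independently of the iid summands and arbitrary illustrative distributions; it estimates the random sum's mean by Monte Carlo.

```python
import random

def random_sum_mean(trials=100_000, seed=12345):
    # eta takes values 1..4 with equal probability, so E[eta] = 2.5;
    # the xi's are iid uniform on (0, 2), so E[xi1] = 1.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        eta = rng.randint(1, 4)          # drawn independently of the xi's
        total += sum(rng.uniform(0.0, 2.0) for _ in range(eta))
    return total / trials                # ≈ E[eta] * E[xi1] = 2.5
```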
1.7 Variance
Definition 1.18 Let ξ be a random variable with finite expected value e.
Then the variance of ξ is defined by V [ξ] = E[(ξ − e)2 ].
Pr{(ξ − e)² = 0} = 1, i.e., Pr{ξ = e} = 1.
Conversely, if Pr{ξ = e} = 1, then we have Pr{(ξ − e)² = 0} = 1 and
Pr{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

    V[ξ] = ∫_{0}^{+∞} Pr{(ξ − e)² ≥ r} dr = 0.
Since ξ1, ξ2, · · · , ξn are independent, E[(ξi − E[ξi])(ξj − E[ξj])] = 0 for all
i, j with i ≠ j. Thus (1.35) holds.
1.8 Moments
Definition 1.19 Let ξ be a random variable, and k a positive number. Then
(a) the expected value E[ξ k ] is called the kth moment;
(b) the expected value E[|ξ|k ] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])k ] is called the kth central moment;
(d) the expected value E[|ξ −E[ξ]|k ] is called the kth absolute central moment.
Note that the first central moment is always 0, the first moment is just
the expected value, and the second central moment is just the variance.
Theorem 1.25 Let ξ be a nonnegative random variable, and k a positive
number. Then the kth moment

    E[ξ^k] = k ∫_{0}^{+∞} r^{k−1} Pr{ξ ≥ r} dr.    (1.39)
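Formula (1.39) can be checked numerically against a concrete tail. The sketch below is our illustration: it uses the exponential tail Pr{ξ ≥ r} = exp(−r/β), for which the kth moment is known to be k! βᵏ.

```python
import math

def moment_via_tail(k, beta, n=200_000, upper=80.0):
    # E[xi^k] = k * int_0^inf r^(k-1) Pr{xi >= r} dr   (formula (1.39)),
    # with Pr{xi >= r} = exp(-r/beta) for an exponential variable.
    h = upper / n
    s = sum(((i + 0.5) * h) ** (k - 1) * math.exp(-(i + 0.5) * h / beta)
            for i in range(n))
    return k * s * h

beta = 1.5
# exact kth moment of the exponential distribution is k! * beta**k
```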
Theorem 1.26 Let ξ be a random variable that takes values in [a, b] and has
expected value e. Then for any positive integer k, the kth absolute moment
and kth absolute central moment satisfy the following inequalities,
    E[|ξ|^k] ≤ ((b − e)/(b − a)) |a|^k + ((e − a)/(b − a)) |b|^k,    (1.40)

    E[|ξ − e|^k] ≤ ((b − e)/(b − a)) (e − a)^k + ((e − a)/(b − a)) (b − e)^k.    (1.41)
This means that the random variable ξ will reach upwards of the α-
optimistic value ξsup(α) with probability at least α, and will be below the
α-pessimistic value ξinf(α) with probability at least α. The optimistic value
is also called a percentile.
where ξsup (α) and ξinf (α) are the α-optimistic and α-pessimistic values of the
random variable ξ, respectively.
Proof: It follows from the definition of the optimistic value that there exists
an increasing sequence {ri } such that Pr{ξ ≥ ri } ≥ α and ri ↑ ξsup (α) as
i → ∞. Since {ω|ξ(ω) ≥ ri } ↓ {ω|ξ(ω) ≥ ξsup (α)}, it follows from the
probability continuity theorem that
    Pr{ξ ≥ ξsup(α)} = lim_{i→∞} Pr{ξ ≥ ri} ≥ α.
Example 1.19: Note that Pr{ξ ≥ ξsup(α)} > α and Pr{ξ ≤ ξinf(α)} > α
may hold. For example,

    ξ = 0 with probability 0.4;  1 with probability 0.6.
If α = 0.8, then ξsup (0.8) = 0 which makes Pr{ξ ≥ ξsup (0.8)} = 1 > 0.8. In
addition, ξinf (0.8) = 1 and Pr{ξ ≤ ξinf (0.8)} = 1 > 0.8.
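For a finite discrete variable both critical values are attained on the support, so they can be computed by direct enumeration. The sketch below is our own helper, using the definitions ξsup(α) = sup{r | Pr{ξ ≥ r} ≥ α} and ξinf(α) = inf{r | Pr{ξ ≤ r} ≥ α}; it reproduces Example 1.19.

```python
def xi_sup(alpha, dist):
    # dist: list of (value, probability); returns the alpha-optimistic value
    # sup{r : Pr{xi >= r} >= alpha}, attained on the support for finite dist.
    values = sorted({v for v, _ in dist})
    return max(r for r in values
               if sum(p for v, p in dist if v >= r) >= alpha)

def xi_inf(alpha, dist):
    # alpha-pessimistic value inf{r : Pr{xi <= r} >= alpha}.
    values = sorted({v for v, _ in dist})
    return min(r for r in values
               if sum(p for v, p in dist if v <= r) >= alpha)

dist = [(0, 0.4), (1, 0.6)]   # the variable of Example 1.19
# xi_sup(0.8, dist) gives 0 and xi_inf(0.8, dist) gives 1, as in the text
```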
Theorem 1.28 Let ξ be a random variable. Then we have
(a) ξinf (α) is an increasing and left-continuous function of α;
(b) ξsup (α) is a decreasing and left-continuous function of α.
Proof: (a) It is easy to prove that ξinf (α) is an increasing function of α.
Next, we prove the left-continuity of ξinf (α) with respect to α. Let {αi } be
an arbitrary sequence of positive numbers such that αi ↑ α. Then {ξinf(αi)}
is an increasing sequence. If the limit equals ξinf(α), then the left-continuity
is proved. Otherwise, there exists a number z* such that

    lim_{i→∞} ξinf(αi) < z* < ξinf(α).
Similarly, we may prove that (−ξ)inf (α) = −ξsup (α). The theorem is proved.
1.10 Entropy
Given a random variable, what is the degree of difficulty of predicting the
specified value that the random variable will take? In order to answer this
question, Shannon [219] defined a concept of entropy as a measure of uncer-
tainty.
H[ξ] ≥ 0 (1.46)
and equality holds if and only if there exists an index k such that pk = 1, i.e.,
ξ is essentially a deterministic number.
Section 1.10 - Entropy 29
[Figure: the function S(t) = −t ln t on [0, 1]; it vanishes at t = 0 and t = 1
and attains its maximum value 1/e at t = 1/e.]
Theorem 1.32 Let ξ be a simple random variable taking values xi with prob-
abilities pi , i = 1, 2, · · · , n, respectively. Then
H[ξ] ≤ ln n (1.47)
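Theorem 1.32 and the equality case of (1.46) are easy to check numerically. The sketch below, our own illustration, computes H[ξ] = −Σ p_i ln p_i for a few distributions.

```python
import math

def entropy(probs):
    # H[xi] = -sum p_i ln p_i; terms with p_i = 0 contribute nothing.
    return -sum(p * math.log(p) for p in probs if p > 0)

# a degenerate variable has zero entropy (equality case of (1.46)),
# and the uniform distribution attains the upper bound ln n of (1.47)
degenerate = [1.0, 0.0, 0.0, 0.0]
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
```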
It follows from the Euler-Lagrange equation that the maximum entropy prob-
ability density function satisfies

    ln φ(x) + 1 + λ = 0

and has the form φ(x) = exp(−1 − λ). Substituting it into the natural
constraint, we get

    φ*(x) = 1/(b − a),  a ≤ x ≤ b
which is just the uniformly distributed random variable, and the maximum
entropy is H[ξ ∗ ] = ln(b − a).
The Lagrangian is

    L = −∫_{0}^{∞} φ(x) ln φ(x) dx − λ1 (∫_{0}^{∞} φ(x) dx − 1) − λ2 (∫_{0}^{∞} x φ(x) dx − β).
which is just the exponentially distributed random variable, and the maxi-
mum entropy is H[ξ ∗ ] = 1 + ln β.
The Lagrangian is

    L = −∫_{−∞}^{+∞} φ(x) ln φ(x) dx − λ1 (∫_{−∞}^{+∞} φ(x) dx − 1)

    − λ2 (∫_{−∞}^{+∞} x φ(x) dx − µ) − λ3 (∫_{−∞}^{+∞} (x − µ)² φ(x) dx − σ²).
    φ*(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²)),  x ∈ ℜ
1.11 Distance
Distance is a powerful concept in many disciplines of science and engineering.
This section introduces the distance between random variables.
Theorem 1.33 Let ξ, η, τ be random variables, and let d(·, ·) be the distance.
Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ d(ξ, τ ) + d(η, τ ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
The part (d) is proved by the following relation,
1.12 Inequalities
It is well known that there are several important inequalities in probability
theory, such as the Markov inequality, the Chebyshev inequality, the Hölder
inequality, the Minkowski inequality, and the Jensen inequality. They play
an important role in both theory and applications.
    Pr{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).    (1.50)

= f(t) · Pr{|ξ| ≥ t}

    Pr{|ξ| ≥ t} ≤ E[|ξ|^p] / t^p.    (1.51)

Proof: It is a special case of Theorem 1.34 when f(x) = |x|^p.
Example 1.26: For any given positive number t, we define a random variable
as follows,

    ξ = 0 with probability 1/2;  t with probability 1/2.

Then Pr{ξ ≥ t} = 1/2 = E[|ξ|^p]/t^p.
    Pr{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t².    (1.52)
Proof: It is a special case of Theorem 1.34 when the random variable ξ is
replaced with ξ − E[ξ] and f(x) = x².
Example 1.27: For any given positive number t, we define a random variable
as follows,

    ξ = −t with probability 1/2;  t with probability 1/2.
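Example 1.27 shows that the Chebyshev bound (1.52) can be attained. A small check, our own illustration, computes both sides for the two-point variable.

```python
def chebyshev_sides(t):
    # xi = -t or t, each with probability 1/2
    outcomes = [(-t, 0.5), (t, 0.5)]
    mean = sum(x * p for x, p in outcomes)                   # E[xi] = 0
    var = sum((x - mean) ** 2 * p for x, p in outcomes)      # V[xi] = t^2
    tail = sum(p for x, p in outcomes if abs(x - mean) >= t) # Pr{|xi - E[xi]| >= t}
    bound = var / t ** 2                                     # right-hand side of (1.52)
    return tail, bound
# for every t > 0 both sides equal 1, so (1.52) holds with equality
```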
Theorem 1.37 (Hölder's Inequality) Let p and q be two positive real num-
bers with 1/p + 1/q = 1, and let ξ and η be random variables with E[|ξ|^p] < ∞
and E[|η|^q] < ∞. Then we have

    E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.    (1.53)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function
f(x, y) = x^{1/p} y^{1/q} is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus
for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers
a and b such that
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function
f(x, y) = (x^{1/p} + y^{1/p})^p is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}.
Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain
Note that ξi → ξ, a.s. if and only if Pr{X} = 0. That is, ξi → ξ, a.s. if and
only if

    Pr{∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε)} = 0

for every ε > 0. Since

    ∪_{i=n}^{∞} Xi(ε) ↓ ∩_{n=1}^{∞} ∪_{i=n}^{∞} Xi(ε),
Proof: It follows from the convergence a.s. and Theorem 1.40 that

    lim_{n→∞} Pr{∪_{i=n}^{∞} {|ξi − ξ| ≥ ε}} = 0
Proof: It follows from the Markov inequality that, for any given number
ε > 0,

    Pr{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|] / ε → 0

as i → ∞. Thus {ξi} converges in probability to ξ.
    E[|ξi − ξ|] = 2^i · (1/2^i) = 1.

That is, the sequence {ξi} does not converge in mean to ξ.
Example 1.30: Convergence a.s. does not imply convergence in mean. For
example, take (Ω, A, Pr) to be {ω1, ω2, · · · } with Pr{ωj} = 1/2^j for j =
1, 2, · · · The random variables are defined by

    ξi(ωj) = 2^i, if j = i;  0, otherwise
Example 1.31: Convergence in mean does not imply convergence a.s. For
example, take (Ω, A, Pr) to be the interval [0, 1] with Borel algebra and
Lebesgue measure. For any positive integer i, there is an integer j such
that i = 2j + k, where k is an integer between 0 and 2j − 1. We define a
random variable on Ω by

    ξi(ω) = 1, if k/2^j ≤ ω ≤ (k + 1)/2^j;  0, otherwise
Since Pr{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf i→∞ Φi (x) for any
z < x. Letting z → x, we get
It follows from (1.61) and (1.62) that Φi (x) → Φ(x). The theorem is proved.
Theorem 1.44 Let (Ω, A, Pr) be a probability space, and B an event with
Pr{B} > 0. Then Pr{·|B} defined by (1.63) is a probability measure, and
(Ω, A, Pr{·|B}) is a probability space.
    Pr{Ak|B} = Pr{Ak} Pr{B|Ak} / Σ_{i=1}^{n} Pr{Ai} Pr{B|Ai}    (1.64)
for k = 1, 2, · · · , n.
which is also called the formula for total probability. Thus, for any k, we have

    Pr{Ak|B} = Pr{Ak ∩ B} / Pr{B} = Pr{Ak} Pr{B|Ak} / Σ_{i=1}^{n} Pr{Ai} Pr{B|Ai}.
Remark 1.2: In particular, let A and B be two events with Pr{A} > 0 and
Pr{B} > 0. Then A and Aᶜ form a partition of the space Ω, and the Bayes
formula is

    Pr{A|B} = Pr{A} Pr{B|A} / Pr{B}.    (1.65)
Example 1.35: Let (ξ, η) be a random vector with joint probability density
function ψ. Then the marginal probability density functions of ξ and η are
    f(x) = ∫_{−∞}^{+∞} ψ(x, y) dy,    g(y) = ∫_{−∞}^{+∞} ψ(x, y) dx,
    φ(x|η = y) = ψ(x, y) / g(y) = ψ(x, y) / ∫_{−∞}^{+∞} ψ(x, y) dx,  a.s.    (1.70)
Note that (1.69) and (1.70) are defined only for g(y) ≠ 0. In fact, the set
{y | g(y) = 0} has probability 0. In particular, if ξ and η are independent random
variables, then ψ(x, y) = f(x)g(y) and φ(x|η = y) = f(x).
Hazard Rate
Definition 1.33 Let ξ be a nonnegative random variable representing life-
time. Then the hazard rate (or failure rate) is
    h(x) = lim_{Δ↓0} Pr{ξ ≤ x + Δ | ξ > x} / Δ.    (1.72)
The hazard rate tells us the likelihood of failure just after time x, given that
the item is functioning at time x. If ξ has probability distribution Φ and
probability density function φ, then the hazard rate is

    h(x) = φ(x) / (1 − Φ(x)).
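As an illustration of the formula above, ours rather than the book's: the exponential distribution of (1.19) has constant hazard rate h(x) = 1/β, its memoryless property.

```python
import math

def hazard(phi, Phi, x):
    # h(x) = phi(x) / (1 - Phi(x))
    return phi(x) / (1.0 - Phi(x))

beta = 2.0
phi = lambda x: math.exp(-x / beta) / beta   # exponential density (1.19)
Phi = lambda x: 1.0 - math.exp(-x / beta)    # its distribution function
# h(x) = 1/beta = 0.5 for every x >= 0
```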
Chapter 2
Credibility Theory
The concept of fuzzy set was initiated by Zadeh [259] via membership function
in 1965. In order to measure a fuzzy event, Zadeh [262] proposed the concept
of possibility measure. Although possibility measure has been widely used,
it has no self-duality property. However, a self-dual measure is absolutely
needed in both theory and practice. In order to define a self-dual measure,
Liu and Liu [132] presented the concept of credibility measure. In addition,
a sufficient and necessary condition for credibility measure was given by Li
and Liu [102]. Credibility theory, founded by Liu [135] in 2004 and refined
by Liu [138] in 2007, is a branch of mathematics for studying the behavior of
fuzzy phenomena.
The emphasis in this chapter is mainly on credibility measure, credibility
space, fuzzy variable, membership function, credibility distribution, indepen-
dence, identical distribution, expected value, variance, moments, critical val-
ues, entropy, distance, convergence almost surely, convergence in credibility,
convergence in mean, convergence in distribution, and conditional credibility.
Let Θ be a nonempty set, and P the power set of Θ (i.e., the largest σ-
algebra over Θ). Each element in P is called an event. In order to present an
axiomatic definition of credibility, it is necessary to assign to each event A a
number Cr{A} which indicates the credibility that A will occur. In order to
ensure that the number Cr{A} has certain mathematical properties which we
intuitively expect a credibility to have, we accept the following four axioms:
Axiom 1. (Normality) Cr{Θ} = 1.
Axiom 2. (Monotonicity) Cr{A} ≤ Cr{B} whenever A ⊂ B.
Axiom 3. (Self-Duality) Cr{A} + Cr{Ac } = 1 for any event A.
Axiom 4. (Maximality) Cr {∪i Ai } = supi Cr{Ai } for any events {Ai } with
supi Cr{Ai } < 0.5.
Definition 2.1 (Liu and Liu [132]) The set function Cr is called a cred-
ibility measure if it satisfies the normality, monotonicity, self-duality, and
maximality axioms.
Example 2.1: Let Θ = {θ1 , θ2 }. For this case, there are only four events:
∅, {θ1 }, {θ2 }, Θ. Define Cr{∅} = 0, Cr{θ1 } = 0.7, Cr{θ2 } = 0.3, and Cr{Θ} =
1. Then the set function Cr is a credibility measure because it satisfies the
four axioms.
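On a finite Θ the four axioms can be verified exhaustively over the power set. The sketch below is our own checker (Axiom 4 is tested pairwise, which suffices on a finite set); it confirms Example 2.1.

```python
from itertools import chain, combinations

def powerset(universe):
    # All subsets of a finite universe, as frozensets.
    s = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_credibility_measure(universe, cr, tol=1e-9):
    # cr maps each event (frozenset) to a number; checks Axioms 1-4.
    full = frozenset(universe)
    events = powerset(universe)
    if abs(cr[full] - 1.0) > tol:                   # Axiom 1: normality
        return False
    for A in events:
        if abs(cr[A] + cr[full - A] - 1.0) > tol:   # Axiom 3: self-duality
            return False
        for B in events:
            if A <= B and cr[A] > cr[B] + tol:      # Axiom 2: monotonicity
                return False
            m = max(cr[A], cr[B])                   # Axiom 4: maximality
            if m < 0.5 and abs(cr[A | B] - m) > tol:
                return False
    return True

# Example 2.1: Theta = {theta1, theta2}, Cr{theta1} = 0.7, Cr{theta2} = 0.3
theta = {"theta1", "theta2"}
cr = {frozenset(): 0.0, frozenset({"theta1"}): 0.7,
      frozenset({"theta2"}): 0.3, frozenset(theta): 1.0}
```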
is a credibility measure on Θ.
Theorem 2.1 Let Θ be a nonempty set, P the power set of Θ, and Cr the
credibility measure. Then Cr{∅} = 0 and 0 ≤ Cr{A} ≤ 1 for any A ∈ P.
Theorem 2.2 Let Θ be a nonempty set, P the power set of Θ, and Cr the
credibility measure. Then for any A, B ∈ P, we have
Proof: If Cr{A ∪ B} < 0.5, then Cr{A} ∨ Cr{B} < 0.5 by using Axiom 2.
Thus the equation (2.3) follows immediately from Axiom 4. If Cr{A ∪ B} =
0.5 and (2.3) does not hold, then we have Cr{A} ∨ Cr{B} < 0.5. It follows
from Axiom 4 that
Theorem 2.3 Let Θ be a nonempty set, P the power set of Θ, and Cr the
credibility measure. Then for any A, B ∈ P, we have
Proof: Suppose Cr{A} + Cr{B} < 1. Then there exists at least one term
less than 0.5, say Cr{B} < 0.5. If Cr{A} < 0.5 also holds, then the equation
(2.3) follows immediately from Axiom 4. If Cr{A} ≥ 0.5, then by using
Theorem 2.2, we obtain
for any events A and B. In fact, credibility measure is not only finitely
subadditive but also countably subadditive.
Case 2: Cr{A} ≥ 0.5. For this case, by using Axioms 2 and 3, we have
Cr{Ac } ≤ 0.5 and Cr{A ∪ B} ≥ Cr{A} ≥ 0.5. Then
Remark 2.1: For any events A and B, it follows from the credibility subaddi-
tivity theorem that the credibility measure is null-additive, i.e., Cr{A ∪ B} =
Cr{A} + Cr{B} if either Cr{A} = 0 or Cr{B} = 0.
Proof: Suppose that the credibility measure is additive. If there are more
than two elements taking nonzero credibility values, then we may choose three
elements θ1 , θ2 , θ3 such that Cr{θ1 } ≥ Cr{θ2 } ≥ Cr{θ3 } > 0. If Cr{θ1 } ≥ 0.5,
it follows from Axioms 2 and 3 that
Cr{θ2 , θ3 } ≤ Cr{Θ \ {θ1 }} = 1 − Cr{θ1 } ≤ 0.5.
By using Axiom 4, we obtain
Cr{θ2 , θ3 } = Cr{θ2 } ∨ Cr{θ3 } < Cr{θ2 } + Cr{θ3 }.
This is in contradiction with the additivity assumption. If Cr{θ1 } < 0.5,
then Cr{θ3 } ≤ Cr{θ2 } < 0.5. It follows from Axiom 4 that
Cr{θ2 , θ3 } ∧ 0.5 = Cr{θ2 } ∨ Cr{θ3 } < 0.5
which implies that
Cr{θ2 , θ3 } = Cr{θ2 } ∨ Cr{θ3 } < Cr{θ2 } + Cr{θ3 }.
This is also in contradiction with the additivity assumption. Hence there are
at most two elements taking nonzero credibility values.
Conversely, suppose that there are at most two elements, say θ1 and θ2 ,
taking nonzero credibility values. Let A and B be two disjoint events. The
argument breaks down into two cases.
Case 1: Cr{A} = 0 or Cr{B} = 0. Then we have Cr{A ∪ B} = Cr{A} +
Cr{B} by using the credibility subadditivity theorem.
Case 2: Cr{A} > 0 or Cr{B} > 0. For this case, without loss of generality,
we suppose that θ1 ∈ A and θ2 ∈ B. Note that Cr{(A ∪ B)c } = 0. It follows
from Axiom 3 and the credibility subadditivity theorem that
Cr{A ∪ B} = Cr{A ∪ B ∪ (A ∪ B)c } = Cr{Θ} = 1,
Cr{A} + Cr{B} = Cr{A ∪ (A ∪ B)c } + Cr{B} = 1.
Hence Cr{A ∪ B} = Cr{A} + Cr{B}. The additivity is proved.
Remark 2.2: Theorem 2.6 states that a credibility measure is identical with
a probability measure if there are effectively two elements in the universal set.
Proof: (a) Since Cr{A} ≤ 0.5, we have Cr{Ai } ≤ 0.5 for each i. It follows
from Axiom 4 that
(b) Since limi→∞ Cr{Ai } < 0.5, we have supi Cr{Ai } < 0.5. It follows
from Axiom 4 that
Proof: Assume Ai ↑ Θ. If limi→∞ Cr{Ai } < 0.5, it follows from the credi-
bility semicontinuity law that
Theorem 2.10 (Li and Liu [102], Credibility Extension Theorem) Suppose
that Θ is a nonempty set, and Cr{θ} is a nonnegative function on Θ satisfying
the credibility extension condition (2.12). Then Cr{θ} has a unique extension
to a credibility measure as follows,

    Cr{A} = sup_{θ∈A} Cr{θ}, if sup_{θ∈A} Cr{θ} < 0.5;  1 − sup_{θ∈Aᶜ} Cr{θ}, if sup_{θ∈A} Cr{θ} ≥ 0.5.    (2.13)
Proof: We first prove that the set function Cr{A} defined by (2.13) is a
credibility measure.
Step 1: By the credibility extension condition sup_{θ∈Θ} Cr{θ} ≥ 0.5, we have
Case 2: sup_{θ∈A} Cr{θ} ≥ 0.5. For this case, we have sup_{θ∈B} Cr{θ} ≥ 0.5, and
Case 2: sup_{θ∈A} Cr{θ} ≥ 0.5. For this case, we have sup_{θ∈Aᶜ} Cr{θ} ≤ 0.5, and
Step 4: For any collection {Ai } with supi Cr{Ai } < 0.5, we have
    Cr1{A} = sup_{θ∈A} Cr1{θ} = sup_{θ∈A} Cr2{θ} = Cr2{A}.
Case 2: Cr1 {A} > 0.5. For this case, we have Cr1 {Ac } < 0.5. It follows
from the first case that Cr1 {Ac } = Cr2 {Ac } which implies Cr1 {A} = Cr2 {A}.
Case 3: Cr1 {A} = 0.5. For this case, we have Cr1 {Ac } = 0.5, and
Cr2{A} ≥ sup_{θ∈A} Cr2{θ} = sup_{θ∈A} Cr1{θ} = Cr1{A} = 0.5,
Cr2{A^c} ≥ sup_{θ∈A^c} Cr2{θ} = sup_{θ∈A^c} Cr1{θ} = Cr1{A^c} = 0.5.
Credibility Space
Definition 2.2 Let Θ be a nonempty set, P the power set of Θ, and Cr a
credibility measure. Then the triplet (Θ, P, Cr) is called a credibility space.
Now we suppose that θ* = (θ1*, θ2*, ···, θn*) is a point with Cr{θ*} ≥ 0.5.
Without loss of generality, let i be the index such that
Then there is a point (θ1', θ2', ···, θn') ≠ (θ1*, θ2*, ···, θn*) such that
That is,
sup_{θj≠θj*} Crj{θj} > sup_{θi≠θi*} Cri{θi}
Thus Cr satisfies the credibility extension condition. It follows from the cred-
ibility extension theorem that Cr{A} is just the unique extension of Cr{θ}.
The theorem is proved.
Theorem 2.12 Let (Θ, P, Cr) be the product credibility space of (Θk , Pk , Crk ),
k = 1, 2, · · · , n. Then for any Ak ∈ Pk , k = 1, 2, · · · , n, we have
Proof: We only prove the case of n = 2. If Cr1 {A1 } < 0.5 or Cr2 {A2 } < 0.5,
then we have
sup_{θ1∈A1} Cr1{θ1} < 0.5  or  sup_{θ2∈A2} Cr2{θ2} < 0.5.
It follows from
sup_{(θ1,θ2)∈A1×A2} Cr1{θ1} ∧ Cr2{θ2} = sup_{θ1∈A1} Cr1{θ1} ∧ sup_{θ2∈A2} Cr2{θ2} < 0.5
that
Cr{A1 × A2} = sup_{θ1∈A1} Cr1{θ1} ∧ sup_{θ2∈A2} Cr2{θ2} = Cr1{A1} ∧ Cr2{A2}.
It follows from
sup_{(θ1,θ2)∈A1×A2} Cr1{θ1} ∧ Cr2{θ2} = sup_{θ1∈A1} Cr1{θ1} ∧ sup_{θ2∈A2} Cr2{θ2} ≥ 0.5
that
Cr{A1 × A2} = 1 − sup_{(θ1,θ2)∉A1×A2} Cr1{θ1} ∧ Cr2{θ2}
= (1 − sup_{θ1∈A1^c} Cr1{θ1}) ∧ (1 − sup_{θ2∈A2^c} Cr2{θ2})
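The product formula Cr{A1 × A2} = Cr1{A1} ∧ Cr2{A2} of Theorem 2.12 can be checked directly on finite factor spaces. The sketch below uses illustrative pointwise credibilities; the product weights are the pointwise minima.

```python
# Checking Cr{A1 × A2} = Cr1{A1} ∧ Cr2{A2} on finite factor spaces;
# the pointwise credibilities below are illustrative.

def cr(event, weights, universe):
    s = max((weights[t] for t in event), default=0.0)
    if s < 0.5:
        return s
    return 1.0 - max((weights[t] for t in universe - set(event)), default=0.0)

u1, w1 = {"a", "b"}, {"a": 0.5, "b": 0.5}
u2, w2 = {"x", "y"}, {"x": 0.5, "y": 0.4}

# Product weights: Cr{(θ1, θ2)} = Cr1{θ1} ∧ Cr2{θ2}
u = {(p, q) for p in u1 for q in u2}
w = {(p, q): min(w1[p], w2[q]) for (p, q) in u}

a1, a2 = {"a"}, {"y"}
rect = {(p, q) for p in a1 for q in a2}
print(cr(rect, w, u))                        # 0.4
print(min(cr(a1, w1, u1), cr(a2, w2, u2)))   # 0.4
```

Both prints agree, matching the rectangle case of the theorem.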
is a credibility measure on P.
Proof: Like Theorem 2.11 except Cr{(θ1, θ2, ···)} = inf_{1≤k<∞} Crk{θk}.
Definition 2.4 Let (Θk, Pk, Crk), k = 1, 2, ··· be credibility spaces. Define
Θ = Θ1 × Θ2 × ··· and Cr = Cr1 ∧ Cr2 ∧ ···. Then (Θ, P, Cr) is called the
infinite product credibility space of (Θk, Pk, Crk), k = 1, 2, ···.
Example 2.8: Take (Θ, P, Cr) to be {θ1 , θ2 } with Cr{θ1 } = Cr{θ2 } = 0.5.
Then the function ξ(θ) = 0 if θ = θ1, and ξ(θ) = 1 if θ = θ2, is a fuzzy variable.
Example 2.9: Take (Θ, P, Cr) to be the interval [0, 1] with Cr{θ} = θ/2 for
each θ ∈ [0, 1]. Then the identity function ξ(θ) = θ is a fuzzy variable.
Cr{ξ ≠ x1, ξ ≠ x2, ···, ξ ≠ xm} = 0;    (2.27)
Cr{ξ ≠ x1, ξ ≠ x2, ···} = 0.    (2.28)
Fuzzy Vector
Definition 2.8 An n-dimensional fuzzy vector is defined as a function from
a credibility space (Θ, P, Cr) to the set of n-dimensional real vectors.
Fuzzy Arithmetic
In this subsection, we will suppose that all fuzzy variables are defined on a
common credibility space. Otherwise, we may embed them into the product
credibility space.
for any θ ∈ Θ.
Proof: Since f (ξ) is a function from a credibility space to the set of real
numbers, it is a fuzzy variable.
A membership function represents the degree to which the fuzzy variable ξ takes
some prescribed value. How do we determine membership functions? Several
methods have been reported in the literature. In any case, the membership
degree µ(x) = 0 if x is an impossible point, and µ(x) = 1 if x is the most
possible point that ξ takes.
Cr{ξ ≤ x} = (1/2) ( sup_{y≤x} µ(y) + 1 − sup_{y>x} µ(y) ),  ∀x ∈ ℜ;    (2.35)
Cr{ξ ≥ x} = (1/2) ( sup_{y≥x} µ(y) + 1 − sup_{y<x} µ(y) ),  ∀x ∈ ℜ.    (2.36)
Cr{ξ = x} = µ(x)/2,  ∀x ∈ ℜ.    (2.37)
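Formulas (2.35)–(2.37) are easy to evaluate for a discrete fuzzy variable. The sketch below uses an illustrative membership function with sup µ = 1.

```python
# Computing event credibilities from a membership function via
# (2.35)–(2.37); the discrete µ below is illustrative.

mu = {-1: 0.6, 0: 1.0, 2: 0.8}   # sup µ = 1, as Theorem 2.17 requires

def cr_le(x):
    left = max((m for v, m in mu.items() if v <= x), default=0.0)
    right = max((m for v, m in mu.items() if v > x), default=0.0)
    return 0.5 * (left + (1.0 - right))

def cr_eq(x):
    return mu.get(x, 0.0) / 2

print(cr_le(-1))  # 0.3
print(cr_le(0))   # 0.6
print(cr_eq(2))   # 0.4
```

Note that Cr{ξ ≤ x} jumps at the most possible point x = 0, as the distribution theory later in the chapter predicts.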
Theorem 2.17 (Sufficient and Necessary Condition for Membership Function)
A function µ : ℜ → [0, 1] is a membership function if and only if sup µ(x) = 1.
If there is some point x ∈ ℜ such that Cr{ξ = x} ≥ 0.5, then sup µ(x) = 1.
Otherwise, we have Cr{ξ = x} < 0.5 for each x ∈ ℜ. It follows from Axiom 4
that
It is clear that
sup_{x∈ℜ} Cr{x} ≥ (1/2)(1 + 1 − 1) = 0.5.
For any x* ∈ ℜ with Cr{x*} ≥ 0.5, we have µ(x*) = 1 and
Cr{x*} + sup_{y≠x*} Cr{y} = 1 − (1/2) sup_{y≠x*} µ(y) + (1/2) sup_{y≠x*} µ(y) = 1.
Thus Cr{x} satisfies the credibility extension condition, and has a unique
extension to a credibility measure on P(ℜ) by the credibility extension theorem.
Remark 2.4: Theorem 2.17 states that the identity function is a universal
function for any fuzzy variable by defining an appropriate credibility space.
µ(x) = (x − a)/(b − a),  if a ≤ x ≤ b;
µ(x) = 1,  if b ≤ x ≤ c;
µ(x) = (x − d)/(c − d),  if c ≤ x ≤ d;
µ(x) = 0,  otherwise.
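The piecewise definition above translates directly into code. Here is a small helper; the demo parameters (0, 1, 2, 4) are illustrative.

```python
# The trapezoidal membership function above, as a small helper; the
# parameters (0, 1, 2, 4) in the demo are illustrative.

def trapezoidal(a, b, c, d):
    def mu(x):
        if a <= x <= b:
            return (x - a) / (b - a)
        if b <= x <= c:
            return 1.0
        if c <= x <= d:
            return (x - d) / (c - d)
        return 0.0
    return mu

mu = trapezoidal(0.0, 1.0, 2.0, 4.0)
print(mu(0.5), mu(1.5), mu(3.0))  # 0.5 1.0 0.5
```

Setting b = c recovers the triangular membership function as a special case.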
That is, Φ(x) is the credibility that the fuzzy variable ξ takes a value less than
or equal to x. Generally speaking, the credibility distribution Φ is neither
left-continuous nor right-continuous.
lim_{y↓x} Φ(y) = Φ(x)  if  lim_{y↓x} Φ(y) > 0.5 or Φ(x) ≥ 0.5.    (2.42)
For this case, we have proved that limy↓x Φ(y) = Φ(x). Thus (2.41) and
(2.42) are proved.
Conversely, if Φ : < → [0, 1] is an increasing function satisfying (2.41) and
(2.42), then
µ(x) = 2Φ(x),  if Φ(x) < 0.5;
µ(x) = 1,  if lim_{y↑x} Φ(y) < 0.5 ≤ Φ(x);    (2.43)
µ(x) = 2 − 2Φ(x),  if 0.5 ≤ lim_{y↑x} Φ(y)
takes values in [0, 1] and sup µ(x) = 1. It follows from Theorem 2.17 that
there is a fuzzy variable ξ whose membership function is just µ. Let us verify
that Φ is the credibility distribution of ξ, i.e., Cr{ξ ≤ x} = Φ(x) for each x.
The argument breaks down into two cases. (i) If Φ(x) < 0.5, then we have
supy>x µ(y) = 1, and µ(y) = 2Φ(y) for each y with y ≤ x. Thus
Cr{ξ ≤ x} = (1/2) ( sup_{y≤x} µ(y) + 1 − sup_{y>x} µ(y) ) = sup_{y≤x} Φ(y) = Φ(x).
(ii) If Φ(x) ≥ 0.5, then we have supy≤x µ(y) = 1 and Φ(y) ≥ Φ(x) ≥ 0.5 for
each y with y > x. Thus µ(y) = 2 − 2Φ(y) and
Cr{ξ ≤ x} = (1/2) ( sup_{y≤x} µ(y) + 1 − sup_{y>x} µ(y) )
= (1/2) ( 1 + 1 − sup_{y>x} (2 − 2Φ(y)) )
= inf_{y>x} Φ(y) = Φ(x).
Thus we have
lim_{x→−∞} Φ(x) = a,  lim_{x→+∞} Φ(x) = b.
Example 2.18: However, the converse of Theorem 2.23 is not true. For
example, let ξ be a fuzzy variable whose membership function is
µ(x) = x if 0 ≤ x ≤ 1, and µ(x) = 1 otherwise.
Then its credibility distribution is Φ(x) ≡ 0.5. It is clear that Φ(x) is simple
and continuous. But the fuzzy variable ξ is neither simple nor continuous.
Definition 2.14 (Liu [130]) The credibility density function φ: < → [0, +∞)
of a fuzzy variable ξ is a function such that
Φ(x) = ∫_{−∞}^{x} φ(y) dy,  ∀x ∈ ℜ,    (2.44)
∫_{−∞}^{+∞} φ(y) dy = 1.    (2.45)
Example 2.22: The credibility density function does not necessarily exist
even if the membership function is continuous and unimodal with a finite
support. Let f be the Cantor function, and set
µ(x) = f(x),  if 0 ≤ x ≤ 1;
µ(x) = f(2 − x),  if 1 < x ≤ 2;    (2.46)
µ(x) = 0,  otherwise.
Proof: The first part follows immediately from the definition. In addition,
by the self-duality of credibility measure, we have
Cr{ξ ≥ x} = 1 − Cr{ξ < x} = ∫_{−∞}^{+∞} φ(y) dy − ∫_{−∞}^{x} φ(y) dy = ∫_{x}^{+∞} φ(y) dy.
Definition 2.16 The joint credibility density function φ : <n → [0, +∞) of
a fuzzy vector (ξ1 , ξ2 , · · · , ξn ) is a function such that
Φ(x1, x2, ···, xn) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} ··· ∫_{−∞}^{xn} φ(y1, y2, ···, yn) dy1 dy2 ··· dyn
2.5 Independence
The independence of fuzzy variables has been discussed by many authors
from different angles. Here we use the following condition.
Definition 2.17 (Liu and Gao [158]) The fuzzy variables ξ1 , ξ2 , · · · , ξm are
said to be independent if
Cr{ ∩_{i=1}^{m} {ξi ∈ Bi} } = min_{1≤i≤m} Cr{ξi ∈ Bi}    (2.48)
= (1/2) µ(x1, x2, ···, xm) = (1/2) min_{1≤i≤m} µi(xi)
= (1/2) min_{1≤i≤m} ( (2Cr{ξi = xi}) ∧ 1 )
= min_{1≤i≤m} Cr{ξi = xi}.
Example 2.24: However, the equation (2.52) does not imply that the fuzzy
variables are independent. For example, let ξ be a fuzzy variable with credi-
bility distribution Φ. Then the joint credibility distribution Ψ of fuzzy vector
(ξ, ξ) is
for any real numbers x1 and x2. But, generally speaking, a fuzzy variable is
not independent of itself.
Theorem 2.30 The fuzzy variables ξ and η are identically distributed if and
only if ξ and η have the same membership function.
for any set B of <. Thus ξ and η are identically distributed fuzzy variables.
Theorem 2.31 The fuzzy variables ξ and η are identically distributed if and
only if Cr{ξ = x} = Cr{η = x} for each x ∈ <.
Proof: If ξ and η are identically distributed fuzzy variables, then we im-
mediately have Cr{ξ = x} = Cr{η = x} for each x. Conversely, it follows
from
µ(x) = (2Cr{ξ = x}) ∧ 1 = (2Cr{η = x}) ∧ 1 = ν(x)
that ξ and η have the same membership function. Thus ξ and η are identically
distributed fuzzy variables.
Theorem 2.32 If ξ and η are identically distributed fuzzy variables, then ξ
and η have the same credibility distribution.
Proof: If ξ and η are identically distributed fuzzy variables, then, for any
x ∈ <, we have Cr{ξ ∈ (−∞, x]} = Cr{η ∈ (−∞, x]}. Thus ξ and η have the
same credibility distribution.
Example 2.25: The converse of Theorem 2.32 is not true. Consider two
fuzzy variables with the following membership functions,
µ(x) = 1.0 if x = 0; 0.6 if x = 1; 0.8 if x = 2.
ν(x) = 1.0 if x = 0; 0.7 if x = 1; 0.8 if x = 2.
for any x ∈ ℜ. Here we set µ(x) = 0 if there are no real numbers x1, x2, ···, xn
such that x = f(x1, x2, ···, xn).
ξ + η = (a1 + b1 , a2 + b2 ).
If 0 ≤ a, then
E[ξ] = ∫_0^a 1 dr + ∫_a^b 0.5 dr + ∫_b^{+∞} 0 dr − ∫_{−∞}^0 0 dr = (a + b)/2.
If b ≤ 0, then
E[ξ] = ∫_0^{+∞} 0 dr − ( ∫_{−∞}^a 0 dr + ∫_a^b 0.5 dr + ∫_b^0 1 dr ) = (a + b)/2.
If a < 0 < b, then
Cr{ξ ≥ r} = 0.5 if 0 ≤ r ≤ b; 0 if r > b,
Cr{ξ ≤ r} = 0 if r < a; 0.5 if a ≤ r ≤ 0,
E[ξ] = ∫_0^b 0.5 dr + ∫_b^{+∞} 0 dr − ( ∫_{−∞}^a 0 dr + ∫_a^0 0.5 dr ) = (a + b)/2.
E[ξ] = Σ_{i=1}^{m} wi xi    (2.57)
µ1 ≤ µ2 ≤ · · · ≤ µk and µk ≥ µk+1 ≥ · · · ≥ µm .
Note that µk ≡ 1. Then the expected value is determined by (2.57) and the
weights are given by
wi = µ1/2,  if i = 1;
wi = (µi − µ_{i−1})/2,  if i = 2, 3, ···, k − 1;
wi = 1 − (µ_{k−1} + µ_{k+1})/2,  if i = k;
wi = (µi − µ_{i+1})/2,  if i = k + 1, k + 2, ···, m − 1;
wi = µm/2,  if i = m.
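The weight formula and (2.57) can be coded directly for a simple fuzzy variable. In the sketch below the values and memberships are illustrative; the handling of a modal value at the ends of the list (treating the missing neighbor membership as 0) is an assumption, since the formula above is stated for 1 < k < m.

```python
# Expected value of a simple fuzzy variable via (2.57); xs and mus are
# illustrative, sorted so that µ1 ≤ µ2 = 1 ≥ µ3 (unimodal).

xs = [1.0, 2.0, 3.0]
mus = [0.6, 1.0, 0.5]
k = mus.index(1.0)          # index of the modal value, µk = 1
m = len(xs)

w = []
for i in range(m):
    if i == k:
        left = mus[i - 1] if i > 0 else 0.0      # assumed boundary handling
        right = mus[i + 1] if i + 1 < m else 0.0
        w.append(1.0 - (left + right) / 2)
    elif i == 0:
        w.append(mus[0] / 2)
    elif i == m - 1:
        w.append(mus[-1] / 2)
    elif i < k:
        w.append((mus[i] - mus[i - 1]) / 2)
    else:
        w.append((mus[i] - mus[i + 1]) / 2)

e = sum(wi * xi for wi, xi in zip(w, xs))
print(round(e, 10))  # 1.95
```

The same value is obtained by integrating Cr{ξ ≥ r} directly from the definition of the expected value operator, which is a useful cross-check.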
Theorem 2.35 (Liu [130]) Let ξ be a fuzzy variable whose credibility density
function φ exists. If the Lebesgue integral
∫_{−∞}^{+∞} x φ(x) dx
is finite, then we have E[ξ] = ∫_{−∞}^{+∞} x φ(x) dx.
Proof: It follows from the definition of expected value operator and Fubini
Theorem that
E[ξ] = ∫_0^{+∞} Cr{ξ ≥ r} dr − ∫_{−∞}^0 Cr{ξ ≤ r} dr
= ∫_0^{+∞} ( ∫_r^{+∞} φ(x) dx ) dr − ∫_{−∞}^0 ( ∫_{−∞}^r φ(x) dx ) dr
= ∫_0^{+∞} ( ∫_0^x φ(x) dr ) dx − ∫_{−∞}^0 ( ∫_x^0 φ(x) dr ) dx
= ∫_0^{+∞} x φ(x) dx + ∫_{−∞}^0 x φ(x) dx
= ∫_{−∞}^{+∞} x φ(x) dx.
and
lim_{y→+∞} ∫_y^{+∞} x dΦ(x) = 0,  lim_{y→−∞} ∫_{−∞}^{y} x dΦ(x) = 0.
It follows from
∫_y^{+∞} x dΦ(x) ≥ y ( lim_{z→+∞} Φ(z) − Φ(y) ) = y (1 − Φ(y)) ≥ 0,  for y > 0,
∫_{−∞}^{y} x dΦ(x) ≤ y ( Φ(y) − lim_{z→−∞} Φ(z) ) = y Φ(y) ≤ 0,  for y < 0
that
lim_{y→+∞} y (1 − Φ(y)) = 0,  lim_{y→−∞} y Φ(y) = 0.
Let 0 = x0 < x1 < x2 < · · · < xn = y be a partition of [0, y]. Then we have
Σ_{i=0}^{n−1} xi ( Φ(x_{i+1}) − Φ(xi) ) → ∫_0^y x dΦ(x)
and
Σ_{i=0}^{n−1} ( 1 − Φ(x_{i+1}) ) ( x_{i+1} − xi ) → ∫_0^y Cr{ξ ≥ r} dr
as max{|xi+1 − xi | : i = 0, 1, · · · , n − 1} → 0. Since
Σ_{i=0}^{n−1} xi ( Φ(x_{i+1}) − Φ(xi) ) − Σ_{i=0}^{n−1} ( 1 − Φ(x_{i+1}) ) ( x_{i+1} − xi ) = y ( Φ(y) − 1 ) → 0
Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number
b. If b ≥ 0, we have
E[ξ + b] = ∫_0^∞ Cr{ξ + b ≥ r} dr − ∫_{−∞}^0 Cr{ξ + b ≤ r} dr
= ∫_0^∞ Cr{ξ ≥ r − b} dr − ∫_{−∞}^0 Cr{ξ ≤ r − b} dr
= E[ξ] + ∫_0^b ( Cr{ξ ≥ r − b} + Cr{ξ < r − b} ) dr
= E[ξ] + b.
If a < 0, we have
E[aξ] = ∫_0^∞ Cr{aξ ≥ r} dr − ∫_{−∞}^0 Cr{aξ ≤ r} dr
= ∫_0^∞ Cr{ξ ≤ r/a} dr − ∫_{−∞}^0 Cr{ξ ≥ r/a} dr
= a ∫_0^∞ Cr{ξ ≥ r/a} d(r/a) − a ∫_{−∞}^0 Cr{ξ ≤ r/a} d(r/a) = aE[ξ].
Step 3: We prove that E[ξ + η] = E[ξ] + E[η] when both ξ and η are
simple fuzzy variables with the following membership functions,
µ(x) = µ1 if x = a1; µ2 if x = a2; ···; µm if x = am.
ν(x) = ν1 if x = b1; ν2 if x = b2; ···; νn if x = bn.
Then ξ+η is also a simple fuzzy variable taking values ai +bj with membership
degrees µi ∧ νj , i = 1, 2, · · · , m, j = 1, 2, · · · , n, respectively. Now we define
wi′ = (1/2) ( max_{1≤k≤m} {µk | ak ≤ ai} − max_{1≤k≤m} {µk | ak < ai}
      + max_{1≤k≤m} {µk | ak ≥ ai} − max_{1≤k≤m} {µk | ak > ai} ),
wj′′ = (1/2) ( max_{1≤l≤n} {νl | bl ≤ bj} − max_{1≤l≤n} {νl | bl < bj}
      + max_{1≤l≤n} {νl | bl ≥ bj} − max_{1≤l≤n} {νl | bl > bj} ),
wij = (1/2) max_{1≤k≤m, 1≤l≤n} {µk ∧ νl | ak + bl ≤ ai + bj}
Thus E[ξ + η] = E[ξ] + E[η]. If not, we may give them a small perturbation
such that they are distinct, and prove the linearity by letting the perturbation
tend to zero.
Step 4: We prove that E[ξ + η] = E[ξ] + E[η] when ξ and η are fuzzy
variables such that
lim_{y↑0} Cr{ξ ≤ y} ≤ 1/2 ≤ Cr{ξ ≤ 0},
lim_{y↑0} Cr{η ≤ y} ≤ 1/2 ≤ Cr{η ≤ 0}.    (2.61)
Φi(x) = (k − 1)/2^i,  if (k − 1)/2^i ≤ Cr{ξ ≤ x} < k/2^i, k = 1, 2, ···, 2^{i−1};
Φi(x) = k/2^i,  if (k − 1)/2^i ≤ Cr{ξ ≤ x} < k/2^i, k = 2^{i−1} + 1, ···, 2^i;
Φi(x) = 1,  if Cr{ξ ≤ x} = 1.
Ψi(x) = (k − 1)/2^i,  if (k − 1)/2^i ≤ Cr{η ≤ x} < k/2^i, k = 1, 2, ···, 2^{i−1};
Ψi(x) = k/2^i,  if (k − 1)/2^i ≤ Cr{η ≤ x} < k/2^i, k = 2^{i−1} + 1, ···, 2^i;
Ψi(x) = 1,  if Cr{η ≤ x} = 1.
= Cr{ξ + η ≤ r}.
That is,
Cr{ξi + ηi ≤ r} ↑ Cr{ξ + η ≤ r}, if r ≤ 0.
E[ηi] = ∫_0^{+∞} Cr{ηi ≥ r} dr − ∫_{−∞}^0 Cr{ηi ≤ r} dr
→ ∫_0^{+∞} Cr{η ≥ r} dr − ∫_{−∞}^0 Cr{η ≤ r} dr = E[η],
E[ξi + ηi] = ∫_0^{+∞} Cr{ξi + ηi ≥ r} dr − ∫_{−∞}^0 Cr{ξi + ηi ≤ r} dr
→ ∫_0^{+∞} Cr{ξ + η ≥ r} dr − ∫_{−∞}^0 Cr{ξ + η ≤ r} dr = E[ξ + η]
Step 6: We prove that E[aξ + bη] = aE[ξ] + bE[η] for any real numbers
a and b. In fact, the equation follows immediately from Steps 2 and 5. The
theorem is proved.
Example 2.38: Theorem 2.37 does not hold if ξ and η are not independent.
For example, take (Θ, P, Cr) to be {θ1 , θ2 , θ3 } with Cr{θ1 } = 0.7, Cr{θ2 } =
0.3 and Cr{θ3 } = 0.2. The fuzzy variables are defined by
ξ1(θ) = 1 if θ = θ1; 0 if θ = θ2; 2 if θ = θ3.
ξ2(θ) = 0 if θ = θ1; 2 if θ = θ2; 3 if θ = θ3.
Then we have
(ξ1 + ξ2)(θ) = 1 if θ = θ1; 2 if θ = θ2; 5 if θ = θ3.
Thus E[ξ1] = 0.9, E[ξ2] = 0.8, and E[ξ1 + ξ2] = 1.9. This fact implies that
E[ξ1 + ξ2] ≠ E[ξ1] + E[ξ2].
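The numbers in this example can be reproduced by extending the singleton credibilities to events and integrating Cr{ξ ≥ r} piecewise; the following is a sketch of that computation.

```python
# Reproducing the example above numerically: expected values computed
# from the credibility measure extended from the singleton weights.

w = {"t1": 0.7, "t2": 0.3, "t3": 0.2}
xi1 = {"t1": 1.0, "t2": 0.0, "t3": 2.0}
xi2 = {"t1": 0.0, "t2": 2.0, "t3": 3.0}

def cr(event):
    s = max((w[t] for t in event), default=0.0)
    if s < 0.5:
        return s
    return 1.0 - max((w[t] for t in set(w) - set(event)), default=0.0)

def expect(xi):
    """E[ξ] = ∫0^∞ Cr{ξ ≥ r} dr for a nonnegative simple fuzzy variable."""
    e, prev = 0.0, 0.0
    for v in sorted(set(xi.values())):
        if v > prev:
            mid = (prev + v) / 2      # Cr{ξ ≥ r} is constant on (prev, v]
            e += cr({t for t in w if xi[t] >= mid}) * (v - prev)
        prev = v
    return e

s = {t: xi1[t] + xi2[t] for t in w}
print(round(expect(xi1), 9), round(expect(xi2), 9), round(expect(s), 9))
# 0.9 0.8 1.9  —  so E[ξ1 + ξ2] ≠ E[ξ1] + E[ξ2]
```

Since ξ1 and ξ2 live on the same credibility space and are not independent, linearity fails, exactly as the example asserts.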
Then we have
(η1 + η2)(θ) = 0 if θ = θ1; 4 if θ = θ2; 3 if θ = θ3.
Thus E[η1] = 0.5, E[η2] = 0.9, and E[η1 + η2] = 1.2. This fact implies that
E[η1 + η2] ≠ E[η1] + E[η2].
For the random case, it has been proved that the expected value E[f(ξ)] is the
Lebesgue–Stieltjes integral of f(x) with respect to the probability distribution
Φ of ξ if the integral exists. However, generally speaking, this is not true for
the fuzzy case.
Then the expected value E[ξ 2 ] = 0.5. However, the credibility distribution
of ξ is
Φ(x) = 0 if x < −1; 0.3 if −1 ≤ x < 0; 0.5 if 0 ≤ x < 1; 1 if x ≥ 1.
Theorem 2.38 (Zhu and Ji [276]) Let ξ be a fuzzy variable whose credibility
distribution Φ satisfies
Assume that a and b are two real numbers such that a < 0 < b. The
integration by parts produces
∫_0^b Cr{f(ξ) ≥ r} dr = ∫_0^b Cr{ξ ≥ f^{−1}(r)} dr = ∫_{f^{−1}(0)}^{f^{−1}(b)} Cr{ξ ≥ y} df(y)
= Cr{ξ ≥ f^{−1}(b)} f(f^{−1}(b)) − ∫_{f^{−1}(0)}^{f^{−1}(b)} f(y) dCr{ξ ≥ y}
= Cr{ξ ≥ f^{−1}(b)} f(f^{−1}(b)) + ∫_{f^{−1}(0)}^{f^{−1}(b)} f(y) dΦ(y).
In addition,
∫_a^0 Cr{f(ξ) ≤ r} dr = ∫_a^0 Cr{ξ ≤ f^{−1}(r)} dr = ∫_{f^{−1}(a)}^{f^{−1}(0)} Cr{ξ ≤ y} df(y)
= −Cr{ξ ≤ f^{−1}(a)} f(f^{−1}(a)) − ∫_{f^{−1}(a)}^{f^{−1}(0)} f(y) dCr{ξ ≤ y}
= −Cr{ξ ≤ f^{−1}(a)} f(f^{−1}(a)) − ∫_{f^{−1}(a)}^{f^{−1}(0)} f(y) dΦ(y).
= Cr {ñξ1 ≥ r} .
On the other hand, for any given ε > 0, there exists an integer n and real
numbers x1 , x2 , · · · , xn with x1 + x2 + · · · + xn ≥ r such that
Cr{ Σ_{i=1}^{ñ} ξi ≥ r − ε } ≤ Cr{ñ = n} ∧ Cr{ξi = xi}
Letting ε → 0, we get
Cr{ Σ_{i=1}^{ñ} ξi ≥ r } ≤ Cr{ñξ1 ≥ r}.
It follows that
Cr{ Σ_{i=1}^{ñ} ξi ≥ r } = Cr{ñξ1 ≥ r}.
Similarly, the above identity still holds if the symbol “≥” is replaced with
“≤”. Finally, by the definition of expected value operator, we have
" ñ
# ( ñ ) ( ñ )
X Z +∞ X Z 0 X
E ξi = Cr ξi ≥ r dr − Cr ξi ≤ r dr
i=1 0 i=1 −∞ i=1
Z +∞ Z 0
= Cr {ñξ1 ≥ r} dr − Cr {ñξ1 ≤ r} dr = E [ñξ1 ] .
0 −∞
2.9 Variance
Definition 2.20 (Liu and Liu [132]) Let ξ be a fuzzy variable with finite
expected value e. Then the variance of ξ is defined by V [ξ] = E[(ξ − e)2 ].
Example 2.40: Let ξ be an equipossible fuzzy variable (a, b). Then its
expected value is e = (a + b)/2, and for any positive number r, we have
Cr{(ξ − e)² ≥ r} = 1/2 if r ≤ (b − a)²/4; 0 if r > (b − a)²/4.
Thus V[ξ] = ∫_0^{+∞} Cr{(ξ − e)² ≥ r} dr = (b − a)²/8.
(Figure: a normal membership function µ(x), taking the value of about 0.434 at x = e − σ and x = e + σ.)
Theorem 2.41 Let ξ be a fuzzy variable with expected value e. Then V [ξ] =
0 if and only if Cr{ξ = e} = 1.
Theorem 2.42 (Li and Liu [111]) Let f be a convex function on [a, b], and
ξ a fuzzy variable that takes values in [a, b] and has expected value e. Then
E[f(ξ)] ≤ ((b − e)/(b − a)) f(a) + ((e − a)/(b − a)) f(b).    (2.71)
ξ(θ) = ((b − ξ(θ))/(b − a)) a + ((ξ(θ) − a)/(b − a)) b.
It follows from the convexity of f that
f(ξ(θ)) ≤ ((b − ξ(θ))/(b − a)) f(a) + ((ξ(θ) − a)/(b − a)) f(b).
Taking expected values on both sides, we obtain (2.71).
Theorem 2.43 (Li and Liu [111], Maximum Variance Theorem) Let ξ be a
fuzzy variable that takes values in [a, b] and has expected value e. Then
µ(x) = (2(b − e)/(b − a)) ∧ 1 if x = a; (2(e − a)/(b − a)) ∧ 1 if x = b.    (2.73)
2.10 Moments
Definition 2.21 (Liu [134]) Let ξ be a fuzzy variable, and k a positive num-
ber. Then
(a) the expected value E[ξ k ] is called the kth moment;
(b) the expected value E[|ξ|k ] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])k ] is called the kth central moment;
(d) the expected value E[|ξ −E[ξ]|k ] is called the kth absolute central moment.
Note that the first central moment is always 0, the first moment is just
the expected value, and the second central moment is just the variance.
(Figure: an exponential membership function µ(x), decreasing from µ(0) = 1, with value about 0.434 at x = m.)
Theorem 2.45 (Li and Liu [111]) Let ξ be a fuzzy variable that takes val-
ues in [a, b] and has expected value e. Then for any positive integer k, the
kth absolute moment and kth absolute central moment satisfy the following
inequalities,
E[|ξ|^k] ≤ ((b − e)/(b − a)) |a|^k + ((e − a)/(b − a)) |b|^k,    (2.76)
E[|ξ − e|^k] ≤ ((b − e)/(b − a)) (e − a)^k + ((e − a)/(b − a)) (b − e)^k.    (2.77)
b−a b−a
Definition 2.22 (Liu [130]) Let ξ be a fuzzy variable, and α ∈ (0, 1]. Then
ξsup(α) = sup { r | Cr{ξ ≥ r} ≥ α }    (2.78)
This means that the fuzzy variable ξ will reach upwards of the α-optimistic
value ξsup (α) with credibility α, and will be below the α-pessimistic value
ξinf (α) with credibility α. In other words, the α-optimistic value ξsup (α) is
the supremum value that ξ achieves with credibility α, and the α-pessimistic
value ξinf (α) is the infimum value that ξ achieves with credibility α.
Example 2.44: Let ξ be an equipossible fuzzy variable on (a, b). Then its
α-optimistic and α-pessimistic values are
ξsup(α) = b if α ≤ 0.5; a if α > 0.5.    ξinf(α) = a if α ≤ 0.5; b if α > 0.5.
ξinf(α) = (1 − 2α)a + 2αb,  if α ≤ 0.5;
ξinf(α) = (2 − 2α)b + (2α − 1)c,  if α > 0.5.
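The ξinf formula above (and the symmetric ξsup formula, which is filled in here by the mirror-image argument and should be read as an assumption) can be coded for a triangular fuzzy variable (a, b, c):

```python
# α-optimistic and α-pessimistic values of a triangular fuzzy variable
# (a, b, c). tri_inf follows the formula quoted above; tri_sup is the
# symmetric counterpart, assumed by mirroring the derivation.

def tri_inf(a, b, c, alpha):
    if alpha <= 0.5:
        return (1 - 2 * alpha) * a + 2 * alpha * b
    return (2 - 2 * alpha) * b + (2 * alpha - 1) * c

def tri_sup(a, b, c, alpha):
    if alpha <= 0.5:
        return 2 * alpha * b + (1 - 2 * alpha) * c
    return (2 * alpha - 1) * a + (2 - 2 * alpha) * b

print(tri_inf(0, 1, 3, 0.25))                       # 0.5
print(tri_sup(0, 1, 3, 0.25))                       # 2.0
print(tri_inf(0, 1, 3, 0.5), tri_sup(0, 1, 3, 0.5)) # 1.0 1.0
```

At α = 0.5 both values meet at the modal point b, consistent with Theorem 2.48 below.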
Proof: It follows from the definition of α-pessimistic value that there exists
a decreasing sequence {xi } such that Cr{ξ ≤ xi } ≥ α and xi ↓ ξinf (α) as
i → ∞. Since {ξ ≤ xi } ↓ {ξ ≤ ξinf (α)} and limi→∞ Cr{ξ ≤ xi } ≥ α > 0.5, it
follows from the credibility semicontinuity law that
is an increasing sequence. If the limit equals ξinf(α), then the left-
continuity is proved. Otherwise, there exists a number z* such that
Proof: Part (a): Write ξ(α) = (ξinf (α) + ξsup (α))/2. If ξinf (α) < ξsup (α),
then we have
A contradiction proves ξinf (α) ≥ ξsup (α). Part (b): Assume that ξinf (α) >
ξsup (α). It follows from the definition of ξinf (α) that Cr{ξ ≤ ξ(α)} < α.
Similarly, it follows from the definition of ξsup (α) that Cr{ξ ≥ ξ(α)} < α.
Thus
1 ≤ Cr{ξ ≤ ξ(α)} + Cr{ξ ≥ ξ(α)} < α + α ≤ 1.
A contradiction proves ξinf (α) ≤ ξsup (α). The theorem is proved.
Proof: If c = 0, then part (a) is trivially valid. When c > 0, we have
Similarly, we may prove that (−ξ)inf (α) = −ξsup (α). The theorem is proved.
Theorem 2.50 Suppose that ξ and η are independent fuzzy variables. Then
for any α ∈ (0, 1], we have
(ξ + η)sup (α) = ξsup (α) + ηsup (α), (ξ + η)inf (α) = ξinf (α) + ηinf (α),
(ξη)sup (α) = ξsup (α)ηsup (α), (ξη)inf (α) = ξinf (α)ηinf (α), if ξ ≥ 0, η ≥ 0,
(ξ ∨ η)sup (α) = ξsup (α) ∨ ηsup (α), (ξ ∨ η)inf (α) = ξinf (α) ∨ ηinf (α),
(ξ ∧ η)sup (α) = ξsup (α) ∧ ηsup (α), (ξ ∧ η)inf (α) = ξinf (α) ∧ ηinf (α).
Proof: For any given number ε > 0, since ξ and η are independent fuzzy
variables, we have
which implies
(ξ + η)sup (α) ≥ ξsup (α) + ηsup (α) − ε. (2.81)
On the other hand, by the independence, we have
which implies
(ξ + η)sup (α) ≤ ξsup (α) + ηsup (α) + ε. (2.82)
It follows from (2.81) and (2.82) that
ξsup (α) + ηsup (α) + ε ≥ (ξ + η)sup (α) ≥ ξsup (α) + ηsup (α) − ε.
Letting ε → 0, we obtain (ξ + η)sup (α) = ξsup (α) + ηsup (α). The other
equalities may be proved similarly.
2.12 Entropy
Fuzzy entropy is a measure of uncertainty and has been studied by many
researchers such as De Luca and Termini [29], Kaufmann [76], Yager [239],
Kosko [85], Pal and Pal [188], Bhandari and Pal [7], and Pal and Bezdek
[192]. Those definitions of entropy characterize the uncertainty resulting
primarily from linguistic vagueness rather than from information deficiency,
and they vanish when the fuzzy variable is an equipossible one.
In order to measure the uncertainty of fuzzy variables, Liu [137] suggested
that an entropy of fuzzy variables should meet at least the following three
basic requirements:
(i) minimum: the entropy of a crisp number is minimum, i.e., 0;
(ii) maximum: the entropy of an equipossible fuzzy variable is maximum;
(iii) universality: the entropy is applicable not only to finite and infinite cases
but also to discrete and continuous cases.
In order to meet those requirements, Li and Liu [97] provided a new def-
inition of fuzzy entropy to characterize the uncertainty resulting from infor-
mation deficiency which is caused by the impossibility to predict the specified
value that a fuzzy variable takes.
Remark 2.7: It is clear that the entropy depends only on the number of
values and their credibilities and does not depend on the actual values that
the fuzzy variable takes.
(Figure: the function S(t) = −t ln t − (1 − t) ln(1 − t) on [0, 1], reaching its maximum ln 2 at t = 0.5.)
Proof: Since the function S(t) reaches its maximum ln 2 at t = 0.5, we have
H[ξ] = Σ_{i=1}^{n} S(Cr{ξ = xi}) ≤ n ln 2
and equality holds if and only if Cr{ξ = xi} = 0.5, i.e., µ(xi) ≡ 1 for all
i = 1, 2, ···, n.
This theorem states that the entropy of a fuzzy variable reaches its max-
imum when the fuzzy variable is an equipossible one. In this case, there is
no preference among all the values that the fuzzy variable will take.
Example 2.51: Let ξ be an equipossible fuzzy variable (a, b). Then µ(x) = 1
if a ≤ x ≤ b, and 0 otherwise. Thus its entropy is
H[ξ] = ∫_a^b −( (1/2) ln(1/2) + (1 − 1/2) ln(1 − 1/2) ) dx = (b − a) ln 2.
Example 2.52: Let ξ be a triangular fuzzy variable (a, b, c). Then its
entropy is H[ξ] = (c − a)/2.
Example 2.53: Let ξ be a trapezoidal fuzzy variable (a, b, c, d). Then its
entropy is H[ξ] = (d − a)/2 + (ln 2 − 0.5)(c − b).
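The closed-form entropies in Examples 2.51–2.53 can be verified numerically by integrating S(µ(x)/2) with a midpoint sum; the trapezoid (0, 1, 3, 4) below is an illustrative choice.

```python
import math

# Numeric check of the trapezoidal entropy formula above:
# H[ξ] = ∫ S(µ(x)/2) dx with S(t) = −t ln t − (1−t) ln(1−t).

def S(t):
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

def mu(x, a=0.0, b=1.0, c=3.0, d=4.0):   # illustrative trapezoid
    if a <= x <= b:
        return (x - a) / (b - a)
    if b <= x <= c:
        return 1.0
    if c <= x <= d:
        return (x - d) / (c - d)
    return 0.0

n, lo, hi = 50000, 0.0, 4.0
step = (hi - lo) / n
h = sum(S(mu(lo + (i + 0.5) * step) / 2) * step for i in range(n))

closed = (4.0 - 0.0) / 2 + (math.log(2) - 0.5) * (3.0 - 1.0)
print(abs(h - closed) < 1e-3)  # True
```

Setting b = c in the same check reproduces the triangular formula H[ξ] = (c − a)/2 as a special case.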
Proof: The theorem follows from the fact that the function S(t) reaches its
maximum ln 2 at t = 0.5.
Theorem 2.55 Let ξ and η be two continuous fuzzy variables with member-
ship functions µ(x) and ν(x), respectively. If µ(x) ≤ ν(x) for any x ∈ <,
then we have H[ξ] ≤ H[η].
Proof: Since µ(x) ≤ ν(x), we have S(µ(x)/2) ≤ S(ν(x)/2) for any x ∈ <.
It follows that H[ξ] ≤ H[η].
Theorem 2.56 Let ξ be a continuous fuzzy variable. Then for any real
numbers a and b, we have H[aξ + b] = |a|H[ξ].
Theorem 2.25 (Li and Liu [104]) Let ξ be a continuous nonnegative fuzzy
variable with finite second moment m². Then
H[ξ] ≤ πm/√6    (2.89)
and the equality holds if ξ is an exponentially distributed fuzzy variable with
second moment m².
The Lagrangian is
L = ∫_0^{+∞} ( −(µ(x)/2) ln(µ(x)/2) − (1 − µ(x)/2) ln(1 − µ(x)/2) ) dx − λ ( ∫_0^{+∞} x µ(x) dx − m² ).
Then µ̂ is a decreasing function on [0, +∞), and
Cr{ξ̂² ≥ x} = (1/2) sup_{y≥√x} µ̂(y) = (1/2) sup_{y≥√x} sup_{z≥y} µ(z) = (1/2) sup_{z≥√x} µ(z) ≤ Cr{ξ² ≥ x}
for any x > 0. Thus we have
E[ξ̂²] = ∫_0^{+∞} Cr{ξ̂² ≥ x} dx ≤ ∫_0^{+∞} Cr{ξ² ≥ x} dx = E[ξ²] = m².
and has the form µ(x) = 2 (1 + exp(λ(x − e)))^{−1}. Substituting it into the
variance constraint, we get
µ*(x) = 2 ( 1 + exp( π|x − e| / (√6 σ) ) )^{−1},  x ∈ ℜ,
which is just the normal membership function with expected value e and
variance σ², and the maximum entropy is H[ξ*] = √6 πσ/3.
Step 2: Let ξ be a general fuzzy variable with expected value e and
variance σ 2 . We define a fuzzy variable ξb by the membership function
µ̂(x) = sup_{y≤x} ( µ(y) ∨ µ(2e − y) ),  if x ≤ e;
µ̂(x) = sup_{y≥x} ( µ(y) ∨ µ(2e − y) ),  if x > e.
= (1/2) sup_{y≥e+√r} ( µ(y) ∨ µ(2e − y) ) = (1/2) sup_{(y−e)²≥r} µ(y)
≤ Cr{(ξ − e)² ≥ r}
for any r > 0. Thus
V[ξ̂] = ∫_0^{+∞} Cr{(ξ̂ − e)² ≥ r} dr ≤ ∫_0^{+∞} Cr{(ξ − e)² ≥ r} dr = σ².
2.13 Distance
Distance between fuzzy variables has been defined in many ways, for example,
the Hausdorff distance (Puri and Ralescu [204], Klement et al. [81]) and the
Hamming distance (Kacprzyk [73]). However, those definitions lack the identi-
fication property. To overcome this shortcoming, Liu [135] proposed the
following definition of distance.
Definition 2.27 (Liu [135]) The distance between fuzzy variables ξ and η is
defined as
d(ξ, η) = E[|ξ − η|]. (2.91)
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the credibility subadditivity
theorem that
d(ξ, η) = ∫_0^{+∞} Cr{|ξ − η| ≥ r} dr
≤ ∫_0^{+∞} Cr{|ξ − τ| + |τ − η| ≥ r} dr
≤ ∫_0^{+∞} Cr{ {|ξ − τ| ≥ r/2} ∪ {|τ − η| ≥ r/2} } dr
≤ ∫_0^{+∞} ( Cr{|ξ − τ| ≥ r/2} + Cr{|τ − η| ≥ r/2} ) dr
= ∫_0^{+∞} Cr{|ξ − τ| ≥ r/2} dr + ∫_0^{+∞} Cr{|τ − η| ≥ r/2} dr = 2 ( d(ξ, τ) + d(τ, η) ).
It is easy to verify that d(ξ, τ) = d(τ, η) = 1/2 and d(ξ, η) = 3/2. Thus
d(ξ, η) = (3/2) ( d(ξ, τ) + d(τ, η) ).
2.14 Inequalities
There are several useful inequalities for random variables, such as the Markov
inequality, Chebyshev inequality, Hölder's inequality, and Minkowski inequality,
= f (t) · Cr{|ξ| ≥ t}
Theorem 2.61 (Liu [134], Hölder’s Inequality) Let p and q be two positive
real numbers with 1/p+1/q = 1, and let ξ and η be independent fuzzy variables
with E[|ξ|p ] < ∞ and E[|η|q ] < ∞. Then we have
E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.    (2.95)
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function
f(x, y) = x^{1/p} y^{1/q} is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus
for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers
a and b such that
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function
f(x, y) = (x^{1/p} + y^{1/p})^p is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}.
Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain
f (ξ) − f (E[ξ]) ≥ k · (ξ − E[ξ]).
Taking the expected values on both sides, we have
E[f (ξ)] − f (E[ξ]) ≥ k · (E[ξ] − E[ξ]) = 0
which proves the inequality.
Definition 2.28 (Liu [134]) Suppose that ξ, ξ1 , ξ2 , · · · are fuzzy variables de-
fined on the credibility space (Θ, P, Cr). The sequence {ξi } is said to be con-
vergent a.s. to ξ if and only if there exists an event A with Cr{A} = 1 such
that
lim |ξi (θ) − ξ(θ)| = 0 (2.98)
i→∞
for every θ ∈ A. In that case we write ξi → ξ, a.s.
Definition 2.29 (Liu [134]) Suppose that ξ, ξ1 , ξ2 , · · · are fuzzy variables de-
fined on the credibility space (Θ, P, Cr). We say that the sequence {ξi } con-
verges in credibility to ξ if
lim Cr {|ξi − ξ| ≥ ε} = 0 (2.99)
i→∞
Proof: It follows from Theorem 2.59 that, for any given number ε > 0,
Cr{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|]/ε → 0
as i → ∞. Thus {ξi} converges in credibility to ξ.
E[|ξi − ξ|] ≡ 1 ↛ 0.
Theorem 2.65 (Wang and Liu [233]) Suppose that ξ, ξ1 , ξ2 , · · · are fuzzy
variables defined on the credibility space (Θ, P, Cr). If the sequence {ξi } con-
verges in credibility to ξ, then {ξi } converges a.s. to ξ.
Proof: If {ξi } does not converge a.s. to ξ, then there exists an element
θ* ∈ Θ with Cr{θ*} > 0 such that ξi(θ*) ↛ ξ(θ*) as i → ∞. In other words,
there exists a small number ε > 0 and a subsequence {ξik (θ∗ )} such that
|ξik (θ∗ ) − ξ(θ∗ )| ≥ ε for any k. Since credibility measure is an increasing set
function, we have
Cr {|ξik − ξ| ≥ ε} ≥ Cr{θ∗ } > 0
for any k. It follows that {ξi } does not converge in credibility to ξ. A
contradiction proves the theorem.
Since Cr{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf i→∞ Φi (x) for any
z < x. Letting z → x, we get
Φ(x) ≤ lim inf Φi (x). (2.103)
i→∞
It follows from (2.102) and (2.103) that Φi (x) → Φ(x). The theorem is
proved.
It is clear that Φi(x) ↛ Φ(x) at x > 0. That is, the sequence {ξi} does not
converge in distribution to ξ.
Cr{A|B} ≤ Cr{A ∩ B}/Cr{B}.    (2.104)
Cr{A|B} = 1 − Cr{A^c|B} ≥ 1 − Cr{A^c ∩ B}/Cr{B}.    (2.105)
0 ≤ 1 − Cr{A^c ∩ B}/Cr{B} ≤ Cr{A ∩ B}/Cr{B} ≤ 1.    (2.106)
Definition 2.32 (Liu [138]) Let (Θ, P, Cr) be a credibility space, and A, B ∈
P. Then the conditional credibility measure of A given B is defined by
Cr{A|B} = Cr{A ∩ B}/Cr{B},  if Cr{A ∩ B}/Cr{B} < 0.5;
Cr{A|B} = 1 − Cr{A^c ∩ B}/Cr{B},  if Cr{A^c ∩ B}/Cr{B} < 0.5;    (2.107)
Cr{A|B} = 0.5,  otherwise
then we have
Cr{A|B} + Cr{A^c|B} = Cr{A ∩ B}/Cr{B} + ( 1 − Cr{A ∩ B}/Cr{B} ) = 1.
That is, Cr{·|B} satisfies the self-duality axiom. Finally, for any events {Ai }
with supi Cr{Ai |B} < 0.5, we have supi Cr{Ai ∩ B} < 0.5 and
sup_i Cr{Ai|B} = sup_i Cr{Ai ∩ B}/Cr{B} = Cr{∪i Ai ∩ B}/Cr{B} = Cr{∪i Ai|B}.
Thus Cr{·|B} satisfies the maximality axiom. Hence Cr{·|B} is a credibility
measure. Furthermore, (Θ, P, Cr{·|B}) is a credibility space.
Example 2.65: Let ξ be a fuzzy variable, and X a set of real numbers such
that Cr{ξ ∈ X} > 0. Then for any x ∈ X, the conditional credibility of
ξ = x given ξ ∈ X is
Cr{ξ = x|ξ ∈ X} = Cr{ξ = x}/Cr{ξ ∈ X},  if Cr{ξ = x}/Cr{ξ ∈ X} < 0.5;
Cr{ξ = x|ξ ∈ X} = 1 − Cr{ξ ≠ x, ξ ∈ X}/Cr{ξ ∈ X},  if Cr{ξ ≠ x, ξ ∈ X}/Cr{ξ ∈ X} < 0.5;
Cr{ξ = x|ξ ∈ X} = 0.5,  otherwise.
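Definition (2.107) is easy to exercise on a finite space. The sketch below uses illustrative singleton credibilities and checks the self-duality of the conditional measure.

```python
# The conditional credibility (2.107) on a finite space, a sketch; the
# singleton credibilities below are illustrative.

def cr(event, w):
    s = max((w[t] for t in event), default=0.0)
    if s < 0.5:
        return s
    return 1.0 - max((w[t] for t in set(w) - set(event)), default=0.0)

def cr_given(a, b, w):
    crb = cr(b, w)
    num = cr(a & b, w) / crb
    if num < 0.5:
        return num
    dual = cr(b - a, w) / crb        # Cr{A^c ∩ B} / Cr{B}
    if dual < 0.5:
        return 1.0 - dual
    return 0.5

w = {"t1": 0.7, "t2": 0.3, "t3": 0.1}
b = {"t1", "t2"}
p1 = cr_given({"t1"}, b, w)
p2 = cr_given({"t2"}, b, w)
print(round(p1, 4), round(p2, 4))  # 0.6667 0.3333
print(p1 + p2)                     # 1.0 (self-duality)
```

Conditioning renormalizes the measure on B while preserving self-duality, which is exactly what the proof above establishes.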
Example 2.66: Let ξ and η be two fuzzy variables, and Y a set of real
numbers such that Cr{η ∈ Y } > 0. Then we have
Cr{ξ = x|η ∈ Y} = Cr{ξ = x, η ∈ Y}/Cr{η ∈ Y},  if Cr{ξ = x, η ∈ Y}/Cr{η ∈ Y} < 0.5;
Cr{ξ = x|η ∈ Y} = 1 − Cr{ξ ≠ x, η ∈ Y}/Cr{η ∈ Y},  if Cr{ξ ≠ x, η ∈ Y}/Cr{η ∈ Y} < 0.5;
Cr{ξ = x|η ∈ Y} = 0.5,  otherwise.
Definition 2.33 (Liu [138]) The conditional membership function of a fuzzy
variable ξ given B is defined by
µ(x|B) = (2Cr{ξ = x|B}) ∧ 1, x∈< (2.109)
provided that Cr{B} > 0.
(Figure: conditional membership functions µ(x|X) obtained from a membership function µ by conditioning on the set X, for two different choices of X.)
Example 2.68: Let ξ and η be two fuzzy variables with joint member-
ship function µ(x, y), and Y a set of real numbers. Then the conditional
membership function of ξ given η ∈ Y is
$$\mu(x|Y) = \begin{cases} \dfrac{2\sup_{y\in Y}\mu(x,y)}{\sup_{x\in\Re,\,y\in Y}\mu(x,y)}\wedge 1, & \text{if } \displaystyle\sup_{x\in\Re,\,y\in Y}\mu(x,y) < 1 \\[3mm] \dfrac{2\sup_{y\in Y}\mu(x,y)}{2-\sup_{x\in\Re,\,y\in Y^c}\mu(x,y)}\wedge 1, & \text{if } \displaystyle\sup_{x\in\Re,\,y\in Y}\mu(x,y) = 1 \end{cases} \tag{2.111}$$
provided that µ(x, y) > 0 for some x ∈ ℜ and y ∈ Y. Especially, the conditional membership function of ξ given η = y is
$$\mu(x|y) = \begin{cases} \dfrac{2\mu(x,y)}{\sup_{x\in\Re}\mu(x,y)}\wedge 1, & \text{if } \displaystyle\sup_{x\in\Re}\mu(x,y) < 1 \\[3mm] \dfrac{2\mu(x,y)}{2-\sup_{x\in\Re,\,z\ne y}\mu(x,z)}\wedge 1, & \text{if } \displaystyle\sup_{x\in\Re}\mu(x,y) = 1. \end{cases}$$
Example 2.69: Let ξ and η be fuzzy variables. Then the conditional credi-
bility distribution of ξ given η = y is
$$\Phi(x|\eta=y) = \begin{cases} \dfrac{\mathrm{Cr}\{\xi\le x,\,\eta=y\}}{\mathrm{Cr}\{\eta=y\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi\le x,\,\eta=y\}}{\mathrm{Cr}\{\eta=y\}} < 0.5 \\[3mm] 1-\dfrac{\mathrm{Cr}\{\xi>x,\,\eta=y\}}{\mathrm{Cr}\{\eta=y\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi>x,\,\eta=y\}}{\mathrm{Cr}\{\eta=y\}} < 0.5 \\[3mm] 0.5, & \text{otherwise} \end{cases}$$
Definition 2.37 (Liu [138]) Let ξ be a fuzzy variable. Then the conditional
expected value of ξ given B is defined by
$$E[\xi|B] = \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r|B\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{\xi\le r|B\}\,dr \tag{2.116}$$
Hazard Rate
Definition 2.38 Let ξ be a nonnegative fuzzy variable representing lifetime.
Then the hazard rate (or failure rate) is
$$h(x) = \lim_{\Delta\downarrow 0}\mathrm{Cr}\{\xi\le x+\Delta \mid \xi>x\}. \tag{2.117}$$
The hazard rate tells us the credibility of a failure just after time x when it
is functioning at time x.
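The limit in (2.117) can be approximated numerically. The sketch below assumes a triangular membership function for the lifetime (an illustrative choice, not an example from the text) and evaluates the conditional credibility at a small Δ on a fine grid:

```python
# Numerical sketch of the hazard rate h(x) = lim_{Delta->0} Cr{xi <= x+Delta | xi > x}.
# The triangular membership function is an assumed illustration; interval
# credibilities use Cr{A} = (sup_A mu + 1 - sup_{A^c} mu) / 2.
import numpy as np

ts = np.linspace(0.0, 2.0, 20001)
mu = np.where(ts <= 1.0, ts, 2.0 - ts)        # triangular lifetime, peak at t = 1

def cr(mask):
    sup_a = mu[mask].max() if mask.any() else 0.0
    sup_ac = mu[~mask].max() if (~mask).any() else 0.0
    return 0.5 * (sup_a + 1.0 - sup_ac)

def cond_cr(mask_a, mask_b):
    """Cr{A|B} by the three-branch conditional credibility (2.107)."""
    cr_b = cr(mask_b)
    r1 = cr(mask_a & mask_b) / cr_b
    r2 = cr(~mask_a & mask_b) / cr_b
    return r1 if r1 < 0.5 else (1.0 - r2 if r2 < 0.5 else 0.5)

def hazard(x, delta=1e-3):
    return cond_cr(ts <= x + delta, ts > x)
```

For this particular shape, `hazard(0.5)` comes out near µ(0.5)/(2 − µ(0.5)) = 1/3; no closed form is claimed here, only the numerical limit for the assumed membership function.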
Chance Theory
Fuzziness and randomness are two basic types of uncertainty. In many cases, fuzziness and randomness appear simultaneously in a system. In order to describe this phenomenon, a fuzzy random variable was introduced by Kwakernaak [87] as a random element taking “fuzzy variable” values. In addition,
a random fuzzy variable was proposed by Liu [130] as a fuzzy element taking
“random variable” values. For example, it might be known that the lifetime
of a modern engine is an exponentially distributed random variable with an
unknown parameter. If the parameter is provided as a fuzzy variable, then
the lifetime is a random fuzzy variable.
More generally, a hybrid variable was introduced by Liu [136] in 2006 as a
tool to describe quantities that involve both fuzziness and randomness. Fuzzy random variables and random fuzzy variables are instances of hybrid variables. In order
to measure hybrid events, a concept of chance measure was introduced by Li
and Liu [107] in 2009. Chance theory is a hybrid of probability theory and
credibility theory. Perhaps the reader would like to know what axioms we
should assume for chance theory. In fact, chance theory will be based on the
three axioms of probability and five axioms of credibility.
The emphasis in this chapter is mainly on chance space, hybrid variable,
chance measure, chance distribution, expected value, variance, moments,
critical values, entropy, distance, convergence almost surely, convergence in
chance, convergence in mean, convergence in distribution, and conditional
chance.
Definition 3.1 (Liu [136]) Suppose that (Θ, P, Cr) is a credibility space and
(Ω, A, Pr) is a probability space. The product (Θ, P, Cr) × (Ω, A, Pr) is called
a chance space.
The universal set Θ × Ω is clearly the set of all ordered pairs of the form
(θ, ω), where θ ∈ Θ and ω ∈ Ω. What is the product σ-algebra P × A? What
is the product measure Cr × Pr? Let us discuss these two basic problems.
(Figure: an event Λ in the chance space Θ × Ω, with its section Λ(θ) ⊂ Ω at a point θ ∈ Θ.)
Definition 3.2 (Liu [138]) Let (Θ, P, Cr) × (Ω, A, Pr) be a chance space. A
subset Λ ⊂ Θ × Ω is called an event if Λ(θ) ∈ A for each θ ∈ Θ.
Example 3.1: Empty set ∅ and universal set Θ × Ω are clearly events.
Theorem 3.1 (Liu [138]) Let (Θ, P, Cr) × (Ω, A, Pr) be a chance space. The
class of all events is a σ-algebra over Θ × Ω, and denoted by P × A.
Example 3.4: When Θ × Ω is uncountable, for example Θ × Ω = ℜ², usually P × A is a σ-algebra between the Borel algebra and the power set of ℜ². Let X be a nonempty Borel set and let Y be a non-Borel set of real numbers. It follows from X × Y ∉ P × A that P × A is smaller than the power set. It is also clear that Y × X ∈ P × A but Y × X is not a Borel set. Hence P × A is larger than the Borel algebra.
Definition 3.3 (Li and Liu [107]) Let (Θ, P, Cr) × (Ω, A, Pr) be a chance
space. Then a chance measure of an event Λ is defined as
$$\mathrm{Ch}\{\Lambda\} = \begin{cases} \displaystyle\sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda(\theta)\}\bigr), & \text{if } \displaystyle\sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda(\theta)\}\bigr) < 0.5 \\[3mm] 1-\displaystyle\sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda^c(\theta)\}\bigr), & \text{if } \displaystyle\sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda(\theta)\}\bigr) \ge 0.5. \end{cases} \tag{3.2}$$
Example 3.5: Take a credibility space (Θ, P, Cr) to be {θ1 , θ2 } with Cr{θ1 } =
0.6 and Cr{θ2 } = 0.4, and take a probability space (Ω, A, Pr) to be {ω1 , ω2 }
with Pr{ω1 } = 0.7 and Pr{ω2 } = 0.3. Then
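the chance measure of any event in this finite space can be computed directly from (3.2). A minimal sketch (the particular event chosen below is an assumed illustration):

```python
# Sketch: evaluating the chance measure (3.2) on the finite chance space of
# Example 3.5. The event chosen below is an assumed illustration.

CR = {'t1': 0.6, 't2': 0.4}          # Cr on Theta = {theta1, theta2}
PR = {'w1': 0.7, 'w2': 0.3}          # Pr on Omega = {omega1, omega2}
ALL = {(t, w) for t in CR for w in PR}

def ch(event):
    """Chance measure of a set of (theta, omega) pairs per (3.2)."""
    def sup_term(ev):
        # sup over theta of Cr{theta} /\ Pr{section of ev at theta}
        return max(min(CR[t], sum(PR[w] for w in PR if (t, w) in ev)) for t in CR)
    s = sup_term(event)
    return s if s < 0.5 else 1.0 - sup_term(ALL - set(event))

lam = {('t1', 'w1')}
# sup term of lam is min(0.6, 0.7) = 0.6 >= 0.5, so the second branch applies
```

Applying (3.2) here gives Ch{Λ} = 1 − max(0.6 ∧ 0.3, 0.4 ∧ 1.0) = 0.6 for this event, and by self-duality the complement receives chance 0.4.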
Theorem 3.2 Let (Θ, P, Cr) × (Ω, A, Pr) be a chance space and Ch a chance
measure. Then we have
Ch{∅} = 0, (3.3)
Ch{Θ × Ω} = 1, (3.4)
0 ≤ Ch{Λ} ≤ 1 (3.5)
for any event Λ.
Theorem 3.3 Let (Θ, P, Cr) × (Ω, A, Pr) be a chance space and Ch a chance
measure. Then for any event Λ, we have
Proof: It follows from the basic properties of probability and credibility that
and
$$\sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda(\theta)\}\bigr) + \sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda^c(\theta)\}\bigr) \le 1\vee 1 = 1.$$
The inequalities (3.8) follow immediately from the above inequalities and the definition of chance measure.
Theorem 3.4 (Li and Liu [107]) The chance measure is increasing. That
is,
Ch{Λ1 } ≤ Ch{Λ2 } (3.9)
for any events Λ1 and Λ2 with Λ1 ⊂ Λ2 .
Proof: Since Λ1 (θ) ⊂ Λ2 (θ) and Λc2 (θ) ⊂ Λc1 (θ) for each θ ∈ Θ, we have
Case 2: $\sup_{\theta\in\Theta}(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_2(\theta)\}) \ge 0.5$ and $\sup_{\theta\in\Theta}(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_1(\theta)\}) < 0.5$.
It follows from Theorem 3.3 that
Case 3: $\sup_{\theta\in\Theta}(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_2(\theta)\}) \ge 0.5$ and $\sup_{\theta\in\Theta}(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_1(\theta)\}) \ge 0.5$.
For this case, we have
Theorem 3.5 (Li and Liu [107]) The chance measure is self-dual. That is,
Case 2: $\sup_{\theta\in\Theta}(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda(\theta)\}) \ge 0.5$ and $\sup_{\theta\in\Theta}(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda^c(\theta)\}) < 0.5$.
For this case, we have
Theorem 3.6 (Li and Liu [107]) For any event X × Y , we have
Example 3.6: It follows from Theorem 3.6 that for any events X × Ω and
Θ × Y , we have
Theorem 3.7 (Li and Liu [107], Chance Subadditivity Theorem) The chance
measure is subadditive. That is,
Ch{Λ1 ∪ Λ2 } ≤ Ch{Λ1 } + Ch{Λ2 } (3.13)
for any events Λ1 and Λ2 . In fact, chance measure is not only finitely sub-
additive but also countably subadditive.
Proof: The proof breaks down into three cases.
Case 1: Ch{Λ1 ∪ Λ2 } < 0.5. Then Ch{Λ1 } < 0.5, Ch{Λ2 } < 0.5 and
$$\begin{aligned} \mathrm{Ch}\{\Lambda_1\cup\Lambda_2\} &= \sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)(\theta)\}\bigr) \\ &\le \sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge(\Pr\{\Lambda_1(\theta)\}+\Pr\{\Lambda_2(\theta)\})\bigr) \\ &\le \sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_1(\theta)\}+\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_2(\theta)\}\bigr) \\ &\le \sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_1(\theta)\}\bigr)+\sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_2(\theta)\}\bigr) \\ &= \mathrm{Ch}\{\Lambda_1\}+\mathrm{Ch}\{\Lambda_2\}. \end{aligned}$$
Case 2: Ch{Λ1 ∪ Λ2 } ≥ 0.5 and Ch{Λ1 } ∨ Ch{Λ2 } < 0.5. We first have
$$\sup_{\theta\in\Theta}\bigl(\mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)(\theta)\}\bigr) \ge 0.5.$$
For any sufficiently small number ε > 0, there exists a point θ such that
Cr{θ} ∧ Pr{(Λ1 ∪ Λ2 )(θ)} > 0.5 − ε > Ch{Λ1 } ∨ Ch{Λ2 },
Cr{θ} > 0.5 − ε > Pr{Λ1 (θ)},
Cr{θ} > 0.5 − ε > Pr{Λ2 (θ)}.
Thus we have
$$\begin{aligned} &\mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\} + \mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_1(\theta)\} + \mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_2(\theta)\} \\ &= \mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\} + \Pr\{\Lambda_1(\theta)\} + \Pr\{\Lambda_2(\theta)\} \\ &\ge \mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1\cup\Lambda_2)(\theta)\} \ge 1-2\varepsilon \end{aligned}$$
because if $\mathrm{Cr}\{\theta\} \ge \Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\}$, then
$$\mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1\cup\Lambda_2)(\theta)\} = \Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1\cup\Lambda_2)(\theta)\} = 1 \ge 1-2\varepsilon$$
and if $\mathrm{Cr}\{\theta\} < \Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\}$, then
$$\mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1\cup\Lambda_2)(\theta)\} = \mathrm{Cr}\{\theta\} + \Pr\{(\Lambda_1\cup\Lambda_2)(\theta)\} \ge (0.5-\varepsilon)+(0.5-\varepsilon) = 1-2\varepsilon.$$
$$\mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_1^c(\theta)\} = \mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1^c(\theta)\cap\Lambda_2^c(\theta))\cup(\Lambda_1^c(\theta)\cap\Lambda_2(\theta))\},$$
i.e., $\mathrm{Cr}\{\theta\}\wedge\Pr\{(\Lambda_1\cup\Lambda_2)^c(\theta)\} \ge \mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_1^c(\theta)\} - \mathrm{Cr}\{\theta\}\wedge\Pr\{\Lambda_2(\theta)\}$. It
follows from Theorem 3.3 that
Remark 3.1: For any events Λ1 and Λ2 , it follows from the chance subaddi-
tivity theorem that the chance measure is null-additive, i.e., Ch{Λ1 ∪ Λ2 } =
Ch{Λ1 } + Ch{Λ2 } if either Ch{Λ1 } = 0 or Ch{Λ2 } = 0.
Theorem 3.9 (Li and Liu [107], Chance Semicontinuity Law) For events
Λ1 , Λ2 , · · · , we have
$$\lim_{i\to\infty}\mathrm{Ch}\{\Lambda_i\} = \mathrm{Ch}\left\{\lim_{i\to\infty}\Lambda_i\right\} \tag{3.15}$$
we have
It follows that Ch{Λ} < 0.5 and the part (b) holds by using (a).
(c) Assume Ch{Λ} ≥ 0.5 and Λi ↓ Λ. We have Ch{Λc } ≤ 0.5 and Λci ↑ Λc .
It follows from (a) that
(d) Assume $\lim_{i\to\infty}\mathrm{Ch}\{\Lambda_i\} > 0.5$ and $\Lambda_i\downarrow\Lambda$. We have $\lim_{i\to\infty}\mathrm{Ch}\{\Lambda_i^c\} < 0.5$ and $\Lambda_i^c\uparrow\Lambda^c$. It follows from (b) that
is an event.
Remark 3.4: For each fixed θ ∈ Θ, it is clear that the hybrid variable ξ(θ, ω)
is a measurable function from the probability space (Ω, A, Pr) to the set of
real numbers. Thus it is a random variable and we will denote it by ξ(θ, ·).
Then a hybrid variable ξ(θ, ω) may also be regarded as a function from a
credibility space (Θ, P, Cr) to the set {ξ(θ, ·)|θ ∈ Θ} of random variables.
Thus ξ is a random fuzzy variable defined by Liu [130].
Remark 3.5: For each fixed ω ∈ Ω, it is clear that the hybrid variable
ξ(θ, ω) is a function from the credibility space (Θ, P, Cr) to the set of real
numbers. Thus it is a fuzzy variable and we will denote it by ξ(·, ω). Then a
hybrid variable ξ(θ, ω) may be regarded as a function from a probability space
(Ω, A, Pr) to the set {ξ(·, ω)|ω ∈ Ω} of fuzzy variables. If Cr{ξ(·, ω) ∈ B} is
a measurable function of ω for any Borel set B of real numbers, then ξ is a
fuzzy random variable in the sense of Liu and Liu [150].
Model I
ξ = f (ã, η) (3.19)
we have
$$\mathrm{Ch}\{f(\tilde a,\eta)\in B\} = \begin{cases} \displaystyle\sup_x\left(\frac{\mu(x)}{2}\wedge\int_{f(x,y)\in B}\phi(y)\,dy\right), & \text{if } \displaystyle\sup_x\left(\frac{\mu(x)}{2}\wedge\int_{f(x,y)\in B}\phi(y)\,dy\right) < 0.5 \\[4mm] 1-\displaystyle\sup_x\left(\frac{\mu(x)}{2}\wedge\int_{f(x,y)\in B^c}\phi(y)\,dy\right), & \text{if } \displaystyle\sup_x\left(\frac{\mu(x)}{2}\wedge\int_{f(x,y)\in B}\phi(y)\,dy\right) \ge 0.5. \end{cases}$$
More generally, let ã1 , ã2 , · · · , ãm be fuzzy variables, and let η1 , η2 , · · · , ηn be
random variables. If $f: \Re^{m+n}\to\Re$ is a measurable function, then
$$\xi = f(\tilde a_1,\tilde a_2,\cdots,\tilde a_m;\eta_1,\eta_2,\cdots,\eta_n) \tag{3.20}$$
is a hybrid variable. The chance Ch{f (ã1 , ã2 , · · · , ãm ; η1 , η2 , · · · , ηn ) ∈ B}
may be calculated in a similar way provided that µ is the joint membership
function and φ is the joint probability density function.
Model II
Let ã1 , ã2 , · · · , ãm be fuzzy variables, and let p1 , p2 , · · · , pm be nonnegative
numbers with p1 + p2 + · · · + pm = 1. Then
$$\xi = \begin{cases} \tilde a_1 & \text{with probability } p_1 \\ \tilde a_2 & \text{with probability } p_2 \\ \quad\cdots \\ \tilde a_m & \text{with probability } p_m \end{cases} \tag{3.21}$$
is clearly a hybrid variable. If ã1 , ã2 , · · · , ãm have membership functions
µ1 , µ2 , · · · , µm , respectively, then for any set B of real numbers, we have
$$\mathrm{Ch}\{\xi\in B\} = \begin{cases} \displaystyle\sup_{x_1,x_2,\cdots,x_m}\left(\min_{1\le i\le m}\frac{\mu_i(x_i)}{2}\wedge\sum_{i=1}^m\{p_i\mid x_i\in B\}\right), & \text{if } \displaystyle\sup_{x_1,x_2,\cdots,x_m}\left(\min_{1\le i\le m}\frac{\mu_i(x_i)}{2}\wedge\sum_{i=1}^m\{p_i\mid x_i\in B\}\right) < 0.5 \\[4mm] 1-\displaystyle\sup_{x_1,x_2,\cdots,x_m}\left(\min_{1\le i\le m}\frac{\mu_i(x_i)}{2}\wedge\sum_{i=1}^m\{p_i\mid x_i\in B^c\}\right), & \text{if } \displaystyle\sup_{x_1,x_2,\cdots,x_m}\left(\min_{1\le i\le m}\frac{\mu_i(x_i)}{2}\wedge\sum_{i=1}^m\{p_i\mid x_i\in B\}\right) \ge 0.5, \end{cases}$$
where $\sum_{i=1}^m\{p_i\mid x_i\in B\}$ denotes the sum of those $p_i$ whose $x_i$ lie in $B$.
Model III
Model IV
in which the parameters ã1 , ã2 , · · · , ãm are fuzzy variables rather than crisp
numbers. Then ξ is a hybrid variable provided that φ(x; y1 , y2 , · · · , ym ) is
a probability density function for any (y1 , y2 , · · · , ym ) that (ã1 , ã2 , · · · , ãm )
may take. If ã1 , ã2 , · · · , ãm have membership functions µ1 , µ2 , · · · , µm , re-
spectively, then for any Borel set B of real numbers, the chance Ch{ξ ∈ B}
is
$$\mathrm{Ch}\{\xi\in B\} = \begin{cases} \displaystyle\sup_{y_1,y_2,\cdots,y_m}\left(\min_{1\le i\le m}\frac{\mu_i(y_i)}{2}\wedge\int_B\phi(x;y_1,y_2,\cdots,y_m)\,dx\right), & \text{if } \displaystyle\sup_{y_1,y_2,\cdots,y_m}\left(\min_{1\le i\le m}\frac{\mu_i(y_i)}{2}\wedge\int_B\phi(x;y_1,y_2,\cdots,y_m)\,dx\right) < 0.5 \\[4mm] 1-\displaystyle\sup_{y_1,y_2,\cdots,y_m}\left(\min_{1\le i\le m}\frac{\mu_i(y_i)}{2}\wedge\int_{B^c}\phi(x;y_1,y_2,\cdots,y_m)\,dx\right), & \text{if } \displaystyle\sup_{y_1,y_2,\cdots,y_m}\left(\min_{1\le i\le m}\frac{\mu_i(y_i)}{2}\wedge\int_B\phi(x;y_1,y_2,\cdots,y_m)\,dx\right) \ge 0.5. \end{cases}$$
Hybrid Vectors
Definition 3.6 An n-dimensional hybrid vector is a measurable function from a chance space (Θ, P, Cr) × (Ω, A, Pr) to the set of n-dimensional real vectors, i.e., for any Borel set B of $\Re^n$, the set
$$\{\xi\in B\} = \{(\theta,\omega)\in\Theta\times\Omega \mid \xi(\theta,\omega)\in B\} \tag{3.24}$$
is an event.
$$\mathcal{B} = \{B\subset\Re^n \mid \{\xi\in B\}\text{ is an event}\}.$$
is an event. Next, the class $\mathcal{B}$ is a σ-algebra over $\Re^n$ because (i) we have $\Re^n\in\mathcal{B}$ since $\{\xi\in\Re^n\} = \Theta\times\Omega$; (ii) if $B\in\mathcal{B}$, then $\{\xi\in B\}$ is an event, and
$$\{\xi\in B^c\} = \{\xi\in B\}^c$$
is an event. This means that B c ∈ B; (iii) if Bi ∈ B for i = 1, 2, · · · , then
{ξ ∈ Bi } are events and
$$\left\{\xi\in\bigcup_{i=1}^\infty B_i\right\} = \bigcup_{i=1}^\infty\{\xi\in B_i\}$$
Hybrid Arithmetic
Definition 3.7 Let $f: \Re^n\to\Re$ be a measurable function, and $\xi_1,\xi_2,\cdots,\xi_n$
hybrid variables on the chance space (Θ, P, Cr) × (Ω, A, Pr). Then ξ =
f (ξ1 , ξ2 , · · · , ξn ) is a hybrid variable defined as
ξ(θ, ω) = f (ξ1 (θ, ω), ξ2 (θ, ω), · · · , ξn (θ, ω)), ∀(θ, ω) ∈ Θ × Ω. (3.25)
Example 3.7: Let ξ1 and ξ2 be two hybrid variables. Then the sum ξ =
ξ1 + ξ2 is a hybrid variable defined by
ξ(θ, ω) = ξ1 (θ, ω) + ξ2 (θ, ω), ∀(θ, ω) ∈ Θ × Ω.
The product ξ = ξ1 ξ2 is also a hybrid variable defined by
ξ(θ, ω) = ξ1 (θ, ω) · ξ2 (θ, ω), ∀(θ, ω) ∈ Θ × Ω.
Theorem 3.12 Let ξ be an n-dimensional hybrid vector, and $f: \Re^n\to\Re$ a
measurable function. Then f (ξ) is a hybrid variable.
Proof: Assume that ξ is a hybrid vector on the chance space (Θ, P, Cr) ×
(Ω, A, Pr). For any Borel set $B$ of $\Re$, since $f$ is a measurable function, $f^{-1}(B)$ is a Borel set of $\Re^n$. Thus the set
$$\{f(\xi)\in B\} = \{\xi\in f^{-1}(B)\}$$
Definition 3.8 (Li and Liu [107]) The chance distribution $\Phi: \Re\to[0,1]$ of
a hybrid variable ξ is defined by
Φ(x) = Ch {ξ ≤ x} . (3.26)
$$\lim_{y\downarrow x}\Phi(y) = \Phi(x) \quad \text{if } \lim_{y\downarrow x}\Phi(y) > 0.5 \text{ or } \Phi(x)\ge 0.5. \tag{3.28}$$
For this case, we have proved that limy↓x Φ(y) = Φ(x). Thus (3.28) is proved.
Conversely, suppose $\Phi: \Re\to[0,1]$ is an increasing function satisfying
(3.27) and (3.28). Theorem 2.21 states that there is a fuzzy variable whose
credibility distribution is just Φ(x). Since a fuzzy variable is a special hybrid
variable, the theorem is proved.
Definition 3.9 The chance density function $\phi: \Re\to[0,+\infty)$ of a hybrid variable ξ is a function such that
$$\Phi(x) = \int_{-\infty}^x\phi(y)\,dy, \quad \forall x\in\Re, \tag{3.29}$$
$$\int_{-\infty}^{+\infty}\phi(y)\,dy = 1. \tag{3.30}$$
Proof: The first part follows immediately from the definition. In addition,
by the self-duality of chance measure, we have
$$\mathrm{Ch}\{\xi\ge x\} = 1-\mathrm{Ch}\{\xi<x\} = \int_{-\infty}^{+\infty}\phi(y)\,dy - \int_{-\infty}^x\phi(y)\,dy = \int_x^{+\infty}\phi(y)\,dy.$$
Φ(x1 , x2 , · · · , xn ) = Ch {ξ1 ≤ x1 , ξ2 ≤ x2 , · · · , ξn ≤ xn } .
Definition 3.11 The joint chance density function $\phi: \Re^n\to[0,+\infty)$ of a hybrid vector $(\xi_1,\xi_2,\cdots,\xi_n)$ is a function such that
$$\Phi(x_1,x_2,\cdots,x_n) = \int_{-\infty}^{x_1}\int_{-\infty}^{x_2}\cdots\int_{-\infty}^{x_n}\phi(y_1,y_2,\cdots,y_n)\,dy_1dy_2\cdots dy_n$$
holds for all $(x_1,x_2,\cdots,x_n)\in\Re^n$, and
$$\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\cdots\int_{-\infty}^{+\infty}\phi(y_1,y_2,\cdots,y_n)\,dy_1dy_2\cdots dy_n = 1$$
Proof: It follows from the definition of expected value operator and Fubini
Theorem that
$$\begin{aligned} E[\xi] &= \int_0^{+\infty}\mathrm{Ch}\{\xi\ge r\}\,dr - \int_{-\infty}^0\mathrm{Ch}\{\xi\le r\}\,dr \\ &= \int_0^{+\infty}\!\left(\int_r^{+\infty}\phi(x)\,dx\right)dr - \int_{-\infty}^0\!\left(\int_{-\infty}^r\phi(x)\,dx\right)dr \\ &= \int_0^{+\infty}\!\left(\int_0^x\phi(x)\,dr\right)dx - \int_{-\infty}^0\!\left(\int_x^0\phi(x)\,dr\right)dx \\ &= \int_0^{+\infty}x\phi(x)\,dx + \int_{-\infty}^0 x\phi(x)\,dx \\ &= \int_{-\infty}^{+\infty}x\phi(x)\,dx. \end{aligned}$$
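As a numerical sanity check (a sketch, treating an ordinary random variable with an assumed normal density as the special case of a hybrid variable), the two-integral definition of the expected value agrees with ∫xφ(x)dx:

```python
# Numerical sanity check: when a chance density phi exists, the two-integral
# definition of E[xi] agrees with int x phi(x) dx. An N(1,1) density (an
# assumed example) serves as the probability special case.
import numpy as np

xs = np.linspace(-6.0, 8.0, 14001)
dx = xs[1] - xs[0]
phi = np.exp(-0.5 * (xs - 1.0) ** 2) / np.sqrt(2.0 * np.pi)

mean_direct = np.sum(xs * phi) * dx          # int x phi(x) dx

cdf = np.cumsum(phi) * dx                    # Ch{xi <= x} in this special case
sf = 1.0 - cdf                               # Ch{xi >= x}
upper = np.sum(sf[xs >= 0]) * dx             # int_0^inf Ch{xi >= r} dr
lower = np.sum(cdf[xs < 0]) * dx             # int_{-inf}^0 Ch{xi <= r} dr
mean_choquet = upper - lower                 # both approximate E[xi] = 1
```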
Proof: Since the Lebesgue–Stieltjes integral $\int_{-\infty}^{+\infty}x\,d\Phi(x)$ is finite, we immediately have
$$\lim_{y\to+\infty}\int_0^y x\,d\Phi(x) = \int_0^{+\infty}x\,d\Phi(x), \qquad \lim_{y\to-\infty}\int_y^0 x\,d\Phi(x) = \int_{-\infty}^0 x\,d\Phi(x)$$
and
$$\lim_{y\to+\infty}\int_y^{+\infty}x\,d\Phi(x) = 0, \qquad \lim_{y\to-\infty}\int_{-\infty}^y x\,d\Phi(x) = 0.$$
It follows from
$$\int_y^{+\infty}x\,d\Phi(x) \ge y\left(\lim_{z\to+\infty}\Phi(z)-\Phi(y)\right) = y(1-\Phi(y)) \ge 0 \quad \text{for } y>0,$$
$$\int_{-\infty}^y x\,d\Phi(x) \le y\left(\Phi(y)-\lim_{z\to-\infty}\Phi(z)\right) = y\Phi(y) \le 0 \quad \text{for } y<0$$
that
$$\lim_{y\to+\infty}y(1-\Phi(y)) = 0, \qquad \lim_{y\to-\infty}y\Phi(y) = 0.$$
Let 0 = x0 < x1 < x2 < · · · < xn = y be a partition of [0, y]. Then we have
$$\sum_{i=0}^{n-1}x_i\bigl(\Phi(x_{i+1})-\Phi(x_i)\bigr) \to \int_0^y x\,d\Phi(x)$$
and
$$\sum_{i=0}^{n-1}\bigl(1-\Phi(x_{i+1})\bigr)(x_{i+1}-x_i) \to \int_0^y\mathrm{Ch}\{\xi\ge r\}\,dr$$
as $\max\{|x_{i+1}-x_i| : i=0,1,\cdots,n-1\}\to 0$. Since
$$\sum_{i=0}^{n-1}x_i\bigl(\Phi(x_{i+1})-\Phi(x_i)\bigr) - \sum_{i=0}^{n-1}\bigl(1-\Phi(x_{i+1})\bigr)(x_{i+1}-x_i) = y(\Phi(y)-1) \to 0$$
If a < 0, we have
$$\begin{aligned} E[a\xi] &= \int_0^{+\infty}\mathrm{Ch}\{a\xi\ge r\}\,dr - \int_{-\infty}^0\mathrm{Ch}\{a\xi\le r\}\,dr \\ &= \int_0^{+\infty}\mathrm{Ch}\{\xi\le r/a\}\,dr - \int_{-\infty}^0\mathrm{Ch}\{\xi\ge r/a\}\,dr \\ &= a\int_0^{+\infty}\mathrm{Ch}\{\xi\ge t\}\,dt - a\int_{-\infty}^0\mathrm{Ch}\{\xi\le t\}\,dt \\ &= aE[\xi]. \end{aligned}$$
Step 3: For any real numbers a and b, it follows from Steps 1 and 2 that
3.5 Variance
Definition 3.13 (Li and Liu [107]) Let ξ be a hybrid variable with finite
expected value e. Then the variance of ξ is defined by V [ξ] = E[(ξ − e)2 ].
Theorem 3.19 Let ξ be a hybrid variable with expected value e. Then V [ξ] =
0 if and only if Ch{ξ = e} = 1.
Ch{(ξ − e)2 = 0} = 1.
$$E[f(\xi)] \le \frac{b-e}{b-a}f(a) + \frac{e-a}{b-a}f(b). \tag{3.36}$$
$$\xi(\theta,\omega) = \frac{b-\xi(\theta,\omega)}{b-a}\,a + \frac{\xi(\theta,\omega)-a}{b-a}\,b.$$
It follows from the convexity of f that
$$f(\xi(\theta,\omega)) \le \frac{b-\xi(\theta,\omega)}{b-a}f(a) + \frac{\xi(\theta,\omega)-a}{b-a}f(b).$$
3.6 Moments
Definition 3.14 (Li and Liu [107]) Let ξ be a hybrid variable. Then for any
positive integer k,
(a) the expected value E[ξ k ] is called the kth moment;
(b) the expected value E[|ξ|k ] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])k ] is called the kth central moment;
(d) the expected value E[|ξ −E[ξ]|k ] is called the kth absolute central moment.
Note that the first central moment is always 0, the first moment is just
the expected value, and the second central moment is just the variance.
Theorem 3.22 Let ξ be a nonnegative hybrid variable, and k a positive num-
ber. Then the k-th moment
$$E[\xi^k] = k\int_0^{+\infty}r^{k-1}\,\mathrm{Ch}\{\xi\ge r\}\,dr. \tag{3.38}$$
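A quick numerical check of this identity in the probability special case, with an assumed exponential survival function Ch{ξ ≥ r} = e^{−r}, for which the second moment is 2:

```python
# Check of (3.38) for an exponential(1) variable (assumed example):
# k * int_0^inf r^{k-1} e^{-r} dr should equal E[xi^2] = 2 for k = 2.
import numpy as np

rs = np.linspace(0.0, 50.0, 500001)
dr = rs[1] - rs[0]
sf = np.exp(-rs)                     # Ch{xi >= r} in this special case
k = 2
moment = k * np.sum(rs ** (k - 1) * sf) * dr
```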
The hybrid variable ξ reaches upwards of the α-optimistic value ξsup (α),
and is below the α-pessimistic value ξinf (α) with chance α.
ξsup (α) = ηsup (α), ξinf (α) = ηinf (α), ∀α ∈ (0, 1].
In other words, the critical values of hybrid variable coincide with that of
random variable.
ξsup (α) = ãsup (α), ξinf (α) = ãinf (α), ∀α ∈ (0, 1].
In other words, the critical values of hybrid variable coincide with that of
fuzzy variable.
Proof: It follows from the definition of α-pessimistic value that there exists
a decreasing sequence {xi } such that Ch{ξ ≤ xi } ≥ α and xi ↓ ξinf (α) as
i → ∞. Since {ξ ≤ xi } ↓ {ξ ≤ ξinf (α)} and limi→∞ Ch{ξ ≤ xi } ≥ α > 0.5, it
follows from the chance semicontinuity theorem that
Proof: (a) If c = 0, then the part (a) is obvious. In the case of c > 0, we
have
$$\begin{aligned} (c\xi)_{\sup}(\alpha) &= \sup\{r \mid \mathrm{Ch}\{c\xi\ge r\}\ge\alpha\} \\ &= c\sup\{r/c \mid \mathrm{Ch}\{\xi\ge r/c\}\ge\alpha\} \\ &= c\,\xi_{\sup}(\alpha). \end{aligned}$$
A similar way may prove (cξ)inf (α) = cξinf (α). In order to prove the part (b),
it suffices to prove that (−ξ)sup (α) = −ξinf (α) and (−ξ)inf (α) = −ξsup (α).
In fact, we have
$$\begin{aligned} (-\xi)_{\sup}(\alpha) &= \sup\{r \mid \mathrm{Ch}\{-\xi\ge r\}\ge\alpha\} \\ &= -\inf\{-r \mid \mathrm{Ch}\{\xi\le -r\}\ge\alpha\} \\ &= -\xi_{\inf}(\alpha). \end{aligned}$$
Similarly, we may prove that (−ξ)inf (α) = −ξsup (α). The theorem is proved.
Theorem 3.26 Let ξ be a hybrid variable. Then we have
(a) if α > 0.5, then ξinf (α) ≥ ξsup (α);
(b) if α ≤ 0.5, then ξinf (α) ≤ ξsup (α).
Proof: Part (a): Write $\bar\xi(\alpha) = (\xi_{\inf}(\alpha)+\xi_{\sup}(\alpha))/2$. If $\xi_{\inf}(\alpha) < \xi_{\sup}(\alpha)$, then we have
$$1 \ge \mathrm{Ch}\{\xi<\bar\xi(\alpha)\} + \mathrm{Ch}\{\xi>\bar\xi(\alpha)\} \ge \alpha+\alpha > 1.$$
A contradiction proves $\xi_{\inf}(\alpha) \ge \xi_{\sup}(\alpha)$.
Part (b): Assume that $\xi_{\inf}(\alpha) > \xi_{\sup}(\alpha)$. It follows from the definition of $\xi_{\inf}(\alpha)$ that $\mathrm{Ch}\{\xi\le\bar\xi(\alpha)\} < \alpha$. Similarly, it follows from the definition of $\xi_{\sup}(\alpha)$ that $\mathrm{Ch}\{\xi\ge\bar\xi(\alpha)\} < \alpha$. Thus
$$1 \le \mathrm{Ch}\{\xi\le\bar\xi(\alpha)\} + \mathrm{Ch}\{\xi\ge\bar\xi(\alpha)\} < \alpha+\alpha \le 1.$$
A contradiction proves ξinf (α) ≤ ξsup (α). The theorem is verified.
Theorem 3.27 Let ξ be a hybrid variable. Then we have
(a) ξsup (α) is a decreasing and left-continuous function of α;
(b) ξinf (α) is an increasing and left-continuous function of α.
Proof: It is easy to prove that $\xi_{\inf}(\alpha)$ is an increasing function of α. Next, we prove the left-continuity of $\xi_{\inf}(\alpha)$ with respect to α. Let $\{\alpha_i\}$ be an arbitrary sequence of positive numbers such that $\alpha_i\uparrow\alpha$. Then $\{\xi_{\inf}(\alpha_i)\}$ is an increasing sequence. If its limit equals $\xi_{\inf}(\alpha)$, then the left-continuity is proved. Otherwise, there exists a number $z^*$ such that
$$\lim_{i\to\infty}\xi_{\inf}(\alpha_i) < z^* < \xi_{\inf}(\alpha).$$
3.8 Distance
Definition 3.16 (Li and Liu [107]) The distance between hybrid variables ξ
and η is defined as
Theorem 3.28 (Li and Liu [107]) Let ξ, η, τ be hybrid variables, and let
d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ ) + 2d(η, τ ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the chance subadditivity theorem
that
$$\begin{aligned} d(\xi,\eta) &= \int_0^{+\infty}\mathrm{Ch}\{|\xi-\eta|\ge r\}\,dr \\ &\le \int_0^{+\infty}\mathrm{Ch}\{|\xi-\tau|+|\tau-\eta|\ge r\}\,dr \\ &\le \int_0^{+\infty}\mathrm{Ch}\bigl\{\{|\xi-\tau|\ge r/2\}\cup\{|\tau-\eta|\ge r/2\}\bigr\}\,dr \\ &\le \int_0^{+\infty}\bigl(\mathrm{Ch}\{|\xi-\tau|\ge r/2\}+\mathrm{Ch}\{|\tau-\eta|\ge r/2\}\bigr)\,dr \\ &= \int_0^{+\infty}\mathrm{Ch}\{|\xi-\tau|\ge r/2\}\,dr + \int_0^{+\infty}\mathrm{Ch}\{|\tau-\eta|\ge r/2\}\,dr \\ &= 2d(\xi,\tau)+2d(\tau,\eta). \end{aligned}$$
3.9 Inequalities
Theorem 3.29 (Li and Liu [107]) Let ξ be a hybrid variable, and f a non-
negative function. If f is even and increasing on [0, ∞), then for any given
number t > 0, we have
$$\mathrm{Ch}\{|\xi|\ge t\} \le \frac{E[f(\xi)]}{f(t)}. \tag{3.45}$$
= f (t) · Ch{|ξ| ≥ t}
Theorem 3.30 (Li and Liu [107], Markov Inequality) Let ξ be a hybrid vari-
able. Then for any given numbers t > 0 and p > 0, we have
$$\mathrm{Ch}\{|\xi|\ge t\} \le \frac{E[|\xi|^p]}{t^p}. \tag{3.46}$$
Proof: It is a special case of Theorem 3.29 when f (x) = |x|p .
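The Markov inequality can be checked empirically in the probability special case; the sketch below uses assumed standard normal samples:

```python
# Empirical check of the Markov inequality in the probability special case,
# with assumed standard normal samples.
import random

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100000)]
t, p = 1.5, 2
lhs = sum(abs(x) >= t for x in samples) / len(samples)           # Ch{|xi| >= t}
rhs = sum(abs(x) ** p for x in samples) / len(samples) / t ** p  # E[|xi|^p] / t^p
# the inequality lhs <= rhs holds, here with a wide margin
```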
Theorem 3.31 (Li and Liu [107], Chebyshev Inequality) Let ξ be a hybrid
variable whose variance V [ξ] exists. Then for any given number t > 0, we
have
$$\mathrm{Ch}\{|\xi-E[\xi]|\ge t\} \le \frac{V[\xi]}{t^2}. \tag{3.47}$$
Proof: It is a special case of Theorem 3.29 when the hybrid variable ξ is
replaced with ξ − E[ξ], and f (x) = x2 .
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume $E[|\xi|^p]>0$ and $E[|\eta|^q]>0$. It is easy to prove that the function $f(x,y)=\sqrt[p]{x}\,\sqrt[q]{y}$ is a concave function on $D=\{(x,y): x\ge 0,\ y\ge 0\}$. Thus for any point $(x_0,y_0)$ with $x_0>0$ and $y_0>0$, there exist two real numbers $a$ and $b$ such that
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume $E[|\xi|^p]>0$ and $E[|\eta|^p]>0$. It is easy to prove that the function $f(x,y)=(\sqrt[p]{x}+\sqrt[p]{y})^p$ is a concave function on $D=\{(x,y): x\ge 0,\ y\ge 0\}$. Thus for any point $(x_0,y_0)$ with $x_0>0$ and $y_0>0$, there exist two real numbers $a$ and $b$ such that
Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain
$$\mathrm{Ch}\{|\xi_i-\xi|\ge\varepsilon\} = \mathrm{Cr}\{|\xi_i-\xi|\ge\varepsilon\} = \frac{i}{2i+1} \to \frac{1}{2}.$$
That is, the sequence {ξi } does not converge in chance to ξ.
$$\mathrm{Ch}\{|\xi_i-\xi|\ge\varepsilon\} = \Pr\{|\xi_i-\xi|\ge\varepsilon\} \to 0$$
as i → ∞. That is, the sequence {ξi} converges in chance to ξ. However, for any ω ∈ [0, 1], there are infinitely many intervals of the form [k/2^j, (k + 1)/2^j] containing ω. Thus ξi(θ, ω) does not converge to 0. In other words, the sequence {ξi} does not converge a.s. to ξ.
Proof: It follows from the Markov inequality that for any given number
ε > 0, we have
$$\mathrm{Ch}\{|\xi_i-\xi|\ge\varepsilon\} \le \frac{E[|\xi_i-\xi|]}{\varepsilon} \to 0$$
as i → ∞. Thus {ξi } converges in chance to ξ. The theorem is proved.
Since Ch{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf i→∞ Φi (x) for any
z < x. Letting z → x, we get
It follows from (3.55) and (3.56) that Φi (x) → Φ(x). The theorem is proved.
Ch{|ξi − ξ| ≥ ε} = Cr{|ξi − ξ| ≥ ε} = 1.
It is clear that Φi (x) does not converge to Φ(x) at x > 0. That is, the
sequence {ξi } does not converge in distribution to ξ.
$$\mathrm{Ch}\{A|B\} \le \frac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}}. \tag{3.57}$$
$$\mathrm{Ch}\{A|B\} = 1-\mathrm{Ch}\{A^c|B\} \ge 1-\frac{\mathrm{Ch}\{A^c\cap B\}}{\mathrm{Ch}\{B\}}. \tag{3.58}$$
$$0 \le 1-\frac{\mathrm{Ch}\{A^c\cap B\}}{\mathrm{Ch}\{B\}} \le \frac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}} \le 1. \tag{3.59}$$
Definition 3.21 (Li and Liu [110]) Let (Θ, P, Cr) × (Ω, A, Pr) be a chance
space and A, B two events. Then the conditional chance measure of A given
B is defined by
$$\mathrm{Ch}\{A|B\} = \begin{cases} \dfrac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}}, & \text{if } \dfrac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}} < 0.5 \\[3mm] 1-\dfrac{\mathrm{Ch}\{A^c\cap B\}}{\mathrm{Ch}\{B\}}, & \text{if } \dfrac{\mathrm{Ch}\{A^c\cap B\}}{\mathrm{Ch}\{B\}} < 0.5 \\[3mm] 0.5, & \text{otherwise} \end{cases} \tag{3.60}$$
$$1-\frac{\mathrm{Ch}\{A^c\cap B\}}{\mathrm{Ch}\{B\}} \le \mathrm{Ch}\{A|B\} \le \frac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}}. \tag{3.61}$$
Furthermore, it is clear that the conditional chance measure obeys the max-
imum uncertainty principle.
Remark 3.7: Let X and Y be events in the credibility space. Then the
conditional chance measure of X × Ω given Y × Ω is
$$\mathrm{Ch}\{X\times\Omega \mid Y\times\Omega\} = \begin{cases} \dfrac{\mathrm{Cr}\{X\cap Y\}}{\mathrm{Cr}\{Y\}}, & \text{if } \dfrac{\mathrm{Cr}\{X\cap Y\}}{\mathrm{Cr}\{Y\}} < 0.5 \\[3mm] 1-\dfrac{\mathrm{Cr}\{X^c\cap Y\}}{\mathrm{Cr}\{Y\}}, & \text{if } \dfrac{\mathrm{Cr}\{X^c\cap Y\}}{\mathrm{Cr}\{Y\}} < 0.5 \\[3mm] 0.5, & \text{otherwise} \end{cases}$$
Remark 3.8: Let X and Y be events in the probability space. Then the
conditional chance measure of Θ × X given Θ × Y is
$$\mathrm{Ch}\{\Theta\times X \mid \Theta\times Y\} = \frac{\Pr\{X\cap Y\}}{\Pr\{Y\}}$$
which is just the conditional probability of X given Y .
Theorem 3.37 (Li and Liu [110]) Conditional chance measure is normal,
increasing, self-dual and countably subadditive.
$$0.5 < \frac{\mathrm{Ch}\{A_1\cap B\}}{\mathrm{Ch}\{B\}} \le \frac{\mathrm{Ch}\{A_2\cap B\}}{\mathrm{Ch}\{B\}},$$
then we have
$$\mathrm{Ch}\{A_1|B\} = \left(1-\frac{\mathrm{Ch}\{A_1^c\cap B\}}{\mathrm{Ch}\{B\}}\right)\vee 0.5 \le \left(1-\frac{\mathrm{Ch}\{A_2^c\cap B\}}{\mathrm{Ch}\{B\}}\right)\vee 0.5 = \mathrm{Ch}\{A_2|B\}.$$
$$\frac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}} \ge 0.5, \qquad \frac{\mathrm{Ch}\{A^c\cap B\}}{\mathrm{Ch}\{B\}} \ge 0.5,$$
$$\frac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}} < 0.5 < \frac{\mathrm{Ch}\{A^c\cap B\}}{\mathrm{Ch}\{B\}},$$
then we have
$$\mathrm{Ch}\{A|B\} + \mathrm{Ch}\{A^c|B\} = \frac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}} + \left(1-\frac{\mathrm{Ch}\{A\cap B\}}{\mathrm{Ch}\{B\}}\right) = 1.$$
That is, Ch{·|B} is self-dual. Finally, for any countable sequence {Ai } of
events, if Ch{Ai |B} < 0.5 for all i, it follows from the countable subadditivity
of chance measure that
$$\mathrm{Ch}\left\{\left.\bigcup_{i=1}^\infty A_i\,\right|B\right\} \le \frac{\mathrm{Ch}\left\{\bigcup_{i=1}^\infty A_i\cap B\right\}}{\mathrm{Ch}\{B\}} \le \frac{\displaystyle\sum_{i=1}^\infty\mathrm{Ch}\{A_i\cap B\}}{\mathrm{Ch}\{B\}} = \sum_{i=1}^\infty\mathrm{Ch}\{A_i|B\}.$$
If Ch{∪i Ai |B} > 0.5, we may prove the above inequality by the following
facts:
$$A_1^c\cap B \subset \left(\bigcup_{i=2}^\infty(A_i\cap B)\right)\cup\left(\bigcap_{i=1}^\infty A_i^c\cap B\right),$$
$$\mathrm{Ch}\{A_1^c\cap B\} \le \sum_{i=2}^\infty\mathrm{Ch}\{A_i\cap B\} + \mathrm{Ch}\left\{\bigcap_{i=1}^\infty A_i^c\cap B\right\},$$
$$\mathrm{Ch}\left\{\left.\bigcup_{i=1}^\infty A_i\,\right|B\right\} = 1 - \frac{\mathrm{Ch}\left\{\bigcap_{i=1}^\infty A_i^c\cap B\right\}}{\mathrm{Ch}\{B\}},$$
$$\sum_{i=1}^\infty\mathrm{Ch}\{A_i|B\} \ge 1 - \frac{\mathrm{Ch}\{A_1^c\cap B\}}{\mathrm{Ch}\{B\}} + \frac{\displaystyle\sum_{i=2}^\infty\mathrm{Ch}\{A_i\cap B\}}{\mathrm{Ch}\{B\}}.$$
If there are at least two terms greater than 0.5, then the countable subad-
ditivity is clearly true. Thus Ch{·|B} is countably subadditive. Hence the
theorem is verified.
Definition 3.22 Let ξ be a hybrid variable on (Θ, P, Cr) × (Ω, A, Pr). A
conditional hybrid variable of ξ given B is a measurable function ξ|B from
the conditional chance space to the set of real numbers such that
ξ|B (θ, ω) ≡ ξ(θ, ω), ∀(θ, ω) ∈ Θ × Ω. (3.62)
Definition 3.23 (Li and Liu [110]) The conditional chance distribution $\Phi: \Re\to[0,1]$ of a hybrid variable ξ given B is defined by
Φ(x|B) = Ch {ξ ≤ x|B} (3.63)
provided that Ch{B} > 0.
Definition 3.25 (Li and Liu [110]) Let ξ be a hybrid variable. Then the
conditional expected value of ξ given B is defined by
$$E[\xi|B] = \int_0^{+\infty}\mathrm{Ch}\{\xi\ge r|B\}\,dr - \int_{-\infty}^0\mathrm{Ch}\{\xi\le r|B\}\,dr \tag{3.66}$$
Uncertainty Theory
Many surveys have shown that the measure of a union of events is not necessarily the maximum of the individual measures unless those events are independent. This fact implies that many subjectively uncertain systems do not behave like fuzziness, and the membership function is no longer helpful. In order to deal with this type of subjective uncertainty, Liu [138] founded uncertainty theory in 2007, a branch of mathematics based on the normality, monotonicity, self-duality, countable subadditivity, and product measure axioms.
The emphasis in this chapter is mainly on uncertain measure, uncer-
tainty space, uncertain variable, identification function, uncertainty distri-
bution, expected value, variance, moments, independence, identical distribu-
tion, critical values, entropy, distance, convergence almost surely, convergence
in measure, convergence in mean, convergence in distribution, and conditional
uncertainty.
For any countable sequence of events {Λi}, we have
$$\mathcal{M}\left\{\bigcup_{i=1}^\infty\Lambda_i\right\} \le \sum_{i=1}^\infty\mathcal{M}\{\Lambda_i\}. \tag{4.1}$$
Definition 4.1 (Liu [138]) The set function M is called an uncertain mea-
sure if it satisfies the normality, monotonicity, self-duality, and countable
subadditivity axioms.
Example 4.1: Let Γ = {γ1 , γ2 , γ3 }. For this case, there are only 8 events.
Define
M{γ1 } = 0.6, M{γ2 } = 0.3, M{γ3 } = 0.2,
M{γ1 , γ2 } = 0.8, M{γ1 , γ3 } = 0.7, M{γ2 , γ3 } = 0.4,
M{∅} = 0, M{Γ} = 1.
Then M is an uncertain measure because it satisfies the four axioms.
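The four axioms can be verified mechanically for this example by enumerating all eight events (countable subadditivity reduces to pairwise subadditivity on a finite space):

```python
# Mechanical verification of the four axioms for the uncertain measure of
# Example 4.1 over Gamma = {gamma1, gamma2, gamma3}.

gammas = frozenset({'g1', 'g2', 'g3'})
M = {frozenset(): 0.0, gammas: 1.0,
     frozenset({'g1'}): 0.6, frozenset({'g2'}): 0.3, frozenset({'g3'}): 0.2,
     frozenset({'g1', 'g2'}): 0.8, frozenset({'g1', 'g3'}): 0.7,
     frozenset({'g2', 'g3'}): 0.4}

events = list(M)
ok_norm = M[gammas] == 1.0                                            # normality
ok_mono = all(M[a] <= M[b] for a in events for b in events if a <= b) # monotonicity
ok_dual = all(abs(M[a] + M[gammas - a] - 1.0) < 1e-9 for a in events) # self-duality
ok_sub = all(M[a | b] <= M[a] + M[b] + 1e-9                           # subadditivity
             for a in events for b in events)
```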
Example 4.2: Suppose that λ(x) is a nonnegative function and ρ(x) is a nonnegative and integrable function on ℜ. Then the set function
$$\mathcal{M}\{\Lambda\} = \begin{cases} \displaystyle\sup_{x\in\Lambda}\lambda(x)+\int_\Lambda\rho(x)\,dx, & \text{if } \displaystyle\sup_{x\in\Lambda}\lambda(x)+\int_\Lambda\rho(x)\,dx < 0.5 \\[3mm] 1-\displaystyle\sup_{x\in\Lambda^c}\lambda(x)-\int_{\Lambda^c}\rho(x)\,dx, & \text{if } \displaystyle\sup_{x\in\Lambda^c}\lambda(x)+\int_{\Lambda^c}\rho(x)\,dx < 0.5 \\[3mm] 0.5, & \text{otherwise} \end{cases} \tag{4.5}$$
for any Borel set Λ of real numbers is an uncertain measure.
0 ≤ M{Λ} ≤ 1 (4.8)
for any event Λ.
Proof: It follows from the normality and self-duality axioms that M{∅} =
1 − M{Γ} = 1 − 1 = 0. It follows from the monotonicity axiom that 0 ≤
M{Λ} ≤ 1 because ∅ ⊂ Λ ⊂ Γ.
Theorem 4.2 Suppose that M is an uncertain measure. Then for any events Λ1 and Λ2, we have
$$\mathcal{M}\{\Lambda_1\}\vee\mathcal{M}\{\Lambda_2\} \le \mathcal{M}\{\Lambda_1\cup\Lambda_2\} \le \mathcal{M}\{\Lambda_1\}+\mathcal{M}\{\Lambda_2\}.$$
Proof: The left-hand inequality follows from the monotonicity axiom and
the right-hand inequality follows from the countable subadditivity axiom im-
mediately.
Theorem 4.3 Suppose that M is an uncertain measure. Then for any events Λ1 and Λ2, we have
$$\mathcal{M}\{\Lambda_1\}+\mathcal{M}\{\Lambda_2\}-1 \le \mathcal{M}\{\Lambda_1\cap\Lambda_2\} \le \mathcal{M}\{\Lambda_1\}\wedge\mathcal{M}\{\Lambda_2\}.$$
Proof: The right-hand inequality follows from the monotonicity axiom and
the left-hand inequality follows from the self-duality and countable subaddi-
tivity axioms, i.e.,
$$\lim_{i\to\infty}\mathcal{M}\{\Lambda\cup\Lambda_i\} = \lim_{i\to\infty}\mathcal{M}\{\Lambda\setminus\Lambda_i\} = \mathcal{M}\{\Lambda\}. \tag{4.12}$$
Remark 4.4: It follows from the above theorem that the uncertain measure
is null-additive, i.e., M{Λ1 ∪ Λ2 } = M{Λ1 } + M{Λ2 } if either M{Λ1 } = 0
or M{Λ2 } = 0. In other words, the uncertain measure remains unchanged if
the event is enlarged or reduced by an event with measure zero.
Proof: Suppose there are at most two elements, say γ1 and γ2 , taking
nonzero uncertain measure values. Let Λ1 and Λ2 be two disjoint events.
The argument breaks down into two cases.
Case 1: M{Λ1 } = 0 or M{Λ2 } = 0. For this case, we may verify that
M{Λ1 ∪ Λ2 } = M{Λ1 } + M{Λ2 } by using the countable subadditivity axiom.
Case 2: M{Λ1} > 0 and M{Λ2} > 0. For this case, without loss of generality, we suppose that γ1 ∈ Λ1 and γ2 ∈ Λ2. Note that M{(Λ1 ∪ Λ2)^c} = 0. It
follows from the self-duality axiom and uncertainty null-additivity theorem
that
M{Λ1 ∪ Λ2 } = M{Λ1 ∪ Λ2 ∪ (Λ1 ∪ Λ2 )c } = M{Γ} = 1,
M{Λ1 } + M{Λ2 } = M{Λ1 ∪ (Λ1 ∪ Λ2 )c } + M{Λ2 } = 1.
Hence M{Λ1 ∪ Λ2 } = M{Λ1 } + M{Λ2 }. The additivity is proved. Hence such
an uncertain measure is identical with probability measure. Furthermore, it
follows from Theorem 2.6 that it is also identical with credibility measure.
$$\lim_{i\to\infty}\mathcal{M}\{\Lambda_i\} < 1, \quad \text{if } \Lambda_i\downarrow\emptyset. \tag{4.14}$$
Example 4.5: Assume Γ is the set of real numbers. Let α be a number with
0 < α ≤ 0.5. Define a set function as follows,
$$\mathcal{M}\{\Lambda\} = \begin{cases} 0, & \text{if } \Lambda=\emptyset \\ \alpha, & \text{if } \Lambda \text{ is upper bounded} \\ 0.5, & \text{if both } \Lambda \text{ and } \Lambda^c \text{ are upper unbounded} \\ 1-\alpha, & \text{if } \Lambda^c \text{ is upper bounded} \\ 1, & \text{if } \Lambda=\Gamma. \end{cases} \tag{4.15}$$
Uncertainty Space
Definition 4.2 (Liu [138]) Let Γ be a nonempty set, L a σ-algebra over
Γ, and M an uncertain measure. Then the triplet (Γ, L, M) is called an
uncertainty space.
Γ = Γ1 × Γ2 × · · · × Γn, L = L1 × L2 × · · · × Ln.
Section 4.1 - Uncertainty Space 159
and
sup_{Λ1×Λ2×···×Λn ⊂ Λc} min_{1≤k≤n} Mk{Λk} ≤ 0.5.
It follows from (4.16) that M{Λ} = M{Λc } = 0.5 which proves the self-
duality.
Step 4: Let us prove that M is increasing. Suppose Λ and ∆ are two
events in L with Λ ⊂ ∆. The argument breaks down into three cases.
Case 1: Assume
sup_{Λ1×Λ2×···×Λn ⊂ Λ} min_{1≤k≤n} Mk{Λk} > 0.5.
Then, since every rectangle contained in Λ is also contained in ∆, we have
M{Λ} = sup_{Λ1×···×Λn ⊂ Λ} min_{1≤k≤n} Mk{Λk} ≤ sup_{∆1×···×∆n ⊂ ∆} min_{1≤k≤n} Mk{∆k} = M{∆}.
Case 2: Assume
sup_{∆1×∆2×···×∆n ⊂ ∆c} min_{1≤k≤n} Mk{∆k} > 0.5.
Then, since ∆c ⊂ Λc, every rectangle contained in ∆c is also contained in Λc. Thus
M{Λ} = 1 − sup_{Λ1×···×Λn ⊂ Λc} min_{1≤k≤n} Mk{Λk} ≤ 1 − sup_{∆1×···×∆n ⊂ ∆c} min_{1≤k≤n} Mk{∆k} = M{∆}.
Case 3: Assume
sup_{Λ1×Λ2×···×Λn ⊂ Λ} min_{1≤k≤n} Mk{Λk} ≤ 0.5
and
sup_{∆1×∆2×···×∆n ⊂ ∆c} min_{1≤k≤n} Mk{∆k} ≤ 0.5.
Then
M{Λ} ≤ 0.5 ≤ 1 − M{∆c} = M{∆}.
Step 5: Finally, we prove the countable subadditivity of M. For sim-
plicity, we only prove the case of two events Λ and ∆. The argument breaks
down into three cases. Case 1: Assume M{Λ} < 0.5 and M{∆} < 0.5. For
any given ε > 0, there are two rectangles Λ1 × Λ2 × · · · × Λn ⊂ Λc and
∆1 × ∆2 × · · · × ∆n ⊂ ∆c such that
1 − min_{1≤k≤n} Mk{Λk} ≤ M{Λ} + ε/2,
1 − min_{1≤k≤n} Mk{∆k} ≤ M{∆} + ε/2.
Note that (Λ1 ∩ ∆1) × (Λ2 ∩ ∆2) × · · · × (Λn ∩ ∆n) ⊂ (Λ ∪ ∆)c. It follows from the self-duality and subadditivity of each Mk that
M{Λ ∪ ∆} ≤ 1 − min_{1≤k≤n} Mk{Λk ∩ ∆k} ≤ (1 − min_{1≤k≤n} Mk{Λk}) + (1 − min_{1≤k≤n} Mk{∆k})
≤ M{Λ} + M{∆} + ε.
Letting ε → 0, we obtain M{Λ ∪ ∆} ≤ M{Λ} + M{∆}. Case 2: Assume
M{Λ} ≥ 0.5 and M{∆} < 0.5. When M{Λ ∪ ∆} = 0.5, the subadditivity is
obvious. Now we consider the case M{Λ ∪ ∆} > 0.5, i.e., M{Λc ∩ ∆c } < 0.5.
By using Λc ∪ ∆ = (Λc ∩ ∆c ) ∪ ∆ and Case 1, we get
M{Λc ∪ ∆} ≤ M{Λc ∩ ∆c } + M{∆}.
Thus
M{Λ ∪ ∆} = 1 − M{Λc ∩ ∆c } ≤ 1 − M{Λc ∪ ∆} + M{∆}
≤ 1 − M{Λc } + M{∆} = M{Λ} + M{∆}.
Case 3: If both M{Λ} ≥ 0.5 and M{∆} ≥ 0.5, then the subadditivity is
obvious because M{Λ} + M{∆} ≥ 1. The theorem is proved.
Definition 4.3 Let (Γk , Lk , Mk ), k = 1, 2, · · · , n be uncertainty spaces, Γ =
Γ1 × Γ2 × · · · × Γn , L = L1 × L2 × · · · × Ln , and M = M1 ∧ M2 ∧ · · · ∧ Mn .
Then (Γ, L, M) is called the product uncertainty space of (Γk , Lk , Mk ), k =
1, 2, · · · , n.
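The product measure M = M1 ∧ M2 ∧ · · · ∧ Mn is evaluated by the three-case rule established in the proof above: take the supremum of min_k Mk{Λk} over rectangles inside the event if it exceeds 0.5, one minus the supremum over rectangles inside the complement if that exceeds 0.5, and 0.5 otherwise. A minimal sketch, with the two suprema supplied as plain numbers (function and argument names are illustrative):

```python
# The two arguments are sup min_k Mk{Λk} over rectangles contained in the
# event and in its complement, respectively.
def product_measure(sup_in_event: float, sup_in_complement: float) -> float:
    if sup_in_event > 0.5:
        return sup_in_event
    if sup_in_complement > 0.5:
        return 1.0 - sup_in_complement
    return 0.5

assert product_measure(0.8, 0.1) == 0.8            # first case
assert abs(product_measure(0.2, 0.7) - 0.3) < 1e-12  # second case, via duality
assert product_measure(0.4, 0.4) == 0.5            # neither supremum exceeds 0.5
```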
is an event.
M{ξ ≠ x1, ξ ≠ x2, · · · , ξ ≠ xm} = 0; (4.18)
M{ξ ≠ x1, ξ ≠ x2, · · · } = 0. (4.19)
Uncertain Vector
Definition 4.7 An n-dimensional uncertain vector is a measurable function
from an uncertainty space (Γ, L, M) to the set of n-dimensional real vectors,
i.e., for any Borel set B of <n , the set
{ξ ∈ B} = {γ ∈ Γ | ξ(γ) ∈ B} (4.20)
is an event.
is an event. Next, the class B is a σ-algebra of <n because (i) we have <n ∈ B
since {ξ ∈ <n } = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and
{ξ ∈ B c } = {ξ ∈ B}c
Uncertain Arithmetic
Definition 4.8 Suppose that f : <n → < is a measurable function, and
ξ1 , ξ2 , · · · , ξn uncertain variables on the uncertainty space (Γ, L, M). Then
ξ = f (ξ1 , ξ2 , · · · , ξn ) is an uncertain variable defined as
Example 4.7: Let ξ1 and ξ2 be two uncertain variables. Then the sum
ξ = ξ1 + ξ2 is an uncertain variable defined by
Proof: Let < be the universal set. For each set B of real numbers, we define
a set function
M{B} =
    sup_{x∈B} λ(x),       if sup_{x∈B} λ(x) < 0.5
    1 − sup_{x∈Bc} λ(x),  if sup_{x∈Bc} λ(x) < 0.5
    0.5,                  otherwise,
and, for the second identification function ρ,
M{ξ ∈ B} =
    ∫_B ρ(x)dx,          if ∫_B ρ(x)dx < 0.5
    1 − ∫_{Bc} ρ(x)dx,   if ∫_{Bc} ρ(x)dx < 0.5      (4.29)
    0.5,                 otherwise.
[Figure: graphs of the second identification functions ρ1(x) and ρ2(x)]
Proof: Let < be the universal set. For each Borel set B of real numbers, we
define a set function
M{ξ ∈ B} =
    ∫_B ρ(x)dx,                            if ∫_B ρ(x)dx < 0.5
    1 − sup_{x∈Bc} λ(x) − ∫_{Bc} ρ(x)dx,   if sup_{x∈Bc} λ(x) + ∫_{Bc} ρ(x)dx < 0.5      (4.36)
    0.5,                                   otherwise.
Proof: Let < be the universal set. For each Borel set B of real numbers, we
define a set function
M{ξ ∈ B} =
    sup_{x∈B} λ(x) + ∫_B ρ(x)dx,           if sup_{x∈B} λ(x) + ∫_B ρ(x)dx < 0.5
    1 − sup_{x∈Bc} λ(x) − ∫_{Bc} ρ(x)dx,   if sup_{x∈Bc} λ(x) + ∫_{Bc} ρ(x)dx < 0.5
    0.5,                                   otherwise.
M{ξ ∈ B} =
    sup_{xi∈B} λ(xi) + Σ_{xi∈B} ρ(xi),          if sup_{xi∈B} λ(xi) + Σ_{xi∈B} ρ(xi) < 0.5
    1 − sup_{xi∈Bc} λ(xi) − Σ_{xi∈Bc} ρ(xi),    if sup_{xi∈Bc} λ(xi) + Σ_{xi∈Bc} ρ(xi) < 0.5
    0.5,                                        otherwise.
Φ(x) = M {ξ ≤ x} . (4.37)
Section 4.4 - Uncertainty Distribution 171
follows,
M{Λ} =
    0,      if Λ = ∅
    c,      if Λ is upper bounded
    0.5,    if both Λ and Λc are upper unbounded
    1 − c,  if Λc is upper bounded
    1,      if Λ = Γ.
Definition 4.14 (Liu [138]) The uncertainty density function φ: < → [0, +∞)
of an uncertain variable ξ is a function such that
Φ(x) = ∫_{−∞}^{x} φ(y)dy, ∀x ∈ <, (4.39)
∫_{−∞}^{+∞} φ(y)dy = 1 (4.40)
Example 4.22: The uncertainty density function may not exist even if the
uncertainty distribution is continuous and differentiable a.e. Suppose f is the
Cantor function, and set
Φ(x) =
    0,     if x < 0
    f(x),  if 0 ≤ x ≤ 1      (4.41)
    1,     if x > 1.
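Example 4.22 is easy to probe numerically: the Cantor function is continuous with zero derivative almost everywhere, so no density φ can reproduce Φ through (4.39). The sketch below uses the standard iterative evaluation of the Cantor function (an illustrative implementation, not from the text) and checks two known values, f(1/3) = 1/2 and f(1/9) = 1/4.

```python
def cantor(x: float, depth: int = 50) -> float:
    """Iteratively evaluate the Cantor function f on [0, 1]."""
    value, step = 0.0, 0.5
    for _ in range(depth):
        if x < 1/3:
            x *= 3                 # left third: contributes nothing yet
        elif x > 2/3:
            value += step          # right third: add the current step
            x = 3 * x - 2
        else:
            return value + step    # middle third: f is flat here
        step /= 2
    return value

def Phi(x: float) -> float:
    """The uncertainty distribution (4.41)."""
    return 0.0 if x < 0 else (cantor(x) if x <= 1 else 1.0)

assert abs(Phi(1/3) - 0.5) < 1e-9
assert abs(Phi(1/9) - 0.25) < 1e-9
```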
Section 4.5 - Independence 173
Proof: The first part follows immediately from the definition. In addition,
by the self-duality of uncertain measure, we have
M{ξ ≥ x} = 1 − M{ξ < x} = ∫_{−∞}^{+∞} φ(y)dy − ∫_{−∞}^{x} φ(y)dy = ∫_{x}^{+∞} φ(y)dy.
4.5 Independence
Definition 4.17 (Liu [141]) The uncertain variables ξ1 , ξ2 , · · · , ξm are said
to be independent if
M{ ∩_{i=1}^{m} {ξi ∈ Bi} } = min_{1≤i≤m} M{ξi ∈ Bi} (4.43)
= 1 − min_{1≤i≤m} M{ξi ∈ Bic} = max_{1≤i≤m} M{ξi ∈ Bi}.
Thus the proof is complete.
M{ξ ∈ B} =
    sup_{f(B1,B2,···,Bn)⊂B} min_{1≤k≤n} Mk{ξk ∈ Bk},        if sup_{f(B1,B2,···,Bn)⊂B} min_{1≤k≤n} Mk{ξk ∈ Bk} > 0.5
    1 − sup_{f(B1,B2,···,Bn)⊂Bc} min_{1≤k≤n} Mk{ξk ∈ Bk},   if sup_{f(B1,B2,···,Bn)⊂Bc} min_{1≤k≤n} Mk{ξk ∈ Bk} > 0.5
    0.5,                                                    otherwise.
ξ + η = (a1 + b1 , a2 + b2 ).
ξ + η = (a1 + b1 , a2 + b2 , a3 + b3 ).
Then their sum is also an exponential uncertain variable with second identi-
fication function
ρ(x) =
    (1/(a + b)) exp(−x/(a + b)),  if x ≥ 0
    0,                            if x < 0.
Definition 4.20 (Liu [138]) Let ξ be an uncertain variable. Then the ex-
pected value of ξ is defined by
E[ξ] = ∫_{0}^{+∞} M{ξ ≥ r}dr − ∫_{−∞}^{0} M{ξ ≤ r}dr (4.46)
E[ξ] = Σ_{i=1}^{m} wi xi (4.48)
λ 1 ≤ λ2 ≤ · · · ≤ λ k and λk ≥ λk+1 ≥ · · · ≥ λm .
Then the expected value is determined by (4.48) and the weights are given
by
wi =
    λ1,                if i = 1
    λi − λi−1,         if i = 2, 3, · · · , k − 1
    1 − λk−1 − λk+1,   if i = k
    λi − λi+1,         if i = k + 1, k + 2, · · · , m − 1
    λm,                if i = m.
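A sketch of this weight construction, assuming 1 < k < m as the case analysis requires; the list is 0-based while the text's indices are 1-based, and the sample values of λi are illustrative.

```python
def weights(lam, k):
    """Weights wi from λ1,...,λm (lam is 0-based) with peak index k (1-based)."""
    m = len(lam)
    w = []
    for i in range(1, m + 1):
        if i == 1:
            w.append(lam[0])
        elif i < k:
            w.append(lam[i - 1] - lam[i - 2])
        elif i == k:
            w.append(1 - lam[k - 2] - lam[k])
        elif i < m:
            w.append(lam[i - 1] - lam[i])
        else:
            w.append(lam[m - 1])
    return w

lam = [0.2, 0.4, 0.9, 0.3, 0.1]        # λ1 ≤ λ2 ≤ λ3 and λ3 ≥ λ4 ≥ λ5
w = weights(lam, k=3)
assert abs(sum(w) - 1.0) < 1e-9        # the weights telescope to 1
E = sum(wi * xi for wi, xi in zip(w, [1, 2, 3, 4, 5]))   # formula (4.48)
```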
E[ξ] = α + (α(β − α)/β) ln(2α/(2α − β)) + (α/2) ln((2α − β)/β).
for i = 1, 2, · · · , m.
for i = 1, 2, · · · , m.
Proof: It follows from the definition of expected value operator and Fubini
Theorem that
E[ξ] = ∫_{0}^{+∞} M{ξ ≥ r}dr − ∫_{−∞}^{0} M{ξ ≤ r}dr
= ∫_{0}^{+∞} ( ∫_{r}^{+∞} φ(x)dx ) dr − ∫_{−∞}^{0} ( ∫_{−∞}^{r} φ(x)dx ) dr
= ∫_{0}^{+∞} ( ∫_{0}^{x} φ(x)dr ) dx − ∫_{−∞}^{0} ( ∫_{x}^{0} φ(x)dr ) dx
= ∫_{0}^{+∞} xφ(x)dx + ∫_{−∞}^{0} xφ(x)dx
= ∫_{−∞}^{+∞} xφ(x)dx.
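A numeric check of this Fubini computation for an illustrative uncertainty density, φ(x) = 0.5 on [0, 2] (so that M{ξ ≥ r} = (2 − r)/2 for 0 ≤ r ≤ 2): both sides of the identity should equal 1.

```python
N = 100_000
dr = 2.0 / N
# ∫0^∞ M{ξ ≥ r} dr with M{ξ ≥ r} = ∫_r^∞ φ(x) dx = (2 − r)/2 on [0, 2]
lhs = sum(((2 - (i + 0.5) * dr) / 2) * dr for i in range(N))
# ∫ x φ(x) dx = ∫0^2 0.5 x dx
rhs = sum(((i + 0.5) * dr) * 0.5 * dr for i in range(N))
assert abs(lhs - 1.0) < 1e-6 and abs(rhs - 1.0) < 1e-6
```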
Proof: Since the Lebesgue-Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, we immediately have
lim_{y→+∞} ∫_{0}^{y} x dΦ(x) = ∫_{0}^{+∞} x dΦ(x),   lim_{y→−∞} ∫_{y}^{0} x dΦ(x) = ∫_{−∞}^{0} x dΦ(x)
and
lim_{y→+∞} ∫_{y}^{+∞} x dΦ(x) = 0,   lim_{y→−∞} ∫_{−∞}^{y} x dΦ(x) = 0.
It follows from
∫_{y}^{+∞} x dΦ(x) ≥ y ( lim_{z→+∞} Φ(z) − Φ(y) ) = y (1 − Φ(y)) ≥ 0, for y > 0,
∫_{−∞}^{y} x dΦ(x) ≤ y ( Φ(y) − lim_{z→−∞} Φ(z) ) = yΦ(y) ≤ 0, for y < 0
that
lim_{y→+∞} y (1 − Φ(y)) = 0,   lim_{y→−∞} yΦ(y) = 0.
Let 0 = x0 < x1 < x2 < · · · < xn = y be a partition of [0, y]. Then we have
Σ_{i=0}^{n−1} xi (Φ(xi+1) − Φ(xi)) → ∫_{0}^{y} x dΦ(x)
and
Σ_{i=0}^{n−1} (1 − Φ(xi+1))(xi+1 − xi) → ∫_{0}^{y} M{ξ ≥ r}dr
as max{|xi+1 − xi| : i = 0, 1, · · · , n − 1} → 0. Since
Σ_{i=0}^{n−1} xi (Φ(xi+1) − Φ(xi)) − Σ_{i=0}^{n−1} (1 − Φ(xi+1))(xi+1 − xi) = y(Φ(y) − 1) → 0
Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number
b. If b ≥ 0, we have
E[ξ + b] = ∫_{0}^{+∞} M{ξ + b ≥ r}dr − ∫_{−∞}^{0} M{ξ + b ≤ r}dr
= ∫_{0}^{+∞} M{ξ ≥ r − b}dr − ∫_{−∞}^{0} M{ξ ≤ r − b}dr
= E[ξ] + ∫_{0}^{b} (M{ξ ≥ r − b} + M{ξ < r − b}) dr
= E[ξ] + b.
4.9 Variance
The variance of an uncertain variable provides a measure of the spread of the
distribution around its expected value. A small value of variance indicates
that the uncertain variable is tightly concentrated around its expected value;
and a large value of variance indicates that the uncertain variable has a wide
spread around its expected value.
Definition 4.21 (Liu [138]) Let ξ be an uncertain variable with finite ex-
pected value e. Then the variance of ξ is defined by V [ξ] = E[(ξ − e)2 ].
Example 4.39: Let ξ be a rectangular uncertain variable (a, b). Then its
expected value is e = (a + b)/2, and for any positive number r, we have
M{(ξ − e)^2 ≥ r} =
    1/2,  if r ≤ (b − a)^2/4
    0,    if r > (b − a)^2/4.
Thus the variance is
V[ξ] = ∫_{0}^{+∞} M{(ξ − e)^2 ≥ r}dr = ∫_{0}^{(b−a)^2/4} (1/2) dr = (b − a)^2/8.
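The integral above can be confirmed numerically. The sketch below uses illustrative endpoints a = 1, b = 5 and integrates the stated measure of {(ξ − e)^2 ≥ r} by a plain Riemann sum, which should recover (b − a)^2/8 = 2.

```python
a, b = 1.0, 5.0
bound = (b - a) ** 2 / 4       # M{(ξ − e)^2 ≥ r} drops to 0 beyond this r
N = 100_000
dr = bound / N
V = sum(0.5 * dr for _ in range(N))    # ∫0^{(b−a)²/4} (1/2) dr
assert abs(V - (b - a) ** 2 / 8) < 1e-9
```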
[Figure: the normal uncertainty distribution Φ(x), increasing from 0 at −∞ to 1 at +∞ with Φ(e) = 0.5]
has the normal uncertainty distribution Φ; and the uncertain variable with
second identification function
ρ(x) = (π/(√3 σ)) ( 2 + exp(π(e − x)/(√3 σ)) + exp(π(x − e)/(√3 σ)) )^{−1}
has also the normal uncertainty distribution Φ. It is easy to verify that
the normal uncertain variable ξ has expected value e. It follows from the
definition of variance that
V[ξ] = ∫_{0}^{+∞} M{(ξ − e)^2 ≥ r}dr = ∫_{0}^{+∞} M{(ξ ≥ e + √r) ∪ (ξ ≤ e − √r)}dr.
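As a numeric sanity check, take the normal second identification function to be ρ(x) = (π/(√3σ)) [2 + exp(π(e − x)/(√3σ)) + exp(π(x − e)/(√3σ))]^{−1} (an assumption of this sketch: ρ is the derivative of Φ(x) = (1 + exp(π(e − x)/(√3σ)))^{−1}). The stdlib-only sketch below verifies that ρ integrates to 1 and that Φ(e) = 0.5.

```python
import math

e, sigma = 0.0, 1.0
k = math.pi / (math.sqrt(3) * sigma)

def rho(x):
    # derivative of Phi, written symmetrically in (x − e)
    return k / (2 + math.exp(k * (e - x)) + math.exp(k * (x - e)))

def Phi(x):
    return 1 / (1 + math.exp(k * (e - x)))

N, lo, hi = 200_000, -40.0, 40.0       # the tails beyond ±40 are negligible
dx = (hi - lo) / N
total = sum(rho(lo + (i + 0.5) * dx) * dx for i in range(N))
assert abs(total - 1.0) < 1e-6         # ρ integrates to 1
assert abs(Phi(e) - 0.5) < 1e-12       # Φ(e) = 0.5
```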
Theorem 4.29 Let ξ be an uncertain variable that takes values in [a, b] and
has expected value e. Then
4.10 Moments
Definition 4.22 (Liu [138]) Let ξ be an uncertain variable. Then for any
positive integer k,
(a) the expected value E[ξ k ] is called the kth moment;
(b) the expected value E[|ξ|k ] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])k ] is called the kth central moment;
(d) the expected value E[|ξ −E[ξ]|k ] is called the kth absolute central moment.
Note that the first central moment is always 0, the first moment is just
the expected value, and the second central moment is just the variance.
Thus we have
lim_{x→∞} ∫_{x^t/2}^{+∞} M{|ξ|^t ≥ r}dr = 0
and
∫_{x^t/2}^{+∞} M{|ξ|^t ≥ r}dr ≥ ∫_{x^t/2}^{x^t} M{|ξ|^t ≥ r}dr ≥ (1/2) x^t M{|ξ| ≥ x}.
Theorem 4.32 Let ξ be an uncertain variable that takes values in [a, b] and
has expected value e. Then for any positive integer k, the kth absolute moment
and kth absolute central moment satisfy the following inequalities,
E[|ξ|^k] ≤ ((b − e)/(b − a)) |a|^k + ((e − a)/(b − a)) |b|^k, (4.62)
E[|ξ − e|^k] ≤ ((b − e)/(b − a)) (e − a)^k + ((e − a)/(b − a)) (b − e)^k. (4.63)
Proof: (a) If c = 0, then the part (a) is obvious. In the case of c > 0, we
have
(cξ)sup(α) = sup{r | M{cξ ≥ r} ≥ α}
Similarly, we may prove that (−ξ)inf (α) = −ξsup (α). The theorem is proved.
Proof: Part (a): Write ξ̄(α) = (ξinf(α) + ξsup(α))/2. If ξinf(α) < ξsup(α),
then we have
1 ≤ M{ξ ≤ ξ̄(α)} + M{ξ ≥ ξ̄(α)} < α + α ≤ 1,
a contradiction.
4.12 Entropy
This section provides a definition of entropy to characterize the uncertainty
of uncertain variables resulting from information deficiency.
Its entropy is
H[ξ] = − ∫_{a}^{b} (0.5 ln 0.5 + (1 − 0.5) ln(1 − 0.5)) dx = (b − a) ln 2.
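A quick numeric restatement of this example with illustrative endpoints a = 2, b = 5: the integrand S(t) = −t ln t − (1 − t) ln(1 − t) is evaluated at t = 0.5 on the whole interval, which gives H[ξ] = (b − a) ln 2.

```python
import math

def S(t):
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

a, b = 2.0, 5.0
H = (b - a) * S(0.5)                       # the integrand is constant on (a, b)
assert abs(H - (b - a) * math.log(2)) < 1e-12
# S attains its maximum ln 2 at t = 0.5 (checked on a grid)
assert S(0.5) >= max(S(t / 100) for t in range(1, 100))
```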
Proof: The theorem follows from the fact that the function S(t) reaches its
maximum ln 2 at t = 0.5.
4.13 Distance
Definition 4.26 (Liu [138]) The distance between uncertain variables ξ and
η is defined as
d(ξ, η) = E[|ξ − η|]. (4.72)
d(ξ, η) = (1/2)(|a1 − b2| + |b1 − a2|).
d(ξ, η) = (1/4)(|a1 − c2| + 2|b1 − b2| + |c1 − a2|).
d(ξ, η) = (1/4)(|a1 − d2| + |b1 − c2| + |c1 − b2| + |d1 − a2|).
Section 4.14 - Inequalities 191
Theorem 4.40 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the dis-
tance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ ) + 2d(η, τ ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the countable subadditivity axiom
that
d(ξ, η) = ∫_{0}^{+∞} M{|ξ − η| ≥ r} dr
≤ ∫_{0}^{+∞} M{|ξ − τ| + |τ − η| ≥ r} dr
≤ ∫_{0}^{+∞} M{{|ξ − τ| ≥ r/2} ∪ {|τ − η| ≥ r/2}} dr
≤ ∫_{0}^{+∞} (M{|ξ − τ| ≥ r/2} + M{|τ − η| ≥ r/2}) dr
= ∫_{0}^{+∞} M{|ξ − τ| ≥ r/2} dr + ∫_{0}^{+∞} M{|τ − η| ≥ r/2} dr
= 2d(ξ, τ) + 2d(τ, η).
It is easy to verify that d(ξ, τ ) = d(τ, η) = 1/2 and d(ξ, η) = 3/2. Thus
d(ξ, η) = (3/2)(d(ξ, τ) + d(τ, η)).
4.14 Inequalities
Theorem 4.41 (Liu [138]) Let ξ be an uncertain variable, and f a non-
negative function. If f is even and increasing on [0, ∞), then for any given
number t > 0, we have
= f (t) · M{|ξ| ≥ t}
Theorem 4.44 (Liu [138], Hölder’s Inequality) Let p and q be positive real
numbers with 1/p+1/q = 1, and let ξ and η be strongly independent uncertain
variables with E[|ξ|p ] < ∞ and E[|η|q ] < ∞. Then we have
E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}. (4.76)
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function
f(x, y) = x^{1/p} y^{1/q} is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus
for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real numbers
a and b such that
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function
f(x, y) = (x^{1/p} + y^{1/p})^p is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}.
Thus for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain
Since M{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf i→∞ Φi (x) for any
z < x. Letting z → x, we get
It follows from (4.83) and (4.84) that Φi (x) → Φ(x). The theorem is proved.
M{A|B} ≤ M{A ∩ B}/M{B}. (4.85)
If
M{A1 ∩ B}/M{B} ≤ 0.5 ≤ M{A2 ∩ B}/M{B},
then M{A1|B} ≤ 0.5 ≤ M{A2|B}. If
0.5 < M{A1 ∩ B}/M{B} ≤ M{A2 ∩ B}/M{B},
then we have
M{A1|B} = (1 − M{A1c ∩ B}/M{B}) ∨ 0.5 ≤ (1 − M{A2c ∩ B}/M{B}) ∨ 0.5 = M{A2|B}.
This means that M{·|B} satisfies the monotonicity axiom. For any event A,
if
M{A ∩ B}/M{B} ≥ 0.5 and M{Ac ∩ B}/M{B} ≥ 0.5,
then we have M{A|B} + M{Ac|B} = 0.5 + 0.5 = 1 immediately. Otherwise, without
loss of generality, suppose M{A ∩ B}/M{B} < 0.5; then
M{A|B} + M{Ac|B} = M{A ∩ B}/M{B} + (1 − M{A ∩ B}/M{B}) = 1.
That is, M{·|B} satisfies the self-duality axiom. Finally, for any countable
sequence {Ai } of events, if M{Ai |B} < 0.5 for all i, it follows from the
countable subadditivity axiom that
M{ ∪_{i=1}^{∞} Ai | B } ≤ M{ ∪_{i=1}^{∞} (Ai ∩ B) }/M{B} ≤ Σ_{i=1}^{∞} M{Ai ∩ B}/M{B} = Σ_{i=1}^{∞} M{Ai|B}.
If M{∪i Ai |B} > 0.5, we may prove the above inequality by the following
facts:
A1c ∩ B ⊂ ( ∪_{i=2}^{∞} (Ai ∩ B) ) ∪ ( ∩_{i=1}^{∞} Aic ∩ B ),
M{A1c ∩ B} ≤ Σ_{i=2}^{∞} M{Ai ∩ B} + M{ ∩_{i=1}^{∞} Aic ∩ B },
M{ ∪_{i=1}^{∞} Ai | B } = 1 − M{ ∩_{i=1}^{∞} Aic ∩ B }/M{B},
Σ_{i=1}^{∞} M{Ai|B} ≥ 1 − M{A1c ∩ B}/M{B} + Σ_{i=2}^{∞} M{Ai ∩ B}/M{B}.
If there are at least two terms greater than 0.5, then the countable subad-
ditivity is clearly true. Thus M{·|B} satisfies the countable subadditivity
axiom. Hence M{·|B} is an uncertain measure. Furthermore, (Γ, L, M{·|B})
is an uncertainty space.
M{ξ ∈ B|η = y} =
    M{ξ ∈ B, η = y}/M{η = y},        if M{ξ ∈ B, η = y}/M{η = y} < 0.5
    1 − M{ξ ∈ Bc, η = y}/M{η = y},   if M{ξ ∈ Bc, η = y}/M{η = y} < 0.5
    0.5,                             otherwise
Definition 4.37 (Liu [138]) Let ξ be an uncertain variable. Then the con-
ditional expected value of ξ given B is defined by
Z +∞ Z 0
E[ξ|B] = M{ξ ≥ r|B}dr − M{ξ ≤ r|B}dr (4.94)
0 −∞
Uncertain Process
is an event.
Definition 5.3 For any partition of closed interval [0, t] with 0 = t1 < t2 <
· · · < tk+1 = t, the mesh is written as
∆ = max_{1≤i≤k} |ti+1 − ti|.
202 Chapter 5 - Uncertain Process
provided that the limit exists almost surely and is an uncertain process. Espe-
cially, the m-variation is called total variation if m = 1; and squared variation
if m = 2.
M{ lim_{s→t} Xs = Xt } = 1. (5.6)
[Figure: a sample path of the renewal process Nt, a step function rising from 0 to 4 at the renewal times S1, S2, S3, S4, with interarrival times ξ1, ξ2, ξ3, ξ4]
Nt has at most one renewal at each time. In particular, Nt does not jump at
time 0. Since Nt ≥ n is equivalent to Sn ≤ t, we immediately have
M{Nt/t ≥ r} ≤ M{1/ξ1 ≥ r}
and
M{ lim_{t→∞} Nt/t ≥ r } = M{1/ξ1 ≥ r}
for any real numbers t > 0 and r > 0. It follows from the Lebesgue dominated
convergence theorem that
lim_{t→∞} ∫_{0}^{∞} M{Nt/t ≥ r} dr = ∫_{0}^{∞} M{1/ξ1 ≥ r} dr.
Hence (5.13) holds.
Uncertain Calculus
Uncertain calculus, founded by Liu [139] in 2008 and revised by Liu [141] in
2009, is composed of canonical process, uncertain integral and chain rule.
[Figure: a sample path of a canonical process Ct]
[Figure: a sample path of a canonical process Ct and its first passage of a level x]
∆ = max_{1≤i≤k} |ti+1 − ti|.
provided that the limit exists almost surely and is an uncertain variable.
as ∆ → 0. It follows that
∫_{0}^{s} t dCt = sCs − ∫_{0}^{s} Ct dt.
as ∆ → 0. That is,
∫_{0}^{s} Ct dCt = (1/2) Cs^2.
for any s ≥ 0.
Remark 6.2: The infinitesimal increment dCt in (6.8) may be replaced with
the derived canonical process
Example 6.5: Applying the chain rule, we obtain the following formula
d(tCt) = Ct dt + t dCt.
Hence we have
sCs = ∫_{0}^{s} d(tCt) = ∫_{0}^{s} Ct dt + ∫_{0}^{s} t dCt.
That is,
∫_{0}^{s} t dCt = sCs − ∫_{0}^{s} Ct dt.
Example 6.6: Applying the chain rule, we obtain the following formula
d(Ct^2) = 2Ct dCt.
Then we have
Cs^2 = ∫_{0}^{s} d(Ct^2) = 2 ∫_{0}^{s} Ct dCt.
It follows that
∫_{0}^{s} Ct dCt = (1/2) Cs^2.
Example 6.7: Applying the chain rule, we obtain the following formula
d(Ct^3) = 3Ct^2 dCt.
Thus we get
Cs^3 = ∫_{0}^{s} d(Ct^3) = 3 ∫_{0}^{s} Ct^2 dCt.
That is,
∫_{0}^{s} Ct^2 dCt = (1/3) Cs^3.
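Since sample paths of a canonical process are Lipschitz continuous, identities such as ∫0^s Ct dCt = Cs²/2 are ordinary Riemann-Stieltjes chain-rule facts path by path. The sketch below checks this on an arbitrary smooth path c(t) = sin t + t/2, which is purely illustrative and not an actual canonical-process sample.

```python
import math

def c(t):                      # illustrative smooth path, not a real sample
    return math.sin(t) + t / 2

s, N = 2.0, 200_000
dt = s / N
# left-point Riemann–Stieltjes sum for ∫0^s c(t) dc(t)
integral = sum(c(i * dt) * (c((i + 1) * dt) - c(i * dt)) for i in range(N))
assert abs(integral - (c(s) ** 2 - c(0) ** 2) / 2) < 1e-4
```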
Proof: By defining h(t, Ct ) = F (t)Ct and using the chain rule, we get
d(F (t)Ct ) = Ct dF (t) + F (t)dCt .
Thus
F(s)Cs = ∫_{0}^{s} d(F(t)Ct) = ∫_{0}^{s} Ct dF(t) + ∫_{0}^{s} F(t) dCt
which is just (6.11).
Uncertain Differential
Equation
Remark 7.1: Note that there is no precise definition for the terms dXt ,
dt and dCt in the uncertain differential equation (7.1). The mathematically
meaningful form is the uncertain integral equation
Xs = X0 + ∫_{0}^{s} f(t, Xt) dt + ∫_{0}^{s} g(t, Xt) dCt. (7.2)
However, the differential form is more convenient for us. This is the main
reason why we accept the differential form.
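The integral form (7.2) suggests an Euler-type discretization. The sketch below is illustrative only: a fixed smooth path C(t) = 0.2 sin t stands in for a sample of the canonical process, and the coefficients f(t, x) = −x, g(t, x) = 1 are toy choices, none of which come from the text.

```python
import math

def solve(x0, f, g, C, s, n):
    """Euler-type discretization of Xs = X0 + ∫ f dt + ∫ g dC along a path C."""
    dt = s / n
    x, t = x0, 0.0
    for _ in range(n):
        x += f(t, x) * dt + g(t, x) * (C(t + dt) - C(t))
        t += dt
    return x

C = lambda t: 0.2 * math.sin(t)      # stand-in path for the canonical process
x = solve(1.0, lambda t, x: -x, lambda t, x: 1.0, C, s=1.0, n=100_000)
# for f = −x, g = 1 the exact solution is X(s) = e^{−s}(X0 + ∫0^s e^t dC(t))
exact = math.exp(-1.0) * (1 + 0.1 * (math.exp(1.0) * (math.cos(1.0) + math.sin(1.0)) - 1))
assert abs(x - exact) < 1e-3
```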
212 Chapter 7 - Uncertain Differential Equation
provided that a 6= 0.
|f (x, t) − f (y, t)| + |g(x, t) − g(y, t)| ≤ L|x − y|, ∀x, y ∈ <, t ≥ 0 (7.3)
7.3 Stability
Definition 7.2 (Liu [141]) An uncertain differential equation is said to be
stable if for any given ε > 0, there exists δ > 0 such that for any
solutions Xt and Yt , we have
If we assume that the stock price follows some uncertain differential equation,
then we may produce a new topic of uncertain finance. As an example, let
stock price follow geometric canonical process. Then we have a stock model
(Liu [141]) in which the bond price Xt and the stock price Yt are determined
by
dXt = rXt dt
dYt = eYt dt + σYt dCt        (7.6)
where r is the riskless interest rate, e is the stock drift, σ is the stock diffusion,
and Ct is a canonical process.
A European call option gives the holder the right to buy a stock at a speci-
fied time for specified price. Assume that the option has strike price K and
expiration time s. Then the payoff from such an option is (Ys − K)+. Considering the time value of money resulting from the bond, the present value
of this payoff is exp(−rs)(Ys − K)+ . Hence the European call option price
should be
fc = exp(−rs)E[(Ys − K)+ ]. (7.7)
Let us consider the financial market described by stock model (7.6). The
European put option price is
fp = exp(−rs) Y0 ∫_{0}^{K/Y0} ( 1 + exp(π(es − ln y)/(√3 σs)) )^{−1} dy. (7.10)
Now we assume that there are multiple stocks whose prices are determined
by multiple canonical processes. For this case, we have a multi-factor stock
model (Liu [141]) in which the bond price Xt and the stock prices Yit are
determined by
dXt = rXt dt
dYit = ei Yit dt + Σ_{j=1}^{n} σij Yit dCjt,  i = 1, 2, · · · , m        (7.11)
where r is the riskless interest rate, ei are the stock drift coefficients, σij
are the stock diffusion coefficients, and Cjt are independent canonical processes,
i = 1, 2, · · · , m, j = 1, 2, · · · , n.
Portfolio Selection
For the stock model (7.11), we have the choice of m + 1 different investments.
At each instant t we may choose a portfolio (βt , β1t , · · · , βmt ) (i.e., the in-
vestment fractions meeting βt + β1t + · · · + βmt = 1). Then the wealth Zt at
time t should follow uncertain differential equation
dZt = rβt Zt dt + Σ_{i=1}^{m} ei βit Zt dt + Σ_{i=1}^{m} Σ_{j=1}^{n} σij βit Zt dCjt. (7.12)
No-Arbitrage
The stock model (7.11) is said to be no-arbitrage if there is no portfolio
(βt , β1t , · · · , βmt ) such that for some time s > 0, we have
M{exp(−rs)Zs ≥ Z0 } = 1 (7.14)
and
M{exp(−rs)Zs > Z0 } > 0 (7.15)
where Zt is determined by (7.12) and represents the wealth at time t. We may
prove that the stock model (7.11) is no-arbitrage if and only if its diffusion
matrix
σ11  σ12  · · ·  σ1n
σ21  σ22  · · ·  σ2n
· · ·  · · ·  · · ·  · · ·
σm1  σm2  · · ·  σmn
has rank m, i.e., the row vectors are linearly independent.
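The rank condition is easy to test mechanically. Below is a small Gaussian-elimination rank routine (an illustrative, dependency-free implementation, not from the text) applied to two toy diffusion matrices.

```python
def rank(mat, eps=1e-12):
    """Row rank by Gaussian elimination with partial pivot search."""
    mat = [row[:] for row in mat]
    rows, cols, r = len(mat), len(mat[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(mat[i][c]) > eps), None)
        if pivot is None:
            continue
        mat[r], mat[pivot] = mat[pivot], mat[r]
        for i in range(r + 1, rows):
            factor = mat[i][c] / mat[r][c]
            for j in range(c, cols):
                mat[i][j] -= factor * mat[r][j]
        r += 1
    return r

assert rank([[0.3, 0.1], [0.6, 0.2]]) == 1   # rows dependent: arbitrage possible
assert rank([[0.3, 0.1], [0.1, 0.4]]) == 2   # full row rank m: no-arbitrage
```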
dX̂t = f̂(t, Xt)dt + ĝ(t, Xt)dĈt (7.19)
Uncertain Logic
Example 8.1: “Tom is tall with truth value 0.8” is an uncertain proposition,
where “Tom is tall” is a statement, and its truth value is 0.8 in uncertain
measure.
Generally, we use ξ to express the uncertain proposition and use c to
express its uncertain measure value. In fact, the uncertain proposition ξ is
essentially an uncertain variable
ξ =
    1 with uncertain measure c
    0 with uncertain measure 1 − c        (8.1)
Definition 8.2 Uncertain propositions are called independent if they are in-
dependent uncertain variables.
220 Chapter 8 - Uncertain Logic
X = ¬ξ, X = ξ ∧ η, X = (ξ ∨ η) → τ
Definition 8.3 Uncertain formulas are called independent if they are inde-
pendent uncertain variables.
Definition 8.4 (Li and Liu [108]) Let X be an uncertain formula. Then
the truth value of X is defined as the uncertain measure that the uncertain
formula X is true, i.e.,
T (X) = M{X = 1}. (8.2)
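For independent uncertain formulas (the case treated in Theorem 8.10 later in this chapter), truth values combine by min/max arithmetic: T(X ∧ Y) = T(X) ∧ T(Y), T(X ∨ Y) = T(X) ∨ T(Y), and T(¬X) = 1 − T(X). A minimal sketch with illustrative helper names; note that min/max apply only to independent formulas, not to a formula combined with its own negation.

```python
def t_not(a): return 1 - a
def t_and(a, b): return min(a, b)    # valid for independent formulas
def t_or(a, b): return max(a, b)     # valid for independent formulas

a, b = 0.8, 0.3
assert t_not(t_and(a, b)) == t_or(t_not(a), t_not(b))   # De Morgan (8.12)
assert t_not(t_or(a, b)) == t_and(t_not(a), t_not(b))   # De Morgan (8.13)
```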
where νi (xi ) are identification functions (no matter what types they are) of
ξi , i = 1, 2, · · · , n, respectively.
Proof: It follows from the definition of truth value and property of uncertain
measure that
Proof: It follows from the definition of truth value and property of uncertain
measure that
Theorem 8.5 (De Morgan’s Law) For any uncertain formulas X and Y , we
have
T (¬(X ∧ Y )) = T ((¬X) ∨ (¬Y )), (8.12)
T (¬(X ∨ Y )) = T ((¬X) ∧ (¬Y )). (8.13)
which proves the first equality. The second equality may be verified in a
similar way. The theorem is proved.
Proof: It follows from the law of excluded middle and law of truth conser-
vation that
T (X → X) = T (¬X ∨ X) = 1,
T (X → ¬X) = T (¬X ∨ ¬X) = T (¬X) = 1 − T (X).
The theorem is proved.
Independence Case
Theorem 8.10 If uncertain formulas X and Y are independent, then we
have
Proof: Since X and Y are independent uncertain formulas, they are inde-
pendent uncertain variables. Hence
Proof: Since X and Y are independent, the ¬X and Y are also independent.
It follows that
Uncertain Inference
is an event.
Let ξ and ξ ∗ be two uncertain sets. In order to define the degree that ξ ∗
matches ξ, we first introduce the following symbols:
{ξ∗ ⊂ ξ} = {γ ∈ Γ | ξ∗(γ) ⊂ ξ(γ)}, (9.2)
{ξ∗ ⊂ ξc} = {γ ∈ Γ | ξ∗(γ) ⊂ ξ(γ)c}, (9.3)
Definition 9.2 Let ξ and ξ ∗ be two uncertain sets. Then the event ξ ∗ B ξ
(i.e., ξ ∗ matches ξ) is defined as
Definition 9.3 Let ξ and ξ ∗ be two uncertain sets. Then the matching degree
of ξ ∗ B ξ is defined as
η ∗ = η|ξ∗ Bξ (9.7)
Rule: If X is ξ then Y is η
From: X is ξ ∗ (9.8)
Infer: Y is η ∗ = η|ξ∗ Bξ
Theorem 9.1 Assume the if-then rule “if X is ξ then Y is η”. From M{ξ ∗ B
ξ} = 1 we infer η ∗ = η.
Example 9.1: Suppose ξ and ξ ∗ are independent uncertain sets with first
identification functions λ and λ∗ , and η has a first identification function ν
that is independent of (ξ, ξ ∗ ). If λ∗ (x) ≤ λ(x) for any x ∈ <, then M{ξ ∗ Bξ} =
1 and ν ∗ (y) = ν(y). This means that if ξ ∗ matches ξ, then η ∗ matches η.
Proof: If M{ξ ∗ Bξ} = 1 and M{η ∗ Bη} = 1, then M{(ξ ∗ Bξ)∩(η ∗ Bη)} = 1.
It follows from the definition of conditional uncertainty that τ ∗ = τ .
η2∗ = η2 |ξ2∗ Bξ .
The uncertain sets η1∗ and η2∗ will aggregate a new η ∗ that is determined by
for any Borel set B. That is, we assign it the value of M{η1∗ ⊂ B} or
M{η2∗ ⊂ B} such that it is as far from 0.5 as possible. Thus we infer Y = η∗ .
Hence we have the following inference rule:
Rule 1: If X is ξ1 then Y is η1
Rule 2: If X is ξ2 then Y is η2
(9.12)
From: X is ξ ∗
Infer: Y is η ∗ determined by (9.11)
228 Chapter 9 - Uncertain Inference
Example A.2: Let A be the set of all finite disjoint unions of all intervals
of the form (−∞, a], (a, b], (b, ∞) and ∅. Then A is an algebra over <, but
not a σ-algebra because Ai = (0, (i − 1)/i] ∈ A for all i but
∪_{i=1}^{∞} Ai = (0, 1) ∉ A.
lim inf_{i→∞} Ai = ∪_{k=1}^{∞} ∩_{i=k}^{∞} Ai ∈ A; (A.3)
lim_{i→∞} Ai ∈ A. (A.4)
Example A.3: We divide the interval [0, 1] into three equal open intervals
from which we choose the middle one, i.e., (1/3, 2/3). Then we divide each
of the remaining two intervals into three equal open intervals, and choose
the middle one in each case, i.e., (1/9, 2/9) and (7/9, 8/9). We perform this
process and obtain Dij for j = 1, 2, · · · , 2^{i−1} and i = 1, 2, · · · Note that {Dij}
is a sequence of mutually disjoint open intervals. Without loss of generality,
suppose Di1 < Di2 < · · · < D_{i,2^{i−1}} for i = 1, 2, · · · Define the set
D = ∪_{i=1}^{∞} ∪_{j=1}^{2^{i−1}} Dij. (A.5)
A = A1 × A2 × · · ·
where Ai ∈ Ai for all i and Ai = Ωi for all but finitely many i. The smallest
σ-algebra containing all measurable rectangles of Ω = Ω1 × Ω2 × · · · is called
the product σ-algebra, denoted by
A = A1 × A2 × · · ·
A.2 Classical Measures
This appendix introduces the concepts of classical measure, measure space,
Lebesgue measure, and product measure.
Example A.4: Length, area and volume are instances of measure concept.
It has been proved that there is a unique measure π on the Borel algebra
of < such that π{(a, b]} = b − a for any interval (a, b] of <.
A = A1 × A2 × · · · × An .
Ω = Ω 1 × Ω 2 × · · · × Ωn ,
A = A1 × A2 × · · ·
Ω = Ω1 × Ω2 × · · ·
π{A1 ×· · ·×An ×Ωn+1 ×Ωn+2 ×· · · } = π1 {A1 }×π2 {A2 }×· · ·×πn {An } (A.9)
Definition A.7 A function f from (Ω, A) to the set of real numbers is said
to be measurable if
for any Borel set B of real numbers. If Ω is a Borel set, then A is always
assumed to be the Borel algebra over Ω.
Theorem A.7 The function f is measurable from (Ω, A) to < if and only if
f −1 (I) ∈ A for any open interval I of <.
Example A.8: Let f be a measurable function from (Ω, A) to <. Then its
positive part and negative part
f+(ω) =
    f(ω), if f(ω) ≥ 0
    0,    otherwise,
f−(ω) =
    −f(ω), if f(ω) ≤ 0
    0,     otherwise
are measurable functions, because
{ω | f+(ω) > t} = {ω | f(ω) > t} ∪ {ω | f(ω) ≤ 0} if t < 0,
{ω | f−(ω) > t} = {ω | f(ω) < −t} ∪ {ω | f(ω) ≥ 0} if t < 0.
Definition A.14 Let f (x) be a measurable function on the Borel set A, and
define
f+(x) =
    f(x), if f(x) > 0
    0,    otherwise,
f−(x) =
    −f(x), if f(x) < 0
    0,     otherwise.
be integrable on A.
Definition A.18 Let f (x) be a measurable function on the Borel set A, and
define
f+(x) =
    f(x), if f(x) > 0
    0,    otherwise,
f−(x) =
    −f(x), if f(x) < 0
    0,     otherwise.
Euler-Lagrange Equation
where F is a known function with continuous first and second partial deriva-
tives. If L has an extremum (maximum or minimum) at φ(x), then
∂F/∂φ(x) − (d/dx)(∂F/∂φ′(x)) = 0 (B.2)
Note that the Euler-Lagrange equation is only a necessary condition for the
existence of an extremum. However, if the existence of an extremum is clear
and there exists only one solution to the Euler-Lagrange equation, then this
solution must be the curve for which the extremum is achieved.
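The Euler-Lagrange equation can be illustrated numerically on a toy functional (an assumption of this sketch, not from the text): for L[φ] = ∫0^1 φ′(x)² dx with φ(0) = 0, φ(1) = 1, equation (B.2) gives φ″ = 0, i.e. the straight line φ(x) = x, and any admissible perturbation should not decrease L.

```python
import math

def energy(phi, n=10_000):
    """Discretized L[φ] = ∫0^1 φ'(x)² dx."""
    h = 1.0 / n
    return sum(((phi((i + 1) * h) - phi(i * h)) / h) ** 2 * h for i in range(n))

line = lambda x: x                                      # solves φ'' = 0
perturbed = lambda x: x + 0.1 * math.sin(math.pi * x)   # same boundary values
assert energy(line) < energy(perturbed)
```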
Appendix C
Logic
Propositions
Connective Symbols
Formulas
¬ξ, ξ ∨ η, ξ ∧ η, ξ → η, (ξ ∨ η) → τ.
Truth Functions
Assume X is a formula containing propositions ξ1 , ξ2 , · · · , ξn . It is well-known
that there is a function f : {0, 1}n → {0, 1} such that X = 1 if and only if
f (ξ1 , ξ2 , · · · , ξn ) = 1. Such a Boolean function f is called the truth function
of X. For example, the truth function of formula ξ1 ∨ ξ2 → ξ3 satisfies
f(x1, x2, x3) = 0 if and only if x1 ∨ x2 = 1 and x3 = 0.
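The truth function of ξ1 ∨ ξ2 → ξ3 can be sketched directly as a Boolean function on {0, 1}³:

```python
def f(x1, x2, x3):
    """Truth function of ξ1 ∨ ξ2 → ξ3 on {0, 1}³."""
    return 1 if (not (x1 or x2)) or x3 else 0

# the formula is false exactly when the antecedent holds and ξ3 fails
falsifying = [(x1, x2, x3)
              for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)
              if f(x1, x2, x3) == 0]
assert falsifying == [(0, 1, 0), (1, 0, 0), (1, 1, 0)]
```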
of fuzzy propositions and connective symbols that must make sense. Let X
be a fuzzy formula. Then the truth value of X is defined as the credibility
that the fuzzy formula X is true, i.e.,
This is the key point that credibilistic logic is different from possibilistic logic.
T (ξ ∧ η → κ) = Ch{(ξ ∧ η = 0) ∪ (κ = 1)}
= 1 − Ch{(ξ = 1) ∩ (η = 1) ∩ (κ = 0)}
= 1 − Cr{(ξ = 1) ∩ (κ = 0)} ∧ Pr{η = 1}
= 1 − a ∧ (1 − c) ∧ b;
T (ξ ∧ η → τ ) = Ch{(ξ ∧ η = 0) ∪ (τ = 1)}
= 1 − Ch{(ξ = 1) ∩ (η = 1) ∩ (τ = 0)}
= 1 − Cr{ξ = 1} ∧ Pr{(η = 1) ∩ (τ = 0)}
= 1 − a ∧ (b − bd).
Remark C.3: Hybrid logic is consistent with the law of contradiction and
law of excluded middle, i.e.,
where νi (xi ) are identification functions (no matter what types they are) of
ξi , i = 1, 2, · · · , n, respectively.
Inference
This appendix will introduce traditional inference, fuzzy inference and un-
certain inference.
Let ξ and ξ ∗ be two fuzzy sets. In order to define the degree that ξ ∗
matches ξ, we first introduce the following symbols:
$$\{\xi^* \subset \xi\} = \{\theta \in \Theta \mid \xi^*(\theta) \subset \xi(\theta)\}, \tag{D.1}$$
$$\{\xi^* \subset \xi^c\} = \{\theta \in \Theta \mid \xi^*(\theta) \subset \xi(\theta)^c\}. \tag{D.2}$$
Definition D.2 Let ξ and ξ ∗ be two fuzzy sets. Then the event ξ ∗ B ξ (i.e.,
ξ ∗ matches ξ) is defined as
Definition D.3 Let ξ and ξ ∗ be two fuzzy sets. Then the matching degree
of ξ ∗ B ξ is defined as
Fuzzy inference has been studied by many scholars and a lot of inference
rules have been suggested. For example, Zadeh’s compositional rule of infer-
ence, Lukasiewicz’s inference rule, and Mamdani’s inference rule are widely
used in fuzzy inference systems.
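Of the rules just mentioned, Zadeh's compositional rule of inference is easy to sketch on finite universes: the output membership function is the sup-min composition of the input with the rule relation. A minimal sketch with made-up membership values, using a Mamdani-style relation R(x, y) = ξ(x) ∧ η(y):

```python
# discrete universes
X = [0, 1, 2]
Y = [0, 1, 2]

xi  = {0: 1.0, 1: 0.5, 2: 0.0}   # antecedent fuzzy set on X
eta = {0: 0.0, 1: 0.6, 2: 1.0}   # consequent fuzzy set on Y

# Mamdani-style rule relation R(x, y) = min(xi(x), eta(y))
R = {(x, y): min(xi[x], eta[y]) for x in X for y in Y}

xi_star = {0: 0.2, 1: 1.0, 2: 0.3}  # observed input fuzzy set

# compositional rule of inference: eta*(y) = sup_x min(xi*(x), R(x, y))
eta_star = {y: max(min(xi_star[x], R[(x, y)]) for x in X) for y in Y}

assert eta_star == {0: 0.0, 1: 0.5, 2: 0.5}
```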
However, instead of using the above inference rules, we will introduce an
inference rule proposed by Liu [141]. Let X and Y be two concepts. It is
assumed that we have a rule “if X is a fuzzy set ξ then Y is a fuzzy set η”.
From X is a fuzzy set ξ ∗ we infer that Y is a fuzzy set
$$\eta^* = \eta|_{\xi^* B \xi}, \tag{D.6}$$
which is the conditional fuzzy set of η given ξ* B ξ.
Definition D.5 Let ξ and ξ ∗ be two hybrid sets. Then the event ξ ∗ B ξ (i.e.,
ξ ∗ matches ξ) is defined as
Definition D.6 Let ξ and ξ ∗ be two hybrid sets. Then the matching degree
of ξ ∗ B ξ is defined as
This section will introduce an inference rule proposed by Liu [141]. Let
X and Y be two concepts. It is assumed that we have a rule “if X is a hybrid
set ξ then Y is a hybrid set η”. From X is a hybrid set ξ ∗ we infer that Y is
a hybrid set
η ∗ = η|ξ∗ Bξ (D.10)
which is the conditional hybrid set of η given ξ ∗ B ξ.
Rule: If X is ξ then Y is η
From: X is ξ ∗ (D.12)
Infer: Y is η ∗ = η|ξ∗ Bξ
Appendix E
Maximum Uncertainty Principle
If the reasonable values an uncertain measure may take form an interval [a, b], the principle assigns the value closest to 0.5:
$$\mathcal{M}\{A\} = \begin{cases} a, & \text{if } 0.5 < a \le b \\ 0.5, & \text{if } a \le 0.5 \le b \\ b, & \text{if } a \le b < 0.5. \end{cases}$$
Appendix F
$$\mathcal{M}\{A \cup B\} \neq \mathcal{M}\{A\} \vee \mathcal{M}\{B\}$$
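The non-identity M{A ∪ B} ≠ M{A} ∨ M{B} holds already for a probability measure, which is a special uncertain measure; a two-point sketch (the values are illustrative):

```python
from fractions import Fraction

# a probability measure on {1, 2} is a special uncertain measure
half = Fraction(1, 2)
M = {frozenset(): Fraction(0), frozenset({1}): half,
     frozenset({2}): half, frozenset({1, 2}): Fraction(1)}

A, B = frozenset({1}), frozenset({2})
# M{A ∪ B} = 1, but M{A} ∨ M{B} = max(1/2, 1/2) = 1/2
assert M[A | B] != max(M[A], M[B])
```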
[35] Dunyak J, Saad IW, and Wunsch D, A theory of independent fuzzy probabil-
ity for system reliability, IEEE Transactions on Fuzzy Systems, Vol.7, No.3,
286-294, 1999.
[36] Elkan C, The paradoxical success of fuzzy logic, IEEE Expert, Vol.9, No.4,
3-8, 1994.
[37] Elkan C, The paradoxical controversy over fuzzy logic, IEEE Expert, Vol.9,
No.4, 47-49, 1994.
[38] Esogbue AO, and Liu B, Reservoir operations optimization via fuzzy criterion
decision processes, Fuzzy Optimization and Decision Making, Vol.5, No.3,
289-305, 2006.
[39] Feng X, and Liu YK, Measurability criteria for fuzzy random vectors, Fuzzy
Optimization and Decision Making, Vol.5, No.3, 245-253, 2006.
[40] Feng Y, and Yang LX, A two-objective fuzzy k-cardinality assignment prob-
lem, Journal of Computational and Applied Mathematics, Vol.197, No.1, 233-
244, 2006.
[41] Fung RYK, Chen YZ, and Chen L, A fuzzy expected value-based goal programming model for product planning using quality function deployment, Engineering Optimization, Vol.37, No.6, 633-647, 2005.
[42] Gao J, and Liu B, New primitive chance measures of fuzzy random event,
International Journal of Fuzzy Systems, Vol.3, No.4, 527-531, 2001.
[43] Gao J, Liu B, and Gen M, A hybrid intelligent algorithm for stochastic multi-
level programming, IEEJ Transactions on Electronics, Information and Sys-
tems, Vol.124-C, No.10, 1991-1998, 2004.
[44] Gao J, and Liu B, Fuzzy multilevel programming with a hybrid intelligent
algorithm, Computers & Mathematics with Applications, Vol.49, 1539-1548,
2005.
[45] Gao J, and Lu M, Fuzzy quadratic minimum spanning tree problem, Applied
Mathematics and Computation, Vol.164, No.3, 773-788, 2005.
[46] Gao J, and Feng X, A hybrid intelligent algorithm for fuzzy dynamic inventory
problem, Journal of Information and Computing Science, Vol.1, No.4, 235-
244, 2006.
[47] Gao J, Credibilistic game with fuzzy information, Journal of Uncertain Sys-
tems, Vol.1, No.1, 74-80, 2007.
[48] Gao J, and Gao X, A new stock model for credibilistic option pricing, Journal
of Uncertain Systems, Vol.2, No.4, 243-247, 2008.
[49] Gao X, Some properties of continuous uncertain measure, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, to be published.
(Also available at http://orsc.edu.cn/online/070512.pdf)
[50] Gil MA, Lopez-Diaz M, and Ralescu DA, Overview on the development of
fuzzy random variables, Fuzzy Sets and Systems, Vol.157, No.19, 2546-2557,
2006.
[51] González A, A study of the ranking function approach through mean values, Fuzzy Sets and Systems, Vol.35, 29-41, 1990.
[52] Guan J, and Bell DA, Evidence Theory and its Applications, North-Holland,
Amsterdam, 1991.
[53] Guo R, Zhao R, Guo D, and Dunne T, Random fuzzy variable modeling on
repairable system, Journal of Uncertain Systems, Vol.1, No.3, 222-234, 2007.
[54] Ha MH, Wang XZ, and Yang LZ, Sequences of (S) fuzzy integrable functions,
Fuzzy Sets and Systems, Vol.138, No.3, 507-522, 2003.
[55] Ha MH, Li Y, and Wang XF, Fuzzy knowledge representation and reasoning
using a generalized fuzzy petri net and a similarity measure, Soft Computing,
Vol.11, No.4, 323-327, 2007.
[56] Hansen E, Global Optimization Using Interval Analysis, Marcel Dekker, New
York, 1992.
[57] He Y, and Xu J, A class of random fuzzy programming model and its applica-
tion to vehicle routing problem, World Journal of Modelling and Simulation,
Vol.1, No.1, 3-11, 2005.
[58] Heilpern S, The expected value of a fuzzy number, Fuzzy Sets and Systems,
Vol.47, 81-86, 1992.
[59] Higashi M, and Klir GJ, On measures of fuzziness and fuzzy complements,
International Journal of General Systems, Vol.8, 169-180, 1982.
[60] Higashi M, and Klir GJ, Measures of uncertainty and information based on
possibility distributions, International Journal of General Systems, Vol.9, 43-
58, 1983.
[61] Hisdal E, Conditional possibilities independence and noninteraction, Fuzzy
Sets and Systems, Vol.1, 283-297, 1978.
[62] Hisdal E, Logical Structures for Representation of Knowledge and Uncer-
tainty, Physica-Verlag, Heidelberg, 1998.
[63] Hong DH, Renewal process with T-related fuzzy inter-arrival times and fuzzy
rewards, Information Sciences, Vol.176, No.16, 2386-2395, 2006.
[64] Hu BG, Mann GKI, and Gosine RG, New methodology for analytical and
optimal design of fuzzy PID controllers, IEEE Transactions on Fuzzy Systems,
Vol.7, No.5, 521-539, 1999.
[65] Hu LJ, Wu R, and Shao S, Analysis of dynamical systems whose inputs are
fuzzy stochastic processes, Fuzzy Sets and Systems, Vol.129, No.1, 111-118,
2002.
[66] Inuiguchi M, and Ramı́k J, Possibilistic linear programming: A brief review
of fuzzy mathematical programming and a comparison with stochastic pro-
gramming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111,
No.1, 3-28, 2000.
[67] Ishibuchi H, and Tanaka H, Multiobjective programming in optimization of
the interval objective function, European Journal of Operational Research,
Vol.48, 219-225, 1990.
[68] Jaynes ET, Information theory and statistical mechanics, Physical Reviews,
Vol.106, No.4, 620-630, 1957.
[69] Ji XY, and Shao Z, Model and algorithm for bilevel Newsboy problem
with fuzzy demands and discounts, Applied Mathematics and Computation,
Vol.172, No.1, 163-174, 2006.
[70] Ji XY, and Iwamura K, New models for shortest path problem with fuzzy arc
lengths, Applied Mathematical Modelling, Vol.31, 259-269, 2007.
[71] John RI, Type 2 fuzzy sets: An appraisal of theory and applications, Interna-
tional Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.6,
No.6, 563-576, 1998.
[72] Kacprzyk J, and Esogbue AO, Fuzzy dynamic programming: Main develop-
ments and applications, Fuzzy Sets and Systems, Vol.81, 31-45, 1996.
[73] Kacprzyk J, Multistage Fuzzy Control, Wiley, Chichester, 1997.
[74] Karnik NN, Mendel JM, and Liang Q, Type-2 fuzzy logic systems, IEEE
Transactions on Fuzzy Systems, Vol.7, No.6, 643-658, 1999.
[75] Karnik NN, Mendel JM, and Liang Q, Centroid of a type-2 fuzzy set, Infor-
mation Sciences, Vol.132, 195-220, 2001.
[76] Kaufmann A, Introduction to the Theory of Fuzzy Subsets, Vol.I, Academic
Press, New York, 1975.
[77] Kaufmann A, and Gupta MM, Introduction to Fuzzy Arithmetic: Theory and
Applications, Van Nostrand Reinhold, New York, 1985.
[78] Kaufmann A, and Gupta MM, Fuzzy Mathematical Models in Engineering
and Management Science, 2nd ed, North-Holland, Amsterdam, 1991.
[79] Ke H, and Liu B, Project scheduling problem with stochastic activity duration
times, Applied Mathematics and Computation, Vol.168, No.1, 342-353, 2005.
[80] Ke H, and Liu B, Project scheduling problem with mixed uncertainty of ran-
domness and fuzziness, European Journal of Operational Research, Vol.183,
No.1, 135-147, 2007.
[81] Klement EP, Puri ML, and Ralescu DA, Limit theorems for fuzzy random
variables, Proceedings of the Royal Society of London Series A, Vol.407, 171-
182, 1986.
[82] Klir GJ, and Folger TA, Fuzzy Sets, Uncertainty, and Information, Prentice-
Hall, Englewood Cliffs, NJ, 1980.
[83] Klir GJ, and Yuan B, Fuzzy Sets and Fuzzy Logic: Theory and Applications,
Prentice-Hall, New Jersey, 1995.
[84] Knopfmacher J, On measures of fuzziness, Journal of Mathematical Analysis
and Applications, Vol.49, 529-534, 1975.
[85] Kosko B, Fuzzy entropy and conditioning, Information Sciences, Vol.40, 165-
174, 1986.
[86] Kruse R, and Meyer KD, Statistics with Vague Data, D. Reidel Publishing
Company, Dordrecht, 1987.
[87] Kwakernaak H, Fuzzy random variables–I: Definitions and theorems, Infor-
mation Sciences, Vol.15, 1-29, 1978.
[88] Kwakernaak H, Fuzzy random variables–II: Algorithms and examples for the
discrete case, Information Sciences, Vol.17, 253-278, 1979.
[89] Lai YJ, and Hwang CL, Fuzzy Multiple Objective Decision Making: Methods
and Applications, Springer-Verlag, New York, 1994.
[90] Lee ES, Fuzzy multiple level programming, Applied Mathematics and Com-
putation, Vol.120, 79-90, 2001.
[91] Lee KH, First Course on Fuzzy Theory and Applications, Springer-Verlag,
Berlin, 2005.
[92] Lertworasirkul S, Fang SC, Joines JA, and Nuttle HLW, Fuzzy data en-
velopment analysis (DEA): a possibility approach, Fuzzy Sets and Systems,
Vol.139, No.2, 379-394, 2003.
[93] Li DY, Cheung DW, and Shi XM, Uncertainty reasoning based on cloud models in controllers, Computers and Mathematics with Applications, Vol.35, No.3, 99-123, 1998.
[94] Li DY, and Du Y, Artificial Intelligence with Uncertainty, National Defense Industry Press, Beijing, 2005.
[95] Li HL, and Yu CS, A fuzzy multiobjective program with quasiconcave mem-
bership functions and fuzzy coefficients, Fuzzy Sets and Systems, Vol.109,
No.1, 59-81, 2000.
[96] Li J, Xu J, and Gen M, A class of multiobjective linear programming
model with fuzzy random coefficients, Mathematical and Computer Modelling,
Vol.44, Nos.11-12, 1097-1113, 2006.
[97] Li P, and Liu B, Entropy of credibility distributions for fuzzy variables, IEEE
Transactions on Fuzzy Systems, Vol.16, No.1, 123-129, 2008.
[98] Li SM, Ogura Y, and Nguyen HT, Gaussian processes and martingales for
fuzzy valued random variables with continuous parameter, Information Sci-
ences, Vol.133, 7-21, 2001.
[99] Li SM, Ogura Y, and Kreinovich V, Limit Theorems and Applications of
Set-Valued and Fuzzy Set-Valued Random Variables, Kluwer, Boston, 2002.
[100] Li SQ, Zhao RQ, and Tang WS, Fuzzy random homogeneous Poisson pro-
cess and compound Poisson process, Journal of Information and Computing
Science, Vol.1, No.4, 207-224, 2006.
[101] Li X, and Liu B, The independence of fuzzy variables with applications,
International Journal of Natural Sciences & Technology, Vol.1, No.1, 95-100,
2006.
[102] Li X, and Liu B, A sufficient and necessary condition for credibility measures,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.14, No.5, 527-535, 2006.
[103] Li X, and Liu B, New independence definition of fuzzy random variable and
random fuzzy variable, World Journal of Modelling and Simulation, Vol.2,
No.5, 338-342, 2006.
[104] Li X, and Liu B, Maximum entropy principle for fuzzy variables, Interna-
tional Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.15,
Supp.2, 43-52, 2007.
[105] Li X, and Qin ZF, Expected value and variance of geometric Liu process, Far
East Journal of Experimental and Theoretical Artificial Intelligence, Vol.1,
No.2, 127-135, 2008.
[106] Li X, and Liu B, On distance between fuzzy variables, Journal of Intelligent
& Fuzzy Systems, Vol.19, No.3, 197-204, 2008.
[107] Li X, and Liu B, Chance measure for hybrid events with fuzziness and ran-
domness, Soft Computing, Vol.13, No.2, 105-115, 2009.
[108] Li X, and Liu B, Hybrid logic and uncertain logic, Journal of Uncertain
Systems, Vol.3, No.2, 83-94, 2009.
[109] Li X, and Liu B, Foundation of credibilistic logic, Fuzzy Optimization and
Decision Making, to be published.
[110] Li X, and Liu B, Conditional chance measure for hybrid events, Technical
Report, 2008.
[111] Li X, and Liu B, Moment estimation theorems for various types of uncertain
variable, Technical Report, 2008.
[112] Li X, and Liu B, Cross-entropy and generalized entropy for fuzzy variables,
Technical Report, 2008.
[113] Li YM, and Li SJ, A fuzzy sets theoretic approach to approximate spatial
reasoning, IEEE Transactions on Fuzzy Systems, Vol.12, No.6, 745-754, 2004.
[114] Li YM, and Pedrycz W, The equivalence between fuzzy Mealy and fuzzy
Moore machines, Soft Computing, Vol.10, No.10, 953-959, 2006.
[115] Liu B, Dependent-chance goal programming and its genetic algorithm based
approach, Mathematical and Computer Modelling, Vol.24, No.7, 43-52, 1996.
[116] Liu B, and Esogbue AO, Fuzzy criterion set and fuzzy criterion dynamic
programming, Journal of Mathematical Analysis and Applications, Vol.199,
No.1, 293-311, 1996.
[117] Liu B, Dependent-chance programming: A class of stochastic optimization,
Computers & Mathematics with Applications, Vol.34, No.12, 89-104, 1997.
[118] Liu B, and Iwamura K, Modelling stochastic decision systems using
dependent-chance programming, European Journal of Operational Research,
Vol.101, No.1, 193-203, 1997.
[119] Liu B, and Iwamura K, Chance constrained programming with fuzzy param-
eters, Fuzzy Sets and Systems, Vol.94, No.2, 227-237, 1998.
[120] Liu B, and Iwamura K, A note on chance constrained programming with
fuzzy coefficients, Fuzzy Sets and Systems, Vol.100, Nos.1-3, 229-233, 1998.
[121] Liu B, Minimax chance constrained programming models for fuzzy decision
systems, Information Sciences, Vol.112, Nos.1-4, 25-38, 1998.
[122] Liu B, Dependent-chance programming with fuzzy decisions, IEEE Transac-
tions on Fuzzy Systems, Vol.7, No.3, 354-360, 1999.
[123] Liu B, and Esogbue AO, Decision Criteria and Optimal Inventory Processes,
Kluwer, Boston, 1999.
[124] Liu B, Uncertain Programming, Wiley, New York, 1999.
[145] Liu XC, Entropy, distance measure and similarity measure of fuzzy sets and
their relations, Fuzzy Sets and Systems, Vol.52, 305-318, 1992.
[146] Liu XW, Measuring the satisfaction of constraints in fuzzy linear program-
ming, Fuzzy Sets and Systems, Vol.122, No.2, 263-275, 2001.
[147] Liu YH, How to generate uncertain measures, Proceedings of Tenth National
Youth Conference on Information and Management Sciences, August 3-7,
2008, Luoyang, pp.23-26.
[148] Liu YK, and Liu B, Random fuzzy programming with chance measures
defined by fuzzy integrals, Mathematical and Computer Modelling, Vol.36,
Nos.4-5, 509-524, 2002.
[149] Liu YK, and Liu B, Fuzzy random programming problems with multiple
criteria, Asian Information-Science-Life, Vol.1, No.3, 249-256, 2002.
[150] Liu YK, and Liu B, Fuzzy random variables: A scalar expected value opera-
tor, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160, 2003.
[151] Liu YK, and Liu B, Expected value operator of random fuzzy variable and
random fuzzy expected value models, International Journal of Uncertainty,
Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.
[152] Liu YK, and Liu B, A class of fuzzy random optimization: Expected value
models, Information Sciences, Vol.155, Nos.1-2, 89-102, 2003.
[153] Liu YK, and Liu B, On minimum-risk problems in fuzzy random decision
systems, Computers & Operations Research, Vol.32, No.2, 257-283, 2005.
[154] Liu YK, and Liu B, Fuzzy random programming with equilibrium chance
constraints, Information Sciences, Vol.170, 363-395, 2005.
[155] Liu YK, Fuzzy programming with recourse, International Journal of Uncer-
tainty, Fuzziness & Knowledge-Based Systems, Vol.13, No.4, 381-413, 2005.
[156] Liu YK, Convergent results about the use of fuzzy simulation in fuzzy opti-
mization problems, IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 295-
304, 2006.
[157] Liu YK, and Wang SM, On the properties of credibility critical value func-
tions, Journal of Information and Computing Science, Vol.1, No.4, 195-206,
2006.
[158] Liu YK, and Gao J, The independence of fuzzy variables with applications to
fuzzy random optimization, International Journal of Uncertainty, Fuzziness
& Knowledge-Based Systems, Vol.15, Supp.2, 1-20, 2007.
[159] Loo SG, Measures of fuzziness, Cybernetica, Vol.20, 201-210, 1977.
[160] Lopez-Diaz M, and Ralescu DA, Tools for fuzzy random variables: Embed-
dings and measurabilities, Computational Statistics & Data Analysis, Vol.51,
No.1, 109-114, 2006.
[161] Lu M, On crisp equivalents and solutions of fuzzy programming with different
chance measures, Information: An International Journal, Vol.6, No.2, 125-
133, 2003.
[162] Lucas C, and Araabi BN, Generalization of the Dempster-Shafer Theory:
A fuzzy-valued measure, IEEE Transactions on Fuzzy Systems, Vol.7, No.3,
255-270, 1999.
[201] Peng J, and Liu B, Birandom variables and birandom programming, Com-
puters & Industrial Engineering, Vol.53, No.3, 433-453, 2007.
[202] Peng J, A general stock model for fuzzy markets, Journal of Uncertain Sys-
tems, Vol.2, No.4, 248-254, 2008.
[203] Peng ZX, Some properties of product uncertain measure, http://orsc.edu.cn/
online/081228.pdf.
[204] Puri ML, and Ralescu D, Differentials of fuzzy functions, Journal of Mathe-
matical Analysis and Applications, Vol.91, 552-558, 1983.
[205] Puri ML, and Ralescu D, Fuzzy random variables, Journal of Mathematical
Analysis and Applications, Vol.114, 409-422, 1986.
[206] Qin ZF, and Li X, Option pricing formula for fuzzy financial market, Journal
of Uncertain Systems, Vol.2, No.1, 17-21, 2008.
[207] Qin ZF, A fuzzy control system with application to production planning
problem, Proceedings of the Sixth Annual Conference on Uncertainty, Lu-
oyang, August 3-7, 2008, pp.71-77. (Also available at http://orsc.edu.cn/
process/080412.pdf)
[208] Qin ZF, and Gao X, Fractional Liu process and applications to finance, Pro-
ceedings of the Seventh International Conference on Information and Man-
agement Sciences, Urumchi, August 12-19, 2008, pp.277-280. (Also available
at http://orsc.edu.cn/process/080430.pdf)
[209] Raj PA, and Kumar DN, Ranking alternatives with fuzzy weights using maximizing set and minimizing set, Fuzzy Sets and Systems, Vol.105, 365-375, 1999.
[210] Ralescu DA, and Sugeno M, Fuzzy integral representation, Fuzzy Sets and
Systems, Vol.84, No.2, 127-133, 1996.
[211] Ralescu AL, and Ralescu DA, Extensions of fuzzy aggregation, Fuzzy Sets and Systems, Vol.86, No.3, 321-330, 1997.
[212] Ramer A, Conditional possibility measures, International Journal of Cyber-
netics and Systems, Vol.20, 233-247, 1989.
[213] Ramı́k J, Extension principle in fuzzy optimization, Fuzzy Sets and Systems,
Vol.19, 29-35, 1986.
[214] Ramı́k J, and Rommelfanger H, Fuzzy mathematical programming based on
some inequality relations, Fuzzy Sets and Systems, Vol.81, 77-88, 1996.
[215] Saade JJ, Maximization of a function over a fuzzy domain, Fuzzy Sets and
Systems, Vol.62, 55-70, 1994.
[216] Sakawa M, Nishizaki I, and Uemura Y, Interactive fuzzy programming for
multi-level linear programming problems with fuzzy parameters, Fuzzy Sets
and Systems, Vol.109, No.1, 3-19, 2000.
[217] Sakawa M, Nishizaki I, and Uemura Y, Interactive fuzzy programming for two-level linear fractional programming problems with fuzzy parameters, Fuzzy Sets and Systems, Vol.115, 93-103, 2000.
[218] Shafer G, A Mathematical Theory of Evidence, Princeton University Press,
Princeton, NJ, 1976.
[238] Wang Z, and Klir GJ, Fuzzy Measure Theory, Plenum Press, New York, 1992.
[239] Yager RR, On measures of fuzziness and negation, Part I: Membership in the
unit interval, International Journal of General Systems, Vol.5, 221-229, 1979.
[240] Yager RR, On measures of fuzziness and negation, Part II: Lattices, Infor-
mation and Control, Vol.44, 236-260, 1980.
[241] Yager RR, A procedure for ordering fuzzy subsets of the unit interval, Infor-
mation Sciences, Vol.24, 143-161, 1981.
[242] Yager RR, Generalized probabilities of fuzzy events from fuzzy belief struc-
tures, Information Sciences, Vol.28, 45-62, 1982.
[243] Yager RR, Entropy and specificity in a mathematical theory of evidence,
International Journal of General Systems, Vol.9, 249-260, 1983.
[244] Yager RR, On the specificity of a possibility distribution, Fuzzy Sets and
Systems, Vol.50, 279-292, 1992.
[245] Yager RR, Measures of entropy and fuzziness related to aggregation operators,
Information Sciences, Vol.82, 147-166, 1995.
[246] Yager RR, Modeling uncertainty using partial information, Information Sci-
ences, Vol.121, 271-294, 1999.
[247] Yager RR, On the entropy of fuzzy measures, IEEE Transactions on Fuzzy
Systems, Vol.8, 453-461, 2000.
[248] Yager RR, On the evaluation of uncertain courses of action, Fuzzy Optimiza-
tion and Decision Making, Vol.1, 13-41, 2002.
[249] Yang L, and Liu B, On inequalities and critical values of fuzzy random vari-
able, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.13, No.2, 163-175, 2005.
[250] Yang L, and Liu B, A sufficient and necessary condition for chance distri-
bution of birandom variable, Information: An International Journal, Vol.9,
No.1, 33-36, 2006.
[251] Yang L, and Liu B, On continuity theorem for characteristic function of fuzzy
variable, Journal of Intelligent & Fuzzy Systems, Vol.17, No.3, 325-332, 2006.
[252] Yang L, and Li K, B-valued fuzzy variable, Soft Computing, Vol.12, No.11,
1081-1088, 2008.
[253] Yang N, and Wen FS, A chance constrained programming approach to trans-
mission system expansion planning, Electric Power Systems Research, Vol.75,
Nos.2-3, 171-177, 2005.
[254] Yazenin AV, On the problem of possibilistic optimization, Fuzzy Sets and
Systems, Vol.81, 133-140, 1996.
[255] You C, Multidimensional Liu process, differential and integral, Proceedings
of the First Intelligent Computing Conference, Lushan, October 10-13, 2007,
pp.153-158. (Also available at http://orsc.edu.cn/process/071015.pdf)
[256] You C, and Wen M, The entropy of fuzzy vectors, Computers & Mathematics
with Applications, Vol.56, No.6, 1626-1633, 2008.
[257] You C, Some extensions of Wiener-Liu process and Ito-Liu integral, Proceed-
ings of the Seventh International Conference on Information and Manage-
ment Sciences, Urumchi, August 12-19, 2008, pp.226-232. (Also available at
http://orsc.edu.cn/process/071019.pdf)
[258] You C, Some convergence theorems of uncertain sequences, Mathematical and
Computer Modelling, Vol.49, Nos.3-4, 482-487, 2009.
[259] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[260] Zadeh LA, Outline of a new approach to the analysis of complex systems and
decision processes, IEEE Transactions on Systems, Man and Cybernetics,
Vol.3, 28-44, 1973.
[261] Zadeh LA, The concept of a linguistic variable and its application to approx-
imate reasoning, Information Sciences, Vol.8, 199-251, 1975.
[262] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[263] Zadeh LA, A theory of approximate reasoning, In: J Hayes, D Michie and RM Thrall, eds, Mathematical Frontiers of the Social and Policy Sciences, Westview Press, Boulder, Colorado, 69-129, 1979.
[264] Zhao R, and Liu B, Stochastic programming models for general redundancy optimization problems, IEEE Transactions on Reliability, Vol.52, No.2, 181-191, 2003.
[265] Zhao R, and Liu B, Renewal process with fuzzy interarrival times and rewards,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.11, No.5, 573-586, 2003.
[266] Zhao R, and Liu B, Redundancy optimization problems with uncertainty
of combining randomness and fuzziness, European Journal of Operational
Research, Vol.157, No.3, 716-735, 2004.
[267] Zhao R, and Liu B, Standby redundancy optimization problems with fuzzy
lifetimes, Computers & Industrial Engineering, Vol.49, No.2, 318-338, 2005.
[268] Zhao R, Tang WS, and Yun HL, Random fuzzy renewal process, European
Journal of Operational Research, Vol.169, No.1, 189-201, 2006.
[269] Zhao R, and Tang WS, Some properties of fuzzy random renewal process,
IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 173-179, 2006.
[270] Zheng Y, and Liu B, Fuzzy vehicle routing model with credibility measure
and its hybrid intelligent algorithm, Applied Mathematics and Computation,
Vol.176, No.2, 673-683, 2006.
[271] Zhou J, and Liu B, New stochastic models for capacitated location-allocation
problem, Computers & Industrial Engineering, Vol.45, No.1, 111-125, 2003.
[272] Zhou J, and Liu B, Analysis and algorithms of bifuzzy systems, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.12, No.3,
357-376, 2004.
[273] Zhou J, and Liu B, Modeling capacitated location-allocation problem with
fuzzy demands, Computers & Industrial Engineering, Vol.53, No.3, 454-468,
2007.
[274] Zhu Y, and Liu B, Continuity theorems and chance distribution of random
fuzzy variable, Proceedings of the Royal Society of London Series A, Vol.460,
2505-2519, 2004.
[275] Zhu Y, and Liu B, Some inequalities of random fuzzy variables with applica-
tion to moment convergence, Computers & Mathematics with Applications,
Vol.50, Nos.5-6, 719-727, 2005.
[276] Zhu Y, and Ji XY, Expected values of functions of fuzzy variables, Journal
of Intelligent & Fuzzy Systems, Vol.17, No.5, 471-478, 2006.
[277] Zhu Y, and Liu B, Fourier spectrum of credibility distribution for fuzzy vari-
ables, International Journal of General Systems, Vol.36, No.1, 111-123, 2007.
[278] Zhu Y, and Liu B, A sufficient and necessary condition for chance distribution
of random fuzzy variables, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 21-28, 2007.
[279] Zhu Y, Fuzzy optimal control with application to portfolio selection, http://
orsc.edu.cn/process/080117.pdf.
[280] Zimmermann HJ, Fuzzy Set Theory and its Applications, Kluwer Academic
Publishers, Boston, 1985.
List of Frequently Used Symbols
$$\sup_{\Lambda_1 \times \Lambda_2 \times \cdots \times \Lambda_n \subset \Lambda} \ \min_{1 \le k \le n} \mathcal{M}_k\{\Lambda_k\}$$
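On finite spaces the sup-min product above can be evaluated by brute force over rectangles; a minimal sketch for n = 2 (the measures below are made up, and the 0.5 adjustment of the full product-measure axiom is omitted):

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def sup_min_product(gamma1, m1, gamma2, m2, event):
    """sup over rectangles A1 x A2 contained in `event` of min(m1[A1], m2[A2])."""
    best = 0.0
    for a1 in powerset(gamma1):
        for a2 in powerset(gamma2):
            if a1 and a2 and all((x, y) in event for x in a1 for y in a2):
                best = max(best, min(m1[a1], m2[a2]))
    return best

# toy uncertain measures on two-point spaces
g1, g2 = {'a', 'b'}, {'c', 'd'}
m1 = {frozenset(): 0.0, frozenset({'a'}): 0.6, frozenset({'b'}): 0.4, frozenset(g1): 1.0}
m2 = {frozenset(): 0.0, frozenset({'c'}): 0.7, frozenset({'d'}): 0.3, frozenset(g2): 1.0}

# the largest rectangle inside {(a,c), (b,c)} is {a,b} x {c}, giving min(1.0, 0.7)
assert sup_min_product(g1, m1, g2, m2, {('a', 'c'), ('b', 'c')}) == 0.7
```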
Baoding Liu
Uncertainty Theory