
Optimal allocation in formation flying

Juan Giribet, Ignacio Mas, Ricardo Sánchez Peña

December 16, 2014

Juan Giribet: Department of Electronic Engineering, University of Buenos Aires, and CONICET - Instituto Argentino de Matemática, jgiribet@fi.uba.ar
Ignacio Mas: CONICET and Instituto Tecnológico de Buenos Aires.
Ricardo Sánchez Peña: CONICET and Instituto Tecnológico de Buenos Aires.

Abstract
In this article we propose an integrated navigation system for multiple small UAVs flying in formation. A data fusion algorithm uses measurements from the IMU, a GPS receiver and a camera, allowing the positioning information of the surrounding vehicles to be used to improve performance.

1 Introduction

Inertial measurement units (IMU) compute the navigation parameters (position, velocity, and orientation) by numerically integrating accelerometer and rate gyro measurements. These systems are of great interest in various applications given that they collect and provide autonomous information at high rates. Nevertheless, accumulated errors grow unboundedly over time. To achieve an acceptable level of accuracy a high quality IMU is required, which results in high cost and elevated size and weight, making its use impractical for many applications.
In contrast, navigation systems that depend on variables that are external to the vehicle (e.g., GPS, magnetometers, cameras) make use of direct measurements of navigation parameters with bounded errors, but at discrete intervals and with time delays, which translates into limitations for certain applications. Moreover, these sensors are vulnerable to temporary signal occlusions, which affect their measurements.
The core of the integrated navigation system is a data fusion algorithm (usually an extended Kalman filter) that combines the onboard sensor information.
This algorithm estimates the navigation parameters with bounded errors, high
accuracy and high data rates. By doing this, the quality and cost of the inertial
sensors can be drastically reduced.
In this article we present an integrated navigation system to be used in
small unmanned aerial vehicles (UAVs) flying in formation. The fusion algorithm incorporates measurements from an IMU, a GPS receiver and a video camera that observes the surrounding vehicles. Integration of IMU and GPS data is common in the literature; see for example [1] and its references. Different methodologies for complementing GPS/INS integrated navigation systems with vision sensors have been studied in the literature.
For instance, Hoshizaki et al. [2] and Veth et al. [3] studied how the inclusion of feature tracking methods improves the performance of a tightly-coupled INS/GPS navigation system. In [4], Fakih et al. proposed an adaptive method for merging information provided by a camera with inertial sensors and GPS. More recently, Priot et al. [5] proposed a method for performing inertial sensor calibration, which merges the information of a deeply-coupled INS/GPS navigation system with images delivered by a camera.
The main contribution of this work is an INS/GPS navigation algorithm that also uses camera measurements, allowing information to be obtained from the other UAVs in the platoon. This information can be used to fuse and share navigation information among the vehicles in the formation.
The reason for using a camera to provide inter-vehicle relative information is mainly practical, since it is a small, light and low cost sensor. For instance, low cost vehicles such as the Parrot AR.Drone quadrotor [6] can be used as a testbench for multi-UAV experimentation. These vehicles have an onboard IMU and video cameras. Their main limitation is their reduced payload capability. Despite that, a small GPS receiver can be installed onboard (< 100 g). The goal is to achieve the most accurate navigation possible with the available sensors, which motivates the present work.
The outline of this paper is as follows: in Section 2 we present the notation and preliminaries. In Section 3 we introduce integrated navigation algorithms, focusing on the dynamics that describe the error behaviour of these systems, which is crucial for implementing the integrated navigation algorithm. Section 4 is devoted to introducing a simplified model for the camera and its error model. Section 5 presents the INS/GPS and INS/camera integration schemes, Section 6 addresses the problem of the best formation configuration, and Section 7 reports simulation results.

2 Preliminaries

In what follows, $\mathbb{R}^n$ denotes the space of (column) vectors with $n$ real components. Given a matrix $A \in \mathbb{R}^{n\times m}$, $A^T \in \mathbb{R}^{m\times n}$ denotes its transpose. $SO(3)$ denotes the special orthogonal group of matrices in $\mathbb{R}^{3\times 3}$, i.e., orthogonal matrices with determinant equal to 1 (which is the set of matrices representing rotations in $\mathbb{R}^3$).
The function $S : \mathbb{R}^3 \to \mathbb{R}^{3\times 3}$ is defined as the function that assigns to $x \in \mathbb{R}^3$ the unique skew-symmetric matrix $S(x) \in \mathbb{R}^{3\times 3}$ such that $S(x)y = x \times y$ for every $y \in \mathbb{R}^3$, where $\times$ denotes the standard vector product. In fact, it is easy to see that if $x = (x_1, x_2, x_3)^T$ then
$$S(x) = \begin{pmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{pmatrix}.$$
Given $k$ matrices $A_1, \dots, A_k$ in $\mathbb{R}^{n\times m}$, we define the matrix in $\mathbb{R}^{kn\times km}$
$$\mathrm{diag}(A_1, \dots, A_k) = \begin{pmatrix} A_1 & 0 & \dots & 0 \\ 0 & A_2 & \dots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \dots & A_k \end{pmatrix}.$$
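As a concrete illustration of this notation, the following minimal NumPy sketch implements the operator $S(\cdot)$ and the block-diagonal constructor used throughout the paper (the function names are ours, not part of the original text):

```python
import numpy as np

def skew(x):
    """Skew-symmetric matrix S(x) such that skew(x) @ y == np.cross(x, y)."""
    x1, x2, x3 = x
    return np.array([[0.0, -x3,  x2],
                     [ x3, 0.0, -x1],
                     [-x2,  x1, 0.0]])

def block_diag(*mats):
    """Block-diagonal matrix diag(A_1, ..., A_k)."""
    rows = sum(A.shape[0] for A in mats)
    cols = sum(A.shape[1] for A in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for A in mats:
        out[r:r + A.shape[0], c:c + A.shape[1]] = A
        r += A.shape[0]
        c += A.shape[1]
    return out
```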

3 Inertial navigation system

To determine the position, velocity and orientation (navigation parameters) of a vehicle from the inertial measurements (accelerometers and rate gyros), the differential kinematic equations that relate such parameters must be integrated in real time [1]. This task is performed by the inertial navigation system (INS). To perform this operation, it is required to know the position, velocity, and orientation initial conditions. Given that this calculation is based on the integration of the differential equations, the accurate knowledge of the initial conditions, as well as the accuracy of the inertial instruments, are crucial for the navigation. Otherwise the errors can grow in an unbounded fashion with time. This is the main drawback of the INS, and it is not negligible in low cost inertial instruments, in particular those based on MEMS technologies.

3.1 Inertial navigation system error dynamics

An INS error model for each vehicle in the platoon of UAVs is presented. This model is of great importance in the design of the integrated navigation system.
Let $p_i, v_i : \mathbb{R} \to \mathbb{R}^3$ and $C_i : \mathbb{R} \to SO(3)$ be functions of time $t \in \mathbb{R}$ representing the (true) position, velocity and attitude of vehicle $i$, respectively. Let $\hat p_i, \hat v_i : \mathbb{R} \to \mathbb{R}^3$ and $\hat C_i : \mathbb{R} \to SO(3)$ be the position, velocity and attitude of vehicle $i$ calculated by the INS.
As usual, the INS position and velocity errors are defined as
$$\delta p_i = \hat p_i - p_i, \qquad (1)$$
$$\delta v_i = \hat v_i - v_i, \qquad (2)$$
and the attitude error $\phi_i : \mathbb{R} \to \mathbb{R}^3$ is defined as
$$\phi_i = S^{-1}\big(\hat C_i C_i^T - I\big). \qquad (3)$$

Remark 3.1. It is well known that a vector $0 \neq \phi \in \mathbb{R}^3$ can be identified with the (unique) rotation matrix having $\phi \in \mathbb{R}^3$ as invariant vector and rotating any $v \in \mathbb{R}^3$ by $\|\phi\|$ degrees (defining a positive direction of rotation) around $\phi/\|\phi\|$. Moreover, this identification is given by
$$M(\phi) = I + \frac{\sin\|\phi\|}{\|\phi\|}\, S(\phi) + \frac{1 - \cos\|\phi\|}{\|\phi\|^2}\, S^2(\phi) \in SO(3).$$
Note that, for $\|\phi\|$ small enough (neglecting second order terms), $M(\phi) \in SO(3)$ can be approximated (with arbitrary precision) by $\tilde M(\phi) = I + S(\phi) \in \mathbb{R}^{3\times 3}$. Then observe that, for a given time $t \in \mathbb{R}$, the attitude error has an interesting physical interpretation. Since $\hat C(t) = (I + S(\phi(t)))C(t)$ for small $\phi(t) \in \mathbb{R}^3$, the matrix $\tilde M(\phi(t)) = I + S(\phi(t))$ approximates the rotation matrix that compensates the direction cosine matrix calculated by the INS.
In the vicinity of $(\delta p_i^T, \delta v_i^T, \phi_i^T) = 0 \in \mathbb{R}^9$ the INS error dynamics can be modeled as [1]
$$\dot X_i(t) = A_i(t) X_i(t) + B_i(t)\xi_i(t), \qquad (4)$$
where $X_i(t) = (\delta p_i(t)^T, \delta v_i(t)^T, \phi_i(t)^T)^T$,
$$A_i(t) = \begin{pmatrix} 0 & I & 0 \\ \dfrac{g}{|p_i(t)|}\left(3\,\dfrac{p_i(t)p_i(t)^T}{|p_i(t)|^2} - I\right) & 0 & -S\big(C_i(t)f_i(t)\big) \\ 0 & 0 & 0 \end{pmatrix} \in \mathbb{R}^{9\times 9},$$
$$B_i(t) = \begin{pmatrix} 0 & 0 \\ C_i(t) & 0 \\ 0 & C_i(t) \end{pmatrix} \in \mathbb{R}^{9\times 6}, \qquad \xi_i(t) = \big(\delta f_i(t)^T, \delta\omega_i(t)^T\big)^T \in \mathbb{R}^6.$$

The specific force $f_i$ and the angular velocity $\omega_i$ are the IMU measurements of vehicle $i$, which are corrupted with perturbations $\delta f_i$ and $\delta\omega_i$, respectively. The gravity $g$ is assumed constant with value $9.798\ \mathrm{m/s^2}$.
Given the sequence of times $\{t_k\}_{k\in\mathbb{N}}$, it is possible to discretize equation (4) to obtain a discrete time model [1]:
$$X_i(t_{k+1}) = A_i(t_k)X_i(t_k) + B_i(t_k)\xi_i(t_k). \qquad (5)$$
To design an integrated navigation algorithm, the IMU measurement perturbations are modeled as (stationary) zero mean Gaussian stochastic processes $\{\xi_i(t_k)\}_{k\in\mathbb{N}}$ with realisations in $\mathbb{R}^6$ and covariance matrices $Q_i(t_k) = E\big(\xi_i(t_k)\xi_i(t_k)^T\big) > 0$. Thus, the INS error is the stochastic process $\{X_i(t_k)\}_{k\in\mathbb{N}}$, with realisations in $\mathbb{R}^9$, given by equation (5).
Suppose that we have $n \in \mathbb{N}$ UAVs; then we can define the stochastic process $\{X(t_k)\}_{k\in\mathbb{N}}$, with realisations in $\mathbb{R}^{9n}$, given by $X^T(t_k) = [X_1^T(t_k), X_2^T(t_k), \dots, X_n^T(t_k)]$, $k \in \mathbb{N}$, which models the INS error of each UAV. Then, from equation (5) it follows that
$$X(t_{k+1}) = A(t_k)X(t_k) + B(t_k)\xi(t_k), \qquad (6)$$

where
$$A(t_k) = \mathrm{diag}\big(A_1(t_k), A_2(t_k), \dots, A_n(t_k)\big), \qquad B(t_k) = \mathrm{diag}\big(B_1(t_k), B_2(t_k), \dots, B_n(t_k)\big),$$
$$\xi(t_k) = \big[\xi_1(t_k)^T, \xi_2(t_k)^T, \dots, \xi_n(t_k)^T\big]^T.$$
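A hedged sketch of how equations (4)-(6) might be assembled numerically is given below, reusing the skew and block_diag helpers from the previous sketch; the first-order discretization is our own simplification, since the paper does not prescribe a particular one:

```python
import numpy as np

def ins_error_matrices(p, C, f, g=9.798):
    """Continuous-time error model blocks A_i, B_i of eq. (4) for one vehicle.
    p: position (3,), C: attitude matrix (3,3), f: specific force (3,)."""
    r = np.linalg.norm(p)
    A = np.zeros((9, 9))
    A[0:3, 3:6] = np.eye(3)                                   # d(delta p)/dt = delta v
    A[3:6, 0:3] = (g / r) * (3.0 * np.outer(p, p) / r**2 - np.eye(3))
    A[3:6, 6:9] = -skew(C @ f)                                # attitude error coupling
    B = np.zeros((9, 6))
    B[3:6, 0:3] = C                                           # accelerometer noise
    B[6:9, 3:6] = C                                           # rate gyro noise
    return A, B

def discretize(A, B, dt):
    """First-order discrete blocks A_i(t_k), B_i(t_k) of eq. (5)."""
    return np.eye(9) + A * dt, B * dt

def platoon_model(per_vehicle):
    """Stacked block-diagonal A(t_k), B(t_k) of eq. (6).
    per_vehicle: list of (A_i, B_i) pairs for the n vehicles."""
    A = block_diag(*[Ai for Ai, _ in per_vehicle])
    B = block_diag(*[Bi for _, Bi in per_vehicle])
    return A, B
```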

4 Camera measurements model

Suppose that we have a fleet of $n$ UAVs with positions $\{p_1(t), p_2(t), \dots, p_n(t)\}$. The purpose of this section is to present the method implemented to determine the position of vehicle $i$ by observing with a camera $k_i(t) \in \mathbb{N}$ vehicles with known positions $p_j(t) \in \mathbb{R}^3$, $j \in I_i(t) \subset \{1, 2, \dots, n\}$, $j \neq i$. Observe that the number of vehicles in the field of view is not the same for each vehicle (this is the reason for the subindex $i$ in $k_i$) and it is a function of time. For example, suppose that we have 5 UAVs flying together and, at time $t \in \mathbb{R}$, vehicle 1 can see vehicles $I_1(t) = \{2, 4, 5\}$ ($k_1(t) = 3$).
Figure 1: Camera model


For simplifying notation, in what follows we will omit the dependence on time $t$. For defining the camera model, we will make reference to Figure 1. For a given time $t \in \mathbb{R}$, let $p_i = p_i(t) \in \mathbb{R}^3$ be the camera focus position of vehicle $i$, with focal distance $d_f$, and the camera plane defined as
$$\Pi = \{p \in \mathbb{R}^3 : (p - p_i + d_f z^b)^T z^b = 0\},$$
where $(x^b, y^b, z^b)$ is an orthogonal frame with $z^b$ normal to the plane $\Pi$, and $x^b, y^b$ defining the axes of the image where the points $q_{ij}$ are measured by the camera. Notice that, since the camera is fixed to the vehicle, the frame $(x^b, y^b, z^b)$ defines the attitude of the vehicle (this is the reason why the supra-index $b$ is used, for the body frame, as it is usually called). The point $q_{ij}$ lying on the camera plane $\Pi$ is the point $q_{ij} \in \Pi \cap L_j$, where
$$L_j = \{p \in \mathbb{R}^3 : p = \lambda(p_j - p_i) + p_i, \ \lambda \in \mathbb{R}\},$$
and $p_j \in \mathbb{R}^3$, $j = 1, \dots, k_i$, $j \neq i$, are the $k_i$ vehicles in the field of view of vehicle $i$. Then,
$$(q_{ij} - p_i)(p_j - p_i)^T z^b = -d_f (p_j - p_i). \qquad (7)$$
Moreover, if $\alpha_{ij}, \beta_{ij} \in \mathbb{R}$ are the coordinates of $q_{ij}$ in the camera plane $\Pi$ (i.e., the coordinates measured by the camera), then
$$q_{ij} = p_i + \alpha_{ij} x^b + \beta_{ij} y^b - d_f z^b, \qquad (8)$$
where $d_f$ is the focal length of the camera.
Thus, if $C_i \in SO(3)$ is the matrix containing the vectors $x^b, y^b, z^b$ as columns (i.e., the matrix representing the attitude of vehicle $i$), then from equations (7) and (8) it follows that
$$q_{ij} - p_i = C_i \begin{pmatrix} \alpha_{ij} \\ \beta_{ij} \\ -d_f \end{pmatrix}, \qquad C_i \begin{pmatrix} \alpha_{ij} \\ \beta_{ij} \\ -d_f \end{pmatrix}(p_j - p_i)^T z^b = -d_f (p_j - p_i), \qquad (9)$$

or more compactly, since $z^b = C_i [0\ 0\ 1]^T$,
$$\tilde M_{ij}\, C_i^T (p_j - p_i) = 0, \qquad (10)$$
where $\tilde M_{ij} = \begin{pmatrix} d_f & 0 & \alpha_{ij} \\ 0 & d_f & \beta_{ij} \end{pmatrix}$.

Remark 4.1. Therefore, to determine the position $p_i \in \mathbb{R}^3$ it is necessary to know the attitude of vehicle $i$ (i.e., $C_i \in SO(3)$) and the position of at least two vehicles $p_j, p_k$, $i \neq j, k$, satisfying $\alpha_{ij} \neq \alpha_{ik}$ or $\beta_{ij} \neq \beta_{ik}$, i.e., two vehicles distinguishable with the camera. Indeed, suppose that we have $p_j, p_k$, $i \neq j, k$; then we can write
$$\begin{pmatrix} \tilde M_{ij} \\ \tilde M_{ik} \end{pmatrix} C_i^T p_i = \begin{pmatrix} \tilde M_{ij} C_i^T p_j \\ \tilde M_{ik} C_i^T p_k \end{pmatrix}. \qquad (11)$$
If $\alpha_{ij} \neq \alpha_{ik}$ or $\beta_{ij} \neq \beta_{ik}$, it follows that
$$\begin{pmatrix} \tilde M_{ij} \\ \tilde M_{ik} \end{pmatrix} = \begin{pmatrix} d_f & 0 & \alpha_{ij} \\ 0 & d_f & \beta_{ij} \\ d_f & 0 & \alpha_{ik} \\ 0 & d_f & \beta_{ik} \end{pmatrix}$$
has full rank, and then equation (11) has a unique solution.


If ki (dierent) vehicles {pjr }r=1,...,ki , jr Ii are in the field of view of the
camera of the vehicle i, and two of these vehicles satisfy condition given in
Remark 4.1, position of vehicle i can be obtained by solving equation


ij
Mij1 CiT pj1
M
1

T c
..
..

Ci pi =
.
.
T

Mijki Ci pjki
Mijki

(12)

c
The supra-index
{ in pi is to remark that this position is given by the camera.
1 if j Ii
ij . Notice that, equation
Let Ii (j) =
and Mij = Ii (j)M
0 if j
/ Ii
(12) can be written as

Mi1
Mi1 CiT p1
.. T c

..
. Ci pi =
,
.
T
Min
Min Ci pn

(13)

then

( n
)1 ( n
)
Mi1
Mi1 CiT p1

..

.
c
T
T
T
..
pi = Ci .
Mij Mij
Mij Mij Ci pj (14)
=
T
i=1
i=1
Min
Min Ci pn
The matrix Ci needed to solve equation (12) can be obtained from the navigation system of vehicle i. The positions pj of the other vehicles are provided by
their respective navigation systems. As a result, inter-vehicle communication is
required to share such navigation information.
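As an illustration, the following is a minimal sketch (with our own naming, and assuming at least two distinguishable vehicles are in view so that the normal matrix is invertible, cf. Remark 4.1) of the least-squares fix of equation (14):

```python
import numpy as np

def camera_position_fix(C_i, image_coords, neighbor_positions, d_f):
    """Least-squares position fix p_i^c of eqs. (12)-(14).
    C_i: attitude of vehicle i (3x3), d_f: focal length,
    image_coords: dict {j: (alpha_ij, beta_ij)} measured by the camera,
    neighbor_positions: dict {j: p_j} reported by the observed vehicles."""
    N = np.zeros((3, 3))      # sum_j M_ij^T M_ij
    b = np.zeros(3)           # sum_j M_ij^T M_ij C_i^T p_j
    for j, (a, be) in image_coords.items():
        M = np.array([[d_f, 0.0, a],
                      [0.0, d_f, be]])
        N += M.T @ M
        b += M.T @ M @ C_i.T @ neighbor_positions[j]
    return C_i @ np.linalg.solve(N, b)
```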

4.1 Camera error model

The camera error model is obtained by a perturbation of equation (12), where the focal length $d_f$ is assumed to be known. By Remark 3.1, $\delta C_i = \hat C_i - C_i \approx S(\phi_i)C_i$. Then,
$$\begin{pmatrix} M_{i1} \\ \vdots \\ M_{in} \end{pmatrix} C_i^T \delta p_i^c = \begin{pmatrix} M_{i1} C_i^T S(p_1 - p_i) \\ \vdots \\ M_{in} C_i^T S(p_n - p_i) \end{pmatrix} \phi_i + \begin{pmatrix} M_{i1} C_i^T \delta p_1 \\ \vdots \\ M_{in} C_i^T \delta p_n \end{pmatrix} + \begin{pmatrix} \delta M_{i1} C_i^T (p_1 - p_i) \\ \vdots \\ \delta M_{in} C_i^T (p_n - p_i) \end{pmatrix}, \qquad (15)$$
where $\delta M_{ij} = I_i(j) \begin{pmatrix} 0 & 0 & \delta\alpha_{ij} \\ 0 & 0 & \delta\beta_{ij} \end{pmatrix}$.
Then,
$$\delta p_i^c(t) = \Phi_i(t)\,\phi_i(t) + \Gamma_i(t)\begin{pmatrix} \delta p_1(t) \\ \vdots \\ \delta p_n(t) \end{pmatrix} + \Lambda_i(t)\begin{pmatrix} \delta\alpha_{i1}(t) \\ \delta\beta_{i1}(t) \\ \vdots \\ \delta\alpha_{in}(t) \\ \delta\beta_{in}(t) \end{pmatrix}, \qquad (16)$$
where
$$\Phi_i(t) = \Phi_i(I_i(t), t) = C_i(t)\left(\sum_{j=1}^n M_{ij}(t)^T M_{ij}(t)\right)^{-1} \sum_{j=1}^n M_{ij}(t)^T M_{ij}(t)\, C_i(t)^T S\big(p_j(t) - p_i(t)\big),$$
$$\Gamma_i(t) = \Gamma_i(I_i(t), t) = C_i(t)\left(\sum_{j=1}^n M_{ij}(t)^T M_{ij}(t)\right)^{-1} \big[M_{i1}(t)^T M_{i1}(t)\ \cdots\ M_{in}(t)^T M_{in}(t)\big]\, \mathrm{diag}\big(C_i^T(t), \dots, C_i^T(t)\big),$$
$$\Lambda_i(t) = \Lambda_i(I_i(t), t) = C_i(t)\left(\sum_{j=1}^n M_{ij}(t)^T M_{ij}(t)\right)^{-1} \big[M_{i1}(t)^T\ \cdots\ M_{in}(t)^T\big]\, \mathrm{diag}\big(z^{bT}(p_1(t) - p_i(t))\, I_{2\times 2}, \dots, z^{bT}(p_n(t) - p_i(t))\, I_{2\times 2}\big). \qquad (17)$$
Notice that $\Gamma_i(t)$ is a block matrix. We will use the notation $(\Gamma_i(t))_k$ for the $k$-th $3\times 3$ block of its partition, i.e.,
$$(\Gamma_i(t))_k = C_i(t)\left(\sum_{j=1}^n M_{ij}(t)^T M_{ij}(t)\right)^{-1} M_{ik}(t)^T M_{ik}(t)\, C_i^T(t).$$
j=1

The supra-index in pci denotes that this is the error in position calculation
obtained with the camera, and should not be confused with pi , which is the
position error of the INS.
Notice that, the method for calculating the position of the vehicle i with a
camera has three dierent sources of error. The first term (which depends on i )
is due to the attitude error given by the navigation system of the vehicle i. The
second term (which depends on pj ) represents the uncertainty in the position
of the vehicle j, given by the navigation system of the vehicle j. The last source
(which depends on ij and ij ) is given by the errors in the determination of
points ij and ij , i.e., a camera error measurement.
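To make the three error contributions concrete, the following sketch (our own construction, not taken from the paper) propagates nominal covariances of $\phi_i$, of the platoon positions and of the image noise through equation (16) to obtain the covariance of $\delta p_i^c$, assuming the three sources are mutually uncorrelated:

```python
import numpy as np

def camera_fix_covariance(Phi_i, Gamma_i, Lambda_i, P_att, P_pos, sigma_c):
    """Covariance of delta p_i^c from eq. (16).
    Phi_i: 3x3, Gamma_i: 3x3n, Lambda_i: 3x2n,
    P_att: 3x3 attitude error covariance of vehicle i,
    P_pos: 3n x 3n joint position error covariance of the platoon,
    sigma_c: standard deviation of the image coordinate noise."""
    return (Phi_i @ P_att @ Phi_i.T
            + Gamma_i @ P_pos @ Gamma_i.T
            + sigma_c**2 * Lambda_i @ Lambda_i.T)
```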

5 Integrated navigation

Navigation systems based on GPS and INS have complementary characteristics. While GPS systems allow bounding the diverging INS errors, the latter provides navigation information between GPS data acquisitions or when the satellites are not in view. This motivates the use of both sources of information in the same integrated navigation system. The ability to combine different information sources is not limited to INS and GPS. It is possible to incorporate orientation or position information from radar, altimeters, magnetometers, cameras, etc.
The framework to approach the integrated navigation problem is the filtering or nonlinear estimation of dynamic systems. Given the complexity of the design of nonlinear filters, the problem has traditionally been solved with extended linear filters like the Extended Kalman Filter (EKF).
In a nutshell, an integrated navigation system can be described as follows: the INS provides navigation information affected by a given error (due to the uncertainty in the initial conditions or to errors in the accelerometers and rate gyros). These errors are expected to be small enough for relatively short periods of time. The system then behaves as described in equation (5). Given such a model, we can obtain a filter that estimates the INS error and corrects it. Therefore, a measurement of such error is needed. Here is where the additional sensors (such as GPS, magnetometers or cameras) come into play. These additional sensors are usually called external navigation sensors, since their measurements depend on external parameters (satellite signals, visual patterns, the Earth's magnetic field, among others).

5.1 INS/GPS integrated navigation system

Assume that we have $n > 2$ UAVs flying and suppose that, in addition to the INS, each vehicle is equipped with a GPS receiver providing its position and velocity. Then, for each vehicle $i = 1, \dots, n$ it is possible to implement an integrated navigation system merging the information provided by the INS and the GPS. Here we propose the classical loosely-coupled INS/GPS system

Figure 2: INS/GPS system


represented in Figure 2 (see for instance [1]). Let $p_i^{gps}, v_i^{gps} \in \mathbb{R}^3$ be the position and velocity given by the GPS, which are corrupted by zero-mean Gaussian white noises $\eta_i^p$ and $\eta_i^v$ with covariance matrices $(\sigma_i^p)^2 I_{3\times 3}$ and $(\sigma_i^v)^2 I_{3\times 3} > 0$, respectively. We define the Kalman filter vector input as
$$y_i^{gps} := \begin{pmatrix} \hat p_i - p_i^{gps} \\ \hat v_i - v_i^{gps} \end{pmatrix} = \begin{pmatrix} \delta p_i + \eta_i^p \\ \delta v_i + \eta_i^v \end{pmatrix}.$$
Suppose that $\{t_k\}_{k\in\mathbb{N}}$ are the discrete times at which a GPS measurement is available. According to equation (5), the state model of the Kalman filter of the integrated navigation system is given by
$$\begin{pmatrix} \delta p_i(t_{k+1}) \\ \delta v_i(t_{k+1}) \\ \phi_i(t_{k+1}) \end{pmatrix} = A_i(t_k)\begin{pmatrix} \delta p_i(t_k) \\ \delta v_i(t_k) \\ \phi_i(t_k) \end{pmatrix} + B_i(t_k)\xi_i(t_k), \qquad (18)$$
$$y_i^{gps}(t_k) = H_i^{gps}(t_k)\begin{pmatrix} \delta p_i(t_k) \\ \delta v_i(t_k) \\ \phi_i(t_k) \end{pmatrix} + \eta_i^{gps}(t_k), \qquad (19)$$
where
$$H_i^{gps}(t_k) = \begin{pmatrix} I_{3\times 3} & 0 & 0 \\ 0 & I_{3\times 3} & 0 \end{pmatrix} \ \text{for every } t_k, \qquad \eta_i^{gps}(t_k) = \begin{pmatrix} \eta_i^p(t_k) \\ \eta_i^v(t_k) \end{pmatrix}.$$
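A hedged sketch of a standard Kalman measurement update with the loosely-coupled GPS observation of equation (19) (the variable names are ours; the paper only specifies the matrices involved):

```python
import numpy as np

H_GPS = np.hstack([np.eye(6), np.zeros((6, 3))])   # H_i^gps of eq. (19)

def gps_update(x, P, y_gps, sigma_p, sigma_v):
    """Kalman measurement update for one vehicle.
    x: 9-dim INS error state estimate, P: 9x9 covariance,
    y_gps: 6-dim GPS input y_i^gps."""
    R = np.diag([sigma_p**2] * 3 + [sigma_v**2] * 3)
    S = H_GPS @ P @ H_GPS.T + R
    K = P @ H_GPS.T @ np.linalg.inv(S)
    x_new = x + K @ (y_gps - H_GPS @ x)
    P_new = (np.eye(9) - K @ H_GPS) @ P
    return x_new, P_new
```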

5.2 INS/Camera integrated navigation system

The scheme proposed for integrating the camera measurements with the INS is similar to that of the GPS measurements. Suppose that, at time $t_k$, vehicle $i$ has $k_i$ (different) vehicles $\{p_{j_r}\}_{r=1,\dots,k_i}$, $j_r \in I_i$, in its field of view, and that they satisfy the condition of Remark 4.1. Applying equation (12) we can calculate the position $p_i$; we call this measurement $p_i^c$, which is corrupted by an error $\delta p_i^c$ given by equation (15). We define the vector input as
$$y_i^c := p_i^c - \hat p_i = \delta p_i^c - \delta p_i.$$
Observe that $\delta p_i^c$ does not depend only on $X_i = (\delta p_i^T, \delta v_i^T, \phi_i^T)^T$ but also on every $X_{j_r}$, $j_r \in I_i$, $r = 1, \dots, k_i$. Furthermore, since the vehicles in the field of view of vehicle $i$ change with time, all the states $X_i$, $i = 1, \dots, n$, must be taken into account in the Kalman filter model. In fact, by equation (16),
$$y_i^c(t_k) = \begin{pmatrix} H_{i1}^c & H_{i2}^c & \dots & H_{in}^c \end{pmatrix}(t_k)\begin{pmatrix} X_1(t_k) \\ X_2(t_k) \\ \vdots \\ X_n(t_k) \end{pmatrix} + \Lambda_i(t_k)\begin{pmatrix} \delta\alpha_{i1}(t_k) \\ \delta\beta_{i1}(t_k) \\ \vdots \\ \delta\alpha_{in}(t_k) \\ \delta\beta_{in}(t_k) \end{pmatrix}, \qquad (20)$$
where
$$H_{ij}^c(t_k) = \begin{cases} \begin{pmatrix} -I_{3\times 3} & 0 & \Phi_i(t_k) \end{pmatrix} & \text{if } i = j, \\[4pt] \begin{pmatrix} (\Gamma_i(t_k))_j & 0 & 0 \end{pmatrix} & \text{if } i \neq j. \end{cases} \qquad (21)$$

By equation (6), the state model of the Kalman filter for the INS/Camera integrated navigation system is
$$X(t_{k+1}) = A(t_k)X(t_k) + B(t_k)\xi(t_k), \qquad (22)$$
$$y_i^c(t_k) = H_i^c(t_k)X(t_k) + \Lambda_i(t_k)\nu_i^c(t_k), \qquad (23)$$
where $\nu_i^c(t_k) = [\delta\alpha_{i1}(t_k), \delta\beta_{i1}(t_k), \dots, \delta\alpha_{in}(t_k), \delta\beta_{in}(t_k)]^T$ is assumed to be a zero mean Gaussian random noise with covariance matrix $(\sigma_i^c)^2 I_{2n\times 2n} > 0$, and
$$H_i^c(t_k) = \begin{pmatrix} H_{i1}^c & H_{i2}^c & \dots & H_{in}^c \end{pmatrix}(t_k).$$

Fusing both integration strategies, we obtain an INS/GPS/camera integrated navigation system.
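For concreteness, the following sketch (our own construction) shows how the camera measurement matrix $H_i^c$ of equation (23) could be assembled from the blocks defined in equation (21):

```python
import numpy as np

def camera_measurement_matrix(i, n, Phi_i, Gamma_blocks):
    """Assemble H_i^c (3 x 9n) from eq. (21).
    Phi_i: 3x3 matrix, Gamma_blocks: dict {j: (Gamma_i)_j} for visible vehicles j."""
    H = np.zeros((3, 9 * n))
    for j in range(n):
        if j == i:
            H[:, 9*j:9*j+3] = -np.eye(3)        # block acting on delta p_i
            H[:, 9*j+6:9*j+9] = Phi_i           # block acting on phi_i
        elif j in Gamma_blocks:
            H[:, 9*j:9*j+3] = Gamma_blocks[j]   # block acting on delta p_j
    return H
```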

6 Best formation flying configuration under different quality navigation sensors

Suppose that we have a platoon of $n$ vehicles where $m < n$ of these vehicles have a low cost navigation system and $n - m$ vehicles carry a high quality navigation system. If the vehicles are performing a certain task, the problem we address in this section is how to distribute the vehicles in the formation in order to achieve the best navigation performance; more specifically, where the $n - m$ vehicles with the high quality navigation system should be located. In order to study this problem a performance criterion must be established. Suppose that, for the vehicles $i = 1, \dots, n$, the dynamic model for the Kalman filter of the integrated navigation algorithm is given by (see equation (6))
$$X(t_{k+1}) = A(t_k)X(t_k) + B(t_k)\xi(t_k), \qquad (24)$$
$$Y(t_k) = H(t_k)X(t_k) + D(t_k)\eta(t_k), \qquad (25)$$
where $Y^T(t_k) = [y_1^T(t_k), y_2^T(t_k), \dots, y_n^T(t_k)]$ are the external measurements of each one of the $n$ vehicles, and $\eta^T(t_k) = [\eta_1^T(t_k), \eta_2^T(t_k), \dots, \eta_n^T(t_k)]$ the corresponding uncorrelated zero mean noises with (block diagonal) covariance matrix $R(t_k) = \mathrm{diag}\big(R_1(t_k), R_2(t_k), \dots, R_n(t_k)\big) > 0$. More precisely, for $i = 1, \dots, n$,
$$y_i = \begin{pmatrix} y_i^{gps} \\ y_i^c \end{pmatrix} \in \mathbb{R}^9, \qquad \eta_i = \begin{pmatrix} \eta_i^{gps} \\ \nu_i^c \end{pmatrix} \in \mathbb{R}^{2n+6},$$
$$R_i(t_k) = \begin{pmatrix} (\sigma_i^p)^2 I_{3\times 3} & 0 & 0 \\ 0 & (\sigma_i^v)^2 I_{3\times 3} & 0 \\ 0 & 0 & (\sigma_i^c)^2 I_{2n\times 2n} \end{pmatrix} > 0,$$
where $(\sigma_i^p)^2, (\sigma_i^v)^2 > 0$ stand for the GPS position and velocity noise variances and $(\sigma_i^c)^2 > 0$ for the camera noise variance, respectively. Notice that here it is assumed that the noise variance is the same in every component.
The matrix $D(t_k)$ is block diagonal, $D(t_k) = \mathrm{diag}\big(D_1(t_k), D_2(t_k), \dots, D_n(t_k)\big)$, where
$$D_i(t_k) = \begin{pmatrix} I_{6\times 6} & 0 \\ 0 & \Lambda_i(t_k) \end{pmatrix} \in \mathbb{R}^{9\times(2n+6)},$$
for $i = 1, \dots, n$. In what follows we assume that $\Lambda_i(t_k)$, and hence $D_i(t_k)$, have full row rank for all $i = 1, \dots, n$ and $t_k \geq t_0$, so that $D_i(t_k)R_i(t_k)D_i(t_k)^T$ is invertible.
The matrix $H(t_k) = \big[H_1^T(t_k), H_2^T(t_k), \dots, H_n^T(t_k)\big]^T$, where
$$H_i(t_k) = \begin{pmatrix} 0 & \dots & H_i^{gps}(t_k) & \dots & 0 \\ & & H_i^c(t_k) & & \end{pmatrix} \in \mathbb{R}^{9\times 9n},$$
for $i = 1, \dots, n$; here $H_i^{gps}(t_k)$ occupies the $i$-th block of columns and $H_i^c(t_k)$ spans all $9n$ columns.

6.1 Kalman filter estimates

Let $P(t_k)$ and $P^-(t_k)$ be the update and prediction covariance matrices of the Kalman filter corresponding to the platoon (i.e., to equations (24)-(25)). More precisely, if $\hat X(t_k)$ is the Kalman filter estimate of the state $X(t_k)$ given measurements from the initial time $t_0$ until time $t_k$, then
$$P(t_k) = E\big((X(t_k) - \hat X(t_k))(X(t_k) - \hat X(t_k))^T\big).$$
In a similar way, if $\hat X^-(t_k)$ is the Kalman filter prediction of the state $X(t_k)$ given measurements from the initial time $t_0$ until time $t_{k-1}$, then
$$P^-(t_k) = E\big((X(t_k) - \hat X^-(t_k))(X(t_k) - \hat X^-(t_k))^T\big).$$
The matrices $P(t_k)$, $P^-(t_k)$ are assumed to be strictly positive, which is equivalent to saying that $\{X_i(t_k)\}_{i=1,\dots,n}$ is not linearly dependent with probability one. Also denote $I(t_k) = P(t_k)^{-1}$ and $I^-(t_k) = P^-(t_k)^{-1}$, which are the update and prediction information matrices. These matrices give a measure of the navigation performance of the swarm. In fact, the trace of the matrix $I(t_k)$ is closely related to the observability of the state $X(t_k)$ (see [7]). If $I(t_k)$ increases with time then the navigation error decreases. Besides the trace of the information matrix, there are several scalar measures of how well our filter performs; see for instance [9], which is a survey of different scalar performance measures for tracking and sensor allocation problems.
It is well known that the relation between $I(t_k)$ and $I^-(t_k)$ is given by
$$I(t_k) = I^-(t_k) + H(t_k)^T\big(D(t_k)R(t_k)D(t_k)^T\big)^{-1}H(t_k). \qquad (26)$$
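For later use when comparing formations, a sketch (ours) of the information-form update of equation (26):

```python
import numpy as np

def information_update(I_pred, H, D, R):
    """Information matrix update of eq. (26): I = I^- + H^T (D R D^T)^{-1} H."""
    S = D @ R @ D.T
    return I_pred + H.T @ np.linalg.solve(S, H)
```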

6.2 UAV Formation flying

In this section we assume that the $n$ vehicles are performing a given task that requires them to maintain their attitudes and relative positions, i.e., $p_i(t_k) - p_j(t_k) = p_i(t_l) - p_j(t_l)$ for all $i \neq j = 1, \dots, n$ and times $t_k, t_l > t_0$.
As was mentioned, suppose that some vehicles have a better navigation system, more specifically better external navigation sensors but similar inertial sensors. We will assume that the vehicles are ordered with respect to their external sensor quality, i.e.,
$$\sigma_1^p \leq \sigma_2^p \leq \dots \leq \sigma_n^p, \qquad \sigma_1^v \leq \sigma_2^v \leq \dots \leq \sigma_n^v, \qquad \sigma_1^c \leq \sigma_2^c \leq \dots \leq \sigma_n^c.$$
Notice that, if the inertial sensors are similar and their errors are assumed to be stationary, it follows that $Q = Q_i(t_k) = E\big(\xi_i(t_k)\xi_i(t_k)^T\big) > 0$. Thus, $E\big(\xi(t_k)\xi(t_k)^T\big) = \mathrm{diag}(Q, \dots, Q)$.
The question we want to answer is: does the navigation quality depend on where each vehicle is located in the formation? As a quality index of the navigation performance we use the trace of the information matrix $I(t_k)$.
Notice that a relocation of the vehicles in the formation can be seen as a reordering of the diagonal blocks $R_i(t_k)$ of the matrix $R(t_k) = \mathrm{diag}\big(R_1(t_k), R_2(t_k), \dots, R_n(t_k)\big)$. To each of these formations there corresponds one information matrix. In fact, each reordering of $R(t_k)$ can be represented as $\Pi^T R(t_k)\Pi$, where $\Pi$ is a block permutation matrix. Thus we can give a parametrization of the set of information matrices corresponding to the different formations. Notice that we have at most $n!$ different information matrices:
$$I^\Pi(t_k) = I^-(t_k) + H(t_k)^T\big(D(t_k)\,\Pi^T R(t_k)\Pi\, D(t_k)^T\big)^{-1}H(t_k). \qquad (27)$$

Since $D(t_k)\,\Pi^T R(t_k)\Pi\, D(t_k)^T$ is a block diagonal matrix, it follows that
$$I^\Pi(t_k) = I^-(t_k) + \sum_{j=1}^n H_j(t_k)^T\big(D_j(t_k)\, R_j^\Pi(t_k)\, D_j(t_k)^T\big)^{-1}H_j(t_k), \qquad (28)$$
where $R_j^\Pi(t_k)$ denotes the $j$-th diagonal block of $\Pi^T R(t_k)\Pi$.

Let $I = \{1, \dots, n\}$ and $f : I \to I$ a bijective function, each of these functions corresponding to a different block permutation matrix $\Pi$ (so that $R_j^\Pi = R_{f(j)}$). Since $H_i^{gps} = H_j^{gps}$ for every $i, j = 1, \dots, n$, by equations (18) and (22), it follows that
$$I^\Pi(t_k) = I^-(t_k) + \mathrm{diag}\big(J_{f(1)}, \dots, J_{f(n)}\big) + \sum_{j=1}^n \big(\sigma_{f(j)}^c\big)^{-2} H_j^c(t_k)^T\big(\Lambda_j(t_k)\Lambda_j(t_k)^T\big)^{-1}H_j^c(t_k), \qquad (29)$$
where, for $j = 1, \dots, n$,
$$J_j = \begin{pmatrix} (\sigma_j^p)^{-2} I_{3\times 3} & 0 & 0 \\ 0 & (\sigma_j^v)^{-2} I_{3\times 3} & 0 \\ 0 & 0 & 0_{3\times 3} \end{pmatrix}. \qquad (30)$$

The second term of equation (29) shows that the location of the vehicles with better GPS receivers does not influence the swarm navigation performance, because this term is invariant with respect to any vehicle permutation (which follows from the fact that $H_i^{gps} = H_j^{gps}$ for every $i, j = 1, \dots, n$). However, a better camera sensor can influence the navigation performance. This can be studied by looking at the third term of the equation above.
Now, if the attitudes and relative vehicle positions are fixed, then $H_j = H_j(t_k)$ and $\Lambda_j = \Lambda_j(t_k)$ do not depend on $t_k$. From equation (29), it follows that
$$\mathrm{Tr}\big(I^\Pi(t_k)\big) = \mathrm{Tr}\big(I^-(t_k)\big) + 3\sum_{j=1}^n\Big((\sigma_j^p)^{-2} + (\sigma_j^v)^{-2}\Big) \qquad (31)$$
$$\qquad\qquad + \sum_{j=1}^n \big(\sigma_{f(j)}^c\big)^{-2}\,\mathrm{Tr}\Big(H_j^{cT}\big(\Lambda_j\Lambda_j^T\big)^{-1}H_j^c\Big). \qquad (32)$$

We are interested in solving the following problem: given $I^-(t_k)$, find the argument of
$$\max_{\{f : I \to I,\ \text{bijective}\}} \mathrm{Tr}\big(I^\Pi(t_k)\big).$$
The solution of this problem says where the vehicles with better camera sensors should be located in order to achieve the best navigation performance for the swarm. We have at most $n!$ different formations for the vehicles. Although the solution above can be found by solving $n!$ Kalman filters, this is numerically intractable if we have several vehicles in the formation. The following results allow us to find the solution of the problem above.
Lemma 6.1. Let $A_1, A_2, \dots, A_n \in \mathbb{R}^{n\times n}$ be positive semidefinite matrices such that $\mathrm{Tr}(A_1) \geq \mathrm{Tr}(A_2) \geq \dots \geq \mathrm{Tr}(A_n)$, and let $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n \geq 0$. Then,
$$\mathrm{Tr}\left(\sum_{i=1}^n \lambda_i A_i\right) \geq \mathrm{Tr}\left(\sum_{i=1}^n \mu_i A_i\right),$$
for every $(\mu_1, \mu_2, \dots, \mu_n)$ obtained by permuting $(\lambda_1, \lambda_2, \dots, \lambda_n)$.
Proof. The proof follows from the linearity of the trace operator, the fact that $\mathrm{Tr}(A_i) \geq 0$ (since $A_i \geq 0$), and the rearrangement inequality applied to the sequences $\{\lambda_i\}$ and $\{\mathrm{Tr}(A_i)\}$.
It makes sense to study the update step of the information filter, since the different quality sensors play their role in this step. Since the inertial sensors are involved in the prediction step, and every vehicle carries a similar inertial navigation system, it is expected that, for a given $I(t_{k-1})$ (or $P(t_{k-1})$), the same matrix $I^-(t_k)$ is obtained. In fact, for every permutation matrix $\Pi$,
$$I^-(t_k) = P^-(t_k)^{-1} = \big(A P(t_{k-1}) A^T + B\,\Pi\,\mathrm{diag}(Q, \dots, Q)\,\Pi^T B^T\big)^{-1}, \qquad (33)$$
does not depend on the vehicles' locations, since $\Pi\,\mathrm{diag}(Q, \dots, Q)\,\Pi^T = \mathrm{diag}(Q, \dots, Q)$ for every $\Pi$.
The previous results allow us to give a positive answer to the question that motivated this section. Not only have we proved that how the vehicles are distributed in the formation influences the navigation performance, but furthermore we have a result that tells us which is the best distribution.

Figure 3: Trajectory of one UAV with respect to an initial position (East/North). Straight flight (0 to 100 sec). Right turn (100 to 235 sec). Left turn (235 to 370 sec).
Theorem 6.2. Suppose we have a formation of $n \in \mathbb{N}$ vehicles such that, for every $t_k, t_l > t_0$, $p_i(t_k) - p_j(t_k) = p_i(t_l) - p_j(t_l)$ and $C_i(t_k) = C_i(t_0)$, with $i, j = 1, \dots, n$. Suppose that the external sensors satisfy $0 < \sigma_1^c \leq \sigma_2^c \leq \dots \leq \sigma_n^c$, and the inertial sensor errors satisfy $Q = Q_i(t_k) = E\big(\xi_i(t_k)\xi_i(t_k)^T\big) > 0$, for every $i = 1, \dots, n$. Then, the best formation flying configuration satisfies
$$\mathrm{Tr}\Big(H_1^{cT}\big(\Lambda_1\Lambda_1^T\big)^{-1}H_1^c\Big) \geq \dots \geq \mathrm{Tr}\Big(H_n^{cT}\big(\Lambda_n\Lambda_n^T\big)^{-1}H_n^c\Big).$$
Proof. From Lemma 6.1, equations (31) and (33), the proof follows by induction.
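In practice, Theorem 6.2 suggests a simple allocation rule: pair the vehicles with the best cameras with the slots in the formation whose camera geometry is most informative. The following sketch (our own, under the theorem's assumption of a rigid formation) implements that rule:

```python
import numpy as np

def allocate_cameras(sigma_c, slot_matrices):
    """Assign vehicles to formation slots following Theorem 6.2.
    sigma_c: list of camera noise std. devs., one per vehicle,
    slot_matrices: list of per-slot matrices H_j^c.T (Lambda_j Lambda_j^T)^-1 H_j^c.
    Returns {slot: vehicle}: the best camera goes to the largest trace."""
    traces = [np.trace(A_j) for A_j in slot_matrices]
    slots_by_trace = np.argsort(traces)[::-1]        # largest trace first
    vehicles_by_quality = np.argsort(sigma_c)        # smallest sigma_c first
    return {int(s): int(v) for s, v in zip(slots_by_trace, vehicles_by_quality)}
```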

7 Simulation Results

We present simulation results to validate the proposed INS/GPS/camera integration strategy.
The selected case study is a swarm of 5 UAVs flying at a constant altitude of 1.2 m above the ground and following the trajectory shown in Figure 3. Each vehicle follows the same trajectory relative to its initial point but with a different initial orientation, as shown in Figure 4.
The vehicles have INS/GPS/camera navigation systems with the following specifications:
INS:
rate gyro noise: $1.5 \times 10^{-3}$ rad/sec
accelerometer noise: $5 \times 10^{-3}$ m/sec$^2$
initial condition errors: position 10 m; orientation 10°
GPS:
position noise: 15 m
velocity noise: 0.2 m/sec

Figure 4: Trajectories of the 5 UAVs

Figure 5: Position error of the INS/GPS navigation system


Camera:
measurement noise: 0.5
mask: 70° (view angle from UAV nose)
In the following subsections, we present the navigation results for vehicle 1, showing first the results of the INS/GPS integrated navigation system and then contrasting them with the INS/GPS/camera system.

7.1 INS/GPS integrated navigation

Figure 5 shows the position errors obtained by the INS/GPS navigation system. It can be seen that the error starts at approximately 10 m due to the initial condition error and over time is reduced to around 7 m.
Figure 6 shows the orientation error. The initial condition error (around 10°) is reduced in pitch and roll when the simulation starts. This is due to the information provided by the accelerometers (the vehicle must be aligned with respect to the gravity vector). The yaw error takes longer to decrease and, in


Figure 6: Orientation error of the INS/GPS navigation system

Figure 7: Vehicles in the field of view of camera 1 over time


fact, reaches a steady state value of approximately 5°. The correction of the yaw angle is affected by the trajectory followed by the vehicle, given that it provides different levels of observability.

7.2 INS/GPS/camera integrated navigation

Figure 7 shows the vehicles that are visible to the camera of vehicle 1 over time. It can be seen that around t = 120 sec the first vehicle is detected (vehicle 5). In Figure 8, it can be noted that at that time the position error calculated by the navigation system is significantly reduced. Furthermore, it can also be seen that around t = 250 sec (mainly in the y dimension) the quality of the estimation starts degrading, in agreement with the period in Figure 7 when no vehicles are in the camera's field of view. This illustrates the improvement to the navigation system due to the use of the camera.
Figure 9 shows more clearly the contribution of the camera to the navigation system. In particular, around t = 150 sec, when at least two vehicles are in the camera's field of view, the error in yaw is significantly reduced. In fact, the orientation error stays in the tenths of a degree, in comparison to the 5° obtained by the INS/GPS system.


Figure 8: Position error of the INS/GPS/camera navigation system

Figure 9: Orientation error of the INS/GPS/camera navigation system


8 Conclusions

In our current testbed of UAV coordinated flight, the available hardware does not allow the addition of onboard sensors. Therefore the navigation system must rely on the information provided by gyros, accelerometers, GPS, and camera. This article presented an algorithm to optimally fuse the information of these sensors and showed through simulation results the performance improvement provided by the use of the camera. Such improvement has a cost. The INS/GPS navigation system of each vehicle is independent of the other vehicles in the platoon. With the addition of the camera, the errors in the navigation system of a vehicle are affected by those of the others. This translates into an increase of the computational cost and the need for inter-vehicle communication. In fact, the optimal INS/GPS/camera fusion must be approached from a centralized architecture, where the filter takes into account the navigation errors of all vehicles. Sub-optimal solutions can be implemented with decentralized approaches, which will be the subject of future work.

References
[1] R. M. Rogers, Applied Mathematics in Integrated Navigation Systems, AIAA, 2003.
[2] T. Hoshizaki, D. Andrisani II, A. W. Braun, A. K. Mulyana, and J. S. Bethel. Performance of integrated electro-optical navigation systems. Navigation: Journal of the Institute of Navigation, 51(2): 101-122, Summer 2004.
[3] M. Veth, R. Anderson, F. Webber, and M. Nielsen. Tightly-coupled INS,
GPS, and imaging sensors for precision geolocation. Technical report, Air
Force Institute of Technology, Department of Electrical and Computer
Engineering, Wright Patterson AFB, OH, 2008.
[4] Adel Fakih, Samantha Ng, and John Zelek. Improving GPS localisation with vision and inertial sensing. Geomatica, 63(4): 383-396, 2009.
[5] Priot, B., Peillon, C., Calmettes, V., Sahmoudi, M. Performance Assessment of an Ultra-Tightly Coupled Vision-Aided INS/GNSS Navigation
System. Proceedings of the 2011 International Technical Meeting of The
Institute of Navigation, San Diego, CA, January 2011, pp. 652-661.
[6] P. J. Bristeau, F. Callou, D. Vissière, N. Petit. The Navigation and Control technology inside the AR.Drone micro UAV. 18th IFAC World Congress, 2011.
[7] Li, Dayi Wang, Xiangyu Huang. Study on the Observability Analysis
Based on the Trace of Error Covariance Matrix for Spacecraft Autonomous
Navigation. 10th IEEE International Conference on Control and Automation, China, 2013.
[8] Jinling Wang, Matthew Garratt, Andrew Lambert, Jack Jianguo Wang, Songlai Han, David Sinclair. Integration of GPS/INS/VISION sensors to navigate unmanned aerial vehicles. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B1, Beijing, 2008.
[9] C. Yang, L. Kaplan, E. Blasch, Performance Measures of Covariance and Information Matrices in Resource Management for Target State Estimation. IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 3, 2012.

