Abstract
In this article we propose an integrated navigation system for multiple small UAVs flying in formation. A data fusion algorithm uses measurements from the IMU, a GPS receiver and a camera, allowing the positioning information of the surrounding vehicles to be used to improve performance.
Introduction
Hoshizaki et al. [2] and Veth et al. [3] studied how the inclusion of feature tracking methods improves the performance of tightly-coupled INS/GPS navigation systems. In [4], Fakih et al. proposed an adaptive method for merging information provided by a camera with inertial sensors and GPS. More recently, Priot et al. [5] proposed a method for performing inertial sensor calibration, which merges information of a deeply-coupled INS/GPS navigation system and images delivered by a camera.
The main contribution of this work is an INS/GPS navigation algorithm that also uses camera measurements, allowing each vehicle to obtain information from the other UAVs in the platoon. This information can be used to fuse and share navigation information among the vehicles in the formation.
The reason for using a camera to provide inter-vehicle relative information is mainly practical, since it is a small, light, and low cost sensor. For instance, a low cost vehicle such as the Parrot AR.Drone quadrotor [6] can be used as a testbench for multi-UAV experimentation. These vehicles have an onboard IMU and video cameras. Their main limitation is their reduced payload capability. Despite that, a small GPS receiver (< 100 g) can be installed onboard. The aim is to achieve the most accurate navigation system possible with the available sensors, which motivates the present work.
The outline of this paper is as follows: in Section 2 we present the notation and preliminaries. In Section 3 we introduce integrated navigation algorithms, focusing on the dynamics that describes the error behaviour of these systems, which is crucial for implementing the integrated navigation algorithm. Section 4 is devoted to introducing a simplified model for a camera. First, we
Preliminaries
In what follows, $\mathbb{R}^n$ denotes the space of (column) vectors with $n$ real components. Given a matrix $A \in \mathbb{R}^{n\times m}$, $A^T \in \mathbb{R}^{m\times n}$ denotes its transpose. $SO(3)$ denotes the special orthogonal group of matrices in $\mathbb{R}^{3\times 3}$, i.e., orthogonal matrices with determinant equal to 1 (which is the set of matrices representing rotations in $\mathbb{R}^3$).
The function $S : \mathbb{R}^3 \to \mathbb{R}^{3\times 3}$ is defined as the function that assigns to $x \in \mathbb{R}^3$ the unique skew-symmetric matrix $S(x) \in \mathbb{R}^{3\times 3}$ such that $S(x)y = x \times y$ for every $y \in \mathbb{R}^3$, where $\times$ denotes the standard vector product. In fact, it is easy to see that if $x = (x_1, x_2, x_3)^T$ then
$$ S(x) = \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix}. $$
Given $k$ matrices $A_1, A_2, \dots, A_k$ in $\mathbb{R}^{n\times m}$, we define the $\mathbb{R}^{kn\times km}$ matrix
$$ \operatorname{diag}(A_1, A_2, \dots, A_k) = \begin{bmatrix} A_1 & 0 & \dots & 0 \\ 0 & A_2 & \dots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \dots & A_k \end{bmatrix}. $$
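Both constructions can be checked numerically. The sketch below (plain NumPy, with toy inputs) implements $S(\cdot)$ and the block operator $\operatorname{diag}(A_1, \dots, A_k)$ exactly as defined above:

```python
import numpy as np

def skew(x):
    """Return S(x), the skew-symmetric matrix with S(x) @ y == x cross y."""
    x1, x2, x3 = x
    return np.array([[0.0, -x3,  x2],
                     [ x3, 0.0, -x1],
                     [-x2,  x1, 0.0]])

def block_diag(*mats):
    """diag(A1, ..., Ak): place k (n x m) blocks on the diagonal of a kn x km matrix."""
    k = len(mats)
    n, m = mats[0].shape
    out = np.zeros((k * n, k * m))
    for i, A in enumerate(mats):
        out[i * n:(i + 1) * n, i * m:(i + 1) * m] = A
    return out

x = np.array([1.0, 2.0, 3.0])
y = np.array([-0.5, 0.3, 2.0])
assert np.allclose(skew(x) @ y, np.cross(x, y))   # S(x)y = x x y

D = block_diag(np.eye(2), 2 * np.eye(2), 3 * np.eye(2))
assert D.shape == (6, 6)
```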
real time [1]. This task is performed by the inertial navigation system (INS). To perform this operation, it is required to know the initial position, velocity, and orientation conditions. Given that this calculation is based on the integration of differential equations, accurate knowledge of the initial conditions, as well as the accuracy of the inertial instruments, is crucial for the navigation. Otherwise the errors can grow in an unbounded fashion with time. This is the main drawback of the INS, and it is not negligible in low cost inertial instruments, in particular those based on MEMS technologies.
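This unbounded growth can be illustrated with a minimal simulation: double-integrating an accelerometer signal affected by a constant bias and white noise. The bias and noise figures below are assumed, MEMS-like values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 600.0                 # 100 Hz IMU, 10 minutes of flight
n = int(T / dt)
bias = 0.02                         # accelerometer bias [m/s^2] (assumed MEMS-grade)
noise = 0.05                        # accelerometer white noise std [m/s^2] (assumed)

v_err, p_err = 0.0, 0.0
p_hist = np.empty(n)
for k in range(n):
    a_err = bias + noise * rng.standard_normal()
    v_err += a_err * dt             # velocity error integrates the acceleration error
    p_err += v_err * dt             # position error integrates the velocity error
    p_hist[k] = p_err

# the bias alone gives p_err ~ 0.5 * bias * T^2: quadratic, unbounded growth
assert abs(p_hist[-1]) > 1000.0
```

With these values the position error reaches kilometers after ten minutes of pure inertial navigation, which is why external aiding (GPS, camera) is needed.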
3.1
An INS error model for each vehicle in the platoon of UAVs is presented. This model is of great importance in the design of the integrated navigation system. Let $p_i, v_i : \mathbb{R} \to \mathbb{R}^3$ and $C_i : \mathbb{R} \to SO(3)$ be functions of time $t \in \mathbb{R}$ representing the (true) position, velocity and attitude of vehicle $i$, respectively. Let $\hat p_i, \hat v_i : \mathbb{R} \to \mathbb{R}^3$ and $\hat C_i : \mathbb{R} \to SO(3)$ be the position, velocity and attitude of vehicle $i$ calculated by the INS.
As usual, the INS position and velocity errors are defined as
$$ \delta p_i = \hat p_i - p_i, \qquad (1) $$
$$ \delta v_i = \hat v_i - v_i. \qquad (2) $$
The attitude error $\psi_i : \mathbb{R} \to \mathbb{R}^3$ is defined through
$$ \hat C_i(t) = M(\psi_i(t))\, C_i(t), \qquad (3) $$
where, for $\psi \in \mathbb{R}^3$,
$$ M(\psi) = I + \frac{\sin\|\psi\|}{\|\psi\|}\, S(\psi) + \frac{1 - \cos\|\psi\|}{\|\psi\|^2}\, S^2(\psi) \in SO(3). $$
Note that, for $\|\psi\|$ small enough (neglecting second order terms), $M(\psi)$ can be approximated by $\tilde M(\psi) = I + S(\psi) \in \mathbb{R}^{3\times 3}$. Then, observe that, for a given time $t \in \mathbb{R}$, the attitude error has an interesting physical interpretation. Since $\hat C(t) = (I + S(\psi(t)))\,C(t)$ for small $\psi(t) \in \mathbb{R}^3$, the matrix $\tilde M(\psi(t)) = I + S(\psi(t))$ approximates the rotation matrix that compensates the direction cosine matrix calculated by the INS.
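As a quick sanity check of the attitude error parametrization, the sketch below evaluates the Rodrigues formula for a small attitude error and verifies that the first-order approximation $I + S(\psi)$ is accurate to second order (NumPy, toy values):

```python
import numpy as np

def skew(x):
    x1, x2, x3 = x
    return np.array([[0.0, -x3, x2], [x3, 0.0, -x1], [-x2, x1, 0.0]])

def M(psi):
    """Rodrigues formula: I + sin|psi|/|psi| S + (1 - cos|psi|)/|psi|^2 S^2."""
    a = np.linalg.norm(psi)
    S = skew(psi)
    return np.eye(3) + np.sin(a) / a * S + (1 - np.cos(a)) / a**2 * (S @ S)

psi = np.array([0.01, -0.02, 0.015])   # small attitude error [rad] (toy value)
R = M(psi)

# M(psi) is a rotation: orthogonal with determinant 1
assert np.allclose(R @ R.T, np.eye(3), atol=1e-12)
assert np.isclose(np.linalg.det(R), 1.0)

# first-order approximation I + S(psi): error is second order in |psi|
approx = np.eye(3) + skew(psi)
assert np.linalg.norm(R - approx) < np.linalg.norm(psi) ** 2
```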
In the vicinity of $(\delta p_i^T, \delta v_i^T, \psi_i^T) = 0 \in \mathbb{R}^9$ the INS error dynamics can be modeled as [1]
$$ \dot X_i(t) = A_i(t) X_i(t) + B_i(t)\,\xi_i(t), \qquad (4) $$
where $X_i(t) = \big(\delta p_i(t)^T,\ \delta v_i(t)^T,\ \psi_i(t)^T\big)^T$, $A_i(t) \in \mathbb{R}^{9\times 9}$ is the INS error dynamics matrix of [1], whose nonzero blocks are the identity coupling $\delta v_i$ into $\delta \dot p_i$ and a gravity-gradient term built from $g$ and $p_i(t)p_i(t)^T$, and
$$ B_i(t) = \begin{bmatrix} 0 & 0 \\ \hat C_i(t) & 0 \\ 0 & \hat C_i(t) \end{bmatrix} \in \mathbb{R}^{9\times 6}. \qquad (5) $$
To design an integrated navigation algorithm, the IMU measurement perturbations are modeled as (stationary) zero-mean gaussian stochastic processes $\{\xi_i(t_k)\}_{k\in\mathbb{N}}$ with realisations in $\mathbb{R}^6$ and covariance matrices $Q_i(t_k) = E\big(\xi_i(t_k)\xi_i(t_k)^T\big) > 0$. Thus, the INS error is the stochastic process $\{X_i(t_k)\}_{k\in\mathbb{N}}$, with realisations in $\mathbb{R}^9$, given by equation (4).
Suppose that we have $n \in \mathbb{Z}$ UAVs; then we can define the stochastic process $\{X(t_k)\}_{k\in\mathbb{N}}$, with realisations in $\mathbb{R}^{9n}$, given by $X^T(t_k) = [X_1^T(t_k), X_2^T(t_k), \dots, X_n^T(t_k)]$, $k \in \mathbb{N}$, which models the INS error of each UAV. Then, from the discretization of equation (4) it follows that
$$ X(t_{k+1}) = A(t_k)X(t_k) + B(t_k)\,\xi(t_k), \qquad (6) $$
where
$$ A(t_k) = \operatorname{diag}\big(A_1(t_k), A_2(t_k), \dots, A_n(t_k)\big), \quad B(t_k) = \operatorname{diag}\big(B_1(t_k), B_2(t_k), \dots, B_n(t_k)\big), \quad \xi(t_k) = \begin{bmatrix} \xi_1(t_k) \\ \xi_2(t_k) \\ \vdots \\ \xi_n(t_k) \end{bmatrix}. $$
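The block-diagonal structure of equation (6) means that, without camera measurements, each vehicle's error state evolves independently of the others. A toy NumPy check (random matrices standing in for $A_i(t_k)$ and $B_i(t_k)$, not the actual INS matrices):

```python
import numpy as np

def block_diag(mats):
    rows = sum(A.shape[0] for A in mats)
    cols = sum(A.shape[1] for A in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for A in mats:
        out[r:r + A.shape[0], c:c + A.shape[1]] = A
        r += A.shape[0]
        c += A.shape[1]
    return out

n = 3                                   # three UAVs (toy value)
rng = np.random.default_rng(1)
A_i = [np.eye(9) + 0.01 * rng.standard_normal((9, 9)) for _ in range(n)]
B_i = [rng.standard_normal((9, 6)) for _ in range(n)]

A = block_diag(A_i)                     # 9n x 9n
B = block_diag(B_i)                     # 9n x 6n
X = rng.standard_normal(9 * n)
xi = rng.standard_normal(6 * n)
X_next = A @ X + B @ xi                 # equation (6): all vehicles at once

# block-diagonal structure: vehicle i evolves independently of the others
for i in range(n):
    Xi_next = A_i[i] @ X[9 * i:9 * (i + 1)] + B_i[i] @ xi[6 * i:6 * (i + 1)]
    assert np.allclose(X_next[9 * i:9 * (i + 1)], Xi_next)
```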
Suppose that we have a fleet of $n$ UAVs with positions $\{p_1(t), p_2(t), \dots, p_n(t)\}$. The purpose of this section is to present the method implemented to determine the position of vehicle $i$ by observing, with a camera, $k_i(t) \in \mathbb{Z}$ vehicles with known positions $p_j(t) \in \mathbb{R}^3$, $j \in I_i(t) \subset \{1, 2, \dots, n\}$, $j \neq i$. Observe that the number of vehicles in the field of view is not the same for each vehicle (this is the reason for the subindex $i$ in $k_i$) and that it is a function of time. For example, suppose that we have five UAVs flying together and that, at time $t \in \mathbb{R}$, vehicle 1 can see vehicles $I_1(t) = \{2, 4, 5\}$ ($k_1(t) = 3$).
Let $x^b, y^b, z^b$ denote the axes of the camera (body) frame, with $z^b$ the optical axis,¹ and let $d_f$ be the focal distance of the camera of vehicle $i$. The image coordinates $(\alpha_{ij}, \beta_{ij})$ of vehicle $j$ measured by the camera of vehicle $i$ are
$$ \alpha_{ij} = d_f\,\frac{x^{bT} C_i^T (p_j - p_i)}{z^{bT} C_i^T (p_j - p_i)}, \qquad (7) $$
$$ \beta_{ij} = d_f\,\frac{y^{bT} C_i^T (p_j - p_i)}{z^{bT} C_i^T (p_j - p_i)}, $$ $$ \qquad (8) $$
so that the projection $q_{ij}$ of the position of vehicle $j$ on the image plane satisfies
$$ q_{ij} - p_i = \frac{d_f}{(p_j - p_i)^T C_i z^b}\,(p_j - p_i). \qquad (9) $$
¹This is the reason why the supraindex $b$ is used: it refers to the body frame, as it is usually called.
From equations (7)-(8) it follows that
$$ \tilde M_{ij}\, C_i^T (p_j - p_i) = 0, \qquad (10) $$
where
$$ \tilde M_{ij} = \begin{bmatrix} d_f & 0 & -\alpha_{ij} \\ 0 & d_f & -\beta_{ij} \end{bmatrix}, \qquad \tilde M_{ik} = \begin{bmatrix} d_f & 0 & -\alpha_{ik} \\ 0 & d_f & -\beta_{ik} \end{bmatrix}, \qquad (11) $$
and analogously for every vehicle $k \in I_i(t)$ in the field of view.
Stacking equation (10) for the $k_i$ vehicles $j_1, \dots, j_{k_i} \in I_i$ in the field of view, and replacing the true attitude by the one available to vehicle $i$, the camera-based position $p_i^c$ is defined as the solution of
$$ \begin{bmatrix} \tilde M_{ij_1} \\ \vdots \\ \tilde M_{ij_{k_i}} \end{bmatrix} \hat C_i^T\, p_i^c = \begin{bmatrix} \tilde M_{ij_1} \hat C_i^T\, p_{j_1} \\ \vdots \\ \tilde M_{ij_{k_i}} \hat C_i^T\, p_{j_{k_i}} \end{bmatrix}. \qquad (12) $$
The supra-index $c$ in $p_i^c$ is to remark that this position is given by the camera. Let
$$ \mathbb{1}_{I_i}(j) = \begin{cases} 1 & \text{if } j \in I_i, \\ 0 & \text{if } j \notin I_i, \end{cases} $$
and $M_{ij} = \mathbb{1}_{I_i}(j)\,\tilde M_{ij}$. Notice that equation (12) can be written as
$$ \begin{bmatrix} M_{i1} \\ \vdots \\ M_{in} \end{bmatrix} \hat C_i^T\, p_i^c = \begin{bmatrix} M_{i1}\hat C_i^T p_1 \\ \vdots \\ M_{in}\hat C_i^T p_n \end{bmatrix}, \qquad (13) $$
then
$$ p_i^c = \hat C_i \Big(\sum_{j=1}^{n} M_{ij}^T M_{ij}\Big)^{-1} \sum_{j=1}^{n} M_{ij}^T M_{ij}\,\hat C_i^T p_j. \qquad (14) $$
The matrix $\hat C_i$ needed to solve equation (12) can be obtained from the navigation system of vehicle $i$. The positions $p_j$ of the other vehicles are provided by their respective navigation systems. As a result, inter-vehicle communication is required to share such navigation information.
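The least-squares reconstruction of equations (12)-(14) can be sketched as follows, assuming a unit focal distance, an identity attitude and hand-picked vehicle positions (all illustrative values, not from the paper):

```python
import numpy as np

d_f = 1.0                                   # focal distance (assumed unit value)

# true positions: observer (vehicle i) and three observed vehicles (toy values)
p_i = np.array([0.0, 0.0, 10.0])
p_js = [np.array([5.0, 2.0, 30.0]),
        np.array([-3.0, 4.0, 25.0]),
        np.array([1.0, -6.0, 40.0])]
C_i = np.eye(3)                             # attitude: body frame = navigation frame

rows, rhs = [], []
for p_j in p_js:
    r = C_i.T @ (p_j - p_i)                 # line of sight in body coordinates
    alpha = d_f * r[0] / r[2]               # image coordinates of vehicle j
    beta = d_f * r[1] / r[2]
    M = np.array([[d_f, 0.0, -alpha],
                  [0.0, d_f, -beta]])       # so that M @ C_i.T @ (p_j - p_i) = 0
    rows.append(M @ C_i.T)
    rhs.append(M @ C_i.T @ p_j)

# least-squares solution of the stacked system, as in equation (14)
A = np.vstack(rows)
b = np.concatenate(rhs)
p_est, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(p_est, p_i, atol=1e-9)   # the observer position is recovered
```

With noise-free image coordinates and at least two distinct lines of sight, the stacked system has rank 3 and the observer position is recovered exactly; the next subsection analyses what happens under perturbations.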
4.1
Perturbing equation (13), i.e., replacing $\hat C_i = (I + S(\psi_i))C_i$, the positions $p_j$ by $p_j + \delta p_j$, and the measured image coordinates by $\alpha_{ij} + \delta\alpha_{ij}$, $\beta_{ij} + \delta\beta_{ij}$, and neglecting second order terms, one obtains
$$ \begin{bmatrix} M_{i1} \\ \vdots \\ M_{in} \end{bmatrix} \hat C_i^T\, p_i^c = \begin{bmatrix} M_{i1}\hat C_i^T S(p_1 - p_i) \\ \vdots \\ M_{in}\hat C_i^T S(p_n - p_i) \end{bmatrix}\psi_i + \begin{bmatrix} M_{i1}\hat C_i^T p_1 \\ \vdots \\ M_{in}\hat C_i^T p_n \end{bmatrix} + \begin{bmatrix} M_{i1}\hat C_i^T \delta p_1 \\ \vdots \\ M_{in}\hat C_i^T \delta p_n \end{bmatrix} + \begin{bmatrix} \delta M_{i1}\hat C_i^T (p_1 - p_i) \\ \vdots \\ \delta M_{in}\hat C_i^T (p_n - p_i) \end{bmatrix}, \qquad (15) $$
where $\delta M_{ij} = \mathbb{1}_{I_i}(j)\begin{bmatrix} 0 & 0 & -\delta\alpha_{ij} \\ 0 & 0 & -\delta\beta_{ij} \end{bmatrix}$.
Then,
$$ \delta p_i^c(t) = \Gamma_i(t)\,\psi_i(t) + \Pi_i(t)\begin{bmatrix} \delta p_1(t) \\ \vdots \\ \delta p_n(t) \end{bmatrix} + \Lambda_i(t)\begin{bmatrix} \delta\alpha_{i1}(t) \\ \delta\beta_{i1}(t) \\ \vdots \\ \delta\alpha_{in}(t) \\ \delta\beta_{in}(t) \end{bmatrix}, \qquad (16) $$
where
$$ \Gamma_i(t) = \hat C_i(t)\Big(\sum_{j=1}^{n} M_{ij}(t)^T M_{ij}(t)\Big)^{-1} \sum_{j=1}^{n} M_{ij}(t)^T M_{ij}(t)\, \hat C_i(t)^T S\big(p_j(t) - p_i(t)\big), \qquad (17) $$
$\Pi_i(t)$ is the block matrix whose $k$-th $3\times 3$ block is
$$ (\Pi_i(t))_k = \hat C_i(t)\Big(\sum_{j=1}^{n} M_{ij}(t)^T M_{ij}(t)\Big)^{-1} M_{ik}(t)^T M_{ik}(t)\,\hat C_i^T(t), $$
and $\Lambda_i(t)$ is the block matrix whose $k$-th $3\times 2$ block is
$$ (\Lambda_i(t))_k = \hat C_i(t)\Big(\sum_{j=1}^{n} M_{ij}(t)^T M_{ij}(t)\Big)^{-1} M_{ik}(t)^T\, z^{bT}\hat C_i^T(t)\big(p_k(t) - p_i(t)\big). $$
We will use the notation $(\Pi_i(t))_k$ and $(\Lambda_i(t))_k$ for these blocks in what follows.
The supra-index $c$ in $\delta p_i^c$ denotes that this is the error in the position calculated with the camera, and it should not be confused with $\delta p_i$, which is the position error of the INS.
Notice that the method for calculating the position of vehicle $i$ with a camera has three different sources of error. The first term (which depends on $\psi_i$) is due to the attitude error given by the navigation system of vehicle $i$. The second term (which depends on $\delta p_j$) represents the uncertainty in the position of vehicle $j$, given by the navigation system of vehicle $j$. The last source (which depends on $\delta\alpha_{ij}$ and $\delta\beta_{ij}$) is given by the errors in the determination of the image points $(\alpha_{ij}, \beta_{ij})$, i.e., camera measurement errors.
Integrated navigation
5.1
Assume that we have $n > 2$ UAVs flying and suppose that, in addition to the INS, each vehicle is equipped with a GPS receiver providing its position and velocity. Then, for each vehicle $i = 1, \dots, n$ it is possible to implement an integrated navigation system merging the information provided by the INS and the GPS. Here we propose the classical loosely-coupled INS/GPS system.
[Figure: block diagram of the loosely-coupled INS/GPS integration.]
Suppose that $\{t_k\}_{k\in\mathbb{N}}$ are the discrete times at which a GPS measurement is available. According to equation (4), the state model of the Kalman filter of the integrated navigation system is given by
$$ \begin{bmatrix} \delta p_i(t_{k+1}) \\ \delta v_i(t_{k+1}) \\ \psi_i(t_{k+1}) \end{bmatrix} = A_i(t_k)\begin{bmatrix} \delta p_i(t_k) \\ \delta v_i(t_k) \\ \psi_i(t_k) \end{bmatrix} + B_i(t_k)\,\xi_i(t_k), \qquad (18) $$
$$ y_i^{gps}(t_k) = H_i^{gps}(t_k)\begin{bmatrix} \delta p_i(t_k) \\ \delta v_i(t_k) \\ \psi_i(t_k) \end{bmatrix} + \eta_i^{gps}(t_k), \qquad (19) $$
where
$$ H_i^{gps}(t_k) = \begin{bmatrix} I_{3\times 3} & 0 & 0 \\ 0 & I_{3\times 3} & 0 \end{bmatrix} \ \text{for every } t_k, \qquad \eta_i^{gps}(t_k) = \begin{bmatrix} \eta_{p_i}(t_k) \\ \eta_{v_i}(t_k) \end{bmatrix}. $$
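A single predict/update cycle of such a loosely-coupled filter might look as follows. All numerical values (state transition, noise levels, initial covariance, innovation) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# 9-state INS error: [dp (3), dv (3), psi (3)]; toy discrete model
dt = 1.0
A = np.eye(9)
A[0:3, 3:6] = dt * np.eye(3)                 # position error integrates velocity error
H_gps = np.hstack([np.eye(6), np.zeros((6, 3))])   # GPS observes dp and dv only

Q = 1e-4 * np.eye(9)                         # process noise (assumed)
R = np.diag([4.0] * 3 + [0.04] * 3)          # GPS: 2 m position / 0.2 m/s velocity std

P = np.diag([100.0] * 3 + [1.0] * 3 + [0.03] * 3)   # initial covariance (assumed)
x = np.zeros(9)
y = np.array([8.0, -9.5, 10.2, 0.1, -0.2, 0.05])    # synthetic GPS innovation

# one predict/update cycle of the loosely-coupled Kalman filter
P_pred = A @ P @ A.T + Q
S = H_gps @ P_pred @ H_gps.T + R
K = P_pred @ H_gps.T @ np.linalg.inv(S)
x = x + K @ (y - H_gps @ x)
P = (np.eye(9) - K @ H_gps) @ P_pred

# the measurement update can only shrink the estimated uncertainty
assert np.trace(P) < np.trace(P_pred)
```

Note that the attitude error is not measured directly by the GPS; it is corrected only through its coupling with the velocity error, which is why the yaw error converges slowly in the simulations of Section 7.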
5.2
The scheme proposed for integrating the camera measurements with the INS is similar to that of the GPS measurements. Suppose that, at time $t_k$, vehicle $i$ has $k_i$ (different) vehicles $\{p_{j_r}\}_{r=1,\dots,k_i}$, $j_r \in I_i$, in its field of view, satisfying the condition of Remark 4.1. Applying equation (12) we can calculate the position $p_i$; we call this measurement $p_i^c$, which is corrupted by an error $\delta p_i^c$ given by equation (16). We define the measurement vector as
$$ y_i^c := \hat p_i - p_i^c = \delta p_i - \delta p_i^c. $$
Observe that $\delta p_i^c$ does not depend only on $X_i = (\delta p_i^T, \delta v_i^T, \psi_i^T)^T$ but also on every $X_{j_r}$, $j_r \in I_i$, $r = 1, \dots, k_i$. Furthermore, since the vehicles in the field of view of vehicle $i$ change with time, all the states $X_i$, $i = 1, \dots, n$ must be taken into account in the Kalman filter model. In fact, by equation (16),
$$ y_i^c(t_k) = \big[\, H_{i1}^c \;\; H_{i2}^c \;\; \dots \;\; H_{in}^c \,\big](t_k) \begin{bmatrix} X_1(t_k) \\ X_2(t_k) \\ \vdots \\ X_n(t_k) \end{bmatrix} - \Lambda_i(t_k)\begin{bmatrix} \delta\alpha_{i1}(t_k) \\ \delta\beta_{i1}(t_k) \\ \vdots \\ \delta\alpha_{in}(t_k) \\ \delta\beta_{in}(t_k) \end{bmatrix}, \qquad (20) $$
where
$$ H_{ij}^c(t_k) = \begin{cases} \big[\, I_{3\times 3} - (\Pi_i(t_k))_i \;\; 0 \;\; -\Gamma_i(t_k) \,\big] & \text{if } i = j, \\[4pt] \big[\, -(\Pi_i(t_k))_j \;\; 0 \;\; 0 \,\big] & \text{if } i \neq j. \end{cases} \qquad (21) $$
By equation (6), the state model of the Kalman filter for the INS/Camera integrated navigation system is
$$ X(t_{k+1}) = A(t_k)X(t_k) + B(t_k)\,\xi(t_k), \qquad (22) $$
$$ y_i^c(t_k) = H_i^c(t_k)X(t_k) - \Lambda_i(t_k)\,\eta_i^c(t_k), \qquad (23) $$
where $\eta_i^c(t_k) = [\delta\alpha_{i1}(t_k), \delta\beta_{i1}(t_k), \dots, \delta\alpha_{in}(t_k), \delta\beta_{in}(t_k)]^T$ are assumed zero-mean gaussian random noises with covariance matrix $(\sigma_i^c)^2 I_{2n\times 2n} > 0$, and
$$ H_i^c(t_k) = \big[\, H_{i1}^c \;\; H_{i2}^c \;\; \dots \;\; H_{in}^c \,\big](t_k). $$
Suppose that we have a platoon of $n$ vehicles where $m < n$ of these vehicles carry a low cost navigation system and $n - m$ vehicles carry a high quality navigation system. If the vehicles are performing a certain task, the problem we address in this section is how to distribute the vehicles in the formation in order to achieve the best navigation performance; more specifically, where the $n - m$ vehicles with the high quality navigation system should be located. In order to study this problem a performance criterion must be established. Suppose that, for vehicles $i = 1, \dots, n$, the dynamic model for the Kalman filter of the integrated navigation algorithm is given by (see equation (6))
$$ X(t_{k+1}) = A(t_k)X(t_k) + B(t_k)\,\xi(t_k), \qquad (24) $$
$$ Y(t_k) = H(t_k)X(t_k) + D(t_k)\,\eta(t_k), \qquad (25) $$
where $Y^T(t_k) = [y_1^T(t_k), y_2^T(t_k), \dots, y_n^T(t_k)]$ are the external measurements of each one of the $n$ vehicles, and $\eta^T(t_k) = [\eta_1^T(t_k), \eta_2^T(t_k), \dots, \eta_n^T(t_k)]$ the corresponding uncorrelated zero-mean noises with (diagonal) covariance matrix $R(t_k) = \operatorname{diag}\big(R_1(t_k), R_2(t_k), \dots, R_n(t_k)\big) > 0$. More precisely, for $i = 1, \dots, n$,
$$ y_i = \begin{bmatrix} y_i^{gps} \\ y_i^c \end{bmatrix} \in \mathbb{R}^{9}, \qquad \eta_i = \begin{bmatrix} \eta_i^{gps} \\ \eta_i^c \end{bmatrix} \in \mathbb{R}^{2n+6}, $$
$$ R_i(t_k) = \begin{bmatrix} (\sigma_i^p)^2 I_{3\times 3} & 0 & 0 \\ 0 & (\sigma_i^v)^2 I_{3\times 3} & 0 \\ 0 & 0 & (\sigma_i^c)^2 I_{2n\times 2n} \end{bmatrix} > 0, $$
where $(\sigma_i^p)^2, (\sigma_i^v)^2 > 0$ stand for the GPS position and velocity noise variances, and $(\sigma_i^c)^2 > 0$ for the camera noise variance, respectively. Notice that here it is assumed that the noise variance is the same in every component.
The matrix $D(t_k)$ is block diagonal, $D(t_k) = \operatorname{diag}\big(D_1(t_k), D_2(t_k), \dots, D_n(t_k)\big)$, where
$$ D_i(t_k) = \begin{bmatrix} I_{6\times 6} & 0 \\ 0 & -\Lambda_i(t_k) \end{bmatrix} \in \mathbb{R}^{9\times(2n+6)}, $$
for $i = 1, \dots, n$. In what follows we assume that $\Lambda_i(t_k)$, and hence $D_i(t_k)$, are full row rank for all $i = 1, \dots, n$ and $t_k \geq t_0$.
The matrix $H(t_k) = \big[H_1^T(t_k), H_2^T(t_k), \dots, H_n^T(t_k)\big]^T$, where
$$ H_i(t_k) = \begin{bmatrix} 0 & \dots & H_i^{gps}(t_k) & \dots & 0 \\ & & H_i^c(t_k) & & \end{bmatrix} \in \mathbb{R}^{9\times 9n}, $$
for $i = 1, \dots, n$.
6.1
Let $P(t_k)$ and $P^-(t_k)$ be the update and prediction covariance matrices of the Kalman filter corresponding to the platoon (i.e., to equations (24)-(25)). More precisely, if $\hat X(t_k)$ is the Kalman filter estimate of the state $X(t_k)$ given measurements from the initial time $t_0$ until time $t_k$, then
$$ P(t_k) = E\big((X(t_k) - \hat X(t_k))(X(t_k) - \hat X(t_k))^T\big). $$
In a similar way, if $\bar X(t_k)$ is the Kalman filter prediction of the state $X(t_k)$ given measurements from the initial time $t_0$ until time $t_{k-1}$, then
$$ P^-(t_k) = E\big((X(t_k) - \bar X(t_k))(X(t_k) - \bar X(t_k))^T\big). $$
The matrices $P(t_k), P^-(t_k)$ are assumed to be strictly positive definite, which is equivalent to saying that $\{X_i(t_k)\}_{i=1,\dots,n}$ is not linearly dependent with probability one. Also denote $I(t_k) = P(t_k)^{-1}$ and $I^-(t_k) = P^-(t_k)^{-1}$, which are the update and prediction information matrices. These matrices give a measure of the navigation performance of the swarm. In fact, the trace of the matrix $I(t_k)$ is closely related to the observability of the state $X(t_k)$ (see [7]). If $I(t_k)$ increases with time then the navigation error decreases. Besides the trace of the information matrix, there are several scalar measures of how well our filter performs; see for instance [9], which is a survey of different scalar performance measures for tracking and sensor allocation problems.
It is well known that the relation between $I(t_k)$ and $I^-(t_k)$ is given by []
$$ I(t_k) = I^-(t_k) + H(t_k)^T\big(D(t_k)R(t_k)D(t_k)^T\big)^{-1}H(t_k). \qquad (26) $$
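Equation (26) can be exercised numerically. The sketch below uses random matrices of compatible toy dimensions (not the actual navigation matrices) to check that the measurement term only adds information:

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, nw = 9, 9, 12                       # state, measurement, noise dims (toy)

I_pred = np.eye(nx)                         # prediction information matrix I^-
H = rng.standard_normal((ny, nx))
D = rng.standard_normal((ny, nw))           # full row rank with probability one
R = np.diag(rng.uniform(0.5, 2.0, nw))      # positive definite noise covariance

# equation (26): measurement update in information form
I_upd = I_pred + H.T @ np.linalg.inv(D @ R @ D.T) @ H

# a measurement can only add information: the trace increases...
assert np.trace(I_upd) > np.trace(I_pred)
# ...because the update term is symmetric positive semidefinite
U = I_upd - I_pred
assert np.allclose(U, U.T)
assert np.all(np.linalg.eigvalsh(U) > -1e-10)
```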
6.2
In this section we assume that the $n$ vehicles are performing a given task that requires them to maintain their attitude and relative positions, i.e., $p_i(t_k) - p_j(t_k) = p_i(t_l) - p_j(t_l)$ for all $i \neq j = 1, \dots, n$ and times $t_k, t_l > t_0$.
As mentioned, suppose that some vehicles have a better navigation system; more specifically, better external navigation sensors but similar inertial sensors. We will assume that the vehicles are ordered with respect to their external sensor quality, i.e.,
$$ \sigma_1^p \leq \sigma_2^p \leq \dots \leq \sigma_n^p, \qquad \sigma_1^v \leq \sigma_2^v \leq \dots \leq \sigma_n^v, \qquad \sigma_1^c \leq \sigma_2^c \leq \dots \leq \sigma_n^c. $$
Notice that, if the inertial sensors are similar and their errors are assumed to be stationary, it follows that $Q = Q_i(t_k) = E\big(\xi_i(t_k)\xi_i(t_k)^T\big) > 0$. Thus, $E\big(\xi(t_k)\xi(t_k)^T\big) = \operatorname{diag}(Q, \dots, Q)$.
The question we want to answer is: does the navigation quality depend on where each vehicle is located in the formation? As a quality index of the navigation performance we use the trace of the information matrix $I(t_k)$.
Notice that relocating vehicles in the formation can be seen as reordering the diagonal blocks $R_i(t_k)$ of the matrix $R(t_k) = \operatorname{diag}\big(R_1(t_k), R_2(t_k), \dots, R_n(t_k)\big)$. To each of these formations there corresponds one information matrix. In fact, each reordering of $R(t_k)$ can be represented as $\Sigma^T R(t_k)\Sigma$, where $\Sigma$ is a block permutation matrix. Thus we can give a parametrization of the set of information matrices corresponding to the different formations. Notice that we have at most $n!$ different information matrices:
$$ I_\Sigma(t_k) = I_\Sigma^-(t_k) + H(t_k)^T\big(D(t_k)\,\Sigma^T R(t_k)\Sigma\, D(t_k)^T\big)^{-1}H(t_k). \qquad (27) $$
Since $R(t_k)$ and $D(t_k)$ are block diagonal and $H(t_k)$ is built row-wise from the $H_i(t_k)$, equation (27) can be written as
$$ I_\Sigma(t_k) = I_\Sigma^-(t_k) + \sum_{j=1}^{n} H_j(t_k)^T\Big(D_j(t_k)\big(\Sigma^T R(t_k)\Sigma\big)_{jj} D_j(t_k)^T\Big)^{-1}H_j(t_k), \qquad (28) $$
where $(\Sigma^T R(t_k)\Sigma)_{jj}$ denotes the $j$-th diagonal block of the permuted covariance matrix.
Let $\mathcal{I} = \{1, \dots, n\}$ and $f : \mathcal{I} \to \mathcal{I}$ a bijective function, each of these functions corresponding to a different block permutation matrix $\Sigma$. Since $H_i^{gps} = H_j^{gps}$ for every $i, j = 1, \dots, n$, by equations (18) and (22) it follows that
$$ I_\Sigma(t_k) = I_\Sigma^-(t_k) + \operatorname{diag}(J_1, \dots, J_n) + \sum_{j=1}^{n} \big(\sigma_{f(j)}^c\big)^{-2}\, H_j^c(t_k)^T\big(\Lambda_j(t_k)\Lambda_j(t_k)^T\big)^{-1}H_j^c(t_k), \qquad (29) $$
where, for $j = 1, \dots, n$,
$$ J_j = \begin{bmatrix} \big(\sigma_{f(j)}^p\big)^{-2} I_{3\times 3} & 0 & 0 \\ 0 & \big(\sigma_{f(j)}^v\big)^{-2} I_{3\times 3} & 0 \\ 0 & 0 & 0_{3\times 3} \end{bmatrix}. \qquad (30) $$
The second term of equation (29) shows that where the vehicles with better GPS receivers are located does not affect the swarm navigation performance, because this term is invariant with respect to any vehicle permutation (which follows from the fact that $H_i^{gps} = H_j^{gps}$ for every $i, j = 1, \dots, n$). However, a better camera sensor can affect the navigation performance. This can be studied by looking at the third term of the equation above.
Now, if the attitude and the relative vehicle positions are fixed, then $H_j^c = H_j^c(t_k)$ and $\Lambda_j = \Lambda_j(t_k)$ do not depend on $t_k$. From equation (29) it follows that
$$ \operatorname{Tr}\big(I_\Sigma(t_k)\big) = \operatorname{Tr}\big(I_\Sigma^-(t_k)\big) + 3\sum_{j=1}^{n}\Big(\big(\sigma_j^p\big)^{-2} + \big(\sigma_j^v\big)^{-2}\Big) \qquad (31) $$
$$ \;+\; \sum_{j=1}^{n}\big(\sigma_{f(j)}^c\big)^{-2}\, \operatorname{Tr}\Big(H_j^{cT}\big(\Lambda_j\Lambda_j^T\big)^{-1}H_j^c\Big). \qquad (32) $$
The solution of this problem indicates where the vehicles with better camera sensors should be located in order to achieve the best navigation performance for the swarm. There are at most $n!$ different formations for the vehicles. Although the solution could be found by solving $n!$ Kalman filters, this is numerically intractable when there are several vehicles in the formation. The following result allows us to find the solution of the problem above.
Lemma 6.1. Let $A_1, A_2, \dots, A_n \in \mathbb{R}^{n\times n}$ be positive matrices such that $\operatorname{Tr}(A_1) \geq \operatorname{Tr}(A_2) \geq \dots \geq \operatorname{Tr}(A_n)$, and let $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n \geq 0$. Then, for every bijective function $f : \{1, \dots, n\} \to \{1, \dots, n\}$,
$$ \operatorname{Tr}\Big(\sum_{i=1}^{n}\lambda_{f(i)}A_i\Big) \leq \operatorname{Tr}\Big(\sum_{i=1}^{n}\lambda_i A_i\Big). \qquad (33) $$
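This is a trace version of the rearrangement inequality: since the trace is linear, pairing the largest weights with the largest-trace matrices maximizes the total. A brute-force NumPy check over all pairings of random positive semidefinite matrices (toy sizes and weights):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)

# positive semidefinite matrices, sorted by decreasing trace: Tr(A1) >= ... >= Tr(An)
As = []
for _ in range(4):
    G = rng.standard_normal((4, 4))
    As.append(G @ G.T)
As.sort(key=np.trace, reverse=True)

lams = np.array([4.0, 2.5, 1.0, 0.2])      # lambda_1 >= ... >= lambda_n >= 0 (toy)

# the sorted pairing (lambda_i with A_i) ...
best = np.trace(sum(l * A for l, A in zip(lams, As)))

# ... dominates every other pairing, as equation (33) states
for perm in permutations(lams):
    val = np.trace(sum(l * A for l, A in zip(perm, As)))
    assert val <= best + 1e-9
```

In the context of equation (32), this says the best-camera vehicles ($\sigma^c$ small, i.e. weight $(\sigma^c)^{-2}$ large) should occupy the formation slots with the largest $\operatorname{Tr}\big(H_j^{cT}(\Lambda_j\Lambda_j^T)^{-1}H_j^c\big)$.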
Simulation Results
7.1
Figure 5 shows the position errors obtained by the INS/GPS navigation system. It can be seen that the error starts at approximately 10 m, due to the initial condition error, and over time is reduced to around 7 m.
Figure 6 shows the orientation error. The initial condition error (around 10°) is reduced in pitch and roll when the simulation starts. This is due to the information provided by the accelerometers (the vehicle must be aligned with respect to the gravity vector). The yaw error takes longer to decrease and, in
7.2
Figure 7 shows the vehicles that are visible to the camera of vehicle 1 over time. It can be seen that around t = 120 s the first vehicle is detected (vehicle 5). In Figure 8, it can be noted that at that time the position error calculated by the navigation system is significantly reduced. Furthermore, it can also be seen that around t = 250 s (mainly in the y dimension) the quality of the estimation starts degrading, in agreement with the period in Figure 7 when no vehicles are in the camera's field of view. This illustrates the improvement to the navigation system due to the use of the camera.
Figure 9 shows more clearly the contribution of the camera to the navigation system. In particular, around t = 150 s, when at least two vehicles are in the camera field of view, the error in yaw is significantly reduced. In fact, the orientation error stays in the tenths of a degree, in comparison to the 5° obtained by the INS/GPS system.
Conclusions
In our current testbed of UAV coordinated flight, the available hardware does not allow the addition of onboard sensors. Therefore the navigation system must rely on the information provided by gyros, accelerometers, GPS, and camera. This article presented an algorithm to optimally fuse the information of these sensors and showed, through simulation results, the performance improvement provided by the use of the camera. Such improvement has a cost. The INS/GPS navigation system of each vehicle is independent of the other vehicles in the platoon. With the addition of the camera, the errors in the navigation system of a vehicle are affected by those of the others. This translates into an increase of the computational cost and the need for inter-vehicle communication. In fact, the optimal INS/GPS/camera fusion must be approached from a centralized architecture, where the filter takes into account the navigation errors of all vehicles. Sub-optimal solutions can be implemented with decentralized approaches, which will be the subject of future work.
References
[1] R. M. Rogers. Applied Mathematics in Integrated Navigation Systems. AIAA, 2003.
[2] T. Hoshizaki, D. Andrisani II, A. W. Braun, A. K. Mulyana, and J. S. Bethel. Performance of integrated electro-optical navigation systems. Navigation: Journal of the Institute of Navigation, 51(2):101-122, Summer 2004.
[3] M. Veth, R. Anderson, F. Webber, and M. Nielsen. Tightly-coupled INS, GPS, and imaging sensors for precision geolocation. Technical report, Air Force Institute of Technology, Department of Electrical and Computer Engineering, Wright Patterson AFB, OH, 2008.
[4] A. Fakih, S. Ng, and J. Zelek. Improving GPS localisation with vision and inertial sensing. Geomatica, 63(4):383-396, 2009.
[5] B. Priot, C. Peillon, V. Calmettes, and M. Sahmoudi. Performance assessment of an ultra-tightly coupled vision-aided INS/GNSS navigation system. In Proceedings of the 2011 International Technical Meeting of The Institute of Navigation, San Diego, CA, January 2011, pp. 652-661.
[6] P. J. Bristeau, F. Callou, D. Vissière, and N. Petit. The navigation and control technology inside the AR.Drone micro UAV. In 18th IFAC World Congress, 2011.
[7] Li, Dayi Wang, and Xiangyu Huang. Study on the observability analysis based on the trace of error covariance matrix for spacecraft autonomous navigation. In 10th IEEE International Conference on Control and Automation, China, 2013.
[8] Jinling Wang, Matthew Garratt, Andrew Lambert, Jack Jianguo Wang, Songlai Han, and David Sinclair. Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles. The International Archives