
Cooperative Control of

Ground and Aerial Vehicles


ALEJANDRO MARZINOTTO
Master's Degree Project
Stockholm, Sweden
August 22, 2012
XR-EE-RT 2012:018
Version: 1.0
January 2012 to August 2012
Cooperative Control of Ground and Aerial Vehicles
Diploma Thesis
Alejandro Marzinotto
Automatic Control Laboratory
School of Electrical Engineering, KTH Royal Institute of Technology, Sweden
Supervisors
Jose Araujo & Meng Guo
KTH Stockholm
Examiner
Dr. D. V. Dimarogonas
KTH Stockholm
Stockholm, August 22, 2012
alejandro.marzinotto@gmail.com
Abstract
Recent developments in the theoretical field of multi-agent systems and
cooperative control have driven a growing interest in autonomous
transportation, surveillance and other applications. This thesis is
an attempt to close the gap between the fields of theoretical and experimental
control systems.
To close this gap it is essential to create a reliable software and hardware
infrastructure that can be used to test the applicability and performance of
the controllers developed in theory under artificial constraints. In this thesis
we present a feasible implementation of an experimental testbed, covering
both its hardware and software components.
To build this testbed and show its operability, scenarios of multi-agent sys-
tems such as platooning and surveillance were implemented using scale models
of Scania trucks and quadrotors. In this thesis we study the problem starting
with the theoretical analysis of the vehicle dynamics, performing simulations
and finally carrying out the experiments in the testbed.
The result of this thesis is a reliable and versatile testbed that can be
used to perform demonstrations of multi-agent robotic systems. The core of
the program is developed in LabVIEW, the wireless communication is done
using NesC and TinyOS, the simulations are developed in Simulink and the
Visualization Engine is written in C++.
The path taken to build this testbed has proven to be successful, allowing
us to control multiple vehicles simultaneously under the intuitive LabVIEW
programming environment. This document serves as a guide for those who
wish to carry out experiments using the infrastructure developed here or those
who wish to improve upon the existing work.
Acknowledgements
Above all I want to thank my family, especially my mother Laura, for encouraging
me to study science.
I want to give special thanks to my examiner Dimos Dimarogonas for giving
me the opportunity to carry out this project and to my supervisors Meng Guo and
Jose Araujo for helping me with the technical and theoretical aspects.
Lastly, I want to thank Axel Klingenstein, Sara Khosravi, Patrik Almström and
everyone who was directly or indirectly involved with this project.
Alejandro Marzinotto
August 22, 2012
Contents
Acknowledgements iii
Contents iv
List of Figures ix
List of Tables xiii
Programming Code xvi
Acronyms xviii
Notation xix
Ground Vehicle Variable Definition . . . . . . . . . . . . . . . . . . . . . . xix
Aerial Vehicle Variable Definition . . . . . . . . . . . . . . . . . . . . . . . xx
1 Introduction 1
1.1 Real World Applications . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Platooning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Surveillance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.4 Transportation . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.5 Rescue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.1 Platooning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.2 Surveillance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.1 Chapter 1 – Introduction . . . . . . . . . . . . . . . . . . . 11
1.3.2 Chapter 2 – Hardware . . . . . . . . . . . . . . . . . . . . . 11
1.3.3 Chapter 3 – Positioning Systems . . . . . . . . . . . . . . . 11
1.3.4 Chapter 4 – Testbed . . . . . . . . . . . . . . . . . . . . . . 12
1.3.5 Chapter 5 – Ground Vehicles . . . . . . . . . . . . . . . . . 12
1.3.6 Chapter 6 – Aerial Vehicles . . . . . . . . . . . . . . . . . . 12
1.3.7 Chapter 7 – LabVIEW Implementation . . . . . . . . . . . 12
1.3.8 Chapter 8 – Simulations . . . . . . . . . . . . . . . . . . . . 12
1.3.9 Chapter 9 – Experimental Results . . . . . . . . . . . . . . 12
1.3.10 Chapter 10 – Conclusions and Future Work . . . . . . . . . 13
1.3.11 Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 Hardware 15
2.1 Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.1 Tamiya Truck . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.2 Tamiya Car . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.3 DIYdrones ArduCopter . . . . . . . . . . . . . . . . . . . . . 18
2.2 Electronics, Sensors and Actuators . . . . . . . . . . . . . . . . . . . 20
2.2.1 ArduPilot Mega CPU Board . . . . . . . . . . . . . . . . . . 20
2.2.2 ArduPilot Mega IMU Board . . . . . . . . . . . . . . . . . . . 21
2.2.3 Tmote Sky . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2.4 Triple Axis Magnetometer . . . . . . . . . . . . . . . . . . . . 24
2.2.5 Sonar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.6 Short Range IR Sensor . . . . . . . . . . . . . . . . . . . . . . 26
2.2.7 Long Range IR Sensor . . . . . . . . . . . . . . . . . . . . . . 27
2.2.8 Pololu Micro Serial Servo Controller . . . . . . . . . . . . . . 28
2.2.9 Futaba Servo . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3 Positioning Systems 31
3.1 Ubisense . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1.2 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.4 Data Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.5 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.6 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2 Qualisys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.3 Data Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.4 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2.5 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3 Extended Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3.1 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4 Testbed 51
4.1 Testbed Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1.1 Positioning Systems . . . . . . . . . . . . . . . . . . . . . . . 51
4.1.2 Closed Loop Control . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.3 Onboard Sensors . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.1.4 Onboard Actuators . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2 Layered Controller Architecture . . . . . . . . . . . . . . . . . . . . . 56
4.2.1 Motion Planning . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.2 Coordination . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.3 Mission Planning . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5 Ground Vehicles 61
5.1 Ground Vehicle Variable Definition . . . . . . . . . . . . . . . . . . . 61
5.2 Mathematical Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3 Theoretical Controller Design . . . . . . . . . . . . . . . . . . . . . . 64
5.3.1 Layer 1 – Motion Planning . . . . . . . . . . . . . . . . . . 64
5.3.2 Layer 2 – Coordination . . . . . . . . . . . . . . . . . . . . 68
5.3.3 Layer 3 – Mission Planning . . . . . . . . . . . . . . . . . . 68
5.4 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.4.1 Variable Calculation . . . . . . . . . . . . . . . . . . . . . . . 69
5.4.2 Three Dimensional Model . . . . . . . . . . . . . . . . . . . . 72
6 Aerial Vehicles 75
6.1 Aerial Vehicle Variable Definition . . . . . . . . . . . . . . . . . . . . 75
6.2 Mathematical Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.3 Theoretical Controller Design . . . . . . . . . . . . . . . . . . . . . . 80
6.3.1 Layer 1 – Motion Planning . . . . . . . . . . . . . . . . . . 80
6.3.2 Layer 2 – Coordination . . . . . . . . . . . . . . . . . . . . 85
6.3.3 Layer 3 – Mission Planning . . . . . . . . . . . . . . . . . . 85
6.4 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.4.1 Variable Calculation . . . . . . . . . . . . . . . . . . . . . . . 85
6.4.2 Three Dimensional Model . . . . . . . . . . . . . . . . . . . . 87
7 LabVIEW Implementation 89
7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.1.1 Ubisense . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.1.2 Qualisys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.2 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.2.1 Open Serial Forwarder Connections . . . . . . . . . . . . . . 90
7.2.2 Start Ubisense Positioning System . . . . . . . . . . . . . . . 92
7.3 Main Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.3.1 Truck Control Algorithms . . . . . . . . . . . . . . . . . . . . 92
7.3.2 Quadrotor Control Algorithms . . . . . . . . . . . . . . . . . 96
7.3.3 Sending Actuator Commands . . . . . . . . . . . . . . . . . . 99
7.3.4 Receiving Sensor Data . . . . . . . . . . . . . . . . . . . . . . 103
7.3.5 Start Qualisys Track Manager . . . . . . . . . . . . . . . . . . 106
7.3.6 Mission Planner . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.4 Finalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
7.4.1 Close Serial Forwarder Connections . . . . . . . . . . . . . . 108
7.5 Data Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.6 Global Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
8 Simulations 113
8.1 Ground Vehicle Simulations . . . . . . . . . . . . . . . . . . . . . . . 114
8.1.1 Platooning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.1.2 Formation I (Double Column) . . . . . . . . . . . . . . . . . . 116
8.1.3 Formation II (Triangle) . . . . . . . . . . . . . . . . . . . . . 117
8.1.4 Row Formation . . . . . . . . . . . . . . . . . . . . . . . . . . 118
8.1.5 Defensive Formation . . . . . . . . . . . . . . . . . . . . . . . 119
8.2 Aerial Vehicle Simulations . . . . . . . . . . . . . . . . . . . . . . . . 120
8.2.1 Single Flight 1 Quadrotor . . . . . . . . . . . . . . . . . . . . 120
8.2.2 Circular Motion 1 Quadrotor . . . . . . . . . . . . . . . . . . 122
8.2.3 Circular Motion 2 Quadrotors . . . . . . . . . . . . . . . . . . 124
8.2.4 Circular Motion 3 Quadrotors . . . . . . . . . . . . . . . . . . 125
8.2.5 Circular Motion 4 Quadrotors . . . . . . . . . . . . . . . . . . 126
8.2.6 Elliptical Motion 4 Quadrotors . . . . . . . . . . . . . . . . . 127
8.2.7 Circular Motion 4 Quadrotors [2 radii] . . . . . . . . . . . . . 128
8.2.8 Circular Motion 4 Quadrotors [sub-rotation] . . . . . . . . . . 129
8.2.9 Circular Motion 4 Quadrotors [variable radius] . . . . . . . . 130
8.2.10 Circular Motion 4 Quadrotors [2 altitudes] . . . . . . . . . . . 131
8.2.11 Circular Motion 4 Quadrotors [variable altitude] . . . . . . . 132
8.2.12 Circular Motion 4 Quadrotors [vertical] . . . . . . . . . . . . 133
8.2.13 Circular Motion 4 Quadrotors [horizontal & vertical] . . . . . 134
8.2.14 Circular Motion 4 Quadrotors [triple rotation] . . . . . . . . . 135
8.3 Cooperative Ground & Aerial Vehicles Simulations . . . . . . . . . . 136
8.3.1 Simple Surveillance 1 Quadrotor . . . . . . . . . . . . . . . . 136
8.3.2 Circular Surveillance 1 Quadrotor . . . . . . . . . . . . . . . 137
8.3.3 Circular Surveillance 2 Quadrotors . . . . . . . . . . . . . . . 138
8.3.4 Circular Surveillance 3 Quadrotors . . . . . . . . . . . . . . . 139
8.3.5 Circular Surveillance 4 Quadrotors [2 altitudes, 2 radii] . . . 140
8.3.6 Circular Surveillance 4 Quadrotors [front & back] . . . . . . . 141
8.3.7 Circular Surveillance 4 Quadrotors [vertical] . . . . . . . . . . 142
8.3.8 Circular Surveillance 4 Quadrotors [multiple platoons] . . . . 143
8.4 Control Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.4.1 Platoon Leader . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.4.2 Platoon First Follower . . . . . . . . . . . . . . . . . . . . . . 145
8.4.3 Quadrotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
8.5 3D Visualization Engine . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.5.2 Code Breakdown . . . . . . . . . . . . . . . . . . . . . . . . . 148
9 Experimental Results 155
9.1 Hardware and Experimental Performance . . . . . . . . . . . . . . . 155
9.1.1 Battery Charge Level . . . . . . . . . . . . . . . . . . . . . . 155
9.1.2 Truck Speed Control . . . . . . . . . . . . . . . . . . . . . . . 156
9.1.3 Quadrotor Throttle Control . . . . . . . . . . . . . . . . . . . 156
9.1.4 Number of Controllable Vehicles . . . . . . . . . . . . . . . . 156
10 Conclusions and Future Work 159
10.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
10.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Appendix A 161
MATLAB Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
C/C++ Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Appendix B 175
Actuator Interface Board . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
IR Sensor Interface Board . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Appendix C 177
Photos Ground Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Photos Aerial Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Photos Ground and Aerial Vehicles . . . . . . . . . . . . . . . . . . . . . . 180
References 181
List of Figures
1.1 Platoon formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Surveillance robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Point cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Transportation robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Rescue robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6 3D model of platoon formation . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 3D model of removing vehicle from platoon . . . . . . . . . . . . . . . . 9
1.8 3D model of inserting vehicle to platoon . . . . . . . . . . . . . . . . . . 9
1.9 3D model of quadrotor tracking reference . . . . . . . . . . . . . . . . . 10
1.10 3D model of quadrotor following platoon . . . . . . . . . . . . . . . . . . 10
1.11 3D model of two quadrotors circling platoon . . . . . . . . . . . . . . . . 11
2.1 Tamiya Scania truck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Tamiya Honda car . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 DIYdrones quadrotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 ArduPilot Mega CPU board . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 ArduPilot Mega IMU board . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6 Tmote Sky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.7 Tmote Sky parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8 Triple axis magnetometer . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.9 Sonar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.10 Short range IR sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.11 Long range IR sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.12 Pololu micro serial servo controller . . . . . . . . . . . . . . . . . . . . . 28
2.13 Futaba servo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1 Ubisense logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 LabVIEW .NET invoke node . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3 Qualisys logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4 LabVIEW QLC VI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5 LabVIEW Q Command VI . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.6 LabVIEW Q6D Euler VI . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.7 EKF detailed diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.8 EKF simplified diagram . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.9 EKF Simulink block diagram . . . . . . . . . . . . . . . . . . . . . . . . 46
3.10 EKF Simulink performance evaluator . . . . . . . . . . . . . . . . . . . 47
3.11 EKF performance evaluation results . . . . . . . . . . . . . . . . . . . . 48
3.12 EKF performance time comparison . . . . . . . . . . . . . . . . . . . . . 49
4.1 Feedback loop diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Testbed Ubisense connection diagram . . . . . . . . . . . . . . . . . . . 53
4.3 Testbed Qualisys connection diagram . . . . . . . . . . . . . . . . . . . 54
4.4 Layered control diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.1 Ground vehicle graphical variable representation . . . . . . . . . . . . . 63
5.2 Ground vehicle speed controller . . . . . . . . . . . . . . . . . . . . . . . 65
5.3 Ground vehicle steering controller . . . . . . . . . . . . . . . . . . . . . 66
5.4 Ground vehicle safety controller . . . . . . . . . . . . . . . . . . . . . . . 67
5.5 FIFO queue graphical representation . . . . . . . . . . . . . . . . . . . . 70
5.6 Distance to WP graphical representation . . . . . . . . . . . . . . . . . . 71
5.7 Vehicle to WP Displacement graphical representation . . . . . . . . . . 72
5.8 Ground vehicle 3D model with electronics . . . . . . . . . . . . . . . . . 73
6.1 Aerial vehicle graphical variable representation (yaw) . . . . . . . . . . . 76
6.2 Aerial vehicle graphical variable representation (pitch, roll) . . . . . . . 77
6.3 Aerial vehicle throttle controller . . . . . . . . . . . . . . . . . . . . . . 81
6.4 Aerial vehicle pitch and roll controller . . . . . . . . . . . . . . . . . . . 82
6.5 Aerial vehicle yaw controller . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.6 Aerial vehicle safety controller . . . . . . . . . . . . . . . . . . . . . . . . 84
6.7 Quadrotor heading graphical representation . . . . . . . . . . . . . . . . 86
6.8 Distance to WP graphical representation . . . . . . . . . . . . . . . . . . 87
6.9 Aerial vehicle 3D model with electronics . . . . . . . . . . . . . . . . . . 87
7.1 Ubisense main.vi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.2 Ubisense main.vi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.3 OpenSF_basic.vi – Common . . . . . . . . . . . . . . . . . . . . . . 91
7.4 OpenSF_double.vi – Common . . . . . . . . . . . . . . . . . . . . . 91
7.5 Ubisense_Structure_Start.vi – Ubisense Only . . . . . . . . . . . . . 92
7.6 Truck position data retrieval – Ubisense Only . . . . . . . . . . . . . 93
7.7 Truck 6 DOF data retrieval – Qualisys Only . . . . . . . . . . . . . . 93
7.8 Displacement.vi – Common . . . . . . . . . . . . . . . . . . . . . . . 94
7.9 Truck control signals (leader) – Common . . . . . . . . . . . . . . . 95
7.10 Truck control signals (followers) – Common . . . . . . . . . . . . . . 95
7.11 Truck control signals update – Common . . . . . . . . . . . . . . . . 96
7.12 Quadrotor position data retrieval – Ubisense Only . . . . . . . . . . 97
7.13 Quadrotor 6 DOF data retrieval – Qualisys Only . . . . . . . . . . . 97
7.14 Quadrotor control signals – Common . . . . . . . . . . . . . . . . . . 98
7.15 Quadrotor control signals update – Common . . . . . . . . . . . . . 99
7.16 Truck actuator loop – Common . . . . . . . . . . . . . . . . . . . . . 100
7.17 Quadrotor actuator loop – Common . . . . . . . . . . . . . . . . . . 101
7.18 WriteSF_basic_global.vi – Common . . . . . . . . . . . . . . . . . . 102
7.19 IR_sensor_complex_global.vi – Common . . . . . . . . . . . . . . . 104
7.20 ReadSF_complex_global.vi (part 1) – Common . . . . . . . . . . . 104
7.21 ReadSF_complex_global.vi (part 2) – Common . . . . . . . . . . . 105
7.22 QTM.vi – Qualisys Only . . . . . . . . . . . . . . . . . . . . . . . . . 106
7.23 Coordinator.vi – Common . . . . . . . . . . . . . . . . . . . . . . . . 107
7.24 CloseSF_basic.vi – Common . . . . . . . . . . . . . . . . . . . . . . 108
7.25 CloseSF_double.vi – Common . . . . . . . . . . . . . . . . . . . . . 109
7.26 Semaphore labels – Common . . . . . . . . . . . . . . . . . . . . . . 110
7.27 Semaphore critical section – Common . . . . . . . . . . . . . . . . . 110
7.28 Global variables – Common . . . . . . . . . . . . . . . . . . . . . . . 111
8.1 Platooning simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.2 Formation I (double column) simulation . . . . . . . . . . . . . . . . . . 116
8.3 Formation II (triangle) simulation . . . . . . . . . . . . . . . . . . . . . 117
8.4 Row formation simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 118
8.5 Defensive formation simulation . . . . . . . . . . . . . . . . . . . . . . . 119
8.6 Single ight simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8.7 Simple rotation of one aerial vehicle . . . . . . . . . . . . . . . . . . . . 122
8.8 Circular motion 1 quadrotor simulation . . . . . . . . . . . . . . . . . . 123
8.9 Circular motion 2 quadrotors simulation . . . . . . . . . . . . . . . . . . 124
8.10 Circular motion 3 quadrotors simulation . . . . . . . . . . . . . . . . . . 125
8.11 Circular motion 4 quadrotors simulation . . . . . . . . . . . . . . . . . . 126
8.12 Elliptical motion 4 quadrotors simulation . . . . . . . . . . . . . . . . . 127
8.13 Circular motion 4 quadrotors [2 radii] simulation . . . . . . . . . . . 128
8.14 Circular motion 4 quadrotors [sub-rotation] simulation . . . . . . . . 129
8.15 Circular motion 4 quadrotors [variable radius] simulation . . . . . . 130
8.16 Circular motion 4 quadrotors [2 altitudes] simulation . . . . . . . . . 131
8.17 Circular motion 4 quadrotors [variable altitude] simulation . . . . . 132
8.18 Circular motion 4 quadrotors [vertical] simulation . . . . . . . . . . 133
8.19 Circular motion 4 quadrotors [horizontal & vertical] simulation . . . 134
8.20 Circular motion 4 quadrotors [triple rotation] simulation . . . . . . . 135
8.21 Simple surveillance 1 quadrotor simulation . . . . . . . . . . . . . . . . . 136
8.22 Circular surveillance 1 quadrotor simulation . . . . . . . . . . . . . . . . 137
8.23 Circular surveillance 2 quadrotors simulation . . . . . . . . . . . . . . . 138
8.24 Circular surveillance 3 quadrotors simulation . . . . . . . . . . . . . . . 139
8.25 Circular surveillance 4 quadrotors [2 altitudes, 2 radii] simulation . . 140
8.26 Circular surveillance 4 quadrotors [front & back] simulation . . . . . 141
8.27 Circular surveillance 4 quadrotors [vertical] simulation . . . . . . . . 142
8.28 Circular surveillance 4 quadrotors [multiple platoons] simulation . . 143
8.29 Platoon leader control signals – time plot . . . . . . . . . . . . . . . 144
8.30 Platoon second vehicle control signals – time plot . . . . . . . . . . . 145
8.31 Quadrotor control signals – time plot (pitch & roll) . . . . . . . . . . 146
8.32 Quadrotor control signals – time plot (yaw & throttle) . . . . . . . . 147
1 Actuator interface board . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
2 IR sensors interface board . . . . . . . . . . . . . . . . . . . . . . . . . . 176
3 Photo Arduino-Mote Custom Serial Connection Board . . . . . . . . . . 177
4 Photo IR Sensor & Actuator Interface Boards . . . . . . . . . . . . . . . 178
5 Photo truck connections . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6 Photo actuator & sensor motes . . . . . . . . . . . . . . . . . . . . . . . 179
7 Photo Pololu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
8 Photo Ubisense Tag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9 Photo collection of vehicles . . . . . . . . . . . . . . . . . . . . . . . . . 180
List of Tables
1 Ground vehicles standard variables . . . . . . . . . . . . . . . . . . . . xix
2 Aerial vehicles standard variables . . . . . . . . . . . . . . . . . . . . . xx
2.1 Tamiya truck specifications . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Tamiya truck features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Tamiya truck dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Tamiya car specifications . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Tamiya car features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Tamiya car dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7 DIYdrones quadrotor features . . . . . . . . . . . . . . . . . . . . . . . . 19
2.8 DIYdrones quadrotor flight times . . . . . . . . . . . . . . . . . . . . . . 19
2.9 DIYdrones quadrotor dimensions and weight . . . . . . . . . . . . . . . 19
2.10 ArduPilot Mega CPU board features . . . . . . . . . . . . . . . . . . . . 21
2.11 ArduPilot Mega IMU board features . . . . . . . . . . . . . . . . . . . . 22
2.12 Tmote Sky features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.13 Triple axis magnetometer features . . . . . . . . . . . . . . . . . . . . . 24
2.14 Sonar features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.15 Short range IR sensor features . . . . . . . . . . . . . . . . . . . . . . . 26
2.16 Long range IR sensor features . . . . . . . . . . . . . . . . . . . . . . . . 27
2.17 Pololu micro serial servo controller features . . . . . . . . . . . . . . . . 28
2.18 Futaba servo features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1 EKF variable meaning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 EKF corrector step variable meaning . . . . . . . . . . . . . . . . . . . . 45
5.1 Ground vehicles standard variables . . . . . . . . . . . . . . . . . . . . 61
5.2 Ground vehicle diagram variable meaning . . . . . . . . . . . . . . . . . 62
6.1 Aerial vehicles standard variables . . . . . . . . . . . . . . . . . . . . . 75
List of Algorithms
1 Algorithm run (function) – Ubisense . . . . . . . . . . . . . . . . . . 33
2 Algorithm tag_update (event) – Ubisense . . . . . . . . . . . . . . . 33
3 Algorithm get_position (function) – Ubisense . . . . . . . . . . . . 34
4 Algorithm main (function) – Visualization Engine . . . . . . . . . . 148
Programming Code
7.1 Lookup table calibration data (MATLAB script) . . . . . . . . . . . 103
8.1 Video driver (C++ code) . . . . . . . . . . . . . . . . . . . . . . . . 148
8.2 Video dimensions (C++ code) . . . . . . . . . . . . . . . . . . . . . 149
8.3 Scene manager (C++ code) . . . . . . . . . . . . . . . . . . . . . . . 149
8.4 Open ground vehicles text files (C++ code) . . . . . . . . . . . . . . 149
8.5 Open aerial vehicles text files (C++ code) . . . . . . . . . . . . . . . 149
8.6 Define global variables (C++ code) . . . . . . . . . . . . . . . . . . . 150
8.7 Load ground vehicles 3D models (C++ code) . . . . . . . . . . . . . 150
8.8 Load aerial vehicles 3D models (C++ code) . . . . . . . . . . . . . . 150
8.9 Load testbed 3D model (C++ code) . . . . . . . . . . . . . . . . . . 151
8.10 Set camera position, orientation and navigation mode (C++ code) . 151
8.11 Read ground vehicle positions from .txt files (C++ code) . . . . . . 151
8.12 Read aerial vehicle positions from .txt files (C++ code) . . . . . . . 152
8.13 Update vehicle position variables (C++ code) . . . . . . . . . . . . . 152
8.14 Draw scene and vehicles (C++ code) . . . . . . . . . . . . . . . . . . 152
8.15 Drop rendering device and close .txt files (C++ code) . . . . . . . . 153
1 Extended Kalman filter (MATLAB script) . . . . . . . . . . . . . . . 161
2 Automatic IR lookup table generator (MATLAB script) . . . . . . . 162
3 Truck control signals plotting (MATLAB script) . . . . . . . . . . . 163
4 Quadrotor control signals plotting (MATLAB script) . . . . . . . . . 164
5 Vehicles and trajectories 3D plotting (MATLAB script) . . . . . . . 165
6 Simulink signal extraction with subsampling (MATLAB script) . . . 167
7 ArduCopter sensor data sending loop (C code) . . . . . . . . . . . . 168
8 Visualization Engine (C++ code) . . . . . . . . . . . . . . . . . . . . 170
Acronyms
VI Virtual Instrument
WSN Wireless Sensor Network
CPU Central Processing Unit
IMU Inertial Measurement Unit
SLAM Simultaneous Localization and Mapping
UAV Unmanned Aerial Vehicle
EKF Extended Kalman Filter
KF Kalman Filter
DLL Dynamic Link Library
PWM Pulse Width Modulation
QTM Qualisys Track Manager
QLC Qualisys LabVIEW Client
RT Real Time
RTLS Real Time Localization System
DOF Degrees Of Freedom
RF Radio Frequency
RC Radio Control
IR Infrared
IDE Integrated Development Environment
API Application Programming Interface
WP Waypoint
SP Surveillance Point
FIFO First In First Out
SF Serial Forwarder
TCP/IP Transmission Control Protocol / Internet Protocol
PCB Printed Circuit Board
GUI Graphical User Interface
Notation
In this thesis there are two different notations: one for the ground vehicles and one
for the aerial vehicles. Since both of them use the same symbols, the context in
which a symbol appears must be kept in mind in order to interpret it accordingly.
Ground Vehicle Variable Definition
Symbol     Meaning                                     Theoretical Range
x, y, z    Cartesian coordinate system positions       (−∞, ∞), (−∞, ∞), [0]
ẋ, ẏ, ż    Cartesian coordinate system velocities      (−∞, ∞), (−∞, ∞), [0]
ẍ, ÿ, z̈    Cartesian coordinate system accelerations   (−∞, ∞), (−∞, ∞), [0]
t          Time                                        [0, ∞)
θ          Vehicle Orientation                         [0, 2π)
φ          Vehicle Steering                            (−π/2, π/2)
α          Vehicle to WP Displacement                  (−π, π]
d          Distance to WP                              [0, ∞)
Table 1: table showing the standard variables used for the ground vehicles.
Aerial Vehicle Variable Definition
Symbol     Meaning                                     Theoretical Range
x, y, z    Cartesian coordinate system positions       (−∞, ∞), (−∞, ∞), [0, ∞)
ẋ, ẏ, ż    Cartesian coordinate system velocities      (−∞, ∞), (−∞, ∞), (−∞, ∞)
ẍ, ÿ, z̈    Cartesian coordinate system accelerations   (−∞, ∞), (−∞, ∞), (−∞, ∞)
t          Time                                        [0, ∞)
φ          Vehicle Roll angle                          (−π/2, π/2)
θ          Vehicle Pitch angle                         (−π/2, π/2)
ψ          Vehicle Yaw angle                           (−π, π]
d          Distance to WP                              [0, ∞)
g          Gravity                                     9.81 m/s²
m          Vehicle Mass                                1.5 kg
Table 2: table showing the standard variables used for the aerial vehicles.
Chapter 1
Introduction
The usage of autonomous robots has increased drastically over the past few years
in areas such as surveillance, transportation, exploration and rescue. Most of the
applications require sharing information between the robots either by direct com-
munication between them or through a central station that takes care of processing
all the data.
Many applications require the use of multiple robots to achieve a common goal;
such systems are normally referred to as multi-agent systems. Agents are autonomous entities
that exhibit behaviors and perform certain actions depending on their internal state,
the perception of the environment and the messages received from other agents.
The most straightforward implementation of agents is the reflex agent, where
perceptions are translated into actions using logical conditional rules. More complex
agents, not treated in this thesis, called learning agents, are able to modify the way
they interact with the environment based on rewards and other indirect measurements
of the performance of their actions.
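As a minimal illustration (ours, not code from this project), a reflex agent is a pure mapping from perceptions to actions through conditional rules; the perception fields, thresholds and action names below are hypothetical:

% Reflex agent sketch: perceptions are mapped to actions by conditional rules.
% The perception fields, thresholds and action names are hypothetical.
function action = reflex_agent(perception)
    if perception.distance_ahead < 0.5          % obstacle too close: stop
        action = 'brake';
    elseif perception.lateral_error > 0.1       % drifted right of the path
        action = 'steer_left';
    elseif perception.lateral_error < -0.1      % drifted left of the path
        action = 'steer_right';
    else
        action = 'hold_course';                 % default behavior
    end
end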
In multi-agent systems, communication is essential to achieve coordination be-
tween robots. Wireless transmission is normally used as the standard due to its
reliability and suitability for different environments. Those advantages, coupled with
the recent development of ultra-low power consumption wireless devices have in-
creased the interest in this type of communication.
The most common protocol for wireless communications is the IEEE 802.11
which is widely used for the Internet and allows multiple devices to be connected
seamlessly. However, since we require our devices to be low-powered, as they will
run mainly on batteries, it is convenient to use a protocol that was specifically
designed to work on low-power devices, such as the IEEE 802.15.4
standard for WSN.
When dealing with motion control of autonomous vehicles it is advantageous to
know the position of each robot. In centralized scenarios such as the ones treated
in this thesis, this is achieved using a local positioning system which gathers the
data in a central computer that takes care of processing it and sending appropriate
actuation commands to each robot according to the controller's output.
Decentralized scenarios, not treated in this thesis, occur when no central po-
sitioning system is available. In these cases the robots must use their sensors to
gather data and share it between vehicles, just as is done between the nodes of
a WSN, so that each robot can calculate the most appropriate actuation commands
based on the available information of neighboring agents.
When multiple types of robots are present, the design of the WSN involves the
creation of hierarchical structures where the purpose and importance of each type
of vehicle is reflected. In this thesis we only use aerial and ground vehicles; how-
ever, the problem formulation and hierarchical design remain the same in scenarios
where there are more than two types of robots involved.
In this chapter we present five real world applications where the ideas developed
in this thesis can be applied. We describe the specific goals of this project in terms
of algorithms and simulations to be implemented. Lastly, we outline the contents
of each chapter in order to give the reader an idea of the overall scope of this thesis.
1.1 Real World Applications
1.1.1 Platooning
Consists of grouping vehicles into platoons allowing them to travel closely and yet
safely. Grouping vehicles this way saves a considerable amount of space and thus
decreases the trac congestion, it is estimated that the eciency in vehicles per
lane per hour will duplicate even under non optimal platoon congurations.
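The claimed capacity gain can be made plausible with a back-of-the-envelope estimate (ours; the speed, vehicle length and human headway below are assumed values):

% Rough lane-capacity estimate; all numbers are illustrative assumptions.
v           = 25;                 % cruise speed in m/s (90 km/h)
len         = 5;                  % average vehicle length in m
gap_human   = 2 * v;              % "two-second rule" headway for humans, m
gap_platoon = 2.5;                % automated platoon spacing, m
cap   = @(gap) 3600 * v / (len + gap);       % vehicles per lane per hour
ratio = cap(gap_platoon) / cap(gap_human);
% cap(gap_human) is roughly 1600 veh/h and cap(gap_platoon) roughly 12000
% veh/h; the ~2x figure quoted above corresponds to mixed traffic in which
% only part of the vehicles travel in platoons.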
Platooning also reduces aerodynamic drag. This reduction translates
into lower fuel consumption, which in turn yields less pollution. Drag reduction
reaches its optimum when each vehicle is placed at a distance of 2.5 m from
the preceding member of the platoon; at this distance the reduction in drag is ap-
proximately 50 %, which yields a 25 % reduction in fuel consumption. Driving at a
2.5 m distance between cars at high speeds is not safe for human drivers; however, it
is possible with automated navigation systems.
There are two main controllers to be implemented in platooning: distance
control and alignment control. The first holds the distance between the vehicles in
the convoy, whereas the second keeps them aligned in a row. This is achieved by
controlling the speed and the steering of each vehicle, just as human drivers would
do in regular driving conditions.
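As a minimal sketch of these two loops (our illustration with assumed gains and signal names, not the controllers derived in Chapter 5), both can be realized as simple proportional laws:

% Minimal proportional sketch of the two platooning loops; gains are assumed.
d_ref = 2.5;                      % desired inter-vehicle distance, m
kp_d  = 0.8;  kp_a = 1.5;         % distance and alignment gains (illustrative)

d             = 3.1;              % measured distance to preceding vehicle, m
bearing_error = 0.05;             % angle towards the preceding vehicle, rad

v_cmd     = kp_d * (d - d_ref);   % speed command: close or open the gap
delta_cmd = kp_a * bearing_error; % steering command: stay behind predecessor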
Each vehicle implements an agent that takes care of the following tasks: com-
municate its own data to a subset of the other cars in the convoy, receive the data
being transmitted from other agents, and perform actuation commands based on
the output of the steering and speed control algorithms.
The leader of the platoon does not follow any vehicle; for this reason, specific
driving directions have to be specified upfront. In reality this is achieved using a
human driver to lead the convoy; in this thesis, however, a set of coordinate WPs and
navigation algorithms is used to guide the leader of the platoon in place of
the human driver.
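A minimal sketch of one step of such WP navigation (our illustration; the function name, gains and acceptance radius are assumptions, not the thesis code): drive towards the active WP and pop it from the FIFO queue once the vehicle is within an acceptance radius.

% One control step of waypoint-queue navigation; all names are illustrative.
function [v_cmd, delta_cmd, wps] = wp_step(pos, theta, wps, R_acc, kv, kh)
    if norm(wps(1,:) - pos) < R_acc && size(wps,1) > 1
        wps(1,:) = [];                      % waypoint reached: pop it (FIFO)
    end
    d   = wps(1,:) - pos;                   % vector to the active waypoint
    err = atan2(d(2), d(1)) - theta;        % heading error
    err = atan2(sin(err), cos(err));        % wrap to (-pi, pi]
    v_cmd     = kv * norm(d);               % speed command
    delta_cmd = kh * err;                   % steering command
end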
Vehicles are allowed to enter and exit the platoon dynamically; this is achieved
through wireless communication between agents. These communication channels
allow them to cooperate and transmit information such as sensor readings, intention,
destination, position, speed, size, etc.
Figure 1.1: standard platoon formation with a truck as the leader.
1.1.2 Surveillance
Surveillance consists of the joint effort of several robots to patrol a certain area or
to track moving objects. These robots are able to adopt different formations to maximize
the patrolled area or other desired parameters. The robots are able to communicate
with each other and transmit relevant data such as alarms or video to a central
station. Both aerial and ground vehicles can be used for surveillance tasks depending
on the situation.
Many different commercial surveillance robots can be found online; most of
them are built to be very resistant to physical damage and to be usable in extreme
weather conditions. Some of them are designed to be able to accomplish certain
tasks such as going up or down stairs, whereas others have built-in weapons to
respond automatically in case of alarm.
These robots can be used in a variety of scenarios such as surveillance in
museums, banks and war zones. Replacing humans with autonomous robots to
perform dangerous and repetitive tasks is one of the main goals in this field. Since
surveillance vehicles can be equipped with a variety of sensors, it is to be expected
that soon these robots will outperform humans in surveillance tasks as well.
This is yet another clear example of the usage of WSNs, especially in situations
where the patrolled area is too large to maintain a direct communication link between
each robot and a central station. In situations like these, the agents function like
nodes in a sensor network, retransmitting incoming data from other agents so that
it reaches the central station or gateway.
Surveillance can also be performed between agents; this means that the object
under supervision does not necessarily belong to the environment. Sometimes, coopera-
tive control can be achieved through aerial vehicles guiding ground vehicles or vice
versa. In these situations there can be a unidirectional or multidirectional flow of
information between the different types of robots.
Figure 1.2: surveillance robot developed at Darmstadt University.
1.1.3 Exploration
Exploration consists of several robots working together to create the map of a certain
region; these robots are able to distribute their efforts in such a way that the map can
be completed in the shortest time or with the highest precision. The robots are
not only able to build the map, but also to position themselves inside it. They
can also use this information and different path planning algorithms to traverse the
environment.
These robots are often equipped with cameras and proximity sensors. In most
cases computer vision algorithms are used to extract features from each video frame
and SLAM techniques are used to combine these features and produce a coherent
map representation along with the agent's current estimated position.
In other cases the position of the vehicle is estimated using the vehicle dynamics
and its internal sensors (e.g., encoders, accelerometers, gyros, IR sensors). Using a
relative coordinate system centered on the robot's initial position, the rest of the
map is built based on the estimated robot state and its sensor readings.
The map representation can be metric or topological. In the first case, precise
coordinates are updated using probabilistic methods; this type of map is often
very accurate but not interpretable by machines. In the second case, places and
locations are stored as nodes in a graph; paths between those places are indicated
by arcs between nodes, and no information regarding the exact position of places is
stored.
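For instance (our illustration, not code from this thesis), a topological map is simply a graph whose nodes are places and whose edges are traversable paths; the place names below are made up:

% Topological map sketch: places as nodes, traversable paths as edges.
places = {'dock', 'corridor', 'lab', 'storage'};      % hypothetical place names
edges  = [1 2; 2 3; 2 4];                             % corridor connects the rest
G      = graph(edges(:,1), edges(:,2), [], places);   % undirected place graph
route  = shortestpath(G, 'dock', 'storage');          % {'dock','corridor','storage'}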
Metric maps can be two- or three-dimensional according to the requirements of
the specific scenario. Three-dimensional map representations are currently being
developed in a field called point clouds.
Figure 1.3: 3D point cloud that resembles a human body in a standing position.
1.1.4 Transportation
Transportation consists of the cooperation of several vehicles to move different objects
from one place to another; these loads can be truck trailers in a harbor, medical equipment
in a hospital, tools inside a workshop, etc. Both aerial and ground vehicles are
suitable for transportation; however, depending on the situation one type of vehicle
may be more appropriate than another.
Ground transportation is closely related to platooning because the purpose of
both is to move things from one place to another in an efficient and automated
way. For that reason, the explanations presented in the section where we discussed
platooning are also applicable here. In most cases, the robots used for automated
transportation are equipped with grippers specially designed to handle objects of
a certain size, weight and shape.
The process of transportation involves several sub-processes:
1. Recognizing the object to be transported: generally achieved using computer
vision algorithms. Sometimes we are presented with a challenging environment
where it is difficult to identify the object that we wish to transport.
2. Positioning the object and the vehicle: also possible using cameras and prox-
imity sensors embedded in the robot, or an external positioning mechanism.
3. Picking up the object: a combination of path planning and control the-
ory is used to create a valid trajectory and successfully grip the object.
4. Moving to the destination: path planning and control algorithms are used in
this phase to guide the robot to its destination.
5. Releasing the load at the appropriate location: this phase is the reverse of
picking up the object, therefore the same explanation applies.
Figure 1.4: Intel robot named HERB equipped with 2 grippers for transportation.
1.1.5 Rescue
In emergency situations such as fires and earthquakes, having to risk human lives
to rescue possible victims is a common problem. For this reason, autonomous robots
have stepped in attempting to replace humans in dangerous environments. In most
missions, time is a crucial factor and several robots must be coordinated optimally
to identify and rescue the victims depending on their particular situation.
One of the key factors to complete these missions successfully is the ability of
each robot to make appropriate decisions in RT, based on its own sensory input
and the incoming transmissions from the other robots. A clear example of these
situations is when certain victims require higher priority than others, or when a specific
robot is unable to perform a task without the aid of another one.
Generally search and rescue missions are divided into three phases:
1. Exploration: mapping the space, locating and recognizing the victims. Each
victim's particular situation is evaluated and prioritized accordingly.
2. Rescue: transporting the victims outside the place of the accident or bringing
them assistance otherwise.
3. Escape: exiting the place of the accident to ensure the safety of the robot
itself.
Generally, the development of rescue robots is a multidisciplinary field because it
involves not only control theory, but also computer vision algorithms, which are
responsible for providing the agent with crucial information about the surrounding
environment.
Figure 1.5: autonomous rescue robot retrieving a dummy.
1.2 Tasks
In this section we describe the specific goals of this thesis in terms of algorithms
and simulations to be implemented. The relevance of these tasks is grounded in
the five real life applications mentioned in this chapter.
1.2.1 Platooning
First we approach the problem of creating a controller capable of forming a platoon
of an arbitrary number of vehicles. This involves implementing two things: a speed
and steering controller for each vehicle and a framework where it is possible to share
information between them. A platoon vehicle can be a truck or a car, and they are
treated interchangeably in the following scenarios.
Figure 1.6: 3D representation of the controller used to hold the platoon formation.
Secondly we propose an implementation of a controller capable of removing any
vehicle from the platoon except for the leader, rearranging the remaining vehicles
in the same platoon. This involves the platooning controller of the previous task
plus a heuristic algorithm to safely remove a vehicle (truck or car) without causing
any disruption to the platoon.
Figure 1.7: 3D representation of the controller used to remove a vehicle from the platoon.
Lastly, we propose an implementation of a controller capable of inserting a
vehicle (truck or car) into an existing platoon; this is done by opening a space
between two arbitrary vehicles to let the third one in. This involves the platooning
controller plus a heuristic algorithm responsible for allowing a vehicle to join without
causing any disruption to the platoon.
Figure 1.8: 3D representation of the controller used to insert a vehicle into the platoon.
1.2.2 Surveillance
First we propose an implementation where an aerial vehicle tracks certain discrete
reference WPs. We generalize the concept of discrete WPs into time-varying
reference points so that we are able to generate different trajectories and shapes
without requiring further study of path planning control.
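For example (an assumed form, not necessarily the exact generator used in this project), a circular reference of radius R at altitude h around a center c is obtained by sampling a function of time:

% Time-varying circular reference sketch; c, R, h and w are assumed values.
c = [0 0];  R = 1.0;  h = 1.5;  w = 0.5;              % center (m), radius (m),
                                                      % altitude (m), rate (rad/s)
ref  = @(t) [c(1) + R*cos(w*t), c(2) + R*sin(w*t), h]; % reference point at time t
p_sp = ref(2.0);                                       % setpoint two seconds in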
Figure 1.9: 3D representation of the controller used to perform takeoff, flight and landing
of the quadrotor.
Secondly we propose an implementation where an aerial vehicle hovers above the
leader of a platoon and follows it along its trajectory. We also explore the opposite
situation where the aerial vehicle guides the platoon along its trajectory.
Figure 1.10: 3D representation of the controllers used to hold a platoon formation and
perform aerial tracking of the platoon leader using one quadrotor.
Lastly, we propose an implementation where two aerial vehicles perform surveil-
lance simultaneously on a convoy. We also explore scenarios where more than two
aerial vehicles are involved, without a deep study of path planning algorithms.
Figure 1.11: 3D representation of the controllers used to hold a platoon formation and
perform circular aerial tracking of the platoon leader using two quadrotors.
1.3 Thesis Outline
1.3.1 Chapter 1 – Introduction
This chapter contains the thesis introduction, the real life applications of the control
algorithms developed, the specific tasks to be implemented in this project and a
short overview of the contents of each chapter.
1.3.2 Chapter 2 – Hardware
This chapter contains the description of the hardware used in this thesis. This
includes not only the characteristics of the aerial and ground vehicles, but also the
specications of the electronics, sensors and actuators.
1.3.3 Chapter 3 – Positioning Systems
This chapter contains detailed descriptions of the Ubisense and the Qualisys posi-
tioning systems as well as the usage, advantages and disadvantages of each system.
Lastly, it contains the theory and implementation of the EKF used to improve the
performance of the Ubisense.
1.3.4 Chapter 4 – Testbed
This chapter contains two sections: in the first we present the overall structure of
the testbed and explain how the hardware and software work together in order to
provide a closed loop control system. In the second we explain in detail the layered
controller implemented in this thesis to give a consistent structure to the software
component of the testbed.
1.3.5 Chapter 5 – Ground Vehicles
This chapter contains the mathematical analysis of the ground vehicle dynamics
and the conventions used to define the system's variables. We also explain how the
concept of layered control is applied to the ground vehicles by describing the scope
of each layer in this specific scenario.
1.3.6 Chapter 6 – Aerial Vehicles
This chapter contains the mathematical analysis of the aerial vehicle dynamics and
the conventions used to define the system's variables. We also explain how the
concept of layered control is applied to the aerial vehicles by describing the scope
of each layer in this specific scenario.
1.3.7 Chapter 7 – LabVIEW Implementation
This chapter contains the structure of the LabVIEW program; it describes the func-
tionality of each VI and how they work together. In case the reader is interested: two
separate files called LabVIEW_VIs_Ubisense.pdf and LabVIEW_VIs_Qualisys.pdf
are provided with detailed information about the hierarchy of each VI and depen-
dencies between them.
1.3.8 Chapter 8 – Simulations
This chapter contains the simulations used to develop and test the controllers; for
each simulated scenario two figures are presented showing the perspective and top
views of the vehicles involved and their trajectories. Lastly, we introduce the 3D
Visualization Engine created in this thesis to reproduce simulations and recorded
experiments. A brief analysis of the code of the Visualization Engine is given so
that it becomes possible to perform adaptations in the future.
1.3.9 Chapter 9 – Experimental Results
This chapter contains a brief analysis of the experimental results of this thesis, the
practical implications relative to the hardware used such as limitations on the num-
ber of vehicles that can be controlled, and remarks on the performance expected
versus the performance observed. We also explore the scalability of the implementa-
tion in the light of hardware and software limitations as well as the communication
issues observed.
1.3.10 Chapter 10 – Conclusions and Future Work
This chapter contains the summary of the goals achieved in this project, the contri-
butions of this thesis and final remarks regarding the aspects that can be improved.
Lastly, we present possible future work to be done in this area to expand and im-
prove the testbed.
1.3.11 Appendices
This section contains three parts: Appendix A, where the MATLAB scripts and
C/C++ code used throughout the thesis are presented. Appendix B, where the
schematic and PCB of the sensor and actuator boards are presented. Appendix C,
where the photos of the trucks, quadrotors, and other electronic devices that shape
the testbed are presented.
Chapter 2
Hardware
In this chapter we describe the hardware used in this thesis, covering the
important aspects of the robots, sensors, actuators and motes. The purpose of
this chapter is to give a general idea of the practical limitations inherent to the
implementation and to provide the grounds for those who wish to build a similar
testbed.
2.1 Vehicles
2.1.1 Tamiya Truck
Figure 2.1: Tamiya Scania truck with the transportation trailer.
Summary
The Tamiya truck is a realistic scale model of the Scania V8. The speed of the
vehicle is controlled using the model's gearbox and a commercial speed controller.
The steering of the vehicle is controlled using a servo with 180° of rotation. The
three available gears of the gearbox can be selected using a different servo, or a
certain gear can be selected manually.
Specications
Motor Type Brushed 540
Engine Size 540
Number of Gears 3
Tires Radial Tires
Damper Type Spring
Table 2.1: specifications of the Tamiya truck.
Features
Top Speed 30 km/h – 35 km/h
Drive Type Differential 2-Wheel Drive
Table 2.2: features of the Tamiya truck.
Dimensions
Scale 1/14
Length 452 mm
Width 187 mm
Wheelbase 293 mm
Table 2.3: dimensions of the Tamiya truck.
2.1.2 Tamiya Car
Figure 2.2: Tamiya Honda car.
Summary
The Tamiya car is a scale model of the Honda CR-Z. The speed of the vehicle is
controlled using a commercial speed controller; this model has only one gear. The
steering of this vehicle is controlled using a servo with 180° of rotation.
Specications
Motor Type Brushed 540
Engine Size 540
Number of Gears 1
Tires Narrow Radial Tires
Damper Type Wishbone Suspension
Body Polycarbonate
Table 2.4: specifications of the Tamiya car.
Features
Top Speed 30 km/h – 35 km/h
Drive Type Differential 2-Wheel Drive
Table 2.5: features of the Tamiya car.
Dimensions
Scale 1/10
Length 408 mm
Width 174 mm
Wheelbase 243 mm
Table 2.6: dimensions of the Tamiya car.
2.1.3 DIYdrones ArduCopter
Figure 2.3: DIYdrones quadrotor.
Summary
The ArduCopter is a multirotor UAV development platform based on a design by
Jani Hirvinen. There are several models with different numbers of propellers; the
most common are tri-, quad-, hexa- and octorotors. The code required to control
the aircraft is open source and easy to modify. The ArduCopter also comes
with software called Mission Planner that allows the user to set the controller
constants, each channel's minimum and maximum PWM limits, among other things.
Features
Accelerometer 6 DOF IMU
Gyroscope Gyro Stabilized Flight
Heading Calculation Magnetometer
Height Calculation Sonar Sensor
Onboard Controller Stabilization Double Cascade PID Control
Configuration GUI for Configuration of PID Parameters
Motor Controller PWM Electronic Speed Controllers (ESCs)
Frame Configuration Capability to Fly in + or × Configuration
Compatibility Any R/C Receiver or Servo Controller
Obstacle Avoidance IR Sensors
Battery Level Detection Onboard LEDs & Base Station Indicator
Table 2.7: features of the DIYdrones quadrotor.
Average Flight Times
2200 mAh LiPo Battery 9 min – 10 min with no Payload
2650 mAh LiPo Battery 9 min with 300 g Video Camera Payload
Table 2.8: average flight times of the DIYdrones quadrotor.
Dimensions and Weight
Size 60.96 cm from Motor to Motor
Weight 1500 g
Table 2.9: dimensions and weight of the DIYdrones quadrotor.
2.2 Electronics, Sensors and Actuators
2.2.1 ArduPilot Mega CPU Board
Figure 2.4: ArduPilot Mega CPU board.
Summary
ArduPilot is a fully programmable autopilot that requires a GPS module and IMU
sensors to create a functioning UAV. The autopilot handles both stabilization and
navigation, eliminating the need for a separate stabilization system. It also supports
a y-by-wire mode that can stabilize an aircraft when ying manually under RC
control, making it easier and safer to y. The hardware and software are all open
source. The rmware is already loaded, but the autopilot software must be loaded
onto the board by the user. It can be programmed with the Arduino IDE.
Features
Usage Autonomous Aircraft, Quad Copters and Helicopters
Microcontroller 16 MHz ATmega2560
Processing Power Dual-Processor with 32 MIPS
Memory 256 kB Flash Program Memory, 8 kB SRAM, 4 kB EEPROM
Analog Ports 16 Spare Analog Inputs (with ADC on each)
Digital Ports 40 Digital Input/Outputs to Add Additional Sensors
Serial Ports 4 Dedicated Serial Ports for Two-Way Telemetry
RC Channels 8 RC Channels
Table 2.10: features of the ArduPilot Mega CPU board.
2.2.2 ArduPilot Mega IMU Board
Figure 2.5: ArduPilot Mega IMU board.
Summary
This board features a large array of sensors needed for UAV and robotics appli-
cations, including three-axis angular rate and acceleration sensors, an absolute
pressure and temperature sensor, and a 16 MB data logger chip, among other things. It is
designed to fit on top (or bottom) of the ArduPilot Mega CPU board, creating a
total autopilot solution when a GPS module is attached.
Features
Analog Port 12-bit ADC
Data Storage Built-in 16 MB Data Logger
Interface Built-in FTDI, Making the Board Native USB
I2C Port Allows Sensor Arrays
User Input 2 User-Programmable Buttons (Momentary & Slide)
Expansion Ports 10-bit Analog Expansion Ports
Indicators Status LEDs
Gyros Vibration-Resistant InvenSense Gyros (Triple Axis)
Accelerometer Analog Devices ADXL330 Accelerometer
Extra Sensors Absolute Bosch Pressure/Temperature Sensor
Table 2.11: features of the ArduPilot Mega IMU board.
2.2.3 Tmote Sky
Figure 2.6: Tmote Sky wireless module.
Summary
Tmote Sky is an ultra-low power wireless module for use in sensor networks, monitoring
applications, and rapid prototyping. Tmote Sky leverages industry standards
like USB and IEEE 802.15.4 to interoperate seamlessly with other devices. By using
industry standards, integrating humidity, temperature and light sensors, and providing
flexible interconnections with peripherals, Tmote Sky enables a wide range
of mesh network applications.
Tmote Sky is a drop-in replacement for Moteiv's successful Telos design. With
TinyOS support out-of-the-box, Tmote Sky leverages emerging wireless protocols
and the open source software movement. Tmote Sky is part of a line of modules
featuring onboard sensors to increase robustness while decreasing cost and package
size.
Features
Microcontroller TI MSP430F1611 Microcontroller at up to 8 MHz
Storage Size 10 kB SRAM, 48 kB Flash, 1024 kB Serial Storage
Wireless Capability 250 kb/s 2.4 GHz Chipcon CC2420 IEEE 802.15.4
Onboard Sensors Onboard Humidity, Temperature and Light Sensors
Consumption Ultra-Low Current Consumption
Wakeup Time Fast Wakeup from Sleep (< 6 µs)
Programming Interface USB
Identication Serial ID Chip
Expansion Capability 16-pin Expansion Port
Board Size 32 mm × 80 mm
Table 2.12: features of the Tmote Sky.
Figure 2.7: Tmote Sky front view (left), and back view (right).
2.2.4 Triple Axis Magnetometer
Figure 2.8: triple axis magnetometer (not soldered).
Summary
This is a 3-axis digital compass board based on the Honeywell HMC5883L. Communication
with the HMC5883L is achieved through an I2C interface. The board has
an I2C translator and a 3.3 V power regulator that make it compatible with 3.3 V
and 5 V applications using a solder jumper.
Features
Interface I2C Interface with I2C Translator for 5 V Signals Compatibility
Supply Voltage 2.5 V – 3.3 V and 4 V – 5.5 V Supply Ranges (Jumper Selectable)
Resolution Low Current Draw and 4.35 mG Resolution
Compatibility ArduIMU and ArduPilotMega Shield Pin Compatible
Table 2.13: features of the triple axis magnetometer.
2.2.5 Sonar
Figure 2.9: sonar (not soldered).
Summary
The LV-MaxSonar-EZ2 is a good compromise between sensitivity and side object
rejection. The sensor offers three standard outputs (analog voltage, serial data, and
pulse width) that are available on all the MaxSonar-EZ products. The sonars of
this brand also operate with very low voltage, from 2.5 V to 5 V, with less than 3 mA
nominal current draw.
Features
Gain Continuously Variable Gain
Object Detection Includes Zero Range Objects
Supply Voltage 2.5 V – 5.5 V Supply Range with 2 mA Current Draw
Refresh Rate Up to Every 50 ms (20 Hz Rate)
Free Run Operation Continually Measure and Output Range Information
Triggered Operation Provides the Range Reading as Desired
Interfaces Serial, Analog and Pulse Width
Sensor Frequency 42 kHz
Wave Type High Output Square Wave
Table 2.14: features of the sonar.
2.2.6 Short Range IR Sensor
Figure 2.10: short range IR sensor.
Summary
The GP2D12 is a short range distance measuring sensor with integrated signal
processing and analog voltage output.
Features
Supply Voltage 4.5 V – 5.5 V
Output Type Analog Output
Effective Range 10 cm to 80 cm
LED Pulse Cycle Duration 32 ms
Typical Response Time 39 ms
Typical Start Up Delay 44 ms
Average Current Consumption 33 mA
Detection Area Diameter 6 cm at 80 cm
Table 2.15: features of the short range IR sensor.
2.2.7 Long Range IR Sensor
Figure 2.11: long range IR sensor.
Summary
The GP2Y0A is a long range distance measuring sensor with integrated signal pro-
cessing and analog voltage output.
Features
Supply Voltage 4.5 V – 5.5 V
Output Type Analog Output
Effective Range 20 cm to 150 cm
Typical Response Time 39 ms
Typical Start Up Delay 48 ms
Average Current Consumption 33 mA
Table 2.16: features of the long range IR sensor.
2.2.8 Pololu Micro Serial Servo Controller
Figure 2.12: Pololu micro serial servo controller.
Summary
The Pololu micro serial servo controller is a very compact solution for controlling
up to eight RC servos from a computer or microcontroller. Each servo's speed and
range can be controlled independently, and multiple units can be daisy-chained on
one serial line to control up to 128 servos. It possesses three status LEDs and an
integrated level converter for RS-232 applications. The micro serial servo controller
can control any standard RC servo, including giant 1/4-scale servos.
Features
PCB Size 2.31 cm × 2.31 cm
Servo Ports 8
Resolution 0.5 µs (0.05°)
Range 250 µs – 2750 µs
Supply Voltage 5 V – 16 V
Data Voltage 0 V and 5 V
Pulse Rate 50 Hz
Serial Baud Rate 1200 Bd – 38400 Bd (Automatically Detected)
Current Consumption 5 mA (Average)
Table 2.17: features of the Pololu micro serial servo controller.
2.2.9 Futaba Servo
Figure 2.13: Futaba servo with 180° of rotation.
Summary
Servo motors are an efficient, easy way to precisely position or move things. Some
servos can also be modified to rotate in a full circle (instead of just 180°), which
makes them useful as drive motors for robotics.
The Futaba S3305 is a heavy duty standard servo with brass gears, dual ball
bearings and 9 kgf·cm of torque. It is ideal for those applications that require
extra power and strength.
Features
Dimensions 20.0 mm × 40.0 mm × 38.1 mm
Weight 45.6 g
Speed 0.20 s
Torque 9.00 kgf·cm
Ball Raced Yes
Table 2.18: features of the Futaba servo.
Chapter 3
Positioning Systems
In this chapter we will present an overview of the Ubisense and the Qualisys po-
sitioning systems, the programs used to retrieve the data from them and a list of
advantages and disadvantages of each system. Lastly, we will cover the theory and
implementation of the EKF used to improve the performance of the Ubisense.
3.1 Ubisense
Figure 3.1: Ubisense logo.
The Ubisense Tag Module Research Package is an out-of-the-box RTLS that
can be used to track and locate assets and personnel to an accuracy of 15 cm in 3D
in RT. It is an all-inclusive solution for RTLS development or an entry level pilot
system.
3.1.1 Overview
Ubisense tags transmit ultra-wideband (UWB) pulses of extremely short duration
which are received by the sensors and used to determine where the tag is located,
using a unique combination of Time-Difference-of-Arrival (TDoA) and Angle-of-Arrival
(AoA) techniques. The use of UWB together with the unique AoA and
TDoA functionality ensures both high accuracy and reliability of operation in challenging
environments.
Sensors are grouped into cells with the capability of adding more of them de-
pending on the geometry of the area to be covered. In each cell a master sensor
coordinates the activities of the other sensors, and communicates with all the tags
whose location is detected within the cell. By designing overlapping cells, it is
possible to cover very large areas.
3.1.2 Sensors
The sensors detect ultra-wideband pulses transmitted by Ubisense tags which are
used to accurately determine tag locations. The sensors have an array of four
UWB receivers enabling them to measure both Angle-of-Arrival (AoA) and Time-
Difference-of-Arrival (TDoA) of tag signals, to generate accurate 3D tracking infor-
mation even when only two sensors can detect the tag. The sensors and tags also
support two-way conventional RF communications permitting dynamic changes to
tag update rates and enabling interactive applications.
3.1.3 Software
The software supplied with the research package includes the distributed location
processing software platform, supporting visualization, system customization, and
application integration via an industry-standard API. A Data Client graphical user
interface application is also supplied which allows the user to send, receive and view
data that is being sent to a tag module/accessory.
3.1.4 Data Retrieval
A central computer running proprietary Ubisense software enables all the sensors
connected to the Ethernet hub. Based on the readings obtained from these sensors
and optional filtering parameters, the proprietary Ubisense software calculates the
x-y-z position of each tag in the detection range, and forwards this information
to a TCP/IP connection.
Any computer connected through an Ethernet cable to the hub is able to read
this information using the functions contained in a DLL. This library is written in
C# and it consists of three main parts: run (function), get_position (function) and
tag_update (event). Below we present the pseudo code of each part of the library
and a brief description of the functionality:
run (function)
This is the program entry point; it initializes the event reception and starts a loop
that takes care of processing the queue of elements. This queue will constantly be
filled with new position updates; therefore it is important that the speed at which
the queue is emptied is higher than the speed at which the queue is filled. This is
achieved by tuning the sleep time of the while loop according to the number of updates
per second we get from the tags; by doing this we avoid using more computational
resources than needed in the loop.
The code inside the loop takes care of recognizing whether or not the tag ID was
previously processed, in which case the position of the object is updated; otherwise,
if the tag ID was not previously detected, a new object is generated.
Algorithm 1: run function algorithm of the Ubisense.
function run(void)
{
    initialization();
    while true do
        if processing_queue.size > 0 then
            process_element();
            if tag_exist then
                update_tag();
            else
                create_tag();
            end
        end
        sleep(30);
    end
}
tag_update (event)
This event is called each time a new package containing the position of a tag is
received through the TCP/IP connection. The event merely adds the information
received to the processing_queue, so that we exit the event as fast as possible and
new events can be triggered. To avoid data corruption the processing_queue is
locked before any changes are made to it.
The data added to the processing_queue each time a package is received from
the TCP/IP connection consists of: a tag ID, the x-y-z position of the tag, and
the estimated measurement error. The processing of this queue relies solely on the
while loop explained earlier.
Algorithm 2: tag_update event algorithm of the Ubisense.
event tag_update(tag id, double x, double y, double z)
{
    lock(processing_queue);
    {
        processing_queue.Add(id, x, y, z);
    }
}
get_position (function)
This function is what we use to get the position of a desired tag; the
processing_queue is also locked to avoid variable corruption when reading the values.
This function will be called using as a parameter the tag ID we want to obtain
data from. The tag ID is a unique number which is printed on each physical tag
and has the following structure: XXX XXX XXX XXX, for example:
020 000 116 037.
Algorithm 3: get_position function algorithm of the Ubisense.
function get_position(string tag)
{
    lock(processing_queue);
    {
        return position(tag);
    }
}
Invoke nodes (.NET DLL)
From LabVIEW we can call the functions contained in the DLL using a .NET
constructor and invoke nodes. These .NET nodes look like this:
Figure 3.2: LabVIEW .NET invoke node calling getX, getY, getZ.
3.1.5 Advantages
1. Tags are small and identical; no need to form unique patterns.
2. Works equally well outdoors and indoors.
3. No need to recalibrate the system each time.
4. Several filtering options available: Fixed-Height, Information-Filter, etc.
5. Easy for multiple users to connect to the system simultaneously.
6. Position can be obtained for non-visible vehicles as long as they are in the
detection range.
3.1.6 Disadvantages
1. The tags stop transmitting their position if no movement is detected (sleep
mode).
2. The tags require batteries to function.
3. Each tag has a different refresh rate.
4. The refresh rate of the tags is very low for motion control (< 10 Hz).
5. The error associated with each measurement is high (20 cm in the x-y plane and
1 m in the z-direction).
6. The quality of the measurements gets much worse when the tag is close to
walls or when the tag is not in the detection range of most sensors.
7. The more tags to be tracked simultaneously, the lower the refresh rate obtained
from the system.
8. The system needs to go through a long process of recalibration if any of the
sensors is to be moved from its original position or orientation.
9. The system gives only position, not the orientation of the tag.
3.2 Qualisys
Figure 3.3: Qualisys logo.
Qualisys is a leading, global provider of products and services based on optical
motion capture. The core technology of Qualisys products has been developed in
Sweden since 1989. The experienced Qualisys staff has created a unique platform
for optical motion capture, built to medical and industrial standards.
3.2.1 Overview
Optical Motion Capture is widely accepted and used daily all over the world. It
enables the capture of motion that would be difficult to measure in other ways.
In the medical field, researchers and clinicians use movement data to study and
observe human movement performance.
In industry, engineers use position and orientation data to control machinery,
improving the safety and reliability of automated processes.
Qualisys motion capture hardware and software have been designed with low
latency and maximum speed in mind, without sacrificing accuracy. Qualisys offers
an easy way to obtain accurate 3D and 6 DOF positions in RT.
3.2.2 Features

• The core component of the Qualisys motion capture system is one or more infrared
optical cameras, Oqus, emitting a beam of infrared light. Each camera can be
configured independently and they can be used for high speed video capture
as well. Additionally, there is a possibility to overlay video footage with 3D
positioning data.

• Small, light-weight, retro-reflective markers are placed on an object. Markers
of different sizes and hardness can be used interchangeably. Three markers is
the minimum to track 6 DOF rigid bodies, but more markers can be added
to enhance rigid body recognition reliability.

• Cameras emit infrared light onto the markers, which reflect the light back to
the camera sensor. This information is then used to calculate the position
of targets with high spatial resolution. Precision and covered volume can be
increased by adding cameras to the system. The refresh rate of the data can
be selected in the range of 1 Hz up to 500 Hz.

• To make a 3D reconstruction of 2D data, the system needs to be calibrated.
A wand is simply moved around in the capture volume for 10 s – 40 s while
a stationary reference object in the volume defines the lab coordinate system.
During the calibration the system performs an automatic detection of each
camera's position and angle.

• The system works in RT mode and capture mode. It can be connected to
analog ±10 V signals for synchronization. The data can be retrieved through
a TCP/IP or OSC server using LabVIEW, MATLAB or QTM clients. The
data can also be exported to TSV, C3D and MATLAB for post processing
and visualization.
3.2.3 Data Retrieval
There is a central computer running proprietary software that enables all the
Qualisys Oqus cameras connected to the Ethernet hub. Based on the video obtained from
these cameras and several user selectable parameters (capture rate, exposure
time, marker threshold, prediction error, max frame gap, etc.) the proprietary
Qualisys software calculates the x-y-z position of each tag in range.
This data can be requested asynchronously from other computers connected to
the Ethernet hub using a MATLAB script or a LabVIEW VI; in this thesis we only
use the second option to retrieve the data. Qualisys comes with a QLC, which is a
LabVIEW library that consists of the following VIs:
QLC main VI Used for connecting to QTM RT server and downloading
data.
Q Command Used for controlling QTM by sending commands.
Q2D Used for fetching 2D data.
Q3D Used for fetching 3D data.
Q3D No Labels Used for fetching unidentied 3D data.
Q6D Used for fetching 6 DOF data.
Q6D Euler Used for fetching 6 DOF data with Euler angles.
Q Analog Single Used for fetching analog data, only one sample (the latest
one).
Q Analog Used for fetching analog data.
Q Force Single Used for fetching force data, only one sample (the latest
one).
Q Force Used for fetching force data.
In the following part we will present in detail the VIs used in this thesis; the
rest of them will be omitted for simplicity.
QLC VI
The main QLC VI must always be included in the project and be given the required
parameters in order for the QLC to be able to deliver data to LabVIEW. Only
one instance of QLC.vi should be used per LabVIEW client. Input and output
parameters are described below.
Figure 3.4: LabVIEW QLC main VI, used to retrieve frames from the system.
Connect Set to true to connect to QTM RT server.
Address QLC needs the IP address of the computer running QTM. If QTM is
running on the same computer, use: 127.0.0.1 (default) or localhost.
Port The port used for the connection with QTM. The default value is 22222.
Frequency The frequency parameter is used to set the update frequency:
1. AllFrame: sets the update frequency to the camera frequency.
2. Frequency (n): sets the update frequency to n Hz.
3. FrequencyDivisor (n): sets the update frequency to the camera frequency
divided by n.
Data This parameter tells QTM which type of data it should send. To send several
different data types, use a space between each type. The default is to send
All. Using all data types can result in big data frames, which can reduce
performance. The best practice is to use only the components you need.
Available data components: [All] [2D] [3D] [3DRes] [3DnoLabels] [3DnoLabelsRes]
[Analog] [Force] [6D] [6DRes] [6DEuler] [6DEulerRes].
ControlQTM Set to true to take control of QTM. You need to take control over
QTM to be able to control QTM via commands.
Message Returns status messages from the QTM RT server.
LastEvent Returns the last event from the QTM RT server. Here are all possible
events:
None = 0
Connected = 1
Connection Closed = 2
Capture Started = 3
Capture Stopped = 4
Fetching Finished = 5
Calibration Started = 6
Calibration Stopped = 7
RT From File Started = 8
RT From File Stopped = 9
Waiting for trigger = 10
Camera settings changed = 11
QTM shutting down = 12
Capture Saved = 13
QTMMaster Returns true if you are the QTM master, i.e. have control over
QTM.
CamTimeStamp Returns a 64 bit integer containing the current camera time
stamp in µs.
CamFrameNumber Returns a 32 bit integer containing the current camera frame
number.
Updated Returns true if new data is read from the QTM RT server. This applies
to all data types, except for Analog and Force.
Q Command VI
It is possible to control QTM via the Q Command VI. To be able to control QTM
from the LabVIEW client, you have to set the controlQTM input to true in the
QLC.vi. You must also set the Allow client control checkbox in QTM; it can be
found under Processing/RT outputs in workspace options. Only one RT client can
control QTM at once. This includes all RT clients, not only LabVIEW clients.
Figure 3.5: LabVIEW Q Command VI, used to send instructions to the QTM.
Once you have control over QTM, you can issue the following commands:
New Create new measurement. Unsaved measurement must be saved or closed to
be able to create a new measurement.
Close Close measurement.
Start Start a capture in QTM. The length of the capture is the current capture
length in QTM. It is possible to stop a capture prematurely, with the Stop
command.
Start RT from file Simulate a RT session by playing the current QTM file.
Stop Stop QTM capture or RT session from le.
Save Save current measurement in QTM. The name of the QTM file is set with
the pFileName input in the QCommand VI.
Send Trig Send an external trigger to QTM. This will trigger a measurement, if
the system is set to start on external trigger.
Send Event Set an event in QTM. The name of the event is set with the pEvent-
Label input in the QCommand VI. If no name is given, the label will be set
to Manual event.
Q6D Euler VI
This VI is used to obtain the 3D position and angles of each rigid body. Several
instances of this VI can be called from the LabVIEW program. In the following
diagram we can see the user specified parameters and outputs of the VI.
Figure 3.6: LabVIEW Q6D Euler VI, used to retrieve rigid body 6 DOF data.
No Order number of 6 DOF body to read.
Residual Set to true to read 6 DOF residual values.
x, y, z 3D position coordinates.
Roll, Pitch, Yaw Euler rotation angles.
Res 6 DOF residual values.
3.2.4 Advantages
1. Global reference frame can be defined and later translated or rotated.
2. Can store/import/export configuration files, calibrations, rigid bodies and
interface parameters.
3. System can be integrated with HD video cameras to film the experiments in
RT synchronously with the data recording of position and angles.
4. Capacity to enable bounding boxes to restrict the area of valid data points by
masking out possible sources of noise.
5. Data reprocessing allows for recorded experiments to be recalculated for en-
hanced precision, coordinate system shift, etc.
6. Hierarchical organization of projects, data recordings, and video footage for
easy playback, and quick navigation between experiments.
7. A great deal of data processing is done inside the cameras, making the central
QTM application lightweight in terms of processor load.
3.2.5 Disadvantages
1. Requires several cameras to cover a relatively small volume.
2. Reflective and shiny materials such as aluminium can cause small disruptions.
3.3 Extended Kalman Filter
When using the Ubisense positioning system the error associated with each measurement
is too large (30 cm on the x-y plane and 1 m on the z-axis); for that
reason we implemented an EKF to improve the performance of the controllers. The
EKF was designed and tested using Simulink and then implemented in LabVIEW
through a MATLAB Script Node.
In this chapter we will cover the theory behind the EKF; we will show the
implementations in both Simulink and LabVIEW, and analyze the improvement
that the filter produces over the raw measurements. Even though this filter was
designed to cope with the poor quality of the Ubisense measurements, it can also
be used to improve the measurements of other positioning systems.
The theory and implementation of the EKF presented in this part was taken
from Phil Goddard's webpage, Goddard Consulting, and adapted to fit the
purposes of this thesis. Even though we only deal with the EKF in two dimensions,
scaling the theory and implementation derived here into three dimensions is
straightforward.
3.3.1 Theory
The KF is widely used in engineering for estimating unmeasured states of a process
with known dynamics. The KF in general works in an iterative fashion, using
predictions and measurements to estimate the most likely state at each time step.
Using these state estimates as input to control algorithms generally produces much
better results than using the raw measurements without filtering.
The most generic and adequate type of KF for non-linear discrete-time processes
is the EKF. However, there also exist simpler versions of this filter for linear processes,
like the KF, and more complex versions for continuous-time systems, such as the
Kalman-Bucy Filter. Considering the non-linear discrete-time process with input
and measurement noise represented by $w_k$ and $v_k$:
Figure 3.7: EKF detailed diagram.
We can write the following equations in standard state-space form:

$$x_k = f(x_{k-1}, u_k, k) + w_{k-1}$$
$$y_k = h(x_k, u_k, k)$$
$$\bar{y}_k = y_k + v_k$$

Here

$k$  discrete point in time.
$u_k$  vector of inputs.
$x_k$  vector of states.
$y_k$  vector of outputs.
$\bar{y}_k$  vector of measured outputs.
$w_k$  vector of state noise with zero mean Gaussian distribution and $Q_k$ covariance.
$v_k$  vector of output noise with zero mean Gaussian distribution and $R_k$ covariance.
$f(\cdot)$  non-linear function that relates the past state, the current input and the current time to the current state.
$h(\cdot)$  non-linear function that relates the current state, the current input and the current time to the current output.

Table 3.1: the meaning of each variable in the EKF.
The EKF algorithm takes as inputs the measured outputs, the process inputs
and a certain time k to produce as output the unmeasured observable states and the
actual process outputs. This is represented graphically by the following diagram:
Figure 3.8: EKF simplified diagram.
The EKF algorithm takes place in two steps:
1. The first step consists of projecting the most recent state estimate and an
estimate of the error covariance forwards in time to calculate a predicted
state estimate at the current time step.
2. The second step consists of correcting the predicted state estimate calculated
in the first step by incorporating the most recent process measurement to
generate an updated state estimate.
Since the process is non-linear, instead of using $f(\cdot)$ and $h(\cdot)$ directly in the
prediction and update equations we ought to use the Jacobians of $f(\cdot)$ and $h(\cdot)$. The
Jacobians are calculated using the following formulas:

$$F_k = \left.\frac{\partial f}{\partial x}\right|_{(x_k,\, u_k,\, k)} \qquad H_k = \left.\frac{\partial h}{\partial x}\right|_{(x_k,\, u_k,\, k)}$$
For the EKF the predictor step is given by the following expressions:

$$\hat{x}_k^- = f(\hat{x}_{k-1}, u_k, k)$$
$$P_k^- = F_{k-1} P_{k-1} F_{k-1}^T + Q_k$$

And the corrector step is given by the following expressions:

$$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1}$$
$$\hat{x}_k = \hat{x}_k^- + K_k \left( \bar{y}_k - h(\hat{x}_k^-, u_k, k) \right)$$
$$P_k = (I - K_k H_k) P_k^-$$

Here

$P_k$¹  covariance estimate of the measurement error.
$K_k$  Kalman gain.
$\hat{x}_k$¹  current estimate of the states after the correction.
$\hat{y}_k$  current output estimate.

Table 3.2: the meaning of each variable in the EKF corrector step.
3.3.2 Implementation
The problem involves estimating the x y position and x y velocities of an
object based on a succession of noisy x y measurements. To implement the
EKF algorithm we need to calculate the Jacobian matrices for the state and the
measurement equations. Using a rst order approximation we get the following
expression:
$$x(k+1) = \begin{bmatrix} x_{pos}(k+1) \\ x_{vel}(k+1) \\ y_{pos}(k+1) \\ y_{vel}(k+1) \end{bmatrix} = \begin{bmatrix} 1 & dt & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & dt \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{pos}(k) \\ x_{vel}(k) \\ y_{pos}(k) \\ y_{vel}(k) \end{bmatrix} = F_k \, x(k)$$
This matrix formulation of the equations can be interpreted as follows: over a
small period of time $dt$, the position in both the x and y directions changes
proportionally to the velocity along that axis, and the velocity remains constant over the next
time step. Here we named the Jacobian matrix $F_k$.
In our particular scenario we are getting x and y directly from the system, so
the measurement update equation is very simple:

$$m_k = \begin{bmatrix} m_{1k} \\ m_{2k} \end{bmatrix} = \begin{bmatrix} x_{pos} \\ y_{pos} \end{bmatrix}$$
¹ $P_k$ and $\hat{x}_k$ are stored and used in the predictor step of the next iteration.
Calculating the Jacobian for the measurement equations we get:

$$H_k = \left. \frac{\partial m_k}{\partial x} \right|_{x} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
In the simulation the EKF is implemented with a MATLAB Script Node as
shown in the following figure:
Figure 3.9: EKF Simulink block diagram.
The code inside the MATLAB Script Node is mainly an adaptation of Phil
Goddard's code; the main change is that instead of taking the range and bearing
measurements as shown on his website, we directly use the x and y positions
to be filtered. The code inside the block was relegated to Appendix A for
simplicity.
The LabVIEW implementation uses the same code inserted into a MATLAB
Script Node; the only difference is that from LabVIEW it is forbidden to use the
persistent keyword. To bypass this problem, we use feedback nodes to store the
corresponding matrices and vectors for the next iteration.
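Since both the state and measurement equations of this model are linear, the Jacobians $F_k$ and $H_k$ are constant and the EKF recursion reduces to the standard KF. For reference, below is a minimal MATLAB sketch of one predictor-corrector iteration for this model; it is not the thesis code from Appendix A, and the covariances Q and R are illustrative assumptions:

    % Minimal sketch of one EKF iteration for the constant-velocity model above.
    % State xhat = [x_pos; x_vel; y_pos; y_vel], measurement meas = [x_pos; y_pos].
    % Q and R are illustrative assumptions, not the values used in the thesis.
    function [xhat, P] = ekf_step(xhat, P, meas, dt)
        F = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1];   % state Jacobian F_k
        H = [1 0 0 0; 0 0 1 0];                        % measurement Jacobian H_k
        Q = 0.01 * eye(4);                             % assumed process noise covariance
        R = 0.05 * eye(2);                             % assumed measurement noise covariance
        % Predictor step
        xhat = F * xhat;                               % f() is linear in this model
        P    = F * P * F' + Q;
        % Corrector step
        K    = (P * H') / (H * P * H' + R);            % Kalman gain K_k
        xhat = xhat + K * (meas - H * xhat);           % correct with the latest measurement
        P    = (eye(4) - K * H) * P;                   % P and xhat are fed back for the next call
    end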
3.3.3 Results
To study the improvement achieved with the usage of the EKF, we need a measure
of how much closer to the real value the readings get after being filtered. The easiest
way to accomplish this is to compare the time integral of the squared difference between
the true value and the measurement for both cases: with and without the
filter. We will denote by $R$ the accumulated squared error calculated using the
raw measurements, whereas we will denote by $F$ the accumulated squared error
calculated using the filtered measurements. This is expressed mathematically with
the following formulas:
$$R_x(t) = \int_0^t \left( x(\tau)_{raw} - x(\tau)_{true} \right)^2 d\tau \qquad R_y(t) = \int_0^t \left( y(\tau)_{raw} - y(\tau)_{true} \right)^2 d\tau$$

$$F_x(t) = \int_0^t \left( x(\tau)_{filter} - x(\tau)_{true} \right)^2 d\tau \qquad F_y(t) = \int_0^t \left( y(\tau)_{filter} - y(\tau)_{true} \right)^2 d\tau$$
In this simulation, we reproduce the limitations of the experimental setup by
adding Gaussian noise to the measured position. The Gaussian noise used has a
variance equal to 0.05 and was capped at 0.2 m to resemble the real experimental
conditions. The quantity $x(\tau)_{raw}$ is the true x position of the vehicle plus the
Gaussian noise, whereas the quantity $x(\tau)_{filter}$ is $x(\tau)_{raw}$ after being filtered
using the EKF, as shown in the following diagram:
Figure 3.10: Simulink diagram showing the calculations used to test the performance of
the EKF.
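The same metric can also be computed offline from logged signals. The following minimal MATLAB sketch uses placeholder data (a moving average stands in for the EKF output) purely to illustrate the bookkeeping:

    % Minimal sketch of the accumulated squared error metrics R_x and F_x.
    % x_true, x_raw and x_filt are placeholder signals; in practice they would
    % be the logged true, raw and EKF-filtered x positions of the vehicle.
    t      = (0:0.05:100)';
    x_true = sin(0.1 * t);                          % placeholder true trajectory
    x_raw  = x_true + 0.2 * randn(size(t));        % raw = true + Gaussian noise
    x_filt = filter(ones(20,1)/20, 1, x_raw);      % moving average as filter stand-in
    Rx = cumtrapz(t, (x_raw  - x_true).^2);        % integral of squared raw error
    Fx = cumtrapz(t, (x_filt - x_true).^2);        % integral of squared filtered error
    plot(t, Rx, t, Fx);
    legend('acc. error (raw)', 'acc. error (filtered)');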
In the following graph we compare $R_x$ with $F_x$. The comparison between
$R_y$ and $F_y$ is analogous and will be omitted. From the graph it can be seen that
the accumulated error of the raw measurements grows approximately twice as fast
as the accumulated error of the filtered measurements; therefore a substantial
improvement in the performance of the controller is to be expected.
Figure 3.11: time plot of the integral of the error squared with and without filtering.
To illustrate how this filter improves the measurements we present the following
graph, which consists of a simultaneous time plot of the true value, the raw measurement
and the filtered value:
Figure 3.12: time plot of the true value, raw measurement and filtered measurement of
the x position of the vehicle.
Chapter 4
Testbed
In this chapter we present the overall structure of the testbed in both hardware
and software. In the first section we deal with the structure of the hardware,
describing how the control loop is closed using the feedback from the positioning
systems. In the second section we deal with the structure of the software, and
how to handle the complexity of this problem by subdividing the controller design
into three layers with different scopes and purposes.
4.1 Testbed Structure
4.1.1 Positioning Systems
Ubisense
The tags continuously transmit ultra-wideband pulses which are received by the
sensors and used to determine where the tag is located. This data is filtered inside
the computer running the Ubisense proprietary software and then forwarded to
the TCP/IP connection. This information can later be read from any computer
connected to the same TCP/IP gateway through an Ethernet cable.
Qualisys
Reflective spherical tags are arranged in unique 3D patterns on the rigid body that is
to be tracked. High speed infrared cameras are used to cover the volume of the room,
and triangulation algorithms allow us to get the precise location and orientation of
each vehicle. The system requires a simple calibration procedure that takes around
20 s and ensures maximum precision of the measurements.
4.1.2 Closed Loop Control
The position data of each vehicle obtained using the Ubisense or Qualisys is read
using LabVIEW. Sensor data such as IRs, heading and altitude of the vehicles is
constantly being received through the sensor SF mote and updated in the corresponding
variables inside the VI. LabVIEW also implements control algorithms to
calculate actuation commands that are then sent in a scheduled manner to the
robots through the actuator SF mote.
These actuation commands are received at the vehicle's actuator mote and the
corresponding control inputs are sent to the servos or to the Arduino, depending on
whether it is a ground or aerial vehicle. The sensor mote performs the ADC conversion of
the IRs, broadcasts the data, and the cycle repeats.
It is worth noticing that the cycle that updates the position is independent from
the cycle that updates the sensor data, which in turn is independent from the cycle
that sends actuation commands. In fact, these cycles happen at different frequencies,
and read/write collisions are avoided using semaphores.
Figure 4.1: feedback loop diagram showing the vehicles, motes, positioning systems, and
the LabVIEW program.
Ubisense Testbed
In the following figure we present a simplified connection diagram of the Ubisense
testbed: the dotted line represents an Ethernet connection and the arrows indicate
the direction of the information flow.
Figure 4.2: simplified connection diagram for the vehicles and the testbed using the
Ubisense.
Qualisys Testbed
In the following figure we present a simplified connection diagram of the Qualisys
testbed: the dotted line represents an Ethernet connection and the arrows indicate
the direction of the information flow.
Figure 4.3: simplified connection diagram for the vehicles and the testbed using the
Qualisys.
4.1.3 Onboard Sensors
Proximity Sensors
Each vehicle has four proximity sensors: on the ground vehicles two are located on
the front and one on each side, whereas on the quadrotors there is one sensor point-
ing in each direction. The purpose of these sensors is mainly obstacle avoidance.
On the quadrotors all four sensors are long range whereas in the trucks only one is
long range and the remaining three are short range.
To interface the sensors with the motes ADC we use a custom board called IR
Sensor Interface Board (see Appendix B) that provides the sockets to connect the
IR sensors and the mote.
Orientation Sensors
Each vehicle has an Arduino Board with a magnetometer to determine the heading
of the vehicle. This is very important especially when we use the Ubisense because
there is no other reliable way to estimate the orientation with this positioning
system. However with the Qualisys we can extract the orientation information
directly from the QTM because it supports 6 DOF tracking of rigid bodies with a
much greater accuracy than the magnetometer.
Height Sensors
Each quadrotor is equipped with an ultrasonic pulse generator; with this we can
measure the amount of time it takes for a pulse to bounce back from the ground,
and using this information we can estimate the height. The ground vehicles are not
equipped with this sensor. The sonar is very important for the aerial vehicles when
we use the Ubisense, because there is no other reliable way to estimate the height
since the error associated with the z-axis using this positioning system is about 1 m.
However, with the Qualisys we can read precise height information directly from the
QTM without using this sensor.
Arduino Boards
These boards consist of the CPU and the IMU boards, respectively red and blue.
In the trucks they work only as a center for processing the sensor data coming
from the magnetometer and the sonar, whereas in the quadrotors they play a more
important role because, additionally, they implement a double-loop cascade PID
control to stabilize the aircraft. In the quadrotor this board also implements the
PWM reader that provides the user with an abstraction to control the throttle,
pitch, roll and yaw of the aircraft instead of having to send separate signals to each
motor.
The Arduino boards do not have wireless communication capability; for this
reason we use a custom interface board called Arduino-Mote Interface Board (see
Appendix C) to connect the Arduino via serial to the mote that performs the
ADC conversion of the IR sensors. With this hardware configuration it is possible
to send the entire set of sensor data gathered from the IRs, the Arduino sonar and
magnetometer using only one mote.
4.1.4 Onboard Actuators
Pololu Board
This board is designed to take a serial input and create PWM signals at the output
to control servo position, motor speed or the input to another circuit. Just like
RC equipment provides different operating points for servos and motors in toy cars
by changing the duty cycle of a square wave, the Pololu does exactly the same, but
instead of receiving the input wirelessly through the radio it is controlled using a
serial interface.
The serial commands that the Pololu receives come from the actuator mote of the
vehicle, which in turn receives these commands wirelessly from the main computer.
To interface the Pololu with the mote's serial port we use a custom board called
Actuator Interface Board (see Appendix B) that provides the pins to connect the
Pololu and the mote.
Servos and Motor Speed in Trucks
In the trucks we use two servos: one for the steering and one for triggering the trailer
release. The steering is controlled using the first channel of the Pololu, whereas the
trailer release is controlled using the third channel. The second channel of the
Pololu is used to control the speed of the motor, and the remaining five channels
are available for future expansions such as controlling the servo that takes care of
shifting between gears, etc.
Throttle, Pitch, Roll and Yaw in Quadrotors
In the quadrotors the PWM signals that come from the Pololu do not feed the
motors or actuators directly. Instead, they are used to feed the Arduino Board's
first four input channels, corresponding to the throttle, pitch, roll and yaw. In this
scenario we are only using the first four channels of the Pololu and the other four
are available for future expansions to control cameras, grippers, etc.
4.2 Layered Controller Architecture
The controller was divided into three layers to be able to handle the differences
between each type of agent in a concise and structured way. From the most specific
to the most generic these three layers are: motion planning, coordination and mission
planning. Some of the examples and applications discussed in this section are
beyond the scope of this thesis; for that reason they are merely used to illustrate
how these concepts are applied in this thesis.
Figure 4.4: control layers for one truck and one quadrotor.
4.2.1 Motion Planning
This layer is used for short-sighted navigation, or so-called navigation based on
reflexes; it uses the feedback from sensors such as encoders, magnetometer, positioning
system, proximity sensors and gyroscopes to track a certain reference, for example
an x-y-z position, a distance from an object, a certain orientation or
speed.
This layer has access to the actuators and motors and is responsible for keeping
the robot safe, i.e. avoiding collisions with other robots, walls, etc. This layer
has higher priority than the other layers; this means that if there is a conflict between
different control signals, the one which will prevail is the signal originated in this
layer using the data from the sensor feedback.
This layer is able to operate independently from the other layers; this means
that if it does not receive data from the other layers it will still behave appropriately.
This controller handles the platform specific dynamics such as the vehicle type and
actuators, thereby making this layer an abstraction for the higher level layers.
To give an example of this abstraction, let us examine the two vehicles used in
this thesis from the perspective of this layer. First the ground vehicles: a reference
point is given as input and this layer will calculate the control signals to move the
vehicle to the WP. Since this algorithm converges in finite time, from the instant
the vehicle reaches the WP it will remain still, waiting for the input reference to
change.
The scenario is different for the aerial vehicles. Since we are using a quadrotor,
for a given constant reference point in 3D space above the ground, the drone will
hover by adjusting the rotor speeds at all times; this means that even after the
desired location is reached, this layer will actively work to keep the quadrotor at
the reference point.
The abstraction is represented by the fact that the upper level layers do not
have to worry about what the motion planning is doing to hold the position; they
can simply rely on the fact that the reference points will be reached in finite time.
The responsibility of producing proper trajectories lies with those upper level layers;
they achieve this by scheduling WPs in a proper way over time.
4.2.2 Coordination
This layer is used to continuously generate references to be tracked by the motion
planning layer. This layer is responsible for checking that the trajectories generated
are safe; this means that when the robots move along these paths, they should not
collide with other robots or obstacles in the way.
The set of reference points changing over time describes trajectories with variable
resolution. The resolution of these trajectories has to be sufficiently high to achieve
the control precision required. The resolution is inversely related to the distance
between WPs and directly related to the frequency at which these WPs need to
be updated.
This means that the closer we place the WPs, the more resolution we will achieve
and the more frequently we need to update the WPs. However, there is a limit beyond
which it is not possible to increase the resolution of a trajectory. The reason for this
impossibility is not a lack of computational resources but the fact that our position
feedback has a limited precision.
In any case, after we achieve a certain resolution for our trajectory it makes
no sense to add more WPs in between, because the performance will not increase.
Therefore, we ought to find the optimal resolution for our trajectory by means of
trial and error, and not just by assuming that the more resolution, the better. The
only limitation regarding the resolution of our trajectories is that it should not
surpass the precision of the positioning system.
At any given instant in time this layer should be providing one WP to the
motion planning layer. Additionally, this layer is responsible for keeping track of
the Distance to WP, vehicle speed and other variables that can be used to estimate
when to generate a new WP in order to achieve a smooth motion.
For example, when using the trucks the new WPs need to be generated before
the truck reaches the reference and triggers the stop, otherwise the driving will be
rough; likewise, in the quadrotors, to achieve circular motion at a constant angular
velocity the WPs need to be updated before the vehicle reaches the reference.
4.2.3 Mission Planning
This is the highest layer in our program; it is responsible for controlling the behavior
of the coordination layer so that the mission is fulfilled. This layer does not generate
WPs directly but rather generates more complex commands that are decoded by
the coordination layer as a succession of WPs in time.
For example, one of those commands could be "move forward for a certain
distance"; this and other commands will be produced and queued in this layer. This
queue of commands will be taken care of by the coordination layer, which will decode
the commands into several WPs distributed in time.
This layer is responsible for the re-planning in case something does not work as
expected; this means that just like the coordination layer needs to keep track of the
WPs being reached in order to update them, this layer needs to keep track of the
sub goals being achieved in order to produce new commands. This layer also keeps
track of the elapsed time since the mission started.
This is the only part of the program that needs to be modified in case we want
to perform a different mission; examples of these missions were already mentioned:
platooning, surveillance, exploration, etc. This layer is common to all the robots;
this means that the algorithms to fulfill the mission are coordinated here using all
the information available from the sensors and the positioning system.
4.3 Implementation
In this thesis we will use the LabVIEW programming environment to achieve a
coherent implementation of the layered control. This programming environment is
graphical (G-code), easy to understand, highly scalable, and allows a hierarchical
design using VIs. The LabVIEW program will be covered in detail in Chapter 7.
Chapter 5
Ground Vehicles
In this chapter we present the differential equations and the dynamic models of the
ground vehicles. In Chapter 8 we provide simulations to back up the mathematical
analysis presented here. In this thesis there are two different notations: one for the
ground vehicles and one for the aerial vehicles; for ease of comparison both notations
have been presented in the beginning of this document, however we include them
again in the chapters where they are used.
5.1 Ground Vehicle Variable Definition
Symbol  Meaning  Theoretical Range
$x$, $y$, $z$  Cartesian coordinate system positions  $(-\infty, \infty)$, $(-\infty, \infty)$, $[0]$
$\dot{x}$, $\dot{y}$, $\dot{z}$  Cartesian coordinate system velocities  $(-\infty, \infty)$, $(-\infty, \infty)$, $[0]$
$\ddot{x}$, $\ddot{y}$, $\ddot{z}$  Cartesian coordinate system accelerations  $(-\infty, \infty)$, $(-\infty, \infty)$, $[0]$
$t$  Time  $[0, \infty)$
$\theta$  Vehicle Orientation  $[0, 2\pi)$
$\phi$  Vehicle Steering  $(-\frac{\pi}{2}, \frac{\pi}{2})$
$\alpha$  Vehicle to WP Displacement  $(-\pi, \pi]$
$d$  Distance to WP  $[0, \infty)$

Table 5.1: table showing the standard variables used for the ground vehicles.
5.2 Mathematical Model
The dynamics of a ground vehicle can be described as follows: the steering angle of
the wheels determines in which direction the vehicle will turn and at which ratio will
it do so. There is however an important limitation to be considered: for the vehicle
to turn, i.e. change its angle, it must have a velocity dierent than zero. For
simplicity, in our model we restrict our velocity range to avoid negative numbers,
this means that the vehicle can move forward and stop but never go backward.
Additionally, in reality it is not possible to establish steering values instantly, for
this reason we model the steering angle () as a rst order dierential function of
the steering wheel rotational speed ().
In the following gure we show these variables in a simplied diagram of a
ground vehicle:
$\phi$  angle between the wheels and the vehicle's main axis.
$\theta$  angle of the vehicle's main axis measured from the x-axis in the ground plane, i.e. the x-y plane.
$\alpha$  angle between the vehicle's main axis and the vector that goes from the vehicle to the next WP.
$x_0$, $y_0$  vehicle coordinates in the ground plane.
$d$  distance to WP.

Table 5.2: the meaning of each variable in the simplified ground vehicle diagram.
Figure 5.1: ground vehicle graphical variable representation.
The most generic equation system to describe the dynamics of a ground vehicle
that complies with the requirements mentioned in the previous paragraph is the
following:

$$\frac{\partial}{\partial t}\begin{bmatrix} x \\ y \\ \dot{x} \\ \dot{y} \\ \theta \\ \phi \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \omega + \begin{bmatrix} 0 \\ 0 \\ \cos(\theta + \phi) \\ \sin(\theta + \phi) \\ 0 \\ 0 \end{bmatrix} \frac{\partial}{\partial t}\sqrt{\dot{x}^2 + \dot{y}^2} + \begin{bmatrix} \cos(\theta + \phi) \\ \sin(\theta + \phi) \\ 0 \\ 0 \\ \sin(\phi) \\ 0 \end{bmatrix} \sqrt{\dot{x}^2 + \dot{y}^2}$$
In this scenario we can control the rotational speed of the steering wheel and the
acceleration of the car itself. These quantities are represented by $\omega$ and
$\frac{\partial}{\partial t}\sqrt{\dot{x}^2 + \dot{y}^2}$ respectively. However, in our case we are using truck and car robotic models which
are highly responsive in both acceleration and deceleration, so we can disregard the
short speed transient and assume we can control the speed directly to obtain the
following system:

$$\frac{\partial}{\partial t}\begin{bmatrix} x \\ y \\ \theta \\ \phi \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \omega + \begin{bmatrix} \cos(\theta + \phi) \\ \sin(\theta + \phi) \\ \sin(\phi) \\ 0 \end{bmatrix} \sqrt{\dot{x}^2 + \dot{y}^2}$$
In this scenario we can control the rotational speed of the steering wheel and
the speed of the car itself. These quantities are represented by $\omega$ and $\sqrt{\dot{x}^2 + \dot{y}^2}$
respectively. However, in our case we are using a servo to steer which has a time
response ranging from 0.16 s to 0.19 s. This means that it can establish any steering
angle almost instantly. In order to simplify the model even further we neglect this time
delay and obtain the following system:

$$\frac{\partial}{\partial t}\begin{bmatrix} x \\ y \\ \theta \end{bmatrix} = \begin{bmatrix} \cos(\theta + \phi) \\ \sin(\theta + \phi) \\ \sin(\phi) \end{bmatrix} \sqrt{\dot{x}^2 + \dot{y}^2}$$
In this simplified scenario we can control the steering angle and the speed of
the car itself. These quantities are represented by $\phi$ and $\sqrt{\dot{x}^2 + \dot{y}^2}$ respectively. A
state is fully determined by the x-y position and the angle $\theta$ which represents the
orientation of the vehicle with respect to the ground plane.
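As a quick sanity check of this model, separate from the Simulink simulations of Chapter 8, the following minimal MATLAB sketch integrates the equations above with forward Euler; the constant speed and steering inputs are illustrative and trace a circular path:

    % Minimal forward-Euler integration of the simplified ground vehicle model.
    dt = 0.01; N = 1000;
    v = 0.5; phi = 0.1;              % assumed constant speed [m/s] and steering [rad]
    x = 0; y = 0; theta = 0;         % initial state
    traj = zeros(N, 2);
    for k = 1:N
        x     = x     + dt * v * cos(theta + phi);
        y     = y     + dt * v * sin(theta + phi);
        theta = theta + dt * v * sin(phi);
        traj(k, :) = [x, y];
    end
    plot(traj(:,1), traj(:,2)); axis equal   % constant inputs give a circle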
5.3 Theoretical Controller Design
We have already presented the goals of this thesis in terms of ground vehicle control.
It is clear that the first step in achieving cooperative control is to design the three
layers of the motion planning scheme presented in Chapter 4.
It is to be noted that the scenario where this will be used is platooning;
therefore we should bear in mind that the WPs used to coordinate the motion will
change over time, either due to the WP scheduling of the higher level layers or
because the WPs themselves are the x-y position of another vehicle in the platoon.
In order to bring a ground vehicle from its current position to the WP we have to
provide speed and steering signals for all times.
5.3.1 Layer 1 - Motion Planning
This layer of the controller takes as input the x-y position of the truck and
the x-y coordinates of the WP. With this information it produces speed and
steering signals to move the truck to the WP. Clearly, the position of the truck will
change along the trajectory, but the responsibility of keeping it updated lies with
the positioning system process explained in detail in Chapter 7.
Speed Control
This controller is designed to hold a certain distance to the WP. In order to achieve
this goal, the speed will be higher the farther away we are from the WP and decrease
as we get closer. There is an important limitation regarding the sign of the speed:
it must always be non-negative, which means that the cars in the platoon will always
move forward or stop. The practical implication of this is that even if we get closer
than the reference distance, the lowest speed we will reach is zero.
A PD controller is used in both the simulations and the VIs to implement this
algorithm. In the following figure we show a diagram of the speed controller.
Figure 5.2: speed control diagram showing the feedback loop, reference distance and PD
controller.
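As an illustration, below is a minimal discrete-time MATLAB sketch of this PD structure; the gains, limits and sample time are illustrative assumptions rather than the tuned values used in the experiments. The steering controller presented next uses the same structure, with the displacement $\alpha$ as error signal and a symmetric saturation:

    % Minimal sketch of the discrete PD speed controller with saturation.
    % Kp, Kd and v_max are illustrative assumptions, not the tuned values.
    function [v, e_prev] = speed_pd(dist_to_wp, ref_dist, e_prev, dt)
        Kp = 0.8; Kd = 0.2; v_max = 1.0;
        e = dist_to_wp - ref_dist;           % positive error: too far from the WP
        v = Kp * e + Kd * (e - e_prev) / dt; % PD law on the distance error
        v = min(max(v, 0), v_max);           % saturated and never negative
        e_prev = e;                          % stored for the derivative term
    end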
Steering Control
This controller is designed to keep the Vehicle to WP Displacement () as close to
zero as possible. This is achieved by steering away from the neutral steering angle
according to the sign and magnitude of . In other words, the steering magnitude
will be proportional to the absolute value of the displacement angle and will be
performed to the right if the angle is positive or to the left if the angle is negative.
A PD controller is used in both the simulations and the VIs to implement this
algorithm. In the following gure we show a diagram of the steering controller.
Figure 5.3: steering control diagram showing the feedback loop, reference displacement
and PD controller.
Safety Control
This controller is designed to avoid collisions between vehicles and with other ob-
jects, it functions independently from the other controllers and has a higher priority,
this means that the control signals produced here will override the other signals.
Four IR proximity sensors are attached to each ground vehicle: two on the front and
one on each side. According to the values of these sensors, an emergency stop will
be triggered automatically regardless of the steering and speed signals produced by
the other two controllers.
An on-o controller is used in the VIs to implement this algorithm. It takes
as input the readings from the proximity sensors and triggers a stop if a certain
threshold is reached. In the simulations we do not have measurements of proxim-
ity to objects so we rely on the speed controller alone to avoid collisions between
vehicles.
Figure 5.4: safety control diagram showing the feedback loop, threshold safety distance
and on-off controller.
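A minimal MATLAB sketch of this on-off override is shown below; the threshold value is an illustrative assumption:

    % Minimal sketch of the on-off safety controller: any proximity reading
    % below the threshold triggers a stop that overrides the speed command.
    function v_out = safety_override(ir_distances, v_in)
        threshold = 0.25;                    % assumed stop distance [m]
        if any(ir_distances < threshold)
            v_out = 0;                       % emergency stop, highest priority
        else
            v_out = v_in;                    % pass the speed command through
        end
    end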
It is worth noticing that even though the emergency stop seems a rather bold
solution, it works well for the robotic systems despite its inapplicability to real life
autonomous driving. For that reason, the development of more complex algorithms
to improve the safety and comfort of human passengers lies beyond the scope of this
thesis and can be treated in more detail in future projects.
5.3.2 Layer 2 - Coordination
This layer is implemented through a MATLAB Script Node which keeps track of
the Distance to WP and based on that information requests a new WP when certain
criteria are met. The criteria vary depending on how much information we have
available.
In the most basic case we only have available the current position of the vehicle
and the WP; in this case the only thing we can do is update the WP when the
distance between the vehicle and the WP falls below a certain threshold. With
the Qualisys, however, it is possible to have a reliable estimate of the speed; this
additional information can be used to delay or hasten the WP update in time,
achieving a more fluid motion.
In the situation where the truck is not the leader of the platoon, the WP to be
tracked will be the coordinates of another truck; in this scenario this layer will not
need to perform any mathematical calculations to update the truck's WP because
it will be the x-y position of another vehicle.
To achieve a more realistic platooning it is possible to shift the reference to
be tracked by the followers so that instead of tracking the center of the truck,
the followers track the rear part of the preceding ground vehicle. This layer is
responsible for calculating the x y position of the rear of the vehicle using the
following formula:
x
back
= x
0
Dcos()
y
back
= y
0
Dsin()
Here x
back
and y
back
are the coordinates to be tracked, and they correspond to
a point that is to the rear part of the vehicle, a distance D from its center, i.e.
(x
0
, y
0
).
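For concreteness, a short MATLAB computation of this reference point, with illustrative numbers:

    % Rear-of-truck reference point from the formulas above (example values).
    D = 0.35;                 % distance from truck center to its rear (m, assumed)
    x0 = 2.0; y0 = 1.0;       % center position of the preceding truck
    phi = pi/6;               % orientation of the preceding truck (rad)
    x_back = x0 - D*cos(phi); % point tracked by the follower
    y_back = y0 - D*sin(phi);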
5.3.3 Layer 3 Mission Planning
This layer is also implemented through a MATLAB Script Node and it has access to the elapsed time and the position of every vehicle. Here we store the information or list of commands that shape the mission itself. The re-planning algorithms which are triggered in case something goes off plan are also specified in this script.
5.4 Implementation Details
In this section we cover specific implementation details not addressed in the theoretical controller design section.
5.4.1 Variable Calculation
There are three values that the motion planning layer calculates in order to feed the control algorithms: the Vehicle Orientation, the Distance to WP and the Vehicle to WP Displacement. In the following sections we describe in detail how these values are calculated.
Vehicle Orientation
Four different approaches were implemented to calculate the orientation of the vehicle; only the methods that provided the best results were used for the final experiments:
1. Putting two Ubisense tags on each ground vehicle and calculating the orientation based on them. The problem is that the number of tags is quite limited and they have different refresh rates; this approach would clog the Ubisense capacity with just a few vehicles.
Assuming that (x_1, y_1) is the position of the tag located on the front of the vehicle, and (x_2, y_2) is the position of the tag located on the rear of the vehicle, the orientation can be calculated using the following equation:

φ = tan⁻¹((y_1 − y_2) / (x_1 − x_2))
2. Calculating the orientation of the truck based on a succession of measurements of x–y positions. This method is the most generic and can be used with any positioning system seamlessly.
This algorithm continuously polls the x–y location of the vehicle and pushes the information into a FIFO queue; using this scheme we have at all times not only the present x–y position, but also a certain number of the previous x–y positions. With this information we can calculate the current orientation of the vehicle as follows:
[Figure content: a FIFO queue holding the eight most recent position samples x(t), x(t − 1/f_s), ..., x(t − 7/f_s) and y(t), y(t − 1/f_s), ..., y(t − 7/f_s); new points enter the queue as the oldest points leave it. The averages of the four newest and the four oldest samples are subtracted to obtain dx and dy:

dx = (1/4) Σ_{i=0}^{3} x(t − i/f_s) − (1/4) Σ_{i=4}^{7} x(t − i/f_s)
dy = (1/4) Σ_{i=0}^{3} y(t − i/f_s) − (1/4) Σ_{i=4}^{7} y(t − i/f_s) ]
Figure 5.5: FIFO queue graphical representation.
Calculating dx and dy as shown in the diagram, we compute the orientation using the following equation:

φ = tan⁻¹(dy/dx)
We need to determine the size of the FIFO queue: if we take too many previous values, our estimate of the orientation will be smooth but will suffer from delay; if we do not take enough previous values, our estimate will be noisy and inaccurate. The size of the FIFO queue must be tuned to achieve acceptable precision while keeping the delay low. This tuning will mostly depend on the refresh rate of the positioning system and the error associated with each value. (A short sketch of this estimator is given after this list.)
3. Using a magnetometer¹ to calculate the orientation based on the magnetic field of the earth. This provides a highly accurate method but introduces the additional problems of interfacing the magnetometer to the motes and calibrating the equations for each truck.
This is one of the approaches that provided the best results, mainly because it is not affected by the errors of the Ubisense, since it calculates the orientation independently. The magnetometer has three axes and it measures the magnitude of the magnetic field that traverses each axis. Several algorithms inside the Arduino convert those raw values into the heading (orientation), which is an integer value that goes from 0° to 359°.
¹ For the magnetometer to work, it is essential to set the variable named magnetic field declination according to your geographic location; this is done using the ArdupilotMegaPlanner software.
Since the orientation of 0° does not necessarily correspond to the x-axis alignment of the vehicle, we force this by adding a constant to the raw heading value so that it is zero when the car is aligned with the positive x-axis. After adding this constant, the heading will no longer lie between 0° and 359°, so we normalize the values that fall outside the range (see the sketch after this list).
A separate tutorial is provided explaining the programs developed and the circuits used to retrieve sensor data from the Arduino through the serial connection with the motes. This tutorial applies to both the ground and aerial vehicles.
4. Using the heading determined with the Qualisys positioning system. This
allows us to get not only the heading of the vehicle in the ground plane, but
also its orientation in 3D space.
This is one of the approaches that provided the best results; however, it requires the Qualisys or another positioning system that supports tracking of objects with 6 DOF. When this feature is not available in our positioning system, our best bet is to use the magnetometer method.
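The following MATLAB fragment sketches methods 2 and 3 above; the queue contents, the per-truck offset and the variable names are illustrative assumptions.

    % Method 2: orientation from a FIFO of the last 8 positions (newest last).
    xq = [0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7];  % example x samples
    yq = [0.0 0.0 0.1 0.1 0.2 0.2 0.3 0.3];  % example y samples
    dx = mean(xq(5:8)) - mean(xq(1:4));      % newest four minus oldest four
    dy = mean(yq(5:8)) - mean(yq(1:4));
    phi = atan2(dy, dx);                     % atan2 resolves the quadrant

    % Method 3: align the raw magnetometer heading with the positive x-axis.
    raw_heading = 305;                       % integer heading from the Arduino (deg)
    OFFSET = 90;                             % per-truck alignment constant (assumed)
    heading = mod(raw_heading + OFFSET, 360);  % normalized back to [0, 360)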
Distance to Waypoint
The calculation is very simple: it is the Euclidean distance between the current location and the WP,

d = √(a² + b²)

where a and b are the projections of the vehicle-to-WP vector on the x and y axes. This value is used to control the speed of the vehicle in order to maintain a certain distance from the WP.
Figure 5.6: Distance to WP graphical representation.
Vehicle to Waypoint Displacement
This value is an angle called ψ; it represents the angle between the orientation of the vehicle and the vector that goes from the vehicle to the WP. For example, if the truck is on the straight path to the WP the displacement is zero; otherwise it will be a certain angle ranging from −π to π.
This value is used to control the steering of the vehicle, trying to keep the displacement as close to zero as possible.
Figure 5.7: Vehicle to WP Displacement graphical representation.
5.4.2 Three Dimensional Model
In this section we present a diagram with the chosen distribution of IR sensors and the placement of the Arduino. The main consideration in determining the position of the IRs is the fact that they need to be separated by at least 10 cm from each other in order to avoid interference, whereas the main consideration in determining the placement of the Arduino is the magnetic perturbation that the truck's engine inflicts upon the magnetometer; therefore the Arduino has to be placed as far away from the motor as possible.
Figure 5.8: diagram showing the placement of the IR sensors and the Arduino on the
truck.
Chapter 6
Aerial Vehicles
In this chapter we present the differential equations and the dynamic models of the aerial vehicles. In Chapter 8 we provide simulations to back up the mathematical analysis presented here. In this thesis there are two different notations: one for the ground vehicles and one for the aerial vehicles. For ease of comparison both notations have been presented in the beginning of this document; however, we include them again in the chapters where they are used.
6.1 Aerial Vehicle Variable Definition
Symbol    Meaning                                      Theoretical Range
x, y, z   Cartesian coordinate system positions        (−∞, ∞), (−∞, ∞), [0, ∞)
ẋ, ẏ, ż   Cartesian coordinate system velocities       (−∞, ∞), (−∞, ∞), (−∞, ∞)
ẍ, ÿ, z̈   Cartesian coordinate system accelerations    (−∞, ∞), (−∞, ∞), (−∞, ∞)
t         Time                                         [0, ∞)
φ         Vehicle Roll angle                           (−π/2, π/2)
θ         Vehicle Pitch angle                          (−π/2, π/2)
ψ         Vehicle Yaw angle                            (−π, π]
d         Distance to WP                               [0, ∞)
g         Gravity                                      9.81 m/s²
m         Vehicle Mass                                 1.5 kg
Table 6.1: table showing the standard variables used for the aerial vehicles.
6.2 Mathematical Model
The most generic and complete model to describe the dynamics of a quadrotor is developed in The Quadrotor Bible; it considers many details in the modeling of the quadrotor including: voltage to motor speed nonlinearities, motor torque under different propeller loads, the inertial matrix and the gyroscopic propeller matrix. In other words, the system is modeled as a generic rigid body with 6 DOF, which involves an unnecessary complexity for the scope of this thesis.
For that reason we propose a much simpler model that behaves almost like the complete system. In our case, we are using an ArduCopter which has an embedded stabilization algorithm consisting of a double-loop cascade PID controller that provides the user with four inputs: throttle, pitch, roll and yaw. This is a huge advantage because we can rely on the embedded stabilization algorithms to send the signals to each motor individually, whereas we restrict ourselves to manipulating the four inputs mentioned.
In all the quadrotor diagrams we use throughout this document the red propeller indicates the front of the vehicle, the segmented vectors represent the local coordinate system, and the solid vectors represent the global coordinate system.
Below, we present two diagrams that illustrate the meaning of each variable.
Figure 6.1: diagram showing one of the quadrotor standard angles (ψ); the other two quadrotor angles (φ and θ) are not shown in this diagram because they are zero. This diagram also shows the quadrotor position (x, y, z) in the global coordinate system.
Figure 6.2: diagram showing two of the quadrotor standard angles (θ and φ); the third quadrotor angle (ψ) is not shown in this diagram because it is zero. This diagram also shows the Distance to WP (d).
To model the quadrotor in a simple way, we regard it as a second order differential system. The vehicle is under the effect of gravity, always pointing downward, and the thrust of the propellers, always pointing in the direction which is orthogonal to the plane defined by the quadrotor's arms.
In order to write the differential equations in each direction, we use the pitch, roll, and yaw angles to project the thrust of the propellers in the appropriate proportion over the x, y, and z axes. To do so, we use the rotation matrices Rot_X, Rot_Y, and Rot_Z to transform the unitary vector¹ u_Z according to the quadrotor's θ, φ, and ψ angles.
Rot_X = ⎡ 1    0       0    ⎤      Rot_Y = ⎡  cos θ  0  sin θ ⎤
        ⎢ 0  cos φ  −sin φ  ⎥              ⎢    0    1    0   ⎥
        ⎣ 0  sin φ   cos φ  ⎦              ⎣ −sin θ  0  cos θ ⎦

Rot_Z = ⎡ cos ψ  −sin ψ  0 ⎤      u_Z = ⎡ 0 ⎤
        ⎢ sin ψ   cos ψ  0 ⎥            ⎢ 0 ⎥
        ⎣   0       0    1 ⎦            ⎣ 1 ⎦

¹ The vector u_Z is unitary and points in the direction of the positive z-axis.
The vector that is orthogonal to the quadrotor's frame², i.e. u_N, tells us in which direction the thrust of the propellers is being applied, and therefore allows us to calculate the quadrotor's acceleration in each axis. To calculate u_N we simply take u_Z and apply the rotations around each axis as shown in the following formula:
u_N = Rot_Z · Rot_X · Rot_Y · u_Z = ⎡ sin θ cos ψ + cos θ sin φ sin ψ ⎤
                                    ⎢ sin θ sin ψ − cos θ sin φ cos ψ ⎥
                                    ⎣           cos θ cos φ          ⎦
Finally, we take into account the weight of the vehicle, which depends on the
mass (m) and the gravity (g). Summing the forces in each direction we get the
following three equations:
m ẍ = T (sin θ cos ψ + cos θ sin φ sin ψ)
m ÿ = T (sin θ sin ψ − cos θ sin φ cos ψ)
m z̈ = T cos θ cos φ − m g
In this scenario we are assuming that we have direct control over the pitch, roll, yaw and thrust³. These quantities are represented by θ, φ, ψ and T respectively. In our case, we are using an embedded controller to internally manipulate these variables; therefore we have to keep in mind that the four channels provided to us are only an abstraction, and there are certain limitations that need to be taken into consideration in the model:
Time delay: given a hovering quadrotor, there will be a small delay between the instant when a certain angle is requested and the instant when the control action begins.
Rate limit: given a hovering quadrotor that has started a control action to modify one of its angles, there is a limit on the rate at which these angles can grow or decrease, i.e. the change is not instantaneous.
Saturation: given a hovering quadrotor that is trying to reach a certain pitch or roll angle, there will be a safety limit for both angles preventing it from tilting over.
² The vector u_N is unitary and perpendicular to the quadrotor's frame; it corresponds with the positive z-axis when φ and θ are equal to zero.
³ The thrust is indicated by T and it refers to the actual force that the propellers produce. The thrust is proportional to the throttle, which is a digital signal representing the rotational speed requested from the motors. The same throttle can produce different thrust depending on the battery charge level, the propellers' size, etc.
Inertia: given a hovering quadrotor that has achieved the desired angle, there will always be a small overshoot in the angle due to inertia; however, we neglect this effect as it is only appreciable when abrupt changes in the angles are requested.
These considerations are built into the model used in the simulations, allowing us to test the controllers knowing that they bear resemblance to reality, while at the same time taking advantage of the fact that the controllers for θ, φ, ψ and T are already implemented.
The time delay consideration is represented by the following three formulas:

θ_eff(t) = θ_req(t − τ)
φ_eff(t) = φ_req(t − τ)
ψ_eff(t) = ψ_req(t − τ)

Here the subscript eff stands for effective and the subscript req stands for requested; these equations reflect the time delay (τ) that it takes for the angles requested to be reached in reality. For simplicity, the value of τ was tuned to resemble the delays observed in the real life quadrotor.
The rate limit consideration is represented by the following three formulas:

−K_min ≤ ∂θ(t)/∂t ≤ K_max
−K_min ≤ ∂φ(t)/∂t ≤ K_max
−K_min ≤ ∂ψ(t)/∂t ≤ K_max

Here both K_min and K_max are positive constant numbers which depend on the moment of inertia of the quadrotors. For simplicity, we assume that both are equal, and by performing simulations we tune those constants to values that produce a behavior which resembles the real life quadrotor.
The saturation consideration is represented by the following two formulas:

−C_min ≤ θ(t) ≤ C_max
−C_min ≤ φ(t) ≤ C_max

Here both C_min and C_max are positive constant numbers which depend on the quadrotor's embedded controller saturation parameters. For simplicity, we assume that both are approximately equal to π/8, which resembles the values observed in the real life experiments.
With these considerations in mind we can write the matrix representation of the quadrotor model described by the differential equations derived earlier:

∂/∂t ⎡ x ⎤   ⎡ 0 0 0 1 0 0 ⎤ ⎡ x ⎤   ⎡                  0                     ⎤
     ⎢ y ⎥   ⎢ 0 0 0 0 1 0 ⎥ ⎢ y ⎥   ⎢                  0                     ⎥
     ⎢ z ⎥ = ⎢ 0 0 0 0 0 1 ⎥ ⎢ z ⎥ + ⎢                  0                     ⎥
     ⎢ ẋ ⎥   ⎢ 0 0 0 0 0 0 ⎥ ⎢ ẋ ⎥   ⎢ (T/m)(sin θ cos ψ + cos θ sin φ sin ψ) ⎥
     ⎢ ẏ ⎥   ⎢ 0 0 0 0 0 0 ⎥ ⎢ ẏ ⎥   ⎢ (T/m)(sin θ sin ψ − cos θ sin φ cos ψ) ⎥
     ⎣ ż ⎦   ⎣ 0 0 0 0 0 0 ⎦ ⎣ ż ⎦   ⎣         (T/m) cos θ cos φ − g          ⎦
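To make the model concrete, the following MATLAB fragment performs one Euler integration step of these equations, applying the saturation and rate limit considerations to the pitch channel (roll and yaw are handled identically); all numeric constants here are illustrative, not the tuned thesis values.

    % One Euler step of the simplified quadrotor model (illustrative values).
    m = 1.5; g = 9.81; dt = 0.01;            % mass, gravity, step size
    C_max = pi/8; K_max = 2.0;               % saturation (rad) and rate limit (rad/s)
    T = 16; phi = 0; psi = 0; theta = 0;     % thrust (N) and current angles
    theta_req = 0.5;                         % requested pitch angle (rad)
    pos = [0; 0; 1]; vel = [0; 0; 0];        % state: position and velocity
    theta_cmd = max(-C_max, min(C_max, theta_req));          % saturation
    rate = max(-K_max, min(K_max, (theta_cmd - theta)/dt));  % rate limit
    theta = theta + rate*dt;
    acc = [(T/m)*(sin(theta)*cos(psi) + cos(theta)*sin(phi)*sin(psi));
           (T/m)*(sin(theta)*sin(psi) - cos(theta)*sin(phi)*cos(psi));
           (T/m)*cos(theta)*cos(phi) - g];
    vel = vel + acc*dt;                      % integrate the double integrator
    pos = pos + vel*dt;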
6.3 Theoretical Controller Design
We have already presented the goals of this thesis in terms of aerial vehicle control. It is clear that the first step in achieving cooperative control is to design the three layers of the motion planning scheme presented in Chapter 4.
It is to be noted that one of the scenarios where this is going to be used is surveillance; therefore we should keep in mind that these WPs will be changing over time, either due to the WP scheduling of the higher level layers or because the WPs themselves are composed of the x–y position of a vehicle in the platoon and a z value which depends on the desired flying altitude. In order to bring the vehicle from its current position to the WP we have to provide throttle, pitch, roll and yaw signals for all times.
6.3.1 Layer 1 Motion Planning
This layer of the controller takes as input the x–y–z position of the quadrotor and the x–y–z coordinate of the WP. With this information it produces throttle, pitch, roll and yaw signals to move the quadrotor to the WP. Clearly, the position of the quadrotor will change along the trajectory, but the responsibility of keeping it updated relies on the positioning system process explained in detail in Chapter 7.
Throttle Control
This controller is designed to hold a certain altitude above the ground. In order to achieve this goal, the throttle is regulated with a PD controller taking as input the current height and as reference the desired altitude. There is a limitation regarding the thrust of the propellers: it always points in the same direction; this means that the rotation of the propellers cannot be reversed to achieve a negative thrust force.
This is not a problem because we have gravity always pointing downward. However, in order to control the altitude more easily without requiring an integral term in the PD controller, we add a constant base throttle to the output of the controller. The constant base throttle corresponds to the thrust that makes the quadrotor almost lift from the ground.
By these means we raise the operating point of the quadrotor's throttle to the region where it is more convenient to keep a steady flight.
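A minimal MATLAB sketch of this throttle law follows; the gains, the base throttle value and the variable names are illustrative assumptions (the real controller is tuned inside the LabVIEW VIs).

    % Sketch of the altitude PD controller with a constant base throttle.
    BASE_THROTTLE = 70;                     % near the lift-off point (assumed)
    Kp = 4.5; Kd = 0.1;                     % illustrative PD gains
    z_ref = 1.2; z = 0.9; z_dot = 0.05;     % reference, altitude, climb rate
    throttle = BASE_THROTTLE + Kp*(z_ref - z) - Kd*z_dot;
    throttle = max(0, min(127, throttle));  % clamp to the 7-bit command range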
[Diagram: the reference altitude and the current altitude form the error fed to a PD Controller, followed by Saturation, Rate Limiter and Delay blocks; the resulting throttle control signal drives the vehicle dynamics.]
Figure 6.3: throttle control diagram showing the feedback loop, reference altitude and PD
controller.
Position Control
This controller is designed to track a certain WP in the x–y plane. This is achieved using two independent PD controllers: one for the x-direction that controls the pitch and one for the y-direction that controls the roll. The input variable to these controllers is the position of the vehicle on that axis and the reference is the corresponding x or y coordinate of the WP. At all times it is assumed that the frontal arm of the quadrotor is pointing in the direction of the positive x-axis.
[Diagram: the reference position [xf, yf] and the current position [x0, y0] form the errors fed to PD Controllers, followed by Saturation, Rate Limiter and Delay blocks; the resulting pitch and roll control signals drive the vehicle dynamics.]
Figure 6.4: position control diagram showing the feedback loop, reference position (2D
vector) and PD controller.
Yaw Control
This controller is designed to keep the frontal arm of the quadrotor always pointing in the direction of the positive x-axis. This is extremely important because the position control relies on this assumption to perform pitch and roll commands. In more generic scenarios it is possible to remove this controller, replacing its functionality with a calculation that allows the quadrotor to pitch and roll appropriately without assuming that it is aligned with any axis.
The easiest way to do this is by applying a linear transformation to the WP before feeding this reference to the position controller. The linear transformation consists of a 2 × 2 matrix that allows us to express the WP x–y coordinates in a new rotated reference system; this rotation is done around the z-axis centered on the quadrotor's frame, by the amount of degrees indicated by the corresponding yaw angle.
Using these new WP coordinates, we can control the quadrotor using the same position controller regardless of the orientation of the vehicle, thereby replacing the yaw controller with a new preprocessing step of WP normalization. We explain this procedure in detail in the Implementation Details section of this chapter.
One last remark regarding this controller is that it is a PI instead of a PD, which is what we have been using so far; the reason for this is that we require the integral part to compensate for the quadrotor's natural drift in the yaw. In other words, we are trying to compensate a static error inherent to both the quadrotor's structure and possible external perturbations such as wind.
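As a reference, the following MATLAB fragment sketches the WP normalization just described; the pose values are illustrative.

    % Express the WP in a frame rotated by the current yaw angle, so the
    % pitch/roll position controller works for any heading (example values).
    yaw = deg2rad(35);                    % current yaw of the quadrotor
    x0 = 1.0; y0 = 0.5;                   % current quadrotor position
    xf = 2.0; yf = 1.5;                   % WP in the global frame
    R = [ cos(yaw)  sin(yaw);             % 2-by-2 rotation about the z-axis
         -sin(yaw)  cos(yaw)];
    wp_body = R * ([xf; yf] - [x0; y0]);  % WP relative to the quad, body frame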
[Diagram: the reference yaw angle (0) and the current yaw angle form the error fed to a PI Controller, followed by Saturation, Rate Limiter and Delay blocks; the resulting yaw control signal drives the vehicle dynamics.]
Figure 6.5: yaw control diagram showing the feedback loop, reference yaw angle and PI
controller.
Safety Control
This heuristic controller is designed to avoid collisions between vehicles and with other objects. It functions independently from the other controllers and has a higher priority, meaning that the control signals produced here override the other signals. Four long range IR proximity sensors are attached to each aerial vehicle: one pointing in each direction. According to the values of these sensors, small pitch and roll commands are produced to tilt away from the obstacles.
It is worth noticing that even though this obstacle avoidance mechanism has blind spots due to the sensor placement, for big enough obstacles the solution is appropriate. The development of more complex algorithms to improve safety lies beyond the scope of this thesis and can be treated in more detail in future projects.
[Diagram: the safety distance threshold and the distance readings from each IR form the errors fed to a Heuristic Controller, followed by Saturation, Rate Limiter and Delay blocks; the resulting pitch and roll control signals drive the vehicle dynamics.]
Figure 6.6: safety control diagram showing the feedback loop (IR readings), safety dis-
tance threshold and heuristic controller.
6.3.2 Layer 2 Coordination
This layer is implemented through a MATLAB Script Node which keeps track of the Distance to WP and, based on that information, requests a new WP when certain criteria are met. The criteria vary depending on how much information we have available. In the most basic case we only have the current position of the vehicle and the WP available; in this case the only thing we can do is update the WP when the distance between the vehicle and the WP falls below a certain threshold.
With the Qualisys, however, it is possible to have a reliable estimate of the quadrotor angles and speed; this information can be used to delay or hasten the WP change in time, achieving a more fluent motion.
In the situation where the quadrotor is performing surveillance, the WP to be tracked will be the coordinates of the platoon leader; in this scenario this layer does not need to perform any calculation to update the quadrotor's WP because it will be the x–y position of a truck plus a certain altitude.
In the situation where the quadrotor is circling around the leader of the platoon, the WP to be tracked is calculated using a combination of the current truck position and trigonometric formulas, as sketched below.
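A minimal MATLAB sketch of such a circling reference follows; the radius, angular rate, altitude and variable names are illustrative assumptions.

    % WP orbiting the platoon leader (example values).
    R_c = 1.5; w = 0.3; z_fly = 1.2;  % orbit radius (m), rate (rad/s), altitude (m)
    x_leader = 3.0; y_leader = 2.0;   % current leader position
    t = 10;                           % elapsed time (s)
    xf = x_leader + R_c*cos(w*t);     % WP fed to the position controller
    yf = y_leader + R_c*sin(w*t);
    zf = z_fly;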
6.3.3 Layer 3 Mission Planning
This layer is also implemented through a MATLAB Script Node and it has access to the elapsed time and the position of every vehicle. Here we store the information or list of commands that shape the mission itself. The re-planning algorithms which are triggered in case something goes off plan are also specified in this script.
6.4 Implementation Details
In this section we cover specific implementation details not addressed in the theoretical controller design section.
6.4.1 Variable Calculation
There are two values that the motion planning layer calculates in order to feed the control algorithms: the Vehicle Orientation and the Distance to WP. In the following sections we describe in detail how these values are calculated.
Vehicle Orientation
Using the prior knowledge obtained when working with the trucks, we discard from the beginning every attempt that involves obtaining the orientation with the Ubisense positioning system, because we already know it is not the optimal approach. That leaves us with only two reliable methods to get the orientation of the vehicle:
1. Using a magnetometer to calculate the orientation based on the magnetic field of the earth. This provides a highly accurate method but introduces the additional problems of interfacing the magnetometer to the motes and calibrating the equations for each quadrotor.
For details regarding this method refer to the homologous section in Chapter 5.
2. Using the heading determined with the Qualisys positioning system. This allows us to get not only the heading of the vehicle in the ground plane, but also its orientation in 3D space.
This is one of the approaches that provided the best results; however, it requires the Qualisys or another positioning system that supports tracking of objects with 6 DOF. When this feature is not available in our positioning system, our best bet is to use the magnetometer method.
Figure 6.7: quadrotor heading angle graphical representation.
Distance to Waypoint
The calculation is very simple: it is the Euclidean distance between the current location and the WP,

d = √(a² + b² + c²)
Here the numbers a, b and c are the projections of the distance vector over the three axes; this value is used to update the WPs according to how close the quadrotor is to the reference point.
Figure 6.8: Distance to WP graphical representation.
6.4.2 Three Dimensional Model
In this section we present a diagram with the chosen distribution of IR sensors and the placement of the Arduino. The main consideration in determining the position of the IRs is the fact that we are trying to use them to avoid obstacles; therefore we ought to place them pointing in every direction without getting their field of view blocked by the propellers. The placement of the Arduino is based on the location of the accelerometer in the IMU board; the idea is to place the Arduino in such a way that the accelerometer is approximately in the quadrotor's center of rotation, thereby preventing rotational movements from affecting the accelerometer unevenly.
Figure 6.9: diagram showing the placement of the IR sensors and the Arduino on the
quadrotor.
Chapter 7
LabVIEW Implementation
In this chapter we present the structure of the LabVIEW program, the function of each VI and how they work together. We also describe how the data is shared between these processes in a safe manner. The VIs developed for this thesis are meant to work with Ubisense and Qualisys; some of the VIs are common to both systems whereas others are used particularly for one system. Therefore, we identify each VI with one of the following labels inside brackets: Ubisense Only, Qualisys Only, and Common.
7.1 Overview
The main LabVIEW program has three phases: Initialization, Main Loop and Finalization. These phases can be seen in the following two diagrams for both the Ubisense and Qualisys programs.
7.1.1 Ubisense
Figure 7.1: main.vi used for the Ubisense positioning system.
7.1.2 Qualisys
Figure 7.2: main.vi used for the QTM.
In the following sections we will explore in detail the VIs of each of these three
phases.
7.2 Initialization
In this phase we perform the following operations: Open SF Connections and Start
Ubisense Positioning System.
7.2.1 Open Serial Forwarder Connections
This VI opens one TCP/IP connection for each SF we want to use; this will allow
us to write and read data from the motes using the TCP/IP protocol. Generally
we use one SF for each mote, and motes are normally used in pairs, one for sending
commands and one for receiving data from the robots.
Figure 7.3: OpenSF_basic.vi used to open a connection to a SF [Common].
The number of pairs of motes to be used is determined by the number of robots to be controlled. Generally one pair of motes (sensor + actuator) can handle up to two ground vehicles or one aerial vehicle. If no data needs to be received from the robots, we can also open single TCP/IP connections without having to open unnecessary connections for the sensors.
Figure 7.4: OpenSF_double.vi used to open a pair of SF connections [Common].
7.2.2 Start Ubisense Positioning System
This VI starts the Ubisense process which will handle reading values from the positioning system. This process will run in the background and will provide the user in LabVIEW with functions to retrieve the position of an arbitrary tag. The Ubisense process normally takes up to two seconds to start, hence the delay perceived each time we run the program.
Figure 7.5: Ubisense_Structure_Start.vi used to initialize the connection to the positioning system [Ubisense Only].
7.3 Main Loop
In this phase we perform the following operations: Truck Control Algorithms, Quadrotor Control Algorithms, Sending Actuator Commands, Reading Sensor Values, Start QTM and Mission Planner.
7.3.1 Truck Control Algorithms
Starts the VI associated with each ground vehicle; this generates as many while loops as trucks we have, running at 100 Hz each. The goal of these loops is to read the x–y positions of the ground vehicles and perform the appropriate control algorithms to update the global variables that correspond to their control signals.
Each Truck VI is composed of the following subparts:
Position Data Acquisition
In this section we present two VIs: the first is used to retrieve position data from the Ubisense, whereas the second is used to retrieve 6 DOF data from the Qualisys.
Figure 7.6: truck position data retrieval [Ubisense Only].
Figure 7.7: truck 6 DOF data retrieval [Qualisys Only].
Control Algorithms
In this section we present three VIs: the first is used to calculate both the Vehicle to WP Displacement and the Distance to WP, while the second and the third VIs are used to calculate the control signals of the platoon leader and the followers respectively.
Figure 7.8: calculation of Vehicle Orientation (φ), Vehicle to WP Displacement (ψ) and Distance to WP (d) [Common].
Figure 7.9: calculation of the control signal (steering) for the platoon leader, the speed is
controlled by the user [Common].
Figure 7.10: calculation of the control signals (steering & speed) for the platoon followers
[Common].
Control Signals Update
In this section we present only one VI; it takes care of saving the control signals calculated in the previous step to the corresponding variable for each truck.
Figure 7.11: writing the truck control signals to the variable called Servo Truck (n), that
is going to be sent to the SF (protected using semaphores) [Common].
7.3.2 Quadrotor Control Algorithms
Starts the VI associated with each aerial vehicle; this generates as many while loops as quadrotors we have, running at 100 Hz each. The goal of these loops is to read the x–y–z positions of the aerial vehicles and perform the appropriate control algorithms to update the global variables that correspond to their control signals.
Each Quadrotor VI is composed of the following subparts:
Position Data Acquisition
In this section we present two VIs: the first is used to retrieve position data from the Ubisense, whereas the second is used to retrieve 6 DOF data from the Qualisys.
Figure 7.12: quadrotor position data retrieval [Ubisense Only].
Figure 7.13: quadrotor 6 DOF data retrieval [Qualisys Only].
Control Algorithms
In this section we present only one VI; it is composed of four independent PID controllers, which are used to produce the pitch, roll, yaw and throttle control signals based on the quadrotor's current position and the desired goal location. A NaN protection check is included before updating the corresponding variable, to avoid crashes if no position could be retrieved at any time.
Figure 7.14: calculation of the control signals (throttle, pitch, roll and yaw) for the
quadrotor [Common].
Control Signals Update
In this section we present only one VI; it takes care of saving the control signals calculated in the previous step to the corresponding variable for each quadrotor.
Figure 7.15: writing the quadrotor control signals to the variable called Servo Quad (n),
that is going to be sent to the SF (protected using semaphores) [Common].
7.3.3 Sending Actuator Commands
This VI starts a while loop that takes care of scheduling in time the actuation commands that are sent to each vehicle. There are two versions of this VI: the first is meant to handle two trucks whereas the second is meant to handle one quadrotor. The reason for this disparity in the number of vehicles lies in the fact that the trucks require fewer updates per second to achieve acceptable control performance.
The number of vehicles that can be controlled using each VI is mainly constrained by the limited capacity of the motes to send data wirelessly and the limited capacity of the SF to forward packets. In both cases, semaphores are used to prevent data corruption. The truck version of the actuator VI provides the user with manual and automatic modes, whereas the quadrotor version of the actuator VI provides the user with the same features plus buttons for quick arming, disarming and landing.
Truck Actuator
Figure 7.16: actuator command sending loop for two trucks (protected using semaphores)
[Common].
Quadrotor Actuator
Figure 7.17: actuator command sending loop for one quadrotor (protected using
semaphores) [Common].
Write to Serial Forwarder
In both actuator VIs presented there is an important sub VI that takes care of building the data packet and sending it to the TCP/IP connection. This VI takes as input the mote ID where the data is going to be sent and the message type (by default the message type equals 2).
Figure 7.18: WriteSF_basic_global.vi used to build the package and send it to the
TCP/IP connection [Common].
A command is a binary number composed of eight servo positions, i.e. a command is a concatenation of eight integer values from 0 to 127; a servo position is thus 7 bits wide (a short sketch of this packing is given after the list below).
The process of sending the data from LabVIEW has three parts:
1. The LabVIEW program reading the global variable holding the actuation commands and writing this data to the TCP/IP connection.
2. The SF reading the data received from the TCP/IP connection and resending it to the serial port.
3. The mote receiving the data through the serial port and forwarding it wirelessly to the actuator with the specified mote ID.
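In MATLAB terms, a command can be sketched as follows; the particular servo values are illustrative (64 is the neutral speed/steering value used by the trucks).

    % A command: eight 7-bit servo positions packed into one message payload.
    servo = uint8([64 64 100 100 100 100 100 100]);  % example servo positions
    assert(all(servo <= 127));                       % each value fits in 7 bits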
7.3.4 Receiving Sensor Data
This VI starts the reception of packets from the TCP/IP connection. The VI is composed of two phases: in the first we load a lookup table for each vehicle, and in the second we start a reception loop that is going to process the incoming messages. The lookup tables are formatted as .csv files and they provide pre-calculated distance values for each possible raw proximity sensor reading. In other words, these tables convert raw proximity values ranging from 0 to 4095 into interpolated distance values.
The tables are automatically generated using a MATLAB script and the curve fit tool¹. The input to this script is a set of calibration values composed of pairs of numbers representing the measured distance to an object, and the corresponding raw sensor reading:
Code 7.1: x1 represents a vector of raw sensor readings and y1 represents the corresponding distance in cm used for the calibration (MATLAB script).

    %% Sensor 1 (left side Short Range)
    x1 = [4095;3210;2280;1778;1460;1260;1106;973;896;818;763;710;680;650];
    y1 = [5;10;15;20;25;30;35;40;45;50;55;60;65;70];
Sensor Data Receiver
¹ The complete MATLAB script used to generate the .csv lookup tables is given in Appendix A.
Figure 7.19: ir_sensor_complex_global.vi; there are several versions of this VI depending
on the structure of the data to be received [Common].
Read from Serial Forwarder
Inside the while loop of the IR sensors VI there is an important sub VI that takes care of parsing each received message. This sub VI is called ReadSF.vi and its functioning is divided in two phases: in the first phase the VI separates the header from the data and determines the vehicle whose data is being received; in the second phase the VI performs some processing on the data received (moving average + lookup table, sketched below) and updates the corresponding vehicle's global variables: IR sensors, magnetometer heading, and sonar altitude (if applicable).
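The following MATLAB fragment sketches that second-phase processing, using the calibration values of Code 7.1 (reordered to be ascending); the history buffer is an illustrative assumption.

    % Moving average over raw readings, then lookup-table interpolation.
    x1 = [650 680 710 763 818 896 973 1106 1260 1460 1778 2280 3210 4095];
    y1 = [70 65 60 55 50 45 40 35 30 25 20 15 10 5];    % distance in cm
    raw_hist = [980 973 965 970 968];                   % last raw readings (assumed)
    raw_avg = mean(raw_hist);                           % moving average filter
    dist_cm = interp1(x1, y1, raw_avg);                 % interpolated distance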
Figure 7.20: ReadSF_complex_global.vi rst phase: decodes the message received; there
are several versions of this VI depending on the structure of the data to be
received [Common].
Figure 7.21: ReadSF_complex_global.vi second phase: process the data and update the
variables; there are several versions of this VI depending on the structure of
the data to be received [Common].
For each receiver sensor mote a new instance of the sensor VI has to be created.
It is recommended to have at least one mote for each pair of vehicles we want to
receive data from.
The process of receiving the data in LabVIEW has three parts:
1. The mote receiving the data wirelessly from the sensors and writing it to the
serial port.
2. The SF reading the data received on the serial port and resending it to the
TCP/IP connection.
3. The LabVIEW program reading the data through the TCP/IP connection,
parsing it accordingly and updating the corresponding global variables.
7.3.5 Start Qualisys Track Manager
This VI starts the connection to the QTM; in other words, it starts the reception of frames from the Qualisys Server located at the IP specified in the input labeled Address. This VI is called in the Main Loop and not in the Initialization phase like the analogous Ubisense VI. For further details on this, refer to Chapter 3.
Figure 7.22: QTM.vi; while the loop is being executed, the connection to the Qualisys will continue to receive frames [Qualisys Only].
7.3.6 Mission Planner
This VI starts a while loop that schedules the mission by updating each vehicle's goal variables. This VI is based on a MATLAB Script Node that takes as input the current position and orientation of each vehicle and updates their corresponding goals. Both the reading of the vehicle coordinates and the update of the goal variables are protected from data corruption using semaphores.
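As an illustration, the Script Node content (the Xf1 = ... placeholders visible in Figure 7.23) could look like the following MATLAB fragment; the times and coordinates are invented, and in the real VI the inputs t, Xo1 and Yo1 arrive from LabVIEW.

    % Sketch of a mission: time-scheduled goals (illustrative values).
    t = 10; Xo1 = 1.2; Yo1 = 0.8;  % inputs from LabVIEW in the real Script Node
    if t < 30
        Xf1 = 4.0; Yf1 = 2.0;      % leader drives to a fixed WP first
    else
        Xf1 = -4.0; Yf1 = -2.0;    % then re-plans toward the opposite corner
    end
    Xf2 = Xo1; Yf2 = Yo1;          % follower's goal is the leader's position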
Figure 7.23: coordinator.vi; this is the VI that functions as the Mission Planner [Common].
7.4 Finalization
In this phase we perform the following operation: Close SF Connections.
7.4.1 Close Serial Forwarder Connections
Closes the TCP/IP connections opened in the Initialization phase, discarding any value that was pending to be written to or read from the connection. Reports whether any error related to the TCP/IP connection was produced during the execution of the program.
Figure 7.24: CloseSF_basic.vi used to close a connection to a SF [Common].
A homologous VI to close pairs of SF connections is also presented below. Even though these VIs for double opening and double closing SF connections were originally meant for actuator/sensor pairs, they can also be used for actuator/actuator or sensor/sensor pairs.
Figure 7.25: CloseSF_double.vi used to close a pair of SF connections [Common].
7.5 Data Sharing
In the LabVIEW program we have different processes running simultaneously, taking care of several tasks which require access to the same information. Since these processes read and write data in an asynchronous way, we propose a non-blocking scheme to access this data using semaphores, which are supported by LabVIEW. The variables that need to be read and modified by these different processes are defined as global in a separate VI. There are three different VIs created to hold global variables: global_trucks, global_quads and global_tables. They all work in the same way; the only difference is the variables they contain.
The semaphores are variables that provide an abstraction for controlling access by multiple processes to the same resource in a parallel programming environment. In general, a semaphore can allow a specific number of processes to access the same resource simultaneously. However, in this particular case we do not want more than one process writing or reading the same memory location at the same time, hence we use binary semaphores which only have two states: locked or unlocked.
Each variable or group of variables we want to protect has a different semaphore associated with it. When a process attempts to access the shared resource, it first checks whether the semaphore is locked or not. If it is locked, the process waits until it is unlocked or a timeout is reached. If the semaphore is unlocked, the process locks it so that other processes know they cannot access the associated data during this period. The part of the code that reads or writes data to the variables, also called the critical section, is executed, and finally the semaphore is unlocked, allowing other processes to access the shared resource again.
Each vehicle has three semaphores; the semaphore ID (label) is determined by a string:
1. truck(n): takes care of protecting the critical section when reading or writing the servo positions.
2. truck(n)coord: takes care of protecting the critical section when reading or writing the vehicle coordinates.
3. truck(n)goal: takes care of protecting the critical section when reading or writing the goal positions of the vehicle.
Figure 7.26: example of the three semaphore labels mentioned [Common].
Figure 7.27: example of the critical section protection for the truck1goals [Common].
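To make this locking pattern concrete, the following C++ fragment mimics the acquire/critical-section/release sequence of Figure 7.27. It is an illustration only, under assumed names: truck1goal, write_goal and the 500 ms timeout are ours, not part of the LabVIEW program.

#include <chrono>
#include <mutex>

// Hypothetical shared resource: the goal position of truck 1.
struct Goal { double xf = 0.0, yf = 0.0; };

static Goal truck1goal;                 // shared data (like the global VI)
static std::timed_mutex truck1goal_sem; // binary semaphore guarding it

// Write the goal position, waiting at most 500 ms for the lock,
// mirroring the lock -> critical section -> unlock pattern above.
bool write_goal(double xf, double yf)
{
    using namespace std::chrono_literals;
    if (!truck1goal_sem.try_lock_for(500ms))
        return false;       // timeout reached: semaphore stayed locked
    truck1goal.xf = xf;     // critical section
    truck1goal.yf = yf;
    truck1goal_sem.unlock(); // allow other processes to access the data
    return true;
}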
7.6 Global Variables
Here we present the VIs that hold the variables used throughout the program.
For simplicity, we will only display the variables of one truck extracted from the VI
global_trucks and the variables of one quadrotor extracted from the VI global_quads.
For each quadrotor, the panel holds the IR and servo data clusters (Data0–Data7) together with the command variables Throttle, Roll, Yaw and Pitch and the pose variables Xf, Yf, Xo, Yo, Psi, Zf and Zo; for each truck, it holds the corresponding IR and servo clusters together with Steering, Speed, Trailer and Neutral and the pose variables Xf, Yf, Xo, Yo and Phi.
Figure 7.28: global variables for one truck (left side) and one quadrotor (right side) [Common].
Chapter 8
Simulations
In this chapter we will present all the simulations used in this thesis to test and tune the controllers. The simulations were developed in Simulink and can be easily modified to produce different behaviors. This chapter contains five parts, namely: ground vehicle simulations, aerial vehicle simulations, cooperative ground & aerial vehicle simulations, control signals and the 3D Visualization Engine.
The Simulink models (.mdl) can be downloaded for personal usage, and the 3D visualizations shown in the YouTube channel AutomaticControlKTH can be easily reproduced using the 3D Visualization Engine, which is presented later in this chapter.
None of the simulations implement obstacle avoidance algorithms: it is assumed that the x, y, z references generated for each vehicle are trackable; this means that the vehicles will stay reasonably close to the reference position, given that it changes in accordance with the quadrotors' and trucks' controllability.
In other words, it is assumed that if the references generated for each vehicle are distant enough from each other and the vehicles can track them, the vehicles will not collide. These assumptions are not necessarily fulfilled for all cases; however, the implementation of more complex obstacle avoidance mechanisms is beyond the scope of this thesis.
Some of the simulations presented here are difficult to understand due to the number of vehicles involved; for that reason we encourage the reader to follow this chapter alongside the 3D visualizations that can be found in the YouTube channel AutomaticControlKTH.
The colors of the trucks presented here were chosen to match the colors used in the simulation videos of the YouTube channel, which in turn were chosen to match the colors of the real life trucks used for the experiments.
8.1 Ground Vehicle Simulations
In this section we will present the simulations regarding platooning and formation control of ground vehicles. In general, one vehicle is designated as the leader, and the reference points for the rest of the vehicles are calculated using its coordinates and orientation.
The Double Column Formation is an example of a hierarchical formation where sub-platoons can be attached to a bigger platoon. In other words, any platoon can grow by accepting more vehicles or by adhering itself to another existing platoon.
8.1.1 Platooning
The WPs are generated in a cyclic pattern so that a closed trajectory is formed. The leader of the platoon is designated to track the reference points as explained in Chapter 5, the second vehicle of the platoon is programmed to follow the rear part of the first vehicle, and so forth. The reference points are updated when the leader of the platoon comes within a certain distance of the current one; a minimal sketch of this waypoint-switching logic is given below.
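The following C++ fragment sketches that logic under stated assumptions: the waypoint list, the 1 m switching radius and the helper names are illustrative choices, not taken from the thesis code.

#include <cmath>
#include <vector>

struct P2 { double x, y; };

double dist(const P2& a, const P2& b)
{
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Cyclic waypoint list traced by the platoon leader (assumed values).
std::vector<P2> wps = { {10, 0}, {0, 10}, {-10, 0}, {0, -10} };
std::size_t current = 0;
const double switch_radius = 1.0; // assumed switching distance [m]

// Reference for the leader: advance cyclically once the leader is close,
// so that a closed trajectory is traced over and over.
P2 leader_reference(const P2& leader_pos)
{
    if (dist(leader_pos, wps[current]) < switch_radius)
        current = (current + 1) % wps.size();
    return wps[current];
}

// Reference for a follower: the rear point of its predecessor.
P2 follower_reference(const P2& pred_pos, double pred_heading,
                      double pred_length)
{
    return { pred_pos.x - pred_length * std::cos(pred_heading),
             pred_pos.y - pred_length * std::sin(pred_heading) };
}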
Figure 8.1: platooning simulation, perspective and top views.
8.1.2 Formation I (Double Column)
The leader of the formation is followed simultaneously by two other vehicles, each
of which is the leader of a sub-platoon formation.
Figure 8.2: formation I (double column) simulation, perspective and top views.
8.1.3 Formation II (Triangle)
The leader of the formation is the tip of the triangle; the WPs of the other vehicles are calculated recursively to achieve this shape.
Figure 8.3: formation II (triangle) simulation, perspective and top views.
8.1.4 Row Formation
The trucks travel side by side instead of one behind the other. This is useful for exploration and surveillance tasks.
Figure 8.4: row formation simulation, perspective and top views.
8.1.5 Defensive Formation
Four vehicles give cover to the leader of the formation (black truck in the center of
the formation).
Figure 8.5: defensive formation simulation, perspective and top views.
8.2 Aerial Vehicle Simulations
In this section we will present the simulations regarding quadrotor takeoff, position hold, WP tracking, and the coordination of multiple quadrotors simultaneously. Several scenarios will be considered to illustrate the responsiveness of the controller versus the speed of change of the reference points in 3D space.
The concepts of takeoff, WP tracking and landing are exactly the same as the ones that apply in the case of multiple quadrotors; for that reason these explanations will be omitted in the corresponding sections.
8.2.1 Single Flight 1 Quadrotor
In this simulation the quadrotor starts on the ground and the purpose is to move it to a new location. In order to achieve this, the controller is divided into three parts: takeoff, flight and landing; a minimal sketch of this three-phase logic follows the list.
1. Takeoff: the quadrotor lifts vertically until a desired pre-established height is reached. This height has to be sufficiently high (1 m) so that the ground does not affect the propellers' thrust, causing the quadrotor to move erratically.
2. Flight: the quadrotor modifies its pitch and yaw angles in order to move to the desired location. In this phase the goal location does not necessarily need to be at the same height as the starting position; however, we restrict our WP coordinates to be at least 1 m from the ground to avoid the undesirable effects mentioned in the previous point.
3. Landing: once the quadrotor is no longer moving in the x–y plane, we can start the landing phase. To achieve this, the height of the tracked WP is decreased slowly to avoid abrupt collisions with the ground.
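The fragment below sketches these three phases as a small state machine in C++; it is an assumed illustration (the thresholds, names and Pose type are ours, not the thesis controller), returning the height reference handed to the position controller.

#include <algorithm>
#include <cmath>

struct Pose { double x, y, z, vx, vy; };

enum class Phase { Takeoff, Flight, Landing, Done };

// One decision step of the assumed takeoff/flight/landing logic.
double flight_step(Phase& phase, const Pose& quad, const Pose& goal,
                   double z_ref)
{
    const double h_min = 1.0; // minimum safe height [m] (ground effect)
    switch (phase) {
    case Phase::Takeoff:      // climb vertically to a safe height
        if (quad.z >= h_min) phase = Phase::Flight;
        return h_min;
    case Phase::Flight:       // track the goal, never below h_min
        if (std::hypot(quad.x - goal.x, quad.y - goal.y) < 0.2 &&
            std::hypot(quad.vx, quad.vy) < 0.05)
            phase = Phase::Landing;
        return std::max(goal.z, h_min);
    case Phase::Landing:      // lower the height reference slowly
        if (quad.z < 0.05) { phase = Phase::Done; return 0.0; }
        return std::max(z_ref - 0.01, 0.0);
    default:
        return 0.0;
    }
}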
Figure 8.6: single flight simulation, perspective and top views.
8.2.2 Circular Motion 1 Quadrotor
In this simulation the quadrotor starts on the ground and its purpose is to circle around a certain SP. In order to achieve this, the controller is divided into the same phases as before; however, the flight phase is modified in the following way:
Once the quadrotor has reached the desired height, the WP to be tracked is updated using the following formulas:
X_wp = X_sp + R·cos(ωt)
Y_wp = Y_sp + R·sin(ωt)
Z_wp = K_alt
where wp stands for the WP to be tracked and sp stands for the SP, i.e. the point around which the quadrotor will circle.
These formulas can be interpreted as follows: a circle of radius R centered at the SP is formed. For each time step the WP is updated by increasing or decreasing the angle of rotation θ¹ and calculating the corresponding x–y position of the WP for that angle and the chosen R. This allows both clockwise and counter-clockwise circling motion around the SP.
Figure 8.7: simple rotation of 1 aerial vehicle with positive ω.
The idea of considering the circling motion a function of θ simplifies the problem of generating WPs when the surveillance WP itself is changing over time; this idea will be useful for the cooperative control between ground and aerial vehicles. By considering θ, R and Z_wp as time-varying functions, we can create many interesting scenarios that will be explored below. A sketch of such a WP generator is given after the footnote.
¹In this context, θ refers to the angle of a circular motion with angular velocity ω; it has nothing to do with the pitch angle of the quadrotor.
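As an illustration, the following C++ fragment generates such circling WPs around a (possibly moving) SP; the function name, its interface and the sampling period dt are assumptions made for this example, not code from the thesis.

#include <cmath>

struct WP { double x, y, z; };

// Circling waypoint around the surveillance point (x_sp, y_sp).
// theta is advanced by omega*dt each step; making omega, R and z_alt
// time varying yields the spiral and helix scenarios of this chapter.
WP circle_wp(double x_sp, double y_sp, double& theta,
             double omega, double R, double z_alt, double dt)
{
    theta += omega * dt; // positive omega: counter-clockwise motion
    return { x_sp + R * std::cos(theta),
             y_sp + R * std::sin(theta),
             z_alt };
}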
Figure 8.8: circular motion 1 quadrotor simulation, perspective and top views.
124 CHAPTER 8. SIMULATIONS
8.2.3 Circular Motion 2 Quadrotors
Two quadrotors describe a circular trajectory, both clockwise and counter-clockwise. The phase shift between them is π.
Figure 8.9: circular motion 2 quadrotors simulation, perspective and top views.
8.2.4 Circular Motion 3 Quadrotors
Three quadrotors describe a circular trajectory. The phase shift between them is 2π/3.
Figure 8.10: circular motion 3 quadrotors simulation, perspective and top views.
8.2.5 Circular Motion 4 Quadrotors
Four quadrotors describe a circular trajectory. The phase shift between them is π/2.
Figure 8.11: circular motion 4 quadrotors simulation, perspective and top views.
8.2.6 Elliptical Motion 4 Quadrotors
Four quadrotors describe an elliptical trajectory with eccentricity equal to 0.75.
They do not hold a constant phase shift due to the inertial limitations that prevent
the quadrotors from accelerating and decelerating fast enough.
Figure 8.12: elliptical motion 4 quadrotors simulation, perspective and top views.
8.2.7 Circular Motion 4 Quadrotors [2 radii]
Four quadrotors describe two circular trajectories with two different radii and opposite rotational orientation. The phase shift between vehicles is π.
Figure 8.13: circular motion 4 quadrotors [2 radii] simulation, perspective and top views.
8.2.8 Circular Motion 4 Quadrotors [sub-rotation]
Four quadrotors are packed in groups of two; each of these two groups rotates around a different point, and these points in turn rotate around the z-axis.
Figure 8.14: circular motion 4 quadrotors [sub-rotation] simulation, perspective and top
views.
8.2.9 Circular Motion 4 Quadrotors [variable radius]
Four quadrotors describe a spiral trajectory. This is achieved by changing the radius of rotation over time, i.e. considering R = R(t) = R_k·sin(ωt).
Figure 8.15: circular motion 4 quadrotors [variable radius] simulation, perspective and
top views.
8.2.10 Circular Motion 4 Quadrotors [2 altitudes]
Four quadrotors describe two circular trajectories with the same radius but at different altitudes. They rotate in opposite directions and with a phase shift of π.
Figure 8.16: circular motion 4 quadrotors [2 altitudes] simulation, perspective and top
views.
8.2.11 Circular Motion 4 Quadrotors [variable altitude]
Four quadrotors describe a helicoidal trajectory. This is achieved by changing the altitude over time, i.e. considering Z_wp = Z_wp(t) = Z_k·sin(ωt).
Figure 8.17: circular motion 4 quadrotors [variable altitude] simulation, perspective and
top views.
8.2.12 Circular Motion 4 Quadrotors [vertical]
Four quadrotors describe a circular trajectory, but instead of doing so in the horizontal plane, it is done vertically.
Figure 8.18: circular motion 4 quadrotors [vertical] simulation, perspective and top views.
8.2.13 Circular Motion 4 Quadrotors [horizontal & vertical]
Two quadrotors circle horizontally whereas two quadrotors do so vertically. They
do so in a coordinated manner to avoid collisions.
Figure 8.19: circular motion 4 quadrotors [horizontal & vertical] simulation, perspective
and top views.
8.2.14 Circular Motion 4 Quadrotors [triple rotation]
Four quadrotors describe a three-dimensional rotation composed of: rotation around the z-axis, rotation of the z-axis itself, and altitude change over time.
Figure 8.20: circular motion 4 quadrotors [triple rotation] simulation, perspective and top
views.
8.3 Cooperative Ground & Aerial
Vehicles Simulations
8.3.1 Simple Surveillance 1 Quadrotor
One quadrotor tracks the x–y position of the leader of the platoon (red truck).
Figure 8.21: simple surveillance 1 quadrotor simulation, perspective and top views.
8.3.2 Circular Surveillance 1 Quadrotor
One quadrotor circles around the x–y position of the leader of the platoon (black truck).
Figure 8.22: circular surveillance 1 quadrotor simulation, perspective and top views.
8.3.3 Circular Surveillance 2 Quadrotors
Two quadrotors circle around the x–y position of the leader of the platoon (black truck).
Figure 8.23: circular surveillance 2 quadrotors simulation, perspective and top views.
8.3.4 Circular Surveillance 3 Quadrotors
Three quadrotors circle around the x–y position of the leader of the platoon (black truck).
Figure 8.24: circular surveillance 3 quadrotors simulation, perspective and top views.
8.3.5 Circular Surveillance 4 Quadrotors [2 altitudes, 2 radii]
Four quadrotors circle around the x–y position of the leader of the platoon (black truck). The black and green quadrotors form a pair with a small circling radius; the purple and cyan quadrotors form another pair with a bigger circling radius and a higher altitude.
Figure 8.25: circular surveillance 4 quadrotors [2 altitudes, 2 radii] simulation, perspective
and top views.
8.3.6 Circular Surveillance 4 Quadrotors [front & back]
Two quadrotors (black and green) circle around the x–y position of the leader of the platoon (black truck), and two other quadrotors (purple and cyan) circle at a different altitude around the x–y position of the last member of the platoon (red truck).
Figure 8.26: circular surveillance 4 quadrotors [2 altitudes, front & back] simulation, per-
spective and top views.
8.3.7 Circular Surveillance 4 Quadrotors [vertical]
Four quadrotors circle vertically around the x–y position of the leader of the platoon (black truck).
Figure 8.27: circular surveillance 4 quadrotors [vertical] simulation, perspective and top
views.
8.3.8 Circular Surveillance 4 Quadrotors [multiple platoons]
Two quadrotors (black and green) circle around the x–y position of the leader of one platoon (black truck), and two other quadrotors (purple and cyan) circle at a different altitude around the x–y position of the leader of a different platoon (blue truck).
Figure 8.28: circular surveillance 4 quadrotors [multiple platoons] simulation, perspective
and top views.
8.4 Control Signals
In this section we present the control signals and the respective controlled variables for the platoon leader, the first follower and the quadrotor that tracks the platoon leader in the simulation called Simple Surveillance 1 Quadrotor.
8.4.1 Platoon Leader
Figure 8.29: platoon leader, control signals time plot (speed & distance to WP, and steering & vehicle-to-WP displacement vs. time).
8.4.2 Platoon First Follower
Figure 8.30: platoon second vehicle, control signals time plot (speed & distance to WP, and steering & vehicle-to-WP displacement vs. time).
8.4.3 Quadrotor
Figure 8.31: quadrotor, control signals time plot [pitch & roll] (pitch & x-axis error, and roll & y-axis error vs. time).
Figure 8.32: quadrotor, control signals time plot [yaw & throttle] (yaw & ψ-angle error, and throttle & z-axis error vs. time).
8.5 3D Visualization Engine
The 3D Visualization Engine was written in C++ and can be used to visualize recorded experiments or simulations. The program reads 6 DOF data from .txt files and displays the movement of the rigid bodies on the screen; as the reading routines below show, each line of a ground vehicle file holds three whitespace-separated values (x, y and heading), while each line of an aerial vehicle file holds six (x, y, z and the three rotation angles). In this section we will present an overview of the code and the main routines that allow the program to function.
8.5.1 Overview
Algorithm 4: main function algorithm of the Visualization Engine.
function main(void)
{
ask user for video driver [OpenGL, Direct3D, etc];
set up window dimensions [640x480, 1600x900, 1920x1080];
set video driver and start Scene Manager;
open ground vehicles text files;
open aerial vehicles text files;
define global variables;
load ground vehicles 3D models [.obj, etc];
load aerial vehicles 3D models [.obj, etc];
load testbed 3D model [.obj, etc];
set camera position, orientation and navigation mode;
while rendering is not done do
read vehicle positions from .txt files;
if no more data to read then
rendering is done;
end
update vehicle position variables in the program;
draw scene and vehicles;
end
drop rendering device and close .txt files
}
8.5.2 Code Breakdown
Video Driver
Code 8.1: video driver (C++ code).
video::E_DRIVER_TYPE driverType = driverChoiceConsole();
if (driverType == video::EDT_COUNT)
    return 1;
Video Dimensions
Code 8.2: video dimensions (C++ code).
IrrlichtDevice *device = createDevice(driverType,
    core::dimension2d<u32>(1600, 900), 16,
    false, false, false, &receiver);
if (device == 0)
    return 1; // could not create selected driver.
Scene Manager
Code 8.3: scene manager (C++ code).
video::IVideoDriver *driver = device->getVideoDriver();
scene::ISceneManager *smgr = device->getSceneManager();
Open Ground Vehicles Text Files
Code 8.4: open ground vehicles text files (C++ code).
std::ifstream file_g1 ("ground1.txt");
if (!file_g1.is_open())
{
    std::cout << "Couldn't open File" << std::endl;
}
Open Aerial Vehicles Text Files
Code 8.5: open aerial vehicles text files (C++ code).
std::ifstream file_a1 ("air1.txt");
if (!file_a1.is_open())
{
    std::cout << "Couldn't open File" << std::endl;
}
Define Global Variables
Code 8.6: define global variables (C++ code).
irr::f32 number[3];  // for reading the trucks' coordinates
irr::f32 number2[6]; // for reading the quads' coordinates
core::vector3df g1_position = core::vector3df(0.0f,0.0f,700.0f);
core::vector3df g1_rotation = core::vector3df(0.0f,0.0f,0.0f);
core::vector3df a1_position = core::vector3df(0.0f,0.0f,600.0f);
core::vector3df a1_rotation = core::vector3df(0.0f,0.0f,0.0f);
Load Ground Vehicles 3D Models
Code 8.7: load ground vehicles 3D models (C++ code).
scene::IAnimatedMeshSceneNode *ground1 =
    smgr->addAnimatedMeshSceneNode(smgr->getMesh(
        "C:/Users/Alejandro/Desktop/objects/truck_red.obj"));
if (ground1)
{
    ground1->setMaterialFlag(video::EMF_LIGHTING, false);
    ground1->setScale(core::vector3df(0.01f,0.01f,0.01f));
    ground1->setRotation(core::vector3df(0,0,0));
    ground1->setPosition(core::vector3df(0.f,2.f,0.f));
}
Load Aerial Vehicles 3D Models
Code 8.8: load aerial vehicles 3D models (C++ code).
scene::IAnimatedMeshSceneNode *air1 =
    smgr->addAnimatedMeshSceneNode(smgr->getMesh(
        "C:/Users/Alejandro/Desktop/objects/arducopter1.obj"));
if (air1)
{
    air1->setMaterialFlag(video::EMF_LIGHTING, false);
    air1->setScale(core::vector3df(200.f,200.f,200.f));
    air1->setRotation(core::vector3df(0,0,0));
    air1->setPosition(core::vector3df(0.f,0.f,0.f));
}
Load Testbed 3D Model
Code 8.9: load testbed 3D model (C++ code).
scene::IAnimatedMeshSceneNode *testbed =
    smgr->addAnimatedMeshSceneNode(smgr->getMesh(
        "C:/Users/Alejandro/Desktop/objects/testbed_simple.obj"));
if (testbed)
{
    testbed->setMaterialFlag(video::EMF_LIGHTING, false);
    testbed->setScale(core::vector3df(40.f,40.f,40.f));
    testbed->setRotation(core::vector3df(0,0,0));
    testbed->setPosition(core::vector3df(0.f,0.f,200.f));
}
Set Camera Position, Orientation and Navigation Mode
Code 8.10: set camera position, orientation and navigation mode (C++ code).
scene::ICameraSceneNode *camera =
    smgr->addCameraSceneNodeFPS(0, 50.f, 0.2f);
camera->setPosition(core::vector3df(200,200,200));
device->getCursorControl()->setVisible(false);
Read Ground Vehicle Positions From .txt Files
Code 8.11: read ground vehicle positions from .txt files (C++ code).
// GROUND VEHICLE 1
if ( file_g1.good() )
{
    getline (file_g1,line);
    std::istringstream iss (line);
    int i = 0;
    do
    {
        std::string sub;
        iss >> sub;
        number[i] = ::atof(sub.c_str());
        i++;
    } while (iss);
    g1_position.X = SCALE_K*number[0];
    g1_position.Z = SCALE_K*number[1];
    g1_rotation.Y = number[2];
}
Read Aerial Vehicle Positions From .txt Files
Code 8.12: read aerial vehicle positions from .txt files (C++ code).
// AERIAL VEHICLE 1
if ( file_a1.good() )
{
    getline (file_a1,line);
    std::istringstream iss (line);
    int i = 0;
    do
    {
        std::string sub;
        iss >> sub;
        number2[i] = ::atof(sub.c_str());
        i++;
    } while (iss);
    a1_position.X = SCALE_K*number2[0];
    a1_position.Z = SCALE_K*number2[1];
    a1_position.Y = SCALE_K*number2[2];
    a1_rotation.X = number2[4];
    a1_rotation.Z = -1*number2[3];
    a1_rotation.Y = number2[5];
}
Update Vehicle Position Variables
Code 8.13: update vehicle position variables (C++ code).
// GROUND VEHICLE 1
ground1->setPosition(g1_position);
ground1->setRotation(g1_rotation);
// AERIAL VEHICLE 1
air1->setPosition(a1_position);
air1->setRotation(a1_rotation);
Draw Scene and Vehicles
Code 8.14: draw scene and vehicles (C++ code).
driver->beginScene(true, true, video::SColor(255,113,113,133));
smgr->drawAll(); // draw the 3d scene
device->getGUIEnvironment()->drawAll();
driver->endScene();
int fps = driver->getFPS();
Drop Rendering Device and Close .txt Files
Code 8.15: drop rendering device and close .txt files (C++ code).
device->drop();
file_g1.close();
file_a1.close();
Chapter 9
Experimental Results
This chapter contains a brief analysis of the experimental results of this thesis, the practical implications of the hardware used (such as limitations on the number of vehicles that can be controlled), and remarks on the performance expected versus the performance observed. We also explore the scalability of the implementation in the light of hardware and software limitations, as well as the communication issues observed.
The videos of the experiments relevant to this thesis can be found in the YouTube
channel: AutomaticControlKTH.
9.1 Hardware and Experimental
Performance
9.1.1 Battery Charge Level
As is to be expected, the truck battery discharges over time until the vehicle can no longer move. The importance of this discharge process is related to the output power the battery can supply at a given time. In general, it was observed that a fully charged battery could only supply a steady power for 5 min; after that time the battery would still retain enough charge to keep the truck moving, but the speed controller constants would need to be increased to account for the decrease in the battery output power.
The same situation applies to the quadrotor; however, in this case the constants that need to be increased to account for the battery discharge are the throttle controller constants. Both the truck and quadrotor batteries last approximately the same amount of time before requiring to be recharged. The working time is normally between 5 min and 10 min, depending on the truck's motor speed and the quadrotor's throttle in each case.
9.1.2 Truck Speed Control
In the simulations the truck speed controller was implemented using 128 levels of quantization over the speed range of 0 m/s to 7 m/s. However, in the experimental setup the truck speed behavior was far from being linearly distributed over even quantization intervals as expected in the model. In reality, the first 5 quantization intervals produced no movement whatsoever, the next 5 quantization intervals produced appropriate speed values with very high quantization steps, and the rest of the quantization intervals produced exceedingly high speeds; for this reason they were conveniently blocked in the program.
In other words, the speed controller is subject to: a deadzone (in the low speed range), a very low quantization resolution zone (in the middle speed range), and a saturation zone (in the upper speed range). This means that the real working range of the speed controller is about 10 % of the simulated (expected) range. These limitations are inherent to the vehicle construction, and the ranges vary with the battery charge as well.
9.1.3 Quadrotor Throttle Control
In the simulations the quadrotor throttle controller was implemented using 128 levels of quantization. However, in the experimental setup the quadrotor throttle behavior was not linearly distributed over even quantization intervals as expected in the model. In reality, the first 4 quantization intervals produced no propeller movement whatsoever, the next 90 quantization intervals produced appropriate throttle values, and the rest of the quantization intervals produced exceedingly high throttle values; for this reason they were conveniently blocked in the program.
In other words, the throttle controller is subject to: a deadzone (in the low throttle range), a reasonable quantization resolution zone (in the middle throttle range), and a saturation zone (in the upper throttle range). This means that the real working range of the throttle controller is about 70 % of the simulated (expected) range. These limitations are inherent to the vehicle construction, and the ranges vary with the battery charge as well.
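A simple way to cope with both effects in software is to map the commanded value onto the usable band of levels only; the C++ sketch below illustrates this with the truck figures reported above (the function name, interface and linear mapping are our own assumptions).

#include <algorithm>

// Map a normalized command in [0,1] onto the usable quantization levels,
// skipping the deadzone and the blocked saturation zone. For the truck,
// levels 0-4 give no motion and levels from 10 upwards are blocked
// (assumed from the observations above), leaving levels 5-9 usable.
int usable_level(double cmd, int dead_end = 4, int sat_start = 10)
{
    cmd = std::clamp(cmd, 0.0, 1.0);
    int span = (sat_start - 1) - (dead_end + 1); // width of usable band
    return dead_end + 1 + static_cast<int>(cmd * span + 0.5);
}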
9.1.4 Number of Controllable Vehicles
The motes connected to the computer running LabVIEW and the SF interfaces are normally referred to as base station motes (actuator and sensor). As mentioned earlier, due to the specific refresh rate requirements of the trucks and the quadrotors, each actuator base station mote can take care of sending commands to either two trucks or one quadrotor at a time. In order to avoid packet dropping, the actuator base station motes should be programmed with different mote IDs, just like the motes onboard the vehicles.
In the experimental setup four trucks were used simultaneously; using two actuator base station motes to control them produced no problems whatsoever. In a different experiment, the reception of actuator commands was tested separately using 6 ground vehicles and 3 aerial vehicles at the same time; in this setup 6 actuator base station motes were used to send the commands at close range. The data reception on the vehicles had a refresh rate slightly lower than the refresh rate expected; this was due to the SF clogging in the computer connected to the actuator base station motes.
To solve this problem we attempted opening several instances of Cygwin so that there would be less load per process; this, however, did not improve the performance significantly. Future work and experimentation will be required in order to allow this setup to handle an arbitrary number of vehicles.
Chapter 10
Conclusions and Future Work
10.1 Conclusions
In this thesis we have presented a complete implementation of an experimental robotic testbed. We have described both the hardware and software parts, covering in detail the electronics used, the information flow in the testbed, the mathematical modeling of the ground and aerial vehicles and the corresponding simulations.
Regarding the hardware section, we presented the electronics used to build the testbed along with the specifications of each component. Regarding the software section, we presented a complete explanation of each VI in our LabVIEW program along with the abstract program structure defined as the layered controller.
Both positioning systems used to build the testbed were extensively analyzed, showing the pros and cons, along with a discussion about the performance limitations inherent to the physical principles on which they are based. Regarding the localization systems, we also presented the theory and implementation of an EKF used to improve the quality of the data.
Regarding the simulations and theoretical analysis, we presented the mathematical models of both the ground and aerial vehicles. We proceeded to build these models in Simulink and created simulations to test the controllers under different scenarios involving multiple ground and aerial vehicles. Lastly, we presented the C++ Visualization Engine created to display 3D animations of any rigid body simulation.
Even though this testbed was developed in the framework of quadrotor surveillance over platoons, it was designed in a generic way, allowing the user to specify different behaviors easily through MATLAB scripts.
The software and hardware infrastructure developed in this thesis, along with this document itself serving as a guide, will help those who wish to carry out experiments related to multi-agent control and those who wish to improve upon the existing work.
10.2 Future Work
In the future there is still more work to be done to improve the current testbed; in this section we briefly mention these tasks:
• Improve the truck and quadrotor Simulink models by accounting for the aspects that were simplified in the mathematical models presented in this thesis.
• Create mathematical and Simulink models for the truck trailers so that they can be displayed in the simulation as well.
• Improve the C++ Visualization Engine so that it can handle dynamic lights and physical interactions such as collisions with walls and between vehicles.
• Improve the EKF proposed in this thesis by incorporating vehicle specific dynamics in the filter update process.
• Study the possibility of migrating the controllers from the central computer into the embedded microcontrollers, i.e. the motes.
• Perform experiments in the testbed in situations where 10 or more vehicles are involved simultaneously, and sort out the SF clogging and other problems that may arise with the transmission of mote actuation commands.
• Study the possibility of using different sensors onboard the vehicles besides the IRs, and sort out possible data transmission issues related to the reception of incoming sensor data from multiple vehicles simultaneously.
• Study the possibility of using more capable robotics hardware such as the RoBoard RB-100, which possesses a 32 bit x86 CPU working at 1000 MHz capable of running Windows, Linux and DOS. It can be programmed in C++, and expansion ports allow it to be connected to the internet and to any other sensor or actuator.
• Study the possibility of expanding the testbed to be able to control water vehicles such as scale boats, aerial vehicles such as scale planes and ground vehicles such as differential drive robots.
Appendix A
MATLAB Scripts
Code 1: extended Kalman filter (MATLAB script).
function xhatOut = ExtKalman(meas,dt)
persistent P xhat Q R
if isempty(P)
    % First time through the code so do some initialization
    xhat = [0;0;0;0];
    P = zeros(4,4);
    Q = diag([0 .001 0 .001]);
    R = diag([0.01 0.01]);
end
% Calculate the Jacobians for the state and measurement equations
F = [1 dt 0 0;0 1 0 0;0 0 1 dt;0 0 0 1];
X_Hat = xhat(1);
Y_Hat = xhat(3);
yhat = [X_Hat; Y_Hat];
H = [1 0 0 0;
     0 0 1 0];
% Propagate the state and covariance matrices
xhat = F*xhat;
P = F*P*F' + Q;
% Calculate the Kalman gain
K = P*H'/(H*P*H' + R);
% Calculate the measurement residual
resid = meas - yhat;
% Update the state and covariance estimates
xhat = xhat + K*resid;
P = (eye(size(K,1)) - K*H)*P;
% Post the results
xhatOut = xhat;
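For reference, the filter above implements a constant-velocity model with position-only measurements; in our reading of the code, with state x = [X; Ẋ; Y; Ẏ], the underlying model is:

x_{k+1} = F·x_k + w_k,   y_k = H·x_k + v_k
F = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1],   H = [1 0 0 0; 0 0 1 0]

with process noise covariance Q = diag(0, 0.001, 0, 0.001) and measurement noise covariance R = diag(0.01, 0.01), as initialized in the script.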
Code 2: automatic IR lookup table generator (MATLAB script).
%% Sensor 1 (left side - Short Range)
x1 = [4095;3210;2280;1778;1460;1260;1106;973;896;818;763;710;680;650];
y1 = [5;10;15;20;25;30;35;40;45;50;55;60;65;70];
%% Sensor 2 (front side - Short Range)
x2 = [4095;3150;2200;1733;1422;1220;1070;965;875;810;720;706;650;620];
y2 = [5;10;15;20;25;30;35;40;45;50;55;60;65;70];
%% Sensor 3 (front side - Long Range)
x3 = [3278;2605;1995;1620;1335;1158;992;870;791;700];
y3 = [20;30;40;50;60;70;80;90;100;110];
%% Sensor 4 (right side - Short Range)
x4 = [4095;3280;2238;1779;1419;1239;1073;941;865;810;734;703;650;620];
y4 = [5;10;15;20;25;30;35;40;45;50;55;60;65;70];
%% Curve fit Sensor 1
%cftool(x1,y1)
a1 = 471.9;
b1 = -0.004217;
c1 = 56.22;
d1 = -0.0005716;
X1 = 0:4095;
Y1 = a1*exp(b1*X1) + c1*exp(d1*X1);
%% Curve fit Sensor 2
%cftool(x2,y2)
a2 = 295;
b2 = -0.0035;
c2 = 49.68;
d2 = -0.0005378;
X2 = 0:4095;
Y2 = a2*exp(b2*X2) + c2*exp(d2*X2);
%% Curve fit Sensor 3
%cftool(x3,y3)
a3 = 263.2;
b3 = -0.002904;
c3 = 108.2;
d3 = -0.000506;
X3 = 0:4095;
Y3 = a3*exp(b3*X3) + c3*exp(d3*X3);
%% Curve fit Sensor 4
%cftool(x4,y4)
a4 = 337.4;
b4 = -0.003722;
c4 = 50.08;
d4 = -0.0005275;
X4 = 0:4095;
Y4 = a4*exp(b4*X4) + c4*exp(d4*X4);
%% Write to .xls config file
xlswrite('truck1.xls', Y1','A1:A4096');
xlswrite('truck1.xls', Y2','B1:B4096');
xlswrite('truck1.xls', Y3','C1:C4096');
xlswrite('truck1.xls', Y4','D1:D4096');
Code 3: truck control signals plotting (MATLAB script).
%% Plot the control signals

%% Get size of measurements
p_begin = 0.2;
p_end = 1.00;
if (exist('ground1','var') == 1)
    sel = floor(size(ground1)*p_begin):floor(size(ground1)*p_end);
elseif (exist('air1','var') == 1)
    sel = floor(size(air1)*p_begin):floor(size(air1)*p_end);
else
    disp('error');
    return;
end

%% Trucks
plot(time(sel),steering1(sel),'b',time(sel),displacement1(sel),'.r');
Labels = {'Time (s)','Steering (rad)','Vehicle to WP displacement (rad)'};
Legends = {'Steering (\theta)','Displacement (\psi)'};
layerplot(time(sel),steering1(sel),time(sel),displacement1(sel),...
    Labels,Legends,'Steering & Vehicle to WP displacement vs. Time [Platoon Leader]');
%markersize 2 stepsize 10 width 0.5

plot(time(sel),steering2(sel),'b',time(sel),displacement2(sel),'.r');
Labels = {'Time (s)','Steering (rad)','Vehicle to WP displacement (rad)'};
Legends = {'Steering (\theta)','Displacement (\psi)'};
layerplot(time(sel),steering2(sel),time(sel),displacement2(sel),...
    Labels,Legends,'Steering & Vehicle to WP displacement vs. Time [Platoon Follower 1]');
%markersize 2 stepsize 10 width 0.5

plot(time(sel),speed1(sel),'b',time(sel),dist1(sel),'.r');
Labels = {'Time (s)','Speed (m/s)','Distance to WP (m)'};
Legends = {'Speed (v)','Distance (d)'};
layerplot(time(sel),speed1(sel),time(sel),dist1(sel),...
    Labels,Legends,'Speed & Distance to WP vs. Time [Platoon Leader]');
%markersize 2 stepsize 10 width 1.5

plot(time(sel),speed2(sel),'b',time(sel),dist2(sel),'.r');
Labels = {'Time (s)','Speed (m/s)','Distance to WP (m)'};
Legends = {'Speed (v)','Distance (d)'};
layerplot(time(sel),speed2(sel),time(sel),dist2(sel),...
    Labels,Legends,'Speed & Distance to WP vs. Time [Platoon Follower 1]');
%markersize 2 stepsize 10 width 1.5
Code 4: quadrotor control signals plotting (MATLAB script).
%% Plot the control signals

%% Get size of measurements
p_begin = 0.2;
p_end = 1.00;
if (exist('ground1','var') == 1)
    sel = floor(size(ground1)*p_begin):floor(size(ground1)*p_end);
elseif (exist('air1','var') == 1)
    sel = floor(size(air1)*p_begin):floor(size(air1)*p_end);
else
    disp('error');
    return;
end

%% Quads
plot(time(sel),pitch1(sel),'b',time(sel),xdist1(sel),'.r');
Labels = {'Time (s)','Pitch (rad)','\Deltax (m)'};
Legends = {'Pitch (\theta)','Error_x (a)'};
layerplot(time(sel),pitch1(sel),time(sel),xdist1(sel),...
    Labels,Legends,'Pitch & X-axis Error vs. Time');
%markersize 2 stepsize 10 width 0.5

plot(time(sel),roll1(sel),'b',time(sel),ydist1(sel),'.r');
Labels = {'Time (s)','Roll (rad)','\Deltay (m)'};
Legends = {'Roll (\phi)','Error_y (b)'};
layerplot(time(sel),roll1(sel),time(sel),ydist1(sel),...
    Labels,Legends,'Roll & Y-axis Error vs. Time');
%markersize 2 stepsize 10 width 0.5

plot(time(sel),yaw1(sel),'b',time(sel),pangle(sel),'.r');
Labels = {'Time (s)','Yaw (rad)','\Delta\psi (rad)'};
Legends = {'Yaw (\psi)','Error_\psi (\Delta\psi)'};
layerplot(time(sel),yaw1(sel),time(sel),pangle(sel),...
    Labels,Legends,'Yaw & \psi-angle Error vs. Time');
%markersize 2 stepsize 10 width 0.5

plot(time(sel),throttle1(sel),'b',time(sel),zdist1(sel),'.r');
Labels = {'Time (s)','Throttle (m/s^2)','\Deltaz (m)'};
Legends = {'Throttle (T)','Error_z (c)'};
layerplot(time(sel),throttle1(sel),time(sel),zdist1(sel),...
    Labels,Legends,'Throttle & Z-axis Error vs. Time');
%markersize 2 stepsize 10 width 0.5
Code 5: vehicles and trajectories 3D plotting (MATLAB script).
%% Plot the vehicles and trajectories

%% Get size of measurements
p_begin = 0.3;
p_end = 1.00;
if (exist('ground1','var') == 1)
    sel = floor(size(ground1)*p_begin):floor(size(ground1)*p_end);
elseif (exist('air1','var') == 1)
    sel = floor(size(air1)*p_begin):floor(size(air1)*p_end);
else
    disp('error');
    return;
end

grey = [0.35, 0.35, 0.35];
stepsize = 1600;
stepsize_q = 1600;

%% Truck 1
trajectory3(ground1(sel,1),ground1(sel,2),zeros(size(sel,2),2)...
    ,zeros(size(sel,2),2),zeros(size(sel,2),2),...
    (ground1(sel,3)*pi/180),1/4,stepsize,'truck','k')

%% Truck 2
trajectory3(ground2(sel,1),ground2(sel,2),zeros(size(sel,2),2)...
    ,zeros(size(sel,2),2),zeros(size(sel,2),2),...
    (ground2(sel,3)*pi/180),1/4,stepsize,'truck','k')

k = 2; %scale angles up
%% Quad 1
trajectory3(air1(sel,1),air1(sel,2),air1(sel,3)...
    ,(air1(sel,4)*k*pi/180),(air1(sel,5)*k*pi/180),...
    (air1(sel,6)*pi/180),1/6,stepsize_q,'ah64','k')

%% Quad 2
trajectory3(air2(sel,1),air2(sel,2),air2(sel,3)...
    ,(air2(sel,4)*k*pi/180),(air2(sel,5)*k*pi/180),...
    (air2(sel,6)*pi/180),1/6,stepsize_q,'ah64','g')

%% Resize axes limits (if needed)
%h = gca;
%xlims = get(h,'XLim');
%ylims = get(h,'YLim');
%set(h,'XLim',xlims*1.25);
%set(h,'YLim',ylims*1.25);

%% Name Axes and Title
hTitle = title ('Circular Surveillance 4 Quadrotors [2 altitudes, front & back]');
hXLabel = xlabel('X (m)');
hYLabel = ylabel('Y (m)');
hZLabel = zlabel('Z (m)');

%% Adjust Styles
set( gca , ...
    'FontName' , 'Helvetica' );
set([hTitle, hXLabel, hYLabel, hZLabel], ...
    'FontName' , 'AvantGarde');
set([hXLabel, hYLabel, hZLabel], ...
    'FontSize' , 10 );
set( hTitle , ...
    'FontSize' , 12 , ...
    'FontWeight' , 'bold' );

set(gca, ...
    'Box' , 'off' , ...
    'TickDir' , 'out' , ...
    'TickLength' , [.02 .03] , ...
    'XMinorTick' , 'on' , ...
    'YMinorTick' , 'on' , ...
    'ZMinorTick' , 'on' , ...
    'XLim' , [-40 40] , ...
    'YLim' , [-40 40] , ...
    'ZLim' , [ 0 30] , ...
    'XTick' , -40:10:40, ...
    'YTick' , -40:10:40, ...
    'ZTick' , 0:10:30, ...
    'XColor' , [.3 .3 .3], ...
    'YColor' , [.3 .3 .3], ...
    'ZColor' , [.3 .3 .3], ...
    'LineWidth' , 1 );

%% Visualization and closing
az = 50;
el = 30;
view(az,el) % Perspective view
pause;
az = 90;
el = 90;
view(az,el) % Top view

%% Note: trajectory3 is a function created by: Valerio Scordamaglia
% function [M]=trajectory3(x,y,z,pitch,roll,yaw,...
%     scale_factor,step,selector,color,varargin);
Code 6: Simulink signal extraction with subsampling (MATLAB script).
%% Simulink data extractor with subsampling
n = 5; % subsampling factor

if (exist('ground1','var') == 1)
    samples = size(ground1,1);
elseif (exist('air1','var') == 1)
    samples = size(air1,1);
else
    disp('error');
    return;
end

samples = floor(samples/n);

%% Trucks
ground1sub = zeros(samples,3);
ground2sub = zeros(samples,3);
ground3sub = zeros(samples,3);
ground4sub = zeros(samples,3);
ground5sub = zeros(samples,3);

%% Quads
air1sub = zeros(samples,6);
air2sub = zeros(samples,6);
air3sub = zeros(samples,6);
air4sub = zeros(samples,6);

%% Subsampling loop
for i=1:samples
    ground1sub(i,:)= ground1(n*i,:);
    ground2sub(i,:)= ground2(n*i,:);
    ground3sub(i,:)= ground3(n*i,:);
    ground4sub(i,:)= ground4(n*i,:);
    ground5sub(i,:)= ground5(n*i,:);

    air1sub(i,:)= air1(n*i,:);
    air2sub(i,:)= air2(n*i,:);
    air3sub(i,:)= air3(n*i,:);
    air4sub(i,:)= air4(n*i,:);
end
%% Saving variables to .txt files
save('ground1.txt', 'ground1sub', '-ASCII')
save('ground2.txt', 'ground2sub', '-ASCII')
save('ground3.txt', 'ground3sub', '-ASCII')
save('ground4.txt', 'ground4sub', '-ASCII')
save('ground5.txt', 'ground5sub', '-ASCII')

save('air1.txt', 'air1sub', '-ASCII')
save('air2.txt', 'air2sub', '-ASCII')
save('air3.txt', 'air3sub', '-ASCII')
save('air4.txt', 'air4sub', '-ASCII')
C/C++ Codes
Code 7: ArduCopter sensor data sending loop (C code).
// My variables
//
uint8_t d_time = 10;
static int16_t n_heading = 0;
static int16_t sonar_alt = 0;

uint8_t n_init = 0xFE; // protocol first byte

uint8_t h_num1 = 0;
uint8_t h_num2 = 0;
uint8_t a_num1 = 0;
uint8_t a_num2 = 0;
...
static void medium_loop()
{
    // This is the start of the medium (10 Hz) loop pieces
    //
    switch(medium_loopCounter) {

    // This case deals with the GPS and Compass
    //
    case 0:
        medium_loopCounter++;
        ...

    // This case performs some navigation computations
    //
    case 1:
        medium_loopCounter++;
        ...

    // command processing
    //
    case 2:
        medium_loopCounter++;
        ...

    // This case deals with sending high rate telemetry
    //
    case 3:
        medium_loopCounter++;
        ...

    // This case controls the slow loop
    //
    case 4:
        ...

        // Code for sending altitude and heading
        // through the serial port
        //

        if (compass.read()) {
            compass.calculate(ahrs.get_dcm_matrix());
            Vector3f maggy = compass.get_offsets();
            n_heading = (wrap_360(ToDeg(compass.heading)*100));
        }

        h_num1 = (int)(((n_heading) >> 8) & 0XFF);
        h_num2 = (int)((n_heading) & 0XFF);

        a_num1 = (int)(((sonar_alt) >> 8) & 0XFF);
        a_num2 = (int)((sonar_alt) & 0XFF);

        //Serial.printf("%d\n",n_heading);
        //Serial.printf("%d\n",sonar_alt);

        // protocol requires 0xFE to be avoided in data stream
        if (h_num1 == 0xFE)
            h_num1 = 0xFF;
        if (h_num2 == 0xFE)
            h_num2 = 0xFF;
        if (a_num1 == 0xFE)
            a_num1 = 0xFF;
        if (a_num2 == 0xFE)
            a_num2 = 0xFF;

        Serial3.printf("%c",n_init);
        delay(d_time);
        Serial3.printf("%c",h_num1);
        delay(d_time);
        Serial3.printf("%c",h_num2);
        delay(d_time);
        Serial3.printf("%c",a_num1);
        delay(d_time);
        Serial3.printf("%c",a_num2);

    default:
        // this is just a catch all
        //
        medium_loopCounter = 0;
    }
}
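On the receiving side the two bytes of each value have to be reassembled; a minimal sketch of that step (our own illustration, not code from the mote firmware) is:

#include <stdint.h>

// Reassemble a 16-bit value from the high/low bytes that follow the
// 0xFE start byte. Because the sender replaces 0xFE by 0xFF inside the
// data stream, the reconstructed value may carry a small coding error.
int16_t decode16(uint8_t high, uint8_t low)
{
    return (int16_t)(((uint16_t)high << 8) | (uint16_t)low);
}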
Code 8: Visualization Engine example for 1 truck and 1 quadrotor (C++ code).
1 #ifdef _MSC_VER
2 // We'll also define this to stop MSVC complaining about sprintf().
3 #define _CRT_SECURE_NO_WARNINGS
4 #pragma comment(lib, "Irrlicht.lib")
5 #endif
6
7 #include <irrlicht.h>
8 #include "driverChoice.h"
9 #include <iostream>
10 #include <fstream>
11 #include <string>
12 #include <sstream>
13 #include <stdlib.h>
14 using namespace irr;
15
16 int main()
17 {
18 // ask user for driver
19 video::E_DRIVER_TYPE driverType=driverChoiceConsole();
20 if (driverType==video::EDT_COUNT)
21 return 1;
22 // create device
23 MyEventReceiver receiver;
24 IrrlichtDevice
*
device = createDevice(driverType,
25 core::dimension2d<u32>(1600, 900), 16,
26 false, false, false, &receiver);
27 if (device == 0)
28 return 1; // could not create selected driver.
29
30 video::IVideoDriver
*
driver = device>getVideoDriver();
31 scene::ISceneManager
*
smgr = device>getSceneManager();
32
33 irr::f32 number[3]; // for reading the cars coordinates
34 irr::f32 number2[6]; // for readings the copters coordinates
35
36 std::string line; // variable used to read new lines
37
38 std::ifstream file_g1 ("ground1.txt"); // truck 1
39 if (!file_g1.is_open())
40 {
41 std::cout << "Couldn't open File" << std::endl;
42 }
43
44 std::ifstream file_a1 ("air1.txt"); // quad 1
45 if (!file_a1.is_open())
46 {
47 std::cout << "Couldn't open File" << std::endl;
48 }
49 core::vector3df g1_position = core::vector3df(0.0f,0.0f,700.0f);
50 core::vector3df g1_rotation = core::vector3df(0.0f,0.0f,0.0f);
51 core::vector3df a1_position = core::vector3df(0.0f,0.0f,600.0f);
52 core::vector3df a1_rotation = core::vector3df(0.0f,0.0f,0.0f);
C/C++ CODES 171
53
54 scene::IAnimatedMeshSceneNode
*
ground1 =
55 smgr>addAnimatedMeshSceneNode(smgr>getMesh(
56 "C:/Users/Alejandro/Desktop/objects/truck_red.obj"));
57 if (ground1)
58 {
59 ground1>setMaterialFlag(video::EMF_LIGHTING, false);
60 ground1>setScale(core::vector3df(0.01f,0.01f,0.01f));
61 ground1>setRotation(core::vector3df(0,0,0));
62 ground1>setPosition(core::vector3df(0.f,2.f,0.f));
63 } // load truck model ".obj"
64
65 scene::IAnimatedMeshSceneNode
*
air1 =
66 smgr>addAnimatedMeshSceneNode(smgr>getMesh(
67 "C:/Users/Alejandro/Desktop/objects/arducopter1.obj"));
68 if (air1)
69 {
70 air1>setMaterialFlag(video::EMF_LIGHTING, false);
71 air1>setScale(core::vector3df(200.f,200.f,200.f));
72 air1>setRotation(core::vector3df(0,0,0));
73 air1>setPosition(core::vector3df(0.f,0.f,0.f));
74 } // load quadrotor model ".obj"
75
76 scene::IAnimatedMeshSceneNode
*
testbed =
77 smgr>addAnimatedMeshSceneNode(smgr>getMesh(
78 "C:/Users/Alejandro/Desktop/objects/testbed_simple.obj"));
79 if (testbed)
80 {
81 testbed>setMaterialFlag(video::EMF_LIGHTING, false);
82 testbed>setScale(core::vector3df(40.f,40.f,40.f));
83 testbed>setRotation(core::vector3df(0,0,0));
84 testbed>setPosition(core::vector3df(0.f,0.f,200.f));
85 } // load testbed model ".obj"
86
87 /
*
88 To be able to look at and move around in this scene,
89 we create a first person shooter style camera
90 and make the mouse cursor invisible.
91
*
/
92 scene::ICameraSceneNode
*
camera = ...
smgr>addCameraSceneNodeFPS(0, 50.f, 0.2f);
93 camera>setPosition(core::vector3df(200,200,200));
94 device>getCursorControl()>setVisible(false);
95
96 device>getGUIEnvironment()>addImage(
97 driver>getTexture("../../media/kth.png"),
98 core::position2d<s32>(10,20)); // add a logo
99
100 gui::IGUIStaticText
*
diagnostics = ...
device>getGUIEnvironment()>addStaticText(
101 L"", core::rect<s32>(10, 10, 400, 20));
102 diagnostics>setOverrideColor(video::SColor(255, 255, 255, 0));
103
104 int lastFPS = 1;
172 APPENDIX A
// In order to do framerate independent movement, we have to know
// how long it was since the last frame
u32 then = device->getTimer()->getTime();
// This is the movement speed in units per second.
const f32 MOVEMENT_SPEED = 5.f;
const f32 ANGULAR_SPEED = 25.f;
const f32 SCALE_K = 10.f;

while(device->run())
{
    // Work out a frame time in seconds.
    const u32 now = device->getTimer()->getTime();
    const f32 frameDeltaTime = (f32)(now - then) / 1000.f;
    then = now;

    // GROUND VEHICLE 1
    if ( file_g1.good() )
    {
        getline (file_g1,line);
        std::istringstream iss (line);
        int i = 0;
        std::string sub;
        // read at most 3 values (x, y, yaw); the bound keeps a
        // trailing failed extraction from writing past the array
        while (i < 3 && iss >> sub)
        {
            number[i] = ::atof(sub.c_str());
            i++;
        }
        g1_position.X = SCALE_K * number[0];
        g1_position.Z = SCALE_K * number[1];
        g1_rotation.Y = number[2];
    }

    // AERIAL VEHICLE 1
    if ( file_a1.good() )
    {
        getline (file_a1,line);
        std::istringstream iss (line);
        int i = 0;
        std::string sub;
        // read at most 6 values (x, y, z and three attitude angles)
        while (i < 6 && iss >> sub)
        {
            number2[i] = ::atof(sub.c_str());
            i++;
        }
        a1_position.X = SCALE_K * number2[0];
        a1_position.Z = SCALE_K * number2[1];
        a1_position.Y = SCALE_K * number2[2];
        a1_rotation.X = number2[4];
        a1_rotation.Z = -1 * number2[3];
        a1_rotation.Y = number2[5];
    }
    ground1->setPosition(g1_position);
    ground1->setRotation(g1_rotation);
    air1->setPosition(a1_position);
    air1->setRotation(a1_rotation);

    driver->beginScene(true, true,
        video::SColor(255,113,113,133));
    // draw the 3D scene
    smgr->drawAll();
    // draw the GUI environment
    device->getGUIEnvironment()->drawAll();
    driver->endScene();

    int fps = driver->getFPS(); // print frames per second
    if (lastFPS != fps)
    {
        core::stringw tmp(
            L"Movement Example - Irrlicht Engine [");
        tmp += driver->getName();
        tmp += L"] fps: ";
        tmp += fps;
        device->setWindowCaption(tmp.c_str());
        lastFPS = fps;
    }
}
/*
In the end, delete the Irrlicht device.
*/
device->drop();
file_g1.close();
file_a1.close();
return 0;
}
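
The Visualization Engine is only loosely coupled to the rest of the testbed: every frame it reads one more line from ground1.txt and air1.txt. From the parsing loops above, a ground-vehicle line carries three whitespace-separated values (x, y, yaw) and an aerial-vehicle line carries six (x, y, z followed by three attitude angles). As a minimal sketch of the producer side, assuming only this whitespace-separated format, the following illustrative program (not part of the testbed code; in the testbed the files are presumably written by the LabVIEW application) appends one pose per vehicle with placeholder values:

#include <fstream>
#include <iomanip>

// Illustrative producer for the pose files consumed by the
// Visualization Engine above. File names and field order are taken
// from the parsing code; all pose values are placeholders.
int main()
{
    std::ofstream ground("ground1.txt", std::ios::app);
    std::ofstream air("air1.txt", std::ios::app);

    // Ground vehicle: x y yaw (one line per rendered frame).
    ground << std::fixed << std::setprecision(3)
           << 1.2f << " " << 3.4f << " " << 90.0f << "\n";

    // Aerial vehicle: x y z plus three attitude angles, in the order
    // consumed by number2[0..5] above.
    air << std::fixed << std::setprecision(3)
        << 0.5f << " " << 0.5f << " " << 1.0f << " "
        << 0.0f << " " << 0.0f << " " << 45.0f << "\n";
    return 0;
}

Since the reader consumes exactly one line per rendered frame, truncating the files between runs avoids replaying a stale trajectory.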
Appendix B
Actuator Interface Board
[Figure content: schematic and board drawings (SerialAdapter.sch / SerialAdapter.brd, sheet 1/1). The board is powered with Vcc = 7.2 V from the RC battery pack through J1; J2 goes to the Pololu Micro Serial servo controller and P1 to the Tmote Sky. Two MC74VHC1GT50DT buffers (U1, U2) translate the TX/RX logic levels, an AP1117E50G-13 and an LP38692MP regulator generate the supply rails, and the PCB is matched to the Tmote Sky dimensions.]
Figure 1: Actuator interface board (schematic & PCB).
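
The J2 header carries the 5 V TTL serial line over which the actuator mote commands the Pololu Micro Serial servo controller, with U1 and U2 translating the logic levels. As a rough illustration of the traffic on that line, the following sketch assembles a "set position, 7-bit" frame of the Pololu protocol (start byte 0x80, device ID 0x01, command, servo number, data byte); the command numbers should be verified against the Pololu user's guide, and sendByte is a hypothetical stand-in for the mote's UART write routine.

#include <cstdint>

// Hypothetical UART transmit routine; on the testbed this would be
// the Tmote Sky serial driver writing towards J2.
void sendByte(uint8_t b)
{
    (void)b; // stub: a real driver forwards b to the serial port
}

// Sketch of a Pololu-protocol "set position, 7-bit" frame; byte
// layout and command number to be checked against the Pololu Micro
// Serial Servo Controller user's guide.
void setServoPosition(uint8_t servo, uint8_t position /* 0..127 */)
{
    sendByte(0x80);            // start byte
    sendByte(0x01);            // device ID of the servo controller
    sendByte(0x02);            // command 2: set position, 7-bit
    sendByte(servo);           // servo number
    sendByte(position & 0x7F); // data byte, MSB must be zero
}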
IR Sensor Interface Board
[Figure content: schematic and board drawings (board1.sch / board1.brd, sheet 1/1). Four 3-pin SIP headers (P1, P2, P3, P6) connect the IR sensors, each channel with a pair of 10 kΩ resistors and a 10 µF capacitor; P4 is a 2×5 header, P5 a 2-pin power input, and an LP38692MP and an AP1117E50G-13 regulator with decoupling capacitors generate VCC.]
Figure 2: IR sensors interface board (schematic & PCB).
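
The interface board only powers the sensors and conditions their analog outputs; converting an ADC sample into a distance is done in software on the mote. The sketch below is a generic example for a Sharp-style analog IR ranger with an approximately inverse distance-voltage characteristic; the reference voltage, the ADC resolution and the two fit constants are assumptions that would have to be replaced by a calibration of the actual sensors.

#include <cstdint>

// Generic ADC-count-to-distance conversion for a Sharp-style analog
// IR ranger. All constants are assumptions to be replaced by values
// from a calibration of the sensors actually used on the trucks.
const float VREF    = 2.5f;    // ADC reference voltage [V] (assumed)
const float ADC_MAX = 4095.0f; // 12-bit converter (assumed)
const float FIT_A   = 10.0f;   // fit constant [V cm] (assumed)
const float FIT_B   = 0.5f;    // fit offset [cm] (assumed)

float adcToDistanceCm(uint16_t raw)
{
    float v = (raw / ADC_MAX) * VREF; // sample back to volts
    if (v < 0.05f)
        v = 0.05f;                    // guard against division by ~0
    return FIT_A / v - FIT_B;         // approx. inverse characteristic
}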
Appendix C
In this appendix we present photos of the ground and aerial vehicles, i.e. the trucks and the quadrotors, with special emphasis on the electronics¹ used.
Photos Ground Vehicles
Figure 3: Photo showing the Arduino with the Arduino-Mote Custom Serial Connection
Board.
¹ The electronics and hardware in general are described in Chapter 2.
Figure 4: Photo showing the IR Sensor Interface Board (middle) and the Actuator Interface Board (left), with their corresponding motes.
Figure 5: Photo showing all the connections of two trucks.
Photos Aerial Vehicles
Figure 6: Photo showing the actuator mote (top) and sensor mote (bottom), with their
corresponding interface boards.
Figure 7: Photo showing the Pololu and Arduino connections.
Figure 8: Photo showing the placement of the Ubisense Tag.
Photos Ground and Aerial Vehicles
Figure 9: Photo showing the collection of ground and aerial vehicles.