Visual Navigation
for Flying Robots
Probabilistic Models
and State Estimation
Dr. Jürgen Sturm
Scientific Research
Be creative & do research
Write and submit paper
Prepare talk/poster
Present work at conference
Flow of Publications
MSc. thesis → conference paper → journal article
PhD thesis (research) → several conference papers → journal articles
Typical journals: T-RO, IJCV, RAS, PAMI, …
Important Conferences
Robotics
International Conference on Robotics and Automation (ICRA), next week
International Conference on Intelligent Robots and Systems (IROS)
Robotics: Science and Systems (RSS)
Computer Vision
International Conference on Computer Vision (ICCV)
Computer Vision and Pattern Recognition (CVPR)
German Conference on Computer Vision (GCPR)
ICRA: Preview
Christian's work was nominated for the Best Vision Paper award
Four sessions (with 6 papers each) on flying robots
Interesting sessions:
Localization
Theory and Methods for SLAM
Visual Servoing
Sensor Fusion
Trajectory Planning for Aerial Robots
Novel Aerial Robots
Aerial Robots and Manipulation
Modeling & Control of Aerial Robots
Sensing for Navigation
…
Perception
Perception and models are strongly linked
Example: Human Perception
More optical illusions at http://michaelbach.de/ot/index.html
State Estimation
What parts of the world state are (most) relevant for a flying robot?
Position
Velocity
Obstacles
Map
Positions and intentions of other robots/humans
[Diagram: sense–plan–act loop. Sensing the physical world feeds perception, which together with the motion model updates the belief (state estimate); planning produces a plan whose execution acts back on the physical world.]
[Diagram: probabilistic state model. The observation function links the world state to the observations; the current state depends on the previous state and the executed action.]
Probabilistic Robotics
Sensor observations are noisy, partial,
potentially missing (why?)
All models are partially wrong and incomplete
(why?)
Usually we have prior knowledge (from
where?)
Probabilistic Robotics
Probabilistic sensor and motion models
Integrate information from multiple sensors
(multi-modal)
Integrate information over time (filtering)
Motivation
Bayesian Probability Theory
Bayes Filter
Normal Distribution
Kalman Filter
Axioms of Probability Theory
1. 0 ≤ P(A) ≤ 1
2. P(true) = 1, P(false) = 0
3. P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
Example
Continuous case
If X and Y are independent, then P(x, y) = P(x) P(y)
and P(x | y) = P(x)
Conditional Independence
Definition of conditional independence: P(x, y | z) = P(x | z) P(y | z)
Equivalent to P(x | z) = P(x | y, z) and P(y | z) = P(y | x, z)
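A small numeric sketch of conditional independence. The joint distribution below is constructed (with made-up numbers) so that x and y are conditionally independent given z; both forms of the definition are then checked:

```python
import numpy as np

# Toy distributions (values are illustrative assumptions)
p_z = np.array([0.3, 0.7])            # P(z)
p_x_given_z = np.array([[0.2, 0.8],   # P(x | z), rows indexed by z
                        [0.6, 0.4]])
p_y_given_z = np.array([[0.5, 0.5],   # P(y | z), rows indexed by z
                        [0.1, 0.9]])

# Build the joint: P(x, y, z) = P(x | z) P(y | z) P(z)
p_xyz = np.einsum('zx,zy,z->xyz', p_x_given_z, p_y_given_z, p_z)

# Check the definition: P(x, y | z) = P(x | z) P(y | z)
p_xy_given_z = p_xyz / p_z            # broadcasts over the z axis
for z in range(2):
    assert np.allclose(p_xy_given_z[:, :, z],
                       np.outer(p_x_given_z[z], p_y_given_z[z]))

# Check the equivalent form: P(x | y, z) = P(x | z)
p_x_given_yz = p_xyz / p_xyz.sum(axis=0)   # divide by P(y, z)
for y in range(2):
    for z in range(2):
        assert np.allclose(p_x_given_yz[:, y, z], p_x_given_z[z])
```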
Marginalization
Discrete case: P(x) = Σ_y P(x, y)
Continuous case: p(x) = ∫ p(x, y) dy
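The discrete case can be tried directly on a toy joint distribution (the table values are made up for illustration):

```python
import numpy as np

# Hypothetical joint distribution P(x, y): 3 values of x, 2 values of y
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.15],
                 [0.05, 0.20]])

# Marginalization over y: P(x) = sum_y P(x, y)
p_x = p_xy.sum(axis=1)      # P(x) = [0.30, 0.45, 0.25]

# Marginalizing over x instead gives P(y)
p_y = p_xy.sum(axis=0)
assert np.isclose(p_x.sum(), 1.0) and np.isclose(p_y.sum(), 1.0)
```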
Example: Marginalization
Continuous case
Bayes Rule
P(x | z) = P(z | x) P(x) / P(z)
P(x | z) is diagnostic
P(z | x) is causal
Often causal knowledge is easier to obtain
Bayes rule allows us to use causal knowledge: P(z | x) is the observation likelihood
Bayes Formula
Derivation of Bayes formula: P(x, y) = P(x | y) P(y) = P(y | x) P(x)
⇒ P(x | y) = P(y | x) P(x) / P(y)
Our usage: P(state | observation) ∝ P(observation | state) P(state)
Normalization
Direct computation of P(x | y) can be difficult
Idea: compute an improper (unnormalized) distribution, normalize afterwards
Step 1: aux(x) = P(y | x) P(x)
Step 2: η = 1 / Σ_x aux(x)   (law of total probability)
Step 3: P(x | y) = η aux(x)
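The three normalization steps map directly to code; the prior and likelihood values below are made up for illustration:

```python
import numpy as np

# Hypothetical prior over 4 discrete states and observation likelihood
prior = np.array([0.25, 0.25, 0.25, 0.25])       # P(x)
likelihood = np.array([0.8, 0.1, 0.05, 0.05])    # P(y | x)

# Step 1: improper (unnormalized) distribution aux(x) = P(y | x) P(x)
aux = likelihood * prior
# Step 2: normalizer via the law of total probability
eta = 1.0 / aux.sum()
# Step 3: posterior P(x | y) = eta * aux(x)
posterior = eta * aux
assert np.isclose(posterior.sum(), 1.0)
```

With a uniform prior the posterior is just the normalized likelihood, here [0.8, 0.1, 0.05, 0.05].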
Combining Evidence
Suppose our robot obtains another observation (either from the same or a different sensor)
How can we integrate this new information?
More generally, how can we estimate P(x | z_1, …, z_n)?
Bayes formula gives us:
P(x | z_1, …, z_n) = P(z_n | x, z_1, …, z_{n−1}) P(x | z_1, …, z_{n−1}) / P(z_n | z_1, …, z_{n−1})
Markov assumption: z_n is independent of z_1, …, z_{n−1} if we know x
This gives the recursive update
P(x | z_1, …, z_n) = η P(z_n | x) P(x | z_1, …, z_{n−1}) = η′ ∏_i P(z_i | x) P(x)
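A quick sketch of recursive Bayesian updating with several independent sensor readings; the two-state setup and likelihood values are illustrative assumptions. Processing the readings one at a time gives the same posterior as multiplying all likelihoods at once:

```python
import numpy as np

def bayes_update(belief, likelihood):
    """One recursive Bayes update: belief <- eta * P(z | x) * belief."""
    belief = likelihood * belief
    return belief / belief.sum()

# Hypothetical setup: 2 states, 3 independent sensor readings
prior = np.array([0.5, 0.5])
likelihoods = [np.array([0.7, 0.2]),
               np.array([0.6, 0.3]),
               np.array([0.8, 0.4])]

# Recursive: integrate one reading at a time
belief = prior.copy()
for lik in likelihoods:
    belief = bayes_update(belief, lik)

# Batch: product of all likelihoods at once, should match
batch = prior * np.prod(likelihoods, axis=0)
batch /= batch.sum()
assert np.allclose(belief, batch)
```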
Actions (Motions)
Often the world is dynamic, since
actions carried out by the robot,
actions carried out by other agents,
or just time passing by
change the world
Typical Actions
Quadrocopter accelerates by changing the speed of its motors
Position also changes when the quadrocopter does nothing (and drifts…)
Action Models
To incorporate the outcome of an action u into the current state estimate (belief), we use the conditional pdf p(x′ | u, x)
Example: Take-Off
Action: u = take-off
World state: x ∈ {ground, air}
p(air | take-off, ground) = 0.99, p(ground | take-off, ground) = 0.01
p(air | take-off, air) = 0.9, p(ground | take-off, air) = 0.1
Continuous case: bel′(x′) = ∫ p(x′ | u, x) bel(x) dx
Example: Take-Off
Prior belief on robot state: bel(ground) = 1 (robot is located on the ground)
Robot executes take-off action
What is the robot's belief after one time step?
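This prediction step can be computed in two lines. The pairing of the probabilities 0.99/0.01 and 0.9/0.1 to the ground and air states is read off the slide's numbers and is an assumption:

```python
import numpy as np

# States: index 0 = ground, 1 = air.
# Transition matrix for take-off, p(x' | take-off, x):
# columns indexed by previous state x, rows by next state x'
# (assignment of 0.9/0.1 to the air column is an assumption)
T = np.array([[0.01, 0.1],    # x' = ground
              [0.99, 0.9]])   # x' = air

bel = np.array([1.0, 0.0])    # prior: robot is on the ground
bel = T @ bel                 # one prediction step
# bel is now [0.01, 0.99]: almost certainly in the air
assert np.isclose(bel.sum(), 1.0)
```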
Markov Chain
A Markov chain is a stochastic process where, given the present state, the past and the future states are independent
Markov Assumption
Observations depend only on the current state
Current state depends only on the previous state and the current action
Underlying assumptions
Static world
Independent noise
Perfect model, no approximation errors
Bayes Filter
Given:
Stream of observations and actions: z_1, u_1, …, z_t, u_t
Sensor model P(z | x)
Action model P(x′ | u, x)
Prior probability of the system state P(x)
Wanted:
Estimate of the state x of the dynamic system
Posterior of the state is also called belief: Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
Bayes Filter
For each time step, do
1. Apply motion model: bel′(x_t) = Σ_{x_{t−1}} P(x_t | u_t, x_{t−1}) bel(x_{t−1})
2. Apply sensor model: bel(x_t) = η P(z_t | x_t) bel′(x_t)
Example: Localization
Discrete state
Belief distribution can be represented as a grid
This is also called a histogram filter
[Grid figure: initial belief with P = 1.0 in one cell and P = 0.0 everywhere else]
Example: Localization
Action
Robot can move one cell in each time step
Actions are not perfectly executed
Example: move east
Example: Localization
Binary observation
One (special) location has a marker
Marker is sometimes also detected in neighboring cells
Example: Localization
Let's start a simulation run (shades are hand-drawn, not exact!)
Example: Localization
t=0
Prior distribution (initial belief)
Assume we know the initial location (if not, we could initialize with a uniform prior)
Example: Localization
t=1, u=east, z=no-marker
Bayes filter step 1: Apply motion model
Example: Localization
t=1, u=east, z=no-marker
Bayes filter step 2: Apply observation model
Example: Localization
t=2, u=east, z=marker
Bayes filter step 1: Apply motion model
Example: Localization
t=2, u=east, z=marker
Bayes filter step 2: Apply observation model
Question: Where is the robot?
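The whole grid-localization run can be reproduced with a small histogram filter. The motion success probability (0.8) and the marker detection probabilities (0.9 at the marker, 0.3 in neighboring cells, 0.05 elsewhere) are illustrative assumptions, since the slides do not state exact values:

```python
import numpy as np

N = 10        # number of grid cells
MARKER = 3    # cell carrying the marker (assumed position)

def predict_move_east(bel, p_success=0.8):
    """Motion model: move one cell east with prob 0.8, else stay put."""
    new = (1 - p_success) * bel          # action fails: stay
    new[1:] += p_success * bel[:-1]      # action succeeds: shift east
    return new

def marker_likelihood(z_marker):
    """P(z | x): marker detected at its cell, sometimes in neighbors."""
    p_detect = np.full(N, 0.05)
    p_detect[MARKER] = 0.9
    for c in (MARKER - 1, MARKER + 1):
        if 0 <= c < N:
            p_detect[c] = 0.3
    return p_detect if z_marker else 1 - p_detect

def correct(bel, z_marker):
    bel = bel * marker_likelihood(z_marker)
    return bel / bel.sum()

bel = np.zeros(N)
bel[0] = 1.0                             # known start at cell 0
for z in (False, False, True):           # three east moves, marker seen last
    bel = correct(predict_move_east(bel), z)
print(np.argmax(bel))                    # most likely cell
```

After two no-marker readings and one marker detection, the belief concentrates at the marker cell.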
Common implementations of the Bayes filter:
Kalman filter
Particle filter
Hidden Markov models
Dynamic Bayesian networks
Partially observable Markov decision processes (POMDPs)
Kalman Filter
Normal Distribution
Univariate normal distribution: p(x) = 1/√(2πσ²) · exp(−(x − μ)² / (2σ²)), written x ~ N(μ, σ²)
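The univariate density is easy to evaluate numerically; as a sanity check, its Riemann sum over a wide grid is close to one (μ and σ chosen arbitrarily):

```python
import numpy as np

mu, sigma = 2.0, 0.5
xs = np.linspace(mu - 6 * sigma, mu + 6 * sigma, 2001)
# p(x) = 1/sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))
pdf = np.exp(-(xs - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
dx = xs[1] - xs[0]
total = (pdf * dx).sum()     # numeric integral, approximately 1.0
```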
Normal Distribution
Multivariate normal distribution: p(x) = det(2πΣ)^(−1/2) exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)), written x ~ N(μ, Σ)
[Figure: iso lines of a 2D Gaussian]
Linear Process Model
Assume the state evolves linearly in the previous state and the executed action, perturbed by zero-mean, normally distributed process noise:
x_t = A_t x_{t−1} + B_t u_t + ε_t  with  ε_t ~ N(0, Q_t)
Linear Observations
Further, assume we make observations that depend linearly on the state and that are perturbed by zero-mean, normally distributed observation noise:
z_t = C_t x_t + δ_t  with  δ_t ~ N(0, R_t)
Kalman Filter
Estimates the state x_t of a discrete-time controlled process governed by:
Process equation: x_t = A_t x_{t−1} + B_t u_t + ε_t with ε_t ~ N(0, Q_t)
Measurement equation: z_t = C_t x_t + δ_t with δ_t ~ N(0, R_t)
State x_t, controls u_t, observations z_t
Kalman Filter
Initial belief is Gaussian: bel(x_0) = N(x_0; μ_0, Σ_0)
Kalman Filter
For each time step, do
1. Apply motion model:
μ̄_t = A_t μ_{t−1} + B_t u_t
Σ̄_t = A_t Σ_{t−1} A_tᵀ + Q_t
Kalman Filter
Highly efficient: polynomial in the measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2)
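The predict and correct steps can be sketched for a 1D constant-velocity model. All matrices and noise levels here are illustrative assumptions, not values from the lecture:

```python
import numpy as np

dt = 0.1
A = np.array([[1, dt], [0, 1]])      # process: position += velocity * dt
B = np.array([[0.0], [dt]])          # control: acceleration input
C = np.array([[1.0, 0.0]])           # we observe position only
Q = 0.01 * np.eye(2)                 # process noise covariance (assumed)
R = np.array([[0.25]])               # observation noise covariance (assumed)

def kf_step(mu, Sigma, u, z):
    # 1. Apply motion model
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # 2. Apply sensor model
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + R)
    mu = mu_bar + K @ (z - C @ mu_bar)
    Sigma = (np.eye(2) - K @ C) @ Sigma_bar
    return mu, Sigma

rng = np.random.default_rng(0)
x = np.array([[0.0], [1.0]])         # true state: position 0, velocity 1
mu = np.array([[0.0], [0.0]])        # initial belief: mean ...
Sigma = np.eye(2)                    # ... and covariance
for _ in range(50):
    x = A @ x                                  # true motion (no control)
    z = C @ x + rng.normal(0, 0.5, (1, 1))     # noisy position measurement
    mu, Sigma = kf_step(mu, Sigma, np.zeros((1, 1)), z)
print(mu[1, 0])                      # velocity estimate approaches 1
```

Even though only the position is measured, the filter recovers the velocity through the process model.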
Nonlinear Dynamical Systems
Most realistic problems involve nonlinear motion and observation functions:
x_t = g(u_t, x_{t−1}) + ε_t
z_t = h(x_t) + δ_t
Taylor Expansion
Solution: Linearize both functions around the current estimate
Motion function: g(u_t, x_{t−1}) ≈ g(u_t, μ_{t−1}) + G_t (x_{t−1} − μ_{t−1})
Observation function: h(x_t) ≈ h(μ̄_t) + H_t (x_t − μ̄_t)
where G_t and H_t are the Jacobians of g and h
2. Apply sensor model:
μ_t = μ̄_t + K_t (z_t − h(μ̄_t))
Σ_t = (I − K_t H_t) Σ̄_t
with Kalman gain K_t = Σ̄_t H_tᵀ (H_t Σ̄_t H_tᵀ + R_t)⁻¹
Example
2D case
State x = (x, y, yaw)
Odometry
Observations of visual marker (relative to robot pose)
Fixed time intervals
Example
Motion function
Example
Observation function (→ Sheet 2)
Example
Dead reckoning (no observations)
Large process noise in x+y
Example
Dead reckoning (no observations)
Large process noise in x+y+yaw
Example
Now with observations (limited visibility)
Assume robot knows correct starting pose
Example
What if the initial pose (x+y) is wrong?
Example
What if the initial pose (x+y+yaw) is wrong?
Example
If we are aware of a bad initial guess, we set the initial sigma to a large value (large uncertainty)