
HTC Vive: Analysis and Accuracy Improvement

Miguel Borges⋆, Andrew Symington†, Brian Coltin†, Trey Smith‡, Rodrigo Ventura⋆

Abstract— HTC Vive has been gaining attention as a cost-effective, off-the-shelf tracking system for collecting ground truth pose data. We assess this system's pose estimation through a series of controlled experiments, showing its precision to be on the order of millimeters and its accuracy to range from millimeters to a meter. We also show that Vive gives greater weight to inertial measurements in order to produce a smooth trajectory for virtual reality applications. Hence, Vive's off-the-shelf algorithm is poorly suited for robotics applications such as measuring ground truth poses, where accuracy and repeatability are key. We therefore introduce a new open-source tracking algorithm and calibration procedure for Vive which address these problems, and we show that our approach improves pose estimation repeatability and accuracy by up to two orders of magnitude.

I. INTRODUCTION
The HTC Vive is a consumer headset and accompanying
motion capture system designed for virtual reality (VR)
applications [1]. Motion capture describes the process of
estimating absolute position and orientation — or pose — in real-time, and has many applications in film, medicine, engineering [2], and notably robotics.

The Vive system is comprised of lighthouses, which emit synchronized light sweeps, and trackers, which use photodiodes to measure light pulse timings as a proxy for estimating the horizontal and vertical angles to the lighthouses. The trackers fuse the angle measurements from a bundle of rigidly mounted photodiodes to estimate the pose, using a technique similar to Angle-of-Arrival [3]. The tracker also has access to motion data from an incorporated Inertial Measurement Unit (IMU) to maintain a smooth and continuous trajectory.

The Vive system provides a compelling means of obtaining ground truth data for roboticists: it is more affordable than competing technologies, it is straightforward to set up and use, and a Robot Operating System (ROS) driver already exists for integration with an ecosystem of robotic tools.

For the reasons above, Vive was chosen as a source of ground truth for testing the Astrobee robots (Fig. 1). Astrobees [4] are free-flying robots that will be deployed to the International Space Station in 2018 to be used as a research platform for free-flying robots¹. Ideally, the system should exhibit error in the millimeter range in order to benchmark Astrobee's localization algorithms [5], to test Astrobee's robotic arm, and to improve on the available tracking systems (VisualEyez with an in-house developed pose estimation algorithm, and QR codes with an overhead camera).

⋆ Institute for Systems and Robotics - Lisboa, Instituto Superior Técnico, Universidade de Lisboa
† SGT Inc., NASA Ames Research Center
‡ NASA Ames Research Center
miguel.r.borges@tecnico.ulisboa.pt
¹ Astrobee flight software is available open source at https://github.com/nasa/astrobee

Fig. 1. Astrobee (1) shown on a level granite surface (2), which is used to simulate a 2D microgravity environment. The prototype has trackers mounted to its port and starboard sides (3), and a single lighthouse (4) mounted overhead.

The first contribution of this paper is an analysis of Vive's static precision and dynamic precision and accuracy. We show that although the original system has sub-millimeter precision with the trackers in a static state, the precision worsens by one order of magnitude when the trackers are in motion. We also show experimentally that the accuracy of this system can vary from a few millimeters up to a meter in a dynamic situation.

We attribute the high error to the closed-source fusion algorithm giving higher weight to the inertial measurements, thus minimizing jitter for the VR user. Motivated by this, and by not having access to the source code of Vive's algorithms, the second contribution of this paper is a set of algorithms for Vive that improve on accuracy and stability while providing an open-source platform that is easy for the user to change. These algorithms are used to compute the trackers' poses and for the calibration procedure that relates the lighthouses to the user's workspace. We show that our tracking methods, although less smooth, are able to outperform Vive's built-in algorithms in accuracy by up to two orders of magnitude.
II. MOTION CAPTURE

Vive is one of many motion capture systems available on the market. Examples of other systems include VICON, OptiTrack and VisualEyez. VICON and OptiTrack use cameras to track reflective markers illuminated by infrared light sources. VICON quotes accuracy levels of up to 76 µm and precision (noise) of 15 µm in a four-camera configuration [6]. OptiTrack claims that its system can achieve accuracy levels of less than 0.3 mm for robotic tracking systems. VisualEyez uses a three-camera system to track active LED markers, and is reported to have millimeter-level precision [7]. The key issue with these tracking systems is that they are prohibitively expensive for general use.

Reliable motion capture is an essential component of an immersive VR experience. As the technology grows in popularity, the cost of equipment falls. We are now at a point where off-the-shelf VR devices provide a feasible alternative for motion capture in the context of robotics. Examples of VR systems that offer motion capture include Oculus Rift [8] and HTC Vive.

HTC Vive's pose estimation has a working principle similar to the Angle-of-Arrival (AoA) localization techniques [9] used in Wireless Sensor Networks (WSN). AoA-based localization resorts to arrays of antennas to estimate the angle of the received signal; from this interaction between multiple nodes, it is possible to estimate their locations. Vive's trackers, however, estimate the angle to the lighthouse through a time difference, as explained in the next section.

In [1], the authors evaluate Vive's accuracy, but they focus on research with the headset. They do not mention how the trackers and controllers behave, and these two devices are the more appealing ones for roboticists. It is therefore unclear how Vive behaves as a ground truth tracking system for robotic applications.

III. PROBLEM DESCRIPTION

We intend to use Vive as a means of obtaining Astrobee's ground truth. Our desired accuracy is one order of magnitude better than the robot's current localization algorithm, which is expected to have an accuracy in the centimeter range.

As a tracking system, Vive's desired outputs are the poses of the trackers. In order to compute these, the system has as inputs inertial measurements (from the IMU in each tracker) and light data (from the photodiodes in the trackers).

The light data may be of two types — a synchronization flash or an infrared sweeping plane (red line in Fig. 2). Both of these light signals are emitted by the lighthouse (the fixed base station). Every cycle starts with the synchronization pulse, modulated with low-rate metadata containing calibration parameters for correcting the sweeping plane, followed by an infrared rotating planar laser. The tracker's photodiodes detect both of these signals and are able to estimate the angle (α in Fig. 2) between the lighthouse's normal vector and the photodiode from the time difference, because the laser rotates at a constant angular velocity. This cycle happens for a horizontally and a vertically rotating laser. From both of these angles, Vive can estimate the tracker's absolute pose.

Fig. 2. Side view of HTC Vive working principle, with α being the angle between a sweeping plane and the normal vector to the lighthouse.
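Since the laser rotates at a constant angular velocity, the angle α can be recovered from the time elapsed between the synchronization flash and the instant the sweeping plane hits a photodiode. The following is a minimal sketch of this conversion; the 60 Hz sweep rate and the convention that the plane crosses the lighthouse's normal exactly half a rotation period after the flash are assumptions made here for illustration, not values taken from this paper.

import numpy as np

SWEEP_RATE_HZ = 60.0                    # assumed nominal rotation rate of each sweeping laser
OMEGA = 2.0 * np.pi * SWEEP_RATE_HZ     # angular velocity of the sweep [rad/s]

def pulse_timing_to_angle(t_sync: float, t_hit: float) -> float:
    """Convert the delay between the sync flash and the sweep hit into an angle [rad].

    Assumes the sweeping plane is aligned with the lighthouse's normal vector
    exactly half a rotation period after the synchronization flash, so a hit at
    that instant corresponds to alpha = 0.
    """
    dt = t_hit - t_sync                 # time since the synchronization flash [s]
    half_period = 0.5 / SWEEP_RATE_HZ   # assumed zero-angle crossing time
    return OMEGA * (dt - half_period)

# Example: a hit 8.7 ms after the sync flash maps to roughly +0.14 rad (about 8 degrees).
alpha = pulse_timing_to_angle(t_sync=0.0, t_hit=0.0087)
print(np.degrees(alpha))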
The inertial data is composed of linear accelerations and angular velocities. However, since these come from a consumer-grade IMU, the measurements are very noisy.

There are multiple frames associated with this problem. Both the lighthouse and the tracker have their own frames, l and t respectively. For clarity, the lighthouse is represented as li instead of l in Fig. 3, where i is its index. An auxiliary frame vive, v, is selected to always be coincident with one of the lighthouses' frames — chosen during the calibration procedure. This procedure allows the system to relate the poses of the lighthouses, in case multiple lighthouses are being used. It also allows the user to choose the world frame w. The final output of Vive is a rigid-body transform between a tracker frame and the world frame, represented by a red arrow in Fig. 3.

Fig. 3. Frames involved in Vive's pose estimation.

We use the standard notation for transforms from [10], where {}^{a}T_{b} is a rigid-body transform from frame b to a (or the pose of b in frame a). We will also refer to {}^{a}P_{b} and {}^{a}\tilde{P}_{b}, which are the position of frame b relative to a in Cartesian and homogeneous coordinates, respectively.

IV. VIVE ANALYSIS

In order to evaluate the performance of Vive as a ground truth tracking system, we performed two sets of tests in which we measured the system's precision and accuracy. For these experiments, two lighthouses were attached to the top of consecutive walls of a two-meter cubic workspace with a level granite ground surface. The lighthouses were facing 45° downwards in order to maximize our working volume.

A. Stationary Performance

First, we assess how precise the system is in a static configuration. We placed a tracker on a still surface and recorded the returned poses. The recorded position standard deviation is always less than 0.5 mm, as we show in Table I of section VI, meaning that Vive achieves sub-millimeter precision in a static configuration. We will see, however, that the performance degrades with the trackers in motion.

B. Dynamic Performance

In order to evaluate the performance of the system in the aforementioned state, we took advantage of the fact that Astrobee sits in a support that floats on a level granite surface using compressed air thrusters [11]. While the robot floats across the granite surface, the height of any of its parts is constant, because the surface has been precisely machined to be level. This surface is used to simulate a confined 2D zero-gravity environment. As we show in Table II in section VI, we evaluate the deviation between the tracker's sample points and a plane fit to its trajectory. We also provide an orientation estimation evaluation in the same table.

The results show how unstable the system is and how the accuracy can vary from 1 mm to 43 mm, and even 802 mm in the worst case. In order to use Vive to benchmark Astrobee's localization algorithms and also to evaluate its robotic arm's grasp, our desired accuracy is in the millimeter range.

We suspect that, in order to improve the experience for the VR user, this system gives the inertial data a high weight so as to reduce jitter. This pushed us towards developing our own algorithms, tuned for robotic applications.

V. TRACKING ALGORITHMS

The new software platform, shown in Fig. 4, is composed of two main ROS nodes, the Bridge and the Server. The Bridge uses deepdive² to pull light and IMU data from the tracker through a USB connection and then sends it to the Server. The Server then passes that data to the Calibrator or to the APE (Absolute Pose Estimator), depending on its current state. These states can be Calibrating — determining the relative rigid-body transforms between the lighthouses and the world frame using the Calibrator — or Tracking — real-time pose solving using the APE.

Fig. 4. Diagram of the system.

² deepdive is available open source at https://github.com/asymingt/deepdive

A. Pose Tracking

The APE is our algorithm that estimates the full pose of a tracker in real-time, using only light data. It finds the best fit between a pose and the available data with a non-linear least-squares method that incorporates a model of the system. Inertial data is also recorded; however, we did not use it, because we are first looking for a satisfactory result. The correction parameters modulated in the synchronization pulses are not used either, for the same reason as the inertial data. Including the IMU data and the sweeping laser correction parameters in the algorithm is therefore left for future work.

The model of this system is a function that returns a horizontal and a vertical angle based on the relative position between a tracker's photodiode and the lighthouse. This position is obtained through a series of rigid-body frame transforms that convert the photodiode's coordinates from the tracker's frame to the lighthouse's:

{}^{l}\tilde{P}_{p} = {}^{l}T_{v} \, {}^{v}T_{t} \, {}^{t}\tilde{P}_{p}    (1)

where p is the photodiode, l the lighthouse, v the vive frame and t the tracker. The position of the photodiode {}^{t}\tilde{P}_{p} is already known from the start due to a factory calibration.

In order to obtain the relative horizontal and vertical angles between a photodiode and the lighthouse's normal vector, we have to use the photodiode's three coordinates separately, as in (2). The {}^{l}P_{p}^{x}, {}^{l}P_{p}^{y} and {}^{l}P_{p}^{z} terms are the x, y and z coordinates of {}^{l}P_{p}; the top expression is for the horizontal axis and the bottom one for the vertical axis. This expression is a formalization of what was explained in the previous section with Fig. 2.

h({}^{l}P_{p}) = \begin{bmatrix} \arctan\!\left({}^{l}P_{p}^{y} / {}^{l}P_{p}^{z}\right) \\ \arctan\!\left({}^{l}P_{p}^{x} / {}^{l}P_{p}^{z}\right) \end{bmatrix}    (2)

In order to compute the pose of the tracker, we use a sum of squared differences between the photodiodes' recorded angles (α_p) and the estimated angles. This is a non-convex optimization problem; however, using an optimizer³, we are able to get results quickly enough to achieve real-time tracking. The cost function is the following:

f_{APE} = \sum_{l=1}^{M} \sum_{p=1}^{N} \left[ h_{p,l}({}^{v}T_{t}) - \alpha_{p} \right]^{2}    (3)

where h_{p,l}(\cdot) is function (2) with (1) as the input argument, after converting it to Cartesian coordinates.

³ Ceres-Solver is available at http://ceres-solver.org
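To make the estimation step concrete, the following sketch implements the measurement model (1)-(2) and minimizes the cost (3) for a single tracker pose with an off-the-shelf nonlinear least-squares solver. It is a minimal illustration in Python/SciPy rather than the Ceres-based implementation used by the system; the helper names, the pose parameterization (translation plus axis-angle rotation), the data layout and the warm start are illustrative choices, not details taken from this paper.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def predicted_angles(T_lv, pose_vt, diode_t):
    """Predict the (horizontal, vertical) sweep angles for one photodiode.

    T_lv    : 4x4 transform from the vive frame to the lighthouse frame (lTv).
    pose_vt : tracker pose in the vive frame as [x, y, z, rx, ry, rz]
              (translation plus axis-angle rotation), i.e. vTt.
    diode_t : photodiode position in the tracker frame (factory calibration).
    """
    T_vt = np.eye(4)
    T_vt[:3, :3] = Rotation.from_rotvec(pose_vt[3:]).as_matrix()
    T_vt[:3, 3] = pose_vt[:3]
    p_l = T_lv @ T_vt @ np.append(diode_t, 1.0)             # equation (1)
    x, y, z = p_l[:3]
    # arctan2 is used instead of the plain arctan of (2) for numerical robustness.
    return np.array([np.arctan2(y, z), np.arctan2(x, z)])   # equation (2)

def residuals(pose_vt, lighthouses, observations):
    """Stack h_{p,l}(vTt) - alpha_p over all lighthouses and photodiodes, as in (3)."""
    res = []
    for T_lv, obs in zip(lighthouses, observations):
        for diode_t, alpha in obs:                           # alpha = measured (horiz, vert)
            res.extend(predicted_angles(T_lv, pose_vt, diode_t) - alpha)
    return np.asarray(res)

def solve_pose(lighthouses, observations, pose_prev):
    """Warm-start the solver at the last computed pose for real-time tracking."""
    sol = least_squares(residuals, pose_prev, args=(lighthouses, observations))
    return sol.x

The warm start in solve_pose mirrors the initialization strategy discussed in the next paragraphs; in the actual system the same residual structure is expressed as Ceres cost functions.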
Our cost function uses the data from all the lighthouses at the same time in order to increase the stability of the solution; however, for the horizontal axis we have to negate the recorded angles (-\alpha_{p}^{horizontal}) due to the rotation direction of the lighthouse's laser.

Our algorithm also takes advantage of Vive's high sampling rate by initializing the optimizer at the last computed pose, as a means of making the estimation process faster. For the first estimation done by the algorithm, we use an arbitrary starting pose in front of one of the lighthouses, to make sure it does not converge to a pose behind it.

In order to prevent outliers and old data from influencing the estimation, we included the following restrictions: all measured angles with magnitude greater than 60 degrees are rejected; there must be at least 4 measured angles from the most recently detected lighthouse; and samples older than 50 ms are discarded if they are not from the most recently detected lighthouse. If these conditions are not met, the APE skips the estimation of this pose.

The cost function (3) is fairly complex and the optimizer may sporadically diverge or converge to a local minimum. In order to prevent wrong estimations, we included one more verification before providing the solution to the user: the algorithm checks the final cost and, if it is bigger than a threshold linearly related to the number of observed angles, it rejects the pose and waits for new data.
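The gating rules above can be captured in a few lines. The sketch below is a hypothetical helper illustrating the three sample restrictions and the cost threshold; the slope and offset of the linear threshold, and the sample representation, are placeholders, since their actual values and types are not given in the paper.

import numpy as np

MAX_ANGLE = np.radians(60.0)   # reject sweep angles with magnitude above 60 degrees
MAX_AGE = 0.050                # samples older than 50 ms from other lighthouses are dropped
MIN_ANGLES = 4                 # minimum angles required from the latest lighthouse

# Placeholder threshold parameters: the bound grows linearly with the number of angles.
COST_PER_ANGLE = 1e-4
COST_OFFSET = 0.0

def filter_samples(samples, now, latest_lighthouse):
    """Drop out-of-range angles and stale data from non-recent lighthouses."""
    kept = []
    for s in samples:  # s = dict with 'angle', 'stamp', 'lighthouse'
        if abs(s['angle']) > MAX_ANGLE:
            continue
        if s['lighthouse'] != latest_lighthouse and now - s['stamp'] > MAX_AGE:
            continue
        kept.append(s)
    return kept

def pose_is_valid(samples, final_cost, latest_lighthouse):
    """Check the minimum-angle rule and the cost threshold before publishing a pose."""
    n_latest = sum(s['lighthouse'] == latest_lighthouse for s in samples)
    if n_latest < MIN_ANGLES:
        return False
    return final_cost <= COST_OFFSET + COST_PER_ANGLE * len(samples)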
These constraints improve the APE's stability, but they also lead to ignored poses (loss of tracking) at the edges of the workspace, where most of the photodiodes are occluded from the lighthouse.

All the poses are estimated by the algorithm in the vive frame instead of the world frame. The vive frame is an auxiliary frame between the lighthouses' frames and the world frame. The poses in the world frame are computed through ROS, because the world frame is determined prior to the pose estimation, as mentioned in subsection V-B.

B. Calibration Procedure

When the Vive system is installed, the lighthouses are individually mounted wherever it is convenient for the user, so the registration from lighthouse to lighthouse and from lighthouse to the world frame of interest is initially unknown. We therefore created a procedure that addresses this issue. Our calibration procedure consists of a concatenation of rigid-body transforms (4) that leads to the relative poses of the lighthouses. It assumes that the trackers are static, leading to a more accurate process.

{}^{w}T_{l} = {}^{w}T_{b} \, {}^{b}T_{t} \, {}^{t}T_{l}    (4)

For Astrobee tracking, as in many robotic applications, we have multiple trackers rigidly mounted to the robot chassis, pointing in different directions, to provide improved tracking coverage. We use the combined tracker information to estimate the position of the robot's body frame (b). The user should specify the mounting geometry of the trackers on the robot as body-to-tracker relative poses {}^{b}T_{t}. The user also registers the world frame by taking Vive measurements with the robot body frame fixed at a known pose {}^{w}T_{b}.

In the time interval between the start and end of the data acquisition, our calibrator records light data at 30 or 60 Hz (depending on the lighthouse's mode). After completing the data acquisition, it starts by computing an initial estimate of the relative pose between the trackers and the lighthouse, using the following cost function:

\hat{f}_{Cal} = \sum_{p=1}^{N} \left[ h_{p}({}^{l}T_{t}) - \alpha_{p} \right]^{2}    (5)

where h_{p} is function (2) using as its input:

{}^{l}\tilde{P}_{p} = {}^{l}T_{t} \, {}^{t}\tilde{P}_{p}    (6)

This estimate is used to initialize the final cost function, where we compute the pose of each lighthouse in the vive frame simultaneously, this time with all the trackers at the same time, as in:

f_{Cal} = \sum_{t=1}^{K} \sum_{l=1}^{M} \sum_{p=1}^{N} \left[ h_{p,l,t}({}^{w}T_{l}) - \alpha_{p} \right]^{2}    (7)

where the function h_{p,l,t} is similar to (2); however, the input is {}^{w}T_{l}, which can be obtained from the rigid-body transform {}^{l}T_{w}. To obtain the input of the original function, we resort to the following expression:

{}^{l}\tilde{P}_{p} = {}^{l}T_{w} \, {}^{w}T_{t} \, {}^{t}\tilde{P}_{p}    (8)

After all the lighthouses' poses have been computed, the procedure chooses the vive frame to be one of the lighthouses' frames and converts the lighthouses' poses to this new auxiliary frame. We decided to use this frame in order to preserve the frame hierarchy of the original ROS driver.
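A minimal sketch of the transform concatenation in (4) is shown below, using 4x4 homogeneous matrices. The helper name and the numerical values are hypothetical; in practice {}^{w}T_{b} comes from the user-registered body pose, {}^{b}T_{t} from the specified mounting geometry, and {}^{t}T_{l} from the per-tracker estimate obtained with (5).

import numpy as np
from scipy.spatial.transform import Rotation

def make_transform(rotvec, translation):
    """Build a 4x4 rigid-body transform from an axis-angle rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rotvec).as_matrix()
    T[:3, 3] = translation
    return T

# Hypothetical inputs for one tracker/lighthouse pair.
T_wb = make_transform([0.0, 0.0, 0.0], [1.0, 2.0, 0.3])        # known body pose in the world (wTb)
T_bt = make_transform([0.0, 0.0, np.pi / 2], [0.1, 0.0, 0.2])  # tracker mounted on the chassis (bTt)
T_tl = make_transform([0.2, -0.1, 0.0], [0.5, 1.5, 2.0])       # lighthouse seen from the tracker (tTl)

# Equation (4): lighthouse pose in the world frame.
T_wl = T_wb @ T_bt @ T_tl

# With several trackers, each chain gives an estimate of wTl; these estimates seed the
# joint refinement of (7) before the vive frame is pinned to one of the lighthouses.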
VI. RESULTS

In order to evaluate Vive's performance, we designed two experiments in which we assess the system in different situations. Since we do not have access to the input data of Vive's baseline algorithm (from now on referred to as baseline), in order to compare it with our proposed algorithm (from now on referred to as proposed) we have to use different datasets collected in similar conditions. We used, however, different lighthouse configurations for each dataset. We used two set-ups (shown in Fig. 5): for the baseline algorithm, we used the adjacent-walls configuration, which provided a position standard deviation of 5 mm against 93 mm, and a maximum deviation of 28 mm against 6271 mm, of the other configuration; the proposed algorithm performed well for the first lighthouse set-up (lighthouses on opposing walls), eliminating the need to reconfigure their locations.

Fig. 5. Configuration of the lighthouses in the granite surface's workspace.

A. Static State Results

We started with a stationary state comparison between algorithms, as described in section IV, where we evaluate the estimated pose's standard deviation (the maximum standard deviation of the 3D position, and the standard deviation of the angle resorting to an axis-angle representation of the orientation). In this experiment, the tracker was static and at a distance of 1-2 m from the lighthouses. Table I contains the results we obtained for each of the approximately 30 s datasets.

TABLE I
STANDARD DEVIATION OF THE POSE IN A STATIC STATE

Algorithm   Dataset   σ Position [mm]   σ Orientation [°]
Baseline    s1        0.417             0.00300
            s2        0.151             0.00586
            s3        0.260             0.000476
            s4        0.214             0.0023
            s5        0.168             0.00687
Proposed    s6        4.960             0.010
            s7        0.0875            0.000216
            s8        1.149             0.000476
            s9        10.052            0.0447
            s10       0.851             0.0030

Comparing the average position standard deviation of Vive's built-in algorithm (0.242 mm) and of our algorithm (3.419 mm), we can clearly conclude that the former outperforms the latter in a stationary experiment. These results are explained by the fact that our algorithm does not use the correlation between consecutive poses; the inertial measurements also help the baseline algorithm with this correlation. However, this only holds for a static situation, and the trackers will not be in a constant pose while tracking the robot.
the robot. d1 3 (t) 0.01 0.26 0.01
d2 1 (s) 0.80 58.31 0.11
Baseline
B. Dynamic State Results d2 2 (p) 0.11 4.7 0.08
d3 1 (s) 0.01 0.26 0.01
The second experiment consists of tracking with the same d3 2 (p) 0.09 4.24 0.02
algorithms but with the trackers in motion, as described in d4 1 (t) 0.04 0.64 0.01
d4 2 (t) 0.04 0.69 0.01
section IV. As Astrobee floats in a perfectly flat surface, it’s
d5 3 (t) 0.11 2.14 0.06
trajectory should be a perfect plane. d6 3 (t) 0.36 4.98 0.17
Proposed
d7 1 (s) 1.05 11.63 0.32
d7 2 (p) 0.29 4.89 0.13

Fig. 6. XY view of the path generated with tracker 3 from dataset d5.

TABLE II
DEVIATION FROM THE FITTED PLANE

Position deviation
Algorithm   Dataset   Tracker   σ [mm]    max [mm]   d̄ [mm]
Baseline    d1        1 (s)     1.08      2.36       0.90
            d1        2 (p)     1.51      6.77       2.02
            d1        3 (t)     0.74      2.96       0.93
            d2        1 (s)     33.44     802.57     43.25
            d2        2 (p)     7.73      74.51      8.63
            d3        1 (s)     0.76      3.32       1.12
            d3        2 (p)     2.24      28.79      3.39
            d4        1 (t)     71.721    270.628    150.371
            d4        2 (t)     26.629    106.627    48.589
Proposed    d5        3 (t)     1.14      5.05       0.90
            d6        3 (t)     0.39      5.21       0.39
            d7        1 (s)     2.11      22.80      2.94
            d7        2 (p)     1.09      12.40      1.07

Orientation deviation
Algorithm   Dataset   Tracker   σ [°]     max [°]    d̄ [°]
Baseline    d1        1 (s)     0.02      0.50       0.01
            d1        2 (p)     0.02      0.47       0.02
            d1        3 (t)     0.01      0.26       0.01
            d2        1 (s)     0.80      58.31      0.11
            d2        2 (p)     0.11      4.7        0.08
            d3        1 (s)     0.01      0.26       0.01
            d3        2 (p)     0.09      4.24       0.02
            d4        1 (t)     0.04      0.64       0.01
            d4        2 (t)     0.04      0.69       0.01
Proposed    d5        3 (t)     0.11      2.14       0.06
            d6        3 (t)     0.36      4.98       0.17
            d7        1 (s)     1.05      11.63      0.32
            d7        2 (p)     0.29      4.89       0.13

The obtained plane distances show that the baseline algorithm can be unstable when compared to our algorithm. In the position's worst case, the baseline is outperformed by the proposed algorithm in precision by a factor of 15, and in accuracy by a factor of at least 36 (and up to 50). Both algorithms, however, have similar results in the best case. In the orientation evaluation, the baseline algorithm outperforms the proposed one because it makes use of the gravitational acceleration. The tests we present only address two degrees of freedom; nevertheless, the orthogonality between the lighthouses' fields of view leads to similar results for the other degrees.

Figs. 7 and 8 show the evolution of the distance to the fitted plane for the baseline algorithm (dataset d3) and for the proposed algorithm (dataset d5); although dataset d5 is longer, d3 has more samples due to incorporating IMU measurements.
Fig. 7. Height of tracker 2 in the path generated with the baseline algorithm, for dataset d3.

Analyzing the distance to the best-fit plane, represented in Fig. 8, we can see that although it is smooth throughout most of the samples, it has some spikes. We suspect that these spikes might be related to rogue reflections perturbing the data or to a sudden loss of measurements.

Fig. 8. Height of tracker 3 in the path generated with the proposed algorithm, for dataset d5.

Examining Fig. 8, we also notice some noise that resembles a step, which can be explained by the movement of the tracker changing the set of photodiodes that detect the infrared laser. We further suspect that the predominant offset in the same figure is related to not including the error parameters broadcast by the lighthouses, mentioned in section III. The inclusion of these parameters will be part of our future work.

VII. OUTDOOR EXPERIMENTS

We also tried our algorithms and platform in an outdoor environment. This test was similar to the one in VI-A, but this time the lighthouses were further from each other (the distance was around 5 m). The sun's radiation interfered severely with the synchronization between the lighthouses, but this was easily solved with a synchronization cable. Nevertheless, for two trackers, the worst standard deviation for the position was 13.5 mm and for the orientation was 0.0193°. These results show that both the system and the algorithms have potential for outdoor environments.

VIII. CONCLUSIONS

HTC Vive is an affordable solution for localization problems, in indoor and outdoor environments, with great accuracy. We demonstrated that it has sub-millimetric precision in a static state. The inertial measurements make the baseline algorithm behave differently from the proposed one; this can be observed in Figs. 7 and 8, with the evolution of the plane's distance when the tracker is static. We also provide evidence from tests performed in a controlled environment that the accuracy of Vive with the trackers in motion can range from millimetric up to metric.

We have also contributed a tracking algorithm, a calibration procedure and accompanying open-source software⁴ that, besides providing a completely transparent experience for the user, also grants accuracy as a means of obtaining ground truth localization data.

We intend to make our algorithms even easier to use, and we also plan to experiment with other pose estimators [12] that use the IMU data and to include the lighthouses' error parameters in the pose estimator.

⁴ Our algorithms and platform are available open source at https://github.com/nasa/astrobee together with Astrobee's source code.

ACKNOWLEDGEMENT

We would like to thank the Astrobee engineering team and the NASA Human Exploration Telerobotics 2 project for supporting this work. The NASA Game Changing Development Program (Space Technology Mission Directorate), the ISS SPHERES Facility (Human Exploration and Operations Mission Directorate) and Fundação para a Ciência e a Tecnologia (project [UID/EEA/50009/2013] and grant [SFRH/BI/135041/2017]) supported this work.

REFERENCES

[1] D. Niehorster, L. Li, and M. Lappe, "The Accuracy and Precision of Position and Orientation Tracking in the HTC Vive Virtual Reality System for Scientific Research," i-Perception, vol. 8, no. 3, 2017.
[2] F. King, J. Jayender, S. Bhagavatula, P. Shyn, S. Pieper, T. Kapur, A. Lasso, and G. Fichtinger, "An Immersive Virtual Reality Environment for Diagnostic Imaging," Journal of Medical Robotics Research, vol. 01, no. 01, p. 1640003, 2016.
[3] P. Kulakowski, J. Vales-Alonso, E. Egea-López, and W. Ludwin, "Angle-of-arrival localization based on antenna arrays for wireless sensor networks," Computers & Electrical Engineering, vol. 36, no. 6, pp. 1181–1186, 2010.
[4] M. Bualat, J. Barlow, T. Fong, C. Provencher, and T. Smith, "Astrobee: Developing a Free-flying Robot for the International Space Station," in AIAA SPACE Conference and Exposition, 2015, p. 4643.
[5] B. Coltin, J. Fusco, Z. Moratto, O. Alexandrov, and R. Nakamura, "Localization from Visual Landmarks on a Free-flying Robot," in IEEE/RSJ IROS, 2016, pp. 4377–4382.
[6] M. Windolf, N. Götzen, and M. Morlock, "Systematic accuracy and precision analysis of video motion capturing systems — Exemplified on the Vicon-460 system," Journal of Biomechanics, no. 12, pp. 2776–2780, 2008.
[7] S. Soylu, A. Proctor, R. Podhorodeski, C. Bradley, and B. Buckham, "Precise trajectory control for an inspection class ROV," Ocean Engineering, vol. 111, pp. 508–523, 2016.
[8] P. R. Desai, P. N. Desai, K. D. Ajmera, and K. Mehta, "A Review Paper on Oculus Rift - A Virtual Reality Headset," IJEET, vol. 13, no. 4, 2014.
[9] P. Rong and M. L. Sichitiu, "Angle of Arrival Localization for Wireless Sensor Networks," in 3rd Annual IEEE SECON, vol. 1, 2006, pp. 374–382.
[10] J. Craig, Introduction to Robotics: Mechanics and Control. Pearson, 2005, vol. 3.
[11] D. Miller, A. Saenz-Otero, J. Wertz, A. Chen, G. Berkowski, C. Brodel, S. Carlson, D. Carpenter, S. Chen, S. Cheng et al., "SPHERES: A Testbed For Long Duration Satellite Formation Flying In Micro-Gravity Conditions," in AAS/AIAA Space Flight Mechanics Meeting, 2000, pp. 167–179.
[12] D. Schinstock, "GPS-aided INS Solution for OpenPilot," Kansas State University, Tech. Rep., 2014.
