
Motion Capture and Activity Tracking Using

Smartphone-Driven Body Sensor Networks


Tudor Pascu, Martin White, Zeeshan Patoli
Department of Informatics, University of Sussex, Brighton, UK
Email: t.pascu@sussex.ac.uk, m.white@sussex.ac.uk, m.z.patoli@sussex.ac.uk

Abstract: Because of advances in inertial microelectronics and mobile computing technologies, highly accurate sensor hardware has become ubiquitous in modern smartphones. This paper introduces a framework that networks smartphone devices to produce body sensor networks for motion capture and activity tracking application areas. Data is transferred in real-time using the motion cloud, an online gateway and storehouse for inertial data. The goal of this research is to present a modular methodology for amalgamating smartphone sensor data within a centralized repository that is suitable for experimental research. The proposed framework explores solutions for sensor fusion, data synchronization, data streaming and remote control functionality in smartphones. To demonstrate sensing accuracy, three devices are strapped to the motion performer's arm to record the articulated motion of a hand wave gesture. To demonstrate continuous sensing, two devices are networked for a period of two hours to identify differences between sedentary and active comportments.
Index Terms: Body Sensor Network, Motion Capture, Activity Tracking, Animation, Smartphone

I. INTRODUCTION
Early smartphones contained low-power accelerometers and
magnetometers to measure device tilt. Tilt tracking allows
users to interact with applications by simply rotating the
device. For example, turning the phone on its side will rotate
its interface from portrait to landscape. Modern smartphones
include additional micro-electro-mechanical gyroscopes. This
combination of sensing hardware can also be found in motion
capture systems in the form of inertial measurement units
(IMU). This paper draws a parallel between smartphones
and IMUs to answer one question: Is motion capture and
activity tracking feasible using smartphone-driven body sensor
networks?
The motion capture software environment [1] has been ported to the Android platform while targeting small handheld devices in terms of interface dimensions and touchscreen interaction. The proposed mobile application, previously introduced in the context of medicine and healthcare [2], encloses a 3D engine that is optimized to render kinematic models on the smartphone's screen. The phone acts as an IMU by fusing the gyroscope, accelerometer and magnetometer data. The processed motion is applied to the kinematic rig and rendered on the smartphone's screen to give the user an interpretable visualisation of the data.

978-1-4799-0048-0/13/$31.00 2013 IEEE

A typical motion capture suit uses a multiplexer as a data hub between its constituent nodes. Our smartphone equivalent substitutes a cloud server for the multiplexer. Each device uploads its motion recording in real-time to a web service for server-side synchronization and storage. The merged result is directed back to every smartphone in the network.
This paper presents two studies that demonstrate the framework's functionality. In the first study, three smartphones are interconnected to form an upper body motion capture sleeve. The devices are strapped to a motion performer's arm, forearm and hand to demonstrate, empirically, that smartphones can record the articulated movement of a hand wave gesture. In the second study, two smartphones are interconnected to measure activity over extended periods of time. The aim of this study is to amalgamate motion data, from two separate users, in one centralized repository and compute differences between sedentary and active behaviour.
II. BACKGROUND
The developments presented in this paper can be associated
with two research topics: motion capture and smartphone sensing. This section provides a brief overview of those topics to
outline the motivation behind this work. Further discussions on
potential application areas (e.g. health and fitness, emergency
response, road condition monitoring, etc.) can be seen in the
penultimate subsection of the paper.
A. Motion Capture
Motion capture is a general term describing the reconstruction of real-life movement in 3D space. Human motion is the
primary target for motion capture because of its anatomical
complexity and organic characteristics. Most recording studios employ either optical or inertial technologies to capture
human motion for the purpose of character animation. Optical
technologies rely on digital cameras to triangulate positional
data while inertial technologies use IMUs to track world space
rotations. Inertial motion capture can be described as a multi-faceted sequence of procedures entailing data acquisition, sensor fusion, post-processing, kinematic deployment and 3D visualisation. Aside from character animation, motion
capture is also popular throughout the fields of biomechanics


[3], medical science [4], sport science, ergonomics, etc. Several interconnected IMUs can be referred to as a body sensor network (BSN) [5] or a wireless body area network (WBAN) [6]. While there are many motion capture systems commercially available, these technologies remain out of the reach of the average user (someone without in-depth knowledge of sensing technologies) due to cost or complexity.

B. Smartphone Sensing
The smartphone has become the most ubiquitous [7] type of wearable computing. Its worldwide uptake has prompted many advances in micro-electro-mechanical sensor (MEMS) technologies. Modern smartphones embed a variety of sensors: optical cameras, gyroscopes, accelerometers, magnetometers, pressure sensors, thermometers and light sensors. These components are designed to be robust and withstand everyday use for the lifespan of the device. The smartphone's innate ability to sense while sustaining an Internet connection has encouraged new application areas focused on aggregating sensor data in online repositories. Multiple smartphones connected to the Internet can form large sensor networks [8] that generate big data in accordance with the Internet of Things model. For example, networks of smartphones can be used to derive road condition maps from accelerometer data [9]. Another popular use is activity tracking, whereby sensor data is amalgamated to deduce the overall comportment of smartphone users [10] [11]. Smartphones have the ability to stimulate physical activity and improve the overall health of their operators [12] [13].
III. ARCHITECTURE OVERVIEW
This section provides an architectural overview of the application by introducing its main components. As shown in Fig. 1, the back-end's constituent objects are distributed throughout five main packages entitled mathematics, kinematics, animation, recorder and connectivity.

Fig. 1. Simplified object diagram showing the application's main packages and their contents.

1) The mathematics package contains libraries for vector, quaternion and matrix transformations. Throughout the application, rotations are stored as quaternions (for performance reasons) while vectors and matrices are used primarily by the OpenGL ES renderer.
2) The kinematic package generates virtual bone objects based on BVH hierarchies. Each bone object is assigned a rotational and positional constraint to form an articulated skeletal hierarchy.
3) The animation package is a temporary storehouse for recorded data. Its primary functionality is to feed data to the skeleton during animation playback. Additionally, it provides the basic functionality for the animation controls.
4) The recorder package is used to extract data from the device's gyroscope, accelerometer and magnetometer. The three sets of data are fused to produce a result that has little drift or noise. The recorder checks the data for errors and applies post-processing filters.
5) The connectivity package allows the application to function as a node in a BSN by establishing a connection to the cloud server. The authenticator identifies the phone while the uploader and downloader transfer data between the smartphone and an online database. The trigger listener awaits server events so that the device can be controlled remotely by other smartphones or by the server.
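The mathematics package's preference for quaternions can be illustrated with a minimal sketch; the function names below are illustrative, not the package's actual API. Composing two rotations costs one Hamilton product, with no trigonometry beyond constructing the quaternions.

```python
import math

def quat_multiply(q1, q2):
    # Hamilton product: composes two rotations ((w, x, y, z) convention).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_from_axis_angle(axis, angle_rad):
    # Unit quaternion for a rotation of angle_rad about a unit axis.
    s = math.sin(angle_rad / 2.0)
    return (math.cos(angle_rad / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

# Composing two 45-degree rotations about Z yields a 90-degree rotation.
qz45 = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 4)
qz90 = quat_multiply(qz45, qz45)
```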
IV. INTERFACE OVERVIEW
Condensing an otherwise large software environment to fit the small screen dimensions of a smartphone proved difficult. The application's main interface had to host the majority of its features while providing enough space for the visualizer canvas. As shown in Fig. 2 (a), the user interface has three main sections: dashboard, viewport and animation controls. The dashboard, located at the top of the user interface, is a scrollable menu that allows users to load/save files, select camera views, change settings and upload or download recordings from the server. The viewport camera can be manipulated using three multi-touch gestures: pinch-zoom, swipe to rotate and two-finger-swipe to translate. The animation controls are customary apart from one checkbox labelled "networked". By enabling "networked", the phone automatically registers with the server as a BSN node and enables all online functionality.
Due to the limited dimensions of the screen, the recording process is achieved in a secondary interface. As shown
in Fig. 2 (b), the recorder overlays the main interface to
reveal two important options. First, a channel is chosen to
inform the application which body part is being recorded
(channels correspond to segments in the skeletal rig). Second,
a recording track is chosen to inform the application where
to upload data. Users have the ability to create, modify and
delete recording tracks. Several smartphones connected to
the same recording track are instantly unified as one body
sensor network. Consequently, the large record button informs
all nodes to begin recording simultaneously over an Internet
connection using event triggers.
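The remote record button relies on the event-trigger mechanism described later; one way to realize the listening side is a change-detecting poll, sketched below. The class name, callback shape and status values are assumptions, and `get_status` stands in for the motion cloud's trigger endpoint, which is not specified here.

```python
import threading

class TriggerListener:
    """Polls a status source and fires a callback only on status changes."""

    def __init__(self, get_status, on_change, interval_s=1.0):
        self.get_status = get_status      # stand-in for the server query
        self.on_change = on_change        # e.g. start/stop the recorder
        self.interval_s = interval_s
        self._last = None
        self._stop = threading.Event()

    def poll_once(self):
        status = self.get_status()
        if status != self._last:          # act only on status alterations
            self._last = status
            self.on_change(status)

    def run(self):
        # Background loop; a real node would run this on its own thread.
        while not self._stop.is_set():
            self.poll_once()
            self._stop.wait(self.interval_s)

events = []
listener = TriggerListener(get_status=lambda: "record",
                           on_change=events.append)
listener.poll_once()   # first poll detects a change and fires the callback
listener.poll_once()   # unchanged status fires nothing
```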


Fig. 2. Main user interface (a) rendering kinematic motion and recorder interface (b).

V. SENSOR FUSION
The application's sensor fusion relies on quaternion linear interpolation to combine the gyroscope, accelerometer and magnetometer data. Gyroscopes suffer from drift but produce noise-free data at high frequencies. Accelerometers and magnetometers can be combined to produce a rough approximation of rotation. The resulting signal contains no noticeable drift but oscillates in excess of one degree per frame of motion. The three sensors can be fused so that the accelerometer and magnetometer are used by the gyroscope as a reference point to cancel drift. The entire motion processing can be divided into three separate stages: data preparation, drift cancellation and post-processing.

A. Data Preparation
The first step of the preparation stage is to combine the accelerometer and magnetometer readings using the Android native API. The result is a quaternion representing a rough approximation of the device's orientation. Its rotation contains noise that can be attenuated by applying a low-pass filter. A set of transformations is applied so that the data matches the 3D engine's orthogonal configuration.
The second step is to gather the gyroscope data and compute its world space rotation. The gyroscope outputs three angular speeds corresponding to azimuth, pitch and roll. Angular speeds are converted into rotation increments when multiplied by the time step. The time step is the time interval between two consecutive samplings. A summation of the rotation increments produces a world space rotation. The two rotations are converted from Euler angles to quaternions to avoid gimbal lock and for performance reasons (quaternion multiplications are much faster to compute than trigonometry).

B. Drift Cancellation
In this context, drift is the difference between the gyroscope's rotation and the combined accelerometer/magnetometer rotation. The two streams of data are compared to determine a drift coefficient that is represented by a percentage ratio. Controlled drift is induced so that the gyroscope rotation gradually matches the accelerometer/magnetometer rotation. This process is achieved using quaternion linear interpolation and the drift coefficient. The fused rotation replaces the gyroscope's accumulated rotation before the next frame of motion is computed. The result contains some unwanted noise that can be masked using filters. A popular alternative to this solution is the Kalman filter [14].

C. Post-Processing

Once drift cancellation is complete, the data is subject to cleaning and smoothing. Occasionally, some of the quaternions are miscalculated due to sensor or software inaccuracies. The cleaning filter traverses the recording to compare every frame of motion against its predecessors. A frame is deemed inaccurate if it exceeds the filter's threshold. All inaccuracies are discarded and substituted with new interpolated values. The cleaning is followed by smoothing, where a Gaussian mask is applied to every frame of motion to filter out any remaining noise. The user can adjust or disable the amount of smoothing depending on the task at hand. For example, a fluent motion may be desired for an animation while raw data may be preferred for scientific studies.
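The preparation and drift-cancellation stages can be sketched as follows. This is a minimal illustration, not the application's actual implementation: angular speeds are integrated into an accumulated quaternion, which is then pulled toward the drift-free accelerometer/magnetometer reference by linear interpolation and a drift coefficient.

```python
import math

def quat_multiply(q1, q2):
    # Hamilton product ((w, x, y, z) convention).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    # Angular speeds become a rotation increment when multiplied by the
    # time step; increments are accumulated by quaternion multiplication.
    wx, wy, wz = omega
    mag = math.sqrt(wx*wx + wy*wy + wz*wz)
    if mag == 0.0:
        return q
    half = mag * dt / 2.0
    s = math.sin(half) / mag
    return quat_multiply(q, (math.cos(half), wx*s, wy*s, wz*s))

def cancel_drift(q_gyro, q_accmag, drift_coeff):
    # Linear interpolation pulls the accumulated gyroscope rotation
    # toward the accelerometer/magnetometer reference; the normalized
    # result replaces the gyroscope rotation before the next frame.
    fused = tuple(g + drift_coeff * (a - g) for g, a in zip(q_gyro, q_accmag))
    norm = math.sqrt(sum(c * c for c in fused))
    return tuple(c / norm for c in fused)

# Integrate a 90 deg/s yaw for one second in 30 steps (a 30f/s frame rate),
# then nudge the result toward a drift-free reference rotation.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(30):
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2.0), 1.0 / 30.0)
reference = (math.cos(math.pi / 4.0), 0.0, 0.0, math.sin(math.pi / 4.0))
q = cancel_drift(q, reference, 0.02)
```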
VI. FROM SMARTPHONE TO BODY SENSOR NETWORK
The smartphone can be converted into a body sensor network in one tap. The application has two fundamental operating modes: inertial measurement unit and networked. By default, the smartphone streams motion data from the recorder to the data cleaner without accessing the server. The data is filtered and the result is applied to the kinematic model. By enabling the networked functionality, the application first sends the recorded data to the server. As shown in Fig. 3, the server waits for all nodes to upload before synchronizing the data. Once the data is synchronized, the result is sent back to the smartphones for post-processing. This section continues by outlining the concept of event triggers, the application's data streaming protocol and the server-side approach to data synchronization.
Fig. 3. Flow chart of the application's two operating modes: inertial measurement unit and networked.

A. Event Triggers
Tapping several phones simultaneously to begin recording is unfeasible from a usability standpoint. The BSN's nodes must be activated and deactivated remotely using triggers. In this context, the term trigger describes a web service that sends events to multiple smartphones simultaneously. A trigger may be accessed from the application's interface or, alternatively, from the motion cloud's control panel. Triggers are used primarily to enable and disable recording across the BSN.
The motion cloud server stores a list of trigger objects for each BSN. Each trigger object contains a binary on/off status that is accessible only to the specified smartphones. Each smartphone fires an additional thread tasked with querying the server for status updates. The querying is optimized so that the smartphones consume insignificant amounts of mobile data. Uploads are made only if a status alteration is detected.

B. Data Streaming Protocol
Data can be streamed to the server in real-time over Wi-Fi or 3G. We wanted to ensure that the smartphones communicate with the server quickly and that no data is lost during transfers. Streaming motion data to a server in real time requires a stable connection. Most wireless IMUs connect to a computer over Bluetooth at short distances. Comparatively, a smartphone's Internet connection is unstable during everyday use. We developed a reliable data streaming protocol with the aim of resolving two potential problems. First, the smartphone may alternate between Wi-Fi, 3G and no connectivity during a recording session. Second, the smartphone's ability to process sensor data may be hindered by connection latencies.
Sending data to the server frame-by-frame is unfeasible due to connection and server latencies. Our data streaming protocol allows the smartphone to send motion data to the server in bursts that vary in size depending on the connection speed. The processed motion obtained from the smartphone's sensors is gathered and placed in a buffer. A set of asynchronous tasks is started to empty the buffer contents and upload data to the server in the form of JavaScript Object Notation (JSON) packets. The frequency at which asynchronous tasks are started is governed by the time it takes for the server to respond to each completed task. If the connection is fast, the smartphone uploads small amounts of data very frequently. If the connection is slow, the smartphone uploads large amounts of data rarely. If the connection is lost, the recorded data is gathered in the buffer until Internet access is regained. The same process is used for downloading data from the server.

C. Data Synchronization

Synchronization refers to the process of merging multiple sets of motion data (each corresponding to a skeletal segment) and distributing the result throughout the BSN. From a usability perspective, synchronization is an automated process that requires no user input. On average, this task is completed in less than one second directly after the recording is stopped.
Although the smartphones are triggered to begin recording simultaneously, there is a significant delay until each message is received, depending on the Internet connection. Consequently, we had to find a more accurate solution to remove that delay and align the motion data. For this we take advantage of the Android time server, which automatically updates the clock when the phone is switched on. This clock reading can be used as a reference based on the assumption that two phones in the same time zone will show identical times. A timestamp is stored to identify precisely when the recording has started. The outgoing server message contains the timestamp, a skeletal segment id (to create a link between a motion channel and its corresponding body part) and motion data.
The front of each motion set is truncated with regard to the largest timestamp. For example, if a BSN contains smartphones a and b, where a begins recording precisely one second after b in a 30f/s animation, 30 motion frames will be truncated from the beginning of b's motion set. When a recording session is first initiated, the motion performer is asked to stand motionless to ensure that no important motion frames are discarded. Fig. 4 illustrates the truncation of five motion sets using timestamps.

Fig. 4. Server-side alignment of five motion sets using timestamps.
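The timestamp-based truncation can be sketched as follows; the data structure (a mapping from channel id to a start timestamp and a frame list) is an assumption for illustration, not the server's actual schema.

```python
def align_motion_sets(sets, frame_rate):
    """Truncate the front of each motion set to the largest start timestamp."""
    latest = max(start for start, _ in sets.values())
    aligned = {}
    for channel, (start, frames) in sets.items():
        # Frames recorded before the last node started are discarded.
        skip = round((latest - start) * frame_rate)
        aligned[channel] = frames[skip:]
    return aligned

# Phone "a" starts one second after phone "b" at 30f/s, so 30 frames
# are truncated from the beginning of b's motion set.
sets = {"a": (11.0, list(range(300))),
        "b": (10.0, list(range(330)))}
aligned = align_motion_sets(sets, frame_rate=30)
```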

Every smartphone receives an identical copy of the processed motion from the server. The rotational data is applied to the kinematic model using the skeletal segment ids. Notably, the fused sensor data is a stream of world space rotations while the kinematic model requires local space. Each skeletal segment's quaternion must be divided by its kinematic predecessor's in accordance with the hierarchy specification. This process is repeated for every frame of motion.
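The world-to-local conversion amounts to quaternion "division": multiplying by the inverse (for unit quaternions, the conjugate) of the parent's world rotation. A minimal sketch, with illustrative function names:

```python
import math

def quat_multiply(q1, q2):
    # Hamilton product ((w, x, y, z) convention).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_conjugate(q):
    # For a unit quaternion, the conjugate is the inverse.
    w, x, y, z = q
    return (w, -x, -y, -z)

def world_to_local(q_world, q_parent_world):
    # Removing the kinematic predecessor's world rotation leaves the
    # segment's rotation relative to its parent.
    return quat_multiply(quat_conjugate(q_parent_world), q_world)

# A forearm whose world rotation equals its parent arm's world rotation
# has an identity local rotation.
arm = (math.cos(math.pi / 4.0), 0.0, 0.0, math.sin(math.pi / 4.0))
local = world_to_local(arm, arm)
```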


VII. MOTION CLOUD
The motion cloud is a centralized repository for all motion capture mediums: animation suits, smartphones, inertial measurement units, pedometers, etc. The online repository can be powered by any cloud service (e.g. Microsoft Azure) and can accommodate thousands of devices recording simultaneously. Because of its cross-platform capabilities, the motion cloud can also be used as a gateway between smartphones, computers and tablets. For example, motion captured on a handheld device can be opened using a desktop version of our motion capture software environment. Likewise, a commercial inertial suit's data, recorded in conjunction with a computer, can be transferred through the motion cloud to a handheld device.
A. Data Storage
A single smartphone recording continuously for a period of 24 hours at a frame rate of 30f/s would equate to approximately 2.5 million database entries. Consequently, the motion cloud's database (shown in Fig. 5) is designed to accommodate large amounts of data efficiently. Users are required to register a unique account by providing general profile information and authentication credentials. Once authenticated using the OAuth authorization protocol, the user can create new recordings or access previous recordings. Each recording object is paired with a configuration table entry that stores settings such as frame rate, recording timestamps and a hash value corresponding to the kinematic rig in use. Finally, each recording contains a set of channels that store vector readings corresponding to rotations (as found in BVH files) and gravitational accelerations.
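The storage estimate follows from one database entry per captured frame:

```python
# One database entry per captured frame of motion.
FRAME_RATE = 30                     # frames per second (30f/s)
SECONDS_PER_DAY = 24 * 60 * 60

# 30 * 86,400 = 2,592,000 entries, i.e. roughly 2.5 million per day
# for a single continuously recording smartphone.
entries_per_day = FRAME_RATE * SECONDS_PER_DAY
```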

Fig. 5. Relational diagram of the motion cloud's database objects.

B. Web Portal
As shown in Fig. 6, the motion cloud's data can be interfaced with through a web portal. The web portal has two primary functionalities. First, the interface allows users to visualize data as a set of interactive graphs and export data as CSV tables or as BVH motion files. Second, the portal's control panel provides remote control functionality for enabling or disabling the recording process throughout a BSN. Future iterations of the interface will implement HTML5's WebGL approach to facilitate 3D visualizations in the web browser.

Fig. 6. Motion cloud web portal displaying two channels streaming data from two smartphones.

VIII. RESULTS - MOTION CAPTURE
The smartphone motion capture BSN concept was evaluated using three Samsung Galaxy S3 devices strapped to the motion performer's right arm. The experiment's goal was to animate an articulated arm performing a variety of gestures. Each gesture was recorded over a period of one minute at frame rates ranging from 30f/s to 120f/s. These particular smartphones have a large surface area but are relatively light. By using thicker elastic straps, the phones move very little in relation to the body, minimising skin-induced artefacts [15].
Fig. 7 shows the recorded motion alongside the corresponding real-life movement. The performed gesture consists of three consecutive hand waves. The smartphones are strapped to the right arm, forearm and hand. Each recording is preceded by calibration, whereby angular readings are compensated to match the rig's posture. The calibration posture is the T-Pose, whereby both arms are extended laterally with the palms facing downwards.

Fig. 7. Hand wave gesture recorded using three Samsung Galaxy S3 smartphones.


Fig. 8 illustrates the three sets of motion over a period of 200 frames at a frame rate of 30f/s. The graphs illustrate an identifiable pattern, consisting of highpoints and depressions, that is repeated for each consecutive hand wave. Throughout the gesture, the elbow joint does the majority of the work while the hand and forearm move in tandem. The result is a virtual motion that mirrors the original movement accurately and fluidly.

Fig. 8. Processed hand, forearm and arm rotations depicting three consecutive hand waves.

IX. RESULTS - ACTIVITY TRACKING
In the previous example, we developed a network with its nodes corresponding to different parts of the same body. For the purpose of activity tracking, a body sensor network can be used to aggregate data from different individuals.
This study demonstrates two networked smartphones, belonging to separate individuals, recording data over a period of two hours. This study also served the purpose of testing the application's ability to stream data correctly during everyday use. To form a contrasting set of results indicative of both sedentary and active behaviour, the two individuals were chosen based on the amount of activity they experience on a daily basis. For this experiment, each phone was set to record motion at 10f/s and the study accumulated in excess of 0.5 million points of data (corresponding to 72 000 vector readings and timestamps per individual). In practice, that sampling rate would be reduced to preserve database storage space and smartphone battery [16]. During the recording process, the application sums all angular increments and decrements to extrapolate a coefficient indicative of activity. If the smartphone is rotated 1 degree along each of its three axes, the coefficient of activity will equal 3 (regardless of whether the rotations are clockwise or anticlockwise). Fig. 9 illustrates a comparison between the two sets of data.

Fig. 9. Recorded motion data illustrating sedentary (green) behaviour overlaying active (gray) behaviour.

Activity tracking is generally achieved using accelerometer data [17]. The motion cloud is capable of computing the number of steps taken by an individual by analysing the properties of motion sensed during gait. The algorithm interprets an angular change of 112 degrees to equate to one step taken. This particular value was calibrated by running the smartphone application alongside a FitBit Ultra pedometer. Table I summarizes activity and illustrates a rough approximation of the steps taken by each individual during the two-hour recording session.

TABLE I
ACTIVITY RESULTS SUMMARIZED

                  Sedentary      Active    Ratio
X Activity         57441.11   158525.17     2.75
Y Activity         26935.33    90022.84     3.34
Z Activity         55560.02   167786.56     3.02
Total Activity    139936.45   416334.57     2.96
Steps Taken            1249        3717     2.96
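The activity coefficient and step estimate can be sketched as follows, under the stated assumptions (the coefficient sums absolute per-axis angular changes, and 112 degrees of accumulated change equate to one step). The frame representation is hypothetical.

```python
def activity_coefficient(frames):
    # Sum all angular increments and decrements across the three axes;
    # direction does not matter, only magnitude.
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        total += sum(abs(c - p) for p, c in zip(prev, cur))
    return total

DEGREES_PER_STEP = 112.0  # value calibrated against a FitBit Ultra pedometer

def steps_taken(frames):
    # Rough step estimate from accumulated angular change during gait.
    return int(activity_coefficient(frames) // DEGREES_PER_STEP)

# Rotating 1 degree along each axis yields a coefficient of 3,
# regardless of whether the rotations are clockwise or anticlockwise.
frames = [(0.0, 0.0, 0.0), (1.0, -1.0, 1.0)]
coefficient = activity_coefficient(frames)
```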

X. CONCLUSION
The aim of this study has been to investigate the smartphone's relevance to the field of inertial motion capture and activity tracking. The paper's developments expose solutions for sensor fusion, server-side data synchronization, data streaming protocols and remote control functionality using event triggers. When combined to form one application, these solutions demonstrate that modern smartphones are suitable prototyping environments and test beds for BSNs because they amalgamate powerful computational resources, motion sensing hardware and telecommunication technologies in one ubiquitous package.
The smartphone's ability to sustain an Internet connection can be used to establish omnidirectional BSNs. Motion capture software environments can be deployed on the smartphone to track the three-dimensional orientation of a singular body part or, when networked, to capture kinematic motion. Network functionality is facilitated by the motion cloud, a web service that replicates the features of an inertial multiplexer while providing online storage for the recordings. Although performance has not been benchmarked against purpose-built inertial suits, preliminary results suggest that smartphones are very capable inertial measurement units. Benchmarking is problematic because performance is dependent on the device in use and different smartphones enclose different sensor components.
This paper presents two use cases for smartphone-driven BSNs. In the motion capture example, the resulting BVH animation is compatible with most animation software and can be used to animate character topologies in a similar way to inertial motion capture suits. In the activity tracking example, the online repository can also be used to interpret sedentary and active behaviour over extended periods of time.
A. Application Areas
The proposed framework has been designed as a platform that is modular, versatile and extendable. There are numerous application areas (e.g. small research experiments, educational purposes, rapid BSN prototyping, etc.) that could benefit from the developments outlined in this paper.
In the context of wearable computing, the proposed framework can analyse behavioural properties for the purpose of health and fitness. Recordings can be processed to estimate the number of calories burnt, steps taken and weight lost, while maintaining a record of progress.
Outside the context of wearable computing, the proposed
framework can record the motion of inorganic objects. For
example, it is common for drivers to mount their smartphones
on the dashboard for turn-by-turn navigation, music storage
and playback, etc. Sensing motion inside vehicles and streaming inertial data, together with geolocation coordinates, can be
used for road and traffic condition monitoring [9] (e.g. pothole
detection), accident detection, etc.
B. Future Work
A typical inertial motion capture suit uses Bluetooth to convey motion data to a computer. The computer processes the data and applies it to a kinematic model for visualization purposes. Our experiments involve recording human-environment interaction across urban households and outdoor environments. Bluetooth can fail due to distance or environmental obstructions. A possible solution is to utilize mobile computing technologies alongside motion capture suits instead of laptops. The motion performer can carry a smartphone on their person that streams their movements to the motion cloud.
BSNs can also be used for behavioural studies to measure physiological reactions. A physiological reaction is a sudden gesticulation instigated by a visual or auditory stimulus. Unlike character animation, a behavioural study does not require full-body motion capture. Sensors are placed only on relevant body parts that are particularly suggestive of the subject's cognition. We plan to use three smartphones to track the behaviour of subjects viewing television adverts by recording upper body (torso and shoulders) movements.
REFERENCES
[1] T. Pascu, Z. Patoli, M. White, "Unifying Software and Hardware-Centric Inertial Measurement Units in Body Sensor Networks," Proc. of Computer Graphics and Imaging, ACTA Press, 2013.
[2] T. Pascu, M. White, N. Beloff, Z. Patoli, L. Barker, "Ambient Health Monitoring: The Smartphone as a Body Sensor Network Component," Proc. of Innovation in Medicine and Healthcare, pp. 62-65, 2013.
[3] H. M. Schepers, E. H. F. Asseldonk, C. T. M. Baten, P. H. Veltink, "Ambulatory Estimation of Foot Placement During Walking Using Inertial Sensors," Journal of Biomechanics, vol. 43, no. 16, pp. 3138-3143, 2010.
[4] S. Patel, K. Lorincz, R. Hughes, N. Huggins, J. Growdon, D. Standaert, M. Akay, J. Dy, M. Welsh, P. Bonato, "Monitoring Motor Fluctuations in Patients with Parkinson's Disease Using Wearable Sensors," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 6, pp. 864-873, 2009.
[5] K. D. Nguyen, I. M. Cheng, Z. Luo, S. H. Yeo, H. B. Duh, "A Body Sensor Network For Tracking and Monitoring of Functional Arm Motion," Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3862-3867, 2009.
[6] J. A. D. Moutinho, "Wireless Body Area Network," Journal of Engineering Sciences, pp. 199-216, 2011.
[7] R. Ballagas, M. Rohs, J. Sheridan, J. Borchers, "The Smart Phone: A Ubiquitous Input Device," IEEE Pervasive Computing, vol. 5, no. 1, 2006.
[8] H. Turner, J. White, "Verification and Validation of Smartphone Sensor Networks," MobilWare, 2011.
[9] A. Ghose, "Road Condition Monitoring and Alert Application: Using In-Vehicle Smartphone as Internet-Connected Sensor," IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), pp. 489-491, 2012.
[10] R. K. Ganti, S. Srinivasan, A. Gacic, "Multisensor Fusion in Smartphones for Lifestyle Monitoring," Body Sensor Networks (BSN) 2010 International Conference, vol. 365, no. 12, pp. 36-43, 2010.
[11] M. Keally, G. Zhou, G. Xing, J. Wu, A. Pyles, "PBN: Towards Practical Activity Recognition Using Smartphone-Based Body Sensor Networks," Computer Engineering, ACM, pp. 246-259, 2011.
[12] M. Yamada, T. Aoyama, S. Mori, S. Nishiguchi, K. Okamoto, T. Ito, S. Muto, T. Ishihara, H. Yoshitomi, H. Ito, "Objective Assessment of Abnormal Gait in Patients With Rheumatoid Arthritis Using a Smartphone," Rheumatology International, pp. 1-6, 2011.
[13] L. Hebden, "Development of Smartphone Applications for Nutrition and Physical Activity Behavior Change," JMIR Research Protocols, vol. 1, no. 2, 2012.
[14] R. Olfati-Saber, "Distributed Kalman Filtering and Sensor Fusion in Sensor Networks," Networks, vol. 331, pp. 1-13, 2006.
[15] D. De Rossi, F. Carpi, F. Lorussi, A. Mazzoldi, E. P. Scilingo, A. Tognetti, "Electroactive Fabrics for Distributed, Conformable and Interactive Systems," Proc. of IEEE Sensors, vol. 2, pp. 1608-1613, 2002.
[16] A. Dias, B. Fistererm, G. Lamala, K. Kuhn, G. Hartvigsend, A. Horsch, "Measuring Physical Activity With Sensors: A Qualitative Study," Studies in Health Technology and Informatics, vol. 150, pp. 475-479, 2009.
[17] M. Tomlein, P. Bielik, P. Kratky, S. Mitrk, M. Barla, M. Bielikova, "Advanced Pedometer for Smartphone-Based Activity Tracking," Proc. of the International Conference on Health Informatics, SciTePress, pp. 401-404, 2012.

