
Power Efficient Compressive Sensing for Continuous

Monitoring of ECG and PPG in a Wearable System


Venkat Natarajan and Apoorv Vyas
venkat.natarajan@intel.com, apoorv.vyas@intel.com
Signal and Systems Lab, Wireless Communications Research, Intel Labs
Intel Technology India Pvt. Ltd., Devarabisanahalli, Bengaluru 560103, India

Abstract— We demonstrate an end-to-end system prototype using Compressive Sensing (CS) to power-efficiently compress continuous bio-signals in a wearable body sensor network (BSN). We use a variant of the Binary Permuted Block Diagonal matrix encoder (P-BPBD) and pad the input signal symmetrically to achieve high compression ratios for ECG and PPG signals. In our approach, the gateway dynamically tunes the compression parameters to adapt to changing signal sparsity levels during continuous monitoring and transmits them to the sensor node. We show that the power consumed by our encoder on a 32MHz Intel® Curie™ compute module is much lower than that consumed by a standard Wavelet-based (DWT) encoder. We also show how CS can simultaneously de-noise ECG signals during reconstruction. After evaluating multiple reconstruction algorithms, we demonstrate real-time signal recovery via an implementation of the Alternating Direction algorithm on a 500MHz Intel® Edison board, used as a gateway.

Index Terms—Compressed Sensing, Smart wearables, IoT, CS de-noising, Body sensor networks, ECG PPG compression, DWT compression, Continuous monitoring.

Fig. 1. Signal processing pipeline for (a) DWT compression and (b) digital Compressive Sensing in a body sensor network.

I. INTRODUCTION

Continuous ambulatory monitoring of bio-signals in everyday life is a game-changer for a range of wearable applications including healthcare, sports, and biometric authentication. For example, continuous real-time ECG monitoring enables tracking cardiac activity for extended periods to identify and mitigate risks and enhances the wellness of the patient. Similarly, sports monitoring systems rely on continuous tracking of physiological signals to enhance the performance of athletes. Continuous monitoring of bio-electric signals for long durations under free-living conditions faces stringent technical constraints: maximizing battery life, efficiently using the small memory space on the sensor node, resilience to motion artifacts, and flexible adaptation to the non-stationary properties of the continuously sampled signal. In any wearable system, continuous wireless transmission of high-throughput data (such as ECG) significantly reduces node battery life. It requires a power-efficient way to compress and stream the data to a compute-capable gateway. For example, continuous sampling of ECG signals at a rate of 500 bytes/s results in several megabytes of data within a few hours. Fig. 1 depicts the signal processing pipeline and related mathematical operations for two different compression algorithms: Compressive Sensing and the Discrete Wavelet Transform (DWT).

DWT, as shown in Fig. 1a, delivers high compression with low signal loss; yet it is computationally demanding, memory intensive, and consumes a significant amount of energy [8, 9]. Furthermore, to embed DWT in a resource-constrained node one must overcome other challenging constraints, e.g. fixed-point implementation for low power, embedded code in finite SRAM memory, etc.

Compressive Sensing [3] (Fig. 1b) offers a low-power alternative to DWT for real-time streaming because the data encoding operation on the battery-powered node is computationally light, while the more computationally intensive reconstruction algorithm is performed on the gateway, exploiting the asymmetric architecture of a wireless wearable system. Moreover, bio-signals are naturally sparse in bases such as DCT/Fourier and meet the sparsity requirements of CS theory. This makes compressive sensing an excellent choice for real-time, power-efficient data compression of bio-signals in a body sensor network (BSN) [2].

II. PRIOR ART AND ITS LIMITATIONS

From a CS implementation perspective, it is critically important to select the right encoding matrix and the corresponding sparse basis [1]. The encoding matrix must meet multiple stringent constraints simultaneously: it must be embeddable, deliver high compression with low reconstruction error on important bio-signals such as ECG and PPG in bases in which they are known to be sparse, and be power efficient for node battery life.

Traditionally, compressive sensing has used a random Gaussian sensing matrix to encode an input bio-electric signal [3]. Such a matrix, being dense, requires more onboard memory and relatively high compute capability, which makes it more difficult to embed in a resource-constrained wireless node. Hence, alternate sensing matrix types with different distributions and more compact structure have been proposed. Allstot, Chen et al. [4-7] have investigated the efficiency and power savings of Toeplitz, circulant, and triangular sensing matrices for compressed sensing of ECG signals. However, embedding these matrices in a resource-constrained node remains challenging because the random elements of the sub-matrix need to be generated on-board the sensor node. Pre-calculating and storing these matrices is impractical and of limited use because the encoding logic and matrix structure need to adapt to variations in the sampled signal and scale to different types of bio-signals (e.g. ECG, PPG, EEG).

The use of the Binary Permuted Block Diagonal (BPBD) sensing matrix simplifies the implementation of the encoding algorithm [1, 2] without compromising high sensing efficiency and accurate reconstruction. Yet, a key shortcoming of the BPBD encoder is that its compression curve is discontinuous, with only limited choices for the compression operating points. For example, standard BPBD supports only a few compression settings, e.g. Compression Ratio (CR) = 2, 4, 8, 16, etc. for an input signal of 512 samples. However, to design and operate the compression algorithm at the best power efficiency for a given signal sparsity, the designer needs to set the CR as dictated by the sparsity and CS theory, and not be limited by the encoder structure.

Furthermore, another challenge in continuous monitoring applications is that the sparsity within the sensor streams varies due to non-stationarity, resulting in large reconstruction errors for static compression parameters. In [13], Behravan et al. use a parameter called the sparsity number to tune the compression ratios. The sparsity number is a lower bound on the sparsity as defined by CS [13]. The limitation of this method is that the true sparsity of the signal can differ significantly from the sparsity number as defined in their work, leading to poor reconstruction performance. Moreover, since the sparsity number is calculated using the recovered signal, it is prone to reconstruction errors.

III. OUR APPROACH

We overcome the limitation of the standard BPBD encoder [1] by using a symmetric padding algorithm, which we call Padded BPBD (P-BPBD), that operates at the best CR and power efficiency for a given sparsity in keeping with CS theory. Our algorithm generates a wide, continuous range of feasible compression points at which the system can operate.

To minimize reconstruction errors arising from the changing sparsity of the signal in the field, we propose a novel technique in which the gateway dynamically adapts and tunes the compression parameters based on current sparsity levels. The real-time tuning of parameters is achieved by periodically transmitting raw (uncompressed) measurements from the node to the gateway. Using the raw measurements, the gateway algorithmically computes the best CR to meet the target reconstruction loss. The revised parameters are then transmitted to the sensor node for subsequent encoding operations. The overheads introduced by this adaptive algorithm are relatively small and worth the trade-off, since it enables the system to operate at optimal compression and best power efficiency.

In contrast to the sparsity number used in [13], the use of reconstruction accuracy to determine the optimal CR is a more direct measure for controlling reconstruction errors. The periodic use of raw measurements directly by the gateway to compute the optimal CR is advantageous, as it avoids the accumulated errors of a sparsity number computed from the reconstructed signal.

Regarding the choice of compression parameters for the signal, we show that using alternate metrics, such as the accuracy of certain features (e.g. heart rate) extracted from a reconstructed signal, can push compression performance and the associated battery-life savings much higher than otherwise achievable with reconstruction error constraints, such as percentage root-mean-square difference (PRD), alone. This optimization is particularly valuable in scenarios where we are only interested in specific features such as heart rate (as opposed to the entire signal) to make inferences about the user context.

Because continuous monitoring systems are noise prone, we demonstrate a method to simultaneously de-noise the signal during the acquisition process without any additional post-processing steps. In this context, we present CS-based de-noising for an ECG signal affected by zero-mean Gaussian noise.

Finally, to show a real-time implementation, we evaluate three reconstruction algorithms: Approximate Message Passing [14], Orthogonal Matching Pursuit [11], and the Alternating Direction algorithm [12]. Based on reconstruction performance (accuracy/execution time), we built an embeddable version of the latter algorithm that is optimized for variable reuse, low-precision fixed-point operations, and efficient memory usage. We have demonstrated real-time reconstruction on a 500MHz Intel® Edison board, used as a gateway.

The specific contributions of the current work are as follows:
1. A novel P-BPBD CS encoder that supports fine-grained compression for ECG/PPG signals.
2. An adaptive algorithm that delivers the optimal compression setting for a signal with changing sparsity.
3. An end-to-end proof of concept comprising a 32MHz Intel® Curie™ compute module (product code JHCURIE) and an Intel® Edison board (product code EDI1BB.AL.K) to demonstrate power-efficient compressive sensing using P-BPBD with real-time reconstruction for ECG and PPG bio-signals.

A. Padded BPBD Encoder

A BPBD encoding matrix (Fig. 2) consists of a sub-matrix of 1's arranged as a block along the diagonal, with the off-diagonal elements set to zero. The number of elements, or size, of the sub-matrix in a BPBD matrix is N/M, where N is the number of columns and M is the number of rows. We construct the BPBD matrix on the sensor node based upon the target compression-ratio control setting received from the gateway.
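For illustration, the following minimal NumPy sketch builds such a block-diagonal matrix from the window length and an integer CR and applies it to a window. The optional column permutation (the "Permuted" part of BPBD), the random seed, and the function names are assumptions made for this sketch; it is not the authors' embedded firmware.

import numpy as np

def bpbd_matrix(n, cr, permute=False, seed=0):
    """Build an M x N binary block-diagonal sensing matrix for a window of n
    samples and an integer compression ratio cr (standard BPBD assumes cr
    divides n). Each row carries a block of cr ones, so y = Phi @ x gives
    one block sum per row."""
    assert n % cr == 0, "standard BPBD needs cr to divide the window length"
    m = n // cr
    phi = np.zeros((m, n), dtype=np.uint8)
    for i in range(m):
        phi[i, i * cr:(i + 1) * cr] = 1          # block of ones on the diagonal
    if permute:                                   # optional column permutation
        rng = np.random.default_rng(seed)         # (the "Permuted" in BPBD)
        phi = phi[:, rng.permutation(n)]
    return phi

# Example: a 512-sample window at CR = 8 gives a 64 x 512 matrix.
phi = bpbd_matrix(512, 8)
x = np.random.randn(512)                          # stand-in for a sampled window
y = phi @ x                                       # 64 compressed measurements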

Ordinarily, in a traditional BPBD technique, the number of samples in a window is exactly divisible by the sub-matrix size, or CR [1]. This implies that only a discontinuous set of compression ratios is feasible for a fixed sampling-window size, which is not practically useful, as described earlier. Our symmetric padding approach, which matches the dimensions of the specific BPBD matrix for each compression ratio, enables the user to obtain any desired integer compression ratio (without discontinuities).

Fig. 2. (a) Example of a BPBD matrix. (b) Our symmetric padding logic.

Fig. 2a-b gives an example of a BPBD matrix and the corresponding symmetric padding logic. If the length of the signal (N) is not divisible by the Compression Ratio (CR), i.e. Remainder(N/CR) ≠ 0, then we symmetrically pad the signal at the end to the next nearest length divisible by the CR, i.e.

Padding size p = ((floor(N/CR) + 1) x CR) - N.

The values of the padded elements are given by

X[N+k] = X[N-k], for k = 1, ..., p (mirror padding).

This means we can set any compression ratio and obtain the appropriate BPBD matrix.
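A minimal sketch of the padding rule and the resulting encode is shown below, written directly from the formulas above in 0-based indexing. It computes the block-sum measurements without ever materializing the full matrix (which is part of why the encoder is so cheap to embed); the helper names and the use of block sums (a scaled version of the block averaging noted in Section V) are our own illustrative choices, not the production node code.

import numpy as np

def symmetric_pad(x, cr):
    """Mirror-pad x so its length becomes the next multiple of cr, following
    p = ((floor(N/cr) + 1) * cr) - N and X[N+k] = X[N-k]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n % cr == 0:
        return x
    p = ((n // cr) + 1) * cr - n
    mirror = x[n - 1 - np.arange(1, p + 1)]       # mirror around the final sample
    return np.concatenate([x, mirror])

def pbpbd_encode(x, cr):
    """P-BPBD encode: pad, then take one measurement per block of cr samples."""
    xp = symmetric_pad(x, cr)
    return xp.reshape(-1, cr).sum(axis=1)         # block sums, i.e. y = Phi @ x_padded

x = np.sin(np.linspace(0, 8 * np.pi, 512))        # stand-in for a 512-sample PPG window
y = pbpbd_encode(x, 10)                           # 512 -> padded to 520 -> 52 measurements

For CR = 10 this pads 8 samples (512 to 520) and produces 52 measurements, matching the 52x520 P-BPBD entry in Table 1.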

Table 1 shows that our P-BPBD matrix exhibits the excellent incoherence with respect to the Fourier basis that CS requires. Coherence is defined as the largest correlation between the sensing matrix Φ (P-BPBD) and the basis Ψ (DFT); low coherence (incoherence) is a requirement for Compressive Sensing. The size of the BPBD matrix Φ is MxN and the size of the sparse basis Ψ is NxN, where N is the window size obtained after samples are artificially added to the original input window by symmetric padding.

Compression Ratio (N/M) | P-BPBD Size (MxN) | Number of padded elements | Theoretical coherence lower limit (1/√N) | P-BPBD coherence (range 1/√N to 1)
2  | 256x512 | 0  | 0.0442 | 0.0626
10 | 52x520  | 8  | 0.0439 | 0.1399
25 | 21x525  | 13 | 0.0436 | 0.2228

Table 1. Excellent incoherence of the P-BPBD sensing matrix with the Fourier (DFT) basis.
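Coherence figures of the kind shown in Table 1 can be checked with a short sketch like the one below, which normalizes the rows of Φ and the columns of a DFT matrix and takes the largest absolute inner product. The exact normalization and the omission of the column permutation are our assumptions, so the result should only be expected to approximate the Table 1 values.

import numpy as np

def coherence(phi, psi):
    """mu = max |<phi_i, psi_j>| over unit-norm rows of the sensing matrix and
    unit-norm columns of the sparse basis; it lies in [1/sqrt(N), 1]."""
    rows = phi / np.linalg.norm(phi, axis=1, keepdims=True)
    cols = psi / np.linalg.norm(psi, axis=0, keepdims=True)
    return np.max(np.abs(rows @ cols))

n, cr = 520, 10                                   # padded window from Table 1
m = n // cr
phi = np.zeros((m, n))
for i in range(m):
    phi[i, i * cr:(i + 1) * cr] = 1               # P-BPBD block structure after padding
psi = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unit-norm complex DFT basis
print(coherence(phi, psi), 1 / np.sqrt(n))        # measured coherence vs. theoretical floor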
B. Adaptive CS Algorithm
In continuous ambulatory monitoring, the sampled signal is likely to show significant variability in morphology, noise, signal quality, etc., so the compression ratio is periodically updated by the gateway and transmitted to the sensor node to reconfigure the BPBD encoder (Fig. 3). This enables tuning the compression logic in real time to meet application-dependent quality constraints even as the signal properties vary.

Fig. 3. Dynamic calculation of compression parameters on the gateway and subsequent transmission to the sensor node.
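A minimal sketch of this gateway-side loop is given below, assuming the node periodically ships one raw window: the gateway sweeps candidate CRs, trial-encodes and reconstructs the raw window, and returns the largest CR that still meets the target loss. The candidate sweep, the reconstruct callback (e.g. an Alternating Direction solver), and the loss callback are assumptions for illustration; pbpbd_encode is the helper from the earlier padding sketch.

def select_cr(raw_window, candidate_crs, reconstruct, loss, loss_target):
    """Pick the largest CR whose trial encode/decode of the raw (uncompressed)
    window still meets the target loss; the chosen CR is sent back to the node."""
    best = min(candidate_crs)                        # conservative fallback
    for cr in sorted(candidate_crs):
        y = pbpbd_encode(raw_window, cr)             # trial encode (sketch above)
        x_hat = reconstruct(y, cr, len(raw_window))  # gateway-side recovery
        if loss(raw_window, x_hat) <= loss_target:
            best = cr                                # keep pushing the CR higher
    return best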
IV. CS MODEL AND EMBEDDED POWER CHARACTERIZATION

We validated our compressive sensing algorithm on an Intel® Curie™ compute module (32MHz). Our experimental apparatus consists of an Intel® Curie™ compute module, an oscilloscope, a multi-meter, and power and performance measurement software, as shown in Fig. 4. To benchmark CS performance (power consumption and embedded reconstruction accuracy), we also embed and performance-characterize the traditional DWT algorithm on the Intel® Curie™ compute module and compare our results.

Fig. 4. Our algorithm power characterization system on the Intel® Curie™ compute module.

We assess our techniques under two different constraints used in signal reconstruction: percentage root-mean-square difference (PRD) and heart-rate accuracy (e.g. differences in beat rate calculated from the recovered signal compared to the original uncompressed signal). The use of the PRD constraint helps recover the entire signal, albeit at the cost of a lower CR with the attendant higher transmission power. Alternatively, the use of feature-accuracy constraints helps set a high CR with low transmission power while recovering only select features.

PRD = sqrt( Σ_{n=1}^{N} (x1[n] - x2[n])² / Σ_{n=1}^{N} (x1[n])² ) × 100

where x1 is the original signal, x2 is the reconstructed signal, and N is the length (i.e. dimensionality) of both signals.
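In code, the PRD metric is a one-liner. This helper is ours (not from the paper's codebase) and is the kind of loss function that could be plugged into the select_cr sketch in Section III.B.

import numpy as np

def prd(x1, x2):
    """Percentage root-mean-square difference between the original signal x1
    and the reconstructed signal x2 (both of length N), per the formula above."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    return 100.0 * np.sqrt(np.sum((x1 - x2) ** 2) / np.sum(x1 ** 2))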

V. RESULTS AND DISCUSSION

We first evaluate the compression efficiency via MATLAB models on ECG and PPG signals separately, and subsequently validate the simulation results on an Intel® Curie™ compute module and Intel® Edison board. Separately, we model and validate the multi-level dyadic DWT algorithm with the Daubechies-6 wavelet (db6) [9] for comparison. The data sets include both clean and noisy ECG/PPG signals from the MIT-BIH repository [10]. We then show, by modelling, how setting the CS parameters adaptively improves reconstruction performance. We next discuss CS-based de-noising for ECG signals and a comparative assessment of different signal recovery algorithms.
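As a rough stand-in for that DWT baseline, the sketch below uses PyWavelets to take a multi-level db6 decomposition, keep only the largest coefficients implied by a target CR, and reconstruct. The keep-the-largest thresholding rule, the decomposition level, and the input file name are our assumptions for illustration, not the embedded fixed-point implementation characterized in Section IV.

import numpy as np
import pywt

def dwt_compress(x, cr, wavelet="db6", level=5):
    """Multi-level dyadic DWT baseline: keep only the len(x)/cr largest-magnitude
    coefficients, zero the rest, and reconstruct (a sketch, not the embedded code)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.concatenate(coeffs)
    keep = max(1, int(len(x) / cr))
    thresh = np.sort(np.abs(flat))[-keep]                # magnitude of the keep-th largest coeff
    kept = [c * (np.abs(c) >= thresh) for c in coeffs]   # hard-threshold each sub-band
    return pywt.waverec(kept, wavelet)[: len(x)]

x = np.loadtxt("ecg_window.txt")        # hypothetical 512-sample ECG window
x_hat = dwt_compress(x, cr=4)           # reconstruct from roughly a quarter of the coefficients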

A. Compression performance

We present an example of P-BPBD CS on a clean PPG signal that gives excellent recovery (PRD = 4.38%) with a high compression ratio (CR = 10). Fig. 5a is the raw PPG signal and Fig. 5b gives the encoded PPG signal. The morphologies of the original and encoded signals are similar, as the P-BPBD CS encoder performs a block-averaging operation on the signal. Fig. 5c shows the reconstructed PPG signal.

Fig. 5. Example of P-BPBD-based Compressive Sensing on PPG: (a) original PPG (512 samples); (b) P-BPBD encoded PPG at CR=10 (52 samples); (c) reconstructed PPG (512 samples, PRD=4.38%).

Our P-BPBD CS encoder works well even for a noisy ECG signal with baseline wander (Fig. 6a) and a noisy PPG signal with motion artefacts (Fig. 6b). This means we can use this technique to power-efficiently transmit both the clean and the noisy ECG or PPG signals encountered in free-living, unconstrained conditions.

Fig. 6. P-BPBD CS works well even on noisy ECG/PPG: (a) noisy ECG reconstruction at CR=6, PRD=13.71; (b) noisy PPG reconstruction at CR=14, PRD=10.43.

The use of symmetric padding (P-BPBD) in the encoder results in a continuous compression characteristic curve, in contrast to standard BPBD [1] (Fig. 7).

Fig. 7. Our padding logic generates fine-grained compression curves: conventional BPBD-CS gives a discontinuous compression curve, whereas our P-BPBD-CS algorithm gives a continuous one (reconstruction loss % vs. compression ratio).
The extent of achievable compression depends on the specific constraints set by the application. We evaluate the impact of two different constraints, PRD and beat-rate accuracy, on the maximum compression ratio achievable (Table 2). For a PRD of ~10%, we achieve CR=14 for PPG. However, if the application requires only heart rate and not the complete signal, we can achieve CR=23 for the same PPG signal, cutting data transmission traffic by a factor of 23. As Table 2 shows, with CR=23, although the reconstructed signal quality is poor (PRD ≈ 20%), we can still extract an accurate beat rate.

Signal | Constraint         | Max CR achievable | Remarks
ECG    | PRD = 14%          | 6                 |
ECG    | accurate beat rate | 11                | Accurate beat rate can be extracted even with poor recovered signal quality (PRD = 40.05%)
PPG    | PRD = 10%          | 14                |
PPG    | accurate beat rate | 23                | Accurate beat rate can be extracted even with poor recovered signal quality (PRD = 18.19%)

Table 2. Maximum compression ratio (CR) achievable under different constraints using P-BPBD CS.
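The beat-rate constraint only needs a beat detector applied to the original and reconstructed windows. A simple, illustrative peak-picking estimate using SciPy is sketched below; the 250Hz sampling rate matches the end-to-end demo in Section V.D, while the minimum beat spacing and amplitude threshold are assumptions, and any robust beat detector could be substituted.

import numpy as np
from scipy.signal import find_peaks

def beat_rate_bpm(x, fs=250.0, min_rr_s=0.4):
    """Estimate beats per minute from an ECG/PPG window via simple peak picking
    (illustrative only; not the feature extractor used in the paper)."""
    x = np.asarray(x, dtype=float)
    peaks, _ = find_peaks(x, distance=int(min_rr_s * fs),
                          height=np.mean(x) + 0.5 * np.std(x))
    duration_s = len(x) / fs
    return 60.0 * len(peaks) / duration_s

# Feature-accuracy constraint: compare beat rate of the original and reconstructed window.
# err_bpm = abs(beat_rate_bpm(x_orig) - beat_rate_bpm(x_rec))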
Finally, we show that the P-BPBD-CS algorithm is superior, in both ease of implementation and compression performance, to prior-art techniques such as random-Gaussian-based CS and DWT. In Fig. 8, we show that our P-BPBD-CS algorithm outperforms the random Gaussian algorithm across a range of compression ratios for ECG and PPG.

Fig. 8. Superior reconstruction of our padded P-BPBD CS scheme over prior art (random Gaussian [3]) for (a) ECG and (b) PPG.
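For contrast, a minimal sketch of the dense random-Gaussian encoder used as the prior-art baseline [3] is shown below; the 1/sqrt(M) scaling and the idea of sharing a seed with the gateway are illustrative assumptions. Note that the node must hold (or regenerate) the full M x N matrix, which is exactly the embedding burden discussed in Section II.

import numpy as np

def gaussian_encode(x, cr, seed=0):
    """Prior-art baseline [3]: y = Phi @ x with a dense i.i.d. Gaussian Phi.
    The node must store or regenerate all M*N entries, unlike P-BPBD."""
    n = len(x)
    m = n // cr
    rng = np.random.default_rng(seed)             # seed assumed shared with the gateway
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    return phi @ x

# A 512-sample window at CR=8 needs a 64 x 512 dense matrix (32,768 entries)
# on the node, versus 512 ones in fixed positions for the BPBD encoder.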

Based on a comparison of compression performance alone, we find that the P-BPBD CS algorithm offers results similar to the DWT approach (Fig. 9).

Fig. 9. Compression achieved by P-BPBD CS and DWT for a fixed reconstruction loss %.

Fig. 10a-c shows the superior performance of our adaptive-CS algorithm. Fig. 10a represents the original signal, Fig. 10b shows the recovered signal using adaptive CS for a PRD constraint of 9%, and Fig. 10c shows the recovered signal for conventional non-adaptive CS with a CR of 5.

Fig. 10. Better reconstruction achieved by adaptive CS for a fixed PRD constraint: (a) original signal; (b) recovered signal with adaptive CS (PRD constraint 9%); (c) recovered signal with non-adaptive CS (CR=5).

Table 3 presents the CR and PRD for signal acquisition using adaptive CS and non-adaptive CS. As can be seen, adaptive CS switches to a lower CR to meet the PRD constraints set by the application.

                | Waveform 1      | Waveform 2      | Waveform 3
Adaptive CS     | CR=3, PRD=7.8%  | CR=3, PRD=7.7%  | CR=4, PRD=8.0%
Non-Adaptive CS | CR=5, PRD=16.9% | CR=5, PRD=13.8% | CR=5, PRD=13.7%

Table 3. Comparison of adaptive and non-adaptive CS.

B. Power-efficiency

Our P-BPBD-CS technique is more power efficient for a given reconstruction quality (Table 4) than a state-of-the-art DWT compression algorithm. Based on power characterization on the Intel® Curie™ compute module (32MHz CPU, 80KB SRAM), we found that the embedded BPBD algorithm is ~100X faster than DWT, with energy consumption less than ~3% of that of DWT for the same reconstruction loss. This shows that P-BPBD is the compression algorithm of choice for real-time continuous sampling and transmission of bio-signals.

Algorithm | Compression Ratio | Execution Time | Energy Overheads
CS-BPBD   | CR=4 to CR=12     | 0.32ms         | ~6mJ
DWT       | CR=1.3 to CR=7.7  | ~31ms          | ~215mJ to 830mJ

Table 4. Energy overheads for P-BPBD CS are extremely low on the 32MHz Intel® Curie™ compute module for ECG and PPG.

C. Joint CS De-noising and Reconstruction

Fig. 11 shows the effectiveness of jointly optimizing noise removal and signal recovery on an ECG corrupted with zero-mean Gaussian noise (variance 0.2). The improvements in signal-to-noise ratio (SNR) in Table 5 are driven by both the BPBD structure and the Law of Large Numbers, which states that the sample mean tends towards the true mean for large sample sizes. The BPBD logic averages out zero-mean Gaussian noise in the signal and renders the encoded noisy signal almost the same as the encoded signal without any noise.

Fig. 11. Example of CS-based ECG de-noising: (a) noisy input ECG; (b) CS de-noised ECG.

Noise Variance    | 0.10  | 0.20  | 0.30  | 0.40
Reconstructed SNR | 16.46 | 2.23  | 2.56  | 0.77
Noisy Signal SNR  | 11.50 | -0.08 | -1.66 | -3.06

Table 5. SNR improvements by the CS de-noising technique for different noise levels.
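The averaging argument is easy to see numerically: each (P-)BPBD measurement averages CR consecutive samples, so i.i.d. zero-mean Gaussian noise shrinks by roughly 1/sqrt(CR) in the measurement domain. The toy values below (CR=8, noise variance 0.2, a sinusoidal stand-in for ECG) are ours and only illustrate the mechanism, not the Table 5 experiments.

import numpy as np

rng = np.random.default_rng(1)
cr, n = 8, 512
clean = np.sin(np.linspace(0, 6 * np.pi, n))            # stand-in for a clean ECG window
noise = rng.normal(0.0, np.sqrt(0.2), n)                 # zero-mean Gaussian, variance 0.2

enc = lambda sig: sig.reshape(-1, cr).mean(axis=1)       # block average = scaled BPBD row
noise_in = np.std(noise)                                 # noise level before encoding
noise_enc = np.std(enc(clean + noise) - enc(clean))      # noise level in the measurements
print(noise_in, noise_enc, noise_in / np.sqrt(cr))       # averaging shrinks it ~ 1/sqrt(CR)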
For the best real-time reconstruction on the gateway, we compared the performance of multiple algorithms. The comparison included Approximate Message Passing (AMP) [14] and Orthogonal Matching Pursuit (OMP) [11], as well as an optimization-based algorithm, the Alternating Direction Method [12]. We evaluated these algorithms on a laptop using 6 ECG datasets from the QT Database, PhysioNet (Table 6). In terms of recovery time, the Alternating Direction Method is ~20X better than AMP and ~2X better than OMP. The reconstruction accuracy of these algorithms is comparable. AMP has the worst reconstruction times amongst all the algorithms because it required significantly more iterations to converge to a low error.

User | PRD (%): AMP | PRD (%): Alt. Direction | PRD (%): OMP | Time (s): AMP | Time (s): Alt. Direction | Time (s): OMP
1 | 8.72  | 8.10  | 8.78  | 0.523 | 0.023 | 0.049
2 | 9.43  | 8.92  | 9.50  | 0.553 | 0.023 | 0.048
3 | 14.52 | 13.86 | 14.57 | 0.546 | 0.023 | 0.048
4 | 7.27  | 7.68  | 7.33  | 0.537 | 0.023 | 0.0488
5 | 14.69 | 14.29 | 14.74 | 0.525 | 0.023 | 0.0503
6 | 3.60  | 3.83  | 3.56  | 0.515 | 0.023 | 0.0491

Table 6. Comparative performance of different reconstruction algorithms, presented for 6 users.
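For reference, a compact, textbook alternating-direction (ADMM) iteration for the basis-pursuit problem min ||s||_1 s.t. A s = y, with A = ΦΨ, is sketched below in the spirit of [12]. This is not the authors' optimized fixed-point Edison implementation; the DCT synthesis basis, the penalty ρ, and the iteration count are illustrative assumptions, and A is assumed to have full row rank (true for a BPBD-type Φ with an orthogonal Ψ).

import numpy as np

def bp_admm(A, y, rho=1.0, iters=500):
    """Basis pursuit via ADMM: min ||s||_1  subject to  A s = y (textbook splitting)."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)
    pinv_y = A.T @ (AAt_inv @ y)                              # a point on the affine set A s = y
    proj = lambda v: v - A.T @ (AAt_inv @ (A @ v)) + pinv_y   # projection onto that set
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    s = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        s = proj(z - u)                                       # s-update: enforce A s = y
        z = soft(s + u, 1.0 / rho)                            # z-update: L1 proximal step
        u = u + s - z                                         # dual update
    return z

# Usage sketch: x is assumed sparse in an inverse-DCT dictionary Psi, Phi is the
# (padded) P-BPBD matrix and y the received measurements.
# Psi = scipy.fftpack.idct(np.eye(n_window), axis=0, norm="ortho")  # columns = DCT atoms
# s_hat = bp_admm(Phi @ Psi, y)
# x_hat = Psi @ s_hat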
D. End-to-End CS System

We demonstrated an end-to-end system of our application by acquiring a BPBD-encoded ECG signal, sampled at 250Hz, from a wearable band, with real-time reconstruction using the Alternating Direction Method.

Fig. 12. Our end-to-end low-power wearable ECG sensing system using Compressive Sensing: real-time embedded ECG CS compression on the band, and ECG recovery on the gateway using the Alternating Direction Method with a DCT basis.

Table 7 shows the reconstruction time of the Alternating Direction Method on the Intel® Edison board (500 MHz, 1GB RAM) for a pre-stored encoded ECG signal, recovering a 500-dimensional ECG window. This shows that the reconstruction is extremely fast and well suited for end-to-end real-time applications.

Iterations for signal recovery | Algorithm execution time
100  | 0.607 msec
500  | 2.860 msec
1000 | 5.689 msec

Table 7. Fast reconstruction on the Intel® Edison board.
VI. CONCLUSIONS

We have demonstrated an end-to-end system based on CS for continuous monitoring of bio-signals in a BSN. Using a symmetrically padded P-BPBD sensing matrix, we obtain a continuous, fine-grained compression curve that allows the user to operate at any desired compression ratio. We found that the CR achievable in a wearable node-gateway system can be significantly enhanced based on the choice of constraints imposed on signal reconstruction. For a given sampling window, we can considerably increase the compression ratio, from CR=14 to CR=23 for a PPG signal, by using a beat-rate accuracy constraint instead of a signal quality (PRD) constraint. Based on power-performance characterization on the Intel® Curie™ compute module, our embedded P-BPBD algorithm is ~100X faster than DWT, with energy consumption less than ~3% of that of DWT at the same reconstruction loss. Further, for real-time monitoring we have demonstrated fast reconstruction using the Alternating Direction algorithm on the Intel® Edison board. For continuous ECG-PPG monitoring applications using wearable sensors, P-BPBD-CS offers a compelling alternative to the DWT.

VII. ACKNOWLEDGEMENTS

The authors thank Mr. Kumar Ranganathan, Manager and Principal Engineer, Intel Labs, for his extensive guidance on the project. We would like to thank our interns Nikita Pendekanti, Krishna Karthick, and Krishna Bhat for their excellent efforts and significant contributions to this project.

REFERENCES

[1] Zaiking, Ogawa, and Hayasema, "The Simplest Measurement Matrix for Compressed Sensing of Natural Images," in IEEE 17th International Conference on Image Processing, 2010.
[2] Ravelomanantsoa, Rabah, and Rouane, "Simple and Efficient Compressive Sensing Encoder for Wireless Body Area Network."
[3] Candès and Wakin, "An Introduction to Compressive Sampling," IEEE Signal Processing Magazine, March 2008.
[4] Allstot, Chen, Dixon, and Gangopadhyay, "Compressed Sensing of ECG Biosignals Using 1-bit Matrices," in International Symposium on Circuits and Systems, IEEE, 2010.
[5] Bajwa, Haupt, et al., "Toeplitz-Structured Compressed Sensing Matrices," in IEEE SSP, 2007.
[6] Yin, Yang, and Zhang, "Practical Compressive Sensing with Toeplitz and Circulant Matrices," in Proceedings of Visual Communications and Image Processing (VCIP), SPIE, 2010.
[7] D. Chae et al., "Performance Study of Compressive Sampling for ECG Signal Compression in Noisy and Varying Sparsity Acquisition," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
[8] Lou, Luo, and Wang, "Data Compression Based on Compressed Sensing and the Wavelet Transform," IEEE Xplore, 2006.
[9] Mamaghanian, Khaled, Atienza, and Vandergheynst, "Compressed Sensing for Real-Time Energy-Efficient ECG Compression on Wireless Body Sensor Nodes," IEEE Transactions on Biomedical Engineering, September 2011.
[10] PhysioNet PhysioBank databases, http://physionet.org/physiobank/database/
[11] J. A. Tropp and A. C. Gilbert, "Signal Recovery from Random Measurements via Orthogonal Matching Pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.
[12] J. F. Yang and Y. Zhang, "Alternating Direction Algorithms for L1-Problems in Compressive Sensing," SIAM Journal on Scientific Computing, 33(1), 250-278, 2011.
[13] V. Behravan et al., "Rate-Adaptive Compressed-Sensing and Sparsity Variance of Biomedical Signals," in 2015 IEEE 12th International Conference on Wearable and Implantable Body Sensor Networks (BSN), IEEE, 2015.
[14] D. L. Donoho, A. Maleki, and A. Montanari, "Message-Passing Algorithms for Compressed Sensing," Proceedings of the National Academy of Sciences, 106(45), 18914-18919, 2009.

