

Published in IET Circuits, Devices & Systems


Received on 27th February 2008
Revised on 31st May 2009
doi: 10.1049/iet-cds.2008.0244

ISSN 1751-858X

Geometric centre tracking of tagged objects using a low power demodulation smart vision sensor

M. Habibi, M. Sayedi
Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
E-mail: mhdhabibi@gmail.com

Abstract: In this study, a modulated light detecting smart CMOS image sensor is presented. The design has the ability to sense asynchronous signals transmitted from electronic markers such as flashing light emitting diodes (LEDs) tagged on moving objects. The geometric centre of the detected region is returned as the output result. With the presented sensor, object localisation and position detection functions are simplified and performed at higher speeds in real time, and the power requirement is reduced. The sensor's in-pixel processing filters out the background image data, detects the modulated marker regions and projects the extracted region onto the two axes, while the geometric centre extraction units placed at each axis identify the coordinates assigned to the marker. The design is less sensitive to object texture than techniques based on edge extraction or binarisation. The sensor has been designed as a 64 × 64 pixel VLSI CMOS chip in a 0.35 µm standard CMOS technology and analysed in the presence of mismatches and noise. Issues such as sensor array scalability, speed and power dissipation are also examined, and the features of the sensor are reported and compared with some previous designs.

1 Introduction

In many machine vision algorithms, it is required to identify a specific region of the image data and to extract some properties of the region (for example, its location) for further processing [1]. Human motion capture and robot location tracking using special image markers, 3D profile acquisition using laser scanners and unmanned aerial vehicle (UAV) positioning are some examples of such tasks [2–4]. Conventionally, high-speed image sensors and DSP processors are used in hardware implementations. The sensor–processor combination requires additional external hardware such as storage elements and controllers, and thus the overall processing power usage is usually high. An alternative approach is the use of smart CMOS image sensors, where some or all of the processing is performed inside the sensor [5, 6]. With the elimination of the external data bus and additional hardware, the power usage can be considerably reduced. Furthermore, it is possible to perform some of the operations in-pixel and thus increase the processing speed.

For region identification and extraction algorithms, some smart image sensors with in-pixel processing capabilities have been previously reported. The sensors presented in [7, 8] assume that the desired target is the maximum intensity in the image and search for the peak brightness in a certain search window. In these techniques, an object with the highest local intensity can be tracked. However, since the highest local maximum feature is not unique in an environment, one object can be confused with another. In [9] an image template matching sensor is presented. The sensor identifies places in the image that match a specific template. Since camera or object movement changes the pattern of the object captured by the sensor, the applications of this sensor are limited. Image binarisation has also been used in smart CMOS image sensors for object tracking [10, 11]. Similarly, the tracking process will face difficulties in this method if the object texture is close to the background. Smart CMOS sensors have also been presented that are capable of detecting light signals modulated at a specific frequency [12–14].


A flashing LED mounted on an object will produce a bright spot in the sensor focal plane. In these sensors, only the modulated signal is extracted and all other components of the captured scene are omitted, by which essentially light filtering is performed. With the determination of the modulated spot location, the coordinates of the actual object can be obtained. Owing to the ultra-low currents of photodiodes, demodulation techniques are used in these sensors to detect the desired signal [15, 16]. The drawback of these sensors is the requirement of a synchronous signal, and hence an external link, for the demodulation procedure. In [17] a demodulation technique which eliminates the need for the synchronous link has been reported, but the procedure is not suitable for in-pixel signal detection. The sensor presented in [18] uses asynchronous serial code reception to identify modulated tags. Owing to the long digital codes used for synchronisation and the adaptive background signal elimination elements, the frame rate is relatively low.

In this paper, an image demodulation CMOS image sensor which eliminates the need for an external synchronisation link is presented. For this purpose, the in-pixel orthogonal demodulation technique is introduced and its circuit structure is presented. With the modulated signals detected in-pixel, the region is projected onto the x and y axes and the geometric centre detectors on each axis extract the coordinates of the marker.

In Section 2, the synchronous demodulation concept which is used in electronic light filtering is explained. The proposed in-pixel orthogonal demodulation method is introduced in Section 3, and in Section 4 the smart sensor structure, including the array of pixels and the geometric centre detection units, is presented. Sections 5 and 6 provide the sensor analyses and simulation results, respectively. In Section 7, conclusions are presented.

2 Synchronous filtering pixel

In CMOS image sensors, direct processing of image data encounters some difficulties. The reason is related to the ultra-low currents produced at the photodiodes by the received radiation. At normal illumination levels, the amount of photocurrent is in the order of a few picoamperes, which places the processing switches in the subthreshold region and also introduces long time constants. Amplification of the photocurrent will not help since, generally, the photocurrent noise itself is relatively high and it would be amplified too. The solution used in the synchronous demodulation method, for the implementation of an electronic bandpass filter, is a correlation technique: the sensor output current is multiplied by a synchronous reference signal and then integrated. With this procedure, not only can the output signal amplitude be increased, but also, because of the bandwidth limitation imposed by the integration capacitor, the effective noise amplitude is limited. Fig. 1 shows the circuit used for this technique. The two switches Q1 and Q2 integrate the photocurrent on either the C1 or the C2 capacitor. The final output result is the voltage difference between the two integrating capacitors; hence, the two switches act as a multiplier, scaling the photocurrent by factors of +1 and −1.

Figure 1 Basic operation principle of synchronous demodulation technique
a Synchronous demodulation circuit. The photodiode is replaced by a light dependent current source and equivalent node capacitance [14]
b Output result of a synchronous demodulation procedure

Approximating the photocurrent and the switch reference gate pulses with sinusoid waveforms, and with the reference signal coherent with the photocurrent, the differential output waveform is as follows:

V_{out2} - V_{out1} = \frac{1}{C_{int}} \int_0^{t_{int}} \left[ I_{back} + I_{sig}\sin(\omega t) \right] \sin(\omega t)\, dt
                    = \frac{1}{C_{int}} \left[ \frac{I_{sig}}{2}\, t_{int} + \frac{I_{back}\left(1 - \cos(\omega t_{int})\right)}{\omega} - \frac{I_{sig}\sin(2\omega t_{int})}{4\omega} \right]
\Rightarrow \left. \left( V_{out2} - V_{out1} \right) \right|_{t_{int} = t_n = 2\pi n/\omega = nT} = \frac{I_{sig}}{2C_{int}}\, nT   (1)

in which I_sig is the photocurrent amplitude, I_back is the background photocurrent, T is the period of the photocurrent and reference signals, ω equals 2π/T, t_int is the integration duration and n is an integer value related to the sample iteration. In this equation, it is assumed that the integration capacitors are far bigger than the photodiode capacitor and that the input signal frequency is much lower than the circuit time constant. To omit additional harmonics, the output differential voltage should be sampled at nT intervals (synchronous with the reference signal).
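As a quick numerical illustration of (1) (a sketch, not the authors' code), the snippet below integrates the demodulated photocurrent over an integer number of reference periods and compares the result with the closed-form term I_sig·nT/(2C_int). The 10 pA signal, 2.5 kHz reference and 100 fF capacitance are representative of values quoted elsewhere in the paper; the 40 pA background level is an assumed example.

```python
# Numerical check of equation (1): correlate the photocurrent with a coherent
# sinusoid reference and integrate over an integer number of periods.
import numpy as np

I_sig, I_back = 10e-12, 40e-12       # modulated and background photocurrents (A)
f_ref, C_int = 2.5e3, 100e-15        # reference frequency (Hz), integration capacitance (F)
omega = 2 * np.pi * f_ref
n_periods = 12                       # roughly a 5 ms integration window at 2.5 kHz

t = np.linspace(0.0, n_periods / f_ref, 200_000)
dt = t[1] - t[0]
i_photo = I_back + I_sig * np.sin(omega * t)                 # photodiode current
v_diff = np.sum(i_photo * np.sin(omega * t)) * dt / C_int    # correlate with reference, integrate

v_closed = I_sig * (n_periods / f_ref) / (2 * C_int)         # I_sig * nT / (2 C_int) from (1)
print(f"numerical  : {v_diff * 1e3:.1f} mV")
print(f"closed form: {v_closed * 1e3:.1f} mV")               # both ~240 mV; the background term cancels at t = nT
```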


Equation (1) shows that the output signal amplitude can be increased by increasing t_int, and that the output term is proportional to the amplitude of the input photodiode signal with angular frequency ω and has no relationship with the background component. Thus, the background illumination is filtered out and the modulated light component remains. If the reference signal and the photocurrent are not coherent, then (1) will not hold. For example, if the reference signal is 90° out of phase in relation to the photodiode signal, then the output result sampled at nT will be zero.

3 Proposed asynchronous filtering pixel

To alleviate the problem of reference signal synchronisation and the need for an external link, an asynchronous demodulation technique is proposed. In this technique, two orthogonal signals sin(ωt) and cos(ωt) are used in the demodulation procedure to produce two differential output signals V_s and V_c. Using these two values, the asynchronous output result can be obtained as follows:

V_s \big|_{t_{int}=nT} = \frac{1}{C_{int}} \int_0^{nT} \left[ I_{back} + I_{sig}\sin(\omega t + \phi) \right] \sin(\omega t)\, dt = \frac{I_{sig}}{2C_{int}}\, nT \cos(\phi)
V_c \big|_{t_{int}=nT} = \frac{1}{C_{int}} \int_0^{nT} \left[ I_{back} + I_{sig}\sin(\omega t + \phi) \right] \cos(\omega t)\, dt = \frac{I_{sig}}{2C_{int}}\, nT \sin(\phi)
\Rightarrow V_{out} = \sqrt{V_c^2 + V_s^2} = \frac{I_{sig}}{2C_{int}}\, nT   (2)

Fig. 2 shows the structure required for the asynchronous filtering procedure, which consists of two demodulation circuits. The two piecewise pulses F1 and F2 are 90° out of phase. It is also assumed that the two photodiodes are closely placed and thus their photocurrents are the same. The circuit operation in the four different phases is as follows: in phase 1 the photocurrent is integrated on C1 and C3, in phase 2 on C1 and C4, in phase 3 on C2 and C4 and finally in phase 4 on C2 and C3. The differential output results are V_s = V_{c1} − V_{c2} and V_c = V_{c3} − V_{c4}. In this case, the final asynchronous demodulation result can be expressed as

V_{out} = |V_c| + |V_s| = |V_{c3} - V_{c4}| + |V_{c1} - V_{c2}|   (3)

The main disadvantages of the structure in Fig. 2 are the requirement of two closely spaced photodiodes, the charge sharing of the integration capacitors through the photodiode capacitor and the number of capacitors required for integration.

In [17] an alternative approach is used to evaluate V_out using a single demodulation circuit, where V_c and V_s are extracted in two different but consecutive integration cycles. With this method, the in-pixel extraction of the output demodulation result is difficult to implement and, furthermore, the maximum frame rate is halved.

For a practical in-pixel implementation of the asynchronous demodulation circuit, it is noted that (3) can be rewritten as

V_{out} = \max\left( \left| (V_{c3} + V_{c1}) - (V_{c4} + V_{c2}) \right|,\ \left| (V_{c3} + V_{c2}) - (V_{c4} + V_{c1}) \right| \right)   (4)

According to (4), two complementary demodulation sequences are possible, and the one producing the larger absolute value should be taken as the result. Since the integration capacitors are all equal, in each sequence the states that integrate the photocurrent on capacitors with identical signs can be combined into one integration cycle, and the states in which integration is performed on capacitors with opposite signs can be transferred to a no operation phase (NOP).

Figure 2 Double section demodulation circuit required for asynchronous filtering and corresponding control waveforms and phases
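The phase independence promised by (2) and (4) can be checked numerically with the same idealised sinusoid model; the sketch below sweeps the unknown phase and verifies that sqrt(V_c^2 + V_s^2) stays constant and that the larger of the two complementary-sequence combinations never drops below the coherent single-sequence value. The component values are the same illustrative assumptions used earlier.

```python
# Numerical sketch of equations (2)-(4) for a tag signal of arbitrary phase.
import numpy as np

I_sig, I_back, f_ref, C_int = 10e-12, 40e-12, 2.5e3, 100e-15
n_periods = 12
t = np.linspace(0.0, n_periods / f_ref, 200_000)
dt = t[1] - t[0]
omega = 2 * np.pi * f_ref

worst_quadrature, worst_sequence = np.inf, np.inf
for phi in np.linspace(0.0, 2 * np.pi, 73):
    i_photo = I_back + I_sig * np.sin(omega * t + phi)
    v_s = np.sum(i_photo * np.sin(omega * t)) * dt / C_int
    v_c = np.sum(i_photo * np.cos(omega * t)) * dt / C_int
    worst_quadrature = min(worst_quadrature, np.hypot(v_s, v_c))                   # equation (2)
    worst_sequence = min(worst_sequence, max(abs(v_c + v_s), abs(v_c - v_s)))      # equation (4)

ideal = I_sig * (n_periods / f_ref) / (2 * C_int)
print(f"ideal coherent output       : {ideal * 1e3:.0f} mV")
print(f"worst-case sqrt(Vc^2+Vs^2)  : {worst_quadrature * 1e3:.0f} mV")   # stays at ~the ideal value
print(f"worst-case max of sequences : {worst_sequence * 1e3:.0f} mV")     # never falls below the ideal value
```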


The circuit used for this purpose is shown in Fig. 3. The demodulation circuit is modified to perform the different phases shown in Fig. 3a for the two possible complementary sequences. Among the results obtained from the two sequences, the greater absolute one is accepted. If the signal is to be demodulated using the first sequence of (4), then in phase 1, since the voltage integrated on C1 and C3 is eventually summed, the operation can be converted into a single integration on C1. Demodulating the photocurrent signal using the second sequence results in a 'no operation' in phase 1, since the voltages produced on C1 and C3 in the second sequence of (4) are subtracted from each other and cancel out. The states in the other phases are obtained similarly.

In the NOP phases, where both Q1 and Q3 are off, switch Q2 connects the photodiode node to a constant voltage. Without the Q2 switch, the photocurrent's charge would be temporarily integrated on the photodiode node and, upon the switching of the transfer switches, it would be integrated on the integration capacitors. Furthermore, this switch eliminates the charge sharing between the two integration capacitors through the photodiode node and limits charge sharing to the integration capacitors and a constant voltage.

The absolute difference detector in Fig. 3b is used in-pixel to detect the absolute difference of the result obtained for each sequence and to compare it with an appropriate threshold. As can be seen in the figure, the differential pair produces the differential voltage on one node and its negative value on the other node. If either of these voltage levels is able to turn Ma or Mb on, the absolute voltage difference between the two integration capacitors has exceeded a threshold value. Hence the output will be connected to Vref2 by the Ma–Mb NOR gate, which indicates that modulated light has been detected. To reduce power dissipation, the detector is only momentarily turned on at the end of each demodulation cycle. In practice, the demodulation procedure need not be performed with two different sequences in every cycle; instead, the demodulation is performed using one sequence, and if modulated light is detected, the output of the absolute detector turns low. Consequently, if the output remains high after the appropriate integration time, the other sequence is applied immediately after the first one. Should the output still remain high, it is inferred that no modulated signal exists in the scene, and demodulation continues with the complementary sequence until a modulated signal is detected. When a modulated marker is found, the demodulation continues with the same appropriate sequence. Eventually, because of the slight frequency difference between the modulated signal and the reference signal, the modulated signal will not be identified in one cycle. At this point, the operation continues using the complementary sequence and the modulated signal tracking continues.

Figure 3 The proposed asynchronous filtering pixel
a Modified asynchronous demodulation circuit and the in-pixel detector. The detector is activated if the absolute voltage difference between the integration capacitors exceeds a threshold
b Corresponding control waveforms and phases of the proposed technique
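The cycle-by-cycle policy described above (stay with one sequence while the detector fires, otherwise apply the complementary sequence) can be summarised in a small behavioural model. The sketch below is purely illustrative: the demodulator is abstracted to a phase-dependent magnitude, and the 0.24 V full-scale value, the scaling and the drift rate are assumptions; only the 60 mV threshold comes from Section 5.1.

```python
# Behavioural sketch of the sequence-selection policy (not the transistor-level circuit).
import math

V_FULL = 0.24          # assumed full-scale demodulator output (V), illustration only
THRESHOLD = 0.06       # detector threshold (60 mV, quoted in Section 5.1)

def demod_output(sequence, phase):
    """Abstract |output| of the two complementary sequences for a tag at `phase`."""
    proj = math.cos(phase) + math.sin(phase) if sequence == 0 else math.cos(phase) - math.sin(phase)
    return V_FULL * abs(proj) / 2          # arbitrary scaling, chosen only for illustration

sequence, phase = 0, 0.0
for cycle in range(8):                     # eight 5 ms demodulation cycles
    detected = demod_output(sequence, phase) > THRESHOLD
    print(f"cycle {cycle}: sequence {sequence}, detected={detected}")
    if not detected:                       # detector output stayed high -> no tag seen this cycle
        sequence ^= 1                      # apply the complementary sequence next
    phase += 0.35                          # slow drift between the tag and the reference clock

# Around cycle 6 the first sequence misses the tag and the controller switches to
# the complementary sequence, which recovers the detection on the next cycle.
```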


4 Sensor structure
4.1 Overview

Fig. 4 shows the structure of the asynchronous filtering smart sensor. A two-dimensional array of modulated light detecting pixels constructs the sensor focal plane. The clock phase signals are common among the pixels. The surrounding scene is focused onto the focal plane using a lens. An object tagged with a flashing LED will cause a region on the two-dimensional array to generate a modulated photocurrent signal. The region on the focal plane where the LED light shines is related to the object's actual position by a mapping known as the camera matrix. By identifying the region which receives the modulated light signal, the location of the object can be identified. In every integration cycle, the detector output of each pixel is activated if modulated light is received by the pixel photodiode. A row access method allows the pixel detector results to be read out one row at a time. Owing to the in-pixel detectors, it is possible to project the region that receives modulated light onto the horizontal and vertical axes. Projection onto the horizontal axis is possible by simply accessing all the rows at the same time. In this situation, the decoder outputs on each column are logically summed (OR function) and a 1-bit logical projection of the region illuminated by modulated light is obtained on the horizontal axis. Similarly, for projection onto the vertical axis, a series of column access lines is implemented in the sensor. By simultaneous selection of the column access lines, the region is projected onto the vertical axis using an OR function. The projection method reduces the data that must be transferred to the sensor periphery. The actual position assigned to each region is obtained by two coordinate generation blocks located at the periphery on the x and y axes. Their function is to find the geometric centre of the projections on each axis.
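A minimal software model of the projection step is given below: the binary in-pixel detector map is OR-reduced along rows and columns, mirroring what the simultaneous row/column access does in hardware. The 64 × 64 array size matches the design; the marked region is an arbitrary example.

```python
# OR-projection of the in-pixel detector map onto the two axes (logic model only).
import numpy as np

detect = np.zeros((64, 64), dtype=bool)   # 64 x 64 array of in-pixel detector outputs
detect[20:25, 40:44] = True               # an example 5 x 4 region receiving modulated light

proj_x = detect.any(axis=0)               # OR down each column  -> horizontal-axis projection
proj_y = detect.any(axis=1)               # OR across each row   -> vertical-axis projection

print("columns hit:", np.flatnonzero(proj_x))   # 40..43
print("rows hit   :", np.flatnonzero(proj_y))   # 20..24
```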

Figure 4 Structure of the asynchronous filtering sensor
a Proposed sensor structure
b Decomposition of sensor structure to the vertical projection switches
c Horizontal projection switches

4.2 Coordinate generators

The coordinate generator is shown in Fig. 5a for the 64 × 64 pixel sensor. Inputs In0–In63 of the coordinate generator are connected to the outputs of the projection access gates in the basic pixel array. Outputs Out0–Out63, which represent the centre coordinates, are connected to an encoder and output at the chip pins. The coordinate generator block in Fig. 5 operates by finding the edges of the projection in the first stage. The two edges are then shifted right and left using two shift registers. The location where the two edges meet is the centre point of the projection. When the centre point is found, the clock pulse of the shift registers (Aux_Clk) is disabled to stop the shifting. On the next cycle of the activate signal (new image frame), the procedure is repeated. The shift register clock frequency must be such that a centre point is found before the new cycle begins. During the centre finding procedure, the size of the segment can also be extracted by counting the number of shift register clock cycles required to generate the centre detected signal (Ack). The timing waveforms required to control the coordinate generator are shown in Fig. 5b.
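The edge-shifting behaviour of the coordinate generator can likewise be modelled in a few lines. This is a behavioural sketch of the logic described above, not of the register-level design; the projection vector is the same hypothetical example used earlier.

```python
# Behavioural model of the coordinate generator: edge detection, then inward shifting.
import numpy as np

def centre_of_projection(proj):
    """Return (centre index, number of shift clocks) for a 1-bit projection."""
    hits = np.flatnonzero(proj)
    left, right = int(hits[0]), int(hits[-1])   # edge-detection stage
    clocks = 0
    while left < right:                         # shift the two edges towards each other
        left += 1
        right -= 1
        clocks += 1
    return left, clocks                         # clock count ~ half the segment width (size information)

proj_x = np.zeros(64, dtype=bool)
proj_x[40:44] = True
centre, clocks = centre_of_projection(proj_x)
print(f"centre column = {centre}, shift clocks = {clocks}")
# -> centre column = 42, shift clocks = 2; an even-width region resolves to one of its two middle pixels.
```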


Figure 5 The coordinate generator
a Structure of the coordinate generator
b Coordinate generator timing sequence

4.3 Sensor layout

The layout design of the sensor is performed in a 0.35 µm standard CMOS technology. Fig. 6 shows the complete layout design and the layout of one individual pixel.

Figure 6 Layout design of the proposed sensor
a Layout of the complete design
b Layout of the basic pixel

5 Sensor analyses

In this section, the parameters that affect the sensor operation, the sensor limitations, and the scalability and power dissipation tradeoffs are presented.


5.1 Mismatch and noise analyses

The demodulation procedure and the absolute difference detection of the proposed pixel are affected by several parameters, such as integration capacitor mismatches, switch mismatches, supply voltage variations, random noise, variations in background illumination, temperature variations and the frequency error present in the modulated markers' frequencies. These effects can change the output result of the demodulator section and also the threshold value of the absolute difference detector. In order for the sensor to operate correctly, the detector's maximum threshold activation voltage should be lower than the minimum voltage produced by the demodulator section in the presence of a detectable modulated light signal. On the other hand, the detector's minimum deactivation voltage should be higher than the maximum voltage produced by the demodulator section in the absence of modulated light at the specific reference frequency. The result of mismatch, noise and temperature variation analyses using Monte Carlo and noise simulations is shown in Fig. 7. As the graph shows, a modulated photocurrent of 10 pA produces a 100 mV output on the demodulator in the absence of parameter variation. Considering the overall voltage change on the absolute difference detector and the demodulation section output, a threshold voltage level of 60 mV can be used to detect modulated photocurrent signals with amplitudes as low as 10 pA. Since each pixel's current-to-light intensity ratio is equal to 15 pA per W/m², the sensitivity (minimum detectable modulated light intensity) is equal to 0.6 W/m².

Figure 7 Effect of different factors on the detector threshold voltage and the demodulator output

5.2 Sensor dynamic range

The maximum amount of background illumination which can be handled by the sensor is limited by the integration capacitors' discharge rate. With background illumination, the integration capacitors discharge at a constant rate and the common voltage of the difference detector decreases. The minimum acceptable voltage for the difference detector is approximately 1 V; thus the capacitor voltage can change from the precharge voltage (approximately 2 V) to the detector's minimum common-mode voltage. A rough expression for the maximum background illumination is therefore

\frac{I_{back\_max}\, \Delta t_{max}}{2 C_{int}} = \Delta V_{max}   (5)

where ΔV_max ≈ 1 V, Δt_max ≈ 5 ms and C_int ≈ 100 fF; thus I_back_max ≈ 40 pA. As a result, the dynamic range of background illumination is approximately equal to 2.4 W/m².
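A back-of-the-envelope check of (5) and of the quoted sensitivity follows, using only the values stated in the text; the small differences from the quoted 0.6 W/m² and 2.4 W/m² figures reflect rounding and circuit-level margins in the original analysis.

```python
# Quick numerical check of equation (5) and the sensitivity figures.
dV_max = 1.0        # allowed drop of the capacitor common-mode voltage (V)
dt_max = 5e-3       # integration cycle (s)
C_int  = 100e-15    # integration capacitance (F)
resp   = 15e-12     # photocurrent per unit irradiance (A per W/m^2)
I_min  = 10e-12     # minimum detectable modulated photocurrent (A)

I_back_max = 2 * C_int * dV_max / dt_max                   # rearranged form of (5)
print(f"I_back_max  ~ {I_back_max * 1e12:.0f} pA")          # ~40 pA
print(f"background  ~ {I_back_max / resp:.1f} W/m^2")       # ~2.7 W/m^2 (text quotes ~2.4)
print(f"sensitivity ~ {I_min / resp:.2f} W/m^2")            # ~0.67 W/m^2 (text quotes 0.6)
```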


5.3 Sensor tracking speed and resolution limitation

A square target region with an n × n pixel expansion and acceptable modulated illumination can move with a speed of n pixels per 5 ms without being lost, since in each integration cycle there exists a group of pixels that receives modulated light during the entire procedure, and enough voltage is accumulated on their integration capacitors to activate the detector. When the object is moving during the integration cycle, only some pixels complete the entire demodulation procedure; thus the error of the sensor is 1 pixel when the object is stationary and n pixels when it is moving. The situation is illustrated in the simulation results section. If an m × m metre field of view is focused onto the 64 × 64 pixel chip focal plane, the 1 pixel error is equivalent to an m/64 metre object position error on each axis.

5.4 Structure scalability

The sensor array can be scaled either to widen the field of view or to increase the tracking resolution (as mentioned earlier, the tracking error is equal to m/q metre for an m × m metre field of view focused onto a q × q pixel chip focal plane).
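For a concrete feel for these limits, the short sketch below evaluates the stationary and moving position errors and the maximum trackable speed for a hypothetical 2 m × 2 m field of view and a 4 × 4 pixel marker region; both of those values are assumptions chosen only for illustration.

```python
# Rough arithmetic for the tracking limits of Sections 5.3 and 5.4.
m_field  = 2.0          # field-of-view side length (m)  -- assumed example value
q_pixels = 64           # focal-plane resolution (pixels per side)
n_region = 4            # side of the illuminated marker region (pixels) -- assumed
t_cycle  = 5e-3         # demodulation cycle (s)

print(f"stationary error : {m_field / q_pixels * 100:.1f} cm (1 pixel)")
print(f"moving error     : {n_region * m_field / q_pixels * 100:.1f} cm ({n_region} pixels)")
print(f"max speed        : {n_region * (m_field / q_pixels) / t_cycle:.1f} m/s "
      f"({n_region} pixels per {t_cycle * 1e3:.0f} ms cycle)")
```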


The power consumption in an m × m pixel sensor is roughly expressed by (6):

P_{diss} = m^2 \left( V_{ref1}^2 C_{int} f + V_{ref1} I_{mean} + \frac{P_{cmp}\, t_{on}}{5\ \text{ms}} + P_{gen} \right) + P_{etc}   (6)

where V_ref1 ≈ 2 V, f is equal to the integration frequency (200 Hz), I_mean is the average photodiode current, P_cmp is the static and dynamic power dissipation of one detector unit, t_on is the time required for the detectors to be activated (20 µs), P_gen is the dynamic power dissipation of one coordinate generator cell (the static dissipation is negligible in this case) and P_etc covers other sources of power dissipation (I/O pads, routing etc.). The first term of P_diss is due to the dynamic power dissipation of the demodulator capacitances, the second term is related to the power usage of the photodiodes, and the third term is related to the power usage of the comparators, which should be on for t_on seconds in each clock cycle (5 ms). The fourth term is related to the coordinate generator. Although the number of coordinate generator cells increases linearly with m, the frequency of this unit should also increase with m to find the centre of the projections in the appropriate time; thus the power usage of this unit scales with m².

Simulation results show that the main contributions are the power dissipation because of I_mean and P_cmp, and that the power usage increases with the sensor array size.
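The sketch below evaluates (6) for the 64 × 64 array. V_ref1, C_int, f, t_on and the 5 ms cycle are taken from the text; P_cmp, P_gen, P_etc and I_mean are not given numerically in the paper, so the values used here are assumed placeholders chosen only to land in the same range as the Table 1 figures.

```python
# Evaluation of the power model (6) with assumed placeholder terms.
m      = 64
V_ref1 = 2.0        # V
C_int  = 100e-15    # F
f_int  = 200.0      # integration (frame) frequency, Hz
I_mean = 10e-12     # average photodiode current (A), moderate illumination
t_on   = 20e-6      # detector on-time per 5 ms cycle (s)
P_cmp  = 6e-6       # assumed comparator power while on (W)
P_gen  = 1e-9       # assumed dynamic power per coordinate-generator cell (W)
P_etc  = 80e-6      # assumed pads/routing overhead (W)

P_diss = m**2 * (V_ref1**2 * C_int * f_int        # demodulator capacitance switching
                 + V_ref1 * I_mean                # photodiode term
                 + P_cmp * t_on / 5e-3            # duty-cycled detector term
                 + P_gen) + P_etc                 # coordinate generator + overhead
print(f"P_diss ~ {P_diss * 1e6:.0f} uW")          # ~180 uW with these placeholders, same order as Table 1
```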

6 Simulation results

To show the functionality of the proposed technique, the circuits are implemented in a 0.35 µm standard CMOS technology and post-layout extractions are used in the simulations. Owing to charge sharing effects, the demodulation frequency is chosen as 2.5 kHz. The radiation striking the photodiode contains two components: one is background illumination and the other is modulated light. In moderate outdoor illumination, the amount of photocurrent produced in each photodiode is roughly 10 pA.

The frequency response of the designed asynchronous in-pixel filter is shown in Fig. 8. In the figure, the output is plotted against the modulated signal frequency. The output is taken as the maximum of the two absolute results produced by the two complementary sequences. The reference signal frequency remains constant and equal to 2.5 kHz. As the figure shows, the peak output is produced for modulated signals with a frequency of 2.5 kHz and drops sharply at other frequencies.

Figure 8 Frequency response of the presented pixel

Noise analyses show that the signal-to-noise ratio is increased during the integration process. This is due to the fact that the signal level increases proportionally with time, whereas the noise voltage level increases with a root function relationship with time [19].

Fig. 9 shows the simulation results for the two complementary sequences demodulating a light signal with a frequency equal to that of the reference signal but with an arbitrary phase difference. The differential voltage of the integration capacitors is considered as the output in the figures. In row (a), the output is plotted against time while the phase difference is varied from 0° to 90°. As can be seen, after an appropriate integration time (5 ms), using the first sequence, the absolute value of the result produced at the output is relatively constant and does not vary with the change of phase, whereas the result produced by the second sequence changes with phase. In row (b), the phase difference is varied from 90° to 180°. In this case, the inverse situation takes place and the output remains constant using the second demodulation sequence. In row (c), the output is plotted against the phase difference for the integration time of 5 ms. When the result is constant over a specific phase range using one sequence, it will be changing in that range using the other sequence. As explained earlier, it is not necessary to constantly demodulate a signal using both sequences. Instead, one sequence should be used, and if the absolute output produced at the capacitor nodes falls below a certain threshold, the other sequence should be applied.

Figure 9 Differential demodulated signal output variations with phase
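The frequency selectivity shown in Fig. 8 can be approximated with the same idealised sinusoid model used earlier: correlate a tag signal at f_sig with the fixed 2.5 kHz quadrature references over one 5 ms cycle and keep the larger complementary-sequence output. This ignores charge sharing and switch non-idealities, so it only reproduces the general shape of the response, not the simulated curve.

```python
# Idealised sketch of the in-pixel filter's frequency selectivity.
import numpy as np

f_ref, t_int, C_int, I_sig = 2.5e3, 5e-3, 100e-15, 10e-12
t = np.linspace(0.0, t_int, 100_000)
dt = t[1] - t[0]

def response(f_sig):
    i_photo = I_sig * np.sin(2 * np.pi * f_sig * t)
    v_s = np.sum(i_photo * np.sin(2 * np.pi * f_ref * t)) * dt / C_int
    v_c = np.sum(i_photo * np.cos(2 * np.pi * f_ref * t)) * dt / C_int
    return max(abs(v_s + v_c), abs(v_s - v_c))        # larger of the two complementary sequences

for f in (1.0e3, 2.0e3, 2.4e3, 2.5e3, 2.6e3, 3.0e3, 5.0e3):
    print(f"{f / 1e3:4.1f} kHz -> {response(f) * 1e3:6.1f} mV")   # largest at 2.5 kHz, falling off elsewhere
```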


For the purpose of overall chip simulation, the photocurrent produced in the photodiodes is modelled by a time-dependent current source. The waveforms of the current sources are obtained from an illumination netlist file, which is created by custom software. The software converts the image sequences to the amount of illumination striking each photoreceptor. Simulation is performed by merging the main circuit netlist with the illumination netlist obtained from the image sequence. A test image sequence captured for simulation is shown in Fig. 10a, where a two-dimensional arm moves across the plane in both x and y directions. Fig. 10b shows part of the sequence during a 5 ms demodulation cycle. The LED flashes as it moves a few pixels across the scene during the 5 ms integration time. The result of the demodulation and modulated region detection after the 5 ms integration is shown in Fig. 10c, where only the pixels constantly illuminated with modulated light have been detected. In Figs. 10d and 10e, the digital output of the sensor is shown for the complete image sequence.

Figure 10 Chip simulation results
a Test image data used to simulate modulated tag detection during motion
b Sequence frames during a single 5 ms demodulation cycle
c Result of the demodulation and modulated region detection after the 5 ms demodulation cycle
d Sensor y-axis digital output
e Sensor x-axis digital output

The specifications of the proposed pixel and sensor are shown in Table 1. The presented structure is also compared with some existing designs in Table 2. The main advantages of the chip are its low power consumption because of the low clock frequency processing, its high sensitivity because of the long integration period used in the demodulation procedure, no requirement for a synchronisation link, and the ability to obtain the coordinates of the marker without any external processors.

Table 1 Proposed asynchronous pixel and sensor specifications

process: 0.35 µm 2-poly, 3-metal, standard CMOS
chip die size: 3 mm × 3 mm
number of pixels: 64 × 64
maximum frame rate: 200 frames/s
maximum reference frequency: 2.5 kHz
pixel maximum power usage: 0.8 µW/pixel
pixel average power usage: 30 nW/pixel
pixel fill factor: 25%
pixel modulated light sensitivity: 0.6 W/m²
pixel background dynamic range: 2.3 W/m²
sensor average power usage: 190 µW

Table 2 Comparison of the proposed sensor with some previous designs

Sensor | Tech, µm | Tracking technique | Array size | Speed, FPS | Accuracy, pixels | Power, mW
[7] | 0.6 | local peak tracking | 11 × 11 | 10 k | 0.1 | —
[11] | 0.5 | binarisation | 64 × 64 | 1 k | 1 | 112
[10] | 0.18 | binarisation | 80 × 80 | 1 k | 0.1 | 30
[8] | 0.8 | global peak tracking | 20 × 20 | 3 k | 0.013 | 15
[18] | 0.35 | direct code reception | 128 × 128 | 30 | 1 | 682
[13] | 0.6 | synchronous demodulation | 64 × 64 | — | 1 | —
[15] | 0.6 | synchronous demodulation | 120 × 110 | 2 k | 1 | 250
this work | 0.35 | asynchronous demodulation | 64 × 64 | 200 | 1 | 0.2

7 Conclusions

A low power, modulated marker tracking sensor was designed in a 0.35 µm standard CMOS technology. The sensor acts as an electronic bandpass filter that rejects all background illumination while detecting the modulated signal source. As a simple application, the location of the modulated signal on a two-dimensional array of pixels can be used to determine the position of the object carrying the modulated light tag. Unlike previous designs, no additional link is required for signal synchronisation. The modulated light detection procedure is performed in-pixel, which holds the potential for in-pixel data processing and data bandwidth reduction, suitable in distributed sensor network configurations. The functionality and analyses of the proposed technique have been demonstrated in the 0.35 µm CMOS technology.


8 References

[1] HIGHTOWER J., BORRIELLO G.: 'Location systems for ubiquitous computing', IEEE Comput., 2001, 34, (8), pp. 57–66

[2] REKIMOTO J., AYATSUKA Y.: 'CyberCode: designing augmented reality environments with visual tags'. ACM Designing Augmented Reality Environments (DARE 2000), 2000, pp. 1–10

[3] LIN W., JIA S., ABE T., TAKASE K.: 'Localization of mobile robot based on ID tag and WEB camera'. IEEE Int. Conf. Robotics and Mechatronics, 2004, pp. 851–856

[4] EL GAMAL A., ELTOUKHY H.: 'CMOS image sensors', IEEE Circuits Devices Mag., 2005, 21, pp. 6–20

[5] NI Y.: 'Smart image sensing in CMOS technology', IEE Proc. Circuits Devices Syst., 2005, 152, pp. 547–555

[6] MAMMARELLA M., CAMPA G., NAPOLITANO M.R., FRAVOLINI M.L., GU Y., PERHINSCHI M.G.: 'Machine vision/GPS integration using EKF for the UAV aerial refueling problem', IEEE Trans. Syst. Man Cybern., 2008, 38, pp. 791–801

[7] AKITA J., WATANABE A., TOOYAMA O., MIYAMA M., YOSHIMOTO M.: 'An image sensor with fast object's position extraction function', IEEE Trans. Electron Devices, 2003, 50, pp. 184–190

[8] VIARANI N., MASSARI N., GONZO L., GOTTARDI M., STOPPA D., SIMONI A.: 'A fast and low power CMOS sensor for optical tracking'. Proc. 2003 Int. Symp. Circuits and Systems (ISCAS'03), 2003, vol. 4, pp. 796–799

[9] ETIENNE-CUMMINGS R., POULIQUEN P., LEWIS M.A.: 'Single chip for imaging, colour segmentation, histogramming and template matching', Electron. Lett., 2002, 38, pp. 172–174

[10] BURNS R.D., SHAH J., HONG C., ET AL.: 'Object location and centroiding techniques with CMOS active pixel sensors', IEEE Trans. Electron Devices, 2002, 50, pp. 2369–2377

[11] KOMURO T., ISHII I., ISHIKAWA M., YOSHIDA A.: 'A digital vision chip specialized for high-speed target tracking', IEEE Trans. Electron Devices, 2003, 50, pp. 191–199

[12] LANGE R., SEITZ P.: 'Solid-state time-of-flight range camera', IEEE J. Quantum Electron., 2001, 37, (3), pp. 390–397

[13] OHTA J., YAMAMOTO K., HIRAI T., ET AL.: 'An image sensor with an in-pixel demodulation function for detecting the intensity of a modulated light signal', IEEE Trans. Electron Devices, 2003, 50, pp. 166–172

[14] ANDO S., KIMACHI A.: 'Correlation image sensor: two-dimensional matched detection of amplitude-modulated light', IEEE Trans. Electron Devices, 2003, 50, (10), pp. 2059–2065

[15] OIKE Y., IKEDA M., ASADA K.: 'A 120 × 110 position sensor with the capability of sensitive and selective light detection in wide dynamic range for robust active range finding', IEEE J. Solid-State Circuits, 2004, 36, pp. 246–251

[16] PITTER M.C., LIGHT R.A., SOMEKH M.G., CLARK M., HAYES-GILL B.R.: 'Dual-phase synchronous light detection with 64 × 64 CMOS modulated light camera', Electron. Lett., 2004, 40, pp. 1404–1405

[17] WADA T., TAKAHASHI M., KAGAWA K., OHTA J.: 'Laser pointer as a mouse'. SICE Annual Conf., Kagawa University, 2007, pp. 369–372

[18] OIKE Y., IKEDA M., ASADA K.: 'A smart image sensor with high-speed feeble ID-beacon detection for augmented reality system'. Proc. 29th European Solid-State Circuits Conf., 2003, pp. 125–128

[19] TIAN H., FOWLER B., EL GAMAL A.: 'Analysis of temporal noise in CMOS photodiode active pixel sensor', IEEE J. Solid-State Circuits, 2001, 36, pp. 92–101

