Abstract
The project was to develop an Android application for light painting photography. The application extrudes 3D models while a picture is taken in a long exposure, resulting in a photograph of 3D objects hovering in the air and creating a futuristic effect. The first part of the project recreates a few similar existing applications. The second part is research based, exploring the possibility of using the phone's sensors to sense device motion and incorporating this knowledge into the application. It particularly focuses on two Android sensors: the rotation vector sensor and the linear acceleration sensor. The research part describes many choices made during development, including the representation of rotation (Euler angles vs. quaternions), the choice of accelerometer (acceleration sensor vs. linear acceleration sensor), and so on. It also summarises some of the relevant analysis in this field, such as accelerometer noise and filtering. The research part of the project was challenging, as the phone's accelerometer sensor had not previously been used to find the position of the device.
The project delivered a light painting application with rotation sensing capabilities, and also carried out substantial research on the double integration of the acceleration obtained from the MEMS sensors incorporated in today's smartphones.
Table of contents
1. Introduction
1.1. Background information
1.2. Problem statement
1.3. Project's aim and goals
1.4. Organization of the report
2. Related work
2.1. Outline of the sources of information and related work
2.2. Technologies Used
3. Design and implementation
3.1. Loading a model into the application and fitting it into the screen
3.2. Light painting iteration 1. Slicing 3D geometry
3.3. Camera settings for light painting
3.4. ImageAverager or if you are too poor for a DSLR
3.5. Light painting iteration 2. Device-aware rotation of the geometry
3.6. Light painting iteration 3. Device-aware extrusion of the geometry
3.7. GraphHelper
4. Digital Signal Processing or What I would do if I had more time
5. Conclusion and evaluation
5.1. Future work
5.2. Evaluation
Chapter 1. Introduction
1.1. Background information
Light painting is an old photography technique: light traces are photographed in a dark location using a long exposure, with a light source such as a torch acting as a real-world paintbrush. The technique was originally developed for research purposes, as part of a national program of psychic and physical regeneration and military preparedness, and was used to study the movements of the human body, especially those of gymnasts and soldiers. [1] The first known light painting photograph of this kind, called Pathological walk from in front, was taken by attaching incandescent bulbs to the joints of a person. Like many other military inventions, light painting has since been adapted for more peaceful purposes by ordinary photographers as an artistic endeavour.
With the emergence of smartphones and the mass development of applications, the idea migrated to the app markets of Android phones, iPhones and iPads. Most such applications use the device as a simple torch, i.e. a circle in different colours and sizes, or to draw holographic 2D or 3D texts.
The most advanced of these applications is Holographium, currently available in the App Store. It has many features, such as importing logos, icons and photos in PNG, BMP, JPG and GIF formats and converting them to 3D objects. It also offers user-definable extrusion time and depth. However, the idea for my application came from Making Future Magic by the Dentsu London studio.
1.2. Problem statement
In order to take a good light painting photo, it is important to balance the exposure time, the speed of the extrusion and the screen angle. That is why it normally takes many attempts to get it right; most photos come out squashed or blurred, with intrusive haziness. While the exposure time depends on the camera settings, the extrusion speed and the screen angle depend on the person dragging the phone. It should be possible to adjust the 3D model being extruded according to the movement and rotation of the device, and so make the extrusion process independent of the human factor.
1.3. Project's aim and goals
Make it possible to load any object so that the object's centre is in the centre of the phone screen and the object fits into the screen.
3. Move a small virtual window along the model in order to get slices, or cross-sections, of the model, similar to the CAT scan used in medical imaging. These slices are rendered in the application so that later, while dragging the phone in front of the camera and taking a photo in a long exposure, the slices are extruded and the composition of these 2D slices forms a 3D image in the photo. This is very similar to stop motion, where hundreds of photos of still objects in different positions form the illusion of movement.
4. Add some functionality to the user interface, like buttons to increase and decrease the near and far planes of the window, i.e. to specify the thickness of the 3D painting.
To make the process independent of the human factor, it is possible to sense the device in the environment and adjust the 3D model accordingly. Therefore the following goals were set as well:
5. Find the appropriate sensor to sense how much the phone has been rotated, i.e. what the angle of rotation of the phone is, and use this information to keep the 3D model stable in its original orientation, independent of the rotation of the phone.
6. Use the appropriate sensor to extrude slices as quickly as the phone is dragged.
Goals 5 and 6 are where the capabilities of the Android sensors were to be explored and used to find the rotational angle and the speed of the device's movement. This is the research part of the project, which aims to see whether this is possible, whether the sensors are accurate enough and whether they are fit for purpose.
1.4. Organization of the report
The report is organised in a sequential fashion, building up the application as it advances.
Chapter 2. Related work
In this chapter I will outline and reference the sources of information (research papers, source code and books) which assisted my project. I will also list the software tools I used during development.
Chapter 3. Design and implementation
This chapter is the core of the report, where I'll describe the development process of each iteration and show its deliverables. I'll also go through the little helper applications which I've built to assist my research.
Chapter 4. Digital Signal Processing or What I would do if I had more time
This chapter discusses the types of noise in the accelerometer sensor that constrain the realisation of some ideas, and also describes different filtering techniques that might be used to eliminate this noise.
Chapter 5. Conclusion and evaluation
This is the capstone chapter, where I will sum up the project and its outcomes and evaluate the work done.
Chapter 2. Related work
2.2. Technologies Used
This is a brief summary of the technologies used in the project, and of how and where they were used.
The Android SDK (Software Development Kit) provides the API libraries and developer tools necessary to build, test, and debug apps for Android, including the Eclipse IDE and the Android Development Tools.
Eclipse IDE is a software development environment with an extensible plugin system for customising its base workspace.
ADT (Android Development Tools) is a plugin for Eclipse that provides a suite of Android-specific tools for project creation, building, packaging, installation and debugging, as well as Java programming language and XML editors and, importantly, integrated documentation for the Android framework APIs.
OpenGL ES is an API for programming advanced 3D graphics on embedded devices such as smartphones, consoles and vehicles. It is an OpenGL API modified to meet the needs and constraints of embedded devices, such as limited processing capability and memory availability, low memory bandwidth, power consumption concerns, and the lack of floating-point hardware.
Rajawali is a 3D framework for Android built on top of the OpenGL ES 2.0 API. Of its many features I have used the .obj file importer, frustum culling and quaternion-based rotations.
MeshLab is a system for the processing and editing of unstructured 3D triangular meshes. The system helps to process big, unstructured models by providing a set of tools for editing, cleaning, healing, inspecting, rendering and converting such meshes. I downloaded many open source .obj files from different web sites to load into my application. Preprocessing those files in MeshLab before using them in the application helped to remove duplicated and unreferenced vertices, null faces and small isolated components, as well as providing coherent normal unification, flipping, erasing of non-manifold faces and automatic filling of holes.
Octave is a high-level programming language, primarily intended for numerical computations. It uses gnuplot, a graphing utility, for plotting data.
Chapter 3. Design and implementation
3.1. Loading a model into the application and fitting it into the screen
In my application I've used models defined in the geometry definition file format .obj. This is a simple data format that represents the essential information about 3D geometry: the position of each vertex, the UV position of each texture coordinate vertex, the faces and the normals. This information takes hundreds of lines and looks similar to this:
1. v 3.268430 -28.849100 -135.483398 0.752941 0.752941 0.752941
2. vn 1.234372 -4.395962 -4.233196
3. v 4.963750 -28.260300 -135.839600 0.752941 0.752941 0.752941
4. vn 1.893831 -3.827220 -4.498524
...
1178. f 3971//3971 3877//3877 3788//3788
1179. f 2489//2489 2608//2608 2447//2447
1180. f 3686//3686 3472//3472 3473//3473
...
Models are loaded into the application from this file by parsing each line and saving the information into ArrayLists, or a similar data structure, of vertices, normals, texture coordinates, colours and indices. Once sorted into lists, this information is ready to be drawn using a typical OpenGL ES program. For this purpose I've used Rajawali, an open source 3D framework for Android by Dennis Ippel. It has file parsing functionality which can import geometry definition file formats such as .obj, .md2, .3ds and .fbx, and a shader rendering class using OpenGL ES, which is precisely what is needed.
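Rajawali's loader performs this parsing internally; purely to illustrate the idea, a minimal hand-rolled sketch (hypothetical, simplified code, not the library's implementation) might dispatch on the line prefix like this:
import java.util.ArrayList;
import java.util.List;

public class ObjParserSketch {
    final List<Float> vertices = new ArrayList<>();
    final List<Float> normals = new ArrayList<>();
    final List<Integer> indices = new ArrayList<>();

    void parseLine(String line) {
        String[] parts = line.trim().split("\\s+");
        switch (parts[0]) {
            case "v":  // vertex position: x y z (optional colour values may follow)
                for (int i = 1; i <= 3; i++) vertices.add(Float.parseFloat(parts[i]));
                break;
            case "vn": // vertex normal: x y z
                for (int i = 1; i <= 3; i++) normals.add(Float.parseFloat(parts[i]));
                break;
            case "f":  // triangular face: vertexIndex//normalIndex triples, 1-based
                for (int i = 1; i <= 3; i++)
                    indices.add(Integer.parseInt(parts[i].split("//")[0]) - 1);
                break;
            default:   // comments and unsupported records are ignored
                break;
        }
    }
}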
Models come in different sizes and are located at different coordinates in camera space. When a model is loaded, it would be nice if it were automatically placed in the centre of the screen and fitted to it, instead of manually changing the camera position, the far and near planes, or other settings for every model.
To fit the model into the screen, it has to be scaled to fit into a unit bounding box. For that, the scale factor, i.e. how much the model should be scaled, needs to be calculated: it is the ratio between the size of the unit bounding box and the size of the model's bounding box. The depth, width and height of the unit box are all equal to 2 (from -1 to +1).
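A minimal sketch of this calculation (assuming the model's axis-aligned bounding box has already been computed; the setters are illustrative, not necessarily Rajawali's exact API):
// Scale so that the largest bounding-box dimension fits the unit box [-1, 1].
float width  = maxX - minX;
float height = maxY - minY;
float depth  = maxZ - minZ;
float largest = Math.max(width, Math.max(height, depth));
float scale = 2.0f / largest; // the unit box spans 2 units per axis
model.setScale(scale);
// Centre the model at the origin so it sits in the middle of the screen.
model.setPosition(-(minX + maxX) / 2f * scale,
                  -(minY + maxY) / 2f * scale,
                  -(minZ + maxZ) / 2f * scale);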
3.2. Light painting iteration 1. Slicing 3D geometry
Achieving the slicing effect is nothing more than moving the Znear and Zfar clipping planes along the Z axis. By specifying the distance between Znear and Zfar, the slice thickness can be defined. The starting point of the window is the initial near value, and the window stops when it reaches some maximum far limit. Figure 3 illustrates this.
far = initialFar;      // far plane starts from its initial value (set elsewhere)
initialNear = 3.1f;    // starting point of the sliding window
near = initialNear;
increment = 0.05f / 3; // how far the window advances each frame
farLimit = 6.7f;       // the window stops when far reaches this limit
The increment describes how fast the window should move: a large increment corresponds to quick movement and a small increment to slow movement. The increment and the other specifiable variables are incorporated into the UI, as in Figure 2.
OpenGL's onDrawFrame() draws the current frame. The frame rate (FPS, frames per second) can be specified, which also allows control of how quickly the window moves: the larger the frame rate, the quicker the window moves.
Figure 3. Illustration of initialFar and initialNear and sliding window
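Putting these together, the per-frame update amounts to something like the following sketch (written as an override in the renderer class; the camera setters are assumptions, not necessarily Rajawali's exact API):
// Advance the slicing window by `increment` on every rendered frame,
// until the far plane reaches the configured limit.
public void onDrawFrame(GL10 glUnused) {
    super.onDrawFrame(glUnused);
    if (far < farLimit) {
        near += increment;
        far += increment;
        mCamera.setNearPlane(near);
        mCamera.setFarPlane(far);
    }
}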
This is enough for the first working light painting iteration: a photo of a 3D model hovering in the air can be taken. However, the application doesn't sense the device's rotation or its speed, and therefore requires the holder to drag the phone in a very stable manner and at a constant speed. If the phone is rotated, the resulting image will be distorted, because the model being extruded doesn't change its orientation with the rotation of the phone. This can be seen in Fig. 4. Dragging the phone in a stable manner, however, results in the kind of images shown in Fig. 5.
Figure 5. Light painting while holding the phone straight and stable, without any rotation
3.4. ImageAverager or if you are too poor for a DSLR
The code in Code listing 1 takes a sequence of images stored in a list as input, averages the red, green and blue components of all pixels separately, creates a new raster using these averaged samples, and then sets this new raster data on a newly created BufferedImage. The buffered image is then saved as a JPEG file.
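Code listing 1 is not reproduced at this point; the following is a minimal sketch of the averaging it describes (class and method names are illustrative):
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.List;
import javax.imageio.ImageIO;

public final class ImageAveragerSketch {
    // Average each colour channel over all frames and save the result as a JPEG.
    public static void average(List<BufferedImage> frames, File output) throws Exception {
        int w = frames.get(0).getWidth(), h = frames.get(0).getHeight();
        long[] r = new long[w * h], g = new long[w * h], b = new long[w * h];
        for (BufferedImage frame : frames) {
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int rgb = frame.getRGB(x, y), i = y * w + x;
                    r[i] += (rgb >> 16) & 0xFF;
                    g[i] += (rgb >> 8) & 0xFF;
                    b[i] += rgb & 0xFF;
                }
            }
        }
        BufferedImage avg = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        int n = frames.size();
        for (int i = 0; i < w * h; i++) {
            int rgb = ((int) (r[i] / n) << 16) | ((int) (g[i] / n) << 8) | (int) (b[i] / n);
            avg.setRGB(i % w, i / w, rgb);
        }
        ImageIO.write(avg, "jpg", output);
    }
}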
3.5. Light painting iteration 2. Device-aware rotation of the geometry
The rotation vector sensor is a synthetic sensor: it fuses the readings of the physical sensors available, and modifies some of those raw sensor data to output values corresponding to the device coordinate system. The output is angular quantities around axes. Rotation about any given axis is described in terms of the number of degrees of difference between the device's coordinate frame and the Earth coordinate frame. These can be expressed in degrees or quaternions, so the output can be in 3-vector, rotation matrix or quaternion form.
Figure 7. Rotations around different axes. [3]
Different combinations of the axis and angle parameters are possible. The code below shows how the quaternion quatFinal obtained from the rotation vector sensor is processed; its last line passes the resulting quaternion to the renderer via mRenderer.setQuat():
case Sensor.TYPE_ROTATION_VECTOR:
    // Convert the rotation vector into a quaternion (w, x, y, z).
    SensorManager.getQuaternionFromVector(Q, event.values);
    Quaternion forLaterUse = new Quaternion(Q[0], Q[1], Q[2], Q[3]);
    quatFinal = new Quaternion(Q[0], Q[1], Q[2], Q[3]);
    if (startSensing == true)
    {
        if (afterTap == 1)
        {
            // First reading after the tap: remember the model's starting orientation.
            offsetQuaternion = mRenderer.getInitialOrientation();
        }
        else if (afterTap > 1)
        {
            // Rotate the model by the inverse of the device's rotation since the
            // previous reading, so the model keeps its original orientation.
            Quaternion differenceOfRotation =
                findDifferenceOfRotation(quatInit, quatFinal);
            differenceOfRotation.inverseSelf();
            offsetQuaternion.multiply(differenceOfRotation);
        }
        quatInit = forLaterUse;
        afterTap++;
        mRenderer.setQuat(offsetQuaternion);
    }
    break;

private Quaternion findDifferenceOfRotation(Quaternion initialOrientation,
                                            Quaternion finalOrientation)
{
    // The relative rotation is finalOrientation * inverse(initialOrientation).
    initialOrientation.inverseSelf();
    finalOrientation.multiply(initialOrientation);
    return finalOrientation;
}
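In quaternion terms, the relative rotation between two consecutive sensor readings is

q_diff = q_final * q_init^(-1)

The code accumulates the inverse of this difference into offsetQuaternion, so the model is counter-rotated against every rotation of the device and appears to keep its original orientation.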
3.6. Light painting iteration 3. Device-aware extrusion of the geometry
It is possible to take a good picture with just the first iteration of light painting, but it requires many trials to get it right. Holographium, the light painting application on the market, has features that help the user control their speed and get an undistorted result. For example, the extrusion distance shows the distance over which the user must drag the device to get an undistorted result, and the duration of the extrusion can be set before rendering. One-second interval sounds and 50%-done sounds are also included, to warn the user to slow down or speed up the dragging of the phone.
In my project, I tried to explore the idea of sensing the device's position relative to its starting point, to eliminate the need to control the stability of the phone's speed in the process. So instead of moving the near and far planes by the same amount during extrusion, a solution can be proposed where this amount depends on the device's displacement, i.e. the difference between the device's new position and its old position. As the phone changes its position, its new and old positions are updated accordingly. If the device is dragged slowly, the increment will be small and the window will slide more slowly; if it is dragged more quickly, the window speed will increase accordingly.
So, instead of float increment = 0.05f / 3; the code is improved to float increment = distance;
The window can be imagined to slide as the device is dragged. If the device stops moving, the sliding window will also stop, because increment = new position - old position = 0. This holds, however, only if the distance the phone travels can be found accurately. Otherwise, some factor alpha can be introduced to compensate for the inaccuracy, such that float increment = alpha * distance; where the alpha value has to be chosen by testing different values, until the increments are close to the distances the phone travels and the 3D image is clear.
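As a sketch (variable names illustrative), the per-frame update then becomes:
// Displacement-driven extrusion: the window now follows the device.
float distance = newPosition - oldPosition; // displacement since the last frame
float increment = alpha * distance;         // alpha is tuned experimentally
near += increment;
far += increment;
oldPosition = newPosition;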
Now, mathematically, distance can be found by integrating velocity, and velocity can be found by integrating acceleration. There are two sensors available in most phones that can give accelerometer values.
The accelerometer sensor gives raw accelerometer data. When the device is lying flat on a table, the accelerometer shows non-zero, non-constant readings. If the output signal is not zero when the measured property is zero, the sensor is said to have an offset, or bias. The graph in Figure 13 shows the raw accelerometer data obtained when the phone was dragged along the z axis of the device coordinate space for 15 cm, as in Figure 12, and then held still in this final position for a while.
The graph shows that the device was motionless until the 10th second, started its movement at that time, was dragged for about 5 seconds, and then stopped. At the moment it starts moving, it accelerates by some amount; but the deceleration, from the moment it starts stopping until it stops, is not equal in magnitude to the acceleration. This will affect the integration error later in the process.
Figure 12. Dragging the phone along its z axis
The offset can't be completely eliminated, as its value is neither constant nor predictable. However, it is possible to suppress its influence by performing calibration. In my application, I've decided to store the first one hundred readings of the accelerometer sensor, taken while it is not undergoing any acceleration, in a list and average them in order to get the calibration value. However, observations showed that the first twenty-five or so readings aren't even closely related to physical reality, so they were ignored and readings 26 to 125 were used for calibration. The long-term average is commonly used to calibrate readings: the longer the output is averaged first, the better the calibration that can be achieved. Once the calibration value is known, it is subtracted from each subsequent reading.
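A minimal sketch of this calibration step (the fields and method below are illustrative and would live in the sensor listener class; this is not the application's actual code):
private static final int DISCARD = 25;  // initial readings, ignored
private static final int SAMPLES = 100; // readings 26..125, averaged
private final List<float[]> warmup = new ArrayList<>();
private float[] bias; // null until calibration completes

private float[] calibrate(float[] values) {
    if (bias == null) {
        warmup.add(values.clone());
        if (warmup.size() < DISCARD + SAMPLES) return null; // still calibrating
        bias = new float[3];
        for (int i = DISCARD; i < DISCARD + SAMPLES; i++)
            for (int axis = 0; axis < 3; axis++)
                bias[axis] += warmup.get(i)[axis] / SAMPLES;
    }
    // Subtract the calibration value from each subsequent reading.
    return new float[] { values[0] - bias[0], values[1] - bias[1], values[2] - bias[2] };
}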
As was said before, it is only possible to weaken the influence of the offset, because of its fluctuating nature. Any residual bias causes an error in position which grows quadratically with time.
While the application is reading the first 125 sensor outputs to use for calibration, the phone needs to be kept still for several seconds. A countdown can be implemented in the UI to keep the user informed about the calibration process.
Drift in the accelerometer is caused by a small DC bias in the signal. To resolve the problem of drift, a high-pass filter was applied. There are a number of possible filters that could be used; they are discussed in Chapter 4 of this report.
Even after the filtering, a no-movement condition won't show zero acceleration. Minor errors in acceleration can be interpreted as a non-zero velocity, because samples not equal to zero are being summed. This velocity indicates a continuous-movement condition and hence an unstable position of the device. To discriminate between valid and invalid data in the no-movement condition, a filtering window was implemented.
The following method high-pass filters the calibrated data by using weighted smoothing, and ignores anything in [-0.02, 0.02] by setting it to zero.
// ALPHA and lowFrequency are fields of the enclosing class; the ALPHA value
// shown here is an assumption, as the report does not state it.
private static final float ALPHA = 0.8f;
private float[] lowFrequency = new float[3];

private float[] highPass(float x, float y, float z)
{
    float[] filteredValues = new float[3];
    // Low-pass: exponentially weighted moving average of each axis.
    lowFrequency[0] = ALPHA * lowFrequency[0] + (1 - ALPHA) * x;
    lowFrequency[1] = ALPHA * lowFrequency[1] + (1 - ALPHA) * y;
    lowFrequency[2] = ALPHA * lowFrequency[2] + (1 - ALPHA) * z;
    // High-pass: subtract the low-frequency component from the raw value.
    filteredValues[0] = x - lowFrequency[0];
    filteredValues[1] = y - lowFrequency[1];
    filteredValues[2] = z - lowFrequency[2];
    // Window filter: treat anything within [-0.02, 0.02] as no movement.
    for (int i = 0; i < 3; i++)
    {
        if (filteredValues[i] <= 0.02f && filteredValues[i] >= -0.02f)
        {
            filteredValues[i] = 0.0f;
        }
    }
    return filteredValues;
}
After the high-pass filter and the filtering window, the accelerometer data looks like this:
Now that the accelerometer data is more or less filtered, it can be used to find the velocity and, further, the displacement. The first integration of the accelerometer is v = v0 + a*t, where t is the time interval. Android's SensorEvent data structure contains a timestamp, the time in nanoseconds at which the event happened, along with the other information passed to an app when a hardware sensor has information to report. Using this timestamp, it is possible to know the time interval at which the accelerometer sensor delivers new data, i.e. the time interval between sensor events.
float currentTimestamp = event.timestamp; // nanoseconds
float timeInterval = (currentTimestamp - timestampHolder) / 1000000000; // seconds
timeValues.add(tempHolder);    // accumulated time axis for the graphs
if (timestampHolder != 0.0f) { // skip the very first event
    tempHolder += timeInterval;
}
timestampHolder = currentTimestamp;
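Given this time interval, each integration step is just a running sum. A minimal sketch (field names illustrative), applied in the same way per axis:
// First integration: v = v0 + a * dt, accumulated on every sensor event.
velocityZ += filteredValues[2] * timeInterval;
// Second integration: position from velocity; this is where drift builds up.
positionZ += velocityZ * timeInterval;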
The fact that the velocity is never zero when the phone comes to rest after the movement causes even more error in the second integration, as shown in the graph. There are other factors which cause the position to wander off. Woodman showed that the accumulated error in position due to a constant bias ε is ε*t^2/2, where t is the time of integration. [6]
These graphs describe the process when the phone is dragged along only one axis, Z; the case where the phone is dragged at angles to all three axes is described in Appendix 1.
There are many sources of noise that make accelerometer sensors very unreliable for double integration. In his technical report An Introduction to Inertial Navigation, Woodman analyses these intrinsic noises and gives them numeric characteristics. [6]
Accelerometer noise comes from the electronic noise of the circuitry that converts motion into a voltage signal, and from the mechanical noise of the sensor itself. MEMS accelerometers consist of small moving parts which are susceptible to the mechanical noise that results from molecular agitation, generating thermo-mechanical, or white, noise. [7] Integrating accelerometer output containing white noise results in a velocity random walk. Woodman analyses the effect this white noise has on the calculated position by double integrating the white noise and finding the variance. The analysis shows that accelerometer white noise creates a second-order random walk in position, with a standard deviation which grows proportionally to t^(3/2).
There are many sources of the accelerometer's electronic noise: shot noise, Johnson noise, flicker noise and so forth. Flicker noise causes the bias to stray over time. Woodman's analysis shows that flicker noise creates a second-order random walk in velocity whose uncertainty grows proportionally to t^(3/2), and a third-order random walk in position which grows proportionally to t^(5/2).
Furthermore, the analysis reveals that the gyro's white noise also influences the noisiness of the acceleration. Woodman says: "Errors in the angular velocity signals also cause drift in the calculated position, since the rotation matrix obtained from the attitude algorithm is used to project the acceleration signals into global coordinates. An error in orientation causes an incorrect projection of the acceleration signals onto the global axes." As a result, the accelerations of the device are integrated in the wrong direction; moreover, acceleration due to gravity can no longer be correctly removed.
3.7. GraphHelper
I developed the helper class GraphHelper to collect the necessary data and write it to a file in the external storage of the phone. Once the time intervals, velocities and positions are calculated, the following method is called, which accumulates each value in its corresponding list:
saveDataToPlotGraph(timeInterval, raw, offset, highPassed, velocityX, velocityY,
        velocityZ, finalSpeed, positionX, positionY, positionZ, finalPosition);
When the application pauses, all the accumulated data is written to files:
protected void onPause()
{
    super.onPause();
    // Stop listening to the sensors, then flush the collected data to files.
    mSensorManager.unregisterListener(this, mRotVectSensor);
    mSensorManager.unregisterListener(this, mLinearAccelSensor);
    writeDataToFile();
}
GraphHelper graphHelper;

private void writeDataToFile() {
    graphHelper = new GraphHelper(getApplication());
    graphHelper.writeToFile("rawAx.txt", rawAx);
    graphHelper.writeToFile("calibratedAx.txt", calibratedAx);
    graphHelper.writeToFile("highpassedAx.txt", highpassedAx);
    ...}
Figure 14. A screenshot of the folder holding all the saved files, and the content (time interval, acceleration) of one of those files.
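The report does not reproduce GraphHelper itself; the following is a minimal sketch of what its writeToFile method could look like (the constructor and signatures are assumptions based on the calls above; one value per line, written to the app's external files directory so Octave can load it):
import android.content.Context;
import android.util.Log;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

public class GraphHelper {
    private final Context context;

    public GraphHelper(Context context) { this.context = context; }

    // Write one value per line into external storage for later plotting.
    public void writeToFile(String fileName, List<Float> values) {
        File file = new File(context.getExternalFilesDir(null), fileName);
        try (PrintWriter out = new PrintWriter(new FileWriter(file))) {
            for (Float v : values) {
                out.println(v);
            }
        } catch (IOException e) {
            Log.e("GraphHelper", "Failed to write " + fileName, e);
        }
    }
}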
Chapter 5. Conclusion and evaluation
5.2. Evaluation
The first iteration of the project developed an application which can transform any .obj model to fit the screen after it has been loaded. It also accomplished the task of the sliding window for obtaining cross-sections of the model object, and provided a UI for adjusting parameters, making it easy to experiment with the speed, the thickness, and the starting and finishing z points of the sliding window. It is extremely helpful to be able to try different settings. The essentials of Android application development, OpenGL ES, and basic computer graphics theory, such as camera positioning and bounding boxes, were learned during the first iteration of the project.
The second iteration of the project enhanced the first by integrating device-aware rotation of the model object. Much knowledge about Android's rotation vector sensor and about different representations of rotation, such as Euler angles and quaternions, was gained during the development.
The first and second iterations of the project were successfully accomplished and make a good working application, which can be used for light painting with the constraint that the holder has to drag the phone at a constant speed. While the result of the first iteration, extruding slices of a 3D image, resembles existing light painting applications, the second iteration, where the rotation sensor is used to sense the device's rotation and incorporate the sensor's output into the 3D model, is a novelty that improves on the existing technology. Users can now rotate the device without fearing distorted results caused by the device's rotation.
The third iteration of the project, although it didn't reach its goal of obtaining the phone's accurate displacement by double integrating accelerometer data, laid out the theoretical basis of double integration and showed that it is impractical to use acceleration alone for positioning, or for inertial navigation systems generally: the signal has to be highly processed by advanced filtering techniques. The results of each step of the double integration were plotted to visualise the obtained data, and the existing noise measurements of the accelerometer sensor and ideas about filtering techniques were researched. A considerable number of experiments were carried out in order to reach an accurate representation of the accelerometer output, which by its nature senses the tiniest environmental vibrations and the natural shaking of the holding hand. A great deal of knowledge about Android's inertial sensors was gained during this iteration.
Once the project is created, the Rajawali external library needs to be downloaded from
https://github.com/MasDennis/Rajawali and imported in the same way as described above. Once
Rajawali is in the workspace as well, it should be included as an external library in the LightPainter3D
project. Right-clicking the project->Properties->Android will bring up the following window, where
the new library, Rajawali, needs to be added in the Library section.
Now it's ready to be run on an Android device. In the phone's settings, USB debugging must be enabled (usually Settings->Developers->USB debugging, but this may differ from phone to phone) and the phone should then be connected to the computer via USB. Once connected, run the project as an Android application.
When the camera is set up and you are ready, the photo can be taken. While it is being taken, you'll need to drag the phone in front of the camera at a constant speed. You are not constrained by the occasional rotations of the device that can happen while you are dragging the phone in the air; the application senses the rotation and rotates the object accordingly to keep it in place, so you have more freedom than with other applications of this kind.
Appendix 3. More results in graphs
Here are the results obtained from dragging the phone in the air for 30 cm, this time at angles to all three axes, like so:
BIBLIOGRAPHY