
Big Data Gets Real in Boston!

People are talking about BigData TechCon!

April 26-28, 2015
Seaport World Trade Center Hotel

"Big Data TechCon is a great learning experience and very intensive."
Huaxia Rui, Assistant Professor, University of Rochester

"Get some sleep beforehand, and divide and conquer the packed schedule with colleagues."
Paul Reed, Technology Strategy & Innovation, FIS

Choose from 55+ classes and tutorials!


Big Data TechCon is the HOW-TO technical conference for professionals implementing Big Data solutions at their company.

"Worthwhile, technical, and a breath of fresh air."
Julian Gottesman, CIO, DRA Imaging

Come to Big Data TechCon to learn the best ways to:

Process and analyze the real-time data pouring into your organization
Extract better data analytics and predictive analysis to produce the kind of actionable information and reports your organization needs
Come up to speed on the latest Big Data technologies like YARN, Hadoop, Apache Spark and Cascading
Understand HOW to leverage Big Data to help your organization today

"Big Data TechCon is definitely worth the investment."
Sunil Epari, Solutions Architect, Epari Inc.

www.BigDataTechCon.com
Big Data TechCon is a trademark of BZ Media LLC.

A BZ Media Event

The Best Of

Dear Readers!
Our Best of SDJ issue is finally released and it is free to download! We worked hard and we hope that you can see that. This issue is dedicated mostly to Web Development. We tried to compare as many frameworks as we could. Our jQuery section starts with "jQuery is Awesome. So, Use it Less!" by Willian Carvalho. The author is convinced that jQuery is a great tool, but that it is sometimes used in the wrong way. This article is a voice in a discussion about the proper use of jQuery.
Then you'll find "jQuery Promises" by Ryan O'Neill. This article is related to the previous one. The author shows in a simple way how you can manage all that you want with jQuery without complications.
Davide Marzioni shows a simple trick with web2py and Tomomichi Onishi presents "Tutorial for Creating a Simple Hyakunin-Issyu Application Using Sinatra and Heroku". Manfred Jehle explains, in a theoretical way, how you can start developing better applications.
Aimar Rodriguez covers the Django subject in the article entitled "Developing Your Own GIS Application with GeoDjango and Leaflet.js".
Also look closely at the other articles. You need to read the article on AexolGL, and I think you will find the new 3D graphics engine full of new tools. This issue contains really interesting content and we are happy to publish it for you!
We're hoping you'll enjoy our work.
Ewa & the SDJ Team

Editor in Chief: Ewa Dudzic


Editorial Advisory Board: David Gillies, Shawn Davis
Special thanks to our Beta testers and Proofreaders who helped
us with this issue. Our magazine would not exist without your
assistance and expertise.
Publisher: Paweł Marciniak
Managing Director: Ewa Dudzic
DTP: Ireneusz Pogroszewski
Marketing Director: Ewa Dudzic
Publisher: Hakin9 Media SK
02-676 Warsaw, Poland
Postepu 17D
http://www.sdjournal.org
Whilst every effort has been made to ensure the highest quality of the magazine, the editors make no warranty, expressed or implied, concerning the results of the content's usage. All trademarks presented in the magazine were used for informative purposes only. All rights to trademarks presented in the magazine are reserved by the companies which own them.
DISCLAIMER!

The techniques described in our magazine may be used in private, local networks only. The editors hold no responsibility for the misuse of the techniques presented or any data loss.

Copyright 2015 Hakin9 Media Sp. z o.o. SK

Table of Contents
AexolGL – New 3D Graphics Engine ...............................................................................6

jQuery is Awesome. So, Use it Less! ................................................................................9
by Willian Carvalho

jQuery Promises ..................................................................................................................11
by Ryan O'Neill

Technical Tip & Tricks For Web2py .................................................................................14
by Davide Marzioni

Tutorial for Creating a Simple Hyakunin-Issyu Application Using Sinatra and Heroku ......18
by Tomomichi Onishi

Start Developing Better Web Applications .......................................................................28
by Manfred Jehle

Developing Your Own GIS Application with GeoDjango and Leaflet.js .......................33
by Aimar Rodriguez

Solving Metrics Within Distributed Processes .................................................................44
by Dotan Nahum

CreateJS In Brief ................................................................................................................52
by David Roberts

Build Customized Web Apps through Joomla .................................................................67
by Randy Carey

What is Drupal? Concepts And Tips for Users and Developers .....................................74
by Juan Barba

AngularJS Tool Stack .........................................................................................................80
by Zachariah Moreno

Thinking the AngularJS Way ............................................................................................84
by Shyam Seshadri

Reusable UI Components in AngularJS ...........................................................................100
by Abraham Polishchuk & Elliot Shiu

Test Automation with Selenium 2 ......................................................................................104
by Veena Devi

Grabbing the Elements in Selenium ..................................................................................109
by Nishant Verma


AexolGL – New 3D Graphics Engine

Aexol specialises in creating mobile applications. It was created by Artur Czemiel, a graduate of the DRIMAGINE 3D Animation & VFX Academy, who has a lifelong interest in 3D technology. He first started to realise his passion by working in the film industry. Artur is the co-creator of the special effects in the Polish production Weekend and the short film Hexaemeron, which was awarded the Finest Art award at Fokus Festiwal and nominated for best short animated film at fLEXiff 2010 in Australia. The experience gained by working in the movie industry and on the mobile applications market was the basis for creating AexolGL, a tool designed to make work easier for Aexol and other programmers around the world.
What is AexolGL?

A set of tools for creating visualisations, applications and 3D games with little workload. The user doesn't have to worry about things like differences between OSs or hardware. AexolGL lets you focus on the key elements and appearance of the end product (application, game) instead of worrying about technical details.

Why use Python (AexolGL PRO)?

Python is an easily adaptable scripting language. Being in line with the idea behind the engine itself (quick programming), it allows rapid prototyping of applications. Python's module structure allows the addition of many prepared libraries, which help make the programmer's work easier.

What was the main objective and the main incentive to create the engine?

We wanted to create a tool for small/medium-sized developer studios and indie developers that would let them design 3D projects on any platform they want.

How are different scenes, models etc. imported into the engine?

We have integrated the ASSIMP library with our engine, which allows the import of about 20 different formats. However, because it is constantly being expanded, that number will increase over time.

What can you say about the engine structure?

One of the main efficiency problems that appear when creating 3D projects is context changes. To minimize the number of costly changes, while not forcing the object sorting order, we created a RenderTree, which makes sure that operations are not repeated and are executed in the correct order.

Why create two different engines?

AexolGL PRO is a tool for creating games and applications natively in C++/Python for the following platforms: iOS, Android, Windows, Mac and Linux. AexolGL WEB is used to create games and applications for internet browsers (Mozilla, Safari, Chrome) without the need for plugins, as well as simple webview apps and games for iOS and Android.
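The RenderTree answer above can be made concrete with a small sketch. This is not AexolGL code: the scene items, material names and the cost model below are invented purely for illustration of why grouping work avoids repeated context changes.

```javascript
// Illustrative sketch (not AexolGL source): drawing objects grouped by
// material so a costly "context change" happens once per material,
// not once per object. All names here are invented for the example.
function countStateChanges(items) {
  var changes = 0;
  var current = null;
  items.forEach(function (item) {
    if (item.material !== current) { // switching material = context change
      current = item.material;
      changes++;
    }
    // ...the draw call for item would be issued here...
  });
  return changes;
}

var scene = [
  { id: 1, material: 'wood' },
  { id: 2, material: 'metal' },
  { id: 3, material: 'wood' },
  { id: 4, material: 'metal' }
];

var naive = countStateChanges(scene); // 4: every object switches material

// A render-tree-like pass groups items by material before drawing.
var grouped = scene.slice().sort(function (a, b) {
  return a.material < b.material ? -1 : a.material > b.material ? 1 : 0;
});
var treeLike = countStateChanges(grouped); // 2: one switch per material

console.log(naive, treeLike);
```

With only four objects the saving is small, but in a real scene with thousands of objects and a handful of materials, the grouped pass keeps the number of state switches proportional to the number of materials rather than the number of objects.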

Is AexolGL a tool only for creating games and mobile applications? Will it find use in other fields?

AexolGL WEB is a perfect tool for creating visualisations. 3D technology is the modern form of presentation that works perfectly for visualising interiors, buildings and product models (e.g. cars and electronic devices). AexolGL takes website product presentation to a whole new level.

Will displaying a lot of 3D graphics in a web browser slow the user's computer (AexolGL WEB)?

Most certainly not! The web engine handles displaying 3D very well, even on machines using integrated graphics. Deferred shading technology handles creating complicated lighting models without overly taxing the hardware.

AexolGL team


Animated sprite object ready for instantiation from a JSON file (C++)

Does the engine give the user the ability to implement individual solutions, for example custom shaders?

Yes, we let the user create personal solutions, write custom shaders or effects needed for specialised tasks.

Are there any examples available? It seems that currently there aren't any games or, more importantly, a tech demo of the engine created with AexolGL available on the website.

We are currently putting the finishing touches on our product and the website. Soon gl.aexol.com will host the first examples showcasing the possibilities of AexolGL WEB, as well as the first game for mobile devices created with our technology, called Gravity: Planet Rescue.

Are there any similar products already on the market? What makes AexolGL stand out (specifically in terms of functionality) in the field of available solutions?

AexolGL is primarily a tool for small and medium-sized projects that lets you rapidly prototype and preview them. We do not aim to compete with the big engines. Ours is one of the select few that works on all platforms and has a web counterpart with a similar RenderTree structure.

Does the engine use optimization algorithms, like occlusion culling, or others like, for example, those found in Umbra technology?

The engine does have the most popular optimization algorithms available. Although not as advanced as Umbra's, they certainly increase the efficiency of the application. As we expand the engine we will certainly further improve this system.

A simple way of creating objects with assigned materials, shaders, geometry and transformation matrices. In AexolGL the object is ready for display after only 30 lines of code (C++)

Is AexolGL only a graphics engine or does it also handle other aspects of game creation (physics, optimal resource management, AI etc.)?

Aside from the graphics engine itself, our framework also supports optimal, multithreaded resource management. We introduced a simple system of creating multiple threads in an application and solved the problem of file loading on different platforms as well. For mobile platforms we prepared a suitably small format for saving 3D geometry. Additionally, our engine easily integrates with available physics engines (for example, the popular Bullet Physics). The engine also has an integrated mathematical library equipped with the functions most needed for 3D applications: 2D/3D vector math, transformation matrices and quaternions, as well as countless additional instruments, e.g. color conversions, an easing function library, Bezier and Catmull curves and the ability to create simple parameterized geometry (cubes, spheres, cylinders).

What kinds of lighting algorithms are available in the engine? Does it support lightmapping or global illumination? Do you plan on including realtime global illumination shaders?

We are constantly working on scene lighting. Ultimately it will be one of the advantages of LightRig technology, which creates a compact lighting model out of the environment map, giving the illusion of GI. Currently the engine is equipped with several types of lighting and supports shadow mapping.

How does the engine model terrain? Do you plan on using voxels? Can you create heightmap-based terrain?

Heightmap-based terrain creation is already available. It's actually a very convenient and practical tool useful in a majority of projects. A voxel version might be implemented as well in the future.

Similarities and differences between your product and the biggest player, Unity 3D: what is the niche for AexolGL in a market with a free Unity 3D?

It's difficult for us to compare with Unity. The idea behind our engine is completely different. We're not targeting the biggest studios with complicated and high-budget projects. Our aim is to let small and medium-sized studios benefit from a quick and simple tool that will let them begin their journey into the world of 3D games and applications without straining their budget. Obviously we will also continue to work on our project, extending its capabilities and broadening its use. Additionally, if we take a closer look at the free version of Unity 3D, we can see that access to many useful functions, such as Static Batching, Render-to-Texture Effects, Full-Screen Post-Processing Effects or GPU Skinning, is only available in the paid PRO version.

To my understanding, the engine provides a joint interface that lets you create applications that work both under, for example, Windows and Android? How does it handle the fundamental difference in controls (desktop mouse and keyboard, mobile device's touchpad)?

We give the developer the ability to define controls on keyboard, joystick, mouse and touchscreen. It is also possible to define a virtual joystick on the touchscreen. However, how the application reacts to individual signals is entirely up to its creator. By default, signals from the mouse and one-finger touches are treated the same, however they can easily be assigned to different actions.

How about the significant difference in computing power between desktops and smartphones?

Obviously smartphones do have less computing power than desktops; however, how the application functions on mobile platforms depends primarily on its design and, for our users, on the help of our efficient solutions.

Does your product benefit from the new possibilities available in OpenGL 4?

OpenGL 4 is currently only available on PC. Because a lot of mobile devices still use OpenGL ES 2.0, our engine is compatible mainly with that API version, although, thanks to the high flexibility of the engine, introducing OpenGL 4 would not be a problem. Users of the AexolGL Lab have the ability to independently adapt the engine to OpenGL 4 thanks to the GL abstraction.

In the currently available version of AexolGL WEB you used the K-3D library, licensed under the GNU GPL. Why wasn't this fact mentioned on the product page? Are the licenses compatible?

The K-3D library is not used in the current version of the engine. The file loading mechanisms employed by K-3D are obsolete and do not support usemtl.


jQuery is Awesome. So, Use it Less!

by Willian Carvalho
One of the greatest features, if not the only one, responsible for making Javascript shine brighter in the past years is definitely jQuery.
Since its birth in 2006, it has become very popular, attracting both programmers and web designers, because it made their lives a lot easier.
At that time, server side developers were always paying attention to database and security handling, component layers, message queues, etc., and they have never actually been able to focus on client side programming.
Web designers, on the other hand, focused their efforts on building nice designs for applications, as well as caring about user experience and making the best out of the HTML/CSS combo, also leaving Javascript behind.
jQuery came to fill this gap between the server and the client tiers.
Enough with cross browser concerns and AJAX handling issues. Enough with lines and lines of code to do simple and repetitive tasks. It was the beginning of a new era for web development.
Time has passed and tons of jQuery plugins have been built. Almost everybody uses jQuery now, and it has become some sort of a common language between developers and web designers.
jQuery became so popular and easy to use that people started using it for nearly everything, from rich plugins to simple selectors.
This brings us to the whole point of what this article is about: people have forgotten why jQuery was built.
Frameworks and APIs are made to solve a lot of problems by encapsulating functionalities, as if they were a utility belt. However, these functionalities come with the cost of being too generic, causing them to be slower than if they were built to solve one single problem.
jQuery is no different from other frameworks. But this also doesn't mean that the additional cost is a bad thing. Actually, it means that jQuery is doing what it was supposed to do! It's just that we shouldn't be using our utility belt when our own hands do the job as well as (or better than) it.
But what exactly are we doing wrong with jQuery? When should we be using VanillaJS (http://vanilla-js.com/) instead of it? How can we make sure that we made the right choice?
First of all, let me refresh your mind about what jQuery is for. Here's the description from jquery.com itself:
"jQuery is a fast, small, and feature-rich JavaScript library. It makes things like HTML document traversal and manipulation, event handling, animation, and Ajax much simpler with an easy-to-use API that works across a multitude of browsers."
This means that jQuery makes it easier for us to manipulate DOM elements, bind and unbind event listeners, write less code for some tasks, such as animations, and it handles AJAX requests a lot more easily than we would have done on our own. All of this without having to care about browser compatibility!
Besides these awesome features, jQuery also provides us with a very useful set of functions for dealing with loops, reading a particular item in an array, CSS style manipulation and so on.
Although these functions are very tempting to use, some of them don't necessarily make our work easier. One example is the use of the $.each function (http://api.jquery.com/jQuery.each/) instead of the regular for statement.

It might not make a big difference when iterating over a small number of elements, but for larger lists the performance would be compromised, because deep inside, jQuery is using plain old Javascript with an anonymous function call for every step of the loop.
Besides, the amount of code required to build a loop is not very different between jQuery and pure Javascript. Let's see:
Listing 1. jQuery sample
var arr = ['a', 'b', 'c'];
$(arr).each(function(index, data) {
  console.log(index, data);
});

Listing 2. Javascript sample
var arr = ['a', 'b', 'c'];
var len = arr.length;
for (var i = 0; i < len; i++) {
  console.log(i, arr[i]);
}

As you can see, there is no big difference between them, but keep in mind that pure JS is faster, for sure.
There are other cases where we should be using pure Javascript instead of jQuery. Unfortunately, there is no magic way to decide which one, other than analyzing each situation. It is very hard to guess which one is faster, and you will want to be sure which one to use.
A simple way to compare your code with jQuery (or any other code) is by using JSPerf (http://jsperf.com/). With this online tool, it's possible to create any number of test cases and run them against each other to see how many operations are done per second.
JSPerf is also able to store the results for every test in every browser and version. This is good because some tests might be quite similar, but one particular browser might have a very different result than the others, which can help you with your decision.
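As a rough offline alternative to JSPerf, you can time the two loop styles yourself. The sketch below is plain Javascript, runnable in Node.js; the each() helper is not jQuery itself, only a simplified stand-in for what $.each does internally (one anonymous function call per element), and the timings you see will vary by machine and engine.

```javascript
// Compare a plain for loop against an each()-style helper that invokes
// a callback per element, similar in shape to jQuery's $.each.
function each(arr, fn) {
  for (var i = 0; i < arr.length; i++) {
    fn(i, arr[i]); // one extra function call per element
  }
}

// Tiny timing helper; Date.now() resolution is coarse but adequate here.
function time(label, fn) {
  var start = Date.now();
  fn();
  return { label: label, ms: Date.now() - start };
}

var data = [];
for (var i = 0; i < 1e6; i++) data.push(i);

var sum1 = 0;
var plain = time('plain for', function () {
  for (var j = 0, len = data.length; j < len; j++) sum1 += data[j];
});

var sum2 = 0;
var helper = time('each() helper', function () {
  each(data, function (index, value) { sum2 += value; });
});

console.log(plain, helper);
```

Both loops compute the same sum, so any difference you measure is purely the cost of the callback-per-element style; in-browser, JSPerf remains the more reliable way to compare, since it repeats the runs statistically.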
In conclusion, when you are writing a web application, take a five-second break, think about what you are about to use jQuery for, and try to use it properly.
We all know how brilliant it is and how much it has helped us over the past years, but it's important to always keep in mind why and what it was really built for. Use less jQuery for the best!

About the Author

Willian is a Senior Javascript Engineer at TOTVS. He has worked specifically with Javascript for years and he wants to discuss jQuery.



jQuery Promises
by Ryan O'Neill
jQuery has made working with asynchronous Javascript incredibly easy. However, callback functions still quickly get out of control, leading to code that is both hard to read and hard to debug. Using the promises pattern via the jQuery Deferred object is a great way to keep your code clean and maintainable.
This article will give an overview of jQuery's implementation of the promise pattern, how it can be used to write clean asynchronous jQuery, and an example implementation.
The author has been working with the jQuery library for over five years. He is currently a front-end engineer with Twitter, designing and building single page apps using jQuery and other libraries.
For context, this article assumes that the reader has some general Javascript and jQuery experience and is familiar with the asynchronous nature of the language.
Since jQuery's initial release in 2006, it has grown from a simple utility library into the de facto standard for writing Javascript in the browser. jQuery solved many issues, such as cross-browser incompatibilities and shaky DOM querying, and also introduced features like the $.ajax() function, which made it easier than ever for developers to build dynamic pages and applications without the need for full page reloads.

The Status Quo

The $.ajax() function did indeed change how Javascript applications were built. With the power of AJAX so readily available, it quickly gained traction within the Javascript community. Shortly thereafter, developers ran into a problem which occurred when they needed to make more than one request to a remote server, with the second request relying on the response of the first. For example, code that looks like this:
Listing 1. A single AJAX request with callback
$.get('/user', function (user) {
  $('#user-name').val(user.name);
});

Very soon turns into this as the applications grow in size:

Listing 2. Multiple AJAX requests with nested callbacks
$.ajax({
  type: 'GET',
  url: '/user',
  success: function (user) {
    $('#user-name').val(user.name);
    $.ajax({
      type: 'POST',
      url: '/user/login',
      data: { userId: user.id },
      success: function (loginResult) {
        alert(loginResult);
      },
      error: function (err) {
        // Error handling
      }
    });
  },
  error: function (err) {
    // Error handling
  }
});

Even in this basic example, adding a single nested request and some trivial error handling makes the code much more difficult to read (partly due to switching to the longer form $.ajax() for error handling; note that post() and get() are wrappers for ajax()). In practice, callback and error handling functions are typically much longer, and three or four nested requests commonly become necessary. This is especially true if the application uses more than one web service to function. At this point the code becomes effectively unreadable.

Cleaner Code with Promises

By taking advantage of the fact that the $.ajax() function returns a $.Deferred promise, we can break this code into individual pieces and lay them out much more clearly and without nesting. We'll get into the $.Deferred interface later. First let's look at the above logic written using these promises.
Listing 3. Multiple AJAX requests using promises
var errorHandler = function (err) {
  // Handle Error
};
var userRequest = $.get('/user');
var userLoginRequest = function (user) {
  return $.post(
    '/user/login',
    { userId: user.id }
  );
};
userRequest
  .then(userLoginRequest, errorHandler)
  // Any number of then()s can be used here to chain more asynchronous functions together
  .done(function (loginResult) {
    alert(loginResult);
  }, errorHandler);

The above code accomplishes the same set of tasks as the code in Listing 2. Through the use of jQuery promises we are able to chain the asynchronous requests together rather than rely on messy nested callbacks, resulting in a script that is both easier to read and more maintainable. A few things to note:

done() and then() appear to be similar. A key difference is that then() will pipe the result of the callback(s) into the next piece of the chain.

Note that in the then() block we need to return the result of the second AJAX request rather than call it standalone. This is so that the resulting $.Deferred from the userLoginRequest gets passed into the done() function, allowing us to make use of its result.

We also hoisted a generic error handler for an even clearer solution.

What is $.Deferred?
Under the covers, $.Deferred is a stateful callback register that follows the convention of Promises/A (http://wiki.commonjs.org/wiki/Promises/A). The promise has three possible states: pending, resolved, and rejected. Every $.Deferred object starts in the pending state and can move into either the resolved state via the resolve() function or the rejected state via the reject() function. In practice, these two methods are called internally by the library being used. For instance, the $.ajax() function handles resolving and rejecting the promise. In fact, the resolve() and reject() functions will not even be available to us. This is because the object returned from $.ajax() is actually a $.Deferred promise, which exposes only the functions to attach callbacks and hides the functions that control the promise state. You can also take advantage of this if you are writing code that returns a promise that other code will subscribe to.
When a promise is rejected, all callbacks registered to the promise via the fail() function will execute. Similarly, when a promise is resolved, the callbacks registered via the then() or the done() method will be called. If we need a block of code to run when the promise completes regardless of whether it is rejected or resolved, we can attach those callbacks using the always() function. This is analogous to a finally statement in a try/catch block and is generally used for running clean-up code.
Listing 4. Using always()
$.get('/user', function (user) {
  $('#user-name').val(user.name);
}).always(function () {
  alert('AJAX request complete'); // This will always be called
});
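To make the three states concrete, here is a toy Deferred written in plain Javascript. It is only an illustration of the behaviour described above, not jQuery's actual implementation: the real $.Deferred also handles argument lists, progress notifications and then() chaining, which this sketch deliberately omits.

```javascript
// Toy Deferred: pending -> resolved (via resolve) or rejected (via reject).
// promise() exposes only the subscription side, as described above.
function Deferred() {
  var state = 'pending';
  var value;
  var doneCallbacks = [];
  var failCallbacks = [];

  function fire(callbacks, v) {
    callbacks.forEach(function (cb) { cb(v); });
  }

  var promise = {
    state: function () { return state; },
    done: function (cb) {
      if (state === 'resolved') cb(value);
      else if (state === 'pending') doneCallbacks.push(cb);
      return promise;
    },
    fail: function (cb) {
      if (state === 'rejected') cb(value);
      else if (state === 'pending') failCallbacks.push(cb);
      return promise;
    },
    // always() runs on either outcome, like a finally block.
    always: function (cb) { return promise.done(cb).fail(cb); }
  };

  return {
    resolve: function (v) {
      if (state === 'pending') { state = 'resolved'; value = v; fire(doneCallbacks, v); }
    },
    reject: function (v) {
      if (state === 'pending') { state = 'rejected'; value = v; fire(failCallbacks, v); }
    },
    promise: function () { return promise; } // hides resolve/reject from subscribers
  };
}

var d = Deferred();
var log = [];
d.promise().done(function (v) { log.push('done:' + v); })
           .always(function () { log.push('always'); });
d.resolve(42);
d.reject('ignored'); // no effect: the state is already resolved
console.log(log); // ['done:42', 'always']
```

Note how, once resolved, further resolve() or reject() calls are ignored: a promise settles exactly once, which is what makes the pattern safe to hand out to multiple subscribers.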

Keeping a clean, maintainable, and readable code base requires active effort and diligence from all
developers involved. Promises are not a magic bullet and code can still get out of control when using
this pattern. When used correctly, promises can offer a large improvement in flow control relative to the
traditional callback pattern.

About the Author

Ryan O'Neill was born in Washington D.C. in 1986. Since then he has taken residence in Miami, Atlanta, Chicago and San Francisco. He has worked with web technologies for the better part of a decade and is currently a senior front-end engineer with Twitter (you can follow him @rynonl).


Technical Tip & Tricks For Web2py

by Davide Marzioni
Web2py (http://www.web2py.com) is an amazing framework for developing a fully-featured web-based application using the Python language (http://www.python.org).
Web2py inherits all of Python's features in terms of simplicity, power and flexibility and applies them to a web environment.
You will find a lot of useful tools, like a database abstraction layer, authentication forms and form-making utilities.

What you will learn
In this article I want to share with you some tips and tricks that could be useful when programming with the Web2py framework.

What you should know
You should have a basic knowledge of how Web2py works.

Use Aptana Studio as IDE

The integrated admin tool in Web2py is good enough to develop simple applications and to make quick edits, but if you want a stronger tool I suggest you use Aptana (http://www.aptana.com). The main advantage of using Aptana as an IDE is that you can use the integrated debugger.
Setting up Aptana for Web2py is easy:
Create a new PyDev Project
Name it Web2py
Uncheck "Use default" under "Project contents" and select the path to your local Web2py folder
Be sure you are using Python 2.5-2.7
To run Web2py you need to create a custom launch configuration:
Select the Run menu and then "Run configurations..."
Right click on "Python Run" and select "New"
Name it Web2py
Select the Web2py project just created
Select the web2py.py file as the main module
In the Arguments tab use the options:
-i 0.0.0.0 to bind web2py to all network interfaces
-a 123456 as a dummy administration password
-p <num port> if you want to change the default port (8000)
Press the Apply button to save the configuration

The only drawback of using Web2py in any IDE (Aptana included) is that it doesn't understand the context (the gluon module) and therefore autocompletion doesn't work. To solve this issue you can use a trick: add the following code to your models and controllers:
Listing 1. A trick
if False:
    from gluon import *
    request = current.request
    response = current.response
    session = current.session
    cache = current.cache
    T = current.T

This code isn't executed, but it does force the IDE to parse it and understand where the objects in the global namespace come from.

Modify a form with form.element

Web2py has some helper functions to define forms (FORM(), SQLFORM(), SQLFORM.factory()). However, it often happens that you'll need to modify a form element after the declaration (styles, default values, etc.). You can do this using the .element() and .elements() methods. They are equivalent, but elements() returns a list of items if more than one matches the criteria you specified.
Items returned can be modified using the standard notation: underscore + attribute name.
The parameters of these functions are descriptors of the item you want returned. The only positional parameter is the item type you want to modify (e.g. input, textarea, select, etc.). Other parameters are explicit and depend on the attribute you want to filter on.
For example, to change the submit button style classes:
Listing 2. Change the submit button style classes
submit_button = form.element('input', _type='submit')
submit_button['_class'] = 'btn btn-large btn-success'

To change the selected status of an option item:
Listing 3. Change the selected status of an option item
default_option = form.element('option', _id='option_1')
default_option['_selected'] = 'selected'
del default_option['_selected']

To change the text of a textarea:
Listing 4. Change the text of a textarea
textarea = form.element('textarea', _name='description')
textarea[0] = 'New text'

To change the style of all inputs:

Listing 5. Change the style of all inputs
for input_field in form.elements('input'):
    input_field['_style'] = input_field['_style'] + '; width: 200px'

How to write a custom validator of two linked field in a


table
Validators are special functions which helps to validate a form or database field. You can insert them using
the requires keywords. If you want to have a database table where only one of two fields must have a value,
there will be trouble to define this with standard validators. This can be resolved by a custom validator.
For example, if you have a website table where, for some reason, you want be filled an IPv4 address or
a URL link, only one of them but not both together. In addition you want use the default validator for both
fields. To solve this problem you can define in the model a custom LinkedFieldValidator and use it in the
requires values of the fields.
Listing 6. TwoLinkedValidator in a model file.
class TwoLinkedValidator:
    def __init__(self, a, b, validator=None,
                 both_filled_message='Enter only one field!',
                 both_empty_message='Insert at least one field!'):
        self.a = a
        self.b = b
        self.v = validator
        self.error_filled = both_filled_message
        self.error_empty = both_empty_message

    def __call__(self, value):
        if IS_NOT_EMPTY()(self.a)[1] is None:
            if IS_NOT_EMPTY()(self.b)[1] is None:
                return (value, self.error_filled)
            if self.v:
                return self.v(value)
            return (value, None)
        else:
            if IS_NOT_EMPTY()(self.b)[1] is None:
                return (value, None)
            return (value, self.error_empty)

    def formatter(self, value):
        return value

The init function takes the two fields to validate. Parameter a is always the self-referenced field, while b is
the other one. Optionally, another validation function can be passed in the validator parameter.
Validation functions return a tuple where the first value is the formatted value (no formatting is done in
this case) and the second value is an error message (None if the value is correct).
In our case it looks like this.

Listing 7. A sample code
db.website.weblink.requires = TwoLinkedValidator(request.vars.weblink,
                                                 request.vars.ipaddress,
                                                 IS_URL(mode='generic', allowed_schemes=['ftp', 'http', 'https']))
db.website.ipaddress.requires = TwoLinkedValidator(request.vars.ipaddress,
                                                   request.vars.weblink,
                                                   IS_IPV4())
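The (value, error) protocol generalizes to any custom validator. As an aside, here is a minimal made-up example (IS_POSITIVE is illustrative, not a web2py built-in) showing the same convention in isolation:

```python
# Illustrative only: IS_POSITIVE is not a web2py built-in. Any callable
# returning (value, None) on success or (value, message) on failure can be
# used in a field's requires attribute.
class IS_POSITIVE:
    def __init__(self, error_message='must be positive'):
        self.e = error_message

    def __call__(self, value):
        try:
            return (value, None) if float(value) > 0 else (value, self.e)
        except (TypeError, ValueError):
            return (value, self.e)

    def formatter(self, value):
        return value

print(IS_POSITIVE()('3.5'))    # ('3.5', None)
print(IS_POSITIVE()('-1')[1])  # must be positive
```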

How to solve lazyT issues


Web2py has a great tool to internationalize content: the T() function. Every string passed as a parameter
will be put into the translation files.
T() works in lazy mode by default. This means the real content is established when rendering the view.
A lazy T() returns an object instead of a string, so if you need the translated string value in a function, you'll
get an error: argument must be string or read-only buffer, not lazyT.

Listing 8. Example of error string


timestamp_string = datetime.strftime(datetime.now(), T('%Y-%m-%d %H:%M'))
label = DIV(timestamp_string)

Then you have two options:

Use a function that immediately forces the lazyT object to a string, like .xml().
Temporarily disable laziness with T.lazy = False.

Applied to the previous example:


Listing 9. First solution
timestamp_string = datetime.strftime(datetime.now(), T('%Y-%m-%d %H:%M').xml())
label = DIV(timestamp_string)

Or you can try this:


Listing 10. Second solution
T.lazy = False
timestamp_string = datetime.strftime(datetime.now(), T('%Y-%m-%d %H:%M'))
label = DIV(timestamp_string)
T.lazy = True
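To see why the error occurs in the first place, here is a tiny stand-in for lazyT (plain Python, independent of web2py, with FakeLazyT being a made-up class): an object that only becomes a real string when forced.

```python
# A minimal stand-in for web2py's lazyT (illustrative, not the real class):
# string-expecting APIs reject it until it is forced to a plain string.
from datetime import datetime

class FakeLazyT:
    def __init__(self, text):
        self.text = text

    def xml(self):
        # Like lazyT.xml(): force the lazy object into a plain string.
        return self.text

fmt = FakeLazyT('%Y-%m-%d %H:%M')
try:
    datetime.strftime(datetime.now(), fmt)  # rejected: not a string
except TypeError as e:
    print('TypeError:', e)

print(datetime.strftime(datetime.now(), fmt.xml()))  # works after .xml()
```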

About the Author

I'm Davide Marzioni and I have worked since 2011 as a software developer for a small company in Italy,
mainly focused on research and development in the automation and electronics fields. I use Web2py in as many
projects as I can because it brings your application to a web environment in an easy way.


Tutorial for Creating a Simple Hyakunin-Issyu Application Using Sinatra and Heroku
by Tomomichi Onishi

Figure 1. Image of Karuta Hyakunin-Issyu based card game (photo credit: aurelio.asiain via photopin cc)

Overview
In this tutorial, we'll see how to create a web application using Sinatra, the light-weight Ruby framework,
and how to deploy it on Heroku, the web application hosting service.
As a sample, we create a simple web application using Hyakunin-Issyu, the beautiful anthology of ancient
Japanese poems, as a theme.
This app has only two pages; one shows the list of all the poems and the other shows the details of
each poem.
(Don't worry if you have never heard of Hyakunin-Issyu. You'll see a quick guide at the end of this introduction.)

What you will learn

Through this tutorial, you'll learn the following:


how to use Sinatra framework
how to deploy your app on Heroku
general understanding of Hyakunin-Issyu

What you should know

This tutorial expects you to know:





Ruby
Gem management using Bundler
Git
Haml


About Hyakunin-Issyu
Hyakunin-Issyu, or the one hundred poems by one hundred poets, is an anthology of one hundred tanka, a
Japanese poem of thirty-one syllables, selected by a famous poet in the medieval period.
http://en.wikipedia.org/wiki/Ogura_Hyakunin_Isshu Wiki page of Hyakunin Isshu
Tanka is made of thirty-one syllables: five-seven-five for the first half of the poem and seven-seven for
the last half.
As it can't contain very much information in such a limited number of words, it's very important to feel the
aftertaste of the poem.
Composing a poem with carefully selected words, describing delicate feelings and the beautiful scenery of
nature, is a very Zen-like way, and this is a culture we Japanese should be proud of.
We often play the Hyakunin-Issyu based card game called Karuta during the New Year's holidays in Japan.
The basic idea of the Karuta game is to quickly determine which card out of an array of cards is
required, and then to grab the card before it is grabbed by an opponent.
Chihayafuru, the karuta-themed comic, became a big hit in Japan, and now this traditional culture has become
popular again.
Please take a look at this comic if you are interested.
http://www.youtube.com/watch?v=rxebYxY9NXE opening video for Chihayafuru anime
Okay, I think that's enough for the intro.
Now its time to start the tutorial.

Using Sinatra
The first half of this tutorial is to create a simple application with Sinatra.

The very basics of Sinatra


Minimum construction

To start with the smallest possible project, all you need is two files.
Listing 1. The construction of the project files
|-sample
|-main.rb
|-Gemfile

The core parts of the application will be written in main.rb.


At the moment, we only need to add routing for root (/). So any requests for / will be processed here.
In this example, well output a simple hello world.

Listing 2. The minimum implementation of main.rb
#main.rb
require 'sinatra'

get '/' do
  'hello world.'
end

Next, make a Gemfile for gem management. For now, you only need the Sinatra gem.
Listing 3. List gems on Gemfile
#Gemfile
source :rubygems
ruby '2.0.0'
gem 'sinatra'

From the terminal, run bundle install to install gems to the project.
The project settings are almost done!
Move to the project root and run ruby main.rb from the Terminal.
The application will be run on port:4567 (this may be different on your machine, so be sure to check the
output in Terminal).
Open your browser and access localhost:4567.
If successful, you should see the words hello world displayed there.
Adding more pages

Okay, now we're going to add some more pages to this app (it's just too simple, otherwise!).
Edit main.rb to do this:
Listing 4. Adding another page to main.rb
#main.rb
...
get '/poem' do
  'this is another page!'
end

Well done! Now we have another page with the route /poem.
Restart the project by running ruby main.rb and access localhost:4567/poem in your browser.
You should now see this is another page! displayed there.
Auto reloading Sinatra

It can get tiresome to restart the process every time you've changed something in the code.
To make things easier, let's introduce auto-reloading into our app.

Listing 5. Add sinatra-contrib to Gemfile
#Gemfile
...
gem 'sinatra-contrib'

Add this line to Gemfile and run bundle install again.


Then require sinatra/reloader on main.rb.
Listing 6. Require sinatra/reloader in main.rb
#main.rb
require 'sinatra'
require 'sinatra/reloader'
...

That's all we need. Try restarting main.rb again (it's the last time, I promise!), then access localhost:4567
in the browser.
Next, change the hello world message on main.rb and refresh the page. If all goes well, you'll now see the
message changed without having to restart.
Accept parameters

One last thing for this section is to accept URLs with parameters, like /poem/13, so that the page contents
update based on this new value.
Listing 7. Accept parameters in main.rb
#main.rb
get '/poem/:id' do
  "this page shows the detail of poem-#{params[:id]}"
end

Add :id to the get part, and use that param with params[:id].
Now try accessing localhost:4567/poem/13. The content should have changed.
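One thing to keep in mind: route parameters always arrive as strings. A plain-Ruby illustration (no Sinatra required; the hash below just mimics what Sinatra builds for /poem/13):

```ruby
# params values are Strings; convert with to_i before doing arithmetic.
# This hash mimics what Sinatra provides for /poem/13.
params = { id: '13' }

puts params[:id] + '1'    # "131" -- string concatenation
puts params[:id].to_i + 1 # 14   -- numeric arithmetic after conversion
```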

Developing the main parts


Okay, we now have much of the core of the project completed.
I have made a HyakuninIssyu gem which allows us to use the poem data easily, so let's install it.
(Don't worry, the file contains English data, too.)
If you want to know how to use the gem, please check it out here.
https://github.com/Tomomichi/HyakuninIssyu Tomomichi/HyakuninIssyu
Install HyakuninIssyu gem

Add the gem to Gemfile and run bundle install again.

Listing 8. Add HyakuninIssyu gem to Gemfile
#Gemfile
gem ...
gem 'HyakuninIssyu'

You'll also need to require it in main.rb.


Listing 9. Require HyakuninIssyu gem in main.rb
#main.rb
require ...
require 'HyakuninIssyu'

With that done, check to make sure it works.


Listing 10. Add sample code to test gem
#main.rb
get '/' do
  data = HyakuninIssyu.new
  data.poem(1).kanji
end

Add this to main.rb and then access localhost:4567 in your browser.


Have you found the poem of Emperor Tenchi (in Japanese this time)?

Figure 2. The card of Emperor Tenchi


This poem describes the miserable life of farmers, but isn't it strange that the emperor composed a poem like
this? How could he understand the feelings of those people?
It's one of the mysteries of Hyakunin-Issyu.

Index page

Okay, we'll now finish the index page using this gem.
This page shows the list of all the poems. Use the poems method of the gem:
Listing 11. List all the poems in index page
#main.rb
get '/' do
  data = HyakuninIssyu.new
  @poems = data.poems
end

That's it. We set all the poems' data to @poems.


Now it's time to finish the view files.
Use separate view files

It'll be messy if you write the whole HTML document in main.rb, so we will divide the code and use separate
view files.
Listing 12. The construction of the project after adding view files
|-sample
|-...
|-views
|-index.haml
|-poem.haml

Add a views directory and create haml files there.


Install the haml gem to use haml files.
Listing 13. Adding haml gem to Gemfile
#Gemfile
...
gem 'haml'

And now create the index.haml file to show the list of poems.
Listing 14. Index.haml
-# views/index.haml
%h1 INDEX
- @poems.each do |poem|
  - unless poem.nil?
    %p #{poem.kanji}
    %small #{poem.en}

One last thing to do is to declare the use of the haml file in main.rb.

Listing 15. Declare the use of haml file
#main.rb
get '/' do
  ...
  haml :index
end

This simply means that it uses views/index.haml as a view file.


Now let's access localhost:4567 again to see whether the content of index.haml is shown there.
Remember that we used @poems in main.rb.
This enables us to pass that variable to the view file.
Now the index page is done. Let's move on to the second page.
Poem detail page

As we enabled the parameter handling already, we use it to get poem data from the gem.
Listing 16. Developing poem detail page
#main.rb
...
get '/poem/:id' do
  id = params[:id].to_i # treat the parameter as an integer
  data = HyakuninIssyu.new
  @poem = data.poem(id)
  @poet = data.poet(id)
  haml :poem
end

We set the poem data to @poem and @poet, and declared that we use views/poem.haml as a view file.
The poem.haml file should look like this:
Listing 17. The content of poem.haml
-# views/poem.haml
%h1 POEM
%div
  %h2 Poem Info
  %p #{@poem.kanji}
  %small #{@poem.en}
%div
  %h2 Poet Info
  %p #{@poet.name.ja}
  %small #{@poet.name.en}

Access localhost:4567/poem/13 in the browser, perhaps with a different poem number, and check that the
poem data is shown correctly.
Finish the development

To finish the development of this app, well link these two pages.

Listing 18. Add a link to index.haml
-# views/index.haml
%h1 INDEX
- @poems.each do |poem|
  %p
    %a(href="/poem/#{poem.id}") #{poem.kanji}
    %small #{poem.en}

And add a very simple back link to poem.haml.


Listing 19. Add a link to poem.haml
-# views/poem.haml
...
%a(href="/") Back

Okay, we've now finished developing this very simple Sinatra web application.
It shows the list of all the poems of Hyakunin-Issyu, and you can see the details of each poem.
Now let's try to deploy this to Heroku.
Heroku Deployment

The last half of this tutorial is deploying the Sinatra application to Heroku.
Before continuing, please sign up and create your account on Heroku.
https://id.heroku.com/signup Heroku Sign Up
Also you'll need the Heroku Toolbelt to use the heroku command.
Please download this from the link below:
https://toolbelt.heroku.com/ Heroku Toolbelt
Okay, now let's get started.

Create a Heroku app


First you need to create a Heroku app.
Move to the project root and run the following command:
Listing 20. Create a new heroku app
heroku create YOUR-APP-NAME

That's all. The empty app is created on Heroku and it's added to your git remote repository.
(You can check this by running the git remote command.)

Create a system startup file


Before deploying your app, you need the system startup file to run your app on Heroku.

Create config.ru file as shown below:
Listing 21. Create a config.ru file
#config.ru
require 'bundler'
Bundler.require
require './main' # requiring main.rb
run Sinatra::Application

Introduce git version management


As we use the git command to deploy the app to Heroku, we need to introduce git and commit the changes
so far.
Listing 22. Introduce git version management
git init
git add .
git commit -m "initial commit"

If you're not familiar with git, check the Git Book or other tutorials.
http://git-scm.com/book Git Book
Now we're ready for deployment!

Deploy to Heroku
Deploying to Heroku is extremely easy. Just run the following command:
Listing 23. Deploy command to Heroku
git push heroku master

That's it. After successfully building your app on Heroku, run heroku open or
access APP-NAME.herokuapp.com to see your app.
Is your app working well? If you find some errors, please run heroku logs to see what's wrong.
Okay, that's the end of the tutorial.
The final version of the code is in my GitHub repository.
If your code doesn't work, please check there and compare it with yours.
And more..

This tutorial covers only the very basics of Sinatra and Heroku to keep it simple.
If you find them interesting, please go further to get to know them better.
The following topics would be your next challenges:

Sinatra

use layout/shared files in view


use Helper
introduce SCSS, CoffeeScript
internationalization of the app
test with Rspec
introduce login management with Sorcery
Heroku

prevent Heroku app from sleeping with Heroku Scheduler


monitor the app performance with NewRelic
use thin server instead of webrick
build the staging app
connect to the database and back it up
use Travis CI for the automatic test and continuous deployment
Hyakunin-Issyu
learn the poems of Hyakunin-Issyu and remember them
read Chihayafuru to know the poems more.
join the Karuta game.
If you have an interest in these topics, I'll write the next article about them.
Please send me a request to let me know what you would like next: tomomichi.onishi@gmail.com.

About the Author

The author of this article is a Japanese web developer interested in Hyakunin-Issyu.


My GitHub account is here: https://github.com/Tomomichi the authors github account.
My past products are:
booklovesmusic: a music recommendation service which matches your favourite books
Hyaku-Ichi: will help you to remember Hyakunin-Issyu in about a month from now


Start Developing Better Web Applications


by Manfred JEHLE
Web applications are a good thing: no client installation needed, and they usually work properly with
different browsers and browser versions! However, their functionality and look and feel currently differ
greatly from a desktop application.
It does not have to be this way! We will look step by step at various issues and suggest resolutions to brush
them up. The resolutions also provide additional benefits that can make a web application more
useful than a desktop application.

Avoid that flickering screen


Most web applications are designed as common web pages, frequently by the same developer creating the
marketing web pages. On marketing-designed web solutions, pixel-accurate representation of the content is
in great demand, whereas page reloads or rebuilds of the whole page are not an issue. In a web application,
on the other hand, the permanent reload of the whole page makes the work slow and unattractive. Additionally,
the reload means that a lot of data has to be transferred from page to page.

Solution
To solve the flickering screen, use AJAX functionality: you will be able to replace any identifiable part of the
web page without reloading the whole page. In other words, you need a real single page application. If you
choose a good AJAX library, it supports features such as changing input element types, for example from text
input to drop-down box depending on the entered value, by identifying the value on the server side and
providing additional content back to the page.
With AJAX, you can develop user-friendly applications like desktop applications.

Developers corner
Web application architecture covers not only the server side but also the client side. To get an
efficient client application, it is not necessary to load all the JavaScript code into the single page at initial
load time. Such designs frequently result in slow, inefficient web applications with too much overhead that suffer
from lost flexibility and maintainability. Keep an eye on the client-side HTML code structure, and reload
and dispose of partial JavaScript code as needed.

Web application environment


Some web applications are developed for Silverlight or Flash, but these technologies are not usable in
most browsers on mobile devices. Not all browsers and operating systems support Flash and Silverlight.
The current heterogeneous environment of devices and device vendors limits the IT department's
flexibility. Remote-desktop-oriented solutions like Silverlight are not the real answer either, because their
usability is not the best on mobile devices. Try using a standard desktop application with your fingers on a
mobile device: your fingers are mostly too thick to get the best interaction on screen.

Solution
Use pure HTML on the client side! The reward for your efforts: approximately 80% compatibility with
common browsers.


Developers corner
Avoid using hacks to get a nonstandard or incorrectly implemented browser element running! When the
browser is fixed, your hack will mostly produce side effects, so that you have to remove the code that previously
fixed the bug. Use code that will run in all browsers at development time, and you will be on the safer side!

No Menu
A lot of web applications are not designed with the elements commonly used in desktop applications.
The look and feel of desktop applications is given by standard user interfaces, like a menu bar with all the
commands needed to handle the application. Web applications are frequently designed to marketing-page
standards, which, as described above, are not the right approach for web applications.

Solution
Consider application processes, keeping the focus on making your web application function like common
desktop applications. Use a common menu element to make all options available in the menu bar, and enable
the icons and descriptions only if the function is available in the current context. Use common icons and
domain-specific or general (common) naming for the menu items.

Developers corner
Use a state machine to handle all the combinations of menu item states and the availability of menu
functions. Show and open events alone are not enough, because menu parts can also be set inactive or hidden
depending on the content state, just as in a common desktop menu.
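As an illustration of the idea (item names and contexts below are made up), such a state machine can be as small as a table mapping the application context to each menu item's state:

```python
# Sketch of a menu-item state machine. Item names and contexts are
# illustrative; the point is that every (item, context) pair resolves to
# exactly one of three states, with HIDDEN as the safe default.
from enum import Enum

class ItemState(Enum):
    ACTIVE = 'active'      # clickable
    INACTIVE = 'inactive'  # visible but greyed out
    HIDDEN = 'hidden'      # not rendered at all

MENU_RULES = {
    'save':  {'editing': ItemState.ACTIVE, 'viewing': ItemState.INACTIVE},
    'print': {'editing': ItemState.ACTIVE, 'viewing': ItemState.ACTIVE},
}

def menu_state(item, context):
    return MENU_RULES.get(item, {}).get(context, ItemState.HIDDEN)

print(menu_state('save', 'viewing'))    # ItemState.INACTIVE
print(menu_state('export', 'editing'))  # ItemState.HIDDEN (unknown item)
```

The same table-driven approach carries over to ribbon items, which share the active/inactive/hidden states.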

Ribbon
The ribbon user interface element is not often used in web applications. But this element provides fast
access to many of the web application's functions and provides handling closer to the common Microsoft
desktop applications.

Solution
If it makes sense, add a ribbon to your web application to make more useful functions accessible to users.
Don't hide functions in the depths of menu structures!

Developers corner
Use a state machine to handle all the combinations of ribbon item states and the availability of ribbon
functions. The same applies to ribbon items as to menu items: they too have additional states like
hidden or inactive.

Undo and Redo


Most web applications have no built-in Undo or Redo function, but in desktop applications this functionality
is one of the most used. Providing it in web applications greatly improves the user experience.

29

The Best Of

Solution
Keep the last actions in the background to implement the Undo and Redo functionality. Practically speaking,
it is not so simple, but try to add it in your next update or new web application.

Developers corner
You must check on each Redo operation whether it still makes sense at the current position. Undo is not a
big problem, because each stored action refers directly to the content part it changed.

Wizard functionality
In some desktop applications a wizard helps users step by step through entering and editing data. The wizard
makes it easier for users to enter structured data into the application. Such functionality is also used in
online survey tools, and in many web applications a wizard would make it easier for the user to enter the
data. Another option is to allow the user to switch between dialog- and wizard-based data editing.

Solution
Provide a wizard for the dialog-based data editing and allow switching between the two views.

Developers corner
Build the edit dialog from web parts and show or hide them to switch between common
dialog content and wizard content.

Push function
In some desktop applications you can be notified of other users' activities when using the same data or file.
The common workaround is notifying a user that you are editing the data which the other user is already
viewing. Another method is presenting a read-only view until the editing has been completed.
This functionality needs information about what you are currently viewing and what other users are doing.
Web applications can also provide this functionality, but I have not seen many web applications, other than
my own, implement it.

Solution
It is possible to implement a notification to other users handling the same data, but keep in mind, just as
with desktop applications, to use this functionality only when circumstances call for it.

Developers corner
Use a simple JavaScript timer to ask the server who is currently using the same data as you, and hold
the notifications ready for the other user. With a second timer, ask the server for notifications. Without
any web socket you can provide content as if pushed from the server. At the moment there is no web socket
implementation available that works on every browser and operating system.


Local devices
In desktop applications it is mostly not a problem to add local devices, or other devices in the network, to the
functionality of the solution. In web applications the usage of local devices (except printing) is often cited as
a reason why a web application is not practical to develop. But that assumption is not true! With a little more
effort it is possible to use most local devices in a web application.

Solution
Make local devices accessible to web applications by wrapping them in a local service with a web service
interface. With this trick it is possible to access local devices through the server, and by proxy the
external application.

Developers corner
If you build such services, the easier way is to use REST services.
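A minimal sketch of the wrapper idea in Python (the device read is faked, and the endpoint name and port are arbitrary choices for illustration): a local HTTP service that exposes a device as a REST-style JSON resource.

```python
# Sketch: expose a local device behind a tiny REST-style HTTP service so the
# web application (or the server acting as proxy) can reach it. read_device()
# is a stand-in for real device I/O; '/device' and port 8080 are arbitrary.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_device():
    # Replace with real device access (serial port, scanner, scale, ...).
    return {'status': 'ok', 'value': 42}

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/device':
            body = json.dumps(read_device()).encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def serve_forever():
    # Blocking call; run this in the real local service process.
    # Bind to localhost only: the service is a local wrapper, not public.
    HTTPServer(('127.0.0.1', 8080), DeviceHandler).serve_forever()
```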

Dump and last actions


In some really professionally developed desktop applications, when an error occurs you may be able to send an
application dump, with the last few actions performed, to the software support centre or helpdesk.
That allows the support team to rebuild the current state and execute the last action before the error occurred.
Similar functionality is currently not common on the client side; on the server side it is a common and helpful
support feature. It is possible to build such dump and last-action functionality on the client side.
The support team can easily recover the HTML page when it gets the real HTML page part.

Solution
Adding such a function to the web application does not take much effort. You have to capture the URL-based
actions (GET, POST) and store them in a first-in-last-out (FILO) queue. The dump is quite easy to
implement: select the HTML part, copy the outer HTML and send it through the server to the support team.

Developers corner
Use the standard functions from jQuery to capture the HTML dump. The FILO queue needs a little bit more
JavaScript effort, but not too much.
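A sketch of the FILO buffer (in Python for brevity; the real implementation would live in client-side JavaScript, following the same logic): keep only the most recent N actions, newest first, so the dump stays small.

```python
# Bounded "last actions" buffer (FILO): newest action first, oldest dropped
# once the size limit is reached. collections.deque(maxlen=...) gives the
# bounded queue for free.
from collections import deque

class ActionLog:
    def __init__(self, size=10):
        self.actions = deque(maxlen=size)

    def record(self, method, url):
        self.actions.appendleft((method, url))  # newest at the front

    def dump(self):
        return list(self.actions)

log = ActionLog(size=3)
for i in range(5):
    log.record('GET', '/page/%d' % i)
print(log.dump())  # [('GET', '/page/4'), ('GET', '/page/3'), ('GET', '/page/2')]
```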

Device detection
Device detection in desktop applications has no significance, because they run mostly in similar environments
or are designed to run on different operating systems with standard desktop screens. Exemplary web
applications support a wide range of devices, such as tablets and phones along with desktop machines.
By detecting the device, you are able to deliver device-dependent content to the target. Common web
application solutions support the usual boring actions such as zooming and moving screen content, but
frequently suffer from issues such as clicking the wrong link because your fingers are too thick. Mobile users
benefit from device-specific content that is intended for fingers.
Most web applications detect devices at CSS level, but always deliver the full content and only hide
some parts or enable alternate designs for smart phones and tablets. Such a solution is ineffective because
it wastes bandwidth by forcing the device to download extra, unnecessary content. Only delivering the
effectively needed content will be an efficient solution.


Solution
Detect the device in your web application and deliver only device-specific code.

Developers corner
Don't be satisfied with common CSS solutions; go a step further by detecting the device and delivering
content specified for that device. The smart phone's screen must be visible at 100% scale at any time!
Overall, a smart-phone-ready design will involve some effort.

Common URL
In the context of device detection it is mandatory that you have the same URL and sub-URLs for all devices.
The common URL is needed to store page links in the cloud or in a common link collection. Only with a
common URL base will the web application be handy for users working with different device types.

Developers corner
Don't think about URL switches: they don't solve the problem of URLs stored in link collections!

Final statement
Web applications are not dead! In today's multi-device environment, web applications soar to a new prime
of life. Most web applications are developed on low budgets, but they are used like desktop applications
developed on high ones. Closing this gap and adding the described missing functions can lift web applications
to a higher level. Clearly it takes some effort to achieve this higher state, but in the end you get a
solution ready to use on several devices.

Another fact
Web applications developed more than thirteen years ago run without problems, and without any update of the
user interface, in current browsers. How many desktop applications can reach such a lifetime, with all
the operating system changes in the past?
The discussions about apps remind me of the late 90s, when the battleground was between operating
systems. At the moment we have the same kind of solution as we had in the past with Java: a crutch not
really working perfectly for app development.
For the foreseeable future, the web application provides a common base for all operating systems and
devices: the browser.

About the Author

Manfred is CEO and Chief Architect of several products and customer projects and has more than 17
years of experience in web applications and more than 28 years in general information technology.
Contact: jehle@cetris.ch


Developing Your Own GIS Application


with GeoDjango and Leaflet.js
by Aimar Rodriguez
Geographic information systems seem to have been proliferating lately, both on smart
phones and on the web. GeoDjango is a module included in the Django contrib package
which allows developers to easily create geographic web applications. In this article we will build a
simple GIS application using GeoDjango and Leaflet.js.
What you will learn

You will learn to develop a simple geographic application using Django. You will learn to set up a geospatial database using PostgreSQL and PostGIS, to represent and manipulate the data stored in this database with Django models and the GeoDjango extensions
for these models, and to present it to the user using the HTML5 map framework Leaflet.js.

What you should know

In order to fully understand this article, some knowledge of the basics of the Django web framework is recommended, as
well as knowledge of the Python programming language, even though they are not required. It is also advisable to have some
knowledge of the JavaScript programming language.

A Geographic Information System, or GIS, is a computer system that enables users to work with spatial
data. Even if this concept was invented around the 60s, it has only gained relevance in the past years, with
powerful applications like Google Maps or OpenStreetMap. The proliferation of this kind of application has
been huge, to the point that now even the smallest local transport company uses these technologies. We have
all kinds of projects, from social networks based on routes, like Wikiloc, to projects which attempt to bring a
spatial dimension to the Semantic Web, like LinkedGeoData or GeoSPARQL.
One of the biggest benefits that the developer community has gotten from this phenomenon is the
appearance of diverse tools and frameworks for spatial data manipulation, and this is where GeoDjango comes
into play. Django is an open source web development framework written in Python; it has a huge community
and a wide array of tools for developers. Many of these tools come included in the contrib package of
the framework, where we can find the geographic web framework GeoDjango.
What this package offers to the web developers is the following:
The Model API, to store, query and manipulate the geographic data stored in the database using the
typical Django models,
The Database API, to manipulate different spatial database back ends,
The Forms API, which provides some specialized forms and widgets to display and edit the data on a map,
The GeoQuerySet API, which allows using the QuerySet API for spatial lookups,
The GEOS API, a wrapper for the GEOS open source geometry engine, written in C++,
The GDAL (Geospatial Data Abstraction Library) API,
The measurement objects, which allow convenient representation of distance and area measure units.
Apart from the aforementioned, several utility functions and commands are included in the package, as well as a
specialized administration site.
We will be developing a very simple GIS application which allows users to upload routes and to visualize them
on maps. We have already seen that we can store and manipulate all this data with GeoDjango; however, we
still need some way to present this data adequately to the users of the web page. Fortunately, there are several
choices for this purpose, but we will usually find two alternatives: OpenLayers and Leaflet.
Both are JavaScript libraries which allow you to create a dynamic map on a web page. Which library to choose is
up to each developer; I personally prefer Leaflet.js for its ease of use and learning. However, OpenLayers is a
more mature project and promises several improvements in its third version, which is yet to come.
With these two tools we can easily create a GIS web application of any kind. However, when developing
one we will have several concerns not related to the available technologies, for example: where
can we get our data from? One approach, followed by many, is to let our users generate the data;
however, this is not always suitable for our application. It is also quite common to use external information
sources, like available web services. Even if we are not going to explore the possibilities that these web
services offer, I will give the following list of web services with some of the functions they offer.
Nominatim, a tool to search OSM (OpenStreetMap) data. It allows address lookup and reverse
geocoding, among other functions. A guide to this search engine is published on
http://wiki.openstreetmap.org/wiki/Nominatim,
The OSM API. OpenStreetMap offers an XML API which allows uploading data to and downloading data from their
database. You can find more about it at the following address: http://wiki.openstreetmap.org/wiki/API_v0.6,
LinkedGeoData. For those desiring to implement a semantic spatial web application, know that
LinkedGeoData offers an API and has developed an ontology. It even has a SPARQL endpoint. More
information on http://linkedgeodata.org/OnlineAccess,
Google Maps API web services. Google Maps has its own API (it even has a library for map visualization).
However, it imposes several limitations, so it is not used for more advanced GIS applications. More information
on the Google developers webpage: https://developers.google.com/maps/documentation/webservices.

The Web Application


To introduce these two libraries we will develop a very simple web application. It will allow users to upload
routes in GPX format, which will be processed and stored in a spatial database. Then, the users will be able
to browse all the routes and visualize one of them. We will also perform a small analysis of these routes,
using the GEOS API.
The code will be presented along the article, however, the whole project can be found in the following
repository: https://github.com/aimarrod/SimpleGISApp.
The web application will consist of a simple HTML page, which will contain a form allowing the users to
submit their GPX files, a map showing a route and a list of all the uploaded routes. On the back end we will
have a small Django project with a PostgreSQL database extended by PostGIS. The choice of database has
been made taking into account that GeoDjango has some limitations depending on the database, and the
least constraining back end is PostGIS. Anyway, there are several choices, for example MySQL, so feel free to
use any of them.

The first steps


First of all, we have to install a PostgreSQL database and the Django framework (and of course a Python
interpreter if you don't have it yet). This is usually trivial, and if you are working on a Linux distribution you
may find instructions on how to do it in your distro's wiki.
Note

From now on I will be assuming that PostgreSQL, Django and Python 2.7 are installed. I am working
with an Arch Linux distribution, so some installation steps may vary. Also, I will not be explaining all the
basics of the Django framework; some aspects like the settings file and the urls.py file will be omitted. If
you don't know the framework, I encourage you to look up the Django documentation page, which explains
everything very nicely. You can find it at the following address: https://docs.djangoproject.com/en/1.5/.

Installing PostGIS will be different depending on the OS you are using. In my case I can obtain it from the
official repositories of my Linux distribution; however, PostGIS offers binary installers for Windows,
OS X and Linux, plus instructions for downloading and compiling the source code on the following page:
http://postgis.net/install.
First, we will create a user for our spatial database, and then we will create a database in which we will load
the PostGIS spatial types later. We will also need to install the PL/pgSQL language on the database, since this
extension needs it. Then, we will load the PostGIS spatial types from the directory in which they reside (in
my case /usr/share/postgresql/contrib/postgis-2.1/). A common next step is to make this database a template,
so that we can create spatial databases without repeating all these steps.
Listing 1. Preparing the template database
$ su simplegisuser
Password:
$ createdb -O simplegisuser template_postgis -E UTF-8
$ createlang plpgsql template_postgis
$ psql -d template_postgis -f /usr/share/postgresql/contrib/postgis-2.1/postgis.sql
$ psql -d template_postgis -f /usr/share/postgresql/contrib/postgis-2.1/spatial_ref_sys.sql
$ psql
> UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';

$ createdb -T template_postgis simplegisdb

Platform-specific instructions can be found on the PostGIS homepage and on the GeoDjango documentation
page: https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#installation.
After all the installations are done, we can finally get to creating our Django project. The first
thing to do is to access the settings.py file in order to add django.contrib.gis to the installed
apps. We will also need to edit the database connection settings, in order to match the database we created in
the previous section. The modified parts of the settings.py file should look similar to this:






DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'simplegisdb',
        'USER': 'simplegisuser',
    }
}

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.gis',
)

Once all the setup steps are done, we can finally start coding our application.


The Back End


On the back end of our application, we will define the models for our routes, define a simple form
to allow the uploading of files, create one view which will show the map to the user and
implement a method to parse the uploaded GPX files.

The models
One essential part of most Django applications is the models, and this case is no different. Since we want to
store routes in our web page, we will first create a route model in a models.py file, following the convention.
If you are familiar with this framework, you should know that the first thing to do is to import the models;
however, since we are storing spatial data we won't use the conventional models, but the ones defined in
GeoDjango. For this, we will import the models from django.contrib.gis.db.
Apart from this little change, we can define our models as usual, with the advantage that we now have some
additional fields related to spatial data. Taking advantage of this feature, we will declare the model for our
routes, which will contain the following fields:
A name for the route (Django CharField),
The date on which it was uploaded (Django DateField),
The geometric representation of the route (GeoDjango MultiLineStringField).
Here we start seeing the tools that this package offers us. In our route model, we have declared a
MultiLineString field, which corresponds to one of the geometry objects specified in the OpenGIS Simple
Feature specification. Simply put, a MultiLineString is formed by a list of LineStrings, which represent a set
of points or coordinates. You can find more about the models API on the Django documentation page:
https://docs.djangoproject.com/en/dev/ref/contrib/gis/model-api/.
The models.py file should look similar to this:
from django.contrib.gis.db import models


class Route(models.Model):
    name = models.CharField(max_length=255)
    creation_date = models.DateField()
    representation = models.MultiLineStringField(dim=3)
    objects = models.GeoManager()

The reason for dim (dimension) to be 3 is to allow the field to save the altitude. This attribute specifies
the dimension that the geometric field has, which defaults to 2. All geometric fields are composed
of points, which must have at least two dimensions (latitude and longitude), but can be extended by a
third dimension (altitude). The choice of the dimensions of the geometries depends on the application
to build and on the sources of information, and since GPX files allow recording altitude, we will
include all three dimensions.
Of course, it is possible to work around representing this geometrical object without the use of this
package. We could have defined our own Point model in which we store coordinates as floats, and then
defined a LineString model and so on; however, this would require us to do extra work and, more importantly,
we wouldn't have access to all the utilities that the GEOS API offers.
Once the model is defined we can finally synchronize the models with the database, using the following
command: python2 manage.py syncdb.


The views
Django views are functions that take a web request and return a web response. For this simple example,
we will define a single view which will always return an HTML response. The document we will return will
contain the list of all the uploaded routes and a form which will allow our users to upload files.
As usual, we will create a forms.py file in which the form will be defined. This form will contain two
fields, the first one for the name of the route and the second one for the file. We will also perform two
validations: to see if the name already exists and to check if the uploaded file is a GPX (though at this point
we can only check it by looking at the extension of the file).
Listing 2. Forms file
from django import forms

from simplegisapp.models import Route


class RouteUploadForm(forms.Form):
    name = forms.CharField(max_length=255)
    file = forms.FileField()

    def clean_name(self):
        name = self.cleaned_data['name']
        if Route.objects.filter(name=name).count():
            raise forms.ValidationError('That name is not available')
        return name

    def clean_file(self):
        f = self.cleaned_data['file']
        extension = f.name.split('.')[-1]
        if extension not in ['gpx']:
            raise forms.ValidationError('Format not supported')
        return f

Next we will create a view which will handle the uploading of files and will return the HTML file containing
the map and the form. However, before that, we should take care of parsing the documents that will be
uploaded to our page. The GPX files we will be parsing follow a structure similar to the following:
<gpx>
  <trk>
    <trkseg>
      <trkpt lat="XXX" lon="XXX">
        <ele>XXX</ele>
        <time></time>
      </trkpt>
    </trkseg>
  </trk>
</gpx>

For this we will create a file called utils.py and define a method for parsing the file. This function will create
a new LineString for every trkseg found, which will contain all the Points identified in the trkpt tags. When
the trk tag ends, all these LineStrings will be used to create the MultiLineString which will be stored in the
database. There are many ways to do this, so I won't enter into the details of the implementation; you can
anyway find the utils.py file in the repository. Just one note: I have used the iterative parser from the lxml
Python package to parse the file. This is due to the fact that GPX files may be quite large (for
testing purposes I used a file with 33000 lines), so the iterative parser may improve the speed and solve some
recursion problems.

In the view, we will just check whether the method of the request is POST or GET. In the first case, it means
that the user has submitted a form, so we will check if it is valid and we will parse and store it. In
both cases we will retrieve a list of routes and embed it in the HTML file, so the views.py file should
look more or less like the following example.
Listing 3. views.py
from django.core.context_processors import csrf
from django.shortcuts import render_to_response

from forms import RouteUploadForm
from utils import parse_gpx
from simplegisapp.models import Route


def route(request):
    if request.method == 'POST':
        form = RouteUploadForm(request.POST, request.FILES)
        if form.is_valid():
            data = form.cleaned_data
            f, name = data['file'], data['name']
            route = parse_gpx(f=f, name=name)
    else:
        form = RouteUploadForm()
    routes = Route.objects.all()
    context = {'form': form, 'routes': routes}
    context.update(csrf(request))
    return render_to_response('routepage.html', context)

And with this last step we have a very simple application working. Of course, we have to configure the
settings file to point to the right templates and static files directories, but I will leave that out of the article. A
guide can be found at https://docs.djangoproject.com/en/dev/ref/settings/.
At this point, however, we have not used all the power of the GeoDjango package, and we haven't developed
any kind of map to show the routes to the users. In the next section we will see some functions of the GEOS
API, and we will get into the development of the front end later.

Extending the models


Until now, we have just stored some geometric fields and processed files to obtain that information, but
GeoDjango's real power resides in the operations that it allows us to perform with those geometric objects.
To show some of these features we will perform a simple analysis of the routes users have uploaded
to the web page.
A very basic operation we can do is to calculate the total length of one route. Usually this would mean
iterating over every coordinate and calculating the distance to the next (for example with the Haversine
formula). GEOS makes this trivial by providing an attribute on the geometry object, the length attribute, which
calculates the length differently depending on the geometry object. To make it as simple as possible, we will
add a wrapper method on the Route model that returns its length.
Listing 4. The length
def length(self):
    return self.representation.length
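For comparison, here is roughly what the manual approach mentioned above would look like, as a plain-Python sketch independent of GEOS. Keep in mind that the length attribute is expressed in the units of the geometry's coordinate system, so for unprojected longitude/latitude data it yields degrees, while the Haversine formula below returns metres:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    R = 6371000.0  # mean Earth radius in metres
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def route_length_m(points):
    """Sum the leg distances over a list of (lat, lon) points."""
    return sum(haversine_m(a[0], a[1], b[0], b[1])
               for a, b in zip(points, points[1:]))
```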

We can, of course, perform more complex operations; for example, we will implement a function that, given
a route, tells us which one is the nearest. For this we will be using the distance function, which returns the
distance between the nearest points of two geometries. We will define the method nearest on the Route model.

Listing 5. nearest method
import sys  # needed at the top of models.py

def nearest(self):
    minDist = sys.maxint
    rt = self
    for route in Route.objects.exclude(pk=self.pk):
        dist = self.representation.distance(route.representation)
        if dist < minDist:
            minDist = dist
            rt = route
    return rt

Finally, we will define another method to get the GeoJSON representation of a route. GeoJSON is a format
defined to encode simple geographic features in JavaScript Object Notation, which is supported by the JS
mapping framework we are using.
Listing 6. geoJSON
def geoJSON(self):
    return self.representation.json
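To make the output concrete, the json property yields a GeoJSON geometry string. For our 3D MultiLineString it is shaped roughly like the following hand-written sample (the coordinates are made up for illustration):

```python
import json

# Hand-written sample shaped like route.geoJSON() output for a
# MultiLineString with a single three-point, 3D segment
geojson = json.loads("""{
    "type": "MultiLineString",
    "coordinates": [[[-2.00, 43.00, 100.0],
                     [-2.01, 43.01, 110.0],
                     [-2.02, 43.02, 120.0]]]
}""")
print(geojson["type"])                 # MultiLineString
print(len(geojson["coordinates"][0]))  # 3 points in the first segment
```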

With this we have seen some of the simplest applications of the GEOS API. However, we have only
scratched the tip of the iceberg; there is much more it can do for us, so I encourage anyone to explore this library
and discover the powerful applications that can be easily developed using it. A complete guide to GeoDjango
and all of its features can be found on the Django documentation pages:
https://docs.djangoproject.com/en/dev/ref/contrib/gis/.

The Front End


Now that we have created our models and our views, we can continue to implement the front end of the
application. For that, we will simply create one HTML page, which will contain a form for uploading files, a
list of the uploaded routes and a map container. We will take advantage of the Django template system for this.

Listing 7. Sample HTML code
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Simple GIS App</title>
  <link rel="shortcut icon" href="/favicon.ico" />
  <link rel="stylesheet" href="/static/css/style.css"/>
  <script src="/static/js/jquery.min.js"></script>
  <!-- Leaflet CSS and JS files -->
  <link rel="stylesheet" href="/static/leaflet/leaflet.css"/>
  <link rel="stylesheet" href="/static/leaflet/leaflet.ie.css"/>
  <script src="/static/leaflet/leaflet.js"></script>
</head>
<body>
  <div id="text">
    <form id="form" method="post" enctype="multipart/form-data">{% csrf_token %}
      <legend><h2>Upload GPX file</h2></legend>
      {{ form.as_p }}
      <input type="submit" value="Submit" />
    </form>
    <div id="list">
      <h2>Routes</h2>
      <ul>
        {% for route in routes %}
        <li id="{{ route.pk }}" class="route-link">{{ route.name }}</li>
        {% endfor %}
      </ul>
    </div>
    <div id="data"></div>
  </div>
  <div id="map"></div>
  <script src="/static/js/map.js"></script>
</body>
</html>

The body of the HTML file can be divided into four pieces. The first is the form which will allow the users
to upload the files. The second is a container for the list of routes in the database. The third is an empty
container, which will be filled via AJAX with some data about the route the user is visualizing. The fourth
container is initially empty, but will contain the map once the page is loaded.
In order to use Leaflet.js, we have to download some JavaScript and CSS files which have to be
included in the document. These files can be downloaded from the Leaflet homepage:
http://leafletjs.com/download.html. Once they are downloaded, we only have to include them in the static
files directory and load them as regular JavaScript and CSS files. However, we have to be careful with two
details. First of all, our page relies on jQuery (Leaflet itself does not require it, but our AJAX code does), so we
have to download it (from http://jquery.com/download/) and include it in the document before our own scripts.
Second, we will create a script to initialize the map, which has to be executed strictly after the container for
the map is loaded; for this we can simply include the script in the body of the document, below the map container.
As mentioned, we will load the details of each route via AJAX, so we will need to create another view
which will return a JSON object containing the details of the route. We could also return an XML document;
however, since we have to embed a GeoJSON object in it and we will parse it in JavaScript, it seems more
adequate to use JSON.

Listing 8. Our new view
import json  # additional imports needed at the top of views.py
from django.http import HttpResponse


def routeJSON(request, pk):
    try:
        route = Route.objects.get(pk=pk)
    except Route.DoesNotExist:
        route = None
    if route is not None:
        rt = {'name': route.name, 'dist': route.length(),
              'nearest': route.nearest().name}
        rt['geojson'] = json.loads(route.geoJSON())
        return HttpResponse(json.dumps(rt),
                            content_type='application/json')
    return HttpResponse('', content_type='application/json')

Note that we load the GeoJSON string into a Python object before dumping it again. This seems redundant;
however, it is necessary, because if we dumped the JSON string directly it would be escaped and embedded
as a plain string instead of a nested object.
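The difference is easy to demonstrate in isolation (a plain Python sketch, separate from the view code):

```python
import json

geojson_str = '{"type": "MultiLineString", "coordinates": []}'

# Dumping the raw string embeds it as an escaped string value...
wrong = json.loads(json.dumps({"geojson": geojson_str}))
# ...while loading it first embeds it as a proper nested object
right = json.loads(json.dumps({"geojson": json.loads(geojson_str)}))

print(type(wrong["geojson"]))  # <class 'str'>
print(type(right["geojson"]))  # <class 'dict'>
```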
Once all this is ready, we can move on to creating our map. We will create a file called map.js in the static files
directory, which will contain the script initializing the map and the functions that allow the asynchronous
loading of the routes. First we will take care of creating the map; the code needed is the following.
Listing 9. Initializing the map
var route;
var map = L.map('map');
var osmLayer = L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png');
map.addLayer(osmLayer);
map.fitWorld();

First, we declare a variable called route, which will later contain the route the user is currently viewing.
Next, we call the map() function from the Leaflet library, which receives an identifier and creates a map in
the container with that id; we store it in a variable so that we can manipulate it later.
Leaflet works mainly with layers; markers, lines, tiles, etc. are all layers, which can be added to and removed
from the map. In order to be able to actually see something, we have to include at least one tile layer, which is in
charge of rendering the map. There are several free tile providers, but for this example we will be using the
tiles provided by OpenStreetMap, though we can add several tile layers at the same time and allow the user
to switch among them at will.
Note

You can find a script which creates shortcuts for several popular tile providers at the following URL:
https://gist.github.com/mourner/1804938.
After we have created all the tile layers we wish, we just have to add them to the map, with the addLayer()
function of the map or with the addTo() function of the layers. Finally, it is recommended to set the view port
of the map to something, since it will show nothing if it has no view port. An easy way to do this during
development is the fitWorld() function of the map.
Finally, we have to add an event to each element in the list, so that when the user clicks on it, a route is loaded
and the details of the route are displayed.

Listing 10. Loading a route
$('.route-link').click(function(){
    var id = $(this).attr('id');
    $.getJSON('/ajax/' + id + '/route', function(data) {
        // Remove the previous route and add the new one
        if (route != null) {
            map.removeLayer(route);
        }
        route = L.geoJson(data.geojson);
        route.addTo(map);
        map.fitBounds(route.getBounds());
        // Add the data to the data panel
        $('#data').html('<h2>' + data.name + '</h2>' +
            '<p><b>Distance: </b>' + data.dist + '</p>' +
            '<p><b>Nearest: </b>' + data.nearest + '</p>');
    });
});

When the user clicks on one of the links, an AJAX call is made to a URL which returns the details of the
route. The first thing to do is to remove the route which is currently being displayed; otherwise, we could end up
with a mess of lines on the map. Then, we just create a layer from the GeoJSON object, add it to the map
and set the view port of the map to the bounds of the route.
Here, we have transparently created a geometric object and added it to the map; however, Leaflet provides
some classes to represent polygons, LineStrings and other geometric objects in a similar way to GeoDjango
(but much more primitive). Though I won't explain all the functions in the library, I encourage anyone
interested to explore the Leaflet API (at http://leafletjs.com/reference.html), which gives a comprehensive
guide to using and extending Leaflet.
Finally, we just have to add the rest of the downloaded data to the document and it is finished.

The results
After all these steps we should have obtained something like this (the CSS file is provided in the repository):

Figure 1. Resulting web page

On the left part we can see all the information panels we created, and on the right part we can visualize the
map, with a blue line representing the route we are viewing. Anyway, we have only used a tiny part of the two
frameworks involved, which have many more functions than the ones explored.
GeoDjango allows us to manipulate and store spatial data in a very transparent way, but the real power of this
library comes from the operations it allows us to do. The capacity to make spatial queries and to manipulate
spatial data allows us to create very rich GIS applications without having to worry about complex algorithms
and different spatial representation systems.
Leaflet, on the other side, is a lightweight library, but at the same time very complete. It comes with a set of
built-in GUI elements, for example a panel which allows switching among different tile layers with a click.
It also has some spatial utility functions and classes similar to the ones in GeoDjango, though it is more
focused on visualization.
Also, in a similar fashion to Django, Leaflet has a very active community, which develops different plug-ins
for the framework. This, combined with its comprehensive API and its ease of use, makes
the task of plug-in development really easy; it's quite common to see developers extend this framework to
serve their purpose on a single project.
The application we have developed can be considered GIS, though it is really simple, and as anyone can see
there are really huge GIS projects, like Google Maps. If you are interested in the subject, I would recommend
checking out some of the following projects:

GeoSPARQL, an extension of the SPARQL protocol which adds support for spatial queries; it currently
has very few implementations. More information can be found at the following URL:
http://www.opengeospatial.org/standards/geosparql,
The GEOS library, which is the core of GeoDjango. It is completely open source and is hosted on
http://trac.osgeo.org/geos/,
Osmdroid, a library which allows the use of OpenStreetMap in native Android applications. It is a good
way to work around the restrictions of the Google Maps API. The project is hosted on
https://code.google.com/p/osmdroid/,
Wikipedia has a nice list of GIS data sources on the following page:
http://en.wikipedia.org/wiki/List_of_GIS_data_sources.
There are many more projects and papers, of course, but as you can see there is a big proliferation in the
world of GIS. Every day new projects appear, be they libraries, mobile apps, web apps, data sources or
whatsoever.

About the Author

Aimar Rodriguez is a Computer Science student in the final year of his bachelor's degree. He is currently working
in the MORElab research group, in areas related to the Semantic Web and the Internet of Things,
working with technologies like GeoSPARQL and GeoDjango.


Solving Metrics Within Distributed Processes

by Dotan Nahum
If you're building a Web backend or a processing backend nowadays, chances are you're
building a distributed system. In addition, you're probably deploying to the cloud, and
using paradigms such as SOA and REST. Today, these methods are very common, and I
watched them evolve from a best-kept secret 10 years ago, into best practices, and then into
a common, trivial practice today. This article will show you how to tackle the problem of
handling metrics around complex architectures.
You'll learn how to use Ruby to build a performance-first processing server, using technologies such as Redis,
Statsd, Graphite and ZeroMQ. More importantly, you'll learn about the whys of each of those components in
the context of this problem. Lastly, I hope you'll be inspired enough to either use the solutions suggested in the
text, or build your own tailor-made solution using the building blocks that are outlined.
You should have a basic to intermediate understanding of Ruby, service architectures such as SOA, and
concepts within the HTTP protocol such as REST.

Evolved Complexity
Something that evolved along with building distributed systems is complexity; breaking up a system into many
components will almost always introduce additional overhead, and there's one thing that, in my opinion, isn't
keeping up and isn't very common amongst developers: monitoring such complexity. Or specifically,
monitoring distributed processes.
Ruby makes building distributed systems dead easy. With Sinatra, for example, due to its simplistic
programming model and ease of use, you can build a RESTful architecture spanning servers and
processes very easily, without focusing on much of the typical cruft and overhead that usually appears
when building and deploying new services. By lowering the price you pay for deploying new services and
maintaining them, Ruby makes building distributed systems fun.

Distributed Processes
You're building a product which has many services that span different machines at the backend. These
services co-ordinate to implement business processes.
How could you track them?
In general, how can you provide visibility for:
A series of processing stages that are arranged in succession,
Performing a specific business function over a data stream (i.e. a transaction),
Spanning across several machines.
Note: I use the terms "transaction", "workflow" and "pipeline" interchangeably to mean the same thing: a
series of actions bound together logically, leading to a final result under the same business process.

Process Tracking
A business process might span several machines and services. As in the physical world, stages such as
planning, provisioning, packing and shipping apply in many other domains as well.
Figure 1.The typical stages of a process within a physical ordering system


Here's a flight booking system: query planning, querying 3rd-party flight providers, temporary booking,
displaying results.
A user ordering an item from an online store is another example where multiple stages are typically involved:
charging, invoicing, delivery (perhaps some of these are even done with the help of third parties such as
Paypal or Stripe).

Tracking In Practice
So how can you track these at the infrastructure level?
How would you get better visibility into an entire multi-stage process which may start at machine A and
service X, and then end a few machines and services later at machine B and service Z? How would you also
measure and be able to reason about the overall performance of such a process across all of the players in
your architecture and at each step of the way? You need to be able to correlate.

Internal Tracking
You may have bumped into this before. Take a look at manufacturing in real life: an item gets a ticket
slapped onto it when it is first pronounced an actual entity in the factory. This ticket is then used to record
each person who handled the item, and the time and station it was handled at.
Looking back at a distributed system implementing such a pipeline: if the data handed from process to
process is such that you can tack on additional properties easily, that is, it will be persisted after each
step, and persisting it doesn't cost that much, then you may be in luck. In such a scenario it is common
to include tracking metadata within the object, and just stamp it with relevant trace information (such as
time, location, handler) per process checkpoint or stage, within the lifetime of that object and the length of
the processing pipeline.

Figure 2. Internal Tracking


At the end of the entire business process, a given object will show you where it's been and when. This idea
would be easy to implement and provides excellent forensics ability: you can investigate your pipeline
behavior per process step within the pipeline, by just looking at the object itself.
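A sketch of this internal tracking idea (in Python for brevity; the names here are illustrative, not from any particular system):

```python
import time

def stamp(obj, station, handler):
    """Append a trace record to the object's own metadata."""
    obj.setdefault("trace", []).append({
        "station": station,
        "handler": handler,
        "handled_at": time.time(),
    })
    return obj

# Each stage of the pipeline stamps the object as it passes through
order = {"id": "o-1", "items": ["book"]}
stamp(order, "charging", "svc-billing")
stamp(order, "invoicing", "svc-invoices")
```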

If you dig deeper into this sort of solution, though, you'll find a couple of pitfalls, once you realize
that this is a proper distributed system performing the single goal of tracking. First, since you're
tracking time, time must be synchronized on all machines; this may seem easy at first glance, but becomes
harder when measuring with sub-second accuracy. Second, failure points: every additional moving part in
the process increases the probability of failure.
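To see why the time-synchronization pitfall bites, consider how a duration is computed from two machines' local timestamps (a toy illustration, in Python for brevity):

```python
def measured_duration(t_start, t_end, skew_end=0.0):
    """Duration computed from timestamps taken on two different machines.

    skew_end models how far the second machine's clock is off.
    """
    return (t_end + skew_end) - t_start

# The true gap between the two steps is 0.5 s, but a +2 s clock skew
# on the second machine makes it look like 2.5 s
print(measured_duration(1000.0, 1000.5, skew_end=2.0))  # 2.5
```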

External Tracking
You may also be aware of workflows in factories, or even physical shops, where operators enter an
item ID, their signature, and a time stamp into a thin terminal in order to indicate they have processed the
item at their station. The system will then log all those details into an external tracking data store.

Figure 3. External tracking using a separate service


Keeping that in mind, the solution I want to discuss here is a high-performance service, external to any of
your systems, to which each step or service of your distributed process simply announces progress.
If you're originally coming from the enterprise, you've already identified such a thing as somewhat
similar to BAM (Business Activity Monitoring).
And if you don't like using off-the-shelf enterprisey solutions to problems, you may have also heard of systems
taking this concept to a much lower, infrastructural level: Google's Dapper and, not very long ago,
Twitter's Zipkin.
Zipkin, for example, can take a high-level business process, pick it apart into a tree form, and show
you why the process was slow, down to the data store query level. This is due to the fact that it is tied to the
standard infrastructure components (such as data store drivers) that are being used within Twitter.

Roundtrip
The problem I was facing is that I didn't want to introduce infrastructural changes in order to use a system like
Zipkin, but I still wanted the ability to take any process spanning any number of services, and be able to point it
at some kind of tracking and tracing endpoint to report progress to. This way, I get the benefits of tracking
my business process with as little overhead as possible.
This service needed to have good performance, so that it wouldn't hinder the progress of the workflow. It
needed to be exact, so that no tracking data was lost (i.e. no UDP). It needed to be maintainable and fun to
work with. Since I wanted to achieve all those goals, and yet I didn't want to prematurely optimize, I used
Ruby with an HTTP endpoint for ease of use, and offered an additional ZeroMQ endpoint for the more
performance-heavy scenarios.
I called this service Roundtrip and open-sourced it; it requires only Ruby and Redis installed on your machine.
Next up, we'll investigate how its API behaves, what makes it performant, and how you can get inspired by it to build a similar custom solution.

Using Roundtrip
Here's how you use Roundtrip with its default HTTP endpoint:
Listing 1. Roundtrip API Usage
# create a new business process trip
# a trip is a synonym for a workflow, or transaction.
$ curl -XPOST -d'route=invoicing' http://rtrip.dev/trips
{"id":"cf1999e8bfbd37963b1f92c527a8748e","route":"invoicing","started_at":"2012-11-30T18:23:23.814014+02:00"}

# now add a checkpoint - as many as you like.
# a checkpoint is a step within the transaction.
$ curl -XPATCH -d'checkpoint=generated.pdf' http://rtrip.dev/trips/cf199...a8748
{"ok":true}

# now end the process.
$ curl -XDELETE http://rtrip.dev/trips/cf1999...a8748e
{"id":"cf1999e8bfbd37963b1f92c527a8748e","route":"invoicing","started_at":"2012-11-30T18:54:20.098477+02:00","checkpoints":[["generated.pdf","2012-11-30T19:08:26.138140+02:00"],["emailed.customer","2012-11-30T19:12:41.332270+02:00"]]}

A given distributed system may generate a ton of business workflows and transactions over many or few machines; the point is that a transaction or workflow starts at a certain machine, passes through one or more, and then ends up at some other (or the same) machine.
We need a way to keep track of when a transaction starts and when it ends. A bonus would be the ability to track stages in the transaction that happen before it ends; let's call those checkpoints.
That, basically, is what Roundtrip is. Roundtrip will store the tracking data about your currently running transactions: start, end, and any number of checkpoints, and will provide metrics as a bonus.
When a transaction ends, it is removed from Roundtrip; this allows Roundtrip to be bounded in storage size and to keep good performance.
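To make the data model concrete, here is a minimal in-memory sketch of the trip/checkpoint idea described above. The Trip class and its method names are illustrative only, not Roundtrip's actual internals:

```ruby
require 'securerandom'

# Illustrative sketch only -- not Roundtrip's actual classes.
class Trip
  attr_reader :id, :route, :started_at, :checkpoints

  def initialize(route)
    @id = SecureRandom.hex(16)  # 32-char hex id, like the ones in Listing 1
    @route = route
    @started_at = Time.now
    @checkpoints = []
  end

  # Record a named stage together with the time it was reached.
  def checkpoint(name)
    @checkpoints << [name, Time.now]
  end

  # Ending the trip yields its total duration; the trip can then be discarded.
  def finish
    Time.now - @started_at
  end
end

trip = Trip.new('invoicing')
trip.checkpoint('generated.pdf')
trip.checkpoint('emailed.customer')
puts trip.checkpoints.map(&:first).inspect # prints ["generated.pdf", "emailed.customer"]
```

The real service keeps this state in Redis rather than in process memory, as the next section shows.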

Redis for Storage


I couldn't think of anything more suitable for this than Redis. It is perfect. It offers great performance (80k ops/sec on a typical machine); being in-memory fits well here, since the data is bounded in size; it offers great data structures for us to use; and it has awesome C-based Ruby drivers (the redis and hiredis gems). Redis is also very simple to maintain and develop against.

Listing 2. Adding a trip
@conn.set(trip_key(trip.id), Marshal.dump(trip))
@conn.zadd(route_key(trip.route), trip.started_at.to_i, trip.id)

Adding a trip is just setting the trip's data in a key/value pair and, more importantly, adding the ID of the trip to a Redis ZSET. A ZSET is a sorted set in Redis, and in our case time is the sorting component. This will allow us to trim out old data as a torrent of processes hits the server constantly, keeping the data size bounded at all times.
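The trimming itself can be pictured without Redis. A hypothetical helper (not Roundtrip code) that keeps only the newest entries of a [score, id] list, the way a rank-based trim would on the real ZSET, might look like:

```ruby
# Hypothetical helper, for illustration only: keep at most max_size entries,
# dropping the oldest (lowest-scored) ones first -- the same effect as
# trimming a Redis ZSET scored by start time.
def trim_route!(entries, max_size)
  entries.sort_by! { |score, _id| score }
  entries.shift(entries.size - max_size) if entries.size > max_size
  entries
end

route = [[3, 'c'], [1, 'a'], [2, 'b']]
p trim_route!(route, 2) # prints [[2, "b"], [3, "c"]]
```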
Listing 2. Adding a checkpoint to a trip
time = Time.now
# Redis: ZADD key score member
@conn.zadd(checkpoints_key(trip.id), time.to_f, at)

Again we're using the awesome ZSET. Essentially, each trip holds a set, or in our case a sorted set of the
checkpoints within it. A checkpoint is just a stage within any business process.
Listing 3. Removing a trip
@conn.del(trip_key(trip.id))
@conn.del(checkpoints_key(trip.id))
@conn.zrem(route_key(trip.route), trip.id)

Clearing data out of Redis is important. Although Redis has a built-in EXPIRE command which lets you expire data automatically, it's often not enough, because an entity may be composed of several disconnected Redis data structures, as in our case, and there is currently no way to describe a dependency between keys.
That is basically the meat of Roundtrip. It's a bunch of Ruby code glued on top of Redis, and that's why it's so fast (although the store component is pluggable: you can replace Redis with anything conforming to the store protocol within Roundtrip).
Next up, we'll see how Roundtrip integrates internal monitoring into itself using Statsd, and how it climbs even further up the performance tree using ZeroMQ.

Statsd For Metrics


I love using Statsd. Etsy designed it to be as low-overhead as possible, both in how a developer uses it and in its performance overhead.
Statsd is a simple Node.js daemon, developed and open-sourced at Etsy, that receives UDP packets which represent a metric: a string such as servers.myproduct.myfeature.success.server-ip and a numeric value. It pushes those onto Graphite, an enterprise-grade metric aggregation, visualization and digestion service. Any of your servers and/or application servers can send this kind of data. We'll see how it's integrated into Roundtrip and how you can integrate it into your products, but be sure to go in-depth on Statsd and Graphite on your own, as that's out of the scope of this article.

Listing 4. Statsd integration into Roundtrip
require 'statsd'

class Roundtrip::Metrics::Statsd
  def initialize(opts = {:statsd => {:host => 'localhost', :port => 8125}})
    @statsd = opts[:statsd][:connection] ||
              ::Statsd.new(opts[:statsd][:host], opts[:statsd][:port])
  end

  def time(route, event, time)
    @statsd.timing("#{route}.#{event}", time)
  end
end

I often recommend wrapping infrastructural concerns such as metrics, logging, and configuration in something that is easy to swap. Here, we've wrapped Statsd in an abstract Metrics module, so that in the future I could use TSDB, Cassandra, Redis, ZooKeeper or anything else that provides good, scalable and atomic counters, should I ever become unsatisfied with how Statsd/Graphite works out for me. I also chose to use the standard 'statsd' Ruby gem, as I've verified it is thread-safe and I use it widely in my open-source and day-job work.
Throughout the Roundtrip code, with the help of this module, 'time' calls are scattered. These are responsible for timing various operations within the internals of Roundtrip, so that I can later monitor and review its operation in production.
Listing 5. Usage of the Metrics module
@metrics.time(trip.route, at, msec(res[1] - trip.started_at))

This is simple enough to develop, and light-weight enough to include in your code, that it is worth radiating out more metrics rather than none at all: if in doubt, just add metrics as you see fit. Later you can always remove them if they appear useless from a business-value point of view, or sample them (i.e. make only one of every 100 calls generate a metrics call, or any other ratio that makes sense). In either case, the traffic generated by these calls, in the case of Statsd, is UDP: it's asynchronous and low-overhead, and nothing critically bad happens if the receiving server is down.
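As a sketch of the sampling idea (this wrapper is hypothetical, not part of Roundtrip), a decorator with the same time interface as the wrapper in Listing 4 could forward only one in every N calls:

```ruby
# Hypothetical sampling decorator -- not part of Roundtrip. Forwards roughly
# one in every `ratio` timing calls to the wrapped metrics backend.
class SampledMetrics
  def initialize(backend, ratio)
    @backend, @ratio, @count = backend, ratio, 0
  end

  # Same interface as the Statsd wrapper in Listing 4.
  def time(route, event, ms)
    @count += 1
    @backend.time(route, event, ms) if (@count % @ratio).zero?
  end
end
```

Swapping this in place of the plain wrapper cuts metric traffic by the sampling ratio, at the cost of coarser data; since the metrics component is pluggable, nothing else has to change.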

ZeroMQ for High-Performance Traffic


By default, Roundtrip as a service uses HTTP as its transport. This is due to the simplistic nature of HTTP: it's stateless and omnipresent in day-to-day developer work. Today, any developer should be able to spit out a PUT call very easily from any kind of technology stack. This makes integration with Roundtrip dead easy, and it is also something I'd recommend you evaluate when you're developing your own infrastructural services: don't jump directly into ZeroMQ, Thrift, or any other RPC protocol unless you know you have a problem that these solve.
I got great performance out of HTTP, but since Roundtrip is synchronous (i.e. it's not fire-and-forget), any delay within its processing will show up as a delay in the application server code if you use it naively, which is the smart thing to do (I tend not to prematurely optimize things). Having a look at Roundtrip's internals, you can already feel that all major optimizations have been done, so to squeeze out a bit more performance, a low-hanging fruit is to replace the wasteful HTTP transport with something else. Since the bare data that goes over the wire is so simple in our problem domain, it made sense to use TCP, but it also made sense to use something a notch higher-level than TCP: ZeroMQ, more commonly known as TCP on steroids. And it is.
ZeroMQ supports various network topologies; you can declaratively build topologies such as pub-sub, load-balancers, reverse-proxies and more, just as if you're playing with LEGO. ZeroMQ also knows to queue outgoing traffic, so that if the receiving side goes down, your traffic will be transmitted once it's back up. It all works and feels almost like magic.

Listing 6. A ZeroMQ server
#
# quick protocol desc:
#
#   S metric.name.foo i-have-an-id-optional
#   U metric.name.foo checkpoint.name
#   E metric.name.foo
#
# All replies are serialized JSON
#
ACTIONS = { 'S' => :start, 'U' => :checkpoint, 'E' => :end }

def listen!(port)
  context = ZMQ::Context.new(1)
  puts "Roundtrip listening with a zeromq socket on port #{port}..."
  socket = context.socket(ZMQ::REP)
  socket.bind("tcp://*:#{port}")
  while true do
    select(socket)
  end
end

def select(socket)
  request = ''
  rc = socket.recv_string(request)
  unless request && request.length > 2
    socket.send_string({:error => "bad protocol: [#{request}]"}.to_json)
    return
  end
  action, params = ACTIONS[request[0]], request[1..-1].strip.split(/\s+/)
  begin
    resp = @core.send(action, *params)
    socket.send_string(resp.to_json)
  rescue
    puts "error: #{$!}" if @debug
    socket.send_string({ :error => $! }.to_json)
  end
end

Within the comments is the description of the protocol. This is a very simple line protocol, where every line represents a transactional unit of data. This makes parsing very simple and data very compact, and the server can leverage the fact that it is relatively dumb: do less, and have better performance. Responses are transmitted in JSON form, because most clients are smart and want a more meaningful format and description sent out of the server.
All in all, using the ZeroMQ endpoint yielded a major jump in throughput.
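The dispatch step from Listing 6 is easy to exercise on its own. Here it is pulled out into a standalone function (a restructuring for illustration, with the same logic): given one protocol line, it returns the action to invoke on the core plus its parameters, or nil for a malformed line.

```ruby
# Same mapping as Listing 6: first character selects the action.
ACTIONS = { 'S' => :start, 'U' => :checkpoint, 'E' => :end }

def parse_request(request)
  return nil unless request && request.length > 2
  action = ACTIONS[request[0]]
  return nil unless action
  # Everything after the action character is whitespace-separated parameters.
  [action, request[1..-1].strip.split(/\s+/)]
end

p parse_request('U invoicing generated.pdf') # prints [:checkpoint, ["invoicing", "generated.pdf"]]
```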

Closing Up
We've seen a problem that's currently in its infancy: monitoring distributed systems and seemingly disconnected cross-server, cross-farm processing workflows. We've laid out a couple of solutions, and seen how it's solved in the real world. I've also walked you through the path of how I came to solve this kind of problem, and the context behind every step of building the solution in Ruby, using relatively cutting-edge backing technologies such as Redis, Statsd, Graphite and ZeroMQ.

Hopefully, you can not only solve this problem for your own infrastructure, but also take the tips and contexts I've laid out and use them in other scenarios. Of course, you're also welcome to clone Roundtrip itself, use it within your products, and hopefully contribute anything you see fit, as it's open-sourced on GitHub: http://github.com/jondot/roundtrip.

About the Author

Dotan Nahum is a Senior Architect at Conduit. Aside from building polyglot, mission-critical, large-scale solutions (millions of users, thousands of requests/sec) as part of his day job, he is also an avid open-source contributor, technical writer, and an aspiring entrepreneur. You'll find his blog at http://blog.paracode.com, his Twitter at http://twitter.com/jondot and his contributions on GitHub at http://github.com/jondot.


CreateJS In Brief
by David Roberts
Over the past several months, I've been making games and animations with a Javascript library called CreateJS. The library contains a series of four components to assist with developing for HTML5: one for graphics (via the <canvas> element, https://developer.mozilla.org/en-US/docs/HTML/Canvas), one for tweening values, one for sound (using <audio>, https://developer.mozilla.org/en-US/docs/Web/HTML/Element/audio, webAudio, https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html, or Flash), and one for preloading. This article introduces the graphics component, EaselJS, as it is the most interesting and the easiest to misuse. A basic working knowledge of HTML5 is required for this article.

Layers
When we start a project, it is natural to build the scene by adding different objects to the stage, in order from back to front. This stands up fairly well, provided we only ever want to add objects in front of everything. In the following example, a cloud scuds past our actor.
Listing 1. A single cloud
<HTML>
<head>
<title>Example 1-0: Clouds</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="300" height="150"></canvas>
<script>
"use strict";
//Create a new stage, from the createjs library, to put our images in.
var stage = new createjs.Stage("output");
//We'll add some scenery to the stage, then add an actor.
addLand(stage);
addCloud(stage, 0, 40);
addActor(stage, 99, 38);
//CreateJS comes with a Ticker object which fires each frame. We'll make it so that we repaint the stage each time it's called, via our stage's update function.
createjs.Ticker.addEventListener("tick", function paintStage() {
  stage.update(); });
function addLand(stage) {
  var land = new createjs.Bitmap("images/land.png");
  stage.addChild(land); //The background image includes the blue sky.
}
function addCloud(stage, x, y) {
  var cloud = new createjs.Bitmap("images/cloud.png");
  cloud.x = x, cloud.y = y;
  stage.addChild(cloud);
  //We'll move the cloud behind the player because it looks good.
  createjs.Ticker.addEventListener("tick", function moveCloud() {
    cloud.x += 2;
  });
}
function addActor(stage, x, y) {
  //All the images in this scene have been drawn from the Open Pixel Platformer. See http://www.pixeljoint.com/forum/forum_topics.asp?FID=23 for more details.
  var actor = new createjs.Bitmap("images/male native.png");
  actor.x = x, actor.y = y;
  stage.addChild(actor);
}
</script>
</body>
</HTML>

Here, we've created a stage, added some objects to it, and set it to continually redraw itself to reflect the changing position of the cloud. Looking at the output, however, that one cloud seems awfully lonely. We'll add in a little timer to give him some friends. Add the following code at around line 22 of the script.

Listing 2. Adding more clouds


//We'll add in another cloud, at a random height, every 2 seconds.
window.setInterval(function addClouds() {
  addCloud(stage, 0, Math.random()*60);
}, 2000);

Now we have more clouds, but they're going in front of our character. To fix this, we'll create several containers. A container holds other objects, like a stage does. When we add our clouds to a container behind our actor, the container will keep them behind our actor where they belong. Replace the calls to addLand, addCloud, and addActor, starting on line 14, with Listing 3:


Listing 3. Properly layered clouds


//First, we'll add some containers to keep our scenery organized.
var backgroundContainer, sceneryContainer, actorContainer;
stage.addChild(backgroundContainer = new createjs.Container());
stage.addChild(sceneryContainer = new createjs.Container());
stage.addChild(actorContainer = new createjs.Container());
//We'll add some scenery to the stage, then add an actor.
addLand(backgroundContainer);
addCloud(sceneryContainer, 0, 40);
addActor(actorContainer, 99, 38);
//We'll add in another cloud, at a random height, every 2 seconds.
window.setInterval(function addClouds() {
  addCloud(sceneryContainer, 0, Math.random()*60);
}, 2000);

This is how you implement z-layers in EaselJS, although it doesn't seem to be explicitly stated in the documentation.
This is quite a nice approach to z-layers. First, it is quite scalable. Because we have named layers, we can easily add more layers between them and move existing layers around. Second, our layer orders are now defined in one place. This means that when we need to rearrange them (and we will, if we're working on a large project) we won't have to go hunting for a hundred different constants in our files. Lastly, we can apply almost any effect to a container that we can apply to an object. For example, if we had our background in a separate container, we could easily add parallax scrolling just by moving the container.
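The reason this works can be sketched without EaselJS at all: children are painted in array order, depth-first, so a container's position in its parent fixes the z-layer of everything inside it. A plain-Javascript model of the display list (illustrative names only):

```javascript
//Walk a display-list tree depth-first, collecting names in paint order.
function renderOrder(node, out = []) {
  if (node.name) out.push(node.name);
  (node.children || []).forEach(function (child) { renderOrder(child, out); });
  return out;
}

//Mirrors Listing 3: background, then scenery, then the actor.
var stage = { children: [
  { name: 'land' },                                       //backgroundContainer
  { children: [{ name: 'cloud1' }, { name: 'cloud2' }] }, //sceneryContainer
  { name: 'actor' },                                      //actorContainer
]};
console.log(renderOrder(stage)); // prints [ 'land', 'cloud1', 'cloud2', 'actor' ]
```

No matter how many clouds get added later, they land inside the scenery container and so can never paint over the actor.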
Supplementary Listing 4. The final product

<HTML>
<head>
<title>Example 1-2: Clouds</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="300" height="150"></canvas>
<script>
"use strict";
//Create a new stage, from the createjs library, to put our images in.
var stage = new createjs.Stage("output");
//First, we'll add some containers to keep our scenery organized.
var backgroundContainer, sceneryContainer, actorContainer;
stage.addChild(backgroundContainer = new createjs.Container());
stage.addChild(sceneryContainer = new createjs.Container());
stage.addChild(actorContainer = new createjs.Container());
//We'll add some scenery to the stage, then add an actor.
addLand(backgroundContainer);
addCloud(sceneryContainer, 0, 40);
addActor(actorContainer, 99, 38);
//We'll add in another cloud, at a random height, every 2 seconds.
window.setInterval(function addClouds() {
  addCloud(sceneryContainer, 0, Math.random()*60);
}, 2000);
//CreateJS comes with a Ticker object which fires each frame. We'll make it so that we repaint the stage each time it's called, via our stage's update function.
createjs.Ticker.addEventListener("tick", function paintStage() {
  stage.update(); });
function addLand(stage) {
  var land = new createjs.Bitmap("images/land.png");
  stage.addChild(land); //The background image includes the blue sky.
}
function addCloud(stage, x, y) {
  var cloud = new createjs.Bitmap("images/cloud.png");
  cloud.x = x, cloud.y = y;
  stage.addChild(cloud);
  //We'll move the cloud behind the player because it looks good.
  createjs.Ticker.addEventListener("tick", function moveCloud() {
    cloud.x += 2;
  });
}
function addActor(stage, x, y) {
  //All the images in this scene have been drawn from the Open Pixel Platformer. See http://www.pixeljoint.com/forum/forum_topics.asp?FID=23 for more details.
  var actor = new createjs.Bitmap("images/male native.png");
  actor.x = x, actor.y = y;
  stage.addChild(actor);
}
</script>
</body>
</HTML>


Performance
In a large game of minesweeper (such as http://mienfield.com), we can have a few thousand tiles on the screen at once. In CreateJS, though, a simple, direct implementation will happily use up all our available processing power.
Listing 5. A hard-to-compute version of a Minesweeper field

<HTML>
<head>
<title>Example 2-0: Minesweeper</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="320" height="320"></canvas>
<script>
"use strict";
var stage = new createjs.Stage("output");
//Set up some z-layers, as in example 1.
var tileContainer, uiContainer;
stage.addChild(tileContainer = new createjs.Container());
stage.addChild(uiContainer = new createjs.Container());
//Add 1600 tiles in a square. This should load one of our processors a little, and we can observe it with our task manager. You can open one up in Chrome by pressing shift-esc.
var tiles = [];
for (var x = 0; x < 40; x++) {
  tiles.push([]);
  for (var y = 0; y < 40; y++) {
    tiles[x][y] = addTile(tileContainer, x, y);
  };
};
//When we click on the tile, we should make it respond. We'll use the question mark in place of an actual game of minesweeper.
stage.addEventListener("mousedown", function revealTile(event) {
  var x = Math.floor(event.stageX/8); //StageX is the pixel of the stage we clicked on.
  var y = Math.floor(event.stageY/8); //8 is how wide our tiles are.
  tiles[x][y].image.src = "images/question mark tile.png";
});
//Add two blue bars to the stage to track the mouse.
var horizontalBlueBar = addGridTool(uiContainer, -90);
var verticalBlueBar = addGridTool(uiContainer, 0);
//We'll make them track our mouse cursor. How quickly they do so will also give us a good feel for our framerate.
stage.addEventListener("stagemousemove", function updateGridTool(event) {
  horizontalBlueBar.y = event.stageY;
  verticalBlueBar.x = event.stageX;
});
//When we redraw the stage, we should make the blue bars flicker a bit for effect.
createjs.Ticker.addEventListener("tick", function paintStage() {
  horizontalBlueBar.alpha = 0.3 + Math.random()/3;
  verticalBlueBar.alpha = 0.3 + Math.random()/3;
  stage.update();
});
function addTile(stage, x, y) {
  var tile = new createjs.Bitmap("images/blank tile.png");
  tile.x = x*8, tile.y = y*8;
  tile.scaleX = 0.5, tile.scaleY = 0.5; //Our tile is 16 pixels wide, but we'll scale them down for this example. We need to draw lots of objects to produce a measurable stress on a modern computer.
  stage.addChild(tile);
  return tile;
}
function addGridTool(stage, rotation) {
  var gridTool = new createjs.Bitmap("images/bar gradient.png");
  gridTool.regX = 4; //Offset the bar a bit in the narrow dimension, so our mouse will be over the middle of it.
  gridTool.scaleY = 320; //Make the bar as long as the gamefield.
  gridTool.rotation = rotation;
  stage.addChild(gridTool);
  return gridTool;
}
</script>
</body>
</HTML>

On my computer, this version takes over half of the processing power of the page to run. (To open the browser's task list, you can press shift-esc in Chrome, https://www.google.com/intl/en/chrome/browser/, or Chromium, http://www.chromium.org/.)

Why does this version use so much processing power? It turns out that CreateJS does not implement the dirty-rect optimization (http://c2.com/cgi/wiki?DirtyRectangles) when it redraws the scene. This is because it is prohibitively expensive to calculate the bounding box for some of the elements the library can draw, such as vector graphics and text. http://blog.createjs.com/width-height-in-easeljs/ explains the trouble in more detail; it's quite an interesting problem. For our purposes, this means that each time we call stage.update(), the backing canvas is cleared and every single object on the stage has to be drawn again. All 1600 of them. To fix this, we'll cache() our background to a new canvas and call updateCache() when we need to refresh the tiles.
Listing 6. Optimized tile drawing

//Set up some z-layers, as in example 1.
var tileContainer, uiContainer;
stage.addChild(tileContainer = new createjs.Container());
stage.addChild(uiContainer = new createjs.Container());
tileContainer.cache(0,0,320,320);
//Add 1600 tiles in a square. This should load one of our processors a little, and we can observe it with our task manager. You can open one up in Chrome by pressing shift-esc.
var tiles = [];
for (var x = 0; x < 40; x++) {
  tiles.push([]);
  for (var y = 0; y < 40; y++) {
    tiles[x][y] = addTile(tileContainer, x, y);
  };
};
tiles[39][39].image.onload = function() {tileContainer.updateCache()}; //When the last tile's image has loaded, we need to refresh the cache. Otherwise, we'll just draw a blank canvas.
//When we click on the tile, we should make it respond. We'll use the question mark in place of an actual game of minesweeper.
stage.addEventListener("mousedown", function revealTile(event) {
  var x = Math.floor(event.stageX/8); //StageX is the pixel of the stage we clicked on. (The formula gives us the index of our tile.)
  var y = Math.floor(event.stageY/8); //8 is how wide our tiles are.
  tiles[x][y].image.src = "images/question mark tile.png";
  tiles[x][y].image.onload = function() {tileContainer.updateCache()}; //Update the cache when our new image has been drawn.
});

You can paste these new functions in over top of their old versions, or you may refer to supplementary Listing 7 for the complete file.
Internally, CreateJS is now drawing everything to another canvas, and then drawing that canvas to our stage when we call stage.update(). (We can obtain a reference to this internal canvas via tileContainer.cacheCanvas if we want to.) This cached mode results in a great performance gain, and Chrome now reports only a few percent of its cycles used on the minesweeper mockup page.
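A back-of-the-envelope model shows why the cache pays off. The function below is illustrative arithmetic, not CreateJS API: it counts the bitmap draws one frame costs with and without a cached container.

```javascript
//Rough per-frame draw-call count: without a cache every child is redrawn;
//with a clean cache, the whole container is blitted as a single bitmap;
//a dirty cache pays for one rebuild of its children plus the blit.
function drawCallsPerFrame(childCount, cached, cacheDirty) {
  if (!cached) return childCount;
  return cacheDirty ? childCount + 1 : 1;
}

console.log(drawCallsPerFrame(1600, false, false)); // prints 1600 -- every tile, every frame
console.log(drawCallsPerFrame(1600, true, false));  // prints 1 -- just blit the cache
console.log(drawCallsPerFrame(1600, true, true));   // prints 1601 -- only on the frame after a click
```

Since the tiles only change on a click, almost every frame hits the cheap single-blit case.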

Supplementary Listing 7. The cached Minesweeper field

<HTML>
<head>
<title>Example 2-1: Minesweeper</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="320" height="320"></canvas>
<script>
"use strict";
var stage = new createjs.Stage("output");
//Set up some z-layers, as in example 1.
var tileContainer, uiContainer;
stage.addChild(tileContainer = new createjs.Container());
stage.addChild(uiContainer = new createjs.Container());
tileContainer.cache(0,0,320,320);
//Add 1600 tiles in a square. This should load one of our processors a little, and we can observe it with our task manager. You can open one up in Chrome by pressing shift-esc.
var tiles = [];
for (var x = 0; x < 40; x++) {
  tiles.push([]);
  for (var y = 0; y < 40; y++) {
    tiles[x][y] = addTile(tileContainer, x, y);
  };
};
tiles[39][39].image.onload = function() {tileContainer.updateCache()}; //When the last tile's image has loaded, we need to refresh the cache. Otherwise, we'll just draw a blank canvas.
//When we click on the tile, we should make it respond. We'll use the question mark in place of an actual game of minesweeper.
stage.addEventListener("mousedown", function revealTile(event) {
  var x = Math.floor(event.stageX/8); //StageX is the pixel of the stage we clicked on. (The formula gives us the index of our tile.)
  var y = Math.floor(event.stageY/8); //8 is how wide our tiles are.
  tiles[x][y].image.src = "images/question mark tile.png";
  tiles[x][y].image.onload = function() {tileContainer.updateCache()}; //Update the cache when our new image has been drawn.
});
//Add two blue bars to the stage to track the mouse.
var horizontalBlueBar = addGridTool(uiContainer, -90);
var verticalBlueBar = addGridTool(uiContainer, 0);
//We'll make them track our mouse cursor. How quickly they do so will also give us a good feel for our framerate.
stage.addEventListener("stagemousemove", function updateGridTool(event) {
  horizontalBlueBar.y = event.stageY;
  verticalBlueBar.x = event.stageX;
});
//When we redraw the stage, we should make the blue bars flicker a bit for effect.
createjs.Ticker.addEventListener("tick", function paintStage() {
  horizontalBlueBar.alpha = 0.3 + Math.random()/3;
  verticalBlueBar.alpha = 0.3 + Math.random()/3;
  stage.update();
});
function addTile(stage, x, y) {
  var tile = new createjs.Bitmap("images/blank tile.png");
  tile.x = x*8, tile.y = y*8;
  tile.scaleX = 0.5, tile.scaleY = 0.5; //Our tile is 16 pixels wide, but we'll scale them down for this example. We need to draw lots of objects to produce a measurable stress on a modern computer.
  stage.addChild(tile);
  return tile;
}
function addGridTool(stage, rotation) {
  var gridTool = new createjs.Bitmap("images/bar gradient.png");
  gridTool.regX = 4; //Offset the bar a bit in the narrow dimension, so our mouse will be over the middle of it.
  gridTool.scaleY = 320; //Make the bar as long as the gamefield.
  gridTool.rotation = rotation;
  stage.addChild(gridTool);
  return gridTool;
}
</script>
</body>
</HTML>


Resizing
When a canvas has its width or height properties set, it is also cleared. Without intervention, this will cause our stage to occasionally render a blank frame to screen: the graphics will be drawn by EaselJS; the canvas resized and cleared; and then the blank canvas will be rendered to screen by the browser. To fix this, we'll just call stage.update() after the canvas has been resized. Listing 8 has this call commented out on line 60, so you can see the difference.
Listing 8. A resizable canvas

<HTML>
<head>
<title>Example 3-1: Resizing</title>
<meta charset="utf-8">
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
<style>
div {
  background-color: black;
  overflow: hidden; /*Make the div resizable.*/
  resize: both;
  width: 275px;
  height: 200px;
  position: relative; /*Make #instructions positionable in the corner.*/
}
#output {
  pointer-events: none; /*This would cover up our resizing handle otherwise.*/
  width: 100%;
  height: 100%;
}
#instructions {
  pointer-events: none;
  color: white;
  font-family: Arial;
  font-size: 10px;
  position: absolute; /*Position the instructions in the corner with the drag handle.*/
  margin: 0px;
  bottom: 5px;
  right: 5px;
}
</style>
</head>
<body>
<div id="container">
<p id="instructions">Drag me!</p>
<canvas id="output" width="275" height="200"></canvas>
</div>
<script>
"use strict";
var stage = new createjs.Stage("output");
var circle = new createjs.Shape();
circle.graphics //Draw a circle with a line through it.
  .beginFill("white")
  .drawCircle(0,0,50)
  .beginStroke("black")
  .moveTo(-50,0)
  .lineTo(+50,0)
  .endStroke();
circle.x = 100;
circle.y = 100;
stage.addChild(circle);
createjs.Ticker.addEventListener("tick", function paintStage() {
  circle.rotation += 1; //Rotate the circle so we can see how often we're running our logic.
  stage.update();
});
function resizeStage(width, height) {
  stage.canvas.width = width, stage.canvas.height = height;
  //stage.update();
};
//Watch for our parent container getting resized. (There is no native event for this.)
var element = document.getElementById("container");
window.addEventListener("mousemove", function pollSize(event) { //We'll watch for the mouse being moved, and check if the mouse is resizing the container.
  var newWidth = parseInt(element.style.width) || stage.canvas.width; //Style.width and style.height aren't set until we resize the container in that direction, so we might legitimately have to resize x to something while y is undefined.
  var newHeight = parseInt(element.style.height) || stage.canvas.height;
  if(newWidth !== stage.canvas.width || newHeight !== stage.canvas.height) {
    resizeStage(newWidth, newHeight); };
});
stage.update();
</script>
</body>
</HTML>

CreateJS encourages separation of logic and rendering, so we can simply tell it to draw the stage twice in a frame. This is also useful on mobile devices, where the user can rotate the phone. It is not nice to have the entire screen flicker if you've got a full-screen canvas displayed. The solution really is simple, but it had eluded me for a long time.
Side note: There is no onresize event for HTML elements, even ones marked as resizable in CSS! In the solution here, I have sacrificed some speed and correctness for simplicity.


HTML5 and CreateJS


Because a CreateJS stage is a simple wrapper around a canvas tag, a stage behaves like just another element. It can be styled with CSS, animated via jQuery, transformed, and flowed with the rest of the page content. If you were making a game with CreateJS, you would probably want to design the majority of your user interface in HTML and layer it on top of the game. In the following example, we will make the player character, Frogatto, ask for some input.
Listing 9. HTML Input

<html>
<head>
	<title>Example 4-0: DOM Interface</title>
	<meta charset="utf-8">
	<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
	<style>
		#speech-bubble { /*A grey speech-bubble.*/
			background-color: lightgrey;
			position: absolute; /*Make bubble repositionable.*/
			display: inline-block;
			border-radius: 7.5px;
			padding: 7.5px;
			margin-left: 0px; /*These will get set from the Javascript.*/
			margin-top: 0px;
			border: 1px solid darkgrey;
		}
		#speech-bubble:after { /*Give the speech bubble a triangular point.*/
			content: "";
			position: absolute;
			left: 50%; /*Center the triangle.*/
			margin-left: -15px; /*Give the triangle a negative margin of half the triangle width, so the triangle is centered.*/
			bottom: -15px; /*Make a triangle 15px high and offset it downwards by that much.*/
			border-width: 15px 15px 0;
			border-style: solid;
			border-color: lightgrey transparent;
		}
		canvas {
			outline: 1px solid black;
			cursor: move;
		}
	</style>
</head>
<body>
	<div id="speech-bubble">
		What's your name?<br>
		<input id="name" onkeypress="if(event.which === 13) getName()"></input> <button onclick="getName()">OK</button>
	</div>
	<canvas id="output" width="300" height="200"></canvas>
	<script>
		"use strict";
		var stage = new createjs.Stage("output");
		createjs.Ticker.addEventListener("tick", function paintStage() {
			stage.update(); });

		//Create a new player. He's draggable with the mouse.
		var playerSpriteSheet = new createjs.SpriteSheet({
			images: ["images/frogatto.png"], //A simplified edition of Frogatto, from Frogatto & Friends. Used with permission.
			frames: [
				//[x, y, width, height, imageIndex, regX, regY]
				[124,18,32,32,0,16,31], //Idle animation.
				[159,18,32,32,0,16,31],
				[194,18,32,32,0,16,31],
			],
			animations: { //Refer to http://www.createjs.com/Docs/EaselJS/classes/SpriteSheet.html for documentation.
				idle: { //We will use an idle animation for this example, to give it some life.
					frames:[0,1,2,1],
					frequency: 6,
					next: "idle",
				},
			}
		});
		var player = new createjs.BitmapAnimation(playerSpriteSheet);
		player.gotoAndPlay("idle");
		player.x = 150, player.y = 150;
		stage.addChild(player);
		player.onPress = function(event) {
			var offset = { //Capture the offset of the mouse click relative to the player.
				x: event.target.x - event.stageX,
				y: event.target.y - event.stageY,
			};
			event.onMouseMove = function(event) { //During this click, when we move the mouse, update the player position and the speech bubble position.
				event.target.x = event.stageX + offset.x;
				event.target.y = event.stageY + offset.y;
				repositionSpeechBubble(event.target);
				stage.update(); //Update the stage early to sync with user input better. This does make the player animation play faster, however.
			}
		}

		//Position the speech bubble HTML element above the player.
		var speechBubble = document.getElementById("speech-bubble");
		function calculateSpeechBubbleOffset() { //We don't have access (that I know of) from CSS to calculate half our width as a margin value. This is essentially a regX/regY value for the DOM speech bubble, which makes later positioning easier and faster.
			speechBubble.style.marginLeft = -speechBubble.offsetWidth/2+"px";
			speechBubble.style.marginTop = -speechBubble.offsetHeight+"px";
		}
		calculateSpeechBubbleOffset();
		function repositionSpeechBubble(object) {
			object = player.localToGlobal(8,-40); //The offset of the speech bubble point from our regX/regY point.
			speechBubble.style.left = object.x + "px",
			speechBubble.style.top = object.y + "px";
		}
		repositionSpeechBubble(player);
		function getName() {
			var name = document.getElementById("name").value;
			document.getElementById("speech-bubble").innerHTML = "Hello, "+name+".";
			calculateSpeechBubbleOffset();
		}
	</script>
</body>
</html>

We can drag Frogatto around, and the text box moves with him.
Drawing a text input box would be time-consuming using CreateJS, since we'd have to figure out how to
draw a box, text, a cursor, text selection, and an OK button, and then how to position it all. Since we have the
power and depth of HTML available to us, we should use it where we can!
This also helps with separation of duty in code. We can style our text boxes without having to figure out
the program first. We don't have to parse all the style details of our text boxes when we're figuring out how
the program positions them over Frogatto.
In closing, I would recommend using CreateJS if you want to draw animations on a web page. As with
the majority of Javascript libraries, it's most useful in conjunction with the rest of HTML5. CreateJS is a
powerful abstraction, although it can introduce significant overhead if misused.

About the Author

David Roberts is a developer currently living in Vancouver, British Columbia. In his spare time, he works
on a beautiful open-source platformer game called Frogatto and Friends. You can download it at
http://www.frogatto.com, or you can steal his recent 3D puzzle game from http://cubetrains.com.


Build Customized Web Apps through Joomla
by Randy Carey
Don't reinvent the wheel. By developing web applications on an object-oriented
CMS, a developer leverages proven web features, freeing him to focus on coding the
business needs.
What you will learn:

What Joomla offers as a framework for web applications.
Example web apps built with Joomla.
The process of generating a base Joomla app and customizing it to meet business needs.

What you should know:

Be familiar with Joomla as a CMS; the more thorough one is with Joomla, the more one will appreciate this article.
An appreciation of object-oriented architecture, the Model-View-Controller pattern, and CMS features.

In the 90s, application development centered on the desktop and local networks. As we
have become an Internet-connected society, the expectations for applications are now mostly web-centered.
The challenge is to remain focused on building the functionality that a business needs while integrating
it with Internet technologies (such as AJAX, session cookies, web forms, ecommerce, etc.) as well as
web concerns (such as security and cross-browser consistency). A framework-based CMS, which contains
reusable code for managing web-related issues, provides a smart platform for developing web apps.
Unlike most other open source CMSs, Joomla is architected the way a software engineer expects: object-oriented,
structured around design patterns like the Model-View-Controller, with a library-based framework to
reuse important functionality, and a design that expects developers to extend it. As a software engineer who
built desktop and systems applications, I find Joomla meets my technology demands for building custom
web applications. I leverage Joomla for the web details so I can focus on coding the business part of the
application. Arguably, Joomla is more than a CMS for building websites: it serves well as a framework for
building web applications.

Reusable Joomla features


Here is a list of some of Joomla's web-based features that can be easily reused by custom applications.
Multilingual. For tabs, field names, messages, etc., one can provide a language file with the proper
translations for each supported language, and Joomla will supply the correct translation based upon the
user's language.
User management. Users can create or be given password-secured user accounts. Admins can manage the
user list as well as broadcast messages to users.
Access control. Logged-in users can be assigned to custom-declared user groups. Your custom application
(a Joomla component) can establish rules for authorizing a user to perform any given action on your
application based upon the user's membership in these user groups. Joomla's ACL accommodates the
RBAC model.
Web form integration. Using XML files and reusing field input classes, one can quickly create mechanisms
for displaying, collecting, and updating data in the database.
Responsive web interface. Make your interface display responsively for desktops, tablets, and mobile
devices.

CMS features. A WYSIWYG editor, search, CAPTCHA, SEF URL routing, categories and tagging,
administrative backup and security tools, etc.
Library of reusable classes. Being object-oriented, Joomla's CMS functionality can be reused at the
class level: input fields and validation, database connections, toolbars, CAPTCHA integration, session
management, pagination of lists, etc. The library also includes a framework of non-CMS functionality for
things like email messaging, manipulating images, or connecting to the APIs of social media like Facebook,
LinkedIn, and Google.
Event-based extensions. Joomla calls them plugins. They are fired upon standard and custom events,
invoking code that can be aware of the current user, session, application being run, etc. Common uses
include changes to the page's content, logging information, and even overriding PHP classes.
Admin panel. Manage data records with regard to content, ordering, publishing state, and creation/
deletion. Set application options.
Website integration. More often than not, a web application needs to be accessible through a website,
display information on site pages, and integrate with the website's users and data.
Reusable extensions. Developers list their installable extensions through a directory of several thousand
items, from simple plugins to full-featured applications. These can be reused and leveraged by custom
applications.
Community driven. Joomla is constantly evolving with security updates and new features or enhancements.

Example Web Apps that Leverage Joomla


In my work of delivering custom online solutions, I have leveraged Joomla to develop both simple and
sophisticated web applications. In both examples below, the solution leveraged Joomla features and
extensions, allowing me to focus mostly on adding the business logic.

Leads Generator and Management


A regional site prompts online users to submit requests for vacations in terms of date, type of lodging,
location, number of rooms, etc. Once a lead is submitted, the application identifies the client lodges that meet
the requested criteria and notifies them by email. The lodge signs in to a dashboard that manages its leads and
allows it to auto-generate and send a personalized response. Clients can also manage their lodge details as well
as set how frequently they should be notified of leads. The system tracks all leads and responses, providing
a report to each lodge. The admin can set system-wide settings such as how many lodges can claim the same
lead, and the system emails the admin about flagged scenarios such as no one responding to a lead request or a
client that has not logged in for X number of days.
This application leveraged the web forms that feed the database, user management of subscribing lodges,
access control to display only one lodge's set of data per lodge, the email system to auto-send messages, and
the admin screens for reviewing and managing system-wide data. And of course, the collection of lead
information had to integrate into the client's website.

Affiliate Store Email Campaigns


A manufacturer of consumer parts wanted to develop its own promotions supported by its
affiliate retail stores. Marketing develops each promotion through a webpage. The application allows them
to tailor the promotion through an email to be sent to subscribing consumers as well as an announcement
email sent to each affiliate. Of course, marketing can send themselves test emails to assure that the layout is
just right. The announcement is sent first, allowing the retailer to opt in or out of any given campaign. When
the campaign is later launched, only consumers of participating stores will receive the promotion, which is

auto-personalized with information for the given consumer and store. The emails are sent through Mandrill, a
branch of MailChimp, which handles whitelisting and reporting.
This application leverages user management of consumers and retail representatives, a popular extension
for managing a directory of stores, another extension for bridging to Mandrill, access control for managing
dashboard data, the JCE editor for composing email content and managing media, standard article creation
for each campaign's landing page, and the component's admin panel for composing, testing, and launching
emails. Segmentation of consumers across over a thousand affiliate stores is maintained by the Joomla
extensions, so list segmentation was best handled by the site-integrated web app, which has access to these
ever-changing records, instead of constantly synchronizing segmentation with MailChimp.

Reusing Extensions
Often I can find a Joomla component that already implements most of what I need. In those cases, I will start
with that extension and tailor it with the customized code I need. For example, a training company needed a
way to list their classes (schedule, location, description) and a way to register and pay online. Starting with
an event registration system, I used the language feature to change terms, coded the client's unique business
rules, and wrote a payment plugin that interfaced with their accounting package and payment processor.
I was able to reuse the code providing functionality like calendaring, popup Google maps of locations,
various types of display modules, and the shopping cart. Of course, this approach means forking from an
existing extension, so you can no longer look to the extension developer for updates, but it does allow you to
reuse a lot of functionality instead of re-inventing it.
Resources
To build nontrivial web applications with Joomla, a programmer needs to understand its architecture and how it
works at the code level. Whether you are new to Joomla or a seasoned professional, if you intend to work with its
PHP and XML code, you ought to read Joomla! Programming (Dexter & Landry, Addison-Wesley),
a book written by two key developers who helped to architect this CMS. The book is essential reading, as it
thoroughly explains how Joomla works at the code level and provides thorough coding examples.
A second recommended resource is Learning Joomla! 3 Extension Development (Tim Plummer, Packt
Publishing). This book gets right to the point of building custom Joomla extensions, but arguably the first book
provides a more thorough explanation of what is happening in the code.

Example Development of a Joomla App


To demonstrate the process I use to develop custom web apps, I am choosing a relatively simple example.
The application allows a pet shop owner to manage his current inventory of puppies and kittens so that
available animals are listed and displayed on his shop's website. I am keeping the requirements simple so as to
better illustrate and stay within the scope of this article, but the app can be extended further with richer features.
Consistent with the Joomla framework, this application will be an installable component. A component is
the heart of a Joomla application. It provides for managing the app's primary and related data, all application
behavior and the business rules revolving around that data, and the primary display of that data. A
component involves several directories and files to implement the MVC pattern (Model-View-Controller) as
well as form declarations, options, and helper classes.
Fortunately, some vendors provide online tools for generating a basic component. I use Component-Creator
(http://component-creator.com), which is free for components built upon a single database table,
and available for multi-table components at a reasonable subscription. Another resource for consideration
is MVC Component Generator (http://agjoomla.com/en/tools/mvc-component-generator.html). Available
options are listed in this directory: http://extensions.joomla.org/extensions/tools/webbased-tools. Tools like
these build upon the Joomla framework, allowing you to focus almost entirely on just the business logic your
application needs.
Because the component is built upon its data, the first thing to do is sketch out the data fields and how they
will be organized within tables. In this example, the primary table will list the details of a puppy or kitten:
breed, sex, color, date-of-birth, image, price, and an optional description. To distinguish between puppy
versus kitten, I will create a category for each and each pet record will need to set the category to one of
these two. To illustrate the use of related tables, I will create a table of breeds (name, description) that the pet
record will reference.

Generating a Joomla Component


I log in to Component-Creator and create a new component. I name it com_petstore, set some component-level
values, and then create my two tables. (Remember, I can create more than one table within a
component because I am a subscriber of their service.)
I name the first table pets, which will be stored as #__petstore_pets. Some standard housekeeping fields (id,
ordering, state, checkout) are generated automatically. Then I add my fields. Each field requires a name and
a type. The type dropdown includes a sizable list of Joomla field types, and most of the time these are what
I'll choose. Further down the list are standard SQL types. Each field type is associated with its own set of
options that appears.
Of the fields I'm creating, most are text fields. By adding a category field, the component will be
generated with its own type of category values and a list view to manage them. This category field is
populated by a dropdown offering just these values. The field sex is a list field where I define the only two
valid options of male and female. Birth_date is a calendar field.
Next I create the table #__petstore_breeds. Each record contains a text field for the breed's name and an editor
field (a WYSIWYG textarea) to contain a description that can be displayed as formatted text, which could
contain images and links. I could add more fields, but this is all I need for now.
I return to the pets table and change the field type of breed to the type foreign key. I am prompted to declare
the related table (#__petstore_breeds), the key value (id), and the display value (name).
I think I've got the data set up as I want it, so I find the tool's Build button and click it. Within a second or
two a zip file is downloaded to my computer. It is the installable component that I just created. Log in to the
admin side of the Joomla installation where we'll be further developing this component. Installation of this
new extension is completed within a matter of seconds.
Next, navigate to this new component. Three list views are available: pets, categories, and breeds. Open
the new option for each list and confirm that the edit form and its fields are configured as you like. Walk
through the component's workflow and add test data. If you see something that needs to change, now is the
time to return to the component creation tool, change the settings, rebuild, reinstall, and re-assess the admin
side. Once you make code-level additions and changes, you will be challenged to merge a new version from
Component-Creator with your custom code.
So far we have not coded anything. Nevertheless, once we install our component, we have a working
application that provides a solid base of code which includes much of what we want. And from this code
base we will customize as our web application requires.

Customize the Application


Every project is different, and your customization needs will be different per project. You might want to
process user inputs, send email notifications, generate reports, display dashboards, inject AJAX, etc. The
more you understand Joomla, its library, its MVC structure, and object-oriented PHP programming, the more
sophisticated you can be in adding custom features. This article can cover only a few examples, but they
should demonstrate typical techniques for customizing a Joomla application.
Customizing the layout

The most common need is to tailor the display of data. Our base component provides some bland layouts of
all the data. To see the layout, create a new menu item of type Petstore -> Pets, then view that page on the
front-end. In Joomla, the front-end layouts are found under the directory

/components/<com_component_name>/views/<view_name>/tmpl/<layout_name>.php

So the layout file that lists all pets in the front-end is

/components/com_petstore/views/pets/tmpl/default.php

The pre-built code holds a list of all items this page should display (accommodating for pagination). It
iterates through the list, displaying each item in an HTML table. Following this pattern, we can code the
layout by rewriting the foreach loop to look like this.
Listing 1. An example
<table class="pet-list">
<?php foreach ($this->items as $item) : ?>
	<tr>
		<td><?php echo $item->id; ?></td>
		<td><?php echo $item->category; ?></td>
		<td><?php echo $item->breed; ?></td>
		<td><?php echo $item->color; ?></td>
		<td><?php echo $item->image; ?></td>
		<td><?php echo '$'.$item->price; ?></td>
	</tr>
<?php endforeach; ?>
</table>

Adding files to the header

This generates an HTML table listing all the data, but it is an unformatted table. Joomla provides library
functions to add CSS to the header, either as an embedded declaration or as a link to a CSS file. To include a
file, add this code within a PHP section.
Listing 2. A sample code
JFactory::getDocument()->addStyleSheet(JURI::base().'/components/com_petstore/assets/pets.css');

Of course, make sure you create an assets directory under the component and create this CSS file within it.
A recommended practice is to put all component styling in a file like this, then add the line of code in each
view.html.php file of each view directory.
Add conditional features at runtime

When the server should run some logic based upon a scenario known only at runtime, develop and install a
plugin. Plugins are fired upon certain events, such as during certain stages in the process of building a web
page in response to a browser request. Plugin development is out of scope for this article, but the relevant
function shown here illustrates the power and versatility of a plugin. Here, we create a system plugin that
will check if the current process involves this component, and if it does, the plugin invokes the line of code
that adds the style sheet to the header.

Listing 3. A sample code
// called within a system plugin
public function onAfterRoute() {
	$app = JFactory::getApplication();
	if($app->isSite() && $app->input->get('option') == 'com_petstore'){
		// conditionally add this style sheet
		JFactory::getDocument()->addStyleSheet(JURI::base().'/components/com_petstore/assets/pets.css');
	}
}

SQL and the Model

The fields category and breed are returning the id for those entries, but we want to display the text. The
model for this table contains SQL that returns the values we get for each item. What we want is a JOIN
statement in the SQL that allows us to get the related values. An investigation of the model, found at

/components/com_petstore/models/pets.php

reveals that our component builder did just that. And if it didn't, we could always add the SQL ourselves. So
to get the category name instead of its id value, we simply call $item->category_title, and we do likewise to
get the name of the breed.
But we can go a step further. Let's say we want to incorporate the description of the breed within our list of
pets. We can simply replicate the line of code that gets the name of the breed and change the copied line to
get the description instead. Once that field is added to the returned $item object, our layout file can add the
description text to the HTML as a tooltip or lightbox.
This function (the model's getListQuery()) is an important one. It is here where we manipulate the SQL to
filter the items we return from the database, to order the results, to declare which fields will be returned, and
to enforce access control.
Access control

Let's assume that the shop does not want the public to see pets until they have been on the site for X days.
However, for a small fee a user can subscribe to get the complete and most current listing. We would install a
subscription extension that allows the public to register, pay the subscription online, and then be added to the
subscriber user group. Through Joomla's ACL we create an access level for subscribers. All that with no coding.
Now we return to the model class. We will use Joomla code to determine if the site visitor is a subscribed user

$isSubscriber = in_array(6, $user->getAuthorisedViewLevels());

and if so we show the whole list. If not, we have the model add an SQL condition to filter out all records
that are newer than X days.
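The pieces above can be sketched together inside the model's getListQuery(). This is a hedged sketch, not the generated component's actual code: the a.created column, the hard-coded view level 6, and the xdays option name are illustrative assumptions.

```php
protected function getListQuery() {
	$db = JFactory::getDbo();
	$query = $db->getQuery(true);
	// Select the pet fields plus the human-readable breed columns.
	$query->select('a.*, b.name AS breed_name, b.description AS breed_description')
		->from('#__petstore_pets AS a')
		->join('LEFT', '#__petstore_breeds AS b ON b.id = a.breed');
	// Non-subscribers only see pets that have been listed for at least X days.
	$user = JFactory::getUser();
	if (!in_array(6, $user->getAuthorisedViewLevels())) { // 6 = assumed subscriber view level
		$xdays = (int) JComponentHelper::getParams('com_petstore')->get('xdays', 3);
		$query->where('a.created < DATE_SUB(NOW(), INTERVAL ' . $xdays . ' DAY)');
	}
	return $query;
}
```

In a real component the subscriber view-level id would itself come from configuration rather than being hard-coded.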
Adding configuration settings

Of course, it is better not to hard-code the number of days, nor to hard-code the id of the access level used
for subscriptions. We want the store owner to be able to set values like these in a configuration screen.
Configuration values are easily added to a component through a file named config.xml, which is found in the
component's base directory (on the administrator side). For our component that is

/administrator/components/com_petstore/config.xml

Examine this file that our component builder generated, or look at the config.xml file of other components,
and you will quickly see the XML pattern for adding fields. Here's the code I would use for X-days.

Listing 4. A sample code
<field name="xdays" type="list" default="" label="X-days" description="days to defer public viewing">
	<option value="1">1</option>
	<option value="2">2</option>
	<option value="3">3</option>
</field>

The component's admin screen provides an Options button for reaching the configuration screen, and the
code within the model can reference configuration values this way
$xdays = JComponentHelper::getParams('com_petstore')->get('xdays');

Library functions

Most of the functionality in Joomla is rooted in the classes of its library. A savvy app builder will leverage
these. As an example, we will want a feature that automatically resizes uploaded photos to pixel
dimensions suitable for website use. The Joomla class JImage provides the needed functionality, as the following
code demonstrates.
Listing 5. A code demonstration
jimport('joomla.image.image');
$jimg = new JImage($item->image);
if($jimg->isLoaded() && ($jimg->getWidth() > $maxWidth || $jimg->getHeight() > $maxHeight)){
	$jimg->cropResize($maxWidth, $maxHeight, false);
	$jimg->toFile($item->image);
}

Here again, it would be nice to set the $maxWidth and $maxHeight values within the component's configuration
settings.
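A minimal sketch of reading those values from the component configuration, assuming config.xml declares options named max_width and max_height (these names are my own, not generated ones):

```php
$params = JComponentHelper::getParams('com_petstore');
$maxWidth  = (int) $params->get('max_width', 800);  // fall back to a sane default
$maxHeight = (int) $params->get('max_height', 600); // when the option is not set
```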
Going deeper

Real business needs typically call for customization that runs deeper. For example, the application
could allow subscribers to sign up for daily email digests of newly added puppies and kittens, and the user
could be allowed to select the types and breeds to monitor. Maybe the store owner wants to track how long
each pet is listed before it is sold, and reports can be run to show the average time each breed or price range
remains unsold. Starting with the Joomla platform and the base component we generated, an experienced
developer should be able to deliver such a web app.
As stated earlier, Joomla application development does require the developer to understand the code and
architecture of Joomla. The better one understands it, the more sophisticated the applications one can develop
and deliver. As one can see, the Joomla platform provides the reusable functionality for most web needs,
freeing you to focus your coding effort on the custom functionality that the business needs.

About the Author

Randy Carey holds an MS in software engineering and has transitioned to web architecture. Through
Careytech Studios (careytech.com) he develops custom applications for web firms and ad agencies.
Through the iCue Project (iCueProject.com) he is developing best practices and Joomla extensions to
improve the workflows of clients managing their websites.


What is Drupal? Concepts And Tips for Users and Developers
by Juan Barba
Drupal is known as a very difficult content management system to use. The tricky part is:
Drupal is more than just a content management system. It's a rich Internet platform ready
to grow and to support complex applications like eCommerce sites, CRM platforms, web
services, and government sites.
What you will learn

Drupal as a framework
Drupal's basic content structure
An overview of what Drupal is able to do

What you should know

Basic PHP knowledge
Preferably, some experience using a CMS

Why Drupal?
Many people know content management systems (CMSs) just as web applications used to generate
blogs, news columns and many kinds of content-oriented sites. The most used CMSs on the Web are
Wordpress, Joomla and Drupal (thankfully, all of them are open-source). So, how do you know which of them
you should choose for your project?
According to users, Wordpress is easy and fast to use but not very extensible. Drupal is like a big control
panel that people need to learn how to use. Joomla fits somewhere in the middle. As it seems,
Drupal is often seen as the most difficult CMS to use. So why do so many people use it? What makes Drupal
a unique CMS?
Drupal isn't even considered just a CMS, but a web development framework and platform. That's why
some organizations and corporations use Drupal as their information technology solution:
Nokia Research
The White House
MIT Division of Student Life
There are more examples of people using Drupal on drupal.org: https://drupal.org/case-studies.
Drupal is scalable. You can develop sites that integrate with other web services like social networks, CRMs,
mobile applications, etc. You can even use Drupal not as a website but as a web service that other
applications, such as mobile applications, desktop applications or other Drupal sites, may consume.
Drupal is secure. The Drupal.org community is dedicated to finding and solving security issues with
Drupal. Besides, Drupal maintains high standards of security procedures for system administration. The
code is all developed on Drupal.org, although some of it has been supported by third parties. As
the code is open-source, there is little chance of getting malware in Drupal core. All these points make
Drupal one of the safest content management platforms.


Some basic comparison


One of the great benefits of Drupal is the way it allows you to build content without writing any kind of
SQL query. To explain this, let's say some client wants to have a website about international food recipes.
Building the site from zero, using plain PHP, would take too much development time. The project cost grows, as
well as the effort for the development team. So this is a job for a CMS. Let's choose the CMS known as the
fastest one for developing content-rich websites: Wordpress. Imagine that the client wants the site to be able
to manage different types of cuisine. With Wordpress you can create posts for each food item, and then
create a category tree to classify all of them. Maybe we would also need to classify food by type, like salads, drinks
or desserts. For that you may use tags.
OK, that's good; now we have a basic structure to administer content. But what if we need to create
some special features for each kind of cuisine? The client wants Mexican food to have different fields than
Italian food to make content administration easier. Wordpress allows you to insert a title, body, categories, format,
some content authoring settings and tags, but there is no way to add more special fields to a post or to a
page. You cannot even add another content type besides Post or Page unless you write a new plugin or
find an existing one to solve this problem. Or what if we want to view pages with various posts organized by
categories or tags, to show a page of only Mexican or only Italian food? Wordpress brings some ways to show
information classified by tags, but if you want to make it look different in layout and strings
(e.g. "This entry was posted in category and tagged by user. Bookmark the permalink"), you'll need to write
some code in the Wordpress theme or define a format by writing a new plugin or modifying an existing one. As you can
see, you need too many plugins just to organize your posts! And most of the plugins are not developed
by wordpress.org but by third-party organizations. There's even a risk of infection with malware!

So how does Drupal work to solve these problems?

Drupal (especially Drupal 7) has solutions for all the problems mentioned above. Drupal manages
content in items called nodes. These nodes are defined by a template-like structure known as a content
type (notice the lack of this concept in the Wordpress solution). The user can configure, per content type, whatever
fields he needs for classification, and those fields can be numbers, strings, text areas and even files. By default
Drupal brings two content types, pages and articles, but the user is able to create more content
types. So a user creates nodes based on a content type, and such a node will have different fields than
a node from another content type. A programmer may understand this as the content type being a class and
a node being an object, an instance of a content type. Back to the food example, the user may create two content
types: Mexican food and Italian food. These content types can have different fields, but they can also
share fields in the structure definition; e.g. both Mexican and Italian food may define ingredients
for the recipe, but the ingredients field just needs to be defined once.
You may also let users to comment on some kind of nodes. Drupal also has comments that can be activated
or deactivated depending on each content type, but also depending on each node.
So how about classifying a node? Drupal allows nodes to be organized by taxonomies. Taxonomies is
an another structure in which user can define keyboards to organize nodes called terms. These terms can
be gathered within vocabularies. Vocabularies can be related to one or many content types as a field. For
example: lets create a vocabulary called meal term, and create three terms: Rare, Medium and Well
done. This vocabulary can be used only in one content type or in many, so if we relate this taxonomy as a
field for the content type Mexican food, well be able to create Mexican food nodes organized by meal
temperature.
How about showing content (various nodes) like "Mexican food" or "Mexican meal"? Drupal can
extend its functionality by installing modules, and there is a special module called Views. It allows you to display,
sort and filter nodes, receive arguments, and show nodes related to taxonomy terms, users or other kinds of entities,
in a graphical way, as if the user were building a MySQL query inside Drupal itself. Views is one of the most
used and extended contributed modules.
Drupal also allows you to administer users with different roles, and each role grants different permissions
across the website, such as publishing content or creating, editing and deleting nodes, taxonomies, users, etc. You can also
have fields for users and allow them to register or not (maybe your site doesn't need new users but
only an admin, or your site is in a beta version, so you prefer to invite your guests).
As mentioned, Drupal can be extended by installing or creating modules. Modules are divided into 3 types:
Core modules: these come alongside the Drupal installation.
Contrib modules: installed from Drupal.org. It is noteworthy that almost none of these modules are hosted by
third-party organizations; they are hosted by Drupal.org and its contributors, so there is no risk of finding malware in any
of them. All contributed modules are shared under the GNU GPL license, the same as Drupal itself.
Custom modules: modules built by the website developer.
Drupal's architecture allows modules to be installed without modifying core code; in theory it shouldn't
even be necessary to modify contributed modules, only to create or alter things in your custom modules.
A basic structure for a module is:
mymodule (folder in sites/all/modules/custom)
mymodule.info (basic information: name, description, version, etc.)
mymodule.module (the code itself; it may call other files inside or outside this module)
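As an illustration, a minimal mymodule.info file for Drupal 7 might look like the sketch below (the module name and description are hypothetical, but name, description and core are the standard required keys):

```ini
; sites/all/modules/custom/mymodule/mymodule.info
name = My Module
description = A sample custom module for demonstration.
core = 7.x
```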

What else can I manage with Drupal?


A website cannot be complete without the front-end part. Drupal also has configuration which allows
you to define the layout and other necessary parts of the website you want to build.
These are the concepts and tips that everyone who wants to know Drupal must learn before going ahead
with the front-end.

Themes
As in almost every CMS, in Drupal you can install themes. Themes work the same way as modules: there are
core, contributed and custom themes. But one difference between themes and modules is that some themes
are not made to be used directly as a front-end theme but as a base theme, so the front-end developer is able to build a
subtheme. One of the most used and best developed base themes is Omega. This theme is prepared to work
with some of the best theming practices and tools, like grids (960gs, Blueprint), Sass, responsive design,
HTML5 and CSS3 media queries.

Blocks and regions


A block is a box visible in selected areas of the website called regions. Regions are defined by the
currently selected theme. Some blocks are generated by modules, and the user can also create his own blocks.
Blocks can contain text, HTML or even PHP code, although that is not a good practice. It is better to create
a module and define the existence and content of your new block by using the hooks hook_block_info and
hook_block_view. Hooks will be explained further below.
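As a sketch of what that looks like (the module name mymodule and the block delta are hypothetical; the two hook signatures are the standard Drupal 7 ones):

```php
<?php
// Tell Drupal this module provides a block; the array key is the block delta.
function mymodule_block_info() {
  $blocks['hello'] = array(
    'info' => t('Hello block'),
  );
  return $blocks;
}

// Build the content of the block when it is rendered.
function mymodule_block_view($delta = '') {
  $block = array();
  if ($delta == 'hello') {
    $block['subject'] = t('Hello');
    $block['content'] = t('This block was defined in code, not in the UI.');
  }
  return $block;
}
```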


Getting deeper: entities, hooks and APIs


Entities
Did you notice the highlighted (bold and underscored) words like node, taxonomy, users,
comments, files and entities? Drupal 7 manages a concept called the entity. An entity is a way to abstract
different kinds of data in the site, in a way that makes this data more flexible. For example, you can have
fields on nodes by defining content types, but users can also have fields defined, the same as taxonomy
vocabularies; even comments can have fields (files cannot have fields, but that entity is special).
To explain the entities concept, I must explain entity bundles. An entity bundle is a way to typify entity
types. For example, we know that the node is an entity type, so if entity types can be typified by bundles, a
node's bundle would be the content type. The same analogy holds for taxonomy terms and vocabularies,
with the taxonomy term being the entity type and the vocabulary the bundle.
The advantage of managing data this way may not be noticed by users, but it is by developers, because
each entity may have different kinds of actions or ways to interact with the site. Entities, alongside other
module configurations, can achieve new functionality or even new use cases for the web application project.
For example: there is a suite of contributed modules called Ubercart, which allows users to implement an
eCommerce site totally based on Drupal. To achieve this functionality, Ubercart creates new entities like
Product, Cart, Orders, etc. Another example is the Field Collection module. This module creates a
new entity which allows you to have various fields inside one.
The funniest part is that all this functionality (core or contributed) can be highly extended by using custom
modules, without altering contributed code, just by implementing their hooks.

Hooks
A hook is something like a programming event: when a node is saved, when the site initializes, when
new pages are defined for the site, when an entity is loaded, after a form is built, etc. Basically, it is a way to
alter some pre-defined behaviour of an activity. Let's see an example of when a form is built.
Listing 1. An example
function mymodule_form($form, &$form_state) {
  $form = array();
  $form['text'] = array(
    '#type' => 'textfield',
    '#title' => t('Foo'),
    '#required' => TRUE,
  );
  return $form;
}

That's a normal definition of a form (the interesting part is that now you know how to create forms in Drupal
with PHP code, although it still needs a submit action). To see that form, it needs to be called by the
drupal_get_form function, and that function needs to be called by a hook which defines pages (hook_menu).
If that form is defined in my module, there is no problem modifying it, because I have access to this code.
But what if I need to modify something in the node creation form? This is how a hook works: a hook
is a way to extend another functionality, and hooks are not called directly but implemented. The hook we are
going to implement as an example is hook_form_alter. To implement a hook you create a function with this
name: mymodule_form_alter. Notice that we replaced the word hook by the name of our module. That's the
way our module implements a hook. Now, it's important to know that hooks have parameters that must be
the same in all hook implementations. For example, hook_form_alter has these parameters: (&$form, &$form_
state, $form_id), so mymodule_form_alter must also use these parameters.
Now our hook implementation looks like this:
Listing 2. A hook implementation
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  // Code goes here.
  if ($form_id == 'mymodule_form') {
    $form['text']['#title'] = 'Bar';
  }
}

This code will be executed after mymodule_form is built by drupal_get_form. Notice that it is enough to define the
name of the function based on the name of the hook and use the correct parameters, and Drupal will do the rest.
Another very important example of a hook is hook_menu. This hook is used to define URLs and their callbacks.
Listing 3. The hook_menu
function mymodule_menu() {
  $items['my-form-url'] = array(
    'title' => 'My form',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('mymodule_form'),
    'access callback' => TRUE,
  );
  return $items;
}

This function implementing hook_menu will create a URL called my-form-url, so each time we go to http://
mysite.com/my-form-url, the page will show us whatever the function in 'page callback' returns. In this case it
will return a form, because we called the Drupal function drupal_get_form and passed it an argument,
mymodule_form. The 'access callback' is used to define who can see this page; in this case, everyone can.

Drupal API
The Drupal core provides an API that allows developers to build and extend core and contributed modules, but as you
can notice, not everything in Drupal is based on hooks; the API also has functions, constants, even classes.
Drupal facilitates tasks like creating forms, pages, entity types, bundles, entity instances, etc.
Specifically, these are the APIs in the core:
DBTNG (database data manipulation)
Entity API (can be extended by the Entity API contributed module)
Field API (creates field types in PHP to use them on different entities)
Form API
Image API (based on the php-gd extension to manipulate images)
Node API
Theme API


Drupal distros
Have you ever heard about a Linux distro? Basically, it is a Linux operating system distributed with different
programs and functions. Drupal is so scalable that it is possible to apply the same analogy to Drupal. A
Drupal distribution is a Drupal installation configured with installed modules, themes and pre-defined
settings, called an installation profile. All of these work together to solve an entire use case.
An example of a Drupal distro is Commerce Kickstart. It provides a Drupal copy focused on building
an eCommerce site based on the Drupal Commerce module suite. It brings lots of things ready to use,
like administration of products, orders, users, taxes, payment options, etc.
Another example is RedHen CRM. This distro was created to offer a fast way to create a CRM based on Drupal.
It can manage contacts, organizations, relationships, etc. This CRM is ready to connect with other enterprise
CRMs like Salesforce.
But the coolest part is not necessarily the use case a Drupal distro achieves, but the flexibility and
scalability with which Drupal lets you build on and extend the functionality of one of these distros.
You can install other modules or themes just as in a normal Drupal installation, or write your own
modules to achieve more specialized functions. For example, you may need to create a new shipping option
that is valid only in the country the website is being developed for, or you may need RedHen CRM
to serve data to a mobile application. Of course, you can do all of that.

Summary
Drupal provides all the tools that a project development team needs to build, program, theme, maintain and
extend a Web project. Drupal boosts quality and saves time, man-hours and cost, because the team does not
need to worry about building a database structure, developing a safe product or making the application
scalable: Drupal has already done it for you.

About the Author

The author has been working as a web developer and web designer in a small company for the last 2
years. He has used various tools for web development, such as JavaScript, PHP, MySQL, Wordpress and Drupal.
He has made some contributions to the Fedora design team, helped to solve issues and created patches on
Drupal.org.


AngularJS Tool Stack


by Zachariah Moreno
AngularJS provides Web developers with a robust tool stack alongside a forward-thinking
MV* framework; coupled together, they result in a positive development experience. This
combination is what has developers everywhere talking about and building AngularJS
applications.
What You Will Learn

Readers of this article will gain a strong working knowledge of how to leverage the tools built around the AngularJS
framework, resulting in development efficiency, standards met and best practices followed. We will achieve these goals by
learning how to scaffold the structure of our application using the Yeoman command line utility, rapidly prototype the
user interface with Bootstrap from Twitter, employ the AngularJS Sublime Text package, and debug with the Batarang
Chrome Developer Tools extension. All of the aforementioned tools are developed, documented, maintained, supported
and open sourced by the AngularJS community, for developers everywhere to build the best Angular applications possible.

What You Should Know

To get the most out of this article, developers should have a working knowledge of the fundamental Web technology stack, including but not limited to:
JavaScript
Google Chrome/Developer Tools
MV* architecture
Shell/Bash
Node.js/NPM

The AngularJS community has been facilitating a positive developer experience from day one. Angular
achieves this by setting and following proven standards that ease many of the pain points felt when
developing in previously popular frameworks, while distilling the best of them into a single lightweight
JavaScript file for our convenience. A key attribute retained from past frameworks is an emphasis on
tooling that empowers developers to build the best application possible in the shortest amount of time.
Take for example the Ruby on Rails framework. Rails' focus on a standard naming convention and file/
folder architecture allowed a robust command line utility to ship with the framework, providing
developers a means to quickly scaffold the structure of an application, add 3rd party gems and therefore
develop more efficiently. AngularJS has followed this pattern by adopting a similar standard in the MVC, or
Model View Controller, architecture (although it is technically a Model-View View Model pattern, I prefer
Addy Osmani's term MV*). Angular's charm doesn't stop there: the community has built tools that foster
a positive developer experience throughout the development lifecycle, including generators, test suites, UI
libraries, editor integration and debugging tools.
When building an application with a new framework for the first time, it is important that the developer
experiences quick wins with minimal effort. AngularJS achieves this by integrating with the Yeoman
command line utility via a generator. Yeoman ties together a number of highly useful tools, including
Grunt, Twitter's Bower package manager for open source GitHub repositories, LiveReload integration
with your Web browser of choice and a powerful build script. Grunt gives us access to a fast Node.js
server that works in conjunction with LiveReload; this pair will allow us to rapidly develop our Angular
app locally. We'll then employ Bower to install the Angular-UI Twitter Bootstrap prototyping library to
build the interface with ease, all the while using the AngularJS Sublime Text plugin to make editing our
app painless and debugging with the Batarang Chrome DevTools extension to ensure our app behaves as
intended. Let the development commence.


Installing and Working with Yeoman


Yeoman is a snap to install in your development environment, as it relies on the Node Package Manager, or
NPM. Begin by launching the Terminal or PowerShell and typing npm install -g yo.
Because Yeoman is dependent upon so many other tools, all of the aforementioned libraries will be installed
automatically in this step. Up to this point nothing we have done is specific to AngularJS, and Yeoman is a
priceless development workflow enhancement no matter your framework of choice. But because this article
is focused on AngularJS, let us install the Yeoman generator for Angular: npm install -g generator-angular.
To fire up Yeoman and get a local Node server running, type grunt server.
To scaffold an Angular app, we will use the yo command: yo angular.
Yeoman will ask you a series of questions such as "Would you like to include Twitter Bootstrap? (Y/n)"; type
Y for all.
Notice that your default Web browser has opened (if not previously open) with a new tab pointed at
localhost:9000. The default content of the page will display a list of libraries added to our
application by Yeoman and the Angular generator. At this point we can begin speeding up the process of
boilerplating by leveraging the Angular sub-generators to scaffold views, controllers, routes, services, etc.

Yo Angular
The following sub-generator commands can all be run to scaffold a new portion of your AngularJS
application. This is immensely powerful because all of the grunt work is done for you. Take for example the
command yo angular:route tasks. It:
Creates a new controller file in app/scripts/controllers/ named tasks.js
Creates a new test file in test/spec/controllers/ named tasks.js
Creates a new view file in app/views/ named tasks.html
And lastly adds a new route to /tasks in the existing app.js that is found in /app/scripts/
The Angular sub-generator commands can be scoped granularly, allowing us to create any one of the
MVC components individually. Some of the more common commands are listed below. For a full list
navigate to yeoman.io.

yo angular:view <NAME>
yo angular:controller <NAME>
yo angular:route <NAME>
yo angular:service <NAME>
yo angular:provider <NAME>
yo angular:factory <NAME>


AngularUI Bootstrap
The AngularJS community has built a component library for Angular called AngularUI Bootstrap, based
upon Twitter's Bootstrap front-end UI framework. This is a very quick way to build your interface upon
two well supported and well documented Open Source projects. Because Angular supports the bleeding
edge HTML5 Web Components specification, all of the individual UI elements are implemented as Web
Components. We can install AngularUI by running bower install angular-ui.
A relatively simple example of using the AngularUI library of Web Components is a Bootstrap alert.
Listing 1. HTML in app/views/tasks.html
<div ng-controller="AlertDemoCtrl">
  <alert ng-repeat="alert in alerts" type="alert.type" close="closeAlert($index)">{{alert.msg}}</alert>
  <button class="btn" ng-click="addAlert()">Add Alert</button>
</div>

Listing 2. JavaScript in app/controllers/tasks.js


function AlertDemoCtrl($scope) {
  $scope.alerts = [
    { type: 'error', msg: 'Oh snap! Change a few things up and try submitting again.' },
    { type: 'success', msg: 'Well done! You successfully read this important alert message.' }
  ];

  $scope.addAlert = function() {
    $scope.alerts.push({msg: 'Another alert!'});
  };

  $scope.closeAlert = function(index) {
    $scope.alerts.splice(index, 1);
  };
}

AngularJS Sublime Text Package


While developing an Angular app, it is helpful to augment our workflow further with Angular-specific syntax
completion, snippets, go to definition and quick panel search, in the form of a Sublime Text package. To
install within Sublime Text:
Install Package Control if you haven't already
Type command + shift + P in Sublime
Select Package Control: Install Package
Type angularjs and press enter

Batarang DevTools Extension


An invaluable piece of our AngularJS tool stack is Batarang, a Google Chrome Developer Tools extension.
Batarang adds a third party panel (on the right of the Console) to DevTools that facilitates Angular-specific
inspection when debugging. We can view the data in the scopes of each model, analyze each expression's
performance and view a beautiful visualization of service dependencies, all from within Batarang. Because
Angular augments the DOM with ng- attributes, Batarang also provides a Properties pane within the Elements
panel, to inspect the models attached to a given element's scope. The extension is easy to install from either
the Chrome Web Store or the project's GitHub repository, and inspection can be enabled by:
Opening the Chrome Developer Tools
Navigating to the AngularJS panel
Selecting the Enable checkbox on the far right tab
Your active Chrome tab will then be reloaded automatically and the AngularJS panel will begin populating
with inspection data.

Summary
AngularJS is proving to be a valuable member of the Web stack for many reasons, tooling being only one.
Through these tools, developers are able to build their applications faster, with greater ease and with more
robust features, without the framework getting in their way. For these reasons the Angular community has
continued to grow at an accelerated rate since its inception three years ago. To conclude, we have learned
how to use the Yeoman command line utility to scaffold our MV* application, prototype our views with the
AngularUI library, write code faster with the AngularJS Sublime Text package and debug with the AngularJS
Batarang Chrome Developer Tools extension. These tools are constantly being refined by the Angular
community to evolve in parallel with the framework and will therefore continue to improve our development
experience.

About the Author

Zachariah Moreno is a 22 year old Web developer from Sacramento, California, who enjoys contributing
to and working with Open Source projects of the Web flavor. He can usually be found on Google+
posting about and discussing design, developer tools, workflow, technology, photography, golf and his English
Bulldog, Gladstone.


Thinking the AngularJS Way


by Shyam Seshadri
AngularJS has been one of the major frameworks leading the JavaScript
revolution in the developer community. With great features like data-binding,
built-in testability, reusable component creation and much more, it allows for a lot of
flexibility and power in how we develop large-scale JavaScript applications.
But at the same time, it takes a fundamental shift in how we think, as well as a learning curve in
understanding the concepts that drive AngularJS. This article aims to dive into some of these paradigm
shifts, or conceptual frameworks, that help make the transition to AngularJS development smoother.
This article assumes some basic knowledge about AngularJS; if you have developed applications, or even
tutorials, in AngularJS, then even better. But even if you haven't, the general concepts should still be apparent
and easy to apply when you get started with AngularJS. We will briefly explain some of the specific
concepts, but this would be a good time to refer to http://docs.angularjs.org/guide/, the official AngularJS
Developer Guide.
So without further ado, let us dive into some concepts you should internalize when you work in AngularJS.

The Model is the truth


While AngularJS calls itself an MVW (Model, View, Whatever) framework, it really helps to consider it in the
standard Model-View-Controller (MVC) paradigm. There is the Model, which is your data (usually JSON,
but it could be something more complex as well). This is usually data retrieved from the server, as well
as additional view-only state that the application stores. Then there is the View, which is what the user gets
to see and interact with. In AngularJS, the View is written in HTML, and is created by combining the Model
with the templates that you define. And then there is the Controller, which houses the business logic and the
understanding of how to respond to user actions. While your actual application gets divided into more
components, it is easier to consider just these categorizations to underline a key concept.
The very first anti-pattern that people adopt when they start using AngularJS is trying to port what
they already know and use it exactly how they used to. This could mean using jQuery
to get the value of input elements, or showing and hiding elements conditionally from the controller. STOP.
Don't do this.
In AngularJS, the model is the truth. Figure 1 illustrates how the Model, View and Controller
all interact with each other.

Figure 1. Model, View, Controller in the AngularJS World
As shown above, the Model is just data. It represents the truth of your application, of what the user sees. But
it is up to the Controller and the View to decide what part of the model gets displayed to the user, and how,
instead of you manually changing parts of the view or grabbing the content of the form.
Your prime concern with respect to any user action should consist of one of the following:
grab the current state of the model and send it to the server,
update the model based on the server response,
modify the model to change how the UI looks.
Between these three actions, the majority of your use cases are covered.
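To make the "modify the model" action concrete, here is a framework-free sketch in plain JavaScript (the helper name markAsRead is mine, not from AngularJS): the user action becomes a pure data update, and a data-bound view would simply re-render itself.

```javascript
// A sketch of "modify the model": no DOM access, just a data transformation.
// (markAsRead is a hypothetical helper, not part of any library.)
function markAsRead(emails, id) {
  return emails.map(function(email) {
    return email.id === id ? Object.assign({}, email, {unread: false}) : email;
  });
}

var emails = [{id: 1, unread: true}, {id: 2, unread: true}];
var updated = markAsRead(emails, 1);
console.log(updated[0].unread); // false
console.log(updated[1].unread); // true
```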
Let us now look at how this concept plays out in the real world with a few common use cases:

Highlighting unread emails in an inbox list


Let us consider a simple inbox list that shows the subjects and dates of some emails. The backing JSON
array for this inbox could be something like the code in Listing 1.
Listing 1. A JSON Array of emails
[
  {id: 1, subject: "Hi, I'm the first email",  unread: true,  ts: 123},
  {id: 2, subject: "Hi, I'm the second email", unread: false, ts: 234},
  {id: 3, subject: "Hi, I'm the third email",  unread: false, ts: 345},
  {id: 4, subject: "Hi, I'm the fourth email", unread: true,  ts: 456},
  {id: 5, subject: "Hi, I'm the fifth email",  unread: true,  ts: 567}
]
Listing 1 shows a simple array of JSON objects, each of which has an id, a subject, a timestamp, and a
boolean which signifies whether the mail is unread or not.
Now, the traditional jQuery way of highlighting these unread emails would be as shown in Listing 2.
Listing 2. Highlighting unread emails using jQuery
for (var i = 0; i < emails.length; i++) {
  if (emails[i].unread === true) {
    $('#email-' + emails[i].id).addClass('unread-mail');
  }
}

In Listing 2, we loop over all the emails, and when we find an unread email, we add the CSS class unread-mail
to the HTML. This is an imperative way of doing it, and it is what most people are used to. Hence, when they
switch to AngularJS, this is the type of code that often shows up in controllers. What developers should instead be
thinking is: how can I declaratively define this, so that the view decides what to do based on the model?
Listing 3 shows how the code might look in AngularJS (a purely HTML solution):
Listing 3. AngularJS template solution to highlight unread emails

<li ng-repeat="email in emails" ng-class="{'unread-mail': email.unread}">
  <!-- Display email subject and timestamp here -->
</li>

Immediately, two things should stand out from Listing 3. First, we have completely, in a declarative manner,
defined what our UI is going to look like. In the jQuery world, this would have involved looping over the
emails, adding a template and inserting it into the DOM. In AngularJS, the magic of data-binding
takes care of all of this. Second, we have also declaratively stated which emails need to be highlighted
because they are unread, by using the ng-class directive. ng-class basically tells AngularJS to add the
unread-mail class when email.unread is true, and to remove it otherwise.
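To see what ng-class is saving us, consider the mapping it performs for each email; a hypothetical plain-JavaScript equivalent (my own sketch, not Angular source) would be:

```javascript
// Derive the CSS classes from the model instead of mutating the DOM by hand.
// (classesFor is a hypothetical helper standing in for the ng-class expression.)
function classesFor(email) {
  return email.unread ? ['unread-mail'] : [];
}

console.log(classesFor({id: 1, unread: true}));  // [ 'unread-mail' ]
console.log(classesFor({id: 2, unread: false})); // []
```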

Tabs
Let's talk about another common case where a jQuery approach is not what we want. Let's say we have two
tabs, and based on which tab is selected, we want to highlight the tab as well as change the content. Let us
first take a look at the HTML backing this tab structure, shown in Listing 4.
Listing 4. HTML for showing Tabs
<ul class="tabs">
  <li class="tab1 selected">Tab 1</li>
  <li class="tab2">Tab 2</li>
</ul>
<div class="tab1 content">Content for Tab 1 here</div>
<div class="tab2 content">Content for Tab 2 here</div>

An unordered list holds our tabs at the top, and the divs hold the contents. Now, in jQuery we would have to
do the following every time someone clicks on Tab 1 or Tab 2:
add the selected class to the tab,
remove the selected class from the other tabs,
hide all the tab contents,
show only the selected tab's contents.
Yikes! That is a lot of work. Now, how can we leverage AngularJS's model to do this instead?
Listing 5. AngularJS approach to having Tabs
<ul class="tabs">
  <li class="tab1"
      ng-class="{selected: isSelected('tab1')}"
      ng-click="selectTab('tab1')">Tab 1</li>
  <li class="tab2"
      ng-class="{selected: isSelected('tab2')}"
      ng-click="selectTab('tab2')">Tab 2</li>
</ul>
<div class="tab1 content" ng-show="isSelected('tab1')">Content for Tab 1 here</div>
<div class="tab2 content" ng-show="isSelected('tab2')">Content for Tab 2 here</div>

Listing 5 shows how we can use ng-class here again, similar to before, by setting a selected class based on
a function call. We'll take a look at the function in a second; basically, it returns true or false based on
whether the currently selected tab is the one specified in the argument. We then reuse the same isSelected
function to conditionally show and hide the contents of the tab as well. How do the isSelected and selectTab
functions look? Something as simple as the code in Listing 6.
Listing 6. AngularJS functions to support the Tab app
var currentTab = 'tab1';
$scope.selectTab = function(tab) {
  currentTab = tab;
};
$scope.isSelected = function(tab) {
  return tab === currentTab;
};

Again, we have, in a declarative manner, specified what the UI is going to show and how it is going to display
and style certain elements. There is no need to dig through multiple JavaScript files looking for where an
element ID is being used to manipulate the DOM.
In AngularJS, we modify the Model and let AngularJS do the heavy lifting.
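One way to convince yourself of this: the tab logic in Listing 6 is plain JavaScript with no DOM access at all, so it can even be exercised outside the browser. A small sketch, assuming the same two functions minus the $scope wiring:

```javascript
// The tab state machine from Listing 6, free of any DOM or Angular wiring.
// Because only the model changes, adding a new tab needs no extra logic.
var currentTab = 'tab1';
function selectTab(tab) { currentTab = tab; }
function isSelected(tab) { return tab === currentTab; }

selectTab('tab2');
console.log(isSelected('tab2')); // true
console.log(isSelected('tab1')); // false
```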

Rely and use the data-binding


I can't stress this enough: AngularJS gives you two-way data-binding. What this means is that any updates
done to the Model are instantly reflected in the UI, and any input that the user enters through the UI is
immediately reflected back in the Model. This is great for developers, because we don't have to write the
boilerplate code that:
takes the model from the server and plugs it into the UI in the right places,
reads the values from a form one by one, and then plugs them into a model to send to the server.
Both these steps are done by AngularJS for you, for free!
Let's illustrate that with the simple example of a form (cut short for ease of reading, of course), as in
Listing 7.
Listing 7. A simple HTML form
<form id="my-form">
  <input type="text" id="nameField">
  <input type="email" id="emailField">
  <button>Submit</button>
</form>

Now, let's say we get these fields from the server as JSON when the page loads, and when the user hits
submit, we might have to do some other validation and then finally send the data across the wire. Listing 8 shows
how these two functions might look.
Listing 8. jQuery way of handling forms
function setFormValues(userDetails) {
  // userDetails is JSON from the server
  $('#nameField').val(userDetails.name);
  $('#emailField').val(userDetails.email);
}

function getFormValues() {
  var userDetails = {};
  userDetails.name = $('#nameField').val();
  userDetails.email = $('#emailField').val();
  // Do some other work with it and then send it
}

Now consider if we had radio buttons, or check boxes. You would have to loop through each one to grab its
value and figure out the final state of the model. That is extra code you shouldn't have to write.
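To make that concrete, here is a hedged sketch of the kind of harvesting loop you would otherwise write (readCheckedIds is a hypothetical helper of mine, and the boxes array stands in for the DOM elements you would query):

```javascript
// Without two-way binding, checkbox state must be collected by hand into the
// model before it can be sent anywhere.
function readCheckedIds(boxes) {
  return boxes
    .filter(function(box) { return box.checked; })
    .map(function(box) { return box.id; });
}

var boxes = [
  {id: 'newsletter', checked: true},
  {id: 'terms', checked: false},
  {id: 'offers', checked: true}
];
console.log(readCheckedIds(boxes)); // [ 'newsletter', 'offers' ]
```

With ng-model on each checkbox, none of this code exists: the model already holds the checked state.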
Now let us take a look at how we can leverage the two-way binding in AngularJS to accomplish the same thing, in
Listing 9.
Listing 9. AngularJS Form example
<form id="my-form">
  <input type="text" id="nameField" ng-model="user.name">
  <input type="email" id="emailField" ng-model="user.email">
  <button>Submit</button>
</form>

Showing the user details in the form is now as simple as:


$scope.user = userDetails;

Furthermore, anytime we need access to the contents of the form, we can simply refer to the $scope.user
variable and use it as needed. No need to reach into the DOM, manipulate state or anything else.
AngularJS handles all this for you. Want to send the form contents to the server as part of the registration
flow? Just send $scope.user, which holds the most up-to-date value of the form!
Let us take a look at Listing 10 which demonstrates how such a normal flow would work in jQuery, vs
Listing 11 which shows the same flow in AngularJS. Both fetch some data from the server, get the data to
display in the UI and then let the user edit it and save it.
Listing 10.1. HTML code for using jQuery to handle a form based applications
<form name=myForm>
<input type=text id=nameTxt>
<input type=email id=emailTxt>
<button class=updateButton>Update</update>
</form>

Listing 10.2. JS for using jQuery to handle a form based application


var fetchData = function() {
  $.get('/api/user', function(user) {
    $('#nameTxt').val(user.name);
    $('#emailTxt').val(user.email);
  });
};
fetchData();
$('.updateButton').click(function() {
  var user = {};
  user.name = $('#nameTxt').val();
  user.email = $('#emailTxt').val();
  $.post('/api/user', user);
});

Listing 11.1. HTML using AngularJS to handle form based application


<form name="myForm">
  <input type="text" ng-model="user.name">
  <input type="email" ng-model="user.email">
  <button class="updateButton" ng-click="updateUser()">Update</button>
</form>

Listing 11.2. JS for using AngularJS to handle form based application


// Inside a controller
var fetchData = function() {
  $http.get('/api/user').success(function(user) {
    $scope.user = user;
  });
};
$scope.updateUser = function() {
  $http.post('/api/user', $scope.user);
};

You can immediately see that in the AngularJS code, we don't have to write any code to transfer the data
from the UI to the code and back from the code to the UI. We leverage AngularJS's data-binding and thus
significantly reduce the amount of code we write (and thus the possibility of errors as well!).

DOM manipulations are for Directives


So where do these jQuery DOM manipulations belong, if they don't belong in the controller? What about
some more complex examples, like needing an accordion or datepicker, which are available in, say, jQuery UI? Wouldn't the controller be where I decorate the relevant input fields or DIV elements using jQuery?
The association you want to start making in Angular is to use Directives whenever you need to transform
or manipulate the DOM. Need to create an input datepicker? Think directive. Need to create a reusable
component to display thumbnails of images? Think directive!
Directives are AngularJS's way of encapsulating both view and corresponding logic into a reusable
component. For example, let us take how you would traditionally use a jQuery UI datepicker:
$('#inputDate').datepicker();

Then, we would have to get and set the date as follows:


$('#inputDate').datepicker('getDate');
$('#inputDate').datepicker('setDate', '05/23/1975');

Now consider adding other options on a case-by-case basis. You might want to know when the user selects a
date, or you might want to set a different date format instead of the default MM/DD/YYYY. While you can
do all of this normally, there are a few pain points. Namely:
This is not declarative. Someone could look at the HTML and never realize that the input field magically
becomes a datepicker at some point. You would have to dig through the code to find out who is doing what.
Someone with no experience with jQuery UI would have to sift through the API docs to figure out how to do
common things.
Anyone looking to reuse the component would end up copy-pasting this code, rewriting callback
functions and instantiations wherever they need it.
Now consider the alternative, where jQuery UI datepicker is wrapped as a reusable component, exposing just
the most commonly used APIs in a declarative manner. For example, in an ideal world, I would want to do
something like:
<input type="text" jqui-datepicker ng-model="startDate" date-format="dd/mm/yyyy" onselect="dateSelected(date)">

Anyone looking at the HTML can immediately understand that the value of the datepicker is available in the
model variable called startDate, and that when the date is selected, a function called dateSelected is called.
Listing 12 demonstrates how we would write such a directive. This directive needs to take care of two
things:
Getting the data from the jQuery UI datepicker, and informing AngularJS about its change
Telling jQuery UI about any changes that happen inside of AngularJS
Also note the use of scope.$apply(). This is to let AngularJS know that the model has changed outside of
AngularJS's control, and that it needs to update all the views to reflect this new change. In this case, it is the user
selecting a date in the jQuery UI datepicker widget.
Listing 12. A Simple Jquery UI Datepicker Directive
angular.module('fundoo.directives', [])
  .directive('datepicker', function() {
    return {
      // Enforce the AngularJS default of restricting the directive to
      // attributes only
      restrict: 'A',
      // Always use along with an ng-model
      require: 'ngModel',
      scope: {
        // Bind the select function passed in to the directive from the
        // view controller to the right scope
        select: '&'
      },
      link: function(scope, element, attrs, ngModelCtrl) {
        var optionsObj = {};
        // Use the user-provided date format, or the default
        optionsObj.dateFormat = attrs.dateFormat || 'mm/dd/yy';
        optionsObj.onSelect = function(dateTxt, picker) {
          scope.$apply(function() {
            // Update the AngularJS model on jQuery UI Datepicker date selection
            ngModelCtrl.$setViewValue(dateTxt);
            if (scope.select) {
              scope.select({date: dateTxt});
            }
          });
        };

        // Notify jQuery UI datepicker of changes in the AngularJS model
        ngModelCtrl.$render = function() {
          // Use the AngularJS internal binding-specific variable
          element.datepicker('setDate', ngModelCtrl.$viewValue || '');
        };
        element.datepicker(optionsObj);
      }
    };
  });

This is one type of directive, where we care about input from the user. On the other hand, sometimes, we
might want to just get data into our widget and display it.
For example, if we wanted a custom component that we use to display a photo along with its comments,
likes and other metadata in a grid, we could create a component that we end up using as follows:
<div my-photoview photo-meta="photoObj"></div>

The JS code for the myPhotoview widget would look something like Listing 13.
Listing 13. A photo display widget
angular.module('fundoo.directives').directive('myPhotoview', function() {
  return {
    restrict: 'A',
    scope: {
      photoMeta: '='
    },
    template: '<div class="photo-widget">' +
      '<img ng-src="{{photoMeta.url}}"/>' +
      '<span class="caption">{{photoMeta.caption}}</span>' +
      '</div>',
    link: function($scope, $element, $attrs) {
      // More specific rendering logic, watches, etc. can go here
    }
  };
});

Here, photoObj is a JavaScript object that contains the URL of the photo, the caption, the comment
information and the number of likes. The directive can encapsulate all the logic of how this is rendered,
as well as additional functionality like liking the photo, commenting on the photo, etc. It might even decide
to conditionally include other templates, or use jQuery to manipulate certain parts of its template.
The interesting things to note here are:
The naming convention: When we declare our directive in the JS code, we define it as myPhotoview.
But when we use it in the HTML, we need to write it as my-photoview. The camel case from the JS gets
translated to dash-separated words in the HTML. This is true for the directive as well as all the attributes
defined on it.
The scope definition: The scope defines something called photoMeta, with its value as '='. This means
that when the directive is used, we can pass any JavaScript object to it using the attribute photo-meta
in the HTML, and the value of that object will be available within the directive as $scope.photoMeta.
In the case of Listing 13, we can access the contents of photoObj as $scope.photoMeta and display it.
Data-binding: The best part about defining photoMeta in the scope as '=' is that it tells AngularJS
that the object needs to be kept up to date inside the directive. That is, if photoObj changes in the parent
controller, the latest value is made available to the directive. No extra code needed!
Link function: The link function is the place to put additional logic. For example, while the caption and the
image itself would change automatically if photoObj ended up changing in Listing 13, if we wanted to do
some additional data manipulation or DOM manipulation, the link function is where that code would go.
The main take-away from both these examples is to encapsulate all this DOM-modifying behavior within
Directives.

When to use AngularJS Services (or factories, or providers!)
AngularJS services are often not used to their full potential inside an AngularJS application. More often
than not, people put more and more of their business logic and code into their controllers, and they end up
with giant monolithic messes that they then have to dig themselves out of.
But fear not, that is exactly what AngularJS services are there for. The following are some great examples of
what belongs in, or should be created as, AngularJS services:
Layer that talks XHR with your servers
Common APIs that are reused across your application
Those are the common use cases. But one important fact to note as a developer is that AngularJS
services are singletons: if you declare an AngularJS factory, it is initialized only once for the
duration of your application. You can leverage this fact to use them for:
An application-level data store
Developing a caching layer for your application
Using them as a communication mechanism between different controllers
Developing an offline model that uses LocalStorage
Storing the state of views, to remember what to display when the user switches views
How does this work? Let us take a simple AngularJS service that is defined in Listing 14.
Listing 14. AngularJS Service as an App Store
angular.module('MyApp').factory('AppStore', function() {
  return {
    value: 200,
    doSomething: function() {}
  };
});

Now any directive, controller or service that asks for AppStore will get the same instance of the AppStore
service. That means if one controller sets AppStore.value to 250, then the second controller will see the same
value there as well.
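To make the singleton behavior concrete, here is a runnable sketch using the AppStore object from Listing 14. The caching injector here is a tiny hand-rolled stand-in for what Angular does internally, not Angular's real implementation:

```javascript
// Minimal stand-in for Angular's injector cache: the factory function
// runs at most once, and every later request returns the cached instance.
var cache = {};
function factory(name, fn) {
  return function get() {
    if (!(name in cache)) { cache[name] = fn(); }
    return cache[name];
  };
}

// The AppStore factory from Listing 14.
var getAppStore = factory('AppStore', function () {
  return { value: 200, doSomething: function () {} };
});

// "Controller one" asks for AppStore and bumps the value...
var storeInCtrlOne = getAppStore();
storeInCtrlOne.value = 250;

// ...and "controller two" sees the change, because it is the same object.
var storeInCtrlTwo = getAppStore();
console.log(storeInCtrlTwo.value); // 250
console.log(storeInCtrlOne === storeInCtrlTwo); // true
```

This is exactly why a service works as a communication channel between controllers: they are all reading and writing one shared object.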
You might ask at this point, what is the difference then between a Service and a Factory. The simplest way to
think of each one is as follows:
Factory: A factory is a function that is responsible for creating a value or an object. The advantage of a
factory is that it can ask for other dependencies, and use them when creating the value. The AngularJS
factory just invokes the function passed to it, and returns the result.
Service: An AngularJS service is a special case of a factory, where we are more OO oriented, and thus
want AngularJS to invoke the new operator on the function and pass us the result.
Let us take a look at Listing 15 which demonstrates how these are different:
Listing 15. AngularJS Service and Factory
// Ask for the $http service as a dependency
function TestService($http) {
  this._$http = $http;
}
TestService.prototype.fetchData = function() {
  return this._$http.get('/my/url');
};
angular.module('TestApp', [])
  .service('TestService', TestService)
  .factory('TestFactory', function($http) {
    return {
      fetchData: function() {
        return $http.get('/my/url');
      }
    };
  })
  .controller('TestCtrl', function(TestFactory, TestService) {
    TestFactory.fetchData();
    TestService.fetchData();
  });

There are a few things to note regarding all these:
As new is called on the function passed to angular.service, we have to define the API functions on this
or the prototype.
For the AngularJS factory, we have to define the publicly visible functions as part of the returned object.
Variables and functions that are not part of that object will not be accessible directly from the service
object returned.
The choice of which one to use really comes down to how much of an OO pattern you follow when
developing; the same logic and functionality can be implemented with both.
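The section heading also mentions providers, which the article does not show. A provider is a factory that can be configured before the app runs; $get is the factory function the injector calls once configuration is done. The sketch below imitates Angular's two phases (configure, then instantiate) with plain JavaScript so it runs standalone; the ApiProvider name and its methods are made up for illustration:

```javascript
// A provider in the AngularJS sense: configuration methods, plus a
// $get factory that the injector calls after the config phase.
function ApiProvider() {
  var baseUrl = '/api'; // default, overridable during the config phase

  this.setBaseUrl = function (url) { baseUrl = url; };

  // $get plays the role of the factory function from Listing 15
  this.$get = function () {
    return {
      urlFor: function (resource) { return baseUrl + '/' + resource; }
    };
  };
}

// What Angular's config and run phases boil down to:
var provider = new ApiProvider();
provider.setBaseUrl('/api/v2');  // like app.config(function (apiProvider) {...})
var api = provider.$get();       // the injector builds the service once
console.log(api.urlFor('user')); // '/api/v2/user'
```

In real AngularJS the config-phase hook is module.config(), where the provider is injected with a "Provider" suffix on its name; the sketch only shows the shape of the pattern.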

Separate your Business Logic and Presentation logic


Now that we have an understanding of what belongs in the service layer, we can start talking about
Controllers and what to do with and in them. As we mentioned before, it helps to consider Angular in an
MVC pattern. The aim is to have a clear separation between the view and the business logic.
Services, which we touched upon above, are the business logic. Your services should be answering questions like
how should I fetch data from the server?
how do I delete an email?
should I allow this action to take place?
how should data be cached?
should data be returned from my local cache or the server?
is this data valid?
Any business logic that is independent of the view belongs in the service layer. So now, what does the
controller do?
Controllers in Angular should contain only view logic: how to respond to user actions and behaviors.
DOM manipulation, as we have already seen, belongs in directives. Controllers should be responsible for:
Fetching data from services and assigning it to the scope
Setting up callbacks and handlers for the UI, and delegating to services
Following this general rule of thumb goes a long way towards creating a manageable, bug-free AngularJS
application.
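As a rough sketch of this division of labor (plain JavaScript, with a stubbed service standing in for real $http calls; all names here are hypothetical):

```javascript
// Business logic lives in the service: how data is fetched and
// how a deletion is actually performed.
var userService = {
  fetchUsers: function () { return ['alice', 'bob']; }, // stub for an XHR call
  deleteUser: function (users, name) {
    return users.filter(function (u) { return u !== name; });
  }
};

// The controller only wires service results onto the scope and
// translates UI events into service calls.
function UserCtrl(scope, service) {
  scope.users = service.fetchUsers();
  scope.onDeleteClicked = function (name) {
    scope.users = service.deleteUser(scope.users, name);
  };
}

// Simulating what Angular does when it instantiates the controller:
var $scope = {};
UserCtrl($scope, userService);
$scope.onDeleteClicked('bob');
console.log($scope.users); // ['alice']
```

Note that the controller has no idea how deletion works, and the service has no idea there is a button; either side can change independently.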

Dependency Injection
AngularJS heavily relies on Dependency Injection, and you should too.

Dependency Injection is a concept that says users of a service or a dependency should declare it and ask
for it, instead of trying to instantiate it themselves when they need it. This has a few advantages:
Dependencies are explicit: anyone can look at the declaration of a controller or service and figure out
what is needed to make it work.
Testability: in tests, we can easily swap out a heavy service (something that talks to the server, say) with
a mock and test just the functionality we care about. This is covered in more depth in the next section.
What this means for you is that you should try to leverage AngularJS's dependency injection system and let
Angular do the heavy lifting whenever possible. Let AngularJS figure out how to get you a fully configured
service (which in turn might depend on five other things).
And remember, Dependency Injection is everywhere in AngularJS:
Need access to a service from a directive? Add the dependency and you can use it.
Need to access one service from another? Dependency Injection!
Need a constant value in the controller? You can ask AngularJS for it.
And this makes our testing life way easier.
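All of these cases rest on the same mechanism: dependencies are resolved by name from a registry rather than constructed by their consumers. Here is a toy sketch of that idea (not Angular's actual injector; all names are made up) that also shows why it helps testing:

```javascript
// A toy injector: factories are registered by name, instances are
// built lazily and cached, and consumers never construct their own
// collaborators.
var registry = {};
var instances = {};

function provide(name, factory) { registry[name] = factory; }
function inject(name) {
  if (!(name in instances)) { instances[name] = registry[name](); }
  return instances[name];
}

provide('http', function () {
  return { get: function () { return 'real network call'; } };
});
provide('userService', function () {
  var http = inject('http'); // one service asking for another
  return { load: function () { return http.get(); } };
});

// In a test, override 'http' with a mock BEFORE 'userService' is
// resolved; userService never notices the swap.
provide('http', function () {
  return { get: function () { return 'canned response'; } };
});
console.log(inject('userService').load()); // 'canned response'
```

This swap-a-mock-before-resolution trick is precisely what angular-mocks does for $http and $timeout in the unit tests shown below.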

Testing is key! And unit test early and often


Too often, testing our applications becomes an afterthought. No one is of the opinion that testing is not
needed, but before we know it, our applications grow to a point where testing becomes a thankless game
of catch-up with an ever-changing code base. And in a dynamic language like JavaScript, where there is no
compiler and no type safety, developing without a testing harness is like skydiving without a parachute. You
will land in the same spot either way, but in one case, you are going to be a bloody mess.
On one of our projects, we had a giant section of code that was completely untested. And suddenly, in one
release, we started seeing some scary behavior: IE would just crash after 5-10 minutes of being open, and
Firefox and Chrome would slow down to a crawl. After some major debugging, we traced it to the
seemingly innocuous lines of code in Listing 16.
Listing 16. A tricky little bug
angular.module('MyApp', []).controller('MyCtrl', function($scope, $http) {
  var myAwesomeFunction = function() {
    $http.get('/my/url').success(function(response) {
      // Do stuff with the server data
      setInterval(myAwesomeFunction, 10000);
      $scope.$apply();
    });
  };

  // Kick start the process
  myAwesomeFunction();
});

If you didn't catch the bug there, don't worry about it. Neither did we, for quite some time. So what was the
intent there in the first place? We were trying to fetch some data from the server, and then keep polling the
server every 10 seconds to see if it was up to date. This was of course part of a larger codebase, and I have
stripped away everything that is not relevant.
Now, we didn't have any unit tests for this, so obviously we were relying on manual QA and pure luck to
ensure it worked. And when it didn't, finding this needle in the haystack was not fun.
There are two problems here, both of which have to do with the use of setInterval. Firstly, setInterval is not
AngularJS-aware, so we need to manually tell AngularJS to update its views by calling $scope.$apply(). But
the second is more insidious. Instead of calling setTimeout (or better, the Angular version, $timeout), we
are calling setInterval. The number of calls made to /my/url over time is shown in Figure 2.

Figure 2. Number of calls of myAwesomeFunction with time


At the end of the 4th minute, we had close to 9 million calls happening simultaneously. You can imagine why
IE didn't like this one bit!
Now instead, assume we had actually practiced what we preach, and had written the unit tests for this beast. To make
the test easier to read and manage, let us say that instead of making a server call, our awesome function
instead updated a variable called count on the scope every time it executed. Our code and test would then
look something like Listing 17.
Listing 17. The fixed code, and the unit test
angular.module('MyApp', []).controller('MyCtrl', function($scope, $timeout) {
  $scope.count = 0;
  var myAwesomeFunction = function() {
    $scope.count++;
    $timeout(myAwesomeFunction, 10000);
  };

  // Kick start the process
  myAwesomeFunction();
});

// Unit test begins here
describe('Testing MyApp MyCtrl', function() {
  beforeEach(module('MyApp'));
  var timeout, $scope, ctrl;
  beforeEach(inject(function($rootScope, $controller, $timeout) {
    $scope = $rootScope.$new();
    timeout = $timeout;
    // Create the controller and trigger the constructor.
    // AngularJS will automatically figure out how to inject most of
    // the dependencies other than $scope
    ctrl = $controller('MyCtrl', {
      $scope: $scope
    });
  }));

  it('should make a single request after every timer tick', function() {
    // Initial kick-off, first update made
    expect($scope.count).toEqual(1);
    // Simulate a timer tick, another update should have happened
    timeout.flush();
    expect($scope.count).toEqual(2);
    timeout.flush();
    expect($scope.count).toEqual(3);
  });
});

If we had had these kinds of unit tests right from day 1, we could have saved multiple man-days tracking
down this bug. And writing these kinds of unit tests in AngularJS, once you have the harness in place, is extremely
straightforward. AngularJS also gives you built-in mocks for XHR requests, timers, and the like.
Other than catching these one-off bugs, why write these tests? You should write your unit tests so that:
They do the job of your compiler: any typo or syntax error is caught immediately, rather than waiting for
your browser to tell you.
They act as a specification for your code: the tests define what the expected behavior is, what the
side effects should be, what requests should be made, etc.
They prevent regressions and bugs: they stop some other developer from unknowingly changing expected
behavior and side effects.
AngularJS's dependency injection system allows you to manipulate your tests exactly how you want, and
get them to the state you care about, before triggering any functions you want.
YearOfMoo has a great article (http://www.yearofmoo.com/2013/01/full-spectrum-testing-with-angularjs-and-karma.html) on the other kinds of testing you can do with AngularJS, including End-to-End scenario
tests, where you open up the browser and reliably test behavior without the flakiness that is inherent in End-to-End
tests. But at the end of the day, just ingrain the habit of writing your unit tests early and often. You will
thank yourself for it later.

Group by functionality, and leverage Angular modules


One of the early recommendations in AngularJS was to have one folder for JS files, and then within that, one
folder each for controllers, directives, filters and services. But when you end up having one file each per
controller, directive and so on, the list of files becomes unmanageable real quick.

Instead, what seems to work better is to organize your files inside the JS folder by module or functionality.
What do I mean by that?
Consider a simple Client Server application, with some 3rd party components like jQuery UI wrapped as
directives. The traditional recommended structure would have been something like Figure 3 below.

Figure 3. AngularJS project structure as per Seed App


This would all be as part of one module, which would say be
angular.module('MyApp', []);

What instead works better, and is more extensible and reusable is grouping by functionality. In this case, let
us first create a module for jQueryUI directives
angular.module('MyApp.directives.jqui', []);

This would have both the Datepicker and Accordion directives. If I now wanted to reuse these directives in
another project, I could just pluck these files along with the module, add a dependency on
MyApp.directives.jqui, and start working away.
Similarly, if tomorrow I decide to switch from jQuery UI to, say, Twitter Bootstrap, I just change my
dependency to MyApp.directives.bootstrap, and as long as I name the directives and keep their API the same,
I can seamlessly switch between dependencies. That is the power of Directives and Modules.
Similarly, the entire XHR service layer could be included in one module, say MyApp.services.xhr. This gives
us the flexibility of reusing the same service layer across multiple apps, or, say, between a mobile version of
the app and the desktop version. Each functional component (Search, Checkout) could be a separate
module (MyApp.services.search, MyApp.services.checkout), which allows you to plug and play different
modules for various apps. This sort of structure really pays big dividends in a large company, where
code reusability, maintainability and division of responsibility are needed. Your final app structure might look
something like Figure 4 below.

Figure 4. A more modular AngularJS app structure
A much more nested structure, but easier to manage, maintain and modify. Your final App module would just
pull in all its needed dependencies:
angular.module('MyApp', ['MyApp.directives.jqui',
                         'MyApp.services.xhr',
                         'MyApp.services.search',
                         'MyApp.services.checkout',
                         'MyApp.controllers.search',
                         'MyApp.controllers.checkout']);

In Summary
We covered a whole bunch of loosely related topics in the span of a few pages. But internalizing these short
tidbits of information goes a long way towards having a smooth, productive AngularJS experience. Try to
let AngularJS do the heavy lifting, minimize your work, and remember: the aim in AngularJS is to write the
least amount of code that does the most work, while having fun!

About the Author

Shyam Seshadri was present at the birth of AngularJS, and has co-authored a book on it for O'Reilly
Publications. An ex-Googler, he now splits his time between consulting on exciting web and mobile
projects and developing his own applications as part of Fundoo Solutions (http://www.befundoo.com).


Reusable UI Components in AngularJS


by Abraham Polishchuk & Elliot Shiu
Packaging modules for reuse is a useful technique that makes code easier to read and
maintain. We will walk you through building such components using AngularJS and discuss the
underlying features used.
While building an AngularJS application, you might repeatedly find yourself writing new UI components to
solve similar problems. This doesn't bode well for you, because the code base will grow due to duplication
and, in the long run, maintainability will suffer. Separating your app into smaller components allows
you to avoid these pitfalls. As long as your dependency architecture follows the Law of Demeter,
you will be able to reuse your components across use cases, and even applications, without copying the core
logic. This allows development, tests and builds to be worked on in parallel by disparate teams.

What To Expect
You should have some experience with JavaScript, HTML, and Object Oriented Design. Familiarity with
Design Patterns (Dependency Injection in particular) would also be useful and knowledge of the basics
of AngularJS is highly recommended. If you are looking to build a library of reusable components that
will then be composed into a single page app or just want to learn more about AngularJS, then please
keep reading. The core concepts covered will be: using directives and controllers to compose modular
components; isolate scope and its relevance to reusability; using dynamic dependency injection and services
to share data between scopes.

In the Beginning
We will be building a set of two tables which will shuttle data from the first table to the second one. Table
one will be populated with items from a mock endpoint and when a user clicks an item in the first table, it
will be displayed in the second.
AngularJS provides a powerful feature to extend native DOM functionality. At its core, a directive is a
function that executes when the AngularJS HTML compiler reaches it in the DOM. They can be passed
controllers to provide logic to drive specific features, and even templates to set innerHTML. It is important
to note, that by default directives do not create a new scope; they share scope with their parent object.
However, this can be overridden by creating an isolate scope using the syntax scope: {}, which will create
a brand new scope object which does not inherit prototypically from its parent. As such, this is useful for
encapsulating functionality into a DOM element which does not depend on any of its parent scopes. The
simplest way to pass a string into an isolate scope is to use the '@' operator, which binds a scope variable
to a string passed in as an attribute on the DOM node. For a sample, see Listings 1a through 1c, which
demonstrate the above by way of creating a wrapper directive to house all further components.
Listing 1a. index.html
<body ng-app="myApp">
  <div two-grid-shuttle source-service="mockService"></div>
</body>

Listing 1b. twoGridShuttle.js directive
angular.module('myApp').directive('twoGridShuttle', function () {
  return {
    scope: { sourceService: '@' },
    controller: 'twoGridShuttleController',
    templateUrl: 'two_grid_shuttle.html'
  };
});

Listing 1c. two_grid_shuttle.html template


<div shuttle-grid source-model="sourceItems" source-service="sourceService"
     click-function="addItemToTarget(item)" class="shuttle-grid grid-left"></div>
<div shuttle-grid source-model="selectedItems" source-service="sourceService"
     click-function="removeItemFromTarget(item)" class="shuttle-grid grid-left"></div>

Command and Control


Now that we have an external layer, it is necessary to provide the underlying logic. In this controller we
will use a $watch to listen for when mockService becomes bound to our scope variable, sourceService. Initially,
when the controller is instantiated, its scope will be a new scope object until it prototypically inherits from
the parent scope (the directive's isolate scope) in the next $digest() iteration. Once bound, the $watch will
fire, and we can use the $injector service to retrieve our dependency from the AngularJS object cache. For a
sample, see Listing 2.
Listing 2. twoGridShuttle.js controller
angular.module('myApp').controller('twoGridShuttleController', ['$scope', '$injector',
  function (scope, injector) {
    scope.injectService = function (serviceName) {
      scope.sourceService = injector.get(serviceName);
      scope.models = scope.sourceService.models;
    };
    scope.addItemToTarget = function (item) {
      scope.models.selectedItems.push(item);
    };
    scope.removeItemFromTarget = function (item) {
      scope.models.selectedItems.splice(scope.models.selectedItems.indexOf(item), 1);
    };
    scope.$watch('sourceService', function (newVal, oldVal) {
      if (newVal && typeof newVal === 'string') {
        scope.injectService(newVal);
      }
    });
  }]);

Propagating The Awesome


In a sense, our child shuttleGrid directive will be simpler than the parent, as it does not need a controller. We will
still need access to sourceService and our convenience functions, addItemToTarget and removeItemFromTarget,
which live in the parent scope. By utilizing the full power of isolate scope, we can pass sourceService into the
shuttleGrid via a bidirectional binding on a parent scope variable with '='. This means that any changes made
to the value in the directive's isolate scope will propagate to the parent scope it was bound from, and vice
versa. We also access our two methods in the parent scope by passing a function reference into the isolate
scope by way of the '&' operator. Lastly, the directive makes use of a link function that will bind the scope to
the DOM after it runs.
For a sample see Listing 3a through 3b:
Listing 3a. shuttleGrid.js directive
angular.module('myApp').directive('shuttleGrid', function () {
  return {
    scope: { sourceModel: '@', sourceService: '=', clickFunction: '&' },
    templateUrl: 'shuttle_grid.html'
  };
});

Listing 3b. shuttle_grid_directive.html template:


<table>
  <tbody>
    <tr ng-repeat="element in sourceService.models[sourceModel]">
      <td ng-click="clickFunction({item: element})">{{element}}</td>
    </tr>
  </tbody>
</table>

Communicating Is Hard
The final piece of the puzzle is to cement our components with a service. It is useful to note that services
are singletons, and are therefore a self-evident vehicle for sharing values between different scopes. While they
can be used to query a backend for JSON or to access an AngularJS resource, both of these applications
are outside the scope of this article. Instead, we will use our service to define a mock data structure. When
implementing a component based on this template, it is imperative to have an architectural discussion about
the design of the specific data held by the service (and, by extension, returned by any server-side APIs), as all
components will have to make assumptions about this. In our case, we will posit the existence of a models
object populated with data. For an example, see Listing 4:
Listing 4. mockService.js service:
angular.module('myApp').service('mockService', function () {
  var service;
  service = { models: {} };
  service.models.sourceItems = ['foo', 'bar', 'bazz'];
  service.models.selectedItems = [];
  return service;
});

On the web

A fully working implementation of the above code can be found here: http://bit.ly/1fddMGY.

Watch Out Now


The above approach is not a silver bullet. For a start, working with encapsulated code requires a better
understanding of both AngularJS and the quirks of JavaScript's prototypal inheritance. The code will appear
less readable initially, and some dependencies may not be immediately obvious to developers who are not
intimately familiar with the codebase. Designing your application in a modular fashion requires a large
upfront investment of effort at the architecture level. This is the only way to ensure that the code is built in
such a way as to easily allow bugs to be tracked down at the component level. Finally, it is worth noting that
JavaScript minification renames function parameters, and care needs to be taken to either use the square-bracket
Dependency Injection syntax as demonstrated in our example, or apply $inject, to avoid mysterious
production-only bugs.
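As a sketch of the $inject alternative mentioned above, using the controller from Listing 2 (the annotation mechanism is standard AngularJS; treat the exact wiring as illustrative):

```javascript
// Minification may rename the parameters 'scope' and 'injector' to
// something like 'a' and 'b'; the $inject array preserves the real
// dependency names so the injector can still resolve them.
function TwoGridShuttleController(scope, injector) {
  // ...controller body as in Listing 2...
}
TwoGridShuttleController.$inject = ['$scope', '$injector'];

// Equivalent to the inline square-bracket form used in Listing 2:
// angular.module('myApp').controller('twoGridShuttleController',
//   ['$scope', '$injector', function (scope, injector) { /* ... */ }]);

console.log(TwoGridShuttleController.$inject.join(',')); // '$scope,$injector'
```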

About the Authors

Abraham Polishchuk graduated with a B.S. in Computer Science from the University of Edinburgh.
Previously he was a Chef guy and a Test Automation Engineer. He enjoys travel, rock music, martial
arts, and programming with new technologies. His latest hobby is Haskell and Yesod. Feel free to contact
Abraham at apolishc@gmail.com.
Elliot Shiu is a DevBootcamp graduate who in a previous life was a Network Engineer. You can find him
mentoring aspiring programmers or looking for fresh powder at Mammoth Mountain. He is passionate
about using elegant technologies to solve practical problems. Drop him a line at elliot@sandbochs.com
or read his blog at http://www.sandbochs.com.
Currently, they are colleagues at goBalto Inc. working on the full stack: AngularJS, Ruby on
Rails, and PostgreSQL.


Test Automation with Selenium 2


by Veena Devi
Selenium 2, aka WebDriver, is an open source automation tool from the Selenium family. It
helps you build test automation for any web application, and supports cross-browser testing
and functional testing. Let's start building our own framework using Selenium WebDriver.
WebDriver is a script-based automation tool, so we need to set up a development environment for the test
framework.

Environment setup is as follows


Download and install JDK 1.6 or later from http://java.com/en/download/index.jsp
Download the Eclipse zip file from http://www.eclipse.org/downloads/packages/eclipse-ide-java-eedevelopers/junor based on your system configuration (Windows/Linux)
Unzip Eclipse and open the Eclipse IDE by clicking eclipse.exe
Download the Selenium WebDriver jars from http://selenium.googlecode.com/files/selenium-java-2.35.0.zip and unzip them
Download the latest jxl jar from http://www.findjar.com/jar/net.sourceforge.jexcelapi/jars/jxl-2.6.jar.html for
reading data files
Install the TestNG plugin from the Eclipse Marketplace
Now the environment is ready for the test framework.

Rules for Automation Framework


Standard Java naming conventions should be used
If a method performs an action that clicks on an element on the page, the name of the method should
start with click
All methods intended to implement tests for the automation of the web project should be public
All assertions and verifications should be made within the tests, using TestNG framework asserts

Common automation tasks and solutions


How to open a browser
How to launch a URL
How to identify the element to be interacted with using WebDriver
How to perform the interaction
How to confirm/verify the expected result

Understanding the WebDriver APIs will provide the solution to the tasks mentioned above.

Features of WebDriver
Launch a browser using the respective driver. WebDriver supports browsers like Firefox, IE, Safari and
Chrome. For Firefox it has native built-in support; for other browsers WebDriver needs to know the
executable path of the browser.
Listing 1. Code sample
//For the Firefox browser
WebDriver driver = new FirefoxDriver();

//For IE: download and install the Internet Explorer driver from http://docs.seleniumhq.org/download/
System.setProperty("webdriver.ie.driver", "<parent directory>/IEDriverServer.exe");

DesiredCapabilities ieCapabilities = DesiredCapabilities.internetExplorer();
ieCapabilities.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
ieCapabilities.setCapability("ignoreProtectedModeSettings", true);

WebDriver iedriver = new InternetExplorerDriver(ieCapabilities);

Launch the AUT (Application Under Test). The get method of the driver object accepts a valid
URL as a parameter and opens it in the current active browser window.
driver.get("<url>");

Alternatively, we can use the navigate method of the driver object:

driver.navigate().to("<url>");

WebDriver provides a lot of locator strategies; the By class has a list of static methods to handle web
elements:
By.className
By.cssSelector
By.id
By.linkText
By.name
By.partialLinkText
By.tagName
By.xpath
These methods return a WebElement object.

Listing 2. Code sample

For identifying a WebElement:

Using By.className:
<td class="name"> </td>
WebElement td = driver.findElement(By.className("name"));
Using By.cssSelector:
<input id="create">
driver.findElement(By.cssSelector("#create"));
Using By.id:
<td id="newRecord"> </td>
WebElement td = driver.findElement(By.id("newRecord"));
Using By.linkText:
<a onclick="gotonext()">Setup</a>
WebElement link = driver.findElement(By.linkText("Setup"));

Handling complicated elements: when a web application uses more than the normal HTML tags, there may
be complicated elements such as dropdowns, iframes and tables.
Handle dropdown: the WebDriver API provides the Select class to handle a dropdown list and its options; a
WebElement can be converted into a Select object to fetch the options.
Listing 3. Code sample
HTML CODE
<select id="city">
  <option value="Op1">Chennai</option>
  <option value="Op2">Hyderabad</option>
  <option value="Op3">Bangalore</option>
</select>
WebElement selectElement = driver.findElement(By.id("city"));
Select selectObject = new Select(selectElement);

Handle iframe: an inline frame is used to embed another document within the current HTML document. This
means the iframe is actually a webpage within the webpage, and every iframe on the page has its own
DOM.
Listing 4. Code sample
<iframe name="frame1" id="Frame1">
</iframe>
To access DOM elements inside, the driver control needs to switch to this frame:
driver.switchTo().frame("frame1");
Switching to a frame can be handled in different ways:
frame(index)
frame(name of the frame [or] id of the frame)
frame(WebElement frameElement)
//This will change the control of the driver back to the parent window
driver.switchTo().defaultContent();

WebDriver interactions with WebElements. You need different types of interactions for different types of
WebElements, like a textbox, button, link, checkbox or dropdown.

webElement.sendKeys() //type a sequence of characters into the text box field
webElement.click() //click the button/link element
webElement.clear() //clear the value in the given text area
webElement.submit() //submit the form element
selectObject.getOptions() //fetch all available options for that dropdown
selectObject.selectByValue(value) //select the provided value from the dropdown
selectObject.deselectByValue(value) //deselect the option with the given value

Verifying WebElement state: a WebElement must be visible for the driver to interact with it, and must be
enabled to click or type. To get the state of an element:
Listing 5. Code sample
WebElement element = driver.findElement(By.cssSelector("#Name")); //this is the username text box
element.isEnabled() / element.isDisabled()

These return a Boolean based on the current state of the element.

If the element is a radio button or a checkbox, to verify whether it is selected or not:
userName.isSelected()
<a id="userName">SDJ</a>
userName.getText() returns the text part of the element

Identifying attributes and properties of WebElements. For a chosen element, we can verify other properties
in the DOM by providing the attribute name:
userName.getAttribute("name"); //"name" = the name of the attribute

Navigating between browser windows. In a web application, functionality can open in a new
window or navigate to the next page; the driver object has a facility to navigate back and forth
between windows and also to switch to a new window.
//To open a URL
driver.navigate().to("https://www.google.co.in/");
//Refresh the Current Page
driver.navigate().refresh();
//move back from current window
driver.navigate().back();
//step forward from current window
driver.navigate().forward();

There may be delays in web page load times due to many factors: network speed, many Ajax calls,
many images, etc. Until an element is loaded, WebDriver cannot interact with it. The WebDriver API
has wait commands built in.
//Implicit wait hold the driver before each element interaction before throw an error
driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
//Wait for the page to load completely before throwing an error
driver.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);

//Tell the driver to wait until the element loads/becomes visible - explicit wait
new WebDriverWait(driver, 60)
    .until(new ExpectedCondition<Boolean>() {
        @Override
        public Boolean apply(WebDriver d) {
            return theElement.isDisplayed();
        }
    });
Action Builder. If you want to perform complicated actions like a double click, or drag an element from one
place to another, you need to use the Actions class:
Actions action = new Actions(ffDriver);
//To double click
action.doubleClick(element);
action.build().perform();
//To drag from one position to another
Actions dragAction = new Actions(ffDriver);
dragAction.dragAndDrop(dragthis, dropHere);
dragAction.build().perform();

Now a user can extend the framework using TestNG for a good reporting structure as well as for test
execution control: controlling the number of test cases executed by configuring test groups, or data-driving
test cases using a DataProvider. Using configuration and integration tools like Maven or Ant also gives
effective maintenance for the test framework.
The combination of WebDriver + TestNG + Maven supports an effective, easy-to-maintain test framework in
a hybrid way.
Happy Testing

Where to go from here?


http://docs.seleniumhq.org/ - the place for all documentation and downloads for the Selenium tool group
https://code.google.com/p/selenium/ - the site for developers of the Selenium tool
http://www.seleniumwebdriver.com/ - a forum for all your Selenium-related questions

About the Author

Veena Devi, 32, has a strong background in software development, testing and test automation built over
9 years. She is a trainer and consultant for web application automation testing, and a testing consultant for
TinyNews, a startup company. She is part of Quality Learning, a place for all software testing training.


Grabbing the Elements in Selenium


by Nishant Verma
This article is going to explore the different ways in which we can identify an HTML element
for authoring your tests. This will also help you understand which identifier to use when
multiple identifiers are present for the same element.
What you will learn

What are the different locators that can be used to identify an element?
How to get a handle on the element you need for writing your Selenium test?
What should the strategy be for choosing a locator when there are lots of options?
What you should know

A high-level idea of HTML elements.


Brief hands-on experience with Selenium.
While writing GUI functional tests, we interact with objects such as textboxes, buttons, labels,
drop-downs, etc.
The success of any automation tool is primarily based on how easily and accurately it can identify those
objects (also known as elements). So let's understand in some detail how to identify elements and what
Selenium has to offer.
Selenium offers two different APIs at the driver instance level: findElement and findElements. The first
returns the first element it finds based on the criteria specified, and the latter gives you the list of all
matching elements it finds.
Locating an element is driven by a couple of identifiers, which is evident from the list below.
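The difference between the two APIs can be illustrated with a plain-Java analogy that needs no browser. The class below and its string-based "DOM" are made up purely for illustration: findElement returns the first match (and Selenium throws a NoSuchElementException when there is none), while findElements returns every match, possibly an empty list.

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

// Toy analogy of Selenium's two lookup APIs, using strings as "elements".
public class FindDemo {
    // First match only; fail loudly when nothing matches, like Selenium does
    public static String findElement(List<String> dom, String cls) {
        Optional<String> first = dom.stream()
                .filter(e -> e.contains(cls))
                .findFirst();
        return first.orElseThrow(() -> new RuntimeException("NoSuchElement: " + cls));
    }

    // Every match; an empty list means "not found", with no exception
    public static List<String> findElements(List<String> dom, String cls) {
        return dom.stream()
                .filter(e -> e.contains(cls))
                .collect(Collectors.toList());
    }
}
```

This is why findElements is the safer choice for "is this element present?" checks: an empty list is easy to test for, whereas findElement forces you to catch an exception.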

Figure 1. Location of element


className(String className) helps find elements based on the value of the class attribute.

The class attribute may have more than one value, and in that case both values can be used. Refer to the
figure below to see how the class attribute is specified in HTML and the usage of it.

Listing 1. Code sample

driver.findElement(By.className("Hotels")).click();

id(String id) helps find elements based on the value of the id attribute. Refer to the figure below to see how
the id attribute (with the value origin_autocomplete) is specified in HTML and the usage of it.

Listing 2. Code sample

driver.findElement(By.id("origin_autocomplete")).sendKeys("test");

linkText(String linkText) helps find elements based on the value of the link text. Generally this is used
when you don't find an id or className. Refer to the screenshot below for the usage of it.

Listing 3. Code sample

driver.findElement(By.linkText("My Trips")).click();

name(String name) helps find elements based on the value of the name attribute.

If you refer to Picture 3 above, you will notice that one of the attributes of the input field is name, with the
value origin.
driver.findElement(By.name("origin")).sendKeys("Bangalore");
partialLinkText(String linkText) helps find elements based on the given link text. In Picture 5 below, there
is a link on the website with the text Tell us what you think; we can very well use partialLinkText for such
links. The implementation is shown below.

Listing 4. Code sample

driver.findElement(By.partialLinkText("Tell us what you think")).click();

xpath(String xPathExpression) helps find elements based on an XPath expression. XPath stands for XML
Path Language and basically provides a way of traversing to an element through the hierarchical structure
of an XML document. There are a couple of browser add-ons that can be used to get the XPath of an
element, some of them being:

Firebug (https://addons.mozilla.org/en/firefox/addon/firebug)
XPather (https://addons.mozilla.org/en-US/firefox/addon/xpather)
If you use any of the above tools to find the XPath of the element highlighted in Picture 3, you would find it
as mentioned below.
XPath = //*[@id='origin_autocomplete']

So the same implementation can be expressed in a different way using XPath:

driver.findElement(By.xpath("//*[@id='origin_autocomplete']")).sendKeys("Bangalore");
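To see what an expression like //*[@id='...'] actually matches, you can evaluate it with the JDK's own XPath engine against a small XML fragment, with no browser involved. The helper class and the markup below are made up for illustration; they are not the real page from the article.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

// Evaluate //*[@id='...'] with the JDK's XPath engine and report
// the tag name of the first matching element (or null if none).
public class XPathDemo {
    public static String tagNameById(String xml, String id) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            Element match = (Element) XPathFactory.newInstance().newXPath()
                    .evaluate("//*[@id='" + id + "']", doc, XPathConstants.NODE);
            return match == null ? null : match.getTagName();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Selenium's By.xpath does the matching against the live browser DOM instead, but the selection semantics are the same.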

cssSelector(String selector) helps find elements based on the CSS pattern specified. We will not divert
ourselves into the details of what CSS is, how to construct a cssSelector, etc.

However, we will tell you an easy way to figure out the selector. If you are using Firefox as the browser,
install the Firebug and FirePath add-ons.
Once these add-ons are installed, select the element you want to use and right click on it to select Inspect
Element with Firebug. On the highlighted element in the HTML tree in the Firebug window, right click to
select Copy CSS Path.
Once you get the CSS path, the above test can be expressed using cssSelector:
driver.findElement(By.cssSelector("input#origin_autocomplete.autocomplete")).sendKeys("Bangalore");

To summarize what we discussed just now, there are different ways to identify an element, and each
identifier has its own pros and cons.
id or className are the simplest and easiest locators to use. An advantage is that they increase the
readability of your test code. They are also better than the other locators in terms of test performance.
However, if you are using a lot of ids, your test code can become clumsy. One suggestion here would be to
keep them in a separate file, where you can give them more meaningful names if they are not properly
named in the page source (example: the Google search textbox on the home page has the value q for the
id attribute).
linkText or partialLinkText is mostly used with links and is limited to that. They are easy to use; however,
they are a little problematic to maintain because link texts change often.
XPath is simple to use but makes your test code look ugly. XPath should generally be used when the object
has neither an id nor a className. When we run a test that uses XPath, the browser runs its XPath
processor to check whether it can find any object, and this impacts test performance.
One important thing that we tend to forget while using XPath is to ensure the order of the elements, so it
should ideally be used to verify some object with respect to certain other objects.
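The preference order discussed above (id or className first, then linkText, with XPath as a last resort) can be sketched as a small helper. The class and its string output are illustrative assumptions, not part of the Selenium API; in a real framework you would return a By object instead.

```java
// Encode the locator-preference rule: use the most robust attribute
// the element actually has, and fall back to XPath only when forced.
public class LocatorStrategy {
    public static String pick(String id, String className, String linkText) {
        if (id != null && !id.isEmpty()) {
            return "id=" + id;                 // fastest and most readable
        }
        if (className != null && !className.isEmpty()) {
            return "className=" + className;   // still fast, reasonably stable
        }
        if (linkText != null && !linkText.isEmpty()) {
            return "linkText=" + linkText;     // fine for links, but text changes
        }
        return "xpath";                        // last resort: slow and brittle
    }
}
```

Centralizing the choice like this keeps the "which locator do I use?" decision in one place instead of scattered through every test.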

About the Author

Nishant is a Computer Science engineer by education and has close to 8 years of experience in the test
automation and management domain, spanning different companies and multiple projects. He has worked
extensively with test automation tools like Selenium, WatiN, QTP and LoadRunner in the past, and is
currently working as a Lead QA Consultant with ThoughtWorks Technologies. He maintains his own
website, www.nishantverma.com, and actively writes articles on testing techniques, test automation, agile
testing and tool comparison. His hobbies are reading and writing blogs, listening to music and reading books.


March 9-11, 2015


Santa Clara, CA
Registration Now Open!

Learn how to design, build and develop apps


for the wearable technology revolution
at Wearables TechCon 2015!
Two Huge Technical Tracks
Hardware and Design Track
Choose from 30+ classes on product design, electronic engineering for
wearable devices and embedded development. The hardware track is a
360-degree immersion on building and designing the next
generation of wearable devices.

Software and App Development Track


Select from 30+ classes on designing software and applications for
the hottest wearable platforms. Take deep dives into the leading SDKs,
and learn tricks and techniques that will set your wearable software
application apart!

A BZ Media Event

2 Days of Exhibits
Business-Critical Panels
Special Events
Industry Keynotes

Wearables DevCon blew away all my


expectations, great first year. Words
can't even describe how insightful
and motivating the talks were.
Mike Diogovanni, Emerging Technology
Lead, Isobar

www.wearablestechcon.com
