
FREE DVD
THE COMPLETE MAGAZINE ON OPEN SOURCE

Volume: 11 | Issue: 04

LINUX FOR YOU IS NOW OPEN SOURCE FOR YOU

` 100

Volume: 01 | Issue: 09 | Pages: 112 | June 2013

Developing Apps
Build Apps For Ubuntu With Quickly!
Create Stand-Alone Apps For Android With PhoneGap

Special Read

Bring The Power Of Hadoop To Your Organisation
Career Highlights
What It Takes To Be An Open Source Expert

India ` 100 | US $ 12 | Singapore S$ 9.5 | Malaysia MYR 19

Give A Head-Start To Your


Career With Apps Development


Contents
Developers
31 A Bird's Eye View of Android System Services
36 Develop Apps Quickly with Quickly
43 Try Your Hand at GNOME Shell Extensions
51 PhoneGap Application Development: A Developer's Delight!
63 Object Oriented Programming with JavaScript
67 Write a Linux SPI Device Driver With Ease
39 How to Improve the Performance of Drupal Sites

Admin
Simulate Your Network with NS2
76 Securing the SSH Service

For You & Me


85 What Makes LaTeX a Hit?
89 The Imaginary Music of Octave
48 An Introduction to Hadoop and Big Data Analysis


60 Bluefish: The Feature-Rich Editor

ON THE DVD
Live DVD
Ubuntu 13.04: An x86/x86_64 version
Ubuntu 13.04, codenamed Raring Ringtail, is a fast, secure and simple operating system for desktops and servers. It comes with new features that make your music, videos, documents and apps much easier to access.
Xubuntu: An Ubuntu derivative with the Xfce desktop environment
Kubuntu: An Ubuntu derivative with the KDE desktop environment
Ubuntu Server 13.04: An x86_64 version. Comes with the Grizzly release of OpenStack.
4 | june 2013




QualiSpace offers UNIQUE BUSINESS EMAIL SOLUTIONS for SMALL SCALE & MEDIUM-SIZED organizations:
3 Layered Anti Spam/Anti Virus Protection
25 GB Mailbox size
Restore emails from backup for up to 14 days
Instant Activation

YOUR own VIRTUAL PRIVATE SERVER on XEN platform.
CHOICE of HIGH END DEDICATED SERVER in INDIA & US datacenter.

For more information, kindly contact us at :


E: sales@qualispace.com

T: +91 (22) 67816677


W: www.qualispace.in


Contents

Editor
Rahul Chopra

Editorial, Subscriptions & Advertising

Delhi (HQ)
D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020
Phone: (011) 26810602, 26810603; Fax: 26817563
E-mail: info@efyindia.com

Bengaluru
Ms Jayashree
Ph: (080) 25260023; Fax: 25260394
E-mail: efyblr@efyindia.com

Customer Care

Regular Features
08 You Said It...
10 Q&A Powered By OSFY Facebook
12 Offers of the Month
14 New Products
18 Open Gadgets


79 Access Your Raspberry Pi Remotely with VNC


e-mAil: support@efyindia.com

Back Issues
Kits n Spares
New Delhi 110020
Phone: (011) 26371661-2
E-mail: info@kitsnspares.com
Website: www.kitsnspares.com

FOSSBytes
Innovation

Advertising

Editorial Calendar
FOSS Jobs

Hyderabad
Saravana Anand
Mobile: 09916390422
E-mail: efyenq@efyindia.com

Tips & Tricks

Kolkata
Gaurav Agarwal
Ph: (033) 22294788; Telefax: (033) 22650094
Mobile: 9891741114
E-mail: efycal@efyindia.com

Events
An Introduction to Graphviz

Mumbai
Ms Flory D'Souza
Ph: (022) 24950047, 24928520; Fax: 24954278
E-mail: efymum@efyindia.com

What it Takes to be
an Open Source Expert

Pune
Sandeep Shandilya; Ph: (022) 24950047, 24928520
E-mail: efypune@efyindia.com

"Sony Wants to Get Things Right with Android"
Kenichiro Hibi, managing director, Sony India

Apps Development: A Career with Immense Growth Possibilities

Open Gurus
82 Operating Modes in x86 Systems: An Inside Story

Columns

87

91

"Developers play a key role in bringing about digital literacy"
Narendra Bhandari, director, Intel Software and Services Group, Intel South Asia

"Companies have realised that open source is one of the most important options"
Sunando Banerjee, channel business manager - Openbravo, APAC and the Middle East


Chennai
Saravana Anand
Mobile: 09916390422
E-mail: efychn@efyindia.com

Singapore
Ms Peggy Thay
Ph: +65-6836 2272; Fax: +65-6297 7302
E-mail: pthay@publicitas.com,
singapore@publicitas.com
United States
Ms Veronique Lamarque, E & Tech Media
Phone: +1 860 536 6677
E-mail: veroniquelamarque@gmail.com
China
Ms Terry Qin, Power Pioneer Group Inc.
Shenzhen-518031
Ph: (86 755) 83729797; Fax: (86 21) 6455 2379
Mobile: (86) 13923802595, 18603055818
E-mail: terryqin@powerpioneergroup.com,
ppgterry@gmail.com
Taiwan
Leon Chen, J.K. Media
Taipei City
Ph: 886-2-87726780 ext.10; Fax: 886-2-87726787

Exclusive News-stand Distributor (India)


IBH Books and Magazines Distributors Pvt Ltd
Arch No, 30, below Mahalaxmi Bridge, Mahalaxmi, Mumbai - 400034
Tel: 022- 40497401, 40497402, 40497474, 40497479, Fax: 40497434
E-mail: info@ibhworld.com

33 CodeSport
58 Exploring Software: The Anatomy of an Android X86 Installation

LEADING PLAYERS

A List Of Cloud
Solution Providers

Gujarat
Sandeep Roy
E-mail: efyahd@efyindia.com
Ph: (022) 24950047, 24928520


Printed, published and owned by Ramesh Chopra. Printed at Tara Art Printers
Pvt Ltd, A-46,47, Sec-5, Noida, on 28th of the previous month, and published
from D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020. Copyright
2013. All articles in this issue, except for interviews, verbatim quotes, or unless
otherwise explicitly mentioned, will be released under Creative Commons
Attribution-NonCommercial 3.0 Unported License a month after the date
of publication. Refer to http://creativecommons.org/licenses/by-nc/3.0/
for a copy of the licence. Although every effort is made to ensure accuracy,
no responsibility whatsoever is taken for any loss due to publishing errors.
Articles that cannot be used are returned to the authors if accompanied by
a self-addressed and sufficiently stamped envelope. But no responsibility is
taken for any loss or delay in returning the material. Disputes, if any, will be
settled in a New Delhi court only.

SUBSCRIPTION RATES
Period               News-stand price (`)   You Pay (`)
Five Years           6000                   3600
Three Years          3600                   2520
One Year             1200                   960
Overseas (One Year)                         US$ 120

Kindly add ` 50/- for outside Delhi cheques.
Please send payments only in favour of EFY Enterprises Pvt Ltd.
Non-receipt of copies may be reported to support@efyindia.com; do mention your subscription number.

Trained participants from over 43 Countries in 6 Continents


Linux OS Administration & Security Courses for Migration
LLC102: Linux Desktop Essentials
LLC033: Linux Essentials for Programmers & Administrators
LLC103: Linux System & Network Administration
LLC203: Linux Advanced Administration
LLC303: Linux System & Network Monitoring Tools
LLC403: Qmail Server Administration
LLC404: Postfix Server Administration
LLC405: Linux Firewall Solutions
LLC406: OpenLDAP Server Administration
LLC408: Samba Server Administration
LLC409: DNS Administration
LLC410: Nagios - System & Network Monitoring Software
LLC412: Apache & Secure Web Server Administration
LLC414: Web Proxy Solutions
Courses for Developers
LLC104: Linux Internals & Programming Essentials
LLC106: Device Driver Programming on Linux
LLC108: Bash Shell Scripting Essentials
LLC109: CVS on Linux
LLC204: MySQL on Linux
LLC205: Programming with PHP
LLC206: Programming with Perl
LLC207: Programming with Python
LLC208: PostgreSQL on Linux
LLC504: Linux on Embedded Systems
LLC702: Android Application Development
RHCE Certification Training
RH124: Red Hat System Administration - I
RH134: Red Hat System Administration - II
RH254: Red Hat System Administration - III
RH299: RHCE Rapid Track Course
RHCVA / RHCSS / RHCDS / RHCA Certification Training
RHS333: Red Hat Enterprise Security: Network Services
RH423: Red Hat Enterprise Directory Services & Authentication
RH401: Red Hat Enterprise Deployment & Systems Management
RH436: Red Hat Enterprise Clustering & Storage Management
RH442: Red Hat Enterprise System Monitoring & Performance Tuning
RHS429: Red Hat Enterprise SELinux Policy Administration
RH318: Red Hat Enterprise Virtualization
NCLA / NCLP Certification Training
Course 3101: SUSE Linux Enterprise 11 Fundamentals
Course 3102: SUSE Linux Enterprise 11 Administration
Course 3103: SUSE Linux Enterprise Server 11 Advanced Administration

LLC - 14th Anniversary


Special Offer up to 30 June 2013
Programming with Qt
Device Driver Programming on Linux
Embedded Programming on Linux

RHCVA / RHCSS / RHCA Training - Exams


RH318: 15, & 22 June 2013; EX318: 24 July;
RHS333: 15 June; EX333: 25 July; RH423:15 June; EX423: 24 June;
RH401: 1 July, Ex401: 5 July; RH436: 8 July; EX436: 12 July
RH442: 15 July; EX442: 19 July

RH199/299 from 10, 17, 24 June


EX200/300 Exam 14, 21 & 28 June
LLC - Authorised Novell Practicum Testing Centre
NCLP Training on Courses 3101, 3102 & 3103

CompTIA Storage+ & Cloud+ Training


& Certification Anniversary Special Offer
Microsoft Training Co-venture: CertAspire
Microsoft Certified Learning Partner

www.certaspire.com
For more info log on to:

www.linuxlearningcentre.com
Call: 9845057731 / 9449857731
Email: info@linuxlearningcentre.com

RHCSA, RHCE, RHCVA,


RHCSS, RHCDS & RHCA
Authorised Training
& Exam Centre

Registered Office: # 635, 6th Main Road, Hanumanthnagar, Bangalore 560019

# 2, 1st E Cross, 20th Main Road, BTM 1st Stage, Bangalore 560029.
Tel: +91.80.22428538, 26780762, 65680048
Mobile: 9845057731, 9449857731, 9343780054

Gold

Practicum

TRAINING
PARTNER

TESTING
PARTNER

YOU SAID IT
'OSFY boosts my knowledge bank'
I am a regular reader of OSFY. Despite having no computer background, I could learn about Linux on a trial-and-error basis through the Internet and other free sources. Today, I use only Linux, especially the Ubuntu distro. I must admit that OSFY has played the biggest role in boosting my knowledge of Linux. The magazine's very crisp and lucid style is its USP.
In your April 2013 issue, I read the article, 'Getting Started with
an Open Source Circuit Simulator' by Vineeth Kartha. I loved it and
would like to thank the author for sharing such an informative piece.
The article also explains how to install QUCS. Readers with a limited knowledge of the command line may not be able to install it. The important thing is adding the repository, which may be difficult through the command line. There are chances of getting an output like 'Command not found'. So here's how I did it.
I opened the Ubuntu Software Centre and went to Edit > Software Sources > Other Software > Add...
I then entered the APT line "ppa:fransschreuder1/qucs" > Add Source > Close.
Then I reloaded the Ubuntu software centre to confirm it was
loaded. I then opened the terminal (Ctrl + Alt + T) and applied
the following:
sudo apt-get update
sudo apt-get install qucs

The QUCS was installed successfully.
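For readers who prefer to stay in the terminal, the repository step itself can also be done on the command line. A minimal sketch, assuming the add-apt-repository tool (from the software-properties-common package on current Ubuntu releases) is installed and that the PPA named above is still active:

```shell
# Add the QUCS PPA from the command line instead of through the
# Software Centre, then refresh the package lists and install.
sudo add-apt-repository ppa:fransschreuder1/qucs
sudo apt-get update
sudo apt-get install qucs
```

This is equivalent to the Software Centre steps described in the letter; add-apt-repository writes the PPA entry and fetches its signing key in one go.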


As I sign off, I would like to congratulate you for bringing
out a wonderful magazine for open source lovers.
Navinchandra Talati, n_m_talati@yahoo.co.in
ED: We must confess that such mails help us in our continuous
endeavour to make OSFY a better magazine, with every
issue. Thanks for such inspiring and insightful feedback. With
reference to the article on QUCS, we will definitely convey
your words of appreciation to the author. Also, thanks a lot for


sharing the wonderful tip on installing QUCS with our readers.


This will surely help them. Keep sharing your feedback with us.

I learnt the basics of Linux from OSFY


I am a professor at SIT Mangalore and a regular subscriber to
OSFY. I take this opportunity to let you know that OSFY played a
major role in helping me understand the basics of Linux and open
source. Apart from that, some of the developers' articles, along
with the code snippets, helped me in executing and installing
software that I required. Thanks a lot for publishing such
informative articles in your magazine. I hope OSFY will cater to
an even wider audience in the days to come. All the best!
Mohan K, hamohax@gmail.com
ED: Thanks a lot for the feedback. It feels great when readers
tell us how they benefited from our magazine. We cherish our
readers' views and with every edition, we try to evolve into a
better product. And your comments help us do that even better.
Yes, there is a lot of demand for developers' articles. Keep
sending us your suggestions and feel free to get in touch with us
at osfyedit@efyindia.com in case of any query.

The content in the e-zine should be different


The e-zine is a great service to subscribe to. However, I feel that
most users on the portal have already subscribed to the magazine.
It would be great if you come up with articles and forums that
have not been covered in the print edition. I am sure this will
benefit the readers immensely.
Meraj Ahmad Siddiqui, msiddiqui.jmi@gmail.com
ED: Interesting feedback. However, the purpose of the e-zine is to provide you and other subscribers with an archival facility to store all the issues you have subscribed to, as well as to allow you to access them wherever you are. We do publish extra content on our Web portal: www.linuxforu.com. Take a look and share your views with us.

Please send your comments or suggestions to:
The Editor
D-87/1, Okhla Industrial Area, Phase I,
New Delhi 110020, Phone: 011-26810601/02/03,
Fax: 011-26817563, Email: osfyedit@efyindia.com


RHCA
RHCE

ADVANCE
LINUX
MODULES

PHP &
MYSQL
India's first network security education provider, now available at four different locations

SHELL
CCNA SCRIPT

RHCSS
Online Training And
Summer/Industrial
Training

Registration Open

REDHAT EXAM & TRAINING SCHEDULE AT GRRAS

JAIPUR
9 June 2013 = Ex-423
10 June 2013 = Ex-333
11 June 2013 = Ex-318
12 June 2013 = RHCSA/RHCE
13 June 2013 = Ex-442
17-20 June 2013 = RH401 Deployment & Satellite training
21 June 2013 = Ex-423
22 June 2013 = Ex-429
24 June 2013 = Ex-333
25-28 June 2013 = RH436 Clustering and storage training
29 June 2013 = Ex-436
30 June 2013 = RHCSA/RHCE

NAGPUR
8 June 2013 = Ex-423
9 June 2013 = Ex-333
17 June 2013 = RHCSA/RHCE
21 June 2013 = Ex-318
22 June 2013 = Ex-429
29 June 2013 = RHCSA/RHCE

PUNE
7 June 2013 = Ex-333
13 June 2013 = RHCSA/RHCE
17 June 2013 = Ex-423
18 June 2013 = Ex-429
19 June 2013 = Ex-318
28 June 2013 = RHCSA/RHCE

www.grrasspace.com
JAIPUR : GRRAS Linux Training and Development Center
219, Himmat Nagar, Behind Kiran Sweets, Gopalpura
Turn, Tonk Road, Jaipur(Raj.)
Tel: +91-141-3136868, +91- 9887789124,
+91- 9785598711, Email: info@grras.com

VPS Servers
Email Marketing Solutions
PUNE: GRRAS Linux Training and Development
Center 18, Sarvadarshan, Nal-stop, karve Road,
Opposite Sarswat-co-op Bank, Pune 411004
M: +91-9975998226, +91-7798814786
Email: info.pune@grras.com

Java Hosting
Shared Hosting

Server Management
Domain Registration

NAGPUR: GRRAS Linux Training and Development Center 53


Gokulpeth, Suvrna Building, Opp. Ram Nagar Bus Stand and
Karnatka sangh Building, Ram Nagar Square, Nagpur- 440010,
Phone: 0712-3224935, M: +91-9975998226,
Email: info.nagpur@grras.com

www.grras.org

Powered By

www.facebook.com/linuxforyou

Nick Jamison:
I am a newbie to the whole Linux world. What are the differences and similarities between Linux and BSD? Which do you prefer and why?

Aniruddha P Tekade:
I am a beginner in Java and want to learn how projects are developed. So, is there any kind of documentation or video lectures where 'the stepwise development' of a Java project is given, so that I can follow along and learn to develop?

Like . comment

Sky Convergence: I like Linux because it is immune to Windows viruses. I can insert a flash drive into my Linux computer without worrying about getting infected by viruses from Windows.

Utsav Rana: UNIX is a server side operating system


whereas Linux is the child version of UNIX. Ubuntu
is based on Debian and the most preferred distro for
newbies and home users.
Nick Jamison: Ok, thank you immensely for all the answers! I appreciate it!

Tripathi Satyesh: BSD (Berkeley Software Distribution) is a UNIX system that is free to use, but its code is not open due to its inheritance of code from UNIX.
Jason Gookstetter: Awesome question.
Ubuntu is great for starters. You will see
the word 'flavors' often. It's all preference.
Your perfect version of linux is like a mutt
:D, distro+desktop environment+eye candy+
functionality = your preferred choice. I recommend LTS of any kind, stick with what works
and modify from there. Modify/write an install
script to fit exactly what you like once a distro is
chosen and you'll find life easier.

Er Avishek Kumar: BSD also tends to be more stable, albeit much less bleeding edge than Linux. The FreeBSD team is really meticulous about software updates and release cycles. FreeBSD is also a lot more rough around the edges for most newbies to UNIX and UNIX-like OSs. But put FreeBSD on an old computer and, after stomaching the rather long bootup, you won't be displeased with its efficiency and stability. Plus, installing everything from source code in FreeBSD is made easy via ports, which influenced Gentoo's emerge. Do not expect any hand-holding or work being done for you with a FreeBSD setup; man pages and the handbook are your salvation. With that said, stick to a Debian-based OS, such as Linux Mint, Ubuntu or one of the Ubuntu derivatives. At least until your horns are no longer green.
Image quality is poor as the photos have been directly taken from www.facebook.com

Like . comment

Masroor Ahmed Nizamani: For video tutorials, have a look at the thenewboston YouTube channel, the howtoprogramwithjava[dot]com podcasts and Derek Banas' Java tutorials. You could also look at Lynda.com's Foundations of Programming: Fundamentals; they have Java tutorials too. I am not a professional programmer, but let me give you some idea. First you have to learn the programming language's rules/syntax, and then how to use libraries. Try to make things and break things; as you progress, you get an idea of how to do things! Then learn data structures and algorithms, and after that, design patterns. And the most important thing in programming is practice, so do not copy-paste code; always try to write it yourself.
Vikas Dwivedi: Refer updated books. Read and make
notes using official Java documentation. There are also
some online tutorials like lynda.com, thenewboston
etc. But, I personally prefer reading books as many as
possible because they give more comprehensive understanding. You need to have a deeper understanding
about the language before taking/starting any project.
Nikhil Ikhar: http://www.tutorialspoint.com/java/ is a good place to start. First get familiar with the basics of Java and the minimum required modules, then dive deep into any of them.

Bikramjit Mandal: Download and install NetBeans, then go to the website and follow a demo.
But first of all decide what kind of project you want
to do. All the best!

Apinder Singh:
Any distro which can work fine on an AMD A8 4500M APU with Radeon HD 7640G+7670M graphics without tweaking?
Like . comment

Phani Kumar Veluturi: Check the CPU usage of applications using top/htop, and for power use "acpi -V"/"hddtemp". Use KDE instead of GNOME.

Apinder Singh: Facing an overheating problem with default installations of distros like Fedora, Linux Mint and Ubuntu.

Q&A
Rajat Khandelwal:
I have installed Windows and Fedora 18 on my PC. My PC got shut down three times due to power failures, and when I restarted it, it gave me an error while loading GRUB:
GRUB loading.
Welcome to GRUB!
Entering rescue mode...
error: the symbol 'grub_zalloc' not found
grub rescue>
Now what should I do? Do I need to install Fedora again?
Like . comment

Ranjan Mondal: Just insert the recovery disk or open Fedora 18 in recovery mode, then try this: # grub2-mkconfig > /etc/grub2.cfg, then # grub2-install /dev/sda (your device name).

Dipten Halder: You need to re-install the grub


boot loader. What Linux distro you are using?
Kanagaraj Govindasamy: Just reinstall your

GRUB alone.
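The recovery route suggested above can be sketched end-to-end from a live session. A minimal sketch, assuming the installed Fedora root filesystem is on /dev/sda3 and GRUB is to be reinstalled to /dev/sda (both hypothetical device names; adjust them to your own layout):

```shell
# From a Fedora live session: mount the installed system,
# chroot into it, and reinstall GRUB 2 to the boot disk.
# /dev/sda3 and /dev/sda are placeholders for your root
# partition and disk respectively.
sudo mount /dev/sda3 /mnt
for fs in dev proc sys; do
    sudo mount --bind /$fs /mnt/$fs   # expose live system's /dev, /proc, /sys
done
sudo chroot /mnt grub2-install /dev/sda
sudo chroot /mnt grub2-mkconfig -o /boot/grub2/grub.cfg
```

Running grub2-install from inside the chroot ensures the installed system's own GRUB modules are written to the disk, which avoids the missing-symbol mismatch described in the question.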


Nikhil Kalra:
Can someone refer me a good phone, below Rs 12,000, that is good enough to tackle security/malware hazards?
Like . comment

Jeremy Sanders: Ubuntu Phone. Android is based on Java, horribly insecure.

Shankar Shanky: Ubuntu Phone coming soon.

Rahul Binjve: Lumia 520 is the best phone under 10k. I'm a full-time Android + Linux user, but gotta admit that there's no competition to the Lumia 520.

Er Avishek Kumar: Security relates to software and not hardware. Nothing is 100% secure. Use any phone having Android with a good antivirus.

Shashaank Areguli: LG L5 2 is good... as far as the security factor goes, it depends on the user.

Bilal Javed: Micromax Canvas 2 looks cool to me.

Akshay Mukadam: I have a problem installing Fedora 18 from CD. As I make the partition, it fails. Please help.
Like . comment

Riya Patankar: Boot with live media, make partitions, and then with the DVD just choose the partitions and choose the format option rather than creating them.

Akshay Mukadam: Not working. Before finishing the setup, we made all the swap, boot, etc. partitions. But the partition fails after clicking it. I tried it with another expert but it failed.

Read Open Source For You: Asia's Leading Open Source Magazine

OFFERS OF THE MONTH

Get 1 month free usage credit of Rs 500/-

Cloud Website Hosting @ Affordable Price

Free
Sign Up now and get Rs 500 usage credit!
Coupon Code: eNlightJune

For more information,


call us on 1800-209-3006/+91-253-6636500
or email us at sales@esds.co.in

Hurry! Offer valid till 30th June 2013!

One month free services on opting for our quarterly plan
Grab our dedicated servers on a quarterly plan & get 1 month absolutely FREE! For more details contact us @ 1-800-102-8757 or write to us @ onlinesales@ctrls.in. To chat with our solutions expert, visit www.ctrls.com

Free

Hurry! Offer valid till 30th June 2013!

Contact us @ 98409 82184/85 or


write to enquiry@vectratech.in

Hurry! Offer valid till 20th June 2013!

www.vectratech.in

Program Days: June 29-30, 2013 (2 days)


August 17-18, 2013 (2 days) Place : Bangalore.

Write to : training@livematics.com or
call +91 9742643700 & mention
coupon code: OSFYJUNE13

www.livematics.com

Discount
& more
This summer, learn to master your field.
Get flat 10% discount on every course
Use coupon code: Summer2013
Contact us at +91-98877 89124 or
write to info@grras.com
Catch us on facebook.com/grras

www.vectratech.in

Hurry! Offer valid till 30th June 2013!

Java Design Patterns


Master Design Patterns in Java from an
industry veteran.
Practical and real world examples explained
neatly in a weekend.

20%

25% off
on online
training

Shell
Scripting
Free!

Contact us at +91-98868 79412 /


+91-80-42425000 or write to
info@astTECS.com

www.astTECS.com

Do not wait! Be a part of the winning team


Get 35% off on course fees and if you appear for
two Red Hat exams, the second shot is free.

Partner with a Global Brand!


Get 50% off on franchise fees! Be your own
boss with significant earning potential

Get 33%
discount

35%
off & more

Hurry! Offer valid till 30th June 2013!

Call on (022) 6781 6677 today to avail


this special offer.

Get
1 month

www.ctrls.com

Hurry! Offer valid till 30th June 2013!

Go BIG with our Premium Business Email


solutions and give yourself enough space to grow.
QualiSpace offers one month FREE
on Premium Mailing Solution.

www.qualispace.in

www.esds.co.in

One
month free
services

Get 1 month Free

Get the expertise from industry experts


Enrol for Enterprise Linux System Administration+
VMware @23500 INR and get to learn
Shell Scripting for FREE!!
Contact Bhavesh on +918793342945
or write to info@linuxlab.org.in &
mention coupon code: OSFYJUNE2013

www.linuxlab.org.in

Hurry! Offer valid till 30th June 2013!

20% discount for RHCE training & Special


offers for all other Red Hat Certifications
Get yourself registered now!
Contact : Tel: 0484-2366258,0481,
Mob: 09447294635, 09447169776
Email: training@ipsrsolutions.com
& mention coupon code: OSFYAPRIL13

www.ipsr.org

To Advertise Here,
Contact Omar on
995 888 1862 or
011-26810601/02/03 or
write to efyenq@efyindia.com
www.linuxforu.com

To know more call +9141095707

new products
Up your style quotient
with Rapoo's multi-touch
wireless mouse

If you are looking for a chic, wireless


mouse with a glossy finish for your
PC or laptop, Rapoo's wireless ultra-slim and multi-touch optical mouse,
the T6, can be a good buy. The
wireless mouse is compatible with
both Linux and Windows OSs.
The mouse has no visible buttons
and scroll wheel, and the multi-touch technology lets you scroll both
vertically and horizontally. It also has a
two-finger swipe gesture for 'back' and
'forward' commands for easy access.
The key specs are a reliable 2.4 GHz
wireless connection, a 360 degree wide
range and a transmission distance of
up to 10 metres, two-finger swipe for
'back' and 'forward' commands and new
low-power consumption technology
with a power on/off button. The Rapoo
T6 mouse comes with a nano USB
2.0 connector, which connects with
2.4 G wireless transmission with a
working range of up to 10 m. Sunil Srivastava, manager, India Sales and Marketing for Rapoo, says, "The mouse is designed for all those PC users who have a penchant for style. It has all the advanced features that can beat our major competitors. We hope to launch some more such products in the coming days."

Price: ` 4,049

Address: Rapoo Technologies


Limited, D-1, TF, Shyam Bhawan,
Plot No 514 and 516, Zhenda Colony,
Fatehpur Beri, Asola Extension, New
Delhi 110074; Ph: +91-98999-94802;
Email: sunil.srivastava@rapoo.com;
Website: www.rapoo.com


WickedLeak brings out India's first


budget-friendly quad core tablet
If you are in the market for a
pocket-friendly Quad-core tablet,
you can certainly check out
WickedLeak's Wammy Desire Tab
2. Priced at Rs 9999, the 17.7 cm
tablet comes powered with a 1.4
GHz Quad-core Samsung Exynos
processor coupled with 1 GB of
RAM. It comes fitted with a 2
MP rear and 0.3 MP front-facing
snapper for video calling. What's interesting is that consumers can make a choice between Android versions 4.0 (Ice Cream Sandwich) and 4.2 (Jelly Bean) when they buy the tablet. (Note: The 4.2 version is in beta stage.) Shares Aditya Mehta, CEO, WickedLeak, "We, at WickedLeak, have a strong customer-centric focus and value customer feedback highly, which is translated into our products. The Wammy Desire Tab is an example of this."

Price: ` 9,999

Address: WickedLeak Inc, Aditya Villa, Waman Wadi, S.T. Road, Chembur,
Mumbai 400071; Ph: 65017532: Website: www.wickedleak.org

The new Micromax 3D smartphone!


Micromax is scripting a new growth story with
its low-priced Canvas smartphones. And the
latest to arrive in this family is the Micromax
A115 Canvas 3D.
It sports an impressive 12.7-cm (5-inch) display
but comes with in-built stereoscopic 3D technology,
which means you can watch the content without 3D
glasses. The phone runs Android's Jelly Bean OS and is powered by a dual-core processor, coupled with 512 MB RAM.
It comes fitted with a 5 MP rear camera, with a 0.3 MP front-facing snapper for video calling. Shubhodip Pal, chief marketing officer, Micromax, says, "With the launch of the Canvas 3D, we have yet again succeeded in delivering an innovative product for the Indian youth who are always looking for technological advancements. All our products cater to the youth segment of the country."

Price: ` 9,999

Address: Micromax, Micromax House, 697, Udyog Vihar, Phase-V,


Gurgaon, Haryana; Email: info@micromaxinfo.com; Ph: 0124-4811000;
Website: http://www.micromaxinfo.com/


Enhance your music


experience with
Portronics Bluetooth
speakers

All those who have an ear for music


will be thrilled to hear that Portronics,
one of the emerging pioneers in
innovative, portable and digital
devices, has launched its new version
of the wireless stereo Bluetooth
speaker, Pure Sound BT. This
speaker delivers full stereo sound,
enabling you to revel in the music on
your Bluetooth enabled devices with
the freedom of wireless connectivity.
You don't need to don your ear phones
while you listen to your favourite
music on your mobile phone, tablet,
MP3 player, etc.
Despite its compact size, the Pure
Sound BT speaker brings soaring highs
and deep, booming, bass to every room
in the house. You can ideally have a
distance of 10m between the mobile
phone and the speaker. Pure Sound BT
is powered by a pair of proprietary and
highly sophisticated acoustic drivers,
unmatched in their ability to produce
extreme high and low frequencies
from stereo speakers.
Shares Jasmeet Sethi, director, Portronics, "Pure Sound BT is a perfect blend of awesome performance, great features, and ease of use. It is also the perfect integration of technology and utility, giving you the ideal music gadget you would love to have."

Price: ` 2,499

Address: Portronics Digital Pvt Ltd, 4E


/ 14 Azad Bhavan, Jhandewalan, New
Delhi 110055; Email: supportcenter@
portronics.com; Ph: 91-9971833777;
Website: http://www.portronics.com/


Intex rides the crest with its first


3G-enabled dual core tablet
Keeping pace with the changing trends of consumer
demands, Intex has launched its first 3G-enabled dual
core tablet, the i-Buddy Connect 3G. This tablet sports a
17.8-cm (7 inch) display. Powered by a 1 GHz dual core
processor, the device comes with 1 GB of RAM and runs
on Android 4.0. Sanjay Kumar Kalirona, GM, Mobility Business, Intex Technologies (I) Ltd, says, "The i-Buddy Connect 3G aims to provide entertainment and education to our customers, both in urban and semi-urban cities and towns, who are high users of the Internet." It comes pre-loaded with a slew of educational content that can be accessed for free via educational software called EDUCLASS.

Price: ` 9,990

Address: Intex Technologies (India) Limited, D - 18/2, Okhla Industrial Area,


Phase II, New Delhi 110020; Email: info@intextechnologies.com; Ph: +91 11
41610224/25/26; Website: http://www.intextechnologies.com/index.html

Asus widens its horizons with its first voice calling tablet
If a mid-range priced tablet is what you desire, you can definitely consider the Asus FonePad, the first voice calling tablet from the company. Priced at Rs 15,999, the tablet sports a 17.7-cm (7-inch) IPS display and runs the Android 4.1 (Jelly Bean) operating system.
The tablet is powered by a 1.2 GHz processor, coupled with 1 GB RAM. Shares Peter Chang, regional head, South Asia, and country manager, System Business Group, Asus India, "Our constant search for the incredible has led us to the launch of the Asus FonePad."

Price: ` 15,999

Address: Asus Technology Pvt Ltd, 4C, Gundecha Enclave,


Kherani Road, Near Sakinaka Police Chowki, Sakinaka,
Andheri-E, Mumbai 400072; Ph: 022 67668800;
E-mail: info_india@asus.com; Website: in.asus.com.

Zync launches Quad 10.1 tablet

Here is yet another Jelly Bean powered tablet for you


to consider. As the name suggests, the Zync Quad 10.1
tablet has a 25.6 cm (10.1 inch) full HD IPS display
with a 1920 x 1200 pixel screen resolution. Powered
by a quad-core 1.5 GHz processor, it comes fitted with
a 5 MP rear auto focus camera with face detection and
a 2 MP front-facing camera for video chat. Says Amul Mohan Mittal, co-founder, Zync Global, "This Quad 10.1 tablet is packed with premium features and is being offered to the consumer at a very affordable price."

Price: ` 14,990

Address: Zync Global Pvt Ltd, Sector 2, Noida (NCR), Uttar Pradesh 201301;
Ph: 91 120 4821999; E-mail: support@zync.in; Website: www.zync.in.

Tablets

Samsung Galaxy Note 510
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹30,900; ESP: ₹30,900
Specification: 8 inch (WXGA) TFT touchscreen, 1280 x 800 pixels screen resolution, 4,600 mAh battery, 1.6 GHz quad core processor, 5 MP rear camera, 1.3 MP front camera, 16/32 GB internal memory, expandable up to 64 GB via microSD, 3G, WiFi

Zync Quad 10.1
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹14,990; ESP: ₹14,990
Specification: 10 inch full HD display, 1920 x 1200 pixels screen resolution, 1.5 GHz quad-core processor, 5 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB via microSD, 3G, WiFi

Simmtronics XPAD XQ1
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹9,990; ESP: ₹6,990
Specification: 7 inch capacitive touchscreen, 1024 x 600 pixels screen resolution, 1.2 GHz dual core processor, 3500 mAh battery, 2 MP rear camera, 512 MB RAM, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Croma CRXT 1134
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹13,999; ESP: ₹13,999
Specification: 10.1-inch HD capacitive touchscreen, 1 GHz quad core processor, 2 MP rear and 0.3 MP front camera, 4-in-1 multiple video viewing, 2 GB DDR3 RAM, 16 GB internal memory, 3G, WiFi

Intex iBuddy Connect
OS: Android 4.0; Launch Date: April 2013; MRP: ₹9,990; ESP: ₹6,990
Specification: 17.8-cm (7 inch) touchscreen, 1 GHz dual core processor, 1 GB RAM, 2 MP rear and 0.3 MP secondary camera, 3000 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Celkon CT 888
OS: Android 4.0; Launch Date: April 2013; MRP: ₹8,990; ESP: ₹7,999
Specification: 7 inch capacitive touchscreen, 1024 x 600 pixels screen resolution, 1 GHz processor, 3400 mAh battery, 512 MB RAM, 8 MP rear and 2 MP front camera, 4 GB internal memory, WiFi

Karbonn Smart Tab TA-FONE A37
OS: Android 4.0; Launch Date: April 2013; MRP: ₹7,990; ESP: ₹7,290
Specification: 7-inch capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 512 MB RAM, 3000 mAh battery, 2 MP rear camera, 0.3 MP (VGA) front camera, 4 GB internal memory, expandable memory up to 32 GB, 3G, WiFi

Asus Fonepad
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹15,999; ESP: ₹15,999
Specification: 17.7-cm (7-inch) IPS display, 1280 x 800 pixels screen resolution, 1.2 GHz processor, 1 GB RAM, 3 MP rear and 1.2 MP front camera, 8/16 GB internal storage options available and microSD card slot, 3G, WiFi

Salora Fontab
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹6,890; ESP: ₹6,890
Specification: 7 inch LCD capacitive touchscreen, 1024 x 600 pixels screen resolution, 1.5 GHz processor, 3500 mAh battery, 2 MP rear and 0.3 MP secondary camera, memory expandable up to 32 GB, 3G, WiFi

Videocon VT75C
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹6,499; ESP: ₹5,990
Specification: 17.7-cm (7-inch) display touchscreen, 1600 x 1200 pixels screen resolution, 1 GHz processor, 512 MB RAM, 3,000 mAh battery, 2 MP rear and 0.3 MP front-facing camera, 4 GB internal memory, expandable memory up to 32 GB, 3G via dongle, WiFi

WishTel IRA Capsule
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹14,999; ESP: ₹12,999
Specification: 10.1 inch LED multi-touch capacitive touchscreen, 1024 x 786 pixels screen resolution, 1.6 GHz dual core processor, 1 GB RAM, 8000 mAh battery, 5 MP rear and 0.3 MP front camera, expandable memory up to 32 GB, 3G, WiFi

iBall Edu-Slide
OS: Android 4.1; Launch Date: March 2013; MRP: ₹16,000; ESP: ₹16,000
Specification: 25.6-cm (10.1-inch) touchscreen, 1280 x 800 pixels screen resolution, 1.5 GHz dual-core processor, 1 GB RAM, 2 MP rear and VGA front-facing camera, 8 GB internal storage, 3G, WiFi

Zync Quad 9.7
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹13,990; ESP: ₹13,990
Specification: 9.7-inch screen with an LED-backlit Super HD IPS touchscreen, 2048 x 1536 pixels screen resolution, 1.5 GHz processor, 2 GB RAM, 8,000 mAh battery, 5 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Zync Quad 8.0
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹12,990; ESP: ₹12,990
Specification: 8-inch capacitive touchscreen, 1024 x 768 pixels screen resolution, 1.5 GHz processor, 2 GB RAM, 5800 mAh battery, 5 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Lava E-Tab Connect
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹9,499; ESP: ₹9,499
Specification: 7-inch screen with WVGA capacitive touchscreen, 1.2 GHz Qualcomm processor, 512 MB RAM, 3,000 mAh battery, 2 MP rear camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi

Swipe Halo
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹6,990; ESP: ₹6,990
Specification: 17.7-cm (7-inch) TFT LCD multi-touch capacitive touchscreen, 1.5 GHz processor, 512 MB RAM, 3,400 mAh battery, 2 MP rear camera, 0.3 MP front camera, 2G, WiFi
OPEN GADGETS

Salora Protab HD
OS: Android 4.0; Launch Date: March 2013; MRP: ₹6,199; ESP: ₹4,999
Specification: 7 inch LCD capacitive touchscreen, 480 x 800 pixels screen resolution, 1.5 GHz processor, 0.3 MP front-facing camera for video calling, 512 MB RAM, 3200 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Salora Protab
OS: Android 4.0; Launch Date: March 2013; MRP: ₹6,330; ESP: ₹6,330
Specification: 7-inch IPS multi touchscreen, 1024 x 600 pixels screen resolution, 1.5 GHz dual core processor, 3500 mAh battery, 2 MP rear camera, 8 GB internal memory, expandable up to 32 GB, WiFi

Datawind Ubislate 7C+ Edge
OS: Android 4.0; Launch Date: February 2013; MRP: ₹5,999; ESP: ₹5,999
Specification: 7 inch capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 512 MB RAM, VGA secondary camera, 4 GB internal memory, expandable memory up to 32 GB, 2G, WiFi

Lava E-Tab Xtron
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹6,599; ESP: ₹5,499
Specification: 7 inch LCD capacitive touchscreen, 1024 x 600 pixels screen resolution, 1.2 GHz processor, 1 GB RAM, 0.3 MP front camera, 3200 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Champion Computers Wtab 705 Talk
OS: Android 4.1 aka Jelly Bean; Launch Date: February 2013; MRP: ₹6,499; ESP: ₹6,499
Specification: 17.8 cm capacitive touchscreen, 480 x 800 pixels screen resolution, 1.5 GHz processor, 4 GB internal memory, expandable memory up to 32 GB, built-in support for 2G networks

Simmtronics XPad X1010
OS: Android 4.0; Launch Date: February 2013; MRP: ₹8,399; ESP: ₹8,399
Specification: 10.1-inch capacitive touchscreen, 1024 x 600 pixels screen resolution, 1.2 GHz processor, 5,600 mAh battery, 0.3 MP front-facing camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi
Laptops

Dell Vostro 2520
OS: Linux; Launch Date: January 2013; MRP: ₹33,500; ESP: ₹27,499
Specification: 15.6 inch HD WLED Anti-Glare display, 1366 x 768 pixels screen resolution, Core i3 (2nd Generation) processor, 2 GB DDR3 memory, expandable up to 8 GB, Intel HD Graphics 3000, 500 GB hard disk capacity, 2.36 kg weight

Acer Gateway NE56R
OS: Linux; Launch Date: December 2012; MRP: ₹22,699; ESP: ₹20,800
Specification: 15.6 inch TFT LCD display screen, 1366 x 768 pixels screen resolution, 2.1 GHz Intel Pentium processor, 2 GB memory, expandable up to 8 GB, DVD SuperMulti Drive with dual layer support, 500 GB hard disk storage capacity, 2.6 kg weight

Ambrane Mini
OS: Android 4.0; Launch Date: November 2012; MRP: ₹5,499; ESP: ₹5,034
Specification: 7 inch TFT capacitive touchscreen, 800 x 480 pixels screen resolution, 1.2 GHz processor, 3000 mAh battery, built-in 0.3 MP camera, WiFi

SMARTPHONES

Sony Xperia L
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹19,990; ESP: ₹18,990
Specification: 4.3 inch capacitive touchscreen, 1 GHz dual core processor, 1750 mAh battery, 8 MP rear and 0.3 MP front camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi

Samsung Galaxy Win I8552
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹19,850; ESP: ₹17,900
Specification: 4.7-inch TFT capacitive touchscreen, 1.2 GHz quad core processor, 2,000 mAh battery, 5 MP rear camera, 0.3 MP front camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi

Micromax A115 3D
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹14,990; ESP: ₹9,999
Specification: 12.7 cm capacitive touchscreen, 1 GHz dual core processor, 2000 mAh battery, 5 MP rear and 0.3 MP front camera, 0.93 GB internal memory, expandable up to 32 GB, 3G, WiFi

Celkon A119Q
OS: Android 4.2; Launch Date: May 2013; MRP: ₹12,499; ESP: ₹12,499
Specification: 12.7-cm (5-inch) HD display touchscreen, 1.2 GHz quad-core processor, 1 GB RAM, 2,100 mAh battery, 12 MP rear and 3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Samsung Galaxy Fame Duos S6812
OS: Android 4.1 aka Jelly Bean; Launch Date: May 2013; MRP: ₹11,400; ESP: ₹10,900
Specification: 3.5 inch TFT touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 1,300 mAh battery, 5 MP rear camera, expandable memory up to 32 GB, 3G, WiFi

Swipe 9X
OS: Android 4.0; Launch Date: April 2013; MRP: ₹8,990; ESP: ₹8,544
Specification: 4.7-inch capacitive touchscreen, 854 x 480 pixels screen resolution, 1 GHz dual core processor, 2000 mAh battery, 8 MP rear and 2 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Croma CRCB2093
OS: Android 4.0; Launch Date: May 2013; MRP: ₹8,999; ESP: ₹8,990
Specification: 4.63 inch capacitive touchscreen display, 854 x 480 pixels screen resolution, 1 GHz dual core processor, 2000 mAh battery, 8 MP rear and 2 MP front camera, 4 GB internal storage, WiFi

Samsung Galaxy S4
OS: Android 4.2; Launch Date: May 2013; MRP: ₹41,500; ESP: ₹36,990
Specification: 5-inch Super AMOLED capacitive touchscreen, 1.6 GHz Octa-Core processor, 2600 mAh battery, 2 GB RAM, 13 MP rear and 2 MP front camera, 16 GB internal memory, expandable up to 64 GB, 3G, 4G, WiFi

Samsung Galaxy S II Plus I9105
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹24,000; ESP: ₹23,099
Specification: 4.3-inch Super AMOLED Plus display touchscreen, 1.2 GHz dual core processor, 1,650 mAh battery, 8 MP rear camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Spice Stellar Pinnacle Pro Mi 535
OS: Android 4.2; Launch Date: April 2013; MRP: ₹14,990; ESP: ₹14,990
Specification: 5.3 inch IPS LCD touchscreen, 1.2 GHz quad core processor, 2550 mAh battery, 8 MP rear camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Fly F45s
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹12,500; ESP: ₹9,713
Specification: 4.5-inch qHD IPS capacitive touchscreen, 540 x 960 pixels screen resolution, 1.2 GHz dual core processor, 1 GB RAM, 2050 mAh battery, 12 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G

Adcom Thunder A530
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹12,000; ESP: ₹9,990
Specification: 5.3 inch multitouch LCD touchscreen, 1.2 GHz processor, 512 MB RAM, 2800 mAh battery, 8 MP rear camera and 2 MP front camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi

Byond P1
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹11,999; ESP: ₹10,999
Specification: 5.3 inch capacitive multi-touch screen, 1 GHz processor, 2000 mAh battery, 8 MP rear camera and 1.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Lemon P100
OS: Android 4.0; Launch Date: April 2013; MRP: ₹10,999; ESP: ₹9,999
Specification: 5 inch capacitive touchscreen, 1 GHz dual core processor, 2500 mAh battery, 8 MP rear camera and VGA front camera, memory expandable up to 32 GB, 3G, WiFi

Lemon Attitude P101
OS: Android 4.0; Launch Date: April 2013; MRP: ₹9,499; ESP: ₹7,879
Specification: 4.3 inch IPS capacitive touchscreen, 480 x 800 pixels screen resolution, 1 GHz dual core processor, 1450 mAh battery, 8 MP rear and 0.3 MP front camera, expandable memory up to 32 GB, 3G, WiFi

Lava Iris 455
OS: Android 4.1 aka Jelly Bean; Launch Date: April 2013; MRP: ₹7,799; ESP: ₹7,290
Specification: 11.4-cm (4.5-inch) display touchscreen, 1 GHz dual-core processor, 1,500 mAh battery, 5 MP rear camera with flash, VGA front-facing camera, 512 MB RAM, 4 GB internal storage (2 GB usable), expandable up to 32 GB, 3G, WiFi, dual-SIM, Bluetooth, GPS, micro USB

Micromax A91 Ninja
OS: Android 4.0; Launch Date: April 2013; MRP: ₹8,999; ESP: ₹8,999
Specification: 11.4-cm (4.5-inch) TFT display touchscreen, 1 GHz dual-core processor, 1,800 mAh battery, 512 MB RAM, 5 MP rear and 0.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Spice Smart Flo Pace 2 Mi 502
OS: Android 4.0; Launch Date: April 2013; MRP: ₹8,990; ESP: ₹6,999
Specification: 5-inch WVGA display touchscreen, 480 x 800 pixels screen resolution, dual SIM 2G+2G, 1 GHz quad-core processor, 2,100 mAh battery, 5 MP rear camera, 1.3 MP front-facing camera, 512 MB RAM, 512 MB internal storage, expandable up to 32 GB via microSD, WiFi

Micromax A72 Canvas Viva
OS: Android 2.3; Launch Date: April 2013; MRP: ₹7,999; ESP: ₹6,499
Specification: 5 inch capacitive touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 2000 mAh battery, 256 MB RAM, 3 MP rear camera, memory expandable up to 32 GB, WiFi

Lemon P7
OS: Android 2.3 aka Gingerbread; Launch Date: April 2013; MRP: ₹5,199; ESP: ₹4,299
Specification: 3.5 inch (8.9 cm) IPS capacitive touchscreen, 640 x 960 pixels screen resolution, GSM 900/1800 and WCDMA 2100 MHz network support, 3.2 MP rear camera, expandable memory up to 32 GB, 3G

Sony Xperia Z
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹38,990; ESP: ₹37,990
Specification: 5 inch (12.7 cm) full HD touchscreen, 1080 x 1920 pixels screen resolution, 1.5 GHz processor, 2 GB RAM, 2330 mAh battery, 13 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Sony Xperia ZL
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹36,990; ESP: ₹35,490
Specification: 5 inch (12.7 cm) TFT capacitive touchscreen, 1080 x 1920 pixels screen resolution, 1.5 GHz processor, 2330 mAh battery, 13 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Xolo X1000
OS: Android 4.0; Launch Date: March 2013; MRP: ₹24,999; ESP: ₹19,999
Specification: 11.9-cm (4.7-inch) TFT LCD touchscreen, 720 x 1280 pixels screen resolution, 2 GHz Intel Atom processor, 1 GB RAM, 1,900 mAh battery, 8 MP rear camera, 1.3 MP front camera, 8 GB internal storage, expandable up to 32 GB, 3G, WiFi

Gionee Dream D1
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹17,999; ESP: ₹17,999
Specification: 4.65 inch HD Super AMOLED Plus display, 1280 x 720 pixels screen resolution, 1.2 GHz quad core processor, 1 GB RAM, 2100 mAh battery, 8 MP rear camera, 1 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Wicked Leak Wammy Passion Y
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹16,999; ESP: ₹16,999
Specification: 12.7-cm (5-inch) HD IPS display touchscreen, 1280 x 720 pixels screen resolution, 1.2 GHz quad-core processor, 1 GB RAM, 2,800 mAh battery, 8 MP rear and 2 MP front-facing camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi

HTC E1
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: Expected to be around ₹16,000; ESP: NA
Specification: 4.3 inch LCD display touchscreen, 800 x 480 pixels screen resolution, 1.15 GHz processor, 1 GB RAM, 2100 mAh battery, 5 MP rear camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi

WickedLeak Wammy Titan 2
OS: Android 4.2; Launch Date: March 2013; MRP: ₹14,990; ESP: ₹13,990
Specification: 13.4-cm (5.3-inch) qHD display touchscreen, 960 x 540 pixels screen resolution, 1.2 GHz MediaTek quad core processor, 1 GB RAM, 2300 mAh battery, 12 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Karbonn Titanium S5
OS: Android 4.1 aka Jelly Bean; Launch Date: March 2013; MRP: ₹11,990; ESP: ₹11,990
Specification: 5 inch qHD multi touch capacitive touchscreen, 540 x 960 pixels screen resolution, 1.2 GHz quad core processor, 1 GB RAM, 2000 mAh battery, 8 MP rear camera, 2 MP front camera, 3G, WiFi

Lava Iris 430
OS: Android 4.0; Launch Date: March 2013; MRP: ₹7,500; ESP: ₹6,000
Specification: 10.9-cm (4.3-inch) WVGA display touchscreen, 1 GHz dual-core processor, 512 MB RAM, 1,400 mAh battery, 5 MP rear and VGA front-facing camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Karbonn A6
OS: Android 4.0; Launch Date: March 2013; MRP: ₹5,990; ESP: ₹5,390
Specification: 4.0 inch IPS WVGA touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 512 MB RAM, 1450 mAh battery, 5 MP rear camera, memory expandable up to 32 GB via microSD

Micromax A35 Bolt Ninja
OS: Android 2.3; Launch Date: March 2013; MRP: ₹5,499; ESP: ₹4,249
Specification: 4 inch capacitive TFT touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 1500 mAh battery, 256 MB RAM, 2 MP rear camera, 512 MB internal memory, expandable up to 32 GB, WiFi

Karbonn A4
OS: Android 2.3; Launch Date: March 2013; MRP: ₹5,290; ESP: ₹4,685
Specification: 4 inch display touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 512 MB RAM, 3.2 MP rear camera, 512 MB internal memory, expandable up to 32 GB, WiFi

FOSSBYTES
Powered by www.efytimes.com

Debian 7.0 Wheezy released


To all those Debian fans who have been
patiently waiting for the latest version
of their favourite distro, the wait is
finally over! The much-awaited Debian
7.0 Wheezy is out! The new stable
version of Debian comes with some
really interesting features including
multiarch support, specific tools to deploy private clouds, an improved installer,
and a complete set of multimedia codecs and front-ends, which remove the need for
third-party repositories.
The announcement about the release was made in a blog post of the Debian
Project team. Elaborating on the multiarch support introduced in Debian 7, the post
said, "Multiarch support, one of the main release goals for Wheezy, will allow Debian
users to install packages from multiple architectures on the same machine. This means
that you can now, for the first time, install both 32- and 64-bit software on the same
machine and have all the relevant dependencies correctly resolved, automatically."
The installation process of Debian 7 has been simplified further. The
software can now be installed using software speech, which can help the
visually impaired who do not use a Braille device, in a significant way. The
installation system is available in 73 languages, and more than a dozen of them
are available for speech synthesis too. In addition, for the first time, Debian
supports installation and booting using UEFI for new 64-bit PCs (amd64),
although there is no support for Secure Boot yet.

Mozilla rolls out Firefox 21, making it available for download

For all those Firefox fans out there,


the latest version of the browser
is available for you to download.
Firefox 21, the latest open source
browser, is available for download
via Mozilla's FTP servers.
Windows users can access the US
English version of the 32-bit installer
folder and download the Firefox
Setup 21.0.exe file for their PCs. For
all those waiting for official news on
the browser, keep a close watch on
Mozilla's website for an update.
You can get a fair idea of what Firefox 21 offers if you look at its beta release
notes. Some of the features include an improved three-state UI for Do Not Track.
This means Firefox has now put the mechanism of auto-suggest in place, with which
the browser will suggest how to improve application start-up times. The browser
maker has reportedly used FHR (Firefox Health Report) to improve performance,
fix problems and allow users to see how their browsing experience on this version of
Firefox compares against earlier versions of it.
Mozilla has also included some graphics-related performance improvements
and bug fixes in the new browser version. Users can also find out the system
requirements for running Firefox 21 from Mozilla's official website. This is the
desktop version of the browser, which will soon be followed by its mobile variant.

NASA PCs in space to run Debian Linux!

Linux has finally reached space,


with NASA adopting it for the
laptops in the International Space
Station (ISS). In addition, the first
humanoid robot in space, named
R2, is powered by Linux.
This news was shared by Keith
Chuvala, a contractor at the United
Space Alliance, which manages the
Space Operations Computing (SpOC)
for NASA, and leader of the ISS's
Laptops and Network Integration
Teams. These computers that are used
by ISS astronauts will run Debian 6.
Earlier, Scientific Linux, a clone of
Red Hat Enterprise Linux (RHEL)
was being used for these computers.
While Linux has been on the
scene at the ISS ever since it was
launched, it was earlier only used
for NASA's ground operations and
never for the computers in space.

OpenStreetMap releases
new map editor

The OpenStreetMap (OSM) project


has released its new map editor
for those who contributed in the
creation of the latest version. The
map editor was first showcased
back in February 2013, and after a
long wait the release is finally out.
The new iD editor has been
developed with funds from
the Knight Foundation and,
interestingly, it does not require
Flash to run, which is a refreshing
change from the previous ones. The
tool has been written completely
in HTML5 and uses the D3
visualisation library.
The Foundation hopes that the
new map editor will enable them to
involve more contributors for the
crowd-sourced mapping service.
Apart from launching the editor, the
Foundation has also introduced a
funding appeal to raise money for
new hardware for its set-up.

You can now use BBM on Android!

Yes, you read that right. BlackBerry is now ready to share its very popular
service, BlackBerry Messenger, with the Android and iOS platforms. The
company shared its plans to make its mobile social network, BBM, available to
iOS and Android users this summer, with support planned for iOS6, and Android
4.0 or higher. Of course, the availability will be subject to approval by the Apple
App Store and Google Play.
For those who have been living
in caves, BBM is the instant mobile
messaging service that is considered
fast, reliable and engaging. After
the BBM service becomes available
for Android and iOS, it will be
able to broaden users' connections
to include friends, family and
colleagues on other mobile
platforms.
In the planned initial release, iOS
and Android users would be able to
experience the immediacy of BBM
chat, including multi-person chat,
as well as the ability to share photos and voice notes, and engage in BBM Groups,
which allows BBM customers to create groups of up to 30 people.

India gets worlds largest offshore IT training campus

India's capital, Delhi, has been


selected as the location for the world's
largest offshore IT training campus.
Set up by offshore training firm,
Koenig Solutions, this campus is
being claimed as the world's largest
offshore IT training facility.
The campus is spread over an
area of 1672 sq m (18,000 sq ft) and
includes 70 classrooms, 30 testing
stations and offers end-to-end solutions
that include accommodation, examination facilities, meals, daily transport, etc, all of
which form part of the course fee.
The company currently has five centres in the country -- at Goa, Shimla,
Dehradun, Delhi and Bengaluru. There are plans in place to expand this network
further to cover more such tourist centres, Koenig Solutions CEO and founder
Rohit Aggarwal said in a PTI report.
The company is looking to set up facilities in Chennai and Pune as well,
Aggarwal added. "IT training and certification today is a $20 billion industry
worldwide and the education tourism market in India alone could become $1
billion by 2020. Koenig's education tourism business model is unique and aims
to tap this growing market," he said.

OpenSUSE launches Linux for Education!

Here is some good news for Linux lovers. The OpenSUSE Education Team
has launched Li-f-e (Linux for Education) 12.3-1. This first release is based on
openSUSE 12.3 with all the official updates applied. Li-f-e incorporates the latest
stable versions of all popular desktop environments such as KDE, GNOME
and Cinnamon, and it includes a wide range of software catering to the needs
of everyone: a selection from the openSUSE Education repository, multimedia
from the Packman repository, development tools, KIWI-LTSP
(which allows normal PCs or diskless thin clients to network boot from
a server running Li-f-e), and a lot more. In a nutshell, everything you
need to make your computer useful is available right out-of-the-box as
soon as Li-f-e is installed on it.
As this edition is based on openSUSE 12.3, all the official 12.3 updates, repositories
from the build service and Packman can be used to install additional software and
keep it up to date.
The minimum hardware requirement is 1 GB of RAM and 15 GB of free disk
space. Installation from a USB stick will take about 40 minutes to complete, while
from a DVD it takes much longer.

Turn your TV into a Linux all-purpose PC!

We all have heard about Android TV sticks, using which you can
turn your regular TV into a smart TV, which can then run thousands
of Android apps. Now, those interested in Linux can run Ubuntu
on sticks built around the Rockchip RK3066 dual-core processor,
which can be bought for around $42.
There is also a customised version of Ubuntu called Picuntu
that has been in the works for quite some time. The solution basically
offers you a basic desktop Linux environment, which can be used on
devices like the UG802 or MK808 mini PC. However, if you want
something more advanced and feature-packed, then try Picuntu
home://io.
The Picuntu home://io edition is based on Picuntu RC3. It
comes with initial support for hardware accelerated graphics
using the RK3066 processor's Mali 400 graphics. However, the
biggest differentiator between this edition and a regular Picuntu
installation is that it comes with some extra goodies. The developer,
JustinTime4Tea, has reportedly added a number of apps that let
you quickly start using a TV box as a Linux-powered game console,
Web server, or all-purpose PC, reports Liliputing, a website
focused on news from the mobile computing space.

Now Google dumps custom Linux in favour of Debian

Debian is surely going places! First it took over NASAs PCs in space and now
Google has decided to ditch its custom version of Linux for Debian! The search
giant will be moving the default software for its rentable cloud servers to Debian.
The company announced the decision of making Debian the default image
type for its Compute Engine recently. So now, Linux OS GCEL (Google Compute
Engine Linux) will be replaced by Debian 6.0 and 7.0.
Commenting on the move, Jimmy Kaplowitz, Google site reliability engineer,
wrote, "We are continually evaluating other operating systems that we can enable
with Compute Engine. However, going forward, Debian will be the default image
type for Compute Engine."
Google has asked all developers to switch to Debian images instead of GCEL,
which as per Google FAQs is a Linux distribution using Debian packages found in
typical minimal Ubuntu distributions.
The two Debian versions on offer, 6.0 Squeeze and 7.0 Wheezy, differ in some
respects, though both have module loading and direct memory access
disabled for security purposes.

PayPal introduces Android SDK for developers

Though this news holds


great significance for
developers, online
shoppers who use PayPal
for payments may find
it equally useful. The
online payment processor,
PayPal, has introduced the
Android SDK for US-based developers starting
from May 15, 2013.
While the company launched the iOS SDK just two months back in March, the
Android version is being seen as an important move to cater to the growing needs of
smartphone users. The Android SDK will support Android 2.2 and higher versions.

The Android SDK will support in-app payments from both PayPal
accounts as well as credit cards. As stated by John Lunn, global director of
PayPal's Developer Network, "The new Android SDK is a native mobile
payments solution that integrates simply and seamlessly into your Android
apps, and removes payment friction so developers can focus on creating
amazing experiences."
The kit supports all iOS 5+ versions on all iPhone and iPad screen sizes and
resolutions. The developers would find it easy to integrate PayPal as it features
a simple UI. With card.io credit card scanning, customers don't require an
additional reader device. Also, PayPal Android SDK adopts a proof of payment
system, which means you need not worry about PCI compliance.

Developers and users can interact on Google Play

Google is doing its bit to ensure that the developer and end user gap is bridged
as much as possible. The search giant has said that from now on, all Google Play
developers will be able to reply to user reviews. This is being done with the aim
of strengthening ties between developers and users, which will help them work on
improving their apps.
Commenting on the move, Ellie Powers from the Google Play team
wrote in a blog post, "We're happy to announce today that all developers on
Google Play can now reply to user reviews. You can reply to user reviews
in the Google Play Developer Console, and your replies are shown publicly
below the corresponding user review on Google Play. Users receive an
email notification when you reply and can either reply to you directly by
email, or update their review if they choose to do so, though keep in mind
that users are not obligated to update their reviews. You can also update
your reply at any time."
This is not very new to Google. Back in June 2012, this feature was
introduced for top developers only. Now, after almost a year, the company has
extended it to all developers. The company has also shared some guidelines on
this matter with developers.


Now, Linux rocks on Chromebooks too!

Linux is leaving no stone unturned to
expand its horizons. The latest kernel
version, 3.9, comes with an
interesting facet: compatibility
with none other than Google Chromebooks that run on the
Chrome operating system. This is the first time that Linux coding has been
embedded keeping the Chromebooks in mind, which clearly highlights the
importance of these notebooks for the Linux community.
Every Linux user knows that a kernel is integral to the working of an open
source operating system. It is mainly used for tasks like communicating with
hardware and managing resources. Android also uses the kernel version just like
the Chrome OS. But the difference in the kernel is what separates Android from
the Chrome OS platform. To run Linux applications on Chromebook requires a
higher-end kernel version. Recently, Ubuntu users managed to run the platform
on Chromebooks using a customised version of the distro called ChrUbuntu.
But from now on, users need not bother with customising distros as the drivers
required to use a Chromebook are being bundled into the 3.9 kernel.
june 2013 | 25

FOSSBYTES

Google Play gets a fresh look

If you are a fan of the Google Play Store, here's news for you. The Store has
got a revamped look and the search giant has
rolled it out in the Indian market too. The
latest version is 4.0.27. Those who have
not yet installed the Play Store by using
an APK can expect it to reflect on their
Android device anytime now.
The redesigned store was first
launched in the US almost two
months back. Since then, the search
giant has been fixing bugs and has
also had two more minor releases in
preparation for the mega release.

Linux Kernel 3.9 released

If you are a Linux enthusiast looking for the latest update to your favourite
OS, here is a piece of news for you. Linus Torvalds has released the latest
kernel version, Linux 3.9, 10 weeks after Linux 3.8 made its way to users.
Compared to the previous kernel,
the latest version comes with a device
mapper target that allows users to set
up an SSD as a cache for hard disks
to boost performance. Also included
in the kernel is support for multiple
processes that wait for requests on the
same port. This is a feature that will
allow the processes to distribute server
work better across multiple CPU cores.
The kernel version also brings KVM
virtualisation on ARM processors, and
RAID 5 and 6 support has been added
to Btrfs's existing RAID 0 and 1
handling.


JBoss is now WildFly

With the scope of JBoss widening beyond an application server, it was time to
keep pace with the changing times. So Red Hat has come up with a new name for
it: WildFly. This development comes after Red Hat announced its plan last year
to rename its widely used Java server. Many were quite surprised by the
decision back then, as the company
created the JBoss brand
name for open source
applications when it was
launched in 1999.
However, it is a well-known fact that the name JBoss
covers the community
portal JBoss.org, the
application server itself
and one of Red Hat's middleware products. The company decided to implement a
strategy of clear differentiation, as it has already done for Fedora and Red Hat
Enterprise Linux (RHEL). It is hoped that the change in branding will not lead to
confusion among customers.
The JBoss application server will be known as WildFly from now on. The name
emerged from a survey and voting process conducted late last year, and is
meant to reflect the server's extremely agile, lightweight, untamed and truly
free nature.

Fedora 18 based Korora 18 Flo Linux distro released

Korora 18 Flo, which is based on Fedora 18, has now been released. Back in 2010,
the Linux distribution from Australia moved over to Fedora. And now, the Korora
distro gets Korora 18, Flo. According to the developers, this is the final
release of the distro, as no major issues were found during beta testing.
Korora 18 comes in two flavours, with GNOME and KDE desktops.
Both flavours have an Adobe Flash plug-in, experimental support for the
Valve Steam gaming client, VLC as the default media player and Firefox as the
default browser. Users can install third-party software, as installation
sources for Chrome, RPMFusion and VirtualBox have been configured.

Turn Raspberry Pi into a powerful computer for robotics

Launched by Roboteq on Kickstarter, RIO (Raspberry IO) is a Raspberry Pi-based
robot navigation computer project aimed at creating an intelligent I/O card
that stacks over the $35 Raspberry Pi Linux single board computer. The RIO
card includes a set of I/O and connectivity features.
Roboteq, the industrial partner in this project,
said it would manufacture, market and sell the RIO
card worldwide. The RIO card and the Raspberry Pi combine to create a
powerful embedded robot navigation computer that is remarkable for its size,
power consumption and price. Added to Roboteq's extensive offering of motor
controllers, the RIO-based computer opens a world of applications in sea, land
or airborne unmanned robotic vehicle applications, as well as in more traditional
automation and machine control systems.

The Raspberry Pi
is Fuelling a D-I-Y Revolution
Priced easy on the pocket and small enough to fit into it, the Raspberry Pi is truly
creating a revolution in hands-on learning and lunchroom innovation. Students, hobbyists and
even scientists are grabbing the Pi and doing some amazing things with it: devices that can
feed cats, water plants, guard the home, monitor pollution, improve food production and more.
People are carrying it around in funky cases, using it as a computer or to add more snazzy
features to devices they see around them. Here are some such Raspberry Pi-based open source
innovations that you could hack into too.

Complete ecosystem for the Internet of Things

http://pinocc.io/
Pinoccio is a complete, open source hardware platform for
the Internet of Things (IoT). Developed by Eric Jennings
and Sally Carson, and launched in March 2013, Pinoccio
is a wireless, Web-ready microcontroller with Wi-Fi, a
lithium-polymer battery, a built-in radio and the required
application programming interface (API). The board is
Arduino-compatible, so you can use the Arduino IDE
too for programming it. Pinoccio will help you to easily
develop smart devices for the IoT era, where everything
from public infrastructure to household gadgets is
becoming connected and intelligent.
The open twist: Pinoccio is completely open source,
and the designs and code are available on GitHub (https://
github.com/Pinoccio). Like the Arduino Mega, this board
is also based on the ATmega128RFA1, but it has a much
smaller footprint and also includes a built-in radio. All
firmware files including bootloaders and app hex files;
hardware reference files including schematics, board layout
and datasheets; and Wi-Fi hardware and shield references
are available online. The Eagle library is also available,
with landing patterns and silkscreens of the devices
used on the Pinoccio boards. The board features 256 K
Flash, 32 K SRAM and 8 K EEPROM; hardware support for
wireless, over-the-air programming; 2.4 GHz connectivity
using the 802.15.4 wireless standard; and Web connectivity
via an available Wi-Fi shield. It also has sufficient I/O pins
and ports, including a micro USB port for charging and
programming, a Li-Po rechargeable battery (550 mAh) and
the ability to check battery voltage from the microcontroller
via a low-power MOSFET-based circuit.

Water the plants just right, with H2O IQ

http://blog.valkyriesavage.com/blog/2013/01/18/h2o-iq/
Here is an interesting device born out of a design course
project! When a group of students taking a course at the
Citris Invention Lab, University of California, Berkeley, had
to invent a smart device as part of their course, they decided
to build H2O IQ, which could automatically water plants in a

garden. The goal was to build a device that could understand


the moisture requirements of a species and provide just
enough water at just the required times, so that the plant
would be healthy and, at the same time, water would be
conserved. However, when they discussed their project with

Innovation
potential users, they found that people actually enjoyed
spending time in their garden, so they reduced the scope of
the device to simply monitoring and informing the user when
watering is required, so that they could still attend to their
plants on their own. However, when the user is not available,
the device may be programmed to water the garden automatically.
The open twist: H2O IQ is a completely open source
device with all design information available online at
https://github.com/valkyriesavage/fluffy-toboggans. The
3D printed device is to be placed in the soil right next to
the plant. At present, the prototype is programmed only
for tomato plants, but its scope will be broadened. The
device has a moisture sensor built with two galvanised
nails submerged in plaster. A solar panel printed on the top
charges the small battery that powers the device. H2O IQ
also has an XBee radio on-board, which communicates
moisture readings to a Raspberry Pi located at the edge of
the garden. In order to conserve power, the microcontroller
wakes up just once an hour for this transmission. When
the device powers on, the Pi uses the opportunity to update
the watering instructions if there is a new user request, and
the ideal watering curve is programmed in it. The Pi also
acts as a Web server, feeding information to a page with a
Google Stocks powered display of historical moisture data
and the ideal readings for the selected species of plant.
Users can access this site online to view alerts, to set up
auto watering, or to reprogram the device in other ways.

Turn your Raspberry Pi into a robot

http://www.dexterindustries.com/BrickPi/
Dexter Industries is a company that develops robotic sensors
for LEGO Mindstorms NXT that are easy and fun to use,
yet capable of being used in real-life applications. Its new
project, BrickPi, is an open source slide-on board for the
Raspberry Pi that helps you connect LEGO blocks, sensors,
motors and parts to easily turn the compact Raspberry Pi into
a powerful robot. You can connect up to three motors and up
to four sensors, both digital and analogue.
The open twist: The BrickPi is based on the Atmel
Atmega 328, and is powered by Arduino. Complete
instructions on building, setting up and using the BrickPi
are available online. The system consists of easily
available components listed in the bill of materials, a
circuit board, and a spot of laser cutting for which you
might need to use an external service. The Eagle board
and schematic are available on the Github repository
(https://github.com/DexterInd/BrickPi), and so are the
CAD files for the top and bottom parts of BrickPi's
acrylic case. Assembling the BrickPi is relatively easy.
While a little surface mount soldering is required,
most of the parts are through-hole. Once you have all
this set, you need to flash up the firmware. There is,
however, no USB communication for the Arduino and
it must be programmed using an ISP programmer, such
as the AVRISP mkII from Atmel. Finally, you need to
prepare your BrickPi for operation by setting up I2C
communications and Wi-Fi, and running the Python
scripts. Then, you are all set to start using the robot! If
you love it, you could even consider helping the team:
its website reports that it needs some better Python code
for controlling the BrickPi.

Knowing the air quality in your neighbourhood

http://www.lvaqi.org/2013/
Little Village Air Quality Initiative is an interesting open
source project that aims to measure and visualise the air
quality in an area. The project is still under development,
but the concept is interesting and definitely inspirational:
worth helping with, or giving an independent try. The LVAQI uses
multiple sensors programmed to collect data on the levels
of carbon monoxide and nitrogen dioxide, temperature,
humidity, etc, in real-time. Visualisations written in C++
using openFrameworks will be used to convert the data into
graphical representations. There will be multiple Raspberry Pi
computers with displays set up in public locations to display
the air quality visualisations to help people understand the
levels of pollution better.

The open twist: The board has been designed


using Eagle CAD software and made with a CNC
engraving machine. It is now ready for silk-screening
and manufacture, and will be released as open hardware.

The platform has been assembled and is ready to collect
data. In the current stage, the sensor platform is capable
of reading temperature, humidity, and the levels of
carbon monoxide, tropospheric ozone, and combustion
gas. All gases are measured in parts per million (ppm).
There will be a total of five air quality sensor boards
that will communicate to a server to deposit the data.
The developer is still working on the visualisations, after
which the system will be ready for deployment.

Making the Khan Academy more accessible

http://pi.mujica.org/
The Khan Academy is becoming one of the most
acclaimed online content libraries, with many students
trusting its lessons more than they trust their teacher.
The simple and easy to understand lessons with graphic
representations of almost all concepts spanning fields
from physics to business are indeed a boon even for
adults who want to learn more. If you ever felt that the
Khan Academy would be more useful in remote areas
than in urban centres, you'll be happy to hear of KA-Pi,
a project that aims to take Khan Academy content to
students who have no Internet connectivity.
The open twist: KA-Pi is a simple, plug-and-play
server solution to play the Khan Academy videos
where no Internet access is available. By using a green,
low-power, small Raspberry Pi computer, the solution
becomes all the more accessible. You just need to

download the KA-Pi card image, burn it to a 16 GB SD


card and plug the card into your Raspberry Pi. The card
image includes the operating system, a Web server and
lightweight content, making it a truly plug-and-play
system. The content can even be streamed to a local
network. The content, from Khan Academy on a Stick,
comprises simple MP4 encoded videos that will play on
browsers, iPads, iPhones and Android devices without the
use of Flash. Khan Academy on a Stick content is open
sourced using a Creative Commons licence.

Ensuring local, clean and sustainable food production

http://hapihq.com/
One often wonders how technology can help solve
social and economic problems, and one of the biggest
such problems is that of the availability of food. Can
technology, especially open source technology, help
improve food production and ensure better nutrition for
all? The Hydroponic Automation Platform Initiative
(HAPI) led by Tyler Reed with sponsorship from the
Human Services Research and Technology Institute,
Washington, is an attempt towards this end. Hydroponic
irrigation is a way of growing food using the minerals
in water, without soil. It is a lesser known technique,
but definitely easier to adopt in urban areas. So, by
providing a complete connected platform with automation
modules, structural designs, a clean seed network and a
best practice application, HAPI hopes to promote urban
hydroponics, thereby improving collective yield and
ensuring sustainable food production.
The open twist: HAPI is the first worldwide open
source initiative for developing scalable hydroponic and
aquaponic structures and automation systems (http://
sourceforge.net/p/hydroplatform/wiki/Home/). The project
is still in the fund-raising and planning stage. There is,
however, a spot of work going on concerning individual
components and modules. After zeroing in on the hardware
modules and management software functions, the team will
be looking at creating firmware to control lighting, feeding,
pH levels, TDS/eC levels and possibly complete reservoir
refreshes. They will be exploring connectivity between
HAPI-based systems and the outside world, e.g., system
control via ssh or the Web. The project is seeking urban
collaborators to develop rooftop hydroponic farms, as well
as developers familiar with Arduino and Raspberry Pi.
They will be raising funds on KickStarter.

By: Janani Gopalakrishnan Vikram


The author is a technically-qualified freelance writer, editor and
hands-on mom based in Chennai.


OSFY Classifieds
Classifieds for Linux & Open Source IT Training Institutes
IPSR Solutions Ltd.

WESTERN REGION

SOUTHERN REGION

Linux Lab (empowering linux mastery)


Courses Offered: Enterprise Linux
& VMware

*astTECS Academy
Courses Offered: Basic Asterisk Course,
Advanced Asterisk Course, Free PBX
Course, Vici Dial Administration Course

Courses Offered: RHCE, RHCVA,


RHCSS, RHCDS, RHCA,
Produced Highest number of
Red Hat professionals
in the world

Address (HQ): 1176, 12th B Main,


HAL 2nd Stage, Indiranagar,
Bangalore - 560008, India
Contact Person: Lt. Col. Shaju N. T.
Contact No.: +91-9611192237
Email: info@asterisk-training.com
Website: www.asttecs.com,
www.asterisk-training.com

Address (HQ): Merchant's


Association Building, M.L. Road,
Kottayam - 686001,
Kerala, India
Contact Person: Benila Mendus
Contact No.: +91-9447294635
Email: training@ipsrsolutions.com
Branch(es): Kochi, Kozhikode,
Thrissur, Trivandrum
Website: www.ipsr.org

Advantage Pro
Courses Offered: RHCSS, RHCVA,
RHCE, PHP, Perl, Python, Ruby, Ajax,
A prominent player in Open Source
Technology

Linux Learning Centre


Courses Offered: Linux OS Admin
& Security Courses for Migration,
Courses for Developers, RHCE,
RHCVA, RHCSS, NCLP

Address (HQ): 1 & 2 , 4th Floor,


Jhaver Plaza, 1A Nungambakkam
High Road, Chennai - 600 034, India
Contact Person: Ms. Rema
Contact No.: +91-9840982185
Email: enquiry@vectratech.in
Website(s): www.vectratech.in

Address (HQ): 635, 6th Main Road,


Hanumanthnagar,
Bangalore - 560 019, India
Contact Person: Mr. Ramesh Kumar
Contact No.: +91-80-22428538,
26780762, 65680048 /
+91-9845057731, 9449857731
Email: info@linuxlearningcentre.com
Branch(es): Bangalore
Website: www.linuxlearningcentre.com

Address (HQ): 1104, D Gold House,


Nr. Bharat Petrol Pump, Ghyaneshwer
Paduka Chowk, FC Road, Shivajinagar
Pune-411 005
Contact Person: Mr.Bhavesh M. Nayani
Contact No.: +020 60602277,
+91 8793342945
Email: info@linuxlab.org.in
Branch(es): coming soon
Website: www.linuxlab.org.in
Linux Training & Certification
Courses Offered: RHCSA,
RHCE, RHCVA, RHCSS,
NCLA, NCLP, Linux Basics,
Shell Scripting,
(Coming soon) MySQL
Address (HQ): 104B Instant Plaza,
Behind Nagrik Stores,
Near Ashok Cinema,
Thane Station West - 400601,
Maharashtra, India
Contact Person: Ms. Swati Farde
Contact No.: +91-22-25379116/
+91-9869502832
Email: mail@ltcert.com
Website: www.ltcert.com

NORTHERN REGION
GRRAS Linux Training and Development Center
Courses Offered: RHCE, RHCSS, RHCVA,
CCNA, PHP, Shell Scripting (online training
is also available)
Address (HQ): GRRAS Linux Training and
Development Center, 219, Himmat Nagar,
Behind Kiran Sweets, Gopalpura Turn,
Tonk Road, Jaipur, Rajasthan, India
Contact Person: Mr. Akhilesh Jain
Contact No.: +91-141-3136868/
+91-9983340133, 9785598711, 9887789124
Email: info@grras.com
Branch(es): Nagpur, Pune
Website(s): www.grras.org, www.grras.com

Duestor Technologies
Courses Offered: Solaris, AIX,
RHEL, HP UX, SAN Administration
(Netapp, EMC, HDS, HP),
Virtualisation(VMWare, Citrix, OVM),
Cloud Computing, Enterprise
Middleware.
Address (H.Q.): 2-88, 1st floor,
Sai Nagar Colony, Chaitanyapuri,
Hyderabad - 060
Contact Person: Mr. Amit
Contact Number(s): +91-9030450039,
+91-9030450397.
E-mail id(s): info@duestor.com
Website(s): www.duestor.com

Eastern Region
Academy of Engineering and
Management (AEM)
Courses Offered: RHCE, RHCVA,
RHCSS,Clustering & Storage,
Advanced Linux, Shell
Scripting, CCNA, MCITP, A+, N+
Address (HQ): North Kolkata, 2/80
Dumdum Road, Near Dumdum
Metro Station, 1st & 2nd Floor,
Kolkata - 700074
Contact Person: Mr. Tuhin Sinha
Contact No.: +91-9830075018,
9830051236
Email: sinhatuhin1@gmail.com
Branch(es): North & South Kolkata
Website: www.aemk.org

Overview

Developers

A Bird's Eye View of Android System Services
Android is gaining unprecedented momentum in the smartphone and tablet market. This
huge success is usually attributed to factors like the open ecosystem, the choice of Java
as the application programming language, the choice of the Linux kernel, etc. A lesser
known fact is the existence of an application framework that has feature-rich APIs, which
help in developing cool applications in no time, besides being customisable to a great
extent. At the heart of the Android framework are the system services that manage almost
everything in the Android platform. This article attempts to provide a bird's eye view of the
system services in Android.

In Android applications, services are typically used to


perform background operations that take a considerable
amount of time. This ensures faster responsiveness to
the main thread (a.k.a. the UI thread) of an application,
with which the user is directly interacting. The life cycle
of the services used in applications is managed by the
Android Framework, i.e., these services have startService(),
bindService() and stopService() calls that are called when
an activity (or some other component) starts, binds or stops a
service. The Android system might force-stop a service when
available memory is low; http://developer.android.com/guide/
components/services.html gives a detailed description of the
Android service, which is an application component.

System services

The system services play a key role in exposing the low-level functions of the hardware and the Linux kernel to the
high-level applications. The system services live from boot
to reboot, i.e., the entire life of the system. There are about 70
system services in the Jelly Bean release of Android. Table
1 shows a list of some of the system services, the names of
which are self-explanatory. It would easily take pages to

explain the functionality of each one of these, which is beyond


the scope of this article.
ActivityManagerService (~14500 LOC) | LightsService
AlarmManagerService | NetworkManagementService
BackupManagerService | NotificationManagerService
BatteryService | StatusBarManagerService
ConnectivityService | VibratorService
CountryDetectorService | PackageManagerService (~10500 LOC)
DevicePolicyManagerService | PowerManagerService (~2500 LOC)
DeviceStorageMonitorService | WifiService
LocationManagerService | WindowManagerService (~11000 LOC)

Table 1: A list of some Android system services (LOC stands for lines of code)

Figure 1: Android architecture

Figure 2: System services startup

frameworks/base/services/java/com/android/server/
provides the path of the Java source files for these
services. Most of these services have a JNI counterpart,
through which these services talk to the lower layers like
the kernel. In an Android phone, this list of services can
be viewed by issuing the command service list through
the adb shell.

System services start-up


After boot-up, the Linux kernel starts the init process,


which is the grand-daddy of all processes in the system,
with a PID = 1. The following steps explain the start-up
procedure until the Home screen shows up:
1. A set of native services (written in C/C++) like vold,
netd, installd and debuggerd are started. These services
have root permission.
2. Then the Service Manager and Media Server are
started. These run with the system's permissions.
3. init then starts the app_process (frameworks/base/
cmds/app_process/app_main.cpp). This launches the
zygote process, in a Dalvik VM via frameworks/
base/core/jni/AndroidRuntime.cpp and frameworks/
base/core/java/com/android/internal/os/ZygoteInit.
java. This zygote process then becomes the parent of
every application.
a. When the zygote starts, it has an active instance of
the Dalvik VM that contains the classes required by
the applications pre-loaded in its address space. In
Android, each application runs its own Dalvik VM.
When an application is started, zygote forks a child
process (which is the app process); both the zygote
and the app process share the same VM pages
until the child performs a write (modifies the data),
at which point the affected pages are copied for
the child (copy-on-write). This ensures a faster start-up time for
applications. Thus, the zygote becomes the parent of
all the applications.
b. We can actually check this in the adb shell: the
commands ps | grep zygote and ps | grep com.android
will reveal this relationship.

Note: Most of the details that we have discussed here are not
documented very well at developer.android.com. This article is my
understanding of the system, gained from browsing through the
source code, so feel free to send me clarifications or corrections.
Next month, we will discuss the skeleton of a typical system service.


4. The zygote starts the system_server process. This code is
in the /init.rc file in an Android phone:
service zygote /system/bin/app_process -Xzygote /system/bin
--zygote --start-system-server

5. This system_server process then starts all the system services.


These services run as threads inside the system_server
process. This can be verified by running the following
command in the adb shell. This command lists the names of
the threads running as part of the system_server process.
$: cat /proc/`pidof system_server`/task/*/comm
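The same /proc interface can be explored with a few lines of Python on any Linux host; as an illustration, the sketch below lists the thread names of the current process rather than system_server:

```python
import glob
import os

def thread_names(pid):
    """Read the name of every thread of a process from /proc.

    Each thread of a process appears under /proc/<pid>/task/<tid>/comm,
    which is the same interface used above to inspect system_server."""
    names = []
    for path in glob.glob("/proc/%d/task/*/comm" % pid):
        with open(path) as f:
            names.append(f.read().strip())
    return names

names = thread_names(os.getpid())
print(len(names) >= 1)  # at least the main thread is always listed
```

On an Android device, running the same logic against the PID of system_server would print one name per system service thread.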

The system server code is in frameworks/base/services/
java/com/android/server/, in the file SystemServer.java.
6. One of the services is the ActivityManagerService,
which as a final step of its initialisation, starts the
Launcher application (which brings up the Home
Screen) and sends the BOOT_COMPLETE Intent. This
intent signals to the rest of the platform that the system
has completed boot up.
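The zygote's fork-and-share behaviour described in step 3 can be mimicked with a small Python sketch. This is a rough analogy using os.fork on a Linux host, not Android code; the preloaded list is an invented stand-in for the classes zygote preloads:

```python
import os

# Zygote-style preloading: the parent loads shared state once; every
# forked child inherits these pages copy-on-write, so start-up is fast.
preloaded_classes = ["Activity", "Service", "BroadcastReceiver"]

pid = os.fork()
if pid == 0:
    # Child (the "app process"): the preloaded data is already in memory,
    # with no extra loading work needed.
    ok = "Activity" in preloaded_classes
    os._exit(0 if ok else 1)

# Parent (the "zygote"): wait for the child and check what it saw.
_, status = os.waitpid(pid, 0)
child_saw_preload = (os.WEXITSTATUS(status) == 0)
print(child_saw_preload)
```

As in Android, the child only gets private copies of the pages it actually modifies; everything it merely reads stays shared with the parent.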
By: R Durgadoss
The author is a kernel programmer and spends his spare time
hacking Android. You can reach him at r.durgadoss@gmail.com

CodeSport
Sandya Mannarswamy

This month's column continues the discussion on data storage systems,
with a focus on how file systems remain consistent even in the face of
errors occurring in the storage stack.

Last month, we discussed the concepts of
scale-up and scale-out storage and their relative
merits. One of the most important parts of a
storage stack is the file system. In Linux, there are a
number of popular file systems like ext3, ext4, btrfs,
etc. File systems hide the complex details of the
underlying storage stack such as the actual physical
storage of data. They act as containers of user data
and serve any IO requests from user applications.
File systems are complex pieces of software.
Traditionally, they have been a kernel component.
Different file systems offer different functionalities
and performance. However, their interactions with
user applications have been simplified through the
use of the Virtual File System (VFS), which is an
abstract layer sitting on top of concrete file system
implementations like ext3, ext4 and ZFS. The client
application can program to the APIs exposed by the
VFS and does not need to worry about the internals
of the underlying concrete file systems. Of late,
there has been considerable interest in developing
user space file systems using the FUSE module
available in mainstream Linux kernels (fuse.
sourceforge.net/). User-space file systems can be
created by using the kernel FUSE module, which
intercepts the calls from the VFS layer and redirects
them back to user-space file system code.
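The VFS idea of programming against one interface over many concrete file systems can be sketched in a few lines of Python. This is a toy analogy with invented class and method names, not kernel code:

```python
from abc import ABC, abstractmethod

class VFSNode(ABC):
    """The generic interface clients program against (the VFS analogy)."""
    @abstractmethod
    def read(self) -> bytes: ...

class Ext4File(VFSNode):
    """Stand-in for a concrete in-kernel file system implementation."""
    def __init__(self, data: bytes):
        self._data = data
    def read(self) -> bytes:
        return self._data

class FuseFile(VFSNode):
    """Stand-in for a user-space (FUSE-style) implementation."""
    def read(self) -> bytes:
        return b"served from user space"

def cat(node: VFSNode) -> bytes:
    # Client code: works with any concrete file system via the common API,
    # never touching the implementation details.
    return node.read()

print(cat(Ext4File(b"hello")) == b"hello")
print(cat(FuseFile()) == b"served from user space")
```

The point of the sketch is that cat() never changes when a new "file system" is added, just as applications calling read() never care whether ext4 or a FUSE file system serves the data.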
In this month's column, we look at the
challenges in ensuring the safety and reliability of
data in file systems.

Keeping data safe

Given the massive importance and explosion of data
in this era of data-driven businesses, file systems


are entrusted with the important responsibility of
keeping data safe, correct and consistent, forever,
while ensuring high accessibility to the data.
Data loss is unthinkable, while unavailability of
data even for short durations can have disastrous
consequences on businesses. While data has been
growing at an exponential rate, the reliability of
the file systems, which act as data containers, has
not matched the demands of always-on, always-consistent
data. Unfortunately, storage systems do
fail, and the manner in which they fail is complex:
they can exhibit partial failures, failures which are
recognised much after they occur, failures which are
transient, and so on. Each of these different types of
failures imposes different reliability requirements
on file systems, which need to detect, handle and
repair such failures.
Figure 1 shows a simple model of the storage
system, wherein the bottom-most layer is the
magnetic media on which the data is physically
stored, and the top-most layer is the generic file
system with which the client application interacts.
Each block of the storage system can fail in
different ways, leading to data unavailability and
inconsistency. As the data flows through the storage
stack, at each point it is vulnerable to errors. An
error can occur in any of the layers and propagate
to the file system. This results in inconsistent or
incorrect data being written or being read by the
user through file system interfaces.
At the bottom-most layer, disks can fail in
complex ways. It is no longer possible to make



Figure 1: A simplified view of the storage stack, from the generic file
system at the top, through the specific file system, generic block I/O,
device driver and device controller on the host, across the transport,
down to the disk with its cache, firmware, electrical, mechanical and
media layers

the simple assumption that disks either work or fail when
building reliability mechanisms for file systems. While
all-or-nothing disk failures are easier to understand and protect
against, the more insidious nature of disk failures lies in partial
failures, such as:
Latent sector errors where a disk block or set of blocks
become inaccessible
Silent disk block corruption
Disk firmware bugs can result in misdirected or torn
disk writes
As data flows from disk through transport (such as the
SCSI bus or the network), errors in the transport layer can
result in incorrect data being propagated to the host. Next in
the chain above are the hardware controller and device driver
software present in the host, which are again potential sources
of errors. Device drivers can be quite complex and can have
bugs which lead to silent data corruption. The next component
in the chain is the generic block IO interface of the operating
system, which again is quite complex and can contain latent
bugs that could result in data corruption.
While these errors are external to the file systems, the
file system itself is a complex piece of code of millions
of lines, prone to insidious software bugs in its own code,
which lead to corruption and inconsistency. Such internal
file systems errors are difficult to find and fix during
development and can remain latent for a long time, only to
rear their heads in production, leading to inconsistent file
systems or loss of data to the end user. Given this potential


for error arising in each part of the storage stack, file


systems have the enormously complex task of ensuring
the highest degree of data availability to the user.
While failures can result in any data block becoming
unavailable or inconsistent, the impact of failures of
certain blocks that contain critical file system meta-data
can result in the entire file system becoming unavailable,
resulting in non-availability of vast amounts of data from
the end-user perspective. Hence, certain data such as file
system meta-data needs to have much higher levels of
data integrity than application data. Therefore, the file
system needs to have special mechanisms in place for the
following reasons:
To prevent the corruption of meta-data
In the event of corruption, to be able to detect any such
corruption quickly and recover while in operation
In the worst case event of a crash, achieve a
consistent file system state quickly without excessive
downtime of data to the users
In common-usage English, the term resilience
means the ability to recover quickly from an illness or
a misfortune. File systems need to be resilient to errors,
no matter where the error occurs in the storage stack. A
resilient file system needs to be able to have mechanisms to
detect errors or corruptions in meta-data and fix or recover
from such errors wherever possible. And in the event of
unavoidable crashes, have the ability to return a consistent
state without excessive downtime. However, this is not as
simple a task as it seems. Detecting faults in file system
metadata can be a very complex process, involving the
following steps:
0. Make the assumption that the disks are perfect and no
errors occur
1. Check error codes in lower level components in the
storage stack below the file system
2. Perform sanity checks in the file system such as the use
of magic numbers and header information for important
file system data structures
3. Check-summing of metadata
4. Detect failures through metadata replication
Checksums have been employed widely in storage systems to detect corruption. Checksums are block hashes computed with a collision-resistant hash function and are used to verify data integrity. For on-disk data integrity, checksums are stored or updated on disk during write operations, and read back to verify the block or sector contents during reads.
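The write-time/read-time checksum cycle described above can be sketched in a few lines of Python. This is only an illustrative toy (the block contents and helper names are invented for the example), not how any particular file system stores its checksums:

```python
import hashlib

def checksum(block: bytes) -> str:
    # Collision-resistant hash of the block contents
    return hashlib.sha256(block).hexdigest()

def verify(block: bytes, expected: str) -> bool:
    # At read time: recompute the hash and compare with the stored value
    return checksum(block) == expected

# At write time: store the checksum alongside the block
block = b"superblock: inodes=1024, free=512"
stored_sum = checksum(block)

assert verify(block, stored_sum)          # intact block passes
corrupted = b"superblock: inodes=1024, free=999"
assert not verify(corrupted, stored_sum)  # silent corruption is detected
```

Note that this only detects that the stored checksum and the block disagree; as the article goes on to explain, it cannot by itself catch a lost write, where both the block and its checksum were simply never updated.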
Many storage systems, such as ext4, ZFS, etc., use checksums for on-disk data integrity. Checksumming, if not carefully integrated into the storage system, can fail to protect against complex failures such as lost writes and misdirected writes. While checksums are typically useful in detecting issues in components below the file system, such as the storage controller, which can lead to missing or buggy writes, they cannot detect errors that originate in the file system code. So, given the possibility that file systems can become inconsistent due to errors occurring either outside the FS in the storage stack or due to buggy code in the FS itself, how do we check the consistency of the file system and, if it is not consistent, bring it to a consistent state?

File system consistency checking

FSCK, or file system consistency check, is a utility that is traditionally used to check the consistency of the file system; if inconsistencies are found, it can repair them automatically or, in certain cases, with the help of the user. Windows users would know it by its avatar, chkdsk.
File system inconsistencies can arise due to: (a) an unclean shutdown of the file system, typically caused either by power failure or by the user not following proper shutdown procedures; or (b) hardware failures, which leave the file system meta-data on the disk inconsistent. Allowing a corrupted file system to be used can lead to further inconsistencies and, in certain cases, even to permanent data loss. Hence, when systems are brought back online after a crash, operators typically run fsck before the file system is brought online and user I/O operations are allowed on it.
FSCK independently builds its knowledge of the structure and layout of the file system from the various data structures on disk, and corroborates it with the summary/computed information maintained by the file system. If the two pieces of information don't match, an inconsistency is detected and FSCK tries to repair it. If automatic repair is not possible, the problem is reported to the user. A good overview of FSCK can be found at http://lwn.net/Articles/248180/.
Here is a question to our readers. Do all file systems
need a FSCK utility? For instance, there are file
systems that support journaling or write-ahead logging,
wherein all changes to metadata are first logged on to
a journal log on persistent media before the metadata
itself is updated. File system inconsistencies created by
partial meta-data writes resulting from a sudden crash
while operations are in mid-flight are addressed by
means of the file system journal log, which can replay
the log on restart and recover to a consistent state. Do
such journaling file systems need FSCK?
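The write-ahead logging idea posed in the question above can be illustrated with a toy sketch. This is deliberately simplified (a real journal logs block images or physical updates to persistent media, not a Python list), but it shows why replaying the log restores consistency after a crash without a full metadata scan:

```python
# Toy write-ahead log: each metadata update is appended to the
# journal *before* it is applied in place.
journal = []                       # append-only log (stands in for persistent media)
metadata = {"size": 0, "blocks": 0}

def journaled_update(key, value):
    journal.append((key, value))   # 1. log the intent first
    metadata[key] = value          # 2. only then update the metadata

journaled_update("size", 4096)
journaled_update("blocks", 1)

# Simulate a crash that loses the in-place updates mid-flight
metadata = {"size": 0, "blocks": 0}

# Recovery on restart: replay the journal in order
for key, value in journal:
    metadata[key] = value

print(metadata)  # {'size': 4096, 'blocks': 1}
```

Replay brings the metadata back to a consistent state, which is why journaling file systems can often skip a full FSCK after an unclean shutdown; it does not, however, protect against corruption of the journal itself or bugs in the file system code, which is where a consistency checker still earns its keep.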

My must-read book for this month

This month's must-read book suggestion comes from one of our readers, Sonia George. She recommends In Search of Clusters (2nd Edition) by Gregory F Pfister, which discusses the internals of cluster computing and provides details of different clustering implementations. Thanks, Sonia, for your suggestion.

If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book's name and a short write-up on why you think it is useful, so that I can mention it in this column. This would help many readers who want to improve their software skills.
If you have any favourite programming questions or software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming and here's wishing you the very best!

By: Sandya Mannarswamy


The author is an expert in systems software and is currently working with Hewlett Packard India Ltd. Her interests include compilers, multi-core and storage systems. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group Computer Science Interview Training India at http://www.linkedin.com/groups?home=&gid=2339182


Developers

Let's Try

Develop Apps Quickly with Quickly


This aptly named tool makes the development of apps a simple and fast process.

[Figure: The Quickly application stack - Python, GTK+, PyGObject, Glade, Bazaar, Winpdb]

Ubuntu is a Linux distribution that has gained a lot of momentum in the past few years. Ever since it brought about revolutionary changes like Unity in its interface, and now the upcoming display server known as Mir, it has been a topic of debate among Linux and open source enthusiasts. Regardless of whether these changes are good or bad, these features have certainly given it unprecedented exposure to the public, who are increasingly looking beyond Windows and Mac to install Ubuntu on their machines. Ubuntu installation is now easier than ever before, due to the much improved and refined installation procedure.
Under these circumstances, organisations like Valve are being tempted to support the platform with products like Steam, and games like Portal and Left 4 Dead 2, which are now available on Linux. It now also makes sense for developers big and small to take advantage of Ubuntu's growing popularity and start focusing on this platform. But this throws up the problem of re-learning the innumerable tools and tricks involved in developing and publishing an app for Ubuntu. Though Linux provides multiple options for tools at each stage of software development, it becomes even harder when the developer has to choose the best tool for the purpose.
Quickly makes this process much easier by providing a standard application stack that guides the developer, simplifying and reducing each step to a much easier interface. It does this by providing a ready-made empty application template along with a tutorial that takes you through each step of application development. Of course, alternatives do exist in the form of Qt and the Mono framework, but Quickly will prove to be easier, requiring less time to learn how to use the development tools.

The Quickly application stack

The Quickly application stack consists of a specific set of tools that are chosen on behalf of the developer to make the work much easier and provide a streamlined workflow, which is automated and interwoven into these technologies so that it can effectively act as a wrapper for all the hard work that would otherwise need to be done.
It focuses on the following tools to make your process of
application development easier:
Python
GTK+
PyGObject
Glade
Bazaar
Winpdb
Some of these tools, like Python, GTK+, PyGObject and Glade, are almost ubiquitous in Linux application development. Apps developed with these can easily be ported to other Linux distributions, if the packaging is done for their respective systems. Python is one of the few programming languages that is immensely popular for any type of application, and if you don't already know how to work with it, you should learn how to, because you are likely to encounter it elsewhere - many important components of Ubuntu are written in Python itself. Ubuntu also exposes many important APIs and libraries in Python for developers to use, so it makes sense to work in Python to make this work easier.
Bazaar, on the other hand, is a revision control system
that is pretty robust. Since it is also used by Canonical for the
development of Ubuntu, easier integration with its ecosystem
is obviously one of its advantages. While it is certainly not
the most popular, it is used by many developers due to its
distributed capabilities and Launchpad integration.

Getting started with Quickly

To get started with installing Quickly, go to http://developer.ubuntu.com/get-started/ and click on the Download Quickly button. Alternatively, you can run the following command on the development machine:

sudo apt-get install quickly quickly-ubuntu-template

When Ubuntu completes downloading and installing the Quickly application along with its dependencies, you will be able to use the Quickly workflow to create an Ubuntu application within seconds. To get started with a basic application template, you can run the following:

quickly create ubuntu-application myfirstapp

This will create a project directory along with the Bazaar repository so that you can start playing around with it. You will then also see a sample window (Figure 1) that says the application has been created and can be edited further in the application designer for GTK+, i.e., Glade.

Figure 1: The sample application window opens up when the application is created for the first time

You can now go to the application's directory, which has been created by Quickly:

cd myfirstapp

Glade can now be started with the following command:

quickly design

Figure 2: The Glade UI design environment

Glade (Figure 2) will allow you to add new components and change their properties so that you can customise how each new window looks, along with a design view, just like any other IDE that you might have seen.

When the design part for your application is done, you can begin writing some code to actually make it work, and translate those bare-bones design components into an integrated application that responds to the user's actions in the manner you desire. This can be done at the edit stage, with a text editor like Gedit. The following command loads all the files (Figure 3) that can be edited so that the application can be customised to work according to your needs:

quickly edit

Figure 3: Gedit opens up all the files customisable in the Quickly project

This involves event-driven programming, where a block of Python code is written for each signal. Signals for each component can be activated from within the signals tab in Glade. When the coding for this application is complete, you can run the application with the following command:

quickly run

When everything is up and running fine, you can save and commit the work in Bazaar version control:

quickly save commit message

Towards the end of software development, you might consider debugging your application with the help of a utility provided by Quickly, known as winpdb (Figure 4). It works like any other debugger that can step through code, and provides information about variables and exceptions that occur within the application. This Python debugger can be launched with the following command:

quickly debug

Figure 4: The winpdb debugging window
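As an aside on the edit step described earlier, the "one block of Python code per signal" model can be sketched in plain Python. The FakeButton class and the handler below are hypothetical stand-ins for the PyGObject widgets and signal names that Glade would wire up for you; no GTK+ is needed for this illustration:

```python
# Pure-Python sketch of event-driven signal dispatch; in a real
# Quickly app, Glade connects signal names to methods for you.
class FakeButton:
    def __init__(self):
        self.handlers = {}

    def connect(self, signal, callback):
        self.handlers[signal] = callback   # register one handler per signal

    def emit(self, signal):
        self.handlers[signal](self)        # fire the stored callback

clicks = []

def on_button_clicked(widget):
    clicks.append("clicked")               # the per-signal block of code

button = FakeButton()
button.connect("clicked", on_button_clicked)
button.emit("clicked")
print(clicks)  # ['clicked']
```

The point is simply that nothing runs until a signal fires; your code is a collection of callbacks waiting for the user's actions.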

You could also choose to share your application's code on Launchpad so that more collaborators can work on and submit changes to your application. This can be done with the help of the quickly share and quickly release commands, but you need a Launchpad account for that. Launchpad is a project hosting tool that is used by Ubuntu and innumerable other projects to collaborate on their products.

Licensing and packaging

Before releasing the finished application for distribution to others, it is important to mention the licence under which the application is being distributed. By default, Quickly assumes this to be GPL-3. To generate a licence, use the following command:

quickly license

This command adds the licence at the beginning of every file in the form of a comment header. It only works for the BSD, GPL-2, GPL-3, LGPL-2 and LGPL-3 licences. If any other licence is to be used, it should be added in the form of a COPYING file, which Quickly will then add to each file generated by its workflow.
The Desktop file and the Setup.py file can now be edited to add information specific to the application and the developer. This includes the category, type and icon of the application in the Desktop file, while the Setup.py file includes the author's contact details and a description of the application.

When everything is ready, you should be all set to start packaging your application in the form of a deb package, so that it can be distributed and installed easily on any Ubuntu system:
quickly package

This will generate a deb package and a zip of the source code in the parent directory of the Quickly project folder. This deb can then be installed on your Ubuntu system (or any other Ubuntu system), and you will see that it integrates into the dash properly, as is the case with any other Ubuntu application. To distribute your app more easily, you might prefer using a Personal Package Archive, or maybe even publish your application to the Ubuntu Software Center (USC). Every application submitted to the USC goes through an approval process defined by Canonical, and it could take a couple of days before the application is finally visible online.

Since you've now had a brief introduction to how the Quickly application development cycle works, you might have already noticed that the commands and work needed at each step are almost zero, and the majority of the work is done for the developer by preconfigured tools. I hope that as the project progresses, more options and templates will be made available.
By: Ankit Mathur
The author is a geek with a crush on Java, who also loves
flirting with almost anything related to databases and Web
technologies. Feel free to poke fun at his articles and direct your
feedback to ankitreloaded@gmail.com.

How To

How to Improve the Performance of Drupal Sites

A site is only as fast as its last mile connectivity. If users access a site from a slow connection, even a site capable of responding quickly to requests will appear to be slow. A content management system like Drupal throws in additional challenges to the Web architect in improving performance because, typically, the Apache process takes up more space than a site serving traditional HTML or PHP pages. Read on to take a look at the various external factors that impact performance, and explore ways to mitigate them.

If customers complain of lengthy page load times, browsers displaying white pages, and 408 Request Timeout errors, then these are tell-tale signs that your site infrastructure is unable to cope with the traffic, usually resulting in the server taking a long time to respond to a request, or sometimes not returning the requested pages at all.

Drupal site performance bottlenecks: Where does the time go?

The first step is to identify the bottlenecks between the time a page is requested from the browser and the time it is completely loaded, to determine the causes of delay. There are several components between the browser and the Web server that could contribute to the delay - some of these can be fine-tuned by you, as the architect, while for the others that are not managed by you, alternative solutions will have to be found. Let's start with a component that is not managed by you.

Network latency

Latency is caused by the underwater cables laid along the seabed and other telecom hardware. These infrastructure components contribute significantly to performance degradation, so investigations should ideally start from the client side, by measuring the latency of the Internet connection. You can do this from a terminal or command line by running traceroute to check for potential bottlenecks. Figure 1 shows what it looks like for one of our Rackspace servers hosted out of Texas in the US.

With such high network latencies, it's obvious that users experience very lengthy page download periods. The quickest (and probably the only) way to resolve this is to ask the site visitor to access the site from a faster network connection. The average latencies for servers located across the Pacific range from 300 ms to 400 ms. If further reduction is needed, consider moving the site from the US to a server hosted out of India, where the average latency is about 30 to 40 ms. Once you are comfortable with the network latency, it's time to check for other extraneous factors affecting response speed.

Figure 1: Latency

Are crawlers and bots consuming too many resources?

Another rather painful behind-the-scenes cause of latency is the work of bots, crawlers, spiders and other digital creatures. Often, crawlers and bots can creep into your site unnoticed, and though not all of them do so with malicious intent, they can consume valuable resources and slow down response times for users. In some of the corporate sites that we built, we found the Google bot had indexed the site even before we mapped it to a URL! There are several ways to ensure that bots don't get overly aggressive with their indexing process, by specifying directives in the robots.txt file. These directives tell these creatures where not to go on your site, thus making bandwidth and resources available to human visitors.

You can specify a crawl delay with the Crawl-delay directive. A statement like the one below tells the bot to wait for 10 seconds between successive requests:

Crawl-delay: 10

You can limit the directories crawled by disallowing directories that you do not want displayed in search results, using the Disallow directive. For example, the statement below tells the crawler not to index the CHANGELOG.txt file; you can specify entire directories if you wish.

Disallow: /CHANGELOG.txt
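You can sanity-check the effect of such directives with Python's standard library, which ships a robots.txt parser. The rules and URLs below are just the examples from this article (crawl_delay requires Python 3.6 or later):

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt rules discussed above
rules = """User-agent: *
Crawl-delay: 10
Disallow: /CHANGELOG.txt""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A compliant crawler must skip the disallowed file...
print(rp.can_fetch("*", "http://example.com/CHANGELOG.txt"))  # False
# ...but may fetch everything else
print(rp.can_fetch("*", "http://example.com/node/1"))         # True
# And it should pause this many seconds between requests
print(rp.crawl_delay("*"))                                    # 10
```

This is also a quick way to verify that your robots.txt says what you think it says before deploying it.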

However, these instructions come with a caveat: compliance with them is left entirely to the authors of the spider or bot. Don't be surprised if some don't seem to respect these directives (though all the major ones do), in which case, frankly, there is not much to be done about it! To be effective, the robots.txt should be well written. A tool to check the syntax of the robots.txt file is available at http://www.sxw.org.uk/computing/robots/check.html. Run your robots.txt through this to check if it's properly written.
Some search engines provide tools to control the behaviour of their bots. Often, the solution to an over-aggressive Google bot is to go into Google Webmaster Tools and reduce the crawl rate for the site, so that the bot is not hitting and indexing too many pages at the same time. Start with 20 per cent, and you can go up to 40 per cent in severe cases. This should free up resources so that the server serves real users.

Apache Bench

The HTTP server benchmarking tool Apache Bench (ab) gives you an insight into the performance of the Apache server. It specifically shows how many requests per second the server is capable of serving. You can use ab to simulate concurrent connections and determine if your site holds up. Install Apache Bench on your Ubuntu laptop using your package manager and fire the following command from the terminal:
ab -kc 10 -t30 http://www.sastratechnologies.in:80/

The above command (the trailing slash is required) tells Apache Bench to open 10 concurrent connections, use the HTTP keep-alive feature of Apache, and benchmark for a maximum of 30 seconds. The output contains various figures, but the line you should look for is the following:

Requests per second:    0.26 [#/sec] (mean)

Check memory usage

Though Apache is bandwidth limited, the single biggest hardware issue affecting it is RAM. A Web server should never have to swap, as swapping increases latency. The Apache process size for Drupal is usually larger compared to ground-up PHP sites, and as the number of modules increases, the memory consumption also increases. Use the top command to check the memory usage on the server and
free up RAM for Apache. Figure 2 shows a screenshot from one of our servers.

Figure 2: Memory

Look for unnecessary OS processes that are running and consuming memory. From the listing, right at the bottom, you can see that postfix is running. Postfix is a mail server that was installed as part of the distribution and is not required to be on the same server as the site. You can move services like this to a separate server and free up resources for the site.

Use the Drupal modules that inhibit performance, selectively

Use ps -aux | sort -u and netstat -anp | sort -u to check the processes and their resource consumption. If you find that the Apache process size is unduly high, the first step is to disable unnecessary modules on the production servers to free memory. Modules like Devel are required only in your development environment and can be removed altogether. After you have disabled unnecessary processes and removed unnecessary modules, run the top command again to check if the changes you have made have taken effect.

Reduce Apache's memory footprint

The Web server is installed as part of your distribution and can contain modules that you might not require, for example, Perl. Disable modules that are not required. On Ubuntu, you can disable an Apache module using a2dismod, while on CentOS machines, you'll have to rename the appropriate configuration file in /etc/httpd/conf.d/. On CentOS, some modules are loaded by directives in /etc/httpd/conf/httpd.conf. Comment out the modules (look for the LoadModule directive) that you don't need. This will reduce the memory footprint considerably. On some of our installations, we disable status and mod-cgi because we don't need the server status page, and we run PHP as an Apache module and not as CGI.

Configure Apache for optimum performance

Another raging debate is on whether to use Worker or Prefork. Our Apache server uses the non-threaded MPM Prefork instead of the threaded Worker. Prefork is the default setting and we have left it at that. We haven't had issues with this default configuration and hence aren't motivated enough to try Worker. You can check what you are running using the following commands:

/usr/sbin/httpd -l (for CentOS) and
/usr/sbin/apache2 -l (for Ubuntu)

Set HostnameLookups to Off to prevent Apache from performing a DNS lookup each time it receives a request. The downside is that the Apache log files will have the IP address instead of the domain name against each request.


AllowOverride None prevents Apache from looking for .htaccess in each directory that it visits. For example, if the directive were set to AllowOverride All and you requested http://www.mydomain.com/member/address, Apache would attempt to open a .htaccess file in /.htaccess, /var/.htaccess and /var/www/.htaccess. These lookups add to the latency; setting the directive to None improves performance because they are not performed.

Set Options FollowSymLinks and not Options SymLinksIfOwnerMatch. The former directive will follow symbolic links in the directory. With the latter, symbolic links are followed only if the ownership of the link and the target match; to determine the ownership, Apache has to issue additional system calls, and these add to the latency.

Avoid content negotiation altogether: do not set Options Multiviews. If this is set and the Apache server receives a request for a file that does not exist, it looks for files that closely match the requested file, chooses the best match from the available list and returns that document. This is unnecessary in most situations; it's better to trigger a 404.
In addition to this, you will need to set the following
parameters in the httpd.conf (Centos) / apache.conf (Ubuntu)
file depending on the distribution you are using.
MaxClients: This is the maximum number of concurrent requests that Apache supports. Setting this too low will cause requests to queue up and eventually time out while server resources are left under-utilised; setting it very high could cause your server to swap heavily, and response times will degrade drastically. To calculate MaxClients, take the maximum memory allocated to Apache and divide it by the maximum child process size. The child process size for a Drupal installation is anywhere between 30 and 40 MB, compared to a ground-up PHP site, where it is about 15 MB. The size might be different on your server, depending on the number of modules. Use

ps -ylC httpd --sort:rss (on CentOS) or
ps -ylC apache2 --sort:rss (on Ubuntu)

to figure out the non-swapped physical memory usage by Apache (in KB). On our servers, with 2 GB of RAM, the MaxClients value is (2 x 1024) / 40, i.e., 51. Apache has a default setting of 256 and you might want to revise it downward, because the entire 2 GB is not allocated to Apache.

MinSpareServers, MaxSpareServers and StartServers: MinSpareServers and MaxSpareServers determine how many child processes to keep active while Apache is waiting for requests. Set MinSpareServers very low and, when a bunch of requests comes in, Apache has to spawn additional child processes to serve them. Creating child processes takes time, and if Apache is busy creating these processes, client requests won't be served immediately. If MaxSpareServers is set too high, unnecessary resources are consumed while Apache listens for requests. Set MinSpareServers to 5 and MaxSpareServers to 20.

StartServers specifies the number of child server processes that are created on start-up. On start-up, Apache will continue to create child processes until the MinSpareServers limit is reached. This doesn't have an effect on performance while the Apache server is running. However, if you restart Apache frequently (seldom would anyone do so) and there are a lot of requests to be serviced, set this to a relatively high value.

MaxRequestsPerChild: The MaxRequestsPerChild directive sets the limit on the number of requests that an individual child server process will handle before it dies. By default, it is set to 0. Set this to a finite value, usually a few thousand; our servers have it set to 4000. It helps prevent memory leakage.

KeepAlive: KeepAlive allows multiple requests to be sent over the same TCP connection. Turn this on if you have a site with a lot of images; otherwise, a separate TCP connection is established for each image, and establishing a new connection has its overheads, adding to the delay.

KeepAliveTimeOut: This specifies how long Apache waits for the next request before timing out the connection. Set this to 2 - 5 seconds. Setting it to a high value could tie up connections that wait for requests when they could be used to serve new clients.

Disable logging in access.log: If you would like to go to extremes, you can stop Apache logging each request it receives by commenting out the CustomLog directive in httpd.conf. A similar directive is likely to be in your vhost configuration, so ensure that you comment it out as well. I wouldn't recommend it though, because access logs are used to provide an insight into visitor behaviour. Instead, install logrotate. On Ubuntu, use:

sudo apt-get install logrotate

On CentOS, logrotate is run daily via the file /etc/cron.daily/logrotate; its configuration files are available in /etc/logrotate.conf and /etc/logrotate.d.

Increase the Apache buffer size: For very high latency networks, especially transcontinental connections, you can set the TCP buffer size in Apache with the SendBufferSize bytes directive.
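The MaxClients arithmetic described above can be checked with a tiny script. The numbers are the article's own example figures (2 GB of RAM, 40 MB per Drupal child process); substitute your own measurements from ps:

```python
def max_clients(ram_allocated_mb: int, child_size_mb: int) -> int:
    """Divide the RAM set aside for Apache by the per-child memory use."""
    return ram_allocated_mb // child_size_mb

# 2 GB of RAM, 40 MB Drupal child processes -> the article's value of 51
print(max_clients(2 * 1024, 40))  # 51

# A leaner ground-up PHP site at ~15 MB per child supports far more
print(max_clients(2 * 1024, 15))  # 136
```

Remember to deduct the memory needed by MySQL, the OS and any other services on the box before plugging in ram_allocated_mb.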
By: Sridhar Pandurangiah
The author is the co-founder and director of Sastra Technologies,
a start-up engaged in providing EDI solutions on the cloud.
He can be contacted at: sridhar@sastratechnologies.in /
sridharpandu@gmail.com. He maintains a technical blog:
sridharpandu.wordpress.com


Try Your Hand at GNOME Shell Extensions


One of the most popular features of GNOME is that it enables users to build extensions for their shell to increase its basic functionality. You can think of these extensions as add-ons to the GNOME shell. Here's more on developing GNOME shell extensions.

One of the many choices available to a Linux user is that of the desktop environment. Among the many available, GNOME stands out as one of the most prominent. The GNOME shell has a very user-friendly interface, which makes it popular amongst users.

First things first: GNOME extensions are written in JavaScript. This is actually Mozilla's special version of JavaScript, which is based on SpiderMonkey and is widely known as GJS. A working knowledge of regular JavaScript is enough to begin extension development.
Let us have a look at what the components of a
GNOME extension are.
Metadata.json: This file contains data about the
extension, details like the extension name, description,
identifier and compatibility information (which versions
of the GNOME shell the extension is compatible with).
Extension.js: This is the GNOME-JavaScript (GJS) file,
which will store the logic that controls the extension. This
file must contain the following three functions.
init(metadata): This is called to initialise the
extension. The metadata argument is the data stored
in the metadata.json file, along with some additional
details like the absolute path to your extension.
enable(): This is called when the extension is enabled.
This function stores the actions to be performed when
the extension is enabled, and this generally is the
function in which the logic for the extension is placed.
disable(): This is called when the extension is disabled.
It generally contains clean-up code, which stops
whatever actions the extension performs and restores
the system to a state as if the extension had never
been there.
There is also an optional stylesheet.css file, which
is used to style various attributes of the extension, like
buttons or displays.
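Since these are plain JavaScript functions, their contract can be sketched outside the shell. The snippet below is a simplified stand-in: the panel object here mimics Main.panel._rightBox, which only exists inside the GNOME shell, so treat it purely as an illustration of the enable/disable lifecycle:

```javascript
// Stand-in for Main.panel._rightBox, only to illustrate the lifecycle.
var panel = { children: [] };
var button = null;

function init(metadata) {
    // One-time setup; create objects but do not touch the shell UI yet.
    button = { name: 'my-button' };
}

function enable() {
    // Called when the user turns the extension on: add UI, connect signals.
    panel.children.unshift(button);
}

function disable() {
    // Called when turned off: undo everything enable() did,
    // restoring the panel as if the extension was never there.
    panel.children = panel.children.filter(function (c) { return c !== button; });
}

init({});
enable();
console.log(panel.children.length);  // the button is now on the panel
disable();
console.log(panel.children.length);  // panel restored to its original state
```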

Creating an extension

When you write your extension, the files related to the
extension should be placed in the ~/.local/share/gnome-shell/
extensions directory. In the same directory, you will be able
to see various other extensions that exist on your system, and
it may be a good idea to look at some of them to get a better
understanding of how they work. You will also see that the
already existing extensions will have the above mentioned
files (extension.js, stylesheet.css, metadata.json).
Now, let us start building our extension. Although it is
not mandatory (you can write the extension from scratch),
GNOME 3 has a bash command that helps create new
extensions. Let's use the following command to create a draft
of our extension:
gnome-shell-extension-tool --create-extension

When you run the above command, you will be prompted
to enter the following metadata.
Name: Name of the extension.
Description: Description of the extension.
UUID: This is a unique identifier for the extension. It
should be in an email address format, like foo@bar.com.
In general, as long as the UUID can be used to create a
subdirectory, it will be regarded as a valid UUID.
Now, you will be told that your extension has been created
and the path to it will also be displayed. You are free to make
the required changes to your extension. Next, activate the
extension by using the GNOME tweak-tool (if you have not
already installed it, you can download and install it from your
distro's software manager). Under the tweak tool, go to shell
extensions and turn on your extension there. If you cannot see
it, restart your GNOME shell (Alt + F2 and type 'r', and then
press Enter) to force load all the extensions.
You will see a new icon in the GNOME tray (top right
corner of your screen) in the shape of gears. If you click on it,
your extension will display 'Hello world'.
This is the default extension, created by the GNOME shell
extension tool. We must modify the files located in ~/.local/
share/gnome-shell/extensions, in the folder with the same
name as the UUID you entered earlier.

Understanding the code

Let's look at the code that the GNOME shell extension-tool
created, by first examining the extension.js file:

const St = imports.gi.St;
const Main = imports.ui.main;
const Tweener = imports.ui.tweener;
let text, button;
function _hideHello() {
Main.uiGroup.remove_actor(text);
text = null;
}

Figure 1: Hello world extension


function _showHello() {
    if (!text) {
        text = new St.Label({ style_class: 'helloworld-label',
                              text: "Hello, world!" });
        Main.uiGroup.add_actor(text);
    }
    text.opacity = 255;
    let monitor = Main.layoutManager.primaryMonitor;
    text.set_position(Math.floor(monitor.width / 2 - text.width / 2),
                      Math.floor(monitor.height / 2 - text.height / 2));
    Tweener.addTween(text,
                     { opacity: 0,
                       time: 2,
                       transition: 'easeOutQuad',
                       onComplete: _hideHello });
}
function init() {
    button = new St.Bin({ style_class: 'panel-button',
                          reactive: true,
                          can_focus: true,
                          x_fill: true,
                          y_fill: false,
                          track_hover: true });
    let icon = new St.Icon({ icon_name: 'system-run',
                             icon_type: St.IconType.SYMBOLIC,
                             style_class: 'system-status-icon' });

    button.set_child(icon);
    button.connect('button-press-event', _showHello);
}

function enable() {
    Main.panel._rightBox.insert_child_at_index(button, 0);
}

function disable() {
    Main.panel._rightBox.remove_child(button);
}

In the above code, St has been imported because that
is the library that allows us to create UI elements. Main
is the instance of a class that has all the UI elements, and
we add all UI elements to this Main instance. Tweener is
responsible for handling all the animations that might be
performed. Let's look at what each of the functions in the
above code does.
init(): Creates a button for the top panel by passing a
map of properties to St.Bin. In the map, set the button
to be reactive to mouse clicks and set the style class of
the button. Set it such that the button can be focused by
keyboard navigation, and that the button only fills x-space
and not y-space. Create an icon, and set the icon as a
child of the button. Finally, bind the button-press event
to call the function _showHello, which is responsible for
displaying the Hello world message.
enable(): Adds the button we created earlier to the right
box of the top panel.
disable(): Removes the button from the right panel.
Next, let us look at the metadata.json file:
{"shell-version": ["3.4.1"], "uuid": "foo@bar", "name":
"hello", "description": "OSFY extension"}

As you can see in the above code, the metadata.json
file stores data about your extension. Stored by
default is the shell-version for which the extension was
created. (This value will vary depending on your version of
the GNOME shell. Also, the syntax for extensions will be
different if you are running a GNOME version before 3.2.)
Other details stored in the metadata.json file are the name,
UUID and description.
And in the end, let's look at stylesheet.css:

.helloworld-label {
    font-size: 36px;
    font-weight: bold;
    color: #ffffff;
    background-color: rgba(10,10,10,0.7);
    border-radius: 5px;
    padding: .5em;
}

Figure 2: Looking Glass

You can change various styles associated with your
extension by changing what's needed in this file.

Looking Glass

This is the tool used to debug your GNOME shell
extensions and JavaScript code. To open Looking Glass,
press Alt+F2, type in 'lg' and hit Enter. The Looking
Glass window will open up.
Looking Glass is divided into tabs: Evaluator,
Windows, Memory and Extensions. The Evaluator window
is where you can type in arbitrary JavaScript code. You
can also use the picker at the left corner of Looking
Glass to select various elements of the GNOME shell
and find out an element's name. The Windows tab shows
which windows are active, while the Extensions tab shows
the various extensions running on your system, along
with any errors. If any extension runs into an error, the
error will be displayed here.

Customising your extension

Let us now modify the standard Hello world extension.
Let's try to modify the type of icons displayed in the right
panel. By default, the icons displayed in the right panel are
symbolic icons, and your extension will change them to full-colour
icons. Let us look at the code to do this:
const St = imports.gi.St;
const Main = imports.ui.main;
const Tweener = imports.ui.tweener;
const PopupMenu = imports.ui.popupMenu;

function init() {
}

function enable() {
    let children = Main.panel._rightBox.get_children();
    for (let i = 0; i < children.length; i++) {
        if (children[i] && children[i]._delegate._iconActor) {
            children[i]._delegate._iconActor.icon_type =
                St.IconType.FULLCOLOR;
            children[i]._delegate._iconActor.style_class =
                'color-status-button';
        }
    }
}

function disable() {
    let children = Main.panel._rightBox.get_children();
    for (let i = 0; i < children.length; i++) {
        if (children[i] && children[i]._delegate._iconActor) {
            children[i]._delegate._iconActor.icon_type =
                St.IconType.SYMBOLIC;
            children[i]._delegate._iconActor.style_class =
                'system-status-icon';
        }
    }
}

For the sake of simplicity, I have left the init function
empty, but all your initialisations should be done inside
that function. In the enable function, you get all the elements
in the right panel with the Main.panel._rightBox.get_children()
function; then you loop over all children, changing their style
class from symbolic icon to full colour. In the disable function,
do the same with one difference: change the icon style from
full colour back to symbolic.
As we have seen, the GNOME shell allows you to
extend its functionality by adding extensions. Building
these extensions is easier with tools like the GNOME shell
extension-tool, and Looking Glass.
You can learn a lot about building extensions by reading
the code of existing extensions. You will find a few of them
on GNOME's Git under a module called gnome-shell-extensions.
Also remember that Looking Glass is your friend.
Get familiar with it and use it to solve many a coding error.
Further documentation can be found on the GNOME website
https://live.gnome.org/GnomeShell/Extensions. You have also
seen how easy it is to customise the GNOME shell to suit
your needs. GNOME extensions let users take control, making
it easier to achieve the perfectly customised system.
By: Sahil Chelaramani
The author is an open source activist who loves Linux, Python
and Android. He is a third year student at the Goa Engineering
College (GEC).


For U & Me

Let's Try

What Makes LaTeX a Hit?

Continuing from the last article on the subject, let's look at some more ways
in which LaTeX can make life easier.

If the previous article motivated you to learn about
LaTeX on your own and use it for your work, then
what follows will augment your LaTeX skills for most
basic tasks. There is enough and more of LaTeX to be
learnt from a wealth of resources on the Internet, if you
have the liking or need for it.

Hyperlinking

Documents produced today are as much for on-screen
viewing as for printing. Hence, hyperlinking within
and across documents is almost a hygiene factor for
a document. We will use the hyperref package for
hyperlinking. Now that you are well on your way to
becoming a LaTeX pro, let's learn it directly from a
template that you can paste into an editor and experiment
with. Note that we use the xcolor package to differentiate
hyperlinks based on colour.
%A template to learn hyperlinking.
\documentclass[10pt,a4paper]{article}
\usepackage[pdftex,
colorlinks=true,
urlcolor=Blue,
filecolor=Green,
linkcolor=Red,
pdftitle={Title of Article},
pdfauthor={Author of Article},
pdfsubject={Subject of Article},
pdfkeywords={any, key, words},
pdfproducer={pdfLaTeX},
pdfpagemode=None,
bookmarks=true,
bookmarksopen=true]{hyperref}
\usepackage[usenames,dvipsnames,
svgnames,table]{xcolor}
%<------ Preamble of Document ------->
%The document title.
\title{\textbf{Title Of Article}}
\author{Author1 \\ Author2 \\ etc.}
\date{} %%switches off date. Delete this
%if you want the date to appear.
\begin{document}\label{top}
\maketitle %necessary to create the title.
\tableofcontents
\pagebreak[4] %force pagebreak here.
\section{First Section}\label{first}
%-------Your Content Here------->
Content for the first section.
\subsection{Subsection Title}\label{one-dot-one}
%Content
\subsubsection{Subsubsection Title}
%Content
\section{Second Section}\label{second}
This is content for the second section. You can go to
\hyperref[first]{Section One} or any other section or location
from here, including the \hyperref[top]{top}. Make sure you
label the location. Or you can go to page \pageref{first}
that has Section One. You can even link to an external PDF
\href{test.pdf}{file}.
The following section shows you how to link external URLs:
\section{References}
\begin{itemize}

\item Link to
\href{http://www.example.com}
{www.example.com}
\end{itemize}
\end{document}

Large projects

How do you handle a large software project? You break it
down into parts, and work on the parts individually. The
build/integration process then assembles the project. This
process is well tested and is known to scale up and scale out
very well. So, why should large literary or documentation
projects be any different?
The LaTeX \input command lets you include one file into
another. If you have a book project, you could keep a base
file for the overall framework of the book containing all the
typesetting and styling information. You could then include
individual chapters that are ready, each in its own .tex file that
might, in turn, include files for individual sections and stubs
for the ones that are not. This lets you concentrate on the
content of individual chapters and even farm them out to
co-authors. Your book gets built from the top-level base file. You
can put individual chapters under version control, use GNU
make and plan labelled releases of your book.
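To sketch the idea, a hypothetical top-level base file for such a book project might look like this (the chapter file names are illustrative):

```latex
% book.tex -- illustrative top-level base file
\documentclass[11pt]{book}

\begin{document}
\frontmatter
\input{./title-page.tex}   % a custom title page
\tableofcontents

\mainmatter
\input{./chapter-01.tex}   % chapters that are ready
\input{./chapter-02.tex}
%\input{./chapter-03.tex}  % stub, not yet written

\end{document}
```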

A custom title page

The simplest example of the use of the \input command would
be the inclusion of a custom title page. If you had a title page
like the one below, saved under the name title-page.tex, you
could include it into your main article with an
\input{./title-page.tex} inserted just after the \begin{document} command.
%% Sample title page template.
%% Save this as title-page.tex in your
%% current directory.
%% Include in main document with command:
%% \input{./title-page.tex}

\begin{titlepage}
\centering
\vspace*{120pt} %vertical space
\Huge{\textsf{\textsc{Your Title}}}
\vspace{20pt}
\Large{\textsf{Author Name}}
\vspace{100pt}
\begin{minipage}{0.4\textwidth}
\begin{flushleft} \Large
Left line 1\\
Left line 2\\
\end{flushleft}


\end{minipage}
\begin{minipage}{0.4\textwidth}
\begin{flushright} \Large
Right line 1\\
Right line 2\\
\end{flushright}
\end{minipage}
\vfill % Fill to bottom of the page
\large\today
\end{titlepage}

Creating custom commands

If you issue the same set of commands over and over again,
or decide to get creative with your LaTeX skills, then you
have the option of defining your own custom commands.
The simplest kind is akin to macro expansion. For instance,
we have been using the word LaTeX quite often in this
article. You could define it as follows in the preamble of
your .tex file: \newcommand{\ltx}{{\LaTeX}}. Every time
you issue the command \ltx, it would get expanded to what
we have mapped it to. You can use the \renewcommand command
to overwrite existing commands; use this with caution and
only after you have seen some usage examples.
A more useful command might be the one to include figures.
Let's define a command with four arguments as follows:



\newcommand {\insfig}[4] {
% 4 args: file, scale, caption, label
% scale: 0.1, 0.2, 0.5, etc.
% needs the graphicx package
\begin{figure}[htb]
\includegraphics[scale=#2]{#1}
\caption{#3}
\label{#4}
\end{figure}
}

With this definition in place, you can insert captioned
figures with just one command, as follows:

\insfig{file}{scale}{caption}{label}

Creating presentations

LaTeX presentation packages give you the means to make
truly portable presentations. Apart from offering numerous
slick-looking, colour-coordinated themes, they also give
you overlays and progressive disclosure of slide content.
All that needs to be done is to carry a PDF version of the
presentation generated by LaTeX, without bothering about
the version of the presentation software the podium laptop
might be running, as long as it has a PDF reader available.

The Beamer package

Beamer is a well-documented LaTeX presentation package. We
focus on it because it is intuitive, even if it has a long, but
not steep, learning curve. Powerdot is another prominent
presentation package.
In Beamer, each slide is a frame. You can also use standard
LaTeX commands such as \section that you would use with
the article document class, making it easy to transfer material
between articles and slides and vice versa.
The following is a basic Beamer template that you can
start playing with and customising:
%Sample Beamer Template for LaTeX
\documentclass{beamer}
\title[Short Title]{Long Title}
\author{Author1 and Author2}
\date{\today}
\usetheme{Warsaw}
\begin{document}
\maketitle
\begin{frame}
\frametitle{Slide1: Progressive Disclosure}
\setbeamercovered{dynamic}
Item 1 \\ \pause
Item 2 \\ \pause
Item 3 \\
\end{frame}
\begin{frame}
\frametitle{Slide2: Another Progressive
Disclosure Style}
\begin{itemize}[<+->]
\item Item 1
\item Item 2
\item Item 3
\end{itemize}
\end{frame}
\begin{frame}{Slide 3: Multi-Column}
\begin{columns}[c]
\column{0.4\textwidth}
\centering
Item 1\\
Item 2\\
Item 3
\column{0.4\textwidth}
\centering
You can include a graphic here.
\end{columns}
\vfill
\begin{center}
Lots more information at \\
\scriptsize www.ctan.org/\\
tex-archive/macros/LaTeX/contrib/\\
beamer/doc/beameruserguide.pdf
\end{center}
\end{frame}
\end{document}

Transitioning to typesetting

LaTeX is a vast subject that has been extensively discussed
and written about. With this and the previous article, we
have just skimmed its surface. If great content takes pride of
place in our minds, then precise typesetting must stand right
next to it to create that good impression. LaTeX, because
of its batch-mode operation, brings portable, precision
typesetting even to relatively lower-end machines. So will
you let LaTeX typeset your next magnum opus?

References
[1] http://www.LaTeXtemplates.com/ is a good place for
LaTeX templates.
[2] http://www.ctan.org/tex-archive/macros/LaTeX/contrib/
beamer/doc/beameruserguide.pdf for Beamer documentation.
[3] Wikibooks has a comprehensive guide on LaTeX.

By: Gurudutt Talgery


The author likes exploring the contribution of FOSS to improving
the productivity of knowledge workers and writing about it.

Overview

Developers

PhoneGap Application Development: A Developer's Delight!

In this era of mobile phones, developers are concentrating on building mobile apps rather
than desktop applications. There is a huge variety of apps being published every day,
which makes the mobile market very fluid. PhoneGap is a great framework for mobile
application developers to work on.

In the world of applications, there is a kind of tug-of-war
situation between native and hybrid apps. Native
apps have an edge in using the native features of the
device and give their users a better experience in terms of
performance. On the other hand, hybrid apps offer tough
competition by following the 'build once, use anywhere'
policy, which lets developers make an application and run
it simultaneously on various platforms. There are various
platforms that give hybrid applications the capability
of using the native features of the device and of running
as native apps. PhoneGap is one such highly
appreciated framework that lets developers make one app
for various platforms.

Getting started

PhoneGap lets developers make their apps using HTML, CSS
and JavaScript, and packages those as native applications
(.apk in the case of Android) to be deployed on various mobile
platforms like Android, iOS, BlackBerry, Windows, Bada
and so on. In this regard, it makes the developer's job easier,
as developers do not need to know platform-specific languages and
can develop one app for different devices. It provides various
APIs, which enable the apps to use a device's native features.
The framework includes:
Accelerometer
Camera
Geolocation
Storage
Media
Notification
SplashScreen
Contacts and many more features
My system's configuration:
Linux distro: Ubuntu 12.04 32-bit desktop version
Eclipse: Juno
Android version: 4.1
PhoneGap: Cordova 2.7.0
All the following results and screenshots are with respect
to the above configurations.

Setting up the environment

To get started with the actual development, set up an
environment where some installations and configurations
need to be done. The minimum requirements are:
1. Download and install an IDE; Eclipse is recommended.
2. Download the Android SDK and the ADT plug-in
(Android 2.2, 2.3 and 4.x are supported).
3. Download and extract the latest PhoneGap package,
Cordova 2.7.0 (this is the latest one).
Note: The Android SDK comes in different versions
for different operating systems: android_sdk_windows,
android_sdk_linux and android_sdk_mac.

The project set-up

Pre-requisites: All the tools need to be downloaded and
installed.
You need an IDE installed to provide the development
environment. After it gets installed, open its workbench
and choose a workspace.
Set up the Android environment in Eclipse by installing
the ADT plug-in there.
Set up the path for android_sdk by going to Windows >
Preferences > Android > Browse; this is the path to the
SDK folder.
That's it: we have the environment set for
development now. Let's dive into it.
After setting up everything, start integration with
PhoneGap:
Create a new project by pointing your cursor to File >
New > Android Project.
Specify the project name (let's call it PhoneGapDemo).
The same is depicted in Figure 1.
Specify the Build Target. Any target can be chosen, i.e.,
2.2, 3.2, 4.0, etc. (let's choose 2.3).
Specify the Application Name (let's call it
MyPhoneGap).
Specify the Package Name. The package name must
start with com (com.phonegap).
Specify the Activity Name. It is a Java file name
(MainActivity.java).
Click Finish. We are done with building a new project
and it will be on your Project Explorer now. The screenshot
in Figure 2 depicts this.
Now, let's integrate this project with the PhoneGap we
downloaded earlier, as follows:
Create a new directory (folder) and name it libs.
Open your PhoneGap package and go to the folder
Android, then copy the .jar file found there. Paste this
file into the libs folder that you just created.
Create a folder under the assets folder and name it www.
Copy the .js file from the same PhoneGap package,
Phonegap > Android, and place it in www.
Set the build path. Right click on Project > Build Path >
Configure Build Path > Libraries > Add Jar > Add
Phonegap.jar file.
Paste all the .css files in www if you already have some.
Create an xml folder inside the res directory.
Copy the config.xml file, which is in the PhoneGap
package that you downloaded (phonegap-2.7.0/lib/
android/xml/config.xml), and paste it into the xml folder.

Figure 1: Creating the project

Note: config.xml contains all the plugins needed for
using PhoneGap APIs.
The boring integration part is over, so let's start the real
development.

Developing Hello world

The steps for the development are as follows:
1. Create an HTML page inside the www folder.
2. Modify the Java class MyPhoneGapActivity.java.
3. Add some permissions in the AndroidManifest.xml.
4. Deploy and run the application on the emulator.


Figure 3: Hello World

Add the following code as a direct child of the root
<manifest> element. This will give your application the flexibility
to adapt to different screen sizes on different devices:

<supports-screens
android:largeScreens="true"
android:normalScreens="true"
android:smallScreens="true"
android:resizeable="true"
android:anyDensity="true" />

Figure 2: Project structure


Now, to enable your application to access some of the
device features like the camera, GPS, Internet, etc.,
some permissions need to be given. This can be done by
adding uses-permission entries, either by using the Permissions
tab that you can see on opening AndroidManifest.xml or
by adding the following code as a child of <manifest>:
Figure 4: Accelerometer

1. As you make your PhoneGap apps in HTML, CSS and JS,
create an HTML page saying 'Hello World', place it in its
appropriate place and run it on your device. Let's create a
very simple HTML page first:
// index.html
<!DOCTYPE html>
<html>
<head>
<title>My Page</title>
<meta name="viewport" content="width=device-width,
initial-scale=1">
<script src="cordova-2.7.0.js"></script>
</head>
<body>
<h1>Hello World</h1>
</body>
</html>

So in this HTML page, there is a <head> section in which
you have embedded your phonegap.js file, and in the <body>
section there is a heading tag with 'Hello World', which is
going to be displayed when you run this app on the device.
2. Now, let's modify MyPhoneGapActivity.java to provide
it the path of the HTML page you just created. The few
changes that need to be made are:
Add the import statement: import com.phonegap.DroidGap;
Modify the class statement and let it extend
DroidGap instead of Activity: public class
MyPhoneGapActivity extends DroidGap
Replace setContentView() with super.loadUrl("file:///
android_asset/www/index.html");
The rest will be the same.
3. Now let's get ready to do some configuration in
AndroidManifest.xml and add some permissions.
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECEIVE_SMS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.WRITE_CONTACTS" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
<uses-permission android:name="android.permission.BROADCAST_STICKY" />

Now you're all set to deploy and run the application in the
emulator/device.
4. To deploy and run, start the emulator: go to Windows > AVD
Manager, specify the name, target and size, and click on Start.
The emulator will be launched.
Now click on MyPhonegapDemo Project and then right
click: Run As > Android Application. Running the project in the
emulator will result in the screenshot shown in Figure 3.
Congratulations! You are now done creating a simple
application.

Exploring PhoneGap APIs

As I have already given you the list of APIs included in
PhoneGap, let's explore them to see what benefit can be
obtained from each, and how they can be included in your
application to make it feature rich. These APIs give you
the advantage of accessing native device features like the
accelerometer, the compass, contacts, etc. Let's start playing
around with these APIs.
Accelerometer: This is a motion sensor that can detect a
change in movement with respect to the device's orientation
in all three axes. It allows you to change the position of an
object in your application with every move of your device.
Usage: This can be implemented in racing games.
The methods for using the accelerometer are:
1. getCurrentAcceleration(): Used to get the current
acceleration along the three axes x, y and z.
2. watchAcceleration(): Used to get the acceleration at
some regular time interval.
3. clearWatch(): Used to stop the watch that you started
with watchAcceleration().
The arguments to pass through these methods are:
1. accelerometerSuccess(): This method will be called when
we get successful acceleration from the device.
2. accelerometerError(): This method will be called when
we fail to receive acceleration from the device.
3. accelerometerOptions: This will let you set the frequency
for the time interval used in watchAcceleration().
So let's try to put these things up in the code to see
it in action.
Replace index.html with accelerometer.html.
Note: The code below is for the watchAcceleration method.
<html>
<head>
<title>Accelerometer API</title>
<script type="text/javascript" charset="utf-8" src="cordova-2.7.0.js"></script>
<script>
var watch = null;  // variable to be used inside watchAcceleration method
document.addEventListener("deviceready", deviceReady, false);    (1)
function deviceReady() {    (2)
    startWatch();    (3)
}
function startWatch() {
    var options = { frequency: 5000 };    (4)
    watch = navigator.accelerometer.watchAcceleration(onSuccess, onError, options);    (5)
}
function onSuccess(acceleration) {
    var anyElement = document.getElementById('acceleration');    (6)
    anyElement.innerHTML = 'Acceleration X: ' + acceleration.x + '<br />' +
                           'Acceleration Y: ' + acceleration.y + '<br />' +
                           'Acceleration Z: ' + acceleration.z + '<br />' +
                           'Timestamp: ' + acceleration.timestamp + '<br />';    (7)
}
function onError() {    (8)
    alert('onError!');
}
function stopWatch() {
    if (watch) {    (9)
        navigator.accelerometer.clearWatch(watch);    (10)
        watch = null;
    }
}
</script>
</head>
<body>
<p id="acceleration">Waiting for acceleration</p>
</body>
</html>

The output of the above code is shown in Figure 4.
Explanation: The code above shows the acceleration with
an increasing timestamp.
Reference 1 in the code shows the document.addEventListener
function, which depicts the wait for
PhoneGap to load. References 2 and 3 depict that when
PhoneGap is loaded, the deviceReady method will call
startWatch(), i.e., the user-defined method. Reference 4
shows that inside startWatch(), there is a variable called
options, depicting the frequency for the timestamp.
Reference 5 shows that the method watchAcceleration()
fetches the current acceleration along the x, y and z axes.
The arguments passed will be called at the time of success
or failure. References 6 and 7 indicate that in case of
success, the body element 'Waiting for acceleration' will
be replaced by the acceleration with respect to x, y and z.
Reference 8 shows that in case of an error, an alert will
be shown with an error message. References 9 and 10
show that in the stopWatch() method, if the watch is not
null, it means the watchAcceleration() method has been
called before; so use clearWatch() to stop watching the
accelerometer.


3. contactError: This will be called in case of an error.
4. contactOptions: This contains the search parameter
based on which the contact list will be displayed.

Note: To use getCurrentAcceleration(), there is no
need of the startWatch() and stopWatch() methods. Just call
navigator.accelerometer.getCurrentAcceleration(onSuccess,
onError) in deviceReady.
Geolocation: This provides the current information on
the location of the device, i.e., its latitude and longitude. It
can fetch this information from the device's GPS, Wi-Fi,
Bluetooth, etc. Some devices already have built-in support, in
which case this API need not be used.
Geolocation is used in map applications, and can help you
locate hospitals or restaurants nearby, based on the current
location of the device.
The methods for using geolocation are:
1. getCurrentPosition(): This is used to get the current
position of the device and returns the Position object
as a parameter.
2. watchPosition(): Watches for any changes in the
device's position.
3. clearWatch(): This is used to stop watching the
changes in position.
The arguments to pass through these methods are:
1. geolocationSuccess: This method will be called when
the position has been successfully located.
2. geolocationError: This method will be called when
the device's position could not be located.
3. geolocationOptions: This will provide the timeout in
milliseconds. And if there is no update found after the
time-out, an error will be thrown.
Note: All these methods can be implemented
similar to those implemented in accelerometer.html; to
retrieve and display latitude and longitude, use
position.coords.latitude and position.coords.longitude respectively.
Contacts: This API provides support to access the contacts
database of the device; it allows you to create a new contact
and find a particular one.
Usage: Applications can see the contact list saved in the
devices database.
The methods for using the contacts API are:
1. contacts.create This will create a new contact with
some details about the contact.
2. contacts.save This will persist the newly created
contact into the contacts database.
3. contacts.find This will find the particular contact
based on the search parameter.
Arguments to be passed in contacts.find() are:
1. contactFields This method will decide which
properties should be returned/displayed for a contact.
2. ContactSuccess This callback function will be
returned when a contact has been successfully
retrieved from the database.


Contacts.html

// Creating a contact
<!DOCTYPE html><html>
<head>
<title>Contact Example</title>
<script type="text/javascript" charset="utf-8" src="cordova-2.7.0.js"></script>
<script type="text/javascript" charset="utf-8">
document.addEventListener("deviceready", deviceReady, false);
function deviceReady() {
var addContact = navigator.contacts.create({"displayName": "Anupriya Sharma"}); (1)
addContact.note = "This is my contact list"; (2)
addContact.save(onSuccess, onError); (3)
}
function onSuccess()
{
alert("Saved");
}
function onError()
{
alert("Failed");
}
</script> </head> <body>
<h1>Adding a new Contact</h1>
</body> </html>

Figure 5: Creating a contact

Let us now examine what is happening in the above code.
In Line 1, 'addContact' is the variable that will hold the value of the 'Contact' object which is going to be created by using the '.create' method, and the arguments of the create method depict the contact name.
In Line 2, you have just added a note to the newly created contact.
In Line 3, the method 'save()' will actually persist the newly created contact in the database, and it will be displayed in the contacts of the emulator/device.
The result of the above code is displayed in Figure 5.

Searching contacts

contacts.find() - To implement this method, the following changes should be done in contacts.html:
june 2013 | 55


function deviceReady() {
var options = new ContactFindOptions(); (1)
options.filter = "Anupriya"; (2)
var fields = ["displayName", "name"]; (3)
navigator.contacts.find(fields, onSuccess, onError, options); (4)
}
function onSuccess(contacts) {
for (var i = 0; i < contacts.length; i++) {
console.log("Display Name = " + contacts[i].displayName); (5)
}
}

The result of the above code is shown in Figure 6.


(1) ContactFindOptions() - This constructor creates the object used as a search filter when you query the contacts database.
(2) The contacts are going to be filtered based on the value provided in options.filter.
(3) It will find all the contacts wherein the name field value is 'Anupriya'.
(4) The contacts.find() method will call onSuccess or onError on success or failure, respectively.
(5) The onSuccess method will display all the matches found in the LogCat.
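As a rough sketch of what the filter does (the real matching happens inside the device's contacts database; the helper below is hypothetical), the same behaviour on plain objects looks like this:

```javascript
// Illustrative only: mimic contacts.find() filtering on an in-memory array
// of contact-like objects, matching the filter against displayName.
function filterContacts(contacts, filter) {
    var found = [];
    for (var i = 0; i < contacts.length; i++) {
        if (contacts[i].displayName.indexOf(filter) !== -1) {
            found.push(contacts[i]);
        }
    }
    return found;
}

var db = [
    { displayName: 'Anupriya Sharma' },
    { displayName: 'Anil Seth' }
];
console.log(filterContacts(db, 'Anupriya')); // matches only the first entry
```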

Figure 6: Finding contacts

Note: Here the contacts are displayed only in the console, but they can be shown in list formats according to the need of the application.
So the basics of PhoneGap, along with very simple
APIs, have been covered, which can help your hybrid apps
to access the simple native features of your devices. There
are a lot of APIs still to be explored, which I will cover in
the next article. So till then, explore and make the best use
of what we have covered here. Suggestions and queries are
always welcome.
By: Anupriya Sharma
The author has just graduated and is currently working in the
Android department of a reputed MNC. She loves Android and
iOS development. Apart from that, she manages to spare some
time for cooking, dancing and her all-time favourite, shopping.
You can contact her at anupriyasharma2512@gmail.com.


Exploring Software

Anil Seth

Guest Column

The Anatomy of an
Android X86 Installation
The Android x86 ISO is under 200 MB, making it an attractive small
distribution to explore. The ISO is a live CD with an option to install.
It is fun to use this as a platform to explore and learn what else can
be done with it, without rebuilding from the source.

Booting starts with a boot loader like Grub or syslinux. Two parameters are mandatory: the Linux kernel and an initial RAM file system (initrd.img). There may be additional kernel options, which are used as and when needed by examining /proc/cmdline. The kernel passes control to an executable called init in initrd. The init executable is often a shell script, though not necessarily.
This script will create a usable system, finding critical devices and mounting the real root file system. It will finally switch root to the real root and start another init script/executable. In case the executable is not called init, it may be passed as a parameter to an option called init.
Let us now explore the initrd.img on the Android x86
ISO (http://www.android-x86.org/) and see how it works
and what we can do with it. Extract the contents of the
initrd.img into a working directory.
$ sudo mount /repos/android-x86-4.2-20130228.iso /mnt/livecd/
$ cp /mnt/livecd/initrd.img initrd.gz
$ zcat initrd.gz | cpio -ivd

The init script is about 200 lines and easily readable. You can get an insight into what to look for by checking out the contents of isolinux.cfg in /mnt/livecd/isolinux:

label livem
menu label Live CD - ^Run Android-x86 without installation
kernel /kernel
append initrd=/initrd.img root=/dev/ram0 androidboot.hardware=android_x86 video=-16 quiet SRC= DATA=

Two unusual parameters are SRC and DATA. The init script searches for a root device by mounting each available disk partition and DVD, and searches for ramdisk.img and system.sfs in $SRC. So, you can copy the contents of /mnt/livecd into /android in the disk partition containing /boot.


Next, add the following entry in grub.cfg, preferably by editing 40_custom in grub.d and regenerating grub.cfg:

menuentry 'Android' {
linux /android/kernel root=/dev/ram0 androidboot.hardware=android_x86 video=-16 quiet SRC=/android DATA=
initrd /android/initrd.img
}

You should now be able to boot into Android from the local disk using grub. In my experiments, Android booted on a netbook but not on a desktop. This is because there are no drivers in initrd. The kernel is supposed to include the required drivers for accessing the core devices. In fact, the init script has the option to boot from an NFS disk, provided the kernel is customised to include the required Ethernet and nfs drivers. The pre-built kernel is intended for use with common netbooks and small screen x86 devices.
Creating a directory /android/data allows persistent storage. Alternatively, DATA can be assigned a block device.

Booting over the network

The next experiment would be to find out if you can boot over the network without customising the kernel. Can you follow the example of Puppy Linux and copy the necessary files inside initrd.img?
The init script mounts a disk partition on /mnt and searches for the files needed. You can modify the script so that if it finds the files under /mnt, it will not mount the partition.
On the DHCP, PXE boot server:
1. Create a directory android in html-root: /var/www/html (Fedora) or /var/www (Ubuntu). Use http rather than tftp as that is faster.
2. Copy the contents of /mnt/livecd in it.
3. To enable pxe boot:


- Copy pxelinux.0 from syslinux
- Create a directory pxelinux.cfg
- Move isolinux/isolinux.cfg to pxelinux.cfg/default
- Move android-x86.png and vesamenu.cfg from isolinux into android, i.e., one level up.
4. To create a new initrd:
- Extract the contents of initrd.img into a working directory
- Move install.img, ramdisk.img and system.sfs into the mnt folder of the working directory
- Modify the init script to ignore mounting if /mnt/ramdisk.img exists. E.g., a quick fix is to call try_mount conditionally in check_root:

if [ ! -f /mnt/ramdisk.img ]; then
try_mount ro $1 /mnt && [ -e /mnt/$SRC/ramdisk.img ]
[ $? -ne 0 ] && return 1
fi

- Create a new initrd.img and replace the one in /var/www/android

5. Configure dhcp with the following line to boot pxelinux using http: filename "http://server/android/pxelinux.0";
In my experiments, this worked on both the netbook and
the desktop. However, Android is designed around a small
device. So, using it on a desktop is an interesting experiment
but certainly not worth frequent use.
The first lesson is that creating a customised initrd is not
very difficult.
The more important and very noticeable lesson though
is that the ease of use and the ability to just access an
incredible amount of information comes at a price - the
loss of security and privacy. The risk of owning such
devices may be far too high if one is in the habit of leaving
the device unattended!

By: Anil Seth


The author has earned the right to do what interests him. You can
find him online at http://sethanil.com, http://sethanil.blogspot.com,
and reach him via email at anil@sethanil.com.


Developers

Overview

Bluefish-The Feature Rich Editor


This light and fast open source editor, best suited for Web development, is available
across multiple platforms and supports many programming languages. The editor has
also been translated to Tamil.


Bluefish is an open source editor that is highly customisable. It supports many programming, scripting and mark-up languages. The editor can be extended by adding external programs such as filters and also by adding snippets of code. As it is an open source project, new features and language support, too, can be added. This article is based on the latest version of Bluefish, 2.2.4, on Windows.

Cross-platform support

The cross-platform support is very useful if you choose to master just one editor for all your programming or mark-up needs. If your work environment is Microsoft Windows and the home PC has a Mac or Linux OS (or vice versa), then a cross-platform editor is extremely useful.

Light and fast

The binary size of the latest version of Bluefish (2.2.4) on Windows is around 4.2 MB. Over the years, it has improved both on features and performance. From version 1.0 to the latest, the editor has improved on the maximum file size it can handle and also the number of files that can be opened simultaneously.

Multiple language support, particularly for Web programming languages

The list of programming languages that Bluefish supports is large, the most popular ones being C, C++ and Java. In mark-up languages, it supports HTML5 and ColdFusion Markup Language. The scripting languages it supports are Perl, Python, Ruby, PHP, JavaScript and VBScript.
The editor is translated into 17 languages such as Russian, Japanese, Chinese and Tamil. It is best suited for Web development and deployment on Web servers.

Basic editing with Bluefish

Screen layout and panels

The editor has a menu bar, toolbar, tabs for quick access to HTML mark-up, a file browser and an editing area. At the bottom of the screen is the command output and status bar. The complete UI layout is customisable, with an option to hide/show panels. The menus are tearable, for accessing the most frequently-used menu items quickly. The side panel on the left gives views of the file browser, bookmarks, character maps and code snippets. Bookmarks and snippets are special features in Bluefish, which we will cover in upcoming articles.
In the side panel with the file browser view, one can perform operations on files like rename, delete, create new file, etc.

Standard project support

When working on Web projects or Java, one works with a set of files, which can be grouped as one project. You can create multiple projects of the same code base or different ones.

Word processing tools

The editor comes with a rich set of word processing capabilities, like filtering the file contents, beautifying the code (specific to the language), removing empty lines, converting files from dos2unix, removing duplicate lines, etc. Many command line utilities to process files can be added.
One interesting feature in word processing is the Synchronise Text Block feature. When multiple files are open, one can ensure a block of code is the same across the files. One of the scenarios could be that the copyright text needs to be the same across files. Another scenario could be when the footer for all Web pages needs to be exactly the same across different pages. To ensure that, you can use the Synchronise Text Block feature. The editor allows you to mark the beginning and the end of the section you need to synchronise. Note that these sections will be overwritten in all destination files.
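The idea behind Synchronise Text Block can be sketched in a few lines of JavaScript (this is not Bluefish's implementation; the marker comments and the function are invented for illustration):

```javascript
// Illustrative sketch: replace whatever sits between two marker comments in
// a destination file with the block taken from the master file, mirroring
// how the marked section is overwritten in all destination files.
function syncBlock(masterText, destText, beginMark, endMark) {
    function between(text) {
        var start = text.indexOf(beginMark) + beginMark.length;
        var end = text.indexOf(endMark);
        return { start: start, end: end };
    }
    var m = between(masterText);
    var block = masterText.slice(m.start, m.end);
    var d = between(destText);
    return destText.slice(0, d.start) + block + destText.slice(d.end);
}

var master = '<!-- SYNC -->Copyright 2013<!-- /SYNC -->';
var page = '<html><!-- SYNC -->old footer<!-- /SYNC --></html>';
console.log(syncBlock(master, page, '<!-- SYNC -->', '<!-- /SYNC -->'));
// The old footer between the markers is overwritten with the master block.
```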

Spell check

Bluefish comes with the spell check feature, which is useful when developing Web pages that have a lot of content. The user has the option of selecting the language and the locale at the time of editing, based on language sets chosen at the time of installation. The editor also gives the option to 'Add to dictionary' and 'Ignore' a spelling.

Code block folding

For easy code navigation, Bluefish offers a way to collapse and expand blocks of code. The blocks are automatically identified based on the programming language. To identify more types of blocks, you can modify the default settings in Preferences. For example, in PHP, Bluefish can identify the comments block, PHP code snippets, HTML sections and JavaScript functions as separate sections.

Advanced features of Bluefish

File Browser - Open Advanced

When you need to browse a large project with multiple files, it would be painful to open one file at a time. Bluefish gives an option to open multiple files from a directory. Right click on a directory (left side panel) and select Open Advanced... Files can be opened based on type (.css, .java, .html), a string or pattern within the file, and also by recursively traversing multiple sub-directories.

Bookmarking

When browsing code across multiple files or a single large file, bookmarks come in handy. They let you pin a few locations of code that you would like to switch to frequently. The bookmarks are persistent across sessions of Bluefish.

Figure 1: Screen layout

Additionally, you can create bookmarks based on a pattern. This would be useful if you want to review functions that all start with a common string. In the Advanced Find and Replace dialogue box, there is the option to bookmark all lines that match a string. The capability to match a regular expression makes it more powerful.

Find and Replace

All editors support the Find and Replace functions. Additionally, Bluefish allows restricting or expanding the scope of the search to different scenarios. If a chunk of text is selected and then the Find and Replace dialogue opened, the scope of the search is restricted to the selection.
The search can be restricted from the current point to the end of the document. To expand the scope of the search, you can search the entire document, all open documents in the editor or a complete directory. Searching the complete directory and also being able to select file-types (.h or .css) is a powerful feature. Bluefish also allows you to define the depth of directory navigation for a search.
Bluefish is one of the few editors that supports regular expressions in search and replace. This is very helpful when browsing Web pages with mark-up. You can search for all the places that a particular HTML tag is used and replace the deprecated tags with a new one. For example, different programmers use different ways to specify the background colour of HTML elements. They could use keywords like red, or rgb(255,0,0), or a hex code. To search all the places that a hex code was used, the search string could be: background-color: #[0-9A-F]+;
The power of regular expressions is exploited when you want to convert a huge HTML table into insert records within the MySQL database using PHP code!
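The same search pattern can be tried outside Bluefish; here it is exercised in JavaScript, whose regular expression syntax is close enough for illustration (the sample CSS is made up):

```javascript
// Apply the hex-colour search pattern from the text to a sample stylesheet.
// Only hex-code declarations should match; the keyword form should not.
var css = 'p { background-color: #FF0000; } ' +
          'div { background-color: red; } ' +
          'span { background-color: #00FF00; }';

var matches = css.match(/background-color: #[0-9A-F]+;/g);
console.log(matches); // the two hex declarations, not the keyword one
```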

Figure 2: Table wizard dialogue box
Figure 3: Open file and print

Syntax highlighting

Bluefish supports multiple programming, scripting and mark-up languages, which means it can identify the syntax (grammar) of each language and also give visual clues to programmers. This helps in readability and in finding syntax errors. Some popular languages supported by Bluefish are listed below:
ASP .NET and VBS
C/C++
CSS
CFML
Clojure
D
HTML, XHTML and HTML5
Java and JSP
JavaScript and jQuery
MediaWiki, Wordpress
Perl
PHP
Python
R
Ruby
Shell
Scheme
SQL
XML

Auto completion

For all the programming languages that Bluefish supports, the auto completion functionality is available. This makes coding much easier and also less error prone. After typing the first letter of the keyword, the editor gives a dropdown list of suggestions.

Dialogues and wizards

In Web development, especially, mistakes in scripting or mark-up can lead to hours of debugging. To reduce the time spent on debugging, Bluefish comes with a feature to write code with the help of dialogue boxes and wizards.
For example, there is a dialogue box for creating an HTML table. The user needs to enter a few values like the dimensions and pick values like colour. This wizard also gives an exhaustive list of attributes, which enables programmers to use the right attribute to achieve the desired result.

Support for remote files like FTP, SFTP, HTTP and WebDAV

This feature is seen on Linux systems. Bluefish opens remote files using the FTP, SFTP and HTTP protocols, and offers the same convenience as for local files. You can view and also modify remote files. This feature is useful for Web development and for publishing it online.

Snippets

In programming, we come across frequently used chunks of text. Some examples of such text in Web programming could be: opening a database connection, creating a table, opening a file and reading it. The following snippet is generated when a user chooses PHP -> File -> Open and Print.

$fd = fopen("README", "r");
while (!feof($fd)) {
$buffer = fgets($fd, 4096);
echo $buffer;
}
fclose($fd);

To maximise the benefit of this feature, you can enforce coding standards and also train newcomers by creating sets of snippets frequently used in projects. While one of the programmers creates the set of snippets and exports it into a file, other programmers can import it into their editors for coding. Creating snippets is extremely simple.

Auto recovery of files

This feature avoids loss of text or work due to system crashes. There is a temporary file that is frequently saved while editing is going on. If the system crashes before the next file-save, Bluefish recovers files from the temporary store and ensures changes are not lost.

By: Janardhan Revuru


The author is fond of productivity tools and encourages programmers
to not only use, but to contribute to open source projects. He is
currently working at Hewlett Packard in Bengaluru.

Let's Try

Developers

Object Oriented Programming with JavaScript

For many of us, JavaScript is a language that is used to manipulate HTML DOM elements only. But it also has object-oriented characteristics. Read on to learn more about this.

JavaScript helps in adding a dynamic nature to HTML pages. It is based on ECMAScript, which is the scripting language standardised by Ecma International in the ECMA-262 specification and ISO/IEC 16262. Most modern browsers come with ECMAScript 5+.
JavaScript is mainly associated with DOM manipulation. In the DOM, the window is the root object of the page and all your other objects like document, navigator, etc, come under it.

Functions

As we all know, a function is a block of code that can be used again and again. In JavaScript, there is no concept of classes. In the following pages we will look at how a function can also act as a class. A function in JavaScript can act as a simple function as well as a constructor for an object.

Ways of creating functions

A simple function can be created as follows:

function info()
{
console.log('Linux For You is now Open Source For You');
}

We can also create the above function by assigning it to a variable:

var info=function()
{
console.log('Linux For You is now Open Source For You');
}

In the above case, if you print out typeof(info), it will give you 'function' as its result.
There is a special function in JavaScript known as a self-executing function. This will get executed after your full DOM tree gets loaded in the memory. So if you want to write a piece of code that holds HTML elements in it, you can write it within the self-executing function. Given below is an example of this:
(function()
{
//function code
})(arguments)
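To see the pattern in action, the sketch below (values invented for illustration) passes an argument into a self-executing function and captures its result:

```javascript
// The function is defined and invoked in a single expression; whatever is
// placed in the trailing parentheses arrives as its parameters.
var result = (function(magazine) {
    return magazine + ' is now Open Source For You';
})('Linux For You');

console.log(result); // Linux For You is now Open Source For You
```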

Note: All the examples in this article are according to ECMAScript 5+. Make sure your browsers implement the same.

Functions as first class citizens

This is a very common term in JavaScript for functions. You will come across this term on the Web many times while searching for JavaScript. Functions are considered first class citizens because they possess the following properties:
1. Can be assigned to a variable
2. Can be passed as an argument
3. Can be returned from another function

//Can be passed as an argument
function sum(a,b)
{
return a+b;
}
sum(add(5,6));

//Can be returned from another function
function sum(a,b)
{
return function(a,b)
{
return a+b;
}
}

Closures

You can also use a function within a function. That's right! This is possible in JavaScript. Any function (parent) can contain another child function.

function parent()
{
function child()
{
// child will have access to all the parent variables
}
}

The good part is that all the variables declared within your parent function can also be accessed by your child function. Closure is a separate space in the memory that will be created whenever a child function resides inside the parent function. Closures will contain all the variables of the parent function, so whenever the child gets access to that variable, it can start its execution. In simple terms, it's like first the parent execution will take place and then the child execution. Memory of the parent function will not be destroyed after its execution. It will still be available for its child function.

Scope

The scope of a variable is quite important in JavaScript and it is slightly different from other programming languages.
Local variable: If a variable is declared with the keyword var inside a function, we call it a local variable. The scope of the variable is limited to that function only.
Global variable: If you declare any variable without the keyword var, it is considered a global variable. That variable will be part of your root object, i.e., the window object. Memory will not be released after its execution. So it is not advisable to use global variables for any reason.
Note: If you declare any variable with var outside the function, it will also be considered as a global variable.

An example

//code
name='Open Source For You';
function info()
{
var name='Electronics For You';
console.log(name); //prints Electronics For You
}
info();
console.log(name); //prints Open Source For You

An example of closure

function info()
{
var magazineName='Open Source For You';
var name=function()
{
console.log(magazineName);
}
return name();
}
info(); //will print Open Source For You


In the above example, magazineName will get stored in a separate space known as the closure. So when you are returning name(), the name function will access the closure for all the local variables of the parent function info().
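Another closure in the same spirit: a counter function (a standard illustration, not from the article's own code) whose inner function keeps the parent's variable alive between calls:

```javascript
function makeCounter() {
    var count = 0;            // lives on in the closure after makeCounter() returns
    return function() {
        count = count + 1;    // each call still sees the same 'count'
        return count;
    };
}

var next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2
```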

The closure looping problem

You should be very careful while dealing with loops within a closure. Consider the example shown below:

var funcs = {};
for (var i = 0; i < 3; i++) { // let's create 3 functions
funcs[i] = function() { // and store them in funcs
console.log('My value: ' + i); // each should log its value
};
}
for (var j = 0; j < 3; j++) {
funcs[j](); // and now let's run each one to see
}

On execution of the above code, you will see the following output:
My value: 3
My value: 3
My value: 3
instead of the expected
My value: 0
My value: 1
My value: 2
This happens because, after the execution of the parent function, the value of i will be 3 in the closure space. So when the child function looks for the value of i, it will get 3 instead of the expected 0, 1, 2.
Note: JavaScript's scopes are function-level, not block-level, and creating a closure just means that the enclosing scope gets added to the lexical environment of the enclosed function.
Solution: You can correct the closure looping problem by creating a separate closure for every value of i, as follows:

var funcs = [];
for (var i = 0; i < 3; i++) {
funcs[i] = (function(index) {
return function() {
console.log('My value: ' + index);
}
})(i);
}
for (var j = 0; j < 3; j++) {
funcs[j]();
}

In the above code, for every iteration of i, a new closure will be created. So now if funcs[0] tries to access the value of i, it will get zero. The same holds for funcs[1] and funcs[2], and hence you will get your expected output.

Objects

Everything in JavaScript is an object. In fact, arrays, dates, strings and even numbers are considered to be objects. JavaScript is a prototype-based language and all the OOP concepts are directly implemented by the object itself.

Ways of creating an object

//Object Literals
var obj={};

This is the simplest way of creating an object in JavaScript. Objects in JavaScript can have properties and methods. To include the properties into the above object, simply bind them using the dot operator.

obj.firstName='Ankur';
obj.lastName='Aggarwal';
obj.fullName=function()
{
return obj.firstName+obj.lastName;
}
obj.fullName(); // returns 'AnkurAggarwal'

//new keyword
function info()
{
this.firstName='Ankur';
this.lastName='Aggarwal';
this.fullName=function()
{
return this.firstName+this.lastName;
}
}
var obj=new info();

Figure 1: An example of inheritance
Figure 2: Graphical representation

Another way of creating the object is by using the new keyword (inheritance). While you are creating the object using the new keyword, the function will behave as a constructor.
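ECMAScript 5 also offers Object.create(), which sets up the prototype link directly; the sketch below reuses the articleInfo and osfy names from Figure 1, with made-up property values:

```javascript
// Object.create() returns a new object whose internal prototype points at
// the object passed in, so properties of articleInfo are inherited by osfy.
var articleInfo = { authorName: 'Anupriya Sharma' };
var osfy = Object.create(articleInfo);
osfy.name = 'Open Source For You';

console.log(osfy.name);       // own property of osfy
console.log(osfy.authorName); // found on the prototype (articleInfo)
```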

Inheritance

__proto__: Every object in JavaScript will have a property called __proto__, which will always point to the parent object. So in the above example, obj.__proto__ is pointing to info.constructor.
Prototype: Every function in JavaScript will contain some constructor properties. Further, constructor properties will also have prototype properties. This prototype property is the basis of inheritance, and hence inheritance in JavaScript is referred to as prototypal inheritance.
Consider Figure 1 as an example. In the code shown in Figure 1, osfy is the object in which __proto__ of osfy is pointing to the prototype of articleInfo. All the constructor properties as well as prototype properties of articleInfo are now inherited by osfy. Check out the last line, osfy.__proto__. It clearly shows that it has got all the properties of articleInfo and is pointing to the same object, including its constructor and prototype properties.
A graphical representation of the above example is shown in Figure 2.

Prototype chaining

Prototype chaining explains how inheritance works in JavaScript. Consider Figure 2 for an explanation. Let's suppose you are looking for the name property on the osfy object. It will first look in its own properties. If the browser engine is not able to find it in the osfy object, then it will move to its __proto__, which is pointing to the articleInfo object. Then it will look into its constructor properties and there it will find the required name property. If the engine is unable to find the property in the constructor properties, it will also look into the prototype properties. So it will keep on looking until it finds null.

An example

function person()
{
this.firstName='Ankur';
this.lastName='Aggarwal';
}
person.prototype.firstName='Anil';
var p=new person();
p.firstName; //prints Ankur
delete p.firstName;
p.firstName; //prints Anil

From the above example it is clear that the flow is:
Object Properties -> Parent Object -> Constructor Properties -> Prototype Properties -> ... -> null
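The same lookup order can be inspected with hasOwnProperty(), which distinguishes an object's own properties from inherited ones (a quick sketch reusing the person example above):

```javascript
function person() {
    this.firstName = 'Ankur';               // own property of each instance
}
person.prototype.lastName = 'Aggarwal';     // shared via the prototype

var p = new person();
console.log(p.hasOwnProperty('firstName')); // true  - found on p itself
console.log(p.hasOwnProperty('lastName'));  // false - found via the chain
console.log(p.lastName);                    // Aggarwal
```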
These are the basics of JavaScript's object-oriented nature, which might have come as a surprise to you.
According to me, JavaScript is the most misunderstood
language in the computing world. A few months ago, I
was using JavaScript just for DOM manipulation but after
getting an in-depth knowledge of it, I am in love with it.
I can say that JavaScript is a real programming language.
So just give it a try and you will come to know about the
good parts of JavaScript. Queries and suggestions are
always welcome.
By: Ankur Aggarwal
The author can be contacted at ankur.aggarwal2390@gmail.com or followed on Twitter at www.twitter.com/ankurtwi. He blogs at www.flossstuff.wordpress.com.

Let's Try

Developers

Write a Linux SPI Device Driver With Ease

This article describes how to write a Linux device driver for a SPI flash memory. However, many parts of the article are general in nature and therefore can be used as a reference while writing drivers for other SPI devices.

The Serial Peripheral Interface (SPI) bus is a synchronous serial data link standard that operates in full duplex mode. Devices communicate in master/slave mode, where the master device initiates the data frame. Multiple slave devices are allowed, with individual slave select (chip select) lines. The SPI is used to talk to a variety of peripherals such as sensors, control devices, flash memories, LCD displays, touchscreens, etc.
In Linux, flash memories are referred to as MTD devices (Memory Technology Devices). In this article, terms like an SPI device, SPI slave device, MTD device and SPI flash are used interchangeably.

Hardware architecture

The SPI devices like flash memories, touch-screens and


some authentication devices are usually not connected to the
CPU directly. Instead, another chip like the PCH (Platform
Controller Hub) mediates between the CPU and the SPI
device. The PCH is a family of Intel microchips. It is the
successor to the previous Intel Hub Architecture, which
used the northbridge and the southbridge instead, and first
appeared in the Intel 5 Series.
The PCH controls certain data paths and support
functions used in conjunction with Intel CPUs. The I/O
functions are reassigned between this new central hub and
the CPU in comparison to the previous architecture, wherein
some northbridge functions like the memory controller and
PCI-e lanes were integrated into the CPU while the PCH took
over the remaining functions in addition to the traditional
roles of the southbridge.
Figure 1 shows a typical hardware connection. The
CPU talks to the PCH through the PCI interface. The
PCH contains a SPI controller that talks to the flash chip
through the SPI interface.

Driver software architecture

Linux uses a layered architecture to provide access to all MTD
devices, including NAND flashes, NOR flashes and SPI flashes.
Figure 2 shows one such example.
As shown in Figure 2, to access
the SPI flash chip, several layers like
mtdchar, the MTD subsystem, chip driver,
SPI framework and SPI controller are involved.
Although it may look complex initially, the layered design
makes it simple and possible to access any kind of MTD
device in any manner. For example, if an MTD device is to
be accessed as a block device rather than a character device,
then the mtdchar layer can be replaced by the Linux VFS,
Block I/O and blockdev layers. The blockdev layer provides
a flash translation layer that emulates the MTD device as a
block device. Therefore, it is possible to use disk-based file
systems like ext2 and FAT on the MTD device. Similarly, if
an SPI flash chip is replaced by a CFI flash chip, then the
chip driver layer is changed to an appropriate chip driver
(like cfi_cmdset_0001.c). In the same manner, if the PCH SPI
controller is replaced by a different SPI controller, then only
the SPI controller layer needs to be changed. Therefore, the layered
architecture provides reusable components that make it
easy to access any kind of MTD device. The only layer that
is never changed is the MTD sub-system layer, which is
responsible for providing a unified and uniform layer that
enables a seamless combination of low-level MTD chip
drivers with higher-level interfaces.
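On a live system, this layering surfaces in userspace: each MTD partition registered through the stack appears in /proc/mtd and as /dev/mtdX and /dev/mtdblockX nodes. A quick way to look, sketched below (output is entirely hardware dependent; on a machine with no MTD devices the fallback message is printed instead):

```shell
# List the MTD partitions known to the kernel, if any.
cat /proc/mtd 2>/dev/null || echo "no MTD devices on this machine"

# The corresponding character and block device nodes.
ls -l /dev/mtd* 2>/dev/null || true
```

On a board with the flash registered, each line of /proc/mtd shows the device name, size, erase size and partition name.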
Each layer has an assigned job. The chip driver has the
responsibility of implementing the chip-specific protocol.
Therefore, there are different chip drivers for SPI flashes
and CFI flashes because each of them follows a different
protocol. This protocol consists of command sets and
opcodes understood by the flash chip. The SPI framework
is a common layer for all SPI devices. This layer has the
responsibility of mediating between the chip driver and the
controller driver, as you will see later in this article. The SPI
controller layer is responsible for the actual communication with
the SPI device. Therefore, for every unique SPI controller, there
has to be an SPI controller driver. So let's focus on such SPI
controller drivers for SPI flash memory.
Figure 1: Hardware architecture (the CPU talks to the PCH over the PCI interface; the PCH talks to the flash chip over the SPI interface)

Figure 2: Software architecture (MTD Utils -> mtdchar (mtdchar.c) -> Linux MTD Layer (mtd.c) -> Chip Driver (m25p80.c) -> SPI Core Framework (spi.c) -> SPI Controller Driver (atmel_spi.c) -> Memory Controller -> SPI Flash Device)

SPI controller driver parts

The SPI controller driver described here consists of three parts:
A board mapping driver
A PCI driver
An SPI slave driver

The board mapping driver has the responsibility of registering flash chip information with the SPI framework. The PCI driver has the responsibility of identifying the SPI controller on the PCH and initialising things like memory mapping or interrupt registration. The SPI slave driver is responsible for initialising and communicating with the flash chip.

SPI controller driver: Initialisation

The board mapping driver and PCI driver mentioned earlier take care of initialisation. The initialisation includes the following activities, at the very least:
Flash chip information registration to the SPI framework
SPI master registration to the SPI framework
SPI slave device creation by the SPI framework
Registration to the MTD sub-system

Flash chip information registration to the SPI framework

The board mapping driver is responsible for registering flash information with the SPI framework. Ideally, this initialisation should be done in the arch/x86/ directory during early initialisation. This information is persistent until system reboot. Therefore, if this information is registered by a kernel module, then the module must not be removed and re-inserted. The easiest way to do this is to write a module without the cleanup_module() function. Since this information is part of system initialisation, the board mapping driver module must be loaded before the SPI controller and chip drivers.

The role of the board mapping driver is limited to registering partition information. As shown in the code snippet, the board mapping driver registers partition information using the spi_register_board_info() function.
struct mtd_partition pch_spi_flash_partitions[] = {
	{
		.name   = "partition1",
		.offset = 0,                  /* Offset = 0x0 */
		.size   = 4 * 1024 * 1024,
	},
	{
		.name   = "partition2",
		.offset = MTDPART_OFS_APPEND, /* Offset = 4MB */
		.size   = MTDPART_SIZ_FULL,
	},
};

const struct flash_platform_data pch_spi_flash = {
	.name     = "spi_flash",
	.parts    = pch_spi_flash_partitions,
	.nr_parts = ARRAY_SIZE(pch_spi_flash_partitions),
	.type     = "w25q64",
};

static struct spi_board_info pch_spi_slaves[] = {
	{
		.modalias      = "m25p80",   /* Name of spi_driver for this device */
		.max_speed_hz  = 33000000,   /* max SPI clock (SCK) speed in Hz */
		.bus_num       = 0,          /* Framework bus number */
		.chip_select   = 0,          /* Framework chip select */
		.platform_data = &pch_spi_flash,
		.mode          = SPI_MODE_0,
	},
};

spi_register_board_info(pch_spi_slaves, ARRAY_SIZE(pch_spi_slaves));

This mapping driver registers flash chip information
with the SPI framework. The modalias string must be set
to the name of the chip driver that is going to control the flash.
You can verify that m25p80.c registers itself with the same
name. The max_speed_hz is a flash chip parameter and must
be obtained from the hardware manual. The bus_num
is the SPI bus number. The chip_select can be 0 or 1; it is
used by an SPI master to control two SPI slaves. This
information is specific to the board architecture. The
platform_data will be used by the chip driver and the MTD
sub-system for chip identification and partitioning.

The spi_register_board_info() function stores the flash
information in a global linked list pointed to by board_list. Later,
this list is scanned by spi_match_master_to_boardinfo() to
check if any of the slaves can be claimed by the master.

SPI master registration to the SPI framework

The PCI driver takes care of identifying the PCH and registering the SPI master. Remember, the SPI controller is a part of the PCH; therefore, the PCI driver has to identify its PCH first. Readers can refer to my article http://linuxgazette.net/156/jangir.html to learn about PCI device identification and initialisation. This article assumes that pDev is pointing to the PCH PCI device.

Register the SPI master through the PCI driver:

/* allocate SPI master */
struct spi_master *pmaster = spi_alloc_master(&pDev->dev,
		sizeof(struct drv_private_data));

/* initialize SPI master */
pmaster->bus_num = 0;
pmaster->num_chipselect = 0;
pmaster->setup = pch_spi_setup;
pmaster->transfer = pch_spi_transfer;
pmaster->cleanup = pch_spi_cleanup;

/* initialize driver private data */
struct drv_private_data *priv = spi_master_get_devdata(pmaster);

/* Register the controller with the SPI core. */
spi_register_master(pmaster);

The code snippet shown above registers an SPI master with the SPI framework. This SPI master, represented by struct spi_master, stands for the SPI master controller on the PCH. It has the responsibility of managing SPI slave devices like the SPI flash.

Every SPI master has to register three methods with the SPI framework. These methods are setup, transfer and cleanup. Each of these methods is called when an SPI slave device is created, operated on and removed, respectively.

SPI slave device creation by the SPI framework

After the board driver and PCI driver have registered flash information (SPI slave device information) and have created and registered the SPI master, the SPI framework creates slave SPI devices. The slave SPI devices are represented by struct spi_device.

During spi_register_master execution, the SPI framework checks if there is any SPI slave device that is hooked to the same bus number registered by the SPI master. If such a device is found, the SPI framework creates slave SPI devices and registers those devices with the device subsystem. This is done by a call to the spi_match_master_to_boardinfo() function (from the spi_register_master function).

SPI slave devices are created by the SPI framework in the following way:

void spi_match_master_to_boardinfo(struct spi_master *master,
		struct spi_board_info *bi)
{
	struct spi_device *dev;

	if (master->bus_num != bi->bus_num)
		return;

	dev = spi_new_device(master, bi);
}

Registration to the MTD sub-system

Now that the SPI slave device is registered with the device
sub-system, an MTD device is registered with the MTD
sub-system so that the flash can be accessed through MTD
interfaces like /dev/mtd and /dev/mtdblock. These MTD devices
are used by MTD utilities like flashcp, flash_eraseall, etc. But this
registration is done neither by the board driver nor by the PCI
driver. It is done by the chip driver (m25p80.c). The probe
function of the chip driver is invoked by the SPI framework
whenever it creates an SPI slave device. The modalias field
of the SPI slave is used to find an appropriate chip driver, and


then the probe function of the chip driver is called.
The chip driver probe function receives platform_data
information registered by the spi_register_board_info()
function. This information is used by the chip driver to define
partitions of the flash that are available through /proc/mtd.
You can arrive at the skeleton of a typical SPI chip driver
as follows:
int m25p_probe(struct spi_device *spi)
{
	struct flash_platform_data *data;
	struct flash_info *info;

	data = spi->dev.platform_data;

	/*
	 * Find the actual chip from the list of chips stored in m25p_ids[].
	 * A sample entry in m25p_ids[] looks like:
	 *   { "m25pe80", INFO(0x208014, 0, 64 * 1024, 16, 0) },
	 * The info pointer is assigned to the entry of the matching chip.
	 */

	/* create and initialize MTD device */
	struct mtd_info *mtd = kzalloc(sizeof(struct mtd_info), GFP_KERNEL);
	mtd->name       = data->name;
	mtd->type       = MTD_NORFLASH;
	mtd->writesize  = 1;
	mtd->flags      = MTD_CAP_NORFLASH;
	mtd->size       = info->sector_size * info->n_sectors;
	mtd->erase      = m25p80_erase;
	mtd->read       = m25p80_read;
	mtd->write      = m25p80_write;
	mtd->erasesize  = info->sector_size;
	mtd->dev.parent = &spi->dev;

	/* register MTD device */
	struct mtd_partition *parts = data->parts;
	int nr_parts = data->nr_parts;
	return add_mtd_partitions(mtd, parts, nr_parts);
}

At this stage, the SPI slave device is ready to be accessed by MTD utils. Also, the /proc/mtd file provides some useful information about the MTD device.

The SPI controller driver: Operation

In this section, let us examine the data transfer to the SPI slave device. There are two important things to understand here:
Communication between the chip driver and the SPI controller driver
The transfer method of the SPI controller driver

Communication between the chip driver and SPI controller driver: The communication between the chip driver and the SPI controller driver takes place using two important structures: (i) struct spi_message; (ii) struct spi_transfer.

The spi_message represents a single SPI message between the chip driver and the controller driver. An SPI message may contain one or two SPI transfers. An SPI transfer represents one operation. For example, when the chip driver wants to read from the SPI flash, it has to create one SPI transfer for the write command and a second SPI transfer for the buffer to read data into. These two transfers are embedded in one SPI message, and this SPI message is sent to the SPI framework. The SPI framework delivers this SPI message to the transfer method of the SPI master. Therefore, the SPI transfers in SPI messages can be referred to as 'CMD transfers' and 'RSP transfers'. Each SPI transfer contains two buffers, tx_buf and rx_buf, which may be NULL if not relevant to the command. Table 1 shows a typical mapping.

Table 1: SPI command to SPI transfer buffer mapping

SPI command              CMD transfer (tx_buf / rx_buf)     RSP transfer (tx_buf / rx_buf)
Write Status Register    Buffer containing opcode / NULL    Buffer to write to status register / NULL
Page Program             Buffer containing opcode / NULL    Data buffer to be written to flash / NULL
Read Data Bytes          Buffer containing opcode / NULL    NULL / Buffer to read data bytes
Read Status Register     Buffer containing opcode / NULL    NULL / Buffer to read status register
Erase Whole Flash Chip   Buffer containing opcode / NULL    N/A / N/A
Sector Erase             Buffer containing opcode / NULL    N/A / N/A
Read JEDEC ID            Buffer containing opcode / NULL    NULL / Buffer to read JEDEC ID
Write Enable             Buffer containing opcode / NULL    N/A / N/A

As shown above, some SPI commands send only one SPI transfer (the CMD transfer) in the SPI message, while some send two SPI transfers (a CMD transfer and an RSP transfer) in the SPI message. The transfer method of the SPI controller should be ready to deal with this.

Let's look at an example of how the chip driver creates SPI messages and SPI transfers, and sends them to the SPI controller.

static int m25p80_read(struct mtd_info *mtd, loff_t from, size_t len,
		size_t *retlen, u_char *buf)
{
	struct m25p *flash = mtd_to_m25p(mtd);
	struct spi_transfer t[2];
	struct spi_message m;

	spi_message_init(&m);
	memset(t, 0, (sizeof t));

	/* setup CMD transfer */
	t[0].tx_buf = flash->command;
	t[0].len = 4; /* CMD size */
	flash->command[0] = OPCODE_READ;
	flash->command[1] = from >> 16;
	flash->command[2] = from >> 8;
	flash->command[3] = from;
	spi_message_add_tail(&t[0], &m);

	/* setup RSP transfer */
	t[1].rx_buf = buf;
	t[1].len = len;
	spi_message_add_tail(&t[1], &m);

	/* send command to SPI framework */
	spi_sync(flash->spi, &m);

	*retlen = m.actual_length - 4; /* don't count cmd size */
	return 0;
}

Now that it is clear how a chip driver creates SPI
messages and SPI transfers, and how those messages reach
the SPI master, let's look at the real data transfer.
The transfer method of SPI controller drivers:
The transfer method of the SPI controller driver is called
whenever any operation (read/ write/ erase/ status) is
performed on the slave device. As explained earlier, this
method receives SPI messages as an argument. The transfer
method has the responsibility of processing this SPI message
and carrying out the appropriate operation on the SPI slave
device. However, this method has a constraint in that it
cannot sleep. Therefore, it is very common that this method
only offloads the work to a work-queue and returns. The
work-queue function later completes the actual I/O.

The actual transfer

The work-queue function responsible for the actual transfer


is SPI slave dependent. You need to refer to the SPI slave
hardware manual to understand how exactly the data
transfer can be done. Some SPI slave devices are interrupt
driven, and are, therefore, capable of informing about I/O
completion through an interrupt; while some simple devices
have to be polled for completion.

The following transfer function shows a typical data transfer


from a Winbond flash chip W25Q64BV. This chip is controlled by
some registers that can be memory mapped by the PCI driver.
Read the Software Sequencing Flash Control Register to check if the slave device is free
Write the I/O address to the Flash Address Register
Write the operation code (SPI opcode) to the Opcode Type Configuration Register
Write opcode metadata to the Opcode Menu Configuration Register
Write opcode prefix data to the Prefix Opcode Configuration Register
Write data to the Flash Data Registers
Trigger the operation by writing to the Software Sequencing Flash Control Register
Read the Software Sequencing Flash Control Register to check if the operation is completed
Read data from the Flash Data Registers
As you see, this function has to perform chip-specific
work. Some steps in the above list may or may not be
applicable, based on the SPI opcode under process.
As described, there are two important things an SPI
controller driver should do. The first is to implement chip-specific
setup/transfer/cleanup functions, and the second is to
register with the Linux device/MTD/SPI sub-systems. The first
part is hardware specific, while the second part is common.
So far, this article has not mentioned the locking and
concurrency aspects. But it is very important to consider them
when working on SMP systems or working on a system with
more than one SPI slave device. There are two important cases
that a driver author should keep in mind.
1. The SPI controller driver should synchronise concurrent
requests to multiple SPI devices on the same bus. If two chip
drivers issue requests at the same time to two devices on the
same SPI bus, then the controller driver needs to guarantee
that one request waits until the other has finished. This kind
of synchronisation is usually implemented by the queuing
mechanism within the controller driver. The common queue
represents a unique resource that is protected via some sort
of locking. Only the driver that owns the lock can add new
requests to the queue, others will have to wait until the lock
is released. Therefore, it is very common to see work-queue
offloading and a queue in the transfer method.
2. The setup function is called as soon as any SPI slave device
is found. Therefore, it is possible that when the setup for a
new SPI slave device is called, the transfer method for some
old SPI slave device is in progress. Hence, the setup method
must not update any shared data structures.
By: Mohan Lal Jangir
The author, whose interests are the Linux kernel, data
networking, network processors and network security, is a senior
technical manager at Samsung Research India, Bengaluru, and
holds a Masters from IIT Delhi.

Admin
Simulate Your Network with NS2


Learn about Network Simulator-2, its installation and the execution of simple TCL
files of network architecture.

Network Simulator-2 (NS2) is a discrete event network
simulator that plays an important role in network
research and development. It is one of the most
widely used open source network simulators. It has been
enhanced over the years, leading to a complete package
consisting of various modules for functions such as routing,
the transport layer protocol and applications.
To observe network performance, an easy scripting
language can be used, by which you can configure the
network as per your architecture and observe the results to
check the correctness of your configuration.
NS2 plays the role of both emulator and simulator. The
latter contains three types of discrete event schedulers:
list, heap and hash-based calendar. NS2 provides default
implementations for network nodes, links between nodes,
routing algorithms, some transport level protocols (especially
UDP and TCP) and some traffic generators. It also contains
some useful utilities like the TCL debugger, simulation
scenario generator and simulation topology generator. The
TCL debugger is used to debug TCL scripts, which becomes
necessary if one is using large scripts to control a simulation.
It is also possible to use NS2 as an emulator, which is
currently supported by only FreeBSD. The NS2 emulator
can be used to connect the tool to a live network. The


NS2 emulator works in two modes: the protocol mode
and the opaque mode. In the protocol mode, the emulator
interprets received traffic, whereas in the opaque mode, the
received data is not interpreted.

An insight into the architecture of NS2

NS2 is primarily built on two languages: C++ and the Object-oriented
Tool Command Language (OTCL). C++ is used for
defining the internals of NS2, while OTCL is used to control
the simulation as well as to schedule discrete events. C++ and
OTCL are linked together using TCLCL. After linking
C++ member variables to OTCL object variables with the
bind member function, C++ variables can be modified directly
through OTCL. The main drawbacks of this approach are that
the user has to know both C++ and OTCL, and that the debugging
of simulations becomes more difficult.
After simulation, NS2 outputs either text-based or
animation-based simulation results. To interpret these
results graphically and interactively, tools such as NAM and
XGraph are used, which are explained in the next section.
To analyse some particular behaviour of the network, extract
a relevant subset of the text-based data and transform it into
a more understandable presentation. The basic architecture
is shown in Figure 1, pictorially.

Figure 1: Network Simulator 2 architecture (an OTcl script describing the topology and traffic conditions is fed to NS2, which produces an ASCII trace (file.tr) and a NAM trace (file.nam); the traces are then processed by analyser programs (awk, sed, Python or Perl scripts), plotting programs (gnuplot, xgraph) or the NAM animator)

Supporting tools with NS2

Let's explore the supporting tools that come with the NS2 installation, which help set up your environment and run your simulations.

NAM (Network AniMator): NAM is a TCL-based animation tool for viewing network simulation traces and real traces of packet data. To use NAM, you have to first generate a trace file that contains topological information like nodes, links and packet traces. Once the trace file is generated, NAM will read it, create a topology, pop up a window, do the layout if necessary, and then pause at the time of the first packet in the trace file. NAM provides control over many aspects of animation through its user interface, and does animation using the following building blocks: node, link, queue, packet, agent and monitor. The official NAM user manual can be found at http://www.isi.edu/nsnam/nam/

XGraph: XGraph is a plotting program that is used to create graphic representations of simulation results. It is important because it allows some basic animation of data sets. The animation only pages through data sets in the order in which they are loaded. It is quite crude, but useful if all the data sets are in one file in time order, and are put out at uniform intervals. Also, the code will take derivatives of your data numerically and display these in a new XGraph window.

NS2 functionalities

Wired world:
Routing: distance vector (DV), link state (LS) and multicast
Transport protocols: TCP, UDP, RTP and SCTP
Traffic sources: web, FTP, telnet, CBR and stochastic
Queuing disciplines: Drop-Tail, RED, FQ, SFQ and DRR
QoS: IntServ and DiffServ emulation

Wireless:
Ad hoc routing (AODV, DSDV) and mobile IP
Directed diffusion and sensor-MAC

Installing NS2 on Ubuntu

Download the latest release of the ns-allinone package from the official NS2 weblink http://www.isi.edu/nsnam/ns/ns-build.html

Open the terminal and change the working directory to where you downloaded the ns-allinone package. Now untar (uncompress) the package using the following command:

tar xzvf filename

Change the directory to ns-allinone-2.29, run the installation script using the following command and wait until the installation is successfully completed:

./install

After successfully installing the NS2 package, configure the .bashrc file, which is present in your home directory (/home/ns2username). Edit the .bashrc file using the following command:

gedit .bashrc

Set the following paths in the last lines of .bashrc:

export PATH="$PATH:/home/ns2userName/ns-allinone-2.29/bin:/home/ns2userName/ns-allinone-2.29/tcl8.4.11/unix:/home/ns2userName/ns-allinone-2.29/tk8.4.11/unix"
export LD_LIBRARY_PATH="/home/ns2userName/ns-allinone-2.29/otcl-1.11:/home/ns2userName/ns-allinone-2.29/lib"
export TCL_LIBRARY="/home/ns2userName/ns-allinone-2.29/tcl8.4.11/library"

If the paths are set correctly, open a new terminal and run the following in the home directory:

ns

If you see a % sign displayed in the terminal, congratulations, you have installed NS2 successfully! Now to validate the installation, run the following commands (optional):

cd ns-allinone-2.29/ns-2.29
./validate

Execution of a simple TCL file in NS2

In this section, let's cover the execution of TCL files for a simple network architecture.

# Create an NS2 simulator object
set ns [new Simulator]

# Open the trace file
set tracefile [open out.tr w]
$ns trace-all $tracefile

# Open the NAM trace file
set nf [open out.nam w]
$ns namtrace-all $nf

# The 'finish' procedure
proc finish {} {
    global ns tracefile nf
    $ns flush-trace
    close $nf
    close $tracefile
    exec nam out.nam &
    exit 0
}

# Create your topology: set up nodes n0 and n1 and
# make the link between node0 and node1
set n0 [$ns node]
set n1 [$ns node]
$ns simplex-link $n0 $n1 1Mb 10ms DropTail

# Create your agents: the nodes will follow TCP with
# CBR (Constant Bit Rate) traffic
set tcp0 [new Agent/TCP]
$ns attach-agent $n0 $tcp0
set cbr0 [new Application/Traffic/CBR]
$cbr0 attach-agent $tcp0
set tcpsink0 [new Agent/TCPSink]
$ns attach-agent $n1 $tcpsink0
$ns connect $tcp0 $tcpsink0

# Scheduling events: start the traffic at 1.0 and finish at 3.0
$ns at 1.0 "$cbr0 start"
$ns at 3.0 "finish"

# Start the simulation
$ns run

Figure 2: Execution of a simple TCL file

This is a basic example of a .tcl file, where you create two nodes and observe the transfer of packets according to the Transmission Control Protocol (TCP) at a Constant Bit Rate (CBR), which is a traffic generator. In this example, we have used TCP, but other protocols and traffic sources can be used depending on your network architecture.

Execution of the .tcl file in NS2

.tcl files contain the code of the network simulation and can be executed by following the steps given below. Once you have installed NS2 successfully, open a terminal and run the following:

ns filename.tcl

Once you run that command, .tr and .nam files will be created in the same directory that contains the .tcl file. To interpret the results in the form of animation, you just need to run the command given below:

nam filename.nam

And to see the results in numerical form, open the .tr file in an editor:

gedit filename.tr

The main aim of this article was to give readers an insight into the architecture and working of NS2, and how it can be used to simulate complex networking problems with simple programming syntax. As the next step, start by designing a different network architecture consisting of three nodes, write a simple script using the reference given above and check its execution. This will give you a good idea about writing and executing scripts in NS2. Check the References section for more details about simulating your network using Network Simulator 2.
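The ASCII .tr traces are plain text, one event per line, so they can be analysed with standard tools such as awk. A sketch follows; the sample trace is hand-written here in the classic wired trace format (event, time, from-node, to-node, packet type, packet size, ...):

```shell
# Create a tiny sample trace in NS2's wired trace format.
cat > sample.tr <<'EOF'
+ 1.00 0 1 tcp 1040 ------- 1 0.0 1.0 0 0
r 1.01 0 1 tcp 1040 ------- 1 0.0 1.0 0 0
+ 1.10 0 1 tcp 1040 ------- 1 0.0 1.0 1 1
r 1.11 0 1 tcp 1040 ------- 1 0.0 1.0 1 1
EOF

# Total bytes received ('r' events) at node 1:
awk '$1 == "r" && $4 == 1 { bytes += $6 } END { print bytes }' sample.tr
# prints: 2080
```

Dividing such byte counts by the simulation interval gives a simple throughput figure, which can then be plotted with XGraph or gnuplot.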


References
[1] Introduction to Network Simulator 2, Springer Publications.
[2] The NS2 project page: http://nsnam.isi.edu/nsnam/index.php/User_Information
[3] For more examples, visit www.nsnam.com

By: Naman Jain and Tejaswi Agarwal


Naman Jain is keenly interested in open source programming
and has a good command over Linux systems administration.
His research interests lie in the areas of computer networks,
simulations, and cloud computing. You can contact him at
naman.jain2010@vit.ac.in
Tejaswi Agarwal is a FOSS enthusiast, who is passionate about
compute power utilisation, run time and memory utilisation of
algorithms. Computer architecture, parallel programming and
performance engineering are some of his research areas. He can
be contacted at tejaswi.agarwal2010@vit.ac.in

Overview

Securing the SSH Service


The SSH service is very widely used in open source infrastructure set-ups. Though SSH
stands for Secure Shell, it is found to be highly vulnerable to security attacks if sufficient
care is not taken by systems administrators during the installation and configuration.
Read on to learn more about such challenges and solutions.

Due to its small footprint on the network, as well as the
ease of installation and maintenance, SSH replaces
many remote shells in modern data centres. Before
we talk about securing SSH, it is imperative to understand
how the protocol works, and the bells and whistles provided
for security configurations.

How SSH works

Please refer to Figure 1, which shows the protocol stack


that forms the SSH services. The Secure Shell (SSH)
protocol runs as a daemon service on Linux servers, similar
to Telnet, while the client uses an SSH client utility such
as Putty to connect to the server. SSH is available on
Windows as well as UNIX platforms, and is widely used
in Linux infrastructures. By default, it uses TCP port 22
for communication. Unlike Telnet, SSH uses cryptography
for authenticating the client and the server, and also for
data transfer purposes, thus ensuring data confidentiality
and data integrity. Its communication has three basic
steps: the client-server handshake, authentication and
secure data exchange. During the handshake phase, both
the sides exchange information about the
SSH protocol version and the cipher
algorithms they support (which
are typically the combinations of

asymmetric and symmetric encryption, as well as hashing


algorithms) and compression algorithms. Unlike SSL, in
SSH the server sends the first data block to the client.
As for authentication, the server is authenticated using a
host key, while the client typically stores the key fingerprint
at some predefined location and validates it later in the
process. Please see Table 1, which shows the supported client
authentication methods.
Table 1

Client authentication method   Description
Public key authentication      The client and server have key pairs and exchange public keys during the authentication process
Password authentication        A plain text password for the given login user is used for authentication
Host-based authentication      Limits client access to a particular host/hosts
Keyboard authentication        Works on the basis of pre-stored security questions and answers
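Building on Table 1, the usual hardening step is to allow only public key authentication. A sketch of the relevant sshd_config lines follows; the option names are standard OpenSSH, while the policy itself is one reasonable choice, not the only one:

```
# /etc/ssh/sshd_config -- accept keys, refuse passwords
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```

After editing, reload the SSH daemon for the change to take effect, and make sure a working key is already installed for your user, or you will lock yourself out.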

As for the SSH configuration, it is important to remember
the files and their paths (Table 2).

Figure 1: The SSH protocol stack (the SSH User Authentication Protocol, the SSH Connection Protocol and the SSH Transport Layer Protocol, which provides server authentication, data security and compression, sit above TCP/IP)

Table 2

File                   Purpose
/etc/ssh/sshd_config   Server configuration file
/etc/ssh/ssh_config    Client configuration file
/etc/hosts.allow       Lists all hosts to be allowed to connect
/etc/hosts.deny        Lists all hosts to be denied
/etc/nologin           Presence of this file refuses all users to connect except root

Securing the SSH service

Now let's look at how we can use the configuration
information from the table to secure the daemon service.
Let's go through each relevant setting and figure out what
values they need to be set to, in order to achieve the desired
level of security.
Targeting defaults is the first line of defence. It is a common
practice to change the default SSH port, since this makes it
harder for attackers to guess the port at which the service is running.
Though it doesn't necessarily stop attacks, it simply makes the
attacker's life difficult and hence acts as an important config
setting. Root login using passwords should be disabled;
instead, the root user should be configured to accept
cryptographic authentication only. Similarly, access should be
denied to all users by default, with only whitelisted users
allowed, and only SSH protocol version 2 should be configured.
The following settings in the sshd_config file achieve this:

# Only use protocol version 2
Protocol 2
# Listen on port 2200 instead of the default 22
Port 2200
# Whitelist only the allowed users
AllowUsers prashant rajesh
# Optionally deny specific users
DenyUsers mayur
# No root login
PermitRootLogin no

Once these defaults are set up, the next step is to lock
down the SSH service at the network level. The correct
thing to do is to allow the SSH service to listen on only a
particular IP address or a range in terms of the subnet. It is
also advisable to bind the SSH service to a specific address
of the machine. So, in short, we are setting up the 'to' and
'from' addresses for the SSH service's network traffic, to
ensure no other IP addresses participate in the secure shell
communication process. The former setting is done in the
sshd_config file, while the latter goes in the hosts.allow file,
and is also called TCP wrapping.

# Listen only on the 192.168.1.5 local IP address
ListenAddress 192.168.1.5
# Allow connections only from this IP
sshd : 10.0.0.1
Once the network boundaries are set, the next step is to control hosts and authentication methods. You need to
ensure that empty passwords are not allowed for any user.
In case of a better security set-up, password authentication
should be disabled altogether, and only the public and
private key pair-based authentication scheme should be
in place. Depending on the security situation, sysadmins
might want to disable host-based authentication and ignore
remote hosts. Besides, when users log in, they should be
presented with a text banner, which is usually done as part of
compliance policy enforcement. Prior to the user login, it is
possible to set up a time limit, for which the SSHD service
should wait for the user to actually log in. Also, if the user
is leaving the SSH session idle after logging in, there is
a mechanism to log the session out. This is important to
avoid idle sessions that attackers typically look for. For a
given connection, SSH can be instructed to shut down the
connection if a certain number of unsuccessful attempts
are made. This helps to thwart brute force to some extent.
The SSH daemon can also be instructed to perform a sanity
check on the keyfiles and SSH directories, to ensure these
are not tampered with.
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Disable the use of passwords completely, only use public/private keys
PasswordAuthentication no
# Ignore remote hosts
IgnoreRhosts yes
# Ensure that a banner shows up
Banner /etc/issue
# Wait for the user to log in for 60 seconds
LoginGraceTime 60
# Set the idle timeout to 4 minutes
ClientAliveInterval 240
ClientAliveCountMax 0
# Set maximum auth attempts for a connection
MaxAuthTries 4
# Force permissions checks on keyfiles and directories
StrictModes yes
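After editing, the whole file can be validated with `sshd -t` before restarting the service. The settings themselves can also be audited with ordinary text tools; the sketch below generates a sample file for illustration, and on a real server you would point it at /etc/ssh/sshd_config instead:

```shell
# Quick audit sketch: check an sshd_config for the hardened values set above.
# A sample file is generated here purely for illustration.
cat > /tmp/sshd_config <<'EOF'
Protocol 2
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
StrictModes yes
EOF
# Report any directive that deviates from the hardened value
for directive in 'PermitRootLogin no' 'PasswordAuthentication no' \
                 'PermitEmptyPasswords no' 'StrictModes yes'; do
  if grep -qix "$directive" /tmp/sshd_config; then
    echo "OK: $directive"
  else
    echo "CHECK: $directive"
  fi
done
```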

At this point, the SSH service has been secured to a
great extent. However, it is essential to configure
monitoring and logging of the service, which gives
administrators a view of the operations of the SSHD service.
The following settings are self-explanatory and should be
tweaked as per requirements. The log level can be tuned
from INFO up to DEBUG1, DEBUG2 or DEBUG3 in order to debug a particular
SSHD-service-related problem or intrusion attack.
# Syslog Logging
SyslogFacility AUTH
# set log level
LogLevel INFO
# Print most recent user login time
PrintLastLog yes
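Once logging is on, failed login attempts can be summarised per source IP with standard tools. The log lines below are samples written for illustration; on a real server you would read /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (Red Hat):

```shell
# Sketch: count failed SSH logins per source IP from an auth log.
# Sample log lines (with made-up hosts and IPs) are generated here.
cat > /tmp/auth.log <<'EOF'
Jun  1 10:00:01 host sshd[111]: Failed password for root from 10.0.0.7 port 4022 ssh2
Jun  1 10:00:05 host sshd[112]: Failed password for invalid user bob from 10.0.0.7 port 4023 ssh2
Jun  1 10:01:09 host sshd[113]: Accepted publickey for prashant from 192.168.1.9 port 4100 ssh2
EOF
# Pick the word after "from", then count occurrences per IP
grep 'Failed password' /tmp/auth.log |
  awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1)}' |
  sort | uniq -c | sort -rn
# prints "2 10.0.0.7" (with uniq -c's leading padding)
```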

Besides the above settings, there are configuration
tweaks available for advanced SSH set-ups, such as key-based
authentication, password-based authentication, etc.
It is important to note that while most of the SSHD
settings follow standards, a few distros follow different
configuration keywords, and the help manual should be
referred to in such cases.
As seen earlier, while the SSH service offers many
important settings by default, these may not be enough
for advanced IT infrastructures. Introducing two-factor
authentication, host-based private and public key-pair-based
security, and higher encryption standards for data in
transit between server and client are a few examples of the
security mechanisms to be considered. Having a firewall and
an intrusion prevention system is also advised for mission-critical
data centres running the SSH service on servers that
carry important business data or applications. Open source
utilities such as fail2ban or DenyHosts should be used to
deal with SSH brute force attacks. Regular operating system
level patching and SSH service version patching is extremely
important for maximum security.
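As an example of the fail2ban approach, a minimal jail sketch is shown below. The file is written to /tmp purely for illustration; the real location is /etc/fail2ban/jail.local, the jail/filter name may differ across distros, and the numeric values are assumptions to adjust to your policy:

```shell
# Minimal fail2ban jail sketch: ban a source for an hour after 4 failures.
# Written to /tmp for illustration; real path is /etc/fail2ban/jail.local.
cat > /tmp/jail.local <<'EOF'
[sshd]
enabled  = true
port     = ssh
maxretry = 4
bantime  = 3600
EOF
cat /tmp/jail.local
```

If SSH runs on a non-default port, set `port` accordingly so the bans match the right traffic.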
By: Prashant Phatak
The author has over 22 years of experience in the field of IT hardware, networking, Web technologies and IT security. Prashant runs his own firm called Valency Networks in India (www.valencynetworks.com), providing consultancy in IT security design, security penetration testing, IT audits, infrastructure technology and business process management. He can be reached at prashant@valencynetworks.com.

Let's Try

Open Gurus

Access Your Raspberry Pi Remotely with VNC

Raspberry Pi users who don't have an HDMI enabled monitor and want to remotely access
its graphical desktop can set up a VNC server on Raspberry Pi and remotely connect to
its desktop from an Ubuntu machine.

Raspberry Pi can be connected to an HDMI
enabled monitor or TV to view the desktop
of the Raspbian operating system. But if you
don't have an HDMI supported device and would like
to remotely access the Raspberry Pi graphical desktop,
Virtual Network Computing (VNC) is the way to go. You
would need to set up a VNC server on Raspberry Pi and
remotely connect to it from your host via a VNC viewer. I
will describe the steps to set up the server.
1. After you have written the Raspbian image onto the SD
card, place the Raspberry Pi in your local network and
allow it to connect to the Internet via a router or modem.
As you don't have display hardware for your Pi, use the
Linux nmap (Network Mapper) utility to get its IP address
on your network.
On your Linux host machine, e.g., Ubuntu, type the
following command:

#sudo apt-get install nmap

Scan the IP addresses on your local network as follows:

#nmap -sV -p 22 192.168.1.1-255

Note: Use the IP address range that is compatible
with your network (I am assuming that the network has a
working DHCP server running).
The results will display every machine that can be
identified on port 22. The Raspberry Pi (running Raspbian)
may show up as follows:
Nmap scan report for raspberrypi.Home (192.168.1.xxx)
Host is up (0.012s latency).
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 6.0p1 Debian 3 (protocol 2.0)
Service Info: OS: Linux
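With the report saved to a file, the Pi's address can be pulled out with standard text tools. The report lines below are a sample with a made-up IP, generated only to illustrate the extraction:

```shell
# Extract the IP address from a saved nmap report (sample, made-up IP)
cat > /tmp/nmap.out <<'EOF'
Nmap scan report for raspberrypi.Home (192.168.1.42)
Host is up (0.012s latency).
EOF
# Keep only what sits inside the parentheses on the report line
grep 'Nmap scan report for raspberrypi' /tmp/nmap.out | sed 's/.*(\(.*\))/\1/'
# prints: 192.168.1.42
```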

2. Now use ssh to connect to the Raspberry Pi using the
obtained IP address on the default port 22:

#ssh pi@192.168.1.xxx

You may receive a message like the one shown below:

The authenticity of host '192.168.1.xxx' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx.
Are you sure you want to continue connecting (yes/no)?

Type 'yes'. When further prompted, enter the default
password 'raspberry' for the user 'pi'. You should get the
terminal for the Pi:

pi@raspberry~$

3. Now every time you boot, you may get the same IP
address from your LAN, but it is a good idea to assign
a static IP address to the Pi. To do this, edit the file
/etc/network/interfaces. For an eth0 connection, change
'dhcp' to 'static':

iface eth0 inet static

Following this line, add the static IP address, netmask and
gateway information, as shown below:

address 192.168.1.yyy
netmask 255.255.255.0
gateway 192.168.1.1

Note: Use an IP address and a gateway that are related
to your network, and ensure that no other machine in your
network has the same IP.
4. Once you have saved the changes, reboot the Pi, as follows:

pi@raspberry~$sudo reboot

5. After the reboot, connect to the Pi from your host machine with
the new IP address:

#ssh pi@192.168.1.yyy

If you get any warning message due to the change in
remote host identification, add the correct host key using
the ssh-keygen command. The instruction will be shown on
the warning screen itself, e.g.:

ssh-keygen -f /home/username/.ssh/known_hosts -R 192.168.1.yyy

Again, try to connect to the Pi using ssh and the static IP
address that you have assigned to it.
6. Once you get the terminal prompt, install the VNC server
on the Pi, by using the following command:

pi@raspberry~$sudo apt-get install tightvncserver

7. Run vncserver:

pi@raspberry~$vncserver

You will get the following message:

You will require a password to access your desktops.
Password:

In the password field, enter a password that you will use
to connect to the Raspberry Pi's desktop from your host machine.
When further prompted, verify the password.
When asked, "Would you like to enter a view-only
password (y/n)?", press 'n', which will result in the following
message on the Pi terminal:

New 'X' desktop is raspberrypi:1
Creating default startup script /home/pi/.vnc/xstartup
Starting applications specified in /home/pi/.vnc/xstartup
Log file is /home/pi/.vnc/raspberrypi:1.log

8. Verify that the VNC server is running by executing the
following code:

pi@raspberry~$ps aux | grep vnc

9. To allow the VNC server desktop to run during the boot
process, place a start-up script in the /etc/init.d/vncserver file:

pi@raspberrypi~$ sudo touch /etc/init.d/vncserver

Edit the above file with the following content:

#!/bin/sh
# /etc/init.d/vncserver
#
# Start or stop vncserver as asked
case "$1" in
start)
    su pi -c "/usr/bin/vncserver"
    echo "Starting VNC server"
    ;;
stop)
    echo "Stopping VNC Server"
    pkill Xtightvnc
    ;;
esac

10. Make the script executable, by issuing the following
command:

pi@raspberry~$sudo chmod u+x /etc/init.d/vncserver

11. Kill the previous instance of the VNC server that started
initially, with the following code:

pi@raspberry~$sudo pkill Xtightvnc

12. To ensure the VNC server is not running, use the code below:

pi@raspberry~$ps aux | grep vnc

13. Start the VNC server via the start-up script, as follows:

pi@raspberrypi~$ sudo /etc/init.d/vncserver start

You should see the output given below:

New 'X' desktop is raspberrypi:1
Starting applications specified in /home/pi/.vnc/xstartup
Log file is /home/pi/.vnc/raspberrypi:1.log

14. If you want the VNC server to come up each time you
boot, execute the following:

pi@raspberrypi~$sudo update-rc.d vncserver defaults

Reboot the Pi and you will see that the VNC server has already
started.
15. On the host Ubuntu machine, install the VNC viewer, as
follows:

#sudo apt-get install xtightvncviewer

16. Connect to the running VNC server using the IP address of the Pi:

#vncviewer 192.168.1.yyy:5901

When prompted for the password, enter the password that
you had set earlier to view the Pi's desktop. The graphical
desktop of the Pi will pop up on your host screen.
17. To stop the VNC viewer, close the application. To stop
the VNC server, issue the following command on the
Raspberry Pi:

pi@raspberrypi~$vncserver -kill :1 (note the space after -kill)

18. You can open multiple VNC connections. The display size
of the Raspberry Pi's desktop can be adjusted by modifying
the options in the vncserver command.

References
[1] http://elinux.org/RPi_Hub
[2] http://en.wikipedia.org/wiki/Virtual_Network_Computing
[3] http://www.tightvnc.com/

By: Kaushik Debnath
The author is an engineer by profession with interests in embedded systems and open source technologies. He is based in Bengaluru and can be reached at caesar431@gmail.com

OSFY Magazine attractions during 2013-14

Month            Theme                                              Featured List
March 2013       Virtualisation                                     Virtualisation Solution Providers
April 2013       Open Source Databases                              Certification & Training Solution Providers
May 2013         Network Monitoring                                 Mobile Apps
June 2013        Open Source Application Development                Cloud
July 2013        Open Source on Windows                             Web Hosting Providers
August 2013      Open Source Firewall and Network Security          E-mail Service Providers
September 2013   Android Special                                    Gadgets
October 2013     Kernel Special                                     IT Consultancy
November 2013    Cloud Special                                      IT Hardware
December 2013    Linux & Open Source Powered Data Storage           Network Storage
January 2014     Open Source for Web Development and Deployment     Security
February 2014    Top 10 of Everything on Open Source                IT Infrastructure

Open Gurus

Insight

Operating Modes in x86 Systems: An Inside Story

Here's a discussion on operating (or addressing) modes in x86 architecture, and on
segment selectors, descriptors, paging and multitasking (rather, TSS). Readers trying
to program kernels/bootloaders, or thinking of writing their own toy OS, will find it
interesting. Examples are limited to Intel's IA-32 architecture only.

The firmware in the ROM that runs first when an x86
processor is powered up, which is referred to as the
BIOS, does a series of primitive operations. The mode
in which the BIOS runs is the real mode or the real addressing
mode. An x86 system would typically run in either real or
protected mode. When the system is reset or powered up, it is
first initialised in real mode and will remain in this mode unless
it is switched to the protected mode by the software.

So what exactly is real addressing mode?

As mentioned earlier, when an x86 system is powered up or
reset, it is in real mode. If you wondered why this is so, it's
because in the real addressing mode, an x86 system becomes
backward compatible with all previous x86 chips, and thus
allows the BIOS and other prehistoric software to run. The 8086, the
notable 16-bit microprocessor from Intel and the very first with
the x86 architecture, while implementing memory segmentation[1],
operates only in real mode. Software, in real mode, can directly
access all memory, I/O addresses and hardware peripherals;
however, the memory that can be addressed is limited to only
2^20 bytes, i.e., only 1 MB of memory can be addressed. Further,
the entire 1 MB of memory cannot be active at the same time,
but partitions of 64 KB segments can. Each segment has its own
starting address, and the address space runs from 00000H to FFFFFH.

Because paging is not active in real mode, the linear addresses generated are the same as the physical addresses. Segment
registers hold the starting address of the active segments in the
entire memory. Refer to Figure 1 for clarity.
In protected mode, nevertheless, memory larger than 1 MB
can be accessed. All modern operating systems work in protected
mode and provide advanced features like paging, virtual memory
and multi-tasking, and thus have a more controlled environment.
We can now focus on the next level: protected mode, or
even better, protected virtual address mode.

Protected mode: Unlock the possibilities

x86 systems are always powered up or reset in real mode. But
setting bit 0 (the Protection Enable, or PE, bit) in the control register
CR0 makes it possible to start operations in the protected mode.
Here is the bouquet of benefits that protected mode provides:
Larger physical address space - Protected mode seamlessly
increases the addressable physical address space. For example,
a 32-bit x86 system can address up to 64 GB of RAM in the
Physical Address Extension (PAE)[2] mode against 4 GB (i.e.,
2^32 bytes) of RAM without PAE. The PAE feature adds 4
extra bits to a standard 32-bit system (so memory becomes
addressable up to 2^36 bytes). Modern 32-bit OSs like Linux
take advantage of the PAE feature to support the full 64 GB of
RAM.

Figure 1: Active segments of memory (the segment registers CS, SS, DS, ES, FS and GS each hold the starting address of an active code, stack or data segment in the 00000H-FFFFFH address space)

Figure 2: Format of a segment selector (a 16-bit value: a 13-bit index in bits [15:3], the TI bit in bit 2, and the 2-bit RPL in bits [1:0])
Virtual address space - The OS, with the help of
sophisticated hardware, allows the running of virtual
memory programs of almost unlimited size. This means
even a very large file can be operated on by mapping it
to the address space of the process without bothering the
available RAM.
Memory protection - A sophisticated memory
management and hardware-assisted protection mechanism
is supported under protected mode.
Multitasking - Special instructions are available in protected
mode that provide support to save the context of the current
processor state and bring in the new task to the current
context of the processor. The previous task can be brought
back to the context and the execution starts from the point
where it was left off in the past. All multi-tasking OSs exploit
this feature based on their implementation of the policy of
task context switching.

Protected mode: Segment translation

Segment translation is one of the key areas in protected virtual address mode. It is typically a mechanism of converting a logical
address into a linear address by the segmentation hardware of
the MMU unit. When a program gets loaded, the linker/loader
loads the segmentation registers with the appropriate value. A
protected mode segmentation register holds a 16-bit segment
selector. And every segment selector has a linear base address
associated with it, which is stored in the segment descriptor.
A 16-bit selector is used to store an index to access a segment
descriptor in a descriptor table (descriptors were created by the
linker/loader). Figure 2 shows the format of a selector.
As mentioned, the index field (bits [15:3]) has a value
to access a descriptor in a descriptor table. This 13-bit index
field is simply multiplied by the scale factor (i.e., the length
of the descriptor, which is usually 8, as a descriptor requires
8 bytes in the table), and the result is then added to the base
address of the descriptor table to get the desired segment descriptor.
The 1-bit Table Indicator or TI bit (refer to Figure 2)
determines which segment descriptor table holds
the segment. There are three segment descriptor tables that a
processor can refer to:
The Global Descriptor Table, or GDT,
The Local Descriptor Table, or LDT, and
The Interrupt Descriptor Table, or IDT.
If the TI bit is set to 0, the segment reference is in the GDT; otherwise
it is in the LDT. The IDT is never referred to by segment selectors.
The 2-bit Requestor's Privilege Level or RPL field (refer to Figure
2) is the indicator of the privilege level or ring. There are four
privilege levels or rings, from 0 to 3. Each segment belongs to a
ring with a designated privilege level. Operating system and kernel
modules always run with privilege level 0, which has the highest
priority. Applications that run in ring 3 have the least privileges.
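The selector fields described above can be decoded with simple shell arithmetic. The selector value below is purely an illustrative example, not one taken from a running system:

```shell
# Decode a 16-bit segment selector into its index, TI and RPL fields
selector=0x2B                        # illustrative value: binary 101011
index=$(( selector >> 3 ))           # bits [15:3]: descriptor index
ti=$(( (selector >> 2) & 1 ))        # bit 2: 0 = GDT, 1 = LDT
rpl=$(( selector & 3 ))              # bits [1:0]: requested privilege level
# The index times the 8-byte descriptor size gives the table byte offset
echo "index=$index ti=$ti rpl=$rpl table_byte_offset=$(( index * 8 ))"
# prints: index=5 ti=0 rpl=3 table_byte_offset=40
```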
To go over what we've covered: a selector is used to point to
a descriptor for the segment in a table of descriptors. The 32-bit
base address (from the descriptor) is then added to the
32-bit offset to generate a 32-bit linear address.
If the paging unit is not active, the linear address generated
by the segmentation hardware corresponds to the physical
address; but if the paging unit is active, more needs to be
done to generate the physical address, i.e., it requires a paging
translation. Before looking at paging translation, let's briefly
look at segment descriptor tables.

Segment descriptor tables

A logical address gets translated to a linear address with the use of the segmentation descriptor either from the Global Descriptor
Table (or GDT), or from the Local Descriptor Table (or LDT).
Usually, only one GDT per system is permitted (global
segments; in SMP, one GDT for every CPU in the system can
be allowed); however, one LDT per process (or task) is defined,
i.e., LDT segments are private to a specific program only. The
gdtr control register contains the address and the size of the
GDT, whereas the address of the LDT is held in the ldtr control register. The
instructions LGDT (Load GDT) and SGDT (Store GDT) give
access to GDT, while on the other hand, LLDT (Load LDT)
and SLDT (Store LDT) give access to LDT in a system.
Any descriptor (GDT or LDT) would have the following
main fields:
Base address - Contains the linear (or virtual) address of
the first byte of the segment.
Limit - Contains the offset of the last memory cell in the
segment; thus, this is the segment limit.

Figure 3: Segment translation (the selector picks a segment descriptor from the descriptor table; the descriptor's 32-bit base address, added to the offset, produces the linear address)
Other control bits - Some of the other control bits are:
DPL - Descriptor Privilege Level; the value 0 means
the segment is in kernel mode, while the value
3 means it is in user mode.
P bit - Segment-Present bit; the value 0 means the
segment is not present in main memory and has been
swapped out to disk.
S bit - Other than segment descriptors, the GDT can also
hold things like the Task State Segment (TSS), the LDT
or Call Gate descriptors. If this bit is cleared, the segment
is one of the system segments like TSS, LDT or Call
Gate; otherwise, it is a normal code or data segment.
Another type of descriptor table that an x86 system has is
the Interrupt Descriptor Table, or IDT. This data structure is
used to implement the correct response to hardware/software
interrupts and processor exceptions, and thus is nothing but
an interrupt vector table. A special register called the idtr register
contains the address of the IDT.
A few words about TSS before we discuss paging. As the
name suggests, the Task State Segment or TSS[3] is used
to save the state of a task. An operating system does a task
switch with the help of the TSS, and thus multitasks.

A simple page translation technique

When the paging unit is active, mapping of a linear address to a physical address involves a paging technique. When the PG
bit of the control register CR0 is set, the paging unit is active,
and thus the segment requires a second level of address
translation. The PG bit is usually set by the operating system
to implement page oriented protection.
Paging is used to separate the tasks and create an
independent virtual address space for every task in the memory.
As a key task, the paging unit checks the requested access type
against the access rights of the linear address, and generates a
page fault exception if the memory access is invalid. Physically
contiguous addresses are grouped into fixed-length chunks
called pages or physical pages (usually of 4 KB).
In an x86 system, a 32-bit linear address is divided into
the following three fields:
Directory
Table
Offset

Figure 4: Page translation, from Intel's official documentation (courtesy: Intel website). The 10-bit directory field indexes the page directory (based at CR3, the PDBR), the 10-bit table field indexes the page table, and the 12-bit offset locates the byte in the 4 KB page; 1024 PDEs x 1024 PTEs give 2^20 pages, with entries 32 bits wide and aligned on a 4 KB boundary.

Further, with the help of two translation tables, a linear


address gets translated to a physical address:
Page directory
Page table
Each active process has a page directory associated with it.
Control register CR3 (also known as the Page Directory Base
Register, or PDBR) contains the base physical address of the
page directory of the task.
The Directory field of a linear address is used to pick the
correct entry from the page directory, which points to a page
table. Next, the Table field (of the linear address) selects the
correct page frame entry from that page table. The Offset field then
determines the relative position within the page frame.
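The three fields can be peeled off a linear address with shifts and masks; the address used below is an arbitrary example, not one tied to any particular OS:

```shell
# Split a 32-bit linear address into its directory, table and offset fields
addr=0xC0112ABC                      # arbitrary example address
dir=$((   (addr >> 22) & 0x3FF ))    # 10-bit page-directory index
table=$(( (addr >> 12) & 0x3FF ))    # 10-bit page-table index
offset=$(( addr & 0xFFF ))           # 12-bit offset within the 4 KB page
echo "dir=$dir table=$table offset=$offset"
# prints: dir=768 table=274 offset=2748
```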
Figure 4 (from Intel's official documentation) depicts the
page translation diagrammatically.
I am sure you might have loads of questions, so Google
around or even feel free to write to me. I will try to clear
your doubts.
References

http://en.wikipedia.org/wiki/Real_mode
http://en.wikipedia.org/wiki/Protected_mode
http://en.wikipedia.org/wiki/Control_register
http://en.wikibooks.org/wiki/X86_Assembly/Protected_Mode
http://ecee.colorado.edu/~ecen2120/Manual/ia32summary.pdf
http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html

Citations

[1] Memory Segmentation: http://en.wikipedia.org/wiki/X86_memory_segmentation
[2] PAE: http://en.wikipedia.org/wiki/Physical_Address_Extension
[3] TSS: http://www.cs.umd.edu/~hollings/cs412/s03/prog1/ia32ch7.pdf

By: Vivek Kumar
The author is an open source enthusiast and supporter with vast experience in the design and development of firmware, device drivers and systems software. A lead engineer with wireless infrastructure solutions provider Radisys Corporation, he can be contacted at mail.vivekkumar@rediffmail.com or vivekkumar.p@gmail.com

Overview For U & Me

An Introduction to
Hadoop and Big Data Analysis
Hadoop has become a central platform to store big data through its Hadoop Distributed
File System (HDFS) as well as to run analytics on this stored big data using its MapReduce
component. This article explores the basics of Hadoop.

Many of us would have certainly heard about Big
Data, Hadoop and analytics. The industry is now
focused primarily on them, and Gartner identifies
'strategic big data' and 'actionable analytics' as being among
the Top 10 strategic technology trends of 2013.
According to the Gartner website: "Big Data is moving from
a focus on individual projects to an influence on enterprises'
strategic information architecture. Dealing with data volume,
variety, velocity and complexity is forcing changes to many
traditional approaches. This realisation is leading organisations
to abandon the concept of a single enterprise data warehouse
containing all information needed for decisions. Instead,
they are moving towards multiple systems, including content
management, data warehouses, data marts and specialised file
systems tied together with data services and metadata, which will
become the logical enterprise data warehouse."
There are various systems available for big data processing
and analytics, alternatives to Hadoop such as HPCC or the
newly launched Redshift by Amazon. However, the success of
Hadoop can be gauged by the number of Hadoop distributions
available from different technology companies, such as
IBM InfoSphere BigInsights, Microsoft HDInsight Service on
Azure, Cloudera Hadoop, Yahoo's distribution of Hadoop, and
many more. There are basically four reasons behind its success:
1. It's an open source project.
2. It can be used in numerous domains.
3. It has a lot of scope for improvement with respect to
fault tolerance, availability and file systems.
4. One can write Hadoop jobs in SQL-like languages such as Hive, Pig,
Jaql, etc, instead of using the complex MapReduce.
This enables companies to modify the Hadoop core
or any of its distributions to adapt to the company's own
requirements and the project's requirements. In this article,
we will only focus on the basics of Hadoop. However, in
forthcoming articles in this series, we will primarily focus
on fault tolerance and the availability features of Hadoop.
Formally, Hadoop is an open source, large scale, batch
data processing, distributed computing framework for big data
storage and analytics. It facilitates scalability and takes care of
detecting and handling failures. Hadoop ensures high availability
of data by creating multiple copies of the data in different nodes
throughout the cluster. By default, the replication factor is set
to 3. In Hadoop, the code is moved to the location of the data
instead of moving the data towards the code. In the rest of this

article, whenever I mention Hadoop, I refer to the Hadoop Core
package available from http://hadoop.apache.org.

Figure 1: MapReduce: simplified data processing on large clusters, from Google's paper (the user program forks a master and worker processes; map workers read the input splits and write intermediate files to local disk, which reduce workers read remotely and merge into the output files)

Figure 2: MapReduce behind the scenes (the JobClient gets a new job ID from the Job Tracker, copies resources to the shared file system (HDFS) and submits the job; the Job Tracker initialises the job and retrieves the input splits, while each Task Tracker sends heartbeats, retrieves job resources and launches a child JVM to run a map or reduce task)
There are five major components of Hadoop:
1. MapReduce (a job tracker and task tracker)
2. NameNode and Secondary NameNode
3. DataNode (that runs on a slave)
4. Job Tracker (runs on a master)
5. Task Tracker (runs on a slave)

MapReduce

The MapReduce framework has been introduced by Google.


According to a definition in a Google paper on MapReduce,
it is "a simple and powerful interface that
enables the automatic parallelisation and distribution of
large-scale computations, combined with an implementation
of this interface that achieves high performance on large
clusters of commodity PCs."
It has basically two components: Map and Reduce. The
MapReduce component is used for data analytics programming.
It completely hides the details of the system from the user.

HDFS

Hadoop has its own implementation of a distributed file system, called the Hadoop Distributed File System. It provides a set of
commands just like the UNIX file and directory manipulation.
One can also mount HDFS as fuse-dfs and use all the UNIX
commands. The data block is generally 128 MB; hence, a
300 MB file will be split into 2 x 128 MB and 1 x 44 MB. All
these split blocks will be copied N times over clusters. N is
the replication factor and is generally set to 3.
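The block-splitting arithmetic above is easy to verify:

```shell
# A 300 MB file split with a 128 MB block size
file_mb=300; block_mb=128
full=$(( file_mb / block_mb ))       # number of full blocks
last=$(( file_mb % block_mb ))       # size of the final partial block
echo "$full full blocks of $block_mb MB and one of $last MB"
# prints: 2 full blocks of 128 MB and one of 44 MB
```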

NameNode

NameNode contains information regarding the blocks' locations
as well as the information of the entire directory structure and
files. It is a single point of failure in the cluster, i.e., if NameNode
goes down, the whole file system goes down. Hadoop therefore
also contains a secondary NameNode, which maintains an edit log
which, in case of the failure of NameNode, can be used to replay
all the actions of the file system and thus restore its state.
A secondary NameNode regularly creates checkpoint
images in the form of the edit log of NameNode.

DataNode

DataNode runs on all the slave machines and actually stores all the data of the cluster. DataNode periodically reports to
NameNode with the list of blocks stored.

Job Tracker and Task Tracker

Job Tracker runs on the master node and Task Tracker runs
on slave nodes. Each Task Tracker runs multiple task instances,
and every Task Tracker reports to the Job Tracker in the
form of a heartbeat at regular intervals, which also carries details
of the current job it is executing, or indicates that it is idle if it has finished
executing. The Job Tracker schedules jobs and takes care of failed
ones by re-executing them on some other nodes. The Job Tracker is
currently a single point of failure in the Hadoop cluster.
The overview of the system can be seen in Figure 2.

Hadoop in action

Let's try a simple Hadoop word count example. First of all,
you need to set up a Hadoop environment, either in your
own Linux environment or by getting a pre-configured virtual
machine from https://ccp.cloudera.com/display/SUPPORT/
CDH+Downloads. If you are willing to configure Hadoop
by yourself, refer to the famous tutorial by Michael Noll,
titled 'Running Hadoop on Ubuntu Linux (Multi-Node
Cluster)'. Once you are done with the configuration of
Hadoop, you can try out the example given below:
$HADOOP_HOME/bin/start-all.sh
jps
mkdir input
nano wordsIn  # fill this file with random words and exit, or copy a text file into the input dir
$HADOOP_HOME/bin/hadoop dfs -copyFromLocal ./input /user/hduser/input
hadoop dfs -cat /user/hduser/input/*  # lists the contents

Continued on page 90....

Recruitment Trends

For U & Me

Developers play a key role in bringing about digital literacy

Intel is one of the biggest contributors to
open source technology, which also makes
the company one of the biggest recruiters
of open source professionals. The
company has a lot of initiatives involving
developers. So anyone with the 'right'
skill-set can think of working with this
technology major. Diksha P Gupta from
Open Source For You spoke to
Narendra Bhandari, director, Intel
Software and Services Group, Intel
South Asia, to understand the company's
recruitment strategy with respect to open
source developers. Excerpts:


Narendra Bhandari, director, Intel Software and Services Group, Intel South Asia

Do you feel India is rich enough in open source skills
when compared to the other markets?

Yes. As we see more platforms getting deployed and
business models evolving in the software community, one
can say that India is definitely offering the right kind of
talent. Around 3-5 years ago, software was largely sold
on a licence basis. Today, things have changed. Software
for content or a service is sold to a consumer in many
different forms. Open source is definitely one component
that has contributed to this process. The consumerisation
of applications has also contributed to this changing
business model in a pretty significant way. There is a lot of
contribution from India from the applications perspective.
All these developments lead to the conclusion that the
awareness and contributions from the Indian developers'
community are growing. I don't know how to measure this,
but I can see a significant change in the way contributions
have gone up from the Indian territory. India is certainly
rich in open source talent.

Is this one of the reasons why Intel is eyeing the
Indian market for open source technology?

If you've observed our behaviour, we have almost
continuously invested in India over the last five to seven
years and worked with companies to build solutions
for multiple platforms. We have been working with the
developers, because the devices are growing in the market
and the availability of broadband is also continuously
growing. Our fundamental approach is to drive digital
literacy in every home in the country, if possible. That
will happen through devices and connectivity, amongst
other things. The way Intel wants to address this is by
getting more and more relevant applications. That will
drive technology usage and digital literacy, and attract
more users. Look at it in this way: a couple of years back,
one could not think of booking air tickets sitting in the
remotest parts of the country, but it is possible now. To
answer your question about why we continuously work
with the developers, my answer would be: to drive digital
literacy, and that will mostly come through applications.
And applications come from developers. If you look at
the entire ecosystem, our role will be to continuously
invest, incentivise and educate developers to build more
applications relevant for our lives. So developers play a
key role in bringing about digital literacy.

Since Intel's thrust is on open source technology, do
you think that there has been a surge in the demand
for open source professionals in the past few years,
particularly in the Indian market?


As I said, I do not know how to measure this, but all I can
say is that the number of platforms is growing and so are the
ecosystems attached to those platforms. So, obviously, there is
a need for people to participate in the growth process.

What kind of talent do you look for when it comes to
hiring for Intel?

I think we look for a complete package when we hire
a developer. We look for various aspects, including
knowledge of software, knowledge of hardware, knowledge
of the ecosystem, an understanding of how a particular
piece of software can work with a piece of hardware, how
a driver interoperates with various other drivers in the
system, et al. In today's scenario, we definitely recommend
that developers build their skills, particularly in terms
of how the devices are becoming aware of the environment.
For example, many of the devices in the market have a slew
of sensors. We strongly recommend that developers start
looking at them to understand how they interoperate. That
is the most pervasive aspect that determines how the
mechanism interacts with the ecosystem.
The ideal skills that we want developers to build on
are related to power: how to deliver performance and
not consume excessive power. So if someone is building
a solution for an ultrabook, a phone or even a high-
performance computing server that will run millions
of calculations every second, we encourage developers
to start thinking about how to write code efficiently to
improve the performance. In some way or the other,
everyone expects their device, PC or ultrabook to be
responsive. Developers should know how to make their
applications responsive enough so that they impress the
ultimate consumers.

Are your clients comfortable with open source solutions?

Our direct business comes from customers who pick
up processor platforms and develop products around
those platforms. These include HP, Dell, Lenovo and
many others. They are not consuming our applications. It
is their customers, including the enterprises and the SMEs
across the country, who are the consumers of our applications.
Having said that, I would like to draw your attention to
several analysts' reports confirming that the usage of open
source applications is growing. We can safely say that the
consumption of open source technology has increased
amongst one and all. It may not be very visible, but it has
definitely gone up in a major way.

Do you think FOSS platforms or tools score over their
proprietary counterparts in the current scenario,
particularly for enterprises and SMEs?

For IT managers, the debate is not about FOSS or
proprietary technology. They look for the best solution.
I don't think they start the conversation by saying, "I
am going to use FOSS, so let me figure out what business
solution I will focus on." They focus on the best solution.
In the current scenario, it is pretty apparent that there are
hybrid solutions in the market. I don't think one drives
the other. There's a lot of good co-operation as far as
deployment in large enterprises is concerned. So, as I said
before, I can say that FOSS tools are being preferred and
there is a definite undercurrent about FOSS technologies. I
cannot give you figures to support that statement, but I can
surely say that things have moved to the hybrid stage from
pure proprietary offerings. That in itself is one of the biggest
testimonials of a FOSS victory.

Do FOSS platforms add value to project development?

If you look at the FOSS tools being downloaded, one
can clearly say that these are increasing with every passing
day. If so, it speaks volumes about the fact that people
are getting comfortable with open source technologies.
Fundamentally, the makers of the platform will also focus
on monetising the platforms. I am one person who is more
involved with evangelising among developers, so if you
ask this question to the practitioners, you will get the
exact answer. But if you broaden the question, I can say
that developers do benefit from the tools available openly
to them, which is why they are increasingly using open
source technology in project development.

So do you get that kind of talent in the Indian colleges
and universities? Do you see the colleges adapting to
such technological changes to ensure their students are
employable in companies like Intel?

I can say that over a period of time, the curriculum has
improved broadly, but I still feel that we could improve
a lot faster. Having said that, let me tell you that there
are some colleges working aggressively to adapt their
curriculum, keeping in mind the needs of the industry.

Once the employees, particularly freshers, come on
board, do you train them internally to get a flair for
your technologies?

Yes, we have to. There are a number of things that we
need to train them on in terms of our processes, our tool
chain and our methodology, in order to make them perform
efficiently in our environments. The training period could
vary from a couple of weeks to a couple of months. The
new recruits are provided with mentors internally and get
time with experts.

For U & Me

Let's Try

The Imaginary Music of Octave

Heres an introduction to the wonderful world of imaginary numbers, explored with the
help of Octave, an open source tool.


Many of us have done powerful matrix math with
Octave, without ever thinking about how it
works internally. On that note, let's explore one
of the most fascinating, or rather, the more imaginary
parts of math.

i Fun

Imaginary numbers are those that don't really exist, but we
try to visualise them as points on a 2-D plane, with the real
part on the x-axis and the imaginary part on the y-axis. So to
plot the imaginary i with a blue star (*), issue the following
commands at the Octave prompt:

$ octave -qf
octave:1> plot(i, "b*")
octave:2>

Up pops the display window, as shown in Figure 1.


Figure 1: Plot of i (sqrt(-1))

So what is this i? It is just the square root of -1. Simple,
right? You remember, we got an error trying sqrt(-1) in bc.
Octave is neat. It will answer you politely with an i. If you're
still confused, just check out the following:

$ octave -qf
octave:1> i
ans = 0 + 1i
octave:2> i * i
ans = -1
octave:3> sqrt(-1)
ans = 0 + 1i
octave:4> i^3
ans = -0 - 1i
octave:5>

i: just a number

As imaginary numbers are just special numbers, they can be used
in any operations you may think of with numbers in Octave. Try
out sqrt, exp, log, etc. Addition, subtraction, multiplication and
division are trivial operations. And you can have even more fun
with Octave's power of vectors and matrices. Watch out:

$ octave -qf
octave:1> sqrt(i)
ans = 0.70711 + 0.70711i
octave:2> exp(i)
ans = 0.54030 + 0.84147i
octave:3> log(i)
ans = 0.00000 + 1.57080i
octave:4> [i # Press <Enter> here
> i] * [i i]
ans =
  -1  -1
  -1  -1
octave:5>

i Puzzle

And now comes the puzzle part. What is the imaginary part
of i^i? It's zero. What? Is it a real number? Yes, it is. It is in
fact equal to e^(-pi/2). Check it out for yourself:

$ octave -qf
octave:1> i^i
ans = 0.20788
octave:2> exp(-pi/2)
ans = 0.20788
octave:3>

Yes, you don't really need to bother about how. Octave
just gets you there, so you can have fun with mathematics
without getting into it.

Figure 2: Plot of Euler's equality

Is e^(i*theta) really cos(theta) + i*sin(theta)?

Here's a very simple check. Let's plot both the curves. Let
theta be set to values from 0 to pi with, say, intervals of
0.01, and then let's plot the two curves in blue and red:


$ octave -qf
octave:1> th=0:0.01:pi;
octave:2> plot(th, exp(i*th), "b*;exp;", th, cos(th)+i*sin(th), "r^;cos;")
octave:3>

Figure 2 shows the two curves coinciding exactly with
each other.
Equipped with i, let's play around with more puzzles in
the following issues of OSFY; so get ready with the
Octave controls.
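For readers without Octave at hand, the same two identities can be cross-checked with Python's cmath module (an independent illustration, not part of the article's Octave session):

```python
import cmath
import math

# i**i is a real number, equal to e**(-pi/2).
val = 1j ** 1j
print(round(val.real, 5))                 # 0.20788
print(round(math.exp(-math.pi / 2), 5))   # 0.20788

# Euler's formula: e^(i*theta) == cos(theta) + i*sin(theta).
for theta in (0.0, 0.5, 1.0, math.pi / 2, math.pi):
    lhs = cmath.exp(1j * theta)
    rhs = math.cos(theta) + 1j * math.sin(theta)
    assert abs(lhs - rhs) < 1e-12
```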

Continued from page 86....


cd $HADOOP_HOME
./bin/hadoop jar hadoop-examples-1.0.4.jar wordcount /user/hduser/input /user/hduser/output
./bin/hadoop dfs -cat /user/hduser/output/*
./bin/stop-all.sh

The first line in the code starts the required services


for Hadoop. The jps command lets you query all the Java
Virtual Machines running on your system. You should see the
following services running on your system.
1. NameNode
2. DataNode
3. Secondary NameNode
4. Job Tracker
5. Task Tracker
6. Jps
If any of the services listed above is not running, it means
that your Hadoop could not start properly. In Line 3, create
a local folder to be copied on to HDFS. In Line 4, make a
text file and fill it with some random text. In Line 6, list the
contents of the copied file. In Line 8, run the Hadoop word

By: Anil Kumar Pugalia


The author is a hobbyist in open source hardware and software, with a
passion for mathematics. His explorations in the field of mathematics,
in every aspect of life, date back to the 1990s. A gold medallist from
the Indian Institute of Science, mathematics and knowledge-sharing
are two of his many passions. Apart from that, he experiments with
Linux and embedded systems, sharing his learnings through his
weekend workshops. Learn more about him and his experiments at
http://sysplay.in. He can be reached at email@sarika-pugs.com.

count example that comes with the distribution. In Line 9, list


the output of the generated result. Finally, in Line 10, stop all
the Hadoop services.
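What the word count job does internally is just a map step and a reduce step. A minimal sketch of that logic in plain Python (the function names and in-memory shuffle here are illustrative; the real example runs as Java MapReduce over HDFS):

```python
from collections import Counter

def map_words(lines):
    """Mapper: emit a (word, 1) pair for every word, as word count does."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_counts(pairs):
    """Reducer: sum the counts for each distinct word."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

text = ["hello world", "hello hadoop"]
print(reduce_counts(map_words(text)))  # {'hello': 2, 'world': 1, 'hadoop': 1}
```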
This article has covered various aspects of big data,
analytics and Hadoop. I have primarily focused on the
Hadoop architecture and pointed out the weak points of
Hadoop in terms of fault tolerance and recovery. We have
also seen how NameNode and Job Tracker are bottlenecks
in the system; they act as single points of failure for the
whole system. Many of the Hadoop distributions try to
solve this problem of fault tolerance and recovery found
in Hadoop Core. However, implementation of fault tolerance
and recovery algorithms creates performance issues. The
thrust of the research in this field is to mitigate the
performance and reliability issues to get the best of both worlds.

By: Saurabh Jha


The author is an open source enthusiast who loves to create
awareness about free and open source software and technologies.
His interests include high performance computing in heterogeneous
environments. He can be reached at his website: http://saurabhjha.in

OpenBiz

For U & Me

Companies have realised that open source is one of the most
important options

Openbravo's ERP solution is being used in almost all the business sectors across the globe.
Read on to know more...
Speaking about the solution, Sunando Banerjee,
channel business manager - Openbravo, APAC and
the Middle East, says, "One of the strongest points of
Openbravo's ERP solution is its simplicity. Many ERP
systems fail because both partners and employers find
it difficult to implement them. From its inception, the
program was developed based on feedback from both
customers and developers. It avoids unnecessary choices
and interfaces that tend to confuse users."

Open source: The basis of it all!

Sunando Banerjee, channel business manager - Openbravo, APAC and the Middle East

Doing business with open source technology is not
an alien concept anymore. A lot of tech businesses
today have built their products on open source
and, yes, they are making good money too. Openbravo
is one such example of a company using open source
software to build its solution.
Openbravo is an open source ERP system, which is
extremely flexible and totally Web-based. The tool offers
greater productivity, agility and a good return on investment.
The state-of-the-art Openbravo ERP system also has a
comprehensive retail solution, which offers multi-language
and multi-currency support that makes it implementable in
more than 60 countries.
The makers of the solution claim that it has seen more than
6000 deployments and more than three million downloads.

The solution has been designed using open source
technology. Banerjee describes it in more detail, "The core
is completely open source and has been built using Java
technology as the core development platform. We use the
Model Driven Development (MDD) approach that helps
developers add new modules to the core. Apache Tomcat
is the application server. Users have the choice of using
multiple operating systems, including Windows, to access
the system. For the database, we support PostgreSQL or
Oracle. Openbravo's core application platform and related
open source modules are released under the Openbravo
Public License, a commercial open source licence that
provides users the full freedom to view and modify the
source code without coercive restrictions. This licence
supports our global open source community of thousands
of open source enthusiasts, ERP consultants, students
and business partners, who collaborate with Openbravo's
development team to continuously develop and improve
our core platform and business functionality. People who
invest their time and energy in Openbravo appreciate the
full freedom our licence provides."
Choosing open source technology over several
proprietary variants was a conscious decision made by the
company. Banerjee explains, "Within the company, we
believed strongly in the open source philosophy and this
was backed by a strong business plan. All of which made
us choose open source as the technology for this product.
Open source gives you the freedom to be flexible. We are
not against proprietary solutions. But you always need
to remember the value a strong community can bring to
a product. For example, today, Openbravo has been
downloaded more than 3 million times across different
countries. Just imagine the sheer amount of knowledge
this brings to the product. The strong community that
evolved around the product brings a lot of maturity
to it. Today, Openbravo is being used and localised
in more than 20 countries. The community is hugely
active in countries like China, Bolivia, Venezuela,
Malaysia, Brazil, Italy, Germany, India, UAE, Egypt,
Saudi Arabia and many more such nations."
There was another reason for going the open
source way. The company is of the opinion that ERP
solutions should be accessible to all. The concept of
ERP being used only by large companies needed to
be changed. Banerjee elaborates, "The SME market is
growing at a very fast pace. In this market space, you
need an affordable product with great flexibility. We
aimed at bringing about that change in the mindset of
people. Thanks to the technology and the commitment
of our community, today, Openbravo presents a solid
platform for all budding companies that do not want to
pay a fortune to implement an ERP system within their
organisation. Frankly, companies have now realised
that open source is not just an alternative, it is one of
the most important options."

Open source: The need of the hour

The growth of open source technology is courtesy of the
current increased technological awareness as well as
socio-economic conditions. Gone are the days when
companies had a tough time selling open source-based
solutions. The awareness has now penetrated
even to the lower levels of a company's management
team. Commenting on the popularity of open source
technology, Banerjee says, "I think we have moved past
the time when open source was considered a burden.
I have never met a single CEO who is bothered about
technology. The reason to implement ERP is to make
the business more profitable. So the users are more
keen on knowing about the functionalities rather than
the technology. In other words, I feel that in the current
socio-economic scenario, being open source-based is
an advantage that we enjoy."
He adds, "Open source has definitely moved on from the
time when its value was recognised only by the long-haired,
rock-loving hackers. Today, even Gartner is predicting
the exponential growth of open source-based business
applications globally. People have understood the
simplicity and the flexibility that open source brings to
its users. Some of our largest customers have actually
moved from their renowned proprietary systems and
opted for Openbravo."
But the road to success was difficult. While the
company certainly enjoys the fruits of success today,
there were days when the team had a really challenging time
trying to convince people about the basis of their solution.
Openbravo chose to stick to open source, maintaining its
unique attributes. Banerjee remembers, "The road to this
success was not easy. In the initial days, around 2008-2010,
I faced some major challenges while proposing
this open source solution. I remember someone asked me
whether his system could get hacked by anyone if he used
open source. I think the issue was a mindset problem. Open
source initially spread its wings in the form of UNIX.
Those systems were difficult for non-techies to use. I
think not many open source companies felt the need to
make the system user friendly. So obviously, there were
so many myths around the technology. This issue was
well identified by Ismael Ciordia, our co-founder, at a
very early stage. So as a company, we made a huge effort
to make the product simple to use. Today, one of our key
strengths is our simplicity."
As far as India is concerned, apart from the enterprises,
even SMEs are showing great interest in open source
technology in general. Openbravo claims to have some of
the Fortune 500 companies from India as its customers.
The ERP solution has spread its wings to sectors like retail,
manufacturing, distribution, trading and many more. The
company has also entered some niche market spaces like
hatcheries, dairy products, manufacturing, jewellery, pharma,
retail and construction.
The use of open source technology in Openbravo has
resulted in some real cost savings. Banerjee claims,
"The solution is five times more cost-effective when
compared to similar proprietary ERP solutions. Overall, the
product offers you a lower cost of ownership, thus increasing
the ROI to a great extent. Openbravo also offers multiple
deployment options, such as on the cloud, on-demand and
on-premises. The on-premises deployment uses minimum
resources, while the on-demand and cloud deployment options
bring great savings on the overall pricing of the product."

Interacting with the community: The new age success mantra

A major reason for the success of Openbravo is its
interactions with the community. The company considers
the community a great strength for Openbravo.
Banerjee says, "We have various forums through which
we interact with our community. We also keep sending out
periodic mailers updating those interested about the latest
developments in the product. We are planning a community
meeting sometime in the middle of this year, to have a
chance to physically interact with those whom we only
connect with online."
By: Diksha P Gupta
The author is assistant editor at EFY.

Let's Try

For U & Me

An Introduction to Graphviz
Graphviz comprises a very flexible and handy set of tools that is freely available under an
open source licence. Read on to get more familiar with it.

Graphviz tools help you draw, illustrate and present
graph structures. Do not be discouraged, and please do
not think that drawing graph structures is restrictive
and limiting. I can promise you that by the end of the article,
you will change your mind.
The good thing is that Graphviz algorithmically arranges
the graph nodes so that the output is both practical and
pleasing! Graphviz can be used in domains such as software
engineering, networking, bioinformatics, databases, Web
structures and knowledge representation. The central part
of Graphviz consists of implementations of algorithms for graph
layouts. Most Graphviz code is written in C.

Installing Graphviz

Your Linux distribution probably includes a ready-to-install


Graphviz package, so go ahead and install it, after which you
can try to compile the following code:
digraph G
{
	"Hello world!";
}

Use the following command for the compilation:


$ dot -Tps hw.dot -o hw.ps

If you see no error messages, then you are ready to


continue reading the rest of the article. The aforementioned
command will produce a Postscript file called hw.ps that you
can view. There are more command line arguments that will
not be discussed here. You can check the man pages or the
website for the tools for more information.
Note that the word digraph means that a directed graph is
going to be created. To create an undirected graph, the word
graph is used instead. It is easily understood that for such a
simple example, it does not make any difference if the graph
is either directed or undirected.

What is Graphviz?

Graphviz (or GraphViz or graphviz) is a collection of tools
for manipulating graph structures and generating graph
layouts. Graphviz supports both directed and undirected
graphs. It offers both graphical and command line tools, but
in this article, the command line tools will be used. A
Perl-to-Graphviz interface library is also available, but it is not
covered in this article. There is also a C++ interface.
Strictly speaking, and according to the book The
Design and Analysis of Computer Algorithms, a graph
G=(V, E) consists of a finite and non-empty set of vertices
V and a set of edges E. If the edges are ordered pairs of
vertices, then the graph is said to be directed. If the edges
are unordered pairs, the graph is said to be undirected.


Graphviz has its own dialect that you will have to learn.
The language is simple, elegant and powerful. The good
thing about Graphviz is that you can write its code using a
simple plain text editor; a wonderful side effect of this is
that you can easily write UNIX scripts that generate
Graphviz code. In fact, this article has such a script,
written in Perl, my favourite scripting language.
Graphviz comprises the following programs and libraries:
The dot program: A utility program for drawing directed
graphs. It accepts input in the dot language. The dot
language can define three kinds of objects: graphs, nodes
and edges. It uses the Sugiyama-style hierarchical layout.
The neato program: This is a utility program for drawing
undirected graphs. This kind of graph is commonly used
for telecommunications and computer programming tasks.
Neato uses an implementation of the Kamada-Kawai
algorithm for symmetric layouts.
The twopi program: This is a utility program for drawing
graphs using a circular layout. One node is chosen as the
centre, and the other nodes are placed around the centre in
a circular pattern. If a node is connected to the centre node,
it is placed at distance 1. If a node is connected to a node
directly connected to the centre node, it is placed at
distance 2, and so on.
Dotty, tcldot and lefty: These are three graphical programs.
Dotty is a customisable interface for the X Window System,
written in lefty. Tcldot is a customisable graphical interface
written in Tcl 7. Lefty is a graphics editor for technical pictures.
Libgraph and libagraph: These are the drawing libraries.
Their presence means an application can use Graphviz as
a library rather than as a software tool.
circo: This is a utility program for creating a circular
layout of graphs.
fdp: A utility program for generating undirected graphs.
sfdp: A utility program for constructing large undirected graphs.

Table 1 shows the node attributes and Table 2 shows the
edge attributes.

Table 1: Node attributes

shape: The shape of the node (ellipse, diamond, box, circle, etc.)
height: The height in inches (a number)
width: The width in inches (a number)
label: The name of the node (alphanumeric)
fontsize: The size of the font (a number)
fontname: The name of the font (Courier, Helvetica, TimesRoman)
fontcolour: The colour of the font (white, black, blue, etc.)
style: The style name (bold, dotted, filled, etc.)
colour: The colour of the node (white, black, etc.)
pos: The coordinates of the position

Table 2: Edge attributes

label: The label of the edge (alphanumeric)
fontsize: The size of the font (a number)
fontname: The name of the font (Courier, Helvetica, TimesRoman)
fontcolour: The colour of the font (white, black, blue, etc.)
style: The style name (bold, dotted, filled, etc.)
colour: The colour of the edge (white, black, blue, etc.)
len: The length of the edge (a number)
dir: The direction of the edge (forward, back, both or none)
decorate: Draws a line that connects labels with their edges (0 or 1)
id: Optional value to denote different edges (alphanumeric)

You are now ready to continue with practical examples.
Please do not forget to experiment, make changes and create
your own graphs as you are reading the article.

A Graphviz example

A database schema, visualised with the help of Graphviz, will
be presented in this part. The presented schema is simple;
yet you can still understand how elegant it is. By reading the
Graphviz code, you can see that lines beginning with
the # character are comments.
The Graphviz code for creating Figure 1 is as follows:

digraph G
{
	graph [rankdir = "LR"];
	node [fontsize = "14" style=bold];

	# Table-field connection part.
	BONUS [label="<tb> BONUS | sal | comm | ename | job"
		shape = record];
	DEPT [label="<tb> DEPT | loc | dname | deptno"
		shape = record];
	EMP [label="<tb> EMP | empno | ename | comm | mgr | hidedate | deptno | job"
		shape = record];
	CLIENT [label="<tb> CLIENT | sal | comm | ename | job"
		shape = record];
	CLERK [label="<tb> CLERK | sal | comm | ename | job"
		shape = record];
	ORDER [label="<tb> ORDER | sal | comm | ename | job"
		shape = record];
	FOO [label="<tb> FOO | sal | comm | ename | job"
		shape = record];

	# Tablespace decoration part.
	TB_USERS [label="<tb> USERS" shape = record
		style=filled color=red];
	node10 [label="<tb> DATA" shape = record style=filled
		color=red];
	TB_ADMIN [label="<tb> ADMIN" shape = record
		style=filled color=red];

	# TABLESPACE-table connection part.
	BONUS:tb -> TB_USERS:tb;
	DEPT:tb -> TB_USERS:tb;
	CLIENT:tb -> TB_USERS:tb;
	ORDER:tb -> TB_USERS:tb;
	EMP:tb -> node10:tb;
	CLERK:tb -> node10:tb;
	FOO:tb -> TB_ADMIN:tb;

	# Tablespace-to-tablespace connection.
	TB_USERS -> node10 -> TB_ADMIN;
	TB_ADMIN -> TB_USERS;
}

Figure 1: Creating a DB schema using Graphviz

A more advanced Graphviz example

The following code draws a hash table. The output is shown
in Figure 2.

digraph G
{
	rankdir = LR;
	node [shape=record, width=.1, height=.1];
	nd0 [label = "<p0> | <p1> | <p2> | <p3> | <p4> | | ", height = 4];

	node [width=1.5];
	nd1 [label = "{<e> HA0 | 123 | <p>}"];
	nd2 [label = "{<e> HA10 | PIK | <p>}"];
	nd3 [label = "{<e> HA11 | 23 | <p>}"];
	nd6 [label = "{<e> HA20 | 123 | <p>}"];
	nd7 [label = "{<e> HA40 | CUJ | <p>}"];
	nd8 [label = "{<e> HA41 | C++ | <p>}"];
	nd9 [label = "{<e> HA42 | DDJ | <p>}"];

	nd0:p0 -> nd1:e;
	nd0:p1 -> nd2:e;
	nd2:p -> nd3:e;
	nd0:p2 -> nd6:e;
	nd0:p4 -> nd7:e;
	nd7:p -> nd8:e;
	nd8:p -> nd9:e;
}

Figure 2: Creating a hash table using dot
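The same trick the Perl script below uses, generating dot code from data instead of writing it by hand, can be sketched in a few lines of Python (the table dict is invented for illustration):

```python
# Generate Graphviz record nodes (like the schema example above)
# from a plain dict mapping table names to their columns.
def to_dot(tables):
    lines = ["digraph G", "{", '\tgraph [rankdir = "LR"];']
    for name, cols in tables.items():
        label = "<tb> " + name + " | " + " | ".join(cols)
        lines.append('\t%s [label="%s" shape = record];' % (name, label))
    lines.append("}")
    return "\n".join(lines)

print(to_dot({"DEPT": ["loc", "dname", "deptno"]}))
```

Piping the output to dot (e.g. `python gen.py | dot -Tps -o out.ps`) renders it like any hand-written file.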

A Perl script that produces Graphviz output

Our script is not going to use the well-known DBD and DBI
Perl modules because, although they are very practical and
reliable, they add complexity to the process. Basic PL/SQL
is used instead to extract our information from an Oracle 10g
DBMS as plain text. If you want to use another DBMS, you
can do so, as long as the plain-text output is similar to the
one used here.
The following PL/SQL listing extracts the required
information from an Oracle 10g DBMS:
REM
REM Author: Mihalis Tsoukalos
REM
set echo off
set heading off embedded off verify off
set feedback off
spool table_col.log
btitle off
ttitle off
set termout off
SELECT table_name, column_name
FROM user_tab_columns
ORDER BY table_name
/
PROMPT TABLESPACES
SELECT table_name, tablespace_name
FROM user_tables
/
spool off

It makes use of Oracle USER_TAB_COLUMNS and


USER_TABLES tables. The Perl script used is shown in the
following listing:
#!/usr/bin/perl -w
#
use strict;

my $tablespace_found = 0;
my $line = "";
# The following two hashes will hold the
# names of the tables and the tablespaces.
my %TABLE = ();
my %TABLESPACE = ();
my $firsttable = 1;

die <<Thanatos unless @ARGV;
usage:
$0 inputfile.log outputfilename
inputfile.log: The input filename
outputfilename: The filename of the output
Thanatos

if ( @ARGV != 2 )
{
	die <<Thanatos
usage info:
Please use exactly 2 arguments!
Thanatos
}

my $input = $ARGV[0];
open(INPUT, "< $input") ||
	die "Cannot read $input: $!\n";
my $output = $ARGV[1].".dot";
open(OUTPUT, "> $output") ||
	die "Cannot write $output: $!\n";

print OUTPUT "digraph G\n";
print OUTPUT "{\n";
print OUTPUT "\tgraph [rankdir = \"LR\" ];\n";
print OUTPUT "\tnode [fontsize = \"14\" style=bold];";
print OUTPUT "\n\n";
print OUTPUT "# Table-field connection part.\n";

# Read the input file
while ($line = <INPUT>)
{
	chomp $line;
	# Drop empty lines
	if ( $line =~ /^$/ )
	{
		next;
	}
	if ( $line =~ /^TABLESPACE/ )
	{
		# Close the previous table node.
		print OUTPUT "\"\n\tshape = \"record\"\n\t];\n\n";
		print OUTPUT "\n# TABLESPACE-table connection part.\n";
		$tablespace_found = 1;
		next;
	}
	if ( $tablespace_found )
	{
		my $tablespace = (split ' ', $line)[1];
		my $table = (split ' ', $line)[0];
		# Connect tables with their tablespaces.
		print OUTPUT "\t$table:tb -> TB_$tablespace:tb;\n";
		if ( !defined $TABLESPACE{$tablespace} )
		{
			$TABLESPACE{$tablespace} = 1;
		}
	}
	else
	{
		my $table = (split ' ', $line)[0];
		my $column = (split ' ', $line)[1];
		if ( defined $TABLE{$table} )
		{
			print OUTPUT " | ".lc($column);
		}
		else
		{
			# The very first table has to be treated
			# differently.
			if ( $firsttable )
			{
				$firsttable = 0;
			}
			else
			{
				print OUTPUT "\"\n\tshape = \"record\"\n\t];\n\n";
			}
			$TABLE{$table} = 1;
			print OUTPUT "\t$table [label = \"<tb> $table";
			print OUTPUT " | ".lc($column);
		}
	}
}

print OUTPUT "\n# Tablespace decoration part.\n";
my $tablespace = "";
foreach $tablespace ( keys %TABLESPACE )
{
	print OUTPUT "\tTB_".$tablespace;
	print OUTPUT " [label=\"<tb> ".$tablespace."\"";
	print OUTPUT " shape = record style=filled color=red];";
	print OUTPUT "\n";
}

my $first = 1;
my $tb_first = "";
foreach $tablespace ( keys %TABLESPACE )
{
	print OUTPUT "\tTB_$tablespace -> ";
	if ( $first )
	{
		$tb_first = $tablespace;
		$first = 0;
	}
}
print OUTPUT "TB_$tb_first;\n";
print OUTPUT "}\n";

# Close INPUT and OUTPUT
close(INPUT) ||
	die "Cannot close $input: $!\n";
close(OUTPUT) ||
	die "Cannot close $output: $!\n";
exit 0;

The script accepts the results of the PL/SQL code
as input and produces a Graphviz file as its output,
which then has to be processed using the dot utility.
After emitting the information about the tables and
their respective fields, the PL/SQL script writes the word
TABLESPACE in its output, and continues with the
information about the tablespace/table relations. The word
TABLESPACE helps us separate the table-field relations
part from the tablespace-table relations part.
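The separator-driven, two-phase parse described above can be sketched in a few lines of awk; the here-document stands in for the SQL*Plus spool file and uses the same layout (the table and column names are illustrative):

```shell
# Phase 1: table/column lines; phase 2 (after the TABLESPACE
# marker): table/tablespace lines. Empty lines are dropped,
# exactly as in the Perl script.
awk '
    /^$/          { next }
    /^TABLESPACE/ { in_ts = 1; next }
    in_ts         { print $1 " -> tablespace " $2; next }
                  { print $1 " -> column " tolower($2) }
' <<'EOF'
EMP EMPNO
EMP ENAME

TABLESPACE
EMP DATA
EOF
```

Running the sketch prints the two EMP columns first and the EMP-to-DATA tablespace mapping last, mirroring the order in which the Perl script emits its dot statements.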
Depending on the total number of tables, the output
may be huge but it gives a good picture of the user
tables and their respective tablespaces. It is important to
understand that the presented PL/SQL and Perl scripts
can easily be modified to display different information
according to our needs. USER_TAB_COLUMNS and
USER_TABLES tables contain much useful information
that we can easily take advantage of.
I hope you find Graphviz both entertaining and
interesting. I personally think that it is an exceptional piece
of software. There is plenty of useful material available in
the Web links provided, so you are bound to find even more
benefits through experimentation.

Web links and bibliography


[1] Graphviz site: http://www.graphviz.org
[2] Aho, Hopcroft and Ullman, The Design and Analysis of
Computer Algorithms. Addison Wesley, 1974.
[3] Michael Jünger, Petra Mutzel (editors), Graph Drawing
Software, Springer, 2003.
[4] GraphViz and C++, Platis N and Tsoukalos M, C/C++ Users
Journal, December 2005.
[5] Emden R Gansner, Eleftherios Koutsofios, Stephen C North
and Kiem-Phong Vo, A Technique for Drawing Directed
Graphs, IEEE Trans. Software Engineering, May 1993.
[6] T Kamada and S Kawai, An algorithm for drawing general
undirected graphs, Information Processing Letters, April 1989.
[7] Output formats: http://www.graphviz.org/content/outputformats

By: Mihalis Tsoukalos


The author enjoys photography, UNIX administration,
programming iOS devices and creating websites. You can reach
him at tsoukalos@sch.gr or tweet him @mactsouk.

June 2013 | 97

For U & Me

Insight

What it Takes to be an

Open Source Expert


OSFY speaks to industry leaders to bring you their thoughts on this hot topic...

While open source gradually exerts its control
over almost every aspect of technology, it has
emerged as a clear profit earner for businesses and
has helped them move up the value chain. As open source
adoption increases across the globe, the need for a multi-skilled
workforce becomes important to sustain this trend. Driven
by this need, enterprises today are looking for talent with the
advanced skills to cope with their business objectives.
Needless to say, the demand for open source professionals
has increased manifold in the last few years. According to the
2013 Linux Jobs Report brought out by the Linux Foundation,
almost 90 per cent of employers are planning to hire open
source experts in the next few months. The report also indicates
that there is a yawning gap between the demand and supply of
FOSS professionals and open source talent is not easy to find.
So, what goes into the making of an open source expert? OSFY
got in touch with industry leaders and members of Linux User
Groups to seek answers to this question.

The 3 Cs required to succeed

If you wish to flex your muscles in the open source circuits
and become a pro, there are three Cs that will help you in
your endeavour: curiosity, commitment and community.
Shares Divyanshu Verma, Engineering Manager, Linux
Engineering at Dell R&D, India, "To become a FOSS expert,
the first thing one needs is to be curious and committed.
FOSS allows everyone and anyone to learn and program,
without donning any corporate hat or a badge, and then, they
subsequently contribute code back to the community. FOSS
experts are supposedly good at working in a virtual world. They
need to be self-motivated to comprehend problems and provide
solutions that are accepted by the open source community."
Given the wide acceptance of the open source paradigm,
companies such as Google, IBM, HP, Dell, Broadcom, Cisco
and Intel now motivate their employees to work on FOSS
projects and contribute code freely to the community.
Sankarshan Mukhopadhyay, an active member of the
Chennai Linux Users Group, voices similar views. "In his book
Outliers (http://en.wikipedia.org/wiki/Outliers_(book)),
Malcolm Gladwell proposed the 10,000-hour rule, claiming
that the key to success in any field is, to a large extent, a
matter of practising a specific task for a total of around 10,000
hours. However, to aspire to be an expert, one needs to take
the first step towards that goal. So, it requires that first step
of contributing to an open source project and participating
in an open source community to be set towards becoming an
expert," says he. Citing another example, Mukhopadhyay says
writer Karl Fogel has a reasonably interesting book (http://
producingoss.com/en/index.html), which mentions what one
needs to get familiar with prior to contributing to open source

software. "Beginners should pick a project, tool, application or
service they love and, if what they selected has an open source
community, get involved by asking smart questions about
where one can begin to contribute," adds Mukhopadhyay.

Hands-on knowledge: The key differentiator

The advent of true Internet-class applications built
completely using open source technologies by companies
such as Google, Facebook, Yahoo, Twitter, etc, has
made FOSS a widely accepted choice for large-scale
applications. In addition, the maturity of the market allows
companies to choose open source and then, optionally, buy
support as needed. Dr Pramod Varma, the chief architect
of UIDAI's Aadhaar project, shares, "This means FOSS
professionals now have varied opportunities to be a part of
large technology products or various large projects in almost
all verticals. With the product start-up ecosystem coming
of age in India, FOSS plays an even more critical role and
professionals with hands-on knowledge are in great demand.
But at the end of the day, whether it is FOSS or commercial
technologies, it is critical that professionals become experts
in a few areas. Expertise comes from deep hands-on
knowledge in at least a few technologies, allowing experts
to compare, contrast, and learn from abstract design patterns
across various technologies. Technology evolves rapidly, and
it is essential that experts learn and apply common design
and architecture concepts from one to another and continue
to be deeply hands-on."

Be an expert, legally!

Open source has gained traction in the last few years, but it
comes with some legal strings attached. Much legal activity in
the open source area involves compliance analysis; in other
words, determining whether a company is complying with
all the relevant licence conditions of its inbound open source
licences. This has given birth to a new breed of professionals:
the open source software legal expert. Becoming a pro in this
domain will help, believes Aahit Gaba, a commercial and IP
licensing lawyer who specialises in open source licensing. "There
are two different categories of FOSS experts, legal FOSS
experts and technical FOSS experts; I come under the former
category. Of late, the software industry (proprietary and open
source) is completely banking on intellectual property rights
with respect to the protection of the contributors. And open
source software adheres to the open source licence, which
is a legally binding agreement. So, FOSS legal experts have
immense scope in software organisations, automotive firms,
embedded systems, financial services, mobile telephony, etc.
There is also a dearth of open source attorneys who are well
acquainted with the nitty-gritty of this terrain."

Six handy tips to succeed in the open source world

1. Focus on the fundamentals and learn the concepts
well. Too often, the focus is on the 'step by step'
solution without much understanding of the what, why
and how behind the solution.
2. Application of Mind (AoM): learn how to apply
the fundamentals (that you already know) to solve
problems. Experiment with your ideas and see what
comes out of it.
3. Make new mistakes. Learn from your own and others'
mistakes and do not repeat them. This can only
happen when you experiment a lot and participate in
many forums, especially global ones, as well as being
on mailing lists, blogs, etc. Do not be afraid of making
mistakes or of failure. Develop a thick skin and
fearlessly ask your own questions.
4. Keep updating your skills, based on the latest trends.
5. Be humble. However much we know, it is insignificant
in the larger scheme of things.
6. Be patient. Success does not come about in a matter
of weeks or even months.
By Arun Khan, FOSS enthusiast and an active
member of the Chennai Linux Users Group.

Get involved with start-ups and the community

Open source has been at the genesis of many modern start-up
companies and the best way to gain more experience in the FOSS
domain is to work with them. Ranjan D Sakalley, lead consultant
with Thoughtworks, enthuses, "If you are committed to
becoming a true-blue FOSS professional, there is nothing better
than kick-starting your career with a start-up company, as most
of them are built on open source technology. It's really exciting
to try your hand at different high-scale open source projects, as
you get the liberty to experiment and innovate with novel stuff.
And hiring managers constantly look for FOSS skills."
Jyothi Bacche, head, Open Source Practice, MindTree, feels
that one needs to be popular in the open source community
to become an expert. "A FOSS expert has the capability to
influence the community, as the latter plays a key role in the
success of open source software. And this comes when you
get yourself involved in various activities in the FOSS arena.
Of course, one should have expertise in integrating various
open source components and in handling compliance and
distribution issues. You should make a conscious effort to be
self-driven and have a passion to create something for the
betterment of the open source community. Open source adoption
in the Indian market is quite high these days and it's a good
time to hone your skills based on the above points, and get
hired," reveals Bacche.
So, this may be a good time to build up on your FOSS
skills and get a leg-up in your career!

By Priyanka Sarkar
The author is a member of the editorial team. She loves to weave
in and out the little nuances of life and scribble her thoughts and
experiences in her personal blog.

JUNE 2013 | 99

For U & Me

Open Strategy

Sony Wants to Get Things
Right with Android

Kenchiro Hibi, managing director, Sony India

Sony's smartphone division is struggling
to regain its market share in India. The
company is known to be far behind its
competitors, including Samsung. Now, with
a new name, Sony Mobile, and after
parting ways with Ericsson, the Japan-based
smartphone brand is looking for a change
in its fortunes. At the start of 2013, the
company introduced its flagship product,
Xperia Z, and will be looking to launch
more products in the smartphone market.
It's never too late to find your way back into
the reckoning and Sony is hoping to do so
this year. The company is banking big on
the Android operating system to work the
magic. S Aadeetya from Open Source For You
spoke to Kenchiro Hibi, managing director,
Sony India, about the company's plans with
Android. Here are some excerpts:

Q: At the moment there are many operating platforms vying
for space in the smartphone segment. Which one does
Sony Mobile prioritise?
Sony's first preference has always been Android, which
is why you have seen the company launching Xperia
smartphones at regular intervals. That said, we are open to
all the mobility platforms available in the market right now,
including Windows. But from Sony's point of view, it is
important for us to establish ourselves within the Android
market first and then we might look at other options.

Q: You talked about Windows being an option for you. What
is stopping you from entering the Windows Phone arena,
as of now?
As I just mentioned, Sony is looking at all the options,
including Windows Phone. But the market for Windows
Phone is yet to pick up. We have seen majors like Nokia and
HTC involve themselves in the Windows ecosystem, but
they did not do as well as their Android counterparts in the
market. Apart from that, not many vendors have opened up
to Windows, probably because of its immature ecosystem.
We'll watch the developments before we actually move on to
the Windows platform.

Q: What about the Firefox OS from Mozilla, which is
expected to make its way into the market by the end of
this year?
Firefox is a platform that Sony is interested in being a
part of. We recently had a tele-conference discussing
the possibility of working on a device that runs on the
Firefox mobile OS. However, the platform is still in its
developmental stage, which is why it won't be possible for
the company to comment on something officially. We are not
working on any Sony smartphone for Firefox, as of now. If
the Firefox OS is able to succeed in the market and offer a
premium user experience, then we would engage with the
company more deeply.

Q: Sony has its Playstation OS, which is hugely popular.
Do you think this platform could be integrated into
smartphones, just like Samsung's Bada OS and Tizen, which
will be launched in the coming months?
Yes, Sony has the Playstation OS but then, there are a lot of other
platforms like Windows Phone and Firefox as well. Right now,
Sony is completely focused on getting things right with Android
and after that we will test the other platforms. Sony has not really
thought much about having its own operating system, but maybe
we could work on something in the near future.

Q: According to some recent reports, Sony is aiming
for the third spot in terms of market position in the
smartphone segment. Any particular reason for that?
We cannot aim for the sky from Day One. Our performance over
the years has not been as per our expectations, which is why
there has been a change in the entire scheme of things. Sony
decided to part ways with Ericsson and form its Sony Mobile
division with an aim to focus on and offer products that will reflect
the company's persona. The market position is merely a number;
what we would like to aim for is to satisfy consumers with our
products and not worry about our position. With Xperia Z and
other upcoming models, we believe there is enough capability in
the brand to compete with the heavyweights.

Q: What is your market position in the Indian market
and globally?
Sony does not disclose market share figures but, as per
reports, Sony has a 9 per cent market share in the Indian
smartphone segment. Ideally, we would like the figure to be
around 15 to 20 per cent in the coming years and we hope
that our latest premium offerings in the Xperia series will be
able to deliver that. Globally, our positions vary in different
markets but overall, we are No 1 in Japan and in some parts
of Europe and we'd like to have a similar position in other
markets as well. This year, Sony will focus on being a
premium brand, which was not the case last year, and offer
products across different price points.

Q: Sony is not visible in the smart camera segment. Are
there any plans to introduce products in the segment?
Smart cameras primarily revolve around connectivity. Sony
has the highest number of NFC-enabled products in the
world. Before summer, we will have more than 35 NFC-based
models in our portfolio that will offer Wi-Fi support
for seamless connectivity and sharing.

Q: How do you plan to engage with consumers in the
country?
For Sony, reaching out to consumers has always been essential
and it will remain the same with the Xperia smartphones as
well. We already have a wide network of sellers present across
the country. We will be adding over 300 touch points as well

and our marketing investments will be to the tune of Rs 3
billion for this year. We'll be focusing on ATL activities, online
and social media as well. But the touch point is the place where
the customer goes and experiences the product before buying,
which is why we have to communicate our value proposition in
the right manner. There will be demonstrations that will be held
in malls and big venues. We will target the top metro cities and
move to other sections later.

Q: This year's Union Budget included an excise duty for
smartphones costing above Rs 2,000. How does that
affect Sony Mobile's pricing strategy from here on?
We have come to terms with the recent budget announcement
that included an excise duty hike on phones costing more than
Rs 2,000. We will be discussing this in our upcoming corporate
meetings and come to a conclusion that will satisfy both the
brand managers and the consumers.


June 2013 | 101

For U & Me

Career

Apps Development: A Career with


Immense Growth Possibilities
For someone looking to reach for the sky, apps development is a great field to be in, giving
young developers the opportunity to make money as well as a name for themselves.

According to research conducted by Canalys App
Interrogator, there has been healthy growth in the
download and purchase of apps on mobile devices.
The research, conducted across the four stores (Apple's App
Store, Google Play, the Windows Phone Store and BlackBerry
World), shows an 11 per cent increase in consumption in Q1
2013 worldwide when compared to Q4 2012. The research
also shows that the direct revenue from paid-for apps, in-app
purchases and subscriptions combined grew by a slightly
more modest 9 per cent. Combined, downloads from the
stores amounted to more than 13.4 billion, and revenue
reached $2.2 billion (before revenue sharing).


If there is anything that is constantly growing in the world
of technology, it is the number of applications, whether it
is in the mobile world or the Web space. This is why apps
development is emerging as one of the hottest career options
for modern day developers.
According to Saurabh Singh, CEO and founder,
AppStudioz, app development is one of the most lucrative
career choices for tech professionals. He says, "Whether it
is freshers or experienced professionals, modern day techies
want to get into the field of app development." AppStudioz
works on making both mobile and Web apps for its
clients. Needless to say, the demand for mobile apps is
increasing at a much faster pace than for Web apps. Singh
says, "Mobile apps are the in thing. As the smartphones'
ecosystem is improving and developing, the demand for
app developers is growing."
Subhi Quraishi, CEO, ZMQ Software Systems, says,
"This is an interesting time for software professionals. The
world of app development has various career paths and
they are all good. It is a growing field. Apps have become
a necessity for all who want a digital presence. Modern
day software development has grown to be app specific.
Hence, anyone with a creative mindset and command over
technology should be here to make it big."

Growth prospects

For someone looking to make a career in apps


development, particularly in the mobile apps segment,
the sky is the limit. If you go by the statistics, the number
of mobile connections is more than TVs, landline
phones and PCs in India. So whether it is entertainment,
shopping, in-car navigation or banking, there is an app for
it and there is scope for apps in every sphere of life. So,
it won't be an exaggeration to say that apps will drive the
growth of the mobile devices and PC industry henceforth.
It is worth mentioning here that Indian mobile
operators are earning a major chunk of their revenue
through Value Added Services (VAS). The demand for
these has led to a significant increase in the demand for
mobile app developers. Singh emphasises, "The industry
is booming and demand is high, but there is still a dearth
of quality talent. By this I mean sound technical knowledge
as well as creative ideas. Companies like us resort to training
the developers in the domain to meet the demand."
AppStudioz has plans to add 100 developers this year, of
which a majority would be freshers. Singh clarifies, "The
college curriculum is still not up to the mark. Hence,
the freshers we hire are first trained by the experts and
then brought on to real projects. Of course, we definitely
look for sound technical knowledge in freshers." This
means that freshers without any experience in the apps
development industry can also have their share of the
apps development pie, but they have to undergo training.
Don't believe it? Well, you can scan any job site
and you will find numerous jobs in the M-VAS (Mobile
Value Added Services) industry. Therefore, rest assured
that a career in this industry will prove to be a good
move eventually.

Creativity is paramount

Sheer technical knowledge is not the only attribute


required for success in this domain. You have to think

For U & Me

out-of-the-box to make it big here. Quraishi says, "This
field is beyond technical knowledge. One must have
a creative way of thinking to survive in this field. By
creative I mean anything out-of-the-box and yet
user-friendly. Right from thinking about an app to executing
it, one needs to value the fact that eventually the app is
for a general user, who may or may not be tech-savvy. So,
making an app that is easy to use is a perfect thing."

Desired skillsets

For someone looking to make a career in the apps


development domain, a knowledge of programming
languages including C, C++ and Objective C for writing
applications on iOS (iPhone, iPad) and Java (Android and
the BlackBerry OS) is a must. Of course, one cannot learn
it all while in college, but you can always opt for short-term
courses or a diploma in apps development, which are
being widely offered across the country.
Apart from having a B Tech or an MCA degree, one
is expected to have fairly good knowledge on browsers
(WML and XHTML), gateways/servers (WAP, XML,
VXML, WTA, etc), clients (SMS, e-mail, chat, etc) and
stacks (WAP2.0 and TCP/IP). Developers are, of course,
most in demand in the business, but there is a lot of scope
beyond development as well. An app needs content and
graphics as well, which makes the industry a career choice
for content writers, graphic designers and researchers as
well. Innovation and out-of-the-box thinking can enable
freshers in the industry to grow quickly.
With the changing technology, keeping pace with
industry trends, technological innovations and new products
in the market is important for freshers, who must constantly
update themselves with the latest developments. Android,
iOS and Windows Phone are the latest in the mobile arena,
and even they may soon be replaced by Ubuntu Phone,
Sailfish and Firefox OS. Hence, keeping a tab on the latest
technology can be really helpful to remain in demand.
That's not all. Apps are designed for clients and users.
An app developer must understand the clients' requirements
and their target audience's temperament, and should have the
ability to make modifications promptly.

Pay packages

If you are good at your job, money is not a constraint.


An app developer can easily get an annual package of
Rs 300,000 to Rs 450,000 as a beginner. The early years
could be difficult times, but you should not worry too
much as the road you are treading will soon lead to a
successful career.
By: Diksha P Gupta
The author is assistant editor at EFY.

june 2013 | 103

Brocade
Cisco Systems
Dell India Pvt Ltd
ESDS Software Solutions Pvt Ltd
Hooduku India Pvt Ltd
HP
IBM
Kryptos
NetApp
Netmagic
Oracle India Pvt Ltd
Sify
Tech Mahindra
VMware Software India Pvt Ltd
Wipro

A List Of Leading Cloud


Solutions Providers
Brocade | Bengaluru
Building on Brocade VDX switches with Brocade VCS Fabric technology has enabled cloud service providers to build
networks that scale without disruption. This highly automated Ethernet fabric reduces network complexity and enables you
to add services on demand, and reduce the costs associated with network changes and additions. Coupled with Brocade
Network Subscription, a pay-as-you-go acquisition model, Brocade can help you meet your networking challenges with the
flexibility you require. Brocades recent launch of the On-Demand Data Center delivers simplicity, scalability and resource
utilisation. The result is a purpose-built network infrastructure for highly virtualised and cloud computing environments.

Cisco Systems | Bengaluru


Cisco and its partners provide cloud computing solutions that deliver people-centric collaboration, an assured good
user experience, dynamic and efficient cloud infrastructure, context-aware security, and accelerated deployment.
Using Cisco tools and resources, enterprises can build a private cloud that improves productivity and reduces costs. In
addition, Cisco can help service providers take advantage of new business opportunities to monetise cloud computing
services, by offering hosted services on a public cloud. Cisco and its global strategic partners offer a technology-driven
approach to solving business problems through committed partnerships.


Dell India Pvt Ltd | Bengaluru


With its proven experience and technology expertise in cloud computing, Dell offers a wide array of cloud services for
IT organisations, which include application integration, enterprise notification, software asset management, storage,
servers, desktops and networking virtualisation, on-site and off-site architecture integration into a hybrid solution, and
hosted services for applications access, data centre infrastructure and systems management. Dell helps companies
design their own flexible, integrated and secure cloud computing strategies, so they can build dedicated solutions on
their premises and simplify the planning, development, deployment and delivery of cloud services.

ESDS Software Solutions Pvt Ltd | Nashik


The companys cloud services solution, the eNlight Cloud Platform, offers IaaS on a pay-per-consume model on the public
and private cloud, with a fixed pricing model. eNlight Cloud, with its intelligent technology, senses the need for additional
resources and also the need to withdraw them, thus ensuring top notch performance at the lowest possible prices.
Major open source-based offerings: eNlight Cloud is entirely based on the Xen kernel, which is open source. The
company has customised the kernel to provide instant auto scaling of the CPU and RAM, which results in 70 per cent
savings on the capex and opex required for hosting websites or applications.
Leading clients: Tencent, Podar International, Kafilla Tours and Travels, Ambuja Cements, KFC and over 150 leading
companies in the private and public sectors.
USP: Intelligent auto-scaling, pay-per-consume, per-minute billing, a pre-paid usage model and instant provisioning
(less than 3 minutes) of virtual machines with all popular Linux distros. Migrating a Linux dedicated server on eNlight
Cloud has resulted in saving a minimum of 80 per cent of power.
Special mention: ESDS has consistently been honoured by government bodies, the media, OEMs, etc, and appreciated
by its clients for its innovative products and services. The company has received numerous prestigious awards like
the IT Enterprises (Special Award) -2009, the Green IT Infrastructure Award 2010, the Best IT Enabled Services
2011 by the government of Maharashtra, the Industrial Excellence Award for Innovation from the Dainik Bhaskar
Group, the Most Promising Banking Technology Solutions & Service Provider in North Maharashtra, and the Young

IT Professional Award 2012 from the Computer Society of India (Western Region) as an endorsement for
eNlight Cloud. ESDS is a CMMI Level 3 company and a SAP Certified Cloud Hosting Provider.
Website: http://www.esds.co.in/

Hooduku India Pvt Ltd | Bengaluru


Hooduku India Pvt Ltd has been providing IT solutions in the cloud and Big Data domains, focusing heavily on open
source technologies. The team of experts and architects at Hooduku Global strongly believes in the open source
community, and has made contributions, including a Web project, to the Linux Foundation. To date, Joomla/CMS
migration to the Microsoft Azure cloud platform (2009), and the early adoption of, as well as contributions to, open
source Big Data technologies such as Hadoop clusters and the Nutch search engine (2006-07), are accomplishments
the Hooduku team is proud of. These contributions have been well received within the worldwide open source
community. With public and private cloud adoption ramping up through OpenStack implementations, members of
Hooduku are getting actively involved and have committed greater support in the coming months.


HP | Bengaluru
HP Cloud provides an open cloud interface built on standards-based OpenStack APIs to support migration. It
provides a variety of solution tools from its solution partners to deploy complex multi-tier applications. It helps
you launch your websites, enterprise applications, Big Data, analytics and mobile workloads on the cloud.
It also ensures availability and security, and manages your workloads for scalability. The company provides
complete solutions to simplify the deployment and management of your production applications on the cloud.
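
To give a flavour of what a standards-based OpenStack API request looks like, here is a hypothetical sketch
(not HP-specific, and not from HP's documentation): it builds the JSON body that an OpenStack Identity
(Keystone) v3 token request carries. The user name, password and endpoint are made up for illustration.

```shell
#!/bin/sh
# Hypothetical sketch: the JSON payload for an OpenStack Identity (Keystone)
# v3 password-based token request. The credentials below are placeholders.
cat > auth.json <<'EOF'
{"auth": {"identity": {"methods": ["password"],
  "password": {"user": {"name": "demo", "domain": {"id": "default"},
               "password": "secret"}}}}}
EOF
# A client would POST this body to the Identity endpoint, typically
# http://<keystone-host>:5000/v3/auth/tokens, and receive a token in reply.
wc -c auth.json
```

Because the API is standard, the same payload works against any OpenStack-based cloud, which is the
portability argument such offerings make.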

IBM | Bengaluru
IBM SmartCloud Enterprise+ is a fully managed, security-rich and production-ready cloud environment designed
to ensure enterprise-class performance and availability. SCE+ offers complete governance, administration and
management control, along with service-level agreements (SLAs) aligned to your specific business and usage
requirements. Multiple security and isolation options built into the virtual infrastructure and network keep this cloud
separate from other cloud environments. IBM recently announced that its cloud services and software will be based
on an open cloud architecture. As a first step, the company unveiled a new cloud offering based on open cloud
standards, including OpenStack, which significantly speeds up and simplifies managing an enterprise-grade cloud.

Kryptos | Chennai
Kryptos' cloud-related services are highly elastic and scalable, and are available on a pay-per-use model at
a low cost, due to economies of scale delivered primarily by leveraging Internet technologies, virtualisation
services and readily available computing power.

NetApp | Bengaluru

NetApp's extensive ecosystem of industry-leading technology and delivery partners widens the expertise
and choices available to its clients, who can thus leverage the most innovative technologies across all
layers of their cloud solutions. NetApp's cloud solutions help clients accelerate their time to deployment
with pre-validated infrastructure solutions, automate and manage end-to-end cloud management integration,
and augment their on-premise private clouds with cloud services built on NetApp. In addition, NetApp's
Professional Services experts can help clients design the path to cloud computing, successfully delivering
IT as a Service (ITaaS).

Netmagic | Mumbai
Netmagic SimpliCloud is an enterprise-grade IaaS (Infrastructure as a Service) cloud platform built on the latest
generation of virtualisation techniques. This means that you can grow your business to reach new markets
without worrying about upfront capex on physical hardware.



Netmagic's HybriCloud service allows customers to have a mix of virtual and physical infrastructure if they need it.
Customers subscribing to its co-location or dedicated hosting services can augment their existing physical infrastructure
by securely provisioning additional capacity from its public cloud, and pay as per usage.

Oracle India Pvt Ltd | Gurgaon


Oracle Cloud offers a broad portfolio of Software as a Service (SaaS) applications, Platform as a Service (PaaS) and social
media capabilities, all on a subscription basis. Oracle Cloud delivers instant value and productivity for end users,
administrators and developers alike through functionally rich, integrated, secure enterprise cloud services. With Oracle
Cloud, one gets enterprise-grade application and platform services based on best-in-class business applications and
the industry's leading database and application server, managed by experts with over a decade of cloud delivery
experience. More than 25 million users rely on Oracle Cloud every day.

Sify | Chennai
The company's enterprise cloud services claim to offer best-in-class cloud computing services in India, backed by
its unparalleled expertise in storage and network services on the cloud. The platform offers a bouquet of intelligent
services for enterprises looking to grow their business on contemporary technology with minimal-to-zero capital
expenditure on physical hardware, almost instantaneously, leading to a faster time-to-market.
The services leverage the features of virtualisation and capitalise on the massive scalability of the cloud infrastructure,
enabling customers to provision, ramp up and downgrade their virtual compute, storage and network resources.


Tech Mahindra | Pune


Tech Mahindra's cloud enablement services cover the design of cloud IaaS, SaaS and PaaS infrastructure, as
well as software and product development, and deployment on the cloud. Other services are: service exposure
enablement, on demand; self-provisioning automation; shared and fully virtualised compute, network and storage
services; security assessment and controls implementation with industry partnerships; and telco enablement
for communications/collaboration-as-a-service (such as messaging, IM, Web/audio conferencing and document
sharing), which is enabled over the IaaS foundation layer and offered via multiple partnerships for telco/
enterprise customers.

VMware Software India Pvt Ltd | Bengaluru


The company provides solutions across the three layers of an IT environment -- infrastructure, applications and
end-user computing -- and thus helps the customer adopt a complete cloud environment. VMware has a long history of
support for open source software in its products. In addition to collaborating with the open source community, VMware
works closely with major Linux vendors to ensure high quality support for Linux guest operating systems running on
VMware hypervisors. As an active participant in the open source community, VMware has open sourced the VMware
Tools as the Open Virtual Machine Tools project, contributed the VMI (Virtual Machine Interface) paravirtualisation
code under the GPL, collaborated with the Linux kernel community and others in the development of paravirt-ops, and
sponsored OSDL's DCL F2F.

Wipro | Bengaluru
Wipro's solutions address customer requirements on a pay-per-use model, with dynamic infrastructure provisioning
capabilities. By leveraging its domain knowledge, systems integration expertise, Tier 3 data centres and industry
partnerships, Wipro's cloud computing services provide best-of-breed solutions that improve the efficiency and
effectiveness of IT in organisations. Wipro's Private Cloud solution brings together the benefits of traditional
IT management, including operational excellence, automation and service delivery models, and merges them
with the dynamic potential of cloud architectures. It provides the foundation for a strong, flexible and valuable
cloud infrastructure that supports IT operations and delivers exceptional service quality to the business.


TIPS & TRICKS

View the contents of tar and rpm files

Here are two simple commands to show you the contents of tar and rpm files.
1. To view the content of a tar file, issue the following command:

#tar -tvf /path/to/file.tar

2. To view the content of an rpm file, use the command given below:

#rpm -qlp /path/to/file.rpm

Giriraj G Rajasekharan,
girirajgr@gmail.com

Playing around with MP3 files

Here is a tip that helps you cut, split, join or merge MP3 files in Ubuntu, resulting in a better quality output.
To cut an MP3 file, you need to install poc-streamer, as follows:

$sudo apt-get install poc-streamer

The syntax for mp3cut is given below:

mp3cut [-o outputfile] [-T title] [-A artist] [-N albumname] [-t [hh:]mm:ss[+ms]-[hh:]mm:ss[+ms]] mp3 [-t ...] mp3
-o output: Output file, default mp3file.out.mp3

For example, if you want to cut a one-minute clip of the MP3 file named input.mp3 to a .wav file called output.wav,
run the following command:

$mp3cut -o output.wav -t 00:00(+0)-01:00(+0) input.mp3

If you want to join two MP3 files, you need to install mp3wrap, as follows:

$sudo apt-get install mp3wrap

The syntax for mp3wrap is shown below:

$mp3wrap merged_filename.mp3 filename1.mp3 filename2.mp3

where filename1.mp3 and filename2.mp3 are the input files that are to be merged together.
Finally, you can split a single large MP3 file into smaller files by installing mp3splt with the following command:

$sudo apt-get install mp3splt

Now, to split the large file, run the following command:

$mp3splt filename.mp3 00.00 01.23 03.20

Here, filename.mp3 is the input file, which is split into two MP3 files: one from the start to the 1 min 23 sec point,
and another from 1 min 23 sec to 3 min 20 sec. mp3splt can create the smaller files without even decoding the file.

Rajasekhar Chintalpudi,
rajasekhar.chintalapudi@gmail.com

GRUB 2 recovery

We often come across a condition in which the boot loader gets corrupt. Here are a few steps that will help you
recover your GRUB 2 boot loader.
Boot from a live CD or DVD that supports GRUB 2 (an Ubuntu 9.10 CD or above; a DVD will take more time
than a CD, so I suggest you boot from a CD).
Open the terminal and run fdisk -l to check the partition from which you want to recover GRUB 2.
Here I assume that you want to recover it from /dev/sda1. Then run the following commands:

$sudo mkdir /media/sda1
$sudo mount /dev/sda1 /media/sda1
$sudo mount --bind /dev /media/sda1/dev
$sudo mount --bind /proc /media/sda1/proc

Now chroot into that partition by running the command given below:

$sudo chroot /media/sda1

Then re-install GRUB, as follows:

#grub-install /dev/sda

The output should be like what's shown below:

Installation finished. No error reported.

If you get an error, then try the following command:

#grub-install --recheck /dev/sda

After a successful installation, exit from chroot and unmount the file systems that were mounted to recover
GRUB. Now reboot.

#exit
$sudo umount /media/sda1/proc
$sudo umount /media/sda1/dev
$sudo umount /media/sda1
$sudo reboot

You've successfully completed recovering your GRUB boot loader.

Kousik Maiti,
kousikster@gmail.com

Mounting a remote filesystem with ssh

Although running the scp or ssh command is very convenient, mounting the remote server's filesystem on
the local system has solved many problems for me.
Here is how this can be done. To mount the remote filesystem on a local Linux workstation, you need to
install sshfs. To install it, you can use the package manager of your distro, or you can compile and install it
manually. More details can be found at http://fuse.sourceforge.net/sshfs.html
After the installation is complete, you can run the following command on your Linux workstation:

sshfs username@hostname:/<Directory_path_on_Server> /<mount_point>

That's it, so enjoy! (Of course, you should have the required permissions.)

Sandesh Nagrare,
sandesh.nagrare@gmail.com

Low-level HDD formatting

You often have a hard disk drive that needs to be formatted such that all the data on it gets destroyed. Here is
a command that can erase all data from your HDD:

#dd if=/dev/zero of=/dev/hda bs=1M

This will format your primary HDD (hda) at a low level using dd, filling the entire disk with a sequence of
zeros. In case you want to fill the HDD with a mix of zeros and ones, you can use the following command:

#dd if=/dev/urandom of=/dev/hda bs=1M

Note: Please be very careful while using dd, as the wrong use of this command can erase all your
valuable data.

Kousik Maiti,
kousikster@gmail.com

Find out the elapsed time of a running process

There are a lot of processes running on your Linux system. Here is a command that will let you know how
long a process has been running:

#ps -eo %p %c %t|grep sshd

In response to the above command, you will get the following output:

2850 sshd     172-01:37:22
29532 sshd    125-09:07:10

In the above command, %p is the pid, %c is the command and %t is the elapsed time.

Ravikumar R,
ravikumar.raam@gmail.com
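
The elapsed-time tip above can also be put to work in a script. Here is a minimal sketch (not part of the
original tip) that pulls just the elapsed time of a single process using ps's etime output field; PID 1 is used
purely as an illustrative target:

```shell
#!/bin/sh
# Sketch: print only the elapsed time of one process.
# PID 1 (init/systemd) is used here just as an example target.
pid=1
# 'etime=' selects the elapsed-time column and suppresses the header;
# tr strips the padding spaces that ps adds.
elapsed=$(ps -o etime= -p "$pid" | tr -d ' ')
echo "PID $pid has been running for $elapsed"
```

The etime field prints as [[dd-]hh:]mm:ss, the same format shown in the tip's sample output.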

Share Your Linux Recipes!


The joy of using Linux is in finding ways to get around
problems: take them head on, defeat them! We invite you
to share your tips and tricks with us for publication in OSFY
so that they can reach a wider audience. Your tips could be
related to administration, programming, troubleshooting or
general tweaking. Submit them at www.linuxforu.com. The
sender of each published tip will get a T-shirt.


CALENDAR FOR 2013
EVENTS TO LOOK OUT FOR IN 2013

Date: June 12-13, 2013
Event: Cloud Connect
Description: The Cloud Connect conference and expo brings the entire cloud ecosystem together to drive growth
and innovation in cloud computing.
Location & Contact: Mumbai; Sanket Karode, Deputy Marketing Manager, sanket.karode@ubm.com, Ph: +91 22 61727403
Website: http://www.cloudconnectevent.in/

Date: June 12-13, 2013 (Mumbai) and June 21, 2013 (Bengaluru)
Event: 6th Tech BFSI 2013 - Transform, Empower & Innovate
Description: Tech BFSI will provide in-depth insights into the latest developments and technical solutions in
the areas of analytics, business intelligence, mobile technologies, cloud computing, data centres, collaboration
technologies, virtualisation, security and IT management in the BFSI sector.
Location & Contact: The Westin Mumbai Garden City (June 12-13, 2013); The Leela Palace, Bengaluru
(June 21, 2013); Aboli Pawar, Associate Director, Ph: 9004958990 / 9833226990, aboli@kamikaze.co.in
Website: http://www.techbfsi.com/

Date: June 12-14, 2013
Event: The Global High on Cloud Summit
Description: The Global High on Cloud Summit will address the issues, concerns, latest trends, new technology
and upcoming innovations on the cloud platform.
Location & Contact: Mumbai; Prashanth Nair, Sr Conference Producer, Ph: +91-80-41154921; contactus@besummits.com
Website: http://www.theglobalhighoncloudsummit.com/

Date: June 18-21, 2013
Event: CommunicAsia 2013 / EnterpriseIT 2013
Description: As Asia's largest integrated information and communication technology event, it is instrumental
in connecting everyone in the ICT industry.
Location & Contact: Marina Bay Sands, Singapore
Website: www.CommunicAsia.com & http://www.gotoenterpriseit.com/

Date: August 21-23, 2013
Event: Fleming Gulf's 2nd Annual Cloud Computing Summit
Description: The second annual cloud computing summit is bringing back key CIOs onto a single platform in
order to overcome the general inertia plaguing the sector.
Location & Contact: New Delhi; Tikenderjit Singh Makkar, Marketing Manager, tikenderjit.singh@fleminggulf.com;
Ph: +91 20 6727 6403
Website: http://www.fleminggulf.com/conferenceview/2nd-Annual-Cloud-Computing-Summit/464

Date: October 17-18, 2013
Event: Reseller Club Hosting Summit, Gurgaon
Description: This is supposedly Asia's largest gathering for the Internet industry. Expect to meet some of the
biggest brands from across the hosting world this October at Gurgaon, Delhi.
Location & Contact: Gurgaon; Keenan Thomas, Sales Manager; Ph: (+91) 22 3079 7637; keenan.t@rchostingsummit.com
Website: www.rchostingsummit.com

Date: November 11-13, 2013
Event: Open Source India
Description: This is the premier open source conference in Asia, targeted at nurturing and promoting the open
source ecosystem in the subcontinent.
Location & Contact: NIMHANS Convention Center, Bengaluru; Atul Goel, Senior Product and Marketing Manager;
Ph: 880 009 4211; atul.goel@efyindia.com
Website: http://osidays.com/osidays/

Date: November 27-29, 2013
Event: Interop, Mumbai
Description: INTEROP Mumbai is an independently organised conference and exhibition designed to empower
information technology professionals to make smart business decisions.
Location & Contact: Bombay Exhibition Center, Mumbai; Sanket Karode, Dy Marketing Manager,
sanket.karode@ubm.com, Ph: +91 22 61727403
Website: http://www.interop.in/

Mainstream Enterprise Adoption of Open Source Databases
A conversation with Ed Boyajian, CEO, EnterpriseDB

Are enterprises embracing open source database software today?
Absolutely. In 2012 we counted 32 of the Fortune 500 and 47 of the Global 1000 as customers. That includes
some of the biggest IT users in the world: IT operations at the Federal Aviation Administration, NIC, Fujitsu,
Sony Ericsson and TCS are all using Postgres or Postgres Plus from EnterpriseDB. Also noteworthy, companies
like VMware, Microsoft (through its acquisition of Skype), Apple and Facebook (through its acquisition of
Instagram) are using PostgreSQL. We are at the beginning of an explosion.
Companies are finding that, for a fraction of the cost of traditional databases, PostgreSQL can deliver the
sophisticated features and capabilities they require. PostgreSQL has had decades of hardening and development
by a talented and committed community of developers, as well as a fast-growing, supportive ecosystem of
database specialists.

How difficult is it to migrate to a new database?
EnterpriseDB has developed a proven Oracle compatibility solution that enables our customers to run many
Oracle applications using Postgres Plus. Postgres Plus natively supports many of Oracle's system interfaces,
facilitating migrations with minimal cost, risk and disruption. Existing technical staff, from developers to DBAs to
operations teams, leverage existing Oracle skills to build and manage Postgres Plus databases. EnterpriseDB
also has developed a comprehensive migration program that begins with an Oracle migration assessment and
provides support and assistance with the process all the way through to deployment.

What happens after Postgres databases are deployed?
Regardless of whether an organization is deploying applications based on community PostgreSQL or Postgres
Plus, EnterpriseDB provides a portfolio of solutions that ensure success. We have made the long-term
commitment to meeting the demands of the enterprise with Postgres-specialized products, support and
services. What's more, we are continually developing new Postgres database enhancements and sponsoring the
efforts of the PostgreSQL community. More than 2,000 organizations around the world turn to EnterpriseDB for
Postgres-related products and services.

How do customers contact EnterpriseDB?
Customers can check our web site at www.enterprisedb.com for a wide array of information on our
products and services, call +91 20 3058 9500 or e-mail sales@enterprisedb.com with questions or
comments.

Contact us today about: Software Subscriptions, 24x7x365 Technical Support, Professional Services,
Migration Assessments, and Training for Administrators and Developers.
Call: +1 781-357-3390 or 1-877-377-4352 (US Only), Email: info@enterprisedb.com

EnterpriseDB Software India Private Limited
Unit # 3, Ground Floor, Godrej Castlemaine, Sassoon Road, Pune 411001
T +91 20 3058 9500 F +91 20 3058 9502 www.enterprisedb.com

Test, develop and deploy your application on a VMware vCloud powered cloud.
Avail free cloud credit worth ` 25,000*; visit www.cloudinfinit.com for more details.
