
YOU SAID IT

More content for non-IT readers


I have been reading your magazine for the last few years. The
company I work for is in the manufacturing industry, and
your subscriber database probably has more individuals like me,
from companies that are not directly related to the IT industry.
Currently, your primary focus is on technical matters,
and the magazine carries articles written by skilled technical
individuals, so OSFY is really helpful for open source developers.
However, you also have some non-IT subscribers like us, who
can understand that something great is available in the open
source domain, which they can deploy to reduce their IT costs.
But, unfortunately, your magazine does not inform us about open
source solutions providers.
I request you to introduce the companies that provide end-to-end IT solutions on open source platforms, including thin clients,
desktops, servers, virtualisation, embedded customised OSs, ERP,
CRM, MRP, email and file servers, etc. Kindly publish relevant
case studies, with the overall cost savings and benefits. Just as you
feature job vacancies, do give us information about the solutions
providers I mentioned above.
Shekhar Ranjankar;
shekhar.ranjankar@unitopgroup.com
ED: Thank you for your valuable feedback. We do carry case studies
of companies deploying open source from time to time. We also
regularly carry a list of solutions providers from different
open source sectors. We will surely take note of your suggestion and
try to continue carrying content that interests non-IT readers too.

Requesting an article on Linux server migration


I am glad to receive my first copy of OSFY. I have a
suggestion to make: if possible, please include an article
on migrating to VMware (from a Linux physical server to
VMware ESX). Also, do provide an overview of some open
source tools (like Ghost for Linux) to take an image of a
physical Linux server.
Rohit Rajput;
rohit.solutions@gmail.com


ED: It's great to hear from you. We will definitely cover the
topics suggested by you in one of our forthcoming issues. Keep
reading our magazine, and do feel free to get in touch with us if
you have any more such valuable feedback.

A request for the Backtrack OS to be bundled on the DVD
I am a huge fan of Open Source For You. Thank you for
bundling the Ubuntu DVD with the May 2014 issue. Some of
my team members and I require the Backtrack OS. Could you
provide this in your next edition? I am studying information
sciences for my undergrad degree. Please suggest the important
programming languages that I should become proficient in.
Aravind Naik;
aravindnaik23@gmail.com
ED: Thanks for writing in to us. We're pleased to know that you
liked the DVD. Backtrack is no longer being maintained. The
updated version for penetration testing is now known as Kali
Linux, and we bundled it with the April 2014 issue of OSFY. For
career-related queries, you can refer to older OSFY issues or you
can find related articles on www.opensourceforu.com

Overseas subscriptions
Previously, I used to get copies of LINUX For You/Open
Source For You and Electronics For You from local book stores
but, lately, none of them carry these magazines any more. So
how can I get copies of all these magazines in Malaysia, and
where can I get previous issues too?
Abdullah Abd. Hamid;
ab@sirim.my
ED: Thank you for reaching out to us. Currently, we do not have
any reseller or distributor in Malaysia for news stand sales, but you
can always subscribe to the print edition or the e-zine version of the
magazines. You can find the details of how to subscribe to the print
editions on www.pay.efyindia.com; for the e-zine version, please
go to www.ezines.efyindia.com

Please send your comments or suggestions to:
The Editor,
Open Source For You,
D-87/1, Okhla Industrial Area, Phase I,
New Delhi 110020; Phone: 011-26810601/02/03;
Fax: 011-26817563; Email: osfyedit@efy.in

OFFERS OF THE MONTH

[Advertisements: free-trial and discount offers, valid till 31st August 2014, from hosting and training providers including ESDS (www.esds.co.in), CloudOye (www.cloudoye.com), Space2Host (www.space2host.com), Vectra Technologies (www.vectratech.in), GoForHosting (www.goforhosting.com) and PackWeb Hosting (www.prox.packwebhosting.com). To advertise here, contact Omar on +91-995 888 1862 or 011-26810601/02/03, or write to omar.farooq@efyindia.com.]

FOSSBYTES
Powered by www.efytimes.com

CentOS 7 now available

The CentOS Project has announced the general availability of CentOS 7, the first
release of the free Linux distro based on the source code for Red Hat Enterprise
Linux (RHEL) 7. It is the first major release after the collaboration between the
CentOS Project and Red Hat. CentOS 7 is built from the freely available RHEL 7
source code tree, and its features closely resemble those of Red Hat's latest operating
system. Just like RHEL 7, it is now powered by version 3.10.0 of the Linux kernel,
with XFS as the default file system. It is also the first version to include the systemd
management engine, the dynamic firewall system called firewalld, and the GRUB2
boot loader. The default Java Development Kit has also been upgraded to OpenJDK 7, and
the system now ships with open VMware tools and 3D graphics drivers, out of the box.
Also, like RHEL 7, this is the first version of CentOS that claims to offer an in-place
upgrade path. Soon, users will be able to upgrade from CentOS 6.5 to CentOS 7
without reformatting their systems.
The CentOS team has launched a new build process, in which the entire
distro is built from code hosted at the CentOS Project's own Git repository.
Source code packages (SRPMs) are created as a side effect of the build cycle,
and will be hosted on the main CentOS download servers.
Disc images of CentOS 7, which include separate builds for the GNOME and KDE
desktops, a live CD image and a network-installable version, are also now available.

Google to launch Android One smartphones with MediaTek chipset

Google made an announcement about its Android One program
at the recent Google I/O 2014 in San Francisco, California. The
company plans to launch devices powered by Android One in
India first, with companies like Micromax, Spice and Karbonn.
Android One has been launched to reduce the production costs of phones,
and the manufacturers mentioned earlier will be able to launch US$ 100 phones
based on this platform. Google will handle the software part, using Android One,
so phones will get firmware updates directly from Google. This is surprising
because low budget phones usually don't receive any software updates. Sundar
Pichai, Android head at Google, showcased a Micromax device at the show. The
Micromax Android One phone has an 11.43 cm (4.5 inch) display, FM radio, an SD
card slot and dual SIM slots. Google has reportedly partnered with MediaTek for
chipsets to power the Android One devices. We speculate that it is MediaTek's
MT6575 dual-core processor that has been packed into Micromax's Android One phone.
It is worth mentioning here that 78 per cent of the smartphones launched
in Q1 of 2014 in India were priced around US$ 200, so Google's Android One
will definitely herald major changes in this market. Google will also provide
manufacturers with guidelines on hardware designs. And it has tied up with
hardware component companies to provide high volume parts to manufacturers at a
lower cost, in order to bring out budget Android smartphones.

A rare SMS worm is attacking your Android device!
Android does get attacked by Trojan apps that have no self-propagation
mechanism, so users don't notice the malfunction. But here's a different,
rather rare, mode of attack that Android devices are now facing. Selfmite
is an SMS worm; it is the second such deadly virus found in the past two
months. Selfmite automatically sends SMSs to users with their name
in the message. The SMS contains a shortened URL which prompts users
to install a third-party APK file called TheSelfTimerV1.apk. The SMS
says, "Dear [name], Look the Self-time". A remote server hosts this
malware application. Users can find SelfTimer installed in the app drawer
of their Android devices.
The Selfmite worm shows a pop-up to download mobogenie_122141003.apk, which
offers synchronisation between Android devices and PCs. The app
has over 50 million downloads on the Play Store, but all are through various
paid referral schemes and promotion programmes. Researchers at Adaptive
Mobile believe that a number of Mobogenie downloads are promoted
through some malicious software used by an unknown advertising
platform. A popular vendor of security solutions in North America detected
dozens of devices that were infected with Selfmite. The attack campaign
was launched using Google: the shortened URL of this malicious app was
distributed in the Google shortlink format. The APK link was visited
2,140 times before Google disabled it.
By default, Android devices block apps from unknown and unauthorised developers,
but some users enable the installation of apps from unknown sources;
their devices become the targets of worms like this.

OSFY Classifieds
Classifieds for Linux & Open Source IT Training Institutes
IPSR Solutions Ltd.

WESTERN REGION

SOUTHERN REGION

Linux Lab (empowering linux mastery)


Courses Offered: Enterprise Linux
& VMware

*astTECS Academy
Courses Offered: Basic Asterisk Course,
Advanced Asterisk Course, Free PBX
Course, Vici Dial Administration Course

Courses Offered: RHCE, RHCVA,


RHCSS, RHCDS, RHCA,
Produced Highest number of
Red Hat professionals
in the world

Address (HQ): 1176, 12th B Main,


HAL 2nd Stage, Indiranagar,
Bangalore - 560008, India
Contact Person: Lt. Col. Shaju N. T.
Contact No.: +91-9611192237
Email: info@asterisk-training.com
Website: www.asttecs.com,
www.asterisk-training.com

Address (HQ): Merchant's


Association Building, M.L. Road,
Kottayam - 686001,
Kerala, India
Contact Person: Benila Mendus
Contact No.: +91-9447294635
Email: training@ipsrsolutions.com
Branch(es): Kochi, Kozhikode,
Thrissur, Trivandrum
Website: www.ipsr.org

Advantage Pro
Courses Offered: RHCSS, RHCVA,
RHCE, PHP, Perl, Python, Ruby, Ajax,
A prominent player in Open Source
Technology

Linux Learning Centre


Courses Offered: Linux OS Admin
& Security Courses for Migration,
Courses for Developers, RHCE,
RHCVA, RHCSS, NCLP

Address (HQ): 1 & 2 , 4th Floor,


Jhaver Plaza, 1A Nungambakkam
High Road, Chennai - 600 034, India
Contact Person: Ms. Rema
Contact No.: +91-9840982185
Email: enquiry@vectratech.in
Website(s): www.vectratech.in

Address (HQ): 635, 6th Main Road,


Hanumanthnagar,
Bangalore - 560 019, India
Contact Person: Mr. Ramesh Kumar
Contact No.: +91-80-22428538,
26780762, 65680048 /
+91-9845057731, 9449857731
Email: info@linuxlearningcentre.com
Branch(es): Bangalore
Website: www.linuxlearningcentre.com

Address (HQ): 1104, D Gold House,


Nr. Bharat Petrol Pump, Ghyaneshwer
Paduka Chowk, FC Road, Shivajinagar
Pune-411 005
Contact Person: Mr.Bhavesh M. Nayani
Contact No.: +020 60602277,
+91 8793342945
Email: info@linuxlab.org.in
Branch(es): coming soon
Website: www.linuxlab.org.in
Linux Training & Certification
Courses Offered: RHCSA,
RHCE, RHCVA, RHCSS,
NCLA, NCLP, Linux Basics,
Shell Scripting,
(Coming soon) MySQL
Address (HQ): 104B Instant Plaza,
Behind Nagrik Stores,
Near Ashok Cinema,
Thane Station West - 400601,
Maharashtra, India
Contact Person: Ms. Swati Farde
Contact No.: +91-22-25379116/
+91-9869502832
Email: mail@ltcert.com
Website: www.ltcert.com

NORTHERN REGION
GRRAS Linux Training and Development Center
Courses Offered: RHCE, RHCSS, RHCVA, CCNA, PHP, Shell Scripting (online training is also available)
Address (HQ): GRRAS Linux Training and Development Center, 219, Himmat Nagar, Behind Kiran Sweets, Gopalpura Turn, Tonk Road, Jaipur, Rajasthan, India
Contact Person: Mr. Akhilesh Jain
Contact No.: +91-141-3136868/ +91-9983340133, 9785598711, 9887789124
Email: info@grras.com
Branch(es): Nagpur, Pune
Website(s): www.grras.org, www.grras.com

Duestor Technologies
Courses Offered: Solaris, AIX,
RHEL, HP UX, SAN Administration
(Netapp, EMC, HDS, HP),
Virtualisation (VMware, Citrix, OVM),
Cloud Computing, Enterprise
Middleware.
Address (H.Q.): 2-88, 1st floor,
Sai Nagar Colony, Chaitanyapuri,
Hyderabad - 060
Contact Person: Mr. Amit
Contact Number(s): +91-9030450039,
+91-9030450397.
E-mail id(s): info@duestor.com
Website(s): www.duestor.com

EASTERN REGION
Academy of Engineering and
Management (AEM)
Courses Offered: RHCE, RHCVA,
RHCSS, Clustering & Storage,
Advanced Linux, Shell
Scripting, CCNA, MCITP, A+, N+
Address (HQ): North Kolkata, 2/80
Dumdum Road, Near Dumdum
Metro Station, 1st & 2nd Floor,
Kolkata - 700074
Contact Person: Mr. Tuhin Sinha
Contact No.: +91-9830075018,
9830051236
Email: sinhatuhin1@gmail.com
Branch(es): North & South Kolkata
Website: www.aemk.org

Expect an Android Wear app section along with the Google Play Services update

Google recently started rolling out its Google Play Services update 5.0 to all
devices. This version is an advance on the existing 4.4, bringing the Android
wearable services API and much more. Mainly focused on developers, this version
was announced at Google I/O 2014. According to the search giant's blog, the
newest version of Google Play Services includes many updates that can increase
app performance. These include wearable APIs, a dynamic security provider, and
improvements in Drive, Wallet, Google Analytics, etc.
The main focus is on the Android Wear platform and APIs, which will enable more
applications on these devices. In addition to this, Google has announced a
separate section for Android Wear apps in the Play store.
These apps for the Android Wear section in the Google Play store come from Google
itself. The collection includes official companion apps for Android devices,
Hangouts and Google Maps. The main purpose of the Android Wear companion
app is to let users manage their devices from Android smartphones. It provides
voice support, notifications and more. There are third party apps as well, from
Pinterest, Banjo and Duolingo.

Google plans to remove QuickOffice from app stores

Google has announced the company's future plans for Google Docs,
Slides and Sheets. It has now integrated the QuickOffice service into Google
Docs, so there is no longer a need for the separate Google QuickOffice app.
QuickOffice was acquired by Google in 2012. It served up free document viewing
and editing on Android and iOS for two years. Google has now decided to
discontinue this free service.
The firm has integrated QuickOffice into the Google Docs, Sheets and Slides
apps. The QuickOffice app will be removed from the Play Store and Apple's App Store
soon, and users will not be able to see or install it. Existing users will be able to
continue to use the old version of the app.

Calendar of forthcoming events

4th Annual Datacenter Dynamics Converged; September 18, 2014; Bengaluru
The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives.
Contact: Praveen Nair; Email: Praveen.nair@datacenterdynamics.com; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

Gartner Symposium IT Xpo; October 14-17, 2014; Grand Hyatt, Goa
CIOs and senior IT executives from across the world will gather at this event, which offers talks and workshops on new ideas and strategies in the IT industry.
Website: http://www.gartner.com

Open Source India; November 7-8, 2014; NIMHANS Center, Bengaluru
Asia's premier open source conference, which aims to nurture and promote the open source ecosystem across the sub-continent.
Contact: Omar Farooq; Email: omar.farooq@efy.in; Ph: 09958881862; Website: http://www.osidays.com

CeBIT; November 12-14, 2014; BIEC, Bengaluru
One of the world's leading business IT events, offering a combination of services and benefits that will strengthen the Indian IT and ITES markets.
Website: http://www.cebit-india.com/

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh
The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives.
Contact: Praveen Nair; Email: Praveen.nair@datacenterdynamics.com; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

HostingCon India; December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai
This event will be attended by Web hosting companies, Web design companies, domain and hosting resellers, ISPs and SMBs from across the world.
Website: http://www.hostingcon.com/contact-us/

New podcast app for Linux is now ready for testing

An all-new podcast app for Ubuntu was launched recently. This app, called
Vocal, has a great UI and design. Nathan Dyer, the developer of the
project, has released unstable beta builds of the app for Ubuntu 14.04 and 14.10,
for testing purposes.
Only next-gen, easy-to-use desktops are capable of running the beta version
of Vocal. Installing the beta version of the app on Ubuntu is not as difficult as
installing it on KDE, GNOME or Unity, but users can't try the beta version of
Vocal without installing the unstable elementary desktop PPA. Vocal is an open
source app, and one can easily port it from Ubuntu to other mainstream Linux versions.
However, Dyer suggests users wait until the first official beta version of the app for
such desktops is available.
The official developer's blog has a detailed report on the project.

CoreOS Linux comes out with Linux containers as a service!


CoreOS has launched a commercial service to ease the workload of systems
administrators. The new commercial Linux distribution service can update itself
automatically, so systems administrators do not have to perform major updates
manually. Linux-based companies like Red Hat and SUSE use open source
and free applications and libraries for their operations, yet offer commercial
subscription services for enterprise editions of Linux. These services cover
software, updates, integration and technical support, bug fixes, etc.
CoreOS has a different strategy compared to the competing services offered by
other players in the service, support and distribution industries. Users will not
have to apply major updates themselves, since CoreOS wants to save them the hassle
of manually updating all packages. The company plans to stream copies of updates
directly to the OS. CoreOS has named the software CoreUpdate. It controls and monitors
software packages and their updates, and also gives administrators the controls to
manually update a few packages if they want to. It has a roll-back feature in case
an update causes any malfunction in a machine. CoreUpdate can manage multiple
systems at a time.
CoreOS was designed to promote the use of the open source OS kernel, which
is used in a lot of cloud-based virtual servers. CoreOS consumes less
than half the resources of a comparable Linux distribution instance.
Applications run in virtualised containers managed by Docker, and they can start
instantly. CoreOS was launched in December last year. It uses two partitions,
which help in easily updating distributions: one partition contains the current OS,
while the other is used to store the updated OS. This smoothens out the entire
process of upgrading a package or an entire distribution. The service can be directly
installed and run on the system, or used via cloud services like Amazon, Google or
Rackspace. The venture capital firm Kleiner Perkins Caufield & Byers has invested
over US$ 8 million in CoreOS. The company was also backed by Sequoia Capital and
Fuel Capital in the past.

Mozilla to launch Firefox-based streaming dongle, Netcast

After the successful launch of Google's Chromecast, which has sold in millions,
everyone else has discovered the potential of streaming devices. Recently,
Amazon and Roku launched their own devices. According to GigaOM, Mozilla will
soon enter the market with its Firefox-powered streaming device. A Mozilla
enthusiast, Christian Heilmann, recently uploaded a photo of Mozilla's prototype
streaming device on Twitter.
People at GigaOM managed to dig out more on it, and even got their hands on
the prototype as soon as that leaked photo went viral on Twitter. The device provides
better functionality and options than Chromecast. Mozilla has partnered with an
as yet unknown manufacturer to build this device. The prototype has been sent to
some developers for testing and reviews. The device, which is called Netcast, has a
hackable open bootloader, which lets it run some Chromecast apps.
Mozilla has always looked for an open environment for its products. It is expected

Linux Foundation releases Automotive Grade Linux to power cars

The Linux Foundation recently released Automotive Grade Linux (AGL) to power
automobiles, a move that marks its first steps into the automotive industry. The
Linux Foundation is sponsoring the AGL project to collaborate with the automotive,
computing hardware and communications industries, apart from academia and other
sectors. The first release of this system is available for free on the Internet.
A Linux-based platform called Tizen IVI is used to power AGL. Tizen IVI was
primarily designed for a broad range of devices, from smartphones and TVs to cars
and laptops.
Here is the list of features that you can experience in the first release of AGL:
a dashboard, Bluetooth calling, Google Maps, HVAC, audio controls, smartphone
link integration, media playback, a home screen and a news reader. The Linux
Foundation and its partners expect this project to change the future of open source
software. They hope to see next-generation car entertainment, navigation and other
tools powered by open source software. The Linux Foundation expects collaborators
to add new features and capabilities in future releases. Development of AGL is
expected to continue steadily.


Microsoft to abandon X-Series Android smartphones too

It hasn't been long since Microsoft ventured into the Android market with its
X series devices, and the company has already revealed plans to abandon the
series. With the announcement of up to 18,000 job cuts, the company is also
phasing out its feature phones and the recently launched Nokia X Android
smartphones.
Here are excerpts from an internal email sent to Microsoft employees by Jo Harlow,
who heads the phone business under Microsoft Devices:
"Placing Mobile Phone services in maintenance mode: With the clear focus on
Windows Phones, all Mobile Phones-related services and enablers are planned to
move into maintenance mode; effective: immediately. This means there will be no
new features or updates to services on any Mobile Phones platform as a result of
these plans. We plan to consider strategic options for Xpress Browser to enable
continuation of the service outside of Microsoft. We are committed to supporting
our existing customers, and will ensure proper operation during the controlled
shutdown of services over the next 18 months. A detailed plan and timeline for
each service will be communicated over the coming weeks.
Transitioning developer efforts and investments: We plan to transition developer
efforts and investments to focus on the Windows ecosystem while improving the
company's financial performance. To focus on the growing momentum behind
Windows Phone, we plan to immediately begin ramping down developer engagement
activities related to Nokia X, Asha and Series 40 apps, and shift support to
maintenance mode."

that the company's streaming stick will come with open source technology, which will
help developers build HDTV streaming apps for smartphones.

Opera is once again available on Linux

Norwegian Web browser company Opera has finally released a beta version of its
browser for Linux. This Opera 24 version for Linux has the same features as Opera
24 on the Windows and Mac platforms. Chrome and Firefox are currently the two most
used browsers on the Linux platform; Opera 24 will be a good alternative to them.
As of now, only the developer or beta version of Opera for Linux is available. We
are hoping to see a stable version in the near future. In this beta version, Linux
users will get to experience popular Opera features like Speed Dial, Discover and
Stash. Speed Dial is a home page that gives users an overview of their history,
folders and bookmarks. Discover is an RSS reader embedded within the browser,
making gathering and reading articles of interest much easier. Stash is like
Pinterest within a browser, and its UI is inspired by Pinterest. It allows users to
collect websites and categorise them. Stash is designed to enable users to plan
their travel, work and personal lives with a collection of links.

Unlock your Moto X with your tattoo

Motorola is implementing an alternative security system for the Moto X. It is
frustrating to remember difficult passwords, while simpler passwords are easy to
crack. To counter this, VivaLnk has launched digital tattoos. The tattoo
automatically unlocks the Moto X when it is applied to the skin.
The technology is based on Near Field Communication (NFC) to connect with
smartphones and authenticate access. Motorola has been working on optimising
digital tattoos with Google's Advanced Technology and Projects group.
The pricing is on the higher side, but this is a great initiative in wearable
technology. Developing user-friendly alternatives to the password and PIN has
been a major focus of tech companies. Motorola had talked about this in the
introductory session of the D11 conference in California, when it discussed the
idea of passwords in pills or tattoos. The idea may seem like a gimmick, but you
never know when it will become commonly used. VivaLnk is working on making this
technology compatible with other smartphones too. It is considering entering the
domain of creating tattoos of different types and designs.

OpenSSL flaws fixed by PHP

PHP recently pushed out new versions of its popular scripting language, which fix
many crucial bugs; two of these are in its OpenSSL support. The flaws are not as
serious as Heartbleed, which popped up a couple of months back. Both flaws are
related to the way OpenSSL timestamps are handled. PHP 5.5.14 and 5.4.30 fix both
flaws.
The other bugs that were fixed were not security related, but of a more general
nature.

iberry introduces the Auxus Linea L1 smartphone and Auxus AX04 tablet in India

In a bid to expand its portfolio in the Indian market, iberry has introduced two
new Android KitKat-powered devices in the country: a smartphone and a tablet.
The Auxus Linea L1 smartphone is priced at Rs 6,990 and the Auxus AX04 tablet
is priced at Rs 5,990. Both have been available from the online megastore eBay
India since June 25 this year.
The iberry Auxus Linea L1 smartphone features an 11.43 cm (4.5 inch) display with
OGS technology and Gorilla Glass protection. It is powered by a 1.3 GHz quad-core
MediaTek (MT6582) processor coupled with 1 GB of DDR3 RAM. It sports a 5 MP rear
camera with an LED flash and a 2 MP front-facing camera. It comes with 4 GB of
inbuilt storage, expandable up to 64 GB via a microSD card. The dual-SIM device
runs Android 4.4 KitKat out of the box. The 3G-supporting smartphone has a
2000 mAh battery.
Meanwhile, the iberry Auxus AX04 tablet features a 17.78 cm (7 inch) IPS display.
It is powered by a 1.5 GHz dual-core processor (unspecified chipset) coupled with
512 MB of RAM. The voice-supporting tablet sports a 2 MP rear camera and a 0.3 MP
front-facing camera. It comes with 4 GB of built-in storage, expandable up to
64 GB via a microSD card slot. The dual-SIM device runs Android 4.4 KitKat out of
the box. It has a 3000 mAh battery.

Google to splurge a whopping Rs 1,000 million on marketing Android One

Looks like global search engine giant Google wants to leave no stone unturned in
its quest to make its ambitious Android One smartphone-for-the-masses project
reach its vastly dispersed target audience in emerging economies (including
India). The buzz is that Google is planning to splurge over a whopping Rs 1,000
million, along with its official partners, on advertising and marketing for the
platform.
Even as Sundar Pichai, senior VP at Google in charge of Android, Chrome and Apps,
is all set to launch the first batch of low budget Android smartphones in India
sometime in October this year, the latest development shows how serious Google is
about the project.
It was observed that Google's OEM partners were forced into launching a new
smartphone every nine months to stay ahead in the cut-throat competition. However,
thanks to Google's new Android hardware and software reference platform, its
partners will now be able to save money and get enough time to choose the right
components before pushing their smartphones into the market. Android One will
also allow them to push updates to their Android devices, offering an optimised
stock Android experience. With the Android One platform falling into place, Google
will be able to ensure a minimum set of standards for Android-based smartphones.

Linux kernel 3.2.61 LTS officially released

The launch of Linux kernel 3.2.61 LTS, the brand-new maintenance release of the
3.2 kernel series, has been officially announced by Ben Hutchings, the maintainer
of the Linux 3.2 kernel branch. While highlighting the slew of changes that come
bundled with the latest release, Hutchings advised users to upgrade to it as early
as possible.
The Linux kernel 3.2.61 is an important release in the cycle, according to
Hutchings. It introduces better support for the x86, ARM, PowerPC, s390 and MIPS
architectures. At the same time, it also improves support for the EXT4, ReiserFS,
Btrfs, NFS and UBIFS file systems. It also comes with updated drivers for wireless
connectivity, InfiniBand, USB, ACPI, Bluetooth, SCSI, Radeon and Intel i915
hardware, among others.
Meanwhile, Linux founder Linus Torvalds has officially announced the fifth Release
Candidate (RC) of the upcoming Linux kernel 3.16. RC5 is the successor to Linux
3.16-rc4. It is now available for download and testing. However, since it is a
development version, it should not be installed on production machines.

Motorola brings out the Android 4.4.4 KitKat upgrade for the Moto E, Moto G and Moto X

Motorola has unveiled the Android 4.4.4 KitKat update for its devices in India:
the Moto E, Moto G and Moto X. This latest version of Android has an extra layer
of security for browsing Web content on the phone.
With this phased rollout, users will receive notifications that will enable them
to update their OS but, alternatively, the update can also be accessed by way of
the settings menu. This release goes on to shore up Motorola's commitment to
offering its customers a pure, bloatware-free and seamless Android experience.

With the Android One platform, Google aims to reach the 5 billion people across
the world who still do not own a smartphone. According to Pichai, less than 10 per
cent of the population in emerging countries owns a smartphone. The promise of a
stock Android experience at a low price point is what Android One aims to provide.
Home-grown manufacturers such as Micromax, Karbonn and Spice will create and sell
these Android One phones, for which the hardware reference points, software and
subsequent updates will be provided by Google. Even though the spec sheet of
Android One phones hasn't been officially released, Micromax is already working on
its next low budget phone, which many believe will be an Android One device.

SQL injection vulnerability patched in Ruby on Rails

Two SQL injection vulnerabilities were patched in Ruby on Rails, the open source
Web development framework now used by many developers. Some high profile websites
also use this framework. The Ruby on Rails developers recently launched versions
3.2.19, 4.0.7 and 4.1.3, and advised users to upgrade to these versions as soon as
possible. A few hours later, they released versions 4.0.8 and 4.1.4 to fix problems
caused by the 4.0.7 and 4.1.3 updates.
One of the two SQL injection vulnerabilities affects applications running on Ruby
on Rails versions 2.0.0 through 3.2.18 that use the PostgreSQL database system and
query bit string data types. The other vulnerability affects applications running
on Ruby on Rails versions 4.0.0 to 4.1.2 that use PostgreSQL and query range data
types.
Despite affecting different versions, these two flaws are related, and allow
attackers to inject arbitrary SQL code using crafted values.

The city of Munich adopts Linux in a big way!

It's certainly not a case of an overnight conversion. The city of Munich began
to seek open source alternatives way back in 2003.
With a population of about 1.5 million citizens and thousands of employees, this
German city took its time to adopt open source. Tens of thousands of government
workstations were to be considered for the change. Its initial shopping list had
suitably rigid specifications, spanning everything from avoiding vendor lock-in
and receiving regular hardware support updates, to having access to an expansive
range of free applications.
In the first stage of migration, in 2006, Debian was introduced across a small
percentage of government workstations, with the remaining Windows computers
switching to OpenOffice.org, followed by Firefox and Thunderbird.
Debian was replaced by a custom Ubuntu-based distribution named LiMux in 2008,
after the team handling the project realised Ubuntu was the platform that could
"satisfy our requirements best".


In The News

SUSE Partners with Karunya University to Make Engineers Employable

In one of the very first initiatives of its kind, SUSE and Novell have partnered with Karunya
University, Coimbatore, to ensure its students are industry-ready.

Out of the many interviews that we have conducted with recruiters, asking them
about what they look for in a candidate, one common requirement seems to be
knowledge of open source technology. As per NASSCOM reports, between 20 and 33 per
cent of the million students that graduate out of India's engineering colleges
every year run the risk of being unemployed.
The Attachmate Group, along with Karunya University, has taken a step forward to
address this issue. Novell India, in association with Karunya University, has
introduced Novell's professional courses as part of the university's curriculum.
Students enrolled in the university's M.Tech course for Information Technology
will be offered industry-accepted courses. Apart from this, another company of the
Attachmate Group, SUSE, has also pitched in to make the students familiar with the
world of open source technology.
Speaking about the initiative, Venkatesh Swaminathan, country head, The Attachmate
Group (Novell, NetIQ, SUSE and Attachmate), said, "This is one of the first
implementations of its kind but we do have engagements with universities in
various other formats. Regarding this partnership with Karunya, we came out with a
kind of a joint strategy to make engineering graduates ready for the jobs
enterprises offer today. We thought about the current curriculum and how we could
modify it to make it more effective. Our current education system places more
emphasis on theory rather than the practical aspects of engineering. With our
initiative, we aim to bring more practical aspects into the curriculum. So we have
looked at what enterprises want from engineers when they deploy some solutions.
Today, though many enterprises want to use open source technologies effectively,
the unavailability of adequate talent to handle those technologies is a major
issue. So, the idea was to bridge the gap between what enterprises want and what
they are getting, with respect to the talent they require to implement and manage
new technologies."
Speaking about the initiatives, Dr J Dinesh Peter, associate professor and HoD I/C,
Department of Information Technology, said, "We have already started with our
first batch of students, who are learning SUSE. I think adding open source
technology in the curriculum is a great idea because nowadays, most of the tech
companies expect knowledge of open source technology for the jobs that they offer.
Open source technology is the future, and I think all universities must have it
incorporated in their curriculum in some form or the other."
The university has also gone ahead to provide professional courses from Novell to
the students. Dr Peter said, "In India, where the problem of employability of
technical graduates is acute, this initiative could provide the much needed shot
in the arm. We are pleased to be associated with Novell, which has offered its
industry-relevant courses to our students. With growing competition and demand for
skilled employees in the technology industry, it is imperative that the industry
and academia work in sync to address the lacuna that currently exists in our
system."
Growth in the amount of open source software that enterprises use has been much
faster than growth in proprietary software usage over the past two to three years.
One major reason for this is that open source technology has helped companies
slash huge IT budgets while maintaining higher performance standards than they did
with proprietary technologies. This trend makes it even more critical to
incorporate open source technologies in the college curriculum.
Going forward, the company aims to partner with at least another 15-20
universities this year to integrate its courseware into the curriculum, to benefit
the maximum number of students in India. "The onus of ensuring that the technical
and engineering students who graduate every year in our country are world-class
and employable lies on both the academia as well as the industry. With this
collaboration, we hope to take a small but important step towards achieving this
objective," Swaminathan added.

About The Attachmate Group
Headquartered in Houston, Texas, The Attachmate Group is a privately-held software
holding company comprising distinct IT brands. Principal holdings include
Attachmate, NetIQ, Novell and SUSE.

By: Diksha P Gupta
The author is senior assistant editor at EFY.

Buyers Guide

SSDs Move Ahead to Overtake Hard Disk Drives

High speed, durable and sleek SSDs are moving in to replace traditional HDDs.

A solid state drive (SSD) is a data storage device that uses integrated circuit
assemblies as its memory to store data. Now that everyone is switching over to
thin tablets and high performance notebooks, carrying heavy, bulky hard disks may
be difficult. SSDs, therefore, play a vital role in today's world, as they combine
high speed, durability and smaller sizes with vast storage and power efficiency.
SSDs consume minimal power because they do not have any movable parts inside,
which leads to lower internal power consumption.

HDDs vs SSDs

The new technologies embedded in SSDs make them costlier than HDDs. "SSDs, with
their new technology, will gradually overtake hard disk drives (HDDs), which have
been around ever since PCs came into prominence. It takes time for a new
technology to completely take over the traditional one. Also, new technologies are
usually expensive. However, users are ready to pay a little more for a new
technology because it offers better performance," explains Rajesh Gupta, country
head and director, Sandisk Corporation India.
SSDs use integrated circuit assemblies as memory for storing data. The technology
uses an electronic interface which is compatible with traditional block
input/output HDDs, so SSDs can easily replace HDDs in commonly used applications.
An SSD uses a flash-based medium for storage. It is believed to have a longer life
than an HDD and also consumes less power. "SSDs are the next stage in the
evolution of PC storage. They run faster, and are quieter and cooler than the
aging technology inside hard drives. With no moving parts, SSDs are also more
durable and reliable than hard drives. They not only boost performance but can
also be used to breathe new life into older systems," says Vishal Parekh,
marketing director, Kingston Technology India.

How to select the right SSD

If you're a videographer, or have a studio dedicated to audio/video
post-production work, or are in the banking sector, you can look at ADATA's latest
launch, which is featured later in the article. Kingston, too, has introduced SSDs
for all possible purposes. SSDs are great options even for gamers, or those who
want to ensure their data is saved on a secure medium. Kingston offers an entire
range of SSDs, including entry level variants as well as options for general use.
There are a lot of factors to keep in mind when you are planning to buy an SSD:
durability, portability, power consumption and speed. Gupta adds, "The performance
of SSDs is typically indicated by their IOPS (input/output operations per second),
so one should look at the specifications of the product. Also, check the storage
capacity." If you're looking for an SSD when you already have a PC or laptop, then
double check the compatibility between your system and the SSD you've shortlisted.
If you're buying a new system, then you can always check with the vendors as to
what SSD options are available. Research the I/O speeds and get updates about how
reliable the product is.
For PC users, some of the important performance parameters of SSDs are related to
battery life, heating of the device and portability. "An SSD is 100 per cent solid
state technology and has no motor inside, so the advantage is that it consumes
less energy; hence, it extends the battery life of the device and is quite
portable," explains Gupta.
Listed below are a few broad specifications of SSDs, which can help buyers decide
which variant to go in for.

Portability

Portability is one of the major concerns when buying an external drive because,
as discussed earlier, everyone is gradually shifting to tablets, iPads and
notebooks, and so would not want to carry around an external drive that is heavier
than the computing device. The overall portability of an SSD is evaluated on the
basis of its size, shape, weight and ruggedness.

High speed

Speed is another factor people look for while buying an SSD; if it is not fast, it
is not worth the buy. SSDs offer data transfer read speeds that range from
approximately 530 MBps to 550 MBps, whereas an HDD offers only around 30 to 50
MBps. SSDs can also boot any operating system almost four times faster than a
traditional 7200 RPM 500 GB hard disk drive. With SSDs, applications provide a 12
times faster response compared to an HDD. A system equipped with an SSD also
launches applications faster and offers high performance overall.
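As a rough, illustrative calculation based on the figures above (actual results depend on the drive, interface and workload): copying a 5 GB (5,120 MB) file at a sustained 530 MBps takes roughly 5120 / 530, or about 10 seconds, on an SSD, whereas the same copy at 50 MBps on an HDD takes about 5120 / 50, or around 102 seconds. It is this order-of-magnitude gap that shows up most visibly in boot-up times and application launches.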

Durability

As an SSD does not have any moving parts like a motor, and uses a flash-based
medium for storing data, it is more likely to keep the data secure and safe. Some
SSDs are coated with metal, which extends their life. There is almost no chance of
their getting damaged; even if you drop your laptop or PC, the data stays safe and
does not get affected.

Power consumption

In comparison to an HDD, a solid state drive consumes minimal power. Usually, a PC
user faces the challenge of a limited battery life. "But since an SSD is 100 per
cent solid state technology and has no motor inside, it consumes less energy;
hence, it extends the life of the battery and the PC," adds Rajesh Gupta.
There are plenty of other reasons for choosing an SSD over an HDD. These include
the warranty, cost, efficiency, etc. "Choosing an SSD can save you the cost of
buying a new PC by reviving the system you already own," adds Parekh.

A few options to choose from

Many companies, including Kingston, ADATA and Sandisk, have launched their SSDs
and it is quite a task trying to choose the best among them. Kingston has always
stood out in terms of delivering good products, not just to the Indian market but
worldwide. Ashu Mehrotra, marketing manager, ADATA, speaks about his firm's SSDs:
"ADATA has been putting a lot of resources into R&D for SSDs, because of which its
products provide unique advantages to customers." Gupta says, "Sandisk is a
completely vertically integrated solutions provider and is also a key manufacturer
of flash-based storage systems, which are required for SSDs. Because of this, we
are very conscious about the categories to be used in the SSD. We also make our
own controllers and do our own integration."

HyperX Fury from Kingston Technology

A 6.35 cm (2.5 inch), 7 mm solid state drive (SSD)
Delivers impressive performance at an affordable price
Speeds up system boot-up, application loading times and file execution
Controller: SandForce SF-2281
Interface: SATA Rev 3.0 (6 Gbps)
Read/write speed: up to 500 MBps, to boost overall system responsiveness and performance
Reliability: a cool, rugged and durable drive to push your system to the limits
Warranty: three years

Extreme PRO SSD from Sandisk

Consistently fast data transfer speeds
Lower latency times
Reduced power consumption
Comes in the following capacities: 64 GB, 128 GB and 256 GB
Speed: 520 MBps
Compatibility: SATA Revision 3.0 (6 Gbps)
Warranty: three years

Premier Pro SP920 from ADATA

Designed to meet the high-performance requirements of multimedia file transfers
Provides up to 7 per cent more space on the SSD, due to the right combination of controller and high quality flash
Weighs 70 grams; dimensions are 100 x 69.85 x 7 mm
Controller: Marvell
Comes in the following capacities: 128 GB, 256 GB, 512 GB and 1 TB
NAND flash: synchronous MLC
Interface: SATA 6 Gbps
Read/write speed: from 560 MBps to 180 MBps
Power consumption: 0.067 W idle/0.15 W active

SSD 840 EVO from Samsung

Capacity: 500 GB (1 GB = 1 billion bytes)
Dimensions: 100 x 69.85 x 6.80 mm
Weight: max. 53 g
Interface: SATA 6 Gbps (compatible with SATA 3 Gbps and SATA 1.5 Gbps)
Controller: Samsung 3-core MEX controller
Warranty: three years

1200 SSD from Seagate

Designed for applications demanding fast, consistent performance; has a dual-port 12 Gbps SAS interface
Comes with 800 GB capacity
Random read/write performance of up to 110K/40K IOPS
Sequential read/write performance ranging from 500 MBps to 750 MBps

By: Manvi Saxena
With inputs from ADATA, Kingston and Sandisk.
The author is a part of the editorial team at EFY.


Developers

How To

An Introduction to the Linux Kernel

This article provides an introduction to the Linux kernel, and demonstrates how
to write and compile a module.

Have you ever wondered how a computer manages the
most complex tasks with such efficiency and accuracy?
The answer is, with the help of the operating system. It
is the operating system that uses hardware resources efficiently
to perform various tasks and ultimately makes life easier. At a
high level, the OS can be divided into two parts: the first being
the kernel and the other being the utility programs. Various user space
processes ask for system resources such as the CPU, storage,
memory, network connectivity, etc, and the kernel services
these requests. This column will explore loadable kernel
modules in GNU/Linux.
The Linux kernel is monolithic, which means that the
entire OS runs solely in supervisor mode. Though the kernel
is a single process, it consists of various subsystems and
each subsystem is responsible for performing certain tasks.
Broadly, any kernel performs the following main tasks.
Process management: This subsystem handles the process life-cycle. It creates and
destroys processes, allowing communication and data sharing between processes
through inter-process communication (IPC). Additionally,
with the help of the process scheduler, it schedules processes
and enables resource sharing.
Memory management: This subsystem handles all
memory related requests. Available memory is divided into
chunks of a fixed size called pages, which are allocated or
de-allocated to/from the process, on demand. With the help of
the memory management unit (MMU), it maps the process
virtual address space to a physical address space and creates
the illusion of a contiguous large address space.
File system: The GNU/Linux system is heavily dependent
on the file system. In GNU/Linux, almost everything is a file.
This subsystem handles all storage related requirements like
the creation and deletion of files, compression and journaling
of data, the organisation of data in a hierarchical manner,
and so on. The Linux kernel supports all major file systems
including MS Windows NTFS.


Device control: Any computer system requires various


devices. But to make the devices usable, there should be a
device driver and this layer provides that functionality. There
are various types of drivers present, like graphics drivers, a
Bluetooth driver, audio/video drivers and so on.
Networking: Networking is one of the important aspects
of any OS. It allows communication and data transfer between
hosts. It collects, identifies and transmits network packets.
Additionally, it also enables routing functionality.

Dynamically loadable kernel modules

We often install kernel updates and security patches to make
sure our system is up to date. In the case of MS Windows, a reboot
is often required, but this is not always acceptable; for instance,
the machine cannot be rebooted if it is a production server.
Wouldn't it be great if we could add or remove functionality
to/from the kernel on-the-fly, without a system reboot? The
Linux kernel allows the dynamic loading and unloading of kernel
modules. Any piece of code that can be added to the kernel at
runtime is called a kernel module. Modules can be loaded
or unloaded while the system is up and running, without any
interruption. A kernel module is object code that can be
dynamically linked to the running kernel using the insmod
command, and unlinked using the rmmod command.

A few useful utilities

GNU/Linux provides various user-space utilities that give
useful information about kernel modules. Let us explore them.
lsmod: This command lists the currently loaded kernel
modules. It is a very simple program which reads the /proc/
modules file and displays its contents in a formatted manner
(a small illustration of this appears after this list).
insmod: This is also a trivial program, which inserts a
module into the kernel. This command doesn't handle module
dependencies.
rmmod: As the name suggests, this command is used to
unload modules from the kernel. Unloading is done only if
the current module is not in use. rmmod also supports the -f
or --force option, which can unload modules forcibly, but this
option is extremely dangerous. There is a safer way to remove
modules: with the -w or --wait option, rmmod will isolate the
module and wait until it is no longer used.
modinfo: This command displays information about
the module that is passed as a command-line argument.
If the argument is not a filename, then it searches the /lib/
modules/<version> directory for the module. modinfo shows
each attribute of the module in the field:value format.
Note: <version> is the kernel version. We can obtain it
by executing the uname -r command.
dmesg: Any user-space program displays its output on the
standard output stream, i.e., /dev/stdout, but the kernel uses
a different methodology. The kernel appends its output to
the ring buffer, and by using the dmesg command, we can
manage the contents of the ring buffer.
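Since lsmod is described above as little more than a pretty-printer for /proc/modules, a small user-space sketch can make that concrete. The following program is not from the article; it is a minimal, illustrative reader that prints the first two fields (module name and size) of each line of /proc/modules, which is roughly the information lsmod shows:

/* list_modules.c: an illustrative user-space program (not from the article)
 * that reads /proc/modules, the file that lsmod formats for display.
 * Each line of /proc/modules begins with the module name and its size. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	FILE *fp = fopen("/proc/modules", "r");
	char name[64];
	unsigned long size;
	char rest[512];

	if (fp == NULL) {
		perror("fopen /proc/modules");
		return EXIT_FAILURE;
	}

	printf("%-30s %10s\n", "Module", "Size");
	/* Read the module name and size; discard the rest of each line. */
	while (fscanf(fp, "%63s %lu %511[^\n]", name, &size, rest) >= 2)
		printf("%-30s %10lu\n", name, size);

	fclose(fp);
	return EXIT_SUCCESS;
}

Compile it with gcc list_modules.c -o list_modules and compare its output with that of lsmod.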

Preparing the system

Now it's time for action. Let's create a development
environment. In this section, let's install all the required
packages on an RPM-based GNU/Linux distro like CentOS
and on a Debian-based GNU/Linux distro like Ubuntu.

Installation on CentOS

First install the gcc compiler by executing the following


command as a root user:
[root]# yum -y install gcc

Then install the kernel development packages:


[root]# yum -y install kernel-devel

Finally, install the make utility:


[root]# yum -y install make

Installation on Ubuntu

First install the gcc compiler:


[mickey] sudo apt-get install gcc

After that, install kernel development packages:


[mickey] sudo apt-get install kernel-package

And, finally, install the make utility:


[mickey] sudo apt-get install make

Our first kernel module

Our system is ready now. Let us write the first kernel module.
Open your favourite text editor and save the file as hello.c
with the following contents:
#include <linux/kernel.h>
#include <linux/module.h>

int init_module(void)
{
	printk(KERN_INFO "Hello, World !!!\n");
	return 0;
}

void cleanup_module(void)
{
	printk(KERN_INFO "Exiting ...\n");
}

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Hello world module.");
MODULE_VERSION("1.0");

Any module must have at least two functions. The first
is the initialisation function and the second is the clean-up function.
In our case, init_module() is the initialisation function
and cleanup_module() is the clean-up function. The
initialisation function is called as soon as the module
is loaded, and the clean-up function is called just before
unloading the module. MODULE_LICENSE and the other
macros are self-explanatory.
There is a printk() function, the syntax of which is
similar to the user-space printf() function. But unlike
printf(), it doesn't print messages on a standard output
stream; instead, it appends messages to the kernel's
ring buffer. Each printk() statement comes with a
priority. In our example, we have used the KERN_INFO
priority. Please note that there is no comma (,) between
KERN_INFO and the format string. In the absence of an
explicit priority, the DEFAULT_MESSAGE_LOGLEVEL
priority will be used. The last statement in init_module()
is return 0, which indicates success.
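The priority simply changes how a message is filtered and logged. As a small, illustrative aside (this snippet is not part of the article's example code), the same call can be made at several of the standard log levels:

/* printk_levels.c: an illustrative module (not part of the article's
 * example) printing messages at different standard log levels. */
#include <linux/kernel.h>
#include <linux/module.h>

int init_module(void)
{
	printk(KERN_ERR "An error-level message\n");
	printk(KERN_WARNING "A warning-level message\n");
	printk(KERN_INFO "An informational message\n");
	/* No priority given: DEFAULT_MESSAGE_LOGLEVEL is assumed. */
	printk("A message with the default priority\n");
	return 0;
}

void cleanup_module(void)
{
	/* Nothing to clean up. */
}

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("printk() priority demonstration.");

All of these messages end up in the same ring buffer and can be inspected with dmesg, exactly as with KERN_INFO.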
The names of the initialisation and clean-up
functions are init_module() and cleanup_module()
respectively. But with newer kernels (>= 2.3.13)
we can use any names for the initialisation and clean-up
functions; the old names are still supported
for backward compatibility. The kernel provides the
module_init and module_exit macros, which register the
initialisation and clean-up functions. Let us rewrite
the same module with names of our own choice for the
initialisation and clean-up functions:

#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	printk(KERN_INFO "Hello, World !!!\n");
	return 0;
}

static void __exit hello_exit(void)
{
	printk(KERN_INFO "Exiting ...\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Hello world module.");

Here, the __init and __exit keywords imply initialisation
and clean-up functions, respectively.

Compiling and loading the module

Now, let us understand the module compilation
procedure. To compile a kernel module, we are going to
use the kernel's build system. Open your favourite text
editor and write down the following compilation steps
in it, before saving it as Makefile. Please note that the
kernel module hello.c and the Makefile must exist in the
same directory.
obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

To build modules, kernel headers are required. The
above makefile invokes the kernel's build system from the
kernel's source, and finally the kernel's makefile invokes
our Makefile to compile the module. Now that we have
everything to build our module, just execute the make
command, and this will compile and create the kernel
module named hello.ko:

[mickey]$ ls
hello.c  Makefile

[mickey]$ make
make -C /lib/modules/2.6.32-358.el6.x86_64/build M=/home/mickey modules
make[1]: Entering directory `/usr/src/kernels/2.6.32-358.el6.x86_64'
  CC [M]  /home/mickey/hello.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/mickey/hello.mod.o
  LD [M]  /home/mickey/hello.ko.unsigned
  NO SIGN [M] /home/mickey/hello.ko
make[1]: Leaving directory `/usr/src/kernels/2.6.32-358.el6.x86_64'

[mickey]$ ls
hello.c  hello.ko  hello.ko.unsigned  hello.mod.c  hello.mod.o  hello.o  Makefile  modules.order  Module.symvers

We have now successfully compiled our first kernel
module. Now, let us look at how to load and unload this
module in the kernel. Please note that you must have super-user
privileges to load/unload kernel modules. To load a
module, switch to the super-user mode and execute the
insmod command, as shown below:

[root]# insmod hello.ko

insmod has done its job successfully. But where is the
output? It is appended to the kernel's ring buffer. So let's
verify it by executing the dmesg command:

[root]# dmesg
Hello, World !!!

We can also check whether our module is loaded or not.
For this purpose, let's use the lsmod command:

[root]# lsmod | grep hello
hello                   859  0

To unload the module from the kernel, just execute the
rmmod command as shown below and check the output of the
dmesg command. Now, dmesg shows the message from the
clean-up function:

[root]# rmmod hello

[root]# dmesg
Hello, World !!!
Exiting ...

In this module, we have used a couple of macros, which
provide information about the module. The modinfo command
displays this information in a nicely formatted fashion:

[mickey]$ modinfo hello.ko
filename:       hello.ko
version:        1.0
description:    Hello world module.
author:         Narendra Kangralkar.
license:        GPL
srcversion:     144DCA60AA8E0CFCC9899E3
depends:
vermagic:       2.6.32-358.el6.x86_64 SMP mod_unload modversions

Finding the PID of a process

Let us write one more kernel module to find out the Process
ID (PID) of the current process. The kernel stores all process
related information in the task_struct structure, which is
defined in the <linux/sched.h> header file. It provides a
current variable, which is a pointer to the current process.
To find out the PID of the current process, just print the value
of the current->pid variable. Given below is the complete
working code (pid.c):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>

static int __init pid_init(void)
{
	printk(KERN_INFO "pid = %d\n", current->pid);
	return 0;
}

static void __exit pid_exit(void)
{
	/* Don't do anything */
}

module_init(pid_init);
module_exit(pid_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Kernel module to find PID.");
MODULE_VERSION("1.0");

The Makefile is almost the same as the first makefile, with
a minor change in the object file's name:

obj-m += pid.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Now compile and insert the module and check the output
using the dmesg command:

[mickey]$ make
[root]# insmod pid.ko
[root]# dmesg
pid = 6730

A module that spans multiple files

So far we have explored how to compile a module from a


single file. But in a large project, there are several source
files for a single module and, sometimes, it is convenient to

divide the module into multiple files. Let us understand the
procedure of building a module that spans two files. Let's
divide the initialisation and clean-up functions from the hello.c
file into two separate files, namely startup.c and cleanup.c.
Given below is the source code for startup.c:

#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	printk(KERN_INFO "Function: %s from %s file\n", __func__, __FILE__);
	return 0;
}

module_init(hello_init);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Startup module.");
MODULE_VERSION("1.0");

And cleanup.c will look like this:

#include <linux/kernel.h>
#include <linux/module.h>

static void __exit hello_exit(void)
{
	printk(KERN_INFO "Function %s from %s file\n", __func__, __FILE__);
}

module_exit(hello_exit);

MODULE_LICENSE("BSD");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Cleanup module.");
MODULE_VERSION("1.1");

Now, here is the interesting part -- the Makefile for these modules:

obj-m += final.o
final-objs := startup.o cleanup.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

The Makefile is self-explanatory. Here, we are saying:
build the final kernel object by using startup.o and
cleanup.o. Let us compile and test the module:

[mickey]$ ls
cleanup.c  Makefile  startup.c

[mickey]$ make

Then, let's display module information using the modinfo
command:

[mickey]$ modinfo final.ko
filename:       final.ko
version:        1.0
description:    Startup module.
author:         Narendra Kangralkar.
license:        GPL
version:        1.1
description:    Cleanup module.
author:         Narendra Kangralkar.
license:        BSD
srcversion:     D808DB9E16AC40D04780E2F
depends:
vermagic:       2.6.32-358.el6.x86_64 SMP mod_unload modversions

Here, the modinfo command shows the version,
description, licence and author-related information from each
module.
Let us load and unload the final.ko module and verify the
output:

[mickey]$ su
Password:
[root]# insmod final.ko
[root]# dmesg
Function: hello_init from /home/mickey/startup.c file
[root]# rmmod final
[root]# dmesg
Function: hello_init from /home/mickey/startup.c file
Function hello_exit from /home/mickey/cleanup.c file

Passing command-line arguments to the module


In user-space programs, we can easily manage command
line arguments with argc/ argv. But to achieve the same
functionality through modules, we have to put in more of
an effort.



To achieve command-line handling in modules,


we first need to declare global variables and use the
module_param() macro, which is defined in the <linux/moduleparam.h>
header file. There is also the MODULE_PARM_DESC() macro,
which provides descriptions about arguments. Without going into lengthy theoretical
discussions, let us write the code:
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static char *name = "Narendra Kangralkar";
static long roll_no = 1234;
static int total_subjects = 5;
static int marks[5] = {80, 75, 83, 95, 87};

module_param(name, charp, 0);
MODULE_PARM_DESC(name, "Name of a student");

module_param(roll_no, long, 0);
MODULE_PARM_DESC(roll_no, "Roll number of a student");

module_param(total_subjects, int, 0);
MODULE_PARM_DESC(total_subjects, "Total number of subjects");

module_param_array(marks, int, &total_subjects, 0);
MODULE_PARM_DESC(marks, "Subjectwise marks of a student");

static int __init param_init(void)
{
	static int i;

	printk(KERN_INFO "Name              : %s\n", name);
	printk(KERN_INFO "Roll no           : %ld\n", roll_no);
	printk(KERN_INFO "Subjectwise marks\n");

	for (i = 0; i < total_subjects; ++i) {
		printk(KERN_INFO "Subject-%d = %d\n", i + 1, marks[i]);
	}

	return 0;
}

static void __exit param_exit(void)
{
	/* Don't do anything */
}

module_init(param_init);
module_exit(param_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Module with command line arguments.");
MODULE_VERSION("1.0");

After compilation, first insert the module without
any arguments, which displays the default values of the
variables. But after providing command-line arguments,
the default values will be overridden. The output below
illustrates this:

[root]# insmod parameters.ko
[root]# dmesg
Name              : Narendra Kangralkar
Roll no           : 1234
Subjectwise marks
Subject-1 = 80
Subject-2 = 75
Subject-3 = 83
Subject-4 = 95
Subject-5 = 87
[root]# rmmod parameters

Now, let us reload the module with command-line arguments
and verify the output:

[root]# insmod ./parameters.ko name="Mickey" roll_no=1001 marks=10,20,30,40,50
[root]# dmesg
Name              : Mickey
Roll no           : 1001
Subjectwise marks
Subject-1 = 10
Subject-2 = 20
Subject-3 = 30
Subject-4 = 40
Subject-5 = 50
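As an additional check (assuming the same parameters.ko built above), modinfo can list only the parameters along with the descriptions registered through MODULE_PARM_DESC():

[mickey]$ modinfo -p parameters.ko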

If you want to learn more about modules, the Linux
kernel's source code is the best place to do so. You can
download the latest source code from https://www.kernel.org/.
Additionally, there are a few good books available in
the market like Linux Kernel Development (3rd Edition)
by Robert Love and Linux Device Drivers (3rd Edition).
You can also download the free book from http://lwn.net/Kernel/LDD3/.
By: Narendra Kangralkar
The author is a FOSS enthusiast and loves exploring
anything related to open source. He can be reached at
narendrakangralkar@gmail.com

Insight

Developers

Write Better jQuery Code


for Your Project

jQuery, the cross-platform JavaScript library designed to simplify the client-side scripting
of HTML, is used by over 80 per cent of the 10,000 most popularly visited websites. jQuery
is free open source software which has a wide range of uses. In this article, the author
suggests some best practices for writing jQuery code.

This article aims to explain how to use jQuery in
a rapid and more sophisticated manner. Websites
focus not only on backend functions like user
registration, adding new friends or validation, but also on
how their Web pages will get displayed to the user, how
their pages will behave in different situations, etc. For
example, doing a mouse-over on the front page of a site
will either show beautiful animations, properly formatted
error messages or interactive hints to the user on what can
be done on the site.
jQuery is a very handy, interactive, powerful and rich
client-side framework built on JavaScript. It is able to
handle powerful operations like HTML manipulation,
events handling and beautiful animations. Its most
attractive feature is that it works across browsers. When
using plain JavaScript, one of the things we need to ensure
is whether the code we write tends towards perfection. It
should handle any exception. If the user enters an invalid
type of value, the script should not just hang or behave
badly. However, in my career, I have seen many junior
developers using plain JavaScript solutions instead of rich

frameworks like jQuery and writing numerous lines of


code to do some fairly minor task.
For example, if one wants to write code to show the
datepicker selection on an onclick event in plain JavaScript,
the flow is:
1. For onclick event create one div element.
2. Inside that div, add content for dates, month and year.
3. Add navigation for changing the months and year.
4. Make sure that, on the first click, the div can be seen, and on
the second click, the div is hidden; and this should not affect
any other HTML elements. Just creating a datepicker
is a slightly more difficult task and if this needs to be
implemented many times in the same page, it becomes
more complex. If the code is not properly implemented,
then making modifications can be a nightmare.
This is where jQuery comes to our rescue. By using it, we
can show the datepicker as follows:
$("#id").datepicker();

That's it! We can reuse the same code multiple times by




just changing the id(s); and without any kind of collision,


we can show multiple datepickers in the same page. That
is the beauty of jQuery. In short, by using it, we can focus
more on the functionality of the system and not just on small
parts of the system. And we can write more complex code
like a rich text editor and lots of other operations. But if
we write jQuery code without proper guidance and proper
methodology, we end up writing bad code; and sometimes
that can become a nightmare for other team members to
understand and modify for minor changes.
Developers often make silly mistakes during jQuery code
implementation. So, based on some silly mistakes that I have
encountered, here are some general guidelines that every
developer should keep in mind while implementing jQuery code.

General guidelines for jQuery

1. Try to use this instead of just using the id and class of
the DOM elements. I have seen that most developers are
happy with just using $("#id") or $(".class") everywhere:

//What developers are doing:
$("#id").click(function(){
	var oldValue = $("#id").val();
	var newValue = (oldValue * 10) / 2;
	$("#id").val(newValue);
});

//What should be done: Try to use more $(this) in your code.
$("#id").click(function(){
	$(this).val(($(this).val() * 10) / 2);
});

2. Avoid conflicts: When working with a CMS like
WordPress or Magento, which might be using other
JavaScript frameworks instead of jQuery, you need to
work with jQuery inside that CMS or project. Then use
the noConflict() method of jQuery:

var $abc = jQuery.noConflict();
$abc("#id").click(function(){
	//do something
});

3. Take care of absent elements: Make sure that the element
on which your jQuery code is working/manipulating is
not absent. If the element that your code manipulates is
added dynamically, first check whether it has actually
been added to the DOM:

$("#divId").find("#someId").length

This code returns 0 if there isn't an element with someId
found; else it will return the total number of such elements that are
inside divId.
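For instance, a simple guard built on the same check could look like this (the handler body is only illustrative):

if($("#divId").find("#someId").length > 0){
	//the element exists, so it is safe to manipulate it here
	$("#divId").find("#someId").show();
}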

4. Use proper selectors and try to use more find(), because
find() can traverse the DOM faster. For example, if we want to
find the content of the div with class divClass:

//demo code snippet
<div id="id1">
<span id="id2"></span>
<div class="divClass">Here is the content.</div>
</div>
//developer generally uses
var content = $("#id1 .divClass").html();
//the better way is [This is faster in execution]
var content = $("#id1").find("div.divClass").html();

5. Write functions wherever required: Generally,


developers write the same code multiple times. To
avoid this, we can write functions. To write functions,
lets find the block that will repeat. For example, if
there is a validation of an entry for a text box and the
same gets repeated for many similar text boxes, then
we can write a function for the same. Given below is a
simple example of a text box entry. If the value is left
empty in the entry, then function returns 0; else, if the
user has entered some value, then it should return the
same value.
//Javascript
function doValidation(elementId){
	//get value using elementId
	//check and return value
}
//simple jQuery
$("input[type=text]").blur(function(){
	//get value using $(this)
	//check and return value
});
//best way to implement
//now you can use this function easily with click event also
$.doValidation = function(){
	//get value
	//check and return value
};
$("input[type=text]").blur($.doValidation);

6. Object organisation: This is another thing that each


developer needs to keep in mind. If one bunch of
variables is related to one task and another bunch of
variables is related to another task, then get them better
organised, as shown below:

//bad way
var disableTask1 = false;
var defaultTask1 = 5;
var pointerTask1 = 2;
var disableTask2 = true;
var defaultTask2 = 10;
var currentValueTask2 = 10;
//like that many other variables

//better way
var task1 = {
disable: false,
default: 5,
pointer: 2,
getNewValue: function(){
//do some thing
return task1.default + 5;
}
};
var task2 = {
disable: true,
default: 10,
currentValue: 10
};
//how to use them
if(task1.disable){
//do some thing
return task1.default;
}

7. Use of callbacks: When multiple functions are used in


your code and if the second function is dependent on
the effects of the first output, then callbacks are required
to be written.
For example, task2 needs to be executed after
completion of task1, or in other words, you need to halt
execution of task2 until task1 is executed. I have noticed
that many developers are not aware of callback functions.
So, they either initialise one variable for checking [like
mutex in the operating system] or set a timeout for
execution. Below, I have explained how easily this can be
implemented using callback.
//Javascript way
task1(function(){
	task2();
});
function task1(callback){
	//do something
	if (callback && typeof (callback) === "function") {
		callback();
	}
}
function task2(callback){
	//do something
	if (callback && typeof (callback) === "function") {
		callback();
	}
}
//Better jQuery way
$.task1 = function(){
//do something
};
$.task2 = function(){
//do something
};
var callbacks = $.Callbacks();
callbacks.add($.task1);
callbacks.add($.task2);
callbacks.fire();

8. Use of each for iteration: The snippet below shows how


each can be used for iteration.
var array;
//javascript way
var length = array.length;
for(var i =0; i<length; i++){
var key = array[i].key;
// like wise fetching other values.
}
//jQuery way
$.each(array, function(key, value){
alert(key);
});

9. Don't repeat code: Never write any code again and again.
If you find yourself doing so, halt your coding and read
the eight points listed above, all over again.
Next time I'll explain how to write more effective plugins,
using some examples.
By: Savan Koradia
The author works as a senior PHP Web developer at Multidots
Solutions Pvt Ltd. He writes tutorials to help other developers
to write better code. You can contact him at: savan.koradia@
multidots.in; Skype: savan.koradia.multidots


Developers

Let's Try

Back Up a Sharded Server in MongoDB


Continuing the series on MongoDB, in this article, readers learn how to set up a backup for the
sharded environment that was set up over the previous two articles.

In the previous article in this series, we set up a sharded
environment in MongoDB. This article deals with one
of the most intriguing and crucial topics in database
administration: backups. The article will demonstrate the
MongoDB backup process and will make a backup of the
sharded server that was configured earlier. So, to proceed, you
must set up your sharded environment as per our previous
article, as we'll be using the same configuration.
Before we move on with the backup, make sure that
the balancer is not running.
The balancer is the process that
ensures that data is distributed
evenly in a sharded cluster.
This is an automated process
in MongoDB and at most
times, you wont be bothered
with it. In this case, though, it
needs to be stopped so that no
chunk migration takes place
while we back up the server.
If you're wondering what the
term chunk migration means,
let me tell you that if one
shard in a sharded MongoDB environment has more data
stored than its peers, then the balancer process migrates
some data to other shards. Evenly distributed data ensures
optimal performance in a sharded environment.
So now connect to a Mongo process by opening a
command prompt, going to the MongoDB root directory and
typing mongo. Type sh.getBalancerState() to find out the
balancer's status. If you get true as the output, your balancer
is running. Type sh.stopBalancer() to stop the balancer.
The next step is to back up the config server, which
stores metadata about shards. In the previous article, we
set up three config servers for our shard. Since all the

config servers store the same metadata and since we


have three of them just to ensure availability, we'll
be backing up just one config server for demonstration
purposes. So open a command prompt and type the
following command to back up the config database of
our config server:
C:\Users\viny\Desktop\mongodb-win32-i386-2.6.0\
bin>mongodump --host localhost:59020 --db config

This command will dump


your config database under
the dump directory of your
MongoDB root directory.
Now let's back up our
actual data by taking backups
of all of our shards. Issue the
following commands, one by
one, and take a backup of all
the three replica sets of both
the shards that we configured
earlier:
mongodump --host localhost:38020 --out .\shard1\replica1
mongodump --host localhost:38021 --out .\shard1\replica2
mongodump --host localhost:38022 --out .\shard1\replica3
mongodump --host localhost:48020 --out .\shard2\replica1
mongodump --host localhost:48021 --out .\shard2\replica2
mongodump --host localhost:48022 --out .\shard2\replica3

The --out parameter defines the directory where


MongoDB will place the dumps. Now you can start the
balancer by issuing the sh.startBalancer() command
and resume normal operations. So we're done with our
backup operation.
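If you later need to restore one of these dumps, the corresponding mongorestore call would look roughly like the following (a sketch that assumes the dump directories and ports used above; adjust them for your environment):

mongorestore --host localhost:38020 .\shard1\replica1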
If you want to explore a bit more about backups
and restores in MongoDB, you can check MongoDB
documentation and the article at http://www.thegeekstuff.com/2013/09/mongodump-mongorestore/ which will
give you some good insights into Mongodump and
Mongorestore commands.
By: Vinayak Pandey




The author is an experienced database developer, with


exposure to various database and data warehousing tools
and techniques, including Oracle, Teradata, Informatica
PowerCenter and MongoDB.

CodeSport
Sandya Mannarswamy

This month's column continues the discussion of natural language processing.

For the past few months, we have been
discussing information retrieval and natural
language processing (NLP), as well as the
algorithms associated with them. In this month's
column, let's continue our discussion on NLP while
also covering an important NLP application called
Named Entity Recognition (NER). As mentioned
earlier, given a large number of text documents, NLP
techniques are employed to extract information from
the documents. One of the most common sources
of textual information is newspaper articles. Let us
consider a simple example wherein we are given all
the newspaper articles that appeared in the last one
year. The task that is assigned to us is related to the
world of business. We are asked to find out all the
mergers and acquisitions of businesses. We need to
extract information on which companies bought over
other firms as well as the companies that merged
with each other. Our first rudimentary steps towards
getting this information will perhaps be to look
for keyword-based searches that used terms such
as 'merger' or 'buys'. Once we find the sentences
containing those keywords, we could then perhaps
look for the names of the companies, if any occur in
those sentences. Such a task requires us to identify
all company names present in the document.
For a person reading the newspaper article,
such a task seems simple and straightforward. Let
us first try to list down the ways in which a human
being would try to identify the company names that
could be present in a text document. We need to use
heuristics such as: (a) Company names typically
would begin with capital letters; (b) They can contain
words such as Corporation or Ltd; (c) They can
be represented by letters of the alphabet separated
by full stops, such as I.B.M. We could also use
contextual clues such as Xs stock price went up
to infer that X is a business or company. Now, the
question we are left with is whether it is possible

to convert what constitutes our intuitive knowledge


about how to look for a company's name in a text
document into rules that can be automatically
checked by a program. This is the task that is faced
by NLP applications which try to do Named Entity
Recognition (NER). The point to note is that while
the simple heuristics we use to identify names of
companies does work well in many cases, it is also
quite possible that it misses out extracting names
of companies in certain other cases. For instance,
consider the possibility of the company's name
being represented as IBM instead of I.B.M, or as
International Business Machines. The rule-based
system could potentially miss out recognising it.
Similarly, consider a sentence like, 'Indian Oil
and Natural Gas Company decided that...' In this
case, it is difficult to figure out whether there are
two independent entities, namely, Indian Oil and
Natural Gas Company being referred to in the
sentence or if it is a single entity whose name is
Indian Oil and Natural Gas Company. It requires
considerable knowledge about the business world
to resolve the ambiguity. We could perhaps consult
the World Wide Web or Wikipedia to clear our
doubts. The use of such sources of knowledge is
quite common in Named Entity Recognition (NER)
systems. Now let us look a bit deeper into NER
systems and their uses.

Types of entities

What are the types of entities that are of interest to


a NER system? Named entities are by definition,
proper nouns, i.e., nouns that refer to a particular
person, place, organisation, thing, date or time, such
as Sandya, Star Wars, Pride and Prejudice, Cubbon
Park, March, Friday, Wipro Ltd, Boy Scouts, and the
Statue of Liberty. Note that a named entity can span
more than one word, as in the case of Cubbon Park.
Each of these entities are assigned different tags such


as Person, Company, Location, Month, Day, Book, etc. If


the above example is tagged with entities, it will be tagged as
<Person> Sandya </Person>, <Movie>Star Wars</Movie>,
<Book> Pride and Prejudice </Book>, <Location> Cubbon
Park </Location> , etc.
It is not only important that the NER system recognises a
phrase correctly as an entity but also that it labels it with the
right entity type. Consider the sentence, Washington Jr went
to school in England, but for graduate studies, he moved to
the United States and studied at Washington. This sentence
contains two references to the noun Washington, one as a
person: Washington Jr and another as a location: Washington,
United States. While it may appear that if an NER system has a
list of all proper nouns, it can correctly extract all entities, in reality,
this is not true. Consider the two sentences, 'Jobs are hard to
find' and 'Jobs said that the employment rate is picking up'.
Even if the NER system has an exhaustive list of proper nouns, it
needs to figure out that the word Jobs appearing in the first
sentence does not refer to an entity, whereas the reference Jobs
in the second sentence is an entity.
Given our discussion so far, it is clear to us that NER
systems can be built in a number of ways, though no single
method can be considered to be superior to others and a
combination of techniques is needed. We saw that rule-based NER systems tend to be incomplete and have the
disadvantage of requiring manual extension quite frequently.
Rule-based systems use typical pattern matching techniques
to identify the entities. On the other hand, it is possible
to extract features associated with named entities and use
them to train classifiers that can tag entities, using machine
learning techniques. Machine learning approaches for
identifying entities can be based on: (a) supervised learning
techniques; (b) semi-supervised learning techniques; and (c)
unsupervised learning techniques.
The third kind of NER systems can be based on gazetteers,
wherein a lexicon or gazette for names is constructed and
made available to the NER system which then tags the text,
identifying entities in the text based on the lexicon entries.
Once a gazetteer is available, all that the NER needs to do is
to have an efficient lookup in the gazetteer for each phrase
it identifies in the text, and tag it based on the information
it finds in the gazette. A gazette can also help to embed
external world information, which can help in name entity
resolution. But first, the gazette needs to be built for it to be
available to the NER system. Building a gazette can consume
considerable manual effort. One of the alternatives is to build
the lexicon or gazetteer itself through automatic means, which
brings us back to the problem of recognising named entities
automatically from various document sources. Typically,
external world sources such as Wikipedia or Twitter can be
used as the information sources from which the gazette can
be built. Sometimes a combination of approaches can be used
with a lexicon, in conjunction with a rules-based or machine
learning approach.

While rule-based NER systems and gazetteer


approaches work well for a domain-specific NER, machine
learning approaches generally perform well when applied
across multiple domains. Many of the machine learning
based approaches use supervised learning techniques, by
which a large corpus of text is annotated manually with
named entities and the goal is to use the annotated data to
train the learner. These systems use statistical models and
some form of feature identification to make predictions
about named entities in unlabelled text, based on what
they have learnt from the annotated text. Typically,
supervised learning systems study the features of positive
and negative examples, which have been tagged as named
entities in the hand-annotated training set. They use that
information to either come up with statistical models,
which can predict whether a newly encountered phrase is
a named entity or not. If it is a named entity, supervised
learning systems predict its type as well. In the next
column, we will continue our discussion on how hidden
Markov models and maximum entropy models can be used
to construct learner systems.

My must-read book for this month

This month's book suggestion comes from one of our readers,


Jayshankar, and his recommendation is very appropriate for
this month's column. He recommends an excellent resource
for text mining: a book called Taming Text by Ingersoll,
Morton and Farris. The book describes different algorithms
for text search, text clustering and classification. There is also
a detailed chapter on Named Entity Recognition, which will
be useful supplementary reading for this month's column.
Thank you, Jay, for sharing this book link.
If you have a favourite programming book or article that
you think is a must-read for every programmer, please do
send me a note with the books name, and a short write-up on
why you think it is useful, so I can mention it in the column.
This would help many readers who want to improve their
software skills.
If you have any favourite programming questions or
software topics that you would like to discuss on this forum,
please send them to me, along with your solutions and
feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet
again next month, happy programming!

By: Sandya Mannarswamy


The author is an expert in systems software and is currently working
with Hewlett Packard India Ltd. Her interests include compilers,
multi-core and storage systems. If you are preparing for systems
software interviews, you may find it useful to visit Sandya's LinkedIn
group Computer Science Interview Training India at
http://www.linkedin.com/groups?home=&gid=2339182

Exploring Software

Anil Seth

Guest Column

Big Data on a Desktop: A Virtual Machine in an OpenStack Cloud
OpenStack is a worldwide collaboration between developers and
cloud computing technologists aimed at developing the cloud
computing platform for public and private clouds. Lets install it
on our desktop.

Installing OpenStack using Packstack is very simple.


After a test installation in a virtual machine, you will find
that the basic operations for creating and using virtual
machines are now quite simple when using a Web interface.

The environment

It is important to understand the virtual environment. While


everything is running on a desktop, the setup consists of
multiple logical networks interconnected via virtual routers
and switches. You need to make sure that the routes are
defined properly because otherwise, you will not be able to
access the virtual machines you create.
On the desktop, the virt-manager creates a NAT-based
network by default. NAT assures that if your desktop can
access the Internet, so can the virtual machine. The Internet
access had been used when the OpenStack distribution was
installed in the virtual machine.
The Packstack installation process creates a virtual
public network for use by the various networks created
within the cloud environment. The virtual machine
on which OpenStack is installed is the gateway to the
physical network.
Virtual Network on the Desktop (virbr0 interface):
192.168.122.0/24
IP address of eth0 interface on OpenStack VM: 192.168.122.54
Public Virtual Network created by packstack on OpenStack VM:
172.24.4.224/28
IP address of the br-ex interface OpenStack VM: 172.24.4.225

Testing the environment

In the OpenStack VM console, verify the network addresses.


In my case, I had to explicitly give an ip to the br-ex
interface, as follows:
# ifconfig
# ip addr add 172.24.4.225/28 dev br-ex
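To confirm that the address has actually been applied to the interface (a quick check, not part of the original steps), you can run:

# ip addr show br-ex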

On the desktop, add a route to the public virtual network


on OpenStack VM:

# route add -net 172.24.4.224 netmask 255.255.255.240 gw


192.168.122.54

Now, browse http://192.168.122.54/dashboard and


create a new project and a user associated with the project.
1. Sign in as the admin.
2. Under the Identity panel, create a user (youser) and
a project (Bigdata). Sign out and sign in as youser to
create and test a cloud VM.
3. Create a private network for the project under
Project/Network/Networks:
Create the private network 192.168.10.0/24 with
the gateway 192.168.10.254
Create a router and set a gateway to the public
network. Add an interface to the private network
and ip address 192.168.10.254.
4. To be able to sign in using ssh, under the Project/
Compute/Access & Security, in the Security
Groups tab, add the following rules to the default
security group:
Allow ssh access: Custom TCP Rule for allowing
traffic on Port 22.
Allow icmp access: Custom ICMP Rule with
Type and Code value -1.
5. For password-less signing into the VM, under the
Project/Compute/Access & Security, in the Key Pairs
tab the following:
Select the Import Key Pair option and give it a
name, e.g., desktop user login.
In your desktop terminal window, use ssh-keygen
to create a public/private key pair in case you
don't already have one.
Copy the contents of ~/.ssh/id_rsa.pub from your
desktop account and paste them in the public key.
6. Allocate a public IP for accessing the VM under
Project/Compute/Access & Security in the Floating
Ips tab, and allocate IP to the project. You may get
a value like 172.24.4.229
7. Now launch the instance under Project/Compute/
Instance:

Guest Column Exploring Software

Give it a name, e.g., test, and choose the m1.tiny flavour.


Select the boot source as 'Boot from image'
with the image name 'cirros', a very small
image included in the installation.
Once it is launched, associate the floating
ip obtained above with this instance.
Now, you are ready to log in to the VM
created in your local cloud. In a terminal
window, type:

ssh cirros@172.24.4.229

Figure 1: Simplified network diagram

You should be signed into the virtual machine
without needing a password.
You can experiment with importing the Fedora VM
image you used for the OpenStack VM and launching it
in the cloud. Whether you succeed or not will depend on
the resources available in the OpenStack VM.

Installing only the needed OpenStack services

You will have observed that OpenStack comes with a
very wide range of services, some of which are not likely
to be very useful for your experiments on the desktop,
e.g., the additional networks and router created in the
tests above. Here is a part of the dialogue for installing
the required services on the desktop:

[root@amd ~]# packstack
Welcome to Installer setup utility
Enter the path to your ssh Public key to install on servers:
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
Should Packstack install MySQL DB [y|n] [y] : y
Should Packstack install OpenStack Image Service (Glance) [y|n] [y] : y
Should Packstack install OpenStack Block Storage (Cinder) service [y|n] [y] : n
Should Packstack install OpenStack Compute (Nova) service [y|n] [y] : y
Should Packstack install OpenStack Networking (Neutron) service [y|n] [y] : n
Should Packstack install OpenStack Dashboard (Horizon) [y|n] [y] : y
Should Packstack install OpenStack Object Storage (Swift) [y|n] [y] : n
Should Packstack install OpenStack Metering (Ceilometer) [y|n] [y] : n
Should Packstack install OpenStack Orchestration (Heat) [y|n] [n] : n
Should Packstack install OpenStack client tools [y|n] [y] : y

The answers to the other questions will depend on the
network interface and the IP address of your desktop, but
there is no ambiguity here. You should answer with the
interface 'lo' for CONFIG_NOVA_COMPUTE_PRIVIF and
CONFIG_NOVA_NETWORK_PRIVIF. You don't need an
extra physical interface as the compute services are running
on the same server.

Now, you are ready to test your OpenStack
installation on the desktop. You may want to create a
project and add a user to the project. Under Project/
Compute/Access & Security, you will need to add
firewall rules and key pairs, as above.
However, you will not need to create any additional
private network or a router.
Import a basic cloud image, e.g., from http://fedoraproject.org/get-fedora#clouds
under Project/Compute/Images.
You may want to create an additional flavour for a
virtual machine. The m1.tiny flavour has 512MB of RAM
and 4GB of disk and is too small for running Hadoop. The
m1.small flavour has 2GB of RAM and 20GB of disk,
which will restrict the number of virtual machines you
can run for testing Hadoop. Hence, you may create a mini
flavour with 1GB of RAM and 10GB of disk. This will
need to be done as the admin user.
Now, you can create an instance of the basic cloud
image. The default user is fedora and your setup is ready
for exploration of Hadoop data.

By: Dr Anil Seth


The author has earned the right to do what interests him.
You can find him online at http://sethanil.com, http://sethanil.
blogspot.com, and reach him via email at anil@sethanil.com


Developers

Let's Try

MariaDB

The MySQL Fork


that Google has Adopted
MariaDB is a community developed fork of MySQL, which
has overtaken MySQL. That many leading corporations in
the cyber environment, including Google, have migrated
to MariaDB speaks for its importance as a player in the
database firmament.

MariaDB is a high performance, open source


database that helps the world's busiest websites
deliver more content, faster. It has been created
by the developers of MySQL with the help of the FOSS
community and is a fork of MySQL. It offers various features
and enhancements like alternate storage engines, server
optimisations and patches.
The lead developer of MariaDB is Michael Monty
Widenius, who is also the founder of MySQL and Monty
Program AB.
No single person or company nurtures MariaDB/MySQL
development. The guardian of the MariaDB community,
the MariaDB Foundation, drives it. It states that it has the
trademark of the MariaDB server and owns mariadb.org, which
ensures that the official MariaDB development tree is always
open to the developer community. The MariaDB Foundation
assures the community that all the patches, as well as MySQL
source code, are merged into MariaDB. The Foundation also
provides a lot of documentation. MariaDB is a registered
trademark of SkySQL Corporation and is used by the MariaDB
Foundation with permission. It is a good choice for database
professionals looking for the best and most robust SQL server.


History

In 2008, Sun Microsystems bought MySQL for US$ 1


billion. But the original developer, Monty Widenius, was
quite disappointed with the way things were run at Sun and
founded his own new company and his own fork of MySQL
- MariaDB. It is named after Monty's younger daughter,
Maria. Later, when Oracle announced the acquisition of
Sun, most of the MySQL developers jumped to its forks:
MariaDB and Drizzle.
MariaDB version numbers follow MySQL numbers
till 5.5. Thus, all the features in MySQL are available in
MariaDB. After MariaDB 5.5, its developers started a new
branch numbered MariaDB 10.0, which is the development
version of MariaDB. This was done to make it clear that
MariaDB 10.0 will not import all the features from MySQL
5.6. Also, at times, some of these features do not seem to be
solid enough for MariaDB's standards. Since new specific
features have been developed in MariaDB, the team decided
to go for a major version number. The currently used
version, MariaDB 10.0, is built on the MariaDB 5.5 series
and has back ported features from MySQL 5.6 along with
entirely new developments.

Why MariaDB is better than MySQL

When comparing MariaDB and MySQL, we are comparing


different development cultures, features and performance.
The patches developed by MariaDB focus on bug fixing
and performance. By supporting the features of MySQL,
MariaDB implements more improvements and delivers
better performance without restrictions on compatibility with
MySQL. It also provides more storage engines than MySQL.
What makes MariaDB different from MySQL is better testing,
fewer bugs and fewer warnings. The goal of MariaDB is to be
a drop-in replacement for MySQL, with better developments.
Navicat is a strong and powerful MariaDB administration
and development tool. It is graphic database management and
development software produced by PremiumSoft CyberTech
Ltd. It provides a native environment for MariaDB database
management and supports the extra features like new storage
engines, microsecond and virtual columns.
It is easy to convert from MySQL to MariaDB, as we
need not convert any data and all our old connectors to other
languages work unchanged. As of now MariaDB is capable of
handling data in terabytes, but more needs to be done for it to
handle data in petabytes.

Features

Here is a list of features that MariaDB provides:


Since it has been released under the GPL version 2, it is free.
It is completely open source.
Open contributions and suggestions are encouraged.
MariaDB is one of the fastest databases available.
Its syntax is pretty simple, flexible and easy to manage.
It can be easily imported or exported from CSV and XML.
It is useful for both small as well as large databases,
containing billions of records and terabytes of data in
hundreds of thousands of tables.
MariaDB includes pre-installed storage engines like Aria,
XtraDB, PBXT, FederatedX and SphinxSE.
The use of the Aria storage engine makes complex
queries faster. Aria is usually faster since it caches row
data in memory and normally doesn't have to write the
temporary rows to disk.
Some storage engines and plugins are pre-installed in
MariaDB.
It has a very strong community.

Installing MariaDB

Now let's look at how MariaDB is installed.
Step 1: First, make sure that the required packages
are installed along with the apt-get key for the MariaDB
repository, by using the following commands:

$ sudo apt-get install software-properties-common
$ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db

Now, add the apt-get repository as per your Ubuntu
version.
For Ubuntu 13.10
$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/
mariadb/repo/5.5/ubuntu saucy main'

For Ubuntu 13.04


$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/
mariadb/repo/5.5/ubuntu raring main'

For Ubuntu 12.10


$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/
mariadb/repo/5.5/ubuntu quantal main'

For Ubuntu 12.04 LTS


$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/
mariadb/repo/5.5/ubuntu precise main'

Step 2: Install MariaDB using the following commands:


$ sudo apt-get update
$ sudo apt-get install mariadb-server

Provide the root account password as shown in Figure 1.


Step 3: Log in to MariaDB using the following
command, after installation:
mysql -u root -p

Figure 1: Configuring MariaDB


Figure 2: Logging into MariaDB



Figure 3: A sample table created

Creating a database in MariaDB

When entering the account administrator password set up


during installation, you will be given a MariaDB prompt.
Create a database on students by using the following
command:

CREATE DATABASE students;

Switch to the new database using the following command (this
is to make sure that you are currently working on this database):

USE students;

Now that the database has been created, create a table:

CREATE TABLE details(student_id int(5) NOT NULL AUTO_INCREMENT,
name varchar(20) DEFAULT NULL,
age int(3) DEFAULT NULL,
marks int(5) DEFAULT NULL,
PRIMARY KEY(student_id)
);

To see what we have done, use the following command:

show columns in details;

Each column in the table creation command is separated


by a comma and is in the following format:
Column_Name Data_Type[(size_of_data)] [NULL or NOT NULL]
[DEFAULT default_value]
[AUTO_INCREMENT]

These columns can be defined as:


Column Name: Describes the attribute being assigned.
Data Type: Specifies the type of data in the column.
Null: Defines whether null is a valid value for that field; it can be null or not null.
Default Value: Sets the initial value of all newly created
records that do not specify a value.
auto_increment: MySQL will handle the sequential
numbering of any column marked with this option, internally,
in order to provide a unique value for each record.
Ultimately, before closing the table definition,
we need to use the primary key by typing PRIMARY
KEY(column name). It guarantees that this column will
serve as a unique field.

Inserting data into a MariaDB table

To insert data into a MariaDB table, use the following commands:

INSERT INTO details(name,age,marks) VALUES ("anu",15,450);

INSERT INTO details(name,age,marks) VALUES ("Bob",15,400);

The output will be as shown in Figure 4.

Figure 4: Inserting data into a table

We need not add values in student_id. It is automatically
incremented. All other values are given in quotes.
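To read the rows back and confirm that student_id has been auto-incremented, a simple query (not shown in the article) is enough:

SELECT * FROM details;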

Deleting a table

To delete a table, type the following command:


DROP TABLE table_name;

Once the table is deleted, the data inside it cannot be


recovered.
We can view the current table using the show tables
command, which gives all the tables inside the database:
SHOW tables;

After deleting the table, use the following commands:


DROP TABLE details;
Query OK, 0 rows affected (0.02 sec)
SHOW tables;

The output will be:


Empty set (0.00 sec)

Google waves goodbye to MySQL

Google has now switched to MariaDB and dumped MySQL.


For the Web community, Google's big move might be 'a
paradigm shift in the DBMS ecosystem', said a Google engineer.
Major Linux distributions, like Red Hat and SUSE, and well-known websites such as Wikipedia, have also switched from
MySQL to MariaDB. This is a great blow to MySQL.
Figure 5: Tables in the database

Google has migrated applications that were previously
running on MySQL on to MariaDB without changing the


application code. There are five Google technicians working part-time on MariaDB patches and bug fixes, and Google continues
to maintain its internal branch of MySQL to have complete
control over the improvement. Google running thousands of
MariaDB servers can only be good news for those who feel more
comfortable with a non-Oracle future for MySQL.
Though multinational corporations like Google have
switched to MariaDB, it does have a few shortcomings.
MariaDB's performance is slightly better in multi-core
machines, but one suspects that MySQL could be tweaked
to match the performance. All it requires is for Oracle to
improve MySQL by adding some new features that are not


present in MariaDB, yet. And then it will be difficult to switch


back to the previous database.
MariaDB has the advantage of being bigger in terms of
the number of users, than its forks and clones. MySQL took a
lot of time and effort before emerging as the choice of many
companies. So, it is a little hard to introduce MariaDB in the
commercial field. Being a new open source standard, we can
only hope that MariaDB will overtake other databases in a
short span of time.

References
[1] http://en.wikipedia.org/wiki/MariaDB
[2] https://mariadb.org/
[3] http://tecadmin.net/install-mariadb-5-5-in-ubuntu/#
[4] https://www.digitalocean.com/community/tutorials/how-to-create-a-table-in-mysql-and-mariadb-on-an-ubuntu-cloud-server
[5] http://en.wikibooks.org/wiki/MariaDB/Introduction

By: Amrutha S.
The author is currently studying for a bachelor's degree in Computer
Science and Engineering at Amrita University in Kerala, India. She is
an open source enthusiast and also an active member of the Amrita
FOSS club. She can be contacted at amruthasangeeth@gmail.com.

Please share your feedback/ thoughts/


views via email at osfyedit@efy.in


Developers

Let's Try

Haskell: The Purely Functional


Programming Language
Haskell, an open source programming language, is the outcome of 20 years of research.
Named after the logician, Haskell Curry, it has all the advantages of functional programming
and an intuitive syntax based on mathematical notation. This second article in the series on
Haskell explores a few functions.

Consider the function sumInt to compute the sum of two


integers. It is defined as:

sumInt :: Int -> Int -> Int


sumInt x y = x + y

The first line is the type signature in which the function name,
arguments and return types are separated using a double colon (::).
The arguments and the return types are separated by the symbol
(->). Thus, the above type signature tells us that the sum function
takes two arguments of type Int and returns an Int. Note that the
function names must always begin with the letters of the alphabet
in lower case. The names are usually written in CamelCase style.
You can create a Sum.hs Haskell source file using your
favourite text editor, and load the file on to the Glasgow
Haskell Compiler interpreter (GHCi) using the following code:
$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/ :? for help

Loading package ghc-prim ... linking ... done.


Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :l Sum.hs
[1 of 1] Compiling Main             ( Sum.hs, interpreted )
Ok, modules loaded: Main.

*Main> :t sumInt
sumInt :: Int -> Int -> Int
*Main> sumInt 2 3
5

If we check the type of sumInt with arguments, we get the


following output:
*Main> :t sumInt 2 3
sumInt 2 3 :: Int

*Main> :t sumInt 2
sumInt 2 :: Int -> Int

The value of sumInt 2 3 is an Int as defined in the type signature.
We can also partially apply the function sumInt with one argument
and its return type will be Int -> Int. In other words, sumInt 2 takes
an integer and will return an integer with 2 added to it.
Every function in Haskell takes only one argument. So, we
can think of the sumInt function as one that takes an argument and
returns a function that takes another argument and computes their
sum. This return function can be defined as a sumTwoInt function
that adds a 2 to an Int using the sumInt function, as shown below:

sumTwoInt :: Int -> Int
sumTwoInt x = sumInt 2 x

The = sign in Haskell signifies a definition and not
a variable assignment as seen in imperative programming
languages. We can thus omit the 'x' on either side and the
code becomes even more concise:

sumTwoInt :: Int -> Int
sumTwoInt = sumInt 2

By loading Sum.hs again in the GHCi prompt, we get the
following:

*Main> :l Sum.hs
[1 of 1] Compiling Main             ( Sum.hs, interpreted )
Ok, modules loaded: Main.

*Main> :t sumTwoInt
sumTwoInt :: Int -> Int

*Main> sumTwoInt 3
5

Let us look at some examples of functions that operate on
lists. Consider list 'a', which is defined as [1, 2, 3, 4, 5] (a list
of integers) in the Sum.hs file (re-load the file in GHCi before
trying the list functions).

a :: [Int]
a = [1, 2, 3, 4, 5]

The head function returns the first element of a list:

*Main> head a
1

*Main> :t head
head :: [a] -> a

The tail function returns everything except the first element
from a list:

*Main> tail a
[2,3,4,5]

*Main> :t tail
tail :: [a] -> [a]

The last function returns the last element of a list:

*Main> last a
5

*Main> :t last
last :: [a] -> a

The init function returns everything except the last
element of a list:

*Main> init a
[1,2,3,4]

*Main> :t init
init :: [a] -> [a]

The length function returns the length of a list:

*Main> length a
5

*Main> :t length
length :: [a] -> Int

The take function picks the first 'n' elements from a list:

*Main> take 3 a
[1,2,3]

*Main> :t take
take :: Int -> [a] -> [a]

The drop function drops 'n' elements from the beginning
of a list, and returns the rest:

*Main> drop 3 a
[4,5]

*Main> :t drop
drop :: Int -> [a] -> [a]

The zip function takes two lists and creates a new list of
tuples with the respective pairs from each list. For example:

*Main> let b = ["one", "two", "three", "four", "five"]
*Main> zip a b
[(1,"one"),(2,"two"),(3,"three"),(4,"four"),(5,"five")]

*Main> :t zip
zip :: [a] -> [b] -> [(a, b)]

The let expression defines the value of 'b' in the GHCi
prompt. You can also define it in a way that's similar to the
definition of the list 'a' in the source file.
The lines function takes input text and splits it at new lines:

*Main> let sentence = "First\nSecond\nThird\nFourth\nFifth"
*Main> lines sentence
["First","Second","Third","Fourth","Fifth"]

*Main> :t lines
lines :: String -> [String]

The words function takes input text and splits it on
white space:

*Main> words "hello world"
["hello","world"]

*Main> :t words
words :: String -> [String]

The map function takes a function and a list, and applies
the function to every element in the list:

*Main> map sumTwoInt a
[3,4,5,6,7]

*Main> :t map
map :: (a -> b) -> [a] -> [b]

The first argument to map is a function that is enclosed
within parenthesis in the type signature (a -> b). This function
takes an input of type 'a' and returns an element of type 'b'.
Thus, when operating over a list [a], it returns a list of type [b].
Recursion provides a means of looping in functional
programming languages. The factorial of a number, for example,
can be computed in Haskell, using the following code:

factorial :: Int -> Int
factorial 0 = 1
factorial n = n * factorial (n-1)

The definition of factorial with different input use cases is
called pattern matching on the function. On running the above
example with GHCi, you get the following output:

*Main> factorial 0
1
*Main> factorial 1
1
*Main> factorial 2
2
*Main> factorial 3
6
*Main> factorial 4
24
*Main> factorial 5
120

Functions operating on lists can also be called recursively.
To compute the sum of a list of integers, you can write the
sumList function as:

sumList :: [Int] -> Int
sumList [] = 0
sumList (x:xs) = x + sumList xs

The notation (x:xs) represents a list, where 'x' is the first
element in the list and 'xs' is the rest of the list. On running
sumList with GHCi, you get the following:

*Main> sumList []
0
*Main> sumList [1,2,3]
6

Sometimes, you will need a temporary function for a
computation, which you will not need to use elsewhere.
You can then write an anonymous function. A function to
increment an input value can be defined as:

*Main> (\x -> x + 1) 3
4

These are called Lambda functions, and the '\' represents
the notation for the symbol Lambda. Another example is
given below:

*Main> map (\x -> x * x) [1, 2, 3, 4, 5]
[1,4,9,16,25]

It is a good practice to write the type signature of
the function first when composing programs, and then
write the body of the function. Haskell is a functional
programming language and understanding the use of
functions is very important.
By: Shakthi Kannan
The author is a free software enthusiast and blogs
at shakthimaan.com

How To

Developers

Qt-WebKit, a major engine that can render Web pages and execute JavaScript code, is the answer to the developer's prayer. Let's take a look at a few examples that will aid developers in making better use of this engine.

This article is for Qt developers. It is assumed that the intended audience is aware of the famous Signals and Slots mechanism of Qt. Creating an HTML page is very quick compared to any other way of designing a GUI. An HTML page is nothing but a fancy page that doesn't have any logic built into it. With the amalgamation of JavaScript, however, the HTML page builds in some intelligence. As everything cannot be collated in JavaScript, we need a back-end for it. Qt provides a way to mingle HTML+JavaScript with C++. Thus, you can call C++ methods from JavaScript and vice versa. This is possible by using the Qt-WebKit framework. The applications developed in Qt are not just limited to various desktop platforms; they can even be ported to several mobile platforms. Thus, you can design apps that fit into the Windows, iOS and Android worlds seamlessly.

What is Qt-WebKit?

In simple words, Qt-WebKit is the Web-browsing module of


Qt. It can be used to display live content from the Internet as
well as local HTML files.

Programming paradigm

In Qt-WebKit, the central classes are QWebView, QWebPage and QWebFrame: a QWebView widget holds a QWebPage, which in turn exposes a QWebFrame. The frame is what matters while adding the desired class object to the JavaScript window object. In short, this class object will be visible to JavaScript once it is added to the JavaScript window object. However, JavaScript can invoke only the public Q_INVOKABLE methods. The Q_INVOKABLE restriction was introduced to make the applications being developed using Qt even more secure.

Q_INVOKABLE

This is a macro similar to a slot, except that the method carries a return type. We therefore prefix Q_INVOKABLE to the methods that can be called from JavaScript; the advantage is that, unlike a plain slot, such a method can return a value to the script.
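To make the distinction concrete, here is a minimal, hypothetical sketch (not taken from the article's project) of a class that exposes one method via Q_INVOKABLE and another as a slot; the class and method names are made up purely for illustration:

#include <QObject>
#include <QDebug>

// Hypothetical example class; names are illustrative only.
class Calculator : public QObject {
    Q_OBJECT
public:
    // Marked Q_INVOKABLE so that a script can call it and use the result.
    Q_INVOKABLE int multiply(int a, int b) { return a * b; }

public slots:
    // A conventional slot, typically used for actions rather than results.
    void logResult(int value) { qDebug() << "result:" << value; }
};

Once an instance of such a class has been added to the JavaScript window object, a script could call something like myobject.multiply(6, 7) and receive the product back.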

Developing a sample HTML page with JavaScript intelligence

Here is a sample form in HTML-JavaScript that will allow us to multiply any two given numbers. However, the logic of multiplication should reside in the C++ method only.

<html>
<head>
<script>
function Multiply()
{
/** MultOfNumbers a C++ Invokable method **/
var result = myoperations.MultOfNumbers(document.forms["DEMO_FORM"]["Multiplicant_A"].value, document.forms["DEMO_FORM"]["Multiplicant_B"].value);
document.getElementById("answer").value = result;
}
</script>
</head>
<body>
<form name="DEMO_FORM">
Multiplicant A: <input type="number" name="Multiplicant_A"><br>
Multiplicant B: <input type="number" name="Multiplicant_B"><br>
Result: <input type="number" id="answer" name="Multiplicant_C"><br>
<input type="button" value="Multiplication_compute_on_C++" onclick="Multiply()">
</form>
</body>
</html>

Please note that in the above HTML code, myoperations is a class object, and MultOfNumbers is its public Q_INVOKABLE class method.

How to call the C++ methods from the Web page using the Qt-WebKit framework

Let's say I have the following class that has the Q_INVOKABLE method, MultOfNumbers.

class MyJavaScriptOperations : public QObject {
	Q_OBJECT
public:
	Q_INVOKABLE qint32 MultOfNumbers(int a, int b) {
		qDebug() << a * b;
		return (a*b);
	}
};

This class object should be added to the JavaScript


window object by the following API:
addToJavaScriptWindowObject("name of the object", new (class
that can be accessed))

Here is the entire program:

#include <QtGui/QApplication>
#include <QApplication>
#include <QDebug>
#include <QWebFrame>
#include <QWebPage>
#include <QWebView>

class MyJavaScriptOperations : public QObject {
	Q_OBJECT
public:
	Q_INVOKABLE qint32 MultOfNumbers(int a, int b) {
		qDebug() << a * b;
		return (a*b);
	}
};

int main(int argc, char *argv[])
{
	QApplication a(argc, argv);
	QWebView *view = new QWebView();
	view->resize(400, 500);
	view->page()->mainFrame()->addToJavaScriptWindowObject("myoperations", new MyJavaScriptOperations);
	view->load(QUrl("./index.html"));
	view->show();
	return a.exec();
}
#include "main.moc"

The output is given in Figure 1.

How to install a callback from C++ code to the Web page using the Qt-WebKit framework

We have already seen the call to C++ methods by


JavaScript. Now, how about a callback from C++ to
JavaScript? Yes, it is possible with the Qt-WebKit. There
are two ways to do so. However, for the sake of neatness in
design, let's discuss only the Signals and Slots mechanism
for the JavaScript callback.

Installing Signals and Slots for the JavaScript function

Here are the steps that need to be taken for the callback to be
installed:
a) Add a JavaScript window object to the
javaScriptWindowObjectCleared slot.
b) Declare a signal in the class.
c) Emit the signal.
d) In JavaScript, connect the signal to the JavaScript
function slot.
Here is the syntax to help you connect:
<JavaScript_window_object>.<signal_name>.connect(<JavaScript
function name>);

Note: you can make a callback to JavaScript only after the Web page is loaded. This can be ensured by connecting a slot to the loadFinished() signal in the C++ application.
Let's look at a real example now. This will fire a callback once the Web page is loaded. The callback should be addressed by the JavaScript function, which will show up an alert window.

Figure 1: QT DEMO output

<html>
<head>
<script>
function alert_click()
{
alert("you clicked");
}
function JavaScript_function()
{
alert("Hello");
}
myoperations.alert_script_signal.connect(JavaScript_
function);
</script>
</head>
<body>
<form name="myform">
<input type="button" value="Hit me" onclick="alert_click()">
</form>
</body>
</html>

Here is the main file:


#include <QtGui/QApplication>
#include <QApplication>
#include <QDebug>
#include <QWebFrame>
#include <QWebPage>
#include <QWebView>
class MyJavaScriptOperations : public QObject {
Q_OBJECT
public:
QWebView *view;
MyJavaScriptOperations();
signals:
void alert_script_signal();
public slots:
void JS_ADDED();
void loadFinished(bool);
};
void MyJavaScriptOperations::JS_ADDED()
{
	qDebug()<<__PRETTY_FUNCTION__;
	view->page()->mainFrame()->addToJavaScriptWindowObject("myoperations", this);
}

void MyJavaScriptOperations::loadFinished(bool oper)
{
	qDebug()<<__PRETTY_FUNCTION__<< oper;
	emit alert_script_signal();
}

MyJavaScriptOperations::MyJavaScriptOperations()
{
	qDebug()<<__PRETTY_FUNCTION__;
	view = new QWebView();
	view->resize(400, 500);
	connect(view->page()->mainFrame(), SIGNAL(javaScriptWindowObjectCleared()), this, SLOT(JS_ADDED()));
	connect(view, SIGNAL(loadFinished(bool)), this, SLOT(loadFinished(bool)));
	view->load(QUrl("./index.html"));
	view->show();
}

int main(int argc, char *argv[])
{
	QApplication a(argc, argv);
	MyJavaScriptOperations *jvs = new MyJavaScriptOperations;
	return a.exec();
}
#include "main.moc"

Figure 2: QT DEMO callback output (a JavaScript alert showing "Hello" once the page has loaded)

The output is shown in Figure 2.


Qt is a rich framework for C++ developers. It not only
provides these amazing features, but also has some interesting
attributes like in-built SQLite, D-Bus and various containers. It's
easy to develop an entire GUI application with it. You can even
port an existing HTML page to Qt. This makes Qt a wonderful
choice to develop a cross-platform application quickly. It is
now getting popular in the mobile world too.
By: Shreyas Joshi
The author is a technology enthusiast and software developer
at Pace Micro Technology. You can connect with him at
shreyasjoshi15@gmail.com.


Developers

Overview

This article focuses on Yocto - a complete embedded Linux development environment that offers tools, metadata and documentation.

The Yocto Project helps developers and companies get their projects off the ground. It is an open source collaboration project that provides templates, tools and methods to create custom Linux-based systems for embedded products, regardless of the hardware architecture.
While building Linux-based embedded products, it is
important to have full control over the software running on the
embedded device. This doesnt happen when you are using a
normal Linux OS for your device. The software should have
full access as per the hardware requirements. Thats where
the Yocto Project comes in handy. It helps you create custom
Linux-based systems for any hardware architecture and makes
the device easier to use and faster than expected.
The Yocto Project was founded in 2010 as a solution
for embedded Linux development by many open source
vendors, hardware manufacturers and electronic companies.
The project aims at helping developers build their own
Linux distributions, specific to their own environments. The
project provides developers with interoperable tools, methods
and processes that help in the development of Linux-based
embedded systems. The central goal of the project is to enable
the user to reuse and customise tools and working code. It
encourages interaction with embedded projects and has been
a steady contributor to the OpenEmbedded core, BitBake, the
Linux kernel development process and several other projects.
It not only deals with building Linux-based embedded
systems, but also the tool chain for cross compilation and
software development kits (SDK) so that users can choose the
package manager format they intend to use.

The goals of the Yocto Project

Although the main aim is to help developers of customised


Linux systems supporting various hardware architectures, it
also has a key role in several other fields where it supports and encourages the Linux community. Its goals are:


To develop custom Linux-based embedded systems
regardless of the architecture.
To provide interoperability between tools and working code,
which will reduce the money and time spent on the project.
To develop licence-aware build systems that make it
possible to include or remove software components
based on specific licence groups and the corresponding
restriction levels.
To provide a place for open source projects that help in
the development of Linux-based embedded systems and
customisable Linux platforms.
To focus on creating single build systems that address the
needs of all users that other software components can later
be tethered to.
To ensure that the tools developed are architecturally
independent.
To provide a better graphical user interface to the build
system, which eases access.
To provide resources and information, catering to both
new and experienced users.
To provide core system component recipes provided by
the OpenEmbedded project.
To further educate the community about the benefits
of this standardisation and collaboration in the Linux
community and in the industry.

The Yocto Project community

The community shares many common traits with a typical open


source organisation. Anyone who is interested can contribute to
the development of the project. The Yocto Project is developed
and governed as a collaborative effort by an open community
of professionals, volunteers and contributors.
The project's governance is mainly divided into two wings - administrative and technical. The administrative board includes executive leaders from organisations that participate on the advisory board, and also several sub-groups that perform non-technical services including community management, financial management, infrastructure management, advocacy and outreach. The technical board includes several sub-groups, which oversee tasks that range from submitting patches to the project architect to deciding on who is the final authority on the project.

Figure 1: YP community

The building of the project requires the coordinated efforts of many people, who work in several roles. These roles are listed below.
Architect: One who holds the final authority and provides overall leadership to the project's development.
Sub-system maintainers: The project is further divided into several sub-projects and maintainers are assigned to these sub-projects.
Layer maintainers: Those who ensure the components' excellence and functionality.
Technical leaders: Those who work within the sub-projects, doing the same thing as the layer maintainers.
Upstream projects: Many Yocto Project components, such as the Linux kernel, are dependent on the upstream projects.
Advisory board: The advisory board gives direction to the project and helps in setting the requirements for the project.

Layers

The build system is composed of different layers, which are the containers for the building blocks used to construct the system. The layers are grouped according to functionality, which makes the management of extensions and customisations easier.

Figure 2: YP layers - developer-specific layer, commercial layer, hardware-specific BSP, UI-specific layer, Yocto-specific layer metadata and OpenEmbedded core metadata

Latest updates
Yocto Project 1.6: The latest release of the Yocto Project (YP), 1.6 'Daisy', has a great set of features to help developers build with a very good user interface. Toaster, a new UI to the YP build system, enables detailed examination of the build output, with great control over the view of the data. The Linux kernel update and the GCC update to 4.8.2 add further functionality to the latest release. It also supports building Python 3. The new client for reporting errors to a central Web interface helps developers to focus on problem management.
AMD and LG Electronics partner with Yocto: The introduction of new standardised features to ensure quick access to the latest Board Support Packages (BSPs) for the AMD 64-bit x86 architecture has made AMD a new gold member of the YP community. LG Electronics, joining as a new member organisation to help support and guide the project, is of great importance.
Embedded Linux Conference 2014: The Yocto Project is one of the silver sponsors of this premier vendor-neutral technical conference for companies and developers that use Linux in embedded products. Sponsored by the Linux Foundation, the conference has a key role in encouraging newcomers to the world of open source and embedded products.
Toaster prototype: Toaster, a part of the latest YP 1.6 release, is a Web interface for BitBake, the build system. Toaster collects all kinds of data about the build process, so that it is easy to search and query this data in a specific way.
References
[1] https://www.yoctoproject.org/
[2] https://wiki.yoctoproject.org/wiki/Main_Page

By: Vishnu N K
The author, an open source enthusiast, is in the midst of his B. Tech
degree in Computer Science at Amrita Vishwa Vidyapeetham and
contributes to Mediawiki. Contact him at mails2vichu@gmail.com


Developers

How To

What is Linux Kernel Porting?

One of the aspects of hacking a Linux kernel is to port it. While this might sound difficult, it wont be
once you read this article. The author explains porting techniques in a simplified manner.

With the evolution of embedded systems, porting has become extremely important. Whenever you have new hardware at hand, the first and most critical thing to be done is porting. For hobbyists, what has made this even more interesting is the open source nature of the Linux kernel. So, let's dive into porting and understand the nitty-gritty of it.
Porting means making something work in an environment it is not designed for. Embedded Linux porting means making Linux work on an embedded platform for which it was not designed. Porting is a broader term, and when I say embedded Linux porting, it not only involves Linux kernel porting, but also porting a first stage bootloader, a second stage bootloader and, last but not the least, the applications. Porting differs from development. Usually, porting doesn't involve as much coding as development. This means that there is already some code available and it only needs to be fine-tuned to the desired target. There may be a need to change a few lines here and there, before it is up and running. But the key thing to know is what needs to be changed, and where.

What Linux kernel porting involves

Linux kernel porting involves two things at a higher level:


architecture porting and board porting. Architecture, in
Linux terminology, refers to CPU. So, architecture porting
means adapting the Linux kernel to the target CPU, which
may be ARM, Power PC, MIPS, and so on. In addition
to this, SOC porting can also be considered as part of
architecture porting. As far as the Linux kernel is concerned,
most of the times, you don't need to port it for architecture
as this would already be supported in Linux. However, you
still need to port Linux for the board and this is where the
major focus lies. Architecture porting entails porting of
initial start-up code, interrupt service routines, dispatcher
routine, timer routine, memory management, and so on.


Whereas board porting involves writing custom drivers and


initialisation code for devices specific to the board.

Building a Linux kernel for the target platform

Kernel building is a two-step process: first, the kernel


needs to be configured for the target platform. There are
many ways to configure the kernel, based on the preferred
configuration interface. Given below are some of the
common methods.
To run the text-based configuration, execute the following command:

$ make config

Figure 1: Plain text-based kernel configuration

This will show the configuration options on the console


as seen in Figure 1. It is a little cumbersome to configure the
kernel with this, as it prompts every configuration option, in
order, and doesn't allow the reversion of changes.
To run the menu-driven configuration, execute the
following command:
$ make menuconfig

This will show the menu options for configuring the


kernel, as seen in Figure 2. This requires the ncurses library to
be installed on the system. This is the most popular interface
used to configure the kernel.
To run the window-based configuration, execute the
following command:
$ make xconfig

This allows configuration using the mouse. It requires QT


to be installed on the system.
Figure 2: Menu-driven kernel configuration

For details on other options, execute the following command in the kernel top directory:

$ make help

Once the kernel is configured, the next step is to build the kernel with the make command. A few commonly used commands are given below:

$ make vmlinux - Builds the bare kernel
$ make modules - Builds the modules
$ make modules_prepare - Sets up the kernel for building modules external to the kernel

If the above commands are executed as stated, the kernel will be configured and compiled for the host system, which is generally the x86 platform. But, for porting, the intention is to configure and build the kernel for the target platform, which in turn requires configuration of the makefile. Two things that need to be changed in the makefile are given below:

ARCH=<architecture>
CROSS_COMPILE=<toolchain prefix>

The first line defines the architecture the kernel needs to be built for, and the second line defines the cross compilation toolchain prefix. So, if the architecture is ARM and the toolchain is, say, from CodeSourcery, then it would be:

ARCH=arm
CROSS_COMPILE=arm-none-linux-gnueabi-

Optionally, make can be invoked as shown below:

$ make ARCH=arm menuconfig - For configuring the kernel
$ make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- - For compiling the kernel

The kernel image generated after the compilation is usually vmlinux, which is in ELF format. This image can't be used directly with embedded system bootloaders such as u-boot, so convert it into a format suitable for a second stage bootloader. Conversion is a two-step process and is done with the following commands:

arm-none-linux-gnueabi-objcopy -O binary vmlinux vmlinux.bin
mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e 0x80008000 -n linux-3.2.8 -d vmlinux.bin uImage

-A ==> set architecture
-O ==> set operating system
-T ==> set image type
-C ==> set compression type
-a ==> set load address (hex)
-e ==> set entry point (hex)
-n ==> set image name
-d ==> use image data from file

The first command converts the ELF into a raw binary. This binary is then passed to mkimage, which is a utility provided by u-boot to generate the u-boot specific kernel image. The generated kernel image is named uImage.

The Linux kernel build system

One of the beautiful things about the Linux kernel is that it


is highly configurable and the same code base can be used
for a variety of applications, ranging from high end servers
to tiny embedded devices. And the infrastructure, which
plays an important role in achieving this in an efficient
manner, is the kernel build system, also known as kbuild.
The kernel build system has two main components - the makefile and Kconfig.
Makefile: Every sub-directory has its own makefile, which is used to compile the files in that directory and generate the object code out of them. The top level makefile percolates recursively into its sub-directories and invokes the corresponding makefile to build the modules and, finally, the Linux kernel image. The makefile builds only the files for which the configuration option is enabled through the configuration tool.
Kconfig: As with the makefile, every sub-directory has a Kconfig file. Kconfig is a configuration language, and the Kconfig files located inside each sub-directory are the programs written in it. A Kconfig file contains the entries which are read by configuration targets such as make menuconfig to show a menu-like structure.
So we have covered the makefile and Kconfig, and at present they seem to be pretty much disconnected. For kbuild to work properly, there has to be some link between Kconfig and the makefile. That link is nothing but the configuration symbols, which generally have the prefix CONFIG_. These symbols are generated by a configuration target such as menuconfig, based on entries defined in the Kconfig file. And based on what the user has selected in the menu, these symbols can have the values 'y', 'n' or 'm'.
Now, as most of us are aware, Linux supports hot plugging of drivers, which means we can dynamically add and remove drivers from the running kernel. The drivers which can be added/removed dynamically are known as modules. However, drivers that are part of the kernel image can't be removed dynamically. So, there are two ways to have a driver in the kernel: one is to build it as a part of the kernel, and the other is to build it separately as a module for hot-plugging. The value 'y' for a CONFIG_ symbol means the corresponding driver will be part of the kernel image; the value 'm' means it will be built as a module; and the value 'n' means it won't be built at all.
Where are these values stored? There is a file called .config in the top level directory, which holds these values. So, the .config file is the output of a configuration target such as menuconfig.
Where are these symbols used? In the makefile, as shown below:

obj-$(CONFIG_MY_DRIVER) += my_driver.o

So, if CONFIG_MY_DRIVER is set to the value 'y', the driver my_driver.c will be built as part of the kernel image; if set to the value 'm', it will be built as a module with the extension .ko. And, for the value 'n', it won't be compiled at all.
As you now know a little more about kbuild, let's consider adding a simple character driver to the kernel tree. The first step is to write a driver and place it at the correct location. I have a file named my_driver.c. Since it's a character driver, I will prefer adding it at the drivers/char/ sub-directory. So copy this at the location drivers/char in the kernel.
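The article does not show the contents of my_driver.c, so here is a minimal, self-contained sketch of what such a demo character driver might look like; everything in it is an assumption made purely for illustration:

/* my_driver.c - a hypothetical minimal character driver for the kbuild demo */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>

static dev_t my_dev;
static struct cdev my_cdev;

static ssize_t my_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
	static const char msg[] = "hello from my_driver\n";

	/* simple_read_from_buffer() handles the offset and copy_to_user() for us */
	return simple_read_from_buffer(buf, len, off, msg, sizeof(msg));
}

static const struct file_operations my_fops = {
	.owner = THIS_MODULE,
	.read  = my_read,
};

static int __init my_driver_init(void)
{
	int ret;

	/* ask the kernel for one dynamically allocated char device number */
	ret = alloc_chrdev_region(&my_dev, 0, 1, "my_driver");
	if (ret)
		return ret;

	cdev_init(&my_cdev, &my_fops);
	ret = cdev_add(&my_cdev, my_dev, 1);
	if (ret)
		unregister_chrdev_region(my_dev, 1);
	return ret;
}

static void __exit my_driver_exit(void)
{
	cdev_del(&my_cdev);
	unregister_chrdev_region(my_dev, 1);
}

module_init(my_driver_init);
module_exit(my_driver_exit);
MODULE_LICENSE("GPL");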
The next step is to add a configuration entry in the drivers/char/Kconfig file. Each entry can be of type bool, tristate, int, string or hex. bool means that the configuration symbol can have the values 'y' or 'n', while tristate means it can have the values 'y', 'm' or 'n'. And 'int', 'string' and 'hex' mean that the value can be an integer, string or hexadecimal, respectively. Given below is the segment of code added in drivers/char/Kconfig:
config MY_DRIVER
tristate "Demo for My Driver"
default m
help
Adding this small driver to kernel for
demonstrating the kbuild

The first line defines the configuration symbol. The second specifies the type for the symbol and the text which will be shown as the menu entry. The third specifies the default value for this symbol, and the last two lines are for the help message. Another thing that you will generally find in a Kconfig file is 'depends on'. This is very useful when you want a particular feature to be selectable only if its dependency is selected. For example, if we are writing a driver for an i2c EEPROM, then the menu option for the driver should appear only if the i2c driver is selected. This can be achieved with the 'depends on' entry.
After saving the above changes in Kconfig, execute the
following command:
$ make menuconfig

Now, navigate to Device Drivers->Character devices and


you will see an entry for My Driver.
By default, it is supposed to be built as a module. Once
you are done with configuration, exit the menu and save the
configuration. This saves the configuration in .config file. Now,


open the .config file, and there will be an entry as shown below:
CONFIG_MY_DRIVER=m

Here, the driver is configured to be built as a module.


Also, one thing worth noting is that the symbol 'MY_DRIVER' in Kconfig is prefixed with CONFIG_.
Now, just adding an entry in the Kconfig file and
configuration alone won't compile the driver. There has to
be the corresponding change in makefile as well. So, add the
following line to makefile:
obj-$(CONFIG_MY_DRIVER) += my_driver.o

Figure 3: Menu option for My Driver

After the kernel is compiled, the module my_driver.ko will be placed at drivers/char/. This module can be inserted in the kernel with the following command:

$ insmod my_driver.ko

Aren't these configuration symbols needed in the C code?


Yes, or else how will the conditional compilation be taken
care of? How are these symbols included in C code? During
the kernel compilation, the Kconfig and .config files are read,
and are used to generate the C header file named autoconf.h.
This is placed at include/generated and contains the #defines
for the configuration symbols. These symbols are used by the
C code to conditionally compile the required code.
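As an illustration, a module could guard optional code on such a generated symbol as shown in the hypothetical sketch below; CONFIG_MY_DRIVER_EXTRA is an assumed symbol used only for this example, and its #define (or absence) would come from include/generated/autoconf.h, which the kernel build system includes automatically:

/* cond_demo.c - hypothetical sketch of conditional compilation on a CONFIG_ symbol */
#include <linux/module.h>
#include <linux/printk.h>

static int __init cond_demo_init(void)
{
#ifdef CONFIG_MY_DRIVER_EXTRA
	pr_info("cond_demo: extra feature compiled in\n");
#else
	pr_info("cond_demo: extra feature left out\n");
#endif
	return 0;
}

static void __exit cond_demo_exit(void)
{
}

module_init(cond_demo_init);
module_exit(cond_demo_exit);
MODULE_LICENSE("GPL");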
Now, let's suppose I have configured the kernel and that it works fine with this configuration. If I make some new changes in the kernel configuration, the earlier ones will be overwritten. To avoid this from happening, we can save the .config file in the arch/arm/configs directory with a name like my_config, for instance. And next time, we can execute the following command to configure the kernel with the older options:
$ make my_config_defconfig

Linux Support Packages (LSP)/Board Support


Packages (BSP)

One of the most important and probably the most


challenging thing in porting is the development of Board
Support Packages (BSP). BSP development is a onetime effort during the product development lifecycle and,
obviously, the most critical. As we have discussed, porting
involves architecture porting and board porting. Board
porting involves board-specific initialisation code that
includes initialisation of the various interfaces such as
memory, peripherals such as serial, and i2c, which in turn,
involves the driver porting.
There are two categories of drivers. One is the standard
device driver such as the i2c driver and block driver
located at the standard directory location. Another is the
custom interface or device driver, which includes the board-specific custom code and needs to be specifically brought in with the kernel. This collection of board-specific initialisation and custom code is referred to as a Board Support Package or, in Linux terminology, an LSP. In simple words, whatever software code you require (which is specific to the target platform) to boot up the target with the operating system can be called the LSP.

Components of LSP

As the name itself suggests, BSP is dependent on the things that


are specific to the target board. So, it consists of the code which
is specific to that particular board, and it applies only to that
board. The usual list includes Interrupt Request Numbers (IRQ),
which are dependent on how the various devices are connected
on the board. Also, some boards have an audio codec and you
need to have a driver for that codec. Likewise, there would be
switch interfaces, a matrix keypad, external eeprom, and so on.

LSP placement

LSP is placed under a specific <arch> folder of the kernel's


arch folder. For example, architecture-specific code for ARM
resides in the arch/arm directory. This is about the code, but
you also need the headers which are placed under arch/arm/
include/asm. However, board-specific code is placed at arch/
arm/mach-<board_name> and corresponding headers are
placed at arch/arm/mach-<soc architecture>/include. For
example, LSP for Beagle Board is placed at arch/arm/machomap2/board-omap3beagle.c and corresponding headers
are placed at arch/arm/mach-omap2/include/mach/. This is
shown in figure 4.

Machine ID

Every board in the kernel is identified by a machine ID.


This helps the kernel maintainers to manage the boards
based on ARM architecture in the source tree. This ID is
passed to the kernel from the second stage bootloader such
as u-boot. For the kernel to boot properly, there has to be a
match between the kernel and the second stage boot loader.
This information is available in arch/arm/tools/mach-types
and is used to generate the file linux/include/generated/

mach-types.h. The macros defined by mach-types.h are used by the rest of the kernel code. For example, the machine ID for the Beagle Board is 1546, and this is the number which the second stage bootloader passes to the kernel. For registering a new board for ARM, provide the board details at http://www.arm.linux.org.uk/developer/machines/?action=new.
Note: The porting concepts described here are specific to boards based on the ARM platform and may differ for other architectures.

Figure 4: LSP placement in kernel source

MACHINE_START macro

One of the steps involved in kernel porting is to define the initialisation functions for the various interfaces on the board, such as serial, Ethernet, GPIO, etc. Once these functions are defined, they need to be linked with the kernel so that it can invoke them during boot-up. For this, the kernel provides the macro MACHINE_START. Typically, a MACHINE_START macro looks like what's shown below:

MACHINE_START(MY_BOARD, "My Board for Demo")
	.atag_offset	= 0x100,
	.init_early	= my_board_early,
	.init_irq	= my_board_irq,
	.init_machine	= my_board_init,
MACHINE_END

Let's understand this macro. MY_BOARD is the machine ID defined in arch/arm/tools/mach-types. The second parameter to the macro is a string describing the board. The next few lines specify the various initialisation functions, which the kernel has to invoke during boot-up. These include the following:
.atag_offset: Defines the offset in RAM where the boot parameters will be placed. These parameters are passed from the second stage bootloader, such as u-boot.
my_board_early: Calls the SOC initialisation functions. This function will be defined by the SOC vendor, if the kernel is ported for it.
my_board_irq: Initialisation related to interrupts is done over here.
my_board_init: All the board-specific initialisation is done here. This function should be defined during the board porting. It includes things such as setting up the pin multiplexing, initialisation of the serial console, initialisation of RAM, and initialisation of Ethernet, USB and so on.
MACHINE_END ends the macro. This macro is defined in arch/arm/include/asm/mach/arch.h.

How to begin with porting

The most common and recommended way to begin with


porting is to start with some reference board, which closely
resembles yours. So, if you are porting for a board based on
OMAP3 architecture, take Beagle Board as a reference. Also,
for porting, you should understand the system very well.
Depending on the features available on your board, configure
the kernel accordingly. To start with, just enable the minimal
set of features required to boot the kernel. This may include
but not be limited to initialisation of RAM, Gpio subsystems,
serial interfaces, and filesystems drivers for mounting the
root filesystem. Once the kernel boots up with the minimal
configuration, start adding the new features, as required.
So, lets summarise the steps involved in porting:
1. The first step is to register the machine with the kernel
maintainer and get the unique ID for your board. While this
is not necessary to begin with porting, it needs to be done
eventually, if patches are to be submitted to the mainline.
Place the machine ID in arch/arm/tools/mach-types.
2. Create the board-specific file 'board-<board_name>.c' at arch/arm/mach-<soc> and define the MACHINE_START for the new board. For example, the board-specific file for the Panda Board resides at arch/arm/mach-omap2/board-omap4panda.c. (A skeletal sketch of such a board file is given after this list.)
3. Update the Kconfig file at arch/arm/mach-<soc> to add an entry for the new board, as shown below:

config MACH_MY_BOARD
	bool "My Board for Demo"
	depends on ARCH_OMAP3
	default y

4. Update the corresponding makefile, so that the board-specific file gets compiled. This is shown below:

obj-$(CONFIG_MACH_MY_BOARD) += board-my_board.o

5. Create a default configuration file for the new board. To


begin with, take any .config file as a starting point and
customise it for the new board. Place the working .config
file at arch/arm/configs/my_board_defconfig.
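As a companion to step 2 above, here is a skeletal, hypothetical sketch of such a board file. The machine name and the contents of the init function are assumptions for illustration only; a real board file initialises whatever is actually wired on the board:

/* arch/arm/mach-<soc>/board-my_board.c - skeletal sketch, for illustration */
#include <linux/kernel.h>
#include <linux/init.h>
#include <asm/mach/arch.h>
#include <asm/mach-types.h>

static void __init my_board_init(void)
{
	/* board-specific set-up would go here: pin multiplexing, serial
	 * console, Ethernet, USB and registration of platform devices */
}

MACHINE_START(MY_BOARD, "My Board for Demo")
	.atag_offset	= 0x100,
	.init_machine	= my_board_init,
MACHINE_END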
By: Pradeep Tewani
The author works at Intel, Bangalore. He shares his learnings on
Linux & embedded systems through his weekend workshops.
Learn more about his experiments at http://sysplay.in. He can
be reached at pradeep_zenith@hotmail.com.

How To

Developers

Writing an RTC Driver Based on the SPI Bus
Most computers have one or more hardware clocks that display the current time. These are
Real Time Clocks or RTCs. Battery backup is provided for one of these clocks so that time
is tracked even when the computer is switched off. RTCs can be used for alarms and other
functions like switching computers on or off. This article explains how to write Linux device
drivers for SPI-based RTC chips.

We will focus on the RTC DS1347 to explain how device drivers are written for RTC chips. You can refer to the RTC DS1347 datasheet for a complete understanding of this driver.

Linux SPI subsystem

In Linux, the SPI subsystem is designed in such a way that


the system running Linux is always an SPI master. The SPI
subsystem has three parts, which are listed below.
The SPI master driver: For each SPI bus in the system,
there will be an SPI master driver in the kernel, which has
routines to read and write on that SPI bus. Each SPI master
driver in the kernel is identified by an SPI bus number. For the
purposes of this article, let's assume that the SPI master driver
is already present in the system.
The SPI slave device: This interface provides a way of
describing the SPI slave device connected to the system. In

this case, the slave device is RTC DS1347. Describing the


SPI slave device is an independent task that can be done as
discussed in the section on Registering RTC DS1347 as an
SPI slave device.
The SPI protocol driver: This interface provides methods
to read and write the SPI slave device (RTC DS1347).
Writing an SPI protocol driver is described in the section on
Registering the DS1347 SPI protocol driver.
The steps for writing an RTC DS1347 driver based on the
SPI bus are as follows:
1. Register RTC DS1347 as an SPI slave device with the SPI
master driver, based on the SPI bus number to which the
SPI slave device is connected.
2. Register the RTC DS1347 SPI protocol driver.
3. Once the probe routine of the protocol driver is called,
register the RTC DS1347 protocol driver to read and write
routines to the Linux RTC subsystem.

After all this, the Linux RTC subsystem can use the registered protocol driver's read and write routines to read and write the RTC.


RTC DS1347 hardware overview

RTC DS1347 is a low current SPI compatible real time clock.


The information it provides includes the seconds, minutes and
hours of the day, as well as what day, date, month and year
it is. This information can either be read from or be written
to the RTC DS1347 using the SPI interface. RTC DS1347
acts as a slave SPI device and the microcontroller connected
to it acts as the SPI master device. The CS pin of the RTC is
asserted low by the microcontroller to initiate the transfer,
and de-asserted high to terminate the transfer. The DIN pin
of the RTC transfers data from the microcontroller to the
RTC and the DOUT pin transfers data from the RTC to the
microcontroller. The SCLK pin is used to provide a clock by
the microcontroller to synchronise the transfer between the
microcontroller and the RTC.
The RTC DS1347 works in the SPI Mode 3. Any transfer
between the microcontroller and the RTC requires the
microcontroller to first send the command/address byte to the
RTC. Data is then transferred out of the DOUT pin if it is a
read operation; else, data is sent by the microcontroller to the
DIN pin of the RTC if it is a write operation. If the MSB bit
of the address is one, then it is a read operation; and if it is
zero, then it is a write operation. All the clock information is
mapped to SPI addresses as shown in Table 1.
Read address   Write address   RTC register   Range
0x81           0x01            Seconds        0 - 59
0x83           0x03            Minutes        0 - 59
0x85           0x05            Hours          0 - 23
0x87           0x07            Date           1 - 31
0x89           0x09            Month          1 - 12
0x8B           0x0B            Day            1 - 7
0x8D           0x0D            Year           0 - 99
0x8F           0x0F            Control        00H - 81H
0x97           0x17            Status         03H - E7H
0xBF           0x3F            Clock burst    -

Table 1: RTC DS1347 SPI register map

When the clock burst command is given to the RTC, the


latter will give out the values of seconds, minutes, hours, the
date, month, day and year, one by one, and continuously. The
clock burst command is used in the driver to read the RTC.

The Linux RTC subsystem

The Linux RTC subsystem is the interface through which


Linux manages the time of the system. The following
procedure is what the driver goes through to register the RTC
with the Linux RTC subsystem.

Figure 1: RTC DS1347 driver block diagram - the SPI master driver, the SPI slave device (described by struct spi_board_info/struct spi_device), the SPI protocol driver (struct spi_driver) and the RTC subsystem, sitting between the CPU and the RTC DS1347 on the SPI bus

1. Specify the driver's RTC read and write routines through the function pointer interface provided by the RTC subsystem.
2. Register with the RTC subsystem using the devm_rtc_device_register API.
The RTC subsystem requires that the driver fill the struct rtc_class_ops structure, which has the following function pointers.
read_time: This routine is called by the kernel when the
user application executes a system call to read the RTC time.
set_time: This routine is called by the kernel when the
user application executes a system call to set the RTC time.
There are other function pointers in the structure, but the above
two are the minimum an interface requires for an RTC driver.
Whenever the kernel wants to perform any operation on
the RTC, it calls the above function pointer, which will call
the drivers RTC routines.
After the above RTC operations structure has been filled,
it has to be registered with the Linux RTC subsystem. This is
done through the kernel API:
devm_rtc_device_register(struct device *dev, const char
*name, const struct rtc_class_ops *ops, struct module
*owner);

The first parameter is the device object, the second is the


name of the RTC driver, the third is the driver RTC operations
structure that has been discussed above, and the last is the
owner, which is THIS_MODULE macro.

Registering the RTC DS1347 as an SPI slave device

The Linux kernel requires a description of all devices


connected to it. Each subsystem in the Linux driver model
has a way of describing the devices related to that subsystem.
Similarly, the SPI subsystem represents devices based on the
SPI bus as a struct spi_device. This structure defines the SPI
slave device connected to the processor running the Linux
kernel. The device structure is written in the board file in the

Linux kernel, which is a part of the board support package.
The board file resides in arch/ directory in Linux (for
example, the board file for the Beagle board is in arch/arm/
mach-omap2/board-omap3beagle.c). The struct spi_device
is not directly written but a different structure called struct
spi_board_info is filled and registered, which creates the
struct spi_device in the kernel automatically and links it to the
SPI master driver that contains the routines to read and write
on the SPI bus. The struct spi_board_info for RTC DS1347
can be written in the board file as follows:
struct spi_board_info spi_board_info[] __initdata = {
	{
		.modalias = "ds1347",
		.bus_num = 1,
		.chip_select = 1,
	},
};

Modalias is the name of the driver; it is used to identify the driver that is related to this SPI slave device - in which case the driver will have the same name. Bus_num is the number
of the SPI bus. It is used to identify the SPI master driver that
controls the bus to which this SPI slave device is connected.
Chip_select is used in case the SPI bus has multiple chip
select pins; then this number is used to identify the chip select
pin to which this SPI slave device is connected.
The next step is to register the struct spi_board_info
with the Linux kernel. In the board file initialisation code, the
structure is registered as follows:
spi_register_board_info(spi_board_info, 1);

The first parameter is the array of the struct spi_board_


info and the second parameter is the number of elements in
the array. In the case of RTC DS1347, it is one. This API
will check if the bus number specified in the spi_board_info
structure matches with any of the master driver bus numbers
that are registered with the Linux kernel. If any of them do
match, it will create the struct spi_device and initialise the
fields of the spi_device structure as follows:
master = spi_master driver which has the same bus number as
bus_num in the spi_board_info structure.
chip_select = chip_select of spi_board_info
modalias = modalias of spi_board_info

After initialising the above fields, the structure is


registered with the Linux SPI subsystem. The following are
the fields of the struct spi_device, which will be initialised
by the SPI protocol driver as needed by the driver, and if not
needed, will be left empty.
max_speed_hz = the maximum rate of transfer to the bus.
bits_per_word = the number of bits per transfer.
mode = the mode in which the SPI device works.


In the above specified manner, any SPI slave device is


registered with the Linux kernel and the struct spi_device is
created and linked to the Linux SPI subsystem to describe
the device. This spi_device struct will be passed as a
parameter to the SPI protocol driver probe routine when the
SPI protocol driver is loaded.
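Pulling the pieces above together, a board file fragment for this registration might look like the following sketch; the bus number and chip select are assumptions that depend on how the RTC is actually wired on the board:

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/spi/spi.h>

/* Sketch only: describes the DS1347 to the kernel from the board file. */
static struct spi_board_info my_spi_devices[] __initdata = {
	{
		.modalias    = "ds1347",  /* must match the protocol driver's name */
		.bus_num     = 1,         /* SPI master the RTC hangs off (assumed) */
		.chip_select = 1,         /* chip select line for the RTC (assumed) */
	},
};

static int __init my_board_register_spi(void)
{
	/* ARRAY_SIZE avoids hard-coding the element count used in the text */
	return spi_register_board_info(my_spi_devices, ARRAY_SIZE(my_spi_devices));
}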

Registering the RTC DS1347 SPI protocol driver


The driver is the medium through which the kernel interacts
with the device connected to the system. In case of the SPI
device, it is called the SPI protocol driver. The first step in
writing an SPI protocol driver is to fill the struct spi_driver
structure. For RTC DS1347, the structure is filled as follows:
static struct spi_driver ds1347_driver = {

.driver = {
.name = "ds1347",
.owner = THIS_MODULE,
},

.probe = ds1347_probe,
};

The name field has the name of the driver (this should be
the same as in the modalias field of the struct spi_board_info).
Owner is the module that owns the driver, THIS_MODULE
is the macro that refers to the current module in which the
driver is written (the owner field is used for reference
counting of the module owning the driver). The probe is the
most important routine that is called when the device and the
driver are both registered with the kernel.
The next step is to register the driver with the kernel.
This is done by a macro module_spi_driver (struct spi_
driver *). In the case of RTC DS1347, the registration is
done as follows:
module_spi_driver(ds1347_driver);

The probe routine of the driver is called if any of the


following cases are satisfied:
1. If the device is already registered with the kernel and then
the driver is registered with the kernel.
2. If the driver is registered first, then when the device is
registered with the kernel, the probe routine is called.
In the probe routine, we need to read and write on the SPI
bus, for which certain common steps need to be followed.
These steps are written in a generic routine, which is called
throughout to avoid duplicating steps. The generic routines
are written as follows:
1. First, the address of the SPI slave device is written on
the SPI bus. In the case of the RTC DS1347, the address
should contain its most significant bit, reset for the write
operation (as per the DS1347 datasheet).
2. Then the data is written to the SPI bus.
Since this is a common operation, a separate routine, ds1347_write_reg, is written as follows:


static int ds1347_write_reg(struct device *dev, unsigned char address, unsigned char data)
{
	struct spi_device *spi = to_spi_device(dev);
	unsigned char buf[2];

	buf[0] = address & 0x7F;
	buf[1] = data;

	return spi_write_then_read(spi, buf, 2, NULL, 0);
}

The parameters to the routine are the address to which the data has to be written and the data which has to be written to the device. spi_write_then_read is the routine that has the following parameters:
struct spi_device: The slave device to be written.
tx_buf: Transmission buffer. This can be NULL if reception only.
tx_no_bytes: The number of bytes in the tx buffer.
rx_buf: Receive buffer. This can be NULL if transmission only.
rx_no_bytes: The number of bytes in the receive buffer.
In the case of the RTC DS1347 write routine, only two bytes are to be written: one is the address and the other is the data for that address.
The reading of the SPI bus is done as follows:
1. First, the address of the SPI slave device is written on the SPI bus. In the case of RTC DS1347, the address should contain its most significant bit set for the read operation (as per the DS1347 datasheet).
2. Then the data is read from the SPI bus.
Since this is a common operation, a separate routine, ds1347_read_reg, is written as follows:

static int ds1347_read_reg(struct device *dev, unsigned char address, unsigned char *data)
{
	struct spi_device *spi = to_spi_device(dev);

	*data = address | 0x80;

	return spi_write_then_read(spi, data, 1, data, 1);
}

In the case of RTC DS1347, only one byte, which is the address, is written on the SPI bus and one byte is to be read from the SPI device.

RTC DS1347 driver probe routine

When the probe routine is called, it passes an spi_device struct, which was created when spi_board_info was registered. The first thing the probe routine does is to set the SPI parameters to be used to write on the bus. One parameter is the mode in which the SPI device works; in the case of RTC DS1347, it works in Mode 3 of the SPI:

spi->mode = SPI_MODE_3;

bits_per_word is the number of bits per transfer; in the case of RTC DS1347, it is 8 bits:

spi->bits_per_word = 8;

After changing the parameters, the kernel has to be informed of the changes, which is done by calling the spi_setup routine as follows:

spi_setup(spi);

The following steps are carried out to check and configure


the RTC DS1347.
1. First, the RTC control register is read to see if the RTC is
present and if it responds to the read command.
2. Then the write protection of the RTC is disabled so that
the code is able to write on the RTC registers.
3. Then the oscillator of the RTC DS1347 is started so that
the RTC starts working.
Till this point the kernel is informed that the RTC is on an SPI
bus and it is configured. After the RTC is ready to be read and
written by the user, the read and write routines of the RTC are to be
registered with the Linux kernel RTC subsystem as follows:
rtc = devm_rtc_device_register(&spi->dev, "ds1347", &ds1347_
rtc_ops, THIS_MODULE);

The parameters are the name of the RTC driver, the


RTC operation structure that contains the read and write
operations of the RTC, and the owner of the module. After
this registration, the Linux kernel will be able to read
and write on the RTC of the system. The RTC operation
structure is filled as follows:
static const struct rtc_class_ops ds1347_rtc_ops = {

.read_time = ds1347_read_time,

.set_time = ds1347_set_time,
};
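Putting the probe steps described above together, here is a condensed, hypothetical sketch of what the probe routine might look like. The register name DS1347_CONTROL_REG and the values written are assumptions made for illustration (the real driver follows the DS1347 datasheet), and the oscillator start-up step is only indicated:

static int ds1347_probe(struct spi_device *spi)
{
	struct rtc_device *rtc;
	unsigned char data;
	int res;

	/* set the SPI parameters for this slave and inform the kernel */
	spi->mode = SPI_MODE_3;
	spi->bits_per_word = 8;
	spi_setup(spi);

	/* 1. read the control register to check that the RTC responds */
	res = ds1347_read_reg(&spi->dev, DS1347_CONTROL_REG, &data);
	if (res)
		return res;

	/* 2. disable write protection so the registers can be written */
	res = ds1347_write_reg(&spi->dev, DS1347_CONTROL_REG, 0x00);
	if (res)
		return res;

	/* 3. the oscillator would be started here, as described in the text */

	/* 4. register the read/write routines with the RTC subsystem */
	rtc = devm_rtc_device_register(&spi->dev, "ds1347",
				       &ds1347_rtc_ops, THIS_MODULE);
	if (IS_ERR(rtc))
		return PTR_ERR(rtc);

	return 0;
}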

The RTC read routine is implemented as follows.


The RTC read routine has two parameters, one is the
device object and the other is the pointer to the Linux RTC
time structure struct, rtc_time.
The rtc_time structure has the following fields, which
have to be filled by the driver:
tm_sec: seconds (0 to 59, same as RTC DS1347)
tm_min: minutes (0 to 59, same as RTC DS1347)

tm_hour: hour (0 to 23, same as RTC DS1347)
tm_mday: day of month (1 to 31, same as RTC DS1347)
tm_mon: month (0 to 11 but RTC DS1347 provides
months from 1 to 12, so the value returned by RTC needs to
have 1 subtracted from it)
tm_year: year (year since 1900; RTC DS1347 stores years
from 0 to 99, and the driver considers the RTC valid from
2000 to 2099, so the value returned from RTC is added to 100
and as a result the offset is the year from 1900)
First the clock burst command is executed on the RTC,
which gives out all the date and time registers through the SPI
interface, i.e., a total of 8 bytes:
buf[0] = DS1347_CLOCK_BURST | 0x80;
err = spi_write_then_read(spi, buf, 1, buf, 8);
if (err)
return err;

Then the read date and time is stored in the Linux date
and time structure of the RTC. The time in Linux is in binary
format so the conversion is also done:
dt->tm_sec = bcd2bin(buf[0]);
dt->tm_min = bcd2bin(buf[1]);
dt->tm_hour = bcd2bin(buf[2] & 0x3F);
dt->tm_mday = bcd2bin(buf[3]);
dt->tm_mon = bcd2bin(buf[4]) - 1;
dt->tm_wday = bcd2bin(buf[5]) - 1;
dt->tm_year = bcd2bin(buf[6]) + 100;

After storing the date and time of the RTC in the Linux
RTC date and time structure, the date and time is validated
through rtc_valid_tm API. After validation, the validation
status from the API is returnedif the date and time is valid,
then the kernel will return the date and time in the structure
to the user application; else it will return an error:

return rtc_valid_tm(dt);

The RTC write routine is implemented as follows.
First, the local buffer is filled with the clock burst write command, and the date and time passed to the driver's write routine. The clock burst command informs the RTC that the date and time will follow this command, which is to be written to the RTC. Also, the time in the RTC is in the BCD format, so the conversion is done as well:

buf[0] = DS1347_CLOCK_BURST & 0x7F;
buf[1] = bin2bcd(dt->tm_sec);
buf[2] = bin2bcd(dt->tm_min);
buf[3] = (bin2bcd(dt->tm_hour) & 0x3F);
buf[4] = bin2bcd(dt->tm_mday);
buf[5] = bin2bcd(dt->tm_mon + 1);
buf[6] = bin2bcd(dt->tm_wday + 1);

/* year in linux is from 1900 i.e. in range of 100
   in rtc it is from 00 to 99 */
dt->tm_year = dt->tm_year % 100;
buf[7] = bin2bcd(dt->tm_year);
buf[8] = bin2bcd(0x00);

After this, the data is sent to the RTC device, and the status of the write is returned to the kernel as follows:

return spi_write_then_read(spi, buf, 9, NULL, 0);

Contributing to the RTC subsystem

The RTC DS1347 is a Maxim Dallas RTC. There are various other RTCs in the Maxim catalogue that are not supported by the Linux kernel, just as with various other manufacturers of RTCs. All the RTCs that are supported by the Linux kernel are present in the drivers/rtc directory of the kernel. The following steps can be taken to write support for an RTC in the Linux kernel.
1. Pick any RTC from the manufacturer's (e.g., Maxim) catalogue which does not have support in the Linux kernel (see the drivers/rtc directory for supported RTCs).
2. Download the datasheet of the RTC and study its features.
3. Refer to rtc-ds1347.c and other RTC files in the drivers/rtc directory of the Linux kernel, and go over this article, for how to implement RTC drivers.
4. Write the support for the RTC.
5. Use git (see References below) to create a patch for the RTC driver written.
6. Submit the patch by mailing it to the Linux RTC mailing list:
a.zummo@towertech.it
rtc-linux@googlegroups.com
linux-kernel@vger.kernel.org
7. The patch will be reviewed and any changes required will be suggested; if everything is fine, the driver will be acknowledged and added to the Linux tree.

References
[1] DS1347 datasheet, datasheets.maximintegrated.com/en/ds/
DS1347.pdf
[2] DS1347 driver file https://git.kernel.org/cgit/linux/kernel/git/
torvalds/linux.git/tree/drivers/rtc/rtc-ds1347.c
[3] Writing and submitting your first Linux kernel patch video,
https://www.youtube.com/watch?v=LLBrBBImJt4
[4] Writing and submitting your first Linux kernel patch text file
and presentation, https://github.com/gregkh/kernel-tutorial

By: Raghavendra Chandra Ganiga


The author is an embedded firmware development engineer
at General Industrial Controls Pvt Ltd, Pune. His interests lie in
microcontrollers, networking firmware, RTOS development and
Linux device drivers.


Developers

Let's Try

Use GIT for Linux Kernel Development

This article is aimed at newbie developers who are planning to set up a development
environment or move their Linux kernel development environment to GIT.

GIT is a free, open source distributed version control tool. It is easy to learn and is also fast, as most of the operations are performed locally. It has a very small footprint. For a comparison of GIT with SVN (Subversion), another source version control tool, refer to http://git-scm.com/about/small-and-fast.
GIT allows multiple local copies (branches), each totally different from the other. It allows the making of clones of the entire repository, so each user has a full backup of the main repository. Figure 1 gives one among the many pictorial representations of GIT. Developers can clone the main repository, maintain their own local copies (branch and branch1) and push the code changes (branch1) to the main repository. For more information on GIT, refer to http://git-scm.com/book.
Note: GIT is under development and hence changes are
often pushed into GIT repositories. To get the latest GIT code,
use the following command:

$ git clone git://git.kernel.org/pub/scm/git/git.git

The kernel

The kernel is the lowest level program that manages communications between the software and the hardware, using IPC and system calls. It resides in the main memory (RAM) whenever an operating system is loaded.
The kernel is mainly of two types - the micro kernel and the monolithic kernel. The Linux kernel is monolithic, as is depicted clearly in Figure 2.
Based on the above diagram, the kernel can be viewed as a resource manager; the managed resource could be a process, hardware, memory or storage devices. More details about the internals of the Linux kernel can be found at http://kernelnewbies.org/LinuxVersions and https://www.kernel.org/doc/Documentation/.

Linux kernel files and modules

In Ubuntu, kernel files are stored under the /boot/ directory (run ls /boot/ from the command prompt). Inside this directory, the kernel file will look something like this:

vmlinuz-A.B.C-D

where A.B is 3.2, C is your version and D is a patch or fix.


Let's delve deeper into certain aspects depicted in Figure 3:
vmlinuz-3.2.0-29-generic: In vmlinuz, z indicates the compressed Linux kernel. With the development of virtual memory, the prefix vm was used to indicate that the kernel supports virtual memory.
Initrd.img-3.2.0-29-generic: An initial ramdisk for your kernel.
Config-3.2.0-29-generic: The config file is used to
configure the kernel. We can configure, define options and
determine which modules to load into the kernel image
while compiling.
System.map-3.2.0-29-generic: This is used for memory
management before the kernel loads.
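For instance, on the Ubuntu system used here (kernel 3.2.0-29), a trimmed listing of that directory would look roughly like the following; your version numbers will differ:

$ ls /boot/
config-3.2.0-29-generic      initrd.img-3.2.0-29-generic
System.map-3.2.0-29-generic  vmlinuz-3.2.0-29-generic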


Setting up a development environment

Let's set up the host machine with Ubuntu 14.04. Building the Linux kernel requires a few tools like GIT, make, gcc and ctags/ncurses-dev. Run the following command:

sudo apt-get install git-core gcc make libncurses5-dev exuberant-ctags

Figure 1: GIT

Kernel modules

The interesting thing about kernel modules is that they can be loaded or unloaded at runtime. These modules typically add functionality to the kernel: file systems, devices and system calls. They are located under /lib/modules with the extension .ko.


Figure 2: Linux kernel architecture

Once GIT is installed on the local machine (I am using Ubuntu), open a command prompt and issue the following commands to set up your GIT identity:

git config --global user.name "Vinay Patkar"
git config --global user.email Vinay_Patkar@dell.com

Figure 3: Locating Ubuntu Linux kernel files

Let's set up our own local repository for the Linux kernel.
Note: 1. Multiple Linux kernel repositories exist online. Here, we pull Linus Torvalds' Linux-2.6 GIT code: git clone http://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
2. In case you are behind a proxy server, set the proxy by running git config --global https.proxy https://domain\username:password@proxy:port.

Figure 4: GIT pull

Now you can see a directory named linux-2.6 in the current directory. Do a GIT pull to update your repository:

cd linux-2.6
git pull

Next, find the latest stable kernel tag by running the following code:

git tag -l | less

Then create a local branch at the chosen stable tag:

git checkout -b stable v3.9

Note: Alternatively, you can clone the latest stable tree as shown below:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable

Note: RC is the release candidate, and it is a functional but not stable build.
Once you have the latest kernel code pulled, create your own local branch using GIT. Make some changes to the code and, to commit the changes, run git commit -a.
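A minimal sketch of that flow, purely as an illustration (the branch name, edited file and commit message are placeholders):

$ git checkout -b my-changes      # create and switch to a local branch
$ vim Makefile                    # edit whatever you want to change
$ git commit -a -m "describe the change here"
$ git log --oneline -1            # confirm the commit on your branch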
Figure 5: GIT checkout

Setting up the kernel configuration

Many kernel drivers can be turned on or off, or be built as modules. The .config file in the kernel source directory determines which drivers are built. When you download the source tree, it doesn't come with a .config file. You have several options for generating a .config file. The easiest is to duplicate your current config.
There are multiple files that start with config; find the one that is associated with your kernel by running uname -a. Then run:

cp /boot/config-`uname -r`* .config
or
cp /boot/config-3.13.0-24-generic .config

make defconfig    <---- for default configuration
or
make nconfig    <---- for menu-based configuration, where we can enable or disable features

At this point, edit the Makefile as shown below:

VERSION = 3
PATCHLEVEL = 9
SUBLEVEL = 0
EXTRAVERSION = -rc9    <-- [edit this part]
NAME = Saber-toothed Squirrel

Now run:

make

Figure 6: Make

This will take some time and, if everything goes well, install the newly built kernel by running the following commands:

sudo make modules_install
sudo make install

Figure 7: Modules_install and Install

At this point, you should have your own version of the kernel, so reboot the machine, log in as the super user (root) and check uname -a. It should list your own version of the Linux kernel (something like Linux Kernel 3.9.0-rc9).

References
[1] http://linux.yyz.us/git-howto.html
[2] http://kernelnewbies.org/KernelBuild
[3] https://www.kernel.org/doc/Documentation/
[4] http://kernelnewbies.org/LinuxVersions

By: Vinay Patkar


The author works as a software development engineer at Dell
India R&D Centre, Bengaluru, and has close to two years
experience in automation and Windows Server OS. He is
interested in virtualisation and cloud computing technologies.


How To

Admin

Managing Your IT Infrastructure


Effectively with Zentyal
Zentyal (formerly eBox Platform) is a program for servers used by small and medium businesses (SMBs). It plays multiple roles - as a gateway, network infrastructure manager, unified threat manager, office server, unified communications server, or a combination of all of the above. This is the third and last article in our series on Zentyal.

In previous articles in this series, we discussed various scenarios that included DHCP, DNS and setting up a captive portal. In this article, let's discuss the HTTP proxy, traffic shaping and setting up of the Users and Computers modules.

The HTTP proxy set-up

We will start with the set-up of the HTTP proxy module of


Zentyal. This module will be used to filter out unwanted
traffic from our network. The steps for the configuration are
as follows:
1. Open the Zentyal dashboard by using the domain name
set up in the previous article or use the IP address.
2. The URL will be https://domain-name.
3. Enter the user ID and password.

4. From the dashboard, select HTTP Proxy under the


Gateway section. This will show different options
like General settings, Access rules, Filter profiles,
Categorized Lists and Bandwidth throttling.
5. Select General settings to configure some basic
parameters.
6. Under General settings, select Transparent Proxy. This
option is used to manage proxy settings without making
clients aware about the proxy server.
7. Check Ad Blocking, which will block all the
advertisements from the HTTP traffic.
8. Cache size defines the stored HTTP traffic storage area.
Mention the size in MBs.
9. Click Change and then click Save changes.
10. To filter the unwanted sites from the network, block

them using Filter profiles. Click Filter profiles under


HTTP proxy.
11. Click Add new.
12. Enter the name of the profile. In our case, we used
Spam. Click Add and save changes.
13. Click the button under Configuration.
14. To block all spam sites, let's use the Threshold option. The various options under Threshold decide how strictly the listed sites are blocked. Let's select Very strict under Threshold and click Change. Then click Save changes to save the changes permanently.
15. Select Use antivirus to block all incoming files,
which may be viruses. Click the Change and the Save
changes buttons.
16. To add a site to be blocked by proxy, click Domain and
URLs and under Domain and URL rules, click the Add
new button.
17. You will then be asked for the domain name. Enter
the domain name of the site which is to be blocked.
Decision option will instruct the proxy to allow or deny
the specified site. Then click Add and Save changes.
18. To activate the Spam profile, click Access rules under
HTTP proxy.
19. Click Add new. Define the time period and the days
when the profile is to be applied.
20. Select Any from Source dropdown menu and then
select Apply filter profile from Decision dropdown
menu. You will see a Spam profile.
21. Click Add and Save changes.
With all the above steps, you will be able to either block
or allow sites, depending on what you want your clients to
have access to. All the other settings can be experimented
with, as per your requirements.

Bandwidth throttling

The setting under HTTP proxy is used to add delay pools,


so that a big file that users wish to download does not
hamper the download speed of the other users.
To do this, follow the steps mentioned below:
1. First create the network object on which you wish to
apply the rule. Click Network and select Objects under
Network options.

2. Click Add new to add the network object.


3. Enter the name of the object, like LAN. Click Add, and
then Save changes.
4. After you have added the network object, you have to
configure members under that object. Click the icon
under Members.
5. Click Add new to add members.
6. Enter the names of the members. We will use LAN users.
7. Under IP address, select the IP address range.
8. Enter the range of your DHCP address range, since we
would like to apply it to all the users in the network.
9. Click Add and then Save changes.
10. Till now, we have added all the users of the network, on
which we wish to apply the bandwidth throttling rule.
Now we will apply the rule. To do this, click HTTP
Proxy and select Bandwidth throttling.
11. This setting will be used to set the total amount of
bandwidth that a single client can use. Click Enable per
client limit.
12. Enter the Maximum unlimited size per client, to be
set as a limit for a user under the network object. Enter
50 MB. A client can now download a 50 MB file with
maximum speed, but if the client tries to download a file
of a greater size than the specified limit, the throttling
rule will limit the speed to the maximum download rate
per client. This speed option is set in the next step.
13. Enter the maximum download rate per client (for our
example, enter 20). This means that if the download
reaches the threshold, the speed will be decreased to
20 KBps.
14. Click Add and Save changes.

Traffic shaping set-up

With bandwidth throttling, we have set the upper limit for


downloads, but to effectively manage our bandwidth we
have to use the Traffic shaping module. Follow the steps
shown below:
1. Click on Traffic shaping under the Gateway section.
2. Click on Rules. This will display two sections: rules for
internal interfaces and rules for external interfaces.
3. Follow the example rules given in Table 1-- these can be
used to shape the bandwidth on eth1.

Table 1

Based on the firewall | Service | Source | Destination | Priority | Guaranteed rate (KBps) | Limited rate (KBps)
Yes | - | Any | Any | - | 512 | -
Yes | - | Any | Any | - | 512 | -
Yes | - | Any | Any | - | 1024 | 2048
Yes | - | Any | Any | - | 1024 | 2048
Yes | - | Any | Any | - | 10 | -


Table 2

Based on the firewall | Service | Source | Destination | Priority | Guaranteed rate (KBps) | Limited rate (KBps)
Yes | - | Any | Any | - | 10 | -
No (Prioritise small packets) | - | - | - | - | 60 | 200

The rules mentioned in Table 1 give the listed protocols priority over other protocols, with a guaranteed rate.
4. The rules given in Table 2 will manage the upload speed for the protocols on eth0.
5. After adding all the rules, click on Save changes.
6. With these steps, you have set the priorities of the
protocols and applications. One last thing to be done here
is to set the upload and download rates of the server. To
do this, click Interface rates under Traffic Shaping.
7. Click Action. Change the upload and download speed
of the server, supplied by your service provider. Click
Change and then Save changes.

Setting up Users and Computers

Setting up of groups and users can be done as follows.
Group set-up: For this, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Select groups from the LDAP tree. Click on the plus sign to add groups.
Users set-up: To set up users for the domain system and captive portal, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Here you will see the LDAP tree. Select Users and click on the plus sign.
With all the information entered and passed, users can log in to the system through the captive portal.

References
[1] http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy
[2] http://en.wikipedia.org/wiki/Bandwidth_throttling
[3] http://doc.zentyal.org/en/qos.html

By: Gaurav Parashar

The author is a FOSS enthusiast, and loves to work with open source technologies like Moodle and Ubuntu. He works as an assistant dean (for IT students) at Inmantec Institutions, Ghaziabad, UP. He can be reached at gauravparashar24@gmail.com


Windows as the host OS
VMware workstation (Any guest OS)
VirtualBox (Any guest OS)
Hyper-V (Any guest OS)
Linux as the host OS
VMware workstation
Microsoft virtual PC
VMLite workstation
VirtualBox
Xen
A hypervisor or virtual machine monitor (VMM) is a piece
of computer software, firmware or hardware that creates and
runs virtual machines. A computer on which a hypervisor is
running one or more VMs is defined as a host machine. Each
VM is called a guest machine. The hypervisor presents the
guest OSs with a virtual operating platform, and manages the
execution of the guest operating systems. Multiple instances
of a variety of operating systems may share the virtualised
hardware resources.

Hypervisors of Type 1 (bare metal installation) and Type 2 (hosted installation)

When implementing and deploying a cloud service, Type 1 hypervisors are used. These are associated with the concept of bare metal installation, which means that no host operating system is needed to install the hypervisor. When using this technology, there is no risk of corrupting the host OS. These hypervisors are directly installed on the hardware without the need for any other OS, and multiple VMs are created on the hypervisor.
A Type 1 hypervisor is a type of client hypervisor that interacts directly with the hardware that is being virtualised. It is completely independent of the operating system, unlike a Type 2 hypervisor, and boots before the OS. Currently, Type 1 hypervisors are being used by all the major players in the desktop virtualisation space, including but not limited to VMware, Microsoft and Citrix.
The classical virtualisation software or Type 2 hypervisor is always installed on a host OS. If the host OS gets corrupted or crashes for any reason, the virtualisation software or Type 2 hypervisor will also crash and, obviously, all VMs and other resources will be lost. That's why bare metal (Type 1) hypervisor technology is very popular in the cloud computing world.
Type 2 (hosted) hypervisors execute within a conventional OS environment. With the hypervisor layer as a distinct second software level, guest OSs run at the third level above the hardware. A Type 2 hypervisor is a type of client hypervisor that sits on top of an OS. Unlike a Type 1 hypervisor, a Type 2 hypervisor relies heavily on the operating system. It cannot boot until the OS is already up and running, and if for any reason the OS crashes, all end users are affected. This is a big drawback of Type 2 hypervisors, as they are only as secure as the OS on which they rely. Also, since Type 2 hypervisors depend on an OS, they are not in full control of the end user's machine.

Hypervisor Type 1 products
VMware ESXi
Citrix Xen
KVM (Kernel Virtual Machine)
Hyper-V

Hypervisor Type 2 products
VMware Workstation
VirtualBox

Table 1
Hypervisors and their cloud service providers

Hypervisor | Cloud service provider
Xen | Amazon EC2, IBM SoftLayer, Fujitsu Global Cloud Platform, Linode, OrionVM
ESXi | VMware Cloud
KVM | Red Hat, HP, Dell, Rackspace
Hyper-V | Microsoft Azure

Data centres and uptime tier levels

Just as a virtual machine is mandatory for cloud computing,


the data centre is also an essential part of the technology. All
the cloud computing infrastructure is located in remote data
centres where resources like computer systems and associated
components, such as telecommunications and storage
systems, reside. Data centres typically include redundant
or backup power supplies, redundant data communications
connections, environmental controls, air conditioning, fire
suppression systems as well as security devices.
The tier level is the rating or evaluation aspect of the
data centres. Large data centres are used for industrial scale
operations that can use as much electricity as a small town.
The standards comprise a four-tiered scale, with Tier 4 being
the most robust and full-featured (Table 2).

Cloud simulations

Cloud service providers charge users depending upon the


space or service provided.
In R&D, it is not always possible to have the actual cloud
infrastructure for performing experiments. For any research
scholar, academician or scientist, it is not feasible to hire
cloud services every time and then execute their algorithms or
implementations.
For the purpose of research, development and testing,
open source libraries are available, which give the feel
of cloud services. Nowadays, in the research market,
cloud simulators are widely used by research scholars and
practitioners, without the need to pay any amount to a cloud
service provider.

Table 2

Tier Level | Requirements | Possible unavailability in a given year
1 | Single non-redundant distribution path serving the IT equipment; non-redundant capacity components; basic site infrastructure with expected availability of 99.671 per cent | 1729.224 minutes (28.8 hours)
2 | Meets or exceeds all Tier 1 requirements; redundant site infrastructure capacity components with expected availability of 99.741 per cent | 1361.304 minutes (22.6 hours)
3 | Meets or exceeds all Tier 1 and Tier 2 requirements; multiple independent distribution paths serving the IT equipment; all IT equipment must be dual-powered and fully compatible with the topology of a site's architecture; concurrently maintainable site infrastructure with expected availability of 99.982 per cent | 94.608 minutes (1.5 hours)
4 | Meets or exceeds all Tier 1, Tier 2 and Tier 3 requirements; all cooling equipment is independently dual-powered, including chillers, heaters, ventilation and air-conditioning (HVAC) systems; fault-tolerant site infrastructure with electrical power storage and distribution facilities with expected availability of 99.995 per cent | 26.28 minutes (0.4 hours)
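The downtime figures follow directly from the availability percentages; as a quick sanity check, a one-line shell calculation (assuming bc is installed and a 365-day year):

# yearly downtime for Tier 1: (100 - 99.671) per cent of a 365-day year, in minutes
$ echo "(100 - 99.671) / 100 * 365 * 24 * 60" | bc -l
# -> 1729.224 minutes, i.e., roughly 28.8 hours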

Using cloud simulators, researchers can execute their


algorithmic approaches on a software-based library and
can get the results in different parameters including energy
optimisation, security, integrity, confidentiality, bandwidth,
power and many others.

Tasks performed by cloud simulators

The following tasks can be performed with the help of


cloud simulators:
Modelling and simulation of large scale cloud
computing data centres
Modelling and simulation of virtualised server hosts,
with customisable policies for provisioning host
resources to VMs
Modelling and simulation of energy-aware
computational resources
Modelling and simulation of data centre network
topologies and message-passing applications
Modelling and simulation of federated clouds
Dynamic insertion of simulation elements, stopping
and resuming simulation
User-defined policies for allocation of hosts to VMs,
and policies for allotting host resources to VMs

Scope and features of cloud simulations

The scope and features of cloud simulations include:


Data centres
Load balancing
Creation and execution of cloudlets
Resource provisioning
Scheduling of tasks
Storage and cost factors
Energy optimisation, and many others

Cloud simulation tools and plugins


Cloud simulation tools and plugins include:
CloudSim



CloudAnalyst
GreenCloud
iCanCloud
MDCSim
NetworkCloudSim
VirtualCloud
CloudMIG Xpress
CloudAuction
CloudReports
RealCloudSim
DynamicCloudSim
WorkFlowSim

CloudSim

CloudSim is a well-known cloud simulation toolkit developed in the CLOUDS Laboratory, at the Computer Science and Software Engineering Department of the University of Melbourne.
The CloudSim library is used for the following
operations:
Large scale cloud computing at data centres
Virtualised server hosts with customisable policies
Support for modelling and simulation of large scale cloud
computing data centres
Support for modelling and simulation of virtualised server
hosts, with customisable policies for provisioning host
resources to VMs
Support for modelling and simulation of energy-aware
computational resources
Support for modelling and simulation of data centre
network topologies and message-passing applications
Support for modelling and simulation of federated clouds
Support for dynamic insertion of simulation elements, as
well as stopping and resuming simulation
Support for user-defined policies to allot hosts to VMs,
and policies for allotting host resources to VMs
User-defined policies for allocation of hosts to virtual
machines

The major limitation of CloudSim is the lack of a graphical user interface (GUI). But despite this, CloudSim is still used in universities and the industry for the simulation of cloud-based algorithms.
CloudSim is free and open source software available at http://www.cloudbus.org/CloudSim/. It is a code library based on Java. This library can be used directly by integrating it with the JDK to compile and execute the code.
For rapid application development and testing, CloudSim is integrated with Java-based IDEs (Integrated Development Environments) including Eclipse or NetBeans. Using the Eclipse or NetBeans IDE, the CloudSim library can be accessed and the cloud algorithm implemented.
The directory structure of the CloudSim toolkit is given below:

CloudSim/   -- CloudSim root directory
  docs/     -- API documentation
  examples/ -- Examples
  jars/     -- JAR archives
  sources/  -- Source code
  tests/    -- Unit tests

CloudSim needs to be unpacked for installation. To uninstall CloudSim, the whole CloudSim directory needs to be removed. There is no need to compile the CloudSim source code. The JAR files provided with the CloudSim package are used to compile and run CloudSim applications:

jars/CloudSim-<CloudSimVersion>.jar -- contains the CloudSim class files
jars/CloudSim-<CloudSimVersion>-sources.jar -- contains the CloudSim source code files
jars/CloudSim-examples-<CloudSimVersion>.jar -- contains the CloudSim examples class files
jars/CloudSim-examples-<CloudSimVersion>-sources.jar -- contains the CloudSim examples source code files

Downloading, installing and integrating CloudSim

Steps to integrate CloudSim with Eclipse
After installing the Eclipse IDE, let's create a new project and integrate CloudSim into it.
1. Create a new project in Eclipse.
2. This can be done by File->New->Project->Java Project.

Figure 1: Creating a new Java Project in Eclipse
Figure 2: Assigning a name to the Java Project
Figure 3: Build path for CloudSim library


3. Give a name to your project.
4. Configure the build path for adding the CloudSim library.
5. Search and select the CloudSim JAR files.

Figure 4: Go to the path of CloudSim library
Figure 5: Select all JAR files of CloudSim for integration
Figure 6: JAR files of CloudSim visible in the referenced libraries of Eclipse with Java Project

In the integration and implementation of Java code and CloudSim, the Java-based methods and packages can be used. In this approach, the Java library is directly associated with the CloudSim code. After executing the code in Eclipse, the following output will be generated, which makes it evident that the integration of the dynamic key exchange is implemented with the CloudSim code:

Starting Cloud Simulation with Dynamic and Hybrid Secured Key
Initialising...
MD5 Hash Digest (in Hex. format)::
6e47ed33cde35ef1cc100a78d3da9c9f
Hybrid Approach (SHA+MD5) Hash Hex format:
b0a309c58489d6788262859da2e7da45b6ac20a052b6e606ed1759648e43e40b
Hybrid Approach Based (SHA+MD5) Security Key Transmitted =>
ygcxsbyybpr4 ?
Starting CloudSim version 3.0
CloudDatacentre-1 is starting...
CloudDatacentre-2 is starting...
Broker is starting...
Entities started.
0.0: Broker: Cloud Resource List received with 2 resource(s)
0.0: Broker: Trying to Create VM #0 in CloudDatacentre-1
0.0: Broker: Trying to Create VM #1 in CloudDatacentre-1
[VmScheduler.vmCreate] Allocation of VM #1 to Host #0 failed by MIPS
0.1: Broker: VM #0 has been created in Datacentre #2, Host #0
0.1: Broker: Creation of VM #1 failed in Datacentre #2
0.1: Broker: Trying to Create VM #1 in CloudDatacentre-2
0.2: Broker: VM #1 has been created in Datacentre #3, Host #0
0.2: Broker: Sending cloudlet 0 to VM #0
0.2: Broker: Sending cloudlet 1 to VM #1
0.2: Broker: Sending cloudlet 2 to VM #0
160.2: Broker: Cloudlet 1 received
320.2: Broker: Cloudlet 0 received
320.2: Broker: Cloudlet 2 received
320.2: Broker: All Cloudlets executed. Finishing...
320.2: Broker: Destroying VM #1


320.2: Broker: Destroying VM #0
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
CloudDatacentre-1 is shutting down...
CloudDatacentre-2 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.
============================= OUTPUT =============================
Cloudlet ID  STATUS   Data centre ID  VM ID  Time  Start Time  Finish Time
1            SUCCESS  3               1      160   0.2         160.2
0            SUCCESS  2               0      320   0.2         320.2
2            SUCCESS  2               0      320   0.2         320.2
==================================================================
Cloud Simulation Finish
Simulation Scenario Finish with Successful Matching of the Keys
Simulation Scenario Execution Time in MillSeconds => 5767
Security Parameter => 30.959372773933122
2014-07-09 16:15:21.19
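If you prefer to skip the IDE, the bundled JARs can also be driven straight from the command line. A minimal sketch, where the CloudSim version (3.0.3) and the example class shipped in the examples JAR are only assumptions; adjust both to match your download:

# run a bundled example against the CloudSim JARs (file names follow the pattern described above)
$ java -cp jars/CloudSim-3.0.3.jar:jars/CloudSim-examples-3.0.3.jar \
       org.cloudbus.cloudsim.examples.CloudSimExample1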


Figure 7: Create a new Java program for integration with CloudSim

The CloudAnalyst cloud simulator

CloudAnalyst is another cloud simulator that is completely GUI-based and supports the evaluation of social network tools according to the geographic distribution of users and data centres.
Communities of users and the data centres supporting the social networks are characterised based on their location. Parameters such as the user experience while using the social network application and the load on the data centre are obtained/logged.
CloudAnalyst is used to model and analyse real world problems through case studies of social networking applications deployed on the cloud.
The main features of CloudAnalyst are:
A user friendly graphical user interface (GUI)
Simulation with a high degree of configurability and flexibility
Performs different types of experiments with repetitions
Connectivity with Java for extensions

Figure 8: Writing the Java code with the import of CloudSim packages
Figure 9: Execution of the Java code integrated with CloudSim

The GreenCloud cloud simulator

GreenCloud is also becoming well known in the international market as a cloud simulator that can be used for energy-aware cloud computing data centres, with the main focus on cloud communications. It provides features for detailed, fine-grained modelling of the energy consumed by data centre IT equipment like servers, communication switches and communication links. The GreenCloud simulator allows researchers to investigate, observe, interact and measure the cloud's performance based on multiple parameters. Most of the code of GreenCloud is written in C++. TCL is also included in the library of GreenCloud.
GreenCloud is an extension of the network simulator ns-2, which is widely used for creating and executing network scenarios. It provides a simulation environment that enables energy-aware cloud computing data centres. GreenCloud mainly focuses on the communications within a cloud. Here, all of the processes related to communication are simulated at the packet level.

By: Dr Gaurav Kumar


The author is associated with various academic and research
institutes, delivering lectures and conducting technical
workshops on the latest technologies and tools. Contact him at
kumargaurav.in@gmail.com


Admin

How To

Docker is an open source project, which packages applications and their dependencies
in a virtual container that can run on any Linux server. Docker has immense possibilities
as it facilitates the running of several OSs on the same server.

Technology is changing faster than styles in the fashion world, and there are many new entrants specific to the open source, cloud, virtualisation and DevOps technologies. Docker is one of them. The aim of this article is to give you a clear idea of Docker, its architecture and its functions, before getting started with it.
Docker is a new open source tool based on Linux
container technology (LXC), designed to change how you
think about workload/application deployments. It helps
you to easily create light-weight, self-sufficient, portable
application containers that can be shared, modified and
easily deployed to different infrastructures such as cloud/
compute servers or bare metal servers. The idea is to provide a comprehensive abstraction layer that allows developers to containerise or 'package' any application and have it run on any infrastructure.
Docker is based on container virtualisation, which is not new. However, there is no better tool than Docker to help manage kernel level technologies such as LXC, cgroups and a copy-on-write filesystem. It helps us manage these complicated kernel layer technologies through tools and APIs.

What is LXC (Linux Container)?

I will not delve too deeply into what LXC is and how it works, but will just describe some major components.
LXC is an OS level virtualisation method for running multiple isolated Linux operating systems, or containers, on a single host. LXC does this by using kernel level name spaces, which help to isolate containers from the host.
Now questions might arise about security. If I am logged in to my container as the root user, can I hack the base OS? Is that not insecure? This is not the case, because the user name space separates the users of the containers and the host, ensuring that the container root user does not have the root privilege to log in to the host OS. Likewise, there are the process name space and the network name space, which ensure that processes are displayed and managed within the container and not on the host, and that each container has its own network device and IP addresses.

Cgroups

Cgroups, also known as control groups, help to implement resource accounting and limiting. They help to limit resource utilisation or consumption by a container, such as memory, CPU and disk I/O, and also provide metrics on resource consumption by the various processes within the container.

Copy-on-write filesystem

Docker leverages a copy-on-write filesystem (currently AUFS, but other filesystems are being investigated). This allows Docker to spawn containers quickly (to put it simply, instead of having to make full copies, it basically uses pointers back to existing files).
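As a small illustration of how those cgroup limits surface in Docker's command line (the flags below are from the Docker 1.x CLI, and the values are arbitrary examples):

# cap the container at 256 MB of RAM and give it a relative CPU share of 512
[root@localhost ~] # docker run -i -t -m 256m -c 512 centos /bin/bash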


Containerisation vs virtualisation

What is the rationale behind the container-based approach, or how is it different from virtualisation? Figure 2 speaks for itself.
Containers virtualise at the OS level, whereas both Type-1 and Type-2 hypervisor-based solutions virtualise at the hardware level. Both virtualisation and containerisation are kinds of virtualisation; in the case of VMs, a hypervisor (whether Type-1 or Type-2) slices the hardware, but containers make available protected portions of the OS. They effectively virtualise the OS. If you run multiple containers on the same host, no container will come to know that it is sharing the same resources, because each container has its own abstraction. LXC takes the help of name spaces to provide the isolated regions known as containers. Each container runs in its own allocated name space and does not have access outside of it. Technologies such as cgroups, union filesystems and container formats are also used for different purposes throughout containerisation.

Figure 2: Virtualisation

Linux containers

Unlike virtual machines, with the help of LXC you can share multiple containers from a single source disk OS image. LXC is very lightweight, has a faster start-up and needs fewer resources.

Installation of Docker

Before we jump into the installation process, we should be aware of certain terms commonly used in Docker documentation.
Image: An image is a read-only layer used to build a container.
Container: This is a self-contained runtime environment that is built using one or more images. It also allows us to commit changes to a container and create an image.
Docker registry: These are the public or private servers where anyone can upload their repositories so that they can be easily shared.
The detailed architecture is outside the scope of this article. Have a look at http://docker.io for detailed information.

Figure 1: Linux Container

Note: I am using CentOS, so the following instructions are applicable for CentOS 6.5.

Docker is part of Extra Packages for Enterprise Linux (EPEL), which is a community repository of non-standard packages for the RHEL distribution. First, we need to install the EPEL repository using the command shown below:

[root@localhost ~] # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

As per best practice, update the system:

[root@localhost ~] # yum update -y

docker-io is the package that we need to install. As I am using CentOS, Yum is my package manager; so, depending on your distribution, ensure that the correct command is used, as shown below:

[root@localhost ~] # yum -y install docker-io

Once the above installation is done, start the Docker service with the help of the command below:

[root@localhost ~] # service docker start

To ensure that the Docker service starts at each reboot, use the following command:

[root@localhost ~] # chkconfig docker on

To check the Docker version, use the following command:

[root@localhost ~] # docker version

How to create a LAMP stack with Docker

We are going to create a LAMP stack on a CentOS VM. However, you can work on different variants as well. First, let's get the latest CentOS image. The command below will help us to do so:

[root@localhost ~] # docker pull centos:latest

Next, let's make sure that we can see the image by running the following code:

[root@localhost ~] # docker images centos
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
centos latest 0c752394b855 13 days ago 124.1 MB

Running a simple bash shell to test the image also helps you to start a new container:

[root@localhost ~] # docker run -i -t centos /bin/bash

If everything is working properly, you'll get a simple bash prompt. Now, as this is just a base image, we need to install the PHP, MySQL and Apache components of the LAMP stack:

[root@localhost ~] # yum install php php-mysql mysql-server httpd

The container now has the LAMP stack. Type exit to quit from the bash shell.
We are going to create this as a golden image, so that the next time we need another LAMP container, we don't need to install it again.
Run the following command and please note the CONTAINER ID of the image. In my case, the ID was 4de5614dd69c:

[root@localhost ~] # docker ps -a

The ID shown in the listing is used to identify the container you are using, and you can use this ID to tell Docker to create an image.
Run the command below to make an image of the previously created LAMP container. The syntax is docker commit <CONTAINER ID> <name>. I have used the previous container ID, which we got in the earlier step:

[root@localhost ~] # docker commit 4de5614dd69c lamp-image

Run the following command to see your new image in the list. You will find the newly created image lamp-image in the output:

[root@localhost ~] # docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
lamp-image latest b71507766b2d 2 minutes ago 339.7 MB
centos latest 0c752394b855 13 days ago 124.1 MB

Let's log in to this image/container to check the PHP version:

[root@localhost ~] # docker run -i -t lamp-image /bin/bash
bash-4.1# php -v
PHP 5.3.3 (cli) (built: Dec 11 2013 03:29:57)
Zend Engine v2.3.0 Copyright (c) 1998-2010 Zend Technologies

Now, let us configure Apache. Log in to the container and create a file called index.html. If you don't want to install vi or vim, use the echo command to redirect the following content to the index.html file:

<?php echo "Hello world"; ?>

Start the Apache process with the following command:

[root@localhost ~] # /etc/init.d/httpd start

And then test it with the help of browser/curl/links utilities.
If you're running Docker inside a VM, you'll need to forward port 80 on the VM to another port on the VM's host machine. The following command might help you to configure port forwarding. Docker has the feature to forward ports from containers to the host:

[root@localhost ~] # docker run -i -t -p :80 lamp-image /bin/bash
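For instance, to publish the container's port 80 on a fixed host port instead of a random one, and then test it from the host (the port numbers here are arbitrary examples):

[root@localhost ~] # docker run -i -t -p 8080:80 lamp-image /bin/bash
bash-4.1# /etc/init.d/httpd start

# from another terminal on the host
[root@localhost ~] # curl http://localhost:8080/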

For detailed information on Docker and other


technologies related to container virtualisation, check out the
links given under References.
References
[1] Docker: https://docs.docker.com/
[2] LXC: https://linuxcontainers.org/

By: Pradyumna Dash


The author is an independent consultant, and works as a cloud/
DevOps architect. An open source enthusiast, he loves to cook
good food and brew ideas. He is also the co-founder of the site
http://www.sillycon.org/

How To

Admin

Wireshark: Essential for a Network Professional's Toolbox
This article, the second in the series, presents further experiments with Wireshark, the
open source packet analyser. In this part, Wireshark will be used to analyse packets
captured from an Ethernet hub.

The first article in the Wireshark series, published in the July 2014 issue of OSFY, covered Wireshark architecture, its installation on Windows and Ubuntu, as well as various ways to capture traffic in a switched environment. Interpretation of DNS and ICMP Ping protocol captures was also covered. Let us now carry the baton forward and understand additional Wireshark features and protocol interpretation.
To start with, capture some traffic from a network connected to an Ethernet hub, which is the simplest way to capture complete network traffic.
Interested readers may purchase an Ethernet hub from a
second hand computer dealer at a throwaway price and go
ahead to capture a few packets in their test environment. The
aim of this is to acquire better hands-on practice of using

Wireshark. So start the capture and once you have sufficient


packets, stop and view the packets before you continue reading.
An interesting observation about this capture is
that, unlike only broadcast and host traffic in a switched
environment, it contains packets from all source IP addresses
connected in the network. Did you notice this?
The traffic thus contains:
Broadcast packets
Packets from all systems towards the Internet
PC-to-PC communication packets
Multicast packets
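If you also want to drive such a hub capture from the command line, Wireshark's companion tool tshark can do it; a minimal sketch, where the interface and file names are just examples:

# capture everything seen on the hub port into a file for later analysis
$ tshark -i eth0 -w hub-capture.pcap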
Now, at this point, imagine analysing traffic captured from hundreds of computers in a busy network: the sheer volume of captured packets will be baffling.

Figure 1: Traffic captured using HUB

Figure 3: ARP protocol

Figure 2: Default Wireshark display filters

Here, an important Wireshark feature called 'Display Filter' can be used very effectively.

Wiresharks Display Filter

This helps to sort/view the network traffic using various


parameters such as the traffic originating from a particular IP
or MAC address, traffic with a particular source or destination
port, ARP traffic and so on. It is impossible to imagine
Wireshark without display filters!
Click on Expressions or go to Analyse -> Display filters to find a list of pre-defined filters available with Wireshark. You can create custom filters depending upon the analysis requirements; the syntax is really simple.
As seen in Figure 2, the background colours of the display
filter box offer ready help while creating proper filters. A green
background indicates the correct command or syntax, while a
red background indicates an incorrect or incomplete command.
Use these background colours to quickly identify syntax and
gain confidence in creating the desired display filters.
A few simple filters are listed below:
tcp: Displays TCP traffic only
arp: Displays ARP traffic
eth.addr == aa:bb:cc:dd:ee:ff: Displays traffic where the
Ethernet MAC address is aa:bb:cc:dd:ee:ff
ip.src == 192.168.51.203: Displays traffic where the
source IP address is 192.168.51.203
ip.dst == 4.2.2.1: Displays traffic where the destination IP
address is 4.2.2.1
ip.addr == 192.168.51.1: Displays traffic where the

source or the destination IP address is 192.168.51.1


Click on Save to store the required filter for future use. By
default, the top 10 custom filters created are available for ready
use under the dropdown menu of the Filter dialogue box.
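The same display filter syntax also works outside the GUI, with tshark. A minimal sketch, assuming a saved capture file named capture.pcap (older tshark releases use -R instead of -Y):

$ tshark -r capture.pcap -Y "arp"                      # only ARP traffic
$ tshark -r capture.pcap -Y "bootp"                    # DHCP (BOOTP) traffic
$ tshark -r capture.pcap -Y "ip.addr == 192.168.51.1"  # to or from one host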
With this background, let us look at two simple protocols: ARP and DHCP.

Address Resolution Protocol (ARP)

This is used to find the MAC address from the IP address. It works in two steps - the ARP request and the ARP reply. Here are the details.
Apply the appropriate display filter (ARP) and view only
ARP traffic from the complete capture. Also, refer to Figure
3 - the ARP protocol.
The protocol consists of the ARP request and ARP reply.
ARP request: This is used to find the MAC address of a
system with a known IP address. For this, an ARP request is
sent as a broadcast towards the MAC broadcast address:
Sender MAC address: 7c:05:07:ad:42:53
Sender IP address: 192.168.51.208
Target MAC address: 00:00:00:00:00:00
Target IP address: 192.168.51.1

Note: Target IP address indicates the IP address for


which the MAC address is requested.
Wireshark displays the ARP request under the Info box as: 'Who has 192.168.51.1? Tell 192.168.51.208'.
ARP reply: The ARP request broadcast is received by all systems connected to the network segment of the sender (below the router); note that this broadcast also reaches the router port connected to this segment.
The system with the destination IP address mentioned in the ARP request packet replies with its MAC address via an ARP reply. The important contents of the ARP reply are:


Sender MAC address: belongs to the system that replies to the ARP request (filled in by that system) - 00:21:97:88:28:21
Sender IP address: belongs to the system that replies to the ARP request - 192.168.51.1
Target MAC address: the source MAC of the ARP request packet - 7c:05:07:ad:42:53
Target IP address: the source IP address of the ARP request packet - 192.168.51.208

Wireshark displays the ARP reply under the Info box as:
192.168.51.1 is at 00:21:97:88:28:21.
Thus, with the help of an ARP request and reply, system
192.168.51.208 has detected the MAC address belonging to
192.168.51.1.

Dynamic Host Configuration Protocol (DHCP)

This protocol saves a lot of time for network engineers by automatically offering a unique, dynamically assigned IP address to a system that is connected to a network without an IP address. This also helps to avoid IP conflicts (the use of one IP address by multiple systems) to a certain extent. Computer users also benefit, as they can connect to various networks without knowing the corresponding IP address range and which addresses are unused.
The DHCP protocol consists of four phases - DHCP discover, DHCP offer, DHCP request and DHCP ACK. Let us understand the protocol and interpret how these packets are seen in Wireshark.

Figure 4: DHCP protocol


Figure 5: Screenshot of DHCP protocol

When a system configured with the 'Obtain an IP address automatically' setting is connected to a network, it uses DHCP to get an IP address from the DHCP server. Thus, this is a client-server protocol. To capture DHCP packets, users may start Wireshark on such a system, then start packet capture and, finally, connect the network cable.
Please refer to Figures 4 and 5, which give a diagram and a screenshot of the DHCP protocol, respectively.
Discovering DHCP servers: To discover DHCP server(s) in the network, the client sends a broadcast on 255.255.255.255 with the source IP as 0.0.0.0, using UDP port 68 (bootpc) as the source port and UDP 67 (bootps) as the destination. This message also contains the source MAC address as that of the client and ff:ff:ff:ff:ff:ff as the destination MAC.
A DHCP offer: The nearest DHCP server receives this discover broadcast and replies with an offer containing the offered IP address, the subnet mask, the lease duration, the default gateway and the IP address of the DHCP server. The source MAC address is that of the DHCP server and the destination MAC address is that of the requesting client. Here, the UDP source and destination ports are reversed.
DHCP requests: Remember that there can be more than one DHCP server in a network. Thus, a client can receive multiple DHCP offers. The DHCP request packet is broadcast by the client with parameters similar to discovering a DHCP server, with two major differences:
1. The DHCP Server Identifier field, which specifies the IP address of the accepted server.
2. The host name of the client computer.
Use Pane 2 of Wireshark to view these parameters under Bootstrap Protocol Options 54 and 12.
The DHCP request packet also contains additional client requests for the server to provide more configuration parameters such as the default gateway, the DNS (Domain Name Server) address, etc.
DHCP acknowledgement: The server acknowledges a DHCP request by sending information on the lease duration and other configurations, as requested by the client during the DHCP request phase, thus completing the DHCP cycle.
For better understanding, capture a few packets, use Wireshark Display Filters to filter and view ARP and DHCP, and read them using the Wireshark panes.

Saving packets

Packets captured using Wireshark can be saved from the menu File -> Save as in different formats such as Wireshark, Novell LANalyzer and Sun Snoop, to name a few.
In addition to saving all captured packets in various file formats, the File -> Export Specified Packets option offers users the choice of saving Display Filtered packets or a range of packets.
Please feel free to download the pcap files used for
preparing this article from opensourceforu.com. I believe all
OSFY readers will enjoy this interesting world of Wireshark,
packet capturing and various protocols!

Troubleshooting tips

Capturing ARP traffic could reveal ARP poisoning (or ARP


spoofing) in the network. This will be discussed in more
detail at a later stage. Similarly, studying the capture of DHCP
protocol may lead to the discovery of an unintentional or a
rogue DHCP server within the network.

A word of caution

Packets captured using the test scenarios described in


this series of articles are capable of revealing sensitive
information such as login names and passwords. Some
scenarios, such as using ARP spoofing may disrupt the
network temporarily. Make sure to use these techniques
only in a test environment. If at all you wish to use them in
a live environment, do not forget to get the explicit written
permission before doing so.
By: Rajesh Deodhar
The author has been an IS auditor and network security consultant-trainer for the last two decades. He is a BE in Industrial Electronics, and holds CISA, CISSP, CCNA and DCL certifications. He can be contacted at rajesh@omegasystems.co.in

Let's Try

Open Gurus

Building the Android Platform:


Compile the Kernel
Tired of stock ROMs? Build and flash your own version
of Android on your smartphone. This new series of
articles will see you through from compiling your
kernel to flashing it on your phone.

Many of us are curious and eager to learn how to port or flash a new version of Android to our phones and tablets. This article is the first step towards creating your own custom Android system. Here, you will learn to set up the build environment for the Android kernel and build it on Linux.
Let us start by understanding what Android is. Is it an
application framework or is it an operating system? It can be
called a mobile operating system based on the Linux kernel,
for the sake of simplicity, but it is much more than that. It
consists of the operating system, middleware, and application
software that originated from a group of companies led by
Google, known as the Open Handset Alliance.

Android system architecture

Before we begin building an Android platform, let's understand how it works at a higher level. Figure 1 illustrates how Android works at the system level.
We will not get into the finer details of the architecture in
this article since the primary goal is to build the kernel. Here
is a quick summary of what the architecture comprises.
Figure 1: Android system architecture

Application framework: Applications written in Java


directly interact with this layer.
Binder IPC: It is an Android-specific IPC mechanism.
Android system services: To access the underlying
hardware application framework, APIs often communicate
via system services.
HAL: This acts as a glue between the Android system and
the underlying device drivers.
Linux kernel: At the bottom of the stack is a Linux kernel, with some architectural changes/additions including binder, ashmem, pmem, logger, wakelocks, different out-of-memory (OOM) handling, etc.
In this article, I describe how to compile the kernel for the
Samsung Galaxy Star Duos
(GT-S5282) with Android
version 4.1.2. The build
process was performed on
an Intel i5 core processor
running 64-bit Ubuntu Linux
14.04 LTS (Trusty Tahr).
However, the process should
work with any Android
kernel and device, with minor
modifications. The handset
details are shown in the
screenshot (Figure 2) taken
from the Settings -> About device menu of the phone.

Figure 2: Handset details for GT-S5282

System and software requirements

Before you download and build the Android kernel, ensure


that your system meets the following requirements:
Linux system (Linux running on a virtual machine will
also work but is not recommended). Steps explained
in this article are for Ubuntu 14.04 LTS to be specific.
Other distributions should also work.
Around 5 GB of free space to install the dependent
software and build the kernel.
Pre-built tool-chain.
Dependent software should include GNU Make,
libncurses5-dev, etc.
Android kernel source (as mentioned earlier, this
article describes the steps for the Samsung Galaxy Star
kernel).
Optionally, if you are planning to compile the whole
Android platform (not just the kernel), a 64-bit
system is required for Gingerbread (2.3.x) and newer
versions.
It is assumed that the reader is familiar with Linux
commands and the shell. Commands and file names are
case sensitive. Bash shell is used to execute the commands
in this article.

Step 1: Getting the source code

The Android Open Source Project (AOSP) maintains


the complete Android software stack, which includes
everything except for the Linux kernel. The Android
Linux kernel is developed upstream and also by various
handset manufacturers.
The kernel source can be obtained from:
1. Google Android kernel sources: Visit https://source.
android.com/source/building-kernels.html for details.
The kernel for a select set of devices is available here.
2. From the handset manufacturers or OEM website: I
am listing a few links to the developer sites where you
can find the kernel sources. Please understand that the
links may change in the future.
Samsung: http://opensource.samsung.com/
HTC: https://www.htcdev.com/
Sony: Most of the kernel is available on github.
3. Developers: They provide a non-official kernel.
This article will use the second method - we will get the official Android kernel for the Samsung Galaxy Star (GT-S5282). Go to the URL http://opensource.samsung.com/ and search for GT-S5282. Download the file GT-S5282_SEA_JB_Opensource.zip (184 MB).
Let's assume that the file is downloaded in the ~/Downloads/kernel directory.

Step 2: Extract the kernel source code

Let us create a directory android to store all relevant


files in the user's home directory. The kernel and Android
NDK will be stored in the kernel and ndk directories,

respectively.
$ mkdir ~/android
$ mkdir ~/android/kernel
$ mkdir ~/android/ndk

Now extract the archive:


$ cd ~/Downloads/kernel
$ unzip GT-S5282_SEA_JB_Opensource.zip
$ tar -C ~/android/kernel -zxf Kernel.tar.gz

The unzip command will extract the zip archive,


which contains the following files:
Kernel.tar.gz: The kernel to be compiled.
Platform.tar.gz: Android platform files.
README_Kernel.txt: Readme for kernel compilation.
README_Platform.txt: Readme for Android platform
compilation.
If the unzip command is not installed, you can extract
the files using any other file extraction tool.
By running the tar command, we are extracting the kernel source to ~/android/kernel. While creating a sub-directory for extraction is recommended, let's avoid it here for the sake of simplicity.

Step 3: Install and set up the toolchain

There are several ways to install the toolchain. We will


use the Android NDK to compile the kernel.
Please visit https://developer.android.com/tools/sdk/
ndk/index.html to get details about NDK.
For 64-bit Linux, download the Android NDK package android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2 from http://dl.google.com/android/ndk/android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2
Ensure that the file is saved in the ~/android/ndk
directory.
Note: To be specific, we need the GCC 4.4.3
version to compile the downloaded kernel. Using
the latest version of Android NDK will yield to
compilation errors.
Extract the NDK to ~/android/ndk:
$ cd ~/android/ndk
# For 64 bit version
$ tar -jxf android-ndk-r9-linux-x86_64-legacytoolchains.tar.bz2

Add the toolchain path to the PATH environment
variable in .bashrc or the equivalent:

#Set the path for Android build env (64 bit)
export PATH=${HOME}/android/ndk/android-ndk-r9/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/bin:$PATH

Step 4: Configure the Android kernel

Install the necessary dependencies, as follows:

$ sudo apt-get install libncurses5-dev build-essential

Set up the architecture and cross compiler, as follows:

$ export ARCH=arm
$ export CROSS_COMPILE=arm-linux-androideabi-

The kernel Makefile refers to the above variables
to select the architecture and the cross compiler. The cross
compiler command will be ${CROSS_COMPILE}gcc,
which expands to arm-linux-androideabi-gcc. The same
applies to other tools like g++, as, objdump, gdb, etc.
Configure the kernel for the device:

$ cd ~/android/kernel
$ make mint-vlx-rev03_defconfig

The device-specific configuration files for the ARM
architecture are available in the arch/arm/configs directory.
Executing the configuration command may throw a
few warnings; you can ignore them for now. The
command will create a .config file, which contains the
kernel configuration for the device.
To view and edit the kernel configuration, run the
following command:

$ make menuconfig

Next, let's assume you want to change LCD overlay
support. Navigate to Drivers > Graphics > Support for
framebuffer devices. The option to support LCD overlay
should be displayed, as shown in Figure 3.

Figure 3: Kernel configuration - making changes

Skip the menuconfig step or do not make any changes if
you are unsure.

Step 5: Build the kernel

Finally, we are ready to fire the build. Run the make
command, as follows:

$ make zImage

If you want to speed up the build, specify the -j option to
the make command. For example, if you have four processor
cores, you can specify the -j4 option to make:

$ make -j4 zImage

The compilation process will take time to complete, based
on the options enabled in the kernel configuration (.config)
and the performance of the build system. On completion, the
kernel image (zImage) will be generated in the arch/arm/boot/
directory of the kernel source.
Compile the modules:

$ make modules

This will trigger the build for kernel modules, and .ko files
should be generated in the corresponding module directories.
Run the find command to get a list of .ko files in the kernel
directory:

$ find . -name *.ko
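
If make complains that it cannot find the cross compiler, the PATH export above probably did not take effect in your current shell. A quick way to confirm the toolchain is visible, and to pass the settings explicitly for a single invocation (a sketch assuming the NDK paths used in this article), is:

$ arm-linux-androideabi-gcc --version    # should report GCC 4.4.3 from the NDK
$ make ARCH=arm CROSS_COMPILE=arm-linux-androideabi- -j4 zImage modules

Passing ARCH and CROSS_COMPILE on the make command line is equivalent to exporting them; it just keeps the settings local to that one build.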

What next?

Now that you have set up the Android build environment,


and compiled an Android kernel and necessary modules,
how do you flash it to the handset so that you can see the
kernel working? This requires the handset to be rooted first,
followed by flashing the kernel and related software. It turns
out that there are many new concepts to understand before
we get into this. So be sure to follow the next article on
rooting and flashing your custom Android kernel.

References
https://source.android.com/
https://developer.android.com/
http://xda-university.com

By: Mubeen Jukaku


Mubeen is technology head at Emertxe Information Technologies
(http://www.emertxe.com). His area of expertise is the architecture
and design of Linux-based embedded systems. He has vast
experience in kernel internals, device drivers and application porting,
and is passionate about leveraging the power of open source for
building innovative products and solutions. He can be reached at
mubeenj@emertxe.com


Lists: The Building Blocks of Maxima


This 20th article in our series on Mathematics in Open Source showcases the list manipulations
in Maxima, the programming language with an ALGOL-like syntax but Lisp-like semantics.

Lists are the basic building blocks of Maxima. The
fundamental reason is that Maxima is implemented in
Lisp, the building blocks of which are also lists.
To begin with, let us walk through the ways of creating
a list. The simplest method to get a list in Maxima is to just
define it, using []. So, [x, 5, 3, 2*y] is a list consisting of four
members. However, Maxima provides two powerful functions
for automatically generating lists: makelist() and create_list().
makelist() can take two forms. makelist (e, x, x0, xn)
creates and returns a list using the expression e, evaluated
for x using the values ranging from x0 to xn. makelist(e,
x, L) creates and returns a list using the expression e,
evaluated for x using the members of the list L. Check out
the example below for better clarity:

$ maxima -q
(%i1) makelist(2 * i, i, 1, 5);
(%o1)                          [2, 4, 6, 8, 10]
(%i2) makelist(concat(x, 2 * i - 1), i, 1, 5);
(%o2)                        [x1, x3, x5, x7, x9]
(%i3) makelist(concat(x, 2), x, [a, b, c, d]);
(%o3)                         [a2, b2, c2, d2]
(%i4) quit();

Note the interesting usage of concat() to just concatenate
its arguments. Note that makelist() is limited by the variation
it can have, which, to be specific, is just one: i in the first
two examples and x in the last one. If we want more, the
create_list() function comes into play.
create_list(f, x1, L1, ..., xn, Ln) creates and returns a list
with members of the form f, evaluated for the variables x1,
..., xn using the values from the corresponding lists L1, ..., Ln.
Here is just a glimpse of its power:

$ maxima -q
(%i1) create_list(concat(x, y), x, [p, q], y, [1, 2]);
(%o1)                         [p1, p2, q1, q2]
(%i2) create_list(concat(x, y, z), x, [p, q], y, [1, 2], z, [a, b]);
(%o2)             [p1a, p1b, p2a, p2b, q1a, q1b, q2a, q2b]
(%i3) create_list(concat(x, y, z), x, [p, q], y, [1, 2, 3], z, [a, b]);
(%o3)   [p1a, p1b, p2a, p2b, p3a, p3b, q1a, q1b, q2a, q2b, q3a, q3b]
(%i4) quit();

Note that all possible combinations are created using the
values for the variables x, y and z.
Once we have created the lists, Maxima provides a host of
functions to play around with them. Let's take a look at these.


Testing the lists

The following set of functions demonstrates the various checks
on lists:
atom(v) - returns true if v is an atomic element; false
otherwise
listp(L) - returns true if L is a list; false otherwise
member(v, L) - returns true if v is a member of list L;
false otherwise
some(p, L) - returns true if predicate p is true for at least
one member of list L; false otherwise
every(p, L) - returns true if predicate p is true for all
members of list L; false otherwise

$ maxima -q
(%i1) atom(5);
(%o1)                               true
(%i2) atom([5]);
(%o2)                               false
(%i3) listp(x);
(%o3)                               false
(%i4) listp([x]);
(%o4)                               true
(%i5) listp([x, 5]);
(%o5)                               true
(%i6) member(x, [a, b, c]);
(%o6)                               false
(%i7) member(x, [a, x, c]);
(%o7)                               true
(%i8) some(primep, [1, 4, 9]);
(%o8)                               false
(%i9) some(primep, [1, 2, 4, 9]);
(%o9)                               true
(%i10) every(integerp, [1, 2, 4, 9]);
(%o10)                              true
(%i11) every(integerp, [1, 2, 4, x]);
(%o11)                              false
(%i12) quit();

List recreations

Next is a set of functions operating on list(s) to create and return
new lists:
cons(v, L) - returns a list with v, followed by members of L
endcons(v, L) - returns a list with members of L followed by v
rest(L, n) - returns a list with members of L, except the first n
members (if n is non-negative), otherwise except the last -n
members. n is optional, in which case, it is taken as 1
join(L1, L2) - returns a list with members of L1 and L2
interspersed
delete(v, L, n) - returns a list like L but with the first n
occurrences of v deleted from it. n is optional, in which
case all occurrences of v are deleted
append(L1, ..., Ln) - returns a list with members of L1, ...,
Ln, one after the other
unique(L) - returns a list obtained by removing the duplicate
members in the list L
reverse(L) - returns a list with members of the list L in
reverse order

$ maxima -q
(%i1) L: makelist(i, i, 1, 10);
(%o1)              [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i2) cons(0, L);
(%o2)             [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i3) endcons(11, L);
(%o3)             [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
(%i4) rest(L);
(%o4)                [2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i5) rest(L, 3);
(%o5)                  [4, 5, 6, 7, 8, 9, 10]
(%i6) rest(L, -3);
(%o6)                   [1, 2, 3, 4, 5, 6, 7]
(%i7) join(L, [a, b, c, d]);
(%o7)                 [1, a, 2, b, 3, c, 4, d]
(%i8) delete(6, L);
(%o8)                [1, 2, 3, 4, 5, 7, 8, 9, 10]
(%i9) delete(4, delete(6, L));
(%o9)                 [1, 2, 3, 5, 7, 8, 9, 10]
(%i10) delete(4, delete(6, join(L, L)));
(%o10)    [1, 1, 2, 2, 3, 3, 5, 5, 7, 7, 8, 8, 9, 9, 10, 10]
(%i11) L1: rest(L, 7);
(%o11)                       [8, 9, 10]
(%i12) L2: rest(rest(L, -3), 3);
(%o12)                      [4, 5, 6, 7]
(%i13) L3: rest(L, -7);
(%o13)                       [1, 2, 3]
(%i14) append(L1, L2, L3);
(%o14)              [8, 9, 10, 4, 5, 6, 7, 1, 2, 3]
(%i15) reverse(L);
(%o15)             [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
(%i16) join(reverse(L), L);
(%o16) [10, 1, 9, 2, 8, 3, 7, 4, 6, 5, 5, 6, 4, 7, 3, 8, 2, 9, 1, 10]
(%i17) unique(join(reverse(L), L));
(%o17)             [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i18) L;
(%o18)             [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i19) quit();

Note that the list L is still not modified. For that matter,
even L1, L2, L3 are not modified. In fact, that is what is meant
when we state that all these functions recreate new modified
lists, rather than modify the existing ones.
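
As a small aside (this snippet is my own illustration, not part of the original demo), the usual idiom when you do want L itself to change is simply to assign the recreated list back to the same symbol:

$ maxima -q
(%i1) L: [1, 2, 3];
(%o1)                            [1, 2, 3]
(%i2) L: endcons(4, L); /* rebind L to the recreated list */
(%o2)                           [1, 2, 3, 4]
(%i3) L;
(%o3)                           [1, 2, 3, 4]
(%i4) quit();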

List extractions

Here is a set of functions extracting the various members of a
list. first(L), second(L), third(L), fourth(L), fifth(L), sixth(L),
seventh(L), eighth(L), ninth(L) and tenth(L), respectively, return
the first, second, ... member of the list L. last(L) returns the last
member of the list L.

$ maxima -q
(%i1) L: create_list(i * x, x, [a, b, c], i, [1, 2, 3, 4]);
(%o1)    [a, 2 a, 3 a, 4 a, b, 2 b, 3 b, 4 b, c, 2 c, 3 c, 4 c]
(%i2) first(L);
(%o2)                                a
(%i3) seventh(L);
(%o3)                               3 b
(%i4) last(L);
(%o4)                               4 c
(%i5) third(L); last(L);
(%o5)                               3 a
(%o6)                               4 c
(%i7) L;
(%o7)    [a, 2 a, 3 a, 4 a, b, 2 b, 3 b, 4 b, c, 2 c, 3 c, 4 c]
(%i8) quit();

Again, note that the list L is still not modified. However, we
may need to modify the existing lists, and none of the above
functions will do that. It could be achieved by assigning the
return values of the various list recreation functions back to
the original list. However, there are a few functions, which do
modify the list right away.

List manipulations

The following are the two list manipulating functions provided
by Maxima:
push(v, L) - inserts v at the beginning of the list L
pop(L) - removes and returns the first element from list L
L must be a symbol bound to a list, not the list itself, in
both the above functions, for them to modify it. Also, these
functionalities are not available by default, so we need to load
the basic Maxima file. Check out the demonstration below.
We may display L after doing these operations, or even check the
length of L to verify the actual modification of L. In case we need to
preserve a copy of the list, the function copylist() can be used.

$ maxima -q
(%i1) L: makelist(2 * x, x, 1, 10);
(%o1)          [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i2) push(0, L); /* This doesn't work */
(%o2)       push(0, [2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
(%i3) pop(L); /* Nor does this work */
(%o3)          pop([2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
(%i4) load(basic); /* Loading the basic Maxima file */
(%o4)      /usr/share/maxima/5.24.0/share/macro/basic.mac
(%i5) push(0, L); /* Now, this works */
(%o5)         [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i6) L;
(%o6)         [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i7) pop(L); /* Even this works */
(%o7)                               0
(%i8) L;
(%o8)          [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i9) K: copylist(L);
(%o9)          [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i10) length(L);
(%o10)                              10
(%i11) pop(L);
(%o11)                              2
(%i12) length(L);
(%o12)                              9
(%i13) K;
(%o13)         [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i14) L;
(%o14)           [4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i15) pop([1, 2, 3]); /* Actual list is not allowed */
arg must be a symbol [1, 2, 3]
#0: symbolcheck(x=[1,2,3])(basic.mac line 22)
#1: pop(l=[1,2,3])(basic.mac line 26)
 -- an error. To debug this try: debugmode(true);
(%i16) quit();

Advanced list operations

And finally, here is a bonus of two sophisticated list operations:
sublist_indices(L, p) - returns the list indices for the
members of the list L, for which predicate p is true.
assoc(k, L, d) - L must have all its members in the form
of x op y, where op is some binary operator. Then, assoc()
searches for k in the left operand of the members of
L. If found, it returns the corresponding right operand,
otherwise it returns d; or it returns false, if d is missing.
Check out the demonstration below for both the above
operations:

$ maxima -q
(%i1) sublist_indices([12, 23, 57, 37, 64, 67], primep);
(%o1)                           [2, 4, 6]
(%i2) sublist_indices([12, 23, 57, 37, 64, 67], evenp);
(%o2)                            [1, 5]
(%i3) sublist_indices([12, 23, 57, 37, 64, 67], oddp);
(%o3)                         [2, 3, 4, 6]
(%i4) sublist_indices([2 > 0, -2 > 0, 1 = 1, x = y], identity);
(%o4)                            [1, 3]
(%i5) assoc(2, [2^r, x+y, 2=4, 5/6]);
(%o5)                               r
(%i6) assoc(6, [2^r, x+y, 2=4, 5/6]);
(%o6)                             false
(%i7) assoc(6, [2^r, x+y, 2=4, 5/6], na);
(%o7)                              na
(%i8) quit();
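
As one more illustration (my own example, not from the original session), assoc() works nicely as a small lookup table when the members of the list are written as key = value equations:

$ maxima -q
(%i1) opts: [width = 640, height = 480, depth = 24];
(%o1)           [width = 640, height = 480, depth = 24]
(%i2) assoc(height, opts);
(%o2)                              480
(%i3) assoc(colour, opts, false);
(%o3)                             false
(%i4) quit();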

By: Anil Kumar Pugalia

The author is a gold medallist from NIT, Warangal and IISc,
Bengaluru. Mathematics and knowledge-sharing are two of
his many passions. Learn more about him at http://sysplay.in.
He can be reached at email@sarika-pugs.com.


Replicant: A Truly Free


Version of Android

Replicant is a free and open source mobile operating system based on the Android
platform. It aims at replacing proprietary Android apps and components with open source
alternatives. It is security focused, as it blocks all known Android backdoors.

Smartphones have evolved from being used just for
communicating with others to offering a wide range
of functions. The fusion between the Internet and
smartphones has made these devices very powerful and useful
to us. Android has been a grand success in the smartphone
business. It's no exaggeration to say that more than 80 per cent of
the smartphone market is now occupied by Android, which has
become the preference of most mobile vendors today.
The reason is simple: Android is free and available to the public.
But there's a catch. Have you ever wondered how well
Android respects openness? And how much Android
respects your freedom? If you haven't thought about it, please
take a moment to do so. When you're done, you will realise
that Android is not completely open to everyone.
That's why we're going to explore Replicant - a truly
free version of Android.

Android and openness

Let's talk about openness first. The problem with a closed


source program is that you cannot feel safe with it. There have
been many incidents, which suggest that people can easily be
spied upon through closed source programs.
On the other hand, since open source code is open and
available to everyone, one cannot easily plant a bug in an open
source program, because the bug would quickly be found. Apart
from that aspect, open source programs can be continually
improved by people contributing to them: enhancing features
and writing software patches. Also, there are many user
communities that will help you if you are stuck with a problem.


When Android was first launched in 2007, Google also
announced the Open Handset Alliance (OHA) to work with
other mobile vendors to create an open source mobile operating
system, which would allow anyone to work on it. This seemed
to be a good deal for the mobile vendors, because Apple's
iPhone practically owned the smartphone market at that time.
The mobile vendors needed another player, or game changer,
in the smartphone market and they got Android.
When Google releases the Android source code to the public
for free, it is called stock Android. This comprises only the
very basic system. The mobile vendors take this stock Android
and tailor it according to their devices' specifications, featuring
unique visual aspects such as themes, graphics and so on.
OHA has many terms and conditions, so if you want to use
Android in your devices, you have to play by Google's rules.
The following aspects are mandatory for each Android phone:
Google setup-wizard
Google phone-top search
Gmail apps
Google calendar
Google Talk
Google Hangouts
YouTube
Google maps for mobiles
Google StreetView
Google Play store
Google voice search

These specifications are in Google's Mobile Application
Distribution Agreement (MADA), which was leaked in
February 2014.
There are some exceptions in the market, such as
Amazon's Kindle Fire, which is based on the Android OS but
doesn't feature the usual Google stuff and has Amazon's App
Store instead of Google Play.
For a while, we were all convinced that Android was
free and open to everyone. It may seem so on the surface but
under the hood, Android is not so open. We all know that,
at its core, Android has a Linux kernel, which is released
under the GNU General Public License, but that's only a part of
Android. Many other components are licensed under the
Apache licence, which allows the source code of Android to
be distributed freely and not necessarily to be released to the
public. Some mobile vendors make sure that their devices
run their very own tailored Android version by preventing
users from installing any other custom ROMs. A forcibly
installed custom ROM in your Android will nullify the
warranty of the device. So, most users are forced to keep the
Android version shipped with the device.
Another frustrating aspect for Android users is with respect
to the updates. In Android, updates are very complex, because
there is no uniformity among the various devices running the
Android OS. Even closed OSs support their updates; for
example, Apple's iOS 5 supports iPhone 4, 4s, iPad and iPad 2;
and Microsoft allows its users to upgrade to Windows 7 from
Windows XP without hassles. As you have probably noticed,
only a handful of devices receive the new Android version.
The rest of the users are forced to change their phones. Most
users are alright with that, because today, the life expectancy of
mobiles is a maximum of about two years. People who want to
stay updated as much as possible, change their phones within a
year. The reason behind this mess is that updates depend mostly
on the hardware, the specs of which differ from vendor to
vendor. Most vendors upgrade their hardware specs as soon as
a new Android version hits the market. So the next time you try
to install an app which doesn't work well on your device, just
remember: It's time to change your phone!

Android and freedom

Online privacy is becoming a myth, since security threats pose


a constant challenge. No matter how hard we work to make
our systems secure, there's always some kind of threat arising
daily. That's why systems administrators continually evaluate
security and take the necessary steps to mitigate threats.
Not long ago, we came to know about PRISM - an NSA
(USA) spy program that can monitor anyone, anywhere in the
world, at any time. Thanks to Edward Snowden, who leaked
this news, we now realise how vulnerable we are online.
Although some may think that worrying about this borders on
being paranoid, there's sufficient proof that all this is happening
as you read this article. Many of us use smartphones for almost
everything. We keep business contacts, personal details, and
confidential data such as bank account numbers, passwords,
etc, on it. It's not an exaggeration to state that our smartphones
contain more confidential data than any other secure vault in
this world. In today's world, the easiest way to track people's
whereabouts is via their phones. So you should realise that
you are holding a powerful device in your hands, and you are
responsible for keeping your data safe.

The following list features devices supported by Replicant and
their corresponding Replicant versions:
HTC Dream/HTC Magic: Replicant 2.2
Nexus One: Replicant 2.3
Nexus S: Replicant 4.2
Galaxy S: Replicant 4.2
Galaxy S2: Replicant 4.2
Galaxy Note: Replicant 4.2
Galaxy Nexus: Replicant 4.2
Galaxy Tab 2 7.0: Replicant 4.2
Galaxy Tab 2 10.1: Replicant 4.2
Galaxy S3: Replicant 4.2
Galaxy Note 2: Replicant 4.2
GTA04: Replicant 2.3
Separate installation instructions for these devices can be found
on the Replicant website.
People use smartphones to stay organised, set reminders
or keep notes about ideas. Some of the apps use centralised
servers to store the data. What users do not realise is that you
lose control of your data when you trust a centralised server
that is owned by a corporation you don't know. You are kept
ignorant about how your data is being used and protected. If
an attacker can compromise that centralised server, then your
data could be at risk. To make things even more complicated,
an attacker could erase all that precious data and you wouldn't
even know about it.
Most of the apps in the Google Play store are closed source.
Some apps are malicious in nature, working against the interests
of the user. Some apps keep tabs on you, or worse, they can
steal the most confidential data from your device without your
knowledge. Some apps act as tools for promoting non-free
services or software by carrying ads. Several studies reveal
that these apps track their users' locations and store other
background information about them.
You may think of this as paranoia, but the thing is that cyber
criminals thrive on the ignorance of the public. It may be argued
that most users do not have any illegal secrets in the phone,
nor are they important people, so why should they worry about
being monitored? Thinking along those lines resembles the man
who ignores an empty gun at his door step. He may not use that
gun, but is completely ignorant of the fact that someone else
might use that gun and frame him for murder.

Replicant

Despite the facts that stack up against Android, it is
hard to overlook its benefits. For a while, Linux was
considered a nerdy thing, used only by developers, hackers
and others in research. Typically, those in the normal user
community did not know much about Linux. After the arrival of

Overview For U & Me


Android, everyone has the Linux kernel in their hands. Android
acts as a gateway for Linux to reach all kinds of people. The
FOSS community believes in Android, but since Android poses
a lot of problems due to the closed nature of its source code,
some people thought of creating a mobile operating system
without relying on any closed or proprietary code or services.
That's how Replicant was born.
Most of Android's non-free code deals with hardware
such as the camera, GPS, RIL (radio interface layer), etc. So,
Replicant attempts to build a fully functional Android operating
system that relies completely on free and open source code.
The project began in 2010, named after the fictional
Replicant androids in the movie Blade Runner. Denis
'GNUtoo' Carikli and Paul Kocialkowski are the current lead
developers of Replicant.
In the beginning, they began by writing code for the
HTC Dream in order to make it a fully functional phone
that did not rely on any non-free code. They made a little
progress such as getting the audio to work with fully
free and open source code, and after that they succeeded
in making and receiving calls. You can find a video of
Replicant working on the HTC Dream on YouTube.
The earlier versions of Replicant were based on AOSP
(the Android Open Source Project) but, in order to support more
devices, the base was changed to CyanogenMod, another
custom ROM which is free but still has some proprietary
drivers. Replicant version 4.2, which is based on CyanogenMod
10.1, was released on January 22, 2014.
On January 3, 2014, the Replicant team released its
full-libre Replicant SDK. You've probably noticed that the
Android SDK is no longer open source software. When you
try to download it, you will be presented with lengthy terms
and conditions, clearly stating that you must agree to that
licence's terms or you are not allowed to use that SDK.
Replicant is all about freedom. As you can see, the
Replicant team is labelling it the truly free version of Android.
The team didn't focus much on open source, although the
source code for Replicant is open to everyone. When it comes
to freedom, from the user's perspective, the word simply
means that they are given complete control over their device,
even though they might not know what to do with that control.
The Replicant team isn't making any compromises when it
comes to the user's freedom. Although there may be some
trade-offs concerning freedom, the biggest challenge for the
Replicant team is to write hardware drivers and firmware that
can support various devices. This is a difficult task since one
Android device may differ from another. It's not surprising that
they mainly differ in their hardware capabilities. That is why
some apps that work well on one device may not necessarily
work well on another. This problem could be solved if device
manufacturers decided to release the drivers and firmware to
the public, but we all know that's not going to happen.
That's why there are some devices running Replicant that
still don't have 3D graphics, GPS, camera access, etc, but, as

mentioned earlier, people who value their freedom above all


else, find Replicant very appealing.
The Replicant team is gradually making progress in adding
support for more devices. For some devices, the conversion
from closed source to open source becomes cumbersome,
which is why these devices are rejected by the Replicant team.

F-Droid

One of the reasons for the grand success of Android is the


wide range of apps that is readily available on the Google
Play store for anyone to download.
For Replicant, you cannot use Google Play but you can
use an alternativeF-Droid, which has only free and open
source software.
The problem with Google Play is that many apps on it are
closed source. Since we may not be able to look at their source
code, there is a real possibility of installing an app that could
spy on you or, worse, steal your data. By installing
apps from Google Play, users inadvertently promote non-free
software. Some apps also track their users' whereabouts.
F-Droid, on the other hand, makes sure all apps are built
from their source code. When an application is submitted to
F-Droid, it is in the form of source code. The F-Droid team
builds it into a nice APK package from the source, so the user
is assured that no other malicious code is added to that app
since you can view the source code.
The F-Droid client app can be downloaded from
the F-Droid website. This app is extremely handy for
downloading and installing apps without hassle. You don't
need an account, but can install the various versions of apps
provided there. You can choose the one that works best for
you and also easily get automatic updates.
If you're an Android user but want FOSS on your device,
F-Droid is available to you. You have to allow your device
to install apps from sources other than Google Play (which
would be F-Droid). Using the single F-Droid client, you can
easily browse through various sections of apps and easily
remove the installed apps in your device or update your apps.
Using Replicant doesn't grant your device complete
protection, but it can make your device less vulnerable to
threats. It can offer you real control over your device and
you can enjoy true freedom. If your device doesn't support
Replicant, you can use CyanogenMod instead, which is
officially recommended as an alternative to Replicant.
As Benjamin Franklin put it, "Those who give up
essential liberty to purchase a little temporary safety, deserve
neither liberty nor safety." It's up to you to choose between
liberty and temporary safety.

By: Magimai Prakash

The author has completed a B.E. in computer science. As he
is deeply interested in Linux, he spends most of his leisure time
exploring open source.


Firefox May Change the


Mobile Market!
TCL Communication's smartphone brand, Alcatel One Touch, launched the Alcatel
One Touch Fire smartphone globally last year. Fire was the first ever phone to run
the Firefox OS, an open source operating system created by Mozilla. According to
many, this OS is in some ways on par with Android, if not better. Sadly, Fire has failed to
see the light of day in India, because our smartphone market has embraced Android
on such a large scale that other OSs find it hard to make an impact. In a candid chat,
Piyush A Garg, project manager, APAC BU India, spoke to Saurabh Singh from Open
Source For You about how the Firefox OS could be the next big thing and why Alcatel
One Touch has not yet given up on it.

It was not very long ago (July 25, 2011, to be precise) that
Andreas Gal, director of research at Mozilla Corporation,
announced the Boot to Gecko project (B2G) to build a
complete, standalone operating system for the open Web, which
could provide a community-based alternative to commercially
developed operating systems such as Apple's iOS and
Microsoft's Windows Phone. Besides, the Linux-based operating
system for smartphones and tablets (among others) also aimed
to give Google's Android, Jolla's Sailfish OS as well as other
community-based open source systems such
as Ubuntu Touch, a run for their money
(pun intended!). Although, on
paper, the project boasts of
tremendous potential, it has
failed to garner the kind of
response its developers had
initially hoped for. The
relatively few devices in a
market that is flooded with
the much-loved Android
OS could be one possible
reason. Companies like
ZTE, Telefónica and GeeksPhone have taken the
onus of launching Firefox OS-based devices; however, giants
in the field have shied away from
adopting it, until now.
Hong Kong's Alcatel One
Touch is one of the few companies
that has bet on Firefox by launching the
Alcatel One Touch Fire smartphone globally, last year.
The Firefox OS 1.0-based Fire was primarily intended for
emerging markets with the aim of ridding the world of
feature phones. Sadly, the Indian market was left out when
the first Firefox OS-based smartphone was tested; could
Android dominance be the reason? Alcatel Fire (Alcatel
4012) was launched globally last year. We tried everything,


but there's such a big hoo-ha about Android. Last year, it


was a big thing. First, you have to create some space for the
OS itself, and then create a buzz, revealed Piyush A Garg,
project manager, APAC BU India.
According to Garg, there's still a basic lack of awareness
regarding the Firefox OS in India. Techies might be aware of
what the Firefox OS is but the average end user may not. And
ultimately, it is the end user who has to purchase the phone.
We have to communicate the advantages of Mozilla Firefox
to the end user, create awareness and only then launch a
product based on it, he said.

Alcatel's plans for Firefox-based smartphones

So the bottom line is, India will not


see the Alcatel One Touch Fire any
time soon; or maybe not see it at
all. Sadly, yes. Fire is not coming
to India at all. It's not going to
come to India because Fire was
an 8.89 cm (3.5 inch) product.
Instead, we might be coming up
with an 8.89-10.16 cm (3.5-4 inch)
product. Initially, we were considering
a 12.7-13.97 cm (5-5.5 inch) device.
However, we are looking to come
up with a low-end phone and such
a device cannot come in the 12.7 cm
(5 inch) segment. So, once the product is
launched with an 8.89-10.16 cm (3.5-4 inch)
screen with the Firefox OS, we may launch a whole series of
Firefox OS-based devices, said Garg.

The Firefox OS ecosystem needs a push in India

With that said, it has taken a fairly long time for the company to
realise that the Firefox OS could be a deal-breaker in an extensive
market such as India. Firefox OS may change the mobile game.
However, it still needs to grow in India. Considering the fact

that Android has such a huge base in India, we are waiting
for the right time to launch the Firefox-based smartphones
here, he said. But is the Firefox OS really a deal-breaker
for customers? The Firefox OS can be at par with Android.
The major advantages of Mozilla Firefox are primarily the
memory factor and the space that it takes: the entire OS as
well as the applications. It's not basically an API kind of OS;
it's an installation directly coming from HTML. That's a major
advantage. Also, apps for the OS are built using HTML5, which
means that, in theory, they run on the Web and on your phone
or tablet. What made Android jump from Jelly Bean to KitKat
(which requires low memory) is the fact that the end user is
looking at a low memory OS. Mozilla Firefox is also easy to
use. I won't say better or any less, but at par with Android,
said Garg, evidently confident of the platform.
To take things forward, vis-a-vis the platform, Alcatel One
Touch is also planning to come up with an exclusive App Store,
with its own set of apps. We have already planned our play
store, and tied up with a number of developers to build our own
apps. I cannot comment on the timeline of the app store but it's
in the pipeline. We currently have as many as five R&D centres
in China. We are not yet in India, although we are looking to
engage developers here as well. We're already in the discussion
phase on that front, said Garg. So, what's the company's
strategy to engage developers in particular? We invite
developers to come up and give in their ideas. Then either we
accept them, which means we buy the idea, or we work out
some kind of association with which developers get revenue
out of the collaboration. In China, more than 100,000
developers are engaged in building apps for Alcatel. India is on
our to-do list for building a community of app developers. It's
currently at an amateur stage; however, we expect things to
happen eventually, he said.
Although there's no definite time period for the launch
of Alcatel's One Touch Firefox OS-based smartphone in India
(Garg is confident it will be here by the end of 2014, followed
by a whole series, depending upon how it's received), one thing
that is certain is that the device will be very affordable. Cutting
costs while developing such low-end devices is certainly a
challenge for companies, since customers do tend to choose
value for money when making their purchases. We are not
allowed to do any trimming with respect to the hardware
quality; since we are FCC-compliant, we cannot compromise
on that, said Garg.
So what do companies like Alcatel One Touch actually
do to cut manufacturing costs? We look at larger quantities
that we can sell at a low cost, using competitive chipsets
that are offered at a low price. On the hardware side, we
may not give lamination in a low-cost phone, or we may not
offer Corning glass or an IPS, and instead give a TFT, for
instance, Garg added.
OSFY Magazine Attractions During 2014-15


Month | Theme | Featured List | Buyers Guide
March 2014 | Network monitoring | Security | -------------------
April 2014 | Android Special | Anti Virus | Wifi Hotspot Devices
May 2014 | Backup and Data Storage | Certification | External Storage
June 2014 | Open Source on Windows | Mobile Apps | UTMs for SMEs
July 2014 | Firewall and Network security | Web Hosting Solutions Providers | MFD Printers for SMEs
August 2014 | Kernel Development | Big Data solution Providers | SSDs for Servers
September 2014 | Open Source for Start-ups | Cloud | Android Devices
October 2014 | Mobile App Development | Training on Programming Languages | Projectors
November 2014 | Cloud Special | Virtualisation Solutions Providers | Network Switches and Routers
December 2014 | Web Development | Leading Ecommerce Sites | AV Conferencing
January 2015 | Programming Languages | IT Consultancy Service Providers | Laser Printers for SMEs
February 2015 | Top 10 of Everything on Open Source | Storage Solutions Providers | Wireless Routers



HP's latest mantra is the new style of IT. Conventional
servers and data storage systems do not work for the company
and its style of IT any longer. This is about the evolution of
converged systems that have taken over the traditional forms
of IT. The company is taking its mantra forward in every
possible way.
HP has recently launched the HP Apollo family of
high-performance computing (HPC) systems. The company
claims that HP Apollo is capable of delivering up to four
times the performance of standard rack servers while using
less space and energy. The new offerings reset data centre
expectations by combining a modular design with improvised
power distribution and cooling techniques. Apart from this,
the company claims that HP Apollo has a higher density at
a lower total cost of ownership. The air-cooled HP Apollo
6000 System maximises performance efficiency and makes
HPC capabilities accessible to a wide range of enterprise
customers. It is a supercomputer that combines high levels of
processing power with a water-cooling design for ultra-low
energy usage.
These servers add to the fast pace of changes going on in
the IT space today. Vikram K from HP shares his deep insight
into how IT is changing. Read on...

Since you have just launched your latest servers here, what
is your take on the Indian server market?

From a server standpoint, we are very excited, because virtually


every month and a half, we've been offering a new enhancement
or releasing a new product, which is different from the previous
one. So the question is - how are these different? Well, we have
basically gone back and looked at things through the eyes of the
customer to understand what they expect from IT. They want
to get away from conventional IT and move to an improvised
level of IT. So we see three broad areas: admin controlled IT;
user controlled IT, which is more like the cloud and is workload
specific; and then there is application-specific compute and
serve IT. These are the three distinct combinations. Within
these three areas, we have had product launches, one after the
other. The first one, of course, is an area where we dominate. So,
we decided to extend the lead and that is how the innovations
continue to happen.

What do you mean by new style of IT?

It is the time for converged systems, which are opening up


an altogether new dimension of IT. With converged systems,
you get three different systems comprising the compute part,
and the storage and the networking parts, to work together.
A variety of IT heads are opting for this primarily because
they want to either centralise IT, consolidate or improve
the overall efficiency and performance. When they do that,
they need to have better converged systems management.
So we have combined our view of converged systems and
made them workload specific. These days we have workload
specific systems. For example, with something like a column-oriented database like Vertica, we have a converged system
for virtualisation. Some time back, servers were a sprawl, but
these days, virtual machines are a big sprawl.

Converged systems have been around for about 18


months now. Can you throw some light on customers
experiences with these systems?
Yes, converged systems have been around for a while now
and we have incrementally improved on their management.
What we have today as a CSM for virtualisation or a CSM for
HANA wasn't there a year back. The journey has been good
and plenty of enterprises have expressed interest in such
evolved IT. With respect to the adoption rate, the IT/ITES
segment has been the first large adopter of converged systems,
primarily because it has a huge issue about just doing the
systems integration of X computers that compute Y storage
while somebody else takes care of the networks. Now, it
is the time for systems that come integrated with all three
elements, and the best part is that it is very workload specific.
We see a lot of converged systems being adopted in the
area of manufacturing also. People who had deployed SAP
earlier have some issues. One of them is that it is multi-tier,
i.e., it has multiple application servers and multiple instances
in the database. So when they want to run analytics, it gets
extremely slow because a lot of tools are used to extract
information. We came up with a solution, which customers
across the manufacturing and IT/ITES segments are now
discovering. That is why we see a very good adoption of
converged systems across segments.

We hear a lot about software defined data centres


(SDCs). Many players like VMware are investing a lot in
this domain. How do you think SDCs are evolving in India?

The software-defined data centre really does have the potential


to transform the entire IT paradigm and the infrastructure and
application landscape. We have recently launched new products
and services in the networking, high-performance computing,
storage and converged infrastructure areas. They will allow
enterprises to build software-defined data centres and hybrid
cloud infrastructures. Big data, mobility, security and cloud
computing are forcing organisations to rethink their approach to
technology, causing them to invest heavily in IT infrastructure.
So, when we are talking about software defined data centres, we
are talking about a scenario in which it can be a heterogeneous


setup of hypervisors, infrastructure, et al, which will help you


migrate from one to another, seamlessly.

So, software defined data centres could replace


traditional data centres in the future? Therefore, can we
consider them a part of new-age IT?
Well, I don't believe that is so. We have been living with OLTP
for about 30-35 years. As the cloud, big data and mobility pick
up even more, and are used in the context of analytics, you will
still have the two contexts residing together, which is OLTP and
OLAP. Then you would have more converged systems and will
talk about converged system management. That is exactly our
version of how we want to define software defined data centres.

We talk a lot about integrated and converged systems. It


sounds like a great idea as it would involve all the solutions
coming in from one vendor. But does that not lead to some kind
of vendor lock-in?
No it doesnt, primarily because these are workload specific.
So, one would not implement a converged system just for
the sake of it. As I mentioned, it has to be workload specific.
So, if you want to virtualise, then you would do one type
of converged system or integrated system. If you want to
do Hanna, that is an entirely different converged system.
What helps the customers is that it breaks down the cycle of
project deployment and hence, frees up a lot of resources that
would otherwise be consumed for mere active deployment or
transitioning from one context to another.


So, are SMBs ready to jump onto the integrated systems


bandwagon?

Yes, there are quite a few SMBs in India that are very
positive about integrated systems. Customers, irrespective
of the segment that they belong to, look at it from the angle
of how the business functions, and what kind of specificity
they want to get to. I wouldnt be particularly concerned
about the segment, but I would look at it from the context of
what workload specificity a customer wants.

What are the issues that you have seen IT heads face
while adopting converged IT systems?

Fortunately, we have not heard of many challenges that


the IT heads have faced while adopting converged IT
solutions. In fact, it has eased things for them, primarily
because they have been told in advance about what they
are getting into. They are no more dealing with three
separate items. They are getting into one whole thing,
which is getting deployed and what they used to take
months to achieve, is done in two or three days. This
is because we run the app maps prior to the actual sale
and tell them what exactly will reach them, how it will
run and what kind of performance it will deliver. The
major challenges are related to the fact that they are on
the verge of a transition (from the business perspective),
and they see any transition as being slightly risky.
Hence, they thoroughly check on the ROI and are
generally very cautious.


Swap Space for Linux:


How Much is Really Needed?
Linux divides its physical memory (RAM) into chunks called pages. Swapping is the process
whereby pages get transferred to a preconfigured hard disk area. The quantum of swap space
is determined during the Linux installation process. This article is all about swap space, and
explains the term in detail so that newbies dont find it a problem choosing the right amount of it
when installing Linux.

The virtual memory of any system is a combination
of two things: physical memory, which can be
accessed directly (i.e., RAM), and swap space. The latter
holds the inactive pages that are not accessed by any
running application. Swap space is used when the RAM
has insufficient space for active processes, but it has
certain spaces which are inactive at that point in time.
These inactive pages are temporarily transferred to the
swap space, which frees up space in the RAM for active
processes. Hence, the swap space acts as temporary
storage that is required if there is insufficient space
in your RAM for active processes. But as soon as the
application is closed, the files that were temporarily
stored in the swap space are transferred back to the RAM.
The access time for swap space is much higher than that for RAM. In short, swapping
is required for two reasons:
When more memory than is available in physical memory
(RAM) is required by the system, the kernel swaps out less-used pages and gives the system enough memory to run
the application smoothly.
Certain pages are required by the application only at
the time of initialisation and never again. Such pages can be
moved to the swap space, since the application will not
access them again.
After understanding the basic concept of swap space,
one should know what amount of space needs to be actually
allotted to the swap space so that the performance of Linux
actually improves. An earlier rule stated that the amount of

swap space should be double the amount of physical memory


(RAM) available, i.e., if we have 16 GB of RAM, then we
ought to allot 32 GB to the swap space. But this is not very
effective these days.
Actually, the amount of swap space depends on the kind
of application you run and the kind of user you are. If you are
a hacker, you need to follow the old rule. If you frequently
use hibernation, then you would need more swap space
because during hibernation, the kernel transfers all the files
from the memory to the swap area.
So how can the swap space improve the performance of
Linux? Sometimes, RAM is used as a disk cache rather than
to store program memory. It is, therefore, better to swap out
a program that is inactive at that moment and, instead, keep
the often-used files in cache. Responsiveness is improved by
swapping pages out when the system is idle, rather than when
the memory is full.
Even though we know that swapping has many
advantages, it does not necessarily improve the performance
of Linux on your system, always. Swapping can even make
your system slow if the right quantity of it is not allotted.
There are certain basic concepts behind this also. Compared
to memory, disks are very slow. Memory can be accessed in
nanoseconds, while disks are accessed by the processor in
milliseconds. Accessing the disk can be many times slower
than accessing the physical memory. Hence, the more the
swapping, the slower the system. We should know the
amount of space that we need to allot for swapping. The

following rules can effectively help to improve Linuxs


performance on your system.
For normal servers:
Swap space should be equal to RAM size if RAM size is
less than 2 GB.
Swap space should be equal to 2 GB if RAM size is
greater than 2 GB.
For heavy duty servers with fast storage requirements:
Swap space should be equal to RAM size if RAM size is
less than 8 GB.
Swap space should be equal to 0.5 times the size of the
RAM if the RAM size is greater than 8 GB.
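
If, after installation, you find that the swap space you allotted is too small, you do not have to repartition; a swap file works just as well. The following is a minimal sketch (my own example, not from the article), assuming root privileges and a filesystem such as ext4 that supports fallocate:

sudo fallocate -l 2G /swapfile    # reserve space for a 2 GB swap file
sudo chmod 600 /swapfile          # swapon insists on restrictive permissions
sudo mkswap /swapfile             # format it as swap
sudo swapon /swapfile             # enable it immediately
# To make it permanent, add this line to /etc/fstab:
# /swapfile none swap sw 0 0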
If you have already installed Linux, you can check
your swap space by using the following command in the
Linux terminal:

cat /proc/swaps

Swappiness and how to change it

Swappiness is a parameter that controls the tendency of the
kernel to transfer processes from physical memory to
swap space. It has a value between 0 and 100 and, in Ubuntu,
it has a default value of 60. To check the swappiness value,
use the following command:

cat /proc/sys/vm/swappiness

A temporary change (lost at reboot) to a swappiness value
of 10, for example, can be made with the following command:

sudo sysctl vm.swappiness=10

For a permanent change, edit the configuration file as follows:

gksudo gedit /etc/sysctl.conf

If the swappiness value is 0, then the kernel restricts the
swapping process; and if the value is 100, the kernel swaps
very aggressively.
So, while Linux as an operating system has great powers,
you should know how to use those powers effectively so that
you can improve the performance of your system.

By: Roopak T J

The author is an open source contributor and enthusiast. He
has contributed to a couple of open source organisations,
including MediaWiki and LibreOffice. He is currently in his
second year at Amrita University (B.Tech). You can contact him
at mailstorpk@gmail.com


TIPS & TRICKS

Booting an ISO directly from the hard


drive using GRUB 2

We often find ourselves in a situation in which we have


an ISO image of Ubuntu on our hard disk and we need
to test it by first running it. Try out this method for using
the ISO image.
Create a GRUB menu entry by editing the /etc/grub.d/40_custom
file. Add the text given below just after the existing text in the file:

#gksu gedit /etc/grub.d/40_custom

Add the menu entry:

menuentry "Ubuntu 12.04.2 ISO" {
set isofile=/home/<username>/Downloads/ubuntu-12.04.2-desktop-amd64.iso #path of isofile
loopback loop (X,Y)$isofile
linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=$isofile noprompt noeject
initrd (loop)/casper/initrd.lz
}

The isofile variable is not required but simplifies the
creation of multiple Ubuntu ISO menu entries.
The loopback line must reflect the actual location of the
ISO file. In the example, the ISO file is stored in the user's
Downloads folder. X is the drive number, starting with 0;
Y is the partition number, starting with 1. sda5 would be
designated as (hd0,5) and sdb1 would be (hd1,1). Do not
use (X,Y) in the menu entry but use something like (hd0,5).
Thus, it all depends on your system's configuration.
Save the file and update the GRUB 2 menu:

#sudo update-grub

Now reboot the system. The new menu entry will be
added in the GRUB boot options.
Kiran P S,
pskirann@gmail.com

Playing around with arguments

While writing shell scripts, we often need to use
different arguments passed along with the command. Here is
a simple tip to display the argument of the last command.
Use !!:n to select the nth argument of the last command,
and !$ for the last argument.

dev@home$ echo a b c d
a b c d
dev@home$ echo !$
echo d
d
dev@home$ echo a b c d
a b c d
dev@home$ echo !!:3
echo c
c

Shivam Kotwalia,
shivamkotwalia@gmail.com

Retrieving disk information from


the command line

Want to know details of your hard disk even without physically


touching it? Here are a few commands that will do the trick. I
will use /dev/sda as my disk device, for which I want the details.
smartctl -i /dev/sda

smartctl is a command line utility designed to perform

SMART (Self-Monitoring, Analysis and Reporting


Technology) tasks such as printing the SMART self-test and
error logs, enabling and disabling SMART automatic testing,
and initiating device self-tests. When the command is used
with the -i switch, it gives information about the disk.
The output of the above command will show the model
family, device model, serial number, firmware version, user
capacity, etc, of the hard disk (sda).
You can also use the hdparm command:

hdparm -I /dev/sda

hdparm can give much more information than smartctl.
Munish Kumar,
munishtotech@gmail.com

Writing an ISO image file to a CD-ROM
from the command line

We usually download ISO images of popular Linux distros for
installation or as live media, but end up using a GUI CD burning
tool to create a bootable CD or DVD ROM. But, if you're feeling
a bit geeky, you could try doing so from the command line too:

# cdrecord -v speed=0 driveopts=burnfree -eject dev=1,0,0 <src_iso_file>

speed=0 instructs the program to write the disk at the lowest
possible drive speed. But, if you are in a hurry, you can try
speed=1 or speed=2. Keep in mind that these are relative speeds.
The -eject switch instructs the program to eject the disk
after the operation is complete.
Now, the most important part to specify is the device's ID. It
is absolutely important that you specify the device ID of your CD
ROM drive correctly, or you may end up writing the ISO to some
other place on the disk and corrupting your entire hard disk.
To find out the device ID of your CD ROM drive, just run the
following command prior to running the first command:

#cdrecord -scanbus

Your CD ROM's device ID should look something like
what's shown below:

1,0,0

Also, note that you cannot create a bootable DVD disk
using this command. But, do not be disheartened; there is
another simpler command to burn a bootable DVD, which is:

# growisofs -dvd-compat -speed=0 -Z /dev/dvd=myfile.iso

Here, /dev/dvd is the device file that represents your DVD
ROM. It is quite likely to be the same on your system as well.
Do not use growisofs to burn a CD ROM. The beauty of Linux
is that a single command does a single operation and does it well.
Manu Prasad,
mmanuprasad@gmail.com

Downloading/converting HTML pages to PDF

wkhtmltopdf is a software package that converts
HTML pages to PDF. If this is not installed on your system,
use the following command to do so:

$sudo apt-get install wkhtmltopdf

After installing, you can run the command using the
following syntax:

$wkhtmltopdf URL[oftheHTMLfile] NAME[of the PDF file].pdf

For example, by using:

$wkhtmltopdf opensourceforu.com OSFY.pdf

the OSFY.pdf will be downloaded to the current
working directory.
You can read the documentation to know more about this.
Pankaj Rane,
pankaj.rane2k8@gmail.com

Going invisible on the terminal

Did you ever think that you could type commands that
would be invisible on your system but still would execute,
provided you typed them correctly? This can easily be done by
changing the terminal settings using the following command:
stty -echo

To restore the visibility of your commands, just type the


following command:
stty echo

Note: Only the minus sign has been removed.
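
As a small illustration (a hypothetical snippet, not part of the original tip), the same trick can be used inside a script to read a password without echoing it:

#!/bin/bash
# Read a password without echoing it to the terminal
stty -echo                 # turn off echo
printf "Password: "
read password
stty echo                  # restore echo
printf "\nYou typed %s characters\n" "${#password}"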


Sumit Agarwal,
sumitagarwal0591@gmail.com

Share Your Linux Recipes!


The joy of using Linux is in finding ways to get around
problemstake them head on, defeat them! We invite you to
share your tips and tricks with us for publication in OSFY so
that they can reach a wider audience. Your tips could be related
to administration, programming, troubleshooting or general
tweaking. Submit them at www.linuxforu.com. The sender of
each published tip will get a T-shirt.

www.OpenSourceForU.com | OPEN SOURCE For You | August 2014 | 103

For U & Me | Overview

The Mozilla Location Service: Addressing Privacy Concerns
Dubbed a research project, the Mozilla Location Service is the crowd-sourced mapping of wireless networks (Wi-Fi access points, cell phone towers, etc.) around the world. This information is commonly used by mobile devices and computers to ascertain their location when GPS services are not available. The entry of Mozilla into this field is expected to be a game changer. So get to know more about Mozilla's MozStumbler mobile app, as well as Ichnaea.

The Mozilla mission statement expresses a desire to promote openness, innovation and opportunity on the Web, and Mozilla is trying to live up to this pretty seriously.
Firefox, Thunderbird, Firefox OS: the list of Mozilla's open source products is growing. Yet there are several areas in which tech giants like Google, Nokia and Apple are dominant, and the mobile ecosystem is one of them. Mozilla is now trying to break into this space. After Firefox OS, the foundation now offers a new service for mobile users.

There are several services that a user might not even be aware of while using a cell phone. The network-based location service is one of the most used services by cell phone owners to determine their location when the GPS service is not available. Several companies currently offer this service, but there are major privacy concerns associated with it. It is no secret that advertising companies track a user's location history and offer ads or services based on it. Until now, there was no transparent option among these services, but Mozilla has come to our rescue to prevent tech giants from sniffing out our locations.


Figure 1: The MozStumbler app

Figure 2: MozStumbler options

As stated on Mozilla's location service website, "The Mozilla Location Service is a research project to investigate crowd-sourced mapping of wireless networks (Wi-Fi access points, cell towers, etc.) around the world. Mobile devices and desktop computers commonly use this information to figure out their location when GPS satellites are not accessible."
In the same statement, Mozilla acknowledges the presence of, and the challenges presented by, the other services, saying, "There are few high-quality sources for this kind of geolocation data currently open to the public. The Mozilla Location Service aims to address this issue by providing an open service to provide location data."
This service provides geolocation lookups based on
publicly observable cell tower and Wi-Fi access point
information. Mozilla has come out with an Android app to
collect publicly observable cell tower and Wi-Fi data; it's called MozStumbler.
This app scans and uploads information about cell towers and Wi-Fi access points to Mozilla's servers. The latest stable version of the app is 0.20.5, which is ready for download. MozStumbler provides the option to upload the scanned data over a Wi-Fi or cellular network. But you don't need to be online while scanning; you can upload the data afterwards.
Note:
1. This app is not available on the Google Play store, but you can download it from https://github.com/MozStumbler/releases/
2. The Firefox OS version of this app is on its way too. You can stay abreast of what's happening with the Firefox OS app at http://github.com/FxStumbler/

Figure 3: MozStumbler settings

You can optionally give your username in this app to track your contributions. Mozilla has also created a leaderboard to let users track and rank their contributions, and more detailed statistics are available on the website. No user-identifiable information is collected through this app.
Mozilla is not only collecting the data but also providing
users with a publicly accessible API. It has code-named the API Ichnaea, which means 'the tracker'. This API can be accessed
to submit data, search data or search your location. As the data
collection is still in progress, it is not recommended to use this
service for commercial applications, but you can try it out on
your own just for fun.
Note: Mozilla Ichnaea can be accessed at
https://mozilla-ichnaea.readthedocs.org
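To get a feel for what a lookup involves, here is a rough sketch of a search request. The host name, endpoint and JSON fields are assumptions based on the Ichnaea documentation of the time and may have changed, and the Wi-Fi keys are placeholders:

curl -X POST "https://location.services.mozilla.com/v1/search?key=test" \
  -H "Content-Type: application/json" \
  -d '{"wifi": [{"key": "01:23:45:67:89:ab"}, {"key": "01:23:45:67:89:cd"}]}'

If the service can place you, it responds with a small JSON object containing an approximate latitude, longitude and accuracy radius.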
The MozStumbler app provides an option for geofencing, which means you can pause the scanning within a one-km radius of a chosen location. This addresses user concerns about the collection of behavioural commute data, such as home and work locations and travel habits.
In short, Mozilla is trying to provide a high-quality location service to the general public at no cost! Recently, Mozilla India held a competition, the Mozilla Geolocation Pilot Project India, which encouraged more and more users to scan their areas. To contribute to this project, you can fork the repository on GitHub or just install the app; you will be welcomed aboard.

By: Vinit Wankhede
The author is a fan of free and open source software. He is currently contributing to the translation of the MozStumbler app for Mozilla location services.

