
Contents

Admin
28   Build a Hybrid Cloud with Eucalyptus
31   Must-Have Network Monitoring Tools for Systems Administrators
38   Remote Server Monitoring with Android Devices
41   The Growing Popularity of the Snort Network IDS
45   Deep Learning for Network Packet Forensics Using TensorFlow
51   pfSense: Adding Firewall Rules to Filter Services
55   Nagios: The System Monitoring Tool You Can Depend On

Developers
59   Use the Django and REST Frameworks to Create a Simple API
62   My Library Application in App Inventor 2
67   PhoneGap: Simplifying Mobile App Development
72   Work with Flow to Debug Your Program
74   How do Arrays Decay into Pointers?
77   The Fundamentals of RDMA Programming
80   Android Data Binding: Write Less to do More
82   A Quick Look at Multi-architecture, Multi-platform Mobile App Development Frameworks

FOR U & ME
85   OLabs Makes School Laboratories Accessible Anytime, Anywhere
88   Cool Terminal Tricks

OpenGurus
91   CoAP: Get Started with IoT Protocols
96   Profanity: The Command Line Instant Messenger

REGULAR FEATURES
08   New Products
10   FOSSBytes
23   Editorial Calendar
104  Tips & Tricks

Contents

Editor
Rahul Chopra

EDITORIAL, SUBSCRIPTIONS & ADVERTISING

Delhi (HQ)
D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020
Ph: (011) 26810602, 26810603; Fax: 26817563
E-mail: info@efy.in

MISSING ISSUES
E-mail: support@efy.in

BACK ISSUES
Kits 'n' Spares
New Delhi 110020
Ph: (011) 26371661, 26371662
E-mail: info@kitsnspares.com

NEWSSTAND DISTRIBUTION
Ph: 011-40596600
E-mail: efycirc@efy.in

ADVERTISEMENTS
Mumbai
Ph: (022) 24950047, 24928520
E-mail: efymum@efy.in
Columns
17   CodeSport
20   Exploring Software: The Possibilities with Blockchains

CaseStudy
22   Jugnoo Uses Open Source to Dominate the Auto-rickshaw Aggregator Space

24   "Raspberry Pi enables embedded engineering while being a cheap, low-power Linux platform" (Eben Upton, Raspberry Pi creator)

48   "Educating the marketplace about deploying data encryption is one of our fundamental charters" (Rahul Kumar, country head of WinMagic)

100  Building a Multi-host, Multi-Container Orchestration and Distributed System using Docker

DVD OF THE MONTH
106  Try out the latest powerful Fedora: Fedora 24 Workstation Live (64-bit), Fedora 24 Server and Fedora 24 LXDE Live.

PUNE
Ph: 08800295610/ 09870682995
E-mail: efypune@efy.in

GUJARAT
Ph: (079) 61344948
E-mail: efyahd@efy.in

CHINA
Power Pioneer Group Inc.
Ph: (86 755) 83729797, (86) 13923802595
E-mail: powerpioneer@efy.in

JAPAN
Tandem Inc., Ph: 81-3-3541-4166
E-mail: tandem@efy.in

SINGAPORE
Publicitas Singapore Pte Ltd
Ph: +65-6836 2272
E-mail: publicitas@efy.in

TAIWAN
J.K. Media, Ph: 886-2-87726780 ext. 10
E-mail: jkmedia@efy.in

UNITED STATES
E & Tech Media
Ph: +1 860 536 6677
E-mail: veroniquelamarque@gmail.com

BENGALURU
Ph: (080) 25260394, 25260023
E-mail: efyblr@efy.in

Printed, published and owned by Ramesh Chopra. Printed at Tara Art Printers Pvt Ltd, A-46,47, Sec-5, Noida, on 28th of the previous month, and published from D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020. Copyright 2016. All articles in this issue, except for interviews, verbatim quotes, or unless otherwise explicitly mentioned, will be released under the Creative Commons Attribution-NonCommercial 3.0 Unported licence a month after the date of publication. Refer to http://creativecommons.org/licenses/by-nc/3.0/ for a copy of the licence. Although every effort is made to ensure accuracy, no responsibility whatsoever is taken for any loss due to publishing errors. Articles that cannot be used are returned to the authors if accompanied by a self-addressed and sufficiently stamped envelope. But no responsibility is taken for any loss or delay in returning the material. Disputes, if any, will be settled in a New Delhi court only.

SUBSCRIPTION RATES

Year     Newsstand Price (₹)    You Pay (₹)
Five     7200                   4320
Three    4320                   3030
One      1440                   1150

Overseas (one year): US$ 120

Kindly add ₹50/- for outside-Delhi cheques. Please send payments only in favour of EFY Enterprises Pvt Ltd. Non-receipt of copies may be reported to support@efy.in; do mention your subscription number.

NEW PRODUCTS

Ambrane's power bank with torchlight

Electronic gizmo and mobile accessory maker, Ambrane, has launched a high-capacity power bank at an affordable price. The P1310 power bank features two USB 2.0 ports and one micro USB port, and offers a 5V/2.1A maximum output.

Its super-sized capacity supposedly supports up to 3.5 full charges, which helps a user charge a 2,500mAh battery smartphone approximately four times with a fully charged power bank. The company claims that the power bank can withstand 300-500 charge/discharge cycles, with an average charging time of 12-13 hours.

The device uses a Samsung lithium-ion battery, and is backed by a RISC microprocessor for faster charging and enhanced battery life. It is capable of automatically adjusting the output power level, based on the connected device. Along with a built-in torchlight, the power bank comes with nine-layered circuit protection and optimised charging efficiency. It has four LED indicators that show the remaining battery level.

The power bank is compatible with nearly all smartphones and tablets, as well as a variety of digital cameras and handheld gaming devices. The Ambrane P1310 is available online and via retail stores.

Price: ₹999
Address: Ambrane India, C-91/7, Wazirpur Industrial Area, New Delhi 110052; Ph: 08588806580

Affordable Bluetooth headphones from Portronics

Portable devices manufacturer, Portronics, has launched its MuffsXT on-ear Bluetooth headphones. These come with a built-in microphone and ear-pad-mounted controls, which allow users to control music and phone calls.

The headphones feature Bluetooth 4.1 connectivity and are compatible with almost all Bluetooth-enabled devices. Powered by a 195mAh battery, they can supposedly be fully charged within two hours. They come with 40mm drivers and are capable of playing 12 hours of non-stop music on a single charge. The Portronics MuffsXT headphones are available via online portals.

Price: ₹1,999
Address: Portronics, 4E/14, Azad Bhavan, Jhandewalan, New Delhi 110055; Ph: 09555245245; Website: www.Portronics.com

Xiaomi releases its largest smartphone

Chinese smartphone manufacturer, Xiaomi, has recently launched the Mi Max in India, which is known to be the largest smartphone available at an affordable price. The device has a 16.35cm (6.44 inch) full HD (1080x1920) 342ppi display, with an all-metal body and a fingerprint sensor on the rear panel.

The variant of the device available in India features 3GB RAM with 32GB inbuilt storage. It is powered by a Snapdragon 650 SoC with a hybrid dual SIM configuration, which enables users to place a microSD card of up to 128GB in the secondary SIM card slot.

On the camera front, the device sports a 16 megapixel rear camera offering phase detection autofocus (PDAF) and an LED flash, along with a 5 megapixel front camera with an 85 degree wide-angle view. Both cameras have an f/2.0 aperture.

The connectivity features of the phablet include 4G LTE with VoLTE, Bluetooth 4.1, GPS/A-GPS, and Wi-Fi 802.11ac with MIMO. Backed by a massive 4850mAh battery, the device weighs 203 grams. It also sports an infrared emitter to act as a universal remote control, apart from an ambient light sensor, accelerometer, gyroscope, etc.

The Mi Max is available in dark grey, gold and silver colours, via online stores.

Price: ₹14,999
Address: Xiaomi India Pvt Ltd, 8th Floor, Tower-1, Umiya Business Bay, Marathahalli Sarjapur Outer Ring Road, Bengaluru, Karnataka 560103; Email: service.in@xiaomi.com; Website: www.mi.com


Tablet with fingerprint sensor from iBall

Consumer electronics company, iBall, has unveiled its iBall Slide Bio-Mate tablet with a fingerprint sensor for safeguarding privacy.

The device comes with a bigger 20.3cm (8 inch) screen and a sleek cobalt brown matte body. It features a massive IPS HD display with 1280x800 pixels. It offers authorised access to up to five fingers, allowing users to give access to their most trustworthy friends. The Bio-Mate incorporates an advanced 1.3GHz quad-core processor, which enables you to switch between apps and access the Web faster, along with 1GB RAM that enhances its processing speed.

It features an enhanced 8 megapixel AF rear camera and a 2 megapixel front camera for selfies and video chatting. The tablet runs the Android 5.1 Lollipop OS and has a powerful 4300mAh battery. On the connectivity front, the device offers a Wi-Fi hotspot, dual-SIM with 3G, Bluetooth 4.0, the USB OTG function, GPS and A-GPS, along with FM radio with FM recording.

The iBall Slide Bio-Mate is available at retail stores.

Price: ₹7,999
Address: iBall, U-202, Third Floor, Pillar No-33, Near Radhu Palace, Laxmi Nagar Metro, New Delhi 110092; Ph: 011-26388180

NAS device from Western Digital launched in India

The digital storage solutions company, Western Digital, has launched the My Cloud EX2 Ultra two-bay network attached storage (NAS) system. The device is designed to automatically sync content across computers, and easily share files and folders, for creative professionals who require multiple backup options.

Backed by a 1.3GHz dual-core processor, the device has 1GB RAM, offering security features to help users protect and manage movies, photos, music and files. The management options of the device include RAID 0, RAID 1, JBOD and spanning modes. The data protection options include NAS-to-NAS, USB-to-cloud and LAN/WAN backup.

The device comes preconfigured with RAID 1, and can easily be configured into other data protection modes. To further expand the capacity of the My Cloud EX2 Ultra, it comes with support for USB 3.0 hard drives. It is also equipped with WD Red hard drives powered by NASware 3.0 technology, along with a dashboard for users to create accounts and monitor the storage.

The WD My Cloud EX2 Ultra drives are available in three variants (4TB, 6TB and 8TB) via selected retail stores.

Price: ₹17,499 for the drive-less variant
Address: Western Digital Ltd, 401, Eros Corporate Tower, Nehru Place, New Delhi 110019; Ph: 91-11-66542000; Email: wdindia@wdc.com; Website: www.wdc.com

Card reader-cum-power bank from Kingston

Multinational computer technology corporation, Kingston, has unveiled two new variants of its MobileLite wireless card reader-cum-power bank: the MobileLite Wireless G3 and the MobileLite Wireless Pro. Both variants come with an SD card slot, through which users can access content on their tablet/smartphone via the MobileLite app, by simply connecting their storage devices to it.

The MobileLite Wireless G3 comes with a USB port and is equipped with a 5,400mAh battery, while the Wireless Pro variant comes with a 6,700mAh battery, which allows users to charge their smartphones on the go. With the MobileLite app, users can easily transfer, back up or share photos and videos without using a PC.

The devices are dual-band Wi-Fi capable (802.11ac) for fast data transfer. Apart from charging mobile devices and wireless data streaming, users can also store data on the Wireless Pro version, as it offers 64GB of internal storage.

Both devices are available via retail stores.

Price: ₹4,999 for the MobileLite G3 and ₹8,999 for the MobileLite Wireless Pro
Address: Kingston Technology India, 703, 7th Floor, Quantum Towers, Off SV Road, Chincholi Bunder Road, Malad West, Mumbai 400064; Ph: 1860-2334515

The prices, features and specifications are based on information provided to us, or as available on various websites and portals. OSFY cannot vouch for their accuracy.

Compiled by: Aashima Sharma


FOSSBYTES
Compiled by: Jagmeet Singh

PHD Chamber of Commerce and Industry organises national conference on cyber security along with DeitY

PHD Chamber of Commerce and Industry recently partnered with the Department of Electronics and Information Technology (DeitY) to host a national conference on cyber security. The conference, called Cyberix 2016: Securing Digital India, was held at the PHD House in New Delhi, and was aimed at understanding the changing facets of cyber threats and their impact on vital installations and businesses.

The conference was inaugurated by the guest of honour, Brig. M.U. Nair, DACIDS, DIARA, Indian Army. During the event, there was a special address by R.K. Sudhanshu, joint secretary, cyber laws and e-security, DeitY, government of India. The president of the PHD Chamber of Commerce and Industry, Gopal Jiwarajka, said in his welcome address that India has over 460 million Internet users, and with such high numbers, there has been a corresponding rise in cyber crime over the past few years. He said that although the Internet increases communication, it can also increase the incidence of cyber threats, which is evident from the fact that India ranks fifth in online payment hacking.

Cyber security has never been more important for governments and businesses across the globe than it is today. Its scope ranges from detecting and anticipating cyber attacks on key defence installations to preventing corporate hacking attacks. Jiwarajka also said that there is a huge demand for security solutions and services to protect the confidential data of the government and the military, apart from public data, data of the banking, financial services and insurance (BFSI) sector, and data of hospitals.

Jiwarajka also expressed concern about the cyber security talent gap and the lack of awareness about available dedicated solutions. He described these factors as the biggest challenges being faced by the industry.

In his special address, Sudhanshu said that DeitY is contemplating amendments to the existing Indian cyber laws to align them with the prevailing realities of modern times, as well as bringing about new encryption and privacy policies to cater to the evolving cyberspace. He also mentioned that there is a need for basic education regarding cyber security at all levels of the present education system in the country.

The conference was conducted along with associate partners including CMAI, TEMA and CIO Klub. The EFY Group, the parent company of Open Source For You, was among its media partners.

Tata Consultancy Services hosts Open Source Convention 2016 in Mumbai

Tata Consultancy Services (TCS) recently hosted the Open Source Convention 2016 in Mumbai, an annual event that highlights TCS' thought leadership in the world of open source and offers a hands-on experience of open source technologies.

The day-long Open Source Convention 2016 witnessed eminent speakers from all across India sharing their thoughts and experiences. The event was conducted with the theme of 'Interoperability', to let its customers enable enterprise-level interoperability using open source solutions.

"Relevant and with enough pointers to influence roadmaps and future decisions," said the CTO of a leading Indian financial solutions company, about the convention. Another CTO stated that the convention provided practical insights into open source technology adoption.

The event was well attended by customers, including CIOs and CTOs of top companies in the country. It brought about fruitful opportunities for the attendees to interact directly with thought leaders, and to know more about the latest developments in the open source arena.

Raspberry Pi-powered AI solution defeats US Air Force pilot in combat simulation

While the original aim of launching the Raspberry Pi was just to let enthusiasts test their DIY projects, now, leveraging some software tweaks, it can beat even humans at their own game. Nick Ernest, a doctoral graduate from the University of Cincinnati, has designed an artificial intelligence (AI) solution for the single-board computer to prove its capabilities.

Called ALPHA, the AI solution defeated retired US Air Force Colonel Gene Lee during combat simulation. Lee described the new development as the most aggressive, responsive, dynamic and credible AI solution to date.

Flying against opponents on a flight simulator was not a new task for Lee, as he has been beating computer programs since the early 1980s. However, ALPHA's exceptional results left the pilot tired, drained and mentally exhausted. "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacted instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking," Lee said.

Ernest aims to enhance the AI program in the future by extending its capabilities and reducing mistakes. "The goal is to continue developing ALPHA, to push and extend its capabilities, and perform additional testing against other trained pilots. Fidelity also needs to be increased, which will come in the form of even more realistic aerodynamic and sensor models," Ernest explained.

ALPHA could become the ultimate solution for air combat simulation. It could even emerge as a benchmark for other AI programs, and perform tasks based on the commands of a manned wingman in actual flight, using the power of the Raspberry Pi.

NEC Corporation launches OSS Technology Centre in India

NEC Corporation and NEC Technologies India (NTI) have announced the launch of their OSS Technology Centre in the country. The aim of the new organisation is to offer technical support related to the use of open source software.

Having begun operations on July 1, the new OSS Technology Centre will have around 50 people based in NTI. The staff at the Centre comprises engineers who are well versed in open source solutions architecture, as well as essential software development and support capabilities.

"NEC and NTI will leverage the newly established OSS Technology Centre to provide rapid support to users carrying out the construction of OSS-based systems on a global scale," said Nobuhiko Kishinoue, general manager, cloud platform division, NEC Corporation.

The Technology Centre will carry out development of new network functions related to Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV). It will also help in developing application platform functions like Platform-as-a-Service (PaaS). Besides, the organisation will contribute to making source code openly available.

"We will be able to utilise the large experienced and English-speaking engineering resource pool in India for the new centre. We plan to hire both fresh university graduates and experienced corporate engineers. The OSS Technology Centre initiative is an umbrella organisation for all OSS-related activities between NTI and NEC to be brought together under one task force," said Anil Gupta, MD and CEO, NTI.

NEC Corporation estimates that business for open source software will increase in the future. Therefore, the new centre could soon become fruitful for the Japanese company.

Microsoft brings Docker Datacenter to Azure Marketplace

At DockerCon 2016 in Seattle, Microsoft announced the debut of Docker Datacenter on the Azure Marketplace. The new move by the Redmond giant is apparently aimed at expanding its presence in the world of enterprises using open source technologies.

Mark Russinovich, Azure's chief technology officer at Microsoft, showcased a Docker Datacenter-based container cluster that uses Azure resources and runs on a private cloud infrastructure managed by Azure Stack. "Deployment from the Azure Marketplace using pre-defined ARM (Azure Resource Manager) templates simplifies and reduces the time to start up Docker Datacenter and get productive," Russinovich wrote in a blog post.

Microsoft tied up with Docker to bolster its development towards Linux solutions. The partnership has already resulted in the launch of some tools and extensions for Windows users. Most recently, Windows 10 got an update to let users natively use Docker on their systems.

In addition to the new Docker Datacenter announcement, Russinovich unveiled SQL Server in a Linux-based Docker container. Microsoft's SQL Server on Linux is presently available as a private preview. Users with preview access will now be able to get SQL Server on Linux directly on their Ubuntu systems, just as a Docker image.

Red Hat JBoss EAP 7 released to support hybrid cloud applications

Red Hat announced the release of its JBoss Enterprise Application Platform 7 at the Red Hat Summit 2016 in San Francisco. The new platform, known as Red Hat JBoss EAP 7, is designed to bridge current and future development paradigms and to support hybrid cloud applications through a set of new technologies.

The original Red Hat JBoss EAP was designed to bring open source closer to the enterprise world. Its new version comes with the same focus, but uses some new technologies to support DevOps development practices. The platform now has the widely deployed Java EE APIs, as well as Red Hat's JBoss Developer Studio and JBoss Core Services Collection.

"With JBoss EAP 7, we are addressing the needs of both enterprise IT and developers with a balanced vision designed to bridge the reality of building and maintaining a business today with the aspiration of IT innovation tomorrow," said Mike Piech, VP and GM of middleware, Red Hat.

JBoss EAP 7 has the JBoss Core Services Collection, which provides developers with some common foundational building blocks to develop new enterprise applications. Additionally, there are management and monitoring capabilities for existing applications and services. The Collection package includes Red Hat JBoss Operations Network, the Apache HTTP server and a single sign-on server.

Red Hat considers its JBoss EAP 7 a new milestone that would help the open source solutions provider extend its present product portfolio and attract more partners. Likewise, the platform advances the existing experience with new application approaches like containers and microservice architectures.

Next Android version to have nougat flavour

Google has formally revealed the official name of the next Android version. It won't be the highly anticipated Nutella, but Android Nougat.

Android Nougat emerged as a result of Google's crowdsourcing attempt at its annual I/O conference last month. The new version will succeed last year's Android 6.0 Marshmallow and come with features such as multi-window support and direct reply. Additionally, it will provide improvements to existing offerings like Doze and Data Saver mode.

For developers, Google has already started releasing N Developer Preview builds with some unique peculiarities. While the first preview version came preloaded with a new JIT compiler to enhance the overall performance of the software and apps, the second build brought with it the new 3D rendering API, Vulkan. Notifications also got revamped with several noticeable enhancements.

Google hasn't revealed the release date of its new Android version. However, the final SDK for Android N was released in early June this year, to let developers test their apps on the upcoming operating system ahead of its formal rollout.

Android Nougat might not be a game changer for the mobile world. But with the new release, Google is aiming to get bigger.

The new Android version is likely to take virtual reality (VR) to a whole new level through the Daydream integration. Similarly, it will offer new experiences by enabling real apps to run without installation on the hardware, using the concept of Android Instant Apps.

Factory images of Android Nougat are expected to go live for Nexus devices very soon. Meanwhile, you can get the new experience on your Google device by downloading the latest Android Developer Preview.

Adobe announces new open standard for cloud-based digital signatures

Adobe has announced its new open standard to enable cloud-based digital signatures across mobiles and the Web. Through a Cloud Signature Consortium, the new standard will allow users to sign documents digitally from anywhere around the world.

To widen the Cloud Signature Consortium and to expand the open standard in a short span of time, Adobe has partnered with some industry leaders. The consortium is targeting to provide this secure form of electronic signing to more than seven billion mobile devices.

"With more than six billion digital and electronic signature transactions processed each year through Adobe Sign and Adobe Document Cloud, we are focused on moving the signature industry forward," said Bryan Lamkin, executive vice president and general manager of digital media, Adobe. The open standard will benefit processes where signer identification is critical. It provides a secure solution for authorising documents when applying for a marriage or business licence, as well as for some social security benefits.

Research firm IDC considers that the new development in the world of digital security will enable best practices for the industry. "An open standard focused on cloud-based digital signatures will not only help companies save time and resources but ultimately move an entire industry forward with best practices that benefit all," said Melissa Webster, vice president of content and digital media technologies, IDC.

In addition to Adobe, the Cloud Signature Consortium includes members such as trust service providers, academics and security-focused organisations. The consortium will initially focus on the European Union and then build a global network of industry contributions. Specifications of the new standard are likely to be released by the end of 2016 to kick-start the all-new experience.

NIIT utilises the MEAN stack to offer Web app development course in India

NIIT has launched a new course in Web app development under its DigiNxt series. The new course is built around the MEAN stack, a group of open source technologies comprising MongoDB, Express.js, AngularJS and Node.js.

The 14-week course will enable students to develop, test and release Web apps, embracing interfaces, the Web server layer, middleware and the back-end, supported by a product engineering culture. Participants of the programme will spend at least 70 per cent of their time on project work. For this, NIIT has designed two different projects: Online Ticket Booking and Online Cab Booking.

"With the launch of the MEAN stack course, we aim to create full-stack developers for start-ups and product engineering organisations," said Prakash Menon, president of the global skills and careers group, NIIT.

On successful completion of the course, 100 per cent of the students will supposedly be offered placement assistance. Companies like LinkedIn, Netflix and Uber are already using the MEAN stack and will require relevant talent.

"We at NIIT, with over 35 years of experience and a keen understanding of the changing skills requirements of the industry, are committed to offering a series of industry-aligned training programmes in digital transformation. To cater to this need, we recently launched Java Enterprise Apps with DevOps, which has received an overwhelming response from the industry and, in continuation to this, we now introduce the MEAN stack programme," Menon added.

Admissions for the new MEAN stack course began at NIIT centres on June 24, this year.

In addition to the new Web app development programme, NIIT offers various digital skills development courses like Java enterprise engineering with DevOps, Big Data and data sciences, cyber security, database systems, the Internet of Things (IoT), artificial intelligence, machine learning and virtual reality. These courses are targeted at graduates and graduating students in science, technology, engineering and mathematics.

Facebook launches open source Torchnet toolkit to enable new AI experiments

Intending to take artificial intelligence (AI) to new levels, Facebook has launched Torchnet. The open source software toolkit is based on the deep machine learning framework Torch, and will foster rapid and collaborative development of deep learning projects by the Torch community.

Debuted at the International Conference on Machine Learning (ICML) in New York, Torchnet delivers AI concepts through a series of boilerplate code, key abstractions and reference implementations. The offerings in the toolkit can be combined or reused separately to drive further developments among researchers in the field of deep machine learning.

"We created Torchnet to give researchers clear guidelines on how to set up their code, and boilerplate code that helps them develop more quickly. The modular Torchnet design makes it easy to test a series of coding variants focused around the data set, the data loading process, and the model, as well as optimisation and performance measures," the Torchnet team members, led by the Facebook AI Research (FAIR) lab, wrote in a blog post.

Torchnet is written in the multi-paradigm programming language, Lua. It is designed to run on any device with a standard x86 chip.

Facebook is not the only tech company that has started viewing deep learning with keen interest. From Amazon and Google to Microsoft, all the leading players are in the race to expand their presence in the world of AI with their offerings. In fact, Twitter is already using the same Torch framework to build machine learning software.

Microsoft releases .NET Core 1.0 to go beyond Windows

As part of its ongoing efforts to support the open source community, Microsoft has released .NET Core 1.0. The new .NET runtime platform has been opened to the community and is available as a cross-platform offering across Linux and Mac OS X, in addition to the company's proprietary Windows operating system.

Microsoft announced the first release of .NET Core back in 2014, but the platform took more than a year to take shape. The Redmond giant has also revealed ASP.NET Core 1.0 as an open source, modular version of the original ASP.NET framework.

Additionally, Microsoft has expanded its existing partnership with Red Hat to debut its .NET Core on Red Hat Enterprise Linux and OpenShift. "Today, we're pleased to announce that .NET Core is now not only available on Red Hat Enterprise Linux and OpenShift via certified containers, but is supported by Red Hat and extended via the integrated hybrid support partnership between Microsoft and Red Hat," the Red Hat team members wrote in a statement.

The new development is aimed at helping enterprises build new solutions on .NET and test them directly on computing platforms other than Windows. This might be considered a setback for Microsoft's own operating system. However, the software giant is targeting to please more developers through the new announcement.

Microsoft claims that about 18,000 developers from over 1,300 companies have already contributed to the initial development of .NET Core. This is just the beginning, but certainly quite a huge one to make the new platform successful in the market.

Microsoft could use developer contributions to expand its existing product offerings. Also, the extensive community support would bring .NET in line with Java and other developer platforms.

Developers can leverage solutions like Portable Class Libraries (PCL) to take advantage of .NET Core 1.0 and enhance their existing apps. Likewise, there are Xamarin tools to enable app development for multiple platforms using the new open source model.

Microsoft isn't alone in the race to develop new open source technologies. Facebook and Google are also working with the community. While Facebook has the React framework to support developers, Google is making things easier for the AI world through TensorFlow.

The verdict on how successful .NET Core will be is not yet out. Nonetheless, it has already brought Microsoft closer to Red Hat and the ever-growing open source community.

Microsoft releases WebRTC version of Skype for Linux

Months after its much anticipated arrival, Microsoft has finally released a new version of the Skype for Linux client. However, the app isn't as advanced as the one for Windows or Mac systems, and is backed by the open WebRTC project.

The new Skype for Linux is an alpha version. This explicitly confirms the presence of some bugs and an inferior user experience compared to its counterparts on other platforms such as Windows, Mac and Android, among others.

"As you may have guessed by the name, Skype for Linux Alpha is not a fully functioning Skype client as of yet. We're sharing it with you now as we want to get it in your hands as soon as possible, so we can continue to develop the new version together. Once you've downloaded the app, you'll notice that it's very different from the Skype for Linux client you use today," the Skype team wrote in a blog post.

Some basic features that made Skype a popular VoIP client in the market are missing from the Skype for Linux Alpha version. These include the highly popular video-calling support and 32-bit Linux compatibility. However, Microsoft claims that the new app sports a fast and responsive Skype UI, along with some new emoticons and options to share files, photos and videos. Video calling support is also likely to be added in the coming weeks.

Unlike some dated Skype versions for the Linux platform, the new app is backed by an all-new calling architecture. This new technology enhancement allows you to directly call your friends and family who are using the latest Skype apps on their Windows, Mac, iOS and Android devices, but restricts the calling functionality for older versions based on the open source platform.

While Microsoft is all praise for the new features, the Linux community has criticised the delay in the release of a full-fledged Skype version for the open source operating system. "Nice one, so you basically put Skype into a Web renderer and release it like an application," one of the Linux users commented on the Skype forums.

A community-based Skype version has reportedly been delivering a similar experience for quite a long time. Called Ghetto Skype, the open source client is available on GitHub and has been using the Electron framework to enable Web-powered VoIP support.

In addition to the new release of Skype for Linux, Microsoft has expanded its operations on Chrome OS and Linux by providing an easier way to make free voice and video calls on Skype without the requirement for an app or browser plugin. An ORTC technology has already been offering plugin-free calling support on Skype for Web, Outlook and Office Web apps. You just need to visit the Skype website from your Chromebook or Chrome browser on Linux to start making one-to-one and group voice calls.

The Redmond giant is also working on enabling video calls and direct calls to landlines and mobiles through Chromebook and the Chrome browser in Linux.

Fedora 24 now out with new Linux experience across the cloud, desktop and server

Red Hat's community-driven open source operating system has been updated to Fedora 24. The new version brings an all-new Linux experience across three distinct editions: Fedora 24 Cloud, Fedora 24 Server and Fedora 24 Workstation.

Fedora 24 comes preloaded with glibc 2.23 (the latest GNU C Library) to deliver better performance than its predecessor. The developer community can also utilise some improvements to POSIX compliance, and GNU Compiler Collection (GCC) 6. Moreover, the Linux 4.5.7 kernel is there to offer overall stability on each of the different Fedora editions.

"Fedora 24 continues Fedora's drive to provide the latest, powerful open source tools and components to a variety of end users, from developers to systems administrators," Fedora Project leader Matthew Miller said.

Fedora 24 Cloud comes with OpenShift Origin. The Kubernetes-based distribution enables a smoother experience for building and launching containerised applications. There is also Fedora Atomic Host to let developers run Docker applications and start their developments directly from Cockpit.

Red Hat's Fedora Project already has over a million users on its open source operating system. This user base could reach new heights with the release of Fedora 24. It also sets the ground for a new version of Red Hat Enterprise Linux (RHEL) and certainly takes Linux to new levels.

Google launches Android Skilling programme to train two million Indian developers

In a bid to utilise the extensive manpower in India, Google has announced the launch of its Android Skilling programme. The new initiative is aimed at training two million Indian developers over the next three years, and is a major contribution towards the government's Skill India initiative.

Citing a recent research report, Google says that India will have the largest population of developers in the world by 2018, producing over four million app developers. This is indeed reason enough for the search giant to launch its new programme.

"By building a world-class curriculum and making it easily accessible to millions of students and developers in India, we want to contribute to the Skill India initiative and help make India the global leader in mobile app development," said Caesar Sengupta, VP, product management at Google, in a statement.

The Android Skilling programme will have three major elements: end-to-end Android training, training channels, and an associate Android developer certification. Google has partnered with the National Skill Development Corporation of India to integrate its in-person Android developer training into the computer science curricula of various Indian universities.

Additionally, Google has teamed up with training partners such as Edureka, Koenig, Manipal Global, Simplilearn, Udacity and UpGrad. These partner institutions will operate as authorised Android training partners in the country and will provide students with a unified training model that has been designed by Google.

To create new career options for budding developers, Google has also introduced its job-oriented Associate Android Developer certification. This performance-based exam will come at a one-time charge of Rs 6,500, and will let developers obtain certification directly through the official website.

Google's Android Skilling programme emerges days after Apple announced its app centre in Bengaluru. While Google is targeting Android developers in the country, Apple is aiming to reach the large number of developers and startups who build for its iOS platform.

Last year, Google and Udacity allied with Tata Trusts to kick-start their jointly designed online training courses in India. Those courses were also intended to boost Android app development in the Indian market.

For more news, visit www.opensourceforu.com


CodeSport
Sandya Mannarswamy

In this month's column, we return to our discussion on natural language processing.

In the last two columns, we have been discussing computer science interview questions. This month, let's continue our ongoing discussion on natural language processing (NLP) by looking at one of its emerging applications.

Automation is one of the areas likely to benefit from NLP techniques and tools. Many of us have interacted with customer service agents from different sectors such as e-commerce, banking, hospitality, etc., over the phone, via email or chat, at all hours of the day, 24x7. Some of these communication channels are in real-time, such as the phone or chat, while others are offline, such as email. Today, while these services are provided by human agents, there is an increasing push towards automation.

The chat typically starts off with the agent greeting the customer and asking what they can do for the latter. The customers then state their problem or put forth their query, which is then followed by the agent asking for more details from the customers to authenticate their identity, obtain the order information, and then provide steps to answer the query or solve the problem. This is an interactive conversation which has a typical structure and follows a well-defined pattern of dialogue between the customer and agent. The agents are typically provided with a FAQ or problem-solution manual in which information on resolving customer queries is systematically documented, with details on each step the agent needs to follow. The conversations are audited to ensure that the agents were courteous in their communication with customers, that they follow the appropriate problem-resolution script, and that they use the appropriate greetings at the start and end of communication.
Let us consider a simple scenario. You have ordered an item on an e-commerce website and have not yet received it. So you are using the Web chat interface to speak to the customer service agent. After the customary greeting, you tell the agent that you ordered item XYZ one week back but it has not yet been delivered. The agent then asks you for the order details. He then apologises for putting you on hold while he looks up the backend database. Once he has the details, he either mentions that, "The item has already been shipped and you can expect to receive it on date XX-YY-ZZ," or he tells you that the delivery has been delayed and gives you the reason for the delay, etc. Now, if you analyse the many thousands of such conversations that take place every day, you can find that they follow a typical pattern. From that, the crucial next question that springs to mind is, "Can these queries be answered by an automated agent instead of a human agent?"
Well, the answer is yes, and NLP comes to our rescue in automating these communication services. While at first glance it seems quite difficult to replace human-to-human communication with human-to-virtual-agent communication, it is indeed possible. In many cases, virtual agents provide initial communication support for simple queries. However, human agents can take over in case the conversation becomes complex and deviates a lot from the normal script.

There are two key components to a solution that virtualises the communication between a human customer and a virtual agent. First, the human customer's part of the conversation needs to be analysed, and then the appropriate response and follow-up question, if any, need to be generated in natural language format and fed to the virtual agent. So both natural language processing and natural language generation need to be done. The key challenge is to ensure that the conversational experience does not deteriorate by automating the process, and that the human customer does not feel dissatisfied (of course, the ideal experience would be when the human customer cannot detect whether he is talking to a virtual agent or a human agent).

Let us analyse the problem in a little more detail, focusing first on the natural language processing part. Let us assume that we are given a database of thousands of chats that have happened between human agents and
human customers. We are now asked to build a system where


it is possible to train virtual agents that can take the place of
human customer service agents in answering customer queries.
As we mentioned before, the communication can take place
over either the phone, chat, or email. The voice communication
is further complicated by the fact that speech-to-text conversion
accuracies are still limited when it occurs in real-time. Voice
communication also requires that the speakers conversational
cues such as emotion, pitch and tone should be detected, and the
appropriate response cues should be used by the virtual agent.
This is a difficulty we will set aside for now and focus on text
based communication only, namely chat/email.
Virtual agent email replies to customer queries are simplified
by the fact that these can be driven by an offline process, and the
replies generated by the virtual agent can be subjected to random
human reviews and various other checks before they are shared
with human customers, since this is not done in real-time. On the
other hand, chat communication needs to be done in real-time and
hence requires a shorter response time, which doesnt allow any
offline review of the virtual agents communication and thereby
requires greater precision and accuracy.
What are the key problems that need to be addressed in analysing the online chat? Let us make some simplifying assumptions. We will assume that the agent starts the dialogue with the standard greeting, which is immediately followed by the customer explaining what the problem is. The first step is to identify the category of the problem/query associated with a specific customer. There can be different problem/query categories for an e-commerce website, such as queries to track an item, cancellation of a purchase, delay in refunds, an address change for a customer, etc.

We will assume that there is a fixed set of problem categories. So this step gets reduced to the following problem: given a small piece of text, classify it into one of the N known problem categories. This is a well-known document classification problem. Given a short document, can you classify it into a known category? If we assume that there are already annotated conversations where the problem category has been identified, we can build a supervised-training based classifier to classify a new incoming document into one of the known categories.
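To make this concrete, here is a minimal sketch of such a supervised classifier in Python, assuming scikit-learn is available; TF-IDF word features with a linear model are just one common baseline among many, and the tiny inline data set is purely hypothetical. Treat it only as a starting point; the questions that follow ask you to reason about better choices.

# A minimal supervised problem-category classifier (sketch only).
# Hypothetical annotated data: (chat text, problem category).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

chats = [
    "I ordered item XYZ a week back but it has not been delivered",
    "Please cancel my order for the blue kettle",
    "My refund for order 1234 has still not arrived",
]
categories = ["track_item", "cancel_purchase", "refund_delay"]

# Word unigram/bigram TF-IDF features feeding a multi-class linear model
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(),
)
model.fit(chats, categories)

print(model.predict(["where is my order, it was shipped last week"]))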
Here is a question for our readers: If you are asked to come up with a supervised classification scheme for this problem, given annotated data of conversations and their problem categories, (a) what features would you use, and (b) which supervised classifier would you use? Remember that this is a multi-class classification problem, since there are many problem categories.
Now let us make the assumption that we have access to a corpus of conversations, but the conversations are not annotated with the problem category. Can you describe an unsupervised technique with which you can still determine the problem category of an incoming chat, given the corpus?
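One possible direction, sketched below under the assumption that chats about the same problem share vocabulary, is to cluster TF-IDF vectors of the corpus (k-means here) and route an incoming chat to its nearest cluster; whether this is the best unsupervised technique is part of the exercise.

# Unsupervised sketch: cluster un-annotated chats, then assign new ones.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "item not delivered yet",
    "order still not shipped",
    "cancel my order please",
    "want to cancel the purchase",
]
vec = TfidfVectorizer()
X = vec.fit_transform(corpus)

# Each learned cluster approximates one problem category
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Route a new chat to the nearest learned cluster
new_chat = vec.transform(["my package has not arrived"])
print(km.predict(new_chat))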
Now let us assume that we have correctly identified the problem category. As we said before, human agents are typically given access to the solutions database or FAQs, which they consult to find the answers for each problem category. They use these answer templates and provide the response to the user. If we don't have access to this solutions database, is it possible for us to analyse the conversations corpus and automatically identify the answer template for each problem category? First, we cluster the chats as per their problem category. Now, we have a subset of chats belonging to each problem category. Given this subset S for problem category P, how do we find out what the possible answer template script for this problem category is? Note that the human agent verbalises the answer template in a suitable form for a specific customer, personalising it for each customer. Also, given that it is an interactive dialogue, there can be considerable noise introduced for each customer, based on the specific query formulation. One possibility is to analyse the conversational snippets uttered by the agent as part of the chat, and find out the common sequence of actions suggested by the agent across these conversations.
Here is a question for our readers: Given a corpus of conversational chats corresponding to a specific problem category, marked with agent utterances and customer utterances in the dialogue, you are asked to find out the underlying common action sequence, if any, suggested by the agent. Let us make some simplifying assumptions. Let us assume that such an action sequence definitely exists in all the conversations. We will also assume that the verb phrases used in the agent's conversation denote the steps of the action sequence. Let us assume that each unique verb in our vocabulary can be represented by a unique character. Given that, we can represent this as a simpler problem: you are given N sequences of character strings (each string represents the action sequence of verbs in one chat). You are then asked to find the longest common sub-sequence of all these sequences. Can you come up with an algorithm? Please note that we are looking for the longest common sub-sequence across N multiple sequences, where N can be greater than two. Can you come up with a solution which runs in polynomial time? Please send me your responses over email, on how you would solve this problem.
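As a warm-up for this exercise, here is the classic dynamic-programming solution for the two-sequence case; the interesting part of the question is what happens as the number of sequences grows (for a fixed N, the DP generalises to an N-dimensional table, while the problem with an unbounded number of sequences is known to be NP-hard).

def lcs(a: str, b: str) -> str:
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk back through the table to recover one LCS
    out = []
    i, j = m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("AGCAT", "GAC"))  # prints one LCS of the two strings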
If you have any favourite programming questions/software
topics that you would like to discuss on this forum, please
send them to me, along with your solutions and feedback,
at sandyasm_AT_yahoo_DOT_com. Till we meet again next
month, happy programming!

By: Sandya Mannarswamy

The author is an expert in systems software and is currently working as a research scientist at Xerox India Research Centre. Her interests include compilers, programming languages, file systems and natural language processing. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group 'Computer Science Interview Training India' at http://www.linkedin.com/groups?home=&gid=2339182

Exploring Software
Anil Seth
Guest Column

The Possibilities with Blockchains

The heart of bitcoin is the blockchain, which could be considered the worldwide ledger of value: a distributed database that maintains a continuously growing list of tamper-proof data records.

Bitcoins never interested me sufficiently. I didn't ever try to understand how the crypto-currency works. However, my view changed with the recent announcements related to the Linux Foundation's Hyperledger Project, IBM's contribution of code to the project, and Microsoft's Bletchley project for blockchains, which uses open source technologies. Blockchains are the distributed database created for and used by bitcoins.

A good place to learn about bitcoins and blockchains is the Khan Academy's course on the subject at https://goo.gl/sz3BG0. Another good article is by Scott Driscoll on 'How Bitcoin Works Under the Hood' at http://goo.gl/t67EXn.

A bitcoin blockchain

Basically, the blockchain is a ledger that maintains all the transactions of bitcoins. The elegance of blockchains lies in that they ensure the transactions can be trusted without the need for a trusted authority. They use private/public key pairs to ensure the following for a transaction:
- Only the owners of a bitcoin can spend it, by using their private key.
- The sender digitally signs the transaction, so that it cannot be forged or disowned.
- The money is transferred using the public key of the recipient, who can prove ownership by using her/his private key.

Since account balances are not maintained, a transaction will consist of links to the previous transactions by which the bitcoins being spent were received. This transaction message is broadcast to the network of bitcoin nodes. Any node can refer to the chain of previous transactions to confirm that the spender does indeed own these bitcoins.
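To illustrate the sign-and-verify mechanics, here is a toy sketch only: bitcoin itself uses ECDSA over the secp256k1 curve, whereas this example uses Ed25519 keys from the pyca/cryptography package purely for brevity.

# Toy sign/verify of a transaction message (not real bitcoin crypto).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sender_key = Ed25519PrivateKey.generate()
transaction = b"pay 1 BTC from Alice to Bob, funded by txn abc123"

# Only the holder of the private key can produce this signature...
signature = sender_key.sign(transaction)

# ...but anyone holding the public key can check it, so the transaction
# can be broadcast and verified without a trusted authority.
public_key = sender_key.public_key()
try:
    public_key.verify(signature, transaction)
    print("signature valid: transaction accepted")
except InvalidSignature:
    print("signature invalid: transaction rejected")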
The blockchain has to ensure that past transactions cannot be modified or deleted. All nodes of the network also have to agree on a common blockchain. The bitcoin system again relies on mathematics to ensure that it is almost impossible for any confirmed transaction to be altered, or for someone to defraud another by spending the same bitcoin twice. The blockchain ensures this as follows:
- Transactions are grouped in blocks, which also contain a reference to the previous block. Transactions in a block are deemed confirmed.
- Any node can create a block of recent unconfirmed transactions and send it across the network, provided it solves a crypto hash problem before any other node. Solving the puzzle requires both a very high processing power and the luck of a lottery. The nodes which do so are called the miners, because if they succeed, they are allowed to create (mine) a fixed number of bitcoins as a reward. The bitcoin system adjusts the solution requirements so that some node in the network can solve the problem in 10 minutes.
- If, in the unlikely event, two nodes find a solution at the same time, whichever branch becomes longer will be the one accepted by the network.
- If any block is changed, all subsequent blocks become invalid and need to be recreated. Hence, unless one node has more processing power than the combined processing power of the rest of the network, it is highly improbable that any node can successfully replace a confirmed transaction. (A toy sketch of this chaining follows below.)
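The following toy sketch illustrates the chaining and mining ideas above; the four-zero difficulty prefix and the block layout are illustrative assumptions, nothing like real bitcoin parameters.

import hashlib
import json

DIFFICULTY = "0000"  # real networks adjust this target continuously

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(transactions: list, prev_hash: str) -> dict:
    # Brute-force a nonce until the block hash meets the difficulty target
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txns": transactions, "nonce": nonce}
        if block_hash(block).startswith(DIFFICULTY):
            return block
        nonce += 1

genesis = mine(["coinbase: 50 BTC to miner"], prev_hash="0" * 64)
block2 = mine(["Alice pays Bob 1 BTC"], prev_hash=block_hash(genesis))

# Tampering with the earlier block invalidates every later block,
# because block2's stored prev hash no longer matches.
genesis["txns"] = ["coinbase: 5000 BTC to miner"]
print(block2["prev"] == block_hash(genesis))  # False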

Extending blockchains

As can be expected, open source makes it possible for people to extend the functionality of a system and find new, unexpected uses for it.

One interesting use for blockchains that has been explored is in voting. Each voter gets a coin which can be spent on any candidate. In cases where the secrecy of the ballot is not an issue, this is a straightforward use case. Where secrecy is needed, anonymising software can be used to help ensure the voter's identity is not revealed. Denmark's Liberal Alliance has used blockchains for its internal elections. Each voter can maintain the voting results, and there is no need for a trusted election commission.

People have used bitcoin transactions to store data instead of transferring coins. For example, you could compute a hash of a document and store it in a blockchain. That can act as proof that the document being viewed has not been changed after that transaction. It could easily replace a notary. Dispute resolution could take minutes instead of, possibly, years. Storing the hash of a document in a blockchain can also act as proof in case of copyright claims.
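The notarisation idea boils down to publishing a digest of the file; here is a minimal sketch (the step of embedding the digest in an actual blockchain transaction is elided, and contract.pdf is a hypothetical file).

import hashlib

def document_digest(path: str) -> str:
    # SHA-256 of the file contents; embedding this digest in a confirmed
    # transaction later proves the document existed, unchanged, at that time.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: store the digest in a blockchain transaction, then
# re-hash the document during a dispute and compare the two values.
print(document_digest("contract.pdf"))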
While the bitcoin system is public and permissionless, companies are extending the system to add the need for permissions and create private blockchains. Banks are exploring the use of blockchains to make existing processes more efficient: for example, inter-bank transactions, trading of securities, money transfers, etc. Blockchain usage can enable inter-bank settlements without the need for a central bank.

One area in which consumers could benefit immensely is the transfer of money across countries. This is currently slow, and often involves hefty charges for both the sender and the recipient.

Settlements with credit cards take days, and credit card transactions add a significant cost for merchants. Settlements with blockchains can be done in minutes. Reports state that the Reserve Bank of India is looking into the possibility of using blockchains to reduce the importance and use of cash in the Indian economy.
Any item that has a value can potentially be traded and its ownership maintained using blockchains. You can get an idea of the range of applications being explored in an article called 'Let's Talk Payments' at https://goo.gl/ZF78do.

A number of countries are exploring the use of blockchains for land records and property transactions. This has immense social benefits, as the existing processes are inefficient, incomplete and prone to fraud and corruption.

Time will tell whether blockchains live up to the current hype and expectations. There are open source alternatives to bitcoin blockchains that have additional capabilities, which you may explore, or you could even add to the efforts of groups building future blockchains.

By: Dr Anil Seth

The author has earned the right to do what interests him. You can find him online at http://sethanil.com and http://sethanil.blogspot.com, or you can reach him via email at anil@sethanil.com.

OpenStack India will be co-located with


Open Source India at Bengaluru, this October
OpenStack India 2016 will be co-located this year with
Open Source India, which is Asia's largest gathering
on open source. OpenStack India is a one-day
conference for developers, users and administrators
as well as a great place to get started with OpenStack
cloud software.
OpenStack, which has thousands of developers
all over the world working in tandem to develop the
strongest, most robust and most secure product, is
believed to be the future of cloud computing, where it is
redefining Infrastructure as a Service (IaaS). Providing
infrastructure means that OpenStack makes it easy for
users to quickly add a new instance, upon which other
cloud components can run. Typically, the infrastructure
then runs a platform upon which a developer can create
software applications that are delivered to the end users.
As more and more companies begin to adopt
OpenStack as a part of their cloud toolkit, the number
of applications running on an OpenStack backend is

expanding. Because OpenStack is not owned by one single company, getting
information about it can be a little tedious, which is
precisely the reason for us pledging to ensure that
our audience gets to know the latest in OpenStack.
The idea is to bring you up-to-date information about
OpenStack, to help provide answers to common
questions from end users, developers and decision-makers seeking to deploy it in their organisations.
The most influential representatives of the open
source community from India and abroad, as well as
industry experts, will share the latest IT trends in the
cloud architecture based on the OpenStack platform.
If you're looking for OpenStack related
information/updates, products and services such as
distributions, appliances, public clouds, consultants or
training, this is where you should be!
Mark the date! It's October 21, 2016, at NIMHANS
Convention Centre, Bengaluru.



Jugnoo Uses Open Source to Dominate the Auto-rickshaw Aggregator Space
Chandigarh based Jugnoo is using open source technologies to upgrade the Indian urban
transport scene through an advanced app-based model. In the process, it is helping to
enhance the earnings of thousands of auto-rickshaw drivers in the country.

As the world is moving towards digitalisation, the Indian transport system is becoming smarter.
Many app-based aggregators are stepping into the
market to transform the traditional commuting options
across the country. Jugnoo is a Chandigarh based startup
that has picked some open source technologies not just
to grow bigger in the
world of app-based
transporters but also
to reform Indian
transport, as a whole.
"Open source
provides us the ability
to use the software
fairly quickly without
getting into formal
business software
purchase processes. In
startups, where speed
is the king, this is
something that is very
helpful when trying out something new," said Chinmay
Agarwal, co-founder and CTO of Jugnoo, in an exclusive
conversation with OSFY.
Agarwal's team has used some open source frameworks to design the core structure of Jugnoo, with an aim to help auto-rickshaw drivers. Additionally, there is the Elasticsearch engine
support to help developers fix bugs within the Jugnoo apps.
The company recently ventured into the machine
learning field and has introduced a bot that offers the auto-rickshaw booking service via Facebook Messenger. The bot
comes through some API integration and enables real-time
notifications using the messaging platform.
In addition to its newer features for some advanced
smartphone users like the Messenger bot, Jugnoo uses the


open source project Grafana for its infrastructure monitoring, and the JavaScript library D3.js to enable various dynamic
data visualisations. "We like using open source products wherever we can, and it is a fairly reasonable approach," Agarwal asserted.
Apart from using community-based solutions to
enable its key services,
Jugnoo is leveraging the
power of open source in
its organisation as well.
The startup has deployed
Odoo for its HR operations.
It is also planning to
switch completely from
proprietary offerings like
Tally and SAP ERP to
Odoo in the near future.

Strong community support is vital

"A lot of organisations are currently using Odoo for their business processes as it is cheaper and has strong community support. We liked its implementation for HR, and since we are small at this point, it will be relatively easy for us to transition ultimately to Odoo for other business processes too," Agarwal said.
Deploying open source technologies is not always a
breeze, though. Agarwal mentioned that certain finishing
touches are sometimes missing in open source solutions.
"If the software does not have well-managed community support, it is often problematic to use and hence not recommended," he added.
However, Jugnoo addresses this challenge by using
solutions that are backed by fairly strong communities.



Agarwal stated, "We try to stick with software that has strong community support, and if we happen to require software that doesn't have a strong community behind it, we commit our internal resources for its upkeep."

Why open source for startups?

Jugnoo was co-founded by Agarwal and his IIT course-mate Samar Singla in October 2014. Both the co-founders are not new
to the startup culture as they launched mobility solutions provider
Click Labs before this new venture. The prior experience helped
the team choose the relevant open source technologies.
"Open source software provides easy and speedy installation and use, results in fewer audit headaches as you scale up, costs less and overall, helps startups a lot when you often do not have dedicated bureaucratic IT teams," Agarwal stated.

Cost-effectiveness is a major attraction

One of the factors behind choosing open source solutions over some proprietary options is certainly their cost-effectiveness. In the case of Jugnoo, Agarwal revealed that
the entire infrastructure costs were just one-third the cost
of completely proprietary infrastructure, thanks to open
source deployments.
When asked about how security is ensured through open
source deployments, Agarwal said that his team deploys only
those solutions that are already being used by a large number
of organisations across the globe. But if the company really
needs to use some less-known open source projects, it tests its
functionality in a sandbox environment first.
Jugnoo currently has a presence in 35 cities and around
12,000 auto-rickshaw drivers in India are using its app to earn
their livelihood. The startup is not looking to grab the market of
taxi aggregators like Uber and Ola. Instead, it continues its focus
on auto-rickshaws in the country by using open source solutions.

Chinmay Agarwal, co-founder and CTO of Jugnoo, at the Jugnoo server room

"We focus on making our auto-rickshaw drivers the core revenue builders by engaging them in the services we provide, be it Dodo or Fatafat. We aim at growing to be the market leader in the auto-rickshaw space and to capture up to 20 per cent of the total market. We see ourselves in another 65 towns by the end of this financial year," Singla said.
Jugnoo's success with open source technologies is very
encouraging for organisations thinking of moving away from
proprietary solutions. Open source has been fruitful for many
Indian entrepreneurs like Agarwal and Singla, who are aiming
to grow bigger through some efficient deployments.
By: Jagmeet Singh
The author is an assistant editor at EFY.

OSFY Magazine Attractions During 2016-17

Month            Theme
March 2016       Open Source Databases
April 2016       Backup and Data Storage
May 2016         Web Development
June 2016        Open Source Firewall and Network Security
July 2016        Mobile App Development
August 2016      Network Monitoring
September 2016   Open Source Programming Languages
October 2016     Cloud Special
November 2016    Open Source on Windows
December 2016    Machine Learning
January 2017     Virtualisation (containers)
February 2017    Top 10 of Everything



"Educating the marketplace about deploying data encryption is one of our fundamental charters"
Why is data encryption gaining importance? How cost-effective are the solutions emerging for enterprises and startups? Rahul Kumar, country head of WinMagic, shared his views on these topics with Jagmeet Singh of OSFY. Here are excerpts from the conversation.

Q Data loss is indeed becoming a business problem. What are the solutions WinMagic offers to save crucial data?
Essentially, data losses result in big
problems because all companies have
some important data. Companies are
doing a lot to try and protect their
data but what ends up happening
is that they leave behind a gaping
hole in the entire system when they
don't consider one-key encryption.
What happens if you lose your laptop
today? A large amount of your data
will be gone, and you don't know
just how difficult it is to acquire the
same data back again. Nor do you
know who has your information
now. There is only one solution to
protect your data in such cases. It is
to encrypt the device with solutions
such as WinMagic. Data encryption
solutions limit the instances of data
loss and offer a strict security layer
on top of users' personal data.

Rahul Kumar,
country head of WinMagic




We use an intelligent key management system with
encryption. Certainly, data encryption has become a critical
element in the industry. So our prime focus area is to provide
our key management system to our customers. This ensures
that everything they are doing is on encryption.

Q What is the USP of WinMagic's intelligent key management system?
The USP of our intelligent key management system is that
it offers the best user experience. Experience matters a lot
in the world of data encryption as the users themselves
need to manage the encryption of their personal data. Thus,
we always aim to deliver the best experience to our users.


Q Do you have any unique pricing model that distinguishes WinMagic from other data encryption companies?
The beauty of the data encryption market is that no
pure-play competition exists among different solution
providers. Therefore, WinMagic offers a value-based
model as its unique feature.

Q India is yet to implement a strong encryption policy. What's your take on this as a solutions provider?

Yes, we need a strong and clearly worded encryption policy. One of the key elements missing in policies is that today, organisations don't have to disclose anything if they lose customers' data. There should be a policy which mandates that organisations make a full disclosure if they have lost end users' data. That would ensure that organisations take the necessary steps to protect their customers' data.

Q What are your strategies to push Indian customers into deploying data encryption?

Educating the market about deploying data encryption is one of our fundamental charters. We understand that currently, the market has not been penetrated, and that is a critical opportunity for us. Customers are increasingly recognising the value of data and hence the importance of encrypting it. This is the big opportunity that we see in the market.

Q How do you educate the market about data encryption?
We use several ways to educate the market about data
encryption. We talk to engineers, conduct webinars and
participate in some industry events to make people aware of
the need for data encryption. Also, we host our own events
and participate in the main partner-generated activities.

Q What is the pricing trend of the available data encryption solutions? With evolving technology, is it going down on a per-user basis?
The pricing trends in the data encryption world are quite
simple. Customers are not mainly considering prices while
choosing solutions for their requirements. Instead, they
opt for solutions by their value. So we can say that value is
driving the data encryption market, not price.
There is a myth that prices are going down with the
evolution in technology. Certainly, technology is evolving
and in fact, technology evolves all the time. But at the
same time, the prices are not going down proportionate to
the evolution in technology.

Q What is the typical pricing model here? Is it on a per-user basis or are there other models also available in the market?
As value based offerings primarily influence the market of data encryption solutions, the pricing model too is typically based on only the value. This refers to the value customers get from the solutions, in terms of the security of their data.

Q Where does the cloud fit in your plans?


Customers are increasingly moving towards cloud
infrastructure to leverage the flexibility that the cloud
provides. However, in many cases, the cloud service
providers are not responsible for the security of the
organisations data on the cloud. It is the responsibility of
the organisations themselves to protect their data. This is
what we enable by encrypting the virtual machines on the
cloud: public, private and hybrid.

Q What are the technologies that you are using to encrypt the data stored on the cloud?
We are using the same AES-256 algorithm that is already
encrypting data through various solutions across the
globe. On the cloud, we are really looking at the access to
the cloud system. This relates to how you access anything
on the cloud. We encrypt the virtual machine and the
instance on the cloud to protect the system.

Q Lastly, what are your views on the future of data encryption?
The adoption of data encryption has already increased
significantly. I am quite confident that in the next
two or three years, this adoption will move up to new
levels. Today, customers increasingly understand the
value of their data. This will result in the growth of
data encryption in the future.


Build a Hybrid Cloud with Eucalyptus
Eucalyptus is open source software for building an AWS (Amazon Web Services)
compatible private and hybrid cloud computing environment. It provides flexibility,
cost benefits, agility and data governance.


Eucalyptus is open source software that implements an Amazon Web Services compatible cloud, which is cost-effective, flexible and secure. It can be easily deployed
in existing IT infrastructures to enjoy the benefits of both
public and private cloud models. Eucalyptus is an acronym
for Elastic Utility Computing Architecture for Linking Your
Programs to Useful Systems. Basically, Eucalyptus provides
an Infrastructure as a Service (IaaS) offering. The main
advantage is that it provides easy and secure deployment. The
private cloud is deployed in the premises of the enterprise
and can be accessed by users over the intranet, so critical
and important data remains secure from outside intrusion.
Also, it provides AWS APIs. So, at any time, consumers can
easily migrate or load balance their less sensitive data into the
Amazon public cloud; thus, they don't have to worry about
the elasticity of their network.

History

Development on Eucalyptus began as a research project at the US-based Rice University in 2003. In 2009, a company named
Eucalyptus Systems was formed to commercialise Eucalyptus
software. Later, in 2012, the firm entered into an agreement
with AWS for maintaining compatibility and API support.
In 2014, it was acquired by HP (Hewlett-Packard), which
incidentally has its own cloud offerings under the HPE Helion
banner. The Helion portfolio has a variety of cloud related
products, which includes HPs own flavour of OpenStack
called HP Helion OpenStack. Now, Eucalyptus is a part of
the HPE portfolio and is called HPE Helion Eucalyptus.
It provides an open solution for building a hybrid cloud,
leveraging the benefits of other HP Helion products.

Eucalyptus architecture

Figure 1 demonstrates the overall architecture of Eucalyptus in data centre design. Eucalyptus CLIs can
manage both AWS and its own private instances. Users
can easily migrate instances from Eucalyptus to Amazon
Elastic Compute Cloud. Compute, storage and network are managed by the virtualisation layer. Instances are separated by
hardware virtualisation. The following terminology is used
by Eucalyptus.
Images: Any software module, configuration,
application software or system software bundled and
deployed in the Eucalyptus cloud is called a Eucalyptus
machine image (EMI).
Instances: When we run the image and use it, it becomes
an instance. The controller will decide how much memory to
allocate and provide all other resources.

Insight
Data Center

User
Interface
and API

Management
Console

AWS-Compatible
APIs

Cloud

Cloud Controller
CLC

Scalable Object
Storage
SOS

Applications

HPE Helion Eucalyptus


AWS Compatible API

AWS Compatible API

Auto scaling

Cloudwatch

Elastic load
balancing

Compute

Storage

Network

Hybrid Management Console

Cluster for
High
Availability

Admin

Cluster Controller
CC

Storage Controller
SC

Identity and
Access
Management

Cloud Consumers and Admin

Virtualization

Nodes

Node Controller
NC
VM

Node Controller
NC

VM

VM

VM

Node Controller
NC
VM

Physical Infrastructure

VM

Figure 2: Eucalyptus components


Figure 1: Eucalyptus software architecture

Networking: The Eucalyptus network is divided into three modes.
Managed mode: In this mode, it just manages a local
network of instances, which includes security groups
and IP addresses.
System mode: In this mode, it assigns a MAC address
and attaches the instances network interface to the
physical network through the NC's bridge.
Static mode: In this mode, it assigns IP
addresses to instances.
Static and system mode do not assign elastic IPs, security
groups, or VM isolation.
Access control is used to provide restriction to users. Each
user will get a unique identity. All identities can be grouped
and managed by access control.
Eucalyptus elastic block storage (EBS) provides block-level storage volumes, which we can attach to an instance.
Auto scaling and load balancing is used to automatically
create or destroy instances or services based on requirements.
CloudWatch provides different metrics for measurement.

Eucalyptus components

Eucalyptus has a total of six components, of which five are the main components and one is optional.
Cloud controller: Cloud controller (CLC) is the main
controller, which manages the entire cloud platform. It
provides a Web and EC2 compatible interface. All the
incoming requests come through the Cloud controller. It
performs scheduling, resource allocation and accounting. It
manages all the underlying resources. Only one controller can
exist per cloud.
Walrus: This is similar to AWS S3 (Simple Storage
Service). It provides persistent storage to all the instances. It
can contain any kind of data like application data, volume or
image snapshots.
Cluster controller: This is the heart of the cluster within
a Eucalyptus cloud. It manages VM (instance) execution and
service level agreements. It communicates with the storage and network controllers.


Storage controller: Storage controller (SC) is similar to
AWS EBS (Elastic Block Storage). It provides block level
storage to instances and snapshots within a cluster. If an
instance wants persistent storage outside of the cluster, then it must pass through Walrus. The storage controller doesn't
handle this kind of request.
Node controller: NC (node controller) hosts all the
instances and manages their end points. There is no limit
to the number of NCs in the Eucalyptus cloud. It takes
images and also caches from Walrus and creates instances.
One should manage the number of NCs used as it affects
the performance.
Enterprises can use any AWS-compatible tools or scripts
to manage their own on-premise infrastructure. AWS API
is implemented above Eucalyptus; so both are backward
compatible. Users can run any apps that are supported by
AWS from Eucalyptus.
Euca2ools: Euca2ools is the Eucalyptus CLI for interacting with Web services. It is a Python based tool
which is compatible with all the AWS services like S3, auto
scaling, ELB (Elastic Load Balancing), CloudWatch, EC2,
etc. It is an all-in-one solution for both the AWS and the
Eucalyptus platforms.
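As a quick, hedged illustration of how Euca2ools mirrors the familiar EC2-style commands (the image ID and key pair name below are placeholders, not real values):

euca-describe-images
# list the machine images (EMIs) registered in the cloud
euca-run-instances emi-12345678 -k mykey -t m1.small
# launch an instance from an EMI with a key pair and instance type
euca-describe-instances
# check the state of the running instances

Because the same tools speak to AWS endpoints as well, a script written against a Eucalyptus private cloud can usually be pointed at EC2 with little change.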

Other tools

There are many other tools that can be used to interact with
Eucalyptus and AWS, and they are listed below.
s3curl: This is a tool for interaction between Eucalyptus
Walrus and AWS S3.
Cloudberry S3 Explorer: This Windows tool is for
managing files between Walrus and S3.
s3fs: This is a FUSE file system, which can be used to
mount a bucket from S3 or Walrus as a local file system.
Vagrant AWS Plugin: This tool provides config files
to manage AWS instances and also manage VMs on
the local system.
You can refer to https://github.com/eucalyptus/eucalyptus/wiki/AWS-tools for more information.


The advantages of the Eucalyptus cloud

Eucalyptus can be used to get the advantages of both the public and private clouds.
Users can run Amazon or Eucalyptus machine images as
instances on both the clouds.
It has 100 per cent API compatibility with all the AWS
services. There are many tools developed to interact
seamlessly between AWS and Eucalyptus.
Eucalyptus can be used with DevOps tools such as Puppet
and Chef. Popular SDKs like AWS SDKs for Java and
Ruby and Fog work smoothly with Eucalyptus.
It is not very popular in the market but is a strong
competitor to OpenStack and CloudStack.
Table 1 sums up the features of the Eucalyptus private
cloud software.

Architecture:    Five main components; same as AWS
Installation:    Easy compared to other IaaS offerings
Administration:  Strong CLI compatible with the EC2 API
Security:        Baseline security + component registration
Popularity:      Medium
IaaS offering:   Public + private (hybrid)

Table 1: Eucalyptus private cloud software summary

Eucalyptus vs other IaaS private clouds

There are many IaaS offerings available in the market like OpenStack, CloudStack, Eucalyptus and OpenNebula,
all of which are being used as both public and
private IaaS offerings.
Of all the IaaS offerings, OpenStack still remains
the most popular, active and biggest open source cloud
computing project, yet enthusiasm for Eucalyptus,
CloudStack and OpenNebula remains solid. Based on
business critical requirements, cloud service providers and
administrators can choose specific IaaS offerings.

By: Maulik Parekh


The author has an M. Tech degree in cloud computing from VIT
University, Chennai. He has rich and varied experience at reputed
IT organisations. He can be reached at maulikparekh2@gmail.com
or https://www.linkedin.com/in/maulikparekh2.



Must-Have Network Monitoring Tools for Systems Administrators
The trick in managing networks is to anticipate glitches and problems, and nip them
in the bud before they can do much harm. To manage and monitor networks manually
is tedious and time consuming, besides being quite impossible when there are a
large number of nodes; hence the need for monitoring tools. This article gives you a
sampling of the most indispensable monitoring tools.

Consider today's computer networks, which are large, complex systems in which many components from various vendors are integrated in order to pass on
information. These networks range from small campuses
to large geographical regions and worldwide networks.
The main purpose of having these networks is to share
information among computers. Networks have various
applications like email, online transaction processing, remote
connectivity, downloading and various social media activities.
Organisations that have installed these network applications
require that these applications run without any hiccups. For
this, network managers have to monitor the network in order
to facilitate information flow and to check the status of all
network equipment.
Network monitoring is regarded as difficult and
demanding, yet a vital part of any network or systems
administrators job. Network monitoring enables operators
to fully understand the current behaviour of the network. So,
accurate and efficient monitoring is important to ensure that
the network operates in the defined manner and network administrators find it easier to troubleshoot any sort of error in the network. Network monitoring is defined as the process of capturing network traffic and inspecting it closely
to determine what is happening on the network.
Organisations require their network to be up and functioning 24x7 in order to generate revenue, for which they
need the right set of tools to monitor and manage the network.
Some tools are open source while some are proprietary and
hence quite expensive. Organisations have heterogeneous
environments comprising multiple network hardware and
software from different vendors running under the same
roof, for which the network monitoring solution needs to be
flexible enough to adapt to changing environments and should
support various kinds of hardware and software.
In order to provide organisations with dynamic and flexible
solutions for network monitoring, the preferred methodology,
nowadays, is to make use of open source network monitoring
tools. But finding the most suitable tool for network monitoring
that fits the precise needs of a particular organisation is quite a
challenging task as there are numerous options available.
The various open source tools currently available cover
almost all requirements for monitoring networks. These
include Nagios, Zabbix, Cacti, OpenNMS, Icinga, Op5,
Munin, Network Management Information System (NMIS),
NetXMS, etc.
Let us delve deeper into these tools to get an idea of their
features and technicalities.

Nagios

Nagios, now known as Nagios Core, is a free, open source and powerful network monitoring tool which facilitates
monitoring systems, networks and infrastructure, and
ensures that all sorts of critical systems, applications and
services are always up and running efficiently. Nagios
Core offers monitoring and alerting services for servers,
switches, applications and all sorts of network services.
If any problem arises in the network, Nagios Core alerts
the administrators about it and alerts them again when the
issue gets resolved.
Nagios Core is regarded as the heart of the application,
which comprises the core network monitoring engine and the
basic Web based UI. On top of Nagios Core, administrators can implement plugins that provide additional monitoring capabilities like services, applications,
data visualisations, graphs and even MySQL database support.
There are various versions of Nagios:
a. Nagios XI facilitates easy monitoring of mission-critical
infrastructure like applications, services, operating
systems, network protocols, system metrics and network
infrastructure.
b. Nagios Log Server simplifies the process of log data
searching, as it automates the process of alerts when any
potential threat is identified and quickly logs the data. The
Nagios log server enables administrators to search for all
sorts of network logs at one location with high availability
and fault tolerance features.
c. Nagios Network Analyser provides in-depth lookup of
all network traffic sources and security threats, enabling
systems admins to gather all the information to monitor
the health of the network.
d. Nagios Fusion provides network administrators with an
easy and in-depth comprehensive view of multiple Nagios
Core or Nagios XI servers.
Version 4.1.1 is the latest release of Nagios available for
free download under GPLv2.
Listed below are the main features of Nagios Core.
Monitors all sorts of network services like SMTP, HTTP,
HTTPS, NNTP, SNMP, SSH, FTP, etc.
Monitors all host resources like processor load, disk usage
and all operating systems like Windows, Linux and their
event logs.
Remote network monitoring via Nagios Remote Plugin
Executor.
Proper data visualisation via graphs using plugins.
Can define event handlers to run during service or host
events for proactive problem resolution.
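To give a feel for how Nagios Core is told what to watch, here is a minimal sketch of the object definitions it reads from its configuration files; the host name and address are hypothetical, and the templates referenced are the stock ones shipped with a default install:

define host {
    use        linux-server    ; inherit defaults from the sample template
    host_name  web01           ; hypothetical server
    address    192.168.1.10
}

define service {
    use                  generic-service
    host_name            web01
    service_description  HTTP
    check_command        check_http    ; standard plugin from the Nagios Plugins package
}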

Nagios agents: These are listed below.


1. NRPE: Nagios Remote Plugin Executor (NRPE) allows remote system monitoring of various resources like disk usage, system load and the number of users logged in (a sample invocation appears after this list).
2. NRDP: Nagios Remote Data Processor (NRDP) has a
flexible data transport mechanism and processor, and uses
standard ports and protocols (HTTP and XML).
3. NSClient++: This is used to monitor various services of
Windows machines like memory usage, CPU load, disk
usage, running processes, etc.
4. NCPA: Nagios Cross Platform Agent (NCPA) supports
installation on Windows, MAC OS X and Linux for monitoring
CPU usage, disk usage, processes, services and network usage.
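As a hedged example of how the NRPE agent mentioned above is queried, the check_nrpe plugin on the Nagios server asks the remote agent to run a command defined in its local nrpe.cfg (the path, IP address and command name here are illustrative defaults):

/usr/local/nagios/libexec/check_nrpe -H 192.168.1.10 -c check_disk
# asks the agent on 192.168.1.10 to execute its locally defined check_disk command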

Zabbix

Zabbix is regarded as an enterprise oriented open source monitoring tool for networks and all sorts of application software.
It works with a centralised Linux based Zabbix server.
Zabbix is designed to do all sorts of monitoring and
tracking with regard to network services, servers and various
network hardware. It makes use of MySQL, PostgreSQL,
SQLite, Oracle or IBM DB2 to store the data. It offers data
gathering and monitoring options for servers and even supports
the monitoring of virtual machines.
Architecture: Zabbix architecture is composed of three
different servers/components: the Web server, the database server and the Zabbix server.
In addition, using the whole Zabbix architecture in large
environments allows us to have two other actors, i.e., Zabbix
agents and Zabbix proxies, which also play a crucial role in
efficient overall network monitoring.
Zabbix server acquires data from Zabbix proxies, which in
turn acquire data from the Zabbix agents connected to them.
And with all the data stored on a database server, the whole
system will be monitored via a Web based UI.
The latest version of Zabbix is 3.0.3 which was released
in May 2016.
The unique features of the Zabbix network monitoring
system are listed below:
Zabbix has a centralised Web interface for monitoring all
servers, services and other network hardware.
Zabbix systems are easy to integrate with other systems because
of the API available in varied programming languages.
Zabbix enables systems administrators to monitor the
network via SNMP, IPMI, JMX, ODBC, SSH, HTTP,
HTTPS, TCP/UDP, etc.
Other features include: Web monitoring, secure user
authentication, flexible email notifications, audit log and
agent-less monitoring.
The Zabbix monitoring system offers a wide range of
customisation options for items, graphs and data visualisation.
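For a taste of how the Zabbix server pulls data from an agent, the zabbix_get utility can query an agent by hand; a minimal sketch, assuming an agent is running on the hypothetical host below:

zabbix_get -s 192.168.1.10 -k system.cpu.load[all,avg1]
# asks the agent for the one-minute CPU load average, a standard agent item key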

Cacti

Cacti is regarded as a complete open source Web based graphical network monitoring tool written in PHP/MySQL.


It makes use of the RRDTool (Round Robin Database Tool)
to store data, generate graphics and collect network traffic
data using the Net-SNMP protocol. Being a powerful network
monitoring tool, Cacti allows systems administrators to collect
data from almost any sort of network hardware like routers,
switches, firewalls, load balancing equipment as well as
servers, and presents the data in properly visualised graphs.
The front-end of Cacti can handle multiple users, each with
their respective graph sets, and is mostly used by Web hosting
providers to monitor the bandwidth statistics of customers.
The back-end of Cacti has two forms: cmd.php, a PHP based executable script for smaller installations, or Spine, a C-based poller that can scale to thousands of hosts.
The operation of the Cacti Web based monitoring tool is
divided into three different tasks, which are described below.
Data retrieval: Cacti makes use of a poller to retrieve data.
Its application is executed at regular intervals of time under
varied OSs to monitor routers, switches, servers and other
network hardware. Cacti makes use of the SNMP protocol for
live monitoring of data from various devices.
Data storage: Cacti makes use of the RRDTool to store
data either in a SQL or flat database. RRD is a system to store and show time series data collected from different SNMP-capable devices.
Data presentation: Cacti has an inbuilt graph presentation
based utility to deploy graphs as per the reports based on the
time series data collected from various network devices. Graphs,
in turn, provide fast and easy visualisation of data for network
administrators to maintain the health of the network 24x7.
The latest version of Cacti is 0.8.8h, and was released in
May 2016. Its features are listed below:
Unlimited graph items, graph data manipulation and graph
templates
Built-in SNMP support, user based management and security
Data source templates and host templates
Data gathering on a non-standard time span
Fully flexible and dynamic data sources
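Since Cacti's pollers rely on SNMP, a quick way to confirm that a device will answer before adding it to Cacti is to walk part of its MIB with the Net-SNMP tools; the community string and address below are placeholders:

snmpwalk -v 2c -c public 192.168.1.1 ifDescr
# lists the interface descriptions the device exposes over SNMP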

OpenNMS

OpenNMS is regarded as an enterprise grade free and open source network monitoring and management platform for
systems and network administrators. It was developed to create
a pure, distributed, scalable management application platform
for all aspects of network management with special focus on
fault and network performance management.
OpenNMS provides automated and directed discovery
and provisioning, event and notification management, service
assurance and performance measurement.
OpenNMS is built using the Java programming language
and is available for free under the GNU GPL version 3. The OpenNMS
package provides us with a complete network management solution
which can scale up to thousands of nodes to easily and effectively
collect and store network information. OpenNMS enables network

administrators to monitor all sorts of resources, quotas, network usage statistics, etc. Data can be further analysed via graphs, and
OpenNMS provides a proper Web user interface for all sorts of
data related to network devices. This highly dynamic and flexible
tool enables systems administrators to customise dashboards, duty
schedules and on-call calendars on a per-user or per-group basis.
The current version of OpenNMS is 18.0, which was
released in May 2016.
Its features are:
Event management and notification: OpenNMS is based
on the principle of Publish and subscribe. Processes in
the software can publish events and other processes can
subscribe to them.
Discovery and provisioning: OpenNMS consists of an
advanced provisioning system for adding devices to the
management system by submitting the range of IP addresses
to the system. It consists of adopters to integrate with other
processes within the application as well as external software
like a dynamic DNS server and RANCID.
Server monitoring: OpenNMS monitors network based
services ranging from very simple ICMP pings to complex
protocols like SMTP or page sequence monitoring.
Data collection: OpenNMS collects information of
various protocols like SNMP, HTTP, JMX, XMP, XML,
NSClient and JDBC.

Icinga

Icinga is a free, open source, scalable and extensible network monitoring application which checks the availability of resources,
notifies users of outages and provides extensive business
intelligence data. Its new features include a Web 2.0 style user
interface, additional database connectors for MySQL, Oracle and
PostgreSQL, and a REST API that lets administrators integrate
various extensions without modifying the Icinga Core.
The latest release of Icinga is version 2.4.9, which came
out in May 2016.
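As a hedged sketch of the REST API idea, Icinga 2 (since version 2.4) exposes an HTTPS API on port 5665 that external tools can query; this assumes an API user and password have already been configured on the server:

curl -k -s -u apiuser:apipass 'https://localhost:5665/v1/objects/hosts'
# returns the monitored host objects as JSON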
Architecture: Icinga Core is developed in C language
and has a modular architecture with a standalone core, user
interface and database on which users can install various
plugins and add-ons.
The components of the architecture are:
1. Icinga Core: This manages all sorts of monitoring tasks
and receives various results from plugins. The core
communicates the results to IDODB through the IDOMOD
interface and the IDO2DB service daemon over SSL
encrypted TCP sockets.
2. Icinga 2: This manages monitoring tasks, running checks
and the sending of all sorts of alert notifications. It can be
enabled on-demand, such as the checker or notification
component.
3. User interfaces: Icinga has two types of user interfaces.
(a) Icinga Classical UI: This is based on Nagios CGIs
and has new features added to this interface such as
pagination, JSON output and CSV export.


(b) Icinga Web: This is also known as the new Web and
has a Web 2.0 inspired front-end to offer drag and
drop customised dashboards. It communicates to the
core, database and other third party add-ons.
4. Icinga Data Out Database: This acts as a storage point for
historical data monitoring for add-ons.
5. Icinga Reporting: This is a reporting module based on
the open source Jasper Reports. The reporting module
provides template based reports with varied access levels,
and automated report generation and distribution.
6. Icinga Mobile: This is a user interface for smartphones
and tablets. It is available for iOS, Android, BlackBerry,
etc, and is based on JavaScript and Sencha Touch.
Important plugins of Icinga are:
1. Performance monitoring: PNP4Nagios,
NagiosGrapher and InGraph
2. Configuration interfaces and tools: Nconf, Nagios QL
and LConf
3. Business process monitoring: Business process add-ons
4. Network visualisation: NagVis and Nagmap
5. Windows monitoring: NSClient++ and Cygwin
6. SNMP trap monitoring: SNMPTT and NagTrap

Op5 Monitor

Op5 is free and open source server and network monitoring software based on Nagios. Op5 specialises in displaying the
status, health and performance of IT networks and has an
integrated log server and Op5 logger. Op5 is developed and
supported by Op5 AB.
The various products under Op5 are listed below.
1. Op5 Free: This is a perfect product for small IT offices.
Basically, it is very easy to use and understand, and can
monitor all types of servers and network devices, along
with applications.
2. Op5 Pro: This is more suitable for organisations in need
of single system development. It provides comprehensive
monitoring for servers, network devices, applications,
databases, storage and even cloud based services.
3. Op5 Ent+: This is suitable for large enterprises for
monitoring devices and all sorts of servers.
4. Op5 Live: This is easy to use software that is available and suitable for everyone.
The following are the features of the Op5 monitoring software.
1. Server monitoring: Monitors all sorts of servers and
provides alerts, reports and graph based visualisation.
Op5 is efficient in monitoring physical, virtual, cloud
and even hybrid server environments.
2. Virtual monitoring: Fully efficient network monitoring
software for monitoring VMware ESX, vSphere,
KVM, Citrix Zen and even Microsoft Hyper V.
3. Cloud monitoring: Op5 provides facilities to systems
administrators to completely monitor SaaS, PaaS and
IaaS, along with other types of cloud infrastructure.
4. Open source: As it is completely open source and
based on Nagios, there are no problems as such in implementation.
5. Scalable: Op5 is highly flexible and scalable for
monitoring large volumes of disk drives and handles
distributed monitoring as well as load sharing in an
easy manner.
6. Data centre monitoring: It is very efficient in managing and
monitoring data centres comprising physical and virtual
servers, application management and unified computing.
7. Reporting: It manages loads of information from various
IT hardware and software, and presents the reports in
a comprehensive manner in easy GUI based graphs for
thorough understanding by systems administrators.
8. Integrated log server monitoring: The Op5 logger
provides centralised storage to log various events, which
enhances security and data integrity.
Important extensions of Op5 Monitor are:
Op5 Monitor Peer
Op5 Monitor Poller
Op5 Monitor Cloud Extension

Munin

According to its official website, Munin is a networked resource monitoring tool that can help analyse resource trends and "what just happened to kill our performance?" problems. It is designed to be very plug and play. A default installation provides a lot of graphs with almost no work.
Munin is a free and open source network and system monitoring
tool, which provides systems administrators with the great
advantage of monitoring and alerting services for servers, switches,
applications, etc. It is written in the Perl programming language
and uses the RRDTool to create graphs. It can be accessed via a
simple Web interface. Munin provides comprehensive performance
monitoring of computers, networks, SANs, applications, etc.
The latest version of Munin is 2.99.3 and its features are
listed below:
Munin runs a munin-node service on every monitored box,
and the Munin server connects to the munin-node via TCP
port 4949 to retrieve the data.
Provides comprehensive data visualisation using graphs,
giving the status as OK, WARN, CRITICAL or UNKNOWN.
More than 500 monitoring plug-ins are available till date.
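Because every monitored machine runs munin-node on TCP port 4949, you can speak its simple text protocol by hand; a quick hedged check, with a hypothetical host name:

nc web01 4949
# then type 'list' to see the available plugins,
# and 'fetch load' to print the current load average values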

Network Management Information System

Network Management Information System (NMIS) is regarded as an open source network management system licensed under the GNU
license v3. It can play a crucial role in monitoring the performance
of an organisation by measuring IT environments, assets and fault
monitoring as well as other valuable information.
NMIS provides a highly scalable, flexible and easy to
implement and maintain network monitoring environment
for IT organisations. It can run both in physical and virtual
environments, and can manage thousands of devices that have a
vast amount of storage at a single point of time.


Figure 1: Nagios

The latest version available is NMIS 8.5.10G, which was launched in September 2015.
Its features are:
Performance management and real-time monitoring
Operation tools and distributed monitoring
Faults and events monitoring, and real-time notification
Business rules engine
Scalability and management reporting
UI designed to provide specialised views, to avoid missing
the wood for the trees in large environments
Extremely efficient monitoring system

NetXMS

NetXMS is an open source, enterprise grade, multi-platform management and monitoring system, which provides
comprehensive monitoring of event management, performance,
alerting, reporting and graphing for all layers of IT infrastructure
from network devices to the business application layer.
Architecture: NetXMS architecture is three tiered.
1. Information is collected by monitoring agents either
high-performance agents or SNMP agents.
2. Information is delivered to the monitoring server for
processing and storing.
3. Information is displayed via a rich client application or
Web interface.
The latest version is 2.0.4, which was released in June 2016.
Its features are:
Unified platform for management and monitoring of entire

IT infrastructure.
Designed for maximum performance and scalability.
Distributed network monitoring and automated network
discovery.
Business impact analysis tools; quick deployment with
minimal efforts.
Easy and simple integration with a wide range of
products.
Flexible and easy to use.

References
[1] https://www.nagios.org/
[2] http://www.zabbix.com/
[3] http://www.cacti.net/
[4] http://www.opennms.org/
[5] https://www.icinga.org
[6] https://www.op5.com
[7] http://munin-monitoring.org/
[8] https://opmantek.com/network-management-system-nmis/
[9] https://www.netxms.org

By: Prof. Anand Nayyar


The author is assistant professor in the department of
computer applications and IT at KCL Institute of Management
and Technology, Jalandhar, Punjab. He loves to work on open
source technologies, embedded systems, cloud computing,
wireless sensor networks and simulators. He can be reached
at anand_nayyar@yahoo.co.in



Remote Server Monitoring with Android Devices
Server monitoring is an onerous task. Remote server monitoring is a boon to administrators
of networks. It gives them more flexibility and leaves them free to move away from their
offices. This article is an introduction to remote server monitoring through Android devices.

Without any doubt, many businesses rely heavily on their IT network. The stability, infrastructure and the
downtime costs linked to maintaining these networks
are growing concerns. Besides affecting productivity, IT network
downtime causes degradation in the quality of service provided
to customers. Looking at the level of competition in this rapidly
growing market, a
knock to a firms
reputation can
be a costly affair
for businesses.
The answer to the question, "What measures should be taken to secure your business and reputation to retain customers?" is simple. It is server monitoring.
Server
monitoring is the
process of taking
precautionary
steps to help and
detect any issue in
your IT network that can affect performance. It enables you
to recognise and rectify the problem that could cause a major
setback to your business. This technology provides the facility
of continuously scanning the servers on an allocated network
and examining the entire system for any failure or flaw that is
discovered by different types of server monitoring software.
Server monitoring allows users to manage servers using
several server management tools from a host of dealers.
With the addition of virtualisation, network layouts
and storage networks to the mix, it becomes difficult for
traditional server hardware and OS-specific tools to keep
that momentum going, because of which these tools can't
offer end-to-end support. The purpose of server monitoring
technology is to provide an effective and efficient up-to-date
visual model for monitoring and operating servers.

The need for remote server monitoring

In case of a system crash, it takes time and money to get it
fixed. In a crucial situation like this, it is difficult for any


organisation to expend more finances to get things up and
running again. This is exactly where server monitoring
comes into play, helping you to resolve any minor issue that
could turn out to be a major one in the near future.
One of the prime benefits of monitoring your remote
server is being
informed of
performance related
issues before end
users perceive that
there is a problem.
Listed below are
a few reasons for
having a remote
server monitoring
tool.
Server hardware
health identification
and troubleshooting:
Since the
dependability of
an IT network is of
prime consideration,
the importance of
hardware cannot
be ignored. Possessing a server monitoring tool not only
assists you with identifying the hardware issue at an early
stage, it also analyses the problem before it affects the overall
performance of the business.
Keep a check on the performance and availability of
remote servers: Server monitoring tools help monitor the
performance and efficiency of the installed remote servers. In
case of any reliability issues, these tools aid in keeping track
of the problem.
Remotely arbitrate performance issues: Remote
monitoring tools allow you to carry out steps to resolve
problems without being physically present at the location.
When trapped in a troubleshooting situation, these tools
enable solutions by rebooting servers or restarting websites,
which cuts down the chances of intensifying the problems.
Some of the key features of server monitoring are:
1. Automatically discovers the application and
dependency mapping.

2. Eliminates faulty scripts with customised built-in alerts.
3. Monitors multiple vendor infrastructure through one Web
interface.
4. Accesses applications on the private, public as well as
hybrid cloud.
5. Modifies built-in templates for extensive customisation.


Android devices for server monitoring

The basic definition of a network is that it is a connection of machines. When several specifically operated machines are
grouped together, they form a network. It is quite an easy job
to look after, manage and control the ongoing activities of
a network when you are in the office. But, the same routine
task literally becomes difficult while you are away from the
office or travelling out of town. Rather than depending upon
a third party, you have the option of carrying out this task via
your cell phone. The server acts as a medium of establishing
communication between the client and Android phones.
According to a research paper by Angel Gonzalez Villan
and Josep Jorba from the Open University of Catalonia,
Barcelona, Spain, an Android application has been developed
to run a group of server programs on a mobile device,
connected to the network or USB interface. The accessibility
of the server is handed over to a small client written in Java,
which is operational on desktop and Web systems.
Accepting connections from different clients, the server layer
performs the services of device management. The client layer
the remotely accessible one handles the interactions between
the monitored device and control equipment. This architecture
system provides a host of connections to different clients,
allowing remote control to all users. The implementation of
server monitoring is done as an Android application for the user
to activate the services provided by the service layer.

Applications used for server monitoring

Without server monitoring tools or software, IT professionals have to take on a big burden as it requires immense manual effort to manage servers and other integral applications. Remote server management looks after and upgrades the uptime of servers, failing to do which could result in a change in administrative plans. A good server audit not only gives enumerative information about the servers, but also ensures better functionality and smooth performance. This includes alerting capabilities, comprehensive coverage, performance benchmarking, data visualisation, etc.
Here is a list of a few server monitoring tools that can effectively cater to the needs of businesses.

Nagios

This is one of the best monitoring tools, and empowers organisations to recognise and solve IT infrastructure issues,
avoiding crucial business mishaps. This app enables users to
create reports on trends, alerts/notifications and availability,
all via a Web interface.

Figure 1: The Nagios process

Given below are the pros and cons of Nagios.

Pros

Nagios is open source software. It's free to use and edit.


It has an open configuration, which makes it easy to add
custom scripts to extend the services available.
There are many devices which the Nagios system
can monitor. The requirement is an SNMP protocol
on that device.
Alerts, notifications or comments about the status of
the system are provided. It has a variety of tools for
this purpose.
It has many plugins and add-ons, which are free to
download and develop.

Cons

Many features are not available on the free version of Nagios. Features such as wizards or the interactive
dashboard are available on Nagios XI, which is very
expensive.
There are many configuration files which are very hard to
configure.
Nagios Core has a confusing interface.
Nagios can't monitor all aspects of networks (such as
bandwidth usage or availability).
Nagios can't manage the network, but just monitors it.

OpenNMS

Equipped with automated and manual discovery options, OpenNMS has a built-in performance measurement system and tooling. Being open source and hence freely available, there is absolutely no upgrade or maintenance cost.
Given below are the pros and cons of OpenNMS.

Pros

Free licensing!
Good support and documentation through wikis and
mailing lists.

Figure 2: OpenNMS reporting

Figure 3: Paessler's map designer

Full featured and infinitely flexible.


Its Path outages feature minimises excessive alerting.
Reasonable support costs through the OpenNMS Group.

Cons

Steep learning curve.


Interface not very intuitive.
Most customisation requires learning and modifying
various config files.
Money saved on licensing may have to be spent on
development and maintenance.

Paessler

After re-designing its Web interface completely, Paessler has added support for an HTML interface for a host of mobile
device platforms. Its interface has integrated Google Maps,
which allows this software to display geographical maps for
creating custom network views.
The pros and cons of Paessler are listed below.

Pros

Low price.
Flexible monitoring for apps, networks and more.
Includes many useful features at no extra cost, e.g.,
Netflow, high-availability, remote probes.
Flexible alerting, comprehensive reporting.

Cons

Map designer falls short.


Dashboards are not easily customised.
Manually configuring sensors is not recommended.

SolarWinds

SolarWinds products are used by millions of users across the globe for the maintenance of network devices. Offering
an excellent UI design and mobile accessibility, SolarWinds
boasts of features like customisable as well as automated
network mapping.


SolarWinds has the following pros and cons.

Pros

Excellent UI design.
Customisable, automated network mapping.
Great community support from Thwack.
Mobile access.
Native VMware support.

Cons

Can't configure alerts from the Web console.


Group Dependency configuration is clumsy.
Reporting module needs better ad-hoc reports.
No native support for Microsoft Hyper-V, but
only for SNMP.

References
[1] http://www.monitance.com/en/product-news/what-isserver-monitoring-and-why-is-it-important/
[2] http://www.solarwinds.com/server-application-monitor
[3] http://www.solarwinds.com/solutions/remote-servermonitoring
[4] http://www.embedded.com/electrical-engineercommunity/general/4429112/Remote-control-of-devicesusing-an-Android-platform
[5] http://mashable.com/2015/11/17/network-servertools/#ONbS_YrIIgqZ
[6] http://www.monitis.com/blog/2011/02/22/11-top-servermanagement-monitoring-software
[7] https://haydenjames.io/20-top-server-monitoringapplication-performance-monitoring-apm-solutions/

By: Meghraj Singh Beniwal


The author has a B. Tech in electronics and communication,
is a freelance writer and an Android app developer. He
currently works as an automation engineer at Infosys, Pune.
He can be contacted at meghrajsingh01@rediffmail.com or
meghrajwithandroid@gmail.com.


The Growing Popularity of the Snort Network IDS

Lets have a look at the premier, free, open source, network intrusion and
detection system called Snort. It is an amazing tool that lives up to its billing. This
tutorial walks you through the basics of Snort.

Snort is a very popular open source network intrusion detection system (IDS). It can be considered a packet
sniffer and it helps in monitoring network traffic
in real-time. In other words, it scrutinises each and every
packet to see if there are any dangerous payloads. In addition,
Snort can also be used to perform protocol analysis, content
searching and matching. And it can be used to detect various
types of attacks such as port scans, buffer overflows, etc. It
is available on Windows, Linux, various UNIX as well as
all major BSD operating systems. Snort doesn't require that
you recompile your kernel or add any software or hardware
to your existing distribution, but it does require that you have
root privileges. It is intended to be used in the most classic
sense of a network IDS. All it does is examine network
traffic against a set of rules, and then alerts the systems
administrators about suspicious network activity so that they
may take appropriate action.
One can configure Snort in the following three different modes:
a. Sniffer mode
b. Packet logger mode
c. Network intrusion detection mode
In the sniffer mode of operation, Snort will read network packets and just display them on the console. In packet logger mode, it will be able to log the packets to the disk. In network intrusion detection mode, Snort will monitor the network traffic and analyse it based on the rules defined by the user. It will then be able to take a specific action based on the outcome. Please note that Snort has an inbuilt real-time alerting capability.
As expected, network IDS are placed at certain points within the network so that the traffic to and from all devices on the network can be monitored. Once an attack or an abnormal situation is identified, an alert can be triggered to the systems administrator to take corrective actions, as needed.

How to install and run Snort

Let us first understand how one can install Snort. As a first step, execute the following command on your Linux terminal:

pswayam@pswayam-VirtualBox:~$ sudo apt-get install snort

Once the installation is complete, you can check how successful the installation has been by using the following command:

pswayam@pswayam-VirtualBox:~$ snort --version

Snort comes with a very detailed man page, and readers should go through it for a thorough understanding of this IDS.
A few more programs are needed if you want to run Snort. These are:
Apache2 for the Web server
MySQL-server for the database
PHP5 for the server-side scripts
PHP5-MySQL
PHP5-gd for graphics handling
PEAR (PHP Extension and Application Repository)
And we can use apt-get to install all the above programs, as shown by the following commands:

pswayam@pswayam-VirtualBox:~$ apt-get install apache2
pswayam@pswayam-VirtualBox:~$ apt-get install mysql-server
pswayam@pswayam-VirtualBox:~$ apt-get install php5
pswayam@pswayam-VirtualBox:~$ apt-get install php5-mysql
pswayam@pswayam-VirtualBox:~$ apt-get install php5-gd


pswayam@pswayam-VirtualBox:~$ apt-get install php-pear

As a next step, we need to create a set of directories required to successfully run Snort. One can use the following commands to create these directories:

pswayam@pswayam-VirtualBox:~$ mkdir /etc/snort
pswayam@pswayam-VirtualBox:~$ mkdir /etc/snort/rules
pswayam@pswayam-VirtualBox:~$ mkdir /var/log/snort

The snort.conf file controls everything about what Snort watches, how it defends itself from attack, what rules it uses to find malicious traffic, and even how it watches for potentially dangerous traffic that isn't defined by a signature. A very good understanding of what is in this file and how it can be configured is essential for a successful deployment of Snort as an IDS. By default, this configuration file is located at /etc/snort. If the configuration file is located somewhere else, then you need to specify the -c switch along with the file's location.
Alerts are placed in the Alert file in your logging directory (/var/log/snort by default, or the directory you specify with the -l switch). Snort exits with an error if the configuration file or its logging directory does not exist. You can specify what type of alert you want (full, fast, none, or UNIX sockets) by supplying the -A command-line switch. You need to use the -A switch if you do not use the -c switch, or Snort will not enter alert mode. Additionally, if you use just the -c switch, Snort will use the full alert mode by default. The snort.conf file is used to store all the configuration information for a Snort process running in IDS mode. The majority of the snort.conf file's content is commented-out instructions on how to use the file. Some of the popular configurable items of this snort.conf file are listed in Table 1.
Table 1

Variable: HOME_NET
Description: Specifies the IP address of the system that you are protecting. So when you use Snort, you will need to change this parameter to your actual home network IP address range, and not leave it at the default value of 'any'.

Variable: ORACLE_PORTS
Description: Used to specify the port that Oracle is listening on.

Variable: RULE_PATH
Description: Points to the location of the rule sets in your file system. Typically, rules are stored in /etc/snort/rules, so please make sure to use the full path name or whatever the right location is on your system.
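As a minimal sketch (the address range and the Oracle port below are assumptions; adjust them to your own network), these variables are set in snort.conf like this:

var HOME_NET 192.168.1.0/24
var ORACLE_PORTS 1521
var RULE_PATH /etc/snort/rules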

Please note here that to get Snort ready to run, one needs to change the default configuration settings to match the local environment and operational preferences. Also, if you want to change the settings of the default configuration file, you need to have root privileges.

Figure 1: Checking if the Snort installation was successful
It is very common to see 'include' items in Snort configuration files. These are commands that instruct Snort to include the information in files located in the Snort sensor's file system. Typically, these files contain configuration-related information, as well as the rules which Snort can use to catch bad behaviour. As expected, the actual rules themselves are not entered directly into the configuration file, to simplify management.
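For instance, rule files are typically pulled in with lines of this form (the exact file names depend on the rule set you have downloaded; local.rules is the conventional home for your own rules):

include $RULE_PATH/local.rules
include $RULE_PATH/icmp.rules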
One can set an alert in the rules files with the following structure:

<Rule Actions> <Protocol> <Source IP Address> <Source Port> <Direction Operator> <Destination IP Address> <Destination Port> (rule options)

Let us take a quick look at the rule structure with an example of each:

Rule action: alert
Protocol: ICMP
Source IP address: any
Source port: any
Direction operator: ->
Destination IP address: any
Destination port: any
Rule options: (msg:"ICMP Packet"; sid:477; rev:3;)
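Assembled from the fields above, the complete rule reads as follows (note that the protocol keyword is written in lower case in an actual rule file):

alert icmp any any -> any any (msg:"ICMP Packet"; sid:477; rev:3;)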

It is important to understand here that a few lines are added for each alert, and these lines carry the following information:
a. IP address of the source
b. IP address of the destination
c. Packet type and useful header information
In order to execute Snort, one can use the following command:

pswayam@pswayam-VirtualBox:~$ snort -c /etc/snort/snort.conf -l /var/log/snort/

Now, a file called Alert will get created in the /var/log/snort directory. This file contains the alerts generated while Snort was running. Snort alerts are classified according to their type. A Snort rule specifies a priority for an alert as well. This lets you filter out low priority alerts to concentrate on the most worrisome. One can also run Snort as a daemon process by using the -D option, as shown in the following command:

pswayam@pswayam-VirtualBox:~$ snort -D -c /etc/snort/snort.conf -l /var/log/snort/

Note that if you want to be able to restart Snort by sending the SIGHUP signal to the daemon, you will need to use the full path to the Snort binary when you start it.
You can type 'snort --help' to get a clearer understanding of the various options that are available with Snort.

Figure 2: Executing Snort

Figure 3: A look at Snort options

How to use Snort

Let us now understand how to use Snort in various modes. As mentioned earlier, Snort can be configured in three modes: sniffer mode, packet logger mode and network intrusion detection mode.
In sniffer mode, Snort will be able to print TCP/IP packet headers to the screen. This can be done by using the following command:

pswayam@pswayam-VirtualBox:~$ snort -v

If you want Snort to display packet data as well as the headers, you need to add the -d option as shown below:

pswayam@pswayam-VirtualBox:~$ snort -vd

In some cases, we may even be interested in a more descriptive display and for this, we need to use the -e option.
In packet logger mode, the main idea is to record the packets to the disk. In this mode, we need to specify a logging directory and Snort will automatically know how to get into the packet logger mode. A simple command to operate Snort in this mode looks like this:

pswayam@pswayam-VirtualBox:~$ snort -dev -l ./log

The above command assumes that you have a directory named log in the current directory. If that is not the case, Snort will exit with an error message. As expected, in order to log relative to the home network, you just need to tell Snort which the home network is. Another useful mode for logging is the binary mode; in this, Snort logs the packets in tcpdump format to a single binary file located in the logging directory. The following command shows how binary mode can be used:

pswayam@pswayam-VirtualBox:~$ snort -dev -l ./log -b

You can see from the above command that there is no need to specify the verbose mode or the -d/-e switches. This is because, in binary mode, the entire packet is logged, not just parts of it.

Figure 4: Snort in packet logger mode


Figure 5: Snort in network intrusion mode

If you need to enable network intrusion detection mode, you can use the following command:

pswayam@pswayam-VirtualBox:~$ snort -dev -l ./log -h $HOME_NET -c /etc/snort/snort.conf

Please note that snort.conf is the name of your rule file, and HOME_NET is a variable whose value is defined in the configuration file. The rule set defined in the snort.conf file will be applied to each packet, and a decision will be made on whether or not action needs to be taken. The default snort.conf references several other rule files, so it is a good idea to read through the entire snort.conf file before calling it from the command line.
It is also important to note here that if you are going to use Snort over a long period as an IDS, then do not use the -v switch in the command line, for the sake of speed. When used as a network IDS, Snort provides near real-time intrusion detection capabilities.
Sometimes, you may be interested in the performance of Snort, in which case you can use the -b and '-A fast' options. With these options, the packets will be logged in tcpdump format and you can expect very minimal alerts. The following command can be used for this purpose:

pswayam@pswayam-VirtualBox:~$ snort -b -A fast -c /etc/snort/snort.conf

Snort applies its rules to packets in a specific order. The default order is that Alert rules are applied first, then the Pass rules, and finally the Log rules. Snort provides the -o switch to change the default rule application order to Pass rules, then Alert, and then Log, as shown in the following command:

pswayam@pswayam-VirtualBox:~$ snort -d -h 192.168.1.0/24 -l ./log -c /etc/snort/snort.conf -o

Another important concept that we need to understand in Snort is output configurations. Snort provides a variety of options to display the output and the detected information. Please note that many Snort administrators use third-party applications to monitor and investigate the information generated by Snort. To do this, Snort must output the data in a particular format. One can enable multiple output plugins, and these allow a variety of tools to be employed by Snort administrators. Let us have a quick look at some of these output configurations.
a. Alert syslog: UNIX based systems use the syslog facility to consolidate the generated messages. Snort-generated information can be presented to syslog in a number of ways. In this output configuration, the format looks like the following:

output alert_syslog: <facility> <priority>

An example is: output alert_syslog: LOG_AUTH LOG_ALERT
b. Log tcpdump: This logs packets to the tcpdump file format. There are a variety of applications that can read this format.
c. MySQL: MySQL logging requires that database support be present on the system running Snort, in the form of the MySQL client software and libraries.
After getting a clear understanding of Snort, a question will arise on where we need to place it. Putting Snort on each side of a firewall can be most enlightening. It is a best practice to run Snort behind a firewall, as this adds a layer of protection and is easier to manage.
Snort is considered the first choice when it comes to network IDS in many organisations. Cost-effectiveness and robustness are the two main reasons why people select it. But now, with several commercial versions of Snort available, industry support can be considered another reason behind its popularity. Added to this, the open source Snort community is also very active in providing the much needed support and enhancements of various features.
By: Swayam Prakasha

The author has a master's degree in computer engineering and has been working in the field of information technology for several years. You can reach him at swayam.prakasha@gmail.com.

Admin

Deep Learning for Network Packet Forensics Using TensorFlow

TensorFlow is an open source Python library for machine learning. It does mathematical computation using dataflow graphs. This article dwells on the use of TensorFlow as a forensic tool for classifying and predicting malware sourced from honeypots and honeynets.

Data mining and machine learning are key methods used to gain information from databases, in which hidden patterns are analysed from a huge repository of records. In classical methodology, there are a number of algorithms for clustering, association rule mining, visualisation and classification to get meaningful information for predictive analysis.
Nowadays, a number of tools and technologies are available for the implementation of data mining and machine learning, including WEKA, Tanagra, Orange, ELKI, KNIME, RapidMiner, R and many others, which have Java, Python or C++ for back-end programming and customisation.

Machine learning has core tasks associated with classification and recognition, which are usually related to artificial intelligence. Generally, these operations are performed using some metaheuristic approach, in which globally optimal or simply effective results can be fetched from a huge search space of solutions.
There are a number of prominent soft computing approaches such as neural networks, fuzzy logic, support vector machines, swarm intelligence, metaheuristics, etc. Metaheuristics, in turn, includes many optimisations like Ant Colony Optimisation, Cuckoo Search, the Bees Algorithm, Particle Swarm Optimisation, etc.
Despite the number of metaheuristic approaches and other effective soft computing algorithms, there are many applications for which a higher degree of accuracy and a lower error rate are required.
For machine learning, the approaches of artificial neural networks (ANN) or support vector machines (SVM) can be used with the dataset to be integrated. ANN can be used for malware detection or classification, face recognition, and fingerprint or finger vein structure analysis, in which a previous dataset is used for training a model and then the prediction or classification of further datasets is done. ANN based learning is fully dependent on the dataset used for training the model; apparently, if the dataset is not accurate, the predictive analysis will affect the accuracy of the results.
To implement modelling and training with ANN, there are a number of open source tools available, including SciLab, OpenNN, FANN, PyBrain and many others.

Deep learning

Deep learning is one of the branches of machine learning, with a strong base of algorithms that have multi-layered processing, a higher degree of computation and accuracy, and a lower error rate with the integration of deep graph based learning. Other names for deep learning include deep structured learning, deep machine learning or hierarchical learning.

Figure 1: The TensorFlow logo

Deep learning can be used for various real world applications, including speech recognition, malware detection and classification, natural language processing, bioinformatics, computer vision and many others.

Figure 2: The official TensorFlow page

TensorFlow: A Python based open source software library for deep learning

TensorFlow (tensorflow.org) is a powerful open source software library for the implementation of deep learning. It is based on Python, with C++ at the back-end, and incorporates algorithms for data flow as well as graph based numerical computations, to achieve multi-layered computations with higher accuracy and a lower error rate. TensorFlow has been developed by Google under a research project for deep learning titled Google Brain. It is a second generation system developed by Google, after DistBelief. TensorFlow was released as open source in November 2015 by Google, and this move has motivated research scholars, academicians and scientists to work on this powerful library.
TensorFlow can be installed with binary packages or from the official GitHub sources. Any one of the following methods can be used to install it:
Python Pip
Virtualenv
Anaconda
Docker Toolbox
Source based installation

Running and testing TensorFlow on the command line

At the terminal, the following instructions can be executed for testing:

$ python
>>> import tensorflow as myTensorFlow
>>> MyMessage = myTensorFlow.constant("Hello To All")
>>> mysession = myTensorFlow.Session()
>>> print(mysession.run(MyMessage))
Hello To All
>>> x = myTensorFlow.constant(9)
>>> y = myTensorFlow.constant(2)
>>> print(mysession.run(x + y))
11

Using TensorFlow for network packet analysis

For the classification and prediction of malware from network packets, the dataset should be fetched from an authentic source like honeypots, honeynets, data repositories, open data portals, etc. Many anti-virus companies and research organisations release their datasets for R&D purposes, enabling research algorithms to be implemented by research scholars and practitioners.
If datasets of malware or network traffic are required, the following URLs can be used:
http://www.netresec.com/?page=PcapFiles
https://wiki.wireshark.org/SampleCaptures
https://snap.stanford.edu/data/#email
http://www.secrepo.com/
As the network traffic is captured in PCAP format, there is a need to transform the PCAP format to CSV (comma separated values) using Snort IDS, by which all the alert files from PCAP can be generated and then changed to CSV format.
To read a pcap file in Snort IDS, use the following commands:

$ snort -r mypcap.pcap
$ snort --pcap-single=mypcap.pcap

The alert file format generated by Snort IDS is as follows:

[**] [1:2016936:2] Suspicious inbound to Database port 1433 [**]
[Classification: Potentially Bad Traffic] [Priority: 2]
07/07-10:10.817217 <IP>:<PORT> -> <IP>:<PORT>
TCP TTL:112 TOS:0x0 ID:256 IpLen:24 DgmLen:20
******S* Seq: 0x43EE1111 Ack: 0x0 Win: **** TcpLen: 20
[Xref => http://doc.url.net/2016936]

[**] [1:2016936:2] Suspicious inbound to Database port 1433 [**]
[Classification: Potentially Bad Traffic] [Priority: 2]
07/07-10:10.838622 <IP>:<PORT> -> <IP>:<PORT>
TCP TTL:113 TOS:0x0 ID:256 IpLen:22 DgmLen:21
******S* Seq: 0x6D5F1111 Ack: 0x0 Win: **** TcpLen: 20
[Xref => http://doc.url.net/2016936]

From the Snort IDS generated alert file, the records of 'Potentially Bad Traffic' can be cut and placed in a separate CSV file.
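A rough sketch of that extraction in Python is given below. The regular expression and the file names are assumptions based on the alert layout shown above, and the IP column is left as a placeholder:

import csv
import re

# Match the TTL, IpLen and DgmLen fields of an alert header line
header_re = re.compile(r'TTL:(\d+)\s+TOS:\S+\s+ID:\d+\s+IpLen:(\d+)\s+DgmLen:(\d+)')

with open('alert') as alerts, open('mydataset.csv', 'w') as out:
    writer = csv.writer(out)
    writer.writerow(['Classification', 'DGMLEN', 'IPLEN', 'TTL', 'IP'])
    classification = None
    for line in alerts:
        if 'Classification:' in line:
            # Remember the classification of the current alert record
            classification = line.split('Classification:')[1].split(']')[0].strip()
        match = header_re.search(line)
        if match and classification == 'Potentially Bad Traffic':
            ttl, iplen, dgmlen = match.groups()
            # The IP column is written as 0 here; parsing the real source
            # address out of the timestamp line is left out of this sketch
            writer.writerow([classification, dgmlen, iplen, ttl, 0])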
The finally obtained CSV file is read for training the model and for further predictive analysis of upcoming network traffic. Using this approach, the upcoming traffic can be analysed for the probability of being malware or not:

>>> import pandas
>>> # The following helpers come from scikit-learn
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import accuracy_score
>>> dataset = pandas.read_csv('mydataset.csv')
>>> dataset.shape
(1000, 5)
>>> dataset.columns
Index([u'Classification', u'DGMLEN', u'IPLEN', u'TTL', u'IP'], dtype='object')
>>> y, X = dataset['Classification'], dataset[['DGMLEN', 'IPLEN', 'IP']].fillna(0)
>>> X_Modeltrain, X_Modeltest, y_Modeltrain, y_Modeltest = train_test_split(X, y, test_size=0.1, random_state=29)
>>> lr = LogisticRegression()
>>> lr.fit(X_Modeltrain, y_Modeltrain)
>>> print(accuracy_score(y_Modeltest, lr.predict(X_Modeltest)))
0.60183838
The accuracy score of the existing model can thus be evaluated.

Visualisation of learning and graphs using TensorBoard

The visualisation of learning behaviour can be analysed using TensorBoard, which is a set of Web based applications. TensorBoard includes the visualisation of five types of data, namely images, audio, scalars, graphs and histograms, so that better and more effective analysis can be done.
A live demo of TensorBoard is available at http://www.tensorflow.org/tensorboard/.
To open and execute TensorBoard, the application should be opened in a Web browser on port 6006: http://localhost:6006/.
At the top right panel, there are navigation tabs using which different types of visualisation options can be selected.

Figure 3: Live demo of TensorBoard
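As a quick sketch, TensorBoard is started from the terminal and pointed at a log directory (the path below is an assumption; it must match the directory your training script writes its summaries to):

$ tensorboard --logdir=/tmp/tf_logs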

Using TensorFlow for research and development

TensorFlow can be used by research scholars and academicians to implement research proposals and projects that include deep learning. The results of TensorFlow can be compared with those of other tools for analytics and machine learning, so that a clear and self-executed experience can be gained.

By: Dr Gaurav Kumar

The author is the MD of Magma Research and Consultancy. He is associated with various academic institutes, delivering expert lectures and conducting technical workshops on the latest technologies and tools. Email: kumargaurav.in@gmail.com; URL: www.gauravkumarindia.com


Admin

Building a Multi-Host, Multi-Container Orchestration and Distributed System using Docker

This article is about the next generation Docker clustering and distributed system, which comes with various interesting built-in features like orchestration, self-healing, self-organising, resilience and security.

Docker is an open platform designed to help both developers and systems administrators to build, ship and run distributed applications by using containers. Docker allows the developer to package an application with all the parts it needs, such as libraries and other dependencies, and to ship it all out as one package. This ensures that the application will run on any other Linux machine, regardless of any customised settings that the machine might have, which could differ from the machine used for writing and testing the code.
Docker Engine 1.12 can be rightly called the next generation Docker clustering and distributed system. One of the major highlights of this release is the Docker Swarm Mode, which provides a powerful yet optional ability to create coordinated groups of decentralised Docker engines. Swarm Mode combines your engines in swarms of any scale. It is self-organising and self-healing. It enables an infrastructure-agnostic topology. The newer version democratises orchestration with out-of-the-box capabilities for multi-container, multi-host app deployments, as shown in Figure 1.

Built as a uniform building block for self-organising and self-healing a group of engines, Docker ensures that orchestration is accessible to every developer and operations user. The new Swarm Mode adopts a decentralised architecture rather than the centralised one (key-value store) seen in the earlier Swarm releases. Swarm Mode uses the Raft consensus algorithm to perform leader selection and maintain the cluster's state.
In the Swarm Mode, all Docker engines will unite into a cluster with a management tier. It is basically a master-slave system, but all Docker engines will be united and they will maintain a cluster state. Instead of running a single container, you declare a desired state for your application, which means multiple containers, and then the engines themselves will maintain that state. Additionally, a new Docker service feature has been added in the new release. The command 'docker service create' is expected to be an evolution of 'docker run', which is an imperative command that helps you get a container up and running. The new 'docker service create' command declares that you have to set up a service, which can run one or more containers; those containers will run, provided the state you declare for the service is maintained in the engine inside the distributed store based on the Raft consensus protocol. That brings up desired state reconciliation. Whenever any node in the cluster goes down, the Swarm itself will recognise that there has been a deviation from the desired state, and it will bring up a new instance to reconcile it.
Docker Swarm Mode is used to orchestrate distributed systems at any scale. It includes primitives for node discovery, Raft-based consensus, task scheduling and much more.

Figure 1: The evolution of Docker 1.12

Let's look at the features Docker Swarm Mode adds to the Docker cluster functionality, as shown in Figure 2. Looking at these features, Docker Swarm Mode brings the following benefits.
Distributed: Swarm Mode uses the Raft consensus algorithm in order to coordinate, and does not rely on a single point of failure to make decisions.
Secure: Node communication and membership within a swarm are secure out-of-the-box. Swarm Mode uses mutual TLS for node authentication, role authorisation and transport encryption, and for automating both certificate issuance and rotation.
Simple: Swarm Mode is operationally simple and minimises infrastructure dependencies. It does not need an external database to operate; it uses the internal distributed state store.

Figure 2: Features of Docker Swarm Mode

Figure 3 depicts the Swarm Mode cluster architecture. Fundamentally, it is a master and slave architecture. Every node in a swarm is a Docker host running a Docker engine. Some of the nodes have a privileged role, called the manager. The manager nodes participate in the Raft consensus group. As shown in Figure 3, the components in blue share an internal distributed state store of the cluster, while the green coloured components/boxes are the worker nodes. The worker nodes receive work instructions from the manager group, and this is clearly shown in dashed lines.

Figure 3: Swarm Mode cluster architecture

Getting started with Docker Engine 1.12

In this section, we will cover the following aspects:
Initialising the Swarm Mode
Creating the services and tasks
Scaling the service
Rolling updates
Promoting a node to the manager group
To test drive the Swarm Mode, I used a four-node cluster in the Google Cloud Engine, all running the latest stable Ubuntu 16.04 system, as shown in Figure 4.

Initialising the Swarm Mode

Docker 1.12 is still in the experimental phase. Setting up Docker 1.12-rc2 on all the nodes should be simple enough with the following command:

# curl -fsSL https://test.docker.com/ | sh
Figure 4: Test set-up

Run the command shown in Figure 5 to initialise Swarm Mode on the master node.

Figure 5: Initialising Swarm Mode

A listing of the Docker Swarm master node is shown in Figure 6.

Figure 6: Listing the master node
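For reference, the initialisation captured in Figures 5 and 6 boils down to commands of this shape; the listen address is an assumption, so substitute your own master node's internal IP:

$ sudo docker swarm init --listen-addr 10.140.0.5:2377   # assumed master IP
$ sudo docker node ls                                    # list the swarm nodes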


Let us add the first Swarm agent node (worker node), as shown in Figure 7.

Figure 7: Adding a worker node
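The join step is of this form (a sketch; the manager address is an assumption, and depending on the exact 1.12 build you may also have to pass the join token printed by 'docker swarm init'):

$ sudo docker swarm join 10.140.0.5:2377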

Let's go back to the Swarm master node to see the latest Swarm Mode status. Similarly, we can add the other Swarm agent nodes to the swarm cluster and see the listings, as shown in Figure 8.

Figure 8: Listing the three-node cluster

Creating services and tasks

Let's try creating a single service called 'collab', which uses the busybox image from Dockerhub; all it does is ping the collabnix.com website. A sketch of the commands involved is shown below.
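A minimal sketch of the commands behind Figures 9 and 10:

$ docker service create --name collab busybox ping collabnix.com
$ docker service ls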

Figure 9: Creating a Docker service

Figure 10: Listing the Docker service

A task is an atomic unit of a service. We actually create a task whenever we add a new service. For example, as shown in Figure 11, we have created a task called 'collab'.

Figure 11: Scaling the service

Scaling the service

To scale the service to 5, run the command shown in Figure 12 (sketched below).
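A sketch of the scaling step:

$ docker service scale collab=5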

Figure 12: Listing the Docker containers across the cluster

Now you can see that there are five different containers running across the cluster, as shown in Figure 13. The command declares a desired state on your swarm of five busybox containers, reachable as a single, internally load balanced service from any node in your swarm.

Figure 13: Demonstrating rolling updates in Swarm Mode

Rolling updates

Updating a service is pretty simple. The 'docker service update' command is feature-rich and provides loads of options to play around with the service. Let's try updating the redis container from 3.0.6 to 3.0.7 with a 10s delay and a parallelism count of 2.
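A hedged sketch of that update, assuming a service named redis was created earlier from the redis:3.0.6 image:

$ docker service update --image redis:3.0.7 --update-delay 10s --update-parallelism 2 redis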

Promoting a node to the manager group

Let's try to promote Swarm Agent Node-1 to the manager group, as shown in Figure 14.

Figure 14: Promoting worker node to the manager group
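The promotion itself is a one-liner; the node name below is taken from the test set-up and should be adapted to your own:

$ docker node promote swarm-agent1
$ docker node ls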

To summarise, Docker comes with a distribution platform, and makes multi-host and multi-container orchestration easy. It has new API objects like services and nodes that will let you use the Docker API to deploy and manage apps on a group of Docker engines, and it provides scaling, promotion and rolling updates for the cluster nodes.

References
[1] https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration
[2] http://www.collabnix.com
[3] https://github.com/docker

By: Ajeet S. Raina

The author is a project lead engineer with Dell India R&D. He is a Docker Captain (honoured by Docker Inc.). He blogs about the latest Docker releases through his personal space, www.collabnix.com. You can reach him at Ajeet_Raina@dell.com.

Admin

pfSense: Adding Firewall Rules to Filter Services

pfSense is an open source firewall, router and UTM (unified threat management) distribution based on FreeBSD. This is the third article in the series on pfSense, and it helps readers in designing and configuring firewall rules as per their requirements.

The first two articles in this series described the basic pfSense set-up, the installation and configuration of the Squid proxy server and the SquidGuard proxy filter, and the configuration of dual WAN failover. This article starts off from the point where pfSense had been configured at the end of the second article. It goes on to configure the firewall to filter services, allowing internal computer systems to access the required websites/IP addresses located on the Internet using permitted services, by configuring firewall rules. Please refer to the earlier articles to establish a firewall in dual WAN failover.

What is services filtering?

Many people view a firewall as a device to block access to undesirable websites, which is partially true. Emphasis must also be given to blocking requests from the internal network towards the Internet or external network that use undesirable services. This control is still not seen in many implementations.
For example, a firewall not configured to block undesirable services will not stop malicious software such as viruses, worms, spyware, etc, from sending emails out using email services such as SMTP, or from sending outgoing traffic using non-standard ports. This type of traffic could also lead to the blacklisting of your static IP address.
It is crucial that services blocking is enabled along with website filtering to ensure a correct firewall configuration.

The concept of the port

To explain it in simple terms, imagine a server connected to a single client by a crossover cable. This server is running three different services: HTTP, SSH and FTP. The client system is trying to access these services simultaneously using only one physical cable. This gives rise to two questions:
1. How does the server differentiate between the requests received from different clients? How does it determine which packet is for which service?
2. How does the client differentiate between the replies received from the server? How does it determine which packet is received as a reply to which request sent earlier?
The answer lies in the concept of a port: different services run on different ports. The HTTP service runs on Port 80, SSH on Port 22, FTP on Port 21, and so on. In all, there are 65,535 ports.
While sending requests to the server, the client sends the IP address of the server as part of the IP header, and the port number for the service as part of the TCP header. In addition, the client also sends its own IP address as the source IP address, and adds a randomly generated source port as the source port number.
While replying, the server reverses the source and the destination IP addresses so that the packet reaches the client, and also reverses the source/destination port numbers for the client to understand which packet belongs to which service request.
The handshake remains the same for multiple clients and servers. The source and destination IP addresses identify the client and the server, while the source and destination ports identify the service request and the reply. See Figure 1 for a quick understanding.
In firewall parlance, the terms 'ports' and 'services' are often used interchangeably, and mean the same thing.
Please note that this explanation is only to simplify the concept of ports. You can refer to a more detailed explanation of the TCP/IP handshake on Wikipedia.

The firewall configuration scenario

Let us consider a typical requirement for a company, which would be to allow access depending on the work profile of employees. Let's assume that there are three groups, admins, engineers and accountants, with various access requirements. The first step is to prepare a basic Access Control List (ACL), a sample of which is shown in Table 1. We will use this ACL to configure pfSense for this article.
Discussions should be held with all computer users to try and find all the services and websites being used by them, in order to create the ACLs. Employees should be asked whether they use a specific website frequently or not. For instance, during such discussions, a website being used once in three months might get identified, which runs on a non-standard port 8080. In the example given in Table 1, it could belong to the Pune Municipal Corporation local body tax division.

Table 1: Access Control List

Internal group | External access requirement | Port alias group | External IP alias group
Admin | DNS servers | TCP, UDP 53 | DNS servers
Admin | Mailing services to access Gmail using the mail client | GmailServices | Gmail servers
Admin | Additional mail server using SMTP and POP services | Mail services | mail.companymail.com
Admin | Web browsing | Internet services | Any
Admin | Admin console of additional mail server | 8085 | www.companymail.com
Engg | Mailing services to access Gmail using the mail client | GmailServices | Gmail servers
Engg | Customer site FTP access | FTP services | ftp.customersite.com
Accounts | Mailing services to access Gmail using the mail client | GmailServices | Gmail servers
Accounts | LBT Pune website | 8080 | www.website.com

Figure 1: Concept of ports

Please discuss in detail with computer users and have the patience to create these lists. The more details you get, the fewer reconfiguration calls will be needed later.
Identify the internal IP addresses, external IP addresses, external host names and services required for controlling access, etc. For example, for allowing Gmail access, we need to configure three groups:
a. GmailServices: a port alias group containing the following TCP services (ports) required for Gmail access:
IMAPS 993
SMTPS 465
POP3S 995
Submission 587
b. GmailServers: an IP alias group containing IP addresses for the following servers:
smtp.gmail.com
imap.gmail.com
c. A group of internal systems required to access Gmail using the mail client.
Think about how to group the various internal/external IP addresses and services to create the minimum number of access rules. Create the IP and port alias lists from the Firewall > Aliases menus.

Note: Planning before implementing rules is the best way; there are no shortcuts.

Once the aliases are ready, go to Firewall > Rules > LAN and proceed to create the desired access rules as per the requirements already defined in Table 1.
From the Source dropdown box, select 'Single host or alias'. Type the name of the predefined alias in the box in front; pfSense will auto display all matching aliases.
Since this firewall is configured with dual WAN, click on Display Advanced under Extra Options and select the DualWAN gateway.
Similarly, create the additional required rules to allow traffic from the source towards the destination by using services. For example, to allow a computer from the Admin group to access Gmail using a mail client, create the following rule:
Select the pass rule with the source as the Admin group.
Set the destination to the Gmail servers, using the Gmail services.
Set the gateway to dual WAN to ensure this rule works with both the WAN interfaces.
Knowledgeable readers may revisit the ACL in Table 1 before reading further, to check if any rule is missing.
At the time of installation, pfSense configures a default rule, which allows all traffic from the LAN net towards any destination. Once all rules are configured, disable this default rule by clicking the button.

Figure 2: GmailServices

Figure 3: IP aliases

Note: A default anti-lockout rule is configured to ensure admin access to the firewall from the internal network. Take care not to disable this rule, otherwise you will be locked out of the firewall.

Note: To ensure correct documentation, update the ACL table first and then change the corresponding rule in the firewall.

Missing rule?

Readers may revisit the ACL in Table 1 before reading the next sentence. Once the configuration is implemented according to the ACL defined in Table 1, LAN users will not be able to ping the Internet or even the firewall itself, since ICMP packets are not allowed towards the firewall or towards the Internet. The importance of ICMP packets for troubleshooting cannot be emphasised enough. Here, two rules will be required at the minimum, to allow ping requests towards pfSense and towards the external DNS servers.

Diagnostics

Often, the implementer will stare at various inaccessible sites/services after such rules are implemented. The best way to diagnose these issues is to browse Status > System Logs > Firewall. All blocked log entries will be seen on this page, ordered chronologically. The filter button can be used to filter the results based on the source IP address, destination IP address, etc.

An important configuration for accessing Gmail via the client

Readers will definitely notice that the imap.gmail.com and smtp.gmail.com IP addresses keep changing practically every time. Due to this, it is difficult to create a rule to allow outgoing traffic towards these domain names.
About using an FQDN for a host alias, the pfSense website has the following caveat: 'DNS names that use very low TTLs and change frequently, such as round robin entries, are not reasonable to use in this fashion.' This means that large sites like google.com, which return a different set of IP addresses with each query, would not be viable in an alias. Thus, using various domain names such as smtp.gmail.com, imap.gmail.com, etc, as alias entries will not work properly in setting firewall rules.

Figure 4: DNS rules

Figure 5: LAN to WAN firewall rules

This can be overcome in two ways:
1. Search for all the Gmail IP addresses, and allow traffic on GmailServices towards all these IP addresses. This configuration will require considerable effort.
2. Configure the Domain Override settings under Services > DNS Resolver to resolve smtp.gmail.com and imap.gmail.com to fixed static IP addresses. Further, configure a rule to allow all traffic for GmailServices towards these IP addresses. Use the following steps:
1. Find the current IP address belonging to such FQDNs by pinging them (see the sketch after this list).
2. Configure the DNS resolver to resolve these URLs to these IP address(es).
3. Use these IP addresses to configure the groups and firewall rules.
4. Ensure that all users use pfSense as their DNS server, so that the IP address for the overridden domains will resolve to the preconfigured IP address. Note that systems configured with DNS servers other than pfSense will get different IP addresses for these overridden domains, and access will be blocked.
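For step 1 above, the current addresses can be looked up from any client, for example as follows (the addresses returned will vary over time, which is exactly the problem being handled here):

$ ping -c 1 smtp.gmail.com
$ ping -c 1 imap.gmail.com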

Figure 6: Domain overrides

Managing firewall configuration changes

As a good change management practice, the following sequence should be observed:
1. Back up the firewall before making any changes; this will enable a rollback of settings if something goes wrong after the change.
2. Make changes in the documentation, such as the ACL table, first, to ensure the documentation is always up to date.
3. Make the changes to the firewall settings, adding comments to the created rules. Implement only documented rules.
4. Test the change to ensure it gives only the desired access.
5. Back up the firewall after the change is complete.
Following these steps will ensure correct and up-to-date documentation of the implemented firewall rules, which will definitely help future firewall troubleshooting, reconfiguration and migration.

Interesting pfSense features related to firewall rules

pfSense provides easy addition of pass or drop rules by clicking the signs in the destination column. Once such a rule is created, do not forget to inspect it from Firewall > Rules > LAN, change the default gateway, and add a proper comment for easy identification at a later date.
Logs of firewall rule changes can be checked from the menu Diagnostics > Backup and Restore. Here, the details of changed rules can be checked by comparing an earlier rule and the changed rule, by selecting the radio buttons corresponding to the required rules and clicking on them.

Latest available update

Please update pfSense to the latest minor release, version 2.3.1-RELEASE-p5, which was built on June 16, 2016. Details of the new features and changes in this release are available at https://doc.pfsense.org/index.php/2.3.1_New_Features_and_Changes#Update_5.

References
[1] TCP handshake: https://en.wikipedia.org/wiki/Transmission_Control_Protocol
[2] pfSense firewall rule basics: https://doc.pfsense.org/index.php/Firewall_Rule_Basics
[3] Example of a basic pfSense firewall rule: https://doc.pfsense.org/index.php/Example_basic_configuration
[4] Note about FQDNs: https://doc.pfsense.org/index.php/Using_FQDNs_in_Aliases
[5] Forum link for allowing Gmail: https://forum.pfsense.org/index.php?topic=101809.0
[6] URL alias: https://forum.pfsense.org/index.php?topic=66499.0

By: Rajesh M. Deodhar

The author is an IS auditor, network security consultant and trainer with more than 25 years of industry experience. He is an industrial electronics engineer with CISA, CISSP and DCL certification. Please feel free to contact him on rajesh at omegasystems dot co dot in.

Admin

Nagios: The System Monitoring Tool You Can Depend On

Enterprise class IT system monitoring of servers, networks and applications is possible with the FOSS tool Nagios. Problems can be identified and remedied long before they become critical and lead to system failures and downtime. Nagios is very versatile as it is platform-independent.

Nagios is a highly powerful open source monitoring tool which helps IT enabled sectors to provide an early diagnosis of, as well as address, critical failures in the monitored network. Unknown server breakdowns or network outages can be reported well in advance, before critical business gets affected. The major aspects of the tool are scalability and flexibility. It also provides online reporting of network statistics and, by making it the central management system, administrators can detect any anomalies in the network. Early detection and mitigation of such malicious threats can improve the quality of service without affecting the clients.
Nagios is a widely used open source network monitoring software deployed across networks. It keeps track of the hosts and their respective services. It alerts administrators when the system senses any intrusions and malicious activities. It also completely monitors the IT enabled network, providing online reporting of services and running applications, and keeping an eye on whether processes are functioning properly. Once a failure is encountered, alerts are reported, enabling admins to take proper remedial measures before the entire network shuts down or an outage is experienced by the end users. The infected machines can be isolated from the network so that the security of the network is assured and quality of service is achieved. The algorithm and the functionality of Nagios XI are shown with the help of a service state diagram in Figure 1.
Nagios can be used for cluster management, monitoring Web services and as the central management system, depending on the deployment model. When an agent based system is used for monitoring purposes, it consumes host resources such as CPU and memory.
The monitoring can be carried out at the host and network levels to report system statistics. The intelligent platform management interface (IPMI) stack is used for hardware monitoring. Host monitoring can be carried out using Nagios, Ganglia or other related software. HPC cluster monitoring integrates the IPMI stack with Nagios. The unified framework summarises the hardware monitoring of the server to the Nagios Web interface.
The IPMI sub-system functions autonomously, irrespective of the OS, and permits administrators to monitor the system remotely without the presence of the OS or the management application. The system can be monitored continuously, and it abstracts the data from local area networks when the power source is connected. IPMI prescribes only the structure and format of the interfaces as a standard, while the deployment varies.

Figure 1: State diagram

Monitoring checks

The monitoring checks can be categorised into local and remote. Local checks include the following aspects:
Monitoring the Windows desktop
Monitoring Windows Server
Remote checks include the following aspects:
Monitoring website URLs
Google Map integration with NagiosXI

Monitoring the Windows desktop

On a Windows desktop, NagiosXI monitors the following:
Memory usage
CPU load
Disk usage
Service states
Running processes
Monitoring private services or attributes of a Windows machine requires the installation of an agent. The agent acts as a proxy for the NagiosXI plugins, which do the monitoring of the actual service or attribute of the Windows machine. We have used NSClient++ as the add-on on the Windows machine, and are using the check_nt plugin to communicate with the NSClient++ add-on. How NSClient++ communicates with NagiosXI is shown in Figure 2.

Figure 2: NSClient++ communicating with NagiosXI

How to configure NSClient++ in Windows: After downloading NSClient++ 4.0.2, we have to install it and, during installation, specify the IP address of our NagiosXI machine as well as the NSClient password.
Configuring NSClient in NagiosXI: After installing NSClient++ in Windows, we have to make some changes in the NagiosXI configuration wizard and select the Windows desktop option, before setting the IP address of the Windows desktop in NagiosXI.
Settings for the desktop metrics: We can set our own parameters for monitoring the desktop. For CPU, memory usage and disk usage, we have to set our own threshold values, below which the soft state will be converted into the hard critical and hard warning states. The desktop metrics settings are shown in Figure 3.
Desktop monitoring settings: We can determine the maximum number of checks for the desktop. NagiosXI will keep on checking till the count reaches the maximum count value, after which it will notify the admin about the error with the help of alert messages and e-mails.
Settings for desktop notifications: Once an error has been detected, the admin should be informed in time, before the end user gets to know. Various methods are used for notifying the admin. We can create our own method of notification using e-mails, SMS alerts, etc.
After configuring all the initial settings, we can see the status of CPU usage, disk drives, etc. The checks display the states of the various monitored parameters. They also display the maximum checks for each device.
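To verify that the server can actually reach the agent, the check_nt plugin can be run by hand from the Nagios host. This is only a sketch: the plugin path, agent IP, port and password below are assumptions that must be adapted to your installation:

$ /usr/local/nagios/libexec/check_nt -H 192.168.5.67 -p 12489 -s nsclientpass -v CPULOAD -l 5,80,90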

Figure 3: Desktop metrics settings

Monitoring Windows Server

Now, let's monitor the Windows Server, just as we have monitored the Windows desktop. The steps to be followed for Windows Server are listed below:
1. First, the agent NSClient++ must be installed on our Windows machine.
2. Inside NagiosXI, we have to configure the Windows server option and then assign the IP address 192.168.5.67 (for example) to the Windows server.
Settings for the server metrics: Just as we did for the Windows desktop, we can set our own parameters for monitoring the servers. For CPU, memory usage, disk usage, etc, we can set our own threshold values, below which the soft state will be converted into the hard critical and hard warning states. The server metrics are shown in Figure 4.

Figure 4: Server metrics

Server services: We can also monitor any services and processes, as well as the performance counters of our services, if we want. The results of this monitoring will be displayed in our dashboard. The services added are shown in Figure 6. We can monitor the SQL server as well as the IIS Web servers.
Performance counter: In the performance counter, we are monitoring three major parameters: page file usage, log on errors and the server work queue.
We can manually specify the values for the critical and warning states. For this implementation, we have specified the warning states as 70 pages for page file usage, two for log on errors, and four for the server work queue. This means that if our server has more than four tasks in its queue, the state will change, and NagiosXI will generate a critical message to the admin to correct the error. The performance counters are shown in Figure 5.

Server monitoring settings: We can also determine the maximum number of checks for the server. NagiosXI will keep on checking till the count reaches the maximum count value. Once the value reaches its maximum, NagiosXI will notify the admin about the error with the help of alert messages and e-mails.

Figure 5: Performance counters

Status generation for Windows Server: The status graph of Windows Server contains all the readings of the various parameters that we initially set. The two services, the IIS Web service and the SQL service, are both monitored. Both services show as unknown, since they have not been started on the server. Our performance counters are also shown in the graph. The status of the log on errors, page file usage and the server work queue shows OK. All the server metrics settings, such as CPU usage, Disk D and memory usage, also show OK, but Disk C is shown as being in a critical state, as the attempt count has reached the maximum level.




Graph generation: The graph is built from what the device monitors over an entire day, such as CPU usage, the Nagios host service and the Nagios HTTP service. The X-axis shows the time in hours and the Y-axis shows the pre-defined states as percentages. The graph is shown in Figure 6.
The blue graph denotes the CPU usage during the entire day; as we can see, the states change with the time shown on the X-axis. We have pre-defined the levels as 10 per cent (critical) and 20 per cent (warning).
The yellow graph shows the packets lost during the entire day. As we can see, packets are lost only during the first four hours; NagiosXI will notify the user every 30 minutes if the critical state is reached.
The red graph shows the round trip average (RTA) time taken by the host to check the plugin's performance on the network.

Monitoring website URLs

We can monitor any remote URL by the same process we used for our local checks. We first have to configure the website URL settings in NagiosXI by going to the configuration wizard. Then we can add the URL of any website we want to monitor.
After we have entered the URL, we can change the host name and choose the port number, URL options and URL services. We can set up www.google.com and www.yahoo.com for monitoring.
In Figure 7, we can see the current host status of the Google and Yahoo websites. It shows that there is no error and that the attempt count is less than the maximum.
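Under the hood, such wizards generate standard Nagios object definitions. As a rough hand-written equivalent, here is a minimal sketch (the generic-host and generic-service templates and the check_http command are the stock Nagios samples; the definitions NagiosXI generates will differ in detail):

define host {
    use        generic-host       ; stock host template
    host_name  www.google.com
    address    www.google.com
}

define service {
    use                  generic-service   ; stock service template
    host_name            www.google.com
    service_description  HTTP
    check_command        check_http        ; fetches the URL and checks the HTTP response
}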

References
[1] Shival Dubey, Anant Wadhwa, Subodh Kumar, Vineet Virmani and Vaibhav Hans: 'Agent-Less Hardware Monitoring Tool for Cluster Management', International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, No. 9, September 2013.
[2] Ruan Ellis: 'HOWTO: Set up NAGIOS for an HPC cluster', December 2014. Available from: https://github.com/alcessoftware/symphony/wiki/HOWTO:-Setup-NAGIOS-for-an-HPC-cluster
[3] Nagios report: 'Nagios XI - Installing The Windows Agent: NSClient++'. Available from: https://assets.nagios.com/downloads/nagiosxi/docs/Installing-The-Windows-Agent-NSClient++-for-Nagios-XI.pdf
[4] Ravi Saive: 'How to Add a Windows Host to a Nagios Monitoring Server', November 2013. Available from: http://www.tecmint.com/how-to-add-windows-host-to-nagios-monitoring-server/
[5] Magesh Maruthamuthu: 'How to add a Remote Windows Host on a Nagios Server to Monitor', February 2015. Available from: http://www.2daygeek.com/add-remote-windows-host-on-nagios-server-to-monitor/#

By: Kiruthika Devi B.S., Abhishek Singh, Karan Malik and Subbulakshmi T.
Kiruthika is a full-time Ph.D scholar at Vellore Institute of Technology, Chennai, doing her research in security. She can be contacted at kiruthikadevi.bs2015@vit.ac.in.
Dr T. Subbulakshmi is a FOSS enthusiast working as a professor at Vellore Institute of Technology, Chennai. Her research includes FOSS for security. She can be contacted at research.subbulakshmi@gmail.com.
Abhishek Singh and Karan Malik are IV year B.Tech students at Vellore Institute of Technology, Chennai. They can be contacted at abhishek.kumar2013@vit.ac.in and karan.malik2013@vit.ac.in, respectively.


Use the Django and REST Frameworks to Create a Simple API

This article is a short tutorial on how to create a simple TaskAPI with SQLite, which
communicates over JSON and REST. We will use the Django Web and REST frameworks.

Nowadays, we live in a multi-platform world. Everyone has at least one computer, tablet, smartphone or smartwatch. Our little gadgets store and retrieve data. What a great opportunity this presents to touch billions of people with excellent applications, which even those without any computer knowledge can handle.
Whatever front-end you can think of, on whichever platform, you still need a way to store data. The back-end is the heart of every simple and complex application.
In this article, we will discuss how to create a simple
TaskAPI with SQLite, which communicates over JSON and
REST. We will use the Django Web and REST frameworks.

Virtual environment and project set-up

First of all, it is a good habit to set up a new Python virtual environment to quarantine the requirements from the rest of the system. You can install virtualenv and all the other requirements over PyPI. I assume you have already installed Python 3.X and virtualenv.

$ mkdir Taskproject
$ cd Taskproject
$ virtualenv env
$ source env/bin/activate    # On Windows use env\Scripts\activate

Now we are in our virtual environment, and every requirement we install will be placed inside the env folder. Let's install the necessary requirements via pip. In this article, we use Python 3.X, Django 1.9 and the Django REST framework 3.0.

# Install Django and the Django REST framework into the virtualenv
$ (env) pip install django
$ (env) pip install djangorestframework
# By default, pip installs the latest version.

Creating a Django project and app

A Django project can manage multiple Django applications. We are going to create a project called TaskAPI, and an application called Task inside the TaskAPI project.

$ (env) django-admin.py startproject TaskAPI
$ (env) cd TaskAPI
$ (env) django-admin.py startapp Task

Adjusting the project settings

The settings are defined in the file /TaskAPI/settings.py. First, we have to add the installed apps. For our application, we need to install the Task app along with the mandatory Django applications, and the Django REST framework, rest_framework.
The installed apps are listed in the INSTALLED_APPS constant in settings.py.

INSTALLED_APPS = (
    # default apps created by startproject
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # third-party and project apps
    'rest_framework',
    'Task',
)

Any global settings for a REST framework API are kept in a single configuration dictionary named REST_FRAMEWORK.

REST_FRAMEWORK = {
    'DEFAULT_MODEL_SERIALIZER_CLASS':
        'rest_framework.serializers.ModelSerializer',
}

Creating models

Let's go back to the Task app in the folder Task and create the models we need in the file /Task/models.py. In order to define a Task model, we need to derive from the Model class. Let's use the User class of the standard authentication as a foreign key in the user attribute.
We define an owner, a task name, a description of the task, a status and an attribute which stores the time the task was created.

from django.db import models
from django.contrib.auth.models import User


class TaskModel(models.Model):
    user = models.OneToOneField(User)
    task_name = models.CharField(max_length=100)
    task_description = models.TextField(max_length=200)
    status = models.BooleanField(default=False)
    date = models.DateTimeField(auto_now_add=True)

Initialising the database

We have to set up the database for storing data. In the default settings, an SQLite database with the required schema is automatically created with the following commands:

$ (env) python manage.py makemigrations
$ (env) python manage.py migrate

Creating serialisers

We'll declare a serialiser that we can use to serialise and deserialise data that corresponds to TaskModel objects.
Let's create a new module named Task/serializers.py that we'll use for our data representations. The ModelSerializer class provides a shortcut that lets you automatically create a Serializer class with fields that correspond to the Model fields.

from django.contrib.auth.models import User
from rest_framework import serializers
from .models import TaskModel


class TaskSerializer(serializers.ModelSerializer):
    user = serializers.CharField(source='user.username', read_only=True)

    class Meta:
        model = TaskModel
        fields = ('user', 'task_name', 'task_description', 'status', 'date')


class RegistrationSerializer(serializers.ModelSerializer):
    password = serializers.CharField(write_only=True)

    def create(self, validated_data):
        user = User.objects.create(
            username=validated_data['username']
        )
        user.set_password(validated_data['password'])
        user.save()
        return user

    class Meta:
        model = User
        fields = ('username', 'password')

We defined two serialiser classes, TaskSerializer and RegistrationSerializer. TaskSerializer has user as a read_only serialiser field, which references our logged-in user, that is, a user who has permission to perform CRUD operations on the Task model.
RegistrationSerializer has password as a write_only field because we don't want to serialise the hashed password. We override the serialiser's create() method in order to create a user instance.
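To see what TaskSerializer produces, here is a quick sketch in the Django shell (python manage.py shell); the user and task values are illustrative:

from django.contrib.auth.models import User
from Task.models import TaskModel
from Task.serializers import TaskSerializer

# Create a user and a task owned by that user
user = User.objects.create_user(username='alice', password='secret123')
task = TaskModel.objects.create(user=user,
                                task_name='Demo',
                                task_description='Try the serialiser')

# The read_only 'user' field serialises to the owner's username
print(TaskSerializer(task).data)
# e.g., {'user': 'alice', 'task_name': 'Demo', ...}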

Adjusting the URL dispatcher

Now, let's set up the URL dispatcher in the file /TaskAPI/urls.py. The URL dispatcher maps URL routes to specific views in Django. A view handles the logic and sends back HTTP responses.
The REST framework adds support for automatic URL routing to Django using routers; it provides SimpleRouter, DefaultRouter and custom routers. Let's use SimpleRouter and URL patterns in this project for now.
We will implement TaskView for the creation, listing, deletion and updating of tasks, and RegistrationView for user registration.
from django.conf.urls import include, url
from django.contrib import admin
from rest_framework import routers
from Task import views

router = routers.SimpleRouter()
router.register(r'task', views.TaskView, base_name='task')

urlpatterns = [
    url(r'^', include(router.urls)),
    url(r'^register', views.RegistrationView.as_view(), name='register'),
    url(r'^admin/', include(admin.site.urls)),
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
]

The explanation for the various end points used in the code above is given below.
/task: We create a SimpleRouter object and register our view for automatic URL routing. This end point handles all CRUD operations related to the task object.
/register: This end point is defined for user registration. RegistrationView is a Django class-based view.
/api-auth: This is the built-in Django REST framework authentication end point.

Figure 1: API end points

Adding the views: RegistrationView and TaskView

First, we implement the TaskView. The following belongs to the file /Task/views.py. For TaskView, we intentionally use the REST framework's viewsets.ModelViewSet, which will automatically create all the HTTP verb end points for us; let's support the HTTP verbs GET, POST and PUT. Every user needs to be authenticated in order to access this end point; that's why we set permission_classes to IsAuthenticated. We override the get_queryset method, so the GET method filters all tasks by the logged-in user and responds with the serialised data. In the POST method, we validate the incoming data with the TaskSerializer. If the incoming data is valid, we create a task object and save it. The method replies with the incoming data and the primary key ID.

from django.contrib.auth.models import User
from rest_framework import permissions, viewsets
from rest_framework.generics import CreateAPIView

from .models import TaskModel
from .serializers import TaskSerializer, RegistrationSerializer


class TaskView(viewsets.ModelViewSet):
    """Only authenticated users can perform CRUD operations on their tasks"""
    permission_classes = (permissions.IsAuthenticated,)
    model = TaskModel
    serializer_class = TaskSerializer

    def get_queryset(self):
        """Return tasks belonging to the current user"""
        queryset = self.model.objects.all()
        # filter to tasks owned by the user making the request
        queryset = queryset.filter(user=self.request.user)
        return queryset

    def perform_create(self, serializer):
        """Associate the current user as the task owner"""
        return serializer.save(user=self.request.user)

We use the perform_create method, provided by the mixin classes, to associate the current user as the task owner; it offers easy overriding of the object save.
Now let's handcraft RegistrationView, which lets unregistered users register. The following also belongs to the file /Task/views.py:

class RegistrationView(CreateAPIView):
    """CreateAPIView has only the POST method"""
    model = User
    serializer_class = RegistrationSerializer
    permission_classes = (permissions.AllowAny,)

We derive RegistrationView from CreateAPIView, one of the REST framework's generic views. CreateAPIView supports only the POST HTTP verb, and we want to post the username and password to our database in order to create a user. Everyone, logged in or not, should be able to use this view; therefore, we set permission_classes to AllowAny. The incoming data is first validated with the RegistrationSerializer; after validation, we can create the user.

Play with the browsable API

Start the development server of your TaskAPI project. By default, this starts an HTTP server on port 8000.

$ (env) python manage.py runserver

Because the REST framework provides a browsable API, feel free to interact with the API through the Web browser at http://localhost:8000/.
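You can also exercise the end points from a script. Here is a minimal sketch using the third-party requests library (pip install requests), assuming the development server above is running; the username, password and task values are illustrative:

import requests

BASE = 'http://localhost:8000'

# Register a user via POST /register
r = requests.post(BASE + '/register',
                  data={'username': 'alice', 'password': 'secret123'})
print(r.status_code)  # 201 on success

# The task end points require authentication; HTTP basic auth works
# with the REST framework's default settings
auth = ('alice', 'secret123')

# Create a task via POST /task/
r = requests.post(BASE + '/task/',
                  data={'task_name': 'Write article',
                        'task_description': 'Draft the REST tutorial'},
                  auth=auth)
print(r.status_code, r.json())

# List the logged-in user's tasks via GET /task/
r = requests.get(BASE + '/task/', auth=auth)
print(r.json())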

Continued on Page 66...


My Library Application in App Inventor 2

Here's another tutorial in our ongoing series to help you hone your skills for making useful Android apps with App Inventor.

It's a good practice to keep things organised and in place. This library app will help you arrange your books and find any book quickly when needed. The app pretty much resembles a real-life library, where all the publications are arranged in definite categories for readers to have easy access to them.
So let's continue on our journey to master App Inventor. We have already had around eight to nine sessions, enough to make you comfortable with the capabilities of App Inventor. If you are reading this tutorial series for the first time, I would recommend that you familiarise yourself with the previous articles as well. That will help you understand the things we are going to do in this article better.
Let's now proceed to build an Android app that will serve as a digital book library for us. We'll add new books to it and, when needed, search for a required book and access the details we have already saved. For the first time, I am introducing a list picker and the associated list blocks. After this tutorial you will learn to:
Create a list
Add items to the list
Remove list items
View all list items
Search the list and use an index
Join text to create a list item
Split a list item
Store lists in a database (Tiny DB)

Retrieve data from the database
Use procedures that return data
Perform data validation

The purpose of the application

The application maintains a list of items that we have, which we can later refer to in order to see what we have so far. The app is for a book library; it will enable us to add information about new books, edit earlier entries, delete records completely, etc. The app is ideal for a paperless, real-life scenario, enabling you to record things digitally on your mobile. You are already familiar with all the components that I will be using for this application, like buttons, labels, Tiny DB, horizontal arrangements, list pickers, etc.

GUI requirements

For every application, we have a graphical user interface or GUI, which helps us interact with the on-screen components. How each component responds to user actions is defined in the block editor section. As per our current requirements, we will have multiple text boxes and buttons with which you can enter data and initiate methods.
The GUI requirements are:
Label: Labels are static text components, which are used to display some headings or markings on the screen.
Buttons: These let you trigger events and are very essential components.

Horizontal arrangements: These are special components, which keep all child components aligned horizontally within themselves.
Notifier: This is used to display some instructions or gives you control over your existing components. You will be able to see its functionality in more detail as we implement it in our app.
List picker: The list picker component is one of the basic components from the App Inventor repository. Using this, you can pick a definite item from a list of items and, once selected, it will be treated as your choice for further actions.
Tiny DB: As you already know, this is the application-specific database that the Android system provides. This is where all your user credentials like passwords, high scores, subscriptions, etc, are maintained.
Figure 1: Designer screen


Figure 2: How the application looks

The table below lists the components that we require for this application. We will drag them on to the designer from the left hand side palette.

Component name | Purpose | Location
Label | To display a label | Palette --> User Interface --> Label
Button | To trigger events | Palette --> User Interface --> Button
Horizontal arrangement | To arrange the child components | Palette --> Layout --> Horizontal Arrangement
Notifier | To display on-screen information | Palette --> User Interface --> Notifier
Tiny DB | To store data persistently | Palette --> Storage --> Tiny DB
List picker | Enable picking an item from the list | Palette --> User Interface --> List Picker
1. Drag and drop the components mentioned in the table on to the viewer.
2. Visible components can be seen by you, while the non-visible components will be located beneath the viewer under the tag 'Non-visible'.
3. We have placed a label to show the name of the application.
4. All buttons need to be put within a Horizontal arrangement so as to keep them aligned horizontally.
5. If you have dragged and placed everything, the layout will look like what's shown in Figure 1.
6. Make the necessary property changes, like we did while changing the text property for the label and button components.
7. Renaming the components helps to identify them in the block editor.
8. Your graphical user interface is now ready. Figure 2 shows how the application will look after the installation.
9. Figure 3 gives the hierarchy of the components that we have dragged to the designer.


Figure 3: A view of the components


If you are confused by the designer and the components viewers, let me explain these a bit more. Here is the hierarchy that we have placed for our application.
1. At the top, we have the title of our application. It's always a good practice to name your application and show it on the screen as well. We have put a label for that and have set its text property to the name of the application.
2. Next, we have a horizontal arrangement, which holds two labels and their corresponding text boxes.

Using the text boxes, you can enter the text. Putting all the components within the horizontal arrangement will keep them aligned horizontally.
3. For the Add button, we have set the properties in the designer itself (see Figure 4).
4. The next horizontal arrangement holds all the action buttons. As I have explained earlier, keeping them within an arrangement makes the horizontal arrangement their parent and all the components within it, the children.
5. For the notifier, keep the default settings as available in the designer.
Now we will head towards the block editor to define the behaviour. Let's discuss the actual functionality that we expect from our application.
1. The user should be able to add a new book to the existing list by giving the title and the author's name.
2. If there is no existing list, the app should create one, and if a list already exists, the new book should be appended to it.
3. The user should be able to delete book details as well, by giving the title and the name of the author.
4. The user should be able to view all the stored books.
5. The user should be able to clear the whole list at the press of a single button, in case we don't have information regarding the title and author's name.
So let's move on and add these behaviours using the block editor. I hope you remember how to switch from the designer to the block editor; there is a button right above the Properties pane to switch between the two.

Figure 4: Button properties

Figure 5: Block Editor Image 1


Block Editor blocks

I have already prepared the blocks for you. All you need to do is drag the relevant blocks from the left side palette and drop them on the viewer. Arrange the blocks in the same way as you see in Figure 5. I will explain what each does and how it is done.
Initialise three variables as follows:
TAG_BOOKS: Used for storing the book collection to the database
listBooks: Used to store/manipulate data related to the list of books (title, author, etc)
varTemp: A general-purpose variable
initData procedure: On app start-up, it:
Sets up the ListPicker title.
Hides the ListPicker (we use a button to open it).
Sets up initial demo data in our database (this happens only on the app's initial install). This procedure attempts to get the data from the database. If the data does not exist, then it takes demo data in csv format, splits it using commas (,) and assigns it to our temp variable.
Screen.Initialize: This is used to check if it's a first-time installation. This is done by checking if any database table exists. If not, it will then invoke the initData procedure.
isDataEntered: This procedure gets invoked when we try to add a book to the list. It performs validation to ensure data in both fields is entered. It returns false if either the book's name or the author's name is not entered.
isExistingBook: This procedure gets invoked when we try to add or delete a book from the list. It joins the book's title and author in the form of Title:Author, and then checks our list to

find out if it exists or not. If it exists, it returns true, else false.
Adding a book: When the button Add is clicked, it invokes the procedure isDataEntered. If false is returned from the procedure (i.e., data was not entered in the text fields), it displays an error message. If true is returned, then we invoke the procedure called isExistingBook. That procedure checks to see if the data already exists in our list. If it does, an error message is displayed; otherwise a join of title and author, in the format Title:Author, is stored in the list. It also stores this latest data into our database using our TAG variable.

Figure 6: Block Editor Image 2

Viewing books in ListPicker: When the View button is pressed, we invoke ListPicker.Open to show the data. This will cause the BeforePicking block of the ListPicker to be triggered. Once triggered, we set its elements to listBooks, which contains our book collection data. Next, the picker will open, showing the data.

Figure 7: Block Editor Image 3

Data in the picker (Figure 9) is displayed in the format BookTitle:BookAuthor. When a selection is made, AfterPicking will be invoked. At this time, we take the data for the current selection, split it at the colon (:) and assign it to a temporary variable (varTemp) as a list. The list will have two items, with the first item being the BookTitle and the second item the BookAuthor. The next blocks take the first item (BookTitle) and set it into the text field (txtBookTitle), and then take the second item (BookAuthor) and set it into the text field (txtBookAuthor).

Figure 8: Block Editor Image 4
Figure 9: Block Editor Image 5

Deleting books: To delete a book, you can select an existing book from the picker (see Figure 10). You can also manually enter the title/author into the text fields and then click the Delete button. Once clicked, we invoke the procedure isExistingBook. The procedure (see above) checks to see if the book exists in the list and, based on that, returns true or false. If false (book not found), it displays an error message. If found, we join the data from the text fields using a colon (:). Next, we check for this Title:Author and get its index from our list variable listBooks. The result (index) is saved into our temp var varTemp. Then, using that index number, we remove the item from our list. We then save the updated list of books into the database, using the TAG we had defined as the key.

Figure 10: Block Editor Image 6

Resetting the database (book lists): To re-initialise the database and remove all books (reset the list), we have provided the button btnClearList (clear book list). The blocks in Figure 11 show how to reset the book list and re-initialise the database.

Figure 11: Block Editor Image 7

Now you are done with the block editor too. Next, let's download and install the app on your phone to check how it is working.

Packaging and testing

To test the app, you need to get it on your phone. First, download the application to your computer and then move it to your phone via Bluetooth or USB cable. I'll tell you how to download it.
1. On the top row, click on the Build button. It will show you the option to download the apk to your computer.
2. While downloading, you can see its progress and, after it has been successfully completed, the application will be placed in the Download folder of your directory or the preferred location you have set for it.
3. Now you need to get this apk file to your mobile phone, either via Bluetooth or USB cable. Once you have placed the apk file on your SD card, you need to install it. Follow the on-screen instructions to do so. You might get a notification or warning saying that the install is from an untrusted source. Allow this from the settings and, after successful installation, you will see the icon of your application in the menu of your mobile. Here, you will see the default icon; this can be changed, and I will tell you how to do this as we move ahead in this course.
I hope your application is working based on the requirements you have given. Now, depending upon your usability and customisation, you can change various things like images, sound and behaviour.

Debugging the application

We have just created a prototype of the application with very basic functionality. What else might the user of your app be interested in? Give some serious thought to the various use cases your app should be able to operate in, so as not to annoy the user. Consider the following cases:
Wouldn't it be nice to add some data validation upon entering the data in lower or upper case?
Should we consider adding the cover image of the book as well?
These are some of the updates you could consider, and users may be pretty happy seeing them implemented. Think about other possible scenarios, and how you can integrate these into the application. Do ask me if you fail to accomplish any of the above cases.
You have successfully built another useful Android app for yourself. Happy inventing!
By: Meghraj Singh Beniwal
The author, a freelance writer and Android app developer, has a B.Tech in electronics and communication. He is currently working as an automation engineer at Infosys, Pune. He can be contacted at meghrajsingh01@rediffmail.com or meghrajwithandroid@gmail.com.

Continued from Page 61...


Figures 2 to 5 show a few API end point tests: Figures 2 and 3 show the POST /register and GET /task APIs, while Figures 4 and 5 show the POST /task and PUT /task/1 APIs.

Figure 2: Register user
Figure 3: List task
Figure 4: Create task
Figure 5: Update task

In this article, you implemented a simple TaskAPI with Django and the Django REST framework, with basic authentication and permissions. We have now learned quite a lot about the Django REST framework, including how to implement a Web-browsable API which can return JSON for you, how to configure serialisers to compose and transform your data, and how to use class-based views to eliminate boilerplate code.

References
[1] http://www.django-rest-framework.org/#tutorial
[2] https://docs.djangoproject.com/en/1.9/
[3] Source code related to this article is available at: http://opensourceforu.com/article_source_code/aug16/djangoandrestapi.zip

By: Yogendra Sharma
The author is a Java developer with a background in Python. He currently works in Pune at Siemens Industry Software Pvt Ltd as a product development engineer. His LinkedIn profile is available at http://in.linkedin.com/in/yogendra0sharma.


PhoneGap: Simplifying Mobile App Development

Developing mobile apps is fun, but developing apps for different mobile operating systems, individually, is a chore! An easy way out is to use PhoneGap, a software development platform that can create mobile apps for all mobile platforms in one go. The best thing about PhoneGap is that developers can work with it using existing Web developer skills.

PhoneGap is a free and open source framework that can be used to create mobile apps using Web technologies like HTML, CSS and JavaScript. We can also use standardised Web APIs and target the platforms on which we want to develop the app. The biggest advantage of PhoneGap is that we can use our existing Web developer skills, which results in a shorter learning period and faster development. This framework is entirely based on Web standards.
This framework is entirely based on Web standards.
The structure of a mobile device is similar to that of a computer. It has a custom-built operating system, hardware and firmware. Every mobile OS provides its own environment set-up and tools to develop apps, which will run only on that OS. Apps running on one OS can't run on another. So, to increase the reach among users, there's a need to make apps compatible with all major mobile OSs. Making an app that can not only run on all major OS platforms but also have a look and feel that's compatible with them is a tedious task.
PhoneGap is the solution for all the problems mentioned above. It is a framework that allows us to develop apps using HTML, CSS, JavaScript and standard Web APIs for all major mobile OSs. PhoneGap takes care of the look and feel of the app and its compatibility with various mobile OSs.

It also allows us to use different features of the mobile device, like the camera, contacts and location. It supports iOS, Android, BlackBerry, Symbian, webOS, WP7 and Bada. Developing an app in PhoneGap doesn't require expertise in any of the above platforms, nor any hard-core coding practices. Once you upload the data content to the website, PhoneGap will convert it into the various app files.

Installation

1) Install Node.js from its official site. This is a prerequisite for PhoneGap.
2) Download PhoneGap (desktop or mobile, according to the requirement) from its official site http://phonegap.com/getstarted/.
3) To install PhoneGap, go through the following steps.
Mac: Type the following command in the terminal:

sudo npm install -g phonegap

Windows: Type the following in the command prompt:

npm install -g phonegap

Figure 1: PhoneGap's structure
Figure 2: App structure

4) Install Java, Ant and Android Studio according to the platform on which you want to develop the app.
5) Update the path variables in Mac/Windows.
6) Test PhoneGap by entering the following commands in a terminal:

cd /Users/YOURUSERNAME/Desktop
phonegap create MyApp com.test.myapp TestApp
cd TestApp
phonegap local build android

7) If it is installed successfully, you will get the following results:

[phonegap] adding the Android platform...
[phonegap] compiling Android...
[phonegap] successfully compiled Android app

Environment set-up

An app contains the following items in its package:
Configuration files
Icons for the app
Information or content (built using Web technologies)
Apps created using the PhoneGap build can be set up either through the Web or by setting the configuration in config.xml. The config.xml file allows us to specify metadata about our application.

<widget>: The widget element is the root of your XML document; it assures you that you are following the W3C specification. The following are the attributes you should set on the widget element:
* id: the unique identifier for your application
* version: a major/minor/patch style version
* versionCode: (optional)
<name>: The name of the application.
<description>: A description of your application.



Figure 3: Command Line Output

An example is:

<?xml version="1.0" encoding="UTF-8" ?>
<widget xmlns       = "http://www.w3.org/ns/widgets"
        xmlns:gap   = "http://phonegap.com/ns/1.0"
        id          = "com.phonegap.helloworld"
        versionCode = "20"
        version     = "0.0.1" >
    <name>PhoneGap Hello World</name>
    <description>
        Hello World Example
    </description>
</widget>

Platforms and preferences

<platform>: There can be zero or more platform elements. Each names a platform on which we want our app to be built: Android, iOS or Winphone.
<preference>: The preferences tag is used to personalise the application configuration. It can be used to set various configuration properties like orientation, full screen, background colour, etc. These configuration properties should be in name and value pairs.

An example is:

<platform name="ios" />
<platform name="android" />
<platform name="winphone" />

Figure 4: PhoneGap's workspace structure

Figure 6: Setup Wizard - Step 2

Figure 7: Launching PhoneGap application in the Android Emulator



Figure 5: Setup Wizard - Step 1

Preferences apply to all platforms if you haven't specified them for a single platform. To specify a preference for a single platform, you can place the preference tag inside a platform tag:

<platform name="ios" >
    <preference name="orientation" value="landscape" />
</platform>
<platform name="android" >
    <preference name="orientation" value="portrait" />
</platform>
Icons and splashes

<icon>: There can be zero or more icon elements. If none is specified, then the PhoneGap logo will be used as the application's icon.
src: (required) path of the image file
width: (optional) width in pixels
height: (optional) height in pixels

Icons and splashes are platform dependent. Shown below are two ways to specify an icon or splash for a particular platform.
1) By specifying a platform attribute:

Table 1

API | Usage
Battery status | Monitors the status of the device's battery
Camera | Captures a photo using the device's camera
Console | Adds additional capability to console.log()
Contacts | Works with the device's contact database
Device | Gathers device-specific information
Device Motion (Accelerometer) | Taps into the device's motion sensor
Device Orientation (Compass) | Obtains the direction that the device is pointing
Dialogs | Visual device notifications
File Transfer | Hooks into a native file system through JavaScript
Geolocation | Makes your application location aware
Globalization | Enables representation of objects specific to a locale
Media | Records and plays back audio files
Splashscreen | Shows and hides the application's splash screen
Vibration | An API to vibrate the device
StatusBar | An API for showing, hiding and configuring the status bar's background

Figure 8: Running PhoneGap application in the Android Emulator
Figure 9: Output of PhoneGap application


<icon src="icon1.png" platform="ios" width="100" height="100" />

2) By putting the icon or splash inside a platform element:

<platform name="ios">
    <icon src="icon1.png" width="100" height="100" />
</platform>

Both these statements give the same result: the icon is used for iOS.

Developing a Hello World application for Android

To build an app for Android, you need the following prerequisites on your machine:
1) Java SDK
2) Android SDK
3) Eclipse
4) Eclipse ADT plugin
5) Android platforms and components
6) Apache Ant
7) Ruby
8) PhoneGap framework


Now, check the environment variables. Set the following values in your account's PATH variable:

system_path/jdk/bin
system_path/android-sdk/tools
system_path/ruby/bin
system_path/apache-ant/bin

Also set the following variables:

JAVA_HOME: path of the JDK directory
ANT_HOME: path of the apache-ant directory
ANDROID_HOME: path of the Android SDK directory

To create a workspace for your app, go to the phonegap-android folder on the command prompt and run:

ruby ./droidgap "[android_sdk_path]" [name] [package_name] "[www]" "[path]"

android_sdk_path: where you have installed the Android SDK
name: the name of the application
package_name: the package name you want for your application
www: the path of the folder containing your PhoneGap app files
path: the application workspace for your project

Once you run the command, you will see the screen shown in Figure 3. This will create a complete workspace for your PhoneGap Android app, as shown in Figure 4.

Setting up your project in Eclipse

Once you have created the above workspace, you can open it in Eclipse. Now create a new Android project, as shown in Figure 5.
Next, select 'Create project from existing source' and give the project a name, as shown in Figure 6.
Add the external library (phonegap.jar) to the libs folder of your workspace. To add the external library, right-click on the project --> Build Path --> Add external archive --> phonegap.jar in the libs folder.
There will already be a file called phonegap.js in the assets/www folder of the workspace. In the same folder, create a file called index.html for your content and add the following code:

<!DOCTYPE HTML>
<html>
<head>
<meta name="viewport" content="width=320; user-scalable=no" />
<meta http-equiv="Content-type" content="text/html; charset=utf-8">
<title>PhoneGap</title>
<script type="text/javascript" charset="utf-8" src="phonegap.js"></script>
<script type="text/javascript" charset="utf-8">
    var sayHello = function() {
        var name = document.getElementById("Name").value;
        navigator.notification.alert("Hello: " + name);
    }
</script>
</head>
<body id="ibody">
<div id="txt">
    <input type="text" name="Name" id="Name" />
</div>
<div id="btnhello">
    <a href="#" class="btn" onclick="sayHello();">Go</a>
</div>
</body>
</html>

There is a textbox in which you can enter a name, and there is a button called Go. Once you click the button, it will show 'Hello:' plus the name you entered in the textbox, in an alert box.
To launch your PhoneGap application in the Android emulator, right-click the project root and select Run As > Android Application (see Figure 7). Once you run the project, you can see the screen shown in Figure 8 on your emulator. When you click on the Go button, it will show the alert box, as shown in Figure 9.
PhoneGap's native APIs are listed in Table 1.
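To give a taste of these APIs, here is a minimal sketch using the camera API as exposed by the standard PhoneGap/Cordova camera plugin (the plugin must be included in your build; the element ID myImage is illustrative):

<script type="text/javascript" charset="utf-8">
// Wait for PhoneGap to load before touching any device API
document.addEventListener("deviceready", onDeviceReady, false);

function onDeviceReady() {
    // Take a photo and return it as a base64-encoded string
    navigator.camera.getPicture(onSuccess, onFail, {
        quality: 50,
        destinationType: Camera.DestinationType.DATA_URL
    });
}

function onSuccess(imageData) {
    var image = document.getElementById('myImage');
    image.src = "data:image/jpeg;base64," + imageData;
}

function onFail(message) {
    alert('Failed because: ' + message);
}
</script>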

References
[1] http://code.tutsplus.com/tutorials/creating-an-android-hello-world-application-with-phonegap--mobile-2532
[2] http://coenraets.org/blog/phonegap-tutorial/
[3] http://phonegap.com/blog/tag/tutorial/
[4] http://phonegap.com/getstarted/
[5] http://www.phonegap.co.in/phonegap-tutorial/
[6] http://code.tutsplus.com/tutorials/phonegap-from-scratch-introduction--mobile-9171

By: Palak Shah
The author is a senior software engineer who loves to explore new technologies and learn innovative concepts. She is also fond of philosophy. She can be reached at palak311@gmail.com.



Work with Flow to Debug Your Program

Flow is a static type checker for JavaScript. It enables programmers to write clean code by quickly finding errors in JavaScript applications.

There are two types of programming languages: strongly typed and weakly typed. Strongly typed languages are those in which the various constructs of the programming language, including variables, expressions and functions, have a particular type, and this type cannot be altered. For example, you cannot concatenate a string with an integer because these are two different types. On the other hand, weakly typed programming languages are flexible about the types their constructs can hold; here, concatenation of strings and numbers is possible, even though these are of different types.
For example, consider JavaScript and Python. In JavaScript, we can add a string to a number like this:

var s = "You have a meeting at " + 10 + " AM tomorrow";
// Output: You have a meeting at 10 AM tomorrow

whereas, in the case of Python, adding a number to a string will result in an error because this kind of interaction between the two types (string and int) is not allowed. So, we have to convert the number 10 to a string first, and then add it to the string. This is because JavaScript is weakly typed and Python is strongly typed. We can see that the type system affects how we interact with the programming language, and how we code.
Hence, we can say that every programming language has a type system. When we try to run a code snippet that violates its type rules, the programming language throws a type error.
The type system is used to reduce the chances of error by checking whether the parts have been connected in a consistent way; for example, checking whether a value that is transported to another location (via a function call, an assignment, etc) has the correct type.
Static type checking performs the type checking operation before the execution of the program. To perform this


Figure 3: Creating index.js

Figure 4: Compiling index.js with Flow

Figure 1: Initialising npm

Figure 5: Modified index.js

Figure 2: Setting up a sample project

operation, the arguments, expressions and variables must be given a data type. This helps to eliminate certain classes of errors before the program runs.
Flow is an open source static type checker. It helps to catch common bugs in JavaScript before they run. These bugs include:
1. Dereferencing a null pointer
2. Silent type conversions
3. The dreaded 'undefined is not a function' error message
When Flow is initiated, it performs an initial analysis of all the files present in the code base. After the analysis, it stores the result in a persistent server. When the user saves a file, Flow rechecks all the changes in the background. Flow comes with a lot of useful features, some of which are listed here.
Speed: The programmer does not have to wait for Flow to check the code, because both the initial analysis and the rechecks are heavily optimised for performance.
Safety: Flow is designed to find errors. It uses control flow analysis to deeply understand your code and find errors that other type systems can't.
Idiomatic: Flow is made for JavaScript programmers, so coding with Flow is like coding with the common idioms of the language.
Gradual: Flow follows the opt-in style of checking, which means that you can gradually convert your existing JavaScript code base to Flow on a per-file basis. This can be done by adding a /* @flow */ comment to the top of your source file.
Figure 1 demonstrates how to set up a sample project with npm, which is a package manager for JavaScript. To initialise a project with npm, we can run npm init inside our sample project directory. It creates a package.json file with the options that you have selected.
Another way of doing this is by creating a sample project directory, get_started, and adding properties like the name and scripts to the package.json file. Then add Flow to the project (Figure 2).
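For illustration, the resulting package.json might look something like this (a sketch; the flow-bin version shown is purely illustrative):

{
  "name": "get_started",
  "scripts": {
    "flow": "flow"
  },
  "devDependencies": {
    "flow-bin": "^0.30.0"
  }
}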
Figure 3 shows how to create a file, index.js. The // @flow comment at the beginning of the file tells Flow to check this file; if this line is left out, Flow will not type check the file.
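The figures carry the actual file contents; as an illustration, a minimal index.js along these lines (the exact code in the figures may differ) would be:

// @flow
function square(n: number): number {
  return n * n;
}

square(5);       // fine: a number is passed
// square("5");  // uncommenting this makes Flow report a type error,
//               // since a string is passed where a number is expected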

To run Flow, invoke it from the project directory (for instance, via the flow script added to package.json earlier). 'No errors' indicates that the file index.js has been type checked and is error-free. To study the functioning of this static type checker, let's introduce an error, type check the file again and see whether Flow detects it (see Figure 4).
The modified index.js looks like what's shown in Figure 5; note the type change we have made. Now, on compilation, the result looks like what can be seen in Figure 6. We see that Flow detects the type error.
By: H. Chaithanya Krishnan
The author is an open source enthusiast and an active contributor to the world of open source. She can be reached at chaitanya7991@gmail.com.


How do Arrays Decay into Pointers?

This article is for C programming newbies and enthusiasts. It explains how arrays passed as parameters to a function decay into pointers.

Many newbies to the C world are often confused about passing an array as a parameter to a function. They find it difficult to decide whether it is done using the pass by value or the pass by reference parameter passing mechanism. This article explains how arrays are passed to functions, and why this is so.

Example 1: An array name as a function parameter is a pointer

In this example, we consider a 1-D array to explain the meaning of the above statement, with the help of demo code. Consider Code 1, which contains a function called display() to display the elements of the array.
1  #include <stdio.h>
2
3  void display(int numbers[], int size);
4
5  int main()
6  {
7      // Definition of an array
8      int numbers[] = {10, 20, 30, 40, 50};
9
10     // Calling the display function, to print the elements
11     display(numbers, sizeof(numbers) / sizeof(numbers[0]));
12     return 0;
13
14 }
15
16 // Function definition
17 void display(int numbers[], int size)
18 {
19     int i;
20     for (i = 0; i < size; i++)
21     {
22         printf("The value @ address %p: %d\n", &numbers[i], numbers[i]);
23     }
24 }

Code 1: Demonstrates how arrays are passed to functions

As Code 1 shows, whenever arrays are passed as arguments to functions, they are always passed using the pass by reference mechanism. Because of this, they decay into pointers in the function parameters. In Code 1, the name of the array (numbers), which yields the base address of the array, is passed from the main function to the function called display (Line 11), and is collected in the parameter named numbers (Line 17), which is nothing but a pointer to the base address of the array defined in the main function. The declaration of display() is equivalent to the code shown below:

void display(int *numbers, int size);

The matching definition of display is given in Code 2.

1 // Function definition
2 void display(int *numbers, int size)
3 {
4     int i;
5     for (i = 0; i < size; i++)
6     {
7         printf("The value @ address %p: %d\n", &numbers[i], numbers[i]);
8     }
9 }

Code 2

The two declarations of the function display shown in Code 1 and Code 2 look different from the programmer's perspective but, from the compiler's viewpoint, both are one and the same. The standard specifies that a parameter declared as 'array of type T' shall be decayed to 'pointer to type T'.
Note 1: An array name in the declaration of a function parameter is treated by the compiler as a pointer to the first element of the array.

void display(int numbers[], int size);
void display(int numbers[5], int size);

The function declarations given above are equivalent. One can declare the function display with an array (with or without a size) or with a pointer. In all three prototypes, the function parameter (numbers) will decay into the pointer to the first element of an array.
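A practical consequence of this decay, worth verifying yourself with a small sketch (the printed sizes depend on your platform): inside the function, the parameter really is a pointer, so sizeof no longer reports the array's size.

#include <stdio.h>

/* Despite the [5], 'numbers' here is really an int *, so sizeof(numbers)
   is the size of a pointer; compilers typically warn about exactly this. */
void probe(int numbers[5])
{
    printf("sizeof inside probe: %zu\n", sizeof(numbers));  /* e.g., 8 */
}

int main()
{
    int numbers[5] = {10, 20, 30, 40, 50};

    /* Here 'numbers' is a true array: 5 * sizeof(int), e.g., 20 */
    printf("sizeof inside main: %zu\n", sizeof(numbers));
    probe(numbers);
    return 0;
}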

Example 2

Consider the example of a 2D array, to explain how it decays into a pointer to an array when passed to a function. Let us start with the demo code shown in Code 3:
1  #include <stdio.h>
2  #define ROW 3
3  #define COL 2
4
5  int main()
6  {
7      int array[ROW][COL];
8      populate(array);
9
10     // Some code here
11 }
12
13 void populate(int array[ROW][COL])
14 {
15     // Some code here
16 }

Code 3: Demonstrates how 2D arrays are passed to functions

As the example in Code 3 shows, when a 2D array of type T is passed as a parameter to a function, it decays into a pointer to an array, as shown in Line 13 of Code 3. Code 3 shows the straightforward way of passing 2D arrays to a function: declaring the parameter exactly the same way as the array is declared in the calling function.
1  #include <stdio.h>
2  #define ROW 3
3  #define COL 2
4
5  int main()
6  {
7      int array[ROW][COL];
8      populate(array);
9
10     // Some code here
11 }
12
13 void populate(int (*array)[COL])
14 {
15     // Some code here
16 }

Code 4: Demonstrates how 2D arrays are passed to functions
Given the equivalence between pointers and arrays, when an array is passed as an argument to a function, what really gets passed is a pointer to the array's first element. When we declare a function that accepts an array as a parameter, the compiler simply compiles the function as if that parameter were a pointer, since a pointer is what it will actually receive.
In general, when a multi-dimensional array is passed as a parameter to a function, what actually gets passed is a pointer to the array's first element. Since the first element of a multi-dimensional array is another array, what gets passed to the function is a pointer to an array.
We cannot receive a 2D array as shown in the demo Code 5, since compilers generate code in such a way that functions receive the 2D array as a pointer to an array, not as a pointer to a pointer.
1  #include <stdio.h>
2  #define ROW 3
3  #define COL 2
4
5  int main()
6  {
7      int array[ROW][COL];
8      populate(array);
9
10     // Some code here
11 }
12
13 void populate(int **array)  // Error
14 {
15     // Some code here
16 }

Code 5: Demonstrates that 2D arrays cannot be collected as a pointer to a pointer

Now, consider Code 6, given below:
1  #include <stdio.h>
2
3  int main()
4  {
5      // Define the array
6      int a[5] = {10, 20, 30, 40, 50};
7
8      int x = *a;
9
10     ...
11
12     return 0;
13 }

Code 6

From Code 6, in the expression shown in Line 8, the name of the array a is decayed into the pointer to the first element of the array by the compiler. As a result, dereferencing the pointer yields the value stored at the first location of the array, which is nothing but 10 in the example shown in Code 6.
Let us consider one more example, to show how the subscript in an array is always an offset from the pointer.
1  #include <stdio.h>
2
3  int main()
4  {
5      // Define the array
6      int a[5] = {10, 20, 30, 40, 50};
7
8      int x = a[2];
9
10     // a[2] <=> *(a + 2)
11
12     return 0;
13 }

Code 7

From Code 7, Line 8, the compiler internally treats the name of the array a as the pointer to the first element of the array; the subscript value 2 is added to the pointer as an offset to get the new address. The equivalent of a[2] is nothing but *(a + 2). The pictorial representation of Code 7 is shown in Table 1. So, here, once again, the name of the array decays into the pointer to the first element of the array, and then the subscript is added.
The compiler automatically scales a subscript to the size of the object pointed at. For instance, if sizeof(int) is 4 bytes, then the subscript 2 in a[2] is actually 2 * 4 bytes from the base address of the array. The compiler takes care of the scaling before adding the subscript to the base address of the array. The detailed explanation is given in Table 1.

Table 1: Pictorial explanation for Code 7

Statements:
int a[5] = {10, 20, 30, 40, 50};
int x = a[2];

Pictorial representation, assuming the base address of the array a is 100 and sizeof(int) = 4 bytes:

value:    10    20    30    40    50
address: 100   104   108   112   116

Interpretation of a[2] in pointer notation, and how the new address is computed:

a[2] => *(a + 2)                The array name a decays into the pointer to the first element of the array; the subscript 2 is treated as an offset from the pointer, which is added to the base address of the array.
     => *(a + 2 * sizeof(int))  Adding a constant to a pointer scales it by sizeof(data type); here, the data type is int.
     => *(100 + 2 * 4)          Assuming sizeof(int) = 4 bytes.
     => *(108)                  We get the new address (108).
     => 30                      Dereferencing the new address (108) gives the value stored there.

Finally, x contains the value 30.
value 30.

An array name in an expression (apart from its place of declaration) is also decayed by the compiler into a pointer to the first element of the array. A subscript is always equivalent to an offset from a pointer. And 2D arrays cannot be collected in a pointer-to-pointer when passed as arguments to functions.
By: Satyanarayana Sampangi
The author is a member of the embedded software team at Emertxe Information Technology (P) Ltd (http://www.emertxe.com). His areas of interest are embedded C programming combined with data structures, microcontrollers and Linux internals. He can be reached at satya@emertxe.com or satya.2891@gmail.com.


The Fundamentals of RDMA Programming

Computers in a network can exchange data in main memory without the involvement of the processor, cache or OS by using a technology called remote direct memory access (RDMA). This frees up resources, improving throughput and performance while facilitating faster data transfer.

Remote direct memory access (RDMA) technology increases the speed of server-to-server data movement through better utilisation of network infrastructure, without CPU intervention. The network adapter transfers data directly to or from the application memory without interrupting other parallel operations of the system. RDMA technology is widely used in enterprise data centres and high performance computing (HPC) because of its high-throughput and low-latency networking.
This article will enable app developers to start programming RDMA apps even without any prior experience. Before we start, let's have a brief introduction to InfiniBand (IB) fabrics, their features and components.

InfiniBand (IB)

InfiniBand is an open industry-standard specification for data flow between server I/O and inter-server communication. IB supports RDMA and offers high speed, low latency, low CPU overhead, high efficiency and scalability. The transfer speed of InfiniBand ranges from 10Gbps (SDR) to 56Gbps (FDR) per port.
Components of InfiniBand

Host channel adapter (HCA): This provides an address

translation mechanism under the control of the operating


system, which allows an application to access the HCA directly.
The same address translation mechanism is the means by which
an HCA accesses memory on behalf of a user level application.
The application refers to virtual addresses, while the HCA has
the ability to translate these addresses into physical addresses
in order to effect the actual message transfer.
Switches: IB switches are conceptually similar to standard
networking switches but are designed to meet IB performance
requirements. They implement the flow control of the IB Link
Layer to prevent packet dropping and to avoid congestion.
They also have adaptive routing capabilities and advanced
quality of service. Many switches include a subnet manager,
at least one of which is required to configure an IB fabric.
Range extenders: InfiniBand range extension is
accomplished by encapsulating the InfiniBand traffic onto the
WAN link and extending sufficient buffer credits to ensure
full bandwidth across the WAN.
Subnet managers: The IB subnet manager is based on
the concept of software defined networking (SDN), which
eliminates interconnect complexity and enables the creation
of very large scale compute and storage infrastructures.
The IB subnet manager assigns local identifiers (LIDs) to
each port connected to the InfiniBand fabric, and develops a
routing table based on the assigned LIDs.

Installing RDMA

First of all, connect two devices back to back or through a
switch. Download and install the latest version of the OFED
package from https://www.openfabrics.org/downloads/.
OpenFabrics Enterprise Distribution (OFED) is a package
developed and released by the OpenFabrics Alliance (OFA),
as a joint effort of many companies that are part of the RDMA
scene. It contains the latest upstream software packages (both
kernel modules and user-space code) to work with RDMA.
This package supports most major Linux distributions and
CPU architectures.
Extract the tgz file and type the following command to
start the installation:
[root@localhost]# ./install.pl

Next, choose 2 (Install OFED software). From the options
displayed, choose 1 (OFED modules and basic user level
libraries).
OFED packages will now be installed. Reboot the system
to complete the installation.
The structure of a typical RDMA application is as follows:
1. Gets the device list
2. Opens the requested device
3. Queries the devices capabilities
4. Allocates a protection domain
5. Registers a memory region
6. Creates a completion queue
7. Creates a queue pair
8. Brings the queue pair to a ready-to-send state
9. Creates an address vector
10. Posts work requests
11. Polls for completion
12. Cleans up
To identify RDMA-capable devices in your system, type
the following command:
[root@localhost]# ibstat

You need to be aware of the medium you are planning
to use for your RDMA connection: InfiniBand or Ethernet.
Verify that the ports are Active and Up.

Getting the device list

ibv_get_device_list( ) returns an array of the RDMA devices
currently available.
An example of how this is done is given below:
struct ibv_device **dev_list;
dev_list = ibv_get_device_list(NULL);
if (!dev_list)
exit(1);

Opening the requested device

ibv_open_device( ) opens the device and creates a context for
further use.
An example is given below:

struct ibv_device **device_list;
struct ibv_context *ctx;
ctx = ibv_open_device(device_list[0]);
if (!ctx) {
        fprintf(stderr, "Error, failed to open the device %s\n",
                ibv_get_device_name(device_list[0]));
        return -1;
}
printf("The device %s was opened\n", ibv_get_device_name(ctx->device));

Querying the device's capabilities

ibv_query_device( ) returns the attributes of the RDMA
device that is associated with a context. These attributes are
constant and can be used later.
Here is an example:

struct ibv_device_attr device_attr;
int rc;
rc = ibv_query_device(ctx, &device_attr);
if (rc) {
        fprintf(stderr, "Error, failed to query the device %s attributes\n",
                ibv_get_device_name(ctx->device));
        return -1;
}

Allocating a protection domain

ibv_alloc_pd( ) allocates a protection domain for an RDMA
device context.
An example is given below:

struct ibv_context *context;
struct ibv_pd *pd;
pd = ibv_alloc_pd(context);
if (!pd) {
        fprintf(stderr, "Error, ibv_alloc_pd() failed\n");
        return -1;
}

Registering a memory region

ibv_reg_mr( ) registers a memory region associated with the
protection domain to allow the RDMA device to perform
read/write operations.
Here is an example:

struct ibv_pd *pd;
struct ibv_mr *mr;
mr = ibv_reg_mr(pd, buf, size, IBV_ACCESS_LOCAL_WRITE);
if (!mr) {
        fprintf(stderr, "Error, ibv_reg_mr() failed\n");
        return -1;
}

Creating a completion queue

ibv_create_cq( ) creates a completion queue for an RDMA
device context.
An example is given below:

struct ibv_cq *cq;
cq = ibv_create_cq(context, 100, NULL, NULL, 0);
if (!cq) {
        fprintf(stderr, "Error, ibv_create_cq() failed\n");
        return -1;
}

Creating a queue pair

ibv_create_qp( ) creates a queue pair associated with a
protection domain.
An example is given below:

struct ibv_pd *pd;
struct ibv_cq *cq;
struct ibv_qp *qp;
struct ibv_qp_init_attr qp_init_attr;
memset(&qp_init_attr, 0, sizeof(qp_init_attr));
qp_init_attr.send_cq = cq;
qp_init_attr.recv_cq = cq;
qp_init_attr.qp_type = IBV_QPT_RC;
qp_init_attr.cap.max_send_wr = 2;
qp_init_attr.cap.max_recv_wr = 2;
qp_init_attr.cap.max_send_sge = 1;
qp_init_attr.cap.max_recv_sge = 1;
qp = ibv_create_qp(pd, &qp_init_attr);
if (!qp) {
        fprintf(stderr, "Error, ibv_create_qp() failed\n");
        return -1;
}
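
Step 8 in the structure listed earlier, bringing the queue pair to a ready-to-send state, has no example in the article, so the following is a minimal, hedged sketch. A reliable-connection queue pair has to move through the INIT, RTR (ready to receive) and RTS (ready to send) states using ibv_modify_qp( ); the remote_lid and remote_qpn values here are assumptions, normally exchanged out of band (over a TCP socket, for instance):

struct ibv_qp_attr attr;

/* INIT: associate the QP with a port and set access flags */
memset(&attr, 0, sizeof(attr));
attr.qp_state        = IBV_QPS_INIT;
attr.pkey_index      = 0;
attr.port_num        = 1;
attr.qp_access_flags = IBV_ACCESS_LOCAL_WRITE;
if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                  IBV_QP_PORT | IBV_QP_ACCESS_FLAGS))
        return -1;

/* RTR: needs the peer's LID and QP number (exchanged out of band) */
memset(&attr, 0, sizeof(attr));
attr.qp_state           = IBV_QPS_RTR;
attr.path_mtu           = IBV_MTU_1024;
attr.dest_qp_num        = remote_qpn;   /* assumption */
attr.rq_psn             = 0;
attr.max_dest_rd_atomic = 1;
attr.min_rnr_timer      = 12;
attr.ah_attr.dlid       = remote_lid;   /* assumption */
attr.ah_attr.port_num   = 1;
if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                  IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                  IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER))
        return -1;

/* RTS: set the timeouts and retry counts for the send side */
memset(&attr, 0, sizeof(attr));
attr.qp_state      = IBV_QPS_RTS;
attr.timeout       = 14;
attr.retry_cnt     = 7;
attr.rnr_retry     = 7;
attr.sq_psn        = 0;
attr.max_rd_atomic = 1;
if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_TIMEOUT |
                  IBV_QP_RETRY_CNT | IBV_QP_RNR_RETRY | IBV_QP_SQ_PSN |
                  IBV_QP_MAX_QP_RD_ATOMIC))
        return -1;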

Creating an address vector

ibv_create_ah( ) creates an address handle associated with a
protection domain.
Here is an example of how this is done:

struct ibv_pd *pd;
struct ibv_ah *ah;
struct ibv_ah_attr ah_attr;
memset(&ah_attr, 0, sizeof(ah_attr));
ah_attr.is_global     = 0;
ah_attr.dlid          = dlid;
ah_attr.sl            = sl;
ah_attr.src_path_bits = 0;
ah_attr.port_num      = port;
ah = ibv_create_ah(pd, &ah_attr);
if (!ah) {
        fprintf(stderr, "Error, ibv_create_ah() failed\n");
        return -1;
}

Posting work requests

ibv_post_send( ) posts a linked list of work requests to the
send queue of a queue pair.
Here is an example:

struct ibv_sge sg;
struct ibv_send_wr wr;
struct ibv_send_wr *bad_wr;
memset(&sg, 0, sizeof(sg));
sg.addr   = (uintptr_t)buf_addr;
sg.length = buf_size;
sg.lkey   = mr->lkey;
memset(&wr, 0, sizeof(wr));
wr.wr_id      = 0;
wr.sg_list    = &sg;
wr.num_sge    = 1;
wr.opcode     = IBV_WR_SEND;
wr.send_flags = IBV_SEND_SIGNALED;
if (ibv_post_send(qp, &wr, &bad_wr)) {
        fprintf(stderr, "Error, ibv_post_send() failed\n");
        return -1;
}
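
ibv_post_send( ) covers only the sending side; the peer must post receive work requests with ibv_post_recv( ) before the sender's message arrives. Here is a minimal sketch along the same lines as the send example, assuming the same buffer and registered memory region as above:

struct ibv_sge sg;
struct ibv_recv_wr wr;
struct ibv_recv_wr *bad_wr;
memset(&sg, 0, sizeof(sg));
sg.addr   = (uintptr_t)buf_addr;
sg.length = buf_size;
sg.lkey   = mr->lkey;
memset(&wr, 0, sizeof(wr));
wr.wr_id   = 0;
wr.sg_list = &sg;
wr.num_sge = 1;
if (ibv_post_recv(qp, &wr, &bad_wr)) {
        fprintf(stderr, "Error, ibv_post_recv() failed\n");
        return -1;
}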

Polling for completion

ibv_poll_cq( ) polls work completions from a completion queue.
An example is given below:

struct ibv_wc wc;
int num_comp;
do {
        num_comp = ibv_poll_cq(cq, 1, &wc);
} while (num_comp == 0);
if (num_comp < 0) {
        fprintf(stderr, "ibv_poll_cq() failed\n");
        return -1;
}
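
Step 12, cleaning up, should release the resources in the reverse order of their creation. A minimal sketch, assuming the objects created in the earlier examples:

/* each of these verbs calls returns 0 on success */
ibv_destroy_ah(ah);
ibv_destroy_qp(qp);
ibv_destroy_cq(cq);
ibv_dereg_mr(mr);
ibv_dealloc_pd(pd);
ibv_close_device(ctx);
ibv_free_device_list(dev_list);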

By: Miren Karamta


The author is an IT systems manager at Bhaskaracharya
Institute for Space Applications and Geo Informatics (BISAG)
with over five years of system and network administration
experience. He can be contacted at mirenkaramta@yahoo.com.



Android Data Binding: Write Less to do More
The Android data binding framework significantly reduces the code that needs to
be written. This second article in the series covers the aspects of two-way binding,
expression chaining, implied event updates and Lambda expressions.

We covered one method of data binding in the first
part of this article (OSFY, July 2016). Along with
that, we also looked at certain basics like how to
get data binding up and running in Android Studio, and the
use of binding adapters to add our own attributes to layouts
easily. While one-way data binding enables us to get our data
from the model to the UI, there are times when we need to
get input from the UI and send it to our model. This is when
two-way data binding comes to our rescue. It helps the data
flow in both directions in our application, i.e., from the model
to the UI and vice versa.
When Android data binding supported only one-way
binding, people used to achieve two-way data binding
by including callback methods on the models, and then
calling those methods from XML by using listeners such as
TextWatchers. But now, as the data binding framework on
Android supports two-way data binding, it becomes very easy
to bind an attribute to a field with two-way binding. Let's take
a look at an example that will make this concept very clear.
Previously, when we bound an attribute to a field, we used
the following syntax:

android:text="@{model.text}"

This will essentially declare that the field is a one-way
data binding field. Now, to make the field bind both ways, we
just need to replace @ with @=, as follows:

android:text="@={model.text}"

And that's pretty much it. This field is now two-way
data bound with the model, i.e., if data in the variable
changes, it'll update the UI, and when the UI changes the
data, it'll update the model. But, of course, the field needs
to be observable, so the rules and syntax we covered in the
previous article on this topic still apply.
Most of the time, when using two-way data binding, only
adding the equals sign will make the field two-way bound.
But there may be cases when the data generated by the UI
is not in the form that the model accepts. To help with such
cases, we have InverseBindingAdapters, which are very
useful to get data from the UI and convert it into a form that
the model expects. InverseBindingAdapters look and behave
almost the same way as BindingAdapters do. Let's explore the
example we described above:

android:text="@={model.text}"

Now, the variable text is a string type variable and the
attribute android:text or the method setText expects a string, so it
works perfectly in one-way data binding; but when we want
to pass the value of the text to the model, we need to get the
data from the view. To get the text from the view, we will call
getText on the view, but getText doesn't return a string. It returns
a CharSequence in the case of TextView and an Editable in
the case of EditText. The conversion from a CharSequence or an
Editable to a string before setting it on a variable is done in the
InverseBindingAdapter, which expects a single parameter (the
view on which the attribute is set), and the return type of the method
should be the type which the model expects (in our case, string).
It is important to check the return value of the method
carefully, because the method will be called based on the
return type of the method. Now, to give our problem a
solution, we write an InverseBindingAdapter as shown below:
@InverseBindingAdapter(attribute = "android:text")
public static String getStringFromTextView(TextView textView)
{
return textView.getText().toString();
}

The attribute parameter inside the annotation is the
attribute you want applied on this adapter. That's a simple
and basic introduction to how two-way data binding can be
achieved with the Android data binding framework.

Expression chaining

It often happens that more than one view in our layout
hierarchy depends on a single variable or a single expression,
so what we would need to do is repeat all that code in all the views.
Data binding has now introduced expression chaining, in
which you can put that condition in only one of the views and
then depend on that view's state in other views. Let's see what
it looks like in code.
Assuming that you want to toggle some views' visibility
based on the state of a CheckBox, write the code only once
and refer to that view's visibility in other views by using the
view's ID, as shown in the following code:

<CheckBox android:checked="@={user.adult}"
/>
<EditText android:id="@+id/firstName"
    android:visibility="@{user.adult ? View.VISIBLE : View.INVISIBLE}"
/>
<EditText android:visibility="@{firstName.visibility}"
/>

Implied event updates

In the above example, we used a variable adult from our
model to store the value of the state of a checkbox and update
the visibility of other views, but this can be done more easily
by directly passing the result of the checked state to another
expression. So now, modifying the code shown earlier, we get
the following output:
<CheckBox android:id="@+id/checkbox"
/>
<EditText android:id="@+id/firstName"
    android:visibility="@{checkbox.checked ? View.VISIBLE : View.INVISIBLE}"
/>
<EditText android:visibility="@{firstName.visibility}"
/>

When the checkbox's checked property changes, the result is
directly applied to the expression and the view's visibility
changes accordingly.

Lambda expressions

Lambda expressions are a very cool feature of Java 8.
Anyone who is familiar with Java 8 will agree. But as the
Android platforms before Android Nougat had features up
to Java 7 only, we unfortunately don't get to use them. But
in data binding, the framework parses the expressions and it
completely removes those expressions at compile time. We
can put almost any feature in data binding and it will still
work on all the older platforms that data binding supports. At
a very basic level, you can imagine lambda expressions as
syntactic sugar for listener interfaces. Let's see how we can
use lambda expressions in our layout files.
So let's suppose there is a method in our model, which we
would like to call when the checkbox is checked; for example,
showing a toast of the current state of the checkbox. To
achieve this, we can directly write a lambda expression in our
layout file which will call that method. In code, it looks like
what's shown below:
<CheckBox android:id="@+id/checkbox"
    android:onClick="@{() -> user.toastCheckboxState(context, checkbox)}"
/>

Here, the method toastCheckboxState is present in our
model. It needs a checkbox to get the state and a context
to make a toast. Now, data binding has this cool feature
that allows you to pass any view from the hierarchy to the
method by referring to it by its ID. Another new feature
is that you can get the context in the layout file in a variable
named context. Whenever you refer to the context variable,
it will give you the context of the root view. But be aware
that if you give the name context to anything else, such as a
view's ID or a field in the model, it will override the context
variable of the data binding framework and, instead, return
the view or value from the model, respectively; so do try to
avoid naming anything else context.
This is how lambda expressions can be used to put
method calls easily in layout files. There is also another
feature called method references, which works in a manner
similar to lambda expressions, but behaves differently and
has some different use cases. Readers are advised to explore
this feature as an exercise, as we're not covering it in this
article, because we believe that most of the time lambda
expressions will do the trick.
There are yet more features available in the data binding
framework which we haven't covered. These include binding
callbacks that give you control over when the data is bound
with the UI, and writing our own data bound fields and
synthetic events to make our own fields two-way data bound.
We advise you to start using data binding in your
layouts because it reduces quite a bit of code. It also makes
the code cleaner by dividing the logic between Java files
and layout files, whereas previously, we had to do all of the
logic in our Java code. It also increases the performance
because it gets a reference to all the views with IDs at
compile time. So we don't need to call findViewById for
any view (findViewById is a heavy call, as it scans the
whole view hierarchy to find our view).

References
[1] https://developer.android.com/topic/libraries/databinding/index.html
[2] https://github.com/google/android-ui-toolkit-demos/tree/master/DataBinding/DataBoundRecyclerView
[3] https://www.youtube.com/watch?v=DAmMN7m3wLU
[4] https://github.com/samvidmistry/data-binding-demo (link to the demo used in this article)

By: Samvid Mistry and Prof. Prakash Patel

Samvid Mistry is an Android developer at SculptSoft. He has
a diploma in computer engineering from Nirma University,
Ahmedabad. Email: mistrysamvid@gmail.com, http://www.samvidinfotech.in
Prof. Prakash Patel is working as an assistant professor in the
Information Technology department at Gandhinagar Institute of
Technology. Email: prakashpatelit@yahoo.co.in



A Quick Look at Multi-architecture, Multi-platform Mobile App Development Frameworks

The article presents an overview of multi-architecture, multi-platform mobile application
development frameworks and then takes a look at appEZ, which, as its website proudly
declares, offers 'Ridiculously easy mobile app building for non-programmers'.


The pace at which the mobile is changing people's
lifestyles is accelerating. The emerging challenge
for enterprise applications is to be available
and accessible over diverse mobile platforms. Multi-architecture
(flexibility to support native, hybrid and Web
mobile applications) and multi-platform (Android, iOS,
WP, etc) support is the imperative expectation for any
mobile application development tool/framework. Along
with cross-platform support, such a framework is also
expected to help mobile applications cover platform-specific
diverse form factors. Such diversity among mobile
devices and the rich feature support in HTML/CSS have
resulted in mobile hybrid applications being a preferred
implementation approach.
Mobile applications can be classified into three types:
native mobile applications, Web applications and hybrid
mobile applications.

Native mobile applications

These are also known as thick client applications. These apps
are implemented via Android, iOS, Windows Phone or other
mobile OSs. The imperative characteristics of any native
mobile application are:
An executable file installs and resides on the mobile device
It is executed directly by the mobile operating system
It is able to use the mobile platform or operating system APIs
It is distributed via a platform-specific app store or via an
enterprise distribution mechanism

Mobile Web applications

These are also known as thin client applications. These apps
are implemented with Web technologies (HTML, CSS and
JavaScript). Some imperative characteristics of mobile Web
applications are:
Applications are executed by the device browser of the
mobile operating system
They can leverage only limited device features for app
implementation
They don't carry any executable file that can be installed
on or removed from the mobile OS

Mobile hybrid applications

Mobile hybrid apps are neither native nor Web applications.
They are implemented with Web technologies and packaged
as applications for distribution. These applications can access
native device features and APIs. Basically, hybrid applications
are native mobile applications that host Web browser
controls within their main UI screen.

[Figure 1: Mobile hybrid application]

Here are the imperative characteristics of mobile hybrid apps:


UI implementation is done using Web technologies
(HTML, CSS and JavaScript)
Apps are capable of using the mobile platform or
operating system APIs
An executable file installs and resides on the mobile device
The app can be distributed via a platform-specific app
store or an enterprise distribution mechanism
Figure 1 depicts a representation of a mobile hybrid
application. Broadly, it has two layers: the container and the
user interface (UI), where the container is implemented with native
mobile technology (Android, iOS or Windows Phone) and the UI
is implemented with Web technologies (HTML, CSS and JS).
There are many frameworks and tools for mobile hybrid
app development. The major selling point of mobile hybrid
apps is cross-platform development. The following are the
features that can be leveraged with mobile hybrid apps:
Integration of open source frameworks with HTML5
Liquid layouts for multi-screen UIs
Local storage, multimedia handling, semantics and forms,
graphics, etc
A single code-based architecture model for multi-platform
presentations
Mobile hybrid framework that bundles the HTML5 based
view layer with native platform containers to create
deployable builds
The mobile hybrid application development space
has witnessed increasing interest from different sets
of customers and companies. Over the last few years,
many frameworks have come up, which include a mix of
open source and licensed frameworks for mobile hybrid
applications. Most of them cover particular aspects of
mobile hybrid app development.

appEZ

appEZ, an open source multi-architecture/multi-platform
mobile app development framework, has a unique flexible
architecture for true mobile hybrid app development
with flawless amalgamation of native (Android/iOS/WP)
and Web (HTML5, CSS and JavaScript) technologies,
depending on the app's needs. It covers all the layers of
mobile hybrid application development. Its modular
architecture makes it possible to use only the required
components, based on the business requirements as
against using the complete library set-up.

appEZ: The application layer structure

The appEZ mobile application incorporates multiple layers
in one application code base.

[Figure 2: Three distinctive layers of appEZ]

Figure 2 illustrates three distinctive layers. These are
listed below.
UI layer: This represents the UI/UX and view
implementation for the appEZ mobile app. The appEZ
platform has multiple ways of implementing the UI
layer as hybrid/Web or native (Android Activity). The
hybrid and Web application UI layer is implemented
with HTML, CSS, JS and other UI frameworks/tools
(JQuery, JQueryMobile, LESS, Bootstrap, etc). This
layer supports UI/UX guidelines for app-centric design
or platform-specific adaptive design, depending on the
application's needs.
Business logic layer: This layer is responsible for the
core logic and handles the implementation for the
application. It also communicates with the server for the
required data and information. The appEZ-recommended
approach and respective design patterns can be used
to implement the applications features. In case of
cross-platform development, it is recommended that
users harness the common business logic layer among
platform-specific builds.
Native layer: This layer takes care of the platform-specific capabilities (camera, database, HTTP
communication, persistent store, etc) to be used
by appEZ. This layer is not accessible with Web
application implementation. This is the reason
Web applications possess limited access to mobile
platform capabilities.
Mobile applications powered by appEZ comprise three
components, as illustrated in Figure 3.

[Figure 3: appEZ components]

Unified integration components (UIC)

UIC enables users to create multi-platform native mobile
applications with simple HTML5/CSS3 and JavaScript. This
facilitates the creation of cross-platform applications with
a unified HTML structure. These components are built on
open source industry standards, frameworks and tools such as
LESS (for CSS programmability), Bootstrap (for page layouts
and dynamic structuring), and JQuery Mobile (for gesture
handling and navigation history management).

SmartWeb

This is a JavaScript based model-view-controller (MVC)
skeleton for the business logic layer. SmartWeb recommends
a generic design skeleton that can be used to develop
typical client-server applications. Its MVC skeleton for Web
development supports scalability and extensibility as per
application business logic layer requirements.

Mobilet Manager Interface (MMI)

This is an interface to communicate between the native layer
and the Web layer. MMI provides a set of APIs that enables
developers to leverage native capabilities exposed by native
mobile platforms (Android/iOS/WP) from the JavaScript
layer. MMI has the following parts:
MMI (JavaScript): Provides a defined set of APIs for the
developers to use at the Web layer in JavaScript.

MMI (native): This is the corresponding native layer at the
respective platform container that receives the required
parameters from the Web layer and executes native
services as per the need.
MMI callback notification: Each MMI API requires a
callback at the Web layer to get notified about success/failure
from the native container layer (Android/iOS/WP).
appEZ supports the Android (API level 10 onwards),
iOS6 - iOS9 and Windows Phone 8/8.1 mobile platforms for
mobile application development. It has some advantages over
competing frameworks. These include:
A unique offering of flexible architecture for true mobile
hybrid application development
Flawless amalgamation of native and hybrid technologies
as per the application's needs
Coverage of all the layers of mobile hybrid application
development
A modular architecture that makes it possible to use only the
required components as per business requirements, as
against using the complete library set-up
Ready-to-use services for accessing the most commonly
used features of the native container
Expressive and easy-to-use JS syntaxes
A basis of open source libraries, which makes the design/
development extensible

References
[1] Enterprise Mobility Breakthrough: http://www.amazon.com/Enterprise-Mobility-Breakthrough-Beginners-Guide/dp/1482844087, https://books.google.com/books/about/Enterprise_Mobility_Breakthrough.html?id=_sY5rgEACAAJ&hl=en
[2] appEZ: http://appez.github.io/

By: Raghvendra Singh Dikhit


The author is a solutions architect by profession and author
of the book Enterprise Mobility Breakthrough. He has hands-on
experience with enterprise mobility solutions, services
and products. He has worked with Fortune 500 firms for
mobile solutions (architecture, design, development, quality
assurance and porting). He holds a master's degree in computer
applications, an MBA and a master's degree in English.




OLabs Makes School Laboratories Accessible Anytime, Anywhere
School education in India faces many challenges including the lack of infrastructure and
shortage of trained teachers. Even in places where this is not a major concern, students
come out with little practical knowledge of the concepts and theories they learn. Read on
to discover how CDAC is addressing this problem.

Much has been said about the inadequate higher-order
thinking skills among students passing out
of our education system. As per the curriculum,
science subjects should have associated labs, the study of
mathematics should be accompanied with activities, and so
on. But due to various constraints, these guidelines are not
implemented adequately in most Indian schools. Lab access is
usually limited in terms of time and resources available. And
in many places, there is no lab worth mentioning.
Added to this is the need to continuously replenish the stock of
raw materials to conduct the lab activities.
Virtual labs address this deficiency in practical exposure.
These labs simulate the physical processes on a computer
system, allowing the student to perform the activity on a
computer screen. For example, in a chemistry virtual lab, the
student can put a beaker on the table, add some chemicals
to it, mix it with a spoon, boil it, and so on, all while sitting
in front of the computer. As a physics student, you can
suspend a spring on a stand, add various weights to it,
observe it oscillating and coming to rest, measure the
increase in the spring's length, and plot a graph to verify
Hooke's Law. Online Labs (OLabs) is a set of such virtual
labs, for schools.
Computers are today pretty accessible and affordable for
students. Getting Indian languages on them is also easy.
Fast-dropping prices and healthy competition are helping matters
further. Today, the trend is to use tablets instead of laptops and
desktops; these can be considered computers for most
purposes. One major strength of these devices is that they are
used in a personal way, unlike, for example, a TV. And, hence,
students can spend as much or as little time as they need on
their topics, without worrying about other students in the class.


Therefore, a computing device offers tremendous scope for
open-paced learning, to account for varying learner profiles.
OLabs exploits this opportunity, offering a number of virtual
labs for schools, allowing students to practice on them as and
when they are comfortable, at their own pace. The project is
supported by the Department of Electronics and Information
Technology (DeitY), government of India, and designed and
implemented by CDAC Mumbai and Amrita Vishwa
Vidyapeetham.

The OLabs approach

If you do a Google search for any specific topic, say
Hooke's Law, chances are that you will come across plenty
of material. Good animations and even simulations can be
located. Many of these are available for free, at least for use.
But these are hardly used in any schools.
There are two major reasons for this. First, for such
online resources to be used in a school, these first need to
be studied well and integrated into the curriculum. This
implies that the online resources should use the same
jargon and context that children are familiar with from their
curriculum. Many of the simulations do not easily fit into
this mould. Teachers will need to demonstrate the concepts
in class, and perhaps give assignments or lab work using the
online tool. This is usually a hard task, since the tool may not
be developed for such use cases. Teachers, therefore, need
to understand such tools, and figure out how best they can
be used in the class. There are also issues linked to fitting
these tools into the lab environment, with support for taking
readings, making observations, plotting graphs, etc.
OLabs has been designed to address these concerns.
First of all, there is a single portal under which all labs are
accessible, irrespective of the subjects or the developer. See
Figure 1, which shows the opening page of the website
www.olabs.edu.in. It is fully browser based, and hence no local
software or installations are required.

[Figure 1: Home page of the OLabs]

Generally, all the labs follow a common structure, as
shown in Figure 2.

[Figure 2: Structure of a virtual lab (objective shown: 'To determine the focal length of a concave mirror, by obtaining the image of a distant object')]

There is a Theory tab, which documents the theoretical
concepts addressed by the lab. The explanation is brief, since
this is usually available in detail in the textbooks. This will
help to check on the relevant formula, definitions, etc, when
students are working on a lab, or as a precursor to performing
the lab activity. The Procedure tab outlines the activity, step
by step, both when you do the activity in a real lab, and when
you do it using OLabs. This is usually the recommended
procedure as per the curriculum, and OLabs reflects this
sequence as far as possible.
The third tab is either a video, showing the lab activity in
real life (e.g., in case of biology), or a teacher-guided view of
the lab, which can be used to explain the lab to a class. The
fourth tab is the actual lab, and offers an interactive platform
for the student to perform the activity and control the relevant
parameters within the intended scope of the lab activity.
Where required, an observation window is provided to record
the readings from different trials, and a graph plot depicting
the same, visually. Where measurements are needed, a virtual
scale is also provided.
A Viva voce tab is also provided, with a few multiple
choice questions for self-testing the concepts. The Reference
tab includes some reading material, Web links, etc, that are
relevant to the particular activity and can be used for further
reading. This structure is one of the key USPs of OLabs, as
it provides a complete ecosystem for directly adopting the
system into the curriculum.
The labs follow the structure and process as outlined
in the CBSE (Central Board of Secondary Education) lab
manuals, further easing the adoption process. Despite this
compliance to CBSE, adopting OLabs for state boards
should not be difficult, since most of the syllabus and labs
are expected to be common. Language can be a deterrent,
and OLabs, in its current form, addresses this too. The team
behind the project has already translated OLabs into Hindi,
Malayalam and Marathi. The same process can be extended to
other languages; only translation of the text strings needs to
be done. If readers can help to translate OLabs into any of the
other Indian languages, please do get in touch with us.

Using OLabs

Given the approach we have followed, OLabs can be used
in a variety of ways. Teachers can use the online labs in
the classroom to explain the lab activity to be performed
in the real lab. This helps to remove the surprise element
when the student gets to the lab, and makes the lab
session more effective. Teachers can also frame review
questions with the lab as the backdrop, asking the students
to predict what will happen under particular conditions.
Students can use the lab before the physical lab session, to
familiarise themselves with the process, and to chart out
the expectations. As a post-lab activity, they can use it with
more variations, to reinforce the concepts, and to answer
any questions they may have. We are sure that creative
teachers and students can come up with many more
innovative uses for this resource.

The math lab

While the notion of a lab for physics, chemistry and biology
is well understood, usually there are no formal labs for
mathematics. The math lab in OLabs is envisaged as activities
to reinforce conceptual relationships, e.g., the formula for
the volume of a sphere. In the curriculum, these are called
activities, and usually involve using resources such as paper,
glue, sand, thread, scissors, etc, and making shapes and
demonstrating similarities. These activities are captured into an
interactive online experience allowing the student to perform
these virtually. Students can, thus, draw a variety of triangles
and see how a particular formula or theorem holds true for each
of them. As in real life, they have choices when selecting a
point, which step to do in which order, and so on. The system
ensures that wrong steps are not performed.



The English lab

[Figure 3: English lab on voice conversion]

For the English language lab, we have designed activities
that students can perform repeatedly till they are thorough
with a concept. In a real-world tutorial class, a teacher will
need to review the work done by students regularly, and
give feedback, so that they can improve their performance.
This takes up too much of the teachers time, and limits
the extent of tutorial activities. In OLabs, reviewing the
students solutions is done by our system, automatically. The
feedback is directly dependent on the selected problem and
the mistakes made by the student at that particular time. This
helps students to notice the problems in their solutions and
improve on them with practice.
Another interesting aspect of the English lab is the use
of templates to generate sentences, instead of using a set of
canned sentences for the activities. This helps prevent blind
copying from others, and offers a wide range of sentences for
practice. Sophisticated language processing techniques are
deployed to make this effective.
English labs are currently available for topics such as
tense conversion, voice conversion, use of prepositions,
subject-verb agreements, etc.

Current status

The system has been available on the OLabs website for the
past few years. There are over 150 labs covering physics,
chemistry, biology, mathematics and English. Twelve
English labs are currently open to students of different
classes, depending on the concepts that they want to
practice. The number of other labs is given below (as per the
CBSE curriculum).
Class/subject   Physics   Chemistry   Biology   Math
IX              11        13          13        10
XI              16        10          13        -
XII             16        15          -         -

(The table as printed also includes a row for class X; the cells marked '-' are not recoverable in this copy.)
OLabs is gaining a lot of attention through various
channels, with 50,000 to 100,000 page views a week,
depending on the academic calendar. Since many schools
do not have the required Internet access for regular use in
classrooms or labs, a live DVD version of the entire OLabs
system has been developed. There is also a Windows installer
for schools to set up the system on their computers, locally.
In 2013, the then CBSE chairman, Dr Vineet Joshi, had
written to all schools encouraging them to use OLabs, and
recently, another circular has been issued with the same
message. It also encourages CBSE teachers to benefit from
the training that CDAC offers. In December 2015, the
union minister for communications and IT, Honourable
Shri Ravi Shankar Prasad, formally launched a programme

to train 30,000 teachers on OLabs across India, under the
Digital India initiative. This training is free, and we invite
teachers from CBSE schools to register on the portal for
this programme. We will schedule these programmes in
various parts of the country, and will inform those who
register, on the participation dates. Over 5000 teachers have
been trained so far.

The technologies used

The entire system is built on open source software, and
also follows open standards, to ease localisation and other
modifications. The technologies used are listed below.
1. Client side: HTML5, JavaScript, CSS3, KineticJS, Three.js and JQuery
2. Server side: PHP, JSP and Servlets
3. Database: MySQL
The UI design is responsive; hence, the website can be used
in the desktop environment as well as on handheld
devices. The development follows software engineering
principles, once again using open source software such as
Git (code versioning), GitLab (issue tracking and repository
management) and Eclipse (IDE) among others.

Acknowledgements

The success of the project owes much to the great support we
received from DeitY, government of India, especially from Dr
Ajay Kumar, Dipak Singh, D.K. Kalra, Santosh Pandey and
A.K. Arora. I also thank the OLabs team at CDAC Mumbai,
particularly the team leads Archana Sharma and Manoj
Kumar Singh, and the team at Amrita Vishwa Vidyapeetham,
headed by Prof. Prema N., for the passion and commitment
they bring to the project. Special thanks to Archana for the
critical review of this article.
By: Dr M. Sasikumar
The author is associate director at CDAC Mumbai, and is the chief
investigator of the project. He can be reached at sasi@cdac.in



Cool Terminal Tricks


The terminal can become your best friend and saviour. All that you need to do is a bit of
exploring and some inevitable trial-and-error to get the best out of your system. So lets
learn a few tricks to use on the terminal for some amazing effects.

Bash is an interactive and scripting shell program for
UNIX and UNIX-like operating systems. In this
article, you will learn a few neat tricks that will make
bash a more convenient and useful tool to manage your GNU/
Linux system.
While a lot of effort has gone into creating attractive
desktop managers such as KDE, GNOME and Xfce, the
humble terminal remains the desktop of compulsion/choice
for systems administrators and ultra-geeks. It only takes a few
simple tricks for regular users to become more productive
and comfortable at the terminal. While there are several
terminal programs, this article relies on GNOME Terminal.
A working knowledge of Linux shell commands such as ls,
mkdir, cd, cp, mv, chmod, head, tail, echo and sudo, and basic
scripting, is needed for this article.
Using the ~/.bashrc file: Commands in the hidden .bashrc
file are executed before the terminal window is displayed.
This is usually well-documented and changes can be made
using a text editor. Remember to back up the file before
making any changes to the preferences specified in it.
Starting at the desktop: The GNOME Terminal usually
starts at the users home directory, which is not easily
accessible. Put the command cd ~/Desktop at the end of the
.bashrc file so that any files created at the terminal can be easily
found on the desktop. The home directory can be referred in
your commands and scripts using the ~ (tilde) abbreviation.
Colour-code your terminal window: In some GNU/
Linux distributions, the default terminal window has black


text on a white background. This is not good for your eyes.
Change the terminal preferences to have a dark background
and light foreground colours. Another common annoyance is
that the prompt by itself takes half a line, and long commands
invariably wrap to the next line, disrupting your train of
thought. To have a more informative and colourful prompt,
add the following line at the end of your .bashrc file:
PS1="\a\n\n\e[31;1m\u@\h on \d at \@\n\e[33;1m\w\e[0m\n$ "

This setting ensures that the prompt contains a colour-coded
user name, machine name, date, time and current
directory. Most importantly, you get an entire line to type your
command. If you want to tinker with the prompt even further,
visit the IBM developerWorks Linux library page for 'Prompt
Magic'. (I had mentioned this trick in a previous article. I am
repeating it here for those who might have missed it.)

[Figure 1: Optimised and colour-coded GNOME Terminal]
Using command history: The commands that you type are
stored in a hidden file named .bash_history. You can browse
the history using the up and down arrow keys at the prompt.
You can also search this history using the keyboard shortcut
Ctrl+R. You can increase the length of the command history
by changing the following variables in the .bashrc file. They
limit the number of lines of typed commands that can be
stored in memory and in the .bash_history file.
HISTSIZE=1000
HISTFILESIZE=2000

Lets Try For U & Me


Write shell scripts: Bash is not only an interactive
command prompt but is also a powerful scripting language
interpreter. As many terminal tasks require more than one
shell command, you can store them in a shell script text file
and pass the variable data as command-line arguments. I use
the following script to kill applications by their name:
for sID in `pgrep $1`
do
  ps $sID
  kill -STOP $sID
done
for sID in `pgrep $1`
do
  ps $sID
  kill -KILL $sID
done

To kill all instances of the Opera browser from the command
line, I can type bash ~/MyScripts/kill.txt opera. Even though an
execution bit can be added to the file so that bash can be omitted
from the command, I never do this. As a long-time Windows
user, I am paranoid about malware finding executable scripts and
adding their own destructive commands. In Microsoft Windows
operating systems, I usually change the default action of .vbs, .js,
.wsf and .reg file types to Edit, rather than leave them at Open or
Merge. I follow the same precaution in GNU/Linux.
Creating command aliases: You can create abbreviated
forms for your commands in the .bashrc file. One of the most
common commands that you have to type may be ls -l. Hence,
I have the following alias for the ls command. I just need to
type one letter (l) for it:

alias l='ls -l'

To execute my kill script, I use the following alias:

alias kll='bash ~/MyScripts/killit.txt'

So, to kill Opera, I type kll opera. I continue to use
the obsolete pre-Blink Opera browser for its wonderful
RSS reader. Unfortunately, it becomes unstable several
times during a session. It cannot be killed by the usual
killall or pkill command. The kill script and this alias,
however, vanquishes it.
Using bash, not sh: My first experience with a UNIX-like
OS was on SCO UNIX. As a result, I was accustomed to
running shell script files with the sh command. This continued
even after I started using GNU/Linux systems. For years, I
was perplexed about why many of the scripts were not working
well, but they did fine after I set the execution bit (chmod
+x) and ran them directly without the shell command.
Apparently, in many GNU/Linux distributions, sh refers to the

old UNIX-like shell, and bash is a separate program that works
like a more advanced superset of sh. As a result, scripts
designed for bash will not work well with sh. So, always use
bash to run all your bash shell scripts. However, there are
some tasks such as cross-platform driver compilation that still
require sh. Another thing to note is that the su or root terminal
uses sh by default. This is why the up/down arrow keys will
not let you browse the history. So, be aware of which shell
program you are currently using.
Breaking up long commands: When you have to type
a long command, you can type a backslash to easily wrap the
command to the next line and continue from there, as follows:

ffmpeg -i tank.mp4 \
-c:v copy \
-c:a copy \
-ss 1:12 \
-t 2:50 \
tank-cut.mp4

Using keyboard shortcuts: GNOME Terminal is a multi-tab window. You can press Ctrl+Shift+T and start a terminal
in a new tab in the same window. If you feel you need to refer
to the man pages of the current command, do not cancel what
you are typing. Just start a new tab and browse the man pages
from there. Another useful shortcut is Ctrl+U, which cancels
or deletes everything in the current line and lets you begin
afresh. To copy text from a terminal window, the short cut is
Ctrl+Shift+C, and the short cut to paste text is Ctrl+Shift+V.
Ctrl+C is for killing the current command and should not
be used to copy text in the terminal.
Switching to a TTY terminal: The Linux desktop
managers are known to be very stable. However, on those
rare occasions when the desktop hangs, you can hold down
Ctrl+Alt and press any of the function keys to switch to a TTY
terminal. These terminals are created by the OS before loading
the GUI (desktop manager). From one of these terminals, you
can log in to your account and do your troubleshooting. One of
the following commands can restart the desktop manager.
sudo service gdm restart # for Gnome 2
sudo service mdm restart # for Mate



sudo service lightdm restart # for Upstart
sudo service gdm3 restart # for Gnome 3

Running shell scripts from other programs: You can
execute shell commands and scripts from other programs
to automate your application tasks. In my article that
appeared in the March 2016 issue of OSFY, I have shown
how to add shell commands/scripts to the Nautilus/Caja
file manager as the context menu options. Many other
programs provide shell support. In Pluma/Gedit (the
default text editor), I have an external tool command to
convert Markdown files to formatted HTML.
perl ~/MyScripts/markdown.pl --html4tags \
$GEDIT_CURRENT_DOCUMENT_NAME > \
${GEDIT_CURRENT_DOCUMENT_NAME%.*}.htm

You will have to look up the help documentation of the
application to find out how to run shell commands.
I use isolated Firefox profiles to load Web pages
with Adobe Flash or run the browser behind a US-based
proxy IP. These profiles can be quickly loaded from other
programs or from a launcher (a short cut) using special
command line options.
# Create a new profile and specify its settings
firefox -ProfileManager
# Load a named profile that you created
firefox -p ProxyIpProfile
firefox -p AdobeFlashProfile

You will be surprised how many popular GUI programs
have an elaborate command line interface. cvlc is the
command line version of the media player VLC. I use it to
play audio files from the terminal.
While there are a lot of GUI programs for GNU/Linux,
there are several times more command line utilities written for
it and its predecessor, UNIX. The Linux userland abounds with
command-line solutions for situations where the GUI option is
deficient or does not exist. For example, there is a wonderful
program called alsamixer, which provides a command line
version of Sound Preferences. I was able to configure my
near-vintage Pinnacle TV tuner using this program. If Network
Manager failed to recognise your wireless modem, then wvdial
will still be able to configure and use it. Espeak can convert text
arguments to voiced audio, which can then be captured as wave
files. I use the wave files to provide audio notifications from my
bash scripts and to annotate my YouTube videos.
Well... one can go on forever like this. The terminal can
become your best friend and saviour. All that you need to do
is a bit of exploring and some inevitable trial-and-error to get
the best out of your system.

References
[1] Learning the shell: http://www.linuxcommand.org/learning_the_shell.php
[2] BASH Programming - Introduction HOW-TO: http://tldp.org/HOWTO/Bash-Prompt-HOWTO/index.html
[3] Tip: Prompt magic: http://www.ibm.com/developerworks/linux/library/l-tip-prompt/

By: V. Subhash
The author is a writer, programmer and FOSS
fan (www.vsubhash.com).



CoAP: Get Started with IoT Protocols


CoAP is a software protocol that allows simple electronic devices to communicate over the
Internet. It is designed for small devices with low-power sensors and actuators that need to
be controlled or supervised remotely, through standard Internet networks.

Network protocols play a significant role in
communicating between various building blocks
of an IoT architecture. You might wish for an
efficient protocol that connects gateways with sensor nodes
as per M2M or WSN needs, and with servers or cloud
platforms for Web integration and global access. HTTP
may be fit for certain needs but is expensive and has its
own overheads. CoAP is a simple, less expensive protocol
that meets all the above needs and is affordable for those
with resource constraints.
Constrained Application Protocol (CoAP) is a simple,
low overhead protocol designed for environments like
low-end microcontrollers and constrained networks
with critical bandwidth and a high error rate such as
6LowPANs. It is defined by the IETF open standard RFC
7252, runs on UDP by default but is not limited to it, as it
can be implemented over other channels like TCP, DTLS
or SMS. A typical networking stack offering CoAP is
shown in Figure 1. CoAP is based on the request-response
communication model and comes with support for resource
discovery, better reliability, URIs, etc. The protocol was
designed for M2M needs initially, but was adapted in IoT
as well, with support on gateways, high-end servers and
enterprise integration.
CoAP resembles HTTP in terms of the REST model with
GET, POST, PUT and DELETE methods, URIs, response
codes, MIME types, etc, but one shouldn't think of it as
compressed HTTP. However, CoAP can easily interface with
HTTP using proxy components, where HTTP clients can talk

to CoAP servers and vice versa, which enables better Web


integration and the ability to meet IoT needs.
Let's have a quick overview of the protocol.

Endpoints

A host or node participating in CoAP communication is
known as an endpoint. The endpoint on which resources are
defined and is a destination for requests is known as a server
or, more precisely, the origin server and the endpoint from
which requests are made for target resources is known as
a client. Similarly, a server is the source and a client is the
destination for responses. Certain endpoints like proxies act as
the intermediate client and server.

The CoAP message format

One of the key design goals of CoAP is to avoid fragmentation
at underlying layers, especially at the link layer, i.e., the whole
CoAP packet should fit into a single datagram compatible with
a single frame at the Ethernet or IEEE 802.15.4 layer. This is
possible with a compact 4-byte binary header, optional fields
and payload, as shown in Figure 2.
Version: 2-bit version number, currently fixed at 0x01
Type of messages: CoAP supports four types of messages,
with the following 2-bit transaction codes:

CON  00 (0)
NON  01 (1)
ACK  10 (2)
RST  11 (3)

[Figure 1: Protocol stack with CoAP support: applications, CoAP requests/responses, CoAP messaging, UDP/DTLS, IPv4/IPv6/6LoWPAN, Ethernet/Wi-Fi/WPAN (802.15.4), physical channel]

[Figure 2: CoAP message format: Ver (2 bits) | Type (2) | TKL (4) | Req/Resp code (8) | Message ID (16) | Token (0-8 bytes, TKL bytes) | Options (if any) | Payload marker (0xFF) | Payload (if any)]

CoAP offers optional reliability using confirmable
(CON) messages, where each message (request/response)
should be acknowledged (ACK) by the peer endpoint.
Messages will be retransmitted if a CON message is not
acknowledged within the timeout. A response may be
combined with an ongoing acknowledgement, which is
known as a piggybacked response, or the response may
be sent afterwards if not available immediately, which is
known as a separate response. Piggybacked responses
need not be acknowledged, and the same holds for
non-confirmable (NON) messages.
Req/Resp code: A 3-bit class ID and 5-bit detail in
the c.dd format forms this field. The class values used for
requests, success responses, client error responses and server
error responses are 0, 2, 4 or 5, respectively. The detail
carries the request code or response code depending on the
class value. Code 0.00 indicates an empty message.
Message ID: A 16-bit unsigned number in network byte
order is used to match acknowledgement or reset messages
and eliminate duplicate messages.
Tokens: An optional token field, currently limited to 0 to
8 bytes and used to match requests with responses, may be kept
after the header; it is TKL bytes long, as specified
in the header.
Options: Zero or more option fields may follow a token.
A few options are Content Format, Accept, Max-Age, Etag,
Uri-Path, Uri-Query, etc.
CoAP URIs: CoAP URIs consist of the hostname,
port number, path and query string, which are specified by the
option fields Uri-Host, Uri-Port, Uri-Path and Uri-Query,
of which Uri-Host and Uri-Port are implicit, as they are
part of underlying layers. Uri-Path and Uri-Query are
significant and part of the CoAP message. For example, if
we request a resource with the URI coap://hostname:port/leds/red?q=state&on, the following options are generated:

Option#1: Uri-Path leds
Option#2: Uri-Path red
Option#3: Uri-Query q=state
Option#4: Uri-Query on


CoAP servers run on UDP port 5683 by default.


Payload: Following the optional token and zero or
more option fields, the payload may start, in which case a
payload marker (0xFF) is placed in between. This indicates the
end of the options and the start of the payload. The absence of
the payload marker indicates an empty payload.
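
To make the 4-byte header concrete, here is a minimal sketch in C (not part of the original text) that packs a confirmable GET request with a 2-byte token, following the field layout described above; the token value is an arbitrary assumption:

#include <stdint.h>
#include <string.h>

/* Packs a minimal CoAP header for a confirmable (CON) GET request
   and returns the number of bytes written. */
size_t coap_pack_get(uint8_t *buf, uint16_t msg_id)
{
    const uint8_t ver = 0x01;              /* version, currently fixed at 0x01 */
    const uint8_t type = 0;                /* CON = 0 */
    const uint8_t token[2] = {0xC0, 0xDE}; /* arbitrary token */
    const uint8_t tkl = sizeof(token);     /* token length */

    buf[0] = (uint8_t)((ver << 6) | (type << 4) | tkl);
    buf[1] = 0x01;                         /* code 0.01 = GET */
    buf[2] = (uint8_t)(msg_id >> 8);       /* message ID, network byte order */
    buf[3] = (uint8_t)(msg_id & 0xFF);
    memcpy(&buf[4], token, tkl);
    return 4 + tkl;                        /* options/payload would follow */
}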

Request methods

CoAP has come up with a subset of the Representational
State Transfer (REST) architecture of the Web, which
resembles HTTP with request methods GET(1), POST(2),
PUT(3) and DELETE(4), with indicated request codes.
These methods can be used to create, update, query and
delete the resources on the server representing events of
an IoT application. For example, a server getting sensor
readings directly can define suitable resources and provide
data when the GET method is requested by clients, or
sensor readings can be updated by using the POST method
by a client holding sensor values. Besides, resources can
be updated using the GET method with a query string.
Similarly, clients can use the POST method to update
resources which may represent actuators. However, the PUT
method is preferred to POST for idempotent operations.
A Discover operation is added with a GET request on
the predefined path /.well-known/core, which returns a list
of available resources along with applicable paths in the
response. A ping operation is implemented by sending an
empty message, to which the peer responds with an empty
reset (RST) message, indicating the liveness of the endpoint.
Additionally, the Observe method is defined, where the
notification is sent periodically or on an event basis, for a single
request. If the request is a CON message, each response is of
the CON type and will be acknowledged by the client. Observe
can be initiated with an extended GET request, with the added
Observe option set to zero. It can be cancelled by sending a
reset message for one of the notifications or by sending an
explicit GET request with the Observe option set to value 1.
Notifications use the Observe option with sequential values
for reordering of delayed responses and the Max-Age option
to keep freshness in the cache. The combination of resource
update, Observe method is analogous to publish-subscribe
mechanisms in other protocols like MQTT. CoAP also offers
block wise transfers, which enable a large amount of data
exceeding datagram limits, and eliminates fragmentation at

Lets Try OpenGurus


underlying layers with better reliability.
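From the client side, the Observe method boils down to registering a handler for notifications. Here is a rough sketch with the Californium CoapClient API, assuming the observable /RtcData resource of the demo server discussed later:

import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapHandler;
import org.eclipse.californium.core.CoapObserveRelation;
import org.eclipse.californium.core.CoapResponse;

public class ObserveDemo {
    public static void main(String[] args) throws InterruptedException {
        CoapClient client = new CoapClient("coap://localhost:5683/RtcData");

        // Extended GET with the Observe option set to zero; the server then
        // pushes a notification periodically or whenever the resource changes
        CoapObserveRelation relation = client.observe(new CoapHandler() {
            @Override
            public void onLoad(CoapResponse response) {
                System.out.println("Notification: " + response.getResponseText());
            }

            @Override
            public void onError() {
                System.err.println("Observe request failed");
            }
        });

        Thread.sleep(30 * 1000);     // collect notifications for 30 seconds
        relation.proactiveCancel();  // explicit GET with the Observe option set to 1
    }
}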

Response codes

Most response codes are similar to HTTP. For instance, 2.xx indicates success, 4.xx indicates client error and 5.xx indicates server error. Here are some frequently encountered response codes:

2.01  Created
2.02  Deleted
2.04  Changed
2.05  Content
4.04  Not Found (resource)
4.05  Method Not Allowed

This is a summary of the CoAP protocol and message format. You can refer to the listed RFC and references for more details, like message ID rules, token generation, options, etc. Now, let's go ahead with some hands-on stuff. The Eclipse IoT project offers the Californium core library, demo apps and add-ons to bootstrap with the protocol.

Building the Eclipse Californium core demo apps


Set up the Maven build using the recent stable version of the binaries. Check out the Git repository github.com/eclipse/californium.git at a recent stable branch, say 1.0.4, switch into the cloned directory, and run the following:

mvn clean install

# in case of failure at the scandium-core module, comment out the
# following line in pom.xml under <modules>:
#     <!-- <module>scandium-core</module> -->
# or rerun the command as follows:
mvn clean install -rf :californium-core

# collect californium-core-1.0.4.jar from californium-core/target and
# element-connector-1.0.4.jar from element-connector/target to a new work
# directory, for building your own server and client apps without Maven
# build support.

element-connector is a Java socket abstraction for UDP, DTLS, TCP, etc., which provides a connector interface implemented currently by UDPConnector, and for DTLS by the Scandium plugin. Support for TCP, SMS, etc., can be added in future with this abstraction layer.

A tour of the demo apps and a quick test with the Firefox Copper plugin

Switch to demo-apps/run and launch the first demo as follows:

java -jar cf-helloworld-server-1.0.4.jar

This provides a single resource at path /helloWorld, supporting the GET method.
Open Firefox and install the Copper plugin from https://addons.mozilla.org/en-US/firefox/addon/copper-270430/. Type the URL coap://localhost, hit Ping to check whether the server is alive, and hit Discover to retrieve the hosted resources. Click on the helloWorld resource and request the GET method, which returns the response with the payload Hello World! You will also notice that other methods are not allowed, which is indicated by the response code 4.05.
You can run another demo, which provides various resources supporting the REST methods, the Observe method and blockwise transfers, as follows:

java -jar cf-plugtest-server-1.0.4.jar

You can also run cf-helloworld-client-1.0.4.jar and cf-plugtest-client-1.0.4.jar for testing the respective servers.

Working with CoAP servers with custom resources


To run a server with custom resources, you can use the example code at github.com/rajeshsola/iot-examples (under coap/demo-server), which defines the following resources with the supported operations. In this demo, the resources hold dummy information, in order to run without specific hardware.

/Temperature    GET
/RtcData        Observe, GET
/Leds           subpaths for red, green, blue
/Leds/xxx       PUT, GET
updateme!       PUT, POST, GET
removeme!       DELETE

Execute the make command with the build and run targets to build and run the demo server. You can test this server using the Firefox Copper plugin, as shown in Figure 3.

Figure 3: Firefox Copper plugin
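If you want to write such a server from scratch rather than use the example repository, the sketch below shows the general shape of a custom resource with Californium; the resource name matches the demo above, and the dummy reading stands in for real sensor access:

import org.eclipse.californium.core.CoapResource;
import org.eclipse.californium.core.CoapServer;
import org.eclipse.californium.core.server.resources.CoapExchange;

public class DemoServer {
    public static void main(String[] args) {
        CoapServer server = new CoapServer(); // listens on UDP port 5683 by default
        server.add(new TemperatureResource());
        server.start();
    }

    static class TemperatureResource extends CoapResource {
        TemperatureResource() {
            super("Temperature"); // served at path /Temperature
        }

        @Override
        public void handleGET(CoapExchange exchange) {
            // A real server would read a sensor here; we respond with a dummy value
            exchange.respond("25.4");
        }
    }
}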

NodeRED and CoAP-cli support

Figure 4: coap-request node in NodeRED

NodeRED is a visual wiring tool for prototyping IoT solutions and networking services. It comes with an add-on, node-red-contrib-coap, which provides a coap-request node for making

requests to the CoAP server (Figure 4). This node takes input or gives output in the form of the payload property of a JavaScript object. Currently, this node supports CON messages.
The npm package coap-cli provides a simple command line tool for making various requests, with the following syntax:

coap <request> <options> URI

A few options are: -p for payload, -o for observe, -c for CON and -n for NON. For example:

coap post coap://localhost/updateme! -p "hello"   # for a POST operation
coap -o coap://localhost/RtcData                  # for observe operations

Other implementations are:
libcoap and freecoap in C, suitable for porting on the lwIP stack
nCoAP in Java
npm packages coap, node-coap, coap-cli, etc, in Node.js
CoAPthon, aiocoap, txthings, etc, in Python
ESP8266 libraries
Stay tuned to http://coap.technology/impls.html for frequent updates.

Packet sniffing using Wireshark

Figure 5: Protocol debugging with Wireshark

Wireshark comes with support for dissecting CoAP messages, and the final version of the protocol is well supported from version 1.12 onwards. For this, install Wireshark from the package manager or build it from source. Then run Wireshark with suitable privileges and start capturing by selecting the interface on which the message flow exists. Type coap in the filter box and hit Enter to eliminate other packets. In each message, you can check the various header elements, tokens, options and payloads, if any. For example, request the URL coap://localhost/Temperature using the GET method, and check whether it is of the CON or NON type, the message ID, the Uri-Path option, etc. You can also find a piggybacked response with the same message ID as the request, the content format text/plain, the response code 2.05 and a suitable payload. Now, select the NON type from the behaviour drop-down in the Copper plugin and try the same request. You will find an independent NON message with a different ID as the response. You can dissect the message flow in Ping and Discover operations, and in the other request methods.
You can also analyse capture files in the pcapng format, recorded from Wireshark or an equivalent, using the coap.me pcap interpreter. The coap.me site also provides a crawler client, which crawls through any public CoAP server.

Android apps

Any IoT application is not complete without a mobile app, and Android has good support for this. A prebuilt application, Aneska, is available from the Play Store. You can develop custom apps by building the code available from the Eclipse Californium demo apps (cf-android) or Spitfirefox from github.com/okleine, using Android Studio or the Gradle build system.

Figure 6: Spitfirefox Android app

Connect your mobile to the same network as the server, over Wi-Fi or by using USB tethering, and test the resources by entering the server IP in these apps.

Bridging with HTTP

Since CoAP supports a subset of the REST model similar to HTTP, it is easy to integrate both protocols using proxy endpoints. An HTTP-CoAP proxy is used to request resources on a CoAP server from an HTTP client, and a reverse proxy is used to request resources on an HTTP server from CoAP clients. Eclipse Californium comes with a proxy library and an example, cf-proxy, for this.
By running cf-proxy, you can talk to the CoAP server from an HTTP client as follows: http://phostname:8080/proxy/coap://chostname:5683/resource, where phostname represents the proxy and chostname represents the origin server.
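Once the proxy is running, the proxied resource can be consumed by any ordinary HTTP client. The following sketch uses java.net.HttpURLConnection and assumes cf-proxy on localhost:8080 with the demo CoAP server on localhost:5683; both host names and the /Temperature path are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ProxyDemo {
    public static void main(String[] args) throws Exception {
        // HTTP request that the proxy translates into a CoAP GET
        URL url = new URL("http://localhost:8080/proxy/coap://localhost:5683/Temperature");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // the CoAP payload, relayed over HTTP
            }
        }
    }
}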
Another implementation is the npm package ponte,
a current Eclipse IoT project that bridges MQTT, CoAP
and HTTP protocols. To try this, install the ponte, bunyan
packages using npm and run the following command:

ponte -v | bunyan

Using this bridge, identical URIs for the resource named abcd are coap://hostname:5683/r/abcd and http://hostname:3000/resources/abcd.

Public servers and cloud platforms

Compared to other protocols, currently very few cloud platforms support CoAP; thethings.io and ThingMQ are among them. A few public servers with test resources, to get started with clients, are coap://californium.eclipse.org/, coap://vs0.inf.ethz.ch/ and coap://coap.me/.
I hope this article helps you bootstrap the CoAP protocol, and its tools and libraries. Please follow the given pointers for detailed coverage of the protocol and more hands-on information.

References
[1] RFC 7252, The Constrained Application Protocol (CoAP): https://tools.ietf.org/html/rfc7252
[2] 'CoAP: The Web of Things Protocol', Zach Shelby, ARM IoT Tutorial
[3] 'Hands-on with CoAP', Matthias Kovatsch and Julien Vermillard, EclipseCon France 2014
[4] github.com/eclipse/californium.git

By: Rajesh Sola

The author is a faculty member at C-DAC's Advanced Computing Training School, Pune, and an evangelist in the embedded systems and IoT domains. He loves teaching and open source. You can reach him at rajeshsola@gmail.com.

Please share your feedback/thoughts/views via email at osfyedit@efy.in


Profanity: The Command Line Instant Messenger

Profanity is a text based instant messaging application that uses a command line interface (CLI), and it beats many GUI (graphical user interface) applications of a similar nature. By virtue of having a CLI, Profanity is faster than any GUI, since there's no fumbling between mouse and keyboard. The biggest advantage of Profanity is that the user can write scripts to multi-task.

Profanity is a command line chat application for XMPP (the Extensible Messaging and Presence Protocol), which is packed with loads of features for typical user needs. In this article, we're going to explore some of them.

Why prefer the CLI when there's a GUI?

There are plenty of instant messengers (IMs) available for all the three platforms, which provide loads of features. So what makes Profanity stand out?
The answer is that Profanity is one of the very few applications that uses the CLI (command line interface) instead of a GUI (graphical user interface).
In our daily life, we use GUI applications everywhere, and users' expectations of the appearance of software have grown dramatically over the past years. Newer frameworks focus on how pleasant the UI can be without compromising on the functions of that particular software.
There is a perception that CLI applications are for geeks, hackers and systems admins, while typical users prefer GUI applications, since the latter are much easier to work with. But Linux users can be exceptions to this, since they often use the CLI for installing, configuring and fine-tuning the operating system.
Both the CLI and GUI have their pros and cons, but the former is a much better option for many reasons, some of which are listed here.

1. CLI users need only the keyboard to work with, and hence can perform tasks faster than GUI users, who need to use the mouse as well as the keyboard. For a simple copy and paste operation, the CLI uses a single command, whereas with the GUI you have to navigate through the source and target locations in the file system.
2. The CLI offers more control over files and the system than the GUI, like changing the user permissions (read, write and execute) and the ownership of a particular file.
3. Since the CLI needs only a few system resources to perform tasks, it is highly unlikely to get clogged or to freeze mid-task.
4. The CLI enables users to write scripts to perform a series of tasks. The GUI supports multi-tasking for performing multiple tasks at the same time, but it doesn't beat the CLI's performance.

Installation

Profanity is available for Linux, Windows and Mac. For now, let's look at how to install it on Linux machines. Separate installation instructions for Windows and Mac can be found on the Profanity website.
Profanity has been included in some of the major distributions such as Arch, Gentoo, Ubuntu, openSUSE and Slackware. If your OS is not in the above list, then you can use the installation script found in the Profanity tarball. When you run the installation shell script, it will take care of all the dependencies, configure all the necessary settings and successfully install Profanity on your system.

Getting started

To start Profanity, just issue the following command in the terminal:

$profanity

Once Profanity is up and running, you can start issuing commands. Always make sure that your command begins with /. For further assistance, use the /help command and it will present you with various categories of help.
Figure 1: The Profanity welcome screen

Now it is time to connect to your account in Profanity. You can use your Jabber ID or any XMPP service for which you have an account. In this article, we are going to use a Google Talk account in Profanity. Type the following command in Profanity:

/connect user@gmail.com

Once you supply the valid password for that account, you
will see a message stating that you have successfully logged
into your account. On the upper right corner, you will find
your status is indicated as Online and below that, you will
find the list of your Online Contacts and Offline Contacts.
If you get a Login Failed message, please note that, by default, Google turns off access to your Google account for less secure apps. You can enable it by visiting https://www.google.com/settings/security/lesssecureapps. Once you've turned on access, Profanity will work just fine.
Figure 2: Profanity connected screen

After you have successfully connected to your account, it is time to chat with other users. To send a message to another user, issue the following command:

/msg user@gmail.com your_message

As soon as the other user starts replying, a new window will be opened, along with the window number. You can use Alt with the right or left arrow keys to switch between windows.

Roster, subscriptions and groups

Roster: A roster is simply the list of contacts in your account. The /roster command will show the contacts list in your account. The output of the /roster command has four sets of attributes:
1. jid (Jabber ID)
2. Nickname
3. Subscription
4. Groups
To manually add a contact to your account, use the /roster add command in the following format:

/roster add user@gmail.com

To remove a contact from the roster, use the /roster remove command, supplying the account ID as the argument:

/roster remove user@gmail.com

Subscriptions: Once a contact has been added to the account, you can subscribe to that contact's online activities with the /sub command:

/sub request user@gmail.com

Many service providers will make contacts subscribe to each other automatically. If you want a particular contact to be subscribed to your account, issue the following command:

/sub allow user@gmail.com

In some cases, you may not want to show your online status and updates regarding your online presence, for which you can use the /sub deny command in the following format:

/sub deny user@gmail.com

Assigning nicknames to contacts: Assigning nicknames to contacts is very handy, as you don't have to type the whole account ID in Profanity:
/roster nick chessurthecat@gmail.com chessur

The above command will assign the nickname chessur to the account chessurthecat@gmail.com.
A nickname can be removed by using the following command:

/roster clearnick chessurthecat@gmail.com

Groups: Profanity allows you to keep contacts in groups.
To view all the groups in your account, issue the /group command.
To view the contacts that belong to a particular group, use /group show group_name.
To add a particular contact to a group, use /group add group_name contact_name.
To remove a contact from a group, use /group remove group_name contact_name.

Chatting with OTR encryption

OTR (off the record) encryption is one of the salient features of Profanity. To start chatting with OTR, you must first generate your private key. To do that, type the following command:

/otr gen

You will see that your private key is being generated. This should take some time, but moving the mouse pointer randomly on the screen will speed up the key-generating process.

Figure 3: OTR key generation in progress
Figure 4: OTR key generated

Once your key is generated, you can start your OTR session with the other user, who should also have enabled OTR on the messenger, to achieve a successful trusted OTR session between the two users:

/otr start user@gmail.com

The above command will attempt to establish an OTR session with the other user, but the set-up isn't complete, as we have not authenticated the other user's identity. Hence, the OTR session will be shown as untrusted in the title bar.

Figure 5: OTR Session Untrusted

To authenticate other users, Profanity uses three types of authentication methods:
1. Question and answer
2. Shared secret
3. Fingerprint
In this article, we will use the question and answer method to challenge the other user. If he/she provides the expected answer, then the identity is confirmed.
The following command will pose a question and answer challenge to that particular user:

/otr question "Who is the patron saint of desperate cases?" "St Jude"

Figure 6: OTR Session Trusted

Once your friend answers the question correctly, the untrusted indicator will change to trusted, and the message OTR Session Trusted will be displayed.
By: Magimai Prakash

The author has completed a B.E. in computer science. As he is deeply interested in Linux, he spends most of his leisure time exploring open source.


Raspberry Pi enables embedded engineering while being a cheap, low-power Linux platform
Raspberry Pi has transformed the world of computing. This single-board computer has
opened new avenues for millions of budding developers all across the globe. But what was
the prime idea behind the tiny little device and how did it receive such worldwide acclaim?
Jagmeet Singh of OSFY tried to get the answers to these questions in an exclusive
conversation with Raspberry Pi creator, Eben Upton. Read on for the edited excerpts.

Q How did you get the idea to design Raspberry Pi?

When I was a child, I had two different small computers, the BBC Micro and the Amiga. At the time that I went to university and started computing, both computers were quite common among people of my age in the UK. So, to engage a hobbyist's interest and to step into the computer industry, I planned to develop Raspberry Pi.
Raspberry Pi filled the space that was left empty by machines like the BBC Micro and Amiga. It was really an attempt to provide kids with the same experience that we used to have back in the 1980s.

Q What is the real mission behind Raspberry Pi? Is it just to give the world an affordable computing experience or also to open new avenues for Internet of Things (IoT) developments?

The mission is quite simple, and it is to give young people access to a computer science education experience. We didn't design Raspberry Pi for the industrial control space, but it turns out to be very well designed for industrial control and the IoT world.
We just want to create a new generation of people who are excited about computing and computer programming. Mainly, we are targeting people who are not just the end users of computers but are also potential creators. They are the users who understand computing at a very deep level.

Q Raspberry Pi is already helping students learn to code. How can it be an impactful device in the education sector?

Obviously, several factors make Raspberry Pi an impactful device in the education sector. One is affordability, as this is incredibly important everywhere. There is a misconception about computing, which is that everybody has a computer. But most users across the globe were using one family computer in the past, due to its high price. In addition, a significant percentage of people had devices with closed platforms. Thus, they were scared to break the devices and even avoided customising them using their own code and tweaks.
We were determined to generate a solution that would be cheap enough, and Raspberry Pi emerged as the result. One of the major things that persuades you to buy a Raspberry Pi is that if you break the board, you can certainly afford a new one.
The other thing that makes Raspberry Pi a perfect solution for students is the feasible computing experience. We have provided the universally accepted 40-pin GPIO header that helps children try some of their advanced sensors and transform some simple projects into intuitive ones. Kids use Raspberry Pi to write new software and control things in the real world.
things in the real world.

Q What are your plans to expand the presence


of Raspberry Pi in the Indian market?
We are keen to get the cost down and
availability up for Raspberry Pi in India, and
want to reduce the delay in introducing new
products in the Indian market. We are also
looking at some initiatives with the state
governments in the country.
On the embedded systems side, we are
trying to improve the cost. There are plans to
work with various government bodies in the country to fulfil
the needs of the educational sector. We want to see Raspberry
Pi in the hands of a large number of children in India who are
eager to learn computing. We also want to make sure that there
is appropriate education and support material in the Indian
market to help those children.

Q The Indian government has recently launched some initiatives like Digital India and Skill India. Do you think it would be worth it for the Raspberry Pi Foundation to participate in both these programmes?

Because we are a very small organisation, and it takes some major effort to join a flagship local government programme, we are unlikely to get involved directly.

However, we always aim to provide enough resources to
students in India. We are already supporting children through
our partnership with the Kerala government. Rather than
participating entirely in a government-led programme, we
are providing resources in the form of hardware and software
documentation to help young developers. That is a good
model as India is a massive place.

Q Universities such as Cambridge and Harvard are providing Raspberry Pi to their students. Are you in talks with some Indian institutes to give the same power of open source computing to local budding developers in India?

We have the initiative in Kerala to provide computing through Raspberry Pi to the local school students. The good thing about Raspberry Pi is that it works as a standard platform and uses a Linux-based operating system to help a large number of budding developers. Generally, even in the University of Cambridge or at Harvard, we don't interact directly with the alumni.
We actually made the decision to build the hardware cheap. Our strategy to support young developers is through a simple, cheap and standardised platform. The universities and institutions can deploy the hardware themselves and enhance the knowledge of their students in an effective and efficient manner.
Regarding open source, we are already an open source software platform. The millions of man hours and engineering that went into the development of Linux indeed helped us design Raspberry Pi. Thus, we try to give back to the open source community.
We are providing the community with a platform that lets a large number of developers build new open source software. Also, we are investing heavily in improving the kernels and software that are already available to the open source world.

Q What is the prime reason a developer would prefer a Raspberry Pi?

Just one thing; it is the size of the community. We have sold over nine million units of Raspberry Pi so far.
We have a large and friendly community that enables easy resolution of issues on your Raspberry Pi. We provide official forums on our website that are consistently moderated by our team to offer the best solutions to the users.
One of the interesting things about the Raspberry Pi is that it enables embedded engineering while being a cheap, low-power Linux platform. It builds enterprise skill sets and allows people to apply their embedded skills. By bringing out a more enterprise-oriented feature set into the embedded space, we are going to change the engineering of this newly emerged space.
this newly emerged space.

Q Where does India rank globally on the list of potential


markets for Raspberry Pi and the Raspberry Pi Foundation?
We have two distributors supplying Raspberry Pi units in
India, and both supply roughly 2 to 3 per cent of our global
shipments, to the Indian market. Also, India is the biggest
APAC country after China in terms of the sales of Raspberry Pi.
Globally, India stands at No. 6 and sits behind some
developed markets like the US, UK, Germany and China. We
certainly want to do better to improve the existing presence
of Raspberry Pi in the Indian market. India is kind of a dream
market for the Raspberry Pi Foundation and it should become
a US-sized market in the near future.

Q Where do you see Raspberry Pi in the future, in relation to the IoT and open source?

Raspberry Pi is a good platform for prototyping IoT applications. But what is more interesting for me is when people continue to stay on Raspberry Pi even after building a prototype, using the same platform to finish their final product.
We are now allowing customers to buy customised Raspberry Pi units. This enables a large number of our customers to tweak the prime features of Raspberry Pi and even change the shape of the device to match their requirements.

Q So apart from just the hardware, the Raspberry Pi Foundation maintains a strong community. What are your thoughts on the need for a community to develop a unique computing experience?

A community is not just needed to develop a unique computing experience but also to create a unique culture involving a great mixture of people. A lot of organisations have gained success through strong community support.
Some people wanted to get involved in embedded computing but could not, due to the lack of resources and skills. In our case, the community helped such people and let them start developing new products by interacting with others on the same platform.
Modern computing systems are complicated, and the computing experience is hard. When I was a kid, we worked with just 32K of RAM to learn everything about computers. However, you now need to stand on the shoulders of other people, and need experts to learn new things about the computing world. The need for a strong community does arise. So it is wonderful to have dedicated community support.
So it is wonderful to have dedicated community support.

Q When can we expect an official Android version for


Raspberry Pi users?
We are not going to do this for ourselves. Android is certainly

Interview For U & Me


one of the most requested features. But at the same time,
Android is a platform for consumption, not for production.
It is a platform that makes developers bother about some
consumable things. On the other hand, we are very focused
on production and on teaching people to produce.
I am very hopeful that Google will at some point add
Android support for Raspberry Pi. If you look at the activity
on the AOSP repository, some directories have been created
to suggest the new development. So I hope something
will happen soon. Moreover, it would be a wonderful
community allied on Android development.

Q How is Raspbian a preferable choice for Raspberry Pi as opposed to Android and other open source platforms?

Raspbian is where we spent a lot of money and brought out all the tools that people would require on an open source platform for Raspberry Pi. The beauty of Raspbian is how active the community is (particularly the Ubuntu community) in pulling in all the new features.

Q Can a Windows-powered Raspberry Pi be stronger than its Raspbian counterpart?

I would love to see full Windows on Raspberry Pi. That would be the game changer if a full Windows experience, including the Edge browser and the Office applications along with the Shell, debuts some time in the future. Although I do not have any verifiable information about when this could happen, full Windows on Raspberry Pi would expand both the hardware and software worlds.
Presently, Windows 10 on Raspberry Pi is available as a platform to build IoT devices. It is certainly a very good choice. People are making new things using the Windows 10 platform, and we are looking to deploy and upscale these using our customisation programme.

Q What is your take on some advanced connectivity options like NFC and LoRa?

These new options are certainly important for the embedded community. But I don't think that we will integrate them into the core Raspberry Pi platform. We have a shield standard to let people add new hardware to the single-board computer.
I believe that NFC, LoRa and SigFox are useful to the IoT space in the present scenario. These could become equally useful for every single Raspberry Pi user in the future. But I cannot put in dollars to integrate NFC or any other connectivity option that is yet to be majorly in demand within the community. These features have so far been of interest only to a minority.

Q Some developers are looking for an analogue-to-digital converter (ADC) on Raspberry Pi. Why have you left out this cheap feature?

One day, maybe. An ADC module costs 20 cents, and I do not want to increase the price of Raspberry Pi by 20 cents merely for an ADC module. It could be featured on future versions of Raspberry Pi, though. For now, a majority of people are not demanding an ADC. This is the reason we have not provided the converter on the present-generation device.

Q How can Raspberry Pi help enterprises and startups in India and around the world?

In 10 years, we will be set to help enterprises and startups by giving them new employees, through the skill sets they acquired on Raspberry Pi. We are also offering a scalable platform to entrepreneurs to let them develop new products. Additionally, we offer our customisation system to enable product development for those who are cost sensitive and form-factor sensitive.

Q The latest version of Raspberry Pi features built-in Bluetooth and Wi-Fi. Why are these connectivity options needed?

There are two reasons: one is cost, and the other is usability. It obviously is cheaper if you are providing something on the board, rather than via an external solution. But for us, the aim was to squeeze the cost to US$ 35. We used Broadcom's 43438 platform to offer additional connectivity support without increasing the US$ 35 price.
In addition to the cost, the other factor behind the need for the new connectivity options was usability. If you have a fixed hardware platform, then you can invest a lot of time, money and effort to make that platform work very well. That is what we did when adding Bluetooth and Wi-Fi support.

Q What is your take on the future of embedded systems and IoT?

I am quite pessimistic about IoT. The question is: what will the computing platforms look like for the majority of IoT solutions? It will be interesting to see whether hundreds of millions or tens of billions of 50-cent and one-dollar devices such as microcontrollers or traditional data systems will dominate the IoT market, or whether a large number of 10-dollar devices like Raspberry Pi will lead the race. Also, it is important to look at the connectivity support for IoT devices. It is hard to say whether the devices will come with Bluetooth, Wi-Fi and local networking with hubs and bridges, or cellular connectivity through a module, or even some new standards like LoRa and SigFox.
We are already making the best hardware to contribute towards the growth of embedded systems in the future.

TIPS & TRICKS

Access the complete command list with descriptions

Here is a command that enables you to see the list of commands with their descriptions (based on the packages installed on your system):

apropos -r [a-z] > List_of_Commands.txt

Vijay Kumar, vijjav@gmail.com

Displaying coloured output with the tail -f command when a pattern match succeeds

GNU/Linux provides a very powerful set of commands. tail is just one of them. It shows the last part of files.
Additionally, the -f option keeps the file in an open state
and continuously displays the data as the file grows. But
we often want to look at only certain lines and are not
interested in the whole log file. We can achieve this very
efficiently by combining the tail -f command with the
awk command, which will show certain lines in a different
colour when pattern match succeeds.
For instance, the simple command shown below will
display a line in red colour if the pattern is present in the
current line; otherwise, it will display the line in the regular
fashion. Because of the different colour combinations, it
becomes very easy to take a look at the required logs. By
combining these two utilities, one can improve productivity
greatly. Let us try this out with a simple example.
First, let us define a string to be searched:
[bash]$ export SEARCH_STRING=jumps

Now search for the pattern jumps in the current line; if it is present, the line is displayed in red colour, otherwise it is displayed in the regular fashion:

[bash]$ tail -f output.log | awk -W interactive '{if($0 ~ "'$SEARCH_STRING'") {print "\033[0;31m"$0"\033[0m"} else {print}}'
the quick brown
fox jumps over    ## NOTE: Only this line will be shown in red colour
the lazy dog

In the above example, the -W interactive option sets unbuffered writes to stdout.

Narendra Kangralkar, narendrakangralkar@gmail.com

Group commands and execution in the subshell

To group commands for execution in a subshell and store the output of the subshell in /tmp/all.out, run the following command in the terminal. The commands execute in a subshell and the combined output gets stored in /tmp/all.out. Note that, with the ; separators shown, each command runs even if the previous one fails; use && between the commands if you want execution to stop at the first failure.

$ (pwd; ls; cd ../elsewhere; pwd; ls) > /tmp/all.out

Gururaj Rao, raogrr@gmail.com

Single line execution to remove files and directories with filtered contents

We often need to remove all files and directories that are not LINUX. Here is a command that will help you do this, skipping everything named LINUX or .svn, as well as the current directory itself:

find . -mindepth 0 -maxdepth 1 \( -type f -o -type d \) \( ! -name LINUX -a ! -name .svn -a ! -name . \) | xargs rm -rf

Note: Please use this carefully, as it deletes files and folders.

Gururaj Rao, raogrr@gmail.com

Open any application directly from the terminal with its default application

The xdg-open command opens a file in the default application associated with its file type, directly from the terminal. For example, the following command:

xdg-open http://opensourceforu.com

opens opensourceforu.com in the default browser on your system. Similarly,
xdg-open test.py

opens test.py, a Python script, in the default text editor set for Python.
Sricharan Chiruvolu, sricharanized@gmail.com

Tracking an IP address

To track an IP address, we need to check the established connections of the sshd daemon, using the netstat command. The following command checks this:

#sudo netstat -tnpa | grep "ESTABLISHED.*sshd"

A small script that will help you find out the IP addresses is given below:

#!/bin/bash
clear
echo -e "\n\n"
sudo netstat -tnpa | grep "ESTABLISHED.*sshd" | grep -Po "\d+\.\d+\.\d+\.\d+" | sort | uniq -c > /tmp/ip
echo -e "\t\t\tNumber_of_times_ssh \tIP Address"
awk '{printf "\t\t\t\t%s\t\t\t%s\n", $1, $2}' /tmp/ip
echo -e "\n"

Rupin Puthukudi, rupinmp@gmail.com

Dumping utmp and wtmp logs

Like pacct, you can also dump the contents of the utmp
and wtmp files. Both these files provide login records for the
host. This information may be critical, especially if applications
rely on the proper output of these files to function.
Being able to analyse the records gives you the power to
examine your systems in and out. Furthermore, it may help
you diagnose problems with logins, for example, via VNC or
SSH, non-console and console login attempts, and more.
You can dump the logs using the dump-utmp utility. There
is no dump-wtmp utility; the former works for both.
You can also use the following command:
dump-utmp /var/log/wtmp

This will print a utmp file in human-readable format for you to analyse.
Somya Jain, somya1124@yahoo.co.in

A beginner's guide to Sed

The Sed (stream editor) command is used to make changes in files automatically, and also to replace or substitute a string. For example, if we have a file a.txt with content like:

hello how are you

and we want to replace the word hello with welcome, then we use the following command:

sed 's/hello/welcome/' a.txt

The s command performs the search and replace.
Similarly, if we want to replace more than one word, like hello and how with welcome and where, we use the following command:

sed -e 's/hello/welcome/' -e 's/how/where/' a.txt

Option -e is for multiple expressions. There are many other options that you can refer to in the manual of Sed.

Ajay Trivedi, ajay.trivedi67@gmail.com

Taking screenshots in Linux and displaying an image using the CLI

Do you want to take a screenshot, and that too from the terminal? You can use the command import at the prompt, followed by the name of the file and the format in which you want to save the screenshot, as follows:

$import screenshot.png

After executing the command, the mouse pointer


changes to X (cross). Now you can click on the window
that you want to take the screenshot of. This command is
part of the ImageMagick package, which is used for image
manipulation.
Now, to display the screenshot using the terminal,
you can use the command display followed by the file
name you want to be displayed. For example, if you want
to display the file by the name file1.png, then give the
following command:
$display file1.png

Sathyanarayanan S, sathyanarayanan_s@yahoo.com


Share Your Linux Recipes!


The joy of using Linux is in finding ways to get around
problemstake them head on, defeat them! We invite you
to share your tips and tricks with us for publication in OSFY
so that they can reach a wider audience. Your tips could be
related to administration, programming, troubleshooting or
general tweaking. Submit them at www.opensourceforu.
com. The sender of each published tip will get a T-shirt.


DVD OF THE MONTH


Try out the latest powerful Fedora.

Fedora 24 Workstation Live (64-bit)

Fedora Workstation is a reliable, user friendly and powerful operating system for your laptop or desktop computer. For details on how to install it, you can visit http://docs.fedoraproject.org/install-guide. The bundled DVD can boot into a live version of Fedora 24 64-bit for you to try out. The 32-bit version of the OS is available in the other_isos folder on the root of the DVD.
Fedora 24 Server

Fedora Server is a short-lifecycle, community-supported server operating system that enables seasoned systems administrators, with experience on any OS, to make use of the very latest server-based technologies available in the open source community. It is a powerful, flexible operating system that includes the best and latest data centre technologies. The bootable netinstall image is available in the other_isos folder on the root of the DVD.
Fedora 24 LXDE Live

LXDE, the Lightweight X11 Desktop Environment, is an extremely fast and energy-saving desktop environment. LXDE is not designed to be powerful and bloated, but to be usable and slim. The main goal of LXDE is to keep computer resource usage low. It is especially designed for computers with low hardware specifications or older computers.
This stable release can be installed or tried live from the bootable ISO images available in the other_isos folder on the root of the DVD.

What is a live DVD?


A live CD/DVD or live disk contains a bootable operating system,
the core program of any computer, which is designed to run all your
programs and manage all your hardware and software.
Live CDs/DVDs have the ability to run a complete, modern OS on
a computer even without secondary storage, such as a hard disk drive.
The CD/DVD directly runs the OS and other applications from the DVD
drive itself. Thus, a live disk allows you to try the OS before you install it,
without erasing or installing anything on your current system. Such disks
are used to demonstrate features or try out a release. They are also used
for testing hardware functionality, before actual installation. To run a live
DVD, you need to boot your computer using the disk in the ROM drive.
To know how to set a boot device in BIOS, please refer to the hardware
documentation for your computer/laptop.
To use the image available in the other_isos folder, you either need
a drive that can create or burn DVDs, or a USB flash drive at least
as big as the image, or you can use it directly with any virtualisation
software like VirtualBox.
