
Proceedings Book for

National Conference on Information Systems Trends


(Theme: Cloud

Computing)

Thursday 11 February 2016


Al Shahba Auditorium, University of Nizwa, Initial Campus, Birkat Al-Mouz, Nizwa
Sultanate of Oman

Conference Chair
Dr. Ahmed Masoud Al Kindi
Conference Co-Chair and Editor
Dr. Arockiasamy Soosaimanickam

Organized by
Department of Information Systems
College of Economics, Management and Information Systems
University of Nizwa


Message from the Dean and Chair, NIST 2016


I welcome you to the National Information Systems Conference, dubbed NIST 2016, with the theme of Cloud Computing. This event marks the first annual academic and research endeavor where science, innovation, humanity and academia come together in search of new trends and novel ideas within the Information Systems spectrum.
The selected theme for this year is undoubtedly one of the fastest growing fields in the general computing arena. Its impact has been tremendous in terms of digital storage capability, data availability, performance efficiency and stakeholders' effective financial planning. The global Cloud Computing umbrella, which the IT/IS industries have managed to adopt, has made such investments and advancements in research an asset in the pursuit of new and developing technologies for both hardware and software.
On behalf of CEMIS and the University, I am honored and humbled to serve as the Chair of the conference. I am also gratified and thankful to all the scientists who participated and shared their research findings with us, many of which are included in these conference proceedings. I would also like to express my appreciation to the distinguished panel of computing researchers on the Technical Committee who filtered the submitted research papers through a rigorous double-blind peer-review selection process.
Finally, I thank all the external peer-reviewers and the University personnel who worked with the Preparation/Organizing Committee for their efforts and support in making this dream a reality. I am profoundly grateful to all the CEMIS staff who worked very hard, day and night, and spared no effort in realizing it.
To all I say: Well Done.
Dr. Ahmed Al Kindi
Chair, NIST 2016
Dean, College of Economics, Management and Information Systems
University of Nizwa


Message from Editor and HoD


I am glad that the Department of Information Systems of the College of Economics, Management and Information Systems is organizing the First National Conference on Information Systems Trends (NIST 2016) on 11th February 2016 at the University of Nizwa campus. The theme of the conference is Cloud Computing, and it reflects the latest trends in the field of Information Systems and Technology. The conference has attracted the interest, contributions and sponsorship of experienced academicians, researchers, and many academic and industrial research groups. I would like to thank H.E. The Chancellor of the University of Nizwa for his approval and support in conducting this conference. Furthermore, we are grateful to the supporting bodies who have contributed financial assistance or provided their technical and human resources to organize this conference. The number of submissions shows steady progress compared with other national conferences conducted in Oman. I owe thanks to the contributing authors, and special thanks to our technical program committee members, external reviewers, and the local organizing team members, who did a great job in reviewing the large number of high-quality papers. I also extend my thanks to the keynote speaker, other invited speakers, panelists, researchers, and other members of the Organizing and Technical Committees. Special thanks to those who helped to promote the conference, particularly the program chairs and track chairs. Without these hard-working professionals, NIST 2016 would not have become a reality. I hope the conference will be organized every year for the mutual benefit of the researchers and the organizing department.
I extend my warm greetings and best wishes to all the delegates and other participants. I also congratulate the organizers on the grand success of the event.
Dr. Arockiasamy Soosaimanickam
Co-Chair, NIST 2016
Assistant Dean for Training
Head, Department of Information Systems
College of Economics, Management and Information Systems
University of Nizwa

Committees

Chair: Dr. Ahmed Masoud Al Kindi, Dean
Co-Chair: Dr. Arockiasamy Soosaimanickam, Head, Department of Information Systems

Preparatory Committee:
Dr. Ahmed Al-Kindi (Chair)
Ms. Amal Al-Ismaili (Member)
Dr. Arockiasamy Soosaimanickam (Co-Chair)
Dr. Hani Hadad (Member)
Dr. Said Younes (Member)
Mr. Mahmood Al-Rawahi (Member)
Dr. Nour Al-Deen Al-Shaiekh (Member)
Mr. Mohammed Al-Maawali (Member)
Mr. Hamed Al-Azri (Member)
Mr. Mohammed Al-Rawahi (Member)
Mr. Rashid Al-Amri (Member)

Scientific Committee:
Dr. Said Younes (Chair)
Dr. Saleh Al-Hatali (Member)
Dr. George Kastanian (Member)
Dr. Nour Al-Deen Al-Shaiekh (Member)
Dr. Mahmood Jassim (Member)
Mr. Hazim Al-Najjar (Member)
Mr. Mourad Henchri (Member)
Dr. Wiem Abdulbaki (Member)
Mr. Hilal Al-Naabi (Member)
Mr. Tariq Mohammed (Member)
Mr. Makhlouf Aroua (Member)
Mr. Khalfan Al-Manthari (Member)
Mr. Ibrahim Al-Azri (Member)
Mr. Faisal Al-Rawahi (Member)
Mr. Mohammed Al-Saighi (Member)
Mr. Hussain Al-Azri (Member)
Mr. Khalaf Al-Howqani (Member)

Financial Support Committee:
Dr. Arockiasamy Soosaimanickam (Chair)
Mr. Rashid Al-Amri (Member)
Mr. Ghandi Jaber (Member)
Mr. Yaqoob Al-Rahbi (Member)
Ms. Zayana Al-Kindi (Member)

Al-Shahba Hall Arrangement Committee:
Mr. Saud Al-Saqri (Chair)
Ms. Muna Al-Yafee (Member)
Mr. Basim Awadallah (Member)
Ms. Amal Al-Hadrami (Member)
Ms. Nadia Al-Roussan (Member)
Ms. Lila Al-Rahbi (Member)
Mr. Aslam Al-Hinai (Member)

Technical Support and Services Committee:
Mr. Hamed Al-Azri (Chair)
Mr. Saud Al-Adawi (Vice-Chair)
Mr. Mohammed Al-Mawaali (Member)
Mr. Hamed Al-Salmi (Member)
Mr. Khalid Al-Hadidi (Member)

Public Relations Committee:
Mr. Mahmood Al-Rawahi (Chair)
Mr. Jamal Al-Kindi (Member)
Ms. Fakhria Al-Mamari (Member)
Ms. Maryam Al-Kamyani (Member)
Ms. Zainab Al-Quarni (Member)

Plenary Sessions
Dr. Salim Sultan Al Ruzaiqi, Chief Executive Officer, ITA
As CEO of ITA, Dr. Salim is responsible for the implementation of the Digital Oman Strategy. Throughout
his 18-year career in the IT field, Dr. Salim held different technical and managerial roles in the Sultanate of
Oman. Dr. Salim joined the Ministry of Foreign Affairs in March 1987 and led the IT initiatives in the
ministry.
Dr. Salim gained diplomatic experience by joining the Sultanate's Diplomatic Corps as First Secretary at the Embassy of the Sultanate of Oman in Washington DC from 1998 to 2003. Dr. Salim received a Doctorate of Science degree in Information Systems and Communications from Robert Morris University in Pittsburgh, Pennsylvania, a Master of Science in Information Systems Technology from George Washington University in Washington DC, and a Bachelor of Science in Computer Science and Mathematics from Lindenwood University in St. Charles, Missouri.

Sheikh Abdulla bin Issa Salim Al Rawahy, Chief Officer Alliances and Partnership, Ooredoo
Sheikh Abdulla bin Issa Al Rawahy was appointed as Corporate Advisor in March 2014, having been Chief
Strategy Officer of Ooredoo since 2008 and Chief Technical Adviser from 2004. With over 30 years of
experience in the telecommunications sector, Sheikh Abdulla has held several leading roles in network
planning, projects, strategy and corporate business development for both fixed and mobile
telecommunications. In his current role as Chief Strategy Officer, Sheikh Abdulla is responsible for the long-term strategy of Ooredoo, which has focused on the transformation of Ooredoo from a mobile operator into a full-service operator, able to serve customers (consumers as well as enterprises) with all their communication needs, as well as for Business Development and the International Wholesale business.
Prior to joining Ooredoo, Sheikh Abdulla served as Technical Advisor to the Minister of Transport and Communications, President of OmanTel, and Chairman and founding Member of the Oman Fibre Optic Company. He holds a Bachelor's degree in Engineering Technology and a Master of Science in Electrical Engineering from the University of Central Florida (USA).

Dr. Bader Al-Manthari, Director General of the Information Security Division, ITA
Dr. Bader holds a PhD in Computer Science from Queen's University (Canada). He joined the Information Technology Authority (ITA) at the beginning of 2010 and has since worked on many national projects in Information Security. Dr. Bader is also a member of the Program Committee of the Intelligent Cloud Computing Conference (ICC 2014) and a member of the Technical Program Committees of more than 10 highly ranked international conferences. In addition, he serves as a professional referee for more than 50 international conferences, journals and awards in the area of Information and Communication Technology (ICT). Furthermore, Dr. Bader is ITIL and SABSA certified and holds an ILM Level 5 Award in Leadership and Management.
Before joining ITA, Dr. Bader worked as a teaching assistant and a research associate at Queen's University, Canada, where he worked on various research projects, particularly on enhancing the Quality of Service of next-generation communication networks. He has authored more than 20 refereed journal and conference publications. He holds a patent in 3.5G wireless cellular networks and WiMAX, and another patent is currently being prepared for 4G technologies.

Mr. Yahya Nasser Al-Hajri, Senior Specialist, Regulatory and Compliance Unit, TRA
Mr. Yahya is a computer engineering graduate with 15 years' experience in the ICT sector. He works for the Telecommunications Regulatory Authority of Oman as a senior specialist in the technical standards and numbering department of the regulatory and compliance unit. Mr. Yahya is responsible for the development of ICT Technical Regulations & Guidelines. On international matters, Mr. Yahya also acts as co-rapporteur for Question 1 of Study Group 1 of the development bureau of the International Telecommunication Union (ITU).


Invited Researchers

Prof. Youcef Baghdadi
Professor, Department of Computer Science, College of Science, Sultan Qaboos University

Prof. Youcef Baghdadi received his HDR in Computer Science from University Paris 1 Pantheon-Sorbonne and his PhD from University of Toulouse 1, France. He is currently a Full Professor in the Department of Computer Science at Sultan Qaboos University in Oman. His research aims at bridging the gap between business and information technology (IT), namely in the areas of cooperative information systems (IS), web technologies, e-business, service-oriented computing, and methods for service-oriented software engineering. He is an expert in Service-Oriented Architecture (SOA). He has published many papers in journals such as Information Systems Frontiers, Information Systems and E-Business, Service-Oriented Computing and Applications, Business Information Systems, Electronic Research and Applications, Web Grids and Services, and others.

Dr. Abderezak Touzene
Associate Professor, Department of Computer Science, College of Science, Sultan Qaboos University

Abderezak Touzene received the BS degree in Computer Science from the University of Algiers in 1987, the M.Sc. degree in Computer Science from Paris-Sud University in 1988, and the Ph.D. degree in Computer Science from Institut Polytechnique de Grenoble, France, in 1992. He is an Associate Professor in the Department of Computer Science at Sultan Qaboos University in Oman. His research interests include Cloud Computing, Parallel and Distributed Computing, Wireless and Mobile Networks, Network-on-Chip (NoC), Cryptography and Network Security, Interconnection Networks, Performance Evaluation, and Numerical Methods. Dr. Touzene is a member of the IEEE and the IEEE Computer Society.


Table of Contents

Cloud Computing and its Security Issues ..... 1
Zunaira Zubair Ahmed, Rachana Visavadiya, and Sanjay Kumar
Department of Computer Science, Waljat College of Applied Sciences, Muscat, Oman.

Diagnosis Security Problems for Hybrid Cloud Computing in Medium Organizations
Mohanaad T. Shakir, Asmidar Bit Abu Bakar, Yunus Bin Yusoff, and Mustefa Talal Sheker
Al-Buraimi University College, Oman.
University Tenaga National (UNITEN), Malaysia.

A Survey on Hybrid Route Diversification and Node Deployment in Wireless Sensor Networks ..... 12
Mohamed Yasin, Dr. P. Sujatha, Dr. A. Mohamed Abbas, and Dr. M. S. Saleem Basha
Pondicherry University, India.
Mazoon University College, Muscat, Oman.

An Analysis of User Satisfaction of Mobile Banking Application in Oman ..... 17
Faiza Al Balushi, and Dr. Ashish
IT Department, Higher College of Technology, Muscat, Oman.

Human Immune System Based Security System Model for Smart Cities ..... 25
Ansu George, Mahata Sudeshna, Sonia Soans, and K Chandrasekaran
Department of Computer Science and Engineering, National Institute of Technology, Surathkal, Karnataka

Defense-in-Depth Architecture for Mitigation of DDoS Attacks on Cloud Servers ..... 31
Dr. S. Benson Edwin Raj, Mr. N. Senthil Kumar, and Mrs. Revathi
IT Department, Rustaq College of Applied Sciences, Sultanate of Oman
MCA Department, St. Joseph College, Trichy, India.

Distributed Computing Environment Based on Cloud Computing Technology for MRMWR ..... 37
Boumedyen Shannaq
Ministry of Regional Municipalities and Water Resources (MRMWR), Directorate General of Planning and Studies, Oman

AIRS: A Search Engine Performance Visualization with Ontology ..... 41
Dr. Sharmi Sankar, Noora Al-Alawi, Mr. Ishtiaque Mahmood, and Dr. Jehad Al-Bani Younis
Ibri College of Applied Sciences, Sultanate of Oman

Cloud Computing and its Security Issues


Zunaira Zubair Ahmed, Rachana Visavadiya, Sanjay Kumar
Department of Computer Science
Waljat College of Applied Sciences
Muscat, Oman

Abstract—The distribution of computing resources and facilities over the Internet is referred to as cloud computing. The different services provided to clients by the cloud are Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS), along with enormous storage capacity and security. Google's applications, Microsoft Azure, Amazon, IBM, etc., are well-known examples of cloud service providers that enable users to create and access applications in the cloud environment from anywhere, at any time. In a cloud environment it becomes easy to tamper with or hack data. The service providers offer data storage and access in remote servers, which is why providing security is a foremost concern. This paper aims to describe the services offered by the cloud, the incentive for working with cloud computing, and the security threats and challenges associated with it.

Public Cloud: Shared by two or more firms on a larger scale.
Hybrid Cloud: Obtained by joining a Private Cloud and a Public Cloud; used by most industries.

B. Models/Layers of Cloud Computing
Cloud computing provides three different types of services. These services can work hand-in-hand or they can be used individually. The existing cloud services are [7]:
1) Infrastructure-as-a-Service (IaaS): This is a way of delivering a computer infrastructure to enterprises, and the product is typically in the form of platform virtualization. Essentially, the consumer can control the storage, applications and operating systems, and perhaps have partial influence in choosing the networking components, but does not control the cloud infrastructure. Instead of purchasing the entire hardware, consumers pay only for what they use, similar to electricity or other utility billing [13].

Keywords—Cloud computing, security, data security, public cloud, IaaS, SaaS, PaaS.

I. INTRODUCTION
Cloud computing is defined as a network-based environment that provides on-demand resources and services to clients over the Internet [7]. It allows developers to write and run applications in the cloud itself. This makes it an attractive and smart solution for enterprises, as it doesn't entail major capital investments to meet their needs. Moreover, they can retrieve their data from any place with an Internet connection [2]. If security procedures are not suitably applied to data operations and transmissions, then there will be high data risk [11].

2) Platform-as-a-Service (PaaS): A computing platform permitting the creation of online applications effortlessly, without the complication of purchasing and/or maintaining the software or its infrastructure, is termed PaaS. It is the distribution of an architecture or framework. PaaS is equivalent to SaaS with the difference that, instead of the software being provided over the web, it is the platform that creates the software that is provided over the World Wide Web [12]. Apart from offering an area to store and organise applications, PaaS also offers an IDE that supports a complete life cycle for developing applications that can be presented on the Internet with ease. This provides development and delivery tools as a service. A platform has to be established to effectively leverage the increasing number of services made available in the cloud.

According to The Global State of Information Security Survey 2016, 38% more security incidents were detected in 2015 than in 2014 [4]. This makes it extremely important to identify and deal with these security issues. A Gartner survey says that total market spending on public cloud services is expected to increase to $210B in 2016 from $76.9B in 2010 [9]. This is why the area of cloud computing is appealing to businesses and researchers. The Global State of Information Security Survey 2016 also mentioned that respondents boosted their information security budgets by 24% in 2015 [4]. This shows that cloud computing is ever-expanding in the IT field, particularly for the security community, due to cloud architectures emerging all over the world. The security of cloud computing services is a concern which is delaying its adoption in today's technology world [1].

3) Software-as-a-Service (SaaS): Software as a Service is a way to deliver software to clients over the Internet. Its frequent uses include email services (Gmail, Hotmail, etc.), antivirus scans (Kaspersky, Symantec, etc.) and word processors (Google Docs, Adobe Buzzword, etc.). SaaS establishes an advanced technique of providing software, and is more of a business model. It delivers web-based software over the Internet. This allows the customer to run the application in the browser; instead of owning the software on their machine, they pay for its use. Users can avoid massive expenditure on software packages and licenses through this economical approach. This cloud service dodges issues pertaining to difficulties with software installation, system
A. Types of Cloud
There are three different kinds of cloud [2]:
Private Cloud: It is owned and maintained by a single firm, making it more flexible and providing better control.

III. CLOUD COMPUTING SECURITY THREATS

compatibility, manual updating, etc., and thus appeals to establishments and clients.

As already evident, security is one of the major challenges when implementing cloud computing. Security concerns arise due to the transfer of data to and from remote data centres, the running of applications using virtual machines, etc. As identified by the Cloud Security Alliance, cloud computing faces the following threats [2][3][16]:

II. INCENTIVE FOR CLOUD COMPUTING IN THE MARKET
Cloud computing offers various advantages and has been convincing enough to be adopted by a considerable number of companies. It has tackled quite a lot of problems for firms thanks to the following characteristics [8]:

A. Data Loss:
Cloud computing providers are curators of an enormous magnitude of data, which is seen as a privacy concern. There are various methods through which data can be compromised, intentionally or accidentally, causing severe consequences [5]. One example of data being compromised is records getting deleted without a backup of the originals. If stored on an unreliable medium, the data/records will become unrecoverable. Real destruction can be caused by the loss of an encryption key. Lastly, unauthorized individuals must be prohibited from accessing sensitive data. Cloud computing magnifies the complications tied to data privacy. Data loss or leakage can have an overwhelming impact on industries. Apart from brand damage and harm to reputation, a loss can impact client and partner trust. Legal penalties may also follow if the lost or leaked data is relied upon.

A. Flexibility and Scalability:
Cloud computing lets users be more flexible, i.e., they can access data both in and out of the workplace with the help of a web-enabled device. Documents and files can be shared simultaneously, facilitating collaboration. The scalability factor enables a business to scale its IT supplies up or down effortlessly upon requirement [14].
B. Scope of Network Access:
Cloud computing offers a consistent procedure to access the network through various media, for example laptops, mobile phones, tablets, etc.
C. Location Independence:
As cloud computing offers virtualized resources that can be accessed from anywhere, at any time, it creates a perception of location independence. This adds to the reasons why companies opt for cloud computing.

B. Data Breaches:
A virtual machine could use side-channel timing information to extract private cryptographic keys being used in other virtual machines on the same physical server. However, in many cases an attacker wouldn't even need to go to such lengths. If a multi-tenant cloud service database is not properly designed, a flaw in one client's application could allow an attacker access not only to that client's data, but to every other client's data as well. Unfortunately, the steps taken to alleviate one of these serious threats, data leakage or data loss, can worsen the other. To reduce the impact of a data breach, the data can be encrypted. However, if the encryption key is lost, so is the data. Conversely, offline backups of the data reduce the impact of data loss, but increase the exposure to breaches.
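The breach-versus-loss tradeoff above can be illustrated with a toy XOR stream cipher. This is a sketch for illustration only, not the paper's method and not production cryptography (a real system would use a vetted scheme such as AES-GCM); the key and record values are made up:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key by hashing a counter.
    # Toy construction for illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; applying the same function again decrypts.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

record = b"customer: alice, balance: 1200"
key = b"tenant-master-key"
stored = encrypt(key, record)              # what a breached server would leak

assert encrypt(key, stored) == record      # with the key, data is recoverable
assert encrypt(b"lost", stored) != record  # key lost: the data is lost with it
```

Encrypting at rest limits what a breach exposes, but as the last line shows, the ciphertext is worthless without the key, which is exactly the data-loss risk described above.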

D. Economy of Scale:
Cloud computing encourages complete utilization of resources, as it virtualizes a single resource into smaller resources. This avoids wastage of resources, making cloud computing more economical.
E. Reliability:
To deliver a higher degree of reliability and uptime, cloud computing offers methods to employ redundant resources.
F. Cost Effectiveness:
The unique idea of cloud computing is that clients pay only for what they require and use, instead of having to purchase an entire software package. Owning software is expensive, but paying for only the required amount of resources isn't. This explains why cloud computing is cost effective.
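The pay-for-use argument can be made concrete with a back-of-the-envelope comparison; all figures below are hypothetical, not drawn from the paper:

```python
# Hypothetical figures for illustration only.
license_cost = 12000.0     # upfront cost of owning the software for a year
hourly_rate = 0.90         # pay-per-use cloud rate per hour
hours_needed = 6 * 160     # actual usage: ~6 hours/day on 160 working days

cloud_cost = hourly_rate * hours_needed
assert cloud_cost < license_cost   # pay-per-use wins when utilization is low
```

The comparison flips, of course, if utilization is high enough that the metered hours exceed the flat license cost, which is why the economics depend on actual usage.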

C. Account or Service Hijacking:
Another issue that cloud customers need to be conscious of is traffic hijacking. These threats include man-in-the-middle attacks, spam campaigns and denial-of-service attacks. Account or service hijacking includes attack methods such as phishing, fraud, and exploitation of software vulnerabilities. These methods still attain results because credentials and passwords are often reused, intensifying the impact of such attacks. If attackers obtain access to a customer's details, they can alter the customer's transactions, redirect clients to illicit sites and falsify information. An attacker may also use a compromised individual's account as a new base from which ensuing attacks can be launched to spread further damage.

G. Sustainability:
Cloud computing promotes clean-tech applications that contribute to sustainability. Due to its lower energy needs, the cloud lessens carbon emissions. Offsite servers have the potential to prevent 85.7 million metric tons of annual carbon emissions by 2020, says a CDP report [15]. According to research carried out by Google, companies can reduce their energy costs by up to 60-85% by switching to cloud computing [15]. This contributes to the sustainability of cloud computing.
The advantages of cloud computing appear to be ideal, but there are some issues related to security that can cause trouble. The next section of the paper discusses these problems.

The properties, services and performance of cloud services are advertised a great deal, but no information is provided on configuration, hardening, patching, auditing and logging, nor are any details given about the inner security procedures. Particulars such as the storage of data and related logs, access permits to them, and the data that the vendor could disclose, if any, in the event of a security disturbance are either not stated clearly or are disregarded completely. This leaves customers with an uncertain risk profile, which may give rise to major threats.

becomes too expensive to run and forces the provider to take it


down themselves.
F. Malicious Insiders:
An insider threat has been defined by CERT as follows:
"A malicious insider threat to an organization is a current or former employee, contractor, or other business partner who has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems."
The risk of fraud, brand damage, theft or deletion of confidential information, and negative financial impact are some of the perils posed by malicious insiders, who can be present within the cloud provider or the firms it serves; the significance of the threat depends on the extent of access they possess. To recognize and secure against malign insiders, the consumers of cloud services must understand their providers' methods. In an erroneously designed cloud platform, a malicious insider, such as a system administrator, can have access to potentially sensitive information. That the threat is an absolute antagonist is a fact, although the degree of the threat is still up for consideration.
The degree of access to more pivotal systems, and consequently to data, rises going from IaaS to PaaS and SaaS. There is considerable risk for systems depending exclusively on the cloud service provider (CSP). Even after encryption is implemented, the system is still susceptible to the malicious insider threat if the keys are only available at data usage time and are not kept with the users.

In recent years, attacks have focused on the shared technology existing in cloud computing environments. The reliability, coherence and availability of services are compromised if attackers, using stolen credentials, gain access to critical sections of deployed cloud services.
D. Insecure Interfaces and APIs:
The software interfaces used by consumers to manage and interact with cloud services are called Application Programming Interfaces, or APIs. The overall security and availability of cloud services is influenced by the security of these APIs. These APIs must be designed in such a way that, from authentication and access control to encryption and activity monitoring, they combat both unintentional and deliberate attempts to bypass policy.
When third parties build upon these interfaces to provide their clients with value-added services, risk increases and the complexity of a new layered API is introduced. Confidentiality, integrity, availability and accountability are the security issues that organizations face when they rely upon weak APIs.
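One way such an API can enforce authentication, offered here as a hedged sketch rather than any real provider's scheme, is to require every request to carry an HMAC signature computed with a per-client secret; the secret, method and path below are made-up illustrations:

```python
import hashlib
import hmac

# Hypothetical per-client secret, issued out of band at registration.
API_SECRET = b"client-42-secret"

def sign(secret: bytes, method: str, path: str, body: bytes) -> str:
    # Bind the signature to the exact request contents.
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(secret: bytes, method: str, path: str, body: bytes, signature: str) -> bool:
    # compare_digest avoids leaking timing information during comparison.
    return hmac.compare_digest(sign(secret, method, path, body), signature)

sig = sign(API_SECRET, "POST", "/v1/instances", b'{"size": "small"}')
assert verify(API_SECRET, "POST", "/v1/instances", b'{"size": "small"}', sig)
# A tampered body (or a wrong secret) fails verification.
assert not verify(API_SECRET, "POST", "/v1/instances", b'{"size": "large"}', sig)
```

A third-party layered service would need its own credentials on top of this, which is precisely where the risk compounds as noted above.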
E. Denial of Service:
Denial-of-service attacks are attacks meant to prevent users of a cloud service from being able to access their data or their applications. By forcing the victim cloud service to consume inordinate amounts of finite system resources such as processor power, memory, disk space or network bandwidth, the attacker (or attackers, as is the case in distributed denial-of-service (DDoS) attacks) causes an intolerable system slowdown and leaves all of the legitimate service users confused and angry as to why the service isn't responding. While DDoS attacks tend to generate a lot of fear and media attention, they are by no means the only form of DoS attack. Asymmetric application-level DoS attacks take advantage of vulnerabilities in web servers, databases, or other cloud resources, allowing a malicious individual to take out an application using a single, extremely small attack payload, in some cases less than 100 bytes long.
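A common first line of defense against such resource-exhaustion attacks, not prescribed by the paper but standard practice, is per-client rate limiting. The token-bucket sketch below (all parameters hypothetical) admits short bursts while capping the sustained request rate:

```python
class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill for the time elapsed since the previous call, then spend one token.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=5.0)   # 2 requests/s, bursts of 5
# A flood at t=0: the first five requests pass, the rest are dropped.
assert [bucket.allow(0.0) for _ in range(6)] == [True] * 5 + [False]
# One second later, two tokens have refilled.
assert bucket.allow(1.0) and bucket.allow(1.0) and not bucket.allow(1.0)
```

A limiter like this throttles volumetric floods per client, though it does not by itself stop the asymmetric application-level attacks mentioned above, where a single tiny request is already expensive to serve.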
Experiencing a denial-of-service attack is like being caught in rush-hour traffic gridlock: there's no way to get to the destination, and nothing can be done about it except sit and wait. As a consumer, service outages not only frustrate you, but also force you to reconsider whether moving your critical data to the cloud to reduce infrastructure costs was really worthwhile after all. Even worse, since cloud providers often bill clients based on the compute cycles and disk space they consume, there's the possibility that an attacker may not be able to completely knock the service off of the net, but may still cause it to consume so much processing time that it

G. Abuse of Cloud Computing:
Cloud computing gives even small organizations access to expansive amounts of computing power, and this is one of its signature advantages. Renting time on tens of thousands of servers from a cloud service provider is much more economical than procuring and maintaining the same number of servers.
Relatively flimsy registration systems are one of the reasons why cloud service providers are continuously attacked. Cracking an encryption key might take an attacker several years on their own finite hardware, but by making use of a range of cloud servers, they might be able to crack it in minutes. With the aid of cloud infrastructure, malicious coders and other criminals can invade a public cloud without being detected, infecting hundreds of machines with malware. Other areas of danger include distributed denial of service (DDoS) and the launching of dynamic attack points.
Originally, PaaS vendors were most affected by such attacks, but recently IaaS providers are being targeted as well. A number of grave repercussions have to be considered by the service providers, essentially the detection of people abusing the service and the prevention of the threat.

The whole environment is exposed to a possible breach if a structural piece of the shared technology, such as the hypervisor or an application in the SaaS environment, is compromised. Gaining unauthorized access to data and impacting the operations of other cloud users is therefore the focal point for malicious end users.
Whether the service model is IaaS, PaaS, or SaaS, a defense-in-depth strategy is suggested, incorporating computation, storage, network and user security administration, and monitoring. A critical feature is that a sole vulnerability or misconfiguration can endanger a whole provider's cloud.

H. Insufficient Due Diligence:
Many organizations are eager to make use of cloud computing due to its obvious advantages, which include cost reductions, operational efficiencies and improved security. Although these goals are realistic for organizations that have the resources to adopt cloud technologies correctly, several of them rush in without completely understanding its capacity and issues. Mismatched expectations between the service provider and the customer lead to contractual disputes over responsibilities for accountability or transparency. Financial and functional profits must be weighed carefully against conflicting security concerns. This is made more complex by the fact that cloud adoption is driven by groups, prompted by possible profits, who may fail to keep track of the security consequences. The reduction of hardware and software ownership and maintenance allows companies to focus on their core business strengths. Principal aspects that play a part in evaluating an organization's security status comprise code updates, vulnerability profiles, intrusion attempts, software versions and security designs. If security is not sustained, the in-depth analysis required in highly controlled or regulated operational areas could suffer and lead to unknown risks. When planners new to cloud technologies design applications for the cloud, uncertain functional and architectural issues arise.
To limit the damage caused by a violation, organizations must be aware of these threats and of defense-in-depth strategies. The main point for organizations moving to cloud technology is that they must have sufficient resources and be able to implement comprehensive internal due diligence to better understand the insecurities they face by selecting this modern technology. Computation, storage and network security administration and monitoring should be included in the defense-in-depth strategy. It must be guaranteed that the operations of other users are not influenced by individual users running on the same cloud provider. Customers should not have access to any other user's actual or residual data, network traffic, etc.

CONCLUSION
Cloud computing is recognized as an advanced technology
which drastically transforms the manner in which the Internet
is put to use, granting considerable advantages to those who
utilize it. Nevertheless, the limitations and insecurities while
making use of this technology must be thoroughly analyzed.
Apprehending what vulnerabilities occur in Cloud Computing
will help organizations to make the move towards this
technology. Several techniques are applied in Cloud computing
to improve application performance in a way that is reliable
and cost effective. The basis of cloud applications are network
appliance software whose operating systems run in a virtual
machine in a virtualized environment. Virtualization lets
multiple users to share a physical server and is one of the major
concerns for cloud users. The various types of virtualization
technologies may undertake security procedures in different
ways. Critical differences between cloud security and virtual
machine security that are left out in presumptions formulated in
cloud-security research, lead to major disparity between cloud
security practice and cloud security research. An equilibrium
between business profits and covert probable threats that may
affect success must be reached for productive cloud computing.
Analogous to other Internet-based technology, cloud, too, is
at a prominent risk despite the cloud suppliers being in
possession of effective servers and capital in order to present
proper services for their users. If the virtual environment is not
secure, a reliable cloud is unfeasible. A process which informs
whether the virtual machines in the cloud are patched properly
would be a functional part of the framework such as in the case
where one section of the framework is developing a method to
observe the clouds management software, and another section
is developing a secluded handling for a particular clients
applications.
Users response to cloud, for instance, whether they permit
automated patching software to run, upgrade anti-virus
software, or whether they comprehend how to harden their
virtual machines in the cloud, can be followed and monitored.
Since the time companies first delivered online services for
customers and organizations, a multitude of these issues have
been dealt with. Numerous companies privacy principles, and
overall business practices have been determined by the extent
of practical knowledge and skill. Owing to the fact that cloud
computing is composite and ever-changing in nature, standard
solutions are not convenient for the cloud domain. Modern
virtualization-aware security solutions that are self-defending,
deliver real-time discovery and aversion of unknown threats
should be furnished so as to make the overall system

I. Shared Technology Vulnerabilities:There may be a probability that information belonging to


different customers is situated on the same data center since
cloud provider platforms are being shared by different users.
Thus, one customers information may be given to another
causing data leakage.
Services of Infrastructure as a Service (IaaS) vendors are
supplied by sharing infrastructure to make it more adaptive.
Since, the primary components this infrastructure is composed
of (e.g., CPU caches, GPUs, etc.), were not intended to offer
strong isolation properties for a multi-user architecture; a
virtualization hypervisor that moderates access between guest
operating systems and the physical resources, was initiated.
However, guest operating systems have been able to gain
unsuitable access to the platform and inappropriate levels of
control to affect the central platform.
Disk partitions, CPU caches, and other shared components
were not made for strong compartmentalization. The entire

dependable. Cloud providers can increase the amount of


resources to protect themselves from possible dangers.
The latest encryption procedures can be applied to store and
recover data from cloud which provides secure access to data
in cloud. In addition, apt key management techniques, to
deliver the key to the cloud users in a way that only approved
persons can access the data, could be employed. Shared
authorization, verification of customer applications,
consolidating software as a service, and modifying the
understanding of guidelines can be included to supplement
security optimizations.
The onset of recently developed Cloud Computing
technology which absolutely relies upon the faith and
confidence it establishes with its users requires a
comprehensive study in the area of its security. Redesigned
traditional security solutions along with new solutions are
needed that can work with the cloud architecture. Conclusive
regulations for cloud computing security can be developed in
the future.
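The key-management idea suggested above, delivering keys so that only approved persons can access the data, can be illustrated with a minimal per-user key-derivation sketch. This is a hypothetical illustration, not a specific provider's API; the function and identifiers below are assumptions made for the example.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the provider keeps one master key and derives a
# distinct data key per approved user, so keys can be delivered without
# storing a separate key for every user. Names are illustrative only.
def derive_user_key(master_key, user_id):
    # HMAC-SHA256 as a simple one-step key derivation (HKDF-extract style)
    return hmac.new(master_key, user_id.encode("utf-8"), hashlib.sha256).digest()

master_key = secrets.token_bytes(32)
alice_key = derive_user_key(master_key, "alice@example.com")
bob_key = derive_user_key(master_key, "bob@example.com")

assert alice_key != bob_key    # each user receives a distinct key
assert len(alice_key) == 32    # 256-bit key, suitable for AES-256
```

The same derivation run twice for the same user yields the same key, so the provider never needs to persist per-user keys, only the master key.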

Diagnosis Security Problems for Hybrid Cloud Computing in Medium Organizations
Mohanaad T. Shakir, Asmidar Bit Abu Bakar, Yunus Bin Yusoff, and Mustefa Talal Sheker
Al-Buraimi University College, Oman
University Tenaga National (UNITEN), Malaysia

Abstract: Cloud computing is considered one of the most rapidly growing technologies, as it has high flexibility in both usage and application; therefore, it has been used widely by many organizations. Cloud computing features both ease and speed of access to data, as well as reduced cost of data storage. Accordingly, different organizations are using this useful technique. Since cloud computing has been used in ample facets and various environments, many organizations have faced several security problems with it. This paper aims to diagnose some of these security problems and identify their characteristics by distributing a questionnaire among a group of organizations in twelve (12) selected countries. Furthermore, this study examines the most important issues in cloud computing security and determines frames to help researchers improve the styles and techniques of security systems for cloud computing. Thus, this will enhance the use of cloud computing in different organizations and institutions.

Keywords: Cloud computing, information system security.

I. INTRODUCTION
Cloud Computing is a recently introduced computing model. Its main advantages lie in upgrading hardware power efficiency and resource use. At the same time, it gives users the opportunity for universal access and the privilege of paying only for the services they have received. It has been defined in multiple ways due to its relatively young presence. However, in this study we abide by NIST's thorough definition, stating that Cloud Computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [1]. The essence of Cloud Computing extends to a wide range of information, software, and resources that are made available to consumers through their own browsers.
Cloud Computing has drawn insight from other computing paradigms such as Web 2.0, virtualization and SOA (Service Oriented Architecture). To some extent, Cloud Computing can be considered the result of these paradigms' evolution, and so the name itself stands for the change they have undergone with respect to the service they stand for [2]. Cloud Computing can be used in one of its three model variations, as shown in Figure 1: PaaS (Platform-as-a-Service), IaaS (Infrastructure-as-a-Service), and SaaS (Software-as-a-Service). PaaS involves the utilisation of platform-layer resources, such as operating-system support and software frameworks, meant to introduce, expand, or transfer resources to the cloud. IaaS, on the other hand, is the most basic resource provider, dealing with networks, storage, and processing, ensuring that the consumer is able to deploy and run arbitrary software, applications, and operating systems. SaaS is the final model, which gives the customer the option to choose from a wide range of cloud-implemented, end-user applications, brought to the consumer through a thin interface (for example, web-based e-mail) on their preferred device. Cloud computing undoubtedly presents multiple advantages, but its limitations, including legal issues, security, standardization and privacy, should also be kept in mind. Every model comes with certain security complications. In addition to its own security problems, the cloud has still not overcome those inherited from the technologies that influenced it or acted as its fundamental bases, which impedes system security performance even more. Although many security measures have been taken with respect to singular system parts, there is no unified system for protecting the whole cloud. This is precisely where we have concentrated our efforts: on the procurement of a general system for handling system security concerns.

This work was supported in part by the Information Technology department, AlBuraimi University College, Oman, and COGS, University Tenaga National (UNITEN).
Mohanaad T. Shakir is a Ph.D. student at University Tenaga National (UNITEN), Malaysia, and academic staff at AlBuraimi University College, Oman (corresponding author; phone: 0096891990794; e-mail: mohanaad@buc.edu.om).
Asmidar Bit Abu Bakar is academic staff at University Tenaga National (UNITEN) (e-mail: asmidar@uniten.edu.my).
Yunus Bin Yusoff is academic staff at University Tenaga National (UNITEN) (e-mail: yunusy@uniten.edu.my).

Figure 1. Cloud Computing layers

II. SECURITY AND PRIVACY REQUIREMENTS
Security deals with informational privacy, integrity and availability, and is additionally characterized by AAA (Authorization, Authentication, Access control), as shown in Figure 3.

Figure 3. AAA Triangle

Privacy, in turn, relates to the adherence to certain legal and functional requirements, including client agreement, personal identification and legitimate usage, as well as purpose constraints. Additional norms are control, compliance, and clarity. When these requirements are met, the cloud arrangement is considered to be lawfully operated. ISO 7498-2, imposed by the International Standards Organization, concerns some supplementary specifications: confidentiality, identification and authentication management, user station and access, integrity, availability, compliance and audit, transparency, governance, and accountability.
Confidentiality involves the numerous cloud access points and users, which makes the cloud sensitive to illegitimate venues and pirate individuals. It must be ensured that only authorized users can access their data. This is especially mandatory for public clouds, since they are the most vulnerable. Software applications, shared information and profiles, information exposure, and weak user identification are among the immediate threats concerning cloud storage [22]. The cloud's multitenancy characteristics pose the threat of user data abuse, since resource sharing between clients can expose private information. This is largely due to the fact that a cloud separates its data assets only virtually. Information that has been deleted can be unlawfully retained and reconstructed because of the cloud's data remnants. Fraud protection should also be implemented, because weak identification may result in illegitimate data access. It is mandatory that cloud service providers protect users from breaches coming from the various software applications which require access to the client's information [16]. This data, although used by the application, must remain secure and unavailable to third parties. Privacy can be secured by popular techniques such as 2FA [17, 18] and encryption algorithms [19, 20].

III. METHODOLOGY
In this paper, we diagnose problems in the security of cloud computing. Cloud computing can have many security problems, spanning areas such as ciphers, information hiding, information security, and intrusion. Applying cloud computing in an organization can expose it to many security problems. To determine these problems, we first conducted a survey across many organizations that use cloud computing. Questionnaires were distributed to collect the data required to diagnose security problems in cloud computing and define their nature. The collected data gave us a clear vision of the existing problems and of how to address the multiple security problems of cloud computing. The total number of samples collected was 56 cases across many countries. The survey was disseminated through a website and covered many organizations that use cloud computing, such as universities, companies, and government institutions. Secondly, the data were analyzed statistically to indicate the problems that organizations face.
This study sheds light on the common confidentiality problems of cloud computing security. Figure 2 shows the steps of the methodology.
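Two-factor authentication (2FA), mentioned above as a popular confidentiality safeguard, commonly pairs a password with a time-based one-time password (TOTP). A minimal sketch of the RFC 6238 TOTP algorithm is shown below; this is an illustration of the standard technique, not part of the surveyed systems.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

Because both the server and the user's device derive the same code from the shared secret and the current time step, a stolen password alone is not enough to authenticate.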

Fig. 2. Methodological Framework

A. Participants
The participants of this study were organizations that use cloud computing. The study focused on experts as participants, such as professors, lecturers, IT professionals, and researchers with an interest in cloud computing security. Medium-organization samples from twelve (12) countries (as shown in Table 1) were selected to ensure a wider scope of data collection, which helped in diagnosing the selected problems. The total number of samples collected was one hundred twenty-five (125), of which fifty-six (56), from various organizations, were relevant. Based on the results of this survey, most of the organizations that used cloud computing suffered from security problems.

B. Data Collection Instrument
In this section, we present the questionnaire that was distributed and the feedback from the organizations on all questions (via e-mail and field visits) within a period of six months. Organizations that did not use cloud computing were not included in the data of this study. The questionnaire consists of two parts: the first part covers information and background about the samples, while the second part concerns cloud computing security, as shown in the questions below.

Table 1. Questionnaire distribution (Part 1)

Q1 (Title of qualification awarded): High School: 21; Associate's degree: 22; Bachelor's degree: 10; Master's degree: 8; Doctorate degree: 2
Q2 (Age): 20-25: 2; 25-30: 12; 30-35: 14; 35-40: 16; 40-50: 10; Over 50: 2
Q3 (Location): KSA: 3; Jordan: 2; Iraq: 7; USA: 10; UAE: 3; Canada: 3; Oman: 2; Australia: 7; Turkey: 5; Malaysia: 9; India: 2; Germany: 3
Q4 (Position): Managing Director: 4; Manager: 6; Academic staff: 20; IT Professional: 16; Programmer; Others
Q5 (Years of experience in the aforementioned position): 0-2: 35; 2-5: 19; 5-10; Over 10
Q6 (Work sector): Education: 27; Bank: 7; Computer/IT; Insurance; Government agencies; Healthcare: 6; Other: 1
Q7 (Number of staff or students at the organization): 0-100: 21; 101-299: 22; 300-599; Over 600

Part 2: Confidentiality
Each statement was rated on a four-point scale: Strongly Agree, Agree, Neither, Disagree.
1: I am concerned with the improvement features of security in cloud computing.
2: I often face problems in the authority model of cloud computing.
3: Data differ based on the level of security in cloud computing.
4: Data need various sizes of key encryption according to the level of security.
5: I prefer the authority of cloud computing that has multi-level security.
6: Timing of encryption and decryption is vital.
7: The legality of data uploaded to cloud computing is important.
8: I prefer the process of recognizing and notifying the illegal data.

V. DATA ANALYSIS AND RESULTS
In this paper, the statistical analysis and evaluation of the data were done using the mean, median, mode, standard deviation, variance, range, and sum. SPSS was used for the statistical analysis and Excel for the statistical graphic summary, as shown in Fig. 4.

Figure 4. Statistical analysis keys

The triple averages comprise the mean, median, and mode. There are many types of averages in statistics: the "mean" is the arithmetic average, the "median" is the mid value, and the most frequently recurring value is the "mode" [13]. Standard deviation is a statistical measure of the dispersion of a set of data from its mean: the more spread out the data are, the higher the deviation. Standard deviation is calculated as the square root of the variance [14]. The variance measures the spread of a set of data points around their mean value; it is the mathematical expectation of the average squared deviations from the mean [15]. The range is one of several indices of variability that statisticians use to characterize the dispersion among the measures in a given sample [21]. Below is an explanation of the analysis for Part 2.

Table 2. Descriptive Statistics, Part 2
      N    Range  Min   Max    Sum    Mean     Std. Error  Std. Deviation  Variance
Q1    56   3.0    1.0   4.00   92.0   1.6429   .13100      .98033          .961
Q2    56   3.0    1.0   4.00   80.0   1.4286   .12183      .91168          .831
Q3    56   2.0    1.0   3.00   72.0   1.2857   .07942      .59435          .353
Q4    56   2.0    1.0   3.00   72.0   1.2857   .07942      .59435          .353
Q5    56   2.0    1.0   3.00   74.0   1.3214   .07256      .54296          .295
Q6    56   2.0    1.0   3.00   71.0   1.2679   .07425      .55567          .309
Q7    56   2.0    1.0   3.00   66.0   1.1786   .05759      .43095          .186
Q8    56   2.0    1.0   3.00   66.0   1.1786   .06297      .47125          .222
Valid N (listwise): 56
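The descriptive measures reported in Table 2 are all available in Python's statistics module. The sketch below runs them on a hypothetical response vector; the values are illustrative assumptions, since the survey's raw per-respondent data are not published.

```python
import statistics

# Hypothetical 1-4 Likert responses (1 = Strongly Agree ... 4 = Disagree);
# illustrative only, not the survey's actual response vector.
responses = [1, 1, 2, 1, 3, 1, 2, 1, 4, 1]

print(statistics.mean(responses))      # arithmetic average: 1.7
print(statistics.median(responses))    # middle value of the sorted data: 1.0
print(statistics.mode(responses))      # most frequent value: 1
print(statistics.variance(responses))  # sample variance (squared deviations)
print(statistics.stdev(responses))     # square root of the variance
```

SPSS reports the same quantities per question; the mean and standard deviation columns of Table 2 correspond to `statistics.mean` and `statistics.stdev` applied to each question's 56 responses.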

Table 2 shows the results of the survey questionnaire that was distributed to the participants. The findings reveal the mean of each question in the survey: Question 1 (Q1) has a mean score of 1.6429 and Q8 has a mean score of 1.1786. A detailed analysis of the data is further discussed in Table 3.

Table 3. T-Test Result (One-Sample Test, Test Value = 0)

      t       df   Sig. (2-tailed)   Mean Difference   95% CI Lower   95% CI Upper
Q1    12.54   55   .00               1.643             1.3803         1.9054
Q2    11.73   55   .00               1.429             1.1844         1.6727
Q3    16.19   55   .00               1.286             1.1265         1.4449
Q4    16.19   55   .00               1.286             1.1265         1.4449
Q5    18.21   55   .00               1.321             1.1760         1.4668
Q6    17.07   55   .00               1.268             1.1190         1.4167
Q7    20.47   55   .00               1.179             1.0632         1.2940
Q8    18.71   55   .00               1.179             1.0524         1.3048

Table 3 shows that the first question of the questionnaire ("I am concerned with the improvement features of security in cloud computing") is near the Agree category (Mean Difference = 1.643). Since this value lies between the lower and upper bounds of the 'Confidence Interval of the Difference', it can be considered an acceptable value. The values of questions 2 to 8 (Mean Difference = 1.429, 1.286, 1.286, 1.321, 1.268, 1.179, 1.179, respectively) are near the Strongly Agree category, and their values also lie between the lower and upper bounds of the 'Confidence Interval of the Difference'. Accordingly, they can be considered highly accepted results.
The total (average) results for all questions of the survey are as follows:
Average Mean = (Σ mean)/N = (1.643 + 1.429 + 1.286 + 1.286 + 1.321 + 1.268 + 1.179 + 1.179)/8 = 1.3239
Average Lower = (Σ Lower)/N = (1.3803 + 1.1844 + 1.1265 + 1.1265 + 1.1760 + 1.1190 + 1.0632 + 1.0524)/8 = 1.15354
Average Upper = (Σ Upper)/N = (1.9054 + 1.6727 + 1.4449 + 1.4449 + 1.4668 + 1.4167 + 1.2940 + 1.3048)/8 = 1.4938

Table 4. Average results

Average Mean Difference: 1.3239
95% Confidence Interval of the Difference: Average Lower: 1.15354; Average Upper: 1.4938

The analysis of the total average results (as shown in Table 4) shows that the Average Mean Difference = 1.3239 is near the Strongly Agree category, and this value lies between the Average Lower and Average Upper bounds of the 'Confidence Interval of the Difference'. This result indicates a highly acceptable value for the average of all questions.

VI. DISCUSSION AND CONCLUSION
Cloud computing suffers from various problems of confidentiality. Therefore, organizations and institutions need more oversight of their data to avoid any confidentiality problems that might occur in cloud computing, and to stay away from any problem that may damage important data [22]. In addition, organizations might face many security risks because of cloud computing. The results of this research present the various levels of problems that impact the security level of cloud computing in two phases. On one hand, organizations and institutions are very concerned with improving the security of cloud computing through the application of the authority model and a dynamic data-classification model based on multi-level security. On the other hand, they prefer to develop a multi-key cipher algorithm in order to manage encryption based on the level of security. Based on the results of this study, it is recommended that organizations apply new policies for classifying data into several security levels based on the nature of the data, to save time and effort. Consequently, developing new encryption methods that consider the nature of the data's security would be more convenient, reinforcing safety in the implementation of cloud computing in several organizations.
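The one-sample t-test figures above follow directly from the Table 2 summary statistics. The sketch below reproduces the Q1 row; the 2.004 critical value for df = 55 at 95% confidence is taken from standard t tables and is the only value assumed beyond the reported statistics.

```python
import math

# Q1 summary statistics from Table 2: n = 56, mean = 1.6429, sd = 0.98033.
n, mean, sd = 56, 1.6429, 0.98033

se = sd / math.sqrt(n)        # standard error of the mean (Table 2: .13100)
t = mean / se                 # one-sample t statistic against a test value of 0
t_crit = 2.004                # two-tailed 95% critical t for df = 55 (assumed)
lower = mean - t_crit * se    # 95% confidence interval, lower bound
upper = mean + t_crit * se    # 95% confidence interval, upper bound

print(round(t, 2), round(lower, 4), round(upper, 4))  # 12.54 1.3804 1.9054
```

The computed values match the Q1 row of Table 3 (t = 12.54, CI = 1.3803 to 1.9054) up to rounding, confirming the SPSS output is a plain one-sample t-test on each question's mean.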

VII. REFERENCES
[1] M. Boroujerdi and S. Nazem, "Cloud Computing: Changing Cogitation about Computing," World Academy of Science, Engineering and Technology, 2009.
[2] L. Vaquero, L. Rodero-Merino, J. Caceres, and M. Lindner, "A Break in the Clouds: Towards a Cloud Definition," ACM SIGCOMM Computer Communication Review, vol. 39, no. 1, pp. 50-55, January 2009.
[3] NIST, http://www.nist.gov/itl/cloud/index.cfm
[4] P. Mell and T. Grance, "The NIST Definition of Cloud Computing," NIST Special Publication 800-145, 2011. http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
[5] http://whatis.techtarget.com/definition/Confidentiality-integrity-andavailability-CIA
[6] GTSI Group, "Cloud Computing: Building a Framework for Successful Transition," White Paper, GTSI Corporation, 2009.
[7] W. Jansen and T. Grance, "Guidelines on Security and Privacy in Public Cloud Computing," NIST Draft Special Publication 800-144, 2011. http://csrc.nist.gov/publications/drafts/800-144/Draft-SP-800-144_cloud-computing.pdf
[8] T. Dillon, C. Wu and E. Chang, "Cloud Computing: Issues and Challenges," 24th IEEE International Conference on Advanced Information Networking and Applications, 2010.
[9] http://www.idc.com
[10] S. Ramgovind, M. M. Eloff and E. Smith, "The Management of Security in Cloud Computing," Information Security for South Africa (ISSA), Sandton, Johannesburg, 2-4 Aug. 2010.
[11] Z. Wang, "Security and Privacy Issues Within Cloud Computing," IEEE Int. Conference on Computational and Information Sciences, Chengdu, China, Oct. 2011.
[12] D. Zissis and D. Lekkas, "Addressing cloud computing security issues," Future Generation Computer Systems, vol. 28, pp. 583-592, 2012.
[13] http://www.purplemath.com/modules/meanmode.htm
[14] http://www.investopedia.com/terms/s/standarddeviation.asp#axzz1ZyGlP8Xd
[15] http://www.investopedia.com/terms/v/variance.asp#axzz1ZyGlP8Xd
[16] D. Zissis and D. Lekkas, "Addressing cloud computing security issues," Future Generation Computer Systems, vol. 28, pp. 583-592, 2012.
[17] D. Abraham, "Why 2FA in the cloud?," Network Security, vol. 2009, no. 9, pp. 4-5, September 2009.
[18] http://en.wikipedia.org/wiki/Two-factor_authentication
[19] Federal Information Processing Standards Publication 197, "Specification for the Advanced Encryption Standard (AES)," 2001.
[20] S. Fluhrer, I. Mantin, and A. Shamir, "Weaknesses in the Key Scheduling Algorithm of RC4," 8th Annual International Workshop on Selected Areas in Cryptography, Springer-Verlag, London, UK, 2001.
[21] https://www.mathsisfun.com/definitions/range-statistics-.html
[22] A. E. Youssef and M. Alageel, "A Framework for Secure Cloud Computing," IJCSI International Journal of Computer Science Issues, vol. 9, issue 4, no. 3, July 2012. ISSN (Online): 1694-0814.

Mohanaad T. Shakir is an Assistant Professor at Alburaimi University College, Oman. He holds a B.Sc. degree in Computer Science from the University of Almamoon, Baghdad, Iraq; a Post Diploma in Computer Security from the University of Technology, Iraq; and an M.Sc. in Information Technology (MIT) from the University Tenaga National (UNITEN), Putrajaya, Malaysia. He is a Ph.D. candidate in Information Communication Technology at the University Tenaga National (UNITEN), Putrajaya, Malaysia. His research interests include cipher security, computer-aided learning and cloud computing security. He can be contacted at mohanaad@buc.edu.om.
Dr. Asmidar Bte Abu Bakar is an Associate Professor at University Tenaga National, Malaysia. She received a B.Sc. degree in Computer Science and a Master of Science (Computer Science) from Universiti Putra Malaysia, and completed her PhD at Universiti Tenaga Nasional, Malaysia.
Dr. Yunus Bin Yusoff is an Associate Professor at University Tenaga National, Malaysia. He completed a B.Sc. in Computer Science and a Master in Computer Application at Pacific Lutheran University, and a Ph.D. in ICT at Universiti Tenaga National.

A Survey on Hybrid Route Diversification and Node Deployment in Wireless Sensor Networks
Mohamed Yasin1, Dr. P. Sujatha2, Dr. A. Mohamed Abbas3, Dr. M. S. Saleem Basha4
1 Research Scholar, Pondicherry University, India, Email: yascrescent@gmail.com
2 Assistant Professor, Pondicherry University, India, Email: spothula@gmail.com
3,4 Assistant Professor, Mazoon University College, Muscat, Email: abbasasfaq@gmail.com, m.s.saleembasha@gmail.com
Abstract - Wireless Sensor Networks (WSNs) are an emerging field of research with a large number of applications and associated constraints such as minimum energy consumption, maximum residual power, scalability, and reliability. Node deployment and diversification of routes are the two major problems in WSNs. Broadcasting from a single node to all the other nodes obviously reduces the maximum power consumption, but it results in node failure, since the maximum amount of energy is used by a single node. Absence of diversification of solutions results in local optima and premature convergence without traversing the majority of the search-space combinations. Important factors in WSNs are node deployment, node coverage, minimum energy consumption, and congestion control. In this paper, we propose an efficient hybrid model for enhancing the diversification of the solution space and improving the efficiency of node deployment. Combinations of bio-inspired algorithms are used to resolve these issues, which results in efficient node coverage and minimum energy consumption for data transfer. To resolve these issues in WSNs, instances of Minimum Energy Broadcasting (MEB) are taken into account, with a large number of instances. Since all bio-inspired algorithms work under the no-free-lunch theorem, large instances of datasets degrade the performance of WSNs. Our algorithm is designed to provide an optimal solution for such large instances. We also show how our proposed model can handle routes, transmission delay, etc., without facing problems such as excessive power consumption.
Keywords: Wireless Sensor Networks (WSN), Minimum Energy Broadcasting (MEB), Ant Colony Optimization (ACO), Hybrid.

I. INTRODUCTION
... to reach the sink, there might be congestion in the network due to traffic, or some natural noise such as a storm; in order to avoid these problems, the process should be carried out in a different manner to achieve better results. One way of resolving this issue is route diversification [15]. This results in a more efficient network lifetime and less risk of network failure. Nowadays, transportation of data plays a vital role in Wireless Sensor Networks, and there exists the problem of redundant data sent to the same node, which causes a bottleneck, makes that node spend more energy, and causes node failure. Multi-hop routing is one of the best ways to handle these kinds of problems. Our model achieves two different aims: the first is exploration of the search space with two different algorithms, and the second is an efficient way to deploy the nodes.

II. MAJOR ISSUES IN WIRELESS SENSOR


NETWORKS
Node deployment and diversification of search in the
search space are the two main issues in WSN (Wireless
Sensor Network). Optimization algorithms [5] which are
heuristic and bio inspired are some of the approaches to
solve these issues. Unfortunately, many large scale
instances are not been solved due to minimal
diversification in the algorithms and premature
convergence which results in local optima. The problem
is to identify where the diversification of search space
[15] gets failed in large scale problems and provide better
approach to resolve it. The results will give better results
in terms of minimum energy consumption, maximal
residual energy in nodes, multi path route discovery etc.

I. INTRODUCTION
The networks which holds the capability to interface
with this real world which is physical in nature with the
virtual world in a vast manner and provides reasonable
uses and causes for developing application in large
number which results in Internet of Things, sixth sense
technology, habitat monitoring, sensor based agriculture,
etc,.Though it gives enormous benefits, it results in
challenge in terms of deployment. Recent advances in
communication and computing Wireless Sensor Network
gathered high range of attention in terms of research
oriented proposals. This inexpensive deployment of
sensors makes the researchers to deploy it and uses nature
as their test bed to prove the efficiency of their proposed
work. At first, this WSN has been designed and deployed
for military purposes for sensing and reporting of climate
and physical changes in their target area. Later due to its
advanced techniques, it has been deployed throughout the
country for commercial and personal purposes.
The main problem occur during the node
deployment and while the communication between nodes
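The Minimum Energy Broadcasting objective that the survey targets can be sketched numerically. Under the wireless multicast advantage, one transmission at the power needed to reach a node's farthest child covers all of its children, so the cost of a broadcast tree is the sum of per-transmitter powers. The node coordinates, the tree, and the path-loss exponent alpha = 2 below are hypothetical values chosen only for illustration, not data from any of the surveyed papers.

```python
# Minimal sketch of the MEB cost of one broadcast tree (illustrative data).
ALPHA = 2  # assumed path-loss exponent; power to reach distance d is d**ALPHA

nodes = {"s": (0, 0), "a": (1, 0), "b": (0, 2), "c": (3, 2)}
# Broadcast tree as parent -> children, rooted at the source "s".
tree = {"s": ["a", "b"], "b": ["c"]}

def tx_power(parent, children):
    """Wireless multicast advantage: one transmission at the power needed
    to reach the farthest child covers every child of this parent."""
    px, py = nodes[parent]
    return max((nodes[c][0] - px) ** 2 + (nodes[c][1] - py) ** 2
               for c in children) ** (ALPHA / 2)

def tree_energy(tree):
    # Total energy is the sum over transmitting nodes; leaves cost nothing.
    return sum(tx_power(p, ch) for p, ch in tree.items())

print(tree_energy(tree))  # → 13.0
```

Minimizing this sum over all spanning broadcast trees is the combinatorial problem the bio-inspired algorithms in the survey attack.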

III. RELATED INNOVATIVE WORKS - A SURVEY

Ant colony optimization with greedy migration mechanism for node deployment in wireless sensor networks [1]
A. Theme
In this paper, the authors propose an efficient algorithm called ACO-Greedy (Ant Colony Optimization-Greedy) [18] for node deployment in wireless sensor networks.
ACO-Greedy solves the Grid-based Coverage with Low-cost and Connectivity-guarantee (GCLC) problem with the help of an ant-pheromone strategy and a non-uniform sensing/communication radius design.
Experimental results of the ACO-Greedy [12] algorithm are compared with the EasiDesign and ACO-TCAT algorithms in terms of average coverage cost, energy consumption, and ratio of surviving nodes, and show improved results.
ACO-Greedy works on four strategies:
o Object point selection strategy: choosing the next point at which to deploy a node, based on pheromone intensity and heuristic probability.
o Pheromone constraining strategy: to avoid stagnation, the pheromone rate is constrained not in every iteration but periodically.
o Non-uniform sensing/communication radius design: to solve the energy-hole problem, a maximum sensing radius value is computed and used.
o Ants' greedy migration scheme: to cover at least one ECP (Effective Candidate Point) per node, ant migration is combined with a greedy technique.

B. Advantages
Because of the greedy method, it converges quickly, which increases computational speed.
The algorithm is computationally effective, since a minimum number of nodes are deployed.
The approach decreases deployment cost and raises coverage speed with the help of ECPs.
Due to low energy consumption, network lifetime is increased.

C. Disadvantages
The greedy method converges quickly, which is effective for small instances; for large instances, however, early convergence means many combinations are never tried.
Since nodes are deployed with respect to coverage of ECPs and PoIs, communication-range collisions go unnoticed.
All PoIs are predefined, which is not practical in WSN, since the network is dynamic in nature.
The experiments were conducted in a plane environment without obstacles.

D. Techniques to overcome: (Abstract of the Proposal)
ACO has minimal exploration capability, and this hybrid algorithm follows a highly exploitative methodology (even in the greedy search). An algorithm that gives better exploration of the search space for node deployment will improve the effectiveness of the network.
Imposing a collision avoidance or traffic control mechanism can improve the lifetime of the network, since the deployment for covering the nodes is not collision-avoidance based.
A suitable concept should be included to handle dynamic PoIs.
An algorithm that works well under heavy traffic has to be introduced.

E. Objectives
To develop an efficient algorithm that provides maximum exploration of the search space for node deployment in Wireless Sensor Networks.
To improve the lifetime of the network and to avoid collisions without affecting the performance of the proposed algorithm.
To develop an efficient algorithm that can handle dynamic PoIs and work well under heavy traffic.

Optimization deployment of wireless sensor networks based on culture-ant colony algorithm [2]
A. Theme
Node deployment is designed with a Culture-Ant Colony Algorithm (CA-ACO), in which the Culture algorithm provides a dual evolution mechanism and ACO provides the pheromone strategy for choosing the next optimal place to deploy a node.
A convergence-judging method is introduced to avoid premature convergence and achieve global optimization.
Tabu search keeps track of unvisited positions so that they can be used as deployment areas for nodes covering the search space.
Double evolution, i.e., acceptance and influence operations, is used either to choose the best individual or, failing that, to discard it and replace it with a new individual.
The experimental results of CA-ACO are computed and compared with the EasiDesign and ACO-TCAT [15] algorithms in three aspects:
1. Optimal capacity analysis.
2. Search speed analysis.
3. Stability analysis.

B. Advantages
Diversity has been included to some extent with the help of dual evolution.
Provides optimal solutions for sparse environments.
Guarantees overall coverage of the search space, and robustness of data connectivity is achieved.
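Both deployment schemes above rest on the standard ACO selection rule: the next candidate point is drawn with probability proportional to tau^alpha * eta^beta, combining pheromone intensity tau with a heuristic value eta (e.g., expected new coverage). A minimal sketch follows; the candidate names, pheromone and heuristic values, and exponents are all invented for illustration and are not taken from either paper.

```python
import random

# Hypothetical candidate grid points: name -> (pheromone tau, heuristic eta).
candidates = {"p1": (1.0, 0.5), "p2": (2.0, 1.0), "p3": (0.5, 2.0)}
ALPHA, BETA = 1.0, 2.0  # usual ACO weighting exponents (assumed values)

def select_point(cands, rng=random.random):
    # Probability of each point is proportional to tau**ALPHA * eta**BETA
    # (roulette-wheel selection).
    weights = {p: tau ** ALPHA * eta ** BETA for p, (tau, eta) in cands.items()}
    r = rng() * sum(weights.values())
    for p, w in weights.items():
        r -= w
        if r <= 0:
            return p
    return p  # numerical fallback for rounding at the upper edge

print(select_point(candidates, rng=lambda: 0.0))  # → p1
```

Passing a fixed `rng` makes the draw deterministic for testing; in a real run the default `random.random` gives the stochastic exploration the algorithms rely on.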

C. Disadvantages
The number of deployed nodes is not treated as a serious issue, which results in high deployment cost.
Absence of the concept of data transmission in dense regions.
Power-saving mode has not been considered.
Barrier coverage has not been taken into account.

D. Techniques to overcome
Hop-by-hop retransmission, where each link provides reliable forwarding to the next hop using localized packet transmission, can reduce the number of nodes deployed. This improves network lifetime by saving transmission-node energy and making the spared energy available to other nodes in the network.
An efficient clustering algorithm can be imposed to serve dense regions, grouping the nodes and serving the data.
Beacon intervals between nodes save node power and improve network lifetime.
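The hop-by-hop retransmission idea can be sketched as a per-link retry loop: each hop is retried locally until an acknowledgement arrives or a retry budget runs out, rather than restarting end to end. The link model and retry budget below are hypothetical.

```python
# Sketch of hop-by-hop reliability with localized retries (illustrative).
MAX_RETRIES = 3  # assumed per-hop retry budget

def send_hop(link_ok, attempts_log):
    """Try one hop up to MAX_RETRIES times; link_ok() simulates an ACK."""
    for attempt in range(1, MAX_RETRIES + 1):
        attempts_log.append(attempt)
        if link_ok():
            return True
    return False

def forward(path_links):
    """Forward a packet along a path of links, retrying each hop locally."""
    log = []
    for link_ok in path_links:
        if not send_hop(link_ok, log):
            return False, log  # hop failed even after local retries
    return True, log

# A 2-hop path where the first link drops the packet once before succeeding.
flaky = iter([False, True])
links = [lambda: next(flaky), lambda: True]
ok, log = forward(links)
print(ok, log)  # → True [1, 2, 1]
```

The point of the scheme is visible in the log: the loss on the first link costs one extra local transmission instead of a full source-to-sink retransmission.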

E. Objectives
To develop an effective multi-hop packet transmission routing protocol, using an optimization algorithm to improve network lifetime.
To develop an optimization algorithm for node clustering to provide reliability in WSN, and a protocol for handling beacon intervals to enhance network lifetime.
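The tabu-search component used in CA-ACO's theme (keeping recently tried positions out of consideration so the search keeps exploring new regions) can be sketched with a bounded memory. The grid positions and memory length here are illustrative, not the paper's parameters.

```python
from collections import deque

TABU_LEN = 3  # assumed tabu-memory length

def next_position(candidates, tabu):
    # Pick the first candidate not in the tabu memory; if everything is
    # tabu, fall back to the first candidate (aspiration-free simplification).
    for c in candidates:
        if c not in tabu:
            tabu.append(c)  # deque with maxlen evicts the oldest entry
            return c
    return candidates[0]

tabu = deque(maxlen=TABU_LEN)
grid = [(0, 0), (0, 1), (1, 0), (1, 1)]
visits = [next_position(grid, tabu) for _ in range(4)]
print(visits)  # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Because the memory is bounded, old positions eventually become eligible again, which is what lets tabu search diversify without permanently excluding parts of the grid.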

A multi-objective routing algorithm for Wireless Multimedia Sensor Networks [3]
A. Theme
This paper presents an evolutionary algorithm, the Strength Pareto Evolutionary Algorithm (SPEA), for optimizing routing in WSN with respect to multiple objectives.
It addresses the QoS requirements needed for energy efficiency, such as delay, together with the Expected Transmission Count (ETX) [10], a metric estimating the expected total number of transmissions (including retransmissions) required to deliver a packet to the destination node successfully.
The goal of multi-objective optimization algorithms is to produce a diverse set of optimal solutions that the user can draw on while evaluating trade-offs between different objectives.
The SPEA algorithm is executed after all constraints are applied to the problem; the individuals representing solutions are operated on with crossover and mutation, and then a value of Pt is calculated.
A clustering technique is used to reduce the population size.
Experimental results were compared with two algorithms in terms of:
1. ETX
2. End-to-end delay

B. Advantages
Comprises multi-objective functions that focus on end-to-end delay as well as ETX at the same time, and provides optimized results when compared with other existing algorithms.
The solutions are global as well as close to the Pareto-optimal set.
The solutions become more diverse between iterations in the SPEA population generation, which is obtained as a non-dominated set.

C. Disadvantages
QoS parameters such as throughput, availability of routes, and minimum energy consumption are not considered as objectives.
The chosen source nodes of an individual are considered to be in a direct link, which reduces multi-hop behaviour in WSN.
The scenarios considered for experimentation are static, while WSN is a dynamic problem.

D. Techniques to overcome
Considering throughput, availability, and minimum energy consumption along with end-to-end delay and ETX adds complexity to the algorithms. A swarm-intelligence-based algorithm that works in a cooperative/collaborative manner can address all dimensions of the routing problem.
Setting the source node as a direct neighbour of the adjacent nodes yields the global optimum; otherwise the solution falls into a local optimum. A complete global-search optimization algorithm with a high concentration on exploration can reach the global optimum without placing the source node as a neighbour node.
Using dynamic scenarios for testing routing algorithms in WSN would be appreciable. Node mobility should be introduced to address the dynamic aspect. Delay Tolerant Networks, which store, carry, and forward data, can be used to introduce the concept of node mobility.

E. Objectives
To introduce an efficient optimization algorithm that handles multi-objective problems to improve the QoS parameters.
To incorporate diversification behaviour that gives higher exploration by placing different nodes as the source node.
To develop a modified bio-inspired algorithm to handle Delay Tolerant Networks effectively.

Memetic algorithm for minimum energy broadcast problem in wireless ad hoc networks [4]
A. Theme
This paper introduces a memetic algorithm (MA) for the minimum energy broadcasting problem in WSN.
It considers the antennas to be omnidirectional (broadcasting messages through 360 degrees), serving all nodes in a single transmission.
The Wireless Multicast Advantage (WMA) [12] is used: each node within the transmission range of the sender in the wireless network can receive the broadcast without any additional cost to the sender.
The algorithm works in the following flow:
1. Initial population: a decoder method is used to generate the population tree.
2. Knowledge-added mutation: the drawback of the decoder is addressed in this mutation operation.
3. r-shrink procedure: to improve the placement of the second node, the r-shrink procedure is used.
Experimental results of the memetic-based MEB [4] are compared with GA, Iterated Local Search (ILS), Nested Partitioning (NP), and Evolutionary Local Search (ELS), and show that MA works efficiently on many of the MEB instances.

B. Advantages
The local search in the memetic algorithm works efficiently for MEB instances with small sets of nodes.
Exploitation of solutions works well and provides a good convergence rate.
Avoiding crossover and mutation improves computational time.

C. Disadvantages
For large instances, MA takes much higher computational time.
There is a possibility of getting into local optima.
Exploration of multiple combinations fails.

D. Techniques to overcome
The performance measures show that MA works nominally when the instances consist of 50 nodes, but MEB has instances of up to 200 nodes; for such large instances the memetic algorithm takes high computational time and provides below-average results.
Since the improved solution keeps the same source node as in the last iteration, the number of combinations reduces to (n-2)!. Changing the source node in every iteration improves the solution.
Exploration is missed due to premature convergence. Highly explorative algorithms may increase the diversification of the individuals.

E. Objectives
To modify a bio-inspired algorithm with high coordination behaviour to handle large instances of MEB.
To introduce the concepts of a dynamic source node and exploration of the search space in every iteration, allowing a higher number of combinations to be examined.

Adaptive routing in wireless sensor networks: QoS optimisation for enhanced application performance [5]
A. Theme
A new cluster-based Route Optimization and Load-balancing protocol (ROL) is proposed, which satisfies various QoS metrics.
The optimization provides QoS goals such as prolonging network life, timely message delivery, and improved network robustness.
It uses a combination of routing metrics that can be configured according to the priorities of user-level applications to improve overall network performance.
For load balancing, a new network-flow-based Distributed Clustering (NDC) algorithm is proposed.
ROL provides high performance in all QoS aspects when compared with LEACH and Mires++.
ROL improves robustness by providing multiple paths to the Cluster Head (CH) and by electing a CH backup node.
Metrics such as data delivery ratio, timeliness, and energy efficiency are compared with the LEACH and Mires++ protocols and show higher performance.

B. Advantages
ROL provides load balancing at both network and cluster level, which improves robustness.
ROL provides a back-off for minimizing cluster-setup traffic, backup CHs that avoid re-clustering, multiple paths that provide fault tolerance, and NDC to increase lifetime.
Cluster formation makes ROL energy- and computationally efficient.
Highly flexible, since NDC allows any clustering algorithm to form the clusters.

C. Disadvantages
The protocol can handle only fixed-size messages. Since WSN message sizes are not predefined, this is awkward to handle in a runtime environment using ROL.
High maintenance is needed to create the CH backup and to keep the CH and its backup up to date.

D. Techniques to overcome
The concept of handling variable-sized data should be incorporated to avoid stagnation or network route failure. An algorithm that handles variable-sized messages, well suited to WSN routing problems, needs to be incorporated.
CH backups impose the extra burden of maintaining the CH backup database in order to avoid network failure. An alternative procedure for handling CH failure can be imposed to enhance runtime.

E. Objectives
To develop an optimization algorithm, or to impose a bio-inspired algorithm, to handle messages of variable sizes.
To introduce an efficient procedure for handling CH backup to avoid delay in routing.
Optimization is needed in the implementation to reduce processing overhead time.
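Two notions that recur across the surveyed papers, the ETX metric of [3] and Pareto dominance between candidate routes, can be sketched together. ETX of a link is 1/(df * dr), the expected number of transmissions given forward and reverse delivery ratios, and a route's ETX is the sum over its links; a non-dominated filter then keeps the routes no other route beats on every objective. The delivery ratios and route values below are invented for illustration.

```python
def link_etx(df, dr):
    # Expected transmissions on one link from forward/reverse delivery ratios.
    return 1.0 / (df * dr)

def dominates(a, b):
    """a dominates b if it is no worse in every objective (lower is better)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (route ETX, end-to-end delay in ms) for three hypothetical routes.
routes = [(2.5, 40.0), (3.0, 20.0), (3.5, 45.0)]
print(pareto_front(routes))  # → [(2.5, 40.0), (3.0, 20.0)]
```

The third route is dominated by the first (worse on both objectives), so the front the user chooses from contains only the genuine trade-offs.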

IV. SIGNIFICANCE OF FUTURE WORK


Multipath discovery provides a high possibility of data transmission in WSN; it can mitigate link failure by finding another optimal path.
A node or link failure can be tolerated without affecting the entire WSN environment, improving fault tolerance.
Use of multi-hop transmission can increase the robustness of the WSN.
Multipath discovery reduces the time delay of packet transmission in the case of link or node failures.
Maximal residual energy among nodes is maintained thanks to multi-hop data transmission.
Network lifetime can be increased by consuming less energy when broadcasting messages to the desired nodes in the network.
Optimizing node deployment increases node coverage efficiently.
Use of local information can reduce the bottleneck problem in the WSN.
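The multipath failover idea can be sketched as trying precomputed paths in order and skipping any path with a broken link. The topology, path set, and link states below are hypothetical.

```python
# Sketch of multipath failover over precomputed paths (illustrative data).
def path_up(path, down_links):
    # A path is usable only if every consecutive link on it is up.
    links = list(zip(path, path[1:]))
    return all(l not in down_links for l in links)

def pick_route(paths, down_links):
    for path in paths:
        if path_up(path, down_links):
            return path
    return None  # no surviving path: destination unreachable

# Two node-disjoint paths from source "s" to sink "t".
paths = [["s", "a", "t"], ["s", "b", "c", "t"]]
print(pick_route(paths, down_links={("a", "t")}))  # → ['s', 'b', 'c', 't']
```

Because the backup path shares no links with the primary, the failure of link (a, t) costs only a route switch rather than a rediscovery, which is the fault-tolerance gain claimed above.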


V. CONCLUSIONS AND DISCUSSIONS


An efficient hybrid model has been proposed for enhancing the diversification of the solution space and improving the efficiency of node deployment. Combinations of bio-inspired algorithms are used to resolve these issues, resulting in efficient node coverage and minimum energy consumption for data transfer. The proposed model achieves its results through two themes: exploration of the search space with two different algorithms, and an efficient way to deploy the nodes. The evaluation of the WSN is carried out with Network Simulator 2. The algorithm for the proposed hybrid model will be tested on large-scale MEB instances, and its performance will be evaluated in terms of congestion control, fault tolerance, energy consumption, and robustness.


REFERENCES
[1] Xuxun Liu, Desi He, "Ant colony optimization with greedy migration mechanism for node deployment in wireless sensor networks", Journal of Network and Computer Applications 39 (2014), Elsevier.
[2] Xuemei Sun, Yiming Zhang, Xu Ren, Ke Chen, "Optimization deployment of wireless sensor networks based on culture-ant colony algorithm", Applied Mathematics and Computation 250 (2015), Elsevier.
[3] Narcio Magaia, Nuno Horta, Rui Neves, Paulo Rogerio Pereira, Miguel Correia, "A multi-objective routing algorithm for Wireless Multimedia Sensor Networks", Applied Soft Computing 30 (2015), Elsevier.
[4] D. Arivudainambi, D. Rekha, "Memetic algorithm for minimum energy broadcast problem in wireless ad hoc networks", Swarm and Evolutionary Computation 12 (2013), Elsevier.
[5] Mohammad Hammoudeh, Robert Newman, "Adaptive routing in wireless sensor networks: QoS optimisation for enhanced application performance", Information Fusion 22 (2015), Elsevier.
[6] K. Salem, N. Fisal, S. Hafizah, S. Kamilah, R. A. Rashid, "A Self-Optimized Multipath Routing Protocol for Wireless Sensor Networks", International Journal of Recent Trends in Engineering, Vol. 2, No. 1, November 2009.
[7] Xin-She Yang, Mehmet Karamanoglu, Xingshi He, "Multi-objective Flower Algorithm for Optimization", International Conference on Computational Science, ICCS 2013.
[8] K. Vijayalakshmi, S. Radhakrishnan, "Artificial immune based hybrid GA for QoS based multicast routing in large scale networks", Computer Communications 31 (2008).
[9] Esther M. Arkin, Alon Efrat, Joseph S. B. Mitchell, Valentin Polishchuk, Srinivasan Ramasubramanian, Swaminathan Sankararaman, Javad Taheri, "Data transmission and base-station placement for optimizing the lifetime of wireless sensor networks", Ad Hoc Networks 12 (2014).
[10] Hugo Hernandez, Christian Blum, "Minimum energy broadcasting in wireless sensor networks: An ant colony optimization approach for a realistic antenna model", Applied Soft Computing 11 (2011).
[11] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, "A survey on sensor networks", IEEE Communications Magazine 40 (2002) 102-114.
[12] J. L. Bredin, E. D. Demaine, M. T. Hajiaghayi, D. Rus, "Deploying sensor networks with guaranteed fault tolerance", IEEE/ACM Transactions on Networking 18 (2010) 216-228.
[13] S. S. Dhillon, K. Chakrabarty, "Sensor placement for effective coverage and surveillance in distributed sensor networks", 2003 IEEE Wireless Communications and Networking Conference Record, 2003, pp. 1609-1614.
[14] C. F. Huang, Y. C. Tseng, "A survey of solutions to the coverage problems in wireless sensor networks", Journal of Internet Technology 6 (2005) 1-8.
[15] L. P. Liu, F. Xia, Z. Wang, J. M. Chen, Y. X. Sun, "Deployment issues in wireless sensor networks: mobile ad-hoc and sensor networks", Lecture Notes in Computer Science 3794 (2005) 239-248.
[16] Mohamed Youssef, Naser El-Sheimy, "Wireless sensor network: research vs. reality design and deployment issues", Fifth Annual Conference on Communication Networks and Services Research, 2007, pp. 8-9.
[17] S. S. Dhillon, K. Chakrabarty, S. S. Iyengar, "Sensor placement for grid coverage under imprecise detections", Fifth International Conference on Information Fusion, 2002, pp. 1581-1587.
[18] W. Liu, L. Cui, "Ant based approach to the optimal deployment in wireless sensor networks", Journal on Communications 30(10) (2009).
[19] X. X. Liu, "Sensor deployment of wireless sensor networks based on ant colony optimization with three classes of ant transitions", IEEE Communications Letters 16 (2012) 1604-1607.

AN ANALYSIS OF USER SATISFACTION OF MOBILE BANKING APPLICATION IN OMAN
*Faiza Al Balushi, **Dr. Ashish
IT Department, Higher College of Technology, Muscat, Oman

ABSTRACT
Sultanate into the constantly evolving spheres for
applying knowledge.(H.M Sultan Qaboos ,2008) .

Information and communication technology


(ICT) has become the important aspect of each
and every business organization including
banking sector. In Oman the banking sector
plays an important role in the establishment and
development of business, so the banking sector
should use advanced Information and
Communication Technology (ICT) to improve
the banking services. Mobile banking is one of
modern application that has been established
using ICT through which customers can get
better banking services. The purpose of this
paper is to examine the impact of mobile
banking in service delivery in Oman banks. This
research is conducted at Higher College of
Technology, Muscat, and Sultanate of Oman in
the year 2015-16.

Oman has set out it is looking to change the Sultanate


by harnessing information and communication
technology (ICT), and to enable its people to meet
the challenges of the global economy. According to a
World Bank report, the use of ICT in public sector is
increased significantly in Oman and the percentage of
improvement in Sultanate is even better than the
world average improvement rate. Moreover,
according to the latest sector analysis and
performance indicators used to monitor the sectors
show continuous improvement of the banking sector.
Because of an ICT, banks could develop and update
their Hardware and Software to deal with large
amount of data related to the banks customer. The
Sultanate of Oman has won five awards at the GCC
e-Government Award, Conference and Exhibition
2015, which is taking place in Bahrain. The National
Center for Statistics and Information won the Best eGovernment Website among GCC countries. The
Public Authority for Consumer Protection (PACP)
won the Best Practice in Community e-Participation
while Al- Raffd Fund (Sanad Program) won the Best
e-Government
Integrated
Services
Individual/Business Sector award. The Royal Oman
Police (ROP) has been awarded the Best Whole-ofGovernment National e-Project. The fifth award for
the category of the Best Government Service for
Business Sector went to the Ministry of Commerce
and Industry (MOCI) on its Invest easy' project.

Keywords
ICT, Electronic banking, Mobile banking,
banking sector, Higher College of Technology
INTRODUCTION
In one of the speeches of his Majesty Sultan Qaboos
in ICT field he stated that: Information technology
and communications have now become the main
elements that move forward the development process
in this third millennium; therefore, we have accorded
our attention to finding a national strategy to develop
the skills and abilities of citizens in this domain with
the aim of further developing e-government services.
We are closely following the important steps that we
have made in this regard. We call upon on all
government institutions to speedily enhance their
performance, and to facilitate their services, by
applying digital technology in order to usher the

In Oman almost all the banks offer some type of Mbanking. However, a percentage of the facilitated
methodology must be embraced by the
administrations and interchanges suppliers and
budgetary organizations, with the goal that they can
execute the M- banking administrations. However,

IT Department, Higher College of Technology, Oman, Muscat

*16J121388@stu.hct.edu.om,
**ashish.rastogi@hct.edu.om (corresponding author)

17

assert whether the social effect is the most influential


figure today particular objective to use compact
dealing with a record.

bank X has great significance in providing services


and the completion of transactions electronically and
over the past period many number of electronic
services , which had been admired public put
forward, including the provision of banking services
via the mobile phone which known as M-Banking. It
is a term used to refer to systems that allows
customers of banks to conduct a number of financial
transactions through a mobile device such as a
mobile phone or tablet.

Aduda and Kingo (2010)


The researchers focused on the Factors affecting
individual to adopt mobile banking. The result
indicated that bank performance (measured by return
on assets) are explained by independent variable the
e-banking measured by Investments in e-banking and
number of debits cards issued to customers. This
indicates E-banking has strong and significance
marginal effects on returns on asset in the banking
industry. Thus, there exists positive relationship
between e-banking and bank performance. EBanking also has a strong positive relationship on the
overall banking performance by making workers
performance more effective and efficiency; The
application of information and communication
technology concepts, techniques, policies and
implementation strategies to banking services has
become a subject of fundamental importance and
concerns to all banks and indeed a prerequisite for
local and global competitiveness banking [6].

LITERATURE REVIEW
The study was aimed to verify factors influencing
continuance intentions of the early adopters of MBanking services. An elaborative review of existing
and available literature in this context was conducted
to narrow down the research area and to clearly
define the research objective. Majority of studies
about intention to adopt were conducted based on
research models and frameworks traditionally used
within the information system literature. Among
the different models that have been proposed, the
Technology Acceptance Model (TAM) [ 1 1 ] ,
adapted from the Theory of Reasoned Action
(TRA) [ 2] and its variations were widely used by
various scholars for explaining technology adoption
intentions [ 1 2 ] [ 2 0 ] [ 1 7 ] [13][7].

Naqvi and Al-Shihi (2014)


In one of their studies they found that Sultanate of
Oman is continuously taking many initiatives to
promote Oman to e-Oman by adopting the country's
digital strategy based on modern technology. Their
study is an attempt towards understanding and high
lighting the issues related to the adoption of Mcommerce services recently offered in Oman. Their
studies showed that majority of the mobile users have
easy access to these services and had positive
attitudes towards them. They felt that the services
were useful, easy to use and friendly. The results
indicated that besides users' positive attitudes not
many users were willing to adopt the services
offered. The users had shown their concerns on issues
related to business transaction security, privacy and
the most important is the trust between the users and
services offered. The finding suggests that all
positive attitudinal attributes such as speed, user
friendly, easy to use and usefulness lagged far behind
and the factors like security, privacy and trust

Research report (n.d)


After long time of research and collecting
information from different people, it is found that
only 21.8% of respondents used M-Banking, this
study just identifies with a starting stage for looking
at huge components influencing people desire to
grasp compact dealing with a record and certified
behavior of using M-Banking. Particularly in light of
the fact that adaptable advancement has immediately
advanced and the union of such advances and
budgetary organizations has progressed after sooner
or later, more research on versatile sparing cash
gathering is key. In this way, making the disclosures
needs to be vigilant, notwithstanding the way that the
observational revelations from this study may offer
productive bits of data to propel M-Banking and even
diverse remote business or fiscal organizations. From
this time forward, coordinating studies in distinctive
countries in Europe and America are imperative to

18

prevails affecting on the swift adoption of Mcommerce services offered in Oman. According to
the researchers there is a need to educate and create
more awareness among the users on the services
offered and address their concerns.

as well as the lack of awareness and understanding of


the benefits. Therefore, Lack of understanding of
mobile banking benefits is a reason for lake of
customers unwilling. They suggest that banker
advertising should focus on the novel aspect for
mobile banking. Willingness of customers to use
electronic banking, which is a banks desirable goal,
increases when, access to modern banking is more,
customers understanding of electronic banking,
dependence of modern banking on electronic network
and customers imagery about the benefits of using ebanking are better [28].

Adewoye, J. O (2013)
The mobile banking services provided by commercial
banks in Nigeria generally cover information-push
where customers can access banking information and
make transaction such as Account information,
Payments, transfers and Investments using mobile
phone as terminal. The results of his findings and the
hypotheses showed that Mobile banking improve
banks service delivery in a form of transactional
convenience, saving of time, quick transaction alert
and cost saving. Also the introduction of electronic
payment products such as M-banking, ATM, Internet,
etc. has increased the level of economic activities. it
also reveal that commercial banks in Nigeria that
have implemented mobile banking are chalking-up
some successes even with the problems that come
with it. These challenges include network problem
and Security which are major contributory factors
that hinder the effectiveness of mobile banking
service in the Nigeria banking sector. Finally their
research finding indicates that mobile banking
positively influence service delivery of commercial
banks in Nigeria.

The Technology Acceptance Model (TAM)


The Technology Acceptance Model (TAM) [11] has been
the foundation of much technology adoption and
diffusion research, and it is rooted in the Theory of
Reasoned Action (TRA). As per TAM, the two
important determinants of actual use of technology
are: perceived ease of use, defined as "the degree
to which a person believes that using a particular
system would be free of effort", and perceived
usefulness, defined as "the degree to which a person
believes that using a particular system would enhance
his or her performance". The original presentation of
TAM [11] is shown in Fig. 1.

Amiri Aghdaie, S. F., & Faghani, F. (2012)
The researchers studied customers' perception of
mobile banking quality. They found that an increase
in the service quality of mobile banking can develop
customer satisfaction, which ultimately retains
valued customers. The research generally supports the
results of previous research. [26] examined the
influence of service quality on customer satisfaction
in the banking industry. [16] proposed that
responsiveness, assurance, security and ease of use
are the factors affecting customer satisfaction in
e-banking. [19] showed that the factors affecting
customer intention to use mobile banking services
are, in order of importance, personal innovativeness,
task-fit, connectivity, absorptive capacity and
monetary value. [18] identified the main barriers to
mobile banking

Fig. 1 The principal scheme of the original Technology Acceptance Model
TAM was developed to explain and predict particular
IT usage. However, the model has been used by many
researchers in studying the adoption and diffusion of
various IS technologies.

METHODOLOGY
To meet the aim of this research paper, questionnaires
were distributed to people of different nationalities
and ages. The sample population for the study is the
students and staff of the Higher College of Technology.
The questionnaire includes 22 questions in the form of
construct measures.

H1: There exists a significant relation between Perceived Usefulness and satisfaction in the M-Banking context.
H2: There exists a significant relation between Perceived Ease of Use and satisfaction in the M-Banking context.
H3: There exists a significant relation between Perceived Service Quality and satisfaction in the M-Banking context.
H4: There exists a significant relation between Perceived Credibility and satisfaction in the M-Banking context.

The primary data was collected with the help of survey
questionnaires. We distributed them to people of
different nationalities and ages during the period
June 2015 to December 2015.
Secondary data was also needed to accomplish the
research; it was collected from different online
journals, newspapers, magazines, etc.
The M-Banking process has to depend on the global
network provided by various communication channel
providers for offering services in a personalized
manner to customers. The use of such an open public
network creates scope for security concerns about
the ability of the banks to securely store and
protect customers' privacy and monetary information
from hackers [24]. Privacy concerns are due to the
widespread presumption that while using a global
communication channel, leakage of personal
information and disclosure to third parties are
possible. Various researchers have noticed the
significant relation between security concerns and
intentions to use in online contexts [8] [14] [15].
When the security and privacy concerns of the
customer are properly attended to, credibility is
achieved in the banking system [1].

Fig. 2 Theoretical Technology Acceptance Model

RESULT AND DISCUSSION

Below are the responses collected from staff of the
Higher College of Technology with the help of the
questionnaire.
The participants in the survey were 53% Omani staff
and 47% non-Omani teachers and staff of the Higher
College of Technology. Approximately 61% of the
respondents were aged 30-40, while 22% were more than
40 years old and 17% were less than 30 years.

The above constructs are assumed to develop
favorable intentions in customers to adopt mobile
banking services. The transformation from the
intention stage to the adoption stage largely depends
on the feeling of satisfaction generated by the
initial trial. The user's experience of the service
forms a major yardstick to evaluate satisfaction with
the usage of a technical service offer [27]. User
satisfaction is the result of the subjective sum of
the interactive experience the customer has with the
initial trial of the features of the service offer.
In light of the above observations, the hypotheses
H1-H4 stated above are proposed in this study.

[Chart: Nationality of respondents — Omani 53%, non-Omani 47%. Age — under 30: 17%, 30-40: 61%, over 40: 22%]


The gender split of the survey participants was male
(53%) and female (47%). All of them are internet
users. The percentages of bank customers who
participated in the survey were 86% Bank A users and
11% Bank B users; finally, 3% were Bank C users.

[Chart: Gender — male 53%, female 47%. Internet users — yes 100%. Bank — A 86%, B 11%, C 3%]

Instrument Reliability
The reliability analysis was conducted to check the
internal validity and consistency of the items used
for each factor, using SPSS as the analysis tool. The
results of the reliability analysis are presented in
Table 1. According to [23], the questionnaire items
for the various factors of mobile banking applications
were judged to be a reliable measurement instrument,
as the Cronbach's alpha scores were all above 0.8.

Table 1: Reliability Analysis

From the above analysis it is observed that the
instrument used for data collection is reliable.
Table 2 shows the correlation coefficients between
the factors and investigates the hypotheses of the
research model. The analysis tool used for
calculating the coefficients is SPSS. The table shows
that the correlations between the factors are
positive.

Table 2: Correlation Analysis
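As a rough illustration of the reliability check reported in Table 1, Cronbach's alpha for one factor can be computed from a respondents-by-items score matrix. The formula is the standard one; the `pu_items` data below are hypothetical Likert responses, not the study's actual data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) Likert score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items in the factor
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses for one factor (e.g. PU): 6 respondents x 4 items
pu_items = [[5, 4, 5, 5],
            [4, 4, 4, 5],
            [2, 2, 3, 2],
            [5, 5, 5, 4],
            [3, 3, 2, 3],
            [4, 5, 4, 4]]
print(round(cronbach_alpha(pu_items), 3))
```

Scores above 0.8, as reported in the study, are conventionally read as good internal consistency [23].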


It is observed from Table 4 that the value of R-square
indicates that the predictors PU, PEU, PSQ and PC
explain 83.9% of the variation in user satisfaction,
which means that this model is a rational model. It
is also observed from the model summary obtained from
SPSS that there are some factors that may have a
negative impact.
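A regression of this shape can be sketched with an ordinary least-squares fit. The data below are synthetic and only echo the reported pattern (strong PU and PC effects, negligible PEU and PSQ); the printed coefficients and R-square are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical number of respondents

# Hypothetical standardized construct scores: columns are PU, PEU, PSQ, PC
X = rng.normal(size=(n, 4))
# Satisfaction driven mainly by PU and PC, echoing the reported pattern
y = 0.9 * X[:, 0] + 0.4 * X[:, 3] + rng.normal(scale=0.4, size=n)

# Ordinary least squares: add an intercept column and solve
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

residuals = y - A @ coef
r_squared = 1 - residuals.var() / y.var()
print("betas (PU, PEU, PSQ, PC):", np.round(coef[1:], 3))
print("R^2:", round(r_squared, 3))
```

In the study itself these quantities were produced by SPSS rather than computed by hand.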

Table 3: Predictors: PC, PU, PEU, PSQ; dependent variable: user satisfaction

Finally, a linear regression model is used to test the
hypotheses H1, H2, H3 and H4, which concern the impact
of the perceived factors on user satisfaction with the
mobile banking application. As shown in Table 5, the
variables PU and PC had a significant positive effect
on the user satisfaction, with β = .911 (Sig. = 0) and
β = .387 (Sig. = 0) respectively. It is also observed
that PEU and PSQ had an insignificant impact on user
satisfaction.

Table 4: Regression Analysis

Summary of hypothesis testing:
H1: There exists a significant relation between Perceived Usefulness and satisfaction in the M-Banking context. Supported (β = .911, p < .001)
H2: There exists a significant relation between Perceived Ease of Use and satisfaction in the M-Banking context. Not supported (β = -.076, p < .001)
H3: There exists a significant relation between Perceived Service Quality and satisfaction in the M-Banking context. Not supported (β = .215, p < .001)
H4: There exists a significant relation between Perceived Credibility and satisfaction in the M-Banking context. Supported (β = .38, p < .001)

CONCLUSION
The objective of this study is to analyze the factors
affecting bank customers' decisions to adopt mobile
banking. This study identifies some factors that are
more influential than others in mobile banking
adoption in the Omani banking market. The empirical
results show that PU and PC have a significant impact
on user satisfaction with M-Banking. The study also
shows that there is no significant relation between
PEOU or PSQ and user satisfaction. An important
finding of this study is that, among early adopters,
convenience was a more important indicator of
intention to adopt mobile banking. It is observed
that ICT plays a strong role in the banking industry.

REFERENCES
[1] A.A. Aderonke, "An empirical investigation of the level of users' acceptance of e-banking in Nigeria," Journal of Internet Banking and Commerce, 15(1), 1, Apr 2010.
[2] I. Ajzen and M. Fishbein, Understanding Attitudes and Predicting Social Behavior, 1980.
[3] J.O. Adewoye, "Impact of Mobile Banking on Service Delivery in the Nigerian Commercial Banks," International Review of Management and Business Research, Vol. 2, Issue 2, 2013.
[4] J. Aduda and N. Kingo, "The Relationship between Electronic Banking and Financial Performance among Commercial Banks in Kenya," Journal of Finance and Investment Analysis, vol. 1, no. 3, pp. 99-118, 2010.
[5] A. Aghdaie and F. Faghani, "Mobile Banking Service Quality and Customer Satisfaction (Application of SERVQUAL Model)," International Journal of Management and Business Research, 2(4), pp. 351-361, 2012.
[6] C. Fullenkamp and S.M. Nsouli, Six Puzzles in Electronic Money and Banking (Vol. 4), International Monetary Fund, 2004.
[7] C.M. Chiu, C.C. Chang, H.L. Cheng and Y.H. Fang, "Determinants of customer repurchase intention in online shopping," Online Information Review, 33(4), pp. 761-784, 2009.
[8] Y.H. Chen and S. Barnes, "Initial trust and online buyer behavior," Industrial Management & Data Systems, 107(1), pp. 21-36, 2007.
[9] S.Y. Chian, "Factors affecting individuals to adopt mobile banking: Empirical evidence from the UTAUT model," Journal of Electronic Commerce Research, Vol. 13, No. 2, 2011.
[10] N. Chung and S.J. Kwon, "The effect of customers' mobile experience and technical support on the intention to use mobile banking," CyberPsychology and Behavior, 12, pp. 539-543, 2009.
[11] F.D. Davis, "Perceived usefulness, perceived ease of use, and user acceptance of information technology," MIS Quarterly, vol. 13, no. 3, pp. 319-340, 1989.
[12] D. Gefen and D.W. Straub, "Gender differences in the perception and use of e-mail: An extension to the technology acceptance model," MIS Quarterly, pp. 389-400, 1997.
[13] J.C. Gu, S.C. Lee and Y.H. Suh, "Determinants of behavioral intention to mobile banking," Expert Systems with Applications, 36(9), pp. 11605-11616, 2009.
[14] C. Hamlet and M. Strube, "Community banks go online," ABA Banking Journal's 2000 White Paper/Banking on the Internet, March, pp. 61-65, 2000.
[15] J.M.C. Hernandez and J.A. Mazzon, "Adoption of internet banking: proposition and implementation of an integrated methodology approach," International Journal of Bank Marketing, 25(2), pp. 72-88, 2007.
[16] V.M. Kumbhar, "Factors Affecting the Customer Satisfaction in E-Banking: Some Evidences from Indian Banks," Management Research and Practice, 3(4), pp. 1-14, 2011.
[17] M.K. Kim, M.C. Park and D.H. Jeong, "The effects of customer satisfaction and switching barrier on customer loyalty in Korean mobile telecommunication services," Telecommunications Policy, 28(2), pp. 145-159, 2004.
[18] S. Laforet and X. Li, "Consumers' Attitudes towards Online and Mobile Banking in China," International Journal of Bank Marketing, 23(5), pp. 362-380, 2005.
[19] Y.K. Lee, J.H. Park, N. Chung and A. Blakeney, "A Unified Perspective on the Factors Influencing Usage Intention toward Mobile Financial Services," Journal of Business Research, 65(11), pp. 1-10, 2011.
[20] Y. Malhotra and D.F. Galletta, "Extending the technology acceptance model to account for social influence: Theoretical bases and empirical validation," in Proceedings of the 32nd Annual Hawaii International Conference on System Sciences (HICSS-32), IEEE, Jan. 1999.
[21] S.J. Naqvi and H. Al-Shihi, "Factors Affecting M-commerce Adoption in Oman using Technology Acceptance Modeling Approach," TEM Journal, Volume 3, Number 4, 2014. www.temjournal.com
[22] S.J. Naqvi and H. Al-Shihi, "Practicing M-Application Services: Opportunities with special reference to Oman," Journal of Issues in Informing Science and Information Technology (IISIT), Vol. 10, ISSN 1547-5840, Informing Science Institute Press, USA, 2013.
[23] J.C. Nunnally, Psychometric Theory (2nd ed.), New York: McGraw-Hill, 1978.
[24] P.A. Pavlou, H. Liang and Y. Xue, "Understanding and Mitigating Uncertainty in Online Buyer-Seller Relationships: A Principal-Agent Perspective," MIS Quarterly, 31(1), pp. 105-136, 2007.
[25] Riyadh, Akter and Islam, "The Adoption of E-banking in Developing Countries: A Theoretical Model for SMEs," International Review of Business Research Papers, Vol. 5, No. 6, pp. 212-230, November 2009.
[26] K. Ravichandran, S. Prabhakaran and A.S. Kumar, "Application of Servqual Model on Measuring Service Quality: A Bayesian Approach," Enterprise Risk Management, 1(1), pp. 145-169, 2010.
[27] G.M. Wilson and M.A. Sasse, "From doing to being: getting closer to the user experience," Interacting with Computers, 16(4), pp. 697-705, 2004.
[28] M. Zari Baf, S.M. Hosseini and B. Bozorgmehr, "Comparative Study of Electronic and Traditional Banking Preferences of User Behavior (Case study: Investigating the Use of Services Like Semnan E-banking Customers Bank)," Journal of Management, 21, pp. 55-66, 2012.

Web Resources
http://www.itp.net/605635-oman-wins-five-gccegovernment-awards (Accessed on 21 Dec 2015)
http://www.arraydev.com/commerce/jibc/ (Accessed on 21 Dec 2015)
N. AL-Salmi, Information and Communication Technology (ICT) and Approval of business activity details. Retrieved May 30, 2015, from http://www.oman.om/wps/portal/!ut/p/a1/hZBB (Accessed on 21 Dec 2015)
Excerpts from the speech of His Majesty Sultan Qaboos bin Said on the occasion of the Opening Annual Session of the Council of Oman (Majlis Oman) on 11th November 2008. http://www.omanet.om/english/hmsq/hmsq8.asp?cat=hmsq (Accessed on 21 Dec 2015)

Human Immune System Based Security System Model for Smart Cities
Ansu George, Mahata Sudeshna, Sonia Soans, K Chandrasekaran
Department of Computer Science and Engineering,
National Institute of Technology, Surathkal, Karnataka

Abstract—The world is experiencing a technological
and social evolution as the concept of smart cities
finally makes its leap from white papers to reality.
Since smart cities are based on technologies like IoT
(Internet of Things), ubiquitous computing, grid
computing, etc., system security is quite vulnerable.
In a scenario where the intelligence of a city has
been elevated to that of a smart city, one of its
important components has been majorly neglected: the
cyber-security system of the city. In the highly
automated and integrated environment of a smart city,
where every system is said to be intelligent and is
inspired by existing natural processes, why should
the security domain be denied its intelligence,
especially when there exists a natural mechanism that
complements its character perfectly: the biological
immune system. In this paper, a new computer security
system model for smart cities is proposed based on
the mechanism of the human immune system.
Keywords—Smart City Cyber-security, Human Immune System, Computer System, IoT

I. INTRODUCTION

As human civilization ventures into new depths of
technology with the agenda of integrating these
technologies into the way of life, fields such as IoT
and ubiquitous computing have emerged. These emerging
concepts led to the idea of a smart city. A smart
city cannot be referred to merely as a digital city:
it is not merely a technology-rich city, but one that
also integrates its infrastructure according to
social and political needs. A new perspective is
added to the word urbanization by the development of
smart cities, which play a key role in providing
ICT-enabled services and applications to citizens and
organizations as an integral part of public services.
Governing bodies and organizations continuously aim
to achieve higher standards of living for their
citizens, enhance the efficiency and quality of
service provided, and reduce cost and resource
consumption. This perspective requires an integrated
vision of a city and of its infrastructures, in all
its components, and extends beyond the mere
digitization of information and communication: it has
to incorporate a number of dimensions that are not
related to technology, e.g., the social and political
ones. Smart city applications and requirements have
been grouped into five domains, considering
dimensions such as potential, technical requirements,
roadmaps and challenges [1]: Economic, Social and
Privacy Implications; Developing E-Government;
Health, Inclusion and Assisted Living; Intelligent
Transportation Systems; and Smart Grids, Energy
Efficiency, and Environment.
However, the applications of the above-mentioned
domains raise the bar in terms of security and
privacy issues. As the central notion of a smart city
is to deeply embed automation technologies into its
systems so as to minimize manual interference in all
possible scenarios, everything is networked, making
every device connected to the network vulnerable to
various security hazards. The security issues
increase proportionally with the expanding, complex
value network of smart cities. Existing
signature-based anti-virus solutions are only
effective once the malware has been detected and a
definition of the malware has been issued by the
anti-virus vendor. Hence the mechanism which
anti-virus software uses blocks out particular
behaviour of previously encountered malware. This
approach will not react quickly enough in the highly
automated and integrated environment of the smart
city, which consists of many autonomous systems, thus
leaving cities vulnerable to malicious attacks. This
makes it even more essential to develop an intelligent

security system which is capable of detecting,
isolating and disinfecting malicious attacks on the
network. The security system of a city which consists
of self-intelligent systems must itself be
self-intelligent, in order to efficiently synchronize
with and protect the other systems. There already
exists such a system in nature which is capable of
serving as a reference for developing a
self-intelligent security system: the biological
immune system. In Section III the different types of
immunity are briefly introduced, and in Section IV
the relevance of each type in a smart city security
environment is explained. Section V consists of a
case study on Intelligent Transportation Systems
using the proposed cyber-security model.
II. LITERATURE SURVEY

With constant automation in every other field,
automation in the field of security has also become
important.
An executive report by Symantec [2] describes the
expansion in the implementation of the concept of
smart cities worldwide. Modern technology, like the
intricate and convoluted ICT implementations by
providers from innovative domains, will carry out
smart city deployments. Malevolent attacks will
increase due to the complexity of the integrated
systems. The idea of urban interconnection with
information guarding and privacy should be realized
by city administrators to ensure safety and
well-being for citizens and businesses alike.
Felipe Ferraz and Carlos Ferraz [3] explain nine
different issues of an urban system. These issues do
not just cause privacy problems but can lead to the
collapse of the entire system. Their work points out
the importance of further studies in the direction of
smart city security.
Symantec Corporation's technical report [4] explains
the Digital Immune System, which is a novel form of
anti-virus to counter growing computer viruses. The
ever-expanding rate of viruses and worms forces
anti-virus companies to change their strategies and
technologies. Speed in the creation and distribution
of new cures against the threat that hits the system
is the important factor to be looked into in a world
constantly proceeding towards complete automation.
The system specified in [4] also leaves no chance for
human error, and aims to distribute cures faster than
viruses can spread.
Anil Somayaji, Steven Hofmeyr and Stephanie Forrest
[5] also take inspiration from natural immune systems
for computer security in the age of the Internet.
Various features of the immune system, such as
distributability, diversity, disposability,
adaptability, autonomy, dynamic coverage, anomaly
detection, multiple layers, identity via behavior and
imperfect detection, are discussed in their paper.
The human immune system's adaptive responses are the
focus of [5], since computer systems do not have such
mechanisms at present.
P. K. Lala and B. Kiran Kumar [6] point out the
possibility of developing an immune system inspired
by the human immune system. Their system makes a
cellular-level comparison with the interconnected
digital world. The proposed architecture consists of
interconnected functional cells that each carry out a
specific function. As in the human immune system, a
unified fault detection system does not exist in this
architecture. A mechanism for the automatic
replacement of faulty cells with spare cells is the
main concept of the HIS focused on in this paper.
III. THE HUMAN IMMUNE SYSTEM

The immune system is responsible for protecting the
body against possibly harmful substances by
recognizing and responding to antigens. The immune
response is how the body recognizes and defends
itself against bacteria, viruses, and substances that
appear foreign and harmful. The different types of
immunity are listed as follows:
1) Innate Immunity
2) Adaptive Immunity/Acquired Immunity
   a) Passive Immunity
   b) Active Immunity

Innate Immunity: Innate or non-specific immunity is a
defence system that exists from birth. The main
components of this defence system include barriers,
which are referred to as the front line of the immune
response. Pattern recognition receptors identifying
microbes belonging to broad groups, and damaged,
injured or stressed cells that send out alarms,
usually trigger the innate response. This is the
first level of security provided by the immune
system. Once an antigen passes this level, other
parts of the immune system get ready to counter the
attack.
Adaptive Immunity/Acquired Immunity: The immunity
that an organism develops as a result of exposure to
various antigens is referred to as adaptive immunity.
This adds a second level of safety to the immune
system by continuously learning to respond to new
pathogens. When the body is attacked by a new
pathogen, a process called antigen presentation
occurs, during which non-self antigens are recognized
and responses tailored to the pathogen-infected cells
are produced. These tailored responses are stored in
the body via memory cells in order to counter similar
secondary attacks in the future.
Passive Immunity: Passive immunity is acquired due to
the presence of antibodies that were produced in the
body of another organism. Infants gain passive
immunity when antibodies are transferred through the
placenta from their mother or while feeding.
Injection of anti-serum is also a form of passive
immunity. This immunity provides immediate short-term
protection against antigens.
Active Immunity: Long-term active immunity is
acquired following an infection phase by activation
of B and T cells. This type of immunity can also be
generated artificially through vaccination. In the
process of vaccination, also known as immunization,
an antigen from a pathogen is introduced in order to
stimulate the immune system to develop specific
immunity against that particular pathogen without
causing the disease associated with the organism.
IV. HUMAN IMMUNE SYSTEM BASED CYBER-SECURITY MODEL (HISCS)
The proposed security model, the Human Immune System
based Cyber-Security (HISCS) model, has a mechanism
analogous to that of the human immune system, wherein
the security model detects, quarantines and
disinfects malware instead of pathogens.
Innate Immunity: This type of immunity, in the cyber
sense, refers to the creation of barriers, i.e.,
firewalls. The firewall system recognises malware or
malicious activity using its security database, which
contains malware fingerprints/information,

Fig. 1. HISCS Model

and then poses as a barrier to these activities. In
case the malware manages to breach the firewall, a
secondary response mechanism is triggered in order to
send warning notifications to the central responsible
system of the immune model. Innate immunity is
divided into active and passive innate immunity.
Active Innate: The entire system is continuously
monitored via extensive monitoring devices. Strategic
points are identified in the system such that
monitoring them allows the system to get a head start
on any necessary counter-action.
Passive Innate: The components used for the system
must be thoroughly checked so as to avoid internal
flaws in these components leading to loop-holes in
the security system itself.
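A minimal sketch of this innate, signature-based barrier follows. The `fingerprint` and `innate_barrier` names and the in-memory database are hypothetical; a real firewall would match against vendor-supplied signatures rather than whole-payload hashes:

```python
import hashlib

def fingerprint(payload: bytes) -> str:
    """Illustrative malware fingerprint: a SHA-256 hash of the payload."""
    return hashlib.sha256(payload).hexdigest()

# Signature database seeded with fingerprints of previously seen samples
signature_db = {fingerprint(b"previously seen malware sample")}

def innate_barrier(payload: bytes, db: set, alert) -> bool:
    """First line of defence: block payloads whose fingerprint is already known.

    Returns True if the payload passes the barrier, False if it was blocked.
    Unknown payloads pass, which is exactly why the adaptive layer is needed.
    """
    fp = fingerprint(payload)
    if fp in db:
        alert(f"blocked known malware {fp[:12]}")
        return False
    return True

alerts = []
assert innate_barrier(b"previously seen malware sample", signature_db, alerts.append) is False
assert innate_barrier(b"benign traffic", signature_db, alerts.append) is True
```

The second call passing through illustrates the limitation discussed in the Introduction: signature matching cannot stop malware it has never seen.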
Adaptive Immunity/Acquired Immunity: This type of
immunity is not imbibed in the initial phase of the
system. The system undergoes a learning process by
encountering various malwares and strengthens its
security with experience. Adaptive immunity is
categorized into active and passive immunity.
Active Immunity: This type of immunity develops as
the system is exposed to new and unknown malicious
activities. A closed-loop process approach is
followed to acquire active immunity. In accordance
with Fig. 2, the node subjected to the malware attack
detects the malware and quarantines it. The
quarantined sample is sent to the central security
system for analysis and development of a cure, after
which the developed cure
is circulated in the network so that the security
databases of all the nodes in the network are updated
with the malware's information and cure. For this
mechanism to work effectively, an intelligent
detection system must be designed.

Passive Immunity: The central security system sends
the information and cure of the malware to all the
nodes of the network, irrespective of which node was
infected with the malware. This implies that the
unaffected nodes are supplied with the information of
a malware that they haven't encountered, thus
borrowing the information and gaining immunity as a
result of activity that occurred in another node;
this is called passive immunity.

Fig. 2. Closed Loop Model

The HISCS model is proposed so as to fulfill the
following objectives [4]:
1) To detect a greater percentage of new or unknown threats at various security levels.
2) To make the system highly scalable.
3) To provide secure malware submission and distribution of new definitions.
4) To provide intelligent filtering of submissions.
5) To focus system resources on the most critical threats.
6) To reduce false positive instances.
7) To attain higher analysis speeds.
8) To provide end-to-end automation of the processes of submission, analysis and distribution of new definitions.
9) To provide real-time status updates.
10) To manage denial of service and flooding attacks.

Using the human immune system model explained above,
the HISCS model is developed. For a better
understanding of the HISCS model, explanation through
example is adopted, where the domain of Intelligent
Transportation Systems is considered.

V. CASE STUDY

In this section we exhibit how HISCS modeling can be
applied to various potential smart city applications.
We first consider a traffic monitoring system
developed by T-Systems Hungary. Some of the key
features of this system include:
1) Ability to differentiate between three vehicle classifications with the help of modification of pixels. These statistics help the system determine the load and the number of axles on a particular road.
2) Using the images from the camera, the speed of vehicles can be monitored by the authorities. Local display of vehicle speed, as a warning to let drivers know to slow down, is another feature.
3) If rules of turning are violated, the system warns the driver. The software continuously monitors the motion of the vehicles.
4) The system controls the traffic light system, changing the lights on the basis of rush hours. Number plates of cars bypassing the red light at crossroads are recognised by this system as well.
5) Drive-in authorizations without hindering the traffic flow can be granted using a car plate reader, or based on a maximum vehicle weight policy or zone congestion control, depending on the requirement.
6) The system can also track vehicles or number plates of criminal suspects and automatically inform the concerned authority via text message or even an email.
7) Pedestrians are warned at zebra crossings if an approaching vehicle is within stopping distance.
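Before walking through the HISCS layers for this system, the Section IV closed loop (quarantine at a node, central analysis, network-wide cure distribution) can be sketched as follows. `Node`, `CentralSecuritySystem` and the node names are hypothetical, and "analysis" is reduced to producing a definition string:

```python
# Illustrative HISCS closed loop: an infected node quarantines a sample,
# the central security system derives a "cure" (here just a definition),
# and the definition is pushed to every node, so uninfected nodes gain
# passive immunity from an attack they never saw.

class Node:
    def __init__(self, name):
        self.name = name
        self.definitions = set()   # local security database

    def knows(self, malware_id):
        return malware_id in self.definitions

class CentralSecuritySystem:
    def __init__(self, nodes):
        self.nodes = nodes

    def handle_quarantined_sample(self, malware_id):
        # Analysis step, abstracted away: produce a definition for the sample
        definition = malware_id
        # Distribute the cure to ALL nodes, infected or not (passive immunity)
        for node in self.nodes:
            node.definitions.add(definition)

# One camera node detects and quarantines an unknown sample...
nodes = [Node("camera-1"), Node("traffic-light-7"), Node("speed-sensor-3")]
center = CentralSecuritySystem(nodes)
center.handle_quarantined_sample("worm-xyz")

# ...and every node, including ones never attacked, now recognises it
assert all(node.knows("worm-xyz") for node in nodes)
```

The broadcast in `handle_quarantined_sample` is what distinguishes this design from per-host anti-virus: one node's infection immunizes the whole grid.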

Having identified the key functionalities of such a
traffic monitoring system, the HISCS model is applied.
Active Innate Immunity: According to the HISCS model
specifications, all points of entry or access into
the system must be secured during the development
stage. Some of these precautions include the
following. Sensors are used to monitor the movement
of vehicles; these sensors must be strategically
located to capture the right information as well as
to keep them away from public access. Extra
protection of the physical sensor grid needs to be
provided. Unauthorised handling or tampering with a
sensor device should trigger a mechanism for
immediate signalling to the main authorities.

the T-memory cells remember the previous virus or


worms that have attacked the body and if the same
virus attacks again it produces a secondary response faster than the primary response to counter
this virus based on previous memory. For this
traffic monitoring system two techniques can be
used. One to produce a primary response to any
new/unknown virus attack and another to remember previous attacks and react with a secondary
response.
Primary Response: A heuristic technology
called Bloodhound heuristics developed by Symantec recognises new or unknown virus. Based on internal tests at Symantec Security Response, Bloodhound can expose anonymous and prominent Word
and Excel viruses upto 75-90% , DOS file virus up
to 70% and anonymous boot-record viruses up to
80%.

Passive Innate Immunity When integrating


such a traffic monitoring system the products like
camera, sensors, cctv components and other legacy
systems used must be from trusted certified vendors. These components are the building blocks
of this monitoring system and if these have faulty
security the entire system that is to be developed
inherits these faults which will hinder the functionality and raise security concerns.

The Bloodhound heuristics can be considered


anomalous to the receptors in the human immune
system that help detect antigens. In the human
immune, after the antigen has been found enzymes
surround the antigen and engulfs the foreign body
through a process called phagocytosis. [4] Similarly once the suspicious files are detected by the
heuristics it is isolated and sent for analysis to
the central quarantine as soon as possible in the
most protected manner without spreading to other
computers in the grid. Latest virus definitions are
scanned against these detected virus and if these
definitions cure the virus they are quickly sent back
to the workstation from which the virus came. This
takes place within the transport monitoring system
itself. If the central quarantine of the transport
monitoring system is unable to cure the virus, it is
submitted to the Symantec Security Response. The
Central Quarantine strips all non-essential contents
(e.g., text) and encrypts the data. This is then
sent to the Symantec Security Response with the
information it needs to determine whether a virus
is present so it can create a cure without hindering
any privacy issues.

Active Acquired Immunity: The first time a cyber hacker attacks the system, the public may excuse the fault, depending on the destructive level of the attack. But if the same attack happens a second time, the system must be prepared to counter it, or it will become unpopular among its customers, much as the human immune system mounts a faster, stronger response on re-exposure to a pathogen.

Secondary Response: To respond to a virus that has already affected the system, instead of repeating the lengthy Symantec Security Response process, a faster secondary response must be formulated, one that reacts more quickly than the primary response. The cure from Symantec Security Response that is sent back to the destination workstation where the virus attacked must also be stored in a central security log of the smart city. This log helps the city react if the same virus attacks the same smart city application again, or any other application of the smart city. The human immune system uses the same technique to overpower a returning virus. A computer worm, however, needs to be tackled differently. A computer worm is an independent malware program that runs its code on a target system only once; after the initial infection it attempts to spread to other machines on the grid or network by replicating itself. Unlike a computer virus, it does not need to attach itself to an existing program. Because of this difference, a computer worm cannot be tackled by Bloodhound heuristics, which, as mentioned above, work with files.

The cloud servers used for storing the data obtained from the traffic monitoring system must use a trusted service. A ten-factor authentication system based on managerial levels is used in this system to ensure that only authorized individuals can access and approve the respective functions and alarms. Every event is logged, so protecting the system log against criminals who might want to monitor the traffic flow is important.
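The role of the central security log in the secondary response can be sketched as below. The class and method names are illustrative assumptions, not part of the HISCS model's specification: a known signature triggers the stored cure immediately, while an unknown one falls back to the slower primary response.

```python
# Illustrative sketch (names hypothetical) of the central security log
# that lets any smart-city application mount a faster secondary
# response to a virus the city has already seen once.
class CentralSecurityLog:
    def __init__(self):
        self._cures = {}  # virus signature -> cure recorded after first attack

    def record(self, signature: str, cure: str) -> None:
        """Store the cure returned by the (slow) primary response."""
        self._cures[signature] = cure

    def respond(self, signature: str) -> str:
        if signature in self._cures:
            # Known attack: apply the stored cure at once (secondary response).
            return "apply:" + self._cures[signature]
        # Unknown attack: escalate through the slower primary response.
        return "primary-response"

log = CentralSecurityLog()
log.record("worm-x", "patch-42")  # first attack, cured the slow way
```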
Passive Acquired Immunity: Ethical Injection: Recent ongoing research by Cesar Cerrudo on the vulnerabilities of automated traffic control systems shows the dangers that such undetected vulnerabilities can cause. Ethical hacking attacks of this kind should be encouraged during this phase of the security modelling. Brainstorming sessions on possible attack methods, and programming the systems to counter these newly found attacks, will keep the system safer for longer durations by staying ahead of the cyber attacker.
Installing Anti-Virus: This phase of the model also ensures that trusted anti-virus software is installed on all the interconnected systems involved.
VI. CONCLUSION

A human immune system inspired approach toward cyber-security has been proposed for a smart city. The proposed cyber-security model was developed to steer security technology towards self-intelligent security. The HISCS model describes the cyber-immunity system, categorizes it into various types and describes their functionalities; the various types of immunity can be read as various levels of security. The proposed model is supplemented by a case study on an intelligent transport system, one of the domains of the smart city. The case study covers the key features of the system and the security layers to be incorporated after identifying which type of immunity should be allocated to which functionality. Thus, the proposed HISCS model efficiently categorizes the security issues of a smart city and describes the best possible approach in accordance with the human immune system.

VII. SCOPE FOR FUTURE WORK

Currently there are five major domains identified in a smart city; however, this is a generic classification. Further research possibilities include a detailed classification of these domains; a comprehensive study of each domain, its functionalities, issues and the immunity model adopted; and development of the technologies and mechanisms used in the different types of immunity systems. Every existing city is aiming to become a smart city; hence, developing a smart security system that efficiently protects the security and privacy of the citizens is a necessity for any application designed to cater to a smart city crowd.

REFERENCES
[1] Smart Cities Applications and Requirements, Net!Works, March 2005, retrieved 20 May 2011, http://www.networks-etp.eu
[2] Transformational smart cities: cyber security and resilience, Symantec World Headquarters, Mountain View, CA 94043 USA, May 2013.
[3] Felipe Silva Ferraz, Carlos André Guimarães Ferraz, Smart City Security Issues: Depicting information security issues in the role of an urban environment, IEEE/ACM 7th International Conference on Utility and Cloud Computing, 2014.
[4] The Digital Immune System: Enterprise-Grade Anti-Virus Automation in the 21st Century, Symantec Corporation, July 2001.
[5] Anil Somayaji, Steven Hofmeyr, Stephanie Forrest, Principles of a Computer Immune System, Department of Computer Science, University of New Mexico, Albuquerque, NM 87131.
[6] P. K. Lala and B. Kiran Kumar, Human Immune System Inspired Architecture for Self-Healing Digital Systems, Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR 72701.
[7] György Szamosvári, Smart City Intelligent Solutions, T-Systems Hungary Ltd.

Defense-in-Depth Architecture for Mitigation of DDoS Attacks on Cloud Servers

Dr. S. Benson Edwin Raj

Mr. N. Senthil Kumar

Mrs. Revathi

IT Department
Rustaq College of Applied Sciences
Sultanate of Oman
bensonedwin@gmail.com

IT Department
Rustaq College of Applied Sciences
Sultanate of Oman
senthilgnd@gmail.com

MCA Department
St. Joseph College
Trichy, India
aruldossrevathi@yahoo.co.in

Abstract: Online services are on a rapid rise in today's Internet world. Web servers, which host these online services, are prime targets for hackers performing Distributed Denial of Service (DDoS) attacks. Attackers launch DDoS attacks on web servers to disrupt the services or to consume the network bandwidth, at times leaving legitimate users unable to access web resources. DDoS attacks compromise the availability of a service by harnessing the power of millions of zombies (compromised computers) under the control of bot masters. DDoS attacks have existed since the mid-1980s and remain the top web security threat; hence, their mitigation is becoming very important. The distributed and dynamic nature of DDoS attacks makes them more difficult to mitigate. Several techniques for mitigating DDoS attacks have been proposed in the past by various researchers; however, most of the earlier research focused either on the Application Layer or on the Network Layer, mostly providing a single layer of defense. Hackers and attackers take advantage of the weaknesses of these mitigation techniques to launch DDoS attacks. Therefore, it is necessary to propose a new architecture for the mitigation of DDoS attacks to enhance the security of web servers. In this research work, an overview of a comprehensive approach to the mitigation of DDoS attacks is presented, and the Trusted Referral Architecture is discussed in detail.

Index Terms: DDoS, Zombies, Cloud Servers, Cyber warfare

I. INTRODUCTION

The Internet owes much of its historic success and growth to its openness to new applications. New applications can be designed, implemented and brought into widespread use much more quickly if they do not need to wait for key features to be added to the underlying network. Perversely, restrictions on this openness have emerged as a rational response of network and system administrators needing to cope with its consequences. The Internet architecture is vulnerable to Denial-of-Service (DoS) attacks, in which any collection of hosts with enough bandwidth can disrupt legitimate communication between any pair of other parties simply by flooding one end or the other with unwanted traffic. These attacks are widespread, increasing, and have proven resistant to all attempts to stop them. Thus, to defend against the Distributed Denial-of-Service (DDoS) attacks that plague websites today, it is proposed to mitigate the DDoS attacks.

Confidentiality, Integrity and Availability, often referred to as the CIA triad, are the three main objectives of computer security. According to the Code of Laws of the United States definition of information security, availability means ensuring timely and reliable access to and use of information. Denial of Service attacks make resources unavailable to their intended users, making them a major potential threat to availability.

The objective of this research work is to design and develop a defense-in-depth architecture for the mitigation of DDoS attacks on cloud servers, with the following aims:
a) To mitigate DDoS attacks at the Application Layer and the Network Layer of the OSI model.
b) To design and develop a distributed referral architecture in order to divide and conquer DDoS attacks and to provide legitimate clients secure access to web server resources.
II. LITERATURE SURVEY

DDoS attacks have been in existence since the mid-1980s. The solutions provided for the mitigation of DDoS attacks are analyzed and summarized below, classified into different phases.


Phase I (early 1990s to 1997): There was a sudden boom in the field of networking in this phase, but it eventually came to a virtual standstill in the mid-90s due to the effects of DoS attacks. Bailey et al. (1996) proposed a solution to prevent DDoS attacks: SYNDefender protects against TCP SYN flood attacks by intercepting all SYN packets and mediating the connection attempts before they reach the operating system. This prevents the target host from becoming flooded with unresolved connection attempts, which would otherwise cause the operating system, and the host, to stop accepting new connections. As a result, the host system is effectively insulated from the SYN flood attack and the denial-of-service condition that results.

Phase II (1997 to 1998): In this phase a number of monitoring agents were introduced to monitor network traffic. An agent can collect communication control information to generate a view of all connections observable in a monitored network. Furthermore, it can watch for certain conditions to arise and react appropriately. An important proposal in this phase is anomaly detection.

Phase III (1998 to 2003): The second-phase solutions were still insufficient to defend against a DDoS attack, and the need for a reactive as well as a proactive mechanism became essential. This gave rise to the third phase of solutions, which consisted of reactive approaches such as the traceback mechanism, which helps trace back to the source where the attack originated, and proactive approaches such as overlay networks, in which selected overlay nodes form a protection perimeter around the network to be secured.

Phase IV (2003 to 2009): This is the most important milestone in the journey to mitigate DoS attacks, because this was the period in which the largest number of solutions was proposed to address breaches at both the application level and the network level. As we infer from the table, CAPTCHAs [6][8] were used to defend against application-level DoS attacks, and to overcome network-level attacks, integrated architectures containing filtering techniques and capabilities were used to identify and differentiate between legitimate and bad traffic.

Phase V (2009 to 2014): The fifth phase depicts a clearer picture of a solution to the problem of DoS attacks. Unlike the previous phases, where the problem was solved by a central entity controlling the overall mechanism, here the process of mitigating DoS becomes distributed. In the case of application-level protection, CAPTCHAs are made more complex, moving from ordinary text recognition to complex image-recognition CAPTCHAs [6][8]. In the case of network-level protection, to make the protection distributed, referral architectures [3] are used, in which authenticated referrals provide additional defense to the target, thereby enlarging its protection perimeter.

III. PROPOSED APPROACH

The research framework for the proposed defense-in-depth architecture for the mitigation of DDoS attacks on web servers is illustrated in Figure 3.1. It may be deployed in any corporate or government organization. The proposed Distributed Denial of Service (DDoS) tolerant architecture provides a defense mechanism against DDoS attacks at the Application Layer and the Network Layer of the OSI reference model, and also at the server side.

The client accesses the web server through the Intermediary Proxy Server. The web servers are heterogeneous, redundant virtual machines. The Intermediary Proxy Server (IPS) shields the web servers from direct public access. The load balancer in the IPS accepts incoming HTTP requests and distributes them evenly across the heterogeneous web servers. The IPS Manager is responsible for managing the communication between the IPS and the Trusted Referral Server (TRS) through the IPS agent; the IPsec protocol is enabled between the IPS and the TRS. The Intrusion Detection System monitors incoming traffic and alerts the administrator in case of DDoS attacks. The Botnet Analysis Engine is responsible for identifying and analyzing botnets and tracking spambots. Both the IDS and the Botnet Analysis Engine provide the necessary inputs to the firewall to mitigate DDoS traffic. The picture-based CAPTCHA application is responsible for filtering out automated tools' access to web resources and for providing privilege tokens to legitimate clients to access the web resources.

This framework consists of two subsystems:

Subsystem 1: Defense mechanism at the Application Layer
  Subunit 1: Referral Architecture
  Subunit 2: Picture-Based CAPTCHA Application

Subsystem 2: Defense mechanism at the Network / Server Side
  Subunit 1: Identifying, Classifying and Detecting Spambots
  Subunit 2: DDoS-Resilient Server-Side Architecture

In this paper, the referral architecture is described in detail.
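The application-layer flow of Subsystem 1 can be sketched as follows, assuming a boolean attack flag from the firewall and a callable CAPTCHA check as hypothetical stand-ins for the real components; the three-attempt limit follows the description above.

```python
# A minimal sketch of the divide-and-conquer flow: during an attack the
# request is redirected to a Trusted Referral Server, which challenges
# the client with up to three CAPTCHA attempts. The attack flag and the
# solve_captcha callable are hypothetical stand-ins for the firewall
# signal and the picture-based CAPTCHA test.
MAX_ATTEMPTS = 3

def handle_request(under_attack: bool, solve_captcha) -> str:
    if not under_attack:
        return "forward-to-ips"       # normal path straight to the IPS
    for _ in range(MAX_ATTEMPTS):     # redirected to the TRS during DDoS
        if solve_captcha():
            return "privilege-token"  # legitimate client earns a token
    return "dropped"                  # three failures: suspected bot
```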


Fig. 3.1 Proposed Architecture


As the illegitimate requests to the web servers are virtually unlimited during a DDoS attack, providing a distributed defense mechanism with the help of Trusted Referral Servers keeps the defense mechanism simple. The central idea of this referral architecture is to let the trusted neighbors of the website refer legitimate clients to it while the website is under a flooding attack. Attackers are widely distributed and attack the web server through millions of zombies. The first defense mechanism proposed here is based on a divide-and-conquer strategy: incoming HTTP requests are redirected to one of the Trusted Referral Servers for validation. The TRS validates the client by issuing a picture-based CAPTCHA [11]. The requests of clients who fail the picture-based CAPTCHA three consecutive times are discarded, while a privilege token is issued to clients who clear it. The privilege token is transferred to the IPS through the client via secure cookies; this referral is done through a proxy script running on the referrer website. Only clients trusted by the referrers are given privilege tokens for the target server; such trusted clients are those authenticated by the referrer website in a way that gives the IPS confidence that the client is not under the control of an attacker. The IPS reads the privilege token via secure cookies and verifies the content of the cookie before fetching the web page from the web server. Confidentiality, integrity and authentication of the privilege token are ensured. The communication flow between the client (C), Intermediary

Proxy Server (IPS) and Trusted Referral Server (TRS) is depicted below.

The communication steps are described as follows:

1. The client accesses the web server through the Intermediary Proxy Server. All incoming packets are captured for packet analysis, which is done by the Botnet Analysis Engine.

2. During a DDoS attack, the firewall redirects the request to the referral server once the packets have been captured for analysis.

3. The referral web server performs a picture-based CAPTCHA test with the client.

4. If the client fails the picture-based CAPTCHA test, the referral server generates another picture-based CAPTCHA test. If the client fails more than three attempts, the request is dropped, suspecting a bot.

5. If the client clears the picture-based CAPTCHA test, the referral server creates a cookie containing a privileged token, and the request is redirected to the Intermediary Proxy Server. The cookie contains the IP address of the TRS, the session key shared between the TRS and the IPS, and the privilege token of the client, encrypted with the public key of the Intermediary Proxy Server.

6. The Intermediary Proxy Server reads the cookie, and the verifier validates the origin of the cookie (created by the referral server). The IPS decrypts Cookiec and, using KTRS,IPS, reads Tokenc. The token contains the IP address of the client, which the IPS uses to permit the HTTP request to access the web servers. Once validated, the application processes the client request with a response from any one of the virtual machines (VMs).

Fig. 3.2 Communication between Client, TRS and IPS

C → IPS : HTTP request
IPS → TRS : HTTP redirect during DDoS
TRS → C : PicCAPTCHA
C → TRS : solution for PicCAPTCHA + IPC
TRS → C : Cookiec
C → IPS : Cookiec + IPC + TS2

where:
PicCAPTCHA : picture-based CAPTCHA
Cookiec : cookie issued to the client
Tokenc : privilege token of the client
H(Tokenc) : token digest
IPC : IP address of the client
IPTRS : IP address of the TRS
TS1, TS2 : timestamps

IV. EXPERIMENTATION AND RESULT ANALYSIS

Performance analysis focuses on two issues. First, a security analysis is done of the secure communication between the client, the IPS and the TRS server. Second, experiments are conducted to calculate the overhead of the referral server by forwarding 100 and 200 client requests simultaneously. The time taken for a successful connection by the proposed method is compared with the WRAPS [3] method, and the results show a slight increase in overhead in the proposed method. Considering the security benefits offered by the proposed method, the increase in overhead is negligible.

Security analysis of the communication between the Client, IPS and TRS server:

a) Secrecy, authenticity and integrity of the privilege token
CIA parameters are ensured for the privilege token. The privilege token is created by the TRS server once the client clears the picture-based CAPTCHA test. The token is encrypted with the session key between the TRS and the IPS; hence, the client cannot open the token information. Moreover,


the hash code of the token ensures the integrity of the privilege token. In case of any modification attack on the privilege token, the hash code does not match at the IPS server, and the client requests are rejected.

b) Not prone to man-in-the-middle attacks
A man-in-the-middle attack may occur if an intruder compromises the token and tries to flood the network. As the privilege token contains information about the client, any man-in-the-middle attack is readily recognized by the IPS server, which rejects the requests.

c) Not prone to replay attacks
The lifetime of the tickets plays a vital role in identifying replay attacks. The privilege token issued by the TRS server is subject to a lifetime; beyond that lifetime, the privilege token becomes invalid. Hence, replay attacks using an expired privilege token are not possible.

Fig. 4.2 Comparison of connection time between WRAPS and the proposed method for 200 referrals
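The three checks analyzed above can be condensed into a small verification sketch. The field names and the 60-second lifetime are assumptions for illustration; the paper does not specify the exact token format or its lifetime.

```python
import hashlib

# Hedged sketch of the three token checks described above: the token
# digest (integrity, check a), the bound client IP (man-in-the-middle,
# check b), and the lifetime (replay, check c). Field names and the
# lifetime value are illustrative assumptions.
LIFETIME_SECONDS = 60

def digest(token: str) -> str:
    """Token digest H(Tokenc), modelled here as a SHA-256 hex string."""
    return hashlib.sha256(token.encode()).hexdigest()

def verify(token: str, token_digest: str, claimed_ip: str,
           token_ip: str, issued_at: float, now: float) -> bool:
    if digest(token) != token_digest:       # modified token: reject
        return False
    if claimed_ip != token_ip:              # request not from the token's client
        return False
    if now - issued_at > LIFETIME_SECONDS:  # expired token: replay suspected
        return False
    return True
```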

Experimental setup

The overall experiment was done on an Intel Core 2 Duo CPU with 3 GB RAM, running Linux with kernel version 2.6.24.7. The simulations were done using the network simulator NS-2.34 and Click version 1.8.0, with a packet size of 500 bytes. A dumbbell topology with a drop-tail queue management policy was chosen as the simulation topology.

Results

Here the overhead on the referral server is calculated with the help of clients assumed to be legitimate by all means. These clients request forwarding to the referral server, sending 100 and 200 referral requests. The graphs show that as the number of requests increases, so do the connection time and the percentage of successful connections. The X axis is the percentage of successful connections and the Y axis is the connection time to the referral server in milliseconds. The evaluation of the proposed referral architecture for a mean of 100 and 200 referrals is given in Figure 4.1 and Figure 4.2.

Fig. 4.1 Comparison of connection time between WRAPS and the proposed method for 100 referrals

V. CONCLUSION

A new framework for the mitigation of DDoS attacks on web servers has been developed in this work. As the attackers are widely distributed and follow different means of generating DDoS traffic, the proposed mitigation architecture is comprehensive, covering mitigation measures at the application level, the network level and the server side. This comprehensive strategy plays a vital role in providing a defense-in-depth approach for the mitigation of DDoS traffic. The effectiveness of the system has been evaluated experimentally, and the conclusions derived are summarized as follows:

- The TRS-based referral architecture proposed in this work provides a distributed defense mechanism to mitigate DDoS traffic.
- A security analysis against the existing technique confirms the CIA (Confidentiality, Integrity and Authentication) parameters for the privilege token.
- Although the overhead of the Trusted Referral Server is slightly higher, the overhead is negligible considering the security benefits.

REFERENCES
1. Bhuyan, M., Kashyap, H., Bhattacharyya, D.K. and Kalita, J.K., "Detecting Distributed Denial of Service Attacks: Methods, Tools and Future Directions," The Computer Journal, Vol. 57, No. 4, pp. 537-556, 2014.
2. Wang, P., Sparks, S. and Zou, C., "An advanced hybrid peer-to-peer botnet," IEEE Transactions on Dependable and Secure Computing, Vol. 7, No. 2, pp. 113-127, 2010.
3. Wang, X. and Reiter, M., "Using Web-Referral Architectures to Mitigate Denial-of-Service Threats," IEEE Transactions on Dependable and Secure Computing, Vol. 7, No. 2, pp. 203-216, 2010.
4. Seewald, A.K. and Gansterer, W.N., "On the detection and identification of botnets," Computers and Security, Vol. 29, No. 1, pp. 45-58, 2009.
5. Saidane, A., Nicomette, V. and Deswarte, Y., "The design of a generic intrusion-tolerant architecture for web servers," IEEE Transactions on Dependable and Secure Computing, Vol. 6, No. 1, pp. 45-58, 2009.
6. Datta, R., Li, J. and Wang, J.Z., "Exploiting the Human-Machine Gap in Image Recognition for Designing CAPTCHAs," IEEE Transactions on Information Forensics and Security, Vol. 4, No. 3, pp. 504-518, 2009.
7. Basso, A. and Sicco, S., "Preventing massive automated access to web resources," Computers and Security, Vol. 28, No. 3-4, pp. 174-188, 2009.
8. Tariq Banday, M. and Nisar Shah, A., "Image flip CAPTCHA," The ISC International Journal of Information Security, Vol. 1, No. 2, pp. 105-123, 2009.
9. Fallah, M.S., "A puzzle-based defense strategy against flooding attacks using game theory," IEEE Transactions on Dependable and Secure Computing, Vol. 7, No. 1, pp. 5-19, 2008.
10. Walfish, M., Vutukuru, M., Balakrishnan, H., Karger, D. and Shenker, S., "DDoS Defense by Offense," ACM Transactions on Computer Systems (TOCS), Vol. 28, No. 1, 2010.
11. Benson Edwin Raj, S., Jayanthi, V.S. and Muthulakshmi, V., "A Novel Architecture for the Generation of Picture Based CAPTCHA," ADCONS 2011, LNCS 7135, Springer-Verlag Berlin Heidelberg, pp. 568-574, 2012.


Distributed Computing Environment Based on Cloud Computing Technology for MRMWR

Boumedyen Shannaq
Ministry of Regional Municipalities and Water Resources (MRMWR), Directorate of Planning and Studies, Oman

Abstract: This work develops principles for a distributed computing environment based on cloud computing technology to organize dynamic load balancing in a virtual computing environment. It also analyzes and discusses the problems, characteristics, architectures and applications of various approaches to building an operating environment for user access and for running demanding applications based on cloud computing technology. The aim of this work is to improve the efficiency of distributed computing in the cloud system and to develop an approach to organizing an operating environment that runs demanding applications in a distributed computing environment based on cloud computing technology. To achieve this goal, a new methodology has been developed for running applications in the cloud, allowing an increase in the overall performance of heterogeneous software and hardware systems. The designed method organizes an effective operating environment that provides dynamic balancing across the nodes of the cloud computing environment by migrating processes, not data.

Keywords: cloud computing technology, Distributed Computing, Dynamic load balancing

I. INTRODUCTION

The Ministry of Regional Municipality and Water Resources (MRMWR) [1], Government of Oman, is guided by a vision for sustainable development aimed at satisfying development requirements in the Sultanate of Oman. The Ministry is working on the provision of infrastructure in different Wilayats, such as internal roads, lighting, beautification and urban development, and on the construction of facilities for various services such as markets, parks and gardens. The Ministry is also responsible for the development of water resources through water exploration, the construction of various types of dams, and the maintenance of springs and aflaj.

The Ministry has developed an application architecture to support the business functions of MRMWR. The MRMWR application architecture contains three categories:
(1) Business Core Systems
(2) Business Support Systems
(3) Middleware.

MRMWR is looking forward to improving the various services (G2C, G2B, G2G and G2E) related to its core business functions and to streamlining the organisational processes for service delivery. Therefore, the schemes proposed in this work could support MRMWR in adopting cloud computing features to achieve these goals.

In today's economy, one of the most important conditions for the successful functioning of companies and enterprises is effective cost management [2][3]. Automation of production processes and the implementation of information technologies in organizational management are among the most important factors in reducing costs, considering the cost of maintaining automated control and information processing systems. In this regard, one promising way to solve problems of this kind is to use cloud computing. Cloud computing is a computing model in which resources such as processing, storage, networking and software are abstracted and provided as a service over the Internet to a remote user [4][5][6]. It provides access to resource allocation and dynamic, virtually infinite scalability for solving specific problems. The benefits of cloud computing include performance, cost savings, high availability and easy scalability.

Fig. 1 Cloud Computing [7]

Cloud systems are service oriented, i.e., designed to provide consumers with quality services. Accordingly, several service models are distinguished: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

Manuscript received December 24, 2015.

A. IaaS Providing customers with a variety of IT resources

D. Grid computing and cloud computing


The concept of grid computing and cloud computing leads to
new concept called virtualization . [11] Some researchers
believe that the main difference between grid computing and
cloud computing is virtualization . While grid systems provide
a high load of computing resources by distributing a specific
task for a few compute nodes, cloud computing are on the path
of execution of multiple tasks on the same server as virtual
machines. In addition, there are features in the main use cases
of grid and cloud computing. While the grid is mainly used to
solve problems for a limited period of time, cloud computing is
mainly focused on providing long-term services

As a rule IaaS model includes infrastructure of virtual server,


storage, network. such service can be found in Amazon,
Google ,IBM etc. IaaS provides the user with ample
opportunity to configuring the service, but at the same time, it is
difficult to achieve high quality service. To avoid this problem,
many providers offer a range of templates to provide virtual
infrastructure and different models.

Fig. 2 Iaas [8]

B. Platform
as
a
Service
(PaaS)
-PaaS Provides access to software platform. users can create
and publish their own applications based on this platform, they
have access to resource management of lower level (operating
system, data warehouse, etc.). Due to significant differences in
each
API
Platform-specific migration of applications from one PaaS
solutions is usually impossible. This fact has led some internet
service providers (ISPs) to reflect the development of a
universal interface of PaaS. The main effect to benefit the
customer in selecting this model is cost savings ,related to the
maintenance of physical infrastructure and hardware computer
network, as well as system and server software, such service
can be found in (Google app engine ,Azure )

Fig. 5 How Grid Computing Works [12]

Fig. 6 Virtualization

Thus, this work assume that the grid and cloud computing
complement each other. Grid Interfaces and protocols can
provide interaction between the cloud resources or cloud
platforms to provide the union. A higher level of abstraction
provided by the cloud platform, can help users in organizing
the Grid systems transparent and easy provisioning of Grid
platforms and to attract new user groups to the use of such
resources.

Fig. 3 Paas [9]

C. Software as a Service (SaaS)


- SaaS provides software. In this model, users have access only
to the functions of necessary software via net. The SaaS model
is already being used to deliver applications edit documents
and presentations, project management, CRM. such service can
be found in Google Apps

II. PROBLEM DEFINITION


The implementation of cloud computing raises a number of
unsolved scientific problems that prevent full use of all the
potential advantages of this approach.
First, the desire to create a universal cloud inevitably runs into
the need to operate in a heterogeneous environment and, at the
same time, to give users access to their individual applications
without compromising performance.
Secondly, when an arbitrary number of users access the cloud,
several problems arise in ensuring a high degree of safety and
reliability to protect specific data. The security of data and
resources in cloud computing is therefore one of the critical
issues. The safety of the entire system depends on the security
of the software interfaces for managing resources, virtual
machines and services, starting from the authentication and
authorization procedures and ending with encryption.

Fig. 4 SaaS [10]


Software interfaces should provide maximum protection
against unauthorized attacks. Security in the cloud must
therefore provide convenient and uniform authorized access to
resources, accounting for their use and protecting resources and
data from unauthorized use.
Thirdly, to enable practical use of a heterogeneous cloud
environment in different areas, it is necessary to organize a
universal system for running individual applications. Given that
these problems are still not completely solved, we can assume
that the topic of this research work is practically significant.

To overcome certain scalability difficulties in cloud computing
systems, we use dynamic load balancing based on migrating
processes between virtual processors, rather than migrating
data as in standard systems.

A. Scheme for Driving Distributed Computing Systems
A new approach has been developed to build an operating
environment that provides secure user access to intensive
applications in a distributed cloud computing environment.
The proposed methodology for running applications in cloud
environments allows increasing the overall performance of
heterogeneous software and hardware systems, as illustrated in
Figure 7. This work manages heterogeneous resources and
combines computing power under the control of a hypervisor
in a single cloud infrastructure.
Although the idea of creating a virtual machine for each user is
not new, the virtual machine in the cloud opens up new
opportunities. The size of a virtual cluster can increase or
decrease dynamically, and the virtual cluster can quickly switch
at run time from one user to another. But the central problem
of cloud computing is the inability to control computing
processes, and hence the impossibility of dynamic balancing.

A. Research Methods
The research methods are based on modern principles of
parallel and distributed processing, data transmission in
computer systems, protection of computer systems, modern
technologies of software design, and the theory of
information-system reliability.
B. Purpose of this Work
The aim of this work is to improve the efficiency of distributed
computing in the cloud by creating an operating environment
for organizing secure user access, and by developing principles
that support running demanding applications in a distributed
computing environment based on cloud computing technology.
To achieve this goal, the following has been developed:
1. A methodology for running applications in the cloud that
allows increasing the overall performance of heterogeneous
software and hardware systems.
III. SCHEMES FOR DISTRIBUTED COMPUTING
As the scalability and overall performance of a distributed
computing environment in the cloud increase, one problem in
particular affects performance: load balancing.
Balancing can be static, applied when a task starts, or dynamic,
applied while tasks are running. Dynamic balancing is used to
redistribute tasks that are already running: the migration
process can move a process from one machine to another
without having to restart it from the beginning. In fact,
applications designed to process large amounts of data need
load-balancing methods that migrate the processing itself.
Several experiments reported in the literature [14] compare the
execution time of matrix multiplication, and the latency of
messages, with and without migration processes. The obtained
results allow comparing the execution of actual processes for
high-performance matrix multiplication framed in PVM in a
heterogeneous environment, as well as comparing the
effectiveness of the interaction of multiple processes within an
MPI virtual cluster. It was shown that the performance of
various tests using PVM and MPI without process migration
was significantly lower than with it. These results clearly
demonstrate the benefits of prioritizing process migration.

Fig. 7 Scheme for driving distributed computing systems based on
cloud computing technology

Fig. 7 represents the scheme of a distributed computing system
based on cloud computing technology as a standard cloud
computing system using flow control. Since the flows can be
substantially irregular, the load on the individual cores is also
irregular, which greatly reduces the productivity of the whole
system. Load balancing is used to eliminate this defect.
The approach proposed in this work enables applications aimed
at processing large amounts of data to use load-balancing
methods that migrate the data processing within virtual clusters.
The following formulas have been developed to compute the
performance of the system with and without process migration.
The performance P(S) of the computing system without
migration processes is:
P(S) = P(n, x, u, q, y), where
P(S) = the performance of the entire system,
n = the number of cores,
x = the performance of a single core,
u = the performance of the communications environment,
q = the efficiency of the program code,
y = the productivity of the data-processing algorithms.
Adding a new argument m that accounts for process migration
produces a new formula for the performance of the computing
system with migration processes:
P(S) = F(n, x, u, q, y, m), where m = the migration processes.
This approach provides significant acceleration and high
performance for concurrent and multi-tasking workloads; it is
the central element of this work.
The cluster constantly tries to equalize the load on its nodes by
dynamically migrating processes from heavily loaded nodes to
nodes with a smaller load. Users run their applications, and the
system, transparently to them, seeks free resources in the
cluster and distributes the processes among the available nodes,
thereby increasing overall performance. This approach brings
new opportunities for sharing computing power and other
resources, such as memory and communication channels.
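The dynamic balancing described above can be sketched as a simple greedy rebalancer that repeatedly migrates the cheapest process from the most loaded node to the least loaded one while that narrows the load spread. The node and process representation below is a hypothetical illustration, not the system's actual implementation.

```python
# Hypothetical sketch: dynamic load balancing by process migration.
# Each node is a list of process costs; we migrate the cheapest process
# from the hottest node to the coldest one while the spread shrinks.

def node_load(node):
    """Total load of a node = sum of its process costs."""
    return sum(node)

def spread(nodes):
    """Difference between the most and least loaded nodes."""
    loads = [node_load(n) for n in nodes]
    return max(loads) - min(loads)

def rebalance(nodes):
    """Greedily migrate processes until the spread stops improving."""
    nodes = [list(n) for n in nodes]     # work on copies
    while True:
        src = max(nodes, key=node_load)  # most loaded node
        dst = min(nodes, key=node_load)  # least loaded node
        if not src or src is dst:
            return nodes
        proc = min(src)                  # cheapest process to migrate
        before = spread(nodes)
        src.remove(proc)
        dst.append(proc)
        if spread(nodes) >= before:      # no improvement: undo and stop
            dst.remove(proc)
            src.append(proc)
            return nodes

balanced = rebalance([[8, 7, 6], [1], [2]])  # loads 21/1/2 become 8/7/9
```

Because only processes move, never the data they work on, the sketch mirrors the paper's preference for process migration over data migration.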

The designed method allows organizing an effective computing
system that performs dynamic balancing of the nodes of the
cloud computing environment by migrating processes, not data.
B. Scheme of the Virtual Cloud Cluster
In order to organize such migration between the nodes in the
cloud system, a functional virtual cluster environment with a
single system image (SSI) can be implemented, which
automatically parallelizes tasks between the nodes, as
demonstrated in Figure 8.

Fig. 8 Scheme of the virtual cloud cluster with a single operating
system image

The single-system-image environment is built with the help of
a distributed file system. It provides a unified view of all files
on all mounted file systems on all nodes in the cluster, as if
they were all within a single file system. This technique
organizes an effective computing system that allows, for the
first time, dynamic balancing of the computing nodes of the
cloud environment through the migration of processes, not data.
The scheme of the virtual cloud cluster with a single operating
system image provides the strong interaction between parallel
processes and their management that is effectively addressed
through cloud technology, even for very demanding system
tasks.
As a result, this approach allows creating distributed computing
systems based on cloud computing technology. Such systems
are characterized by high reliability, low cost and good
performance, which makes this an ideal approach.

IV. CONCLUSION
The main directions for improving computer systems are to
improve their performance, increase their reliability and
decrease their price/performance ratio. The new approach
designed to create the operating environment of cloud
computing will improve the overall performance of
heterogeneous software and hardware systems, on average, by
adapting the architecture of each individual virtual machine to
a specific user application.

REFERENCES
[1] Ministry of Regional Municipality and Water Resources (MRMWR),
ICT Sustainable Development Report 2015, www.mrmwr.gov.om.
[2] Economist Intelligence Unit (sponsored by EMC), "Organisational
agility: How business can survive and thrive in turbulent times," 2009.
[3] E. Papulova, Z. Papulova, "Competitive Strategy and Competitive
Advantages of Small and Midsized Manufacturing Enterprises in
Slovakia," E-Leader, Slovakia, 2006.
[4] A. Shawish, M. Salama, "Cloud Computing: Paradigms and
Technologies," Springer, 2014.
[5] NIST Cloud Computing Standards Roadmap Working Group, NIST
Cloud Computing Program, Information Technology Laboratory, 2011.
[6] IBM Cloud Infrastructure, www.ibm.com/IaaS.
[7] The National Institute of Standards and Technology (NIST),
http://www.brighthub.com/environment/green-computing/articles/127086.aspx
[8] www.cloudreviews.com
[9] www.mbuguanjihia.com
[10] www.dzone.com
[11] S. Hashemi, A. Bardsir, "Cloud Computing vs. Grid Computing,"
ARPN Journal of Systems and Software, 2012.
[12] How Grid Computing Works, computer.howstuffworks.com
[13] Virtualization, I. T. Works, www.itworksite.com
[14] Mathematics and Computer Science Division, Argonne National
Laboratory, "Goals Guiding Design: PVM and MPI."

Dr. Boumedyen Shannaq currently serves as an Expert in
Information Systems at the Ministry of Regional Municipalities
and Water Resources (MRMWR), Directorate General of
Planning and Studies, Oman. He received his Ph.D. in system
analysis, information control and processing ("Technical
Systems") in technical science from the St. Petersburg Institute
for Informatics and Automation of the RAS; his dissertation
theme was "Methods, Algorithms and Programs of Discursive
Analysis Aimed at Developing Multi-language Thematic
Glossaries (Arabic, English and Russian)". Dr. Boumedyen has
served as a Scientific Research Director and as an information
systems expert in the fields of research, development, planning,
e-government and technology management, and as a professor,
lecturer, trainer, journal committee member and journal
reviewer at various international and domestic institutions.

AIRS: A Search Engine Performance Visualization with Ontology
Dr. Sharmi Sankar*, Noora Al-Alawi*, Mr. Ishtiaque Mahmood*, Dr. Jehad Al-Bani Younis*
*Ibri College of Applied Sciences, Sultanate of Oman
Abstract: The objective of the Semantic Web and linked-data
vision is to construct a so-called web of data: an infrastructure
of machine-readable semantics for data on the web. An Internet
search engine (e.g., Google, AltaVista or Infoseek) returns
thousands of so-called matched documents for a single query,
some of which are relevant and others irrelevant to the query.
End users usually have problems organizing and digesting such
vast quantities of information, much of which is likely to be
irrelevant. Our main goal is to design and implement an
advanced information retrieval system (AIRS), based on XML
and Semantic Web technologies (RDF and the OWL Web
Ontology Language), for Ibri College of Applied Sciences to
address these drawbacks.

II. PRELIMINARIES: COMPONENTS OF ONTOLOGY


An ontology consists of different types of components, which
can be divided into three types according to their ability to
describe the entities of a domain: classes, individuals and
relations.
A. Ontology Classes
Classes are the core component of most ontologies. Depending
on the language used to implement the ontology, a class may
also be called a concept or a type. Classes represent collections
of individuals that share common characteristics. Sometimes
one class is a subclass of another class. For example, if the class
College is a subclass of the class Organization, then every
individual of the class College is also an individual of the class
Organization. In addition, classes can share relationships that
describe how the individuals of one class relate to another.
B. Ontology Individuals
An individual represents an object of the domain of interest and
is also called an instance of a class. The ontology describes
individuals, so the individual is considered the base unit of an
ontology. An individual can represent a concrete object such as
a person or a machine, or an abstract object such as an article
or a function.
C. Ontology Relations
A relation is often called a property or slot in some systems. It
describes how the individuals of classes are related to each
other, how each individual relates to a specific class, or
sometimes how the classes of a specific domain relate to each
other. For example, consider the relation between classes: if we
have a class Person and a class Country, the relationship
between them is "lives in", meaning that every person lives in
a country. We can also make relations between individuals of
those classes. For instance, if we have an individual Ahmed in
the class Person and an individual Oman in the class Country,
and Ahmed lives in Oman, then the relation "lives in" holds
between the individuals Ahmed and Oman [10].
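As a sketch, the three components just described can be modeled with plain data structures; the class map, individuals and relation below are hypothetical illustrations built from the examples in the text.

```python
# Hypothetical sketch of the three ontology components:
# classes (with superclasses), individuals, and relations.

classes = {                      # class -> superclass (None = top)
    "Person": None,
    "Country": None,
    "Organization": None,
    "College": "Organization",   # College is a subclass of Organization
}
individuals = {                  # individual -> its class
    "Ahmed": "Person",
    "Oman": "Country",
    "IbriCAS": "College",
}
relations = {("Ahmed", "lives_in", "Oman")}  # relations between individuals

def classes_of(individual):
    """An individual belongs to its class and all of its superclasses."""
    c = individuals[individual]
    result = []
    while c is not None:
        result.append(c)
        c = classes[c]
    return result
```

For example, `classes_of("IbriCAS")` yields both College and Organization, reflecting the subclass rule from the Ontology Classes subsection.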
D. Ontology Development Tools
Many ontology tools are available at present. In order to design
any ontology, the developer should consider two crucial things.
The first is the ontological language to be used, the so-called
ontology representation language. The second is the
ontological editor that supports the suggested language

Keywords: RDF, OWL, Ontology, CASEng.


I. INTRODUCTION
An RDF dataset consists entirely of an unordered set of
triples: (subject, predicate, object) 3-tuples that describe
relationships either between entities or between properties
of entities, depending on the value of the object field [1].
RDF Schema provides special means to represent such
cases, with constructs like rdfs:subClassOf, rdfs:range or
rdfs:domain. Ideally, a reasoner can understand the RDFS
semantics and expand the number of triples based on these
relationships [2]-[5].
Each College of Applied Sciences (CAS) has a dean. For
instance, if you have the triples "Jehad is a dean of Ibri CAS"
and "dean rdfs:subClassOf Person", then you should also
generate the triple "Jehad is a Person". This extension of RDF
and RDFS is the so-called ontology. To build a clear
background, we present ontology types, components,
advantages, motivations and the tools that facilitate developing
an ontology [6]-[9]. Later, we describe the steps to build the
IBRI-CASEng ontology, which is used for Ibri College of
Applied Sciences in both Arabic and English. Then we generate
the IBRI-CASEng ontology with an inference feature that
reduces self-joins and makes the graph more intelligent. In
addition, we show how we store the IBRI-CASEng ontology in
a triple-store system as well as in a relational database.
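The rdfs:subClassOf entailment described above (Jehad is a dean, dean is a subclass of Person, therefore Jehad is a Person) can be sketched as a small forward-chaining closure over triples. The triples below are illustrative, not the system's actual storage format.

```python
# Minimal sketch of rdfs:subClassOf entailment over (s, p, o) triples.

triples = {
    ("Jehad", "rdf:type", "Dean"),
    ("Dean", "rdfs:subClassOf", "Person"),
    ("Person", "rdfs:subClassOf", "Agent"),
}

def rdfs_closure(triples):
    """Expand rdf:type and rdfs:subClassOf triples to a fixpoint."""
    triples = set(triples)
    while True:
        new = set()
        for s, p, o in triples:
            for s2, p2, o2 in triples:
                if p2 == "rdfs:subClassOf" and s2 == o:
                    if p == "rdf:type":
                        new.add((s, "rdf:type", o2))  # type propagates up
                    elif p == "rdfs:subClassOf":
                        new.add((s, p, o2))           # subClassOf is transitive
        if new <= triples:                            # nothing new: done
            return triples
        triples |= new

inferred = rdfs_closure(triples)  # now contains ("Jehad", "rdf:type", "Person")
```

This is exactly the kind of expansion a reasoner performs before queries, which is why the entailed triples need not be stored explicitly.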
The benefits of modeling an ontology:
- Reasoning aids annotation as well as queries.
- Annotating multiple bodies of data with underlying ontologies
facilitates their integration, building another level of complexity.
- Meaning is explicit.
- Meaning is human- and computer-readable.
- Updating is easy: there is no need to find terms in free text
and change them.
- Data transfer is possible without loss of meaning.

possible, if one exists, to avoid the problems of merging
ontologies, such as format conflicts or the same concepts
having different representations of the domain.

and the so-called graphical ontology development
environments. The following gives more detail and examples
of these two things.
E. Ontology Representation Languages
Since ontologies first became available to developers, many
languages have been developed to represent ontological graphs.
Researchers classify these languages into two types: traditional
syntax ontology languages and markup ontology languages
[11], [12].

C. Defining Concepts in the Domain (Classes)


In this step, you define the terms or concepts that exist in your
domain of interest. The concepts have various properties as
well as relationships, so you should determine which properties
suit the terms in your domain. The relationships among the
terms should also be built correctly, to avoid irrelevant or
incorrect results.
D. Arranging the Concepts in a Hierarchy (Subclass-Superclass Hierarchy)
The domain hierarchy is one of the important features of
building ontological knowledge. There are different approaches
to building your model of knowledge in the ontological way,
such as the top-down approach, the bottom-up approach, a
mixed approach and the object-oriented programming analogy.
These approaches help the developer obtain a hierarchical
arrangement of concepts. For instance, if a class A is a
superclass of class C, every instance of C is an instance of A
(implication: class C represents a kind of A).
E. Defining Attributes and Properties of Classes - Slots
Identifying the properties, which some developers call slots, is
a significant point in ontology development. You need to
determine two important things for any slot: the slot
cardinality, which defines how many values a slot of a class can
have, and the slot value type, which defines what types of
values can fill it. There are different value types, such as
String, Number and Boolean.
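A slot check following the two points above can be sketched as follows; the slot schema here is a hypothetical example, not part of IBRI-CASEng.

```python
# Hypothetical sketch: validating a slot against its declared
# cardinality (how many values) and value type (what may fill it).

SLOT_SCHEMA = {
    # slot name: (maximum cardinality, allowed value type)
    "name":  (1, str),
    "phone": (2, str),
    "age":   (1, int),
}

def validate_slot(slot, values):
    """True iff the values respect the slot's cardinality and type."""
    max_card, vtype = SLOT_SCHEMA[slot]
    if len(values) > max_card:
        return False                     # cardinality violated
    return all(isinstance(v, vtype) for v in values)
```

For instance, two phone values pass (cardinality 2), while a string assigned to the integer-typed age slot fails.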
F. Defining Individuals and Filling in Property Values
The instances, or individuals, are considered the core of any
domain ontology, holding the targeted real values. They join
the classes, properties and relationships: a class holds one or
many instances, and each instance is filled with data properties.
In addition, the classes are linked by object properties and by
the subclass-superclass structure to form the relationships
inside the domain. An instance has one or many properties that
give sufficient detail about that individual.
IV. METHODOLOGY - AIRS: IBRI-CASENG ONTOLOGY
A search engine is an information retrieval system designed to
help users find information stored in a computer system, such
as on the World Wide Web, inside a corporate or proprietary
network, or on a personal computer. The search engine allows
clients to ask for content meeting specific criteria (typically
content containing a given word or phrase) and retrieves a list
of items that match those criteria. Regardless of the underlying
architecture, users specify keywords that are matched against
words in huge search-engine databases, producing a ranked list
of URLs and snippets of Web pages in which the keywords
matched. Although such technologies are mostly

F. Markup Ontology Languages
These languages mostly implement a markup scheme, usually
XML, to encode knowledge and are used for Web-based
ontologies:
- XOL, built in 1999 by the AI Center of SRI International as
an XMLization of a small subset of primitives from the OKBC
protocol, called OKBC-Lite.
- SHOE, developed as an extension of HTML by Luke, Heflin
and others in 2000.
- OIL (Ontology Inference Layer or Ontology Interchange
Language), developed in 2001, based on description logics and
including a frame-based representation.
- RDF (Resource Description Framework), developed in 2004
by the W3C as a semantic-network-based language to describe
Web resources.
- RDF Schema (RDFS), an extension of RDF built in 2004, also
by the W3C, with frame-based primitives.
- DAML+OIL, developed in 2002 as the latest release of the
earlier DAML (DARPA Agent Markup Language), created to
combine the results of the two languages DAML and OIL.
- OWL, the Web Ontology Language, created by Smith et al.
in 2004. Derived from DAML+OIL under the auspices of the
W3C, OWL is now the most popular ontology representation
language. A joke about this language is that its abbreviation
should have been WOL, but it was abbreviated OWL to make
it easier to remember, after the bird of that name [13]-[15].
III. ONTOLOGY ENGINEERING
There are different correct ways to model a domain; it is not
limited to one model. Most ontology models work in a network
fashion; therefore, ontology development is necessarily a
dynamic and iterative process. In addition, ontology is not a
synonym of taxonomy: taxonomical knowledge is just one kind
of ontological knowledge among others. The following general
processes are common among ontology developers for
describing a specific domain ontology:
A. Determine the Domain and the Scope of your Ontology
During this step, you define your domain of interest as well as
the purpose and aim of developing your ontology. In addition,
you can determine the end users' interests and anticipate their
questions, to make your ontology sufficient for the target
group.
B. Re-Use Existing Ontologies
You should determine whether any existing ontology covers all
your needs. Then, try to study it as much as

knowledge about the objects in the real world, which promotes
reusability and interoperability among different modules.
We followed an iterative development process in the
ontology-engineering phase. First, we started with a core
ontology that includes the basic concepts and a simple
hierarchy. Then, we experimented with this ontology and fixed
the issues in reasoning and searching. These steps were
repeated until we ended up with a stable ontology containing
multiple classes and properties in the IBRI-CASEng domain.
The full class hierarchy can be seen in Figure 2. We chose
Protégé as the design-environment tool to implement the
IBRI-CASEng ontology. Protégé is an open-source ontology
editor. It provides a graphical user interface for designing an
ontology and automatically generates source code for the
ontology written in Java. In addition, we decided to implement
the IBRI-CASEng ontology for the Arabic and English
languages because Protégé can create an ontology in both
languages; it also uses RDF as a standard and supports UTF-8
encoding [16], [17].
In the third step, we create the ontological graph, which helps
to implement the IBRI-CASEng ontology, as illustrated in
Figure 2.

used, users are still often faced with the daunting task of
sifting through multiple pages of results, many of which are
irrelevant. Surveys indicate that almost 25% of Web
searchers are unable to find useful results in the first set of
URLs that are returned.
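The keyword matching just described can be reduced to a toy ranker: count how many query keywords each page contains and sort by that score. The pages and URLs below are made-up illustrations, not how any of the engines discussed actually work.

```python
# Toy sketch of keyword search: score pages by keyword hits,
# return matching URLs ranked by score.

def rank(query, pages):
    """pages maps url -> page text; returns URLs sorted by keyword hits."""
    keywords = query.lower().split()
    scores = {url: sum(kw in text.lower() for kw in keywords)
              for url, text in pages.items()}
    return sorted((u for u in scores if scores[u] > 0),
                  key=lambda u: -scores[u])

pages = {"u1": "Governorates of Oman",
         "u2": "Cities in Oman",
         "u3": "Campus news"}
ranked = rank("oman governorates", pages)  # u1 scores 2, u2 scores 1
```

Such purely lexical matching, with no notion of entities or context, is exactly what the ontology-based approach in this paper aims to improve on.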
A. AIRS: Ontology Design of IBRI-CASEng
Design is considered a significant phase in developing any
system. Our IBRI-CASEng is designed in several phases, as
illustrated in Figure 1. In the following, we describe how each
phase or step is implemented to generate our efficient and
scalable ontological graph.
In the first step, we determine the domain and scope of our
ontology. We chose Ibri CAS (College of Applied Sciences) as
our domain of interest, highlighting the academic department
so that our ontology specifically serves as a prototype of the
system.
In the second step, we determine the ontology representation
language and the editor. We use OWL to develop our ontology,
as it is highly compatible with the World Wide Web. In
addition, OWL is based on the main elements of RDF and adds
more vocabulary to describe classes and properties.
OWL can define relationships between classes such as:

In the fourth step, we start building the ontology by defining
the classes. Superclasses and subclasses have been defined in
Protégé; each new class is a subclass of the general class called
Thing. Our IBRI-CASEng ontology has three main classes for
English and Arabic (Person, Organization and Location, and
their Arabic equivalents, respectively).

disjointWith, which means that any resource related to one
class cannot be related to the other;
complementOf, which means that the individuals of one class
cannot belong to the other;
equivalentClass, which means that two or more classes are
equal because the same individuals belong to them.
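As a sketch, the disjointWith constraint above can be checked mechanically: no individual may be typed with two classes declared disjoint. The class names and data layout below are illustrative assumptions.

```python
# Hypothetical sketch of checking an owl:disjointWith declaration.

def find_disjoint_violation(types_of, disjoint_pairs):
    """types_of maps individual -> set of classes; disjoint_pairs is an
    iterable of 2-element frozensets of mutually disjoint classes.
    Returns the first offending individual, or None."""
    for individual, classes in types_of.items():
        for pair in disjoint_pairs:
            if pair <= classes:      # both disjoint classes assigned
                return individual
    return None
```

A reasoner performs the same kind of check when it flags an ontology as inconsistent.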

In the fifth step, we define the instances of each class, which
are called individuals. An individual is considered a member of
a class. For instance, the class Dean has only one individual,
called Dean, as shown in Figure 3. The engine's instances have
grown to more than 1000 individuals.

Fig. 1 IBRI-CASEng framework.


Fig. 2 IBRI-CASEng class hierarchy.

Having chosen OWL, ontologies specify the concepts and the
relationships among them, which play a central role in Semantic
Web applications. An ontology also provides shared

In the sixth step, we define the relationships, or object
properties as they are called in Protégé. There are different
types of relations, such as relationships between classes or
between classes and individuals. Each new object property is a
sub-property of topObjectProperty in Protégé, as shown in
Figure 3. In the seventh step, we create the data properties and
define the construct, domain and range for each property. The
range of a data property can be String, Number, Date, Time,
etc. In Protégé, each new data property is a sub-property of
topDataProperty; it gives a value to a class instance, or
individual as it is called.

Fig. 4 Queries on IBRI-CASEng framework.

It is based on the Resource Description Framework (RDF),
which integrates a variety of applications using XML for
syntax and URIs for naming (the W3C Semantic Web). The
Semantic Web is a charter that allows publishing, sharing and
reusing data and knowledge on the Web and across application,
enterprise and community boundaries. A Semantic Web search
engine searches Semantic Web documents against a user query
for accurate results. The proposed search engine deploys both
keyword search and entity-based search to help users retrieve
beneficial data from the community, as shown by the sample
query search in Figure 4. Web applications and publications
describe their approaches from a very abstract viewpoint.

Fig. 3 IBRI-CASEng properties mapping in Protégé.

V. PERFORMANCE ANALYSIS
The Semantic Web is the representation of data on the
World Wide Web. It is a collaborative effort led by the W3C,
with participation from a large number of researchers and
industrial partners.
TABLE I
PRECISION RATIOS ON SEARCH ENGINES

                IBRI-CASEng   Google   Kngine   Wolfram-Alpha
  % Precision   100           55.56    46.47    53.3
  % Recall      100           45.45    100      100
  % Accuracy    100           33.3     46.47    53.3

VI. EXPERIMENTAL RESULTS


In our experiments, four well-known semantic search engines
are compared: IBRI-CASEng, Wolfram-Alpha, Kngine and
Google. We submitted 15 different queries to the tested
engines; the queries are shown in Figure 5. As shown in
Table I, our engine retrieved 12 relevant answers out of 15
queries, while the other engines returned irrelevant answers or
no response. The precision ratios of the other engines are
comparable: Wolfram-Alpha, Kngine and Google achieve 53.3,
46.47 and 55.56 respectively, as illustrated in Figure 5. In
addition, the accuracy of our engine is high compared to the
other engines: it reaches 100%, while Wolfram-Alpha, Kngine
and Google achieve 53.3, 46.47 and 33.3 respectively, as
shown in Table I.
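The precision, recall and accuracy figures in Table I follow the standard retrieval definitions, sketched below; aside from IBRI-CASEng's 12 relevant answers out of 15 queries, any counts used here are illustrative.

```python
# Standard retrieval metrics, as percentages.

def precision(tp, fp):
    """Share of retrieved results that are relevant."""
    return 100 * tp / (tp + fp)

def recall(tp, fn):
    """Share of relevant results that were retrieved."""
    return 100 * tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Share of all retrieval decisions that were correct."""
    return 100 * (tp + tn) / (tp + tn + fp + fn)

# An engine that retrieves 12 relevant results with no irrelevant ones
# and misses none scores 100 on both precision and recall.
perfect = (precision(12, 0), recall(12, 0))
```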

Queries to compare Semantic Search Engines


1. Regions in Oman
2. Governorates in Oman
3. Cities in Oman
4. Cities in Sultanate of Oman
5. Regions in Sultanate of Oman
6. Governorates in Sultanate of Oman
7. Number of Regions in Oman
8. Number of Governorates in Oman
9. Number of Regions in Sultanate of Oman
10. Number of Governorates in Sultanate of Oman
11. Cities in Muscat
12. Number of Cities in Al Dakhiliya
13. Oman President
14. Oman Colleges
15. Squ dean

REFERENCES
[1] D. Beckett, "The design and implementation of the Redland RDF
application framework," WWW '01: Proceedings of the 10th International
Conference on World Wide Web, New York, NY, USA, 2001.
[2] A. Reggiori, "RDFStore: Perl API for RDF Storage," 2002.
[3] R. V. Guha, "rdfDB: An RDF Database," http://www.guha.com/rdfdb/, 2000.
[4] K. Wilkinson, "Jena property table implementation," in Proc. SSWS,
pp. 54-68, 2006.
[5] J. Broekstra, A. Kampman, F. van Harmelen, "Sesame: A Generic
Architecture for Storing and Querying RDF and RDF Schema," 2002.
[6] S. Alexaki, V. Christophides, G. Karvounarakis, D. Plexousakis,
K. Tolle, "The ICS-FORTH RDFSuite: Managing Voluminous RDF
Description Bases," 2nd International Workshop on the Semantic Web
(SemWeb'01), Hong Kong, 2001.
[7] S. Harris, N. Lamb, N. Shadbolt, "4store: The design and implementation
of a clustered RDF store," in SSWS2009: Proceedings of the 5th International
Workshop on Scalable Semantic Web Knowledge Base Systems, 2009.
[8] D. J. Abadi, A. Marcus, S. R. Madden, K. Hollenbach, "Scalable
semantic web data management using vertical partitioning," in Proc. VLDB,
pp. 411-422, 2007.
[9] G. P. Copeland, S. Khoshafian, "A Decomposition Storage Model," in
Proceedings of the ACM SIGMOD International Conference on
Management of Data, pp. 268-279, 1985.
[10] L. Sidirourgos, R. Goncalves, M. Kersten, N. Nes, S. Manegold,
"Column-store support for RDF data management: not all swans are
white," in Proc. VLDB, pp. 1553-1563, 2008.
[11] M. Schmidt, T. Hornung, N. Küchlin, G. Lausen, C. Pinkel, "An
experimental comparison of RDF data management approaches in a
SPARQL benchmark scenario," in Proc. ISWC, pp. 82-97, 2008.

Fig. 4 Performance comparison on IBRI-CASEng.

Consequently, our engine retrieved better and more useful
results than the other engines. This is because it is built on a
domain-specific ontology, is highly scalable and performant,
and handles complex queries well by understanding the context
behind the query. The analysis of results is based on the
familiar metrics called recall and precision.
VII. CONCLUSION AND FUTURE WORK
The current improvements to this search engine operate in
three main stages: content (databases), query (querying
languages) and indexing techniques, in order to retrieve data
quickly, efficiently and accurately. We have described our first
prototype based on the College of Applied Sciences dataset. In
future work, we shall extend the RDF graph to contain all
information about MoHE, and we will then update the
ontological graph to correspond to the new RDF graph.

[12] Andreas Harth , Stefan Decker, Optimized Index Structures for Querying

RDF from the Web, LA-WEB 05: Proceedings of the Third Latin
American Web Congress, Washington, DC, USA, 2005.

[13] Devedzi, D. G. (2006). Model Driven Architecture and Ontology


Development. NewYork: Springer- Verlag Berlin Heidelberg.

[14] Ontogenesis. (2010, January 22). Retrieved 3 19, 2015, from Referance and
application ontologies: http://ontogenesis.knowledgblog.org/295

[15] M, H. (2009). A Practical guide to building OWL ontologies using Protege


4 and CO-ODE tool. The University of Manchester.

[16] N. F. Noy, "Ontology Development: A Guide to Creating Your First Ontology," Stanford University.

[17] Open Semantic Framework, "Ontology Tools," Nov. 18, 2014. Retrieved Apr. 16, 2015, from http://wiki.opensemanticframework.org/index.php/Ontology_Tools

Appendix

Invited Speakers:
Dr. Salim Sultan Al Ruzaiqi
Chief Executive Officer, ITA

As CEO of ITA, Dr. Salim is responsible for the implementation of the Digital Oman Strategy. Throughout his 18-year career in the IT field, Dr. Salim has held various technical and managerial roles in the Sultanate of Oman. He joined the Ministry of Foreign Affairs in March 1987 and led the IT initiatives in the ministry.
Dr. Salim gained diplomatic experience in the Sultanate's Diplomatic Corps as First Secretary at the Embassy of the Sultanate of Oman in Washington, DC, from 1998 to 2003. He received a Doctorate of Science degree in Information Systems and Communications from Robert Morris University of Pittsburgh, Pennsylvania; a Master of Science in Information Systems Technology from George Washington University of Washington, DC; and a Bachelor of Science in Computer Science and Mathematics from Lindenwood University of St. Charles, Missouri.

Sheikh Abdulla bin Issa Salim Al Rawahy


Chief Officer Alliances and Partnership, Ooredoo

Sheikh Abdulla bin Issa Al Rawahy was appointed Corporate Advisor in March 2014, having been Chief Strategy Officer of Ooredoo since 2008 and Chief Technical Adviser from 2004. With over 30 years of experience in the telecommunications sector, Sheikh Abdulla has held several leading roles in network planning, projects, strategy, and corporate business development for both fixed and mobile telecommunications. As Chief Strategy Officer, Sheikh Abdulla has been responsible for the long-term strategy of Ooredoo, which has focused on transforming Ooredoo from a mobile operator into a full-service operator able to serve customers (consumers as well as enterprises) with all their communication needs, as well as for Business Development and the International Wholesale business.
Prior to joining Ooredoo, Sheikh Abdulla served as Technical Advisor to the Minister of Transport and Communications, President of OmanTel, and Chairman and founding member of the Oman Fibre Optic Company. He holds a Bachelor in Engineering Technology and a Master of Science in Electrical Engineering from the University of Central Florida (USA).

Prof. Youcef Baghdadi


Professor, Department of Computer Science, College of Science, Sultan Qaboos University

Prof. Youcef Baghdadi received his HDR in Computer Science from University Paris 1 Panthéon-Sorbonne and his PhD from University of Toulouse 1, France. He is currently a Full Professor in the Department of Computer Science at Sultan Qaboos University in Oman. His research aims at bridging the gap between business and information technology (IT), namely in the areas of cooperative information systems (IS), web technologies, e-business, service-oriented computing, and methods for service-oriented software engineering. He is an expert in Service-Oriented Architecture (SOA). He has published many papers in journals such as Information Systems Frontiers, Information Systems and E-Business, Service-Oriented Computing and Applications, Business Information Systems, Electronic Research and Applications, Web Grids and Services, and others.

Dr. Bader Al-Manthari


Director General of the Information Security Division, ITA

Dr. Bader holds a PhD in Computer Science from Queen's University (Canada). He joined the Information Technology Authority (ITA) at the beginning of 2010 and has since worked on many national projects in Information Security. Dr. Bader is a member of the Program Committee of the Intelligent Cloud Computing Conference (ICC 2014) and of the Technical Program Committees of more than 10 highly ranked international conferences. In addition, he serves as a professional referee for more than 50 international conferences, journals, and awards in the area of Information and Communication Technology (ICT). Furthermore, Dr. Bader is ITIL and SABSA certified and holds the ILM Level 5 Award in Leadership and Management.
Before joining ITA, Dr. Bader worked as a teaching assistant and a research associate at Queen's University, Canada, where he worked on various research projects, particularly on enhancing the Quality of Service of next-generation communication networks. He has authored more than 20 refereed journal and conference publications. He holds a patent in 3.5G wireless cellular networks and WiMAX, and another patent is currently being prepared for 4G technologies.

Mr. Yahya Nasser Al-Hajri


Senior Specialist, Regulatory and Compliance Unit, TRA

Mr. Yahya is a computer engineering graduate with 15 years of experience in the ICT sector. He works for the Telecommunications Regulatory Authority of Oman as a Senior Specialist in the Technical Standards and Numbering Department of the Regulatory and Compliance Unit. Mr. Yahya is responsible for the development of ICT Technical Regulations and Guidelines. On international matters, Mr. Yahya also acts as co-rapporteur for Question 1 of Study Group 1 of the Development Bureau of the International Telecommunication Union (ITU).

Dr. Abderezak Touzene


Associate Professor, Department of Computer Science, College of Science, Sultan Qaboos University

Abderezak Touzene received the BS degree in Computer Science from the University of Algiers in 1987, the M.Sc. degree in Computer Science from Paris-Sud University in 1988, and the Ph.D. degree in Computer Science from Institut Polytechnique de Grenoble (France) in 1992. He is an Associate Professor in the Department of Computer Science at Sultan Qaboos University in Oman. His research interests include Cloud Computing, Parallel and Distributed Computing, Wireless and Mobile Networks, Network on Chip (NoC), Cryptography and Network Security, Interconnection Networks, Performance Evaluation, and Numerical Methods. Dr. Touzene is a member of the IEEE and the IEEE Computer Society.

08:30 - 09:00 AM  Registration

09:00 - 09:20 AM  Inauguration

09:20 - 10:15 AM  Plenary Session 1: Keynote Speaker: Dr. Salim Al Ruzaiqi, Chief Executive Officer, ITA Oman
Topic: Cloud Computing - An ITA Perspective
Chair: Prof. Bill Wresch, Associate Dean, University of Wisconsin, Oshkosh (Visiting Professor, University of Nizwa)

10:15 - 10:45 AM  Plenary Session 2: Talk 1: Mr. Abdulla bin Issa Salim Al Rawahy, Chief Officer Alliances and Partnership, Ooredoo, Oman
Topic: The Digital Enterprise
Chair: Dr. Said Younes, Assistant Dean for Undergraduate Studies, CEMIS, University of Nizwa

10:45 - 11:05 AM  Tea / Coffee Break

11:05 - 12:35 PM  Paper Presentation Session
Chair: Dr. Arockiasamy Soosaimanickam, Associate Professor, University of Nizwa
Co-Chair: Dr. Mohamed Ben Laroussi Aissa, Associate Professor, University of Nizwa

Invited Researcher: Prof. Youcef Baghdadi, "Relationship of Cloud Computing to SOA" (Department of Computer Science, College of Science, Sultan Qaboos University)

Paper 110: Zunaira Zubair Ahmed, Rachana Visavadiya, and Sanjay Kumar, "Cloud Computing and its Security Issues" (Department of Computer Science, Waljat College of Applied Sciences, Oman)

Paper 111: Mohanaad T Shakir, Asmidar Bit Abu Bakar, Yunus Bin Yusoff, and Mustefa Talal Sheker, "Diagnosis Security Problems for Hybrid Cloud Computing in Medium Organization" (Information Technology Department, Al-Buraimi University College, Oman; College of Information Technology, University Tenaga National, Malaysia)

Paper 109: Mohamed Yasin, Dr. P. Sujatha, Dr. A. Mohammed Abbas, and Dr. M.S. Saleem Basha, "Survey on Hybrid Route Diversification and Node Deployment in Wireless Sensor Networks" (Mazoon University College, Oman; Pondicherry University, India)

Paper 115: Faiza Al Balushi and Dr. Ashish, "An Analysis of User Satisfaction of Mobile Banking Application in Oman" (Sultan Qaboos University, Oman; IT Department, Higher College of Technology, Oman)

12:35 - 01:45 PM  Lunch Break

01:45 - 02:30 PM  Plenary Session 3: Talk 2: Dr. Bader Al Manthari, Director General of the Information Security Division, ITA Oman
Topic: Cloud Computing Security Issues and Challenges
Chair: Dr. Mahmood Jasim, Associate Professor, University of Nizwa

02:30 - 03:00 PM  Plenary Session 4: Talk 3: Mr. Yahya Al Hajri, Senior Specialist, Regulatory and Compliance Unit, TRA Oman
Topic: Cloud Computing - TRA Perspective
Chair: Dr. Khizar Hayat, Associate Professor, University of Nizwa

03:00 - 04:30 PM  Paper Presentation Session
Chair: Dr. Mahinda Alahakoon, Associate Professor, University of Nizwa
Co-Chair: Dr. Munaf Najmuldeen, Assistant Professor, University of Nizwa

Invited Researcher: Dr. Abderezak Touzene, Associate Professor, "Mobile Cloud - An Overview" (Department of Computer Science, College of Science, Sultan Qaboos University)

Paper 101: Ansu George, Mahata Sudeshna, Sonia Soans, and K Chandrasekaran, "Human Immune System Based Security Model for Smart Cities" (Department of Computer Science and Engineering, National Institute of Technology, Surathkal, Karnataka, India)

Paper 104: Dr. S. Benson Edwin Raj, Mr. N. Senthil Kumar, and Mrs. Revathi, "Defense-in-Depth Architecture for Mitigation of DDoS Attacks on Cloud Servers" (IT Department, Rustaq College of Applied Sciences, Oman; MCA Department, St. Joseph College, Trichy, India)

Paper 105: Dr. Boumedyen Shannaq, "Distributed Computing Environment Based on Cloud Computing Technology for MRMWR" (Ministry of Regional Municipalities & Water Resources (MRMWR), Directorate General of Planning & Studies, Oman)

Paper 113: Dr. Sharmi Sankar, Mr. Ishtiaque Mahmood, Ms. Noora Al-Alawi, and Dr. Jehad Al-Bani Younis, "AIRS: A Search Engine Performance Visualization with Ontology" (Ibri College of Applied Sciences, Oman)

04:30 - 05:00 PM  Valedictory Function

05:00 - 05:30 PM  Tea / Coffee Break

Our Valued Supporters for NIST 2016
