
Journal of Ambient Intelligence and Humanized Computing

https://doi.org/10.1007/s12652-018-0785-4

ORIGINAL RESEARCH

A methodology for deployment of IoT application in fog


Salvatore Venticinque1 · Alba Amato1 

Received: 4 January 2018 / Accepted: 30 March 2018


© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Abstract
The foreseen increase in the number of IoT devices connected to the Internet is worrying the ICT community because of its impact on the network infrastructure as the number of requesters becomes larger and larger. Moreover, the reliability of network connections and real-time constraints can affect the effectiveness of the Cloud Computing paradigm for developing IoT solutions. The Fog paradigm proposes an intermediate layer in the whole IoT architecture that works as a middle ground between the local physical memories and the Cloud. In this paper we define and use a methodology that supports the developer in addressing the Fog Service Placement Problem, which consists of finding the optimal mapping between IoT applications and computational resources. We exploited and extended a Fog Application model from the related work to apply the proposed methodology and investigate the optimal deployment of an IoT application. The case study is an IoT application in the Smart Energy domain. In particular, we extended a software platform, developed and released open source by the CoSSMic European project, with advanced functionalities. The new functionalities provide capabilities for the automatic learning of energy profiles and lighten the platform's utilization by users, but they introduce new requirements, also in terms of computational resources. Experimental results are presented to demonstrate the usage and the effectiveness of the proposed methodology at the deployment stage.

Keywords  Cloud computing · Fog computing · Smart grid · Internet of things

* Alba Amato
  alba.amato@unicampania.it

  Salvatore Venticinque
  salvatore.venticinque@unicampania.it

1 Università degli studi della Campania Luigi Vanvitelli, via Roma 29, Aversa, Italy

1 Introduction

Fog Computing is a new architecture for the Internet of Things, with an intermediate layer between local devices and the Cloud. In the Cloud model, data are moved onto large remote servers, far away and accessible via the Internet. But the Cloud model is not perfect. Above all, it is designed to hold data, not to react in real time. And real time is the key for the Internet of Things: without this readiness we could not run self-driving cars, smart cities, and so on. An intermediate level is therefore necessary in the whole architecture: a middle ground between the local physical memories and the Cloud, where small data centers work at the local level and facilitate the flow and management of data. This distributed approach is becoming increasingly important as a consequence of the onset of the Internet of Things (IoT) and the huge number of devices connected to it, which in a few years will reach about 50 billion Evans (2011). The transmission of all these data to the Cloud may be inefficient, as it would require high computing power and bandwidth, increasing latencies and consequently adversely affecting the performance of the entire network.

While Cloud Computing is based on large data centers far away from the user, the Fog promises to bring more processing power to the network edge, in particular onto the devices themselves or onto the local gateway, so reducing the amount of data to be transmitted to the Cloud for processing, analysis and archiving.

Fog Computing is a highly virtualized platform that provides compute, storage, and networking services between end devices and traditional Cloud Computing data centers, typically, but not exclusively, located at the edge of the network Bonomi et al. (2012). The Fog extends the Cloud to be closer to the things that produce and act on IoT data. These devices, called fog nodes, can be deployed anywhere with a network connection: on a factory floor, on top of a power pole, alongside a railway track, in a vehicle, or on an oil rig. Any device
with computing, storage, and network connectivity can be a fog node. Examples include industrial controllers, switches, routers, embedded servers, and video surveillance cameras Evans (2015).

By acting at the peripheral level, that is, on the edge of the network, it is possible to manage vast amounts of data without necessarily passing through the Cloud, with two undeniable advantages: on the one hand, it reduces the bandwidth needed to reach the Cloud or the corporate data center; on the other, one can assume an increase in the level of security, because the infrastructure may be more controllable. In fact, processing does not take place in the Cloud but on local smart structures, capable of providing their own limited computational power to perform simple tasks, yet designed to send to the Cloud only critical and useful data for processing. In this way, latency decreases and bandwidth is saved.

This paper addresses the Fog Service Placement Problem (FSPP), which aims to determine an optimal mapping between IoT applications and computational resources, with the objective of optimizing the fog landscape utilization while satisfying the QoS requirements of applications. In the FSPP, we consider a methodology to design the best deployment configuration in the context of the smart energy domain. To account for the QoS requirements of applications, the deadlines and throughput of the applications must be preserved. The proposed methodology allows one to estimate an upper bound (i.e., a worst-case estimation) of the response time of relevant transactions, which guides the design of the test scenarios to be executed on a real testbed.

The main contributions made in this work are briefly described as follows:

a) a new methodology to address the Fog Service Placement Problem has been defined;
b) a Fog Application model from the related work has been extended and used to apply the proposed methodology;
c) an IoT application from the Smart Energy domain has been chosen as a real case study to demonstrate the effectiveness of the methodology;
d) new advanced functionalities have been developed to extend a peer-to-peer platform that implemented the chosen case study in the CoSSMic European project;
e) three different deployment solutions of the platform have been investigated using the proposed methodology, demonstrating the effectiveness of the Fog computing paradigm;
f) experimental results have been described and discussed to validate the methodology and to estimate the processing capability of the extended CoSSMic platform.

The remaining material is organized as follows. Section 2 introduces the advantages and limitations of Cloud computing for IoT applications. Section 3 discusses the benefits and challenges of moving from the Cloud to the Fog. In Sect. 4 the Fog Application model and the methodology to address the Fog Service Placement Problem are described, and the case study is presented. The application of the proposed methodology and the experimental results are described in Sect. 5. Section 6 reviews the related work, providing an overview of works related to Fog Computing in the Smart Grid. Finally, conclusions are drawn in Sect. 7.

2 Cloud computing for IoT applications and smart grids

The main strengths introduced by Cloud Computing in the IoT sector are high-quality and fast services at the lowest cost to the user. The limited autonomy, due to the small capacity of batteries, creates an upper limit with regard to the hardware components of the device itself. As the vast majority of IoT devices have significantly limited resources compared to Cloud platforms, the Cloud is considered an essential model to allow mobile customers to perform burdensome operations.

Nevertheless, from the point of view of the Internet of Things, the real limitation of Cloud Computing actually stems from the type of network for which it was designed a decade ago. Given the inevitable convergence of Mobile and Cloud Computing, the focus shifts to the actual conditions of utilization of these models, which are often placed in a hostile operating environment. The additional resource shortage of IoT devices, compared to current smartphones and tablets, is due to their spread even in very specialized domains, where they often require a miniaturization of the components that does not allow high performance. For the whole category of connected wearable objects, and for sensors, actuators and small IoT gateways, the limited available resources force users to move a quantity of data, and the related computations, to the Cloud, with a frequency and quantity greater than in the case of mid-range devices such as tablets, smartphones and laptops.

The Internet of Things needs to operate on fast network topologies that provide end-to-end connections and real-time responses: consider, for instance, the frequent disconnections and reconnections of devices, or the notification of a disaster or of an imminent collapse of the system. In many cases, decisions must be taken in a short time, and it is necessary to be able to rely on a reliable connection between the customer and the corresponding servant that performs complex tasks. In many situations, especially those dictated by the overload of communications in multi-hop WAN networks, these qualities are not guaranteed by the Cloud. In fact, it aims to manage modern real-time responsiveness requirements, for secure data and user applications, in more

relaxed timings than those the IoT needs. In addition, the Cloud is designed to work on large networks, which may contain bottlenecks or even interruptions. The lack of connection is actually not the only reason for service interruption. Let us consider the case in which shared access to a resource is available to all users connected to the same WAN. As often happens in networks that convey a large number of users, both human and artificial, as occurs more and more frequently in modern IoT scenarios, at peak usage there is a high probability that the shared resource becomes effectively unusable by its users, due to an overload of network communications. The Cloud is not designed for the reception of data with the frequency and speed of those produced by numerous IoT devices; moreover, the analysis of those data on Cloud platforms involves a continuous movement of a huge amount of data. All this leads to congestion, which directly results in denial of service, slowdowns or disconnections Papageorgiou et al. (2015).

In addition to the congestion problems, the Cloud is unsuited to mobile users, since it is not rare that they change their address, while using the same service, moving from one network to another. Moreover, Cloud Computing suffers from other problems, such as those related to security and legal issues. With regard to security, it can be said that the main issues are related to the location of users' data. It is not always possible, or convenient, to move the computational load over the network to the Cloud, because performance is not guaranteed in some critical situations of particular interest.

The Smart Grid represents the most relevant IoT application in the wider Smart Metering context. It has emerged to provide an intelligent power infrastructure, but it is integrating IoT devices to gather more and more detailed information, introducing additional requirements. The integration of WSNs, actuators, smart meters, and other components of the power grid together with information and communication technology (ICT) is referred to as the Internet of Energy (IoE). IoT technology integrated within the smart power grid comes with the cost of storing and processing a large volume of data every minute. These data include end users' load demand, power line faults, the status of network components, scheduled energy consumption, forecast conditions, advanced metering records, outage management records, enterprise assets, and many more. Hence, utility companies must have the software and hardware capabilities to store, manage, and process the collected data efficiently. Witt (2015) explains how the high volume of data gathered in the smart grid is similar in size and characteristics to the concept of big data.

The utilization of a Cloud architecture provides, in addition to high scalability, a number of other advantages in the Smart Grid field, such as better interoperability, rapid elasticity, better maintainability, as well as cost reduction.

On the other hand, data security and privacy remain relevant obstacles to migrating Smart Grid applications to the Cloud Simmhan et al. (2010). In particular, scalable access to information on energy assets needs to be balanced with data privacy and security (using identified data), which must not affect the performance of such mission-critical applications. The resilience of the ICT infrastructure is relevant to make available the data from which information is extracted in order to detect faults, isolate them, and then resolve them. Also the security of the ICT infrastructure becomes a requirement to avoid failures caused by cyber-attacks or by combined attacks on the power grid and on the ICT infrastructures.

So, for an important domain such as the Smart Grid, it is really important to move towards an architecture that overcomes the described limitations.

3 From Cloud to Fog: benefits and challenges

Moving resources to a point of the network closest to where the data are produced is the solution to counter the high latencies that impede the execution of applications that require near real-time behaviour and generate high data traffic. Fog Computing aims at moving the Cloud Computing paradigm to the edge of networks, in particular those that connect all the devices belonging to the IoT Bonomi et al. (2012). The edge of the network is identified by the subnet of the devices in the network and can have a reduced or a wide extension, depending on whether the internal devices are wireless or broadband.

The nodes, even if less performing compared to Cloud platforms, are suitable for the execution of complex tasks and the storing of data. They are also scalable and adaptable to different types of deployment.

In this part of the network, Cloud services can operate directly, or at most through a single intermediary, with mobile customers that consume or produce data. According to the commonly accepted definition of Fog Computing Vaquero and Rodero-Merino (2014), it represents a scenario where a high number of wireless devices communicate and cooperate among themselves and with network services to support the storage of data and the computational processes without intervention from third parties.

The conceptual model that describes the Fog architecture fits all devices that extend the capabilities of the Cloud in a dedicated intermediate level of connection to end devices for data protection. Fog nodes have the characteristic of being distant from the Cloud data center with which they communicate, but very close to the user devices. Fog nodes also are numerous and scattered

throughout the territory, according to a geographically convenient distribution that covers the largest number of users.

Fig. 1  Fog architecture overview

As shown in Fig. 1, Fog nodes work as a bridge between IoT devices and the Cloud. Using this information, they send signals to actuators or communicate with users via mobile devices. The variety and the potentially enormous number of endpoint devices spotlight the significance of such a layer in the IoT architecture. Data processing does not take place only in the Cloud, but also in local smart structures, capable of providing their own limited computational power to perform simple tasks. However, they offload to the Cloud the critical tasks whose requirements overcome their capabilities. In this way, they decrease latency. The main goal of the Fog is not just latency, which is a key issue in hostile environments for the early identification of significant and potentially damaging events that occur in the system, but also the downsizing of the unnecessary and redundant data sent to the central network. Fog computation reduces the redundant and unnecessary traffic, preserving the bandwidth for the most significant processes and sending only regular statements (clustering) regarding the state of the system under test. The availability of computational resources distributed close to the data sources can be exploited to deploy technical solutions that have already been applied in other fields for data gathering and data fusion (e.g. the algorithms proposed by Ling et al. (2017) to increase the lifetime of Wireless Sensor Networks).

The advantages of adopting such a model also include support for mobility. When a node realizes that the connection with a device is becoming too weak and latency is affected, it can move the application to another Fog node, nearest to the moving device, in order to avoid service interruptions.

The Fog tier also provides contextual data to the devices and applications that need it. Fog nodes are also able to extrapolate infrastructural information in a more detailed way than the Cloud data center. Services enjoy additional information about the status of the radio channel used for communication and the geographic location of the unit, as well as of the devices that use them.

The resulting model is the element that will enable existing infrastructure to meet the demands of modern applications, especially those related to IoT, which need to process data close to the source to minimize latency and avoid frequent connections with the Cloud, limiting the traffic generated on the network.

The applications residing on Fog nodes are not software sectors in isolation, but are part of larger solutions that also cover the Cloud and user levels. Arridha et al. (2017) show how fog nodes are relevant to collect and forward data for real-time monitoring, while Cloud resources are necessary to run big data analytics. The optimal deployment of software components proposed in the next sections will allow for an effective distribution of the analytics between the Cloud and Fog layers. In this way the Cloud level will only be solicited by requests with soft time constraints, such as, for example, the statistical processing of the system, historical analysis and the long-term storage of Big Data. Fog nodes store less data than the Cloud, and data are preserved for less time, just the time needed for the execution of the real-time IoT service. This separation of responsibility lightens the load of data sent to the Cloud and allows one to increase the security of the sensitive information passing through the Fog level.

Fog nodes inherit the characteristics of Cloud server devices and are highly virtualized environments. The benefits resulting from the virtualized software model are the same as in the Cloud environment. They ensure a logical isolation of shared components, both between the various running processes and among the areas where the data are allocated.

Managing a myriad of heterogeneous devices leveraging centralized Cloud technologies is a very complex task, so much so that it often leads to the loss of the benefits of adopting the Fog computing paradigm. In particular, this paper contributes to addressing the problem of placing the software components of an IoT application in the Fog.

4 Fog Application model

Here we use and extend the Fog Application model defined in Skarlat et al. (2017) and shown in Fig. 2. An IoT application consists of several services, which run on virtualized IoT devices (i.e., Fog cells) and interact with each other to provide a well-defined functionality. Hence, an application is distributed in a Fog landscape by placing the services onto particular Fog cells. Let A = ∪_{i=1}^{n} A_i be a set of applications to be placed in a fog landscape. Each application consists of a set of services. A service a ∈ A_i is characterized by its demands for computational resources, i.e., CPU c_a^C, RAM c_a^M, storage c_a^S, bandwidth c_a^N, and by a service type T_a. The type T_a specifies the requirement of a service to be executed

Fig. 2  Fog architecture

on a specific kind of resource, e.g., one equipped with a specific sensor. Without loss of generality, we consider three types of services: sensing, processing, and actuating services. A service has an estimated make-span duration m_a.

An application A_i can be characterized by a deadline D_{A_i}, which defines the maximum amount of time allowed for the application execution, and by an arrival rate of requests AR_i, which represents the maximum workload it is necessary to process. The application response time r_{A_i}, with A_i ∈ A, depends on the make-spans of its services a ∈ A_i and on the communication time.

A Fog cell f is a virtualized single IoT device coordinating a group of other IoT devices, i.e., sensors and actuators. Fog cells run on IoT devices with computational, network, and storage capacities, while pure sensors and actuators are IoT devices without such resources. A Fog cell is a software environment that provides access to, control of, and monitoring of an underlying physical IoT device. The computational resources of a Fog cell are usually very limited. Fog cells are organized in a Fog colony. Each colony is identified and managed by its control node F. We denote by Res(F) the set of Fog cells which are managed by F. A colony acts as a micro data center with resources scattered in a certain area, i.e., each Fog cell belongs to the resources of a colony: f ∈ Res(F). The communication link between a Fog cell f and a control node F is characterized by a negligible delay. Under the assumption of symmetric links, and considering that a Fog cell f can communicate only with the control node F of its own colony, we denote by d_f the communication link delay between f and F, with f ∈ Res(F). Colonies communicate with each other via their control nodes. A control node is a more powerful Fog cell. It could work as a proxy server or a gateway. Usually, a control node F is supplemented with computational resources that are faster and more expensive than those of Fog cells but, at the same time, slower and cheaper than Cloud resources. F is characterized by its capacities: CPU C_F^C, RAM C_F^M, network C_F^N and storage C_F^S. Each control node has to perform resource provisioning by exploiting the computational resources of the Cloud to overcome its own limitations. A Cloud R has theoretically unlimited resources. The logical link between a control node F and the Cloud R has a non-negligible delay d_R.

4.1 Fog service placement problem

The Fog Service Placement Problem consists of finding the optimal mapping between IoT applications and computational resources, with the objective of optimizing the Fog landscape utilization while satisfying the QoS requirements of applications. Here we do not care about the allocation of services to Fog cells, but we focus on the placement of Fog services on the control nodes rather than on the Cloud. The proposed methodology aims at supporting the developer in determining the following subsets of placements:

– Which services have to be executed locally on F's own resources?
– Which services have to be deployed in the Cloud R?

Recalling the application model, we observe that it could be mandatory to deploy a service a ∈ A_i on the Fog node F because of its type T_a, which may require specific resources. This is the case of drivers that communicate directly with sensors or with Fog cells f ∈ Res(F).

The binary variable x_a indicates whether it is mandatory to deploy a service in the current Fog node F; y_a defines that the service a is optionally placed on the control node. The binary variable z_a defines whether the service a is deployed to the Cloud. Variables x_a implicitly account for the types of services and the types of IoT devices through Res_a(F). Analogously, variables y_a indicate that the control node can host the service a.

As a first constraint, each service a has to be placed either in the Fog or in the Cloud:

∑_{f}^{Res_a(F)} x_a + y_a + z_a = 1,   ∀a ∈ A_i, ∀A_i ∈ A    (1)

Second, the placed services must not exceed the capacities of CPU, RAM, bandwidth and storage of the corresponding Fog resource. Equation 2 guarantees that the services placed on the control node do not exceed a given percentage μ of the available resources:

∑_{a}^{A_i} c_a^γ ∗ (y_a + z_a) ≤ μ C_F^γ,   γ = {C, M, S}    (2)

A different formulation defines the network requirements in terms of bandwidth. Here we further distinguish the case of data sent from the Fog to the Cloud and vice versa
Table 1  Taxonomy of the Fog application model

Parameter      Description

Application
  A            Set of applications to be executed
  A_i          Application
  D_{A_i}      Deadline for an application (in s)
  r_{A_i}      Response time of an application (in s)
  a            Service in an application
  c_a^C        CPU demand of a service
  c_a^M        RAM demand of a service
  c_a^S        Storage demand of a service
  c_a^N        Network demand of a service
  m_a          Makespan of a service
  T_a          Type of a service
  d(a, F)      Response time of a service executing on a Fog node
  d(a, R)      Response time of a service executing on a Cloud node

Fog landscape
  R            Cloud
  F            Control node
  C_F^C        CPU capacity of F
  C_F^N        Network capacity of F
  C_F^M        RAM capacity of F
  C_F^S        Storage capacity of F
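The notation of Table 1 maps naturally onto a few record types. The following sketch (ours, not part of the CoSSMic code base; all names are illustrative) encodes the taxonomy in Python:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Service:
    """A service a of an application, with its Table 1 demands."""
    c_cpu: float       # c_a^C, CPU demand
    c_ram: float       # c_a^M, RAM demand
    c_storage: float   # c_a^S, storage demand
    c_net: float       # c_a^N, network demand
    makespan: float    # m_a, estimated make-span (s)
    s_type: str        # T_a: "sensing", "processing" or "actuating"

@dataclass
class Application:
    """An application A_i: a set of services plus a deadline D_Ai."""
    services: List[Service]
    deadline: float    # D_Ai (s)

@dataclass
class ControlNode:
    """A control node F with its capacities."""
    cap_cpu: float     # C_F^C
    cap_ram: float     # C_F^M
    cap_storage: float # C_F^S
    cap_net: float     # C_F^N

# Example: a two-service application with a 5 s deadline
app = Application(
    services=[Service(0.2, 64, 10, 1, 0.5, "sensing"),
              Service(0.6, 256, 50, 2, 1.5, "processing")],
    deadline=5.0,
)
print(len(app.services))  # -> 2
```

Such records are only a convenient container for the parameters that the constraints below operate on.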

(Eq. 4). This is necessary for asymmetric channels, such as the ones available in residential buildings or in mobile networks. Equation 3 computes the bandwidth c_{F_N}^u used by all the services placed on the control node which send data to services running on the Cloud. It also defines the upload constraint C_{F_N}^u of the control node and the download constraint C_{R_N}^d of the Cloud.

c_{F_N}^u = ∑_{a_i}^{Res(F)} ∑_{a_j}^{Res(R)} c_{a_i,a_j}^{N_u} ∗ m_{a_i,a_j} ∗ (y_{a_i} ∗ z_{a_j} + y_{a_j} ∗ z_{a_i}),   ∀a ∈ A_i, A_j, ∀A_i, A_j ∈ A    (3)
c_{F_N}^u ≤ μ C_{F_N}^u
∑_{F} c_{F_N}^u ≤ μ C_{R_N}^d

Equation 4 computes the bandwidth c_{F_N}^d used by all the services placed on the control node which receive data from services running on the Cloud. It also defines the download constraint C_{F_N}^d of the control node and the upload constraint C_{R_N}^u of the Cloud.

c_{F_N}^d = ∑_{a_i}^{Res(F)} ∑_{a_j}^{Res(R)} c_{a_i,a_j}^{N_d} ∗ m_{a_i,a_j} ∗ (y_{a_i} ∗ z_{a_j} + y_{a_j} ∗ z_{a_i}),   ∀a ∈ A_i, A_j, ∀A_i, A_j ∈ A    (4)
c_{F_N}^d ≤ μ C_{F_N}^d
∑_{F} c_{F_N}^d ≤ μ C_{R_N}^u

As a third constraint, we account for the response time r_{A_i} of an application, so that it does not violate the application deadline, as in Eq. 5:

r_{A_i} ≤ D_{A_i},   ∀A_i ∈ A    (5)

The application response time r_{A_i} depends on the overall make-span m_{A_i}. The overall make-span m_{A_i} accounts for the time needed to transfer the services to the computational resources, execute them, and retrieve the results, considering that the communications are coordinated by the control node. The make-span m_{A_i} depends on the execution time of each service according to its placement, as in Eqs. 6–7. We define m_{A_i} as:

m_{A_i} = ∑_{a}^{A_i} d(a, F) ∗ (x_a + y_a) + d(a, R) ∗ z_a    (6)

where d(a, F) and d(a, R) represent the make-span of a service when it is executed on the control node F and on the Cloud R, respectively.

In Table 1 we summarize the defined notation.

In the following sections the service placement problem will be addressed, focusing on CPU and network requirements and applying a three-step methodology.

4.2 BET methodology

A solution to the presented model will be found using a methodology based on Benchmarking, Evaluation and Testing (BET) activities.
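As a minimal illustration of constraints (1), (2), (5) and (6), the sketch below checks whether a candidate Fog/Cloud split of an application's services is feasible. It is a deliberate simplification of the model (only the CPU capacity is checked, and per-service demands and make-spans are plain numbers); the function and its parameter names are ours, not from the paper:

```python
def feasible(cpu_demand, d_fog, d_cloud, placement, cap_cpu, mu, deadline):
    """placement[a] is 'fog' or 'cloud' for each service index a.
    Returns True when the CPU capacity bound (2) and the deadline
    bound (5) on the make-span (6) both hold."""
    n = len(cpu_demand)
    assert len(placement) == n                      # (1): one placement per service
    # (2): services kept on the control node must fit in a fraction mu of C_F^C
    used = sum(cpu_demand[a] for a in range(n) if placement[a] == "fog")
    if used > mu * cap_cpu:
        return False
    # (6): make-span, d(a,F) for fog-placed and d(a,R) for cloud-placed services
    span = sum(d_fog[a] if placement[a] == "fog" else d_cloud[a]
               for a in range(n))
    return span <= deadline                         # (5): r_Ai <= D_Ai

# Two services: keep the first on the control node, offload the second
print(feasible([0.3, 0.8], [1.0, 4.0], [2.0, 3.0],
               ["fog", "cloud"], cap_cpu=2.0, mu=0.5, deadline=5.0))  # -> True
```

A placement that passes this check is only a candidate: the BET steps below refine it against benchmarked execution times and a real testbed.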

Fig. 3  Architecture overview

1. Benchmarking: the computation and communication requirements will be estimated on the target platform for different workloads. In particular, each service will run on the target platform in order to estimate the execution times d(a, F) and d(a, R) with increasing input sizes.

2. Evaluation: the presented model will be used to compute an upper bound to the workload that can be processed by the different deployment configurations. In particular, Eq. 2 in the case of CPU utilization becomes:

∑_{a}^{A_i} c_a^C ∗ (y_a + z_a) / C_F^C = ∑_{i}^{n} ∑_{a}^{A_i} (d(a, F) ∗ y_a + d(a, R) ∗ z_a) ∗ λ_a ≤ μ    (7)

where λ_a is the arrival rate of requests for the service a, which must be maximized ∀a ∈ Res(F), and the other parameters have been estimated by benchmarks. Equation 7 defines the same problem in terms of the percentage of time used by the services. The objective is to maximize the throughput of all services λ = {λ_1, ..., λ_n}, keeping free a percentage μ of the CPU, which takes into account the error introduced by the model and the overhead consuming processing resources. The analytical solution of Eq. 7 provides a Pareto front of solutions consisting of threshold values for the arrival rates in the different deployment configurations.

3. Testing: the evaluation results will be used to reduce the space dimension of the test cases. The evaluation provides an upper bound to the maximum throughput that can be processed by the nodes in each deployment configuration. Testing will be used to validate the evaluation results, to tune the deployment configuration, reproducing the real environment by a testbed, and to estimate the overhead.

Smart Energy represents a killer use case for the adoption of the Fog paradigm. In order to demonstrate the application of the proposed approach, we extended the CoSSMic platform described in Jiang et al. (2016) and evaluated three different deployment solutions. In the CoSSMic scenario, the consuming appliances and photo-voltaic panels of a neighbourhood are continuously monitored. Users plan the utilization of their devices using a software application, and the system computes and enforces the best schedule of the consuming appliances, to optimize the global self-consumption at neighbourhood level. Users' preferences and constraints include some parameters that allow one to define the flexibility of the schedule, but users need to set in advance which kind of program they are running.

Here we focus on some extended functionalities that have been developed to automatically learn the energy profiles corresponding to the different working programs of an appliance, and to predict, at device switch-on time, which program is actually going to run.

Let us suppose a user has switched on her washing machine. The start will be detected by the system, which will stop the device and will schedule the next switch-on at the best time of the day, according to the energy requirements of the predicted working program. At the end of the run, the measured energy consumption will be used to improve the average consumption profiles and to update the prediction model.

In Fig. 3 we show an overview of the CoSSMic platform. It is possible to identify four main layers:

– The Sensor Layer is composed of different kinds of sensors Gupta et al. (2016), digital smart meters, digital controls, and analytic tools to monitor and control the two-way energy flow.
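Returning to the Evaluation step, Eq. 7 with a single common arrival rate for every service reduces to bounding the CPU busy time per request. The helper below (ours; the benchmark numbers are invented) derives the largest sustainable common rate under that simplifying assumption:

```python
def max_common_rate(d_exec, mu):
    """d_exec[a]: benchmarked per-request execution time (s) of service a on
    its assigned node, i.e. d(a, F) or d(a, R).  With a single arrival rate
    lambda for all services, Eq. (7) becomes lambda * sum(d_exec) <= mu."""
    return mu / sum(d_exec)

# Three services: per-request times taken from hypothetical benchmarks
rate = max_common_rate([0.02, 0.05, 0.01], mu=0.8)
print(round(rate, 1))  # -> 10.0 requests/s
```

Sweeping the per-service rates instead of a single common one yields the Pareto front of threshold arrival rates mentioned above.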

S. Venticinque, A. Amato

– The Sensor Network collects and disseminates environmental data. Wireless sensor networks facilitate monitoring and controlling of physical environments from remote locations at any time. Devices in the grid can send information through wireless interfaces (for example UHF or Zigbee). Such a communication layer allows for the transmission of data and control signals using heterogeneous technologies and across different kinds of area networks. For example, data can be collected locally using embedded systems that are hosted at the user's home, in a server, or directly on the user's smartphone, according to the complexity of the applications, the amount of data and the provided functionalities.
– Drivers implement the bridge between sensors and the integration platform. In fact, sometimes the same technology cannot adequately manage the characteristics of data from different sources. The lack of standards and the availability of many proprietary solutions require an effort to develop a new driver for each different technology to be integrated into the architecture. At the next layer, data are collected in centralized or distributed repositories.
– The Integration Layer integrates data coming from a number of sensor drivers, providing a uniform representation model, which is used to store and manage the information at the Data Layer. Data flowing from the sensor layer are characterized here by complexity of different types, which makes challenging the extraction of the relevant information they provide as a whole. In fact, data are heterogeneous, as they come from very different sources or they are representative of different phenomena (the sources can be utility meters rather than sensors that detect environmental quantities or human phenomena).
– The Data Layer holds all data that have to be processed.
– The Application Logic. Specifically-designed data analysis procedures are also used to detect and analyse data Patel et al. (2012). On relevant situations, productions activate messages which are sent to the User Interface and notify users about dangerous situations or recommend correct behaviors. XMPP1 (Extensible Messaging and Presence Protocol) is used as the transport layer to deliver such messages to final users, reusing the mechanisms of the protocol (friendship, presence, multi-user chat, etc.) to identify available receivers, groups, etc. Other actions consist of real-time commands to electric devices in the smart house. They are delivered using specific APIs of the integration layer, which are implemented by the different drivers. For example, through the device drivers the mediator APIs allow to switch on or switch off devices when there is a dangerous situation.
– The User Interface (UI) supports interactive control and configuration of the system. It provides the user with real-time monitoring information, statistics on historical data and feedback from the coaching system. XMPP messages are used to trigger alerts, to communicate with other users and with the application layer, providing information and feedback to the Application Logic.

1  https://xmpp.org/.

4.3 The software platform

In Fig. 4 the component diagram of the CoSSMic application is shown.

1. The Data collector stores measures for monitoring, real-time processing and analysis.
2. The Event detector is in charge of detecting the start and the stop of a consuming device.
3. The Profile clustering identifies from historical measures the energy profiles of the different working programs of each device.
4. The Profile modeler computes the average profile for each cluster.
5. The Usage modeler learns from historical measures the common usage of a device by each user.
6. The Profile prediction predicts, for each user and each device, the next running program when the device is switched on.
7. The Device scheduler, according to PV production, energy requirements and users' constraints, computes the best schedule of appliances.
8. The Device controller enforces the schedule, automatically switching on/off the controlled devices.

Profile clustering, Profile modeler and Profile prediction implement the new functionalities that automatically execute tasks which were manually performed by users.

The main issue investigated in this section and in the following ones will be the deployment of the services, which implement the monitoring and the learning functionalities, in the Fog nodes rather than in the Cloud. The energy scheduler is a peer-to-peer application that exploits the collaboration of all control nodes and is out of scope in this contribution.

In Table 2 the learning and monitoring software components are renamed according to the notation defined in the previous Section. For convenience, the terms service and component will be used interchangeably.

4.4 Deployment configurations

Different deployment configurations of the CoSSMic platform have been preliminarily defined and investigated


Fig. 4  Component diagram of the CoSSMic application

Table 2  CoSSMic application model

Application       Services                        Interaction
A0: drivers       a0,i: driver                    A0 → A1, A1 → A0, A4 → A0
A1: monitoring    a1,1: collector                 A0 → A1, A1 → A2
                  a1,2: detection
A2: learning      a2,1: clustering                A1 → A2, A2 → A3
                  a2,2: profile approximation
                  a2,3: user model
A3: prediction    a3,1: prediction                A3 → A4
A4: scheduler     a4,1: scheduler                 A4 → A0, A3 → A4

in Amato et al. (2014b) and in Amato et al. (2014a). Here we focus on the deployment of the new monitoring and learning components to evaluate the new overhead introduced and the compliance with some relevant requirements. For the sake of simplicity we will deploy the full applications A0−3 in the Cloud or in the Fog according to the following configurations:

The All-in-cloud configuration locates all the software components remotely. The Fog nodes will just host a queue service that is in charge of routing energy measures to the Cloud and of receiving commands from remote. The cost of a centralized and complete knowledge about the distributed system will be a greater latency and possibly privacy issues to be addressed. The All-in-fog solution hosts every component in Fog nodes. Here we have no privacy issues and reduced latency, but we need to tune the software components in order to comply with the limited resources of each node. The Half-in-fog solution will try to find a good compromise between performance and system capabilities.

The main requirement for this application is the response time of prediction ( A1 ) and of detection; in fact it is relevant to apply the reaction in real time for switching off the device and for triggering the scheduler to allocate the required energy. Another requirement we investigate in this paper is the bandwidth in upload, which can be limited when the Fog control node is installed in a household with an ADSL link.
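The upload requirement above can be checked with a back-of-the-envelope calculation. The sketch below is illustrative (the per-sample message size and the device count are assumptions, not CoSSMic measurements): it computes the upload rate a Fog control node generates when forwarding every energy sample to the Cloud, as the All-in-cloud configuration requires.

```python
def upload_rate_kbs(sample_rate_hz: float, sample_size_bytes: int) -> float:
    """Upload bandwidth (KB/s) generated by forwarding every energy
    sample to the Cloud, ignoring protocol retransmissions."""
    return sample_rate_hz * sample_size_bytes / 1024.0

# Illustrative figures: 10 devices reporting once per second,
# ~400 bytes per HTTP POST (assumed payload plus headers).
rate = upload_rate_kbs(sample_rate_hz=10.0, sample_size_bytes=400)
print(round(rate, 2))  # 3.91 KB/s, well below a typical ADSL uplink
```

Such an estimate only bounds the steady metering traffic; bursts due to switch events come on top of it.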


Table 3  CoSSMic deployment configurations

       All-in-cloud    All-in-Fog      Half-in-Fog
App.   xa  ya  za      xa  ya  za      xa  ya  za
A0     0   1   0       0   1   0       0   1   0
A1     0   0   1       0   1   0       0   1   0
A2     0   0   1       0   1   0       0   0   1
A3     0   0   1       0   1   0       0   1   0
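The placement vectors of Table 3 can be encoded directly as binary flags. In the sketch below we assume, as the table suggests, that (xa, ya, za) place application a on the IoT device, the Fog node, or the Cloud respectively (the tier naming is our assumption), and that a well-formed configuration assigns each application to exactly one tier:

```python
# Each application is placed on exactly one tier; the (x, y, z) flags
# are assumed to mean (IoT device, Fog node, Cloud), following Table 3.
CONFIGS = {
    "all-in-cloud": {"A0": (0, 1, 0), "A1": (0, 0, 1), "A2": (0, 0, 1), "A3": (0, 0, 1)},
    "all-in-fog":   {"A0": (0, 1, 0), "A1": (0, 1, 0), "A2": (0, 1, 0), "A3": (0, 1, 0)},
    "half-in-fog":  {"A0": (0, 1, 0), "A1": (0, 1, 0), "A2": (0, 0, 1), "A3": (0, 1, 0)},
}

def is_valid(config):
    """A placement is well-formed when every application sits on exactly one tier."""
    return all(sum(flags) == 1 for flags in config.values())

assert all(is_valid(c) for c in CONFIGS.values())
```

Encoding the configurations this way makes it straightforward to enumerate and filter candidate placements against capacity constraints.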

Obviously, compliance with these requirements depends both on the hardware resources and on the workload to be processed, but both can change according to the deployment configuration.

The placement of each application according to the proposed model is shown in Table 3. Additional technological details are provided in the following sub-sections.

4.4.1 All‑in‑cloud

As shown in Fig. 5, measures and application data for all users and all devices are stored in the Cloud. All services also run in the Cloud, except the IoT drivers. The data collector is implemented by a web service with a REST interface that receives energy samples in real time from devices through the Fog control node. The event detector works in pipeline with the data collector in order to identify the switch-on and the switch-off of each device. That means there will be relevant latency between the switch-on detection and the switch-off control that must be applied in real time. On the other hand, this solution has the advantage of storing all data in the Cloud, which means a centralized full knowledge base of energy measures. For this reason, all the needed information is available for clustering the time-series of the same device type. Clustering is triggered when the completion of a device run is detected. It allows the classification of the different working programs, which will be used to predict the energy requirements at the next switch-on. Of course, time-series recorded for the same device type at different households allow to characterize the energy profiles earlier and better, and these can be used for new households, or for those for which no measures have been collected yet. The clustered time-series are used by the Profile modeler to compute the representation of each energy profile. There will be one average profile per cluster. The clustering will also allow to learn the user's behaviour in terms of device utilization. It means that, for each user and for each device, a regression model will be estimated to predict the next program that will be planned, observing the sequence of previous runs. Such a model is used by the Profile prediction when the switch-on of a device has been detected. The energy profile corresponding to the same model is used by the Device scheduler to compute the assigned start time for the device. In the case the scheduler is also running in the Cloud, the only information to be communicated to the gateway is the assigned start time. We assume that the delay of the assigned start time is not relevant with respect to the dynamics of the schedule, which needs a certain degree of flexibility from the user to optimize the energy utilization.

Fig. 5  All-in-cloud deployment configuration

4.4.2 All‑in‑fog

As shown in Fig. 6, measures and application data of each user are stored locally, in her own Fog node. All services also run locally: in this configuration all components will run on the Fog node. This was the default configuration deployed by the CoSSMic project in all trials. No data are transferred to the Cloud, and only the average profiles need to be communicated between the distributed instances of the scheduler to compute the assigned start time. It is straightforward to observe that there are no latency issues in this case, apart from the time required to update the schedule. Even security and privacy issues are reduced, because only an analytical model of the average energy profile is shared among gateways. On the other hand, the utilization of local data will limit the learning process of energy profiles, which cannot exploit the time-series of other households. Focusing on the scope of this contribution, it is critical to evaluate the performance resulting from the execution of all software components on the embedded platform hosting the gateway; in fact the original software did not include the new learning capabilities (Fig. 6).
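The per-cluster averaging performed by the Profile modeler can be sketched as a point-wise mean over the clustered time-series. This minimal version pads shorter runs with zeros, a simplification of the spline-based representation the platform actually uses:

```python
def average_profile(runs):
    """Point-wise mean of the time-series assigned to one cluster.
    Runs are lists of power samples; shorter runs are zero-padded,
    a simplification of the spline modelling used by the platform."""
    length = max(len(r) for r in runs)
    padded = [r + [0.0] * (length - len(r)) for r in runs]
    return [sum(col) / len(runs) for col in zip(*padded)]

# Two emulated washing-machine runs of the same working program.
profile = average_profile([[0.0, 2.0, 2.0, 0.5], [0.0, 2.0, 1.0]])
print(profile)  # [0.0, 2.0, 1.5, 0.25]
```

The resulting average profile is the compact representation that the distributed scheduler instances exchange instead of raw measures.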


Fig. 6  All-in-Fog deployment configuration

Fig. 7  Half-in-Fog deployment configuration

4.4.3 Half‑in‑fog

The third configuration aims at reducing the latency and the bandwidth utilization, but would like to use the Cloud resources to exploit the collective knowledge about energy consumption by appliances, off-loading the most expensive computation. The proposed deployment configuration is shown in Fig. 7. The only software components running in the Cloud are the Profile clustering, the Profile modeler and the User modeler. The local gateway will upload to the Cloud the time-series of a completed run after the switch-off has been detected. The Cloud services are in charge of returning the updated average profiles and the usage model. This also allows for providing energy profiles as a service to other energy management systems adopting alternative approaches, like the one described in Gentile et al. (2016). There are no hard constraints for updating the models, because there will be at least the time until the next utilization of the appliance by the user. On the other hand, the only computational burden, which is the utilization of the regression model for predicting the starting working program, is less than the computational requirements of the training phase.

5 Experimental results

Experiments aim at evaluating the performance and the resource utilization of different IoT deployment configurations. The testbed has been set up with an 8 GB i7 workstation hosting Ubuntu Linux 17.04 and a fog node that is a Raspberry Pi 3 board with the Raspbian Jessie operating system. Three different configurations have been evaluated to estimate the performance improvement provided by an optimal deployment choice. To reproduce a realistic scenario we emulated the workload using energy measures of the CoSSMic trials.

The Tsung2 stress tool has been used to model and run the tests and to monitor the resource utilization. Tsung allows the definition of a testing scenario by concurrent sessions. Sessions can start on a single machine or on multiple machines. Each session is composed of one or more requests, which can be grouped by transactions. A transaction is defined as a sequence of messages whose statistics will be evaluated.

2  tsung.erlang-projects.org.

The first designed scenario is shown in Fig. 8a. It describes, by a sequence diagram, the workload model in the case software components are deployed all in the Cloud or all in the Fog. Two concurrent sessions have been defined. The first session, which is called metering, sends measured data to the Emoncms content management system. This session simulates the common behaviour of any smart meter. It includes just one transaction, sending a single HTTP request to invoke a REST method for posting energy samples.

The second session, which is called run, emulates the start and the stop of an appliance. It occurs with a lower probability than the first session. It includes two kinds of transactions. The first one, which is called prediction, is composed of a blocking XMPP message that notifies the start of the device. Then it waits for the predicted energy profile that is going to be executed. The second transaction, called learning, is non-blocking: it notifies the stop of the device. The learning transaction is delayed according to the duration of the run. The samples of the run between the start and the stop are emulated by instances of the metering session. The schedule transaction is missing because it is out of the scope of this paper. The second designed scenario is shown in Fig. 8b. It describes, by a sequence diagram, the workload model in the case software components are distributed between Cloud and Fog. In particular we have two instances of the learning agent, which execute the prediction function locally and the learning functions in the Cloud. The light blue color is used here to highlight the only transaction distributed between Fog and Cloud. In this case the energy samples are not transmitted to the Cloud, but at the end of the run the monitored consumption is communicated by only


Fig. 8  Testing scenarios: (a) all in Cloud and all in Fog; (b) half in Fog

one message at the end of the execution. Moreover, the prediction model is asynchronously received from the Cloud, but it does not belong to a transaction because we are interested only in its effect on the bandwidth utilization.

5.1 Benchmarking

A preliminary analysis of the performance of the software components has been carried out running all functions in the Cloud with different workloads. We emulated a sequence of 200 runs of a washing machine. The clustering and the learning algorithms are executed on all the collected profiles, and the prediction uses the model trained on the full history. The spline approximation, instead, is computed for each cluster, using just the last five recorded profiles assigned to it.

In Fig. 9 the blue points represent the time for completing each computation. The red line is the linear approximation of the time distribution that allows to predict the performance figures with increasing workload. In Fig. 9a the time increases with the number of processed individuals. The time to compute the spline models depends on the number of clusters identified and on the number of time-series in each cluster. For this reason in Fig. 9b on the x axis we plotted the product between the number of identified clusters and the number of time-series. Such a number varies between 1 and 45 because just the last five runs are used to compute the spline. Figure 9c, d show the time needed to build a classification model using the random forest algorithm and the application of the same model to predict which program is starting, according to the switch-on date-time and the previous occurrences. We can observe that the time increases slightly and linearly with the length of the sequence, and the time to complete the prediction is comparable with the classification.

In Fig. 10 the performance of the processing elements has been evaluated in the case all the software components run on the Raspberry. Here we can observe that the performance results are obviously worse, but they are comparable with the case they run in the Cloud.

The relevant issue to be addressed deals with the number of devices and data to be handled locally. In fact the processing time of each element will affect the maximum acceptable arrival rate of incoming requests. We expect a reduced number of devices to be handled locally, but such a solution does not allow to exploit data from other fog nodes to speed up and improve the learning process. Benchmark results will be assigned to the parameters of Eq. 7 to select the feasible deployment configurations.

5.2 Evaluation

Here we evaluate the model formulated in Sect. 4.1, setting the arrival rates of energy samples. In Table 4 evaluation results are shown for different values of the arrival rate of the energy samples. We also supposed that 20% of the energy samples belong to switching events. Values have been chosen to reproduce underloaded, overloaded and normal working conditions.

We expect that when either cCF > 1 or cCR > 1 the system will be overloaded and the execution time will continuously increase until a failure.

The system will work in normal condition when both cCF < 𝜇F and cCR < 𝜇R, where the 𝜇 parameter depends on the


Fig. 9  All-in-cloud performance: (a) clustering performance; (b) spline computation; (c) Random Forest classification; (d) Random Forest prediction

overhead and the amount of resources used by the operating systems. In normal conditions, we expect that the execution time will match the values provided as an input to the model, even if fluctuations of the workload can introduce some deviations.

The system will work in normal condition when both cCF << 𝜇F and cCR << 𝜇R. It will be able to reduce deviations and to process longer and greater fluctuations. We do not distinguish in this case upload and download. In this case cNR and cFR are the same, as we consider one Fog control node.

5.3 Testing

The performance evaluation of the processing elements allowed only for an estimation of the processing capability of the working nodes (Fog and Cloud). It allows to estimate an upper bound for the arrival rate of processing requests. However, in order to evaluate the latency, the bandwidth and the computation overhead, it is necessary to test realistic workloads on real testbeds.

Here we reproduce the scenario shown in Fig. 8a to evaluate the resource utilization in terms of CPU load and communication overhead at the Cloud side, and to estimate the latency of each transaction. In particular, we are presenting performance results with different values of the arrival rate. Each test will last for 2 min.

5.3.1 All in cloud scenario

In this scenario the fog node is sending all sensor data to the cloud node and receives both the prediction about the next load profile to schedule and the energy profile itself. Every computation is done in the Cloud and all the historical data are stored there. In three different experiments requests arrive at a different rate. For this scenario the values of the inter-arrivals are 1, 0.5, and 0.1 s. In each experiment 20% of requests correspond to the start and to the stop of a device, and 80% of requests are any other energy samples. The arrival rate of requests during the run of each experiment is shown in Fig. 11a.


Fig. 10  All-in-Fog performance: (a) clustering performance; (b) spline computation; (c) Random Forest classification; (d) Random Forest prediction

Table 4  Evaluation of CPU and network utilization

            Sample rate (req/s)   cCF    cCR     cNR (KB/s)
All-cloud   1                     0      0.075   0.385
            2                     0      0.15    0.77
            10                    0      0.72    3.85
All-fog     0.2                   0.11   0       0
            1                     0.52   0       0
            2                     1.04   0       0
Half-fog    1                     0.14   0.042   0.27
            2                     0.28   0.084   0.54
            10                    1.4    4.42    2.7

In Fig. 11 the rate of relevant transactions is also shown. In particular we observe how the arrival rate of the different requests increases in the different experiments. They are all accepted, but some of them wait enqueued before being processed when the inter-arrival is too short.

The processing capability of the Cloud can be understood by comparing the arrival rate of Fig. 11 with the duration of the transactions shown in Fig. 12.

In Fig. 12a the load of the Cloud node is shown for the different experiments. Of course the workload grows with the arrival rate. It can be more than 1 because it has a quad core processor.

In Fig. 12b–d the duration of the different transactions must be evaluated with different criteria. In fact, we must observe that each sample transaction is an HTTP request that simply posts an energy value. It means that each request waits only until the HTTP response is received. On the other hand, the stop transaction triggers the clustering and the learning processes. It is asynchronous, which means the duration includes only the transmission time, as no acknowledgement is expected and the connection is already open. Finally, the predict transaction must wait for the response. In fact, while the previous requests simply increase the load of the Cloud node, the predict transaction must complete as soon as possible to start the schedule and to find the best time to switch on the device.
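This comparison can be condensed into a utilization coefficient, the product of the arrival rate and the mean service time, in the spirit of the c values of Table 4 (the 75 ms service time below is an illustrative assumption, not a measured figure):

```python
def utilization(arrival_rate_hz, service_time_s):
    """Utilization coefficient of a processing element: the fraction of
    time it is kept busy. Values above 1 mean the element cannot
    sustain the incoming rate and queues grow without bound."""
    return arrival_rate_hz * service_time_s

# Assumed 75 ms of Cloud-side work per request, evaluated at the
# three arrival rates used in the all-in-cloud experiments.
for rate in (1, 2, 10):
    print(rate, round(utilization(rate, 0.075), 3))
```

The same one-line formula, fed with the benchmarked service times, is what selects the feasible deployment configurations before testing.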


Fig. 11  Transaction rate in three experiments of the all-in-cloud scenario: (a) requests per second; (b) 1 s inter-arrivals; (c) 0.5 s inter-arrivals; (d) 0.1 s inter-arrivals

The effect of the higher overhead of the Cloud node is observed by a longer duration of the prediction. In the case of the highest arrival rate the prediction is received after 20 s, which becomes not acceptable after two minutes, and it is expected to increase because the Cloud node is not able to process the workload.

In fact another effect, not shown in Fig. 12, is represented by the number of requests processed after the end of the experiment, when all the messages have been sent. Average arrival rates and average durations of transactions are provided in Table 5.

5.3.2 All in Fog scenario

In this scenario the fog node never sends data to the Cloud. All data are stored locally and reactions are locally computed and applied. Clustering and learning are computed in the Fog node, and prediction is performed locally as well. Because of the limited capability of the Raspberry platform, the same throughput cannot be processed locally. For this reason, in three different experiments sessions are started with the following values of inter-arrivals: 10, 1, 0.5 s. In each experiment 10% of requests correspond to the switch-on and 10% to the switch-off of a device, and 80% of requests are any other energy samples. The arrival rate of requests during each experiment is shown in Fig. 13a.

In Fig. 13 the throughput of transactions in the different experiments is shown. The arrival rates have been chosen to show the results in overloaded, normal and edge conditions. In Fig. 14 the duration of transactions in the different experiments shows that in the overloaded and edge conditions the prediction time increases, as the throughput is not sustainable by the fog node. With 10 s inter-arrivals the node is able to process the incoming workload. Let us observe that this means we receive a measure from any device each 10 s, and a device can switch on or switch off not more often than every 100 s, on average. Additional details are provided in Table 6.
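Whether a node keeps up with a given inter-arrival time can be checked by weighting each transaction type by its share and duration. The durations below are illustrative stand-ins loosely inspired by the fog-node measurements, not the reported values:

```python
def sustainable(arrival_rate_hz, mix):
    """mix maps a transaction type to (share_of_requests, duration_s).
    The offered load (requests per second times mean work per request)
    must stay below 1 for the node to keep up."""
    per_request = sum(share * duration for share, duration in mix.values())
    return arrival_rate_hz * per_request < 1.0

# Assumed Fog-node costs: cheap samples, expensive predictions.
mix = {"sample": (0.8, 0.015), "predict": (0.1, 5.6), "stop": (0.1, 0.3)}
print(sustainable(0.1, mix), sustainable(2.0, mix))  # True False
```

With these assumed costs, 10 s inter-arrivals (0.1 req/s) are comfortably sustainable while 0.5 s inter-arrivals (2 req/s) are not, matching the qualitative outcome of the experiments.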


Fig. 12  Duration of transactions and CPU load in different experiments of the all-in-cloud scenario: (a) CPU load; (b) 1 s inter-arrivals; (c) 0.5 s inter-arrivals; (d) 0.1 s inter-arrivals

Table 5  Transaction statistics of the all-in-cloud scenario

Name     1_r      0.5_r    0.1_r    1_d       0.5_d     0.1_d
Predict  0.06/s   0.29/s   0.56/s   0.51 s    0.92 s    5.30 s
Sample   0.59/s   1.56/s   4.32/s   21.18 ms  22.44 ms  16.31 ms
Stop     0.12/s   0.16/s   0.52/s   0.731 ms  0.634 ms  0.738 ms
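Statistics like those of Table 5 can be produced by aggregating per-transaction records collected during a test run. A minimal sketch (the sample records are made up for illustration):

```python
from collections import defaultdict

def mean_durations(records):
    """Aggregate (transaction_name, duration_s) records into per-name
    mean durations, as reported in the transaction statistics tables."""
    acc = defaultdict(list)
    for name, duration in records:
        acc[name].append(duration)
    return {name: sum(v) / len(v) for name, v in acc.items()}

stats = mean_durations([("sample", 0.020), ("sample", 0.024), ("predict", 0.5)])
print(round(stats["sample"], 3))  # 0.022
```

Tsung reports these aggregates directly; the sketch only makes explicit how the per-transaction means are obtained.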

5.3.3 Half in Fog scenario

In this scenario the fog node sends only the time-series of runs to the Cloud, when the appliance stops. The fog node receives the prediction models and the spline representations of the clustered profiles. All data are stored locally and reactions are locally computed and applied. Clustering and learning are computed in the Cloud, but prediction is performed locally. In three different experiments sessions are started with the following values of inter-arrivals: 1, 0.5, 0.1 s. In each experiment 10% of requests correspond to the switch-on and 10% to the switch-off of a device, and 80% of requests are any other energy samples. The arrival rate of requests during the run of each experiment is shown in Fig. 15a. Even if the arrival rate, that means the number of devices, is the same as in the first scenario, the workload is shared with the Cloud node. Offloading clustering and learning, the fog node can use its own computing capability to increase its throughput. In Fig. 16 the duration of transactions in the different experiments is shown. We observe that even in the third experiment, when the node cannot process the incoming requests, the duration of the prediction is less than in the first scenario. On the other hand, the fog node is delegated to handle only local devices, and the arrival rate of 0.5 s would allow to serve a switch-on every 5 s, which


Fig. 13  Transaction rate in three experiments of the all-in-fog scenario: (a) requests per second; (b) 5 s inter-arrivals; (c) 1 s inter-arrivals; (d) 0.5 s inter-arrivals

is far from being a constraint. Average values of the duration of transactions and of the arrival rates are provided in Table 7.

5.4 Communication overhead and bandwidth

In Fig. 17 the size of the data read and written by the Cloud node through the loop-back and the Ethernet interfaces is shown. We observe that the data read from the Ethernet interface are almost the same in the first and in the second experiment, even if with a slightly different distribution. The data written to the loop-back interface increase in the all-in-cloud scenario because the energy samples must be stored and the database utilization increases. Even the number of HTTP responses written to the Ethernet interface is relevant because of the higher arrival rate. An interesting phenomenon is observed in the third experiment. Here we observe a transient high bandwidth utilization at the beginning of the experiment, but the overhead due to the computation slows down the following operations and the bandwidth utilization decreases very fast. The same effect is not shown in the half-in-fog experiment because the number of HTTP transactions is 10% with respect to the all-in-cloud scenario.

6 Related works

According to Dastjerdi et al. (2016), advantages associated with Fog computing include the following:

– Reduction of network traffic: fog computing provides a platform for filtering and analysing the data generated by devices close to the edge, and for the generation of local data views. This drastically reduces the traffic sent to the Cloud.
– Suitable for IoT tasks and queries: with the increasing number of smart devices, most of the requests are related


Fig. 14  Duration of transactions and CPU load in different experiments of the all-in-fog scenario: (a) CPU load; (b) 5 s inter-arrivals; (c) 1 s inter-arrivals; (d) 0.5 s inter-arrivals

Table 6  Transaction statistics of the all-in-fog scenario

Name     5_r      1_r      0.5_r    5_d       1_d      0.5_d
Predict  0.05/s   0.08/s   0.17/s   2.73 s    2.97 s   5.61 s
Sample   0.17/s   0.69/s   1.3/s    15.75 ms  14.3 ms  0.25 s
Stop     0.02/s   0.10/s   0.21/s   0.505 ms  0.31 ms  0.258 ms

to the surroundings of the device. Hence, such requests can be served without the help of the global information stored in the Cloud.
– Low latency requirement: mission critical applications require real-time data processing. The control system running on the Cloud may make the sense-process-actuate loop slow or unavailable as a result of communication failures. Fog computing helps by performing the processing required by the control system very close to the robots, thus making real-time response possible.
– Scalability: even with virtually infinite resources, the Cloud may become the bottleneck if all the raw data generated by end devices are continuously sent to it. Since fog computing aims at processing incoming data closer to the data source itself, it reduces the burden of that processing on the Cloud, thus addressing the scalability issues arising out of the increasing number of endpoints.

In particular there are several application scenarios that could benefit from fog computing, which have been investigated in several research papers. In Patil (2015) some applications of fog computing and the benefits that fog provides in those contexts are described.

The key advantages of Fog Computing in the Smart Grid are described in Gia et al. (2015). The challenges regard latency-sensitive issues, location awareness and large data


Fig. 15  Transaction rate in three experiments of the half-in-fog scenario: (a) requests per second; (b) 1 s inter-arrivals; (c) 0.5 s inter-arrivals; (d) 0.1 s inter-arrivals

transmission. Undoubtedly, the more data are transmitted over a network, the higher the possibility of errors, because bit errors, data transmission latency and packet dropping probability are proportional to the volume of transmitted data.

Perera et al. (2017) focuses on Fog Computing in the Smart Grid; it describes several inspiring use case scenarios of Fog computing, identifies ten key characteristics and common features of Fog computing, and compares more than 30 existing research efforts in this domain. Based on their review, the authors further identify several major functionalities that ideal Fog computing platforms should support and a number of open challenges toward implementing them, to shed light on future research directions on realizing Fog computing for building sustainable smart cities.

In Minh et al. (2017) the authors present the Fog Service Placement Problem (FSPP), which allows to place IoT services on virtualized fog resources while taking into account Quality of Service (QoS) constraints like deadlines on the execution time of applications. The authors also provide a model for an IoT application and a resource model for the fog landscape. Afterwards, the FSPP is described, and a corresponding optimization model is formalized and validated using experimental evaluation.

The Fog Service Placement Problem has also been investigated in Skarlat et al. (2017), where the authors present a conceptual fog computing framework and model the service placement problem for IoT applications over fog resources as an optimization problem, which explicitly considers the heterogeneity of applications and resources in terms of Quality of Service attributes. The authors also propose a problem resolution heuristic based on a genetic algorithm, showing through experiments that the service execution can achieve a reduction of network communication delays when the genetic algorithm is used, and a better utilization of fog resources when the exact optimization method is applied.


Fig. 16  Duration of transactions in three experiments of the half-in-fog scenario: (a) CPU load; (b) 1 s inter-arrivals; (c) 0.5 s inter-arrivals; (d) 0.1 s inter-arrivals

Table 7  Transaction statistics of the half-in-fog scenario

Name     1_r      0.5_r    0.1_r    1_d       0.5_d    0.1_d
Predict  0.09/s   0.14/s   0.46/s   3.77 s    4.07 s   8.71 s
Sample   0.63/s   1.53/s   1.66/s   33.75 ms  0.11 s   11.30 ms
Stop     0.11/s   0.15/s   0.24/s   0.89 ms   0.9 ms   0.851 ms

7 Conclusion

Fog computing can be seen as an extension of Cloud computing with extra features that address some challenges of the Cloud. In particular, it focuses attention on the smart utilization of all the computational resources distributed in the network, which becomes a main challenge because of the increasing development of IoT technologies. In this scenario, this paper proposed a methodology to address the Fog Service Placement Problem, that is, to find the best deployment solution for an IoT application in the Fog, taking into account available resources, architecture and requirements. A Fog Application model defined in related work has been adopted and extended to apply the proposed methodology. A real case study in the Smart Energy domain and a software platform from the CoSSMic European project have been extended with new functionalities to support the automatic learning of energy profiles of appliances and the prediction of energy requirements of future use. The choice of deployment configurations can be driven by functional requirements, such as the possibility to provide energy profiles as a service to other energy management systems like the one described in Gentile


Fig. 17  Network utilization: (a) 1 s (all-in-cloud), (b) 1 s (half-in-fog), (c) 0.5 s (all-in-cloud), (d) 0.5 s (half-in-fog), (e) 0.1 s (all-in-cloud), (f) 0.1 s (half-in-fog)


et al. (2016); however, the effect on performance and service levels cannot be neglected. The changed non-functional requirements have been analyzed for an optimal deployment according to the Fog computing paradigm. Experimental results demonstrated the application of the BET methodology, which allowed for an effective deployment of the new learning functionalities integrated in the CoSSMic platform. In particular, we were able to evaluate and validate the maximum workload that can be processed with the available computational resources in different deployment configurations. However, the precision of the estimated performance is limited by the availability of a testbed that is as similar as possible to the real case. We think that the proposed methodology can provide guidelines to the developer at programming and deployment stage to meet application requirements and to optimize performance and utilization of available resources. The next step will focus on the development of a tool that allows for running the benchmarking, evaluation and testing automatically; collecting performance results in this way could speed up and improve the job of the developer. The design and development of a decision support system, which automatically recommends the optimal deployment configurations and the estimated performance figures, is a future work. The dynamic optimization of the deployment of a Fog Application by reconfiguration is another hint for future investigation on autonomic fog applications.

References

Amato A, Aversa R, Di Martino B, Scialdone M, Venticinque S, Hallsteinsen S, Horn G (2014a) Software agents for collaborating smart solar-powered micro-grids. In: Caporarello L, Di Martino B, Martinez M (eds) Smart organizations and smart artifacts: fostering interaction between people, technologies and processes. Springer International Publishing, Cham, pp 125–133. https://doi.org/10.1007/978-3-319-07040-7_14

Amato A, Di Martino B, Scialdone M, Venticinque S, Hallsteinsen S, Jiang S (2014b) A distributed system for smart energy negotiation. In: Fortino G, Di Fatta G, Li W, Ochoa S, Cuzzocrea A, Pathan M (eds) Internet and distributed computing systems: 7th international conference, IDCS 2014, Calabria, Italy, September 22–24, 2014. Proceedings. Springer International Publishing, Cham, pp 422–434. https://doi.org/10.1007/978-3-319-11692-1_36

Arridha R, Sukaridhoto S, Pramadihanto D, Funabiki N (2017) Classification extension based on IoT-big data analytic for smart environment monitoring and analytic in real-time system. Int J Space Based Situated Comput 7(2):82–93. https://doi.org/10.1504/IJSSC.2017.10008038

Bonomi F, Milito R, Zhu J, Addepalli S (2012) Fog computing and its role in the internet of things. In: Proceedings of the first edition of the MCC workshop on mobile cloud computing, MCC '12. ACM, New York, NY, USA, pp 13–16. https://doi.org/10.1145/2342509.2342513

Dastjerdi AV, Gupta H, Calheiros RN, Ghosh SK, Buyya R (2016) Fog computing: principles, architectures, and applications. arXiv:1601.02752 [cs]. Accessed 5 Apr 2018

Evans D (2011) The internet of things: how the next evolution of the internet is changing everything. Tech. rep., Cisco Internet Business Solutions Group (IBSG). http://www.cisco.com/web/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf. Accessed 5 Apr 2018

Evans D (2015) Fog computing and the internet of things: extend the cloud to where the things are. Tech. rep., Cisco Internet Business Solutions Group (IBSG). https://www.cisco.com/c/dam/en_us/solutions/trends/iot/docs/computing-overview.pdf. Accessed 5 Apr 2018

Gentile U, Marrone S, Mazzocca N, Nardone R (2016) Cost-energy modelling and profiling of smart domestic grids. Int J Grid Util Comput 7(4):257–271. https://doi.org/10.1504/IJGUC.2016.081012

Gia TN, Jiang M, Rahmani AM, Westerlund T, Liljeberg P, Tenhunen H (2015) Fog computing in healthcare internet of things: a case study on ECG feature extraction. In: 2015 IEEE international conference on computer and information technology; ubiquitous computing and communications; dependable, autonomic and secure computing; pervasive intelligence and computing (CIT/IUCC/DASC/PICOM). IEEE, pp 356–363

Gupta H, Dastjerdi AV, Ghosh SK, Buyya R (2016) iFogSim: a toolkit for modeling and simulation of resource management techniques in internet of things, edge and fog computing environments. arXiv:1606.02007

Jiang S, Venticinque S, Horn G, Hallsteinsen S, Noebels M (2016) A distributed agent-based system for coordinating smart solar-powered microgrids. pp 71–79. https://doi.org/10.1109/SAI.2016.7555964. Accessed 5 Apr 2018

Ling C, Lifang L, Xiaogang Q, Gengzhong Z (2017) Cooperation forwarding data gathering strategy of wireless sensor networks. Int J Grid Util Comput 8(1):46–52. https://doi.org/10.1504/IJGUC.2017.10003009

Minh QT, Nguyen DT, Le AV, Nguyen HD, Truong A (2017) Toward service placement on fog computing landscape. In: 2017 4th NAFOSTED conference on information and computer science, pp 291–296. https://doi.org/10.1109/NAFOSTED.2017.8108080

Papageorgiou A, Cheng B, Kovacs E (2015) Real-time data reduction at the network edge of internet-of-things systems. In: Tortonesi M, Schönwälder J, Madeira ERM, Schmitt C, Serrat J (eds) 11th international conference on network and service management, CNSM 2015, Barcelona, Spain, November 9–13, 2015. IEEE Computer Society, pp 284–291. https://doi.org/10.1109/CNSM.2015.7367373

Patel S, Park H, Bonato P, Chan L, Rodgers M (2012) A review of wearable sensors and systems with application in rehabilitation. J NeuroEng Rehabil 9(1):21. https://doi.org/10.1186/1743-0003-9-21

Patil PV (2015) Fog computing. In: International journal of computer applications (0975–8887), national conference on advancements in alternate energy resources for rural applications (AERA-2015), pp 1–6

Perera C, Qin Y, Estrella JC, Reiff-Marganiec S, Vasilakos AV (2017) Fog computing for sustainable smart cities: a survey. ACM Comput Surv 50(3):32:1–32:43. https://doi.org/10.1145/3057266

Simmhan Y, Giakkoupis M, Cao B, Prasanna VK (2010) On using cloud platforms in a software architecture for smart energy grids. In: International conference on cloud computing technology and science (CloudCom). IEEE, poster

Skarlat O, Nardelli M, Schulte S, Borkowski M, Leitner P (2017) Optimized IoT service placement in the fog. Serv Orient Comput Appl 11(4):427–443. https://doi.org/10.1007/s11761-017-0219-8

Vaquero LM, Rodero-Merino L (2014) Finding your way in the fog: towards a comprehensive definition of fog computing. SIGCOMM Comput Commun Rev 44(5):27–32. https://doi.org/10.1145/2677046.2677052

Witt S (2015) Data management and analytics for utilities. http://www.smartgridupdate.com. Accessed 29 Sep 2016
