
Politecnico di Milano

School of Industrial and Information Engineering


Dipartimento di Elettronica, Informazione e Bioingegneria
Master of Science in Computer Science and Engineering

Implementation of a smart bin for waste monitoring and recognition

Advisor: Prof. Alessandro Enrico Cesare Redondi

Thesis by:

Federico Seri, student ID 854032

Academic Year 2017–2018


A small idea
for a small planet.
More green and less grey.
Acknowledgements
Abstract
The amount of waste produced on a daily basis is constantly increasing. More and
more frequently we read in newspapers about poor waste management and the inability
to dispose of waste. All this is often connected to a poor adoption of the separate
collection system or to a bad organizational capacity of the companies responsible
for waste disposal.
To improve the current situation, we propose a two-module smart bin in this
thesis. The first module consists of a network of smart bins connected to the internet. Each
bin is able to know its fill status, detect any unexpected objects inside and notify
the user when an intervention is necessary. The second module checks whether an object
has been inserted in it and suggests to the user in which recycling bin to throw it. The two
modules can easily be combined together or used separately as two different
smart bin versions.
To know the filling state we use an ultrasonic sensor capable of measuring
distance. To detect unexpected situations we use a weight sensor and make estimates
using historical data. For the second smart bin we use the ultrasonic sensor
to detect an inserted object, and through a small camera and an image recognition
service we can identify the material of that object.
Sommario
The amount of waste produced daily is constantly growing, and all too often its
poor management and the inability to dispose of it make the news. This is often
connected to a poor adoption of the separate collection system or to a bad
organizational capacity of the companies in charge of waste disposal.
To improve the current situation we propose a smart bin composed of two modules.
The first consists of a network of bins connected to the internet, able to know
their fill status, detect any unexpected objects inside and notify the user when an
intervention is necessary. The second module checks whether an object has been
inserted inside it and suggests to the user in which recycling bin to throw it. The
two modules can easily be joined together or used individually as two different
smart bin versions.
To know the fill status we employed an ultrasonic sensor capable of measuring
distance; to detect anomalous situations we use a weight sensor and make estimates
using historical data. For the second smart bin we again use the ultrasonic sensor
to detect an inserted object and, using a small camera and an image recognition
service, we are able to identify the material of the object.
Contents

Introduction

1 State of the art
1.1 Capacity monitoring smart bin state of the art
1.2 Waste recognition state of the art

2 Technological background
2.1 The IEEE 802.11 Standard
2.2 LAMP
2.2.1 GNU/Linux
2.2.2 Apache HTTP Server
2.2.3 PHP
2.2.4 MySQL
2.3 MQTT
2.3.1 Structure
2.3.2 Reliability
2.3.3 Security
2.4 REST API
2.5 Node-RED
2.6 Microcontroller
2.6.1 Arduino
2.6.2 Raspberry Pi
2.6.3 NodeMCU - Devkit 0.9
2.7 Used sensors
2.7.1 Load Cell
2.7.1.1 HX711
2.7.2 HC-SR04
2.7.3 ArduCAM
2.7.4 INA219 DC Current Sensor

3 System implementation
3.1 Common functionalities
3.1.1 Current measure
3.1.2 Measure the distance
3.1.3 Obtain a unique identifier
3.1.4 Store values in flash memory
3.2 Smart bin - Capacity monitoring module
3.2.1 Server implementation
3.2.1.1 Receiving data
3.2.1.2 Showing information
3.2.2 Client implementation
3.2.2.1 Hardware
3.2.2.2 Software
3.3 Smart bin - Vision module
3.3.1 Server implementation
3.3.2 Client implementation
3.3.2.1 Hardware
3.3.2.2 Software

4 Evaluation
4.1 Smart bin: capacity monitoring modules
4.1.1 Bins map
4.1.2 Smart notifies
4.1.3 Problems
4.2 Smart bin - Vision module
4.2.1 Accuracy
4.2.2 Time efficiency
4.2.3 Further experiments
4.2.4 Problems
4.3 Consumption analysis

Conclusions

Bibliography
List of Figures

2.1 ISO/OSI Model and the IEEE 802.11 layers
2.2 LAMP logo
2.3 MQTT Broker
2.4 Sample of a Node-RED flow
2.5 MQTT setup
2.6 Node-RED dashboard
2.7 Arduino Microcontrollers
2.8 Raspberry Pi 3 Model B
2.9 NodeMCU v0.9
2.10 Watchdog error
2.11 Load cell (10kg)
2.12 Analog to Digital Converter HX711
2.13 Ultrasonic sensor: HC-SR04
2.14 ArduCAM-Mini-5MP-Plus OV5642 Camera Module
2.15 INA219
3.1 Wiring diagram of the Current Measure circuit
3.2 Two smart bins of different size
3.3 Node-RED blocks for the server
3.4 Dashboard creation using Node-RED
3.5 Information shown to the user
3.6 Wiring diagram of a Smart Bin capacity monitoring module
3.7 Load cell assembling
3.8 UML sequence diagrams
3.9 Wiring diagram of the smart bin vision module
3.10 Cardboard mock-up
4.1 Representation of the collected data
4.2 20 cm smart bin
4.3 Bins map and their state
4.4 Mock-up of a possible device used by the waste collector
4.5 IFTTT notification samples on smartphone
4.6 Examples of notifications on smartphone using emails
4.7 Example of different image sizes
4.8 Example of highly deformed plastic object
4.9 Analysis on aluminium cans
4.10 Newspapers are hardly recognized due to their size
4.11 Comparison of images with and without background
4.12 Consumption graph of 2 cycles of data collection and transmission
4.13 Camera consumption
4.14 Total consumption of the smart bin vision module
List of Tables

3.1 Result of the material detection
4.1 Accuracy on plastic objects
4.2 Accuracy on aluminium objects
4.3 Accuracy on paper objects
4.4 Accuracy on unsorted objects
4.5 Total accuracy per resolution
4.6 Accuracy per resolution
4.7 File dimension per resolution
4.8 Time elapsed per resolution
4.9 Power consumption of the MKR1000
4.10 Consumption of the smart bin
4.11 Power consumption of camera sensor OV5642
List of Algorithms

3.1 Main client function
Introduction
Nowadays more and more trash is created and waste management is becoming
crucial, both for developed countries and developing ones. The main idea is not to
propose a definitive solution to this widespread and complex problem, but to alleviate and
improve the current inefficient solutions. In order to achieve this ambitious aim we
focus on an advanced smart bin project composed of two modules.

1. The capacity monitoring module, a network of smart bins that communicate
their status to a central server.

2. The vision module, a system that can recognize waste and recommend the
appropriate receptacle according to the material of the waste.

The first module's intention is to improve the garbage collection process. Currently
every single bin must be inspected to figure out the fill level and then decide whether there
is a real need to empty it. Moreover, in a relatively small environment, like a building
on a university campus, greater attention is also required from the waste collector.
In fact, the employee has to understand whether any organic waste is present inside
the wastebasket, which requires collection regardless of the fill level and, furthermore,
a mandatory change of the dirty rubbish bag.
The second module focuses on another crucial problem of waste management:
recycling. Many countries all over the world want to reduce the quantity of waste
produced and its correlated cost, in terms of money for the waste disposal and in
terms of health for a greater environmental quality. Particularly virtuous in this
field is the EU, which has fixed the ambitious targets of recycling 65% of municipal
waste and up to 75% of packaging waste [8]. To achieve this we have created a system
that, through the use of a camera, identifies the object to throw away and suggests the
most appropriate bin to recycle it correctly.
This smart bin project is currently based on WiFi technology, which is widespread
throughout the campus and provides stable, fast and low-energy-consumption connectivity.
In the near future we aim to replace it with 5G sensors, further increasing
the performance of the project and also removing the constraint of the reduced WiFi
coverage.


We are proposing here a smart bin that is cheap (all the electronic components
used cost only a few euros) and easy to use (the two modules do not require
any particular skill or knowledge). As an example of this last point, to obtain a
recycling bin recommendation it is sufficient to insert the object in the bin, and the
response appears in a couple of seconds; for the smart bin network, the notifications
are automatically sent to the desired device and the map can easily be used on a
smartphone screen.
For the construction of the first module, the capacity monitoring module, we
added to some common waste bins a microcontroller (a tiny computer only a few
centimeters long) and some sensors used to detect the fill status of the bin and its
weight. These smart bins can also communicate with a central server by connecting to a
pre-existing WiFi network. The data sent by all the devices are collected by a
central server that stores and processes them and then informs the end user about the
status of the bin. It can detect not only the fill status of the bin but also report whether
something unexpected has been thrown in a waste bin. Users can monitor the
status of the bins using a map of the building at any time, or can receive a notification
on their favorite device, like a smartphone, computer or smartwatch, through emails or
push notifications.
The vision module is built using the same microcontroller, wired to a camera
sensor. In this case the microcontroller takes a picture of the object inserted in
this smart bin and, again with the support of a central server, it can recommend
to the user the appropriate recycling bin to use in order to correctly recycle that object.
The server receives the image of the object and then replies to the smart bin with its waste
bin proposal. From the transmitted image we obtain a list of possible objects using an
existing image recognition service called Amazon Rekognition. This service
provides a complete Software Development Kit written in PHP that has been used
to extract information from the image.
The thesis is divided into 4 main chapters, and every chapter can be conceptually
split into two parts: the former is related to the capacity monitoring module whereas
the latter to the vision module.
In Chapter 1 we analyze the previous smart bin implementations and the existing
papers in the literature that inspired us to develop this project.
In Chapter 2 we describe the theoretical and practical technological background
adopted to develop this project.
In Chapter 3 we introduce our approach to the problem. Initially we describe the
common features, then we enter into the details, showing how we have created the
modules step by step.
Finally, in Chapter 4 we present the obtained results and some possible use cases in
real scenarios of the two smart bin modules. We also show a detailed analysis of
their power consumption in relation to current commercial batteries.


We have dedicated the conclusions section to describing the obtained results and some
proposals for future work.

Chapter 1

State of the art


This chapter gives an overview of the state of the art in waste management and
of previous implementations of the smart bin concept.
As we will see, the idea of a capacity monitoring smart bin as a group of
microcontrollers that measure and communicate the fill status is not an original idea,
and many others have developed similar projects using different components and
implementations.
Our work in this field started on top of these previous prototypes, focusing on the use
of smaller and less power-consuming components. An example in these projects are
the GSM modules, widely adopted to transmit information, which compared to
the integrated WiFi module of the Arduino MKR1000 are more expensive and require
more power.
On the contrary, the vision module for a smart bin, i.e. the concept of image analysis
for waste sorting, is quite new and to date few similar examples are available
in the literature. Mainly these works study solutions to automatically recognize and
separate a certain waste material directly in waste treatment plants, requiring expensive
and cumbersome devices.
Instead, our prototype is cheaper and uses only small sensors and a low-power
microcontroller, delegating the heavier computations to a third-party service like AWS that
guarantees high scalability.

1.1 Capacity monitoring smart bin state of the art

As said before, many studies have been done on the smart bin concept, and in
particular a great interest comes especially from Eastern countries, which have
experienced a great economic and demographic development in the last century.
This has led to an exponential growth of the waste produced and therefore to the need
to find a more efficient method of waste disposal.
In 2013 Bashir et al. from two Indian universities created a prototype using a load
cell and infrared sensors to control the fill level. For the communication part, "The system
makes use of radio frequency (RF) tags" [6]. The fill level is not very precise and gives
only 2 bin statuses, because one sensor is "placed at the middle of the Smart Trash
Bin and the second is placed near the top of the Smart Trash Bin". Communication
is not continuous: a message is sent only when the Smart Trash Bin "is filled up
to the specified load and level".
A similar approach to level sensing using infrared sensors (TSOP 1738) was
also presented by Navghane et al. from Lonavala (India) in 2016 [20], who used 3 IR
sensors and a weight sensor to communicate the fill level and the weight status
through a WiFi module.
In 2014 Mamun et al. from Universiti Kebangsaan Malaysia presented a new
framework that enables the remote monitoring of solid waste bins in real time, via
ZigBee-PRO and GPRS [1]. The framework receives various data from a smart
bin thanks to the great number of sensors on it: an accelerometer, a hall effect sensor, an
ultrasound sensor, a temperature and a humidity sensor and a load cell.
Sharma et al. in 2015 published a paper that describes their model of smart bin
using a PIC16F73 microcontroller, an HC-SR04 ultrasonic sensor and a SIM900A GSM
module for the communication part [27]. They also designed a map that gives an idea
of the fill status of the bins in the city, represented with widgets: "These widgets
marking the level of dustbin filled will be put in the location in map exactly the
way dustbins are placed throughout the city". In 2017 Sharma et al. published a
new paper [28], now using an Arduino UNO with a WiFi sensor. In this case their
approach was similar to the one used by us: "When garbage reaches the threshold
limit, the status of the bin is updated in the cloud, and this status can be accessed
by the concerned authorities".
In 2017 Wijaya et al. of Hasanuddin University in Makassar (Indonesia) created
a prototype of a smart waste bin that can send measurements using a GSM modem
for long-range communication and a Bluetooth module for the short range [33]. In
their studies they analyzed the load cell in order to verify the sensor and measurement
accuracy, concluding with a less than 3% error, and the HC-SR04 sensor,
discovering that "The average of error percentage of level sensor is 3-5%. It means
that the sensor can measure the waste level in the bin".
Many other examples are present, but noteworthy is the 2016 prototype of Kumar et
al. from Sri Ramakrishna Engineering College in India, which uses an RFID
tag to confirm the task of emptying the garbage. Moreover, "whenever an RFID tag
(ID card of the cleaner) interrupts the RFID reader, the ultrasonic sensor checks
the status of the dustbin and sends it to the web server" [16]; in this way the
microcontroller provides real-time information to the server.
In 2017 Fei et al. from Universiti Tun Hussein Onn Malaysia published a paper
describing another example of smart bin using the NodeMCU microcontroller
equipped with ultrasonic sensors, tracking collection trucks using a GPS module
in order to inform the nearest collection truck to reach the bin that needs to be
emptied [11].
The same year, Baby et al. from the School of Electronics Engineering in India
presented an example of smart bin that implements different notifications to alert the
authorities to empty the bin. The user receives an email through an SMTP server
and, using IFTTT, a mobile notification is also sent and an event is created in the Google
calendar. The technology used to sense the bin status is a Raspberry Pi connected
to an ultrasonic sensor [5].
Two projects were deployed and tested in real cases during 2015. One was
deployed by Folianto et al. in Singapore, reaching an average data delivery ratio of
99.25% during six months of data collection. Of particular interest is the use of
containers with IP65 certification to prevent sensor malfunctions caused by dust and
water splashes [14].
The other project was deployed by Dynacargo in the Municipality of Nafpaktia in
Greece, using IP67 protection and the MB1040 LV-MaxSonar-EZ4 to detect the
fill status [25].

1.2 Waste recognition state of the art

Despite the fact that waste recycling is an increasingly important theme for the
environmental policies of many states, its automation using an image recognition
system is quite new and there are only a few papers in this field.
The current technologies are based on near-infrared spectroscopy: expensive sensors
that can identify the object material from the surface. A related joint research
project of 3 Italian universities in 2009 shows the results obtained using this technology. The
study of Bonifazi et al. "addressed the application of hyperspectral imaging approach
for compost products characterization, in order to develop control strategies
to be implemented at the plant scale" [7].
The first study aiming to detect different materials through image analysis was
published by Wagland, Veltra and Longhurst in the online journal Waste Management
in 2012. They describe their work as follows: "Research towards new, non-invasive,
remote imaging and image recognition methods to provide faster and more accurate
technologies for waste characterization could lead to significant savings in time and
cost, and a reduction in the risk of worker exposure" [32].
An interesting example project was made in 2015 at the Jesuit University of Guadalajara
in Mexico by Torres-García et al.: they developed the Intelligent Waste Separator
(IWS). "The prototype has to distinguish between three different kinds of
inorganic waste [...] aluminum cans, plastic bottles, and plastic cutlery" [30]. They
used "a multimedia embedded processor, image processing, and machine learning" in
order to select and separate waste. Their approach was similar to the one used for
our prototype: it is also based on image processing, but instead of using a third-party
service like AWS to recognize the object, recognition was done locally. It consists of
converting an RGB image to grayscale, binarizing it, and then assigning a label
to the image through a machine learning algorithm.
Our approach differs mainly in terms of the architecture and hardware used:
this kind of local image recognition requires a PC processor for every camera used, whereas
our solution requires only a microcontroller with a camera sensor. This implies in
our case a greater scalability of the architecture, and component costs, model size and
power consumption that are orders of magnitude lower.
In 2016, during the TechCrunch Disrupt Hackathon in San Francisco, a team created
Auto Trash [10], a similar project that sorts compostable items from
recyclable items using a Raspberry Pi module equipped with a camera to photograph
the object. A custom software model built on top of Google's TensorFlow
AI engine was used for the recognition process.
In 2017 Lei Zhu of ENSTA Bretagne carried out a final study project in collaboration
with Veolia Environnement about Machine Learning for Automatic Waste
Sorting [34]. This paper aims to improve the existing automated separating system of
a waste disposal company, currently partially based on manual selection and
partially automated using near-infrared spectroscopy. Zhu's mission is to "Compare
the performance of the realized machine learning method" and "Realize the labeling
and concept the image base".
In 2017 AMP Robotics created Clarke: a robot that "is now picking an average of one
carton per second off a container line at a Denver-area materials recovery facility"
[24]. It uses an advanced vision system and deep-learning capabilities in order to
identify and separate cartons from mixed waste.

Chapter 2

Technological background
In this section we describe the technological background employed for the development
of the prototype. This chapter is divided into several parts. In the
first, the IEEE 802.11 standard is briefly explained: it is the underlying technology
used by the different smart bins to communicate with the server.
In parts 2-5 we describe the technologies used in the server: the LAMP stack
on which the server is based, Node-RED to easily program the server, and the two
protocols used for the communication, MQTT for the capacity monitoring module
and a REST API to get a response from the image recognition server.
Finally, in the last two sections we describe the technologies used to create the
clients to be deployed in the smart bins. In the sixth section there is a brief description
of the microcontrollers adopted to develop these two modules, and in the seventh
an overview of the sensors used to retrieve data like weight, fill status and
waste pictures.

2.1 The IEEE 802.11 Standard

Figure 2.1: ISO/OSI Model and the IEEE 802.11 layers


The IEEE 802.11 standard is generally called WiFi; more exactly, it is a set
of standards that every device must follow in order to be Wi-Fi Certified by the
Wi-Fi Alliance, owner of the Wi-Fi trademark.
This standard was created and is maintained by the Institute of Electrical and
Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). It is a
set of specifications for the Media Access Control (MAC) and Physical (PHY) layers in
order to implement Wireless Local Area Network (WLAN) communication in the
900 MHz and 2.4, 3.6, 5 and 60 GHz frequency bands.
Over time some adjustments were made and new versions of the initial protocol were
released. These can be identified by the letter beside the protocol number (e.g.
802.11g extended the maximum physical-layer bit rate to 54 Mbit/s).
More specifically, WiFi operates in the 2.4 GHz band over up to 14 channels, but in
2009 IEEE 802.11n was released, which allows operation in the 5 GHz band in order to
increase speed and avoid congestion.
Without entering too much into detail, there are two main components of a wireless
network: the Station (STA), the wireless terminal that uses the connection to
communicate, and the Access Point (AP), which provides access to a wired network.
The standard also provides different security solutions aimed at protecting the
connection and thus the data that passes through the network.
The most common security protocols are:

- WEP: It was part of the original IEEE 802.11 protocol ratified in 1997, but
it is currently deprecated and highly insecure due to its security flaws. It uses
the RC4 cipher, and in August 2001 Scott Fluhrer, Itsik Mantin and Adi
Shamir published a cryptanalysis explaining the security problems and
a possible exploit [13].

- WPA: This was an update of the insecure WEP security protocol (still using
RC4) that allowed securing older devices through a firmware update.
Unfortunately, security flaws were discovered in this protocol too, related to
older problems of WEP and to limitations of the message integrity code hash
function [12].

- WPA2: It is the currently used security protocol and was introduced in 2004
with IEEE 802.11i. It uses the Advanced Encryption Standard (AES) instead
of RC4 and supports two main versions, with protection mechanisms based on the
target end user: Personal and Enterprise.

  - WPA-Personal: It is also known as WPA-PSK (Pre-Shared Key) and is
targeted at small offices or homes, due to the fact that a single key shared
among all users is necessary to access the network.


  - WPA-Enterprise: It is also referred to as WPA-802.1X and is targeted at
networks used by a large number of users. It requires an authentication
server (RADIUS) and allows easily granting or removing access to the network
for single users through appropriate certificates and credentials.

- WPA3: It is a new standard announced by the Wi-Fi Alliance in January 2018 as
a substitute for WPA2. It aims to mitigate the security issues posed by
weak passwords and to simplify the process of setting up devices with no display
interface [2].

For the smart bin project we used an ad hoc WLAN (IoTPolimi) with WPA2-PSK,
created to support the development of different IoT technologies inside the laboratory.
It works as an Access Point (AP) to provide a bridge between different clients. The
network only allows communication inside it and is not connected to any
external network. All the sensors connect to this WLAN using one of the 11
channels available in Europe (1 to 11) in the ISM band between 2.412 and 2.472 GHz.
Around the year 2020 the first commercial applications of 5G technologies are
expected to be released. As soon as sensors are available, this project will be
able to switch to this technology, improving the user experience, since we expect to
obtain greater performance without changing the way users employ this technology.


2.2 LAMP

Figure 2.2: LAMP logo

LAMP is a software stack model suitable for building dynamic web sites and
web applications. Its name is the acronym of its four original components: the Linux
kernel, the Apache HTTP Server, the MySQL relational database management system
and the PHP programming language.

2.2.1 GNU/Linux
At the base of this stack model there is the open-source operating system (OS)
GNU/Linux, based on the Linux kernel. As an OS it manages the computer hardware
and software resources, providing all the services needed by computer programs.
For the development of the project two different Linux distributions were used:

- Raspbian Stretch: It was initially used on the Raspberry Pi single-board computer,
which served as the central server during the initial phases of development.
It is the officially supported operating system and is based on Debian 9
Stretch.

- Ubuntu 16.04: It is the operating system installed on the server used at
deploy time. This server provides greater computational power and so can
handle a greater number of connections and devices at the same time. It has
access to the WiFi network IoTPolimi and it is also used for other projects
inside the ANTLab.

This operating system is used for its quality. Its goal of being secure out of the box is
particularly appreciated: it provides updates and security patches quickly
and for long periods of time. In the case of Ubuntu 16.04, updates are guaranteed for 5
years and will end in 2021.
Another fundamental feature of the operating system is its free and open-source
nature. This means that the whole software is freely licensed to use, copy, study
and change in any possible way, without the need for special permissions or royalties
to pay.


These features made it possible for GNU/Linux to become widely used across
the world and the most used operating system on servers [29].

2.2.2 Apache HTTP Server


Apache HTTP Server, as suggested by the name itself, is a web server; it is free
and open source. Robert McCool created it in 1995 and it played a fundamental role
in the initial growth of the internet intended as a net of web pages.
It is highly extensible: it is composed of a central core that is extended
by a set of modules providing further functionalities. Examples are the
security module (mod_ssl), the authentication modules and a directory-level
configuration file (.htaccess) to control access to whole parts of the server path structure.
Moreover, it provides support for many server-side programming languages like
Python or PHP.
The version used in the development of this project is 2.4, the only currently
supported and updated branch, whose first release dates to the end of February 2012.

2.2.3 PHP
PHP is a general-purpose programming language created by Rasmus Lerdorf in
1994 and designed for web development.
It is commonly used to develop the back-end part of web services and originally allowed the
creation of personal home pages.
There are many modules that extend the functionalities of the language; an
example is the mysqli extension, which provides full access and control over the MySQL DBMS.
PHP 3 added the concept of programming objects to the language, and object
support was improved and fully operational since version 4.
Since PHP is a widely used language in the server domain, there are many third-party
services that provide extensions and Software Development Kits (SDKs). An
example is the AWS SDK, used to perform image recognition through the Amazon
Rekognition service.
Version 7.0 was used for the development of the server; it provides significant
performance improvements thanks to a new core called phpng.

2.2.4 MySQL
MySQL is an open-source Relational DataBase Management System (RDBMS)
created by Ulf Michael Widenius.
It has become the leading choice for web-based applications thanks to its characteristics:
great speed in executing queries, reliability and ease of use.


Relational Database
A relational database is a database based on the relational model (RM)
of data proposed by E. F. Codd in 1970. This model proposes an approach to data
management where all data is seen as tuples, finite ordered lists of elements, and
these tuples are grouped into relations. In this type of database the information is
stored in tables (which represent the relations of the RM), and each piece of information
is inserted as a row of a table (corresponding to the concept of tuple). Using an RDBMS
it is possible to execute SQL queries, which allow executing transactions used to retrieve,
modify or delete the required information.
The transactions of a relational database must guarantee validity also in case of
errors or unexpected events like a power loss. To achieve this goal the DB must follow
the ACID properties:

Atomicity Each transaction is indivisible in its execution: the execution must be
either total or null; partial executions are not allowed.

Consistency The database must be in a consistent state both when a transaction
starts and when it ends. No integrity constraints can be violated and no
inconsistency can be present among the data stored in the DB.

Isolation Each transaction must be isolated and independent from the others, so the
eventual failure of a transaction must not interfere with the other transactions
in execution.

Durability Once a transaction has been committed, it must remain committed
also in case of system failure. To achieve this property, log registers are used to
write the operations on non-volatile memory.


2.3 MQTT

MQTT stands for Message Queuing Telemetry Transport and is a publish-subscribe-based
protocol working on top of the TCP/IP stack. It was designed to work
also on constrained devices, even on a poor-quality and unreliable network
(high latency, low bandwidth).
It was invented in 1999 by Andrew James Stanford-Clark of IBM and Arlen Nipper
of Arcom and was designed to be simple, lightweight and easy to implement.
Initially it was created for internal use; it was then released and made available without
royalties [18]. In 2014 OASIS, a nonprofit consortium that promotes open standards,
announced that MQTT Version 3.1.1 had become an OASIS standard [23]; it was in turn
standardized in 2016 by ISO with the code ISO/IEC 20922.

2.3.1 Structure

Figure 2.3: Example of an MQTT Broker with one publisher and two subscribers.

MQTT is based on the publish-subscribe messaging pattern, where the sender and
the receiver are decoupled. The sender (called publisher) sends the message only to a
central unit (called broker) that forwards the message to the receivers (called subscribers).
Publishers and subscribers are both called clients, and the broker, which handles
the messages, is called the server. Every client can be both publisher and subscriber
without any protocol limitation.
A message is composed of two main parts:

Content: MQTT is data-agnostic, so the content can be of any type, like binary
data or a string.

Topic: It is a set of one or more UTF-8 strings concatenated by a forward slash
"/". Each string represents a topic level.

In this way a one-to-many message distribution system is easily created. A client
can subscribe to a certain topic to receive all the messages concerning it, or can
publish a message about a particular topic in order to send it to all the other
interested clients. There is no need for a client to know whether any other client exists or
whether there is a subscriber interested in the sent topic.
Examples of topic are:

smartbin/weight/123
smartbin/height/456

If a client is interested in more than a single topic it can use special strings called
wildcards.

Single level wildcard (+) This wildcard can substitute any possible string of a
single level. So if a client is interested in all the measures of a certain
publisher it can subscribe to smartbin/+/123 and receive both the
messages about weight and height. In case the client is interested in
all the weight measures it can subscribe to smartbin/weight/+.

Multi level wildcard (#) The multilevel wildcard can be used to substitute an
arbitrary number of topic levels. If a client is interested in all the messages
of type smartbin it can subscribe to smartbin/# and it will receive
all the messages from every bin and for every type of measure: weight
and height.

It is important to underline that topics are case-sensitive, so smartbin and
Smartbin are considered by the broker as two distinct topics. Moreover, wildcards can
be used only by subscribers and not by publishers.
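
As a concrete illustration, the sketch below shows how a client could publish and subscribe to such topics with the PubSubClient library mentioned later in this thesis. It is a minimal example under assumed values: the network credentials, broker address and client ID are placeholders, not those of the actual deployment.

#include <WiFi101.h>
#include <PubSubClient.h>

WiFiClient net;
PubSubClient client(net);

// Invoked by PubSubClient for every message received on a subscribed topic.
void onMessage(char* topic, byte* payload, unsigned int length) {
    // e.g. topic = "smartbin/weight/123", payload = "42"
}

void setup() {
    WiFi.begin("IoTPolimi", "secret-key");   // placeholder credentials
    client.setServer("192.168.1.10", 1883);  // placeholder broker, standard non-TLS port
    client.setCallback(onMessage);
    if (client.connect("bin-123")) {         // unique client identifier
        client.subscribe("smartbin/+/123");  // single level wildcard: every measure of bin 123
        client.publish("smartbin/weight/123", "42");  // publish one weight reading
    }
}

void loop() {
    client.loop();  // keep the connection alive and dispatch incoming messages
}

Note that the wildcard appears only in the subscription, in line with the rule above that publishers cannot use wildcards.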

2.3.2 Reliability
Another important piece of information present in every message is the QoS (Quality of
Service) level, which defines the reliability of the message delivery. The QoS is defined at
publish time and at subscribe time by the client. The broker can manage the message
in 3 ways, according to the QoS level defined [21].

1. QoS = 0 (At most once delivery)
The message is sent only once and there is no need for the receiver to acknowledge
it, nor for the sender to store it. This is the fastest and least resource-consuming
type of message.


2. QoS = 1 (At least once delivery)
This level guarantees that the message is delivered to the receiver at least once.
The sender stores the message until it receives an acknowledgement packet
(PUBACK) from the receiver; if this isn't received before a timeout, the message
is sent again.

3. QoS = 2 (Exactly once delivery)
Using this QoS the message is guaranteed to be delivered exactly once, but the time
to deliver it is much higher, due to the fact that for every message 4 packets
are sent: the message itself and 3 acknowledgement packets (PUBREC, PUBREL,
PUBCOMP). This QoS should be used when the system cannot correctly handle
duplicate deliveries and the message must reach the receiver.

2.3.3 Security
The MQTT protocol provides a set of guidelines for securing an implementation of
MQTT; these are strongly recommended although not mandatory. Security
is divided into multiple layers, and each one prevents different kinds of attacks [22].
The protocol also defines the TCP/IP ports to use (registered with IANA):

1883 It is the TCP/IP port registered for non-TLS communication.

8883 It is the TCP/IP port exclusively reserved for MQTT over SSL/TLS (IANA service
name: secure-mqtt). The messages are encrypted across the network; encryption is not
built into the protocol itself due to the complexity of SSL and the significant network
overhead it adds.

Since version 3.1 of the MQTT protocol, an authentication and authorization mechanism
is provided. It consists of a CONNECT packet with a username and password that
are validated by the server.


2.4 REST API

REST is the acronym that stands for Representational State Transfer; it is
an architectural style for distributed systems presented by Roy Fielding that quickly
became a de facto standard among web developers.
REST-compliant web services provide access to web resources by using a predefined
and uniform set of stateless operations. This is in contrast with other standards
like SOAP, which provide their own arbitrary, application-dependent sets of operations.
Resources can be, for example, a document, a file or, more generically, a piece of
information identified by its URL. The most common operations available to interact
with these resources are the following:

GET Requests or accesses the resource. It is a safe method: calling it produces no
side effects and the state of the resource does not change.

POST Creates a new resource. As a response it usually receives the URI (Uniform
Resource Identifier) of the created resource.

PUT Replaces the resource with another; if the resource does not exist, it is
created. It is considered an idempotent method, since multiple executions of the
operation don't produce additional changes.

DELETE If the resource exists, it will be deleted. This operation is also idempotent.

An example of a resource can be smartbin/123/height, which represents the list of
all the height values of bin 123, or smartbin, which represents the list of all the
smart bins.
A possible example of the use of a REST API can be:
POST @file:my_object.jpg smartbin/456/image
In this way the file my_object.jpg is added to the images of smart bin 456.
GET smartbin/456/weight/last
Using this operation the server gives us the last weight measure of smart bin
456.
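
To make the flow concrete, the fragment below sketches how a constrained client could issue the GET request above over plain HTTP with the WiFi101 library used on our boards. The server address and port are placeholders and error handling is omitted.

#include <WiFi101.h>

WiFiClient client;

// Ask the server for the last weight measure of bin 456.
void requestLastWeight() {
    if (client.connect("192.168.1.10", 80)) {  // placeholder server address and port
        client.println("GET /smartbin/456/weight/last HTTP/1.1");
        client.println("Host: 192.168.1.10");
        client.println("Connection: close");
        client.println();                      // a blank line ends the request headers
        while (client.connected() || client.available()) {
            if (client.available()) {
                Serial.write(client.read());   // the response body carries the measure
            }
        }
        client.stop();
    }
}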


2.5 Node-RED

Node-RED is a flow-based programming tool used to wire together different services
and components using a visual editor based on boxes that can be connected
to one another using drag and drop.

Figure 2.4: A sample Node-RED flow that receives data from MQTT and
stores it in a MySQL database

This peculiar type of programming not only facilitates the programming process
but also helps users understand what they are doing through a clear representation.
Node-RED was originally developed in 2013 as a side project by Nick O'Leary and
Dave Conway-Jones of IBM's Emerging Technology Services group. In September
2013 it was released as open source, and since 2016 it has been part of the JS Foundation.
The power of this development tool is that it provides out of the box many ready-to-use
components, or nodes, that need only a few steps to be used.
In Figure 2.5 we can see how simple it is to set up an MQTT node.

Figure 2.5: MQTT setup

Another useful functionality of Node-RED is the possibility to easily create web
interfaces to represent the data collected from other services. In fact, to create a
dashboard (Figure 2.6) there is no need to know any language like CSS or HTML.

Figure 2.6: Node-RED dashboard


2.6 Microcontroller

A microcontroller (MCU) can easily be imagined as a computer with extremely
small dimensions and very limited computing power. It can be compared to
a microprocessor, with the difference that the latter is used in general-purpose machines
combined with other chips, whereas the MCU is a complete system designed for
embedded applications, with a processor, memory and some input/output channels
called pins.
On a microcontroller a custom program can be created and loaded on the board
to execute operations and control external modules. For the development of the custom
software we used the Arduino IDE, a development environment written in Java and
compatible with many boards, including non-Arduino ones. This IDE can be extended by
installing third-party plugins that provide further functionalities, like a debugger for the ESP or
libraries for modules that are not officially supported.

2.6.1 Arduino
Arduino is an Italian open-source computer hardware and software company
that creates single-board microcontrollers and kits for their development. Thanks to
the success of this company, the concept of DIY (Do It Yourself) applied to electronics
spread all over the world, leading to the birth of many other similar companies and
foundations like the Raspberry Pi Foundation.

(a) Arduino UNO (b) Arduino MKR1000

Figure 2.7: Arduino Microcontrollers

Arduino UNO
The Arduino UNO (Figure 2.7a) is a low-cost microcontroller based on the
ATmega328P and is the most famous and widely used board. Thanks to its open-source
nature, many identical, cheaper versions from other companies are available, and all
this success contributed to the creation of more and more compatible external
components that extend its functionalities, like a WiFi module based on the ESP chip.


During the development of the smart bin project it was used to monitor the
power consumption and to extract the data used to analyze the efficiency of the created
modules.

Arduino MKR1000
The Arduino MKR1000 (Figure 2.7b) is based on the Atmel ATSAMW25 SoC and
provides an integrated WiFi module with a cryptochip for secure communication
(ECC508). This MCU also provides 2 pins to connect a Li-Po battery and recharge it
using the USB port. It works at 3.3V and can be powered at 5V only through the
VIN pin or the USB port. This board was used as the base for both smart bin modules.

2.6.2 Raspberry Pi

Figure 2.8: Raspberry Pi 3 Model B

The Raspberry Pi is a particular type of microcontroller: since it has a
lot of computing power and was created to work with the Linux kernel, it can more
correctly be considered a single-board computer. It was created by the Raspberry
Pi Foundation to promote computer science and, thanks to its success (over 19
million boards sold), different versions have been created.
For the development of the project this board was initially used as the server to test
the communication with the bins, and later it was also used together with a
touch-screen monitor to present an idea of a possible device to control the fill status
of the bins (Figure 4.4).


2.6.3 NodeMCU - Devkit 0.9

Figure 2.9: NodeMCU v0.9

The NodeMCU is an open-source microcontroller that runs on the ESP8266 SoC.
Version 0.9 is currently outdated and is based on the ESP-12 module created by the
Chinese manufacturer Espressif Systems. The current version is 1.0, which uses
the ESP-12E module; it provides the same processor and memory but fixes some
bugs and provides a layout that allows it to be used on a breadboard.
The initial development of the smart bins started from this microcontroller, but
soon different problems appeared.
The main problems are created by the library for the HX711 analog-to-digital converter
(HX711_ADC.h) and the library used for MQTT (PubSubClient.h); the
use of either library alone does not create any problem.
The first problem appears when trying to read a weight measure from the load cell
using the command HX711_ADC::start(unsigned int time) (Figure 2.10a). This
instruction continuously calls the update function (Listing 2.1) until a valid value
is ready. This triggers the software watchdog, causing a block (Figure
2.10b) of the entire board that requires a manual reboot.


(a) Stack output on the serial monitor

(b) Watchdog stack debug

Figure 2.10: Watchdog error

Listing 2.1: Function that activates the watchdog

uint8_t HX711_ADC::update()
{
    //#ifndef USE_PC_INT
    byte dout = digitalRead(doutPin);  // check if conversion is ready
    if (!dout) {
        conversion24bit();
    }
    else convRslt = 0;
    return convRslt;
    //#endif
}
This error can be avoided by disabling the watchdog with ESP.wdtDisable(), but
unfortunately resolving this initial problem causes another one to appear, creating a
conflict with the WiFi module and its library (WiFi101.h). This problem
frequently blocks the board during the connection to the WiFi and seems due to a
problem in feeding the watchdog caused by the MQTT library [9].
This microcontroller was also initially chosen for the image recognition module because,
according to the specifications, the ESP8266 is officially supported by the ArduCAM OV5642.
Unfortunately, due to the low amount of RAM and to the sporadic blocks caused by
the watchdog, the Arduino MKR1000 was chosen instead for its greater stability.


2.7 Used sensors

2.7.1 Load Cell

(a) Strain gauge load cells (b) Diagram of a strain gauge

Figure 2.11: Load cell (10kg)

This straight bar load cell can translate the pressure applied on it into an electrical
signal. In this way, after a proper calibration, this sensor can give a good estimate
of the weight up to 10 kg, with an error usually lower than 2 grams.
In bar strain gauge load cells, the cell is set up in a Z formation (Figure 2.11b),
and when a weight is placed on it a torque is applied to the bar. To measure the
weight, the four strain gauges on the cell measure the bending distortion: two
measure the compression and two the tension.
The bar is made of an aluminum alloy and offers an IP66 protection rating that
guarantees protection from dust and liquids. For this reason this component was chosen
to measure the garbage weight inside the bin.


2.7.1.1 HX711

Figure 2.12: Analog to Digital Converter HX711

The HX711 is a precision 24-bit analog-to-digital converter used to convert the
electrical signals coming from the load cell into a digital number corresponding to
the sensed weight.
This sensor uses a two-wire interface (Clock and Data) to communicate with the
microcontroller and can be powered by any voltage between 2.7V and 5V. It can
be connected to and controlled by any microcontroller that provides GPIO pins.
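
The fragment below is a minimal sketch of how the converter is typically read with the HX711_ADC library referenced in Section 2.6.3. The pin numbers and the calibration factor are placeholders: the factor must be determined experimentally with a known reference weight.

#include <HX711_ADC.h>

HX711_ADC loadCell(5, 6);  // placeholder wiring: data (DOUT) on pin 5, clock (SCK) on pin 6

void setup() {
    Serial.begin(9600);
    loadCell.begin();
    loadCell.start(2000);          // let the readings stabilize, then tare
    loadCell.setCalFactor(696.0);  // placeholder calibration factor
}

void loop() {
    loadCell.update();                   // fetch a new conversion when one is ready
    Serial.println(loadCell.getData());  // weight in the calibrated unit
    delay(500);
}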

2.7.2 HC-SR04

Figure 2.13: Ultrasonic sensor: HC-SR04

The HC-SR04 is a low-cost ultrasonic sensor working at 5V that measures distances
in a range between 2 centimeters and 4 meters. Its accuracy reaches
3mm in the case of a flat object with a surface of at least 0.5m².


Its working mechanism is similar to the concept of the sonar used for submarine
navigation and by many animals. Through the microcontroller the ultrasonic sensor
is activated: it sends out an 8-cycle burst of ultrasound at 40 kHz and then waits for its
echo. Knowing the time before the echo's arrival, we can calculate the distance between
the sensor and the object.

2.7.3 ArduCAM

Figure 2.14: ArduCAM-Mini-5MP-Plus OV5642 Camera Module

ArduCAM is a Chinese startup company that creates open-source hardware and
software, especially for Arduino and Raspberry Pi boards. Their modules are
general-purpose high-definition SPI cameras providing a simple control interface
that can be used by any hardware platform, as long as it has SPI and I2C interfaces.
For this project we adopted an ArduCAM-Mini-5MP-Plus, which mounts the 5MP
CMOS image sensor OV5642. This particular model has a miniature size and
allows even low-cost microcontrollers, which usually wouldn't have enough resources,
to capture 5MP images not only in JPEG or RGB format but also in the RAW image
format. This module handles camera, memory and user-interface hardware timing,
providing the user with an SPI interface. It allows not only single-capture
mode but also the possibility to take videos and multiple images. This last modality,
combined with the possibility to control many parameters like exposure, white
balance, brightness, contrast and color saturation, gives the possibility to create HDR
images.
Moreover, for battery-powered devices there is also the possibility to set a low-power
mode that reduces the power consumption by turning off the sensor.
The operating mechanism is very simple: the camera is turned on, the image
parameters such as format and resolution are set, then the hardware
trigger input is activated to start a capture. Finally, when the "capture done" flag
is raised, the image can be read byte by byte from the SPI interface.
This camera can be powered with any voltage between 3.3V and 5V, and all the
I/O ports tolerate the same voltage range.
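
The fragment below sketches this capture sequence with the ArduCAM library. It is a simplified outline rather than our full client code: the chip-select pin and the resolution are placeholder values, and error handling is omitted.

#include <Wire.h>
#include <SPI.h>
#include <ArduCAM.h>

const int CS = 7;           // placeholder chip-select pin
ArduCAM myCAM(OV5642, CS);

void setup() {
    Wire.begin();           // I2C bus, used to configure the sensor
    SPI.begin();            // SPI bus, used to read the image data
    pinMode(CS, OUTPUT);
    myCAM.set_format(JPEG); // image format
    myCAM.InitCAM();
    myCAM.OV5642_set_JPEG_size(OV5642_640x480);  // placeholder resolution
}

void capture() {
    myCAM.flush_fifo();
    myCAM.clear_fifo_flag();
    myCAM.start_capture();  // activate the hardware trigger
    while (!myCAM.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK)) {}  // wait for the "capture done" flag
    uint32_t len = myCAM.read_fifo_length();
    for (uint32_t i = 0; i < len; i++) {
        uint8_t b = myCAM.read_fifo();  // read the image byte by byte over SPI
        // ... forward b, e.g. to the serial port or to the server
    }
    myCAM.clear_fifo_flag();
}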

2.7.4 INA219 DC Current Sensor

Figure 2.15: INA219

The INA219 is a direct current sensor that can measure both the high-side voltage
and the DC current draw over I2C with 1% precision. It can be powered by a 5V supply
and can communicate with many microcontrollers using I2C.
Its operating mechanism is based on a resistor with a very low resistance, called a
shunt resistor. The voltage across the shunt resistor is measured and from it, using
Ohm's law, the current value is calculated:

$$Current = \frac{Shunt\ Voltage}{Shunt\ Resistance}$$

Chapter 3

System implementation
In the previous chapters we have seen the main components used and the theoretical
part of this work. Now, in this chapter, the project implementation will be
presented and explained.
For the sake of clarity, the result of this thesis work will be presented separately as
two different projects. Nevertheless, it is important to notice that these are not two
freestanding ideas; hence an initial subsection is present, where all the functionalities
shared between the two projects are described.
So, to summarize, this chapter is split into 3 main parts:

1. Common functionalities

2. Smart bin project: capacity monitoring module

3. Smart bin project: vision module


3.1 Common functionalities

In this section we will show all the parts shared between the two projects. By
shared functionalities we do not mean only the functions used inside the code loaded
on the microcontroller, like the function to measure the distance of an object or the one
used to obtain a unique identifier for each device. The common functionalities
also include all the work done to measure the power absorbed by a certain device:
indeed, as we will see, we created a separate project used to measure both
smart bin modules and other sensors.

3.1.1 Current measure


Connection

(a) Breadboard view (b) Schematic view

Figure 3.1: Wiring diagram of the Current Measure circuit

The first fundamental step to measure the current of a certain circuit is, of course,
the creation of the circuit that will do it.
For this step we used the following 2 components: the Arduino UNO (2.6.1), as the
microcontroller that receives and analyzes the power consumption information,
and the INA219 current sensor (2.7.4).
To power the current sensor we connect the 5V pin and the ground pin of the
MCU to the VIN and GND pins of the sensor. Then we establish the data connection
using the I²C serial bus between the 2 components, wiring the respective
SDA and SCL pins.
As a final step, we power the monitored device using the VIN+ and VIN- connectors
of the current sensor. The former, VIN+, should be connected to the power source
(5V) and the latter, VIN-, to the positive pole of the device to be measured, usually
identified by the label VIN or VCC. It is important that all the ground (GND) pins are
connected to the same potential in order to achieve a correct measurement.

Data analysis
To read values from the current sensor we use the library Adafruit_INA219.h, created by Adafruit Industries, the vendor of the sensor. The code executed by the Arduino UNO is very simple: it takes a sample of the current every 50 ms and outputs a line with the time and the read value to the serial monitor. In this way we obtain the real-time consumption of the device in CSV format (values separated by commas).
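A minimal version of such a sampling sketch, based on the Adafruit_INA219 library, could look as follows (the baud rate and the CSV header line are illustrative choices, not part of the original code):

#include <Wire.h>
#include <Adafruit_INA219.h>

Adafruit_INA219 ina219;

void setup() {
  Serial.begin(115200);                  // illustrative baud rate
  ina219.begin();                        // initialize the sensor over I2C
  Serial.println("time_ms,current_mA");  // CSV header, our assumption
}

void loop() {
  // One CSV line every 50 ms: elapsed time and measured current
  Serial.print(millis());
  Serial.print(",");
  Serial.println(ina219.getCurrent_mA());
  delay(50);
}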
Finally, we use the obtained values by simply copying them into a spreadsheet editor such as LibreOffice Calc or Microsoft Office Excel.
As we will see later, in this way we obtain the data used to derive the maximum and the average consumption, which will be used to determine the size of a possible battery powering the analyzed circuit.

3.1.2 Measure the distance


The distance measurement is used in both the modules: in the capacity monitor-
ing module to determine the ll level of the bin and in the waste analyzer to gure
out when an object is inserted in the created model. To make these measurement is
sucient to wire the ultrasonic sensor HC-SR04 (2.7.2) to any microcontroller that
can write and read from 2 digital pins, without the need of any external library.
The measurement takes places in 3 steps:

1. The transmitter (TRIG pin) emits a high-frequency sound.

2. The signal hits the object and is reflected.

3. The receiver (ECHO pin) receives the reflected sound.

Finally, we can obtain the distance in centimeters through the following mathematical calculation (3.1). The information used is the following.

• The elapsed time, expressed in microseconds (1000000 µs/s).

• The speed of sound through the air (34380 cm/s).

• The time should be divided by 2, since it is measured from the moment the sound is emitted until the echo is received back; the sound therefore covers twice the distance to measure.


Using this information we can compute 1000000[µs/s] / 34380[cm/s] ≈ 29.1[µs/cm] and obtain:

Distance[cm] = (Elapsed time[µs] / 2) / 29.1[µs/cm]    (3.1)

Listing 3.1: Distance measurement


long getHeightAIO() {
  // Initialize pins
  pinMode(D0, OUTPUT);  // Trig
  pinMode(D1, INPUT);   // Echo

  // Capture value
  long duration, distance;
  digitalWrite(D0, LOW);
  delayMicroseconds(2);

  digitalWrite(D0, HIGH);
  delayMicroseconds(10);

  digitalWrite(D0, LOW);
  duration = pulseIn(D1, HIGH);
  distance = (duration / 2) / 29.1;
  return distance;
}
The created function (3.1) does not do much more than explained so far: it initializes the pins, enables the output for 10 microseconds and reads the response, returning a value in centimeters.

3.1.3 Obtain a unique identier


In order to identify the messages sent from a certain smart bin we need a unique identifier. To achieve this goal we use the Media Access Control (MAC) address, a unique identifier assigned to every device with a network interface controller and thus, in our case, an ID assigned to every Arduino MKR1000.
The MAC address can be printed in a human-friendly form as six groups of hexadecimal digits separated by hyphens (e.g. 01-23-45-67-89-AB). The first 3 groups are the manufacturer identifier (OUI) and in our case they are always the same, since all our devices are Arduinos. The last 3 groups instead contain an identifier that is unique for every Network Interface Controller (NIC) of the same manufacturer.

Listing 3.2: Retrieve the id from the MAC address


byte mac[6];

long getId() {
  setup_wifi();
  long id = 0;

  WiFi.macAddress(mac);
  const int digits = 3;
  for (int i = 0; i < digits; i++) {
    long value = long(mac[i]);
    id += value * pow(10, i * digits);
  }
  return id;
}
To retrieve the MAC address of the device for the first time we use the function of Listing 3.2. As a first step it connects to the WiFi; then, using a function provided by the WiFi library, it retrieves the MAC address as an array of 6 bytes. Finally we concatenate the values of the last 3 bytes as a decimal value.
The first time the Arduino is turned on the unique ID is stored in memory, and all the following times it is read directly from the flash memory. In this way we can save battery, avoiding turning on the WiFi just to retrieve the ID.

3.1.4 Store values in ash memory


As we have said, we need to store some values in the non-volatile memory of the microcontroller. Unfortunately the Arduino MKR1000 does not provide any built-in mechanism to preserve data across a reboot or shutdown of the microcontroller.
To overcome this limitation the GitHub user cmaglie created a library that aims to provide a convenient way to store and retrieve user's data using the non-volatile flash memory of microcontrollers [19].
Since flash memory supports a finite number of write operations, to reduce them to a minimum we make an initial write during the first boot of the microcontroller and, before updating a value, we check whether it has actually changed.
In the MKR1000 source code we use two libraries, FlashStorage.h and FlashAsEEPROM.h, that provide functions similar to the ones used to write on a microcontroller with an EEPROM memory.


So, in order to store and retrieve variables we define the name of the variable and its type:
FlashStorage(my_variable, int);
Then the variable can be easily accessed through simple read and write methods:
my_variable.read();
my_variable.write(new_value);
Moreover, there is also a useful method to verify whether a variable was ever written in the flash storage, and thus is valid and contains useful information:
my_variable.isValid();
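Combining these methods with the ID retrieval of Listing 3.2, the write-sparing pattern described above can be sketched as follows (the slot name stored_id and the wrapping function are illustrative, not the thesis code):

#include <FlashStorage.h>

// Reserve a flash slot for the unique identifier
FlashStorage(stored_id, long);

long loadOrCreateId() {
  // First boot only: the slot was never written, so compute the ID
  // (which turns on the WiFi, see Listing 3.2) and store it once
  if (!stored_id.isValid()) {
    stored_id.write(getId());
  }
  // Every following boot: read it back from flash, no WiFi needed
  return stored_id.read();
}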


3.2 Smart bin - Capacity monitoring module

The capacity monitoring module has been developed using a client-server architecture. Every smart bin can be considered as a client that collects data and, if needed, sends the information to the central server.
The server can handle bins of any size without problems; to show that, we created 3 different bins of two different sizes (Figure 3.2).

Figure 3.2: Two smart bins of dierent size

3.2.1 Server implementation


In order to implement the server we used Node-RED (2.5) and, through its specific node, we store the information in the MySQL database (2.2.4).
We can split the logic of the server in two main parts:

1. Managing data received from the smart bins.

2. Showing information to the end-user.

The first part is completely developed using Node-RED, while the latter is partially made using the Node-RED dashboard, with some parts written in PHP.

3.2.1.1 Receiving data


Figure 3.3: Node-RED blocks for the server

Using Node-RED we use two nodes to subscribe to the MQTT topics of height and weight. As we can see from Figure 3.3, when we receive a height measurement we store the information in the database to use it later. When we receive a weight measurement, instead, it is not only stored but also used to send notifications to the user through mail or IFTTT. The notifications are sent only if the received weight is higher than the expected one or if the bin is almost full.

3.2.1.2 Showing information

Figure 3.4: Dashboard creation using Node-RED

We also provide the user with a way to view the status of the bin. To create the dashboard we used the Node-RED dashboard nodes. We retrieve data from the database using two different queries: one gathers all the historical data of a certain sensor (weight or height) up to now, the other retrieves only the current status of the bin with the real-time information. The real-time data are updated every hour, so on average they show 30-minute-old information, even though the last message received can be 6 hours old (Figure 3.5a).
To provide a fully functional product that can be used by the end-user, we created an interactive map that shows the bins on a certain floor (Figure 3.5b). In order to create it we use an HTML canvas on which we draw the map and the data retrieved from the DB.
Once the user accesses the page with the map, 3 operations are performed:

1. A white canvas is created using HTML

Listing 3.3: The HTML canvas is created


<canvas id="viewport" height="1300" width="1300"></canvas>

2. Bins data are retrieved in JSON format from the data.php page through an XMLHttpRequest:

var xmlHttp = new XMLHttpRequest();
xmlHttp.open("GET", "/data.php", false); // false for synchronous request
xmlHttp.send(null);
var data = xmlHttp.responseText;
var info = jQuery.parseJSON(data);

The information retrieved is: the fill level, the weight level, the X and Y coordinates on the map, and the bin unique identifier.

3. The map of the chosen floor and building is plotted and, over it, one button is created for each bin.

// Plot the map
create_map();
// Create buttons
create_buttons();

The create_map function draws the map according to the chosen building and floor, then, using the data previously obtained, shows the bins on it. The color and the shape of each bin depend on its weight and fill level. Once the bins are drawn, we create buttons over their images that allow the final user to obtain more detailed information on a certain bin.


(a) Information of a single bin

(b) Information on the smart bins of an entire oor

Figure 3.5: Information shown to the user

3.2.2 Client implementation


The client is essentially a regular trash bin that is converted into a smart bin by attaching some external components (microcontroller and sensors) to it.
Once the model is created, and thus all the components are successfully added and integrated into the bin (hardware part), we must program the firmware of the microcontroller in order to retrieve data from the sensors in a correct way and communicate them to the central server previously described (software part).

3.2.2.1 Hardware
The first step to build the client is the choice of the components. In order to build the smart bin we used sensors to sense the weight and the distance, a microcontroller to manage data and transmit them to a remote server and, of course, a trash bin.
For the choice of the bin the only constraints are the presence of a bottom and of a lid. This implies that the bin cannot be just a hanging bag, while the lid is necessary mainly to make the bin more durable and accurate, since the distance sensor is attached on the top and the trash is unlikely to hit and ruin it. To summarize, the used components are:

Microcontroller: Arduino MKR1000 (2.6.1)
Weight sensor: Load cell 10 kg (2.7.1) and analog-to-digital converter HX711 (2.7.1.1)
Distance sensor: Ultrasonic sensor HC-SR04 (2.7.2)
Trash bin: IKEA Filur bin with lid (different sizes)


Components connection

(a) Breadboard schema

(b) Schematic schema

Figure 3.6: Wiring diagram of a Smart Bin capacity monitoring module

As we can see from the schematic (3.6), we connected the load cell in cascade directly to the ADC HX711 and the latter to the Arduino, powering it at 3.3V and using two digital ports to communicate with the sensor. The distance sensor was connected using two other digital ports and powered at 5V.
To power the microcontroller and the sensors connected to it we used either the USB port or the VIN/GND pins, always providing a 5V supply.

Prototype creation
Once the wiring is done and the initial prototype is correctly functional, we started assembling a model of the smart bin. The first step requires the creation of two small holes to allow the wires to exit from the bin. One is made on the bottom, on a lateral side, in order to connect the load cell inside the bin and the microcontroller outside it. The other one is made on the top, in a corner of the lid; in this way the wires that connect the microcontroller and the ultrasonic sensor aren't compressed when the lid is opened.
After this, we fasten the load cell at the bottom of the bin using two screws (3.7b) and, using two other screws, we affix a false bottom to the load cell (3.7c). In this way all the weight lies on the load cell and we can obtain an accurate measurement of it (3.7a).


(a) An example of load cell fastened with 4 screws

(b) Load cell fastened to the bottom (c) False bottom installed

Figure 3.7: Load cell assembling

The step before wiring all the components is to fix the ultrasonic sensor in the center of the lid; in this way we can measure the distance from the objects in the bin. Using this information, after an initial calibration with the empty bin, we can deduce the fill level of the bin.
As a last step, when all the cables are properly connected, we fix the microcontroller on an external side of the bin; in this way the trash and any operation of changing the trash bag cannot damage the Arduino.


3.2.2.2 Software
Now the prototype is completely assembled and only two steps are missing: the firmware upload and the initial calibration.
What this smart bin does is quite simple:

1. Retrieve a weight measurement.

2. Retrieve a fill measurement.

3. Has something changed significantly since the last measurement?

(a) If so, send the information to the server.

4. Wait before the next measurement.

In order to calculate the fill measurement we simply use the ultrasonic sensor, as shown before, to measure the distance (3.1.2), and through a simple subtraction we obtain the trash level (3.2).

Fill Level = Bin Height − Sample Measured    (3.2)
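For illustration, this computation reduces to a couple of lines around the getHeightAIO() function of Listing 3.1 (the function name and the binHeight parameter, holding the value stored in flash during the first boot, are ours):

// Fill level in cm, from the distance measured by the ultrasonic sensor
long getFillLevel(long binHeight) {
  long sample = getHeightAIO();  // lid-to-trash distance, Listing 3.1
  return binHeight - sample;     // equation (3.2)
}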

To obtain the weight measurement we use an external library (HX711_ADC.h) that allows us to turn on the sensor only when we need to retrieve a value; we then wait until the sensor starts issuing reliable data and finally we obtain the weight value (Listing 3.4).

Listing 3.4: Weight measurement


long getWeightAIO() {
  LoadCell.powerUp();
  LoadCell.begin();
  long t = millis();
  int count = 0;

  while (1) {
    LoadCell.update();
    if (millis() > t + STABILISING_TIME) {
      long sample = (long) round(LoadCell.getData());
      count = count + 1;
      t = millis();

      if (count > 0) {
        count = 0;
        LoadCell.powerDown();
        return sample;
      }
    }
  }
}
Once we have obtained the two values, we check them to understand whether they are useful and need to be sent to the central server or can be discarded. The checked conditions are listed below; a sketch of the resulting check follows the list.

1. The maximum time without messages has elapsed.

If 6 hours have elapsed and in this period of time no message has been sent, we communicate the measurements to the server. In this way we always know that a smart bin is still working even if it is not used.

2. The fill level has changed by a significant value.

If the fill level has changed by at least 2% and the measured distance differs by at least 1 cm, we send the new value to the server.

3. The weight has changed by a significant value.

If the weight has changed by at least 5 grams, we send the new value to the server.
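A minimal sketch of this decision, assuming the previous measurements are kept in a few global variables (all names below simply restate the conditions above; they are not the thesis code):

// State of the last transmission (illustrative globals)
unsigned long lastSentMs = 0;
long lastFillPct = 0, lastDistanceCm = 0, lastWeightG = 0;

bool shouldSend(long fillPct, long distanceCm, long weightG) {
  const unsigned long MAX_SILENCE_MS = 6UL * 60UL * 60UL * 1000UL; // 6 hours

  bool timeElapsed   = (millis() - lastSentMs) >= MAX_SILENCE_MS;
  bool fillChanged   = abs(fillPct - lastFillPct) >= 2             // at least 2%...
                       && abs(distanceCm - lastDistanceCm) >= 1;   // ...and 1 cm
  bool weightChanged = abs(weightG - lastWeightG) >= 5;            // at least 5 g

  return timeElapsed || fillChanged || weightChanged;
}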

To send the information to the server we connect to the WiFi and then we use the external library PubSubClient, which provides the necessary functions to create and manage an MQTT connection. Using MQTT we publish on 2 different topics:

1. {id}/weight

2. {id}/height

where the id corresponds to the unique identifier of the microcontroller, obtained as shown before (3.1.3), and the messages are sent in JSON format.
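A sketch of this publishing step with PubSubClient could look as follows (the broker address, the client id and the exact JSON payload layout are our assumptions; the WiFi connection is assumed to be already established):

#include <WiFi101.h>
#include <PubSubClient.h>

WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

void publishMeasurement(long id, const char *sensor, long value) {
  mqtt.setServer("broker.example.org", 1883);  // hypothetical broker
  if (!mqtt.connected()) {
    mqtt.connect("smartbin");                  // hypothetical client id
  }

  // Topic "{id}/weight" or "{id}/height", payload in JSON format
  char topic[32], payload[48];
  snprintf(topic, sizeof(topic), "%ld/%s", id, sensor);
  snprintf(payload, sizeof(payload), "{\"value\": %ld}", value);
  mqtt.publish(topic, payload);
}

// Usage: publishMeasurement(getId(), "weight", weightG);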
Only during the first boot is the behavior of the smart bin slightly different: it automatically obtains and stores in flash memory the size of the bin and starts the calibration of the load cell, taking the measurements of 0 grams and of another known weight (in our case we used an object that weighs 92 grams).
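For illustration, such a two-point calibration boils down to computing a scale factor from the two raw readings (a sketch under our naming assumptions):

// Two-point load cell calibration: one raw reading with the empty bin
// (0 g) and one with a known reference weight (92 g in our tests)
float computeScaleFactor(long rawEmpty, long rawReference) {
  const float REFERENCE_GRAMS = 92.0;
  return (rawReference - rawEmpty) / REFERENCE_GRAMS;  // raw units per gram
}

// A later measurement is then converted with:
//   grams = (raw - rawEmpty) / scaleFactor;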


3.3 Smart bin - Vision module

The vision module has a three-layer architecture (Figure 3.8a) and is composed of the following components.

Client It is the component actually installed on every smart bin of this type. It takes pictures of the object to recognize and communicates the response to the end-user.

Personal Server It collects the images sent by the clients and the responses from the Amazon server. It is used to debug and analyze the data sent by the client and to decouple the project from a specific recognition server.

AWS Server It is the Amazon Rekognition server used to extract information from the image. It provides a fully functional image recognition service and an easy to use API.

With the exception of the third layer, the AWS server, the rest of the architecture has been developed during the project. The construction of the bin that recognizes the waste material can thus be split into two main parts:

1. The personal server main functions:

(a) Communicate with Amazon Rekognition.
(b) Store and output the predicted material.

2. The client main functions:

(a) Detect if an object is thrown away.
(b) Take a picture of the waste.
(c) Send the image to the server.
(d) Parse the response.
(e) Turn on the led related to the predicted waste material.

In the near future, when 5G connections will be widespread, we imagine a different architecture (Figure 3.8b) that takes advantage of multi-access edge computing (MEC) and could further reduce the server response time. This technology is designed to be deployed at the cellular base station; in this way there would be no more need to send the image first to a personal server and then to the AWS server thousands of kilometers away.


(a) Current implementation of the smart bin vision module

(b) Possible future implementation of the smart bin vision module using MEC

Figure 3.8: UML sequence diagrams

3.3.1 Server implementation


The preparation of the server requires different steps, since we use various technologies.

1. LAMP setup
First of all, the whole project is developed on top of a Linux server (Ubuntu 16.04) using Apache 2, MySQL and PHP (2.2). The whole software bundle can be easily installed through the package manager (apt):

sudo apt install apache2
sudo apt install mysql-server
sudo apt install php libapache2-mod-php php-mysql

2. Database creation
Using MySQL we created a database (smartbinDB) to store all the needed information; we then use the administration tool phpMyAdmin to manage it. The following tables are used to store the information sent by the clients:

(a) bin_images (bin_id, image_id, image_name, expected_bin, time)
Here are stored the unique identifier of the bin that created the image, the unique identifier of the image, the path where the image is stored and available for potential debugging, the outcome of the bin prediction, and the timestamp that states the moment the image was received by the server.

(b) bin_list (bin_id, type, building, xpos, ypos, description)
This table contains all the information used to identify a certain smart bin. The field type specifies the type of smart bin: capacity monitoring module, vision module or both modules in the same smart bin.

(c) image_labels (image_id, image_label, accuracy)
In this table we store the labels received from the image recognition software (in this case Amazon Rekognition); for each image we have a list of labels, each with a different predicted accuracy.

3. Amazon Rekognition
The following step requires the creation of a working AWS account. As a further security step we create an IAM (Identity and Access Management) user, a special user that has only limited permissions. In our case we created an IAM user (smartbin) that can access Amazon Rekognition in read-only mode only. Doing so, we ensure that a malicious user cannot obtain full access to the account even in case of credentials theft. Once the user is successfully created we can start interfacing with the service; to do so we downloaded the AWS SDK (Amazon Web Services Software Development Kit) and added it to the root of our website.
As a final step to configure the SDK, we created a proper class (Rekognition) that reads the user credentials from an ini file and creates an instance of the Amazon Rekognition client, successfully logged in. As a further option we pinned the API version to the last available one, 2016-06-27, thus avoiding any possible change in the service in the near future. Moreover, we also specified a European server, eu-west-1; in this way Amazon guarantees greater privacy, according to the European privacy regulation (GDPR), and faster responses due to server proximity.

public function __construct() {
    // Create credentials
    $secret = parse_ini_file("../secret.ini", true);
    $credentials = new Aws\Credentials\Credentials(
        $secret['aws']['key'],
        $secret['aws']['secret']
    );

    // Create client options
    $options = [
        'region' => 'eu-west-1',
        'version' => '2016-06-27',
        'signature_version' => 'v4',
        'credentials' => $credentials
    ];
    // Instantiate an Amazon Rekognition client
    $this->rekognitionClient = new Aws\Rekognition\RekognitionClient($options);
}

4. REST API endpoint
To conclude the server development, we created a PHP page (image_api.php) that provides an API to send images. The received image is checked to verify that it has been sent correctly, stored on disk and in the database, and finally sent to the Amazon server to obtain the image labels. The server returns the labels with the relative prediction confidence in JSON format (Listing 3.5); these are then compared to some arrays of known objects made of a certain material.



Listing 3.5: Obtained response from AWS server


{
    "Labels": [
        { "Name": "Bottle",        "Confidence": 98.00452423095703 },
        { "Name": "Beverage",      "Confidence": 72.26675415039062 },
        { "Name": "Drink",         "Confidence": 72.26675415039062 },
        { "Name": "Mineral Water", "Confidence": 72.26675415039062 },
        { "Name": "Water Bottle",  "Confidence": 72.26675415039062 }
    ]
}

If a predicted label belongs to an object of a known material, its confidence is added to a counter; when all the labels have been processed in this way, the material with the greatest value is the most probable one and is therefore chosen.

Listing 3.6: Arrays used to detect the waste material


private static $plastic = array(
    "Bottle", "Water Bottle", "Mineral Water", "Cup", "Drink",
    "Coffee Cup"
);

private static $aluminium = array(
    "Can", "Aluminium", "Tin", "Coke", "Soda", "Drink", "Coke",
    "Canned Goods"
);

private static $paper = array(
    "Paper", "Poster", "Diagram", "White Board", "Brochure", "Menu",
    "Origami", "Cardboard", "Carton"
);

If all labels are unknown, the object is predicted to be unsorted waste, since this class starts with a small initial weight (50). This approach is really simple and takes little time to be correctly trained but, as we will see in the next chapter, it gives good results.
So, in the example shown in Listing 3.5, we would obtain the Plastic outcome, as shown in Table 3.1; a sketch of the scoring rule follows the table.

Table 3.1: Result of the material detection

Material      Probability
Plastic       314.81
Aluminium     72.27
Unsorted      50
Paper         0
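Although the server implements this rule in PHP, the scoring itself is small; a C++ transcription of it, under our naming assumptions, could be:

#include <map>
#include <string>
#include <vector>

struct Label { std::string name; double confidence; };

// Label -> material table; a label such as "Drink" can appear under more
// than one material, hence the multimap (mirrors the arrays of Listing 3.6)
using MaterialTable = std::multimap<std::string, std::string>;

std::string predictMaterial(const std::vector<Label> &labels,
                            const MaterialTable &materialOf) {
  // The unsorted class starts with its small initial weight of 50
  std::map<std::string, double> score = {
      {"Plastic", 0}, {"Aluminium", 0}, {"Paper", 0}, {"Unsorted", 50}};

  // Add the confidence of every known label to its material counter(s)
  for (const Label &l : labels) {
    auto range = materialOf.equal_range(l.name);
    for (auto it = range.first; it != range.second; ++it)
      score[it->second] += l.confidence;
  }

  // The material with the greatest accumulated confidence is chosen
  std::string best = "Unsorted";
  for (const auto &s : score)
    if (s.second > score.at(best)) best = s.first;
  return best;
}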

3.3.2 Client implementation


The client task is to notice when an object has been inserted in the smart bin and then take a picture of it. Once the image has been acquired, it is sent to the server, which returns the suggested object material; using this answer, the appropriate led is turned on to recommend the correct trash bin.

3.3.2.1 Hardware
Also in this smart bin module it is fundamental to have an internet connection, so we chose the Arduino MKR1000 microcontroller, already adopted for the other smart bin module, which provides WiFi connectivity. For the object detection we used an already shown and tested sensor, the ultrasonic sensor HC-SR04, that allows detecting objects from a minimum distance of 4 cm up to 2 m. The last employed sensor is the ArduCAM OV5642 (2.7.3), that allows taking pictures up to a maximum resolution of 5MP.


(a) Breadboard schema

(b) Schematic schema

Figure 3.9: Wiring diagram of the smart bin vision module

The schematic diagram shows how the sensors and the microcontroller are wired. The three leds are connected to the same number of digital pins: 1 for unsorted waste, 2 for plastic and 3 for paper; the ultrasonic sensor is connected to connectors 6 and 7 and the CS pin of the camera to pin 0. Moreover, the camera also requires the SPI communication protocol (pins MISO, MOSI, SCK) and the I²C serial bus (pins SDA, SCL).

The model construction


Finally, we designed a cardboard prototype to test all the functionalities of this smart bin module. It was obtained from an existing cardboard box, in which two small holes were made on the short side to fix the camera and the ultrasonic sensor. On the opposite side we added another piece of cardboard in order to cover the background and thus not add any external elements to the picture. This approach allows us to obtain the same results with the same object, independently of the place where the picture is taken, provided there is sufficient illumination.

(a) Mock-up image (b) Detail of the leds

Figure 3.10: Cardboard mock-up

Three led lights of different colors have been used in order to show the end-user the appropriate bin where the object should be thrown: blue for unsorted waste, green for paper and red for both plastic and aluminium (3.10).

3.3.2.2 Software
The client, as can be seen from the algorithm (3.1), continuously checks for the presence of an object using the proximity sensor HC-SR04.
When an object is detected, it is photographed; then the Arduino MKR1000 connects to the WiFi network and sends the captured image to the server REST API through an HTTP POST request.
The response is in plain text and is read and parsed by the Arduino. Finally, the client simply turns on the led corresponding to the recommended receptacle and becomes available again.


Algorithm 3.1 Main client function


1: Initial Set Up
2: while 1 do                          ▷ Endless loop
3:     status ← detectObject()
4:     if status = DETECTED then       ▷ Object found!
5:         Take a picture
6:         Analyze image               ▷ Send it to AWS
7:         Parse server response
8:         Recommend a bin             ▷ Turn on a led
9:     end if
10: end while

Setup camera library


The microcontroller Arduino MKR1000 does not officially support this camera sensor but, since it has sufficient computational power and satisfies all the hardware requirements, the sensor can be used by applying only a few modifications to the given libraries.
As suggested on the camera sensor producer's site, we modified the memorysave.h file, enabling the definitions only for OV5642_MINI_5MP_PLUS and OV5642_CAM, corresponding to our camera model. As a further step we created a new constant:

#define MKR1000

and in the file ArduCAM.h we reused the definitions used for the ESP8266:
#if (defined ESP8266) || (defined MKR1000)
#define cbi(reg, bitmask) digitalWrite(bitmask, LOW)
#define sbi(reg, bitmask) digitalWrite(bitmask, HIGH)
#define pulse_high(reg, bitmask) sbi(reg, bitmask); cbi(reg, bitmask);
#define pulse_low(reg, bitmask) cbi(reg, bitmask); sbi(reg, bitmask);

#define cport(port, data) port &= data
#define sport(port, data) port |= data

#define swap(type, i, j) { type t = i; i = j; j = t; }

#define fontbyte(x) cfont.font[x]

#define regtype volatile uint32_t
#define regsize uint32_t
#endif


With these steps we could reuse the same code created for ESP8266 devices without any problem.

Boot time
At the first start-up, or after the power comes back, the Arduino enters the start-up phase. During this step all the components are checked to ensure that everything is still working and correctly connected.
As a first step we initialize the following objects:

ArduCAM The main class that will be used to execute all the operations on the camera. During initialization the camera type (OV5642) and the SPI slave chip select input must be specified.

WiFiClient Used to communicate with the server through the REST API.

Wire Fundamental to connect the camera to the Arduino; it allows using I²C to read and write all the registers in the sensor.

SPI It's the interface used to start the image transmission from the camera memory to the Arduino.

Then, in the following steps (Listing 3.7), the microcontroller checks the status of the camera. In lines 1 and 2 we test whether the I²C bus is correctly working: we write to the camera test register 0x00 (ARDUCHIP_TEST1) a test value, 0x55, and read it back. In lines 3 and 4 we check whether the camera module type is OV5642, reading from the 16-bit registers 0x300a (OV5642_CHIPID_HIGH) and 0x300b (OV5642_CHIPID_LOW) the 8-bit values 0x56 and 0x42 respectively.
The Arduino remains in the loop at line 6 until the correct values are read.

Listing 3.7: Wiring camera-Arduino check


1 myCam.write_reg(ARDUCHIP_TEST1, 0x55);
2 temp = myCam.read_reg(ARDUCHIP_TEST1);
3 myCam.rdSensorReg16_8(OV5642_CHIPID_HIGH, &vid);
4 myCam.rdSensorReg16_8(OV5642_CHIPID_LOW, &pid);
5
6 while ((vid != 0x56) || (pid != 0x42) || (temp != 0x55)) {
7   myCam.write_reg(ARDUCHIP_TEST1, 0x55);
8   delay(3000);
9   temp = myCam.read_reg(ARDUCHIP_TEST1);
10   if (temp != 0x55) {
11     Serial.println("SPI1 interface Error!");
12   }
13
14   // Check if Normal CAM is connected
15   myCam.rdSensorReg16_8(OV5642_CHIPID_HIGH, &vid);
16   myCam.rdSensorReg16_8(OV5642_CHIPID_LOW, &pid);
17
18   if ((vid != 0x56) || (pid != 0x42)) {
19     Serial.println("Can't find OV5642 module!");
20   } else {
21     delay(1000);
22   }
23 }
24 Serial.println("OV5642 module correctly wired!");

Once the camera module is successfully detected, we set the image format to JPEG, initialize the module and set the JPEG compression level.
We have chosen this image format for many reasons: compressing the image directly on the camera module saves time, first when the microcontroller reads it and then when it sends it to the server. Moreover, in this way we avoid a further conversion step on the server; in fact, Amazon Rekognition accepts only the PNG and JPEG image formats [3]. Finally, another important motivation is that JPEG images can have a greater resolution with respect to the RGB mode, due to the limitation of the frame buffer [4].
Then we prepare the camera to take a picture (Listing 3.8): we set the sensor interface timing register to enable vertical sync, put the module in low power mode to save current when it is not used, clear the FIFO control register to indicate that no picture has been taken, and set the number of frames to be captured to 0.
At the end, the microcontroller is ready and in an energy-saving mode: the camera is in low power mode and the WiFi is still off.

Listing 3.8: Setting camera


// Configure JPEG format
myCam.set_format(JPEG);
myCam.InitCAM();

// Setting the quality of image at minimum
myCam.OV5642_set_JPEG_size(IMAGE_QUALITY);
myCam.set_bit(ARDUCHIP_TIM, VSYNC_LEVEL_MASK);
// Enable low power
myCam.set_bit(ARDUCHIP_GPIO, GPIO_PWDN_MASK);
myCam.clear_fifo_flag();
myCam.write_reg(ARDUCHIP_FRAMES, 0x00);

Image acquisition
When the camera is ready and the setup has terminated, the microcontroller keeps checking whether any object has been inserted in the smart bin by measuring the distance. When an object is inserted in the smart bin, the ultrasonic sensor detects it and, after a few seconds, the function used to take a picture starts (Listing 3.9).
As a first step the camera is turned on, all the internal registers are properly cleared and, by setting a particular bit through the function at line 11 of capture_normalImage(), the capture command is transmitted to the sensor.
When the flag CAP_DONE_MASK is raised in the internal memory of the camera, the image is ready to be read and transmitted.

Listing 3.9: Taking a picture


1 void capture_normalImage() {
2   uint64_t temp = 0;
3   uint64_t temp_last = 0;
4   uint64_t start_capture = 0;
5
6   // Power up Camera
7   myCam.clear_bit(ARDUCHIP_GPIO, GPIO_PWDN_MASK);
8   delay(500);
9   myCam.flush_fifo();
10   myCam.clear_fifo_flag();
11   myCam.start_capture();
12   while (!myCam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK));
13   myCam.set_bit(ARDUCHIP_GPIO, GPIO_PWDN_MASK);  // enable low power
14
15   Serial.println("Normal Capture Done!");
16   read_fifo_burst(myCam, "NORM");
17
18   // Clear the capture done flag
19   myCam.clear_fifo_flag();
20 }

Image transmission
The final step of the smart bin vision module consists in sending the previously taken image to the server and parsing the response, in order to recommend the correct trash bin to the end-user.
To do this, the image is completely read from the sensor memory and stored in the microcontroller RAM, so that it can be used later without any problem caused by the relatively long time needed to read from the sensor (approximately 500 ms). Then we compose the request to send to the server and connect to the WiFi. We then connect to the server and send the previously composed multipart/form-data request, adding the image stored in the Arduino memory.
We receive the response from the server, read it character by character and extract the expected label. At the end we trivially compare the label to the possible materials and turn on the respective light.
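A minimal sketch of this transmission step could look as follows (the host name, the boundary string and the form field name image are our assumptions; imageBuf and imageLen hold the JPEG previously copied from the camera memory):

#include <WiFi101.h>

WiFiClient client;

// Sends the JPEG held in RAM as a multipart/form-data POST request
void sendImage(const uint8_t *imageBuf, size_t imageLen) {
  const char *boundary = "----SmartBinBoundary";          // arbitrary marker
  if (!client.connect("server.example.org", 80)) return;  // hypothetical host

  String head = String("--") + boundary + "\r\n"
    "Content-Disposition: form-data; name=\"image\"; filename=\"waste.jpg\"\r\n"
    "Content-Type: image/jpeg\r\n\r\n";
  String tail = String("\r\n--") + boundary + "--\r\n";

  client.println("POST /image_api.php HTTP/1.1");
  client.println("Host: server.example.org");
  client.print("Content-Type: multipart/form-data; boundary=");
  client.println(boundary);
  client.print("Content-Length: ");
  client.println(head.length() + imageLen + tail.length());
  client.println("Connection: close");
  client.println();

  client.print(head);
  client.write(imageBuf, imageLen);  // raw JPEG bytes
  client.print(tail);
}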

Chapter 4

Evaluation
In this chapter we analyze the data collected during the creation and deployment phases of the two modules. Moreover, we also present some examples of practical applications derived from the project.
In particular, for the capacity monitoring module we mainly focused on displaying the collected data: we provide an accurate interface that summarizes the most useful information about a particular bin. Starting from this, we then provided two useful functionalities: a way to aggregate and visualize the data of multiple bins in a certain building through a map, and smart notifications that alert the user in case a bin reaches a particular state.
For the smart bin vision module we analyze its performance: the accuracy in labelling the correct receptacle and the speed in giving a response. Moreover, we also present other attempts made to improve the project and the problems still present.
In the conclusion of this chapter we also show the power consumption of the two modules; more precisely, we evaluate the feasibility of a battery power supply.


4.1 Smart bin: capacity monitoring modules

Figure 4.1: Representation of the collected data

As a first step, to collect the data, we deployed 3 smart bins in 2 different rooms on the first floor of building 20 of the Politecnico di Milano. We collected data for approximately 45 days, emptying the baskets at non-regular intervals.
We placed them in two different rooms in order to cover a greater number of users and to vary the type of trash thrown.
To further vary the cases, and to highlight that the system operates independently of the basket, we used two different basket sizes.
Using the Node-RED dashboard we created a chart of a single bin from the collected data (4.1). The left panel of the chart shows the current status of the bin and some buttons to see the data of the other bins; the right panel displays the historical trend of the measured height and weight.
As we can see from the historical height chart of the smallest bin (4.2), the height analysis is unreliable on bins lower than 20 cm. From the technical data sheet of the proximity sensor [15] we can see that, to achieve a sufficiently good measurement, the surface of the object to be detected should measure at least 0.5 m². With this bin size such an object dimension is not only physically impossible, but the bottom of the basket itself is smaller than that size. This seriously compromises only the measurements of the smallest bin, while in the larger ones this limitation only generates occasional erroneous measurements.
Figure 4.2: 20 cm smart bin

The weight measurements, instead, are quite accurate. The sensor gives us measurements with an accuracy of one gram and the weight remains constant over many days of measuring if nothing is thrown in the bin. From the slope of the graph we can easily tell when something heavy is thrown in or when the bin was emptied.

4.1.1 Bins map

Figure 4.3: Bins map and their state


The data of every single bin are not very useful without a way to create an aggregate vision of them. So, to simplify the use of the capacity monitoring module, we created a simple interactive map that summarizes the state of an entire floor of a building. In Figure 4.3 we can see an example of the filling degree of every bin on that floor.
The map provides an easy color scheme to identify the status of each bin:

Red bin: a full bin, with a fill level of at least 70%.
Yellow bin: a half-filled bin.
Green bin: an empty bin.

Another feature of the map is the prediction of the expected weight (4.1) from historical data and the current height.

Predicted Weight = (Avg. Weight / Avg. Height) · Detected Height    (4.1)

If the detected weight is 5% greater than the predicted one, the corresponding icon is changed from the bin icon to an exclamation mark, as sketched below. The exclamation mark icon helps the operator who is monitoring the map to quickly understand whether something unusual has been thrown away. For example, in the case of differentiated waste collection, a value far from the expected one can suggest a wrong choice of recycling bin.
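A sketch of this anomaly check (the function and parameter names are illustrative; the thresholds come from the text):

// Flags a bin when the detected weight exceeds the prediction of
// equation (4.1) by more than 5%
bool weightAnomaly(double avgWeight, double avgHeight,
                   double detectedHeight, double detectedWeight) {
  double predicted = (avgWeight / avgHeight) * detectedHeight;  // eq. (4.1)
  return detectedWeight > 1.05 * predicted;                     // 5% margin
}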
Furthermore, even for standard mixed-waste bins, it is useful to know whether a receptacle needs to be emptied even though the fill state color signals an empty bin. In fact, in case of organic waste (e.g. food refuse) or liquids (e.g. a coffee cup), the system detects a higher weight than usual and informs the operator. This monitoring of the weight is fundamental to prevent bad odors, the presence of animals and problems with the sensors.
The map allows the user to access information about each basket by simply pressing the icon corresponding to the chosen bin. This leads to the web page shown before, with all the up-to-date information, and helps the user to understand whether the behavior of a particular basket is unexpected.
A possible use of the smart bins map (4.1.1) is the creation of a smartphone application, or of a custom device, to check the fill status of the bins.
We created a simple mock-up (Figure 4.4) using a Raspberry Pi and a compatible touch-screen display, showing the fill status of a smart bin and its historical values. This device could be given to the waste collector to know which baskets to empty before accessing a building and when each bin was last emptied.


(a) Historical data (b) Fill level

Figure 4.4: Mock-up of a possible device used by the waste collector

4.1.2 Smart notifications


Another simple functionality is the possibility to notify the user when her or his attention is required. Naturally, this is possible only when the number of bins is low or the fill frequency is particularly low (e.g. bins located in places with few people).
As an example, we created 2 different notifications: one alerts the user when the bin reaches a critical level (2/3 of the bin is full), the other informs the user that something unexpected may have been thrown in the bin.
The user can also choose how to be informed. Using Node-RED we created a notification alert using IFTTT (Figure 4.5). In this way we can support multiple devices, such as smartphones, tablets and smartwatches, without the need for the user to rely on a specific device.
IFTTT stands for If This Then That and is a free web-based service. It allows creating chains of simple conditional statements and is fully integrated with many third-party programs and services. In this way, simple Internet of Things applications can be easily created. IFTTT provides an application that supports both Android and iOS.


(a) IFTTT notification of a full bin. (b) IFTTT notification of a bin heavier than usual.

Figure 4.5: IFTTT notification samples on a smartphone.

To further increase compatibility, we also decided to use e-mails (Figure 4.6), which are a de facto standard for communication and are supported by every device without any third-party software like the IFTTT app.


(a) Email notification of a full bin.

(b) Email notification of a bin heavier than usual.

Figure 4.6: Examples of notifications on a smartphone using emails.

4.1.3 Problems
As mentioned above, these are working prototypes that are far from being perfect and problem-free. We identified three main problems, listed here in order of importance:

1. Poor precision in measuring the fill status of the bin. As we touched upon before, this problem is mainly present in small-size bins and is caused by the ultrasonic sensor. In reality this problem is particularly serious only in a few cases; in fact, most bins are big enough not to present it.
The only possible solution to this problem is to replace the sensor with a more accurate one, like the Time of Flight (ToF) VL53L0X, which also has a low power consumption and small dimensions, and in tests on other projects is giving great accuracy results.

2. Vulnerability to hot steam. This problem appears only in a limited number of cases: it requires that a receptacle is frequently used to throw away hot objects emitting steam (like used coffee pods). The steam can create a short circuit in the ultrasonic sensor attached to the lid and damage the prototype.

3. The weight sensor loses some grams in subsequent measurements over long periods of time. From the empirical data, the weight decreases by up to approximately 2 grams every 3/4 days when nothing is thrown in the bin. Although this problem does not have a real solution, it does not influence the final result of the measurements. This is mainly because, in a real use case, it is quite rare that a bin is not used for such a long period of time. Furthermore, a small loss of weight over a long period of time can be identified and easily corrected directly by the server.


4.2 Smart bin - Vision module

The testing of the waste recognition is based on 2 main evaluations: the accuracy, which shows how well the correct waste is predicted, and the time efficiency, that is, the average time that elapses between the moment the photo is taken and the moment the bin recommendation is shown to the user.
To achieve this, we took pictures of different objects, made of different materials, at the maximum available resolution: 2592x1944. Each picture has the same 4:3 aspect ratio and is saved as JPEG. After the initial creation of the image set, using the open source software MAT (the acronym stands for Metadata Anonymisation Toolkit) we removed every metadata that could influence the Amazon Rekognition output. Finally, we created 3 distinct sets from the initial images, with different resolutions (4.7):

2592x1944 It's the maximum resolution of the ArduCAM OV5642. (From now on we will refer to this resolution as 5MP.)

960x720 This is an intermediate resolution corresponding to an HD (High Definition) image. (720p from now on.)

320x240 It's the lowest resolution available on the camera sensor. (240p from now on.)

(a) 5MP image (b) 720p image (c) 240p image

Figure 4.7: Example of different image sizes. The proportions are respected, so we can appreciate the size difference between the 3 resolution sets.

To remove any difference in light or in any other external factor, we did not take 3 separate photos, but changed the size of the initial image using the open source software ImageMagick 6.9.7-4. The images did not receive any further compression during the resize operation. This post-processing gives us the certainty that the 3 sets are all equal and have the same characteristics (contrast, brightness, ...).
To sum up, at the end we created 3 sets, each containing 80 images, for a total of 240 analyzed samples. To create the initial set we used 14 objects, each photographed from different angles: 4 objects made of plastic, 3 of aluminium, 4 of paper and 3 that cannot be recycled.

4.2.1 Accuracy
Before presenting the data obtained through the accuracy analysis, it is important to highlight some points.
First of all, during the creation of the model and the development of the project we assumed that not all prediction errors have the same importance. For this reason we distinguish between two types of error:

Type 1 Lower importance error.


Type 2 More critical error.
An example of a type 1 error is the case where an object made of plastic is not recognized and is thus thrown in the unsorted receptacle. This is considered a lower importance error because it only leads to the loss of a possibly recyclable waste. A type 2 error has more serious consequences. This occurs, for example, when a plastic object is thrown in another recycling bin, like the paper bin. Here a mistake in the prediction can ruin the entire recycling work, leading up to the case where the entire content of the bin is not accepted by the company designated for waste recycling.
Considering this, in the case of an uncertain prediction, the bin has a conservative approach in throwing away objects. Instead of proposing a possibly wrong receptacle, the smart bin labels the object as unsorted waste, decreasing the overall accuracy but also substantially reducing the possibility of a type 2 error.
As we have seen in the previous chapter (3.3.2.1), we created a cardboard model. This decision was driven by the need to isolate the object from the context; in this way the result of the recognition is independent of the place where the photo is taken. The only necessary requirement to achieve a good prediction is to have a sufficiently lit environment.
From initial tests we have seen that Amazon Rekognition does not identify all the objects present in the image but, in the case of pictures with many items, it identifies and reports only those considered more important. This is not a problem in the case of a fixed background, but without the cardboard model these object omissions lead to a higher number of errors.


We will now analyze each of the 4 recognised materials (Plastic, Aluminium, Paper, Unsorted waste).

Plastic Aluminium Paper Unsorted Accuracy


Bottle 5 0 0 0 100%
Squeezed bottle 4 0 0 1 80%
Highly squeezed bottle 2 0 0 3 40%
Small coffee cup 4 0 0 1 80%
(a) Images of plastic objects at resolution of 5MP

Plastic Aluminium Paper Unsorted Accuracy


Bottle 5 0 0 0 100%
Squeezed bottle 5 0 0 0 100%
Highly squeezed bottle 2 0 0 3 40%
Small coffee cup 4 0 0 1 80%
(b) Images of plastic objects at resolution of 720p

Plastic Aluminium Paper Unsorted Accuracy


Bottle 5 0 0 0 100%
Squeezed bottle 5 0 0 0 100%
Highly squeezed bottle 3 0 0 2 60%
Small coffee cup 3 0 0 2 60%
(c) Images of plastic objects at resolution of 240p

Table 4.1: Accuracy on plastic objects.

The plastic object analysis, reported in Table 4.1, gives good results. With new bottles, or generally with objects that are not too deformed, the detection precision is around 100%. The accuracy decreases when the analyzed object is highly squeezed (Figure 4.8b) or photographed in particular positions (Figure 4.8a). In these cases Amazon Rekognition is not able to deliver the correct label, hence the material recognition fails as well, as in the examples of Figure 4.8.


(a) Frontal and deformed object (b) Highly squeezed bottle

Figure 4.8: Example of highly deformed plastic object

Plastic Aluminium Paper Unsorted Accuracy


Cola can 0 6 0 0 100%
Squeezed cola can 0 8 0 0 100%
Highly squeezed cola can 0 4 0 2 66.67%
(a) Images of aluminium objects at resolution of 5MP

Plastic Aluminium Paper Unsorted Accuracy


Cola can 0 5 0 1 83.33%
Squeezed cola can 0 8 0 0 100%
Highly squeezed cola can 0 4 0 2 66.67%
(b) Images of aluminium objects at resolution of 720p

Plastic Aluminium Paper Unsorted Accuracy


Cola can 0 5 0 1 83.33%
Squeezed cola can 0 7 0 1 87.50%
Highly squeezed cola can 0 4 0 2 66.67%
(c) Images of aluminium objects at resolution of 240p

Table 4.2: Accuracy on aluminium objects

The accuracy in the image recognition of objects made of aluminium is higher and reaches levels near 100% even in the case of highly deformed tins. The recognition of an aluminium object, as shown in Table 4.2, behaves differently from the analysis of paper and plastic objects. In fact, as we will see, only with aluminium does the accuracy decrease when reducing the image quality. The average recognition rate is 80% in the worst case (240p) and reaches an average of 90% with images of the maximum quality.


(a) Normal aluminium can (b) Compressed aluminium can

Figure 4.9: Analysis on aluminium cans

Plastic Aluminium Paper Unsorted Accuracy


Ball of paper 0 0 3 2 60%
Newspaper 0 0 2 3 40%
Balled up newspaper 0 0 2 3 40%
Paper ship 0 0 3 2 60%
(a) Images of paper objects at resolution of 5MP

Plastic Aluminium Paper Unsorted Accuracy


Ball of paper 0 0 2 3 40%
Newspaper 0 0 1 4 20%
Balled up newspaper 0 0 2 3 40%
Paper ship 0 0 4 1 80%
(b) Images of paper objects at resolution of 720p

Plastic Aluminium Paper Unsorted Accuracy


Ball of paper 0 0 4 1 80%
Newspaper 0 0 3 2 60%
Balled up newspaper 0 0 1 4 20%
Paper ship 0 0 5 0 100%
(c) Images of paper objects at resolution of 240p

Table 4.3: Accuracy on paper objects

Paper analysis is a bit more problematic than that of the other materials, and the cardboard model was essential to increase the recognition accuracy. From the first tests, done without any model, we noticed that Amazon Rekognition tended to ignore the paper object, considering it not important. Then, applying a fixed background and removing all the external objects that could influence the object label output, we could increase the recognition to a sufficient level: up to 80% for balls of paper and 100% for objects obtained through paper folding.
(a) Voluminous object (b) Voluminous and balled-up newspaper

Figure 4.10: Newspapers are hardly recognized due to their size

Voluminous objects like newspapers (Figure 4.10a) are hardly recognized and, as we have seen also for the other materials, compressed newspapers (Figure 4.10b) give an extremely low recognition rate. The solution in this case is to take the photo from further away and make a bigger model that allows the exclusion of any external object.
During the analysis of the results for paper objects we noticed another interesting trait of the Amazon Rekognition service: by submitting the same image multiple times we obtained different object labels. This means that the service is not deterministic, and multiple analyses of the same image could increase the accuracy of the recognition, albeit to the detriment of speed.

Plastic Aluminium Paper Unsorted Accuracy


Apricot 0 0 0 7 100%
Banana 0 0 0 7 100%
Carrot 0 0 0 6 100%
(a) Images of unsorted objects at resolution of 5MP

Plastic Aluminium Paper Unsorted Accuracy


Apricot 0 0 0 7 100%
Banana 0 0 0 7 100%
Carrot 0 0 0 6 100%
(b) Images of unsorted objects at resolution of 720p

Plastic Aluminium Paper Unsorted Accuracy


Apricot 0 0 0 7 100%
Banana 0 0 0 7 100%
Carrot 0 0 0 6 100%
(c) Images of unsorted objects at resolution of 240p

Table 4.4: Accuracy on unsorted objects

In the case of unsorted objects we mainly focused on food, as this was the most common unsorted object that cannot be recycled.
The accuracy is 100%, irrespective of the object or the photo resolution. This good accuracy can also be imputed to the conservative approach of the model which, in case of uncertainty, assumes the object is not recyclable and labels it as unsorted.
After the analysis of every object, separated by material, we created a final table that sums up the accuracy for every label and presents the final value of total accuracy for each resolution.

2592x1944 Plastic Aluminium Paper Unsorted Accuracy


Plastic 15 0 0 5 75%
Aluminium 0 18 0 2 90%
Paper 0 0 10 10 50%
Unsorted 0 0 0 20 100%
(a) Total accuracy for 5MP images

960x720 Plastic Aluminium Paper Unsorted Accuracy


Plastic 16 0 0 4 80%
Aluminium 0 17 0 3 85%
Paper 0 0 9 11 45%
Unsorted 0 0 0 20 100%
(b) Accuracy table for 720p images

320x240 Plastic Aluminium Paper Unsorted Accuracy


Plastic 16 0 0 4 80%
Aluminium 0 16 0 4 80%
Paper 0 0 13 7 65%
Unsorted 0 0 0 20 100%
(c) Accuracy table for 240p images

Table 4.5: Total accuracy per resolution

The analysis of the overall accuracy data (Table 4.5) reveals that no type 2 errors occurred; hence a bin based only on this model's predictions can successfully identify the waste material without the risk of mixing materials and damaging the bin content.
Looking at the final accuracy data for each resolution (Table 4.6), we notice that the accuracy is approximately 80% independently of the resolution: in a set of 5 objects, 4 are thrown in the correct bin and only one is not recognized as recyclable and ends up in the unsorted bin.

          5MP     720p    240p
Accuracy  78.75%  77.50%  81.25%

Table 4.6: Accuracy per resolution


It is quite unexpected to achieve better results using a lower resolution and fewer details. Without access to the internals of Amazon Rekognition, a plausible explanation is the following: the cloud service likely recognizes a bounded number of objects in the picture, so in big images, like the 720p and 5MP ones, there is a greater number of pixels that can be erroneously confused with possible objects, and the chance of finding false positives during the image recognition process increases. Using 240p images, instead, we have a smaller number of pixels. This helps in situations where there is only one object to recognize: the service finds fewer false positives and thus makes a better prediction.

4.2.2 Time efficiency


As we touched upon in the previous part, we used 3 different image resolutions. This implies that the 3 sets contain images of different sizes (Table 4.7) and that the image recognition requires different amounts of time to analyze them.

              5MP         720p       240p
Min. size     845.83 KB   116.6 KB   19.12 KB
Average size  1142.2 KB   171.76 KB  23.79 KB
Max. size     1490.84 KB  232.86 KB  35.61 KB

Table 4.7: File dimension per resolution

The analysis measures the time elapsed from the moment the image is sent from
the personal server up to the moment we receive the label response from the Amazon
server. We repeated the same analysis using the Wired connection provided by
Politecnico di Milano, a temporary ad-hoc WiFi connection and a 4g connection
using a portable router. The rst two connection provide a 100Mbps connection, the
4g provide a 20Mbps connection.
The data provided measure the time to execute the following operations: send
the image to Amazon Rekognition, parse the response and output the material label.
From the data (Table 4.8) we can see that there are no significant differences between the wired and the wireless connection. Instead, there is a significant difference using the 4G connection, due to its lower speed. The gap is remarkable with the 5MP images (on average 190% slower), while it is smaller for the 720p images (33% slower) and the 240p ones (49% slower).
The main difference emerges among the 3 resolutions: sending an image at 240p instead of 5MP is, on average, 1.7 times faster.
Moreover, even for the biggest files (1490 KB), a wireless or a wired connection takes less than 2 seconds to obtain a response. This means that this model can be used for real-time applications at every resolution, assuring that the user always receives a very fast response.

              5MP        720p       240p
Min. time     1.275 s    0.973 s    0.729 s
Average time  1.456 s    1.109 s    0.868 s
Max. time     1.731 s    1.394 s    1.134 s
(a) Time with WiFi connection

              5MP        720p       240p
Min. time     1.177 s    0.887 s    0.612 s
Average time  1.462 s    1.151 s    0.865 s
Max. time     1.894 s    1.439 s    1.051 s
(b) Time with wired connection

              5MP        720p       240p
Min. time     3.364 s    1.244 s    1.069 s
Average time  4.244 s    1.529 s    1.291 s
Max. time     4.723 s    1.796 s    1.516 s
(c) Time with 4G connection

Table 4.8: Time elapsed per resolution


A quick projection for future implementations shows that this model can process up to 41 objects per minute using the highest resolution (e.g. 60 s divided by the 1.456 s average WiFi time) or up to 69 objects per minute using the lowest resolution available from the camera (60 s / 0.868 s). This is particularly interesting when compared with the results of the accuracy analysis section, where we found that images at the lowest resolution can achieve better results than higher-quality pictures.
Moreover, considering that 240p images give the best results, we can also use the 4G connection, obtaining an average response time of 1.3 seconds. In future implementations we expect even better results employing a 5G connection, reaching an average response time below one second for 240p images, as already seen with the wired connection.

4.2.3 Further experiments

Before reaching the results shown, we also tried different solutions to figure out whether better outcomes could be obtained.
First we tried to remove the background (Figure 4.11) with a post-processing step on the intermediate server between the client and Amazon Rekognition. The server receives the original image (4.11a) created by the camera sensor, removes the background automatically using ImageMagick (4.11b) and sends the result to Amazon Rekognition.
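
The exact ImageMagick options used in this experiment are not reported here; a minimal sketch of the server-side step, assuming a fuzzy flood-fill of the near-white cardboard backdrop and a hypothetical fuzz threshold, could look like this:

import subprocess

def remove_background(src_path, dst_path, fuzz_percent=15):
    # -fuzz tolerates small colour variations of the backdrop,
    # -transparent replaces the matched colour with transparency.
    subprocess.run(
        ["convert", src_path,
         "-fuzz", f"{fuzz_percent}%",
         "-transparent", "white",
         dst_path],
        check=True,
    )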
Unfortunately this procedure does not work properly, as sometimes the background is not completely removed.

Figure 4.11: Comparison of images with and without background. (a) Normal image that reaches the server. (b) The image sent to Amazon Rekognition after post-processing.

Even when the background is removed correctly, the recognition fails more often than with images that have not been post-processed.
In another attempt we tried to substitute the cardboard box with a different one that allows the picture to be taken from the top. In this way there are no more problems with the background, because it always lies underneath the object and no external element appears in the picture.
The first results were interesting, but the model was not good enough to handle objects of different sizes. In some cases, with bulky objects such as the balled-up newspaper (Figure 4.10b), the camera is obscured and no image recognition is possible.


4.2.4 Problems

Overall, despite the good results achieved, there are still some problems that need attention. First of all, as seen in the accuracy analysis section, there are still problems with deformed items. Unfortunately this issue is strictly related to the image recognition service used and cannot be easily avoided, except by changing service or creating an ad hoc one.
Another significant issue is finding a way to isolate the photographed object from its context. We tried both a software and a hardware solution. The former, which removes the background using image editing software, gives poor results when combined with Amazon Rekognition. The latter is the solution currently in use, but it is hard to apply to large objects.
This last problem is not so serious: from a quick analysis, most recyclable objects are small, so the cardboard model works correctly in the majority of cases.


4.3 Consumption analysis

One of the pillars of every Internet of Things application is a power consumption low enough to let the devices run on batteries. So, to conclude the analysis of the results of the various projects, we also tracked the energy consumption of the two main modules. To do this we used an Arduino UNO (2.6.1) and the INA219 DC current sensor (2.7.4), as described in detail in the subsection on common features (3.1.1). All the measurements were processed in a spreadsheet that allowed us to extract data such as the minimum, maximum and average current consumption.
As a first step we analyzed the effective energy consumption of a single smart bin capacity module (Figure 4.12). To better represent the data on the graph and to show two whole cycles, we reduced the deep sleep period from 60 minutes to only 25 seconds.
We first analyzed the consumption of the Arduino MKR1000, leaving it first idle (using the delay() function) and then putting it in deep sleep.
The consumption was measured varying 2 different parameters:

• Input pin for the power supply:
  – VIN pin: can be used to power the board with a regulated 5V source (it tolerates voltages up to 6V).
  – VCC pin: primarily an output pin, but it can also receive a 3.3V supply; care must be taken, as it does not tolerate higher voltages.
  – 5V pin: also mainly an output pin; it can receive a 5V supply only, and any different voltage level could damage the microcontroller.

• Input voltage:
  – 5V: the maximum voltage allowed by the board.
  – 3.3V: the working voltage of the Arduino MKR1000.


Power source   Input pin   Current consumption
3.3 V          VIN         4.8 mA
3.3 V          VCC         0.5 mA
5 V            VIN         9 mA
5 V            5V          9 mA
(a) Consumption in deep sleep

Power source   Input pin   Current consumption
3.3 V          VIN         13.4 mA
3.3 V          VCC         9.3 mA
5 V            VIN         18.1 mA
5 V            5V          18.1 mA
(b) Consumption when idle

Table 4.9: Power consumption of the MKR1000

The consumption analysis (Table 4.9) shows that deep sleep is clearly the best choice to achieve a long autonomy: depending on the configuration, the current consumption in deep sleep is between 2 and 18 times lower than when idle.
Supposing we use the Arduino MKR1000 with a 3.3V power source, sleeping for approximately 30 minutes between activities, and considering that the microcontroller stays awake for a mean time of 4.762s per cycle (Table 4.10), we obtain an average current consumption of 0.656mA.
With these data we can size the battery correctly: supposing we want a bin working for at least 2 weeks, we need a battery that can provide 220.4mAh (4.2).

\[
  \text{Duration [h]} = \frac{\text{Battery Size}}{\text{Energy needed}} \tag{4.2}
\]
From further studies we know that the energy drawn from the battery is greater than the energy actually consumed by the load [26], and from empirical data we can estimate the available energy of a commercial battery to be not less than 80% of its nominal charge [17] (4.3).

\[
  \text{Duration [h]} = \frac{\text{Battery Size} \times 0.8}{\text{Energy needed}} \tag{4.3}
\]
Hence, taking as reference the average smartphone battery of 2018, 3300mAh [31], we can estimate the autonomy of the smart bin as 4024h: approximately 5 months.
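
The following sketch reproduces this sizing in Python using the figures of Tables 4.9 and 4.10 and equations (4.2) and (4.3); no values other than those measured above are assumed.

SLEEP_CURRENT_MA = 0.5    # deep sleep at 3.3 V (Table 4.9)
AWAKE_CURRENT_MA = 59.62  # awake + transmission (Table 4.10)
SLEEP_S, AWAKE_S = 1800, 4.762

def average_current_ma():
    # Duty-cycle weighted average over one sleep/wake cycle
    total = SLEEP_CURRENT_MA * SLEEP_S + AWAKE_CURRENT_MA * AWAKE_S
    return total / (SLEEP_S + AWAKE_S)

def duration_hours(battery_mah, usable_fraction=0.8):
    # Equation (4.3): only about 80% of the nominal charge is available
    return battery_mah * usable_fraction / average_current_ma()

print(f"Average current: {average_current_ma():.3f} mA")   # ~0.656 mA
print(f"3300 mAh battery: {duration_hours(3300):.0f} h")   # ~4024 h, about 5 months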
Unfortunately, since the proximity sensor requires a 5V source, we cannot reach a real deep sleep. Therefore the current consumption rises to 15.87mA during the sleep period (Figure 4.12).
Considering this, the average current consumed becomes 15.986mA, which dramatically reduces the battery duration: using the device for 2 weeks would require 5371mAh. Taking again the example of a 3300mAh battery, we would reach a battery duration of only 165h, a bit less than 1 week.


State                     Current [mA]   Duration [s]
Deep sleep [3.3V]            0.5           1800
Deep sleep [3.3V - 5V]      15.87          1800
Awake + transmission        59.62             4.76
Awake (no transmission)     39.69             3.51
Table 4.10: Consumption of the smart bin


In order to save more battery we decided to store the last weight and height samples taken and compare them with the newly extracted data. If the data have not changed significantly, they are not sent to the server, and therefore it is not necessary to connect to the WiFi network. The maximum time interval between two communications is 6 hours: in this way we can tell whether the bin has problems or whether it is simply rarely used. A sketch of this policy is shown below.
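
The following is a minimal sketch of this transmission policy, written in Python for clarity even though the logic runs on the microcontroller; the change thresholds are hypothetical values, and only the 6-hour maximum silence comes from the design above.

MAX_SILENCE_S = 6 * 3600  # maximum interval between two communications

def should_transmit(weight_g, height_cm, last_sent, now_s,
                    weight_threshold_g=50, height_threshold_cm=2):
    """Transmit only if the new samples differ significantly from the
    last transmitted ones, or if the heartbeat interval has expired."""
    if last_sent is None:
        return True  # nothing has been sent yet
    changed = (abs(weight_g - last_sent["weight_g"]) > weight_threshold_g
               or abs(height_cm - last_sent["height_cm"]) > height_threshold_cm)
    heartbeat_due = now_s - last_sent["time_s"] >= MAX_SILENCE_S
    return changed or heartbeat_due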

Figure 4.12: Consumption graph of 2 cycles of data collection and transmission

For the second module, the vision one, we cannot use the deep sleep functionality: we do not know when the next object will be placed in the cardboard model.
As a first step we analyzed the consumption of the camera alone (Table 4.11). From the graph (Figure 4.13) we can see that the average camera consumption in power saving mode is 55.67mA. This state lasts for an undefined amount of time, depending on how the prototype is used. On the contrary, the time needed to take a picture is fixed at 577ms, with an average current consumption of 226.47mA.


State                 Current [mA]   Duration [s]
Power saving status      55.67           /
Take a photo            226.47          0.577
Table 4.11: Power consumption of camera sensor OV5642

Figure 4.13: Camera consumption

After the initial analysis of the camera alone, we can look at the power consumption of the whole prototype, made of an Arduino MKR1000 as microcontroller, the OV5642 camera sensor to take the photo, the HC-SR04 sensor to detect the presence of objects and finally 3 LEDs to communicate the suggested bin to the user.
From the graph (Figure 4.14) we can see 3 full cycles of detection. In every cycle we can identify 4 different phases (a schematic sketch follows the list):

1. Idle phase: the camera is in power saving mode, the LEDs are off and only the ultrasonic sensor is measuring the distance to detect a possible object.

2. Take a photo: an object is detected; only in this phase does the camera exit power saving mode, take a photo and store it in memory.

3. Send image to server: only in this phase does the microcontroller connect to the WiFi network, send the image through a POST request and parse the response.

4. Suggest bin: the LED corresponding to the suggested bin is turned on for 5 seconds.
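
The cycle can be summarized with the following sketch; every hardware helper (read_distance_cm, take_photo, post_to_server and the LED functions) is a hypothetical placeholder for the corresponding firmware routine.

import time

OBJECT_DISTANCE_CM = 20  # hypothetical threshold: a shorter distance means an object is present

def detection_loop():
    while True:
        # Phase 1: idle, only the ultrasonic sensor is polled
        if read_distance_cm() < OBJECT_DISTANCE_CM:
            photo = take_photo()              # Phase 2: camera leaves power saving
            material = post_to_server(photo)  # Phase 3: POST request + parsed response
            turn_on_led(material)             # Phase 4: suggest the bin
            time.sleep(5)                     # LED stays on for 5 seconds
            turn_off_leds()
        time.sleep(0.1)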


Figure 4.14: Total consumption of the smart bin vision module

Therefore the overall power consumption cannot be estimated accurately, because it depends on how the prototype is used. From the measured data we know that in the idle phase the prototype consumes 80.95mA, while it consumes 166.05mA for approximately 8.5 seconds when it is triggered and executes the other 3 phases.
Having these data, we can estimate the minimum and maximum consumption. For the minimum, we suppose that the prototype is always idle, so it consumes only 80.95mA: a battery with a nominal capacity of 3300mAh can then power the prototype for at least 32h. For the maximum, we suppose that an object is always present inside the model and is analyzed continuously: in this case the average consumption is 138.94mA and the prototype can be powered for at least 19 hours.
In conclusion, this analysis shows that, due to the high consumption of this module, a battery-powered implementation is still not feasible; we therefore suggest an external power source to deploy and use this prototype.
A different design could help to create a battery-powered smart bin, for example by keeping all the sensors and the microcontroller off and letting the user manually activate the recognition with a button. In this way we could considerably increase the battery duration, but the recognition time would also increase because of the camera sensor setup time, thus worsening the user experience.
From a more accurate analysis of the use cases of this smart bin module, we can see that using mains electricity instead of batteries is not a great problem, since this module does not need to be moved.

Conclusions
With this work we have proposed two functional modules that can be adopted to manage waste collection in a better way, increase recycling and thus help to live in a cleaner world.
The smart bin capacity monitoring module is one of them: as we have seen, we were able to transform an ordinary trash bin into a more complex object that communicates with a central server that manages it. Moreover, we did not simply collect data, but created a simple mechanism to actively inform the end user about the status of the bins. The same information can also be obtained by checking, at any time, an easy-to-use and constantly updated map. We also demonstrated that a trash bin of a specific size or shape is not required: almost any bin can be turned into a smart bin.
In the previous chapter we also described in detail how to transform a tiny microcontroller and a low-cost camera into a fully functional smart bin that can recognize any sort of waste and recommend the appropriate recycling bin. We created a fully functional mock-up and tested it.
We reached great results: we can recognize an object's material in less than 1 second, obtaining an average accuracy of approximately 80%. In addition, the wrong predictions never mix materials among different recycling bins, but only suggest throwing a recyclable object into the unsorted waste bin.
For the development of this master thesis we started from previous implementations found in the literature and mentioned in Chapter 1, and we briefly discussed the different problems we encountered. Examples of these issues are the lack of compatibility among the NodeMCU 0.9 microcontroller, the HX711 library and the MQTT implementation for ESP8266 devices, or the ultrasonic sensor problems caused by very small trash bins or by small items too close to the sensor.

Further developments

As we have said, the projects shown in this thesis are far from perfect and of course have parts that can be enhanced.
An example is the relatively high power consumption of every smart bin capacity module.

This consumption is caused by the ultrasonic sensor, which operates at 5V and prevents the microcontroller from entering deep sleep mode when idle. A possible solution is to change the sensor used to measure the fill level: from further research we found the VL53L0X Time-of-Flight laser ranging sensor, which works at 3.3V. This would allow deep sleep on the Arduino and consequently the deployment of the smart bin with small batteries.
We also found problems in the smart bin vision module. An example is the inability of the image recognition service to correctly identify deformed items. A possible solution could be the use of another, or even multiple, image recognition services. Another possible solution requires the creation of an ad hoc neural network to determine the waste material directly; a possible implementation is provided by the open source software library TensorFlow. An ad hoc solution would provide greater speed, because the image would no longer be sent to an external server, and better results, because the neural network would no longer be trained to recognize generic objects but only the material they are made of.
Another possible idea is to use a more complex and flexible artificial intelligence algorithm to detect unexpected objects in the smart bin.
Looking at new technologies, the spread of 5G in the near future will increase the speed and reliability of communications over the internet while reducing their power consumption. This will offer the possibility to improve the current prototypes, reducing the size of the batteries and removing the dependency on a reliable WiFi connection, allowing these smart bins to be deployed almost everywhere.

Bibliography
[1] Md Abdulla Al Mamun, MA Hannan, and Aini Hussain. Real time solid waste bin monitoring system framework using wireless sensor network. In Electronics, Information and Communications (ICEIC), 2014 International Conference on, pages 1–2. IEEE, 2014.

[2] Wi-Fi Alliance. Wi-Fi Alliance® introduces security enhancements. https://www.wi-fi.org/news-events/newsroom/wi-fi-alliance-introduces-security-enhancements, Jan 2018. Accessed on 2018-07-06.

[3] AWS. Limits in Amazon Rekognition. https://docs.aws.amazon.com/rekognition/latest/dg/limits.html. Accessed on 2018-07-05.

[4] ArduCAM. Arducam mini released. http://www.arducam.com/arducam-mini-released, Mar 2015. Accessed on 2018-07-05.

[5] Cyril Joe Baby, Harvir Singh, Archit Srivastava, Ritwik Dhawan, and P Mahalakshmi. Smart bin: An intelligent waste alert and prediction system using machine learning approach. In Wireless Communications, Signal Processing and Networking (WiSPNET), 2017 International Conference on, pages 771–774. IEEE, 2017.

[6] Adil Bashir, Shoaib Amin Banday, Ab Rouf Khan, and Mohammad Shafi. Concept, design and implementation of automatic waste management system. International Journal on Recent and Innovation Trends in Computing and Communication, ISSN 2321–8169, 2013.

[7] G. Bonifazi, S. Serranti, A. Bonoli, and A. Dall'Ara. Innovative recognition-sorting procedures applied to solid waste: the hyperspectral approach. pages 885–894, April 2009.

[8] The European Commission. Review of waste policy and legislation. http://ec.europa.eu/environment/waste/target_review.htm, Dec 2017. Accessed on 2018-07-02.


[9] Michel Deslierres. Esp8266 watchdogs in arduino. https://sigmdel.ca/michel/program/esp8266/arduino/watchdogs_en.html#ESP8266_WDT_FEEDING, Aug 2017. Accessed on 2018-08-04.

[10] Jay Donovan. Auto-trash sorts garbage automatically at the techcrunch disrupt hackathon. https://techcrunch.com/2016/09/13/auto-trash-sorts-garbage-automatically-at-the-techcrunch-disrupt-hackathon/, Sep 2016. Accessed on 2018-08-01.

[11] Teh Pan Fei, Shahreen Kasim, Rohayanti Hassan, Mohd Norasri Ismail, Mohd Zaki Mohd Salikon, Husni Ruslai, Kamaruzzaman Jahidin, and Mohammad Syafwan Arshad. Swm: Smart waste management for green environment. In Student Project Conference (ICT-ISPC), 2017 6th ICT International, pages 1–5. IEEE, 2017.

[12] Glenn Fleishman. Battered, but not broken: understanding the WPA crack. https://arstechnica.com/information-technology/2008/11/wpa-cracked, Jul 2008. Accessed on 2018-07-06.

[13] S. Fluhrer, I. Mantin, and A. Shamir. Weaknesses in the key scheduling algorithm of RC4. https://www.cs.cornell.edu/people/egs/615/rc4_ksaproc.pdf, Jan 2001. Accessed on 2018-07-06.

[14] Fachmin Folianto, Yong Sheng Low, and Wai Leong Yeow. Smartbin: Smart waste management system. In Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pages 1–2. IEEE, 2015.

[15] Elec Freaks. Ultrasonic ranging module HC-SR04. https://cdn.sparkfun.com/datasheets/Sensors/Proximity/HCSR04.pdf. Accessed on 2018-07-28.

[16] N Sathish Kumar, B Vijayalakshmi, R Jenifer Prarthana, and A Shankar. IoT based smart garbage alert system using Arduino UNO. In Region 10 Conference (TENCON), 2016 IEEE, pages 1028–1034. IEEE, 2016.

[17] Lars Ole Valøen and Mark I. Shoesmith. The effect of PHEV and HEV duty cycles on battery and battery pack performance. https://web.archive.org/web/20090326150713/http://www.pluginhighway.ca/PHEV2007/proceedings/PluginHwy_PHEV2007_PaperReviewed_Valoen.pdf, 2007. Accessed on 2018-07-30.

[18] Dave Locke. MQ telemetry transport (MQTT) v3.1 protocol specification. https://www.ibm.com/developerworks/webservices/library/ws-mqtt/, Aug 2010. Accessed on 2018-07-09.


[19] Cristian Maglie. FlashStorage library for arduino. https://github.com/cmaglie/FlashStorage, Oct 2016. Accessed on 2018-08-04.

[20] SS Navghane, MS Killedar, and Dr VM Rohokale. IoT based smart garbage and waste collection bin. Int. J. Adv. Res. Electron. Commun. Eng, 5(5):1576–1578, 2016.

[21] OASIS. Mqtt version 3.1.1. http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc384800473, Oct 2014. Accessed on 2018-07-09.

[22] OASIS. Mqtt version 3.1.1. http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718111, Oct 2014. Accessed on 2018-07-09.

[23] OASIS. Mqtt version 3.1.1 becomes an OASIS standard. https://www.oasis-open.org/news/announcements/mqtt-version-3-1-1-becomes-an-oasis-standard, Oct 2014. Accessed on 2018-07-09.

[24] Jared Paben. Carton-plucking 'Clarke' brings robots into recycling. https://resource-recycling.com/plastics/2017/03/31/carton-plucking-clarke-brings-robots-recycling/, Mar 2017. Accessed on 2018-08-01.

[25] Andreas Papalambrou, Dimitrios Karadimas, J Gialelis, and Artemios G Voyiatzis. A versatile scalable smart waste-bin system based on resource-limited embedded devices. In Emerging Technologies & Factory Automation (ETFA), 2015 IEEE 20th Conference on, pages 1–8. IEEE, 2015.

[26] Daler N. Rakhmatov and Sarma Vrudhula. Battery modeling for energy-aware system design. https://pdfs.semanticscholar.org/2cf8/2f3d23b04eaaea3208f0e6e4382a9651aadc.pdf, Jul 2003. Accessed on 2018-07-25.

[27] Narayan Sharma, Nirman Singha, and Tanmoy Dutta. Smart bin implementation for smart cities. International Journal of Scientific & Engineering Research, 6(9):787–791, 2015.

[28] Sanjana Sharma, V Vaishnavi, Vandana Bedhi, et al. Smart bin - an 'internet of things' approach to clean and safe public space. In I-SMAC (IoT in Social, Mobile, Analytics and Cloud)(I-SMAC), 2017 International Conference on, pages 652–657. IEEE, 2017.


[29] Security Space. Web server survey. https://secure1.securityspace.com/s_survey/data/201807/index.html, Aug 2018. Accessed on 2018-08-04.

[30] Andres Torres Garcia, Oscar Rodea Aragon, Omar Longoria Gandara, Francisco Sanchez Garcia, and Luis Enrique Gonzalez Jimenez. Intelligent waste separator. Computación y Sistemas, 19(3), Oct 2015. Accessed on 2018-08-01.

[31] Robert Triggs. Fact check: Is smartphone battery capacity growing or staying the same? https://www.androidauthority.com/smartphone-battery-capacity-887305/, Jul 2018. Accessed on 2018-07-25.

[32] S.T. Wagland, F. Veltre, and P.J. Longhurst. Development of an image-based analysis method to determine the physical composition of a mixed waste material. Waste Management, 32(2):245–248, February 2012.

[33] Aksan Surya Wijaya, Zahir Zainuddin, and Muhammad Niswar. Design a smart waste bin for smart waste management. pages 62–66. IEEE, August 2017.

[34] Zhu Lei. Machine learning for automatic waste sorting. Master's thesis, ENSTA Bretagne, Aug 2017. Accessed on 2018-08-01.

