2 Technological background 8
2.1 The IEEE 802.11 Standard . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 LAMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 GNU/Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Apache HTTP Server . . . . . . . . . . . . . . . . . . . . . . 12
2.2.3 PHP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.4 MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 MQTT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.1 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.2 Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.3 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 REST API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Node-RED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 Microcontroller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.6.1 Arduino . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.6.2 Raspberry Pi . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6.3 NodeMCU - Devkit 0.9 . . . . . . . . . . . . . . . . . . . . . 22
2.7 Used sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.7.1 Load Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.7.1.1 HX 711 . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7.2 HC-SR04 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7.3 ArduCAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.7.4 INA219 DC Current Sensor . . . . . . . . . . . . . . . . . . . 28
3 System implementation 29
3.1 Common functionalities . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.1 Current measure . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.2 Measure the distance . . . . . . . . . . . . . . . . . . . . . . . 31
3.1.3 Obtain a unique identifier . . . . . . . . . . . . . . . . . . . . 32
3.1.4 Store values in flash memory . . . . . . . . . . . . . . . . . . 33
3.2 Smart bin - Capacity monitoring module . . . . . . . . . . . . . . . . 35
3.2.1 Server implementation . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1.1 Receiving data . . . . . . . . . . . . . . . . . . . . . 35
3.2.1.2 Showing information . . . . . . . . . . . . . . . . . . 36
3.2.2 Client implementation . . . . . . . . . . . . . . . . . . . . . . 38
3.2.2.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2.2.2 Software . . . . . . . . . . . . . . . . . . . . . . . . 43
3.3 Smart bin - Vision module . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3.1 Server implementation . . . . . . . . . . . . . . . . . . . . . . 47
3.3.2 Client implementation . . . . . . . . . . . . . . . . . . . . . . 50
3.3.2.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3.2.2 Software . . . . . . . . . . . . . . . . . . . . . . . . 52
4 Evaluation 58
4.1 Smart bin: capacity monitoring modules . . . . . . . . . . . . . . . . 59
4.1.1 Bins map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.1.2 Smart notifies . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.1.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.2 Smart bin - Vision module . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2.1 Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.2 Time efficiency . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.2.3 Further experiments . . . . . . . . . . . . . . . . . . . . . . . 74
4.2.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.3 Consumption analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Conclusions 82
Bibliography 87
List of Figures
2.1 ISO/OSI Model and the IEEE 802.11 layers . . . . . . . . . . . . . . 8
2.2 LAMP logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 MQTT Broker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 Sample of a Node-RED flow . . . . . . . . . . . . . . . . . . . . . . . 18
2.5 MQTT setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 Node-RED dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Arduino Microcontrollers . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.8 Raspberry Pi 3 Model B . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.9 NodeMCU v0.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.10 Watchdog error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.11 Load cell (10kg) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.12 Analog to Digital Converter HX 711 . . . . . . . . . . . . . . . . . . 26
2.13 Ultrasonic sensor: HC-SR04 . . . . . . . . . . . . . . . . . . . . . . . 26
2.14 ArduCAM-Mini-5MP-Plus OV5642 Camera Module . . . . . . . . . 27
2.15 INA219 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
List of Tables
3.1 Result of the material detection . . . . . . . . . . . . . . . . . . . . . 50
List of Algorithms
3.1 Main client function . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Introduction
Nowadays more and more trash is created and waste management is becoming crucial, both for developed countries and for developing ones. The main idea is not to propose a solution to this widespread and complex problem, but to alleviate it and improve the current inefficient solutions. In order to achieve this ambitious aim I focus on an advanced smart bin project composed of two modules.

1. The capacity monitoring module, a system that detects the fill status and the weight of a bin and communicates them to a central server.

2. The vision module, a system that can recognize waste and recommend the appropriate receptacle according to the material of the waste.
The first module's intention is to improve the garbage collection process. Currently every single bin must be inspected to figure out the fill level and then decide if there is a real need to empty it. Moreover, in a relatively small environment, like a building on a university campus, greater attention is also required from the waste collector. In fact, the employee has to understand whether any organic waste is present inside the wastebasket, which requires collection regardless of the fill level and, furthermore, a mandatory change of the dirty rubbish bag.
The second module focuses on another crucial problem of waste management: recycling. Many countries all over the world want to reduce the quantity of waste produced and its correlated costs, in terms of money for waste disposal and in terms of health for a greater environmental quality. Particularly virtuous in this field is the EU, which has fixed the ambitious targets of recycling 65% of municipal waste and up to 75% of packaging waste. [8] To achieve this we have created a system that, through the use of a camera, identifies the object to throw away and suggests the most appropriate bin to recycle it correctly.
This smart bin project is currently based on WiFi technology, which is widespread throughout the campus and provides stable, fast and low energy consumption connectivity. In the near future we aim to replace it with 5G sensors, further increasing the performance of the project and also removing the constraint of the reduced WiFi coverage.
We are proposing here a smart bin that is cheap, since all the electronic components used cost only a few euros, and easy to use, since the two modules do not require any particular skill or knowledge. As an example of this last point, to obtain a recycling bin recommendation it is sufficient to insert the object in the bin and the response appears in a couple of seconds; for the smart bin network, instead, the notifications are automatically sent to the desired device and the map can easily be used on a smartphone screen.
For the construction of the first module, the capacity monitoring module, we have added to some common waste bins a microcontroller, a tiny computer only a few centimeters long, and some sensors used to detect the fill status of the bin and its weight. These smart bins can also communicate with a central server connecting to a pre-existing WiFi network. The data sent by all the devices are collected by a central server that stores and processes them and then informs the end-user about the status of the bin. The system can detect not only the fill status of the bin, but can also report if something unexpected has been thrown into a waste bin. The users can monitor the status of the bins using a map of the building at any time, or can receive a notification on their favorite device, like a smartphone, computer or smartwatch, through emails or push notifications.
The vision module is built using the same microcontroller, wired to a camera sensor. In this case the microcontroller takes a picture of the object inserted in this smart bin and, again with the support of a central server, it can recommend to the user the appropriate recycling bin to use in order to correctly recycle that object. The server receives the image of the object and then replies to the smart bin with its waste bin proposal. From the transmitted image we obtain a list of possible objects using an already existing image recognition service called Amazon Rekognition. This service provides a complete Software Development Kit written in PHP that has been used to extract information from the image.
The thesis is divided into 4 main chapters and every chapter can be conceptually split into two parts: the former is related to the capacity monitoring module whereas the latter to the vision module.

In Chapter 1 we analyze the previous smart bin implementations and the existing papers in the literature that inspired us to develop this project.

In Chapter 2 we describe the theoretical and practical technological background adopted to develop this project.

In Chapter 3 we introduce our approach to the problem. Initially we describe the common features, then we enter into the details showing how we have created the modules step by step.

Finally, in Chapter 4 we present the obtained results and some possible use cases in real scenarios of the two smart bin modules. We also show a detailed analysis of their power consumption in relation to current commercial batteries.
We have dedicated the conclusion section to describing the obtained results and some proposals for future work.
Chapter 1
As said before, many studies have been done on the smart bin concept, and a great interest comes especially from the Eastern countries that have experienced a great economic and demographic development in the last century. This has led to an exponential growth of the waste produced and therefore to the need to find a more efficient method of waste disposal.
1.1 Capacity monitoring smart bin state of the art

In 2013 Bashir et al. from two Indian universities created a prototype using a load cell and infrared sensors to control the fill level. For the communication part the system makes use of radio frequency (RF) tags [6]. The fill level is not very precise and gives only 2 bin statuses, because one sensor is placed at the middle of the Smart Trash Bin and the second is placed near its top. Communication is not continuous: a message is sent only when the Smart Trash Bin is filled up to the specified load and level.
A similar approach to level sensing using infrared sensors (TSOP 1738) was also presented by Navghane et al. from Lonavala (India) in 2016 [20], who used 3 IR sensors and a weight sensor to communicate the fill level and the weight status through a WiFi module.
In 2014 Mamun et al. from Universiti Kebangsaan Malaysia presented a new framework that enables the remote monitoring of solid waste bins in real time, via ZigBee-PRO and GPRS [1]. The framework receives various data from a smart bin thanks to the great number of sensors on it: an accelerometer, a hall effect sensor, an ultrasound sensor, a temperature and a humidity sensor, and a load cell sensor.
Sharm et al. in 2015 published a paper that describes their model of smart bin using a PIC16F73 microcontroller, an HC-SR04 ultrasonic sensor and a SIM900A GSM module for the communication part [27]. They also designed a map that gives an idea of the fill status of the bins in the city, represented with widgets: "These widgets marking the level of dustbin filled will be put in the location in map exactly the way dustbins are placed throughout the city". In 2017 Sharm et al. published a new paper [28], now using an Arduino UNO with a WiFi sensor. In this case their approach was similar to the one used by us: when garbage reaches the threshold limit, the status of the bin is updated in the cloud, and this status can be accessed by the concerned authorities.
In 2017 Wijaya et al. of Hasanuddin University in Makassar (Indonesia) created a prototype of a smart waste bin that can send measurements using a GSM modem for long range communication and a Bluetooth module for the short range [33]. In their studies they analyzed the load cell in order to verify the sensor and measurement accuracy, concluding to a less than 3% error, and the HC-SR04 sensor, discovering that "The average of error percentage of level sensor is 3-5%. It means that the sensor can measure the waste level in the bin".
Many other examples are present, but noteworthy is the 2016 prototype of Kumar et al. from Sri Ramakrishna Engineering College in India, which uses an RFID tag to confirm the task of emptying the garbage. Moreover, "whenever an RFID tag (ID card of the cleaner) interrupts the RFID reader, the ultrasonic sensor checks the status of the dustbin and sends it to the web server" [16]; in this way the microcontroller provides real time information to the server.
In 2017 Fei et al. from Universiti Tun Hussein Onn Malaysia published a paper describing another example of smart bin using the NodeMCU microcontroller equipped with ultrasonic sensors, and tracking collection trucks using a GPS module in order to inform the nearest collection truck to reach the bin that requires to be emptied [11].
The same year Baby et al. from the School of Electronics Engineering in India presented an example of smart bin that implements different notifications to alert the authorities to empty the bin. The user receives a mail through an SMTP server and, using IFTTT, a mobile notification is also sent and an event is created in the Google calendar. The technology used to sense the bin status is a Raspberry Pi connected to an ultrasonic sensor [5].
Two projects were deployed and tested in real cases during 2015. One was deployed by Folianto et al. in Singapore, reaching an average data delivery ratio of 99.25% during six months of data collection. Of particular interest is the use of containers with IP65 certification to prevent sensor malfunctions caused by dust and water splashes [14].

The other project was deployed by Dynacargo in the Municipality of Nafpaktia in Greece, using an IP67 protection and the MB1040 LV-MaxSonar-EZ4 to detect the fill status [25].
1.2 Waste recognition state of the art

Despite the fact that waste recycling is a more and more important theme for the environmental policies of many states, its automation using an image recognition system is quite new and there are only a few papers in this field.

The current technologies are based on near-infrared spectroscopy, expensive sensors that can identify the object material from the surface. A related joint research of 3 Italian universities in 2009 shows the results obtained using this technology. The study of Bonifazi et al. "addressed the application of hyperspectral imaging approach for compost products characterization, in order to develop control strategies to be implemented at the plant scale" [7].
The first study aiming to detect different materials through image analysis was published by Wagland, Veltra and Longhurst in the online journal Waste Management in 2012. They describe their work as: "Research towards new, non-invasive, remote imaging and image recognition methods to provide faster and more accurate technologies for waste characterization could lead to significant savings in time and cost, and a reduction in the risk of worker exposure" [32].
An interesting example project was made in 2015 at the Jesuit University of Guadalajara in Mexico by Torres-García et al.: they developed the Intelligent Waste Separator (IWS): "The prototype has to distinguish between three different kinds of inorganic waste [...] aluminum cans, plastic bottles, and plastic cutlery" [30]. They used a multimedia embedded processor, image processing, and machine learning in order to select and separate waste. Their approach was similar to the one used for our prototype: it is also based on image processing, but instead of using a third party service like AWS to recognize the object, the recognition is done locally. It consists of converting an RGB image to grayscale, binarizing it, and then assigning a label to the image through a machine learning algorithm.
Our approach is different mainly in terms of the architecture and hardware used: their image recognition requires a PC processor for every camera used, whereas our solution requires only a microcontroller with a camera sensor. In our case this implies a greater scalability of the architecture, and a cost of components, size of the model and power consumption that are orders of magnitude lower.
In 2016, during the TechCrunch Disrupt Hackathon in San Francisco, a team created Auto Trash [10], a similar project that sorts between compostable items and recyclable items using a Raspberry Pi module equipped with a camera to photograph the object. A custom software model built on top of Google's TensorFlow AI engine was used for the recognition process.
In 2017 Lei Zhu of the ENSTA Bretagne made a final Study Project in collaboration with Veolia Environnement about Machine Learning for Automatic Waste Sorting [34]. This paper aims to improve the existing automated separating system of a waste disposal company, which currently is partially based on manual selection and partially automated using near-infrared spectroscopy. Zhu's mission is to "Compare the performance of the realized machine learning method" and to "Realize the labeling and concept the image base".
In 2017 AMP Robotics created Clarke, a robot that "is now picking an average of one carton per second off a container line at a Denver-area materials recovery facility" [24]. It uses an advanced visioning system and deep-learning capabilities in order to identify and separate cartons from mixed waste.
Chapter 2
Technological background
In this section we describe the technological background employed for the development of the prototype. The chapter is divided into several parts; in the first, the IEEE 802.11 standard is briefly explained. It is the underlying technology used by the different smart bins to communicate with the server.

In parts 2-5 we describe the technologies used in the server: the LAMP stack on which the server is based, Node-RED to easily program the server, and the two protocols used for the communication, MQTT for the capacity monitoring module and REST API to get a response from the image recognition server.

Finally, in the last two sections we describe the technologies used to create the clients to be deployed in the smart bins. In the sixth section there is a brief description of the microcontrollers adopted to develop these two modules and in the seventh one there is an overview of the sensors used to retrieve data like weight, fill status and waste pictures.
2.1 The IEEE 802.11 Standard
The IEEE 802.11 standard is generally called WiFi; more exactly, it is a set of standards that every device must follow in order to be Wi-Fi Certified by the Wi-Fi Alliance, owner of the Wi-Fi trademark.

This standard was created and is maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). It is a set of specifications for the Media Access Control (MAC) and Physical (PHY) layers in order to implement Wireless Local Area Network (WLAN) communication in the 900 MHz and 2.4, 3.6, 5, and 60 GHz frequency bands.

Over time some adjustments were made and new versions of the initial protocol were released. These can be identified by the letter beside the number of the protocol (e.g. 802.11g extended the maximum physical layer bit rate to 54 Mbit/s).
More specifically, WiFi operates in the 2.4 GHz band over up to 14 channels, but in 2009 IEEE 802.11n was released, which also allows operation in the 5 GHz band in order to increase speed and avoid congestion.
Without entering too much into detail, there are two main components of a wireless network: the Station (STA), the wireless terminal that uses the connection to communicate, and the Access Point (AP), which provides access to a wired network.

The standard also provides different security solutions designed to protect the connection and thus the data that pass through the network.
The most common security protocols are:
WEP: It was part of the original IEEE protocol (802.11) ratified in 1997, but it is currently deprecated and highly insecure due to its security flaws. It uses the RC4 cipher; in August 2001 Scott Fluhrer, Itsik Mantin, and Adi Shamir published a cryptanalysis that explained the security problems and a possible exploit. [13]
WPA: This was an update of the insecure WEP security protocol (still using RC4) and allowed securing older devices through a firmware update. Unfortunately a security flaw was discovered in this protocol too. It relied on older problems of WEP and on a limitation of the message integrity code hash function. [12]
WPA2: It is the currently used security protocol and was introduced in 2004 with IEEE 802.11i. It uses the Advanced Encryption Standard (AES) instead of RC4 and supports two main versions and protection mechanisms based on the target end-user: Personal and Enterprise.
For the smart bin project we used an ad hoc WLAN (IoTPolimi) with WPA2-PSK, created to support the development of different IoT technologies inside the laboratory. It works as an Access Point (AP) to provide a bridge between different clients. The network only allows communication inside itself and it is not connected to any external network. All the used sensors connect to this WLAN using one of the 11 channels available in Europe (1 to 11) in the ISM band between 2.412 and 2.472 GHz.
Around the year 2020 the first commercial applications of 5G technologies are expected to be released. As soon as sensors are available, this project will be able to switch to this technology, improving the user experience, since we expect to obtain greater performance without changing the way users employ this technology.
2.2 LAMP
LAMP is a software stack model suitable for building dynamic web sites and web applications. Its name is the acronym of the four original components: the Linux kernel, the Apache HTTP Server, the relational database management system MySQL, and the programming language PHP.
2.2.1 GNU/Linux
At the base of this stack model there is the open-source operating system (OS) GNU/Linux, based on the Linux kernel. As an OS it manages the computer hardware and the software resources, providing all the needed services for computer programs.

For the development of the project two different Linux distributions were used:
This operating system is used for its quality. Its goal of being secure out of the box is particularly appreciated. In fact, it provides updates and security patches quickly and for a long period of time: in the case of Ubuntu 16.04 updates are guaranteed for 5 years and will end in 2021.

Another fundamental feature of the operating system is its free and open source nature. This means that the whole software is freely licensed to use, copy, study and change in any possible way, without the need for special permissions or royalties to pay.
These features made it possible for GNU/Linux to become widely used across the world and the most used operating system on servers [29].
2.2.3 PHP
PHP is a general-purpose programming language created by Rasmus Lerdorf in 1994 and designed for web development. It is commonly used to develop the back-end part of web services; its name originally stood for Personal Home Page.
There are many modules that extend the functionalities of the language; an example is the mysqli extension, which provides full access and control for the MySQL DBMS. In 1997 PHP 3 added the concept of programming objects and exceptions to the language, a support improved and fully operative since version 4.
Since PHP is a widely used language in the server domain, there are many third party services that provide extensions and Software Development Kits (SDK). An example is the AWS SDK, used to perform image recognition through the Amazon Rekognition service.

Version 7.0 was used for the development of the server; it provides significant performance improvements thanks to a new core called phpng.
2.2.4 MySQL
MySQL is an open source Relational DataBase Management System (RDBMS) created by Ulf Michael Widenius. It has become the leading choice for web based applications thanks to its characteristics: great speed in executing queries, reliability and ease of use.
Relational Database
A relational database is a database based on the relational model (RM) of data proposed by E. F. Codd in 1970. This model proposes an approach to data management where all data is seen as tuples, finite ordered lists of elements, and these tuples are grouped in relations. In this type of database the information is stored in tables (which represent the relations of the RM) and each piece of information is inserted in a row of a table (corresponding to the concept of tuple). Using an RDBMS it is possible to execute SQL queries, which allow transactions used to retrieve, modify or delete all the required information.
The transactions of a relational database must guarantee validity also in case of errors or unexpected events like a power loss. To achieve this goal the DB must follow the ACID properties:

Atomicity Each transaction must be treated as a single unit: either all of its operations are executed or none of them is.

Consistency Each transaction must bring the database from one valid state to another, respecting all the defined rules and constraints.
Isolation Each transaction must be isolated and independent from the others, so an eventual failure of a transaction must not interfere with the other transactions in execution.

Durability Once a transaction has been committed it must remain committed also in case of system failure. To achieve this property, log registers are used to write the operations on a non-volatile memory.
2.3 MQTT
2.3.1 Structure
Figure 2.3: Example of an MQTT Broker with one publisher and two subscribers.
Each topic is a hierarchical string whose levels are separated by slashes, for example:

smartbin/weight/123
smartbin/height/456
If a client is interested in more than a single topic it can use special strings called wildcards.

Single level wildcard + This wildcard can substitute any possible string of a single level. So if a client is interested in all the measures of a certain publisher it can subscribe to smartbin/+/123 and receive both the messages about weight and height. In case the client is interested in all the measures of weight it can subscribe to smartbin/weight/+.

Multi level wildcard # The multilevel wildcard can be used to substitute an arbitrary number of topic levels. A client interested in all the messages of type smartbin can subscribe to smartbin/# and it will receive all the messages from every bin and for every type of measure: weight and height.

It is important to underline that topics are case-sensitive, so smartbin and Smartbin are considered by the broker two distinct topics. Moreover, wildcards can be used only by subscribers and not by publishers.
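As a concrete illustration, the following minimal sketch shows a subscriber that uses both wildcards through the PubSubClient Arduino library (used later in this work); the broker address and the WiFi credentials are placeholders, not the real laboratory ones.

#include <WiFi101.h>
#include <PubSubClient.h>

WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

// Invoked by PubSubClient for every message on a matching topic.
void onMessage(char* topic, byte* payload, unsigned int length) {
  Serial.print(topic);
  Serial.print(": ");
  Serial.write(payload, length);
  Serial.println();
}

void setup() {
  Serial.begin(9600);
  while (WiFi.begin("IoTPolimi", "password") != WL_CONNECTED) {
    delay(1000);  // placeholder credentials, retry until connected
  }
  mqtt.setServer("broker.local", 1883);  // hypothetical broker address
  mqtt.setCallback(onMessage);
}

void loop() {
  if (!mqtt.connected() && mqtt.connect("monitor-01")) {
    mqtt.subscribe("smartbin/+/123");  // every measure of bin 123
    mqtt.subscribe("smartbin/#");      // every message of every bin
  }
  mqtt.loop();  // processes incoming messages and the keep-alive
}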
2.3.2 Reliability
Another important piece of information present in every message is the QoS (Quality of Service) level, which defines the reliability of the message delivery. The QoS is defined at publish time and at subscribe time by the client. The broker can manage the message in 3 ways according to the QoS level defined. [21]
1. QoS = 0 (At most once delivery)
The message is sent only once and there is no need either to receive an acknowledgement from the receiver or for the sender to store the message. This is the fastest and least resource-consuming type of message.
2.3.3 Security
The MQTT protocol provides a set of guidelines to secure an implementation of MQTT; these are strongly recommended although not mandatory. Security is divided into multiple layers and each one prevents different kinds of attacks. [22]

The protocol also defines the TCP/IP ports to use (standardized by IANA): 1883 for plain MQTT and 8883 for MQTT over TLS.
2.4 REST API
REST is an acronym that stands for Representational State Transfer; it is an architectural style for distributed systems presented by Roy Fielding that quickly became a de facto standard among web developers.

REST-compliant web services provide access to web resources by using a predefined and uniform set of stateless operations. This is in contrast with other standards like SOAP, where each service provides its own arbitrary, application-dependent set of operations.

Resources can be for example a document, a file or, more generically, a piece of information identified by its URL. The most common operations available to interact with these resources are the following:
GET Retrieves a representation of the resource without modifying it; it is a safe and idempotent operation.

POST Creates a new resource. As a response it usually receives the URI (Uniform Resource Identifier) of the created resource.

PUT Replaces the resource with another one; if the resource does not exist it is created. It is considered an idempotent method, since multiple executions of the operation don't create further changes.

DELETE If the resource exists, it will be deleted. This operation is also idempotent.
An example of resource can be smartbin/123/height, which represents the list of all the height values of the bin 123, or smartbin, which represents the list of all the smart bins.

A possible example of the use of a REST API can be:

POST @file:my_object.jpg smartbin/456/image

In this way the file my_object.jpg is added to the images of the smart bin 456.

GET smartbin/456/weight/last

Using this operation the server gives us the last weight measure of the smart bin 456.
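A minimal sketch of the second operation from an Arduino client could use the ArduinoHttpClient library as follows; the server name and the WiFi credentials are hypothetical, and the resource path is the one from the example above.

#include <WiFi101.h>
#include <ArduinoHttpClient.h>

WiFiClient wifi;
HttpClient http(wifi, "server.local", 80);  // hypothetical server address

void setup() {
  Serial.begin(9600);
  while (WiFi.begin("IoTPolimi", "password") != WL_CONNECTED) {
    delay(1000);  // placeholder credentials, retry until connected
  }

  // GET the last weight measure of smart bin 456.
  http.get("/smartbin/456/weight/last");
  Serial.println(http.responseStatusCode());  // e.g. 200 on success
  Serial.println(http.responseBody());        // the measure itself
}

void loop() {}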
2.5 Node-RED
Figure 2.4: A sample of a Node-RED flow that receives data from MQTT and stores it in a MySQL database
This peculiar type of programming facilitates not only the programming process but also helps the user to understand what the flow is doing through a clear representation.

Node-RED was originally developed in 2013 as a side-project by Nick O'Leary and Dave Conway-Jones of IBM's Emerging Technology Services group. In September 2013 it was released as open source and since 2016 it has been part of the JS Foundation.

The power of this development tool is that it provides out-of-the-box many ready-to-use components, or nodes, that need only a few steps to be used.

In Figure 2.5 we can see how simple it is to set up an MQTT node.
Node-RED also provides a dashboard interface to represent the data collected from other services. In fact, to create a dashboard (Figure 2.6) there is no need to know any language like CSS or HTML.
2.6 Microcontroller
2.6.1 Arduino
Arduino is an Italian open-source computer hardware and software company that creates single-board microcontrollers and kits for their development. Thanks to the success of this company, the concept of DIY (Do It Yourself) applied to electronics spread all over the world, leading to the birth of many other similar companies like the Raspberry Pi Foundation.
Arduino UNO
The Arduino UNO (Figure 2.7a) is a low cost microcontroller based on the ATmega328P and it is the most famous and used board. Thanks to its open source nature, many identical cheaper versions from other companies are available, and all this success contributed to the creation of more and more compatible external components that extend its functionalities, like a WiFi module based on the ESP chip.
During the development of the smart bin project it has been used to monitor the power consumption and extract the data used to analyze the efficiency of the created modules.
Arduino MKR1000
The Arduino MKR1000 (Figure 2.7b) is based on the Atmel ATSAMW25 SoC and provides an integrated WiFi module with a cryptochip for secure communication (ECC508). This MCU also provides 2 pins to connect a Li-Po battery and recharge it using the USB port. It works at 3.3V and can be powered at 5V only through the VIN pin or the USB port. This board was used as the base for both smart bin modules.
2.6.2 Raspberry Pi
This watchdog error (Figure 2.10) frequently blocks the board during the connection to the WiFi and seems due to a problem in feeding the watchdog caused by the MQTT library [9].
The ESP8266 was also initially chosen for the image recognition project because, from the specifications, it is officially supported by the ArduCAM OV5642. Unfortunately, due to the low amount of RAM memory and to the sporadic blocks caused by the watchdog, the Arduino MKR1000 was preferred for its greater stability.
2.7 Used sensors

2.7.1 Load Cell
This straight bar load cell can translate the pressure applied on it into an electrical signal. In this way, after a proper calibration, this sensor can give a good estimate of the weight up to 10kg, with an error usually lower than 2 grams.

In bar strain gauge load cells, the cell is set up in a Z formation (Figure 2.11b) and when a weight is placed on it a torque is applied to the bar. To measure the weight, the four strain gauges on the cell measure the bending distortion, two measuring the compression and two measuring the tension.

This bar is made of an aluminum alloy and offers an IP66 protection rating that guarantees protection from dust and liquids. For this reason this component was chosen to measure the garbage weight inside the bin.
2.7.1.1 HX 711
The HX711 is a precision 24-bit analog to digital converter used to convert the electrical signals coming from the load cell into a digital number corresponding to the sensed weight.

This sensor uses a two-wire interface (Clock and Data) to communicate with the microcontroller and can be powered by any voltage between 2.7V and 5V. It can be connected to and controlled by any microcontroller that provides GPIO pins.
2.7.2 HC-SR04
The HC-SR04 is a low cost ultrasonic sensor working at 5V that measures distances in a range between 2 centimeters and 4 meters. Its accuracy reaches 3mm in the case of a flat object with a surface of at least 0.5m².
Its working mechanism is similar to the concept of the sonar used for submarine navigation and by many animals. Through the microcontroller the ultrasonic sensor is activated: it sends out an 8 cycle burst of ultrasound at 40kHz and then raises its echo pin until the echo comes back. Knowing the time before the echo's arrival we can calculate the distance of the sensor from the object.
2.7.3 ArduCAM
ArduCAM is a Chinese startup company that creates open source hardware and software, especially for Arduino and Raspberry Pi boards. Their modules are general purpose high definition SPI cameras that provide a simple control interface and can be used by any hardware platform, as long as it has SPI and I²C interfaces.
For this project we adopted an ArduCAM-Mini-5MP-Plus, which mounts the 5MP CMOS image sensor OV5642. This particular model has a miniature size and allows even low cost microcontrollers, which usually wouldn't have enough resources, to capture 5MP images not only in the JPEG or RGB formats but also in the RAW image format. The module handles camera, memory and user interface hardware timing, providing to the user an SPI interface. It allows not only a single capture mode but also the possibility to take videos and multiple images. This last modality, combined with the possibility to control many parameters like the exposure, white balance, brightness, contrast and color saturation, gives the possibility to create HDR images.
Moreover, for battery powered devices there is also the possibility to set a low power mode that reduces the power consumption by turning off the sensor.
The operating mechanism is very simple: the camera is turned on, the parameters of the image such as the format and the resolution are set, then the hardware trigger input is activated to start a capture. Finally, when the capture done flag is raised, the image can be read byte by byte from the SPI interface.
This camera can be powered with any voltage between 3.3V and 5V and all the I/O ports can tolerate the same voltage range.
2.7.4 INA219 DC Current Sensor

The INA219 is a direct current sensor that can measure both the high side voltage and the DC current draw over I²C with 1% precision. It can be powered by a 5V supply and can communicate with many microcontrollers using I²C.

Its operating mechanism is based on a resistor with a very low resistance, called a shunt resistor. The voltage drop across the shunt resistor is measured and from it, using Ohm's law (Current = Shunt Voltage / Shunt Resistance), the current value is calculated.
Chapter 3
System implementation
In the previous chapters we have seen the main components used and the theoretical part of this work. Now, in this chapter, the project implementation will be presented and explained.

For the sake of clarity, the result of this thesis work will be presented separately as two different projects. Nevertheless, it is important to notice that these are not two freestanding ideas, hence an initial subsection is present where all the functionalities shared between the two projects are described.

So, to summarize, this chapter is split into 3 main parts:

1. Common functionalities
2. Smart bin - Capacity monitoring module
3. Smart bin - Vision module
3.1 Common functionalities
In this section we will show all the parts shared between the two projects. By shared functionalities we do not mean only the functions used inside the code loaded on the microcontroller, like the function to measure the distance of an object or the one used to obtain a unique identifier for each device. Inside the common functionalities there is also the whole work done to measure the power absorbed by a certain device; indeed, as we will see, we have created a separate project employed to measure both the two smart bin modules and other sensors.
3.1.1 Current measure

The first fundamental step to measure the current of a certain circuit is of course the creation of the circuit that will do it.

For this step we used the following 2 components: the Arduino UNO (2.6.1) as the microcontroller that receives and analyzes the power consumption information, and the INA219 Current Sensor (2.7.4) that produces it.

To power the current sensor we connect the 5V pin and the ground one of the MCU to the VIN and GND pins of the sensor. Then we establish the data connection using the I²C serial bus between the 2 components, wiring the respective SDA and SCL pins.

As a final step, we power the monitored device using the VIN+ and VIN- connectors of the current sensor. The former, VIN+, should be connected to the power source (5V) and the latter, VIN-, is connected to the positive pole of the device to be
measured, usually identified by the label VIN or VCC. It is important that all the ground (GND) pins are connected to the same potential in order to achieve a correct measurement.
Data analysis
To read values from the current sensor we use the library Adafruit_INA219.h created by Adafruit Industries, the vendor of the sensor. The code executed by the Arduino UNO is really simple: it gets a sample of the current every 50ms and outputs a line with the time and the read value to the serial monitor. In this way we obtain the real time consumption of the device in a CSV format (values separated by commas).
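A minimal sketch consistent with this procedure, using the Adafruit_INA219 library, could be the following (the serial speed is an assumption):

#include <Wire.h>
#include <Adafruit_INA219.h>

Adafruit_INA219 ina219;

void setup() {
  Serial.begin(115200);                  // assumed serial speed
  ina219.begin();                        // joins the I2C bus
  Serial.println("time_ms,current_mA");  // CSV header
}

void loop() {
  // One current sample, measured across the shunt resistor.
  float current_mA = ina219.getCurrent_mA();
  Serial.print(millis());
  Serial.print(",");
  Serial.println(current_mA);
  delay(50);  // one sample every 50 ms, as described above
}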
Finally, we use the obtained values by simply copying them into a spreadsheet editor like LibreOffice Calc or Microsoft Office Excel.

As we will see later, in this way we obtain the data used to retrieve the maximum and the average consumption, which will be used to determine the size of a possible battery to power the analyzed circuit.
3.1.2 Measure the distance

So finally, we can obtain the distance in centimeters through the following calculation (3.1):

distance = (time / 2) × speed of sound

The time should be divided by 2 since it is measured from when the sound is emitted until the echo is received back; in this way the sound covers twice the distance to measure. The speed of sound in air at room temperature is approximately 0.0343 cm/µs.
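A sketch of this measurement with the HC-SR04, assuming trigger and echo on pins 6 and 7 (the wiring used later for the vision module), could be:

const int TRIG_PIN = 6;  // assumed wiring
const int ECHO_PIN = 7;

float measureDistanceCm() {
  // Trigger an 8-cycle ultrasonic burst with a 10 us pulse.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Echo pulse width = round-trip time of the sound, in microseconds.
  unsigned long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);

  // Divide by 2 (round trip) and convert using the speed of sound
  // (~0.0343 cm/us at room temperature), as in equation (3.1).
  return (duration / 2.0) * 0.0343;
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  Serial.println(measureDistanceCm());
  delay(500);
}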
3.1.3 Obtain a unique identifier

To obtain a unique identifier for each device we rely on the MAC address of the WiFi module. The first 3 groups of the address identify the manufacturer, so they are the same for all our Arduino devices; the last 3 groups instead are a unique identifier for every Network Interface Controller (NIC) of the same manufacturer.
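A minimal sketch of this idea, assuming the WiFi101 library used by the MKR1000, builds the identifier from the NIC-specific octets:

#include <WiFi101.h>

// Builds a device identifier from the MAC address of the WiFi module;
// the three NIC-specific octets are unique among boards of the same
// manufacturer.
String uniqueId() {
  byte mac[6];
  WiFi.macAddress(mac);  // WiFi101 fills the array in reverse order
  char id[7];
  snprintf(id, sizeof(id), "%02X%02X%02X", mac[2], mac[1], mac[0]);
  return String(id);
}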
3.1.4 Store values in flash memory

So, in order to store and retrieve variables we define the name of the variable and its type:
FlashStorage(my_variable, int);
Then the variable can be easily accessed through simple read and write methods:
my_variable.read();
my_variable.write(new_value);
Moreover, there is also a useful method to verify if a variable was ever written in the flash storage and thus is valid and contains useful information:
my_variable.isValid();
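Combining the calls described above, a small self-contained sketch (using a hypothetical boot counter as the stored variable) could look like this:

#include <FlashStorage.h>

// Reserve a flash slot for an integer, e.g. a boot counter.
FlashStorage(boot_count, int);

void setup() {
  Serial.begin(9600);
  int count = 0;
  if (boot_count.isValid()) {  // true only if ever written before
    count = boot_count.read();
  }
  count++;
  boot_count.write(count);     // persists across power losses
  Serial.println(count);
}

void loop() {}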
3.2 Smart bin - Capacity monitoring module
The capacity monitoring module has been developed using a client-server architecture. Every smart bin can be considered as a client that collects data and, if needed, sends the information to the central server.

The server can handle bins of any size without problems; to show that, we created 3 different bins of two different sizes (Figure 3.2).

3.2.1 Server implementation

The server implementation can be split into two parts: receiving data and showing information. The first part is completely developed using Node-RED, while the latter is partially made using the Node-RED dashboard, with some parts written in PHP.
When the server receives a weight measurement, it isn't only stored but it is also used to send notifications to the user through mail or IFTTT. The notifications are sent only if the weight received is higher than expected or if the bin is almost full.
We also provide the user a way to view the status of the bin. To create the dashboard we used the Node-RED nodes. We retrieve data from the database using two different queries: one to gather all the historical data up to now of a certain sensor (weight or height), and the other to retrieve only the current status of the bin with the real time information. The real time data are updated every hour, so on average they show 30 minutes old information, even though the last message received can be 6 hours old (Figure 3.5a).
To provide a fully functional product that can be used by the end-user, we created an interactive map that shows the bins on a certain floor (Figure 3.5b). In order to create it we use an HTML canvas on which we draw the map and the data retrieved from the DB.
Once the user accesses the page with the map, 3 operations are done:

2. Bins data are retrieved in JSON format from the data.php page through an XMLHttpRequest. The information retrieved is: fill level, weight level, X coordinate on the map, Y coordinate on the map and the bin unique identifier.

3. The map of the chosen floor and building is plotted and over it one button is created for each bin. The create_map function prints the map according to the chosen building and floor, then, using the data previously obtained, shows the bins on it. The color and the shape of the bins depend on the weight and the fill level. Once the bins are printed, we create buttons over their images that allow the final user to obtain more detailed information about a certain bin.
3.2.2 Client implementation

3.2.2.1 Hardware
The first step to build the client is the choice of the components. In order to build the smart bin we used sensors to sense the weight and the distance, a microcontroller to manage data and transmit them to a remote server and, of course, a trash bin.

For the choice of the bin the only constraints are the presence of a bottom and of a lid. This implies that the bin cannot be only a hanging bag, while the lid is necessary mainly to make the bin more durable and accurate, since the distance sensor is attached on the top and the trash is unlikely to hit and then ruin it.

To summarize, the used components are: a trash bin, an Arduino MKR1000 microcontroller, a load cell with its HX711 converter and an HC-SR04 ultrasonic sensor.
Components connection
As we can see from the schematic (3.6), we connected in cascade the load cell directly to the ADC¹ HX 711 and this one to the Arduino, powering it at 3.3V and using two digital ports to communicate with the sensor. The distance sensor was also connected using two other digital ports and powered at 5V.

¹Analog to Digital Converter

To power the microcontroller and the other sensors connected to it we used either the USB port or the VIN/GND pins, always providing 5V power.
Prototype creation
Once the wiring was done and the initial prototype was correctly functional, we started assembling a model of the smart bin. The first step requires the creation of two small holes to let the wires exit from the bin. One is made on the bottom, on a lateral side, in order to connect the load cell inside the bin and the microcontroller outside it. The other one is made on the top, in a corner of the lid; in this way the wires that connect the microcontroller and the ultrasonic sensor aren't compressed when the lid is opened.

After this we fasten the load cell to the bottom of the bin using two screws (3.7b) and, using two other screws, we affix a false bottom to the load cell (3.7c). In this way all the weight lies on the load cell and we can obtain an accurate measurement of it (3.7a).
(b) Load cell fastened to the bottom (c) False bottom installed
The next step, before wiring all the components, is to fix the ultrasonic sensor in the center of the lid, so that we can measure the distance from the objects in the bin. Using this information, after an initial calibration with the empty bin, we can understand the fill level of the bin.

As a last step, when all the cables are properly connected, we fix the microcontroller on an external side of the bin; in this way the trash and any operation of changing the trash bag doesn't damage the Arduino.
3.2.2.2 Software
Now the prototype is completely assembled and only two steps are missing: the firmware upload and the initial calibration.

What this smart bin does is quite simple:

In order to calculate the fill measurement we trivially use the ultrasonic sensor, as shown before, to measure the distance (3.1.2) and through a simple subtraction we obtain the trash level (3.2).
Once we have obtained the two values we check them to understand if they can be useful and need to be sent to the central server or can be discarded. The checked conditions are:

To send the information to the server we connect to the WiFi and then we use the external library PubSubClient, which provides the necessary functions to create and manage an MQTT connection. Using MQTT we publish on 2 different topics, as sketched below:

1. {id}/weight

2. {id}/height
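A minimal publishing sketch under these conventions, with a hypothetical broker address and assuming the unique identifier obtained as in 3.1.3, could be:

#include <WiFi101.h>
#include <PubSubClient.h>

WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

// Publishes one weight and one height sample on the per-device
// topics {id}/weight and {id}/height described above.
void publishMeasures(const String& id, float weight, float height) {
  mqtt.setServer("server.local", 1883);  // hypothetical broker
  if (!mqtt.connected() && !mqtt.connect(id.c_str())) {
    return;  // connection failed, retry on the next cycle
  }
  mqtt.publish((id + "/weight").c_str(), String(weight, 1).c_str());
  mqtt.publish((id + "/height").c_str(), String(height, 1).c_str());
}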
3.3 Smart bin - Vision module
The vision module has a three-layer architecture (Figure 3.8a) and is composed of the following components.

Client It is the component actually installed on every smart bin of this type. It takes pictures of the object to recognize and communicates the response to the end-user.

Personal Server It collects the images sent by the clients and the responses from the Amazon server. It is used to debug and analyze the data sent by the clients and to decouple the project from a specific recognition server.

AWS Server It is the Amazon Rekognition server used to extract information from the image. It provides a fully functional image recognition service and easy to use APIs.
With the exception of the third layer, the AWS server, the rest of the architecture has been developed during the project. So the construction of the bin that recognizes the waste material can be split into two main parts: the server implementation and the client implementation.

In the near future, when 5G connections will be widespread, we imagine a different architecture (Figure 3.8b) that could further reduce the server response time by taking advantage of the multi-access edge computing (MEC) architecture. This technology is designed to be implemented at the cellular base station; in this way there would be no more need to send the image first to a personal server and then to the AWS server thousands of kilometers away.
(b) Possible future implementation of the smart bin vision module using MEC.

Figure 3.8: UML sequence diagrams
3.3.1 Server implementation

1. LAMP setup
First of all, the whole project is developed on top of a Linux server (Ubuntu 16.04) using Apache 2, MySQL and PHP (2.2). The whole software bundle can be easily installed through the package manager (apt):
2. Database Creation

Using MySQL we've created a database (smartbinDB) to store all the needed information; we then use the administration tool phpMyAdmin to manage it. The following tables are used to store the information sent by the clients:
3. Amazon Rekognition

The following step requires the creation of a working AWS account. As a further security step we create an IAM (Identity and Access Management) user, a special user that has only limited permissions. In our case we created an IAM user (smartbin) that can access Amazon Rekognition only in Read Only mode. Doing so we ensure that a malicious user cannot obtain full access to the account even in case of credential theft. Once the user is successfully created we can start interfacing with the service; to do so we downloaded the AWS SDK².

²Amazon Web Services Software Development Kit
12 " Paper " , " Poster " , " Diagram " , " White Board " , "
Brochure " , " Menu " ,
13 " Origami " , " Cardboard " , " Carton "
14 );
3.3.2 Client implementation

3.3.2.1 Hardware
In this smart bin module an internet connection is also fundamental, so we've chosen the Arduino MKR1000 microcontroller, already adopted for the other smart bin module, which provides WiFi connectivity. For the object detection we used an already shown and tested sensor, the ultrasonic sensor HC-SR04, which allows detecting objects from a minimum distance of 4cm up to 2m. The last employed sensor is the ArduCAM OV5642 (2.7.3), which allows taking pictures up to a maximum resolution of 5MP.
The schematic shows us how the sensors and the microcontroller are wired. The three LEDs are connected to the same number of digital pins: 1 for unsorted waste, 2 for plastic and 3 for paper; the ultrasonic sensor is connected to connectors 6 and 7 and the CS pin of the camera to pin 0. Moreover, the camera also requires the SPI communication protocol (pins: MISO, MOSI, SCK) and the I²C serial bus (pins: SDA, SCL).

Three LED lights of different colors have been used in order to show to the end-user the appropriate bin where the object should be thrown: blue for unsorted waste, green for paper and red for both plastic and aluminium (3.10).
3.3.2.2 Software
The client, as can be seen from the algorithm (3.1), continuously checks for the presence of an object using the HC-SR04 proximity sensor.

When an object is detected, it is photographed; then the Arduino MKR1000 connects to the WiFi network and sends the captured image to the server REST API through an HTTP POST request.

The response is in plain text and it is read and parsed by the Arduino. Finally, the client simply turns on the LED corresponding to the recommended receptacle and it is available again.
Doing these steps we could use the same code created for the ESP8266 devices without any problems.
Boot time
At the first start up, or after the power comes back, the Arduino enters the start-up phase. During this step all the components are checked to ensure that everything is still working and correctly connected.

As a first step we initialize the following objects:

ArduCAM The main class that will be used to execute all the operations on the camera. During initialization the camera type (OV5642) and the SPI slave chip select input must be specified.

SPI It's the interface used to start the image transmission from the camera memory to the Arduino.
Then, in the following steps (3.7), the microcontroller checks the status of the camera.

In lines 1 and 2 we test if the SPI bus is correctly working: we write to the camera test register 0x00 (ARDUCHIP_TEST1) a test value, 0x55, and read it back.

In lines 3 and 4 we check if the camera module type is OV5642: we read from the 16 bit registers 0x300a (OV5642_CHIPID_HIGH) and 0x300b (OV5642_CHIPID_LOW) the 8 bit values 0x56 and 0x42 respectively.

The Arduino remains in the loop at line 6 until the correct values are read.
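A sketch of these checks with the ArduCAM library (assuming OV5642 support enabled in memorysaver.h and the CS pin wired to pin 0, as described later) could be:

#include <Wire.h>
#include <SPI.h>
#include <ArduCAM.h>

const int CS_PIN = 0;           // chip select, as wired in this project
ArduCAM myCAM(OV5642, CS_PIN);

void setup() {
  Wire.begin();
  pinMode(CS_PIN, OUTPUT);
  SPI.begin();
  Serial.begin(9600);

  // SPI check: write a test value to ARDUCHIP_TEST1 and read it back.
  uint8_t temp;
  do {
    myCAM.write_reg(ARDUCHIP_TEST1, 0x55);
    temp = myCAM.read_reg(ARDUCHIP_TEST1);
  } while (temp != 0x55);

  // I2C check: read the OV5642 chip id (expected 0x56 and 0x42).
  uint8_t vid = 0, pid = 0;
  do {
    myCAM.rdSensorReg16_8(OV5642_CHIPID_HIGH, &vid);
    myCAM.rdSensorReg16_8(OV5642_CHIPID_LOW, &pid);
  } while (vid != 0x56 || pid != 0x42);

  Serial.println("OV5642 detected");
}

void loop() {}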
Once the camera module is successfully detected, we set the image format to JPEG, initialize the module and set the JPEG compression level.

We have chosen this image format for many reasons: it compresses the image directly on the camera module, saving time first when reading it from the microcontroller and then when sending it to the server. Moreover, in this way we avoid a conversion as a further step on the server; in fact Amazon Rekognition accepts only the PNG and JPEG image formats. [3] Finally, another important motivation is that JPEG images can have a greater resolution with respect to the RGB mode, due to the limitation of the frame buffer. [4]
Then we prepare the camera to take a picture: we set the sensor interface timing register to enable vertical sync, put the module in low power mode to save current when it is not used, clear the FIFO control register to indicate that no picture has been taken and set the number of frames to be captured to 0.

At the end the microcontroller is ready and in an energy saving mode: the camera is in low power mode and the WiFi is still off.
Image acquisition
When the camera is ready and the setup has terminated, the microcontroller keeps checking whether any object has been inserted in the smart bin by measuring the distance. When an object is inserted in the smart bin, the ultrasonic sensor detects it and after a few seconds the function used to take a picture starts (3.9).

As a first step the camera is turned on, all the internal registers are properly cleared and, by setting a particular bit through the function at line 11 of capture_normalImage(), the capture command is transmitted to the sensor.

When the flag CAP_DONE_MASK is raised in the internal memory of the camera, the image is ready to be read and transmitted.
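A minimal capture sequence with the ArduCAM library, assuming the myCAM object initialized during the boot phase described above, could be:

#include <Wire.h>
#include <SPI.h>
#include <ArduCAM.h>

void captureImage(ArduCAM& myCAM) {
  myCAM.flush_fifo();       // reset the FIFO read/write pointers
  myCAM.clear_fifo_flag();  // clear the previous capture-done flag
  myCAM.start_capture();    // ask the sensor for a single frame

  // Busy-wait until the capture-done flag is raised.
  while (!myCAM.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK)) {
    delay(1);
  }
  // The JPEG image can now be read byte by byte over SPI.
}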
Image transmission
The final step of the smart bin vision module consists in sending the previously taken image to the server and parsing the response in order to recommend the correct trash bin to the end-user.

To do this, the image is completely read from the sensor memory and stored in the microcontroller RAM, in order to use it later without any problems caused by the relatively long time needed to read from the sensor (approximately 500ms). Then we create the request to send to the server and we connect to the WiFi. Finally we connect to the server and send the multipart/form-data request previously composed, adding the image stored in the Arduino memory.
We receive the response from the server, read it character by character and extract the expected label. At the end we trivially compare the label to the possible materials and turn on the respective light.
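A hedged sketch of this upload, assuming the image is already buffered in RAM (img, imgLen) and a hypothetical endpoint /smartbin/recognize on the personal server, could be:

#include <WiFi101.h>

String postImage(const uint8_t* img, size_t imgLen) {
  WiFiClient client;
  if (!client.connect("server.local", 80)) return "";  // hypothetical host

  // Body of a multipart/form-data request wrapping the JPEG image.
  const String boundary = "----smartbin";
  String head = String("--") + boundary + "\r\n" +
    "Content-Disposition: form-data; name=\"image\"; filename=\"obj.jpg\"\r\n" +
    "Content-Type: image/jpeg\r\n\r\n";
  String tail = String("\r\n--") + boundary + "--\r\n";

  client.println("POST /smartbin/recognize HTTP/1.1");
  client.println("Host: server.local");
  client.print("Content-Type: multipart/form-data; boundary=");
  client.println(boundary);
  client.print("Content-Length: ");
  client.println(head.length() + imgLen + tail.length());
  client.println("Connection: close");
  client.println();
  client.print(head);
  client.write(img, imgLen);  // the JPEG read from the camera FIFO
  client.print(tail);

  // Read the plain-text response character by character; the label
  // (e.g. "paper") is in the body, after the blank line.
  String response;
  while (client.connected() || client.available()) {
    if (client.available()) response += (char)client.read();
  }
  client.stop();
  int body = response.indexOf("\r\n\r\n");
  return body >= 0 ? response.substring(body + 4) : "";
}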
Chapter 4
Evaluation
In this chapter we analyze the data collected during the creation and the deployment phases of the two modules. Moreover, we also present some examples of practical applications derived from the project.

In particular, for the capacity monitoring module we mainly focused on displaying the collected data: we provide an accurate interface that summarizes the most useful information about a particular bin. Starting from this, we then provided two useful functionalities: a way to aggregate and visualize the data of multiple bins in a certain building through a map, and smart notifications that alert the user in case a bin reaches a particular state.

For the smart bin vision module we analyze its performance: the accuracy in labelling the correct receptacle and the speed in giving a response. Moreover, we also present other attempts made to improve the project and the problems still present.

In the conclusions of this chapter we also show the power consumption of the two modules; more precisely, we evaluate the feasibility of a battery power supply.
4.1 Smart bin: capacity monitoring modules
As a first step, to collect the data, we deployed 3 smart bins in 2 different rooms on the first floor of building 20 at the Politecnico di Milano. We collected data for approximately 45 days, emptying the baskets at irregular intervals.
We placed them in two different rooms in order to cover a greater number of users and to vary the type of trash thrown.
To further vary the cases, and to highlight that the system operates independently of the basket, we used two different basket sizes.
Using the Node-RED dashboard we created a chart of a single bin from the collected data (4.1). The left panel of the chart shows the current status of the bin and some buttons to switch to the data of the other bins; the right panel displays the historical trend of the measured height and weight.
As we can see from the historical height chart of the smallest bin (4.2), the height analysis is unreliable on bins lower than 20 cm. From the technical data sheet of the proximity sensor [15], to achieve a sufficiently good measurement the surface of the object to be detected should be at least 0.5 m². With this bin size such an object is not only physically impossible, but the bottom of the basket itself is smaller than that size. This seriously compromises the measurements of the smallest bin, while in the larger ones this limitation only generates occasional erroneous measurements.
The weight measurements, instead, are quite accurate. The sensors give us measurements with an accuracy of one gram, and the weight remains constant over many days of measuring if nothing is thrown in the bin. From the slope of the graph we can easily tell when something heavy is thrown in or when the bin is emptied.
4.1.1 Bins map
The data of every single bin are not very useful without a way to create an aggregate view of them. So, to simplify the use of the capacity monitoring module, we created a simple interactive map that summarizes the state of an entire floor of a building. In figure 4.3 we can see an example of the filling degree of every bin of that floor.
The map provides a simple color scheme to identify the status of each bin.
If the detected weight is 5% greater than the predicted one, the corresponding icon is changed from the bin icon to an exclamation mark. The exclamation mark icon helps the operator who is monitoring the map to quickly understand if something unusual has been thrown away. For example, in the case of differentiated waste collection, a value far from the expected one can suggest a wrong choice of recycling bin.
Furthermore, even for standard mixed waste bins, it is useful to know whether a receptacle needs to be emptied even though the fill-state color signals an empty bin. In fact, in case of organic waste (e.g. food refuse) or liquids (e.g. a coffee cup) the system detects a higher weight than usual and informs the operator. This monitoring of the weight is fundamental to prevent bad odors, the presence of animals and damage to the sensors.
The map allows the user to access the information about each basket by simply pressing the icon corresponding to the chosen bin. This leads to the web page shown before, with all the up-to-date information, and helps the user to understand whether the behavior of a particular basket is unexpected.
A possible use of the smart bins map (4.1.1) is to create a smartphone application or a custom device to check the fill status of the bins.
We created a simple mock-up (Figure 4.4) using a Raspberry PI and a compatible touch screen display, showing the fill status of a smart bin and its historical values. This device could be given to the waste collector to know which baskets to empty before accessing a building and when each bin was last emptied.
4.1.2 Smart notifies
(a) IFTTT notification of a full bin. (b) IFTTT notification of a bin heavier than usual.
4.1.3 Problems
As mentioned above, these are working prototypes that are far from being perfect and problem-free. We identified three main problems, listed here in order of importance:
1. Poor precision in measuring the fill status of the bin. As we touched upon before, this problem is mainly present in small bins and is caused by a limitation of the ultrasonic sensor. In practice it is particularly serious only in a few cases; in fact, most bins are big enough not to exhibit it.
The only possible solution is to replace the sensor with a more accurate one, such as the Time of Flight (ToF) VL53L0X, which still has a low power consumption and small dimensions and has shown great accuracy in tests on other projects.
2. Possible damage caused by objects emitting steam (like used coffee pods). The steam can create a short circuit in the ultrasonic sensor attached to the lid and damage the prototype.
3. The weight sensor loses some grams in subsequent measurements over long periods of time. From the empirical data, the weight decreases by up to approximately 2 grams every 3-4 days when nothing is thrown in the bin. Although this problem does not have a real solution, it does not influence the final result of the measurements: in a real use case it is quite rare for a bin to remain unused for such a long period of time. Furthermore, a small loss of weight over a long period can be identified and easily compensated directly by the server.
4.2 Smart bin - Vision module
The testing of the waste recognition is based on 2 main measurements: the accuracy, which shows how well the correct waste type is predicted, and the time efficiency, that is the average time that elapses between the moment the photo is taken and the moment the bin recommendation is shown to the user.
To achieve this, we took pictures of different objects made of different materials at the maximum available resolution, 2592x1944. Each picture has the same aspect ratio of 4:3 and is saved as JPEG. After the initial creation of the image set, using the open source software MAT (Metadata Anonymisation Toolkit) we removed all metadata that could influence the Amazon Rekognition output. Finally, we created 3 distinct sets from the initial images, with different resolutions (4.7):
2592x1944 It's the maximum resolution of the ArduCAM OV5642. (From now on we will refer to this resolution as 5MP)
960x720 An intermediate resolution. (720p from now on)
320x240 It's the lowest resolution available on the camera sensor. (240p from now on)
Figure 4.7: Example of different image sizes. The proportions are respected, so the size difference between the 3 resolution sets can be appreciated.
To avoid any difference in light or any other external factor, we did not take 3 separate photos but resized the initial image using the open source software ImageMagick 6.9.7-4. The images did not receive any further compression during the resize operation. This post-processing gives us the certainty that the 3 sets are identical and share the same characteristics (contrast, brightness, ...).
To sum up, at the end we created 3 sets, each containing 80 images, for a total of 240 analyzed samples. To create the initial set we used 14 objects, each photographed from different angles: 4 objects made of plastic, 3 of aluminum, 4 of paper and 3 that cannot be recycled.
4.2.1 Accuracy
Before presenting the data obtained through the accuracy analysis, it is important to highlight some points.
First of all, during the creation of the model and the development of the project we assumed that not all prediction errors have the same importance. For this reason we distinguish between two types of error: type-1 errors, in which a recyclable object is not recognized and is recommended for the unsorted bin, and type-2 errors, in which an object is assigned to the wrong recycling bin, mixing materials and compromising the bin content.
The plastic object analysis, reported in Table 4.1, gives good results. With new bottles, or generally with objects that are not too deformed, the detection precision is around 100%. The accuracy decreases when the analyzed object is highly squeezed (Figure 4.8b) or is photographed in particular positions (Figure 4.8a). In these cases Amazon Rekognition is not able to deliver the correct label, hence the material recognition fails as well.
Paper analysis is a bit more problematic than the other materials, and the cardboard model was essential to increase the recognition accuracy. From the first tests, done without any model, we noticed that Amazon Rekognition tended to ignore the paper object, considering it as not important. Then, by applying a fixed background and removing all the external objects that could influence the object-label output, we increased the recognition to a sufficient level: up to 80% for balls of paper and 100% for objects obtained through paper folding.
Voluminous objects like newspapers (Figure 4.10a) are hardly recognized and, as we have seen for other materials as well, compressed newspaper (Figure 4.10b) gives an extremely low recognition rate. The solution in this case is to take the photo from further away and build a bigger model that allows the exclusion of any external object.
During the analysis of the results for paper objects we noticed another interesting trait of the Amazon Rekognition service: by submitting the same image multiple times we obtained different object labels. This means that the service is not deterministic, and multiple analyses of the same image could increase the accuracy of the recognition, albeit to the detriment of speed.
In the case of unsorted objects we mainly focused on food, as this was the most common case.
The analysis of the overall accuracy data (Table 4.5) reveals that no type-2 errors occurred; hence a bin based only on this model's predictions can successfully identify the waste material without the risk of mixing materials and compromising the bin content.
Looking at the final accuracy data for each resolution (Table 4.6), we notice that the accuracy is approximately 80%, independently of the resolution: out of a set of 5 objects, 4 are thrown in the correct bin and only one is not recognized as recyclable and ends up in the unsorted bin.
Resolution   5MP      720p     240p
Accuracy     78.75%   77.50%   81.25%

Table 4.6: Accuracy per resolution
It is quite unexpected to achieve better results using lower resolutions and fewer details. This outcome can be explained even without access to the source code of the Amazon Rekognition algorithm. In all likelihood the cloud service recognizes a fixed number of objects in the picture, so in big images, like the 720p and 5MP ones, a greater number of pixels can be erroneously confused with possible objects, and the chance of finding false positives during the image recognition process increases. Using 240p images, instead, we have a smaller number of pixels. This helps in situations where there is only one object to recognize: the service finds fewer false positives and thus makes a better prediction.
The time analysis measures the time elapsed from the moment the image is sent from the personal server to the moment we receive the label response from the Amazon server. We repeated the same analysis using the wired connection provided by the Politecnico di Milano, a temporary ad-hoc WiFi connection and a 4G connection through a portable router. The first two provide a 100 Mbps connection, the 4G one a 20 Mbps connection.
The data measure the time needed to execute the following operations: send the image to Amazon Rekognition, parse the response and output the material label.
From the data (Table 4.8) we can see that there are no significant differences between the wired and wireless connections. Instead, there is a significant difference using the 4G connection, due to its lower speed. The difference is remarkable with the 5MP images, on average 190% slower, while it is smaller with the 720p images (33% slower) and the 240p ones (49% slower).
The main difference emerges among the 3 resolutions: as we can see, sending an image at 240p instead of 5MP is, on average, 1.7 times faster.
Moreover, even for the biggest files (1490 KB), using a wireless or a wired connection it takes less than 2 seconds to obtain a response. This means that this model can be used for real-time applications at every resolution, assuring the user a response within a reasonable time.
(a) Normal image that reaches the server. (b) The image sent to Amazon Rekognition after post-processing.
In a first, software attempt we removed the background from the received image before sending it to Amazon Rekognition; as the comparison above shows, the background is not completely removed. Even with images whose background is correctly removed, the recognition fails more often compared to images that are not post-processed.
In another attempt we replaced the cardboard box with a different one that allows taking the picture from the top. In this way there are no more problems with the background, because it is always underneath the model and no external object appears in the picture.
The first results were interesting, but the model was not good enough to manage objects of different sizes. In some cases with bulky objects, like the balled-up newspaper (Figure 4.10b), the camera is obscured and no image recognition is possible.
4.2.4 Problems
Overall, despite the great results reached, there are still some problems that need attention. First of all, as we have seen in the accuracy analysis, there are still problems with deformed items. Unfortunately this issue is strictly related to the image recognition service used and cannot be easily avoided, except by changing service or creating an ad-hoc one.
Another significant issue is finding a way to isolate the photographed object from its context. We tried both a software and a hardware solution. The former, which removes the background using image-editing software, gave poor results with Amazon Rekognition and is therefore hard to adopt. The latter is the currently used solution, but it is hard to use with large objects.
This last problem is not so serious because, from a quick analysis, most recyclable objects are small, so the cardboard model works correctly in the majority of cases.
4.3 Consumption analysis
One of the pillars of every Internet of Things application is a low power consumption that allows the various devices to work on a battery power supply. So, to conclude the analysis of the results, we also tracked the energy consumption of the two main projects. To do this we used an Arduino UNO (2.6.1) and the INA219 DC Current Sensor (2.7.4), as shown in detail in the common functionalities subsection (3.1.1). All the figures were calculated using a spreadsheet that allowed us to extract values such as the minimum, maximum and average energy consumption.
As a first step we analyzed the effective energy consumption of a single smart bin capacity module (Figure 4.12). To better represent the data on the graph and to show two whole cycles, we reduced the deep sleep period from 60 minutes to only 25 seconds.
We first analyzed the consumption of the Arduino MKR1000, leaving it first idle (using the delay() function) and then putting it in deep sleep.
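One way to reproduce this comparison, as a minimal sketch assuming the ArduinoLowPower library for SAMD boards; doMeasurementCycle() is a hypothetical placeholder for the sensing and publishing work, and the 25 s period matches the shortened cycle used for the graph:

#include <ArduinoLowPower.h>

void doMeasurementCycle() {
  // hypothetical: read the sensors and publish the values
}

void setup() {}

void loop() {
  doMeasurementCycle();
  // Idle variant: the core keeps running and the consumption stays high.
  // delay(25000);
  // Deep-sleep variant: the SAMD21 is suspended and woken by the RTC.
  LowPower.deepSleep(25000);
}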
The consumption was also measured at the different power pins of the board:
VIN pin: can be used to power the board with a regulated 5V source (it tolerates voltages up to 6V).
VCC pin: primarily an output pin, but it can also be used to supply 3.3V, paying attention that it does not tolerate higher voltages.
5V pin: also mainly an output pin; it can only receive 5V. Any different voltage level could damage the microcontroller.
The consumption analysis (Table 4.9) shows that deep sleep is clearly the best choice to achieve a long autonomy: the energy consumption improves by a factor of between 2 and 18 when deep sleep is used.
Supposing the Arduino MKR1000 stays in deep sleep for approximately 30 minutes per cycle, is powered from a 3.3V source, and is awake for a mean time of 4.762 s per cycle (4.10), we obtain an average consumption of 0.656 mA.
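The average follows from weighting the awake and deep-sleep currents of Table 4.9 by their durations (our notation, not the thesis's original equation):

I_{\text{avg}} = \frac{t_{\text{awake}}\,I_{\text{awake}} + t_{\text{sleep}}\,I_{\text{sleep}}}{t_{\text{awake}} + t_{\text{sleep}}},
\qquad t_{\text{awake}} = 4.762\,\text{s},\quad t_{\text{sleep}} \approx 1800\,\text{s}

With the awake and deep-sleep currents measured above, this weighted average evaluates to the 0.656 mA just quoted.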
With these data we can correctly size the battery: supposing we want a bin working for at least 2 weeks (336 hours), we need a battery that can provide 0.656 mA x 336 h = 220.4 mAh (4.2).
\text{Duration [h]} = \frac{\text{Battery size [mAh]}}{\text{Energy needed [mA]}} \qquad (4.2)
From further studies we know that the energy drawn from the battery is greater than the energy actually consumed [26], and from empirical data we can estimate the available energy of a commercial battery to be not less than 80% of the nominal charge [17] (4.3).
For the second module, the vision one, we cannot use the deep sleep functionality: we do not know when the next object will be placed in the cardboard model.
As a first step we analyzed the consumption of the camera alone (Table 4.11). From graph 4.13 we can see that the average camera consumption in power saving mode is 55.67 mA. This mode lasts for an undefined fraction of time, depending on how the prototype is used.
By contrast, the time to take a picture is fixed, 577 ms, with an average power consumption of 226.47 mA.
After the initial analysis of the camera, we can examine the power consumption of the whole prototype, made of an Arduino MKR1000 as microcontroller, the OV5642 camera sensor to take the photo, the HC-SR04 sensor to detect the presence of objects and, finally, 3 LEDs to communicate the suggested bin to the user.
From the graph (4.14) we can see 3 full detection cycles. In every cycle we can identify 4 different phases, reflected in the sketch after this list:
1. Idle phase: the camera is in power saving mode, the LEDs are off and only the ultrasonic sensor is measuring the distance to detect an incoming object.
2. Take a photo: the object is detected and the camera exits power saving mode, only for this phase, to take a photo and store it in memory.
3. Send the image to the server: the microcontroller connects to the WiFi network only in this phase, sends the image through a POST request and parses the response.
4. Suggest the bin: the LED corresponding to the suggested bin is turned on for 5 seconds.
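The four phases map naturally onto the main loop. A minimal sketch, reusing the functions from the earlier sketches and assuming hypothetical WiFi credentials, a hypothetical distance helper and a detection threshold of our choosing:

const char* ssid = "network";            // assumed credentials
const char* pass = "password";
const int OBJECT_THRESHOLD_CM = 15;      // assumed detection distance

long measureDistanceCm();                // hypothetical HC-SR04 helper (see 3.1.2)

void allLedsOff() {
  digitalWrite(LED_PLASTIC, LOW);
  digitalWrite(LED_PAPER, LOW);
  digitalWrite(LED_ALU, LOW);
}

void loop() {
  // 1. Idle phase: camera in power saving, LEDs off, only the HC-SR04 polls.
  if (measureDistanceCm() < OBJECT_THRESHOLD_CM) {
    // 2. Take a photo: the camera leaves power saving mode only here.
    capture_normalImage();
    readImageToRam();
    // 3. Send the image: WiFi is brought up only for the upload.
    WiFi.begin(ssid, pass);
    WiFiClient client;
    sendImage(client);
    String material = readLabel(client);
    WiFi.end();
    // 4. Suggest the bin: light the matching LED for 5 seconds.
    suggestBin(material);
    delay(5000);
    allLedsOff();
  }
  delay(100);                            // poll the ultrasonic sensor ~10 times per second
}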
Conclusions
With this work we have proposed two functional modules that can be adopted to better manage waste collection, increase recycling and thus help us live in a cleaner world.
The smart bin capacity monitoring module is one of them: as we have seen, we were able to transform an ordinary trash bin into a more complex object that communicates with a central server that manages it. Moreover, we did not simply collect data: we created a simple mechanism to actively inform the end-user about the status of the bins. The same information can also be obtained by the user at any time through an easy-to-use, constantly updated map. We also demonstrated that a trash bin of a particular size or shape is not required: almost any bin can be transformed into a smart bin.
In the previous chapter we also described accurately how to transform a tiny microcontroller and a low cost camera into a fully functional smart bin that can recognize many sorts of waste and recommend the appropriate recycling bin. We created a fully functional mock-up and we tested it.
We reached great results: we can recognize an object's material in less than 1 second, obtaining an average accuracy above 80%. In addition, the wrong predictions do not mix materials among different recycling bins; at worst they suggest throwing a recyclable object in the unsorted waste bin.
For the development of this master thesis we started from previous implementations found in the literature and mentioned in Chapter 1, and we briefly discussed the different problems we found. Examples of the discussed issues are the lack of compatibility among the NodeMCU 0.9 microcontroller, the HX711 library and the MQTT implementation for ESP8266 devices, or the ultrasonic sensor problems caused by very small trash bins or by small items too close to the sensor.
Further developments
As we have said, the projects shown in this thesis are far from perfect and of course have parts that can be enhanced.
An example is the relatively high power consumption of every smart bin capacity monitoring module.
Bibliography
[1] Md Abdulla Al Mamun, MA Hannan, and Aini Hussain. Real time solid waste bin monitoring system framework using wireless sensor network. In Electronics, Information and Communications (ICEIC), 2014 International Conference on, pages 1-2. IEEE, 2014.
[5] Cyril Joe Baby, Harvir Singh, Archit Srivastava, Ritwik Dhawan, and P Mahalakshmi. Smart bin: An intelligent waste alert and prediction system using machine learning approach. In Wireless Communications, Signal Processing and Networking (WiSPNET), 2017 International Conference on, pages 771-774. IEEE, 2017.
[6] Adil Bashir, Shoaib Amin Banday, Ab Rouf Khan, and Mohammad Shafi. Concept, design and implementation of automatic waste management system. International Journal on Recent and Innovation Trends in Computing and Communication, ISSN 2321-8169, 2013.
[8] The European Commission. Review of waste policy and legislation. http://ec.europa.eu/environment/waste/target_review.htm, Dec 2017. Accessed on 2018-07-02.
[11] Teh Pan Fei, Shahreen Kasim, Rohayanti Hassan, Mohd Norasri Ismail, Mohd Zaki Mohd Salikon, Husni Ruslai, Kamaruzzaman Jahidin, and Mohammad Syafwan Arshad. SWM: Smart waste management for green environment. In Student Project Conference (ICT-ISPC), 2017 6th ICT International, pages 1-5. IEEE, 2017.
[13] Scott Fluhrer, Itsik Mantin, and Adi Shamir. Weaknesses in the key scheduling algorithm of RC4. https://www.cs.cornell.edu/people/egs/615/rc4_ksaproc.pdf, Jan 2001. Accessed on 2018-07-06.
[14] Fachmin Folianto, Yong Sheng Low, and Wai Leong Yeow. Smartbin: Smart waste management system. In Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pages 1-2. IEEE, 2015.
[17] Lars Ole Valøen and Mark I. Shoesmith. The effect of PHEV and HEV duty cycles on battery and battery pack performance. https://web.archive.org/web/20090326150713/http://www.pluginhighway.ca/PHEV2007/proceedings/PluginHwy_PHEV2007_PaperReviewed_Valoen.pdf, 2007. Accessed on 2018-07-30.
[18] Dave Locke. MQ Telemetry Transport (MQTT) V3.1 protocol specification. https://www.ibm.com/developerworks/webservices/library/ws-mqtt/, Aug 2010. Accessed on 2018-07-09.
[20] SS Navghane, MS Killedar, and VM Rohokale. IoT based smart garbage and waste collection bin. Int. J. Adv. Res. Electron. Commun. Eng., 5(5):1576-1578, 2016.
[27] Narayan Sharma, Nirman Singha, and Tanmoy Dutta. Smart bin implementation for smart cities. International Journal of Scientific & Engineering Research, 6(9):787-791, 2015.
[28] Sanjana Sharma, V Vaishnavi, Vandana Bedhi, et al. Smart bin - an 'internet of things' approach to clean and safe public space. In I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2017 International Conference on, pages 652-657. IEEE, 2017.
[30] Andres Torres Garcia, Oscar Rodea Aragon, Omar Longoria Gandara, Francisco Sanchez Garcia, and Luis Enrique Gonzalez Jimenez. Intelligent waste separator. Computación y Sistemas, 19(3), Oct 2015. Accessed on 2018-08-01.
[33] Aksan Surya Wijaya, Zahir Zainuddin, and Muhammad Niswar. Design a smart waste bin for smart waste management. pages 62-66. IEEE, August 2017.
[34] Zhu Lei. Machine learning for automatic waste sorting. Master's thesis, ENSTA Bretagne, Aug 2017. Accessed on 2018-08-01.