
IoT Light Control Documentation

Release 0.1.0

Zapis Dublya Team

Aug 29, 2017


Contents

1. The introduction
2. Edison and hardware requirements
   2.1 Intel Edison Compute Module
   2.2 Intel Edison Board
   2.3 Grove kit
   2.4 SG90 9g Micro Servo
   2.5 BLE
3. Software requirements
   3.1 OS Yocto
   3.2 Important software
   3.3 Ffmpeg
   3.4 Opencv 3.2
   3.5 UPM (Useful Packages & Modules) Sensor
   3.6 libmraa - Low Level Skeleton Library for Communication on GNU/Linux platforms
4. Installation process
   4.1 From sources
5. Features and modules
6. Web-camera and traffic-light control
7. Mobile application
8. Server API
9. Additions
   9.1 Credits
   9.2 Contributing
10. History
    10.1 0.1.0 (2017-01-23)
    10.2 1.0.1 (2017-03-25)
11. iot_light_control
    11.1 iot_light_control package
12. Indices and tables

Python Module Index
CHAPTER 1

The introduction

IoT Light Control is a smart traffic light controller based on Intel Edison. The IoT Light Control project was developed as a project for the Internet of Things course practice.

The main feature of the project is the traffic light control. This feature is implemented through OpenCV video capture and is based on a custom-developed pattern recognition algorithm. The algorithm sets the most suitable timing for the traffic light's life cycle.

As an additional feature, barrier control through beacon technology is implemented. The barrier is automatically lifted when a vehicle carrying a beacon on board arrives.

Besides the automatic features listed above, manual control is available. Using a mobile device, the user can control the system and set their own desired parameters to be applied to the system.

Communication with the server is implemented through the REST API. The API is described in the Server API part.
Any contribution is welcome!
The Zapis Dublya team



CHAPTER 2

Edison and hardware requirements

Here is a brief list of the devices required to install and use IoT Light Control:
• Intel Edison Compute Module
• Intel Edison Board
• Web-camera with the USB cable
• Estimote’s Bluetooth low energy Beacon
• Grove - Buzzer
• Grove - Green LED
• Grove - Yellow LED
• Grove - Red LED
• SG90 9g Micro Servo
• Mobile device under Android OS

Intel Edison Compute Module

The Intel® Edison Module is a tiny, SD-card-sized computing chip designed for building Internet of Things (IoT) and wearable computing products. The Edison module contains a high-speed, dual-core processing unit, integrated Wi-Fi*, Bluetooth* low energy, storage and memory, and a broad spectrum of input/output (I/O) options for interfacing with user systems. Because of its small footprint and low power consumption, the Edison module is an ideal choice for projects that need a lot of processing power without being connected to a power supply. The Edison module is meant to be embedded in devices or development boards for connectivity and power options. To get started, Intel® provides the Intel® Edison Kit for Arduino* and the Intel® Edison Breakout Board Kit*, which you can use for rapid prototyping. For production deployment, you can also create a custom board.

To program the Edison module, you can use the C, C++, Python*, or JavaScript* (Node.js*) programming language. To develop and debug the device code on Edison development boards or devices, download the integrated development environment (IDE) for your programming environment. For instance, you can download Intel® XDK for JavaScript, Intel® System Studio IoT Edition for C/C++, Intel® System Studio IoT Edition for Java, or the Arduino IDE for programming an Edison board with Arduino. The choice of IDE depends on your project and device requirements as well as which programming language you'll use to interface with the devices.

To interact with sensors and actuators on Edison devices (or any supported device), Intel® provides the Libmraa* library. Libmraa provides an abstraction layer on top of supported hardware, so that you can read data from sensors and actuators in a standard way and create portable code that works across supported platforms. To check supported sensors and actuators from various manufacturers for Edison devices, browse the Useful Packages & Modules (UPM) Sensor/Actuator repository at GitHub* (https://github.com/intel-iot-devkit/upm). UPM is a high-level repository for various sensors, and provides a standard pattern for integrating with sensors using the Libmraa library. With the option of widely-used programming languages and a community of various sensor projects, you can reuse your existing programming knowledge to develop connected products, and use the Libmraa library to interact easily with GPIO pins for I/O functionality.

Visit the official site for more details: https://software.intel.com/en-us/iot/hardware/edison

Intel Edison Board

Arduino Breakout essentially gives your Edison the ability to interface with Arduino shields or any board with the Arduino footprint. Digital pins 0 to 13 (and the adjacent AREF and GND pins), analog inputs 0 to 5, the power header, ICSP header, and the UART port pins (0 and 1) are all in the same locations as on the Arduino Uno R3 (Arduino 1.0 pinout). Additionally, the Intel® Edison Arduino Breakout includes a micro SD card connector, a micro USB device port connected to UART2, and a combination micro USB device connector and dedicated standard size USB 2.0 host Type-A connector (selectable via a mechanical microswitch). Though this kit won't turn your Edison into an Arduino itself, you will gain access to the Arduino's shield library and resources!

Board I/O features:

• 20 digital input/output pins, including 6 pins usable as PWM outputs
• 6 analog inputs
• 1 UART (Rx/Tx)
• 1 I2C
• 1 ICSP (in-system programming) 6-pin header (SPI)
• Micro USB device connector OR (via mechanical switch) dedicated standard size USB host Type-A connector
• Micro USB device port (connected to UART)
• SD card connector
• DC power jack (7 to 15 VDC input)

Grove kit

Intel® Edison and Grove IoT Starter Kit Powered by AWS is a fully-integrated kit that includes Intel® Edison and Grove sensors and actuators with a compiled and optimized AWS IoT SDK, so developers and makers can build cloud-connected projects quickly!

Designed for expert makers, entrepreneurs, and industrial IoT companies, the Intel® Edison module provides ease of development with a fully open source hardware and software development environment. It supports Wi-Fi and BLE 4.0 connectivity. Intel® Edison and Grove IoT Starter Kit Powered by AWS includes an Intel® Edison for Arduino along with 11 selected Grove sensors and actuators for users to sense the indoor environment as well as to create smart home applications.

GROVE is a family of plug-and-play open-source modules for easy and quick prototyping. Each Grove comes with a standard interface, clear documentation, and demo code. GROVE is an open modular toolset, designed to minimize the difficulty of fundamental electronic engineering. It is formed by functional modules (TWIGs) and interface boards (STEMs). Each TWIG has a unified 4-pin interface and a standardized jigsaw shape for easy combination. TWIGs can work with major existing development platforms (such as Arduino and compatible boards, BeagleBoard, and XBee) via STEMs.

2.3. Grove kit 5


IoT Light Control Documentation, Release 0.1.0

SG90 9g Micro Servo

Tiny and lightweight with high output power. The servo can rotate approximately 180 degrees (90 in each direction) and works just like the standard kind, only smaller. You can use any servo code, hardware, or library to control these servos. Good for beginners who want to make stuff move without building a motor controller with feedback and a gearbox, especially since it will fit in small places. It comes with 3 horns (arms) and mounting hardware.


For the project's purposes, the Servo library (Servo.py) has been used; it controls a servo motor attached to the Arduino Expansion Board for Edison.
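The Servo.py source is not reproduced here. Purely as a rough sketch, a servo like the SG90 can be driven directly over PWM with libmraa; the pin number matches SERVO_BARRIER_PIN from the project config, while the pulse widths and the set_angle helper below are illustrative assumptions, not values taken from Servo.py:

import time
import mraa

# Sketch only: the pulse widths are assumptions, not values from Servo.py.
pwm = mraa.Pwm(5)            # SERVO_BARRIER_PIN = 5 in the project config
pwm.period_ms(20)            # standard 50 Hz servo frame
pwm.enable(True)

def set_angle(angle):
    # Map 0..180 degrees onto an assumed 0.5..2.5 ms pulse width.
    pulse_ms = 0.5 + (angle / 180.0) * 2.0
    pwm.write(pulse_ms / 20.0)   # duty cycle within the 20 ms frame

set_angle(0)     # barrier down
time.sleep(1)
set_angle(90)    # barrier up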


Inside the micro servo, the top cover hosts the plastic gears, while the middle cover hosts a DC motor, a controller, and the potentiometer.

BLE

Bluetooth low energy (Bluetooth LE, BLE, marketed as Bluetooth Smart) is a wireless personal area network technology designed and marketed by the Bluetooth Special Interest Group, aimed at novel applications in the healthcare, fitness, beacon, security, and home entertainment industries. Compared to Classic Bluetooth, Bluetooth Smart is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range.

Bluetooth Smart was originally introduced under the name Wibree by Nokia in 2006. It was merged into the main Bluetooth standard in 2010 with the adoption of the Bluetooth Core Specification Version 4.0. Mobile operating systems including iOS, Android, Windows Phone and BlackBerry, as well as macOS, Linux, Windows 8 and Windows 10, natively support Bluetooth Smart. The Bluetooth SIG predicts that by 2018 more than 90 percent of Bluetooth-enabled smartphones will support Bluetooth Smart.

The Bluetooth SIG officially unveiled Bluetooth 5 on June 16, 2016 during a media event in London. One change on the marketing side is that they dropped the point number, so it is now just called Bluetooth 5 (not Bluetooth 5.0 or 5.0 LE as with Bluetooth 4.0). This decision was reportedly made to "simplify marketing, and communicate user benefits more effectively". On the technical side, Bluetooth 5 will quadruple the range, double the speed, and provide an eightfold increase in the data broadcasting capacity of low energy Bluetooth transmissions compared to Bluetooth 4.x, which could be important for IoT applications where nodes are connected throughout a whole house.



CHAPTER 3

Software requirements

OS Yocto

The Yocto Project is an open source collaboration project that provides templates, tools and methods to help you create custom Linux-based systems for embedded products, regardless of the hardware architecture. It was founded in 2010 as a collaboration among many hardware manufacturers, open-source operating system vendors, and electronics companies to bring some order to the chaos of embedded Linux development. As an open source project, the Yocto Project operates with a hierarchical governance structure based on meritocracy and managed by its chief architect, Richard Purdie, a Linux Foundation fellow. This enables the project to remain independent of any one of its member organizations, who participate in various ways and provide resources to the project.

The Yocto Project provides resources and information catering to both new and experienced users, and includes core system component recipes provided by the OpenEmbedded project. The Yocto Project also provides pointers to example code that demonstrates its capabilities. These community-tested images include the Yocto Project kernel and cover several build profiles across multiple architectures including ARM, PPC, MIPS, x86, and x86-64. Specific platform support takes the form of Board Support Package (BSP) layers, for which a standard format has been developed. The project also provides an Eclipse IDE plug-in and a graphical user interface to the build system called Hob.

Some highlights of the Yocto Project:

• Provides a recent Linux kernel along with a set of system commands and libraries suitable for the embedded environment.
• Makes available system components such as X11, GTK+, Qt, Clutter, and SDL (among others) so you can create a rich user experience on devices that have display hardware. For devices that do not have a display, or where you wish to use alternative UI frameworks, these components need not be installed.
• Creates a focused and stable core compatible with the OpenEmbedded project with which you can easily and reliably build and develop.
• Fully supports a wide range of hardware and device emulation through the Quick EMUlator (QEMU).
• Provides a layer mechanism that allows you to easily extend the system, make customizations, and keep them organized.

You can use the Yocto Project to generate images for many kinds of devices. As mentioned earlier, the Yocto Project supports creation of reference images that you can boot within and emulate using QEMU. The standard example machines target QEMU full-system emulation for 32-bit and 64-bit variants of x86, ARM, MIPS, and PowerPC architectures. Beyond emulation, you can use the layer mechanism to extend support to just about any platform that Linux can run on and that a toolchain can target. Another Yocto Project feature is the Sato reference User Interface. This optional UI, based on GTK+, is intended for devices with restricted screen sizes and is included as part of the OpenEmbedded Core layer so that developers can test parts of the software stack.

Download the image: https://software.intel.com/en-us/iot/hardware/edison/downloads


Important software

You will need the following to build, install and use IoT Light Control:
• pip==9.0.1
• bumpversion==0.5.3
• wheel==0.29.0
• watchdog==0.8.3
• flake8==2.6.0
• tox==2.3.1
• coverage==4.1
• Sphinx==1.5.2
• pytest==2.9.2
• pytest-cov==2.4.0
• pytest-mock==1.5.0
To install these, first clone the repository:

$ git clone git@gitlab.com:zapis-dublya/iot-traffic-light-control.git

Then,

$ pip install -r requirements_dev.txt

Ffmpeg

FFmpeg 3.4.2. FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much anything that humans and machines have created. It supports the most obscure ancient formats up to the cutting edge, no matter whether they were designed by some standards committee, the community, or a corporation. It is also highly portable: FFmpeg compiles, runs, and passes our testing infrastructure FATE across Linux, Mac OS X, Microsoft Windows, the BSDs, Solaris, etc., under a wide variety of build environments, machine architectures, and configurations.

It contains libavcodec, libavutil, libavformat, libavfilter, libavdevice, libswscale and libswresample, which can be used by applications, as well as ffmpeg, ffserver, ffplay and ffprobe, which can be used by end users for transcoding, streaming and playing. The FFmpeg project tries to provide the best technically possible solution for developers of applications and end users alike. To achieve this we combine the best free software options available. We slightly favor our own code to keep the dependencies on other libs low and to maximize code sharing between parts of FFmpeg. Wherever the question of "best" cannot be answered, we support both options so the end user can choose.

Opencv 3.2

OpenCV is released under a BSD license and hence it's free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. Written in optimized C/C++, the library can take advantage of multi-core processing. Enabled with OpenCL, it can take advantage of the hardware acceleration of the underlying heterogeneous compute platform. Adopted all around the world, OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 14 million. Usage ranges from interactive art, to mine inspection, to stitching maps on the web, to advanced robotics.

Required packages:

• GCC 4.4.x or later
• CMake 2.8.7 or higher
• Git
• GTK+ 2.x or higher, including headers (libgtk2.0-dev)
• pkg-config
• Python and NumPy with developer packages (python-dev, python-numpy)
• ffmpeg or libav development packages: libavcodec-dev, libavformat-dev, libswscale-dev
• [optional] libtbb2, libtbb-dev
• [optional] libdc1394 2.x
• [optional] libjpeg-dev, libpng-dev, libtiff-dev, libjasper-dev, libdc1394-22-dev
• [optional] CUDA Toolkit 6.5 or higher

Getting the OpenCV source code: you can use the latest stable OpenCV version, or you can grab the latest snapshot from the Git repository. For the latest stable version, go to the downloads page, download the source archive and unpack it. For the cutting-edge version, launch a Git client and clone the OpenCV repositories:

cd ~/<my_working_directory>
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

Building OpenCV from source using CMake. Create a temporary directory, which we denote as <cmake_build_dir>, where you want to put the generated Makefiles and project files, as well as the object files and output binaries, and enter it. For example:

cd ~/opencv
mkdir build
cd build

Configuring. Run cmake [<some optional parameters>] <path to the OpenCV source directory>. For example:

cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..

or

cmake-gui
set full path to OpenCV source code, e.g. /home/user/opencv
set full path to <cmake_build_dir>, e.g. /home/user/opencv/build
set optional parameters

Then run "Configure" and "Generate".

Note: use

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..

without spaces after -D if the above example doesn't work.

Description of some parameters:

• build type: CMAKE_BUILD_TYPE=Release or Debug
• to build with modules from opencv_contrib, set OPENCV_EXTRA_MODULES_PATH to <path to opencv_contrib/modules/>
• set BUILD_DOCS to build documents
• set BUILD_EXAMPLES to build all examples

[optional] Building Python bindings. Set the following Python parameters:

PYTHON2(3)_EXECUTABLE = <path to python>
PYTHON_INCLUDE_DIR = /usr/include/python<version>
PYTHON_INCLUDE_DIR2 = /usr/include/x86_64-linux-gnu/python<version>
PYTHON_LIBRARY = /usr/lib/x86_64-linux-gnu/libpython<version>.so
PYTHON2(3)_NUMPY_INCLUDE_DIRS = /usr/lib/python<version>/dist-packages/numpy/core/include/

[optional] Building Java bindings. Unset the parameter BUILD_SHARED_LIBS. It is also useful to unset BUILD_EXAMPLES, BUILD_TESTS and BUILD_PERF_TESTS, as they will all be statically linked with OpenCV and can take a lot of memory.

Build. From the build directory execute make; it is recommended to do this in several threads. For example:

make -j7 # runs 7 jobs in parallel

[optional] Building documents. Enter <cmake_build_dir>/doc/ and run make with the target "html_docs". For example:

cd ~/opencv/build/doc/
make -j7 html_docs

To install the libraries, execute the following command from the build directory:

sudo make install

[optional] Running tests. Get the required test data from the OpenCV extra repository, for example:

git clone https://github.com/opencv/opencv_extra.git

Set the OPENCV_TEST_DATA_PATH environment variable to <path to opencv_extra/testdata>, then execute the tests from the build directory. For example:

<cmake_build_dir>/bin/opencv_test_core

UPM (Useful Packages & Modules) Sensor

The UPM repository provides software drivers for a wide variety of commonly used sensors and actuators. These software drivers interact with the underlying hardware platform (or microcontroller), as well as with the attached sensors, through calls to MRAA APIs. Programmers can access the interfaces for each sensor by including the sensor's corresponding header file and instantiating the associated sensor class. In the typical use case, a constructor initializes the sensor based on parameters that identify the sensor, the I/O protocol used, and the pin location of the sensor. C++ interfaces have been defined for the following sensor/actuator types, but they are subject to change: light controller, light sensor, temperature sensor, humidity sensor, pressure sensor, gas sensor, analog-to-digital converter. The developer community is encouraged to help expand the list of supported sensors and actuators and provide feedback on interface design.

Supported sensors: see the supported sensor list in the API documentation. You can also refer to the Intel® IoT Developer Zone.

Installing UPM. UPM packages are provided for some of the major supported distributions, making it very easy to install UPM and its dependencies without having to go through a full build. Yocto Project based Poky Linux builds are provided for Intel Galileo, Intel Edison and Minnowboard; these are the official images released with the Intel IoT Developer Kit. To update to the latest stable UPM version:

echo "src intel-iotdk https://iotdk.intel.com/repos/3.5/intelgalactic/opkg/i586/" > /etc/opkg/intel-iotdk.conf
opkg update
opkg upgrade mraa upm

If you would like to try the development version, use intelgalactic-dev instead. For more details visit this page.
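As a quick taste of the UPM Python bindings, the minimal sketch below blinks a Grove LED. It assumes the pyupm_grove module shipped with the stock Edison image; pin 7 is borrowed from LED_GREEN_PIN in the project config:

import time
import pyupm_grove as grove

# Green Grove LED on digital pin 7 (LED_GREEN_PIN in the project config)
led = grove.GroveLed(7)

for _ in range(3):
    led.on()
    time.sleep(1)
    led.off()
    time.sleep(1)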


libmraa - Low Level Skeleton Library for Communication on GNU/Linux platforms
Libmraa is a C/C++ library with bindings to Java, Python and JavaScript to interface with the I/O on Galileo, Edison and other platforms, with a structured and sane API where port names/numbering match the board that you are on. Use of libmraa does not tie you to specific hardware; with board detection done at runtime, you can create portable code that will work across the supported platforms. The intent is to make it easier for developers and sensor manufacturers to map their sensors and actuators on top of supported hardware and to allow control of low-level communication protocols by high-level languages and constructs.

Installation. Installing on an Intel 32-bit Yocto based opkg image: see the section below on compiling, or use our repository to install on a glibc based Yocto Poky image that supports opkg. Adding this repository is as simple as the commands below, and you'll have the latest stable tagged build of mraa installed!

echo "src mraa-upm http://iotdk.intel.com/repos/3.5/intelgalactic/opkg/i586" > /etc/


˓→opkg/mraa-upm.conf

opkg update
opkg install mraa

If you would like to get the latest & greatest builds from master HEAD, you can use our -dev repository:

echo "src mraa-upm http://iotdk.intel.com/repos/3.5/intelgalactic-dev/opkg/i586" > /


˓→etc/opkg/mraa-upm.conf

opkg update
opkg install mraa
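Whichever repository you use, a short smoke test confirms that mraa is installed and can drive a pin. This is a sketch only; pin 7 is borrowed from this project's green LED wiring:

import mraa

print(mraa.getVersion())    # report the installed mraa version

pin = mraa.Gpio(7)          # green LED pin in this project
pin.dir(mraa.DIR_OUT)
pin.write(1)                # drive the pin high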

Alternatively, you can build libmraa from source.

Building libmraa

libmraa uses CMake in order to make compilation relatively painless. CMake runs builds out of tree, so the recommended way is to clone from git and make a build/ directory inside the clone directory. For building imraa, check the imraa build instructions.

Build dependencies. Not all of these are required, but if you're unsure of what you're doing, this is what you'll need:

• SWIG 3.0.5+
• git
• python 2.7 or 3.4+ (you'll need not just the interpreter but python-dev)
• node.js 4.x recommended (you'll need not just the interpreter but nodejs-dev)
• CMake 2.8.8+ (3.1+ is recommended for node.js version 2+)
• json-c 0.12+ (0.10+ probably works in reality)

For Debian-like distros the command below installs the basic set:

sudo apt-get install git build-essential swig3.0 python-dev nodejs-dev cmake libjson-c-dev

To build the documentation you'll also need:

• Doxygen 1.8.9.1+
• Graphviz 2+ (for Doxygen graph generation)
• Sphinx 1.1.3+ (for Python docs)

Basic build steps:

mkdir build
cd build
cmake ..
make

If this goes wrong and you have all the dependencies installed, then please file an issue with the full output of cmake .. and make, or however far you got. After that you can install the built files (into the default path) by running:

sudo make install

See the flags for adjusting install paths in the section below. Currently our install logic puts Python bindings into standard paths, which do not work on Debian due to their policy. We are working on a permanent solution; in the meantime, please use this command after make install to link the installed modules where Debian's Python expects them:

sudo ln -s <your install prefix, e.g. /usr>/lib/python2.7/site-packages/* /usr/lib/python2.7/dist-packages

The same approach works for Python 3; you'll just need to adjust the version number in the path accordingly.

Configuration flags. Our CMake configuration has a number of options; cmake-gui or ccmake (cmake -i is no longer with us :() can show you all the options. A few of the more common ones are listed below. Note that when an option starts with CMAKE_ it's an option that is made available by CMake and will be similar in all CMake projects. You need to add the options after cmake but before ... A few recommended options. Changing the install path from /usr/local to /usr:

-DCMAKE_INSTALL_PREFIX:PATH=/usr

Building a debug build (adds -g and disables optimisations; this will force a full rebuild):

-DCMAKE_BUILD_TYPE=DEBUG

Using clang instead of gcc:

-DCMAKE_C_COMPILER=/usr/bin/clang -DCMAKE_CXX_COMPILER=/usr/bin/clang++

Building with an older version of SWIG (< 3.0.2) requires the disabling of JavaScript:

-DBUILDSWIGNODE=OFF

Disabling Python module building:

-DBUILDSWIGPYTHON=OFF

Building the documentation (this will require Sphinx and Doxygen): -DBUILDDOC=ON. You will also need to clone the git submodules from your existing checkout:

git submodule update --init --recursive

Then, from the doxygen2jsdoc dir:

npm install mkdirp commander lodash bluebird pegjs

Override the build architecture (this is useful because on x86, ARM code is not compiled, so use this flag to force the target arch):

-DBUILDARCH=arm

You can also enable -Wall for gcc before running cmake by exporting your wanted CC flags to the CC env var:

export CC="gcc -Wall"

Sometimes it's nice to build a static library; on Linux systems just set:

-DBUILD_SHARED_LIBS=OFF

Dependencies continued. You'll need at least SWIG version 3.0.2, and we recommend 3.0.5, to build the JavaScript and Python modules. If your version of SWIG is older than this, then please see above for disabling SWIGNODE; otherwise you will get a weird build failure when building the JavaScript module. The Python module builds with SWIG 2.x, but we don't test it. During the build, we'll assume you're building from git; note that if you compile with git installed, your version of mraa will be versioned with git describe --tag to make identification easy. You can easily modify version.c in build/src. If you don't build from a git tree, then you will simply have a version which matches the latest released version of mraa.

Using a Yocto/OE toolchain. In order to compile with a Yocto/OE toolchain, use the following toolchain file. This works well on the Edison 1.7.2 SDK. First source the environment file, then use our CMake toolchain file.


source /opt/poky-edison/1.7.2/environment-setup-core2-32-poky-linux
mkdir build
cd build
cmake -DCMAKE_TOOLCHAIN_FILE=../cmake/Toolchains/oe-sdk_cross.cmake ..
make

Using Coverity This is the procedure to submit a build to Coverity. You’ll need to install coverity-submit for your OS.

mkdir covbuild/ && cd covbuild
cmake -DBUILDDOC=OFF -DBUILDSWIG=OFF ..
cov-build --dir cov-int make
tar caf mraa.tar.bz2 cov-int

Building Java bindings. Have JAVA_HOME set to the JDK install directory. Most distributions set this from /etc/profile.d/ and have a way of switching between alternatives. We support both OpenJDK and Oracle's JDK. On Arch Linux with OpenJDK 8 you'll have to set this yourself, like this:

export JAVA_HOME=/usr/lib/jvm/default/

Then use the CMake configuration flag -DBUILDSWIGJAVA=ON. To compile Example.java:

javac -cp $DIR_WHERE_YOU_INSTALLED_MRAA/mraa.jar:. Example.java

To run, make sure libmraajava.so is in LD_LIBRARY_PATH:

java -cp $DIR_WHERE_YOU_INSTALLED_MRAA/mraa.jar:. Example

If you want to add or improve Java bindings for mraa, please follow the Creating Java Bindings Guide.

Building an IPK/RPM package using cpack. You can get cpack to generate an IPK or RPM package fairly easily if you have the correct packaging tools:

cmake -DIPK=ON -DCMAKE_INSTALL_PREFIX=/usr ..
make package

To use RPM simply enable the RPM option. You’ll need rpmbuild installed on your build machine.

cmake -DRPM=ON -DCMAKE_INSTALL_PREFIX=/usr ..



CHAPTER 4

Installation process

After you get all the necessary software, you can install IoT Light Control.

From sources

The sources for IoT Light Control can be downloaded from the GitLab repo.

1. You can either clone the public repository:

$ git clone git@gitlab.com:zapis-dublya/iot-traffic-light-control.git

Or download the tarball:

$ curl -L 'https://gitlab.com/zapis-dublya/iot-traffic-light-control/repository/archive.tar?ref=master' -o iot_light_control.tar

And extract it with:

$ tar xvf iot_light_control.tar

2. Once you have a copy of the source, you can install it with:

$ python setup.py install

As for hardware, find the necessary information below.

Grove Base Shield interfaces: connect the following devices. The table also contains the information about the documented API of the components.

Connect the buzzer using the following scheme or the live photo as an example.


Connect the USB web camera and set it above the road.

Also, you will need the mobile application to control the system. Download it and install it on your Android device.

Congratulations! IoT Light Control is ready to use.



CHAPTER 5

Features and modules

The main feature of the project is the traffic light control. This feature is implemented through OpenCV video capture and is based on a custom-developed pattern recognition algorithm. The algorithm sets the most suitable timing for the traffic light's life cycle.

As an additional feature, barrier control through beacon technology is implemented. The barrier is automatically lifted when a vehicle carrying a beacon on board arrives.

Besides the automatic features listed above, manual control is available. Using a mobile device, the user can control the system and set their own desired parameters to be applied to the system.

These features and modules are described in the following parts of the document; since the beacon-handling code is not listed anywhere below, a rough sketch of the idea follows here.
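Purely as an illustration, a proximity check against a beacon could be written with the third-party bluepy library. The beacon address and RSSI threshold below are placeholders, not project values, and this is not the project's actual implementation:

from bluepy.btle import Scanner   # third-party BLE library, not part of this project

BEACON_ADDR = "aa:bb:cc:dd:ee:ff"   # placeholder beacon MAC address
RSSI_THRESHOLD = -70                # placeholder "close enough" signal level

def beacon_nearby(timeout=2.0):
    # Scan BLE advertisements and check whether our beacon is close by.
    for dev in Scanner().scan(timeout):
        if dev.addr == BEACON_ADDR and dev.rssi > RSSI_THRESHOLD:
            return True
    return False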



CHAPTER 6

Web-camera and traffic-light control

The traffic light control is implemented through the OpenCV video capture system. The TimeAdvicer class implements the traffic light control feature of the system.
class TimeAdvicer:
    """
    Main class for the car detection.
    Captures the road image and finds cars on it.
    """
    def __init__(self, camN=0, file1="test1.jpg", file2="test2.jpg"):
        """
        :param camN: Number of device /dev/video*
        :param file1: Outfile for the first part of the road
        :param file2: Outfile for the second part of the road
        """
        self.cam = cv2.VideoCapture(camN)
        self.file1 = file1
        self.file2 = file2
        self.broken = False

The system requires the empty road to be shown at the beginning. The test() method is called to detect the road and to write out the parts of the road as images. This makes it possible to find the zones of interest, which are the parts of the crossroad.
def test(self):
    """
    Detects the road and prints it as two images.
    :return: True if the road has been detected, False otherwise.
    """
    # Road detection indicator
    flag = False
    # Capture the road image from the webcam
    ret, image = self.cam.read()
    # Get the objects' borders
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blurred, 50, 200, 255)

    # Find contours in the edge map, then sort them by their
    # size in descending order
    cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if imutils.is_cv2() else cnts[1]
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)

    # Set of contours' points
    displayCnt = None

    # Loop over the contours
    for c in cnts:
        # Approximate the contour
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)

        # If the approximated contour has twelve vertices and a large
        # enough area, we have found the crossroad outline
        if (len(approx) == 12) and (cv2.contourArea(c) >= 1000):
            displayCnt = approx
            flag = True
            break

    # X and Y of contour points
    roadx = []
    roady = []

    for i in displayCnt:
        roadx.append(i[0][0])
        roady.append(i[0][1])

    # Get the set of the contours' points
    pts = []
    for i in range(0, len(roadx)):
        pts.append([roadx[i], roady[i]])

    # Get the array with border numbers (convex hull indices)
    convexNums = grahamscan(pts)

    xFirst = convexNums[0]
    # Getting the first half of the road
    firstHalfRoad = [point for point in convexNums
                     if (isNear((pts[point])[0], (pts[xFirst])[0])
                         or isNear((pts[point])[1], (pts[xFirst])[1]))
                     and point != xFirst]

    firstRoad = [point for point in convexNums
                 if (isNear((pts[point])[0], (pts[firstHalfRoad[0]])[0])
                     or isNear((pts[point])[1], (pts[firstHalfRoad[0]])[1]))
                 and point != firstHalfRoad[0]] + firstHalfRoad

    # Other half of the road
    secondRoad = [i for i in set(convexNums) - set(firstRoad)]

    # Get two rectangles for the road recognition
    localApproxFirstRoad = [[[]]]
    localApproxSecondRoad = [[[]]]
    for y in firstRoad:
        localApproxFirstRoad.append([[pts[y]]])
    localApproxFirstRoad.pop(0)
    for y in secondRoad:
        localApproxSecondRoad.append([[pts[y]]])
    localApproxSecondRoad.pop(0)

    self.coordsOfFirstRoad = np.array(localApproxFirstRoad)
    self.coordsOfSecondRoad = np.array(localApproxSecondRoad)

    # Print out the road parts to images
    r1 = four_point_transform(gray, self.coordsOfFirstRoad.reshape(4, 2))
    cv2.imwrite(self.file1, r1)
    r2 = four_point_transform(gray, self.coordsOfSecondRoad.reshape(4, 2))
    cv2.imwrite(self.file2, r2)
    return flag

The method returns True, and writes two files with the given names containing the parts of the road, if the road has been detected. An example of the output images:


After that, connected-components analysis helps to find cars on the road. The getCarN method counts the number of cars currently on the road.

def getCarN(self):
    """
    Detects objects on the road.
    :return: The number of cars on the first and second roads as x, y
    """
    try:
        if self.broken == True:
            return 1, 1
        # Get the road images
        ret, image = self.cam.read()
        # Get the objects' borders
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        r1 = four_point_transform(gray, self.coordsOfFirstRoad.reshape(4, 2))
        cv2.imwrite(self.file1, r1)
        r2 = four_point_transform(gray, self.coordsOfSecondRoad.reshape(4, 2))
        cv2.imwrite(self.file2, r2)

        # Calculate the threshold as the mean brightness of the road
        mean = 0
        for i in r1:
            for j in range(0, len(i)):
                mean += i[j]
        mean = mean / (len(r1) - 1) / (len(r1[0]) - 1)

        # Get the car image on the road
        blurred = cv2.GaussianBlur(r1, (11, 11), 0)
        thresh = cv2.threshold(blurred, mean * 0.8, 255, cv2.THRESH_BINARY_INV)[1]
        thresh = cv2.erode(thresh, None, iterations=2)
        thresh = cv2.dilate(thresh, None, iterations=4)
        labels1 = measure.label(thresh, neighbors=8, background=0, return_num=True)
        cv2.imwrite(self.file1, thresh)

        # Same for the second road
        mean = 0
        for i in r2:
            for j in range(0, len(i)):
                mean += i[j]
        mean = mean / (len(r2) - 1) / (len(r2[0]) - 1)
        blurred = cv2.GaussianBlur(r2, (11, 11), 0)
        thresh = cv2.threshold(blurred, mean * 0.8, 255, cv2.THRESH_BINARY_INV)[1]
        thresh = cv2.erode(thresh, None, iterations=2)
        thresh = cv2.dilate(thresh, None, iterations=4)
        labels2 = measure.label(thresh, neighbors=8, background=0, return_num=True)
        cv2.imwrite(self.file2, thresh)
        # Return the number of connected components
        return labels1[1], labels2[1]
    except cv2.error:
        # The original listing ends inside the try block; this fallback is
        # an assumed completion that flags the capture as broken.
        self.broken = True
        return 1, 1


A simple intermediate result of the analysis is the following.

The output is how many cars are currently on the road. Depending on the traffic, the server will set the necessary timer value.
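Putting the pieces together, a polling loop on the server side might look like the sketch below. The timer rule here is illustrative only and is not the project's actual formula:

import time

advicer = TimeAdvicer(camN=0)

# The empty road must be shown first so the zones of interest are found.
if not advicer.test():
    raise RuntimeError("road not detected; check the camera position")

while True:
    cars_first, cars_second = advicer.getCarN()
    # Illustrative rule: give the busier direction a longer green phase.
    green_duration = 20 + 5 * max(cars_first - cars_second, 0)
    print(cars_first, cars_second, green_duration)
    time.sleep(green_duration)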



CHAPTER 7

Mobile application

Here is a short guide on how to control the system using a mobile device. To gain mobile control, you need to install the mobile application.

Download link: https://www.dropbox.com/s/9bj18y52phz1r96/cordova_project-armv7.android.20170328215023.apk?dl=0

First, you will need to register in the application. Next, register the Edison. After that, you can control the system.

You can control the system through the following parameters:

• Barrier state (up/down)
• Light duration and barrier height

All of these parameters can be saved and pushed to the system as API requests. The server will handle these requests and apply them to the system.


CHAPTER 8

Server API

Here is the current implementation of the public API methods.

api = Blueprint('api', url_prefix='api')

def verify_user(request):
    ts = request.app.token_storage

    token = request.headers.get('X-Auth-Token')

    if not token:
        token = request.json['token']

    return ts[token]

@api.route("/settings", methods=('GET', 'PUT'))
async def settings(request):
    user = verify_user(request)
    app = request.app.main_app
    settings = app.settings.state
    if request.method == 'GET':
        return json({
            "settings": {
                "smartLight": settings['smart_light'],
                "barrierHeight": user.barrier_height,
                "greenDuration": settings['green_duration'],
                "redDuration": settings['red_duration']
            }
        })

    if request.method == 'PUT':
        new_settings = request.json

        user.barrier_height = new_settings['barrierHeight']
        user.save()

        settings['red_duration'] = new_settings['redDuration']
        settings['green_duration'] = new_settings['greenDuration']
        settings['smart_light'] = new_settings['smartLight']

        return json({"status": "ok"})

@api.route("/test")
async def api_test(request):
    return json({'status': 'ok'})

def api_error(msg):
    return json({"error": msg}, 400)

def create_token(user, ts):
    jwt = JWT(config.SECRET)
    timestamp = str(datetime.now().timestamp())
    token = jwt.dumps({'u': user.username, 't': timestamp}).decode()

    ts[token] = user

    Sessions.create(user=user, token=token)

    return json({"token": token})

@api.route("/login", methods=('GET', 'POST'))
async def api_login(request):
    token_storage = request.app.token_storage
    if request.method == 'GET':
        return json({"status": "ok"})

    if request.method == 'POST':
        username = request.json.get('username')
        password = request.json.get('password')

        if username is None or password is None:
            return api_error("Invalid data")

        password_hash = hashlib.sha1(password.encode()).hexdigest()

        s = User.select().where(User.username == username)

        if s.exists():
            user = s.first()

            if user.password_hash == password_hash:
                return create_token(user, token_storage)

        return api_error("Invalid user or password")

@api.route("/register", methods=('POST',))
async def api_register(request):
    # Renamed from a duplicate "api_login" in the original listing.
    username = request.json.get('username')
    password = request.json.get('password')

    if username is None or password is None:
        return api_error("Invalid data")

    if len(username) == 0:
        return api_error("Username is too short")

    if len(password) < 4:
        return api_error("Password should be longer than 4 symbols")

    s = User.select().where(User.username == username)

    if s.exists():
        return api_error("User already exists")

    password_hash = hashlib.sha1(password.encode()).hexdigest()

    new_user = User.create(username=username, password_hash=password_hash)

    return create_token(new_user, request.app.token_storage)

@api.route('/barrier', methods=('GET', 'PUT'))
async def barrier(request):
    app = request.app.main_app
    user = verify_user(request)
    barrier_open = app.state.state.get('barrier', 0) != 0
    if request.method == 'GET':
        return json({"barrierOpen": barrier_open})

    if request.method == 'PUT':
        if not barrier_open:
            app.state.state['barrier'] = user.barrier_height
        else:
            app.state.state['barrier'] = 0
        return json({"status": "ok"})

• PUT /api/settings
  The API for changing the application settings.
• GET /api/settings
  The API that returns the application settings.
• GET /api/barrier
  The API that returns whether the barrier is open.
• GET /api/test
  The API for Edison discovery. Returns whether the application can access the Edison.
• PUT /api/barrier
  The API that opens (or closes) the barrier.
• GET /api/status
  The API that returns the number of cars on the crossroad.
• GET /api/camera
  The API that returns the image captured from the camera.
• POST /api/login
  The API that logs a user into the application system.
• POST /api/register
  The API that registers a new user in the application system.

A sample client session against these endpoints is sketched below.
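For example, a client might register and toggle the barrier with the requests library as in the following sketch. The Edison address is a placeholder; the port comes from Config.API_PORT:

import requests

BASE = "http://192.168.1.10:8001/api"   # placeholder Edison address

# Register and grab a token
r = requests.post(BASE + "/register",
                  json={"username": "demo", "password": "secret"})
token = r.json()["token"]
headers = {"X-Auth-Token": token}

# Read the current settings, then toggle the barrier
print(requests.get(BASE + "/settings", headers=headers).json())
requests.put(BASE + "/barrier", headers=headers)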


CHAPTER 9

Additions

Credits

Development Team

• Vadim Andronov <vadimadr@gmail.com>
• Vladimir Zlobin <wovchena@gmail.com>
• Timofey Kuzmin <>
• Alexander Tsyplyaev <AlexTsyplyaev@gmail.com>
• Mihail Dolinin <mixaildolinin@gmail.com>

Any contribution is welcome!

Contributing

Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
You can contribute in many ways:

Types of Contributions

Report Bugs

Report bugs at https://gitlab.com/zapis-dublya/iot-traffic-light-control/issues.


If you are reporting a bug, please include:
• Your operating system name and version.
• Any details about your local setup that might be helpful in troubleshooting.

• Detailed steps to reproduce the bug.

Fix Bugs

Look through the GitLab issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it.

Implement Features

Look through the GitLab issues for features. Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it.

Write Documentation

IoT Light Control could always use more documentation, whether as part of the official IoT Light Control docs, in
docstrings, or even on the web in blog posts, articles, and such.

Submit Feedback

The best way to send feedback is to file an issue at https://gitlab.com/zapis-dublya/iot-traffic-light-control/issues/new


If you are proposing a feature:
• Explain in detail how it would work.
• Keep the scope as narrow as possible, to make it easier to implement.
• Remember that this is a volunteer-driven project, and that contributions are welcome :)

Get Started!

Ready to contribute? Here’s how to set up iot_light_control for local development.


1. Clone iot_light_control locally:

$ git clone git@gitlab.com:zapis-dublya/iot-traffic-light-control.git

2. Install your local copy into a virtualenv. Assuming you have virtualenv installed, this is how you set up for local
development:

$ python3 -m virtualenv iot_light_control_env
$ source iot_light_control_env/bin/activate
$ cd iot_light_control/
$ python setup.py develop

3. Install mraa and upm in mocking mode to prevent import errors:

$ git submodule update --init
$ make iot-devkit

4. Create a branch for local development from dev branch:

$ git checkout -b name-of-your-feature dev


Now you can make your changes locally.


5. When you’re done making changes, check that your changes pass flake8 and the tests, including testing other
Python versions with tox:

$ flake8 iot_light_control tests
$ python setup.py test or pytest
$ tox

To get flake8 and tox, just pip install them into your virtualenv.
6. Commit your changes and push your branch to GitLab:

$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-feature

7. Now merge your changes to dev without a fast-forward and push changes:

$ git checkout dev


$ git merge --no-ff name-of-your-feature
$ git push origin dev

Tips

To run a subset of tests:

$ py.test tests.test_iot_light_control



CHAPTER 10

History

0.1.0 (2017-01-23)

• Bootstrap project.

1.0.1 (2017-03-25)

• Mobile app, server API, docs.



CHAPTER 11

iot_light_control

iot_light_control package

Submodules

iot_light_control.api module

iot_light_control.app module

iot_light_control.barrier module

iot_light_control.buzzer module

iot_light_control.config module

class iot_light_control.config.Config

    API_PORT = 8001
    BUZZER_PIN = 6
    BUZZER_VOLUME = 0.1
    DATABASE_FILE = 'iottl_database.db'
    DEFAULT_GREEN_DURATION = 20
    DEFAULT_RED_DURATION = 20
    DEVELOP = False
    LCD_I2C = (0, 62, 98)
    LED_GREEN_PIN = 7
    LED_RED_PIN = 8
    LED_YELLOW_PIN = 3
    LOGGING_CONFIG = {'loggers': {'iot_light': {'level': 'INFO'}}, 'disable_existing_loggers': False, 'handlers': {'console': …
    RESOLVER_URL = 'http://35.157.167.40:5050/edison'
    SECRET = 'dummy-secret'
    SERVO_BARRIER_PIN = 5
    STATIC_ROOT = '../webapp/dist'

class iot_light_control.config.ProductionConfig
    Bases: iot_light_control.config.Config

    STATIC_ROOT = '/var/www'

class iot_light_control.config.TestConfig
    Bases: iot_light_control.config.Config

    BUZZER_PIN = 0
    DEVELOP = True
    LED_GREEN_PIN = 0
    LED_RED_PIN = 0
    LED_YELLOW_PIN = 0
    SERVO_BARRIER_PIN = 0

iot_light_control.config.config
    alias of Config

iot_light_control.database module

iot_light_control.led_light module

iot_light_control.light_controller module

iot_light_control.state module

class iot_light_control.state.State(store, *args, **kwargs)
    Bases: dict

class iot_light_control.state.StateStore
    Bases: object

    notify(key, value, old_value)
    state
    subscribe(key, observer)
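The store follows a simple observer pattern. A sketch of how a component might subscribe is shown below; the observer signature is an assumption inferred from notify(key, value, old_value), and writing through state is assumed to trigger notification:

store = StateStore()

def on_barrier_change(key, value, old_value):
    # Hypothetical observer reacting to barrier height updates.
    print("barrier changed from", old_value, "to", value)

store.subscribe('barrier', on_barrier_change)
store.state['barrier'] = 30   # assumed to call notify() for subscribers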

Module contents


CHAPTER 12

Indices and tables

• genindex
• modindex
• search



Python Module Index

• iot_light_control
• iot_light_control.config
• iot_light_control.state
