
Journal of

Communication and Computer


Volume 13, Number 7, July 2016 (Serial Number 129)

David Publishing

David Publishing Company


www.davidpublisher.com

Publication Information:
Journal of Communication and Computer is published monthly in hard copy (ISSN 1548-7709) and online (ISSN
1930-1553) by David Publishing Company located at 616 Corporate Way, Suite 2-4876 Valley Cottage, NY 10989,
USA.
Aims and Scope:
Journal of Communication and Computer, a monthly professional academic journal, covers research on
Theoretical Computer Science, Network and Information Technology, Communication and Information Processing,
and Electronic Engineering, as well as other issues.
Contributing Editors:
YANG Chun-lai, male, Ph.D. of Boston College (1998), Senior System Analyst of Technology Division, Chicago
Mercantile Exchange.
DUAN Xiao-xia, female, Master of Information and Communications of Tokyo Metropolitan University, Chairman of
Phonamic Technology Ltd. (Chengdu, China).
Editors:
Cecily Z., Lily L., Ken S., Gavin D., Jim Q., Jimmy W., Hiller H., Martina M., Susan H., Jane C., Betty Z., Gloria G.,
Stella H., Clio Y., Grace P., Caroline L., Alina Y.
Manuscripts and correspondence are invited for publication. You can submit your papers via Web Submission, or
E-mail to informatics@davidpublishing.org. Submission guidelines and Web Submission system are available at
www.davidpublisher.com.
Editorial Office:
616 Corporate Way, Suite 2-4876 Valley Cottage, NY 10989, USA
Tel: 1-323-984-7526, Fax: 1-323-984-7374
E-mail: informatics@davidpublishing.org; com58puter@hotmail.com
Copyright ©2016 by David Publishing Company and individual contributors. All rights reserved. David Publishing
Company holds the exclusive copyright of all the contents of this journal. In accordance with the international
convention, no part of this journal may be reproduced or transmitted by any media or publishing organs (including
various websites) without the written permission of the copyright holder. Otherwise, any conduct would be
considered as the violation of the copyright. The contents of this journal are available for any citation. However, all
the citations should be clearly indicated with the title of this journal, serial number and the name of the author.
Abstracted / Indexed in:
Database of EBSCO, Massachusetts, USA
Chinese Database of CEPS, Airiti Inc. & OCLC
Chinese Scientific Journals Database, VIP Corporation, Chongqing, P.R.China
CSA Technology Research Database
Ulrich's Periodicals Directory
Summon Serials Solutions
Subscription Information:
Price (per year):
Print $520; Online $360; Print and Online $680
David Publishing Company
616 Corporate Way, Suite 2-4876 Valley Cottage, NY 10989, USA
Tel: 1-323-984-7526, Fax: 1-323-984-7374
E-mail: order@davidpublishing.org
Digital Cooperative Company: www.bookan.com.cn


Journal of

Communication and Computer


Volume 13, Number 7, July 2016 (Serial Number 129)

Contents
Computer Theory and Computational Science
319  Robotics Based Autonomous Wheelchair Navigation
     Anna Shafer, Michael Turney, Francisco Ruiz, Justin Mabon, Vamsi Paruchuri and Yu Sun
Network and Information Technology
329  Development of a Finger Mounted Type Haptic Device Using a Plane Approximated to Tangent Plane
     Makoto Yoda and Hiroki Imamura
338  Learning in a Smart City Environment
     R. Nikolov, E. Shoikova, M. Krumova, E. Kovatcheva, V. Dimitrov and A. Shikalanov
Communications and Electronic Engineering
351  Saarbrucken Synthetic Image Database - An Image Database for Design and Evaluation of Visual Quality Metrics in Synthetic Scenarios
     Christopher Haccius and Thorsten Herfet
366  A New Code Family for Double Spread Transmissions in Radio-over-Fiber over Optical CDMA Network
     Kai-Sheng Chen, Chao-Chin Yang, Jen-Fa Huang and Hsuan-Ho Chang
373  Design of Energy Efficient ZigBee Module
     Odgerel Ayurzana and Sugir Tsagaanchuluun

Journal of Communication and Computer 13 (2016) 319-328


doi:10.17265/1548-7709/2016.07.001


Robotics Based Autonomous Wheelchair Navigation


Anna Shafer, Michael Turney, Francisco Ruiz, Justin Mabon, Vamsi Paruchuri and Yu Sun
Dept. of Computer Science, University of Central Arkansas, Conway, AR 72034, USA
Abstract: The goal of this project is the development of a robotic wheelchair system that provides independent and autonomous
navigation in indoor environments, allowing its user to drive independently, easily and efficiently. Navigating through
a large and complicated hospital can be difficult for most people, especially for elderly and/or disabled patients. To help patients
more efficiently while reducing manpower, in this research we have proposed and developed an Autonomous Wheelchair
Navigation Prototype with an Arduino robot for hospital navigation. With a user-friendly interface, the proposed prototype is able to
determine the optimal path to find locations accurately and can successfully control the robot's movement during navigation. Thus, it
can remove the need to learn the ins and outs of hospitals and improve the quality of life for its users. Our prototype for this system
has shown good preliminary results and is looking towards a bright future.
Key words: Arduino, wheelchair, autonomous, navigation, robotics.

1. Introduction
The baby-boomers (i.e., those born between 1946
and 1964) are entering their senior years, and as a
result our health care system is experiencing a rising
tide of older patients. The number of US citizens aged
65 years or older is increasing dramatically. In 2010,
older adults accounted for approximately 13% of the
US population, with numbers estimated at 40.2
million [1]. By 2050, these individuals are projected
to account for nearly 20% of the US population, with
their numbers estimated to be 88.5 million [1].
The United States Census Bureau released the
status of people with disabilities in July of 2012. Over
3.5 million people use a wheelchair to assist with
mobility throughout their daily lives. Depending upon
the severity and type of disability, mobility and
independence can be a challenge. This issue is
compounded by the aging population of the United
States. The post-World II baby boom of 1946-1964
flooded the United States alone with 75 million births.
The Baby Boomers are beginning to hit the 65 and
older mark, which will cause an influx of patients being admitted into hospitals. As such, health care
resources in the US are not prepared to meet such a rapid increase in elderly patients. These patients will
require more and more medical care. The situation is already taking a toll on health care resources,
requiring more health care personnel to accommodate the rising number of elderly patients [2].
Corresponding author: Vamsi Paruchuri, Ph.D., associate professor, research fields: computer science (networking, security, robotics).
The draw on human resources in the US as a result
of this population imbalance is uneconomical. In
comparison, Japan is ageing faster than any other
country in history, with vast consequences for its
economy and society [3]. They have, however,
proposed a solution to some of the issues that we are
now beginning to face. One such problem is the
navigation of elderly patients especially in hospitals.
Navigating large hospitals can be difficult for
patients and arduous for the caretakers of elderly
patients. Without assistance, hospital navigation can
be difficult or impossible for them. We propose
Autonomous Wheelchair Prototype as the solution to
this problem. The goal of this project is the
development of a robotic wheelchair system that
provides independent and autonomous navigational
system in indoor environments, which allows its user
to drive independently, easily and efficiently.
The rest of this paper is organized as follows: We


explore existing literature in Section 2 and provide


motivation in Section 3. Section 4 presents the proposed
system overview and Section 5 details the system
components. In Section 6, we present several
experimental results and present concluding remarks
and future work in Section 7.

2. Related Work

Recent technological advances have opened many


doors for those with physical impairments. However,
current technologies are still lacking. Manual
wheelchairs are still the standard in hospital settings,
requiring a patient to be in good physical condition
and have knowledge of the layout in order to navigate
or have assistance from health care personnel. Several
solutions to this problem have been proposed. There
have been few solutions developed for autonomous
wheelchair navigation.

2.1 MICA
The MICA ( mobile internet connected assistant ),


developed by the Lulea University of Technology in
Sweden, allows users to operate a wheelchair with
movement of the head, voice commands, or fully
autonomously [4]. It is designed to be controlled
remotely, over the internet, or by the user himself.
Where the MICA suffers, though, is its lack of path
finding. The only autonomous navigation the MICA is
capable of is voice recognition, which requires a user to
dictate very specific commands. A user is required to
be able to speak or make head movements to navigate
this device and to have a knowledge of the
environment in order to direct it.
2.2 RobChair
The RobChair navigation system was developed at the University of Coimbra, Portugal. The system is designed to assist quadriplegic or simultaneously blind and paraplegic people with their mobility and navigation in domestic environments [5]. The system is voice activated and is equipped with obstacle avoidance to allow for general navigation commands to go right or go left. However, this system lacks any knowledge about its surroundings. It requires the user to have a thorough knowledge of their environment and be able to navigate to their destination.
2.3 Aviator
The Aviator is a wheelchair designed by Hung Nguyen and his team at the Centre for Health Technologies at the University of Technology, Sydney. This hands-free wheelchair uses an EEG (electroencephalography) to read and translate brain signals into navigational commands for the wheelchair [6]. While the wheelchair can navigate by thoughts alone and uses cameras to avoid obstacles, it is very expensive and made only for patients who are severely physically handicapped. It also requires the user to have a prior knowledge of their environment.
3. Motivation and Proposed Solution
Hospitals are huge and difficult to navigate. The


long hallways and labyrinthine passageways are
problematic for patients regardless of how many times
they may have visited. Patients often have to rely on
others to help them navigate, either by physically
pushing them in a wheelchair or by guiding them to
their destination.
Our proposed autonomous wheelchair navigation
system will be able to transport a patient to their
destination with only the push of a few buttons. Our
system will use RFID tags [7] strategically embedded
throughout the hospital to provide orientation
information to the wheelchair device. Our system
takes advantage of the static nature of hospitals by
allowing previously populated floor maps of hospitals
to be downloaded to the device at any time, allowing
the device to navigate to any destination choice
without requiring prior knowledge of the
environments layout from the user. RFID tags will be
used because they are cheap, readily available, and
will provide more accurate feedback than a GPS in this indoor setting.


Any patient will be able to use this system. It is
designed to remove the need of learning hospital
layouts. By allowing technology to assist in this way,
better management of human and monetary resources
will be possible. This wheelchair navigation system
will save money for health care administrations while
accommodating the needs of a wider range of patients
than is possible with only a manual wheelchair.

4. System Overview and Description


Fig. 1 illustrates the system overview. It starts with
the AUI (administrative UI), where a hospital
administrator can create maps of a hospital. He can
build maps to mimic floor plans and save them to an
online repository, which contains all the floors and
buildings of a hospital along with a database that links
room locations and room numbers together. The
database also links RFID values and tag numbers, so
the admin only has to memorize tag numbers but not
the 12 digit hexadecimal RFID values. RFID enables
identification from a distance, and unlike earlier
bar-code technology, it does so without requiring a
line of sight [7]. RFID tags can support a larger set of
unique IDs and can incorporate data such as room number and room identification, such as a laboratory name. Furthermore, RFID systems can discern many different tags located in the same general area without human assistance.
Fig. 1  Prototype of robotic wheelchair.
When a user sits in their wheelchair, the map of the
floor is downloaded to the device. When the user
inputs the room number they wish to go to, the
wheelchair will then scan their current location and
generate the optimal path to their destination. It will
then send directions to the wheelchair's motors,
directing the wheelchair. When the wheelchair drives
over an RFID card, it will compare the value with the
virtual map. If the RFID card matches, then the
wheelchair is in the correct location and is given
another direction. If the values do not match, the
wheelchair will update the path with the new location.
Once the wheelchair reads the RFID card of its
destination the chair will stop.

5. Proposed Prototype
The prototype wheelchair navigation system is
broken up into four modules (Fig. 2): the
Administrative UI, the Software Keypad, the
Navigational algorithm, and the Robot. The
Administrative UI is used to create the maps of
hospital floors. The Software Keypad is used to
retrieve the room locations from the database. The
location is sent to the Navigational Algorithm
which determines the optimal path. The algorithm sends instructions to the Robot. The Robot moves according to the instructions and sends feedback to the algorithm to update its position.
Fig. 2  Prototype wheelchair system flow.
5.1 Administrative UI
The Administrative UI is designed in three stages:
generate, build, and save. The administrator must first
generate a map of a set size. Once the map is
generated, the administrator can then start to build the
map. By using a toolbar of pre-defined shapes, the
admin can reconstruct most floor plans. The admin is
able to click the shape in the toolbar and place it in the

From the Administrative UI Web Page, the admin


can generate a map and start drawing. Fig. 4 shows the
three main functions and how they are called. The
Generate function will create an empty map for the
admin to draw in. At this point the admin can then
click around on the web page. Depending on where
the admin clicks will determine which function is
called. If the admin clicks in the toolbar, select Shape()
is called to load a function into a variable. If the admin
clicks inside the map space, draw Shape() will be
called.
The UI contains an array of functions. These
functions are used to draw the many different shapes.
When the user clicks in the toolbar, that function from
the array is loaded into a variable. When the user
clicks inside the map, the function in the variable is
called. This lets the user draw many shapes in the map
without having a large if-else chain to slow it down.
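The same dispatch-table pattern can be sketched compactly. The paper's implementation is JavaScript; the snippet below is only a minimal C++ illustration of the idea (an array of drawing functions indexed by the selected toolbar tile), and the function names and tile set in it are hypothetical.

```cpp
#include <functional>
#include <iostream>
#include <vector>

// Hypothetical sketch: one drawing routine per toolbar tile, kept in an array
// so a toolbar click only selects an index and a map click calls the function.
using DrawFn = std::function<void(int /*x*/, int /*y*/)>;

static void drawWall(int x, int y) { std::cout << "wall at "      << x << "," << y << "\n"; }
static void drawDoor(int x, int y) { std::cout << "door at "      << x << "," << y << "\n"; }
static void drawRfid(int x, int y) { std::cout << "RFID tile at " << x << "," << y << "\n"; }

int main() {
    std::vector<DrawFn> toolbar = { drawWall, drawDoor, drawRfid };
    DrawFn selected = toolbar[1];   // e.g. the "door" tile was clicked in the toolbar
    selected(2, 3);                 // later, a click inside the map draws that shape
    return 0;
}
```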

Fig. 4  Three main functions.
The map is a 2d array that contains strings. The value


of the string can range from 0 to 16 which
represent the possible shapes. Values of 2, 3, and
16 all contain extra information. 2 and 3 are
doors and must contain the RFID value in the floor
tile and the room number. 16 is the RFID embedded
tile and as such contains the RFID value. An example
of a door tile is 3; 2; 101 which means a door to
room 101 has the RFID tag 2. An RFID embedded tile
would look like 16; 4 which means the tile has the
RFID tag 4.
When the user is done and wants to save, the save
function is called. This function sends the 2D array
representing the map to a PhP script. This script will
go through the array and write to a text file the
contents. It will place the location of the tile and its
value. A value of w means wall, a value of e
means empty, no value represents an empty space, and
anything else is the 12 digit hexadecimal value. When
the program detects there will be a 12 digit
hexadecimal value, it will connect to a database that
pairs the hexadecimal value with the RFID tag number.
This allows the user to not have to memorize so much.
An example of an entry in the text file is 2 2
45DCA0112385 which means that position 2, 2 has
that RFID value while 2 3 w means there is a wall at
2, 3. When the text file is completed, the user can then
operate the software keypad to find room locations.
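A minimal sketch of loading such a map file back into memory is shown below. The file name, grid size and variable names are assumptions for illustration; each line follows the "row col value" layout described above, where the value is "w", "e" or a 12 digit hexadecimal RFID code.

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Sketch of parsing the saved map text file (assumed name and 75 x 75 size).
int main() {
    const int SIZE = 75;
    std::vector<std::vector<std::string>> map(SIZE, std::vector<std::string>(SIZE, ""));

    std::ifstream in("floor1.txt");
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        int row, col;
        std::string value;
        if (ss >> row >> col >> value)
            map[row][col] = value;   // e.g. map[2][2] = "45DCA0112385", map[2][3] = "w"
    }
    std::cout << "Loaded cell (2,2): " << map[2][2] << "\n";
    return 0;
}
```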

5.2 Software Keypad
Our keypad (Fig. 5), programmed in HTML and


Javascript, contains mostly numbers and has twelve
buttons, which is easy for users. When a patient uses
our Autonomous Wheelchair Prototype, they would
press on the keypad the room number they wish to go
to. The keypad sends the room number to the remote
map server. That room number is looked up in the
database and the database returns the location and
saves that location into a text file. The robot then
navigates to the destination that the patient wanted to
go to.
Fig. 5  Software keypad example.
5.3 A Star Algorithm
The A star (A*) algorithm is standard in navigation.
It finds the optimal path between two points in a
known map which consist of points or nodes. These
nodes are spread out in the map and serve as start and
end points as well as all points in between. Each node
also contains priority and level values. The priority
value is equal to the distance it is from the end point
added to its level. The level of a node is how far away


the node is from the starting point. So a priority of
15 means that the node is 15 units away from the end
point, while a level of 11 means this node is 11 units
away from the start point.
The standard A* algorithm is not suitable for our
project, so we had to make some revisions. One of
those revisions is on the node structure. The node
structure used in our A* algorithm contains six
properties. These properties are: the X and Y
coordinates of the node; the level and priority of the
node; the RFID value of the node, and the direction
that the node is pointing in. The direction is a value
between zero and seven. Zero represents map east and
each increment represents another direction in a
clockwise direction such that seven will represent map
north-east (see Fig. 6). This value is updated to show
where the next node in the path is located. The major
steps of the revised A* algorithm are: (1) Begin with
the starting Node; (2) Search each neighbor node of
the selected node. Start to the east of the node and
continue clockwise until all eight directions are
checked; (3) For each neighboring node, check to see
if they are a wall node or have been visited before; (4)
If the node is not a wall node and has not been visited
before, place the node in a possible path queue; (5) If
the node in the queue has been considered before,
compare the two priorities of the paths; (6) Update the
node's priority and direction based on the lowest
priority; (7) Place the nodes in a stack; (8) Select a
node from the stack and go back to step 2; (9) Once
the stack is empty or the end node is found backtrack
using the direction on the nodes; (10) This will give
the correct path.
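A compact sketch of the node structure and direction numbering used by the revised A* is given below. It is not the authors' code; the field names and the Chebyshev distance estimate are assumptions.

```cpp
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <string>

// Sketch of the node structure with the six properties described in the text.
struct Node {
    int x, y;            // X and Y coordinates in the map grid
    int level;           // how far the node is from the start point
    int priority;        // level plus the distance to the end point
    std::string rfid;    // 12 digit hexadecimal RFID value, if the tile has one
    int direction;       // 0 = map east, incrementing clockwise, 7 = map north-east
};

// Eight neighbour offsets, starting east of the node and continuing clockwise,
// matching the numbering of Fig. 6 (0 = east ... 7 = north-east).
const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
const int DY[8] = { 0, 1, 1,  1,  0, -1, -1, -1 };

int main() {
    Node start { 0, 0, 0, 0, "", 0 };
    Node goal  { 5, 3, 0, 0, "", 0 };
    // Priority of the start node: its level plus an 8-way distance estimate to the
    // goal (the paper does not state which estimate it uses; Chebyshev is assumed).
    start.priority = start.level +
        std::max(std::abs(start.x - goal.x), std::abs(start.y - goal.y));
    std::cout << "start priority = " << start.priority << "\n";
    return 0;
}
```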
The text file from the Administrative UI is required
for the A* algorithm. Before the path can be generated,
the program has to load the map into memory. It will
parse through the text file and generate nodes based
on the map information. When the path is found, the
algorithm will return a string of directions. It will then
send one direction at a time to the robot. These

directions will indicate which way the robot should be


heading. It follows the same numbering pattern from
before where zero is map east. When a direction is
sent to the robot it will wait for the robot to send back
a 12 digit hexadecimal value. The program will take
this value and check to see if the robot is in the correct
spot. If it is, the program will send the next instruction.
If the robot is off course, the program will update the
path with the new starting location and send
instructions to the robot.
5.4 Hardware
The hardware platform used in this initial prototype
is based on the DFRobotshop Rover (Fig. 7). This
robotic kit is constructed around an Arduino Uno
microprocessor and its PCB (printed circuit board)
which supports the control of the robots motors,
sensors, input and output ports, as well as
communication with an external computer via mini
USB. Our application takes advantage of the motor
controller electronics as well as multiple serial lines.
Arduino microprocessor can be programmed using
open source Arduino libraries adapted with C++.
The robot is directed using a relative positioning
algorithm that translates directional navigation
commands.

Fig. 6  Eight directions & numbers.
Fig. 7  Assembled DFRobotshop rover.
The embedded motor controller chip on the PCB


board is used to send alternating signals to the motors
corresponding to the desired speed and direction. The
computer into right and left turns relative to the
robots current position. After the command is
translated, it is amplified by a factor determined by the
robots terrain and other variables such as the
currently supplied power.
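As a rough illustration of this relative positioning step, the Arduino-style sketch below turns the rover from its current heading to a commanded heading in the 0-7 scheme and then drives forward. The pin numbers, speeds, timing constant and gain factor are assumptions, not values from the paper.

```cpp
// Illustrative Arduino sketch (assumed DFRobotshop Rover-style wiring and timing).
const int LEFT_DIR = 7, LEFT_PWM = 6, RIGHT_DIR = 4, RIGHT_PWM = 5;  // assumed pins
const int TURN_MS_PER_STEP = 300;   // assumed time to rotate one 45-degree step
float gain = 1.0;                   // stand-in for the terrain/supply amplification

int currentHeading = 0;             // 0..7, heading kept by the robot

void setMotors(int leftSpeed, int rightSpeed) {
  digitalWrite(LEFT_DIR,  leftSpeed  >= 0 ? HIGH : LOW);
  digitalWrite(RIGHT_DIR, rightSpeed >= 0 ? HIGH : LOW);
  analogWrite(LEFT_PWM,  min(255, (int)(abs(leftSpeed)  * gain)));
  analogWrite(RIGHT_PWM, min(255, (int)(abs(rightSpeed) * gain)));
}

void turnTo(int target) {
  int diff = ((target - currentHeading) + 8) % 8;    // clockwise steps needed
  if (diff == 0) return;
  bool clockwise = diff <= 4;
  int steps = clockwise ? diff : 8 - diff;
  setMotors(clockwise ? 150 : -150, clockwise ? -150 : 150);   // spin in place
  delay(steps * TURN_MS_PER_STEP);
  setMotors(0, 0);
  currentHeading = target;
}

void setup() {
  pinMode(LEFT_DIR, OUTPUT);  pinMode(LEFT_PWM, OUTPUT);
  pinMode(RIGHT_DIR, OUTPUT); pinMode(RIGHT_PWM, OUTPUT);
}

void loop() {
  turnTo(2);                  // example: face direction 2, then advance one segment
  setMotors(180, 180);
  delay(1000);
  setMotors(0, 0);
  delay(5000);
}
```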
The prototype also utilizes an RFID (radio-frequency identification) reader (Fig. 8) as
position feedback for the navigational algorithm. This
RFID reader operates on a 125 kHz frequency
allowing it to read standard electromagnetic card tags
(Fig. 9). All communication with the RFID reader is
performed through a serial line connected to the
microprocessor. The tag numbers are read from the
serial stream when they are detected. The tags consist
of 16 total bytes in the format shown below. The
RFID reader is connected to the PCB as shown in Fig.
10.
[start of text] [12 bytes of hex] [new line]
[carriage return] [end of text]
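A minimal Arduino-style sketch for reading one tag frame in this format from the reader's serial line might look as follows; the RX/TX pins and the 9600 baud rate are assumptions.

```cpp
#include <SoftwareSerial.h>

// Reads one tag frame: [start of text] 12 hex characters [new line] [carriage return] [end of text].
SoftwareSerial rfid(8, 9);           // RX, TX (assumed wiring to the 125 kHz reader)

char tag[13];                        // 12 hex characters plus terminating null

void setup() {
  Serial.begin(9600);                // USB line back to the computer
  rfid.begin(9600);                  // serial line from the RFID reader (assumed baud)
}

void loop() {
  if (rfid.available() && rfid.read() == 0x02) {        // wait for [start of text]
    int n = 0;
    while (n < 12) {                                     // collect the 12 hex bytes
      if (rfid.available()) {
        char c = rfid.read();
        if (c == 0x0A || c == 0x0D || c == 0x03) break;  // new line, CR or [end of text]
        tag[n++] = c;
      }
    }
    tag[n] = '\0';
    Serial.println(tag);             // forward the tag value over USB as position feedback
  }
}
```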
The robot is only capable of communicating
through one serial line at a time. In addition, the USB
cable is the communication line for navigational
commands to be sent to the robot and the RFID
numbers to be sent back to the computer. Our program
uses the Windows API to open a COM
(communication) port on the computer to receive the
information from the microprocessor. All sending and
receiving serial pairs must use the same baud rate
throughout the communication process. The serial
stream read by the program is error prone; therefore, it
is necessary to add error correction to the serial
processing. We implemented a basic error correction
into our prototype; however, it is still possible for enough
data to be lost that the only recovery option is an error message.
Fig. 8  RFID reader.
Fig. 9  RFID reader on robot.
Fig. 10  Wiring between RFID reader & Arduino.
Fig. 11  Communication between robot & computer.
Fig. 11 illustrates the communication flow
throughout the program. A communication handshake
is performed at the beginning to verify that the
synchronization has taken place. It lights up an LED
to confirm that the handshake was successful and the
robot is ready to begin communication with the
computer. Then, RFID numbers are sent to the
computer for position feedback and they receive
direction commands in return until the destination is
reached.
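On the computer side, the COM-port exchange can be sketched with the Win32 serial API as below. The port name, baud rate and the simplified single-byte command format are assumptions for illustration.

```cpp
#include <windows.h>
#include <iostream>
#include <string>

// Minimal Win32 sketch of one command/feedback round trip over the COM port.
int main() {
    HANDLE port = CreateFileA("COM3", GENERIC_READ | GENERIC_WRITE,
                              0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (port == INVALID_HANDLE_VALUE) { std::cerr << "cannot open port\n"; return 1; }

    DCB dcb = { 0 };
    dcb.DCBlength = sizeof(dcb);
    GetCommState(port, &dcb);
    dcb.BaudRate = CBR_9600;          // both ends must use the same baud rate
    dcb.ByteSize = 8; dcb.Parity = NOPARITY; dcb.StopBits = ONESTOPBIT;
    SetCommState(port, &dcb);

    char dir = '2';                    // example direction command (0..7 as text)
    DWORD written = 0, read = 0;
    WriteFile(port, &dir, 1, &written, nullptr);     // send the next instruction

    char buffer[16] = { 0 };
    ReadFile(port, buffer, 12, &read, nullptr);      // wait for the 12 hex digits back
    std::string tag(buffer, buffer + read);
    std::cout << "robot reported RFID: " << tag << "\n";

    CloseHandle(port);
    return 0;
}
```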

6. Experimental Results
We tested our prototype of the autonomous
wheelchair system according to two metrics. In order
for the system to be successful, it must be able to
quickly and accurately deliver the patients to their
destinations. Thus, the navigational algorithm must be
able to generate the path in a timely manner, and the
robot must be able to reach the destination accurately
100% of the time.
We performed a worst case scenario test to
determine whether the navigational implementation
was timely. We created a random map with 5,600
nodes. Hospitals can be as large as 500,000 square feet
and can contain up to ten floors, so each floor would be
around 50,000 square feet. Since a single node can
represent a doorway, which is about three feet as a
standard, this would bring a representation
of a hospital floor to a 75-node by 75-node map.
Based on estimations of real world applications, RFID
tags accounted for 24% of those nodes and 25% of
them were designated as walls. The algorithm to
generate the path was timed by subtracting the total
milliseconds at the time that it started calculating the
path from the total milliseconds that had occurred
when the path was done generating. The path was
generated, on average, in 840 milliseconds. The
average human reaction time to change is around 240
milliseconds. So very shortly after the average human
would react to entering their destination, the chair will
already have the path generated.
Since the map generated is randomized, it is
disorganized and contains many more possible paths
than in reality. There are very few intersections in
hospitals where there are eight possible ways to go,
but since the map is randomized this situation shows
up much more often. In a map of a real hospital with
optimized RFID placement, the time to traverse the
map and find the optimal path will be greatly lowered.
Even in the worst case, it still takes less than one
second for the path to be generated and the wheelchair
to start moving. When tested on more suitable rooms
for the prototype, the path finding time was greatly
reduced. For a 7 × 7 room, the path was found in less
than 16 milliseconds on every run.
In addition to speed, accuracy of the path that was
generated and the ability of the robot to follow that
path was measured. Measuring how often the robot
was able to correctly navigate to the destination was
difficult to quantify due to the imprecise nature of the
hardware used. To account for these dependencies in
our testing, we divided the measurement of the
correctness of the navigational algorithm's path
generation from the actual hardware's response when
designing our tests.
We tested our prototype three times on each of
three different maps. The maps were designed to test
the ability of the algorithm to correctly find the most

efficient path in several different situations and the
ability of the RFID reader to provide accurate
positioning feedback. For each of the nine test runs,
the number of correctly read RFID tags was divided
by the total number of tags to be read for the
generated path. These ratios are listed in Table 1.

Table 1  Ratio of correctly read to total RFID tags.
          Test Map 1 (4 rooms)   Test Map 2 (2 rooms)   Test Map 3 (1 room)
Test 1    0.64                   1.00                   1.00
Test 2    1.00                   0.89                   1.00
Test 3    0.86                   0.88                   0.80
The average for each test map was then found. The
average RFID tag reading accuracy for test maps one,
two, and three was 0.800, 0.913, 0.923, respectively.
The robots overall RFID tag reading accuracy was
calculated by taking the average of these three
numbers, yielding an accuracy rate of 87.9%.
The results gathered from our tests so far, support
further investigation into this wheelchair navigation
system. On nine different tests, requiring the
navigational algorithm to calculate different paths, the
average run time was less than 16 ms. The time to
generate a path in a worst case scenario was still less
than a second. In the test cases, the accuracy of the
robot was limited only by the hardware. The robot
successfully navigated to its destination every time,
unless the hardware failed, illustrating the potential of
the system.

7. Conclusion
Navigating hospitals is a tedious and demanding
task for patients and caretakers. With an increasing
number of people aging, there will be a greater
demand for health care resources in the near future. To
alleviate the burden on health care, an autonomous
wheelchair navigation system is very crucial.
We have investigated and developed a prototype for
this system. Our Autonomous Wheelchair Navigation
prototype was a success. It is able to navigate to an
end point following the optimal path. The prototype
was built with cheap, off-the-shelf components. In
addition, it is able to generate an optimal path to its
destination and can autonomously traverse to that
destination. The prototype successfully simulates the
entire system by supplying the platform for an
administrator to generate floor maps and save them to
an online repository, letting a user download those maps to
the prototype, and letting the robot travel through the
map.
The core functionality of the prototype is in place, but
there are improvements to be made in the future.
Bundling floors together in the administrative UI will
reduce load times between multi-floor navigation.
Adding additional sensors to the robot will help with
turning and timing issues along with collision
detection. Our initial prototype simulates an
autonomous wheelchair hospital navigation system
and demonstrates the great potential such a system has
to assist our growing elderly population.

References
[1] Vincent, G. K., and Velkoff, V. A. 2010. The Next Four Decades. The Older Population in the United States: 2010 to 2050. Washington, DC: US Census Bureau, P25-1138. http://www.census.gov/prod/2010pubs/p25-1138.pdf. Accessed August 8, 2016.
[2] Chance of Becoming Disabled - Council for Disability Awareness. http://www.disabilitycanhappen.org/chances_disability/disability_stats.asp.
[3] Into the Unknown | The Economist. http://www.economist.com/node/17492860.
[4] Hanlon, M. 2006. The Autonomous Wheelchair Raises the Promise of Assistive Mobile Robots. Gizmag New & Emerging Tech. News. http://www.gizmag.com/go/6626/.
[5] Pires, G., Honorio, N., Lopes, C., Nunes, U., and Almeida, A. T. 1997. Autonomous Wheelchair for Disabled People. In Proc. of IEEE International Symposium on Industrial Electronics, Guimaraes, 797-801.
[6] Pretz, K. (n.d.). Building Smarter Wheelchairs: Making Life a Little Easier for People Who Can't Walk. The Institute. http://theinstitute.ieee.org/technology-focus/technology-topic/building-smarter-wheelchairs.
[7] Want, R. 2006. An Introduction to RFID Technology. IEEE Pervasive Computing 5 (1): 25-33.

Journal of Communication and Computer 13 (2016) 329-337


doi:10.17265/1548-7709/2016.07.002


Development of a Finger Mounted Type Haptic Device


Using a Plane Approximated to Tangent Plane
Makoto Yoda and Hiroki Imamura
Department of Information System Science, Graduate School of Engineering, Soka University, Tokyo, Japan
Abstract: In recent years, research on haptic devices has been conducted. By using conventional haptic devices, users can
perceive touching an object, such as a CG (computer graphics) object, through force feedback. Since conventional haptic devices provide force
feedback from a single point on an object surface where users touched, users touch an object by point contact. However,
conventional haptic devices cannot provide users with the sense of touching an object with a finger pad, because in reality a finger
pad touches an object not by point contact but by surface contact. In this paper, we propose a novel haptic device. By using
this haptic device, users can perceive the sense of the slope on a CG object surface when they put their fingers on it without tracing.
Moreover, users can perceive the sense of grabbing a CG object with their finger pads. To perceive the sense of the slope on a CG object
and to grab it, we mount the plane interface of the haptic device on each of two finger pads. Each plane interface provides a finger pad
with the sense of the slope approximated to the tangent plane of the area being touched. In the evaluation experiments, the
subjects evaluated this haptic device. From the results, the subjects could perceive the sense of the slope on a CG
object surface. In addition, they could perceive the sense of grabbing a CG object.
Key words: Haptic device, plane interface, surface contact, force feedback.

1. Introduction
In recent years, research on human interfaces using
AR (augmented reality) has been conducted. In order
to touch CG objects that are drawn by AR, haptic
devices have been developed. By using a haptic
device, users can perceive touching CG objects by
processing a force feedback. Therefore, haptic devices
are expected to be used in applications such as a
virtual surgery simulation system and a virtual
experience system.
Examples of conventional haptic devices include
Falcon, PHANToM and Dexmo. Falcon and
PHANToM are classified into a grounded type. This
type can provide users with accurate force feedback
because its fulcrum is fixed on the table. Dexmo is
classified into a finger mounted type. This type can
provide users with perception of grabbing CG objects
easily. In addition, users can operate the device without operating range limitation.
Corresponding author: Hiroki Imamura, Assoc. Prof., research fields: haptic device, image recognition, computer graphics.
These haptic


devices have been developed as a point contact type
haptic device. By using this type of haptic devices,
users can perceive touching CG objects because they
are provided with a force feedback from a single point
on the CG object surface where they touched. In the case
of perceiving an object's shape, users perceive the visible part of a
CG object's shape from visual information. For the invisible part, users must perceive the CG
object's shape by touching the surface. Humans
perceive an object's shape from the direction of a force
feedback. However, in a point contact, as shown in
Fig. 1, it is difficult to perceive an object shape
because the direction of a force feedback changes
according to the direction of a finger. To perceive a CG
object's shape in point contact, users must trace the
surface.
In order to perceive a CG object shape without
tracing the surface, a force feedback must be provided
to the normal direction on a surface where users
touched. To provide a force feedback to the normal direction, users touch an object by surface contact as shown in Fig. 2.
Fig. 1  Characteristic of point contact.
Fig. 2  Characteristic of surface contact.
We focused on this characteristic. In addition, we
have developed an HaAP (haptic device based on an
approximated plane) [1] that is shown in Fig. 3 and
Fig. 4, respectively. HaAP is a grounded type haptic
device. Moreover, it is a surface contact type haptic
device which has a plane interface as shown in Fig. 3.
As shown in Fig. 4, a plane interface provides a
finger pad with a force feedback to normal direction
on an object surface where users touched regardless of
the direction of a finger. To provide a force feedback
to the normal direction, a plane interface is
approximated to tangent plane on a surface where
users touched. Therefore, users can perceive an object
shape with a finger pad without tracing a surface. In
addition, with this device, users are limited to operating
HaAP within the 10 cm range of its vertical mechanism,
as shown in Fig. 3.
In this paper, we propose a novel haptic device.
This haptic device realizes three things. First, users are
provided with a force feedback to normal direction by
the plane interface approximated to the tangent plane

on a CG object surface. Second, users can operate this


haptic device without operating range limitation.
Third, by using this haptic device, users can perceive
grabbing a CG object. To realize that, we develop a
haptic device having characteristics of a finger
mounted type and a surface contact type.

2. Our Approach
Fig. 5 shows the outline of the proposed haptic device in the initial state and in the operating state.
Fig. 3  Appearance of HaAP.
Fig. 4  HaAP in the operating state.
This device uses a plane interface having four
movable points. These four movable points operate up
and down separately. In the initial state, the user is not
touching a CG object. In the operating state, the user
is touching a CG object. The plane interface provides
a finger pad with a force feedback to the normal
direction by four movable points operating up and
down and being approximated to the tangent plane on
the CG object surface where a finger pad touched.

3. The Proposed System


3.1 Hardware Construction
Fig. 8 shows the appearance of the proposed haptic
device. This device is a glove type and composed of


Arduino Uno that is shown in Fig. 6, four servo
motors (GWS Servo PIC+F/BB/F) that is shown in
Fig. 7, four springs, a plane interface and a marker for
each finger. Eight motors are mounted in the back of
the hand. To detect finger position and posture,
markers are mounted in fingertips. In addition, each
motor is connected to Arduino Uno for the index
finger and for the thumb, respectively. Each Arduino
Uno controls four motors in each finger. These motors
pull up the movable points of the plane interface with
wires. Each spring adheres to each movable point as
shown in Fig. 9.
3.2 System Overview
Fig. 10 shows the system overview. This system
consists of PC, Display, Web-camera, Reference
marker and the proposed haptic device.
3.3 The System Flowchart
Fig. 11 shows the system flowchart. In the
following section, we explain each process in the
flowchart.
(1) Initialization of the Haptic Device
Arduino UNO controls the motors so that the springs are at their natural length.
(2) Drawing a CG Object
Based on the reference marker, by using ARToolKit [2], this system draws the Sphere CG object or the Sin-cos curve CG object that is shown in Figs. 12 and 13, respectively.
Fig. 5  Outline of the proposed haptic device.
Fig. 6  Arduino UNO.
Fig. 7  Servo motors (GWS Servo PIC+F/BB/F).
Fig. 8  Appearance of the proposed haptic device.
Fig. 9  The fingertip part of the proposed haptic device.
Fig. 10  System overview.
Fig. 11  Flowchart.
(3) Detecting Finger Position and Posture
Fig. 14 shows detecting the marker on each finger
position and posture.
The system recognizes the reference marker and the
marker on each finger from the image that is captured
by the Web-camera. By using ARToolKit, the system
obtains the position (X, Y, Z) of the marker on each
finger relative to the position (0, 0, 0) of the reference marker.
We defined this position as the finger position. In
addition, by using ARToolKit, the system also
calculates φ_m, which denotes the roll angle, and θ_m, which
denotes the pitch angle, of the marker on each finger.

(4) Judgement of Contact


In case of Sphere CG object, as shown in Fig. 15,
when the length between the finger position and the
center of Sphere CG object is within radius of Sphere
CG object, the system judges contact.
In case of Sin-cos curve CG object, the system
calculates Z-coordinate on Sin-cos curve CG object.
The following is the equation of the Sin-cos curve:
    Z = A (sin X + cos Y)    (1)
where A is the amplitude and X, Y and Z are the X, Y and
Z-coordinate on Sin-cos curve CG object. By
substituting X and Y-coordinate of the finger position
for this equation, the system obtains Z. As shown in
Fig. 16, when Z-coordinate of the finger position is
under Z, the system judges contact.
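A small sketch of this contact judgement is given below. The sphere test compares the finger-to-centre distance with the radius, and the Sin-cos test compares the finger Z-coordinate with the surface Z-coordinate; the surface formula follows the reconstruction of Eq. (1) above and is therefore an assumption, as are the variable names.

```cpp
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

// Sphere contact: the finger is within the radius of the sphere centre.
bool touchesSphere(const Vec3& finger, const Vec3& centre, double radius) {
    double dx = finger.x - centre.x, dy = finger.y - centre.y, dz = finger.z - centre.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) <= radius;
}

// Sin-cos contact: the finger Z is under the surface Z obtained from Eq. (1).
bool touchesSinCos(const Vec3& finger, double amplitude) {
    double surfaceZ = amplitude * (std::sin(finger.x) + std::cos(finger.y));
    return finger.z <= surfaceZ;
}

int main() {
    Vec3 finger { 0.5, 0.2, 0.3 };
    std::cout << touchesSphere(finger, { 0, 0, 0 }, 1.0) << " "
              << touchesSinCos(finger, 1.0) << "\n";
    return 0;
}
```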
(5) Calculation of the Motor Rotation Angle
The system calculates the normal vector on
touching point to calculate the tangent plane slope.
In case of Sphere CG object, the vector from the
center of Sphere CG object to the touching point is
defined as the normal vector on the touching point.
Fig. 12  Sphere CG object.
Fig. 13  Sin-cos curve CG object.
Fig. 14  Detecting the finger position and posture.
Fig. 15  Touching Sphere CG object.
Fig. 16  Touching Sin-cos curve CG object.


In case of Sin-cos curve CG object, Sin-cos curve
CG object is composed of many planes. Fig. 17 shows
calculation of the normal vector on the touching point.
The system uses the vector product equation
    N = A × B    (2)
where A and B are the vectors from the touching point
to other points on the plane that includes the contact
point. The system obtains the normal vector from right
hand screw rule.
From the normal vector
    N = (N_x, N_y, N_z)    (3)
the system calculates
    θ_x = cos⁻¹( N_x / √(N_x² + N_z²) )    (4)
which denotes the angle between the x-axis and the normal vector in the X-Z plane, and
    θ_y = cos⁻¹( N_y / √(N_y² + N_z²) )    (5)
which denotes the angle between the y-axis and the normal vector in the Y-Z plane, as shown in Fig. 18.
Fig. 17  Calculation of the normal vector in case of Sin-cos curve CG object.
Fig. 19 shows the calculation of the tangent plane
slope on touching point in X-Z plane and Y-Z plane.
Since the normal vector is perpendicular to the tangent plane,
the system uses
    θ_t = 90° − θ_x    (6)
    φ_t = 90° − θ_y    (7)
to obtain the pitch and roll angles of the tangent plane. Using Eqs. (6) and (7), the system obtains θ_t, which denotes the pitch angle of the tangent plane, and φ_t, which denotes the roll angle of the tangent plane.
Using Eqs. (8) and (9), the system calculates the difference between the angle of the marker on the finger and that of the tangent plane:
    Δθ = θ_t − θ_m    (8)
    Δφ = φ_t − φ_m    (9)
This difference is the plane interface slope that is shown in Figs. 20 and 21. θ_m and φ_m are shown in Fig. 14. Fig. 20 shows Δθ, which denotes the difference between θ_t and θ_m. Fig. 21 shows Δφ, which denotes the difference between φ_t and φ_m.
Fig. 18  Calculation of θ_x and θ_y.
Fig. 19  Calculation of tangent plane slope.
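The chain from the vector product to the plane interface slope can be sketched as follows. The symbols follow the reconstructed Eqs. (2)-(9) above, and the example vectors and marker angles are invented for illustration.

```cpp
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

// Eq. (2): normal on the touching point from two in-plane vectors (right-hand rule).
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

int main() {
    const double PI = 3.14159265358979323846;
    auto deg = [PI](double rad) { return rad * 180.0 / PI; };

    Vec3 A { 1, 0, 0.2 }, B { 0, 1, 0.1 };        // example vectors inside the touched plane
    Vec3 N = cross(A, B);                          // Eq. (2)/(3): normal vector components

    double thetaX = deg(std::acos(N.x / std::sqrt(N.x * N.x + N.z * N.z)));  // Eq. (4)
    double thetaY = deg(std::acos(N.y / std::sqrt(N.y * N.y + N.z * N.z)));  // Eq. (5)

    double pitchTangent = 90.0 - thetaX;           // Eq. (6)
    double rollTangent  = 90.0 - thetaY;           // Eq. (7)

    double pitchMarker = 5.0, rollMarker = -3.0;   // example marker angles from ARToolKit
    double dPitch = pitchTangent - pitchMarker;    // Eq. (8): plane interface pitch slope
    double dRoll  = rollTangent  - rollMarker;     // Eq. (9): plane interface roll slope

    std::cout << dPitch << " " << dRoll << "\n";
    return 0;
}
```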
The system calculates the operation length of
movable point by using the plane interface slope.
Fig. 22 shows the state transition of a plane interface.
A plane interface operates as shown in this figure.
Fig. 23 shows two lengths (L1 and L2). These two lengths are defined
    L1 = (A/2) sin Δθ    (10)
    L2 = (A/2) (sin Δθ + sin Δφ)    (11)
as the operation lengths of the movable points, where A is the length of one side of the plane interface. Using Eqs. (10) and (11), the system calculates these lengths.
Fig. 20  Calculation of the difference of the pitch angle.
Fig. 21  Calculation of the difference of the roll angle.
Fig. 22  The state transition of a plane interface.
Fig. 23  Calculation of the operation length of movable points.
Fig. 24 shows the angle of motor rotation. The
system uses Eqs. (12) and (13)
    A1 = 2 sin⁻¹( L1 / (2R) )    (12)
    A2 = 2 sin⁻¹( L2 / (2R) )    (13)
to calculate the angle of motor rotation, where R denotes the length of the servo horn. A1 and A2 are the amounts by which the motors are controlled. The system sends these amounts to each Arduino Uno by serial communication.
Fig. 24  Calculation of the motor rotation angle.
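A minimal sketch of Eqs. (12) and (13) on the PC side is given below. The numeric values of R, L1 and L2 are examples only; in the system, L1 and L2 would come from Eqs. (10) and (11), and the resulting angles would then be sent to the Arduino Uno over the serial link.

```cpp
#include <cmath>
#include <iostream>

// Converts the pull lengths of the movable points into servo rotation angles
// using the chord relation for a horn of length R (Eqs. (12) and (13)).
int main() {
    const double PI = 3.14159265358979323846;
    double R  = 1.5;     // length of the servo horn (example value, cm)
    double L1 = 0.4;     // operation length of the first movable point
    double L2 = 0.9;     // operation length of the second movable point

    double A1 = 2.0 * std::asin(L1 / (2.0 * R)) * 180.0 / PI;   // Eq. (12), degrees
    double A2 = 2.0 * std::asin(L2 / (2.0 * R)) * 180.0 / PI;   // Eq. (13), degrees

    std::cout << "A1 = " << A1 << " deg, A2 = " << A2 << " deg\n";
    return 0;
}
```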
(6) Controlling the Motors
Arduino Uno receives the control amounts and drives the motors. The motors pull up each movable point with wires. Figs. 25 and 26 show providing a finger with a force feedback when users touched the Sphere CG object in the Y-Z plane and X-Z plane, respectively. Figs. 27 and 28 show providing a finger with a force feedback when users touched the Sin-cos curve CG object in the Y-Z plane and X-Z plane, respectively.
Fig. 25  Touching sphere CG object in Y-Z plane.
Fig. 26  Touching sphere CG object in X-Z plane.
Fig. 27  Touching Sin-cos curve CG object in Y-Z plane.
Fig. 28  Touching Sin-cos curve CG object in X-Z plane.
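On the Arduino side, step (6) could be sketched as follows. The servo pins, the two-servo simplification and the message format (two integer angles per command) are assumptions rather than the authors' implementation.

```cpp
#include <Servo.h>

// Receives the rotation amounts computed from Eqs. (12)-(13) over the serial
// link and drives the servos that pull the wires of the movable points.
Servo servo1, servo2;

void setup() {
  Serial.begin(9600);
  servo1.attach(3);        // assumed servo pins
  servo2.attach(5);
}

void loop() {
  if (Serial.available()) {
    int a1 = Serial.parseInt();            // rotation amount for the first movable point
    int a2 = Serial.parseInt();            // rotation amount for the second movable point
    servo1.write(constrain(a1, 0, 180));   // pull the wire, lifting the point
    servo2.write(constrain(a2, 0, 180));
  }
}
```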

4. Evaluation Experiments

4.1 Overview of Experiments


We conducted evaluation experiments for the proposed
haptic device. The Sin-cos curve CG object is used in
order to evaluate whether users can perceive the sense
of the slope on the CG object surface or not. In
addition, the Sphere CG object is used in order to
evaluate whether users can perceive grabbing a CG
object or not. Eleven subjects used the proposed
haptic device. After that, they evaluated the following
items with a 5-grade score:
In case of Sin-cos curve CG object,
When you touched the CG object with one finger,
you perceived the sense of the slope on the CG object
surface at once (Item 1).
When you touched and traced the surface of CG
object with one finger, you perceived the sense of the
asperity on the CG object surface (Item 2).
In case of Sphere CG object,
When you touched the CG object with one finger,
you perceived the sense of the slope on a spherical
surface (Item 1).
When you touched and traced the surface of the
CG object with one finger, you perceived the shape of
a spherical surface (Item 2).


When you touched the CG object with two fingers, you perceived the sense of touching the CG object (Item 3).
When you touched the CG object with two fingers, you perceived the sense of grabbing the CG object (Item 4).
Evaluation values are from 1 to 5 (1: Strongly disagree, 2: Disagree, 3: Neutral, 4: Agree, 5: Strongly agree).

Table 1  Results of Sin-cos curve CG object.
Item    Average score    Standard deviation
1       4.64             0.64
2       4.55             0.50

Table 2  Results of Sphere CG object.
Item    Average score    Standard deviation
1       4.09             0.90
2       4.00             0.85
3       4.18             0.94
4       4.27             0.96
4.2 Discussion
Table 1 shows the results of Sin-cos curve CG
object. Table 2 shows results of Sphere CG object.
Each result shows the average score and the standard
deviation.
From these results, in Item 1 of Tables 1 and 2, we
see that users perceived the sense of the slope on CG
object surface without tracing the surface. In Item 2 of
Tables 1 and 2, the results show that users perceived
the asperity by tracing the surface. In addition, in
Items 3 and 4 of Table 2, we see that users perceived
grabbing the CG object. Therefore, we consider that
the proposed haptic device can provide users with
perception of CG object surface shape by providing
the sense of the slope and the asperity of CG object

337

surface. Moreover, we consider that users can


perceive grabbing a CG object by using the proposed
haptic device. In addition, from Tables 1 and 2, we see
that the average score in Table 1 is higher than that in
Table 2. Since the surface of Sin-cos curve CG object
is more complex than that of Sphere CG object, the
accuracy of the proposed haptic device is improved
when a CG object has complex surface.

5. Conclusion and Future Works


In this paper, we proposed a novel haptic device.
The proposed haptic device has a plane interface. The
plane interface provides a finger with a force feedback
to normal direction by being approximated to tangent
plane on a CG object surface where users touched. In
addition, to perceive grabbing a CG object, the
proposed haptic device is designed as a finger
mounted type. After evaluation experiments, we see
that the proposed haptic device can provide users with
perception of the CG object surface shape and
perception of grabbing the CG object. However, we
consider that users cannot grasp the sense of the
distance between a finger and a CG object easily. To
solve this issue, we will improve the proposed haptic
device so that users can grasp the sense of the distance more easily
by using an HMD (head mounted display).
we will improve the operability of the proposed haptic
device by lightening the device. In addition, by using
Leap Motion, we will improve the accuracy of
detecting finger position and posture.

References
[1] Kawazoe, A., Ikeshiro, K., and Imamura, H. 2013. A Haptic Device Based on an Approximate Plane HaAP. Presented at the ACM SIGGRAPH Asia 2013 Posters, Hong Kong, CD-ROM.
[2] Kato, H. 2002. ARToolKit: Library for Vision-based Augmented Reality. Technical Report of IEICE, PRMU.

Journal of Communication and Computer 13 (2016) 338-350


doi:10.17265/1548-7709/2016.07.003


Learning in a Smart City Environment


R. Nikolov1, E.Shoikova1, M. Krumova2, E. Kovatcheva1, V. Dimitrov1 and A.Shikalanov1
1. State University of Library Studies and Information Technologies
2. Technical University Sofia
Abstract: Advances in technology in recent years have changed the learning behaviors of learners and reshaped teaching methods
and learning environments. The purpose of this paper is to overview a foundational framework and provide models for the planning
and implementation of smart learning environments. Introduction is focused on analysis of emerging industries and new types of jobs
that are requiring future personnel to be well equipped to meet the need of the expansion requirements of these industries and keep up
with their development needs. Gartner's 2015 Hype Cycle for Emerging Technologies identifies computing innovations such as
Internet of Things, Advanced Analytics, Machine Learning, Wearables etc. that organizations should monitor. Learners and students,
being the future drivers of these industries, are the main human resource to fulfill the vacancies of these work forces. Constant
improvements in and re-evaluation of the curriculum taught to the learners have to be done regularly to keep the learners up-to-date
in fulfilling the requirements of these industries and corporations. Universities benefit from these "thinking out of the box" practices
by equipping students with workforce experience that involves more hands-on tasks with real-life infrastructures. Section 2 looks at
future Internet domain landscape that comprises a great diversity of technology related topics involved in the implementation of
Smart Learning Environments. The purpose of section 3 is to overview a foundational framework and major considerations for the
planning and implementation of smart learning environments, behind which is the convergence of advances and developments in
social constructivism, psychology, and technology. Section 4 introduces the smart learning models, which are developed to reflect
the dynamic knowledge conversion processes in technology enabled smart learning environments. The last section presents a case
study of a learning scenario entitled Monitoring the environmental parameters in a Smart City as an illustration of experimental
learning on the Internet of Things, which proves the power of the FORGE (forging online education through FIRE) FP7 project methodology
and infrastructure for building remote labs and delivering them to students.
Key words: Smart city, smart learning environment, full context awareness, big data and learning analytics, autonomous
decision-making, SECI, learning scenario, forging online education through FIRE.

1. Introduction
New forms of industries and new types of jobs are
emerging, requiring future personnel to be well
equipped to meet the need of the expansion
requirements of these industries and keep up with
their development needs. Gartner's 2015 Hype Cycle
for Emerging Technologies identifies the computing
innovations that organizations should monitor
(see Fig. 1). Learners and students, being the future
drivers of these industries, are the main human
resource to fulfil the vacancies of these work forces.
Constant improvements in and re-evaluation of the
curriculum taught to the learners have to be done regularly to keep the learners up-to-date in fulfilling the requirements of these industries and corporations.
Corresponding author: Roumen Nikolov, professor, Ph.D., research fields: computer science, software engineering, internet of things, big data, smart city, e-learning.
Universities benefit from these "thinking out of the
box" practices by equipping students with workforce
experience that involves more hands-on tasks with real-life
infrastructures.
Today, as education systems are currently
undergoing significant change brought about by
emerging reform in pedagogy and technology, our
efforts have sought to close the gap between
technologies as educational additive to effective
integration as a means to promote and cultivate
student centred, inquiry based and project based
learning. Moving forward, many of the advances in
education will be brought about by further integration of
personalised learning into the smart learning environment.

Fig. 1  Gartner's 2015 hype cycle for emerging technologies.

Trends such as ubiquitous access to technology through
continuously shifting mobile devices and mobile
platforms, cloud-based services, big data, and dispersed
learning environments will further emphasise the
affordances of learning technologies. These changes
are also being impacted by broader trends including
population shifts, economics, employment, and other
societal shifts.

2. Smart City Concept


This section covers Smart City definitions as well as
some of those technology trends that are most
connected to the development of Smart Cities and
Smart Learning Environments.
2.1 Definitions
There are many definitions of Smart Cities in use
globally [1]. Smart City is a new concept and a new
model, which applies the new generation of
information technologies, such as the Internet of Things,
cloud computing, big data and geospatial data
integration, to facilitate the planning, construction, and
management of smart services. Developing Smart
Cities can benefit the synchronised development of
industrialisation, informatisation, urbanisation and
agricultural modernisation, as well as the sustainability
of city development. Smart City is a term denoting the
effective integration of physical, digital and human
systems in the built environment to deliver a
sustainable, prosperous and inclusive future for its
citizens. The smartness of a city describes its ability
to bring together all its resources, to effectively and
seamlessly achieve the goals and fulfil the purposes it
has set itself. A smart city can be viewed as a
combination of four Internets or networks: Internet of
Things, Internet of People, Internet of Data and Internet
of Services. The emphasis is put on the system
integration and synergistic characteristics of a smart city
(Fig. 2). Such a view illustrates succinctly the "glue",
or system integration, property that ICT provides in
smart cities.
An enterprise architecture view emphasises the
domain and outcome perspective, and presents how
ICT in a Smart City can create value by breaking
silos. In such a view we can place the education
systems in a Smart City context (Fig. 3).

Fig. 2  The smart city as a set of Internets.

Fig. 3  An enterprise architecture view of a smart city.
2.2 Technology Trends
The future Internet domain landscape comprises a
great diversity of technology-related topics involved in
the implementation of Smart Cities and smart learning
environments.
Ubiquitous computing is a concept in software
engineering and computer science where computing is
made to appear everywhere and anywhere. In contrast
to desktop computing, ubiquitous computing can
occur using any device, in any location, and in any
format. A user interacts with the computer, which can

Learning in a Smart City Environment

exist in many different forms, including laptop


computers, tablets and terminals in everyday objects.
Networking technologies, which bring higher
broadband capacity with FTTH, 4G LTE and IMS (IP
multimedia systems), provide the infrastructure of
Smart Cities so that all devices, computers and people
have convenient, reliable and secure communication
paths with each other. Ubiquitous computing is also
described as pervasive computing, ambient
intelligence, or "everyware".
Open Data in the context of Smart Cities generally
refers to a public policy that requires public sector
agencies and their contractors to release key sets of
government data to the public for any use, or re-use, in
an easily accessible manner. In many cases, this policy
encourages this data to be freely available and
distributable.
Big data is a blanket term for any collection of data
sets so large, complex and rapidly changing that it
becomes difficult to process using traditional database
management tools or traditional data processing
applications. A Smart City, as a system of systems,
can potentially generate vast amounts of data. The
challenges include capture, curation, storage, search,
sharing, transfer, analysis and visualisation.
A GIS (geographic information system) in Smart
Cities is used to provide location based services. The
implementation of a GIS in a Smart City is often driven
by jurisdictional, purpose, or application
requirements. GIS and location intelligence
applications can be the foundation for many
location-enabled services that rely on analysis,
visualization and dissemination of results for
collaborative decision making.
Cloud computing (public, private or hybrid) is the
delivery of computing as a service rather than a
product, whereby shared resources, software, and
information are provided to computers and other
devices as a utility over the Internet.
SOA (service-oriented architecture) is a software
design and software architecture design pattern based
on distinct pieces of software providing application
functionality as services to other applications. SOA
can leverage a world of multiple vendors that build
systems which create interoperability and use each
other's capabilities.
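To make the idea of exposing a distinct piece of software as a service concrete, the following minimal sketch (our own illustration, not taken from any Smart City platform; the endpoint path, port and payload fields are assumptions) publishes one small piece of functionality over HTTP so that applications from other vendors can consume it:

# Minimal SOA-style sketch: one small service exposing its functionality
# over HTTP. Endpoint, port and payload fields are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AirQualityService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/air-quality/latest":
            # In a real deployment this would query a sensor database.
            payload = {"co_mg_m3": 0.4, "no2_ug_m3": 21.0,
                       "timestamp": "2016-07-01T10:00:00Z"}
            body = json.dumps(payload).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Any HTTP-capable client, regardless of vendor, can consume the service.
    HTTPServer(("0.0.0.0", 8080), AirQualityService).serve_forever()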
The E-Government essentially refers to the
utilisation of IT, ICTs, and other web-based
telecommunication technologies to improve and/or
enhance the efficiency and effectiveness of service
delivery.
Embedding networks of sensors and devices into the
physical space of cities is expected to advance further
the capabilities created by Web 2.0 applications, social
media and crowdsourcing. A real-time spatial
intelligence is emerging, having a direct impact on the
services cities offer to their citizens. Collective
intelligence and social media have been a major driver
of the spatial intelligence of cities. Social media have
offered the technology layer for organising collective
intelligence with crowdsourcing platforms, mashups,
web collaboration, and other means of collaborative
problem-solving. Smart Cities with instrumentation
and interconnection of mobile devices and sensors can
collect and analyse data and improve the ability to
forecast and manage urban flows, thus pushing city
intelligence forward.
The IoT (internet of things) refers to the
interconnection of uniquely identifiable embedded
computing devices within the existing Internet
infrastructure. Typically, IoT is expected to offer
advanced connectivity of devices, systems, and
services that goes beyond M2M (machine-to-machine)
communications and covers a variety of protocols,
domains, and applications. The Internet of Things,
including sensor networks and RFID, is an important
emerging strand. These technologies overcome the
fragmented market and isolated "island" solutions of
Smart City applications and provide generic solutions
to all cities. A new round of applications, such as
location-aware applications, speech recognition,
Internet micro-payment systems, and mobile
application stores, which are close to mainstream
market adoption, may offer a wide range of services
on embedded systems in the physical space of cities.
Augmented reality is also a hot topic in the sphere of
mobile devices and smart phones, enabling a next
generation of location-aware applications and services.
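As a simple illustration of the device-to-platform connectivity described above, the sketch below (our own; the platform endpoint and payload fields are hypothetical) shows a uniquely identifiable device periodically publishing a reading over plain HTTP, a deliberately simplified stand-in for richer M2M protocols:

# Illustrative sketch only: a uniquely identifiable IoT device periodically
# publishing a sensor reading to a hypothetical city platform endpoint.
import json
import time
import uuid
import random
import urllib.request

DEVICE_ID = str(uuid.uuid4())                          # unique device identity
PLATFORM_URL = "http://city-platform.example/ingest"   # hypothetical endpoint

def read_temperature_c():
    # Placeholder for a real sensor driver.
    return round(20.0 + random.uniform(-2.0, 2.0), 2)

def publish(reading):
    data = json.dumps(reading).encode("utf-8")
    req = urllib.request.Request(PLATFORM_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    while True:
        publish({"device": DEVICE_ID,
                 "temperature_c": read_temperature_c(),
                 "ts": time.time()})
        time.sleep(60)  # one reading per minute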

3. Conceptualising the Emerging Field of Smart Learning Environments
Given the power and potential of new and emerging
technologies, it is time to conceptualise how learning
environments can be made smarter (i.e., more effective,
efficient and engaging) on a large and sustainable scale.
The purpose of this section is to overview a
foundational framework for the planning and
implementation of smart learning environments,
behind which is the convergence of advances and
developments in epistemology, psychology, and
technology. A few definitions are needed to motivate
the discussion [2-4]. Recently, smart learning has been
defined and studied in diverse ways.
3.1 Smart Learning Environment Definitions
The International Association for Smart Learning
Environments embraces a broad interpretation of what
constitutes a smart learning environment. A learning
environment can be considered smart when it makes
use of adaptive technologies or when it is designed to
include innovative features and capabilities that
improve understanding and performance. In a general
sense, a smart learning environment is one that is
effective, efficient and engaging. According to
Spector [5], what is likely to make a learning
environment effective, efficient and engaging for a
wide variety of learners is one that can adapt to the
learner and personalise instruction and learning support.
This suggests that appropriate adaptation is a hallmark
of smart behaviour. The adjective "smart" is used in
everyday language to refer to an action or decision that
involved careful planning, cleverness, innovation,
and/or a desirable outcome [6]. Learning generally


involves a stable and persisting change in what a
person or group of people know and can do. Intentional
learning can occur in a formal context as well as in
informal contexts. The notion of an environment
suggests a place or surroundings in which something
occurs. Whether physical or virtual, an environment
can be conducive to or inhibitive of learning. A smart
learning environment, in keeping with the emphasis on
efficacy, is one that is generally conducive to and
supportive of learning. It is emphasised that smart
learning is different from e-learning using smart
devices [7], and smart learning is defined as a smart
device-based, intelligent, customised learning service
[8]. Broadly defined, smart learning
environments represent a new wave of educational
systems, involving an effective and efficient interplay
of pedagogy, technology and their fusion towards the
betterment of learning processes. Various components
of this interplay include but are not limited to: (1)
Pedagogy/didactics: learning design, learning
paradigms, teaching paradigms, environmental factors,
assessment paradigms, social factors, policy; (2)
Technology: emerging technologies, innovative uses of
mature technologies, interactions, adoption, usability,
standards, and emerging/new technological paradigms
(open educational resources, learning analytics, cloud
computing, smart classrooms, etc.); (3) Fusion of
pedagogy/didactics and technology: transformation of
curriculum, transformation of teaching behaviour,
transformation of learning, transformation of
administration, transformation of schooling, best
practices of infusion, piloting of new ideas. A learning
environment can be considered smart when the learner
is supported through the use of adaptive and innovative
technologies from childhood all the way through
formal education, and continuing during work and adult
life, where non-formal and informal learning
approaches become the primary means for learning.
Smart learning environments are neither pure
technology-based systems nor a particular pedagogical
approach. They encompass various contexts, in which
students (and perhaps teachers) move from one context
to another. So, they are perhaps an overarching concept
for future academia. This perspective has the potential
to overcome some of the traditions of institution-based
instruction and move towards lifelong learning.
3.2 Considerations for the Development of Smart Learning Environments
There are several major features of the development
of smart learning environments that separate smart
learning environments from other advances in learning
technologies. These are full context awareness,
stacking vs. replacing the LMS, big data and learning
analytics, and autonomous decision making.
Full context awareness. Boulanger et al. [9] indicated
that smart learning environments involve context
awareness that can combine a physical classroom with
many virtual learning environments. This could
provide full context awareness by combining smart
learning environments with holistic Internet of Things
and ubiquitous sensing devices, e.g., wearable
technologies such as smart watches, brainwave
detection, and emotion recognition [10]. Full context
awareness enables smart learning environments to
provide learners with authentic learning contexts and
seamless learning experiences to fuse a variety of
features in the e-learning environments. The system
includes learning management systems, mobile and
ubiquitous learning systems, and various artificial
intelligence based adaptive and intelligent
tutoring/learning systems. These systems would assist
teachers and instructors in directly monitoring the
learning environment, understanding learners'
conditions and giving learners real-time adaptive
assistance, while at the same time facilitating
independent learning for the learners [11].
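As a toy illustration of how such context signals might drive real-time adaptation, the sketch below (our own simplification; the signals, thresholds and rules are illustrative assumptions, not a published algorithm) fuses a few physical-context readings with a learner profile to pick the next support action:

# Deliberately simple sketch: fusing physical-context signals with a learner
# profile to choose a real-time adaptation. Thresholds are arbitrary.
from dataclasses import dataclass

@dataclass
class Context:
    location: str         # e.g. "classroom", "field_trip", "home"
    ambient_noise_db: float
    heart_rate_bpm: int   # e.g. from a smart watch

@dataclass
class LearnerProfile:
    preferred_modality: str   # "video", "text", "interactive"
    current_topic: str

def adapt(context: Context, profile: LearnerProfile) -> str:
    if context.ambient_noise_db > 70:
        return f"Switch to captioned text materials on {profile.current_topic}"
    if context.heart_rate_bpm > 110:
        return "Suggest a short break before the next activity"
    if context.location == "field_trip":
        return f"Push a location-based mobile quiz on {profile.current_topic}"
    return f"Continue with {profile.preferred_modality} materials on {profile.current_topic}"

print(adapt(Context("field_trip", 55.0, 82),
            LearnerProfile("interactive", "urban sensing")))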
Stacking vs. Replacing the LMS. While many
organizations have grown beyond the current
capacities of their Learning Management Systems,
there are significantly fewer organizations choosing to

343

make the major capital and implementation


investment of replacing their entire enterprise learning
technology. Rather, we are seeing more "Stacking",
which means accepting the role of the existing LMS
as the base system for the organization and then
adding "Stacks" or "Layers" on top that will create
added and more targeted functionality. Some of the Learning
Stacks include: Competency or Talent Management
Layers; Assessment or Feedback Layers; Compliance
or Regulatory Layers; Career Development Layers;
Collaboration and Social Networking Layers,
Gamification or Engagement Layers, Globalisation
Layers. In other words, some organizations are
shifting from replacing their LMS to adding these
technologies on top of the LMS. It might be called an
"LMS Inside" approach: the layers act as extensions of
the LMS, using its core code for transaction tracking
and shared data exchange, but the functionality is
found in the layer.
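A minimal sketch of such a layer is given below (entirely illustrative: the LMS base URL, endpoints and field names are hypothetical, and a real LMS would expose its own API). It shows a small gamification layer that only reads shared data from the LMS and keeps its added functionality outside the core system:

# Hedged "stacking" sketch: a gamification layer on top of an existing LMS,
# reading completion events through a hypothetical REST API and awarding badges.
import json
import urllib.request

LMS_API = "http://lms.example/api"  # hypothetical LMS endpoint

def get_json(path):
    with urllib.request.urlopen(LMS_API + path, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

def award_badges(course_id):
    points = {}
    for event in get_json(f"/courses/{course_id}/completions"):
        learner = event["learner_id"]
        points[learner] = points.get(learner, 0) + 1  # one point per activity
    # The layer keeps its own logic; the LMS is only used for shared data.
    return {learner: "gold" if p >= 10 else "bronze"
            for learner, p in points.items()}

if __name__ == "__main__":
    print(award_badges("smart-city-101"))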
Big data and learning analytics. Smart learning
environments need to consider advanced data
manipulation techniques such as employing big data
and learning analytics to collect, combine and analyse
individual learning profiles in order to scientifically
generalise and infer each individual's learning needs in
real time in ubiquitous settings that encompass both
physical and online activities. Learning analytics by
using big data can monitor individual learners'
progress and behaviour continuously in order to
explore factors that may influence learning efficiency
and effectiveness [12, 13].
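As a small, purely illustrative example of the kind of analysis meant here (the event-log format is our own assumption), the sketch below combines logged interaction events into per-learner profiles and flags learners whose activity drops sharply:

# Minimal learning-analytics sketch over a toy event log.
import pandas as pd

# Each row is one logged interaction from a physical or online activity.
events = pd.DataFrame([
    {"learner": "s1", "week": 1, "minutes": 120, "tasks_done": 5},
    {"learner": "s1", "week": 2, "minutes": 30,  "tasks_done": 1},
    {"learner": "s2", "week": 1, "minutes": 90,  "tasks_done": 4},
    {"learner": "s2", "week": 2, "minutes": 95,  "tasks_done": 4},
])

profile = (events.groupby("learner")
                 .agg(total_minutes=("minutes", "sum"),
                      mean_tasks=("tasks_done", "mean"))
                 .reset_index())

# Flag learners whose week-over-week activity fell by more than half.
pivot = events.pivot(index="learner", columns="week", values="minutes")
profile["at_risk"] = (pivot[2] < 0.5 * pivot[1]).values

print(profile)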
Autonomous Decision Making and Dynamic
Adaptive Learning. Another important feature of smart
learning environments, which is different from other
learning environments, is their autonomous knowledge
management capability that enables them to
automatically collect individual learners' lifelong learning
profiles. As Kay mentioned [14], smart learning
environments can precisely and autonomously analyse
learners' learning behaviours in order to decide in real
time, for example, what interactions with the physical
environment to recommend to the individual learners
to undertake in various learning activities, the best
location for those activities, which problems the
learners should solve at any given moment, which
online and physical learning objects are the most
appropriate, which tasks are best aligned with the
individual learner's cognitive and meta-cognitive
abilities, and what group composition will be the most
effective for each group member's learning process.
Such autonomous decision-making and dynamic
adaptivity has the potential to generalise and infer
learners' learning needs in order to provide them with
suitable learning conditions. It is a challenge for smart
learning environments to collect these data about the
learners and their environment from disparate sources
in both physical and online components of the
ubiquitous settings.
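The sketch below gives one deliberately simple, rule-based reading of such autonomous recommendation (our illustration, not the mechanism of ref. [14]; the scoring rules, profile fields and candidate activities are assumptions): it scores candidate activities against the learner's knowledge gaps and the current context and picks the best one:

# Sketch of autonomous, real-time activity recommendation (illustrative only).
def recommend(profile, context, activities):
    """profile: dict with 'mastery' (0-1) per topic; context: dict with
    'location' and 'group_size'; activities: list of candidate dicts."""
    def score(activity):
        need = 1.0 - profile["mastery"].get(activity["topic"], 0.0)  # knowledge gap
        fit = 1.0 if activity["location"] == context["location"] else 0.5
        social = 1.0 if activity["min_group"] <= context["group_size"] else 0.0
        return need * fit * social

    return max(activities, key=score)

learner = {"mastery": {"sensing": 0.8, "data_analysis": 0.3}}
context = {"location": "lab", "group_size": 3}
candidates = [
    {"name": "Read LPWAN overview", "topic": "sensing",
     "location": "home", "min_group": 1},
    {"name": "Analyse noise dataset", "topic": "data_analysis",
     "location": "lab", "min_group": 2},
]
print(recommend(learner, context, candidates)["name"])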
3.3 Smart Learning Environments Foundation Areas
According to Spector [5], social constructivism,
psychology and technology are the foundation areas
that provide meaningful and convergent input for the
design, development and deployment of smart learning
environments. At a high level, social constructivism
provides a coherent approach to many human
activities, including learning design and technology
research and practice. It consists of two primary tenets
that describe how humans develop knowledge and
expertise. The first tenet involves the creation of
mental models when encountering new or unusual or
otherwise unexplained experiences. Simply stated, this
is the notion that people create internal representations
to make sense of their experience. This perspective
puts the individual at the centre of knowledge and
skill development, and it implies that individuals may
develop knowledge and skills differently. The second
tenet from a philosophical perspective involves the
role of language as a critical mediator in learning and
knowledge development. The underlying idea is that
interaction with others, especially in the form of
discourse, contributes to how knowledge is developed.

Taken together, these two tenets provide a general


description of how people come to know and
understand their worlds, namely by a process of
creating internal mental representations and then
sharing ideas formed on the basis of those
representations with others through appropriate
languages and media. Because social constructivism
provides a coherent philosophical foundation for
learning and instruction, it should be recognised as a
pillar of any smart learning environment.
The role of psychology in learning and instruction is
well recognised. The two main streams of educational
psychology have been behaviourism and cognitivism.
Behaviourism emphasises things that can be observed
and measured as a way to understand and predict
human behaviour. The emphasis on outcomes is a
valuable contribution of behaviourism that is worth
retaining in understanding smart learning
environments. Cognitivism emphasises the need to
understand mental processes that underlie and can
explain many human behaviours. Social psychology
emphasises the effects and impact of others on how
people think and behave. There is a strong parallel
between social psychology and its relationship to
behavioural and cognitive psychology. People do not
live and learn in isolation from others. A smart learning
environment will take this fact into account explicitly
and in meaningful ways.
Learning technology regularly undergoes changes in
response to changes in mainstream models of human
cognition and learning. Modern information and
communications technologies are expected to support
and strengthen the processes of creating, transforming
and sharing knowledge in a smart environment. This
often requires an innovative use of a technology in an
engaging and flexible manner. Web 2.0 and Social 3.0
have captured the interest and the imagination of
students, educators and researchers.
Web 2.0 presents opportunities for teachers to build
higher levels of engagement in the classroom. Giving
students the ability to think critically about transferring
skills and knowledge to new creations, teachers use
Web 2.0 to encourage students to view themselves as
active agents in the transfer process. The impact of
Web 2.0 has instrumentally changed the way students
learn and, in turn, the way teachers must teach. The
number of Web 2.0 tools continues to grow while
utilisation of these tools supports constructivist
pedagogy. The interactive nature of these technologies
lends itself well to collaborative learning, which
motivates students, creates a safer learning
environment, and enhances knowledge and skills.
Users become creators, collaborators and actively
engaged with Web 2.0 tools.
As broadband penetration increases, people become
more empowered to connect with each other on their
own terms. Social media, as we know it, simply
describes the nature of sharing online. From Facebook
and Twitter to Snapchat and Whatsapp, the apps for
online sharing vary as much as the Web 2.0 products.
While there is power in collaboration, concern exists
too. The impact of these technologies upon culture,
education, and knowledge is clear. According to Norris
and Soloway [15], Social 3.0 is defined as: two or more
individuals verbally conversing, while those two or
more individuals are engaged in doing something
inside an app or on a Web page, and while those two or
more individuals are either co-located or, more
interestingly, not co-located. Thinking of tools in terms
of students' level of knowledge creation, the hierarchy
of the revised Bloom's Taxonomy (Fig. 4) will enhance
appropriate understanding.

4. Smart Learning Design Model


There is no single or simple way to characterise
knowledge development. People create internal
representations and then talk about those
representations with others along various paths to
understanding. People are smart in different ways, at
different times, and in different circumstances. In
response to the uncertainty regarding opportunities
and challenges facing education systems in
transforming the classroom into effective teaching and
learning environments, we have explored innovative
uses of technology that support new ways to explore,
learn, and share knowledge in technology enabled
smart learning environments. Transforming the process

Fig. 4  The hierarchy of the revised Bloom's Taxonomy supported by technologies.

of teaching and learning means that teachers create


fundamentally different learning environments and
promote interactivity, socialisation, externalisation,
combination and internalisation, thus creating knowledge
as stated by Nonaka and Takeuchi [16]. From this point
of view, learning design and the development of interactive
learning scenarios/activities are critical for the
successful development of any learning environment
supported by today's physical environments that are
enriched with digital, context-aware and adaptive
devices, to promote better and faster learning (Fig. 5).
The interplay between dynamic knowledge
conversion processes in technology enabled smart
learning environments enhanced by innovative
learning scenarios is the basis of the proposed
model, which is depicted in Fig. 6. The model reflects
an emphasis on knowledge creation that can foster
learning outcomes. Drawing on the IMS LD specification
[17] and current technology innovation, we assumed
that the commitment and the creativity of individual
users (teachers) are crucial to developing new
practices and approaches. The scenarios can be
defined as narrative descriptions of preferable learning
contexts that take into account user stories, including
the description of the resources and functionalities
needed, the interactions they have, the tasks they
perform and the aims of their activities. The SECI
model is the essence of knowledge management. With
this model, we can grow to a more complex model by creating
complementary models and using support tools. In each

Fig. 5  Innovative learning scenario/activities supported by smart learning environments: roles (student, teacher), learning activities, learning scenarios and objectives, and services (learning analytics, Web 2.0 and Social 3.0, MOOCs, learning resources), set in physical environments enriched with digital, context-aware and adaptive devices to promote better and faster learning.

Fig. 6  SECI 2.0 - dynamic knowledge conversion processes in technology enabled smart learning environments, mapping Web 2.0 tools (e.g., Prezi, YouTube, blogs, wikis, Facebook, Glogster, Animoto, MindMeister, Bubbl.us, Weebly, ProProfs) to the SECI stages.

of the four SECI knowledge conversion stages, a
diversity of learning activities supported by smart
technology can take place. The central idea of the
model is that knowledge held by individuals is shared
with other individuals so that it interconnects into new
knowledge. The spiral of knowledge, or the amount of
knowledge, grows continuously as more rounds are
completed in the model. Knowledge creation can be viewed
as a bottom-up spiral process, starting with the sharing
of tacit knowledge at the individual level and moving
to crystallisation of the knowledge at the group level
and then on to the organisational level. Then the

combination process deductively produces increased


collective understanding, which is then internalised by
reflection and embodied into increased individual
understanding. In each of the SECI stages, a wide range
of smart learning scenarios and activities performed in
collaborative and knowledge platforms (Animoto,
Proshow, Prezi, Vuvox, Screenr, Quiz-Creator,
Bitrix24, Flipsnack, ProProfs, Voci) and supported
by smart technologies and services can be applied.
Smart technology attuned to the emergent nature of
thinking and learning does not simply point to greater
control over the students; rather, the emphasis shifts to
control of productive interaction between students, teachers,
ideas and technologies. The rapid progress of mobile,
wireless communication and sensing technologies has
enabled the development of context-aware ubiquitous
learning environments, which are able to detect the
real-world learning status of students as well as the
environmental contexts.

5. Case Study

5.1 Synergy with the FP7 FORGE Project Forging Online Education through FIRE

The University has started to adopt co-op and
internship programs with the FP7 FORGE project
Forging Online Education through FIRE
(http://ict-forge.eu/) to facilitate experiences,
particularly in computer science programs. The EU
FIRE (future internet research and experimentation)
initiative creates an open research environment that
facilitates strategic research and development of new
Future Internet concepts, giving researchers the tools
they need to conduct large-scale experiments on new
paradigms. The FORGE project introduces the FIRE
experimental facilities to the eLearning community, in
order to promote experimentally driven research in
education by using experiments as an interactive
learning and training channel both for students and
professionals. FORGE provides learners and educators
with access to world-class experimentation facilities
and high quality learning materials via a rigorous
production process. In FORGE, we focus on
development methodologies and best practices for
remote experimentation performed on top of FIRE
facilities.

The University delivers the UXD (user experience
design) for IoT Specialisation in the Software
Engineering Master Program, which relies heavily on
present eLearning research and the FORGE
methodology and tools: students have the opportunity
to study in depth various aspects of networking
protocols and infrastructure, watch instructional
movies and screen casts, as well as conduct
experiments using the FIRE infrastructure. The goal of
the Specialisation program is to provide graduates with
the theoretical knowledge, practical skills and tools
necessary to begin the professional practice of designing
user-centric next generation devices. Graduates of this
program will be able to implement a holistic,
multidisciplinary approach to the design of user
interfaces for Internet of Things products, defining
their form, behaviour, and content. This Specialisation
covers the foundations of UXD (Emotive UXD,
Personalized UXD, and Visual UXD) and the
development of Internet of Things products and
services, including devices for sensing, actuation,
processing, and communication, to help students
develop skills and experiences they can employ in
designing novel systems. The Specialisation has
theory and lab sections. In the IoT lab sections
students will learn hands-on IoT concepts such as
sensing, actuation and communication. The FORGE
model and methodology are employed for the
development of an interactive lab in the field of Internet
of Things aimed at fostering remote experimentation
with a real production system installed in a Smart City,
such as Smart Santander.

A FORGE enabled course will keep a low complexity
for setting up simple experiments on FIRE facilities,
for both learners and teachers, while making them
aware that they are using real resources remotely. On
the one hand, teachers will use FORGE tools to create
simple experimentation scenarios and inject them into
their courses. These tools will ease the process of
browsing, reserving and scheduling FIRE resources
while pointing to and using tools that FIRE already
offers. On the other hand, learners will be initially
guided to focus on studying a specific subject and then,
later on, experiment with aspects of this subject on a
real infrastructure. At the end of the course, learners
will be guided to re-create the experimentation
environment by using the FIRE facility. Thus, learners
in the end will have a good understanding of what
FIRE is, what it offers and how it can be used. To
promote the concept of experimentally driven IoT
research in education, the following requirements have
to be considered: realism of the experimentation
environment; heterogeneity of IoT devices; adequate
scale; mobility support from controlled to realistic;
concurrency; repeatability and replayability; real end
user involvement in the experimentation cycle; and
federation with other Internet research facilities.

Fig. 7  Smart Santander IoT infrastructure.
As an illustration of experimental learning on IoT, a
learning scenario entitled "Monitoring the
environmental parameters in a Smart City" is presented,
which proves the power of the FORGE methodology and
infrastructure for building remote labs and delivering
them to students.
5.2 Learning Scenario: Monitoring the Environmental
Parameters in Smart City
Aim: In this experimental lab students will learn
hands-on IoT concepts, such as sensing and
communication in a Smart City.
Smart environment: Smart Santander infrastructure
and its interactive online site, which is conceived as a
3-tiered approach and defined as follows:
(1) IoT nodes: Responsible for sensing the
corresponding parameter (temperature, CO, noise, light,
car presence, soil temperature, soil humidity). The
majority of them are integrated in the repeaters, whilst
the others are standalone, communicating wirelessly
with the corresponding repeaters (as is the case for the
parking sensors buried under the asphalt). These
devices, which cannot be mains-powered, must be fed
with batteries.
(2) Repeaters: These nodes are placed high up on
street lights, semaphores, information panels, etc., in
order to behave as forwarding nodes that transmit all the
information associated with the different measured
parameters. The communication between repeaters and
IoT nodes is performed through the 802.15.4 protocol.
(3) Gateways: Both IoT nodes and repeaters are
configured to send all the information (through the
802.15.4 protocol), experiment-driven as well as for
service provision and network management, to the
gateway. Once information is received by this node, it
can either store it in a database, which can be placed on
a web server to be accessed directly from the Internet, or
send it to another machine (central server) through the
different interfaces the gateway provides (WiFi,
GPRS/UMTS or Ethernet); a minimal sketch of this
store-or-forward behaviour is given after this list.
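The following sketch (our own; the database schema, field names and the central-server URL are assumptions made only for illustration) captures the gateway's store-or-forward behaviour in a few lines:

# Minimal gateway sketch: readings decoded from the 802.15.4 network are
# either stored in a local database or forwarded to a central server.
import json
import sqlite3
import urllib.request

DB = sqlite3.connect("readings.db")
DB.execute("CREATE TABLE IF NOT EXISTS readings "
           "(node TEXT, parameter TEXT, value REAL, ts REAL)")

CENTRAL_SERVER = "http://central.example/ingest"  # hypothetical

def handle_reading(reading, forward=False):
    """reading: dict decoded from an 802.15.4 frame, e.g.
    {"node": "n42", "parameter": "temperature", "value": 23.5, "ts": 1467367200}"""
    if forward:
        data = json.dumps(reading).encode("utf-8")
        req = urllib.request.Request(CENTRAL_SERVER, data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)
    else:
        DB.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                   (reading["node"], reading["parameter"],
                    reading["value"], reading["ts"]))
        DB.commit()

handle_reading({"node": "n42", "parameter": "temperature",
                "value": 23.5, "ts": 1467367200})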
Within the Smart Santander project more than 2,000
environmental monitoring sensors have already been
deployed. These sensors monitor the CO index,
temperature, noise level and light intensity.
Learning activities:
(1) Navigate through the various parts of the Smart
Santander site (http://maps.smartsantander.eu/), become
familiar with the capabilities of the freely accessible
platform tags (IoT infrastructure, Mobile Sensing, Pace
of the City, Augmented Reality POIs) and play with
various parameters.
(2) Explore the Smart Santander IoT infrastructure
and examine the set of environmental parameters the
system is able to monitor.
(3) Find on the Internet intelligent sensors or
complete devices that can measure the same set of
parameters.
(4) Explore the features of smart sensors for air
pollution, for example CO2, O3, particulate matter and
NO2.
(5) Design a network of sensors and a system for
continuous monitoring of the air; a minimal sketch of
such a monitoring loop is given after this list.
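For learning activity (5), the toy sketch below (entirely illustrative: the thresholds are arbitrary and the sensor readings are simulated) shows the shape of a continuous monitoring loop that students could later connect to real sensors or to data pulled from the Smart Santander platform:

# Toy continuous air-monitoring loop: polls simulated sensors and raises an
# alert when a threshold is exceeded. Replace read_sensors() with real drivers.
import random
import time

THRESHOLDS = {"co2_ppm": 1000, "o3_ug_m3": 120, "pm10_ug_m3": 50, "no2_ug_m3": 200}

def read_sensors():
    # Placeholder: substitute real sensor drivers or platform queries here.
    return {"co2_ppm": random.uniform(350, 1200),
            "o3_ug_m3": random.uniform(10, 150),
            "pm10_ug_m3": random.uniform(5, 80),
            "no2_ug_m3": random.uniform(5, 250)}

def monitor(samples=3, period_s=1):
    for _ in range(samples):
        values = read_sensors()
        for parameter, value in values.items():
            if value > THRESHOLDS[parameter]:
                print(f"ALERT: {parameter} = {value:.1f} exceeds {THRESHOLDS[parameter]}")
        time.sleep(period_s)

monitor()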
Further, to support the learning process a hybrid
cloud infrastructure has been established, which
integrates a variety of collaboration platforms and
eLearning systems. The cloud-based infrastructure
enables innovative learning scenario execution and
monitoring.

6. Conclusion
This paper discusses a vision and some steps toward
development of smart learning environments. These
environments are expected to break the boundaries of
traditional learning and enable the detection of the
learner's location, environment, proximity and
situation. This would provide a fully contextualised
learning process in order to provide learners with
learning scenarios in their own living and work
environments, leading to significantly better learning
experiences.

References
[1] ISO/IEC JTC1. 2014. Smart Cities Preliminary Report. www.iso.org.
[2] Richey, R. C., Klein, J. D., and Tracey, M. W. 2011. The Instructional Design Knowledge Base: Theory, Research, and Practice. New York: Routledge.
[3] Seel, N. M. et al. 2012. The Encyclopedia of the Sciences of Learning. New York: Springer.
[4] Spector, J. M. 2012. Foundations of Educational Technology: Integrative Approaches and Interdisciplinary Perspectives. New York: Routledge.
[5] Spector, J. M. 2014. "Conceptualizing the Emerging Field of Smart Learning Environments." Smart Learning Environments 1 (2).
[6] Spector, J. M. et al. 2015. Encyclopedia of Educational Technology. Thousand Oaks: Sage.
[7] Lin, Y.-T., Huang, Y.-M., and Kinshuk. 2010. "Location-Based and Knowledge-Oriented Microblogging for Mobile Learning: Framework, Architecture, and System." In WMUTE 2010, 146-150.
[8] Korea Education and Research Information Service. 2012. Issues Analysis of the Learning Management System in Smart Education. RM 2012, vol. 18, p. 11. ITPLUS: http://www.ktoa.or.kr/.
[9] Boulanger, D. et al. 2015. "Smart Learning Analytics." In Emerging Issues in Smart Learning, edited by G. Chen, V. Kumar, Kinshuk, R. Huang, and S. C. Kong. Berlin Heidelberg: Springer. Retrieved from http://link.springer.com/chapter/10.1007/978-3-662-44188-6_39.
[10] Li, B. P., Kong, S. C., and Chen, G. 2015. "A Study on the Development of the Smart Classroom Scale." In Emerging Issues in Smart Learning. Heidelberg: Springer. Retrieved from http://link.springer.com/chapter/10.1007/978-3-662-44188-6_6.
[11] Hwang, G. J. 2014. "Definition, Framework and Research Issues of Smart Learning Environments - a Context-Aware Ubiquitous Learning Perspective." Smart Learning Environments 1 (1): 4. doi: 10.1186/s40561-014-0004-5.
[12] Kumar, V. S., Kinshuk, D., Clemens, C., and Harris, S. 2015. "Causal Models and Big Data Learning Analytics." In Ubiquitous Learning Environments and Technologies. Heidelberg: Springer.
[13] Kumar, V. S. et al. 2015. "Big Data Learning Analytics: A New Perspective." In Ubiquitous Learning Environments and Technologies. Heidelberg: Springer.
[14] Kay, J. 2008. "Life-long Learning, Learner Models and Augmented Cognition." In Intelligent Tutoring Systems, edited by B. P. Woolf, E. Aimeur, R. Nkambou, and S. Lajoie. Berlin: Springer.
[15] Norris, C., and Soloway, E. 2014. "Web 2.0 to Social 3.0: The Next Big Thing." The Journal.
[16] Nonaka, I., and Takeuchi, H. 1995. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford University Press.
[17] IMS Learning Design Specification. http://www.imsglobal.org/learningdesign/index.htm.
