
Gartner's 2014 Hype Cycle for Emerging Technologies Maps the Journey to Digital Business

2014 Hype Cycle Special Report Evaluates the Maturity of More Than 2,000 Technologies

2014 Marks 20th Anniversary of the Gartner Hype Cycle


The journey to digital business is the key theme of Gartner, Inc.'s "Hype Cycle for Emerging Technologies, 2014." As the Gartner
Hype Cycle celebrates its 20th year, Gartner said that as enterprises set out on the journey to becoming digital businesses,
identifying and employing the right technologies at the right time will be critical.
Gartner's 2014 Hype Cycle Special Report provides strategists and planners with an assessment of the maturity, business benefit
and future direction of more than 2,000 technologies, grouped into 119 areas. New Hype Cycles this year include Digital Workplace,
Connected Homes, Enterprise Mobile Security, 3D Printing and Smart Machines.
The Hype Cycle for Emerging Technologies report is the longest-running annual Hype Cycle, providing a cross-industry perspective
on the technologies and trends that business strategists, chief innovation officers, R&D leaders, entrepreneurs, global market
developers and emerging technology teams should consider in developing emerging-technology portfolios.
"The Hype Cycle for Emerging Technologies is the broadest aggregate Gartner Hype Cycle, featuring technologies that are the
focus of attention because of particularly high levels of hype, or those that Gartner believes have the potential for significant impact,"
said Jackie Fenn, vice president and Gartner fellow. "Enterprises should use this Hype Cycle to identify which technologies are
emerging and use the concept of digital business transformation to identify which business trends may result."
"The central theme for this year's Emerging Technologies Hype Cycle is Digital Business (see Figure 1). As enterprises embark on
the journey to becoming digital businesses, they will leverage technologies that today are considered to be 'emerging,'" said Hung
LeHong, vice president and Gartner fellow. "Understanding where your enterprise is on this journey and where you need to go will
not only determine the amount of change expected for your enterprise, but also map out which combination of technologies support
your progression."
Figure 1. Hype Cycle for Emerging Technologies, 2014

Source: Gartner (August 2014)


As set out on the Gartner road map to digital business, there are six progressive business era models that enterprises can identify
with today and to which they can aspire in the future.
Six Business Era Models in the Digital Business Development Path
Stage 1: Analog
Stage 2: Web
Stage 3: E-Business
Stage 4: Digital Marketing
Stage 5: Digital Business
Stage 6: Autonomous
Since the Hype Cycle for Emerging Technologies is purposely focused on more emerging technologies, it mostly supports the last
three of these stages: Digital Marketing, Digital Business and Autonomous.
Digital Marketing (Stage 4): The Digital Marketing stage sees the emergence of the Nexus of Forces (mobile, social,
cloud and information). Enterprises in this stage focus on new and more sophisticated ways to reach consumers, who are more willing to participate in marketing efforts to gain greater social connection, or product and service value. Buyers of products and
services have more brand influence than previously, and they see their mobile devices and social networks as preferred gateways.
Enterprises at this stage grapple with tapping into buyer influence to grow their business. The following technologies on the Hype
Cycle represent the Digital Marketing stage:
Software-Defined Anything; Volumetric and Holographic Displays; Neurobusiness; Data Science; Prescriptive Analytics; Complex
Event Processing; Big Data; In-Memory DBMS; Content Analytics; Hybrid Cloud Computing; Gamification; Augmented Reality;
Cloud Computing; NFC; Virtual Reality; Gesture Control; In-Memory Analytics; Activity Streams; Speech Recognition.
Digital Business (Stage 5): Digital Business is the first post-nexus stage on the road map and focuses on the
convergence of people, business and things. The Internet of Things and the concept of blurring the physical and virtual worlds are
strong concepts in this stage. Physical assets become digitalized and become equal actors in the business value chain alongside
already-digital entities, such as systems and apps. 3D printing takes the digitalization of physical items further and provides
opportunities for disruptive change in the supply chain and manufacturing. The ability to digitalize attributes of people (such as their
health vital signs) is also part of this stage. Even currency (which is often thought of as digital already) can be transformed (for
example, cryptocurrencies). Enterprises seeking to go past the Nexus of Forces technologies to become a digital business should
look to these additional technologies:
Bioacoustic Sensing; Digital Security; Smart Workspace; Connected Home; 3D Bioprinting Systems; Affective Computing; Speech-to-Speech Translation; Internet of Things; Cryptocurrencies; Wearable User Interfaces; Consumer 3D Printing; Machine-to-Machine
Communication Services; Mobile Health Monitoring; Enterprise 3D Printing; 3D Scanners; Consumer Telematics.
Autonomous (Stage 6): Autonomous represents the final post-nexus stage. This stage is defined by an enterprise's
ability to leverage technologies that provide humanlike or human-replacing capabilities. Using autonomous vehicles to move people
or products or using cognitive systems to write texts or answer customer questions are all examples that mark the Autonomous
stage. Enterprises seeking to reach this stage to gain competitiveness should consider these technologies on the Hype Cycle:
Virtual Personal Assistants; Human Augmentation; Brain-Computer Interface; Quantum Computing; Smart Robots; Biochips; Smart
Advisors; Autonomous Vehicles; Natural-Language Question Answering.
"Although we have categorized each of the technologies on the Hype Cycle into one of the digital business stages, enterprises
should not limit themselves to these technology groupings," said Mr. LeHong. "Many early adopters have embraced quite advanced
technologies, such as autonomous vehicles or smart advisors, while they continue to improve nexus-related areas, such as mobile
apps - so it's important to look at the bigger picture."
Additional information is available in Gartner's "Hype Cycle for Emerging Technologies, 2014"
at http://www.gartner.com/document/2809728. The Special Report includes a video in which Betsy Burton, Gartner vice president
and distinguished analyst, provides more details regarding this year's Hype Cycles, as well as links to all of the Hype Cycle reports.
The Special Report can be found at http://www.gartner.com/technology/research/hype-cycles/.

Cloud Computing

Cloud computing is a computing term or metaphor that evolved in the late 2000s, based on utility and consumption of computing
resources. Cloud computing involves deploying groups of remote servers and software networks that allow centralized data storage
and online access to computer services or resources. Clouds can be classified as public, private or hybrid.
Overview

Cloud computing[3] relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity
grid) over a network.[2] At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.

Cloud computing, or in simpler shorthand just "the cloud", also focuses on maximizing the effectiveness of the shared resources.
Cloud resources are usually not only shared by multiple users but are also dynamically reallocated on demand, so capacity can be shifted to wherever it is needed. For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America's business hours with a different application (e.g., a web server). This approach should maximize the use of computing power while also reducing environmental damage, since less power, air conditioning, rack space, etc. are required for a variety of functions.
With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for
different applications.

The term "moving to cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated
hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as one uses it).

Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that
differentiate their businesses instead of on infrastructure.[4] Proponents also claim that cloud computing allows enterprises to get
their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust
resources to meet fluctuating and unpredictable business demand.[4][5][6] Cloud providers typically use a "pay as you go" model. This
can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model. [7]

The present availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption
of hardware virtualization, service-oriented architecture, and autonomic and utility computing have led to a growth in cloud
computing.[8][9][10]

Cloud vendors are experiencing growth rates of 50% per annum.


Characteristics

Cloud computing exhibits the following key characteristics:

Agility improves with users' ability to re-provision technological infrastructure resources.

Cost reductions claimed by cloud providers. A public-cloud delivery model converts capital expenditure to operational
expenditure.[35] This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not
need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained,
with usage-based options, and fewer IT skills are required for in-house implementation.[36] The e-FISCAL project's state-of-the-art repository[37] contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.

Device and location independence[38] enable users to access systems using a web browser regardless of their location
or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed
via the Internet, users can connect from anywhere.[36]

Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer
and can be accessed from different places.

Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:

centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)

peak-load capacity increases (users need not engineer for highest possible load-levels)

utilisation and efficiency improvements for systems that are often only 10-20% utilised.[39][40]

Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the
system interface.[36][41][42]

Productivity may be increased when multiple users can work on the same data simultaneously, rather than waiting for it
to be saved and emailed. Time may be saved as information does not need to be re-entered when fields are matched, nor do
users need to install application software upgrades to their computer.[43]

Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable
for business continuity and disaster recovery.[44]

Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in
near real-time[45][46] (Note, the VM startup time varies by VM type, location, OS and cloud providers[45]), without users having to
engineer for peak loads.[47][48][49]

Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist
about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or

better than other traditional systems, in part because providers are able to devote resources to solving security issues that
many customers cannot afford to tackle.[50] However, the complexity of security is greatly increased when data is distributed
over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In
addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by
users' desire to retain control over the infrastructure and avoid losing control of information security.

The National Institute of Standards and Technology's definition of cloud computing identifies "five essential characteristics":

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as
needed automatically without requiring human interaction with each service provider.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by
heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with
different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and
inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be
appropriated in any quantity at any time.

Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level
of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage
can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
National Institute of Standards and Technology[2]
Service models

Cloud computing providers offer their services according to several fundamental models:[2][51]

Infrastructure as a service (IaaS)



In the most basic cloud-service model, and according to the IETF (Internet Engineering Task Force), providers of IaaS offer computers - physical or (more often) virtual machines - and other resources. (A hypervisor, such as Xen, Oracle VirtualBox, KVM, VMware
ESX/ESXi, or Hyper-V runs the virtual machines as guests. Pools of hypervisors within the cloud operational support-system can
support large numbers of virtual machines and the ability to scale services up and down according to customers' varying
requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw block storage, and file
or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[52] IaaS-cloud
providers supply these resources on-demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).

To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure.
In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill
IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed. [53][54][55]
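To make the utility-billing idea concrete, here is a minimal sketch of how a metered IaaS bill might be computed. The resource names and unit rates are invented for illustration and do not correspond to any particular provider's price list.

```python
# Hypothetical utility-billing sketch: cost reflects resources allocated and consumed.
# All resource names and unit rates are invented for illustration only.

HOURLY_RATES = {
    "vm_small": 0.05,            # USD per VM-hour
    "block_storage_gb": 0.0001,  # USD per GB-hour of allocated storage
    "egress_gb": 0.09,           # USD per GB transferred out
}

def monthly_bill(vm_hours, storage_gb_hours, egress_gb):
    """Sum metered usage multiplied by the unit rates."""
    return (vm_hours * HOURLY_RATES["vm_small"]
            + storage_gb_hours * HOURLY_RATES["block_storage_gb"]
            + egress_gb * HOURLY_RATES["egress_gb"])

# Example: two small VMs running all month, 500 GB of storage, 200 GB of egress.
hours_in_month = 24 * 30
print(monthly_bill(vm_hours=2 * hours_in_month,
                   storage_gb_hours=500 * hours_in_month,
                   egress_gb=200))
```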
Platform as a service (PaaS)

In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, like Microsoft Azure and Google App Engine, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually. Such automatic scaling has also been proposed in architectures aiming to support real-time processing in cloud environments.[56] Even more specific application types can be provided via PaaS, such as media encoding offered by services like the bitcodin transcoding cloud[57] or media.io.[58]
Software as a service (SaaS)

In the business model using software as a service (SaaS), users are provided access to application software and databases. Cloud
providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software"
and is usually priced on a pay-per-use basis or using a subscription fee.

In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from
cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need
to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications
are different from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at
run-time to meet changing work demand.[59] Load balancers distribute the work over the set of virtual machines. This process is
transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud
applications can be multitenant; that is, any machine may serve more than one cloud-user organization.
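A minimal sketch of the single-access-point behaviour described above: a load balancer spreads requests over cloned application instances, and tenants share the same pool without seeing it. All class and tenant names are illustrative, not part of any real SaaS product.

```python
# Round-robin dispatch over a pool of cloned application instances.
import itertools

class AppInstance:
    def __init__(self, name):
        self.name = name

    def handle(self, tenant, request):
        return f"{self.name} served {request!r} for tenant {tenant}"

class LoadBalancer:
    """The single access point the cloud user sees; dispatch is transparent."""
    def __init__(self, instances):
        self._pool = itertools.cycle(instances)

    def handle(self, tenant, request):
        return next(self._pool).handle(tenant, request)

cloud_app = LoadBalancer([AppInstance("vm-1"), AppInstance("vm-2"), AppInstance("vm-3")])
for tenant, req in [("acme", "GET /report"), ("globex", "GET /report"), ("acme", "POST /invoice")]:
    print(cloud_app.handle(tenant, req))
```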

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user,[60] so price is scalable and adjustable if
users are added or removed at any point.[61]

Proponents claim SaaS allows a business the potential to reduce IT operational costs by outsourcing hardware and software
maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from
hardware/software spending and personnel expenses, towards meeting other goals. In addition, with applications hosted centrally,
updates can be released without the need for users to install new software. One drawback of SaaS is that the users' data are stored
on the cloud provider's server. As a result, there could be unauthorized access to the data. For this reason, users are increasingly
adopting intelligent third-party key management systems to help secure their data.
Cloud clients

Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones.
Some of these devices (cloud clients) rely on cloud computing for all or a majority of their applications, so they are essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5 these web user interfaces can achieve a look and feel similar to, or even better than, native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin-client computing) are delivered via a screen-sharing technology.
Deployment models

Cloud computing types


Private cloud

Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party, and
hosted either internally or externally.[2] Undertaking a private cloud project requires a significant level and degree of engagement to
virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done
right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious
vulnerabilities.[62] Self-run data centers[63] are generally capital intensive. They have a significant physical footprint, requiring
allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional
capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit
from less hands-on management,[64] essentially "[lacking] the economic model that makes cloud computing such an intriguing
concept".[65][66]
Public cloud

A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Public cloud services may be free.[67] Technically there may be little or no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network. Saasu is one example of a public cloud service. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure at their data centers, and access is generally via the Internet. AWS and Microsoft also offer direct connect services, called "AWS Direct Connect" and "Azure ExpressRoute" respectively; such connections require customers to purchase or lease a private connection to a peering point offered by the cloud provider.[36]
Hybrid cloud

Hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound
together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation,
managed and/or dedicated services with cloud resources.[2]

Gartner, Inc. defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public
and community cloud services, from different service providers.[68] A hybrid cloud service crosses isolation and provider boundaries
so that it cannot be simply put in one category of private, public, or community cloud service. It allows one to extend either the
capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.

Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a
private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a
software service.[69] This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service
through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as
data security and compliance requirements, level of control needed over data, and the applications an organization uses. [70]

Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity
needs that cannot be met by the private cloud.[71] This capability enables hybrid clouds to employ cloud bursting for scaling across
clouds.[2] Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and
"bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid
cloud model is that an organization only pays for extra compute resources when they are needed.[72] Cloud bursting enables data
centers to create an in-house IT infrastructure that supports average workloads, and use cloud resources from public or private
clouds, during spikes in processing demands.[73]
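The cloud-bursting decision can be illustrated with a short sketch: run on the private cloud up to its capacity and "burst" only the overflow to a public cloud during demand spikes. The capacity figure and demand series are made-up numbers used purely for illustration.

```python
# Cloud-bursting sketch: private cloud absorbs the baseline, public cloud takes the spikes.
PRIVATE_CAPACITY = 100  # units of work per hour the private cloud can absorb (assumed)

def place_workload(demand):
    """Split hourly demand between private capacity and a public-cloud burst."""
    private = min(demand, PRIVATE_CAPACITY)
    burst = max(0, demand - PRIVATE_CAPACITY)
    return private, burst

for hour, demand in enumerate([60, 90, 140, 220, 120, 80]):
    private, burst = place_workload(demand)
    note = f"burst {burst} units to public cloud" if burst else "private cloud only"
    print(f"hour {hour}: demand {demand} -> {note}")
```

Because the public-cloud portion is paid for only in the hours it is actually used, the organization avoids provisioning its own data center for the peak.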
Others
Community cloud

Community cloud shares infrastructure between several organizations from a specific community with common concerns (security,
compliance, jurisdiction, etc.), whether managed internally or by a third-party, and either hosted internally or externally. The costs
are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud
computing are realized.[2]

Distributed cloud

Cloud computing can also be provided by a distributed set of machines that are running at different locations, while still connected to
a single network or hub service. Examples of this include distributed computing platforms such as BOINC and Folding@Home. An interesting attempt in this direction is Cloud@Home, which aims to implement a cloud computing provisioning model on top of voluntarily shared resources.[74]

Intercloud

The Intercloud[75] is an interconnected global "cloud of clouds"[76][77] and an extension of the Internet "network of networks" on which it
is based. The focus is on direct interoperability between public cloud service providers, more so than between providers and
consumers (as is the case for hybrid- and multi-cloud).[78][79][80]

Multicloud

Multicloud is the use of multiple cloud computing services in a single heterogeneous architecture to reduce reliance on single
vendors, increase flexibility through choice, mitigate against disasters, etc. It differs from hybrid cloud in that it refers to multiple
cloud services, rather than multiple deployment modes (public, private, legacy).[81][82]
Architecture

Cloud computing sample architecture

Cloud architecture,[83] the systems architecture of the software systems involved in the delivery of cloud computing, typically
involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provisioning implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.
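The loose coupling mentioned above can be sketched in a few lines: the producer and consumer never call each other directly; they only share a queue. Here an in-process Python queue stands in for a cloud messaging service, and the component names are invented.

```python
# Loose coupling via a message queue: components communicate only through the queue.
import queue
import threading

msg_queue = queue.Queue()

def frontend():
    for order_id in range(3):
        msg_queue.put({"order_id": order_id})   # publish and move on
    msg_queue.put(None)                          # sentinel: no more work

def billing_worker():
    while True:
        msg = msg_queue.get()
        if msg is None:
            break
        print(f"billing processed order {msg['order_id']}")

t = threading.Thread(target=billing_worker)
t.start()
frontend()
t.join()
```

Because neither side knows the other's address or availability, either component can be scaled, replaced, or restarted independently, which is the point of the loose coupling.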

Cloud engineering

Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level
concerns of commercialization, standardization, and governance in conceiving, developing, operating and maintaining cloud
computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such
as systems, software, web, performance, information, security, platform, risk, and quality engineering.
Security and privacy

Cloud computing poses privacy concerns because the service provider can access the data that is on the cloud at any time. It could
accidentally or deliberately alter or even delete information.[84] Many cloud providers can share information with third parties if
necessary for purposes of law and order even without a warrant. That is permitted in their privacy policies which users have to agree
to before they start using cloud services.[85] Solutions to privacy include policy and legislation as well as end users' choices for how
data is stored.[84] Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access.[84]
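As a concrete illustration of that last point, the sketch below encrypts data on the client before "uploading" it, so the provider only ever stores ciphertext. It assumes the third-party `cryptography` package is installed, and the upload is simulated with a dictionary rather than a real storage API.

```python
# Client-side encryption before upload: the cloud never sees the plaintext or the key.
from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()        # kept by the user, never sent to the cloud
cipher = Fernet(key)

cloud_store = {}                   # stand-in for an object-storage bucket

def upload(name, plaintext: bytes):
    cloud_store[name] = cipher.encrypt(plaintext)

def download(name) -> bytes:
    return cipher.decrypt(cloud_store[name])

upload("notes.txt", b"customer list - confidential")
print(cloud_store["notes.txt"][:20], "...")   # provider stores only ciphertext
print(download("notes.txt"))                  # user recovers the plaintext locally
```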

According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25% and 10% of all cloud security outages respectively; together these form shared technology vulnerabilities. When a cloud provider platform is shared by different users, there is a possibility that information belonging to different customers resides on the same data server. Therefore, information leakage may arise by mistake when information for one customer is given to another.[86] Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud. "There are some real Achilles' heels in the cloud infrastructure that are making big holes for the bad guys to get into." Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack, a process he called "hyperjacking".

There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?).
Many Terms of Service agreements are silent on the question of ownership.[87]

Physical control of the computer equipment (private cloud) is more secure than having the equipment off site and under someone
else's control (public cloud). This delivers great incentive to public cloud computing service providers to prioritize building and
maintaining strong management of secure services.[88] Some small businesses that don't have expertise in IT security could find that
it's more secure for them to use a public cloud.

There is the risk that end users don't understand the issues involved when signing on to a cloud service (persons sometimes don't
read the many pages of the terms of service agreement, and just click "Accept" without reading). This is important now that cloud computing is becoming popular and required for some services to work, for example for an intelligent personal
assistant (Apple's Siri or Google Now).

Fundamentally, the private cloud is seen as more secure, with higher levels of control for the owner; however, the public cloud is seen to be more flexible and to require less time and money investment from the user.[89]
The future

According to Gartner's Hype cycle, cloud computing has reached a maturity that leads it into a productive phase. This means that
most of the main issues with cloud computing have been addressed to a degree that clouds have become interesting for full
commercial exploitation. This, however, does not mean that all the problems listed above have actually been solved, only that the corresponding risks can be tolerated to a certain degree.[90] Cloud computing is therefore still as much a research topic as it is a market
offering.[91] What is clear through the evolution of Cloud Computing services is that the CTO is a major driving force behind Cloud
adoption.[92] The major Cloud technology developers continue to invest billions a year in Cloud R&D; for example, in 2011 Microsoft
committed 90% of its $9.6bn R&D budget to Cloud.[93]

BIG DATA

Definition

Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and
process data within a tolerable elapsed time.[13] Big data "size" is a constantly moving target, as of 2012 ranging from a few dozen
terabytes to many petabytes of data. Big data is a set of techniques and technologies that require new forms of integration to
uncover large hidden values from large datasets that are diverse, complex, and of a massive scale.[14]

In a 2001 research report[15] and related lectures, META Group (now Gartner) analyst Doug Laney defined data growth challenges
and opportunities as being three-dimensional, i.e. increasing volume (amount of data), velocity (speed of data in and out), and
variety (range of data types and sources). Gartner, and now much of the industry, continue to use this "3Vs" model for describing big
data.[16] In 2012, Gartner updated its definition as follows: "Big data is high volume, high velocity, and/or high variety information
assets that require new forms of processing to enable enhanced decision making, insight discovery and process
optimization."[17] Additionally, a new V "Veracity" is added by some organizations to describe it.[18]

While Gartner's definition (the 3Vs) is still widely used, the growing maturity of the concept fosters a sharper distinction between big data and business intelligence, regarding data and their use:[19]

Business intelligence uses descriptive statistics on data with high information density to measure things, detect trends, etc.;

Big data uses inductive statistics and concepts from nonlinear system identification[20] to infer laws (regressions, nonlinear relationships, and causal effects) from large sets of data with low information density,[21] in order to reveal relationships and dependencies and to predict outcomes and behaviors.[20][22]
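The contrast between the two bullets above can be shown with a toy example: a descriptive summary versus an inductive step that fits a simple linear model and uses it to predict unseen cases. The synthetic data stand in for a large, low-information-density dataset; this is an illustration of the idea, not a real big data workflow.

```python
# Descriptive vs. inductive statistics on a large, noisy synthetic dataset.
# Requires Python 3.10+ for statistics.linear_regression.
import random
import statistics

random.seed(0)
x = [random.uniform(0, 100) for _ in range(10_000)]
y = [3.0 * xi + random.gauss(0, 25) for xi in x]   # weak signal, lots of noise

# Descriptive (business-intelligence style): measure what is there.
print("mean of y:", round(statistics.mean(y), 1))

# Inductive (big-data style): infer a relationship and predict new outcomes.
slope, intercept = statistics.linear_regression(x, y)
print("fitted law: y = %.2f * x + %.2f" % (slope, intercept))
print("prediction for x=120:", round(slope * 120 + intercept, 1))
```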

A more recent, consensual definition states that "Big Data represents the Information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value".[23]

Characteristics

Big data can be described by the following characteristics:

Volume - The quantity of data that is generated is very important in this context. It is the size of the data that determines its value and potential, and whether it can actually be considered Big Data or not. The name "Big Data" itself contains a term related to size, hence the characteristic.

Variety - The next aspect of Big Data is its variety: the category to which the data belong is an essential fact that data analysts need to know. Knowing the variety helps the people who closely analyze the data, and are associated with it, to use it effectively to their advantage, thus upholding the importance of Big Data.

Velocity - The term velocity in this context refers to the speed at which data are generated and processed to meet the demands and challenges that lie ahead in the path of growth and development.

Variability - This is a factor that can be a problem for those who analyse the data. It refers to the inconsistency the data can show at times, which hampers the process of handling and managing the data effectively.

Veracity - The quality of the data being captured can vary greatly. Accuracy of analysis depends on the veracity of the source data.

Complexity - Data management can become a very complex process, especially when large volumes of data come from multiple sources. These data need to be linked, connected and correlated in order to grasp the information they are supposed to convey. This situation is therefore termed the complexity of Big Data.

In the integrated Industry 4.0 and cyber-physical systems environment, big data analytics consists of six Cs: connection (sensors and networks), cloud (computing and data on demand), cyber (model and memory), content/context (meaning and correlation), community (sharing and collaboration), and customization (personalization and value). In this scenario, and in order to provide useful insight to factory management and gain correct content, data have to be processed with advanced tools (analytics and algorithms) to generate meaningful information. Considering the presence of visible and invisible issues in an industrial factory, the information generation algorithm has to be capable of detecting and addressing invisible issues, such as machine degradation and component wear, on the factory floor.[24][25]
Applications
Big data has increased the demand for information management specialists: Software AG, Oracle Corporation, IBM, Microsoft, SAP, EMC, HP and Dell have spent more than $15 billion on software firms specializing in data
management and analytics. In 2010, this industry was worth more than $100 billion and was growing at almost 10 percent a year:
about twice as fast as the software business as a whole.[1]

Developed economies make increasing use of data-intensive technologies. There are 4.6 billion mobile-phone subscriptions
worldwide and between 1 billion and 2 billion people accessing the internet.[1] Between 1990 and 2005, more than 1 billion people worldwide entered the middle class, which means more and more people with rising incomes became more literate, which in turn leads to information growth. The world's effective capacity to exchange information through telecommunication networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000, 65 exabytes in 2007,[8] and it is predicted that the amount of
traffic flowing over the internet will reach 667 exabytes annually by 2014.[1] It is estimated that one third of the globally stored
information is in the form of alphanumeric text and still image data,[43] which is the format most useful for most big data applications.
This also shows the potential of yet unused data (i.e. in the form of video and audio content).

While many vendors offer off-the-shelf solutions for Big Data, experts recommend the development of in-house solutions custom-tailored to solve the company's problem at hand if the company has sufficient technical capabilities.[44]
Government

The use and adoption of Big Data, within governmental processes, is beneficial and allows efficiencies in terms of cost, productivity
and innovation. That said, this process does not come without its flaws. Data analysis often requires multiple parts of government
(central and local) to work in collaboration and create new and innovative processes to deliver the desired outcome. Below are some thought-leading examples within the governmental Big Data space.

United States of America

In 2012, the Obama administration announced the Big Data Research and Development Initiative, to explore how big data
could be used to address important problems faced by the government.[45] The initiative is composed of 84 different big data
programs spread across six departments.[46]

Big data analysis played a large role in Barack Obama's successful 2012 re-election campaign.[47]

The United States Federal Government owns six of the ten most powerful supercomputers in the world.[48]

The Utah Data Center is a data center currently being constructed by the United States National Security Agency. When
finished, the facility will be able to handle a large amount of information collected by the NSA over the Internet. The exact
amount of storage space is unknown, but more recent sources claim it will be on the order of a few exabytes.[49][50][51]

India

Big data analysis was, in part, responsible for the highly successful campaign of the BJP and its allies in the 2014 Indian general election.[52]

The Indian Government utilises numerous techniques to ascertain how the Indian electorate is responding to government action, as well as ideas for policy augmentation.

United Kingdom

Examples of good uses of big data in public services:

Data on prescription drugs: By connecting origin, location and the time of each prescription, a research unit was able to demonstrate the considerable delay between the release of any given drug and a UK-wide adoption of the National Institute for Health and Care Excellence guidelines. This suggests that new or the most up-to-date drugs take some time to filter through to the general patient.

Joining up data: During the weather challenges of winter 2014, a local authority blended data about services, such as road gritting rotas, with services for people at risk, such as 'meals on wheels'. The connection of data allowed the local authority to avoid any weather-related delay.

International development

Research on the effective usage of information and communication technologies for development (also known as ICT4D) suggests that big data technology can make important contributions but also present unique challenges to international development.[53][54] Advancements in big data analysis offer cost-effective opportunities to improve decision-making in critical development areas such as health care, employment, economic productivity, crime, security, and natural disaster and resource management.[55][56] However, longstanding challenges for developing regions, such as inadequate technological infrastructure and economic and human resource scarcity, exacerbate existing concerns with big data such as privacy, imperfect methodology, and interoperability issues.[55]
Manufacturing

Based on the TCS 2013 Global Trend Study, improvements in supply planning and product quality provide the greatest benefit of big data for manufacturing.[57] Big data provides an infrastructure for transparency in the manufacturing industry, which is the ability to unravel uncertainties such as inconsistent component performance and availability. Predictive manufacturing, as an applicable approach toward near-zero downtime and transparency, requires vast amounts of data and advanced prediction tools for the systematic processing of data into useful information.[58] A conceptual framework of predictive manufacturing begins with data acquisition, where different types of sensory data are available to acquire, such as acoustics, vibration, pressure, current, voltage and controller data. Vast amounts of sensory data, in addition to historical data, construct the big data in manufacturing. The generated big data acts as the input into predictive tools and preventive strategies such as Prognostics and Health Management (PHM).[59]
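A minimal sketch of the predictive idea described above: flag possible degradation when a sensor reading drifts outside a band learned from historical data. The vibration values and the 3-sigma rule are assumptions chosen only to keep the illustration short; real PHM pipelines use far richer models.

```python
# Toy health-monitoring check: compare new readings against a baseline band.
import statistics

historical = [0.98, 1.02, 1.01, 0.99, 1.00, 1.03, 0.97, 1.01]   # healthy baseline (synthetic)
mean = statistics.mean(historical)
std = statistics.stdev(historical)

def assess(reading, k=3.0):
    """Simple control-limit check: outside mean +/- k*std counts as suspected degradation."""
    return "degradation suspected" if abs(reading - mean) > k * std else "healthy"

for reading in [1.00, 1.04, 1.12, 1.25]:
    print(f"vibration {reading:.2f} -> {assess(reading)}")
```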

Cyber-Physical Models

Current PHM implementations mostly utilize data captured during actual usage, while analytical algorithms can perform more accurately when more information from throughout the machine's lifecycle, such as system configuration, physical knowledge and working principles, is included. There is a need to systematically integrate, manage and analyze machinery or process data during the different stages of the machine's life cycle in order to handle data and information more efficiently and to achieve better transparency of machine health condition for the manufacturing industry.

With such motivation, a cyber-physical (coupled) model scheme has been developed (see http://www.imscenter.net/cyberphysical-platform). The coupled model is a digital twin of the real machine that operates in the cloud platform and simulates the health condition with integrated knowledge from both data-driven analytical algorithms and other available physical knowledge. It can also be described as a 5S systematic approach consisting of Sensing, Storage, Synchronization, Synthesis and Service. The coupled model first constructs a digital image from the early design stage. System information and physical knowledge are logged during product design, based on which a simulation model is built as a reference for future analysis. Initial parameters may be statistically generalized, and they can be tuned using data from testing or the manufacturing process using parameter estimation. After that, the simulation model can be considered a mirrored image of the real machine, able to continuously record and track machine condition during the later utilization stage. Finally, with the ubiquitous connectivity offered by cloud computing technology, the coupled model also provides better accessibility of machine condition for factory managers in cases where physical access to actual equipment or machine data is limited.[25][60]
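The parameter-estimation step mentioned above can be sketched with a tiny least-squares fit: a fictitious linear wear model is tuned from test measurements and then used by the twin for prediction. Both the model form and the numbers are assumptions made for illustration.

```python
# Tune a simulation-model parameter from test data by least squares (through the origin).
def estimate_wear_rate(hours, wear):
    """Least-squares slope for wear ~= rate * hours."""
    return sum(h * w for h, w in zip(hours, wear)) / sum(h * h for h in hours)

# Measurements from testing (synthetic numbers).
hours = [100, 250, 400, 600]
wear = [0.21, 0.48, 0.83, 1.19]

rate = estimate_wear_rate(hours, wear)
print(f"estimated wear rate: {rate:.4f} per hour")
# The tuned model can now mirror the real machine during later use:
print(f"predicted wear after 1,000 h: {rate * 1000:.2f}")
```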
Media
Internet of Things (IoT)

To understand how the media utilises Big Data, it is first necessary to provide some context on the mechanisms used in the media process. Nick Couldry and Joseph Turow have suggested that practitioners in advertising and media approach Big Data as many actionable points of information about millions of individuals. The industry appears to be moving away from the traditional approach of using specific media environments, such as newspapers, magazines, or television shows, and instead taps into consumers with technologies that reach targeted people at optimal times in optimal locations. The ultimate aim is, of course, to serve, or convey, a message or content that is (statistically speaking) in line with the consumer's mindset. For example, publishing environments are increasingly tailoring messages (advertisements) and content (articles) to appeal to consumers on the basis of information gleaned through various data-mining activities.[61]

The media industries process Big Data in a dual, interconnected manner:

Targeting of consumers (for advertising by marketers)

Data-capture

Big Data and the IoT work in conjunction. From a media perspective, data is the key derivative of device interconnectivity and is pivotal in allowing clearer accuracy in targeting. The Internet of Things, with the help of big data, therefore transforms the media industry, companies and even governments, opening up a new era of economic growth and competitiveness. The intersection of people, data and intelligent algorithms has far-reaching impacts on media efficiency. The wealth of data generated by this industry (i.e., Big Data) allows practitioners in advertising and media to add an elaborate layer to the targeting mechanisms currently utilised by the industry.

Technology

eBay.com uses two data warehouses, at 7.5 petabytes and 40 PB, as well as a 40 PB Hadoop cluster for search, consumer recommendations, and merchandising.

Amazon.com handles millions of back-end operations every day, as well as queries from more than half a million third-party sellers. The core technology that keeps Amazon running is Linux-based, and as of 2005 they had the world's three largest Linux databases, with capacities of 7.8 TB, 18.5 TB, and 24.7 TB.[62]

Facebook handles 50 billion photos from its user base.[63]

Private sector
Retail

Walmart handles more than 1 million customer transactions every hour, which are imported into databases estimated to contain more than 2.5 petabytes (2560 terabytes) of data, the equivalent of 167 times the information contained in all the books in the US Library of Congress.[1]

Retail Banking

The FICO Card Detection System protects accounts worldwide.[64]

The volume of business data worldwide, across all companies, doubles every 1.2 years, according to estimates. [65][66]

Real Estate

Windermere Real Estate uses anonymous GPS signals from nearly 100 million drivers to help new home buyers
determine their typical drive times to and from work throughout various times of the day.[67]

Science

The Large Hadron Collider experiments represent about 150 million sensors delivering data 40 million times per second. There are
nearly 600 million collisions per second. After filtering and refraining from recording more than 99.999% of these streams, there are
100 collisions of interest per second.[68][69][70]

As a result, only working with less than 0.001% of the sensor stream data, the data flow from all four LHC experiments
represents 25 petabytes annual rate before replication (as of 2012). This becomes nearly 200 petabytes after replication.

If all sensor data were to be recorded at the LHC, the data flow would be extremely hard to work with. The data flow would exceed a 150 million petabyte annual rate, or nearly 500 exabytes per day, before replication. To put the number in perspective, this is equivalent to 500 quintillion (5×10^20) bytes per day, almost 200 times more than all the other sources combined in the world.
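Because the paragraph quotes several rates in different units, a quick back-of-the-envelope script can reconcile them; the figures are taken directly from the text and only re-expressed, with deliberate rounding.

```python
# Reconciling the LHC figures quoted above (all inputs taken from the text).
collisions_per_s = 600e6
kept_per_s = 100
print(f"fraction of collisions recorded: {kept_per_s / collisions_per_s:.1e}")  # well under 0.001%

unfiltered_eb_per_day = 500            # exabytes per day, as quoted
pb_per_eb = 1000
unfiltered_pb_per_year = unfiltered_eb_per_day * pb_per_eb * 365
print(f"unfiltered annual rate: ~{unfiltered_pb_per_year / 1e6:.0f} million PB")  # exceeds 150 million PB

filtered_pb_per_year = 25              # what is actually kept before replication
print(f"reduction factor after filtering: ~{unfiltered_pb_per_year / filtered_pb_per_year:.1e}")
```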

The Square Kilometre Array is a telescope which consists of millions of antennas and is expected to be operational by 2024.
Collectively, these antennas are expected to gather 14 exabytes and store one petabyte per day.[71][72] It is considered to be one of
the most ambitious scientific projects ever undertaken.
Science and Research

When the Sloan Digital Sky Survey (SDSS) began collecting astronomical data in 2000, it amassed more in its first few
weeks than all data collected in the history of astronomy. Continuing at a rate of about 200 GB per night, SDSS has amassed
more than 140 terabytes of information. When the Large Synoptic Survey Telescope, successor to SDSS, comes online in 2016, it is anticipated to acquire that amount of data every five days.[1]

Decoding the human genome originally took 10 years to process; now it can be achieved in less than a day: DNA sequencers have divided the sequencing cost by 10,000 in the last ten years, which is 100 times more than the cost reduction predicted by Moore's Law.[73]

The NASA Center for Climate Simulation (NCCS) stores 32 petabytes of climate observations and simulations on the Discover supercomputing cluster.

RFID

A radio-frequency identification system uses tags, or labels attached to the objects to be identified. Two-way radio transmitter-receivers called interrogators or readers send a signal to the tag and read its response.

RFID tags can be either passive, active or battery-assisted passive. An active tag has an on-board battery and periodically transmits
its ID signal. A battery-assisted passive (BAP) tag has a small battery on board and is activated when in the presence of an RFID
reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the
reader. However, to operate a passive tag, it must be illuminated with a power level roughly a thousand times stronger than for
signal transmission. That makes a difference in interference and in exposure to radiation.

Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field-programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.

RFID tags contain at least two parts: an integrated circuit for storing and processing
information, modulating and demodulating a radio-frequency (RF) signal, collecting DC power from the incident reader signal, and
other specialized functions; and an antenna for receiving and transmitting the signal. The tag information is stored in a non-volatile
memory. The RFID tag includes either fixed or programmable logic for processing the transmission and sensor data, respectively.

An RFID reader transmits an encoded radio signal to interrogate the tag. The RFID tag receives the message and then responds
with its identification and other information. This may be only a unique tag serial number, or may be product-related information such
as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the
RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them
simultaneously.
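The reader-tag-database interaction described above can be sketched as a toy model: the reader polls the tags in range, each tag answers with its serial number, and the serial is used as a key into a back-end database. The tag IDs, class names, and records are all invented for the example; real readers also run an anti-collision protocol that is not modelled here.

```python
# Toy model of an RFID interrogation cycle with a database lookup by serial number.
TAG_DATABASE = {
    "E200-3412": {"item": "pallet 18-A", "lot": "L-2014-07", "produced": "2014-06-30"},
    "E200-9981": {"item": "pallet 18-B", "lot": "L-2014-07", "produced": "2014-07-01"},
}

class PassiveTag:
    def __init__(self, serial):
        self.serial = serial

    def respond(self):
        # A passive tag is powered by the reader's signal and replies with its ID.
        return self.serial

class Reader:
    def interrogate(self, tags_in_range):
        """Read every tag in range and look each serial up in the database."""
        for tag in tags_in_range:
            serial = tag.respond()
            record = TAG_DATABASE.get(serial, "unknown tag")
            print(f"{serial}: {record}")

Reader().interrogate([PassiveTag("E200-3412"), PassiveTag("E200-9981")])
```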
Uses

The RFID tag can be affixed to an object and used to track and manage inventory, assets, people, etc. For example, it can be
affixed to cars, computer equipment, books, mobile phones, etc.

RFID offers advantages over manual systems or use of bar codes. The tag can be read if passed near a reader, even if it is covered
by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can
be read hundreds at a time. Bar codes can only be read one at a time using current devices.

In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma
sterilization, can go up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data
centers start at US$50 and can go up to over US$100 each. Battery-Assisted Passive (BAP) tags are in the US$3-10 range and also have sensor capabilities such as temperature and humidity.

Access management

Tracking of goods

Tracking of persons and animals

Toll collection and contactless payment

Machine readable travel documents

Smartdust (for massively distributed sensor networks)

Tracking sports memorabilia to verify authenticity

Airport baggage tracking logistics[20]

Timing sporting events

In 2010 three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a
reliability of 99.9%, and a stable international standard around UHF passive RFID. The adoption of these standards was driven by EPCglobal, a joint venture between GS1 and GS1 US, the organizations responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by the Auto-ID Center.[21]
Commerce

RFID provides a way for organizations to identify and manage tools and equipment (asset tracking) without manual data entry. RFID is being adopted for item-level tagging in retail stores. This provides electronic article surveillance (EAS) and a self-checkout process for consumers. Automatic identification with RFID can be used for inventory systems. Manufactured products such
as automobiles or garments can be tracked through the factory and through shipping to the customer.

Casinos can use RFID to authenticate poker chips, and can selectively invalidate any chips known to be stolen. [22]

Wal-Mart and the United States Department of Defense have published requirements that their vendors place RFID tags on all
shipments to improve supply chain management.

Access control

RFID tags are widely used in identification badges, replacing earlier magnetic stripe cards. These badges need only be held within a
certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to
allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.

Advertising

In 2010 Vail Resorts began using UHF Passive RFID tags in ski passes. Facebook is using RFID cards at most of their live events
to allow guests to automatically capture and post photos. The automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at the PGA Golf Championships,[23] and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.[24]

Promotion tracking

To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can
track exactly which product has sold through the supply chain at fully discounted prices.[25]
Transportation and logistics

Yard management, shipping and freight and distribution centers use RFID tracking. In the railroad industry, RFID tags mounted on
locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be
used with a database to identify the lading, origin, destination, etc. of the commodities being carried.[26]

In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and
cargo at several airports and airlines.[27][28]

Some countries are using RFID for vehicle registration and enforcement. [29] RFID can help detect and retrieve stolen cars.[30][31]

Intelligent transportation systems

RFID is used in intelligent transportation systems. In New York City, RFID readers are deployed at intersections to track E-ZPass tags as a means of monitoring traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used in adaptive traffic control of the traffic lights.[32]

Hose stations and conveyance of fluids

The RFID antenna in a permanently installed coupling half (fixed part) unmistakably identifies the RFID transponder placed in the other coupling half (free part) once coupling is completed. When connected, the transponder of the free part transmits all important information contactlessly to the fixed part. The coupling's location can be clearly identified by the RFID transponder coding, and the control system can then automatically start subsequent process steps.
Public transport[edit]

RFID cards are used for access control to public transport.

In London, travellers use the Oyster Card on the tube, buses and ferries. The card identifies the traveller at each turnstile so that
the system can calculate the fare.

Infrastructure management and protection[edit]

At least one company has introduced RFID to identify and locate underground infrastructure assets such as gas pipelines, sewer
lines, electrical cables, communication cables, etc.[33]
Passports[edit]
See also: Biometric passport

The first RFID passports ("E-passport") were issued by Malaysia in 1998. In addition to information also contained on the visual data
page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entries and exits from the country.

Other countries that insert RFID in passports include Norway (2005), [34] Japan (March 1, 2006), most EU countries (around 2006),
Australia, Hong Kong, the United States (2007), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan
(December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), and Canada (2013).

Standards for RFID passports are determined by the International Civil Aviation Organization (ICAO), and are contained in ICAO
Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to the ISO/IEC 14443 RFID chips in e-passports as
"contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front
cover.

Since 2006, RFID tags included in new United States passports store the same information that is printed within the passport,
and include a digital picture of the owner.[35] The United States Department of State initially stated the chips could only be read from
a distance of 10 centimetres (3.9 in), but after widespread criticism and a clear demonstration that special equipment could read the
test passports from 10 metres (33 ft) away,[36] the passports were designed to incorporate a thin metal lining to make it more difficult
for unauthorized readers to "skim" information when the passport is closed. The department also implemented Basic Access
Control (BAC), which functions as a personal identification number (PIN) in the form of characters printed on the passport data
page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of
any communication between the chip and the interrogator.[37]
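The BAC key-derivation step can be illustrated with a short sketch. The following Python code follows the general scheme described in ICAO Doc 9303 (session keys derived from the printed document number, date of birth and date of expiry via SHA-1); it is a simplified illustration that omits the 3DES parity-bit adjustment, and the passport fields shown are example values:

import hashlib

# Simplified sketch of BAC key derivation per ICAO Doc 9303. The "PIN" the
# reader must know is built from machine-readable-zone fields printed on the
# data page; 3DES parity-bit adjustment is omitted here.

def bac_keys(doc_number: str, birth_date: str, expiry_date: str):
    def check_digit(data: str) -> str:
        weights, total = (7, 3, 1), 0
        for i, ch in enumerate(data):
            val = int(ch) if ch.isdigit() else (0 if ch == '<' else ord(ch) - 55)
            total += val * weights[i % 3]
        return str(total % 10)

    mrz_info = (doc_number + check_digit(doc_number) +
                birth_date + check_digit(birth_date) +
                expiry_date + check_digit(expiry_date))
    k_seed = hashlib.sha1(mrz_info.encode()).digest()[:16]
    k_enc = hashlib.sha1(k_seed + b'\x00\x00\x00\x01').digest()[:16]  # encryption key
    k_mac = hashlib.sha1(k_seed + b'\x00\x00\x00\x02').digest()[:16]  # MAC key
    return k_enc, k_mac

# Example with made-up passport data (document number padded to 9 characters).
k_enc, k_mac = bac_keys("L898902C<", "690806", "940623")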
Transportation payments[edit]

In many countries, RFID tags can be used to pay for mass transit fares on buses, trains, or subways, or to collect tolls on highways.

Some bike lockers are operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or
locker and is used to track and charge based on how long the bike is parked.

The Zipcar car-sharing service uses RFID cards for locking and unlocking cars and for member identification. [38]

In Singapore, RFID replaces the paper Season Parking Ticket (SPT).[39]


Animal identification[edit]

RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, RFID has
become crucial in animal identification management since the outbreak of mad-cow disease. An implantable RFID
tag or transponder can also be used for animal identification. The transponders are better known as passive RFID, or "chips" on
animals.[40] The Canadian Cattle Identification Agency began using RFID tags as a replacement for barcode tags. Currently CCIA
tags are used in Wisconsin and by United States farmers on a voluntary basis. The USDA is currently developing its own program.

RFID tags are required for all cattle, sheep and goats sold in Australia. [41]
Human identification[edit]

Implantable RFID chips designed for animal tagging are now being used in humans. An early experiment with RFID implants was
conducted by British professor of cybernetics Kevin Warwick, who implanted a chip in his arm in 1998. In 2004 Conrad
Chase offered implanted chips in his night clubs in Barcelona[42] and Rotterdam to identify their VIP customers, who in turn use them to
pay for drinks.

The Food and Drug Administration in the United States has approved the use of RFID chips in humans.[43] Some business
establishments give customers the option of using an RFID-based tab to pay for service, such as the Baja Beach nightclub
in Barcelona.[44] This has provoked concerns about the privacy of individuals, as they can potentially be tracked wherever they go by an
identifier unique to them. There are concerns this could lead to abuse by an authoritarian government, to removal of freedoms,[45]
and to the emergence of the ultimate panopticon, a society where all citizens behave in a socially accepted manner because
others might be watching.[46]

On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City showed that they
could clone the RFID signal from a human-implanted RFID chip, showing that the chip is not hack-proof as was previously claimed.[47]

Privacy advocates have protested against implantable RFID chips, warning of potential abuse. There is much controversy
regarding human applications of this technology, and many conspiracy theories abound, notably the one referred to as "The Mark of
the Beast" in some religious circles.[48]
Institutions[edit]
Hospitals and healthcare[edit]

Adoption of RFID in the medical industry has been widespread and effective. Hospitals were among the first users to combine
both active and passive RFID. Many successful deployments in the healthcare industry have been cited in which active technology
tracks high-value or frequently moved items, while passive technology tracks smaller, lower-cost items that only need room-level
identification.[49] For example, medical facility rooms can collect data from transmissions of RFID badges worn by patients and
employees, as well as from tags assigned to facility assets, such as mobile medical devices.[50] The U.S. Department of Veterans
Affairs (VA) recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.[51]

A physical RFID tag may be incorporated with browser-based software to increase its efficacy. This software allows for different
groups or specific hospital staff, nurses, and patients to see real-time data relevant to each piece of tracked equipment or personnel.
Real-time data is stored and archived to make use of historical reporting functionality and to prove compliance with various industry
regulations. This combination of RFID real-time locating system hardware and software provides a powerful data collection tool for
facilities seeking to improve operational efficiency and reduce costs.

The trend is toward using ISO 18000-6c as the tag of choice, combined with an active tagging system that relies on existing 802.11X
wireless infrastructure for active tags.[citation needed]

Since 2004 a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems, usually for workflow
and inventory management.[52][53][54] The use of RFID to prevent mixups between sperm and ova in IVF clinics is also being
considered.[55]

In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from
VeriChip Corp., can incorporate personal medical information and could save lives and limit injuries from errors in medical
treatments, according to the company. Anti-RFID activists Katherine Albrecht and Liz McIntyre discovered an FDA Warning
Letter that spelled out health risks.[56] According to the FDA, these include "adverse tissue reaction", "migration of the implanted
transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility."

Libraries[edit]

Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key
into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management
and self-service checkout by patrons. It can also act as a security device, taking the place of the more traditional electromagnetic
security strip.[57]

It is estimated that over 30 million library items worldwide now contain RFID tags, including some in the Vatican Library in Rome.[58]

Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of
books can be read simultaneously. Book tags can be read while books are in motion on a conveyor belt, which reduces staff time.
This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories
could be done on a whole shelf of materials within seconds.[59] However, as of 2008 this technology remained too costly for many
smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was
that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: €12,500 each,
detection porches €10,000 each; tags €0.36 each). RFID taking a large burden off staff could also mean that fewer staff will be
needed, resulting in some of them being laid off,[58] but that has so far not happened in North America, where recent surveys have
not returned a single library that cut staff because of adding RFID. In fact, library budgets are being reduced for personnel and
increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size. Also, the
tasks that RFID takes over are largely not the primary tasks of librarians. A finding in the Netherlands is that borrowers are pleased
with the fact that staff are now more available for answering questions.

Privacy concerns have been raised surrounding library use of RFID. Because some RFID tags can be read from up to 100 metres
(330 ft), there is some concern over whether sensitive information could be collected from an unwilling source. However, library
RFID tags do not contain any patron information,[60] and the tags used in the majority of libraries use a frequency only readable from
approximately 10 feet (3.0 m).[57] Still, another non-library agency could potentially record the RFID tags of every person leaving
the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has
meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code
every time it is returned. In the future, should readers become ubiquitous (and possibly networked), then stolen books could be traced
even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page,
possibly put there by the publisher.
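A minimal sketch of the database-key approach described above, assuming a hypothetical catalog structure: the tag carries only an opaque code, which is re-issued every time the item is returned, so a recorded code quickly loses its meaning outside the library's own system:

import secrets

# Sketch of the privacy design described above: the tag stores only an opaque
# code that is meaningless outside the library's own database, and the code is
# rotated every time the item is returned.

catalog = {}                       # opaque tag code -> bibliographic record

def assign_code(record: dict) -> str:
    code = secrets.token_hex(8)    # what an eavesdropper would actually read
    catalog[code] = record
    return code

def rotate_on_return(old_code: str) -> str:
    record = catalog.pop(old_code) # the old code becomes useless on return
    return assign_code(record)

code = assign_code({"title": "RFID Handbook", "barcode": "31234000123456"})
code = rotate_on_return(code)      # the tag would be rewritten with the new code at check-in
print(code in catalog)             # True; the previous code no longer resolves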

Museums[edit]

RFID technologies are now also implemented in end-user applications in museums. An example was the custom-designed
temporary research application, "eXspot," at the Exploratorium, a science museum in San Francisco, California. A visitor entering
the museum received an RF Tag that could be carried as a card. The eXspot system enabled the visitor to receive information about
specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also
intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a
"personalized" website keyed to the RFID tag.[61]

Schools and universities[edit]

School authorities in the Japanese city of Osaka are now chipping children's clothing, backpacks, and student IDs in a primary
school.[62] A school in Doncaster, England is piloting a monitoring system designed to keep tabs on pupils by tracking radio chips in
their uniforms.[63] St Charles Sixth Form College in west London, England, began using an RFID card system in September 2008
to check pupils in and out of the main gate, both to track attendance and to prevent unauthorized entrance. Similarly, Whitcliffe Mount School
in Cleckheaton, England uses RFID to track pupils and staff in and out of the building via a specially designed card. In the
Philippines, some schools already use RFID in IDs for borrowing books; gates in those schools also have RFID readers for buying
items at the school shop and canteen, for the library, and for signing students and teachers in and out for attendance.
Sports[edit]

RFID for timing races began in the early 1990s with pigeon racing, introduced by the company Deister Electronics in Germany. RFID
can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for
every entrant.

In the race, the racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags
provide accurate readings with specially designed antennas. Rush errors, lap-count errors and accidents at the start are avoided,
since anyone can start and finish at any time without having to be released in batches.

The design of the chip and antenna controls the range from which it can be read. Short-range compact chips are twist-tied to the shoe or
velcro-strapped to the ankle. These need to be about 400 mm from the mat and so give very good temporal resolution. Alternatively, a
chip plus a very large (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest at about
1.25 m height.
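A small sketch of how mat reads translate into results, using made-up tag IDs and timestamps: the runner's net ("chip") time is the difference between their start-mat and finish-mat reads, while the gun time is measured from the official start:

# Illustrative sketch of chip timing: each tag read at the start and finish
# mats is timestamped, and the runner's net ("chip") time is the difference.
# Tag names and timestamps are made up for the example.

gun_time = 0.0                       # official race start
reads = [                            # (mat, tag_id, timestamp in seconds)
    ("start",  "TAG-042", 12.4),     # runner crosses the start mat 12.4 s after the gun
    ("finish", "TAG-042", 2412.9),
]

start, finish = {}, {}
for mat, tag, ts in reads:
    (start if mat == "start" else finish)[tag] = ts

for tag, end_ts in finish.items():
    net = end_ts - start.get(tag, gun_time)
    gross = end_ts - gun_time
    print(f"{tag}: chip time {net:.1f} s, gun time {gross:.1f} s")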

Passive and active RFID systems are used in off-road events such as Orienteering, Enduro and Hare and Hounds racing. Riders
have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver, which is
connected to a computer, to log their lap time.

RFID is being adopted by many recruitment agencies which have a physical endurance test (PET) as their qualifying procedure,
especially in cases where candidate volumes may run into millions (Indian Railway Recruitment Cells, police and the power sector).

A number of ski resorts have adopted RFID tags to provide skiers hands-free access to ski lifts. Skiers do not have to take their
passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the
left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56 megahertz.
The bulk of ski areas in Europe, from Verbier to Chamonix use these systems.[64][65][66]
Telemetry[edit]

Active RFID tags also have the potential to function as low-cost remote sensors that broadcast telemetry back to a base station.
Applications of tagometry data could include sensing of road conditions by implanted beacons, weather reports, and noise level
monitoring.[67]

Passive RFID tags can also report sensor data. For example, the Wireless Identification and Sensing Platform is a passive tag that
reports temperature, acceleration and capacitance to commercial Gen2 RFID readers.

It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine
whether the RFID tag (product) is in the store.
Problems and concerns[edit]
Data flooding[edit]

Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated
that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another,
or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that
is meaningful to an inventory control system.[73]

Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various
concepts[examples needed] have been designed, mainly offered as middleware performing the filtering from noisy and redundant raw data
to significant processed data.
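A minimal sketch of such filtering middleware, with hypothetical tag and read-point names: raw observations are collapsed so that a tag at a given read point is reported at most once per time window, turning a noisy read stream into discrete "tag passed threshold" events:

import time

# Minimal sketch of RFID event-filtering middleware: each (tag, read point)
# pair is reported at most once per window, so dozens of raw reads collapse
# into a single business event.

WINDOW_SECONDS = 60
_last_reported = {}   # (tag_id, read_point) -> timestamp of last emitted event

def filter_read(tag_id, read_point, now=None):
    now = time.time() if now is None else now
    key = (tag_id, read_point)
    if now - _last_reported.get(key, float("-inf")) < WINDOW_SECONDS:
        return None                       # redundant observation, drop it
    _last_reported[key] = now
    return {"tag": tag_id, "at": read_point, "time": now}

# A pallet passing a dock-door reader generates many reads per second, but
# only the first one inside the window becomes an event.
first = filter_read("EPC-0001", "dock-door-3", now=1000.0)
dupes = [filter_read("EPC-0001", "dock-door-3", now=1000.0 + i) for i in range(1, 50)]
print(first)        # {'tag': 'EPC-0001', 'at': 'dock-door-3', 'time': 1000.0}
print(any(dupes))   # False: the follow-up reads within the window were dropped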
Global standardization[edit]

The frequencies used for UHF RFID in the USA are currently incompatible with those of Europe or Japan. Furthermore, no emerging
standard has yet become as universal as the barcode.[74] To address international trade concerns, it is necessary to use a tag that is
operational within all of the international frequency domains.
Security concerns[edit]

Retailers such as Walmart, which already heavily use RFID for inventory purposes, also use RFID as an anti-employee-theft and
anti-shoplifting technology. If a product with an active RFID tag passes the exit-scanners at a Walmart outlet, not only does it set off
an alarm, but it also tells security personnel exactly what product to look for in the shopper's cart. [75]

A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal
location privacy and corporate/military security. Such concerns have been raised with respect to the United States Department of
Defense's recent adoption of RFID tags for supply chain management.[76] More generally, privacy organizations have expressed
concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in consumer products. This is mostly
a result of the fact that RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial
distances. RFID systems used in access control, payment and eID (e-passport) applications operate at a shorter range than EPC RFID systems
but are also vulnerable to skimming and eavesdropping, albeit at shorter distances.[77]

Another method of prevention is cryptography. Rolling codes and challenge-response authentication (CRA) are commonly
used to foil monitoring and repetition of the messages between the tag and reader, as any messages that have been recorded would prove
to be unsuccessful on repeat transmission. Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA
uses software to ask for a cryptographically coded response from the tag. The protocols used during CRA can be symmetric, or may
use public-key cryptography.[78]
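A simplified sketch of symmetric challenge-response authentication (real tags implement this in silicon and the key never leaves the chip; the key and function names here are illustrative): because the reader issues a fresh random challenge on every interrogation, a recorded response is useless when replayed:

import hmac, hashlib, os

# Minimal sketch of challenge-response authentication (CRA) with a symmetric
# key shared by reader and tag. Replaying a recorded response fails because
# the reader issues a fresh random challenge each time.

SHARED_KEY = b"per-tag-secret-key"           # provisioned into the tag at manufacture

def tag_respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def reader_authenticate() -> bool:
    challenge = os.urandom(16)               # fresh nonce -> old responses are useless
    response = tag_respond(challenge)        # travels over the air
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(reader_authenticate())   # True for a genuine tag; a replayed response would fail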

Security concerns exist both in regard to privacy over the unauthorized reading of RFID tags and in regard to server
security. Unauthorized readers can use the RFID information to track the package, and thus the consumer or carrier, as well as to
identify the contents of a package.[78] Several prototype systems are being developed to combat unauthorized reading, including
RFID signal interruption,[79] and legislation has also been proposed; some 700 scientific papers have been published on this matter
since 2002.[80] There are also concerns that the database structure of servers for the readers may be susceptible to infiltration,
similar to denial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.[81]
Exploitation[edit]

Ars Technica reported in March 2006 an RFID buffer overflow bug that could infect airport terminal RFID databases for baggage,
and also passport databases to obtain confidential information on the passport holder.[82]
Passports[edit]

In an effort to make passports more secure, several countries have implemented RFID in passports. [83] However, the encryption on
UK chips was broken in under 48 hours.[84] Since that incident, further efforts have allowed researchers to clone passport data while
the passport is being mailed to its owner. Where a criminal used to need to secretly open and then reseal the envelope, now it can
be done without detection, adding some degree of insecurity to the passport system.[85]
Shielding[edit]

In an effort to prevent the passive skimming of RFID-enabled cards or passports, the U.S. General Services Administration (GSA)
issued a set of test procedures for evaluating electromagnetically opaque sleeves.[86] For shielding products to be in compliance with
FIPS-201 guidelines, they must meet or exceed this published standard. Shielding products currently evaluated as FIPS-201
compliant are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program.[87] The United States government requires that
when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder.[88]

Further information: Aluminium foil and Electromagnetic shielding

There are contradicting opinions as to whether aluminum can prevent reading of RFID chips. Some people claim that aluminum
shielding, essentially creating a Faraday cage, does work.[89] Others claim that simply wrapping an RFID card in aluminum foil only
makes transmission more difficult and is not completely effective at preventing it.[90]

Shielding effectiveness depends on the frequency being used. Low-frequency LowFID tags, like those used in implantable devices
for humans and pets, are relatively resistant to shielding, though thick metal foil will prevent most reads. High-frequency HighFID
tags (13.56 MHz smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres
of a metal surface. UHF Ultra-HighFID tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal
surface, although their read range is actually increased when they are spaced 2-4 cm from a metal surface due to positive
reinforcement of the reflected wave and the incident wave at the tag.[citation needed]

INTERNET OF THINGS

The Internet of Things (IoT) is the network of physical objects or "things" embedded with electronics, software, sensors and
connectivity, enabling these objects to achieve greater value and service by exchanging data with the manufacturer, operator and/or other
connected devices. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the
existing Internet infrastructure.
Typically, IoT is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine
communications (M2M) and covers a variety of protocols, domains, and applications.[1] The interconnection of these embedded
devices (including smart objects), is expected to usher in automation in nearly all fields, while also enabling advanced applications
like a Smart Grid.[2]
Things, in the IoT, can refer to a wide variety of devices such as heart monitoring implants, biochip transponders on farm animals,
electric clams in coastal waters,[3] automobiles with built-in sensors, or field operation devices that assist fire-fighters in search and
rescue.[4] These devices collect useful data with the help of various existing technologies and then autonomously pass the data on
to other devices.[5] Current market examples include smart thermostat systems and washer/dryers that utilize Wi-Fi for remote
monitoring.
Besides the plethora of new application areas for Internet connected automation to expand into, IoT is also expected to generate
large amounts of data from diverse locations that is aggregated very quickly, thereby increasing the need to better index, store and
process such data.
Applications[edit]

According to Gartner, Inc. (a technology research and advisory corporation), there will be nearly 26 billion devices on the Internet of
Things by 2020.[20] ABI Research estimates that more than 30 billion devices will be wirelessly connected to the Internet of Things
(Internet of Everything) by 2020.[21] As per a recent survey and study done by the Pew Research Internet Project, a large majority of the
technology experts and engaged Internet users who responded (83 percent) agreed with the notion that the Internet/Cloud of
Things, embedded and wearable computing (and the corresponding dynamic systems[22]) will have widespread and beneficial
effects by 2025.[23] It is, as such, clear that the IoT will consist of a very large number of devices being connected to the Internet.[24]

Integration with the Internet implies that devices will utilize an IP address as a unique identifier. However, due to the limited address
space of IPv4 (which allows for 4.3 billion unique addresses), objects in the IoT will have to use IPv6 to accommodate the extremely
large address space required. [25] [26] [27] [28] [29] Objects in the IoT will not only be devices with sensory capabilities, but also provide
actuation capabilities (e.g., bulbs or locks controlled over the Internet).[30] To a large extent, the future of the Internet of Things will
not be possible without the support of IPv6; and consequently the global adoption of IPv6 in the coming years will be critical for the
successful development of the IoT in the future. [26] [27] [28] [29]
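As a small illustration of the address space involved, the sketch below derives an IPv6 interface identifier from a device MAC address using modified EUI-64 (the scheme used by stateless address autoconfiguration) and attaches it to an example documentation prefix; the addresses shown are example values only:

import ipaddress

# Illustrative sketch: derive an IPv6 interface identifier from a MAC address
# using modified EUI-64 and attach it to an example /64 prefix.

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02                        # flip the universal/local bit
    iid = octets[:3] + bytearray([0xFF, 0xFE]) + octets[3:]
    network = ipaddress.IPv6Network(prefix)
    return network[int.from_bytes(iid, "big")]

addr = eui64_address("2001:db8:1234:5678::/64", "52:74:f2:b1:a8:7f")
print(addr)                                   # 2001:db8:1234:5678:5074:f2ff:feb1:a87f
print(2 ** 64)                                # interface IDs available in a single /64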

The ability to network embedded devices with limited CPU, memory and power resources means that IoT finds applications in nearly
every field.[31] Such systems could be in charge of collecting information in settings ranging from natural ecosystems to buildings and
factories,[30] thereby finding applications in fields of environmental sensing and urban planning.[32]
On the other hand, IoT systems could also be responsible for performing actions, not just sensing things. Intelligent shopping
systems, for example, could monitor specific users' purchasing habits in a store by tracking their mobile phones. These
users could then be provided with special offers on their favorite products, or even the location of items that they need, which their fridge
has automatically conveyed to the phone.[33][34] Additional examples of sensing and actuating are reflected in applications that deal
with heat, electricity and energy management, as well as cruise-assisting transportation systems.[35]

However, the application of the IoT is not only restricted to these areas. Other specialized use cases of the IoT may also exist. An
overview of some of the most prominent application areas is provided here. Based on the application domain, IoT products can be
classified broadly into five different categories: smart wearable, smart home, smart city, smart environment, and smart enterprise.
The IoT products and solutions in each of these markets have different characteristics.[36]
Media[edit]

In order to understand the manner in which the Internet of Things (IoT), the media and Big Data are interconnected, it is first
necessary to provide some context on the mechanisms used for media processing. It has been suggested by Nick Couldry and Joseph
Turow that practitioners in advertising and media approach Big Data as many actionable points of information about millions of
individuals. The industry appears to be moving away from the traditional approach of using specific media environments such as
newspapers, magazines, or television shows, and instead taps into consumers with technologies that reach targeted people at optimal
times in optimal locations. The ultimate aim is, of course, to serve or convey a message or content that is (statistically speaking) in
line with the consumer's mindset. For example, publishing environments are increasingly tailoring messages (advertisements) and
content (articles) to appeal to consumers, based on insights gleaned through various data-mining activities.[37]

The media industries process Big Data in a dual, interconnected manner:

Targeting of consumers (for advertising by marketers)

Data-capture

According to Danny Meadows-Klue, the combination of analytics for conversion tracking with behavioural targeting and
programmatic marketing has unlocked a new level of precision that enables display advertising to be focused on the devices of
people with relevant interests.[38] Big Data and the IoT work in conjunction. From a media perspective, data is the key derivative of
device interconnectivity and is pivotal in allowing more accurate targeting. The Internet of Things therefore transforms
the media industry, companies and even governments, opening up a new era of economic growth and competitiveness. The wealth
of data generated by this industry (i.e. Big Data) will allow practitioners in advertising and media to build an additional layer on the
present targeting mechanisms used by the industry.

Environmental monitoring[edit]

Environmental monitoring applications of the IoT typically utilize sensors to assist in environmental protection by monitoring air
or water quality,[3] atmospheric or soil conditions,[39] and can even include areas like monitoring the movements of wildlife and
their habitats.[40] Development of resource-constrained devices[41] connected to the Internet also means that other applications
like earthquake or tsunami early-warning systems can also be used by emergency services to provide more effective aid. IoT
devices in this application typically span a large geographic area and can also be mobile.[30]
Infrastructure management[edit]

Monitoring and controlling the operation of urban and rural infrastructure such as bridges, railway tracks, and on- and offshore wind farms is a
key application of the IoT.[42] The IoT infrastructure can be used to monitor any events or changes in structural conditions that
can compromise safety and increase risk. It can also be used to schedule repair and maintenance activities in an efficient
manner, by coordinating tasks between different service providers and users of these facilities.[30] IoT devices can also be used to
control critical infrastructure, such as bridges providing access to ships. Usage of IoT devices for monitoring and operating infrastructure
is likely to improve incident management and emergency response coordination, as well as quality of service and up-times, and to
reduce costs of operation in all infrastructure-related areas.[43] Even areas such as waste management stand to benefit from the
automation and optimization that could be brought in by the IoT.[44]
Manufacturing[edit]

Network control and management of manufacturing equipment, asset and situation management, or manufacturing process
control bring the IoT within the realm of industrial applications and smart manufacturing as well.[45] IoT intelligent systems
enable rapid manufacturing of new products, dynamic response to product demands, and real-time optimization of manufacturing
production and supply chain networks, by networking machinery, sensors and control systems together.[30]

Digital control systems to automate process controls, operator tools and service information systems to optimize plant safety and
security are within the purview of the IoT.[42] The IoT also extends to asset management via predictive maintenance, statistical
evaluation, and measurements to maximize reliability.[46] Smart industrial management systems can also be integrated with
the Smart Grid, thereby enabling real-time energy optimization. Measurements, automated controls, plant optimization, health and
safety management, and other functions are provided by a large number of networked sensors.[30]
Energy management[edit]

Integration of sensing and actuation systems, connected to the Internet, is likely to optimize energy consumption as a whole.[30] It is
expected that IoT devices will be integrated into all forms of energy-consuming devices (switches, power outlets, bulbs, televisions,
etc.) and be able to communicate with the utility supply company in order to effectively balance power generation and energy usage.[47]
Such devices would also offer the opportunity for users to remotely control their devices, or centrally manage them via
a cloud-based interface, and enable advanced functions like scheduling (e.g., remotely powering on or off heating systems,
controlling ovens, changing lighting conditions, etc.).[30] In fact, a few systems that allow remote control of electric outlets are already
available in the market, e.g., Belkin's WeMo,[48] Ambery Remote Power Switch,[49] Budderfly,[50] etc.

Besides home based energy management, the IoT is especially relevant to the Smart Grid since it provides systems to gather and
act on energy and power-related information in an automated fashion with the goal to improve the efficiency, reliability, economics,
and sustainability of the production and distribution of electricity.[47] Using Advanced Metering Infrastructure (AMI) devices connected
to the Internet backbone, electric utilities can not only collect data from end-user connections, but also manage other distribution
automation devices like transformers and reclosers.[30]
Medical and healthcare systems[edit]

IoT devices can be used to enable remote health monitoring and emergency notification systems. These health monitoring devices
can range from blood pressure and heart rate monitors to advanced devices capable of monitoring specialized implants, such as
pacemakers or advanced hearing aids.[30] Specialized sensors can also be installed within living spaces to monitor the health and
general well-being of senior citizens, while also ensuring that proper treatment is being administered and assisting people in
regaining lost mobility via therapy.[51] Other consumer devices to encourage healthy living, such as connected scales or wearable
heart monitors, are also a possibility with the IoT.[52]
Building and home automation[edit]

IoT devices can be used to monitor and control the mechanical, electrical and electronic systems used in various types of buildings
(e.g., public and private, industrial, institutional, or residential).[30] Home automation systems, like other building automation systems,
are typically used to control lighting, heating, ventilation, air conditioning, appliances, communication systems, entertainment and
home security devices to improve convenience, comfort, energy efficiency, and security.[53][54]
Transportation[edit]

The IoT can assist in integration of communications, control, and information processing across various transportation systems.
Application of the IoT extends to all aspects of transportation systems, i.e. the vehicle, the infrastructure, and the driver or user.
Dynamic interaction between these components of a transport system enables inter and intra vehicular communication, smart traffic
control, smart parking, electronic toll collection systems, logistic and fleet management, vehicle control, and safety and road
assistance.[30]
Large scale deployments[edit]

There are several planned or ongoing large-scale deployments of the IoT, to enable better management of cities and systems. For
example, Songdo, South Korea, the first of its kind fully equipped and wired smart city, is near completion. Nearly everything in this
city is planned to be wired, connected and turned into a constant stream of data that would be monitored and analyzed by an array
of computers with little, or no, human intervention.[citation needed]

Another example is a project currently underway in Santander, Spain. For this deployment, two approaches have been adopted.
This city of 180,000 inhabitants has already seen 18,000 downloads of its city application for smartphones. The application is
connected to 10,000 sensors that enable services like parking search, environmental monitoring, and a digital city agenda, among others.
City context information is used in this deployment to benefit merchants through a spark deals mechanism based on city
behavior that aims at maximizing the impact of each notification.[55]

Other examples of large-scale deployments underway include the Sino-Singapore Guangzhou Knowledge City; [56] work on
improving air and water quality, reducing noise pollution, and increasing transportation efficiency in San Jose, California; [57] and
smart traffic management in western Singapore.[58]

Another example of a large deployment is the one completed by New York Waterways in New York City to connect all of its vessels
and monitor them live 24/7. The network was designed and engineered by Fluidmesh Networks, a Chicago-based
company developing wireless networks for mission-critical applications. The NYWW network currently provides coverage on the
Hudson River, East River, and Upper New York Bay. With the wireless network in place, NY Waterway is able to take control of its
fleet and passengers in a way that was not previously possible. New applications can include security, energy and fleet
management, digital signage, public Wi-Fi, paperless ticketing and more.
Unique addressability of things[edit]

The original idea of the Auto-ID Center is based on RFID tags and unique identification through the Electronic Product
Code; however, this has evolved into objects having an IP address or URI.

An alternative view, from the world of the Semantic Web,[59] focuses instead on making all things (not just those that are electronic, smart, or
RFID-enabled) addressable by existing naming protocols, such as URI. The objects themselves do not converse, but they may
now be referred to by other agents, such as powerful centralized servers acting for their human owners.

The next generation of Internet applications using Internet Protocol Version 6 (IPv6) would be able to communicate with devices
attached to virtually all human-made objects because of the extremely large address space of the IPv6 protocol. This system would
therefore be able to scale to the large numbers of objects envisaged. [60]

A combination of these ideas can be found in the current GS1/EPCglobal EPC Information Services[61] (EPCIS) specifications. This
system is being used to identify objects in industries ranging from aerospace to fast moving consumer products and transportation
logistics.[62]
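As an illustration of what such identifiers look like, the sketch below builds a GS1 EPC pure-identity URI for a serialized trade item (SGTIN) and a hypothetical EPCIS-style observation record that refers to it; the company prefix, item reference and serial number are example values:

# Illustrative sketch: building a GS1 EPC "pure identity" URI for a serialized
# trade item (SGTIN), the kind of identifier EPCIS events refer to.

def sgtin_uri(company_prefix: str, item_reference: str, serial: str) -> str:
    return f"urn:epc:id:sgtin:{company_prefix}.{item_reference}.{serial}"

uri = sgtin_uri("0614141", "112345", "400")
print(uri)   # urn:epc:id:sgtin:0614141.112345.400

# A hypothetical EPCIS-style observation might carry this URI plus business
# context (what/when/where) as structured data:
event = {
    "epcList": [uri],
    "eventTime": "2014-08-01T12:00:00Z",
    "bizStep": "shipping",
    "readPoint": "urn:epc:id:sgln:0614141.00777.0",
}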

Trends and characteristics[edit]

Technology Roadmap: Internet of Things


Intelligence[edit]

Ambient intelligence and autonomous control are not part of the original concept of the Internet of Things, and they do not
necessarily require Internet structures. However, there is a shift in research toward integrating the
concepts of the Internet of Things and autonomous control,[63] with initial outcomes in this direction considering objects as the
driving force for an autonomous IoT.[64][65] In the future the Internet of Things may be a non-deterministic and open network in which
auto-organized or intelligent entities (Web services, SOA components) and virtual objects (avatars) will be interoperable and able to act
independently (pursuing their own objectives or shared ones) depending on the context, circumstances or environment.
Autonomous behavior through the collection of and reasoning over context information plays a significant role in the IoT. Modern IoT products and
solutions in the marketplace use a variety of different technologies to support such context-aware automation.[66]

Embedded intelligence[67] presents an "AI-oriented" perspective of the Internet of Things, which can be more clearly defined as:
leveraging the capacity to collect and analyze the digital traces left by people when interacting with widely deployed smart things, in
order to discover knowledge about human life and environment interaction, as well as social interconnection and related behaviors.
Architecture[edit]

The system will likely be an example of event-driven architecture,[68] built bottom-up (based on the context of processes and
operations, in real time) and will consider any subsidiary level. Therefore, model-driven and functional approaches will coexist with
new ones able to treat exceptions and unusual evolution of processes (multi-agent systems, B-ADSc, etc.).

In an Internet of Things, the meaning of an event will not necessarily be based on a deterministic or syntactic model but would
instead be based on the context of the event itself: this will also be a semantic web.[69] Consequently, it will not necessarily need
common standards that would not be able to address every context or use: some actors (services, components, avatars) will
accordingly be self-referenced and, if ever needed, adaptive to existing common standards (predicting everything would be no more
than defining a "global finality" for everything, which is just not possible with any of the current top-down approaches and
standardizations). Some researchers argue that sensor networks are the most essential components of the Internet of Things.[70]

Building on top of the Internet of Things, the Web of Things is an architecture for the application layer of the Internet of Things
looking at the convergence of data from IoT devices into Web applications to create innovative use-cases.
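A minimal Web-of-Things-style sketch is shown below. It uses Flask purely as a convenient example framework (an assumption, not something mandated by any specification): a device exposes its state and a simple action as ordinary HTTP resources so that Web applications can read and control it:

from flask import Flask, jsonify, request

# Minimal sketch of a Web-of-Things-style device API: state and actions are
# exposed as plain HTTP resources. Resource paths and the device model are
# made up for the example.

app = Flask(__name__)
lamp = {"id": "lamp-1", "on": False, "brightness": 0}

@app.route("/things/lamp-1", methods=["GET"])
def read_state():
    return jsonify(lamp)                      # GET returns the current device state

@app.route("/things/lamp-1/on", methods=["PUT"])
def set_on():
    lamp["on"] = bool(request.get_json().get("value", False))
    return jsonify(lamp)

if __name__ == "__main__":
    app.run(port=8080)                        # e.g. PUT {"value": true} to /things/lamp-1/on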
Complex system[edit]

In semi-open or closed loops (i.e. value chains, whenever a global finality can be settled), the IoT will therefore be considered and studied
as a complex system[71] due to the huge number of different links and interactions between autonomous actors, and its capacity to
integrate new actors. At the overall stage (fully open loop) it will likely be seen as a chaotic environment (since systems always have
finality).
Size considerations[edit]

The Internet of objects would encode 50 to 100 trillion objects, and be able to follow the movement of those objects. Human beings
in surveyed urban environments are each surrounded by 1000 to 5000 trackable objects. [72]
Space considerations[edit]

In an Internet of Things, the precise geographic location of a thing, and also its precise geographic dimensions, will be
critical (Open Geospatial Consortium, "OGC Abstract Specification"). Currently, the Internet has been primarily used to manage
information processed by people. Therefore, facts about a thing, such as its location in time and space, have been less critical to
track because the person processing the information can decide whether or not that information is important to the action being
taken, and if so, add the missing information (or decide not to take the action). (Note that some things in the Internet of Things will
be sensors, and sensor location is usually important; see Mike Botts et al., "OGC Sensor Web Enablement: Overview And High Level
Architecture".) The GeoWeb and Digital Earth are promising applications that become possible when things can become organized
and connected by location. However, challenges that remain include the constraints of variable spatial scales, the need to handle
massive amounts of data, and an indexing for fast search and neighbour operations. If in the Internet of Things, things are able to
take actions on their own initiative, this human-centric mediation role is eliminated, and the time-space context that we as humans
take for granted must be given a central role in this information ecosystem. Just as standards play a key role in the Internet and the
Web, geospatial standards will play a key role in the Internet of Things.

Sectors[edit]

There are three core sectors of the IoT: enterprise, home, and government, with the Enterprise Internet of Things (EIoT) being the
largest of the three. By 2019, the EIoT sector is estimated to account for nearly 40 percent, or 9.1 billion, of connected devices.[73]
A Basket of Remotes[edit]

According to the CEO of Cisco, the remote control market is expected to be a USD 19 trillion market.[74] Many IoT devices have the
potential to take a piece of this market. Jean-Louis Gassée (a member of Apple's initial alumni team, and BeOS co-founder) has addressed this
topic in an article on Monday Note,[75] where he predicts that the most likely problem will be what he calls the "basket of remotes"
problem: we will have hundreds of applications to interface with hundreds of devices that don't share protocols for speaking with
one another.
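The shape of the problem can be sketched with a simple adapter layer: each vendor protocol is wrapped behind one shared interface so a single application can drive them all. The device classes, method names and protocol messages below are hypothetical, not real vendor APIs:

from abc import ABC, abstractmethod

# Illustrative sketch of the "basket of remotes" problem: each vendor speaks a
# different protocol, so a hub wraps them behind one interface.

class Device(ABC):
    @abstractmethod
    def turn_on(self) -> None: ...
    @abstractmethod
    def turn_off(self) -> None: ...

class ZigbeeBulbAdapter(Device):
    def turn_on(self) -> None:  print("zigbee: on/off cluster -> ON")
    def turn_off(self) -> None: print("zigbee: on/off cluster -> OFF")

class CloudPlugAdapter(Device):
    def turn_on(self) -> None:  print("vendor cloud API: POST /plug/state {'on': true}")
    def turn_off(self) -> None: print("vendor cloud API: POST /plug/state {'on': false}")

class Hub:
    def __init__(self) -> None:
        self.devices = {}
    def add(self, name: str, device: Device) -> None:
        self.devices[name] = device
    def command(self, name: str, action: str) -> None:
        getattr(self.devices[name], action)()   # one app, one verb set, many protocols

hub = Hub()
hub.add("hall-light", ZigbeeBulbAdapter())
hub.add("heater-plug", CloudPlugAdapter())
hub.command("hall-light", "turn_on")
hub.command("heater-plug", "turn_off")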

There are multiple approaches to solving this problem, one of them called "predictive interaction",[76] where cloud- or fog-based
decision makers[clarification needed] will predict the user's next action and trigger some reaction.

For user interaction, new technology leaders are joining forces to create standards for communication between devices.
While the AllJoyn alliance is composed of the top 20 world technology leaders, there are also big companies that promote their own
protocols, like CCF from Intel.

This problem is also a competitive advantage for some very technical startup companies with the ability to move quickly.

AT&T Digital Life provides one solution for the "basket of remotes" problem. This product features home-automation and
digital-life experiences. It provides a mobile application to control their closed ecosystem of branded devices;

Nuve has developed a new technology based on sensors, a cloud-based platform and a mobile application that allows the
asset management industry to better protect, control and monitor their property.[77]

Muzzley controls multiple devices with a single application[78] and has had many manufacturers use its API[79] to
provide a learning ecosystem that predicts the end user's next actions. Muzzley is known for being among the first generation of
platforms able to predict by learning from the end user's relations with "things" in the outside world.

my shortcut[80] is an approach that also includes a set of already-defined devices and allows a Siri-like[clarification needed]
interaction between the user and the end devices. The user is able to control his or her devices using voice commands;[81]

Realtek "IoT my things" is an application that aims to interface with a closed ecosystem of Realtek devices like sensors
and light controls.[citation needed]

Manufacturers are becoming more conscious of this problem, and many companies have begun releasing their devices with open
APIs. Many of these APIs are used by smaller companies looking to take advantage of quick integration. [citation needed]
Sub systems[edit]

Not all elements in an Internet of Things will necessarily run in a global space. Domotics running inside a Smart House, for example,
might only run and be available via a local network.
Criticism and controversies[edit]

While many technologists tout the Internet of Things as a step towards a better world, scholars and social observers have doubts
about the promises of the ubiquitous computing revolution.
Privacy, autonomy and control[edit]

Peter-Paul Verbeek, a professor of philosophy of technology at the University of Twente, Netherlands, writes that technology already
influences our moral decision making, which in turn affects human agency, privacy and autonomy. He cautions against viewing
technology merely as a human tool and advocates instead considering it as an active agent.[98]

Justin Brookman, of the Center for Democracy and Technology, expressed concern regarding the impact of IoT on consumer
privacy, saying that "There are some people in the commercial space who say, 'Oh, big data, well, let's collect everything, keep it
around forever, we'll pay for somebody to think about security later.' The question is whether we want to have some sort of policy
framework in place to limit that."[99]

Editorials at WIRED have also expressed concern, one stating "What you're about to lose is your privacy. Actually, it's worse than
that. You aren't just going to lose your privacy, you're going to have to watch the very concept of privacy be rewritten under your
nose."[100]

The American Civil Liberties Union (ACLU) expressed concern regarding the ability of IoT to erode people's control over their own
lives. The ACLU wrote that "There's simply no way to forecast how these immense powers -- disproportionately accumulating in the
hands of corporations seeking financial advantage and governments craving ever more control -- will be used. Chances are Big
Data and the Internet of Things will make it harder for us to control our own lives, as we grow increasingly transparent to powerful
corporations and government institutions that are becoming more opaque to us."[101]

Researchers have identified privacy challenges faced by all stakeholders in IoT domain, from the manufacturers and app developers
to the consumers themselves, and examined the responsibility of each party in order to ensure user privacy at all times. Problems
highlighted by the report[102] include:

User consent: somehow, the report says, users need to be able to give informed consent to data collection. Users,
however, have limited time and technical knowledge.

Freedom of choice: both privacy protections and underlying standards should promote freedom of choice. For example,
the study notes,[103] users need a free choice of vendors in their smart homes, and they need the ability to revoke or revise their
privacy choices.

Anonymity: IoT platforms pay scant attention to user anonymity when transmitting data, the researchers note. Future
platforms could, for example, use TOR or similar technologies so that users can't be too deeply profiled based on the
behaviors of their "things".

Security[edit]

A different criticism is that the Internet of Things is being developed rapidly without appropriate consideration of the profound
security challenges involved and the regulatory changes that might be necessary.[104] According to the BI (Business Insider)
Intelligence Survey conducted in the last quarter of 2014, 39% of the respondents said that security is the biggest concern in
adopting Internet of Things technology.[105] In particular, as the Internet of Things spreads widely, cyber attacks are likely to become
an increasingly physical (rather than simply virtual) threat. [106] In a January 2014 article in Forbes, cybersecurity columnist Joseph
Steinberg listed many Internet-connected appliances that can already "spy on people in their own homes" including televisions,
kitchen appliances, cameras, and thermostats.[107] Computer-controlled devices in automobiles such as brakes, engine, locks, hood
and trunk releases, horn, heat, and dashboard have been shown to be vulnerable to attackers who have access to the onboard
network. (These devices are currently not connected to external computer networks, and so are not vulnerable to Internet attacks.)[108]

The U.S. National Intelligence Council in an unclassified report maintains that it would be hard to deny "access to networks of
sensors and remotely-controlled objects by enemies of the United States, criminals, and mischief makers... An open market for
aggregated sensor data could serve the interests of commerce and security no less than it helps criminals and spies identify
vulnerable targets. Thus, massively parallel sensor fusion may undermine social cohesion, if it proves to be fundamentally
incompatible with Fourth-Amendment guarantees against unreasonable search."[109] In general, the intelligence community views
the Internet of Things as a rich source of data.[110]

Virtual Worlds Applications


Social[edit]

Although the social interactions of participants in virtual worlds are often viewed in the context of 3D games, other forms of
interaction are common as well, including forums, blogs, wikis, chatrooms, instant messaging, and video-conferences. Communities
are born in places which have their own rules, topics, jokes, and even language. Members of such communities can find like-minded
people to interact with, whether this be through a shared passion, the wish to share information, or a desire to meet new people and
experience new things. Users may develop personalities within the community adapted to the particular world they are interacting
with, which can impact the way they think and act. Internet friendships and participation in online communities tend to complement
existing friendships and civic participation rather than replacing or diminishing such interactions.[40][41]

Systems that have been designed for a social application include:

Active Worlds

Kaneva

Onverse

SmallWorlds

There.com

Twinity

Whyville

Medical[edit]

Disabled or chronically ill people of any age can benefit enormously from the mental and emotional freedom
gained by temporarily leaving their disabilities behind and doing, through the medium of their avatars, things as simple and
potentially accessible to able, healthy people as walking, running, dancing, sailing, fishing, swimming, surfing, flying, skiing,
gardening, exploring and other physical activities which their illnesses or disabilities prevent them from doing in real life. They may
also be able to socialise and form friendships and relationships much more easily, and avoid the stigma and other obstacles which would
normally be attached to their disabilities. This can be much more constructive, emotionally satisfying and mentally fulfilling than
passive pastimes such as television watching, playing computer games, reading or more conventional types of internet use.[citation needed]

The Starlight Children's Foundation helps hospitalised children (suffering from painful diseases or autism, for example) create a
comfortable and safe environment which can expand their situation and let them experience interactions (when the involvement of
multiple cultures and players from around the world is factored in) they might not otherwise have been able to experience, whether healthy
or sick. Virtual worlds also enable them to experience and act beyond the restrictions of their illness and help to relieve stress.[42]

Virtual worlds can help players become more familiar and comfortable with actions they may feel reluctant or embarrassed to
perform in real life. For example, in World of Warcraft, /dance is the emote for a dance move which a player in the virtual world can
"emote" quite simply. Familiarization with such "emotes" or social skills (such as encouragement, gratitude, problem-solving, and even kissing) in the virtual world via an avatar can make the transfer to similar forms of expression, socialization and
interaction in real life smoother. Interaction with humans through avatars in the virtual world thus has the potential to seriously expand the
mechanics of one's real-life interactions.[original research?]
Commercial[edit]

As businesses compete in the real world, they also compete in virtual worlds. The increase in the buying and
selling of products online (e-commerce), twinned with the rise in the popularity of the internet, has forced businesses to adjust to
accommodate the new market.

Many companies and organizations now incorporate virtual worlds as a new form of advertising. There are many advantages to
these methods of commercialization. An example of this is Apple creating an online store within Second Life. This
allows users to browse the latest and most innovative products. Players cannot actually purchase a product, but having these virtual
stores is a way of reaching a different clientele and customer demographic. The use of advertising within virtual worlds is a
relatively new idea, because virtual worlds are themselves a relatively new technology. Previously, companies would use an advertising
agency to promote their products. With the prospect of commercial success within a virtual world, companies
can reduce cost and time constraints by keeping this work in-house. An obvious advantage is that it reduces the costs and
restrictions that can come into play in the real world.

Using virtual worlds gives companies the opportunity to gauge customer reaction and receive feedback. Feedback can be crucial to
the development of a project, as it tells the creators exactly what users want.[43] It also gives companies an insight into what the
market and customers want from new products, which can give them a competitive edge, and a competitive edge is crucial in
today's ruthless business environment.

Another business use of virtual worlds is as a gathering place. Many businesses are now involved in
business-to-business commercial activity and will create a specific area within a virtual world to carry out their business, within
which all relevant information can be held. This can be useful for a variety of reasons: players can conduct business with
companies on the other side of the world, so there are no geographical limitations, and it can increase company productivity. Knowing
that there is an area where help is on hand can also aid employees. Sun Microsystems has created an island in Second Life
dedicated for the sole use of its employees. This is a place where people can go to seek help, exchange new ideas or
advertise a new product.

Gronstedt identifies additional business applications, including simulations, collaboration, role-playing, mentoring, and data visualization.[44]

According to trade media company Virtual Worlds Management,[45] commercial investments in the "virtual worlds" sector were in
excess of USD 425 million in Q4 2007,[46] and totaled USD 184 million in Q1 2008.[47] However, the selection process for defining a
"virtual worlds" company in this context has been challenged by one industry blog. [48]

E-commerce (legal)

A number of virtual worlds have incorporated systems for the sale of goods through virtual interfaces and using virtual currencies.
Transfers of in-world credits typically are not bound by laws governing commerce. Such transactions may lack the oversight and
protections associated with real-world commerce, and there is potential for fraudulent transactions. One example is that of Ginko
Financial, a banking system featured in Second Life where avatars could deposit real-life currency, converted to Linden Dollars, in
the expectation of a profit. In July 2007, residents of Second Life crowded around its ATMs in an unsuccessful attempt to withdraw
their money. After a few days the ATMs, along with the banks, disappeared altogether. Around $700,000 in real-world money was
reported missing from residents in Second Life. An investigation was launched, but nothing substantial ever came of finding and
punishing the avatar known as Nicholas Portocarrero, the head of Ginko Financial.[49]
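
For readers unfamiliar with how such in-world credit systems operate, the sketch below (in Python) illustrates the basic mechanics of buying and selling credits at a posted exchange rate; the rate, fee, and function names are invented for illustration and do not reflect any particular platform's actual terms.

# Minimal sketch (hypothetical names and rates) of converting real-world currency
# into in-world credits at a posted exchange rate, as such systems typically work.

EXCHANGE_RATE = 250        # hypothetical: in-world credits per 1 unit of real currency
TRANSACTION_FEE = 0.03     # hypothetical: 3% fee retained by the exchange

def buy_credits(real_amount: float) -> int:
    """Return the number of in-world credits purchased with real_amount."""
    after_fee = real_amount * (1 - TRANSACTION_FEE)
    return int(after_fee * EXCHANGE_RATE)

def sell_credits(credits: int) -> float:
    """Convert in-world credits back into real currency (before any withdrawal limits)."""
    return (credits / EXCHANGE_RATE) * (1 - TRANSACTION_FEE)

print(buy_credits(100.0))  # 24250 credits for 100 units of real currency at these numbers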

Civil and criminal laws exist in the real world and are put in place to govern people's behavior. Virtual worlds such as Eve
Online and Second Life also have people and systems that govern them.[50]

Providers of online virtual spaces have more than one approach to governing their environments. Second Life, for instance,
was designed with the expectation that residents would establish their own community rules for appropriate behaviour. Other
virtual worlds, such as Habbo, enforce clear rules for behaviour,[50] as set out in their terms and conditions.[51]

In some instances virtual worlds do not need established rules of conduct because actions such as killing another avatar are
impossible. If needed, however, rule breakers can be punished with fines payable through their virtual bank account, or with
suspension of the player's account.[50]

Instances of real-world theft from a virtual world do exist. Eve Online had an incident in which a bank controller stole around 200
billion credits and exchanged them for real-world cash amounting to 3,115.[52] The player in question has been suspended, as
trading in-game currency for real money is against Eve Online's terms and conditions.[53]
Entertainment

There are many MMORPG virtual worlds on many platforms. Most notable are IMVU for Windows, PlayStation Home for
PlayStation 3, and Second Life for Windows. Many virtual worlds have shut down since launch, however; notable shutdowns
include The Sims Online and The Sims Bustin' Out Online Weekend Mode.
See also: MMOG
Single-player games

Some single-player video games contain virtual worlds populated by non-player characters (NPCs). Many of these allow players
to save the current state of the world instance so they can stop and restart the virtual world at a later date. (This can be
done with some multiplayer environments as well.)

The virtual worlds found in video games are often split into discrete levels.

Single-player games such as Minecraft allow players to optionally create their own world without other players, then combine
skills from the game to build bigger and more intricate environments. These environments can then be accessed by other players;
if the server is made available to other players, they may be able to modify parts of it, such as the structure of the environment.
Education
See also: Virtual learning environment and Online communication between school and home

Virtual worlds represent a powerful new medium for instruction and education that presents many opportunities but also some
challenges.[54] Persistence allows for continuing and growing social interactions, which themselves can serve as a basis for
collaborative education. The use of virtual worlds can give teachers the opportunity to achieve a greater level of student
participation. It allows users to carry out tasks that could be difficult in the real world due to constraints and restrictions such as
cost, scheduling or location. Virtual worlds can adapt and grow to meet different user needs; for example, classroom teachers are
able to use virtual worlds in their classroom by leveraging their interactive whiteboard with the open-source project Edusim. They
can also be a good source of user feedback, overcoming the limitations of typical paper-based resources.

Multi-user virtual worlds with easy-to-use affordances for building are useful in project-based learning. For example, Active
Worlds is used to support classroom teachers in Virginia Beach City Public Schools, the out-of-school NASA RealWorld-InWorld
Engineering Design Challenge, and many after-school and in-school programs in EDUni-NY. Projects range from tightly
scaffolded reflection spaces to open building based on student-centered designs. The New York museums AMNH and NYSci
have used the medium to support STEM learning experiences for their program participants.

Virtual worlds can also be used with virtual learning environments, as in the Sloodle project, which aims to merge Second Life
with Moodle.[55] Another project similar to Sloodle is Utherverse Academy.[56]

Virtual worlds allow users with specific needs and requirements to access and use the same learning materials from home as
they would receive if they were physically present. This can help users keep up to date with the relevant information while also
feeling involved. Having the option to attend a presentation via a virtual world from home or from the workplace can help the user
feel more at ease and comfortable. Although virtual worlds are a good way of communicating and interacting between students
and teachers, they do not completely substitute for actual face-to-face meetings; downsides include losing certain body-language
cues and other more personal aspects.

Some virtual worlds also offer an environment where simulation-based activities and games allow users to experiment with
various phenomena and learn the underlying physics and principles. An example is Whyville, launched in 1999,[57] which targets
kids and teenagers and offers them many opportunities to experiment, understand and learn. Topics covered by such a virtual
world vary from physics to nutrition to ecology. As with every virtual world, user-created content exists and can lead to discovering
entrepreneurship through the internal virtual economy.
Language
Main article: Virtual World Language Learning

Language learning is the most widespread type of education in virtual worlds.[58]


Business

Online training overcomes constraints such as distance, infrastructure, accommodation costs and tight scheduling.
Although video conferencing may be the most common tool, virtual worlds have been adopted by the business environment
for training employees.[59] For example, Second Life has been used in business schools.[60]

Virtual training content resembles traditional tutorials and testing of user knowledge. Despite the lack of face-to-face contact
and impaired social linking, learning efficiency may not be adversely affected, as adults need autonomy in learning and are
more self-directed than younger students.

Some companies and public places allow free virtual access to their facilities as an alternative to a video or picture.

In fiction

Virtual worlds, virtual reality, and cyberspace are popular fictional motifs. A prominent example is the work of William
Gibson. The first was probably John M. Ford's 1980 novel Web of Angels. Virtual worlds are integral
to Tron, Neuromancer, The Lawnmower Man, The Lawnmower Man 2, Ready Player One, Epic, Snow
Crash, .hack//Sign, Real Drive, Sword Art Online, Summer Wars, The Matrix, Ghost in the Shell, the French animated
television series Code Lyoko and Code Lyoko Evolution, and the Cyber World in the popular Viz Media series MegaMan NT
Warrior. In The Planiverse, a 1984 novel by A.K. Dewdney, college students create a virtual world called 2DWorld, leading to
contact with Arde, a two-dimensional parallel universe. In the thirteen-episode cyberpunk anime Serial Experiments Lain, the
main focus is the Wired, a virtual-reality world that governs the sum of all electronic communication and machines; outer
receptors are used to mentally transport a person into the Wired itself as a uniquely different virtual avatar.

The fourth series of the smash hit New Zealand TV series, The Tribe featured the birth of Reality Space and the Virtual World
that was created by Ram, the computer genius/wizard leader of The Technos.

In 2009, BBC Radio 7 commissioned Planet B, set in a virtual world in which a man searches for his girlfriend, believed to be
dead but in fact still alive within the world, called "Planet B". The series is currently the biggest-ever commission for an
original drama series.[61]

In the novel Holo.Wars: The Black Hats, three virtual worlds overlap and are possibly a majority of the milieu in the book. [62]

Gamification
Gamification is the use of game thinking and game mechanics[1] in non-game contexts to engage users in solving problems[2] and
increase users' self-contributions.[3][4] Gamification has been studied and applied in several domains, with some of the main
purposes being to engage (improve user engagement,[5] physical exercise,[6] return on investment, flow,[7][8] data quality, timeliness),
teach (in classrooms, the public or at work[9]), entertain (enjoyment,[8] fan loyalty), measure[10] (for recruiting and employee
evaluation), and improve the perceived ease of use of information systems.[8][11] A review of research on gamification shows that a
majority of studies find positive effects from gamification.[12] However, individual and contextual differences exist.
Applications

Gamification has been widely applied in marketing. Over 70% of Forbes Global 2000 companies surveyed in 2013 said they
planned to use gamification for the purposes of marketing and customer retention.[24] For example, in November 2011 the Australian
broadcast and online media partnership Yahoo!7 launched its Fango mobile app, which TV viewers use to interact with shows via
techniques like check-ins and badges. As of February 2012, the app had been downloaded more than 200,000 times since its
launch.[25] Gamification has also been used in customer loyalty programmes. In 2010, Starbucks gave custom Foursquare badges to
people who checked in at multiple locations and offered discounts to people who checked in most frequently at an individual store.[26]

There have also been proposals to use gamification for competitive intelligence,[27] encouraging people to fill out surveys,[28] and
to do market research on brand recognition.[29] Gamification has also been integrated into Help Desk software. In 2012, Freshdesk, a
SaaS-based customer support product, integrated gamification features, allowing agents to earn badges based on performance.[30]

Gamification has also been used as a tool for customer engagement,[31] and for encouraging desirable website usage behavior.[19]

Additionally, gamification is readily applicable to increasing engagement on sites built on social network services. For example, in
August 2010, one site, DevHub, announced that it had increased the number of users who completed their online tasks from
10% to 80% after adding gamification elements.[32] On the programming question-and-answer site Stack Overflow, users receive
points and/or badges for performing a variety of actions, including spreading links to questions and answers
via Facebook and Twitter. A large number of different badges are available, and when a user's reputation points exceed various
thresholds, he or she gains additional privileges, including, at the higher end, the privilege of helping to moderate the site.
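
The threshold mechanic described above can be sketched in a few lines of Python; the reputation values and privilege names here are illustrative placeholders, not Stack Overflow's actual figures.

# Illustrative sketch of threshold-based privileges of the kind described above;
# the reputation values and privilege names are invented, not a real site's rules.

PRIVILEGE_THRESHOLDS = {
    50: "leave comments",
    500: "review posts",
    2000: "edit others' posts",
    10000: "access moderation tools",
}

def privileges_for(reputation: int) -> list[str]:
    """Return every privilege unlocked at or below the given reputation score."""
    return [name for threshold, name in sorted(PRIVILEGE_THRESHOLDS.items())
            if reputation >= threshold]

print(privileges_for(2300))  # ['leave comments', 'review posts', "edit others' posts"]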

Gamification can be used for ideation, the structured brainstorming to produce new ideas. A study at MIT Sloan found that ideation
games helped participants generate more and better ideas, and compared it to gauging the influence of academic papers by the
number of citations received in subsequent research.[33]

Education and training are areas where there has been interest in gamification.[35][36] Microsoft released the game Ribbon Hero 2 as
an add-on to its Office productivity suite to help train people to use it effectively,[37] and described it as one of the
most popular projects its Office Labs division ever released.[38] The New York City Department of Education, with funding from
the MacArthur Foundation and the Bill and Melinda Gates Foundation, has set up a school called Quest to Learn centred around
game-based learning, with the intent to make education more engaging and relevant to modern kids.[39] SAP has used games to
educate its employees on sustainability.[40] The US military and Unilever have also used gamification in their training.[41] The Khan
Academy is an example of the use of gamification techniques in online education.[42] In August 2009, Gbanga launched the
educational location-based game Gbanga Zooh for Zurich Zoo, which asked participants to actively save endangered animals and
physically bring them back to the zoo. Players maintained virtual habitats across the Canton of Zurich to attract and collect
endangered species of animals.[43] In 2014, the True Life Game project was initiated, with the main purpose of researching the best
ways to apply concepts of gamification and crowdsourcing to lifelong learning.

Applications like Fitocracy and QUENTIQ use gamification to encourage their users to exercise more effectively and improve their
overall health. Users are awarded varying numbers of points for activities they perform in their workouts and gain levels based on
points collected. Users can also complete quests (sets of related activities) and gain achievement badges for fitness milestones.[44]
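
A minimal sketch of this points-and-levels mechanic is shown below; the per-activity point values, the level curve, and the badge milestones are all invented for illustration rather than taken from any particular application.

# Hedged sketch of the points, levels and badge mechanic described above;
# every number here is hypothetical.
import math

ACTIVITY_POINTS = {"run_km": 15, "pushup": 1, "squat": 1, "swim_km": 30}   # hypothetical
BADGES = {100: "Getting Started", 1000: "Regular", 10000: "Iron Will"}     # hypothetical

def workout_points(activities: dict[str, int]) -> int:
    """Sum the points earned for a logged workout."""
    return sum(ACTIVITY_POINTS.get(name, 0) * count for name, count in activities.items())

def level(total_points: int) -> int:
    # A sub-linear curve so that each level needs progressively more points.
    return int(math.sqrt(total_points / 100)) + 1

total = workout_points({"run_km": 5, "pushup": 50})   # 125 points
print(total, level(total), [b for t, b in BADGES.items() if total >= t])
# -> 125 2 ['Getting Started']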

Health Month adds aspects of social gaming by allowing successful users to restore points to users who have failed to meet
certain goals.

Employee productivity is another problem that gamification has been used to tackle. RedCritter Tracker,[45] Playcall,[46] and
Arcaris[47] are examples of management tools that use gamification to improve productivity. Digital Brand Group is the first company
in India to fully gamify its work process, making its work style more engaging and encouraging.

Crowdsourcing has been gamified in games like Foldit, a game designed by the University of Washington, in which players compete
to manipulate proteins into more efficient structures. A 2010 paper in the science journal Nature credited Foldit's 57,000 players with
providing useful results that matched or outperformed algorithmically computed solutions.[48] The ESP Game is a game that is used
to generate image metadata. Google Image Labeler is a version of the ESP Game that Google has licensed to generate its own
image metadata.[49] Research from the University of Bonn used gamification to increase wiki contributions by 62%.[50]

Experts anticipate that the technique will also be applied to health care, financial services, transportation, government,[51]
employee training,[41] and other activities.[52]

Alix Levine, an American security consultant, has described gamification techniques that a number of extremist websites, such
as Stormfront and various terrorism-related sites, use to build loyalty and participation. As an example, Levine mentioned reputation
scores.[53][54] The Anti-Defamation League has noted that some terror groups, such as Hezbollah, have created actual games to
market their ideology to adolescents.[55]

Microsoft has also announced plans to use gamification techniques in its Windows Phone 7 operating system design.[56] While
businesses face the challenge of creating motivating gameplay strategies, what makes for effective gamification remains a key
question.[57]

Gamification has also been applied to authentication. For example, it has been suggested that a game like Guitar Hero can help
someone learn a password implicitly.[58] Furthermore, games have been explored as a way to learn new and complicated passwords;
it has been suggested that such games could be used to "level up" a password, thereby improving its strength over time.[59]
Gamification has also been proposed as a way to select and manage archives.[60] An Australian technology company called Wynbox
has reported success in applying its gamification engine to the hotel booking process.[61]

Holography

Holography is a technique which enables three-dimensional images (holograms) to be made. It involves the use of
a laser, interference, diffraction, light-intensity recording and suitable illumination of the recording. The image changes as the position
and orientation of the viewing system changes in exactly the same way as if the object were still present, thus making the image
appear three-dimensional.

The holographic recording itself is not an image; it consists of an apparently random structure of either varying intensity, density or
profile.
Applications
Art

Early on, artists saw the potential of holography as a medium and gained access to science laboratories to create their work.
Holographic art is often the result of collaborations between scientists and artists, although some holographers would regard
themselves as both an artist and a scientist.

Salvador Dalí claimed to have been the first to employ holography artistically. He was certainly the first and best-known surrealist to
do so, but the 1972 New York exhibit of Dalí holograms had been preceded by the holographic art exhibition that was held at
the Cranbrook Academy of Art in Michigan in 1968 and by the one at the Finch College gallery in New York in 1970, which attracted
national media attention.[45]

During the 1970s, a number of art studios and schools were established, each with their particular approach to holography. Notable
examples include the San Francisco School of Holography established by Lloyd Cross, The Museum of Holography in New York
founded by Rosemary (Possie) H. Jackson, the Royal College of Art in London and the Lake Forest College Symposiums organised
by Tung Jeong (T.J.).[46] None of these studios still exist; however, the Center for the Holographic Arts in New York[47] and the
HOLOcenter in Seoul[48] offer artists a place to create and exhibit work.

During the 1980s, many artists who worked with holography helped the diffusion of this so-called "new medium" in the art world,
such as Harriet Casdin-Silver of the USA, Dieter Jung of Germany, and Moysés Baumstein of Brazil, each one searching for a
proper "language" to use with the three-dimensional work, avoiding the simple holographic reproduction of a sculpture or object. For
instance, in Brazil, many concrete poets (Augusto de Campos, Décio Pignatari, Julio Plaza and José Wagner Garcia, associated
with Moysés Baumstein) found in holography a way to express themselves and to renew Concrete Poetry.

A small but active group of artists still use holography as their main medium, and many more integrate holographic elements into
their work.[49] Some are associated with novel holographic techniques; for example, artist Matt Brand[50] employed computational
mirror design to eliminate image distortion from specular holography.

The MIT Museum[51] and Jonathan Ross[52] both have extensive collections of holography and on-line catalogues of art holograms.
Data storage
Main article: Holographic memory

Holography can be put to a variety of uses other than recording images. Holographic data storage is a technique that can store
information at high density inside crystals or photopolymers. The ability to store large amounts of information in some kind of
medium is of great importance, as many electronic products incorporate storage devices. As current storage techniques such as
Blu-ray Disc reach the limit of possible data density (due to the diffraction-limited size of the writing beams), holographic storage
has the potential to become the next generation of popular storage media. The advantage of this type of data storage is that the
volume of the recording medium is used instead of just the surface. Currently available spatial light modulators (SLMs) can produce
about 1000 different images a second at 1024×1024-bit resolution. With the right type of medium (probably polymers rather than
something like LiNbO3), this would result in a writing speed of about one gigabit per second. Read speeds can surpass this, and
experts believe one-terabit-per-second readout is possible.
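
The writing-speed figure quoted above follows from simple arithmetic, as the short check below shows (assuming one bit per SLM pixel).

# Back-of-the-envelope check of the writing speed quoted above:
# ~1000 page holograms per second, each 1024 x 1024 bits (one bit per SLM pixel).

pages_per_second = 1000
bits_per_page = 1024 * 1024

write_rate_bits = pages_per_second * bits_per_page
print(write_rate_bits / 1e9)   # ~1.05, i.e. roughly one gigabit per second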

In 2005, companies such as Optware and Maxell produced a 120 mm disc that uses a holographic layer to store data to a potential
3.9 TB, a format called Holographic Versatile Disc. As of September 2014, no commercial product has been released.

Another company, InPhase Technologies, was developing a competing format, but went bankrupt in 2011 and all its assets were
sold to Akonia Holographics, LLC.

While many holographic data storage models have used "page-based" storage, where each recorded hologram holds a large
amount of data, more recent research into using submicrometre-sized "microholograms" has resulted in several potential 3D optical
data storage solutions. While this approach to data storage cannot attain the high data rates of page-based storage, the tolerances,
technological hurdles, and cost of producing a commercial product are significantly lower.
Dynamic holography

In static holography, recording, developing and reconstructing occur sequentially, and a permanent hologram is produced.

There also exist holographic materials that do not need the developing process and can record a hologram in a very short time. This
allows one to use holography to perform some simple operations in an all-optical way. Examples of applications of such real-time
holograms include phase-conjugate mirrors ("time-reversal" of light), optical cache memories, image processing (pattern recognition
of time-varying images), and optical computing.

The amount of processed information can be very high (terabits/s), since the operation is performed in parallel on a whole image.
This compensates for the fact that the recording time, which is on the order of a microsecond, is still very long compared with the
processing time of an electronic computer. The optical processing performed by a dynamic hologram is also much less flexible than
electronic processing: on the one hand, the operation must always be performed on the whole image, and on the other, the
operation a hologram can perform is basically either a multiplication or a phase conjugation. In optics, addition and Fourier
transform are already easily performed in linear materials, the latter simply by a lens. This enables some applications, such as a
device that compares images in an optical way.[53]
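
The digital analogue of such an optical image comparator is cross-correlation computed with Fourier transforms, which the lens performs optically. The NumPy sketch below illustrates the idea on synthetic data; it is a conceptual stand-in, not a model of any specific holographic device.

# Digital analogue of optical image comparison: multiply the spectra of two images
# (what the hologram does) and transform back; the correlation peak reveals the match.
import numpy as np

a = np.random.rand(256, 256)
b = np.roll(a, (5, -3), axis=(0, 1))       # same scene, shifted by (5, -3) pixels

spectrum = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
correlation = np.fft.ifft2(spectrum).real  # peak location gives b's shift relative to a

peak = np.unravel_index(np.argmax(correlation), correlation.shape)
print(peak)                                # (5, 253), i.e. a shift of (5, -3) modulo 256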

The search for novel nonlinear optical materials for dynamic holography is an active area of research. The most common materials
are photorefractive crystals, but holograms have also been generated in semiconductors and semiconductor heterostructures (such
as quantum wells), atomic vapors and gases, plasmas and even liquids.

A particularly promising application is optical phase conjugation. It allows the removal of the wavefront distortions a light beam
receives when passing through an aberrating medium, by sending it back through the same aberrating medium with a conjugated
phase. This is useful, for example, in free-space optical communications to compensate for atmospheric turbulence (the
phenomenon that gives rise to the twinkling of starlight).
Hobbyist use

Since the beginning of holography, experimenters have explored its uses. Starting in 1971, Lloyd Cross founded the San Francisco
School of Holography and began to teach amateurs the methods of making holograms with inexpensive equipment. This method
relied on the use of a large table of deep sand to hold the optics rigid and damp vibrations that would destroy the image.

Many of these holographers would go on to produce art holograms. In 1983, Fred Unterseher published the Holography Handbook,
a remarkably easy-to-read description of making holograms at home. This brought in a new wave of holographers and gave simple
methods to use the then-available AGFA silver halide recording materials.

In 2000, Frank DeFreitas published the Shoebox Holography Book and introduced the use of inexpensive laser pointers to
countless hobbyists. This was a very important development for amateurs, as the cost of a 5 mW laser dropped from $1,200 to $5
as semiconductor laser diodes reached the mass market. There are now hundreds to thousands of amateur holographers worldwide.

By late 2000, holography kits with the inexpensive laser pointer diodes entered the mainstream consumer market. These kits
enabled students, teachers, and hobbyists to make many kinds of holograms without specialized equipment, and became popular
gift items by 2005.[54] The introduction of holography kits with self-developing film plates in 2003 made it possible for hobbyists
to make holograms without using chemical developers.[55]

In 2006, a large number of surplus Holography Quality Green Lasers (Coherent C315) became available and put Dichromated
Gelatin (DCG) within the reach of the amateur holographer. The holography community was surprised at the amazing sensitivity of
DCG to green light. It had been assumed that the sensitivity would be non-existent. Jeff Blyth responded with the G307 formulation
of DCG to increase the speed and sensitivity to these new lasers.[56]

Many film suppliers have come and gone from the silver-halide market. While other film manufacturers have filled in the voids, many
amateurs are now making their own film. The favorite formulations are Dichromated Gelatin, Methylene Blue Sensitised
Dichromated Gelatin and Diffusion Method Silver Halide preparations. Jeff Blyth has published very accurate methods for making
film in a small lab or garage.[57]

A small group of amateurs are even constructing their own pulsed lasers to make holograms of moving objects. [58]
Holographic interferometry
Main article: holographic interferometry

Holographic interferometry (HI) is a technique that enables static and dynamic displacements of objects with optically rough
surfaces to be measured to optical interferometric precision (i.e. to fractions of a wavelength of light). [59][60] It can also be used to
detect optical-path-length variations in transparent media, which enables, for example, fluid flow to be visualized and analyzed. It
can also be used to generate contours representing the form of the surface.

It has been widely used to measure stress, strain, and vibration in engineering structures.
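
As a worked example of this sub-wavelength precision, under the common simplifying assumption that illumination and viewing are both roughly normal to the surface, each fringe corresponds to about half a wavelength of out-of-plane displacement; the numbers below are illustrative.

# Worked example of the "fractions of a wavelength" precision mentioned above,
# assuming illumination and viewing both roughly normal to the surface, so that
# one fringe corresponds to about half a wavelength of out-of-plane displacement.

wavelength_nm = 633        # helium-neon laser, a common choice
fringe_count = 4.5         # fringes counted between the two states of the object

displacement_nm = fringe_count * wavelength_nm / 2
print(displacement_nm)     # ~1424 nm, i.e. about 1.4 micrometres of deformation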
Interferometric microscopy
Main article: Interferometric microscopy

The hologram keeps the information on the amplitude and phase of the field. Several holograms may keep information about the
same distribution of light, emitted in various directions. The numerical analysis of such holograms allows one to emulate a
large numerical aperture, which, in turn, enables enhancement of the resolution of optical microscopy. The corresponding technique
is called interferometric microscopy. Recent achievements of interferometric microscopy allow one to approach the quarter-wavelength limit of resolution.[61]
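
The connection between emulated numerical aperture and resolution can be illustrated with the standard Abbe estimate d = λ/(2·NA); the wavelength and NA values below are arbitrary examples, with the largest effective NA giving roughly the quarter-wavelength figure mentioned above.

# Rough illustration of how a larger (emulated) numerical aperture improves resolution,
# using the textbook Abbe estimate d = wavelength / (2 * NA). Values are arbitrary.

wavelength_nm = 532  # green laser, an example choice

for effective_na in (0.25, 0.5, 1.0, 2.0):
    resolution_nm = wavelength_nm / (2 * effective_na)
    print(f"NA {effective_na:>4}: ~{resolution_nm:.0f} nm resolvable detail")
# At an effective NA of 2 the estimate reaches wavelength / 4 = 133 nm.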
Sensors or biosensors
Main article: Holographic sensor

The hologram is made with a modified material that interacts with certain molecules, generating a change in the fringe periodicity or
refractive index, and therefore in the color of the holographic reflection.[62][63]
Security
Main article: Security hologram

Security holograms are very difficult to forge, because they are replicated from a master hologram that requires expensive,
specialized and technologically advanced equipment. They are used widely in many currencies, such as the Brazilian 20, 50, and
100-reais notes; British 5, 10, and 20-pound notes; South Korean 5,000, 10,000, and 50,000-won notes; Japanese 5,000 and 10,000
yen notes; and all the currently circulating banknotes of the Canadian dollar, Danish krone, and euro. They can also be found in
credit and bank cards as well as passports, ID cards, books, DVDs, and sports equipment.

Covertly storing information within a full-colour image hologram was achieved in Canada, in 2008, at the UHR lab. The method used
a fourth wavelength, aside from the RGB components of the object and reference beams, to record additional data, which could be
retrieved only with the correct key combination of wavelength and angle. This technique remained in the prototype stage and was
never developed for commercial applications.
Other applications

Holographic scanners are in use in post offices, larger shipping firms, and automated conveyor systems to determine the
three-dimensional size of a package. They are often used in tandem with checkweighers to allow automated pre-packing of given
volumes, such as a truck or pallet for bulk shipment of goods. Holograms produced in elastomers can be used as stress-strain
reporters due to their elasticity and compressibility; the pressure and force applied are correlated to the reflected wavelength, and
therefore the color.[64]

Augmented Reality

Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or
supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general
concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a
computer. As a result, the technology functions by enhancing one's current perception of reality.[1] By contrast, virtual reality replaces
the real world with a simulated one.[2][3] Augmentation is conventionally in real time and in semantic context with environmental
elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer
vision and object recognition), the information about the user's surrounding real world becomes interactive and digitally
manipulable. Artificial information about the environment and its objects can be overlaid on the real world.
Applications

Augmented reality has many applications. First used for military, industrial, and medical applications, it has also been applied to
commercial and entertainment areas.[62]
Archaeology

AR can be used to aid archaeological research, by augmenting archaeological features onto the modern landscape, enabling
archaeologists to formulate conclusions about site placement and configuration. [63]

Another application of AR in this field is the possibility for users to rebuild ruins, buildings, or even landscapes as they formerly
existed.[64][65]
Architecture

AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed into a real life local view
of a property before the physical building is constructed there; this was demonstrated publicly by Trimble Navigation in 2004. AR can
also be employed within an architect's work space, rendering into their view animated 3D visualizations of their 2D drawings.

Architecture sight-seeing can be enhanced with AR applications allowing users viewing a building's exterior to virtually see through
its walls, viewing its interior objects and layout.[66][67][68]
Art

AR technology has helped disabled individuals create art by using eye tracking to translate a user's eye movements into drawings
on a screen.[69] An item such as a commemorative coin can be designed so that, when scanned by an AR-enabled device, it displays
additional objects and layers of information that were not visible in a real-world view of it.[70][71] In 2013, L'Oreal used CrowdOptic
technology to create an augmented reality experience at the seventh annual Luminato Festival in Toronto, Canada.[22]
Commerce

AR can enhance product previews, for example by allowing a customer to view what's inside a product's packaging without opening
it.[72] AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate
views of additional content such as customization options and additional images of the product in use.[73][74] AR is used to integrate
print and video marketing. Printed marketing material can be designed with certain "trigger" images that, when scanned by an AR-enabled
device using image recognition, activate a video version of the promotional material. A major difference between
augmented reality and straightforward image recognition is that multiple media can be overlaid at the same time in the view
screen, such as social media share buttons, in-page video, even audio and 3D objects. Traditional print-only publications are using
augmented reality to connect many different types of media.[75][76][77][78]
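
Conceptually, the trigger-image flow works as sketched below; recognition and rendering are stubbed out, and every name and URL in the snippet is hypothetical rather than part of any real AR toolkit.

# Conceptual, runnable sketch of the "trigger image" flow described above: recognize a
# printed marker in a frame, then overlay the linked media. Recognition and rendering
# are stubbed out (no real camera or vision library); all names and URLs are invented.

TRIGGER_CONTENT = {
    "spring_catalog_p12": "https://example.com/videos/spring_promo.mp4",
    "soda_label_2014": "https://example.com/3d/can_model.glb",
}

def find_trigger(frame: dict) -> str | None:
    # Stand-in for real image recognition: here the frame simply carries a label.
    return frame.get("detected_marker")

def augment(frame: dict) -> dict:
    marker = find_trigger(frame)
    if marker in TRIGGER_CONTENT:
        # In a real system this would render video or 3D content registered to the marker pose.
        frame = {**frame, "overlay": TRIGGER_CONTENT[marker]}
    return frame

print(augment({"pixels": "...", "detected_marker": "spring_catalog_p12"}))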
Construction

With the continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models
of construction sites, underground structures, cables and pipes using mobile devices.[79] Following the Christchurch earthquake, the
University of Canterbury released CityViewAR, which enabled city planners and engineers to visualize buildings that had been
destroyed in the earthquake.[80] Not only did this provide planners with tools to reference the previous cityscape, but it also served as
a reminder of the magnitude of the devastation caused, as entire buildings had been demolished.
Education

Augmented reality applications can complement a standard curriculum. Text, graphics, video and audio can be superimposed into a
student's real-time environment. Textbooks, flashcards and other educational reading material can contain embedded markers
that, when scanned by an AR device, produce supplementary information for the student rendered in a multimedia format.[81][82][83]

Students can participate interactively with computer-generated simulations of historical events, exploring and learning details of
each significant area of the event site.[84] In higher education, there are several applications that can be used. For instance,
Construct3D, a Studierstube system, allows students to learn mechanical engineering concepts, math or geometry; this is an active
learning process in which students learn to learn with technology.[85] AR can aid students in understanding chemistry by allowing
them to visualize the spatial structure of a molecule and interact with a virtual model of it that appears, in a camera image,
positioned at a marker held in their hand.[86] It can also enable students of physiology to visualize different systems of the human
body in three dimensions.[87] Augmented reality technology also permits learning via remote collaboration, in which students and
instructors who are not at the same physical location can share a common virtual learning environment populated by virtual objects
and learning materials and interact with one another within that setting.[88]

This resource could also be taken advantage of in primary school. Young children learn through experiences and often need to see
things in order to understand them. For instance, they can learn about astronomy, which is usually difficult for them to grasp; with
such a device, children can understand the Solar System better because they can see it in 3D, and even children under 6 years old
could follow the material this way. Learners could also supplement the pictures in their science books with this resource. To teach
about bones or organs, a piece of paper carrying an embedded marker could be placed on the body, showing the bone or organ
that lies under the paper; the teacher would only need to press a button when children moved the paper, so that the same
embedded markers could be used to teach another part of the body.
Emergency management / search and rescue
LandForm+ is a geographic augmented reality system used for search and rescue, and emergency management.

Augmented reality systems are used in public safety situations, from super storms to suspects at large. Two articles
from Emergency Management magazine discuss the power of the technology for emergency management. The first is "Augmented
Reality--Emerging Technology for Emergency Management" by Gerald Baron.[89] Per Adam Crowe: "Technologies like augmented
reality (ex: Google Glass) and the growing expectation of the public will continue to force professional emergency managers to
radically shift when, where, and how technology is deployed before, during, and after disasters."[90]

In another example, a search aircraft is looking for a lost hiker in rugged mountain terrain. Augmented reality systems provide aerial
camera operators with a geographic awareness of forest road names and locations blended with the camera video. As a result, the
camera operator is better able to search for the hiker, knowing the geographic context of the camera image. Once the hiker is found,
the operator can more efficiently direct rescuers to the hiker's location.[91]
Everyday

Since the 1970s and early 1980s, Steve Mann has been developing technologies meant for everyday use, i.e. "horizontal" across all
applications rather than a specific "vertical" market. Examples include Mann's "EyeTap Digital Eye Glass", a general-purpose seeing
aid that does dynamic-range management (HDR vision) and overlays, underlays, and simultaneous augmentation and diminishment
(e.g. diminishing the electric arc while looking at a welding torch).[92]

Gaming
See also: List of augmented reality software Games

Augmented reality allows gamers to experience digital game play in a real-world environment. In the last 10 years there have been
many improvements in technology, resulting in better movement detection, enabling devices such as the Wii, and even direct
detection of the player's movements.[93]
Industrial design

AR can help industrial designers experience a product's design and operation before completion. Volkswagen uses AR for
comparing calculated and actual crash test imagery.[94] AR can be used to visualize and modify a car body structure and engine
layout. AR can also be used to compare digital mock-ups with physical mock-ups for finding discrepancies between them. [95][96]
Medical

Augmented reality can provide the surgeon with information that is otherwise hidden, such as the heartbeat rate, the
blood pressure, the state of the patient's organ, etc. AR can be used to let a doctor look inside a patient by combining one source of
images, such as an X-ray, with another, such as video.

Examples include a virtual X-ray view based on prior tomography or on real time images from ultrasound and confocal
microscopy probes,[97] visualizing the position of a tumor in the video of an endoscope,[98] or radiation exposure risks from X-ray
imaging devices.[99][100] AR can enhance viewing a fetus inside a mother's womb.[101] Also, patients wearing Google Glass can be
reminded to take medications.[102]
Beauty

In 2014 the company L'Oreal Paris started developing a smartphone and tablet application called "Makeup Genius", which lets
users try out make-up and beauty styles utilising the device's front-facing camera and display.[103]
Spatial immersion and interaction

Augmented reality applications, running on handheld devices utilised as virtual reality headsets, can also digitise human presence
in space and provide a computer-generated model of the user in a virtual space, where they can interact and perform various
actions. Such capabilities are demonstrated by "project Anywhere", developed by a postgraduate student at ETH Zurich, which was
dubbed an "out-of-body experience".[7][8][9]
Military

In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier's goggles in real
time. From the soldier's viewpoint, people and various objects can be marked with special indicators to warn of potential dangers.

Virtual maps and 360° view camera imaging can also be rendered to aid a soldier's navigation and battlefield perspective, and this
can be transmitted to military leaders at a remote command center.[104]

An interesting application of AR occurred when Rockwell International created video map overlays of satellite and orbital debris
tracks to aid in space observations at Air Force Maui Optical System. In their 1993 paper "Debris Correlation Using the Rockwell
WorldView System" the authors describe the use of map overlays applied to video from space surveillance telescopes. The map
overlays indicated the trajectories of various objects in geographic coordinates. This allowed telescope operators to identify
satellites, and also to identify - and catalog - potentially dangerous space debris.[105]

Starting in 2003 the US Army integrated the SmartCam3D augmented reality system into the Shadow Unmanned Aerial System to
aid sensor operators using telescopic cameras to locate people or points of interest. The system combined fixed geographic
information, including street names, points of interest, airports and railroads, with live video from the camera system. The system
offered a "picture in picture" mode that allowed it to show a synthetic view of the area surrounding the camera's field of view.
This helps solve a problem in which the field of view is so narrow that it excludes important context, as if "looking through a soda
straw". The system displays real-time friend/foe/neutral location markers blended with live video, providing the operator with
improved situational awareness.

Researchers at USAF Research Lab (Calhoun, Draper et al.) found an approximately two-fold increase in the speed at which UAV
sensor operators found points of interest using this technology.[106] This ability to maintain geographic awareness quantitatively
enhances mission efficiency. The system is in use on the US Army RQ-7 Shadow and the MQ-1C Gray Eagle Unmanned Aerial
Systems.
Navigation
See also: Automotive navigation system

AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile's windshield indicating
destination directions and meter readings, weather, terrain, road conditions and traffic information, as well as alerts to potential
hazards in the vehicle's path.[107][108][109] Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor
important information such as a ship's heading and speed while moving throughout the bridge or performing other tasks.[110]

The NASA X-38 was flown using a Hybrid Synthetic Vision system that overlaid map data on video to provide enhanced navigation
for the spacecraft during flight tests from 1998 to 2002. It used the LandForm software and was useful for times of limited visibility,
including an instance when the video camera window frosted over, leaving astronauts to rely on the map overlays.[111] The LandForm
software was also test flown at the Army Yuma Proving Ground in 1999, with map markers indicating runways, the air traffic control
tower, taxiways, and hangars overlaid on the video.[112]
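
The essence of such overlays is projecting a georeferenced point into the camera image so that a label can be drawn at the right pixel. The sketch below uses a basic pinhole camera model with invented parameters; it illustrates the principle, not the LandForm implementation.

# Simplified sketch of a georeferenced overlay: project a point (here already expressed
# as an offset from the camera, in metres) into pixel coordinates with a pinhole camera
# model, so a label can be drawn on the video frame. The camera parameters are invented.
import numpy as np

focal_px = 800.0                  # focal length in pixels (hypothetical)
cx, cy = 640.0, 360.0             # principal point of a 1280x720 frame

def project(point_cam: np.ndarray) -> tuple[float, float] | None:
    """point_cam = (x right, y down, z forward) in camera coordinates, metres."""
    x, y, z = point_cam
    if z <= 0:                    # behind the camera: nothing to draw
        return None
    return (cx + focal_px * x / z, cy + focal_px * y / z)

runway_threshold = np.array([120.0, 35.0, 900.0])   # 900 m ahead, 120 m right, 35 m below
print(project(runway_threshold))  # ~(746.7, 391.1): draw the "runway" label there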

Office workplace

AR can help facilitate collaboration among distributed team members in a work force via conferences with real and virtual
participants. AR tasks can include brainstorming and discussion meetings utilizing common visualization via touch screen tables,
interactive digital whiteboards, shared design spaces, and distributed control rooms.[113][114][115]
Sports and entertainment

AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay
augmentation through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line
seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. AR
is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the
playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line
across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current
race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance and snooker
ball trajectories.[48][116]

AR can enhance concert and theater performances. For example, artists can allow listeners to augment their listening experience by
adding their performance to that of other bands/groups of users.[117][118][119]

The gaming industry has benefited greatly from the development of this technology. A number of games have been developed for
prepared indoor environments; early AR games include AR air hockey, collaborative combat against virtual enemies, and
AR-enhanced pool games. A significant number of games now incorporate AR, and the introduction of the smartphone has made
its impact even bigger.[120][121]
Task support

Complex tasks such as assembly, maintenance, and surgery can be simplified by inserting additional information into the field of
view. For example, labels can be displayed on parts of a system to clarify operating instructions for a mechanic who is performing
maintenance on the system.[122][123] Assembly lines gain many benefits from the usage of AR. In addition to Boeing, BMW and
Volkswagen are known for incorporating this technology into their assembly lines to improve their manufacturing and assembly
processes.[124][125][126] Big machines are difficult to maintain because of their multiple layers and structures. With the use of AR,
workers can complete their job much more easily because AR permits them to look through the machine as if with X-ray vision,
pointing them to the problem right away.[127]

Television

Weather visualizations were the first application of Augmented Reality to television. It has now become common in weathercasting
to display full motion video of images captured in real-time from multiple cameras and other imaging devices. Coupled with 3D
graphics symbols and mapped to a common virtual geospace model, these animated visualizations constitute the first true
application of AR to TV.

Augmented reality is starting to allow Next Generation TV viewers to interact with the programs they are watching. They can place
objects into an existing program and interact with them, for example by moving them around; such objects can include avatars of
real people, shown in real time, who are also watching the same program.[130]
Tourism and sightseeing

Augmented reality applications can enhance a user's experience when traveling by providing real-time informational displays
regarding a location and its features, including comments made by previous visitors of the site. AR applications allow tourists to
experience simulations of historical events, places and objects by rendering them into their current view of a landscape.[131][132][133] AR
applications can also present location information by audio, announcing features of interest at a particular site as they become
visible to the user.

3D Printing

3D printing (or additive manufacturing, AM) is any of various processes used to make a three-dimensional object.[1] In 3D
printing, additive processes are used, in which successive layers of material are laid down under computer control.[2] These objects
can be of almost any shape or geometry, and are produced from a 3D model or other electronic data source. A 3D printer is a type
of industrial robot.

3D printing in the term's original sense refers to processes that sequentially deposit material onto a powder bed with inkjet printer
heads. More recently the meaning of the term has expanded to encompass a wider variety of techniques such
as extrusion and sintering based processes. Technical standards generally use the term additive manufacturing for this broader
sense.
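
The layer-by-layer principle can be illustrated with a toy slicer that traces the perimeter of a simple shape at successive heights and emits G-code-like moves; the parameters and the simplified G-code dialect below are for illustration only.

# Minimal sketch of the layer-by-layer idea: slice a simple shape (a vertical cylinder)
# into horizontal layers and emit simplified G-code-like perimeter moves.
import math

layer_height = 0.2      # mm
radius, height = 10.0, 1.0
segments = 8            # perimeter resolution (coarse, to keep the output short)

layers = round(height / layer_height)
for layer in range(1, layers + 1):
    z = layer * layer_height
    print(f"; layer {layer}")
    print(f"G1 Z{z:.2f}")                      # move the nozzle up to the new layer
    for i in range(segments + 1):              # trace the circular perimeter
        angle = 2 * math.pi * i / segments
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        print(f"G1 X{x:.2f} Y{y:.2f} E1")      # extrude while moving to the next point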
Manufacturing applications

Additive manufacturing's earliest applications have been on the toolroom end of the manufacturing spectrum. For example, rapid
prototyping was one of the earliest additive variants, and its mission was to reduce the lead time and cost of developing prototypes
of new parts and devices, which was earlier only done with subtractive toolroom methods (typically slowly and expensively).[82] With
technological advances in additive manufacturing, however, and the dissemination of those advances into the business world,
additive methods are moving ever further into the production end of manufacturing in creative and sometimes unexpected ways.[82]
Parts that were formerly the sole province of subtractive methods can now in some cases be made more profitably via additive
ones.

Standard applications include design visualisation, prototyping/CAD, metal casting, architecture, education, geospatial, healthcare,
and entertainment/retail.
Distributed manufacturing
Main article: 3D printing marketplace

Additive manufacturing in combination with cloud computing technologies allows decentralized and geographically independent
distributed production.[83] Distributed manufacturing as such is carried out by some enterprises; there is also a service to put people
needing 3D printing in contact with owners of printers.[84]

Some companies offer on-line 3D printing services to both commercial and private customers,[85] working from 3D designs uploaded
to the company website. 3D-printed designs are either shipped to the customer or picked up from the service provider.[86]

Mass customization

Companies have created services where consumers can customise objects using simplified web based customisation software, and
order the resulting items as 3D printed unique objects.[87][88] This now allows consumers to create custom cases for their mobile
phones.[89] Nokia has released the 3D designs for its case so that owners can customise their own case and have it 3D printed. [90]
Rapid manufacturing

Advances in RP technology have introduced materials that are appropriate for final manufacture, which has in turn introduced the
possibility of directly manufacturing finished components. One advantage of 3D printing for rapid manufacturing lies in the relatively
inexpensive production of small numbers of parts.

Rapid manufacturing is a new method of manufacturing and many of its processes remain unproven. 3D printing is now entering the
field of rapid manufacturing and was identified as a "next level" technology by many experts in a 2009 report.[91] One of the most
promising processes is the adaptation of selective laser sintering (SLS) or direct metal laser sintering (DMLS), some of the
better-established rapid prototyping methods. As of 2006, however, these techniques were still very much in their infancy, with many
obstacles to be overcome before RM could be considered a realistic manufacturing method.[92]
Rapid prototyping
Main article: rapid prototyping

Industrial 3D printers have existed since the early 1980s and have been used extensively for rapid prototyping and research
purposes. These are generally larger machines that use proprietary powdered metals, casting media (e.g. sand), plastics, paper or
cartridges, and are used for rapid prototyping by universities and commercial companies.
Research

3D printing can be particularly useful in research labs due to its ability to make specialised, bespoke geometries. In 2012 a proof of
principle project at the University of Glasgow, UK, showed that it is possible to use 3D printing techniques to assist in the production
of chemical compounds. They first printed chemical reaction vessels, then used the printer to deposit reactants into them.[93] They
have produced new compounds to verify the validity of the process, but have not pursued anything with a particular application. [93]
Food

Cornell Creative Machines Lab announced in 2012 that it was possible to produce customised food with 3D Hydrocolloid Printing.[94]
Additive manufacturing of food is currently being developed by squeezing out food, layer by layer, into three-dimensional
objects. A large variety of foods are appropriate candidates, such as chocolate and candy, and flat foods such as crackers, pasta,[95]
and pizza.[96]

Professor Leroy Cronin of Glasgow University proposed in a 2012 TED Talk that it was possible to use chemical inks to print
medicine.[97]
Industrial applications
Apparel

3D printing has spread into the world of clothing, with fashion designers experimenting with 3D-printed bikinis, shoes, and dresses.[98]
In commercial production, Nike is using 3D printing to prototype and manufacture the 2012 Vapor Laser Talon football shoe for
players of American football, and New Balance is 3D manufacturing custom-fit shoes for athletes.[98][99]

3D printing has come to the point where companies are printing consumer-grade eyewear with on-demand custom fit and styling
(although they cannot print the lenses). On-demand customization of glasses is possible with rapid prototyping.[100]
Automobiles

In early 2014, the Swedish supercar manufacturer Koenigsegg announced the One:1, a supercar that utilises many 3D-printed
components. In the limited run of vehicles Koenigsegg produces, the One:1 has side-mirror internals, air ducts, titanium exhaust
components, and even complete turbocharger assemblies that have been 3D printed as part of the manufacturing process.[101]

An American company, Local Motors, is working with Oak Ridge National Laboratory and Cincinnati Incorporated to develop
large-scale additive manufacturing processes suitable for printing an entire car body.[102] The company plans to print the vehicle live in
front of an audience in September 2014 at the International Manufacturing Technology Show. "Produced from a new fiber-reinforced
thermoplastic strong enough for use in an automotive application, the chassis and body without drivetrain, wheels and brakes
weighs a scant 450 pounds and the completed car is comprised of just 40 components, a number that gets smaller with every
revision."[103]

Urbee was the first car in the world whose body was made using 3D printing technology (its bodywork and windows were
"printed"). Created in 2010 through a partnership between the US engineering group Kor Ecologic and the
company Stratasys (manufacturer of Stratasys 3D printers), it is a hybrid vehicle with a futuristic look.[104][105][106]
Construction

An additional use being developed is building printing, or using 3D printing to build buildings.[107][108][109][110] This could allow faster
construction for lower costs, and has been investigated for construction of off-Earth habitats.[111][112] For example, the Sinterhab
project is researching a lunar base constructed by 3D printing using lunar regolith as a base material. Instead of adding a binding
agent to the regolith, researchers are experimenting with microwave sintering to create solid blocks from the raw material.[113]

Electric motors and generators

The magnetic cores of electric machines (motors and generators) require thin laminations of specially preprocessed electrical steel that are insulated from each other to reduce core iron losses. 3D printing of any product whose core materials have special properties or forms that must be preserved during manufacture, such as material density, non-crystalline or nano-crystalline atomic structures, or material isolation, may only be compatible with a hybrid 3D printing method that avoids core-altering steps such as sintering, fusing or deposition. Preprocessing the raw material is not an extra manufacturing step, because all 3D printing methods require material preprocessed for compatibility with the method, such as powdered metal for deposition or fusion printing. To conveniently handle the very thin insulated laminations of amorphous or nano-crystalline metal ribbon, which can reduce electric machine core loss by up to 80%, the well-known Laminated Object Manufacturing (LOM) method may show some compatibility for printing electric machines, but only if it avoids altering the non-crystalline structure of the amorphous material while forming the slot channels that hold the machine windings, and during post-manufacturing processes such as grinding the air-gap surface to flat precision, all while enhancing the packing density of the material. The patented 3D printer called MotorPrinter was specifically conceived and developed as the only 3D printer of axial-flux electric machine cores of any category or type, such as induction, permanent magnet, reluctance and Synchro-Sym, with high-performance core materials such as amorphous metals, while also building the integral frame and bearing assembly from raw structural steel rather than assembling it from an inventory of pre-manufactured precision castings. MotorPrinter addresses four otherwise elusive problems in 3D printing electric machines:
1. alteration of the electrical material by cutting heat stress, avoided by cutting the slots before the ribbon is wrapped into the axial-flux form;
2. imprecise alignment of slot channels when the next slot position is calculated dynamically from the number of wraps and a varying ribbon thickness, avoided by a slot template method that precisely aligns each remotely cut slot onto the slots of the previous wrap without further calculation;
3. material alteration by secondary operations, such as grinding to achieve a precision-flat air-gap surface, avoided by forcing the ribbon to assume the precision flatness of the printer's rotary table on each wrap; and
4. fixed rectangular slot channels, replaced by a template method that aligns slots of any shape for optimal performance.[114]
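The slot-alignment issue in point 2 can be illustrated with a short, purely hypothetical calculation (it is not the patented MotorPrinter algorithm). If slot positions are predicted from the wrap count using a nominal ribbon thickness, even a small thickness tolerance accumulates over hundreds of wraps and shifts the actual slot radius away from the prediction; the starting radius, wrap count and tolerance below are illustrative assumptions.

```python
# Illustrative sketch only: why predicting slot positions from wrap count and a
# *nominal* ribbon thickness drifts when the real thickness varies slightly,
# motivating alignment to the slots of the previous wrap instead.

import random

def slot_radius_drift(wraps=500, nominal_t_mm=0.025, tolerance=0.05, seed=1):
    """Compare predicted vs. accumulated radius after wrapping a thin ribbon."""
    random.seed(seed)
    actual_r = predicted_r = 10.0          # assumed starting mandrel radius in mm
    for _ in range(wraps):
        predicted_r += nominal_t_mm        # position computed in advance from wrap count
        actual_r += nominal_t_mm * (1 + random.uniform(-tolerance, tolerance))
    return predicted_r, actual_r, abs(actual_r - predicted_r)

pred, act, err = slot_radius_drift()
print(f"predicted radius {pred:.3f} mm, actual {act:.3f} mm, drift {err:.3f} mm")
# Even a few percent of thickness variation accumulates over hundreds of wraps,
# misaligning slot channels whose positions were calculated in advance.
```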

By preserving the superior molecular performance of the optimized, pre-processed electrical materials, such as amorphous metal ribbon and winding conductors, MotorPrinter provides rapid just-in-time manufacture of a variety of axial-flux electric motor and generator cores with an integral frame and bearing assembly. These include Synchro-Sym, which is claimed to be the only symmetrically stable brushless wound-rotor [synchronous] doubly-fed electric motor and generator system that operates from sub-synchronous to super-synchronous speeds without permanent magnets and with unprecedented cost-performance. More recently, a well-known research facility has chosen instead to modify the electric motor topology for manufacturing compatibility with its own form of 3D printing, where possible, whereas MotorPrinter was designed to be manufacturing-universal for any axial-flux electric motor type, such as induction, reluctance or permanent magnet motors, and in particular for the Synchro-Sym electric machine technology, which eliminates extraneous electromagnetic components that do not contribute to the production of work, such as permanent magnets, reluctance saliencies and squirrel cage windings.

Under a contract from the US Department of Energy's ARPA-E (Advanced Research Projects Agency-Energy) program, a team from the United Technologies Research Center was, as of 2014, working toward producing a 30 kW induction motor using only additive manufacturing methods. The goal is an additively manufactured induction motor capable of delivering 50 kW peak and 30 kW continuous power over a speed range of zero to 12,000 rpm, using motor technology that does not involve rare-earth magnets.[115]
Firearms
Main article: 3D printed firearms

In 2012, the US-based group Defense Distributed disclosed plans to "[design] a working plastic gun that could be downloaded and
reproduced by anybody with a 3D printer."[116][117] Defense Distributed has also designed a 3D printable AR-15 type rifle lower
receiver (capable of lasting more than 650 rounds) and a 30 round M16 magazine.[118] The AR-15 has multiple receivers (both an
upper and lower receiver), but the legally controlled part is the one that is serialised (the lower, in the AR-15's case). Soon after
Defense Distributed succeeded in designing the first working blueprint to produce a plastic gun with a 3D printer in May 2013,
the United States Department of State demanded that they remove the instructions from their website.[119] After Defense Distributed
released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining[120][121] may have on gun control effectiveness.[122][123][124][125]

In 2014, a man from Japan became the first person in the world to be imprisoned for making 3D printed firearms.[126] Yoshitomo
Imura posted videos and blueprints of the gun online and was sentenced to jail for two years. Police found at least two guns in his
household that were capable of firing bullets.[126]
Medical

3D printing has been used to print patient-specific implants and devices for medical use. Successful operations include a titanium pelvis implanted into a British patient, a titanium lower jaw transplanted into a Belgian patient,[127] and a plastic tracheal splint
for an American infant.[128] The hearing aid and dental industries are expected to be the biggest area of future development using the
custom 3D printing technology.[129] In March 2014, surgeons in Swansea used 3D printed parts to rebuild the face of a motorcyclist
who had been seriously injured in a road accident.[130] Research is also being conducted on methods to bio-print replacements for
lost tissue due to arthritis and cancer.[131]

On 24 October 2014, a five-year-old girl born without fully formed fingers on her left hand became the first child in the UK to have a
prosthetic hand made with 3D printing technology. Her hand was designed by US-based E-nable, an open source design
organisation which uses a network of volunteers to design and make prosthetics mainly for children. The prosthetic hand was based
on a plaster cast made by her parents.[132]

Printed prosthetics have been used in the rehabilitation of crippled animals. In 2013, a 3D printed foot let a crippled duckling walk again.[133] In 2014, a chihuahua born without front legs was fitted with a harness and wheels created with a 3D printer.[134] 3D printed hermit crab shells let hermit crabs inhabit a new style of home.[135]

As of 2012, 3D bio-printing technology has been studied by biotechnology firms and academia for possible use in tissue engineering
applications in which organs and body parts are built using inkjet techniques. In this process, layers of living cells are deposited onto
a gel medium or sugar matrix and slowly built up to form three-dimensional structures including vascular systems. [136] The first
production system for 3D tissue printing was delivered in 2009, based on NovoGen bioprinting technology.[137] Several terms have
been used to refer to this field of research: organ printing, bio-printing, body part printing,[138] and computer-aided tissue engineering,
among others.[139] The possibility of using 3D tissue printing to create soft tissue architectures for reconstructive surgery is also being
explored.[140]

China has committed almost $500 million towards the establishment of 10 national 3-D printing development institutes.[141] In 2013, Chinese scientists began printing ears, livers and kidneys with living tissue, using specialised 3D bio-printers that use living cells instead of plastic. Researchers at Hangzhou Dianzi University developed their own 3D bio-printer for this complex task, dubbed the "Regenovo". Xu Mingen, Regenovo's developer, said that the printer can produce a mini liver sample or a four-to-five-inch ear cartilage sample in under an hour, and predicted that fully functional printed organs may be possible within the next ten to twenty years.[142][143] In the same year, researchers at the University of Hasselt in Belgium successfully printed a new jawbone for an 83-year-old Belgian woman.[144]

In January 2015, it was reported that doctors at London's St Thomas' Hospital had used images obtained from a magnetic resonance imaging (MRI) scan to create a 3D printed replica of the heart of a two-year-old girl with a very complex hole in it. They were then able to tailor a Gore-Tex patch to effect a cure. The lead surgeon of the operating team, Professor David Anderson, told The Sunday Times: "The 3D printing meant we could create a model of her heart and then see the inside of it with a replica of the hole as it looked when the heart was pumping. We could go into the operation with a much better idea of what we would find." The 3D printing technique used by the hospital was pioneered by Dr Gerald Greil.[145]
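As a rough illustration of how a volumetric scan can become a printable model, the sketch below follows one common, generic scan-to-print pipeline rather than the hospital's actual workflow: threshold the image volume, extract an isosurface with marching cubes, and write the triangles out as an STL file for a slicer. The file name volume.npy and the crude mean-intensity threshold are illustrative assumptions; real anatomical models require careful segmentation by specialists.

```python
# Minimal sketch of a generic scan-to-print pipeline (not the hospital's workflow).

import numpy as np
from skimage import measure          # pip install scikit-image
from stl import mesh                 # pip install numpy-stl

volume = np.load("volume.npy")       # hypothetical 3D array, e.g. stacked MRI slices
level = volume.mean()                # crude iso-level; real work needs proper segmentation

# Extract a triangulated surface at the chosen intensity level.
verts, faces, normals, values = measure.marching_cubes(volume, level=level)

# Copy the triangles into an STL mesh structure.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, f in enumerate(faces):
    surface.vectors[i] = verts[f]    # three vertex coordinates per triangle

surface.save("heart_model.stl")      # ready for scaling and slicing before printing
```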
Computers and robots
See also: Modular design and Open-source robotics

3D printing can be used to make laptops and other computers, including cases, such as the Novena and VIA OpenBook standard laptop cases. For example, a Novena motherboard can be bought and used in a printed VIA OpenBook case.[146]

Open-source robots are built using 3D printers. Double Robotics grants access to their technology (an open SDK).[147][148][149] Other examples include 3&DBot, an Arduino 3D printer-robot with wheels,[150] and ODOI, a 3D printed humanoid robot.[151]

Space

In September 2014, SpaceX delivered the first zero-gravity 3-D printer to the International Space Station (ISS). On December 19,
2014, NASA emailed CAD drawings for a socket wrench to astronauts aboard the ISS, who then printed the tool using its 3-D
printer. Applications for space offer the ability to print broken parts or tools on-site, as opposed to using rockets to bring along pre-manufactured items for space missions to human colonies on the Moon, Mars, or elsewhere.[152] The European Space Agency plans
to deliver its new Portable On-Board 3D Printer (POP3D for short) to the International Space Station by June 2015, making it the
second 3D printer in space.[153][154]
Sociocultural applications
Art

In 2005, academic journals began to report on the possible artistic applications of 3D printing technology.[155] By 2007 the mass
media followed with an article in the Wall Street Journal[156] and Time Magazine, listing a 3D printed design among their 100 most
influential designs of the year.[157] During the 2011 London Design Festival, an installation, curated by Murray Moss and focused on
3D Printing, was held in the Victoria and Albert Museum (the V&A). The installation was called Industrial Revolution 2.0: How the
Material World will Newly Materialize.[158]

Some of the recent developments in 3D printing were revealed at the 3DPrintshow in London, which took place in November 2013 and 2014. The art section exhibited artworks made with 3D printed plastic and metal. Several artists, such as Joshua Harker, Davide Prete, Sophie Kahn, Helena Lukasova and Foteini Setaki, showed how 3D printing can modify aesthetic and artistic processes. One part of the show focused on ways in which 3D printing can advance the medical field; the underlying theme of these advances was that the printers can create parts printed to specifications that fit each individual, making the process safer and more efficient. One such advance is the use of 3D printers to produce casts that mimic the bones they are supporting. These custom-fitted casts are open, which allows the wearer to scratch itches, wash the damaged area and benefit from ventilation, and they can be recycled to create more casts.[159]

3D printing is becoming more popular in the customisable gifts industry, with products such as personalised mobile phone cases
and dolls,[160] as well as 3D printed chocolate.[161]

The use of 3D scanning technologies allows the replication of real objects without the use of moulding techniques that in many
cases can be more expensive, more difficult, or too invasive to be performed, particularly for precious or delicate cultural heritage
artefacts[162] where direct contact with the moulding substances could harm the original object's surface.

Critical making refers to hands-on productive activities that link digital technologies to society. The term was coined to bridge the gap between creative physical and conceptual exploration.[163] It was popularized by Matt Ratto, an Assistant Professor and director of the Critical Making lab in the Faculty of Information at the University of Toronto. Ratto describes one of the main goals of critical making as "to use material forms of engagement with technologies to supplement and extend critical reflection and, in doing so, to reconnect our lived experiences with technologies to social and conceptual critique".[164] The main focus of critical making is open design,[165] which includes, in addition to 3D printing technologies, other digital software and hardware. Critical making is often explained with reference to spectacular design.[166]
Communication

Employing the additive layer technology offered by 3D printing, terahertz devices that act as waveguides, couplers and bends have been created. The complex shape of these devices could not be achieved using conventional fabrication techniques. The commercially available professional-grade printer EDEN 260V was used to create structures with a minimum feature size of 100 µm. The printed structures were later DC sputter-coated with gold (or another metal) to create a terahertz plasmonic device.[167]
Domestic use

As of 2012, domestic 3D printing was mainly practised by hobbyists and enthusiasts, and was little used for practical household applications. A working clock was made,[168] and gears were printed for home woodworking machines, among other purposes.[169] 3D printing was also used for ornamental objects. Web sites associated with home 3D printing tended to feature items such as backscratchers, coat hooks and door knobs.[170]

The open source Fab@Home project[67] has developed printers for general use. They have been used in research environments to produce chemical compounds with 3D printing technology, including new compounds, initially as a proof of principle without immediate application.[93] The printer can print with anything that can be dispensed from a syringe as a liquid or paste. The developers of the chemical application envisage both industrial and domestic use for this technology, including enabling users in remote locations to produce their own medicines or household chemicals.[171][172]

3D printing is now working its way into households, and more children are being introduced to the concept of 3D printing at earlier ages. As more people gain access to the technology, new household uses are expected to emerge.[173]

The OpenReflex SLR film camera was developed for 3D printing as an open-source student project.[174]
Education and research

3D printing, and open source RepRap 3D printers in particular, are the latest technology making inroads into the classroom.[175][176][177] 3D printing allows students to create prototypes of items without the use of expensive tooling required in subtractive methods. Students design and produce actual models they can hold. The classroom environment allows students to learn and employ new applications for 3D printing.[178] RepRaps, for example, have already been used for an educational mobile robotics platform.[179]

Some authors have claimed that RepRap 3D printers offer an unprecedented "revolution" in STEM education.[180] The evidence for such claims comes both from the low-cost ability for rapid prototyping by students in the classroom and from the fabrication of low-cost, high-quality scientific equipment from open hardware designs, forming open-source labs.[181] Engineering and design principles are explored, as well as architectural planning. Students recreate duplicates of museum items such as fossils and historical artifacts for study in the classroom without risking damage to sensitive collections. Other students interested in graphic design can construct models with complex working parts. 3D printing gives students a new perspective on topographic maps. Science students can study cross-sections of internal organs of the human body and other biological specimens, and chemistry students can explore 3D models of molecules and the relationships within chemical compounds.[182]
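As a concrete classroom-style illustration of the topographic map example above, the following sketch (an illustrative assumption, not taken from any curriculum) turns a small grid of heights into a triangulated surface and writes it to an STL file that a slicer could prepare for printing. Only the top surface is generated, so a real print would still need walls and a base to make the model watertight.

```python
# Illustrative sketch: convert a 2D height grid (e.g. classroom elevation data)
# into an STL surface mesh for 3D printing.

import numpy as np
from stl import mesh                 # pip install numpy-stl

def heightmap_to_stl(z, filename, cell_size=1.0):
    """Triangulate a 2D height array z[i, j] into an STL surface."""
    rows, cols = z.shape
    triangles = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Corner points of one grid cell, split into two triangles.
            p00 = (j * cell_size, i * cell_size, z[i, j])
            p10 = ((j + 1) * cell_size, i * cell_size, z[i, j + 1])
            p01 = (j * cell_size, (i + 1) * cell_size, z[i + 1, j])
            p11 = ((j + 1) * cell_size, (i + 1) * cell_size, z[i + 1, j + 1])
            triangles.append([p00, p10, p11])
            triangles.append([p00, p11, p01])
    surface = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
    surface.vectors[:] = np.array(triangles)
    surface.save(filename)

# Example: a gentle hill described by a mathematical function.
x, y = np.meshgrid(np.linspace(-2, 2, 40), np.linspace(-2, 2, 40))
heightmap_to_stl(5.0 * np.exp(-(x**2 + y**2)), "hill.stl")
```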

According to a recent paper by Kostakis et al.,[183] 3D printing and design can electrify various literacies and creative capacities of
children in accordance with the spirit of the interconnected, information-based world.

Future applications for 3D printing might include creating open-source scientific equipment.[181][184]
Environmental use

In Bahrain, large-scale 3D printing using a sandstone-like material has been used to create unique coral-shaped structures, which
encourage coral polyps to colonise and regenerate damaged reefs. These structures have a much more natural shape than other
structures used to create artificial reefs and, unlike concrete, have a neutral pH, being neither acidic nor alkaline.[185]
Intellectual property
See also: Free hardware

3D printing has existed for decades within certain manufacturing industries where many legal regimes, including patents, industrial
design rights, copyright, and trademark may apply. However, there is not much jurisprudence to say how these laws will apply if 3D
printers become mainstream and individuals and hobbyist communities begin manufacturing items for personal use, for non-profit
distribution, or for sale.

Any of the mentioned legal regimes may prohibit the distribution of the designs used in 3D printing, or the distribution or sale of the
printed item. To be allowed to do these things, where an active intellectual property right is involved, a person would have to contact
the owner and ask for a licence, which may come with conditions and a price. However, many patent, design and copyright laws
contain a standard limitation or exception for 'private', 'non-commercial' use of inventions, designs or works of art protected under
intellectual property (IP). That standard limitation or exception may leave such private, non-commercial uses outside the scope of IP
rights.

Patents cover inventions including processes, machines, manufactures, and compositions of matter and have a finite duration which
varies between countries, but is generally 20 years from the date of application. Therefore, if a type of wheel is patented, printing, using, or selling such a wheel could be an infringement of the patent.[186]

Copyright covers an expression[187] in a tangible, fixed medium and often lasts for the life of the author plus 70 years thereafter.[188] If
someone makes a statue, they may have copyright on the look of that statue, so someone who sees that statue cannot then distribute designs to print an identical or similar statue.

When a feature has both artistic (copyrightable) and functional (patentable) merits, US courts, when the question has arisen, have often held that the feature is not copyrightable unless it can be separated from the functional aspects of the item.[188] In other
countries the law and the courts may apply a different approach allowing, for example, the design of a useful device to be registered
(as a whole) as an industrial design on the understanding that, in case of unauthorised copying, only the non-functional features
may be claimed under design law whereas any technical features could only be claimed if covered by a valid patent.
Gun legislation and administration

The US Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating that "significant
advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printable files for firearms components, and
difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed
guns," and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent their production. Even
if the practice is prohibited by new legislation, online distribution of these 3D printable files will be as difficult to control as any other
illegally traded music, movie or software files."[189]

Internationally, where gun controls are generally tighter than in the United States, some commentators have said the impact may be
more strongly felt, as alternative firearms are not as easily obtainable.[190] European officials have noted that producing a 3D printed
gun would be illegal under their gun control laws,[191] and that criminals have access to other sources of weapons, but noted that as
the technology improved the risks of an effect would increase.[192][193] Downloads of the plans from the UK, Germany, Spain, and
Brazil were heavy.[194][195]

Attempting to restrict the distribution of gun plans over the Internet has been likened to the futility of preventing the widespread
distribution of DeCSS which enabled DVD ripping.[196][197][198][199] After the US government had Defense Distributed take down the
plans, they were still widely available via The Pirate Bay and other file sharing sites.[200] Some US legislators have proposed
regulations on 3D printers, to prevent them being used for printing guns.[201][202] 3D printing advocates have suggested that such
regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights, with early pioneer of 3D
printing Professor Hod Lipson suggesting that gunpowder could be controlled instead.
