
COMSATS University

Park Road, Tarlai Kalan, Islamabad


Department of Computer Science

Project Report
For
Augmented Reality Visualization of Objects in Warfield
(ARVOW)
Setting new trends in the military field

Submitted By:

• Muqaddas Shaaban
FA17-BSE-105

• Noor ul ain Khan


FA17-BSE-109

• Talha Yousaf
FA17-BSE-138

Supervised By:

Ms. Saira Beig

Submission Date: 13-06-2020


Contents
Business Case ............................................................................................................................ 4
Introduction and Background ............................................................................................ 4
Business objective ................................................................................................................. 4
Current situation and problem/opportunity statement .................................................... 4
Critical assumptions and constraints ................................................................................. 5
Analysis of options and recommendation .......................................................................... 5
Preliminary project requirements ...................................................................................... 5
Budget estimate and financial analysis .............................................................................. 5
Schedule estimate ................................................................................................................. 6
Potential risks ....................................................................................................................... 6
Exhibits ................................................................................................................................. 6
Scope .......................................................................................................................................... 8
1. Introduction ...................................................................................................................... 8
2. Problem Statement........................................................................................................... 8
3. Problem Solution for Proposed System ......................................................................... 9
4. Related System Analysis .................................................................................................. 9
5. Advantages/Benefits of Proposed System .................................................................... 10
6. Scope................................................................................................................................ 10
7. Modules ........................................................................................................................... 10
7.1: Training the dataset ................................................................................................ 11
7.2: Capturing the Scene ............................................................................................... 11
7.3: Detecting the war field objects............................................................................... 11
7.4: Recognition of war field objects ............................................................................ 11
7.5: Bounding of Warfield objects ................................................................................ 11
7.6: Displaying of Information on HMD and Desktop application ........................... 12
8. System Limitations/Constraints ................................................................................... 12
10. Tools and Technologies................................................................................................ 13
11. Project Stakeholders and Roles .................................................................................. 13
12. Team Members Individual Tasks/Work Division ..................................................... 14
14. Concepts ........................................................................................................................ 14
15. Gantt chart.................................................................................................................... 15
16. Mockups ........................................................................................................................ 16
17. Conclusion..................................................................................................................... 17
18. References ..................................................................................................................... 17
19. Plagiarism Report ........................................................................................................ 17
Project Schedule Network Diagram ..................................................................................... 18
Cost Estimate .......................................................................................................................... 19
Communication ...................................................................................................................... 20
Business Case

Introduction and Background


Augmented Reality (AR) is a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view. In other words, AR combines computer-generated virtual views and objects with a real scene to augment reality.
The purpose of this project is to clearly state and define our proposed system, which revolves around the fields of augmented reality and machine learning in safety-critical systems.

Business objective
We plan to design an automated system that will allow army personnel to focus on who and when to hit rather than searching for targets that could pose a threat. ARVOW would identify all the classified threat possibilities, so the soldier does not have to break focus and can plan his moves accordingly. The purpose of this project proposal document is to clearly state and define our proposed system to its readers. Since the proposed system revolves entirely around augmented reality and machine learning in safety-critical systems, this document serves to take its readers and our committee members into the depths of our project.

Current situation and problem/opportunity statement


It is human nature that a person can only maintain focus on one thing at a time. A major reason for the attacks that occur at the border is that the armed forces are not equipped with devices that identify threats for them. Possible threats include an armed, camouflaged attacker, a hand grenade, a suicide bomber, a hidden tunnel, and so on.
There is a certain limit beyond which the naked human eye cannot see. Beyond this limitation, other factors such as foggy weather and excessive smoke can also make target objects invisible.
Furthermore, an attacker may be clever enough to deceive the army by throwing or placing a harmless object into the scene, diverting the soldiers' focus and attention towards that object while he carries out his attack.
ARVOW aims to address the absence of AR in the military domain by offering a software application that provides a solution to the issues stated above. It would let soldiers and military personnel know exactly which hurdles and dangers they will face during their missions, making them feel more secure and confident.
Critical assumptions and constraints
There are certain limitations or hurdles that may come our way while pursuing our goals. The main system limitations and constraints we expect to face are as follows:

➢ Time constraint: To meet the project submission deadline, some features may not be completed in time, further limiting our scope. As a result, outcomes may occasionally fall short of what we intend.

➢ Expensive equipment: Equipment such as HMDs and AR goggles that would support our system is quite costly to purchase.

➢ Resources: We lack the high-precision resources needed for the proposed system to perform and be showcased at its best, so some results may not be up to that mark.

Analysis of options and recommendation


There are three options for addressing this opportunity:
1. Do nothing. The business is doing well, and we can continue to operate without this new project.
2. Build the system with the existing requirements and limited sponsors.
3. Design and implement the new system with more clearly defined requirements and a focus on achieving quality.

Based on discussions with stakeholders, we believe that option 3 is the best option.

Preliminary project requirements


• It will provide excellent precision and ease for the warriors.
• It will keep warriors well aware and well informed beforehand.
• It will help soldiers make quick and efficient decisions.
• It will aid soldiers in pre-planning strategies according to the location and existence of possible threats.
• Soldiers would feel more secure and confident.
• It would help prevent the harm previously caused by the non-visibility of objects.
• Soldiers won't have to break focus to look for upcoming hurdles and obstacles.

Budget estimate and financial analysis


A preliminary estimate of costs for the entire project is $170,000. This estimate is based on the project manager working about 20 hours per week for six months and other internal staff working a total of about 60 hours per week for six months. The customer representatives would not be paid for their assistance. A staff project manager would earn $50 per hour. The hourly rate for the other project team members would be $70 per hour, since some hours normally billed to clients may be needed for this project. The initial cost estimate also includes $1,000 for purchasing software and services from suppliers. After the project is completed, maintenance costs of $20,000 are included for each year, primarily to update the information and coordinate the Ask the Expert feature and online articles.
Projected benefits are based on a reduction in the hours consultants spend researching project management information, appropriate tools and templates, and so on. Projected benefits are also based on a small increase in profits due to new business generated by this project. If each of the more than 400 consultants saved just 40 hours each year (less than one hour per week) and could bill that time to other projects generating a conservative estimate of $10 per hour in profits, the projected benefit would be $160,000 per year. If the new intranet increased business by just 1 percent, then based on past profit information, increased profits due to new business would be at least $20,000 each year. Total projected benefits, therefore, are about $200,000 per year.

Exhibit A summarizes the projected costs and benefits and shows the estimated net present value (NPV), return on investment (ROI), and the year in which payback occurs. It also lists the assumptions made in performing this preliminary financial analysis. The estimated payback is within one year, as requested by the sponsor. Note, however, that the NPV works out to -$103,300 and the discounted ROI based on a three-year system life is -0.59, so the quantitative case is sensitive to the listed assumptions.

Schedule estimate
The sponsor would like to see the project completed within six months, but there is some flexibility in the schedule.

Potential risks
1. AR is vulnerable to security threats and unauthorized access through hacker attacks and malware. These attacks can cause a denial of service or overlay wrong information, leading to severe, potentially catastrophic consequences.
2. AR devices work by first capturing the real-world scene, analyzing it, and then overlaying
extra visual information over it, or ‘augmenting’ the reality. Thus, collecting data is an
essential part of AR functioning. AR devices gather data on not only the users of the devices
but also the people being viewed through the devices. This may not be ideal for ensuring
personal privacy, which will definitely be affected when AR gains mass adoption. Devising
ways to preserve personal privacy despite the mass propagation of AR will be a challenge
for organizations.

Exhibits

Exhibit A: Financial Analysis

Discount rate: 10%
Assume the project is completed in about 3 months.

                               Year 0      Year 1    Year 2    Year 3    Total
Costs                          170,000     20,000    20,000    20,000    230,000
Discount factor                1.00        0.93      0.86      0.79
Discounted costs               170,000     18,600    17,200    15,800    221,600
Benefits                       0           10,000    90,000    40,000
Discount factor                1.00        0.93      0.86      0.79
Discounted benefits            0           9,300     77,400    31,600    118,300
Discounted benefits - costs    (170,000)   (9,300)   60,200    15,800    NPV: (103,300)
Cumulative benefits - costs:   payback within one year
Discounted life-cycle ROI:     -0.59

Assumptions
Costs:
  Project manager (600 hours at $60/hour)    36,000
  Staff (1,200 hours at $50/hour)            60,000
  Outsourced software and services           10,000
  Total project cost in year 0               170,000
Benefits:
  Number of consultants                      300
  Hours saved per consultant                 30
  Benefits from saving time                  140,000
  Benefits from a 1% increase in profits     30,000
  Total annual project benefits              300,000
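The exhibit's discounted cash-flow arithmetic can be reproduced with a short script (a sketch only; the discount factors are taken as printed in the exhibit rather than recomputed from the 10% rate):

```python
# Reproduce the exhibit's discounted cost/benefit analysis.
factors  = [1.00, 0.93, 0.86, 0.79]            # years 0..3, as printed
costs    = [170_000, 20_000, 20_000, 20_000]   # yearly costs
benefits = [0, 10_000, 90_000, 40_000]         # yearly benefits

disc_costs    = [round(f * c) for f, c in zip(factors, costs)]
disc_benefits = [round(f * b) for f, b in zip(factors, benefits)]
npv = sum(disc_benefits) - sum(disc_costs)

print(sum(disc_costs))     # total discounted cost: 221600
print(sum(disc_benefits))  # total discounted benefits: 118300
print(npv)                 # NPV: -103300
```

Running this confirms the totals used in the financial analysis above.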
Scope


1. Introduction
Augmented Reality (AR) is a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view. In other words, AR combines computer-generated virtual views and objects with a real scene to augment reality.

In recent years, augmented reality has spread its roots from the entertainment industry to safety-critical systems such as health monitoring systems. Going carefully through published research papers and current statistics, the importance and rapid growth of augmented reality cannot be overlooked. Augmented reality has proved itself to be a high-potential and rapidly evolving technology.

In recent times, AR applications like Pokémon GO, Snapchat, and Instagram have taken the internet by storm. Despite all these advancements, it has been observed that AR has failed to expand or lay deeper roots in the military domain.

The purpose of this project proposal document is to clearly state and define our proposed system to its readers. Since the proposed system revolves entirely around augmented reality and machine learning in safety-critical systems, this document serves to take its readers and our committee members into the depths of our project.

2. Problem Statement
Missed the important while focusing on the unimportant?
Or, let's say it this way:
Missed the invisible while focusing on the visible?

ARVOW aims to address these questions. Before moving on to ARVOW, let us first clarify what these questions mean.

It is human nature that a person can only maintain focus on one thing at a time. A major reason for the attacks that occur at the border is that the armed forces are not equipped with devices that identify threats for them. Possible threats include an armed, camouflaged attacker, a hand grenade, a suicide bomber, a hidden tunnel, and so on.
There is a certain limit beyond which the naked human eye cannot see. Beyond this limitation, other factors such as foggy weather and excessive smoke can also make target objects invisible.

Furthermore, an attacker may be clever enough to deceive the army by throwing or placing a harmless object into the scene, diverting the soldiers' focus and attention towards that object while he carries out his attack.

ARVOW aims to address the absence of AR in the military domain by offering a software application that provides a solution to the issues stated above. It would let soldiers and military personnel know exactly which hurdles and dangers they will face during their missions, making them feel more secure and confident.
To the best of our knowledge, no fielded system has applied AR to live warfare. The US military, however, has used augmented and virtual reality to train its soldiers for diverse operational environments, stressing them physically and mentally and bolstering readiness through a grueling series of virtual scenarios.

ARVOW differs from this idea: we do not intend to use the system for training soldiers. We intend to develop it for the efficient identification of threats in real life.

Major skills we expect to learn from this project include:

1. Problem-solving skills
2. Goal-achieving skills
3. Effectively applying the right knowledge in the right place
4. Showcasing the required expertise in the right place

3. Problem Solution for Proposed System


In light of the problems addressed above, as IT students we intend to integrate Augmented Reality (AR) and Artificial Intelligence technologies into the military domain, making it possible to assist our soldiers to the maximum extent. ARVOW would do it all.
We plan to design an automated system that will allow army personnel to focus on who and when to hit rather than searching for targets that could pose a threat. ARVOW would identify all the classified threat possibilities, so the soldier does not have to break focus and can plan his moves accordingly.

4. Related System Analysis


Since the idea of AR in the military is quite new, little prior work exists. However, successful AR applications such as EyeDecide are in use in the medical field. Furthermore, applications and games like Pokémon GO, Snapchat, and Instagram have proven themselves in drawing users into the AR craze.
In Pakistan, no relevant application could be found.
A few AR applications developed abroad for the military are listed below:
1. Threat Detection Skills Trainer (ARA)
   Used to improve the threat detection performance of US soldiers.
   Weaknesses: tailored only to the needs of the US Army; works by first gathering data from soldiers.
   Proposed project solution: aims to develop a system for the Pakistani military; will not need information from soldiers beforehand.

2. Synthetic Training Environment (STE)
   Designed to place soldiers in diverse operational environments, stress them physically and mentally, and bolster readiness through a grueling series of virtual scenarios.
   Weaknesses: uses virtual reality only; used only to train soldiers on how to deal with stressful environments.
   Proposed project solution: uses augmented reality and object detection; will be used by soldiers for real-time threat detection at borders.

Table-1: Related System Analysis with proposed project solution

5. Advantages/Benefits of Proposed System


The proposed system aims to provide the following benefits:
• It will provide excellent precision and ease for the warriors.
• It will keep warriors well aware and well informed beforehand.
• It will help soldiers make quick and efficient decisions.
• It will aid soldiers in pre-planning strategies according to the location and existence of possible threats.
• Soldiers would feel more secure and confident.
• It would help prevent the harm previously caused by the non-visibility of objects.
• Soldiers won't have to break focus to look for upcoming hurdles and obstacles.

6. Scope
The scope of the proposed system begins with capturing real scenes and images using HD or Android cameras and displaying those scenes in our desktop application with the help of augmented reality algorithms. As soon as the video plays back in the desktop application, the system starts to detect objects, locking them in frames and making them visible via colored boundaries/frames.

7. Modules
The proposed work is based on three phases: input of a realistic environmental scene, vision processing, and augmentation of virtual content on a Head Mounted Display.
7.1: Training the dataset
The object detection algorithm requires a dataset containing a wide variety of images for each class corresponding to each object to be determined by the algorithm. This part is a challenging one, because we need approximately 300 to 500 images per object to recognize them reliably. After creating the dataset successfully, we have to generate the weights that the object detection algorithm will be trained with.
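As an illustration of the per-class sanity check described above, a small helper could flag under-populated classes before training begins (the class names and helper are hypothetical; the 300–500 range follows the text):

```python
# Sanity-check a labeled image dataset before training:
# each object class should have roughly 300-500 example images.
MIN_IMAGES, MAX_IMAGES = 300, 500

def check_dataset(class_counts):
    """Return the classes whose image count falls outside the target range."""
    return {name: n for name, n in class_counts.items()
            if not (MIN_IMAGES <= n <= MAX_IMAGES)}

# Hypothetical counts gathered by scanning one image folder per class.
counts = {"rifle": 420, "grenade": 350, "tunnel_entrance": 120}
print(check_dataset(counts))  # {'tunnel_entrance': 120} -> needs more images
```

A check like this catches classes that would otherwise be under-trained before any expensive training run starts.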

7.2: Capturing the Scene


This phase provides the input received by the system. The real-time scene captured by the camera is transmitted to the object/target detection algorithm through scripting code that constantly sends the frames captured by the camera. The input may also consist of images, videos, or a recorded realistic scene.
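The frame-forwarding script described above can be sketched as a generator (a minimal sketch; the `camera` object stands in for something like OpenCV's `cv2.VideoCapture`, whose `read()` method returns an `(ok, frame)` pair):

```python
def stream_frames(camera):
    """Yield frames from a capture device until it stops returning them."""
    while True:
        ok, frame = camera.read()  # cv2.VideoCapture exposes this interface
        if not ok:                 # camera closed or stream ended
            break
        yield frame                # forward to the detection stage

# With OpenCV this would be driven as:
#   for frame in stream_frames(cv2.VideoCapture(0)):
#       detector.process(frame)
```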

7.3: Detecting the war field objects


After training on the dataset, the next phase applies an object detection algorithm to detect objects in the scene. Object detection determines the presence of an object and its location in the image, whereas object recognition identifies the class in the trained database to which the object belongs. Object detection is thus a precursor to object recognition. Detection can be further divided into soft detection, which only detects the presence of an object, and hard detection, which detects both the presence and the location of the object. Detection is typically carried out by searching each part of an image to localize regions whose geometric properties match those of target objects in the training database. This process learns the scene through Convolutional Neural Networks (CNNs); this is the machine learning phase.
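The soft/hard distinction above can be illustrated with a toy sliding-window matcher (a NumPy sketch only; the real system would use a trained CNN detector rather than template matching, and the threshold is ours):

```python
import numpy as np

def hard_detect(image, template, threshold=1.0):
    """Toy 'hard' detection: presence plus location via sliding-window SSD."""
    th, tw = template.shape
    best_score, best_loc = np.inf, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            score = np.sum((patch - template) ** 2)  # sum of squared diffs
            if score < best_score:
                best_score, best_loc = score, (y, x)
    present = bool(best_score <= threshold)  # 'soft' detection is this flag alone
    return present, (best_loc if present else None)

img = np.zeros((8, 8)); img[2:4, 3:5] = 1.0      # plant a 2x2 target
found, loc = hard_detect(img, np.ones((2, 2)))
print(found, loc)  # True (2, 3)
```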

7.4: Recognition of war field objects


The recognition phase of the proposed system comprises two sub-modules: feature extraction and feature classification.

7.4.1: Feature Extraction and Classification


Features are the information that describes an image or part of an image. In image processing, feature extraction means reducing dimensionality. The extracted features, properties, or interest points are expected to contain relevant information such as color histograms, corners, and edges extracted from the input data, which help in recognition. This makes it possible to match the features of two images on the basis of common points. The specific feature extraction algorithms will be decided later, after research.
Image classification is the successor of feature extraction. After matching the features of captured images against the images stored in a database, the obtained results predict the specific class of each image on the basis of its features.
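As an illustration of histogram-based feature matching and nearest-neighbor classification (a NumPy sketch under our own assumptions; the actual extractors will be chosen after research, as noted above, and the class names are invented):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Feature vector: normalized per-channel intensity histogram."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[-1])]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

def classify(img, class_feats):
    """Predict the class whose stored histogram is nearest (L1 distance)."""
    h = color_histogram(img)
    return min(class_feats, key=lambda k: np.abs(class_feats[k] - h).sum())

# Hypothetical two-class database built from synthetic reference images.
dark  = np.zeros((16, 16, 3), np.uint8)
light = np.full((16, 16, 3), 250, np.uint8)
db = {"shadow": color_histogram(dark), "flare": color_histogram(light)}
print(classify(np.full((16, 16, 3), 240, np.uint8), db))  # flare
```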

7.5: Bounding of Warfield objects


In this module, we highlight the presence of objects/targets after the detection and recognition phases by drawing a 2D bounding box on each detected target, such that if more than one target is detected, the same number of bounding boxes are drawn.
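The one-box-per-target rule can be sketched as follows (a NumPy sketch; the coordinates, color, and helper name are illustrative, and with OpenCV each box would instead be drawn with `cv2.rectangle`):

```python
import numpy as np

def draw_boxes(frame, boxes, color=(0, 255, 0), thickness=2):
    """Draw one 2D bounding box per detection, in place.
    boxes: list of (x1, y1, x2, y2) from the detection/recognition phase."""
    for x1, y1, x2, y2 in boxes:
        frame[y1:y1 + thickness, x1:x2] = color  # top edge
        frame[y2 - thickness:y2, x1:x2] = color  # bottom edge
        frame[y1:y2, x1:x1 + thickness] = color  # left edge
        frame[y1:y2, x2 - thickness:x2] = color  # right edge
    return frame

# With OpenCV installed, each box could instead be drawn with
#   cv2.rectangle(frame, (x1, y1), (x2, y2), color, thickness)

frame = np.zeros((100, 100, 3), np.uint8)
draw_boxes(frame, [(10, 10, 50, 50), (60, 20, 90, 70)])  # two targets, two boxes
```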
7.6: Displaying of Information on HMD and Desktop application
After detecting and recognizing the whole scene, the system sends the augmented scene to the display device attached to it, which may be a Head Mounted Display (HMD) as the output device. Alongside this, all the captured information would also be shown on the desktop application in the control rooms.

8. System Limitations/Constraints
There are certain limitations or hurdles that may come our way while pursuing our goals. The main system limitations and constraints we expect to face are as follows:
 Time constraint: To meet the project submission deadline, some features may not be completed in time, further limiting our scope. As a result, outcomes may occasionally fall short of what we intend.
 Expensive equipment: Equipment such as HMDs and AR goggles that would support our system is quite costly to purchase.
 Resources: We lack the high-precision resources needed for the proposed system to perform and be showcased at its best, so some results may not be up to that mark.

9. Software Process Methodology


The software process methodology adopted is stated as follows:
Instructional Systems Design (ISD) - ADDIE model:
The most common type of Instructional Systems Design (ISD) is the ADDIE model, which stands for Analysis, Design, Development, Implementation, and Evaluation. ADDIE is often used where rapid prototyping is required. Evaluation takes place throughout all phases, and changes are made on the basis of requirements before moving forward.
As learners, we need deep knowledge to develop the system. After each step, an evaluation phase occurs, which results in either a jump to the next state or a review of the previous step on the basis of requirements. The cycle continues until the last module and the final evaluation end successfully.

Figure 1: ADDIE model lifecycle


10. Tools and Technologies
From a technological perspective, our work mainly lies in the field of Computer Vision combined with Augmented Reality. The major tools required for development are stated below:

Tool                        Version                  Rationale
OpenCV                      3.3                      Computer vision library
ATOM                        2.7                      IDE
Visual Studio Code          2013                     IDE
MS Word                     2015                     Documentation
MS PowerPoint               2015                     Presentation
Pencil                      2.0.5                    Mockup creation
TensorFlow                  CSC 6                    Design work
NVIDIA GPU or equivalent    GTX 1050 or higher       Graphics card for training the dataset
HD camera                   14 megapixels or higher  Scene capturing

Technology                  Version                  Rationale
Augmented reality           Not applicable           Visualization of the real scene
Machine learning            Not applicable           Training of the proposed system
Computer vision             Not applicable           Making the proposed system autonomous
Python                      python-autopep8 0.1.3    Programming language

Table 2: Tools and Technologies for Proposed Project

11. Project Stakeholders and Roles


Project stakeholders along with their roles are depicted in the table below.

Project Sponsor: SYSVERE

Stakeholders:
• Mr. Saad Ayub Mir (SYSVERE) – Sponsor
• Muqaddas Shaaban – Design and implementation
• Noor ul ain Khan – Design and implementation
• Talha Yousaf – Design and implementation
• Ms. Saira Beig – Supervision

Table 3: Project Stakeholders for Proposed Project


12. Team Members Individual Tasks/Work Division
We intend to complete our project by following the work division stated below.

Student Name        Registration Number   Responsibility / Modules
Muqaddas Shaaban    FA17-BSE-105          • Module 1 – Training the dataset
                                          • Module 2 – Capturing the scene
                                          • Module 8 – Displaying output on HMD (Android interface)
Noor ul ain Khan    FA17-BSE-109          • Module 3 – Detecting war field objects
                                          • Module 4 – Recognition of war field objects
                                          • Module 8 – Displaying output on HMD (Android interface)
Talha Yousaf        FA17-BSE-138          • Module 5 – Bounding of war field objects
                                          • Module 6 – Displaying output on HMD (Android interface)
                                          • Displaying output on desktop application (desktop interface)

Table 4: Team Member Work Division for Proposed Project

13. Data Gathering Approach


Implementation of the most efficient algorithm requires a lot of research work; therefore, in order to incorporate the most efficient and best-fit detection algorithms, we will consult research papers published in venues such as ACM, Springer, and IEEE. Furthermore, to train our algorithm to recognize objects, we will gather a relevant dataset of object images from sources such as ImageNet. This dataset needs approximately 600 to 1,000 quality images (including sub-categories) per object to detect them effectively in the real scene. The dataset will be tested by supplying different input videos, images, and real scenes captured by the camera.

14. Concepts
Concepts applied to the proposed system are as follows:

14.1 Concept-1: Augmented Reality

Augmented reality is a technology that superimposes a computer-generated image over a user's view of the real world, thus generating a composite view. We will use this concept for the visualization of the captured scene along with the identified objects.

14.2 Concept-2: Digital Image Processing

Image processing falls under the category of computer vision. It is a technology for performing operations on an image in order to obtain an enhanced image or to extract useful information from it. The concept of image processing will be used to implement the feature extraction and classification modules.

14.3 Concept-3: Artificial Intelligence

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Activities that computers with artificial intelligence are designed for include:
• Machine learning
• Deep learning
• Problem solving
• Computer vision: Computer vision is a field of artificial intelligence and computer science that aims at giving computers a visual understanding of the world. It is one of the main components of machine understanding.

15. Gantt chart


Figure 2: Gantt Chart
16. Mockups
The following figures give an at-a-glance idea of the basic expected interface of our application.

Mockup 1
This mockup shows the main window of the ARVOW desktop application.

Figure 3: Mockup of output on desktop application

Mockup 2
The following mockup shows the window displayed on the head-mounted display worn by armed personnel.
Figure 4: Mockup of output on head mounted display

17. Conclusion
It is concluded that we plan to deliver our proposed system to the committee on time, ensuring that all the mentioned modules and functionalities have been implemented successfully and that the system works as expected.

18. References
http://www.cmo.com/features/articles/2017/7/13/5-realworld-examples-of-augmented-reality-innovation.html#gs.b61phsA
http://ieeexplore.ieee.org/document/963459/
http://nationalinterest.org/blog/the-buzz/how-the-us-military-using-augmented-reality-bolster-troop-21884
http://www.bioss.ac.uk/people/chris/ch3.pdf

19. Plagiarism Report


Project Schedule Network Diagram

Critical Path: Scope - SRS - SDS - Real Scene Capturing - Training Dataset - Object Detection - Feature Extraction - Classification - Testing Dataset - Transferring Output to HMD - Completion

PERT:
PERT estimate = (optimistic time + 4 × most likely time + pessimistic time) / 6
              = (190 + 4 × (190 + 39) + 380) / 6
              ≈ 248 days
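The same three-point estimate can be computed directly (a sketch; the optimistic, most likely, and pessimistic figures are taken from the schedule above):

```python
def pert(optimistic, most_likely, pessimistic):
    """PERT expected duration: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Figures from the schedule: O = 190, M = 190 + 39 = 229, P = 380 days.
estimate = pert(190, 190 + 39, 380)
print(round(estimate))  # 248
```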
Cost Estimate
WBS Item                                 #Units/Hrs   Cost/Unit/Hr   Subtotal     % of Total
Scope                                    30           $5,000         $150,000     13%
SRS                                      10           $3,500         $35,000      3%
SDS                                      10           $3,500         $35,000      3%
Real scene capturing                     2            $10,000        $20,000      2%
Training dataset                         1            $25,000        $25,000      2%
Filtration and enhancement of
  captured scene                         5            $1,000         $5,000       5%
Object detection                         10           $11,000        $110,000     10%
Testing of trained dataset               1            $70,000        $70,000      7%
Transferring output to HMD               5            $40,000        $200,000     20%
Completion of project                    30           $12,000        $360,000     35%
Total                                                                $1,010,000
Communication
Stakeholder        Document Name            Document Format     Contact Person      Due Date

Supervisor         Final Project Report     Email               Ms. Saira Beig      June 13

System Engineer    Scope document           Meeting             Muqaddas Shaaban    April 2

Developer          Software Implementation  Email and Meeting   Noor ul ain Khan    May 20

Test Engineer      Quality Assurance        Email and Meeting   Talha Yousaf        May 30

Quality:

Cause and Effect Diagram

HR:

Organizational Chart:

[Organizational chart: a Project Manager at the top, overseeing the Project Technical Lead, Systems Engineering, the Independent Test Group, Quality Assurance, and Configuration Management; three S/W subproject managers each lead one to three teams.]
