TABLE OF CONTENTS
1. ABSTRACT
2. ACKNOWLEDGEMENT
3. INTRODUCTION
4. WORKING
5. APPLICATIONS
6. FEASIBILITY STUDY
7. SYSTEM DESIGN
8. SETUP GUIDE
9. SYSTEM IMPLEMENTATION
10. CASE STUDIES
11. PROJECT
12. MAINTENANCE
13. CONCLUSION
14. REFERENCES
INDUSTRIAL TRAINING BTIT507
ABSTRACT
People looking to buy a new home tend to be conservative with their budgets
and market strategies. The existing system involves calculation of house prices
without the necessary prediction of future market trends and price increases.
The aim of this project was to develop a real estate web application using
machine learning, with the help of the Jupyter Notebook. The real estate system
gives buyers the ability to search for houses by features or address. When the
user searches for a property, the original property value and the predicted
property value are displayed. Future prices are predicted by analysing previous
market trends and price ranges, as well as upcoming developments. For the price
prediction we use a regression algorithm. This application helps customers to
invest in an estate without approaching an agent, and it decreases the risk
involved in the transaction. Thus, there is a need to predict efficient house
pricing for real estate customers with respect to their budgets and priorities.
We will be attempting to predict the mean price of homes in a given Seattle
suburb, given a few data points about each house, such as the number of
bedrooms, the number of bathrooms, the living area in square feet, and so on.
The dataset we will be using has an interesting difference from our two
previous examples: it has relatively few data points, 20,610 in total, split
between 16,488 training samples and 4,122 test samples, and each "feature" in
the input data (e.g. the living area) has a different scale.
The data comprises six features. The features in the input data are as follows:
1. square ft of living
2. square ft of lot
3. number of bedrooms
4. number of bathrooms
5. number of floors
6. zipcode
The target is the sale price of each home, in dollars.
ACKNOWLEDGEMENT
Completing a task is never a one-person effort. It is the result of
contributions from a number of individuals, direct or indirect, that help in
shaping and achieving the result. This acknowledgement is a genuine opportunity
to thank all those people without whose active support this project would not
have been possible.
It gives me a great sense of pleasure to present the report of the B. Tech Project
undertaken during B. Tech. Second Year. I owe special debt of gratitude to Dr.
Dinesh Kumar, HOD, Department of Information & Technology, DAV Institute of
Engineering & Technology Jalandhar, for his constant support and guidance
throughout the course of our work. His sincerity, thoroughness and perseverance
have been a constant source of inspiration for me. It is only his cognizant efforts
that my endeavors have seen light of the day.
I am thankful to Dr. Carlos Guestrin and Dr. Emily Fox, Amazon Professors of
Machine Learning, University of Washington, for providing education and
guidance through their machine learning courses.
Finally, we are thankful to the Almighty God, who gave us the strength, good
sense and confidence to complete the project successfully. We also thank our
parents, who were a constant source of encouragement; their moral support was
indispensable.
INTRODUCTION
All of these things mean it's possible to quickly and automatically produce
models that can analyze bigger, more complex data and deliver faster, more
accurate results – even on a very large scale. And by building precise
models, an organization has a better chance of identifying profitable
opportunities – or avoiding unknown risks.
[Figure: data and methods combine in machine learning to produce intelligence]
Machine Learning
The main difference from machine learning is that, with statistical models,
the goal is to understand the structure of the data and to fit theoretical
distributions to the data that are well understood. So, with statistical models
there is a theory behind the model that is mathematically proven, but this
requires that the data meet certain strong assumptions too. Machine learning,
by contrast, has developed based on the ability to use computers to probe the
data for structure, even if we do not have a theory of what that structure
looks like. The test for a machine learning model is the validation error on
new data, not a theoretical test that proves a null hypothesis. Because machine
learning often uses an iterative approach to learn from data, the learning can
be easily automated: passes are run through the data until a robust pattern is
found.
Deep learning
Deep learning combines advances in computing power and special types of
neural networks to learn complicated patterns in large amounts of data.
Deep learning techniques are currently state of the art for identifying
objects in images and words in sounds. Researchers are now looking to
apply these successes in pattern recognition to more complex tasks such as
automatic language translation, medical diagnoses and numerous other
important social and business problems.
WORKING
Popular machine learning methods
Two of the most widely adopted machine learning methods are supervised
learning and unsupervised learning – but there are also other methods of
machine learning. Here's an overview of the most popular types.
Unsupervised learning is used against data that has no historical labels. The
system is not told the "right answer." The algorithm must figure out what is being
shown. The goal is to explore the data and find some structure within.
Unsupervised learning works well on transactional data. For example, it can
identify segments of customers with similar attributes who can then be treated
similarly in marketing campaigns. Or it can find the main attributes that separate
customer segments from each other. Popular techniques include self-organizing
maps, nearest-neighbor mapping, k-means clustering and singular value
decomposition. These algorithms are also used to segment text topics, recommend
items and identify data outliers.
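One of the techniques named above, singular value decomposition, can be sketched with a few lines of NumPy. The customer-by-product matrix below is invented purely for illustration; the point is that SVD uncovers segment structure without any labels:

```python
import numpy as np

# Toy customer-by-product purchase matrix (rows: customers, columns: products).
# Invented data: the first two customers buy products 1-2, the last two buy 3-4.
X = np.array([
    [5.0, 4.0, 0.0, 0.0],
    [4.0, 5.0, 0.0, 1.0],
    [0.0, 0.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Singular value decomposition uncovers latent structure without labels.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The fraction of variance carried by each singular value: the top two
# dominate, suggesting two underlying customer segments.
explained = (s ** 2) / np.sum(s ** 2)
print(explained.round(3))
```

Here almost all of the variance sits in the first two components, which matches the two buying-behaviour groups built into the toy data.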
Reinforcement learning is often used for robotics, gaming and navigation. With
reinforcement learning, the algorithm discovers through trial and error which
actions yield the greatest rewards. This type of learning has three primary
components: the agent (the learner or decision maker), the environment
(everything the agent interacts with) and actions (what the agent can do). The
objective is for the agent to choose actions that maximize the expected reward over
a given amount of time. The agent will reach the goal much faster by following a
good policy. So the goal in reinforcement learning is to learn the best policy.
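The agent/environment/action loop described above can be sketched with a minimal epsilon-greedy bandit. The two actions and their reward probabilities are invented for illustration; real reinforcement learning problems also involve states and long-horizon rewards:

```python
import random

# Hidden reward rates of two actions (invented for illustration).
REWARD_PROB = {"left": 0.3, "right": 0.7}

def pull(action):
    """Environment: returns reward 1 with the action's hidden probability."""
    return 1.0 if random.random() < REWARD_PROB[action] else 0.0

def run(episodes=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    value = {a: 0.0 for a in REWARD_PROB}   # estimated reward per action
    count = {a: 0 for a in REWARD_PROB}
    for _ in range(episodes):
        if random.random() < epsilon:                 # explore: random action
            action = random.choice(list(REWARD_PROB))
        else:                                         # exploit: best estimate
            action = max(value, key=value.get)
        reward = pull(action)
        count[action] += 1
        # Incremental average: the estimate converges to the true reward rate.
        value[action] += (reward - value[action]) / count[action]
    return value

print(run())
```

Through trial and error alone, the agent's estimated values come to favour the action with the greater expected reward, which is exactly the policy-learning objective described above.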
How does it work?
To get the most value from machine learning, you have to know how to pair the
best algorithms with the right tools and processes. SAS(Statistical Analysis
System) combines rich, sophisticated heritage in statistics and data mining with
new architectural advances to ensure your models run as fast as possible – even in
huge enterprise environments.
Algorithms: SAS graphical user interfaces help you build machine learning
models and implement an iterative machine learning process. You don't have to be
an advanced statistician. Our comprehensive selection of machine learning
algorithms can help you quickly get value from your big data and are included in
many SAS products. SAS machine learning algorithms include:
Neural networks
Decision trees
Random forests
Nearest-neighbor mapping
k-means clustering
Self-organizing maps
Bayesian networks
Tools and Processes: As we know by now, it's not just the algorithms.
Ultimately, the secret to getting the most value from your big data lies in
pairing the best algorithms for the task at hand with the right tools and
processes.
APPLICATIONS
Machine Learning Applications in Healthcare
1. Drug Discovery/Manufacturing
2. Personalized Treatment/Medication
Imagine you walk in to visit your doctor with some kind of an ache in your
stomach. You have an MRI, and a computer helps the radiologist detect problems
that could be too small for the human eye to see. In the end, a computer scans
all your health records and family medical history, compares them with the
latest research, and advises a treatment protocol particularly tailored to
your problem.
You are watching “Game of Thrones” when you get a call from your bank asking
if you have swiped your card for “$X” at a store in your city to buy a gadget.
It was not you who bought the expensive gadget using your card; in fact, it has
been in your pocket all along. How did the bank flag this purchase as
fraudulent? All thanks to machine learning! Financial fraud costs $80 billion
annually, of which Americans alone are exposed to a risk worth $50 billion per
annum.
What the future holds for AI and machine learning in banking and finance?
The moment you start browsing for items on Amazon, you see recommendations
for products you are interested in, such as “Customers Who Bought This Product
Also Bought” and “Customers who viewed this product also viewed”, as well as
specifically tailored product recommendations on the home page and through
email. Amazon uses an artificial neural network machine learning algorithm to
generate these recommendations for you.
To make smart personalized recommendations, Alibaba has developed “E-
commerce Brain” that makes use of real-time online data to build machine
learning models for predicting what customers want and recommending
the relevant products based on their recent order history, bookmarking,
commenting, browsing history, and other actions.
How does Uber enable ridesharing by optimally matching you other passengers
to minimize roundabout routes?
How does Uber minimize the wait time once you book a car?
One of Uber’s biggest uses of machine learning comes in the form of surge
pricing, a machine learning model nicknamed “Geosurge” at Uber. If you are
getting late for a meeting and you need to book an Uber in a crowded area, get
ready to pay twice the normal fare. In 2011, during New Year’s Eve in New
York, Uber charged $37 to $135 for a one-mile journey. Uber leverages
predictive modelling in real time based on traffic patterns, supply and
demand, and has acquired a patent on surge pricing. However, customer backlash
against surge pricing is strong, so Uber is using machine learning to predict
where demand will be high, so that drivers can prepare in advance to meet it
and surge pricing can be largely reduced.
Here are some machine learning examples that you must be using and loving in
your social media accounts, perhaps without knowing that these interesting
features are machine learning applications -
Earlier, Facebook used to prompt users to tag their friends, but nowadays the
social network's artificial neural network algorithm identifies familiar faces
from your contact list. The ANN algorithm mimics the structure of the human
brain to power facial recognition.
The professional network LinkedIn knows where you should apply for your next
job, whom you should connect with, and how your skills stack up against your
peers as you search for a new job.
Government :
FEASIBILITY STUDY
A feasibility study is a test of a system proposal according to its
workability, impact on the organization, ability to meet user needs, and
effective use of resources. The objective of a feasibility study is not to
solve a problem but to acquire a sense of its scope. During the study, the
problem definition is crystallized and the aspects of the problem to be
included in the system are determined. An initial investigation of the system
helped in an in-depth study of the existing system, understanding its
strengths and weaknesses and the requirements for the new proposed system.
The feasibility study was done in three phases, documented below.
It would be problematic to feed into a neural network values that all take wildly
different ranges. The network might be able to automatically adapt to such
heterogeneous data, but it would definitely make learning more difficult. A
widespread best practice to deal with such data is to do feature-wise normalization:
for each feature in the input data (a column in the input data matrix), you subtract
the mean of the feature and divide by the standard deviation, so that the feature is
centered around 0 and has a unit standard deviation.
Note that the quantities that we use for normalizing the test data have been
computed using the training data. We should never use in our workflow any
quantity computed on the test data, even for something as simple as data
normalization.
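The feature-wise normalization described above can be sketched as follows; the feature values are invented stand-ins for the housing data, and note that the test split is scaled with the training statistics only:

```python
import numpy as np

# Invented stand-in features (sqft_living, bedrooms, bathrooms).
train = np.array([[1180.0, 3, 1.0],
                  [2570.0, 3, 2.25],
                  [770.0,  2, 1.0],
                  [1960.0, 4, 3.0]])
test = np.array([[1680.0, 3, 2.0]])

mean = train.mean(axis=0)          # per-feature mean from the training data
std = train.std(axis=0)            # per-feature standard deviation

train_norm = (train - mean) / std  # centered at 0, unit standard deviation
test_norm = (test - mean) / std    # training statistics, never the test set's

print(train_norm.mean(axis=0).round(6))  # approximately [0, 0, 0]
print(train_norm.std(axis=0).round(6))   # approximately [1, 1, 1]
```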
SYSTEM DESIGN
The most creative and challenging phase of the SDLC is system design. The term
design describes the final system and the process by which it is developed. It
includes construction of programs and program testing. The purpose of the
design phase is to plan a solution to the problem specified by the
requirements document. This phase is the first step in moving from the problem
domain to the solution domain. Starting with what is needed, design takes us
towards how to satisfy the needs. The design of the system is perhaps the most
critical factor affecting the quality of the software; it has a major impact
on the later phases, particularly testing and maintenance. The output of this
phase is the design document, which is similar to a blueprint or plan for the
solution and is used later during implementation, testing and maintenance. A
systematic method has to be followed to achieve a beneficial result at the
end: it includes starting with a rough idea and developing it into a series of
steps. The series of steps for successful system development are given below:
Study the problem completely, because first of all we should know the goal
that has to be achieved.
See what kind of output we require and what kind of input we must give so
that we can get the desired output from the system. This is a very challenging
step of system development.
According to the output requirements of the system, design the structure of
the various databases.
Next, decide what kind of programs we should develop to reach the final goal.
Then write these individual programs, which on later joining will solve the
problem.
Then test these programs and make the necessary corrections in them to
achieve the target.
At last, combine all these programs under the menu of the application,
completing the software package.
The two main objectives which the designer has to bear in mind are:
How fast the design will do the users' work, given particular hardware
resources.
The extent to which the design is secure against human errors and machine
malfunctions.
System Requirements
Windows 7+ or Windows Server 2012 R2
64-bit architecture
At least 4 GB of RAM
At least 2 GB of free disk space
The learning approach in this specialization is to start from use cases and then dig
into algorithms and methods, what we call a case-studies approach. We are very
excited about this approach, since it has worked well in several other courses. The
first course is focused on understanding how ML can be used in various case
studies, and the follow-on courses will dig into the details of algorithms and
methods for each of the main ML areas. In the first course, you will not be
implementing algorithms from scratch, but rather building intelligent applications
that use ML. In the subsequent course, we will be implementing and comparing a
wide range of algorithms. To make it easy to implement the use cases we will be
covering, we are recommending a particular set of software tools, but you can
successfully complete the course with other tools out there.
Why Python
In this course, we are going to use the Python programming language to build
several intelligent applications that use machine learning. Python is a simple
scripting language that makes it easy to interact with data. Furthermore, Python has
a wide range of packages that make it easy to get started and build applications,
from the simplest ones to the most complex. Python is widely used in industry, and
is becoming the de facto language for data science in industry. (R is another
alternative language. However, R tends to be significantly less scalable and
has very few deployment tools, so it is seldom used for production code in
industry. It is possible, but highly discouraged, to use R in this
specialization.)
We will also use the IPython Notebook in our videos. The IPython Notebook is a
simple interactive environment for programming with Python, which makes it
really easy to share your results. Think about it as a combination of a Python
terminal and a wiki page. Thus, you can combine code, plots and text to explain
what you did. (You are not required to use IPython Notebook in the assignments,
and should have no problem using straight up Python if you prefer.)
The main goal of this course is to learn core ML concepts, not how to use a
specific software package. Thus, in this course, we recommend you use GraphLab
Create, a package we have been working on for many years now, and has seen an
exciting adoption curve, especially in industry with folks building real
applications. GraphLab Create is a highly scalable machine learning library for
Python, which also includes the SFrame, a highly-scalable library for data
manipulation. A huge advantage of SFrame over Pandas is that with SFrame, you
are not limited to datasets that fit in memory, which allows you to deal with large
datasets, even on a laptop. (The SFrame API is very similar to Pandas' API.)
To download Jupyter on Windows, execute the following steps.
Jupyter requires Python to be installed (it is based on the Python language). There
are a couple of tools that will automate the installation of Jupyter (and optionally
Python) from a GUI. In this case, we are showing how to install using Anaconda,
which is a Python tool for distributing software. You first have to install Anaconda.
It is available on Windows and Mac environments. Download the executable from
https://www.continuum.io/ (company that produces Anaconda) and run it to install
Anaconda. The software provides a regular installation setup process, as shown in
the following screenshot:
The installation process goes through the regular steps of making you agree to the
distribution rights license:
The standard Windows installation allows you to decide whether all users on the
machine can run the new software or not. If you are sharing a machine with
different levels of users, then you can decide the appropriate action:
After clicking on Next, it will ask for a destination for the software to reside (I
almost always keep the default paths):
3. From the top right, find the button labeled "New▾". Click the button to get a
drop-down menu, and select "Python 2" under the sub-heading "Notebooks." This
should create a new notebook inside the home directory of IPython notebook.
import os
print os.getcwd()
5. Place any files (notebooks and datasets) under the home directory. You may
organize your files using sub-folders.
6. All files and folders placed inside the home folder will appear in the main page:
Implementation Issues
The implementation phase of software development is concerned with
translating the design specifications into source code. After the system has
been designed arrives the stage of putting it into actual usage, known as the
implementation of the system. This involves putting the theoretically designed
system into actual practical usage. The primary goal of implementation is to
write the source code and the internal documentation so that conformance of
the code to its specifications can easily be verified, and so that debugging,
modification and testing are eased. This goal can be achieved by making the
source code as clear and as straightforward as possible. Simplicity, elegance
and clarity are the hallmarks of good programs, whereas complexity is an
indication of inadequate design and misdirected thinking. The system
implementation is a fairly complex and expensive task requiring numerous
interdependent activities. It involves the effort of a number of groups of
people: the users, the programmers, the computer operating staff, etc. This
needs proper planning to carry out the task successfully.
Thus it involves the following activities:
§ Writing and testing of programs individually
§ Testing the system as a whole using live data
§ Training and education of the users and supervisory staff
Source code clarity is enhanced by using structured coding techniques, an
efficient coding style, appropriate supporting documents, efficient internal
comments and the features provided in modern programming languages.
The following are the structured coding techniques:
1) Single Entry, Single Exit
2) Data Encapsulation
3) Using recursion for appropriate problems
Testing
The most important activity at the implementation stage is the system testing with
the objective of validating the system against the designed criteria. During the
development cycle, user was involved in all the phases that are analysis, design and
coding. After each phase the user was asked whether he was satisfied with the
output and the desired rectification was done at the moment. During coding,
generally bottom up technique is used. Firstly the lower level modules are coded
and then they are integrated together. Thus before implementation, it involves the
testing of the system. The testing phase involves testing first of separate parts of
the system and then finally of the system as a whole. Each independent module is
tested first and then the complete system is tested. This is the most important phase
of the system development. The user carries out this testing and test data is also
prepared by the user to check for all possible combinations of correct data as well
as the wrong data that is trapped by the system. So the testing phase consists of the
following steps:
Unit testing: In the bottom of coding technique, each module is tested
individually. Firstly the module is tested with some test data that covers all
the possible paths and then the actual data was fed to check for results.
Integration testing: After all the modules are ready and duly tested, these
have to be integrated into the application. This integrated application was
again tested first with the test data and then with the actual data.
Parallel testing: The third in the series of tests before handing over the
system to the user is the parallel processing of the old and the new system.
At this stage, complete and thorough testing is done, and anything that goes
wrong is sorted out. This provides better practical support to the persons
using the system for the first time, who may be uncertain or even nervous
using it.
The testing also covers the following aspects of the system:
1) Clerical procedure for collection and disposal of results
2) Flow of data within the organization
3) Accuracy of report output
4) Software testing which involves testing of all the programs together. This
involves the testing of system software utilities being used and specifically
develops application software.
5) Incomplete data formats
6) Halts due to various reasons and the restart procedures.
7) Range of items and incorrect formats
8) Invalid combination of data records.
9) Access control mechanism used to prevent unauthorized access to the system.
CASE STUDIES
Case Study 1: LINEAR REGRESSION
What is linear regression?
y = a_0 + a_1 * x
The motive of the linear regression algorithm is to find the best values for a_0 and
a_1. Before moving on to the algorithm, let’s have a look at two important
concepts you must know to better understand linear regression.
Cost Function
The cost function helps us to figure out the best possible values for a_0 and a_1
which would provide the best fit line for the data points. Since we want the best
values for a_0 and a_1, we convert this search problem into a minimization
problem where we would like to minimize the error between the predicted value
and the actual value.
We choose to minimize the mean squared error between the predicted values and
the ground truth:
J = (1/N) * sum over i of (pred_i - y_i)^2, where pred_i = a_0 + a_1 * x_i
We square each error difference, sum over all data points and divide by the
total number of data points, which gives the average squared error over all
the data points. Therefore, this cost function is also known as the Mean
Squared Error (MSE) function. Now, using this MSE function, we are going to
change the values of a_0 and a_1 such that the MSE value settles at the
minimum.
Gradient Descent
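A minimal sketch of gradient descent for a_0 and a_1, minimizing the MSE cost described above; the toy data (which follow y = 2x + 1) and the learning rate are invented for illustration:

```python
# Gradient descent for the line y = a_0 + a_1 * x, minimizing
# MSE = (1/n) * sum((a0 + a1*x - y)^2) over the data points.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # noiseless toy data: y = 2x + 1

a0, a1 = 0.0, 0.0        # initial guesses
lr = 0.02                # learning rate (step size)
n = len(xs)

for _ in range(5000):
    # Partial derivatives of the MSE with respect to a0 and a1.
    grad0 = (2.0 / n) * sum((a0 + a1 * x - y) for x, y in zip(xs, ys))
    grad1 = (2.0 / n) * sum((a0 + a1 * x - y) * x for x, y in zip(xs, ys))
    a0 -= lr * grad0     # step opposite the gradient
    a1 -= lr * grad1

print(round(a0, 3), round(a1, 3))  # converges toward a_0 = 1, a_1 = 2
```

Each iteration steps both parameters in the direction that reduces the MSE, so the fitted line approaches the one that generated the data.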
Case Study 2: SENTIMENT ANALYSIS
As with many other fields, advances in machine learning have brought sentiment
analysis into the foreground of cutting-edge algorithms. Today we use natural
language processing, statistics, and text analysis to extract and identify the
sentiment of text as positive, negative, or neutral.
One of the most well documented uses of Sentiment Analysis is to get a full 360
view of how your brand, product, or company is viewed by your customers and
stakeholders. Widely available media, like product reviews and social media posts, can reveal
key insights about what your business is doing right or wrong. Companies can also
use sentiment analysis to measure the impact of a new product, ad campaign, or
consumer’s response to recent company news on social media. Private companies
like Unamo offer this as a service.
A lot of these applications are already up and running. Bing recently integrated
sentiment analysis into its Multi-Perspective Answers product. Hedge funds are
almost certainly using the technology to predict price fluctuations based on public
sentiment. And companies like CallMiner offer sentiment analysis for customer
interactions as a service.
Sentiment Analysis can be used to quickly analyze the text of research papers,
news articles, social media posts like tweets and more.
The algorithm takes an input string and returns a rating from 0 to 4, which
corresponds to the sentiment being very negative, negative, neutral, positive, or
very positive.
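A toy illustration of such a 0-to-4 rating, using a hand-written lexicon; the word lists and thresholds below are invented for the sketch, whereas real systems use trained models rather than fixed word lists:

```python
# Invented sentiment lexicons, for illustration only.
POSITIVE = {"good", "great", "excellent", "love", "amazing"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "poor"}

def rate(text):
    """Return a rating from 0 (very negative) to 4 (very positive)."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score <= -2:
        return 0   # very negative
    if score == -1:
        return 1   # negative
    if score == 0:
        return 2   # neutral
    if score == 1:
        return 3   # positive
    return 4       # very positive

print(rate("I love this amazing product"))   # 4
print(rate("the delivery was terrible"))     # 1
```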
Case Study 3: CLUSTERING
Clustering is the task of dividing the population or data points into a number
of groups such that data points in the same group are more similar to each
other than to the data points in other groups. It is basically a grouping of
objects on the basis of similarity and dissimilarity between them.
For example, data points that lie close together in a scatter plot can be
classified into one single group; we can distinguish the clusters and may be
able to identify, say, 3 distinct clusters in such a plot.
Why Clustering ?
Clustering Methods :
2. Hierarchical-Based Methods : The clusters formed in this method form a
tree-type structure based on the hierarchy. New clusters are formed using the
previously formed ones. It is divided into two categories:
-> Agglomerative (bottom-up approach)
-> Divisive (top-down approach)
3. Partitioning Methods : These methods partition the objects into k clusters,
and each partition forms one cluster. This method is used to optimize an
objective criterion similarity function, such as when distance is a major
parameter. Examples: K-means, CLARANS (Clustering Large Applications based
upon Randomized Search), etc.
4. Grid-Based Methods : In this method the data space is formulated into a
finite number of cells that form a grid-like structure. All the clustering
operations done on these grids are fast and independent of the number of data
objects. Examples: STING (Statistical Information Grid), WaveCluster, CLIQUE
(Clustering In Quest).
Clustering Algorithms:
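As a concrete illustration of one such algorithm, here is a minimal k-means sketch in NumPy on invented 2-D points; in practice a library implementation (e.g. in GraphLab Create or scikit-learn) would be used instead:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate assignment and center-update steps."""
    rng = np.random.default_rng(seed)
    # Initialize centers at k distinct data points.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two invented, well-separated groups of 2-D points.
pts = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],
                [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])
labels, centers = kmeans(pts, k=2)
print(labels)  # the first three points share one label, the last three the other
```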
import graphlab
sales = graphlab.SFrame('home_data.gl/')
sales
graphlab.canvas.set_target('ipynb')
train_data,test_data = sales.random_split(.8,seed=0)
print test_data['price'].mean()
543054.042563
sqft_model = graphlab.linear_regression.create(train_data,target='price',features=['sqft_living'])
print sqft_model.evaluate(test_data)
{'max_error': 4149118.5001014257, 'rmse': 255176.56433446918}
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(test_data['sqft_living'],test_data['price'],'.',
         test_data['sqft_living'],sqft_model.predict(test_data),'-')
sqft_model.get('coefficients')
features = ['bedrooms','bathrooms','sqft_living','sqft_lot','floors','zipcode']
sales[features].show()
sales.show(view='BoxWhisker Plot',x='zipcode',y='price')
features_model =
graphlab.linear_regression.create(train_data,target='price',features=features)
Linear regression:
--------------------------------------------------------
Number of examples : 16488
Number of features : 6
Number of unpacked features : 6
Number of coefficients : 115
Starting Newton Method
--------------------------------------------------------
+-----------+--------+--------------+--------------------+----------------------+---------------+-----------------+
| Iteration | Passes | Elapsed Time | Training-max_error | Validation-max_error | Training-rmse | Validation-rmse |
+-----------+--------+--------------+--------------------+----------------------+---------------+-----------------+
| 1         | 2      | 0.062400     | 2593719.371933     | 3834931.820422       | 180540.247964 | 206634.094708   |
+-----------+--------+--------------+--------------------+----------------------+---------------+-----------------+
print sqft_model.evaluate(test_data)
print features_model.evaluate(test_data)
{'max_error': 4149118.5001014257, 'rmse': 255176.56433446918}
{'max_error': 3543180.6009335285, 'rmse': 179779.09588756037}
house1 = sales[sales['id']=='5309101200']
house1
print house1['price']
[620000L, ... ]
print sqft_model.predict(house1)
[629313.8131997312]
print features_model.predict(house1)
[722914.1214590659]
house2 = sales[sales['id']=='1925069082']
house2
print sqft_model.predict(house2)
[1259315.364362002]
print features_model.predict(house2)
[1460311.9229361552]
MAINTENANCE
Maintenance Environment
The proper maintenance of the new system is very important for its smooth
working. The maintenance of the software is to be done by the system analysts
and programmers in the organization, but for hardware maintenance an engineer
may be called from the vendor from which the hardware was purchased.
CONCLUSION
Here’s what you should take away from this example:
Regression models are evaluated with error metrics such as RMSE and max
error, not accuracy.
Features that take values in wildly different ranges should be normalized
feature-wise, using statistics computed on the training data only.
Adding more informative features (bedrooms, bathrooms, floors, zipcode and so
on) reduced the test RMSE from about 255,177 to about 179,779, so richer
features can substantially improve predictions.
REFERENCES
www.google.com
www.coursera.org
www.geeksforgeeks.org
www.sas.com