“Recommendation System”
Bachelor of Technology
in
Computer Science & Engineering
Submitted by:
Deepika Kovi 01FB15ECS090
Gadamsetty Rohith 01FB15ECS100
Omkar D 01FB15ECS198
Internal Guide
Mahesh H B
Associate Professor,
PES University
FACULTY OF ENGINEERING
CERTIFICATE
This is to certify that the dissertation entitled
‘Recommendation System’
submitted in partial fulfilment of the requirements for the eighth-semester project work in the
Program of Study Bachelor of Technology in Computer Science and Engineering, under the rules
and regulations of PES University, Bengaluru, during the period Jan. 2019 – May 2019. It is
certified that all corrections/suggestions indicated for internal assessment have been
incorporated in the report. The dissertation has been approved as it satisfies the 8th-semester
academic requirements in respect of the project work.
External Viva
1. __________________________ __________________________
2. __________________________ __________________________
DECLARATION
We hereby declare that the project entitled “Recommendation System” has been carried out
by us under the guidance of Prof. Mahesh H B, Associate Professor, and submitted in partial
fulfillment of the course requirements for the award of degree of Bachelor of Technology in
Computer Science and Engineering of PES University, Bengaluru during the academic
semester January – May 2019. The matter embodied in this report has not been submitted to
any other university or institution for the award of any degree.
We would like to express our gratitude to Prof. Mahesh H B, Dept. of Computer Science, PES
University, for his continuous guidance, assistance and encouragement throughout the
development of this project.
We are grateful to the project coordinators, Prof. Preet Kanwal and Prof. Sangeeta V I, for
organizing, managing and helping out with the entire process.
We would also like to take this opportunity to thank Dr. Shylaja S S, Chairperson, Department
of Computer Science and Engineering, PES University, for all the knowledge and support we
have received from the department.
We would like to thank Dr. B K Keshavan, Dean of Faculty, PES University, for his help.
We would also like to thank Dr K N B Murthy, Vice Chancellor, PES Institutions, for his
continuous encouragement and motivation.
We would also like to extend our sincerest gratitude to Prof Jawahar Doreswamy, Pro
Chancellor, PES Institutions for his inspirational dedication and support to all activities
associated with PES.
We would like to thank Dr M R Doreswamy, Chancellor, PES Institutions for his enormous
support and motivation.
Finally, this project could not have been completed without the continual support and
encouragement we have received from our parents.
ABSTRACT
The goal of this project is to develop a web application that users can employ to obtain
recommendations based on their specifications. For a weekend or other planned days, the
users are recommended a few places to visit at a chosen location, along with a hotel to stay
at near that location and a nearby restaurant selected according to its rating. In essence,
our recommendation system consolidates three separate searches (for a location, a hotel and
a restaurant) into a single system. The system is designed to be user-friendly and to
minimize the work required of the end user.
TABLE OF CONTENTS
1. INTRODUCTION
1.1 Recommendation System
1.2 Motivation
1.3 Scope
2. PROBLEM DEFINITION
2.1 Problem Statement
3. LITERATURE SURVEY
3.1. Credit Risk Modeling: Combining Classification and Regression Algorithms to Predict
Expected Loss
3.1.1. Introduction
3.1.2. Approach
3.1.3. Results & Conclusion
3.2. Credit Default Mining Using Combined Machine Learning and Heuristic Approach
3.2.1. Introduction
3.2.2. Approach
3.2.3. Results & Conclusion
4. PROJECT REQUIREMENTS SPECIFICATION
4.1 Customer Requirements
4.2 Dependencies
5. SYSTEM REQUIREMENTS SPECIFICATION
5.1 Requirements
5.1.1 Software Requirements
5.1.2 Hardware Requirements
5.2 Assumptions
6. HIGH LEVEL DESIGN
6.1. System Architecture
6.1.1. Web Application (the Frontend)
6.1.2. Web Server (the Backend)
6.1.3. Machine Learning Model
6.2. Design Description
6.3. Use Case Diagram
6.3.1. Use Case Descriptions
6.4. Class Diagram
6.4.1. Class Description
6.5. Sequence Diagram
6.6. Activity Diagram
6.7. ER Diagram
6.7.1. Entity Descriptions
6.8. User Interface Diagrams
6.8.1. Index Screen
6.8.2. Home Screen
6.9. External Interfaces
6.9.1. User APIs
6.9.2. Prediction APIs
6.10. Packaging and Deployment Diagram
6.11. Help
6.12. Alternate Design Considerations
6.12.1. Django
6.12.2. NodeJS
7. LOW LEVEL DESIGN
7.1. Design Description
7.1.1. User
7.1.2. Prediction
7.1.3 Front End Organization
8. IMPLEMENTATION AND PSEUDOCODE
8.1. Frontend
8.1.1 Material Design
8.1.2 Components
8.1.3 Services
8.1.4 JSON Web Token
8.2. Backend
8.2.1. JSON Web Token
8.2.2. MongoDB
8.2.3. Error Handling
8.2.4. Integration Tests
8.3. Model
8.3.1. Data Preprocessing
8.3.2. Gradient Boosted Trees
8.3.3. K-Fold Cross-Validation
8.3.4. Hyperparameter Optimization
8.4 Tools and Technologies
8.4.1 Web FrontEnd
8.4.2 Web Backend
8.4.3 ML Model
9. TESTING
9.1. Scope
9.2. Test Strategies
9.3. Performance Criteria
9.4. Testing Environment
9.4.1. Hardware
9.4.2. Software
9.5. Risk Identification and Contingency Planning
9.6. Roles and Responsibilities
9.7. Test Schedule
9.8. Test Tools
9.9. Acceptance Criteria
9.10. Test Case List
10. RESULTS AND DISCUSSION
11. SNAPSHOTS
12. CONCLUSIONS
13. FURTHER ENHANCEMENTS
14. REFERENCES
BIBLIOGRAPHY
APPENDIX A
APPENDIX B
LIST OF FIGURES
# Name
6.1. System Architecture
6.2. High Level Design
6.3. Use Case Diagram
6.4. Class Diagram
6.5. Sequence Diagram
6.6. Activity Diagram
6.7. Entity Relationship Diagram
6.8. Packaging
8.1.1. Component Usage
8.1.2. Service Usage
8.1.3. Token header
8.1.4. Token header field passed in HTTP request
8.2.1. Example backend API
8.2.1.1. JWT usage
8.2.1.2. JWT decoding
8.2.1.3. JWT encoding
8.2.2.1. MongoDB find command
8.2.2.2. MongoDB remove command
8.2.3.1. Error code example
8.2.4.1. Login test
8.3.1.1. Removal of spurious data
8.3.1.2. Imputation and scaling of data
8.3.2.1. Gradient boosted trees working
8.3.3.1. K-fold cross-validation
8.3.4.1. Hyperparameter optimization output
11.1. Index page
11.2. Password length validation
11.3. Login input validation
11.4. Valid login example
11.5. Prediction input screen
11.6. Prediction output screen
11.7. Past predictions
11.8. Delete prediction
LIST OF TABLES
# Name
4.1. Customer requirements specification
6.1. Use case descriptions
6.2. Entity descriptions
6.3. User APIs
6.4. Prediction APIs
7.1. User class data members
7.2. User class methods
7.3. Prediction class data members
7.4. Prediction class methods
9.1. Test strategies
9.2. Testing environment hardware
9.3. Testing environment software
9.4. Risk identification
9.5. Roles and responsibilities
9.6. Test case list
CHAPTER-1
INTRODUCTION
In collaborative filtering a user expresses his or her preferences by rating items (e.g. hotels,
restaurants) of the system. These ratings can be viewed as an approximate representation of
the user's interest in the corresponding domain.
The system matches this user's ratings against other users' and finds the people with most
"similar" tastes.
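As a concrete illustration of this matching step, the sketch below computes user-to-user cosine similarity over co-rated items on a toy ratings matrix. The matrix, the users and the values are purely illustrative, not part of the actual system:

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = items (e.g. hotels, restaurants).
# 0 means "not rated".
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 3, 4, 4],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors, using only co-rated items."""
    mask = (a > 0) & (b > 0)          # compare only items both users rated
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_user(ratings, user_idx):
    """Index of the user whose tastes are closest to user_idx."""
    sims = [cosine_similarity(ratings[user_idx], ratings[j]) if j != user_idx
            else -1.0
            for j in range(len(ratings))]
    return int(np.argmax(sims))

print(most_similar_user(ratings, 0))  # prints 1: user 1's ratings align best with user 0's
```

Recommendations would then be drawn from items the most similar users have rated highly.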
1.3 Scope
The report for this project has been organized into fourteen chapters. This chapter gives a
brief introduction to the project. It discusses what a recommendation system is and the
motivation behind the project.
Chapter 3 gives a brief summary of the work that has been done in the domain so far.
Chapter 4 defines the requirements from the point of view of the customer.
Chapter 5 defines the system requirements, namely the hardware and software requirements.
Chapter 7 describes the low level details of the implementation of the project.
Chapter 8 focuses on implementation details and the salient features of the project.
Chapter 9 describes the testing plans and process undertaken for this project.
Chapter 13 discusses about the future enhancements that are possible for the project.
Chapter 14 lists the resources we have used while developing the project and preparing the
report.
PROBLEM DEFINITION
A recommendation system covering three different features: places to visit, hotels to stay in
and restaurants to eat at.
As noted in the introduction, an end user currently has to search several different places or
websites for the main components of a trip. To minimize this work, we provide a recommender
system that allows the end user to search for the hotel or restaurant they want to visit,
after which the recommendation system provides them with the specifics.
This helps end users pick an ideal location for their travel, together with places they can
visit nearby and some good restaurants and hotels at which to eat and stay.
LITERATURE SURVEY
3.1.1. Introduction
This paper states that credit scoring, which is the numerical evaluation of the risk of credit
default, is an important first step in the reduction of credit risk, and mentions that it has
been proven to be highly effective. The authors go on to state that the main goal of credit scoring is
to predict the expected loss (EL), which is defined as follows:
EL = PD * LGD * EAD
Here, PD is the probability of default, meaning how likely the borrower will not be able to
(fully) pay back the loan; LGD is the loss given default which is defined as the part of the loan
the lender will be unable to recover; and EAD refers to the exposure at default which is simply
the amount of money at risk.
The authors discuss how previous research in this field has mainly focussed on the calculation
of PD, which is arguably of high importance. The authors propose that the second parameter,
LGD, will help in gaining a more nuanced picture of credit risk.
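Once the three parameters have been estimated, computing EL itself is a single multiplication. A minimal sketch of the formula above, with purely illustrative numbers:

```python
def expected_loss(pd_, lgd, ead):
    """EL = PD * LGD * EAD, using the definitions from the paper above.

    pd_: probability of default, lgd: loss given default (fraction of the
    loan that cannot be recovered), ead: exposure at default (money at risk).
    """
    return pd_ * lgd * ead

# Illustrative only: a 10% default probability, 60% of the loan unrecoverable
# on default, and 50,000 currency units at risk.
el = expected_loss(pd_=0.10, lgd=0.60, ead=50_000)
print(round(el, 2))  # 3000.0
```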
3.1.2. Approach
The paper first compares the dataset being used with others that have been used in previous
research. The dataset in question contained many more attributes (~750 vs. ~30) and a much
larger number of rows (~100,000 vs. ~1,000).
The preprocessing of the data involved first imputing the missing values with column means
since most algorithms are unable to handle missing data, and then scaling the data.
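That preprocessing step (mean imputation followed by scaling) can be sketched with scikit-learn; the data below is a toy stand-in, not the paper's dataset:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Tiny illustrative feature matrix with one missing value (np.nan).
X = np.array([[1.0, 200.0],
              [np.nan, 100.0],
              [3.0, 300.0]])

# Replace missing values with the column mean, then standardize each column.
pipeline = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler())
X_clean = pipeline.fit_transform(X)

# After the transform, each column has mean ~0 and unit variance.
print(X_clean.shape)
```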
The paper chose the F1 metric for benchmarking purposes:
F1 = (2 * precision * recall) / (precision + recall)
where precision = TP / (TP + FP) and recall = TP / (TP + FN).
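For completeness, F1 can be computed directly from confusion-matrix counts; the counts below are illustrative:

```python
def f1_score_from_counts(tp, fp, fn):
    """F1 = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 80 true positives, 20 false positives, 40 false negatives
# give precision = 0.8 and recall = 2/3.
print(round(f1_score_from_counts(80, 20, 40), 3))  # 0.727
```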
Finally, the authors discuss which machine learning techniques they used, namely Logistic
Regression, Support Vector Classification and Stochastic Gradient Boosted Tree
Classification.
3.2.1. Introduction
The paper states that traditional statistical techniques are unable to handle large amounts of
data and are not very successful in modelling the dynamic nature of fraud. The authors also
mention that analyzing millions of transactions and making a prediction based on that is
resource intensive and sometimes error prone.
The authors suggest the combination of OLAP (Online Analytical Processing) which is done
on archived historical data, and OLTP (Online Transaction Processing) which is performed on
current transactions.
The paper mentions that while techniques such as KNN, SVM, etc. are commonly used, the
Extremely Randomized Trees method has often been ignored. In contrast to the regular
Random Forest method, Extremely Random Trees determine the splitting attribute in an
extremely random manner.
3.2.2. Approach
The paper follows two approaches- one being the standard approach of using different machine
learning algorithms on the dataset, and the other, which it calls the Heuristic Approach, that
involves combining the score from the archived data along with a certain heuristic used in real-
time.
The authors opted to go with the “Taiwan” dataset which contains 23 features and 30000
instances (22% of which are default cases).
The metrics used to evaluate the models were the accuracy, recall, F1 score and precision.
Machine learning algorithms used include KNN, Naive Bayes, Random Forest and Extremely
Random Trees.
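A sketch of such a comparison, using scikit-learn's RandomForestClassifier and ExtraTreesClassifier (the library's implementation of Extremely Randomized Trees) on synthetic data shaped roughly like the Taiwan dataset (23 features, ~22% positives). This is an assumption-laden stand-in for the paper's experiment, not a reproduction of it:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 23 features, roughly 22% default (positive) cases.
X, y = make_classification(n_samples=2000, n_features=23, weights=[0.78],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Extremely Randomized Trees pick split thresholds at random, whereas a
# Random Forest searches for the best threshold among candidate features.
for model in (RandomForestClassifier(random_state=42),
              ExtraTreesClassifier(random_state=42)):
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          f"acc={accuracy_score(y_te, y_pred):.3f}",
          f"prec={precision_score(y_te, y_pred):.3f}",
          f"rec={recall_score(y_te, y_pred):.3f}",
          f"f1={f1_score(y_te, y_pred):.3f}")
```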
This section of the document explains the system's salient features, the interfaces of the
system, what the system will do, and the constraints the system may be subjected to during
its normal run.
Reqmt #   Requirement
CRS-4     The end user will input the coast they want to visit and the ratings they would
          prefer for the results.
CRS-5     The user will input a place they would like to visit, and the system will
          recommend the places they can visit near the location.
4.2 Dependencies
The project depends on the collaborative filtering method, which is also called social
filtering. It predicts what a user is likely to prefer in the future based on the historical
ratings of users with similar tastes, without requiring descriptive features of the items
themselves.
Since our system is based on collaborative filtering, the system requirements for the end
user are very minimal.
5.1 Requirements
[Figure: system architecture. The end user provides a location and preferences; the
recommendation system searches nearby restaurants and takes data via an API.]
5.2 Assumptions
● The client has an active internet connection to send requests and receive responses from
the server.
● The client has a working computer that satisfies the above-mentioned software and
hardware requirements.
● The user knows all the values to be provided as input for the recommendation.
The above high level design diagram gives a slightly detailed overview of our application. It
shows the potential users of our application, their actions, and the organization of the
system into layers, along with the components that make up each layer and their interactions.
The above figure portrays the behavior and functionality provided by the application, by
depicting the actors (end user and administrator) and use cases (Registration, Train the model,
etc.).
The above figure is a class diagram depicted using the Unified Modeling Language (UML). It
indicates the structure of the system by showing its classes along with their attributes,
operations and relationships.
The above diagram shows object interactions in chronological order. It depicts the sequence of
messages exchanged between objects which are needed to carry out the functionality in the
shown scenario. For the use case of generating a prediction, the above diagram shows the
sequence of steps and the interaction between the different components of the system.
The above figure represents the activity diagram depicted using the Unified Modeling
Language (UML). This diagram exhibits the dynamic aspects of the system. It depicts the flow
from one activity to another.
Barring registration, a user will have to be logged in to perform any other activity. Once logged
in, a user can choose to enter values, generate a prediction and save it if they wish, or they can
view past predictions that have been generated by them and can choose to delete one of their
choice.
6.7. ER Diagram
The above Entity-Relationship model is composed of entity types (User, Prediction) and
specifies the relationships that can exist between them. It defines the information structure
which is later used during the database implementation. Since the database being used is a
NoSQL one, the attributes here are represented as a JSON object with the various fields as its keys.
/api/users/register POST Takes as input a user object with the fields username,
email id and password, all of which are string fields.
This API queries the database to check whether the email id
is already associated with another account. If such a
document is found, an HTTP status code of 409 is sent to
denote a conflict. If no such document is found, a new
user document is created.
/api/predictions/generate POST This API will take an object with input fields that
are necessary to generate a prediction. This will in
turn be fed to the machine learning model to
generate a prediction. The output of this API will
be the request body along with the predicted target
variable with a status code of 200 or appropriate
http status codes in case of failure or error.
/api/predictions/ GET Lists all the predictions a user has made in the past
Table 6.4. Prediction APIs
All of the above APIs, except the registration and usernameExists APIs, require a JSON
Web Token to be sent in the header field. The APIs first authenticate the request and only
then proceed with the operation.
[Figure: packaging and deployment, showing the client browser, the application server and
the database.]
6.11. Help
The simple and intuitive user interface makes sure the user does not have any trouble using or
navigating through the site. When needed, additional instructions will be provided. For
example, the input fields are all equipped with tooltips to inform the user what exactly he/she
is expected to enter.
6.12.1. Django
When deciding which backend framework to use, in addition to Flask, we also considered
Django which is another web server written in Python. Some of the advantages and
disadvantages it has over Flask are listed below:
Advantages:
● Includes a built-in admin panel
● Comes with an ORM (Object-relational mapping) which greatly simplifies database
usage
● A pre-made directory structure
● Several ready-to-use database interfaces
Disadvantages:
● Complex in nature
● Does not provide as fine-grained control
● Is opinionated (has a set way of doing things)
6.12.2. NodeJS
Another backend framework we considered was NodeJS. The pros and cons with respect to
Flask are as below:
Advantages:
● Highly performant in terms of I/O; great for real-time applications
● Single language for both the backend and frontend (Javascript)
Disadvantages:
● Slower development (initially)
● Python arguably easier to use than Javascript
● Not as efficient with CPU-intensive applications
● Since the machine learning model uses Python and its libraries, generating a prediction
would require spawning a new process and formatting the input to pass as a command-line
argument to the spawned process, which would significantly increase the time required
to generate the prediction. This would also have led us dangerously close to the
time limit of 5 seconds for generating a prediction, as was agreed initially.
7.1.1. User
This class represents a user of the application. It contains the data members needed for
authentication (logging in) and has methods to create, delete and update user information.
Data Members
Methods
Name Purpose Input Output
7.1.2. Prediction
This class represents a prediction that is requested by a user. It stores all the attributes needed
to make a prediction and an additional field to store the outcome of the prediction itself.
Data Members
Methods
Name Purpose Input Output
8.1. Frontend
The web front end of the application was built using Angular 6, a contemporary and popular
JavaScript framework for developing desktop and mobile web applications. It adopts an MVVM
architecture, scales well, and makes applications straightforward to build. Angular uses
TypeScript and supports two-way binding of data between the view and the model.
Important features of the implementation are listed below:
8.1.2 Components
Angular applications are built from components, each with its own HTML template, CSS, and
model attributes. The components in Angular define views. Each component implements a custom
HTML tag that can be embedded in the HTML definitions of other components, making components
highly reusable.
A code snippet of the html definition of a component:
In the above code sample, the HTML tag <app-pred-exp-panel> is a custom tag constructed
from a component defined in our project. This makes developing the project considerably
easier and encourages reuse.
The CSS classes used above are part of the material design framework. The *ngFor directive
enables us to display a list. So, the <app-pred-exp-panel> is appended as many times as the
number of objects in the details array.
(deleteId) denotes an event this component receives from its child component.
Whenever there is a change, this parent component is notified and the appropriate action
is performed by the component.
8.1.3 Services
Angular services are used for communication between components and for communication
with the server backend. Each component that needs to communicate with the server has a
service defined for itself. This makes the code organized, modular and easy to maintain and
read.
In this project, services are present mainly to communicate with the server. These service
classes have functions defined to make http requests to the backend server. The services are
injected into the components that will make use of this service. We have made use of the
HttpClient module defined in @angular/common/http. It provides an easy to use API and is
based on the XMLHttpRequest interface provided by most popular web browsers. This module
provides easy ways to make different http requests such as GET, PUT, POST and DELETE. It
also takes as input a callback function that is invoked when it receives the response. All http
requests made using this module are asynchronous. All requests are RESTful. All request
bodies are in JSON format.
The above code snippet displays a service class part of our project. HttpClient is injected as a
dependency into the service. The endpoint is defined globally in a separate script file that is
also imported globally. Endpoint is a string containing the protocol, IP address and port of the
server. The above example consists of functions that make HTTP requests to the server. The
HttpClient functions take the URI path and, in the case of a POST request, the request body.
Since Angular is based on TypeScript and JavaScript, the objects passed are manipulated as
JSON objects. Therefore, converting the objects to JSON format in the requests is not
necessary. In the validateUsername function, the username is embedded as a path parameter in
the URI of the request.
These requests are subscribed to in the components, i.e., the component registers the request,
and when a response arrives the appropriate callback function is called.
For requests that require authentication, a JSON Web Token is passed in the headers of the
request, which is decoded by the server to authorize them.
When the user logs in successfully, the server sends a response with a JSON Web Token. The
client receives it and it is stored in the local storage. For all further requests that require
authentication, the token is fetched from local storage and a header field ‘token’ is created
populated with the value fetched from local storage. It is appended to the header of the http
request sent to the server.
In the above code snippet, the headers constructed as shown in Fig. 8.1.3 are passed as an
argument to the post function of the HttpClient module, which appends the header field to the
headers of the HTTP request.
8.2. Backend
Flask is a Python-based web framework written by Armin Ronacher. It is often considered a
microframework since it does not require particular tools or libraries to work; it does not
come with built-in features such as form validation or database abstraction. The backend is
composed of RESTful API endpoints, as described in the previous section.
The above code sample is an example of an API in the backend. It uses the HTTP GET method.
The @token_required decorator is used to check the request headers and verify that the user
has been authenticated. The jsonify function is used to convert a Python dictionary into JSON
which can be interpreted by the frontend.
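Since the referenced figure is not reproduced here, the pattern described above can be sketched as follows. The route, secret key, payload fields and error responses are illustrative assumptions, not the project's exact code:

```python
from functools import wraps

from flask import Flask, jsonify, request
import jwt  # PyJWT

app = Flask(__name__)
SECRET = "change-me"  # illustrative; a real deployment loads this from config

def token_required(f):
    """Reject requests whose 'token' header is missing or fails JWT validation."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        token = request.headers.get("token")
        if not token:
            return jsonify({"error": "token missing"}), 401
        try:
            payload = jwt.decode(token, SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify({"error": "token invalid"}), 401
        # Pass the authenticated username through to the view function.
        return f(payload["username"], *args, **kwargs)
    return wrapper

@app.route("/api/predictions/", methods=["GET"])
@token_required
def list_predictions(username):
    # A real implementation would query MongoDB for this user's predictions.
    return jsonify({"user": username, "predictions": []}), 200
```

The decorator runs before the view function, so unauthenticated requests never reach the database layer.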
8.2.2. MongoDB
MongoDB is a document-oriented database program. In contrast to the more common relational
databases, MongoDB does not organize data in the form of tables, but rather as documents in
a JSON-like format called BSON.
8.3. Model
The model is arguably the most important module in the application. It is responsible for
determining whether an individual is likely to default on his/her loan. In order to make the most
accurate model possible, we took the following into consideration:
8.4.3 ML Model
● Scikit Learn
● Pandas & Numpy
● LightGBM
TESTING
9.1. Scope
The testing performed includes the unit, integration, system, and regression testing approaches.
The following things fall under the scope of this project:
● Testing of all functional, application performance, security and use cases requirements
listed in the LLD and CRS documents.
● Quality requirements and fit metrics defined in CRS document.
● End-to-end testing and testing of interfaces of all systems that interact with the system.
Error Handling   Front end: handling status codes and error messages
and displaying them to the user in a meaningful format.
9.4.1. Hardware
Component Metric Value
9.4.2. Software
Name Details
OS Ubuntu/Debian-based distro
Database MongoDB
Table 9.3. Testing environment software
Member Responsibilities
Nitish J Makam 1. Develop primary test plans for the front end UI.
2. Develop primary test plan for backend database.
3. Manually test the User Interface
Rahul N Pujari 1. Develop primary test plans for the back end API
2. Add test cases for the APIs
2   test_login: Generate a unique username and register it in the database. Verify that the
    login API returns an error code with an incorrect password and a success code with the
    correct one. Expected: 403 the first time, then 200.
From the model's perspective, we were able to achieve an accuracy of 92% on the labeled part
of the dataset, measured on the accumulated out-of-fold predictions while using K-fold
cross-validation with k = 5.
Overall, this project was an excellent learning experience for all of us. We familiarized
ourselves with technologies such as Flask and MongoDB. We also discovered new machine
learning algorithms like Gradient Boosted Trees and Extremely Randomized Trees.
Building this project also gave all of us first-hand experience of how a problem is identified
and the approaches that can be taken to solve it. It introduced us to the standards followed
in documentation: how requirements are identified and specified, along with the constraints
and system requirements, and how to decide on designs while keeping the trade-offs in mind
and attempting to keep things simple. It also taught us about estimating effort, planning and
adhering to a schedule, and familiarized us with the phases involved in the development
lifecycle of a project.
SNAPSHOTS
Error messages on incorrect inputs for user registration. Since the username is a unique field,
an asynchronous request is made to the server to check the availability of the username. The
SIGN UP button is also disabled while the inputs are invalid.
Partial screenshot of the home screen that takes inputs and provides option to make a prediction.
On clicking predict, the output of the different models and the overall probability is
displayed as shown above.
The above picture shows the profile page. This page lists all of the user's previously saved
predictions. On clicking the desired prediction, more details are shown, as in the following
figure.
On clicking on the desired previously saved prediction, the details are shown as above with an
option to delete the saved prediction.
CONCLUSIONS
The outcome of this project, LDeP, is a web application that can be used to determine whether
a potential customer will default on his/her loan or not. Predicting nonpayment using traditional
methods is an arduous task and often results in poor accuracy, potentially causing losses and
missed opportunities for profit. Therefore, a delicate balance between being cautious and
seizing opportunities must be found.
Our application aims to do just that by using modern web technologies such as Angular and
Flask with a state-of-the-art machine learning model powering it in the background. At the
same time, the application greatly focuses on simplicity and ease of use.
Our potential clients such as banks and credit card companies should have no problem in using
and adapting the application into their respective workflows.
FURTHER ENHANCEMENTS
We also believe we can provide more machine learning models with higher accuracies and better
recall and precision scores. We would like to transition this application into chargeable
software as a service.
In the future, our customers can be segmented, with the better-performing models made
available only to premium customers, and each generated prediction can be charged for.
The product can also be split into premium and free-for-all versions once enough revenue is
generated, making it financially viable and enabling growth in the long term.
REFERENCES
[1] Credit Risk Modeling: Combining Classification and Regression Algorithms to Predict
Expected Loss
[2] Sheikh Rabiul Islam, William Eberle, Sheikh Khaled Ghafoor, Credit Default Mining Using
Combined Machine Learning and Heuristic Approach, 2018
The list below defines the keywords used in this report. All usages of these keywords are
unambiguous, and the meanings remain as defined unless explicitly specified otherwise:
USER MANUAL
We have taken utmost care and precaution to ensure that the user interface is intuitive, simple,
pleasing to the eye and easy to navigate for all kinds of users.
The end user will have to first open a web browser and navigate to the URL of our web
application. If the user is unregistered, the user will have to provide details such as username,
email id and password satisfying the requirements and format expected and click SIGN UP.
Any deviations from this are indicated with the help of appropriate error messages that are
straightforward and can be easily understood by the user.
For an existing user, the user will have to enter his login credentials, and click SIGN IN. On
successful login, the user will be taken to a home page where the user can enter the required
details and click GENERATE. If the user is unclear about what the input requires, hovering the
pointer over the label of the input will display a tooltip that describes the input field. Input
validations are performed, and appropriate error messages are displayed that the user can
understand and act on to make the necessary corrections. On successful generation of a
prediction, it is displayed to the user on the same page, along with an option to save it.
The user can also choose to view his past predictions in which case, the user will have to click
the menu button on the top left corner of the screen, which will display a side navigation bar
from which the user will have to click My Profile. Once clicked, the user is routed to the profile
section where the user’s past saved predictions are displayed as a list of panels. Clicking on a
single item, will show more details about the prediction. If a user wishes to delete a previously
saved prediction, they can choose to delete it by clicking the DELETE button.
After the user has completed all the activities he intended to perform, the user should click the
menu button on the top left corner of the screen and click Logout. The user will be logged out
and taken to the index screen.