
Topic

1. Introduction
2. Literature Survey

3. Proposed Architecture
3.1 Problem Statement
3.2 System Scope
3.3 Architecture
3.4 Features of Project
3.5 Project Constraints

4. Project Design
4.1 Software Requirement Specification
4.1.1 Product Perspective
4.1.2 Specific Requirements
4.1.3 Software Product Features
4.1.4 Design Constraints
4.1.5 Software System Attributes
4.1.6 Logical Database Requirements
4.1.7 Tools
4.1.8 Software Specification
4.1.9 Cost Specification
4.1.10 Software Used
4.2 Data Flow Diagram
4.3 Use Case Diagram
4.4 Class Diagram
4.5 Sequence Diagram
4.6 Activity Diagram
4.7 User Interface
4.8 Testing and Documentation
4.9 Unit Testing
4.10 Integration Testing
4.11 System Testing
4.12 GUI Testing

5. Experimental Results
5.1 Experimental Setup
5.2 Results

6. Schedule of Work

7. Conclusion and Future Work

References

1. Introduction
This Software Requirements Specification describes the project named "Question Answering System in AI". This section gives a scope description and an overview of everything included in this SRS document.

In this work, we present a novel recurrent neural network (RNN) architecture in which the recurrence reads from a possibly large external memory multiple times before outputting a symbol. Our model can be considered a continuous form of the Memory Network proposed in prior work. The model in that work was not easy to train via backpropagation and required supervision at each layer of the network. The continuity of the model we present here means that it can be trained end-to-end from input-output pairs, and so it is applicable to more tasks, i.e. tasks where such supervision is not available, such as language modeling or realistically supervised question answering. Our model can also be seen as a version of RNNsearch with multiple computational steps (which we term "hops") per output symbol. We will show experimentally that multiple hops over the long-term memory are crucial to good performance of our model on these tasks, and that training the memory representation can be integrated in a scalable manner into our end-to-end neural network model.
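The "multiple hops" idea above can be sketched in a few lines. The following is a minimal, illustrative single-layer version in numpy, not the trained MemN2N: the memory matrix, query vector, and number of hops are toy values, and the real model learns separate input/output embeddings per hop.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def memory_hop(query, memory):
    """One 'hop': soft attention over the memory slots, then a residual
    update of the internal state (a continuous, differentiable read)."""
    scores = memory @ query        # inner-product match score per slot
    probs = softmax(scores)        # soft (continuous) addressing
    output = probs @ memory        # attention-weighted sum of memory vectors
    return query + output          # updated state fed to the next hop

# Toy setting: 4 memory slots, embedding dimension 3.
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 3))
query = rng.normal(size=3)

state = query
for _ in range(3):                 # three hops per output symbol
    state = memory_hop(state, memory)
```

Because every operation here is differentiable, gradients flow from the final state back through all hops, which is exactly what allows end-to-end training from input-output pairs.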

2. Literature Survey

A number of recent efforts have explored ways to capture long-term structure within sequences
using RNNs or LSTM-based models [4, 7, 12, 15, 10, 1]. The memory in these models is the
state of the network, which is latent and inherently unstable over long timescales. The LSTM-
based models address this through local memory cells which lock in the network state from the
past. In practice, the performance gains over carefully trained RNNs are modest (see Mikolov et
al. [15]). Our model differs from these in that it uses a global memory, with shared read and write
functions. However, with layer-wise weight tying our model can be viewed as a form of RNN
which only produces an output after a fixed number of time steps (corresponding to the number
of hops), with the intermediary steps involving memory input/output operations that update the
internal state. Some of the very early work on neural networks by Steinbuch and Piske [19] and
Taylor [21] considered a memory that performed nearest-neighbor operations on stored input
vectors and then fit parametric models to the retrieved sets. This has similarities to a single layer
version of our model. Subsequent work in the 1990s explored other types of memory [18, 5, 16].
For example, Das et al. [5] and Mozer et al. [16] introduced an explicit stack with push and pop
operations which has been revisited recently by [11] in the context of an RNN model. Closely
related to our model is the Neural Turing Machine of Graves et al. [8], which also uses a
continuous memory representation. The NTM memory uses both content and address-based
access, unlike ours which only explicitly allows the former, although the temporal features that
we will introduce in Section 4.1 allow a kind of address-based access. However, in part because
we always write each memory sequentially, our model is somewhat simpler, not requiring
operations like sharpening. Furthermore, we apply our memory model to textual reasoning tasks,
which qualitatively differ from the more abstract operations of sorting and recall tackled by the
NTM.

3. Proposed Architecture

3.1 Problem Statement

Predict an accurate answer: after a story is presented, the system must answer a question asked about that story.
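To make the problem statement concrete, a bAbI-style example consists of a short story, a question about it, and a single-word answer drawn from the story. The snippet below shows the input/output contract together with a trivial lexical baseline; this is a made-up illustration, not our model.

```python
# A bAbI-style example: a story (list of sentences), a question, and a
# one-word answer drawn from the story. The sentences are made up.
story = [
    "Mary moved to the bathroom.",
    "John went to the hallway.",
]
question = "Where is Mary?"
expected_answer = "bathroom"

def lexical_baseline(story, question):
    """Toy baseline for 'Where is X?' questions: scan the story backwards
    and return the last word of the most recent sentence mentioning X.
    The real system replaces this with the memory network's prediction."""
    entity = question.replace("Where is ", "").rstrip("?")
    for sentence in reversed(story):
        if entity in sentence:
            return sentence.rstrip(".").split()[-1]
    return None
```

The baseline already exposes why the task is hard: it breaks as soon as the answer is not the final word of a sentence, which is where the learned memory model earns its keep.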

3.2 System Scope


The purpose of this document is to give a detailed description of the requirements for the "Question Answering System in AI" software, which answers questions about a body of text. The "Question Answering System in AI" is an end-to-end memory network based application that helps people get answers to their questions based on a story. The application should be free to download from a mobile phone application store, the Windows Store, or similar services. This document is primarily intended to be proposed to the customer for approval, and to serve as a reference for the development team while building the first version of the system.

The question answering task can be broken down into two steps. The first step is to load a new story. The second step is to ask a question based on that story.

3.3 Architecture
3.4 Features of Project

Predict accurate answer: prediction of the accurate answer is done after the question is asked, based on the story.

Fetch data from the users: fetch user data and convert it into knowledge that lets the application talk with users.

Store user information and make the best use of it: other users will be able to interact with you even when you are not online. You can share important details of your life, which will be stored permanently and can be used when needed.

3.5 Project Constraints

The fully automated question answering system is implemented using the end-to-end memory network algorithm and NLP. The main difficulty in implementing the end-to-end memory network was a secure socket layer (SSL) certificate issue when installing the numpy library. The end-to-end memory network technique is not time-consuming. We considered implementing a technique based on NLTK, but could not find sufficient literature about that model. Question answering is a modern approach in Natural Language Processing: it is commonly used in customer-service and Facebook chat bots and in Amazon Alexa, and is actively used by Google Assistant, Apple Siri, etc.

DIFFICULTIES OF THE QUESTION ANSWERING SYSTEM ARE:

1. Some questions can get wrong answers when the data is complex.
2. Answers can go wrong when the data is not sufficiently defined.

4. Project Design

4.1 Software Requirement Specification

4.1.1 Product Perspective

The bAbI benchmark comprises 20 tasks. We optimize the MemN2N architecture on all 20 tasks together to offer perspective on the average performance of each tuning method for this particular benchmark. Previous work suggests that Bayesian optimization is the most efficient and precise method for finding globally optimal hyperparameters or, in this case, a MemN2N architecture configuration. We use these experiments to evaluate this prediction.
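As a reference point for the tuning methods discussed above, a simple random search over a hypothetical MemN2N configuration space can be sketched as follows. The search space below (hops, embedding size, learning rate) is illustrative only, not the exact space used in the experiments:

```python
import random

# Hypothetical configuration space; the names and values are
# illustrative, not the exact space tuned in our experiments.
SEARCH_SPACE = {
    "hops": [1, 2, 3],
    "embedding_dim": [20, 50, 100],
    "learning_rate": [0.01, 0.005, 0.001],
}

def sample_config(rng):
    """Draw one configuration uniformly from the space."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def random_search(evaluate, trials=20, seed=0):
    """Random-search baseline: Bayesian optimization usually needs fewer
    evaluations, but random search is the standard reference point."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = sample_config(rng)
        score = evaluate(config)       # e.g. mean accuracy over all 20 tasks
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

Bayesian optimization replaces the uniform `sample_config` with a model-guided proposal, which is exactly the efficiency claim these experiments evaluate.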

4.1.2 Specific Requirements:

The communication interface will be a web page displayed on the local server, through which the system reads the dataset provided by Facebook Research. The same interface is responsible for two-way communication between the user and the system, in the form of asking questions and predicting the answers. The information should be given clearly, using simple grammatical words. Personal information should be protected from unauthorised users who try to access your data in the form of questions and answers.
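The two-way communication described above can be sketched as a single, framework-agnostic request handler: the web page sends a JSON body with the story and the question, and receives the predicted answer plus its accuracy. The `model.predict` method here is a hypothetical stand-in for the trained network, not its real API.

```python
import json

def handle_ask(request_body, model):
    """Handle one question request from the web page. `model` is any
    object exposing a hypothetical predict(story, question) ->
    (answer, accuracy) method; the web framework wiring is omitted."""
    payload = json.loads(request_body)
    answer, accuracy = model.predict(payload["story"], payload["question"])
    return json.dumps({"answer": answer, "accuracy": round(accuracy, 3)})
```

Keeping the handler free of framework details means the same function can sit behind any local-server setup the final system uses.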

4.1.3 Software Product Features:

1] Maintainability:
The database should be maintained by taking frequent backups.

2] Reusability:
The model can be reused in any kind of chat bot.

3] Correctness:
The algorithm provides 90% accuracy.

4] Reliability:
It is highly effective on real-time entities and works in any scenario.

4.1.4 Design Constraints:

Story showcase:
The story appears in the story showcase once we press the refresh or "another story" button.

Question field:
The user enters a question in the question field and gets the appropriate answer along with its accuracy. Besides that, three different levels of accuracy are shown for the answers.

4.1.5 Software System Attributes:

1) Maintainability: The database has to be maintained in case some data on the device changes.
2) Reusability: Existing data can be reused in some stories for question answering.
3) Correctness: Recorded data can be updated in some cases and then uploaded again.

4.1.6 Tools:

Hardware tools: Laptop

Software tools: Python IDLE 2.7.16

4.1.7 Software Specification:

Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales. Van Rossum led the language community until July 2018.

Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python features a comprehensive standard library, and is often described as "batteries included".

4.1.8 Cost Specification

1. Application file – 100/-

4.7 User Interface

The user interface is the web page served on the local server: it displays the story, lets the user type a question, and shows the predicted answer together with its prediction accuracy.

4.9 Unit Testing

Test Case ID | Test Scenario        | Test Steps | Test Result
T001 | Open web application | 1) Open Windows PowerShell; 2) go to the project directory; 3) enter the command "python training model qa"; 4) run "python training model-d"; 5) copy the server address and paste it into the browser | App opens
T002 | Ask question | 1) Ask a question based on the given story | Answer shown with prediction accuracy
T003 | Predict result | 1) Press the predict button; 2) the result is displayed | Result predicted
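Test case T002 (ask a question, check the predicted answer) can also be automated. Below is a hedged sketch using Python's unittest; `predict` is a toy rule-based stand-in for the trained model so the test is self-contained, and a real test would call the deployed application instead.

```python
import unittest

class AskQuestionTest(unittest.TestCase):
    """Automated form of test case T002 (ask a question about the story)."""

    @staticmethod
    def predict(story, question):
        # Stand-in rule (hypothetical): answer with the last word of the
        # first story sentence that shares a word with the question.
        question_words = set(question.rstrip("?").split())
        for sentence in story:
            words = sentence.rstrip(".").split()
            if question_words & set(words):
                return words[-1]
        return None

    def test_answer_is_a_word_from_the_story(self):
        story = ["Mary moved to the bathroom.", "John went to the hallway."]
        answer = self.predict(story, "Where is Mary?")
        # bAbI answers are single words drawn from the story's vocabulary.
        vocabulary = " ".join(story).replace(".", "").split()
        self.assertIn(answer, vocabulary)
```

Checking that the answer belongs to the story vocabulary is a useful property-style assertion: it stays valid even when the model's exact prediction changes between training runs.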

4.10 Integration Testing

Test Case ID | Test Scenario | Test Steps | Test Result
T001 | Server | Open the Mozilla browser and input the commands to start the server | Server starts
T002 | Client | Open the command prompt and input the command to activate the client | Client active
T003 | Connection between server and client | Ask a question | Answer shown with prediction accuracy
4.11 System Testing

Test Case ID | Test Scenario | Test Steps | Test Result
T001 | Device connection | Connect to the web app via Mozilla and run the application | Device connection successful
T002 | Prediction | 1) Give a story; 2) click the "get answer" button | Successful prediction

4.12 GUI Testing

Test Case ID | Test Scenario | Screenshot
T001 | Web app opens | (screenshot omitted)
T002 | Get output from PowerShell | (screenshot omitted)
T003 | Try using another story | (screenshot omitted)

5. Experimental Results

5.1 Experimental Setup

Hardware interfaces:
1. Processor: Cortex-A53 (1.4 GHz)
2. RAM: 4 GB DDR3 RAM
3. Memory: 1 GB

Software interfaces:
1. Python IDLE 2.7.16
2. Windows

5.2 Results

- Successful build of the application on a Windows device.
- Successful prediction of answers based on the given story.

6. Schedule of Work

Sr. No. | Date     | Task Planned                 | Task Status
01      | 15/01/19 | Discussion of project topics | Various topics based on IEEE research papers discussed
02      | 22/01/19 | Topic finalization           | Question Answering System
03      | 05/02/19 | Synopsis                     | Synopsis submitted
04      | 24/04/19 | Presentation of first module | Presented

7. Conclusion and Future Work

Conclusion:
1. A simple model that combines external memory with an RNN.
2. Versatile: can be applied to a range of tasks, e.g. language modeling and the bAbI dataset.
3. Interesting to explore biological parallels, e.g. hippocampus and PFC.

Future Scope:
1. Add additional layers to the Gated Recurrent Unit.
2. Implement a bidirectional Recurrent Neural Network.
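The second future-work item can be sketched in plain numpy: run the same recurrent cell forward and backward over the input and concatenate the two state sequences per time step. A simple tanh cell stands in for the GRU here, and the weight shapes are illustrative.

```python
import numpy as np

def rnn_pass(inputs, w_x, w_h):
    """Scan a simple tanh RNN cell over the inputs; a stand-in for the
    GRU we would use in practice."""
    h = np.zeros(w_h.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(w_x @ x + w_h @ h)
        states.append(h)
    return np.stack(states)

def bidirectional(inputs, w_x, w_h):
    """Run the same cell forward and backward over the sequence and
    concatenate the two state sequences at each time step."""
    forward = rnn_pass(inputs, w_x, w_h)
    backward = rnn_pass(inputs[::-1], w_x, w_h)[::-1]
    return np.concatenate([forward, backward], axis=-1)
```

The payoff for question answering is that each time step's representation then carries context from both earlier and later sentences in the story.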

References:
1) https://www.rsisinternational.org/journals/ijrias/DigitalLibrary/Vol.3&Issue4/28-30.pdf
2) https://arxiv.org/abs/1503.08895
3) https://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf
4) https://pdfs.semanticscholar.org/3dad/edbca58980805c5576cb7018ffa1b6262023.pdf
5) https://www.msri.org/workshops/796/schedules/20462/documents/2704/assets/24734
