1. Introduction
2. Literature Survey
3. Proposed Architecture
3.1 Problem Statement
3.2 System Scope
3.3 Architecture
3.4 Features of Project
3.5 Project Constraints
4. Project Design
4.1 Software Requirement Specification
4.1.1 Product Perspective
4.1.2 Specific Requirements
4.1.3 Software Product Features
4.1.4 Design Constraints
4.1.5 Software System Attributes
4.1.6 Logical Database Requirements
4.1.7 Tools
4.1.8 Software Specification
4.1.9 Cost Specification
4.1.10 Software Used
4.2 Data Flow Diagram
4.3 Use Case Diagram
4.4 Class Diagram
4.5 Sequence Diagram
4.6 Activity Diagram
4.7 User Interface
4.8 Testing and Documentation
4.9 Unit Testing
4.10 Integration Testing
4.11 System Testing
4.12 GUI Testing
5. Experimental Results
5.1 Experimental Setup
5.2 Results
6. Schedule of Work
References
1. Introduction
This Software Requirements Specification describes the project named “Questions
Answering System in AI”. This section gives a scope description and an overview of
everything included in this SRS document.
In this work, we present a novel recurrent neural network (RNN) architecture where the
recurrence reads from a possibly large external memory multiple times before outputting
a symbol. Our model can be considered a continuous form of the Memory Network
of Weston et al. The model in that work was not easy to train via backpropagation,
and required supervision at each layer of the network. The continuity of the model we
present here means that it can be trained end-to-end from input-output pairs, and so is
applicable to more tasks, i.e. tasks where such supervision is not available, such as in
language modeling or realistically supervised question answering tasks. Our model can
also be seen as a version of RNN search with multiple computational steps (which we
term “hops”) per output symbol. We will show experimentally that the multiple hops over
the long-term memory are crucial to good performance of our model on these tasks, and
that training the memory representation can be integrated in a scalable manner into our
end-to-end neural network model.
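The read-and-update loop described above (attention over an external memory, repeated for several hops) can be sketched numerically. The following is a minimal illustration, not the project's actual code; the dimensions are made up and random vectors stand in for learned embeddings:

```python
# Minimal sketch of one "hop" of an end-to-end memory network (MemN2N).
# All names, sizes, and values here are illustrative placeholders.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))        # numerically stabilised softmax
    return e / e.sum()

def memory_hop(query, mem_in, mem_out):
    """One read over external memory.

    query   : (d,)   embedded question u
    mem_in  : (n, d) input memory embeddings m_i (one row per story sentence)
    mem_out : (n, d) output memory embeddings c_i
    """
    p = softmax(mem_in @ query)      # soft attention weights over memory slots
    o = mem_out.T @ p                # weighted sum of output embeddings
    return query + o                 # updated internal state u' = u + o

rng = np.random.default_rng(0)
d, n = 4, 3                          # embedding size, number of memory slots
u = rng.normal(size=d)
u = memory_hop(u, rng.normal(size=(n, d)), rng.normal(size=(n, d)))
u = memory_hop(u, rng.normal(size=(n, d)), rng.normal(size=(n, d)))  # second hop
print(u.shape)
```

Because the whole hop is differentiable, stacking several such hops and training the embeddings end-to-end from input-output pairs is what removes the need for per-layer supervision.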
2. Literature Survey :
A number of recent efforts have explored ways to capture long-term structure within sequences
using RNNs or LSTM-based models [4, 7, 12, 15, 10, 1]. The memory in these models is the
state of the network, which is latent and inherently unstable over long timescales. The LSTM-
based models address this through local memory cells which lock in the network state from the
past. In practice, the performance gains over carefully trained RNNs are modest (see Mikolov et
al. [15]). Our model differs from these in that it uses a global memory, with shared read and write
functions. However, with layer-wise weight tying our model can be viewed as a form of RNN
which only produces an output after a fixed number of time steps (corresponding to the number
of hops), with the intermediary steps involving memory input/output operations that update the
internal state. Some of the very early work on neural networks by Steinbuch and Piske [19] and
Taylor [21] considered a memory that performed nearest-neighbor operations on stored input
vectors and then fit parametric models to the retrieved sets. This has similarities to a single layer
version of our model. Subsequent work in the 1990s explored other types of memory [18, 5, 16].
For example, Das et al. [5] and Mozer et al. [16] introduced an explicit stack with push and pop
operations which has been revisited recently by [11] in the context of an RNN model. Closely
related to our model is the Neural Turing Machine of Graves et al. [8], which also uses a
continuous memory representation. The NTM memory uses both content and address-based
access, unlike ours which only explicitly allows the former, although the temporal features that
we will introduce in Section 4.1 allow a kind of address-based access. However, in part because
we always write each memory sequentially, our model is somewhat simpler, not requiring
operations like sharpening. Furthermore, we apply our memory model to textual reasoning tasks,
which qualitatively differ from the more abstract operations of sorting and recall tackled by the
NTM.
3. Proposed Architecture
The system predicts the accurate answer after a question is asked about a given
story.
3.3 Architecture:
3.4 Features of Project:
Predict accurate answer: Prediction of the accurate answer is done after the question is asked,
based on the story.
Fetch data from the users: Fetch user data and convert it into knowledge that the application
uses to talk with users.
Store user information and make the best use of it: other users will be able to interact with
you even when you are not online. You can share the important details of your life, which will
be permanently stored and can be used when needed.
The fully automated question answering system is implemented using the end-to-end memory
network algorithm and NLP. The main difficulty in implementing the end-to-end memory
network was resolving secure socket layer (SSL) certificate issues when installing the NumPy
library. The end-to-end memory network technique is not time-consuming. We considered
implementing a technique based on NLTK, but could not find sufficient literature about that
model. Such question answering systems are a modern approach to Natural Language
Processing; they are most commonly used in customer service and Facebook chat bots, in
Amazon Alexa, and are actively used by Google Assistant, Apple Siri, etc.
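As an illustration of what "converting data into knowledge" looks like at the data level, here is a toy bAbI-style sample turned into word-index vectors. The sentences and index scheme are invented for illustration; the real Facebook Research dataset files are not reproduced here:

```python
# Toy bAbI-style sample: a short story, a question, and its answer.
# Sentences and vocabulary are illustrative, not taken from the real dataset.
story = ["Mary moved to the bathroom.",
         "John went to the hallway."]
question = "Where is Mary?"
answer = "bathroom"

# Build a toy vocabulary; index 0 is reserved for padding.
tokens = sorted({w.strip(".?").lower()
                 for line in story + [question] for w in line.split()})
word_idx = {w: i + 1 for i, w in enumerate(tokens)}

def vectorize(sentence):
    """Turn one sentence into a list of word indices."""
    return [word_idx[w.strip(".?").lower()] for w in sentence.split()]

print([vectorize(s) for s in story], vectorize(question))
```

Each vectorized story sentence becomes one memory slot, and the vectorized question becomes the query that attends over those slots.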
4. Project Design:
The communication interface will be the web page displayed on the local server, through
which the system reads the dataset provided by Facebook Research. The same interface is
responsible for two-way communication between the user and the system, in the form of
asking questions and predicting the answers. The information should be given clearly, using
simple grammatical words. Personal information should be protected from unauthorised
users who try to access your data in the form of questions and answers.
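One possible shape for this two-way web communication, sketched with only the Python standard library. The route, port, and JSON field names are illustrative assumptions, not the project's actual interface, and the prediction step is a placeholder:

```python
# Hypothetical minimal local web endpoint for the ask/answer loop.
# Uses only the standard library; field names and port are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def answer_question(question):
    # Placeholder standing in for the memory-network prediction step.
    return {"question": question, "answer": "bathroom", "confidence": 0.81}

class QAHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = json.dumps(answer_question(body.get("question", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # Serve the Q&A endpoint on the local server.
    HTTPServer(("localhost", 8000), QAHandler).serve_forever()
```

The web page would POST the user's question to this endpoint and render the returned answer and confidence.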
1] Maintainability:
The database should be maintained by taking frequent backups.
2] Reusability:
The system can be reused in any kind of chat bot.
3] Correctness:
The algorithm provides 90% accuracy.
4] Reliability:
It is highly effective in real time and works in any scenario.
Story showcase:
A story appears in the story showcase once we press the refresh or "another story" button.
Question field:
The user enters a question in the question field and receives the predicted answer along
with its accuracy. Besides that, three different levels of accuracy are shown for the
candidate answers.
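How the three accuracy levels might be derived from the model's output is sketched below. The answer words and probabilities are invented placeholders, assuming the model produces a softmax distribution over candidate answers:

```python
# Illustrative softmax output over candidate answers (values are made up).
probs = {"bathroom": 0.81, "hallway": 0.12, "garden": 0.05, "office": 0.02}

# The UI's three accuracy levels would be the three most probable answers.
top3 = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:3]
for word, p in top3:
    print(f"{word}: {p:.0%}")
```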
1) Maintainability: The database has to be maintained in case some data on the device
changes.
2) Reusability: Existing data can be reused in some stories for question answering.
3) Correctness: Recorded data can be updated in some cases and then uploaded again.
4.1.6 Tools:
5. Experimental Results:
5.1 Experimental Setup:
Hardware Interfaces:
Memory: 1 GB
Software Interfaces:
1. Python IDE 2.7.16
2. Windows
5.2 Results:
Successful build of the application on a Windows device.
Successful prediction of answers based on the given story.
6. Schedule of Work:
Conclusion:
1. A simple model that combines external memory with an RNN.
2. Versatile: can be applied to a range of tasks, e.g. language modeling and the bAbI dataset.
3. Interesting to explore biological parallels, e.g. hippocampus & PFC.
Future Scope:
1. Add additional layers to the Gated Recurrent Unit.
2. Implement a bidirectional Recurrent Neural Network.
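The bidirectional RNN idea can be sketched as two independent passes over the sequence whose hidden states are concatenated. This is an illustrative NumPy forward pass with random placeholder weights, not a trained model:

```python
# Sketch of a bidirectional RNN forward pass: one RNN reads the sequence
# left-to-right, another right-to-left, and their states are concatenated.
# Sizes and weights are illustrative placeholders, not trained parameters.
import numpy as np

def rnn_pass(xs, W_x, W_h):
    """Simple tanh RNN: return the hidden state at every time step."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h)
        states.append(h)
    return states

rng = np.random.default_rng(1)
d_in, d_h, T = 3, 5, 4                       # input size, hidden size, length
xs = [rng.normal(size=d_in) for _ in range(T)]
W_xf, W_hf = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
W_xb, W_hb = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))

fwd = rnn_pass(xs, W_xf, W_hf)               # left-to-right states
bwd = rnn_pass(xs[::-1], W_xb, W_hb)[::-1]   # right-to-left, re-aligned
out = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
print(len(out), out[0].shape)
```

Each output state then carries context from both the past and the future of the story, which is the motivation for the bidirectional extension.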
References:
1) https://www.rsisinternational.org/journals/ijrias/DigitalLibrary/Vol.3&Issue4/28-30.pdf
2) https://arxiv.org/abs/1503.08895
3) https://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf
4) https://pdfs.semanticscholar.org/3dad/edbca58980805c5576cb7018ffa1b6262023.pdf
5) https://www.msri.org/workshops/796/schedules/20462/documents/2704/assets/24734