Mixed Emotion Detection
Soumili Kundu
M.Tech I.T (CWE)
Roll no. 06
SET, Jadavpur University
4/23/2019
INTRODUCTION
LITERATURE SURVEY
Ismail et al. [1] (2016) proposed Human Emotion Detection via Brain Waves
Study by Using Electroencephalogram (EEG). This research was conducted to
detect or identify human emotion through the study of brain waves. In addition,
the research aims to develop computer software that can detect human emotions
quickly and easily; the stated long-term objective of this recognition is the
development of "mind-implementation of Robots". The research methodology is
divided into four stages: (i) visual and EEG data are extracted from the
respondent at the same time; (ii) the complete data-recording process includes
capturing images with a camera alongside the EEG; (iii) pre-processing, feature
extraction, and classification are performed concurrently; and (iv) the
extracted features are classified into emotions using artificial intelligence
techniques.
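To make the pipeline in [1] concrete, the sketch below computes band-power features from a raw EEG segment and feeds them to an off-the-shelf classifier. This is a minimal illustration only: the sampling rate, frequency bands, channel count, and SVM classifier are assumptions, not details taken from the paper.

# Hypothetical sketch of an EEG emotion-classification pipeline in the
# spirit of [1]: band-power features + a generic classifier.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 128  # assumed sampling rate (Hz); not specified in [1]
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def band_power_features(eeg):
    """eeg: (n_channels, n_samples) array -> one mean power per channel/band."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean power in the band
    return np.concatenate(feats)

# Toy training data: 20 random 4-channel, 5-second segments with labels.
rng = np.random.default_rng(0)
X = np.array([band_power_features(rng.standard_normal((4, FS * 5)))
              for _ in range(20)])
y = rng.integers(0, 2, size=20)  # e.g. 0 = calm, 1 = excited (illustrative)

clf = SVC().fit(X, y)
print(clf.predict(X[:3]))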
Tyagi et al. [2] (2018) proposed Emotion Detection Using Speech Analysis. In
this paper, the basic emotions of a human are detected through speech analysis.
The authors note that 95% of communication today is vocal, and the voice
carries different characteristics of a person, emotion among them, conveying
attributes such as fear, anxiety, happiness, sadness, and anger. Hence, voice
and speech analysis can reveal the emotion of a speaker and is very beneficial
in different areas of communication. The main objective of this paper is to
develop a system that can support future real-time applications and improve
human-technology interaction. The main challenge of the project is to build a
system that detects emotion in a manner that is fast, efficient across all test
cases, and user friendly.
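A minimal sketch of such a speech pipeline follows, assuming MFCCs as the acoustic features and a k-NN classifier; Tyagi et al. [2] do not prescribe these exact choices, and the file names and labels below are placeholders.

# Minimal speech-emotion sketch: MFCC features + k-NN. The feature choice
# and classifier are illustrative assumptions, not the method of [2].
import numpy as np
import librosa  # assumes librosa is installed
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load an audio file and summarize it as mean/std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled clips; paths and labels are placeholders only.
clips = [("happy_01.wav", "happy"), ("sad_01.wav", "sad"),
         ("angry_01.wav", "angry"), ("happy_02.wav", "happy")]
X = np.array([mfcc_features(p) for p, _ in clips])
y = [label for _, label in clips]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([mfcc_features("test.wav")]))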
Kudiri et al. [3] (2016) proposed Human Emotion Detection through Speech and
Facial Expressions. This research revealed that feature extraction from speech
and facial expressions is the most prominent factor affecting the emotion
detection system, together with the proposed fusion technique. Although some
other aspects were considered to affect the emotion detection system, their
effect is relatively minor. It was observed that the performance of the bimodal
emotion detection system was lower than that of the unimodal emotion detection
system based on deliberate facial expressions. The results indicated that the
proposed emotion detection system performed better on the basic emotional
classes than on the rest. Feature extraction from visual data is possible in
two ways, namely geometric and appearance based.
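To illustrate the geometric variant, the sketch below turns facial landmarks into scale-normalized pairwise distances. The 68-point landmark convention and the choice of the outer eye corners (indices 36 and 45) for normalization are assumptions for illustration, not details from [3]; the landmarks themselves could come from any detector.

# Geometric facial features: pairwise distances between landmarks,
# normalized by inter-ocular distance. The 68-point indexing is an
# assumption here, not a detail taken from [3].
import numpy as np
from itertools import combinations

def geometric_features(landmarks):
    """landmarks: (68, 2) array of (x, y) points from any landmark detector."""
    # Normalize by the distance between the outer eye corners (points 36, 45
    # in the common 68-point scheme) so features are scale-invariant.
    scale = np.linalg.norm(landmarks[36] - landmarks[45])
    dists = [np.linalg.norm(landmarks[i] - landmarks[j]) / scale
             for i, j in combinations(range(len(landmarks)), 2)]
    return np.asarray(dists)

pts = np.random.default_rng(1).uniform(0, 200, size=(68, 2))  # dummy landmarks
print(geometric_features(pts).shape)  # (68 * 67 / 2,) = (2278,)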
PROPOSED FRAMEWORK
DATASET EXTRACTION
i) Input Dataset:
The proposed methodology uses recordings of 6 different individuals, 3 female
and 3 male, each expressing a combination of mixed emotions drawn from the set
of 6 basic emotions. Here, however, a dataset downloaded from the internet is
intended to be used.
ii) Feature Extraction:
b) For the head modality, 12 points along the border of the skull were
tracked. The intuition behind these features was that the tracked points
would describe the shape of the head as well as capture the pitch, yaw,
roll, nod, shake, and lateral, backward, and forward motion of the head.
The distance between each pair of tracked points, the angle with the
horizontal plane, and the movement of each tracked point were measured
(a sketch of these computations follows this list).
c) For the hand modality, the palm, wrist, elbow, and shoulder joints of
both hands were tracked, resulting in 8 tracked points. These points were
chosen because they capture the abrupt movement of the arms along all
three axes. The distance between each pair of joints, the angle with the
horizontal plane, and the velocity and displacement of each joint were
calculated.
d) For the body modality, the spine centre and the left and right hip,
knee, and ankle joints were tracked. The feature vector was created using
the distance between each pair of joints, the angle with the horizontal
plane, and the velocity and displacement of each joint.
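The sketch below illustrates how such a per-frame feature vector (pairwise joint distances, angles with the horizontal plane, and per-joint displacement between consecutive frames) might be assembled. The joint count, coordinate layout, and the choice of y as the vertical axis are assumptions, not specifics of the proposed framework.

# Illustrative per-frame skeletal feature vector: pairwise joint distances,
# angle of each joint pair with the horizontal, and per-joint displacement
# between consecutive frames. Shapes and joint order are assumptions.
import numpy as np
from itertools import combinations

def frame_features(joints, prev_joints):
    """joints, prev_joints: (n_joints, 3) arrays of (x, y, z) coordinates."""
    feats = []
    for i, j in combinations(range(len(joints)), 2):
        v = joints[j] - joints[i]
        feats.append(np.linalg.norm(v))            # pairwise distance
        horiz = np.linalg.norm(v[[0, 2]])          # projection onto x-z plane
        feats.append(np.arctan2(v[1], horiz))      # angle vs horizontal plane
    # Displacement of each joint since the previous frame (a velocity proxy
    # once divided by the frame interval).
    feats.extend(np.linalg.norm(joints - prev_joints, axis=1))
    return np.asarray(feats)

rng = np.random.default_rng(2)
prev, cur = rng.uniform(size=(2, 8, 3))  # e.g. 8 hand/arm joints, two frames
print(frame_features(cur, prev).shape)   # (2*C(8,2) + 8,) = (64,)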
iii) Output:
The proposed methodology gives the proper output, namely the recognition of
combinations of emotions in various scenarios, giving us an idea of how Mixed
Emotion Detection is paving its way into various aspects of our lives. The
paper also reports the accuracy achieved with the various modalities.
EXPERIMENTAL RESULTS
CONCLUSION
Existing studies have primarily examined automatic recognition of a single
emotion. This paper created 3D features from coordinates, positions, movement,
and knowledge-based behavioural patterns, and then used the combined feature
vector to recognize mixed simultaneous emotions. A combination of head, face,
hand, body, and audio data was used to generate the feature vector. The
accuracy of the head and face modalities was higher than the results from the
hand and body modalities, which indicates that although the hands and body
show greater and more frequent displacement than other body parts, facial
expressions and head movement have more discriminating power for predicting
mixed concurrent emotions.
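As a schematic of the feature-level fusion described above, the following sketch simply concatenates per-modality feature vectors and fits a multi-label classifier, since a sample may carry several simultaneous emotions. The per-modality dimensions, the random-forest classifier, and the toy data are assumptions for illustration only.

# Schematic feature-level fusion for mixed (multi-label) emotion detection:
# concatenate per-modality feature vectors, then fit a multi-label classifier.
# Dimensions, modalities, and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 60  # toy number of samples
head, face, hand, body, audio = (rng.standard_normal((n, d))
                                 for d in (24, 40, 16, 20, 26))
X = np.hstack([head, face, hand, body, audio])  # fused feature vector

# Multi-label targets: each sample may carry several of 6 basic emotions.
Y = rng.integers(0, 2, size=(n, 6))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, Y)
print(clf.predict(X[:2]))  # one 6-way indicator row per sample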
REFERENCES
[1] Ismail, WOAS Wan, et al. "Human emotion detection via brain waves study
by using electroencephalogram (EEG)." International Journal on Advanced
Science, Engineering and Information Technology 6.6 (2016): 1005-1011.
[2] Tyagi, Riya, and Anmol Agarwal. "Emotion Detection Using Speech
Analysis." Science 3.3 (2018): 18-20.
[3] Kudiri, Krishna Mohan, Abas Md Said, and M. Yunus Nayan. "Human
emotion detection through speech and facial expressions." 2016 3rd
International Conference on Computer and Information Sciences (ICCOINS).
IEEE, 2016.