
STRANGER DETECTION

YADA ARUN KUMAR

arunyada0610@gmail.com

Abstract— How to accurately and effectively identify individuals has always been a notable topic, both in research and in industry. With the rapid development of AI in recent years, facial recognition has gained more and more attention. Compared with traditional card recognition, fingerprint recognition, and iris recognition, face recognition has many advantages, including but not limited to being non-contact, highly concurrent, and user friendly. It has high potential to be employed in government, public facilities, security, e-commerce, retailing, education, and many other fields.
Deep learning is one of the new and important branches of machine learning. Deep learning refers to a set of algorithms that solve various problems, such as images and texts, by using multi-layer neural networks. Deep learning can be classified as a common kind of neural network, but with many changes in its concrete realization. At the core of deep learning is feature learning, which aims to obtain hierarchical information through hierarchical networks, so as to solve the important problems that previously required hand-designed features. Deep learning is a framework that contains several important algorithms. For different applications (images, voice, text), different network models are needed to achieve better results.
With the huge growth of deep learning and the introduction of deep convolutional neural networks (DCNN), the accuracy and speed of face recognition have made great strides. However, as noted above, the results of different networks and models are very different. In this paper, facial features are extracted by merging and comparing multiple models, and then a deep neural network is constructed to train on the combined features. In this way, the advantages of multiple models can be combined to raise the recognition accuracy. Once a model with high accuracy is obtained, we can find the people who are not recognized and label them as strangers.

Keywords- Deep neural network, Face recognition, Deep multi-model fusion, Convolutional neural network.

1. INTRODUCTION

Ever since IBM introduced its first PC in 1981, through the .com era of the early 2000s, the Internet shopping trend of the last ten years, and the Internet of Things (IoT) today, computers and information technologies have been rapidly integrating into everyday human life. As the digital world and the physical world merge more and more, how to accurately and effectively identify users and improve information security has become a very important research topic.

Not only in the civil space: especially since the September 11 terrorist attacks, governments all over the globe have made urgent demands on this issue, prompting the development of emerging identification methods. Traditional identity recognition technologies chiefly rely on the individual's own memory (password, username, etc.) or on foreign objects (ID card, key, etc.). However, whether by virtue of foreign objects or one's own memory, there are serious security risks. Not only is it difficult to recover the original identity material, but the identity information is also easily acquired by others if the items that prove one's identity are stolen or forgotten. As a result, if an identity is impersonated by others, there will be serious consequences.

Different from traditional identity recognition technology, biometrics uses the inherent characteristics of the body for identification, such as fingerprints, irises, faces and so on. Compared with traditional identity recognition technology, biological features have several advantages:
1. Reliability: biological characteristics are innate and cannot be changed, so it is not possible to copy other people's biological characteristics.
2. Convenience: biological features, as part of the body, are readily available and can never be forgotten.
3. Ease of use: many biological characteristics do not require people to cooperate with the examining device.
Based on the above advantages, biometrics has attracted the attention of major companies and research institutes, and has successfully replaced traditional recognition technologies in many fields. With the rapid development of computers and AI, biometric technology is easy to combine with computers and networks to realize automated management, and it is rapidly integrating into people's daily life.
When comparing the differences between the various biometrics, we can see that the cost of facial recognition is low, user acceptance is easy, and the acquisition of data is simple. Automatic face recognition is the use of computer vision technology and related algorithms to find faces in images or videos and then analyze their identity. Additionally, further analysis of the acquired face can yield some extra attributes of the individual, such as gender, age, and emotion.

a) APPLICATION OF THE RESEARCH

Face recognition can be traced back to the sixties and seventies of the last century and, after decades of twists and turns of development, has matured. The traditional face detection technique depends chiefly on the structural features and the color characteristics of the face.

Some traditional face recognition algorithms identify faces by extracting landmarks, or features, from a picture of the subject's face. As an example, shown in Figure 1.1, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. These kinds of algorithms can be sophisticated, need a great amount of compute power, and hence may be slow. They can also be inaccurate when the faces show clear emotional expressions, since the size and position of the landmarks can be altered considerably in such circumstances.

Figure 1.1 Abstract human face into features [1]

b) ANALYSIS OF THE PROBLEM STATEMENT

A complete face recognition system includes face detection, face preprocessing, and face recognition processes. Therefore, it is necessary to extract the face region in the face detection step and separate the face from the background pattern, which provides the basis for the subsequent extraction of face contrast features. The recently risen face detection methods based on deep learning, compared with traditional methods, not only shorten the detection time but also effectively improve the accuracy. Face recognition of the separated faces is a process of feature extraction and contrast identification on the normalized face images in order to obtain the identity of the human faces in the images.

In this paper, we will first summarize and analyze the current research results of face recognition technology, and then study a face recognition algorithm based on feature fusion. The algorithm flow consists of face image preprocessing, combined feature construction, and combined feature training.

2. THEORETICAL BACKGROUND [1]

a) ANALYSIS OF RELATED WORK

In the introduction, we introduced facial recognition and discussed the use cases and bright future of this technology. An incredible amount of research and effort from several major companies and universities has been dedicated to this field. Here, we will review the most important work in the facial recognition field.

i. FACE DETECTION AND FACE TRACKING

The article Robust Real-time Object Detection [2] is the most often cited article in a series of articles by Viola that made face detection practically feasible. We can learn about many face detection methods and algorithms from this publication. The article Fast rotation invariant multi-view face detection based on real Adaboost [3] applied real AdaBoost to object detection for the first time and proposed a more mature and practical multi-face detection framework; the nested structure it proposed on top of the cascade structure also achieves good results. The article Tracking in Low Frame Rate Video: A Cascade Particle Filter with Discriminative Observers of Different Life Spans [4] is a good combination of face detection model and tracking, and of offline model and online model, and received the CVPR 2007 Best Student Paper award.

The above three papers discussed face detection and face tracking problems. Following the research results in these papers, we can build real-time face detection systems. The main purpose is to find the position and size of every face in the image or video; for tracking, it is also necessary to determine the correspondence between different faces across frames.

ii. FACE POSITIONING AND ALIGNMENT

Earlier localization of facial feature points focused on two or three key points, such as locating the center of the eyeball and
the center of the mouth; later work introduced more points and added mutual constraints to improve the accuracy and stability of the positioning. The article Active Shape Models-Their Training and Application [5] is a model in which dozens of facial feature points and texture and point-relationship constraints are considered together for the calculation. Although many articles have improved on ASM, it is worth mentioning the AAM model; another important idea is to improve on the original article based on the edges of the texture model. The regression-based approach presented in the paper Boosted Regression Active Shape Models [6] is better than the one based on the explicit appearance model. The article Face Alignment by Explicit Shape Regression [7] is another direction of ASM improvement, an improvement on the shape model itself. It relies on a linear combination of training samples to constrain the shape, and its alignment results are currently regarded as the best.

The purpose of facial feature point positioning is to further determine the positions of the facial feature points (eye and mouth center points, eye and mouth contour points, organ contour points, etc.) on the basis of the face region found by face detection/tracking. These three articles show the methods for face positioning and face alignment. The essential idea of locating the face feature points is to combine the texture features of local face regions with the position constraints between the organ feature points. The above three papers discussed facial feature positioning/alignment.

iii. FACE FEATURE EXTRACTION

Facial feature extraction is a method of turning a face image into a string of fixed-length numbers. This string of numbers is called the "face feature" and has the ability to characterize the face. The face feature extraction process takes "a face image" and the "coordinates of the facial key points" as input, and outputs the corresponding numerical string (feature) of the face. The face feature extraction algorithm first aligns the face according to the pre-determined pattern of key point coordinates, and then calculates the features.

PCA-based eigenfaces [8] are among the most classic algorithms for face recognition. Although today PCA is employed more for dimensionality reduction in real systems than for classification, such a classic approach deserves our attention. The article Local Gabor Binary Pattern Histogram Sequence (LGBPHS): a Novel Non-Statistical Model for Face Representation and Recognition [9] is close to many mature industrial systems. In many practical systems, the framework for extracting authentication features is PCA plus LDA: PCA is used for dimensionality reduction to avoid the matrix singularity problem when solving LDA, then LDA is used to extract the features suitable for classification, and the various original features extracted are further combined by decision-level fusion. Although some of the LFW test protocols are not reasonable, LFW is still the face recognition benchmark closest to real data. In the article Blessing of Dimensionality: High-Dimensional (HD) Feature and Its Efficient Compression for Face Verification [10], the use of precisely positioned points as references for a multi-scale, multi-regional representation of the face is an idea worth learning from, combined with a variety of representations.

In recent years, the deep learning methodology has essentially dominated face feature extraction algorithms, and the articles mentioned above showed the progress of research in this area. These algorithms run in fixed time. Earlier face feature models were larger and slower, and were only employed in back-end services. However, some recent studies can optimize the model size and operation speed enough for mobile devices while fundamentally guaranteeing the algorithm's effect.

b) THEORETICAL IDEA OF PROPOSED WORK

Face recognition is basically pattern recognition, and the purpose is to transform actual things into numbers that computers can understand. If a picture is a 64 x 64 8-bit color image, then each pixel value of the image is between 0 and 2^8 - 1, so we can turn an actual image into a matrix. Then, how can we detect the patterns in this matrix? One way is to use a relatively small matrix to sweep over this large matrix from top to bottom and left to right. Within each small matrix block, we can count the number of occurrences of each color from 0 to 2^8 - 1, and so express the characteristics of that block.

Through this step, we get another matrix consisting of the features of the many small matrix blocks, and this new matrix is smaller than the original one. Then we perform the above steps again on the smaller matrix to perform a feature "concentration". Finally, after many abstractions, we can turn the original matrix into a 1 by 1 matrix, which is a single number. Dissimilar pictures, such as a dog and a cat, will eventually be abstracted to different numbers. Similarly for faces, expressions, and ages the principles are the same, although the initial sample size will be large; ultimately, the specific image is abstracted into numbers through the matrix. Then, to reach the objective of comparing faces, we need to compute the dissimilarity between the resulting matrices.
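The block-counting step described above can be sketched in a few lines of NumPy. The 8x8 image, the 4x4 non-overlapping blocks, and the 4-level palette below are illustrative assumptions for a toy example, not values used in this paper:

```python
import numpy as np

def block_histograms(img, block=4, levels=4):
    """Walk a block x block window over the image (non-overlapping here,
    for simplicity) and count how often each intensity level occurs
    inside each block. Each block becomes one histogram row."""
    h, w = img.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = img[r:r + block, c:c + block]
            feats.append(np.bincount(patch.ravel(), minlength=levels))
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 4, size=(8, 8))   # toy 8x8 image with 4 intensity levels
feats = block_histograms(img)
print(feats.shape)        # (4, 4): four blocks, four counts each
print(feats.sum(axis=1))  # each block histogram sums to its 16 pixels
```

Repeating the same counting step on the resulting feature matrix gives the "concentration" the text describes.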
3. FACE RECOGNITION MODEL WITH NEURAL NETWORK

a) INTRODUCTION TO NEURAL NETWORK

The Artificial Neural Network (ANN) has been a research hotspot in the field of computing since the 1980s. It abstracts the neuron network of the human brain from the perspective of information processing, establishes a simple model, and forms different networks according to different connection strategies. In engineering and academia it is often simply called a neural network. A neural network is an operational model consisting of a large number of nodes (or neurons) connected to each other. Every node represents a particular output function, known as an activation function. The connection between every two nodes carries a weight value for the passing signal, called the weight, which is equivalent to the memory of the artificial neural network. The output of the network varies according to the connection method of the network, the weight values, and the activation function. The network itself is usually an approximation of some algorithm or function in nature, or it may be an expression of a logic strategy.

Over the past ten years, research on artificial neural networks (ANN) has been intensive, and great progress has been made. ANNs have successfully solved many problems in the fields of pattern recognition, intelligent robots, automatic control, predictive estimation, biology, medicine, and economics, practical problems that are difficult to solve with conventional computing, showing good intelligence.

The artificial neural network model mainly considers the topology of the network connections, the characteristics of the neurons, and the learning rules. Presently, there are approximately 40 categories of neural network models, including the back propagation network, perceptron, self-organizing map, Hopfield network, Boltzmann machine, adaptive resonance theory and so on. According to the topology of the connections, neural network models can be divided into feed-forward networks and feedback networks.

Feed-forward network: Each neuron in the network accepts the input of the previous stage and outputs it to the next stage. There is no feedback in this network, so it can be represented by a directed acyclic graph. This kind of network carries signals from the input toward the target, and its information processing capability comes from the multiple recombination of simple nonlinear functions. The network structure is quite easy to implement.

Feedback network: There is feedback between neurons in the network, which can be represented by an undirected complete graph. The information processed by this kind of neural network is a state transition that can be analyzed with dynamical system theory. The stability of the system is closely related to the associative memory method.

b) CONVOLUTIONAL NEURAL NETWORK

The convolutional neural network (CNN) is a variant of the multi-layer perceptron inspired by biological vision, with maximally simplified preprocessing. It is generally a feed-forward neural network. The largest difference between a convolutional neural network and a multi-layer perceptron is the network structure: the first few layers are composed of convolutional layers and pooling layers alternately cascaded, simulating a cascade of simple cells and complex cells for high-level feature extraction in the visual cortex.

Figure 2. Typical convolutional neural network (CNN) structure [11]

The convolutional neurons respond to a portion of the input from the previous layer (called the local receptive field, with overlap between the regions), extracting higher-level features of the input. The neurons of the pooling layer take as input a portion of the area of the previous layer (with no overlap between the areas) and average or maximize over it, to resist slight deformation or displacement of the input. The latter layers of the convolutional neural network are generally an output stage made of a number of fully connected layers and a classifier.

c) BUILD FACE RECOGNITION MODEL WITH CNN

At present, face recognition algorithms can be roughly divided into two categories:

(1) Representation-based methods. The basic idea is to transform the two-dimensional face input into another space, and then use statistical methods to analyze the face patterns, as in Eigenface, Fisherface, and SVM.

(2) Feature-based methods, which usually extract local or global features and then feed a classifier for face recognition, such as recognition based on geometric features and HMM.
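Looking back at section a), a feed-forward pass is nothing more than a composition of simple nonlinear functions, stage by stage. Below is a minimal NumPy sketch; the layer sizes, random weights, and choice of ReLU are invented for illustration and are not the network used in this paper:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Feed-forward pass: each stage consumes only the previous stage's
    output. There is no feedback edge, so the computation forms a
    directed acyclic graph."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(1)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),  # stage 1: 3 -> 4
          (rng.standard_normal((2, 4)), np.zeros(2))]  # stage 2: 4 -> 2
out = forward(np.ones(3), layers)
print(out.shape)  # (2,)
```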

A convolutional neural network for face recognition is considered a feature-based technique. It is different from traditional hand-designed feature extraction followed by a carefully designed classifier. Its advantage is that feature extraction is performed by layer-by-layer convolutional dimension reduction; through multi-layer nonlinear mapping, the network can automatically learn, from the unprocessed training samples, a feature extractor and classifier adapted to the recognition task. This technique reduces the requirements on the training samples and on the number of layers of the network.

i. THEORY:

Figure 3. Classical LeNet-5 CNN for face recognition [12]

Figure 3 shows how a classical LeNet-5 CNN works for face recognition. The network was proposed by LeCun et al. [13], and it is composed of the layers below:

Convolution layer: The convolutional layer simulates the process of extracting primary visual features, using the simple mechanisms of local connection and weight sharing to imitate simple cells with local receptive fields. Local connection means that every neuron on the convolutional layer is connected only to the neurons within a fixed region of the previous feature map. Weight sharing means that the neurons within the same feature map use the same connection strengths to the previous layer, which reduces the number of network training parameters; each set of shared connection strengths is one feature extractor, realized as a convolution kernel in the computation. The kernel values are randomly initialized first, and are finally determined by network training.

The pooling/sampling layer: The pooling layer simulates complex cells, screening and combining primary visual features into more advanced, abstract visual features. It is implemented by sampling in the network. After sampling by the pooling layer, the number of output feature maps is unchanged, but the size of each feature map becomes smaller, which reduces the computational complexity and resists small displacement changes. The pooling layer adopted in this paper uses max sampling with a sampling size of 2*2: the input feature map is split into non-overlapping 2*2 rectangles, and the maximum value of each rectangle is taken as the output, so both the length and the width of the output feature map are half those of the input feature map. The neurons in the pooling layer defined in this paper have no learning function.

The fully connected layer: In order to enhance the nonlinearity of the network and limit the size of the network, the network extracts features from the four feature extraction layers and feeds them into a fully connected layer. Every neuron of this layer is connected to all neurons of the previous layer, while neurons within the same layer are not connected.
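The two operations just described can be sketched directly in NumPy. The 4x4 input map and 3x3 averaging kernel below are toy values chosen for illustration, not the actual layers of the network in this paper; only the 2*2 max pooling matches the text above:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D convolution: each output value sees only a local
    receptive field of the input, and every position shares the same
    kernel (weight sharing)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool_2x2(x):
    """Non-overlapping 2*2 max pooling: halves both height and width."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 feature map
k = np.ones((3, 3)) / 9.0                     # shared 3x3 averaging kernel
print(conv2d(x, k))     # 2x2 map of local averages
print(max_pool_2x2(x))  # [[ 5.  7.]
                        #  [13. 15.]]
```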
ii. BUILD SIAMESE NETWORK WITH CNN
After examining different neural networks and their characteristics, we used a Siamese network to solve the problem. The Siamese network is a neural network for measuring similarity. It is used for class identification, classification, etc., in situations where there are many classes but only a small number of samples per class. The traditional classification approach to identification requires knowing exactly which class each sample belongs to, with an explicit label for every sample, and the number of labels must not be too large. These methods are less applicable when the number of classes is very large and the number of samples per class is comparatively small. In fact, this is easy to understand: over the whole data set the data volume is sufficient, but every class contains only a few samples, so an ordinary classification algorithm cannot be trained to any good result. We can only find a new way to train on this kind of data set, hence the Siamese network, as shown in Figure 4.

Figure 4. Siamese Network Work Flow [14]

The Siamese network learns a similarity measure from the data and uses the learned metric to compare and match samples of new, unknown classes. This method can be applied to classification problems where the number of categories is large, or where the whole training sample set cannot be used for training with previous methods.

The machine we used for this article runs the Ubuntu 18 operating system. The CPU is an Intel(R) Core(TM) i5-7300HQ @ 2.50GHz with 4 cores. The memory is dual-channel DDR4 8GB SDRAM. We train our neural network on an NVIDIA GeForce GTX 1050 GPU with 4GB of GPU RAM. The GPU driver version is 390.48. We use Compute Unified Device Architecture (CUDA) version 9.0, and the NVIDIA CUDA Deep Neural Network library (cuDNN) version 7.0 for CUDA 9.0. We build the Siamese network with the parameters below: [23]

[Siamese network parameters, from [23]]

iii. TRAIN THE NEURAL NETWORK

The face database we chose is ORL [15]. The ORL face database consists of 400 photos of 40 people, that is, ten photos per person. The faces show varying expressions, small gestures and so on. The training process is performed on the two databases: 90% of the faces in the library are randomly selected as the training set, the remaining 10% of the faces are used as the test set, and the faces in the two sets are then standardized. Training was performed on the GPU, as shown in Figure 5 and Figure 6. We can see that during training the GPU usage reached 100%, and the operating temperature increased dramatically.

Figure 5. GPU usage before training. [23]

Figure 6. GPU usage during training. [23]
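The pairwise objective a Siamese network trains on can be sketched as follows. The toy linear embedding, the random shared weights, and the margin of 1.0 are illustrative assumptions, not the parameters from [23]; only the 5-D embedding size mirrors the output dimension used later in this paper:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((5, 16))  # shared weights: 16-D input -> 5-D embedding

def embed(x):
    """Both branches of the Siamese pair apply the SAME weights W."""
    return W @ x

def contrastive_loss(x1, x2, same, margin=1.0):
    """same=1 pulls the pair's embeddings together; same=0 pushes them
    apart until they are at least `margin` away from each other."""
    d = np.linalg.norm(embed(x1) - embed(x2))
    return same * d**2 + (1 - same) * max(0.0, margin - d)**2

x = rng.standard_normal(16)
print(contrastive_loss(x, x, same=1))   # identical pair -> 0.0
print(contrastive_loss(x, -x, same=0))  # dissimilar pair -> non-negative penalty
```

Minimizing this loss over many labeled pairs is what turns the shared embedding into a usable similarity metric.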

We trained the model for 100 epochs. A sample training loss is shown below; we can see that the training loss goes down significantly during the early epochs, and it finally converged to 0.0066933. Figure 7 clearly shows how the training loss trends down as the epochs increase.

Figure 7. Training loss vs. Epoch [23]

iv. MODEL VERIFICATION

The input of the neural network is an image of a human face, and the output of the neural network is a 5-D vector. A sample output looks like the one below:

[Sample output vector, from [23]]

To determine whether the faces in two images come from the same person, we need to calculate the similarity of the two images, i.e., the Euclidean distance between the two output vectors. Below is an example output for different people identified by our model; the images are shown in Figure 8:

[Example outputs for different people, from [23]]
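The distance test just described, extended to the stranger-detection goal from the abstract, can be sketched as follows. The 5-D vectors match the model's output size, but the sample values and the 0.5 decision threshold are invented for illustration; a real threshold would be tuned on held-out ORL data:

```python
import numpy as np

THRESHOLD = 0.5  # assumed decision threshold, to be tuned on a validation set

def same_person(vec_a, vec_b, threshold=THRESHOLD):
    """Small Euclidean distance between 5-D embeddings -> same person."""
    return np.linalg.norm(np.asarray(vec_a) - np.asarray(vec_b)) < threshold

def is_stranger(probe, gallery, threshold=THRESHOLD):
    """A face whose embedding matches nobody in the enrolled gallery
    is flagged as a stranger."""
    return not any(same_person(probe, known, threshold) for known in gallery)

gallery = [[0.1, 0.2, 0.3, 0.4, 0.5],    # enrolled (known) people
           [0.9, 0.8, 0.1, 0.2, 0.3]]
print(same_person(gallery[0], [0.1, 0.2, 0.3, 0.4, 0.45]))  # True
print(is_stranger([5.0, 5.0, 5.0, 5.0, 5.0], gallery))      # True
```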

Figure 8. Different people with high Euclidean distance [23]

Below is an example output for the same person with a different pose, identified by our model; the images are shown in Figure 9:

[Example outputs for the same person, from [23]]

Figure 9. Same person, even with a different pose, has small Euclidean distance [23]

4. CONCLUSION

In this paper, I proposed to create a high-performance, scalable, agile, and low-cost face recognition system. First, I studied neural networks and convolutional neural networks. Based on the theory of deep learning, I adopted a Siamese network from documentation [23], which trains the neural network based on similarities. We then examined and compared the available open-source data sets, chose the ORL dataset, and trained the model with a GPU. The model takes a human face image and extracts it into a vector; the distances between vectors are then compared to determine whether two faces in different pictures belong to the same person.

REFERENCES

[1] "cookbook.fortinet.com," 10 Oct. 2018. [Online]. Available: https://cookbook.fortinet.com/face-recognition-configuration-in-forticentral/. [Accessed 10 Oct. 2018].
[2] P. Viola and M. J. Jones, "Robust Real-time Object Detection," International Journal of Computer Vision, pp. 137-154, 2004.
[3] B. Wu, H. Ai, C. Huang and S. Lao, "Fast rotation invariant multi-view face detection based on real Adaboost," Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 79-84, 2004.
[4] Y. Li, H. Ai, T. Yamashita and S. Lao, "Tracking in Low Frame Rate Video: A Cascade Particle Filter with Discriminative Observers of Different Life Spans," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1728-1740, 2008.
[5] T. F. Cootes, C. J. Taylor, D. H. Cooper and J. Graham, "Active Shape Models-Their Training and Application," Computer Vision and Image Understanding, pp. 38-59, 1995.
[6] D. Cristinacce and T. Cootes, Boosted Regression Active Shape Models, BMVC, 2007.
[7] X. Cao, Y. Wei, F. Wen and J. Sun, "Face Alignment by Explicit Shape Regression," in 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, 2012.
[8] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, pp. 71-86, 1991.
[9] W. Zhang, S. Shan, W. Gao, X. Chen and H. Zhang, "Local Gabor binary pattern histogram sequence (LGBPHS): a novel non-statistical model for face representation and recognition," Tenth IEEE International Conference on Computer Vision, pp. 786-791, 2005.
[10] D. Chen, X. Cao, F. Wen and J. Sun, "Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification," in 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, 2013.
[11] "Convolutional neural network," 2017. [Online]. Available: https://en.wikipedia.org/wiki/Convolutional_neural_network.
[12] S. S. Liew, "ResearchGate," 1 Mar. 2016. [Online]. Available: https://www.researchgate.net/figure/Architecture-of-the-classical-LeNet-5-CNN_fig2_299593011. [Accessed 10 Oct. 2018].
[13] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, pp. 1-45, Nov. 1998.
[14] xlvector, "JIANSHU," 25 Jul. 2016. [Online]. Available: https://www.jianshu.com/p/70a66c8f73d3. [Accessed 18 Sep. 2018].
[15] F. S. Samaria, Face Recognition Using Hidden Markov Models, Doctoral thesis, 1995.
[16] E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li and G. Hua, "Labeled Faces in the Wild: A Survey," Advances in Face Detection and Facial Image Analysis, pp. 189-248, 2016.
[17] V. Chu, "Medium.com," 20 Apr. 2017. [Online]. Available: https://medium.com/initializedcapital/benchmarking-tensorflow-performance-and-cost-across-different-gpu-options-69bd85fe5d58. [Accessed 19 Sep. 2018].
[18] B. Amos, B. Ludwiczuk and M. Satyanarayanan, "OpenFace: A general-purpose face recognition library with mobile applications," CMU School of Computer Science, Tech. Rep., 2016.
[19] B. Hill, "HOTHARDWARE," 20 Aug. 2018. [Online]. Available: https://hothardware.com/news/nvidia-geforce-rtx-1080-rtx-1080-ti-799-1199-september-20th. [Accessed 10 Sep. 2018].
[20] 4psa, "4psa.com," 28 Jun. 2013. [Online]. Available: https://blog.4psa.com/the-callback-syndrome-in-node-js/. [Accessed 11 Aug. 2018].
[21] W.-S. Chu, "Component-Based Constraint Mutual Subspace Method," 2017. [Online]. Available: http://www.contrib.andrew.cmu.edu/~wschu/project_fr.html.
[22] W. Hwang, "Face Recognition System Using Multiple Face Model of Hybrid Fourier Feature under Uncontrolled Illumination Variation," 2017. [Online]. Available: http://ispl.korea.ac.kr/~wjhwang/project/2010/TIP.html.
[23] Y. Li and S. Cha. [Online]. Available: https://arxiv.org/ftp/arxiv/papers/1901/1901.02452.pdf
