• SHIVAM KUMAR SINGH, Department of Computer Sciences and Engineering, Amity University Jharkhand, Ranchi, India.
• ANURAG SINHA, Department of Information Technology, Amity University Jharkhand, Ranchi, India.


deep learning, support vector machine, pattern recognition, facial expression recognition


This research paper, on the topic of creating facial expressions with the help of emojis, describes the basic aspects of, and the different ways in which, we express and communicate our feelings. Communication is broadly classified into two modes: verbal and non-verbal. Facial expressions are a powerful means of non-verbal communication, involving the exchange of unspoken meanings, and they have attracted considerable research attention in the fields of computer vision and artificial intelligence. Many studies have been carried out to capture these expressions, primarily in order to infer people's emotions. In this project, an API is used to acquire images in real time from any camera-based application. A Haar cascade classifier is used to extract image features from the acquired images, and a Support Vector Machine (SVM) is used to classify those features into the corresponding expressions. These expressions are then converted into their equivalent emojis, which are superimposed over the real face like a mask. This project can be used to study the different facial expressions that a machine can understand, and it can also be used as a filter in social media applications such as Facebook, Instagram, and Snapchat.
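The classification step of the pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two-dimensional feature vectors, the three-label expression set, and the emoji mapping are all synthetic placeholders standing in for the Haar-cascade-derived features and label set a real system would use.

```python
# Sketch of the expression-to-emoji step: an SVM classifies a feature
# vector into an expression label, which is then mapped to an emoji.
import numpy as np
from sklearn.svm import SVC

# Hypothetical label set and emoji mapping (placeholders).
EMOJI = {"happy": "😀", "sad": "😢", "neutral": "😐"}

# Synthetic training vectors standing in for Haar-derived features.
X = np.array([[0.9, 0.1], [0.8, 0.2],    # "happy" examples
              [0.1, 0.9], [0.2, 0.8],    # "sad" examples
              [0.5, 0.5], [0.45, 0.55]]) # "neutral" examples
y = ["happy", "happy", "sad", "sad", "neutral", "neutral"]

clf = SVC(kernel="linear").fit(X, y)

def expression_to_emoji(features):
    """Classify one feature vector and return (label, emoji)."""
    label = clf.predict([features])[0]
    return label, EMOJI[label]

label, emoji = expression_to_emoji([0.85, 0.15])
print(label, emoji)
```

In a full system, the feature vector would come from faces detected in each camera frame (e.g. via OpenCV's `CascadeClassifier`), and the returned emoji image would be superimposed over the detected face region as a mask.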






How to Cite

SINGH, S. K., & SINHA, A. (2021). EMPLOYING DEEP LEARNING FOR CREATING FACIAL EXPRESSION RECOGNITION. Quantum Journal of Engineering, Science and Technology, 2(4), 15–27. Retrieved from https://qjoest.com/index.php/qjoest/article/view/34