TY - GEN
T1 - Low-cost facial expression on mobile platform
AU - Chu, Chang Chun
AU - Chen, Duan Yu
AU - Hsieh, Jun-Wei
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/11/30
Y1 - 2015/11/30
N2 - Facial expression recognition has been widely applied in many applications. Most existing methods rely on complex algorithms and therefore require substantial computing resources. To perform facial expression recognition on resource-limited mobile platforms, we develop a system that is low in complexity, highly efficient, runs in real time, and requires no prior training. In this paper, lip features are used to classify human emotion. First, we detect human faces using Haar-like features. Second, the mouth region is determined by horizontal projection over the detected face. Third, we locate the lip corners by using vertical projection to find the lip boundary. The extracted features are the distance along the mouth contour and the difference in gray values between the upper lip and the mid-height of the mouth. Finally, we adopt a feature-based analysis approach. We recognize four expressions: neutral, smile, surprise, and sadness. The whole system runs in real time at about twenty frames per second. Experimental results show an average recognition rate of about 85%, demonstrating the system's efficacy in real-world environments.
AB - Facial expression recognition has been widely applied in many applications. Most existing methods rely on complex algorithms and therefore require substantial computing resources. To perform facial expression recognition on resource-limited mobile platforms, we develop a system that is low in complexity, highly efficient, runs in real time, and requires no prior training. In this paper, lip features are used to classify human emotion. First, we detect human faces using Haar-like features. Second, the mouth region is determined by horizontal projection over the detected face. Third, we locate the lip corners by using vertical projection to find the lip boundary. The extracted features are the distance along the mouth contour and the difference in gray values between the upper lip and the mid-height of the mouth. Finally, we adopt a feature-based analysis approach. We recognize four expressions: neutral, smile, surprise, and sadness. The whole system runs in real time at about twenty frames per second. Experimental results show an average recognition rate of about 85%, demonstrating the system's efficacy in real-world environments.
KW - Expression recognition
KW - Face expression
KW - Mobile platform
UR - http://www.scopus.com/inward/record.url?scp=85020726053&partnerID=8YFLogxK
U2 - 10.1109/ICMLC.2015.7340620
DO - 10.1109/ICMLC.2015.7340620
M3 - Conference contribution
AN - SCOPUS:85020726053
T3 - Proceedings - International Conference on Machine Learning and Cybernetics
SP - 586
EP - 590
BT - Proceedings of 2015 International Conference on Machine Learning and Cybernetics, ICMLC 2015
PB - IEEE Computer Society
T2 - 14th International Conference on Machine Learning and Cybernetics, ICMLC 2015
Y2 - 12 July 2015 through 15 July 2015
ER -
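
Note: the abstract describes a projection-based pipeline (Haar-like face detection, horizontal projection to find the mouth band, vertical projection to find the lip corners). The following is a minimal Python/OpenCV sketch of such a pipeline, not the authors' code; the lower-third mouth assumption, band half-height, and darkness threshold are illustrative assumptions, and "face.jpg" is a hypothetical input.

import cv2
import numpy as np

# Standard OpenCV frontal-face Haar cascade (ships with opencv-python).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_mouth(gray_face):
    """Locate the mouth band and lip corners inside a grayscale face crop."""
    h, w = gray_face.shape
    # Assumption: the mouth lies in the lower third of the face box.
    lower = gray_face[2 * h // 3:, :]
    # Horizontal projection: the dark lip line yields a low row sum.
    row_sums = lower.sum(axis=1)
    mouth_row = int(np.argmin(row_sums))
    half = max(1, h // 12)  # assumed half-height of the mouth band
    band = lower[max(0, mouth_row - half): mouth_row + half, :]
    # Vertical projection: the lip corners bound the dark column region.
    col_sums = band.sum(axis=0)
    dark = np.where(col_sums < col_sums.mean())[0]
    if dark.size == 0:
        return None
    # Return the mouth row (in face-crop coordinates) and lip corner columns.
    return mouth_row + 2 * h // 3, int(dark[0]), int(dark[-1])

frame = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    result = locate_mouth(gray[y:y + h, x:x + w])
    if result is not None:
        print("mouth row, left corner, right corner:", result)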