Pallavi S. Salunke, Dr. Bharti Gawali
Dept. of Computer Sci., Vivekanand College (VIASMSC), Aurangabad (MS), India
Dept. of Computer Sci., Dr. B. A. M. University, Aurangabad (MS), India
ABSTRACT
Face recognition is an important research problem spanning numerous
fields and disciplines. It has many practical applications, such as bankcard
identification, access control, mug-shot searching, security monitoring, and
surveillance, and it mirrors a fundamental human ability that is essential for
effective communication and interaction among people. The human face is involved
in a large variety of activities: it houses the apparatus for speech production
as well as most of our sensory organs (eyes, nose, mouth). Beyond these biological
functions, the human face also provides a number of social signals essential for
our public life, and recognizing the facial signals of human expression is a
challenging problem with many applications. This paper introduces the recognition
of eye movements using the Facial Action Coding System (FACS), the most widely
used and versatile method for measuring and describing facial behavior. A facial
recognition system is a computer application for automatically identifying or
verifying a person from a digital image or a video frame from a video source.
Keywords:
FACS, Eye movement.
INTRODUCTION:
Although studies on human face recognition were expected to serve as a reference
for machine recognition of faces, research on machine recognition has developed
independently of studies on human perception. During the 1970s,
typical pattern-classification techniques, which use measurements between
features in faces or face profiles, were employed [1]. During the 1980s, work on
face recognition remained nearly stable. Since the early 1990s, research
interest in machine recognition of faces has grown tremendously. A formal
method of classifying faces was first proposed in [2]. The author proposed
collecting facial profiles as curves, finding their norm, and then classifying
other profiles by their deviations from the norm. This classification is
multi-modal, i.e. resulting in a vector of independent measures that could be
compared with other vectors in a database. Progress has advanced to the point
that face recognition systems are being demonstrated in real-world settings [3].
Face Recognition has been an interesting issue for both neuroscientists and
computer engineers dealing with artificial intelligence (AI). A healthy human
can detect a face easily and identify that face, whereas for a computer to
recognize faces, the face area should be detected and recognition comes next.
Hence, for a computer to recognize faces the photographs should be taken in a
controlled environment; a uniform background and identical poses make the
problem easy to solve. These face images are called mug shots [4]. From these
mug shots, canonical face images can be manually or automatically produced by
some preprocessing techniques like cropping, rotating, histogram equalization
and masking. Several main approaches mark the history of machine recognition of
faces. Template Matching: Brunelli and Poggio [5] suggest that
the optimal strategy for face recognition is holistic and corresponds to
template matching. The Eigenface Method of Turk and Pentland [6] is one of the
main methods applied in the literature which is based on the Karhunen- Loeve
expansion. Their study is motivated by the earlier work of Sirovich and Kirby [7],
[8]. It is based on the application of Principal Component Analysis to the
human faces. It treats the face images as 2-D data, and classifies the face
images by projecting them to the eigenface space which is composed of
eigenvectors obtained from the covariance of the face images. Eigenface recognition
derives its name from the German prefix eigen, meaning own or individual. The
Eigenface method of facial recognition is considered the first working facial
recognition technology [9]. When the method was first proposed by Turk and
Pentland [6], they worked on the image as a whole and used a nearest-mean
classifier to classify the face images. Etemad and Chellappa [10] proposed a
method applying Linear/Fisher Discriminant Analysis to the face
recognition process. Subspace LDA: An alternative method which combines PCA and
LDA has been studied [11-14]. Bobis et al. [15] studied a feature-based face
recognition system. They suggested that a face can be recognized by extracting
the relative position and other parameters of distinctive features such as
eyes, mouth, nose and chin.
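The eigenface idea surveyed above can be sketched in a few lines: gallery faces are flattened to vectors, a principal-component basis is computed from their covariance, and a probe face is classified by nearest neighbour in the projected space. This is a generic sketch using synthetic data, not the implementation of [6]; all array sizes and names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a gallery of flattened face images (10 faces, 64 pixels).
gallery = rng.normal(size=(10, 64))

# Eigenfaces: principal components of the mean-centred gallery.
mean_face = gallery.mean(axis=0)
centred = gallery - mean_face
# SVD of the centred data yields the eigenvectors of the covariance matrix.
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]                      # keep the top 5 components

def project(img):
    return eigenfaces @ (img - mean_face)

def identify(probe):
    """Nearest-neighbour match in eigenface space."""
    coords = project(probe)
    dists = [np.linalg.norm(coords - project(g)) for g in gallery]
    return int(np.argmin(dists))

# A slightly noisy version of gallery face 3 should still match face 3.
probe = gallery[3] + 0.01 * rng.normal(size=64)
print(identify(probe))  # → 3
```

Classifying by nearest mean per subject, as in [6], would replace the per-image distance with a distance to each subject's mean projection.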
METHODOLOGY:
Facial feature extraction consists of localizing the most characteristic
face components (eyes, nose, mouth, etc.) within images that depict human faces.
In this paper we develop new, more accurate representations of facial
expression by building a photo database of facial expressions and then
probabilistically characterizing the facial muscle activation associated with
each expression using a detailed physical model of the skin and muscles. For
the study of eye movement, the Facial Action Coding System is used; this FACS
method of facial feature extraction serves to initialize our face
recognition technique.
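As a concrete illustration of this initialization step, the sketch below maps the gap between hypothetical upper- and lower-eyelid landmarks to the FACS intensity letters A–E used when scoring AU 43 (eye closure). The landmark names and the numeric thresholds are illustrative assumptions of ours, not values taken from FACS or from this paper.

```python
# Minimal sketch (hypothetical landmark names and thresholds):
# rate eye closure (AU 43) from the eyelid gap relative to the
# subject's own neutral-face baseline.

def eyelid_gap(landmarks):
    """Vertical gap between upper and lower eyelid, in pixels."""
    return landmarks["lower_lid"][1] - landmarks["upper_lid"][1]

def au43_intensity(gap, neutral_gap):
    """Map the relative eyelid gap to an AU 43 intensity letter."""
    ratio = gap / neutral_gap if neutral_gap else 0.0
    if ratio >= 0.9:
        return None        # eyes essentially open: AU 43 not scored
    if ratio > 0.6:
        return "43A"
    if ratio > 0.4:
        return "43B"
    if ratio > 0.2:
        return "43C"
    if ratio > 0.0:
        return "43D"
    return "43E"           # eyelids fully closed

neutral = {"upper_lid": (50, 40), "lower_lid": (50, 52)}   # 12 px gap
drooping = {"upper_lid": (50, 46), "lower_lid": (50, 52)}  # 6 px gap
print(au43_intensity(eyelid_gap(drooping), eyelid_gap(neutral)))  # → 43B
```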
SCORING PROCEDURE
1. Notation
Always list the AUs in numerical order to facilitate communication. Alphabetical
letters (i, ii) that refer to specific images or to persons (w, j) are not used
when scoring. When intensity is scored, add the letter A, B, C, D, or E
immediately after the AU number; complying with the intensity guidelines for
each AU. Unilaterality is noted by indicating the side of the face where
the appearance change occurred. This notation is placed in front of the AU
number. Use “L” for the left and “R” for the right side of the person's face.
Remember that this reference is not to your left or right side as you look at
the picture, but to that of the person in the image. Unilaterality is not scored
if there is even a trace of the AU on the other side of the face. Involvement
of only one lip, rather than both lips, is noted by indicating whether the AU was
present only in the top or bottom lip. This notation can be used only with AUs
8, 18, 22, 23, and 28. As with unilaterality, this notation is placed in front
of the AU number. Use “T” for the top and “B” for the bottom lip. Single lip
involvement is not scored if there is even a trace of the AU on the other lip.
Recall that you cannot score an action as both unilateral and present in only
one lip; you must choose whichever of the two best describes the action,
unilaterality or top vs. bottom. Action Units that are scored for intensity of
action are not scored separately for the left and right sides simply because the
AU is more evident on one side than the other; the intensity rating is made for
the stronger movement on either side of the face. Often you may find that
evidence for an AU appears on one side of the face, but not the other. As long
as there is some evidence on the other side, any trace of any appearance change
for that AU, score the action as bilateral. It is only when there is no trace
of the Action Unit, or a different appearance change on the other side, that a
unilateral score is used.
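The notation rules above are mechanical enough to encode directly. The sketch below is our own illustration (function and field names are ours; only the rules come from the manual): an intensity letter A–E follows the AU number, an L/R or T/B qualifier precedes it, the two qualifiers are mutually exclusive, T/B applies only to AUs 8, 18, 22, 23 and 28, and AU lists are written in numerical order.

```python
# Illustrative encoding of the FACS notation rules described above.

LIP_AUS = {8, 18, 22, 23, 28}  # only these AUs may take a T/B lip qualifier

def format_au(au, intensity=None, side=None, lip=None):
    if side and lip:
        raise ValueError("score either unilaterality or top/bottom, not both")
    if intensity and intensity not in "ABCDE":
        raise ValueError("intensity must be one of A-E")
    if lip and au not in LIP_AUS:
        raise ValueError("T/B applies only to AUs 8, 18, 22, 23, 28")
    prefix = side or lip or ""          # "L", "R", "T", "B", or nothing
    return f"{prefix}{au}{intensity or ''}"

def format_score(aus):
    """Render (au, intensity, side, lip) tuples in numerical AU order."""
    return " + ".join(format_au(*a) for a in sorted(aus, key=lambda a: a[0]))

print(format_score([(43, "E", None, None), (6, "B", None, None),
                    (7, "E", None, None)]))
# → 6B + 7E + 43E
```

The printed result matches the revised scoring of the example score sheet later in this paper.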
2. Score Sheet
Scoring for the Lower Face, Upper Face, and Head and Eye positions is
entered on the front side of the Score Sheet. The listing of AUs for the
Omission Check, Scoring Step II, is on the other side where each AU is listed
by its number and name. Put your name, date and the time of scoring on the
bottom of the front side of the Score Sheet. Identify the image you are scoring
in the space provided for “stimulus.” When you cannot score a facial area
because it is not visible, use a score of 70 if the brow is not visible, 71 if
the eyes are not visible, 72 if the lower face is not visible, and 73 if the entire
face is not visible. Do not score an area of the face as “not visible” if it is
possible to score any AU that affects this area. For example, if the subject
places a hand on the brow, but wrinkles in the center of the upper part of the
brow allow you to score AU 1, you do not score the brow as not visible (70),
even though you may be in doubt as to whether AU 4 is also present. A score of
73 is used bilaterally, only when the head is turned so much that the back of
the head faces the coder, the subject moves his head completely from your line
of view, or an obstacle covers the face completely. You may, however, use any
of the “not visible” scores as occurring only on the left or the right side of
the face. This situation occurs most often when the head is turned far to the
side, and is scored L73 or R73. Usually the head turn would have to be extreme
so that no part of the area beyond the midline of the face is visible, as many
AUs can be inferred from seeing only a small portion of each facial area.
Scores 70, 71 and 72 may also be used unilaterally. To help you remember which
AUs cannot be scored with each of the “not visible” scores, scores 70 and 71
are located with the Action Units that can affect the brow and eye areas. Score
72 is located with the Lower Face as it cannot be scored simultaneously with
any of the Lower Face scores. Score 73 has been placed with the final full-face
score. The minimum number of Action Units that can be scored for a facial event
is one. If there is some movement during the event you are scoring that is not
possible to score as any AU or AD, record the event as an “Unscorable” action,
74. Use 74 for unscorable movements only when there is no other scorable AU or
AD on the upper and lower face. If unscorable movement is observed on the face
during a Head/Eye position change, 74 is scored with the Head/Eye position
scores. If there is no detectable action of any kind, the face is scored
“Neutral” (AU 0). Neutral is scored only once for a facial event and cannot be
scored with any other AU or AD. If all that is observed during a movement is a
shift in the Head and/or Eye position, then NEUTRAL (0) must be scored with the
position scores. Unlike the “Not Visible” scores (70, 71, 72 or 73), 0 Neutral
or 74 Unscorable are not scored for the separate areas of the face, nor is
either scored unilaterally. Scoring proceeds in the following order:
1. Lower Face is scored before the Upper Face, since if certain Lower Face AUs
are scored, they change the criteria for certain Upper Face AUs. If any of the
Miscellaneous Actions are to be scored, they are scored with the Lower Face.
2. Head and Eye position is scored next, if it is to be scored at all. If Head
and Eye position is not scored, the Head/Eye check must be made to verify
whether the view of the subject affects the scoring of the other AUs.
3. Upper Face scoring is done last.
4. The score for the total face is the combination of scores, possibly
rearranged, for the Upper Face, Lower Face, and Head and Eye position scores.
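The "not visible" codes and the combination of area scores described above can be sketched as follows. The data-structure names are our own; the codes 70–73, the L/R qualifier, and the Neutral (AU 0) fallback follow the text.

```python
# Sketch of the "not visible" codes and full-face score assembly.

NOT_VISIBLE = {"brow": 70, "eyes": 71, "lower_face": 72, "entire_face": 73}

def visibility_scores(hidden_areas, side=None):
    """Return 70-73 codes for hidden areas; side may be 'L' or 'R'."""
    if "entire_face" in hidden_areas:
        # 73 supersedes the area codes: the whole face is out of view.
        return ["73" if side is None else f"{side}73"]
    prefix = side or ""
    return [f"{prefix}{NOT_VISIBLE[a]}" for a in sorted(hidden_areas)]

def full_face_score(lower, head_eye, upper):
    """Combine area score lists; scoring order is Lower Face, then
    Head/Eye position, then Upper Face, rearranged for the final score."""
    combined = lower + upper + head_eye
    return " + ".join(combined) if combined else "0"  # AU 0 = Neutral

print(full_face_score([], [], ["6B", "7E", "43E"]))  # → 6B + 7E + 43E
print(full_face_score([], [], []))                   # → 0
```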
EXPERIMENTAL ANALYSIS OF EYE MOVEMENT USING FACS
Images related to eye movements were collected, and the study was performed on
the AUs that can occur in these images. AUs were scored manually using the score
sheet and then checked with the FACS score checker. We have considered six types
of eye movement, shown in the following images: Neutral, Eye Closure, Eyes Left,
Eyes Right, Eyes Up, and Eyes Down. A face showing no action is often called a
neutral face. The face is not actually at rest because the eyes are open, the
jaw is closed, etc., but no AU can be scored. This neutral face is the baseline
for scoring AUs in the example images of this person, and similar “neutral”
faces are most often used as baselines in actual scoring situations,
although other baselines can also be used.
Image 43i: The upper eyelid is relaxed and
drooping down slightly, sufficient to score 43B. Note that the shape of the lower
eyelid is little changed from neutral and there is no additional wrinkling. Be
sure to compare the appearance of relaxing the upper eyelid in 43 to that
resulting from the tensing of the orbicularis oculi muscle that is apparent in
the AU 6 and AU 7 items above.
Image 43ii: This appearance results from relaxing the upper
eyelid, and the eyelids are almost together (closed), but the gap between them
can still be seen, especially in the medial parts and around the lacrimal
gland. This evidence is sufficient to score 43D. Compare this appearance to the
next item, which shows the eyelids closed.
Image 43iii: The
eyes are closed in 43E. Notice that when the eyes are closed, you see the upper
eyelashes lying on the skin of the lower lid and little or nothing of the lower
eyelashes. No sclera or mucosa can be seen. No evidence of any other action is
present in this example.
In scoring Eyes Left and Eyes Right, we can distinguish marked eye turn 62B from
severe eye turn 62C. Anything beyond 62C is scored as 62C or 62D until you reach
the point where there is no sclera visible on one side of the iris, which is
scored 62E. In scoring Eyes Up, you can score 63 if: you can see sclera below
the iris, and in the neutral face you cannot; you can see all of the bottom of
the iris, and in the neutral face you cannot; or you cannot see the top portion
of the iris, and in the neutral face you can. In scoring Eyes Down, you can
score 64 if: you can no longer see all of the bottom of the iris, but in the
neutral face you can, and this is not due to AU 6, 7, 9, 10, 12, or 13; or you
can no longer see the entire pupil, but in the neutral face you can.
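The Eyes Up / Eyes Down criteria above reduce to comparisons of sclera and iris visibility against the neutral face, which the sketch below encodes. The boolean field names are our own shorthand; the decision logic follows the scoring criteria in the text, and we apply the AU 6/7/9/10/12/13 exclusion to both AU 64 criteria as a simplifying assumption.

```python
# Illustrative encoding of the Eyes Up (AU 63) / Eyes Down (AU 64) rules.

def score_eyes_up(cur, neutral):
    """Score AU 63 from visibility flags, compared with the neutral face."""
    if cur["sclera_below_iris"] and not neutral["sclera_below_iris"]:
        return "63"
    if cur["full_iris_bottom"] and not neutral["full_iris_bottom"]:
        return "63"
    if not cur["iris_top_visible"] and neutral["iris_top_visible"]:
        return "63"
    return None

def score_eyes_down(cur, neutral, confounding_aus=()):
    """Score AU 64 unless AU 6, 7, 9, 10, 12, or 13 explains the change."""
    if set(confounding_aus) & {6, 7, 9, 10, 12, 13}:
        return None
    if not cur["full_iris_bottom"] and neutral["full_iris_bottom"]:
        return "64"
    if not cur["full_pupil_visible"] and neutral["full_pupil_visible"]:
        return "64"
    return None

neutral = {"sclera_below_iris": False, "full_iris_bottom": True,
           "iris_top_visible": True, "full_pupil_visible": True}
gaze_up = dict(neutral, sclera_below_iris=True)
print(score_eyes_up(gaze_up, neutral))  # → 63
```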
Facial Action Coding System: Score Sheet, designed by Paul Ekman and Wallace V. Friesen
Score sheet # 1  Image 1
_________________________________________________________________________
I. Initial Scoring:
II. Omission Check:
III. Reorganized Scoring:
IV. Reference Check:
AUs in Numerical Order:
Alternative AUs:  Reference Check:
Results for Step IV:
V. Revised Scoring:
Head/Eye Position:

Upper Face
I. Initial Scoring: (4 + 6 ref) + 7 + 43  Image 1
II. Omission Check: (4 + 6 ref)
III. Reorganized Scoring: (6 or 4 + 6 ref) + 7 + 43
IV. Reference Check: (especially: 4 with 9; 6 with 9, 10, 12, & 13; 7 with 6, 12, & 13)
AUs in Numerical Order: 6 + 7 + 43
Alternative AUs: None  Reference Check:
Results for Step IV:
V. Revised Scoring: 6B + 7E + 43E
Final Scoring Upper Face: 6B + 7E + 43E  Final Scoring Lower Face:  Final Head/Eye Positions:
Final Full Face Score: (Score 73 if entire head/face is out of view)
Coder’s Name:  Date:  Time:
Stimulus: Image # 1  Segment:  Item:
Location: Beginning  End
--------------------------------------------------------------------------------------------------------
© Copyright 2001 Paul Ekman, Wallace V. Friesen, & Joseph C. Hager
Permission to reproduce this two page score sheet is granted.
CONCLUSION:
• Focus on developing a real-time, non-intrusive system for spontaneous facial
activity understanding.
• Combine computer vision with graphical models for robust and consistent
visual understanding and interpretation.
• Apply to different applications: human-computer interaction (e.g. emotion
recognition), transportation, security, medical diagnosis, learning, games,
polygraph, entertainment, etc.
REFERENCES:
[1] R. Chellappa, C. L. Wilson, and C. Sirohey, May 1995. “Human and machine
recognition of faces: A survey,” Proc. IEEE, vol. 83, no. 5, pp. 705-740.
[2] Francis Galton, June 21, 1888. “Personal identification and description,”
Nature, pp. 173-177.
[3] W. Zhao, 1999. “Robust image based 3D face recognition,” Ph.D. Thesis,
Maryland University.
[4] J. J. Weng and D. L. Swets, 1999. “Face Recognition,” in A. K. Jain,
R. Bolle, and S. Pankanti (Editors), Biometrics: Personal Identification in
Networked Society, Kluwer Academic Press.
[5] R. Brunelli and T. Poggio, 1993. “Face Recognition: Features versus
Templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 15, no. 10, pp. 1042-1052.
[6] M. A. Turk and A. P. Pentland, 1991. “Face Recognition using Eigenfaces,”
IEEE, pp. 586-591.
[7] L. Sirovich and M. Kirby, March 1987. “Low-dimensional Procedure for the
Characterization of Human Faces,” Journal of the Optical Society of America,
vol. 4, no. 3, pp. 519-524.
[8] M. Kirby and L. Sirovich, January 1990. “Application of the Karhunen-Loeve
Procedure for the Characterization of Human Faces,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108.
[9] B. Kepenekci, September 2001. “Face Recognition Using Gabor Wavelet
Transform,” MSc. Thesis, METU.
[10] K. Etemad and R. Chellappa, 1996. “Face Recognition Using Discriminant
Eigenvectors,” IEEE, pp. 2148-2151.
[11] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, July 1997. “Eigenfaces
vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7.
[12] W. Zhao, A. Krishnaswamy, R. Chellappa, D. L. Swets, and J. Weng, 1998.
“Discriminant Analysis of Principal Components for Face Recognition,”
International Conference on Automatic Face and Gesture Recognition, pp. 336-341.
[13] W. Zhao, R. Chellappa, and N. Nandhakumar, 1998. “Empirical Performance
Analysis of Linear Discriminant Classifiers,” IEEE, pp. 164-169.
[14] W. Zhao, 1999. “Subspace Methods in Object/Face Recognition,” IEEE,
pp. 3260-3264.
[15] C. S. Bobis, R. C. Gonzalez, J. A. Cancelas, I. Alvarez, and J. M. Enguita,
1999. “Face Recognition Using Binary Thresholding for Features Extraction,”
IEEE, pp. 1077-1080.