Linking facial recognition and the emotional system as shown by Capgras syndrome

Posted comment on 'Nature and extent of person recognition impairments associated with Capgras syndrome in Lewy body dementia' by C. M. Fiacconi, V. Barkley, E. C. Finger, N. Carson, D. Duke, R. S. Rosenbaum, A. Gilboa and S. Köhler, published in Front. Hum. Neurosci., 24th September 2014


Fiacconi and colleagues studied one patient suffering from dementia with Lewy bodies (DLB) who showed signs of Capgras syndrome (CS), one control DLB patient without CS, and healthy participants. Compared with the others, the CS sufferer was impaired on a fame recognition task using faces (64 faces, rated according to whether they had been seen in the media) and was also less certain of his answers (confidence rated as guessing, knowing but uncertain, or knowing with certainty). He was likewise impaired on a fame recognition test using voices (48 voices; participants decided which of two voices belonged to a famous person), again with less confidence than the healthy controls. In contrast, when the fame recognition test used names as cues, the CS sufferer correctly recognized a similar proportion of famous names to the control participants, and both groups showed similar levels of confidence. However, the CS sufferer was less accurate in judging the occupations of correctly identified famous faces and voices. He was also impaired in judging facial expressions: when the subjects were tested on recognition of the intensity of fearful facial expressions (low, moderate and high), the controls showed a significant linear trend, with fear ratings positively related to the intensity of the fearful expression, but the CS sufferer did not, which Fiacconi and colleagues interpreted as a decoupling between stimulus intensity and perceived fear intensity. Clinical magnetic resonance imaging scans showed that the CS sufferer had medial prefrontal cortex atrophy.
From their observations, Fiacconi and colleagues concluded that both overt and covert facial recognition are affected in Capgras syndrome. The syndrome has been attributed in the literature to covert facial recognition impairments arising from deficits in the automatic affective response to the overt visual pathway, so that the subject finds the face unfamiliar. This affective signal requires activity in the limbic system, including the amygdala, and possibly frontal regions, including the insula and anterior cingulate cortex. The medial prefrontal cortex atrophy reported in the CS sufferer supports the authors' conclusions, since this area is involved in affective judgement. Reports in the literature show that CS sufferers also have right prefrontal cortex damage, which suggests executive control deficits. This is consistent with the observation that the CS sufferer was impaired in memory retrieval and in evaluating the information needed to establish the occupations of successfully identified names and faces. It also fits the defining characteristic of the syndrome: that sufferers believe a loved one has been replaced by an imposter.
Therefore, Fiacconi and colleagues conclude in their article that not only is covert facial recognition affected in Capgras syndrome, but so are the sensory and perceptual processes that allow the individual to recognize a face.

This study by Fiacconi and colleagues is interesting because it links a specific type of visual processing directly to the emotional system. The recognition of faces, and particularly of facial expressions, is an important part of our day-to-day lives and guides our behavior and thought processes. We experience unease when our expectations of what people should look like are not met, e.g. the best friend who has had cosmetic surgery, or an individual who does not express the emotions we expect in a particular situation. The same applies to how we expect computer-generated graphics and robots designed to look human to react facially.
If we look at the mechanism hypothesized for facial recognition by Riddoch and Humphreys (2001), the emotional system probably interacts with incoming information higher up the visual pathway hierarchy, at the perceptual stage of structural description. This stage of 'factual' input of the visual information comes after the binding of features and multiple-shape segmentation into a coherently firing neuronal group representing the facial shape and features (i.e. the individual knows the object is a face). This is likely to occur in the fusiform gyrus, which has been shown to be required for facial recognition. Previous experience (i.e. the face has been seen before) means that the firing is strengthened and the face is perceived as familiar, and hence the emotional response is also positive (unless, of course, the face evokes a fear memory). The firing of the neuronal group representing the face causes other associated groups to fire as well, and in this way information about the individual (including his name and occupation, if available) is retrieved via the Person Identity Nodes (PINs).
The visual input of expressions is likely to relate to movement of the facial features. Faces are observed holistically (shape and colour at the retinal level) but perceived in parts (Farah et al.), which may be explained by Neisser's visual search theory, according to which complicated structures are broken down into parts to aid detection. It could also mean that after identification we do not need to evaluate some incoming visual information in any depth, e.g. facial shape, since it does not alter, whereas other information is deemed more important, e.g. the person's expressions. In the case of faces and expressions, information about the movement of features may be detected early (by bipolar cells and at the feature-binding stage, or perhaps even at the stage of view normalization). Recognition of the expression probably occurs at the same stage as the 'factual' visual information, i.e. structural description, along with the emotional system response. This would explain why the face is observed holistically yet perceived in parts. The evaluation of expression occurs after the initial recognition and continues throughout contact, provided the individual is being observed.
The work carried out by Fiacconi and colleagues extends these connections between facial features and expressions to auditory input in the form of names and voices. In normal individuals, the auditory input (the name or voice) is likely received and recognized by retrieving stored memories of past encounters, and this links directly to Riddoch and Humphreys' (2001) semantic system, which represents all information about the name or voice owner (including what the person looks like or his occupation, for example). Facial recognition is not a foregone conclusion from the auditory information alone, but both are also linked to the emotional system.
Fiacconi and colleagues studied the connection between facial recognition, names and voices in a particular condition, Capgras syndrome. Sufferers of this condition believe that loved ones are imposters even when presented with copious evidence to the contrary. Fiacconi and colleagues' study showed that the CS sufferer had impairments in facial recognition of famous people and in voice and expression recognition, but normal levels of name recognition. This supports the view that sufferers have damaged connectivity between the factual visual input and the emotional system, probably at the level of the structural description, since familiar faces are not recognized but sufferers do at least recognize that they are looking at a face. The failure to recognize expressions also supports a deficit at this level of Riddoch and Humphreys' model. Name recognition remains possible because the auditory input directly fires the neuronal areas at the PIN level, while the lack of voice recognition suggests that different areas are required for sound perception than for language perception.
Therefore, this article is interesting because it establishes links between visual and auditory information in facial, voice and name recognition, and indicates where expressions are processed. This information may become important as our contact with computer-generated graphics and robots increases in our everyday lives.

Since we're talking about the topic ……………….

….can we assume that neuroimages of brain activity linked to voice and name recognition will differ if individuals who evoke strong emotive responses are used as test sources?
….is it possible that CS sufferers have problems empathizing if the stimuli are visual or auditory, but not if they are touch-related, e.g. in the performance of the experiment by Banissy and Ward (2007)? What would the results be of a test using emoticons instead of real faces?
….would antidepressants that alter medial prefrontal cortex executive activity have any effect on the ability of a CS sufferer to recognize a loved one?

