The N170 is a component of the event-related potential (ERP) that reflects the neural processing of faces, familiar objects, or words.[1] The N170 is also modulated by prediction-error processes.[2][3]

When potentials evoked by images of faces are compared with those elicited by other visual stimuli, the former show increased negativity 130–200 ms after stimulus presentation. This response is maximal over occipito-temporal electrode sites, consistent with a source in the fusiform and inferior temporal gyri, as confirmed by electrocorticography.[4][5] The N170 generally displays right-hemisphere lateralization and has been linked to the structural encoding of faces; it is therefore considered to be primarily face-sensitive.[6][7] A study employing transcranial magnetic stimulation combined with EEG found that the N170 can be modulated by top-down influences from the prefrontal cortex.[8]

History


The N170 was first described by Shlomo Bentin and colleagues in 1996,[9] who measured ERPs from participants viewing faces and other objects. They found that human faces and face parts (such as eyes) elicited different responses than other stimuli, including animal faces, body parts, and cars.

Earlier work performed by Botzel and Grusser and first reported in 1989[10] also attempted to find a component of the ERP that corresponded to the processing of human faces. They showed observers line drawings (in one experiment) and black-and-white photographs (in two additional experiments) of faces, trees, and chairs. They found that, compared to the other stimulus classes, faces elicited a larger positive component approximately 150 ms after onset, which was maximal at central electrode sites (at the top of the head). The topography of this effect and lack of lateralization led to the conclusion that this face-specific potential did not arise in face-selective areas in the occipital-temporal region, but instead in the limbic system. Subsequent work referred to this component as the vertex positive potential (VPP).[11]

In an attempt to reconcile these two apparently conflicting results, Joyce and Rossion[12] recorded ERPs from 53 scalp electrodes while participants viewed faces and other visual stimuli. After recording, they re-referenced the data to several commonly used reference sites, including the nose and the mastoid process. They found that the N170 and the VPP can be accounted for by the same dipole configuration arising from the same neural generators, and therefore reflect the same underlying process.
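Re-referencing is a simple linear operation on the recorded voltages: the signal at the chosen reference site (or the mean of several sites) is subtracted from every channel. A minimal sketch, assuming a NumPy array of channels × samples (the function and channel names here are illustrative, not from the study):

```python
import numpy as np

def rereference(data, channels, ref_channels):
    """Re-reference multichannel EEG data.

    data: (n_channels, n_samples) array of voltages.
    ref_channels: names of the channel(s) forming the new reference;
    their mean is subtracted from every channel at each time point.
    """
    idx = [channels.index(ch) for ch in ref_channels]
    ref = data[idx].mean(axis=0)          # (n_samples,) reference trace
    return data - ref                     # broadcast subtraction per channel

# Toy example: one posterior channel re-referenced to two "mastoid" channels
channels = ["PO8", "M1", "M2"]
data = np.array([[1.0, 2.0],
                 [0.5, 0.5],
                 [1.5, 1.5]])
rereferenced = rereference(data, channels, ["M1", "M2"])
```

Because the same trace is subtracted from every channel, the choice of reference changes the apparent scalp topography of a component (which is why the N170 and VPP can look like different effects) without changing the underlying sources.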

Functional sensitivity


Three of the most studied attributes of the N170 are its modulation by face inversion, facial race, and emotional expression.

It has been established that inverted faces (i.e., those presented upside-down) are more difficult to perceive[13] (the Thatcher effect is a good illustration of this). In their landmark study, Bentin et al. found that inverted faces increased the latency of the N170 component.[9] Jacques and colleagues further studied the time course of the face inversion effect (FIE) using an adaptation paradigm.[14] When the same stimulus is presented multiple times, the neuronal response decreases over time; when a different stimulus is presented, the response recovers. The conditions under which a "release from adaptation" occurs therefore provide a way to measure stimulus similarity. In their experiment, Jacques et al. found that the release from adaptation was smaller and occurred 30 ms later for inverted faces, indicating that the neuronal population encoding face identity requires additional processing time to detect the identity of inverted faces.
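The logic of the adaptation paradigm can be sketched with a toy repetition-suppression model (an illustrative assumption, not the authors' analysis): repeating a stimulus attenuates the response, and a perceived change restores it, so the size of that recovery indexes how different the new stimulus looks to the measured population.

```python
import numpy as np

def simulate_response(stimuli, base=1.0, suppression=0.4):
    """Toy repetition-suppression model: each immediate repetition of a
    stimulus attenuates the response by a fixed factor; a changed stimulus
    releases the adaptation and restores the baseline response."""
    responses, prev, reps = [], None, 0
    for s in stimuli:
        reps = reps + 1 if s == prev else 0
        responses.append(base * (1 - suppression) ** reps)
        prev = s
    return responses

# Response decays over repeats of "A" and recovers when "B" appears
resp = simulate_response(["A", "A", "A", "B"])
```

In this framing, a smaller release from adaptation for inverted faces means the population treats two different inverted identities as more similar than the same identities upright.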

In an experiment examining the effects of race on the N170's amplitude, an "other-race effect" was found in conjunction with face inversion. Vizioli and colleagues examined face recognition impairment while subjects processed same-race (SR) or other-race (OR) pictures.[15] The team designed an N170 experiment on the premise that visual expertise plays a critical role in inversion, hypothesizing that viewers' greater expertise with SR faces (holistic processing) should elicit a stronger FIE than OR stimuli. The authors recorded EEG from Western Caucasian and East Asian subjects (two separate groups) who viewed pictures of Western Caucasian, East Asian, and African American faces in upright and inverted orientations. All facial stimuli were cropped to remove external features (e.g., hair, beards, hats). Both groups displayed a later, larger-amplitude N170 (over the right hemisphere) for inverted than for upright SR faces, but showed no inversion effect for OR or African American stimuli. Moreover, neither group showed race effects on the peak amplitude of the N170 for upright faces, and there were no significant latency differences among the stimulus races; face inversion, however, increased the N170's amplitude and delayed its onset. The authors conclude that subjects' lack of experience with inverted faces makes such stimuli more difficult to process than pictures shown in their canonical orientation, regardless of the race of the stimulus.

Besides modulation by inversion and race, emotional expressions have also been a focus of N170 face research. In an experiment conducted by Righart and de Gelder, ERP results showed that the early stages of face processing may be affected by emotional scenes when subjects categorize fearful and happy facial expressions.[16] In this paradigm, subjects viewed color pictures of happy or fearful faces centrally overlaid on pictures of natural scenes. To control for low-level features, such as color and other elements that could carry meaning, all the scene pictures were scrambled by randomizing the positions of pixels across the image. The results showed emotion effects on the N170: amplitude was larger (more negative) for faces appearing in a fearful context than for faces in happy or neutral scenes. In particular, N170 amplitudes over left occipito-temporal sites were markedly increased for intact fearful faces appearing in a fearful scene, but less so when a fearful face was presented in a happy or neutral scene. Similar results occurred for intact happy faces, but the amplitudes were not as high as those related to fearful scenes or expressions.[17] Righart and de Gelder conclude that information from task-irrelevant scenes is rapidly combined with information from facial expressions, and that subjects use context information at an early stage of processing when they need to discriminate or categorize facial expressions.

Results from a study conducted by Ghuman and colleagues using direct neural recordings from the fusiform face area using electrocorticography showed that while the N170 displays a very strong response to faces when compared to other visual images, the N170 is not sensitive to the identity of the face.[4] Instead, they showed that which face a person is viewing can be decoded from the activity between 250–500 ms, consistent with the hypothesis that identity processing begins with the N250.[18] These results suggest that the N170 is important for gist-level processing of faces and face detection, processes which may set the stage for later face individuation.

Generators


Given the ease and rapidity with which humans can recognize faces, a great deal of neuroscientific research has endeavored to understand how and where the brain processes them. Early research on prosopagnosia, or "face blindness", found that damage to the occipito-temporal region led to impaired, or even completely abolished, face recognition. Convergent evidence for the importance of this region in face processing came through the use of fMRI, which found that a region of the fusiform gyrus, the "fusiform face area", responds selectively to images of faces.

Intracranial recordings in humans using electrocorticography provide very strong evidence that the fusiform face area is one of the generators of the N170,[4][5] though other regions of the face processing network may also contribute to the N170.

One investigation of the N170[19] used ERP source-localization techniques to estimate the location of its neural generator, concluding that the N170 arose from the posterior superior temporal sulcus. However, such techniques are fraught with potential sources of error, and there is disagreement about the validity of inferences drawn from such findings.[20]

Faces or interstimulus variance


In 2007, Guillaume Thierry and colleagues[21] presented evidence that called into question the face-specificity of the N170. Most earlier experiments found an N170 when the response to frontal views of faces was compared to those of other objects that could appear in more variable poses and configurations. In their study, they introduced a new factor: stimuli could be faces or non-faces, and either class could have high or low similarity. Similarity was measured by calculating the correlation between pixel values in pairs of same-category stimuli. When ERPs were compared for these conditions, they found a typical N170 effect in the low-similarity non-face vs. high-similarity face comparison. However, high-similarity non-faces showed a significant N170, while low-similarity faces did not. These results led the authors to conclude that the N170 is actually a measure of stimulus similarity, and not face processing per se.
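The similarity index described above is a pixel-wise Pearson correlation averaged over all pairs of same-category images. A minimal sketch of such a computation (the exact procedure of Thierry et al. may differ; this function and its inputs are illustrative):

```python
import numpy as np

def interstimulus_similarity(images):
    """Mean pairwise Pearson correlation of pixel values across a set of
    same-category images; higher values mean more homogeneous stimuli.

    images: list of equally sized 2-D grayscale arrays.
    """
    flat = [img.ravel().astype(float) for img in images]
    corrs = []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            corrs.append(np.corrcoef(flat[i], flat[j])[0, 1])
    return float(np.mean(corrs))

# Identical images correlate perfectly; unrelated noise images do not
rng = np.random.default_rng(0)
base = rng.random((32, 32))
high = [base, base.copy(), base.copy()]          # high-similarity set
low = [rng.random((32, 32)) for _ in range(3)]   # low-similarity set
```

On this measure, frontal face photographs form an unusually homogeneous category, which is what motivated controlling for it when testing face selectivity.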

In response to this, Rossion and Jacques[7] measured similarity as above for several object categories used in a previous study of the N170. They found that faces elicited a larger N170 than other classes of objects that had similar or higher similarity values, such as houses, cars, and shoes. While it remains uncertain why Thierry et al. observed an effect of similarity on the N170, Rossion and Jacques speculate that lower similarity leads to more variance in the latency of the response. Since ERP components are measured by averaging the results from many individual trials, high latency variance effectively “smears” the response, reducing the amplitude of the average. Rossion and Jacques also offer criticism of the methodology used by Thierry and colleagues, arguing that their failure to find a difference between high-similarity faces and high-similarity non-faces was due to a poor choice of electrode sites.
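The latency-smearing argument can be made concrete with a small simulation (an illustrative sketch, not Rossion and Jacques's analysis): average many single-trial deflections whose peak latency jitters from trial to trial, and the averaged peak shrinks as the jitter grows, even though every single-trial response has the same amplitude.

```python
import numpy as np

def gaussian_component(t, latency, width=0.02, amplitude=-1.0):
    """A single-trial ERP component modeled as a negative Gaussian deflection."""
    return amplitude * np.exp(-((t - latency) ** 2) / (2 * width ** 2))

def average_erp(t, latencies):
    """Average single-trial responses whose peak latency varies across trials."""
    trials = np.array([gaussian_component(t, lat) for lat in latencies])
    return trials.mean(axis=0)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.4, 401)  # time in seconds
low_jitter = average_erp(t, 0.17 + rng.normal(0.0, 0.005, 200))
high_jitter = average_erp(t, 0.17 + rng.normal(0.0, 0.030, 200))
# Greater latency variance "smears" the average, shrinking its peak amplitude
```

This is why, under the smearing account, a low-similarity stimulus set can show a reduced averaged N170 without any reduction in the single-trial response.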


References

  1. ^ Rossion, Bruno; Joyce, Carrie A; Cottrell, Garrison W; Tarr, Michael J (November 2003). "Early lateralization and orientation tuning for face, word, and object processing in the visual cortex". NeuroImage. 20 (3): 1609–1624. doi:10.1016/j.neuroimage.2003.07.010. PMID 14642472. S2CID 6844697.
  2. ^ Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W. (April 2017). "Temporal and spatial localization of prediction-error signals in the visual brain" (PDF). Biological Psychology. 125: 45–57. doi:10.1016/j.biopsycho.2017.02.004. PMID 28257807. S2CID 10308172.
  3. ^ Robinson, Jonathan E.; Breakspear, Michael; Young, Andrew W.; Johnston, Patrick J. (16 April 2018). "Dose-dependent modulation of the visually evoked N1/N170 by perceptual surprise: a clear demonstration of prediction-error signalling" (PDF). European Journal of Neuroscience. 52 (11): 4442–4452. doi:10.1111/ejn.13920. PMID 29602233. S2CID 4887440.
  4. ^ a b c Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark (2014-01-01). "Dynamic encoding of face information in the human fusiform gyrus". Nature Communications. 5: 5672. Bibcode:2014NatCo...5.5672G. doi:10.1038/ncomms6672. ISSN 2041-1723. PMC 4339092. PMID 25482825.
  5. ^ a b Allison, T.; Puce, A.; Spencer, D. D.; McCarthy, G. (1999-08-01). "Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli". Cerebral Cortex. 9 (5): 415–430. doi:10.1093/cercor/9.5.415. ISSN 1047-3211. PMID 10450888.
  6. ^ Eimer, Martin (2011). "The Face-Sensitive N170 Component of the Event-Related Brain Potential". In Calder, Andrew J.; Rhodes, Gillian; Johnson, Mark H.; Haxby, James V. (eds.). Oxford Handbook of Face Perception. OUP Oxford. ISBN 9780191622144.
  7. ^ a b Rossion, B. & Jacques, C. (2008). "Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170". NeuroImage. 39 (4): 1959–1979. doi:10.1016/j.neuroimage.2007.10.011. PMID 18055223. S2CID 15106925.
  8. ^ Mattavelli, G.; Rosanova, M.; Casali, A. G.; Papagno, C.; Romero Lauro L. J. (2013). "Top-down interference and cortical responsiveness in face processing: A TMS-EEG study". NeuroImage. 76 (1): 24–32. doi:10.1016/j.neuroimage.2013.03.020. PMID 23523809. S2CID 13911132.
  9. ^ a b Bentin, S.; McCarthy, G.; Perez, E.; Puce, A.; Allison, T. (1996). "Electrophysiological studies of face perception in humans". Journal of Cognitive Neuroscience. 8 (6): 551–565. doi:10.1162/jocn.1996.8.6.551. PMC 2927138. PMID 20740065.
  10. ^ Botzel, K.; Grusser, O. J. (1989). "Electric brain potentials evoked by pictures of faces and non-faces: a search for face-specific EEG-potentials". Experimental Brain Research. 77 (2): 349–360. doi:10.1007/BF00274992. PMID 2792281. S2CID 5373013.
  11. ^ Jeffreys, D. A. (1989). "A face-responsive potential recorded from the human scalp". Experimental Brain Research. 78 (1): 193–202. doi:10.1007/BF00230699. PMID 2591512. S2CID 12084552.
  12. ^ Joyce, C.A. & Rossion, B. (2005). "The face-sensitive N170 and VPP components manifest the same brain processes: The effect of reference electrode site". Clinical Neurophysiology. 116 (11): 2613–2631. doi:10.1016/j.clinph.2005.07.005. PMID 16214404. S2CID 34009900.
  13. ^ Yin, R. K. (1969). "Looking at upside-down faces". Journal of Experimental Psychology. 81 (1): 141–145. doi:10.1037/h0027474.
  14. ^ Jacques, C.; d'Arripe, O.; Rossion, B. (2007). "The time course of the face inversion effect during individual face discrimination". Journal of Vision. 7 (3): 1–9. doi:10.1167/7.8.3. PMID 17685810.
  15. ^ Vizioli, L.; Foreman, K.; Rousselet, G. A.; Caldara, R. (2010). "Inverting faces elicits sensitivity to race on the N170 component: a cross-cultural study". Journal of Vision. 10 (1): 1–23. doi:10.1167/10.1.15. PMID 20143908.
  16. ^ Righart, R. & de Gelder, B. (2008). "Rapid influence of emotional scenes on encoding of facial expressions: an ERP study". Social Cognitive and Affective Neuroscience. 3 (3): 270–8. doi:10.1093/scan/nsn021. PMC 2566764. PMID 19015119.
  17. ^ Blau, V. C.; Maurer, U.; Tottenham, N.; McCandliss, B. D. (2007). "The face-specific N170 component is modulated by emotional facial expression". Behavioral and Brain Functions. 3: 7. doi:10.1186/1744-9081-3-7. PMC 1794418. PMID 17244356.
  18. ^ Tanaka, James W.; Curran, Tim; Porterfield, Albert L.; Collins, Daniel (2006-09-01). "Activation of preexisting and acquired face representations: the N250 event-related potential as an index of face familiarity". Journal of Cognitive Neuroscience. 18 (9): 1488–1497. CiteSeerX 10.1.1.543.8563. doi:10.1162/jocn.2006.18.9.1488. ISSN 0898-929X. PMID 16989550. S2CID 9793157.
  19. ^ Itier, R. J.; Taylor, M. J. (2004). "Source analysis of the N170 to faces and objects". NeuroReport. 15 (8): 1261–1265. doi:10.1097/01.wnr.0000127827.73576.d8. PMID 15167545. S2CID 46705488.
  20. ^ Luck, S. J. (2005). "ERP Localization". An Introduction to the Event-Related Potential Technique. Boston: MIT Press. pp. 267–301.
  21. ^ Thierry, G.; Martin, C. D.; Downing, P.; Pegna, A. J. (2007). "Controlling for interstimulus perceptual variance abolishes N170 face selectivity". Nature Neuroscience. 10 (7): 505–511. doi:10.1038/nn1864. PMID 17334361. S2CID 21008862.