Wikipedia:United States Education Program/Courses/Psychology of Language (Kyle Chambers)/Summaries

Please add your 500 word summaries in the appropriate section below. Include the citation information for the article. Each student should summarize a different article, so once you have chosen an article, I would recommend adding the citation with your name (type 4 tildes). That way others will not choose the same article as you. You can then come back later and add your summary.

Speech Perception

The Development of Phonemic Categorization in Children Aged 6-12 by Valerie Hazan and Sarah Barrett Lkientzle (talk) 15:38, 29 February 2012 (UTC)

In 2000, Hazan and Barrett sought evidence for the development of phonemic categorization in children aged 6 to 12 and compared their performance to that of adult subjects. They wanted to test whether categorization is more consistent with dynamic or static cues, as well as how this changes when several cues are available versus only limited cues to signal the phonemic differences. For example, how well can a child distinguish /d/-/g/ and /s/-/z/ depending on the cues given, and how does this compare to how an adult does the same task? The study was important because previous research had yielded contradictory results for the age at which children’s perception of phonemic categories reaches an adult level, and the criteria and methods for testing this were inconsistent. It is also important because it provides evidence that phoneme boundary sharpening continues to develop well after the age of 12, into adulthood.

Previous research has repeatedly shown a developmental trend: as children grow older, they categorize phonemes into their respective categories more consistently. The age at which phonemic categorization becomes adult-like is still debated, though. Some studies have found no significant differences between 7-year-olds and adults in their ability to categorize (Sussman & Carney, 1989), while other studies have found the opposite result (significant differences between age groups) with virtually the same criteria (Flege & Eefting, 1986). The present study by Hazan and Barrett sought to re-evaluate these previous findings in a tightly controlled manner and to see whether 12-year-olds (the oldest of their participant pool, next to the adult control group) were performing at the level of adults, which would signify the end of this developmental growth.

The test was run with 84 child subjects, aged 6-12, and 13 adult subjects who served as a control group. Each subject was run separately and completed a two-alternative forced-choice identification procedure using synthesized phoneme sounds. These sounds were presented on a continuum running from one sound (/d/) to another (/g/). When a participant had identified a phoneme correctly on at least 75% of presentations, the next sound on the continuum was presented. This outline was adapted for four test conditions, each testing a different phoneme continuum (such as /s/-/z/) presented with either a “single cue” or a “combined cue.”
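
To make the idea of “boundary sharpening” concrete, here is a minimal sketch of how an identification function along a /d/-/g/ continuum can be summarized. This is not the authors' analysis; the continuum steps, the response proportions, and the choice of a logistic fit are assumptions made purely for illustration.

```python
# Illustrative sketch (not the authors' actual analysis): fitting a logistic
# identification function to two-alternative forced-choice responses along a
# synthetic /d/-/g/ continuum. The slope of the fitted function indexes how
# sharp a listener's phoneme boundary is; steeper slopes indicate more
# adult-like, consistent categorization. All data below are made up.
import numpy as np
from scipy.optimize import curve_fit

def logistic(step, boundary, slope):
    """Probability of a /g/ response at a given continuum step."""
    return 1.0 / (1.0 + np.exp(-slope * (step - boundary)))

# Continuum steps 1..9 and hypothetical proportions of /g/ responses
steps = np.arange(1, 10)
child_p_g = np.array([0.05, 0.10, 0.20, 0.35, 0.55, 0.65, 0.80, 0.90, 0.95])
adult_p_g = np.array([0.00, 0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98, 1.00])

for label, p_g in [("child", child_p_g), ("adult", adult_p_g)]:
    (boundary, slope), _ = curve_fit(logistic, steps, p_g, p0=[5.0, 1.0])
    print(f"{label}: boundary at step {boundary:.2f}, slope {slope:.2f}")
# A steeper adult slope would illustrate the "boundary sharpening" the study reports.
```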

The dependent variable was the category participants chose for each sound they heard. The independent variables were the different conditions: which phoneme continuum was used and whether it was a single-cue or combined-cue presentation. The combined-cue condition differed from a typical presentation of the sounds in that the contrasting cues were varied in harmony with one another.

This study found, as the researchers hypothesized, that children continue to develop their ability to categorize phonemes as they age, and that this development continues even after the age of 12. The researchers also controlled for extraneous variables such as attention deficits, language barriers, and hearing deficits in the children. Previous research on young children has shown that humans are proficient at identifying categories by the age of three, but the present study indicates that this ability keeps growing with age and becomes more competent in ambiguous-cue situations. The study therefore concludes that there is no reason to presume a child is as competent as an adult at making these distinctions by the age of 12, as some previous research had suggested.

This research is important because it indicates that although we seem to be born with an innate sense of how to process phonemes, and are quite good at it by an early age, we should not assume that a person's environment plays no role in developing even more advanced perceptual capabilities. It seems that we can “practice” this distinction and get better at it by being exposed to more instances that force us to figure out how to categorize sounds in order to make sense of them.

---

The Role of Audition in Infant Babbling by D. Kimbrough Oller and Rebecca E. Eilers Amf14 (talk) 16:37, 21 February 2012 (UTC)

A number of questions have been raised about the importance of experience in learning to talk. It is possible that infants are born with built-in speech capabilities, but it is also possible that auditory experience is necessary for learning to talk. Oller and Eilers proposed that if deaf infants babble in the same typical patterns as hearing infants, this would be evidence that humans are born with innate abilities to speak. In order to test this proposal, they needed to study what types of speech emerge at each stage of the first year of an infant's life. By the canonical stage (7-10 months), infants generally utter sounds characterized by repetitions of certain sequences such as dadada or baba. Research has shown that deaf infants reach this stage later in life than hearing infants.

It has been moderately challenging to study deaf infants in the past because it is uncommon to diagnose hearing disabilities within the first year of a child's life. It is also difficult to find deaf infants with no other impairments who have had severely impaired hearing since birth and have been diagnosed within the first year of their lives.

In this experiment, 30 infants were analyzed, 9 of them severely or profoundly hearing impaired. Each infant was observed in order to determine at what age they reached the canonical stage. The two groups were designated based on whether the infants were deaf or not. In both groups, the infants' babbling sequences were tape recorded in a quiet room with only the parent and the experimenter present. The number of babbling sequences was counted by trained listeners for each infant. The listeners based their counts on four main criteria, including whether the infant used an identifiable vowel and consonant, the duration of a syllable, and the use of a normal pitch range. Vegetative and involuntary sounds such as coughs and growls were not counted. The infants were prompted by their parents to vocalize while in the room. If they did not comply, or if the behavior was considered abnormal in comparison to their behavior at home, the session was rescheduled.
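
As a rough illustration of the kind of counting this procedure involves, the sketch below computes a canonical-utterance ratio per recording session and estimates an onset age. It is not Oller and Eilers' coding scheme; the session ages, counts, and the 0.2 criterion are invented for the example.

```python
# A minimal illustrative sketch (not the authors' coding scheme): counting
# canonical utterances from listener codings and estimating the age at which
# an infant reaches the canonical stage. Sessions, ages, and counts are
# hypothetical.
from dataclasses import dataclass

@dataclass
class Session:
    age_months: float
    canonical: int        # utterances judged canonical (e.g., "baba", "dada")
    total_utterances: int

def canonical_onset(sessions, min_ratio=0.2):
    """Return the age of the first session whose canonical ratio meets the
    (hypothetical) criterion, or None if the stage was never reached."""
    for s in sorted(sessions, key=lambda s: s.age_months):
        if s.total_utterances and s.canonical / s.total_utterances >= min_ratio:
            return s.age_months
    return None

hearing_infant = [Session(6, 2, 40), Session(8, 15, 50), Session(10, 30, 60)]
deaf_infant = [Session(8, 0, 35), Session(10, 1, 40), Session(14, 12, 50)]
print(canonical_onset(hearing_infant))  # 8 months in this made-up example
print(canonical_onset(deaf_infant))     # 14 months
```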

Results showed that normal-hearing infants reached the canonical stage of speech by 7-10 months. Deaf infants, on the other hand, did not reach this stage until after 10 months. When both groups were analyzed at the same age, none of the deaf infants produced babbling that qualified as canonical. The hearing subjects produced approximately 59 canonical utterances per infant, compared to the deaf subjects, who babbled approximately 50 utterances, but 5-6 months later than the hearing subjects did.

Overall, hearing-impaired infants show significant delays in reaching the canonical stage of language development. Oller and Eilers concluded this was due to their inability to hear auditory speech. There is evidence to support the idea that hearing aids can help infants reach babbling stages earlier, while completely deaf infants may never reach the canonical stage. Within the experiment, both groups of babies showed similar patterns of growls, squeals, and whispers at the precanonical stage, but once the infants reached an age where language would normally develop further, audition and modeling played a far more important role. This leaves deaf children significantly behind in speech development.

Oller, D. K., & Eilers, R. E. (1988). The role of audition in infant babbling. Child Development, 59(2), 441-449. doi:10.2307/113023

Amf14 (talk) 17:21, 28 February 2012 (UTC)

---

The Impact of Developmental Speech and Language Impairments on the Acquisition of Literacy Skills by Melanie Schuele

Previous studies have wrestled with the task of identifying speech/language impairments in children and determining the means by which they can be remedied. Language impairments are often precursors to lifelong communication difficulties as well as academic struggles. Hence, researchers past and present have focused on understanding speech/language impairments and finding solutions for children and adults alike. Schuele (2004) provides a review of previous studies that focus on differentiating and evaluating developmental speech impairments.

Individuals struggling with speech/language impairments are often referred to as language delayed, language disordered, language impaired, and/or language disabled. However, the review article defines and builds on three key types: speech production impairments, oral language impairments, and combined speech production and oral language impairments. Furthermore, a distinction is made between two developmental speech impairments: articulation disorders and phonological disorders. Articulation disorders have a motoric basis that results in difficulty pronouncing certain speech sounds. For example, a child may substitute /w/ for /r/, so that “rabbit” sounds like “wabbit.” Phonological disorder (PD) is a cognitive-linguistic disorder that causes difficulty with multiple speech sounds and is detrimental to overall speech intelligibility.

Researchers distinguish between children with PD alone and children with PD + Language who are considered disabled based on their cognitive-linguistic abilities. In one study testing for reading disabilities, only 4% of the PD group showed a disability in word reading and 4% for comprehension. In contrast, within the PD + Language group 46% were classified as disordered in word reading and 25% were classified as disordered in reading comprehension.

A second study focused specifically on the differences between PD alone and PD + Language. Children between the ages of 4 and 6 were assessed and then evaluated again upon their entry into third and fourth grade. The assessments revealed that PD + Language children had more severe speech deficits, lower language scores, fewer cognitive-linguistic resources, and a family history of speech/language/learning disabilities compared to PD-alone children.

These studies highlight the importance of understanding and addressing speech/language difficulties in children. Children who struggle with a language condition, especially PD + Language, are at a very high risk for language impairment throughout childhood, adolescence, and potentially adulthood. Although this article did not focus on treatment, obstacles for future research were outlined. The challenge of testing preschoolers and early school-aged children for language impairments stems from a lack of reliable and valid materials that can measure reading abilities and phonological awareness in this population. In addition, children with language impairments spend more time trying to learn the basics of communication while their peers blaze ahead. The lack of cognitive-linguistic resources available to devote to other tasks needs to be considered when evaluating the efficacy of treatments.

Schuele, M. C. (2004). The impact of developmental speech and language impairments on the acquisition of literacy skills. Mental Retardation and Developmental Disabilities, 10, 176-183. Katelyn Warburton (talk) 20:52, 28 February 2012 (UTC)

---

Longitudinal Infant Speech Perception in Young Cochlear Implant Users

Much research has been done regarding speech perception in infants with normal hearing, especially with regard to phoneme discrimination in the first year of life. It has been shown that infants with normal hearing have surprisingly sophisticated systems of speech perception from the onset of life. This ability plays a critical part in language development. Building on this fundamental picture of how speech perception typically develops, Kristin Uhler and her colleagues set out to determine what the course of development would be for a child facing developmental challenges.

The present study is a case study exploring the development of speech perception in infants with hearing impairments who have received cochlear implants to aid their linguistic development. Specifically, the study aims to explore how speech perception begins in children with new cochlear implants, whether they are able to make discriminations in speech patterns, and how their development compares to that of a child with normal hearing. This research is of great importance because if children with cochlear implants can perceive speech in the same way as normal-hearing children, they will be able to interact in a speaking world.

This study focused on case studies of seven children with normal hearing and three children with cochlear implants. Each child underwent speech perception testing in which they were asked to discriminate between two contrasting sounds. The number of sounds played, as well as their difficulty, was manipulated by the experimenters, who ultimately measured the number of head turns the child made in response to the sounds played in the room. At the start of the experiment, each child was placed on their caretaker's lap. After hearing several initial, simple sounds, the children lost interest in the source of the sound. The child was then played slightly differing sounds and was conditioned to turn their head when they heard a difference. All testing took place in a double-walled, sound-treated room.

The results of these case studies revealed a great deal about speech perception in children with cochlear implants. In the first case study, the child showed no evidence of perceiving any sounds in his environment prior to receiving a cochlear implant. Once the implant was activated, however, he developed speech perception, with head-turn accuracy slightly below that of a child with normal hearing. In the second case study, the child with the cochlear implant had even more promising success. After implantation, he was able to discriminate many of the five core phoneme contrasts that each normal-hearing control child could discriminate. This child's speech perception was nearly normalized with the use of the cochlear implant, except for the /pa/-/ka/ distinction. The final case study showed complete normalization of speech perception with the use of a cochlear implant. The study also suggested that in children with cochlear implants and children with normal hearing alike, sensitivity to vowels and voice onset time emerges in development before the ability to discriminate place of articulation. These findings supported the researchers' predictions.

Broader implications of this research concern the importance of linguistic development and phoneme discrimination in early infancy. These findings suggest that children with hearing impairments may be able to participate in this crucial development.

Uhler, K., Yoshinaga-Itano, C., Gabbard, S., Rothpletz, A. M., & Jenkins, H. (2011). Longitudinal infant speech perception in young cochlear implant users. Journal Of The American Academy Of Audiology, 22(3), 129-142. doi:10.3766/jaaa.22.3.2 Kfinsand (talk) 02:13, 29 February 2012 (UTC)

---

The Role of Talker-Specific Information in Word Segmentation by Infants by Derek M. Houston and Peter W. Jusczyk

When infants are introduced to human speech, they typically hear the majority of words in the form of sentences and paragraphs rather than as single words. In fact, previous research found that only 7% of the speech heard by infants consists of isolated words. Although research shows that infants can identify single words produced by different speakers, Houston and Jusczyk aimed to find out whether infants could recognize these same similarities in the context of fluent speech.

For the initial study, 36 English-learning 7.5-month-olds were presented with single words and full passages. The passages consisted of six sentences for each of four specific words (cup, dog, feet, bike). The participants were split into two groups: one heard Female Talker 1 first followed by Female Talker 2, and the other group heard the same speakers in the opposite order. Throughout the procedure, the infants’ head turn preference was measured.

The infants turned their heads for longer toward the familiar words presented by the second speaker, suggesting that infants could generalize these learned words across different speakers.
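
The logic of the head-turn preference measure can be illustrated with a small, hypothetical comparison of orientation times; the numbers below are made up, and the paired t-test is only one plausible way to summarize such data, not necessarily the analysis Houston and Jusczyk used.

```python
# Illustrative only (hypothetical numbers, not the study's data): the head-turn
# preference procedure compares how long infants orient toward passages
# containing familiarized words versus unfamiliar words. A paired t-test over
# per-infant mean orientation times is one common way to test the difference.
from scipy import stats

familiar = [8.2, 7.5, 9.1, 6.8, 7.9, 8.4]    # seconds, per infant (made up)
unfamiliar = [6.9, 6.7, 7.8, 6.1, 7.0, 7.2]

t, p = stats.ttest_rel(familiar, unfamiliar)
print(f"t = {t:.2f}, p = {p:.3f}")  # longer times to familiar words suggest generalization
```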

The second experiment also tested generalization of words across talkers; however, the second speaker was of the opposite sex. In contrast with the initial study, the results showed no difference in head-turn preference, indicating difficulty generalizing the words between speakers of different genders. Experiment three was designed to mirror the methods and findings of the initial study by using two male speakers instead of two females. The results showed that infants were able to generalize across male speakers, just as they had across female speakers. In the fourth experiment, Houston and Jusczyk addressed the possibility that 10.5-month-olds might be able to generalize words across speakers of different genders. By replicating the second experiment with 10.5-month-old infants, they found that infants of this age were able to generalize between speakers of different genders.

This study and the follow-up experiments suggest that infants are able to generalize fluent speech between speakers, but only to a certain extent. While 7.5-month-olds are able to generalize between two women and between two men, they are not able to generalize fluent speech across genders. By the age of 10.5 months, however, the infants' ability to generalize has increased and they are able to generalize between speakers of different genders.

Houston, D. M., & Jusczyk, P. W. (2000). The role of talker-specific information in word segmentation by infants. Journal Of Experimental Psychology: Human Perception And Performance, 26(5), 1570-1582. doi:10.1037/0096-1523.26.5.1570 Smassaro24 (talk) 06:53, 29 February 2012 (UTC)

---

Positional effects in the Lexical Retuning of Speech Perception by Alexandra Jesse & James McQueen Lino08 (talk) 15:01, 23 February 2012 (UTC)

In the melting pot that is American culture, people speak in many languages, accents, and dialects. It can be a challenge for listeners to always understand another person because pronunciation varies across speakers. Previous research has found that listeners use numerous sources of information in order to interpret the signal. People must also use their previous knowledge of how words should sound to help them acclimate to the differences in pronunciations of others. These ideas led the researchers to postulate that the speech-perception system benefits from all learning experiences because when word-specific knowledge is gained, understanding different talkers’ pronunciations from any position within a word becomes possible. The researchers followed up on this idea and tested whether having lexical knowledge from previously learned words still allows for the understanding of words when categorical sounds are rearranged.

In Experiment 1 of this study, the researchers created lists of 20 /f/-initial and 20 /s/-initial Dutch target words based on the results of a pretest. They combined these 40 words with 60 filler words and 100 phonetically legal non-words. Ninety-eight Dutch university students with no hearing problems participated and were randomly assigned to one of two groups. The /f/ training group was presented with 20 natural /s/-initial words and 20 ambiguous /f/-initial words, and vice versa for the /s/ training group. Both groups heard all 160 filler items. Participants had to respond quickly and accurately as to whether the item they heard was a Dutch word or not. After this exposure phase, participants went through a test phase in which they listened to /f/ and /s/ fricatives as either onsets or codas of words. They had to categorize as quickly and accurately as possible whether the sound they heard was an /f/ or an /s/. The independent variable was whether participants were trained with ambiguous /f/ words or ambiguous /s/ words, and the dependent variables were the categorization responses and reaction times in the test phase. In Experiment 2, word-final sounds were swapped with syllable-initial sounds to test for a possible transfer of learning. The researchers kept the procedure the same in both experiments.
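
A hedged sketch of the key comparison in retuning studies of this kind is given below: the proportion of /f/ categorization responses is compared between the two training groups. The responses are invented and the code is illustrative only, not the authors' analysis.

```python
# A hedged sketch of the kind of comparison reported (made-up data, not the
# authors' results): lexically guided retuning is typically assessed by comparing
# the proportion of /f/ categorization responses between listeners trained with
# ambiguous /f/-words and listeners trained with ambiguous /s/-words.
responses = {
    "f_trained": ["f", "f", "s", "f", "f", "f", "s", "f"],   # hypothetical
    "s_trained": ["s", "f", "s", "s", "s", "f", "s", "s"],
}

for group, resp in responses.items():
    prop_f = resp.count("f") / len(resp)
    print(f"{group}: proportion /f/ responses = {prop_f:.2f}")
# A higher proportion of /f/ responses in the /f/-trained group would indicate
# that lexical knowledge retuned the listeners' fricative category boundary.
```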

The researchers were looking to see whether lexical knowledge derived from the target words can help listeners recategorize ambiguous speech sounds. They predicted that if participants have lexical knowledge of the target words, they should demonstrate learning and be able to transfer the recategorization of the ambiguous sounds (/f/ and /s/) across different onsets as well as between onsets and codas. The results from the first experiment failed to show lexical retuning, so the researchers could not determine whether learning transfers across syllables in different positions. The results from the second experiment showed that more [f] responses were given by the /f/ training groups than by the /s/ training groups, which demonstrates lexical retuning and its transfer across different syllable positions. The findings thus only partly matched the researchers' expectations. In contrast to their hypothesis, the researchers found no evidence that lexical retuning occurs when ambiguous speech sounds are heard in word-initial position. However, their findings did show that when sounds in different positions are matched acoustically, a person can generalize over the difference in position. The researchers concluded that retuning helps listeners recognize and understand the words of a speaker even when the speaker's pronunciation is unusual.

Jesse, A., & McQueen, J. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review. doi: 10.3758/s13423-011-0129-2

---

Influences of infant-directed speech on early word recognition by Leher Singh, Sarah Nestor, Chandni Parikh, & Ashley Yull. Misaacso (talk) 01:11, 29 February 2012 (UTC)

This study was done to gain knowledge about the influence of infant-directed speech on the long-term storage of words in a native language. The researchers wanted to know whether the style of the stimulus input influences the capacity for long-term storage and the ability to retrieve the information. It was important to discover whether infant-directed speech could influence these aspects of word recognition before vocabulary production is evident in an infant.

When adults interact with infants, their speech tends to be slower, use less sophisticated grammar, carry less content, and be produced at a higher pitch. This child-directed speech, commonly termed infant-directed speech, has also been documented in languages other than English. Previous research focused on phoneme perception, syntactic parsing, word segmentation, and boundary detection. Other research found evidence that infants can generalize to a novel talker in voice-priming tasks when both the original and a novel talker produce the test stimuli. Since past research had not addressed infants' ability to encode and retrieve words from their native language in infant-directed versus adult-directed speech, a study of these abilities was prompted.

English-exposed infants of 7.5 months of age were exposed to words produced either by an adult using infant-directed speech in the presence of an infant, or by an adult using adult-directed speech toward another adult, recorded with no infant present. The measures were the listening time to passages in which the familiarized word had been presented in infant-directed speech, the listening time to passages in which the familiarized word had been presented in adult-directed speech, and the listening time to passages containing no familiarized word.

For each condition the infant would hear the words bike, hat, tree, or pear in various sentences. As the infants sat on their caregiver's lap, a flashing light in front of the infant drew fixation; the center light was then turned off and a light on one side of the infant flashed while the speech stimulus was presented. Familiarization occurred with both infant- and adult-directed speech. The infants were tested 24 hours later to determine whether they could recognize the words from the previous day.

The study concluded that infant-directed speech is a key factor in recognizing words early in life, proposing that infants not only prefer this type of speech but also benefit from it, because it aids them in retrieving and processing words. Infant-directed speech also helps an infant generalize memory representations, assists with storing words over the long term, and extends the representation of words in the infant's mind.

The conclusions of this research suggest several directions for further inquiry. One is determining which attention-getting aspect of infant-directed speech leads to the findings observed in this experiment. Another is how words become associated with meaning for an infant, as such research has been completed for adults but little is known about how this works in infants.

Singh, L., Nestor, S., Parikh, C., & Yull, A. (2009). Influences of infant-directed speech on early word recognition. Infancy, 14(6), 654-666. doi: 10.1080/15250000903263973 Misaacso (talk) 07:15, 1 March 2012 (UTC)

---

Early phonological awareness and reading skills in children with Down syndrome by Esther Kennedy and Mark Flynn

It is commonly known that individuals with Down syndrome are fully capable of acquiring reading skills. However, much less is known about the processes that lead to the development of their literacy skills. Kennedy and Flynn look broadly at the literacy skills of the children with Down syndrome who participated in this study. They do so by picking apart the different levels of attaining literacy skills, specifically phonological awareness. The difficulty in studying this population is that the tests used with typically developing children must be adapted so that deficits in cognitive skills do not interfere with any of the areas assessed. The researchers adapted tasks to assess phonological awareness, literacy, speech production, expressive language, hearing acuity, speech perception, and auditory-visual memory.

This study took place in New Zealand and included nine children with Down syndrome. They were between the ages of five and ten, and all had at least six months' exposure to formal literacy instruction in a mainstream school. Literacy teaching in New Zealand uses a “whole language” approach and focuses on deriving meaning from the text. This means the children in this study had little to no history of phonologically based literacy instruction.

Because hearing impairment is prevalent in individuals with Down syndrome, hindering speech perception and auditory processing skills, an audiologist made sure the children could hear clearly throughout the study. To test short-term memory, the children were asked to recall unrelated pictures they had studied whose names were one, two, and three syllables long. To test speech production, the Assessment of Phonological Processing Revised (Hodson, 1986) was used to obtain a Percentage Consonants Correct score from a list of 106 single words. To test expressive language, an MLU (mean length of utterance) was calculated from 50-100 intelligible utterances. Two different methods were used to test reading. The first was the Burt Word Reading Test-New Zealand Revision (Gilmore, Croft & Reid, 1981); however, if the child was unintelligible, the researchers requested a list of words the child could consistently read accurately. They also tested letter-sound knowledge: the children were asked to identify the letter corresponding to the sound the investigator produced. The investigators divided the letters in an attempt to avoid misperceptions between letters that sound similar. They also avoided adding a vowel after voiced phonemes and lengthened them when possible, using, for example, “vvv” rather than “vuh.”

The results matched Kennedy and Flynn's predictions. Participants performed better on the tasks the longer they had been in school, and tasks that required a spoken response were more difficult to score because of speech impairments. Participants with higher phoneme awareness skills had higher reading levels. However, only one participant was able to detect rhyming. This study looked solely at reading skills based on text decoding, not at whether the participants were able to extract meaning from what they read. The study did not include a control group and had only nine participants, which are limitations.

Kennedy, E. J., & Flynn, M. C. (2003). Early phonological awareness and reading skills in children with Down syndrome. Down Syndrome Research and Practice, 8(3), 100-109. Lcannaday (talk) 00:48, 1 March 2012 (UTC)

---

Modified Spectral Tilt Affects Older, but Not Younger, Infants’ Native-Language Fricative Discrimination by Elizabeth Beach & Christine Kitamura

At birth, infants rely on basic auditory abilities to distinguish native and nonnative speech, and up until 6 months of age they prefer low-frequency infant-directed speech to adult-directed speech. They then begin to learn their native vowels and, at 9 months, consonants as well. As infants' ability to distinguish nonnative consonants decreases while their ability to distinguish native consonants improves, they are said to move from a language-general to a language-specific mode of speech perception. This led researchers Beach and Kitamura to investigate how modifying the spectral tilt of native speech affects infants' speech perception as they develop.

In this study, the ability of 6- and 9-month-old infants to discriminate the fricative consonants /f/-/s/ at unmodified, high, and low frequencies was tested. Ninety-six infants were assigned evenly to one of three conditions: unmodified normal speech, normal speech with a low-frequency emphasis, and normal speech with a high-frequency emphasis. The speech stimuli were four samples of /f/ and four of /s/. Measures of overall duration and fundamental frequency (F0) remained constant, while the spectral center of gravity and the frequency of the second formant (F2) at the vowel transition varied. Each infant was tested individually using a visual habituation procedure in which an auditory stimulus was presented whenever the infant fixated on the display. Two no-change control presentations of the habituation stimulus were given to ensure there was no spontaneous recovery. The control trials were then followed by two test trials, which alternated the test stimulus with the habituation stimulus.
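
The discrimination logic of this habituation design can be illustrated with a toy calculation: recovery of fixation on change (test) trials relative to no-change control trials is taken as evidence of discrimination. All numbers below are hypothetical, and the simple recovery score is only an illustration, not the authors' analysis.

```python
# Illustrative sketch of the logic of a visual habituation / dishabituation
# design (all numbers hypothetical). Discrimination is inferred when fixation
# times recover on change trials relative to no-change control trials.
def mean(xs):
    return sum(xs) / len(xs)

control_trials = [4.1, 3.8]    # seconds of fixation, habituation stimulus repeated
test_trials = [6.9, 6.2]       # seconds of fixation, novel fricative presented

recovery = mean(test_trials) - mean(control_trials)
print(f"recovery = {recovery:.1f} s")
if recovery > 0:
    print("longer fixation on change trials: evidence the infant detected /f/ vs /s/")
```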

Results showed that in the normal speech condition, regardless of age, infants increased their fixation durations on test trials compared with control trials, and both age groups showed evidence of discriminating /f/ from /s/. In the low-frequency condition, 6-month-old infants had longer fixation times than 9-month-old infants, though both age groups discriminated /f/-/s/. In the high-frequency condition, 6-month-old infants showed a larger increase in fixation times, and younger but not older infants were sensitive to the fricative contrast. In short, 6-month-olds can discriminate /f/-/s/ regardless of speech modification but do best in the unmodified and high-frequency conditions, whereas 9-month-olds could only discriminate /f/-/s/ in the normal or low-frequency conditions, with their best performance in the normal condition.

Based on the acoustic modes of perception first used by infants, the researchers predicted that amplifying higher frequencies would lead to increased discrimination for both age groups. The results show evidence of this in 6-month-olds but not 9-month-olds. On a linguistic basis, they predicted that 9-month-olds would only be able to discriminate /f/-/s/ in the normal speech condition, and the 9-month-olds' poorer discrimination in the modified conditions supports this.

This study will serve as a basis for future research on speech perception in infants with hearing loss and brings us closer to providing infants with hearing loss the amplification strategies that best support the development of language skills.

Beach, E., & Kitamura, C. (2011). Modified spectral tilt affects older, but not younger, infants' native-language fricative discrimination. Journal of Speech, Language, and Hearing Research, 54(2), 658-667. doi:10.1044/1092-4388(2010/08-0177) Mvanfoss (talk) 01:21, 1 March 2012 (UTC)

---

Maternal Speech to Infants in a Tonal Language: Support for Universal Prosodic Features in Motherese

Motherese, baby talk, and infant-directed speech are common terms for the distinctive voice adults use when speaking to infants. Previous research identified that infant-directed speech has unique acoustic qualities, or prosodic features; for example, a higher pitch and slower tempo are consistently associated with motherese. Furthermore, this type of speech provides benefits for infants' language development. Since these results are so pervasive across English-speaking mothers, DiAnne Grieser and Patricia Kuhl attempted to test whether this prosodic pattern occurs in other languages as well. Specifically, they wanted to test a tonal language, in which a change in pitch alters the meaning of a word. This test would help determine whether the pattern is universal.

In this experiment, there were eight monolingual women who spoke Mandarin Chinese and were mothers of an infant between six and ten weeks of age. Each woman was recorded as she spoke on the telephone to a Chinese-speaking friend and as she spoke to her infant, whom she held in her lap. The average fundamental frequency (F0), average pitch range for each sample recording, average pitch range for each phrase, average phrase duration, and average pause duration were measured for the adult-to-adult (A-A) speech and the adult-to-infant (A-I) speech.

Overall, findings illustrated that fundamental frequency and pitch range, whether measured over the sample or individual phrases, significantly increase or shift upward when Mandarin mothers speak to their infants. In other words their pitch increases. Furthermore, the pause duration and phrase duration are altered when the mothers speak to their infants. They speak slower, shorten their phrases and increase the length of their pauses in comparison to speech directed at adults.

These results indicate that Mandarin motherese is very similar to English motherese. Therefore, the prosodic patterns (increased average pitch, lengthened pauses, and shortened phrases) in maternal speech to infants are not language-specific. This is a surprising result considering that the tonal language of Mandarin Chinese relies on changes in pitch to indicate word meaning. The question then arises whether or not a developmental change in Mandarin motherese must occur when infants approach the age of language acquisition in order for them to accurately understand the differences between words.

Since these findings are fairly robust, it is important to further understand the benefit this type of speech has for infants. More specifically, research should focus on the acoustic characteristics of motherese that capture the attention of infants. Research has established that this universal speech register exists; the focus should now turn to the purpose it serves.

Grieser, D. L., & Kuhl, P. K. (1988). Maternal speech to infants in a tonal language: Support for universal prosodic features in motherese. Developmental Psychology, 14-20. TaylorDrenttel (talk) 01:28, 1 March 2012 (UTC)

---

Stuffed toys and speech perception

There is enormous variation in phoneme pronunciation among speakers of the same language, and yet most speech perception models treat these variations as irrelevancies that are filtered out. In fact, these variations are correlated with the social characteristics of the speaker and listener — you change the way you speak depending on who you're talking to. Now, recent research shows that these variations go beyond just speakers: listeners actually perceive sounds differently depending on who they come from. Jennifer Hay and Katie Drager explored how robust this phenomenon is by testing if merely exposing New Zealanders to something Australian could modify their perceptions.

Subjects heard the same sentences, with a random change in accent. The /I/ sound was modified to sound more like an Australian accent or like a New Zealand accent, and all subjects heard all variations. The only difference between the two groups was the type of stuffed animal present -- either a koala, for the Australian condition, or a kiwi, for the New Zealand condition. After hearing each sentence, participants wrote on an answer sheet if it sounded like an Australian speaker or a New Zealand speaker had read it.

When the participants listened to the sentences with a koala nearby, they tended to perceive them as sounding more like an Australian accent, especially for the intermediate sentences where the /I/ phoneme was ambiguous between the Australian and New Zealand variants. Similarly, when the kiwi was present, participants were more likely to perceive the sentences as sounding more like a New Zealand accent.

The researchers had originally been skeptical that these results could be obtained. Hay had previously performed a similar experiment, and the results from this study corroborated those earlier findings. This suggested to the researchers that invoking ideas about a particular region or social characteristic can alter the way a sentence is perceived.

Hay, J., & Drager, K. (2010). Stuffed toys and speech perception. Linguistics, 48(4), 865-892. doi:10.1515/LING.2010.027 AndFred (talk) 03:12, 1 March 2012 (UTC)

---

Infants listen for more phonetic detail in speech perception than in word-learning tasks by Christine L. Stager & Janet F. Werker Hhoff12 (talk) 04:00, 1 March 2012 (UTC)

Previous research showed that infants aged 4-6 months are able to differentiate syllables in their native language as well as in languages unfamiliar to them, but they lose this ability by 10-12 months, at which point they only differentiate variations in their own language. Prior to this study, researchers had found that 14-month-old infants were able to pair dissimilar-sounding nonsense words with objects. However, little was known about speech perception sensitivities in early word learning, such as whether infants can differentiate words that are phonetically similar, that is, words with similar speech sounds. This study is important because we already know about a reorganization that occurs when infants become able to detect only the variations in their native language, and there could be another reorganization as they go from listening to syllables to learning words.

In the first experiment, 14-month-old infants were taught word-object pairings using phonetically similar nonsense words. They were then tested on the pairings and on their ability to notice changes in the words, the objects, or both. The researchers expected that the infants would be able to learn the two similar-sounding words, which would confirm that they were using their phonetic discrimination skills. “Bih” and “dih” were assigned to two brightly colored clay objects. The object-word pairs were repeated until the infant became familiar with them, shown by a decrease in looking time. Next the infants were tested in two types of trials: the same object-word pairing and a switch in the object-word pairing. Successful discrimination would be shown by longer looking times in the switched trials than in the same trials. However, there was no significant difference, suggesting that the infants did not notice a switch in the object-word pairings with the similar-sounding words.

In the second experiment they tested infants on an easier single word-object association task, and they included 8-month-olds along with the 14-month-olds. Infants were taught that an object was called “bih” and then tested with that object called “bih” or called “dih.” The 14-month-olds still weren't able to detect the difference, but the 8-month-olds were. This suggests that for the 14-month-olds the task involves word learning, whereas for the 8-month-olds it is a sound-discrimination task.

In experiment 3 they replicated earlier work to make sure that the 14-month-olds were able to complete a single word-object association task, this time using dissimilar-sounding words. The infants were able to complete the task, which shows it was the phonetic similarity that caused their difficulty in experiment 2, not other factors.

In the fourth experiment they wanted to confirm that 14-month-old infants could still make fine phonetic discriminations, so they paired the words with a checkerboard screen that would not suggest a name. The procedure was otherwise the same as in the previous experiments, and the infants were able to differentiate “bih” from “dih.” This shows that infants fail to attend to fine phonetic detail only when they are learning new words.

When learning new words, it is possible that infants ignore fine phonetic detail in order to limit the amount of input they must process, making word learning more manageable. At an older age, when the learning is not as demanding, attention to fine detail should return, as other research has shown. This temporary decline in performance is evidence of a developmental progression in infants.


---

Phoneme Boundary Effect in Macaque Monkeys

There has been debate over which specific characteristics of language are unique to humans. A popular approach to investigating this topic is to test possible innate language processes in animals and then compare the results to those from human subjects. There has been previous research on the nature and origins of the phoneme boundary effect; many of these studies center on speech versus non-speech comparisons and on differences between human and animal subjects.

Prior to this particular study on macaque monkeys, there were five studies that compared perception of speech sounds between animal and human subjects. These studies concluded that certain nonhuman species are able to perceptually partition speech continua in the region already defined by human listeners. In addition, animal subjects are able to discriminate stimulus pairs from speech-sound continua. To add to the data that had already been gathered, Kuhl and Padden aimed to extend the research to voiced and voiceless continua in order to further investigate phonetic boundaries.

In the study, Kuhl and Padden used three macaque monkeys as their subjects. The subjects were tested on their ability to distinguish between pairs of stimuli with voiced and voiceless properties (ba-pa, da-ta, ga-ka). The subjects were restrained in chairs during testing, and the audio signals were delivered through an earphone in the right ear. A response key was located in front of the subject, along with a green and a red light that were used to train the subject to respond at the correct time and in the correct way. In addition, an automatic feeder that dispensed applesauce was used as positive reinforcement throughout the study.

During the procedure there were two types of trials, run with equal probability: trials on which the two stimuli were the same and trials on which they were different. The subject was required to determine whether the stimuli were the same or different by pressing the response key for the full duration of the trial if the two stimuli were the same, and releasing the response key if they were different.
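
One common way to summarize performance in a same/different task like this is a sensitivity measure such as d', computed from hits (correct releases on "different" trials) and false alarms (releases on "same" trials). The sketch below uses invented numbers and is not necessarily the measure Kuhl and Padden reported.

```python
# A minimal, hypothetical sketch of how same/different performance can be
# summarized: hits are correct "different" responses (releasing the key) on
# change trials, false alarms are releases on same trials. The numbers are
# invented for illustration; the authors report their own measures.
from statistics import NormalDist

hits, change_trials = 45, 50        # across-boundary pairs (e.g., /ba/ vs /pa/)
false_alarms, same_trials = 8, 50   # identical pairs

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(f"d' = {d_prime(hits / change_trials, false_alarms / same_trials):.2f}")
```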

Kuhl and Padden found that the subjects discriminated pairs of sounds that were phonetically different (i.e., that crossed a phonetic boundary) significantly better than pairs drawn from within the same phonetic category. These results were consistent with results found in human subjects, both adults and infants. Given the similarity of these results, it can be suggested that the phoneme-boundary effect is not exclusive to humans. The results also raise issues involving innate language processes, including the relevance of animal data to human data and the role played by auditory constraints in the evolution of language. Further studies will be necessary to determine how far these results can be applied to the overall evolution of language.

Kuhl, P. K., & Padden, D. M. (1982). Enhanced discriminability at the phonetic boundaries for the voicing feature in macaques. Perception & Psychophysics. doi: 10.3758/BF03204208 Anelso (talk)

---

This article was about speech perception remaining intact even when canonical acoustic elements of the speech spectrum are removed. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection. Three experiments were conducted to estimate the effects of exposure to natural and sine-wave samples of speech on this kind of perceptual versatility. Sine-wave speech is synthesized so that the voice sounds very different from natural speech, with particular acoustic properties of the original removed.

The first experiment established a benchmark of intelligibility for easy and hard sine-wave words. This initial procedure aimed to determine a baseline difference in recognition performance between easy and hard words, using test items created by modeling sine-wave synthesis on natural samples spoken by a single talker. Two sets of seventy-two words (easy and hard) were used; the sets differed in characteristics such as mean frequency of occurrence and mean neighborhood density, and the words were spoken by a male talker wearing a headset microphone. The participants were twelve English-speaking volunteers recruited from the undergraduate population of Barnard College and Columbia University. They listened to the words and wrote them down in a booklet (guessing was encouraged). Easy words were recognized better (42%) than hard words (25%).

The second experiment tested the effect of exposure to sine-wave speech, comparing exposure to the acoustic form of the contrasts with exposure to the idiolectal characteristics of a specific talker. Three kinds of exposure were provided, each to a different group of listeners, before the word recognition test: (a) sine-wave sentences based on speech of the same talker whose samples were used as models for the easy and hard words; (b) natural sentences of that same talker, to provide familiarity with the idiolect of the sine-wave words without also creating familiarity with sine-wave timbre; and (c) sine-wave sentences based on natural samples of a different talker, to familiarize listeners with the timbre of sine-wave speech without also producing experience of the idiolect of the talker who produced the models for the sine-wave words. Two kinds of test materials were used: sentences presented during an exposure interval, and the easy and hard sine-wave words used in a spoken-word identification test. The Same Talker Natural set consisted of seventeen natural utterances produced by one of the authors, the same talker whose speech served as the model for the easy and hard sine-wave words; the Same Talker SW set consisted of 17 sine-wave sentences from that talker; and the Different Talker SW set consisted of 17 sine-wave sentences modeled on natural utterances spoken by another of the researchers. The participants were thirty-six volunteers from the undergraduate population of Barnard College and Columbia University, randomly assigned to three groups of 12 listeners. Each subject heard each sentence five times (1 second between repetitions and 3 seconds between trials) and transcribed it in a booklet, followed by the identification test of easy and hard words.

The sentence transcriptions were scored and performance was uniformly good: natural sentences were transcribed nearly error-free, and sine-wave sentences were identified at a high level despite the difference between the talkers (Same Talker SW = 93% correct, Different Talker SW = 78% correct). Each of the 34 sine-wave sentences was identified correctly by several listeners. To summarize the word-recognition results, easy words were recognized better than hard words in every condition; exposure to natural sentences of the talker whose utterances were used as models for the sine-wave words did not differ from no exposure, nor did it differ from exposure to sine-wave speech of a different talker. Recognition improved for easy and hard words alike after exposure to sine-wave speech produced by the talker who spoke the natural models for the sine-wave words.

The third experiment was a control test of uncertainty as the cause of performance differences between easy and hard sine-wave words. It estimated residual effects on recognition attributable to the inherent properties of the synthetic test items themselves by imposing conditions that eliminated the contribution of signal-independent uncertainty; with signal-independent effects removed, the test exposed any residual signal-dependent differences in word recognition caused by errors in estimating spectrotemporal properties when creating the sine-wave synthesis parameters. The same easy and hard sine-wave words from the first experiment were used, but this time arranged so that some items shared beginnings and some shared endings. Twenty-four volunteers recruited from the undergraduate population of Barnard College and Columbia University were randomly assigned to two groups of 12. Participants completed 140 trials, writing the words down in a booklet (guessing was encouraged). The two groups (shared beginnings/endings versus no similarities) scored about the same, and performance was very good on both easy and hard words: approximately 88 percent of the words were identified correctly.

The discussion concluded that a listener who has accommodated this extreme perturbation of speech expresses the epitome of perceptual versatility, and the three tests reported here aimed to calibrate the components of this feat by assessing signal-dependent and signal-independent functions.

Remez, R. E., Dubowski, K. R., Broder, R. S., Davids, M. L., Grossman, Y. S., Moskalenko, M., Pardo, J. S., & Hasbun, S. M. (2011). Journal of Experimental Psychology: Human Perception and Performance, 37(3), 968-977. Gmilbrat (talk)

---

Miller, J. L., Mondini, M., Grosjean, F., & Dommergues, J.-Y. Dialect effects in speech perception: The role of vowel duration in Parisian French and Swiss French. Language and Speech, 54(4), 467-485. Sek12 (talk)

The experiments in this article ask how native Parisian French and native Swiss French listeners use vowel duration in perceiving the contrast between the short /o/ and the long /o/ in the French words cotte and cote. The authors wanted to see whether listeners could perceive the difference between the words in series based on their own dialect and in series based on the other dialect.

This research question is important because it asks whether the duration-based contrast between the two vowels used in the experiments is perceived differently by speakers of the Parisian and Swiss French dialects, which are almost identical.

Previous research on this topic, also done by the same authors, used only vowel duration as the indicator of vowel identity. That research found that vowel duration plays a much more important role in Swiss French than in Parisian French: Parisian French listeners identified the vowels using only spectral information, while Swiss French listeners used both spectral information and vowel duration to identify the vowels presented to them. The current study investigates more deeply the dialect difference between the vowels in the study (a short /o/ and a long /o/) and the way Parisian and Swiss French listeners perceive the difference between those vowels in words.

In Experiment 1 of this study the researchers created four speech series to find the best exemplars of vowel duration in both dialects. Two of the series were based on Parisian French and two on Swiss French. Each series included the word cotte with a short vowel and with a long vowel, and the word cote with the same variation. The variable of interest in both experiments was the difference in vowel duration between the two dialects.

The procedure was the same in Experiments 1 and 2. Sixteen native Parisian French and sixteen native Swiss French participants took part. Four series of stimuli were created for the study; each series consisted of short- and long-duration vowels in the words cotte and cote and was based on the natural speech of each group. All participants took part in two separate sessions, each consisting of three parts: familiarization, practice, and test. In the familiarization phase, listeners were presented with stimuli and rated them on a scale of 1-7 (1 being a poor exemplar and 7 the best exemplar). No data were taken from the familiarization phase. In the practice phase, participants were presented with the same kinds of stimuli as in the test phase, in random order. In the test phase, participants were presented with 14 blocks of stimuli and gave a rating based on the vowel duration.

The results indicate that Swiss French listeners judged longer vowels to be the best exemplars of short /o/ and long /o/ whether they listened to the Swiss French series or the Parisian French series. For both Parisian and Swiss French listeners, the best exemplar was judged to be the long-vowel variant of the words used in the study. Both groups showed sensitivity to vowel duration in both series, for both the short /o/ and the long /o/. The researchers expected that only a small range of vowel durations would be judged good exemplars, and this expectation was confirmed.

The conclusion of this study tells us that "taken together, the analyses indicate that, overall, short /o/ and long /o/ vowels are differentiated by duration in both dialects, but that the difference between the two vowels is greater in Swiss French than in Parisian French, owing to a longer /o/."


Stuffed Toys and Speech Perception Zc.annie (talk)

Humans can pick out language among many different kinds of sounds; speech perception succeeds even in noisy environments. Many other factors can influence speech perception. Niedzielski's research showed that speech perception could be influenced by a regional label on an answer sheet. Social characteristics, like age and social class, can also influence people's speech perception. For example, participants shifted their perceptual phoneme boundary between /s/ and /∫/ depending on the gender of the person in a video clip (Strand, 1999). Another factor that influences vowel perception is exposure to a dialect. Research found a bias in perception of /I/ toward more Australian-like variants for female participants when “Australian” appeared at the top of the answer sheet, even though the speaker was actually from New Zealand.

In this study, the researchers examined whether stuffed animals in the room indicating Australia or New Zealand would influence participants’ speech perception. A male New Zealander’s voice was recorded reading sentences: 20 contained the target vowel /I/, and 20 more served as distracters, 10 with the vowel /ɛ/ and 10 with /æ/. Participants were asked to match the pronunciation of the vowel in the word underlined on their answer sheet to a set of tokens; the /I/ tokens ranged from a high, Australian-like sound to a centralized, NZ-like sound, and the other two vowels were treated the same way. All participants heard all the sentences in one of two conditions, and the only difference between conditions was the stuffed animals present during the task: the Australia condition involved stuffed toy kangaroos and koalas, while the New Zealand condition involved stuffed toy kiwis. The experimenters made sure participants noticed the toys by retrieving the answer sheets from the cupboard containing the toys and later placing the toys on the participants’ table. No information about the speaker of the recording was provided to participants. The results showed that the effect tended to be more robust for female participants, and that the /ɛ/ token was the one most influenced by condition. Males tended to respond with more Australian-like tokens in the New Zealand condition than in the Australia condition, whereas females responded with more Australian-like tokens in the Australia condition. Participants of higher social class responded more to the Australia condition than to the New Zealand one, which may be because higher-class participants are more likely to have traveled to Australia and therefore to expect Australian English. The results illustrate how subtle differences in the environment can influence speech perception, even when the cue is not vocal but visual.

Hay, J., & Drager, K. (2010). Stuffed toys and speech perception. Linguistics, 48(4), 865-892. doi:10.1515/LING.2010.027

Word Processing

---

Rayner, K., Slattery, T. J., Drieghe, D., & Liversedge, S. P. (2011). Eye movements and word skipping during reading: Effects of word length and predictability. Journal Of Experimental Psychology: Human Perception And Performance, 37(2), 514-528. doi:10.1037/a0020990

Lkientzle (talk) 04:25, 8 March 2012 (UTC)[reply]


The goal of the present study was to determine whether word length and word predictability, based on previous context cues, have a significant effect on how long a person fixates on a target word. Eye-tracking devices were used to explore these questions; they have been used in previous word-processing studies to analyze the patterns readers use to gather meaning from a string of words. Eye fixation time is the amount of time a person keeps their eyes fixated in one specific place, usually indicating that more processing is needed for that word. The research also looked at how these variables (both predictability and word length) affected a participant’s likelihood of skipping over the target word. Although past research has looked at these two factors separately (predictability of the word on fixation time, and length of the word on fixation time), the present research combined the two to see whether the predictability of the target word, combined with varying word lengths, affected fixation times.


Previous research on this question was attempted once, but encountered ceiling effects due to word length choices; namely, predictable two-letter words were skipped 79% of the time (Drieghe, Brysbaert, Desmet, & De Baecke, 2004). To control for this ceiling effect, the present study used three categories of word length: short, medium, and long. The independent variables (IVs) were therefore word length (short, medium, and long) and word predictability (high vs. low). The dependent variables were how long a person fixated on the target word and the probability of skipping over it.
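
To make the design concrete, here is a minimal sketch (invented trial records, not the authors' data) of how skipping probability and mean fixation time could be summarized for each word length by predictability condition.

```python
# Hypothetical sketch: summarize skip rate and mean fixation time for each
# word length x predictability condition. Trial records are invented.
from collections import defaultdict

trials = [
    # (word_length, predictability, skipped, fixation_ms or None if skipped)
    ("short", "high", True, None),
    ("short", "high", False, 210),
    ("short", "low", False, 245),
    ("medium", "high", False, 230),
    ("medium", "low", False, 265),
    ("long", "high", False, 270),
    ("long", "low", False, 310),
    ("long", "low", True, None),
]

skips = defaultdict(list)
fixations = defaultdict(list)
for length, pred, skipped, fix_ms in trials:
    cond = (length, pred)
    skips[cond].append(skipped)
    if not skipped:
        fixations[cond].append(fix_ms)

for cond in sorted(skips):
    skip_rate = sum(skips[cond]) / len(skips[cond])
    if fixations[cond]:
        fix_text = "mean fixation = %.0f ms" % (sum(fixations[cond]) / len(fixations[cond]))
    else:
        fix_text = "no fixations recorded"
    print(cond, "skip rate = %.2f," % skip_rate, fix_text)
```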


Participants were asked to silently read a sentence presented on the screen in front of them while connected to an eye-tracking device. The sentences were randomized, and after every third sentence a comprehension question was asked to ensure meaning was understood. The sentences varied their target words across the conditions stated above (e.g., high predictability with a short word, low predictability with a long word, and so on).


The main results from this study indicated that word predictability significantly affected how often a word was skipped and the length of fixation time on the target word. Word length had some effect on fixation time and on the number of target words skipped, but not enough to be significant. Unpredictable words had longer fixation times in all word length conditions, and predictable words had the most skips across all conditions.


Although the researchers had hoped to find a significant relationship between word length and fixation times or skipping probability, this study still informs future work because it was the first to demonstrate skipping rates for long words. This is interesting because the long words (10 letters or longer) extended beyond the limits of the human identification span, meaning that participants most likely skipped them on the basis of partial information. This finding could support research on the process of skipping longer words and how we can still generate meaning without fixating the word.


Lkientzle (talk) 06:06, 8 March 2012 (UTC)[reply]

---

Emotion Words Affect Eye Fixations During Reading (Graham G. Scott, Patrick J. O'Donnell, and Sara C. Sereno) Katelyn Warburton (talk) 21:49, 28 February 2012 (UTC)[reply]

Previous research has evaluated the influence of “emotion words” on arousal, internal activation, and valence (value/worth). There is little disagreement that a reader’s response to emotion words can influence cognition, but physiological, biological, environmental, and mental influences remain understudied. This study evaluates the effect emotionality can have on lexical processes by tracking eye movements during fluent reading.

48 native English-speaking participants with uncorrected vision were asked to read from a computer screen (ViewSonic 17GS CRT) while their right eye movements were monitored (by a Fourward Technologies Dual Purkinje Eyetracker). Arousal and valence values as well as frequencies for words were obtained, and the values were averaged across categories. 24 sets of word triples including positive, negative, and neutral emotion words were presented to participants, with the target emotion words in the middle of the sentence. Participants were told that they would be asked yes/no questions after they read each sentence to ensure they were paying attention. After they read the sentence and answered the question, they were instructed to look at a small box on the screen while the tracker recalibrated. This occurred through all 24 trial sets.

In order to verify the plausibility of the test materials, three additional norming studies were conducted with different participants than the initial study. The first involved 18 participants who rated the plausibility of each emotion word appearing in a sentence. The second involved a similar task, except participants judged the plausibility of an emotion word from a sentence fragment rather than an entire sentence. Finally, 14 different participants were given a statement and asked to generate the corresponding emotion word. These three norming studies verified that the emotion words used in the central study were plausible without being predictable.

This is the first study to analyze single emotion words in the context of fluent reading. Researchers found that participants had shorter fixations on positive and negative emotion words than on neutral words. In addition, the influence of word frequency on fixation was modulated by arousal level; more specifically, low-frequency words were facilitated by high levels of emotional arousal, either positive or negative. Therefore, emotional biases and word frequencies influence eye fixation while reading. The results of this study were consistent with previous research on emotion word processing and extended past studies by evaluating emotional word processing during fluent reading. In short, this study provides evidence of the important role of emotion in language processing; more specifically, the emotional nature of a word, defined by its arousal and valence characteristics, affects lexical access and therefore influences information processing. By following eye movements, researchers were able to identify the rate at which words are recognized. This demonstrates that word meanings are activated and integrated quickly into the reading context.

Scott, G. G., O'Donnell, P. J., & Sereno, S. C. (2012). Emotion words affect eye fixations during reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, doi: 10.1037/a0027209

---

The Structural Organization of the Mental Lexicon and Its Contribution to Age-Related Declines in Spoken-Word Recognition Amf14 (talk) 04:22, 29 February 2012 (UTC)[reply]

Older age groups generally have a more difficult time with speech processing than younger groups. This was originally thought to be due to the hearing loss that people experience with age. More recently, studies have attempted to show that factors other than hearing loss can contribute to difficulties with processing. These experiments have shown how a reduction in cognitive abilities can greatly impair the processing of spoken language.

The English vocabulary is immense, and therefore when accessing a specific word the brain must activate its meaning in order to interpret and understand what is being said. A model known as the Neighborhood Activation Model suggests that words that are phonetically similar to a heard word are all activated in the brain. Words are recognized based on two important characteristics: density and frequency (Luce, 1986). Density refers to how many similar neighboring words there are, and frequency indicates how often the word is used in comparison to the other words within its neighborhood. As the language input continues, words are deactivated and ruled out until a single meaning can be settled on.

According to experiments done by Luce, “hard” words, which come from high-frequency, high-density neighborhoods, are more difficult to recognize. The reasoning behind this idea is that when words come from very dense neighborhoods, all of the similar words are activated as possibilities as well. This leads to slower processing of the word that was actually spoken, because the listener is unsure of which candidate to continue activating and recall. Processing large amounts of spoken language is also costly to a person’s cognitive resources, so age-related declines in cognitive abilities should affect the ability to process hard words.
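
The neighborhood idea lends itself to a small sketch. The snippet below is a simplification, not the authors' method: the toy lexicon and frequency counts are invented, and neighbors are reduced to same-length words differing by one letter. In the spirit of the description above, it labels a word "hard" when its neighborhood is both dense and made up of frequent competitors.

```python
# Simplified sketch of neighborhood density and neighborhood frequency.
# Toy lexicon with invented frequency counts; neighbors are approximated
# as same-length words differing by exactly one letter.

def is_neighbor(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def neighborhood(word, lexicon):
    """Return (density, mean neighbor frequency) for `word`."""
    neighbors = [w for w in lexicon if w != word and is_neighbor(word, w)]
    density = len(neighbors)
    mean_freq = sum(lexicon[w] for w in neighbors) / density if neighbors else 0.0
    return density, mean_freq

lexicon = {"cat": 900, "bat": 300, "hat": 450, "mat": 150, "cot": 120, "dog": 800}

for word in lexicon:
    density, mean_freq = neighborhood(word, lexicon)
    # "Hard" words sit in dense neighborhoods full of frequent competitors.
    label = "hard" if density >= 3 and mean_freq >= 300 else "easy"
    print(f"{word}: density={density}, mean neighbor frequency={mean_freq:.0f} -> {label}")
```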

Experiment 1 compared young adults with an elderly population. Each group heard a series of 75 easy and 75 hard words spoken by a male or female speaker; the difficulty of the words was based on the density and frequency of their neighborhoods. The listeners were asked to write down the word that they heard and were given credit only for an exact match. Results revealed that older adults’ performance was influenced much more by the difficulty of the words, while scores for easy words were comparable between the two groups. The influence of hearing loss was measured but not found to affect the task. An additional experiment was conducted in an attempt to rule out extra factors in Experiment 1. Questions were raised about how demanding the task was for older adults, and to test this idea further, participants from the same population were presented with the same task but in the presence of white noise. Within the older group, the difference in performance between easy and hard words should remain if the poor performance was in fact due to factors other than task difficulty. As predicted, older participants in this experiment consistently performed worse than the young group and continued to perform better on easy words than hard words. These findings suggest that the discrepancies in accuracy cannot be attributed completely to task difficulty; rather, it is possible that older listeners struggle to isolate specific sounds in words that have many similar counterparts in the mental lexicon.

A final component of speech processing needed to be analyzed. Experiment 3 was designed to determine the amount of processing resources available in old age. When listening to many different speakers, the variability of the speech signal increases, so more cognitive resources are used to decipher the many different acoustic realizations. This design better simulates real-world contexts, in which a person is surrounded by many different voices. The manipulated factors now included easy versus hard words and single versus multiple voices. The findings from Experiment 1 were reinforced, and word recognition was also significantly reduced when older adults heard many different speakers. This final experiment showed that all age groups have to use more cognitive resources when processing words from multiple speakers, but older listeners are at a greater disadvantage.

As age increases, adults suffer from reductions in cognitive abilities and processing, which leads to lags in the time it takes them to understand spoken language. Other abilities that decline with age, such as eyesight and hearing, add to these processing demands as well.

Amf14 (talk) 19:52, 6 March 2012 (UTC)[reply]

Sommers, M. (1996). The structural organization of the mental lexicon and its contribution to age-related declines in spoken-word recognition. Psychology and Aging, 11(2), 333-341. doi:10.1037/0882-7974.11.2.333

---

Evidence for Sequential Processing in Visual Word Recognition (Peter J. Kwantes and Douglas J. K. Mewhort)

When reading a word, there can be many possible candidates for what the word will turn out to be, but at a certain point, the uniqueness point (UP), only one possible option remains. Previous research by Radeau et al. tested the ability to encode words sequentially using the UP, defined in terms of the position of the letter that signified the uniqueness point of a word. That study was designed to determine whether the UP followed the same pattern as in speech recognition, in which words with an early UP are named faster than words with a late UP; however, the test showed the opposite result. In an effort to explain these mixed results, Kwantes and Mewhort redefined the question using the orthographic uniqueness point (OUP), which distinguishes a word when reading from left to right. The study aimed to determine whether words with an early OUP could be identified faster than those with a late OUP.
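
As an illustration of the uniqueness point idea, the sketch below (Python, with a made-up toy lexicon) computes the orthographic uniqueness point of each word: the left-to-right letter position at which no other word in the lexicon shares the same initial letters.

```python
# Hypothetical sketch: compute a word's orthographic uniqueness point (OUP)
# against a toy lexicon. The OUP is the first letter position (counting from
# the left) at which the word's prefix matches no other word in the lexicon.

def oup(word, lexicon):
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        competitors = [w for w in lexicon if w != word and w.startswith(prefix)]
        if not competitors:
            return i          # unique from this position onward
    return None               # never unique (word is a prefix of another word)

lexicon = {"dwindle", "dwarves", "dweller", "dwelled"}
for word in sorted(lexicon):
    print(word, "-> OUP at position", oup(word, lexicon))
```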

The initial study involved twenty-five undergraduate students who were asked to name a series of seven-letter words aloud, as quickly as possible, while the words were presented visually in sequence on a screen. Half of the words had an OUP at Position 4 (early OUP) and the other half had an OUP at Position 6 or 7 (late OUP). Response time (RT) was measured from the onset of the word until the voice response began.

The reaction time results showed a clear advantage for early-OUP words, which were on average 29 ms faster than late-OUP words.

The second study aimed to measure whether the results in Experiment 1 truly reflected a process of production and pronunciation, or whether they depended on reading processes instead. Experiment 2 used a procedure similar to the previous study, but asked participants to read the word silently and then name it out loud when cued to do so. An early-OUP advantage was not detected when naming was delayed by a cue, suggesting no interaction with output or production processes. The third study also repeated Experiment 1, but removed the visual word stimulus after 200 ms in order to examine the role of eye movements. Experiment 3 showed results similar to Experiment 1, with an early-OUP advantage, suggesting that the advantage is not a result of eye movements.

The three experiments suggest that the early-OUP advantage in word processing is a result of retrieval from the lexicon, without contributions from production processes or eye movements. The results confirm the researchers’ predictions and suggest possible reasons for Radeau et al.’s failure to find early-UP advantages. Orthographic uniqueness points demonstrate the important role of lexical processes in word recognition.

Kwantes, P. J., & Mewhort, D. K. (1999). Evidence for sequential processing in visual word recognition. Journal Of Experimental Psychology: Human Perception And Performance, 25(2), 376-381. doi:10.1037/0096-1523.25.2.376 Smassaro24 (talk) 16:22, 3 March 2012 (UTC)[reply]

---

Syllabic Effects in Italian Lexical Access

Past research has explored the role of syllables in speech perception in depth. However, large gaps remain concerning the function of small syllable units in the ability to identify a word during early processing. Research has in part focused on the syllabic hypothesis, which states that syllables are basic units in the processing of speech. It is known that in languages such as English one syllable is often enough to trigger lexical access and therefore to recognize a word before it has been completely heard. It has not yet been determined whether these effects are similar in other languages, such as Romance languages.

In the present paper, Tagliapietra and her colleagues aim to determine whether these effects hold in Italian. The researchers set out to determine whether, in Italian, the first sounds of a syllable make contact with the mental lexicon. Specifically, Tagliapietra et al. explore whether this access to the mental lexicon depends on whether the syllable is stressed. This research is important because it allows speech perception to be understood across languages. The present article features two experiments aiming to clarify this issue. In the first experiment, forty-two undergraduate Italian-speaking students participated. Participants were first randomly assigned to one of two conditions with differing syllable structures. Then, on each trial, the word following the priming syllable was either related or unrelated to the syllable. These two factors, syllable structure and prime-target relatedness, were the two independent variables. Participants were tested on how quickly they could recognize a word from the syllable alone; the dependent variable was therefore the time it took participants to respond. The second experiment followed the same procedure, but had only one independent variable, the relationship between the priming fragments and the target words. This was done to determine whether a fragment shorter than a syllable can establish the same contact with the lexicon.

Results from Experiment 1 indicated that people can make contact with the lexicon, and decide what a word is, after hearing only a syllable, regardless of whether stress is applied to the syllable. People also responded more quickly when the following word was related to the syllable than when it was completely unrelated. Results of Experiment 2 showed that fragments smaller than a syllable can also be used to make contact with the mental lexicon. Ultimately, the findings do not support the view that Romance languages such as Italian fit the syllabic hypothesis. Although the results do not support that theory, they suggest an altered version of it, stating that in Romance languages the set of lexical candidates is not reduced until later in the recognition process.

The implications of this article allow readers to deepen their understanding of speech perception across languages. It also aids in understanding the complexities of human speech perception and how it can vary based on small differences across languages.

Tagliapietra, L., Fanari, R. R., Collina, S. S., & Tabossi, P. P. (2009). Syllabic effects in Italian lexical access. Journal Of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4 Kfinsand (talk) 06:22, 6 March 2012 (UTC)[reply]

---

Every day, people say sentences that activate parts of the brain. In these sentences we use contextual information that helps reactivate specific words to gain the meaning behind the sentence. In this article, the experimenters were interested in what kind of information is activated during word processing; specifically, they were concerned with the vertical location associated with a word’s referent (e.g., roof vs. root).

In the first experiment, the participants performed a lexical decision task with words whose referents have an up or down location (e.g., eagle vs. worm). If reading a word activates whether its referent is up or down, then a compatibility effect should occur. Thirty-six right-handed German native speakers took part (two participants were excluded because of very low accuracy). Seventy-eight German nouns with an up or down connotation were used (thirty-nine up and thirty-nine down), with word frequencies taken from a University of Leipzig frequency resource. On each trial the participants saw either a noun or a pseudoword and judged whether it was a word; in one half of the experiment they responded with an upward movement and in the other half with a downward movement. Upward responses were faster overall than downward responses, and the compatibility effect between referent location and response direction was significant for both directions, which suggests that referent location is activated when the word is read.

In the second experiment, the experimenters used the same lexical decision task with the same words, except that responses were based on the font color of the word. Twenty-four German native speakers took part (one participant was excluded because of very low accuracy). The same words were presented in four different colors (blue, orange, lilac, and brown). There was a significant interaction between response location and referent direction, and responses were significantly faster for compatible words than for incompatible words. These results indicate that no explicit judgment about location is required for location information to be activated during word processing.

In the third experiment, the experimenters added filler words with no location association so that naive participants would not notice the manipulation. Twenty-four German native speakers took part, and thirty-nine filler words were added to the list from Experiment 1. The compatibility effect was again found: responses were faster when the word’s referent location and the response direction matched.

In the last experiment, the experimenters examined responses made by pressing buttons in compatible or incompatible locations; participants kept their hands stationary on the up and down buttons. There were twenty-four German native-speaking participants (one was excluded due to low accuracy), and the experiment was otherwise designed exactly like Experiment 2. As in the other experiments, a compatibility effect was found, and participants were faster at hitting the up button than the down button. Together, these four experiments strongly suggest that information about a referent’s location is automatically activated when a participant processes object nouns.

Lachmair, M., Dudschig, C., De Filippis, M., de la Vega, I., & Kaup, B. (2011). Root versus roof: Automatic activation of location information during word processing. Psychonomic Bulletin & Review, 18(6), 1180-1188. Gmilbrat (talk)

---

Semantic processing and the development of word recognition skills: Evidence from children with reading comprehension difficulties

Previous studies have acknowledged the importance of phonological awareness in language and reading acquisition, but many of them ignore the semantic aspects of learning to read. The present study was based on the idea that reading comprehension involves two skills, decoding ability and linguistic comprehension, and it looked specifically at children who have reading comprehension difficulties to test the importance of semantic processing in reading acquisition.

The study tested two predictions: that children with specific reading comprehension difficulties show impairments in semantic processing but not phonological processing, and that differences in semantic processing skills would influence word recognition abilities. The authors predicted this because if poor comprehenders still have good decoding skills, they should be able to read words they hear often or that are spelled regularly with relative ease, but should have more difficulty reading low-frequency exception words.

The children were all between 8 years, 6 months and 9 years, 6 months and attended the same school. They were matched for their age, as well as non-verbal and decoding abilities. All the children were tested according to the Neale Analysis and were considered at least age-appropriate in reading accuracy. Decoding ability was tested using the Graded Nonword Reading Test. To test semantic difficulties, the children took the Test of Word Knowledge, which tests receptive and expressive vocabulary.

This study consisted of three experiments. The first experiment tested the ability to access semantic and phonological information. For semantic information, the children judged whether pairs of words were synonyms; for phonological processing, they judged whether two spoken words rhymed. The results showed that children with poor comprehension skills performed more slowly and made more errors on the synonym task but performed similarly to the control group on the rhyme judgments. This suggests that they have trouble with semantic but not phonological processing.

The second experiment also tested semantic and phonological processing, but did so using verbal fluency: instead of simply identifying whether words are synonymous or rhyme, the children had to produce them. To test semantics, the children were given spoken categories, such as animals, and 60 seconds to generate as many examples as they could. To test phonological processing, children were given words and asked to come up with as many rhyming words as possible in 60 seconds. The results of this experiment were consistent with the first, showing more difficulty in accessing and retrieving semantic information in children with poor comprehension skills. It is possible that their difficulty in these single-word semantic tasks contributes to their comprehension difficulties.

The third experiment looked at whether these semantic difficulties would also affect word recognition. The children were asked to read words that varied in frequency and spelling regularity. The authors expected the children with poor comprehension skills to have more difficulty reading irregular and low-frequency words, because such words require not just decoding but also semantic processing. They did see word-recognition weaknesses in children with comprehension difficulties compared to their controls, specifically for low-frequency and exception words. However, the three-way interaction between reader group, frequency, and regularity was not significant. This could be because the subjects were too young to have fully developed word-recognition skills, or because, given their age, they might not have encountered the high-frequency words often enough to treat them as such. The study does show that children with poor comprehension skills had more difficulty with semantics, and that semantic abilities play a role in word recognition.

Nation K., Snowling M. J. (1998). Semantic processing and the development of word recognition skills: Evidence from children with reading comprehension difficulties. Journal of Memory and Language, 39, 85–101. Lcannaday (talk) 16:16, 8 March 2012 (UTC)[reply]

---

Speaker Variability Augments Phonological Processing in Early Word Learning by Gwyneth Rost and Bob McMurray Misaacso (talk) 20:02, 7 March 2012 (UTC)[reply]

Infants have difficulty learning phonologically similar words, as shown in the switch task. For this task, infants are habituated to two objects paired with two words and are then tested in two ways. On same trials, infants see the same object-word pairing that was used in habituation. On switch trials, the infants see an object with a word that was not paired with it during habituation. If infants have actually learned the words, the mismatch should cause dishabituation. The task thus assesses whether an infant has learned a pair of words well enough to be surprised when the pairing changes. Past research using the switch task has found that 14-month-olds notice misnaming when words differ by multiple phonemes but not when they differ by a single phoneme. Learning a word demands multiple cognitive and perceptual abilities, including attention, memory, and inductive thinking. When children hear a non-word that sounds similar to a known word, the known word is partially activated. In 17-month-olds, there is a correlation between the ability to successfully complete the switch task and the size of the lexicon.

Phoneme discrimination is a component of learning words, and these abilities continue to develop as cognitive capacities increase, so that lexical similarity causes less difficulty over time. Variability in pronunciation is normal; infant-directed speech has a larger range of possible productions than adult-directed speech. Studies of visual category learning have shown that infants trained on a single pattern can distinguish individual examples, while infants trained on multiple patterns are able to determine what belongs to specific categories. This study used the switch task from previous research with a few alterations: a novel object was introduced at the end of the test, real single-colored objects were used instead of multi-colored objects, photographs were used instead of moving film, and two of the original words were replaced to make the learning situation easier.

The infants were presented with three photographs of single-colored objects and two sound files of a female voice reciting /buk/ and /puk/ at two-second intervals. The pictures and sounds were combined so that they appeared at the same time. Infants were habituated to the stimuli and looking time was recorded. After habituation, infants were tested on same trials, which matched the habituation pairings; switch trials, in which the object was paired with the opposite word; and novel trials, in which an object the infant had never seen was paired with one of the stimulus words.

In Experiment 1, researchers found that infants were not able to distinguish between the lexical neighbors when they were paired with the visual stimuli, although infants did learn during the task, as they dishabituated to new visual stimuli. In Experiment 2, researchers modified the auditory stimuli by splicing together /buk/ and /puk/ recordings from eighteen adults, so that each trial contained multiple tokens of each word from different speakers. The test condition (same, switch, or novel) was the independent variable, with looking time as the dependent variable. This multi-speaker version of the task gave infants enough information to successfully complete the switch task.

Rost, G. C., & McMurray, B. (2009). Speaker variability augments phonological processing in early word learning. Developmental Science, 12(2), 339-349. doi:10.1111/j.1467-7687.2008.00786.x Misaacso (talk) 06:46, 8 March 2012 (UTC)[reply]

---

Morphological Awareness: A key to understanding poor reading comprehension in English TaylorDrenttel (talk) 20:14, 7 March 2012 (UTC)[reply]

Reading ability in elementary school children is an extensively researched topic. At this age children are learning a vast amount of vocabulary and developing their reading and writing skills. This particular study examined several reading-related abilities of children in Grades 3 and 5. More specifically, it tested how morphological awareness affects reading comprehension in children who are already below average in comprehension, and when this weakness emerges.

Previous research has linked morphological awareness to success in word reading and reading comprehension. Furthermore, there is evidence that the role of morphological awareness is more pertinent to children’s reading development in later elementary grades. The authors hoped to solidify this link and determine when the weakness in morphological awareness might emerge.

This longitudinal study categorized Grade 5 children into three levels of comprehenders based on a regression equation that combined age, word reading accuracy, word reading speed, and nonverbal cognitive ability. The three groups were unexpected poor comprehenders, expected average comprehenders, and unexpected good comprehenders. The groups were similar in nonverbal ability and word reading accuracy but differed in reading comprehension. Children were administered reading or oral tasks that tested reading comprehension, word identification, word reading efficiency, nonverbal ability, vocabulary knowledge, naming speed, phonological awareness, orthographic processing, and morphological awareness. For morphological awareness, children were given a word analogy task consisting of 10 inflectional and derivational items and had to say the word that matched the pattern. The tests were given twice each year.
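
The grouping logic can be sketched in a few lines. The snippet below uses invented scores, not the authors' data or their exact regression: it predicts comprehension from the four measures named above and groups children by how far their observed score falls from the predicted one.

```python
# Hypothetical sketch of residual-based grouping: regress comprehension on
# age, word reading accuracy, word reading speed, and nonverbal ability,
# then classify children by their residuals. All scores are invented.
import numpy as np

# Columns: age (months), accuracy, speed, nonverbal ability.
predictors = np.array([
    [126, 85, 70, 100], [128, 88, 72, 104], [130, 84, 69, 99],
    [127, 90, 75, 103], [131, 86, 71, 101], [129, 87, 73, 102],
    [125, 83, 68, 98],  [132, 91, 76, 105], [128, 85, 70, 100],
    [130, 89, 74, 103],
], dtype=float)
comprehension = np.array([78, 96, 88, 97, 70, 92, 80, 99, 74, 95], dtype=float)

# Least-squares fit with an intercept term.
X = np.column_stack([np.ones(len(predictors)), predictors])
coefs, *_ = np.linalg.lstsq(X, comprehension, rcond=None)
residuals = comprehension - X @ coefs

# Children well below prediction are "unexpected poor comprehenders",
# well above are "unexpected good", and the rest "expected average".
cutoff = residuals.std()
for i, r in enumerate(residuals):
    if r < -cutoff:
        group = "unexpected poor"
    elif r > cutoff:
        group = "unexpected good"
    else:
        group = "expected average"
    print(f"child {i + 1}: residual = {r:+5.1f} -> {group}")
```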

The groups did not differ in phonological awareness, naming speed, or orthographic processing, but there were significant group differences in morphological awareness. At Grade 3 the groups differed slightly in morphological derivation but not inflection, and overall Grade 3 children performed less well than Grade 5 children. At Grade 5, unexpected poor comprehenders performed significantly less well than the two other groups; again, this was specific to morphological derivation and not inflection. As the researchers predicted, morphological awareness is associated with reading comprehension; they extended previous findings by showing that these children have adequate orthographic processing, phonological awareness, and naming speed skills.

This study is the first to give an understanding of when morphological difficulties emerge in poor comprehenders. This knowledge is critical for schoolteachers of third, fourth, and fifth grade students. Greater emphasis should be placed on morphological forms, specifically derived ones, in order to boost the reading comprehension of students.

Tong, X., Deacon, S., Kirby, J. R., Cain, K., & Parrila, R. (2011). Morphological awareness: A key to understanding poor reading comprehension in English. Journal of Educational Psychology, 103(3), 523-534. doi:10.1037/a0023495

---

Foveal processing and word skipping during reading by Denis Drieghe

Previous research has generated models that all include the assumption that word skipping during reading is closely linked to the amount of parafoveal processing during the preceding fixation. The fovea is the center of the retina and provides focused vision, an attentional beam; the parafoveal region is the area around it. Under the usual assumption about word recognition, the word under the attentional beam is the only word being processed. However, it has been shown that while the attentional beam remains on a word (n), processing in the parafoveal region can already be under way, allowing us to process word n+1. Words are skipped because they are recognized in parafoveal vision. Prior research has also shown that skipping rates are higher for short words than long words and for common words than less common words. In addition, case and contrast alteration produce longer fixation times on word n compared to normal conditions, but not on n+1.

The current study focused on the E-Z Reader model. It sought to test the skipping rate of n+1 and to further examine the link between fixation and skipping of n+1. Seventy-two sentences were constructed featuring a five-letter word n followed by word n+1 (half of the n+1 words were three letters long and half four letters long). Two additional conditions were created by presenting word n in reduced contrast or with case alteration. Thirty university students were first presented with 12 practice sentences to calibrate the eye-tracking system, of which 4 featured a case-alteration word, 4 featured a reduced-contrast word, and 4 were unmodified. Each participant read 24 sentences per condition.

Results show that reducing the contrast of word n led to increased fixation times on word n but not on n+1. When word n was manipulated through case alteration, longer fixation times on both word n and n+1 occurred. These findings mirror earlier research. Interestingly, the low-contrast condition reduced skipping of word n+1, while there was no difference between the case-alteration and normal conditions. The researchers also obtained results inconsistent with previous research, which had shown that a less common word n led to reduced skipping of word n+1 and that word n+1 was skipped less after an incorrect parafoveal preview than after a correct one. The decision to skip n+1 is influenced not just by the amount of parafoveal processing but also by the ease of processing of word n. The findings suggest that we need to re-evaluate models of word processing, particularly the E-Z Reader model focused on in this study.

Drieghe, D. (2008). Foveal processing and word skipping during reading. Psychonomic Bulletin & Review, 15(4), 856-860. doi:10.3758/PBR.15.4.856 Mvanfoss (talk) 00:22, 8 March 2012 (UTC)[reply]

---

Automatic Activation of Location During Word Processing

If a person is asked to point to either the ceiling or the floor, they will most likely point up or down, respectively. In interacting with and learning about the world, people tend to associate the objects or events they encounter with the words that label them. This means that words become associated with experiential traces tied to the locations in which those labels are learned. Previous studies have shown that when people later read or hear words for objects or events, these experiential traces (associations made at the time of learning) are reactivated. Earlier studies have also shown that experiential traces are activated during word processing and that they affect sensory-motor activation. This study aims to find out when the activation of an entity’s location information, such as up or down, occurs after encountering the word, and whether reaction times depend on the compatibility between the word’s referent location and the response. It also examines whether the activation of location information is relatively automatic or whether it is task dependent or requires further context.

In the first experiment, participants performed a lexical decision task with words that have an up or down referent while the researchers measured the reaction time of upward or downward movements. Thirty-six native German speakers were presented with 78 German nouns and 78 nonsense filler words. The 78 German nouns included 39 words associated with an up location and 39 words associated with a down location. Participants were asked to respond if the letter string they saw was a word; half of them responded with an upward motion and the other half with a downward motion, and the reaction time of the movement was measured. The researchers found a significant interaction between the referent location of the word and the response direction: participants were faster at deciding that the letter string was a word when the direction of their response was compatible with the direction (up or down) associated with the word than when the word location and movement were incompatible.
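
A minimal sketch of the compatibility comparison described here (invented reaction times, not the authors' data): mean RT when the response direction matches the word's referent location versus when it does not.

```python
# Hypothetical sketch: compare mean lexical-decision reaction times for
# compatible trials (response direction matches the word's referent
# location) and incompatible trials. All trial data are invented.

trials = [
    # (referent_location, response_direction, reaction_time_ms)
    ("up", "up", 540), ("up", "down", 585),
    ("down", "down", 550), ("down", "up", 600),
    ("up", "up", 530), ("down", "down", 545),
    ("up", "down", 590), ("down", "up", 610),
]

compatible = [rt for loc, resp, rt in trials if loc == resp]
incompatible = [rt for loc, resp, rt in trials if loc != resp]

print("mean RT, compatible trials:   %.0f ms" % (sum(compatible) / len(compatible)))
print("mean RT, incompatible trials: %.0f ms" % (sum(incompatible) / len(incompatible)))
```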

Next, they wanted to see whether this location activation occurs automatically. Using the same set-up as Experiment 1, they had participants respond with an upward or downward movement based on the font color. They again found a significant interaction between the direction of the participants’ responses and the direction connotation of the presented word. This suggests that the location information of a word is automatically activated when a word with a referent location is seen. The researchers also performed similar experiments to control for participants’ recognition of the location pattern and for reaction times affected by the movement of the dominant hand; these experiments produced data consistent with the previous results.

In each of their four experiments, the researchers found that the participants were faster at responding to the word when the word was compatible in location with the direction of their response (for example, when the word had an up connotation and the direction of their response was also upward) than when they were opposite. This is exactly what the researchers predicted. They also found that this location information is automatically activated without further context which also supports their hypothesis. Knowing this can help people further understand why they make certain associations with words – they are based on a previous experience that involved that particular word.

Lachmair, M., Dudschig, C., De Filippis, M., de la Vega, I., & Kaup, B. (2011). Root versus roof: Automatic activation of location information during word processing. Psychonomic Bulletin & Review, 18(6), 1180-1188. doi:10.3758/s13423-011-0158-x Lino08 (talk) 05:54, 8 March 2012 (UTC)[reply]

---

Processing words with multiple meanings is a task people perform daily, but how people do it is difficult to study and explain. Previous researchers have developed theories that range along a scale of possibilities. At one end of the scale, the meaning of a word is determined entirely by the context in which it is used. At the other end, each meaning has its own representation and shares nothing with the other meanings but the phonological form. Most theories lie in the middle of the scale: individual senses are stored separately, while related senses of a word share a general representation. This can best be understood by comparison with a dictionary, which has one entry for polysemes (words with related senses), with subentries for each sense, and separate entries for homonyms (words with unrelated meanings). Studies use response time to show how easily a person accesses a word; words with related senses have been found to produce faster reaction times than words with unrelated senses. The faster response time for polysemous words is thought to reflect the ease of accessing one broad mental representation, while homonyms produce slower reaction times because people have to search through separate mental representations.

Most studies have used nouns as the homonyms and polysemes; however, the current study used verbs, as they may give insight into whether there is a sharp distinction between meanings that are related and those that are not. Under a theory in which related and unrelated meanings are stored similarly, each with a separate representation, the degree of meaning relatedness should have no effect on response time or accuracy. Knowing the process people go through in order to arrive at a meaning is key to understanding language processing.

Four groups of stimuli were created: (1) homonymy, (2) distantly related senses, (3) closely related senses, and (4) same senses. Each group consisted of 11 pairs of phrases, and each pair contained the same verb. Placement in these groups was determined by WordNet, the Oxford English Dictionary, and a 0-3 rating of the relatedness of the verb senses in each pair, with 0 being completely unrelated and 3 being the same sense. Using a one-way repeated measures design, every participant saw all eleven pairs in each of the four groups. Both response time and accuracy were measured: participants viewed the phrases on a computer screen and were instructed to press the yes button when a phrase made sense and the no button when the phrase was difficult to make sense of, answering as quickly but as accurately as possible.

The results showed faster response times and better accuracy for same-sense pairs as well as closely related senses. As meaning relatedness decreased, reaction time increased and accuracy decreased. While the results helped to further our understanding of the way people process meaning, they did not fit the prediction: each meaning does not appear to have a fully separate mental representation. This study has implications for lexicography, foreign language learning, and computer processing of natural language.

Brown, Susan W. (2008). Polysemy in the mental lexicon. Colorado Research in Linguistics, 21, 1-12. Ahartlin (talk) 03:08, 8 March 2012 (UTC) Ahartlin (talk) 03:33, 29 March 2012 (UTC)[reply]

---

Lanthier, S. N., Risko, E. F., Stolz, J. A., & Besner, D. (2009). Not all visual features are created equal: Early processing in letter and word recognition. Psychonomic Bulletin & Review, 16(1), 67-73. doi:10.3758/PBR.16.1.67 Hhoff12 (talk) 06:45, 8 March 2012 (UTC)[reply]

Researchers in this study sought to determine which visual features are most important in letter and word recognition. Features work together to make letters, and letters work together to make words. Previous research used confusion matrices, but these could not differentiate between the different features. The importance of a feature can be determined by how participants’ performance changes when that feature is altered. Research by Biederman removed midsegments and vertices from objects and found that the removal of midsegments was not as detrimental as the removal of vertices when participants tried to identify the objects. The current study looked at letters and words to see whether similar results would be found: deleting vertices from letters and words should be more detrimental than removing midsegments.

Four experiments were used to determine whether certain features are more important than others. In the first experiment, letters were presented normally (with nothing removed), with vertices removed, or with midsegments removed; letters without vertices were excluded from the experiment entirely. Three versions of each letter were created, and when vertices and midsegments were removed, the same number of pixels was deleted in each case. Participants watched the screen, and a letter was presented until they made a vocal response, at which point the experimenter marked whether the answer was correct. The first experiment found significantly slower reaction times in the trials with midsegment and vertex deletions, and vertex deletion was more detrimental than midsegment deletion.

In the second experiment, the letters were shown only briefly. Previous research had shown that decreasing the duration of exposure increases the effects of vertex deletion. Sure enough, reaction times were slower overall, and vertex deletion was more detrimental than midsegment deletion.

The third experiment looked at words instead of letters. With words, there is an element of context that helps readers identify them, and previous research suggested that this context could offset disrupted letter-level processing. This time, participants were presented with only two conditions, words whose letters had midsegment deletions and words whose letters had vertex deletions, and there was no significant difference between the two. This suggests that context really can compensate for disrupted letter-level processing.

Similar to the second experiment, the fourth experiment looked at words presented only briefly. Results showed significantly slower reaction times to words with vertices missing than to words with midsegments missing, and more errors were made in the vertex-deletion condition. When viewing time is limited, removing vertices is more detrimental.

Taken together, the experiments indicate that vertices are an especially important visual feature in letter and word recognition.

---

Word misperception, the neighbor frequency effect, and the role of sentence context: evidence from eye movements

A key — and one might argue the only — part of reading is the identification of words. Word identification is affected by several factors, including frequency in the language and the orthography of the word. The visual input triggers activation in the lexicon, and words that are more common are activated faster than others. However, words that appear similar to each other, like "gloss" and "glass", may trigger each other readily enough to affect how the word is processed. Words that have the same number of letters and differ by only one letter are called neighbors, and Slattery tested whether a low-frequency word could be misperceived as a higher-frequency neighbor (HFN).
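
The neighbor definition is easy to make concrete. The sketch below (toy lexicon and invented frequency counts) lists a word's higher-frequency neighbors: same-length words differing by exactly one letter that are more frequent than the word itself.

```python
# Hypothetical sketch: find higher-frequency neighbors (HFNs) of a word.
# Neighbors share the word's length and differ by exactly one letter;
# HFNs are neighbors with a larger (invented) frequency count.

def neighbors(word, lexicon):
    return [w for w in lexicon
            if w != word
            and len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1]

def higher_frequency_neighbors(word, lexicon):
    return [w for w in neighbors(word, lexicon) if lexicon[w] > lexicon[word]]

lexicon = {"gloss": 12, "glass": 410, "gross": 150, "floss": 9}
print(higher_frequency_neighbors("gloss", lexicon))   # ['glass', 'gross']
```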

The effects of HFNs on reading were tested using eye movements. Previous research has shown that when there are anomalies in a sentence, readers have difficulty immediately. Slattery proposed that if the problem with the sentence was misperceiving a word as an HFN, readers would fixate on it longer than on the rest of the words. To test this, thirty-two students were presented with two kinds of sentences. In the experimental sentences, one word had an HFN, while the control sentences did not. Additionally, in some sentences both the target word and its associated HFN were consistent with the sentence context.

Words that fit the context of the sentence were not fixated more readily or for longer, even if the target word had an HFN. However, if an HFN conflicted with the sentence, readers looked at the target word for longer. These results are consistent with previous research, which showed that priming the lower-frequency target word could eliminate HFN misperceptions. Slattery notes that these effects occur at a relatively high rate of 9.8%, even though participants could look at words as long as they wanted. Because readers then have to deal with integrating an incorrect meaning into the current sentence, he argues that these errors are genuine misperceptions.

Slattery, T. J. (2009). Word misperception, the neighbor frequency effect, and the role of sentence context: Evidence from eye movements. Journal Of Experimental Psychology: Human Perception And Performance, 35(6), 1969-1975. doi:10.1037/a0016894 AndFred (talk) 22:33, 8 March 2012 (UTC)[reply]

---

Parise, Eugenio; Palumbo, Letizia; Handl, Andrea; Friederici, Angela D. (2011). Influence of Eye Gaze on Spoken Word Processing: An ERP Study With Infants. Child Development, May/June, Vol. 82, No. 3, pp. 842-853. Sek12 (talk)

Researchers in this study wanted to determine whether attention to speech and non-speech stimuli varies with eye gaze in 4- to 5-month-olds, to see whether visual and auditory cues interact from early in infancy. This question is important for word processing research because it determines how early visual and auditory cues can affect infants’ word processing.

Previous research has shown that infants are able to differentiate between speech and non-speech during their first few months. Other research has shown that infants are interested in face-like stimuli, which gives the current research a starting point for determining whether eye gaze can affect word processing in infants. This research presents two experiments with two different independent variables: in Experiment 1 eye gaze was directed either at or away from the infant, and in Experiment 2 eye gaze was directed either at or away from an object. The researchers expected higher attention from the infant when eye gaze was directed toward the infant or toward an object.

In Experiment 1, fifteen German infants viewed visual stimuli consisting of three pictures of a woman with a happy expression, while the auditory stimuli consisted of 74 German verbs (spoken forward or backward) recorded by a woman. The infants were first presented with the pictures, with either direct or averted gaze, and then with the auditory stimuli. A video camera recorded the infants’ faces to monitor their attention to the stimuli. Experiment 2 used the same methods as Experiment 1, except that an object was added next to the picture of the face.

In Experiment 1, infants paid closer attention to backward-spoken words than to forward-spoken words when the gaze in the picture was directed toward the infant. Experiment 2 showed no interaction between the object presented with the picture and the auditory stimuli. These findings suggest that the researchers’ first hypothesis was correct in that visual and auditory stimuli do interact early in infancy. This was the first study to show that infants process eye gaze and spoken words together.


---

The authors used the boundary paradigm to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n+2 preprocessing effects. Two experiments addressed the conflicting results of previous research, and the outcome helps explain why similar experiments have produced opposite results.

College students participated in the study. Their eye movements were recorded while they read sentences on a screen. In the experimental sentences, the display change for words n+1 and n+2 occurred within 9 ms of the reader’s gaze crossing an invisible boundary. After a practice block, participants read 120 experimental sentences embedded among 48 filler sentences in random order. Approximately 33% of the sentences were followed by a two-alternative forced-choice comprehension question that subjects answered by pressing the button corresponding to the correct answer on a button box.
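
The gaze-contingent logic of the boundary paradigm can be sketched schematically (this is not real eye-tracker code; the coordinates, boundary position, and strings are invented): while the gaze is left of an invisible boundary, words n+1 and n+2 show preview strings, and as soon as a sample crosses the boundary the display is swapped to the target words.

```python
# Schematic sketch of a boundary-paradigm trial. Gaze samples arrive one
# per millisecond; once a sample crosses the invisible boundary, the
# preview strings for words n+1 and n+2 are replaced by the targets.
# All values are invented for illustration.

def run_boundary_trial(gaze_samples, boundary_x, preview, target):
    display = dict(preview)                  # start with previews shown
    changed_at = None
    for t, x in enumerate(gaze_samples):
        if changed_at is None and x > boundary_x:
            display = dict(target)           # swap to the target words
            changed_at = t                   # sample index of the change
    return display, changed_at

preview = {"n+1": "tbe", "n+2": "xvhtr"}     # masked preview strings
target = {"n+1": "the", "n+2": "river"}
gaze_x = [300, 320, 345, 370, 402, 430]      # boundary placed at x = 400

final_display, change_sample = run_boundary_trial(gaze_x, 400, preview, target)
print(final_display, "- display change at sample", change_sample)
```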

In this study, the researchers were able to test two factors that might explain why some studies found evidence of parafoveal preprocessing of the second word to the right of fixation, whereas other studies found no such evidence. In Experiment 1, the authors investigated whether the properties, specifically word length and word type, of the first word to the right of fixation influenced whether word n+2 could be processed parafoveally. Even when word n+1 was the article "the", arguably the word that can be identified with the least processing effort, they found no evidence of parafoveal lexical preprocessing of n+2, neither when n+1 was the definite article nor when it was a non-article three-letter word.

In Experiment 2, the researchers tested whether the amount of preprocessing of word n+2 was influenced by the frequency of word n. Again, they did not find any solid evidence of parafoveal n+2 preview processing, except in those conditions in which the parafoveal word was subsequently skipped or readers attempted to skip it. The only variable that showed some effect of n+2 preview even when n+1 preview was denied was landing position, although this effect might reflect low-level properties of the masks used rather than lexical processing.

It is, of course, possible that the extent of parafoveal processing of word n+2 is determined by a variable not systematically manipulated in this or any previous study. This study, therefore, does not demonstrate that readers never use parafoveal information from an unidentified word n+1 and from word n+2 at the same time. It does, however, show that readers, at least when reading English, do not seem to make use of parafoveally available information about word n+2 on a regular basis, which argues against parallel lexical processing being the default.

Angele, B., & Rayner, K. (2011). Parafoveal processing of word n+2 during reading: Do the preceding words matter? Journal of Experimental Psychology: Human Perception and Performance, 37(4), 1210-1220. Zc.annie (talk)

---

Semantic Facilitation

Before this study, research on semantic facilitation had relied on lexical decision tasks, which require the subject to determine whether a given stimulus is a word. When two stimuli are presented, the latency to decide that they are words is shorter if the second word is a primary associate of the first ("bread-butter"); this is known as the association effect. It has been found that this effect arises at the encoding stage and that the facilitation is produced more by "automatic" processes than by attention or expectancy. The effect of association on lexical decision time led to investigation of the role of the semantic relationship between words. There had been attempts to tease apart the various types of association underlying semantic judgments, but the question remained open for further study.

This experiment by Fischler builds on the previous research by eliminating direct associative strength from a set of stimuli. Sets of stimulus pairs were constructed for the experiment so the two words were not normatively associated but still had semantic similarity.

Twenty-four participants completed the experiment as part of an undergraduate psychology course experiment. The subjects were placed in front of a four-field exposure box, which was used to present the stimulus. Four groups of pair types were established: associatively related (AR), associatively unrelated (AU), semantically related (SR), and semantically unrelated (SU). These pairs were confirmed before the experiment was conducted.

During the experiment, each subject saw half of the pairs from each of the four pair type groups, giving them a total of 32 positive trials. In addition, there were 32 negative trials in each session that consisted of word-nonword, nonword-word, and nonword-nonword pairs. The order of pair presentation was random, with no more than four positive or negative trials presented consecutively. The participants were instructed to determine whether the presented strings were words and to indicate their response as quickly as possible by depressing the response keys located in front of them. Latency and errors were both recorded.
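This kind of constrained randomization (no more than four trials of the same type in a row) is easy to picture with a small sketch. The following is illustrative only and assumes a simple rejection-sampling approach; it is not taken from the article.

    import random

    # Shuffle 32 positive and 32 negative trials until no run of the same
    # trial type is longer than four (simple rejection sampling).
    def longest_run(seq):
        best = run = 1
        for previous, current in zip(seq, seq[1:]):
            run = run + 1 if current == previous else 1
            best = max(best, run)
        return best

    def make_trial_order(n_pos=32, n_neg=32, max_run=4):
        trials = ["positive"] * n_pos + ["negative"] * n_neg
        while True:
            random.shuffle(trials)
            if longest_run(trials) <= max_run:
                return trials

    order = make_trial_order()   # one acceptable random ordering of the 64 trials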

The analysis focused on the mean latency for correct responses. Both semantically related and associatively related pairs had shorter latencies than their matching control pairs, and the related pairs were responded to significantly more quickly than the control pairs. In addition, word pairs in the semantic condition (SR and SU) had significantly longer latencies than word pairs in the associative condition (AR and AU).

These findings were very similar to results found in earlier studies, suggesting that extending the definition of association to include mutual associates had little effect. In addition, it supports the theory that semantic facilitation can be interpreted as priming, or spreading activation throughout the semantic network. However, further research with more semantically related word pairs across the range of relatedness would be needed to examine the boundary conditions of this priming effect.

Citation: Fischler, I. (1977). Semantic facilitation without association in a lexical decision task. Memory & Cognition. doi: 10.3758/BF03197580. Anelso (talk)


Bilingual Lexicon in Toddlers and Preschoolers Zc.annie (talk)

One of the main tasks of childhood is developing a lexicon, which basically means learning vocabulary. This chapter considers how that development differs between monolingual and bilingual toddlers and preschool children, in particular the relationship between vocabulary size and the amount of input in each language. When counting the vocabulary size of bilinguals, there are two common measures: total vocabulary and total conceptual vocabulary. In the second measure, words that mean the same thing in the two languages are counted as one. Language input does not always translate directly into vocabulary size: the amount of exposure to a language influences vocabulary size in that language only up to a certain point. This conclusion comes from the positive relationship between exposure time and vocabulary size, up to a certain age, for children growing up in a bilingual environment. The wider language environment also matters; for example, Spanish-English bilingual children in the United States tend to use more English vocabulary. One further point is that time spent learning one language does not facilitate learning of the other. Research also shows that children who experience bilingualism through their parents have a higher proportion of words with equivalent meanings across the two languages.

In language processing, bilingual children know that a word heard in one language may or may not refer to something they already have a word for in the other language, which shows that they are aware of more possibilities in language. They can also use the two languages appropriately at home with the bilingual parents from whom they learn them. Given that these children know two languages, one might ask whether it is easier for them to learn a word in the second language. The answer appears to be no: bilingual toddlers tend to learn their second language much as they learn their first. Research also showed that bilinguals' two vocabularies did not overlap any more or less than would be expected from monolinguals. The picture may be different for cognate pairs, words that are spelled similarly in both languages. Research does show that children can recognize such words in the other language, but this may be because the word is treated as part of the base language, so that retrieving it is the same as retrieving a word in the first language.

Goldstein, B. (2004). Bilingual language development and disorders in Spanish-English speakers. Chapter 4. Baltimore: Paul H. Brookes Publishing.

Sentence Processing

Sentence Comprehension and General Working Memory

Working memory is defined in this study as the "workspace in which information is maintained and manipulated for immediate use in reasoning and comprehension tasks" (Moser, Fridriksson, & Healy, 2007). This information is later turned into permanent information that can be retrieved at a later date. Past studies have shown a correlation between sentence processing and working memory, but the interpretation of this relationship has remained unclear, and the present study sought to understand the correlation better. The study is important because there is disagreement about the cognitive construct of working memory: some researchers believe there is a separate working memory that mediates sentence comprehension, while others believe it is part of a single, unified system. There are also clinical implications for this research, including the treatment of aphasia patients, who generally have difficulty matching sentences to pictures and deriving themes from sentences. These deficits are often associated with damage to Broca's area, and more research on working memory and sentence processing could aid aphasia research by localizing where these processes occur. Previous research has generally examined verbal working memory through a reading span task, in which participants judge how truthful a statement is after hearing it and then recall the last word of as many of the sentences as possible. The problem with this type of task is that sentence processing and reading span tend to overlap, so research also needs to analyze nonverbal working memory tasks.

The present research builds on previous work by introducing a nonverbal working memory task to test whether working memory is not just language based (verbal) but general (including nonverbal processing as well). If sentence processing is also correlated with nonverbal working memory, then there is no need to assume a separate working memory dedicated to language.

Sentence-parsing (SP), lexical decision (LD), and nonverbal working memory tasks have all been used in the past to understand how humans interpret language, and all were used in the present study to uncover how working memory affected participants' sentence processing. The lexical decision task presented participants with both real words and non-words and required them to respond as quickly and accurately as possible to indicate which they had seen; this served as a control task. The sentence-parsing task presented participants with semantically plausible (e.g., "The goalie the team admired retired") and implausible sentences (e.g., "The candy that the boy craved ate") on a computer screen, and at the onset of the last word participants had to judge whether the sentence was plausible by answering "yes" or "no". The nonverbal working memory task consisted of presenting Chinese characters to participants on a page and later asking them to use those characters in a simple equation; participants had to decide whether the equation's solution was correct by selecting a "yes" or "no" button. Sixty such trials were completed, with both accuracy and reaction times recorded.

Participants individually completed these tasks online, and then an additional nonverbal working memory task was given offline. The independent variables were the four different tasks the participant was made to do (nonverbal working memory task, SP task, LD task all of which were online, and an additional nonverbal working memory task performed offline). The dependent variables of this study were the reaction times of each task, as well as the accuracy in completing each task.

The present study found that nonverbal working memory is correlated with sentence processing, suggesting that sentence parsing draws on a general working memory capacity rather than a language-specific one. This is what the researchers had hypothesized at the outset, although the correlation obtained was only moderate (.51 on average). The study reinforces the idea that a single working memory capacity supports all types of language processing, rather than multiple working memories dedicated to different tasks.
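For readers unfamiliar with how such a correlation is computed, here is a small illustrative sketch. The scores below are invented for the example (they are not data from the study); with these numbers the Pearson correlation comes out around .46, in the same moderate range as the reported average of .51.

    from statistics import mean, stdev

    # Hypothetical scores for eight participants (invented for illustration only).
    wm_scores   = [42, 55, 38, 61, 47, 50, 58, 44]                  # nonverbal working memory
    sp_accuracy = [0.80, 0.74, 0.66, 0.85, 0.90, 0.70, 0.82, 0.72]  # sentence-parsing accuracy

    def pearson_r(x, y):
        mx, my = mean(x), mean(y)
        covariance = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
        return covariance / (stdev(x) * stdev(y))

    print(round(pearson_r(wm_scores, sp_accuracy), 2))   # prints 0.46 for these made-up numbers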

Moser, D. C., Fridriksson, J., & Healy, E. W. (2007). Sentence comprehension and general working memory. Clinical Linguistics & Phonetics, 21(2), 147-156. doi:10.1080/02699200600782526 Lkientzle (talk) 03:21, 15 March 2012 (UTC)[reply]

---

TAKING PERSPECTIVE IN CONVERSATION: The Role of Mutual Knowledge in Comprehension by Boaz Keysar, Dale J. Barr, Jennifer A. Balin, and Jason S. Brauner Amf14 (talk) 19:05, 6 March 2012 (UTC)[reply]

During a conversation, the two people involved encounter a number of sentences. Because language is ambiguous, these sentences can be interpreted in many ways, so we rely on the "mutual knowledge" shared by the two speakers to decipher what the exchange of information means. For example, if a friend asks to see "this" magazine, one may misunderstand which magazine she is requesting, because the word "this" is not specific enough. If she is referring to an object that both individuals can see, it is easy to conclude that both persons know the conversation involves the visible object. This is called a co-presence heuristic because both individuals can mutually see the magazine. However, a person may have more than one magazine in their line of vision and could therefore misjudge which magazine is being denoted. This is called an egocentric heuristic because only one person can see the second magazine.

In order to correct mistaken interpretations, we must ensure that both individuals in a conversation have a mutual knowledge of what they are speaking about. To study this concept, experiments were conducted with multiple referents visible to a speaker during a conversation so that it was unclear which object was the one intended to be referred to.

The experiments explored a difference in perspectives. Participants were given an array of objects and, sitting across from a confederate, were asked to move the objects to new places. However, along with the object being discussed, a similar unintended referent was placed where the participant could see it but the confederate could not. Therefore, when asked to move an object, participants could fixate on both objects that might be the referent. Eye movements were tracked to detect where the subjects' eyes were looking during the experiment. When the eye fixations were analyzed, the results showed that participants looked twice as often at the unintended referent and also stared at it for much longer periods of time. They were aware that the confederate could not see the object, but they still considered it a possible referent in the conversation. When they finally chose which object to move, their times were significantly slower than those of the control group that lacked the extra referent.

The fact that the participants knew the confederate could not see the extra object led them to correct their errors in interpretation, but at a much slower rate. This knowledge was necessary for the task to go smoothly. What was most interesting to the experimenters is that although participants had information about which objects were inaccessible to the confederate, their eye movements still suggest that they used egocentric interpretations throughout the experiment.

These findings suggest that although mutual knowledge is necessary in a conversation, it does not restrict a person's initial interpretation. Listeners still consider all possible referents and are then forced to use mutual knowledge to correct their comprehension, which causes a delay in understanding.

---

Eye Movements of Young and Older Adults During Reading

Previous research on working memory limitations in adults reading sentences has relied largely on the visual moving-window paradigm, a method that lets the reader process phrases one at a time. Much of this research has suggested that age differences, and the working memory deficits that accompany aging, do not lead to the use of different processing strategies. However, the introduction of eye-tracking technology to the field calls attention to individual differences in sentence processing. Susan Kemper and Chiung-Ju Liu set out to determine whether eye tracking changes the conclusions drawn from this previous research.

The present study explores the differences in the processing of both ambiguous and unambiguous sentences between age groups. Specifically, the researchers examined processing of various sentence types with differing points of ambiguity. This research is deeply important as it contributes to understanding the effects of aging on speed of reading as well as the length of time it takes an individual to clarify ambiguity in a given sentence.

Experiment one examined differences in the processing of various sentences between younger and older adults. The use of eye tracking software was of primary importance in assessing these differences. Sentence types used by the experimenters included pairs of cleft object and cleft subject sentences as well as subject relative clause sentences and object relative clause sentences. Each type of sentence featured a critical region which participants were predicted to focus on more intently than other portions of the sentences. Participants were randomly assigned to lists of sentences and asked to read them while their eye movements were tracked. Additionally, participants completed a sentence acceptability judgment task in which comprehension of the sentences was assessed. Four fixation measurements were collected through eye movement software which indicated the length of time participants focused on the critical region of each sentence. Experiment two followed procedures identical to those of experiment one, but featured sentences in which ambiguity was increased by deleting the word “that” from each sentence.
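To illustrate what such region-based fixation measures involve, here is a minimal sketch of computing two common measures (total time and first-pass time) for a critical region. The data format, function name, and choice of measures are assumptions for illustration, not details taken from the article.

    # Illustration only: compute total fixation time and first-pass time for a
    # critical region, given fixations as (word_index, duration_ms) in time order.
    def region_measures(fixations, region):
        first_word, last_word = region
        in_region = lambda w: first_word <= w <= last_word

        total_time = sum(d for w, d in fixations if in_region(w))

        # First-pass time: fixations inside the region from first entry until the
        # eyes leave the region for the first time (in either direction).
        first_pass = 0
        entered = False
        for w, d in fixations:
            if in_region(w):
                entered = True
                first_pass += d
            elif entered:
                break
        return {"total_time_ms": total_time, "first_pass_ms": first_pass}

    # Hypothetical fixation sequence with a late regression back into the region.
    fixations = [(2, 210), (4, 250), (5, 230), (6, 220), (4, 190)]
    print(region_measures(fixations, region=(4, 5)))   # {'total_time_ms': 670, 'first_pass_ms': 480}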

Results from experiment one indicated that cleft subject and subject relative sentences were much more difficult to process for both age groups. However, older adults experienced greater difficulty in processing the sentences, suggesting that limitations occurring in the working memory of older adults affect the ability to resolve ambiguities. Experiment two revealed that even more time was needed in sentence processing when the word “that” was eliminated from each sentence structure, especially for older adults. Ultimately, the findings of both experiments revealed that older adults had to regress to the critical regions of the sentences more often and required additional time to process ambiguities in the sentences. Older adults cannot hold all components of a sentence in their working memory while processing, and therefore must reprocess ambiguities several times. Through the use of eye tracking software, the present study reveals that younger and older adults may in fact differ in their abilities to process and clarify ambiguities in various sentence structures. These findings contradict previous research stating that there is little difference between age groups in sentence processing. These findings are of great importance as they contribute to understandings of the effects of aging on working memory.

Kemper, S., & Liu, C. (2007). Eye movements of young and older adults during reading. Psychology And Aging, 22(1), 84-93. doi:10.1037/0882-7974.22.1.84 Kfinsand (talk) 03:00, 8 March 2012 (UTC)[reply]

---

Effects of Age, Speech of Processing, and Working Memory on Comprehension of Sentences With Relative Clauses (Caplan, DeDe, Waters, Michaud) Katelyn Warburton (talk) 20:46, 8 March 2012 (UTC)[reply]

Previous studies have examined the effects of age on comprehension and memory, and show a general decline in abilities as age increases. Past research has mainly focused on comparisons between extreme age groups or within certain age groups. The current study focuses on the influence of age on the speed in which an individual is able to process and remember information from specific sentences.

This study was conducted using two different experiments: the first presented a plausibility judgment task and the second was a question-answer verification task. Two-hundred individuals divided among four age groups (70-90, 50-69, 30-49, and 18-29) were recruited through Boston University and the Harvard Cooperative Program on Aging. In the first experiment, twenty-six pairs of sentences of four types (Cleft-Subject, Cleft-Object, Subject-Subject, and Subject-Object), either implausible or plausible, were presented. Participants were asked to respond “yes” or “no” to the sentences they read, indicating whether or not they made sense.

In the second experiment, thirty-six stimulus sentence pairs were presented; one member of each pair contained a sentential complement (SC) with a relative clause and the other contained a doubly embedded relative clause (RC). For example, an SC sentence read "The dealer/indicated that/the jewelry/that/was identified/by the victim/implicated/one of his friends," whereas an RC sentence read "The dealer/who/the jewelry/that/was identified/by the victim/implicated/was arrested by the police." Participants were asked to make a yes or no decision involving comprehension of these statements.

Results showed significant negative correlations between age and working memory and between age and speed of processing. These results replicated several previous findings, but this study was the first to show the effect of speed of processing on comprehension of sentences with relative clauses. Results also showed that working memory supports controlled, conscious processes that require storage of linguistic representations, rather than automatic, unconscious ones.

This study suggests that future research should evaluate the nature of online processing and task-related processing to understand the distinctions we are able to make about complex thoughts, concepts, and sentences. Future studies should also evaluate the differences within age categories to more specifically understand what factors can influence working memory and sentence comprehension.

Caplan, D., DeDe, G., Waters, G., & Michaud, J. (2011). Effects of age, speed of processing, and working memory on comprehension of sentences with relative clauses. Psychology and Aging, 26(2), 439-450.

---

Pay now or pay later: Aging and the role of boundary salience in self-regulation of conceptual integration in sentence processing Smassaro24 (talk) 18:46, 10 March 2012 (UTC)[reply]

Among the many things that change as a person ages, language processing shows multiple differences as age progresses. When reading, people pause at syntactic boundaries in order to process the information they have read. These "micro-pauses" are known as "wrap-up" and can create longer-lasting representations of the information read, which are used later in interpreting the sentence. Previous research supports the idea that wrap-up time increases with age. Elizabeth Stine-Morrow and her fellow researchers examined wrap-up for texts with different strengths of syntactic boundaries. Using the same sentence material, they varied the boundaries from unmarked (no boundary) to weakly marked (commas added) to strongly marked (a period at the end of a sentence). They aimed to determine whether greater wrap-up time at a clear syntactic boundary (such as a period) could influence the length of wrap-up at a subsequent boundary within the sentence, and whether an age difference existed.

In the initial study, the participants consisted of twelve older adults, with an average age of 65.7 years, and twelve younger adults, with an average age of 22.9 years. The participants were asked to read passages of text using the moving window method, in which the readers revealed each word of text individually by pressing a button. Each passage included unmarked, weakly marked and strongly marked boundaries. The task measured processing time for each word segment.

The results of the study indicate that the stronger the boundary, the longer the reader's wrap-up time. The study also supports a "pay now or pay later" effect, in which longer wrap-up early in the sentence can reduce the wrap-up time needed to conceptualize the information at the end of the sentence. In line with earlier research, the study also found that when boundaries were unmarked, the older readers used more frequent wrap-up pauses, and these occurred earlier in sentence processing.

A follow-up study aimed to replicate the data of experiment 1, using an eye-tracking device in place of the moving window method. The participants were asked to read the passages from a computer screen while wearing a head-mounted device that measured where their eyes were directed at all times. The data from experiment 2 supported and strengthened the prior experiment's conclusion that older readers showed longer and more frequent wrap-up.

The results of the study suggest that wrap-up can be triggered by the presence of clearer syntactic boundaries. This affects how the meaning of the sentence is processed before the sentence has been fully read, which contributes to a shorter wrap-up time later in processing the sentence. The studies also show the tendency of older adults to spend more wrap-up time at breaks in the syntactic boundaries, suggesting a downstream processing advantage that helps older readers consolidate the meaning of the material they have read, making reading more efficient. Both strongly marked syntactic boundaries and longer wrap-up times helped the participants, especially the older readers, to understand the meaning of the passages they read.

Stine-Morrow, E. L., Shake, M. C., Miles, J. R., Lee, K., Gao, X., & McConkie, G. (2010). Pay now or pay later: Aging and the role of boundary salience in self-regulation of conceptual integration in sentence processing. Psychology And Aging, 25(1), 168-176. doi:10.1037/a0018127

---

Effects of Age, Speed of Processing, and Working Memory on Comprehension of Sentences

Unlike Peter Pan and the Lost Boys, everybody is affected by aging and the consequences that come with it. One of these consequences involves the ability to quickly process and understand sentences. Two theories exist regarding aging and sentence processing. The first states that as people age, they gain more experience with syntactic structures, which allows for more efficient processing. Other theorists, however, suggest that with declining mental faculties, sentence comprehension and interpretation may be slowed. Some studies suggest that elderly people with lower working memory capacities take longer to understand and interpret an ambiguous sentence compared to young adults, while another study found no age differences in auditory comprehension between older and younger adults. This study attempted to assess the effect of age, processing speed, and working memory on sentence comprehension in both young and older adults.

The researchers created two experiments to study the effects of age, processing speed, and working memory on sentence comprehension. One experiment presented sentences in a plausibility judgment task and the other was a verification task. The study included 200 participants in 4 categories: old elderly (ages 70-90), young elderly (ages 50-69), old young (ages 30-49), and young young (ages 18-29). Participants were asked to complete 3 tests of working memory capacity: alphabet span, subtract 2 span, and sentence span. Next, participants completed 4 different measures of processing speed: digit copying, boxes, pattern comparison, and letter comparison.

To test the participants' sentence comprehension in experiment one, 4 different types of sentences were used: cleft-subject, cleft-object, subject-subject, and subject-object. The cleft sentences start with "it was" and then continue on. The subject sentences have the subject before the verb, and the object sentences have the direct object first, then the subject, and finally the verb. In experiment two, the researchers created sentences that had only one relative clause (a "that" clause) or sentences that had one relative clause embedded within another. To test sentence comprehension, participants used a self-paced reading paradigm in which dashes appeared on the screen. Each time the participant pressed a key, one part of the sentence appeared; pressing the key again made that part disappear and revealed the next part of the sentence.
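The self-paced reading procedure described above can be pictured with a short sketch. This is illustrative only and assumes hypothetical display and keyboard helpers; it records how long each segment stays on screen, which is the reading-time measure this kind of paradigm provides.

    import time

    # Minimal sketch of a self-paced, segment-by-segment reading trial.
    # `display` and `wait_for_keypress` are hypothetical helpers, not a real library API.
    def self_paced_trial(segments, display, wait_for_keypress):
        reading_times = []
        display.show("--- " * len(segments))   # dashes stand in for the unread sentence
        wait_for_keypress()                    # first key press reveals the first segment
        for segment in segments:
            display.show(segment)              # the current segment replaces the dashes
            start = time.perf_counter()
            wait_for_keypress()                # the next press hides it and moves on
            reading_times.append(time.perf_counter() - start)
        return reading_times                   # one reading time per segment, in seconds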

After analyzing the results of their two experiments, the researchers found that working memory and speed of processing were positively correlated. They also found that accuracy of sentence comprehension was positively correlated with working memory. The results from experiment one show that reading times for cleft-object sentences were longer than for cleft-subject sentences, and reading times for subject-object sentences were longer than for subject-subject sentences. These effects of age and working memory on accuracy and reading times correspond with results from previous studies. The researchers also found that the relationships of age, speed of processing, and working memory to the comprehension measures followed their predictions: comprehension difficulty increased with age and decreased with faster processing speed and larger working memory. Their analysis showed that older individuals spent more time reading and processing the sentences but had poorer comprehension of them. This research shows that age does have an effect on sentence comprehension and that more research should be done to see whether there is anything the aging population can do to slow this decline in comprehension ability so that they can remain functional in society for longer.

Caplan, D., DeDe, G., Waters, G., & Michaud, J. (2011). Effects of age, speed of processing, and working memory on comprehension of sentences with relative clauses. Psychology and Aging, 26(2), 439-450. doi:10.1037/a0021837 138.236.22.152 (talk) 01:25, 13 March 2012 (UTC)[reply]

---

These researchers were concerned with the lexical processing effects that arise when English speakers process a sentence. Their first hypothesis was that during lexical processing, a word should receive greater or lesser activation depending on the context in which it occurs. The second hypothesis was that once the target word has been activated, listeners' interpretations would follow a particular pattern. In essence, the researchers were examining the role of prosody in sentence processing.

The experimenters used forty-three participants from the University of Rochester and the surrounding community. All of the participants were native English speakers with normal hearing and normal or corrected-to-normal vision. In the experiment, they tested whether the target word or the competitor word was activated after processing the sentence (for example, the target word was "antlers" and the competitor word was "ant"). Each test display contained four pictures: the target word, the competitor word, and two distractors that were phonologically unrelated to the target word. To keep the participants from becoming familiar with the target words, the experimenters added twenty filler sentences that had nothing to do with the experiment. During the experiment, the authors shifted the pitch of the sentences. There were two pitch conditions: in the low-high condition, the first five words were low pitched and prosodically aligned so that their pitches matched; in the high-low condition, the first five words were high pitched and aligned in the same way. During the procedure, the experimenters recorded eye movements. Each trial began with four pictures appearing on the screen, after which one of the prerecorded sentences was played. The participants were to click on the picture corresponding to the target word. The pictures were randomly arranged on the screen.

Three of the participants were excluded from the data because of low accuracy or failure to complete the trials. The overall data suggested that the task was rather easy: participants chose the incorrect picture only three percent of the time. In the perceptual grouping analysis, participants in the high-low pitch condition were more likely to choose the competitor word than those in the low-high condition. The authors believe that this experiment accurately portrays the perceptual grouping process. These results suggest that the competitor word was more likely to be activated when the target word was stated. Their data support the notion that natural utterances are more easily processed than unnatural utterances. In the future the experimenters want to work with natural conversational speech instead of sentences that are rarely uttered, and to examine the prosody of natural sentences as well.

Brown, M., Salverda, A. P., Dilley, L. C., & Tanenhaus, M. K. (2011). Psychonomic Bulletin & Review, 18(6), 1189-1196. Gmilbrat (talk)

---

Sentence comprehension in young adults with developmental dyslexia By Wiseheart, Altmann, Park, and Lombardino. Misaacso (talk) 02:27, 14 March 2012 (UTC)[reply]

The researchers examined whether dyslexia stems from a deficit in working memory rather than a deficit in linguistic processing. Working memory is a type of short-term memory that assists in conscious and linguistic processing. Past research linked working memory to weaknesses in understanding sentences, revealing phonological deficits in the working memory of children with dyslexia. These children are unable to successfully perform syntactic tasks, including grammar judgments and sentence correction. Some sentences require information to be held in working memory for a longer period of time, which can be problematic for individuals with poor working memory. When the relative clause of a sentence is center-embedded, the subject has to stay available for recall in working memory until the verb is presented so that the meaning of the sentence is clear. A center-embedded clause that is object-relative ("The man that the woman is pulling pulls the dog") is thought to place more demand on working memory than a center-embedded clause that is subject-relative ("The man that is pulling the woman pulls the dog"). It is not known what part of working memory assists a normally developed adult in processing complex sentences, because much of the research deals with children and dyslexia, not adults. Early difficulties in language development are predictive of later reading problems, but the direction of causation is not known.

The researchers wanted to find out whether children's difficulties with sentence comprehension would continue into adulthood and cause problems with reading ability. They hypothesized that adults with dyslexia would not do as well as controls on tasks that taxed working memory and challenged understanding of syntactic structures. The experiment tested young adults with and without dyslexia using a sentence-picture matching task, to see how syntactic complexity and working memory influenced the subjects' ability to process sentences. Experiment one dealt with active and passive voice, while experiment two used four types of relative clause sentences to see how people responded to and comprehended them. In both experiments, the dependent variables were accuracy and response times on the comprehension task.

Participants who were eligible for the study completed working memory, vocabulary, and sentence comprehension tasks. In the memory task, participants completed digit span forward and digit span backward, which consisted of reciting an increasingly long series of numbers either in the order heard or in reverse order. For vocabulary, participants completed tests that had them define various words. The sentence comprehension task had the participants indicate whether or not a picture matched the sentence provided.

In experiment one, all participants responded more slowly to passive than to active sentences, but participants with dyslexia were marginally slower. The accuracy of people with dyslexia was significantly lower than that of controls, though all participants were less accurate with passive sentences than with active sentences. In experiment two, people with dyslexia did more poorly on center-embedded relative clause sentences. In sentences with relative clauses, decision time was not affected by working memory or word reading times. Overall, the important findings are that the participants with dyslexia differed from controls in their accuracy in understanding passive sentences, their response times for passive sentences, and their accuracy in interpreting the complex sentences provided. The hypothesis was supported: there are syntactic comprehension deficits among young adults with dyslexia that could be a feature of the disorder.

Wiseheart, R., Altmann, L. J. P., Park, H., & Lombardino, L. J. (2009). Sentence comprehension in young adults with developmental dyslexia. Annals of Dyslexia, 59, 151-167. doi: 10.1007/s11881-009-0028-7 Misaacso (talk) 05:28, 15 March 2012 (UTC)[reply]

---

Arnold, J. E., Tanenhaus, M. K., Altmann, R. J., & Fagnano, M. (2004). The old and thee, uh, new: Disfluency and reference resolution. Psychological Science, 15(9), 578-582. doi:10.1111/j.0956-7976.2004.00723.x Hhoff12 (talk) 14:40, 15 March 2012 (UTC)[reply]

Previous research has looked at how listeners are able to quickly extract linguistic information about meaning and structure from fluent utterances. However, speakers are frequently disfluent, producing hesitations, false starts, repetitions, or "thee" instead of "the". A few recent studies suggest that disfluencies such as "uh" can be beneficial because the slight delay alerts the listener to the upcoming speech. Other research has shown that people are more likely to say "thee" when they are having trouble coming up with what to say next; the researchers of this study therefore reasoned that people who hear "thee uh" before a noun will expect it to refer to something new rather than something that has already been given. This study is important because it considers real-world speech features that are more applicable to everyday life.

First, they confirmed their reasoning that putting “thee uh” before a noun would make listeners look to something new rather than something that’s already been given. Participants viewed a display with four objects placed around a central spot. They were given instructions like, “Put the grapes above the candle.” Then they either heard, “Now put” in the short condition or, “Now put the/thee uh” in the long condition. The participant was asked to circle the label of the object he or she thought the speaker would say next. In the long conditions, participants chose new objects 35% of the time when the article “the” was used, but when “thee uh” was used, participants chose a new object 85% of the time. This shows that disfluencies affect expectations of words to come.

Other research shows that words with similar beginnings all become activated when a person hears a word. The ones with the strongest overlap compete most and are called cohort competitors. For this study, the researchers predicted that listeners should be more likely to look at previously given cohort competitors when the noun is preceded by the fluent article "the," and more likely to look at new cohort competitors when the noun is preceded by the disfluent article "thee uh."

They monitored the eye movements of participants while they viewed computer screens. A display similar to the one used earlier was used in this experiment and contained two cohort competitors, such as a candle and a camel. The other two objects were distractors with no phonetic overlap. The first sentence used one of the cohort competitors to establish it as given, leaving the other cohort competitor as new. The first and second sentences were divided into four different conditions: fluent-given target, fluent-new target, disfluent-given target, and disfluent-new target.

The results showed that the article affected the target noun. When a fluent article was given, participants initially spent more time looking at the given cohort competitor. When a disfluent article preceded the noun, they looked to the new cohort competitor. This is important because it shows that there are multiple pieces that all come together in language comprehension and that natural speech could be more important than previously thought.


---

Processing Coordination Ambiguity by Engelhardt and Ferreira

There are two types of sentence processing models developed from previous research: restricted and unrestricted. The Garden Path model is the most famous of the restricted models and says that the processor uses syntactic information to determine meanings in order of simplicity. The Constraint-based model is the primary unrestricted model and says that the sentence processor uses information such as context and frequency of occurrence to form an initially more complex meaning. From previous research we have learned that visual context can resolve ambiguities in phrases that can be interpreted as either a location or a modifier (e.g., "put the apple on the towel in the box").

The current study investigated coordination ambiguity, the ambiguity that arises when two phrases are combined, e.g., "put the butter in the bowl and the pan on the towel," where the noun phrase "the pan" can refer either to a location for the butter or to an object to be moved. Coordination ambiguity is common and depends on whether the noun phrase following the conjunction is part of a complex object or the subject of a conjoined sentence. Previous studies suggest that sentence coordination produces slower reading times because noun-phrase coordination is syntactically simpler. Participants received instructions of three types: (1) noun-phrase coordination, (2) ambiguous sentence coordination, and (3) unambiguous sentence coordination. Eye movements were monitored with a head-mounted eye tracker. There were 33 visual displays: 3 practice, 15 experimental items, and 15 fillers; the experimental items were of one of the three types. The researchers predicted that listeners would make predictable fixations to objects on the display as they heard the corresponding words, and that there would be a connection between gaze and planned hand movements related to pointing or grabbing. The study revealed a preference for noun-phrase coordination, which is surprising because the researchers' pre-experiment analysis of sentences found sentence coordination to be three times more prevalent than noun-phrase coordination. They also found that fixation remained high even with the unambiguous sentence instruction (although this could have been because the display was visually interesting). Another surprising finding was that the garden-path effect was observed only during the ambiguous noun phrase, contrary to previous research.

Based on the results, the researchers feel they can rule out unrestricted processing models. A simplicity heuristic is instead the favored basis of processing.

Engelhardt, P. E., & Ferreira, F. (2010). Processing coordination ambiguity. Language and Speech, 53(4), 494-509. doi:10.1177/0023830910372499 Mvanfoss (talk) 01:02, 15 March 2012 (UTC)[reply]

---

“Integration of Visual and Linguistic Information in Spoken Language Comprehension”

It has been thought that as a spoken sentence unfolds, it is structured by a syntactic processing module. Previous research has shown that brain mechanisms are responsible for the rapid structuring of input, but it has not been determined whether the early moments of syntactic processing can be influenced by removing or reducing nonlinguistic information in the environment.

However, with new technology, Tanenhaus and his colleagues were able to use the tracking of eye movements to provide insight into the many processes involved in language comprehension. In particular, when subjects were given complex instructions, their eye movements closely followed the words spoken to them and were relevant to establishing reference. From their experiments they found an integration of visual and linguistic information. They were interested in determining whether information provided by the visual context affects syntactic processing; specifically, they investigated whether visual context affects how a subject processes an instruction.

Six subjects were presented with either ambiguous or unambiguous instructions. These sentences were "Put the apple on the towel in the box" and "Put the apple that's on the towel in the box". The instructions were then paired with different visual contexts (a one-referent context that supported the destination interpretation of the instruction, and a two-referent context that supported the modification interpretation), creating four experimental conditions. The subjects' eye movements were tracked while they heard each sentence in its visual context in order to determine how visual information is used while processing both ambiguous and unambiguous sentences.

The results of the eye movement fixation patterns of the six subjects suggested that in a natural context, visual references are sought in the early moments of linguistic processing. Specifically, “on the towel” was originally seen as a destination in the first one-referent context but as a modifier in the two-referent context. In addition, when presented with an unambiguous sentence, there was never any confusion presented in the eye movements. The participants did not look at incorrect destinations as they did with the ambiguous instruction.

These results are important because they shed light onto the way that language is processed using cues from the surrounding environment. It suggests that the approaches to language comprehension and processing must integrate the different processing systems. The approaches in the future must combine both linguistic and non-linguistic information.

Tanenhaus, M.K., M.J. Spivey-Knowlton, K.M. Eberhard, J.C. Sedivy (1995). Integration of Visual and Linguistic Information in Spoken Language Comprehension. Science. Doi: 10.1126/science.7777863. Anelso (talk)

---

Grammatical and resource components of sentence processing in Parkinson’s disease

Because sentence comprehension requires not only linguistic skills but also cognitive resources, this study aimed to specify which aspect of sentence comprehension adults with Parkinson's disease struggle with. It is known that Parkinson's disease patients have difficulty understanding sentences that are grammatically complex. However, this may not be entirely due to linguistic impairment; recent studies have shown that their comprehension difficulties may be related to cognitive resources such as working memory and information-processing speed. fMRI studies suggest that frontal-striatal brain regions are recruited for the cognitive components of sentence processing. This area is compromised in patients with Parkinson's disease, and previous studies have shown that it is activated less in nonlinguistic tasks that require similar problem solving. In this study, the authors wanted to combine these previous lines of research and determine whether the limitations of this area in patients with Parkinson's disease affect the cognitive processes necessary for understanding grammatically complex sentences.

The study included seven right-handed patients with Parkinson's disease but no dementia, and nine healthy seniors of the same age. They were presented with four types of sentences. Grammatical aspects of comprehension were tested by placing clauses in the sentences that were either subject-relative or object-relative. The first type was subject-relative with short linkage: "The strange man in black who adored Sue was rather sinister in appearance." The second was subject-relative with long linkage: "The cowboy with the bright gold front tooth who rescued Julia was adventurous." The third was object-relative with short linkage: "The flower girl who Andy punched in the arm was five years old." Finally, the fourth was object-relative with long linkage: "The messy boy who Janet the very popular hairdresser grabbed was extremely hairy." Working memory was taxed by separating the head noun from the position where the displaced noun phrase is interpreted. As soon as subjects knew the answer to the question provided at the beginning of each run, "Did a male or female perform the action described in the sentence?", they pressed a button, which triggered the presentation of the following sentence.

Results showed that all subjects activated brain regions associated with grammatical processing, such as posterolateral temporal and ventral inferior frontal regions of the left hemisphere. However, there was an important observation the authors recognized. Healthy seniors activated brain regions in the frontal-striatal brain regions used for cognitive tasks, especially on the sentences with long linkage, suggesting a greater use of working memory. These areas include left dorsal inferior frontal, right posterolateral temporal, and striatal regions that are also associated with cognitive functions during sentence processing. While the patients with Parkinson’s disease accessed these regions less, they activated different regions much more. They had increased activation of the right inferior frontal and left posterolateral temporal-parietal areas. The authors suggest that this compensatory up-regulation of certain brain regions allows the patients with mild cases of Parkinson’s disease, like those in this study, to maintain accurate sentence comprehension.

Grossman M, Cooke A, DeVita C, Lee C, Alsop D, Detre J, Gee J, Chen W, Stern MB, Hurtig HI. Grammatical and resource components of sentence processing in Parkinson’s disease: an fMRI study. Neurology. 2003;60:775–781. Doi: 10.1212/01.WNL.0000044398.73241.13 Lcannaday (talk) 16:13, 15 March 2012 (UTC)[reply]

---

Even though there are many different languages in the world, the way we process sentences is largely similar across languages, with some small differences.

To examine how language processing is similar and different across languages, the article discusses two main approaches. One treats language as a between-subjects variable by asking listeners "Who did the action?" for sentences such as "The rock is kissing the cow." The question engages different cues in different languages because of how sentences are structured: the hierarchy of cues that become active differs with sentence structure, so sentence comprehension and production differ across languages. For the example above, English listeners tended to choose the first noun, whereas speakers of most other languages chose the second noun.

Regarding sentence structure, English relies more on word order than other languages do; the article compares English with Italian. In a task that required participants to identify, from a visual display, who performed the bad action in an ambiguous sentence, English speakers tended to take longer on center-embedded sentences, whereas Italian speakers took longer when their favored cue was missing from the sentence.

Although sentence processing differs slightly across languages, all languages are learnable and share most of the same processing machinery, from speech perception to word and sentence processing.

Bates E, Devescovi A, Wulfeck B. Psycholinguistics: A cross-language perspective. Annual Review Of Psychology [serial online]. 2001;52:369-396. Available from: PsycINFO, Ipswich, MA. Accessed March 15, 2012. Zc.annie (talk) 16:34 15 March 2012(UTC)

Semantic priming during sentence processing by young and older adults.

How age changes language processing is a topic with two opposing views. The first holds that age is irrelevant, with all language capacities maintained even in old age, while the other proposes that growing older leads to a decline in verbal abilities. This study attempted to find common ground between the two by differentiating among the cognitive processes involved in language processing. The authors propose that age does not cause language skills to deteriorate, but it does make it more difficult for older adults to remember what they have understood.

In order to isolate processing from memory, a lexical decision task was used. When a word that completes a sentence appears after that sentence, subjects respond faster than if the word is unrelated or a non-word. The lexical decision task used in this study was split into three parts. In the first, the target words were related to a description of a scene presented beforehand ("The round table is usually in the kitchen", related target word "chair," unrelated "boat"), while in the second, the words were merely implied by the sentence ("The cook cut the meat," related word "knife," unrelated "key"). Both tasks require the use of inferential information provided by the priming sentence, which is difficult for older adults to use. The third task used target words that were related to only one word in the sentence ("She gave her dog something to chew", related word "bone," unrelated word "nest"). Lexical decision tasks were interspersed with recognition tests used to assess memory.

The authors expected that older adults would respond more slowly to related words in the first two tasks due to their comprehension difficulties with implied, inferential information. The third task allows for the examination of sentence processing capacities: if older adults have "difficulty with deep processing of sentences," then their reaction times should be much slower than if their abilities are intact.

The data revealed that sentence processing does not appear to be impaired by age. Related words were recognized faster than unrelated ones in both the older and younger adults. However, older adults did have more difficulty with sentence retention. The authors conclude that age does not degrade the ability to make inferences or process sentences, and effects that seem to be caused by such difficulties are likely caused by memory issues.

Burke, D. M., & Yee, P. L. (1984). Semantic priming during sentence processing by young and older adults. Developmental Psychology, 20(5), 903-910. doi:10.1037/0012-1649.20.5.903 AndFred (talk) 00:47, 16 March 2012 (UTC)[reply]

---

Ellipsis is a term used to describe a part of a sentence that is left out but can be identified through context. Previous research has found that the difficulty of the antecedent (an earlier part of the sentence that gives context for the ellipsis) does not affect the processing of the ellipsis. Omitted information has been found to be processed immediately, when the antecedent is re-activated. Sentences conjoined with the word "because" delayed the re-activation of the antecedent compared with "and"; "and" and "because" are words that imply cause-effect relations, so the present study took these findings and tested them with "but", a word that does not imply cause-effect relations, predicting that the antecedent would be re-activated after the ellipsis site. Looking at the process of sluicing, this experiment sought to answer how much of the antecedent is recovered in the elided clause and when ellipsis resolution takes place in the sentence.

Forty sentences were created, all containing a sluice (an elided clause). Participants listened to the sentences through headphones in a soundproof booth, facing a computer. They were instructed to indicate quickly and accurately whether a visual probe was an English word or not, and then to listen carefully to the sentences and answer a comprehension question. Measuring accuracy and reaction time, the researchers manipulated the probe position (at the elision site or downstream) and the probe type (subject or object).

The results coincided with the predictions: object-related probes were responded to faster when the probe appeared downstream and slower when it appeared at the elision site. This means that the antecedent in sluicing is recovered as the sentence progresses. The study is a good start toward understanding the way such sentences are processed, but further research is needed to clarify the effect and to determine whether the complexity of the antecedent affects real-time ellipsis resolution.

Poirier, J., Wolfinger, K., Spellman, L., & Shapiro, L. P. (2010). The real-time processing of sluiced sentences. Journal of Psycholinguistic Research, 39, 411-427. doi: 10.1007/s10936-010-9148-9 Ahartlin (talk) 03:41, 29 March 2012 (UTC)[reply]

Taking Perspective in Conversation: The Role of Mutual Knowledge in Comprehension

TaylorDrenttel (talk) 20:47, 13 March 2012 (UTC)[reply]

Language is ambiguous. Comprehension requires deciphering the meaning of context, words, intentions, and many other elements. In conversations, understanding the information each speaker holds also helps to eliminate ambiguity. Previous theories of language suggest that people share a mutual perspective in comprehension. For example, suppose you are sitting at a table with a friend, with a book between you, and your friend asks to see that book. There may be other books around you, but since you both mutually see the book on the table, you assume it is the intended item. The authors of this article attempted to delve deeper into this theory. They believe that people initially use an egocentric heuristic in conversations, meaning that they consider objects that are not seen or known by the other person but are potential possibilities from their own perspective. Furthermore, the experiments in this article attempt to identify whether mutual knowledge is used to correct errors that result from an egocentric interpretation.

In the first experiment, twenty native English speakers played a version of the referential communication game. A shelf with a 4x4 grid of slots contained seven objects, which occupied various slots. The participant sat on one side of the shelf, where all the objects were visible, and the confederate sat on the other side, where some objects were hidden. The confederate was given a picture and directed the participant where to move the various objects to match the picture. At some point during the task the confederate gave an ambiguous direction, which could refer to two objects: one that was visible to both and one visible only to the participant. An eye tracker was used to follow the participant's eye movements and identify how long they looked at the objects. These trials were compared to a control condition in which the object in the hidden slot was changed so that it was not a potential referent of the ambiguous direction. For example, the ambiguous direction was to move the small candle one slot right. The confederate can see only one candle, but the participant can see two. In the control condition a monkey, or some other completely unrelated object, occupied the hidden slot.

On average, the participant’s eyes fixated on the hidden spot almost twice as long when it contained a possible target (test condition) versus an unrelated object (control condition). Furthermore, in the test condition the participant’s initial eye movements were the fastest to the hidden object. The initial fixation on the shared object was delayed in the test condition compared to the control condition. These results show that the egocentric heuristic interferes with their ability to consider the mutually shared object. In some cases the egocentric interpretation was so strong that people picked the hidden object even though they knew the confederate could not see it.

The second experiment was identical to the first except participants helped set up the arrays, so they knew exactly what could not be seen from the confederate’s perspective. The same results were found as in the first experiment.

These results confirm the researchers' expectation that participants occasionally use an egocentric interpretation, which considers objects not within the speaker's view. The egocentric heuristic, though prone to errors, may be used because it is the more efficient strategy and requires less cognitive effort. Overall, this research furthers the understanding of language comprehension and the mental processes involved in conversation.

Keysar, B., Barr, D.J., Balin, J. A., & Brauner, J. S. (2000). Taking perspective in conversation: the role of mutual knowledge in comprehension. Psychological Science, 11(1), 32-38. doi:10.1111/1467-9280.00211


Insensitivity of the Human Sentence-Processing System to Hierarchical Structure Sek12 (talk)

“Hierarchical phrase structures are considered fundamental to any description of the syntax of human languages because of their ability to handle nonadjacent, hierarchical dependencies between the words of a sentence.” The researchers in this study tested three different “probabilistic language models” to find out whether hierarchical structure is actually used in sentence processing. This question is important because it can show that some mechanism other than hierarchical structure shapes our expectations about upcoming words in the sentences we hear and process.

Previous studies have found that certain ungrammatical sentence structures are processed with little difficulty, which points towards insensitivity to hierarchical structure; the present study tries to give a more exact answer to the question of how important hierarchical structure is in sentence processing.

In this experiment the researchers used and compared three different types of language models: probabilistic phrase-structure grammars (PSGs), Markov models, and echo state networks (ESNs). Each model produced surprisal estimates for upcoming words, and these estimates were then evaluated against measures of human sentence processing.

Through evaluating the three language models, the researchers found that the sequential ESNs produced surprisal values that accounted for the human data better than those of the hierarchical PSGs. This suggests that humans may rely more heavily on sequential than on hierarchical structure in sentence processing. One open problem is whether these findings generalize to a wider range of language models, both hierarchical and sequential. The researchers' hypothesis was supported: hierarchical sentence structure appeared to be less important to sentence processing than sequential structure.
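
To make the notion of surprisal concrete, the sketch below shows how a simple bigram Markov model, one of the sequential model types compared in studies like this, assigns a surprisal value (the negative log probability of a word given its context) to each word of a sentence. This is only an illustration: the toy corpus, the smoothing constant, and the function names are invented for the example and are not the authors' actual models or data.

```python
import math
from collections import Counter

# Toy training corpus, invented purely for illustration.
corpus = [
    "the dog chased the cat".split(),
    "the cat chased the mouse".split(),
    "the dog saw the mouse".split(),
]

# Count unigram and bigram frequencies, with <s> marking sentence starts.
unigrams = Counter()
bigrams = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

def surprisal(prev, word, alpha=0.1):
    """Surprisal in bits of `word` given `prev` under an add-alpha-smoothed bigram model."""
    vocab_size = len(unigrams)
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
    return -math.log2(p)

# Higher surprisal means a less expected word, which is predicted to slow processing.
test_sentence = ["<s>"] + "the dog chased the mouse".split()
for prev, word in zip(test_sentence, test_sentence[1:]):
    print(f"{word:>7}: {surprisal(prev, word):.2f} bits")
```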

Frank, S. L., & Bod, R. (2011). Insensitivity of the human sentence-processing system to hierarchical structure. Psychological Science, 22, 829-834. doi:10.1177/0956797611409589

Bilingualism

Kempert, S., Saalbach, H., & Hardy, I. (2011). Cognitive benefits and costs of bilingualism in elementary school students: The case of mathematical word problems. Journal of Educational Psychology, 103(3).


International comparative studies have shown that students with immigrant backgrounds face serious academic problems in school due to language barriers in the classroom. Regardless of cognitive ability, English-as-second-language speakers perform worse than their native English-speaking counterparts in reading, math, and science. Previous research has also shown that proficient bilinguals perform better on tests conducted in the same language as the one they were instructed in (i.e., if you learned math in German and take a test in German, you will do better than if the test were presented in English, even if you are equally proficient in both). These findings contrast with other studies showing increased cognitive abilities in bilingual speakers compared to their monolingual counterparts. This advantage in cognitive functioning comes from bilingual speakers constantly selecting between two different language systems, which enhances executive control processes.

In the present study, researchers attempted to determine what practical role bilingualism plays in the classroom, specifically in the mathematics domain. The research looked at the effects of bilingualism on arithmetic word problem solving because, according to previous research, language processing and mathematical problem solving use similar cognitive functions and are therefore closely connected in their effects on bilingual performance. This is an important study due to its practical applications to classrooms with multicultural students who are proficient in more than one language, and to finding effective ways to support these students throughout their education.

Researchers examined whether a student's level of proficiency in an instructional language predicted how well they performed on a math test in the same language. They hypothesized that proficiency in the language of instruction would be correlated with solution rates on these math problems, even with cognitive ability and other socioeconomic factors controlled. Researchers also predicted that testing in the language of instruction would yield higher solution rates than a test presented in the students' native language when that was not the language they had been instructed in.

The independent variables of this study were the native language of the student tested (either German or Turkish) and the language used for the test. The dependent variable was how the subject performed (accuracy of solutions) on the mathematical exam. Results of the bilingual students were compared to those of their monolingual counterparts. The procedure consisted of gathering background data, giving word problems in German, giving a language proficiency test in German, and then a reading comprehension test in German. This was followed by a second session that included a cognitive ability test, an arithmetic skills test, and, for the bilingual subjects, word problems and a language proficiency test in Turkish. The order of the Turkish and German tests was randomized.

Results showed that students who were proficient in German were more likely to solve mathematical word problems in German correctly than students with poor German language skills, indicating how important language proficiency is. The same pattern appeared for Turkish-proficient students taking Turkish tests. Native language did not yield significant correlations with correct answers, indicating that it is more important to be proficient in the language of the test than for it to be one's native language. Bilinguals outperformed their monolingual counterparts overall across the testing conditions, further supporting the notion that bilingual cognition has advantages over monolingual cognition. These results will be helpful to teachers working in multicultural classrooms with children who speak multiple languages. It is important to develop a child's proficiency in the language of instruction if the child is ever to perform as well as classmates who are already proficient in that language.

Lkientzle (talk) 02:10, 22 March 2012 (UTC)[reply]


---

The Forgotten Treasure: Bilingualism and Asian Children's Emotional and Behavioral Health Katelyn Warburton (talk) 20:32, 8 March 2012 (UTC)[reply]

As the immigrant population within the United States increases, the importance of understanding language acquisition and multilingualism increases as well. Previous research has evaluated the “immigrant paradox,” the observation that immigrants tend to adjust to and navigate a new culture successfully, but that these successes are not sustained through later generations. Researchers have delved into this paradox to determine the influence of bilingualism on interpersonal relationships, emotional well-being, and academic performance. Studies reveal that bilingualism can positively impact cognitive flexibility, thinking skills, and “cultural capital” within families and communities. Han and Huang (2010) furthered this research by analyzing the relationship between language status and children's behavioral and emotional well-being.

This longitudinal study analyzed 12,580 children (including 1,520 children with family roots in Asia and 1,106 US-born, non-Hispanic, white children) from kindergarten through eighth grade. Time-invariant data were collected, including age, gender, weight, type of care before kindergarten, parents' marital status, and parents' education. Time-variant data were also gathered, including household siblings, family socioeconomic status, home environment, parental school involvement, and location of residence.

Behavioral problems were analyzed through teacher-reported data that included both internalizing (anxiety, loneliness, low self-esteem) and externalizing (arguing, fighting, impulsivity) behaviors. To gather information relating to generation status as well as race/ethnicity, both parents were asked to report whether they were born in the United States and whether their child was born in the United States (as well as the country of origin if born outside the United States). Measurement of language status included information about the language spoken at home and at school. The language interaction between parent(s) and child was recorded, and children took the Oral Language Developmental Scale (OLDS) test. Reading competence was evaluated through an Item Response Theory (IRT) test in a one-on-one setting. The setting of the school was evaluated across five constructs: English-as-a-second-language services, school resources, learning environment, teaching environment, and work climate.

Researchers utilized a three-level growth curve to assess the relationship between language status and children’s behavior as well as emotional health. Five assessment points were used to assess developmental trajectories, which revealed variations in outcome at every level at each data point. Comparisons were made between and within categories.
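
For readers unfamiliar with the method, a three-level growth curve model generally nests repeated assessments within children and children within a higher-level grouping such as schools. The equations below are a generic textbook specification of that structure, not the exact model reported by Han and Huang; the predictor names and the choice of schools as the third level are illustrative assumptions.

```latex
\begin{align*}
\text{Level 1 (assessment } t\text{):}\quad & Y_{tij} = \pi_{0ij} + \pi_{1ij}\,\mathrm{Time}_{tij} + e_{tij} \\
\text{Level 2 (child } i\text{):}\quad & \pi_{0ij} = \beta_{00j} + \beta_{01j}\,\mathrm{LanguageStatus}_{ij} + r_{0ij} \\
\text{Level 3 (school } j\text{):}\quad & \beta_{00j} = \gamma_{000} + u_{00j}
\end{align*}
```

Here Y_{tij} is the behavioral or emotional outcome of child i in school j at assessment point t, and the random terms e, r, and u capture variation at each level.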

Results indicate that by fifth grade, children who were classified as fluent bilingual, non-English-dominant had the lowest levels of internalizing and externalizing behaviors; in contrast, the highest levels of both types of behavioral problems were reported for non-English monolingual children. A potential confound lies in the teachers' behavior reports, which may be subject to rater bias. In addition, this study did not account for differences between behaviors at home and behaviors in the classroom.

Future research should more closely evaluate the cultural impact of bilingualism or monolingualism on childhood experience, education, and behavior. In addition, research is needed to distinguish between the effects of academic achievement and language proficiency on children's overall well-being and behavioral tendencies. This study indicates that a supportive teaching environment in a school that welcomes bilingualism may result in better-behaved, emotionally stable, academically driven students.

Han, W., & Huang, C. (2010). The forgotten treasure: Bilingualism and Asian children's emotional and behavioral health. American Journal of Public Health, 100(5), 831-838.

---

English Speech sound development in preschool-aged children from bilingual English-Spanish environments

Limited research has been conducted regarding speech sound acquisition in bilingual children. The lack of information surrounding this topic has posed problems for health professionals, especially speech-language pathologists, because it makes it more difficult to diagnose speech sound disorders and delays. Despite the lack of research in this area, studies have shown that bilingual children use what they know about one language to inform production of phonemes in a second language. Although there are clear differences in the complexities of English and Spanish, the languages share enough qualities that phoneme sounds can be mistaken for each other. In an attempt to provide more background on this issue, Christina Gildersleeve-Neumann and her colleagues set out to explore speech production patterns in preschool children with varying degrees of exposure to English and Spanish.

The present study examined speech production in monolingual English children, English-Spanish bilingual children who had been exposed mainly to English, and children with balanced English-Spanish exposure. Specifically, the study aimed to explore whether differences in bilingual exposure had an effect on the rate at which children acquired speech sounds. This research is deeply important for determining best practices in diagnosing children with speech disorders and delays. The study included 33 participants, all three years of age, ranging from monolingual English speakers to fully bilingual children. The participants were asked to spontaneously produce names for 65 objects shown to them in pictures. Their spoken responses were scored for accuracy of consonant and vowel production, and the data were also compared to typical speech acquisition in monolingual English- and Spanish-speaking children.

Results of this study revealed that the speech produced by each of the three groups was much the same. However, children who spoke only English made the fewest errors on average. Children in the bilingual group more readily transferred their knowledge about phonemes in one language to the other, meaning they ultimately displayed more errors in speech production. Final consonant devoicing was also highest in bilingual children, as was the use of the glottal stop. It remains unknown why this result occurred, and it must be explored further in future research.

Broader implications of this research include an understanding of how to appropriately assess children with differing language backgrounds as well as how to accurately diagnose children with speech delays. It is important to emphasize that the results do not suggest it is better to learn only one language, because children's language abilities improve as time passes. Ultimately, most children exposed to two languages develop full bilingual competency.

Gildersleeve-Neumann, C. E., Kester, E. S., Davis, B. L., & Peña, E. D. (2008). English speech sound development in preschool-aged children from bilingual English-Spanish environments. Language, Speech, And Hearing Services In Schools, 39(3), 314-328. doi:10.1044/0161-1461(2008/030)

Kfinsand (talk) 07:36, 13 March 2012 (UTC)[reply]


---

Language mixing in bilingual speakers with Alzheimer’s dementia: a conversation analysis approach Amf14 (talk) 03:01, 15 March 2012 (UTC)[reply]

Some bilingual individuals who suffer from Alzheimer’s dementia show difficulties with separating their two languages. When speaking with a monolingual individual, they may begin to speak the wrong language or combine both languages (“mixing”) within the conversation. Those with more severe dementia generally exhibit worse patterns of language mixing, but cases vary from person to person. Healthy bilingual speakers manage their multiple languages easily by selecting whichever one is appropriate in a given situation.

Two possibilities underlie this difficulty in dementia patients. They may fail to recognize situational cues, and therefore have a problem with language choice, or they may have problems inhibiting their second language. It is thought, however, that these individuals are fully aware of which language should be used.

The researchers sought to provide evidence that language mixing occurs due to Alzheimer’s dementia itself rather than to disorientation, attention deficits, or other effects of the disease. They therefore used conversation analysis to observe specifically what causes a speaker to mix languages when talking with another individual.

Four bilingual individuals with Alzheimer’s dementia were studied in order to understand the language choice and mixing problem. All subjects had spoken two languages from the age of twelve or earlier and had used both languages throughout adulthood. The experiment consisted of two ten-minute conversations. These conversations were naturally occurring, took place on two separate days, and were held in two different languages: English and Afrikaans.

Not all subjects switched languages during the conversations, but when languages were mixed, it occurred more often during the conversation held in the speaker's less proficient, non-dominant language. The severity of dementia did not seem to affect how often languages were switched. Occasionally, when a language change occurred during conversation, the subject would struggle with a “trouble spot,” meaning they were unable to regain their thoughts and re-enter the conversation at an adequate speed. When a subject recovered quickly from a trouble spot, it supported the idea that those with dementia were aware of what caused the initial trouble and could quickly return to the appropriate language. However, most subjects were not able to recover on their own, suggesting that they were not aware the mixing had occurred in the first place.

One subject was an exception to this pattern. She was at a much later stage of dementia and did not recover from trouble spots, but instead restated the utterance in the inappropriate language in a louder voice. This provides evidence that the severity of dementia may change how often the inappropriate language is used in conversation.

Friedland, D., & Miller, N. (1999). Language mixing in bilingual speakers with Alzheimer's dementia: A conversation analysis approach. Aphasiology, 13(4-5), 427-444. doi:10.1080/026870399402163

---

The effects of bilingualism on toddlers' executive functioning. Lino08 (talk) 05:04, 20 March 2012 (UTC)[reply]

Many studies have suggested that learning a second language can be very beneficial in cognitive terms, especially when that second language is learned at a young age. Numerous studies show evidence that bilingual children are better at tasks that require selective attention and cognitive flexibility. Other studies have shown that the executive cognitive control of bilinguals continues to improve through adolescence, but that much of the significant development occurs during the preschool years. The authors of this article looked at the evidence that by 24 months bilingual children have already separated their two languages, and decided to test children of this age with a variety of cognitive tasks to determine whether the benefits of bilingualism can be detected at an earlier age (24 months) than previously thought. Their goal was to test both the ability to suppress a motor response (delay of gratification) and the ability to resist attending to a distracting stimulus, in order to determine the effects of bilingualism on young children.

The researchers tested 63 children whom they split into a monolingual and a bilingual group. They determined that if over 80% of the child’s language exposure is one language, they would be put in the monolingual group, and if their exposure to their first language was less than 80% and they had exposure to a second language (average exposure was 35.5%), then they were included in the bilingual group. The children were given 5 executive functioning tasks to complete in their primary language in order to test the cognitive abilities of monolinguals as compared to bilinguals. The tasks included: Multilocation, Shape Stroop, Snack Delay, Reverse Categorization, and Gift Delay. In multilocation, the experimenter hid a treat in one drawer out of a row of 5 drawers, and they asked the children to find the treat. After the children were primed to look in the middle drawer, the experimenter would change the treat’s location and ask the child to find it again after a 10 second delay. In Shape Stroop, the children were asked to find the picture of a small apple, orange, or banana located within a picture of a large apple, orange, or banana. In both the snack and gift delay tasks, the children were asked to not take the snack or open the gift when the experimenter left the room. In reverse categorization, the child was asked to put small and large blocks in small and large buckets according to the rule presented. Next, they were asked to put the large blocks in the small bucket and the small blocks in the large bucket.

Previous research has found bilinguals to be better at cognitive control functions (inhibiting a motor response and deploying selective attention) than monolinguals. In this study, the researchers found a significant difference between the language groups on the Stroop task (a selective attention task). Their findings support the idea that 24-month-olds have less experience with language production and therefore have not yet acquired all of the benefits of bilingualism; their language experience has been mostly receptive rather than expressive. Even though the differences between the language groups did not prove statistically significant on all of the tasks, the results do show that there is a bilingual advantage and that it develops sooner than previous research had indicated.

Poulin-Dubois, D., et al. (2010). The effects of bilingualism on toddlers' executive functioning. Journal of Experimental Child Psychology. doi:10.1016/j.jecp.2010.10.009

---

Reasoning about other people's beliefs: Bilinguals have an advantage

Previous studies have shown that bilingual children perform better on false-belief (FB) tasks. In the classic Sally-Anne task, children are presented with two characters, Sally and Anne, who are playing with a toy. When they finish, they put the toy in a box and Anne leaves. While Anne is away, Sally moves the toy to a different box, and when Anne returns children are asked in which box Anne will look for the toy. Monolingual children answer this question correctly around the age of four, whereas bilingual children tend to answer correctly as young as three years of age. The current study aimed to test the false-belief reasoning abilities of adults, comparing monolinguals to bilinguals. Because all adults were expected to answer the questions correctly, the researchers used eye-tracking techniques to see whether participants showed an egocentric bias. In other words, if participants initially look at the box they know the toy is in, even though that answer is incorrect, this reveals an egocentric bias. The authors expected that bilinguals would suffer less from this example of the curse of knowledge in the study's tasks.

The participants in this study were 46 undergraduate students from Princeton University, which included 23 bilinguals and 23 monolinguals. English was the primary language for all the subjects, but the bilinguals had practiced a second language frequently for at least ten years.

The first task involved the Sally-Anne story. Participants, who were told they were being used as a control group in a children’s study, watched the narrated story on a computer screen and answered comprehension questions using the keyboard. They were expected to answer correctly, but their eye-movement was observed as well.

The authors examined three measures: gaze direction, fixation latency, and response time. For gaze direction, the authors noted which basket participants initially fixated on when asked which basket Anne would look in for her toy. Nineteen participants looked at the correct basket first, 26 revealed an egocentric bias, and one monolingual subject did not fixate on either basket before answering the question. Of the 19 who looked at the correct basket first, 13 were bilingual and 6 were monolingual, a reliable difference. For fixation latency, the authors measured how long it took subjects to fixate on the correct target; consistent with the gaze-direction results, bilingual participants were reliably faster to fixate on the correct target. Finally, there was no reliable difference between monolinguals and bilinguals in response times to the questions, suggesting that response time is a less sensitive measure than eye tracking.

After the Sally-Anne task, subjects participated in a Simon Task, asking them to click a right-handed key when the word “RIGHT” appeared on the screen, and click a left-handed key when the word “LEFT” appeared on the screen. The authors intended to look at the subjects’ level of executive control with this task. Bilinguals performed significantly better on the task than monolinguals.

Overall, the majority of adults suffer from an egocentric bias, though bilinguals do so to a lesser degree. In adults, unlike children, this bias does not affect task performance.

Rubio-Fernández, P., & Glucksberg, S. (2012). Reasoning about other people's beliefs: Bilinguals have an advantage. Journal Of Experimental Psychology: Learning, Memory, And Cognition, 38(1), 211-217. doi:10.1037/a0025162 Lcannaday (talk) 19:58, 20 March 2012 (UTC)[reply]


Dual-modality monitoring in a classification task: The effects of bilingualism and ageing By Ellen Bialystok, Fergus I. M. Craik & Anthony C. Ruocco Misaacso (talk) 04:19, 21 March 2012 (UTC)[reply]

Proficient bilinguals are able to have both languages active and ready to be used at the same time. To do this, bilinguals need to manage both systems to prevent one language from activating at an inopportune time: attention must stay on the target language while the other, unneeded language is inhibited. These abilities are all part of executive processing, a set of cognitive functions that regulates thoughts and behaviors. Previous research suggests that since bilinguals deal with activating and suppressing each language on a daily basis, they may show better control of attention on attention-demanding tasks than monolinguals do. Executive functions are said to develop earlier, and to decline later and more slowly in life, in bilinguals compared to monolinguals. Researchers have found that, with regard to linguistic processing, bilingual children aged five to nine are better at completing tasks that contain misleading information that must be ignored. Spatial, number, and classification concepts all appear to be nonverbal areas in which bilinguals outperform monolinguals. Bilingualism is also said to reduce the speed and magnitude of the decline of executive processing as a person ages. This benefit increases with age, especially regarding selective attention, inhibition, and resistance to interfering stimuli. Past research has also shown that bilinguals can complete a task more efficiently than monolinguals when the task involves monitoring and switching attention.

There were three groups of subjects: moderately (unbalanced) bilinguals, fluent bilinguals, and monolinguals. All were undergraduate students and completed a self-report language background questionnaire. In the study, subjects completed a visual and auditory classification task in which they determined whether stimuli were letters or numbers and whether they were animals or musical instruments. If the two stimuli belonged to the same grouping, the condition was called related; if they were different, the condition was called unrelated. It was hypothesized that bilinguals would have less difficulty completing the task than monolinguals and that unbalanced bilinguals would fall between these groups. A second hypothesis concerned the need to switch between or within the groupings of stimuli: ignoring interference would take more effort than switching attention between the two types of material, so the related conditions would require more effort. There were four combinations, crossing two domains (letter/number and animal/musical instrument) with two levels of relatedness (related and unrelated). The dependent variable was the cost of performing the auditory task with the visual task relative to performing it alone. A second study was used to replicate the findings of the first; in that experiment subjects also completed a cultural intelligence test.
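
As a rough illustration of how such a dual-task cost can be computed, the sketch below compares mean reaction times for the auditory task performed alone versus together with the visual task. This is a minimal, hypothetical example: the numbers, condition names, and the exact cost formula are invented here and are not taken from the authors' analysis.

```python
from statistics import mean

# Hypothetical mean reaction times (ms) for the auditory classification task.
rt_auditory_alone = [612, 655, 598, 630]   # single-task condition
rt_auditory_dual = [701, 748, 690, 715]    # performed together with the visual task

def dual_task_cost(single_rts, dual_rts):
    """Absolute (ms) and proportional slowdown of a task under dual-task conditions."""
    absolute = mean(dual_rts) - mean(single_rts)
    proportional = absolute / mean(single_rts)
    return absolute, proportional

abs_cost, prop_cost = dual_task_cost(rt_auditory_alone, rt_auditory_dual)
print(f"Dual-task cost: {abs_cost:.0f} ms ({prop_cost:.1%} slower)")
```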

The letter/number grouping yielded more classifications and was unaffected by the auditory task. The animal/musical instrument grouping was more complicated and was easily disrupted when auditory letter and number stimuli were presented. Bilinguals had the highest scores, followed by the unbalanced bilinguals and finally the monolinguals. In the dual-task condition the cost of the task was the same for all participants. The second experiment replicated the finding that letter and number classification is easier than animal and musical instrument classification, as it is based on shallower judgments. In the control conditions there were no performance differences between the groups.


Bialystok, E., Craik, F. I. M., & Ruocco, A. C. (2006). Dual-modality monitoring in a classification task: The effects of bilingualism and ageing. The Quarterly Journal of Experimental Psychology, 59 (11), 1968-1983. doi: 10.1080/17470210500482955

Misaacso (talk) 03:55, 23 March 2012 (UTC)[reply]

Musical expertise, bilingualism, and executive functioning by Bialystok and DePape

Previous research has shown that bilinguals of all ages who use both languages on a regular basis have higher levels of executive control than monolinguals. We know that both of a bilingual's languages are activated even in a strongly monolingual context, which requires bilinguals to focus on the target language in order to maintain fluent speech. This kind of executive control can be seen in musicians as well: musicians show increased activity in the frontal lobes that non-musicians do not, and they have been shown to have better verbal memory and higher IQs. The current study investigated whether intensive musical experience leads to better executive processing. If executive functioning is affected by musical experience, the effect should be seen in an auditory judgment task; if it has a more generalized effect like that of bilingualism, effects should also be observed in nonverbal spatial tasks.

In the current study, there were four groups of participants: monolingual instrumentalists, monolingual vocalists, monolinguals, and bilinguals. A background test was used to establish a baseline on spatial memory and cognitive function and to confirm that there was no general difference in intelligence level. One experimental task assessed resolution of conflict in a spatial task, while another assessed resolution of conflict in auditory processing. Participants were presented with a control condition, followed by congruent and incongruent conditions in which the direction of response did or did not correspond to the pitch direction (Stroop task) or to the directional arrows presented (Simon arrows task). Reaction time was measured.

In the background tasks, bilinguals and musicians performed similarly. Both musicians and bilinguals responded more rapidly than monolinguals on the spatial tasks, while the two musician groups performed better than the other groups on the pitch-oriented tasks. The results demonstrate the effectiveness of musical training in enhancing cognitive control, even on tasks with no clear relation to music. They also show that the effects of musical or bilingual experience can generalize, but that the greatest effects are seen on tasks most similar to the experience itself.

The study has provided evidence connecting musical experience to executive control and has furthered bilingual research on executive control providing further evidence to support the idea that the enhancement found in bilingualism is not specific to language. However, the mechanism behind these positive effects still remains unknown.

Bialystok, E., & DePape, A. (2009). Musical expertise, bilingualism, and executive functioning. Journal Of Experimental Psychology: Human Perception And Performance, 35(2), 565-574. doi:10.1037/a0012735 Mvanfoss (talk) 21:31, 21 March 2012 (UTC)[reply]

Coordination of Executive Functions in Monolingual and Bilingual Children

By Ellen Bialystok

Executive control is a broad term that describes cognitive processes involving working memory, multi-tasking, problem solving and other cognitive functions. Previous research overwhelmingly has focused on the development of each component of executive control; however, the contributions of each individual component to a complex task have been difficult to empirically measure. Real-life tasks such as our everyday multi-tasking require the coordination of each piece. Other studies looked at how experience modifies the development of executive control, as is the case with bilingualism. Executive control develops faster in bilinguals and is used constantly as they decide which language to use. To build on the findings from previous research the current study sought to discover how bilingualism affects children’s performance during multi-tasking, which requires the coordination of executive control components.

The participants consisted of 63 8-year-olds, 32 of whom were monolingual and 31 of whom were bilingual. The students were administered a dual-modality classification task (DMCT) and two other tests that measured intelligence and receptive vocabulary. The DMCT required students to classify visual and auditory stimuli as animals or musical instruments. The single-task trials presented students with either an auditory or a visual stimulus, whereas the dual-task trials presented both stimuli simultaneously. Reaction time and accuracy were measured.

Overall, the single-task was performed faster and was less difficult than the dual-task. Performance was relatively similar across all children when the auditory and visual tasks were presented alone; however, bilingual children were more accurate than monolingual children in the dual-task trials, especially with regards to the visual task. The dual task condition required coordination of all three executive control components: working memory, inhibition, and shifting. While there was no evidence that one component was performed more effectively by the bilingual children, coordination of these elements made them more efficient.

These findings support the author’s assertion that the combination of these elements is more important to examine than any one component. Most real-life tasks require the integration of more than one network, which makes this research more applicable to everyday tasks. Since bilingualism helps to better develop the network responsible for cognitive performance, this research and skill may be more important than previously believed.

Bialystok, E. (2011). Coordination of executive functions in monolingual and bilingual children. Journal of Experimental Child Psychology, 110(3), 461-468. doi:10.1016/j.jecp.2011.05.005 TaylorDrenttel (talk) 03:07, 21 March 2012 (UTC)[reply]

Delaying the onset of Alzheimer disease: Bilingualism as a form of cognitive reserve.

Bilingual speakers have been found to function at a higher level in old age when dealing with dementia, Alzheimer's disease, and other memory issues. The brains of bilingual dementia patients show enhanced attention and cognitive control when compared with similar patients from a monolingual background. Since people mostly speak multiple languages due to circumstance rather than choice, these brain differences cannot be attributed only to genetics. Instead, bilingualism is believed to enhance the brain's cognitive reserve. Previous research based on hospital records showed that memory symptoms appeared 3 or 4 years later in bilingual speakers, though other research showed this effect only in some groups. These findings led Fergus Craik, Ellen Bialystok, and Morris Freedman to test only patients with a diagnosis of “probable Alzheimer's disease” (AD) in an attempt to replicate the earlier finding of delayed dementia in bilingual speakers.

The study consisted of 211 patients who had been diagnosed with probable AD. Information regarding the age of onset of symptoms, along with additional educational and background information, was collected and recorded for further analysis. In order for a patient to qualify as bilingual, they must have consistently spoken two or more languages throughout the majority of their life.

The results replicated those of previous research in showing that the age of onset of Alzheimer's symptoms was delayed in bilingual speakers. There was a difference of 5.1 years between bilingual and monolingual speakers in the time of symptom onset. Although this finding is based on a subjective report of the time of onset, it is still significant. There was also a 4.3-year delay in the bilingual patients' first visit to the clinic to address the Alzheimer's symptoms. The results reflected the researchers' hypotheses in showing that bilingual Alzheimer's patients show a delayed onset of dementia symptoms and a delayed first visit to a clinic.

This study demonstrates the importance of brain stimulation, such as that of bilingual speaking, in the process of cognitive reserve for Alzheimer’s patients. In the future, the researchers hoped to clarify whether the effect was exclusively based on speaking multiple languages, or if other factors influenced brain stimulation in the study. Further research on participants’ education, occupation, and immigrant status were possible areas of subsequent study.

Craik, F. I. M., Bialystok, E., & Freedman, M. (2010). Delaying the onset of Alzheimer disease: Bilingualism as a form of cognitive reserve. Neurology, 75(19), 1726-1729. doi:10.1212/WNL.0b013e3181fc2a1c Smassaro24 (talk) 06:18, 21 March 2012 (UTC)[reply]

Bilingualism is one of the most debated subjects in the study of language, and research has found a mix of effects associated with it. Many psychologists have conducted experiments demonstrating advantages for bilingual people over people who are monolingual, and this is what motivated the present experiment. The experimenters wanted to know how monolingual and bilingual people perform on two standard tasks: the Sally-Anne false-belief task and the Simon task.

The experimenters tested forty-six Princeton University undergraduates; twenty-three of the students were monolingual and the other twenty-three were bilingual. To determine which group each participant fell into, the researchers asked two questions: 1) how long have you spoken this language? and 2) how often do you have to switch back and forth between the two languages? Based on the results, they developed a criterion under which participants qualified as bilingual only if they had learned the second language before the age of nine. It is important to note that all of the participants were born in the United States.

The first task the participants completed was the Sally-Anne task, which comprised two conditions: false belief and true belief. In the false-belief condition, Anne puts her doll in the basket before school; while she is gone, Sally moves Anne's doll from the basket to the box, and participants are asked where Anne will look for her doll when she comes back from school. In the true-belief condition, Sally puts her doll in the basket before school; Anne was also going to put her horse in the basket, but because it was full she put it in the box, and participants are asked where Anne will remember her horse to be when she comes home from school. Participants watched these events on a computer screen and pressed keys to give their answers while the experimenters tracked their eye movements.

The results were reported in three parts. The first was gaze direction: about half of the participants looked at the correct location first while the other half looked at the incorrect location, but bilinguals looked at the correct location about sixty percent of the time compared with about twenty percent for monolinguals. The second was fixation latency, that is, how long it took participants to look at the correct answer once the question was asked; the bilingual group performed better overall than the monolingual group. The third was reaction time, where the results showed the bilingual group responding faster than the monolingual group on this task.

The second task was the Simon task, in which participants pressed one button when they saw the word "right" and another button when they saw the word "left"; these words were displayed on a computer screen and varied in their placement on the screen. The experimenters found no significant difference between the groups on this task. The results varied between the two tasks, but the bilingual participants clearly held an advantage over the monolinguals in the Sally-Anne task.

Rubio-Fernández, P., & Glucksberg, S. (2012). Reasoning about other people's beliefs: Bilinguals have an advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(1), 211-217. doi:10.1037/a0025162 Gmilbrat (talk)


Bilingualism and Dementia

This study explored the effect of bilingualism on delaying the onset of dementia symptoms in elderly patients. Dementia is currently a social and economic burden, and even a six-month overall delay in its onset would have large public health implications. The factors that predispose a person to dementia are mainly biological, so it is important to identify environmental and lifestyle factors that may contribute to delaying its onset.

Previous research on this topic has focused on the cognitive reserve. It has been argued that the neurological brain reserve is biological and possibly genetic. This article, however, focuses on the behavioral brain reserve, or cognitive reserve. It is thought that complex mental activity adds to this cognitive reserve and in turn protects against the onset of dementia. In particular, the onset of dementia can be delayed through education, high occupational status, high intelligence, and engagement in mentally stimulating activities. Since bilingualism increases attention and cognitive control in people of all ages, it is believed to contribute to the cognitive reserve and delay the onset of dementia. This study builds on the previous research by testing the hypothesis that bilingualism delays the onset of dementia.

This study looked at 184 patients: 91 who were monolingual and 93 who were bilingual. In each language group, there were 66 patients diagnosed with probable Alzheimer's disease. The age of onset of dementia (Alzheimer's disease) was determined by interviewing families and caregivers. In addition, language history was examined, including the languages spoken (for the majority of their lives), English fluency, place of birth, date of birth, and year of immigration. Other information about the patients, including education and employment history, was also collected in order to rule out other factors that could be responsible for a delayed onset of dementia.

There was a significant difference in the age of onset of dementia between monolinguals and bilinguals, with an average difference of 4.1 years. Cultural differences such as immigration, education, and employment status were all taken into consideration, and none of these factors could account for the delayed onset of dementia in bilingual patients. These findings are important because they show that environmental and lifestyle factors influence the age of onset of dementia. It is necessary to investigate the interaction of these environmental, lifestyle, and biological factors in order to fully understand how best to prevent the onset of this disease.

Bialystok, E., Craik, F. I. M., & Freedman, M. (2007). Bilingualism as a protection against the onset of symptoms of dementia. Neuropsychologia. doi:10.1016.


Reasoning about other people's beliefs: Bilinguals have an advantage.

Bilingualism has many cognitive benefits, including enhanced executive control skills, and even empathy. Bilingual children perform better on false-belief tasks than monolingual children the same age. Though adults are able to easily adopt another person's point of view, they are affected by the evocatively-named curse of knowledge, where their own knowledge of a scene biases their judgements. By using eye-tracking to see where participants looked during a false-belief task, the authors were able to discern how multilingualism affects adult performance.

The 46 participants were made up of equal numbers of bilinguals and monolinguals. Though all participants had some prior experience with a second language, the bilinguals had learned their second language before age nine and had been using it regularly for at least a decade. Most participants learned the language in a bilingual home, in a monolingual home where a foreign language was spoken, or in a language immersion school. The authors point out that this provides a robust test bed for executive control due to the need for the bilingual participants to switch languages multiple times daily for many years.

Participants were given a Sally-Anne task, which presented a visual story about two girls. In the false-belief condition, Anne puts her doll away in a chest and leaves the scene. Sally then moves the doll from the chest to a box, and Anne returns. After they see the scene, participants are asked where Anne will look for the doll. In the true-belief condition, Sally puts her horse in the box and leaves the scene. Anne was planning to put her doll in the box, but because it was full, she puts it in the chest instead. Anne leaves for the day, and when she returns the following day, participants are asked where she will look for the doll.

Eye tracking revealed that more than half (56.5%) of the participants had an egocentric bias and looked towards the container they knew the doll was in before looking at the correct one. This number was skewed towards monolinguals, who looked at the correct container only 26.1% of the time. Bilinguals were much more successful, looking at the correct container 56.5% of the time. Bilinguals were also significantly faster at looking at the correct target than were the monolingual participants.

Surprisingly, adults can, in fact, perform poorly on false-belief tests. The curse of knowledge seems to give them an egocentric bias, causing them to initially look in the wrong place. Unlike children, however, adults can correct this and give the appropriate answer. Adult bilinguals seem to be less likely to look at the wrong container and identify the correct one more quickly than adult monolinguals. The authors propose this is due to the improved executive control systems of bilinguals, which allow them to inhibit their own knowledge of the scene. That many bilinguals have an intimate understanding of another culture may also affect performance: coming from a different background may allow them to adopt another person's perspective more easily than monolinguals.

Rubio-Fernández, P., & Glucksberg, S. (2012). Reasoning about other people's beliefs: Bilinguals have an advantage. Journal Of Experimental Psychology: Learning, Memory, And Cognition, 38(1), 211-217. doi:10.1037/a0025162 AndFred (talk) 03:56, 23 March 2012 (UTC)[reply]


Glennen, S. (2002). Language development and delay in internationally adopted infants and toddlers: A review. American Journal Of Speech-Language Pathology, 11(4), 333-339. doi:10.1044/1058-0360(2002/038) Hhoff12 (talk) 05:21, 22 March 2012 (UTC)[reply]

Children who are adopted from a different country are unique, not only in the sense that they are adopted, but also because adoption places them in an unusual category of people who are not typical bilingual language learners, since they usually do not maintain their native language. It is rare for the adoptive family to speak the child's native language, so use of that language ends as soon as the child is adopted. At that point the child is required to learn a new, adopted language, and during this period their abilities are limited in both languages. Learning more about this type of second-language learner is important because international adoptions have increased rapidly in recent years.

Typically there are two types of bilingual language learning. Simultaneous bilingualism happens when two languages are learned at the same time, for example when one parent speaks one language and the other parent speaks another. Successive bilingualism happens when one language is learned before the other, for example when a child learns one language at home and then another language upon entering school. In both of these cases, one language helps with the learning of the other. What happens with adopted children is quite different. The article describes their experience as arrested language development, because the native language is cut off too early as the adopted language comes into play. This could leave the child at risk for semilingualism, the inability to become proficient in either language. Previous evidence shows that internationally adopted children lose their native language soon after their adoption: children as old as 4 to 8 years lost most of their native language within 3 to 6 months and all functional use within a year. Younger children, such as infants and toddlers, who have less developed language skills, would likely lose their native language even faster.

Many people, even professionals, don’t think that the toddlers and infants will be affected by the change in languages because they have only just begun to learn their native language at the time of the adoption. However, children being to learn languages, or at least portions of them at a very young age. They begin to separate speech into words and phrases based on the language’s rhythms and the arrangement of words. Once they learn that, they switch to learning what they mean. When the patterns change in the adopted language, children need to realize the difference and learn what they mean. Another important factor is that by the age of 8 to 9 months infants learn to listen only to phonemes that are a part of their native language. These children will have trouble, at least at first, hearing and creating phonemes that were meaningless in their native language.

This research is important because current data shows that many children who have been adopted internationally are receiving speech and language services. By learning more about this unique case of language learning, we will be better able to help these children in the future.



Musical Expertise, Bilingualism, and Executive Functioning Sek12 (talk)


The researchers in this study attempted to determine whether intensive musical experience leads to better executive control, as has been seen in research with bilinguals. This research question is important because it points to the possibility that better executive control can be gained through experience and extensive practice, in the same way that bilinguals must use their executive system to distinguish between the two languages they know. It suggests that activities that exercise executive control, like musical practice and using two languages, can both increase executive functioning and protect against the onset of dementia. Prior research mentioned in the article shows that lifelong bilinguals demonstrate higher levels of executive control than monolinguals. The introduction of the paper points out that past research implicates frontal regions of the brain in executive functioning, but that musicians specifically had not been tested. This study presents some of the first research asking whether musical performers gain enhanced executive control, comparing them with bilinguals and monolinguals.

The design of this study compared musical performers (both vocal and instrumental), bilingual, and monolingual participants. The specific question was whether musical performers would show enhanced executive control on nonverbal tasks when compared to bilinguals; musical performers showed enhanced control on specialized auditory tasks, in the same way that bilinguals have shown enhanced executive control on both verbal and nonverbal tasks in previous studies. In this experimental study the participants (95 in total) completed a Language and Musical Background Questionnaire, the Cattell Culture Fair Intelligence Test (a nonverbal measure of general intelligence), the Spatial Span subtest of the Wechsler Memory Scale III (a short-term visual span memory test), the Trail Making Test, a Simon arrows task, and an auditory Stroop task.

The key findings showed that while all groups produced similar results in general intelligence, cognitive processing, and working memory, the groups differed on the executive control tasks. On the nonverbal spatial conflict tasks, both musicians and bilinguals performed better than monolinguals. The evidence shows that specific executive control components were involved in the tasks and that bilinguals and musicians performed better on those tasks. The most important finding is that musicians use a different specific executive component than bilinguals do to perform executive tasks.

This research has broader implications because it shows that musical training enhances executive control in much the same way as has been shown for bilinguals in previous research. It also extends research on bilinguals' executive control because it shows that executive enhancement is not necessarily specific to language experience.

Bialystok, E., & DePape, A.-M. (2009). Musical expertise, bilingualism, and executive functioning. Journal of Experimental Psychology: Human Perception and Performance, 35(2), 565-574. doi:10.1037/a0012735


Psycholinguistics and Bilingualism Zc.annie (talk)

One of the main topics that psycholinguists are interested in is bilingualism. They examine how bilinguals differ from monolinguals in every aspect of language. There are three main focuses in bilingual psycholinguistics. The first is how humans process language: from the moment language is heard or seen, to how language knowledge is stored in the brain, to how humans produce language in order to respond or simply to communicate. How comprehension works and what kind of underlying processes are involved are all topics in this area. The second is how many lexicons a bilingual's brain contains in order to work with two languages. This is an ongoing debate: one theory holds that there is a single lexicon in which all words are stored but labeled by language; another holds that there are two lexicons, one for each language; and some psycholinguists propose three stores, a shared conceptual store linking the languages plus one lexicon for each language. The third is how bilinguals keep their two languages separate. Evidence about language switching, how the brain gates the two languages, and how words are activated and deactivated are points of interest in this area.

There are two main language modes for bilinguals: the monolingual language mode and the bilingual language mode. In the monolingual mode, bilinguals try to set aside the other language as completely as they can, but this is rarely perfect; most of the time interference occurs at every level of the language and in all modalities. This is why small traces of the other language can be seen when bilinguals are using only one language. In the bilingual language mode, both languages are active; code-switching and borrowing happen in this mode, and speakers simply make choices and decisions about which language to use.

How bilinguals recognize words has received more and more attention from psycholinguists. To study access to the lexicon, recognition times for particular words are most commonly measured. Such studies show that bilinguals tend to spend more time recognizing non-words, and longer recognizing code-switched words in the bilingual language mode, but not in the base-language situation. Further studies along similar lines show that, for bilinguals, certain factors play a role in lexical access in mixed-language situations.

Muysken, P., & Milroy, L. (Eds.). (1995). One speaker, two languages: Cross-disciplinary perspectives on code-switching (Chapter 12). Cambridge, England: Cambridge University Press.