Language as a Complex Activity of a Human Brain

Summary
The author of the paper “Language as a Complex Activity of a Human Brain” states that in reading and listening, the main issue is how the meanings of various words are stored in the brain and how they are accessed. It is still largely unclear where language hides in the mysterious reaches of the human brain.

Language: Chapter Summary

The Flight 301 incident is an example of the complexities people face in everyday language. In this incident a plane was called back to the gate because the air traffic controller heard the word “hijack” while guiding the pilots to a safe takeoff, when, in reality, a passenger had greeted the pilot, Jack, with “Hi, Jack.” Language is one of the most complex activities of the human brain, and how people use language is an important topic in cognitive neuroscience.

Most theories agree that there is a mental lexicon: a mental store of information about words. This store contains semantic and syntactic information about words, along with information about their spellings and sound patterns. It is reported that an adult speaker knows nearly 50,000 words and can recognize and produce about three words per second. This database is therefore very large but arranged in a highly efficient manner. Evidently, the mental lexicon is not arranged alphabetically; otherwise, it would take longer to find words from the middle of the alphabet. The mental lexicon also learns and forgets words. Moreover, frequently used words are more easily accessible than rarely used ones, and a word with more auditory neighbors is identified more slowly. This is because the mental lexicon is arranged in information-specific networks. An example is the model proposed by Willem Levelt. According to this framework, the information-specific networks exist at two levels: the lexeme level and the lemma level. Above these, the conceptual level contains semantic information (the conceptual conditions under which it is appropriate to use a word), and these specifications are communicated between the levels by sense connections.

The idea that representations in the mental lexicon are organized according to meaningful relationships between words is supported by priming studies. At first, priming effects were thought to result from the automatic spread of activation between nodes in the word network, but the possibility of expectancy-induced priming was later identified. However, there is still dispute about the number of conceptual or semantic systems in the brain. While some scholars think there is a unitary semantic system that works on a proposition-based format, others hold that different types of information can be stored separately on the basis of perceptual or verbal codes, resulting in multiple conceptual systems. Researchers have proposed various organizational structures for conceptual information, ranging from feature lists and schemas to exemplars and connectionist networks. According to the model proposed by Collins and Loftus (1975), word meanings are represented in a semantic network in which words are connected with each other and the distance between words is determined by their semantic relations. In this model, activation spreads from one conceptual node to others, and nodes that are closer together benefit more from the activation; a toy simulation of this idea appears below.

The functional organization of the mental lexicon can be understood by observing patients with neurological problems. For example, Wernicke’s aphasia results in semantic paraphasias, and patients with progressive semantic dementia show impairments in the conceptual system. There is a correlation between the type of semantic deficit and the area of the lesion.
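To make the spreading-activation idea concrete, here is a minimal sketch in Python. The network, link weights, decay factor, and step count are invented for illustration; they are not values from Collins and Loftus or from the chapter.

```python
# Toy spreading-activation network in the spirit of Collins and Loftus (1975).
# Nodes, link weights, and the decay factor are illustrative inventions.

# Semantic network: each word maps to its neighbors with a link strength
# (higher = semantically closer).
NETWORK = {
    "doctor": {"nurse": 0.9, "hospital": 0.8, "bread": 0.1},
    "nurse": {"doctor": 0.9, "hospital": 0.7},
    "hospital": {"doctor": 0.8, "nurse": 0.7},
    "bread": {"butter": 0.9, "doctor": 0.1},
    "butter": {"bread": 0.9},
}

def spread_activation(source, steps=2, decay=0.5):
    """Inject activation at `source` and let it spread for `steps` rounds.

    Each round, every active node passes a decayed share of its activation
    to its neighbors, weighted by link strength.
    """
    activation = {node: 0.0 for node in NETWORK}
    activation[source] = 1.0
    for _ in range(steps):
        incoming = {node: 0.0 for node in NETWORK}
        for node, act in activation.items():
            for neighbor, weight in NETWORK[node].items():
                incoming[neighbor] += act * weight * decay
        for node in NETWORK:
            activation[node] += incoming[node]
    return activation

# Priming prediction: after "doctor", the closely linked "nurse" is more
# active (and so recognized faster) than the distant "butter".
acts = spread_activation("doctor")
print(sorted(acts.items(), key=lambda kv: -kv[1]))
```

Running this, "nurse" ends up far more active than "butter" after priming with "doctor", which is the pattern semantic priming studies report.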
Similarly, in people with semantic dementia, semantic problems are often localized to certain semantic categories; that is, the deficit can be category-specific. Elizabeth Warrington argues that this reflects the types of information stored with different words in the semantic network. Warrington’s claim was tested in a computational model by Martha Farah and James McClelland (1991). They treated semantic memory as having distinct visual and functional subsystems and added the idea that living things have representations based mainly on visual attributes while nonliving things contain mostly information about functional attributes. The test revealed that a lesion of the visual properties impaired the ability to deal with living things, while a lesion of the functional properties resulted in impairments with nonliving things, fully supporting Warrington’s claim (a toy version of this lesioning idea is sketched below).

However, Warrington’s observations were disputed by Alfonso Caramazza and others on the ground that the materials used in Warrington’s studies were not well controlled. For example, when comparing living things with manmade things, some studies did not ensure that visual complexity, visual similarity across objects, frequency of use, and familiarity of objects were matched. Contradicting Warrington’s theory, Caramazza proposed a semantic network organized along the conceptual categories of animacy and inanimacy. According to him, selective damage in patients indicates evolutionarily adapted, domain-specific knowledge systems that are subserved by distinct neural mechanisms.

The idea that conceptual representations of living versus man-made things depend on different neuronal circuits is supported by PET and fMRI studies by Alex Martin and colleagues. They observed that information about living things activates the lateral aspect of the fusiform gyrus and the superior temporal sulcus, whereas identifying and naming tools activates the medial aspect of the fusiform gyrus, the left middle temporal gyrus, and the left premotor area. Similarly, Hanna Damasio and colleagues investigated a large number of patients with brain lesions and found correlations between naming deficits and different brain regions. For example, damage to the left temporal pole was correlated with problems in retrieving the names of persons, and lesions in the anterior part of the left inferotemporal lobe were correlated with problems in naming animals. However, they noticed that even patients who could not retrieve names were able to access many of the conceptual properties relevant to the name. Based on this, they argued that word processing involves three levels. The first is the conceptual level, which contains the semantic features of words. The lexical level stores the word form that matches the concept. The final level is the phonological level, where the sound information corresponding to the word is stored. This model differs from the previous model in that it contains no lemma level.

Analyzing spoken language is considerably different from analyzing written language. When analyzing spoken words, the listener has to decode the acoustic input, and the information is converted into a phonological code because the lexical representations of auditory word forms are stored in the mental lexicon in that form. Once the input is in phonological format, the most suitable lexical representation is selected from the mental lexicon.
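The Farah and McClelland result can be illustrated with the toy lesioning simulation promised above. The concepts, feature counts, damage rate, and recognition threshold below are all schematic assumptions, not the original model’s parameters.

```python
import random

# Toy illustration of Farah & McClelland (1991): semantic memory split into
# visual and functional subsystems. Concepts and feature proportions are
# invented; living things lean on visual features, artifacts on functional.
CONCEPTS = {
    "zebra":  {"visual": [1, 1, 1, 1], "functional": [1]},
    "robin":  {"visual": [1, 1, 1, 0], "functional": [1]},
    "hammer": {"visual": [1, 0],       "functional": [1, 1, 1, 1]},
    "wrench": {"visual": [0, 1],       "functional": [1, 1, 1, 0]},
}

def lesion(concepts, subsystem, damage=0.75, seed=0):
    """Zero out a random fraction of units in one subsystem."""
    rng = random.Random(seed)
    damaged = {}
    for name, feats in concepts.items():
        units = [u if rng.random() > damage else 0 for u in feats[subsystem]]
        damaged[name] = {**feats, subsystem: units}
    return damaged

def recognizable(feats, threshold=0.5):
    """A concept survives if enough of its total featural support remains."""
    total = feats["visual"] + feats["functional"]
    return sum(total) / len(total) >= threshold

# Damage the visual subsystem: living things should suffer most, mirroring
# the category-specific deficit for living things described in the text.
after = lesion(CONCEPTS, "visual")
for name in CONCEPTS:
    print(name, "recognized" if recognizable(after[name]) else "impaired")
```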
This selected word then activates the lemma level, which contains the grammatical and semantic information. Reading words is similar to listening in the last two steps but very different in the earlier stages. In reading, the reader identifies orthographic units from the visual input, and these orthographic units are directly mapped onto orthographic word forms in the mental lexicon. Alternatively, the orthographic units might be translated into phonological units, which can then activate the phonological word form in the mental lexicon.

The smallest building blocks of spoken language are phonemes. Three main factors decide the actual speech sound a person produces. The first is voicing: all vowels are usually voiced, but some consonants are not. For consonants there are two further ways to modify the speech sound: the place of articulation (the part of the speech apparatus involved) and the manner of articulation (the manner in which the airstream is changed).

The listener’s brain faces a number of problems in understanding spoken language. First, there is the variability of the signal. Second, the word boundaries of speech are murky, making it difficult to identify words and sentences as separate chunks of information, as one can in written language; because of coarticulation there is silence within words and a lack of clear silence between them. While some researchers think there are discrete units of representation for the speech signal, others think this theory is wrong. One clue that helps the listener parse the spoken input is prosodic information, that is, the speech rhythm and the pitch of the speaker’s voice. Another cue, identified by Ann Cutler, is the use of strong syllables to indicate word boundaries (a schematic sketch of this strategy follows below).

Animal studies and fMRI and PET studies in humans reveal that the superior temporal cortex is an important area for sound perception. In the first step of hearing, the speech signal is processed by pathways in the brain that are not specialized for speech. Heschl’s gyri are located on the supratemporal plane, superior and medial to the superior temporal gyrus in each hemisphere, and contain the primary auditory cortex, which processes the auditory input first. PET and fMRI studies reveal that both Heschl’s gyri and the superior temporal gyri are activated by speech and non-speech sounds alike, indicating that this area is important for sound perception but not for specialized linguistic processes. The evidence suggests that the midsection of the superior temporal sulcus of both hemispheres is responsible for distinguishing speech from other sounds. In one study, Jeffrey Binder and colleagues (2000) identified that the areas most sensitive to speech sounds were more ventrolateral, in or near the superior temporal sulcus. It is difficult to identify the areas that are particularly important for processing phonemes because hearing a word always automatically activates both the phonemes and the meaning. For now, studies indicate that listening to spoken words activates the middle temporal gyrus, sometimes lateralized to the left hemisphere.

Written languages vary as well: some have shallow orthography while others, such as English, have deep orthography; some are alphabetic, some syllabic, and some logographic. The problem common to all of them is that the symbols they use do not resemble what they mean or represent.
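Here is the schematic sketch of Cutler’s strong-syllable strategy promised above. It assumes an input already divided into syllables tagged for stress, an idealization that real acoustic input never provides; the example words and tags are invented.

```python
# Schematic illustration of Ann Cutler's metrical segmentation strategy:
# posit a word boundary before each strong (stressed) syllable. Input
# syllables are tagged S (strong) or w (weak), an idealization of speech.

def segment(syllables):
    """Group syllables into candidate words, starting a new word
    at every strong syllable."""
    words, current = [], []
    for syllable, stress in syllables:
        if stress == "S" and current:
            words.append(current)
            current = []
        current.append(syllable)
    if current:
        words.append(current)
    return ["".join(w) for w in words]

# An invented stress-tagged stream for "conduct pertains to conduct".
stream = [("con", "S"), ("duct", "w"), ("per", "w"),
          ("tains", "S"), ("to", "w"), ("con", "S"), ("duct", "w")]
print(segment(stream))  # ['conductper', 'tainsto', 'conduct']
```

Note how the heuristic mis-segments "pertains", whose stress falls on its second syllable: strong syllables are a useful cue to word boundaries, not a guarantee.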
One model of how people recognize letters in written language is the pandemonium model proposed by O. G. Selfridge. In this model, the sensory input is stored as a memory by the image demon. The feature demons then decode the features in the stored image, the cognitive demons activate all the representations of letters sharing those features, and finally the decision demon selects the representation that best matches the input. A contrasting view came from James McClelland and David Rumelhart, whose model consists of three levels: a layer for the features of the letters of words, a layer for letters, and a layer for the representation of words. While this system allows information from higher layers to influence earlier layers, the model by Selfridge only allows bottom-up flow of information. In addition, while McClelland and Rumelhart allow parallel action, enabling processing of several letters at the same time, the Selfridge model allows processing of only one letter at a time.

Some recent studies using PET and fMRI have provided information on where humans process letters in the brain. The identification of orthographic units may take place in occipitotemporal regions of the left hemisphere. A study by Gregory McCarthy at Yale University found that occipitotemporal cortex was activated in response to unpronounceable letter strings, and the same result was found by Nobre et al. (1994).

According to researchers, lexical processing involves three components: lexical access, lexical selection, and lexical integration. Researchers suggest dual-route reading models: one route runs directly from orthography to the word form, while an indirect route translates the written input into phonology, which is then mapped onto the word form. Spoken language also differs from written input in that the listener can easily lose track of the information flow, whereas the reader can reread.

The cohort model by William Marslen-Wilson holds that processing in speech starts with the very first sound or phoneme the listener identifies as the onset of a word. Out of the activated word form representations, the one that best matches the sensory input is then selected; this is called lexical selection. A further difficulty is that English has only 30 to 40 phonemes, so many words closely resemble each other. To solve this problem, the brain uses sublexical cues such as stress patterns in words. Recent studies reveal that the competition between word candidates is not limited to words sharing the same word-initial cohort, as the cohort model proposed, but extends to all lexical forms that partially overlap with the speech input. For example, the word “strange” will activate words like “strain”, “range”, “straight”, and “change”. A toy version of the cohort idea is sketched below.

To explain auditory word perception in the brain, Jeffrey Binder and colleagues (2000) proposed a hierarchical model of speech processing. In this model, auditory information moves from the auditory cortex in Heschl’s gyri to the superior temporal gyrus and then to the superior temporal sulcus. From there, the information moves to the middle temporal gyrus and the inferior temporal gyrus, the first places where phonological and lexical-semantic aspects of words are processed. After that, the information is processed in the angular gyrus and more anterior regions in the temporal pole. The last four areas are lateralized more to the left hemisphere.
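Below is the toy version of the cohort model referred to above. It assumes a six-word lexicon and uses letters as stand-ins for phonemes, both simplifications for illustration.

```python
# Toy version of Marslen-Wilson's cohort model: as each phoneme arrives,
# the candidate set (the "cohort") narrows to words still consistent with
# the input heard so far.

LEXICON = ["strange", "strain", "straight", "strap", "range", "change"]

def cohort(input_so_far, lexicon=LEXICON):
    """Words whose onset matches everything heard so far."""
    return [w for w in lexicon if w.startswith(input_so_far)]

# Lexical selection: the cohort shrinks with every new segment until a
# single candidate uniquely matches the input.
heard = ""
for phoneme in "strange":
    heard += phoneme
    print(f"after '{heard}': {cohort(heard)}")
```

Note that a pure onset cohort never activates "range" or "change" for the input "strange"; capturing the partial-overlap competition described above would require matching word-internal substrings as well.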
In the case of written input, the perceptual analysis of letters takes place in the primary and secondary visual cortex of both hemispheres, but the orthographic units are recognized in occipitotemporal regions of the left hemisphere. Studies show that the middle temporal gyrus is involved in word processing, especially its semantic aspects, and that the orthographic input is converted into phonological information in the left inferior frontal gyrus.

Linguistic context plays a role in deciding the meaning of words. While context representations are crucial for determining the sense and grammatical form of the word to be used, sensory analysis is equally necessary for a message representation to take place. Different models describe the interaction between the two. According to modular models of word comprehension, language comprehension takes place within separate and independent modules where information flows only bottom-up; contextual information does not affect lexical selection and comes into play only after all the meanings of a word have been activated. In contrast, interactive models propose that context influences lexical access and selection by affecting the activational status of the various possible word form representations. Combining these two, the hybrid model claims that lexical access is autonomous but is influenced by sensory and higher-level contextual information. This claim is supported by Pienie Zwitserlood, whose study revealed that lexical access can be guided by sensory input and higher-level contextual information, and that lexical selection can be influenced by sentence context. A toy contrast between the modular and interactive architectures is sketched below.

Semantic and syntactic integration is necessary to understand the right meaning of a word in the context of a sentence. According to studies, syntactic analysis continues even in the absence of any real meaning in a sentence. From a psycholinguistic point of view, reading sentences activates word forms (lexemes), which in turn activate the lemma level. The lemma contains information not only about syntactic properties but also about the possible sentence structures. Once the suitable lemma information is available, words are assigned their syntactic roles and arranged into groups, and the lemma information is inserted into the already built constituent structure. This process is called “parsing”. Unlike words, representations of whole sentences are not stored in the brain, because the brain cannot store the vast number of possible sentences; hence, parsing is not just a retrieval of stored sentence representations.

The garden path model of Lyn Frazier (1987) explains how parsing takes place. According to this model, we process syntactic information in a way that minimizes the time required for normal comprehension. The two mechanisms required for this are “minimal attachment” and “late closure”. Minimal attachment ensures that only the minimum number of additional syntactic nodes is computed, and late closure works to assign incoming words to the syntactic phrase or clause being processed at that point in time. However, studies reveal that other sources of information can immediately influence syntactic processing; according to interactive views, the brain uses semantic information to keep us from following the garden path.
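The modular/interactive contrast promised above can be made concrete with a toy disambiguation example. The ambiguous word, activation values, and context boost are invented; both architectures pick the same sense here, and the point is only where context is allowed to act.

```python
# Toy contrast between modular and interactive models of word
# comprehension, using the ambiguous word "bank". Activation values and
# the context-boost scheme are invented for illustration.

SENSES = {"bank": {"river_edge": 0.5, "money_institution": 0.5}}

def modular(word, context_topic):
    """Modular: all senses are activated bottom-up first; context is
    consulted only afterwards, at the selection stage."""
    senses = dict(SENSES[word])  # context cannot touch this access stage
    return max(senses, key=lambda s: (context_topic in s, senses[s]))

def interactive(word, context_topic):
    """Interactive: context boosts matching senses during access itself."""
    senses = {s: a + (0.4 if context_topic in s else 0.0)
              for s, a in SENSES[word].items()}
    return max(senses, key=senses.get)

print(modular("bank", "river"))      # river_edge, chosen after full access
print(interactive("bank", "river"))  # river_edge, context acted during access
```

The hybrid model described above corresponds to keeping access modular, as in the first function, while letting context weigh in at selection, which is what its final `max` already does.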
Studies show that syntactic priming occurs in language comprehension: it is easier to understand a difficult syntactic structure when it has been primed by an identical structure. The studies also reveal that repetition of the verb between prime and target structures is important for syntactic priming. While modular, context-free models suggest that other information does not affect the initial syntactic tree and that the mental lexicon contains no information about possible sentence structures, interactive models suggest that information about possible syntactic structures is stored with lexical representations and that both are activated together. A study by Thomas Munte and colleagues (1998) reveals that nonlinguistic sources of information can immediately influence normal sentence comprehension: conceptual knowledge of the temporal order of events was found to influence the processing of sentences from the very beginning. Hagoort tried to identify when semantic and pragmatic information is used by the brain in sentence comprehension and found that both types of information are accessed and integrated in parallel; in addition, the left inferior frontal gyrus is engaged by both word knowledge and world knowledge.

Discourse comprehension is quite different from the comprehension of words and sentences. A number of cues make discourse cohesive and understandable. One way is to link the current text to the larger discourse through repetition and synonyms, pronoun morphology, and discourse prominence; there are also nonlinguistic cues, usually based on our world knowledge. According to the traditional view, the lexical, syntactic, and semantic properties of a word are first integrated with the meaning of the local sentence context, and only then does the wider discourse exert its influence. However, constraint-based models claim that the influence of the wider discourse context is no more delayed than the influence of the lexical and syntactic properties of the local sentence. Anthony Sanford and colleagues found that some people interpret discourse without taking full account of the semantic information of each individual word, a phenomenon called “semantic illusion”. Jos van Berkum and his colleagues found that the semantic illusion occurs only when the meaning of the word is integrated into the overall representation of the discourse context.

Patients with agrammatic aphasia have difficulty understanding complex sentences. David Caplan and colleagues showed through PET studies that Broca’s area may be important for processing syntactic information. Similarly, Marcel Just and colleagues (1996) identified activation of Broca’s and Wernicke’s areas (lateral frontal and posterior superior temporal cortex), and Nina Dronkers found that syntactic processing is linked to the anterior portions of the superior temporal gyrus, near area 22.

Willem Levelt (1989) provided an influential model of language production. Language production starts with a concept for which appropriate words must be found. The first stage is the preparation of a message, which involves two aspects: macroplanning, which concerns the communicative intention, and microplanning, which involves taking perspective. The output of macro- and microplanning is the input for a hypothetical formulator, which produces the message in a grammatically and phonologically correct form (a schematic sketch of the staged pipeline appears just below).
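Here is the schematic sketch of Levelt’s staged pipeline mentioned above; it also anticipates the lemma-to-lexeme step discussed next. The tiny lexicon, concept label, and phonological codes are illustrative inventions, not part of the model itself.

```python
# Schematic sketch of Levelt's staged production model: a concept selects
# a lemma (syntactic/semantic word information) before the lexeme
# (phonological form) is encoded.

LEMMAS = {  # concept -> lemma with syntactic properties
    "FELINE_PET": {"lemma": "cat", "category": "noun", "number": "sg"},
}
LEXEMES = {  # lemma -> phonological code
    "cat": ["k", "ae", "t"],
}

def produce(concept):
    # Stage 1: conceptual preparation yields a preverbal message
    # (here, just the concept label).
    message = concept
    # Stage 2: lemma selection retrieves grammatical/semantic word info.
    lemma = LEMMAS[message]
    # Stage 3: phonological encoding at the lexeme level. A slip in the
    # transition between stages 2 and 3 is where sound exchanges arise.
    phonemes = LEXEMES[lemma["lemma"]]
    # Stage 4: articulation (stubbed here as returning the phoneme string).
    return "-".join(phonemes)

print(produce("FELINE_PET"))  # k-ae-t
```

In an interactive alternative such as Dell’s, discussed next, stage 3 would begin before stage 2 finishes, with activation cascading between the levels.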
Before phonological encoding, morphological encoding takes place, in which the formulator collects information from the lemma about syntactic and semantic properties. Speech errors can arise during the transition from the lemma level to the lexeme level, producing mixed-up speech sounds or exchanged words. However, the claim in this model that lemma selection takes place before phonological encoding is still under debate. In contrast to this view, interactive models like the one by Gary Dell (1986) suggest that phonological activation starts immediately after the semantic and syntactic information has been activated and that the two stages actually overlap. A study by Miranda van Turennout and colleagues (1999) tested whether lemma selection precedes lexeme activation and found that lemma selection might indeed occur before the phonological information at the lexeme level is activated.

Studies reveal that phonological encoding in speech production activates the left frontal operculum and that the posterior parts of Broca’s area are involved in the articulation of words. The motor cortex, the supplementary motor area, and the insula are also activated; a lesion in the insula results in apraxia of speech in patients with Broca’s aphasia. In addition, cortical stimulation of the basal temporal region of the left hemisphere in epileptic patients causes a temporary inability to produce words. This points to a widespread network of brain regions, mainly in the left hemisphere, involved in producing speech.

In sum, language processing involves representing, comprehending, and communicating symbolic information in written or spoken form. In reading and listening, the main issue is how the meanings of various words are stored in the brain and how they are accessed. It is still largely unclear where language hides in the mysterious reaches of the human brain.