Phonemes
Sign phonemes, also called cheremes, consist of units smaller than the sign. These are subdivided into ''parameters'': handshapes with a particular orientation that may perform some type of movement in a particular location on the body or in the signing space.
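A minimal data-model sketch can make this parameter structure concrete. The Python below is illustrative only: the fields follow the parameters named in this article (handshape, orientation, movement, location), while the string labels and example values are hypothetical, not a real ASL inventory.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    """A simplified bundle of a sign's manual parameters.
    The values used below are hypothetical labels, not a real inventory."""
    handshape: str    # e.g. a handshape label such as "B" or "5"
    orientation: str  # palm orientation, e.g. "palm-in"
    movement: str     # e.g. "tap", "circular", "none"
    location: str     # place of articulation, e.g. "chin"

# Changing a single parameter yields a different sign (hypothetical values).
sign_a = Sign(handshape="B", orientation="palm-in", movement="tap", location="chin")
sign_b = Sign(handshape="B", orientation="palm-in", movement="tap", location="forehead")
assert sign_a != sign_b
```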
Phonotactics
As yet, little is known about ASL phonotactic constraints (or those in other signed languages). The Symmetry and Dominance Conditions are sometimes assumed to be phonotactic constraints. The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement. The Dominance Condition requires that only one hand in a two-handed sign moves if the hands do not have the same handshape specifications, ''and'' that the non-dominant hand has an unmarked handshape. As cross-linguistic research finds these conditions in more and more signed languages, they may be general constraints on signed languages rather than specific to ASL phonotactics. Six types of signs have been suggested:
* one-handed signs made without contact,
* one-handed signs made with contact (excluding contact on the other hand),
* symmetric two-handed signs (signs in which both hands are active and perform the same action),
* asymmetric two-handed signs (signs in which one hand is active and one hand is passive) where both hands have the same handshape,
* asymmetric two-handed signs where the hands have differing handshapes, and
* compound signs (which combine two or more of the above types).
The non-dominant hand in asymmetric signs often functions as the location of the sign. Monosyllabic signs are the most common type of sign in ASL and other sign languages.
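The two conditions can be sketched as simple checks over a two-handed sign. The Python below is a rough illustration under simplifying assumptions: mirrored configurations are collapsed into plain equality, and the set of unmarked handshapes is an illustrative sample of this sketch, not a definitive inventory.

```python
from dataclasses import dataclass

@dataclass
class Hand:
    """One hand's behaviour in a two-handed sign (simplified)."""
    handshape: str
    orientation: str
    movement: str  # "none" marks a passive (non-moving) hand

# Illustrative sample of unmarked handshapes (an assumption of this sketch).
UNMARKED_HANDSHAPES = {"B", "A", "S", "C", "O", "1", "5"}

def satisfies_symmetry(dominant: Hand, nondominant: Hand) -> bool:
    """Symmetry Condition: when both hands move, they must share handshape,
    orientation, and movement (mirroring is collapsed to equality here)."""
    if dominant.movement == "none" or nondominant.movement == "none":
        return True  # only constrains signs in which both hands move
    return (dominant.handshape == nondominant.handshape
            and dominant.orientation == nondominant.orientation
            and dominant.movement == nondominant.movement)

def satisfies_dominance(dominant: Hand, nondominant: Hand) -> bool:
    """Dominance Condition: when the handshapes differ, only the dominant
    hand moves and the non-dominant handshape must be unmarked."""
    if dominant.handshape == nondominant.handshape:
        return True  # only constrains signs with differing handshapes
    return (nondominant.movement == "none"
            and nondominant.handshape in UNMARKED_HANDSHAPES)

# An asymmetric sign with differing handshapes: the passive hand serves as
# the location, so the condition is satisfied (hypothetical values).
print(satisfies_dominance(Hand("1", "palm-down", "tap"), Hand("B", "palm-up", "none")))  # True
```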
Allophony and assimilation
Each phoneme may have multiple allophones, i.e. different realizations of the same phoneme.
Phonological processing in the brain
The brain processes language phonologically by first identifying the smallest units in an utterance and then combining them to make meaning. In spoken language, these smallest units are the phonemes, the smallest sounds we identify in a spoken word. In sign language, the smallest units are the parameters of a sign (i.e. handshape, location, movement and palm orientation), which can be identified within a produced sign. This cognitive process can be described as segmentation and categorization: the brain recognizes the individual parts within the sign and combines them to form meaning, much as spoken language combines sounds to form syllables and then words. Even though the modalities of these languages differ (spoken vs. signed), the brain still processes both through segmentation and categorization.

Measuring brain activity while a person produces or perceives sign language reveals that the brain processes signs differently from ordinary hand movements, just as it differentiates spoken words from sounds that carry no meaning. More specifically, the brain distinguishes actual signs from the transitional movements between signs, similarly to how words in spoken language can be identified separately from the sounds or breaths between words that carry no linguistic meaning. Multiple studies have found enhanced brain activity during the processing of sign language compared with the processing of hand movements alone. For example, during an awake brain surgery on a deaf patient, neural activity was recorded and analyzed while the patient was shown videos in American Sign Language. Greater brain activity occurred while the patient was perceiving actual signs than during the transitions into the next sign. This means the brain is segmenting the units of the sign and identifying which units combine to form actual meaning.

One observed difference between spoken and signed language in the location of phonological processing is the activation of brain areas specific to auditory versus visual stimuli. Because of the modality difference, different cortical regions are stimulated depending on the type of language. Spoken language produces sounds, which engage the auditory cortices in the superior temporal lobes; sign language produces visual stimuli, which engage the occipitotemporal regions. Yet both modes of language activate many of the same regions known for language processing in the brain. For example, the left superior temporal gyrus is stimulated by language in both spoken and signed forms, even though it was once assumed to respond only to auditory stimuli. Whether language is spoken or signed, the brain processes it by segmenting the smallest phonological units and combining them to make meaning.
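The segmentation-and-categorization idea can be illustrated with a deliberately simple sketch. This is not a model of neural processing: it merely separates sign segments from transitional movement in a labelled stream and maps each segment to a meaning; the stream, labels, and lexicon are all invented for the example.

```python
# Toy illustration of segmentation and categorization (invented data):
# discard transitional movement, keep sign segments, then map each segment
# to a meaning, much as phonological units combine into words.
stream = ["SIGN:MOTHER", "TRANSITION", "SIGN:LOVE", "TRANSITION", "SIGN:CHILD"]
lexicon = {"MOTHER": "mother", "LOVE": "love", "CHILD": "child"}

def segment_and_categorize(frames, meanings):
    # Segmentation: keep only frames that carry linguistic content.
    segments = [f.split(":", 1)[1] for f in frames if f.startswith("SIGN:")]
    # Categorization: map each recognized segment to its meaning.
    return [meanings[s] for s in segments]

print(segment_and_categorize(stream, lexicon))  # ['mother', 'love', 'child']
```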
Bibliography
* Battison, R. (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
* Brentari, D. (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
* Hulst, Harry van der (1993). Units in the analysis of signs. Phonology 10, 209–241.
* Liddell, Scott K. & Robert E. Johnson (1989). American Sign Language: The phonological base. Sign Language Studies 64, 197–277.
* Perlmutter, D. (1992). Sonority and syllable structure in American Sign Language. Linguistic Inquiry 23, 407–442.
* Sandler, W. (1989). Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language. Dordrecht: Foris.
* Stokoe, W. (1960). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf (1993 reprint ed.). Silver Spring, MD: Linstok Press.
* Van der Kooij, E. (2002). Phonological Categories in Sign Language of the Netherlands: The Role of Phonetic Implementation and Iconicity. PhD thesis, Universiteit Leiden, Leiden.