
Tuesday, December 6, 2011

Music and Language

The current article we are reading for class is all about music and language. I'm going to bring up some more research on the correlations between music and language. (Sorry, I can't help myself; I'll try to tone down the psycholingo.)

Jackendoff (2009) mentions seven cognitive similarities in how music and language are learned and understood:

1. Both necessarily involve memory.
2. Both combine units in working memory according to systematic rules and organized schemata.
3. Both involve making predictions.
4. Both involve controlled muscle movement of the mouth or hands (mouth for talking, singing, and wind instruments; hands for sign language, percussion instruments, and string instruments).
5. Acquisition of both involves the aural reproduction of what others sing, say, or do.
6. We have the capability to create new words and songs.
7. Both can be performed jointly with other people (speaking or playing together).

Jackendoff notes that these capacities are not unique to music and language.

One cognitive ability that he mentions as applying only to music and language is that both entail a pattern of sounds organized in time. Yet language relies on syllables while music relies on notes, which can vary widely in length. Intonation in speech is comparatively invariable because it tends to move between two stable pitches. However, these two pitches are not fixed the way the first and fifth scale degrees are in music. Furthermore, two separate areas of the brain are in charge of prosodic intonation and musical pitch (Jackendoff, 2009). Yet contrary evidence suggests that prosodic grouping involves brain regions similar to those for musical grouping: the mental gathering and chunking of components into separate hierarchical levels (Patel, 2006).
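
To make concrete what "fixed" scale degrees mean, here is a tiny Python sketch (the equal-temperament tuning and A4 = 440 Hz reference are my own illustrative assumptions, not from the article): the fifth always sits at a fixed frequency ratio above the tonic, whereas the two pitch targets of speech intonation vary from speaker to speaker.

```python
# A minimal sketch: fixed pitch ratios in music, assuming equal
# temperament and A4 = 440 Hz (illustrative choices only).
tonic = 440.0                          # first scale degree (A4)
fifth = tonic * 2 ** (7 / 12)          # fifth scale degree, 7 semitones up

print(f"Tonic: {tonic:.1f} Hz")        # 440.0 Hz
print(f"Fifth: {fifth:.1f} Hz")        # about 659.3 Hz
print(f"Ratio: {fifth / tonic:.3f}")   # about 1.498, close to 3:2, for any tonic
```
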
Jackendoff claims that music processing does not match prosodic processing because music does not have syntax, which is the basis of this mental hierarchy. Since music has no conceptual meaning, it has no counterpart to this mental organization (Jackendoff, 2009).

Contrary to Jackendoff's claim, Patel, Peretz, Tramo, and Labreque (1998) examined syntactic commonalities between music and language and found that both have syntax.

Slevc, Rosenberg, and Patel (2009) found that musical syntax and linguistic syntax use the same cognitive resources, while musical syntax and linguistic semantics do not occupy the same mental resources.

Yet even though semantics do not occupy the same cognitive resources as musical syntax, listening to music still primes semantically related words. Koelsch, Kasper, Sammler, Schulze, Gunter, and Friederici (2004) found that words the composers had used to describe their pieces semantically primed words in a lexical decision task. (In a lexical decision task, letter strings that are either words or non-words are presented, and participants must quickly judge "yes" or "no" as to whether each is a word; for example, "manty" would be a non-word. Both related and unrelated words were used.) So when participants listened to pieces that the composers labeled as happy, they were quicker to respond "yes" to the word "happy" than to "sad." This shows that listening to sad music on some level makes us think of the word "sad": it semantically primes it.
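
If you want to see the logic of the paradigm, here is a minimal console sketch in Python (the stimuli, trial count, and timing method are my own illustrative assumptions; real experiments use calibrated presentation software and many more trials). Semantic priming would show up as faster correct "yes" responses to words related to the musical prime.

```python
import time

# A minimal sketch of a lexical decision task (illustrative only).
# Each trial shows a letter string; the participant types "y" if it
# is a real word and "n" if it is not, as quickly as possible.
trials = [
    ("happy", True),   # word, related to a "happy" musical prime
    ("sad", True),     # word, unrelated to that prime
    ("manty", False),  # non-word (the example from the post)
]

results = []
for string, is_word in trials:
    start = time.monotonic()
    answer = input(f"Is '{string}' a word? (y/n): ").strip().lower()
    rt = time.monotonic() - start          # reaction time in seconds
    correct = (answer == "y") == is_word
    results.append((string, correct, rt))

# Faster correct "yes" responses to prime-related words indicate priming.
for string, correct, rt in results:
    print(f"{string}: {'correct' if correct else 'wrong'}, RT = {rt:.3f} s")
```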

I hope I didn't confuse anyone, and hopefully you learned a little bit about music perception!

Question: Jackendoff mentions eight similarities between language and music (seven that do not pertain only to music and language, and one that does). Can you think of any other correlations or similarities between music and language?

References

Jackendoff, R. (2009). Parallels and nonparallels between language and music. Music Perception, 26(3), 195-204.

Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A. D. (2004). Music, language and meaning: Brain signatures of semantic processing. Nature Neuroscience, 7(3), 302-307.

Patel, A. D. (2006). Musical rhythm, linguistic rhythm, and human evolution. Music Perception, 24(1), 99-104.

Patel, A. D., Peretz, I., Tramo, M., & Labreque, R. (1998). Processing prosodic and musical patterns: A neuropsychological investigation. Brain and Language, 61(1), 123-144.


Slevc, L., Rosenberg, J. C., & Patel, A. D. (2009). Making psycholinguistics musical: Self-paced reading time evidence for shared processing of linguistic and musical syntax. Psychonomic Bulletin & Review, 16(2), 374-381.
