Phonological contrasts are typically encoded with multiple acoustic correlates to ensure efficient communication. Studies have shown that such phonetic redundancy is found not only in segmental contrasts but also in suprasegmental contrasts such as tone. In Japanese, fundamental frequency (F0) is the primary cue to pitch accent; however, little is known about its secondary cues. In the present study, a perception experiment was conducted to examine whether any secondary cues exist for Japanese accent. First, minimal pairs of final-accented and unaccented words were identified using a database, yielding 14 pairs of words. These words were then produced by a native Tokyo Japanese speaker and presented to participants in both unedited and edited forms; the edited speech stimuli were created by replacing F0 in the natural speech stimuli with white noise. Although Tokyo Japanese speakers identified words more accurately in natural speech than in edited speech, accuracy for edited speech still exceeded chance level, suggesting the existence of secondary cues for Japanese accent. Acoustic analysis of the stimuli revealed that relative mean amplitude and relative maximum amplitude were greater for final-accented words than for unaccented words.
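The abstract does not describe how F0 was replaced with white noise; one standard way to produce such stimuli is noise-excited LPC resynthesis, which keeps the spectral envelope while swapping the voiced source for noise. The sketch below is a minimal, whole-signal illustration of that idea, not the authors' procedure; the function name, the LPC order, and the use of librosa are assumptions for illustration (a frame-by-frame version, e.g. in Praat, would be used in practice).

```python
import numpy as np
import librosa
from scipy.signal import lfilter

def replace_f0_with_noise(y: np.ndarray, order: int = 16) -> np.ndarray:
    """Hypothetical sketch: resynthesize `y` with a white-noise source.

    Inverse-filter the waveform with its LPC coefficients, swap the
    residual for amplitude-matched white noise, then filter back through
    the LPC envelope, removing periodicity (F0) but keeping the gross
    spectral shape.
    """
    a = librosa.lpc(y, order=order)      # LPC coefficients, a[0] == 1.0
    residual = lfilter(a, [1.0], y)      # inverse filter: strip the envelope
    noise = np.random.randn(len(y))
    noise *= np.sqrt(np.mean(residual ** 2) / np.mean(noise ** 2))
    return lfilter([1.0], a, noise)      # reimpose the envelope on noise
```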
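The abstract likewise leaves "relative mean amplitude" and "relative maximum amplitude" undefined; a plausible reading is the amplitude of the accent-bearing final mora relative to the rest of the word, in dB. The sketch below computes both measures under that assumption; the segmentation index and function names are hypothetical.

```python
import numpy as np

def rms_db(x: np.ndarray) -> float:
    """RMS amplitude in dB (arbitrary reference; +1e-12 avoids log(0))."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.asarray(x, float) ** 2)) + 1e-12)

def relative_amplitudes(word: np.ndarray, final_start: int):
    """Mean and maximum amplitude of the final mora relative to the rest
    of the word, in dB; `final_start` is the sample index where the final
    mora begins (assumed to come from a manual segmentation)."""
    head, final = word[:final_start], word[final_start:]
    rel_mean = rms_db(final) - rms_db(head)
    rel_max = 20.0 * np.log10((np.max(np.abs(final)) + 1e-12)
                              / (np.max(np.abs(head)) + 1e-12))
    return rel_mean, rel_max
```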
Whispered speech is a naturally produced mode of communication that lacks a fundamental frequency. Several other acoustic differences exist between whispered and voiced speech, such as speaking rate (measured as segment duration) and formant frequencies, and previous research has shown that listeners are less accurate at identifying linguistic information (e.g., identifying a speech sound) and speaker information (e.g., reporting speaker gender) from whispered speech. To further explore differences between voiced and whispered speech, acoustic differences were examined across three datasets (hVd, sVd, and ʃVd) and three speaker groups (ciswomen, transwomen, cismen). Consistent with previous studies, vowel duration was generally longer in whispered speech and formant frequencies were shifted higher, although the magnitude of these differences depended on vowel and gender. Despite the increase in duration, the acoustic vowel space area (measured either with a vowel quadrilateral or with a convex hull) was smaller in whispered speech, suggesting that larger vowel space areas are not an automatic consequence of a lengthened articulation. Overall, these findings are consistent with previous literature showing acoustic differences between voiced and whispered speech beyond the articulatory change of eliminating fundamental frequency.
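Both area measures named in the abstract are straightforward to compute from F1/F2 measurements. A minimal sketch, assuming per-token formant midpoints in Hz and hypothetical function names:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(f1: np.ndarray, f2: np.ndarray) -> float:
    """Convex-hull vowel space area from per-token (F2, F1) points, in
    Hz^2. For a 2-D hull, SciPy's `.volume` is the enclosed area
    (`.area` would be the perimeter)."""
    return ConvexHull(np.column_stack([f2, f1])).volume

def quadrilateral_area(corners: np.ndarray) -> float:
    """Shoelace formula for the vowel quadrilateral, given the four
    (F2, F1) corner-vowel means (e.g. /i ae a u/) in order around the
    shape."""
    x, y = corners[:, 0], corners[:, 1]
    return 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))
```

Comparing the same speaker's hull or quadrilateral area across voiced and whispered tokens would then quantify the shrinkage reported above.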