Papers - MARTIN Andrew Thomas
-
The sound symbolism of size and speed in Japanese vehicle and Pokémon character names
Andrew Martin
甲南大学紀要・文学編 171 53 - 58 2021.3
Single Work
DOI: 10.14990/00003759
-
Are Words Easier to Learn From Infant‐ Than Adult‐Directed Speech? A Quantitative Corpus‐Based Investigation Reviewed
Adriana Guevara‐Rukoz, Alejandrina Cristia, Bogdan Ludusan, Roland Thiollière, Andrew Martin, Reiko Mazuka, Emmanuel Dupoux
Cognitive Science 42 ( 5 ) 1586 - 1617 2018.7
Joint Work
-
Vowels in infant-directed speech: More breathy and more variable, but not clearer Reviewed
Kouki Miyazawa, Takahito Shinya, Andrew Martin, Hideaki Kikuchi, Reiko Mazuka
Cognition 2017.9
Joint Work
-
Utterances in infant-directed speech are shorter, not slower
Andrew Martin, Yosuke Igarashi, Nobuyuki Jincho, Reiko Mazuka
COGNITION 156 52 - 59 2016.11
Joint Work
Publisher:ELSEVIER
It has become a truism in the literature on infant-directed speech (IDS) that IDS is pronounced more slowly than adult-directed speech (ADS). Using recordings of 22 Japanese mothers speaking to their infant and to an adult, we show that although IDS has an overall lower mean speech rate than ADS, this is not the result of an across-the-board slowing in which every vowel is expanded equally. Instead, the speech rate difference is entirely due to the effects of phrase-final lengthening, which disproportionally affects IDS because of its shorter utterances. These results demonstrate that taking utterance-internal prosodic characteristics into account is crucial to studies of speech rate.
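The phrase-final-lengthening argument in this abstract can be illustrated with invented numbers (a toy sketch, not the paper's data or measurements):

```python
# Toy illustration (hypothetical durations, not the paper's data):
# every non-final mora lasts 100 ms and every utterance-final mora is
# lengthened to 180 ms in both registers; only utterance length differs.

def mean_rate(n_moras: int, base_ms: float = 100.0, final_ms: float = 180.0) -> float:
    """Mean speech rate (moras per second) for one utterance."""
    total_ms = base_ms * (n_moras - 1) + final_ms
    return n_moras / (total_ms / 1000.0)

ads_rate = mean_rate(12)  # longer, adult-directed utterance
ids_rate = mean_rate(4)   # shorter, infant-directed utterance

# The shorter utterance has a lower mean rate even though every
# non-final mora is articulated at exactly the same speed.
assert ids_rate < ads_rate
print(f"ADS: {ads_rate:.2f} moras/s, IDS: {ids_rate:.2f} moras/s")
```

Because the fixed final lengthening is averaged over fewer moras in a short utterance, the mean rate of IDS drops even though nothing is articulated more slowly.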
-
Learnability of prosodic boundaries: Is infant-directed speech easier?
Bogdan Ludusan, Alejandrina Cristia, Andrew Martin, Reiko Mazuka, Emmanuel Dupoux
JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 140 ( 2 ) 1239 - 1250 2016.8
Joint Work
Publisher:ACOUSTICAL SOC AMER AMER INST PHYSICS
This study explores the long-standing hypothesis that the acoustic cues to prosodic boundaries in infant-directed speech (IDS) make those boundaries easier to learn than those in adult-directed speech (ADS). Three cues (pause duration, nucleus duration, and pitch change) were investigated, by means of a systematic review of the literature, statistical analyses of a corpus of Japanese, and machine learning experiments. The review of previous work revealed that the effect of register on boundary cues is less well established than previously thought, and that results often vary across studies for certain cues. Statistical analyses run on a large database of mother-child and mother-interviewer interactions showed that the duration of a pause and the duration of the syllable nucleus preceding the boundary are two cues which are enhanced in IDS, while f0 change is actually degraded in IDS. Supervised and unsupervised machine learning techniques applied to these acoustic cues revealed that IDS boundaries were consistently better classified than ADS ones, regardless of the learning method used. The role of the cues examined in this study and the importance of these findings in the more general context of early linguistic structure acquisition are discussed.
DOI: 10.1121/1.4960576
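The machine learning experiments in this abstract applied supervised and unsupervised methods over all three cues; as a toy sketch of the basic classification setup only (invented pause durations, not the corpus data), a single-cue threshold classifier looks like this:

```python
# Minimal sketch: label a word boundary as a prosodic-phrase boundary
# when the following pause is long enough. The threshold and the toy
# data below are invented for illustration.

def classify(pause_ms: float, threshold_ms: float = 50.0) -> bool:
    """True if the pause is long enough to count as a boundary cue."""
    return pause_ms >= threshold_ms

# (pause duration in ms, is this really a prosodic boundary?)
toy_data = [(0.0, False), (10.0, False), (120.0, True),
            (200.0, True), (60.0, False), (80.0, True)]

correct = sum(classify(pause) == label for pause, label in toy_data)
accuracy = correct / len(toy_data)
print(f"accuracy on toy data: {accuracy:.2f}")
```

The paper's point is then a comparison of such classification accuracy between IDS and ADS tokens: if the cue is enhanced in IDS, the same kind of learner separates IDS boundaries from non-boundaries more reliably.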
-
Mothers Speak Less Clearly to Infants Than to Adults: A Comprehensive Test of the Hyperarticulation Hypothesis
Andrew Martin, Thomas Schatz, Maarten Versteegh, Kouki Miyazawa, Reiko Mazuka, Emmanuel Dupoux, Alejandrina Cristia
PSYCHOLOGICAL SCIENCE 26 ( 3 ) 341 - 347 2015.3
Joint Work
Publisher:SAGE PUBLICATIONS INC
Infants learn language at an incredible speed, and one of the first steps in this voyage is learning the basic sound units of their native languages. It is widely thought that caregivers facilitate this task by hyperarticulating when speaking to their infants. Using state-of-the-art speech technology, we addressed this key theoretical question: Are sound categories clearer in infant-directed speech than in adult-directed speech? A comprehensive examination of sound contrasts in a large corpus of recorded, spontaneous Japanese speech demonstrates that there is a small but significant tendency for contrasts in infant-directed speech to be less clear than those in adult-directed speech. This finding runs contrary to the idea that caregivers actively enhance phonetic categories in infant-directed speech. These results suggest that to be plausible, theories of infants' language acquisition must posit an ability to learn from noisy data.
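A discriminability measure of the general ABX kind referred to in this abstract can be sketched as follows (toy one-dimensional "acoustic" values and hypothetical names, not the paper's actual pipeline or features):

```python
# Sketch of an ABX-style discriminability score: X (drawn from A's
# category) is judged correctly when it lies closer to the same-category
# token A than to the other-category token B.

from itertools import product

def abx_score(cat_a: list[float], cat_b: list[float]) -> float:
    """Fraction of (A, B, X) triples, with X from A's category,
    in which X is closer to A than to B."""
    trials = correct = 0
    for a, x in product(cat_a, repeat=2):
        if a == x:
            continue  # A and X must be distinct tokens
        for b in cat_b:
            trials += 1
            correct += abs(x - a) < abs(x - b)
    return correct / trials

# Well-separated categories are easy to discriminate...
clear = abx_score([1.0, 1.2, 0.9], [3.0, 3.1, 2.8])
# ...overlapping ones are not.
overlapping = abx_score([1.0, 2.0, 3.0], [1.1, 2.1, 2.9])
assert clear > overlapping
```

On this kind of measure, "less clear" contrasts in infant-directed speech correspond to lower scores, i.e., more category overlap in the acoustics.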
-
The multidimensional nature of hyperspeech: Evidence from Japanese vowel devoicing
Andrew Martin, Akira Utsugi, Reiko Mazuka
COGNITION 132 ( 2 ) 216 - 228 2014.8
Joint Work
Publisher:ELSEVIER SCIENCE BV
We investigate the hypothesis that infant-directed speech is a form of hyperspeech, optimized for intelligibility, by focusing on vowel devoicing in Japanese. Using a corpus of infant-directed and adult-directed Japanese, we show that speakers implement high vowel devoicing less often when speaking to infants than when speaking to adults, consistent with the hyperspeech hypothesis. The same speakers, however, increase vowel devoicing in careful, read speech, a speech style which might be expected to pattern similarly to infant-directed speech. We argue that both infant-directed and read speech can be considered listener-oriented speech styles: each is optimized for the specific needs of its intended listener. We further show that in non-high vowels, this trend is reversed: speakers devoice more often in infant-directed speech and less often in read speech, suggesting that devoicing in the two types of vowels is driven by separate mechanisms in Japanese.
-
Computational Approaches to the Acquisition of Phoneme Categories Invited Reviewed
Andrew Martin
Journal of the Phonetic Society of Japan 2013.12
Single Work
-
Learning Phonemes With a Proto-Lexicon Reviewed
Andrew Martin, Sharon Peperkamp, Emmanuel Dupoux
COGNITIVE SCIENCE 37 ( 1 ) 103 - 124 2013.1
Joint Work
Authorship:Lead author
Publisher:WILEY-BLACKWELL
Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language-specific phoneme categories, but how these categories are learned largely remains a mystery. Peperkamp, Le Calvez, Nadal, and Dupoux (2006) present an algorithm that can discover phonemes using the distributions of allophones as well as the phonetic properties of the allophones and their contexts. We show that a third type of information source, the occurrence of pairs of minimally differing word forms in speech heard by the infant, is also useful for learning phonemic categories and is in fact more reliable than purely distributional information in data containing a large number of allophones. In our model, learners build an approximation of the lexicon consisting of the high-frequency n-grams present in their speech input, allowing them to take advantage of top-down lexical information without needing to learn words. This may explain how infants have already begun to exhibit sensitivity to phonemic categories before they have a large receptive lexicon.
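The n-gram proto-lexicon idea described in this abstract can be sketched in a few lines (a toy illustration over character strings, not the authors' model or data):

```python
# Toy sketch: the "proto-lexicon" is just the set of high-frequency
# n-grams in the input, and two sounds look phonemic if swapping one
# for the other inside a proto-word yields another proto-word
# (i.e., the proto-lexicon contains a minimal pair).

from collections import Counter

def proto_lexicon(corpus: str, n: int = 3, min_count: int = 2) -> set[str]:
    """High-frequency character n-grams, standing in for word candidates."""
    counts = Counter(corpus[i:i + n] for i in range(len(corpus) - n + 1))
    return {gram for gram, c in counts.items() if c >= min_count}

def minimal_pair_exists(lexicon: set[str], s1: str, s2: str) -> bool:
    """Is there a proto-word pair differing only in s1 vs. s2?"""
    for word in lexicon:
        for i, ch in enumerate(word):
            if ch == s1 and word[:i] + s2 + word[i + 1:] in lexicon:
                return True
    return False

corpus = "pataka" * 3 + "badaka" * 3   # toy input with a [p]/[b] contrast
lex = proto_lexicon(corpus)
print(minimal_pair_exists(lex, "p", "b"))
```

The key design point is the one the abstract makes: this top-down evidence requires only frequent sound sequences, not a lexicon of actual words, so nonwords in the proto-lexicon do no harm as long as they preserve the contrasts.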
-
(Non)words, (non)words, (non)words: Evidence for a protolexicon during the first year of life
Céline Ngon, Andrew Martin, Emmanuel Dupoux, Dominique Cabrol, Michel Dutat, Sharon Peperkamp
Developmental Science 16 ( 1 ) 24 - 34 2013.1
Joint Work
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word-like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant-directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French-learning 11-month-old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high-frequency than to low-frequency sequences. In Experiment 3, we compare high-frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French-learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a 'protolexicon', containing both words and nonwords.
-
Is the vowel length contrast in Japanese exaggerated in infant-directed speech?
Keiichi Tajima, Kuniyoshi Tanaka, Andrew Martin, Reiko Mazuka
INTERSPEECH 2013, 14th Annual Conference of the International Speech Communication Association, Lyon, France, August 25-29, 2013 3211 - 3215 2013
Joint Work
Publisher:ISCA
Other Link: http://dblp.uni-trier.de/db/conf/interspeech/interspeech2013.html#conf/interspeech/TajimaTMM13
-
Speech perception and phonology Reviewed
Andrew Martin, Sharon Peperkamp
The Blackwell Companion to Phonology 2334 - 2356 2011.1
Joint Work
Authorship:Lead author
-
Grammars leak: Modeling how phonotactic generalizations interact within the grammar Reviewed
Andrew Martin
Language 2011.1
Single Work
-
The Evolving Lexicon
Andrew Thomas Martin
2007.12
Single Work
-
Loanwords as pseudo-compounds in Malagasy
Andrew Martin
Proceedings of the Twelfth Annual Conference of the Austronesian Formal Linguistics Association 287 - 295 2005
Single Work