Office: Redmond Barry 621
Context plays an important role in how people use and interpret language. For the most part, work in language processing investigates the effects of context without asking what determines a context in the first place. For example, the interpretation of any referential expression (e.g., "the chipped cup") must take into account a referential domain (e.g., the set of cups visually present). In past work, we have demonstrated that people use perceptual cues to establish a referential domain, or linguistic context. We are currently investigating the automaticity of context construction: do we automatically build linguistic contexts as we go about our daily lives, or do we only build a context when there is a linguistic signal that needs to be understood?
Current models of how children learn words focus on the mapping between words and objects. By keeping track of the co-occurrence of objects and words across multiple contexts, children should form the correct associations between words and objects. This cross-situational learning, however, is only one component of the problem that young children face. In addition to linking words with specific objects, children have to learn that words extend to multiple objects, which requires learning concepts and their mappings to words. Mapping words to concepts captures the fact that we often use a word (e.g., "spoon") not to refer to a single unique object but to refer to many different objects (e.g., any of the spoons in the drawer). We have modeled the acquisition of word-concept mappings, using the assumptions of cross-situational learning, as joint inference over words and concepts. Our model shows how children could learn this mapping by trading off the simplicity of concepts against the ability of the word-concept mappings to explain observed word-object pairings. We have also implemented a recursive version of our model that allows children to use word-concept mappings they have already learned to aid the learning of future word-concept mappings.
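The core idea behind cross-situational learning can be illustrated with a minimal sketch. This is a hypothetical illustration of co-occurrence tracking, not our actual model (which performs joint inference over words and concepts); the scene data and function names here are invented for the example:

```python
from collections import defaultdict

def cross_situational_learn(scenes):
    """Tally word-object co-occurrences across scenes, then map each
    word to the object it co-occurred with most often.

    scenes: list of (words, objects) pairs observed together."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in scenes:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    # Each individual scene is ambiguous; aggregation resolves it.
    return {w: max(obj_counts, key=obj_counts.get)
            for w, obj_counts in counts.items()}

# Three ambiguous scenes: no single scene identifies any referent,
# but the consistent pairing wins out across scenes.
scenes = [
    (["ball", "dog"], ["BALL", "DOG"]),
    (["ball", "cup"], ["BALL", "CUP"]),
    (["dog", "cup"],  ["DOG", "CUP"]),
]
lexicon = cross_situational_learn(scenes)
# lexicon == {"ball": "BALL", "dog": "DOG", "cup": "CUP"}
```

Note that this sketch stops at word-object links; the concept-learning step described above would replace the single best object with a concept covering many objects.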
Future research with this model is headed in three directions: 1) How do the characteristics of the observed data (e.g., ego-centric vs. non-ego-centric input) influence word learning? 2) How might children shift from constructing concepts based on perceptual features to concepts based on relationships? and 3) How might the communicative nature of language influence children's patterns of word extension?
Models of word learning vary in how much weight they place on data. Early word learning might critically depend on cognitive maturation, or it might depend on observing and retaining data. Further, there is evidence that children can learn from single instances of data and that they can learn by keeping track of word-object co-occurrence across contexts over time. We have developed a data analysis model of word learning to assess how much of early word learning is data-driven and to characterize the typical data-driven mechanism: How much data is needed? How frequently do children receive data? And when do children start paying attention to data? We found that early word learning is highly data-driven. Across 13 languages, children start learning to comprehend words from birth and to produce words around 6 months of age. For both receptive and productive acquisition, words require about 10 effective learning instances, which arrive on average once every two months. Future research will focus on how different characteristics of words and characteristics of children might influence the data-driven learning mechanism. For example, do early, late, and average talkers differ in how they use data or in when they start attending to data?
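As a back-of-envelope illustration of what these estimates imply, the sketch below combines them under the simplifying assumption (mine, not the model's) that effective learning instances arrive at a constant average rate:

```python
# Estimates from the text: ~10 effective learning instances per word,
# arriving on average once every two months.
INSTANCES_NEEDED = 10
MONTHS_PER_INSTANCE = 2

def expected_acquisition_age(start_month):
    """Rough expected age (in months) at which a typical word is
    acquired, assuming a constant rate of learning instances."""
    return start_month + INSTANCES_NEEDED * MONTHS_PER_INSTANCE

comprehension = expected_acquisition_age(0)  # attending from birth
production = expected_acquisition_age(6)     # attending from ~6 months
# comprehension == 20, production == 26 (months)
```

This toy timeline (typical comprehension around 20 months, production around 26 months) is only an arithmetic consequence of the averages; the actual model estimates these quantities from acquisition data across languages.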