One of children's main achievements in the first year of life is learning to manipulate their vocal tracts reliably enough to produce sounds that become progressively more speech-like. Because the human vocal tract is controlled by dozens of muscles, and the mapping from muscle activations to vocalization acoustics is highly nonlinear, this is a nontrivial learning task. I will present a computational neural network model, currently in development, of how human infants learn to control their vocal tract muscles to produce speech-like vocalizations. The model actively explores its vocal capabilities by randomly activating vocal tract muscles; these muscle activations set the parameters of a realistic vocalization synthesizer, producing a sound. If the sound has desirable acoustic properties, the model receives reinforcement for its action and updates its neuromuscular connection weights so that sounds like the one just produced become more likely in the future. If the sound lacks desirable acoustic properties, the model is not reinforced and its neuromuscular connections are left unchanged. Two aspects of early vocalization learning are modeled, (1) learning to reliably produce voicing at the larynx and (2) learning to produce vowels that resemble those of the ambient language, to show that the proposed mechanism can subserve multiple aspects of vocal development. The model thus combines exploration, reinforcement, and self-organization of neuromuscular connections to support the production of increasingly speech-like vocalizations.
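The explore-reinforce-update cycle described above can be sketched in miniature. Everything here is an illustrative assumption rather than the model's actual implementation: a one-dimensional "acoustic feature" stands in for the synthesizer's output, `toy_synthesizer` and `is_desirable` are hypothetical helpers, and the "neuromuscular weights" are reduced to one preferred activation per muscle.

```python
# Toy sketch of exploration-plus-reinforcement vocal learning.
# Assumed stand-ins (not the model described in the abstract):
# a 3-muscle "vocal tract", a scalar acoustic feature, and a
# fixed reward window around a target feature value.
import random

random.seed(0)

N_MUSCLES = 3
LEARNING_RATE = 0.3   # how far weights shift toward a rewarded attempt
EXPLORATION = 0.4     # amplitude of random motor noise


def toy_synthesizer(activations):
    # Stand-in nonlinear mapping from muscle activations to one
    # acoustic feature (e.g., a crude proxy for voicing strength).
    return sum(a * a for a in activations) / len(activations)


def is_desirable(feature, target=0.5, tol=0.15):
    # Reinforcement signal: reward sounds whose feature lands near a target.
    return abs(feature - target) < tol


# "Neuromuscular connection weights": here, simply the model's
# current preferred activation level for each muscle.
weights = [random.random() for _ in range(N_MUSCLES)]

for trial in range(2000):
    # Explore: perturb the current weights with random motor noise,
    # clamped to the valid activation range [0, 1].
    attempt = [
        min(1.0, max(0.0, w + random.uniform(-EXPLORATION, EXPLORATION)))
        for w in weights
    ]
    sound = toy_synthesizer(attempt)
    if is_desirable(sound):
        # Reinforced: shift weights toward the successful activation,
        # so similar sounds become more likely in the future.
        weights = [w + LEARNING_RATE * (a - w) for w, a in zip(weights, attempt)]
    # No reinforcement -> no change to the weights.

final_feature = toy_synthesizer(weights)
```

After enough rewarded trials, the preferred activations drift into the region that produces the rewarded acoustic feature, mirroring how reinforced exploration can shape motor output without an explicit inverse model of the synthesizer.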
A computational neural network model of infant vocal learning
University of Memphis