Analysis of Nonhuman Intercommunication with Machine Learning
Daniel Mann (University of Arkansas at Little Rock)
Human language is still considered fundamentally different from the vocalizations of non-human animals, yet little work in other species has been conducted at the level of the segment – a unit similar to the phone in human speech. If an alien researcher analyzed spoken human language at the breath level, as is typical in human analyses of other species, they would miss the combinatorial elements that make language possible and would likely conclude that humans do not have language. However, because individually labeled animal vocal data are difficult to acquire, within-species vocal behavior has been largely ignored by studies using cutting-edge, machine-learning-based analysis techniques. Yet within-species vocal behavior is the only place where vocal behavior akin to human language, if it exists at all, could be found. Through a large interdisciplinary collaboration network, we aim to develop a framework enabling automatic, non-invasive, clean recordings of individual animals vocalizing simultaneously in a group setting. These recordings will be segmented and clustered, and we will conduct bioacoustic analyses and behavioral tests to determine the relevance and meaning of segments in our model species, the budgerigar. Our long-term goal is for the software and insights created in this project to fundamentally overhaul the study of within-species bioacoustics and allow us to determine the true nature and limits of animal communication.
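To make the segment-and-cluster step concrete, the following is a minimal illustrative sketch, not the project's actual pipeline: it segments a synthetic "recording" by frame energy and clusters the resulting segments with k-means on a crude spectral feature. The sample rate, energy threshold, tone frequencies, and feature choice are all assumptions made for the toy example.

```python
import numpy as np
from sklearn.cluster import KMeans

sr = 8000  # assumed sample rate for the toy signal

# Synthetic "recording": three tonal bursts separated by silence,
# standing in for vocal segments produced by an individual bird.
def tone(freq, dur):
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq * t)

silence = np.zeros(int(sr * 0.2))
signal = np.concatenate([silence, tone(500, 0.3), silence,
                         tone(2000, 0.3), silence, tone(510, 0.3), silence])

# 1) Segment: split into 25 ms frames and keep frames above an energy threshold.
frame = int(sr * 0.025)
frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
voiced = (frames ** 2).mean(axis=1) > 0.01

# Merge consecutive voiced frames into contiguous segments.
segments, start = [], None
for i, v in enumerate(voiced):
    if v and start is None:
        start = i
    elif not v and start is not None:
        segments.append(frames[start:i].ravel())
        start = None
if start is not None:
    segments.append(frames[start:].ravel())

# 2) Cluster: one crude spectral feature per segment (dominant FFT frequency).
feats = np.array([[np.abs(np.fft.rfft(s)).argmax() * sr / len(s)]
                  for s in segments])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(len(segments), labels)
```

In this toy signal the two ~500 Hz bursts fall into one cluster and the 2000 Hz burst into the other; a real pipeline would replace the energy threshold and single FFT feature with learned segmentation and richer acoustic representations.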