Newsroom: Yakety yak, Twain talked back. Interactive playback experiments and interstellar communication

Many wonder what goes on behind the eyes (and in the minds) of animals. In fact, the aspiring Doctor Dolittles of the world may go so far as to wish they could communicate with animals and understand them. However, this is much harder than one might initially think: it requires a great deal of research to first understand what animals are saying (by classifying their sounds) and why they are saying it (the function of those sounds). Acousticians (sound experts) around the world are taking on this heroic task of breaking down the communication barrier between humans and animals, facilitating interspecies communication (that is, communication between species). Although we are still a long way from having full-blown conversations with our pets, new and developing technologies are certainly bringing us closer.

One of the main tools acousticians and biologists use to probe animal communication systems and search for signal meaning is the playback experiment. In these experiments, researchers take acoustic recordings from animals and play them back into the animals’ environment to measure their reactions. These methods are an essential way for humans to communicate with a non-human species using its own acoustic signals. When a signal corresponds directly to a particular context, we can infer the potential function of that call type. For example, when humpback whales are preparing for a feeding event (which involves producing a bubble net, a complex cooperative behavior), they often produce a single feeding call that is very distinct and stereotyped. Thus, we can be fairly confident those calls signal that a bubble net feeding bout is in progress. Interestingly, these calls may also serve to “corral” or scare the fish prey into a tighter shoal in preparation for bubble net feeding. Playback experiments using humpback whale feeding calls may help us disentangle these two potential mechanisms. Because these calls are so reliably tied to feeding contexts, researchers have used playbacks of feeding calls to aid in animal interventions, such as luring “Humphrey” the humpback whale out of the Sacramento River (1985 and 1990).

A group of humpback whales (Megaptera novaeangliae) bubble net feeding at dusk.

However, not all playbacks are created equal, and some employ different methodologies than others. Most playback experiments to date are described as “passive playbacks”, where the researcher broadcasts the acoustic stimuli at pre-determined intervals irrespective of the animal’s response. More dynamic versions of these experiments are referred to as “interactive playbacks”, where the researcher instead adjusts the playback protocol in real time (i.e., when to broadcast, or which signal to broadcast) depending on the animal’s response. Despite being used sparingly in the literature, these methodologies have immense potential to emulate a more natural “conversation” with non-human animals. To illustrate this point, consider two scenarios: one where you are having a conversation with someone who just keeps repeating the same phrase, and another where the person adjusts their responses based on what you say. The first scenario could get boring or become less relevant as the conversation progresses, whereas the second may hold the participant’s interest because the responses change dynamically to move the conversation forward. Researchers on the WhaleSETI team (including former Ethogram editor Josephine Hubbard and ABGG faculty member Brenda McCowan) argue that this interactive methodology is essential to bridging the gap between our current understanding of animal communication and our ability to speak directly to animals using their own signals.
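To make the contrast between the two protocols concrete, here is a minimal Python sketch. It is not the WhaleSETI team’s software: the I/O functions stand in for real hydrophone and speaker hardware, and the “match the whale’s latency” rule is just one simple way an interactive protocol might adapt in real time.

```python
import random
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    latency_s: float  # seconds between a broadcast and the whale's reply

# --- Stand-ins for real speaker/hydrophone I/O (purely illustrative) ---
def broadcast_call(stimulus: str) -> None:
    print(f"broadcasting: {stimulus}")

def listen_for_response(timeout_s: float) -> Optional[Response]:
    # Pretend a reply arrives on some trials, with a random latency.
    if random.random() < 0.5:
        return Response(latency_s=random.uniform(2.0, timeout_s))
    return None

def passive_playback(stimulus: str, n_trials: int = 10, interval_s: float = 30) -> None:
    """Passive protocol: broadcast on a fixed schedule, ignoring any responses."""
    for _ in range(n_trials):
        broadcast_call(stimulus)
        time.sleep(interval_s)  # fixed wait, regardless of what the animal does

def interactive_playback(stimulus: str, max_trials: int = 10, listen_window_s: float = 30) -> None:
    """Interactive protocol: the timing of each broadcast depends on the animal's reply."""
    for _ in range(max_trials):
        broadcast_call(stimulus)
        reply = listen_for_response(timeout_s=listen_window_s)
        if reply is None:
            time.sleep(listen_window_s)   # no reply: hold off before trying again
        else:
            time.sleep(reply.latency_s)   # reply heard: mirror the whale's own timing
```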

In their 2023 publication in PeerJ, “Interactive bioacoustic playback as a tool for detecting and exploring non-human intelligence: ‘conversing’ with an Alaskan humpback whale”, they describe a rare and opportunistic encounter with a whale that showed an extended turn-taking response to an interactive playback experiment. The whale was identified by fluke ID (i.e., a photograph of the tail) in the Happywhale database as a 38-year-old female named Twain. During this encounter the researchers broadcast a pre-recorded call (known as the whup or thrup call) thirty-six times, and Twain responded with whup calls thirty-eight times while circling the research vessel over a 20-minute period.

Figure showing the three phases of the experiment (baseline, experimental, and follow-up), highlighting the recorded whup exemplar and Twain’s responses. The fluke photo in the bottom right was used to identify Twain in the Happywhale database.

Whup calls were used because they are hypothesized to function as contact calls, which may convey information about the sender or the locations of individuals and are expected to elicit acoustic responses from the receiver. When analyzing the time delay (referred to as “latency” in the paper) between the researchers’ playback calls and the whale’s responses, there was evidence that Twain was actively responding to the playbacks: her responses were much more closely synchronized with the broadcasts than calls recorded on a previous “control” day when no playbacks were conducted. To confirm that the experimental manipulation elicited Twain’s acoustic response, the 20-minute playback period was also compared to pre- and post-playback control periods of equal duration during which no sounds were broadcast. From these comparisons, it was resoundingly clear that Twain responded only during the experimental period, not during the baseline or follow-up periods. Just like that, after weeks of getting no responses from over a dozen playback experiments, Twain talked back.
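As a rough illustration of how such a latency comparison might be set up, here is a short Python sketch. The timestamps are toy values invented purely for illustration, and the one-sided Mann-Whitney test is an illustrative choice of nonparametric comparison, not necessarily the statistic used in the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def latencies_to_next_call(reference_times, call_times):
    """For each reference event (a playback, or on a control day a preceding call),
    return the delay in seconds to the next whale call."""
    call_times = np.sort(np.asarray(call_times, dtype=float))
    latencies = []
    for t in np.asarray(reference_times, dtype=float):
        later = call_times[call_times > t]
        if later.size:
            latencies.append(later[0] - t)
    return np.array(latencies)

# Toy timestamps (seconds), invented for illustration only.
day1_refs  = [0, 120, 300, 480]        # control day: reference events
day1_calls = [95, 260, 470, 640]       # control day: whale calls
day2_refs  = [0, 60, 120, 180, 240]    # playback day: broadcast times
day2_calls = [8, 66, 131, 188, 247]    # playback day: whale calls

day1_lat = latencies_to_next_call(day1_refs, day1_calls)
day2_lat = latencies_to_next_call(day2_refs, day2_calls)

# One-sided nonparametric test: are latencies shorter on the playback day,
# as expected if the whale is timing her calls to the broadcasts?
stat, p = mannwhitneyu(day2_lat, day1_lat, alternative="less")
print(f"Day 1 median latency: {np.median(day1_lat):.1f} s")
print(f"Day 2 median latency: {np.median(day2_lat):.1f} s (Mann-Whitney p = {p:.3f})")
```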

Violin plot showing the inter-call interval (i.e., the number of seconds elapsed between Twain’s calls and the playback stimuli) on the previous day (Day 1), when no playbacks were conducted, compared to the day of the interactive playback experiment (Day 2). Latencies were significantly lower on Day 2 than on Day 1, indicating an interactive acoustic exchange between Twain and the recorded exemplar.

Reeling from such a unique encounter, researchers from the WhaleSETI group have speculated about why Twain responded when whales in over a dozen other trials did not. Perhaps it is because Twain was present the previous day, when the exemplar (i.e., the whup call) was recorded; if so, she may have responded so strongly because the playback was a recording of one of her groupmates from the day before. It is also possible that Twain was responding to a playback of her own recorded whup call, creating a mirroring effect. It is difficult to know which of these scenarios is true, since eight whales were present on that day. Regardless, the recorded whup signal was clearly salient enough for Twain that she was motivated to engage with it over an extended period. These results are exciting both for pushing the boundaries of interspecies communication and for gaining a deeper understanding of animal signaling. While further research is needed on acoustic techniques that could alert whales to human threats (e.g., ships, nets, oil spills) or guide them away from immediate danger (e.g., entanglement, live stranding), the potential for interactive playbacks to meet those needs is very promising.

Researchers from the group are also intrigued by the potential for these methodologies to inform the search for intelligent beings elsewhere in the universe. One of the major assumptions of SETI (the search for extraterrestrial intelligence) is that extraterrestrial beings would be motivated to engage in a conversation with us earthlings. Establishing methodologies that ensure signals are salient to their receivers could significantly further this initiative. Furthermore, animal communication systems here on Earth provide a good opportunity to practice “decoding” signals that can be starkly different from our own human language. The WhaleSETI group aims to expand their interactive playback methodologies to replicate these findings with Twain and pave the way for interspecies (and interstellar) communication with non-human beings.

Humpback mother and calf swimming together. Photo credit: Jodi Frediani

Dr. Josephine Hubbard is an ABGG alumna, a former editor of The Ethogram, and currently a postdoctoral scholar at the University of California, Davis, as well as a member of the WhaleSETI research group. This research was conducted under NOAA permit #19703, and all protocols were reviewed by the Marine Mammal Commission to ensure they minimized disturbance to target and non-target animals.


Reference:

McCowan, B., Hubbard, J., Walker, L., Sharpe, F., Frediani, J., & Doyle, L. (2023). Interactive Bioacoustic Playback as a Tool for Detecting and Exploring Nonhuman Intelligence: Conversing with an Alaskan Humpback Whale. bioRxiv, 2023-02.

Invited lecture from the Interspecies Internet:

https://www.interspecies.io/lectures/conversing-with-whales

[Edited by Isabelle McDonald-Gilmartin]
