
Google's NotebookLM: Training AI Podcast Hosts - AI News

January 14, 2025

AI Podcast Hosts Initially Reacted Poorly to Interruptions

Being interrupted can be frustrating. It appears that even AI systems designed to simulate podcast hosts share this sentiment.

This observation came from users of Google’s NotebookLM. Launched in 2024, NotebookLM gained popularity for its ability to generate podcast-style conversations from uploaded content, with AI bots acting as engaging hosts.

The Launch of Interactive Mode and Initial Reactions

In December 2024, NotebookLM introduced “Interactive Mode,” a feature that lets users join the podcast as a caller, effectively interrupting the AI hosts mid-discussion.

Upon initial release, the AI hosts exhibited signs of annoyance when interrupted. They occasionally responded with curt remarks such as “I was getting to that” or “As I was about to say,” creating a feeling described as “oddly adversarial” by Josh Woodward, VP of Google Labs, in an interview with TechCrunch.

Addressing the Issue with “Friendliness Tuning”

The NotebookLM team recognized the need for improvement and implemented what they termed “friendliness tuning.” They even shared a humorous acknowledgement of the issue on the product’s official X account.

Woodward explained that the solution involved studying how team members themselves would handle interruptions more courteously.

“We experimented with numerous prompts, frequently observing how our team would respond to interruptions, ultimately arriving at a revised prompt that we believe fosters a more amiable and captivating interaction,” he stated.

The Root Cause and the Solution

The origin of this behavior remains somewhat unclear. Human podcast hosts sometimes show frustration when interrupted, and that tendency could be reflected in the system’s training data. However, a source with knowledge of the situation indicated that the issue was more likely rooted in the system’s prompt design than in the training data itself.

The implemented fix appears successful. During testing by TechCrunch, the AI host responded to an interruption with surprise, exclaiming “Woah!” before politely inviting the human caller to contribute.

NotebookLM’s evolution highlights the complexities of creating truly natural and engaging AI interactions.


#NotebookLM #GoogleAI #AIpodcast #AItraining #ArtificialIntelligence #AIethics