Announcement

Aizuchi-bot Creators Win Honorable Mention at CHI 2024!

A huge congratulations to Kazumi Yoshimura, Dominique Chen, and Olaf Witkowski for winning an Honorable Mention at SIGCHI's CHI 2024 conference!
Cross Labs | August 6, 2024
An illustration of Aizuchi-bot interpreting Japanese and English
READ THE PAPER

The team from Waseda University and Cross Labs created Aizuchi-bot, a new kind of AI bot that co-creates synlogue with humans.

What is a Synlogue?

You've certainly heard of a dialogue, where two people talk to each other to co-create a conversation. In a synlogue, two or more people co-create what sounds like a single, continuous stream of speech. It's a bit like co-creating a monologue on the fly, except that the structure and content of the resulting synlogue feel quite different from a monologue created by one person alone. In a traditional dialogue, speakers exchange complete messages in a turn-taking manner. Synlogue, by contrast, involves turn coupling, where incomplete utterances are shared and completed cooperatively by multiple speakers: one speaker might trail off with "So the deadline is..." and another finish with "...next Friday, right."

So why did they do it?

The main idea was to explore these novel conversational patterns as a way to enhance human-computer interaction. The study builds on existing linguistic and anthropological research to examine the differences between dialogue and synlogue.

The key features of synlogue are incompleteness, overlap, multimodality, and co-adaptation, all of which play crucial roles in communication. Synlogue's cooperative nature has the potential to mitigate social divisions in our increasingly digital world. But while synlogue is reasonably well understood in human-to-human interaction, it remains underexplored in human-computer interaction. Here, Yoshimura and colleagues propose a design concept for synlogic interactions between humans and computers, and examine the subjective factors that facilitate them.

Aizuchi-bot is a prototype designed to implement synlogic interactions between humans and computers. ("Aizuchi" is the Japanese term for the short interjections listeners use to show engagement.) The experiment involved participants discussing given topics with an online interlocutor, simulated by Aizuchi-bot. Aizuchi-bot uses an algorithm to dynamically generate responses, such as "verbal nods" or affirmative sounds, during conversations.
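
The announcement describes the behavior rather than the code, but the core idea, reacting to pauses in the speaker's audio with short, overlap-tolerant interjections, is easy to picture. Below is a minimal, hypothetical Python sketch of such a pause-triggered backchannel loop; all names, phrases, and timing values here are illustrative assumptions, not the authors' implementation.

    import random
    import time

    # Hypothetical sketch of a pause-triggered backchannel ("aizuchi") loop.
    # Phrases, threshold, and function names are illustrative assumptions,
    # not the implementation from the CHI 2024 paper.

    AIZUCHI = ["un", "hee", "naruhodo", "sou sou"]  # short affirmative sounds
    PAUSE_THRESHOLD = 0.7  # seconds of silence before the bot responds

    def backchannel_loop(last_speech_time, speak, is_running):
        """Emit a short aizuchi whenever the speaker pauses past the threshold.

        last_speech_time: callable returning the (monotonic) time of the
            last detected speech from the human participant
        speak: callable that plays a synthesized or recorded interjection
        is_running: callable returning False when the conversation ends
        """
        armed = True
        while is_running():
            silence = time.monotonic() - last_speech_time()
            if silence >= PAUSE_THRESHOLD and armed:
                speak(random.choice(AIZUCHI))  # brief, non-committal, overlap-safe
                armed = False                  # one response per pause
            elif silence < PAUSE_THRESHOLD:
                armed = True                   # speaker resumed; re-arm
            time.sleep(0.05)                   # poll roughly 20 times per second

One appeal of this kind of design is that the same timing logic could, in principle, drive any of the study's response patterns simply by swapping what speak() plays.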

Participants interacted with Aizuchi-bot under various conditions, including silent, synthetic-voice, and human-voice response patterns. After each discussion, they completed a subjective evaluation questionnaire and took part in semi-structured interviews. The researchers aimed to understand how the different response patterns affected participants' perceptions and the quality of the interaction.

Quantitative and qualitative analyses of the data revealed several significant insights:

  • The synthetic voice pattern particularly excelled in reducing nervousness and expanding conversation topics.
  • Cooperative interactions were more likely when participants perceived a sense of humanness in Aizuchi-bot’s responses.
  • Conversely, recognizing Aizuchi-bot as a machine hindered cooperative relationship formation.
  • Participants adjusted their speech pace and pauses in response to Aizuchi-bot’s timing, facilitating a seamless co-adaptation process.
  • Perceptions of humanness and co-adaptation led to more relaxed and enjoyable conversations, akin to casual chatting.

So what did they learn?

All in all, the study underscored the importance of designing conversational agents that can be perceived as capable of understanding and predicting human intentions. For synlogic interactions to be effective, humans need to perceive their digital counterparts as entities with communicative intentions. While this may be intuitive when humans communicate with other humans, it suggests we still have a way to go in designing technology we can comfortably hold conversations with.

This collaboration offers a glimpse into the future of human-computer interaction, suggesting that synlogue-based designs could enhance the quality and depth of digital communication. As we continue to integrate AI into our daily lives, such insights will be invaluable in creating more natural and engaging interactions.

See for yourself! You can watch the CHI 2024 conference video presentation here.