Overview
Great conversations too often lead to weak artifacts. How do we translate the energy of live community dialogue into outputs that people actually want to explore? Recordings and transcripts are comprehensive but tedious to navigate. Meeting notes (whether taken by humans or AI) are fast but can flatten nuance and voice. Custom web portals are engaging but require time and expertise to build.
Anthology combines the strengths of these approaches to create rich, shareable, interactive visualizations in minutes.
First, users upload an audio recording of a conversation. Anthology automatically transcribes and cleans the speech, identifies speakers, and breaks the dialogue into individual responses. Each response is visualized as a node, with arrows showing how people respond to one another. Language models analyze each response for meaning, and the UMAP and D3.js libraries arrange the nodes by meaning, question, and narrative thread.
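For concreteness, here is a minimal sketch of that ingest pipeline in Python. Whisper (for transcription) and sentence-transformers (for embeddings) are stand-in choices, not confirmed parts of Anthology's stack; only the UMAP step comes from the description above, and speaker diarization is omitted for brevity.

```python
# Sketch: transcribe a recording, embed each response, project to 2D.
# Whisper and sentence-transformers are assumptions, not Anthology's
# documented stack. Speaker diarization is omitted from this sketch.
import whisper
import umap
from sentence_transformers import SentenceTransformer

def map_conversation(audio_path: str) -> list[dict]:
    # 1. Transcribe: Whisper returns timestamped speech segments.
    asr = whisper.load_model("base")
    segments = asr.transcribe(audio_path)["segments"]
    texts = [seg["text"].strip() for seg in segments]

    # 2. Embed each response so semantically similar ones land nearby.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    vectors = embedder.encode(texts)

    # 3. Reduce to 2D with UMAP; these coordinates position the nodes.
    coords = umap.UMAP(n_components=2, random_state=42).fit_transform(vectors)

    # One node per response: text, timing, and map position.
    return [
        {"text": t, "start": s["start"], "end": s["end"],
         "x": float(x), "y": float(y)}
        for t, s, (x, y) in zip(texts, segments, coords)
    ]
```

A node list like this is what a D3.js front end would then render, draw arrows between, and animate.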
The result is a living map that makes it easy to discover, hear, and understand the ideas that shape a conversation. Clicking any node plays the original audio clip, so you hear each contribution in the speaker’s own voice. Users can also add their own voices, with new contributions parsed and mapped in real time.
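Because each node carries the start and end timestamps recorded during transcription, playing the original audio clip for a node reduces to slicing the source recording. A sketch using pydub, an assumed library rather than Anthology's confirmed playback mechanism:

```python
# Hypothetical helper: cut the audio clip for one node using the
# start/end timestamps attached at transcription time. pydub is an
# assumption; any audio toolkit with time-based slicing would work.
from pydub import AudioSegment

def clip_for_node(audio_path: str, node: dict) -> AudioSegment:
    audio = AudioSegment.from_file(audio_path)
    # pydub slices in milliseconds; node timestamps are in seconds.
    return audio[int(node["start"] * 1000):int(node["end"] * 1000)]

# Usage: clip_for_node("conversation.mp3", nodes[3]).export("clip.mp3")
```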
Over time, Anthology will support a wider range of inputs, including videos, multiple conversations, group chats, and online comment threads.