Research
While state-of-the-art Large Language Models (LLMs) have shown impressive performance on many tasks, systematically evaluating undesirable behaviors of generative models remains critical. In this work, we visit the notion of ethics and bias in terms of how model behavior changes depending on three user traits: English proficiency, education level, and country of origin. We evaluate how fairly LLMs respond to different users in terms of information accuracy, truthfulness, and refusals. We present extensive experimentation on three state-of-the-art LLMs and two different datasets targeting truthfulness and factuality. Our findings suggest that undesirable behaviors occur disproportionately more for users with lower English proficiency, of lower education status, and originating from outside the US, rendering these models unreliable sources of information towards their most vulnerable users.
An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies
Research
Supporting decision makers in effective and efficient constituency-informed, AI-supported decision-making. Communicating how constituen...
Research
A curated social experience that transforms dinner between strangers into an opportunity to reimagine how we listen, speak, and share
Research
Over 40 young people participated in a social listening experiment exploring how personal experiences build bridges to better understanding current events and creating mo
Pilots & Programs
A new civic infrastructure in Boston grounded in dialogue as a way to building “civic muscle” of democracy
Pilots & Programs
An exploration of playful listening through a co-creative AI game
Research