Formal Report
Abstract.
Conversational AI is moving into the most private rooms of human life, yet it is almost exclusively built on Western defaults, by developers who treat safety as a legal disclaimer rather than a structural requirement. For Māori and Pasifika communities, this is a familiar story: technology built without us, for us, to extract from us. This research asks: How might we design ethical conversational AI for vulnerable interactions using Māori and Pasifika values?
Guided by the Kei Compass (Dell, 2025), a five-directional Indigenous framework, this project used a practice-based, build-to-think methodology across four iterative AI builds: a voice-first survey agent; Leadership AI Coach, a leadership coaching agent; Culture Meets AI, a wānanga-based exploration of AI with Māori and Pasifika communities (a wānanga is a gathering for deep learning and knowledge sharing); and Ray, a voice-first AI relationship coach and the capstone prototype. Data were gathered across 167 survey conversations, 349 coaching sessions, a 90-minute wānanga with 12 participants, and 697 minutes of voice coaching across 59 Ray sessions. The building was the thinking: autoethnography and thematic transcript analysis were applied across all four builds.
Three core findings emerged. First, text-based digital systems architecturally exclude oral-first communities. Voice is not a preference; it is a justice decision. Second, across the Ray pilot, participants consistently described the behaviours the embedded values were designed to produce (pacing, somatic grounding, non-judgment) even when they did not name those values explicitly. That pattern suggests structural embedding matters more than surface labelling, though the pilot scale limits how far that claim travels. Third, a pattern I am calling the Human Proxy Theory: AI does not generate trust but borrows it from the human accountability structures behind it. Across the Ray pilot, participants appeared willing to be vulnerable not because the AI itself was trustworthy, but because they knew the human behind it. This finding points toward a reframing of AI trust questions, though further research across different contexts would be needed to test how widely it holds.
This research concludes that ethical conversational AI is not a features problem. It is a power problem. The vā (in Samoan and wider Pasifika thought, the sacred relational space between people: not a gap or an absence, but a living connection) between builder and community is where safe technology lives, and it must be tended before a single line of code is written.
Reference
Dell, K. (2025, February 27). Using Māori values to ethically evaluate food-enabling technologies [Lecture, Week 12]. Master of Technological Futures, GEN25, AcademyEX. Framework adapted by the author as the "Kei Compass."
Keywords