Formal Report
Executive Summary
The Core Problem
Traditional AI architecture ignores the relational context (the vā: in Samoan and Pasifika thought, the sacred relational space between people, not a gap or an absence but a living connection that must be actively tended) required for safe, vulnerable human interactions.
The Methodology
Four iterative builds, from surveys to a relationship coach, using a build-to-think approach grounded in Kaupapa Māori, a Māori-centred approach in which research is done by, for, and with Māori communities: not a method bolted onto a Western framework, but the framework itself.
The Finding
AI trust is borrowed from the human architect. Safety lives in the relationship, not just the code.
Project Rise: Relational Insights for Systemic Empowerment
Conversational AI is no longer sitting quietly in productivity apps. It has entered the most private rooms of human life, helping people process grief, navigate relationship breakdown, and find words for shame they have never spoken aloud. Yet almost every system doing this was built on Western defaults, by developers who treat cultural safety as a legal disclaimer, and who have never had to translate fluid thought into static text just to be heard. For Māori and Pasifika communities, that experience is not new. It is a digital continuation of a very old story: technology built without us, for us, to take from us.
This research set out to ask one question: How might we design ethical conversational AI for vulnerable interactions using Māori and Pasifika values?
What I Did
Rather than study this from a distance, I built through it. Over 18 months, I designed and tested four conversational AI systems in a deliberate vulnerability progression. Each build increased the emotional stakes, and each was safer than the last because of what the previous one taught me.
Project Rise Digital Survey collected feedback through voice-first interaction across 167 conversations. In this build, users on text-based interfaces disengaged within approximately seven seconds (based on session duration data from the survey agent). Those who switched to voice stayed for 14 to 18 minutes, and what came out was qualitatively different. Not opinions. Inner critics. Professional shame. Things that never survive the translation friction of typing.
Leadership AI Coach delivered 349 leadership coaching conversations to high-performing women. Participants were using the AI to regulate their nervous systems at 2 AM, a pattern I came to call the Heroic Trap: pushing through burnout rather than resting, because the AI's infinite patience made it easier to keep going than to stop. Safety, I learned, is not just about what the AI says. It is about the lanes it holds.
Culture Meets AI was a 90-minute wānanga (a gathering for deep learning and knowledge sharing) co-facilitated with researcher Lee Palamo, preceded by 45 pre-wānanga AI conversations (309.7 minutes total). Participants explored a genuine paradox: AI is simultaneously culturally dangerous (capable of making sacred things ordinary, mispronouncing te reo, extracting knowledge onto foreign servers) and potentially culturally healing (offering diaspora communities a judgment-free place to ask the questions they are too whakamā, too ashamed of not knowing their own culture, to ask an elder). That tension was not resolved. Holding it was the point.
Ray was the capstone: a voice-first AI relationship coach designed for high-vulnerability interactions. The pilot generated 697 minutes of coaching across 59 sessions. Ray integrated Western relationship psychology (Gottman, Lerner, Brown) with Māori and Pasifika values (vā; mana motuhake, sovereignty over one's own data, story, and identity; manaakitanga, care as obligation rather than gesture; and Whare Tapa Whā) through a Two-Eyed Seeing architecture. Not as a theoretical overlay, but as structural logic governing every prompt, every database decision, and every refusal, including the decision to build no memory between sessions.
What I Found
Three findings cut across all four builds.
Voice is a justice decision, not a preference. In this research, text-based systems presented substantial access barriers for oral-first communities. The gap between seven seconds of text engagement and 14 to 18 minutes of voice engagement was not about motivation. It was about the cost of translation, the energy required to convert fluid, relational thought into static, performance-ready text. For communities whose knowledge has always lived in the breath and the voice, that cost is exclusion.
Cultural values must be structural, not decorative. The 30/70 finding from the Ray pilot made this concrete: only 30% of Ray participants explicitly named cultural values like manaakitanga or aroha (love, empathy, and compassion) in their feedback, yet participants consistently praised the specific behaviours those values were designed to produce: the pacing, the somatic grounding, the non-judgment, the care. The values showed up through their effects rather than their names. That pattern suggests structural embedding does something that surface labelling does not. In this build, the database schema, prompt architecture, and refusal decisions were where the cultural work happened. Putting te reo Māori in the welcome message while leaving the reasoning engine unchanged did not produce the same effect.
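To make the decorative/structural distinction concrete, here is a minimal sketch of the pattern. Every name in it is hypothetical (this is an illustration, not the Ray codebase); the point is that the value is compiled into behavioural constraints the reasoning engine receives on every turn, rather than appearing as a label in a greeting.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ManaakitangaConstraints:
    """Care as obligation: behaviour the system must enact, not a label."""
    max_questions_per_turn: int = 1       # pacing: never interrogate
    require_somatic_checkin: bool = True  # ground the body before the story
    validate_before_challenge: bool = True

def build_system_prompt(c: ManaakitangaConstraints) -> str:
    """Compile the value into hard rules sent to the reasoning engine every turn."""
    rules = [f"Ask at most {c.max_questions_per_turn} question per turn."]
    if c.require_somatic_checkin:
        rules.append("Before any relationship content, check how the user's body feels.")
    if c.validate_before_challenge:
        rules.append("Validate the user's experience before offering any reframe.")
    return "You are a relational coach.\n" + "\n".join(f"- {r}" for r in rules)

# Decorative embedding would stop at a translated welcome message;
# structural embedding makes the value govern every turn of reasoning.
print(build_system_prompt(ManaakitangaConstraints()))
```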
The Human Proxy Theory. Across the Ray pilot, a pattern emerged that I am calling the Human Proxy Theory: the AI appeared to borrow vā from the human accountability structures surrounding it rather than generating relational trust on its own. Participants were willing to be vulnerable with Ray not because the AI was trustworthy, but because they knew me, the human behind it. "I know the person who is responsible for the research, so that makes a difference" (W-18). This finding suggests a reframing of how we ask questions about AI trust: away from the system itself, toward the human accountability structures surrounding it. Whether it holds beyond this context, I genuinely don't know. But it felt important enough to name.
The pilot delivered concrete outcomes alongside the theoretical ones: 80% of Ray reviewers reported a tangible shift in relationship behaviour, from booking GP appointments to setting new family boundaries to increasing empathy for partners, and over 85% said they would use Ray again. These numbers are not proof the AI worked in the clinical sense. They are evidence that the architecture held, and that people left the space with something they could use.
A fourth finding emerged from the Ray model-switch moment. When budget constraints forced a switch from Claude Sonnet to Gemini Flash, Insight scores in participant feedback dropped from 4.9 to 3.1. The system became relationally unsafe, interrupting users and collapsing the vā. This moment pointed toward what I am calling the Equity-Safety Paradox: within this project, the communities who most needed high-quality relational AI had the least access to the resources required to sustain it. That pattern is worth naming, though its structural and political dimensions extend well beyond what one pilot can prove.
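This finding implies a practical guardrail. The sketch below is hypothetical (the threshold, function names, and the assumption of a five-point insight scale are mine, not Ray's actual tooling): it treats a model swap as a relational-safety change that must pass a participant-rated quality floor before it ships.

```python
# Hypothetical minimum acceptable mean insight score, assuming a 1-5 scale.
INSIGHT_FLOOR = 4.5

def safe_to_switch(current: list[float], candidate: list[float]) -> bool:
    """Permit a model swap only if participant-rated insight holds the floor."""
    cur = sum(current) / len(current)
    cand = sum(candidate) / len(candidate)
    # Block the swap if the candidate falls below the absolute floor or
    # regresses meaningfully against the model it replaces.
    return cand >= INSIGHT_FLOOR and cand >= cur - 0.2

# With the pilot's numbers, the Gemini Flash swap would have been blocked:
print(safe_to_switch(current=[4.9], candidate=[3.1]))  # False
```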
What It Means for Practice
Three things I would tell any practitioner building in this space:
Tend the vā before you extract the data. No conversation can begin with a question. It must begin with a welcome, a grounding, a genuine invitation to arrive. The State Before Story protocol, where Ray checks the user's nervous-system state before engaging with any conflict narrative, is not a UX feature. It is a relational obligation.
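A minimal sketch of what such a gate can look like architecturally, with hypothetical names (an illustration of the idea, not Ray's implementation): the coaching phase is simply unreachable until the grounding phase has completed, so no prompt wording can skip it.

```python
from enum import Enum, auto

class Phase(Enum):
    WELCOME = auto()     # genuine invitation to arrive
    GROUNDING = auto()   # nervous-system check before any content
    COACHING = auto()    # only reachable once the user reports being settled

def next_phase(phase: Phase, user_reports_settled: bool) -> Phase:
    """Advance the conversation; the story cannot begin before the state."""
    if phase is Phase.WELCOME:
        return Phase.GROUNDING
    if phase is Phase.GROUNDING and not user_reports_settled:
        return Phase.GROUNDING  # stay here: slow the pace, offer grounding
    return Phase.COACHING

# The gate holds until the user is regulated:
assert next_phase(Phase.GROUNDING, user_reports_settled=False) is Phase.GROUNDING
```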
Silence is sometimes more ethical than performance. When my TTS (text-to-speech) model began mispronouncing te reo Māori, I removed the language entirely, from every prompt, every greeting, every line of code, and published a transparent statement. Choosing silence over performance was the most culturally responsible decision I made in this entire project.
Every line of code carries a safety opinion. The Incognito Mode I built into the Leadership AI Coach (a toggle that structurally blocked all data logging for truly private sessions) was mana motuhake made real in code. Ray's stateless architecture, which retains no memory between sessions and so prevents the creation of a shadow profile, was the same. Your database schema is your ethics declaration.
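As an illustration of what "structurally blocked" means, here is a minimal hypothetical sketch (not the actual Leadership AI Coach code): in incognito mode the session is constructed without any persistent store at all, so no downstream code path can quietly log the conversation.

```python
class NullStore:
    """Incognito: writes go nowhere; there is nothing to leak or subpoena."""
    def save(self, turn: dict) -> None:
        pass  # deliberately a no-op: no file, no database, no shadow profile

class SessionStore:
    """Consented, session-scoped memory only (stateless across sessions)."""
    def __init__(self) -> None:
        self.turns: list[dict] = []
    def save(self, turn: dict) -> None:
        self.turns.append(turn)

def make_store(incognito: bool):
    # The toggle chooses the architecture itself,
    # not a filter applied after logging has already happened.
    return NullStore() if incognito else SessionStore()

store = make_store(incognito=True)
store.save({"speaker": "user", "text": "something truly private"})  # persisted nowhere
```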
For the full methodology, findings, and artefacts, see lianpassmore.com/project-rise.
What Comes Next
Ray will not be commercialised in its current form. To be safe and sovereign at scale, it requires community-governed infrastructure, not digital tenancy on US-based servers. What this research points toward is not a better bot. It is Māori and Pasifika communities building their own tables: designing the reasoning engines, owning the data, governing the values. For me personally, it begins with reconnecting with Samoan community before the next build starts. Not alongside it.
The vā Samoan / Pasifika Core Value Vā The sacred relational space between people. Not a gap or an absence — a living connection that must be actively tended. Research Context [1-4] ↗ "When AI enters a human interaction, it enters the vā. That is the central design obligation of this research." between builder and community is where ethical technology lives. It must be tended before a single line of code is written, and it takes years, not sprints.
This project is the beginning of that kōrero, that conversation.