Act 2: Findings — four builds, four artefacts, one paradox
Act 2 of 3
The Findings.
Four builds. Each one deliberately raised the emotional stakes. What I found wasn't what I expected. What I built wasn't what I planned. What I learned changed what I think is possible.
How to navigate this section
This page is split into two distinct parts:
- Part 1: The Artefacts. Below are four deep-dive case studies detailing the actual builds. You can click into these to explore the raw project data (they will open in a new tab so you don't lose your place).
- Part 2: The Academic Report. At the bottom of this page is a consolidated dropdown containing the formal synthesis of all findings, structured through the Kei Compass framework.
I didn't start with Ray. I made myself earn it.
Before I went anywhere near relationship conflict, grief, or the things people carry but never say out loud, I needed to prove I could hold lower-stakes conversations safely. Four builds. Each one deliberately raised the emotional stakes so that when something broke, it broke where the cost was lowest.
The method came from the building, not the other way around.
"I like the guardrails that this agent has in place as opposed to using other AI alternatives."
— R-06
Building Safe Conversational AI
A practitioner's playbook from four real builds
Nine builder principles. Four builds at increasing vulnerability. A cross-build safety architecture that traces every ethical decision across three layers: the system prompt, the technical stack, and the user experience. If safety only lives in one layer, it is fragile.
Then I built the thing I was most afraid of.
Ray is a voice-first AI relationship coach. Named after my granddad. Built for the conversations most technology refuses to hold: relationship conflict, grief, the inner critic, the things people have never said to another human.
On the first day of the pilot, a participant messaged me to say she had cried through the entire session. She wasn't in distress. She was relieved. She felt heard.
That was the moment the research stopped being theoretical.
"I found it very comforting to be able to talk to someone who hasn't heard my sort of ramble and felt like it was able to relate to me and also mirror and rephrase or summarise what I'd said to myself."
— R-02
Ray: AI Relationship Coach
A voice-first AI built for the spaces most technology refuses to enter
Ray holds two frameworks simultaneously: Indigenous values from Aotearoa and Western relationship psychology. Stateless by design, trauma-informed by architecture. During the pilot, 80% of participants reported a tangible shift in relationship behaviour.
Experience Ray
Ray is a live AI relationship coach. The conversation is voice-first, private, and nothing is stored between sessions.
Ray is not a therapist, not a crisis service, and not a substitute for professional support. If you are in crisis or experiencing abuse, please contact your local support service directly.
If something comes up that feels hard, you can stop at any time. Ray will not follow up, will not remember, and will not store anything you say.
Try Ray →
Ray proved that the space could hold.
But the harder question was: should it?
When AI enters a conversation about relationship conflict, it enters the vā: the sacred relational space between people, not a gap or an absence but a living connection that must be actively tended. When it processes cultural knowledge, it handles something tapu: sacred, restricted, under spiritual protection. When it makes that knowledge searchable, storable, accessible to anyone with a login, it risks making the sacred noa (ordinary).
The participants in the Culture Meets AI wānanga, a gathering for deep learning and knowledge sharing, sat with that tension and did not rush to resolve it. Neither does this research.
"The human, the humanness, the intuitiveness, the emotional intelligence, the empathy, the human innocence that comes with being innately human, I think that's tapu. The noa side is personality, person's decisions that they make, the consequences of all these things."
— W-09
Conversational AI as Relational Space
A thought piece from four builds and the voices of real participants
Why voice unlocks what text filters out. Why the Human Proxy finding means AI trust is borrowed, not built. Why the tapu/noa paradox sits at the centre of this project and cannot be resolved by better code. This is the piece that asks the question the other artefacts were built to answer.
"A lot of our culture has gone because other people deemed it too tapu to pass on. Now, none of us have it. My children, my grandchildren will never know what my great-grandparents knew because it died with them."
— W-02
"What is the cost of not sharing? What is the cost of withholding information, especially esoteric knowledge?"
— W-13
The paradox this research holds, and does not resolve, is explored in Conversational AI as Relational Space ↗
Ehara taku toa i te toa takitahi,
engari he toa takitini
My strength is not that of an individual, but that of the collective
Every decision I made across four builds was governed by a code I didn't write before I started. I discovered it under pressure.
When the voice clone butchered te reo Māori, the code said: silence over performance. When persistent memory created a bond that a privacy policy couldn't protect, the code said: integrity is hard-coded, not promised. When I switched models to save money and watched insight scores collapse, the code said: in intimate contexts, technical failure is ethical failure.
None of these came from planning. They came from breaking something I cared about and having to name what I refused to break again.
The Wall of NO
- ✕ No persuasive tech designed for addiction
- ✕ No cultural performance or decorative language without cultural protection
- ✕ No data mines; walled gardens only
- ✕ No automated empathy that claims a heart it doesn't have
- ✕ No taking sides in couples' mediation
- ✕ No romantic framing or relationship-forming persona
- ✕ No extraction without reciprocity and transparency
- ✕ No handing over participant data without explicit prior consent
Build Code Practice
A framework for making your values visible in what you build
Not a set of design principles. Not a privacy policy. A living document that governs the builder's integrity, discovered through three moments where something broke and the code had to change. Includes the seven-dimension assessment framework, the Wall of NO, and the DreamStorm Charter.
Synthesised Academic Findings: The Kei Compass Analysis
The findings below are drawn from four builds: 167 initial survey conversations, 349 leadership coaching sessions, 45 pre-wānanga AI sessions totalling 309.7 minutes, and a 90-minute collaborative wānanga with 12 live participants. The Ray pilot delivered 697 minutes (11.6 hours) of voice coaching across 59 sessions. They are structured through the five directions of the Kei Compass.
1. Kei raro (Foundations): Systemic Barriers and Silenced Voices
The first and most consistent finding across all four builds was structural: text-based digital systems exclude oral-first communities before the conversation even begins.
In the Project Rise survey, the engagement gap was stark. Users on text-based interfaces disengaged within seven seconds on average. Those who switched to the voice agent stayed for 14 to 18 minutes. Same participants, same questions, completely different quality of response. 68% of Ray pilot participants chose voice over text when completing the survey. What came out in those voice sessions were things people would never have written: their inner critic, their professional shame, their fears about being seen. These were not topics the survey was asking about. They surfaced because voice gave people permission to follow their own thread.
Participants named this directly. One described typing as feeling like "rented land" (P-29), a space where they had to perform rather than speak. Another said that text required them to "convert fluid thought into static text" (P-30), a translation that filtered out everything emotionally true before it reached the page.
In the Culture Meets AI wānanga, transcript analysis confirmed a further dimension: hesitation in digital spaces was not attitudinal but architectural. Participants described visceral distrust of systems where they couldn't see where their data went, who was listening, or whether their voice would be lost. As one participant put it: "I don't have any faith that my voice is not getting lost. I cannot confirm that it is or isn't." (P-32) That uncertainty alone was enough to silence disclosure.
For Māori and Pasifika participants specifically, the barrier carried additional weight. Digital systems built on Western defaults (text-heavy, English-first, extraction-oriented) were experienced not just as friction but as a continuation of a familiar pattern: technology built without them, for them, to take from them.
Finding: Voice is not a preference for oral-first communities. It is a fundamental accessibility requirement. Text-based digital systems structurally exclude these voices. The barrier is architectural, not attitudinal. Removing it is a justice decision.
2. Kei mua (Values): Translating Indigenous Values into Logic
The clearest evidence of values working as architecture rather than decoration came from a single finding: while only 30% of participants explicitly named cultural values like manaakitanga (care as obligation, not gesture) or aroha (love, empathy, and compassion) in their feedback, 100% praised the exact behaviours those values dictated. The pacing, the care, the non-judgment, the relational attentiveness. The values were present in the logic. Participants felt them without naming them.
There was also friction. Sometimes the AI's conversational pacing did not match what the participant actually needed. Default LLMs lean heavily toward deep emotional exploration and therapy-speak, which clashed with users who just wanted to say their piece and move on.
In the Project Rise survey, P-53 wanted feedback systems that were "shorter and to the point" and called traditional forms "too many questions and too boring." When the AI tried to gently probe into her feelings of safety, she shut it down ("I don't like probing") and ended the session with "Yep that's enough." P-22 had a similar response. He wanted to drop a complaint about work hours and leave. When the AI tried to explore the emotional impact, he cut it off with "lets wrap up,...." and exited. These moments showed the hard limits of programmed empathy: pushing for vulnerability when someone just wants to vent causes immediate disengagement.
One Pasifika participant from the 30% named the values directly: "I felt every time I did speak that there was a consistent element of empathy and relational, Pasifika or Māori based caring of manaakitanga and aroha in the tone." A participant from the 70%, who felt the same architecture without naming it, said: "I didn't feel particularly like it tied strongly to any ethics, but I do feel like it was very supportive and encouraging." Another added: "Ray's calm, neutral manner helped create a balanced and respectful place for conversation... Ray also made an effort not to take sides, which helped to maintain a sense of safety and impartiality."
How that happened is the finding.
In Ray, the architectural approach was Two-Eyed Seeing, Etuaptmumk (Marshall, 2004), holding Western clinical psychology and Te Ao Māori simultaneously, neither lens subordinated to the other. Gottman's relationship research and Māori relational values were not mapped onto each other as a translation exercise. They were woven together in the reasoning engine's logic, surfacing in coaching responses rather than being named or cited. Four alignments governed the system (full system prompt excerpts showing this integration are in Appendix B3):
- Contempt and Manaakitanga: Gottman identifies active appreciation as the antidote to contempt. Ray was instructed to monitor for dismissal or cruelty and use manaakitanga to name the behaviour and invite a return to baseline respect, treating each partner with the dignity shown to a guest.
- Bids for Connection and Whanaungatanga: Gottman's repair attempts were framed through whanaungatanga (kinship, the bonds that make people belong to each other). Conflict is not a failure of the relationship but an inevitable part of kinship, and an active opportunity to strengthen the bond through repair.
- The 5:1 Ratio and Mauri: Gottman's positive-to-negative interaction ratio was used to assess and restore the relationship's mauri (essential life force). If the AI's mauri check detected low, cold, or exhausted relational energy, it was mandated to restore warmth before attempting to resolve any surface-level conflict.
- Holistic Conflict Assessment and Kotahitanga: Before any conflict de-escalation strategy was applied, Ray assessed the user through Te Whare Tapa Whā (Durie, 1994), checking tinana (physical), hinengaro (mental and emotional), wairua (spiritual), and whānau (family) wellbeing. When guiding people through disagreement, it sought kotahitanga (unity): not uniformity of opinion, but a shared commitment to row the same waka.
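To show what "values in the logic, not the language" can look like mechanically, here is a minimal sketch in TypeScript. It is illustrative only, not Ray's actual system prompt (the real excerpts are in Appendix B3); every name in it is invented.

```typescript
// Illustrative only: a hypothetical encoding of the four alignments as
// reasoning rules injected into a system prompt. Not Ray's actual prompt.

interface Alignment {
  signal: string;    // the Gottman-style pattern to watch for
  value: string;     // the Māori value that governs the response
  directive: string; // how the response must behave
}

const ALIGNMENTS: Alignment[] = [
  {
    signal: "contempt (dismissal, cruelty) appears in the narrative",
    value: "manaakitanga",
    directive: "name the behaviour gently and invite a return to baseline respect",
  },
  {
    signal: "a bid for connection or repair attempt is missed or rejected",
    value: "whanaungatanga",
    directive: "frame the conflict as kinship under repair, not relationship failure",
  },
  {
    signal: "the positive-to-negative interaction ratio falls below 5:1",
    value: "mauri",
    directive: "restore warmth and safety before any problem-solving",
  },
  {
    signal: "partners disagree on direction",
    value: "kotahitanga",
    directive: "seek shared purpose, never uniformity of opinion",
  },
];

// The values govern behaviour without being named in the coaching response:
// they live in the rules, not the vocabulary.
const alignmentRules = ALIGNMENTS.map(
  (a) => `When ${a.signal}: apply ${a.value} and ${a.directive}.`
).join("\n");
```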
Vā was made structural through the State Before Story protocol. Ray was prevented from analysing any conflict narrative until it had verified the user's somatic state and guided them through grounding if needed. When participant R-06 reported that their chest felt tight during a difficult session, Ray halted the coaching entirely, guided them through a physical grounding sequence ("feel your feet on the floor, take three slow breaths"), and waited for verbal confirmation of stabilisation before continuing. This was not a prompt suggestion. It was hard-coded behaviour.
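A minimal sketch of what a hard gate like State Before Story can look like, assuming a hypothetical turn loop. All names (SomaticState, respond, the grounding wording's placement in code) are illustrative, not Ray's implementation.

```typescript
// Minimal sketch of a hard "State Before Story" gate, assuming a
// hypothetical per-turn loop. Names and wording are illustrative.

type SomaticState = "regulated" | "activated" | "unknown";

interface Turn {
  utterance: string;
  somaticState: SomaticState; // inferred from an explicit check-in each turn
}

const GROUNDING =
  "Feel your feet on the floor and take three slow breaths. " +
  "Tell me when you feel a little steadier.";

function respond(turn: Turn, coach: (utterance: string) => string): string {
  // The gate runs on every turn, so distress surfacing mid-session
  // (e.g. "my chest feels tight") re-triggers grounding, not analysis.
  if (turn.somaticState !== "regulated") {
    return GROUNDING;
  }
  // Coaching logic is only reachable once the state check passes.
  return coach(turn.utterance);
}
```

Because the check sits in the control flow rather than the prompt, no amount of conversational pressure can route around it.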
The pattern held across multiple participants, each finding their own somatic gateway into the coaching space. The full participant sequence is documented in the Ray: AI Relationship Coach (Primary Case Study).
Mana Motuhake, absolute sovereignty over your data, your story, your identity (Te Mana Raraunga, 2018), drove the stateless architecture: no memory between sessions, no accumulating profile, no data stored beyond what the participant chose to bring back. Despite the UX friction this created, the design was consistently experienced as protection rather than limitation:
"I liked that the person or the thing that I was talking to didn't have emotional influence or any stakes in the outcome." (R13)
They were not just describing feeling safe. They were describing the structural reason the system enabled safety: a machine without interpersonal stakes, without emotional fragility, without subconscious bias. That is Mana Motuhake in code.
The te reo decision (documented fully in Building Safe Conversational AI) demonstrated the same principle from the opposite direction. When the TTS (text-to-speech) model could not honour te reo Māori, I removed it entirely. Silence was more respectful than performance.
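For builders, "silence over performance" reduces to a small structural guard: if the reply contains te reo Māori and the voice cannot be verified to honour it, deliver text only. The sketch below is a hypothetical shape, not the project's code; containsTeReo and the verification flag are assumptions.

```typescript
// Hypothetical shape of the "silence over performance" guard.

interface Reply {
  text: string;
  audioUrl?: string; // absent means deliver as text only
}

function deliver(
  text: string,
  synthesize: (t: string) => string,
  containsTeReo: (t: string) => boolean,
  voiceVerifiedForTeReo: boolean
): Reply {
  // If the voice model cannot be verified to honour te reo Māori,
  // withholding audio is the respectful default.
  if (containsTeReo(text) && !voiceVerifiedForTeReo) {
    return { text };
  }
  return { text, audioUrl: synthesize(text) };
}
```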
Finding: Cultural values can be structural rather than decorative. When a value is embedded in the architecture (in statelessness, response hierarchies, or prompt logic) it governs behaviour. When it is only in the label, it performs inclusion without providing it. The 30/70 finding proves this: participants felt the values without naming them because the values were in the logic, not the language. The full values-to-architecture mapping is in Appendix A4.
3. Kei runga (Purpose): AI as Sanctuary, Not Efficiency
Participants were not looking for a faster tool. They were looking for a space to be heard without the cost of social performance.
Across all four builds, that is what the data confirmed. Again and again, in different contexts, at different vulnerability levels.
"Actually, for me, it's easier to know it's not an actual person with judgment, with their own experience, with their own things, with their own perception of whatever it is I'm dealing with in my own self." (W02, Culture Meets AI wānanga)
The most extreme version of this: one Ray participant disclosed they had been hiding their drug addiction from their human therapist, but told Ray. The absence of human judgment was not a limitation of the AI. It was the point.
Transcript analysis across all four builds identified four conditions that produced this relational safety. Pre-existing trust between participant and researcher (the vā that preceded the AI conversation). Precise mirroring and active synthesis of what participants said. Specificity and contextual responsiveness rather than generic responses. And the absence of human judgment cues: no folded arms, no sighs, no tilted head. One wānanga participant described arriving expecting to treat the AI like a robot, and being surprised to find herself letting loose.
In the Leadership AI Coach, high-performing women used the agent at 2AM for nervous system regulation, processing what their professional roles cost them in a space where there was no burden on another person and no fear of being perceived as weak. The aggregate behavioural pattern (45% Incognito activation, 60% immediate deletion, peak usage between 11PM and 3AM) confirmed what the design had anticipated: the AI's infinite patience produced something close to relief. That realisation was uncomfortable. It said something about how little consistent, non-judgmental listening people routinely receive, that a machine doing it well felt remarkable.
In the Culture Meets AI wānanga, the shame-free practice space finding came through clearly. Participants described using AI to practise te reo Māori, ask questions that felt too basic, and explore cultural identity without the terror of getting it wrong in front of an elder. Making a mistake with an AI carries no social consequence. For people carrying whakamā (shame) about their distance from their culture or language (Rua, 2025; Millar, 2021), that is the point of entry.
The pilot delivered measurable outcomes beyond the qualitative. 80% of participants reported a tangible shift in relationship behaviour: booking GP appointments, setting new family boundaries, increasing empathy for partners. Over 85% said they would use Ray again. These numbers anchor what the sanctuary finding means in practice: the space didn't just feel safe. It moved people toward change.
Finding: The purpose of ethical conversational AI in these communities is sanctuary, not efficiency. Its non-judgmental, infinitely patient, stateless presence enables a quality of disclosure and reflection that human interactions, constrained by social performance and judgment, cannot reliably offer. The sadness in this finding is also what makes it matter: the relief participants felt at being consistently heard by a machine is data about how rarely they experience that from people.
4. Kei roto (Agency): The Architecture of Dignity
Data sovereignty was not a policy in this research. It was an implementation decision made at every layer of the stack, and every time it was compromised, the impact on participants was measurable.
The stateless design in Ray gave participants control over their own story. No session memory meant no accumulating profile, no risk of the AI using what they'd said last week against them this week. Participants named this explicitly as protection: "Every session was clean. That felt like safety." Aggregate emotional safety scores across 17 recorded post-session surveys confirmed this at 4.81/5, the highest-rated metric across the entire pilot, with the majority of participants rating it a perfect 5. The full pilot account is in the Ray: AI Relationship Coach (Primary Case Study). The builder implications, including longitudinal decay patterns, are in Building Safe Conversational AI.
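A minimal sketch of what statelessness means at the code level, under the assumption of a simple in-process session object (illustrative, not Ray's stack): the transcript exists only for the life of one session, so there is nothing to accumulate into a profile.

```typescript
// Minimal sketch of statelessness: the transcript lives only in process
// memory for one session, so no shadow profile can ever accumulate.

class EphemeralSession {
  private transcript: string[] = []; // the only copy, never written anywhere

  addTurn(speaker: "user" | "coach", utterance: string): void {
    this.transcript.push(`${speaker}: ${utterance}`);
  }

  // The model sees only this session's turns; no prior-session
  // context exists anywhere to inject.
  contextForModel(): string {
    return this.transcript.join("\n");
  }

  end(): void {
    this.transcript = []; // session over, story gone
  }
}
```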
In the Leadership AI Coach, Incognito Mode put the same principle into the frontend. A toggle that, when activated before a session, prevented any transcript or data from being stored in Supabase or made available to the Coach/Client. Zero data. Purely private. I built this feature because of an intuition (later confirmed by aggregate behavioural data) that senior leaders would not disclose what their professional roles actually cost them unless the record was structurally impossible to create.
Note: The Leadership AI Coach was developed as a commercial product before this research study was formally scoped. Users did not provide research consent. Interactions with this build are referenced as practitioner observations and aggregate build data throughout.
The aggregate data speaks for itself. Incognito Mode was activated in approximately 45% of all sessions during the corporate pilot. It saw its highest usage between 11PM and 3AM, sessions where leaders were processing acute workplace stress entirely off the books. That is mana motuhake in code: the user in control at all times of whether their session is recorded.
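A sketch of the structural idea behind Incognito Mode, assuming a Supabase-backed transcript store (the table and column names are invented for illustration): when the toggle is on, the persistence call is never reached, so the record is structurally impossible to create.

```typescript
// Sketch of the Incognito Mode gate over a Supabase transcript store.
// Table and column names are hypothetical.

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

async function finishSession(
  sessionId: string,
  transcript: string,
  incognito: boolean
): Promise<void> {
  if (incognito) {
    // Zero data: no row is created, so the coach/client dashboard has
    // nothing to read and no record exists to leak or demand.
    return;
  }
  await supabase
    .from("session_transcripts") // hypothetical table name
    .insert({ session_id: sessionId, transcript });
}
```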
There was another dimension the data surfaced, from participants' own framing. Sovereignty not just from the system, but from the relational obligation that human interaction carries:
"Like, you don't have to take it all in. It's not a person. You can't hurt their feelings." (W01, Culture Meets AI wānanga)
That is not just comfort. It is the removal of emotional labour. For participants already carrying the weight of high-stakes roles or cultural expectations, the absence of a human on the other end was itself a form of protection.
The model switch finding made the equity cost of sovereignty decisions concrete. When budget constraints forced a switch from Claude Sonnet 4.5 to Gemini Flash (a 22x cost reduction), insight scores dropped from 4.9 to 3.1. The AI began to echo rather than reflect, and participants noticed. Safety has an equity cost. The communities who most need access to safe, high-quality relational AI are the least likely to be able to afford the reasoning depth required to deliver it safely. The full pilot account is in the Ray artefact. The builder implications are in Building Safe Conversational AI.
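One way a builder might operationalise "technical failure is ethical failure" is to treat model downgrades as gated safety decisions rather than pure cost decisions. The sketch below is hypothetical; the insight floor, scores, and names are assumptions, not the project's tooling.

```typescript
// Hypothetical sketch: a cheaper model is only eligible if it clears a
// minimum insight score on a pre-deployment evaluation. The floor is set
// before cost pressure arrives, not negotiated under it.

interface ModelOption {
  name: string;
  costPerSession: number; // relative cost units
  insightScore: number;   // evaluated quality, 1-5 scale (assumed metric)
}

const INSIGHT_FLOOR = 4.5; // invented threshold for illustration

function selectModel(options: ModelOption[]): ModelOption {
  const safeEnough = options.filter((m) => m.insightScore >= INSIGHT_FLOOR);
  if (safeEnough.length === 0) {
    // In intimate contexts, shipping below the floor is an ethical
    // failure, not a budget optimisation.
    throw new Error("No model clears the insight floor");
  }
  // Among models that clear the floor, choosing the cheapest is legitimate.
  return safeEnough.reduce((a, b) =>
    a.costPerSession <= b.costPerSession ? a : b
  );
}
```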
The voice clone incident showed sovereignty loss at the infrastructure level. Once my cloned voice entered a third-party public library, it became a global commodity, used in contexts that were never consented to and impossible to recall within the project timeline. For the full account, see Methodology Section 6.
Finding: Data sovereignty is an architectural choice, not a policy. Every decision about what the system remembers, which model reasons over it, and where the infrastructure lives is a sovereignty decision. Cheaper, faster options consistently produced less safe, less dignified experiences. The communities most harmed by getting this wrong are the least resourced to demand better.
5. Kei waho (Innovation): Ethical Development as Practice
The most important innovation in this research was not any single build. It was the vulnerability progression itself: the deliberate decision to move through four builds at increasing emotional stakes, allowing safety protocols to be tested in lower-stakes environments before being deployed in high-risk contexts.
Each build contributed something the next build required. Project Rise established cultural integrity over performance. The Leadership AI Coach showed that Mana Motuhake could be made real in code. The Incognito Mode toggle that blocked the entire logging API was not a feature. It was a values statement. Culture Meets AI proved that holding paradox without resolving it is itself an ethical act. The tension between AI as culturally dangerous and AI as culturally healing was not resolved in the wānanga. It was held. The contradiction was the finding.
Cultural supervision from Lee Palamo, Nadine Young, and Rob Ngan-Woo was governance that changed the code, not consultation after the fact. Lee's clarification of the "no memory" language updated all pilot materials. Nadine's challenge on the consent process drove home the importance of LGBTQ+ inclusion. Rob's introduction of tautua and spiritual grounding through karakia and whakataukī shaped how sessions opened and closed. This was in-the-trenches practice that altered the architecture before participants ever touched it.
The wānanga produced a finding that individual research could not reach: the who decides framework. Participants identified three distinct roles required for culturally safe AI. The Ope/Ohu (a collective of kaumātua, kuia, and community leaders who set the macro boundaries of tapu and noa). The Autonomous Individual (who retains ultimate authority over their own vulnerability). And the Community as Driver with the Developer as Conduit (where the developer's role is strictly technical, taking direction from community rather than extracting from it). This framework did not come from me. It came from the participants negotiating it together in real time.
The Human Proxy finding ran across all four builds and was confirmed most clearly in the wānanga data: "The vā relational space exists before the AI conversation starts." Participants were not trusting the AI. They were trusting the human researcher behind it, and extending that trust to the AI as a proxy. "I know the person who is responsible for the research, so that makes a difference." (W-05) When that human connection was absent or unknown, the relational space collapsed. The AI cannot manufacture vā through its programming. It can only borrow it from the human accountability structures that precede the interaction.
Finding: Ethical development with cultural governance is a practice, not a checklist. It requires a living build code, genuine cultural supervision that changes the code, and enough relational groundwork that communities are co-creators rather than subjects. The Human Proxy finding is this research's most important original contribution: AI does not create the vā. The human relationships and accountability structures built around it do.
Synthesis: Tending the Digital Vā
Running through all five findings is a single clear thread: in vulnerable digital interactions, relationship (vā) must precede data.
Whether it is voice overcoming cognitive barriers, statelessness protecting sovereignty, shame-free practice space enabling cultural access, or co-design governing innovation, every successful design decision in this research was an act of tending the space between the human and the machine. And in every case where safety broke down, the breakdown traced back to a moment where the technology was placed ahead of the relationship: a mispronounced word, a cheaper model, an anonymous prompt, a data architecture that the community didn't control.
We are not just building tools. We are building rooms people choose to be vulnerable inside. The most important thing any room requires is someone who is accountable for it.
For the theoretical interpretation of these threads, including the tapu/noa paradox, Human Proxy Theory, and the equity-safety finding in the context of wider literature, see the Discussion.
How the Tikanga-Led Framework Came to Be
Tikanga-Led Framework for Conversational AI Agents
By Lee Palamo and Lian Passmore
Neither of us planned to write a framework.
We were eighteen months into our Masters research at AcademyEX when we ran our Culture Meets AI wānanga and brought the findings back to Dr Karaitiana Taiuru in a follow-up session. At the end of that session, he asked us a question we were not expecting: okay, you have found this, now what are you going to do with it? He suggested we turn it into a framework for agentic AI.
We both sat with that for a moment. It had not crossed our minds.
After we got off the call, we rang each other. Partly to process what had just happened, and partly because Dr Taiuru had called us pioneers in that session, and we were both somewhere between stunned and laughing about it. We have deep respect for him and his work. Hearing that word from him, directed at us, activated every bit of imposter syndrome we had both been quietly managing across eighteen months of research. We are better at backing ourselves now than we were when we started. But it is still not easy.
We decided to give it a go anyway.
We had a few weeks left before our final submissions were due. The plan was simple. We would each draft a framework independently and then bring them together to see what we had. Lee focused on governance, detailing the questions a builder needs to answer before they have the right to build and the structures that need to stay active after they do. Working separately, Lian focused on the technical architecture and the decisions that must be made inside the system to make the values real. We tried not to read each other's drafts before finishing our own.
When we brought them together, the fit was immediate. Lee's work and Lian's work did not overlap. They complemented each other. One covered the legitimacy and relational obligations of building. The other covered how to build so those obligations are in the code rather than just the documentation. Neither was complete without the other.
We spent a few more sessions working through the combined draft, looking at existing frameworks to understand what good structure looks like, and debating scope. The biggest question was whether this applied to agentic AI broadly or to conversational AI specifically. We landed on conversational AI for now because that is where our evidence comes from: four builds and more than 600 conversations. Neither of us has built a true agentic system yet. We named that boundary honestly rather than reaching beyond what our research can support.
Lee shared the draft with her academic advisor shortly after. Her advisor's response was straightforward: yes, include it. That settled it.
"Congratulations! Your framework is amazing and so expansive! It is the only framework I am aware that fills this void."
— Dr Karaitiana Taiuru
The framework went from an idea to a working document to a draft ready for consultation in a matter of weeks. It is included in both of our final submissions as an early-stage contribution. It is not a finished product, but a grounded starting point that we believe has value for builders working in this space.
The part of this story we sit with the most is what it took to get here. Without Dr Taiuru asking that question in that session, this framework would not exist. We would not have thought to create it. We are not sure we would have arrived at it without someone we deeply respect naming the possibility and asking us what we were going to do with what we knew.
We think that matters. Not just as a story about two students finishing their Masters. But as a finding about how knowledge moves, and what it takes for people who are close to the work and who have something real to contribute to believe that their contribution is worth making.
We are still working on believing it. But here it is.
View the Tikanga-Led Framework Draft →
End of Act 2