Mā te kōrero
ka ora.
Through conversation, there is life.
Ka whakatipu i ngā reo katoa — Growing all voices
Section 01 / Welcome — 3 min Talk Layer
I talk first, write second. That's not a workaround. That's how my mind moves. So, I will start this the way I start everything: by talking. This research is about what happens when conversational AI moves into the most vulnerable parts of people's lives, and what it takes to build it without causing harm.
This microsite is my final report. Scroll for the journey. Use the nav to jump. The four artefacts are standalone. Te reo Māori is woven through everything. Hover any highlighted word to see its meaning.
1. The Problem: Architecture Without Presence
A woman told Ray, my voice-first AI relationship coach, that she had been hiding her drug addiction from her human therapist. She hadn't planned to say it. She said it because there was no face watching hers when she did.
That moment is what this research is about.
Conversational AI has moved into the most private rooms of human life. It's processing relationship conflict, holding space for grief, offering a place to practise te reo Māori without the fear of being whakamā. And almost every one of those rooms is built on Western defaults, built at speed, built by people who treat safety as a legal disclaimer rather than something structural. For Māori and Pasifika communities, this is not a new story. It is the same story: technology built without us, for us, to extract from us.
I am a speaker, not a typer. I experience the friction of text-based digital systems as a physical weight. It's not something I observe. It's a wall I hit. That friction is what my research sits inside: the data confirmed what I already knew in my body. Voice unlocks something text cannot. The scale of that gap shows up in the Findings. What came out in those sessions were inner critics, professional shame, disclosures people had never made to another human. All of it filtered out by the cost of typing (Project Rise Digital Survey; Ray). When the voice clone mispronounced te reo Māori, I removed it entirely. Silence over performance (see Building Safe Conversational AI). That decision is what vā demands. The sacred relational space between people cannot be tended at the level of aesthetics. It has to be tended in the architecture itself.
The problem is not a lack of features. It is a lack of vā. If Mana Motuhake and Manaakitanga are not hard-coded into the system, if they do not govern response hierarchies, data decisions, and what the AI refuses to do, then we are not building helpers. We are building automated exclusion, made faster.
This research is my attempt to build differently.
2. Why This Research, Why Now
AI integration in Aotearoa New Zealand in 2025 and 2026 is not a slow trend. It is a structural shift, and it is happening faster than governance can follow. Across the four builds in this project (167 initial survey conversations, 349 coaching sessions, a 90-minute collaborative wānanga, and 59 Ray pilot sessions), the pattern was consistent: the communities with the most to gain from relational AI are the ones whose values, languages, and ways of knowing are least likely to survive the build process intact. The risk is not hypothetical. AI will noa-ise (make ordinary) that which is tapu, stripping dignity from stories in the service of data, unless someone builds explicitly against that outcome.
The opportunity is equally real. Voice AI is the first technology that could genuinely equalise access for oral-first traditions. Not by digitising exclusion more efficiently, but by offering a space where people feel safe to say what they would never write, in the way they actually think (Ray). If we get this wrong, we automate the silencing of the people who are already unheard. If we get it right, we build a future where technology restores mana rather than extracting it.
3. The Researcher: A Position of Practice
I am a Moana Tiriti researcher. Pacific Islander and New Zealand European, a self-taught builder-researcher working where technical implementation meets cultural responsibility. I did not arrive at this research from outside the problem. I built a bridge I personally needed and discovered it was a bridge my communities needed too.
My positionality as a builder-researcher, including where my knowledge holds and where it ends, is documented in the Methodology (Section 4) and the Researcher Context. Being honest about where my knowledge ends is what makes the cultural grounding in this work credible, not decorative.
4. The Research Framework: The Kei Compass
How might we design ethical conversational AI for vulnerable interactions using Māori and Pasifika values to protect marginalised communities?
I use the term 'Kei Compass' to name a directional framework I adapted from Dr Kiri Dell's Week 12 lecture on using Māori values to ethically evaluate technologies (Dell, 2025). The five directions (Kei raro, Kei mua, Kei runga, Kei roto, Kei waho) are hers. The application to conversational AI design is mine.
This question has five dimensions, structured through the Kei Compass (adapted from Dell, 2025):
What systemic barriers silence vulnerable voices in digital spaces?
How do Māori and Pasifika values translate into AI design decisions?
What purpose does ethical AI serve for vulnerable communities?
How do we protect data sovereignty, safety, and dignity?
How do we develop this ethically with cultural governance?
These are not five separate questions. They are one inquiry, examined from five angles. Each one shaped what I built next.
5. How This Report Is Structured
This microsite is the report. It is a living archive of a practice-based journey. It does not describe the work. It is the work, made navigable.
The Methodology documents the untidying of this project: the shift from feedback systems to relational AI, and why the build was the thinking. The Findings, Discussion, and Critical Reflection are the formal research analysis. Evidence before interpretation, then interpretation that goes somewhere, then an honest account of what I learned and what it cost me. The Researcher Context is the positionality statement, the messy personal timeline that ran underneath all of it.
The four artefacts each hold a different angle on the same central question. Ray is the capstone build: a voice-first relationship coach for high-vulnerability interactions. Building Safe Conversational AI is a practitioner's manual for safety architecture across four builds. Conversational AI as Relational Space is a data-grounded thought piece on voice, vā, and what participants actually told us about being heard. Build Code Practice is a framework for making developer values visible in every line of code.
Move through in any order. The thread holds.
6. A Reader's Invitation
To the practitioners: I hope you find a framework you can use tomorrow. To the participants: I hope you see your own wisdom reflected in these decisions. And to the assessors: I hope you feel the weight of the responsibility held in every line of code.
This is not just a report. It is the beginning of a kōrero about what it means to be heard in a digital world.
Let's begin.
The Language
This research is built on Māori and Pasifika values. They are not a theoretical layer added to the analysis — they are the analysis. Ten concepts run through everything you are about to read.
Vā
The sacred relational space between people. Not a gap or an absence — a living connection that must be actively tended. When AI enters a human interaction, it enters the vā. That is the central design obligation of this research.
Mana Motuhake
Absolute sovereignty — over your data, your story, your identity. In this research, it means the person who generated the data owns it. Every decision about what gets stored, who sees it, and whether a record exists at all traces back to mana motuhake.
Manaakitanga
Care as obligation, not gesture. In this research, manaakitanga lives in the AI's pacing, its validation, its insistence on checking your nervous system before asking about your relationship. It is care in the code.
Whanaungatanga
Relationship, kinship, the bonds that make people belong to each other. In this research, it means prioritising the relational bond over data extraction. Trust before content. Connection before questions.
Tapu & Noa
Tapu is the state of being sacred, restricted, under spiritual protection. Noa is the state of being ordinary, accessible. The central paradox of this research sits between them: AI risks making tapu things noa.
But for people cut off from their culture, noa may be the only door available. That paradox is not resolved. It is held.
Utu Tūturu
Enduring collective reciprocity — not transactional exchange but ongoing obligation. What you take, you give back. Participants gave real, vulnerable parts of their stories. The loop must close by returning findings to them. That is not optional.
Whakamā
Shame — specifically, the shame of not knowing enough about your own culture, language, or identity. The reason a judgment-free AI space matters: a place to ask questions that feel too basic, too exposing, without the terror of getting it wrong.
Kaupapa Māori
A Māori-centred approach to research — research done by, for, and with Māori communities. It is not a method bolted onto a Western framework. It is the framework.
Wānanga
A gathering for deep learning and knowledge sharing. In this research, the Culture Meets AI wānanga was a 90-minute session where participants explored together whether AI belongs in cultural spaces.
Kaitiakitanga
Guardianship — stewardship that extends across generations. In this research, it means protecting data and knowledge not just for the people in the room today, but for their mokopuna.
Other te reo Māori and Pasifika terms appear throughout. Hover over any highlighted word to see its meaning.
Mā te whakarongo, ka mōhio,
mā te mōhio, ka mārama
Through listening comes knowledge; through knowledge comes understanding.
The Core Inquiry
How might we design ethical conversational AI for vulnerable interactions using Māori and Pasifika values to protect marginalised communities?
Kei Runga / Purpose
What purpose does ethical AI serve for vulnerable communities?
Kei Raro / Foundations
What systemic barriers silence vulnerable voices in digital spaces?
Kei Roto / Agency
How do we protect data sovereignty, safety, and dignity?
Kei Waho / Innovation
How do we develop this ethically with cultural governance?
Kei Mua / Values
How do Māori and Pasifika values translate into AI design decisions?
The Kei Compass Framework (Adapted from Dell, 2025)
These are not five separate questions. They are one inquiry, examined from five angles. Each one shaped what I built next.
Kia whakatōmuri te haere whakamua
I walk backwards into the future, eyes fixed on my past.
How I Got Here
Audio: How I Got Here (Podcast) - 4 Mins
The Accidental Full Stack Developer
Voice unlocks what text filters out.
Privacy is the precondition for honesty.
Hold the paradox, don't resolve it. →
04. Ray (59 sessions). The relationship behind the AI is where safety lives. →
My brain works out loud. It always has. I've been de-coding my thoughts and re-coding them for text my entire life. This research began there: with the energy it costs to convert fluid thought into static text, and the question of what gets filtered out in that process. For Māori and Pasifika communities, that filter has always been active. Voice is the primary mode of knowing and relating. Speech comes before writing. It always has.
That lived experience shaped the methodology from the start. I designed for voice because I needed voice, and I suspected my communities did too. The engagement data from the Project Rise survey confirmed this at a scale I hadn't anticipated (see Findings, Kei raro), but the commitment to voice-first design came before the data. It came from my body, not my literature review.
The methodology that emerged is practice-based and iterative. Not because that is the approved framework, but because the building was the thinking. Every architectural decision made across four builds generated a finding. Every failure (the feedback forms that broke in production, the voice clone I couldn't recall, the Credit Crisis in the Ray pilot) taught me something the planning documents never could. The build was the research. This document is the account of how I came to know that, and what it cost.
2. The Build-to-Think Process: Iterative Cycles
Four builds. Each one shaped what came next.
| Phase | Focus | What Surprised Me |
|---|---|---|
| Project Rise Initial Survey | Voice-to-text agents | People would talk to an AI for 23.4 minutes on high-engagement days about service feedback if the voice felt relational |
| Leadership AI Coach | Learning integration | The Heroic Trap: high-performers formed a relational bond with a familiar digital voice, using it to regulate their nervous systems at 2 AM |
| Culture Meets AI (originally scoped as YourHQ, see G1 below) | Wānanga + AI in cultural context | The emotional stakes weren't just about a tool. They were about belonging, identity, and the fear of getting culture wrong |
| Ray | Relational/vulnerable space | The Credit Crisis: participants used Ray so much more than expected that they bankrupted the pilot budget early |
The clearest example of how each phase shaped the next is the State Before Story rule. In the Leadership AI Coach build, the behavioural pattern was clear: shorter, more defended sessions from users who arrived dysregulated. This became a hard-coded architectural decision in Ray. The AI must address the user's nervous system and body state before analysing any relationship conflict. That is not a prompt suggestion. It is an ethical requirement written into the system logic.
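A rule like State Before Story can be expressed as a hard gate in the conversation loop rather than a prompt suggestion. The sketch below is illustrative only, not Ray's actual code; the state labels and function names are assumptions made for the example.

```python
from enum import Enum

class NervousSystemState(Enum):
    """Illustrative somatic states, checked before any coaching content."""
    REGULATED = "regulated"
    ACTIVATED = "activated"
    SHUTDOWN = "shutdown"
    UNKNOWN = "unknown"

def next_agent_move(state: NervousSystemState, user_topic: str) -> str:
    """State Before Story as an architectural gate: the agent may not
    analyse relationship content until the user is somatically regulated."""
    if state is not NervousSystemState.REGULATED:
        # Grounding first. The topic is deliberately ignored at this stage.
        return "grounding_check"
    return f"coach:{user_topic}"

# A dysregulated user asking about conflict still gets a grounding check.
assert next_agent_move(NervousSystemState.ACTIVATED, "conflict") == "grounding_check"
assert next_agent_move(NervousSystemState.REGULATED, "conflict") == "coach:conflict"
```

Because the gate sits in system logic rather than in the prompt, a model that "forgets" its instructions still cannot skip the grounding step.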
The pivot after Christmas 2025 was an intentional strip-back. I stopped solving feedback systems (the what) and went deeper on conversational AI as relational space (the how). That came from recognising a genuine obsession with the technical and ethical design process itself. Advisor Felix Scholz confirmed what the builds had already been telling me: the building was the thinking. The original research questions (long, AI-assisted, difficult to remember) were replaced with a single focused question and five sub-questions. Clarity that only comes from doing.
Updating the learning agreement three times felt exposing. The first time, in December, nobody else in my study group was doing it, and there was real anxiety about being seen as someone who couldn't commit. The second time, in January, felt different. Settled. The third time, at the end of March, was a final tweak after I swapped a previously named case study for another, which required more work but made the research much richer. A learning agreement is a living document. Updating it wasn't instability. It was the research working exactly as it should.
3. Methodological Deviations, Named Honestly
This project deviated from its original plan in six ways. Each one is named here because assessors will notice, and because honest methodology is better than a tidy one. The full deviation table and governance timeline is in Appendix C2.
- G1: YourHQ replaced by Culture Meets AI. The original Safe Conversational AI artefact scoped YourHQ as the third build. While the YourHQ build happened and was documented, the focus shifted during the research to Culture Meets AI, a wānanga-based project exploring AI with Māori and Pasifika communities that generated richer data and more relevant findings for the core research question. The learning agreement reflects this change. Culture Meets AI was harder and more culturally complex than YourHQ.
- G2: The Kaupapa Charter is not a standalone artefact. The Kaupapa Charter was named in the original project description as a potential output. It lives inside the Build Code Practice artefact, where it belongs, as one part of a values-in-practice framework rather than a standalone document. This was a scoping decision, not an omission. Signpost: see Build Code Practice for the full Charter.
- G3: Co-design and user testing were reduced from the original plan. The original methodology envisioned deep community co-design across multiple groups. What was delivered was peer consultation. Structured, relational, and honest, but not the same thing. The relational reality is that 18 months is only enough to work with existing trust. Genuine co-design with Māori and Pasifika communities requires a long period of groundwork before the first line of code is written. That is a finding about what ethical technology co-design with Indigenous communities actually requires.
- G4: The wānanga moved from in-person to online. The original plan included in-person wānanga. My own wānanga were delivered online, though I did facilitate an in-person session for cohort peer and co-researcher, Lee Palamo, on AI Conversations. Where the original plan included face-to-face hui with youth, Māori, and Pasifika groups, what was delivered was an online digital survey using my own voice (the Project Rise Digital Survey). The rationale was practical: geography, participant availability, and the need to include voices beyond a single location. The impact on relational depth was real and is acknowledged. Online wānanga are not the same as in-person hui. The data gathered remains valuable. The limitation is named.
- G5: A parallel peer accountability structure emerged alongside the formal advisory setup. Aligning schedules across multiple advisors and a full-time research workload was harder than I had planned for, and the rhythms of asynchronous and synchronous availability didn't always meet the moment. Alongside the formal advisory structure, a parallel accountability structure emerged through my peer cohort in GEN25, who pushed back, reframed, and held the kaupapa with me through the build. I made clear, documented decisions independently where needed, and every major deviation was recorded. The ethics form (MTF.8888.275, see Appendix C1) was maintained as the anchoring ethical document throughout. This is a methodological learning about how distributed accountability actually works in practice.
4. Cultural Governance: The Moana Tiriti Lens
Working as a Moana Tiriti researcher involved a constant negotiation between identity and responsibility. A recurring sense of cultural disconnection, particularly from Samoan heritage, was not a limitation of the study. It was a reflexive data point. It kept the research honest.
I can speak to the messy middle of technical implementation with confidence. I remain a student of the Vā. That is a methodological position, not a performance of humility. My role is to keep listening, to stay in the questions, to resist the urge to claim what is not yet fully mine. There is a whakapapa connection. A sense of kinship, of being drawn toward Te Ao Māori the way a cousin is drawn toward whānau they grew up apart from. The place is known, even when the language isn't yet fluent.
This drove a core design decision: rather than decorating the AI with Māori words or phrases whose pronunciation I could not guarantee, I chose plain English grounded in structural values (manaakitanga, whanaungatanga) where the architecture itself carried the ethic. Anti-extractive building meant choosing integrity over aesthetics.
Key cultural shifts shaped by peer advisors (governance advisory moments documented in Appendix C2):
- Clarifying "no memory": Lee Palamo identified that the phrase was ambiguous. Did it mean Ray wouldn't recall past conversations, or that transcripts weren't retained at all? I clarified this in all pilot materials: Ray holds no conversation context between sessions for participant privacy, but transcripts are retained by the researcher for safety monitoring and analysis.
- The Mate on the Back Porch: Peer feedback reframed Ray from a clinical therapist model to a companion sitting in a noa space to hold tapu emotions. That cultural distinction shaped the agent's tone and its limits.
- Spiritual grounding: Rob Ngan-Woo recommended incorporating karakia, whakataukī, or lāgaupu to bookend sessions. I embedded this in the architecture through reflection opportunities. It reflected tautua (service offered with a pure heart) and anchored the digital interaction in relational and spiritual context, not just functionality.
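The "no memory" clarification above is really two independent settings that the original phrase conflated. A minimal sketch of that separation, with hypothetical names (this is not Ray's code), shows how consent wording can be generated from the policy itself so the disclosure cannot drift from the behaviour:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyPolicy:
    """Hypothetical policy object separating the two meanings of 'no memory'
    that Lee Palamo's feedback surfaced. Names are illustrative."""
    cross_session_context: bool  # does the agent recall past conversations?
    retain_transcripts: bool     # does the researcher keep transcripts?

# Ray's pilot setting: no recall between sessions (participant privacy),
# but transcripts retained for safety monitoring and analysis.
RAY_PILOT = PrivacyPolicy(cross_session_context=False, retain_transcripts=True)

def memory_disclosure(policy: PrivacyPolicy) -> str:
    """Generate unambiguous consent wording directly from the policy."""
    recall = ("will not recall past conversations"
              if not policy.cross_session_context
              else "recalls past conversations")
    keep = ("transcripts are retained by the researcher"
            if policy.retain_transcripts
            else "no transcripts are retained")
    return f"Ray {recall}; {keep}."

assert "will not recall" in memory_disclosure(RAY_PILOT)
assert "retained by the researcher" in memory_disclosure(RAY_PILOT)
```

Deriving the pilot-material wording from one policy object is one way to make the ambiguity Lee identified structurally impossible.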
5. Data Ethics and the Tension of Intimacy
The ElevenLabs transcripts were the data source that felt most alive, because they captured raw human vulnerability. This created the hardest ethical tension in the project.
The Researcher's Burden: Reading transcripts of known participants felt like a violation. These were friends sharing intimate relationship struggles, fears, and private moments with a tool I had built. The intimacy of the data did not feel like a research asset. It felt like trespass.
The Safety Backstop: I developed a triage protocol to hold this tension. When a crisis flag triggered an automated email alert (for example, the keyword "end it"), the process was: open ElevenLabs, copy the flagged transcript without reading it, and feed it to a pre-configured Claude Project space to assess context. AI acted as a first-pass filter, a buffer between my personal relationship with participants and the safety obligation. I remained the final decision-maker about whether direct contact was required.
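The first stage of a protocol like this can be sketched as a simple keyword flag. This is an assumption-laden illustration, not the production alert: the keyword list and function name are invented for the example, and only "end it" is drawn from the text above.

```python
# Illustrative first-pass crisis filter. Only "end it" comes from the
# research; the other phrases are placeholder assumptions.
CRISIS_KEYWORDS = ("end it", "can't go on", "hurt myself")

def flag_for_triage(transcript: str) -> bool:
    """Does the transcript contain a crisis phrase? A True result routes
    the unread transcript to an AI context check, with the researcher
    as the final decision-maker on direct contact."""
    text = transcript.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

assert flag_for_triage("Some days I just want to end it all.") is True
assert flag_for_triage("We talked through the plan calmly.") is False
```

Naive substring matching over-flags (for example, "spend it" contains "end it"), which for a crisis filter is the safer failure mode: false positives cost a context check, false negatives cost a missed crisis.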
The emotional cost: The idea that something I had built could cause harm to somebody made me feel physically ill. But the responsibility to ensure a participant didn't leave a session in a mental health crisis overrode personal discomfort. That tension between relational intimacy and research obligation is not resolved by an ethics form. It lives in the builder.
The technical lesson: Both Ray and the Leadership AI Coach had feedback form features that didn't perform as intended. In the Leadership AI Coach, dynamic variables weren't populating correctly into Supabase, but this was out of scope anyway, set up for a potential admin dashboard that was never the priority. In Ray, it stung more. The test environment had worked. Production didn't. Rather than risk breaking something else by updating the live app, I left it and worked with what Supabase had captured directly. What softened the blow: Ray's first line of defence was the agent itself. It asked participants for feedback verbally, and a modal popup gave them a second chance. There's a reasonable chance most of that feedback was captured anyway. But the gap between what was planned and what actually worked is its own finding: build in redundancy, and test in production conditions before you launch with real participants.
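The redundancy lesson above can be sketched as layered capture: every channel is tried, and a broken one is skipped rather than allowed to sink the whole collection. Channel names here are illustrative, not the actual pilot's implementation.

```python
# Sketch of layered feedback capture. A failing channel (like Ray's
# production form) is tolerated, not fatal; whatever the other
# channels yield is kept.
def collect_feedback(channels: list) -> list:
    """Try every (name, fetch) channel; keep non-empty results."""
    captured = []
    for name, fetch in channels:
        try:
            result = fetch()
            if result:
                captured.append((name, result))
        except Exception:
            # A broken channel is skipped; capture continues.
            continue
    return captured

def broken_form():
    raise RuntimeError("form failed in production")

results = collect_feedback([
    ("web_form", broken_form),                        # the layer that broke
    ("verbal_ask", lambda: "agent-collected feedback"),
    ("modal_popup", lambda: "popup feedback"),
])
assert [name for name, _ in results] == ["verbal_ask", "modal_popup"]
```

The design choice mirrors what happened in the pilot: the agent's verbal ask and the modal popup caught most of what the broken form lost.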
The AcademyEX ethics approval (MTF.8888.275, Appendix C1) and crisis protocols (SOS button, 1737 referral links) provided a legal safety net. They could not carry the emotional weight of knowing friends were sharing secrets with a tool I had built. Relational safety had to be built through personal follow-up. Not just a form.
The Safe Builder shift: The Ray pilot produced a permanent change in how I approach building. I will never build tech for tech's sake. Every line of code I write now carries a safety opinion. If I can't build the human-in-the-loop safety net to catch a user when the AI fails, I have no business building the system at all. Safety isn't a feature. It's the foundation.
6. AI as Research Method
AI was not just the subject of this research. It was the method. The full account of which tools I used, how my use evolved, and where AI got it wrong is in the AI Reflection. Two methodological insights belong here.
The first is about structure becoming framework. I set up separate NotebookLM notebooks organised by purpose and by build. The decision about what data belonged in the low-vulnerability Project Rise notebook versus the highest-vulnerability Ray notebook forced a categorisation discipline that eventually became the Vulnerability Progression Framework at the heart of this methodology. The tool's limitation (you cannot share documents across notebooks) became a research architecture.
The second is about AI as design partner. Claude pushed the design of Ray beyond "nice relationship advice" toward Clarity over Comfort, helping me see that Ray should function as a regulated mirror, not a cheerleader. That shift shaped every subsequent system prompt iteration and is visible in the prompt evolution documented in the AI Reflection and Appendix B3.
A living contradiction. My voice clone was submitted to the ElevenLabs public library to solve a technical problem. The cloned voice wasn't appearing as an option in the agent builder until it was made public. What I didn't anticipate was that once someone started using it, the library's terms required six months' notice to remove it, well outside this project's timeline. The voice clone (recorded from old, flat podcast audio, never updated) is now used in social media reels, in ads, and by cohort members for their own mahi. There is currently no other Pacific voice in the ElevenLabs library. The voice is out there, earning income, being used in contexts that I never realised I was agreeing to, and it cannot easily be recalled. This was not a deliberate provocation. It became one anyway. It shows what happens to Mana Motuhake te reo Māori Core Value Mana Motuhake Absolute sovereignty — over your data, your story, your identity. Research Context [1, 3, 4] ↗ "In this research, it means the person who generated the data owns it. Every decision about what gets stored, who sees it, and whether a record exists at all traces back to mana motuhake." when you build on third-party infrastructure: the voice is now a global commodity.
7. Methodological Gamble and Limitations
The single biggest gamble was focusing the entire thesis on the Ray case study in the final 12 weeks. That meant parking months of work on broader feedback systems to go after the high-risk, high-reward territory of vulnerable conversational space. It was a deliberate choice and it paid off, but it required abandoning significant prior investment.
What still stings is the demographic gap. Not having enough Pasifika voices feels like an unfinished promise to community. This is not a data quality issue. It is a relational one. And it has opened something personal: this research has created a real commitment to reconnecting with Samoan heritage. There is an ache and a yearning there, not just an academic gap. What that looks like in practice is still being figured out. But it means building relationships with Pasifika communities before the next build begins, not alongside it.
What I have made peace with are the technical limitations, particularly the British tilt of current TTS Technical / Domain TTS Text-to-Speech; technology that converts written text into spoken voice. Research Context "Identified as a site of potential cultural harm if the engine mispronounces Indigenous languages, leading to a decision of 'silence over performance.'" models, whose mispronunciation of te reo Māori undermined cultural resonance. This cannot be fixed by one researcher. Being honest about it is a research finding: current commercial voice AI is not yet culturally safe for te reo Māori delivery, and that matters.
On the demographic gap: to rush recruitment of Pasifika whānau without a pre-existing relationship would have been extractive research. The decision to work primarily with a Human Proxy Framework Terminology Core Value Human Proxy Theory The theory that AI does not generate trust but borrows it from the human accountability structures behind it. Research Context [1-3, 9] ↗ "The anchor finding of the research; safe AI requires a visible, accountable human steward rather than just better code." (peers already in a relationship of trust) ensured safety protocols were tested in a container of existing care, not pulled from communities who had not yet consented to the relationship. That is a methodological value: relational integrity over demographic targets.
8. Reflection: From Tidy Project to Authentic Contribution
This methodology is not a recipe. It is what happened when I stopped trying to make the project tidy and started paying attention to what the builds were actually telling me. An ethical framework is not a document written at the start of a project. It is the sum total of every hard no, every breakdown, and every moment of clarity that emerged through building.
The most important methodological decision was to untidy the project. To acknowledge that the how of building mattered more than the what of the original survey goal. The richest data, the strongest findings, and the most important ethical questions all emerged from the builds, not the planning documents.
Tautua as architecture: Serving with a pure heart means building for human dignity over product stickiness. This played out concretely in Ray's stateless design Technical / Domain Stateless Architecture A design where the system retains no memory of previous sessions or user interactions. Research Context [1, 10] ↗ "A technical manifestation of Mana Motuhake; ensures the user's story is entirely theirs and prevents the creation of a shadow profile." , the deliberate choice to sacrifice better UX and user retention to ensure each participant's story remained entirely their own. No memory was not a technical limitation. It was a values statement.
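The stateless choice above can be shown in a few lines. This is a hedged sketch of the principle, not Ray's actual handler; the function names and the `respond` model-call stand-in are assumptions for illustration.

```python
# Sketch of "stateless by design": each conversation turn is handled
# in full and then discarded. No user lookup, no session store, no
# write-back anywhere in the path.

def handle_session(transcript: str, respond) -> str:
    """Process one session with no read or write of prior state.

    `respond` stands in for the model call, which sees only the
    current session's words. When this function returns, nothing about
    the participant persists in the system, so no shadow profile can
    ever accumulate. The UX cost (the agent can never say "last time
    you mentioned...") is the deliberate price of that guarantee.
    """
    reply = respond(transcript)  # model sees only this turn
    return reply  # nothing stored; the story stays with the speaker
```

The design trade is visible in what the sketch lacks: there is no database import, no user ID parameter, and no place for memory to hide.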
"O tagata lava e tautua ma le fa'amaoni, e fa'apalēina!" "Those who serve others with a pure heart, will be blessed!" — Rob Ngan-Woo, GEN25 peer advisor
To the reader of this microsite: this methodology is not perfect. That's the point. Things changed as the research went along. The commitment made at the start was made without fully knowing what it would ask for. But the way of working turned out to be authentic regardless, so there was nothing to unlearn, only things to deepen. What this shows, hopefully, is the ability to evolve across time. To hold a commitment loosely enough that it can grow into what it actually needs to be. You can change. The work can change. That's not failure. That's research.
End of Act 1