Act 3: Reflection — critical reflection, discussion, AI use, conclusion
Act 3 of 3
The Reflection.
I didn't evolve gradually. I shifted at specific moments, in response to specific data. This is the arc.
LO2 — Applied Practice: four-phase arc, researcher positionality, evidence of professional evolution
The Builder's Arc
Storyteller to Ethical Architect.
Phase 0
The Storyteller
Oct 2024 – Jul 2025
Three 60+ hour creative assessments. A podcast. Two storybooks. A 140-page fantasy epic that functioned as a research proposal. Building forced coherence. Every gap in my knowledge broke the narrative.
Creative Archives ↗ View the podcast & storybooks. Presented at Converge Symposium. Built worlds to find out what I knew.
Phase 1
Overwhelmed Academic
Jul – Sep 2025
Chasing compliance while community didn't show up. Sleep-deprived, tripling life pressure, trying to build a better form for a problem I hadn't yet understood.
"Why don't I just try an AI Agent?" Stopped studying the problem. Started building the solution.
Phase 2
Obsessive Builder
Oct – Dec 2025
130-hour vibe-coding sprint. Taught myself React, Next.js, Supabase. Built four applications. Then the question that forced a reset: was I building for the research, or building for myself?
Christmas break. The research question I actually cared about had been present all along in the builds.
Phase 3
Ethical Architect
Jan – Mar 2026
Day one of the Ray pilot: participants crying. The crisis flag fired. "I hadn't factored myself into the equation at all." The commercial product became a cautionary case study.
Refused to monetise Ray. Every line of code carries a safety opinion.
"It wasn't until today that I was like, sh#t, I actually should have thought about myself for this equation, because I hadn't at all... just the idea that I could have built some technology that could potentially harm somebody is what really got me."
— Study group recording, 12 February 2026
What It Means
"[The researchers] are putting this AI conversation inside a cultural container rather than pretending the AI itself is culturally safe on its own."
— W-10
The engagement gap reported in the Findings is the entry point for everything this discussion argues. Ethical conversational AI for Māori and Pasifika communities is a relational problem that requires a relational solution. Not a technical one.
Discussion 1 — Kei raro (Foundations): Reframing the Digital Divide
The engagement gap (where the same participants abandoned text in seconds but spoke for nearly twenty minutes) is empirical evidence of what Ong (1982) defines as the structural tension between oral and literate cultures. Ong argues that oral cultures encode knowledge and relationship through rhythmic, spoken presence, and that text requires a decontextualised mode of thought foreign to communities whose epistemologies live in the breath and the voice. For Māori and Pasifika communities, text-based interfaces are not merely inconvenient. They impose a cognitive tax that speakers pay and typers do not.
This research reframes the digital divide. Traditionally, the divide has been theorised as a lack of hardware or connectivity (Kukutai & Carroll, 2020). My findings suggest that even with hardware, a divide persists: a divide of modality. Digital spaces are currently rented land because their architectural logic (static text, transactional extraction) does not match the oral-first epistemologies of the Moana.
Picard's (1997) framework of affective computing helps explain why voice unlocked the inner critic, the social threat disclosures, the things people would never have written. Spoken voice carries somatic data that text strips away: the hesitation, the breath, the moment someone follows their own thread rather than the survey's. By designing for voice, I was not providing an accessibility accommodation. I was enacting what Tuhiwai Smith (1999) names a decolonial act, and I claim that framing for what I built: a system that allows communities to inhabit digital space on their own epistemological terms, rather than forcing them to translate fluid thought into static text.
Discussion 2 — Kei mua (Values): Logic over Labels
The 30/70 finding (where 100% of participants felt the cultural values that only 30% could name) confirms Friedman's (1996) Value-Sensitive Design theory: technology embeds values whether the designer intends it or not. The standard approach to cultural AI in Aotearoa has been decorative: adding te reo Māori labels or cultural greetings onto Western reasoning engines. The 30/70 finding proves that participants respond to the logic of a system, not its label.
Gebru et al. (2021) warned that large language models encode dominant cultural defaults as universal baselines, marginalising Indigenous perspectives through what they call stochastic parroting: the production of statistically plausible but contextually hollow outputs. My research answers that with practice: cultural values must be enacted as architectural constraints, not cultural overlays. In Ray, vā (the sacred relational space between people) was not a Māori-themed skin. It was the State Before Story protocol, a structural prohibition on analysing any conflict narrative until somatic stability was confirmed. I moved the value from the peripheral (what the AI says) to the procedural (how the system reasons). For the full safety architecture, see Building Safe Conversational AI: A Method.
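To make that procedural move concrete, here is a minimal TypeScript sketch of what a State Before Story gate can look like. It is illustrative only: the type and function names are hypothetical, not Ray's production implementation.

```typescript
// Illustrative sketch only, not Ray's production code. The gate makes
// "State Before Story" procedural: coaching logic is unreachable until
// a regulation check has passed.

type SomaticState = "regulated" | "activated" | "shutdown";

interface SessionState {
  somaticState: SomaticState;
  groundingConfirmed: boolean;
}

// An architectural constraint, not a prompt suggestion: the conflict
// narrative cannot be analysed until this returns true.
function canEnterStory(session: SessionState): boolean {
  return session.somaticState === "regulated" && session.groundingConfirmed;
}

function nextMove(session: SessionState): "grounding" | "coaching" {
  return canEnterStory(session) ? "coaching" : "grounding";
}
```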
This moves the practitioner beyond inclusive design toward what Graham Smith (1997) calls the enactment of Kaupapa Māori, where felt cultural safety is produced not when the system uses a community's language, but when its logic respects the sanctity of the person's state. This values-as-structure principle runs through all four builds. Project Rise stripped te reo rather than mispronounce it. The Leadership AI Coach hard-coded Incognito Mode rather than log vulnerable data. Culture Meets AI held the tapu/noa paradox without resolving it. Ray embedded manaakitanga (care as obligation, not gesture) in the response hierarchy. In every case, the value governed the architecture. The Build Code Practice artefact documents how this principle was made explicit and transferable.
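A minimal sketch of the label-versus-constraint distinction, again with hypothetical names rather than the Leadership AI Coach's actual code:

```typescript
// Illustrative sketch. A value as a label is a setting someone can
// flip; a value as a constraint removes the unsafe path from the
// architecture entirely.

// Label: privacy as a preference a later commit could default off.
// const config = { incognito: true };

// Constraint: the session type has no persistence path to call.
interface IncognitoSession {
  readonly mode: "incognito";
  transcript: string[]; // held in memory for this session only
}

function respond(session: IncognitoSession, userTurn: string, reply: string): void {
  session.transcript.push(userTurn, reply);
  // No save(), no log(), no analytics hook: extraction is not a
  // disabled option, it is unrepresentable in this design.
}
```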
Discussion 3 — Kei runga (Purpose): The Machine as Sanctuary
The shift from efficiency to sanctuary highlights a gap in current human-to-human support structures. Turkle (2011) warned that preference for machine interaction risks what she calls being alone together: technologically connected but relationally depleted. Her concern is not wrong. But it describes a population with access to human connection who are choosing machines instead. My participants were not choosing machines over people. They were accessing a quality of non-judgmental listening that social performance, stigma, and the cost of therapy had made unavailable to them. Turkle's framework does not account for communities where human judgment carries historical and cultural weight, where whakamā (the shame of not knowing enough about your own culture, language, or identity), professional shame, or the fear of being seen as less-than means that disclosure to a human carries a social cost the machine does not. For these communities, the machine's infinite patience is the only available entry point to the conversation.
As participants articulated across the four builds (documented fully in Conversational AI as Relational Space), the AI's lack of interpersonal stakes removed the emotional labour of managing another person's reaction. The absence of a watching human body changed what people were willing to say. Rogers' unconditional positive regard (1951) and Turkle's concern about machine substitution both assume the human alternative is available and relatively safe. For participants carrying whakamā, professional shame, or the weight of cultural performance, it frequently isn't.
I argue that ethical AI approximates two of Rogers' (1951) three conditions for psychological safety: unconditional positive regard and empathy through precise mirroring. It cannot provide the third, congruence: genuine feeling. But for participants processing cultural whakamā or professional shame, the absence of human judgment was more valuable than the presence of human congruence. As Brown (2012) notes, shame requires the fear of disconnection. Because a machine carries no social weight, a participant cannot be disconnected from it in any way that harms their social standing.
The State Before Story protocol put Durie's (1994) Whare Tapa Whā model directly into the code, addressing tinana (the physical dimension) before engaging hinengaro (the mental). This sequencing is not a clinical formality. It is a relational one: you cannot tend the vā with someone whose nervous system is in threat response. The sadness in the sanctuary finding (that relief from being consistently heard by a machine felt remarkable) is a diagnostic for the loneliness and relational depletion in our current social systems. I am not arguing that AI should replace human connection. I argue that ethical relational AI serves as a regulated mirror that temporarily fills the gap where human connection has become either too expensive or too socially risky, and in doing so, points the user back toward their community with a regulated nervous system.
Discussion 4 — Kei roto (Agency): The Prohibitive Cost of Equity
The equity cost finding is the most urgent contribution of this research. The model switch documented in the Findings (Kei roto), where Insight scores dropped from 4.9 to 3.1 after a cost-driven downgrade, revealed a specific mechanism that Noble (2018) and Gebru et al. (2021) do not fully account for. Both have documented how algorithmic systems consistently underperform for marginalised communities. My research adds precision: the cheaper model cannot hold the complexity of vulnerable Indigenous data. The stochastic parrot problem is not just a training data problem. It is a compute problem. And compute costs money.
This produces what I call the Equity-Safety Paradox. The communities who most need high-quality, non-judgmental AI support (those excluded from traditional therapy by cost, stigma, or cultural mismatch) are the very communities whose data is most likely to be processed by budget models that lack the reasoning depth to hold their stories safely. Ethical AI for vulnerable communities is an economic and political problem, not a technical one.
I built Ray's stateless architecture (the system retains no memory of previous sessions, so no shadow profile of the user can form) on the principles of Te Mana Raraunga: rangatiratanga, kotahitanga, manaakitanga, and kaitiakitanga. Data sovereignty positioned not as legal compliance but as a relational obligation (Te Mana Raraunga, 2018). The voice clone finding (see Methodology, Section 6) shows how quickly that sovereignty dissolves when you build on third-party infrastructure. Carroll et al. (2020) argue that Indigenous communities must move beyond FAIR principles (Findable, Accessible, Interoperable, Reusable) toward CARE principles (Collective Benefit, Authority to Control, Responsibility, Ethics). My findings confirm this and go further: until communities operate on sovereign infrastructure rather than renting space on US-based commercial servers, CARE principles remain aspirational rather than operational.
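As a sketch of what statelessness means in practice (hypothetical names, not Ray's actual implementation), the defining property is that the session's only exit path destroys the record:

```typescript
// Illustrative only. Stateless by construction: a session exists only
// in memory, so no shadow profile can accumulate across visits.

interface Turn {
  speaker: "user" | "agent";
  text: string;
}

class StatelessSession {
  private turns: Turn[] = []; // in memory only; never written to storage

  addTurn(turn: Turn): void {
    this.turns.push(turn);
  }

  end(): void {
    // Ending the session erases the record. There is no persistence
    // layer to forget to disable.
    this.turns = [];
  }
}
```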
Discussion 5 — Kei waho (Innovation): The Human Proxy Finding and the Tapu/Noa Paradox
The most important original contribution of this research is the Human Proxy finding: AI does not create vā. It borrows it from the human accountability structures surrounding it.
In Human-Computer Interaction (HCI) literature, trust is typically studied as a property of the interface: transparency, reliability, or anthropomorphism. I challenge this framing. Drawing on Goffman's (1959) theory of social performance, I argue that AI is incapable of performing the social role of a trustworthy agent because it faces no social consequences for the betrayal of trust. It has no mana to lose. A participant's willingness to be vulnerable with my AI agents was not a response to the algorithm. It was a response to the human relationship I had tended over years. "...probably who gave it to me would make it be safe. So if I have high trust of the person, or the organisation that the AI form is connected to, I would probably feel comfortable. But whereas if it's something that's just popped up in my emails or randomly come to me, I would probably not trust it and would not complete it." (W04) The AI was safe because I was accountable. The full evidence base and theoretical development of the Human Proxy finding is in Conversational AI as Relational Space.
The cloned voice amplified the Human Proxy effect. Participants heard a familiar voice and extended existing trust into the digital space (see Conversational AI as Relational Space for the full evidence). The voice didn't just reduce friction. It activated relational trust that already existed and channelled it into the AI interaction.
Vā, as theorised by Anae (2010) through the Samoan research methodology of Teu le vā, is the sacred reciprocal relational space between people, maintained through active tending. Anae's framework positions vā not as a static concept but as something that must be continuously nurtured. Applied to AI design, this research shows that vā cannot be manufactured by the system. It must be pre-established by the human relationships that precede the interaction. This is the first application of Teu le vā as a structural requirement for AI design, not just a theoretical lens.
Ray's architecture held Western clinical psychology and Te Ao Māori simultaneously through Two-Eyed Seeing, Etuaptmumk (Marshall, 2004), with neither lens subordinated to the other. The Western clinical layer drew on three frameworks: Gottman's relationship stability research (bid-turning, the 5:1 ratio, repair attempts), Lerner's (1985) relational dynamics framework (differentiation of self, the over-functioning/under-functioning dance, the boundary-versus-control-attempt distinction), and Brown's (2012) shame resilience theory. The Te Ao Māori layer drew on vā, Mana Motuhake (absolute sovereignty over your data, your story, your identity), Whare Tapa Whā, and kotahitanga (unity of purpose rather than forced uniformity). Each value system governed a different layer of the architecture. The clinical frameworks shaped Ray's pattern recognition and coaching logic. The Māori frameworks governed the relational structure, sequencing, and data decisions. This is not theoretical pluralism. It is architectural integration. The Build Code Practice artefact documents how this dual-layer approach was made explicit and transferable.
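One way to picture that separation of governance is a minimal TypeScript sketch, with hypothetical interfaces rather than Ray's actual code: the clinical layer reasons about content, while the relational layer decides whether that reasoning may run at all.

```typescript
// Illustrative sketch. The clinical layer recognises patterns; the
// Te Ao Māori layer governs sequencing and data decisions, and the
// coaching loop cannot bypass it.

interface ClinicalLayer {
  recognisePattern(transcript: string[]): string; // e.g. bid-turning, over-functioning
}

interface RelationalLayer {
  sequenceAllowed(state: { regulated: boolean }): boolean; // State Before Story
  retainData(): false; // Mana Motuhake: structurally never true
}

function coach(
  clinical: ClinicalLayer,
  relational: RelationalLayer,
  state: { regulated: boolean },
  transcript: string[]
): string {
  if (!relational.sequenceAllowed(state)) {
    return "grounding"; // the relational layer outranks the clinical one
  }
  return clinical.recognisePattern(transcript);
}
```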
The Tapu/Noa Paradox
The Culture Meets AI wānanga (a gathering for deep learning and knowledge sharing) surfaced a paradox that no amount of better code can resolve: AI is simultaneously culturally dangerous and culturally healing. This is the tapu/noa paradox in digital form.
Tapu and noa are not simply sacred and ordinary. In te ao Māori, as Marsden (1992) grounds in Māori cosmology, tapu represents the state of being under spiritual restriction, charged with potentiality, requiring protection and careful handling. Noa is the state of release from that restriction: ordinary, accessible, safe for general interaction. The paradox is structural. AI systems, in processing Indigenous stories, relationship disclosures, and cultural knowledge, are handling data that is tapu: charged, relational, belonging to whakapapa. Yet the infrastructure itself is noa: commercial, public, extractive by design.
The system is simultaneously a holder of tapu content and an agent of noa-isation.
Existing Māori frameworks already operationalise this tension. Taiuru's (2020) Te Tiriti-Based AI Ethical Principles include a Tikanga Test Framework that requires evaluation of the Tapu Aspect of any technology before deployment, asking what within this interaction is sacred and restricted, and what governance structure protects it. Hudson's (2020) Ngā Tikanga Paihere data ethics framework takes it further: tapu and noa become operational decision-making tools for determining when data can be shared, who can access it, and what protections apply at every layer of the system.
This paradox cannot be resolved by better code. It can only be managed by what the wānanga participants themselves named: the Ope/Ohu, a collective of kaumātua, kuia, and community leaders who set the macro boundaries of what is tapu and what is noa for their community, and who monitor the permeability of those boundaries over time. The Autonomous Individual retains ultimate authority over their own disclosures (Mana Motuhake). The Developer operates as Conduit: a technical servant taking direction from community, not extracting from it. Douglas (1966) observes that the sacred is protected by boundaries, and that those boundaries require human guardians. In te ao Māori, those guardians are named. The Kaupapa Charter developed in this research (see Build Code Practice) is one builder's attempt to make those obligations operational, a living document that names what is tapu in the architecture and who is responsible for tending it.
Trust in AI is not a technical benchmark. It is a relational outcome.
Closing: Contributions and the Future Path
This research contributes three things:
The Human Proxy Theory. AI trust is borrowed currency. It requires visible, accountable human relationships, not better interfaces. The vā must exist before the first session begins.
Values in Architecture Standard. The 30/70 finding proves that structural embedding of cultural values in logic produces felt safety that decorative language cannot (see Appendix A4 for the full values-to-architecture map). This is the cross-cutting finding across all four builds: when the value is in the architecture, participants feel it without naming it.
The Equity-Safety Paradox. Safe AI for vulnerable communities is computationally expensive. Until the cost of ethical reasoning is addressed structurally (through sovereign infrastructure, community-governed models, and a shift from FAIR to CARE principles) ethical AI will remain a luxury good inaccessible to those who most need it.
Future researchers and builders need to look beyond privacy policies toward sovereign infrastructure: community-governed reasoning engines that operate entirely outside extractive commercial defaults. The path forward is not found in the machine. It is found in the vā between those who build and those who trust.
What I Learned
Storyteller to Architect.
"I hadn't factored myself into the equation at all." — Recording on the Ethical Architect shift
Reflexive Mapping: Evidence of Learning
This is the reflexive account of my 18-month journey through the Master of Technological Futures. I map it against two Learning Outcomes, with evidence drawn from dated study group transcripts, build decisions, and pilot data.
For the full 18-month timeline, including the personal and professional context running alongside the research, see Researcher Context.
LO1 — Knowledge and Understanding
Learning Outcome 1: Systematically plan, execute, complete and evaluate a work-based research project resulting in a substantial and original contribution to knowledge as evidenced by significant benefits for the profession.
What I knew at the start, and what I know now
At the beginning of this research, my understanding of Māori and Pasifika values in digital design was sincere but decorative. I framed "cultural safety" as a checklist of features to be added onto a system: te reo greetings, Pacific imagery, a cultural advisor's name in the acknowledgements. My early work on digital feedback systems was focused on verification and efficiency. I thought the problem was access to hardware. I did not yet understand that the deeper problem was modality.
By September 2025, I was catching myself: "I was picking things that resonated, that I liked, but actually now that I'm at the start of building something, I'm like, no. I need to actually really choose the things that are right for me in this instance." That admission, recorded in a study group session on September 18, 2025, marks the shift from performing research to doing it.
But the learning had started earlier than I recognised at the time. In my first two assessments (a detective-style podcast investigating the sleep epidemic and a 100-page storybook about my family's encounter with AI) I was already discovering that I learn through building, not absorbing. At the Creative Technology and Transformative Storytelling Symposium in November 2025, I presented this on stage: "Every gap in my knowledge became a plot hole I couldn't ignore. The rigor wasn't despite the fantasy, it was because of it." Learning through making would become the engine of everything I built next. I just didn't have the language for it yet.
I know now that ethical AI design for our communities is not about features. It is about architecture. The specific knowledge this research produced sits across three original contributions:
- Human Proxy Theory. I did not arrive at this theory by reading. I arrived at it when participants in the Culture Meets AI wānanga told me directly: "I trust the person behind it." I had been building trust into the AI's tone and response logic for months. The participants were trusting me. The gap between those two things (between what I thought I was building and what was actually happening) is the theory.
- Values in Architecture Standard. The 30/70 finding came from a specific frustration in the Project Rise analysis: participants who couldn't name a single Māori value had praised, in detail, every behaviour those values governed. I did not plan to find this. I found it because I was paying attention to the wrong thing, the labels, and the data kept pointing to the logic. That dissonance is the finding. The Build Code Practice artefact documents how I made this principle explicit and transferable.
- The Equity-Safety Paradox. This one cost me something to find. When I switched from Claude Sonnet 4.5 to Gemini Flash mid-pilot to manage costs, Insight scores dropped from 4.9 to 3.1. The AI became a stochastic parrot. Participants noticed. I had built a system I was proud of and then, for budget reasons, made it less safe. Sitting with that, naming it as a paradox rather than a private technical failure, was uncomfortable. But it was honest.
In a consultation with Dr Karaitiana Taiuru on 31 March 2026, three weeks before submission, I presented the findings from the Culture Meets AI wānanga. Taiuru identified the preservation paradox and the finding that tapu is not static as new thinking in the field: "That whole conversation about what was tapu, I think that's 100% new, and I haven't heard that conversation before." His response to the cost-of-not-sharing finding was equally direct: "I really like that, the whole notion of, you know, if you don't share it, you can lose it. I just hadn't really thought too much about that before." These are not findings I could have arrived at through literature alone. They came from the room.
Where the knowledge has limits
I cannot claim this research is a universal blueprint. The Pasifika sample in the Ray pilot was small. The Human Proxy Theory remains untested in cold interactions where no pre-existing relationship exists between builder and user. The loss of sovereignty over my own voice clone to ElevenLabs' public library proves that true Mana Motuhake cannot be fully achieved while we remain digital tenants on third-party infrastructure.
These limits are not failures. They are the structural boundaries of current Indigenous AI practice, and the starting point for what needs to be built next.
LO2 — Applied Practice
Learning Outcome 2: Through engagement with a significant work-based project and reflective practice extend knowledge and skills developing personal potential and professional competencies in order to be effective in changing complex work environments.
The arc: From Storyteller to Overwhelmed Academic to Obsessive Builder to Ethical Architect
I did not evolve gradually. I shifted at specific moments, in response to specific data. But the arc begins earlier than the research itself.
Phase 0 — The Storyteller's Runway (October 2024 to July 2025)
Before I became a builder, I was a storyteller trying to find my project. I arrived at orientation on October 31, 2024, having crammed all the pre-reading the night before. We didn't need it. I wrote down my why that day: "Family. I want to be a good example for my kids. I want to reconnect with my creativity and find more joy." I had no project idea. I spent the first months feeling sad about that, watching peers who seemed to already know what they wanted to research while I was still trying to figure out if I could afford to be there.

What I did have was a creative methodology I didn't yet know was a methodology. My first assessment was a detective-style podcast about the sleep epidemic. My second was a 100-page illustrated storybook about my family navigating digital exclusion, using AI image generation and the Tafatolu framework to hold my positionality accountable. My third was a 140-page fantasy epic called The Infinite Spiral, which functioned as a fully mapped research proposal disguised as world-building. I spent 60+ hours on each of these. A peer told me I was a genius. I thought I was a bit mad.

But here's what those outputs taught me: traditional writing lets you be vague. World-building doesn't. To design the kingdom's feedback systems in The Infinite Spiral, I had to understand systems thinking deeply enough for it to function in a story. I couldn't just reference Dr Karaitiana Taiuru's data sovereignty work. I had to understand it so well that cultural governance could structure an entire society. Every gap in my knowledge broke the narrative.

By July 2025, I had presented at the Converge Symposium, a Pecha Kucha pitching Project Rise in its original form: a culturally governed feedback system, the McFlurry moment, extraction vs reciprocity, Trurivu. That version of the project no longer exists. The entire research question changed. But the capability I built in that storytelling phase (the ability to make complex systems legible through narrative) became the backbone of every build that followed. I just didn't call it a methodology yet.

In November 2025, I presented at the Creative Technology and Transformative Storytelling Symposium, where I said the thing I'd been circling around for months: "The education happened through the making. I learned about AI bias by confronting it. I reconnected with my culture by actively seeking frameworks from my culture. I understood academic rigor by having to prove it differently." That became the thesis in miniature. The builds were the thinking. The visual DNA of those early worlds, the atmospheric skies and family avatars, is recycled throughout this final microsite, serving as a visual whakapapa of the inquiry.
Phase 1 — Overwhelmed Academic (August to October 2025)
The catalyst for leaving this phase was the Project Rise voice finding. On August 7, 2025, I caught myself rushing toward a solution before I understood the problem: "It's having space for the research to lead somewhere... wanting to create a solution, but actually going, well, you don't really know the real problem until you've done all your research."

But the overwhelm wasn't just intellectual. On August 20, I spent an entire afternoon trying to run AI stakeholder testing: 24 avatars, 19 questions, across Claude, Gemini, Perplexity, and ChatGPT. Claude refused to run my prompts because I'd talked so much about bias that it thought I was conducting a bias test. Gemini said my request was too large and demanded I paste questions one at a time. I had to completely redesign my approach three times in a single afternoon. By 8pm, I'd finished all 24 profiles, my Claude credits had run out, and I sat there thinking: any data is good data, as long as I have the lens to read it.

That same week, I had my first real technical pivot. On August 24, after weeks of trying to recruit Māori and Pasifika participants for face-to-face sessions that nobody was coming to, I stopped trying to build a better form and asked: "Hey, why don't I just try an AI Agent?" The next day I was in Vercel, building a Next.js app with a Supabase database, trying to get it published to localhost:3000, hours of back-and-forth for something that should have taken minutes. I upgraded my ElevenLabs account to clone my voice. Played it to my husband. He said "holy crap." Then I tried to connect it to VAPI and discovered I'd need $50 USD/month for VAPI, $22 USD/month for ElevenLabs, and a paid OpenAI subscription, all for a system that could only handle 200 short conversations. I scaled back. Again. But the instinct to use voice, my voice, never left.

When 68% of Project Rise participants chose to speak to a voice agent over a text form, describing it as "thoughtful, unbiased, and safer than a human," I realised a low-fidelity academic mockup would not be enough to test what was actually happening. The assumption I had built my whole research design on (that people would resist AI) was wrong. The data broke the phase.

Meanwhile, Lee and I went to Whakatāne for a wānanga with her network. I had planned to ask about feedback systems, but when I saw the timeframe, I pivoted to rongomātau, Dr Kiri Dell's concept of spiritual sensing and knowing. Rather than extracting data, I watched what happened when people felt safe. I saw the tuakana-teina pairing work in real time: the oldest auntie who arrived saying "Why did I come?" paired with a younger academic, and by the end she was transformed. I saw people share trauma about being shamed for their te reo on marae. I watched the room shift from surface politeness to genuine depth when we stopped leading and just created the container. That sensing became data. It taught me that the method for my own research was already in front of me: build the space, then pay attention to what happens inside it.
Phase 2 — Obsessive Builder (October 2025 to January 2026)
I taught myself React, Next.js, Supabase, and LLM configuration. I built the Leadership AI Coach and Culture Meets AI, and launched Ray. I was coding to know. I also built Spoken Legacy with my son Tai, YourHQ as a websites-as-a-subscription business, and Signal, an email marketing platform I coded from scratch based on a book I'd read. I wasn't just building for the research. I was building to prove to myself I could.

During this phase I was sitting in a study group going: "I am just following steps given to me by AI... looking at the code, it still doesn't really mean much to me... And then I'm sitting there thinking, oh my God, I could have just made a form. Why am I doing this to myself?" And then, in the same breath: "Oh wait, now it works. And now I understand why." The building was the learning. Even when it was painful.

The shift out of this phase was forced by the January 15, 2026 Christmas-break realisation that my research question had changed entirely. I had started asking about feedback systems. I was now asking about the ethics of AI companions. Felix confirmed what the builds had already been telling me: going deeper on technical backends would be redundant. The research question I actually cared about was the one I hadn't yet written down.
Phase 3 — Ethical Architect (February 2026 to present)
On February 12, 2026, day one of the Ray pilot, I had a complete freakout. Users were messaging me to say they were crying. The theoretical ethics of my project became an urgent, real-world responsibility in a single afternoon. I processed it live: "It wasn't until today that I was like, sh!t, I actually should have thought about myself for this equation, because I hadn't at all... just the idea that I could have built some technology that could potentially harm somebody is what really got me."

I sought clinical oversight. I called a counsellor. I tried to reach a psychologist and a mental health doctoral professional. Only the counsellor responded, which became a secondary finding in itself: how few clinical professionals are equipped to discuss AI and vulnerable human interactions.

This phase was the birth of the Human Proxy Theory, the realisation that my participants trusted the AI only because they trusted me. It permanently revised my ambitions: the commercial product became a cautionary case study. That is what the evidence required. I ended this journey as an Ethical Architect who understands that every line of code carries a safety opinion. For the dated evidence of this shift, see Appendix C3.
3. The Visual Timeline: A Braid of Three Strands
The visual timeline (accessible on the Researcher Context page of this site) is a braid of three equal swim lanes: Research Milestones (pilots, wānanga, submissions), Personal Life (triplets, pneumonia, sleep deprivation, homeschooling, ADHD assessment, house acquisition by NZTA), and Business Development (YourHQ, Leadership AI Coach, Spoken Legacy, Signal, Reset HQ, Vox, Coach Strong, the Māori governance startup, and Ray as a DreamStorm product).
I include all three lanes to orient the reader: the safety decisions in my code were not made in a vacuum. They were forged in the 2:00 AM reality of a high-load life, where technology must either protect the user's energy or get out of the way. To understand why Ray's State Before Story protocol exists, you need to see the swim lane where a researcher was also a mother of five, running multiple businesses, homeschooling a daughter with dyslexia, and building ethical AI simultaneously. The architecture reflects the life.
The financial context matters too. I started this Master's with no income, having left a corporate role that was destroying me. Our family survived on one salary the entire time. The silver lining I never saw coming was this: "The forced change of having the triplets and having to go down to one income gave me the opportunity to actually study. I probably never would have done it. And if I had, I would have done the Masters on top of work and kids and it would never have been as rich as it has been." I'm not designing from desperation. I'm designing from enoughness: "all bets are off. I literally can design my future work."
4. What This Positionality Produced
My positionality as a Moana Tiriti builder-researcher directly shaped these research outcomes:
- Voice-First as Accessibility. My cognitive friction as a speaker rather than a typer led directly to the seven-second vs. eighteen-minute engagement finding. I built a bridge I personally needed and discovered it was a bridge my communities needed too.
- Refusal of Cultural Performance. My disconnection from my Samoan heritage made me fiercely protective of te reo Māori. This produced the silence over butchery rule in Project Rise: removing the language from the voice agent entirely rather than allowing the technology to mispronounce it (Safe Conversational AI). See Build Code Practice for the cultural integrity clause this decision traces to.
- Learning Through Making. Before the AI builds existed, I had already proved this principle through three creative assessments. The detective podcast, the family storybook, the fantasy epic. These were not tangents. They were the methodology in its first form. By the time I was building conversational AI, I already knew: "The education happened through the making." The symposium and showcase presentations are the public record.
- Rongomātau as Methodology. Dell (2021) theorises rongomātau ("sensing the knowing") as an Indigenous methodology that positions the researcher as someone capable of feeling, receiving, and imprinting the energetic lives and emotions of participants. Dell structures it as a three-phase practice: Connecting In (scanning your own bodily responses to participant data), Connecting Out (checking emerging conclusions with elders and healers), and Connecting to the Whole (locating embodied findings within broader spiritual and ancestral context). I applied this framework first at the Whakatāne wānanga, reading the energy of the room, sensing when people shifted from surface politeness to genuine depth, noticing who needed support and who was ready to take risks independently. I then carried it into the Ray pilot, where I treated the emotional register of sessions as data rather than noise. Dell's methodology gave me the theoretical grounding to claim that practice as rigorous rather than just intuitive.
- Messy Middle Architecture. Living a high-load life led to the Leadership AI Coach 2AM usage finding. Because I built for a person in a social threat state, the AI functioned as a nervous system regulator rather than a productivity tool. I understood that user intuitively because I was her.
- Human Proxy Theory. My existing trust networks allowed for a depth of disclosure (grief, relationship conflict, cultural shame) that cold recruitment could never have achieved. This produced the theoretical contribution that AI trust is borrowed currency from the human behind the machine.
- Visual Whakapapa: Recycling the Mauri. The aesthetic of this microsite is not a new coat of paint; it is a layering of every creative act that preceded it. I made a deliberate choice to re-use the AI-generated images from my first three assessments—the Sleep Detective Podcast, the Family Storybook, and the fantasy epic The Infinite Spiral—as the backgrounds for this report. In a research project exploring data sovereignty, the most ethical choice was to honour the energy already spent rather than extract new tokens. This is Utu Tūturu (reciprocity) with my own process. The visuals are the evidence: the "Ethical Architect" who built Ray only exists because of the "Storyteller" who built those worlds first.
5. A Note to My Participants
To those who shared your secrets, your grief, and your whakamā with my agents: thank you.
When you heard my voice coming from the machine, you were trusting the vā we have built over years. In some cases, over decades. Your 11.6 hours of kōrero, your wānanga contributions, your willingness to cry in a conversation with an AI in the middle of a weekday morning: these have changed the build code for conversational AI in Aotearoa.
This research does not end with my thesis. It continues in my NO clauses, the things I now refuse to build, regardless of commercial pressure. You are the reason I am no longer an academic looking for data, but an architect protecting mana.
You were never subjects. You were the whole point.
How I Used AI
I have left this reflection to last because I have been using AI throughout my entire final submission. When they said "reflect on where you've used AI," I was sitting there thinking: where haven't I?
At the beginning of this Master's, I was terrified of getting into trouble for using AI. I was worried about the AI score that would come up when we submitted to Turnitin. Eighteen months later, I am using AI for almost everything I do, and I believe I have developed the judgment to know when it supports my learning and where it might get in the way. What I do know is this: you get out what you put in. For me, the learning has happened through iterations, testing, reading, changing, and playing with things across multiple AI tools.
Why I used AI
This project would not exist in its current form without AI. Not just because the research subject is conversational AI, but because AI tools allowed me to analyse more data, build more sophisticated prototypes, and produce richer outputs than I could have achieved manually.
I have been the person writing everything into spreadsheets and trying to analyse it myself. I know what I am comparing to. AI didn't replace that work. It made more of it possible. I was able to collect 620+ conversations across four builds, run thematic analysis across hundreds of transcripts, build four full-stack applications, and produce a multimedia research site, as a solo researcher, a student on a $20/month subscription, and a mother of triplets who don't sleep through the night.
There is also an equity side to this that I need to name. The tools I used are overwhelmingly US-based, trained on data sets that don't centre Aotearoa, and priced for markets with higher incomes than mine. I have been hampered throughout by what I can afford. My Claude subscription gives me credits that run out within an hour. My laptop hard drive is constantly full. I cannot afford the tools that would better protect my data. I used the best I could with what I had, and that is not perfect. But it speaks directly to the Equity-Safety Paradox that this research identifies: the communities who most need ethical AI are the least resourced to access it. I am living proof of that finding.
The Research Tech Stack
Primary Partner Claude (Anthropic)
My primary AI partner. Used for writing copy, structuring artefacts, and iterating drafts.
I used Claude for writing copy, structuring artefacts, de-duplicating across documents, building system prompts, and iterating drafts. I set up project folders in Claude for each build and for the final submission, with all relevant files uploaded so I wasn't constantly re-uploading context.
Claude is the tool that best matches how I think and write. I cancelled my ChatGPT subscription and never looked back. The biggest limitation has been the token limit on my $20/month plan. I use my credits within an hour and have to come back four hours later.
Code Fixer Claude Code (VS Code)
Used for debugging and fixing code in my application builds. This changed my workflow dramatically.
Building the Leadership AI Coach from scratch took an estimated 130 hours, mostly troubleshooting code manually. With Claude Code, the same class of problems could be resolved in minutes.
But weekly credits would run out within two or three days, forcing me back to manual troubleshooting, which honestly kept me learning the code rather than outsourcing it entirely.
Code Generation Google Gemini
My heavy-lifting code generation tool and UI/UX design partner.
I used Gemini for building out applications because I didn't want to waste Claude credits on code generation. Gemini also produces better visual design and more aesthetically pleasing interfaces.
When I needed to query across multiple NotebookLM notebooks, I would use Gemini because it can select and query across them.
Synthesis Engine NotebookLM
My primary data analysis and synthesis engine. I set up separate notebooks organised by purpose.
I set up separate notebooks organised by purpose: one for frameworks and sources, one for reference material, one for each artefact, one for transcripts, and one for study groups. The Ray notebook alone has 73 sources.
The separation was deliberate. It forced me to think carefully about what data belonged where, which eventually became the vulnerability progression framework at the heart of my methodology.
A limitation: you cannot share documents across notebooks. You have to re-upload everything, which is tedious but forced further organisational discipline.
Accessibility Wispr Flow
Voice-to-text transcription tool that changed my life. I use it for everything.
I found it in July 2025 and it became central to how I work. I use it for writing prompts, responding to Claude's questions, drafting reflections, capturing ideas.
Because I am a speaker, not a typer, Wispr Flow removed the translation friction that my research identifies as a barrier for oral-first communities. The tool that made my research process accessible is a direct demonstration of the finding.
I give Claude very long, detailed voice-transcribed prompts, and the quality of what comes back is directly related to the richness of what I put in.
Build Medium ElevenLabs
Voice synthesis and conversational state management.
This is where the agents lived: the platform that participants actually interacted with. Not a writing tool, but the build medium for the entire research.
Perplexity (Pro)
Research tool for finding statistics, quality sources, and verified information. Used from early literature searching to late-stage fact-checking. When Claude credits ran out, Perplexity was my backup.
Storm Genie
A Stanford tool used at the very beginning of the project to generate research papers and source recommendations when I was first learning about AI.
Perplexity Comet AI Assistant
My daily browser. I use the built-in AI assistant for navigating unfamiliar interfaces, finding buttons, and quick questions while working.
How my use evolved over time
My prompting has changed dramatically over 18 months. Early prompts were transactional: "Help me write a survey about feedback." By the end, they were systemic, relational, and rich with context: "Act as a regulated mirror. Check state before story. Use the whare tapa whā walls to assess the user's walls." That evolution tracks the entire research journey, from functional problem-solving to relational and ethical design.
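To show the shift side by side, here is a minimal sketch built from the two prompts quoted above. The early prompt is verbatim; the late prompt is reconstructed from fragments, so treat it as illustrative rather than the production system prompt (the full Ray prompt evolution is traced in Appendix B3).

```typescript
// A side-by-side sketch of the shift described above. The early prompt is
// quoted verbatim; the late prompt is reconstructed from fragments and is
// illustrative, not the production system prompt.

// Phase 1, transactional: one task, no context, no relationship.
const earlyPrompt = "Help me write a survey about feedback.";

// Phase 3, systemic and relational: a role, a sequencing rule, and a
// cultural framework that governs the logic rather than decorating it.
const latePrompt = [
  "Act as a regulated mirror.",
  "Check state before story: assess how the user is arriving",
  "before engaging with what they bring.",
  "Use the whare tapa whā walls to assess the user's walls.",
].join("\n");
```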
I credit the improvement to two things: understanding more about how prompting works, and using Wispr Flow, which lets me talk naturally and give very detailed context. Claude responds well to how I prompt. Most of the time I get outputs that are close to what I wanted. When I don't, I record myself talking through what's wrong and feed that back.
My drafting process evolved into a consistent workflow:
- Ingest. Data goes into NotebookLM first, organised by build, artefact, or purpose.
- Analyse. I query across sources to generate initial analysis reports.
- First draft. Claude works from the analysis, my notes, and project files.
- Gap identification. I ask Claude to measure the draft against my learning agreement, marking criteria, and what I said I would do.
- Voice-recorded gap-filling. I use Wispr Flow to talk through every question Claude raised.
- Iteration. Typically five rounds, each one tighter, more precise, more mine.
- Final manual pass. Reading line by line, catching the small things AI changed that weren't right.
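Written as data rather than prose, the same workflow looks roughly like this. Nothing here ran as code; the stages were manual, and the names and types are invented for illustration.

```typescript
// The drafting workflow as a schematic. Every stage was manual; the names
// and types below are invented for illustration only.

type Stage = {
  step: string;
  tool: "NotebookLM" | "Claude" | "Wispr Flow" | "manual";
  output: string;
};

const draftingWorkflow: Stage[] = [
  { step: "ingest", tool: "NotebookLM", output: "sources organised by build, artefact, or purpose" },
  { step: "analyse", tool: "NotebookLM", output: "cross-source analysis reports" },
  { step: "draft", tool: "Claude", output: "first draft from analysis, notes, and project files" },
  { step: "gap check", tool: "Claude", output: "gaps against the learning agreement and marking criteria" },
  { step: "gap fill", tool: "Wispr Flow", output: "voice-recorded answers to every question raised" },
  { step: "iterate", tool: "Claude", output: "around five rounds, each tighter and more mine" },
  { step: "final pass", tool: "manual", output: "line-by-line read catching what the AI changed" },
];
```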
The first round usually looks like AI-generated content. Factually correct but tonally flat. By the fifth round, you would not be sure whether I used AI or not. I did. But every idea, every framework, every design decision, and every ethical judgment is mine. The AI structured and articulated. I directed and decided.
Where AI got it wrong
There were things AI consistently struggled with that I had to catch manually.
- Voice drift. Every AI tool defaults toward academic performance: hedging, passive voice, over-explanation. I built a voice guardian skill in Claude specifically to catch this, but it still required constant vigilance (a sketch of these checks follows this list). My process for correcting it: read the draft, record myself saying "No, that's not right, this is how I'd actually say it," and feed the correction back through Wispr Flow.
- Cultural framing. AI defaults to Western analytical hierarchies. It wants to frame Māori concepts as objects of analysis rather than the analytical lens itself. Every draft required checking that Indigenous frameworks were leading, not following. This never fully resolved. It required manual correction on every pass.
- Fabricated data. One specific example: across multiple drafts, AI consistently included a claim that 60% of Leadership AI Coach users deleted their session records immediately after use, and that this happened between 11 PM and 3 AM. The 45% Incognito Mode activation and the late-night usage pattern were real, but the deletion statistic was fabricated by the AI at some point in an early draft and then persisted through subsequent versions because each draft built on the last. I caught it during my final manual pass. The iteration process is the quality control.
- Over-translation of te reo. AI consistently wanted to add bracketed English translations after every Māori term, treating te reo as a foreign language requiring explanation. My voice guardian rules explicitly prevent this, but it kept recurring.
- Subtle meaning shifts. Across five drafts, small changes would accumulate (a word here, a reframing there) until, by draft five, the meaning of something I'd said had shifted. The final pass was always about catching these and restoring what I actually meant.
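As promised above, here is a sketch of those checks in one place, whether they lived in the voice guardian skill or in my final manual pass. The real guardian is natural-language instructions inside Claude; this structured form is hypothetical and only shows the shape of the checks.

```typescript
// Hypothetical structured form of my correction checks. The real voice
// guardian is natural-language instructions in Claude; this only sketches
// the kinds of rules it carries.

type CorrectionRule = {
  id: string;
  detect: string;      // what to flag in a draft
  correction: string;  // what the fix looks like
};

const correctionRules: CorrectionRule[] = [
  {
    id: "voice-drift",
    detect: "hedging, passive voice, over-explanation",
    correction: "restore the first-person, spoken register I actually use",
  },
  {
    id: "cultural-framing",
    detect: "Māori concepts framed as objects of analysis",
    correction: "Indigenous frameworks lead; they are the lens, not the data",
  },
  {
    id: "fabricated-data",
    detect: "a statistic with no trace back to the source data",
    correction: "verify every number against the data; delete what cannot be traced",
  },
  {
    id: "over-translation",
    detect: "bracketed English gloss added after a te reo term",
    correction: "remove the translation; te reo stands on its own",
  },
  {
    id: "meaning-shift",
    detect: "small rewordings accumulating across iteration rounds",
    correction: "final manual pass restores what I actually said",
  },
];
```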
What I learned
- The learning is in the iterations, not the generation. My peer described writing everything from scratch and then using AI to revise. I work in the opposite direction. AI helps me generate the first draft from my data and analysis, and then I edit, edit, edit until it says exactly what I mean. For me, the learning happens when I'm reading a draft and thinking "No, that's not what I mean," because that forces me to clarify what I do mean. That clarity is mine. The AI surfaced the question. I answered it.
- AI made me more critical, not less. Because I knew the AI could fabricate, flatten, or drift, I developed a rigour about checking everything that I might not have applied to my own writing. It works because you can never fully trust it, so you check everything.
- The equity barrier is real. I have been constrained throughout by what I can afford. A $20/month Claude subscription. No budget for sovereign infrastructure. A laptop that can barely hold the files. The irony of researching the Equity-Safety Paradox while living it has not escaped me. If I had access to the tools that well-funded researchers use, this project would have been produced faster and with less friction. But I am not sure it would have been better. The constraints forced creativity, and the workarounds became findings.
- AI cannot hold cultural judgment. This is the most important learning. AI can structure, synthesise, and articulate. It cannot decide whether a quote is safe to use, whether a cultural framing honours the community it comes from, or whether a finding has been earned by the evidence. Those decisions remained mine throughout. And they should.
- I am a speaker. Wispr Flow didn't just make my process faster. It made it possible. The translation friction I describe in this research, the cognitive cost of converting fluid thought into typed text, is my own experience. Using voice-to-text to write a thesis about voice-to-text is the methodology in practice.
Appendix evidence
Representative examples of AI iteration are available in the appendices:
- Ray system prompt evolution (V1 clinical protocol → V2 bicultural coach → V3 "the yarn" → pilot-ready prompt) demonstrates prompt iteration across four major versions, with specific language changes traced to design decisions. See Appendix B3.
- Artefact drafting progression — representative example showing Draft V1 (AI-generated from NotebookLM analysis) → Draft V3 (restructured, de-duplicated) → Draft V5 (final voice-corrected version). Available on request.
- NotebookLM notebook structure — screenshot showing the organisational architecture across 8+ notebooks and 200+ sources. Available on request.
Note: This reflection was itself drafted using Claude, from a voice transcript recorded through Wispr Flow. The transcript was approximately 4,000 words of me talking through the questions. Claude structured it. I edited it across two rounds. The ideas, experiences, and judgments are mine. The organisation is collaborative. That is how I work.
What Comes Next
This research began with a storyteller who didn't have a project idea and ended with an ethical architect who has a theory about trust, the empirical evidence to back it, and the refusals necessary to protect the people it serves.
The Human Proxy Theory (that AI does not create vā, it borrows it from the human accountability structures surrounding it) was not in my original learning agreement. It emerged from the building. From a participant crying through a weekday morning session. From a crisis flag firing in the middle of a pilot I wasn't sure was safe enough to run. From a wānanga where participants held a paradox together without resolving it, and proved that the holding was the finding.
Three contributions stand:
Values in Architecture. Cultural safety is not produced by te reo labels on a Western reasoning engine. It is produced when the value governs the logic: the prompt, the database schema, the data design. The 30/70 finding proves this. Participants felt the values without naming them because the values were structural, not decorative.
The Equity-Safety Paradox. Safe AI for vulnerable communities is expensive. When the budget ran out and the model was downgraded, Insight scores fell from 4.9 to 3.1. The communities who most need high-quality relational AI are the least resourced to demand it. This is not a technical problem. It is a political one.
The Human Proxy Theory. Trust in AI is borrowed currency. The question is never "is this AI trustworthy?" It is "is the human behind this AI accountable, visible, and known to this community?"
For practitioners building in this space: the implication is direct. Before you write a system prompt, before you choose a model, before you name a value in your documentation, ask whether the human accountability structure exists to make that value real. If it doesn't, the technology cannot hold it for you.
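To make that concrete for builders, here is a minimal and entirely hypothetical sketch of what "the value governs the logic" can look like. None of this is Ray's actual code; the types, fields, and check are invented to illustrate the principle.

```typescript
// Entirely hypothetical: invented types and checks illustrating values held
// structurally rather than decoratively. This is not Ray's actual schema or
// deployment logic.

// Value: participants own their words. Deletion and anonymity are encoded
// in the data design itself, not offered as an afterthought.
type SessionRecord = {
  participantId: string | null; // null when Incognito Mode is on
  transcript: string;
  deletableBy: "participant";   // the schema names who holds the right
};

// Value: trust is borrowed from a human (the Human Proxy Theory). The agent
// cannot run unless a named, reachable person stands behind it.
type AccountabilityRecord = {
  humanName: string;        // visible and known to the community
  contact: string;
  crisisEscalation: string; // where a crisis flag actually goes
};

function canDeploy(accountability: AccountabilityRecord | null): boolean {
  // No accountable human, no agent: the refusal lives in the logic,
  // not in the documentation.
  return accountability !== null;
}
```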
The method mattered as much as the findings. I learned through making. First through fantasy epics and storybooks that forced coherence, then through conversational AI that forced accountability. Every gap in my knowledge became either a plot hole or a safety failure. Both demanded I fill them before I could move on. That principle (build it to know it) carried from my first assessment to my last build, and every pivot between was processed live in the study group recordings that form the reflexive spine of this work.
What comes next is not a better bot. It is sovereign infrastructure: community-governed reasoning engines that operate outside extractive commercial defaults. It is the three years of relational groundwork that genuine co-design requires. It is Māori and Pasifika communities building their own tables, not just sitting at someone else's. For me personally, it is the work of reconnecting with Samoan community before the next build begins. Not alongside it.
The vā between builder and community is where ethical technology lives.
Tend the vā before you write a single line of code.
Acknowledgements
This research was made possible by a group of people who gave their time, their trust, their knowledge, and in many cases, their vulnerability. I am grateful to all of them.
To My Whānau
Lyall
The Kids
Max, River & Rome
Mum & Dad
The biggest thank you belongs to my husband, Lyall. You gave me the time and space to do this. You held the house together during hospital stays, late nights, and the many evenings I was somewhere else entirely in my head. You have seen a change in me across this time, and I am glad. This was not a small sacrifice and I do not take it lightly.
You encouraged me to study and change careers simply because you want me to be happy. You asked me questions about how things were going even when you didn't really understand what I was trying to do. You kept working the long hours so I could keep going. We had the triplets when I started, and there will be four soon. The sacrifices have been real, and I owe you more than this page can hold. Without you, this would not have been possible. Thank you.
(You also want me to become rich so you can retire, which is completely fine. I hope that happens too.)
To my children, my older kids who felt my absence, and my triplets who grew from babies into people while I was doing my mahi on the computer. You are every reason I wanted to build things that were safe, that protected dignity, that left something worth inheriting. I hope this inspires you one day.
To my mum and dad, without you, none of this exists. Dad, you have been on my mind throughout this entire journey, especially in the moments where I have been reaching back toward our Samoan heritage. You probably wouldn't have understood a word of the research, but I know you would have been proud.
To AcademyEX
Francis, your quarterly talks and opening keynote shaped the researcher I aspired to become. The way you operate at the intersection of bold thinking and deep humanity is something I carry with me. Thank you for showing me what is possible.
Sarah, my first academic advisor, who helped me understand the problems I was actually trying to solve before I knew what they were. Your guidance in those early weeks mattered more than you know.
Felix Scholz, you took on a little orphan who needed an advisor and became exactly what I needed. You never told me to stop building. You understood what I was trying to become, not just as a researcher, but as a founder, and you made space for both. I hope you're not getting rid of me too soon.
Laurent, your talks gave me permission to be bold. You inspired me to deliver something different, something outside the traditional academic report that simply is not how my brain works. Thank you for validating that.
Sarah (programme kaitiaki), for holding the thread of our cohort journey and keeping us all moving forward. Paula, for your academic support especially in those first twenty weeks, and for being a steady hand when the mountain felt impossibly steep.
Evie, the entrepreneurship micro-credential you facilitated was the reason I signed up for this programme. Within one week, I knew. Fiona, for the interview that sent me on this path, and for walking your own journey ahead of us so we could see what was possible.
To Huw for a hui that helped ground the cultural dimensions of this work at a critical moment, and to all the knowledge-holders and guest speakers who came in and shared their thinking across eighteen months: thank you.
To Lee, Co-Researcher and Friend
We have known each other a long time. We have worked together, reconnected, and begun the same Master's programme.
I travelled to Whakatāne with you and your whānau opened their ancestral home to me. You took me to your marae. I facilitated your wānanga, you peer-reviewed my Ray Pilot. We had extended Google Meets whenever we needed to think something through. You gave me cultural feedback that changed my code. You named things I hadn't seen. You challenged me when I needed to be challenged and held space when I needed that instead.
I have watched you grow in confidence and belief across this time and it has been one of the great privileges of this journey. Though our structured interactions have ended, it's just the beginning of many shared kaupapa. Any reason to book the Cordis for a mini holiday, am I right? Thank you for being the best admin a friend could have. Here if you need.
To My Study Group
Across sixteen huddles, a small group of us showed up for each other week after week. Not to impress our advisors, but to think out loud, admit what wasn't working, and celebrate what was. Coming to these huddles was the only reflexive practice I consistently kept, and it was the one that mattered most.
Chris and Pauline, you came to nine out of sixteen. That consistency meant everything, especially in the lonely stretches. Nadine and Rob, eight sessions of wisdom, challenge, and humanity. Lee, seven sessions, plus everything else. Shourjo, Kat, Michael, Taylor, Edson, Rebecca, Aditya, Summer, Kelly, thank you for every session you gave.
I set this group up for myself because I knew I needed accountability. What I got was something I didn't expect: genuine friendship, practical wisdom, emotional safety, and the kind of peer support that no academic framework can replace. I hope you got something from it too.
To My Cultural Peer Advisors
At a critical moment near the end of this research, several of my peers stepped up to provide cultural feedback when I needed it most. They said yes without hesitation — even while they were deep in the thick of their own work, their own deadlines, their own final push.
What moved me most was this: at the start of our programme, I don't think any of them would have seen themselves in that role. But they grew into it. The confidence, the clarity, the generosity with which they gave their feedback — including reviewing my final output — was extraordinary. I am so grateful. Thank you for seeing the importance of what I was trying to do, and for trusting yourselves enough to say yes.
To My Friends
Megan, Nic and Charlotte, you have walked with me, literally and figuratively, across this entire journey. You have sat with coffees and listened to me rant, rave, process, panic, and occasionally make no sense at all about everything this Master's has thrown at me. You provided what I can only describe as micro friend therapy, consistently and without complaint. Thank you for being in my life. It has mattered more than you know.
Emma, thank you for being my first guinea pig! You believed in me long before I ever did, and that's always meant so much to me. Working on a tech-based project together was a highlight of the last year. Having Team Kirkman-Chambers in our support crew is an absolute blessing.
To the Researchers and Knowledge-Holders
Dr Kiri Dell. Hearing you present was one of the most powerful moments of this entire programme. Your research, your approach to sensing and knowing, the way you hold both rigour and heart simultaneously. The knowledge you shared in that session led to the adaptation and use of the Kei Compass, which guided every part of my research project from the beginning to the end. This is an approach I take forward with me into my next steps. Thank you.
Dr Karaitiana Taiuru. I have followed your work on AI ethics and digital sovereignty closely throughout this journey. Your kaupapa Māori AI framework is something I will continue to build with and advocate for long after this thesis is submitted. Thank you for the work you do and the clarity with which you do it. I appreciate the discussions we have had about the kaupapa and look forward to many more.
Alesana Fosi Palaamo. I have not met you, but the Tāfatolu framework has been a quiet companion throughout this work. It made me feel closer to my heritage at a time when I needed that. Thank you.
To Every Participant
To everyone who answered a Project Rise survey by speaking into a voice agent instead of typing into a form, because that choice said something about what felt possible. To those who coached with the Leadership AI and shared what they could not say to another person. To the wānanga participants who sat with a paradox that had no clean answer and didn't pretend otherwise.
To the Ray pilot participants: you were 14 people who collectively gave 697 minutes of real relationship conversations to an AI built on a student's laptop with borrowed credits. Some of you cried. Some of you shared things you'd been carrying alone. One of you told me what you hadn't been able to tell your therapist. I didn't take any of that lightly.
Your sessions shaped the NO clauses. Not what I chose to build. What I now refuse to build, because of what you taught me about what's worth protecting. The utu tūturu commitment in this research is yours: once this is complete, I'm building a dedicated space to return what I found, to show what your contribution built, and to give a personal acknowledgement to everyone who showed up. The loop must close. That's not optional. That's the charter.
You gave me your time. Your thoughts. The expertise you've built from living your own life. That's not data. That's a gift. Thank you.
E lele le toloa ae ma'au i le vai
The toloa flies away, but always returns to the water.