Thought Piece

Conversational AI as Relational Space


"A thought piece from four builds and the voices of real participants"

If you are Māori or Pasifika and you have ever felt the cost of being expected to type your way through a digital system that wasn't built for how you think or speak, this piece is grounded in that experience.

If you're interested in what conversational AI actually means in a Māori and Pasifika context, and what users told us about talking to an AI versus typing at one, come in.

The moment that changed the question

At 8:30 on the morning Ray launched, a message came through from a participant. She had just finished her first session. She said she had cried through the whole thing.

I was driving. I couldn't open the message. I panicked, running through the crisis protocol in my head, hoping she was okay. When I finally stopped and read it properly, I understood. She wasn't in distress. She was relieved. She felt heard. This woman is someone I have known for 15 years. In all that time, neither of us had ever cried in front of the other. And here she was, crying in a conversation with an AI I had built, not because something had gone wrong, but because something had gone profoundly right.

That moment is the closest I have come to understanding what the vā actually means in a digital context.

The vā is the sacred relational space between people in Māori and Pasifika thought. Not a gap or an absence, but a living connection that must be actively tended (teu le vā; Anae, 2010, as cited in Pala'amo, 2018). When trust flows through it, the vā is generative. When it is broken, through disrespect, extraction, or carelessness, the damage is real and felt in the body.

When AI enters human relational spaces, it enters the vā. That is not a metaphor. It is a design obligation, one most builders haven't fully reckoned with. The architecture must nurture the space or it will exploit it. Not passively, not deliberately, but by default. Every design decision in this research was made in answer to that obligation: how do you build a system that tends the space rather than extracting from it?

And there is a harder question I have not stopped asking: is it even possible to create genuine vā through an artificial system? My own experiences with conversational AI convinced me to try. That first morning with Ray convinced me it might be possible. But I hold the question deliberately, because any builder who stops asking it has already started causing harm.

The Vā in Digital Space

Architecture as Relational Tending

  • Human ↔ Human: utu / reciprocity
  • Human ↔ AI: somatic grounding
  • User ↔ Researcher: Human Proxy trust
  • Accountability: ohu / governance

Tended vā: trust flows. Broken vā: physiological response.

Why voice matters

I will make this personal before I make it statistical.

I am a Pacific Islander who has grown up disconnected from her Samoan heritage. My father passed away and with him went the clearest entry point back into that side of my family. I walk a complicated line. I sound one way, I look another, I feel caught between worlds I can't fully claim. I name this not as an apology but as context, because it is precisely this position that made me obsessed with who gets to be heard in digital spaces, and who gets left out.

Through all of that complexity, the one thing I have always known is that I love to speak. Pacific oratory has always been the primary vehicle for knowledge, for relationship, for the passing down of everything that matters. One word in te reo Māori can carry more feeling, more history, more relational weight than a paragraph of English, life woven into the language itself. English, as I experience it, is a mechanism for description. Asking someone from an oral tradition to write their experience is asking them to translate something living into something flat.

This is what I mean by translation friction: the cognitive and cultural cost of converting fluid thought into typed text. You edit yourself before you start. You make things shorter than they are. You get it over with.

In the Project Rise survey, this showed up as data: the average text-based engagement was 7 seconds. The average voice engagement was 14 to 18 minutes. Same participants, same questions, but voice unlocked a completely different quality and depth of response (full participation summary in Appendix C4).

The Text Experience: 7 seconds

Average text-based engagement

  • Filtered thought
  • Edited response
  • Transactional entry

The Voice Experience: 14–18 minutes

Average voice engagement

  • Fluid thought
  • Reflective response
  • Relational depth

0% drop-off: Disabled and Neurodivergent participants using voice
33% deep disclosure: open-text onboarding vulnerability rate

But the headline number isn't the sharpest finding. Look at what sits underneath it.

Every single Disabled and Neurodivergent participant in the Project Rise cohort chose extended voice engagement. For participants who identified as dyslexic, typing was a wall: "It takes ages. Voice just fast-tracks everything. I'd lose interest within a very short space of time if I had to type." (R-08). For participants managing emotional or cognitive strain, the difference was even starker. R-14 named it directly: typing "takes more brain work" when you're "feeling sad or angry." Voice didn't just engage people for longer. It removed a barrier that text had been quietly enforcing for years.

R-01 captured something specific to shared relational use. He had used Ray across multiple sessions for two different relationships. With his wife present, voice made something possible that typing couldn't: "By interacting with you and having the freedom to truly express how I felt alongside hearing my wife express how she felt to a third party was very helpful. It could be very honest without being directed and requiring a reply from my partner." Voice didn't just reduce friction. It created a neutrality that made joint sessions possible without requiring direct confrontation.

The depth data confirms it. A third of participants (33%) used the open-text onboarding box to disclose intense vulnerabilities: ADHD, drug use, trauma. The same people expanded on those disclosures for 10 to 15 minutes once the voice session began. What came out in those sessions were things people would never have written: their inner critic, their professional shame, their fears about being seen. These were not topics the survey was asking about. They surfaced because voice gives people permission to follow their own thread.

Writing feels like a test. Talking feels like a kōrero. For our people, knowledge has always lived in the breath and the voice.

What people actually said

Four builds. One thing kept coming back: people felt heard. Not because the AI was intelligent. Not because it said something surprising. Because it listened, and then reflected back what it heard. Reflective listening: one of the most foundational communication techniques in wellbeing practice, and one of the most routinely absent in everyday human interaction. The AI was programmed to do it because good communication requires it. And for participant after participant, being on the receiving end of that consistency produced something close to relief.

That made me sad. Actually sad. People are so rarely listened to without agenda that a machine doing it felt remarkable.
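For readers who build, the sketch below shows one way a reflective-listening behaviour like this can be specified in a conversational agent. It is a minimal illustration under assumptions, not Ray's actual implementation: the client library, model name, and prompt wording are all placeholders.

```python
# Minimal sketch of a reflective-listening loop (illustrative only).
# Assumes an OpenAI-style chat API; the model name and prompt wording
# are placeholders, not this research's actual configuration.
from openai import OpenAI

client = OpenAI()

REFLECTIVE_LISTENING_PROMPT = """You are a wellbeing companion.
After each user turn, first reflect back what you heard in your own
words: the feeling and the meaning, not just the content. Do not give
advice, introduce your own angle, or change the subject until the
user confirms the reflection feels accurate."""

def respond(history: list[dict], user_turn: str) -> str:
    """Append the user's turn and return a reflective response."""
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": REFLECTIVE_LISTENING_PROMPT}]
        + history,
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text
```

The design point is in the prompt, not the model: the reflection is mandated before anything else the agent might want to do.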

R-02 captured the distinction between an echo and a mirror:

"I found the feedback that you would give back to me after I've explained something was really helpful for me to hear it from another person or to a machine that felt like they really listened."

— R-02

The AI wasn't just repeating. It was reorganising. Returning meaning in a shape the participant could hold.

What participants named most consistently was the absence of a watching human body. No folded arms. No sighs. No head tilting sideways. That changed what they were willing to say. W-12 hadn't expected to lower her guard at all:

"you are an AI screen and I'm not fearing the judgment of watching your body language or you folding your arms or sighing or your head going to the side... I thought I'd keep my shield up and I'd just be talking to a robot back to it like this. Really interesting. I didn't see that coming."

— W-12

R-02 named something more structural, something about what humans do to each other even with the best intentions: "I felt that as we progressed and developed the trust in the relationship, that I could be a lot more open and I don't have to worry about how that person is going to take it because quite often humans, when you engage with them, are seeing what we say to them as how this will affect them first." That is an observation about the relational cost of human interaction, the emotional labour of managing someone else's reaction while trying to process your own experience. The AI removed that labour entirely.

W-22 described the same thing from the wānanga:

"I think when people are in it and the thick of it sometimes it can be really helpful to know someone's going to hold space and you're not burdening anyone, right? So in that way I imagine that can be really helpful because actually the burden's not on another person."

— W-22

W-03 described what it felt like when the AI got this right:

"It feels great. It feels warming. It feels real. It feels progress is being made in the conversation because you're articulating clearly to me how I feel and my perspectives. And you're not throwing in your own angle."

— W-03

For Māori and Pasifika participants specifically, voice AI also functioned as a shame-free practice space. In the Culture Meets AI wānanga, participants described using AI as an "advanced parrot", something they knew lacked true mana but that was precisely safe because of that lack. Making a mistake with an AI carries no social consequence. Mispronouncing a kupu, asking a question that reveals how much you don't know, practising a kōrero you're not yet confident with: the AI holds no judgment and remembers nothing. For people carrying whakamā about their distance from their culture or language (Rua, 2025; Millar, 2021), that is the point of entry.

W-10 went further, naming the values she felt in the AI without being prompted:

"I feel like AI has a natural manaakitanga wairua. It's always looking for solution-based kaupapa... So it actually role models what healthy communication looks like, what tools can be used."

— W-10

That is the 30/70 finding in one sentence. A participant naming the value the architecture carried, unprompted.

The stateless design, no memory between sessions, was part of why. The AI couldn't accumulate a picture of them. It couldn't use what they'd said last week against them this week. Every session was clean. Participants named this as protection, not limitation.
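To make that guarantee concrete, here is a minimal sketch of what statelessness means at the code level, assuming a simple in-process session object. The names and structure are hypothetical, not the actual build.

```python
# Illustrative sketch of a stateless session: conversation context lives
# only in memory for the session's lifetime and is deliberately discarded.
# There is no database write, no profile update, no retained transcript.
class StatelessSession:
    def __init__(self) -> None:
        self._turns: list[dict] = []  # exists only for this session

    def add_turn(self, role: str, content: str) -> None:
        self._turns.append({"role": role, "content": content})

    def context(self) -> list[dict]:
        return list(self._turns)

    def close(self) -> None:
        # The protective guarantee: nothing survives the session.
        self._turns.clear()
```

The protection is in what is absent: with no persistence call anywhere, the system cannot accumulate a picture of a participant, so nothing said last week can be used this week.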

When the space collapsed

But the vā in a digital space is fragile. And it is fundamentally bounded by the machine's lack of wairua.

When the AI's programmed empathy mismatched a participant's reality, the relational space collapsed immediately. W-21 directly challenged the AI's capacity to hold space for human pain, asking: "Does the AI have the ability to feel the emotion that's connected to that vulnerability?" When the AI acknowledged its limitations, she rejected the space explicitly: "This to me doesn't feel like a vulnerable conversation [because it] doesn't emulate any emotions."

When the technology felt too clinical, participants experienced a jarring disconnect. W-14 said: "Yes, I feel like I'm speaking to an automated, like, calling center. That's what it feels like." Another participant, after briefly mistaking the AI for a real human, expressed deep discomfort upon realising the deception and abandoned the session entirely.

And then there was the moment that reveals a different kind of boundary. P-18 abruptly exited upon realising the AI was processing her background conversations: "Sorry I didn't know it could hear us talking... All right, I have to go to class now. Bye!" The AI's constant listening, the thing that makes it available, also makes it intrusive. That is a design tension, not a bug.

There is also the moment that captures what it looks like when vā is broken at the language level. The butchering of te reo Māori by text-to-speech models is not a minor UX issue. When I hear it, I feel it in my chest, a tightening, a wrongness. It is the sound of a language being disrespected by a system that was never designed to honour it. That physiological response is data. It is what happens when a tool enters a relational space it was not built to hold. The decision to strip te reo from the voice agent entirely, and what that required, is in Building Safe Conversational AI. The vā principle it proved: silence is more respectful than performance.

Participants confirmed this directly. One described how the mispronunciation didn't just create distance. It triggered the memory of times they themselves had been judged for mispronouncing te reo. The harm was not the AI's error. It was the echo of a longer history of shame. That is not a design problem. That is a safeguarding obligation. For the full participant evidence on this decision, see Build Code Practice.
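The full decision lives in those artefacts, but as a sketch of the shape such a safeguard can take: a pre-speech guard that refuses to voice te reo rather than mispronounce it. Everything here is hypothetical and illustrative; a real guard would need a proper lexicon, not a placeholder wordlist.

```python
# Hypothetical pre-TTS guard (illustration only): rather than let a voice
# model butcher te reo Māori, detect kupu the model cannot honour and
# route that reply to on-screen text instead of speech.
TE_REO_KUPU = {"kōrero", "whakamā", "wānanga", "taonga", "kaupapa", "mana"}

def safe_to_speak(reply: str) -> bool:
    """True only if the reply contains none of the flagged kupu."""
    words = {w.strip(".,!?;:'\"").lower() for w in reply.split()}
    return words.isdisjoint(TE_REO_KUPU)

def deliver(reply: str, speak, show_text) -> None:
    # Silence is more respectful than performance: fall back to text.
    if safe_to_speak(reply):
        speak(reply)
    else:
        show_text(reply)
```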

These moments are not footnotes to the success stories. They are the other half of the evidence. The vā can be tended, the earlier sections prove that. But it breaks easily, and when it breaks, participants don't give second chances. They leave.

The paradox we are not going to resolve

The Culture Meets AI wānanga did not give us a clean answer (participant questions and wānanga protocol in Appendix C5; coding frame in Appendix A3). It gave us something more honest: a full continuum of views, shaped by personal experience, sitting in genuine tension with each other. That tension is the finding.

The Tapu / Noa Paradox

The Danger: Risk of Extraction

  • Data sovereignty risk
  • Mispronunciation and erasure
  • Noa-ising taonga (making the sacred ordinary)
  • US-hosted infrastructure

The Healing: Judgment-Free Access

  • Overcoming whakamā (shame)
  • Knowledge preservation
  • Safe entry point for the diaspora
  • Discovering voice on your own terms

Held in tension by the ohu: collective governance.

The only thing that can hold this tension is community, not technology and not an individual researcher.

On one side sits a real danger. AI systems built on foreign infrastructure, trained on data that doesn't include Indigenous voices, mispronouncing the words of a living language: these are not neutral tools. They risk noa-ising taonga, taking what is sacred and making it ordinary, stripping the relational context from knowledge and turning it into a data point on a server in another country (Hudson, 2020; Taiuru, 2025). When a language is mispronounced by a system that was never built to hold it, the tapu of that language is not just ignored. It is actively undone. The AI doesn't know it's doing damage. That's part of the danger.

On the other side sits something equally real. For people in the diaspora carrying whakamā, shame about not knowing their language, their stories, their whakapapa, AI offers a place to begin without the terror of getting it wrong in front of an elder (Rua, 2025; Millar, 2021). A place to ask questions that feel too basic, too exposing, too much of an admission. A judgment-free starting point. And here is the other side of the tapu/noa tension: for someone who has been cut off from their culture, noa is sometimes the only door available. An ordinary, accessible, low-stakes entry point may be the only one they have. That matters.

"So much has been lost because we didn't store it anywhere... I don't think our own tikanga will be used against us by a machine."

— W-19

W-09 offered a different articulation of the tapu/noa boundary: "The human, the humanness, the intuitiveness, the emotional intelligence, the empathy, the human innocence that comes with being innately human, I think that's tapu. The noa side is personality, person's decisions that they make, the consequences of all these things."

And then there was the story that stayed with me. Someone in the wānanga talked about a knowledge holder in their family who passed away before sharing what they knew. That knowledge is simply gone now. Not held somewhere in an archive. Gone.

W-13 landed on the question that holds the whole paradox: "What is the cost of not sharing? What is the cost of withholding information, especially esoteric knowledge?" That is not rhetorical. It has a real answer on both sides, and the fact that both answers are true simultaneously is what makes this unresolvable by any individual researcher.

W-19 said it most directly:

"I live in a space where a lot of my culture has gone because other people deemed it too tapu to pass on. Now, none of us have it... I don't think in any way going forward that knowledge that we have now should be just left in a little sacred place, because we're all oral, learned... Knowledge is useless if not given to the next generation."

— W-19

"You'll never understand. You've never had your heart broken... I can tell you all my deepest, darkest secrets." (W-12). The contradiction itself was the finding.


Here is what I came to: this paradox cannot be resolved by a researcher. It can only be held by a community. That is what ohu means in practice. Not a committee. Not a consultation hui. A collective governance structure (kaumātua, kuia, rangatahi, knowledge holders) that holds the tapu/noa tension continuously, not just at the point of design, and not just at the point of launch. Pre, during, and post. The question of what can be digitised, what must stay protected, what counts as tending the vā and what counts as violating it: those are not technical decisions. They are relational ones, and they require ongoing collective authority. One builder, even a Māori or Pasifika one, cannot hold that alone. I am not the right person to draw that line. No individual researcher is.

"Finding that balance with maybe an ope, an ohu, or a collective, a group where this is constantly... transparent between the two ope."

— W-03

What I believe, and I hold it as a position, not a conclusion, is that AI could be genuinely generative for communities that have always preserved what matters most through voice and story. There is something different about someone writing your story down and handing it to the next generation versus you recording it in your own words, in your own voice. AI could facilitate that. But only if Māori are building it. Not just at the table having the conversation, but building their own table. Designing the infrastructure. Owning it. Governing it themselves.

The relationship behind the AI

The most important finding of this research was not a feature, a metric, or a design decision.

Participants were not trusting the AI. They were trusting me.

I call this the Human Proxy finding: AI does not create vā. It borrows it from the human accountability structures that surround it. The digital relational space exists before the AI conversation starts. It is built from the real-world relationship between participant and researcher, and the AI is only ever as safe as that relationship is strong. The thematic analysis structure underpinning this theory is in Appendix A3.

The wānanga made this visible in ways no survey could have. W-18 said it plainly: "possibly because I know that the information that's being gathered from this conversation, I know the person who is responsible for the research, so that makes a difference." W-06 was direct: "I'd only feel comfortable if I had high trust in the specific person or organisation connected to the AI prompt. I would not trust a random or anonymous AI prompt."

P-19 described the Human Proxy effect through the cloned voice: "At first I thought it was really weird hearing your voice because it feels like, because it's someone I know, right? Even though I know it's AI but it allows me to open up more... because it's you, even though it's not you, they're using your voice, it makes it a lot easier."

The cloned voice amplified the effect more broadly. Because participants heard my voice, not a generic AI, the session felt like a natural extension of a relationship that already existed. W-22 said: "I know it's AI, but I do think I do feel like I'm talking to [the researcher]. And that's maybe because I know her a little bit. So I think that definitely helps."

W-10 named the structural logic underneath all of this: "[The researchers] are putting this AI conversation inside a cultural container rather than pretending the AI itself is culturally safe on its own." That is exactly it. The container is human. The AI operates inside it. Remove the container, and the safety disappears.

This finding has a ceiling, and it matters to name it. Ray has never been tested in a cold interaction, with strangers, without a pre-existing relational foundation. Whether the Human Proxy effect can be replicated through process and transparency alone, rather than through existing relationship, is the open design question. But within the pilot there was evidence that process and care can partially close that gap. R-15 entered without a deep pre-existing relationship with me. Yet the framing, the opt-out process, and the transparency about the nature of the research built enough trust to begin. That is not the deep Human Proxy effect of a 15-year friendship. But it shows the effect can be partially established through deliberate process, an important finding for any builder who cannot rely on pre-existing relationship.

W-11 closed the section with the design implication:

"The safety of the digital space must be visibly tethered to human accountability. The AI cannot operate as an anonymous, faceless corporate entity; the human stewards responsible for the data must be highly visible."

— W-11

What this means for builders

The design implications of this research live across two other artefacts. For the full safety method (crisis architecture, stateless design, the nine builder principles) see Building Safe Conversational AI. For the values framework and build codes that governed every decision, see Build Code Practice.

What this artefact contributes that those don't:

Human Proxy Theory

The anchor finding. If you are building conversational AI for vulnerable communities, the most important design decision is not in the code. It is in who stands behind the system and how visible, accountable, and reachable they are when the system fails. AI borrows trust. It does not generate it.

The tapu/noa paradox

The unresolved tension. The same technology that risks stripping cultural knowledge of its sacredness is also the technology creating space for people carrying shame to engage with that knowledge for the first time. This paradox cannot be resolved by a builder. It can only be held by a community, through collective governance structures (ohu) that make decisions about what can be digitised, what must stay protected, and who holds that authority over time.

Voice as sovereignty

For communities grounded in oral tradition, the choice between voice and text is not a feature preference. It is an accessibility and cultural decision. Build for voice deliberately, or you are designing for the people who were already included.

References

Anae, M. (2010). Research for better Pacific schooling in New Zealand: Teu le vā — a Samoan perspective. MAI Review, 1. (As cited in Pala'amo, 2018)

Hudson, M. (2020). Ngā Tikanga Paihere: A framework to guide ethical data use. Stats NZ / data.govt.nz. https://www.data.govt.nz/toolkit/data-ethics/government-algorithm-transparency-and-accountability/nga-tikanga-paihere/

Mika, J. P., Dell, K., Newth, J., & Houkamau, C. (2022). Manahau: Toward an Indigenous Māori theory of value. Philosophy of Management, 21, 441–463. https://doi.org/10.1007/s40926-022-00195-3

Millar, N. (2021, July 15). The shame holding Māori back from speaking te reo. E-Tangata. https://e-tangata.co.nz/korero/the-shame-holding-maori-back-from-speaking-te-reo/

Pala'amo, A. (2018). Tafatolu (threesides): A Samoan research methodological framework. Aotearoa New Zealand Social Work, 30(4), 17–28.

Rua, M. (2025). Everyday experiences of te reo Māori trauma [RNZ coverage]. Radio New Zealand.

Taiuru, K. (2025). Māori data governance 2025: State of the nation. taiuru.co.nz.

Te Mana Raraunga. (2018). Principles of Māori data sovereignty (Brief #1). https://www.temanararaunga.maori.nz/tutohinga