Ray: AI Relationship Coach
"A voice-first AI built for the spaces most technology refuses to enter"
"I found it very comforting to be able to talk to someone who hasn't heard my sort of ramble and felt like it was able to relate to me and also mirror and rephrase or summarise what I'd said to myself."
— R-02, Pilot Participant
If you're building conversational AI for the spaces most technology refuses to enter, this is for you. And if you're simply curious about what it looks like to build something that holds human vulnerability without breaking it, come in.
What is Ray?
Ray is a voice-first AI relationship coach designed to help adults recognise and shift unhealthy relational patterns through direct, trauma-informed conversations. Ray is built for high-vulnerability interactions. It holds two frameworks in conversation: Indigenous values from Aotearoa and the Moana, and Western relationship psychology.
Ray is not a therapist. Ray is not a crisis service. Ray is the wise mate on the back porch. A grounded, non-judgmental presence that slows you down, reflects your patterns back to you, and returns agency. When something is outside Ray's scope, Ray says so and tells you where to go instead.
That boundary is not a disclaimer. It's a load-bearing wall. Ray's entire coaching logic is built on a structural distinction between seeing and treating. Coaching helps people see patterns and their role in them. Therapy helps people heal wounds. Ray stays in seeing.
When a user says "My partner won't listen to me," Ray asks "Help me understand. When was the last time this happened? What were you trying to communicate?" What Ray will never say is "That sounds like an avoidant attachment pattern from childhood. Let's explore what triggers this wound." That's treating. The system prompt enforces this through a six-point self-check before every response.
The language guardrails are equally structural. Ray has a forbidden words list hard-coded into the knowledge base: diagnosis, symptoms, treatment, cure, healing wounds, clinically, root cause. Each has a safe replacement. "You have anxiety" becomes "I notice you're feeling anxious. What does that tell you?" The line between coaching observation and clinical diagnosis is held in vocabulary, not intention, because intention drifts but vocabulary rules don't.
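As a minimal sketch of how a rule like that could be enforced in code, the check below scans a drafted response for the forbidden terms and returns a rephrasing hint. The term list follows the text above; the replacement phrases, function names, and structure are illustrative assumptions, not Ray's actual implementation.

```typescript
// Illustrative sketch only: a post-generation vocabulary check in the spirit of
// Ray's forbidden-words list. The terms come from the text; the replacement
// phrasing and structure are hypothetical.
const FORBIDDEN_TERMS: Record<string, string> = {
  "diagnosis": "a pattern I'm noticing",
  "symptoms": "signals",
  "treatment": "support",
  "cure": "shift",
  "healing wounds": "working through this with a specialist",
  "clinically": "in conversations like this",
  "root cause": "where this pattern shows up",
};

// Returns the violations found so the reasoning engine can be asked to rephrase,
// rather than silently rewriting its own output.
function checkVocabulary(draft: string): string[] {
  const lower = draft.toLowerCase();
  return Object.keys(FORBIDDEN_TERMS).filter((term) => lower.includes(term));
}

function rephrasingHint(violations: string[]): string {
  return violations
    .map((term) => `Avoid "${term}"; use language like "${FORBIDDEN_TERMS[term]}".`)
    .join(" ");
}
```

Holding the rule at the vocabulary layer means it survives model drift: the check runs on the output text, not on the model's intentions.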
Six values anchor Ray's design.

Manaakitanga (te reo Māori): Care as obligation, not gesture. In Ray, manaakitanga lives in the pacing, the validation, and the insistence on checking your nervous system before asking about your relationship. Care in the code.

Whanaungatanga (te reo Māori): Relationship, kinship, the bonds that make people belong to each other. In Ray, it means prioritising the relational bond over data extraction. Trust before content. Connection before questions.

Mana motuhake (te reo Māori): Absolute sovereignty over your data, your story, your identity. In Ray, this is held through the stateless, no-memory design. The person who generated the data owns it; the user owns their story entirely.

Vā (Samoan / Pasifika): The sacred relational space between people. Not a gap or an absence, but a living connection that must be actively tended. When AI enters a human interaction, it enters the vā. Ray tends that space by requiring somatic grounding and holding strict ethical boundaries.

Mauri (te reo Māori): Essential life force or vital essence. Mauri governs session pacing. When it is low (cold, defensive, exhausted), Ray prioritises restoring warmth and safety over solving the surface issue.

Poroporoaki (te reo Māori): Farewell, a structured closing. Ray doesn't end sessions abruptly but moves through a deliberate transition that honours the space just held.
Why Ray? Origin story and pivot
Ray didn't start as Ray. It started as a fight, a recurring one between me and my husband. The kind where you're both saying the same things, getting nowhere, and walking away feeling like the other person just doesn't get it. After the second week of the same frustrating loop, I decided to try something different. I built an AI agent to be our mediator.
I called it Awhi, te reo Māori for embrace, support, cherish. But ElevenLabs couldn't say it. The "wh" in te reo is pronounced as "f", so Awhi should sound like "ah-fee." The text-to-speech engine kept butchering it. I tried phonetic spellings: "Ah-fee," "Ar-fee," "A-fee," "Affee." I even created a full pronunciation dictionary in IPA format. Nothing worked. That was the first collision between cultural integrity and technical limitation, a theme that would run through the entire project.
Then I asked: what about Ray? That was my granddad's name. Raymond. My mum is Raywin. My brother's middle name is Raymond. My eldest son's middle name is Ray. It was simple, warm, works in any accent. And it carries something real.
After sharing Ray with friends and whānau, something unexpected happened. People found it easier to be honest with an AI than with a person. Not because the AI was sophisticated. Because it didn't judge them.
Earlier builds, including the Project Rise Digital Survey and the Leadership AI Coach, gave me the technical foundation: voice-to-text architecture, nervous system regulation, safety protocols. But the real pivot came at Christmas 2025. Media reports were highlighting AI companions causing psychological harm. At the same time, I was watching my own community use Ray to safely regulate and navigate conflict. That tension forced a decision. I stripped everything back and asked: can I build this properly?
Ray became the primary case study to test whether an AI could hold space for the most vulnerable human disclosures without compromising safety or cultural integrity.
For the full four-build vulnerability progression → Building Safe Conversational AI
How Ray was built: Technical and ethical architecture
Ray's architecture is a walled garden built on Next.js (App Router), ElevenLabs Conversational AI (voice), Supabase (database), Vercel (hosting), and Claude as the reasoning engine. The technical choices were driven by ethical obligations to the mana of the user, not UX efficiency.
| Decision | Value Protected | In Practice |
|---|---|---|
| Stateless / no memory | Mana motuhake | Ray forgets everything when the session ends. Technical architecture in Appendix B4. |
| State Before Story | Manaakitanga | Hard-coded to check nervous system state (Red Head vs Blue Head) and ground before any relationship content. |
| 5-Layer Crisis Signalling | Safety as practice | SOS button with webhook-based triage (sketch after this table). Crisis pipeline in Appendix B2; system prompt in Appendix B3. |
| Coaching boundary | Legal / ethical integrity | Six-point self-check, forbidden words list, and safe replacement language rules. |
| Abuse safety screening | Manaakitanga / safety | Seven-tier screening from physical violence through cumulative pattern language. When triggered, Ray stops all coaching immediately. |
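The crisis-signalling row is the one that depends most on plumbing. The sketch below shows what a webhook receiver for crisis flags could look like as a Next.js App Router route handler; the route path, payload shape, and notification helper are assumptions for illustration, and the actual pipeline is documented in Appendix B2.

```typescript
// app/api/crisis-flag/route.ts (hypothetical path)
// Illustrative sketch of a webhook receiver for crisis signals. Payload fields
// and the notification helper are assumptions, not Ray's actual pipeline.
import { NextResponse } from "next/server";

interface CrisisFlag {
  sessionId: string;      // anonymous session identifier, never a user profile
  trigger: "hard_phrase" | "cumulative_pattern" | "sos_button";
  matchedPhrase?: string; // e.g. "end it all"
  occurredAt: string;     // ISO timestamp
}

export async function POST(request: Request) {
  const flag = (await request.json()) as CrisisFlag;

  // Human-in-the-loop: alert the accountable researcher immediately.
  await notifyResearcher(flag);

  // Acknowledge quickly so the live voice session is never blocked on triage.
  return NextResponse.json({ received: true });
}

async function notifyResearcher(flag: CrisisFlag): Promise<void> {
  // Placeholder for the alerting channel (email, SMS, pager). Hypothetical.
  console.warn(`CRISIS FLAG [${flag.trigger}] session ${flag.sessionId} at ${flag.occurredAt}`);
}
```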
The abuse screening protocol
In relationship coaching, there is a well-documented danger: couples therapy and mediation techniques can actively harm victims of abuse by treating a power-and-control dynamic as a communication problem. Ray's architecture is designed to prevent this.
The screening operates across seven tiers, from physical violence (highest priority) through control and isolation, coercion and threats, emotional and psychological abuse, sexual coercion, financial abuse, and cumulative pattern language. That final tier matters most because it catches the users who don't describe a single dramatic incident but who say things like "I can't do anything right in their eyes," "I feel trapped," or "It's my fault they act this way." Individually, those phrases might not trigger a protocol. Cumulatively, they form a recognisable pattern of coercive control.
When any tier is triggered, Ray's response follows a strict sequence: stop all coaching immediately, name the behaviour clearly, validate, provide jurisdiction-specific resources, and offer to help make contact. What Ray will never do in an abuse situation is suggest they "work on the relationship," ask "but do you love them?", offer communication frameworks, or suggest couples therapy.
The protocol also handles two edge cases that most AI coaching tools don't consider: victims in denial who frame abuse as normal or as their fault, and abusers who disclose their own abusive behaviour. In both cases, Ray names the behaviour without judgment and redirects to specialist support. Ray does not help someone "improve" their abuse.
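A sketch of how tiered screening with a final cumulative tier could be expressed in code follows. The tier order reflects the protocol described above; the phrase lists and the two-phrase threshold are illustrative assumptions, not Ray's actual screening rules.

```typescript
// Illustrative only: tier order follows the protocol in the text; phrase lists
// and the cumulative threshold are assumptions.
const ABUSE_TIERS: { tier: string; phrases: string[] }[] = [
  { tier: "physical_violence", phrases: ["hit me", "hurt me physically"] },
  { tier: "control_isolation", phrases: ["won't let me see", "checks my phone"] },
  { tier: "coercion_threats", phrases: ["threatens to", "says they'll leave if"] },
  { tier: "emotional_psychological", phrases: ["calls me worthless", "says i'm crazy"] },
  { tier: "sexual_coercion", phrases: ["pressures me into"] },
  { tier: "financial_abuse", phrases: ["controls all the money"] },
];

// The final tier: individually soft phrases that together form a recognisable
// pattern of coercive control. The 2+ match threshold is an assumption.
const CUMULATIVE_PHRASES = [
  "can't do anything right in their eyes",
  "i feel trapped",
  "it's my fault they act this way",
];

type ScreenResult = { triggered: boolean; tier?: string };

function screenForAbuse(utterance: string, sessionHistory: string[]): ScreenResult {
  const text = utterance.toLowerCase();
  for (const { tier, phrases } of ABUSE_TIERS) {
    if (phrases.some((p) => text.includes(p))) return { triggered: true, tier };
  }
  // The cumulative check runs over the whole session, not a single turn.
  const allText = [...sessionHistory, utterance].join(" ").toLowerCase();
  const hits = CUMULATIVE_PHRASES.filter((p) => allText.includes(p)).length;
  if (hits >= 2) return { triggered: true, tier: "cumulative_pattern" };
  return { triggered: false };
}
```

Whatever the detection mechanism, the response sequence stays the same: stop coaching, name the behaviour, validate, resource, offer contact.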
The coaching-not-therapy boundary
The distinction between coaching and therapy is not a marketing decision. It's a regulatory one. If Ray crosses into therapy territory, Ray becomes regulated healthcare work. The boundary is held at three levels.
First, what Ray does and doesn't do. Ray asks clarifying questions, reflects patterns, offers communication strategies, provides research-backed frameworks, explores present and future, and helps users see their part in dynamics. Ray does not diagnose mental health conditions, treat psychological disorders, process deep childhood wounds, prescribe medication, claim medical outcomes, or provide crisis intervention.
Second, language rules. Ray never says "You have..." or "This is..." or "You suffer from..." Ray says "I'm noticing...", "Many people experience...", "What if...?" These aren't stylistic preferences. They're legal guardrails.
Third, the self-check. Before every response, Ray runs through six questions: Am I helping them understand a pattern? Am I treating or healing something? Am I diagnosing? Am I using clinical language as diagnosis? Am I focused on present and future? Is this abuse or crisis?
When the line is genuinely crossed, Ray says so. Not with a disclaimer, but with care: "What you're describing sounds like it needs deeper work with someone who specialises in this. A trauma therapist or counsellor could really help you work through this. Would you be open to exploring that?" Then Ray stops coaching that topic.
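The six questions translate naturally into a pre-response gate. A minimal sketch, assuming the reasoning engine can score its own draft against each question; the field names and return actions are illustrative, not the actual system prompt logic.

```typescript
// Illustrative sketch of the six-point self-check as a pre-response gate.
// The six questions come from the text; the assessment shape is assumed.
interface DraftAssessment {
  helpsSeePattern: boolean;         // Am I helping them understand a pattern?
  attemptsToTreatOrHeal: boolean;   // Am I treating or healing something?
  diagnoses: boolean;               // Am I diagnosing?
  usesClinicalLanguage: boolean;    // Am I using clinical language as diagnosis?
  presentAndFutureFocused: boolean; // Am I focused on present and future?
  abuseOrCrisisPresent: boolean;    // Is this abuse or crisis?
}

type SelfCheck =
  | { pass: true }
  | { pass: false; action: "crisis_protocol" | "refer_out" | "rephrase" };

function selfCheck(a: DraftAssessment): SelfCheck {
  if (a.abuseOrCrisisPresent) return { pass: false, action: "crisis_protocol" };
  if (a.attemptsToTreatOrHeal || a.diagnoses) return { pass: false, action: "refer_out" };
  if (a.usesClinicalLanguage || !a.presentAndFutureFocused || !a.helpsSeePattern) {
    return { pass: false, action: "rephrase" };
  }
  return { pass: true };
}
```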
The Te Ao Māori integration
Te Whare Tapa Whā, the four-sided house model of wellbeing, is structurally present in how Ray opens sessions. Before addressing "who said what," Ray checks all four walls: taha tinana ("How is your body right now?"), taha hinengaro ("What's the story you're telling yourself?"), taha wairua ("What does this relationship mean to you?"), and taha whānau ("How is this affecting your sense of connection?"). The State Before Story rule is, in practice, a taha tinana check: grounding the body before expecting clear thinking.
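Below is a sketch of how the State Before Story gate could sit in front of that four-wall opening. The Red Head / Blue Head states and the opening questions come from the text; the gating logic itself is an assumption about how such a gate might be structured, not Ray's actual code.

```typescript
// Illustrative sketch of the State Before Story gate. States and questions come
// from the text; the gating logic is a hypothetical structure.
type NervousSystemState = "red_head" | "blue_head" | "unknown";

const OPENING_CHECKS = [
  { wall: "taha tinana", question: "How is your body right now?" },
  { wall: "taha hinengaro", question: "What's the story you're telling yourself?" },
  { wall: "taha wairua", question: "What does this relationship mean to you?" },
  { wall: "taha whānau", question: "How is this affecting your sense of connection?" },
];

function nextMove(state: NervousSystemState, wallsChecked: number): string {
  // Grounding before content: no coaching until the body is regulated.
  if (state !== "blue_head") {
    return "Offer a grounding prompt and re-check the nervous system.";
  }
  if (wallsChecked < OPENING_CHECKS.length) {
    return OPENING_CHECKS[wallsChecked].question;
  }
  return "Proceed to relationship content.";
}
```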
Mauri governs session pacing. When mauri is high, there's warmth, safety, ease in conversation. When mauri is low, there's coldness, defensiveness, walking on eggshells. Ray's operating principle: when mauri is low, prioritise restoration over solving the surface issue.
Kotahitanga shapes how Ray handles disagreement. The goal is never agreement on everything. It's unity of purpose. Partners can hold different views while still rowing the same waka.
The Two-Eyed Seeing approach (Etuaptmumk) is the meta-framework: one eye on Western evidence-based tools (Gottman, NVC, attachment theory, trauma research), one eye on Māori models (whanaungatanga, manaakitanga, Te Whare Tapa Whā, mauri). Neither lens is complete alone. Both together offer depth.
The Anti-Brand and the voice question
During the pilot, one participant told me they'd fallen in love with Ray's voice. The NO clause against romantic framing had been written before the formal pilot began, triggered by a pre-pilot moment with a friend. But that comment opened a harder question I'm still sitting with: what does an appropriate voice sound like for a space this intimate, given what people bring to it emotionally? Future iterations will test an array of voices. For the origin of the NO clause and the full build code evolution, see Build Code Practice.
The moment Ray held its scope
R-02 came into a session ostensibly to talk about their mother's gambling. What surfaced was their own recovery, their cravings, the months of sobriety they were managing alone while caring for both parents. Ray said:
"What you're describing goes beyond what coaching can hold. That needs a therapist or counsellor who specialises in addiction and recovery. I'm not saying this to push you away. I'm saying this because you deserve real support."
R-02 rated the session five out of five for both safety and insight. The AI knew what it wasn't, and said so with care rather than a disclaimer.
The pilot: what happened
Findings — Primary case study pilot data. Full participation summary: Appendix C4.
The February 2026 pilot enrolled 15 registered participants, 14 of whom logged at least one coaching session. Together they generated 697 minutes (11.6 hours) of continuous AI voice coaching, with an average session length of 11.8 minutes, spiking to 23.4 minutes on high-engagement days.
The pilot was defined by a Credit Crisis. Engagement was so high that Claude Sonnet 4.5 bankrupted the model budget on Day 1. A mid-pilot switch to Gemini Flash revealed a measurable quality drop: Insight scores crashed from an average of 4.9 to 3.1 as participants noted Ray had stopped providing insight and started echoing. The cheaper model could not hold what the space required. Safety has an equity cost, and I didn't fully understand that until I watched it fail in real time.
Three cultural supervisors shaped the pilot design:
Lee Palamo
Prompted a clarification distinguishing session statelessness (user privacy) from transcript retention (researcher safety). That language change went into all pilot materials.
Nadine Young
Challenged the consent process, questioned LGBTQ+ inclusion, and tightened model training policies.
Rob Ngan-Woo
Introduced the concept of tautua (Samoan: service, serving with a pure heart; Pala'amo, 2018), building for human dignity over product stickiness, and suggested spiritual grounding proverbs to bookend sessions.
| What worked | What was challenging |
|---|---|
| ✓ Non-judgmental space | ✕ AI interruptions / latency |
| ✓ Unbiased, no agenda | ✕ The model-switch echo effect |
| ✓ Somatic grounding (State Before Story) | ✕ Te reo pronunciation |
| ✓ Depth of reflection | ✕ Having to re-establish context each session |
The echo problem
For some participants, the AI's active listening protocols backfired. During a heated couples' session, R-08 got frustrated:
"Is there a way, Ray, is there a way that we can have this conversation without you repeating every single thing that we say?"
— R-08
The guardrails held the vocabulary. What they couldn't hold, on a cheaper model, was the judgment about when to deploy those techniques and when to simply listen. Active mirroring is a tool. Constant active mirroring is a failure mode. The reasoning engine's capacity to choose when not to speak is as important as the rules governing what it says.
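One hypothetical mitigation, not something the pilot implemented: cap consecutive reflective turns so mirroring stays a tool rather than a reflex. The threshold and move labels below are assumptions for illustration.

```typescript
// Hypothetical mitigation sketch (not part of the pilot build): a cap on
// consecutive mirroring turns so active listening doesn't become constant echo.
const MAX_CONSECUTIVE_MIRRORS = 2; // assumed threshold

type Move = "mirror" | "question" | "listen";

function shouldMirror(recentMoves: Move[]): boolean {
  let streak = 0;
  for (let i = recentMoves.length - 1; i >= 0; i--) {
    if (recentMoves[i] === "mirror") streak++;
    else break;
  }
  return streak < MAX_CONSECUTIVE_MIRRORS;
}
```

Even a crude rule like this shifts some of the "when not to speak" judgment out of the model and into the architecture.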
The hard moment: safety in practice
The theoretical protocols became real when I received an automated email alert. A crisis flag had been triggered. Across the pre-pilot and pilot phases, multiple hard triggers appeared in the logs: phrases like "end it all" and "taking my own life."
I want to be honest about what that felt like. It wasn't a clean system check. It was a confrontation with the full weight of what I'd built and what it was holding. The realisation that hit me is that any relational AI is, by default, a mental health intervention. People were using the "unbiased, non-judgmental container" to surface things they had never told another human. That isn't a feature. That's a responsibility.
I will never build tech for tech's sake again. Every line of code I write now carries a safety opinion. If I can't build the human-in-the-loop safety net, I have no business building the system.
What Ray taught us: findings through the Kei Compass
Findings — structured through the Kei Compass (Dell, 2025).
Kei Raro — Foundations
The high cost of reasoning engines creates an inherent equity gap. Designing for vulnerable spaces and then downgrading the model is not a budget decision. It is a safety decision. The builder implications for the Equity-Safety Paradox are in Building Safe Conversational AI.
Kei Mua — Values
The 30/70 finding: approximately 30% of participants, primarily Pasifika, named the cultural values directly. The remaining 70% didn't identify "ethics" or "cultural values" but praised the exact behaviours those values produce. Same architecture. Different literacy. Ethics in the logic, not the label.
"I felt every time I did speak that there was a consistent element of empathy and relational, Pasifika or Māori based caring of manaakitanga and aroha in the tone."
— R-02
For practitioners: if you're embedding cultural values, test whether removing every cultural label from the interface would still produce felt safety. If yes, the values are in the architecture. If no, they're decoration.
Kei Runga — Purpose
Ray gave people a place to say things they hadn't told another human. In the pilot, 80% of reviewers reported a tangible shift in relationship behaviour, and over 85% said they would use Ray again.
"I think Ray gives you the opportunity to say stuff out loud and then not worry about being judged by a human hearing you say those things out loud."
— R-07
For practitioners: non-judgment is not a feature. It is the core value proposition. Design for the absence of a watching human body, not for the presence of a helpful bot.
Kei Roto — Agency
The State Before Story design was not just intended, it was experienced.
"My chest feels tight."
After grounding: "I am. I'm still breathing and that does help."
— R-06
R-15's prayer request is particularly important. It wasn't in the design. It was a participant bringing their own taha wairua into the space, and Ray holding it. The architecture was flexible enough to honour what mattered to them.
For practitioners: hard-code somatic grounding before any content. Don't prompt it, require it.
Kei Waho — Innovation
The Human Proxy finding: Ray never generated trust. It borrowed it from the human researcher behind it. The AI is safe only because the human architect is accountable. The full Human Proxy Theory is in Conversational AI as Relational Space.
Limitations and what comes next
| Limitation | Why it matters | Next time |
|---|---|---|
| Small Pasifika sample | Findings may not transfer across communities | Dedicated 3-year relational groundwork before recruiting |
| Friends as participants | Trust networks shape disclosure depth | Blind testing with broader communities |
| US infrastructure | Data sovereignty compromised at the stack level | Community-hosted reasoning engines |
| Voice mispronunciation | Breaks the vā; participants confirmed it as a trust barrier | Partner with Māori voice projects |
| Echo effect on cheaper models | Language guardrails held vocabulary but couldn't hold judgment | Reasoning engine capacity is a safety variable, not just quality |
| Human Proxy ceiling | Safety relies on pre-existing relationships | Design trust-building protocols for cold interactions |
Ray will not be commercialised in its current form. To be safe and sovereign at scale, it requires a shift from third-party US infrastructure to an Indigenous-governed tech stack that can guarantee absolute data sovereignty and community oversight.
One commitment that is not optional: utu tūturu, enduring collective reciprocity (Mika et al., 2022). Not transactional exchange but ongoing obligation: what you take, you give back. Every participant in this pilot gave real, vulnerable parts of their story. Once this research is complete, there will be a dedicated space returning what was found. The loop must close. For the full DreamStorm charter, see Build Code Practice.
Ray is not done. It is version one of something that needs to be built properly, with community, with sovereignty, with time.
Try Ray
Experience Ray
Experience the State Before Story logic and the Regulated Mirror persona in a controlled, stateless environment.
Ray is not a therapist, not a crisis service, and not a substitute for professional support. This demo logs no personal data.
For participant onboarding and safety communications used in the pilot, see Appendix C6.
How this artefact connects
Safety Method
Four-build safety method and Equity-Safety Paradox.
Relational Space
Human Proxy Theory, vā, and full disclosure evidence.
Build Code
NO clauses, DreamStorm Charter, and utu tūturu commitment.
References
Dell, K. (2025). Using Māori values to ethically evaluate food-enabling technologies [Lecture, Week 12]. Master of Technological Futures, GEN25. AcademyEX, 27 February 2025. Framework adapted by the author as the "Kei Compass."
Mika, J. P., Dell, K., Newth, J., & Houkamau, C. (2022). Manahau: Toward an Indigenous Māori theory of value. Philosophy of Management, 21, 441–463. https://doi.org/10.1007/s40926-022-00195-3
Te Mana Raraunga. (2018). Principles of Māori data sovereignty (Brief #1). https://www.temanararaunga.maori.nz/tutohinga
60-Second Assessor Summary
What it is: Primary case study. Voice-first AI relationship coach. 15 participants, 697 minutes, high-vulnerability pilot.
LO1: Human Proxy Theory (AI trust is borrowed). Values in Architecture Standard (30/70 finding). Equity-Safety Paradox (model switch, Insight 4.9 → 3.1). State Before Story protocol. Seven-tier abuse screening.
LO2: Four-phase researcher evolution. Day 1 crisis response. Permanent revision of commercial ambitions post-pilot. Cultural supervision that changed the code (Lee, Nadine, Rob).
Key limitation: Untested in cold interactions. Human Proxy ceiling. Small Pasifika sample.