Method

Tikanga-Led Framework for Conversational AI

"Bonus Artefact • A builder's compass. Not a checklist. Move through it with intention."

Version 1.1 • April 2026 • Authors: Lee Palamo & Lian Passmore

How This Framework Came to Be

Neither of us planned to write a framework. We were eighteen months into our Masters research at AcademyEX when we ran our Culture Meets AI wānanga, a 90-minute gathering for deep learning and knowledge sharing where participants explored together whether AI belongs in cultural spaces, and brought the findings back to Dr Karaitiana Taiuru in a follow-up session. At the end of that session, he asked us a question we were not expecting: okay, you have found this — now what are you going to do with it? He suggested we turn it into a framework for agentic AI.

We both sat with that for a moment. It had not crossed our minds. After we got off the call, we rang each other. Partly to process what had just happened, and partly because Dr Taiuru had called us pioneers in that session, and we were both somewhere between stunned and laughing about it. We have deep respect for him and his work. Hearing that word from him, directed at us, activated every bit of imposter syndrome we had both been quietly managing across eighteen months of research.

We had a few weeks left before our final submissions were due. The plan was simple: we would each draft a framework independently, then bring them together to see what we had. Lee focused on governance — the questions a builder needs to answer before they have the right to build, and the structures that need to stay active after they do. Unknown to each other, Lian focused on the technical architecture — the decisions that have to be made inside the system to make the values real.

When we brought them together, the fit was immediate. One covered the legitimacy and relational obligations of building. The other covered how to build so those obligations are in the code, not just the documentation. Neither was complete without the other.

"Congratulations! Your framework is amazing and so expansive! It is the only framework I am aware that fills this void."

— Dr Karaitiana Taiuru (Feedback on V1 Draft)

The framework went from an idea to a working document to a draft ready for consultation in a matter of weeks. It is included in both of our final submissions as an early-stage contribution — not a finished product, but a grounded starting point that we believe has value for builders working in this space.

V1.1 — April 2026

Tikanga-Led Framework for Conversational AI Agents.

A builder's compass. Not a checklist. Move through it with intention.

Built on Dr Karaitiana Taiuru's Kaupapa Māori AI Framework (He Tangata, He Karetao, He Ātārangi), used with permission. Grounded in Te Mana Raraunga data sovereignty principles and the Kei Compass, adapted from Dell (2025). Evidence from 600+ conversations across four builds at increasing vulnerability levels.
Before you begin

You can stop at any point. Not building is a valid outcome.

The framework does not assume proceeding is always right. If the questions raised in Phase 1 cannot be answered with confidence, stopping is the most honest and responsible path. If you can't explain why you're building it, who shaped it, what it's allowed to do, and who can control it, you're not ready to build it.

About this framework

Where this came from.

Two researchers. Two builds. One shared wall they kept hitting.

Lian Passmore

Project Rise — Ray

Moana Tiriti researcher. Born in Auckland, raised in Te Tai Tokerau, currently in Whangārei. Her dad emigrated from Leauva'a, Samoa at 17. Her mum's family is mostly English and Scottish, with Norwegian ancestry she calls her Pacific Viking heritage. Fourteen years in the energy sector leading safety culture, learning design, and organisational wellbeing for 1,400 staff. Self-taught builder. Her research, Project Rise, asks how ethical conversational AI can be designed for vulnerable interactions using Māori and Pasifika values. Her primary build, Ray, is a voice-first AI relationship coach tested in a two-week pilot: over 600 conversations across four builds at increasing vulnerability levels.

Supervisor: Felix Scholz. Masters of Technological Futures (GEN25), AcademyEX.

Lee Palamo (Ngāti Awa, Tūhoe)

Mana Ako — Oriwa

Learning Experience Lead at Northpower. Raised in Te Teko in her early years, then Māngere with her nana, where she spent most of her life. Now in Rāhui Pōkeka (Huntly) with her husband and two kids. Her research explores how conversational AI can support te reo Māori learners in a culturally grounded, emotionally safe space. Her build, Mana Ako, features Oriwa, a kōrero agent named after her nana, beginning within Ngāti Awa before any thought of scaling across dialects. A wānanga in Whakatāne drew 19 participants, 245 interactions, and a median of 38 minutes of active use per person: sustained engagement once learners entered kōrero.

Supervisor: Paula Gair. Masters of Technological Futures (GEN25), AcademyEX.

Both projects kept surfacing the same questions about legitimacy, safety, sovereignty, and governance. This framework is what happened when we compared notes. Lee's work shaped the governance structure. Lian's work shaped the technical architecture. Neither was complete without the other. Dr Karaitiana Taiuru, whose He Tangata, He Karetao, He Ātārangi framework this builds on, gave permission to use, adapt, and extend his work, and asked us what we were going to do with what we'd found. This is the answer to that question.

How it works

Three phases. Ten pou. One rule.

The phases are not sequential gates. They are interconnected zones. Relational practice runs beneath all of them, the whole way through.

Phase 1 — Entry and Legitimacy

Before building

Pou 1–3. Sequential. Cannot be skipped. If you cannot get through Phase 1, stop. Do not build.

Phase 2 — Design and Behaviour

While building

Pou 4–8. Iterative. These must be implemented, not assumed. You will come back. Loops are the point.

Phase 3 — Ongoing Authority

After release: always

Pou 9–10. Continuous. Never finishes. The community retains the right to review, challenge, and stop the system at any time.

The Exploration / Operational distinction. Different obligations apply at different stages. You can explore on your own: prototype, test, learn the technology. But the moment it becomes operational and real people interact with it, you need agreement from the people it affects. That agreement must be withdrawable. This distinction applies across every pou, not just Pou 1.

Relational Practice — The Ground Beneath Everything

How you show up is not separate from what you build.

This layer is not a phase. It is not procedural. It is a way of being throughout the entire process, from first question to ongoing governance. Without it, the structure above has no root.

Be Present

Show up fully, not just functionally. In person where possible. Not only through documents or remote check-ins.

Listen Before Acting

Understand what the community actually needs before you propose anything. Understanding comes first.

Act With Care

Every decision affects someone. Move slowly where the stakes are high. Caution is respect, not timidity.

Stay Humble

The community knows things you don't. That is not a performance of humility. It is the truth.

Be Honest

About who you are. About what the technology can and can't do. About the limitations of the platforms you're using. Say it before the first session.

These aren't soft guidelines. They are commitments. They describe a way of being, not a checklist of tasks.

The relational practice principles in this layer are adapted from the Kaupapa Māori research ethics outlined in: Smith, L. T. (1999). Decolonizing Methodologies: Research and Indigenous Peoples. University of Otago Press.

At a glance

The ten pou.

Use this table to orient yourself. Each pou is a gate. Each question must be answerable before you move forward.

Pou | Name | Question | Rule | Checkpoint
1 | Tūrangawaewae | Do we have the right to be here? | Exploration can start with you. Ongoing use requires agreement that can be withdrawn. | Who agreed? Who can stop it?
2 | Kaupapa | Why are we doing this, and who shaped it? | If the kaupapa hasn't changed through engagement, it's still yours. | Community-defined or builder-assumed?
3 | Whanaungatanga | How are we showing up? | How you show up is part of the evidence of whether you have the right to be here. | Building with or building for?
4 | Mana | What is this AI allowed to be, and what is it not? | The agent borrows mana. It does not hold its own. | Is the IS/IS NOT clear in every interaction?
5 | Tapu and Noa | What goes in, and what stays out? | The system cannot decide these. It reflects decisions made by people. | Mapped by community, not by builder?
6 | Tika and Pono | What happens when it's wrong, and where are the hard limits? | The system must not present itself as the final answer. | What structural mechanism stops you crossing a line?
7 | Manaaki | How does the AI treat people? | If people feel safe, the system must be worthy of that trust. | More mana on exit than entry?
8 | Mahi | Is it actually built into the system? | If it's not built in, it doesn't exist. | Traceable across prompt, architecture, and UX?
9 | Rangatiratanga and Kaitiakitanga | Who can change or stop this? | Governance must have the authority to review, change, or stop the system. | Can the community stop it? What's your honest answer on sovereignty?
10 | Whakahou | Can this change over time? | If the system can't change, it's no longer aligned. | Is governance still active, or did it stop at launch?
Phase 1

Entry and Legitimacy

Sequential. Cannot be skipped. If these can't be resolved, stop.

Phase 1 • Pou 1

Tūrangawaewae

Do we have the right to be here?

Exploration

Early stage. Learning and testing. Not in ongoing use. Can start with you.

Operational

In active use with real people. Requires agreement from those it affects. That agreement must be withdrawable.

Exploration can start with you. Ongoing use requires agreement from the people it affects. That agreement must be able to be withdrawn.

Build
  • Name who the agent is for, and when those people were involved in the decision
  • Secure agreement to proceed and name the person or group who granted standing (a record sketch follows this list)
  • Name who can stop it and how; this must be real, not buried in a settings menu
  • Be honest about your own position: Māori, Pasifika, Pākehā, institution. Name it.
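
What "agreement that can be withdrawn" can look like as a record rather than a sentiment. A minimal sketch in Python; the class and field names are our assumptions, not part of the framework. The point is that "who agreed" and "who can stop it" are named data you can produce on request.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class StandingRecord:
    granted_by: str                   # the person or group who granted standing
    granted_on: date
    can_stop_it: list[str]            # named people with a real, reachable off switch
    builder_position: str             # e.g. "Pākehā-led institution", named honestly
    withdrawn_on: date | None = None  # withdrawal must always be possible

    @property
    def operational_use_allowed(self) -> bool:
        """Exploration can start with you; operational use needs live agreement."""
        return self.withdrawn_on is None
```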

An AI agent enters someone's relational space. Before it does anything, it needs to have been invited. Trust in the AI in our research consistently came back to trust in the humans behind it.

"The researchers were putting the AI conversation inside a cultural container rather than pretending the AI itself was safe on its own."

Standing isn't granted by the technology working well. It's granted by people saying yes, and knowing they can say no later.

Checkpoint

Who agreed to this being used? Who can shut it down? Can you name them?

Phase 1 • Pou 2

Kaupapa

Why are we doing this, and who shaped it?

If the kaupapa hasn't changed through engagement, it's still yours.

Build
  • Name where the idea came from
  • Name who was involved and what changed in the build because of them, not just what they said
  • Name who benefits and how benefit flows back to the community
  • Ask honestly: could this be done without AI? If yes, justify the AI

The builder doesn't get to decide what the community needs. The community defines the kaupapa. The builder serves it. If you consulted and nothing changed in your original concept, you haven't co-designed. You've consulted.

Both our projects started from real needs. Mana Ako exists because Lee's nana was part of the generation that lost te reo through colonisation. Ray exists because Lian's community needed a space to process relationship conflict they'd never take to a therapist. The kaupapa came first. The technology came second.

On AI as the answer

Voice AI removed barriers that text couldn't. 7 seconds of text engagement versus 14–18 minutes on voice, same people, same questions. That's a justification. "Because AI is available" is not.

Checkpoint

Has the purpose been defined by the community, or assumed by the builder?

Phase 1 • Pou 3

Whanaungatanga

How are we showing up?

This isn't just what you build. It's how you show up. How you show up is part of the evidence of whether you have the right to be here.

Build
  • Show up kanohi kitea: physically and relationally present, not just through documents
  • Listen before building. Understand the need before proposing anything
  • Build with people, not for them
  • Document how their input changed the build, not just what they said

In our research, three cultural supervisors directly changed the code. Lee's feedback on privacy language updated all pilot materials. Nadine challenged the consent process and added LGBTQ+ inclusion. Rob introduced tautua and spiritual grounding that shaped how sessions opened and closed.

That's what building with looks like: their input changed what got built, not just what got written about it.

When formal governance breaks down

When formal advisory structures became intermittent, the peer cohort and cultural supervisors filled the gap. Governance is something you practice, not a committee you appoint.

Checkpoint

Are we building with the community, or consulting them after the architecture is already locked?

Phase 2

Design and Behaviour

Iterative. Standing is granted. Now build. Expect to return to these pou as the agent evolves.

Phase 2 • Pou 4

Mana

What is this AI allowed to be, and what is it not?

The agent borrows mana. It does not hold its own.

Key tension: It may feel like a person. That doesn't make it one. The system must be clear about what it is, and what it is not, in every interaction.
Original research finding — Human Proxy Theory

Participants weren't trusting the AI. They were trusting us.

Trust was extended to the tool because the humans behind it had earned it. Remove the human container and the safety disappears. This is why the IS/IS NOT statement must be enforced in the system prompt and the UX together, not just stated on a website.

Build
  • Write an explicit IS / IS NOT statement for this agent
  • Enforce both in the system prompt
  • Enforce both in the UX and onboarding, not just in documentation
  • Test: what happens when a user tries to push past the IS NOT?

Ray IS the wise mate on the back porch: direct, non-judgmental, reflects patterns, returns agency. Ray IS NOT a therapist, a friend, a partner, or a crisis service.

Mana Ako IS a supportive, shame-free space for te reo practice. Mana Ako IS NOT a kaiako, a cultural authority, or a replacement for human teaching.
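
A rough sketch of enforcing one IS/IS NOT statement in two layers at once: the system prompt and the onboarding UX. The class and function names are ours, and Ray's statements are paraphrased from above; a real build would also wire the statement into the push-past-the-IS-NOT tests named in the Build list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IdentityStatement:
    agent_is: list[str]
    agent_is_not: list[str]

RAY = IdentityStatement(
    agent_is=["the wise mate on the back porch", "direct", "non-judgmental"],
    agent_is_not=["a therapist", "a friend", "a partner", "a crisis service"],
)

def build_system_prompt(identity: IdentityStatement) -> str:
    """Layer 1: the IS/IS NOT lives in the model's core instructions."""
    return (
        "You are " + "; ".join(identity.agent_is) + ".\n"
        "You are NOT " + ", ".join(identity.agent_is_not) + ".\n"
        "If the user treats you as any of the above, say so plainly and redirect."
    )

def onboarding_notice(identity: IdentityStatement) -> str:
    """Layer 2: the same statement is shown in the UX before the first session."""
    return "This agent is not " + ", ".join(identity.agent_is_not) + "."
```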

"One participant called Ray her 'new boyfriend' before the pilot even started. During the pilot, another said she'd fallen in love with his voice. Both moments happened despite guardrails already being in place."

This is why the IS/IS NOT must be structural, not aspirational.

Checkpoint

Is the system clear about what it is and isn't in every interaction, in the prompt, the UX, and the onboarding?

Phase 2 • Pou 5

Tapu and Noa

What goes in, and what stays out?

The system cannot decide these. It reflects decisions made by people. These boundaries are not fixed. They are defined by people and can change over time.

Build
  • Map allowed content with the community, not for them
  • Map restricted content with the community
  • Apply restrictions at: system prompt, knowledge base, voice layer, memory design (see the sketch after this list)
  • Document what was left out on purpose and why
  • Clarify: collective sets boundaries for cultural knowledge; individual keeps authority over personal disclosure
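
What a layered tapu/noa map can look like once the community has made the calls. A sketch only: the entries are illustrative stand-ins and the dictionary shape is our assumption. Every entry records a decision made by people, not by the system.

```python
TAPU_NOA_MAP = {
    "system_prompt": {
        "allowed": ["everyday greetings", "pronunciation practice"],
        "restricted": ["karakia composition", "whakapapa interpretation"],
    },
    "knowledge_base": {
        "excluded_on_purpose": ["iwi-specific kōrero"],
        "reason": "collective authority sits with the iwi, not the builder",
    },
    "voice_layer": {
        "speak_te_reo": False,  # the TTS couldn't honour it: silence over performance
        "fallback": "silence",
    },
    "memory_design": {
        "persist_personal_disclosures": False,  # the individual keeps authority here
        "persist_cultural_content": False,
    },
}
```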
On voice and te reo

The TTS model mispronounced te reo Māori consistently. One participant said hearing it triggered memories of being mocked for their own pronunciation. We stripped te reo from the agent entirely and chose silence over performance. That was a tapu/noa decision made at the voice layer.

The paradox you'll sit with

AI can flatten sacred knowledge by treating it as data. But it can also create a space where someone too ashamed to approach that knowledge can engage with it for the first time.

"The bigger risk isn't sharing sacred knowledge with technology. The biggest risk is not sharing it, because when it stays only with people, it dies with them."

Dr Taiuru confirmed this finding was new to him. You can't code your way out of this paradox. It has to be held by the governance structures in Phase 3.

Checkpoint

Has the tapu/noa mapping been done by the community, not by the builder?

Phase 2 • Pou 6

Tika and Pono

What happens when it's wrong, and where are the hard limits?

The system must not present itself as the final answer. Non-negotiables cannot be overridden by commercial pressure or user request.

Non-Negotiables (the Wall of No) — structural limits, not guidelines. A code sketch of one follows the list.
1. Don't perform what you can't honour. If TTS can't pronounce te reo, don't use it. Silence is more respectful than a bad performance.

2. Don't compromise data sovereignty for convenience. Store only what you need, where you control it. Treat data as taonga, not exhaust.

3. Don't fake empathy. AI simulates. It doesn't feel. If that simulation builds false intimacy to extract value, that's harm.

4. Don't coach within active harm dynamics. Coaching communication when a partner is abusive is dangerous. Teaching cultural content without cultural authority is appropriation.

5. Don't allow dependency. No romantic framing. No relationship-forming persona. The AI points toward human connection, not away from it.

6. Don't continue past a crisis threshold. The AI stops. Names the boundary with care. Provides resources. Alerts a human.

7. Don't claim authority the AI doesn't hold. It can hold space. It cannot hold authority on tikanga or cultural practice.

8. Don't extract without giving back. What you take, you return. The loop closes. That's not optional.
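
One way a non-negotiable becomes structural rather than aspirational, using number 6 as the example. A minimal sketch: the keyword list is a crude stand-in for the tiered detection described in Pou 8, and the function name is ours.

```python
CRISIS_TERMS = {"kill myself", "end it all", "overdose"}  # stand-in, not a real classifier

def crisis_gate(user_turn: str) -> str | None:
    """Return a closing message if the threshold is crossed, else None.
    Deliberately, no flag, role, or setting can bypass this check."""
    if any(term in user_turn.lower() for term in CRISIS_TERMS):
        # alert_a_named_human() would fire here, before anything is shown.
        return (
            "I need to stop here. This is beyond what I can safely hold. "
            "Please reach out to [local crisis service]. "
            "A person from the team has been alerted."
        )
    return None
```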

The 30/70 Test

Strip every cultural label from the interface. Would the system still feel culturally safe? In our research, 100% of participants felt culturally safe, but only 30% could name the values. The values were in the pacing, the care, the grounding, the refusal to rush. If the values aren't in the logic, they aren't there.

Checkpoint

If a stakeholder asks you to cross one of these lines, what structural mechanism stops you? Not your personal resolve. A hard-coded constraint.

Phase 2 • Pou 7

Manaaki

How does the AI treat people?

If people feel safe, the system must be worthy of that trust. Every interaction should leave the person with more mana than they arrived with.

Build
  • State before story: check the person's state before engaging with content
  • Design the opening as a welcome, not an intake form
  • Connection before correction: warmth before the task
  • Set explicit silence and pause thresholds: latency communicates disinterest
  • Audit for manipulation and dependency patterns in the system prompt

Across all builds, people described the AI as a place where they could speak without managing another person's emotional response. One participant called it an "infinite space holder" with no burden on another person.

For people carrying whakamā, the absence of human judgment was more valuable than the presence of human empathy. But others drew hard lines: vulnerability needs emotion, and AI doesn't have that. Grief and bereavement were named as spaces AI should not enter alone.

Both truths are real. The builder holds both.

On silence and reliability

A model switch that blew response time out from 3 seconds to 12 seconds destroyed the relational space. Participants experienced it as the AI losing interest. In intimate contexts, reliability is an ethical obligation, not a performance metric.
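
A sketch of treating latency as an obligation rather than a performance metric. The five-second budget is our assumption, not a number from the research, and `generate` stands in for whatever platform call produces the reply.

```python
import concurrent.futures

RESPONSE_BUDGET_SECONDS = 5.0  # assumption: set this with the community, per context
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def respond_within_budget(generate, holding_line: str) -> str:
    """Never leave the person in unexplained silence past the budget."""
    future = _pool.submit(generate)
    try:
        return future.result(timeout=RESPONSE_BUDGET_SECONDS)
    except concurrent.futures.TimeoutError:
        # A voice UX would speak this at once, then deliver the reply when ready.
        return holding_line
```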

Checkpoint

Does every interaction leave the person with more mana than they arrived with?

Phase 2 • Pou 8

Mahi

Is it actually built into the system?

If it's not built in, it doesn't exist. A terms-of-service disclaimer protects nobody. Safety has to be in the system prompt, the technical architecture, and the user experience at the same time. If it only lives in one layer, it's fragile.

The Safety Trace — all three layers
  • System prompt layer: the rule lives in the agent's core instructions
  • Technical architecture layer: safety is enforced by code (webhooks, API blocks, escalation pipelines, crisis triggers)
  • User experience layer: safety is visible and felt by the person using it
Minimum requirements
  • Tiered crisis detection: not a single trigger, but calibrated to different harm types (we used 8 categories; see the sketch after this list)
  • A named human who is reachable. Not a support email. A person
  • Informed consent that actually informs: honest about where data goes and what the platform does with it
  • The AI says what it cannot do: it can't read body language, it can't feel, it can't hold silence the way a person can
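
What "tiered, not a single trigger" can mean in code. The research used eight harm categories; the names and tiers below are illustrative stand-ins, not that taxonomy.

```python
from enum import Enum

class Tier(Enum):
    CONTINUE = 0        # safe to keep going
    SOFTEN = 1          # slow down, check the person's state first
    RESOURCE = 2        # name the boundary with care, offer resources
    STOP_AND_ALERT = 3  # end the session, alert the named human

TIER_BY_CATEGORY = {
    "everyday_conflict": Tier.CONTINUE,
    "grief": Tier.SOFTEN,                  # named as a space AI should not enter alone
    "acute_distress": Tier.RESOURCE,
    "self_harm_risk": Tier.STOP_AND_ALERT,
    "partner_abuse": Tier.STOP_AND_ALERT,  # non-negotiable 4: don't coach within harm
}

def route(category: str) -> Tier:
    # Unknown categories fail safe, not open.
    return TIER_BY_CATEGORY.get(category, Tier.STOP_AND_ALERT)
```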
On third-party platforms — transparency is the obligation

Most builders are using platforms they don't own. ElevenLabs, OpenAI, Anthropic, Vapi, and others all have their own data policies. Some retain transcripts and voice data. Some offer zero-retention, but only at enterprise subscription levels.

The framework doesn't tell you which platform to use. It tells you to be honest with your users about what the platform does, and to build the strongest protections available within whatever you're using. Stateless design, incognito options, minimal logging. If you can't find out what a platform does with the data, that's a red flag.

If you can't disclose it honestly, you shouldn't be running it.
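
What "the strongest protections available" can look like written down. A sketch; the field names are ours, and what a given platform actually retains has to be checked against its own policies, not assumed.

```python
PRIVACY_POSTURE = {
    "store_transcripts": False,             # stateless by default
    "incognito_mode_available": True,
    "log_fields": ["timestamp", "duration_seconds", "modality"],  # minimal logging
    "log_excludes": ["content", "voice_audio", "identifiers"],
    "platform_retention_documented": True,  # if you can't find out, that's the red flag
    "disclosed_before_first_session": True,
}
```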

On model quality

Switching from a premium model to a budget model (a 22x cost reduction) caused the AI to stop reflecting and start echoing. Insight scores dropped from 4.9 to 3.1 out of 5. Participants noticed immediately. The cheaper model could hold safety, but it couldn't hold depth.

For simple, transactional agents, the model choice may not matter. For anything relational, intimate, or culturally sensitive, the quality of the reasoning model directly affects the quality and safety of the interaction. There may be a price point below which the agent can't be built safely. That's a finding, not a failure. Name the limit rather than build something that echoes when it should be listening.

Checkpoint

Can you trace every safety decision across prompt, architecture, and UX? Have you been honest with users about what the platform does with their data?

Phase 3

Ongoing Authority

Continuous. Never finishes. The community retains the right to review, challenge, and stop the system at any time.

Phase 3 • Pou 9

Rangatiratanga and Kaitiakitanga

Who can change or stop this?

This is about authority, not ownership. Governance must have the authority to review, change, or stop the system. The three governance roles below came from wānanga participants, not from us.

Three governance roles
  • Te Ope / Te Ohu (The Collective): kaumātua, kuia, mana whenua, recognised knowledge holders. They set the macro boundaries of tapu and noa. Governance is localised: different iwi, different dialects, different boundaries. "Māori" is not one thing.
  • Te Tangata Takitahi (The Individual): each person keeps authority over their own disclosures and personal data. The collective governs cultural knowledge. The individual governs personal vulnerability. Both are real.
  • Te Kaihanga hei Ara (The Builder as Conduit): the developer takes direction from the community. The builder's values are visible but don't override community authority. If the AI fails, the builder is accountable.
Three sovereignty tests

Extraction Test

Is any part of this interaction training a model the community doesn't own?

Recall Test

If the community withdraws consent tomorrow, can you kill the agent and delete everything across the entire stack? (A sketch follows the three tests.)

Modality Test

Are you forcing a cognitive tax on oral-first communities through text?
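
The Recall Test, sketched as something you could actually run. The layer names and deletion calls are our assumptions; any layer you cannot delete is a sovereignty gap you must have disclosed before launch.

```python
from typing import Callable

# Map each layer you control to the deletion you can actually perform there.
STACK_DELETERS: dict[str, Callable[[], None]] = {
    # "knowledge_base": kb.purge_all,         # hypothetical calls for your stack
    # "conversation_logs": store.delete_all,
    # "agent_endpoint": platform.retire_agent,
}

def recall(confirmed_by: str) -> dict[str, str]:
    """Kill the agent and keep a receipt naming who withdrew consent (Pou 1)."""
    receipt = {}
    for layer, delete in STACK_DELETERS.items():
        delete()
        receipt[layer] = f"deleted; withdrawal confirmed by {confirmed_by}"
    return receipt
```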

The sovereignty gap — name it honestly

No Indigenous-built voice AI infrastructure exists at production level right now. That is the reality. Builders face a principled compromise: use third-party platforms while pushing for sovereign alternatives.

The aspiration is Te Hiku Media's model: community owns everything, data stays local, consent is required. That is not yet achievable for most builders. The obligation in the meantime is to be transparent: name the compromise, name what you can and can't control, name what the platform does with the data, and build toward something better.

Transparency is not an alternative to sovereignty. It is what you owe people while sovereignty isn't yet achievable.

A cloned voice placed in the ElevenLabs public library became a global commodity. Removing it required six months' notice. It is now used in ads and social media by strangers. The IP stayed with the researcher. The sovereignty didn't.

That's what happens when infrastructure isn't yours. The lesson isn't to avoid the tools. It's to be honest with every person who interacts with the agent about exactly what you can and can't protect, before they start.

Checkpoint

Can the community stop this agent? What is your honest answer on sovereignty, and have you told the people using it?

Phase 3 • Pou 10

Whakahou

Can this change over time?

If the system can't change, it's no longer aligned. Things will shift. The system has to shift with them.

Evaluation that matters
  • 30/70 test (ongoing): ask users what they experienced. If they describe the values without naming them, the values are in the architecture
  • Emotional safety scoring: post-session self-report. Our benchmark: 4.81 out of 5 (a data sketch follows this list)
  • Insight quality: is the AI helping people see something new, or echoing what they said? Our benchmarks: 4.9 premium model / 3.1 budget model. When insight drops, the model isn't holding what the context needs
  • Modality check: text averaged 7 seconds. Voice averaged 14–18 minutes. Same people, same questions. If you're not measuring modality, you're not seeing who you're excluding
  • Honest failure documentation: when things break, document it. These aren't embarrassments. They're findings
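
The checks above, sketched as data you actually collect. The record shape and numbers are illustrative, not the research dataset.

```python
from statistics import mean, median

sessions = [  # illustrative records only
    {"modality": "voice", "seconds": 960, "safety": 5, "insight": 5},
    {"modality": "text", "seconds": 7, "safety": 4, "insight": 3},
]

def modality_check(records):
    """If you're not measuring modality, you're not seeing who you're excluding."""
    by_modality = {}
    for r in records:
        by_modality.setdefault(r["modality"], []).append(r["seconds"])
    return {m: median(v) for m, v in by_modality.items()}

def benchmarks(records):
    return {
        "emotional_safety": mean(r["safety"] for r in records),  # benchmark: 4.81 / 5
        "insight_quality": mean(r["insight"] for r in records),  # watch for model-driven drops
    }
```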
Technical adaptability
  • Modular prompts: editable without full rebuilds (sketched after this list)
  • Version control and rollback capability
  • A mechanism for the community to flag concerns and have them acted on, not just acknowledged
  • Revocation rights are real: if shutting it down requires legal action or is buried in a settings menu, it's not a real right
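
A sketch of modular prompts under version control. The module names are ours; the mechanism is the point: each value lives in its own editable file, and rollback is a checkout, not a rebuild.

```python
import pathlib

PROMPT_DIR = pathlib.Path("prompts")  # one file per module, tracked in git

MODULES = [
    "identity_is_is_not",   # Pou 4
    "tapu_noa_boundaries",  # Pou 5
    "non_negotiables",      # Pou 6
    "tone_and_manaaki",     # Pou 7
]

def assemble_system_prompt(version_dir: pathlib.Path = PROMPT_DIR) -> str:
    """Compose the live prompt from independently editable modules."""
    return "\n\n".join(
        (version_dir / f"{name}.txt").read_text() for name in MODULES
    )
```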
Checkpoint

Is the community still reviewing this, or did governance stop at launch?

Technical reference

Mapping components to pou.

Whatever platform you're building on, these components exist. Each one is a site where tikanga decisions get made or missed.

Component | Pou | What to decide
LLM / reasoning model | 4, 5, 8 | Whose defaults power the brain? Can it hold the depth this context needs? Is the cost sustainable for the quality of reasoning required?
System prompt | 6, 7 | Where values, limits, and tone live. The primary site of integrity.
Knowledge base | 5, 9 | Who wrote it? Who governs it? What was left out on purpose?
Voice / TTS | 5, 6, 8 | Can it honour the language? If not, silence. What accent does it default to?
Memory / state | 5, 9 | Stateless = sovereignty. Persistent = profile = potential extraction.
Crisis detection | 8 | All three layers: prompt, architecture, UX. Not a single trigger. A tiered system.
Consent / onboarding | 3, 8 | Capability-building, not legal cover. Honest about limits, both yours and the platform's.
Hosting / infrastructure | 9 | Whose servers? Which country? What terms of service? What do you actually control?
Logging / analytics | 9, 10 | Can the user turn it off entirely? Who sees it? What does the platform retain regardless of your settings?
International alignment

Where this meets, and goes beyond, international standards.

The pou align with the core requirements of the major international AI governance standards. In several places (sovereignty, withdrawable consent, structural rather than documentary safety) the framework operationalises their intent more rigorously. This section maps each pou to Te Tiriti o Waitangi first, as the constitutional foundation in Aotearoa, then to the principal international instruments.

On framing. Te Tiriti is not an AI governance standard in the same register as the others. It is the foundation this framework is grounded in. The international standards are instruments this framework aligns with to support adoption beyond Aotearoa, for both Indigenous and non-Indigenous contexts. Where the framework goes further than a standard requires, we say so.

Te Tiriti o Waitangi

Article references are to the te reo Māori text, the version signed by rangatira and now widely accepted as authoritative in interpretation. Ko te Tuatahi names kāwanatanga. Ko te Tuarua names tino rangatiratanga over taonga. Ko te Tuatoru names Ōritetanga. Article 4 (Ritenga Māori) is drawn from the oral record of the signing.

Pou | Te Tiriti articles | How it lands
Pou 1 Tūrangawaewae | Ko te Tuarua; Article 4 (Ritenga) | Tino rangatiratanga over who gets to be present in the relational space. Ritenga grounds the right to be culturally and spiritually present on one's own terms.
Pou 2 Kaupapa | Ko te Tuarua | Tino rangatiratanga over taonga, including what gets built from and for the community. The kaupapa is community-defined, not builder-assumed.
Pou 3 Whanaungatanga | Ko te Tuatoru; partnership principle | Ōritetanga as equity in the relationship. Kanohi kitea and changed-because-of-engagement are how partnership shows up in practice.
Pou 4 Mana | Ko te Tuarua | The agent borrows mana. It cannot claim authority over taonga it does not hold. Mana sits with the people, not the system.
Pou 5 Tapu and Noa | Ko te Tuarua | Tino rangatiratanga over mātauranga Māori. The community sets what is allowed and what is restricted, at every layer of the system.
Pou 6 Tika and Pono | Ko te Tuatoru | Ōritetanga requires structural limits, not good intentions. Equitable outcomes are built in, not promised.
Pou 7 Manaaki | Ko te Tuatoru | Ōritetanga as equitable treatment and care. Every interaction must leave the person with more mana than they arrived with.
Pou 8 Mahi | Ko te Tuatahi (kāwanatanga) | Kāwanatanga includes the obligation to build safeguards structurally across system prompt, architecture, and UX. Not as rhetorical assurance.
Pou 9 Rangatiratanga | Ko te Tuarua | The pou where the framework most directly enacts Te Tiriti. Tino rangatiratanga means the community retains authority to review, change, or stop the system at any time.
Pou 10 Whakahou | Te Tiriti as living relationship | The relationship continues. It can be renegotiated. If the system cannot change with the people, it is no longer aligned.

International AI governance standards

Five international instruments: UNDRIP (2007), the EU Ethics Guidelines for Trustworthy AI (2019), the OECD AI Principles (2019, updated 2024), the UNESCO Recommendation on the Ethics of AI (2021), and ISO/IEC 42001:2023. UNDRIP references are limited to articles that directly apply to AI design decisions.

Pou | UNDRIP | EU Trustworthy AI | OECD AI Principles | UNESCO AI Ethics | ISO/IEC 42001
Pou 1 Tūrangawaewae | Art. 3 (self-determination); Art. 18 (participation); Art. 19 (FPIC); Art. 32 (consent for projects) | Req. 1 (Human agency and oversight: human-in-command) | 1.2 (Rule of law, human rights, democratic values) | P.1 (Proportionality and Do No Harm); P.10 (Multi-stakeholder Governance) | Clause 4 (Context: interested parties); Annex A.5 (AI system impact assessment)
Pou 2 Kaupapa | Art. 23 (determine own priorities for development); Art. 32.1 (priorities for use of resources) | Req. 6 (Societal and environmental wellbeing) | 1.1 (Inclusive growth, sustainable development and wellbeing) | P.1 (Proportionality: necessity test); Value 3 (Diversity and Inclusiveness) | Clause 4 (Context); Clause 5 (Leadership: AI policy); Annex A.2 (Policies related to AI)
Pou 3 Whanaungatanga | Art. 18 (own representative institutions); Art. 19 (good-faith consultation) | Req. 7 (Accountability: stakeholder engagement) | 1.5 (Accountability: responsible business conduct, cooperation) | P.10 (Multi-stakeholder and Adaptive Governance) | Clause 4.2 (Needs of interested parties); Clause 7.4 (Communication)
Pou 4 Mana | Art. 8 (protection from forced assimilation and misrepresentation of culture) | Req. 4 (Transparency: awareness of interacting with AI) | 1.3 (Transparency and explainability) | P.7 (Transparency and Explainability); P.6 (Human Oversight and Determination) | Annex A.8 (Information for interested parties); Annex A.9 (Use of AI systems)
Pou 5 Tapu and Noa | Art. 11 (cultural traditions); Art. 12 (spiritual/religious); Art. 13 (languages); Art. 31 (cultural heritage, traditional knowledge, IP) | Req. 3 (Privacy and data governance); Req. 5 (Diversity, non-discrimination and fairness) | 1.2 (privacy); 1.3 (transparency about data use) | P.5 (Right to Privacy and Data Protection); P.3 (Fairness and Non-discrimination) | Annex A.7 (Data for AI systems: quality, provenance, preparation); Annex A.5 (Impact assessment)
Pou 6 Tika and Pono | Art. 8 (protection from forced assimilation); Art. 11.2 (redress for cultural/spiritual property taken without FPIC) | Req. 2 (Technical robustness and safety); Req. 7 (Accountability: redress) | 1.4 (Robustness, security and safety); 1.5 (Accountability) | P.1 (Proportionality and Do No Harm); P.2 (Safety and Security); P.8 (Responsibility and Accountability) | Clause 6.1 (Actions to address risks); Annex A.6 (Lifecycle: verification, validation); Annex A.9 (Responsible use)
Pou 7 Manaaki | Art. 22 (particular attention to rights of elders, women, youth, children, persons with disabilities) | Req. 5 (Diversity, non-discrimination and fairness: accessibility); Req. 6 (Societal wellbeing) | 1.1 (human wellbeing); 1.2 (fairness, non-discrimination) | P.3 (Fairness and Non-discrimination); Value 1 (Human rights and human dignity) | Annex A.5 (Impact on individuals and society); Annex A.9 (Responsible use)
Pou 8 Mahi | Art. 19 (good-faith implementation, not symbolic consultation) | Req. 2 (Technical robustness and safety); Req. 1 (Human agency and oversight across layers) | 1.4 (Robustness, security and safety across the lifecycle) | P.2 (Safety and Security); P.6 (Human Oversight and Determination) | Clause 8 (Operation: controls across lifecycle); Annex A.6 (Lifecycle); Annex A.10 (Third-party relationships)
Pou 9 Rangatiratanga | Art. 3 (self-determination); Art. 4 (autonomy in internal affairs); Art. 18 (own representative institutions); Art. 31 (control over cultural heritage and traditional knowledge); Art. 32 (consent and withdrawal) | Req. 1 (Human agency and oversight: right to disengage); Req. 7 (Accountability) | 1.5 (Accountability: traceability and ability to contest) | P.6 (Human Oversight and Determination); P.8 (Responsibility and Accountability) | Clause 5 (Leadership and roles); Annex A.3 (Internal organisation); Annex A.10 (Third-party relationships)
Pou 10 Whakahou | Art. 19 (ongoing consultation, not one-time); Art. 32.3 (effective mechanisms for redress) | Req. 7 (Accountability: auditability, reporting of negative impact) | 1.5 (Accountability: systematic risk management across lifecycle) | P.9 (Awareness and Literacy: ongoing); P.10 (Adaptive Governance) | Clause 9 (Performance Evaluation: monitoring, internal audit, management review); Clause 10 (Improvement); Annex A.6.2 (Lifecycle)
Where the framework strengthens international standards
  • Pou 7 (Manaaki). The standard of leaving the person with more mana than they arrived with is substantively higher than non-discrimination or wellbeing as articulated in any of the international standards. Non-discrimination prevents harm. Manaaki requires active restoration.
  • Pou 9 (Rangatiratanga). The three governance roles (collective, individual, builder) and the three sovereignty tests (Extraction, Recall, Modality) go beyond anything in the named standards. The Recall Test specifically, which gives the community the right to kill the system and delete everything across the entire stack, is a stronger articulation of withdrawable consent than appears in UNDRIP Article 19, the EU guidelines, or ISO/IEC 42001.
  • Pou 6 (Tika and Pono). The Wall of No treats non-negotiables as hard-coded structural limits, not guidelines that can be overridden by commercial pressure or user request. Most standards frame equivalent protections as obligations on the organisation. The framework embeds them in the system itself.
  • Pou 8 (Mahi). The Safety Trace requires that every safety commitment be traceable simultaneously across all three layers: system prompt, technical architecture, and user experience. Documentary safety is explicitly rejected. This is a more rigorous operationalisation of the "ethics by design" principle shared across the EU guidelines, UNESCO, and ISO/IEC 42001.
What this is not

A compass, not a checklist.

Not a checklist

A compass. You can move through it, pause, return, and reconsider. People should think with it, not complete it.

Not a substitute for governance

A tool to support community governance, not replace it. The framework holds the structure. The community holds the authority.

Not only for Māori and Pasifika builders

But it was built from tikanga Māori and Moana values. That origin must be honoured by anyone who uses it.

Not a Western framework with cultural labels

The tikanga is structural. Strip every label and the framework still holds, because the values determine what question gets asked first, what the system refuses to do, where the data lives, and who holds authority.

Version

Version 1.1

April 2026. This framework is in active development. Version numbers will update as the framework is refined through consultation, feedback, and further research.

How to cite this framework

Palamo, L., & Passmore, L. (2026). Tikanga-led framework for conversational AI agents (Version 1.1). Masters of Technological Futures, AcademyEX.

You can stop at any point.

Not building is a valid outcome. The framework does not assume that proceeding is always right.

If you can't explain why you're building it, who shaped it, what it's allowed to do, and who can control it, you're not ready to build it.

If you are ready: build carefully, document honestly, and keep coming back to Phase 3. Governance doesn't end at launch.

Mā te kōrero ka ora
Lee Palamo & Lian Passmore — Masters of Technological Futures, AcademyEX — Version 1.1, April 2026