
Build Code Practice


"A framework for making your values visible in what you build"

If you build things for other people and you want a practical framework for making your values visible in every decision you make, this is for you. If you've ever drifted toward a shortcut that compromised the people you were building for and needed language for why that felt wrong, this is also for you.

What is a build code?

Most people who build things have values. Not everyone makes those values explicit.

A build code is the practice of doing exactly that: translating your ethics and your design instincts into a document that governs every decision you make, from database schema to system prompt to the words on a consent form. It is not a privacy policy. It is not a set of design principles. Those things exist to protect the provider or guide the user. A build code exists to keep the builder in integrity.

Think of it as a compass. It doesn't tell you every step. It keeps you oriented when the messy middle of development gets hard, when commercial pressure, time constraints, or technical limitations tempt you toward shortcuts that compromise the people you're building for. Without it, values stay implicit. And implicit values in tech tend to default toward what is easiest, most profitable, or most familiar, which is rarely centred on the people most often excluded.

This is not a framework for everyone. If you're building tools to maximise engagement or push a particular narrative, this is not what you're looking for. But if you want to call yourself an ethical tech developer, if you want your technology to shift power, not just money, then having a code you live and build by is not optional. It is the thing that keeps you from drifting toward harm without noticing.

What I know from eighteen months of building is that a build code is not something you write and then execute. It is something you discover under pressure. Three moments across four builds shifted mine from aspiration into architecture (dated study group evidence of each shift is in Appendix C3):

Error: Performance Sep 2025

The Cultural Performance Failure

I programmed the AI agent to use te reo Māori greetings. During testing, it butchered every word. The realisation was immediate: using the language performatively while the technology was inadequate was a violation of manaakitanga (care as obligation, not gesture), not an expression of it. I deleted every instance of te reo from the voice instructions. The build code shifted from "we value te reo Māori" to "we refuse to use te reo Māori if the text-to-speech (TTS) engine makes it a performance of disrespect."

This wasn't just a builder's instinct. Pre-wānanga participants confirmed it from the other side. W-11 described how the mispronunciation had directly limited how much she opened up: "I may have shared more if I could have interacted with you in Māori. The fact that you couldn't understand some of the words I used... actually limited how much I warmed to you."

W-04 named the stakes from a different angle: "'cause if we don't teach them what is right and what is wrong... it's gonna end up in the colonisers' hands." That is what the te reo decision was protecting against. Not just mispronunciation. Erasure.
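That refusal can live as a pre-flight guard rather than as intent alone. A minimal sketch of the idea, assuming a hypothetical phrase list and a set of TTS-verified language codes; none of these names come from the actual build:

```python
# Illustrative sketch only: a guard that drops phrases in any language
# the TTS engine has not been verified to pronounce with respect.
# "Silence over performance": an unverified phrase is removed entirely,
# never risked as a mispronunciation. All names here are hypothetical.

GREETINGS = [
    ("Kia ora", "mi"),  # te reo Māori (ISO 639-1 code "mi")
    ("Hello", "en"),
]

def build_voice_greetings(tts_verified_langs: set) -> list:
    """Return only the greetings the TTS engine can carry with integrity."""
    return [
        phrase for phrase, lang in GREETINGS
        if lang == "en" or lang in tts_verified_langs
    ]
```

With an unverified engine the te reo greeting simply never reaches the voice layer; verifying the engine restores it without touching the rest of the system.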

Patch: Architecture Nov 2025

The Bolted-On Realisation

During the Leadership AI Coach build, I recognised that the combination of a cloned voice and persistent memory created a relational bond that a simple privacy policy could not protect. The ethical obligations of that bond had to be designed as architecture (buttons, workflows, structural blocks) not just instructions in a prompt. Integrity is hard-coded, not promised.

Error: Latency Feb 2026

The Vā vs. Latency Crisis

During the Ray pilot, I switched the underlying LLM to manage costs. The average response time blew out from 3 seconds to 12.58 seconds. Insight scores crashed. The vā, the sacred relational space between people, was destroyed by a technical decision I'd framed as a budget call. In intimate contexts, technical failure is ethical failure. Reliability is not optional.
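One way to make that budget call visible in code is a latency watchdog that treats response time as a relational budget, not a cost line. A hedged sketch; the threshold and names are hypothetical, not taken from the Ray build:

```python
# Illustrative sketch: latency as an ethical budget. A breach is a
# signal to roll back the model choice, because in an intimate context
# a twelve-second silence is a relational failure, not a cost saving.
from statistics import median

LATENCY_BUDGET_S = 4.0  # hypothetical ceiling for an intimate voice context

def breaches_latency_budget(recent_response_times: list) -> bool:
    """True when the median recent response time breaks the budget."""
    if not recent_response_times:
        return False
    return median(recent_response_times) > LATENCY_BUDGET_S
```

The check is deliberately dumb: the point is not clever monitoring but pre-committing to a number so the decision is made before the pressure arrives.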

Each of these produced a clause in the build code. None of them came from planning. They came from breaking something I cared about and having to name what I refused to break again.

A build code is not what you say you value. It is what your system does when nobody is watching.

Document      | Terms of Service      | Design Principles          | Build Code
Orientation   | External              | Functional                 | Internal
Purpose       | Protects the provider | Guides the user experience | Governs the builder's integrity
Who it's for  | Legal compliance      | Users                      | The builder
Is it living? | Rarely                | Sometimes                  | Always

The Master Chief: who is building this?

My name in every AI tool I use is Master Chief. I love the Halo show and I love what Master Chief represents: a principled operator who holds a code even when the mission gets complicated. It is my alter ego as a builder.

Developer_Dossier.md

Master Chief

Systems-thinking Storyteller

CliftonStrengths Profile

Ideation • Learner • Futuristic • Activator • Focus

Design Philosophy

"Every line of code carries an opinion."

Non-Negotiables

  • const No extraction without reciprocity
  • const Absolute data sovereignty
  • const Integrity is hard-coded
  • const No automating human presence
  • const No cultural performance

Still Learning

  • Somatic sustainability (avoiding the Red Zone)
  • Cultural humility in Māori spaces
  • Staying curious longer before jumping to solutions

My strengths profile (CliftonStrengths) is: Ideation, Learner, Futuristic, Activator, Focus. I am a systems-thinking storyteller. I think in problems I want to solve and stories I want to tell. My operating system is grounded in Kaupapa Māori and Pacific values, specifically manaakitanga (care), whanaungatanga (relationship), and kaitiakitanga (guardianship). These aren't values I borrowed for this project. They're the reason the seven-dimension framework below looks the way it does. Manaakitanga shaped how I defined purpose, whanaungatanga shaped what I count as safety, and kaitiakitanga gave me the language for data sovereignty.

I believe every line of code carries an opinion. I refuse to build "tech for tech's sake." I define myself partly by what I refuse to do: I will not build persuasive tech designed for addiction. I will not automate human connection where human presence is required. I will not compromise data sovereignty for convenience.

The moment I am most proud of as an expression of my build code was the Incognito Mode feature in the Leadership AI Coach. I built it myself, not because anyone told me to, but because I had just launched the Ray pilot and experienced the full ethical weight of what I was holding. Users were coming in vulnerable. They were crying. They were disclosing things that had nowhere else to go. And I understood, viscerally, that they needed to be able to engage without any trace being kept. Not a privacy policy that said it was safe, but a physical mechanism that made it structurally impossible for a record to exist. A frontend toggle that blocked the entire logging API. Zero data generated. Purely private. That was mana motuhake (absolute sovereignty) made real in code. For the full technical implementation, see Building Safe Conversational AI.
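The structural idea behind a toggle like that can be sketched in a few lines. This is an illustration of the principle only, with hypothetical names; the real build gated an entire logging API from a frontend toggle:

```python
# Illustrative sketch: when incognito is on, the persistence path is
# never reached, so no record can exist. The guarantee is structural
# ("a record cannot be created"), not a policy promise ("we won't look").
# All names are hypothetical.

class SessionLogger:
    def __init__(self, incognito: bool):
        self.incognito = incognito
        self.records = []  # stand-in for a real persistence layer

    def log(self, message: str) -> None:
        if self.incognito:
            return  # structural block: zero data generated
        self.records.append(message)
```

The design choice worth noting is that the check sits in front of the write, not behind it: nothing is captured and then discarded, because a captured-then-deleted record is still a breach of the promise.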

The build code framework

Across all four builds, seven dimensions proved essential. The values-to-architecture map showing how these dimensions played out in practice across builds is in Appendix A4. Each one is both a question and a commitment:

01. Purpose: who is this for, and what are you truly trying to empower?

Ray: for any adult struggling with a relationship challenge who would never take the step to talk to someone in person, or who needs support in the moment rather than weeks later.

02. Values: what do you believe about the people using your system?

Project Rise: I wanted people to feel seen, heard, and valued through their interaction with the AI. That was the design target, not the engagement metric. The value was in the architecture.

03. Non-negotiables ("NO" clauses): what will you never do?

Across all builds: no extraction without reciprocity or transparency. This is a personal value I am not willing to compromise because it would put me out of integrity.

04. Cultural obligations: who are you accountable to, and how?

As Moana Tiriti: first to the land I stand on; then to the IP owner if building for someone else; then to every person who interacts with the technology.

05. Safety commitments: what does safety look like beyond a privacy policy?

Ray: (1) design the AI not to harm; (2) actively connect people with the right resource when something is outside scope; (3) get a human in the loop, connect them to me, and follow up if required.

06. Data sovereignty: where does the data live, and who controls it?

All builds: the participant is the kaitiaki (guardian) of their own story. I am the gatekeeper between the builder and what participants share. Nothing gets handed over that participants don't know about in advance.

07. Human-in-the-loop: where must a human remain present and accountable?

Leadership AI Coach: the AI integration coach only works because the human coach delivers face-to-face training first. The AI integrates the learning from a human relationship; it does not replace it.

Build codes per implementation

Each build had its own specific code, shaped by its vulnerability level and the communities it served.

Level 01 // Low Vulnerability

Project Rise

Validate charter values through voice feedback

  • Relationship over transaction
  • Loop closure is mandatory
  • People feel seen, heard, valued

NO_CLAUSE

No cultural performance or decorative language.

> Removed te reo Māori from voice agent to protect linguistic sovereignty.

Level 02 // Medium Vulnerability

Leadership AI Coach

Scale a coach's IP into a 24/7 digital partner

  • Strategy meets soma
  • Mirror patterns, don't diagnose
  • Human contact is the foundation

NO_CLAUSE

Never automate connection where presence is required.

> Built Incognito Mode so leaders could process without being monitored.

Level 03 // Crossover Vulnerability

Culture Meets AI

Explore where AI belongs in cultural knowledge

  • Discomfort is data
  • Facilitator as visible safety anchor
  • Hold the paradox, don't resolve it

NO_CLAUSE

No resolving tension for comfort.

> Named the te reo limitation openly at the start, caught the elephant in the room.

Level 04 // Highest Vulnerability

Ray

Support relational reflection for vulnerable adults

  • State Before Story check required
  • Safety is relational not procedural
  • No romantic framing allowed

NO_CLAUSE

No memory between sessions; No persona.

> Stateless architecture returns absolute data control to the user.

On "discomfort is data": When I demonstrated the AI mispronouncing te reo at the start of the wānanga, I did it deliberately. I said, "It's giving me the ick" and everyone laughed, and then we got it out of the way. I didn't want people sitting with that discomfort for the whole session. Naming it early, letting the AI acknowledge its own limitation, and moving on was a facilitation decision driven by the build code: discomfort is a signal to go deeper, not to avoid.

On relational vs. procedural safety in Ray: Relational safety was that I knew everyone in the study, or they were participating with a trusted person who knew they were in the pilot. I offered a debrief after every session. Procedural safety is the guidelines in the background, the flags, the webhooks, the crisis escalation. But relational safety came first, because you can't bolt on trust.
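The procedural layer described above, the flags and escalation paths running in the background, can be sketched as a routing check. This is a deliberately simplified illustration with hypothetical keywords and names, not the actual guardrail logic from Ray:

```python
# Illustrative sketch of procedural safety: a flag check that stops the
# AI and routes the turn to a human when content is out of scope or in
# crisis territory. The keyword list and names are hypothetical; real
# systems need far more nuanced detection than substring matching.

CRISIS_FLAGS = {"self-harm", "suicide", "violence"}

def route_turn(user_text: str) -> str:
    """Return 'escalate' (stop the AI, surface resources, notify the
    human in the loop) or 'continue' (normal conversation)."""
    lowered = user_text.lower()
    if any(flag in lowered for flag in CRISIS_FLAGS):
        return "escalate"
    return "continue"
```

Procedural safety of this kind is necessary but, as the paragraph above argues, it is the second layer: the relational layer has to exist before a webhook means anything.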

The "NO" clause: refusal as ethical design

The most deliberate design decisions I made were the ones about what I refused to build.

An explicit NO clause lets you pre-commit to refusal before the pressure arrives, before the commercial ask, the shortcut, the "just this once." It protects the mana of the user and the integrity of the builder in the same move. Without it, you can drift toward harm without even noticing. Look at the Groks of the world: vulnerable users are not at the centre. Eyeballs and engagement are. In some cases it's not even about money; it's about pushing a particular rhetoric. A build code doesn't prevent that in others, but it makes it impossible for you to end up there without knowing you made the choice.

The Wall of NO

  • FATAL: No persuasive tech designed for addiction
  • FATAL: No cultural performance or decorative language without cultural protection
  • FATAL: No data mines; walled gardens only
  • FATAL: No automated empathy that claims a heart it doesn't have
  • FATAL: No taking sides in couples' mediation
  • FATAL: No romantic framing or relationship-forming persona
  • FATAL: No extraction without reciprocity and transparency
  • FATAL: No handing over participant data without explicit prior consent

Source: DreamStorm + Ray build code

My NO clause evolved across four builds. The line about "no automated empathy that claims a heart it doesn't have" came directly from participant feedback, a pattern of comments across transcripts that made me realise the AI was at risk of performing care rather than facilitating it. AI agents do not have heart. They simulate. The intention behind that simulation is everything. If you're doing it to build false intimacy and extract value from someone, that is harm. If you're doing it to create a safe space someone has chosen to enter, that is a design decision. The line between them is the NO clause.

One thing I learned specifically from Ray: early on, before the pilot, a friend told me Ray's voice was "so dreamy" and she'd "found her new boyfriend." That was a hard stop. The NO clause for Ray had to include an explicit prohibition on romantic framing, on relationship-forming persona, on anything that could encourage emotional dependency. I updated the system prompt, added guardrails, and reframed Ray clearly as the "wise mate on the back porch."

W-10 described what governance needs to look like for decisions this consequential: "My first and foremost, most important thing for them would be to, once this is created, or even before, there should be a pre, a current, and a post engagement. This needs to be taken to iwi members, kaumātua, kuia, leaders for their input. And why I say this is because they are able to identify what tapu is and what is not, and what shouldn't be touched and what can be." That is the ohu/ope model described by a participant, not a researcher. The NO clause protects the builder's integrity. The governance structure protects the community's mana. Both are needed.

The DreamStorm / Trurivu Kaupapa Charters

DreamStorm started as my digital sleep service business. It has become my ethical tech incubator, the umbrella under which I build any technology that involves other people's data or wellbeing. The Leadership AI Coach, Ray, and several builds outside this research all sit under DreamStorm. When I build under that name, the charter travels with the build.

Trurivu was the original name for the feedback app I thought I was going to build. I bought the domain. The Project Rise survey ran under it. When the focus shifted to conversational AI and vulnerable spaces, Trurivu went out of scope, but the Trurivu Kaupapa Charter remained, because the values it codified were not project-specific. They were mine.

The DreamStorm Charter has four core commitments (full charter and NO clauses in Appendix B1):

The DreamStorm Charter

manifest.json // Core Ethical Commitments

Absolute Data Sovereignty

User data is not a resource to be harvested. It belongs to the person who generated it. This commitment led directly to Ray's stateless architecture (no memory between sessions, because the user's story is entirely theirs) and to the Incognito Mode in the Leadership AI Coach (see Building Safe Conversational AI for the full technical implementation). That is the charter in code, not in words.
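The statelessness commitment can be sketched as a session object whose context exists only in memory and is never written anywhere, so no shadow profile can accumulate across sessions. A minimal illustration with hypothetical names, not Ray's actual implementation:

```python
# Illustrative sketch of a stateless session: conversation context lives
# only inside one session object. Closing the session discards it; there
# is deliberately no persistence call anywhere, so nothing can accumulate
# into a profile between sessions. All names are hypothetical.

class StatelessSession:
    def __init__(self):
        self._turns = []  # in-memory only, for this session

    def add_turn(self, text: str) -> None:
        self._turns.append(text)

    def close(self) -> None:
        # No save, no export: ending the session returns the story
        # entirely to the person who told it.
        self._turns.clear()
```

The guarantee, as with Incognito Mode, is structural: there is no code path that could persist the story, rather than a promise not to use one.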

Values in the Architecture

Cultural and ethical values must be structural, not decorative. This is where the te reo decision sits: removing the language from the voice agent rather than allowing it to be mispronounced was the charter in action.

Seven Generations Thinking

Decisions are made not for the next quarter but for the legacy they leave. This came from Simon Sinek's The Infinite Game (2019) and deepened through Indigenous frameworks, particularly the concept of seven-generation stewardship across Indigenous cultures, including as articulated through Dr Kiri Dell's kaupapa Māori ethics work (Dell, 2025). In the context of a software build that may be obsolete in five years, seven generations thinking means: would my children's children be proud of what I built? Was I uplifting mana or extracting it? That question outlasts the product.

Community Reciprocity (Utu Tūturu)

What you take, you give back (Mika, Dell, Newth & Houkamau, 2022). This is not the last item on the list by accident. I've put it here deliberately, because it is the most important. Utu tūturu is what separates a research project from extractive data collection. In the Ray pilot, participants received high-fidelity AI relationship coaching in exchange for feedback that shaped the system's safety logic. They were not test subjects. They were co-creators. I am also creating a dedicated space for Ray pilot participants once this research is complete, sharing what I found, adjacent work that connects to theirs, and personal thank-you notes for everyone who participated. The loop must close. That is not optional. That is the charter.

The charter is a living document. Cultural supervision from Lee, Nadine, and Rob shaped it in concrete ways. Lee's feedback on the "no memory" language (that people wouldn't understand what it meant) led to a clarification in all pilot materials: no conversation context carries between sessions. That shift came from a direct conversation and went straight into the charter language. Lee also surfaced the tapu/noa tension for the first time, which didn't change the charter immediately but led to the Culture Meets AI wānanga: the charter asking a question it couldn't yet answer.

How to build yours

You don't need four builds and an eighteen-month master's to write a build code. You need honesty about what you actually value, and the willingness to name what you won't do before someone asks you to do it.

Start here:

draft_code.md

Draft Your Build Code

> Who is this actually for?

Not the business case. Not the stakeholder. The human being who will sit with whatever you build at 2am. Name them specifically.

> What do you believe about that person?

Write it down. "I believe they deserve X." If you can't finish that sentence, you're not ready to build for them yet.

> What will you never do?

This is harder than it sounds. Your NO clause needs to survive commercial pressure, tight deadlines, and a client who says "just this once."

> Who are you accountable to, culturally?

Name that accountability explicitly. Who do you check with? What does reciprocity look like?

> What does safety look like beyond a privacy policy?

A privacy policy is legal protection for you. Safety is relational protection for them. Name both.

> Who controls the data?

"We" is not an answer. Name the person. Name the system. Name what happens when you're no longer the builder.

> Where must a human stay?

Name those moments before you build, not after someone gets hurt.

"The things that align with your personal values most likely won't change. The things that need to adapt will tell you when."

The build code is a living document. Yours will change as your builds get harder and the stakes get higher. That's not a failure of the framework. That's the framework doing its job.

References

Dell, K. (2025). Using Māori values to ethically evaluate food-enabling technologies [Lecture, Week 12]. Master of Technological Futures, GEN25. AcademyEX, 27 February 2025. Framework adapted by the author as the "Kei Compass."

Mika, J. P., Dell, K., Newth, J., & Houkamau, C. (2022). Manahau: Toward an Indigenous Māori theory of value. Philosophy of Management, 21, 441–463. https://doi.org/10.1007/s40926-022-00195-3

Sinek, S. (2019). The infinite game. Portfolio/Penguin.