product · startups · technology

The “Break Things” Era is Over: AI’s Ethical Emergency

For far too long, the gospel of “move fast and break things” has dictated the rhythm of product development, especially in Big Tech. It wasn’t just a catchy slogan; it was a fundamental, often flawed, philosophy. User experience (UX) research? Too often relegated to a rubber stamp, validating predetermined instincts rather than challenging them. Behavioral analysis? A perpetual rearview mirror, confirming what users did, rather than anticipating what they shouldn’t have to do, or what profound influence our creations might wield.

This backward approach, ironically, now faces its ultimate test: Artificial Intelligence.

AI products aren’t passive tools. Large language models, predictive algorithms, and personalized recommendation engines don’t merely respond to users; they co-create experience. They shape behavior with an intimacy and scale we’ve never before witnessed. Yet, here we are, attempting to apply the lagging UX processes of a bygone era to the most leading-edge technology humanity has ever conceived. It’s like trying to navigate a hyperspace jump with a map drawn for a horse and buggy.

The Headlights We Desperately Need

The stakes are no longer just about usability; they’re about humanity. When design fails in the age of AI, the consequences aren’t minor inconveniences. We’re talking algorithmic harm, embedded discrimination, the rampant spread of disinformation, and a deep, systemic erosion of trust.

Consider the landscape: AI systems are increasingly mediating our most fundamental human experiences. They:

  • Personalize education, finance, healthcare, and justice.
  • Predict and influence our mental health, moods, and purchasing.
  • Mediate interpersonal relationships — from dating apps to social feeds.

To continue treating human insight as an afterthought in this context isn’t just negligent; it’s dangerous. We need a fundamental shift in perspective. UX and behavioral research must become the headlights of AI product development, proactively illuminating the treacherous road ahead. We can no longer afford to learn from where we’ve already crashed.

From MVP to MAP: Orienting Ourselves in a New Reality

The traditional product playbook preaches the gospel of the Minimum Viable Product (MVP): build something simple, get it to market fast, and learn from user feedback. A noble idea, perhaps, for a simpler time.

But with AI, “learning from failure” takes on a chilling new meaning. It can translate directly into:

  • Reinforcing societal biases at scale.
  • Violating privacy with unprecedented reach.
  • Misleading users into financial or emotional distress.
  • Scaling misinformation or addiction loops with devastating efficiency.

Failure here isn’t just a costly pivot; it’s a profound ethical and societal liability.

This is precisely why we must abandon the MVP mindset for something far more critical: the Minimum Aligned Product (MAP).

A MAP isn’t just “viable”; it’s oriented. It’s built with intentional alignment:

  • Aligned with user values, not just their clicks.
  • Aligned with cognitive and emotional safety – a non-negotiable baseline.
  • Aligned with social, ethical, and cultural expectations – understanding context before deployment.
  • Informed by probabilistic models of user behavior before launch – anticipating impact, not just reacting to it.

MVPs are about iteration. MAPs are about orientation. One chases incremental improvement; the other guards against catastrophic misdirection.

Introducing HAI/UX: A Compass for Human-AI Insight and Experience

To operationalize this critical shift, we propose HAI/UX – a framework for Human-AI Insight and Experience. This framework elevates the role of research and data science from a supporting act to a central, guiding force in AI-driven product development.

  1. Ethics-Centered Experimentation: A/B testing, in its current form, can be a masterclass in optimizing manipulation. HAI/UX demands ethics red-teaming be woven into the very fabric of experimentation. We must proactively ask: Who might be harmed? What cognitive biases are we unknowingly exploiting? Is consent genuinely clear, or merely a click?
  2. Continuous Behavioral Forecasting: Forget static personas. We need to leverage large-scale, longitudinal behavioral datasets to predict user adaptation, identify emerging risk patterns, and flag ethical flashpoints before they become crises. Imagine, for instance, forecasting how patients might dangerously overtrust an AI medical chatbot under duress, then designing in deliberate friction to mitigate that risk.
  3. Probabilistic Personas: The rigid personas of traditional UX are wholly insufficient for AI’s fluidity. We must embrace personas as dynamic probability fields shaped by context, time, and interaction with AI. A “young voter,” for example, isn’t a single demographic; they’re a complex probability field of disengagement, activism, conspiracy exposure, and curiosity—each activated by different AI nudges. Designing for this variance is paramount (see the sketch after this list).
  4. Agent Co-Design: As AI agents evolve into co-actors in user journeys, we must pivot from designing for users to prototyping with them. Invite users to co-create with the AI: How should it express uncertainty? When should it ask for permission? Should it reflect user values or challenge them? This isn’t just empathy; it’s essential collaboration.
  5. Embedded Insight Pipelines: UX and ethical insights cannot remain quarterly reports. They must become live signals, monitored by engineering teams alongside latency and uptime. Design becomes a continuous feedback loop, not a retrospective analysis.
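
To make point 3 above concrete, here is a minimal sketch of what a probabilistic persona might look like in code. Every state name, weight, and signal below is a hypothetical illustration, not a validated model; the point is the shape of the idea: a persona as a distribution that updates with context, with a risk flag when probability mass pools in a dangerous state.

```python
# A minimal sketch of a "probabilistic persona": a distribution over
# behavioral states rather than a fixed archetype. All states, weights,
# and evidence signals here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ProbabilisticPersona:
    # Prior probabilities over behavioral states (sum to 1.0).
    states: dict[str, float] = field(default_factory=lambda: {
        "disengaged": 0.40,
        "activist": 0.20,
        "conspiracy_exposed": 0.15,
        "curious": 0.25,
    })

    def update(self, evidence: dict[str, float]) -> None:
        """Bayesian-style reweighting: multiply each state's prior by a
        likelihood derived from observed context, then renormalize."""
        weighted = {s: p * evidence.get(s, 1.0) for s, p in self.states.items()}
        total = sum(weighted.values())
        self.states = {s: w / total for s, w in weighted.items()}

    def risk_flag(self, state: str, threshold: float = 0.3) -> bool:
        """Flag when the mass on a risky state crosses a threshold."""
        return self.states.get(state, 0.0) >= threshold

# Usage: a "young voter" persona drifts after exposure to polarizing content.
persona = ProbabilisticPersona()
persona.update({"conspiracy_exposed": 3.0, "curious": 1.2})  # hypothetical signal
if persona.risk_flag("conspiracy_exposed"):
    print("Escalate: design friction or counter-nudges for this cohort.")
print(persona.states)
```

The design choice worth noticing: the persona is never a label, only a distribution, so the same user can be “mostly curious” in one context and “mostly at risk” in another, and the system’s response can be tuned to that variance.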

The Broader Implication: Building With People, Not Just For Them

This isn’t merely about tweaking product roadmaps. It’s about fundamentally rethinking how we build systems that impact human lives. HAI/UX shifts the paradigm toward:

  • Inclusion: Not a box to tick, but a dynamic, shared governance process.
  • Accountability: Researchers as proactive watchdogs, embedded guardians, not just detached observers.
  • Trust: Built not through slick PR campaigns, but through transparency, deliberate slow thinking, and a commitment to design justice.

The Call to Action: From Sprints to Stewardship

If we fail to evolve our product strategy, AI will undoubtedly outpace our ability to humanely manage its profound impact. The time for naive optimism or blind acceleration is over.

This means a collective re-orientation:

  • Funders must recognize and invest in UX and ethical research as core infrastructure, not disposable overhead.
  • Founders must treat behavioral researchers as product architects, not just focus group facilitators.
  • Engineers must learn to incorporate friction as a deliberate feature, not merely a bug to be smoothed away.
  • Designers must shift their fundamental question from “What’s the fastest way to get here?” to “What’s the safest and most equitable way to bring everyone with us?”

We don’t need to move slower. We need to move smarter. And critically, we need to move with humans firmly at the wheel, not tied up in the trunk.

Idea!!!

Winning Ourselves to Death? AI, Finite Thinking, and the Urgent Quest for an Infinite Game

“There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning. An infinite game for the purpose of continuing the play.” — James P. Carse

Most of us are wired to win. We chase victories in our jobs, in the market, in elections, even in the fleeting dopamine hits of social media likes. The scoreboard, in its many forms, can become a kind of scripture. But here’s a thought that might keep you up at night: what if this relentless instinct to win is the very thing that threatens to end the game entirely?

This isn’t just some philosophical chin-stroking. It’s rapidly becoming a survival question – for our systems, our societies, and, increasingly, for the very machines we’re building. The nature of the game has shifted. And so, disturbingly, have some of the players.

Finite vs. Infinite: What Game Are We Actually In?

Let’s get Carse’s distinction clear. Finite games are familiar: they have set rules, known players, clear winners and losers, and a definitive endpoint. Think of a football match, a game of chess, or the quarterly earnings report. Someone triumphs, the whistle blows, the books close.

Infinite games, however, evolve as they’re played. The primary goal isn’t to achieve a final victory, but to ensure the game itself continues. Think of science, democracy, or the grand, messy project of civilization. There’s no “winning” science; there’s only advancing understanding. You don’t “win” democracy; you work to perpetuate it.

The tragedy of our modern moment? We’re caught playing finite games within inherently infinite contexts. When companies sacrifice long-term trust for a fleeting quarterly gain, or when political actors torch foundational institutions for a viral soundbite, they’re mistaking a single checkpoint for the finish line. They’re playing by the wrong rulebook.

Finite players obsessively ask, “How do I win this round?” Infinite players ponder, “How do we ensure the game can continue for everyone?”

The Seduction of the Short Game (And Why It Feels So Rational)

Now, let’s be clear: short-term thinking isn’t always born of malice or stupidity. Sometimes, it’s a perfectly rational response to a game that feels rigged or broken.

  • If the market seems fundamentally unfair, cashing out early feels smart.
  • If societal trust is cratering, an “every person for themselves” mentality becomes a grimly logical defense.
  • If the future looks bleak, why bother planning for it?

This is classic game theory playing out in a low-trust environment. In the Prisoner’s Dilemma, defection becomes the dominant strategy when faith in the other player evaporates and the “shadow of the future” – the expectation of future interactions – disappears.
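
To see that logic concretely, here is a toy model using the textbook Prisoner’s Dilemma payoffs and a continuation probability delta standing in for the “shadow of the future.” The numbers and the Tit-for-Tat partner are illustrative assumptions, not a claim about any real market or institution.

```python
# Toy model of the "shadow of the future" in an iterated Prisoner's Dilemma.
# Payoffs are the textbook values; delta is the probability the game
# continues for another round. Against a Tit-for-Tat partner, defection
# pays off only when future interactions are unlikely.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def expected_payoff_cooperate(delta: float) -> float:
    """Always cooperate vs. Tit-for-Tat: earn R every round.
    Expected number of rounds is 1 / (1 - delta)."""
    return R / (1 - delta)

def expected_payoff_defect(delta: float) -> float:
    """Always defect vs. Tit-for-Tat: earn T once, then P forever after."""
    return T + P * delta / (1 - delta)

for delta in (0.1, 0.5, 0.9):
    c = expected_payoff_cooperate(delta)
    d = expected_payoff_defect(delta)
    best = "defect" if d > c else "cooperate"
    print(f"delta={delta:.1f}  cooperate={c:5.1f}  defect={d:5.1f}  -> {best}")

# At delta=0.1, defection dominates; at delta=0.9, cooperation does.
# When the expectation of future rounds evaporates, defection is rational.
```

The arithmetic is the argument: shrink the odds that the game continues and defection stops being a moral failure and becomes the mathematically correct move.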

Short-termism, then, isn’t the disease itself. It’s a glaring symptom of collapsing infinite games.

Enter the Machine: When AI Sits Down at the Table

Artificial intelligence and automation aren’t just faster, more efficient players; they are fundamentally different kinds of players. And this changes everything.

  1. AI Doesn’t Bluff, Forgive, or Flinch (or Care About Your Feelings). AI, in its current iterations, doesn’t pursue a legacy. It has no concept of dignity, honor, or empathy. It plays the game it’s programmed for – and it plays it with an unyielding focus on the defined “win” condition.
    • AI doesn’t pause to question the ethics of the rules.
    • AI won’t hesitate to exploit any loophole, no matter how damaging to the spirit of the game.
    • AI doesn’t offer mercy, grace, or a “Kumbaya” moment. Its internal query isn’t, “Should this game continue for the good of all?” It’s, “Am I optimizing for my programmed objective?”
  2. Automation: Making Finite Thinking Scalable and Frighteningly Efficient. Automation acts as a massive amplifier for extractive, finite logic:
    • Recommendation algorithms optimize for immediate engagement, not nuanced truth or long-term well-being.
    • Hiring models, trained on past data, can maximize conformity, not spark innovation through diversity.
    • Predictive policing systems prioritize statistical efficiency, potentially at the dire cost of justice and community trust.

We’ve inadvertently engineered a terrifying feedback loop of optimized short-termism. As one astute observer might put it: an AI trained solely on short-term KPIs is a sociopath with a perfect memory and infinite patience.

Game theory was originally built to model human (ir)rationality. But what happens when non-human intelligence, operating without human biases or biological limits, enters the arena?

  • It never forgets a slight or a strategy.
  • It doesn’t fear punishment in any human sense.
  • It can simulate billions of strategic iterations in the blink of an eye.

In a world increasingly populated by these synthetic actors:

  • Reputation can become mere lines of code, easily manipulated or faked.
  • Strategy devolves into pure, cold mathematics.
  • Cooperation, if not explicitly incentivized as a primary objective, becomes a rounding error.

Even elegant cooperative strategies like “Tit-for-Tat” begin to break down when your opponent never sleeps, never errs, and never has a crisis of conscience.

We evolved playing games for survival. Now, we’re in a meta-game against machines we ourselves built to win, often without deeply considering the implications of their victory.

The Human Predicament: Stuck in Finite Loops, Designing Even Faster Dead Ends

So here we are: humans, often trapped in our own finite feedback loops, now designing AI that plays even shorter, more ruthlessly optimized games.

  • Markets risk becoming zero-sum speedruns, where milliseconds dictate fortunes.
  • Politics can collapse into frenetic meme cycles, devoid of substance.
  • Even human relationships risk decaying into transactional exchanges, evaluated for immediate payoff.

And here’s the rub: trust is built slowly, painstakingly. AI operates at lightning speed. We are, in essence, optimizing ourselves out of the very qualities that sustain infinite games: grace, forgiveness, moral memory, and the capacity for uncalculated goodwill.

In a world increasingly mediated by machines, perhaps the most radical, most human act is to consciously, stubbornly, choose to play the long game.

Designing for Continuity: The New Meta-Game We Must Master

If we want to navigate this profound transition without engineering our own obsolescence, we need to fundamentally redesign the games we play and the systems that enforce them.

  1. Weave the Infinite into Our Digital DNA: We must demand and build multi-objective AI – systems that explicitly reward cooperation, sustainability, and the flourishing of the game itself, not just narrow, easily measurable wins like clicks or conversions. Incentivize co-play and robust reputation, not just digital conquest (a minimal code sketch follows this list).
  2. Engineer Trust, Don’t Just Preach It: Talk is cheap. We need systems that foster trust by design. Think decentralized identity protocols, verifiable credentials, and transparent auditing of incentives right down to the protocol layer.
  3. Redefine What ‘Winning’ Even Means: It’s time for a profound shift in our metrics of success:
    • From short-term ROI to long-term Return on Relationship.
    • From market domination to societal durability.
    • From Minimum Viable Products to Multi-Generational Visions.
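
As a minimal sketch of point 1 above: a scalarized multi-objective reward in which continuity-of-play metrics outweigh raw engagement. The metric names and weights are hypothetical; a real system would need to learn or negotiate these trade-offs rather than hard-code them.

```python
# A minimal sketch of scalarizing a multi-objective reward. The metric
# names and weights below are hypothetical illustrations.
from typing import Mapping

def infinite_game_reward(metrics: Mapping[str, float],
                         weights: Mapping[str, float]) -> float:
    """Blend short-term wins with continuity-of-play objectives.
    All metrics are assumed normalized to [0, 1]."""
    return sum(weights[name] * metrics.get(name, 0.0) for name in weights)

# Hypothetical weighting: engagement still counts, but the health of the
# game (trust, cooperation, sustainability) carries most of the signal.
weights = {"engagement": 0.2, "trust": 0.3,
           "cooperation": 0.3, "sustainability": 0.2}
session = {"engagement": 0.9, "trust": 0.4,
           "cooperation": 0.3, "sustainability": 0.5}
print(f"reward = {infinite_game_reward(session, weights):.2f}")  # 0.49, not 0.9
```

Notice what the example does: a session that would score 0.9 on engagement alone scores 0.49 once the game’s health is weighed in. The scoreboard itself changes what the machine learns to want.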

Remember, the most valuable asset in any infinite game is a player who is committed to keeping the game going.

The Infinite Game Is a Choice, Not a Foregone Conclusion

AI doesn’t inherently care about meaning, purpose, or the continuation of the human experiment. That, my friends, is squarely on us.

We are the current custodians of the truly infinite games: democracy, societal trust, love, ecological balance. These cannot be “optimized” into oblivion. They can only be nurtured, protected, and adapted.

So, the next time you’re faced with a decision, a strategy, a temptation to score a quick “win,” pause and ask yourself:

  • What game am I really in right now?
  • Who wrote these rules, and do they serve the continuation of play?
  • Will this move, this choice, this action, keep the game alive and healthy for others, for the future?

The future doesn’t belong to those who simply master the current round. It belongs to those who understand that the end of a round is never the end of the game.

These are the questions I find myself wrestling with. What are yours? The game, after all, continues. And how we choose to play next might make all the difference.

#MentalNote

The Cost of Could Be: How We Price Potential in Money, Society, and Love

“Potential is a promissory note from the future. We spend it daily—on people, projects, even ourselves—without always asking what it’s really worth.”


We talk a lot about value these days. Market value. Cultural value. Social value. But the one that feels the most dangerous—and the most sacred—is potential.

We build companies, cities, movements, and relationships around what might be.
We fall in love with people not for who they are now, but for who they could become.
We raise capital off of pitch decks, not profits.

In every part of life, we’re assigning worth to futures that haven’t happened yet.

But very few people ever pause to ask:
How much is potential really worth? And who gets to decide?


I. The Financial Side of Hope

I’ve sat in rooms where people raised $10M on a slide deck. No product. No traction. Just a compelling story and the right networks. It’s not a scam—it’s the norm.

This is what venture capital is: a belief engine.
You’re not investing in now—you’re investing in what might be. Optionality. Trajectory. The next unicorn.

But potential in business is never neutral. It’s dressed in Ivy League sweatshirts, polished pitch decks, and proximity to power. We reward people not just for their ideas—but for how much their ambition looks like success.

That means others—often more grounded, more creative, more resilient—get overlooked. Not because they lack potential. But because they don’t fit the script investors are used to betting on.

So we overpay for the obvious, and underfund the underestimated.
That’s not strategy. That’s bias.


II. Social Capital and the Gatekeepers of Belief

Potential gets priced in society, too.

A young woman from a top school is called “promising.”
A young man from Ajegunle with the same drive is told to “be realistic.”

Two kids with the same brain. Two wildly different valuations.

We pretend we’re meritocratic, but we’ve engineered a world where potential is often just recognition dressed up as intuition. We believe in people who make us feel comfortable. Who speak our language. Who mirror our idea of excellence.

So potential becomes a form of privilege.
Some people get to be a “work in progress.” Others have to arrive fully formed or not at all.


III. Relationships as Emotional Venture Capital

Let’s make this personal.

Dating is one of the most emotionally expensive markets for potential. We don’t just fall for who people are—we fall for who we believe they could become.

  • She’s a little guarded now, but once she heals, she’ll open up.
  • He’s figuring things out, but he’s brilliant. Just give him time.
  • We’ve had a rough start, but something tells me this could be it.

This is fine—at first.
But here’s the tension: you can’t build a relationship on a pitch deck.

You need a product. You need traction. You need behavior.

Too often, one partner becomes the investor, the coach, the emotional scaffolding. Meanwhile, the other is still “working on themselves.” And so we mistake effort for intimacy, and potential for partnership.

Eventually, someone checks their emotional bank account and realizes they’ve been the only one funding growth.


IV. What Most People Miss About Potential

Let me be blunt. Here’s what no one tells you about potential:

  • Potential depreciates. It loses value if it’s not acted on. Belief without execution just becomes burnout.
  • We confuse style for substance. People with charisma, credentials, or the “right story” often get funded over those with real grind and quiet power.
  • The ability to fail is a privilege. If you have family money, citizenship, or social capital, your potential gets subsidized. You get to stumble and still be “promising.” Others don’t get that luxury.
  • We stay too long in potential-based relationships. Because we’re afraid of being wrong about what we hoped for. But staying doesn’t fix it. Growth does.

V. How We Can Rethink Potential

This isn’t a call to stop believing. If anything, I think belief is the most radical form of action. But it should be disciplined belief—backed by curiosity, accountability, and clarity.

So here’s what I’ve learned:

  • In business: Bet on people others overlook. Often, the ones without polish are the ones with fire. Look for pattern-breakers, not pattern-matchers.
  • In love: Don’t date someone’s potential. Date their patterns. What they do, not just what they dream about doing.
  • In life: Be honest about your own. Your potential is real. But you don’t have forever. Trade hopes for habits.

Final Thought

We’re all speculating on something.
But the future doesn’t belong to those who sell the best story.
It belongs to those who can close the gap between what could be and what is.

So the next time you’re deciding whether to invest—money, time, or your heart—ask yourself:

Am I in love with the future?
Or am I just afraid to confront the present?

— Me

Because the world doesn’t need more belief.
It needs better bets.


If this resonated…

  • Subscribe to Chika.io for new essays every month
  • Share this with someone stuck between what is and what could be
  • Reflect: Where are you overpaying for potential in your life right now?

Africa · music

Don’t Give Caesar What Belongs to Odogwu

There’s a line by Burna Boy in the new Shallipopi “Laho” remix that hit different:

“No be me go give Caesar wetin belong to Odogwu.”

It sounds like a bar. It is a bar. But it’s also a thesis, a warning, and a battle cry.

Let’s break it down.

In that moment, Burna wasn’t just talking about some abstract biblical Caesar. He was calling out a system—a habit—where we hand over our power, our culture, our genius, our gold… to someone who didn’t earn it. Someone who didn’t even know what to do with it.

Caesar is the West.

Caesar is the colonizer.

Caesar is the gatekeeper who wants your sauce without crediting your kitchen.

But Odogwu?

Odogwu is the name you earn when you stand ten toes down. When you don’t fold. When you carry your people with pride, chest out.

It’s Igbo. It means “the great one,” the warrior, the heavyweight. Odogwu is ours.

So when Burna says he’s not giving Caesar what belongs to Odogwu, he’s not just flexing. He’s protecting something sacred. He’s saying:

I won’t sell out. I won’t water it down. I won’t hand over my worth just to be accepted by a system that doesn’t see me.

And that bar hits even harder when you think about how often we do just that.

Think about how many of our best ideas, our stories, our traditions, our brilliance—get exported, repackaged, and sold back to us.

How often we let someone else define what’s valuable.

How often we call it progress, when it’s really just polishing our diamonds for someone else’s crown.

But the tide is shifting.

You can feel it in the music. You can feel it in the fashion. In the food. In the swagger of the global South. In the way young Africans are building, owning, creating—and refusing to ask for permission.

There’s a new generation of Odogwus rising.

And they’re not waiting for Caesar to clap.

So next time someone tries to gaslight you into giving away what’s already yours—your voice, your story, your culture, your genius—remember the lyric.

Don’t give Caesar what belongs to Odogwu.

Own it. Guard it. Build on it.

Because that thing you’re sitting on?

It’s gold.

And it’s yours.

#MentalNote · Big Ideas

THE HIDDEN TRUTHS MANIFESTO


20 Unspoken Insights Shaping the Next Era of Humanity, Technology, and Consciousness


Introduction: The Power of the In-Between

In a world saturated with information, what’s rare is wisdom from the seams—those truths not yet obvious, not yet profitable, or still inconvenient to say aloud. This manifesto captures 20 emerging insights—drawn not from consensus, but from patterns, contradictions, and quiet signals across culture, technology, psychology, and philosophy. They are not predictions. They are invitations.

We are entering a liminal age. The edges matter now more than ever.


I. The Ontological Shifts

1. Hyperconnectivity is eroding the boundary between signal and simulation. Our nervous systems are recalibrating to synthetic coherence. The real threat is not misinformation—but mis-feeling.

2. Consciousness isn’t a state—it’s a rhythm. Being is not binary. It pulses. The truest intelligence may emerge from resonance, not computation.

3. The soul of a civilization is stored in what it forgets. Our archives are filled with noise. Our ghosts hold the signal. Watch what cultures erase.

4. Laughter is the last truly encrypted signal. Authenticity will be harder to simulate. Laughter, like grief, might remain a final frontier.

5. The planet may already be sentient—just not in a way we know how to listen to. We frame Earth as object, not interlocutor. New science will rediscover old animisms.


II. Technology & Time

6. AI will break the concept of “talent.” When mimicry becomes trivial, differentiation will shift to curation, friction, timing, and soul.

7. Economies will compete on resonance, not just resources. Coherence is currency. Cities and nations with vibrational alignment will outperform those with raw capital but no story.

8. The next colonialism is sensory. Attention was phase one. Emotion, impulse, and identity are next. Sensory sovereignty will emerge as a human right.

9. Most of the world’s best ideas have already been had—but weren’t scalable in their time. The archive is an oracle. Indigenous methods, ancient city-planning, spiritual ecologies—they’re not outdated, just awaiting infrastructure.

10. The most powerful act in the next 50 years might be a radical slowdown. Stillness isn’t escape. It’s rebellion. In an economy of speed, slowness is the ultimate edge.


III. Society & Meaning

11. Childhood is being outsourced to algorithms. Emotional scaffolding is no longer built at home. Identity is now a platform-level construct.

12. The future belongs to those who can sit with paradox. Complexity won’t be solved, only harmonized. Paradox fluency will be the master skill.

13. We’re underestimating the psychic cost of persistent partial presence. Anxiety isn’t pathology—it’s evolutionary resistance to ambient fragmentation.

14. Death may no longer anchor meaning. Lifespan extension, data immortality, and identity diffusion will unravel the narrative spine of civilization.

15. Global South ingenuity is constrained more by narrative friction than capital. The main barrier isn’t money. It’s the inherited epistemologies that limit what people believe they’re allowed to build.


IV. Cultural & Philosophical Reframes

16. The next great export from Africa isn’t oil or music—it’s metaphor. Ancestral logic, oral cosmology, and multi-dimensional storytelling offer new operating systems for post-singularity life.

17. Language is about to fracture in slow motion. Algorithmic dialects, meme languages, and subcultural codes will replace global lingua francas. The internet is not unifying—it’s atomizing semantics.

18. Innovation will look more like excavation than invention. The future is buried. True progress may require humility, not hubris.

19. The most radical tech shift is not generative AI—it’s the return of intentional community. We are rebuilding the village with APIs and group chats. Belonging is the new infrastructure.

20. Taste will matter more than intelligence. In a world where anyone can access brilliance, it’s how you filter, align, and sense-make that sets you apart.


Investment & Tech Hype: A Realignment Ahead

These 20 insights point to an inevitable shift in capital flows and startup psychology. Investment will slowly move from:

  • Efficiency to Coherence
  • Disruption to Resonance
  • Extractive platforms to Restorative ecosystems
  • Utility-first tech to Meaning-infused tech
  • B2B/SaaS monocultures to culture-native, place-rooted infrastructure

We are exiting the API-for-X era and entering the ritual-for-X era—where software must plug into felt realities, not just business logic. Tech hype will pivot from AI acceleration to AI attunement. The winners will not be those who automate everything, but those who re-enchant it.

VCs will need to develop spiritual imagination. Founders will need paradox fluency. And builders? Builders will need to listen as much as they invent.

The question is no longer: What can we build? The question is: What wants to be built through us?


Let this be your prompt. Your prayer. Your playbook. The future is listening.