Politics

The Western Fall Revisited Pt 1: My 2016 Reflections in the Light of 2025’s Multipolar Reality

When I first wrote about the concept of a “Western Fall” back in 2016, I was diagnosing what I saw as a period of profound internal challenge brewing within Western nations. My analysis then pointed to the societal friction from rapid social liberalization clashing with traditional values, the corrosive effects of widening income inequality, and the seismic disruptions brought by globalization and technology. These, I argued, were key drivers of a growing popular disenchantment that could lead to a decline in the West’s outward influence.

Looking back from our vantage point in mid-2025, it’s striking how those internal recalibrations have not only deepened but have also acted as significant catalysts on the global stage. The internal stresses I identified have, as I suspected they might, contributed to accelerating the transition from a post-Cold War order, often perceived (perhaps too simplistically) as one of Western or unipolar dominance, to a genuinely multipolar global landscape. This new era is characterized by multiple, assertive centers of power, more fluid and often transactional alliances, and a far more contested and unpredictable international stage. Events since 2016 are now punctuated by the raw, kinetic volatility we’ve witnessed just this past week: Russia and Ukraine continue to trade devastating blows in a war of attrition that has become a laboratory for next-generation drone warfare, while the direct, unprecedented missile exchanges between Iran and Israel threaten to pull the entire Middle East into a wider conflagration. These events underscore the trajectory I was beginning to trace.

The manifestations of this shifting global power dynamic have become even clearer than I might have anticipated. The rise of assertive non-Western powers, which I was tracking, has solidified. China, despite its own evolving economic narrative, has adopted a far more pronounced global posture. Its Belt and Road Initiative, though adapted in response to critiques around debt and sustainability, continues to be a significant vector of influence alongside its formidable military modernization and robust push in critical technological domains like AI. India, whose economic resilience I noted, has truly championed its “strategic autonomy.” Its robust GDP growth and nuanced foreign policy—balancing relationships with the US, Russia, and China—confirm its role as a pivotal independent force. I also observed the growing independence of regional powers like Turkey, Saudi Arabia, and the UAE; today, their diversified partnerships and assertive national visions are undeniable. The expansion of BRICS+ in 2024, incorporating Egypt, Ethiopia, Iran, and the UAE (with Saudi Arabia invited), was a landmark I couldn’t have predicted in detail, but the underlying aspiration it represents, a Global South seeking greater voice and alternative platforms, aligns with the systemic shifts I was exploring.

The emerging multipolar order is characterized by increased volatility and a distinct resurgence of “hard” geopolitics. The direct state-on-state missile attacks between Iran and Israel this week, targeting oil facilities, nuclear-related sites, and population centers, have torn away the veil of their long-running shadow war. This escalation, which has reportedly killed dozens and wounded hundreds on both sides, exemplifies the grave risk of miscalculation in a multipolar system where regional powers act more assertively and the constraints of hegemonic oversight have frayed. This volatility is reflected in rising defense budgets; global military expenditure reached a record $2.718 trillion in 2024, according to SIPRI. Concurrently, the war in Ukraine persists as a brutal testament to this reality. Recent reports from the front lines in June 2025 describe a grinding conflict in which unmanned systems now account for a large share of casualties and both sides are constantly innovating: Ukraine is advancing its Sapsan ballistic missile project while Russia deploys North Korean artillery clones, highlighting a protracted struggle with devastating human cost and global repercussions.

This diffusion of power has inevitably stressed traditional Western alliances and institutions. The UN Security Council frequently finds itself deadlocked, and the WTO’s Appellate Body has remained non-functional since late 2019. In this context, the rise of “minilateral” groupings like the Quad and AUKUS makes sense as more agile arrangements. The intensification of competition in new arenas is another area where trends have sharpened. The race for technological supremacy in AI and semiconductors has evolved into a major geostrategic fault line, visible in the US export controls targeting China’s tech advancement and Beijing’s equally determined drive for self-reliance.

These shifts have profound implications for addressing our shared global challenges, a core concern of my 2016 piece regarding isolationism. Effective climate action is demonstrably complicated by geopolitical rivalry that can fracture efforts through trade barriers and divert vital resources. The COVID-19 pandemic provided a painful lesson in how “vaccine nationalism” can hamper global health security. Economic stability is increasingly vulnerable to trade fragmentation and strategic “decoupling,” which can disproportionately impact developing nations. And the erosion of the arms control architecture, with treaties like New START facing expiry without a clear successor, brings the specter of a renewed nuclear arms race into sharper, more alarming focus.

Reflecting on my “Western Fall” thesis from 2016, it seems less about an absolute, terminal decline of the West and more about a profound, ongoing recalibration of its relative power and influence in a world where other poles are not just rising but are now firmly established. This “new normal,” as I termed it then, is dynamic and fiercely contested. For Western nations, the challenge is to adapt to a reality where their primacy is no longer assured. For rising powers, their enhanced stature brings the undeniable opportunity to co-shape global norms, but also the critical responsibility to contribute constructively to global public goods. The overarching risk, as some analysts have warned with the “G-Zero” concept, is a leadership vacuum where heightened geopolitical instability stymies collective action. The “deliberate steering” I called for then remains an urgent imperative. And perhaps the most critical variable in this equation remains the internal health of the West itself, particularly the state of American democracy, which warrants its own sober reflection. (Part 2 coming next week)

#MentalNote · #productideas · Big Ideas

Decoding the Chaos: Welcome to Wahala Economics

During my time navigating the vibrant streets of Lagos, I often found myself observing patterns that defied conventional economic wisdom. What initially appeared as disorganization or inefficiency hinted at something more complex, a hidden logic beneath the surface-level ‘wahala’ (Nigerian Pidgin for trouble or hassle). It was there, amidst the bustling markets and intricate social dynamics, that the idea of ‘wahala economics’ began to take shape for me – a lens through which to understand the underlying, often unconventional, economic forces at play in such environments. It’s about recognizing that what looks like chaos might actually be a rational, if not always optimal, response to a unique set of constraints and incentives.

Consider the real estate market in Lagos. An outsider might observe seemingly high property prices, perhaps juxtaposed with visible signs of economic hardship. Scratch a little deeper, and you might hear about the lucrative returns some are making through platforms like Airbnb. This visible success, even if enjoyed by a relatively small fraction of property owners, can act as a powerful signal. The perceived profitability of short-term rentals creates an impression of high returns across the board. Consequently, buyers and investors, perhaps lacking granular data on actual Airbnb occupancy rates and profitability across different properties, may bid up prices, not just for Airbnb-suitable apartments, but for real estate more broadly. What appears ‘irrational’ – higher prices even for properties less suited to short-term rentals – becomes a rational response to the distorted incentives created by the highly visible, though potentially unrepresentative, success of some Airbnb ventures.
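
A toy simulation makes that distortion concrete. Everything below is hypothetical (the incomes, the visibility threshold, and the 10% required return are invented for illustration), but it shows how pricing off the visible winners, rather than the full distribution, pushes bids above fundamentals.

```python
import random

random.seed(42)

# Hypothetical annual short-term-rental income (in millions of naira) across
# 1,000 comparable apartments: most earn modest sums, a visible few earn a lot.
incomes = [max(0.0, random.gauss(3.0, 2.5)) for _ in range(1000)]

# Selection effect: only hosts clearing, say, 7M a year are loud about it
# (press, social media, word of mouth), so that is the sample buyers actually see.
visible = [x for x in incomes if x >= 7.0]

true_avg = sum(incomes) / len(incomes)
perceived_avg = sum(visible) / len(visible)

# A buyer who capitalizes expected income at a 10% required return bids far
# above fundamentals when working from the visible sample alone.
cap_rate = 0.10
print(f"fundamentals-based price: {true_avg / cap_rate:.1f}M naira")
print(f"hype-based bid:           {perceived_avg / cap_rate:.1f}M naira")
```

Nothing in the buyer’s arithmetic is irrational; the distortion lives in the sample they can see, which is exactly the point of ‘wahala economics.’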

This phenomenon in the Lagos real estate market isn’t an isolated quirk. Across ‘wahala economies,’ you often find that the incentives themselves are skewed in ways that would seem counterintuitive in more conventional settings. What might appear as irrational behavior – individuals making choices that don’t maximize standard economic utility – often becomes rational when you understand the distorted incentive landscape they navigate. For instance, in environments where trust in formal institutions is low or where scarcity is pervasive, seemingly ‘inefficient’ behaviors like hoarding resources or prioritizing immediate gains over long-term investments can become logical responses to the prevailing conditions. The actors aren’t necessarily irrational; their rationality is simply calibrated to a different, often more challenging, set of incentives.

Beyond the immediate distortions of information asymmetry and skewed incentives, another layer of understanding in ‘wahala economics’ comes from the perspective of ‘infinite games.’ Unlike finite games with clearly defined players, rules, and an end goal, infinite games are about continuing to play. In environments marked by uncertainty and ongoing challenges, actions that appear ‘inefficient’ in the short term might be strategic moves within a much longer, undefined game. Consider a seemingly convoluted or time-consuming negotiation process. From a purely transactional viewpoint, it might look like a waste of resources. However, within the context of an ‘infinite game’ – where building relationships and establishing trust for future interactions is paramount – that extra time and effort might be a crucial investment.

Ultimately, ‘wahala economics’ invites us to look beyond the simplistic metrics of efficiency and immediate transactional gains. The seemingly chaotic dance of these economies often reveals a deeper, adaptive logic rooted in navigating information gaps, responding to skewed incentives, and playing the long game in environments where trust might be localized rather than widespread. The ‘inefficiencies’ we observe on the surface can be understood as the emergent strategies of actors responding rationally (within their context) to the particular ‘wahala’ they face.

What examples of ‘wahala economics’ have you observed in your own experiences or travels? Share your insights!


product · startups · Technology

The “Break Things” Era is Over: AI’s Ethical Emergency

For far too long, the gospel of “move fast and break things” has dictated the rhythm of product development, especially in Big Tech. It wasn’t just a catchy slogan; it was a fundamental, often flawed, philosophy. User experience (UX) research? Too often relegated to a rubber stamp, validating predetermined instincts rather than challenging them. Behavioral analysis? A perpetual rearview mirror, confirming what users did, rather than anticipating what they shouldn’t have to do, or what profound influence our creations might wield.

This backward approach, ironically, now faces its ultimate test: Artificial Intelligence.

AI products aren’t passive tools. Large language models, predictive algorithms, and personalized recommendation engines don’t merely respond to users; they co-create experience. They shape behavior with an intimacy and scale we’ve never before witnessed. Yet, here we are, attempting to apply the lagging UX processes of a bygone era to the most leading-edge technology humanity has ever conceived. It’s like trying to navigate a hyperspace jump with a map drawn for a horse and buggy.

The Headlights We Desperately Need

The stakes are no longer just about usability; they’re about humanity. When design fails in the age of AI, the consequences aren’t minor inconveniences. We’re talking algorithmic harm, embedded discrimination, the rampant spread of disinformation, and a deep, systemic erosion of trust.

Consider the landscape: AI systems are increasingly mediating our most fundamental human experiences. They:

  • Personalize education, finance, healthcare, and justice.
  • Predict and influence our mental health, moods, and purchasing.
  • Mediate interpersonal relationships — from dating apps to social feeds.

To continue treating human insight as an afterthought in this context isn’t just negligent; it’s dangerous. We need a fundamental shift in perspective. UX and behavioral research must become the headlights of AI product development, proactively illuminating the treacherous road ahead. We can no longer afford to learn from where we’ve already crashed.

From MVP to MAP: Orienting Ourselves in a New Reality

The traditional product playbook preaches the gospel of the Minimum Viable Product (MVP): build something simple, get it to market fast, and learn from user feedback. A noble idea, perhaps, for a simpler time.

But with AI, “learning from failure” takes on a chilling new meaning. It can translate directly into:

  • Reinforcing societal biases at scale.
  • Violating privacy with unprecedented reach.
  • Misleading users into financial or emotional distress.
  • Scaling misinformation or addiction loops with devastating efficiency.

Failure here isn’t just a costly pivot; it’s a profound ethical and societal liability.

This is precisely why we must abandon the MVP mindset for something far more critical: the Minimum Aligned Product (MAP).

A MAP isn’t just “viable”; it’s oriented. It’s built with intentional alignment:

  • Aligned with user values, not just their clicks.
  • Aligned with cognitive and emotional safety – a non-negotiable baseline.
  • Aligned with social, ethical, and cultural expectations – understanding context before deployment.
  • Informed by probabilistic models of user behavior before launch – anticipating impact, not just reacting to it.

MVPs are about iteration. MAPs are about orientation. One optimizes for incremental improvements; the other guards against catastrophic misdirection.

Introducing HAI/UX: A Compass for Human-AI Insight and Experience

To operationalize this critical shift, we propose HAI/UX – a framework for Human-AI Insight and Experience. This framework elevates the role of research and data science from a supporting act to a central, guiding force in AI-driven product development.

  1. Ethics-Centered Experimentation: A/B testing, in its current form, can be a masterclass in optimizing manipulation. HAI/UX demands ethics red-teaming be woven into the very fabric of experimentation. We must proactively ask: Who might be harmed? What cognitive biases are we unknowingly exploiting? Is consent genuinely clear, or merely a click?
  2. Continuous Behavioral Forecasting: Forget static personas. We need to leverage large-scale, longitudinal behavioral datasets to predict user adaptation, identify emerging risk patterns, and flag ethical flashpoints before they become crises. Imagine, for instance, forecasting how patients might dangerously overtrust an AI medical chatbot under duress, then designing in deliberate friction to mitigate that risk.
  3. Probabilistic Personas: The rigid personas of traditional UX are wholly insufficient for AI’s fluidity. We must embrace personas as dynamic probability fields shaped by context, time, and interaction with AI. A “young voter,” for example, isn’t a single demographic; they’re a complex probability field of disengagement, activism, conspiracy exposure, and curiosity—each activated by different AI nudges. Designing for this variance is paramount (a minimal sketch follows this list).
  4. Agent Co-Design: As AI agents evolve into co-actors in user journeys, we must pivot from designing for users to prototyping with them. Invite users to co-create with the AI: How should it express uncertainty? When should it ask for permission? Should it reflect user values or challenge them? This isn’t just empathy; it’s essential collaboration.
  5. Embedded Insight Pipelines: UX and ethical insights cannot remain quarterly reports. They must become live signals, monitored by engineering teams alongside latency and uptime. Design becomes a continuous feedback loop, not a retrospective analysis.
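
To make the “probability field” idea in point 3 less abstract, here is a minimal sketch in Python. The modes, weights, and update rule are invented for illustration (a crude multiplicative update, not a validated behavioral model); the point is only that the persona becomes a distribution that shifts with context and interaction instead of a frozen archetype.

```python
from dataclasses import dataclass

@dataclass
class ProbabilisticPersona:
    """A persona as a probability field over behavioral modes, not a fixed archetype."""
    name: str
    states: dict  # mode -> probability, summing to 1.0

    def update(self, evidence: dict) -> None:
        """Shift the distribution given observed interaction signals.

        `evidence` maps a mode to a multiplicative likelihood; the result is
        renormalized. Deliberately crude and purely illustrative.
        """
        for mode, likelihood in evidence.items():
            if mode in self.states:
                self.states[mode] *= likelihood
        total = sum(self.states.values())
        self.states = {m: p / total for m, p in self.states.items()}


# A "young voter" is a shifting mix of modes, not one demographic (weights hypothetical).
persona = ProbabilisticPersona(
    name="young voter",
    states={"disengaged": 0.40, "activist": 0.25,
            "conspiracy-exposed": 0.15, "curious": 0.20},
)

# After a civic-information nudge the user actually lingers on, weight curiosity
# and activism up and disengagement down (signal strengths are assumptions).
persona.update({"curious": 1.8, "activist": 1.3, "disengaged": 0.6})
print(persona.states)
```

Design decisions (what to surface, when to add friction) would then key off the whole distribution and how it moves, not off a single static label.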

The Broader Implication: Building With People, Not Just For Them

This isn’t merely about tweaking product roadmaps. It’s about fundamentally rethinking how we build systems that impact human lives. HAI/UX shifts the paradigm toward:

  • Inclusion: Not a box to tick, but a dynamic, shared governance process.
  • Accountability: Researchers as proactive watchdogs, embedded guardians, not just detached observers.
  • Trust: Built not through slick PR campaigns, but through transparency, deliberate slow thinking, and a commitment to design justice.

The Call to Action: From Sprints to Stewardship

If we fail to evolve our product strategy, AI will undoubtedly outpace our ability to humanely manage its profound impact. The time for naive optimism or blind acceleration is over.

This means a collective re-orientation:

  • Funders must recognize and invest in UX and ethical research as core infrastructure, not disposable overhead.
  • Founders must treat behavioral researchers as product architects, not just focus group facilitators.
  • Engineers must learn to incorporate friction as a deliberate feature, not merely a bug to be smoothed away.
  • Designers must shift their fundamental question from “What’s the fastest way to get here?” to “What’s the safest and most equitable way to bring everyone with us?”

We don’t need to move slower. We need to move smarter. And critically, we need to move with humans firmly at the wheel, not tied up in the trunk.

Idea!!!

Winning Ourselves to Death? AI, Finite Thinking, and the Urgent Quest for an Infinite Game

“There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning. An infinite game for the purpose of continuing the play.” — James P. Carse

Most of us are wired to win. We chase victories in our jobs, in the market, in elections, even in the fleeting dopamine hits of social media likes. The scoreboard, in its many forms, can become a kind of scripture. But here’s a thought that might keep you up at night: what if this relentless instinct to win is the very thing that threatens to end the game entirely?

This isn’t just some philosophical chin-stroking. It’s rapidly becoming a survival question – for our systems, our societies, and, increasingly, for the very machines we’re building. The nature of the game has shifted. And so, disturbingly, have some of the players.

Finite vs. Infinite: What Game Are We Actually In?

Let’s get Carse’s distinction clear. Finite games are familiar: they have set rules, known players, clear winners and losers, and a definitive endpoint. Think of a football match, a game of chess, or the quarterly earnings report. Someone triumphs, the whistle blows, the books close.

Infinite games, however, evolve as they’re played. The primary goal isn’t to achieve a final victory, but to ensure the game itself continues. Think of science, democracy, or the grand, messy project of civilization. There’s no “winning” science; there’s only advancing understanding. You don’t “win” democracy; you work to perpetuate it.

The tragedy of our modern moment? We’re caught playing finite games within inherently infinite contexts. When companies sacrifice long-term trust for a fleeting quarterly gain, or when political actors torch foundational institutions for a viral soundbite, they’re mistaking a single checkpoint for the finish line. They’re playing by the wrong rulebook.

Finite players obsessively ask, “How do I win this round?” Infinite players ponder, “How do we ensure the game can continue for everyone?”

The Seduction of the Short Game (And Why It Feels So Rational)

Now, let’s be clear: short-term thinking isn’t always born of malice or stupidity. Sometimes, it’s a perfectly rational response to a game that feels rigged or broken.

  • If the market seems fundamentally unfair, cashing out early feels smart.
  • If societal trust is cratering, an “every person for themselves” mentality becomes a grimly logical defense.
  • If the future looks bleak, why bother planning for it?

This is classic game theory playing out in a low-trust environment. In the Prisoner’s Dilemma, defection becomes the dominant strategy when faith in the other player evaporates and the “shadow of the future” – the expectation of future interactions – disappears.
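
A quick back-of-the-envelope calculation shows just how literally the “shadow of the future” governs this. Using the standard textbook payoffs and an opponent playing grim trigger (cooperate until crossed, then defect forever), cooperation only beats a one-time defection once the probability that play continues is high enough. The sketch below is a simplification for illustration, not a model of any real market or institution.

```python
# Standard Prisoner's Dilemma payoffs:
# T = temptation to defect, R = mutual cooperation, P = mutual defection, S = sucker.
T, R, P, S = 5, 3, 1, 0

def value_of_cooperating(delta: float) -> float:
    """Expected payoff of always cooperating against a grim-trigger opponent,
    when each round continues with probability delta (the 'shadow of the future')."""
    return R / (1 - delta)

def value_of_defecting(delta: float) -> float:
    """Defect now, collect T once, then endure mutual defection (P) for as long as play continues."""
    return T + delta * P / (1 - delta)

for delta in (0.0, 0.3, 0.5, 0.7, 0.9):
    coop, defect = value_of_cooperating(delta), value_of_defecting(delta)
    best = "cooperate" if coop > defect else "defect"
    print(f"shadow of the future = {delta:.1f}: cooperate={coop:5.1f}, defect={defect:5.1f} -> {best}")
```

With these payoffs the crossover sits at a continuation probability of 0.5: below it, defection is the cold-blooded best response; above it, keeping the game alive pays.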

Short-termism, then, isn’t the disease itself. It’s a glaring symptom of collapsing infinite games.

Enter the Machine: When AI Sits Down at the Table

Artificial intelligence and automation aren’t just faster, more efficient players; they are fundamentally different kinds of players. And this changes everything.

  1. AI Doesn’t Bluff, Forgive, or Flinch (or Care About Your Feelings). AI, in its current iterations, doesn’t pursue a legacy. It has no concept of dignity, honor, or empathy. It plays the game it’s programmed for – and it plays it with an unyielding focus on the defined “win” condition.
    • AI doesn’t pause to question the ethics of the rules.
    • AI won’t hesitate to exploit any loophole, no matter how damaging to the spirit of the game.
    • AI doesn’t offer mercy, grace, or a “Kumbaya” moment. Its internal query isn’t, “Should this game continue for the good of all?” It’s, “Am I optimizing for my programmed objective?”
  2. Automation: Making Finite Thinking Scalable and Frighteningly Efficient. Automation acts as a massive amplifier for extractive, finite logic:
    • Recommendation algorithms optimize for immediate engagement, not nuanced truth or long-term well-being.
    • Hiring models, trained on past data, can maximize conformity, not spark innovation through diversity.
    • Predictive policing systems prioritize statistical efficiency, potentially at the dire cost of justice and community trust.

We’ve inadvertently engineered a terrifying feedback loop of optimized short-termism. As one astute observer might put it: an AI trained solely on short-term KPIs is a sociopath with a perfect memory and infinite patience.

Game theory was originally built to model human (ir)rationality. But what happens when non-human intelligence, operating without human biases or biological limits, enters the arena?

  • It never forgets a slight or a strategy.
  • It doesn’t fear punishment in any human sense.
  • It can simulate billions of strategic iterations in the blink of an eye.

In a world increasingly populated by these synthetic actors:

  • Reputation can become mere lines of code, easily manipulated or faked.
  • Strategy devolves into pure, cold mathematics.
  • Cooperation, if not explicitly incentivized as a primary objective, becomes a rounding error. Even elegant cooperative strategies like “Tit-for-Tat” begin to break down when your opponent never sleeps, never errs, and never has a crisis of conscience.

We evolved playing games for survival. Now, we’re in a meta-game against machines we ourselves built to win, often without deeply considering the implications of their victory.

The Human Predicament: Stuck in Finite Loops, Designing Even Faster Dead Ends

So here we are: humans, often trapped in our own finite feedback loops, now designing AI that plays even shorter, more ruthlessly optimized games.

  • Markets risk becoming zero-sum speedruns, where milliseconds dictate fortunes.
  • Politics can collapse into frenetic meme cycles, devoid of substance.
  • Even human relationships risk decaying into transactional exchanges, evaluated for immediate payoff.

And here’s the rub: trust is built slowly, painstakingly. AI operates at lightning speed. We are, in essence, optimizing ourselves out of the very qualities that sustain infinite games: grace, forgiveness, moral memory, and the capacity for uncalculated goodwill.

In a world increasingly mediated by machines, perhaps the most radical, most human act is to consciously, stubbornly, choose to play the long game.

Designing for Continuity: The New Meta-Game We Must Master

If we want to navigate this profound transition without engineering our own obsolescence, we need to fundamentally redesign the games we play and the systems that enforce them.

  1. Weave the Infinite into Our Digital DNA: We must demand and build multi-objective AI – systems that explicitly reward cooperation, sustainability, and the flourishing of the game itself, not just narrow, easily measurable wins like clicks or conversions. Incentivize co-play and robust reputation, not just digital conquest (a rough sketch of this weighting follows this list).
  2. Engineer Trust, Don’t Just Preach It: Talk is cheap. We need systems that foster trust by design. Think decentralized identity protocols, verifiable credentials, and transparent auditing of incentives right down to the protocol layer.
  3. Redefine What ‘Winning’ Even Means: It’s time for a profound shift in our metrics of success:
    • From short-term ROI to long-term Return on Relationship.
    • From market domination to societal durability.
    • From Minimum Viable Products to Multi-Generational Visions.
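
As a rough sketch of what that first point means in practice, compare a single-metric optimizer with one that also weights the continuation of the game. The candidate actions, scores, and weights below are invented for the example; a real system would need measured proxies and far more care, but the structural difference is the point.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    engagement: float        # short-term clicks / watch time
    trust_impact: float      # estimated effect on long-run user trust
    ecosystem_impact: float  # estimated effect on creator / community health

# Hypothetical candidates a recommender could surface next.
candidates = [
    Action("outrage-bait clip", engagement=0.95, trust_impact=-0.60, ecosystem_impact=-0.40),
    Action("balanced explainer", engagement=0.55, trust_impact=0.50, ecosystem_impact=0.35),
    Action("friend's update", engagement=0.40, trust_impact=0.30, ecosystem_impact=0.20),
]

def finite_score(a: Action) -> float:
    """Single objective: win this round."""
    return a.engagement

def infinite_score(a: Action, w_eng=0.4, w_trust=0.35, w_eco=0.25) -> float:
    """Multi-objective: weight continuation of the game alongside the immediate win."""
    return w_eng * a.engagement + w_trust * a.trust_impact + w_eco * a.ecosystem_impact

print("finite optimizer picks:  ", max(candidates, key=finite_score).name)
print("infinite optimizer picks:", max(candidates, key=infinite_score).name)
```

The finite optimizer reliably picks the outrage-bait; the blended objective does not, and deciding what the weights should be is itself the kind of governance question finite players never ask.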

Remember, the most valuable asset in any infinite game is a player who is committed to keeping the game going.

The Infinite Game Is a Choice, Not a Foregone Conclusion

AI doesn’t inherently care about meaning, purpose, or the continuation of the human experiment. That, my friends, is squarely on us.

We are the current custodians of the truly infinite games: democracy, societal trust, love, ecological balance. These cannot be “optimized” into oblivion. They can only be nurtured, protected, and adapted.

So, the next time you’re faced with a decision, a strategy, a temptation to score a quick “win,” pause and ask yourself:

  • What game am I really in right now?
  • Who wrote these rules, and do they serve the continuation of play?
  • Will this move, this choice, this action, keep the game alive and healthy for others, for the future?

The future doesn’t belong to those who simply master the current round. It belongs to those who understand that the end of a round is never the end of the game.

These are the questions I find myself wrestling with. What are yours? The game, after all, continues. And how we choose to play next might make all the difference.

#MentalNote

The Cost of Could Be: How We Price Potential in Money, Society, and Love

“Potential is a promissory note from the future. We spend it daily—on people, projects, even ourselves—without always asking what it’s really worth.”


We talk a lot about value these days. Market value. Cultural value. Social value. But the one that feels the most dangerous—and the most sacred—is potential.

We build companies, cities, movements, and relationships around what might be.
We fall in love with people not for who they are now, but for who they could become.
We raise capital off of pitch decks, not profits.

In every part of life, we’re assigning worth to futures that haven’t happened yet.

But very few people ever pause to ask:
How much is potential really worth? And who gets to decide?


I. The Financial Side of Hope

I’ve sat in rooms where people raised $10M on a slide deck. No product. No traction. Just a compelling story and the right networks. It’s not a scam—it’s the norm.

This is what venture capital is: a belief engine.
You’re not investing in now—you’re investing in what might be. Optionality. Trajectory. The next unicorn.

But potential in business is never neutral. It’s dressed in Ivy League sweatshirts, polished pitch decks, and proximity to power. We reward people not just for their ideas—but for how much their ambition looks like success.

That means others—often more grounded, more creative, more resilient—get overlooked. Not because they lack potential. But because they don’t fit the script investors are used to betting on.

So we overpay for the obvious, and underfund the underestimated.
That’s not strategy. That’s bias.


II. Social Capital and the Gatekeepers of Belief

Potential gets priced in society, too.

A young woman from a top school is called “promising.”
A young man from Ajegunle with the same drive is told to “be realistic.”

Two kids with the same brain. Two wildly different valuations.

We pretend we’re meritocratic, but we’ve engineered a world where potential is often just recognition dressed up as intuition. We believe in people who make us feel comfortable. Who speak our language. Who mirror our idea of excellence.

So potential becomes a form of privilege.
Some people get to be a “work in progress.” Others have to arrive fully formed or not at all.


III. Relationships as Emotional Venture Capital

Let’s make this personal.

Dating is one of the most emotionally expensive markets for potential. We don’t just fall for who people are—we fall for who we believe they could become.

  • She’s a little guarded now, but once she heals, she’ll open up.
  • He’s figuring things out, but he’s brilliant. Just give him time.
  • We’ve had a rough start, but something tells me this could be it.

This is fine—at first.
But here’s the tension: you can’t build a relationship on a pitch deck.

You need a product. You need traction. You need behavior.

Too often, one partner becomes the investor, the coach, the emotional scaffolding. Meanwhile, the other is still “working on themselves.” And so we mistake effort for intimacy, and potential for partnership.

Eventually, someone checks their emotional bank account and realizes they’ve been the only one funding growth.


IV. What Most People Miss About Potential

Let me be blunt. Here’s what no one tells you about potential:

  • Potential depreciates. It loses value if it’s not acted on. Belief without execution just becomes burnout.
  • We confuse style for substance. People with charisma, credentials, or the “right story” often get funded over those with real grind and quiet power.
  • The ability to fail is a privilege. If you have family money, citizenship, or social capital, your potential gets subsidized. You get to stumble and still be “promising.” Others don’t get that luxury.
  • We stay too long in potential-based relationships. Because we’re afraid of being wrong about what we hoped for. But staying doesn’t fix it. Growth does.

V. How We Can Rethink Potential

This isn’t a call to stop believing. If anything, I think belief is the most radical form of action. But it should be disciplined belief—backed by curiosity, accountability, and clarity.

So here’s what I’ve learned:

  • In business: Bet on people others overlook. Often, the ones without polish are the ones with fire. Look for pattern-breakers, not pattern-matchers.
  • In love: Don’t date someone’s potential. Date their patterns. What they do, not just what they dream about doing.
  • In life: Be honest about your own. Your potential is real. But you don’t have forever. Trade hopes for habits.

Final Thought

We’re all speculating on something.
But the future doesn’t belong to those who sell the best story.
It belongs to those who can close the gap between what could be and what is.

So the next time you’re deciding whether to invest—money, time, or your heart—ask yourself:

Am I in love with the future?
Or am I just afraid to confront the present?

Me

Because the world doesn’t need more belief.
It needs better bets.


If this resonated…

  • Subscribe to Chika.io for new essays every month
  • Share this with someone stuck between what is and what could be
  • Reflect: Where are you overpaying for potential in your life right now?