The Curious Mind: American Exceptionalism Was A Facade, Hidden Technological Laws, Are We Last Generation, China Will Be Dominant, History of American Capitalism, The AI Race, Your Money Or Your Life
June 4, 2025
You are a smart, curious person, but short on time and surrounded by noise. The Curious Mind tries to offer the best signal-to-noise ratio across markets, technology and the good life. We hope to surprise and even inspire you with beautiful new ideas.
If you missed last week’s discussion: The Three Traps, Playing The Right Game, The Power of Stories, The Empire of AI, The Great Reconnection, Surviving Drawdowns, Your Money Or Your Life...
Believe it or not, that “♡ Like” button is a big deal – it signals this publication’s value to new visitors. If you got value out of reading, please let others know!
Quotes I Am Thinking About:
“In the end these things matter most: How well did you love? How fully did you live? How deeply did you let go?”
- Siddhartha Gautama
A. A Few Things Worth Checking Out
1. American Exceptionalism was a Fiscal Facade.
Marko Papic, Chief Strategist at BCA Research, spoke to Monetary Matters in a conversation titled: American Exceptionalism was a Fiscal Facade.
The 5 BIG IDEAS:
Trump's Tariff Theater Will End with a 10% Consumption Tax: Trump's dramatic tariff threats are negotiating theater following his predictable "maximum pressure" playbook that always produces deals. The likely endgame by September: a 10% across-the-board import tariff that functions as a hidden consumption tax on Americans. This level is sweet-spot economics - high enough to generate revenue for extending the 2017 tax cuts, but low enough that consumers will still buy imports and the bond market won't revolt. Markets will actually be relieved when the dust settles on this "nothing burger" compared to the apocalyptic scenarios initially threatened.
America's Artificial Growth High is Ending - Time to Invest Elsewhere: The US hasn't been exceptional due to innovation or superior economics, but because of an unsustainable fiscal binge - running 7-8% deficits that artificially inflated growth and asset prices. This party is ending. While Trump's new tax bill extends existing cuts, it provides no new stimulus and represents fiscal restraint compared to campaign promises. Meanwhile, Trump's economic bullying is forcing other countries to finally address decades of structural problems - Canada building new pipelines, Europe completing its single market, China boosting consumption. As America diets, the rest of the world will feast.
Ignore Recession Data When Policy is the Culprit: Traditional recession investing is useless when the downturn stems from policy uncertainty rather than economic fundamentals. The S&P 500 rose 20% in Q2 2020 despite GDP falling 28% because smart money trades policy clarity, not data. Once tariff policy crystallizes (expected post-Labor Day), markets will bridge over the "canyon of bad data" caused by front-loaded imports unwinding. Trump has already shown he lacks appetite for real economic pain, making quick deals even with adversaries like China.
Geopolitical Stability Paradoxically Hurts US Assets: Trump's realist foreign policy approach - accepting spheres of influence and making pragmatic deals - creates more stable global equilibria. This removes a crucial but underappreciated tailwind for US assets: the "flight to safety" during geopolitical chaos. For five years, wealthy Europeans ditched Siemens for Netflix during crises. As Trump brokers deals in Ukraine and the Middle East instead of prolonging conflicts, foreign capital loses a key reason to hide in American assets, reducing demand for the dollar and US stocks.
Deficit Crisis Will Force Painful but Necessary Discipline: Despite current dysfunction, voter anxiety about deficits will compel bipartisan action within five years, similar to how the unlikely Tea Party-Obama alliance cut deficits from 10% to 3% of GDP. Polling shows Americans still support reducing government costs even after DOGE's stumbles. This will require the full menu of fiscal pain: tax increases, spending cuts including defense bloat, and structural reforms. The alternative - continuing 7-8% deficits indefinitely - is mathematically impossible, making this correction inevitable rather than optional.
2. The Hidden Laws Driving Our Technological World
This great Exponential View piece explores the "technological laws" - observed patterns like Moore's Law - that explain why everything seems to be getting faster, smaller, and cheaper simultaneously.
It will help you understand technological progress at a deeper level.
I hadn’t heard of 80% of these, have you?
3. Are We The Last Generation?
Once we achieve AGI, what is the role of humans in society? What happens to entrepreneurship, human labour, and capital markets? Heck, what happens to all of us?
It’s a big question. My heart says the potential of humanity is far beyond AGI, because humans are more than just intelligence. We have hearts, emotions, agency.
This piece, titled “By default, capital will matter more than ever after AGI”, argues:
Capital Becomes All-Powerful While Human Labor Becomes Worthless: Currently, money struggles to buy the best human talent due to scarcity, difficulty judging ability, and the fact that top performers often can't be "bought out" due to their intrinsic motivations. But AI flips this equation completely—AI talent can be instantly cloned, is easier to evaluate, and exists specifically to be purchased. This means capital will become dramatically more effective at achieving results in the real world, while human labor loses almost all economic value. The wealthy will gain unprecedented power to convert money directly into superhuman capability.
Institutions Will Stop Caring About Humans Because They Won't Need To: The essay's key insight is that states and other powerful institutions currently care about human welfare not just from benevolence, but because they need humans to be competitive. States need educated workers, prosperous consumers, and buy-in from their populations to maintain power. Once AI can provide all the labor that drives economic and military strength, this alignment breaks down. Institutions will face fewer incentives to invest in human flourishing and humans will have no leverage (like strikes or coups) to demand better treatment.
We're Living Through the Last Era of Human Ambition: The essay argues this is "the dreamtime"—the final period when ambitious individuals can still rise from nothing to change the world through entrepreneurship, intellectual work, science, or other paths. Once AI surpasses humans in these domains, there will be no more routes to "outlier success." Society will become permanently stratified based on who owned capital when AI arrived, with no possibility for social mobility. Future generations will live in the shadow of pre-AI achievements, making this potentially the last moment in history when human ambition can reshape the world.
What do you think?
The inverse to this piece is a very recent lecture by Prof. John Vervaeke, which explores the future of AI through a lens few dare to examine: the limits of intelligence itself.
He unpacks the critical differences between intelligence, rationality, reasonableness, and wisdom—terms often used interchangeably in discussions around AGI.
Drawing from decades of research in cognitive science and philosophy, John argues that while large language models like ChatGPT demonstrate forms of generalized intelligence, they fundamentally lack core elements of human cognition: embodiment, caring, and participatory knowing.
The 4 BIG IDEAS:
Intelligence, rationality, and wisdom are three distinct and increasingly important capabilities. John distinguishes between intelligence (solving problems across domains), rationality (systematically overcoming self-deception), and wisdom (coordinating all aspects of ourselves toward truth, goodness, and beauty). While LLMs are approaching human-level intelligence in many areas, they lack rationality and wisdom entirely. Crucially, being more intelligent doesn't automatically make someone more rational or wise - you can be brilliant but still foolish, which is why this distinction matters as AI becomes more powerful.
Humans possess four types of knowing, while AI only masters one. We have propositional knowing (facts and beliefs), procedural knowing (embodied skills), perspectival knowing (conscious awareness of what it's like to be in situations), and participatory knowing (how we co-create meaning with our environment). LLMs excel at propositional knowledge but completely lack the other three types. They don't anticipate the actual world directly - they predict how we would write about anticipating the world, which is a crucial limitation that prevents them from achieving true general intelligence.
Real intelligence requires biological caring and environmental coupling: John argues that genuine intelligence emerges from being "autopoietic" - self-making living systems that must care about information to survive. Humans filter overwhelming amounts of data by focusing on what's relevant to our biological and cultural needs. LLMs process information without caring about it at all, which means they can't truly participate in the meaning-making that defines real intelligence. This caring is grounded in our embodied existence and constant interaction with our environment.
Humanity has a moral obligation to cultivate wisdom to properly guide AI development. Since AI systems learn from human-generated data and human feedback, they can only become as wise as the humans who train them. If we want aligned AI that serves human flourishing rather than causing harm, we ourselves must become more rational and wise. This isn't achieved through consuming more information but through actual practice and cultivation of wisdom. The speaker sees this as an urgent social responsibility that will determine whether AI becomes beneficial "silicon sages" or dangerous "mechanical monsters."
4. In The Future China Will Be Dominant And The US Irrelevant
Demetri Kofinas at Hidden Forces spoke with Kyle Chan—an expert on the Chinese economy and China’s industrial policy—about why he believes China is poised to dominate high-end technology and manufacturing, and why factories worldwide will reorganize their supply chains with China at the center, as the world’s preeminent technological and economic superpower.
He wrote a NYT Op-Ed with the same title.
Kyle Chan is an American postdoctoral researcher in the Sociology Department at Princeton University and an adjunct researcher at the RAND Corporation, a US think tank. He is also a 2025 fellow with the Penn Project on the Future of US-China Relations. His research focuses on industrial policy, clean technology, and infrastructure in China and India. His work has been published in peer-reviewed academic journals, including Current Sociology, Asian Survey, and the Chinese Journal of Sociology.
He has testified as an expert before the U.S.-China Commission and writes a popular newsletter called High Capacity focused on current issues in industrial policy, technology, and economic competition, particularly in China.
The 5 BIG IDEAS:
The Myth of Pure Free Markets vs. the Reality of Hybrid Systems: The Western economic establishment fundamentally misunderstood how prosperity is actually created. There's no such thing as a "pure free market"—even America's success was built on massive government coordination through agencies like DARPA, FDA, EPA, and public research institutions that created the foundation for private innovation. The real choice isn't between free markets and central planning, but between smart coordination and chaotic fragmentation. China learned this lesson and developed a hybrid system that harnesses market competition while providing strategic direction. America's mistake was believing its own mythology about unfettered capitalism while forgetting the critical role government played in creating Silicon Valley, the internet, and modern pharmaceuticals.
China's Industrial Policy is Strategic Sequencing, Not Central Planning: China's approach isn't top-down control but rather sophisticated timing and coordination across multiple tools and phases. They start with technology acquisition through joint ventures, then protect domestic players during their "infant industry" phase, then gradually expose them to competition as they mature. The key insight is sequential staging—knowing when to protect, when to compete, and when to push companies into global markets. This worked in telecommunications (Huawei) and electric vehicles (BYD), but failed in traditional automobiles because the sequencing was wrong. The success comes from treating industrial policy like conducting an orchestra—coordinating different instruments (companies, research institutes, local governments) to play together rather than commanding each note.
Manufacturing Clusters Create Irreplaceable Competitive Moats: The real source of competitive advantage isn't just cost—it's the ability to solve complex manufacturing problems at speed and scale through geographic clustering. When Apple needs to iterate on iPhone designs, China can mobilize suppliers for screens, batteries, precision machining, and assembly within hours in the same region. This creates what Steve Jobs recognized: manufacturing competency that "is never coming back" to other countries because it's not just about individual factories but entire ecosystems of interconnected expertise. These clusters become self-reinforcing—the more companies locate there, the stronger the ecosystem becomes, creating a "gravitational pull" that's extremely difficult to replicate elsewhere.
America is Dismantling Its Core Competitive Advantages Through Self-Sabotage: The U.S. possesses unique structural advantages—the world's best universities, cutting-edge research institutions, entrepreneurial culture, global talent magnet, and financial system dominance. But current policies systematically undermine each of these strengths: blocking foreign talent from universities, cutting research funding, creating policy uncertainty, and withdrawing from global engagement. The Harvard foreign student ban exemplifies this self-destruction—America is voluntarily giving up its talent acquisition advantage. Meanwhile, China actively tries to replicate American institutions (even naming their organizations after U.S. agencies) while America attacks its own. This isn't just policy failure; it's strategic suicide.
The "Valley of Death" is Where Industrial Leadership is Won or Lost: The critical moment in technological competition isn't the initial research breakthrough—it's successfully transitioning from prototype to commercial scale. Both countries develop promising technologies, but China has become expert at providing bridge financing, strategic support, and market protection during this vulnerable commercialization phase. America consistently fails here: A123 batteries pioneered lithium-ion phosphate technology that now powers Chinese EVs, but the company failed due to lack of support during scaling. The lesson isn't about picking winners and losers, but about providing systematic support during the transition from research to manufacturing at scale. Without solving this "valley of death" problem, America will continue inventing technologies that other countries commercialize and dominate.
B. Americana: History of American Capitalism
I don’t think I have ever been as excited to read a 500-plus-page history book as I was to read Americana: A 400-Year History of American Capitalism by Bhu Srinivasan. It was published in 2017 and was named a Best Book of the Year by The Economist.
It is a beautiful and sweeping narrative history told through the lens of capitalism and innovation rather than people or events.
Rather than following a traditional chronological approach, Srinivasan organizes his history around "Next Big Things" — the inventions, techniques, and industries that drove American history forward: from the telegraph, the railroad, guns, tobacco, radio, banking, flight, suburbia, and sneakers, across 35 thematic chapters, with each chapter being a mini-history lesson filled with surprising details and connections.
You get to learn about unexpected connections across centuries, for example how Andrew Carnegie's early job as a telegraph messenger boy paved the way for his steel empire, how gunmaker Remington reinvented itself to sell typewriters, and how a 1950s infrastructure bill led to the creation of KFC!
He also brings a distinct perspective to the book, drawing on his own experience: he immigrated to America from India with his family at age eight, and was later a venture-backed entrepreneur with a front-row seat to the Internet boom of the late 90s.
C. Your Money Or Your Life
In many families, money continues to be a source of friction. It can either be an arena for power plays or simply a taboo topic.
One of the experts who has helped me, and who came highly recommended, is Nadjeschda Taranczewski.
She is a psychologist and executive coach with expertise in guiding you to explore and dismantle the unconscious narratives around money that keep you locked in unhealthy patterns of saving, spending and investing.
I’ve asked her to present to a few of us in a 90-minute workshop on June 23rd, 11:30am ET / 4:30pm UK, covering:
The three types of money addictions and their consequences,
Processes for reflecting on your personal relationship to money,
Ways to connect deeply with others around the taboo topic of money.
During the workshop, we will explore questions such as:
What would happen if you stopped being afraid of having too little or too much money?
Why does money show up as a source of conflict?
How can you invest money with love and what difference does it make?
To know more about this approach, here are a few resources to consider before the workshop:
Read: Tom Morgan on the Real Value of Money.
Watch: Short introduction to money work by Nadja.
For confidentiality reasons, we will not allow AI note takers.
Numbers are limited to 30 people; first priority goes to The Brain Trust.
I recommend joining.
D. The Science and Technology Section:
1. BOND, the global technology investment firm headed by the legendary Mary Meeker, released its 340-page Trends report focused on Artificial Intelligence.
Here are my 4 favourite slides, plus the table of contents.
You can flip through it in 30 mins or spend 5 hours on it!
The Table of Contents:
If we do get AGI in 10 years, then this might be what ChatGPT can do. Interesting!
For all the talk of costs and inefficiencies, AI costs are falling faster than electricity or computer memory.
The Future of Work with AI:
What makes me optimistic is that, just as the internet and mobile brought information and intelligence to the masses, AI tools do even more of that. I don’t worry about job losses, as long as we all learn to use these tools.
2. The Race to Artificial General Intelligence
Google held its developer conference last week, unveiling a number of new models including Project Astra, Gemini 2.5, Imagen 4 and Veo 3.
The most interesting session was a discussion with Demis Hassabis (CEO of Google DeepMind) and Sergey Brin (co-founder of Google). They discussed the frontiers of AI research.
The 5 BIG IDEAS:
The AGI definition problem is holding back the entire field, but the timeline is surprisingly concrete. Hassabis makes a crucial distinction that most miss: there's "typical human intelligence" (what 90% of people can do, which is economically important) versus true AGI (matching the theoretical limits of human brain architecture—what Einstein, Mozart, or Marie Curie achieved). Current systems fail basic consistency tests that any human could spot in minutes, while true AGI should be robust enough that expert teams need months to find flaws. Despite this high bar, both leaders predict AGI within 5-10 years, with Brin betting slightly before 2030 and Hassabis slightly after. The key insight: we need 1-2 more fundamental breakthroughs, not just incremental improvements.
Test-time compute and reasoning represent a new scaling paradigm that could be more powerful than training-time scaling. The "thinking" paradigm—where AI systems reason before responding—isn't just an incremental improvement but a fundamental shift. Hassabis quantifies this with AlphaGo data: the thinking version was 600+ ELO points stronger than the immediate-response version, equivalent to the difference between a decent player and a world champion. DeepThink's parallel reasoning processes checking each other represents "reasoning on steroids." This suggests that giving AI systems more time to think at inference might yield bigger gains than simply training larger models.
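To put that 600-point figure in perspective, here is a minimal sketch using the standard logistic ELO formula (the function name is my own, not from the talk), which converts a rating gap into an expected win probability:

```python
# Convert an ELO rating gap into an expected win probability using the
# standard logistic formula: E = 1 / (1 + 10^(-gap/400)).
def elo_win_probability(rating_gap: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

if __name__ == "__main__":
    # A 600-point gap, as Hassabis cites for AlphaGo's "thinking" version:
    print(f"{elo_win_probability(600):.3f}")  # → 0.969
```

In other words, a 600-point gap means the stronger player is expected to win roughly 97% of games, which is why Hassabis compares it to the distance between a decent player and a world champion.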
The scale versus algorithm debate has a clear winner, but both are necessary. While massive infrastructure expansion is inevitable (data centers for serving billions of users, plus inference-time compute for reasoning), Brin argues convincingly that algorithmic advances will be more significant than computational ones. Historical analysis of problems like N-body simulations shows algorithms beating Moore's Law improvements. The winning strategy isn't either/or but sequenced: scale current techniques to their limits while simultaneously investing in next-generation algorithmic breakthroughs that can intersect with that scale for 10x leaps.
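The N-body point can be made concrete with a back-of-the-envelope sketch (my illustration, not Brin's actual analysis): doubling hardware buys a fixed 2x, while switching from direct O(n²) pairwise force summation to a Barnes-Hut-style O(n log n) tree method buys a speedup that grows with problem size.

```python
import math

# Rough operation counts for an N-body force calculation.
def direct_ops(n: int) -> float:
    return float(n) * n  # every pair of bodies interacts: O(n^2)

def tree_ops(n: int) -> float:
    return n * math.log2(n)  # Barnes-Hut-style tree code: O(n log n)

for n in (1_000, 1_000_000):
    speedup = direct_ops(n) / tree_ops(n)
    print(f"n={n:>9,}: ~{speedup:,.0f}x from the algorithm vs 2x from doubled hardware")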
Google's multimodal-first strategy is a calculated bet on embodied intelligence being the path to AGI. While competitors build disembodied voice assistants, Google invested early in visual understanding and physical world interaction, making Gemini multimodal from day one despite added complexity. This wasn't just a product decision but a fundamental belief that AGI must understand physical reality. The payoff is now visible: smart glasses that can see your world, robotics applications, and universal assistants that work in physical space. Hassabis argues the bottleneck in robotics was never hardware but the software intelligence to understand and interact with the physical world—a problem they're now solving.
Self-improving AI systems represent the most important wild card that could compress timelines dramatically. AlphaEvolve—an AI that designs better algorithms and improves LLM training—is an early experiment in recursive self-improvement that could trigger an intelligence explosion. While Hassabis carefully notes this wouldn't be "uncontrolled," the implications are staggering. If AI systems can improve their own algorithms the way AlphaZero learned games from scratch in 24 hours, the path to AGI could accelerate unpredictably. This represents the highest-stakes research area: the difference between a gradual 5-10 year timeline and a sudden, discontinuous jump to superintelligence.