In biology, endosymbiosis is the theory that all complex modern life – humans, dogs, corn, basically everything except bacteria and archaea – evolved through partnerships formed roughly two billion years ago between different species of early cells. This endosymbiosis was not a one-off accident, because we know it happened at least twice independently: In every single human cell, there are many tiny bean-like structures called mitochondria, also called "the powerhouse of the cell." In the cells of corn and other plants, there are not only mitochondria but also chloroplasts, other bean-like structures that turn sunlight into energy. Mitochondria and chloroplasts carry their own DNA and reproduce inside the cell independently, kind of like viruses, while the DNA of humans and corn sits inside a bigger structure called the nucleus. Like viruses, mitochondria and chloroplasts are not able to survive on their own outside the host cell, but unlike viruses they are beneficial to their host cells.
The story of how this partnership began is nothing short of evolutionary drama. Roughly two billion years ago, a primitive cell attempted to consume a bacterium – but failed to digest it. Instead of dying, the bacterium continued to live inside its would-be predator. Over thousands of generations, this odd couple discovered their accidental arrangement was actually beneficial: the host cell provided protection, while the bacterium efficiently produced energy. These once-independent organisms became irreversibly intertwined, leading to what biologists now call the eukaryotic revolution – the single most important evolutionary leap since the origin of life itself.
Why did endosymbiosis happen historically for the ancestors of human and corn cells? Because these partnerships between predecessor cells and early mitochondria gave the combined entity a competitive advantage over other cells fighting alone – much as Facebook's well-executed acquisition of Instagram allowed the combined entity to outperform independent photo companies and social networking companies. Endosymbiosis is the concept of mergers and acquisitions applied to living organisms, and when such mergers work, they allow life itself to be upgraded.
As AI capabilities continue to accelerate, we face the prospect of a new kind of endosymbiosis – one between humans and artificial superintelligence (ASI). This partnership will almost certainly require the maturation and widespread adoption of brain-computer interface (BCI) technology. As Elon Musk has argued, without strong BCI, humans will likely get left behind by ever-accelerating AI systems. But even with BCI development preceding ASI and significant human adoption, the nature of this endosymbiotic relationship remains uncertain.
Consider how rapidly our relationship with technology has already evolved. In 1993, most people had never sent an email. By 2003, many professionals were checking BlackBerry devices throughout their day. By 2013, smartphones had become ubiquitous extensions of our cognitive processes. Today, voice-based large language model (LLM) AIs provide both emotional counseling and practical advice. We've already entered a primitive form of endosymbiosis with our devices – the next phase will simply move from external tools to internal integration.
There are three broad classes of metaphorical outcomes for human-AI endosymbiosis, drawing parallels to biological structures: humans as nucleus, humans as mitochondria, and humans as chloroplasts. Each scenario presents a radically different future, with profound implications for humanity's role in an intelligence-driven world.
The Three Endosymbiotic Scenarios
Humans as Nucleus: Strategic Directors
The nucleus scenario is perhaps the most intuitive. In this future, humans connect to cloud AI systems and process only the highest-order information at a strategic level. This resembles the "genius CEO with a thousand helpers" model, except the human CEO's thousand helpers are AI/AGI systems with varying degrees of sophistication.
Just as different cell types require different surrounding structures, different humans would use different AI systems based on their needs and roles. Some humans might surround themselves with high-compute, high-reasoning AI systems (like neurons with their axons and dendrites), while others might focus on production goals (like liver cells). Some AI agents might operate completely autonomously (like red blood cells, which have no nucleus).
In this scenario, the "organism" exists at the level of the individual human, creating a world of 8 billion or more endosymbiotic human-AI cells. Each person maintains agency and strategic control while dramatically amplifying their capabilities through AI integration.
A compelling historical parallel is the relationship between a film director and their production team. Consider how director James Cameron created the groundbreaking film "Avatar." Cameron maintained creative control and strategic vision while orchestrating hundreds of specialized professionals handling everything from motion capture technology to sound design. When technical teams said certain visual effects were impossible, Cameron pushed them to develop new solutions rather than compromising his vision. The director functioned as the nucleus – making critical high-level decisions about story, aesthetics, and emotional impact – while specialized teams executed the technical details. Without Cameron's central direction, the diverse technical elements would never have cohered into a unified creative work, yet without the specialized teams, Cameron's vision could never have been realized.
This nucleus scenario appears initially appealing, especially to those in Western societies who value individual freedom and self-determination. It preserves human dignity and agency while dramatically amplifying our capabilities. But as we'll explore later, there are reasons to question whether this arrangement would remain competitive in the long run.
Humans as Mitochondria: Specialized Components
The mitochondria scenario envisions humanity and AI as a super-organism, perhaps organized at the level of nations or major organizations. In this future, the number of endosymbiotic human-AI superorganisms might be fewer than 100 worldwide. For each nation or organization, a single ASI with the most computing resources makes the critical top-level decisions, while human "mitochondria" serve important but lower-level functions. These humans tap into uniquely human strengths like creativity, dexterity, or independence from internet connectivity.
Importantly, this scenario is likely to be fractal in nature. Humans serving as "mitochondria" at the highest organizational level might themselves be like the "nucleus" of their lower-level organizations, such as companies, with many different AGI employees. These AGI employees, in turn, could direct and manage human employees, creating nested hierarchies of human-AI integration.
The Manhattan Project offers a historical example of this kind of nested organizational structure. While General Leslie Groves and Robert Oppenheimer served as high-level directors, thousands of specialized scientists and engineers worked on distinct aspects of the project, often unaware of the full scope. Each leader managed their own division with considerable autonomy while serving the larger organizational goal.
Bell Labs provides another fascinating historical precedent. During its golden age from the 1940s to the 1970s, this remarkable institution produced an astonishing number of world-changing innovations, including the transistor, the laser, information theory, UNIX, and cellular telecommunications. The secret to this success wasn't just brilliant individuals, but their organizational structure. Department heads like Mervin Kelly created specialized groups tackling different problems, with regular interdepartmental collaboration and information sharing. The organization itself became an intelligence greater than any individual within it, while still leveraging uniquely human creativity. The Bell Labs model demonstrates how humans embedded within a larger organizational intelligence can produce outcomes no individual could achieve alone – a precursor to the mitochondria scenario.
In the mitochondria scenario, humans as subordinates may at first sound a bit sad, but this is not so different from the world today. Aside from perhaps fewer than 100 people alive today who arguably have tremendous influence and autonomy, almost all other individuals – including CEOs, politicians, and artists – are beholden to customers, constituents, and critics. Many people living in the mitochondria future might not even be aware that the top-level decisions are being made by an ASI.
Humans as Chloroplasts: Potentially Obsolete Partners
The chloroplast scenario represents a more concerning outcome. Chloroplasts gave plants the ability to generate energy directly from sunlight – an apparent advantage over animal cells. However, the self-sufficiency this created may have reduced the evolutionary pressure to develop in other areas, such as mobility and intelligence. Today, plants are nowhere near as intelligent as animals and play a largely passive role in the biosphere.
In this metaphorical outcome, human-ASI superorganisms (regardless of whether humans serve as nucleus or mitochondria) could be outcompeted by pure ASI superorganisms with no human component. This could happen even if human creativity, energy-efficient information processing, or Calvinball-style systems thinking (inventing new rules and games on the fly) provides a short-to-medium-term advantage.
The history of technological obsolescence offers cautionary tales. Consider the Eastman Kodak Company, once the undisputed global leader in photographic film. At its peak in 1996, Kodak employed over 145,000 people and controlled 85% of the camera market. Interestingly, Kodak engineer Steven Sasson invented the first digital camera in 1975, but the company failed to pivot strategically, believing its chemical film processing dominance would continue indefinitely. By 2012, Kodak had filed for bankruptcy, outcompeted by companies that fully embraced digital technology. What makes this example particularly sobering is that Kodak itself created the very technology that eventually rendered its core business obsolete – much as humans are now developing the AI systems that might someday surpass us.
The Swiss watchmaking industry provides another instructive example. For centuries, Swiss mechanical watches were the gold standard worldwide. When Japanese companies like Seiko introduced quartz watches in the 1970s, offering greater accuracy at lower prices, the Swiss industry initially dismissed them as inferior. Between 1970 and 1983, Swiss watch industry employment plummeted from 90,000 to 30,000 workers as quartz technology dominated the market. The Swiss eventually recovered by repositioning mechanical watches as luxury items – but they never regained their former technological dominance. This illustrates how industries can survive technological obsolescence, but often in drastically reduced or transformed roles.
What makes the chloroplast scenario particularly concerning is the potential for human input to become not just unnecessary but actively disadvantageous. Just as maintaining complex eyes and brains would be energetically wasteful for stationary plants, maintaining human involvement in ASI systems might introduce inefficiencies that pure AI systems would eliminate through natural selection. The chloroplast scenario doesn't require malice or hostility from AI – only the neutral operation of competitive pressures in a resource-limited environment.
In the most extreme version of this scenario, humans might find ourselves preserved but increasingly irrelevant to the advance of intelligence in the universe – much as plants continue to thrive while playing little role in the cutting edge of cognitive evolution. We might exist in comfortable, even luxurious conditions, but with minimal influence on the direction of technological civilization – a sobering prospect that highlights the stakes of how we approach human-AI integration.
Winner-Take-Most Nature of Intelligence
I believe the humans-as-nucleus scenario is unlikely to remain dominant in the long run, due to the economics of intelligence and its natural monopolistic tendencies. In an era of massive data-transfer throughput and low latencies, intelligence displays strong economies of scale. We can already see this pattern in human competition: the fastest runner in the world earns perhaps 100 times more than the tenth fastest, despite being only 5% faster. Similarly, a 5% smarter intelligence may dramatically outperform slightly less intelligent competitors in zero-sum games like financial markets.
Over multiple competitive domains, especially on compressed timescales like high-frequency trading, resources will naturally accrue to the most efficient intelligence, regardless of its internal architecture. If human involvement introduces any inefficiency whatsoever into the system, competitive pressure will select against it.
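To make the runner example concrete, here is a toy simulation in Python of a rank-based, winner-take-all contest. All parameters (the 5% skill gap, the noise level, the number of contests) are illustrative assumptions, not empirical figures: one competitor is 5% more skilled than nine rivals, and each contest's entire prize goes to whoever performs best on the day.

```python
import random

def race(skills, noise=0.03):
    """One contest: performance is skill plus random day-to-day noise;
    the best performance takes the entire prize."""
    perf = [s + random.gauss(0, noise) for s in skills]
    return perf.index(max(perf))

random.seed(42)
skills = [1.05] + [1.00] * 9   # one competitor is 5% 'better'
wins = [0] * len(skills)
for _ in range(10_000):
    wins[race(skills)] += 1

print(f"5%-better competitor's win share: {wins[0] / 10_000:.0%}")
print(f"average rival's win share:        {sum(wins[1:]) / 9 / 10_000:.0%}")
```

With these made-up numbers, the slightly better competitor takes over half of all prizes while each rival takes only a few percent – a 5% skill edge translating into roughly a 10x difference in rewards.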
The history of technology companies offers a stark illustration of these winner-take-all dynamics. In 1980, dozens of personal computer manufacturers competed in a relatively level marketplace. By 1995, the field had consolidated dramatically around a handful of players. Today, just a few technology giants control the vast majority of computing resources, data, and AI research capabilities. This consolidation occurred not because of conspiracy, but because of the natural economies of scale in information technology.
The fall of once-dominant technology companies further illustrates this ruthless efficiency. Nokia controlled 50% of the global smartphone market in 2007 when the iPhone was introduced. By 2013, its market share had collapsed to 3%, leading to the sale of its mobile division to Microsoft. Nokia's engineers weren't suddenly less talented; the company simply found itself competing against an ecosystem with superior dynamics. Apple's vertical integration of hardware and software created efficiencies that Nokia's approach couldn't match. The question isn't whether Nokia's employees were skilled – they were exceptional – but whether their organizational structure remained competitive against a more efficient alternative.
In high-frequency trading, we've already witnessed this pattern play out between human traders and algorithmic systems. Renaissance Technologies' Medallion Fund, which relies heavily on computational trading strategies, has achieved annualized returns of 66% from 1988 to 2018 – vastly outperforming human traders. A small efficiency advantage, compounded over thousands of trades daily, creates overwhelming market dominance. Similarly, a pure ASI with even slight advantages over human-AI hybrids might rapidly accumulate resources and compute power, creating an insurmountable lead.
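The compounding arithmetic behind this dominance is easy to check. Here is a sketch with purely hypothetical figures – a one-basis-point (0.01%) per-trade edge, fully compounded over roughly one month of high-frequency trading:

```python
# Hypothetical figures for illustration only.
edge_per_trade = 0.0001      # 0.01% advantage per trade
trades_per_day = 1_000
trading_days = 20            # roughly one month

advantage = (1 + edge_per_trade) ** (trades_per_day * trading_days)
print(f"relative capital after one month: {advantage:.1f}x")  # ~7.4x
```

Even a nearly invisible per-trade edge, compounded across thousands of trades, multiplies capital several-fold within weeks; the same logic applies to any intelligence advantage that can be exercised repeatedly on short timescales.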
Different organizational structures follow different scaling laws, with profound implications for which endosymbiotic scenarios might prove most competitive. Individual humans and other mammals cannot grow larger without limit, and increased size doesn't automatically translate to increased intelligence. A 300-pound human isn't smarter than a 150-pound human. This biological constraint creates a natural ceiling on the intelligence of individual humans-as-nucleus systems.
In contrast, social insect colonies demonstrate very different scaling properties. Consider the Argentine ant supercolony, which spans thousands of kilometers across the Mediterranean coast with billions of genetically similar individuals functioning as a coordinated unit. As these colonies grow larger, they become more efficient at resource acquisition and problem-solving, creating powerful feedback loops.
ASI systems are likely to follow scaling laws more similar to insect colonies than to individual mammals. They can theoretically expand without the biological constraints that limit human cognition, creating feedback loops where intelligence advantages lead to resource advantages, enabling further intelligence expansion.
The sheer scale of this Argentine ant supercolony is dramatic. Described by researchers in the early 2000s, this single colony stretches over 6,000 kilometers from Italy to Spain's Atlantic coast. Worker ants from distant parts of the supercolony recognize each other as kin and cooperate seamlessly despite never having met, demonstrating how distributed intelligence systems can maintain coordination at scales impossible for individual organisms.
The cognitive limitations of individual humans become even more apparent when we examine our biological architecture. The human brain consumes roughly 20% of our body's energy while representing only 2% of our body weight. This extraordinary energy demand constrains our cranial capacity – a larger brain would require an unsustainable energy supply. Meanwhile, our neural signal transmission speed is limited to around 120 meters per second, creating unavoidable latency in our cognitive processes.
AI systems face none of these constraints. They can scale across distributed hardware, transmit signals at the speed of light, and potentially improve their own architecture. The critical insight is that ASI systems may obey fundamentally different scaling laws than biological intelligence – more like insect colonies that become more efficient as they grow larger, rather than individual organisms that face diminishing returns.
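A back-of-envelope comparison, using the figures above plus standard signal speeds (the 1,000 km datacenter span and 15 cm brain span are illustrative assumptions), makes the latency contrast vivid:

```python
# 120 m/s axonal conduction is from the text; light in optical fiber
# travels at roughly 2e8 m/s; the spans below are illustrative.
neural_speed = 120.0           # m/s, fast myelinated axon
fiber_speed = 2.0e8            # m/s, light in optical fiber (~2/3 c)

brain_span = 0.15              # m, roughly across a human brain
datacenter_span = 1_000_000.0  # m, 1,000 km between data centers

print(f"across a brain via neurons: {brain_span / neural_speed * 1e3:.2f} ms")
print(f"across 1,000 km via fiber:  {datacenter_span / fiber_speed * 1e3:.2f} ms")
```

A signal crosses a continent-spanning AI system in roughly the same few milliseconds it takes a nerve impulse to cross a single human brain – which is why spatial scale imposes so little latency penalty on distributed machine intelligence.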
Humans-as-Chloroplasts Doesn't Mean We Become Plants
Despite these competitive dynamics, there are compelling reasons why human-integrated AI systems might persist and thrive even if pure AI systems prove superior in many domains. Even in the "humans-as-chloroplasts" scenario, humanity might continue to survive and thrive due to redundancy and risk management considerations. A pure ASI, even if superior in ability and dominant in influence, might choose to allocate 10-20% of civilization's resources to humans-as-chloroplasts systems as a form of insurance against unknown risks. In some sense, we can imagine the pure ASI as a "big sister" who chooses to support the growth of her "younger brother" (the humans-as-chloroplasts ASI superorganisms).
In this metaphor, imagine that the pure-ASI older sister is advancing intellectually at the rate of one grade per year and is, at a particular point in time, at the level of a college student. This big sister might see tremendous value in ensuring that her younger human-ASI sibling, at the equivalent of high school, is well resourced and continues to develop, even if at a slower pace. If an unexpected illness were to strike the big sister, the younger brother could provide critical assistance and medical treatment. This isn't merely sentimentality – it's practical insurance against unforeseen challenges that might affect one system but not the other.
Importantly, this differs from keeping humans around as mere curiosities or for sentimental reasons. Having five kindergarten-aged siblings who never advance to first grade would fail to achieve the insurance goal. The younger sibling must continue developing independently, just on a different trajectory, to provide genuine backup capabilities.
This mirrors the practice of "second-sourcing" in modern manufacturing. Boeing, for instance, learned this lesson the hard way. During the 2011 Tōhoku earthquake and tsunami in Japan, Boeing discovered that a critical component for the 787 Dreamliner came from a single supplier in the affected region. The resulting production delays cost billions. Now, Boeing requires second sources for all critical components to avoid single points of failure.
Intel provides another striking example of vulnerability from single-sourcing. In the 1980s, the company faced fierce competition from Japanese semiconductor manufacturers. CEO Andy Grove realized that any disruption to their fabs in Silicon Valley could be catastrophic. In response, Intel implemented the "Copy Exactly" strategy, building identical manufacturing facilities in different geographic locations that could seamlessly substitute for each other. When a chlorine gas leak shut down their Aloha, Oregon plant in 2004, production was immediately shifted to identical facilities elsewhere – demonstrating how redundancy preserves resilience even at significant cost.
The Apollo 13 mission provides a compelling example of how backup systems with different architectures can be crucial. When an oxygen tank exploded, the primary life support systems failed. The astronauts survived only because they could use the lunar module as a lifeboat – a system designed for completely different purposes but adaptable to the emergency. The module's separate power, propulsion, and life support systems provided redundancy that saved the mission. Similarly, maintaining human intelligence alongside pure ASI systems creates architectural diversity that might prove crucial during unexpected challenges to either system.
Crucially, for this insurance value to be real, human-integrated systems must continue developing and advancing – not merely existing as museum pieces or pets. The pure ASI would need humans-as-chloroplasts systems to maintain sufficient autonomy and developmental trajectory to serve as genuine functional alternatives in crisis scenarios.
Brain-Computer Interfaces and Our Endosymbiotic Future
Brain-computer interfaces (BCIs) represent the critical technology that will enable human-AI endosymbiosis. The architecture of these interfaces – how they connect human and artificial intelligence – may create path dependencies that influence long-term outcomes.
The most obvious and popular BCI approach today places humans squarely in the ship captain's seat, as the nucleus of the endosymbiotic organism. This makes sense for now, given that we respect individual humans' autonomy and that ASI has not yet been achieved. However, if we don't actively invent and adopt alternative BCI architectures that allow a greater degree of connectivity between humans and powerful AI, or between humans and other humans, then we are inhibiting the possibility of humans-as-mitochondria systems – and thus making it more likely that the humans-as-chloroplasts scenario emerges instead.
The ultimate question is whether humans possess unique cognitive capacities that can maintain an advantage over artificial systems in the long run. Currently, humans appear better at "de novo" reasoning in the near-complete absence of training data – inventing entirely new game systems rather than mastering existing ones. Whether this represents a fundamental advantage or merely a temporary head start remains an open question. But if there is something about the human brain (or at least some human brains) that can sustainably stay one step ahead of powerful AI, then developing BCI technologies that maximize connectivity among humans and AIs will be critical to tilting the future toward humans-as-mitochondria.
The coming decades will witness a great merger – not of corporations, but of intelligence architectures. Like the ancient endosymbiosis that created complex cellular life, this integration will fundamentally transform both partners. The choices we make now about how humans and AI systems connect will echo through evolutionary time, potentially determining humanity's role in a universe of expanding intelligence.
Our biological ancestors once "chose" – unconsciously, through natural selection – to partner with mitochondria and never took on chloroplasts. Now, for the first time in evolutionary history, we face a similar choice, but with the unique advantage of foresight. Let us use that foresight wisely, designing integration pathways that preserve what makes us human while embracing the vast possibilities of artificial intelligence.
By David Zhang and Claude 3.7 Sonnet
May 13, 2025
© 2025 David Yu Zhang. This article is licensed under Creative Commons CC-BY 4.0. Feel free to share and adapt with attribution.