Industries Of The Future

I have a few reasons for writing this post.

Firstly, I am partly inspired by books written by other technologists and futurists, including Peter Diamandis’s “The Future Is Faster Than You Think”, Ray Kurzweil’s “The Singularity is Nearer”, and Mustafa Suleyman’s “The Coming Wave”. They showed me what the future could look like, what subjects are worth studying, and what industries we should probably be paying more attention to. They gave me lots of new things to think about, and I hope this post can give readers some knowledge and inspiration too.

The second reason is more straightforward: I’m still studying, and I want to know which technologies will remain relevant in the future. This feels especially important now, with AI making so many skills and trades less secure than they used to be. I don’t want to spend years studying something, only to graduate into a world where that thing has already mostly been automated. By understanding what technologies will power the future, I can work backward and think about how I should spend my time today.

When I don’t know what to do, I read. When I still don’t know what to do, I write, because that helps me properly internalize what I’ve read. Writing down my thoughts solidifies my ideas and helps draw out the half-formed thoughts swimming in the recesses of my mind. And because I have to write concisely and in a relatable manner, I’m forced to understand a technology for what it really is at its core.

Lastly, entrepreneurship is my goal. Knowing what the core technologies of the future are gives me inspiration for what I can build. More importantly, it helps me think about which problems are actually worth solving.

This post has also grown a lot bigger than I’d expected. I’d originally given myself a week to finish it (right after final exams week), mostly by revisiting ideas from the books/articles I’d read and doing some new research. But it now seems like it will take closer to two months to achieve the level of depth I had in mind. Some kind of scope creep I guess. So instead of waiting until everything is “perfect,” I’ve decided to progressively release what I’ve written and get feedback along the way.

Writing this post hasn’t been easy for me. I need to explain these ideas in a way that a potentially non-technical audience can understand, while still making the post informative and intellectually stimulating. This post also became a huge challenge in taxonomy. I can’t just write whatever comes to mind, because then I’d be sitting in a very messy pile of ideas. So, I need some systematic way to categorize and introduce different technologies, so readers can identify common themes and make sense of what I’m trying to say.

A lot of my planning time went into figuring out the right categories. I didn’t want them to be so narrow that they stopped being useful buckets, but I also didn’t want them to be so broad that they lost their meaning. These buckets aren’t readily defined, so I’ve had to invent a few along the way.

I’m sure there are technologies I’ve missed that are absolutely worth writing about. You may also disagree with how I’ve categorized certain things. I’m always open to discussion - email me here! I’m still learning about many new developments every day, and I always enjoy conversations where I come away knowing something I didn’t know before.

With that in mind, and based on the things I’ve been reading, researching, and thinking about (daily news, books, publicly traded companies, investment themes, general trends), I’ve decided to write about the following areas:

  • Computing & Storage Beyond Silicon: Faster, more efficient ways to compute and store information beyond traditional chips
  • Better Connectivity: Faster, more reliable, and more widely accessible networks
  • Energy Generation: Cleaner, denser, and more reliable sources of electricity
  • Space Economy: Servicing, manufacturing, sourcing and utilizing resources beyond Earth
  • Biotech & Medicine: More precise, personalized, predictive, and regenerative approaches to health
  • Robotics: More capable machines that can sense, decide, move, and work in the physical world
  • New Human Computer Interfaces: More natural ways to interact with computers through voice, wearables, XR, muscles, and brain signals
  • New Materials: Materials that can bend light, change shape, self-repair, or respond to their environment
  • Next-Generation Mobility: New ways to move people, goods, and services
  • Food Technologies: New ways to produce protein and nutrients with less dependence on traditional agriculture

There are also other fields I’ve chosen not to cover in this post, but I still think they’re worth watching:

  • Artificial Intelligence: I chose not to cover this because it’s already widely discussed, so there isn’t as much value in me covering it; if I do, it would probably need an entire post of its own
  • Climate Intervention: Technologies that manage or reduce climate risks
  • Blue Economy: New ways to use, protect and build around the oceans
  • Energy Storage: Better ways to store electricity, making clean energy more useful and reliable

I’m also interested in exploring trends on a more macro level, from the perspective of geopolitics, history, and civilization more broadly. I’d probably borrow and expand on ideas from some of my favourite books, including Harari’s “Sapiens”, Kishore Mahbubani’s “Beyond the Age of Innocence”, Strauss and Howe’s “The Fourth Turning”, Tim Marshall’s “Prisoners of Geography” and Ray Dalio’s “The Changing World Order”.

I’ll get to writing about those, someday.

Until then, happy reading.

Key Ideas

This post has become unexpectedly long, so I’ve extracted some key concepts and keywords for each topic. Hopefully, this gives you a clearer sense of the bigger picture. You can always revisit this section if the details start to feel overwhelming.

Computing & Storage Beyond Silicon

  • Quantum Computing: Superconducting, Trapped-ion, Photonic, Neutral-atom, general-purpose, fault-tolerant
  • Neuromorphic Computing: Event-driven, co-located, von Neumann Bottleneck, sparsity, energy efficiency
  • Photonic Computing: Memory-bound, photonic interconnects, wavelength-division multiplexing, hybrid
  • DNA Storage: Long-term archival, DNA sequencing, DNA synthesis

Better Connectivity

  • 6G: Sub-terahertz frequencies, latency
  • Non-Terrestrial Networks (NTN): Direct-to-cell, licensed cellular spectrum

Energy Generation

  • Small Modular Reactors (SMRs): Passive safety, modular construction, high-assay low-enriched uranium (HALEU), Uranium-235, centrifuges, quasi-energy planners, offtake agreements
  • Nuclear Fusion: Deuterium-tritium fusion, Tokamak
  • Space-based Solar Power: Solar irradiance, mobile, space power grid

Space Economy

  • In-space servicing, assembly, and manufacturing (ISAM): Rendezvous, Proximity Operations, and Docking (RPOD), Satellite life extension, use in space, use on Earth, microgravity manufacturing
  • Space resource utilization: In-space use, Earth-return mining, In-Situ Resource Utilization (ISRU), regolith, Helium-3, platinum-group metals
  • Resilient positioning, navigation, and timing (PNT): Jamming, spoofing, space-based PNT, Terrestrial-based PNT, independent PNT

Biotech & Medicine

  • Longevity: Lifespan, healthspan, chronological age, biological age, aging clocks, geroprotective drugs, senolytics, repurposed drugs, cellular reprogramming, translational problem
  • Precision Medicine: Germline genomics, Pharmacogenomics, Biomarker testing, cell/gene/RNA-based therapies
  • Organ Replacement: Xenotransplantation, 3D-printed organs
  • Human Biology Models: Organ-on-chips, Body-on-a-chip, Mechanistic computational models, AI virtual organoids

Robotics

  • AI-powered Autonomous Robots: Autonomous Mobile Robots (AMR), sim-to-real gap, data problem
  • Soft Robotics: Compliance, actuators
  • Humanoids: Vision-language-action (VLA), professional-use humanoids, consumer-use humanoids, vertically integrate, hardware-focused

[WIP for “New Human Computer Interfaces”, “New Materials”, “Next-Generation Mobility”, and “Food Technologies”]

Computing & Storage Beyond Silicon

For decades, almost everything digital has been built on silicon chips. They’ve taken us incredibly far, from personal computers to smartphones to today’s AI systems. But as our demand for computing keeps growing, silicon is starting to run into some tough limits, especially around speed, heat, and energy use.

So, we’re now experimenting with totally different approaches, from computers that run on quantum mechanics, to chips inspired by the brain, to systems that use light, and even DNA as a way to store data. In this section, I’ll go through four of these frontiers: quantum computing, neuromorphic computing, photonic computing, and DNA storage.

Quantum Computing

Quantum computing can solve certain problems far more efficiently than classical machines. For example, Shor’s algorithm can factor large numbers fast enough to threaten today’s cryptographic systems, while quantum annealing is aimed at certain optimization problems like scheduling and logistics. Quantum systems can also natively simulate molecules and materials, which could accelerate drug discovery and chemistry in cases where classical simulations struggle with the sheer number of interactions involved.

That said, today’s quantum hardware isn’t there yet - it’s either too noisy or too small.

By “noise”, I don’t mean a literal whirring sound, but unwanted interference from the qubit’s surroundings. Qubits are extremely sensitive to their environment, so tiny disturbances can cause decoherence (they lose the quantum behavior we need) or lead to incorrect measurements. This can corrupt the final result. Quantum error correction and fault tolerance try to solve this by spreading information across many physical qubits, then repeatedly checking for signs of errors without directly disturbing the information itself.
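
As a loose classical analogy (and only an analogy - real quantum error correction uses stabilizer codes like the surface code and cannot simply copy qubits), the underlying idea of “spread the information redundantly, then keep checking for and fixing errors” looks something like a repetition code with majority voting. A minimal Python sketch:

```python
import random

# Classical analogy only: quantum error correction can't clone qubits, but the
# intuition of "store one logical bit across many physical bits, then check and
# correct" resembles a repetition code with majority voting.

def encode(bit, copies=7):
    return [bit] * copies                      # one "logical" bit -> many "physical" bits

def add_noise(bits, error_rate=0.1):
    return [b ^ (random.random() < error_rate) for b in bits]   # flip each bit with some probability

def decode(bits):
    return int(sum(bits) > len(bits) / 2)      # majority vote recovers the logical bit

noisy = add_noise(encode(1))
print(noisy, "->", decode(noisy))              # usually recovers 1 despite individual flips
```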

As for today’s quantum computers, they are still far too small and too error-prone to be general-purpose machines. Breaking modern encryption like RSA with Shor’s algorithm would require a fault-tolerant quantum computer far beyond what exists today. One widely cited older estimate put this at around 20 million noisy physical qubits for RSA-2048 (a widely used encryption scheme), though newer estimates have brought that number down - some to under 100,000 physical qubits, and others to as few as 10,000 reconfigurable atomic qubits.

Today’s systems, by comparison, are mostly in the hundreds to low-thousands of physical qubits. IBM introduced its 1,121-qubit Condor processor in 2023, while Google’s Willow chip in 2024 has 105 qubits. And when we hear news about quantum computers having thousands of qubits or atoms, those numbers usually refer to raw physical qubits, not fully error-corrected, general-purpose logical qubits that can run long, reliable calculations.

TLDR: There’s still lots of work to be done.

Four of the main hardware approaches in quantum computing today are: Superconducting (IBM, Google, Rigetti), Trapped-Ion (IonQ, Quantinuum), Photonic (PsiQuantum, Xanadu), and Neutral Atoms (Pasqal, QuEra). Each one is trying to solve the same basic problem - how do you build qubits that are controllable, scalable, and not too noisy? They all have their own tradeoffs, and there’s no clear winner yet.

Superconducting systems, used by companies like IBM, Google, and Rigetti, use tiny electrical circuits cooled to near absolute zero. They are fast and relatively mature, but noisy and difficult to scale because they need complex cryogenic infrastructure.

Trapped-ion systems, used by IonQ and Quantinuum, use individual charged atoms held in place and controlled with lasers. Their qubits can be very stable and precise, but the systems are usually slower than superconducting systems, and scaling them up is still a major engineering challenge.

Photonic systems, pursued by companies like PsiQuantum and Xanadu, use particles of light as qubits. They are attractive because photons travel well and may fit with existing semiconductor manufacturing. However, photons do not naturally interact with each other, which makes it harder to build many quantum gates that depend on those interactions.

Neutral-atom systems, pursued by companies like Pasqal and QuEra, use uncharged atoms arranged with laser beams. They look promising for scale because large arrays of atoms can be created and rearranged without having to wire up each qubit individually. But the approach is still relatively new, and it has to prove itself for reliable, general-purpose quantum computing.

The holy grail is a general-purpose, fault-tolerant quantum computer that can run arbitrary algorithms reliably, but we’re still pretty far from that.

That said, quantum computing is no longer just a theoretical field. Companies are already testing it in narrow, early-stage use cases. D-Wave, for example, focuses on quantum annealing and hybrid quantum-classical systems for optimization problems like scheduling, routing, and resource allocation. Quantinuum and IBM are exploring how quantum methods could help model molecules, materials, and chemical reactions.

There’s also real investor interest: Quantinuum raised about $600 million in 2025 at a $10 billion valuation and filed for an IPO in April 2026, while public quantum companies like IonQ, Rigetti, and D-Wave trade at multi-billion-dollar market caps.

Neuromorphic Computing

Neuromorphic computing is a way of designing computers that process information using principles inspired by biological brains. Instead of the standard neural networks used today, it uses spiking neurons to perform computations. These spiking neurons accumulate inputs over time, emit a brief signal (a spike) when their internal state crosses a threshold, and then reset.
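
To make that concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron - a common simplified model of the spiking behavior described above (the parameters are made up for illustration, not taken from any particular neuromorphic chip):

```python
# Minimal leaky integrate-and-fire neuron sketch (illustrative parameters only).
# The neuron integrates input, leaks over time, emits a spike when its potential
# crosses a threshold, then resets.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # leak a little, then integrate the new input
        if potential >= threshold:         # threshold crossed -> emit a spike
            spikes.append(1)
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)               # stay silent: no event, (almost) no work
    return spikes

# Mostly-quiet input: the neuron only "does something" when meaningful input arrives.
print(simulate_lif([0.0, 0.0, 0.3, 0.0, 0.9, 0.4, 0.0, 0.0]))  # [0, 0, 0, 0, 1, 0, 0, 0]
```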

A neuromorphic system differs from conventional computing in two key ways.

First, computation is event-driven rather than clock-driven. In conventional digital processors, computation is usually synchronized by a clock: the hardware advances in regular cycles, and data may be evaluated even when most input values have barely changed. In neuromorphic systems, activity is triggered only when something meaningful happens - when a neuron reaches its threshold and fires. This reduces the need to repeatedly process unchanged information, which is one major reason these systems can be far more energy efficient.

Second, memory and processing are co-located; they are integrated into the same circuit. In traditional architectures, the CPU (compute) and RAM (memory) are separate, so data must constantly move between them - an inefficiency known as the von Neumann Bottleneck. Neuromorphic systems reduce this by placing state (memory) directly alongside the compute units (neurons), minimizing data movement and improving efficiency.

As a rough comparison, Apple’s M-series chips improved performance partly through Unified Memory Architecture (UMA), where memory sits close to the processor on the same package. This reduces the distance data must travel. Neuromorphic systems go much further: instead of a single shared memory pool, they distribute memory across many small compute units, embedding it locally within the network itself.

Another key idea is sparsity. In the real world, most signals are quiet/unimportant most of the time. In a video, most pixels don’t change from one moment to the next. In sensor systems, most readings remain within normal ranges. Much of the data being processed at any given time carries little new information.

Conventional systems still process all of this data continuously, updating every value at every step whether anything meaningful has changed or not. This leads to a lot of redundant computation. Neuromorphic systems take a different approach. Their neurons remain mostly inactive and only communicate when there is a meaningful change. As a result, the system can remain continuously active while expending far less energy when little is happening.

This energy efficiency makes neuromorphic computing a promising direction for modern AI systems, which are extremely power-hungry. It’s also especially well-suited for edge applications - like drones or wearable medical devices - where systems must operate continuously under tight energy constraints.

At present, the ecosystem is dominated by a small number of players and a growing group of startups. Established players like Intel (Loihi), IBM (TrueNorth), and BrainChip (Akida) are leading the development of neuromorphic hardware, designed around spiking neural networks and optimized for low-power, real-time processing. Alongside them is a newer wave of specialized companies like SynSense, Innatera, and GrAI Matter Labs, which are focused on ultra-low-power “edge AI” applications.

Photonic Computing

Photonic computing is a way of moving and processing information using light (photons) rather than relying only on electrical signals (electrons), which is what conventional computers rely on. Instead of sending every signal through metal wires and transistor-based circuits, photonic systems guide light through waveguides - tiny optical pathways etched into chips or integrated into advanced chip packages.

Photonic computing goes further than just moving data with light. It can also perform certain types of computation with it. For example, when two light waves overlap, they can interfere with each other. If their peaks align, the resulting signal becomes stronger. That behavior can represent mathematical operations such as addition. Photonic systems can implement linear operations, including matrix multiplications that dominate many AI workloads.
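
As a toy illustration of that interference idea (invented numbers, not how a real photonic processor is actually programmed), you can model two light waves as complex amplitudes: when their peaks align the amplitudes add and the detected intensity grows, and when they are out of phase they cancel.

```python
import cmath

# Toy model: represent two light waves as complex amplitudes a * e^(i*phase).
# When the waves overlap, their amplitudes add; a detector measures intensity |sum|^2.

def combine(amp1, phase1, amp2, phase2):
    wave1 = amp1 * cmath.exp(1j * phase1)
    wave2 = amp2 * cmath.exp(1j * phase2)
    return abs(wave1 + wave2) ** 2  # detected intensity after interference

print(combine(1.0, 0.0, 1.0, 0.0))        # in phase: peaks align, intensity ~4 (constructive)
print(combine(1.0, 0.0, 1.0, cmath.pi))   # out of phase: waves cancel, intensity ~0 (destructive)
```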

As I alluded to earlier when discussing the von Neumann bottleneck, one of the biggest constraints in modern computing isn’t only raw processing power, but moving data efficiently. This becomes especially pronounced in AI systems. As models scale across thousands of GPUs, the amount of data moving between servers and chips grows rapidly. Processors increasingly spend time waiting on data rather than performing computation. In practice, a significant portion of training time - often on the order of 30% to 70%, depending on the workload - is spent stalled on data movement rather than actual computation.

Long story short, modern AI systems are often memory-bound: the bottleneck is moving data, not computing on it.

Using light for communication - often referred to as photonic interconnects - can significantly increase bandwidth while reducing energy consumption. This helps explain why companies like NVIDIA have invested heavily in networking infrastructure, including its $6.9B acquisition of Mellanox.

Photonic systems also offer potential energy efficiency advantages. Electrical signals lose energy as heat when they move through circuits. In contrast, light can propagate with lower loss, generates less heat, and can support very high bandwidth through techniques like wavelength-division multiplexing, where multiple wavelengths of light carry data in parallel. This makes photonics attractive in large-scale environments like data centers where power and cooling are major constraints.

That said, photonic computing is unlikely to be a drop-in replacement for traditional electronics. Light is harder to confine and manipulate precisely on a chip than electrical signals, which makes control challenging. Many logic operations remain easier to implement electronically, and storing data optically is still far less practical than using conventional memory due to limitations in density, cost, and reliability. As a result, most systems today are hybrid, using electronics for control, memory, and general-purpose logic while relying on photonics for high-speed data movement and specialized computations.

In October 2024, photonic computing startup Lightmatter raised $400M at a $4.4B valuation to commercialize photonics-based interconnect technology for AI data centers. In 2025, it unveiled the Passage M1000, a 3D photonic superchip which delivers roughly 114 Tbps of total optical bandwidth. In simple terms, Lightmatter is trying to use light to move data between AI chips much faster and more efficiently than conventional electrical wiring can manage.

DNA Storage

The world now generates hundreds of exabytes of data every day - driven by everything from video streaming and social media to AI and connected devices. Recent estimates suggest this is around 0.4 zettabytes (roughly 400 exabytes) daily, and the total continues to grow rapidly each year. This rapid growth is placing increasing strain on traditional storage technologies, driving interest in alternative storage technologies such as DNA-based storage.

A single gram of DNA can theoretically store hundreds of exabytes, far denser than conventional storage media used in today’s data centers. Moreover, DNA can last for thousands of years under stable conditions, making it well-suited for long-term archival of rarely accessed data. DNA storage also requires no power to maintain data, unlike modern data centers that consume massive amounts of energy for operation and cooling.

That said, the main bottleneck in DNA storage remains the speed and cost of reading and writing. To retrieve data, DNA must be sequenced: a sample of strands is read, and the original data is reconstructed computationally from many overlapping, randomly sampled fragments. Unlike SSDs (microsecond access) or RAM (nanosecond access), DNA storage retrieval typically takes hours to days.

Writing data involves DNA synthesis - converting binary information into nucleotide sequences (A/T/G/C) and chemically constructing those strands. Each base addition requires a chemical reaction cycle, making the process inherently slow and expensive. Current costs are still orders of magnitude higher than traditional storage, often reaching thousands of dollars per megabyte.
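
As a rough sketch of what that conversion might look like (a deliberately simplified mapping for illustration - real DNA storage codecs add error correction, addressing, and constraints such as avoiding long runs of the same base), each pair of bits can map to one of the four bases:

```python
# Toy binary-to-DNA encoding: 2 bits per base (A/C/G/T). Real systems layer on
# error-correcting codes, strand indexing, and sequence constraints.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)          # "CGGACGGC" -> 4 bases per byte
print(decode(strand))  # b"hi"
```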

Moreover, unlike traditional storage such as SSDs, DNA storage doesn’t support direct random access (i.e. the ability to read any location directly without scanning other data). Instead, data must be reconstructed from many sampled fragments.

Because of these limitations, DNA storage is not suited for everyday computing. Instead, it is best viewed as a long-term archival (“cold storage”) solution for preserving data over decades or centuries.

Catalog (since acquired by Biomemory) claims its Shannon DNA writer can encode ~1.63 TB of compressed data per run with high-throughput writing (~10 MB/s). Atlas Data Storage, a spin-off from Twist Bioscience (the leading synthetic DNA producer), is working toward storage densities where 13 TB of digital data can be stored in the equivalent volume of a single drop of water. The company has introduced Atlas Eon 100, an archival service that encodes information into synthetic DNA, which is then dehydrated into powder and sealed inside small 0.7-inch steel capsules. They claim it can pack 60 PB (60,000 TB) into approximately 60 cubic inches (roughly the size of a coffee mug).

Better Connectivity

The internet is behind almost everything we do today. Messaging friends and using Google Maps both depend on devices being able to connect and talk to each other. So, when we talk about the future of technology, connectivity is a key enabling factor. Better connectivity doesn’t just mean faster downloads, but also smoother virtual experiences, more reliable connections, and internet access in hard-to-reach places.

In this section, I’ll cover two technologies pushing this space: 6G and non-terrestrial networks (NTNs).

Networks: A Technical Primer

Feel free to skip this part if you’re already familiar with the concepts of throughput and latency. Or as a TLDR, just know this: Throughput is about capacity (how much data the network can carry per second), while latency is about delay (how long it takes data to reach its destination).

How does modern mobile network technology work? Each new generation of mobile networks operates in its own range of frequencies. Higher frequencies usually mean more available bandwidth (the network can carry more data at once), but the signals reach shorter distances. For instance, 4G operates in the 400 MHz - 3.8 GHz range and supports transfer speeds up to 300 Mbps, while 5G operates in 410 MHz - 71 GHz, supporting transfer speeds up to 10 Gbps.

When we talk about mobile networks, we’re mainly concerned with 2 things: throughput and latency.

Throughput measures how much data the network can deliver per second, usually expressed in bits per second, such as Mbps or Gbps. Throughput is affected by the type of network technology being used. For example, 5G generally supports higher maximum throughput than 4G. It is also affected by bandwidth: a wider range of frequencies available for transmitting data allows more data to be sent per second. In mobile networks, throughput also depends on signal quality, network congestion, and how many users are sharing the same cell tower.

Latency measures how long it takes data to travel from a source to its destination, usually expressed in milliseconds. It is affected by the physical distance the data needs to travel; longer distances generally mean higher latency. It is also affected by network congestion, because packets may have to wait in a queue when routers, cell towers, and other network equipment are overloaded. Other factors include routing, such as how many network hops the data must pass through, and retransmissions, which happen when packets are lost or corrupted and need to be sent again.

But wait - aren’t throughput and latency the same thing? Doesn’t higher throughput mean faster data transfers, and therefore latency goes down? Well, it depends.

A higher-throughput network is like a wider road: more cars can pass through per second. However, widening the road doesn’t shorten the distance from your home to the shopping mall. If the trip usually takes 20 minutes because of the distance, then having a wider road won’t make the journey much faster. In the same way, higher throughput doesn’t automatically reduce latency.

However, if the main reason our trip is slow is traffic congestion, then widening the road can help. More cars can move through at once, queues become shorter, and the journey becomes faster. Networks work similarly: higher throughput can reduce delays when congestion is the bottleneck, but it does not remove delays caused by distance, routing, or processing.

A more concrete example: Imagine sending a tiny 1-bit message. On a 10 Mbps network with 40 ms latency, the message may arrive in about 40 ms. On a 1 Gbps network with the same 40 ms latency, the message may still arrive in about 40 ms. That is because the message is so small that transfer speed is not the bottleneck. Most of the delay comes from latency-related factors, such as distance, routing, and processing along the way.

But now imagine downloading a 5 GB video. In that case, throughput matters. The bottleneck is no longer “how long does the first bit take to arrive?” but “how much data can the network deliver every second?” A higher-throughput network can download the video much faster, even if the latency stays the same.
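
A simple back-of-the-envelope model captures both cases: total transfer time is roughly latency plus size divided by throughput (ignoring handshakes, congestion, and retransmissions). A quick sketch:

```python
# Simplified model: time ≈ one-way latency + (size / throughput).
# Ignores handshakes, congestion, and retransmissions.

def transfer_time(size_bits, throughput_bps, latency_s):
    return latency_s + size_bits / throughput_bps

ONE_BIT = 1
FIVE_GB = 5 * 8 * 10**9           # 5 gigabytes expressed in bits

# Tiny message: latency dominates, extra throughput barely helps.
print(transfer_time(ONE_BIT, 10e6, 0.040))   # ~0.040 s on 10 Mbps
print(transfer_time(ONE_BIT, 1e9, 0.040))    # ~0.040 s on 1 Gbps

# Big download: throughput dominates.
print(transfer_time(FIVE_GB, 10e6, 0.040))   # ~4,000 s on 10 Mbps
print(transfer_time(FIVE_GB, 1e9, 0.040))    # ~40 s on 1 Gbps
```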

6G

6G is the next generation of mobile network technology after 5G, still under development and expected to deploy around 2030. Improvements fall into three buckets: higher throughput (moving more data at once), lower latency (faster responses), and higher reliability (predictable performance).

One headline promise of 6G is raw speed. Researchers are exploring spectrum far above today’s mobile bands, including sub-terahertz frequencies (sub-THz, roughly 100-300 GHz). These higher frequencies offer vastly more bandwidth, which means faster data speeds, but with a trade-off: the signals degrade faster over distance and are more easily blocked by obstacles (weaker penetration).

Sub-THz research has already hit proof-of-concept downlinks exceeding 100 Gbps - about 20 times faster than today’s peak 5G. In China, a team in Huairou ran what they called the world’s first 6G terahertz trial network, claiming peak downlinks of approximately 1 Tbps.

But 6G isn’t just about faster downloads. Many of the most interesting 6G use cases depend on latency: how long it takes data to travel from point A to point B. Immersive extended reality (XR), real-time holographic calls, remote robotic surgery, and autonomous vehicles need networks that respond in fractions of a millisecond; even a few milliseconds of delay can break the illusion or put lives in danger. 6G promises ultra-low latency by combining techniques such as faster radio scheduling, AI-assisted network control, and service-aware resource allocation.

In August 2025, a team from Peking University and City University of Hong Kong published a paper in Nature describing a tiny chip - just 11 mm by 1.7 mm - made of thin-film lithium niobate. It is capable of 100 Gbps transmission across an ultra-wide range of frequencies, from 0.5 GHz up to 115 GHz. Multi-band technology isn’t new, but traditionally, covering that wide a frequency range would require nine separate radio systems - this approach uses just a single chip. The innovation is “optoelectronic fusion”: the chip uses light to generate stable signals across all those bands, allowing a single device to switch frequencies dynamically.

This is important for 6G because future networks are expected to use a much broader mix of spectrum: lower bands for coverage and robustness, mid/upper-mid bands for wide-area capacity, and sub-THz bands for very high-throughput short-range links. This chip points toward more adaptive 6G radio hardware that can switch bands dynamically to improve capacity and support ultra-high-speed services - all using a single radio system.

So, the goal of 6G isn’t just “5G, but faster.” It aims to make networks more intelligent, flexible, and resilient - able to choose the right kind of connection for the right situation. That could enable a new wave of applications, from immersive video calls and real-time robotics to autonomous systems.

Non-Terrestrial Networks (NTN)

Traditional mobile networks rely on ground towers. That works well in cities, but it breaks down in places where towers are too expensive, difficult, or fragile to build: rural highways, mountains, oceans, and disaster zones. That’s why our phones can still drop to “no service” the moment we drive far enough from a city center.

Non-terrestrial networks (NTN) try to fill that gap by using satellites, high-altitude platforms, and airborne nodes as part of the communications network. Imagine them as floating cell towers in the sky. They are not meant to replace ground-based infrastructure, but to serve as a fallback layer when those traditional networks are not present.

The technology itself isn’t new, but it is evolving. In the past, satellite connectivity usually meant carrying a Starlink dish or a dedicated satellite phone. With direct-to-cell, a compatible everyday smartphone can connect directly to a satellite. Satellite connectivity is becoming less of a separate product and more of a feature inside normal mobile plans.

In 2025, T-Mobile launched T-Satellite with SpaceX’s Starlink, aimed at areas where traditional cellular coverage does not reach. The service is powered by 650+ Starlink direct-to-cell satellites and supports texting plus selected apps such as WhatsApp, Google Maps, and X. T-Mobile says the service works with compatible devices in most outdoor areas where users can see the sky.

That last detail matters because it points to a big reason why NTNs aren’t going mainstream yet. The current version isn’t full mobile broadband from space. Today’s direct-to-cell service is still low-bandwidth: useful for texting, location sharing, emergency communication, and lightweight app use, but not designed to replace a normal 5G connection or support consistently data-intensive uses like video calls.

So, NTNs are unlikely to replace ground towers. Dense cities still need normal terrestrial networks because they offer far more capacity at a much lower cost. The value of NTN lies in making connectivity much more resilient and less geographically fragile. This matters most in places with little coverage to begin with, or where hurricanes, wildfires, earthquakes, conflict, or power outages have knocked networks offline.

AST SpaceMobile is another major player pioneering NTNs. In April 2026, the Federal Communications Commission (FCC) approved the company to deploy and operate a 248-satellite low-Earth-orbit constellation for direct-to-device service in the U.S. AST SpaceMobile is trying to move beyond emergency texting toward space-based cellular broadband for ordinary smartphones, working with mobile operators including AT&T, Verizon, and FirstNet.

For NTN to work at scale, regulation matters almost as much as the satellites themselves. This is similar to other heavily-regulated industries like biotech, eVTOLs, or banking, which depend on licenses, certifications, and approvals. A satellite company cannot simply beam signals to phones using any frequency it chooses. Ordinary phones are designed to operate on standardized cellular bands, and those bands are usually licensed to mobile network operators country by country. That’s why it’s strategic for satellite companies to partner with mobile operators - such as the T-Mobile and SpaceX partnership. SpaceX provides the satellite infrastructure (tech), while T-Mobile provides access to the licensed cellular spectrum, regulatory standing, network integration, and market access.

In 2024, the FCC adopted its Supplemental Coverage from Space framework, creating rules for satellite operators and terrestrial mobile carriers to extend mobile coverage from space. It basically gave the industry a legal path to treat satellites as an extension of mobile networks.

Europe is approaching NTNs through the lens of sovereignty. IRIS2 is the European Union’s planned secure satellite-communications constellation, designed to provide resilient connectivity for governments, businesses, citizens, and critical infrastructure. It is planned as a 290-satellite multi-orbit system, with full governmental services expected by 2030.

For a technology this pervasive, NTNs also carry political implications. Satellite connectivity can make internet shutdowns harder to enforce because communication no longer depends entirely on domestic ground infrastructure. But it does not make censorship impossible. Governments can ban equipment, jam signals, criminalize use, or pressure operators. Iran shows both sides of this: smuggled Starlink terminals have helped some people stay connected during shutdowns, but the government has also criminalized Starlink use and targeted users.

Ukraine shows the other side of the risk. Satellite networks can be lifesaving infrastructure, but dependence on a private network creates its own vulnerability. A Starlink outage in 2025 disrupted Ukrainian military communications for more than two hours, which is a stark reminder that critical operations cannot depend on a single provider.

Ultimately, NTN is as much a permission business as it is a satellite business. Having satellites in orbit is not enough. To reach ordinary phones, satellite operators need access to licensed mobile spectrum, and that usually means partnering with carriers country by country.

Energy Generation

Energy is becoming one of the biggest bottlenecks for the future. As AI, data centers, electric vehicles and everything else starts demanding more power, the question is: where are we going to get all that energy from?

Ideally, we want energy that is clean, abundant, reliable and cheap. That combination is hard to get. Solar and wind are getting more mature, but they depend on weather and location. Nuclear has always been interesting because it can produce huge amounts of always-on power without carbon emissions.

In this section, I’ll look at three paths being explored for the future of energy: Small Modular Reactors (SMRs), Nuclear Fusion, and Space-Based Solar Power. They’re all in very different stages of development, but they’re chasing the same goal: clean, abundant energy that we can rely on.

Small Modular Reactors (SMRs)

Small Modular Reactors are nuclear fission reactors generally rated up to 300 MWe per unit - roughly one-third or less of conventional nuclear reactors. A 300 MWe unit is often described as enough to power a few hundred thousand homes, depending on local electricity use.
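
As a rough sanity check on that claim - using my own illustrative assumptions rather than any vendor’s figures - the arithmetic lands at roughly a couple hundred thousand typical US homes:

```python
# Back-of-the-envelope check (illustrative assumptions, not official figures):
# how many homes could one 300 MWe SMR supply?

capacity_mw = 300               # electrical output per unit
capacity_factor = 0.90          # nuclear plants typically run ~90% of the time
hours_per_year = 8760
household_kwh_per_year = 10_500 # roughly a US-average household

annual_mwh = capacity_mw * capacity_factor * hours_per_year
homes = annual_mwh * 1000 / household_kwh_per_year
print(f"{homes:,.0f} homes")    # ≈ 225,000 homes, in line with "a few hundred thousand"
```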

But the interesting part isn’t the smallness; it’s the modularity. The SMR pitch is that nuclear should not be a one-off megaproject every time. Instead of spending a decade building a giant custom plant, we want to build smaller, standardized reactors in factories, ship major components to site, and repeat the same design. In other words: turn nuclear energy from a construction project into a scalable product.

SMRs have two key ideas: passive safety and modular construction.

Passive safety basically means the reactor should be able to keep cooling itself even when things go wrong. Instead of relying entirely on powered pumps, backup generators, and fast operator action, many SMR designs use things like gravity, natural circulation, convection, condensation, or pressure-driven cooling. In simple terms: heat should keep moving away from the reactor core even if the power goes out.

Fukushima shows why this matters. In 2011, the earthquake shut the reactors down, but the fuel still produced residual decay heat. Then the tsunami disabled offsite power and flooded the backup diesel generators needed to run the cooling systems. Without enough cooling, the reactor cores overheated and melted down.

SMRs are designed to reduce this kind of failure mode. NuScale and GE Hitachi’s BWRX-300 use one passive-safety trick: “hot water rises, cold water sinks,” also called natural circulation. The reactor core heats the water, hot water rises, cooler water sinks to replace it, and the loop keeps moving without large powered pumps. But that is only one version of passive safety. Other designs use gravity-fed water, steam condensation, air cooling, or tougher fuel materials that can survive much higher temperatures.

The second idea is modular construction. Major components can be factory-built, transported to the site, and installed onsite - allowing developers to theoretically add modules incrementally as demand grows. This makes them a good fit for deployment at industrial facilities, data centers, and military bases - places that require dense, reliable power.

But nuclear energy has its own unique problem: cost. There are giant upfront capital costs, long construction timelines, regulatory uncertainty, and financing risk. SMRs reduce the size of each project, but this “smallness” is also a disadvantage. A smaller reactor loses some economy of scale; estimates say it takes around 3,000 units of production for SMRs to break even. Moreover, nuclear energy itself is costly; it has a higher total cost per megawatt-hour than traditional energy sources like fossil fuels. NuScale’s flagship UAMPS project in Idaho was cancelled after the target power price rose from ~$58/MWh to ~$89/MWh.

The fuel supply chain is also another bottleneck. Many advanced SMRs require high-assay low-enriched uranium (HALEU), which is enriched between 5-20% U-235 (Uranium-235). Conventional reactor fuel is usually below 5% U-235. Centrifuges matter because they are the refineries of the nuclear cycle: they increase the concentration of U-235 to usable levels. Whoever controls enrichment capacity controls a pretty strategic chokepoint, similar to how ASML and TSMC control key bottlenecks in chipmaking.

That chokepoint is geopolitical. The U.S. banned Russian uranium imports in 2024 (The “Prohibiting Russian Uranium Imports Act”), but included waivers through January 2028 because an alternative supply is not yet ready. Russia hosts about 44% of global uranium enrichment capacity, and supplies around a quarter of uranium entering the US. This sort of fuel supply chain cannot be rebuilt overnight.

China is taking a different approach. It is trying to vertically integrate its entire nuclear stack, from reactor design to domestic fuel-cycle capability. This resembles its EV strategy: control more of the stack, build domestic supply chains, and use state-backed capital deployment to develop know-how and climb the learning curve. Its Linglong One (aka ACP100) is a 125 MWe SMR in Hainan that became the first SMR design to pass an International Atomic Energy Agency (IAEA) safety review in 2016 and is expected to begin commercial operation in the first half of 2026. China also already operates the HTR-PM, one of only two commercially operating SMR projects globally as of 2025, alongside Russia’s KLT-40S floating plant.

The buyer base for nuclear energy is also changing. Historically, nuclear was something utilities bought and regulators approved. Now, Big Tech is showing up. Google signed with Kairos Power for 500 MW of advanced nuclear projects by 2035. Amazon is backing X-energy with a target of more than 5 GW of new U.S. nuclear projects by 2039. Meta is seeking 1-4 GW of new nuclear generation starting in the early 2030s. In short: AI demand is pulling nuclear energy back into the limelight.

This changes the business model of the nuclear energy industry. Traditional utilities are cautious because first-of-a-kind nuclear risk eventually lands on ratepayers. Hyperscalers have different incentives - they need huge amounts of reliable power, have large balance sheets, and can sign long-term power-purchase agreements that make these speculative projects financeable. Effectively, AI companies are becoming quasi-energy planners where their energy needs shape what gets built for the future. Their offtake agreements can de-risk technologies that ordinary utility procurement would reject as too expensive, slow, or risky.

Nuclear is still the most permissioned energy technology on earth. SMRs may sound like a hardware or physics problem, but the bottleneck is in approval, financing, and fuel supply.

Nuclear Fusion

Nuclear fusion is the process that powers the Sun and the stars. Current nuclear reactors use fission, which splits heavy atoms such as uranium. Fusion does the opposite: it pushes light atomic nuclei together until they merge, which converts a tiny amount of mass into a large amount of energy.

The most common near-term fuel target is deuterium-tritium fusion (both are isotopes of hydrogen). Deuterium and tritium fuse, producing helium, a neutron, and a large amount of energy. The IAEA says fusion has the potential to generate about four times more energy per kilogram of fuel than fission, and nearly four million times more than burning oil or coal.

It’s easy to see the allure of fusion. A working nuclear fusion power plant can provide huge amounts of electricity without greenhouse-gas emissions, and without long-lived high-level radioactive waste that fission reactors have. Fusion machines will still create radioactive materials, but those are mostly low-level waste that decays within decades to a century, as opposed to thousands of years.

But fusion is hard. To make fusion happen in a machine, we have to heat fuel into a plasma (a superhot soup of charged particles) at temperatures far hotter than the Sun’s core. And then we have to keep that plasma stable long enough for fusion to occur.

No physical material can hold this plasma directly (it’s far too hot), so the most famous design, the tokamak, uses magnetic fields. The tokamak is a donut-shaped machine designed to trap plasma inside a magnetic cage, developed by Soviet fusion researchers in the 1950s (yes, that’s more than 70 years ago). This long history of progress without a commercially proven fusion power plant is one reason why fusion’s most famous joke is that it’s “always 30 years away.”

The biggest tokamak project in the world is ITER, now being built in southern France. It is one of the most ambitious scientific collaborations ever attempted, with seven members: the European Union, China, India, Japan, South Korea, Russia, and the United States.

ITER’s goal is not to sell electricity; it functions more like a physics experiment. Its target is Q = 10, meaning 500 megawatts of fusion power from 50 megawatts of input heating power. ITER is meant to prove that large-scale fusion can work as a controlled physical process; it is not a commercial power station.

ITER has also been slow. Its revised schedule now pushes deuterium-tritium operations into the late 2030s, around 2039.

That’s why the private fusion sector will also play a huge part in fusion development. While ITER is moving like a giant international science project - with all the coordination, caution, and red tape that implies - private companies are trying to move like startups. Helion has signed a power purchase agreement with Microsoft for a first fusion plant scheduled for deployment in 2028. Commonwealth Fusion Systems has raised close to $3 billion and has signed an agreement for Google to buy 200 megawatts of power from its planned ARC fusion plant in Virginia, which CFS expects to put on the grid in the early 2030s.

But these agreements or funding shouldn’t be confused with proof. No private fusion company has delivered commercial fusion power yet. These deals just show that serious customers and investors are willing to bet on fusion. The National Ignition Facility in California is a useful cautionary tale. In 2022, NIF achieved a historic fusion milestone: it delivered 2.05 megajoules of laser energy to a target and produced 3.15 megajoules of fusion energy. For the first time, the fusion reaction released more energy than the laser energy delivered to the fuel. But the full facility consumed far more energy than the fusion reaction produced - not true net-positive energy output.

Moreover, even if fusion works, it’s naive to think it will be cheap immediately. The first fusion plants will probably be expensive, complicated, and bespoke. They will need advanced magnets, materials that can survive intense neutron bombardment, systems for extracting heat, and reliable ways to maintain the reactor over time.

There’s also the fuel problem. Deuterium is abundant; it can be extracted from water. Tritium is not abundant: it is radioactive, rare, expensive, and decays over time. Most serious D-T fusion designs assume that the reactor will make its own tritium using lithium inside the reactor system. In theory, the neutron released by the fusion reaction hits lithium and produces more tritium. This creates a fuel loop: tritium helps produce the neutron, the neutron helps produce replacement tritium, and the reactor has to extract and recycle that tritium safely. In practice, this loop needs to be engineered with very high reliability.

Yet, fusion is interesting because it comes at a time when energy consumption is ramping up. AI data centers need vast amounts of reliable power. In a world where computation is ubiquitous and energy is the main constraint, fusion offers something rare: the possibility of abundant, clean, always-on power.

Space-based Solar Power

Energy has always been geographically-linked. Hydropower stations are positioned where rivers fall. Oil exists where geology happens to be generous. Even solar, which feels almost universal, is still trapped by land, weather, and the fact that half the planet is dark at any given time.

We try to break that link with space-based solar. The idea is simple: put solar panels in orbit, collect sunlight there, convert that energy into microwaves or laser light, beam it down to Earth, and catch it with ground receivers. That energy is then fed into the grid.

The appeal is straightforward. Above the atmosphere, sunlight is cleaner and more predictable - there are no clouds, dust storms, or rainy weeks, and the sun shines almost all the time. Solar irradiance, the power per unit area received from the sun, is much stronger in space because it doesn’t have to pass through Earth’s atmosphere, which absorbs a portion of the sunlight. Japan’s government has estimated that space solar could generate five to ten times more electricity over time than ground solar panels, because an orbital system could operate far more consistently and efficiently.
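
As a rough illustration of where that multiple might come from (my own ballpark assumptions, not the numbers behind the Japanese estimate): an orbital panel sees stronger sunlight and sees it almost all the time, while a ground panel is limited by night, weather, and seasons.

```python
# Rough, illustrative comparison of annual energy per square meter of panel.
# Numbers are ballpark assumptions, not from any official estimate.

panel_efficiency = 0.20          # same panel in both cases
space_irradiance_w_m2 = 1360     # above the atmosphere
space_availability = 0.99        # near-continuous sunlight in a suitable orbit
ground_peak_w_m2 = 1000          # clear-sky noon at the surface
ground_capacity_factor = 0.20    # clouds, night, and seasons at a decent site

hours = 8760
space_kwh = space_irradiance_w_m2 * panel_efficiency * space_availability * hours / 1000
ground_kwh = ground_peak_w_m2 * panel_efficiency * ground_capacity_factor * hours / 1000
print(space_kwh, ground_kwh, space_kwh / ground_kwh)   # ratio ≈ 6-7x per m² of panel
```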

The technology has already undergone early demonstrations. Caltech’s Space Solar Power Demonstrator (SSPD) showed in 2023 that its MAPLE experiment could wirelessly transmit power in space and beam detectable power to Earth. It wasn’t a commercial power plant, but it proved that controlled wireless power transfer from orbit is possible.

But the question isn’t just “Can this work?” but “Where would this make sense first?” Well, most probably not in the ordinary electricity grid.

When compared with ground solar, wind, nuclear, and traditional energy, SBSP looks expensive. NASA’s 2024 assessment found that SBSP could have lifecycle electricity costs 12 to 80 times higher than terrestrial alternatives, unless major improvements happen in launch costs, manufacturing, deployment, and in-space assembly.

Several required capabilities are also not mature yet. To make utility-scale space solar work, we need to assemble enormous structures in orbit, keep them operating for years, maintain them with minimal human involvement, and beam energy to Earth efficiently and safely. The microwave or laser beam would also need to be precisely controlled and proven safe for people, aircraft, satellites, wildlife, and electronics.

However, SBSP’s unique advantage is that it’s extremely mobile - we can deploy it anywhere we want, especially in places where other forms of energy are hard to get. A remote mine does not just pay for diesel; it also pays for the whole chain that gets diesel to the generator: trucks, ships, storage, security, maintenance, and delays. In remote or islanded systems, diesel electricity can reach around $0.50 per kilowatt-hour or more, around 3 times more expensive than the traditional grid.

So, the first real customers of SBSP will probably be businesses running in remote areas where energy is hard to procure. At this point, SBSP doesn’t need to beat the average grid price - it just has to beat the cost of getting fuel to places where normal infrastructure barely exists. So, we shouldn’t understand SBSP as “cheaper electricity,” but more like “power for places without energy infrastructure.”

Aetherflux is looking at remote customers such as isolated military installations, mines, and offshore oil rigs, rather than trying to start with large-scale grid power. Its stated early applications include AI compute in orbit and delivering power to “contested environments” on Earth.

Another interesting future for SBSP is using space solar to power space itself. Satellites already play an integral role in modern lives: GPS, weather forecasting, Earth observation - all these depend on machines in orbit. But satellites are power-constrained machines, often having less than 5 kW of power available, and this limitation directly caps their computing, communication, and sensing capabilities. More power means a satellite can do more useful work because it can run better sensors, process more data onboard, and support more advanced workloads.

Star Catcher is building what it calls a space power grid, using optical power beaming to send concentrated solar energy to satellites’ existing solar arrays - the linked collection of solar panels satellites use to collect sunlight. The company claims this can increase available satellite power by up to 10x without hardware modifications.

Compute is also beginning to look upward. In November 2025, Google announced Project Suncatcher, a research effort to put TPU-equipped, solar-powered satellite constellations in orbit connected by free-space optical links. Axiom Space says it launched its first two orbital data center nodes to low Earth orbit in January 2026. In April 2026, Meta announced a partnership with Overview Energy for up to 1 gigawatt of space solar energy. Overview is targeting a demonstration in 2028 and commercial delivery around 2030. Its concept is to collect solar energy in geosynchronous orbit and beam it down to existing solar farms on earth using low-intensity near-infrared light, helping those sites produce power outside normal daylight hours.

Space Economy

Space used to be a government business. NASA, Roscosmos, ESA, and other agencies built the infrastructure, ran the stations, launched the missions, and decided who got access. Private companies were usually just contractors in the background.

That’s now starting to flip. NASA is slowly moving from being the owner-operator to being a customer. The ISS is expected to retire around 2030, and the next generation of stations, vehicles, satellites, and services will likely be built and operated by private companies. Governments will still be involved, but they will increasingly buy capacity and services from the market.

One helpful way to think about the space economy is by looking at altitude. Low Earth Orbit (LEO) sits below 2,000 km; it is close enough for astronauts to reach and for satellites to take high-resolution images. Geostationary Orbit (GEO) sits about 35,786 km above Earth’s equator, where satellites match Earth’s rotation and appear fixed over one point, making it especially useful for telecommunications and broadcasting. Medium Earth Orbit (MEO) lies between LEO and GEO and is where many navigation systems operate.

Just as there are different altitudes, there is no single “space market.” Satellite internet, in-space servicing, lunar infrastructure, asteroid mining, and resilient navigation are very different businesses with different timelines and economics. Some are already here, while others are much further out, as we will see in the coming sections. I’ll cover three areas which are particularly promising: In-space servicing, assembly, and manufacturing (ISAM), space resource utilization, and resilient positioning, navigation, and timing (PNT).

In-Space Servicing, Assembly, and Manufacturing (ISAM)

ISAM is the maintenance and construction layer of the cislunar economy. As the name suggests, it’s not a single technology, but a bundle of tools that makes space a lot less “use it once and throw it away” than it is today.

In-space servicing means working on an existing spacecraft after it has already launched into space. Reasons for doing that include extending a satellite’s life, refueling it, replacing parts, inspecting it after a failure, or moving it to a new orbit. Rendezvous, Proximity Operations, and Docking (RPOD) is the set of maneuvers that allows one spacecraft to find another object in orbit, approach it carefully, fly nearby, and sometimes dock with it or grab it. RPOD is a core enabling capability behind servicing. A repair robot cannot fix a satellite unless it can first reach it, and a refueling spacecraft cannot refuel a satellite unless it can safely align with it.

Satellite life extension is one of the most mature servicing technologies. Satellites have a predicted end-of-useful-life based on limited fuel, aging parts, and orbital rules. Why do these satellites still need fuel? For example, geostationary communications satellites must stay over roughly the same point on Earth, and since gravity from the moon, sun, and Earth pushes them out of place, they must expend fuel to stay in their assigned orbital slot (a parking slot for satellites). Apart from fuel, satellites also use moving components like reaction wheels (used to rotate the satellite without using fuel), antennas, and mechanical gyroscopes - which are subject to wear and tear.

Northrop Grumman’s Mission Extension Vehicles (MEV-1, MEV-2) have already docked with commercial satellites in geostationary orbit to extend their operational lives. MEV attaches to satellites that have run out of fuel and becomes the satellite’s new propulsion system, using its own fuel and thrusters. In 2020, MEV-1 docked with the Intelsat 901 satellite (fyi: Intelsat is a commercial satellite operator providing global communication services; it was acquired by rival satellite operator SES in a deal completed in 2025) in a “graveyard” orbit (just above GEO) and moved it back into operational service. In 2021, MEV-2 docked with the Intelsat 10-02 satellite while it was in active GEO position, marking the first time such a docking occurred without disrupting the satellite’s ongoing commercial services.

Orbit Fab is trying to establish an open-license standard for satellite refueling ports. Its RAFTI interface is a TRL 8 (flight-qualified) cooperative docking and refueling interface designed to replace a spacecraft’s normal fill-and-drain valve and support both ground and on-orbit refueling. Astroscale is leading a planned Space Force refueling mission in GEO: its APS-R refueler is scheduled to launch in summer 2026, refuel a Space Force satellite, refill itself from an Orbit Fab depot, and then conduct a second refueling operation.

Next, we talk about in-space assembly, which is a shift in how spacecraft and space infrastructure are designed, built, and deployed. Traditionally, spacecraft had to be designed around the limits of the launch vehicle: they had to fit inside the payload fairing, survive intense vibration and acceleration, and deploy reliably after launch. These constraints capped the size of large systems such as space telescopes and communications antennas. With in-space assembly, components can be launched separately and connected, unfolded, manufactured, or robotically assembled once in orbit, enabling larger and more capable space infrastructure.

DARPA’s NOM4D program is a research initiative aimed at developing technologies to build, maintain, and repair large structures directly in space, rather than launching them pre-built from Earth. Its key partners include Caltech, UIUC, and the University of Florida. Active in-space demonstrations are scheduled for 2026. The vision is to ferry raw materials from Earth (maybe we won’t even have to do that, as we’ll see…), manufacture in orbital construction facilities, and use the end products in orbital applications.

NASA’s Fly Foundational Robots (FFR) mission, launching in late 2027, is a technology demonstration featuring a robotic arm capable of dexterous manipulation, autonomous tool use, and walking across spacecraft structures in zero or partial gravity. The arm is being developed by Motiv Space Systems. The goal is to demonstrate a commercial robotic arm in LEO that can eventually handle complex assembly work.

China’s Tiangong space station used in-space assembly through a modular approach: its major modules were launched separately, then joined and repositioned in orbit. China first launched Tianhe in 2021, the core module that provided the main docking hub, propulsion, guidance, life support, station control, and crew living space. In 2022, the lab modules Wentian and Mengtian each performed autonomous rendezvous and docking with Tianhe’s forward port. After docking, each lab module was moved from the forward port to a side port on the Tianhe node, forming Tiangong’s final T-shaped configuration. Tiangong’s full three-module station is too large and heavy to launch as one complete object; by splitting it into modules, China could launch pieces that each fit within the Long March 5B’s 25-ton payload capacity, then assemble them in orbit.

Lastly: in-space manufacturing. This means making things in space instead of making everything on Earth and launching it fully finished. That sounds simple, but it changes the entire logic of space systems. Historically, spacecraft began on Earth. Every piece of equipment - antenna, tool, spare part - had to fit inside a rocket, survive launch, and then unfold or deploy correctly once in orbit. This introduces a unique design problem: space hardware must be built not only for space, but for its violent journey to space.

The rocket launch is often the most violent event in a spacecraft’s life, so orbital structures must survive intense vibration, acceleration, and acoustic loads before they ever begin their real work. This creates a mass penalty: engineers add reinforcements, deployment mechanisms, and safety systems that are mainly there because of launch, not because they are useful once the spacecraft is operating.

In-space manufacturing offers a different model. Instead of launching the finished object, we can launch raw materials, compact feedstock, or smaller components, and then manufacture or assemble the final structure in orbit. That allows future systems to be larger, lighter, simpler, or better adapted to the space environment itself.

There are two broad categories of in-space manufacturing: manufacturing for use in space, and manufacturing for use on Earth. The benefits mentioned in the paragraphs above mostly refer to the former.

The first category is more obvious: manufacturing things in space for use in space. This includes printing replacement parts on a space station, making tools on demand, and eventually using lunar or asteroid materials as raw material. NASA has already tested this idea on the International Space Station: in 2014, the first 3D printer was sent to the ISS through a NASA and Made In Space (now part of Redwire) effort. That December, astronauts printed a ratcheting socket wrench from a design file sent from Earth, often remembered as the “emailed wrench.” NASA has described on-demand manufacturing as a way to reduce the need to launch every possible tool and spare part from Earth, which matters even more for future Moon, Mars, or deep-space missions where resupply is slow, expensive, or impossible.

In the longer term, the deeper shift is local production from local materials. Instead of launching raw materials from the surface of Earth, future missions can use material already in space to build habitats, roads, landing pads, shielding, mirrors, antennas, and structural frames.

The second category is less obvious: manufacturing in space for use on Earth. This involves using space conditions - primarily sustained microgravity, and sometimes vacuum, radiation exposure, thermal cycling, and solar power - to make or improve high-value products that are difficult to produce on Earth, such as advanced materials, semiconductors, fiber optics, pharmaceuticals, and biological products.

The key ingredient is microgravity. On Earth, gravity constantly gets in the way as it drives convection and sedimentation; hot fluids rise, denser particles sink, and growing crystals can be tugged into uneven shapes. In microgravity, those effects are greatly reduced - particles don’t sediment in the same way, and crystals can grow in a calmer environment. Importantly, many of the most valuable things humans make are sensitive to tiny imperfections: drug crystals, protein crystals, semiconductor crystals, and other advanced materials. A small imperfection can make or break their usefulness. So microgravity is interesting because it gives us a gentler environment where some of these sensitive materials could be made better than they can be on Earth.

Pharmaceuticals are one of the clearest early examples. Many drugs depend not just on their chemical recipe, but on their crystal structure. In microgravity, crystals can sometimes grow more evenly than they do on Earth, making it possible to study or create drug forms that are difficult to produce on the ground.

Varda Space Industries is one of the companies testing this idea. Instead of relying on astronauts or a space station, Varda uses robotic free-flying spacecraft with reentry capsules. Its W-Series platform is designed to manufacture materials in microgravity and return them to Earth. On its first W-1 mission, launched in 2023 and recovered in February 2024, Varda processed ritonavir, an antiviral drug used in HIV treatment. The important fact isn’t that Varda “made a drug in space,” but that it demonstrated the full loop: launch a manufacturing payload, process pharmaceutical crystals in orbit, survive reentry, and recover the material on Earth.

BioOrbit is working on a related problem: biologic drugs, including some cancer treatments. Many of these medicines are given by IV infusion in hospitals because they are hard to pack into a small, stable injection. If you simply concentrate the liquid, it can become too thick to push through a small needle. BioOrbit’s idea is to use microgravity to crystallise protein drugs into tiny, orderly particles suspended in liquid. In theory, that could let a small injection carry a large dose while still being fluid enough for self-injection. The goal is not to invent new cancer drugs in space, but to reformulate some existing hospital treatments into versions patients could eventually take at home.

The same idea applies to advanced electronics. Semiconductors depend on extremely pure, low-defect crystals. On Earth, gravity can disturb crystal growth because molten materials naturally swirl and separate as hotter and denser parts move around. In microgravity, those flows are reduced, which may allow crystals to form more evenly. A 2024 review in npj Microgravity, a Nature Portfolio journal, found that semiconductor materials made in microgravity often showed improvements such as larger crystals, better structure, more uniform composition, and in some cases better performance than Earth-grown versions.

Space Forge is focusing on manufacturing advanced semiconductor materials in space. Its returnable satellites, called ForgeStar, are designed to act like small orbital furnaces: they go into LEO, use microgravity and the vacuum of space to grow cleaner semiconductor crystals, and eventually return those materials to Earth. In late 2025, it successfully generated plasma aboard its ForgeStar-1 satellite, showing that a key step in semiconductor crystal growth could be controlled on an autonomous spacecraft.

United Semiconductors LLC is also working on semiconductor crystal growth in microgravity. It has already flown semiconductor crystal-growth experiments to the International Space Station, and it has reserved future payload space on Starlab, a planned commercial space station, to move from experiments toward larger-scale production.

Space Resource Utilization

Space exploration has always had one constraint: everything has to be brought from Earth. Every single resource - water, oxygen, fuel, food, spare parts - has to be launched from Earth and carried across space, and this process is resource-intensive and time consuming. Space resource utilization tries to save the day.

Space resource utilization is the discovery, extraction, processing, and use of materials found beyond Earth, including resources from the Moon, Mars, asteroids, and even orbital debris. These resources include water, oxygen, metals, and rare isotopes.

There are two broad ways to think about space resources. The first is in-space use, where we use space resources in space. The second is Earth-return mining, where we extract something valuable in space and bring it back to Earth. (These mirror the categories we used for in-space manufacturing.)

These two models have very different economics. Water mined on the Moon may not be valuable on Earth, but it can be extremely valuable if it is sitting near a lunar base. On the other hand, platinum from an asteroid would need to be valuable enough to justify the cost and difficulty of bringing it all the way back to Earth.

The first model is usually called In-Situ Resource Utilization (ISRU). Instead of carrying everything from Earth, future missions would use materials already present on the Moon, Mars, or asteroids.

This matters because delivery to space is expensive. It costs around $1.2 million to launch a kilogram of payload to the lunar surface. When delivery is that expensive, even ordinary materials like water, oxygen, and fuel become precious. ISRU reduces the amount of material that has to be launched from Earth, while also making larger and longer missions possible.

The Moon is a good place to understand ISRU. The lunar surface is covered in regolith, a layer of dust, broken rock, and glassy fragments created by billions of years of impacts. Regolith is chemically useful - much of it is made of minerals containing oxygen, silicon, aluminum, iron, magnesium, and calcium. Yet that oxygen is locked up in oxides, and we need chemistry to extract it. The oxygen is useful in two ways. First, humans (obviously) need oxygen to breathe. Second, oxygen is a major component of rocket propellant: rockets don’t just need fuel, but also an oxidizer (typically liquid oxygen) to sustain combustion in space.

Regolith could also be useful for construction. Lunarcrete, a proposed concrete-like material made by mixing regolith with a binder, could be used to build landing pads, roads, shelters, or radiation shielding. Construction material is exactly the kind of thing a permanent lunar base will need in large quantities.

The lunar poles contain water ice, especially in permanently shadowed craters. That ice could be melted into water, used for life support, split into hydrogen and oxygen, or turned into rocket propellant. This is why the lunar south pole has become one of the most strategically interesting places in space.

For now, ISRU is still in the experimental stage. NASA’s PRIME-1 payload flew on Intuitive Machines’ IM-2 mission in 2025 to test lunar resource technologies near the Moon’s south pole. PRIME-1 included TRIDENT, a drill designed to bring lunar soil to the surface, and MSOLO, a mass spectrometer designed to study gases released from that material. The lander reached the Moon, but it landed on its side, which prevented PRIME-1 from fully accomplishing its science and technology goals. Even so, NASA reported that the drill and MSOLO powered on and completed important tests.

Private companies are also trying to turn ISRU into businesses. Starpath, founded by former SpaceX engineers, raised $12 million in 2024 to work on mining lunar water ice and producing liquid oxygen (LOX). Its bet is that reusable lunar landers and spacecraft will need propellant, and that producing LOX from lunar resources could become valuable if traffic to and from the Moon increases.

Blue Origin’s Blue Alchemist is using simulated lunar regolith to make solar cells, power transmission wire, silicon, metals, and oxygen through molten regolith electrolysis. The idea is to heat regolith until it becomes liquid, then use electricity to separate oxygen from metals such as iron, aluminum, and silicon.

The second model is Earth-return mining: extracting something valuable in space and bringing it back to Earth. Two resources are cited particularly often: helium-3 from the Moon and platinum-group metals (PGMs) from asteroids.

Helium-3 is an isotope of helium that is rare on Earth. It is used in special applications including medical imaging and dilution refrigerators that cool quantum computing hardware to extremely low temperatures. It has also been discussed as a possible fuel for advanced fusion energy.

The Moon contains helium-3 in its regolith because it has been bombarded by the solar wind for billions of years. But the isotope is spread very thinly; the Moon may contain large total amounts of helium-3, but extracting it will require processing enormous quantities of soil.

Interlune aims to commercialize natural resources from space, starting with helium-3 from the Moon. It is developing the Interlune excavator, a machine designed to ingest 100 metric tons of regolith per hour and extract helium-3. It has already announced helium-3 purchase agreements, including a U.S. Department of Energy Isotope Program order for three liters of lunar-sourced helium-3, deliverable by April 2029, and a Maybell Quantum agreement for thousands of liters per year from 2029 through 2035.

Asteroids are the other major target for Earth-return mining. Some asteroids are thought to be rich in metals, including PGMs like platinum, palladium, and rhodium. These are metals that are useful in catalytic converters, electronics, and other industrial processes. On Earth, these metals are rare and hard to extract, which is why the idea of mining them from asteroids gets so much attention.

AstroForge is one of the companies trying to make asteroid mining real. It is working toward mining and refining PGMs from asteroids by building relatively small spacecraft to identify, visit, and eventually extract material from metallic (M-type) asteroids. Its Odin spacecraft launched in February 2025 as a 100 kg deep-space mission intended to assess a near-Earth asteroid’s metallic composition, but the spacecraft was declared lost a month later. Its next spacecraft, DeepSpace-2, is scheduled for launch in 2026. It will attempt the first private landing on a body outside the Earth-Moon system, with the goal of reaching a suspected metal-rich asteroid and directly testing whether it could be a viable mining target.

Resilient PNT

When we think of GPS, we think of it as the blue arrow on a map, telling us where we are, and when we should turn. But GPS is actually doing much more than that. It’s part of a broader family of systems called Positioning, Navigation, and Timing (PNT). Positioning tells us where we are. Navigation tells us how to get to where we’re going. Timing tells machines what exact time it is.

It’s easy to understand how GPS helps with positioning and navigation. But providing precise timing is another important feature of GPS. GPS timing helps synchronize mobile networks, financial transactions, aircraft systems, power grids, data centers, and parts of the internet. Put together, GPS supports many critical operations, including communications, transportation, and emergency services. This single-point-of-failure dependence is also what makes our reliance on GPS so fragile.
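
To make the positioning-and-timing link concrete, here’s a toy sketch of the core GPS computation (all satellite coordinates and values below are invented, and real receivers are far more sophisticated): the receiver measures how long signals took to arrive from satellites at known positions, then solves simultaneously for where it is and for how wrong its own clock is. That clock term is exactly why a GPS receiver doubles as a precise time source.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (meters) and a hypothetical true receiver state.
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_pos = np.array([6_370e3, 100e3, 200e3])   # receiver roughly on Earth's surface
true_clock_bias = 3e-6                         # receiver clock error, seconds

# Pseudorange = geometric distance + c * (receiver clock bias).
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + C * true_clock_bias

# Gauss-Newton least squares: solve for (x, y, z, clock_bias) from the pseudoranges.
state = np.zeros(4)   # start at Earth's center with zero clock bias
for _ in range(10):
    ranges = np.linalg.norm(sats - state[:3], axis=1)
    predicted = ranges + C * state[3]
    residual = pseudoranges - predicted
    # Jacobian rows: unit vectors from each satellite toward the receiver, plus the clock term.
    J = np.hstack([(state[:3] - sats) / ranges[:, None], np.full((len(sats), 1), C)])
    state += np.linalg.lstsq(J, residual, rcond=None)[0]

print("position error (m):       ", np.linalg.norm(state[:3] - true_pos))
print("recovered clock bias (µs):", state[3] * 1e6)
```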

GPS signals come from satellites thousands of kilometers above Earth. By the time those signals reach a receiver on the ground, they become extremely weak. That makes them vulnerable to two kinds of attacks: jamming and spoofing. Jamming blasts noise on the same frequencies, drowning out the real satellite signal. Spoofing is more subtle; instead of blocking the signal, an attacker sends fake satellite-like signals to deceive receivers into calculating incorrect positions or times.

These problems are becoming more common. The International Air Transport Association (IATA) reported that GPS signal-loss events increased by 220% between 2021 and 2024. Aviation authorities have warned about rising interference, especially around conflict zones. Maritime authorities have also reported GNSS jamming and spoofing in the Baltic Sea and other regions.

You may ask: why not just use another satellite system? There are already other Global Navigation Satellite Systems (GNSS): Russia has GLONASS, Europe has Galileo, China has BeiDou, and the US has GPS. Many modern receivers can listen to several of them at once. Yet the underlying problem is the same: all of these systems work by broadcasting radio signals from satellites in space, so they are all subject to jamming and spoofing.

So, future PNT is probably not a replacement for GPS, but a hybrid system that utilizes multiple data sources, each with their own strengths, weaknesses, and failure modes.

A simple version of this is already happening in our phones. When we walk into a shopping mall, GPS often performs badly - the signal may be blocked by walls, ceilings, and crowds. But our phones can use nearby Wi-Fi networks, cell towers, Bluetooth beacons, motion sensors, barometric pressure, and map data to estimate where we are. Apple and Android both use combinations of satellite signals, Wi-Fi, Bluetooth, cellular networks, and sensors to improve location estimates.

That is the basic idea of resilient PNT: don’t just rely on a single source, but aggregate information across many sources, which work using different physical means.
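
A deliberately simplified way to picture that aggregation (not how production navigation filters actually work) is inverse-variance weighting: each source reports a position and a rough uncertainty, less trustworthy sources count for less, and anything that disagrees badly with the consensus can be flagged. All numbers below are invented.

```python
import numpy as np

# Hypothetical position estimates (x, y in meters) from independent sources,
# each with a rough standard deviation describing how much we trust it.
sources = {
    "GNSS":           {"pos": np.array([105.0,  98.0]), "sigma":  5.0},
    "Wi-Fi":          {"pos": np.array([112.0,  90.0]), "sigma": 15.0},
    "eLoran":         {"pos": np.array([ 95.0, 110.0]), "sigma": 20.0},
    "dead reckoning": {"pos": np.array([101.0, 103.0]), "sigma": 10.0},
}

# Inverse-variance weighting: each source is weighted by 1 / sigma^2.
weights = np.array([1 / s["sigma"] ** 2 for s in sources.values()])
positions = np.array([s["pos"] for s in sources.values()])
fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
print("fused position:", fused.round(1))

# A naive consistency check: flag any source that lands far from the fused answer,
# which is one (very crude) way to notice that a single source is being jammed or spoofed.
for name, s in sources.items():
    offset = np.linalg.norm(s["pos"] - fused)
    flag = "  <-- suspicious" if offset > 3 * s["sigma"] else ""
    print(f"{name:15s} offset from fused estimate: {offset:5.1f} m{flag}")
```

Real systems use Kalman-style filters and integrity monitoring rather than a one-shot weighted average, but the intuition is the same: combine sources in proportion to how much you trust them, and watch for outliers.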

The US National Telecommunications and Information Administration (NTIA) groups these technologies into three categories: space-based, terrestrial-based, and independent PNT systems. A resilient PNT architecture will likely use all three.

Space-based PNT still uses space, but not necessarily GPS alone. One promising idea is Low Earth Orbit PNT (LEO-PNT). Traditional GPS satellites orbit in Medium Earth Orbit, while LEO satellites fly much closer. As they’re closer, their signals arrive stronger at the ground, improving accuracy and availability in places where normal GNSS signals are blocked or weakened. They also move across the sky more quickly, which can give receivers more changing geometry to work with.

The European Space Agency (ESA) launched its Celeste initiative in March 2026 to test LEO-PNT services. An interesting part of Celeste is that it uses multiple frequency bands. Traditional GNSS mainly uses L-band, while Celeste is also looking at S-band, C-band, and Ultra High Frequency (UHF). ESA says C-band, which operates above L-band, shows strong resilience against jamming and spoofing, while UHF may help where signals need to penetrate obstacles.

Another space-based timing method is Two-Way Satellite Time and Frequency Transfer (TWSTFT). It is not something ordinary consumers use. It is a high-precision method used by national labs and timing organizations to compare atomic clocks across long distances.

Terrestrial-based PNT systems are built from infrastructure on or near the ground. Instead of listening to weak satellite signals from space, they use stronger signals and infrastructure already on Earth. Examples include the Broadcast Positioning System (BPS), enhanced Long Range Navigation (eLoran), and time over fiber.

BPS uses ATSC 3.0, the newer digital TV broadcast standard also known as NextGen TV. Because many broadcast towers already exist, BPS can reuse parts of that infrastructure, with the right upgrades, to transmit precise timing information for critical systems during GPS disruption.

eLoran uses powerful low-frequency transmitters on land, operating around 100 kHz. These signals can travel long distances and are much harder to jam over a wide area than weak GNSS signals from space. eLoran supports both positioning and timing, making it useful for ships, ports, and critical infrastructure.

Time over fiber sends highly accurate clock signals through optical fiber, rather than through radio signals from satellites. It is mainly a timing backup, helping telecom networks, financial systems, labs, and data centers stay synchronized without relying only on GPS.

Independent PNT covers systems that don’t depend mainly on satellite or radio signals. They are more self-contained or based on natural references. Examples include Inertial Navigation Systems (INS), quantum atomic clocks, Magnetic Navigation (MagNav), visual navigation, and celestial navigation.

INS uses accelerometers and gyroscopes to track movement from a known starting point. For instance, a car entering a tunnel knows how many meters it has moved forward, and can estimate its position using its own motion sensors. Although INS cannot be spoofed or jammed in the same way radio systems can, it can drift: tiny measurement errors accumulate, so after enough time the system needs correction from another source. Northrop Grumman is a significant supplier of navigation systems, including military-grade GPS/INS systems for aircraft, missiles, and naval platforms.
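
To see why drift matters, here’s a tiny simulation with illustrative numbers: an object that is actually standing still, whose position is estimated purely by double-integrating a slightly biased, noisy accelerometer. Because the bias gets integrated twice, even a tiny error grows roughly with the square of time.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01                        # 100 Hz accelerometer samples
n = int(600 / dt)                # ten minutes of data

true_accel = 0.0                 # the object is not actually moving
bias = 0.001                     # hypothetical 1 mm/s^2 constant sensor bias
noise = rng.normal(0, 0.02, n)   # white measurement noise, m/s^2

measured = true_accel + bias + noise

# Dead reckoning: integrate acceleration -> velocity -> position.
velocity = np.cumsum(measured) * dt
position = np.cumsum(velocity) * dt

for minutes in (1, 5, 10):
    idx = int(minutes * 60 / dt) - 1
    print(f"after {minutes:2d} min: position error = {abs(position[idx]):7.1f} m")

# A constant bias b alone produces an error of about 0.5 * b * t^2,
# so 1 mm/s^2 gives roughly 0.5 * 0.001 * 600^2 = 180 m after ten minutes.
```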

Quantum atomic clocks measure time using the precise frequencies at which atoms switch between quantum energy states. NASA launched its Deep Space Atomic Clock in 2019, a miniaturized mercury-ion atomic clock intended to help spacecraft navigate more independently instead of waiting for Earth-based instructions.

MagNav reads variations in Earth’s magnetic field and compares them with magnetic maps. Faking the magnetic fingerprint of a large region of Earth is much harder than spoofing a radio signal. SandboxAQ’s AQNav uses quantum magnetometers and AI to read magnetic anomalies and estimate position when satellite signals are unavailable.

Visual navigation uses cameras and image matching to infer position from terrain, landmarks, maps, or previous imagery. Inertial Labs launched a visual-aided inertial navigation system for aircraft operating without reliable GPS. Honeywell has also demonstrated vision-aided navigation that compares live camera feeds with maps in GPS-denied conditions.

Celestial navigation uses stars, planets, the Sun, or other celestial objects to determine position or orientation. Unlike GPS, stars cannot be jammed, although they can be obscured by clouds, weather, or daylight conditions. Honeywell’s CNAV uses star tracking and resident space objects for GPS-free navigation.

Again, there is no perfect navigation system. Each has its own strengths and weaknesses. GPS is accurate and global, but vulnerable to interference. LEO satellites may offer stronger signals but still rely on space infrastructure. BPS and eLoran are harder to jam over wide areas, but need terrestrial infrastructure. So the future of PNT will not be one system replacing GPS. It will be many imperfect systems checking one another, each contributing pieces of the answer.

Biotech & Medicine

Biotech and medicine have always benefited from new technology, but something feels different now. Biology is becoming more measurable, programmable, and personal. AI-driven drug discovery gets a lot of the attention, but it is only one part of a bigger shift, which also includes efforts to slow or reverse aspects of aging, tailor prevention and treatment to each person’s biology, replace damaged organs, and build better models of the human body so drugs and diseases can be studied more accurately.

Longevity

First, we need to distinguish between lifespan and healthspan. Lifespan is how long we live. Healthspan is how long we remain healthy and functional enough to enjoy being alive. The goal of longevity science isn’t just to increase lifespan, but to increase healthspan: the number of years spent with good quality of life.

We also distinguish between two types of ages. Chronological age is the number of years since birth, while biological age is an estimate of how “old” the body is, based on measurable features. The two don’t always match: a 50-year-old could have a biological age of 45. Longevity interventions aim to delay or reverse aspects of biological aging.

Scientists are now treating aging as a biological process that can be studied and possibly modified. Work in the longevity space can be categorized into three areas: Understanding aging, slowing aging, and reversing aging. We still can’t fully explain or fully measure the processes underlying aging; understanding these will bring us closer to treating age-related decline. Slowing aging aims to delay the biological processes that cause decline. Finally, reversing aging aims to repair existing damage in cells, tissues, and organs to restore more youthful function.

The first category, understanding aging, is important because longevity is largely a measurement problem: we can’t wait forty years to find out whether a drug, lifestyle intervention, or gene therapy has actually slowed aging. To test these technologies efficiently, we need ways to accurately estimate biological age - so we can measure an intervention’s impact on it.

Aging clocks are statistical or machine learning models that estimate biological age and pace of aging from measurable biomarkers. Different clocks use different inputs: epigenetic clocks read DNA methylation patterns; proteomic clocks analyze blood protein levels; imaging clocks use MRI, retinal scans, or other medical images. The field is also moving beyond whole-body estimates toward organ-age clocks, because different parts of the body may age at different rates.
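
Under the hood, many aging clocks are simpler than they sound: the early epigenetic clocks, for example, were essentially penalized linear regressions over methylation values. Here’s a minimal sketch of that idea on purely synthetic data (random numbers standing in for methylation measurements - nothing below is a real clock).

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(42)

# Synthetic "methylation" data: 500 people x 2,000 CpG sites.
n_people, n_sites = 500, 2000
ages = rng.uniform(20, 80, n_people)
X = rng.uniform(0, 1, (n_people, n_sites))

# Pretend 50 of the sites drift slightly with age (plus noise); the rest are irrelevant.
informative = rng.choice(n_sites, 50, replace=False)
X[:, informative] += 0.005 * (ages[:, None] - 50) + rng.normal(0, 0.02, (n_people, 50))

# An elastic-net regression picks out age-associated sites and combines them
# into a single predicted ("biological") age.
clock = ElasticNetCV(cv=5).fit(X[:400], ages[:400])
predicted = clock.predict(X[400:])

print(f"mean absolute error on held-out people: {np.mean(np.abs(predicted - ages[400:])):.1f} years")
print(f"sites used by the clock: {np.sum(clock.coef_ != 0)} of {n_sites}")
```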

One reason longevity is hard is because there are multiple processes that affect aging. The dominant framework for understanding aging comes from the 2013 paper “Hallmarks of Aging,” which identified nine biological processes that drive age-related decline, including genomic instability, telomere attrition, mitochondrial dysfunction, and cellular senescence. In 2023, this framework was expanded to twelve hallmarks, adding newer areas such as chronic inflammation.

Wearables try to turn aging measurement into continuous monitoring. Instead of relying on blood samples, these devices analyze signals like heart-rate variability, sleep regularity, and resting heart rate. Oura Ring’s Cardiovascular Age estimates the biological age of the vascular system, while WHOOP estimates a WHOOP Age and pace of aging from nine metrics across sleep, strain, and fitness.

Calico Labs, the Alphabet-backed biotech company, is studying the biology of aging to develop age-related therapies. TruDiagnostic sells epigenetic age tests that estimate biological age and pace of aging. Gero is using large-scale medical data and AI to better understand aging and identify drug targets that can slow down or reverse aging.

The second category is slowing aging. The goal isn’t to make old cells young again, but to slow the processes that cause decline. Geroprotective drugs aim to slow biological aging. One of the best-known candidates is rapamycin: in animal studies it is one of the most reliable life-extending compounds we know, with mouse studies showing lifespan extensions in the low double-digit percentages (around 20% in some). AgelessRx offers low-dose rapamycin as part of its anti-aging treatments.

Another approach is senolytics. As we age, some damaged cells enter a zombie-like state: they stop dividing, but they do not die. These senescent cells can release inflammatory signals that irritate nearby tissue and promote aging. Senolytic drugs try to selectively kill these cells. In mouse studies, senolytics have improved physical function and increased survival. Unity Biotechnology’s UBX1325 was a drug designed to remove senescent cells in diseased retinal blood vessels. However, the company wound down in 2025 after heavy financial losses and a lack of revenue-generating products, showing how difficult it is to turn longevity science into approved therapies.

NAD+ boosters are another popular branch of longevity treatments. NAD+ is a molecule crucial for energy production and cell repair, and its levels often fall with age. Supplements such as NMN and NR aim to raise NAD+ levels, although whether this reliably slows aging is still unproven. Companies like Elysium Health sell NAD+-focused products, and some trials show they can raise NAD+ levels in humans.

Some existing drugs are also being repurposed for longevity. Metformin, a cheap and widely used diabetes medicine, is being studied as a potential longevity drug candidate. GLP-1 receptor agonists, the family of drugs behind Ozempic and Wegovy, began as diabetes treatments and became powerful obesity drugs. Researchers are now asking whether their effects on weight, inflammation, heart disease, kidney disease, and metabolism can make them part of the longevity toolkit.

This brings us to the last and most science-fiction-sounding category: reversing aging. Cellular reprogramming is one of its central ideas. The story starts with the discovery of the Yamanaka factors in 2006, four transcription factors (proteins) that can take an ordinary adult cell and reprogram it into an induced pluripotent stem cell (iPSC). “Pluripotent” means the cell regains stem-cell-like flexibility, similar to cells in an early embryo, allowing it to develop into many different cell types. For example, a skin cell can be reprogrammed into an iPSC and then guided to become a heart, nerve, or muscle cell. This mirrors early development, when embryonic cells gradually specialize into the many cell types that make up the body.

This helps with aging, because reprogramming not only changes what a cell can become, but also resets some age-related features of the cell. However, full reprogramming is risky because it can erase a cell’s identity and push it into a pluripotent, fast-dividing state, where it may grow uncontrollably or form tumors. Hence, longevity researchers are most interested in partial reprogramming: resetting some youthful features of a cell without turning it all the way back into a stem cell.

Altos Labs is focused on restoring cell health by developing cellular rejuvenation reprogramming technology. Retro Biosciences, backed by Sam Altman, is working on several approaches to increase healthspan by 10 years, including partial cellular reprogramming. OpenAI and Retro Biosciences developed GPT-4b micro, a specialized AI model designed for protein engineering that made cells show 50x stronger signs of entering a stem-cell-like state. That isn’t proof that aging has been reversed in humans, but it shows how AI can help scientists search through possible biological interventions much faster.

Yet longevity science has a major translation problem: many compounds that look exciting in animals fail in people. A drug may slow aging in mice, but proving that it works in humans is much harder - mice live short lives, are genetically controlled, and can be studied in ways humans cannot. That’s also why longevity developments are so easy to overstate, and why many announcements end up feeling like marketing stunts.

The field needs more human evidence. This comes from better aging clocks, clinical trials, organ-on-chip systems, AI models of biology, and more precise ways of testing longevity interventions without having to wait many years before seeing their effects.

Genetics: A Primer

The next section, precision medicine, will touch a fair bit on genetics. This is a primer on genetics so the things discussed in that section make sense. If you already have a strong foundation, feel free to skip directly to the next part.

A genome is the full set of genetic material in an organism. In humans, the genome is made of DNA, a long molecule written in four chemical “letters”: A, T, G, and C. Specific stretches of DNA are called genes. Genes contain instructions for making proteins and other important molecules that help build and run the body. The entire human genome contains about three billion DNA letters, but only about 1.5% of it consists of the 20,000 protein-coding genes. Other regions help control how genes are used or have functions scientists are still studying.

Humans mostly share the same genome. Any two people share approximately 99.9% of their DNA. But because the genome is so large, that tiny remaining percentage still adds up to millions of differences. These differences are called genetic variants. A variant might be as small as one changed DNA letter, called a single nucleotide variant (SNV). Other variants are larger: a stretch of DNA may be deleted, duplicated, inserted, flipped, or moved somewhere else.

Most variants do not cause disease. Some affect traits such as eye color or height, while others increase the risk of common diseases. A smaller number can directly cause inherited disorders. By studying a person’s genetic variants, we can identify risks long before symptoms appear.

Whole-genome sequencing (WGS) is the process of reading all of a person’s DNA. A sample is taken, often from blood or saliva. DNA is then extracted, sequenced, and compared with a reference genome. This reference genome isn’t a “perfect” human genome (that probably doesn’t exist), but acts more like a standard map of where every DNA letter is supposed to be. The differences between a person’s sequence and the reference are called genetic variants.

DNA is usually too long to read from beginning to end in one go, so sequencing machines read many smaller fragments instead. Computers then align those fragments to a reference genome, using sequence similarity to figure out where each piece most likely belongs. Once the pieces are placed on the map, scientists can look for differences between the person’s DNA and the reference.
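
As a very simplified picture of that align-and-compare step (real aligners and variant callers handle sequencing errors, quality scores, and three billion letters, not forty), here’s a toy sketch: place each short read at the reference position where it matches best, then report positions where the aligned bases disagree with the reference.

```python
from collections import Counter, defaultdict

# Toy reference, plus toy short reads from a sample that carries one single
# nucleotide variant: position 17 is G in the reference but A in the sample.
reference = "ACGTTGCATGACGATCGGATCCTAGCATCGTAAGCTTACGGATC"
reads = [
    "CATGACGATCGAATCCT",   # covers reference positions 6-22
    "ACGATCGAATCCTAGCA",   # covers reference positions 10-26
    "TCGAATCCTAGCATCGT",   # covers reference positions 14-30
]

def best_offset(read: str, ref: str) -> int:
    """Return the offset where the read aligns to the reference with the fewest mismatches."""
    mismatches = [
        sum(a != b for a, b in zip(read, ref[i:i + len(read)]))
        for i in range(len(ref) - len(read) + 1)
    ]
    return mismatches.index(min(mismatches))

# Pileup: record which base each read places on each reference position.
pileup = defaultdict(Counter)
for read in reads:
    start = best_offset(read, reference)
    for offset, base in enumerate(read):
        pileup[start + offset][base] += 1

# Call a candidate SNV wherever the most common aligned base disagrees with the reference.
for pos in sorted(pileup):
    base, count = pileup[pos].most_common(1)[0]
    if base != reference[pos]:
        print(f"candidate SNV at position {pos}: {reference[pos]} -> {base} (seen in {count} reads)")
# candidate SNV at position 17: G -> A (seen in 3 reads)
```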

There are two major styles of genome sequencing: short-read and long-read sequencing.

Short-read sequencing reads relatively small DNA fragments, around 50-300 DNA letters long. It is fast, accurate, and widely used for detecting SNVs and small insertions and deletions. Illumina has been the dominant platform in this area.

Long-read sequencing reads much longer stretches of DNA, often thousands to tens of thousands of DNA letters at a time. That makes it better at resolving messy regions of the genome: repeated sequences, large rearrangements, duplicated genes, and other places where short fragments can be hard to place correctly. PacBio and Oxford Nanopore are two major long-read sequencing platforms. Long-read sequencing has historically been more expensive, but it can reveal variants that short-read sequencing may miss.

Knowing a person’s genetic variants is not enough; we need to interpret them. There are two broad types of genetic risk findings: single-gene findings and polygenic findings.

Single-gene (aka monogenic) findings involve a variant in one gene that has a large effect on a person’s risk of developing a specific condition. For instance, variants in the HBB gene cause sickle cell disease, while harmful variants in BRCA1 or BRCA2 can greatly increase the risk of breast and ovarian cancer. These findings are interpreted using multiple scientific and clinical resources, including databases such as OMIM, which catalog relationships between human genes and genetic diseases.

However, many common diseases do not result from one gene alone. Conditions such as Type 2 diabetes, coronary artery disease, and hypertension are complex because they are influenced by many genetic variants (aka polygenic findings), as well as environmental and lifestyle factors.

A Polygenic Risk Score (PRS) tries to estimate a person’s inherited genetic predisposition to a disease or trait by combining the effects of many variants across the genome. Many of these variants are discovered through genome-wide association studies (GWAS), which compare genetic variation across large groups of people to find variants statistically associated with a disease or trait. The GWAS Catalog and PGS Catalog are examples of resources used to organize this type of evidence.
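
Mechanically, a polygenic risk score is usually just a weighted sum: for each variant, multiply the number of risk alleles a person carries (0, 1, or 2) by that variant’s effect size from a GWAS, then add everything up. Here’s a minimal sketch with made-up variant IDs and weights.

```python
# Hypothetical GWAS effect sizes (per risk allele) for a handful of variants.
# Real scores combine thousands to millions of variants, drawn from resources
# like the GWAS Catalog and PGS Catalog.
effect_sizes = {
    "rs0001": 0.12,
    "rs0002": 0.08,
    "rs0003": -0.05,   # some alleles are protective
    "rs0004": 0.20,
}

# For each person: how many risk alleles they carry at each variant (0, 1, or 2).
genotypes = {
    "person_A": {"rs0001": 2, "rs0002": 1, "rs0003": 0, "rs0004": 1},
    "person_B": {"rs0001": 0, "rs0002": 1, "rs0003": 2, "rs0004": 0},
}

def polygenic_risk_score(genotype: dict[str, int]) -> float:
    """PRS = sum over variants of (risk allele count) x (effect size)."""
    return sum(count * effect_sizes[rsid] for rsid, count in genotype.items())

for person, genotype in genotypes.items():
    print(f"{person}: PRS = {polygenic_risk_score(genotype):+.2f}")

# The raw number only becomes meaningful when compared against a reference
# population, e.g. "this person is in the top 5% of genetic risk for the disease".
```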

Precision Medicine

There is no such thing as an “average” patient, yet much of medicine is designed around a statistical average. The result, which we’ve come to treat as normal, is that some patients respond well, others see little benefit, and side effects are accepted as an unavoidable trade-off.

But it doesn’t always have to be that way. Precision medicine aims to change that by tailoring prevention, diagnosis, and treatment to a person’s genes, biology, environment, and lifestyle. I’ll look at how germline genomics, pharmacogenomics, biomarker testing, and cell, gene, and RNA therapies are making this possible.

Germline genomics is the study of genetic variants that we’re born with - the ones inherited from our parents. These are called germline variants, and they help reveal whether we carry a higher risk for certain diseases. They are different from somatic variants, which arise after conception through things like environmental exposure or DNA copying errors during cell division. Germline variants are usually present in nearly every cell of our body, while somatic variants are found only in some cells. For example, cancer is often driven by somatic mutations that build up in a group of cells over time.

For germline testing, blood or saliva is commonly used because the goal is to read the DNA we were born with, to see the risks we’ve inherited. For cancer, doctors may sequence the tumor itself to find the somatic mutations helping the cancer grow.

The big promise of germline genomics is that healthcare becomes less reactive. Presently, medicine often begins after the damage has already started: we develop symptoms, tests are ordered, and treatment follows. With genomics, we can take a more proactive approach: identifying elevated risks years before disease appears gives people and doctors more time to act. For example, someone with a higher genetic risk of type 2 diabetes can pay closer attention to weight, diet, exercise, and blood-sugar screening.

Human Longevity Inc. sells preventative-health packages built around whole-genome sequencing (WGS), full-body MRI imaging, and biomarkers, with the aim of detecting disease risk before symptoms arise. GeneDx is more focused on rare and complex genetic diseases, offering WGS to help clinicians diagnose patients with suspected genetic conditions. Helix works at the population-health level, partnering with health systems to run large-scale genomic screening programs that can identify inherited risks, inform clinical care, and support research.

Pharmacogenomics (PGx) studies how inherited genetic variants affect the way people respond to drugs. The same pill can behave very differently in two bodies: one person may get the intended benefit, while another may face a higher risk of serious side effects. PGx helps clinicians make better-informed prescribing decisions by using a patient’s genetic variants to predict how they may respond to certain drugs: whether a drug is likely to work, what dose may be safest, and whether there is a higher risk of severe side effects.

This is a big shift because prescribing has often been empirical and population-based. Clinicians choose a medicine based on what works for most people, and adjust it if the patient doesn’t respond well or develops side effects. This system has saved many lives, but still contains a trial-and-error element. PGx adds a precise and personalized layer to prescribing, being able to reveal risks before the first dose is taken. Another benefit is that a patient’s inherited genetic variants generally do not change, so once useful PGx results are known, they can be reused across future prescribing decisions.

The FDA maintains a Table of Pharmacogenetic Associations, which lists scientifically supported gene-drug interactions. One can think of it as a map between genetic variants and how a drug behaves: some variants affect how quickly a drug is broken down, some affect whether a drug is activated properly, and some raise the risk of dangerous reactions. For instance, people with the HLA-B*57:01 variant have a much higher risk of a serious immune reaction to abacavir, a commonly used HIV drug. Beyond the FDA table, there are also scientific databases such as PharmGKB, a Stanford-based PGx knowledgebase that collects and curates research on how human genetic variation affects drug responses.
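
In software terms, a lot of clinical PGx support boils down to looking a patient’s variants up against curated gene-drug tables. The sketch below is deliberately tiny: the abacavir / HLA-B*57:01 pairing is the real association mentioned above, but the table structure and helper function are purely illustrative.

```python
# A tiny, illustrative gene-drug table. Real systems pull from curated sources
# such as the FDA's Table of Pharmacogenetic Associations or PharmGKB.
PGX_RULES = {
    ("HLA-B*57:01", "abacavir"): "Avoid: high risk of a serious hypersensitivity reaction.",
}

def check_prescription(patient_variants: set[str], drug: str) -> list[str]:
    """Return any PGx warnings that apply to this patient for this drug."""
    return [
        warning
        for (variant, rule_drug), warning in PGX_RULES.items()
        if rule_drug == drug and variant in patient_variants
    ]

patient = {"HLA-B*57:01", "CYP2D6*4"}   # hypothetical test results
for warning in check_prescription(patient, "abacavir"):
    print(warning)
```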

Biomarker testing looks for genes, proteins, or other biological signals that reveal the mechanisms driving a disease. Two patients may share the same diagnosis, but their diseases may be powered by different biological drivers and may therefore require different treatments. For example, “lung cancer” is a label based on where the cancer appears, but one person’s cancer may have a treatable EGFR mutation while another’s has a different mutation, such as KRAS. The first patient may benefit from an EGFR inhibitor, while the second may need a different strategy such as KRAS-targeted therapy. This is the idea behind targeted therapy: once clinicians understand what is helping a patient’s disease grow, survive, or resist treatment, they can choose therapies designed to hit that specific weakness.

Biomarker testing includes several approaches such as somatic tumor testing, germline testing, liquid biopsy, and proteomic marker testing. Somatic tumor testing looks for mutations in the tumor, helping doctors understand what may be driving it. Germline testing looks for inherited genetic variants present from birth. Liquid biopsy analyzes blood or other body fluids for tiny traces left by tumors, called tumor-derived material, making it possible to study cancer without always needing an invasive tissue biopsy. Proteomic testing measures proteins in blood, tissue or other samples, and shows what proteins are actively signaling, damaging, defending, or driving disease.

Many companies now offer biomarker testing, each reading a different layer of disease biology. Tempus offers tumor genomic profiling services that analyze tumor DNA and RNA to identify mutations and other molecular changes that may guide treatment selection. Guardant Health offers blood-based liquid biopsy tests to profile tumor DNA. Alamar Biosciences is developing NULISA, a proteomics platform designed to detect extremely low-abundance protein biomarkers in blood - signals that may be difficult to measure but could reveal early disease activity or treatment responses that DNA testing alone might miss.

Cell, gene, and RNA-based therapies are also emerging within the precision medicine space. Instead of just managing symptoms, they aim to intervene at the biological source: whether it is through correcting faulty DNA, or reprogramming cells to attack disease. Gene therapies introduce, replace, silence, or edit genetic instructions. Cell therapies use living cells as the treatment itself, such as immune cells engineered to attack cancer or stem-cell-derived products designed to repair damaged tissue. RNA therapies can use RNA molecules to deliver temporary instructions, silence disease-causing messages, or change how proteins are made.

Gene therapies are especially useful for many rare genetic diseases, because these conditions are often caused by mutations in a single gene. This makes them strong candidates for therapies designed to correct the underlying genetic defect. Casgevy, developed by CRISPR Therapeutics and Vertex Pharmaceuticals, became the first FDA-approved therapy to use CRISPR/Cas9 gene-editing technology in 2023. Approved for certain patients with sickle cell disease, it involves editing a patient’s blood stem cells outside the body to increase fetal hemoglobin production, then reinfusing the modified cells into the patient.

Gene therapies can be delivered in two ways: in vivo or ex vivo. In vivo gene therapy delivers genetic material directly into the patient to modify target cells. This can happen through gene addition or replacement, where a working gene copy is supplied so cells can produce a missing, deficient, or defective protein; gene editing, where tools like CRISPR systems or base editors are delivered into the body to rewrite DNA inside selected cells; or gene silencing, which reduces or blocks the expression of a harmful gene.

Ex vivo therapy involves removing the patient’s cells, genetically modifying them in a laboratory, and returning the engineered cells to the patient. A well-known example is CAR-T therapy, a cell-based gene therapy in which a patient’s immune cells are collected from the blood and genetically modified to produce special proteins that help the immune cells recognize, target, and destroy cancer cells.

Cell therapies give patients living cells, often genetically modified, to repair damaged tissue or fight disease. We’ve already seen CAR-T, where immune cells are reprogrammed to fight cancer. Other cell therapies include stem-cell-based approaches, in which cells with regenerative potential are used to repair or replace damaged tissue. Some use a patient’s own cells, often derived from bone marrow, while others are allogeneic, or “off-the-shelf,” meaning they come from a donor. Allogene Therapeutics is trying to make off-the-shelf CAR-T work, so therapies are readily available to patients on demand.

RNA therapies work at a different layer of biology. Unlike DNA-editing therapies, most RNA medicines don’t permanently alter the genome. Instead, they temporarily change which genetic messages a cell reads or translates. This reversibility offers safety advantages when the goal is to modulate gene expression without making permanent genetic changes. mRNA therapies deliver a set of instructions that cells use to make a useful protein for a limited period, while small interfering RNA (siRNA) can silence a harmful messenger RNA before it is translated into protein. Moderna and BioNTech are the two companies most responsible for popularizing and commercializing mRNA technology, particularly after the success of their COVID-19 vaccines.

Organ Replacement

We never have enough donor organs. If someone’s heart, liver, or kidney fails, they often have to wait for a transplant, and that wait can stretch from months to years depending on the organ, country, and urgency. Some people die before an organ becomes available.

That’s why ideas like animal-to-human transplants and 3D-printed organs matter: if they work, losing an organ no longer has to mean a death sentence.

Xenotransplantation is the use of living cells, tissues, or organs from animals into humans to treat organ failure. In practice, the field has mostly focused on pigs, as their organs are similar in size to ours, they are easy to breed, and they mature quickly.

A pig organ cannot simply be placed into a human body, because our immune system rejects it as foreign. Hence, those organs must be genetically modified - removing the pig genes that trigger rejection, and adding human genes that make the organ look more familiar to the immune system. Even then, rejection isn’t fully solved; infection risk remains, and it’s still a big unknown whether these organs can last for years rather than weeks or months.

The first transplant of a genetically modified pig heart into a living person, performed in 2022, showed that a pig organ could sustain a human patient for far longer than earlier xenotransplant attempts, though the patient died about two months later. Since then, the field has moved especially fast with kidneys. In March 2024, surgeons performed the first successful transplant of a genetically edited pig kidney into a living patient: a 62-year-old man received an eGenesis kidney with 69 genomic edits. He died nearly two months later, but the hospital said there was no indication that the transplant caused his death. In October 2025, a US patient lived with a gene-edited pig kidney for a record 271 days before doctors removed it because its function was declining.

The field crossed a larger threshold in 2025, when pig-kidney xenotransplantation began moving from one-off experimental cases into formal clinical trials. In February 2025, the FDA cleared United Therapeutics to begin EXPAND, the first formal clinical trial of a gene-edited pig kidney transplant in people with end-stage kidney disease. Its kidney, called UKidney, is a pig kidney with 10 gene edits: six human genes are added to improve compatibility, while four pig genes are knocked out to lower rejection risk and control organ growth. The first transplant in that trial was announced in November 2025. eGenesis followed with FDA clearance in September 2025 for EGEN-2784, its own genetically engineered pig-kidney candidate for clinical testing.

Xenotransplantation is also moving beyond the idea of just replacing a human organ with a pig one. In April 2025, eGenesis and OrganOx received FDA clearance to test a gene-edited pig liver as temporary support for patients with severe liver failure. The liver is not implanted. Instead, it sits outside the body while the patient’s blood is pumped through an external device connected to the pig liver, similar to dialysis. The hope is that this can give a patient’s own liver time to recover, or keep them alive long enough to receive a human liver transplant.

Unlike xenotransplantation, 3D-printed organs are not yet a present-day medical product. The real progress currently lies in bioprinted tissues such as skin, cartilage, and cell-based implants. However, these are still necessary stepping stones. Before we can print a functioning kidney or liver, we must first learn how to print blood vessels, organize living cells, keep them alive, and make them integrate safely inside the body.

3DBio Therapeutics has developed AuriNovo, an investigational (undergoing clinical trials), patient-specific, 3D-bioprinted living tissue ear implant designed to reconstruct the external ear in people born with microtia, a congenital condition in which one or both outer ears are underdeveloped or absent. Aspect Biosystems is focusing on bioprinted tissue therapeutics to replace, repair, or supplement biological functions inside the body. For example, they are using 3D bioprinting to organize living insulin-producing cell clusters inside a protective implant, so they can survive in the body and help control blood sugar.

Human Biology Models

Medicine has always depended on models. Before we test a drug in a living person, scientists usually test it in simpler systems first: cells in a dish, computer simulations, and often animals. One of the most common models is the 2D cell culture, where cells grow as flat layers on a petri dish. These cultures are still useful in answering simple questions like whether a compound kills cancer cells or changes gene expression. However, the human body is far more complex than a flat sheet of cells: it has blood flow, pressure, electrical signals, oxygen levels, and neighbouring tissues constantly influencing each other.

So, human biology models are now moving from “cells in isolation” to “cells with context.” Instead of just looking at how cells behave on their own, scientists are trying to recreate more of the body-like environment those cells normally live in: the flow of fluids, chemical signals from nearby cells, physical forces, and the way one organ can affect another. The closer a model gets to real biology, the more useful its answers can become. These newer models include organ-on-chips (OoC), body-on-a-chip (BoC), mechanistic computational models, and AI Virtual Organoids (AIVO).

Organ-on-chips are tiny devices that grow living human cells in microengineered conditions that mimic part of the body. These engineered environments can simulate selected functions of an organ, such as blood flow, immune interactions, and electrical activity. For instance, a lung-on-a-chip replicates the breathing motions and air-liquid interface of the human airway, while a brain-on-a-chip can help predict which drugs will cross into the central nervous system.

Body-on-a-chip takes organ-on-chip a step further by linking several organ chips into one system. Instead of studying the liver, heart, or kidney as separate systems, BoC lets us watch how they influence each other. A drug may be absorbed by the gut, transformed by the liver, affect the heart, and then be filtered by the kidneys. This makes BoC especially useful when we care about how a drug behaves across the body, not just how it behaves in one isolated organ.

Some of these OoC and BoC systems are already commercially available. Emulate, Inc. sells OoCs such as its liver-chip, used to study drug-induced liver injury; brain-chip R-1, which models the blood-brain barrier and neurovascular unit to study drug transport and neuroinflammation; and kidney-chip, used to study renal drug transport and kidney toxicity. CN Bio’s PhysioMimix Core platform supports single-organ and multi-organ models, including lung-liver and gut-liver systems, and can be configured to run many experiments in parallel. TissUse’s HUMIMIC Chip4 can connect up to four organ models in one system. Hesperos is another BoC company building interconnected multi-organ systems that it calls “Human-on-a-chip.”

We are also moving toward dry labs - computational modeling and simulation. Mechanistic computational models are computer simulations built from what scientists already know about how the body and diseases work. They use clearly defined rules, such as how a drug moves through the bloodstream or how the liver breaks it down, to predict what might happen in the body. Two important approaches are Physiologically Based Pharmacokinetic (PBPK) and Quantitative Systems Pharmacology (QSP) models. PBPK asks what the body does to the drug, while QSP asks what the drug does to the body. Together, they give a fuller picture of how a drug and the body interact.

PBPK models predict how a drug moves through the body over time. They focus on Absorption, Distribution, Metabolism, and Excretion (ADME), the four key processes that determine a drug’s journey through the body: how much of a drug is absorbed into the bloodstream, how it is distributed into each organ, how it is metabolized, and how it is eventually excreted. For example, a PBPK model might ask: “If this drug is taken orally at 100 mg, what concentration will appear in the liver and brain over the next 24 hours?” Drug concentration often determines the benefits and risks; too little drug at the target site may produce no therapeutic effect, while too much may increase toxicity. Hence, PBPK models are useful for predicting human dosing from animal data, estimating drug-drug interaction risk, and exploring dosing scenarios that would be slow, expensive, or risky to test directly.
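
Full PBPK models give each organ its own compartment, connected by physiological blood flows and volumes, but the flavor of the calculation shows up even in a one-compartment sketch: first-order absorption from the gut, first-order elimination from the blood, and a resulting concentration-time curve. All parameter values below are invented for illustration.

```python
import numpy as np

# Hypothetical parameters for a single 100 mg oral dose (one-compartment model
# with first-order absorption and elimination - a deliberately stripped-down
# stand-in for a real multi-organ PBPK model).
dose_mg = 100.0
F = 0.8      # fraction of the dose that reaches circulation (bioavailability)
ka = 1.0     # absorption rate constant, 1/h
ke = 0.2     # elimination rate constant, 1/h
Vd = 40.0    # volume of distribution, liters

t = np.linspace(0, 24, 241)   # hours

# Standard analytical solution for plasma concentration after one oral dose.
conc = (F * dose_mg * ka) / (Vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

peak = conc.argmax()
print(f"peak concentration: {conc[peak]:.2f} mg/L at t = {t[peak]:.1f} h")
print(f"concentration at 24 h: {conc[-1]:.3f} mg/L")
```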

A QSP model predicts how the body’s biological systems respond to a drug. Instead of focusing mainly on where the drug goes, QSP models focus on what happens once the drug interacts with its target. They can represent mechanisms such as receptor binding, signalling pathways, immune-cell activity, or biomarker changes. A QSP model might ask “If this drug blocks a particular inflammatory pathway, how much will a disease biomarker fall?” They help researchers identify promising biological targets, choose doses that are biologically meaningful, select biomarkers that reveal whether the drug is working, and explore whether a combination of therapies might succeed where a single drug would not.

Unlike traditional mechanistic models, which rely on human-written equations or rules about how biological systems should behave, AI Virtual Organoids (AIVOs) learn from examples from real experiments. They are trained on the behavior of real organoids - their shapes, molecular states, and responses to drugs - allowing them to learn patterns that may be too subtle or complex for humans to formalize.

However, because AI models learn directly from data rather than being built entirely from biological first principles, they need guardrails to ensure their predictions remain biologically sound. Hence, the strongest AIVOs will likely be hybrid systems, combining mechanistic rules that keep the model biologically grounded with AI’s ability to uncover hidden, complex patterns from large experimental datasets.
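Here’s one way to picture that hybrid idea. This is just a sketch with invented function names and an assumed biological prior (that response shouldn’t rise as dose rises): the model is scored both on how well it fits measured organoid data and on how badly it violates the mechanistic rule.

```python
import numpy as np

# Sketch of the "hybrid" idea: score a candidate model on how well it fits
# experimental organoid data AND how well it respects a mechanistic constraint.
# Names, data, and the constraint are illustrative assumptions.

def data_loss(predicted_response, measured_response):
    # How far the model's predicted dose-response is from lab measurements.
    return np.mean((predicted_response - measured_response) ** 2)

def mechanism_penalty(predicted_response, doses):
    # Example biological prior: assume response should not increase as dose
    # increases (monotonic inhibition). Penalize any upward jump.
    violations = np.diff(predicted_response)[np.diff(doses) > 0]
    return np.sum(np.clip(violations, 0, None) ** 2)

def hybrid_loss(predicted_response, measured_response, doses, weight=10.0):
    # The learned part chases the data; the mechanistic part keeps
    # predictions biologically plausible.
    return (data_loss(predicted_response, measured_response)
            + weight * mechanism_penalty(predicted_response, doses))

doses = np.array([0.1, 1.0, 10.0, 100.0])
measured = np.array([0.95, 0.80, 0.40, 0.10])     # viability falls with dose
implausible = np.array([0.90, 0.85, 0.95, 0.05])  # non-monotonic prediction
plausible = np.array([0.93, 0.78, 0.42, 0.12])

print(hybrid_loss(implausible, measured, doses))  # penalized
print(hybrid_loss(plausible, measured, doses))    # much lower
```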

Several companies are already building pieces of the AIVO ecosystem. Turbine uses AI to run virtual experiments that predict how cancer cells might respond to different drugs. Crown Bioscience tests those predictions in patient-derived tumor organoids - tiny 3D models grown from human tumor cells. DeepLife integrates multi-omics data with generative AI to simulate cellular behavior.

Robotics

Robotics is becoming one of the most interesting areas to watch right now. Advances in AI, sensors, materials, and batteries are making robots much more capable and useful outside traditional factory settings. What used to be limited to assembly lines is now showing up on roads, in hospitals and warehouses, and even in homes.

In this section, I’ll focus on three interesting areas the field is heading toward: autonomous robots, soft robotics, and humanoids. There’s so much more happening in this space (which I may cover in future posts), such as swarm robotics, medical robots, and microrobotics. But these three are a good starting point for understanding how robotics is evolving in different directions.

AI-powered Autonomous Robots

We’re already familiar with some robots in this category: drone delivery systems, robotaxis, warehouse robots, cleaning robots. These are robots that can sense their environment, understand what is happening, make decisions, and act with limited human input. While older forms of automation repeat a fixed set of actions, autonomous robots can adapt and react to what is happening around them.

There are several layers enabling this. The first is perception: turning raw sensor data into something robots can understand. Cameras, LiDAR, microphones and other tools give robots information about the world, and AI systems are used to interpret that information. The robot also needs localization - it needs to know where it is. Then comes planning, where the robot has to decide what to do next, based on its current understanding. Finally, there is control, where the robot turns its decisions into physical movement, such as its wheels turning, arms grabbing something, or engaging its brakes. These layers are improving because the sensors are getting much cheaper and better, on-device computing is getting stronger, and AI models are becoming better at interpreting messy real-world scenarios.
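As a mental model, the loop looks something like this. Everything below is illustrative pseudostructure I made up, not any particular robot’s software stack, but it shows how perception, localization, planning, and control hand off to each other.

```python
# Minimal sketch of a perceive -> localize -> plan -> control loop.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading: float

def perceive(raw_sensor_frame):
    # A real robot would run detection/segmentation on camera or LiDAR data;
    # here we simply pass through pre-structured observations.
    return raw_sensor_frame["obstacles"]

def localize(previous_pose, wheel_odometry):
    # Real systems fuse odometry, IMUs, and map matching; here we just
    # dead-reckon from wheel odometry.
    return Pose(previous_pose.x + wheel_odometry["dx"],
                previous_pose.y + wheel_odometry["dy"],
                previous_pose.heading + wheel_odometry["dtheta"])

def plan(pose, goal, obstacles):
    # Decide what to do next; a real planner would search over candidate paths.
    if any(abs(o["x"] - pose.x) < 1.0 and abs(o["y"] - pose.y) < 1.0 for o in obstacles):
        return "stop"
    return "move_toward_goal" if (goal.x, goal.y) != (pose.x, pose.y) else "idle"

def control(action):
    # Translate the decision into actuator commands (left/right wheel velocities).
    return {"stop": (0.0, 0.0), "move_toward_goal": (0.5, 0.5), "idle": (0.0, 0.0)}[action]

pose, goal = Pose(0.0, 0.0, 0.0), Pose(5.0, 0.0, 0.0)
frame = {"obstacles": [{"x": 3.0, "y": 0.2}]}
odometry = {"dx": 0.1, "dy": 0.0, "dtheta": 0.0}

obstacles = perceive(frame)
pose = localize(pose, odometry)
action = plan(pose, goal, obstacles)
print(action, control(action))
```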

The maturity of autonomous robots still depends heavily on the industry. Some environments are structured enough for robots to work safely and reliably today. Warehouse robots are a clear example: Amazon has deployed over 1 million Autonomous Mobile Robots (AMRs) that can navigate fulfillment centers, transport heavy goods, and sort packages. Cleaning robots are another commercially active product, both in homes and in professional settings. Other areas are moving more slowly because the stakes are higher. Medical robots are already used in surgery, but full autonomy is treated carefully because the cost of failure is so much greater. Robotaxis are also already operating in some cities, but their rollout is slower because the issues aren’t just technical, but social and legal too.

One of the biggest technical challenges is the sim-to-real gap. Robots are often trained and tested in simulation environments before they are deployed in the real world, because simulations are faster, cheaper, and safer. A robot can crash or make bad choices in a virtual warehouse without injuring anyone in real life. However, these simulated environments do not perfectly mirror the real world. Humans can behave unpredictably, actual physics may be more complex, and lighting can change. Moreover, the robots themselves may not behave exactly like their simulated counterparts, because things like contact forces and battery drain are difficult to model precisely.
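One common way teams try to narrow this gap is domain randomization: vary the simulator’s physics and appearance on every training run so the robot’s policy can’t overfit to one idealized world. The parameter names and ranges below are illustrative assumptions, not taken from any specific simulator.

```python
import random

# Domain randomization sketch: each training episode gets a different,
# randomly perturbed version of the simulated world.
def randomized_sim_config():
    return {
        "floor_friction": random.uniform(0.4, 1.2),    # slippery vs grippy floors
        "payload_kg": random.uniform(0.0, 5.0),         # unknown carried weight
        "sensor_noise_std": random.uniform(0.0, 0.05),  # imperfect cameras/LiDAR
        "lighting_lux": random.uniform(100, 2000),      # dim warehouse vs daylight
        "actuation_delay_ms": random.uniform(0, 50),    # real motors lag commands
    }

for episode in range(3):
    cfg = randomized_sim_config()
    print(f"episode {episode}: {cfg}")
    # train_policy_in_sim(cfg)  # hypothetical training call
```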

A big reason this gap persists is that modern robotics has a data problem. Large language models had the internet to learn from: forums, books, websites, and articles. But robots don’t have an internet-sized archive of physical experiences. More real-world data can make simulations better and train smarter robot AI models. Tools like NVIDIA Isaac Sim are built for robotics simulation, testing, and synthetic data generation, while projects like Open X-Embodiment gather robotic training data across different robot types.

Soft Robotics

Next, we talk about Soft Robotics. Factory robots can be rigid because factories are controlled: objects arrive in the same position and have the same shape. However, the outside world isn’t like this: homes, disaster zones, and hospitals are messy, unpredictable places. Objects come in many shapes and sizes; they can be soft, slippery, fragile, or irregular. A factory robot that can handle a car door might struggle to pick up a strawberry, fold clothes, or hold a human hand without causing injury. This is where soft robotics comes in: it’s about building robots that can safely work in places where traditional rigid machines are too stiff or dangerous.

Soft robotics focuses on building machines from flexible, compliant materials instead of rigid, hard ones. A traditional robot might use a motor-driven metal claw to pick something up. A soft robot instead uses something like an air-filled silicone finger or a suction cup that conforms to the object. Essentially, the robot’s body can adapt when it touches something.

A couple of developments are making this possible. The first is better soft materials. Materials such as silicone, rubber, and flexible polymers can deform when they touch something. Hence, a robot doesn’t always need to know the exact shape of an object before gripping it. Its body and hands can partially conform to the object itself. This is known as compliance. A rigid claw has to be carefully controlled so it doesn’t bruise a strawberry, while a soft gripper can spread its force around the fruit, functioning more like our fingers.

We are also moving toward new kinds of actuators. An actuator is the part of a robot that makes it move. In traditional robots, actuators are often electric motors connected to rigid joints. Soft robots can still use motors, but they also rely on gentler, more body-like methods such as air pressure, fluid pressure, and magnetic fields. Instead of just rotating hinges, these actuators can bend, stretch, squeeze, and inflate. This is closer to the logic of muscles than to mechanical joints.

Sensing is another major part. A soft robot needs to know when it is touching something, how hard it is pressing, or whether an object is slipping. Researchers are building soft sensors that can measure pressure, force, stretch, bending, contact, and shape.

The tradeoff is that soft robots are much harder to control than rigid robots, because they have many more degrees of freedom. A conventional robot has a fixed number of joints, and each joint moves in a predefined way. Engineers can describe its motion with a small set of numbers, such as the angle of each joint. However, a soft robot arm may not have obvious joints at all, since the whole body can twist, stretch, compress, and deform differently depending on what it touches. The problem changes: we’re now trying to control a continuously deforming shape rather than a small, fixed set of joint angles, which makes such systems much harder to predict and control.
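A small sketch makes the difference concrete. A rigid two-link arm is fully described by two joint angles; even a heavily simplified soft arm (here, a piecewise-constant-curvature approximation I’m assuming purely for illustration) needs a number for every small segment, and real deformation is richer still.

```python
import numpy as np

# Rigid arm: the full configuration is just a short list of joint angles.
def rigid_arm_tip(joint_angles, link_lengths):
    x = y = total_angle = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        total_angle += angle
        x += length * np.cos(total_angle)
        y += length * np.sin(total_angle)
    return x, y

# Soft arm: even a simplified "constant curvature per segment" model needs
# a curvature value for every small segment of the body.
def soft_arm_tip(curvatures, segment_length=0.05):
    x = y = heading = 0.0
    for k in curvatures:                      # one curvature per segment
        heading += k * segment_length
        x += segment_length * np.cos(heading)
        y += segment_length * np.sin(heading)
    return x, y

print(rigid_arm_tip([0.3, -0.5], [0.4, 0.3]))           # 2 numbers describe the arm
print(soft_arm_tip(np.random.uniform(-2, 2, size=40)))  # 40+ numbers, still an approximation
```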

Soft robotics is already showing up commercially. We see it in soft grippers for packaging and consumer goods. Piab’s piSOFTGRIP is a vacuum-driven soft gripper designed to safely pick up sensitive and delicate objects, especially in food automation. OnRobot sells a soft gripper with flexible silicone cups that can pick up irregular and delicate items.

There are also soft wearable robots and exosuits for rehabilitation and mobility assistance. Harvard’s Wyss Institute developed a soft, clothing-like exosuit for medical uses such as stroke, multiple sclerosis, and limited mobility. Instead of putting someone inside a hard metal frame, this exosuit is made out of functional fabrics that fit comfortably under standard clothing (pretty cool, huh). It can assist movement without feeling like a machine has been strapped onto the body.

Surgical robots are another frontier being enabled by soft robotics. Soft robots can function as surgical tools that are flexible enough to move through the body more gently, reach awkward places, and reduce damage to surrounding tissue. One research example is STIFF-FLOP, an octopus-inspired soft robotic arm designed to squeeze through small openings, safely navigate around delicate human organs, and stiffen when needed.

Humanoid Robots

Lastly, we talk about humanoids. The world as we know it is built for the human body: doors, stairs, shelves, and tools all assume a person with hands, legs, eyes, and a certain range of motion. So, if we can design robots that move through our world the way we do, we don’t have to rebuild the world around them. In theory, humanoid robots can be multipurpose machines, with enough dexterity to carry boxes, open doors, use tools, fold laundry, or assist in factories.

Robots were previously only good at narrow, controlled tasks. But recent progress in large language models and vision-language-action (VLA) models has given robots the ability to connect language instructions with visual understanding and movement. Google DeepMind’s Gemini Robotics is a model family that helps robots “understand, act and react” in the physical world, while its newer Gemini Robotics 1.5 is a VLA model that turns visual information and instructions into motor commands. OpenVLA is an open-source robotics model trained on 970,000 real robot demonstrations, showing that the field is moving away from hand-written instructions toward learning-by-example. At the same time, costs are falling: Bain reported that humanoid robot unit costs fell by at least 40% between 2022 and 2024.
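For intuition, the interface of a VLA model looks roughly like this: an image and a text instruction go in, a low-level action comes out. To be clear, this is not the actual OpenVLA or Gemini Robotics API; the function and field names are made up to show the shape of the idea.

```python
# Illustrative shape of a vision-language-action (VLA) interface.
# Names and fields here are hypothetical, not any real library's API.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    delta_position: List[float]   # desired end-effector translation (x, y, z)
    delta_rotation: List[float]   # desired rotation change (roll, pitch, yaw)
    gripper: float                # 0.0 = open, 1.0 = closed

def vla_policy(camera_image, instruction: str) -> Action:
    # A real VLA model encodes the image and instruction with a large
    # transformer and decodes action tokens; we return a placeholder.
    return Action([0.01, 0.0, -0.02], [0.0, 0.0, 0.0], gripper=1.0)

action = vla_policy(camera_image=None, instruction="pick up the red mug")
print(action)
```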

It helps to separate humanoids into two broad categories: professional-use humanoids and consumer-use humanoids. Professional-use humanoids are meant for factories, hotels, and other managed environments; they usually require enterprise integration, safety protocols, and trained operators. Consumer-use humanoids are much more general-purpose, designed for homes or personal use, where they must be safe and useful without professional setup.

Hence, professional-use humanoids are probably going to arrive faster. Their early jobs can include moving parts around factories, sorting parcels, inspecting components, and loading machines - repetitive, low-value tasks where labor is scarce or injury risk is high. For example, Mercedes-Benz has been testing Apptronik’s Apollo humanoid for manufacturing tasks such as moving components and quality checks. UBTECH’s Walker S series is a line of industrial humanoid robots designed for factory and logistics work, and has been used in BYD factories.

Consumer humanoids will need more time to arrive. Today’s consumer robot market is still dominated by single-purpose machines such as robot vacuums and educational robots. General-purpose home humanoids are still experimental and are largely in the preorder and pilot stages. Figure AI introduced Figure 03 as its third-generation humanoid designed for home use, but it is not yet broadly available to consumers. Tesla’s Optimus is another major effort aiming for production in late 2026.

China is becoming a major center for humanoid robotics. Price is one reason. Unitree, a Hangzhou-based robotics company, sells several humanoid platforms including the G1, going for around $13.5K, and the lower cost R1, which goes for around $5K. This is shockingly cheap compared with the six-figure research robots that dominated this field only a few years ago. China’s robotics companies are pushing humanoids down the same cost curve that turned drones, smartphones, and electric vehicles from expensive prototypes into mass-produced products. Alongside Unitree, companies like AgiBot and UBTECH are moving fast: AgiBot is building general-purpose humanoids and large real-world training datasets, while UBTECH is testing its Walker S robots in industrial settings such as logistics, sorting, inspection, and assembly.

The companies building humanoid robots fall into three rough camps. The first wants to vertically integrate, building both the hardware and the AI models; this includes companies like Tesla, Figure AI, and Unitree. The second group is more hardware-focused, building the robot body while partnering with AI labs or enterprise customers: Boston Dynamics has spent years making robots move with precise balance, and is now working with Toyota Research Institute to teach its Atlas humanoid more general skills. The third group is building the “brains” and developer tools rather than selling hardware: Google DeepMind’s Gemini Robotics and NVIDIA’s Isaac GR00T are AI models that enable robots to understand and act on instructions.