Space Computing: Why Orbit Matters
Every security model has an assumption about physical access. Data centers use locked cages, biometric scanners, and security guards. Cloud providers publish SOC 2 reports and let auditors walk the floor. But at the end of the day, someone can always get to the hardware. An employee, a government with a warrant, an attacker with enough motivation.
Orbit attempts to eliminate that assumption entirely.
Physical isolation as a security property
A satellite in low Earth orbit is moving at roughly 7.5 km/s, hundreds of kilometers above the surface. In practical terms, no one is walking up to it. No one is attaching a probe. No one is swapping out a chip. Once the satellite is launched, the hardware is physically inaccessible for the duration of the mission.
This isn't a policy control or an administrative safeguard; it's physics. And it provides a security guarantee that no terrestrial deployment can match:
- No insider threat to hardware. There's no operator who can physically access the machine. The operations team communicates with the satellite via radio uplink. They can send commands and receive data. But they cannot touch the hardware.
- No physical side-channel attacks. Power analysis, electromagnetic emanation probing, and cold boot attacks all require physical proximity to the hardware, and that proximity doesn't exist when the target is in orbit.
- No supply chain tampering post-launch. Once the satellite reaches orbit, the hardware configuration is fixed and no one is inserting a malicious component. The window for supply chain attacks closes permanently at launch.
- No jurisdictional seizure. No government can compel physical access to a satellite in the way they can serve a warrant on a data center. The hardware is in orbit, and orbital space isn't subject to the same legal frameworks as terrestrial infrastructure.
For most workloads, this level of physical isolation is overkill. You don't need it to serve a web app or run a batch job. But for a specific class of problems (key generation, cryptographic attestation, randomness generation, confidential computation), the physical isolation guarantee is transformative. It changes the threat model from "we trust the operator's physical security controls" to "physical access is impossible."
How this compares to terrestrial security
It's worth being concrete about the gap. A well-run data center might have:
- Mantrap entry with biometric authentication
- 24/7 security guards and camera surveillance
- Background checks on all personnel
- Tamper-evident seals on server racks
- Visitor escort policies and audit logs
These are good controls. They make physical attacks hard and expensive. But they don't make physical attacks impossible. A nation-state adversary, a compromised insider, or a sufficiently motivated attacker with time can defeat these controls. The history of intelligence operations is full of examples.
A satellite in orbit doesn't need any of these controls because the physical access problem is solved by altitude. The security guarantee doesn't degrade over time, doesn't depend on policy compliance, and doesn't require ongoing vigilance. It just is.
The thermal problem
But there's a catch, and it's a big one: space is terrible for computing at scale.
On Earth, computers stay cool through convection: fans push air over heatsinks, and the air carries heat away. In orbit, there is no air, so convection is impossible. The only way to shed heat is through thermal radiation: emitting infrared photons into space. This works, but it's dramatically less efficient than convection, making thermal management one of the harder engineering problems in space computing.
What this means in practice is that you can't run heavy compute workloads in orbit without massive radiator panels. A rack of GPUs that works fine in a data center with industrial air conditioning would overheat and fail within minutes in a satellite. The surface area needed to radiate away that much heat would make the satellite impractically large and expensive to launch.
This is a hard physical constraint, not an engineering problem that clever design will solve away. The Stefan-Boltzmann law is not negotiable.
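To make the constraint concrete, here is a back-of-envelope radiator sizing sketch using the Stefan-Boltzmann law. The emissivity, radiator temperature, and single-sided radiation are illustrative assumptions, not figures from any real spacecraft; real designs also have to account for absorbed sunlight, Earth albedo, and view factors, which only make the numbers worse.

```python
# Back-of-envelope radiator sizing from the Stefan-Boltzmann law.
# Assumptions (illustrative, not from any real spacecraft): emissivity 0.85,
# radiator at 300 K, radiating from one side into deep space, no absorbed
# sunlight or Earth albedo.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts: float, temp_k: float = 300.0,
                     emissivity: float = 0.85) -> float:
    """Minimum ideal radiator area needed to reject `heat_watts` at `temp_k`."""
    flux = emissivity * SIGMA * temp_k ** 4  # W radiated per m^2 of radiator
    return heat_watts / flux

# A single 400 W GPU already needs about a square meter of ideal radiator;
# a 10 kW rack needs tens of square meters.
for load_w in (50, 400, 10_000):
    print(f"{load_w:>6} W -> {radiator_area_m2(load_w):.2f} m^2")
```

Because radiated power scales with the fourth power of temperature, running the radiator hotter helps, but electronics have to stay within their own temperature limits, so the area requirement remains brutal for heavy compute.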
The power side of the equation
Thermal limits are closely tied to power limits. Everything in orbit runs on solar panels and batteries. A typical small satellite might generate 10-100 watts of electrical power. Compare that to a single modern GPU that draws 300-700 watts, and the mismatch is obvious: 100 W is a small fraction of the power we're used to working with in everyday computing.
Larger satellites can generate more power, but more power means more heat, which brings you back to the radiator problem. Larger satellites also cost more to build and launch, and radiator hardware adds significant mass to everything you're trying to put (and keep) in orbit. There's an economic constraint layered on top of the physical one.
None of this is going to change in a fundamental way. Incremental improvements in solar efficiency and thermal management will expand the compute envelope over time, but the gap between orbital and terrestrial compute density will remain enormous for the foreseeable future.
The radiation environment
Space is also a harsh radiation environment. Cosmic rays and trapped radiation in the Van Allen belts can cause bit flips in memory (single-event upsets) and gradual degradation of electronics (total ionizing dose effects). Satellite processors are either radiation-hardened (expensive, slower than commercial parts) or radiation-tolerant with error correction (cheaper, but still constrained).
This is another reason we currently don't see high-performance compute in orbit. The processors that survive the radiation environment are generations behind what we'd find in a terrestrial data center. But again, for the workloads that matter here - cryptographic operations, randomness generation, small attestation tasks - these processors are more than capable.
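The error-correction side of radiation tolerance can be illustrated with triple modular redundancy (TMR), a standard technique in radiation-tolerant designs: store three copies of a value and take a bitwise majority vote, so a single-event upset in any one copy is masked. This is a minimal sketch of the voting logic, not any specific flight implementation.

```python
# Triple modular redundancy (TMR) sketch: keep three copies of each value and
# vote bit-by-bit, so a single-event upset in one copy cannot corrupt the result.
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 vote: each output bit is the majority of the three inputs."""
    return (a & b) | (a & c) | (b & c)

stored = [0b1011_0010] * 3          # three redundant copies of the same word
stored[1] ^= 0b0000_1000            # a cosmic ray flips one bit in one copy
recovered = majority_vote(*stored)
assert recovered == 0b1011_0010     # the upset is masked by the vote
```

Real systems combine this kind of redundancy with ECC memory and periodic scrubbing, since two independent upsets in the same word would defeat a simple 2-of-3 vote.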
Interestingly, cosmic radiation is both a challenge (for electronics reliability) and an opportunity (as an entropy source for cTRNG). The same high-energy particles that cause bit flips in processors also drive SpaceComputer's randomness generation. The trick is keeping the two concerns separate: radiation-hardened processing for reliability, dedicated detectors for randomness.
The launch cost factor
Everything in orbit got there on a rocket, and launch costs directly affect what's economically viable. The good news is that launch costs have dropped dramatically over the past decade, driven primarily by SpaceX's reusable rockets. A kilogram to LEO costs roughly $2,000-4,000 today, down from $50,000+ a decade ago.
This matters for space computing because it makes small, specialized satellite payloads affordable. You don't need a billion-dollar flagship mission to get compute hardware into orbit. A payload the size of a shoebox, riding as a secondary payload on a larger launch, can carry the hardware needed for cTRNG or SpaceTEE at a cost that makes commercial sense.
As launch costs continue to fall and rideshare opportunities proliferate, the economics of orbital compute will keep improving. More hardware in orbit means more capacity, more redundancy, and lower per-unit costs for orbital services. It also means more satellites whose command links, key material, and safe operation have to be managed carefully.
Constrained but powerful
So you can't commercially run a GPU cluster in orbit yet. What can you run?
Quite a lot, actually, as long as the workload is lightweight. Modern satellite processors are capable machines: they just can't sustain the thermal output of heavy parallel computation. For workloads measured in milliwatts rather than kilowatts, the thermal constraint barely matters.
And those workloads happen to be exactly the ones that benefit most from physical isolation:
Random number generation. Detecting cosmic radiation events and converting them to random bytes is a low-power operation. The hardware (sensors, basic processing, cryptographic signing) fits comfortably within a satellite's thermal budget. And randomness is one of the workloads where the provenance guarantee (proof that the data was generated in a specific, physically isolated environment) matters enormously.
Cryptographic key generation and custody. Generating a key pair inside a satellite means the private key was created in a physically inaccessible environment. If the satellite never exports the private key, you have a hardware security module that no one can physically attack. The key exists in one place in the universe, and that place is moving at 7.5 km/s above the atmosphere.
Attestation and signing. A satellite can sign data to prove it was generated or processed onboard. The signature is verifiable by anyone, and the physical isolation of the signing key gives the attestation a stronger guarantee than any terrestrial HSM. If you trust the satellite's identity (established at launch and verified through the chain of custody), you trust the signature.
Lightweight confidential computation. Small computation tasks that need provable isolation, like multi-party computation setup, secret sharing operations, or compliance-critical calculations, can run within a satellite's compute and thermal envelope. These aren't big jobs, but they're high-value jobs where the isolation property is worth the operational overhead of orbital deployment.
The pattern is consistent: orbital compute is ideal for operations that are small in computational terms but large in trust requirements. The key insight is that for these workloads, the computational cost is negligible. What you're really "paying" for is the isolation guarantee. And orbit provides that guarantee more convincingly than any terrestrial alternative.
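As one example, the randomness workload described above can be sketched in a few lines. This is an illustrative conditioning step, not SpaceComputer's actual cTRNG pipeline: raw detector event records (here, invented timestamp/channel pairs) are hashed into uniform output bytes. Real designs add health tests and entropy estimation before conditioning.

```python
import hashlib
import struct

# Illustrative sketch (not the actual cTRNG pipeline): condition raw
# cosmic-ray detector events into uniform random bytes by hashing them.
def condition(events: list[tuple[int, int]]) -> bytes:
    """Hash (timestamp_ns, detector_channel) event records into 32 output bytes."""
    h = hashlib.sha256()
    for ts_ns, channel in events:
        h.update(struct.pack("<QI", ts_ns, channel))
    return h.digest()

# Hypothetical detector readings: the unpredictable arrival times of
# cosmic-ray events are the entropy source.
sample = [(1_700_000_000_123_456_789, 3), (1_700_000_000_987_654_321, 1)]
print(condition(sample).hex())
```

The compute cost here is a handful of hash compressions per output block, which is why this workload fits comfortably inside a milliwatt-scale thermal budget.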
What "lightweight" actually means
To be concrete: a modern satellite processor can comfortably handle AES encryption, SHA-256 hashing, ECDSA signing, and BIP32 key derivation. These operations complete in milliseconds on even modest hardware. Generating a random 256-bit value from a cosmic ray detector and signing it takes trivial compute resources.
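A quick timing sketch backs up the "milliseconds" claim. HMAC-SHA256 stands in here because the Python standard library has no ECDSA; the point is the order of magnitude, not the specific primitive.

```python
import hashlib
import hmac
import time

# Illustrate why these ops fit a tiny power budget: hashing and MAC-ing small
# inputs takes microseconds each on commodity hardware. HMAC-SHA256 stands in
# for an asymmetric signature (the stdlib has no ECDSA).
key = b"\x00" * 32
msg = b"256-bit randomness sample"

start = time.perf_counter()
for _ in range(1_000):
    digest = hashlib.sha256(msg).digest()
    tag = hmac.new(key, digest, hashlib.sha256).digest()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"1,000 hash+MAC rounds: {elapsed_ms:.1f} ms")
```

Even on a satellite processor generations behind terrestrial hardware, a single such operation is a rounding error in the power budget.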
Where you hit limits is sustained computation: iterating over large datasets, running inference on even small ML models, or processing high-bandwidth data streams. If your workload requires more than a few watts of sustained compute, it probably needs to stay on the ground.
The sweet spot is operations that are computationally simple but extraordinarily sensitive. A single ECDSA key generation might take microseconds of CPU time, but the security implications of that key could be worth millions. That's the kind of asymmetry where orbital compute makes economic sense.
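The custody property described above (the private key exists in one place and is never exported) can be sketched as an interface that deliberately has no key-export path. The class and its names are invented for illustration, and HMAC-SHA256 again stands in for the satellite's real asymmetric signing key.

```python
import hashlib
import hmac
import secrets

# Sketch of the custody property: the key is generated inside the object and
# no method returns it. HMAC-SHA256 stands in for an asymmetric signing key;
# the OrbitalSigner class is invented for illustration.
class OrbitalSigner:
    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)   # created "in orbit", never exported

    def sign(self, payload: bytes) -> bytes:
        """The only operation the key supports: producing a tag over a payload."""
        return hmac.new(self._key, payload, hashlib.sha256).digest()

signer = OrbitalSigner()
tag = signer.sign(b"telemetry frame 42")
assert signer.sign(b"telemetry frame 42") == tag   # deterministic for one key
```

In a real deployment the enforcement comes from the hardware and the fact that no retrieval path exists, not from a language-level convention; the sketch just shows the shape of the API a consumer would see.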
Not a replacement for the cloud
This should be obvious, but it's worth stating explicitly: space computing is not trying to replace AWS. It's not going to host your database, run your CI pipeline, or serve your frontend. If you hear someone pitch orbital compute as a general-purpose cloud replacement, they're either confused or selling something.
Orbital compute is a complement to terrestrial infrastructure. It handles the small, high-assurance workloads where physical isolation is the critical property. Everything else stays on the ground, where bandwidth is cheap, latency is low, and you can throw as many GPUs at a problem as you want.
The architecture pattern looks like this: your application runs on terrestrial infrastructure, and it calls out to orbital services for the specific operations that benefit from space-grade isolation. Orbitport is the bridge between these two worlds. It's a single API that lets your Earth-bound application consume services running in orbit without managing satellite passes, ground stations, or communication protocols.
Think of it like using a hardware security module (HSM), except the HSM is in orbit and the isolation guarantee is backed by 500 km of vacuum instead of a tamper-resistant enclosure.
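From the application's point of view, the pattern looks roughly like this. The route name and response fields below are invented for illustration, not Orbitport's real API; the stub transport stands in for the network layer the gateway abstracts away.

```python
import json

# Hypothetical usage pattern (route and field names are invented, not
# Orbitport's real API): the application calls a simple request interface and
# never deals with satellite passes, ground stations, or radio protocols.
def fetch_orbital_randomness(transport) -> bytes:
    """Ask the gateway for the latest signed randomness value."""
    raw = transport("GET", "/v1/trng/latest")        # invented route
    body = json.loads(raw)
    return bytes.fromhex(body["randomness"])

# Stub transport standing in for the real HTTP client / gateway connection.
def fake_transport(method: str, path: str) -> str:
    return json.dumps({"randomness": "ab" * 32, "signature": "..."})

print(fetch_orbital_randomness(fake_transport).hex())
```

The design point is that all orbital complexity (passes, buffering, signatures) lives behind the gateway, so consuming an orbital service looks like consuming any other API.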
When to use orbital services
A practical decision framework:
- Do you need provable physical isolation? If an auditor, regulator, or counterparty needs assurance that no one could have physically accessed the hardware, orbital compute provides a guarantee that's hard to argue with.
- Is the workload small and high-value? Key generation, signing, randomness, small confidential computations. If the operation takes milliseconds to seconds and the trust requirement is high, it's a good candidate.
- Can you tolerate variable latency? Orbital services aren't always instantly reachable. If your workload can wait seconds to minutes (or can use a buffered data source like the IPFS beacon), orbital compute fits.
- Is the alternative "trust the operator"? If the terrestrial version of your workload requires trusting a cloud provider, a data center operator, or a hardware custodian, and you'd rather not, orbital compute removes that trust requirement.
If you answered yes to most of these, you're looking at a good candidate for orbital services.
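The checklist above can be written down as a toy scoring function. This is purely illustrative; "yes to most of these" is interpreted here as at least three of the four criteria.

```python
# The decision framework as a toy scoring function (illustrative only).
def orbital_fit(needs_physical_isolation: bool,
                small_high_value: bool,
                latency_tolerant: bool,
                avoids_operator_trust: bool) -> bool:
    """'Yes to most of these' interpreted as at least 3 of the 4 criteria."""
    score = sum([needs_physical_isolation, small_high_value,
                 latency_tolerant, avoids_operator_trust])
    return score >= 3

assert orbital_fit(True, True, True, False)        # key generation: good fit
assert not orbital_fit(False, False, True, False)  # web serving: stay on Earth
```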
Where this is heading
SpaceComputer operates individual satellite payloads that provide specific services (cTRNG being the first). The thermal and communication constraints mean each satellite does a focused job.
The trajectory over the next several months is toward orbital compute networks: multiple satellites with different capabilities, communicating with each other and with the ground, forming a distributed system in orbit. As launch costs continue to drop and satellite-to-satellite communication matures, the range of viable orbital workloads will expand.
Some milestones on that path:
- More satellites, more providers. Orbitport's plugin architecture is designed for a multi-provider future. As more satellite operators offer compute and sensing services, they plug into the same API layer. Redundancy improves. Single points of failure disappear.
- SpaceTEE. Trusted execution environments in orbit, combining TEE-grade software isolation with orbital physical isolation. This is on SpaceComputer's roadmap and builds on the same infrastructure that supports cTRNG today. SpaceTEE is covered in detail in its own concept page.
- Orbital data processing. As thermal management improves and on-orbit power generation scales up, the range of computation you can do in space will grow. Not GPU-scale, but meaningful processing that benefits from the isolation guarantee.
- Inter-satellite networking. Satellites that communicate directly with each other, forming an orbital mesh. This reduces dependence on ground stations and enables new architectures where orbital nodes coordinate without terrestrial intermediaries.
The important thing is that the fundamentals are already here. The security property (physical inaccessibility of the hardware) doesn't require future technology. It's a consequence of putting hardware in orbit, and it works today. cTRNG is the proof. Everything else is a matter of scaling up what's already proven.
Common misconceptions
A few things worth clearing up, since they come up frequently:
"Satellites can be hacked via radio uplink." Yes, satellites receive commands via radio, and command link security is critical. But command authentication and encryption are well-understood problems. The satellite verifies cryptographic signatures on commands before executing them. A compromised uplink doesn't give you physical access to the hardware, it gives you the ability to send commands, which is a different attack surface than physical access.
"Space debris could destroy the satellite." It could. This is an operational risk, not a security risk. Mission planning accounts for debris avoidance, and critical data is managed trustlessly by SpaceComputer designs. The security properties don't depend on any single satellite surviving indefinitely.
"You could retrieve a satellite." In theory, a nation-state could launch a mission to rendezvous with and capture a satellite. In practice, this would be visible to the entire global space surveillance network, would take weeks to months to execute, and would be an act of extraordinary geopolitical significance. It's not a practical attack vector for any realistic threat model. By the time someone could execute a physical retrieval, you'd know about it and could revoke any keys or credentials associated with that satellite. Could be a premise for your own sci-fi novel, though!
"Space computing is just a gimmick." This one's fair to question. The answer depends entirely on your workload: if you're looking for general-purpose compute, yes, space is absurd. No serious person would suggest running a personal website in orbit. But if you're looking for physical isolation as a security property for specific high-assurance operations, space is the only environment that provides it unconditionally.
The question isn't whether space computing is useful in general. It's whether your specific use case benefits from a physical isolation guarantee that no terrestrial deployment can match. If you're generating cryptographic keys, producing verifiable randomness, or running confidential computations where the threat model includes physical access to hardware, then orbital compute isn't a gimmick. It's the only architecture that fully addresses the threat.
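The command-authentication point from the first misconception above boils down to a verify-before-execute check. This sketch uses HMAC with a constant-time comparison as a stdlib stand-in; real uplink security uses asymmetric signatures plus replay protection, and the key value here is invented.

```python
import hashlib
import hmac

# Sketch of verify-before-execute command authentication. HMAC stands in for
# an asymmetric signature scheme; real systems also add replay protection.
GROUND_KEY = b"\x01" * 32  # invented shared key for this toy model

def authenticate(command: bytes, tag: bytes) -> bool:
    """Accept a command only if its tag verifies (constant-time compare)."""
    expected = hmac.new(GROUND_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"SET_MODE SAFE"
good_tag = hmac.new(GROUND_KEY, cmd, hashlib.sha256).digest()
assert authenticate(cmd, good_tag)
assert not authenticate(cmd, b"\x00" * 32)   # forged command is rejected
```

A compromised uplink lets an attacker submit commands to this check; it does not let them bypass it, which is exactly the distinction between the command attack surface and physical access.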
Further reading
- How Satellites Communicate -- the connectivity model that shapes orbital services
- How Cosmic Randomness Works -- the first orbital service built on the principles described here
- SpaceTEE: Trusted Execution from Orbit -- the next step in orbital compute, combining TEEs with physical isolation
- Orbitport Architecture -- the gateway that makes orbital services accessible to developers