Research Preview

Cloud infrastructure
for machines.

Deploy software on programmable compute, elastic storage, and reactive infrastructure that autonomous systems can inspect, repair, and control through one API.

$ npx @layerbrain/skills install
Works with any AI coding agent
COMPUTE

Real machines. Programmable in milliseconds.

Each Machine gets its own vCPU, memory, disk, ports, IPv6, and snapshots, ready in milliseconds with no container orchestration to manage.

model train: load → train → checkpoint
synthetic data: prompt → generate → filter
data pipeline: ingest → transform → write
eval sweep: fan out → score → compare

AI workloads

Run training jobs, data generators, ETL pipelines, and eval sweeps against real filesystems with durable checkpoints and structured logs. No object-store workarounds or throwaway scratch space.

Machine startup → ready: 50 ms
CPU: 4 vCPU
Memory: 16 GB
Storage: 100 GB

Fast cold starts

Machines go from zero to ready in under 50 ms. You pay for execution time, not idle capacity, and scale back to zero the moment work finishes.

Traefik · n8n · Plausible · Infisical · Postgres · ClickHouse

Run any stack

Real ports, IPv6 addresses, and durable block storage. Run Postgres, Traefik, ClickHouse, or whatever your stack needs without fighting custom networking or ephemeral filesystems.

Machine startup · n=10k · p50
layerbrain 50 ms · firecracker 450 ms · docker 1,800 ms · k8s pod 8,400 ms · ec2 boot 42,000 ms
Intra-region hop · service rpc round-trip · p50
layerbrain 33 ms · service mesh 95 ms · serverless rpc 140 ms · gateway → fn 210 ms · lambda cold 380 ms
STORAGE

Programmable storage. Fast, durable, forkable.

Storage that lives across Machines, not inside them. Datasets, checkpoints, logs, and model outputs stay mounted and shared between workers with no sync scripts or disk rebuilds between runs.

/data mounted
checkpoint.pt 1.2 GB
dataset.parquet 840 MB
logs 12 items
outputs syncing
repos/my-app git

Normal files

Your code reads normal paths like /data/dataset.parquet. No object-store SDKs, no presigned URLs, no download-then-process glue.

$ aws s3 cp model.pt s3://runs/
upload: model.pt → s3://runs/model.pt
$ aws s3 ls s3://runs/
2026-05-12 model.pt 1.2 GB
2026-05-12 config.yaml 4 KB
2026-05-11 dataset.parq 840 MB
endpoint → storage.layerbrain.cloud

S3 compatible objects

Our S3-compatible endpoint means your existing SDKs, CLIs, lifecycle policies, and CI/CD integrations work without rewriting storage calls or adding proprietary client libraries.
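As a sketch, pointing the AWS CLI v2 at the endpoint shown above takes a single config entry; the profile name here is hypothetical.

```ini
# ~/.aws/config — profile name is illustrative; endpoint from the panel above
[profile layerbrain]
endpoint_url = https://storage.layerbrain.cloud
```

With that profile, commands like `aws s3 ls --profile layerbrain` talk to the layerbrain endpoint with no code changes; SDKs accept the same endpoint via their standard endpoint-override option.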

machine_01
save
snapshot_a4f9 2.4 GB · 320 ms
resume
machine_02

State between machines

Share checkpoints, fork experiments, and pick up distributed jobs from where they left off. No rebuilding disks from scratch between runs.

Storage latency · cold p50 GET · 1 KB
layerbrain 45 ms · s3 standard 211 ms · r2 168 ms · gcs 196 ms · b2 384 ms
faster than s3 standard
99% s3 api · drop-in
EVENTS

Reactive infrastructure. Built to recover.

Every primitive in the stack emits typed events with real context attached. When something fails, recovers, or finishes, agents get the signal and the state they need to act on it without polling.

EVENTS
14:32:01 compute.started
14:32:04 storage.put
14:32:09 fs.snapshot.created
14:32:09 fs.commit
14:33:47 compute.exited
14:35:02 compute.failed

Real-time event stream

Every primitive emits structured events with enough context to debug a failure, resume an interrupted run, or kick off the next step in a pipeline.

POST https://your-app.com/hooks/lb
signed · retried · replayable
200 delivered 12ms
503 retry 1/8 +2s
200 delivered 21ms

Webhooks that retry

Deliver events to your systems with signatures, retries, and replay so automation keeps working through outages.
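A sketch of the receiving side, assuming the signature is an HMAC-SHA256 hex digest of the raw body. The signing scheme and header layout are assumptions for illustration, not documented layerbrain behavior.

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC over the raw request body and compare in
    constant time, so timing differences leak nothing about the secret."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Verify before parsing: reject the request (non-2xx) on a bad signature so the delivery is retried rather than silently dropped.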

compute.failed
spawn debugger
fork fs from last snapshot
retry with patch
compute.exited · 0

Self healing software

When something breaks, agents inspect the run, fork the filesystem from the last snapshot, patch what went wrong, and retry. Failures become recovery paths, not pages to on-call.
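The recovery loop above can be sketched as a policy function. The event names mirror the stream shown earlier, but the action strings and the `recovery_plan` helper are illustrative, not a real layerbrain API.

```python
def recovery_plan(event: dict) -> list[str]:
    """Map a terminal event to an ordered list of recovery actions.
    Actions are opaque strings an agent would execute against the API."""
    if event.get("type") == "compute.failed":
        snapshot = event.get("last_snapshot", "latest")
        return [
            "spawn_debugger",            # inspect the failed run
            f"fork_fs:{snapshot}",       # fork fs from the last snapshot
            "retry_with_patch",          # re-run with the fix applied
        ]
    if event.get("type") == "compute.exited" and event.get("code") == 0:
        return []                        # clean exit: nothing to heal
    return ["inspect"]                   # unknown state: look before acting
```

The point is that each event carries enough context (type, exit code, last snapshot) for the policy to act without polling or re-querying state.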

LATENCY

Near edge compute. Globally replicated storage.

We place compute and storage in the same region as the caller so agent loops spend time executing, not waiting on the network. Hot state stays replicated nearby to cut round trips to a minimum.

Response time per zone · last 24 h
Europe 78 ms · N. America 84 ms · Asia 180 ms · S. America 92 ms · Africa 95 ms · Oceania 97 ms
HTTP breakdown · Wait / DNS / TCP / TTFB / Download
🇺🇸 IAD 14ms
🇩🇪 FRA 19ms
🇺🇸 NYC 21ms
🇳🇱 AMS 23ms
🇬🇧 LDN 27ms
🇫🇷 PAR 34ms
🇺🇸 ATL 38ms
🇨🇦 TOR 41ms
🇺🇸 SFO 56ms
🇸🇬 SGP 67ms
🇦🇺 SYD 89ms
🇯🇵 TKY 173ms
🇮🇳 MUM 188ms
🇮🇳 BLR 196ms
🇰🇷 SEO 211ms
PRICING

Only pay for primitives. No seats. No plans. No commitments.

Pay per second for compute, memory, and Machine storage. No seats, no setup fees, no monthly charges, and no egress fees.

Machines
vCPU $0.000014 per vCPU-second
Memory $0.0000045 per GiB-second
Machine storage $0.00000003 per GiB-second
Egress $0.00 no egress fees
Object storage
Stored $0.04 per GB-month
Writes $0.01 per 1,000 requests
Reads $0.001 per 1,000 requests
Egress $0.00 no egress fees
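A worked example using the per-second rates above: one hour on the 4 vCPU / 16 GB machine from the compute section, with 100 GiB of Machine storage. The `machine_cost` helper is hypothetical, just arithmetic over the published rates.

```python
# Published per-second rates from the pricing table above.
VCPU_PER_S = 0.000014       # $ per vCPU-second
GIB_PER_S = 0.0000045       # $ per GiB-second of memory
STORAGE_GIB_PER_S = 0.00000003  # $ per GiB-second of Machine storage

def machine_cost(vcpu: int, mem_gib: int, storage_gib: int, seconds: int) -> float:
    """Total cost in dollars for one Machine over `seconds` of execution."""
    rate = (vcpu * VCPU_PER_S
            + mem_gib * GIB_PER_S
            + storage_gib * STORAGE_GIB_PER_S)
    return seconds * rate

print(round(machine_cost(4, 16, 100, 3600), 4))  # → 0.4716
```

So the example machine costs roughly $0.47 per hour while running, and nothing once it scales back to zero.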

Cloud infrastructure
machines can operate.

Layerbrain gives software the primitives to operate itself: compute it can reach, state it can fork, events it can trust, and recovery it can run.

$ npx @layerbrain/skills install
Works with Codex, Claude Code, Poolside, Cursor, GitHub Copilot, and open source tools.