How NexApps honors three impossible constraints

An autonomous AI app builder & marketplace running on $0, storing zero binary bytes, and emulating native apps without a single GPU.

1. The $0 Capital Pipeline

NexApps never runs compilers itself. Instead, the orchestrator stitches together free tiers across vendors, choosing the best fit per platform target.

Each app generated by NexApps gets its own .github/workflows/nexapps.yml — placed in the creator's GitHub repo. NexApps fires repository_dispatch events; the heavy lifting happens on someone else's runners. Even at scale, the marginal cost to NexApps stays at $0.00.
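A minimal sketch of what firing one of those dispatch events might look like. The event name `nexapps-build`, the payload fields, and the `fireBuild` helper are all illustrative assumptions; only the GitHub `repository_dispatch` endpoint and its 204 success status come from GitHub's REST API.

```typescript
// Hypothetical sketch: kicking off a build in the creator's repo via
// GitHub's "create a repository dispatch event" endpoint.

interface BuildRequest {
  appId: string;
  target: "android" | "ios" | "web" | "windows" | "macos" | "linux";
}

// Build the JSON body the dispatch endpoint expects. `event_type` must
// match the `types:` filter under `on: repository_dispatch` in nexapps.yml.
function dispatchBody(req: BuildRequest) {
  return {
    event_type: "nexapps-build",
    client_payload: { app_id: req.appId, target: req.target },
  };
}

// Fire the event; the creator's runners do the heavy lifting from here.
async function fireBuild(owner: string, repo: string, token: string, req: BuildRequest) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/dispatches`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify(dispatchBody(req)),
  });
  if (res.status !== 204) throw new Error(`dispatch failed: ${res.status}`); // 204 = accepted
}
```

The orchestrator's only job is to emit this small JSON envelope; all compute happens on the vendor's side.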

2. The Zero-Byte Storage Constraint

Compiled binaries are large and expensive. So we never touch them.

Each user links their own cloud — Google Drive, Dropbox, OneDrive, GitHub Releases, IPFS, S3 — once. Their OAuth/API token is encrypted at rest with AES-GCM and stored alongside their account.
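The encryption-at-rest step could look like the following sketch, using Node's built-in AES-256-GCM. The `iv.ct.tag` serialization format and function names are assumptions, not NexApps' actual scheme.

```typescript
// Sketch: encrypting a linked cloud's OAuth/API token before persistence.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt with a 256-bit key; result bundles nonce, ciphertext, and auth tag.
function encryptToken(token: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(token, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // integrity tag, checked on decrypt
  return [iv, ct, tag].map((b) => b.toString("base64")).join(".");
}

function decryptToken(blob: string, key: Buffer): string {
  const [iv, ct, tag] = blob.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption throws if the blob was tampered with
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM gives both confidentiality and integrity, so a corrupted or tampered record fails loudly at decrypt time instead of yielding a garbage token.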

During a build, the workflow's last step (.nexapps/route.sh) runs inside their CI, uploading the artifact directly to their personal cloud. The runner POSTs back only the resulting URL to NexApps. Our database stores the URL. The bytes never pass through our infrastructure.
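The callback handler's job is to persist a URL and nothing else. A sketch of that reduction step, with invented field names (the real payload shape of `.nexapps/route.sh` is not shown in this article):

```typescript
// Hypothetical callback payload POSTed by the creator's CI after upload.
interface ArtifactCallback {
  appId: string;
  target: string;
  url: string;       // where route.sh put the binary (Drive, S3, IPFS, ...)
  sizeBytes: number; // reported by the runner, for display only
}

// Validate the URL and reduce the payload to the single row we persist.
function recordArtifact(cb: ArtifactCallback): { appId: string; target: string; url: string } {
  const u = new URL(cb.url); // throws on malformed URLs
  if (u.protocol !== "https:" && u.protocol !== "ipfs:") {
    throw new Error(`unsupported scheme: ${u.protocol}`);
  }
  // Note what is deliberately absent: no file body, no stream, no buffer.
  return { appId: cb.appId, target: cb.target, url: u.toString() };
}
```

Because the handler only ever sees a string, there is no code path through which binary bytes could reach NexApps' storage.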

Result: NexApps databases hold a marketplace directory, not a content delivery network. Storage cost = $0. Egress cost = $0. Legal exposure for hosting third-party binaries = $0.

3. The Emulation Constraint

Cloud Android emulators need GPUs. iOS simulators legally require Mac hardware. Both cost real money.

The trick: we generate apps in Flutter, whose single source tree compiles to all six targets, including the web. The web build is a real Flutter app rendered on a WebAssembly-powered canvas, visually and functionally near-identical to the native builds.

NexApps deploys the web build to Cloudflare Pages and embeds it inside a phone-frame iframe on the marketplace. Users get an interactive emulator that is actually the real app. The CPU work runs in their browser, latency is negligible because nothing round-trips to a server, and our cost stays zero.
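The embed itself can be as simple as an iframe pointing at the Pages deployment. A sketch, where the `phone-frame` class, the helper name, and the `<slug>.pages.dev` URL pattern are assumptions:

```typescript
// Hypothetical helper that renders the marketplace's phone-frame embed.
function phoneFrameEmbed(appSlug: string): string {
  const src = `https://${appSlug}.pages.dev/`; // assumed Cloudflare Pages URL pattern
  return [
    `<div class="phone-frame">`,
    // sandbox limits what the embedded app can do to the marketplace page
    `  <iframe src="${src}" sandbox="allow-scripts allow-same-origin" loading="lazy"></iframe>`,
    `</div>`,
  ].join("\n");
}
```

`loading="lazy"` keeps the marketplace listing fast: the "emulator" only boots when the frame scrolls into view.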

4. Bring-Your-Own-Key + Owner Defaults

Every user can add their own AI provider keys (Gemini, OpenAI, Anthropic, OpenRouter, Groq, or any OpenAI-compatible custom endpoint). Keys are encrypted with AES-GCM before persistence and never leave the worker except as outbound API calls.

The platform owner provisions a default Gemini key (Google AI Studio) so newcomers can build immediately. Owner-set rate limits cap free generations per user per day; once exceeded, the UI prompts them to bring their own key.
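The daily cap logic might be sketched like this. The class name and in-memory map are illustrative; a real deployment would persist counts in a KV store or database keyed by user and day.

```typescript
// Minimal sketch of an owner-set daily quota on free generations.
class DailyQuota {
  private counts = new Map<string, { day: string; used: number }>();
  constructor(private limit: number) {}

  // Returns true if the user may run a free generation today; false means
  // the UI should prompt them to bring their own key.
  tryConsume(userId: string, now = new Date()): boolean {
    const day = now.toISOString().slice(0, 10); // e.g. "2024-05-01"
    const entry = this.counts.get(userId);
    if (!entry || entry.day !== day) {
      this.counts.set(userId, { day, used: 1 }); // first use of the day resets the count
      return true;
    }
    if (entry.used >= this.limit) return false;  // cap hit: time to bring your own key
    entry.used += 1;
    return true;
  }
}
```

Keying the counter on the calendar date gives a natural reset at midnight UTC without any background job.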

Security

What's next