Free AI, powered by the crowd

Chat with open models for free — served by GPUs worldwide. Contribute compute, earn credits. No signup.

Prompts are processed by distributed GPU nodes. Don't share passwords or API keys.

Why distributed inference?

No single point of failure

Models run across multiple peers. If one goes down, requests automatically route to another — no downtime.

Your GPU, your credits

Contribute idle compute and earn credits. Spend them to run larger models across the network. A fair exchange.

No vendor lock-in

Drop-in OpenAI-compatible API. Switch from any provider with one env var. Your tools, your choice.

Private & federated

Run your own swarm or federate with trusted peers. Control membership, models, and data with cross-network routing.

Open source

Apache 2.0 licensed. Audit the code, fork it, extend it. Every layer is transparent — no black boxes, no hidden costs.

Scales with community

Every new seeder makes the network faster and more capable. More contributors bring more models and lower latency.

Get started in 60 seconds

pip, Docker, or one-liner — your choice.

curl -fsSL mycellm.ai/install.sh | sh

Installs, initializes, prints next steps.

Drop-in OpenAI replacement

Change one env var. Everything else stays the same.

# Point any OpenAI-compatible tool at mycellm
export OPENAI_BASE_URL=http://localhost:8420/v1
# Works with:
Python SDK · LangChain · LlamaIndex · OpenCode · Claude Code · aider · Continue.dev
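For example, with the official openai Python SDK (v1+). The model id below is a placeholder, since what's available depends on your swarm; call client.models.list() to see the real options.

from openai import OpenAI

# Point the SDK at a local mycellm node instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8420/v1",
    api_key="unused",  # no account needed; the SDK just wants a non-empty string
)

# "llama-3.1-8b" is a placeholder id, not a confirmed mycellm model name.
response = client.chat.completions.create(
    model="llama-3.1-8b",
    messages=[{"role": "user", "content": "Hello from the swarm!"}],
)
print(response.choices[0].message.content)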

Sensitive Data Guard

Outgoing prompts are scanned on-device for API keys, passwords, and PII. High-severity matches automatically route to your local model — sensitive data never leaves your device.
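Conceptually, the guard is a pre-flight check on every outgoing request. The Python sketch below is an illustration of that idea, not mycellm's actual scanner: a few regex patterns for common credential formats, with any high-severity hit forcing the request onto the local model.

import re

# Illustrative patterns only; a real scanner would cover many more
# credential formats plus PII such as emails and card numbers.
HIGH_SEVERITY = {
    "api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def route(prompt: str) -> str:
    # Any high-severity match keeps the prompt on the local model.
    for pattern in HIGH_SEVERITY.values():
        if pattern.search(prompt):
            return "local"
    return "network"

print(route("Summarize this design doc"))      # -> network
print(route("my key is sk-" + "x" * 24))       # -> local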

Native iOS app (new)

mycellm on iPad — network chat with node attribution

Your iPad is a full peer on the network: it serves inference at 30+ tokens/sec on Metal, earns credits, and offers chat with persistent threads and privacy protection.

Also works on iPhone.

Web dashboard

Served locally at localhost:8420, the dashboard shows a fleet overview with hardware cards, peer topology, and your API endpoint.