OpenCode

OpenCode is an open-source AI coding assistant. Point it at your mycellm node for distributed inference.

```sh
# Set mycellm as the LLM backend
export OPENAI_BASE_URL=http://localhost:8420/v1
export OPENAI_API_KEY=your-mycellm-key   # optional
export OPENAI_MODEL=auto

# Start OpenCode
opencode
```
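Before launching OpenCode, it can help to confirm the node is actually reachable. A minimal stdlib sketch, assuming mycellm serves the standard OpenAI-compatible `/v1/models` listing route (the helper name `models_url` is illustrative, not part of mycellm):

```python
import json
import os
import urllib.request

def models_url(base_url=None):
    """Build the model-listing URL for the configured backend."""
    base_url = base_url or os.environ.get("OPENAI_BASE_URL", "http://localhost:8420/v1")
    return f"{base_url.rstrip('/')}/models"

# The actual request requires a running node:
# with urllib.request.urlopen(models_url()) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```

If the request succeeds, the IDs it prints are the names you can use in place of `auto`.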

Alternatively, set the same values in your OpenCode config (`~/.config/opencode/config.json`, or `.opencode.json` in the project root):

```json
{
  "provider": "openai",
  "model": "auto",
  "apiBase": "http://localhost:8420/v1",
  "apiKey": "your-mycellm-key"
}
```

If your mycellm node is on another machine (e.g., a GPU server):

```sh
export OPENAI_BASE_URL=http://gpu-server:8420/v1
opencode
```

To skip running a mycellm node entirely, point OpenCode at the public gateway directly:

```sh
export OPENAI_BASE_URL=https://api.mycellm.dev/v1/public
opencode
```
  • Use `auto` as the model name; mycellm routes each request to the best available model
  • If you have multiple models loaded, specify one by name for consistent output
  • The mycellm dashboard (port 8420) shows which models are currently available
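Because mycellm exposes an OpenAI-compatible API, any OpenAI client can use it, not just OpenCode. A minimal stdlib sketch, assuming the standard `/v1/chat/completions` route; the key, prompt, and `build_request` helper are placeholders for illustration:

```python
import json
import os
import urllib.request

def build_request(prompt, base_url=None, api_key=None, model="auto"):
    """Compose an OpenAI-style chat completion request for a mycellm node."""
    base_url = base_url or os.environ.get("OPENAI_BASE_URL", "http://localhost:8420/v1")
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers=headers,
    )

req = build_request("Write a haiku about GPUs", api_key="your-mycellm-key")
# Sending it requires a running node or the public gateway:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

With `model="auto"`, the routing decision shown in the tips above happens server-side; pass an explicit model name instead when you need consistent output.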