# OpenCode
OpenCode is an open-source AI coding assistant. Point it at your mycellm node for distributed inference.
```sh
# Set mycellm as the LLM backend
export OPENAI_BASE_URL=http://localhost:8420/v1
export OPENAI_API_KEY=your-mycellm-key  # optional
export OPENAI_MODEL=auto

# Start OpenCode
opencode
```

## Configuration file
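Before launching OpenCode, you can sanity-check that the endpoint is reachable. This is a sketch, assuming mycellm exposes the standard OpenAI-compatible `/v1/models` listing route (the `BASE_URL` fallback matches the export above):

```shell
# Derive the model-listing URL from OPENAI_BASE_URL (default: local node).
BASE_URL="${OPENAI_BASE_URL:-http://localhost:8420/v1}"
MODELS_URL="${BASE_URL%/}/models"
echo "checking ${MODELS_URL}"

# -s: silent, -f: fail on HTTP errors; prints a hint if the node is down.
curl -sf "${MODELS_URL}" || echo "mycellm node not reachable at ${MODELS_URL}"
```

If this prints a JSON model list, OpenCode should be able to connect with the same settings.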
In your OpenCode config (`~/.config/opencode/config.json` or project `.opencode.json`):
```json
{
  "provider": "openai",
  "model": "auto",
  "apiBase": "http://localhost:8420/v1",
  "apiKey": "your-mycellm-key"
}
```

## Using with a remote mycellm node
If your mycellm node is on another machine (e.g., a GPU server), point the base URL at it:
```sh
export OPENAI_BASE_URL=http://gpu-server:8420/v1
opencode
```

## Using with the public network
No mycellm node needed; use the public gateway directly:
```sh
export OPENAI_BASE_URL=https://api.mycellm.dev/v1/public
opencode
```

## Tips

- Use `auto` as the model name; mycellm routes each request to the best available model.
- If you have multiple models loaded, specify one by name for consistent behavior.
- The mycellm dashboard (port `:8420`) shows which models are available.
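To check model names from the CLI instead of the dashboard, you can query the same endpoint OpenCode uses and then pin one model by name. This assumes the standard OpenAI-compatible `/v1/models` route; `your-model-name` is a placeholder to replace with a name from the list:

```shell
# List loaded models (prints a hint instead if the node is not running).
MODELS_JSON=$(curl -s "${OPENAI_BASE_URL:-http://localhost:8420/v1}/models" || true)
echo "${MODELS_JSON:-no response; is the mycellm node running?}"

# Pin a specific model instead of "auto" for consistent behavior across runs.
export OPENAI_MODEL=your-model-name   # placeholder: use a name from the list
echo "OPENAI_MODEL=${OPENAI_MODEL}"
```

Run `opencode` afterwards in the same shell so it picks up `OPENAI_MODEL`.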