Discussion about this post

Ignasi

Thank you very much for this post! Just in case you haven’t tried it yet, you can also package the model with specific settings, such as the context window, so you don’t have to configure them every time. This also allows you to share the configured model via Docker Hub. I mention this in the following post: https://www.docker.com/blog/opencode-docker-model-runner-private-ai-coding/

JP

The hybrid setup makes sense. For agentic workflows where you're splitting tasks across specialist subagents, though, the model choice per role matters a lot. I ran experiments with reviewer agents in OpenCode and found that shorter, domain-focused prompts per agent beat one big generic model trying to cover everything. Wrote it up with the agent configs here: https://reading.sh/one-reviewer-three-lenses-building-a-multi-agent-code-review-system-with-opencode-21ceb28dde10
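The per-role idea above can be sketched as an OpenCode agent config. This is a minimal illustration, not the configs from the linked post: the agent names and prompt text are made up, and the model identifier is a placeholder — check the OpenCode documentation for the exact schema your version supports.

```json
{
  "agent": {
    "security-reviewer": {
      "mode": "subagent",
      "prompt": "Review the diff only for security issues: injection, auth checks, secrets handling. Ignore style.",
      "model": "provider/model-id"
    },
    "perf-reviewer": {
      "mode": "subagent",
      "prompt": "Review the diff only for performance: unnecessary allocations, N+1 queries, work inside hot loops.",
      "model": "provider/model-id"
    }
  }
}
```

Keeping each prompt narrow is the point: a small, focused instruction per reviewer tends to produce sharper findings than one generic "review everything" prompt, and each role can be pinned to whichever model handles it best.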
