Integrating Jan.ai with opencode
This weekend I went back to testing Jan.ai for running LLMs locally on my MacBook. I like the simple interface and the already built-in web search using Exa. From my crude testing I concluded that MLX-optimized models run faster than llama.cpp, and I wanted to test this further with opencode. Since I haven't found official documentation on how to integrate mlx-lm with opencode, I'm providing links and settings on how to achieve it.
Run MLX models with uvx one-liner (kconner.com)
```sh
uvx --python 3.12 --isolated --from mlx-lm mlx_lm.server
```
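Before wiring it into opencode, it helps to check that the server actually answers. The snippet below is a minimal sketch, assuming the default 127.0.0.1:8080 address used in the opencode config further down and the OpenAI-compatible chat completions endpoint that mlx_lm.server exposes; the model value is copied from that config and may need adjusting to a model you actually have downloaded.

```sh
# Sanity check: ask the local mlx_lm.server for a short completion.
# Assumes the default host/port (127.0.0.1:8080) and that the server
# accepts a model path in the request body; adjust the path as needed.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "~/Library/Application Support/Jan/data/mlx/models/Jan-v3-4B-base-instruct-8bit",
        "messages": [{"role": "user", "content": "Say hello"}],
        "max_tokens": 32
      }'
```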
I wanted to reuse the MLX models already downloaded with Jan. By default, they are stored in ~/Library/Application Support/Jan/data/, which can be changed in the settings.1 Alternatively, you can just customize the opencode settings in ~/.config/opencode/opencode.json as follows (awni):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "MLX (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "~/Library/Application Support/Jan/data/mlx/models/Jan-v3-4B-base-instruct-8bit": {
          "name": "Jan-v3-4B-base-instruct-8bit"
        }
      }
    }
  }
}
```
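The key in the `models` object is what gets sent to the server as the model identifier, so using the full path lets mlx_lm.server load the files Jan already downloaded. To see which other Jan-downloaded MLX models you could add as entries, you can list that directory; this is a sketch assuming Jan keeps its MLX downloads under mlx/models inside its data folder, as the path in the config above suggests.

```sh
# List MLX models Jan has already downloaded (assumed location, based on
# the model path used in the opencode config above).
ls ~/Library/Application\ Support/Jan/data/mlx/models/
```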
Finally, run opencode (copied from awni):

- Enter `/connect`
- Type `MLX` and select it
- For the API key enter `none`
- Select the model
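In practice this means keeping two terminals open: one for the MLX server, one for opencode. A rough sketch (the uvx line is the same one-liner from above; starting opencode from the project directory is my assumption):

```sh
# Terminal 1: serve MLX models locally (same one-liner as above)
uvx --python 3.12 --isolated --from mlx-lm mlx_lm.server

# Terminal 2: start opencode in your project, then follow the
# /connect steps above to select the MLX provider and model
opencode
```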