2026-04-18 17:28:17
How to Run #LocalLLMs with #ClaudeCode
https://unsloth.ai/docs/basics/claude-code
Yesterday I upgraded my local #llm to qwen3.5, and it works pretty well. This is Unsloth's Qwen3.5-35B-A3B-Q4_K_M.gguf. I also had to upgrade to the latest llama.cpp (which still has a few rough edges), but it seems as good as the Qwen3-Next-80B I was using. It's also multimodal (with the mmproj gguf loaded), and the multimodal side is usefully fast at image description even on CPU only…
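For anyone wanting to reproduce this, a minimal sketch of the setup: llama-server exposes an OpenAI-compatible endpoint, and Claude Code can be pointed at it via ANTHROPIC_BASE_URL (as the linked Unsloth guide describes). The mmproj filename, port, and context size here are my assumptions, not canonical values — adjust to your own files and hardware:

```shell
# Serve the model with llama.cpp's llama-server
# --mmproj loads the multimodal projector gguf (filename here is hypothetical)
llama-server \
  -m Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --mmproj mmproj-F16.gguf \
  -c 32768 \
  --port 8080 \
  --jinja

# In another terminal: point Claude Code at the local server
export ANTHROPIC_BASE_URL=http://127.0.0.1:8080
claude
```

You'll want a recent llama.cpp build for the newer Qwen architectures; older releases won't recognize the model.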