doing `ollama pull qwen3-coder-next:latest` ... 51GB #ollama
Got the latest Ollama running with image generation models now. Used models are x/flux2-klein:latest and x/z-image-turbo. Took about 1 min 20 s to generate these images on an M2 Max CPU with 64GB RAM. #ollama #GenAI
I've been messing around with #ollama quite a bit lately. Holy moly, with Vulkan it's actually becoming usable!