local LLM
2 articles tagged with this topic
Qwen3 · local LLM
Should You Disable "Thinking Mode" When Running AI Coding Locally? A Practical Question Worth Clarifying
Should you disable thinking mode when running Qwen3 locally for coding? A real debate with structural implications for AI dev toolchains.
Apr 18 · 3 min read
llama.cpp · Gemma 4
Gemma 4 llama.cpp Issues Resolved With Recent Fixes
Google Gemma 4 models now run correctly in llama.cpp after critical fixes for output quality and crashes
Apr 4 · 1 min read