Ollama
7 articles tagged with this topic
Deploy Gemma 4 Locally on Mac with Public Remote Access
Full-stack guide: Ollama + OrbStack + frp + Nginx exposes local Gemma 4 inference to the public internet via HTTPS.
Local LLM Setup Guide for RTX 5070 12GB VRAM
Choosing local AI models for chat, writing, and music on a 12GB VRAM RTX 5070 build.
Local AI Goes Mainstream When the Tooling Becomes Boring Infrastructure
A Reddit argument: local LLM adoption hinges on reliable tooling stacks, not benchmark gains, mirroring Docker's container revolution.
Qwen 3.5 Tool Calling Bugs: What's Broken and How to Fix Them
Four confirmed bugs break Qwen 3.5 tool calling in agentic setups. Here's what's fixed, what's still open, and client-side workarounds.
Qwen 3.6 Spotted in Official App Alongside 3.5 Max Preview
A Reddit user spotted Qwen 3.6 inside the official Qwen app, suggesting an imminent public release beyond API access.
Chinese AI Labs Delay Open-Source Releases: What Solo Builders Should Do Now
Qwen, GLM, and MiniMax are all stalling open-weight releases. Here's how solopreneurs should hedge their model stack.
Hermes Agent: Best Open-Source Local LLM Agent Framework in 2025
Nous Research's Hermes Agent offers per-model tool-call parsers, Ollama/vLLM support, and an MIT license, with 22k GitHub stars.