local-llm
2 articles tagged with this topic
vLLM · Gemma4
Run Gemma 4 26B Locally with vLLM and NVFP4 Quantization
A working bash script runs Gemma 4 26B via vLLM with NVFP4 quantization in Docker on consumer hardware.
Apr 6 · 2 min read
local-llm · ollama
LLM Test Prompts That Reveal Real Model Quality for Builders
Community-sourced prompts expose reasoning gaps in local LLMs, helping solo builders pick reliable models for production workflows.
Apr 6 · 2 min read