on-device-inference
2 articles tagged with this topic
llama.cpp · Android
On-Device AI Model Deployment in Practice, Part 5: Loading LLMs on Android
Step-by-step JNI bridge implementation for running quantized LLMs on Android using llama.cpp.
Apr 14 · 3 min read
CoreML · Apple-Intelligence
Apple's On-Device AI Moat: What It Means for Edge Builders
Apple's privacy-first, on-device AI stack may become the default for builders who need inference without cloud costs.
Apr 13 · 4 min read