This is just wrong. Ollama has moved off of llama.cpp and is working with hardware partners to support GGML. https://ollama.com/blog/multimodal-models



We keep it for backwards compatibility; all of the newer models are implemented inside Ollama directly.


Can you substantiate this more? llama.cpp also relies on ggml.
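
For context: ggml is the low-level tensor library, and both llama.cpp and Ollama's newer engine build model graphs on top of it. Below is a minimal sketch of what "relying on ggml" means at that layer, assuming the C API from the ggml repo (function names have shifted across versions, so treat this as illustrative rather than exact):

    #include <stdio.h>
    #include "ggml.h"

    int main(void) {
        // Allocate a small working context that owns all tensor memory.
        struct ggml_init_params params = {
            .mem_size   = 16 * 1024 * 1024,  // 16 MB scratch arena
            .mem_buffer = NULL,
            .no_alloc   = false,
        };
        struct ggml_context *ctx = ggml_init(params);

        // Two 1-D float tensors of length 4.
        struct ggml_tensor *a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        struct ggml_tensor *b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        for (int i = 0; i < 4; i++) {
            ((float *)a->data)[i] = (float)i;
            ((float *)b->data)[i] = 1.0f;
        }

        // Define c = a + b as a compute graph, then evaluate it on the CPU.
        struct ggml_tensor *c  = ggml_add(ctx, a, b);
        struct ggml_cgraph *gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, c);
        ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

        for (int i = 0; i < 4; i++) {
            printf("%f\n", ((float *)c->data)[i]);  // prints 1, 2, 3, 4
        }

        ggml_free(ctx);
        return 0;
    }

Everything above this layer (tokenization, the model architectures themselves, KV-cache management, the server) is what llama.cpp and Ollama's new engine each implement for themselves, which is where the two projects actually diverge.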



