
No one should use Ollama. A cursory search of r/LocalLLaMA turns up plenty of occasions where they've proven themselves bad actors. Here's a 'fun' overview:

https://www.reddit.com/r/LocalLLaMA/comments/1kg20mu/so_why_...

There are multiple (far better) options - e.g. LM Studio if you want a GUI, llama.cpp if you want the CLI that Ollama ripped off. IMO the only reason Ollama is even in the conversation is that it was easy to get running on macOS, allowing the SV MBP set to feel included.
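For anyone curious how small the switch actually is, here's a minimal sketch of the llama.cpp CLI path, assuming a plain CMake build and a GGUF model you've already downloaded (the model filename and paths are placeholders):

    # build llama.cpp
    cmake -B build && cmake --build build --config Release

    # one-off generation from the terminal
    ./build/bin/llama-cli -m ./models/model.gguf -p "Why is the sky blue?" -n 128

    # or run a local OpenAI-compatible server instead
    ./build/bin/llama-server -m ./models/model.gguf --port 8080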



/r/LocalLLaMA is a very circle-jerky subreddit. There's a very heavy "I am new to GitHub and have a lot to say"[0] energy. This is really unfortunate, because there are also a lot of people doing tons of good work there and posting both cool links and their own projects. The "just give me an EXE" types will brigade causes they do not understand, white-knight projects, and attack others with no informed reasoning. They're not really a good barometer for the quality of any project, on the whole.

[0] https://github.com/sherlock-project/sherlock/issues/2011


This is just wrong. Ollama has moved off of llama.cpp and is working with hardware partners to support GGML. https://ollama.com/blog/multimodal-models



we keep it for backwards compatibility - all the newer models are implemented inside Ollama directly


Can you substantiate this more? llama.cpp is also relying on GGML.

