Hacker News
mchiang | 66 days ago | on: Ollama's new app
This is just wrong. Ollama has moved off of llama.cpp and is working with hardware partners to support GGML.
https://ollama.com/blog/multimodal-models
paulsmal | 65 days ago
is it?
https://github.com/ollama/ollama/blob/main/llm/server.go#L79
mchiang | 64 days ago
We keep it for backwards compatibility; all the newer models are implemented inside Ollama directly.
polotics | 65 days ago
Can you substantiate this more? llama.cpp is also relying on GGML.