Hacker News

If I used Gemini 2.0 for extraction and chunking to feed into a RAG that I maintain on my local network, then what sort of locally-hosted LLM would I need to gain meaningful insights from my knowledge base? Would a 13B parameter model be sufficient?



Your local model has little more to do but stitch the already meaningful pieces together.

The preparatory step, chunking and semantic understanding, is what really counts.
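To illustrate the point: if the chunks are already semantically coherent, even naive retrieval plus simple prompt assembly is enough, and the local model only has to synthesize. A toy pure-Python sketch, where a bag-of-words similarity stands in for a real embedding model and all names and chunk texts are illustrative, not from any specific library:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would use a
    # sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=2):
    # Rank pre-chunked pieces by similarity to the query.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

def build_prompt(retrieved, query):
    # The local model's job is just to answer from this stitched context.
    context = "\n---\n".join(retrieved)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical chunks, as if produced by an upstream extraction/chunking step.
chunks = [
    "Gemini 2.0 handles extraction and chunking of source documents.",
    "A 13B local model stitches retrieved chunks into an answer.",
    "Chunk quality drives retrieval quality.",
]
top = retrieve(chunks, "what does the local model do with chunks", k=1)
prompt = build_prompt(top, "What does the local model do?")
```

The design point is that retrieval quality here depends entirely on how well the upstream step segmented the documents; the downstream model sees only the stitched context.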


Do you get meaningful insights with current RAG solutions?


Yes. For example, you can build AI agent 'assistants' that leverage a local RAG to help with specialist content creation or operational tasks.



