I don't think LLMs are very good at Rust, are they? I tried multiple times with Sonnet 3.5 last year to produce a web interface with Tauri, and it kept getting stuck in infinite loops with async functions. On the other hand, setting up shaders in Rust worked out of the box. But I couldn't ask for much before the AI started looping over bad code again and again.

Do this in C++ and I'm pretty sure you won't have any issues.
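For context, a Tauri async command of the sort described above looks roughly like this. This is a minimal sketch assuming Tauri 1.x with reqwest as the async HTTP dependency; the command name and the standard Tauri project scaffold (tauri.conf.json, etc.) are assumptions, not something from the thread:

    // Cargo.toml (assumed): tauri = "1", reqwest = "0.11"

    #[tauri::command]
    async fn fetch_page(url: String) -> Result<String, String> {
        // Forward any error as a String so the frontend gets a plain message.
        let resp = reqwest::get(&url).await.map_err(|e| e.to_string())?;
        resp.text().await.map_err(|e| e.to_string())
    }

    fn main() {
        tauri::Builder::default()
            .invoke_handler(tauri::generate_handler![fetch_page])
            .run(tauri::generate_context!())
            .expect("error while running tauri application");
    }

On the JS side this would be called with invoke("fetch_page", { url }) from @tauri-apps/api.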

I've had excellent experience with several models writing Rust. Wonder if there's just a particular issue with Tauri? I'm primarily writing code on top of the Candle ML framework.
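For anyone who hasn't used Candle: it's Hugging Face's Rust-native ML framework with a PyTorch-like tensor API. A minimal sketch of what code on top of candle-core looks like (the shapes and the little matmul here are purely illustrative):

    // Cargo.toml (assumed): candle-core = "0.x"
    use candle_core::{DType, Device, Tensor};

    fn main() -> candle_core::Result<()> {
        let device = Device::Cpu;
        // 2x3 matrix of ones times a 3x1 column of ones -> 2x1 of threes.
        let a = Tensor::ones((2, 3), DType::F32, &device)?;
        let b = Tensor::ones((3, 1), DType::F32, &device)?;
        let c = a.matmul(&b)?;
        println!("{:?}", c.to_vec2::<f32>()?);
        Ok(())
    }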

Any tricks you've learned for getting good output from LLMs with Rust in particular, as opposed to using them to generate Python code?

Which models are the best for Rust? How are you finding Gemini 2.5 Pro?
