
> The "hallucination" factor means every result an AI tells you about big data is suspect.

AI / ML means more than just LLM chat output, even if that's where the hype cycle of the last couple of years has been. ML can be used to build a perfectly serviceable classifier, predictor, or outlier detector.
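To make the point concrete, here's a minimal sketch of a non-LLM "outlier detector" using nothing but the standard library: a simple z-score rule that flags readings far from the mean. The threshold of 2.0 and the sample data are illustrative choices, not anything from a particular library.

```python
import statistics

def zscore_outliers(values, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [x for x in values if abs(x - mean) / stdev > threshold]

# Six sensor readings near 10.0 and one wild value.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0]
print(zscore_outliers(data))  # -> [42.0]
```

No neural network required; this kind of boring, fully explainable method is often "good enough" for anomaly detection, and a trained ML model is just a more flexible version of the same idea.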

It does suffer from the lack of explainability that has always plagued AI / ML, especially with deeper neural networks, where you rely more and more heavily on their ability to approximate arbitrary functions as you add layers.

> you're using this heavy, low-performance general-purpose tool to solve a problem which can be solved much more performatively by using tools which have been designed from the beginning to handle data management and analysis

You're not wrong here, but one challenge is that sometimes even your domain experts don't know how to solve the problem, and applying traditional statistical methods without understanding the space is a great way to identify spurious correlations. (To be fair, this applies in equal measure to ML methods.)
