Hacker News

Does ChatGPT “have it”? I thought it made stuff up and that the magic it does is being very good at making the stuff up.

We’ve had it come up with solutions using entirely made up functions. They had names that sounded like something Microsoft would’ve put into .Net, but they were entirely made up. As in, they had never existed in any version of .Net ever.

So as much as I like it, I’m treating it with more caution than Google results. Honestly though, most of the time it’s frankly faster to just read the damn manual and figure things out for yourself. I don’t say that as some sort of anti-prompt-programming purist, but wading through GPT responses is about as hopeless as wading through the gazillion Medium, dev.to, Stack Overflow and whatever-else posts people dump their useless stuff on. Ten years ago, if I needed to do some obscure SharePoint programming (waaaaaay out of my field), I could realistically make something work with the help of Google; today the same thing is frankly completely impossible.




It's actually quite interesting just how good it is at making things up. When I first started using it I didn't realise it even could, so when it gave me a non-existent GDScript function, I started to prod it to see where the function came from. It was able to explain the function, tell me when it was added (of course Godot is open source, so in theory it would have access to this), and even give the commit hash used to add it, and all of it sounded very plausible. It was only when I pointed out that the function doesn't exist that it admitted it.

Admittedly that was on GPT-3; I haven't tried GPT-4 as I can't afford it at the moment. No doubt it's better, but I'm not sure by how much.


It's always making things up. It's fundamentally a coincidence when it gets it right.

That's the nature of associative statistics -- and why this talk of "hallucination" is more marketing PR than accurate description.

We hallucinate in the sense that our reconstructions of our environment can be caused by internal states (e.g., dreaming) -- whereas veridical perceptual states are usually caused directly by the environment.

Here, its states (i.e., the statistical averaging process over its training data) are NEVER caused veridically -- i.e., its prompts are never caused by the environment.


GPT doesn't give correct answers. It gives answers that sound correct.

Those correct-sounding answers are often actually correct, but this is more coincidence than design. Anything it says is suspect, and should be fact-checked.


The more ChatGPT is just remembering its content, the better it is. In this case it had clearly memorised just that API (an obscure educational API) -- with weird parameter key dictionaries etc. that aren't in any way some Intelligent Generalisation (oh wow! everyone fund this!!11!)

Insofar as it isn't just a regurgitation of "the better ebooks, blogs and docs", the worse it is.

This is why, when prompting ChatGPT, I'm more often aiming to have it use examples (etc.) that have a high likelihood of appearing verbatim in its training set.

Consider, e.g., the prompt "write tic-tac-toe in javascript using html and canvas" versus "write duck hunt in javascript using html and canvas".

The latter is extremely hard to get out of it, even with many prompts -- the former is immediate and perfect.

Why? Because there are many examples of tic-tac-toe.


>I thought it made stuff up and that the magic it does is being very good at making the stuff up.

I don't get this problem. Google search doesn't give you answers either; it's not reliable, and you still have to double-check the answers yourself.

Remember that? Any information found on the internet used to be assumed phony too.


> Does ChatGPT “have it”? I thought it made stuff up and that the magic it does is being very good at making the stuff up

ChatGPT is just a frontend UI and refers to two different models (and different tunings of those models over time, I guess).

GPT-3.5 is kind of useful, but it makes stuff up just often enough that you can't really trust it, and you spend so much time verifying that it's hard to say whether you saved time vs the old way. But it still produces mostly not-made-up stuff.

GPT-4, while still not perfect, is a game changer though. It's what people generally mean when they talk about ChatGPT. There's far, far less hallucination going on (not zero though). There are several ways to access it: Phind, Bing, ChatGPT. But I still think ChatGPT is the best.



