I'll add that the title is a bit of bait. I don't use the word "vibe" (in any of its forms) anywhere outside of the title.
I'm not baiting for general engagement; I don't really care about that. I'm baiting for people at either extreme of the "vibe" spectrum, hoping it triggers them to read this, because either way I think it could be good for them.
If you're an extreme pro-vibe person, I wanted to give an example of what I feel is a positive usage of AI. There are a lot of extreme vibe-hype boys who are... sloppy.
And if you're an extreme anti-vibe person, I wanted to give an example that clearly refutes many criticisms. (Not all, of course, e.g. there's no discussion here one way or another about say... resource usage).
People are really bad at evaluating whether AI speeds them up or slows them down.
The main question is: do you enjoy this kind of process of working with AI?
I personally don't, so I don't use it.
It's hard for me to believe any claims about productivity gains.
This is the crux of the discussion. For me, the output is a greater reward than the input; the faster I can reach the output, the better.
And to clarify, I don't mean output as "this feature works, awesome", but "this feature works, it's maintainable, and the code looks as beautiful as I can make it".
I like it when it works but I literally had to take a break yesterday due to the rage I was feeling from Claude repeatedly declaring "I found it!" or "Perfect - found the issue!" before totally breaking the code.
It's not really $5 a month. You can pay for just one month and then cancel. You can download the app and keep using it even after that one month, just without getting updates. What really happens behind the scenes is that you just get removed as a collaborator from the project. So what you have locally (either the whole cloned repo or just the binaries) is all yours.
I am thinking about implementing one-time payments, but really it's a way of trying to sustain all my OSS projects for the long term.
I think that's true with known optical illusions, but there are definitely times when we're fooled by the limitations of our ability to perceive the world, and that leads people to argue for a potentially false reality.
A lot of times people cannot fathom that what they see is not the same thing as what other people see or that what they see isn't actually reality. Anyone remember "The Dress" from 2015? Or just the phenomenon of pareidolia leading people to think there are backwards messages embedded in songs or faces on Mars.
"The Dress" was also what came to mind for the claim being obviously wrong. There are people arguing to this day that it is gold even when confronted with other images revealing the truth.
It has not learned anything. It just looks in its context window for your answer.
In a fresh conversation it will most likely make the same mistake again; there is some randomness, and most LLM-based assistants also stash some context and share it between conversations.
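To make that concrete, here's a minimal sketch (plain Python; the names and message shapes are illustrative, not any vendor's API). The correction exists only inside the message list you send, so a fresh conversation starts without it unless the assistant explicitly stashes it and re-injects it:

    # A chat model only "knows" what is in the messages it is sent.
    conversation_a = [
        {"role": "user", "content": "What year did X ship?"},
        {"role": "assistant", "content": "1998."},         # wrong answer
        {"role": "user", "content": "No, it was 2001."},   # correction lives only here
    ]

    # A fresh conversation starts empty, so the same mistake can happen again:
    conversation_b = [
        {"role": "user", "content": "What year did X ship?"},
    ]

    # Assistants that appear to "remember" typically stash notes out of band
    # and re-inject them into new conversations, e.g.:
    memory = ["User said X shipped in 2001."]
    conversation_b = [
        {"role": "system", "content": "Notes from earlier chats: " + "; ".join(memory)}
    ] + conversation_b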
Hypothetically that might be true. But current systems do not do online learning. Several recent models have cutoff points that are more than six months in the past.
It is unclear to what extent user data is trained on. And it is not clear whether one can achieve meaningful improvements in correctness by training on user data. User data might be inadvertently incorrect, and it may also be adversarial, trying to put bad things in on purpose.
No, 2.5 flash non-thinking was replaced with 2.5 flash lite, and 2.5 flash thinking had its cost rebalanced (input price increased, output price decreased).
2.5 flash non-thinking doesn't exist anymore. People call it a price increase but it's just confusion about what Google did.
Well, it isn’t searching the web: it has a cutoff date, and it takes bits of info from various sources, often distorting them, hence it’s not a search engine.
ChatGPT.com (and other LLM UIs such as Perplexity) now use a tool that searches the web when the model decides it is needed to answer the user's question, and then uses the output of that search to write the answer. This allows it to surface information beyond its training data cutoff date and cite specific sources.
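Roughly, the loop looks like this. This is a minimal sketch using OpenAI-style function calling; the model name and the search_web tool are illustrative stand-ins, not what ChatGPT.com actually runs internally:

    import json
    from openai import OpenAI

    client = OpenAI()

    # Describe a (hypothetical) search tool the model may choose to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web for up-to-date information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    def search_web(query: str) -> str:
        # Placeholder: call whatever search backend you have; return snippets + URLs.
        return "...search results with sources..."

    messages = [{"role": "user", "content": "What changed in last week's release?"}]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = resp.choices[0].message

    if msg.tool_calls:  # the model decided it needs fresh data
        messages.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": search_web(args["query"]),
            })
        # Second pass: the answer is grounded in the search output and can cite it.
        final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        print(final.choices[0].message.content)
    else:
        print(msg.content)  # answered from training data alone

The point is that the freshness comes from the tool call, not from the model having learned anything new; without the search step it still answers from its training data.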