Yup. I'm more of a skeptic than pro-AI these days, but nonetheless I'm still trying to use AI in my workflows.
I don't actually enjoy it; I generally find it difficult to use, as I have more trouble explaining what I want than just doing it myself. However, it seems clear that this is not going away and is, to some degree, "the future". I suspect it's better to learn the new tools of my craft than to be caught unaware.
With that said, I still think we're in the infancy of actual tooling around this stuff. I'm always interested to see novel UXs on this front.
Probably unrelated to the broader discussion, but I don't think the "skeptic vs pro-AI" distinction even makes that much sense.
For example, I usually come off as relatively skeptical within the HN crowd, but I'm actually pushing for more usage at work. This kind of "opinion arbitrage" is common with new technologies.
One recent post I read about improving the discourse (I seem to have lost the link...) agrees, but in a different way: by adding a "capable vs not" axis. That is, "I believe AI is good enough to replace humans, and I am pro" is different from "I believe AI is good enough to replace humans, and I am against", and while "I believe AI is not good enough to replace humans, and I am pro" is a weird position to take, "I believe AI is not good enough to replace humans, and I am against" is a perfectly coherent one.
These things are also not binary; the two axes form a full continuous grid.
> "I believe AI is not good enough to replace humans, and I am pro" is a weird position to take
I think that's just the opinion of someone who doesn't think AI currently lives up to the hype but is optimistic about developing it further, not really that weird of a position in my opinion.
Personally I'm moving more into the "I think AI is good enough to replace humans, and I am against" category.
Yeah, I meant like, at the full extreme of "and it never will". Someone with the position you describe wouldn't be at the far end, but somewhere closer to the middle.
> "I believe AI is not good enough to replace humans, and I am pro" is a weird position to take
Huh? The recipe for how to be in this position is literally in the readme of the linked project. You don't even have to believe it; you just have to work it.
To that I can only respond with never say never. Not this year? Yes. Not next year? Sign me up. Not in the next 10 years? Let’s say I’m looking at my hardware career options after 20 years in software.
> but I don't think the "skeptic vs pro-AI" distinction even makes that much sense.
Imo it does, because it frames the underlying assumptions around your comment. Ie there are some very pro-AI folks who think it's not just going to replace everything, but already is. That's an extreme example, of course.
I view it as valuable anytime there's extreme hype, party lines, etc. If you don't frame it yourself, others will and can misunderstand your comment when viewed through the wrong lens.
Not a big deal of course, but neither is putting a qualifier on a comment.
> but I don't think the "skeptic vs pro-AI" distinction even makes that much sense
Tends to be like that with subjects once feelings get involved. Make any skepticism public, even if you don't feel strongly either way, and you get extremists from one side yelling at you about X. At the same time, say anything positive and you get the zealots from the other side yelling at you about Y.
Those of us who tend not to be so extreme get pushback from both sides, either in the same conversations or in different places, with both seeing us as belonging to "the other side" while in reality we're just trying to take a somewhat balanced approach.
These "us vs them" framings never made sense to me, for (almost) any topic. Truth usually sits somewhere around the middle, and a balanced approach usually seems to result in more benefits overall, at least for me personally.
> I don't actually enjoy it, i generally find it difficult to use as i have more trouble explaining what i want than actually just doing it.
This is the problem I run into quite frequently. I have more trouble trying to explain computing or architectural concepts in natural language to the AI than I do just coding the damn thing in the first place. There are many reasons we don't program in natural language, and this is one of them.
I've never found natural language tools easier to use, in any iteration of them, and so I get no joy out of prompting AI. Outside of the increasingly excellent autocomplete, I find it actually slows me down to try and prompt "correctly."
> I don't actually enjoy it, i generally find it difficult to use as i have more trouble explaining what i want than actually just doing it.
Most people who are good at a skill start here with AI. Over time, they learn how to explain things better to it, and the output improves significantly as a result. Additionally, the AI models themselves keep improving over time.
If you stay with it, you will reach an inflection point soon enough and will actually start enjoying it.
Last night I suddenly got noticeably better performance in Claude Code. Like it one-shotted something I'd never seen before, taking multiple steps over 10 minutes. I wonder if I was on a test branch? It seems to be continuing this morning with good performance, solving an issue Gemini was struggling with.
You can disable the automatic commits, but you cannot disable the automatic modification of files. One nice thing about Claude Code is that you can give it feedback on a patch before it is even applied.
This looks awesome! Wonder if something like this could be turned into a generalized optimization engine of sorts? Ie if the problem could be generalized for a set of available movement commands relative to used commands, you could apply it to any underlying platform.
Which is to say, I'd love to see this in Helix. I also toy with custom editors, and observability of available commands is a high priority for me; a generalized solution here would be an elegant solve for that. It would also adapt to new features nicely.
Has any interface implemented a... history-cleaning mechanism? Ie with every chat message, focus on cleaning up dead ends in the conversation or irrelevant details. Like summarization, but organic to the topic at hand?
Most history would remain; it wouldn't try to summarize exactly, just prune and organize the history relative to the conversation path?
I've had success having a conversation about requirements, asking the model to summarize those requirements as a spec to feed to an implementation model, then passing that spec into a fresh context. I haven't seen any UI to do this automatically, but it's fairly trivial/natural to perform with existing tools.
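Roughly, the flow looks like this. This is just a sketch assuming the OpenAI Python client; the model name and the requirements text are placeholders, and any chat API that takes a messages list works the same way:

    # Sketch of "summarize requirements, then restart with a fresh context".
    # Assumes the OpenAI Python client; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o"  # placeholder

    # 1) The requirements conversation so far (placeholder content).
    requirements_chat = [
        {"role": "user", "content": "I need a CLI tool that syncs two folders..."},
        {"role": "assistant", "content": "Should it handle deletions? ..."},
        # ...more back and forth...
    ]

    # 2) Ask the model to distill the conversation into a spec.
    spec = client.chat.completions.create(
        model=MODEL,
        messages=requirements_chat + [{
            "role": "user",
            "content": "Summarize the agreed requirements as a concise spec "
                       "for an implementation model. No commentary.",
        }],
    ).choices[0].message.content

    # 3) Start a fresh context containing only the spec, dropping the dead ends.
    implementation = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Implement the following spec."},
            {"role": "user", "content": spec},
        ],
    ).choices[0].message.content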
Doing the same. Though I wish there were some kind of optimization of text generated by an LLM for consumption by another LLM. Just mentioning it's for an LLM instead of human consumption yields no observably different results.
"Every problem in computer science can be solved with another level of indirection."
One could argue that the attention mechanism in transformers is already designed to do that.
But you need to train it more specifically with that in mind if you want it to be better at damping attention to parts that are deemed irrelevant by the subsequent evolution of the conversation.
And that requires the black art of ML training.
Whereas building this as a hack on top of the chat product feels more like engineering, and we're more familiar with that as a field.
The problem is that it needs to read the log in order to prune the log, so if there is garbage in the log, which needs to be pruned to keep it from poisoning the main chat, then that garbage will also poison the pruning model, and it will do a bad job of pruning.
I mean, you could build this, but it would just be a feature on top of a product abstraction of a "conversation".
Each time you press enter, you are spinning up a new instance of the LLM and passing in the entire previous chat text plus your new message, asking it to predict the next tokens. It does this iteratively until the model produces a <stop> token, and then it returns the text to you and the PRODUCT parses it back into separate chat messages and displays them in your UI.
What you are asking the PRODUCT to now do is edit your and its chat messages in the history of the chat, and then send that as the new history along with your latest message. This is the only way to clean the context, because the context is nothing more than your messages and its previous responses, plus anything that tools have pulled in. I think it would be a sort of weird feature to add to a chat bot: each time you send a new message, it would go back through the entire history of your chat and start editing the messages to prune out details. You would scroll up and see a different conversation, which would be confusing.
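To make that concrete, here's a rough sketch of what such a feature would be doing under the hood. `call_model` is a hypothetical stand-in for whatever chat-completion call the product makes; nothing here is a real product API:

    # Hypothetical sketch of a "history cleaning" feature on top of a chat product.
    # call_model(messages) stands in for any chat-completion API: it takes the
    # full list of {"role", "content"} messages and returns the assistant's reply.

    def prune_history(history, call_model):
        # Ask the model itself to rewrite the history, dropping dead ends and
        # details no longer relevant. The pruned text becomes the new context.
        prompt = (
            "Rewrite this conversation, removing dead ends and details that are "
            "no longer relevant to where the conversation ended up. Keep the rest."
        )
        pruned = call_model(history + [{"role": "user", "content": prompt}])
        return [{"role": "system", "content": pruned}]

    def send(history, user_message, call_model, clean=False):
        # There is no hidden state: every turn re-sends the entire history.
        if clean:
            history = prune_history(history, call_model)  # rewrites what you'd scroll back to
        history = history + [{"role": "user", "content": user_message}]
        reply = call_model(history)  # model generates until its stop token
        return history + [{"role": "assistant", "content": reply}]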
IMO, this is just part of prompt-engineering skill: keeping your context clean, or knowing how to "clean" it by branching/summarizing conversations.
Not a history-cleaning mechanism, but related: Cursor in its most recent release introduced a feature to duplicate your chat (so you can safeguard yourself against poisoning and go back to an unpoisoned point in history), which seems like an admission of the same problem.