
Yup. I'm more of a skeptic than pro-AI these days, but nonetheless I'm still trying to use AI in my workflows.

I don't actually enjoy it; I generally find it difficult to use, as I have more trouble explaining what I want than just doing it myself. However, it seems clear that this is not going away and that, to some degree, it's "the future". I suspect it's better to learn the new tools of my craft than to be caught unaware.

With that said, I still think we're in the infancy of actual tooling around this stuff. I'm always interested to see novel UXs on this front.


Probably unrelated to the broader discussion, but I don't think the "skeptic vs pro-AI" distinction even makes that much sense.

For example, I usually come off as relatively skeptical within the HN crowd, but I'm actually pushing for more usage at work. This kind of "opinion arbitrage" is common with new technologies.


One recent post I read about improving the discourse (which I seem to have lost the link...) agrees, but in a different way: it adds a "capable vs. not" axis. That is, "I believe AI is good enough to replace humans, and I am pro" is different from "I believe AI is good enough to replace humans, and I am against"; and while "I believe AI is not good enough to replace humans, and I am pro" is a weird position to take, "I believe AI is not good enough to replace humans, and I am against" is perfectly coherent.

These things are also not binary; they form a full two-dimensional grid of positions.


> "I believe AI is not good enough to replace humans, and I am pro" is a weird position to take

I think that's just the opinion of someone who doesn't believe AI currently lives up to the hype but is optimistic about developing it further. Not really that weird of a position, in my opinion.

Personally I'm moving more into the "I think AI is good enough to replace humans, and I am against" category.


Yeah, I meant like, at the full extreme of "and it never will". Someone with the position you describe wouldn't be at the far end, but somewhere closer to the middle.

> "I believe AI is not good enough to replace humans, and I am pro" is a weird position to take

Huh? The recipe for how to be in this position is literally in the readme of the linked project. You don't even have to believe it, you just have to work it.


I mean at the most extreme: that it can NEVER do so. Someone who holds this position would point to commits like https://news.ycombinator.com/item?id=44159659

To that I can only respond with never say never. Not this year? Yes. Not next year? Sign me up. Not in the next 10 years? Let’s say I’m looking at my hardware career options after 20 years in software.

I believe compilers are not good enough to replace humans, and I am pro

> but I don't think the "skeptic vs pro-AI" distinction even makes that much sense.

Imo it does, because it frames the underlying assumptions around your comment. I.e. there are some very pro-AI folks who think it's not just going to replace everything, but already is. That's an extreme example, of course.

I view it as valuable anytime there's extreme hype, party lines, etc. If you don't frame your position yourself, others will, and they can misunderstand your comment when viewing it through the wrong lens.

Not a big deal of course, but neither is putting a qualifier on a comment.


Someone mentioned in another thread that there is an increasing divide between "discourse skeptics" and fundamental skeptics, who are more like Luddites.

> but I don't think the "skeptic vs pro-AI" distinction even makes that much sense

It tends to be like that with subjects once feelings get involved. Make any skepticism public, even if you don't feel strongly either way, and you get extremists from one side yelling at you about X. At the same time, say anything positive and you get the zealots from the other side yelling at you about Y.

Those of us who tend to be less extreme get pushback from both sides, either in the same conversations or in different places, while each side sees you as belonging to "the other side", when in reality you're just trying to take a somewhat balanced approach.

These "us vs them" never made sense to me, for (almost) any topic. Truth usually sits somewhere around the middle, and a balanced approach seems to usually result in more benefits overall, at least personally for me.


> I don't actually enjoy it, i generally find it difficult to use as i have more trouble explaining what i want than actually just doing it.

This is the problem I run into quite frequently. I have more trouble trying to explain computing or architectural concepts to the AI in natural language than I do just coding the damn thing in the first place. There are many reasons we don't program in natural language, and this is one of them.

I've never found natural language tools easier to use, in any iteration of them, and so I get no joy out of prompting AI. Outside of the increasingly excellent autocomplete, I find it actually slows me down to try and prompt "correctly."


> I don't actually enjoy it, i generally find it difficult to use as i have more trouble explaining what i want than actually just doing it.

Most people who are good at a skill start here with AI. Over time, they learn how to explain things better to the AI, and the output improves significantly as a result. Additionally, the AI models themselves keep improving over time.

If you stay with it, you will reach an inflection point soon enough and will actually start enjoying it.


Is that not a valid justification?

Anyone know if this is usable with Claude Code? If so, how? I've not seen the ability to configure the backend for Claude Code, hmm

Last night I suddenly got noticeably better performance in Claude Code. It one-shotted something I'd never seen it handle before, taking multiple steps over 10 minutes. I wonder if I was on a test branch? It seems to be continuing this morning with good performance, solving an issue Gemini was struggling with.

Just saw this pop up in the Claude CLI v1.0.0 changelog:

What's new:

• Added `DISABLE_INTERLEAVED_THINKING` to give users the option to opt out of interleaved thinking.

• Improved model references to show provider-specific names (Sonnet 3.7 for Bedrock, Sonnet 4 for Console)

• Updated documentation links and OAuth process descriptions

• Claude Code is now generally available

• Introducing Sonnet 4 and Opus 4 models
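
If you want to try the opt-out, presumably it's just an environment variable the CLI reads at launch. Here's a hypothetical way to set it from Python; the value "1" and the exact semantics are my assumption, not something I've verified:

    import os
    import subprocess

    # Assumption: setting this variable before launch opts the session out
    # of interleaved thinking; "1" as the value is a guess.
    env = dict(os.environ, DISABLE_INTERLEAVED_THINKING="1")
    subprocess.run(["claude"], env=env)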


Yes, you can type /model in Claude Code to switch models; currently it offers Opus 4 and Sonnet 4 for me.

The new page for Claude Code says it uses Opus 4, Sonnet 4, and Haiku 3.5.

> so expect the level of service you get for that amount to eventually go down

Yeah, that's my big problem with expensive subscriptions. If I buy 2.5 Pro today, who knows what it'll be in a month.


I got the feeling Jules was targeted at web (a la GitHub) PR workflows. Is it not?

The Claude Code UX is nice imo, but I didn't get the impression Jules is that.


At Google, our PR flow and editing are all done in web-based tools. Except for the nerds who like vi.


People don't use local editors? It's weird to lock people into workflows like that.


Damn... you guys don't use proper text editors?


I took their comments as sarcasm, fwiw.


How close can you get Aider to Claude Code? I.e. I liked the Claude Code UX, but I don't use it because I prefer Gemini 2.5 Pro.

I don't really want it committing and such; I mostly like the UX of Claude Code. Thoughts?


You can turn off auto commit (aider's --no-auto-commits flag).


You can disable the automatic commits, but you cannot disable the automatic modification of files. One nice thing about Claude Code is that you can give it feedback on a patch before it is even applied.


That's the whole point of /architect mode, no? You refine the solution in the prompt before aider asks you if you want to apply the changes.


No, I just tried this on the latest version of Aider and it automatically made the change with architect mode enabled.



Then use /ask mode. If that still edits your files, something is broken.


I'm ok with auto commit because I'm ok with just reverting it. You've got to make sure you have a branch, though.


What specific keynote are they referring to? I'm curious, but thus far my searches have failed.


MS Build is today


This looks awesome! I wonder if something like this could be turned into a generalized optimization engine of sorts? I.e. if the problem could be generalized over a set of available movement commands relative to the commands actually used, you could apply it to any underlying platform.

Which is to say, I'd love to see this in Helix. I also toy with custom editors, and observability of available commands is a high priority for me; a generalized solution here would be an elegant fix for that, along the lines of the sketch below. It would also adapt to new features nicely.
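
To make the idea concrete, here's a minimal sketch of that generalization, assuming commands can be modeled as pure state transitions. All names and the toy command table are hypothetical, not from the linked project:

    from collections import deque

    # Model each editor command as a pure function from cursor state to
    # cursor state, then BFS for the shortest command sequence reaching
    # the target. Any concrete platform (Helix, a custom editor) would
    # just supply its own command table.
    def shortest_command_sequence(start, target, commands):
        """commands: dict of command name -> state-transition function."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if state == target:
                return path
            for name, apply_cmd in commands.items():
                nxt = apply_cmd(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
        return None  # target unreachable with this command set

    # Toy example: cursor state is just a line number in a 100-line file.
    commands = {
        "j": lambda line: min(line + 1, 99),   # down one line
        "k": lambda line: max(line - 1, 0),    # up one line
        "}": lambda line: min(line + 10, 99),  # down one "paragraph"
    }
    print(shortest_command_sequence(0, 23, commands))  # 5 keys, e.g. } } j j j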


Has any interface implemented a... history-cleaning mechanism? I.e. with every chat message, focus on cleaning up dead ends in the conversation or irrelevant details. Like summarization, but organic to the topic at hand?

Most of the history would remain; it wouldn't try to summarize exactly, just prune and organize the history relative to the conversation path.


I've had success having a conversation about requirements, asking the model to summarize the requirements as a spec, then passing that spec into a fresh context for implementation. I haven't seen any UI to do this automatically, but it's fairly trivial/natural to perform with existing tools.
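
For what it's worth, here's roughly what that looks like with the OpenAI Python client; the model name and prompts are placeholders, and any chat API with a messages list works the same way:

    from openai import OpenAI

    client = OpenAI()

    # The messy back-and-forth about requirements, accumulated turn by turn.
    requirements_chat = [
        {"role": "user", "content": "I need a CLI tool that dedupes photos..."},
        # ... the rest of the requirements discussion ...
    ]

    # Step 1: distill the conversation into a clean spec.
    spec = client.chat.completions.create(
        model="gpt-4o",
        messages=requirements_chat + [{
            "role": "user",
            "content": "Summarize everything we agreed on as a terse implementation spec.",
        }],
    ).choices[0].message.content

    # Step 2: feed only the spec into a fresh context, leaving the dead
    # ends of the requirements discussion behind.
    implementation = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Implement this spec:\n" + spec}],
    ).choices[0].message.content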


Doing the same. Though I wish there were some kind of optimization of text generated by an LLM for consumption by another LLM. Just mentioning that it's for an LLM instead of for human consumption yields no observably different results.


"Every problem in computer science can be solved with another level of indirection."

One could argue that the attention mechanism in transformers is already designed to do that.

But you need to train it more specifically with that in mind if you want it to be better at damping attention to parts that are deemed irrelevant by the subsequent evolution of the conversation.

And that requires the black art of ML training.

Doing this as a hack on top of the chat product, by contrast, feels more like engineering, and we're more familiar with that as a field.


Not sure if that's what you mean, but Claude Code has a /compact command, which gets triggered automatically when you exceed the context window.

The prompt it uses: https://www.reddit.com/r/ClaudeAI/comments/1jr52qj/here_is_c...


The problem is that it needs to read the log in order to prune it. So if there is garbage in the log (the very garbage that needs pruning to keep it from poisoning the main chat), that garbage will also poison the pruning model, and it will do a bad job of pruning.


I mean, you could build this, but it would just be a feature on top of a product abstraction of a "conversation".

Each time you press enter, you are spinning up a new instance of the LLM and passing in the entire previous chat text plus your new message, and asking it to predict the next tokens. It does this iteratively until the model produces a <stop> token, and then it returns the text to you and the PRODUCT parses it back into separate chat messages and displays it in your UI.

What you are asking the PRODUCT to do is edit your and its chat messages in the history, and then send that edited history along with your latest message. This is the only way to clean the context, because the context is nothing more than your messages and its previous responses, plus anything that tools have pulled in. I think it would be a somewhat weird feature for a chat bot: each time you send a new message, it goes back through the entire history of your chat and starts editing the messages to prune out details. You would scroll up and see a different conversation; it would be confusing.
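
Mechanically, it would be something like this sketch; all names here are made up for illustration, not any real API:

    # The "conversation" is just a list of messages replayed on every turn,
    # so a history-cleaning feature is only a rewrite of that list before
    # each send.

    def llm_complete(messages: list[dict]) -> str:
        """Stand-in for a stateless chat-completion API call."""
        raise NotImplementedError

    def prune(history: list[dict]) -> list[dict]:
        """One possible cleaner: drop turns flagged as dead ends. A fancier
        version would ask a second model to rewrite the history."""
        return [m for m in history if not m.get("dead_end")]

    def send(history: list[dict], user_msg: str) -> tuple[list[dict], str]:
        history = prune(history)                      # edit the past...
        history = history + [{"role": "user", "content": user_msg}]
        reply = llm_complete(history)                 # ...then replay all of it
        return history + [{"role": "assistant", "content": reply}], reply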

IMO, this is just part of prompt-engineering skill: keeping your context clean, or knowing how to "clean" it by branching/summarizing conversations.


Or delete/edit messages in AI Studio or OpenRouter.


Not a history-cleaning mechanism, but related: Cursor in its most recent release introduced a feature to duplicate your chat (so you can safeguard yourself against poisoning and go back to an unpoisoned point in history), which seems like an admission of the same problem.


Isn't this what the Claude Workbench in the Anthropic console does? It lets the user edit both sides of the conversation history.

