fosterfriends's comments

++ Kind of my whole thesis with Graphite. As more code gets AI-generated, the weight shifts to review, testing, and integration. Even as someone helping build AI code reviewers, we'll _need_ humans stamping forever - for many reasons, but fundamentally for accountability. A computer can never be held accountable.

https://constelisvoss.com/pages/a-computer-can-never-be-held...


> A computer can never be held accountable

I think the issue is not about humans being entirely replaced. Instead, the issue is that if AI replaces enough knowledge workers while there's no new or expanded market to absorb the workforce, the new balance of supply and demand will mean that many of us will see suppressed pay or, worse, lose our jobs forever.


That is true regardless of whether there is or isn't a "new or expanded market to absorb the workforce".

It's a crucial insight that's usually missed or elided in discussions about automation and the workforce - unless you're literally at the beginning of your career, losing your career to automation screws you over big time, forever. At best, you'll have to downsize your entire lifestyle, and that of your family, to be commensurate with your now entry-level pay. If you're halfway through a career that suddenly ended, you won't recover.

All the new jobs and markets are for the kids. Mind you, not your kids - your kids are going to be disadvantaged by their household being suddenly thrown into financial insecurity or downright poverty, and may not even get a chance to start a good career path with their peers.

That, not "anti technology sentiment", is why Luddites smashed the looms. Those were people who got rug-pulled by business decisions and thrown into poverty, along with their families and communities.


> A computer can never be held accountable

I feel like I've been thinking along similar lines recently (I'm due to re-read this, though!), but instead of "computer" I'm replacing it with "AI" or "agents" these days. The same point holds true.


I'll second it - consistency is better than perfection


This is so cool. Alas, there's probably no way to get one at home


Amazon has handheld versions for the low, low price of.....$3,752.


++ love that folks are trying to build companies on MCP. Good luck!


Thanks! :)


Love it - I use a mix of Claude Code and Cursor agentic mode the most locally from this list.

I'll (biasedly) throw in "Diamond" - https://diamond.graphite.dev/, and in general, AI code review tools as a whole category :)


Love this take


Fascinating - are you imagining a sort of adversarial AI situation, where one LLM bot writes the PRs, and a different one reviews them, leading to an organically improving codebase? Kind of a cool idea.


Pull requests are too far downstream - a legacy artifact, I believe.

An AGI system that self-improves its code will regenerate every component impacted by the enhancement starting from live system design narratives, useful existing components, relevant design patterns, and intermediate development artifacts that are discarded or become stale in human-driven legacy coding.

I see "agents" as mostly bodies of assembled prompts for LLMs of various strengths used at the appropriate time in the pipeline of code development. A code review agent's prompt would not have the task of generating code and thus not need all that particular context, but would look for historically observed 'gotchas' and flag those for automatic repair, and the repair could go all the way back through the artifact chain to the text requirements and specifications.



I really like the general take that LLMs scanning PRs is simply "zero config CI." We already have a great paradigm for this; we don't need to reinvent a new category. In that light, we can weigh its value more as a fuzzy linter, rather than a be-all-end-all.
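
For instance, a minimal sketch of the "fuzzy linter" framing - the model name and SDK wiring here are assumptions, not how any particular tool does it - is just an advisory step run over the PR diff:

    # Minimal "fuzzy linter" sketch: ask an LLM for advisory findings on the PR diff.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; model is a placeholder.
    import subprocess
    from openai import OpenAI

    def fuzzy_lint(base_ref: str = "origin/main") -> str:
        # Collect the diff against the base branch.
        diff = subprocess.run(
            ["git", "diff", base_ref],
            capture_output=True, text=True, check=True,
        ).stdout
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{
                "role": "user",
                "content": "Act as a lenient linter. List likely issues in this diff, "
                           "one per line, or reply 'no findings':\n" + diff,
            }],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(fuzzy_lint())  # always exits 0, so the check stays advisory rather than blocking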


I used the first version of Phind for some time and loved it. As Perplexity and ChatGPT got better, I started shifting more of my traffic back to them. Excited to see y’all still in the race and giving competitors a run for their money. I appreciate your focus on developers as an audience; it might give you an edge over tools serving a broader base.


I'd agree with this. I tried Phind just now and found it still behind Perplexity for the product search use cases I tried it out for. Glad there's competition in the space though.


We have a products UI on the way :)


Pretty cool, visual learners are gonna be thrilled. Also sorta ties into r/FUI quite neatly too.


Has anyone tried Function? I'm curious how legit it is.


My parents got it for me. There's a lot of dark-pattern upselling on the website that you can't correct there, but it's relatively painless to correct on the phone. All the lab work seems to be done via CLIA labs in the standard way: they grab as many vials of blood as you'd expect, and the numbers for one test were close to the ones from a test my doctor ran. Lots of hogwash interpretation on top of that, though.

So: they're predatory but play by the rules.


I have wondered about it as well. I worry that I'll get a few results that are slightly outside of normal range and then sit around wondering if I'm dying, which seems like more stress than it's worth.


I have, feel it's reasonably priced, and I've been pleased with what I've gotten for the money. I wanted it for exactly the reason of "don't wait until after you've got a serious problem".

