Hacker News | jtwaleson's comments

This is awesome! I'm also working on something like real-time collaboration on codebases, but nothing as freehand as this, really inspiring!

EMCA -> ECMA

True. And that's also a reason why "Javascript" is more human friendly tbh.

One reason it's less friendly is that lots of people think it has something to do with Java.

Europe-Canada-Mexico Agreement?

“Easy Cancellation” My Ass

This is super cool. Love the little icons on the left; it would be nice if they were clickable.

Very cool but I think my wireless bluetooth one is even cooler ;)

https://blog.waleson.com/2024/10/bakelite-to-future-1950s-ro...

It actually supports using the rotary dial to call phone numbers on your smartphone.
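For anyone curious how dial-to-digit conversion generally works: a rotary dial signals a digit as a train of pulses (one pulse per unit, ten pulses for "0"), and digits are separated by a longer silent gap. A minimal sketch of that decoding logic in Python, with hypothetical names and typical (not project-specific) timing values:

```python
# Decode rotary-dial pulse trains into digits.
# Pulse dialing: digit N is sent as N pulses (0 = 10 pulses),
# and a longer silent gap separates one digit from the next.

INTER_DIGIT_GAP = 0.2  # seconds of silence that ends a digit (typical value)

def pulses_to_digit(pulse_count: int) -> str:
    """10 pulses means '0'; 1-9 pulses map to the digit itself."""
    return "0" if pulse_count == 10 else str(pulse_count)

def decode(pulse_timestamps):
    """Group timestamped pulses into digits by looking at the gaps."""
    digits, count, prev = [], 0, None
    for t in pulse_timestamps:
        # A long gap since the previous pulse closes the current digit.
        if prev is not None and t - prev > INTER_DIGIT_GAP and count:
            digits.append(pulses_to_digit(count))
            count = 0
        count += 1
        prev = t
    if count:
        digits.append(pulses_to_digit(count))
    return "".join(digits)
```

For example, two pulses close together followed by a long gap and one more pulse decode to "21". Real hardware would also need debouncing of the dial contacts, which this sketch leaves out.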


I think my full-blown mobile rotary phone is EVEN cooler:

https://www.stavros.io/posts/irotary-saga/

It actually makes calls itself and has a SIM.


ok you win


Someone should make a Bluetooth box that outputs to a standard phone connector so any standard phone would work. Has anyone done that?


Someone does - the Cell2Jack is only $36 USD:

https://www.cell2jack.com/

I use one with a 1970s-vintage rotary desk phone and it works well.


I think I've seen it, I was thinking of making a USB one. I might!


Would love to see an update to 2025



Nix is 11th, Rust is 13th, and C is 9th. Interesting!

I really, really want this updated too and saw it in my bookmarks. Figured the historic data was interesting, and that someone might want to give this another go.


+1. This has historical value, but 11 years is an eon in IT.


Agreed, but: I know a couple of players in the "Enterprise Low-Code" space who have invested heavily in deeply integrated development environments (with a capital I) and the right abstractions. They are all struggling with AI adoption because their systems "don't speak text". LLMs are great at grokking text-based programming but not much else.


To me, enterprise low code feels like the latest iteration of the impetus that birthed COBOL: the idea that we need to build tools for these business people because the high-octane stuff is too confusing for them. But they are going about it the wrong way; we shouldn't kiddie-proof our dev tools to make them understandable to mere mortals, but instead make our dev tools understandable enough that devs don't have to be geniuses to use them. Given the right tools, I've seen middle schoolers code sophisticated distributed algorithms that grad students struggle with, so I'm very skeptical that this dilemma isn't self-imposed.

The thing about LLMs being good only with text is that it's a self-fulfilling prophecy. We started writing text in a buffer because it was all we could do. Then we built tools to make that easier, so all the tooling was text-based. Then we produced a mountain of text-based code. Then we trained the AI on the text, because that's what we had enough of to make it work, so of course that's what it's good at. Generative AI also seems to be good at art, because we have enough of that lying around to train on as well.

This is a repeat of what Seymour Papert realized when computers were introduced to classrooms around the 80s: instead of using the full interactive and multimodal capabilities of computers to teach in dynamic ways, teachers were using them just as "digital chalkboards" to teach the same topics in the same ways they had before. Why? Because that's what all the lessons were optimized for, because chalkboards were the tool that was there, because a desk, a ruler, paper, and pencil were all students had. So the lessons focused on what students could express on paper and what teachers could express on a chalkboard (mostly times tables and 2D geometry).

And that's what I mean by "investment", because it's going to take a lot more than a VC writing a check to explore that design space. You've really gotta uproot the entire tree and plant a new one if you want to see what would have grown if we weren't just limited to text buffers from the start. The best we can get is "enterprise low code" because every effort has to come with an expected ROI in 18 months, so the best story anyone can sell to convince people to open their wallets is "these corpos will probably buy our thing".


As someone who recently started looking into that space, that problem seems to be getting tackled via agents and MCP tooling, meaning Fusion, Workato, Boomi, and similar platforms.


This is really useful. You might want to add a checkbox at a certain threshold, so that reviewers explicitly answer the LLM's concerns. You could also start collecting stats on how "easy to review" team members' PRs are; e.g. they'd probably get a better score if they already address the concerns in the comments.
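The stats idea could start as simply as averaging the tool's review-difficulty score per author. A rough sketch, where the data shape and field names are assumptions, not the tool's actual API:

```python
from collections import defaultdict

def reviewability_by_author(prs):
    """Average 'easy to review' score per PR author.

    `prs` is assumed to be a list of dicts like
    {"author": "alice", "review_score": 0.8}; the field names
    here are hypothetical, not taken from the actual tool.
    """
    totals = defaultdict(lambda: [0.0, 0])  # author -> [score sum, count]
    for pr in prs:
        entry = totals[pr["author"]]
        entry[0] += pr["review_score"]
        entry[1] += 1
    return {author: s / n for author, (s, n) in totals.items()}

prs = [
    {"author": "alice", "review_score": 0.9},
    {"author": "alice", "review_score": 0.7},
    {"author": "bob", "review_score": 0.4},
]
```

With the sample data above, alice averages 0.8 and bob 0.4, which is the kind of per-author trend you could surface on a team dashboard.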


I've seen worse ideas ;)


As far as I know there is currently no international alternative authority for this. So definitely not ideal, but better than not having the warnings.


Yes but that's not a legal argument.

"Your honor, we hurt the plaintiff because it's better than nothing!"


True, and agreed that lawsuits are likely. Disagree that it's short-sighted. The legal system hasn't caught up with internet technology and global platforms. Until it does, I think browsers are right to implement this despite legal issues they might face.


In what country hasn't the legal system caught up?

The point I'm raising is that the internet is international. There are N legal systems that are going to deal with this, and in 99% of them this isn't going to end well for Google if a plaintiff can show damages to a reasonable degree.

It's bonkers in terms of risk management.

If you want to make this a workable system, you have to make it very clear that this isn't necessarily dangerous or criminal at all, and that a third-party list was used, in part, to flag it. And even then you're impeding visitors to a website with warnings without any evidence that something is in fact wrong.

If this happens to a political party hosting blogs, it's hunting season.


I meant that there is no global authority for saying which websites are OK and which ones are not. So it's not really that the legal systems in specific countries haven't caught up.

Lacking a global authority, Google is right to implement a filter themselves. Most people are really, really dumb online, and if the warnings aren't as clearly "DO NOT ENTER" as they are now, I don't think they will work. I agree that from a legal standpoint it's super dangerous: content moderation (which is basically what this is) is an insanely difficult problem for any platform.


The alternative is to not do this.


It's slides all the way down. Once models support this natively, it's a major threat to Slides AI / Gamma and the careers of product managers.

