
I don't know the specifics of this case, but maybe the investigators just asked in case there was an accidental trigger, or a real trigger etc. Seems reasonable for the detective to attempt to turn over any stone they can to aid the investigation.


Also, for a lot of people working on hardware, the alternatives aren't great. Big Tech players like Apple, Meta, Amazon, etc. all have downsides. Startups are extremely risky and don't pay employees as well (ex: Humane, Rabbit, Peloton, etc.)

It's a slightly better story for those working on software (e.g. the Google Photos app or backend). They have more options, but relatively good jobs (high pay, flexibility, great coworkers, non-crazy hours, etc.) are still hard to come by. They exist, but I'm not sure about the quantity.


+1, grey-on-grey can be hard for older folks too


Sure, and someone else can say otherwise. Comparing anecdotes doesn't provide a global view, IMO, and can lead to incorrect conclusions.

Maybe better to look at data instead, e.g. Netflix ad-supported plans vs ad-free plans, or YouTube Premium vs YT ad-supported, etc.


Interesting. On lmsys, Gemini is #1 for coding tasks. How does that compare?

https://lmarena.ai/?leaderboard


For the lmarena leaderboard to be really useful you need to click the "Style Control" button so that it normalizes for LLMs that generate longer answers: humans may find those more stylistically pleasing and upvote them, but the answers often end up being worse. When you do that, o1 comes out on top, followed by o1-preview, then Sonnet 3.5, with Gemini Preview 1206 in fourth place.
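(As I understand it — the exact lmarena methodology may differ — "style control" amounts to fitting the pairwise votes with extra style covariates such as answer length, so a model's verbosity stops inflating its rating. A toy sketch of the idea, with made-up data and sklearn standing in for the real thing:)

  # Toy illustration only, not lmarena's actual code: control for answer
  # length in pairwise votes so verbosity doesn't masquerade as skill.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n = 5000
  true_skill = 0.2          # model A is only slightly better than model B
  verbosity_bias = 0.8      # but raters also reward longer answers
  len_diff = 0.5 + rng.normal(0, 1, n)   # A's answers tend to be longer (len_A - len_B)
  p_a_wins = 1 / (1 + np.exp(-(true_skill + verbosity_bias * len_diff)))
  y = rng.binomial(1, p_a_wins)          # 1 = model A won the vote

  # Naive "leaderboard" estimate: log-odds of the raw win rate, length ignored.
  naive = np.log(y.mean() / (1 - y.mean()))

  # Style-controlled estimate: regress the votes on the length difference;
  # the intercept is the skill gap with the verbosity effect partialled out.
  fit = LogisticRegression(C=1e6).fit(len_diff.reshape(-1, 1), y)

  print(f"naive skill gap:      {naive:.2f}")             # inflated by verbosity
  print(f"controlled skill gap: {fit.intercept_[0]:.2f}")  # close to 0.2
  print(f"length coefficient:   {fit.coef_[0, 0]:.2f}")    # close to 0.8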


lmsys is a poor judge of coding quality since it is based on ratings from a single generation rather than agentic coding over multiple steps.


"ChatGPT" Coding... is it impartial? the name sorta sounds biased.


ChatGPT was the first to come along, so the subreddit was given a perhaps short-sighted name. It's now about coding with LLMs in general.


> Also, I assume financial applications such as hedge funds would be buying these things in bulk now.

Please elaborate... why?


I'm assuming hedge funds are using LLMs to dissect information from company news and SEC reports as soon as possible and then make a decision on trading. Having faster inference would be a huge advantage.
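(Purely a sketch of what I imagine such a pipeline looks like — the filing URL, the prompt, and the call_llm helper are hypothetical stand-ins. The point is just that the wall-clock budget from "filing hits the wire" to "order sent" is dominated by inference latency:)

  # Hypothetical sketch: turn a freshly published filing into a trade signal.
  import time
  import urllib.request

  def call_llm(prompt: str) -> str:
      """Placeholder for whatever low-latency inference endpoint is in use."""
      return "HOLD"  # stub answer so the sketch runs end to end

  def filing_to_signal(filing_url: str) -> str:
      t0 = time.monotonic()
      with urllib.request.urlopen(filing_url, timeout=5) as resp:
          text = resp.read().decode("utf-8", errors="replace")[:20_000]  # truncate to fit context

      prompt = (
          "You are reading a just-released SEC filing. Answer with exactly one "
          "word, BUY, SELL, or HOLD, for the filer's stock.\n\n" + text
      )
      signal = call_llm(prompt).strip().upper()

      # Every millisecond here is either network fetch or inference;
      # shaving the inference side is where the edge would come from.
      print(f"latency: {time.monotonic() - t0:.2f}s -> {signal}")
      return signal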


I think there should be a distinction here. E.g., say you work on a browser, possibly implementing parts of image loading or the JavaScript parser.

Are you considered a dogfooder if you use the browser? Or do you need to write lots of JavaScript yourself, etc., to be considered "a user of your product"?

Typically, these are two different sets of people.

So, I don't buy the "always, always" part.


I suppose this problem is timeless. Back when I was active in the PHP community, it was a long-running joke that people who "graduated" to committing to the actual PHP source (in C) were not doing web development work anymore. And I suppose it was actually true for the majority. On the other hand, designing language features wasn't really related to using the language for web work.


The future can be 6 days from now or 6 centuries from now. This statement is useless without specific details.


But by providing such details the statement goes from unknowable to unknown and potentially verifiable at some point.

Avoiding falsifiable statements is a skill set that might be worth having in your communications toolkit.

(I remember reading that some philosophy school had {true, false, unknown, unknowable} but, alas, cannot find any reference to that just now.)


LOL. So you want everyone to become skillful in using weasel words? Spoken like a true weasel.


Huh. I forgot the /s.


This sounds plausible, but would love a source


I should probably just write it up into a post, but the git mailing list at the time is the source (I remember reading it from the side a few months after convincing our VP R&D to switch from svn to git). We were chuckling around the same time that FB had to reallocate the stack on Galaxy S2 phones because they were somehow unaware of proguard or unable to have it work properly with their codegen.

Anyways:

1. GitHub benchmark: https://github.blog/engineering/infrastructure/improve-git-m...

2. The original email thread: https://public-inbox.org/git/CB04005C.2C669%25joshua.redston...

3. There's another email thread that gets linked everywhere - but in light of the prior thread, the numbers don't track: https://public-inbox.org/git/CB5074CF.3AD7A%25joshua.redston...

I recall there being a message from someone at either Airbnb or Uber who mentioned that they have a similar monorepo but without the slow git status, but I can't seem to find it now - it's likely on one of the other mailing list archives but didn't make it to this one.

The point being that painting this as "the community was hostile" or "git is too slow for FB" is just disingenuous. The FB engineer barely communicated with the git team (at least publicly), and when there was communication, it was pushing a single benchmark that was deeply flawed, ignoring feedback on how to improve the performance of slow blame and commit by repacking the checkpoint packfiles (a one-off effort), and also ignoring feedback that the benchmark numbers didn't make sense in absolute terms.
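(For flavor, the kind of one-off experiment being suggested looks roughly like the sketch below. The repo path, the blamed file, and the specific repack flags are my own choices, not something from the thread; `-a -d -f` is just the generic "rewrite everything into one fresh pack" invocation.)

  # Rough sketch: time git status/blame on the big repo, do a one-off
  # repack, and time them again to see how much the many checkpoint
  # packfiles were costing.
  import subprocess
  import time

  def timed(cmd: list[str], repo: str) -> float:
      t0 = time.monotonic()
      subprocess.run(cmd, cwd=repo, check=True, capture_output=True)
      return time.monotonic() - t0

  repo = "/path/to/monorepo"  # placeholder path

  for label in ("before", "after"):
      print(label, "status:", round(timed(["git", "status"], repo), 2), "s")
      print(label, "blame: ", round(timed(["git", "blame", "README"], repo), 2), "s")
      if label == "before":
          # One-off repack: consolidate packfiles so object lookups stop
          # walking dozens of checkpoint packs.
          subprocess.run(["git", "repack", "-a", "-d", "-f"], cwd=repo, check=True)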



