Hacker News | nkingsy's comments

So you’re saying I should’ve eaten the cat that lived in my apartment when I was a baby?


It's possible that if you had more cat dander in your mouth when you were a baby that you wouldn't have developed an allergy. Or, more likely, if you had more mouth exposure to other natural microbes that occur outdoors and around animals, then you would have fewer allergies overall. That is well supported by the evidence.


:)))


I briefly owned a gen-1 Leaf. It went from 70 miles of “range” (more like 25) at 100% to 50 in a couple of years of very light driving. Poor guy who bought it from me had all kinds of trouble trying to get it home.


Ebike would be nicer at that point. Battery takes far less time to charge on 110 V. You can simply hot-swap a charged battery. No insurance or registration requirements. Far lower base cost. Far lower maintenance costs. About the same range. Probably similar cargo volume compared to a cargo ebike. Just need a rain coat and rain pants and you are probably set for most conditions. Call an Uber when you actually need to transport 4 people and you'd come out well ahead.

The big secret with bike commuting is that in an urban setting on surface streets, it's actually the fastest way to get around short of a motorcycle or helicopter. Yes, much faster than cars, thanks to lane splitting.


I have an ebike, it's very nice, and I live ~9 miles from my job, so it's well within range.

But I would not want to be caught in the ever-changing weather here on an ebike. Maybe if it had a shell to protect the rider, but then we're getting into prices beyond ebike territory.


Rain pants and rain coat work great for keeping you dry


Don’t they usually get a better stock package than the average new hire?


In my experience what the founders usually get is a bigger locked up retention package. The investors want the cash, and the acquirer wants the founders to stay.


The employees along for the ride on an acquihire? Sometimes yes, sometimes no. Depends a lot on how generous the founder/target of the acquihire is.


It isn’t empowered to do anything you can’t already do in the UI, so it is useless to me.

Perhaps there is a group that isn't served by legacy UI discovery methods and it's great for them, but 100% of chat bots I've interacted with have damaged brand reputation for me.


A chatbot is great in most scenarios for those sorts of queries that are easily answerable, though, since it keeps the phone lines clear.

The trouble is when they gatekeep you from saying "I know what I'm doing, let me talk to someone"


I’ve only taken it once.

I woke up feeling sick, stiff, and lethargic while staying with a friend in NYC in 2008. My friend said “I’ve got just the thing” and gave me one of his Adderall.

20 minutes later I was feeling better than I’d ever felt. We had one of the most exciting, memorable days of my life, just pinging all over the city. That night we went out to a club, where I somehow charmed a girl way out of my league.

We met up the next day and she was very disappointed.

That is to say, it was quite pleasant for me.

I sometimes think I have undiagnosed ADHD (my daughter has it), but this would seem like evidence against it, as it was undeniably stimulating.


An untrained dose of amphetamine will hit you hard even if you have ADHD, especially if it’s higher than the entry level dose, so I would say it gives you zero information about whether you have the condition.

Funny story though. I have a similar story after my friend walked up to me in a club with a line of coke on his hand. Then I proceeded to charm the girl that became my next girlfriend.


I’m not so sure about that. My very first dose of Adderall was anticlimactic. I was bracing for the rush from an energy drink or something, and instead felt… nothing. I was just able to focus on work better that day.

Also, cocaine and amphetamines are very different drugs. They’re both stimulants, but that’s about all they have in common.


> the rush from an energy drink

I must be dead inside.

(I probably need a caffeine tolerance break...)


You see this in ADHD groups where someone will start stimulant medication for the first time and say "This is incredible. I can't believe this is what everyone else feels like all the time"

And the crowd emerges to reinforce that, no, you're euphoric, this isn't normal, after about a week it'll go away and you'll just feel normal but more productive and have better executive function.

And that's on a starter dose, the parent commenter probably took 2-3x that


The previous poster habitually drinks coffee, and thus already has tolerance to the stimulant effects / increased neurotransmission.


I’m not buying that. An enormous percentage of the US population drinks coffee. I’m not an unusual case study here.


Not all of the stimulants work the same way. When (if?) you build up a tolerance to one, you change to a different med. I expect the same applies to the caffeine -> adderall change.


Coffee is not just caffeine. It contains biologically relevant amounts of monoamine oxidase inhibitors.


The example of the cat and detective hat shows that even with the latest update, it isn't "editing" the image. The generated cat is younger, with bigger, brighter eyes, more "perfect" ears.

I found that when editing images of myself, the result looked weird, like a funky version of me. For the cat, it looks "more attractive" I guess, but for humans (and I'd imagine for a cat looking at the edited cat with a keen eye for cat faces), the features often don't work together when changed slightly.


ChatGPT 4o's advanced image generation seems to have a low-resolution autoregressive part that generates tokens directly, and an image upscaling decoding step that turns the (perhaps 100 px wide) token-image into the actual 1024 px wide final result. The former step is able to almost nail things perfectly, but the latter step will always change things slightly. That's why it is so good at, say, generating large text but still struggles with fine text, and will always introduce subtle variations when you ask it to edit an existing image.
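For what it's worth, here's a toy sketch of that hypothesized two-stage pipeline. Everything in it is invented for illustration (grid size, token vocabulary, the nearest-neighbour "decoder"); it's a stand-in for the guessed architecture, not anything resembling OpenAI's actual implementation.

```python
import random

def autoregressive_lowres(prompt: str, side: int = 100) -> list[list[int]]:
    # Stage 1 (hypothesized): emit a small grid of image tokens one at a
    # time, each conditioned on the prompt and the tokens emitted so far.
    # A real model samples from a learned distribution; a seeded RNG
    # stands in for that here.
    rng = random.Random(sum(prompt.encode()))
    return [[rng.randrange(8192) for _ in range(side)] for _ in range(side)]

def upscale_decode(tokens: list[list[int]], out_side: int = 1000) -> list[list[int]]:
    # Stage 2 (hypothesized): decode the coarse token grid into pixels at
    # full resolution. Nearest-neighbour repetition stands in for a learned
    # decoder; a real decoder invents fine detail at this step, which would
    # explain why small text and subtle features drift on every "edit".
    scale = out_side // len(tokens)
    return [[tokens[i // scale][j // scale] for j in range(out_side)]
            for i in range(out_side)]

lowres = autoregressive_lowres("a cat in a detective hat")
image = upscale_decode(lowres)
print(len(lowres), len(image))  # 100 1000
```

The point of the sketch is just the division of labor: stage 1 controls global composition (so large text comes out right), while stage 2 fills in detail it was never told about (so fine text and edited faces wobble).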


Has anyone tried putting in a model that selects the editing region prior to the process? Training data would probably be hard, but maybe existing image recognition tech that draws rectangles would be a start.


Genuine question - how would such a model "edit" the image, besides manipulating the binary? I.e. changing pixel values programmatically


I think the unstated assumption is that there's a block list somewhere being fed into a guardian model's context
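If that guess is right, the shape of it might be something like this sketch. All names and the plain substring match are invented stand-ins; a real system would pass the block list into a moderation model's context rather than string-match.

```python
# Hypothetical block list fed to a "guardian" check before generation.
BLOCK_LIST = ["restricted-topic-a", "restricted-topic-b"]

def guardian_allows(user_request: str) -> bool:
    # Stand-in for a guardian-model call: a real implementation would put
    # BLOCK_LIST in the model's context and ask it to judge the request.
    lowered = user_request.lower()
    return not any(term in lowered for term in BLOCK_LIST)

print(guardian_allows("please draw a cat"))                 # True
print(guardian_allows("tell me about restricted-topic-a"))  # False
```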


The chain is spazzing out in every shot, like it's tightening and loosening over and over again. Mid drives already wear out chains quickly. Seems like this is just asking to snap a chain.


There is lots of human feedback. This isn’t a game with an end state that it can easily play against itself. It needs problems with known solutions, or realistic simulations. This is why people wonder if our own universe is a simulation for training an ASI.


It would be out of date in months.

Things that didn’t work 6 months ago do now. Things that don’t work now, who knows…


There are still some tropes from the GPT-3 days that are fundamental to how LLMs are constructed and that affect how they can be used, and they will not change unless models are no longer trained to optimize for next-token prediction (e.g. hallucinations and the need for prompt engineering).


Do you mean performance that was missing in the past is now routinely achieved?

Or do you actually mean that the same routines and data that didn't work before suddenly work?


The latter.

Each new model opens up new possibilities for my work. In a year it's gone from sort of useful but I'd rather write a script, to "gets me 90% of the way there with zero shots and 95% with few-shot"

