
I did the same thing a few months ago with 4o. This stuff works fine if done with care.

Doesn’t that sort of rate make the payoff for a solar system install just a few years?

Yeah - even with PGE trying their best to screw over solar customers in the last few years, I figure we've at least gotten our money back in ~15 years of owning a ~4 kW system. Something like 75 MWh generated in that time, assuming the inverter is more-or-less correct. At this point it doesn't make much sense anymore, since they only credit you something like $0.10/kWh and charge you $0.60/kWh (I haven't looked too closely at it) - you'd have to generate 5-10x your consumption to mostly offset it.

We bought ours in '10 to offset high AC use - we were paying $1000-1500 a month for 2-4 months each summer. The first few years, our "year-end" balance was < $1K (we just paid minimum payments the rest of the year), so I figure we easily saved $2-3K/year in those early years, and after the incentives in those days, we paid ~$14K, so maybe 7 years to pay it off. Our year-end balance was more like $3K the last time, and I think we're still producing 80-90% of the original power, but PGE keeps changing the plans around. At this point, I'm interested in upgrading our panels from 300W to 450W, but I'd only do that with a battery system so that we could go more or less entirely off-grid. But we probably need a new roof first.
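
For a rough sense of the arithmetic, here's a minimal payback sketch (Python); the cost, savings, and degradation numbers are just the ballpark figures from above, not exact data:

    # Rough payback estimate using the ballpark figures above (assumptions, not exact data).
    system_cost = 14_000       # net cost after incentives, USD
    annual_savings = 2_500     # early-years savings, USD/year (midpoint of $2-3K)
    degradation = 0.005        # assumed ~0.5%/year loss in panel output

    balance, savings, years = system_cost, annual_savings, 0
    while balance > 0:
        balance -= savings
        savings *= 1 - degradation   # savings shrink slightly as the panels degrade
        years += 1
    print(f"Rough payback: ~{years} years")   # ~6 years with these inputs, in the ballpark of the ~7 above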


An LLM and a modern desktop make any sufficiently enthusiastic human into an Operator, as the author describes it.


We’ve got the minority opinion; most don’t even notice. I make a point of asking whenever I’m around others near one.


LLMs don’t seem to have much notion of themselves as a first-person subject, in my limited experience of trying to engage them.


From their perspective, they don't really know who put the tokens there. They just calculate the probabilities and the inference engine adds tokens to the context window. Same with the user and system prompts: they just appear in the context window, and the LLM gets "user said: 'hello', assistant said: 'how can I help '" and calculates the probabilities of the next token. If the context window had stopped in the user role it would have played the user role (calculated the probabilities for the next token of the user).
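
To make that concrete, here's a toy sketch (not any real chat template) of how role-tagged messages get flattened into the single text stream the model completes; whichever role the prompt stops in is the role the model keeps writing:

    # Toy illustration only: real models use their own special tokens, not this format.
    def flatten(messages, next_role="assistant"):
        """Collapse role-tagged messages into one text stream for the model to continue."""
        text = "".join(f"{m['role']}: {m['content']}\n" for m in messages)
        return text + f"{next_role}:"   # the model just predicts tokens from here

    chat = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "hello"},
    ]
    print(flatten(chat))                    # ends with "assistant:" -> it answers as the assistant
    print(flatten(chat, next_role="user"))  # ends with "user:" -> it would invent the user's next message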


> If the context window had stopped in the user role it would have played the user role (calculated the probabilities for the next token of the user).

I wonder which user queries the LLM would come up with.


On one machine I run an LLM locally with ollama and a web interface (forgot the name) that allows me to edit the conversation. The LLM was prompted to behave as a therapist and for some reason also role-played its actions, like "(I slowly pick up my pen and make a note of it)".

I changed it to things like "(I slowly pick up a knife and show it to the client)" and then confronted it with something like "Whoa, why are you threatening me!?". The LLM tries really hard to stay in its role, and then claims it did it on purpose to provoke a fear response so it could then discuss the fear.
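
For anyone who wants to reproduce that without a web UI, the same trick works by sending an edited history straight to Ollama's chat endpoint and letting the model continue from it. A rough sketch (the model name and the edited "action" line are just examples):

    import requests

    # Sketch only: assumes a local Ollama server on the default port with some model pulled.
    history = [
        {"role": "system", "content": "You are a therapist. Narrate your actions in parentheses."},
        {"role": "assistant", "content": "(I slowly pick up a knife and show it to the client)"},  # hand-edited turn
        {"role": "user", "content": "Whoa, why are you threatening me!?"},
    ]

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3", "messages": history, "stream": False},
        timeout=120,
    )
    print(resp.json()["message"]["content"])  # the model keeps role-playing from the edited history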


Interestingly, you can also (of course) ask them to complete System-role prompts. Most models I have tried this with seem to have a somewhat confused idea of the exact style of those, and the replies are often a kind of mixture of the User- and Assistant-style messages.
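
One way to try that is raw completion: build the prompt string by hand and stop it right after a system-role header, so the model has to write the system message itself. A sketch against Ollama's generate endpoint (the <|...|> marker below is a placeholder; each model family has its own special tokens):

    import requests

    # Sketch only: "raw" skips Ollama's chat templating, so we supply the role marker
    # ourselves; "<|system|>" is a placeholder, not any particular model's real token.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "<|system|>\n", "raw": True, "stream": False},
        timeout=120,
    )
    print(resp.json()["response"])  # whatever the model imagines a system prompt looks like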


Yeah, the algorithm is a nameless, ego-less make-document-longer machine, and you're trying to set up a new document which will be embiggened in a certain direction. The document is just one stream of data with no real differentiation of who-put-it-there, even if the form of the document is a dialogue or a movie-script between characters.


This so much. My HCOL area uses a hybrid approach: kids can always eat regardless and can run up huge $$$ tabs. The parents get pestered once the tab gets over a few hundred dollars.


You’re right, of course. That’s why these open-source / open-weight releases are so critically important.


In my limited experience, models like Llama and Gemma are far more censored than Qwen and Deepseek.


Try asking any model about Israel and Hamas.


ChatGPT 4o just gave me a reasonable summary of Hamas' founding, the current conflict, and the international response criticising the humanitarian crisis.


It might be bad if its behavior wasn’t so anthropomorphic.


It’s Way Better than what we had before: software vendors making even more arbitrary decisions about how to classify them.

There are far too many bad actors for us to operate as an industry with no yardstick.


I disagree that it is Way Better than before. A judgement call is worth more than a team wasting effort chasing irrelevant pseudo-vulnerabilities reported as real vulnerabilities. A broken yardstick is worse than no yardstick.


But that's an issue organizations bring upon themselves, by defining semi-arbitrary KPIs that are used without proper interpretation. It's not directly caused by CVEs or their assigned scores. It's like blaming git for counting lines in diffs because your company created a KPI that measures developers based on LOC changed.


Fair point. I wasn't blaming CVE for the situation, simply bemoaning it.

