Because this topic makes people mad and makes politicians who oppose it look bad, which hurts electability. Not to mention it seems like non-elite politicians wouldn’t be affected, so for them this is an easy publicity boost.
> Restricting public servants to only government bond investments would be a great way to discourage anyone with financial sense from running for congress.
Running for congress is already kind of a financially silly thing to do, that ship has sailed. Let’s pull the trigger on this and just see what happens.
> Running for congress is already kind of a financially silly thing to do
Evidently it's not. There's another problem: it seems to me that it's mainly the wealthy elite who get in. How many working-class people are there in major leadership positions?
“You understand how the brain works right? It’s neurons and electrical charges. The brain understands nothing.”
I’m always struck by how confidently people assert stuff like this, as if the fact that we can easily comprehend the low-level structure somehow invalidates the reality of the higher-level structures. As if we know concretely that the human mind is something other than emergent complexity arising from simpler mechanics.
I’m not necessarily saying these machines are “thinking”. I wish I could say for sure that they’re not, but that would be dishonest: I feel like they aren’t thinking, but I have no evidence to back that up, and I haven’t seen non-self-referential evidence from anyone else.
> You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.
The algorithm doesn’t show you “more of the things you engage with”, and acting like it does makes people think what they’re seeing is a reflection of who they are, which is incorrect.
The designers of these algorithms are trying to figure out which “mainstream category” you are. And if you aren’t in one, it’s harder to advertise to you, so they want to sand down your rough edges until you fit into one.
You can spend years posting prolifically about open source software, Blender and VFX on Instagram, and the algorithm will toss you a couple of things, but it won’t really know what to do with you (aside from maybe selling you some stock video packages).
But you make one three-word comment about Brexit and the algorithm goes “GOTCHA! YOU’RE ANTI-BREXIT! WE KNOW WHAT TO DO WITH THAT!” And now you’re opted into 3 big ad categories and getting force-fed ragebait to keep you engaged, since you’re clearly a huge political junkie. Now your feed is trash forever, unless you engage with content from another mainstream category (like Marvel movies or one of the recent TikTok memes).
> The algorithm doesn’t show you “more of the things you engage with”,
That’s literally what the complaint was that I was responding to.
You even immediately contradict yourself and agree that the algorithm shows you what you engage with.
> But you make one three word comment about Brexit and the algorithm goes “GOTCHA!”
> Now your feed is trash forever, unless you engage with content from another mainstream category
This is exactly what I already said: If you want to see some content, engage with it. If you don’t want to see that content, don’t engage with it.
Personally, I regret engaging with this thread. Between the ALL CAPS YELLING and the self-contradictory posts, this is exactly the kind of rage content and ragebait that I make a point to unfollow on social media platforms.
The issue is that it's not symmetric: the algorithm is biased towards rage-baity content, so it will use any tiny level of engagement with something related to that content to push it, but there's not really anything you can do to stop it, or to get it to push less rage-baity content. This is especially bad if you realise you have a problem with getting caught up in such content (for some it's borderline addictive): there are no tools that let someone say 'I realise I respond to every message I see on this topic, but really that's not good for me, please don't show me it in the first place'.
OK sure, if you want to be technically correct, “the algorithm shows you what you engage with” in some sense, but not any useful sense. There’s no proportionality.
As I said above, if you engage heavily with content you like that is outside of the mainstream categories the algorithm has been trained to prefer, it will not show you more of those things.
If you engage one single time, in even the slightest way, with one of those mainstream categories, you will be seeing nothing but that, nonstop, forever.
The “mainstream categories” are not publicly listed anywhere, so it’s not always easy to know that you’ve just stepped in one until it’s too late.
You can’t engage with things you like in proportion to how much you care about them. If something is in a mainstream category and you care about it only a little bit, you have to abstain from interacting with it at all, ever, and never slip up. Having to maintain constant vigilance about this all the time sucks, and that’s what pisses me off.
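The asymmetry being described can be sketched as a toy weighting model. To be clear, this is purely illustrative: the category names, weights, and scoring function are my own invention, not anything from a real platform.

```python
# Toy sketch (my own simplification, not any platform's actual code) of why
# a single engagement with a heavily weighted "mainstream category" can
# swamp years of engagement with a niche topic.

# Hypothetical per-category weights: niche topics get no boost, while
# ad-friendly rage categories are heavily amplified.
CATEGORY_WEIGHT = {
    "open_source": 1.0,   # niche: little ad inventory, so no boost
    "brexit": 50.0,       # mainstream/rage category: heavily boosted
}

def feed_score(engagements):
    """Score each category as count * weight; the feed is then dominated
    by whichever category scores highest."""
    return {
        category: count * CATEGORY_WEIGHT.get(category, 1.0)
        for category, count in engagements.items()
    }

# Years of niche posting (40 engagements) vs. one three-word Brexit comment:
scores = feed_score({"open_source": 40, "brexit": 1})
print(scores)  # brexit scores 50.0, beating open_source at 40.0
```

Under this (made-up) weighting, there is no proportionality: the single Brexit engagement outranks everything else, which matches the behaviour being complained about.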
Like, I get that you were referring to the fact that they keep things scarce even for rich people, but you literally said “everyone”, so I just gotta check: Are you saying that everyday people would be willing and able to spend $15000 on a luxury handbag?
The sale of new Birkin bags is famously invite-only. In that context, to "sell" to "everyone" means making the bag available for sale to everyone. "Anyone" would have been a less ambiguous word choice, but it's a minor grammatical issue and the meaning is still clear.
There was an implied ‘who is on the waiting list for a Birkin bag currently’ in ‘everyone’. They did not mean every single person on Earth, they meant Hermes could sell a Birkin bag to every interested buyer.
I’m not the GP, but the reason I capitalize words instead of italicizing them is because the italics don’t look italic enough to convey emphasis. I get the feeling that that may be because HN wants to downplay emphasis in general, which if true is a bad goal that I oppose.
Also, those guidelines were written in the 2000s in a much different context and haven’t really evolved with the times. They seem out of date today, many of us just don’t consider them that relevant.
> It’s virtually impossible for me to estimate how long it will take to fix a bug, until the job is done.
In my experience there are two types of low-priority bugs (high-priority bugs just have to be fixed immediately no matter how easy or hard they are).
1. The kind where I facepalm and go “yup, I know exactly what that is”, though sometimes it’s too low of a priority to do it right now, and it ends up sitting on the backlog forever. This is the kind of bug the author wants to sweep for, they can often be wiped out in big batches by temporarily making bug-hunting the priority every once in a while.
2. The kind where I go “Hmm, that’s weird, that really shouldn’t happen.” These can be easy and turn into a facepalm after an hour of searching, or they can turn out to be brain-broiling heisenbugs that eat up tons of time, and it’s difficult to figure out which. If you wipe out a ton of category 1 bugs then trying to sift through this category for easy wins can be a good use of time.
And yeah, sometimes a category 1 bug turns out to be category 2, but that’s pretty unusual. This is definitely an area where the perfect is the enemy of the good, and I find this mental model to be pretty good.
Does it actually? One sentence telling the agent to call me “Chris the human serviette” plus the times it calls me that is not going to add that much to the context. What kills the context IME is verbose logs with timestamps.
Sure, but it's an instruction that applies to, and that the model will consider fairly relevant for, every single token. As an extreme example, imagine instructing the LLM to never use the letter E, or to output only in French. This isn't as extreme, but it probably still has an effect.
People are so concerned about preventing a bad result that they sabotage its chances of producing a good one. Better to strive for the best it can give you and throw out the bad results until it does.
I design projections for independent theatre in Baltimore. I use AI in my workflows where it can help me and won’t compromise on the quality of what I’m making. I frequently use AI to upscale crappy footage, to interpolate frames in existing video (for artistic purposes, never with documentary archival stuff) and very occasionally to create wholesale clips in situations where video models can do what I need.
I recently used WAN to generate a looping clip of clouds moving quickly, something that’s difficult to do in CGI and impossible to capture live action. It worked out because I didn’t have specific demands other than what I just said, and I wasn’t asking for anything too obscure.
At this point, I expect the quality of local video models (the only kind I’m willing to work with professionally) to go up, but prompt adherence seems like a tough nut to crack, which makes me think it may be a while before we have prosumer models that can replace what I do in Blender.