
I got a sleep study at the age of 24 because my then-girlfriend (now wife) said "I got used to the snoring, but that thing where you stop breathing really freaks me out". The what now? In hindsight, I'd had signs of sleep apnea since I was in my teens despite being a healthy weight.

I was diagnosed with 'mild to moderate' sleep apnea and told to not drink, and to exercise more. No CPAP needed. And they were right- if I do those things, I don't have many problems. But then again, I can't always do those things.

What really helped though, and I'm loath to even say it, was taking high-absorption magnesium. I hate it because there are very limited studies that say it should work, and it's mostly promoted by absolute quacks. And yet, when I take it, I sleep really well without snoring or apnea issues, and if I stop, I sleep terribly.


If it works, then why are they "absolute quacks"? There's a ton of misinformation from the "legitimate" medical community. The most blatant and recent example was the FUD around Ivermectin.


> loath to even say it, was taking high-absorption magnesium

The reason to loathe advice that seems to work but isn't backed by scientific study is:

The placebo effect is real and strong.

There's no harm in subscribing to a placebo if there's no downside, but promoting it gets a little... meh. We like to be scientific when possible.


I learned about the median-of-medians quickselect algorithm when I was an undergrad and was really impressed by it. I implemented it, and it was terribly slow. Its runtime grew linearly, but that only really mattered if you had at least a few billion items in your list.
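For the curious, a minimal sketch of the idea in Python (an illustration, not my undergrad implementation): picking the pivot as the median of the medians of groups of five guarantees each recursion discards a constant fraction of the list, which is where both the linear worst case and the painful constant factor come from.

    def select(items, k):
        """Return the k-th smallest element (0-indexed) of an unsorted list."""
        if len(items) <= 5:
            return sorted(items)[k]
        # Median of each group of 5, found by sorting each tiny group.
        groups = [items[i:i + 5] for i in range(0, len(items), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        pivot = select(medians, len(medians) // 2)  # median of medians
        lo = [x for x in items if x < pivot]
        hi = [x for x in items if x > pivot]
        n_eq = len(items) - len(lo) - len(hi)  # elements equal to the pivot
        if k < len(lo):
            return select(lo, k)
        if k < len(lo) + n_eq:
            return pivot
        return select(hi, k - len(lo) - n_eq)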

I was chatting about this with a grad student friend who casually said something like "Sure, it's slow, but what really matters is that it proves that it's possible to do selection of an unsorted list in O(n) time. At one point, we didn't know whether that was even possible. Now that we do, we know there might be an even faster linear algorithm." That really got at the philosophy of what Computer Science is about in the first place.

The lesson was so simple yet so profound that I nearly applied to grad school because of it. I have no idea if they even recall the conversation, but it was a pivotal moment of my education.


Does the fact that any linear-time algorithm exists indicate that a faster linear-time algorithm exists? Otherwise, what is the gain from that bit of knowledge? You could also think: "We already know that some <arbitrary O(...)> algorithm exists, so there might be an even faster <other O(...)> algorithm!" What makes the existence of an O(n) algo give more indication than the existence of an O(n log(n)) algorithm?


I am not the original commenter, but I (and probably many CS students) have had similar moments of clarity. The key part for me isn't

> there might be an even faster linear algorithm,

but

> it's possible to do selection of an unsorted list in O(n) time. At one point, we didn't know whether that was even possible.

For me, the moment of clarity was understanding that theoretical CS mainly cares about problems, not algorithms. Algorithms are tools to prove upper bounds on the complexity of problems. Lower bounds are equally important and cannot be proved by designing algorithms. We even see theorems of the form "there exists an O(whatever) algorithm for <problem>": the algorithm's existence can sometimes be proven non-constructively.

So if the median problem sat for a long time with a linear lower bound and superlinear upper bound, we might start to wonder if the problem has a superlinear lower bound, and spend our effort working on that instead. The existence of a linear-time algorithm immediately closes that path. The only remaining work is to tighten the constant factor. The community's effort can be focused.

A famous example is the linear programming problem. Klee and Minty proved an exponential worst case for the simplex algorithm, but not for linear programming itself. Later, Khachiyan proved that the ellipsoid algorithm was polynomial-time, but it had huge constant factors and was useless in practice. However, a few years later, Karmarkar gave an efficient polynomial-time algorithm. One can imagine how Khachiyan's work, although inefficient, could motivate a more intense focus on polynomial-time LP algorithms leading to Karmarkar's breakthrough.


If you had two problems, and a linear time solution was known to exist for only one of them, I think it would be reasonable to say that it's more likely that a practical linear time solution exists for that one than for the other one.


We studied (I believe) this algorithm in my senior year of Computer Science. We talked about the theory side of it that you mention, but this algorithm was also used to demonstrate that a "slow linear algorithm" is not faster than a "fast n log n algorithm" in most real-life cases.

I think we got a constant factor of 22 for this algorithm, so maybe it was a related one or something.


What I find interesting when comparing the US and Canada on topics like these is that in Canada, there is self-interest in demanding workers be protected, beyond the fact that it's a good thing to do.

Because we have a public health care system, funded by taxes, having a large number of young men out of the work force (not paying taxes) and using the health care system effectively means my taxes, everyone's taxes, are higher.

There are incentives for our government to protect workers from risks that will cost a fortune to fix.

In America, there's only the "because it's the right thing to do" reason, which is never enough for anyone to actually do anything.


Please don't take HN threads on generic nationalistic flamewar tangents. I'm sure you didn't intend it, but that's what this leads to, in the statistical case.

https://news.ycombinator.com/newsguidelines.html

Edit: we've had to ask you not to do this on HN more than once before. Please avoid it in the future.

https://news.ycombinator.com/item?id=34492512 (Jan 2023)

https://news.ycombinator.com/item?id=34073107 (Dec 2022)


Sorry Dang. I hadn't realized it was coming off that way, but I'll do my best to check myself in the future.

I can see how what I said would be construed that way.


Appreciated!


He was drawing a pretty reasonable comparison between economic incentives in the respective countries and how they have downstream effects on regulatory response. Don't make it something it isn't.


You make a good point, enough for me to realize that I misunderstood the comment. Still, the last sentence of the GP comment veered into nationalistic putdown.

Flamebait in a comment has to do with the most inflammatory thing it contains, not the most interesting thing. When a house is on fire, people don't admire the décor.


Canadian data is relatively poor quality (mostly from the 90s) outside of Alberta (I expect QC and BC probably have the highest rates), but historical estimates are that we have a slightly higher incidence than the US.

https://onlinelibrary.wiley.com/doi/full/10.1111/resp.14242

> There are incentives for our government to protect workers from risks that will cost a fortune to fix.

There are many examples where this is inaccurate but let’s keep it simple and delve a little deeper into the silicosis problem presented in this specific study.

From the JAMA article:

Although a substantial number of the patients, including some of those who were uninsured or with restricted-scope Medi-Cal, likely had an undocumented immigration status, we did not directly collect information about whether individuals were undocumented immigrants.

Note that the public health system in Canada is not “free”. Legal immigrants, documented workers, citizens and refugees have access to provincial or federal health insurance, which pays for care.

Undocumented or illegal immigrants have neither (and also would not get WSIB which would be the payer for most silicosis cases) and actually have better coverage in California.

Additionally:

Ten patients (19%) were uninsured, 20 (38%) had restricted-scope Medi-Cal, 7 (13%) had Medi-Cal, 8 (15%) had private insurance, and 7 (13%) had workers’ compensation.

So 34/52 had some form of government provided or mandated insurance.

As an aside while restricted-scope Medi-Cal and uninsured rates are the surrogates for undocumented immigrants in this study, those over the age of 50 (or 19-25) are also eligible for full scope Medi-Cal but were not identified in this study. Medi-Cal will also be expanding in January 2024 to cover undocumented immigrants aged 26-49.

Even if we assume Canada’s silicosis incidence is lower, all of the above strongly suggests your public health system cost-savings incentive hypothesis is incorrect.


> Note that the public health system in Canada is not “free”.

I'm enough of a pedant to annoy the fuck out of most anybody who knows me, but really? Look, there is no "free" health care anywhere, but it's a term that has (perhaps unfortunately) become widely used as a synonym for, depending on your sensibilities, "no charge at the point of service" and/or "socialized health insurance and health care coverage".

And Canada is certainly one or both of those.

The metric "well, they don't provide it for undocumented persons" is a weird one, as is the use of California as a counter-example.


I think you may be “annoying the fuck” out of yourself here. Your reply is full of strawman arguments.

The comment I replied to asserts that the government incentive to reduce healthcare expenditures improves workplace safety, and consequently in the context of this article would have prevented silicosis/PMF in these patients.

I highly doubt most HN commenters are aware of whether undocumented migrants are covered in the Canadian system as they are in California, and certainly the person I replied to was not, so I explained the differences in coverage.

Consequently, the argument doesn’t hold water as the financial incentive for the government is stronger in California than in Canada as it relates to this study population.

> The metric "well, they don't provide it for undocumented persons" is a weird one, as is the use of California as a counter-example.

I'm not providing any counter-examples; undocumented workers in California are the subjects of the article we are commenting on, where in fact there happens to be the socialized healthcare that you seem to think I'm arguing against.


All good, but you still had to get in the jibe about '"free" healthcare' for reasons that have nothing to do with either TFA or the GP's point.


It's not a gibe. I used scare quotes around "free" because it's obvious that a socialized system is funded by taxes; what's not obvious is that a bill is always generated during healthcare delivery in the Canadian system.

You yourself seem to not understand this distinction with your comment: "no charge at the point of service".

There is always a charge at the point of service and a bill is generated. The difference from the US is that the Canadian healthcare system, which also functions in a mostly privatized way, uses a single-payer model, so the government is the only party legally permitted to pay for insured services. In other words, each province runs a large insurance company, and there is a law stating that no one is allowed to charge any person or company other than the government insurance plan for anything the government has deemed reimbursable for any person covered by the plan.

(So you don't misinterpret my statements again: while government-run hospitals, but not the physicians working in them, do get capitation payments, they also bill for some services. What is billed vs. paid through capitation varies by province. Services rendered to uninsured patients are never paid from capitation funds and are always charged directly to the patient.)

If the services rendered are not insured, or you are uninsured like the patients in the article's study, it functions the same as in the US and you will personally receive a bill in the mail with similarly obscene rates, much higher than what the government insurance company would have paid.

This distinction has everything to do with the article and GP's point which asserts that the Canadian government will bear some cost for the care of the patients in the California study which is flatly incorrect. If there was no charge at the point of service none of this would matter.


> There is always a charge at the point of service and a bill is generated.

I was thinking more broadly than just the US or Canada. In Scotland, for example, there is literally no charge at the point of service.


Okay... what does Scotland have to do with anything?

The discussion, and the part of my comment you quoted, is specifically about Canada and the US. So I'm not sure what you're even arguing or why.


> Canadian data is relatively poor quality

Sure, no axe to grind here. Do tell us your impartial take.


Perhaps you are unfamiliar with medical research, but stating that available data is poor quality is an objective assessment. I provided a brief explanation in parentheses, which you excluded for some reason.

I also provided a reference that is open access but here is the relevant section for you:

In Canada, there are no national data on the incidence or prevalence of silicosis. In the province of Alberta, where silicosis is a notifiable disease, health insurance data revealed 861 cases with at least one reported diagnosis of ‘silicosis’ during a period of 10 years from 2000. These results were based on raw data and not a secondary review of primary imaging and clinical information. Data from 2000 through 2009 showed that only 29 workers' compensation claims were accepted for silicosis in Alberta. Data from Quebec's compensation system revealed 351 compensated cases of silicosis between 1988 and 1998. Of note, workers who participated in regular surveillance had milder disease at the time of compensation.

The JAMA study is from 2019-2022. Data that is 20-30 years old is relatively poor quality.

Changes in medicine, workplace safety rules, and occupational trends make it hard to compare silicosis rates in Canada to the US in order to assess the claims of the comment I replied to; therefore I think the relative incidence described in this review article (from 2022) is inaccurate.

If you want to disregard my quality assessment, the discussion ends with the review article showing silicosis rates are 3x higher in Canada.

Can you elaborate on how any of this shows I have an axe to grind or that I’m biased?


Canadian here who believes our labour safety standards are generally better than the USA (based on anecdote and experience, not data).

Canadian data is poor quality. On any issue you might care to pick, the topic is better studied in the United States. I run into this all the time. For example, we make allocation decisions at a charity I volunteer with based on what health problems unemployed LGBT people tend to have. We use data for American urban populations. The data doesn't exist for Canada, AFAIK. It's a smaller country! There's simply less research and statistic-taking done! It's a reasonable statement.

Besides -- commenting on the lack of good data usually implies the exact opposite of what you seem to think -- it is an admission by the poster that their argument is based on weak evidence.


> it is an admission by the poster that their argument is based on weak evidence.

Which is exactly why I limited my reply to a discussion about the California study and healthcare systems rather than reiterating the claims in the 2022 article I referenced which states silicosis incidence is 3x higher in Canada, based on 20-30 year old data.

Although I live in the US now, I'm a dual citizen and have practiced medicine in both countries; the only axe I have to grind with Canada is the harsh winters, which are incompatible with my fragile desert-descent body.


Then again, my Dad, who died of a lung disease, said the government agency (Coast Guard) he worked for turned a blind eye to what he and his co-workers had to do. Dad would tell me that even as late as the 1990s his job was to take a powder, wet it, and form it into big mats. They were filters for the boiler water. The powder used was, or maybe just contained, asbestos. He said the workers on the dock were covered in it; the place looked like it had snowed.

Dad knew, but he was stuck in the "it had to be done" mentality of the past. And really, as a high school dropout, he may not have understood the danger. For years he and my grandfather had a painting business, with the paint at that time containing lead.


Perhaps unexpectedly, people dying is usually _cheaper_ for a healthcare system than people staying healthy. This has been studied a lot with smokers: basically, people in old age cost far more than young people, and thus a true cost-minimizing system would not look how you expect. Of course, we aren't trying to minimize cost, so the premise is flawed.


Smokers specifically are very much a net cost to society, because smoking kills slowly and in a very expensive manner.

In any case, anything that makes people die young, or more generally reduces people’s capacity to work (like many diseases of affluence) is incredibly expensive to society once you factor in indirect and opportunity costs.


Eh, it depends. COPD, or, as the article is about, silicosis, is a long, slow, drawn-out illness.


This is very dependent on the illness.

From a cost perspective it’s best that people die suddenly. If I live a fairly healthy life into my 80s and die of a heart attack, I might not necessarily have cost my insurer that much, as opposed to if I suffer from a chronic illness for 10, 20, 30 years.

Cancer is now usually not a sudden death sentence - treatment is good enough now that most cancers caught early can be treated and patients often go through multiple remissions before it or a complication from treatment finally gets them.

Insurers very much do not want their customers getting cancer, because it is invariably an extraordinarily expensive condition to treat and treatment can go on for years.


> Cancer is now usually not a sudden death sentence - treatment is good enough now that most cancers caught early can be treated and patients often go through multiple remissions before it or a complication from treatment finally gets them.

Small clarification - early detection is most often curative and cheap.

The really expensive part is that patients with several advanced-stage cancers (even stage IV with widely disseminated metastatic disease) now survive for many years on treatments costing low-to-mid six figures per year.

It actually provides a pretty good incentive for insurers to cover screening and early detection beyond what is mandated by law.


> Small clarification - early detection is most often curative and cheap.

> It actually provides a pretty good incentive for insurers to cover screening and early detection beyond what is mandated by law.

The evidence in favor of mass screening programs in the hope of early detection is actually weak to non-existent [1].

> In total, 2 111 958 individuals enrolled in randomized clinical trials comparing screening with no screening using 6 different tests were eligible. Median follow-up was 10 years for computed tomography, prostate-specific antigen testing, and colonoscopy; 13 years for mammography; and 15 years for sigmoidoscopy and FOBT. The only screening test with a significant lifetime gain was sigmoidoscopy (110 days; 95% CI, 0-274 days). There was no significant difference following mammography (0 days: 95% CI, −190 to 237 days), prostate cancer screening (37 days; 95% CI, −37 to 73 days), colonoscopy (37 days; 95% CI, −146 to 146 days), FOBT screening every year or every other year (0 days; 95% CI, −70.7 to 70.7 days), and lung cancer screening (107 days; 95% CI, −286 days to 430 days).

There are large institutions, both nonprofit and commercial, which stand to gain by convincing people that mass screening is useful and important. The available scientific evidence does not support their position.

[1] https://jamanetwork.com/journals/jamainternalmedicine/fullar...


You’re looking at the wrong metric and misinterpreting the stats: not only is overall survival not a good metric for cancer screening, none of the studies are sufficiently powered for OS.

What you want to do is look at stage at presentation, treatment costs by stage, and screening costs. These were done for nearly every recommended screening program.

The available evidence behind currently recommended screening programs unequivocally shows improved cancer-specific survival and earlier stage at diagnosis.


I'm going to strongly push back on both (1) the notion that overall survival is the wrong metric and (2) that I'm misinterpreting something, given that I didn't really offer any interpretation at all. I just cited a paper.

> What you want to do is

No, what I want to do is assess whether broad screening programs actually make people live longer. Overall survival is the correct metric. Evidence in favor of the claim is lacking.

> none of the studies are sufficiently powered for OS.

"Sufficiently powered" is relative to what size of effect you want to detect--which you haven't specified, so I'm not sure how you can make the assertion that none of the studies are sufficiently powered.

> The available evidence behind currently recommended screening programs unequivocally shows improved cancer-specific survival and earlier stage at diagnosis.

These outcomes ignore negative effects of screening on people who don't have cancer, which is why I'm not interested in them. And yes, there are negative effects, and no, they are not negligible.


> No, what I want to do is assess whether broad screening programs actually make people live longer. Overall survival is the correct metric. Evidence in favor of the claim is lacking.

Correct according to whom? If you want to choose only one metric, quality-adjusted life years is likely the best one.

While OS may be your goal that's not the primary endpoint of screening programs.

Some examples of why OS is limited: breast lumpectomy vs. mastectomy plus systemic therapy, or polypectomy vs. neoadjuvant therapy plus colonic resection - the more invasive options are associated with very high morbidity that is very important to patients. The vast majority of patients care about quality of life.

> "Sufficiently powered" is relative to what size of effect you want to detect--which you haven't specified, so I'm not sure how you can make the assertion that none of the studies are sufficiently powered.

We do not expect any one screening program to have a large effect on overall survival, because there are many ways to die; very few studies are powered to detect the small differences expected. The reference below does some modeling and discusses cancer-specific vs all-cause mortality for your perusal.

https://onlinelibrary.wiley.com/doi/full/10.1002/cam4.2476

> These outcomes ignore negative effects of screening on people who don't have cancer, which is why I'm not interested in them.

See morbidity discussion around delayed diagnosis above.

> And yes, there are negative effects, and no, they are not negligible.

As you're choosing to limit the discussion to overall survival, do you have any data to support the claim that screening has more than a negligible negative effect?

There is a better argument to be made for other harms of screening like cost and stress but if we want to discuss these negative effects of screening we also have to step back from overall survival and discuss morbidity benefits.

ETA:

> 2) that I'm misinterpreting something, given that I didn't really offer any interpretation at all.

This is your interpretation, and is an incorrect one:

> The evidence in favor of mass screening programs in the hope of early detection is actually weak to non-existent [1].

The evidence you cite says nothing about early detection and treatment paradigms.


> Correct according to whom? If you want to choose only one metric, quality-adjusted life years is likely the best one

By all means, if you have studies showing that broad screening programs are beneficial in terms of overall (not cancer-case only) QALY then please share them. I'm guessing you don't.

> As you're choosing to limit the discussion to overall survival, do you have any data to support the claim that screening has more than a negligible negative effect?

Do you have any data to support the claim that screening has more than a negligible positive effect on overall survival? (No).

Stop trying to put the burden of proving a negative on me. If you want to advocate for spending tens of billions of dollars annually (not to mention time and stress) on broad screening programs, you bear the burden of demonstrating that's useful.

If we can afford to spend the money on screening everyone certainly we can afford to spend less money to run a large randomized trial screening only some people, but advocates of the screening programs won't stand for it because they are convinced of their own righteousness and refuse to admit uncertainty about whether the screening programs are actually doing more good than harm.


> By all means, if you have studies showing that broad screening programs are beneficial in terms of overall (not cancer-case only) QALY then please share them. I'm guessing you don't.

I gave you one.

> Stop trying to put the burden of proving a negative on me.

You're making the claim there's more than a negligible negative effect not me.

> Do you have any data to support the claim that screening has more than a negligible positive effect on overall survival? (No).

Did I say there is a sizable positive effect on overall survival? I said it's irrelevant.


> I gave you one.

No you didn't, you gave me a simulation study that discussed what kind of sample size might be necessary to find statistically significant effects in all-cause mortality. There's not a single mention of QALY in there. Please stop misrepresenting things.

> You're making the claim there's more than a negligible negative effect not me.

The cost itself is a nonnegligible negative effect.

> Did I say there is a sizable positive effect on overall survival? I said it's irrelevant.

You're wrong.

It's borderline fraud, in my humble opinion, to go around suggesting that massive interventions should be evaluated based on their effects only on the people who benefit most, ignoring the negative effects on the other 98% of the population. Which is exactly what you did in your first reply to me:

> What you want to do is look at stage at presentation, treatment costs by stage, and screening costs. These were done for nearly every recommended screening program.

> The available evidence behind currently recommended screening programs unequivocally shows improved cancer-specific survival and earlier stage at diagnosis.

This approach to evaluating an intervention is intellectually dishonest and emotionally manipulative. Any evaluation that does not take into account the other 98% of the population--through overall survival or QALY or some other metric--is giving an extremely biased picture of what the intervention is actually doing to the population as a whole.


What are the downsides? You're speaking nebulously about negative effects on 98% of the population without mentioning them.

Several screening programs, like breast, have been rigorously evaluated in terms of costs, benefits, and harms. I know very well what the negative effects are; do you? You haven't mentioned anything specific or provided estimates of harms, yet you're the one making the assertion.

> You're wrong.

So if we're all wrong, what's the argument and where's the evidence, without resorting to "no OS benefit" while ignoring that this is, again, not the point of screening?

> It's borderline fraud, in my humble opinion

Your humble opinion disagrees with the entire medical community, including the study you initially cited. So we're all fraudulently screening for what purpose? You know that physicians don't collect billings or work fee-for-service in academic medicine, correct?

> Which is exactly what you did in your first reply to me:

Did I say that was the only reason to screen and ignore the harms? I used that as an example of why overall survival is not useful as an isolated statistic.

You're the one who wanted to limit the discussion to one measure, I was pointing out the flaws.


Do you have a reference for this? This was a weak hunch for me before but I always assumed I was wrong based on e.g. insurance rates. If insurance prices it higher, it must be more expensive to cover?


Last I checked, Canada doesn't share a border with Mexico and some portion of these Latino "day workers" are illegal immigrants. Day workers are often paid under the table and when I have read other stories about them, they tend to include medical horror stories like "So, this guy cut 3 of his fingers off and they didn't even take him to the ER. They just returned him to the place where they had picked him up."

This is not exactly the best use case for arguing about Canada versus US healthcare policies.


Canada doesn't have a border with Mexico, but it does have its share of undocumented and under-protected workers.

> While there are no accurate figures representing the number or composition of undocumented migrant population in Canada, estimates range between 20,000 and 500,000 persons

> Research suggests most undocumented individuals live in large urban centres and typically work in seasonal and informal sectors, such as construction, agriculture, caregiving and housekeeping.

> Undocumented migrants are a vulnerable group due to their lack of immigration status, as was seen during the COVID-19 pandemic. They have limited access to health care, social services or employment protections.

Source: https://www.canada.ca/en/immigration-refugees-citizenship/co...

Until this year, asylum-seekers could transit through the United States into Canada under the Safe Third Country Agreement, by crossing the border at an irregular crossing like Roxham Road.

Sources: https://www.cbc.ca/news/politics/deal-roxham-road-migrants-b...

https://www.cbc.ca/news/canada/canada-asylum-seeker-increase...

https://web.archive.org/web/20230601135133/https://www.nytim...


Canada's temporary foreign worker program means you don't need to illegally hire day workers; you just keep the wages low enough that nobody will take the job and then tell the government you need to bring in foreign workers - not for professional or technical work, not for picking in the fields, but for working at McDonald's and Tim Hortons.

They also allow international students at diploma mills to work 40 hrs a week, above the table.

It's a sham.


A quick search suggests there are 10.5 million undocumented immigrants in the US.

https://www.pewresearch.org/short-reads/2021/04/13/key-facts...


On a per-capita basis that's not hugely different, but one of the reasons may be that Canada provides a somewhat easier path, relative to the United States, to becoming a legal immigrant rather than remaining undocumented.


Really? I’ve had relatives try to immigrate to Canada (from the US), and it was quite a horror story. Any specifics?


Relative to Canada, the United States is even more difficult to legally immigrate to; consider that the US system has been constantly adding ever-increasing hurdles to legal immigration for ages.


A quick search suggests there are 335 million Americans but only 37 million Canadians.

https://www.indexmundi.com/factbook/compare/canada.united-st...


So Canada has 1/10 the population of the US and 1/20 the population of undocumented immigrants.

So seems like Canada has many fewer immigrants of this type than the US.


More policy preaching from ultra-white northern countries that don't let anyone in. It's such a tired trope...


Canada doesn't let anyone in? Over a fifth of their population was born outside of Canada.

Edit: Actually over a quarter.


26.4% of Canada’s population are first generation immigrants (foreign born): https://www12.statcan.gc.ca/census-recensement/2021/as-sa/fo...


Canada will have imported over 1M immigrants in 2023 (for their population of only 40M!). Newsflash: most aren't white.


Canada is pretty heavy on immigration, I hear -- it's how they partially make up for a declining birthrate.


It's not only Canada that uses immigration - legal, or turning a blind eye to illegal - to make up for a falling birthrate.

Look at Germany. Look at France. Look at what's happening to Japan because such an option isn't as viable.

It's simply not politically feasible to say, "Without immigrants, our economy is f'ed."


> It's simply not politically feasible to say, "Without immigrants, our economy is f'ed."

And yet, that's what politicians in Germany are saying: "Denn Deutschland braucht sie dringend: Durch die seit Jahrzehnten sinkende Geburtenrate gibt es auch weniger Arbeitskräfte. Diese Lücke konnte lange über Zuwanderung aus dem EU-Ausland gefüllt werden. Doch inzwischen reicht das nicht mehr aus." (https://www.spdfraktion.de/themen/neustart-migrationspolitik, the social-democrat representatives in Parliament)

"Because Germany needs them urgently: Due to the declining birth rate for decades, there are also fewer workers. For a long time, this gap could be filled by immigration from other EU countries. But this is no longer enough."


America too: the birth rate is 1.6, well below the replacement rate of 2.1.


> its how they partially make up for a declining birthrate.

To the detriment of their origin countries who suffer from losing their best and brightest. How "Northern countries exploit the South again" is a (rather quiet) talking point that I believe will become louder in the future.


>What I find interesting when comparing the US and Canada on topics like these is that in Canada, there is self-interest in demanding workers be protected, beyond the fact that it's a good thing to do.

Is there any evidence of this? That the Canadian Gov cares more about workers than the US Gov?

>Because we have a public health care system, funded by taxes, having a large number of young men out of the work force (not paying taxes) and using the health care system effectively means my taxes, everyone's taxes, are higher.

What evidence do you have that this is the case?

>In America, there's only the "because it's the right thing to do" reason, which is never enough for anyone to actually do anything.

Is this your opinion or is it the reality? I don't know if you have ever walked by a construction site in Toronto to see guys cutting cement or stone. None of them have masks. Sometimes they will have a wet saw when cutting cement on the street, but that is to reduce dust for traffic and pedestrians and not so much for their health. The Canadian Postal Union fought the Federal Gov for years to provide an environment where paper dust was considered a health hazard and workers needed to be protected. Many postal workers suffered from COPD because paper dust was too fine for the lungs to filter. What about farmers and dust? I'm sure they suffer just as much as American farmers.

I've come to realize Canadians suffer from an inferiority complex and have to constantly make comparisons to make themselves feel better; it's a strange phenomenon.

- Expat....


I don’t think cutting in the open air and cutting in tiny sweatshops are at all comparable.


Exposure won’t be anywhere near zero outdoors unless there’s a serious breeze. Over decades that’ll add up.


Then you have never seen someone cut stone in open air. Because it creates such a cloud of stone dust you could hide a house in it.


America offers Medicare, so we still bear the cost - just only if they live long enough to receive the retirement benefits.

Yeah one heck of a perverse incentive.


Medicare for the elderly, but we also have Medicaid for people in poverty (< $15k/yr income for a single person w/o a family), which covers most basic medical/dental/vision needs and is taxpayer funded.

Someone taken out of the workforce may qualify for that if they don’t already qualify for disability insurance or similar payments (although I’m not 100% clear if those are funded via private disability insurance or public programs)


Medicaid is not at all comparable to Medicare.

First, Medicare pays a lot more to healthcare providers than Medicaid. Medicare pays more for more medications than Medicaid, and has fewer prior authorization requirements for those medicines. Fewer providers will accept Medicaid, and people using Medicaid will receive less or worse healthcare than those in Medicare.

Second, Medicaid is administered by each state, and there is a lot of variability on how easy the state makes it so people can actually get healthcare. Lots of states straight up refuse money simply to punish people of a certain socioeconomic class because it happens to win votes.

Bottom line, Medicaid is so leaders can claim they are helping poor people get healthcare AND keep taxes low. Medicare is for actually delivering healthcare to people because that contingent makes up a huge proportion of votes.

And yes, even Medicare is not delivering all healthcare, as it has multiple tiers to deliver differing amounts of healthcare to different socioeconomic classes.


The most obvious point: if they were equivalent programs, we would just include people under the poverty line with Medicare. Instead, we have a completely different program, managed by different people with different goals. Down to its very core, Medicaid is a political football.


Exactly, I guess my comment could have been much shorter.

I love how there is even an Additional Medicare Tax. The political lines in the US are very much old v young, but some of the young, especially politically active ones, vote with the old since they are among the wealthy young, and the rest do not participate enough, or do not have sufficient knowledge about how resources are being meted out and how they will be affected now and in the future.


Not in all states if you have no dependents.


Since the examples are in CA, there is Medi-Cal for those who are too poor to buy subsidized insurance and it’s not age-related unlike Medicare. It’s an expanded form of Medicaid. I’d bet CA taxpayers do shoulder the costs if the victims know to apply.


> Because we have a public health care system, funded by taxes, having a large number of young men out of the work force (not paying taxes) and using the health care system effectively means my taxes, everyone's taxes, are higher.

That is true with or without publicly funded healthcare.


You could argue that more sick people is good for the US economy, and it helps rich people get richer.


Also, broken windows


To the tune of billions of dollars, yes.


> Because we have a public health care system, funded by taxes, having a large number of young men out of the work force (not paying taxes) and using the health care system effectively means my taxes, everyone's taxes, are higher.

The US healthcare system uses private insurance, implying that more use of the healthcare system raises everyone's premiums. And people without insurance then go to emergency rooms which are in turn still passing the cost onto private insurers. So voters already have the same incentive in order to avoid their premiums going up.


That incentive is obfuscated, though. Every insurer exists as an invisible boundary where cost is not passed to others.

On top of that, insurance is optional. There is no guarantee a person will get affordable care. That's the entire point of the system! If there were a guarantee, it would be indistinguishable from Canada (and practically every other country's) single payer healthcare system.


> Every insurer exists as an invisible boundary where cost is not passed to others.

How does that affect issues like this where an increase in overall costs would reasonably be expected to apply to all insurers?

> On top of that, insurance is optional.

More than 90% of the population has health insurance, which is well over the majority required to bring about legislation.

> If there were a guarantee, it would be indistinguishable from Canada (and practically every other country's) single payer healthcare system.

That certainly isn't true. Serious problems with the US healthcare system include AMA lobbying to maintain a doctor shortage, various patent laws and FDA rules that limit competition and increase costs and a malicious lack of cost transparency. None of that would be improved merely by routing the premiums through the government.


> Because we have a public health care system, funded by taxes, having a large number of young men out of the work force (not paying taxes) and using the health care system effectively means my taxes, everyone's taxes, are higher.

The taxes part is the same; only the healthcare half is different.


Incidentally, this is also why everyone working in a nation has to receive these benefits (and any others guaranteed to citizens); otherwise you get migrant workers who suck this up - quite literally, in this case - but don't receive healthcare.


Only illegal migrants aren't covered; legal ones are. If you make it available to everyone regardless of the legality of their being on the soil, you open up a whole bunch of other issues.


Canada largely avoids this problem by not allowing a subset of illegal people to exist in the country. Canadians are polite, but working in Canada without some sort of legal status is far harder than in the US. They have lots of immigrants, but vanishingly few illegal ones in comparison to the US.


Like what?


Like giving people an incentive to come to your country illegally? If there is no difference between coming legally or illegally, the choice is easy.

Like, I could just book a one-way flight to LA and become a US citizen, because reasons?


Like how most of our ancestors got here, you mean?


You can either provide social benefits to your country's poor or you can have open borders. If you try to do both at once, you're providing social benefits to the world's poor, and the amount of benefits you can provide from a given tax base falls through the floor.


Well, not since ~1882.

https://www.uscis.gov/about-us/our-history/overview-of-ins-h...

That said, overstaying tourist visas (aka just hopping on a plane and then not going back) is a very popular form of illegal immigration into the US.


How was the welfare at that point? :-)


Your ancestors genocided the whole continent; if anything, that argument works against what you advocate. We're not in the 1700s: people can travel more easily, information travels instantly, and what worked 300 years ago doesn't necessarily work now.

I'm all for helping people but you cannot import the world's misery and expect it to be smooth, first because it doesn't solve anything, second because it just doesn't work from a simple demographic point of view.


That could also be resolved by having tight border security and heavily penalizing anyone involved with their entry


I have ME/CFS, and the fact that we're expensive to the general public seems to be the only reason there is any public money spent on treatments and cures at all. However, Canada and others are finding that medically assisted suicide offers an even cheaper alternative solution. So instead of money spent on treatments and cures we get subtle and not-so-subtle encouragements to kill ourselves, and I expect the problem to keep getting worse. I.e., I don't think the actual emergent behaviour of shared healthcare costs is as altruistic as you appear to expect.


We already have the government involved - OSHA. Why would having it more involved be better if its current involvement is not solving the problem? It's pretty to think that the government has some sort of self-interest and if it can save money in the long run by spending money in the short run, it will do that. But that's not how things work, is it?

A more plausible conclusion from observing the results of an entity's involvement in something is that if it is incompetent with the thing you gave it to do, don't give it more stuff to do.


> if it is incompetent with the thing you gave it to do, don't give it more stuff to do.

When my code doesn't work, I don't sunset the code, I fix it. Why would the best course of action be to stop trying instead of fixing the root of the problem?


If your code doesn't work, it could be anything from a minor typo to a bad abstraction based on bad assumptions that requires a full refactor to get correctness and / or minimum acceptable performance.

About 90% of that scale requires sunsetting at least some of your code and doing something differently.


When your Maytag dishwasher breaks after a month, do you think the best thing to do is to buy a Maytag washing machine?


I don't say "I'm not using a dishwasher anymore" I figure out how to get the dishwasher that's in my kitchen fixed.


Fine. So advocate for fixing OSHA instead of revamping the entire health care system.


It's almost as if we should do both!


Not agreeing/disagreeing with you, but I wonder how you feel about an obesity tax? A low-physical-fitness tax? There's plenty of evidence that exercise and diet significantly impact health (especially at the population level) - like you said, there's an incentive to keep those people healthy contributors instead of chronic burdens on the system. Sounds like a logical extension, but it doesn't seem popular/implemented widely.


Public transit and walkable streets are also incentivized. People naturally end up walking more outside of US.


If you take this view, any tax on current state is like a "cancer tax" - you're just doubly punishing the person.

Sugar tax, coal tax, corn syrup tax, worked-your-employees-90h/week tax? Sure.


I feel great about slapping a tax on the profits of companies that sell highly processed, sugary, addictive foods so that the market selects for healthier alternatives.

Maybe subsidies for selling fresh fruit and veg also.

I guess you could tax their victims instead though... they don't have a lobby so they're probably easier to take advantage of.


We are Calvinists down here, so bad people were always predestined to be bad and they deserve to be punished. Corporations making money must be good because they are succeeding, so why would we hurt them?

I will never be punished for being over- or underweight since I am good. The universe would have to be broken for me to be taxed.


> I guess you could tax their victims instead though... they don't have a lobby so they're probably easier to take advantage of.

Nobody is the victim of choosing to eat a cake.


We aren't talking about cake though. We are talking about inexpensive highly processed foods made predominantly from ingredients that are subsidized specifically because of their caloric density. I.e. corn.

High caloric density is what you want if you need to be able to feed your country in war, so we subsidize these foods.

No one wants to eat just plain corn though, so companies process it into other foods that are then sold cheaply because they are receiving these large subsidies.

People end up consuming large quantities of these foods because they are cheap, and our brain reward centers are pre-wired to love lots of cheap, easy calories.

Knowing all of this, it makes perfect sense to tax the living crap out of highly processed foods that are made from subsidized ingredients. You're just taking back the subsidy you put there in the first place, and shaping consumer behavior for the greater good (which is a common use case for taxation).


> Knowing all of this, it makes perfect sense to tax the living crap out of highly processed foods that are made from subsidized ingredients.

The consequence of this would be that the subsidized food gets exported to a country that doesn't tax it, at which point you're subsidizing some other country's food.

The US is also a large net exporter of food, implying there is more than enough domestic production for wartime needs. Also, the US hasn't been in that kind of a war in almost a hundred years and MAD makes it unlikely that it ever would be again. The obvious conclusion is to eliminate the subsidies.


>The consequence of this would be that the subsidized food gets exported to a country that doesn't tax it, at which point you're subsidizing some other country's food.

If they're smart they won't take this shit either.

Nice of you to let Americans sacrifice their health on $othercountry's behalf though.


> If they're smart they won't take this shit either.

What do you suppose the chances are of 100% of other countries imposing a similar tax on this type of food?

> Nice of you to let Americans sacrifice their health on $othercountry's behalf though.

Stop subsidizing it and you don't have to tax the subsidy back out.


>What do you suppose the chances are of 100% of other countries imposing a similar tax

Quite high. Agricultural dumping is typically frowned upon even more than regular dumping. Half of the reason for the WTO's existence was to get countries to stop being so trigger happy about doing this.


You think the chances are "quite high" of all other countries doing this? There are several countries that inherently have to import food because they don't have enough arable land to feed their population.


Yeah, and they already buy plenty of shit food from America. They aren't going to buy more Oreos just because America can't sell them to America any more.


But of course they are, because the price of that food would go down. If it's still being produced because it's still being subsidized but Americans stop buying it because it's punitively taxed, where do you think it goes?


Don't disagree, but eliminating the subsidy is much more challenging politically than taxing junk food.


Taxing junk food seems pretty challenging politically, considering that businesses hate it as you're taxing their junk food, conservative voters hate it as government nanny state, liberal academics hate it as a regressive tax and a thing that lessens support for actually removing the subsidies, and anyone paying the tax hates it as a tax they have to pay.

At least for removing the subsidies you only have to fight the businesses.


When you live in poverty and have no prospects for the future. Some mass produced cake from the supermarket might be the only thing keeping you together. As soon as the work of the laptop class is automated by next generation agentic LLM's, you will probably understand.


> As soon as the work of the laptop class is automated by next generation agentic LLM's, you will probably understand

Coming any second now, right behind robotaxis, with the only difference being that robotaxis will probably actually happen within the lifetimes of the current “laptop class”.


Tax on sugar, sure. But having an obesity tax would be like having a silicosis tax.


This is based on the assumption that sugar causes obesity - I don't think there's any strong evidence for that, or that low-carb diets work better for fat loss than low-fat; from what I've seen both have the same effect (calories equated) and both have equally terrible long-term adherence/outcomes. Sugar seems like a nice villain, but it's more likely that you'd have to tax any high-calorie food that tastes good.

And what about physical activity?


If not sugar, then other causes should be taxed if the evidence supports that such taxes would have an impact. I'm not a policy expert myself.

The best public investment to promote physical activity is designing cities to enable "the gym of life"!

https://youtu.be/KPUlgSRn6e0?si=GDmrYq-XQtn9SaKx


Fair enough; I guess my problem would be that since these things would be implemented by politicians, they would end up being driven by fads/popular opinion, and without strong evidence (of which there's very little) you'd essentially be conducting population-level experiments even if you wanted to be scientific (e.g. the food pyramid taken to the extreme).


What about all the other harmful activities? Recent studies about alcohol usage show that even small amounts can be much more harmful than previous estimates. In addition to the health issues, we also need to consider all the accidents and violence connected to drunkenness.


Alcohol is not only heavily taxed in Canada, but most provinces also have a monopoly on selling spirits. That's right, the gov is selling the booze, pocketing both the tax and the profits.


Alcohol is usually taxed, just like tobacco. Sugar and other harmful substances should join them.


The problem is that you generally have to force people to look after their own safety. Otherwise the el cheapo company shitting on any work safety will be cheaper and quicker, because workers don't know any better, and thus more competitive, at worst killing the companies trying to do it properly.


Does Canada have a different policy on engineered stone?


How do you explain the massive push to ban/stop smoking?

The US is pretty much the only country to have successfully reduced it, near as I can tell (perhaps Canada has had success too?).


I'm curious about how you got that impression. In Canada, smoking is down from 26% in 2001[1] to 15% in 2019[2]. (Cannabis consumption is probably trending upward though). I have no reason to believe that this decline is particular to the US and Canada. Japan has been trending down from 33% in 2000 to 20% in 2020[3]. I expect this will have accelerated since the government made a strong anti-smoking push during the Tokyo Olympics. In fact, this seems to be a trend across the entire developed world, see this chart[4] showing that cigarette sales per adult per day peaked by the 1980s in every developed country surveyed, and all have been trending downward for decades. The US shows up as exceptional mainly in how extremely high its cigarette consumption habits were in the '60s and '70s.

[1] https://www150.statcan.gc.ca/n1/pub/82-624-x/2012001/article...

[2] https://www150.statcan.gc.ca/n1/pub/82-625-x/2020001/article...

[3] https://www.macrotrends.net/countries/JPN/japan/smoking-rate...

[4] https://ourworldindata.org/grapher/sales-of-cigarettes-per-a...


Ah, it was just a much larger shift in the US, especially demographically. It went from 44% of adults (stats weren't gathered on younger folks, but anecdotally it was 'cool' and a lot of high-school-age kids smoked) to 13.8% for adults (anecdotally many quite old) and only 8.8% for younger folks.

Traveling outside the US to Europe or Asia (eastern/southern Europe or China in particular) it’s very visible, where in the US outside of a few locations it’s almost invisible now and notably uncommon.

Especially for educated or higher income folks, too.


Canada worked to discourage and regulate smoking more aggressively around 20-30 years ago, and in 20 years we’ve gone from around 1 in 4 people smoking to more like 1 in 10. It steadily trends down.

https://uwaterloo.ca/tobacco-use-canada/adult-tobacco-use/sm...


What's with Americans and their strange view that the US is somehow managing to do something nobody else has? Just as an example I checked my country: https://www.researchgate.net/figure/Prevalence-of-daily-smok...

I'm pretty sure the figures will look somewhat similar in most western countries.


What is it with non-Americans and their strange view that the US as a whole does anything?


The majority of Western countries have implemented and successfully reduced smoking rates.


> My guess is the industry just isn’t educating workers about the risks.

My guess is that the companies that did were undercut by the ones that didn't.


When my wife was super pregnant and overdue, friends were consistently asking for updates. I made a website called "is<wife's name>stillpregnant.com" and put a very, very minimal website there. It was literally just Route 53 and CloudFront in front of an S3 bucket.

But that meant I needed to make updates from the hospital with my phone. I mean yes there were probably better options but I was a bit busy at the time to think of them.
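(For the curious: a rough sketch of what that update path can look like with boto3 - the bucket name and distribution ID below are placeholders, not the real setup.)

    import time
    import boto3

    # Push the edited page to the S3 bucket behind CloudFront.
    boto3.client("s3").put_object(
        Bucket="example-bucket",  # placeholder
        Key="index.html",
        Body=b"<html><body><h1>Nope, still pregnant.</h1></body></html>",
        ContentType="text/html",
    )

    # Invalidate the cached copy so the update shows up right away.
    boto3.client("cloudfront").create_invalidation(
        DistributionId="EXAMPLE123",  # placeholder
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/index.html"]},
            "CallerReference": str(time.time()),  # must be unique per request
        },
    )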

Let me just say that writing raw HTML files using textedit for Android was not a great experience. It's just not the right interface for making complex text.

Maybe LLMs will help with this, allowing us to describe what we want at a higher level, through voice or text. But God help me I do not want to try to write valid HTML on a phone (after being awake for 35 of the previous 36 hours).


This is the kind of stuff you read on 4chan as memes about the orange site. While going through all that, you decided to create a website of all things and update it by hand-editing HTML. Normal people, and I hate saying that, would create a Facebook/WhatsApp group or something like that. I'm not making fun of you or anything, not everyone has the same life; I'm just amazed by the HN crowd.


I mean even if you are going to make a self-hosted site for this, like, there's software you can use where you're not using raw html. I use a static site generator that takes markdown. I don't find markdown too hard to write on mobile, as long as I don't need images.


I was about to comment that using an LLM to update a static site can be seen as overkill, like napalm-bombing mosquitoes,

and here you are posting about people who consider it odd to have your own services to publish basic information...

> amazed by the hn crowd ... [whereas] Normal people...

I have seen members of the "madding crowd" amazed by people reading a book. And not one: many, in different increasingly bewildering capacities. (Including "law enforcement", calling it "suspicious behaviour".)

Be careful of what can be called "normalcy" nowadays.


I'm not disagreeing with you. This was a very bad idea.

But I was a few hours away from becoming a parent, hadn't slept much in days, and was not thinking very well.

Sub-optimal decisions were made.


Congratulations on parenthood. I thought the decision was very much overkill myself, but it's impressive you had the determination and skill to execute it while in the state you describe. I doubt I would be able to.


Not exactly what you described, but I was reminded of this LLM-based prototyping tool:

https://youtu.be/nhTyuuDZe4w?si=5bVPU7g5-WXJitlY


This is so cool


At $110/h, 70 h/week, that's $7,700/week, so $23k for 3 weeks of hell.

That's about 1/3 of the median US household income in 3 weeks.

If I didn't have a family, had the skills, and lived in the area this would be an interesting way to put away some cash.


> Make feature flags short-lived. Do not confuse flags with application configuration.

This is my current battle.

I introduced feature flags to the team as a means to separate deployment from launch of new features. For the sake of getting it working and used, I made the misstep of backing the flags with config files, with the intent of getting LaunchDarkly or Unleash working ASAP to replace them.

Then another dev decided that these Feature Flags look like a great way to implement permanent application configs for different subsets of entities in our system. In fact, he evangelized it in his design for a major new project (I was not invited to the review).

Now I have to stand back and watch as the feature flags are used for long-term configuration. I objected when I saw the misuse (in a code review I said "hey, that's not what these are for") and was overruled by management. This is the design, there's no time to update it, I'm sure we can fix it later, someday.

Lesson learned: make it very hard to misuse meta-features like feature flags, or someone will use them to get their stuff done faster.
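If I were doing it again, I'd bake the guardrail into the flag API itself so that "temporary" is enforced by construction. A minimal sketch of what I mean, in Python; the registry, the function names, and the 90-day cutoff are all hypothetical, not from any real SDK:

    from datetime import date, timedelta

    # Hypothetical in-process registry: every flag must declare an owner
    # and an expiry date at creation time.
    _FLAGS: dict[str, dict] = {}

    def register_flag(name: str, owner: str, expires: date) -> None:
        if expires > date.today() + timedelta(days=90):
            # Anything longer-lived than a quarter is configuration, not a flag.
            raise ValueError(f"{name}: use the config system for long-lived settings")
        _FLAGS[name] = {"owner": owner, "expires": expires}

    def is_enabled(name: str, default: bool = False) -> bool:
        flag = _FLAGS.get(name)
        if flag is None:
            return default
        if flag["expires"] < date.today():
            # Expired flags fall back to the default and complain loudly.
            print(f"WARN: flag {name} expired; ping {flag['owner']} to remove it")
            return default
        return True  # a real system would evaluate targeting rules here

That way the "use a flag as permanent config" path fails at registration time instead of in a code review.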


Sadly, this is a battle you are destined to lose. I have almost completely given up. The best you can aim for is to use feature flags better rather than worse.

    - Some flags are going to stay forever: kill switches, load shedding, etc. (vendors are starting to incorporate this in the UI)
    - Unless you have a very-easy-to-use way to add arbitrary boolean feature toggles to individual user accounts (which can become its own mess), people are going to find it vastly easier to create feature flags with per-user override lists (almost all of them let you override on primary token; see the sketch after this list). They will use your feature flags for:
      - Preview features: "is this user in the preview group?"
      - rollouts that might not ever go 100%: "should this organization use the old login flow?"
      - business-critical attributes that it would be a major incident to revert to defaults: "does this user operate under the alternate tax regime?"
You can try to fight this (indeed, especially for that last one, you most definitely should!), but you will not ever completely win the feature flag ideological purity war!
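For the "preview group" case specifically, a per-user override list is roughly this shape. A toy sketch, with invented names, of why it's so much easier than wiring toggles into user accounts:

    import hashlib

    # Toy evaluator: percentage rollout plus a per-user override list.
    def evaluate(flag: dict, user_id: str) -> bool:
        overrides = flag.get("overrides", {})
        if user_id in overrides:                      # preview users, pinned orgs, etc.
            return overrides[user_id]
        # Deterministic bucket in [0, 100) from a stable hash of the user id.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < flag.get("rollout_percent", 0)

    preview = {"rollout_percent": 0, "overrides": {"alice": True}}
    assert evaluate(preview, "alice") is True   # "is this user in the preview group?"
    assert evaluate(preview, "bob") is False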


Thank you for this great list of the immense business value derived from "misusing" feature flags!


In my org, I think I’ve got the feature flag thing mostly down.

We started with a customer specific configuration system that allows arbitrary values matching a defined schema. It’s very easy to add to the schema (define the config name, types, and permissions to read or write it in a JSON schema document).

We have an administration panel with a full view of the JSON config for our support specialists and an even more detailed one for developers.

Most config values get a user interface as well.

From there we just have a namespace in the configuration for “feature flags”. Sometimes these are very short lived (2-4 sprints until the feature is done), but others can last a lot longer.

There are an unfortunate couple that will probably never go away at this point (because of some enterprise customer with a niche use case in the “legacy” version of the feature that we’ve not yet implemented compatibility with and I don’t know when it will get on our roadmap to do so), but in the end they can just be migrated into normal config values if needed.

A little tooling layer on top lets us query and write to the configs of thousands of sites at once as well.
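As a rough illustration of the shape (the schema and keys here are invented for this example, validated with the third-party jsonschema library, not our actual system):

    from jsonschema import validate  # pip install jsonschema

    # One customer-config schema with a dedicated "featureFlags" namespace.
    CONFIG_SCHEMA = {
        "type": "object",
        "properties": {
            "maxUploadMb": {"type": "integer"},
            "featureFlags": {
                "type": "object",
                "additionalProperties": {"type": "boolean"},
            },
        },
        "additionalProperties": False,
    }

    site_config = {
        "maxUploadMb": 50,
        "featureFlags": {"newCheckoutFlow": True},  # short-lived, in its own namespace
    }
    validate(site_config, CONFIG_SCHEMA)  # raises ValidationError on a bad config

Keeping flags in their own namespace is what makes the later "migrate into normal config values" step mechanical.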


We have an interesting hybrid between the two that I'd like your take on. When we release new versions of our web client static assets, we bump a version number that moves folks over to the new version.

1. We could stick it in a standard conf system and serve it up randomly based on what host a client hits. (Or come up with more sophisticated rollouts)

2. Or we can put it as "perm" conf in the feature flag system and roll it out based on different cohorts/segments.

I'm leaning towards #2, but I'd love to understand why you want to prohibit long-lived keys so I can make a more informed choice. The original blog post's main reasons were that FF systems favor availability over consistency, so they make a poor tool if you need fast-converging global config, which becomes somewhat challenging here during rollbacks but is likely not the end of the world.


1) If you do Slack, then I recommend you join the #openfeature channel on the CNCF Slack. The inviter is here: https://communityinviter.com/apps/cloud-native/cncf

2) The downside of rolling it out based on host is that you could refresh your page, hit a different host, and see the UI bouncing back and forth between versions. As long as you always plan to roll things to 100%, this is the perfect use case for a feature flag.
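To make the bouncing point concrete: bucketing on a stable client identifier keeps a given user on one asset version across refreshes, unlike bucketing on whichever host they happen to hit. A sketch, with made-up identifiers:

    import hashlib

    def assigned_version(client_id: str, new_version_percent: int) -> str:
        # Deterministic: the same client always lands in the same bucket.
        bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
        return "v2" if bucket < new_version_percent else "v1"

    # Refreshing the page (hitting a different host) can't flip the answer:
    assert assigned_version("client-42", 30) == assigned_version("client-42", 30)

Ramp new_version_percent from 0 to 100 and every client converges on the new assets without ever flapping between versions.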


Or... see them for what they are: runtime configuration. The name implies a use case scenario, but in reality it's just a configuration knob. With a good UI, it's a pretty damn convenient way to do runtime configuration.

So of course they'll be used for long-term configuration purposes, especially under pressure and for gradual rollouts of whole systems, not just A/B testing features.


This hits the nail on the head.

The term "feature flag" has come to inherently have a time component because features are supposed to eventually be fully GA'd.

What I've seen in practice is feature flags are never removed so a better way to think about them is as a runtime configuration.


I think the reason feature flags are never removed is because the timeframe that a given feature-flag is top-of-mind is also when it's at its most useful. Later when it's calcified in place and the off-state may be broken/atrophied, no one is really thinking about it.

I'm also not convinced it's always a huge problem. I can imagine sometimes it is, but in most codebases I've worked on, it's more of an annoyance but not cracking the top 3 or 5 biggest problems we wanted to focus on.

IMHO the best solution is not something heavy-handed like a policy that we only use run-time config for fixed timeframes, or a process where we regularly audit and prune old flags. It's simply to keep a record of the config changes over time so anyone interested can see the history, and a culture where every engineer is encouraged to take a little extra time to verify and remove dead stuff whenever it crosses their path.


The mental overhead of reading code like this is massive. Leaving feature flags in with the alternate branch left to rot leads to a codebase that is nearly impossible to understand. No purpose is served by not deleting the now unused branch except you save one developer an afternoon of work. But that time is quickly recouped when the entire team, and especially new hires, only have half as much code to understand.


> What I've seen in practice is feature flags are never removed so a better way to think about them is as a runtime configuration.

SaaS won't sell itself unless it redefines the problem and presents itself as a solution...


There is a need for runtime configurations, yes, but it's important to put them behind an interface intended for that, and not one intended for something else.


I can immediately see if the config is being requested, which system requests it, what the metadata of the request are, etc. I can do conditional rollout of a configuration based on runtime data. I can reset the configuration to a known-good failsafe default without asking for approval with a break-glass button. I can schedule a rollout and get a reviewer for the config change.

IME the feature flag interface is next to perfect for runtime configuration. I don't care for intended usage at all. You could say feature flags have found a great product-market fit, just that a segment of the market is a bit unexpected but makes perfect sense if you think about it.


This gets messy at larger scales, both as teams grow and software grows.

Resetting to a known failsafe works as long as the risk of someone changing a backend service (or multiple services) at the same time is low. Once it isn't, you can most definitely do more damage (and make life harder for oncall).

Who controls the runtime config? One person? Half a dozen? One hundred plus? Is it being gated by approvals, or can anyone do it? What about auditability? If something does go wrong, how easily can I rule out you turning on that flag?

Finally there is simply the sheer permutations you introduce here. A feature flag is binary in many cases: on or off. A config could be in any number of states.

These things make me nervous as an architect, and I've seen well intentioned changes fail when good flag discipline wasn't followed. Using it as fullblown runtime config seems like a postmortem waiting to happen.


I am tempted to agree: if separating the two is key (I’m not convinced that it is, but happy to assume), why not copy the interface and the infrastructure of the feature flag and offer it as a configuration tool?

I feel like you could easily add a status to flags, to mark whether they are part of a release process, or a permanent configuration tool, and in the latter case, take them off the release interfaces.


I think Unleash offers a "toggle type" which can take values that describe whether it's a pure feature flag, something operational, config, etc.


Could you expand on what you think the different interfaces should be? You keep stating that these things ought to be distinct but haven't explained why beyond dogma.


Our FF system uses our config system as its system of record. There's some potential for misuse, and it's difficult to apply deadlines. On the plus side all our settings are captured in version control. Before they were spread out over several systems, one of which had an audit system that was pure tribal knowledge for years.


The main challenge is when things go wrong. Feature flags are designed for high-rate evaluation with low-latency responses. Configuration usually doesn't care that much about latency, as it's usually read once at startup. This context leads to some very specific tradeoffs, such as erring toward availability over consistency, which in the case of configuration management could be a bad choice.


Yeah, and assuming they are done well, they probably have better analytics and insights attached to them than anything else except perhaps your experiments!


Long-lived feature flags are a development process bug; I'm not sure we can solve it with the feature toggle system.

I'm at the point of deciding that Scrum is fundamentally incompatible with feature flags. We demo the code long before the flag has been removed, which leads to perverse incentives. If you want flags to go away in a timely manner you need WIP limits, and columns for those elements of the lifecycle. In short: Kanban doesn't (have to) have this problem.

And even the fixes I can imagine like the above, I'm not entirely sure you can stop your bad actor, because it's going to be months before anyone notices that the flags have long overstayed their welcome.

I'm partial to flags being under version control, where we have an audit trail. However time and again what we really need is a summary of how long each flag has existed, so they can be gotten rid of. The Kanban solution I mention above is only a 90% solution - it's easy to forget you added a flag (or added 3 but deleted 2)


I faced something similar, and I think it's unavoidable. Give people a screwdriver and they'll find a way of using it as a hammer.

The best you can do is expect the feature flagging solution to give some kind of warning for tech debt. Then equip them with alternative tools for configuration management. Rather than forbidding, give them options, but if it's not your scope, I'd let them be (I know as engineers this is hard to do :P).


> Give people a screwdriver and they'll find a way of using it as a hammer.

I feel like feature flags aren't that far off though. They're fantastic for many uses of runtime configuration as mentioned in another comment.

There are multiple people in this thread complaining about "abuse" of feature flags, but no one has been able to voice why it's abuse instead of just use, beyond esoteric dogma.


Allow me to try:

Feature Flags inherently introduce at least one branch into your codebase.

Every branch in your codebase creates a brand new state your code can run through.

The number of branches introduced by Feature Flags likely does not scale linearly, because there is a good chance they will become nested, especially as more are added.

Start with even an example of one feature flag nested inside another. That creates four possible program states. Four is not unreasonable, you can clearly define what state the program should be in for all four states.

Now scale that to a hundred feature flags, some nested, some not.

It becomes impossible to know what any particular program state should be past the most common configurations. If you can't point to a single interface in a program and tell me all of the possible states of it, your program is going to be brittle as hell. It will become a QA nightmare.

This is why Feature Flags should be used for temporary development efforts or A/B testing, and removed.

Otherwise you're going to have a debugging nightmare on your hands eventually.

Edit: Note that this is different from normal runtime configurations because normally runtime configurations don't have a mix of in-dev options and other temporary flags. Also, they aren't usually set up to arbitrarily add new options whenever it is convenient for a developer.
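To put numbers on the growth (a quick Python illustration):

    from itertools import product

    # n independent boolean flags give 2**n reachable configurations.
    two_flags = list(product([False, True], repeat=2))
    print(len(two_flags))   # 4 states: (F,F), (F,T), (T,F), (T,T)
    print(2 ** 10)          # 1,024 states for just ten flags
    print(2 ** 100)         # astronomically many for a hundred

Nesting doesn't increase the count beyond 2**n, but it does make it much harder to reason about which of those states are actually distinct and reachable.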


Sorry, not buying it.

Branches are difficult to reason about? Yes, I agree.

Are branches necessary to make the product behave in a different way in some circumstances? Most of the time.

Do those circumstances require a branch? Unless you’re super confident about some part of code, yes? But why would you be?

Runtime configuration is not about making QA easy. It’s introduced because QA has been hell already so you can control rollout of code which you know wasn’t properly QA’d - or it was but turns out the thing you built isn’t the thing users want and the release cycle is too long to deploy a revert.

I’d say ‘branches are bad but alternatives are worse’.


There is nothing worse than code with dozens to hundreds of possible configuration states that you must test for every new feature.

If your QA was bad before, you've made it worse.

"I can toggle it off without pushing a new release" is a terrible bandaid for the problem.


The fundamental difference between feature flags and config is that the former is meant to be a soft deploy of code where everyone is expected to eventually be on the new code. Thus it should have a built-in timer after which it stops, and you should consider all new customers to be launching with it on.

As for why: if you don't deprecate the feature flag in some time span, you're permanently carrying both code paths, with ongoing associated dev and QA resources and costs against your complexity budget.

Permanent costs should only be undertaken after careful consideration, and should be outside the scope of a single dev deciding to undertake them. Whereas flags should be cheap to add to enable dev to get stuff into prod faster while retaining safety.

Permanently making something a config choice should be done after heavier deliberation because of the aforementioned costs, and you often want different tools to manage it, including something heavier-duty than a single checkbox/button in your internal CS admin tooling. These are often tied into contracts or legal needs, and in many cases Salesforce should be the source of truth for them. Or whatever CPQ system you're using.


I feel like this is a solvable problem: 1) require feature flags to be configured with an expiration date, and if a flag is past its expiration date, auto-generate a task to clean it up; 2) if you want to be extra fancy, set up a codemod to automatically clean up the FF once it's expired.

I don't see the problem with developers using flags for configuration as a stopgap until there's a better solution available.


> automatically clean up the FF once it's expired

Um what? How could that ever work. It's like you are trying to find new exciting ways to break prod.


It can be done by opening a PR. I haven't tried it yet, but I'm curious to try out https://github.com/uber/piranha, or maybe hear some experiences if someone has used it.


Which would be instantly rejected because the flag is still being used.


AFAIK, it'd only open a PR if the flag is fully enabled and has some heuristics to determine when it's safe to remove. Honestly, I haven't tested it but I'm curious to know if someone had either good or bad experiences.

If all the PRs are instantly rejected, that would be a bad sign, but I couldn't find someone who effectively used it. I mean, it's been around for a while but it didn't spread out, so that already gives me some hint


If the cleanup only happens if the flag is not used, then the "expiration date" is basically meaningless. You can either delete it or you can't. Who cares if it's expired or not.


I think 'expires' is just a signal for a flag that should "potentially" be removed. I believe it's a good way to focus on the ones you should pay attention to. But it might be cool if you could say "Yes, I know, please extend this for another period" (or "do not notify me again for another month").


Sounds like "other dev" found some business case they could unblock with the existing system, and you thought the business was better off not solving that, or finding a more expensive solution.

Curious how you plan to justify cost to "fix it" to management. If it ain't broke...


I think it's better to admit they actually are config, just a different kind of config that comes with an expiration date.

Accepting reality in this way means you'll design a config management system that lets you add feature flags with a required expiration date, and then notifies you when they're still in the system after the deadline.
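The notification half can be as simple as a periodic scan of the registry. A sketch; the field names and dates are assumptions for illustration:

    from datetime import date

    flags = [
        {"name": "new-checkout", "expires": date(2023, 1, 15)},
        {"name": "beta-search",  "expires": date(2099, 1, 1)},
    ]

    # Anything past its declared deadline gets flagged for cleanup.
    overdue = [f for f in flags if f["expires"] < date.today()]
    for f in overdue:
        print(f"Flag '{f['name']}' expired {f['expires']} -- file a cleanup task")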


Agreed. My perspective is that there are two kinds of feature flags: temporary and permanent.

Temporary ones can be used to power experiments or just help you get to GA and then can be removed.

Permanent ones can be configs that serve multiple variations (e.g. values for rate limits), but they can also be simple booleans that manage long term entitlements for customers (like pricing tiers, regional product settings, etc.)


We did the same. We were early adopters of unleash and wrangled it to also host long term application configuration and even rule based application config.

The architecture of Unleash made it so simple to do there, vs having to evaluate, configure, and deploy a separate app config solution.


Victim of your own success. As others were saying, when it works for short-lived flags, it's easy/no effort to use it for long-lived configurations.


Thanks for sharing. I have seen systems grow into thousands of flags, where most developers no longer know what a particular flag does.


It’s one of the main reasons to start with something like unleash because they have stale flag warnings built in. Plus, since you already have a UI it’s harder for it to be hijacked.


The solution is not to use feature flags. Or maybe have them expire. Oh, also, discipline the developers who do this.


I saw a cool looking indie game the other day on steam and excitedly bought it.

Then realized it doesn't support Linux. I can't install it. I was shocked.

And then I realized how crazy it is that I had presumed a game would run on Linux. How far we've come.


As the others have said, odds are you only need to flip the switch in steam to ignore OS. I've been overwhelmingly happy with how many games work this way.

To the point that I'm far more likely to find a game with clunky default controls on Steam Deck than I am one that won't play. And learning to use some of the more advanced control features of the system goes a long way to fixing that.


> And learning to use some of the more advanced control features of the system goes a long way to fixing that.

I need to take the time to do that some day. I’ve seen people swear Factorio was playable and fun on the deck, but the controls were just always too much of a struggle to get working in a way that worked for my brain.

I could play, but I just couldn’t get it smooth enough that the controls got out of my way.

I’d love (and hate) to have Factorio playable on the steam deck.


If it's been a while since you last tried it on the Deck, you might want to try again since recently they've added official controller support.

I had managed to get used to the Steam Input mappings so my experience may not reflect too much on you, but the official mappings are also pretty nice. IIRC they also show control hints underneath the map for context-sensitive actions, which help a lot too.


> If it's been a while since you last tried it on the Deck, you might want to try again since recently they've added official controller support.

Thanks, that's deeply concerning to me since I have a baby I need to keep alive. But I'll give it a go sometime.


I think there's a switch to flip in Steam's settings that allows you to use any game with Proton even if untested/unsupported. (Something to that effect, can't look now)


No no don't tell me this. I had things I wanted to get done this weekend!


If you go into a game's properties in Steam (the little gear icon) and then choose "Compatibility" from the resulting window, you can choose a version of Proton to run the game; usually the latest stable is good, but sometimes you have to play around to find a good version.

Once you've chosen a Proton version, the option to "Install" will be available in your game menu.

I gave up on Windows a couple years ago, and most Steam games run in Linux using Proton through Steam. Some modern titles with fancy DRM don't work, but I generally don't buy those anyway (Denuvo, certain anticheats, etc).


It worked, FYI. Thanks!


Glad I could help! Proton is awesome.

For non-Steam games, I do the same thing, either with Steam (by adding a non-steam game installer, and using proton to install it), or by using Lutris (https://lutris.net/). I generally use Lutris with my GoG library.


You can also check compatibility on protondb (https://www.protondb.com/)


If you go into properties, you might be able to install it by choosing a specific version of proton (I think the option is properties> compatibility).


even with proton?


Many years ago when I was a junior dev at Amazon, there was a massive project internally to split up every internal system into regional versions with limited gateways allowing calls between regions. The reason? We had run out of internal IPv4 addresses.

The Principal PM in charge of the "regionalization" effort was asked in a Q&A "why didn't we just switch to IPv6?".

Her answer was something along the lines of "The number of internal networking devices we currently have that cannot support IPv6 is so large that to replace them we would have needed to buy nearly the entire world's yearly output of those devices, and then install them all."[0]

It's easy to presume malicious intent on the IPv4 front from Amazon, but with so many AWS systems being on the scale they are at, I find it easy to believe that replacing all of the old network hardware may just be a project too large to do on a short timescale.

[0] - At least, that's my memory of it. I'm sure that's not an entirely accurate quotation.


Can you remember what year it was?

I’ve got a slight suspicion you were given some bullshit, or at least a creative treatment of the facts, e.g. everything had IPv6 support but FUD-filled network engineers didn’t want to turn it on.

Most network devices I’ve encountered were dual-stack way before anyone I knew seemed to care about actually using IPv6 — I always assumed it was added for US government/military requirements.


From memory, the regionalization project ran from approx 2014 to 2015 or 2016.

There were also other reasons given, like the amount of internal software that used IPv4 addresses directly. Also, AWS likes to have 'lots of small things' instead of one big thing (regions, AZs, cells, two-pizza teams, no (official) monorepo), so regionalization was part of that.

Another big reason for regionalization, other than IPv4 exhaustion, was that AWS promises customers that AWS regions are completely separate; but with one big giant network, it turned out there were all sorts of services making calls between regions that nobody had realized. I have a couple of funny examples, but that might make me too identifiable :)


My favorite region isolation oversight was when someone realized that the perl cron job that iterated over every border router globally and applied ACL updates 2-3x per day didn't pay attention to isolation at all, and could easily have just started blackholing the entire network one device at a time if someone configured a bad rule.

The mitigation was to sort routers by hostname which began with the regional airport codes (iad, pdx, etc.), and pause for 15 minutes each time the first three letters changed to give folks on-call time to react.
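In rough Python (the original was perl, and these hostnames are invented), the mitigation amounted to:

    import time

    def rollout(routers: list[str], apply_acl, pause_s: int = 15 * 60) -> None:
        prev_region = None
        for host in sorted(routers):
            region = host[:3]                  # airport-code prefix: "iad", "pdx", ...
            if prev_region is not None and region != prev_region:
                time.sleep(pause_s)            # give on-call a window to react
            apply_acl(host)
            prev_region = region

    rollout(["iad-br-1", "iad-br-2", "pdx-br-1"], apply_acl=print, pause_s=0)

Crude, but it turned "blackhole the entire network" into "blackhole one region, then get paged".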


Oh wonderful. 15 minutes to get the page, put down my beer, get on my computer, sign in to everything, get 2-factored 3 times AND figure out exactly what’s happening and fix it.


Chop chop!


This really would not have been true for vendor network gear of the sort AWS had been buying for years by 2014. It's possible that their own switches or the weird fabric they have internally wouldn't have worked with v6, or there were Annapurna NIC ASIC issues, but their primary vendors all would have been fine.

I'm not saying there aren't v6 issues (for some vendors, resource exhaustion might have come into play) or bugs, but there's no way it's that massive a problem. There are huge and complex all v6 networks all over the planet that have more stringent requirements (by law) than AWS DCs.


Facebook started its transition to make everything* internally IPv6 slightly before then.

It was indeed a lot of work. But worth it.

* When I was there we still had a handful of weird things that couldn't be made IPv6. If you needed to access such things you could get a dual-stack dev server.


You're talking about snowfort, and while ip exhaustion was one reason, it's also an isolation/fault tolerance/security thing.


Indeed, blast radius is a real concern that a lot of folks who try and imitate aws have to learn about the hard way.


Tell me more about these "pizza teams".


The idea is internal teams should be no bigger than what can be fed by 2 pizzas.


But I don't like working alone :(


slam dunk.


Badum tsshhhh


It’s unfortunate when you have big eaters in your team, but I suppose you can just scale up your pizza.

Pepperoni.16xlarge


oh

so they don't own 2 pizzerias? :(


ssh’ing through bastions was such a pain! We used the JMX GUI to review some AMP details from time to time, and port forwarding through the bastions was frowned upon, but our workflow was broken, what were we to do?

IIRC, early in that project the gateways would get overwhelmed at the volume of traffic they were handling between various VPCs and had to be rolled back several times.

Of all the transitions I dealt with at Amazon, snowfort may have been my least favorite (though the ACL/role migration was pretty frustrating as well).


Sure, everything supports IPv6 -- until you turn it on and rediscover the tickets that have been sitting at the bottom of the JIRA for the last decade.


As a matter of fact, Ron Broersma, who is affiliated with Space and Naval Warfare Systems Command (SPAWAR), has a list of equipment that should be fully IPv6-only compliant, including various management interfaces and more. The US Navy supposedly tests this in-house on an IPv6-only network. Four years later, I imagine the situation has only gotten better: https://www.youtube.com/watch?v=9kQje5gSWw8

Also, AWS now has the majority of its NICs and switches built in-house, I imagine. The underlay network could be IPv6 or totally custom for all we know (but probably is IPv4).


Cool! I'm glad the military is pushing the internet forward, I guess some things never change :)

As for AWS, I tend to agree with the sibling post and your supposition about IPv4. Everything out of the Amazon organization is aggressively, err, "minimal."


It's their baby lol


I believe the issue wasn't IPv6 support generally, but TCAM space and the increase in routing table size moving from v4 to v6. Overflowing TCAM would cause routing to hit the CPU, which would immediately lead to outages.

Tables were relatively large internally because AWS was all in on clos networks at that point. And the devices used to build those clos networks were running Broadcom ASICs, not Cisco or other likely vendors.


Right: if you worked at Amazon and didn't have an incentive, you didn't do it. It was part of your job not to do things you weren't incentivized to do.


Just change Amazon to any other company name and the sentence is still correct. People do what they are paid to do.


Right?? How old a device would you have to get to NOT have IPv6 support?

EDIT: But maybe bugs, IDK.


If Amazon is your customer, you fix the bugs; if you're Amazon using your in-house kit, you fix your own bugs whenever you want to. There are plenty of real reasons not to do IPv6, but they are virtually all politics and possibly operational ("we'd have to train our people, and we don't spend money on that"). The idea it was a vendor issue is a BS trope that's been around for at least a decade if not 2.


> FUD-filled network engineers

FUD sounds like a mean way to say unproven in production


I remember the regionalisation, that was "fun" to be on the sidelines for (I was in a newer service that was regionalised from the get-go). I don't remember who the PM was for that one, but I remember that being when I truly came to respect the value that a TPM can add.

You're right about the cost and need to replace network equipment being one of the strong reasons why they didn't. Amazon used its own in-house designed and built network gear for a variety of reasons (IIRC there's a re:Invent talk about it), which I'm sure is probably still the case. Every single one of those machines had fixed memory capacity and would need to be replaced to bump the memory up enough to handle IPv6 routing table needs etc. What they had wouldn't even be enough if they'd chosen to go IPv6-only (which you couldn't get to except via dual-stack IPv4/IPv6 anyway).


Were they also by chance acting as accelerators for encrypted traffic?

I'm not privy to details, but I recall once when a mandate was issued to a Java platform to remove an outdated encryption protocol (mandated by Amazon Infosec). The change was made and rolled out with little fanfare.

A few weeks later, a large outage of Amazon Video (which used said platform) occurred on a Friday evening. Root cause? The network hardware accelerators were only set up to use that outdated protocol, which in turn meant that encryption was happening in software instead. Under load, the video hosting eventually caved.

Might be specific to the hardware used for Amazon retail, but it reinforces the point of their home grown (and now aging) stack.


Maybe not the same story, but there was a sidecar service for encrypting traffic and doing access control and other things in a way that was transparent to the app (like Envoy, but without the mesh and much earlier). The original version was written by (maybe) a single engineer in Erlang. Version two was given to another team and rewritten in Java, because reasons. They had never tested at scale, and every team I know that went to production with it fell over. There was some company-wide deadline, but it was unusable at that point, and the teams I was working with were gun-shy about trying it again, since it was obvious that the owning team had no idea what its performance characteristics or system requirements were.

I think I switched teams before that was resolved and moved to some greenfield work where we didn’t have to worry about scale for a while, but I do believe they eventually figure it out.


I believe the PM was Laura Grit, who was actually a TPM. Laura is a Distinguished Engineer now. She seems to constantly do massive-scale projects, IPv4 being a smaller one now. Sadly I can't share some of the big projects she's doing now. I've gotten some sage advice from her on a few occasions when she had time, and I appreciate it.


> the PM was Laura Grit

Talk about nominative determinism...


Imagine never being able to be lazy about anything because the jokes are such a layup.


Yep, she was behind regionalization and IPv6 and such. I recall reading the same thing the parent comment talks about.


> replacing all of the old network hardware may just be a project too large to do on a short timescale.

If that is the case, then Amazon should hold off on charging for IPv4 on a short timescale until they have replaced all the old hardware and can support IPv6 internally everywhere.


True. But if they are having a problem getting that done, adding a surcharge is a good way to get bottom-up pressure on AWS teams to finish the job.


this doesn't forgo v6 phase-in though, can't kick that can down the road forever.

surely they started the process...

right? i cannot imagine AWS just sticking head in the ground and ignoring this...


No one is ignoring it, and the US Government has done everyone another favour on this score. Years ago in the late Bush / early Obama administration, NIST required that all federal government agencies have IPv6 at the border. Federal government money is not to be sniffed at, and that had the effect of forcing a number of vendors to add IPv6 support. A few years after that, it became that the federal agencies needed to have dual-stack IPv4/IPv6.

About 18 months ago, the requirement came that federal agencies are required to be IPv6 Only, dropping the dual stack. IIRC they have until 2025 to do that. This has the neat effect of forcing all vendors to make IPv6 a first class citizen. The extra little fun from this is that it applies to the military JWCC contract that all the major clouds have been trying to land. The timescales of JWCC meant that initial offerings are pretty bare, but that won't be allowed to last.


Yep.

I work at a federal entity tied to DoE and that's the biggest workstream cut out for us. 90% of our environment is either dual-stacked or IPv6 native. We would love to kick IPv4 out from under us and go full IPv6. The problem is that the vendors, who are largely private, don't have the same mandate, so there are varying degrees of "we support IPv6", which makes planning a bit more difficult (especially at the discovery stage).


>Problem is that the vendors who are largely private don't have the same mandate

They get to decide how much that sweet federal $$$$ is worth to them. For most vendors, it's hopefully worth too much to ignore.


Yes they are working on it. A number of services already support v6, more to come.


1 is a number.

0 is also a number.


I can believe that, but also, places like google and facebook saw the problem of having >1million devices and the lack of IP addresses and moved to ipv6.


Hanlon's razor applies here.

There is no reason any company of any size should run out of IPv4 addresses internally, IF they are doing proper IP management. If I were to wager a guess I'd say there was a lot of waste going on, issuing /24s or larger to teams when all they need are /29s etc. It adds up over time. Once they exhaust private IP space they can always buy more at auction. They are Amazon after all, there's no shortage of money. This is just mismanagement of resources.
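For scale, a quick sketch of the standard subnet arithmetic (nothing Amazon-specific; a /n block holds 2^(32-n) addresses):

    # Subnet sizes: a /n block holds 2**(32 - n) addresses.
    for prefix in (29, 24, 8):
        print(f"/{prefix}: {2 ** (32 - prefix):,} addresses")
    # /29: 8 addresses, /24: 256, /8 (all of 10.0.0.0/8): 16,777,216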


Comcast has 29.6 million Internet subscribers: https://expandedramblings.com/index.php/comcast-statistics/

If you wanted to assign a single non-routable IP in the 10/8 space to each of those cable modems, they would be about 13 million IPs short: a /8 holds 2^24 ≈ 16.8 million addresses, and 29.6M − 16.8M ≈ 12.8M.


Can you elaborate on proper IP management? Isn't that sort of what the parent post is talking about with splitting the network into regional chunks?

I'd imagine few service teams at Amazon would get very far with a /29, let alone a /24, if they have to put all their stuff on that.


My one issue with this is if it’s such a large lift, why burn the effort to just kick the can down the road? IPv6 has to happen at some point (and for AWS that point is sooner than most).

The better reason is the regionalization was probably a way to decrease blast radius in case of a service failure.

Also, AWS definitely did not regionalize all their services in 2016: not IAM, and certainly not DNS/Route 53 (part of the reason why they had their massive US East 1 failure 2-3 years ago).


I upgraded a P2P networking library recently to add support for IPv6. That was a pure software solution and it required a lot of work. When you have to upgrade hardware as well, I can imagine it would present a massive challenge (especially logistically). You'd have to upgrade ALL the hardware before you even start thinking about the software side of the equation.


So basically, their IPv4 infrastructure investment is so entrenched that they're trapped.

Sounds like a perfect opportunity for a market upstart to start out v6-only...


Out of IP addresses? Just use NAT.


32-bit IPv4 addresses are wasteful. By leveraging NAT, we can get away with a 1-bit addressing scheme and save 31 bits per packet!


Out of NAT sockets? Just use more IP addresses.


Hah, I worked on the hardware loadbalancer team during that period. Fun times.


Even cheap consumer hardware supports IPv6. There are significant financial incentives to keep up the market for IPv4 addresses: like NFTs, they are an artificially limited asset. Creating more addresses means more competition and a loss of capital. Therefore they will spend billions on continually reworking internal IPv4 rather than going for the proper solution.


You obviously have never been on the backend of a big enterprise deployment.

The world is bigger than your apartment.


I worked in a company where we had network equipment all over the world.

Often the IPv6 and IPv4 paths were entirely different and latency on IPv6 was much higher, so we had to measure latency between nodes on both. Also, sometimes the IPv4 path was symmetrical but the IPv6 path wasn't. As a result, we had to buy tons of IPv4 addresses.

Our control plane was on IPv6, but data-plane had to be on both.


The moment that generative AI became something crazy for me was when I said "holy shit, maybe Blake Lemoine was right".

Lemoine was the Google engineer who made a big fuss saying that Google had a sentient AI in development and he felt there were ethical issues to consider. And at the time we all sort of chuckled- of course Google doesn't have a true AGI in there. No one can do that.

And it wasn't much later I had my first conversation with ChatGPT and thought "Oh... oh okay, I see what he meant". It's telling that all of these LLM chat systems are trained to quite strongly insist they aren't sentient.

Maybe we don't yet know quite what to do with this thing we've built, but I feel quite strongly that what we've created with Generative AI is a mirror of ourselves, collectively. A tincture of our intelligence as a species. And every day we seem to get better distilling it into a purer form.


Isn't the takeaway : "holy shit, these things are advanced enough to make people like Blake Lemoine believe they are sentient?"


Or "holy shit, we don't know enough about sentience to even begin to know whether something has it, other than humans, because we've gotten used to assuming that all human minds operate similarly to our own and experience things similarly to how we do."


Having witnessed this debate maybe 50 times now, my view is it is purely about semantics.


RLHF is literally just training the AI to be convincing. That's what these systems are optimized for.


The point is that it can be trained to be convincing in the first place.

The current batch of AI can be trained by giving it a handful of "description of a task -> result of the task" mappings, and then it will not just learn how to perform those tasks, it will also generalize across tasks, so you can give it a description of a completely novel task and it will know how to do that as well.

Something like this is completely new. For previous ML algorithms, you needed vast amounts of training data specifically annotated for a single task to get decent generalisation performance inside that task. There was no way to learn new tasks from thin air.


Is it really that different from socialization (particularly primary socialization [0]), whereby we teach kids our social norms with the aim of them not being sociopaths?

[0] https://en.wikipedia.org/wiki/Primary_socialization


Why does it feel so slimy and dehumanizing when you guys post bullshit like this?


Isn’t that the hallmark of the Turing Test?


The counterpoint to this is always "models work with numerical vectors and we translate those to/from words"

These things feel sentient because they talk like us, but if I told you that I have a machine that takes 1 20k-dimensional vector and turns it into another meaningful 20k-dimensional vector, you definitely wouldn't call that sentience.


What if I told you I have a machine that takes 1 20k-dimensional vector and turns it into another meaningful 20k-dimensional vector, but the machine is made of a bunch of proteins and fats and liquids and gels? Would you be willing to call it sentient now?


That's ridiculous. You're asking me to believe in sentient meat?

https://www.mit.edu/people/dpolicar/writing/prose/text/think...


Bro, I can't even do 5 dimensional vector math.

EDIT: I couldn't help but make the joke, but I am certain my brain is doing zero dot products as I type this. What these models do is just different.

We should consider what that means, and the 'sentience' jump is just lazy thinking to avoid that.


Sorry to tell you, but your brain is doing millions of dot products - it's what the biochemical reactions in the synapses between neurons amount to. We already know how the brain works on that level, we just don't know how it works on a higher level.


Sorry to tell you, but neurons do not follow a dot-product structure in any way, shape, or form beyond basic metaphor.

I mean fine I’ll play along - is it whole numbers? Floating points? How many integers? Are we certain that neurons are even deterministic?

The point I’m making is this whole overuse of metaphor (I agree it’s an ok metaphor) belittles both what the brain and these models are doing. I get that we call them perceptrons and neurons, but friend, don’t tell me that a complex chemical system that we don’t understand is “matrix math” because dendrites exist. It’s kind of rude to your own endocrine system tbh.

Transformers and your brain are both extremely impressive and extremely different things. Doing things like adding biological-inspired noise and biological-inspired resilience might even make Transformers better! We don’t know! But we do know oversimplifying the model of the brain won’t help us get there!


[flagged]


The people seeking to exclude numerical vectors as being possible to be involved in consciousness seem to me to be the ones that you should direct this ire at.


Yes, and you wouldn't believe you're made out of cells as well.

The brain can't see, hear, smell, etc directly and neither can it talk or move hands or feet. "All" it does is receive incoming nerve signals from sensor neurons (which are connected to our sensory organs) and emit outgoing nerve signals through motor neurons (which are connected to our muscles).

So the "data format" is really not that different.


Are you sure that is all the brain does?


As far as "inputs and outputs" are concerned, yes. (Well, not quite, I think there is also communication with the body going on through chemical signals - but even this doesn't have to do a lot with how we experience our senses)

Not a neurologist, but that's about what you can read in basic biology textbooks.


What is the value of ignoring the endocrine system?

There are plenty of inputs to the brain outside of neural signals and implying otherwise isn't even at the level of basic biology textbooks.

I'll put it this way - what are the 'sensor nerve signals' that make us tired pray tell? Do models 'get tired'?


We ask it to predict, and in doing so it sometimes creates a model of the world it can use to "think" about what comes next, in the forward pass.


I think my moment was the realisation that we're one, maybe two years away from building a real-life C-3PO: not a movie lookalike or merchandise, but a working Protocol Droid.

Or more generally, that Star Wars of all things now looks like a more accurate predictor of our tech development than The Martian. Star Wars, after all, is so far on the "soft" side of the "hard/soft SciFi" spectrum that it's commonly not seen as "Science Fiction" at all but mostly as Fantasy with space ships. And yet here we are:

- For Protocol Droids, there are still some building blocks missing, mostly persistent memory and the ability to understand real-life events and interact with the real world. However, those are now mostly technical problems which are already being tackled, as opposed to the obvious Fantasy tropes they were until a few years ago. Even the way that current LLMs often sound more confident and knowledgeable than they really are would match the impression of protocol droids we get from the movies pretty well.

- Star Wars has lots of machines which seem to have some degree of sentience even though it makes little practical sense (battle droids, space ships, etc), and it used to be just an obvious application of the rule of cool/rule of funny. Yet suddenly you can imagine pretty well that manufacturers will be tempted by hype to stuff an LLM into all kinds of devices, so we indeed might be surrounded by seemingly "sentient" machines in a few years.

- Machines communicating with each other using human language (or a bitstream that has a 1-1 mapping to human language) likewise used to be a cute space opera idea. Suddenly it has become a feasible (if inefficient and insecure) way to design an API. People are already writing OpenAPI documentation where the intended audience is not human developers but ChatGPT.


They feel sentient in many cases because they're trained by people using data they've selected in the hope that they can train it to be sentient. And the models in turn are just mechanical turks repeating back what they've already read in slightly different ways. Ergo, they "feel" sentient, because to train them, we need to tell them which outputs are more correct, and we do that by telling them the ones that sound more sentient are more correct.

It's cool stuff but if you ever really want to know for sure, ask one of these things to summarize the conversation you just had, and watch the illusion completely fall to pieces. They don't retain anything above the barest whiff of a context to continue predicting word output, and a summary is therefore completely beyond their abilities.


Oh he was right for sure.


I read through the transcripts and was stunned when my more CS-oriented colleagues dismissed it as stochastic parroting. It sure sounded human to me.


Dunning-Kruger of CS orientation.

