
I can relate. This sort of trade-off exists today in almost all commercial code. Sharp enough to do the job, but no sharper.

A solo programmer can afford to be an artisan, but commercial code is rarely artisanal. It's full of trade-offs and compromises.

Commercial code is always about business costs -- maintenance cost, buy-vs-build cost, refactoring cost, business continuity costs, etc. You're always working under constraints like time, headcount, available expertise, maintainability, and coordination bottlenecks.

Craftsmanship in commercial code is rare. Sometimes you get geniuses like John Carmack who write inspired stuff in code that ships. But most of the time, it's just a bunch of normal people trying to solve a problem under a ton of constraints.


> it's just a bunch of normal people trying to solve a problem under a ton of constraints.

What makes you think that doesn’t involve craftsmanship?


I read the point as being that this is the kind of coding that is the current equivalent of functional sharpness.

I type somewhat fast, so I labored for years under the impression that I already knew touch typing. But the reality is I didn't -- I don't keep my fingers on the home row, and I make tons of mistakes that I mindlessly correct using Backspace.

I find keeping my fingers on the home row quite unnatural, and it prevents me from typing chords with Ctrl/Alt and Shift. How have folks overcome this?


Touch typing courses focus on writing English prose. Programming requires a far greater range of ability with all the punctuation and special symbols. Then you have all the function and special keys too.

I wouldn't be too hung up on how it is taught. Once you learn the essentials and can type a sentence or so without looking, you'll be fine. Personally, I just watch the screen and make corrections and slight hand-position adjustments without looking down at the keyboard. The fun trick is when somebody walks up to my desk to ask a question and I continue typing while engaging them in conversation. It is sort of a file-write-flush for me, to capture whatever was on my mind when I was interrupted.


I'm in the exact same situation.

I can type around 60-70 wpm without touch typing. Same as you, I often have to correct things with Backspace.

I rarely use my pinky. But with touch typing I have to use it more and it feels very unnatural.

When I have to bang out emails or any other form of text for work, I can't see myself slowing down to force sticking on the home row to learn touch typing.

I could practice in the evenings, but then I'd essentially keep using the 'bad' way of typing while also learning the good way. Seems confusing.


> Ctrl/Alt and Shift. How have folks overcome this?

Ctrl- and Shift-heavy flows are a remnant of the IBM PC / Windows era and suck for ergonomics, unfortunately. Most people realise just how much more comfortable Cmd+ is after using a Mac. You're holding your thumb there anyway, so you might as well make use of it.


I have a virtual layer on my keyboard that contains all the symbols I need regularly. Holding down a specific key (one that is simple to reach) toggles the layer -- like the normal Shift key.

How about the one some of us believe:

“Isn’t it cool that I don’t have to write boilerplate and can prototype quickly? My job isn’t replaced because coding is not my job — it’s solving domain-specific problems?”

I’m in my late 40s, have written code for 3 decades (yes, started in my teens), and have always known that the code was never the point. Code is a means of solving a problem, mostly unrelated to computers (unless you work on pure software tooling).

This is why I chose not to study computer science. I studied something else and kept coding. I’ve always felt that CS as a field is oversubscribed because of the $$$ dangled by big tech.

So many fields are computational these days, and the key is to apply coding to them. For instance, a PhD in biology gets you nowhere these days, so many biologists are now computational biologists or statisticians. Same with computational chemists, etc.

For most of my career I’ve written code, but in service of solving a real world physical problem (sensor based monitoring, logistics, mathematical modeling, optimization).


And we must all enjoy using LLMs because they help us code? Even if you enjoy coding? What if coding helps me relax? Relaxing isn’t my job.

I didn’t have to write boilerplate before LLMs either, with a code scaffolding tool like create-express-app.


literally no one is telling you to do anything.

you can just chill out and have whatever hobbies you want, using whatever tools you want.


Just not make money at it; that’s for LLM CEOs.

> My job isn’t replaced because coding is not my job — it’s solving domain-specific problems?

I would wonder why you are so complacent as to think next year's models won't be able to solve domain-specific problems? Look at how bad text generation, code generation, audio generation, image generation was five years ago versus how capable it is today. Video generation wasn't even conceivable then.

As an equally middle-aged person with children I'm less worried about myself than the next generation. What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?


The economy only works because people consume goods and services. If they can't do that, then capital can't make any money. So whatever the case, capital needs to make sure people retain the ability to consume.

This is the same conversation that happens decade after decade.


I agree with you, but no one listened back then, why would they ever think about listening now.

Capital formation comes before everything else, not the other way around. When you have nothing of value to trade, it simply can't happen, and inevitable hyper-inflationary/deflationary cycles begin which, once started, can't be stopped.

These people think survival is guaranteed, jobs are guaranteed, and the how doesn't matter; it happens because some politician says it does; reality doesn't matter.

That's the line and level of thinking we are dealing with here. How do you convince someone that if they do something, they and their children may die as a consequence, if they can't make that connection?

Communication ceases being possible in a noisy medium past a certain point, according to Shannon. Pretty sure we've crossed that point, and where we may once have been able to discern and separate out the garbage, now, through mimicry, it's all but impossible.

Intelligent people don't waste their efforts on lost causes. People make their own decisions, and they and their children will pay for the consequences of those choices; even if they didn't realize that was the choice they were making at the time they made it.


> I agree with you, but no one listened back then, why would they ever think about listening now.

Because we lead vastly better lives today than 100 years ago, when everyone was also raging about technology stealing jobs. The economy has to adapt to technology changes; there is no other way. It is a self-healing system. If technology removes a lot of jobs, then new jobs are created. It has to be this way, don't you see?


It can be a self-healing system, and capitalism is generally self-healing, but that is not necessarily the case in all economic systems.

There is a critical point where factors and producers leave the market because profit requirements cannot be met in terms of purchasing power (invariant to inflation). You might think those parties are all there is, but that's not the case; there is a third party: the state and its apparatus.

With money-printing, any winner chosen by the state becomes its apparatus. Money printing takes many forms, but the most common is debt -- more accurately, non-reserve debt.

That third entity is not bound by profit constraints and outcompetes the others, rising in the wake of the destruction it causes. This is not self-healing; it is self-sustaining, and slow, and it does collapse given sufficient time.

New jobs aren't being created in sufficient volumes to provide for the population. If anything, jobs have been removed en masse on the mere perception that AI can replace people.

You seem to rely heavily on fallacy in your reasoning. Specifically, survivorship bias. Things are being done that cannot be undone. There are fundamental limits, after which the structures fail.

This is what is coming.


> This is what is coming.

You're saying I rely on fallacy, survivorship bias, but you have no way of knowing what is coming, and yet you state it so authoritatively.

I resort to evidence from history, because these same arguments happen decade after decade, and the doom scenario has not manifested yet. I also find the anti-AI view narrow-minded. You're only able to imagine one scenario, the dystopian one. And yet none of us know this is the likely outcome. It could well be that AI actually does increase the means of productivity: we invent new medical cures, we invent new ways to grow food, we clean up our energy generation, and work becomes more optional as governments (who desperately want people to keep electing them) find ways of redistributing all the newly created wealth.

I don't know which will happen, and neither does anyone else.


This is naïve; the government and corporations are already working towards the dystopian result. Just because we don’t “know” doesn’t mean people can’t make an educated guess. You need people to put LLMs on the good path before you can say the bad path won’t happen. Right now people are loyal to the corporations that offer it; that’s the bad path.

It's like predicting avalanches in avalanche-prone areas.

You may not know the individual particle interactions and forces that will inevitably set the next avalanche off, but you know it will happen based on factors that increase the likelihood dramatically.

For example, the likelihood of an avalanche increases the more snowpack there is, and it goes to zero when the snowpack is gone. The same could be said of LLMs.

You know corporations will do absolutely anything, even destroy their own business model, so long as they make more money in the short term. John Deere is a perfect example of this, and Mexico just finally took action because we couldn't, which culminated in a ~14bn drop in capex on Wall Street for the stock. It was over 10 years in the making, but it happened.

The more market share is concentrated in the hands of a few decision-makers, the greater the damage, and the more impact bad decisions have compared to good ones. You tread water until you drown.


> You're saying I rely on fallacy, survivorship bias, but you have no way of knowing what is coming.

Just because you happen to be blind in this area doesn't mean all people are blind. In The Day After Tomorrow, you had that group at the library that chose to follow the police officer despite warnings that going out into the storm would kill them. What happened? They died.

That is how reality works; it doesn't care about belief. It's pass/fail, live/die.

The thing about a classical education (following the Greek/Roman tradition of Western philosophy) is that you can see a lot more of reality accurately than someone who hasn't received it, and an order of magnitude more than someone who's been indoctrinated. You know the dynamics and how systems interact.

The dynamics of systems don't just disappear; there is inertia, and you can see where it is going even if you cannot predict individual details or a timeline. It is a stochastic environment, but you can make accurate predictions, like El Niño/La Niña weather patterns, with the right know-how and observation skills. Everything we know today originated from observation (objective measure) and trial and error.

This framework is called first principles, or a first-principled approach. It's the backbone of science, and it ties everything that is important to objective measure and the limits of error. When dealing with human systems of organization, you can treat the system in predictable ways at the sacrifice of some accuracy, but that doesn't negate it completely.

These are the things that matter more than others, and they let one predict the future of an existing system, if carefully observed. Like a dam where the concrete has started cracking might indicate structural weakness prior to a catastrophic collapse.

It is not government's job to redistribute wealth. That is communist/Marxist/socialist rhetoric, and it fails for obvious reasons I won't get into. Mises sums it up in his writings back in the 1930s. You like to claim you base your reasoning on history, but you have to include the parts you don't agree with to actually be doing that.

Just because you don't know what will happen doesn't mean others can't. These are fundamental biases in your perception that rigorous critical thinking teaches you to avoid so that you are not dead wrong.

There are people who see the trends before others because they follow a first-principled approach, and they save themselves, or may even profit off it when survival is not at risk.

The blind will often cause chaos to profit, thinking that no matter what they do individually, they can't end it all. The exact same kind of fallacy that you seem to be falling into: survivorship bias.

There are phase changes in many systems. The specific bounds may not be known or knowable in detail ahead of time, but they have been shown to happen, and in such environments precursor details matter.

The moment you start dismissing likely outcomes without basis is the moment you and those you care about go extinct when those outcomes happen and you are in their path.

No one knows everything, but there are some people that know more than others.

It is a fairly short jaunt, in the scheme of things, from the falling dominoes caused by the elimination of entry-level positions (and capital formation as a whole) to socio-economic collapse (where no goods are produced or can be exchanged).

The major problem is that no one is listening to the smartest people, because they are no longer in the room; only yes-men get into the room, the blind leading the blind. That has only one type of outcome given sufficient time: destruction.


> I would wonder why you are so complacent as to think next year's models won't be able to solve domain-specific problems?

If your domain is complex enough and has a critical people-facing component, you generally still have some runway. If it’s not, then it’s ripe for disruption anyway, if not by LLMs then by something else. I pivoted at age 32 because of this. I pivoted again at age 40 (I took a two-level title drop, from principal engineer to midlevel, but I got to learn a new domain, got promoted back to one level below, and now make more money).

I always treat my marketability not as a one-and-done but as a perishable quantity. I’ve never taken for granted that I’ll have job security if I don’t strategize — because I grew up in a time of uncertainty and in a society where a high-paying job was not guaranteed (some jobs, like grocery clerk, were however). People who talk about “job security” as an entitlement of life are the first ones to be wiped out.

That said, not everyone is capable of constantly upgrading their skills and pivoting — we need some cushion against economic disruption for folks who have limited retrainability. But I suspect this is not everyone — most people just haven’t had to do it, so they think they can’t.

Americans have not had to face this en masse in the last 30 years, but many people around the world have. If you’ve lived in a competitive society where there is job scarcity, you quickly get used to this reality.

> What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?

I think those jobs will still exist in some form, but there will be a painful period where everyone figures out how to be differentiated. I’m a hobbyist YouTuber in my free time (YouTuber is a job that didn’t exist before) and I think it’s hard to replace parasocial relationships — AI slop already exists on YouTube, and it gets views but few subscriptions.

The scope of jobs will also shift, and we will see things moving toward realms requiring human judgment — delivering things that require interpretation. Job scopes today are already much broader than people think. Again, there is no guarantee against disruption, but job security was always an illusion anyway, and the sooner we realize this, the sooner we can adopt a preparatory mindset. (In a way, Americans are actually well positioned due to our relationship with capitalism.)

Even the demise of radiologists has been overstated, because being a radiologist is much more than just detecting disease from an image.

Writers will still be around — they might not be able to charge per word, but they’ll pivot to a new model. The transactional model will be gone, but I’m convinced something else will replace it.

I’m not sure about any of this because I can’t predict the future, but I have seen the past and the doomsday scenario doesn’t seem to me the inexorable one.


You should read Hazlitt.

There are things being done which cannot be undone, and there are issues that were long predicted, and ignored, and the consequences are now bearing fruit.

If you haven't heard a real doomsday scenario that's likely, you haven't been listening to the right people, and you rely far too much on the fallacy of survivorship bias.

If you don't have a plan to replace a fundamental societal model, there are two potential outcomes, someone comes up with something because they've been working on it (and it works, which is rare), or all dependencies that rely upon that system fail, and the consequences occur. In other words, everyone starves.

Think about what exchange suddenly becoming impossible would mean, overnight, for our supply chains, with logistics delivering just in time. We saw it during the pandemic, but that was just a small disruption, and not a continuing one.

Imagine it. Nothing on the shelves. No amount of money will let you get what you need (toilet paper). No means that would let this occur within the short timetables of need. What happens? Prior to 2020, people would have called you crazy if you said those things would happen.

Bad things happen if you don't have a plan to make sure they don't happen.


I've observed the same. University science PR pieces are usually unreliable -- they are optimized for generating buzz rather than scientific accuracy. They usually link to the actual science papers, but the prose is usually a stretch.

Even in this case -- "defying the laws of physics" is sensationalist narrative manufacturing.

The real claim is actually more moderate, and the research is not really close to commercial yet.


It's research-in-progress, but I think the promise is slightly different from dehumidifier bags (known in other parts of the world as Thirsty Hippos [1]), which are single-use.

You're correct in that: (1) it doesn't break the laws of physics; (2) to remove the droplets, you still need energy. But it sounds like, if the droplets are moving to the surface, the energy needed to release them could be far lower than with most active dehumidification methods (e.g. Peltier junctions).

[1] Thirsty Hippos -- which are very effective in small spaces.

https://www.amazon.sg/Thirsty-Hippo-Dehumidifier-Moisture-Ab...

Basically a supercharged silica gel.


Probably a small piezo element could be used to provide a solid-state vibrator for releasing water from a proportionately much larger area of the material, or at larger scales perhaps a technique similar to the ultrasonic sensor cleaners built into interchangeable-lens cameras.

Do you mean like an ultrasonic humidifier[1]?

[1] https://www.amazon.com/Ultrasonic-Humidifiers/s?k=Ultrasonic...


Sure, why not?

> > Ultrasonic humidifier

> Sure, why not?

https://dynomight.net/air/ estimates that using an ultrasonic humidifier for one night shortens your life by 50 minutes. Getting rid of any ultrasonic humidifiers is his top tip to extend your life cheaply.

Dedicated post on them: https://dynomight.net/humidifiers/


That was a great read. I didn’t know that blog, and a quick glimpse at the about page made me bookmark it. Thanks for sharing.

I've got some bad news if you live near a road.

I am aware that cars are ruining millions of people's health. That car drivers are privatising the convenience and externalising the harms of driving. That car drivers are a privileged, wealthy, class of people who can literally kill others and walk away without a jail sentence using the defence "I didn't see them":

https://www.cambridge-news.co.uk/news/cambridge-news/cyclist...

https://www.cyclingweekly.com/news/latest-news/driver-carele...

https://veronews.com/2022/08/06/no-jail-time-for-driver-of-c...

many other examples exist


Sure. And if any particulate emitted by an ultrasonic humidifier could be dangerous enough to shorten your life by ~10% with consistent use or 50 minutes per roughly 8-hour night's sleep as this timecuber of yours appears to claim, then I should think the tire and brake dust burden anywhere near an actively used road would be not just instantly but flagrantly fatal.

I'm aware of the hundred thousand words spent justifying the idea. I will consider reading them once I've been convinced to ignore the result of this trivial - and I do use the following phrase with careful consideration aforethought - sanity check. You'll more likely give the goalpost another kick, though, I suspect.


Explain where I have given any goalposts any kick at all?

From the articles:

> A good heuristic is that an increase of 33.3 PM2.5 μg/m³ costs around 1 disability-adjusted life year. Correia et al. (2013) estimated something close to this from different counties in the US, and more recent data from many different countries confirm this. The most polluted cities in the world have levels around 100 PM2.5 μg/m³.

> When inhaled during an 8-hr exposure time, and depending on mineral water quality, humidifier aerosols can deposit up to 100s of μg minerals in the human child respiratory tract and 3–4.5 times more μg of minerals in human adult respiratory tract. (Yao et al., 2020)

The amount of particles people breathe in during a night of worst-case ultrasonic humidifier use is 8x more than the particle level in the air of the most polluted cities in the world.


And of course every relationship is both bijective and linear from one data point over an infinite domain.

We could talk about this utter misrepresentation of https://pubmed.ncbi.nlm.nih.gov/23211349/ but why? You haven't read it. You won't. At most you will follow the examples you cite in prooftexting from it like a Southern Baptist inveighing against homosexuality. Kindly find someone else whose time so to waste.


I said, explain where I kicked any goalposts. You haven't, because I didn't. Ad-homs, against the author and against me, pre-deciding your conclusion, refusing to explain your objections, pretending "we could talk about it" while turning to insults to shut down any talking about it.

I get it, you're desperate to appear smart and superior, but arguing that lamely isn't doing it. Of course I'm not going to read your link, try and guess what misrepresentations you're coming up with, make some argument about them and their context in the wider post, only for you to ignore it and post some more nonsense in response. Or engage with you further.


The link I posted leads to a paper you cited. You've attributed a causal claim to the paper which it not only does not make, but even in its abstract very carefully avoids. If that isn't intentional falsity, then it is certainly a remarkable demonstration of intellectual negligence. In any case "desperate" is not how I would describe the simple fact that I did a better job checking your sources than you have, which by the look of the thing is to say that of the two of us I'm the only one who bothered actually investigating your argument at all.

You could not by now have done more to prove my point that you aren't bothering to actually know anything about what you present yourself able knowledgeably to discuss. Thanks for that. Feel free to embarrass yourself with further flagrant scientism if you like. Enjoy your day.


> a paper you cited.

> You've attributed a causal claim

> your sources

> your argument

> what you present yourself able knowledgeably to discuss.

No, no, no, nope and no. None of these accusations are correct. Feel free to embarrass yourself with lacking basic reading and quoting comprehension; I am not the author of the Dynomight article.


> I am not the author of the Dynomight article.

Who chose to bring it up? Who chose to insist on its baseless conclusions? Who then demonstrated the inability to defend those conclusions for their total lack of substance?

No, you don't get to represent the source you chose as accurate only until that fails to go your way, and then turn around and try to disclaim it. The embarrassment you now feel is amply earned.

This is what it feels like to have failed to evaluate your sources, argued strenuously in support of total nonsense, and thus made a complete and negligent fool of yourself. You should draw a lesson from that for next time you consider starting a conversation like this one.

You won't; you are too deeply in love with the idea of yourself as a clever person, and you won't dismiss the offense I gave to consider the substance of my remarks. This is a level of predictability I would not be comfortable with in myself. But that, too, is no problem of mine.

You've tried moving the goalposts again, had you noticed? If I let you get away with it, we wouldn't be talking about the factual inaccuracies, facial implausibilities, and ignorant misrepresentations of research, in the source you so uncritically chose, at all...


This isn't reddit. Please kindly take your anger, ad hominems, and bad-faith arguments back over there. I'm sorry you had a bad day, but nobody in this thread caused it, so take a deep breath.

Sorry, did you have something substantive to add? Your comment history says not, as does that you carefully avoid substance here, preferring to - actually, that is not obvious and makes an interesting question. What is your purpose here?

Yes, this requires energy to extract the water, but if it's much less energy than dehumidifiers use (say, one order of magnitude less), then it could make harvesting water from humid air economical.
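
To put rough numbers on it (a back-of-envelope sketch; the liters-per-kWh figures are illustrative assumptions, not measurements from this research):

  # Energy cost of pulling 1 L of water out of air.
  # Condensing water means removing its latent heat of vaporization:
  LATENT_HEAT_KWH_PER_L = 2.26e6 / 3.6e6   # ~0.63 kWh/L (2.26 MJ/kg)

  # Assumed real-world efficiencies (illustrative guesses):
  assumed = {
      "peltier": 0.3,      # L/kWh; Peltier units are notoriously inefficient
      "compressor": 2.0,   # L/kWh; a decent compressor dehumidifier
  }

  for name, l_per_kwh in assumed.items():
      print(f"{name}: {1.0 / l_per_kwh:.2f} kWh per liter")

  # If a passive film collected the droplets and only a cheap release step
  # (vibration, tilting) were needed, an order-of-magnitude improvement over
  # the compressor figure would mean ~0.05 kWh/L -- the point at which
  # harvesting water from humid air starts to look economical.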

Dehumidifier bags (e.g. silica, CaCl) aren't single-use. Microwave, then reuse. Some are even color-changing so you know how much moisture they've absorbed.

Microwaving is adding energy, obviously. But the idea here is that the water is recoverable, not that the air is now drier.

Concur; the idea behind this class of devices is to take advantage of a daily humidity cycle. Whether it's CaCl (absorption) or Silica (adsorption), or the latest lab-designed adsorption surface.

This is a good time to note that I see one of these articles ~once every two years, for the past 10 years. I haven't observed one make it beyond the initial discovery phase.


This, and solutions for male pattern baldness.

Male pattern baldness is a solved problem, if caught early enough; people just don't usually bother, because the cleanest solution (a 5α-RI) can interfere with sexual function, the "proper" fix for that (low-dose topical application) is time-consuming (so people normally just kludge it with Viagra), and the medicines involved can (indirectly) cause breast growth with prolonged use (unlikely to be a problem with low-dose topical application, and can also be mitigated, although overshooting that mitigation can cause osteoporosis) and are "don't even touch this if you're pregnant" class (they can interfere with fœtal development).

If I have to relinquish my sexual function and grow breasts to reduce baldness, then baldness is not a solved problem.

Low-dose topical application doesn't have those problems. Heck, even "dose your entire body" doesn't always lead to sexual dysfunction. (And breast development is a rare side-effect that you'd notice before anything permanent happens, and is easily-addressed.) However, it is topical application of a medicine that can interfere with fœtal development.

Oh, almost forgot: any messing around with sex hormone levels puts you at risk of depression. That's big side effect #3 (though again, many people don't even notice it).


out of curiosity, what else would you expect the side effect profile of something mediating the effects of a potent androgen on the body to look like?

it's not estrogen where you would expect breast growth (and can't count on any particular changes to sexual function anyway), it's inhibiting conversion of testosterone to dihydrotestosterone which could have that effect, much like you could spontaneously develop gynecomastia without intentionally fiddling with your hormone balance. calling it unsolved sounds a lot like calling the very many conditions with medications that have more likely and worse side effects equally unsolved.


> what else would you expect the side effect profile of something mediating the effects of a potent androgen on the body to look like?

I'm a consumer, not a medical professional. I have no expectations based upon detailed familiarity with the underlying biology. Or is the target market for these products medical professionals?

That's what the patient information sheet is for, and why everyone's supposed to have access to a trained medical professional they can freely consult for things like this.

> it's not estrogen where you would expect breast growth

Actually, it is. Reducing DHT levels causes the body to elevate both testosterone and œstrogen levels, via homeostasis. But yeah, it's not a direct effect, and if it's a problem you can twiddle further to make it go away. (You could even do that pre-emptively, though you normally get days and days of warning before breast development actually starts, so I'd advocate the "wait and see" approach.)


A condition is typically considered solved if there are drugs or procedures that cure it and either (a) have extremely rare side effects, or (b) have side effects that are not as big a problem as the condition they are curing. If a pill existed that cured the common cold but had a 1% chance of giving you cancer of the throat, people wouldn't proclaim "we've cured the common cold!"

Started with topical Minoxidil at age 21. Have (almost) a full head of hair. Now I take it in pill form.

No breasts. And no other issues.


Minoxidil is a sledgehammer: it's got all sorts of other effects (e.g. reducing your blood pressure, beta something something). I wouldn't expect it to cause breast development, though, since it doesn't act on œstrogen receptors.

You seem knowledgeable. Where's a safe place to order the topical application from? I'm not in the US or Europe, our doctors aren't going to be bothered with (or knowledgeable about) something like treating baldness.

My Gmail username is the same as my HN username if you prefer to answer in private. Thanks


I don't have the savvy for stuff like actually acquiring medicines, unfortunately. You might be able to just buy it from your local pharmacy; but if not, you could check https://hrtcafe.net/ or – as a sibling commenter suggested – look into minoxidil (which works via a different mechanism). I wouldn't recommend minoxidil unless its other effects would be beneficial to you, since I'm leery of things that affect blood pressure and circulation – but I'm not actually trained in this stuff, so maybe it's considered safer.

Finasteride is less potent, but is normally recommended for cis men; not sure why. Theoretically, I'd expect dutasteride to be the better medication (and https://doi.org/10.2147/CIA.S192435 bears that out) if you can get hold of it.

I'd have thought finasteride and dutasteride weren't safe to take if there's a chance of you getting someone pregnant, but https://www.nhs.uk/medicines/finasteride/fertility-and-pregn... says it's fine, actually. https://doi.org/10.4103/0974-1208.86093 goes into more detail on that. (I'm not aware of any other impacts on fœtal development, only the intersex condition mentioned in that article – note that the backdoor pathway described in https://doi.org/10.1002/dvdy.23892 also requires the 5α-reductase enzyme –, but I'd still advise caution.)


Thank you. This is an extremely informative comment that gives me many avenues to pursue. Much appreciated.

By the 24th century, no one will care that you are bald.

I'm doubtful that President Camacho could've gained so much power without that fantastic coif

Unless you are Brian “Hairlacher” formerly of the Chicago Bears and shilling hair replacement on Chicago area billboards for years now.

Devices that automate this are readily available, I have one running now. "Desiccant dehumidifiers."

If something breaks the laws of physics it simply means the laws of physics were incomplete, so we update them and now it no longer breaks them

However, if something claims to break the laws of physics, 99 times out of 100, it simply means that either a) the person making the claim missed something, or b) the person making the claim is lying.

Or c) the person making the claim has no interest in the truth, but strong interest in some other thing.

Which is the same as b)

Don't complicate


It is really as simple as that, and it even applies to what will soon seem like free energy. It is not free energy; it is just energy from a field we were previously ignoring and previously fighting against.

What is the source of this seemingly free energy that we've been ignoring and fighting against (I assume at different points in time)?

For example, Earth's magnetic field has been claimed as a source of "free" energy.

Those are usually just calcium chloride in a bag, it's very hygroscopic and fairly cheap... also makes a halfway decent de-icer. The issue I see with this thin-film method is that no mention is made of the rate of production at a given relative humidity for a given area of the film.

It's interesting, but without the details (and with a lot of PR speak) I'm skeptical as hell about this in practice.


I'm casually ideating on a new orthography for Japanese that does not require Kanji.

Because it's hard to remember how to handwrite complex kanji (many people have character amnesia in real life due to smartphone usage), I casually wondered if it was possible to create a Japanese orthography that: (1) is easily scannable (which rules out hiragana); (2) disambiguates words without Kanji; and (3) is still relatively compact (which hiragana is not)?

I figured a good substrate to start from was romaji + a new emoji system.

You know how people think LLMs can't invent things? o3 just invented this system, whose goal is to maximally disambiguate homophonic Japanese words (performing the same semantic compression role that Kanji does today). This is the first iteration. After each romaji noun, it tags the noun with a geometric shape. These are the 30 tags o3 came up with (based on homophones requiring disambiguation):

  | Diacritic     | ◐  living              | ▢  built / object      | ⬢  nature               | ◊  abstract                  | ⟐  action / event           |
  | ------------- | ---------------------- | ---------------------- | ----------------------- | ---------------------------- | --------------------------- |
  | •  top dot    | ◐• adult / main person | ▢• building / place    | ⬢• plant / flora        | ◊• idea / thought            | ⟐• movement / travel        |
  | –  bottom bar | ◐– child / minor       | ▢– tool / instrument   | ⬢– water / liquid       | ◊– emotion / feeling         | ⟐– communication / speech   |
  | +  right plus | ◐+ group / collective  | ▢+ vehicle / transport | ⬢+ weather / sky        | ◊+ social tie / relationship | ⟐+ creation / production    |
  | ×  left cross | ×◐ animal (non-human)  | ×▢ food / consumable   | ×⬢ mineral / material   | ×◊ value / moral             | ×⟐ perception / sense       |
  | | center bar  | ◐| deity / honorific   | ▢| document / media    | ⬢| terrain / landscape  | ◊| state / condition         | ⟐| change / transformation  |
  | ‿  bottom arc | ◐‿ anatomy / body part | ▢‿ container / vessel  | ⬢‿ energy / fire        | ◊‿ knowledge / data          | ⟐‿ rest / passive state     |
I gave it this wikipedia JA text:

  現代における日本語の一般的な表記法は漢字仮名交じり文であり、漢字と平仮名(昔の法令などでは片仮名)を交えて表記する。漢字は実質的な意味を表す語に使われ、平仮名は主に活用語尾[注 1]や助詞に使われる。朝鮮語などでは漢字が主として字音語にしか使われないのに対し、日本語では和語にも使われ、外来語を除いてほとんどの語に使うことができる。煙草(タバコ)や合羽(カッパ)、珈琲(コーヒー)など大航海時代以降にヨーロッパから入った語彙には、外来語であるにもかかわらず漢字が使われるものがある。
And told o3 to rewrite it in the new system it just created:

  gendai◊| ni okeru  nihongo◊‿ no  ippan-teki na hyōkihō◊‿ wa kanji▢|  kana▢| majiri  bun▢| de ari, kanji▢| to  hiragana▢|  (mukashi no  hōrei▢| nado de wa  katakana▢|) o majete  hyōki◊‿ suru. kanji▢| wa  jisshitsu-teki na imi◊• o arawasu  go◊‿ ni tsukaware, hiragana▢| wa  omo ni  katsuyō-gobi◊‿ ya  joshi◊‿ ni tsukawareru. chōsengo◊‿ nado de wa  kanji▢| ga  shutoshite  ji-on go◊‿ ni shika tsukawarenai no ni taishi,  nihongo◊‿ de wa  wago◊‿ ni mo tsukaware, gairaigo◊‿ o nozoite  hotondo no  go◊‿ ni tsukau  koto◊‿ ga dekiru. tabako⬢• (tabako) ya  kappa▢– (kappa),  kōhī×▢ (kōhī) nado dai-kōkai-jidai◊| ikō ni  Yōroppa⬢| kara haitta  goi◊‿ ni wa, gairaigo◊‿ de aru ni mo kakawarazu  kanji▢| ga tsukawareru  mono◊‿ ga aru.
It's pretty readable, and takes care of the homophone ambiguities (remaining ambiguities can be resolved through context). It also naturally resolves onyomi and kunyomi. Add italics and punctuation, and katakana is replaced.

(of course, it's incorrect in parts... because LLM)

But the idea has legs. It will probably not replace Kanji due to cultural inertia, but it could be a kind of shorthand, especially for handwriting.

I'm pretty impressed! o3 just invented something new. It combined romaji and a tag system that it hallucinated to design a new Japanese orthography. As far as I can tell (I could be wrong), something of this nature has not been done before.
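
Mechanically, the scheme is just category glyphs combined with position-sensitive diacritics. A toy sketch of the composition (categories and diacritics abbreviated from the grid above):

  # Compose a semantic tag from a category glyph plus a diacritic.
  # Abbreviated from o3's 5x6 grid; "left" diacritics attach before the glyph.
  CATEGORIES = {"living": "◐", "object": "▢", "nature": "⬢",
                "abstract": "◊", "action": "⟐"}
  DIACRITICS = {"top_dot": ("•", "right"),      # e.g. plant / flora on ⬢
                "bottom_bar": ("–", "right"),   # e.g. tool / instrument on ▢
                "left_cross": ("×", "left")}    # e.g. food / consumable on ▢

  def tag(word, category, diacritic):
      glyph = CATEGORIES[category]
      mark, side = DIACRITICS[diacritic]
      return word + (mark + glyph if side == "left" else glyph + mark)

  print(tag("tabako", "nature", "top_dot"))    # tabako⬢• (plant / flora)
  print(tag("kōhī", "object", "left_cross"))   # kōhī×▢ (food / consumable)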


Frequentist methods are unintuitive and seemingly arbitrary to a beginner (hypothesis testing, 95% confidence, p=0.05).

Bayesian methods are more intuitive, and fit how most people reason when they reason probabilistically. Unfortunately, Bayesian computational methods are often less practical in non-trivial settings (they usually involve some MCMC).

I'm a Bayesian reasoner, but happily use frequentist computation methods (max likelihood estimation) because they're just more tractable.


I'm familiar with hypothesis testing, MCMC, and MLE. Can you explain how they are bayesian or frequentist?

p-value testing is problematic, but frequentist CIs typically map to credible intervals with uninformative priors. In practice, Bayesian analyses tend to use such weak priors that they are essentially uninformative.

Maximum likelihood also tends to be equivalent to MAP with uninformative priors.
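
That equivalence is easy to check numerically (a minimal sketch; the Gaussian model and improper flat prior are illustrative assumptions):

  import numpy as np
  from scipy.optimize import minimize_scalar

  rng = np.random.default_rng(0)
  data = rng.normal(loc=3.0, scale=1.0, size=50)

  def nll(mu):
      # negative log-likelihood under N(mu, 1)
      return 0.5 * np.sum((data - mu) ** 2)

  def neg_log_posterior(mu):
      # flat (improper) prior: log p(mu) is constant, so it drops out
      return nll(mu) + 0.0

  mle = minimize_scalar(nll).x
  map_est = minimize_scalar(neg_log_posterior).x
  print(mle, map_est)  # identical: with an uninformative prior, MAP == MLE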

I find a lot of Bayesian analysis is a bit of cargo culting and frequentist/ML formulations are dismissed with tribalism.


You can already do that. I run VS Code on multiple monitors.

Any tab can be pulled into its own window and moved to a different screen.

Even the terminal panel can be popped out into a tab or panel or window. (The UI is not obvious but once you see it you can’t unsee it)

It’s pretty cool to have the code and terminal side by side in the editor window. (Of course this was always possible with emacs)


Can you give me an example? Spell check only checks if a word is in dictionary. It doesn’t check grammar or context.

"Bob went to Venice to pick up the doge."

Here "doge" is the name of a title (like duke), but it is also a possible misspelling of "dog". The use of "Venice", where doges are found, could increase the likelihood of a smarter spell check keeping "doge" rather than correcting it to "dog". Looking at a wider context might reveal that Bob is actually talking about a pupper.

A simpler example would be "spell cheque"
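
Catching these real-word errors needs more than a wordlist; even a crude co-occurrence model captures the idea (a toy sketch -- the counts are invented for illustration, not from a real corpus):

  # Both "dog" and "doge" are valid dictionary words, so only context can
  # flag the error. A real system would use corpus statistics or an LM.
  cooccur = {
      ("venice", "doge"): 50,   # "the Doge of Venice" is a known collocation
      ("venice", "dog"): 5,
      ("leash", "dog"): 40,
      ("leash", "doge"): 1,
  }

  def score(candidate, context):
      return sum(cooccur.get((c.lower(), candidate.lower()), 1) for c in context)

  def pick(typed, alternatives, context):
      # keep the typed word unless an alternative is likelier in context
      return max([typed] + alternatives, key=lambda w: score(w, context))

  sentence = "Bob went to Venice to pick up the".split()
  print(pick("doge", ["dog"], sentence))  # -> "doge": Venice keeps it
  print(pick("doge", ["dog"], "He put a leash on the".split()))  # -> "dog"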


A spelling error, using one dictionary definition, is "an error in the conventionally accepted form of spelling a word" --- mistaking one word for another does not fall under this definition. It is true that we now expect spell checkers to do grammatical checking as well, but a pure spell checker can indeed rely on a wordlist for English (this wouldn't work in languages with more developed morphology and/or frequent compounding).

OK, but this is a technicality. Spell checkers have slowly evolved into grammar checkers, and what people really want is error correction. Whether people call it a spell checker is a minor language issue (and the kind of thing humans do all the time).

When reaching for your dictionary, ask: "is it obvious what they mean if I'm not being pedantic?"


We expect different outputs in these two cases, though. A wrong word choice is usually accompanied by a hint that another word may have been intended, while a wrong spelling can be unambiguously marked as a mistake. These two behaviours can be turned on and off independently, and they need two different labels.

Agreed. "Dessert" vs "desert" - mistaking these two is often not a grammatical error (they're both nouns), but is a spelling error (they have quite different meanings, and the person who wrote the word simply spelled it wrongly).

I agree, but this is definitely the kind of spelling error (along with complementary/complimentary, discrete/discreet, etc.) that we normally don't expect our spellcheckers to catch.

I don’t think I agree with your interpretation of the definition.

If I spell the word “pale” as “pal”, that is not an acceptable spelling for the word “pale”, even if it is coincidentally the acceptable spelling for an entirely different word.

If I asked a human editor to spellcheck the sentence: “His mouth dropped and he turned pal.”, the editor would correctly indicate I had misspelled the word.

Spellcheck hasn’t done this in the past because it can be quite difficult. But that’s a limitation of computer capability, not functionality bounded by the definition of the term “spellcheck”.


Finnish would like a word. Take a random noun like kauppa, "shop". It has at least 6000 forms: https://flammie.github.io/omorfi/genkau3.html and that's excluding compounds (written as one word in Finnish) like "bookshop" or "shop-manager", etc. And then you have loan words and slang, and derivations into other word classes; all of this is impossible to represent compactly in a full-form word list.

Now consider the many other languages of that family ( https://en.wikipedia.org/wiki/Uralic_languages ) – they also have this extreme potential for inflection, but next to no online resources to train language models on or even to scrape decent wordlists from.
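
The combinatorics alone make the point; a rough sketch of why the form count explodes (the category counts are approximate, and real morphotactics are more constrained than a plain product):

  # Rough upper bound on inflected forms of a Finnish noun like "kauppa".
  cases = 15            # nominative, genitive, partitive, inessive, ...
  numbers = 2           # singular, plural
  possessives = 1 + 6   # bare form, plus possessive suffixes (-ni, -si, ...)
  clitics = 1 + 7       # bare form, plus clitics like -kin, -han, -pa, -ko ...

  print(cases * numbers * possessives * clitics)  # 1680 single-clitic forms

  # Clitics can also stack (-pa + -han -> -pahan), which is how the full-form
  # list for one noun runs into the thousands -- hence FST-based analyzers
  # (like omorfi, linked above) rather than flat wordlists.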


Finnish is very different from most other languages, and does not have the user base to be well represented in training data, but that webpage is ridiculous and does not reflect the actual language. No one in the history of Finnish has ever spoken most of those forms. Grammar describes language, it does not define it!

"would like a word". I see what you did there...

That's exactly what they're saying. If you write “the work required deep incite”, a traditional spell checker won't catch the mistake (but people consider it a spelling error).

Cue people mistaking cue for queue

Butt wouldn't you liked if a spell cheque could of fixed these command?

Hah! Apple caught "of" and suggested "consider have instead", but left the rest untouched. Great QED for spell checkers.

Chatgpt fixed it though: "But wouldn't you like it if a spell check could have fixed these commands?"


I'm in my late 40s and have Instagram on desktop. I watch reels from time to time. The clips I watch are definitely >6 seconds (more like 30s to 1.5 minutes -- there are very few clips under 10 seconds on my feed). And agree with emily, Instagram on desktop isn't as addictive. I scroll for 10-15 minutes and I'm done.

Also, Instagram's Reels algorithm isn't that smart. I watch maybe 20% of reels to completion (I skip 80% of reels after 2 seconds). The Reels algorithm shows me a bunch of stuff that it thinks would interest me, but really doesn't. I don't understand why, because I do follow a lot of content creators. I'm also quite reptilian -- if I see a weird animal or a dam bursting or a powerwashing scene, I will watch it. But Instagram doesn't seem to pick up on that.

Now I've heard TikTok's algorithm is much smarter and thus more addictive than Instagram's. I promised myself that I will never be on TikTok.

YouTube subscriptions are my main form of entertainment. I justify it because I learn so much useful stuff from them.

YouTube Shorts? I don't bother at all -- despite my having curated my subscriptions carefully, the recommended shorts are so boring that I never click on them.


I also use YouTube, but I don't use the internal YouTube subscription function; I just subscribe to each channel's RSS feed and use an RSS notifier browser extension.

I also try to limit how many channels I track to only around a dozen tops (if that), most of which are music artist channels to let me know when they have a new song out.

The few that aren't music channels I just download with yt-dlp and temporarily put them on my NAS to watch with my Emby server. This way, I can watch them from the comfort of my couch and I don't have to deal with ads. :)
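
That flow is easy to script. A minimal sketch (the channel ID is a placeholder and the NAS path is whatever directory your media server scans; feedparser and yt-dlp are the only dependencies):

  import subprocess
  import feedparser  # pip install feedparser

  # YouTube exposes an Atom feed per channel:
  CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
  FEED_URL = f"https://www.youtube.com/feeds/videos.xml?channel_id={CHANNEL_ID}"
  NAS_DIR = "/mnt/nas/youtube"  # hypothetical Emby-watched directory

  feed = feedparser.parse(FEED_URL)
  for entry in feed.entries[:3]:  # just the most recent uploads
      print("fetching:", entry.title)
      subprocess.run(["yt-dlp", "-P", NAS_DIR, entry.link], check=True)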

