> In what world is the correct response "Dear regulators, you're incompetent. Pound sand." instead of "Can you share the IP address you used so my client can address this in their geoblock?"
That would imply that the client actually would like to be contacted every time Ofcom finds a leak in the geoblock. Not a good idea imho.
They don't agree that it is a public safety matter, or at least they've clearly taken the position that they don't care about that kind of public safety.
He's just pointing out that Ofcom's behavior is inconsistent with Ofcom sincerely believing it's a public safety matter either.
Most of Europe now has thought-crime and censorship laws similar to the UK's. Also, the crime of "hindering an official investigation" could be stretched to cover this, and that exists practically anywhere.
The only further question would be if the country is friendly enough with the UK to extradite.
KaTeX produces MathML. The problem KaTeX solves is that MathML is really ungainly for authoring equations. So instead you write equations in a DSL (which most people just call LaTeX) and KaTeX compiles that to HTML/MathML for you.
You can do this server side or client side and sadly too many people do it client side. If you do it server side, it is just one more step in your build next to transpiling and bundling.
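To make the server-side route concrete, here's a minimal sketch of a build-time step using KaTeX's documented renderToString API (the formula and file names are made up for illustration):

    // build-math.ts -- hypothetical build step: compile LaTeX to markup ahead of time
    import katex from "katex";

    // Render once at build time; the visitor's browser only receives the output markup.
    const html = katex.renderToString("\\sum_{d \\le \\sqrt{x}} \\frac{1}{d}", {
      displayMode: true,       // block-level display math
      output: "htmlAndMathml", // or "mathml" if you only want MathML output
      throwOnError: false,     // render the offending source instead of throwing
    });

    // Inject `html` into your page template instead of shipping katex.js to the client.
    console.log(html);

Run that next to your transpile/bundle step and the client never has to execute any math-rendering JavaScript.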
We have WebAssembly and the power to run whatever you can manage to compile to it. There's no real need for "native" support.
The key issue is that the LaTeX stack wasn't really designed to be packaged up like this. It just has a lot of moving parts that are vaguely dependent on running in a full-blown Unix-like shell environment, so the resulting code would be a rather big blob. Running that in a browser isn't that hard if you can live with a fair bit of overhead. This has been done. But it's a bit overkill for publishing content on the web.
Browsers don't have native support for MathML any more for a good reason. Mozilla did support this for a while but dropped it because of limited adoption and high maintenance burden. Rendering formulas is a bit of a niche problem, and the intended audience is just kind of picky when it comes to technology and generally not that into doing more advanced things with web browsers. Also, most people writing scientific articles would be writing those for publication and probably use LaTeX anyway. So translating all their formulas to MathML is an extra step that they don't need or want.
At least that's my analysis of this. I'm not really part of the target audience here and I'm sure there are plenty of MathML fans who disagree with this.
In any case, KaTeX makes an acceptable (to some) compromise by packaging this stuff up in a form where it can be run server-side and is easy to integrate on a simple web page. A proper solution with buy-in from the scientific community (e.g. for MathML) is a much bigger/harder problem to solve.
IMHO, a lightweight solution based on WebAssembly could be the way to go. But of course the devil is in the details, because if the requirement is "do whatever LaTeX does" it gets quite hard. And anything else might be too limited.
>Browsers don't have native support for MathML any more for a good reason. Mozilla did support this for a while but dropped it because of limited adoption and high maintenance burden.
AFAIK Safari was the first browser to support MathML fully, and FF also supports it. Chromium was the last to add it, IIRC. MathML has been baseline-available since 2023, after Chromium got support.
The big issue is that MathML is designed as a target language, not something directly writable. So we still need a KaTeX equivalent, which compiles LaTeX equations or other markup languages to MathML.
Regardless, the core issue that you mentioned is now gone (or will be within a few years, even if you want broader availability).
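To make the authoring gap concrete: even a small fraction like \frac{a+b}{2} in LaTeX becomes something roughly like the following when written as MathML by hand (a hand-written, slightly simplified illustration; a compiler's exact output will differ):

    <!-- LaTeX source: \frac{a+b}{2} -->
    <math display="block">
      <mfrac>
        <mrow>
          <mi>a</mi>
          <mo>+</mo>
          <mi>b</mi>
        </mrow>
        <mn>2</mn>
      </mfrac>
    </math>

Nobody wants to type that for a page full of equations, which is why a compiler from a terser notation stays necessary.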
We will continue to have "workarounds" even after MathML because it is not an authoring-friendly markup. My ideal in this regard is a simplified eqn-like markup, which is neither hard to write by hand nor hard to parse.
Yes, but those workarounds will be author-side ones. Like how HTML isn't very friendly to write by hand for many, so CMSes use e.g. Markdown or WYSIWYG to make it friendlier. In the same way, there will always be preprocessors in authoring tools that might convert e.g. TeX notation to MathML.
My point is that "fast" in those kinds of workarounds wouldn't be a problem for visitors of a site because all the browser gets is just native MathML.
MathML has been supported in all major browsers for several years now. I use it regularly and never had a major issue, just some subtle inconsistencies between different browser engines.
For someone used to the typesetting quality of LaTeX, MathML leaves much to be desired. For example, if you check the default demo at https://mk12.github.io/web-math-demo/ you'll notice that the contour integral sign (∮) has an unusually large circle in the MathML rendering (with most default browser fonts) which is quite inconsistent with how contour integrals appear in print. It's not the fault of MathML of course since the symbol '∮' is rendered using the available fonts. It is not surprising that a glyph designed for 'normal' text sizes doesn't look good when it's simply scaled up to serve as a large integral symbol.
Even if we address this problem using custom web fonts, there are numerous other edge cases (spacing within subscripts and superscripts, sizing within subscripts within subscripts, etc.) that look odd in MathML. At that point, we might as well use full KaTeX. Granted, many of these issues are minor. If they don't bother you, MathML could be a good alternative. Unfortunately, these inconsistencies do bother me, so I've been using MathJax, and more recently KaTeX, since they get closer to the typesetting quality of LaTeX than MathML does.
If you want every symbol to look exactly like in Latin Modern (the default typeface in LaTeX), simply use Latin Modern as your math typeface. The size of the circle on the contour integral is a matter of personal preference, but it just depends on the typeface and is orthogonal to the choice of LaTeX/MathML.
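If you want to try that with native MathML, a minimal CSS sketch looks roughly like this (assuming you self-host a web-font build of Latin Modern Math; the file path here is hypothetical):

    /* Use Latin Modern Math for MathML rendering (sketch; adjust the path to your font file). */
    @font-face {
      font-family: "Latin Modern Math";
      src: url("/fonts/latinmodern-math.woff2") format("woff2");
    }

    math {
      font-family: "Latin Modern Math", math; /* fall back to the browser's default math font */
    }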
> The size of the circle on the contour integral is a matter of personal preference, but it just depends on the typeface and is orthogonal to the choice of LaTeX/MathML.
Indeed! That was precisely the point of my previous comment.
I agree that switching to Latin Modern resolves some of the minor issues I mentioned earlier. However, it does not resolve all of them. In particular, it does not address the spacing concerns I mentioned. For example, compare the following on <https://mk12.github.io/web-math-demo/> with Latin Modern selected:
\sum_{q \le x/d}
Or:
\sum_{d \le \sqrt{x}}
The difference in spacing is really small but it is noticeable enough to bother me. Also, this is just one of several examples where I wasn't happy with the spacing decisions in MathML rendering. The more time I spent with MathML, the more such minor annoyances I found. Since KaTeX produces the spacing and rendering quality I am happy with, out of the box, I have continued using it.
Also, my goal isn't to replicate LaTeX's spacing behaviour faithfully. I just want the rendered formulas to look good, close to what I find in print or LaTeX output, even if it's a bit different. It so happens that I find myself often bothered by some of the spacing decisions in edge cases when using MathML, so I tend to just stick with MathJax or KaTeX.
But that's just me. All of this may seem like nitpicking (and it certainly is) but when I'm spending my leisure time maintaining my personal website and blog or archiving my mathematics notes, I want the pages to look good to me first, while still looking good to others. If MathML output looks good to others with certain fonts, that's a perfectly valid reason to use it.
The example with x/d seems to be wrong because the compiler inserts a redundant <mrow> around the slash operator. Temml seems to render it better. (There is still spacing, unlike the LaTeX version, but honestly I prefer that.)
You are right. Getting rid of the stray <mrow> around '/' does make the spacing better. Also, today I learnt about Temml. It looks very interesting and I'll be trying it out. Thanks for this nice discussion!
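For anyone curious what that looks like in the markup, the difference is roughly the following (a hand-written illustration of the subscript, not the demo's exact output; per the comments above, the extra wrapper is what throws off the spacing):

    <!-- \sum_{q \le x/d} with a redundant <mrow> around the slash -->
    <msub>
      <mo>∑</mo>
      <mrow>
        <mi>q</mi><mo>≤</mo><mi>x</mi><mrow><mo>/</mo></mrow><mi>d</mi>
      </mrow>
    </msub>

    <!-- The cleaner form, without the extra wrapper -->
    <msub>
      <mo>∑</mo>
      <mrow>
        <mi>q</mi><mo>≤</mo><mi>x</mi><mo>/</mo><mi>d</mi>
      </mrow>
    </msub>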
Let's say you do have a positive test for pancreatic cancer. The overall 5-year survival rate is 12%, and unlike with other cancers, people continue to die after that point. Basically, it is almost a death sentence if it is a true positive. Early detection will increase your odds a bit and prolong your remaining expected lifetime, but even with stage 1 pancreatic cancer, only 17% survive to 10 years. Let's say you are one of the 99% of false positives, because everyone gets tested in this hypothetical scenario. Let's say imaging and biopsy look clean. No symptoms (which you typically don't have until stage 3 with pancreatic cancer, where it is far too late anyway). With the aforementioned odds, what would you do?
Panic? Certainly, given that if it is a real positive, you might as well order your headstone.
Get surgery to remove your pancreas? Well, the anesthesia alone has a 0.1% chance of killing you, and the surgery might kill 0.3% in total. No pancreas means you will instantly have diabetes, which cuts your life expectancy by 20 years.
Start chemotherapy? Chemo is very dangerous, and there is no chemo mixture known to be effective against pancreatic cancer; usually you just go with the aggressive stuff. It is hard to come by numbers as to how many healthy people a round of chemo would kill, but in cancer patients it seems that at least 2% and up to a quarter die in the 4 weeks following chemotherapy (https://www.nature.com/articles/s41408-023-00956-x ). And chemotherapy itself has a risk of causing cancers later on.
Start radiation therapy? Well, you don't have a solid tumor to irradiate, so that is not an option anyways. But if done, it would increase your cancer risk as well as damage the irradiated organ (in that case probably your pancreas).
So in all, from 100 positive tests you have 99 false positives in this scenario. If just one of those 99 false positives dies of any of the aforementioned causes, the test has already killed more people than the cancer ever would have. Even if no doctor would do surgery, chemotherapy or radiation treatment on those hypothetical false positives, the psychological effects are still there and may already be deadly enough on their own.
So it is a very complex calculation to decide whether a test is harmful or beneficial, especially for extreme types of cancer.
"Let's say you are one of the 99% of false positives, because everyone gets tested in this hypothetical scenario."
This alone is a disqualifier for your scenario. A test where 99 percent of positives are false will not be widely used, if at all. (And the original Galleri test that the article was about is nowhere near that value, and it is not intended to be used in low-risk populations anyway.)
I am all for wargaming situations, but come up with some realistic parameters, not "Luxembourg decided to invade and conquer the USA" scenarios.
> Nope, there is another important thing that matters: some of the cancers tested are really hard to detect early by other means, and very lethal when discovered late.
You are arguing for testing everyone there. If you cannot detect them by other means, you need to test for them this way, and do it for everyone. You have already set up the unrealistic wargaming scenario. You picked pancreatic cancer as your example, where you do have to test every 6 months at least, because if you test less often, the disease progression is so fast that testing is useless. There are no specific risk groups for pancreatic cancer beyond a slight risk increase from "the usual all-cancer risk factors". Nothing to pick a test group by.
And a 99% overall false-positive rate is easy to achieve; lots of tests that are in use have this property if you just test everyone very frequently. Each instance of testing has an inherent risk of being a false positive, and if you repeat that for each person, their personal false-positive risk of course goes up with it. Any test that is used frequently approaches a 100% cumulative false-positive rate.
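As a quick sketch of that cumulative effect (illustrative numbers, not tied to any particular test): if each test has an independent per-test false-positive probability p, the chance of at least one false positive after n tests is 1 - (1 - p)^n, which creeps towards 1 as n grows.

    // Chance of at least one false positive after n independent tests,
    // each with per-test false-positive rate p (illustrative sketch only).
    const atLeastOneFalsePositive = (p: number, n: number): number =>
      1 - Math.pow(1 - p, n);

    console.log(atLeastOneFalsePositive(0.01, 40));  // ~0.33 after 20 years of twice-yearly testing
    console.log(atLeastOneFalsePositive(0.01, 200)); // ~0.87 -- well on its way to 1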
Are you mistaking me for someone else? I never said or even implied that.
"And a 99% overall false positive rate is easy to achieve,"
Not in the real world; any such experiment will be shut down long before the asymptotic behavior kicks in. Real healthcare does not have unlimited resources to play such games. That is why I don't want to wargame them; it is a "Luxembourg attacks the US" scenario.
"There are no specific risk groups for pancreatic cancer"
This is just incorrect: people with chronic pancreatitis have a massively increased risk of developing pancreatic cancer (16x IIRC). There also seems to be a hereditary factor.
The Czech healthcare system has, in fact, had a limited pancreatic cancer screening program since 2024, for people who were identified as high-risk.
Prolonging the expected lifetime by several years nontrivially improves chances of surviving until better drugs are found, and ultimately long term survival. Our ability to cure cancers is not constant, we're getting better at it every day.
Even so. The current first-line treatment for pancreatic cancer is surgery, because chemo doesn't really help a lot; chemo alone is useless in this case. So any kind of treatment that has a hope of working involves removing the pancreas.
Take those 99% false positives. If you just remove the pancreas from everyone, you remove 20 years of lifetime each through severe diabetes. In terms of lost life expectancy (99 × 20 = 1,980 years, roughly 25 lifetimes of ~80 years), you killed up to 25 people. Surgery complications might kill one more. In all, totally not worth it, because even if you manage to save every one of those 1% true positives, you still killed more than 20 (statistical) people.
And the detection rate might be increased by more testing. But it needs to be a whole lot more, and it won't help. Usually pancreatic cancer is detected in stage 3 or 4, when it becomes symptomatic, with a 5-year survival rate below 10% (let's make it 5% for easier maths). The progression from stage 1 to stage 3 takes less than a year if untreated. So you would need to test everyone every 6 months to get detections into the stage 1 and stage 2 cases, which are more treatable. Let's assume you get everyone down to stage 1, with a survival rate of roughly 50% at 5 years and 15% at 10 years. We get a miracle cure developed after 10 years where everyone who is treated survives. So basically we get those 15% 10-year survivors all to survive to their normal life expectancy (minus 20 years, because no more pancreas). On average, they get an extra 10 years each.
Pancreatic cancer is diagnosed in 0.025% of the population each year. In the US at 300 million, that's 750k cases in 10 years. With our theoretical miracle cure after 10 years for those 15%, that is a gain of 1.125 million years of lifetime. One hour of testing time for each of 300 million people, twice a year for 10 years, already wastes about 685k years of lifetime, so over half the gain already. That calculation is already in "not worth it" territory if the waiting time for the blood-draw appointment is increased. And it is already off if you factor in the additional strain on the healthcare system and the additional deaths that will cause.
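For what it's worth, a back-of-the-envelope check of those numbers, using exactly the rough assumptions above and nothing else:

    // Rough check of the life-years gained vs. life-years spent on testing.
    const population = 300e6;            // US population, rounded
    const annualIncidence = 0.00025;     // 0.025% diagnosed per year
    const cases10y = population * annualIncidence * 10;  // 750,000 cases over 10 years
    const curedShare = 0.15;             // the 15% who reach the hypothetical cure
    const yearsGained = cases10y * curedShare * 10;       // ~1.125 million life-years gained

    const testingHours = population * 2 * 10 * 1;         // 2 tests/year, 10 years, 1 hour each
    const yearsSpentTesting = testingHours / (24 * 365);  // ~685,000 life-years spent testing

    console.log({ yearsGained, yearsSpentTesting });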
Edit: For comparison, a chest X-ray is around 0.1mSv and a chest CT around 6.1mSv, so a factor of 61 between them (https://www.radiologyinfo.org/en/info/safety-xray ). Compared to natural exposure (usually 1 to 3mSv/a), however, a chest CT isn't that bad: 2 to 3 years of natural dose, 2 polar flights, or 1 year of living at higher altitude or in Ramsar (https://aerb.gov.in/images/PDF/image/34086353.pdf ). Acute one-time dose damage has been shown above 100mSv; below that, no damage has been shown, only statistical extrapolations.
So I'd say that the risk of using a CT right away should be lower than the risk of overlooking a bleed or a clot in an emergency, where time is of the essence and the dance of "let's do an X-ray first..." might kill more patients than the cancers caused by those CTs.
Depends. Some have a paranoid mode without caching, because then a physical attacker cannot snip a cable and then use a stolen keycard as easily, or something. We had an audit force us to disable caching, which promptly went south during a power outage 2 months later when the electricians couldn't get into the switch room anymore. The door was easy to overcome, however: just a little fiddling with a credit card, no heroic hydraulic press story ;)
If you aren't going to cache locally, then you need redundant access to the server (such as LTE) and a plan for unlocking the doors if you lose access to the server.
With Signal, you can't really validate the code running on the client. Signal insists on distributing only via the Google Play Store or Apple App Store, so usually updates are automatic and not controlled by you. And Signal has a history of not releasing timely updates of their client source code, so even if you did your own builds or compared their released builds against the published code, you would have at least a few weeks of latency. And I doubt anyone would notice, since the Signal people tried hard to piss off everyone who did reproducible builds of their code.
The lack of can-do attitude can be explained by regulations, imho. After you've seen the first few trivial things take ages because somebody has yet to stamp form 23b in triplicate and hand in 1k pages of environmental impact assessment, noise studies and socio-economic impact predictions, you lose the belief that you can do anything here. After you've been stonewalled by a few bureaucrats over a missing comma in their particular interpretation of subsection b12 subparagraph d footnote 11, you start going about your days looking for excuses not to do any work as well. After several people have cited "liability" and "legal risk" as arguments against babysitting their neighbour's cat for a day, you might start fearing that nebulous liability thing yourself.
The whole culture is poisoned by regulations imho.
The previous poster has a point though. Yes, regulations are ridiculous thanks to every German and every EU government piling on more crap.
Zero argument there. 100% true.
But it's also true that things CAN get through regulations. 1000 pages of environmental impact assessment takes time. But it doesn't take years. Things can be done in parallel if someone actually gave a fuck.
Sadly no one does, because by the time anything starts, 2-3 new governments/administrators/mayors have been in place. And people don't like to work on things someone else already took credit for.
> 1000 pages of environmental impact assessment takes time. But it doesn't take years.
Oh, but it does. For example, if there is a suspected hamster population, you need at least one year of data gathering to assess the local population state, and then you need a resettlement plan for the hamsters. This alone takes at least a year because of the data gathering, and of course you need an expensive and busy hamster expert to do the gathering and writing.
Oh, and btw, that's just for permitting. After you get your permit, you have to have those hamsters professionally resettled, observed and documented.
You are right that it is theoretically still possible to get stuff done. A prime example is Elon Musk's Gigafactory in Brandenburg, where there was enough political and economic pressure to get it done. But that is a rare thing to happen, and lots of the steps you have to go through are out of your control and up to some bureaucrat who is of course "very busy" and "cannot at this time give an estimate as to when the permit might be completed". It is just hard to convey how bad it really is...
Wrong. Wrong. Wrong. I'm tired of hearing this age-old propaganda tune over and over again.
Germany had plans (before the Schroeder government laid the foundation for the whole nuclear shutdown) to build new and additional nuclear reactors. After the initial buildup phase from 1970 to the late 1980s (the latest in operation being Neckarwestheim 2 in 1989, not counting test reactors, only 9 years before Schroeder, so not really a "long time"), most good sites had a reactor or maybe 2 or 3. The plan then was to replace the oldest ones and add a few more to existing sites, starting in the late 1990s when the first reactors approached an age of 30, with each to be replaced by a finished replacement reactor on the same site at age 40, before 2010. Those plans included pebble bed reactors (https://de.wikipedia.org/wiki/Kernkraftwerk_THTR-300 unsuccessful due to technical problems), fast breeder reactors (https://en.wikipedia.org/wiki/SNR-300 unsuccessful due to green opposition) and improved PWRs (https://en.wikipedia.org/wiki/EPR_(nuclear_reactor) co-developed with France; nowadays a few have come online).
The reason why nobody wanted to build them was green opposition. This started before Chernobyl, for example in opposing the Wackersdorf reprocessing plant https://en.wikipedia.org/wiki/Wackersdorf_reprocessing_plant and blocking the refueling operations of existing plants. The green party never got past 10%, but that hardly mattered, because the parties in government accepted their demands out of fear of strengthening them, because they needed them for a coalition, or because after Chernobyl saying anything positive about nuclear power became political suicide. Misinformation was rampant; any German PWR was equated to a Chernobyl in waiting. Experts disagreed and were ignored by media and politicians, shouted down by the greens as industry minions wanting to poison us all.
The reactors shut down under Schroeder were quite profitable, but getting old enough that they would have been switched off soon anyway. Nuclear reactors become more and more profitable over time, because most of the cost is in the initial construction and the financing. After the building is paid off, running cost is quite low; fuel cost is negligible compared to personnel, for example. But at some point, repairs, downtime and necessary improvements make it too costly after all. That's when the originally intended replacement should have started, but this was stopped by the Schroeder government and the Greens.
And while I don't know whether the claims of Russian influence on and financing of green movements are true or not, it would be logical. Russia never had any chance to export its nuclear technology to western countries. Western nuclear power plants were, at least since the 80s, safer and better. The only thing the west could have bought (and actually does still buy) from Russia is uranium. But that is by far a smaller export for Russia than oil and gas. And there are uranium reserves in many western countries: Canada, Australia, the US, Germany and the Czech Republic have large deposits that are only partially exploited, and many other (third-world) countries have uranium mines and do export (which is why the west buys there; it's just cheaper). So uranium isn't really a reliable or big business for Russia. Oil and gas, however, are. And since oil and gas are high-volume goods, imports are far less flexible than uranium imports. Basically, if you want it cheap, you need a pipeline, which is the perfect leash for the Russians to hold. And lo and behold, Schroeder, while making plans to shut down all German nuclear power plants over time, planned to increase gas imports from Russia, which was upheld during the later Merkel years. Schroeder was, after his term, rewarded for this with a position at Russia's state gas producer Gazprom. So it would be in Russia's interest to reduce nuclear power use in Europe and get Europe dependent on their gas.
Btw. the meaning of "green" has changed. Back in the Schroeder days and before, green was largely pro-environment and anti-nuke. But CO₂ emissions and global warming weren't a huge topic. Open-pit coal mining and coal plants were opposed on grounds of landscape destruction, resettlement and pollution. But CO₂ was never the big topic that it is nowadays. Therefore, back in those days, even for the Greens, "clean" gas power plants were a viable replacement.
I don't think production cost is the big issue. German cars were always premium-priced compared to what you could get from a Japanese, French or US carmaker.
The big problem imho is that due to greed and technical incompetence (especially regarding electronics and software), quality and value have gone down. The high prices are no longer justified, and customers are drawing the logical conclusion.