As someone here in the heart of it let me elucidate:
1) Residents here on the ground were told Starlink was already active in their area: just connect through the site with your address. No one was worried about getting "retail dishes"; they were told Starlink was providing free satellite internet across the devastated areas, and all they had to do was open a website, enter their address, type $0, and boom, they're connected.
2) No one here even COULD get a dish delivered on the fly if they wanted to. Most roads are still to this day broken and impassable.
3) NO ONE HERE THOUGHT THIS WAS A STANDARD "FREE TRIAL" SITUATION. THEY THOUGHT IT WAS ACTUAL RELIEF, NO STRINGS ATTACHED, NO PERSONAL HARDWARE OBLIGATION.
Ultimately, you're making the mistake of thinking this is an article explaining a free trial for those with power, existing internet, passable roads, or existing hardware. The point this piece is making is that it was false advertising to people desperate to let their families know they weren't dead.
It's probably easy, from the comfort of your mother's basement well outside Helene's wake, to spout all of this, but a) you missed the point of the article and b) no one gave a fuck about retail dishes. They were told it was available via a website as HELP. As a kindness.
I'd love to see your source on 2,000 free dishes. Ask anyone local here whether, when the storm hit, they knew that or how to connect to them (without internet already). I guarantee no one knew until days later.
This came as a huge shock to those who only got the "free Starlink" through word of mouth here on the ground. We were told, "just go to starlink.com/activate" and type in your address. It was complete horseshit. The worst part about it was phones were in SOS mode, there was no internet anywhere, yet we could load that page to activate, which meant THERE WAS INTERNET. Just not to those who didn't pay for hardware and delivery (to places that are impossible to deliver to even today a week out). Infuriating.
Source checked. Sending in free hardware, and we're creating an update to make the service free. Oh, and after 30 days, you'll be renewed onto a normal retail plan.
Sources: Starlink on X, Musk on X, and the Starlink details page on the free service, respectively.
Good question. It's because some extreme-scale computer science and application work can only be validated with the high core counts and networking available on what is now just one single supercomputer in the U.S. That machine is now booked, leaving many unable to complete research or forced to wait until 2024. Some research can only be done with that scope of system, and people have been waiting years to test exascale software on this machine. I hope this answers the question.
I question your fundamental premise. For example, I used to use supercomputers to run molecular dynamics simulations. When I found that the supercomputer folks didn't want me to run my codes on their machines (because at the time, AMBER didn't scale to the largest supercomputers), I moved to cloud and grid computing and then built a system that used all of Google's idle cycles to run MD. We achieved our scientific mission without a supercomputer! In the meantime, AMBER was improved to scale on large machines, which "justifies" running it on supercomputers (the argument being that if you spend 15% of the cost of the machine on interconnect, the code had better use the interconnect well to scale, and can't just be embarrassingly parallel).
I've seen scientists who are captive to the supercomputer-industrial complex and it's not that they need this specialized tool to answer a question definitively. It's to run sims to write the next paper and then wait for the next supercomputer. Your cart is pushing the horse.
You know the term "embarrassingly parallel," but you seem to ignore that the term exists because there are other classes of problems that lack this characteristic.
Quite a few important problems are heavily dependent on interconnects, e.g. large-scale fluid dynamics and simulations that are coupled with such dynamics: aerodynamics, acoustics, combustion, weather and climate, oceanographic, seismic, astrophysics and nuclear. A primary component of the simulation is fast wavefronts that propagate globally through the distributed scalar and/or vector fields.
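To make the interconnect dependence concrete, here is a minimal halo-exchange sketch (my own toy illustration using mpi4py, not code from any of those production solvers): every rank has to swap boundary values with its neighbors on every single timestep before it can advance, and the latency of that exchange is exactly what an expensive interconnect buys you.

```python
# Toy 1-D halo (ghost-cell) exchange with mpi4py, illustrative only.
# Run with something like: mpirun -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1024                      # interior cells owned by this rank
field = np.zeros(n_local + 2)       # +2 ghost cells, one at each end
field[1:-1] = rank                  # dummy initial data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # The update below needs fresh neighbor boundary values, so every
    # timestep stalls on this exchange; that stall is the interconnect cost.
    comm.Sendrecv(field[1:2], dest=left, recvbuf=field[-1:], source=right)
    comm.Sendrecv(field[-2:-1], dest=right, recvbuf=field[:1], source=left)
    # Simple diffusion-style relaxation using the received ghost cells.
    field[1:-1] = 0.5 * field[1:-1] + 0.25 * (field[:-2] + field[2:])
```

Scale that to thousands of ranks, three dimensions, and several coupled fields, and the network sits on the critical path of every step.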
As long as there is a future where computers are growing to increase the scope, fidelity, and speed of these applications, there is also a need for infrastructure research to validate or develop new methods to target these new platforms. There are categories of grants that are written to a roadmap, with interlocking deliverables between contracts. These researchers do not have the luxury to only propose work that can be done with COTS materials already in the marketplace.
And conversely, if your application just needs a lot of compute and doesn't need the other expensive communication and IO aspects of these new, leading-edge machines, it _does_ make sense that your work get redirected to other less expensive machines for high-throughput computing. This is evidence of the research funding apparatus working well to manage resources, not evidence of mismanagement or waste.
One thing I've learned is that even when folks think their problem can only be solved in a particular way (a fast interconnect to implement the underlying physics), there is almost always another way that is cheaper and still solves the problem, mainly by applying cleverer ideas.
I'll give (yet another) AMBER example. At some point in the past, AMBER really only scaled on fast interconnects. But then somebody realized the data being passed around could be compressed before transmit and decompressed on the other end, all faster than it could be sent over the wire. Once the code was rewritten, the resulting engine scaled better on all platforms, including ones that had wimpy (switched gigabit) interconnects. It reduced the cost of doing the same experiments significantly, by making it possible to run identical problems on less/cheaper hardware.
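I don't know the internals of that particular AMBER change, but the general trick looks roughly like this toy sketch (mine, not AMBER's actual code): spend CPU cycles shrinking the buffer whenever the compress/decompress round trip is cheaper than pushing the extra bytes through a slow link.

```python
# Toy "compress before you send" sketch (not AMBER's actual code).
# The win shows up when the link is slow relative to the CPU.
import zlib
import numpy as np

def pack(coords: np.ndarray) -> bytes:
    """Compress a coordinate buffer before handing it to the transport layer."""
    return zlib.compress(coords.astype(np.float32).tobytes(), level=1)

def unpack(payload: bytes, n_atoms: int) -> np.ndarray:
    """Inverse of pack(), run on the receiving rank."""
    return np.frombuffer(zlib.decompress(payload), dtype=np.float32).reshape(n_atoms, 3)

# Random floats are a worst case for zlib; real trajectory data (deltas,
# fixed-point coordinates) compresses far better than this.
coords = np.random.rand(100_000, 3).astype(np.float32)
wire = pack(coords)
print(f"raw: {coords.nbytes} bytes, on the wire: {len(wire)} bytes")
assert np.allclose(unpack(wire, len(coords)), coords)
```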
Second, I really do know a fair amount in this field, having worked on both AMBER on supercomputers (with strong scaling) and Folding@Home (which explicitly demonstrated that many protein folding problems never needed a "supercomputer").
I do not know much about your field of molecular dynamics. But it is my lay understanding that it tends to have aspects of sparse models in space, almost like a finite-element model in civil engineering. On top of this, you have higher-level equations and geometry to model forces or energy transfer between atoms. It may involve quadratic search for pairwise interactions and possibly spatial search trees like k-d trees to find nearby objects. Is that about right? And protein folding is, as I understand it, high-throughput because it is a vast search or optimization problem on very small models.
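Concretely, the kind of neighbor search I'm picturing is something like this toy cutoff-based pair search (my own illustration, not anything from a real MD engine):

```python
# Toy cutoff-based pair search with a spatial tree (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 50.0, size=(10_000, 3))   # fake "atom" coordinates
cutoff = 2.5                                            # interaction radius

tree = cKDTree(positions)
pairs = tree.query_pairs(r=cutoff)   # set of (i, j) index pairs within the cutoff

# A real engine would evaluate pairwise forces/energies over these pairs
# instead of the naive O(N^2) double loop over all atoms.
print(f"{len(pairs)} interacting pairs out of {10_000 * 9_999 // 2} possible")
```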
Compared with fluid dynamics, I think your problem domain has much higher algorithmic complexity per stored byte of model data. Rather than representing a set of atoms or other particles, typical fluid simulations represent regions of space with a fixed set of per-location scalar or vector measurements. A region is updated based on a function that always views the same set of neighbor regions. Storage and compute size scale with the spatial volume and resolution, not with the amount of matter being simulated. These problems are closer in spirit to convolution over a dense matrix, which often has so few compute cycles per byte that it is just bandwidth-limited in ripping through the matrix and updating values. But, due to the multiple dimensions, the traversal is also uglier than a simple linear streaming problem.
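The dense-grid picture I mean is roughly this single-node toy (a production solver would distribute the grid across ranks and be limited by memory and network bandwidth, not arithmetic):

```python
# Toy 2-D stencil sweep: each cell is refreshed from a fixed set of neighbors.
# Very few arithmetic operations per byte touched, so memory bandwidth dominates.
import numpy as np

def jacobi_step(u: np.ndarray) -> np.ndarray:
    """One sweep averaging the four nearest neighbors of every interior cell."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return new

grid = np.zeros((1024, 1024))
grid[0, :] = 1.0                    # a fixed boundary condition along one edge
for _ in range(50):
    grid = jacobi_step(grid)
print(grid[1:4, 1:4])               # interior values relaxing toward the boundary
```

Note that the cost scales with the grid dimensions you choose, not with how much "stuff" is in the simulated volume.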
This is why I wish this thread hadn't gone more mainstream.
It's nuanced and specific to how things work for researchers at labs.
I do not expect non-HPC folks to get why it is a big deal and why the strong language was needed. Intel failed the science community in the U.S. that relies on the limited number of systems large enough to handle the very few applications that consume massive numbers of cores and massive parallelism. That's not the whole of science, but this system was central to some of the grandest-scale problem solving that exists (think planet-scale climate simulations in high resolution).
I respect your opinion, but my opinion wasn't for you. It was for the HPC community.
You know that many of us here are HPC folks who know the nuances... and simply disagree? I mean, sure, I don't think Intel should have taken the contract or gotten any positive PR for this, but at the fundamental level, many people on this site did supercomputing and HPC at national labs or universities, and now work on machine learning HPC on the cloud. My experience spanning both makes me think that chasing time on the fastest supercomputers is not the best way for scientists to be productive.
Cheers to the best in the business, here's wishing Ian incredible success. Thanks for so much great reporting and analysis over the years. Few understand how challenging such a job is, having your daily work product (which has to balance technical depth and reader appeal) on stage for the world to grill, year in and out. Ian nailed it with depth, energy, and humor. Can't wait to see what's next from the good Dr. Cutress. - N
"Here’s the thing: That there won’t be one overlay to Earth as you are seeing it. There will be there will be millions of overlays and millions of alternative universes. And people will build some, but AI will build a lot of them. Some of these Omniverse worlds will be some of model our own world, which is the digital twin; some will model nothing like our own world. Some will be temporary worlds, while we’re working on a more persistent world – just like we have scratchpad memory in a supercomputer, there will be scratchpad Omniverse worlds."
EPYC is actually out, so people can actually play with it.
POWER9 made a big splash with Summit, and POWER9 is no bandwidth slouch either. POWER9 also had Talos II, so "normies" could actually get a machine to play with for under $10k.
Raptor Computing (makers of Talos) haven't said anything about POWER10 support yet, so it's hard to get excited about a product that seems impossible to use.
> IBM needs to understand that if there's no easy onboarding to a platform, it becomes a legacy one, as new things are built elsewhere.
You're absolutely right. $10k is a huge amount of money to spend on a computer, even a workstation. Most startups can't afford that, so they will never run POWER hardware, and the startups of today are the big corporations of tomorrow. POWER will likely disappear just like "minicomputers" did.
This makes me think of Apple making their ecosystem increasingly locked down and restrictive. The result is that Apple devices become less and less fun for developers to play with, so more developers will migrate toward Linux. Those people will then be less likely to develop software for Apple.
> POWER will likely disappear just like "minicomputers" did.
They'll stick around for IBM i, but there isn't much AIX can do that Linux on Xeon can't do just as well. AIX is not like their mainframe business (and even there, they have surrendered to Linux).
> These people will then be less likely to develop software for Apple.
They still have a pretty good UX for developers and entry-level machines. I'll probably replace my aging Mac Mini with an ARM-based one next year or so. What would really upset me is if MacPorts were no longer supported on ARM. I hope Apple is sponsoring someone to ensure key parts of the third-party ecosystem are there, or Macs will become second-class developer machines.