They mentioned that most modern NICs have hardware timestamping capabilities, which is consistent with what I've seen. The question is the availability of PTP-ready and PTP-enabled routers/switches capable of acting as a Transparent Clock. I have two concerns: most existing routers/switches are not PTP ready, and even for the ones with PTP capabilities, the PTP-related features may not be enabled in production. Any experience or numbers to share? Thanks!
All recent data center switches I've seen advertise PTP; maybe it doesn't work when you turn it on though. Meta makes their own switches so they can enable and debug the software.
From my understanding, this new iteration is basically better than their previous chrony-based generation because the uncertainty in the one-way delay calculation is largely removed by having Transparent Clocks that report their queuing delays. So the asymmetry of the delays is essentially gone? My rough mental model of the math is sketched below.
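This is only my own illustration of the standard two-way PTP exchange with a correctionField, not anything taken from the article, and the numbers are made up:

    # Rough sketch of the PTP offset math (my own illustration, not Meta's code).
    # t1: master sends Sync, t2: client receives it,
    # t3: client sends Delay_Req, t4: master receives it.
    # The classic two-way exchange assumes the path delay is symmetric:
    #   offset = ((t2 - t1) - (t4 - t3)) / 2
    # Transparent Clocks add the residence (queuing) time they introduce to the
    # correctionField, so the client can subtract it and what remains is much
    # closer to the fixed wire/fiber delay in each direction.

    def offset_estimate(t1, t2, t3, t4, corr_ms=0.0, corr_sm=0.0):
        """corr_ms / corr_sm: summed correctionField for the master->slave and
        slave->master paths, as reported by Transparent Clocks along the way."""
        fwd = (t2 - t1) - corr_ms   # master->slave delay minus queuing
        rev = (t4 - t3) - corr_sm   # slave->master delay minus queuing
        return (fwd - rev) / 2      # only the residual asymmetry is left

    # Example: clocks actually in sync, 105 us wire delay each way, plus 10 us
    # of queuing on the forward path only. The naive estimate is off by ~5 us;
    # with the correction it collapses back to the true offset of 0.
    print(offset_estimate(0.0, 115e-6, 200e-6, 305e-6))                 # ~5e-06
    print(offset_estimate(0.0, 115e-6, 200e-6, 305e-6, corr_ms=10e-6))  # ~0.0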
Personally, I don't think Chrony should be pigeonholed as just an NTP implementation, as it can clearly utilize those PTP hardware timestamps as well, and those hardware timestamps are the "secret sauce" behind PTP's high accuracy. With PTP-enabled switches, together with the fact that Chrony can already send NTP packets as PTP packets, Chrony can surely leverage these new capabilities too with some reasonable updates. Something along the lines of the config sketch below.
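A minimal chrony.conf sketch of what I mean, going from my recollection of recent chrony releases (the server name is just a placeholder; check the docs for your version):

    # Use NIC hardware timestamping on every interface that supports it
    hwtimestamp *
    # Exchange NTP messages wrapped in PTP (IEEE 1588) packets on UDP port 319,
    # so PTP-aware switches / Transparent Clocks treat them as PTP traffic
    ptpport 319
    # Placeholder server; xleave enables interleaved mode so the hardware
    # transmit timestamps can actually be used
    server time.example.com iburst xleave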
In 2022, there were not many international matters that the US, the EU and China could agree on at the same time. Getting rid of the stupid leap second is one of those rare ones: the US, the EU and China all agreed that the leap second should be eliminated. It is a disaster.
The whole leap second disaster is just beyond imagination: inserting a full second into systems during business hours in Asia, when some of the world's largest exchanges are in their trading sessions! And this across hundreds of millions of time-sensitive devices manufactured by tens of thousands of different vendors at vastly different skill levels!
Compared with the leap second invention, the Y2K problem looks downright harmless.
Wait until you hear about the leap years! They insert a whole day, often right in the middle of the work week!
Too bad there weren't any computers around at the time, or software developers might have convinced Julius Caesar what a disaster and source of bugs it would be for centuries to come. He might have dropped the whole idea.
> Wait until you hear about the leap years! They insert a whole day, often right in the middle of the work week!
That is pretty fine-tuned, given they only insert a full day. ;)
In the Chinese lunar calendar, and probably in other calendars as well, they insert a full month, known as the leap month. Yes, you heard me right: 13 months in such leap years.
> Too bad there weren't any computers around at the time or software developers might have convinced Julius Caesar what a disaster and source of bugs that will be for centuries to come. He might have dropped the whole idea.
Indeed. Such 2000-year-old garbage is just not very compatible with a modern way of life in which lots of things keep changing. From memory, in a few years the definition of the second will also be reviewed by the international community; the current definition, based on some funny behavior of the caesium atom, is no longer the best available. UTC is another drama that deserves more care.
One major difference: a leap day is not inserted by rewinding monotonic clocks by 24*60*60 seconds, whereas leap seconds are handled exactly that way by many time sources.
I think the concept itself is fine, but software developers screwed up implementing it when designing Unix and NTP time, and in how operating systems handle hardware clocks.
Now there are Unix and NTP timestamps that don't refer to a unique point in time, because the clocks were rewound by a second when a leap second occurred. Somehow nobody thought it would be a good idea for Unix and NTP time to also be rewound by a whole day after a leap day occurs. A quick way to see the ambiguity is sketched below.
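A small demonstration of the ambiguity around the 2016-12-31 leap second, using plain POSIX-style calendar arithmetic (my own example, nothing more):

    import calendar

    # POSIX/Unix time pretends leap seconds don't exist, so in this naive
    # conversion the leap second 2016-12-31 23:59:60 UTC and the following
    # instant 2017-01-01 00:00:00 UTC collapse onto the same timestamp.
    # (Real kernels typically step or slew the clock instead, with the same
    # net effect: the timestamp is not unique.)
    leap  = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))
    after = calendar.timegm((2017,  1,  1,  0,  0,  0, 0, 0, 0))
    print(leap, after, leap == after)   # 1483228800 1483228800 True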
At least Caesar had a plan for managing the change. Gregory sure didn't and Alaska left Russia on Saturday, 7 October 1867 but didn't join the US until Friday, 18 October 1867. Lawless anarchy for 11 days because of meddling with the calendar!
I don't understand - the debugger in Turbo C was such a delight, and inspecting structs, linked lists, etc. was so easy, and somehow it all fit nicely on those old screens.
In VS today everything feels so cumbersome and convoluted that I almost never bother with the debugger and just look at stack traces instead.
Glad you mentioned Meituan, it is a perfect example of how low you can go to survive in business.
Let's be honest, Meituan is mostly used by poorly educated & low-income people who don't know anything about food safety & personal hygiene. Quite often you get "food" delivered from some dark-alley kitchen that is home to tons of mice & cockroaches. There are a crapload of videos uploaded by those Meituan delivery guys showing people how awful that "food" is & the places where it gets prepared, e.g.
Great story. For me, another interesting part is that lots of the tools/utils used in their dev work were copied over from that dude's home - surely that is very reproducible & auditable.
We're talking early '90s here. Security was a thing of course; it meant you set a five-character root password on your FTP server ;-). MD5 wasn't even around, so you had to trust that your source tarballs were not tampered with, whatever their origins.
So whether I brought them from home or that company (if it had internet at all...) pulled them from gnu.org probably wouldn't have made a material difference. It was one of the reasons there was a big antipathy towards free software, at least with the vendor tapes you had someone to sue if they got tampered with.
There is a slight risk with auditability, but were it me in the mid '90s, I'd be honoured to hire someone who is excited enough to keep source copies of the GNU coreutils at home.
Someone who is eager and creative like that is unlikely to be a sociopathic jobsworth, i.e. the type most likely to steal secrets or undermine your business.
In 1995/96 I was the first tech employee at a startup. We had a Solaris server at the core of our network, and needed a C compiler.
Paying Sun whatever stupid amount of money they wanted didn’t seem to make sense, GNU still didn’t have their own domain, and for whatever reason I couldn’t find a gcc binary for Solaris to download (probably related to the terrible state of web search engines at the time). So I visited my university sysadmin and copied gcc from his Solaris network to use to compile our own fresh copy.
It’s sometimes hard to remember just how bad things used to be.
Mine was on a couple of dozen floppies from a BBS.
Downloading them over my 28.8kbps modem was so expensive and tedious and unreliable that I put my PC in the car, along with a stack of floppies and the 15" CRT monitor, mouse, and IBM Model M keyboard, drove six hours to the opposite side of the country, to where my friend who ran the BBS lived.
We spent the weekend copying floppies and installing very early Slackware on a couple of machines.
Thanks for confirming that my approach of asking a local admin with internet access was the right way to go.
Still... had to load the tape on our office AIX machine (the only Unix box I had access to), then wire up a null modem cable to a PC, install Kermit on both ends, transfer all the floppy images, find an MS-DOS image writer, and finally copy the works onto that stack of floppies.
Yes, good times! I have the experience of copying games across multiple floppies; though it might be seen as an inconvenience tech-wise, it came with other things that were fun: travelling to a friend's place, yapping while the copy was running. Cutting an archive into pieces and joining them back together was a small personal learning exercise, and more. Thanks for the story!