It's not useful, nor is it open source. It's a leaked copy of the Windows source code from a long time ago that has been hacked on. You can look up the "history" of it and how it has symbol names for companies with whom Microsoft worked to solve specific application issues. Microsoft knows that no company would be crazy enough to rely on it, so no point in getting bad press trying to kill it.
"It's a leaked copy of the Windows source code from a long time ago that has been hacked on."
That's a claim that, at best, hasn't had enough motivation gathered behind it to clarify anything, and at worst is just plain malicious FUD. It doesn't even matter, when you think about it. Even if we assume a legal accusation of leaked-resembling code somewhere in that pile of modules, all the ReactOS team needs to do is formally contract independent teams to reimplement any of the allegedly guilty modules. In that light, such an accusation is very unlikely to ever be made, and there probably won't be even a half-baked effort to officially demand clarification and put any potential issues to rest. So, for those people out there with an interest in painting ReactOS as plagued by leaked code, entertaining this uncertainty is all they'll do, nothing more.
"You can look up the «history» of it and how it has symbol names for companies with whom Microsoft worked to solve specific application issues."
I assume you aren't going to provide a reference yourself to whatever you're alluding to in that public repository?
Not hard to figure this one out: an untraceable connection to any other party anywhere in the world; notably, his handler and anyone wanting to know how much a favor will cost.
> So you're saying Chernobyl wasn't caused by hurried and ill-prepared testing of a safety feature they failed to verify during initial startup and instead had to do years later during regular operation?
I'll say it wasn't. You're right about the issues; however, the root cause comes from a culture that views any form of problem as a personal fault. These guys were just the end result of a long list of failures. When you treat any form of failure as personal, there is a perverse incentive to ignore or hide faults until it all lines up, i.e. the Swiss-cheese model. The people running Chernobyl shouldn't have been in that position in the first place: untrained, ill-informed, and under the gun to perform.
It's supposed to go wrong. Government isn't worth anything. Once they "prove" how bad it is, they will bring in corporations that will do it, and they will be able to do it for a profit without having to deal with all those pesky rules that keep us safe.
> C was created to rewrite UNIX from its original Assembly implementation
I don't think I can go with that one. C was created in the same period as people were trying to find a way to create a common platform, but C was more about solving the problem of not having a higher-level language available for the low-end hardware (i.e. the PDP-11, etc.) of the time. Ritchie wasn't trying to reinvent the wheel.
He would have been happy to use other languages, but they were either designed for large platforms that needed more resources (Fortran) or looked likely to be locked behind companies (IBM's PL/I). Ritchie considered BCPL, which at the time had a design that made it pretty easy to port if your computer was word-based (the same bit-width for all numbers regardless of purpose). But minicomputers were moving towards byte-based data and word- or multi-word-based addressing. Plus, minicomputers had poorer hardware to keep them cheap, so typing on them meant more physical work.
A lot of UNIX design came from trying to use less: less memory, less paper, less typing. Ritchie tried to simplify BCPL to be less wordy by making B, but ultimately decided to jump to the next thing by making a language that would require as few keystrokes as possible. That's why C is so symbolic: what is the least amount of typing needed to express the concept? That made it pretty easy to translate to a fixed set of assembly instructions; however, it hasn't had a symbiotic relationship with assembly.
If anything, it is the reverse. Just look at the compiler and all of the memory addressing it has to know about. Look at any reasonably complex program and all of the compiler directives handling platform exceptions. C++ really took failure modes to the next level. My favorite is "a = b/*c;" Is that "a equals b divided by the value pointed at by c" or "a equals b" with a comment? I left C++ a long time ago because I could take code that would compile on two different platforms and get totally different behavior.
I think all of this drama comes down to the simple fact that there are a bunch of people content to live in a one-language-dominated environment, and the head of Linux doesn't want to decide whether that is or isn't the mandate; however, by not taking sides, he has effectively taken the one-language mandate. Rust needs to reimplement Linux.
> C++ really took failure modes to the next level. My favorite is "a = b/*c;" Is that "a equals b divided by the value pointed at by c" or "a equals b" with a comment?
That is a really bizarre complaint, especially since it cites a comment that is perfectly valid K&R C, and is just as "ambiguous" there. The answer, of course, is that it is an assignment of the value of b to a variable called a, followed by a comment. "/*" is always the start of a block comment.
Since C99 (and since forever in C++) there is also the new-style comment, //, which comments out the rest of the line, and this in fact broke certain older C programs: `a = b//* my comment*/ c;` used to mean `a = b / c;` in C89, but means `a = b` in C++ or C99.
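For anyone who wants to poke at it, here is a small self-contained C program (variable names invented for the example) showing both the `b/*c` lexing and, in comments, the C89-vs-C99 difference:

```c
#include <stdio.h>

int main(void) {
    int two = 2;
    int *c = &two;
    int b = 10;
    int a;

    a = b/*c; everything from the slash-star here is one block comment */;
    printf("%d\n", a);   // prints 10: the statement was just "a = b;"

    a = b / *c;          // whitespace keeps "/" and "*" separate tokens
    printf("%d\n", a);   // prints 5: b divided by the value c points at

    // The C89-vs-C99 case:  a = b//* old comment */ c;
    //   Strict C89 has no // comments, so it lexes as b, /, block comment, c
    //   -> a = b / c;   C99/C++ instead treat // as eating the rest of the line.
    return 0;
}
```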
Well, it is kinda weird to take what would otherwise be perfectly legitimate and meaningful syntax and make it a comment. E.g. Pascal uses (* *) instead, and there's no other construct in the language where those can legitimately appear in this order.
Sure, but it's still a choice that C made, long before C++, so it's bizarre to see it in reference to how much worse C++ is.
As for the actual syntax itself, I do wonder why they didn't use ## or #{ }# or something similar, since # was only being used for the preprocessor, whereas / and * were much more common.
/* */ is a PL/I thing that somehow ended up in B. I suspect that Ritchie just wanted multiline comments (which BCPL didn't have - it only had // line comments), and just grabbed the syntax from another language he was familiar with without much consideration.
Or maybe he just didn't care about having to use whitespace to disambiguate. The other piece of similarly ambiguous syntax in B is the compound assignment, which was =+ =- =* =/ rather than the more familiar C-style += etc. So a=+a and a= +a would have different meanings.
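For what it's worth, the old spellings still tokenize in today's C, just with the "new" meaning, which is exactly that whitespace trap (illustrative only; compilers may or may not warn about it):

```c
#include <stdio.h>

int main(void) {
    int a = 5;

    a =- 1;   /* B and very early C: a -= 1.  Modern C lexes this as a = -1 */
    a =+ 2;   /* B and very early C: a += 2.  Modern C lexes this as a = +2 */

    printf("%d\n", a);   /* prints 2 with a current compiler */
    return 0;
}
```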
That is the usual cargo-cult story of C. Systems programming languages go back to JOVIAL in 1958 and NEWP in 1961, one of the first systems programming languages with intrinsics and unsafe code blocks.
You surely aren't advocating that hardware predating the PDP-11 by a decade is more powerful.
There is enough material showing that, had UNIX been a commercial product instead of free-beer source code, C would most likely have been just another systems language lost in the mists of time.
> You surely aren't advocating that hardware predating the PDP-11 by a decade is more powerful.
That's correct. The PDP-11 used for the first Unix system had 24KBytes of memory, and no virtual memory. The kernel and the current running process had to both fit in 24KB. This PDP-11 minicomputer was vastly underpowered compared to ten year old mainframes (but was also far less expensive). The ability of Unix to run on such underpowered (and cheap) machines was a factor in its early popularity.
BCPL was first implemented on an IBM 7094 running CTSS at Project Mac at MIT. This was one of the most powerful mainframes of its era. It had 12× the memory of the Unix group’s PDP-11, plus memory protection to separate kernel memory from user memory. One of the historical papers about C noted that a BCPL compiler could not be made to run on the PDP-11 because it needed too much memory. It needed to keep the entire parse tree of a function in memory while generating code for that function. C was designed so that machine code could be generated one statement at a time while parsing a function.
> It is like witchcraft seeing someone produce VGA out with some raw/more tangible chips.
I think what you're missing is: VGA was designed in the era when this was A Thing™. Monochrome/NTSC/CGA/EGA/VGA displays are all about "bit banging," sending signals at the right time. If you can send 1s and 0s faster than the analog receiver can respond, you can "fake" voltage potentials. I say "fake" because that was actually a way to do it before digital-to-analog converters were easy to implement. Today, we can easily produce custom chips for the purpose; however, "in the beginning" it really was all about timing.
The witchcraft for me was the fact that while older cards used bit-banging to get signals out the door, it was generally designed with a specific purpose (thus specific timings). If you can get access to the underlying timing control, it [opens a whole new world that will surprise people today](https://www.youtube.com/watch?v=-xJZ9I4iqg8).
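To make the "it's all timing" point concrete, here is a rough sketch of the standard 640x480@60Hz VGA line and frame structure. The gpio_* and wait_pixels helpers are made up for the example, and real code would need cycle-accurate timing since the pixel clock is roughly 25.175 MHz (~40 ns per pixel):

```c
/* Sketch of bit-banged 640x480@60Hz VGA timing (hypothetical helpers). */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

void gpio_set_hsync(bool level);   /* assumed: drives the HSYNC pin              */
void gpio_set_vsync(bool level);   /* assumed: drives the VSYNC pin              */
void gpio_put_rgb(uint8_t rgb);    /* assumed: drives the analog color pins      */
void wait_pixels(int n);           /* assumed: waits n pixel clocks (~40 ns each) */

/* One scanline: 640 visible + 16 front porch + 96 sync + 48 back porch = 800 clocks. */
static void scanline(const uint8_t *pixels, bool visible)
{
    if (visible) {
        for (int x = 0; x < 640; x++) { gpio_put_rgb(pixels[x]); wait_pixels(1); }
    } else {
        wait_pixels(640);
    }
    gpio_put_rgb(0);            /* color must be black during blanking  */
    wait_pixels(16);            /* horizontal front porch               */
    gpio_set_hsync(false);      /* sync pulse (active low in this mode) */
    wait_pixels(96);
    gpio_set_hsync(true);
    wait_pixels(48);            /* horizontal back porch                */
}

/* One frame: 480 visible + 10 front porch + 2 vsync + 33 back porch = 525 lines. */
void frame(const uint8_t framebuffer[480][640])
{
    for (int y = 0; y < 480; y++) scanline(framebuffer[y], true);
    for (int y = 0; y < 10; y++)  scanline(NULL, false);
    gpio_set_vsync(false);
    for (int y = 0; y < 2; y++)   scanline(NULL, false);
    gpio_set_vsync(true);
    for (int y = 0; y < 33; y++)  scanline(NULL, false);
}
```

The monitor is never told a "mode"; it infers everything from when those sync edges arrive.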
Display controllers from the 8-bit era were conceptually simple but had a huge parts count, particularly because they needed memory-access logic very similar to what is in the microprocessor. The earliest home computers (TRS-80 Model I, Apple II) had a large parts count, which was reduced in the next generation (TRS-80 Color Computer, VIC-20) because the glue logic and display controllers got the same LSI [1] treatment as the CPU.
People who build modern real-hardware fantasy computers [2] struggle with the cost of the display controller if it is done in an authentic style, so they wind up using an FPGA or a microcontroller (amazingly easy to do with an ESP32 [3]).
This thing addresses the problem by reusing many of the parts between the CPU and display controller, plus the contrast is not so stark since the CPU part count is greater than 1, unlike the typical retrocomputer.
It's fascinating! It's a minicomputer in the sense that it is built out of low-integration parts, but it is like a microcomputer in important ways, particularly having the closely integrated display controller.
You need a chip for VGA->HDMI, but they exist, and you can buy simple adapters. I think HDMI->VGA adapters might be cheaper (I have one in a drawer somewhere). One of the trickier points with HDMI is that it is stricter about what it calls a valid image and makes weird assumptions, like all your pixels being the same width.
A CRT can make do with signals that say "go to the next line now" and "go back to the top now", and then just outputs whatever is coming in on the colour signal. That really means there is no concept of a display mode; it's all just in the timing of the signals on the wires. Plenty of modern hardware with digital internals looks at a lot of that and just says "that's not normal, so I quit".
Analog devices may make a high pitched whine and then explode, but at least they'll attempt the task they have been given.
I have no specific knowledge, but another approach would be to integrate more unusual very-long-instruction-word micro-instructions, like large scale matrix functions, algorithm encode/decode functions, and very long vector operations.
As I recall, Transmeta's CPU could accept x86 instructions because the software translator, called Code Morphing Software (like Rosetta), would decompose each x86 instruction into a set of steps spread across a very long instruction word. VLIW's design is such that all of the instructions go into separate, parallel pipelines, each with a specific set of abilities. Think: the first three pipelines might be able to do integer arithmetic, but 3 and 4 can do floats. The CPU also implemented a commit/rollback concept that let it recover from "faults" like branch mispredictions, interrupts, and instruction faults. This allowed the Transmeta CPU to emulate the x86 beyond just JIT compilation. In theory, it could emulate any other CPU. They tried going after Intel (and failed), but I think they would have been better off going after anyone trying to jump-start a new architecture.
Part of the reason why CPUs aren't good at GPU activities is that instructions are expected to have a pretty small, definite set of inputs and outputs (registers), take a reasonable number of CPU cycles, and devote logic to ensuring a fault can be unwound (so the CPU doesn't crash). FPGAs are cool because you can essentially have wholly independent units with their own internal state, and the little units can be wired any way desired. The problem with FPGAs is that all that interconnect means a lot of capacitance in the lines, and thus much slower clock speeds.
So, maybe they are trying to strike a balance: targeted instructions that are more FPGA-like, like "perform algorithm." The instruction receives a vector register and a set of flags that defines which algorithms to use and in what order (treat the vector as 8-bit integers, mask with 0x80, compute a 16-bit checksum). You keep loading vectors and running them, then finally issue "read perform algorithm result" with the flag "get computed 16-bit checksum." It's FPGA-like, and registers aren't "polluted" with intermediate state.
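Purely to illustrate the shape of that idea (every name below is invented; it's a plain-C stand-in, not a real ISA):

```c
/* Plain-C stand-in for a hypothetical flag-driven "perform algorithm" unit.
 * Nothing here maps to a real instruction set; it only shows the
 * configure-with-flags, stream-vectors-in, read-result-out model. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define ALG_AS_U8    (1u << 0)   /* treat vector lanes as 8-bit integers */
#define ALG_MASK_80  (1u << 1)   /* AND every lane with 0x80             */
#define ALG_CSUM16   (1u << 2)   /* fold lanes into a 16-bit checksum    */

static uint16_t unit_state;      /* hidden state inside the "unit", not an
                                    architectural register               */

/* "perform algorithm": stream one vector through the configured steps. */
static void perform_algorithm(const uint8_t *vec, size_t n, uint32_t flags)
{
    for (size_t i = 0; i < n; i++) {
        uint8_t lane = vec[i];
        if (flags & ALG_MASK_80) lane &= 0x80;
        if (flags & ALG_CSUM16)  unit_state = (uint16_t)(unit_state + lane);
    }
}

/* "read perform algorithm result": pull the accumulated value back out. */
static uint16_t read_algorithm_result(void)
{
    return unit_state;
}

int main(void)
{
    const uint8_t packet[4] = { 0x81, 0x7f, 0x80, 0x01 };
    perform_algorithm(packet, 4, ALG_AS_U8 | ALG_MASK_80 | ALG_CSUM16);
    printf("0x%04x\n", read_algorithm_result());   /* prints 0x0100 */
    return 0;
}
```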
Transmeta's whole elevator pitch was a power-efficient CPU that, through translation software, happened to run x86 instructions, so no porting nonsense was necessary. The only issue was that they made it 1/x efficient, at 1/x the speed.
Interesting fact: the guy who architected Transmeta's CPUs also worked on Itanium and Russia's Elbrus CPUs. The Elbrus is sort of a spiritual successor to Transmeta's efforts at this translation thing, but it is very much aimed at being a hardware root-of-trust solution for sandboxing software rather than a genuine effort at competing in foreign markets.
> It is not clear what the FBI was seeking when numerous agents entered Coplan's apartment at around 6am, or if Coplan and/or Polymarket are the targets of an investigation.
We have no information about why they are there, so you conclude it must be political retribution and they must be protected. THIS is why Trump won. So many people have zero critical thinking skills. When you see something for which you have no information, you can say "I wonder what is going on." Then you stop. Things it could be:
* Using collected data to facilitate spear-phishing campaigns.
* Running a child pornography/sex trafficking ring.
* Participating in dogfighting.
* Being a back channel for selling trade secrets.
* Having had some people killed.
* Routing all the information collected to foreign groups, like Russia.
* or... having the other half of messages to someone under investigation whose phone is locked.
But, given I have zero evidence to support any of this, let's stick with "let's see what they say."
Each of these is _completely_ invented and then dishonestly presented as valid:
- Using collected data to facilitate spear-phishing campaigns.
- Running a child pornography/sex trafficking ring.
- Participating in dogfighting.
- Being a back channel for selling trade secrets.
- Having had some people killed.
- Routing all the information collected to foreign groups, like Russia.
- or... having the other half of messages to someone under investigation whose phone is locked.
A real-world example of "zero evidence". Let's stick with "no lying". Also, in late 2024, giving the monstrously corrupt FBI a dishonesty-based benefit of the doubt is beyond naive and comfortably in the realm of dishonesty.
It has been days since I have seen such an example of "zero critical thinking skills".
Having looked at your project, what would you say is the difference in ability or philosophy compared to Open Web UI or FlowiseAI? Or is this a case of "I want to build this because I want to"? There's nothing wrong with that, of course.