There are four major dimensions to security: attack surface; depth of defense, or how much an attacker can do once they're in; proactive measures to find security bugs (e.g., fuzzing); and code quality.
You're focusing on attack surface. But from a security standpoint, attack surface is probably the least important factor. Every sufficiently large application has a hole in it, and all reducing attack surface does is crudely control how likely an attacker is to stumble across that hole. Defense in depth, by contrast, lets you keep the attacker from doing bad things such as installing ransomware on your computer just because your HTML parser had a buffer overflow.
The major browsers spend a lot of effort sandboxing script execution in separate processes, and then disabling capabilities of those processes using techniques such as pledge(1), giving them much better defense in depth. They also put far more effort into finding and closing security bugs through techniques such as fuzzing. No one disputes that they have a much larger attack surface, but they put much more effort into mitigating the resulting vulnerabilities.
I should also bring up Spectre because you did. At its core, Spectre allows you to read arbitrary memory within your current address space, nothing more. As a result, it basically means that you can't build an effective in-process sandbox... which everyone already knew to begin with. What Spectre showed was how easy such arbitrary memory reads are, since you can proxy them through code as innocent as an array bounds check. There are mitigations for this, but they require rebuilding your entire application and all of its libraries with special mitigation flags... guess which browser is more likely to do that?
This is kind of a strange analysis. Somewhat infamously, Dan Bernstein, a pioneer of these privilege-separated defensive designs, forswore them in a retrospective paper about qmail. Really, though, I'm not sure I'm clear on the distinction you're drawing between attack surface reduction and privilege separation, since both techniques are essentially about reducing the impact of bugs without eliminating the bugs themselves.
You might more coherently reduce security to "mitigation" and "prevention", but then that doesn't make much of an argument about the topic at hand.
What I meant by "attack surface" here is probably a lot narrower than what you're used to: I'm using it to refer specifically to code size. I was trying to visualize it as "how many opportunities do you have to try to break the system" (surface area) versus "what can you actually do once you've made the first breach" (volume), and I didn't fully rewrite the explanation to excise that surface-area/volume distinction I originally drew.