The thing about writing standards is that if you write standards compiler writers vehemently disagree with, they simply won't implement them, and they disagree because their users do. A standard typically documents what is already happening. This is why some languages call their standards "reports": they investigate and document what the majority of compilers are already doing and encourage the rest to follow suit.
As for overflow, the reality is that most compilers simply assume it won't happen at this point. They do this because their users want it: being able to assume overflow won't happen generates far faster code. Yes, people often come up with pathological examples of ridiculous optimizations nobody expects, made because the compiler assumes overflow can never happen, but those are pathological; in practice it really comes down to loops. In many loops, having to assume that loop variables can overflow in theory disables all sorts of optimizations and elisions, while in practice they won't overflow, and if they do, that's an unintended bug anyway.
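A minimal sketch of the kind of "pathological" surprise people usually cite (function and variable names are mine, purely illustrative): because signed overflow is undefined, a compiler is allowed to fold this guard to "always false" and delete the branch, even though the programmer wrote it specifically to detect wrapping.

```c
/* Illustrative sketch, not from the original post.
 * At -O2, GCC and Clang may assume x + 1 cannot wrap for signed int,
 * so the condition below can be folded to false and the branch removed. */
int will_wrap(int x) {
    if (x + 1 < x) {   /* intended overflow check; may be optimized away */
        return 1;
    }
    return 0;
}
```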
A very basic example is a loop that adds some step value to a counter and stops when the counter is past a certain limit. Assuming that integers can overflow, and thus that adding a value can make the counter smaller than it used to be, obviously disables many optimizations that streamline this logic. More generally, assuming overflow can't occur means the compiler can assume that adding a positive integer to another integer always produces a larger integer than the original. That is a very powerful assumption for an optimizer to make, and allowing overflow removes it. That's why it's undefined behavior: compilers are free to assume it will never happen.
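A hedged sketch of that loop case, with illustrative names of my own choosing: with a signed counter and a positive step, the compiler may assume `counter + step` never wraps, so it knows the counter grows monotonically, the loop terminates, and the trip count can be computed up front, which enables unrolling and vectorization. If the counter were allowed to wrap, none of that follows.

```c
/* Illustrative sketch (assumes step > 0).
 * Because signed overflow is undefined, the compiler may treat
 * counter += step as strictly increasing, derive the exact iteration
 * count from (limit - start) and step, and unroll or vectorize the loop. */
long sum_up_to(long start, long limit, long step) {
    long total = 0;
    for (long counter = start; counter < limit; counter += step) {
        total += counter;
    }
    return total;
}
```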