Sure, but it's an instruction that applies to every single token, and the model will consider it fairly relevant at every one of them. As an extreme example, imagine instructing the LLM to never use the letter E, or to output only in French. Most instructions aren't that extreme, but they probably still have an effect.
People are so concerned with preventing a bad result that they sabotage the chances of a good one. Better to strive for the best the model can give you and throw out the bad results until it does.