Then you, or someone else, fix the underlying library.
This being Postgres, that process was likely completed decades ago.
Anticipating the next question, "but what if it is still unsafe?", the answer is that there is no fundamental reason why this can't be safe code. Security exploits like this require cooperation between the receiving code and the incoming data, and it is in fact perfectly possible for the receiving code to be completely safe. There is no such thing as data so amazingly hackerish that it can blast through all protections. There has to be a hole, and this is among the best-tested and most scrutinized code paths in the world.
There's some nuance here. Even the most battle-tested system, with distinct slots for executable code and primitive values, might have a way in which an untrusted primitive input can overrun a buffer, or be split in an unsafe way, and cause unexpected behavior. But there's a vast difference in attack surface between that, and "just give us a string, don't worry we'll sanitize it on our end."
It's all about defense in depth. Even a system that's tested all the way through with Coq or similar is still at the mercy of bugs in the specification or in underlying system libraries. But intentional API design can make it materially less likely that a security issue will arise, and that's worth a heck of a lot.
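To make the "distinct slots for executable code and primitive values" point concrete, here's a minimal sketch using Python's stdlib sqlite3 module as a stand-in for any parameterized-query API (the same shape applies to Postgres drivers). The table and the `malicious` input are invented for illustration.

```python
import sqlite3

# In-memory database with one row, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"

# String splicing: the input is pasted into the SQL text and re-parsed,
# so the injected OR clause becomes part of the query itself.
spliced = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()

# Parameterized: the query text and the value travel in separate slots,
# so the input is only ever treated as a plain string value.
bound = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(spliced)  # [('alice',)] -- the injected OR matched every row
print(bound)    # []           -- no user is literally named "alice' OR '1'='1"
```

The parameterized version has no code path on which the value can reach the SQL parser, which is the structural difference being argued for.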
> But there's a vast difference in attack surface between that, and "just give us a string, don't worry we'll sanitize it on our end."
If you have a type system that distinguishes between sanitized and unsanitized strings then it's not a very big difference in attack surface.
The main difference between the two methods is the risk that you can forget to sanitize. But that's not what happened here, so calling them dumb for having that risk is not a useful way to analyze the problem.
Parameters are not an extra layer of defense. For anything other than forgetting to sanitize, parameters are a sidegrade, not defense in depth.
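The "type system that distinguishes between sanitized and unsanitized strings" idea can be sketched with Python's `typing.NewType`; `SafeSql`, `sanitize`, and `run_query` are all hypothetical names for illustration, and the escaping shown is deliberately naive.

```python
from typing import NewType

# Hypothetical tagged type: a str that has passed through sanitize().
SafeSql = NewType("SafeSql", str)

def sanitize(raw: str) -> SafeSql:
    # Toy single-quote doubling, for illustration only -- doing this
    # correctly is exactly what the thread is arguing about.
    return SafeSql(raw.replace("'", "''"))

def run_query(fragment: SafeSql) -> str:
    # A static checker (e.g. mypy) rejects a plain str argument here,
    # so "forgot to sanitize" fails type-checking instead of shipping.
    return f"SELECT name FROM users WHERE name = '{fragment}'"

query = run_query(sanitize("O'Brien"))
print(query)  # SELECT name FROM users WHERE name = 'O''Brien'
```

Under this scheme the remaining risk is a bug inside `sanitize` itself, which is the commenter's point: once forgetting is ruled out, the two approaches differ less than they first appear.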