Isn't that metadata all but required in practice, though? I'm assuming orientation is a common metadata element of phone-produced images in particular, and I'd assume the same for decent cameras.
Would love to see a good rundown of when you should rely on each approach. Another thread pointed out that you should also use the color space metadata.
Some systems produce images where the pixel arrangement matches the sensor layout, which rotates with the device, and they add EXIF metadata to indicate the intended orientation.
Other cameras, phones, and apps adjust the aspect ratio and the order of the pixel array in the saved image regardless of which way the sensor was pointed, so the EXIF orientation is always the default (value 1, no rotation). I'd argue that this is simpler; it's how people who don't know the metadata method exists would expect the system to work. It also always works on any device or browser, while rotating via EXIF only works if your whole pipeline is aware of the tag.
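If you do have to consume EXIF-rotated images, handling them is one call in Pillow (Python). A minimal sketch; the filename is just an example:

    from PIL import Image, ImageOps

    # Normalize so the pixel data itself is upright regardless of which
    # convention the producing device used. If the file carries an EXIF
    # Orientation tag other than the default, the pixels are rotated or
    # flipped and the tag is removed; otherwise this is effectively a no-op.
    img = Image.open("photo.jpg")
    upright = ImageOps.exif_transpose(img)
    upright.save("photo_upright.jpg")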
The advantage of the EXIF approach is that you don't have to do nearly as much post-processing of the data, no? In particular, I don't expect my camera application to need to change its memory layout just because I've rotated the camera. So if you want the rows/columns changed in the saved image, that has to happen post-capture, after reading from the sensor. Right?
I think this is what you meant by "some systems" there, but I would expect that of every sensor system. I legit never would have considered that they would do the transpose when saving the image off the sensor.
The transpose is absolutely trivial compared to debayering and compression. It's a lot simpler to do it upfront and not worry about rotation at any later point.
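To make "trivial" concrete, here's a toy sketch in Python/NumPy; the dimensions are made up, and a real pipeline would do this on the raw buffer before encoding:

    import numpy as np

    # Stand-in for a landscape sensor readout (rows x cols x RGB).
    sensor = np.zeros((3000, 4000, 3), dtype=np.uint8)

    # Device held in portrait: rotate the array 90 degrees clockwise so the
    # saved pixels are upright and EXIF orientation can stay at the default.
    # np.rot90 is just an index shuffle (it returns a view); even
    # materializing the copy is cheap next to debayering.
    portrait = np.rot90(sensor, k=-1)

    print(sensor.shape, "->", portrait.shape)  # (3000, 4000, 3) -> (4000, 3000, 3)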
And odds are very high your camera app already switched memory layouts when you rotated, at least for the UI. Doing that isn't a big deal.
I mean, I get that it isn't incredibly difficult, but it still feels unnecessary. The cynic in me thinks this explains a bit of why app-based cameras are garbage.
Do you expect the same when recording video if the user rotates the device while recording? Timestamping an orientation flag is trivial. Why not lean on that?
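Something like this hypothetical sketch is what I have in mind; the track layout and names are made up for illustration, not any real container format's API:

    import bisect

    # The recorder appends (timestamp_seconds, degrees) whenever the device
    # rotates; the player looks up the active orientation per frame instead
    # of the pixels being rewritten mid-recording.
    orientation_track = [(0.0, 0), (12.4, 90), (31.9, 0)]

    def orientation_at(t):
        """Return the orientation in effect at playback time t."""
        times = [ts for ts, _ in orientation_track]
        i = bisect.bisect_right(times, t) - 1
        return orientation_track[max(i, 0)][1]

    print(orientation_at(15.0))  # 90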