I have shot with a 1.33X anamorphic lens in 4K APS-C. This gives, after the clips are stretched, a 2.40 aspect ratio.
The question is: what is the appropriate rendering resolution? Is it 3840x1600, keeping the original horizontal resolution, or 5184x2160, keeping the original vertical resolution? It would seem the latter is appropriate, since one advantage of the anamorphic stretch is that "you do not lose resolution." With the former, you get the same "lost" resolution as if you had simply cropped the top and bottom off a 4K video shot with a spherical lens.
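For what it's worth, the arithmetic behind the two candidates is easy to sketch. This assumes a 3840x2160 sensor and the 2.40 target from the question; both candidate resolutions fall straight out of the target aspect ratio:

```python
# Two ways to render a 2.40:1 frame from an assumed 3840x2160 capture:
# keep the sensor width, or keep the sensor height.
SENSOR_W, SENSOR_H = 3840, 2160
TARGET_AR = 2.40

# Option A: keep the horizontal resolution, derive the height.
opt_a = (SENSOR_W, round(SENSOR_W / TARGET_AR))   # (3840, 1600)

# Option B: keep the vertical resolution, derive the width.
opt_b = (round(SENSOR_H * TARGET_AR), SENSOR_H)   # (5184, 2160)

print(opt_a, opt_b)
```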
The second option, as you say, maintains vertical resolution. But nothing in the anamorphic process actually supports the "you do not lose resolution" myth. Even assuming a perfect anamorphic lens (and there are none), you will lose horizontal resolution. You lose it during capture, because your sensor has a limited number of square pixels. If you did not stretch the image in post, and instead projected it through a complementary anamorphic lens, you would recover the original aspect ratio at the original sensor resolution, but now with rectangular pixels. When you compress with an anamorphic lens, you have already limited the maximum horizontal resolution to the dimensions of the sensor. When you stretch, you do not improve that resolution; the number of pixels per unit width is less, post-stretch, than in a full frame at the same height.
The anamorphic lens itself is lossy in terms of resolution and contrast, and has optical artifacts. You're applying an optically lossy horizontal compression to a sensor with a limited number of square pixels. Stretching never increases the number of real pixels. Stretching digitally in post doesn't either, because the new pixels are just interpolations between the original ones and contain no new information. And since the stretch ratio is not an integer, the interpolation is a rather ragged process, unlike the clean mapping you would get if the stretch were a precise doubling or halving.
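A minimal sketch of that digital stretch, using simple linear interpolation on a single row of pixels (a real NLE uses a fancier resampling kernel, but the principle is the same: every output pixel is a blend of existing ones, never new detail):

```python
# Horizontally stretch a 1-D row of pixel values by a non-integer factor.
# Pure Python sketch; the row and factor below are illustrative only.
def stretch_row(row, factor):
    out_len = round(len(row) * factor)
    out = []
    for i in range(out_len):
        # Map each output pixel back to a fractional source position.
        x = i * (len(row) - 1) / (out_len - 1)
        lo = int(x)
        hi = min(lo + 1, len(row) - 1)
        t = x - lo
        # Every output value is a weighted blend of two captured pixels.
        out.append(row[lo] * (1 - t) + row[hi] * t)
    return out

row = [0, 100, 0, 100]           # a tiny 4-pixel "sensor" row
print(stretch_row(row, 1.35))    # 5 output pixels, all blends of the 4 originals
```

Note that none of the output values carry information that wasn't already in the captured row; the stretch only redistributes it.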
I understand the desire for a wide aspect ratio. I do not understand using an anamorphic lens, in 2024, to get it in digital capture. It limits pretty much everything in the optical part of the process. In addition to the above, there is a color shift, a definite transmission loss of 1-2 stops, limited lens sharpness, and some significant limitations on lens selection, weight, and cost. Not to mention the oval bokeh. Unless you want that. Remember that every widescreen process used in film since the 1950s involved a compromise, and its use simply weighted the wide aspect ratio above the compromises. You are now in that same position.
Philosophically, when anamorphic widescreen films were introduced, the theater screen height did not change (at least in good theaters; in others, perhaps slightly), but the width certainly did. The goal of increasing audience involvement could be achieved by filling more of the peripheral vision. That rarely happens on any home video display. Instead, we present 2.40 widescreen letterboxed on a 16:9 screen and maintain the horizontal angle. Outside of some scenes working better at that aspect ratio, the result is actually less involving, as the total image area presented to the viewer is significantly less than a full-screen image. Another compromise, and one we still have to consider. Are we really achieving the widescreen goal? Or are we throwing one more hurdle at the viewer, one that could stand between them and the suspension of disbelief?
Theatrical presentation is a different animal, of course, but it doesn't sound like that's where your material is headed.
Back to my question, which no one has yet answered: which of the above two rendering options is best? Neither produces a video containing black bars, so that is irrelevant.
Every non-cinema display device today is set up for 16:9. Even if you retain the image height and add (interpolated) width pixels, you're not gaining resolution, because the display will force the result down to whatever it can handle. Nothing commonly available will display native 5184x2160 without resampling. One thing is for sure: you won't display the original vertical resolution on a 16:9 display unless you involve a projection anamorphic lens. Anything else you do will only fill the screen's width, not its height.
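You can sketch what a 16:9 panel does with either candidate. This assumes a standard UHD (3840x2160) display and simple fit-to-screen letterboxing, which is how consumer displays and players typically handle a wider-than-screen source:

```python
# Fit a source frame onto an assumed 3840x2160 (16:9) panel, preserving
# aspect ratio. The panel fills its width and letterboxes the height.
def fit_on_display(src_w, src_h, disp_w=3840, disp_h=2160):
    scale = min(disp_w / src_w, disp_h / src_h)
    return round(src_w * scale), round(src_h * scale)

print(fit_on_display(3840, 1600))   # already fits: (3840, 1600), no resampling
print(fit_on_display(5184, 2160))   # forced down to (3840, 1600)
```

Either way, the viewer sees 3840x1600 active pixels; the 5184x2160 render just arrives there via an extra resampling pass.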
Also consider your intended exhibition method, and what display is going to be used. There are tons of real 3840x2160 displays in the world now, fewer DCI 4096x2160 (concentrated in cinemas), and far fewer at resolutions above that until we get to IMAX. You may not want a display resampling your material on the fly, or you may not care. Think of image resampling as an uncontrolled process altering your image. There would be no resampling of a native 3840-wide image on the vast majority of displays of all kinds, if that matters. And no resampling of DCI resolution in cinemas, if that's your goal. I'm not sure about targeting FUHD quite yet; it's going to take a while before those displays really penetrate the market. But I wouldn't suggest throwing an image at a display that the display must alter just to show it.
And, as tempting as putting an anamorphic on the projector may seem (projecting stretched rectangular pixels to retain vertical resolution), given the lens losses, I wouldn't recommend running your image through a second anamorphic for display, especially one you don't have direct control over.
No, I'm not answering your question. There are too many compromises to consider. The answer must come from you.