Shooting high ISO vs underexposing and lifting in post question

What really stumps me is the ignorance I've come to encounter here by supposedly really intelligent people. Where is the curiosity? Where is this "why are there always these discussions, why are quite a few people out there with these questions, where does it come from?" Instead: "You are wrong, this is how it is supposed to be, ignore what you see. End of discussion."
<snip>
Seriously?

https://www.dpreview.com/forums/post/64442377

Now you can run to PDR and check, but the differences there are a little bit too minimal for visual artifacting like that. Nasty pesky reality.
What is that supposed to prove?
That the A7s3 offers more processing latitude than the A9 at higher ISOs hands down. You could replace the A9 with an A7III or -- if you want to get smacked badly -- an A7rIV. The result with the A7III will be a bit better than the A9, the A7rIV will actually be worse. Reason: The way the RAW data ends up being processed, at least currently. If you have a better idea how to mitigate these clearly visible artifacts, you would make not only me, but a lot of people happy.
To my mind, PDR is not a particularly useful tool for comparing these images.

Your original point made here was that bit-shifting the output from an ISO-invariant sensor introduces quantisation artefacts, compared with applying analogue amplification before digitisation.

While this may be true when the ADC quantisation step (LSB) is larger than the read noise, the impact decreases to imperceptible levels as read noise rises above the ADC step size - a point already made by Jim Kasson, with Jim's simulation results linked here. To the best of my knowledge, this is accepted wisdom in signal processing circles.
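To see why, here is a minimal numpy sketch in the same spirit (my own toy version, not Jim's actual simulation): it quantises a smooth ramp with varying amounts of Gaussian read noise, averages many trials, and isolates the systematic (staircase) error that would show up as banding.

```python
# A minimal sketch (my illustration, not Jim Kasson's simulation): quantise a
# smooth ramp with varying Gaussian read noise, average many trials, and look
# at the systematic residual. Once the noise reaches roughly 0.5 LSB it
# dithers the ADC and the staircase (banding) error largely disappears.
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(0.0, 4.0, 2000)            # smooth ramp, in LSB units
trials = 256

for sigma in (0.1, 0.3, 0.5, 1.0):              # read noise in LSB
    noisy = signal + rng.normal(0.0, sigma, (trials, signal.size))
    quantised = np.round(noisy)                 # ideal mid-tread ADC
    recovered = quantised.mean(axis=0)          # average away random noise
    residual = recovered - signal
    # smooth the residual to isolate the systematic (staircase) component
    systematic = np.convolve(residual, np.ones(64) / 64, mode="same")
    print(f"sigma = {sigma:.1f} LSB -> systematic error RMS = "
          f"{systematic.std():.4f} LSB")
```

The exact numbers don't matter; the point is that the systematic error collapses once the noise is comparable to the step size.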

The images in the A7siii vs A9 comparison you linked were captured at ISO 10000. For the A7s this is 3 stops above unity ISO, where each photo-electron produces a 1 DN step in digital output. For the A9, we are 4 stops above unity ISO. I would expect quantisation effects to be negligible in both sets of images.

Regarding image noise, after re-sampling to the same resolution as the A7s, I would expect the A9 to be somewhat noisier in the deepest shadows. The per-pixel read noise is slightly higher in the older model and there are twice as many pixels per unit area. At higher light levels, photon (shot) noise will dominate, and the higher quantum efficiency of the A9 sensor will deliver rather better signal-to-noise ratio than the A7s, for the same exposure.
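As a rough back-of-envelope check of that expectation, with assumed per-pixel read-noise figures (illustrative numbers, not measurements of either camera):

```python
# A rough sanity check with assumed numbers (illustrative, not measured):
# averaging the A9's ~2x pixels-per-area down to A7s resolution adds read
# noise in quadrature, so the binned A9 pixel ends up somewhat noisier.
import math

read_a7s = 1.5               # per-pixel read noise, e- (assumed)
read_a9 = 1.8                # per-pixel read noise, e- (assumed, slightly higher)
pixels_per_area_ratio = 2.0  # A9 has ~2x the pixels per unit area (assumed)

# Summing two A9 pixels into one A7s-sized pixel: noise adds in quadrature.
read_a9_binned = read_a9 * math.sqrt(pixels_per_area_ratio)
print(f"A7s big pixel: {read_a7s:.2f} e-,  A9 binned: {read_a9_binned:.2f} e-")
# -> ~1.5 e- vs ~2.5 e-: the binned A9 should be noisier in the deepest
#    shadows, while shot noise and QE dominate at higher light levels.
```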

Is greater processing latitude evident in the thread you linked? I really can't tell. As J A C S pointed out, the deepest shadows of the A7s images are much darker, suppressing most of the noise. For a useful comparison the images must be processed with identical tone curves and equivalent black levels.

With better matched post-processing the images could be relevant to a discussion of pixel size - but that is a different thread which opens up a whole new can of worms.

Cheers.
 
Look at the posterization problem from pushing things around (too much) and people telling me there is none. "ISO-invariant! All the same!". Ok, then well, D810 ISO64 vs. ISO12800 maybe? Because it always helps to put things to the extreme and see if the hypothesis still holds.
Straw man argument. No one is claiming that ISO invariance holds over that range. The model is this: noise comes from pre-PGA sources in the camera, PGA and post-PGA sources in the camera, and shot noise (I'm leaving out some sources). A camera is ISOless when the PGA and post-PGA sources are negligible wrt the others.
 
Good, now we are on the same page. Thanks! This is how I have understood that it works. Which is what led me to wonder what the technical difference is between these two cases, and why the camera doesn't always shoot at ISO 800 (the second gain step) for the high ISOs, just store the compensation value in the RAW file, and let the RAW converter handle it, thereby protecting the highlights?
The GFX 50S and GFX 50R work that way. It works very well. The way the GFX 100 and GFX 100S do it is a step backwards, in my opinion. But only theoretically, since I don't use the GFX 100x cameras at the ISOs where the GFX 50x cameras handle ISO in the metadata.

--
https://blog.kasson.com
 
Well, it does not prove that. The A7s3 image has darker shadows and hides the noise better. Viewed at the same 12 MP, they look quite similar.

Most importantly, this is one scene, one pair of cameras, one ISO, one take on what "more processing latitude" means. It proves nothing.
I mean I don't see a difference at ISO100 between a 5D3 and a D750, yet if you start underexposing and pushing things up the well-known problems with the 5D3 will become visible. This doesn't make the 5D3 unusable as a camera (which is an opinion), but the D750 just has more processing latitude, obviously.

We apply the very same process at higher ISOs with A7s3 and A9, for example, and see that the A7s3 is holding things better together, which means the A7s3 has more processing latitude.

Now you are saying that applying the same process at ISO10k as you do at ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
 
Now you are saying that applying the same process at ISO10k as you do at ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
For starters, I do not see the differences you seem to see. Once we are settled on that, we can discuss what it would prove even if I could see them.

Do you have the RAW files?
 
Guys, I think the OP has a point, even if he/she might not know all the internal details, and even if it takes several pages of discussion before you all agree on a common vocabulary. I think many of the comments miss the point.

As noted, few cameras, if any, are completely ISO-invariant, but some of them come very close, at least over part of the ISO scale. For example, the 7RIV and the D7200 both have substantial parts of their ISO ranges with almost constant read noise. Is it really necessary that the dynamic range should plummet at high ISO values in these cameras?

Sony 7RM4 and Nikon D7200

Dynamic range plummets

Let's use the 7RM4 as a numerical example.

========================
ISO     Read noise   Full-well capacity (FWC)
400     1.25 e-      8748 e-
6400    1.13 e-      547 e-
========================

In a 14-bit raw file the FWC can be represented on a scale of 0 to ~16000 DN. On that scale, at ISO 400 the read noise is 1.25*16000/8748 = 2.3 DN, so we have valid data (i.e., at or above read noise) from 1.25 e- to 8748 e-, or 2.3 to 16000 DN. (These are Claff's numbers. Click on the legend in this link.)

With 4 stops less exposure, still at ISO 400, we still have valid data from 1.25 to 8748 e-, or 2.3 to 16000 DN. But at ISO 6400, we have valid data only from 1.13 e- to 547 e-, or 33 to 16000 DN. Why? We have thrown away 4 stops of highlights, i.e., 4 stops of dynamic range, in exchange for a very small, 0.12 e- improvement in read noise. Why? (Note *)
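To make the arithmetic easy to check, here is the same computation in a few lines of Python, using the figures quoted above (treat them as approximate):

```python
# The arithmetic from the paragraphs above, using the quoted Claff figures.
from math import log2

fwc  = {400: 8748.0, 6400: 547.0}   # full-well capacity, electrons
read = {400: 1.25, 6400: 1.13}      # input-referred read noise, electrons
full_scale_dn = 16000.0             # ~14-bit raw scale

for iso in (400, 6400):
    noise_dn = read[iso] * full_scale_dn / fwc[iso]
    dr_stops = log2(fwc[iso] / read[iso])
    print(f"ISO {iso}: read noise = {noise_dn:.1f} DN, "
          f"engineering DR = {dr_stops:.1f} stops")
# ISO 400:  read noise = 2.3 DN,  DR = 12.8 stops
# ISO 6400: read noise = 33.1 DN, DR = 8.9 stops (~4 stops of highlights gone)
```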

It's well known that it's easy to circumvent the problem simply by not using ISO from 400 to 6400. DPR even demonstrates that visually with their ISO-invariance tests. But this has a big cost: you give up a usable preview and jpeg, and you must wait until the raw file is developed.

I don't ordinarily second-guess the manufacturers, but in this case I have to wonder why DPR and users are smart enough to circumvent the problem, but the camera firmware is not. At ISO settings between 400 and 6400 it could use the ISO 400 settings internally, but create a preview and jpeg that are appropriate for the ISO setting. As far as I know, there are instances in which camera manufacturers have done this, but for the two cameras I mentioned, this seems to be botched. How could they have gotten it so wrong? Did I miss something?

====

Note *

Some people will worry about roundoff error, which causes posterization. But the smallest noise level at either ISO setting is 2.3 DN, while the largest possible roundoff error is 0.5 DN. This is not a problem.
 
Well, it does not prove that. The A7s3 image has darker shadows and hides the noise better. Viewed at the same 12 MP, they look quite similar.

Most importantly, this is one scene, one pair of cameras, one ISO, one take on what "more processing latitude" means. It proves nothing.
I mean I don't see a difference at ISO100 between a 5D3 and a D750, yet if you start underexposing and pushing things up the well-known problems with the 5D3 will become visible. This doesn't make the 5D3 unusable as a camera (which is an opinion), but the D750 just has more processing latitude, obviously.

We apply the very same process at higher ISOs with A7s3 and A9, for example, and see that the A7s3 is holding things better together, which means the A7s3 has more processing latitude.

Now you are saying that applying the same process at ISO10k as you do at ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
Without equalising the post-processing so that the comparison images have equivalent black levels and identical tone curves, I have no way of knowing if there is a significant difference to explain.

Regarding digital scaling of the output at ISO 100 compared with ISO 10000: at ISO 10000, the Sony sensors you referenced have very low absolute levels of read noise.

At ISO 100 the Canon 5Diii has rather high read noise, including relatively high levels of downstream conversion noise. The input-referred read noise falls roughly inversely with ISO.

In contrast, the Nikon D750, while not exactly isoless, has a much lower ISO 100 read noise, which changes much more slowly with ISO. This gives a D750 image captured at low ISO a higher engineering dynamic range, and much greater processing latitude than an equivalent 5Diii image.

https://www.photonstophotos.net/Charts/RN_e.htm#Canon%20EOS%205D%20Mark%20III_14,Nikon%20D750_14,Sony%20ILCE-7SM3_14

At ISO 3200 and higher, there is little to choose between the 5Diii and D750 in terms of read noise. They are both almost-isoless, with around 2.5 e- read noise.

At ISO 10000 the recent Sony sensors have even less read noise, which is almost independent of ISO.

Explaining the difference in behaviour for Canon 5Diii at ISO 100 compared with Sony A7siii at ISO 10000 is simple physics. No magic handwaving required.
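The shape of that behaviour can be sketched with a toy model. The parameters below are invented for illustration only, not measured values for either camera; the point is how input-referred read noise responds to analog gain:

```python
# A toy model with invented parameters (illustration only, not measured data
# for any camera): input-referred read noise combines a pre-gain component
# with a downstream (post-gain) component divided by the analog gain.
import math

def input_referred_read_noise(iso, pre_e, post_e_at_base):
    gain = iso / 100.0                 # analog gain relative to base ISO
    post_e = post_e_at_base / gain     # downstream noise referred to the input
    return math.hypot(pre_e, post_e)  # independent sources add in quadrature

for iso in (100, 400, 1600, 6400):
    downstream_limited = input_referred_read_noise(iso, pre_e=2.5,
                                                   post_e_at_base=30.0)
    near_isoless = input_referred_read_noise(iso, pre_e=1.5,
                                             post_e_at_base=2.0)
    print(f"ISO {iso:5d}: downstream-limited ~{downstream_limited:5.1f} e-, "
          f"near-isoless ~{near_isoless:4.2f} e-")
# The downstream-limited camera gains a lot from analog amplification; the
# near-isoless one barely changes with ISO.
```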

--
Alan Robinson
 
As J A C S pointed out, the deepest shadows of the A7s images are much darker,
ILCE-7SM3 has its ISO calibrated differently from ILCE-9, too.
Now, we can somewhat correct the A9 image by pulling things down (which makes some stuff more difficult to see, but we have only the resulting JPGs, unfortunately):

[attached comparison image]

I tried to make it somewhat similar, but the A9 has a color drift (expected) and the image is visibly desaturated compared to the A7s3 one (also expected), plus the noise killed visible details in the darker parts compared to the A7s3 (also expected). My definition of "same" doesn't really fit that.

With an A7rIV the differences would be way more pronounced, even to the point that magic handwaving like "oh but this image is darker than the other" and "ISO calibration differences" still leaves a lot to answer for.
 
What really stumps me is the ignorance I've come to encounter here by supposedly really intelligent people. Where is the curiosity? Where is this "why are there always these discussions, why are quite a few people out there with these questions, where does it come from?" Instead: "You are wrong, this is how it is supposed to be, ignore what you see. End of discussion."
<snip>
Seriously?

https://www.dpreview.com/forums/post/64442377

Now you can run to PDR and check, but the differences there are a little bit too minimal for visual artifacting like that. Nasty pesky reality.
What is that supposed to prove?
That the A7s3 offers more processing latitude than the A9 at higher ISOs hands down. You could replace the A9 with an A7III or -- if you want to get smacked badly -- an A7rIV. The result with the A7III will be a bit better than the A9, the A7rIV will actually be worse. Reason: The way the RAW data ends up being processed, at least currently. If you have a better idea how to mitigate these clearly visible artifacts, you would make not only me, but a lot of people happy.
To my mind, PDR is not a particularly useful tool for comparing these images.

Your original point made here was that bit-shifting the output from an ISO-invariant sensor introduces quantisation artefacts, compared with applying analogue amplification before digitisation.

While this may be true when the ADC quantisation step (LSB) is larger than the read noise, the impact decreases to imperceptible levels as read noise rises above the ADC step size - a point already made by Jim Kasson, with Jim's simulation results linked here. To the best of my knowledge, this is accepted wisdom in signal processing circles.

The images in the A7siii vs A9 comparison you linked were captured at ISO 10000. For the A7s this is 3 stops above unity ISO, where each photo-electron produces a 1 DN step in digital output. For the A9, we are 4 stops above unity ISO. I would expect quantisation effects to be negligible in both sets of images.

Regarding image noise, after re-sampling to the same resolution as the A7s, I would expect the A9 to be somewhat noisier in the deepest shadows. The per-pixel read noise is slightly higher in the older model and there are twice as many pixels per unit area. At higher light levels, photon (shot) noise will dominate, and the higher quantum efficiency of the A9 sensor will deliver rather better signal-to-noise ratio than the A7s, for the same exposure.

Is greater processing latitude evident in the thread you linked? I really can't tell. As J A C S pointed out, the deepest shadows of the A7s images are much darker, suppressing most of the noise. For a useful comparison the images must be processed with identical tone curves and equivalent black levels.

With better matched post-processing the images could be relevant to a discussion of pixel size - but that is a different thread which opens up a whole new can of worms.

Cheers.
The problem is simple:
  • Processing is part of digital photography. Just talking about effects you see when analysing and comparing RAW data is certainly interesting, but without processing all that stuff is just data. Which means that a full analysis and understanding of digital photography must include the processing.
  • Processing is (strongly) non-linear and emphasizes the darker parts over the brighter ones, which is somewhat unfortunate if you have a weak signal there, for example because the pixels are small and there is only so much light to work with. It also means that one should be careful about using linear interpretations of RAW data to explain the results of the full processing chain.
  • Quite a few things in the processing are done per pixel, meaning that the signal quality per pixel affects those calculations.
  • Scaling happens towards the end of the processing, which means all the artifacts caused by a weak signal up to that point will not magically disappear just because you are downscaling, for example. This can be so bad that some of the artifacts are still visible in postage-stamp-sized low-res images.
And to come back to the initial problem: Exposure compensation means that you stretch the tonal steps of the darker parts of the image. That also means all the noise in the original gets stretched alongside and has no in-between values in the result. If you don't add noise, 14-bit data will invariably have larger steps between the tonal values than, for example, 16-bit data, and the effect will be even more pronounced if you have only 12 bits or, please avoid, 10 bits available. It also means the more you push, the more pronounced the problem becomes.

Now, compare that to some analogue processing. You will not have steps in there, because you start with a more or less "stepless" signal, so you can scale that and still be stepless. You will have noise "in-between" automatically, so once you go through an ADC the result will look very different from the digitally processed version.

Many DSP algorithms rely on input and output values mapping 1:1, which also means that you don't disrupt the steps between the values too much. Exposure compensation in photography is pretty disruptive; it invalidates this 1:1 assumption big time. That is ok, but if you want the results to be more correct, you either have to fix the data up beforehand so that it "survives" the treatment, or you have to post-fix it afterwards. In both cases noise is the solution.
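Here is a toy sketch of that digital-push versus analog-gain contrast (my own example, assuming an ideal mid-tread ADC and a 4-stop push):

```python
# A toy sketch (my example, assuming an ideal mid-tread ADC): a 4-stop push
# applied digitally after the ADC, versus the same gain applied in the
# analog domain before the ADC.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 8.0, 200_000)       # deep-shadow signal, LSB units
noise = rng.normal(0.0, 0.5, scene.size)     # read noise, ~0.5 LSB
push = 16                                    # 4 stops

digital = np.round(scene + noise) * push     # quantise first, then push
analog = np.round((scene + noise) * push)    # amplify first, then quantise

for name, data in (("digital push", digital), ("analog gain", analog)):
    span = data[(data >= 0) & (data < 128)]
    print(f"{name}: {np.unique(span).size} distinct output levels below 128")
# digital push -> 8 levels (only multiples of 16: gaps in the histogram)
# analog gain  -> 128 levels (the noise provides the in-between values)
```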
 
Guys, I think the OP has a point, even if he/she might not know all the internal details, and even if it takes several pages of discussion before you all agree on a common vocabulary. I think many of the comments miss the point.

As noted, few cameras, if any, are completely ISO-invariant, but some of them come very close, at least over part of the ISO scale. For example, the 7RIV and the D7200 both have substantial parts of their ISO ranges with almost constant read noise. Is it really necessary that the dynamic range should plummet at high ISO values in these cameras?

Sony 7RM4 and Nikon D7200

Dynamic range plummets

Let's use the 7RM4 as a numerical example.

========================
ISO     Read noise   Full-well capacity (FWC)
400     1.25 e-      8748 e-
6400    1.13 e-      547 e-
========================

In a 14-bit raw file the FWC can be represented on a scale of 0 to ~16000 DN. On that scale, at ISO 400 the read noise is 1.25*16000/8748 = 2.3 DN, so we have valid data (i.e., at or above read noise) from 1.25 e- to 8748 e-, or 2.3 to 16000 DN. (These are Claff's numbers. Click on the legend in this link.)

With 4 stops less exposure, still at ISO 400, we still have valid data from 1.25 to 8748 e-, or 2.3 to 16000 DN. But at ISO 6400, we have valid data only from 1.13 e- to 547 e-, or 33 to 16000 DN. Why? We have thrown away 4 stops of highlights, i.e., 4 stops of dynamic range, in exchange for a very small, 0.12 e- improvement in read noise. Why? (Note *)

It's well known that it's easy to circumvent the problem simply by not using ISO from 400 to 6400. DPR even demonstrates that visually with their ISO-invariance tests. But this has a big cost: you give up a usable preview and jpeg, and you must wait until the raw file is developed.

I don't ordinarily second-guess the manufacturers, but in this case I have to wonder why DPR and users are smart enough to circumvent the problem, but the camera firmware is not. At ISO settings between 400 and 6400 it could use the ISO 400 settings internally, but create a preview and jpeg that are appropriate for the ISO setting. As far as I know, there are instances in which camera manufacturers have done this, but for the two cameras I mentioned, this seems to be botched. How could they have gotten it so wrong? Did I miss something?
Thank you for understanding and rewriting my question into a more technical version. I can't comment on the science in it, but I can see that you seem to have fully understood my original question.

Also Jim Kasson seems to have understood what I asked about, and he replied that some Fuji models do just what I asked about, meaning they "underexpose" the RAW and lift accordingly in Live View and in the in-camera JPEG.

And my original question is really: why don't all manufacturers with an appropriate sensor do this to save highlights?

When I write "seems" above, the slight unsureness in my wording is not because I think you are unsure, but because I am unsure whether I have understood you correctly. And that is because I can't express myself on this topic much better than I did in my OP, nor understand the answers at the exact detail level when they get too complex and deep into sensor/processing technology. This comes from a combination of not knowing the exact vocabulary to use and only understanding the underlying technology vaguely, so I have to guess a little at what is meant. Being a non-native English speaker doesn't help either (I'm from Sweden).

There might be more people in the thread who have understood my question fully; it's just that I haven't been fully able to understand your answers/comments in all cases. Sorry for that.

If you want to answer so that I understand, then simplification and easy English is the way to go. I am not technically illiterate, but surely not on the level you guys are. Even if I have trouble following many of the comments 100%, I still pick up stuff and learn, so it isn't wasted, and I am very grateful for the involvement this topic has gotten.

--
Best regards
/Anders
----------------------------------------------------
Mirrorless, mirrorless on the wall, say which is the best camera of them all?
When I put my camera in Manual mode, why don't I get any instructions?
Some images:
https://www.dpreview.com/forums/post/65325637
https://www.dpreview.com/forums/post/64169208
https://www.dpreview.com/forums/post/64221482
https://www.dpreview.com/forums/post/65120847
https://www.dpreview.com/forums/post/65121520
https://www.dpreview.com/forums/post/65130731
 
And to come back to the initial problem: Exposure compensation means that you stretch the tonal steps of the darker parts of the image. That also means all the noise in the original gets stretched alongside and has no in-between values in the result. If you don't add noise, 14-bit data will invariably have larger steps between the tonal values than, for example, 16-bit data, and the effect will be even more pronounced if you have only 12 bits or, please avoid, 10 bits available. It also means the more you push, the more pronounced the problem becomes.

Now, compare that to some analogue processing. You will not have steps in there, because you start with a more or less "stepless" signal, so you can scale that and still be stepless. You will have noise "in-between" automatically, so once you go through an ADC the result will look very different from the digitally processed version.

Many DSP algorithms rely on input and output values mapping 1:1, which also means that you don't disrupt the steps between the values too much. Exposure compensation in photography is pretty disruptive; it invalidates this 1:1 assumption big time. That is ok, but if you want the results to be more correct, you either have to fix the data up beforehand so that it "survives" the treatment, or you have to post-fix it afterwards. In both cases noise is the solution.
Whence comes the assumption that "holes" are a problem? If the signal is sufficiently dithered before scaling, it remains sufficiently dithered after scaling: the noise and the quantization step are scaled by the same factor, so the former is still larger than the latter. I also don't see a problem when hovering over the second image on this page (which truncates it by 3 bits, see histogram). What am I missing?
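A quick numeric check of that scaling argument (my illustration):

```python
# A push multiplies both the noise and the effective quantisation step by
# the same factor, so the signal stays dithered.
import numpy as np

rng = np.random.default_rng(2)
raw = np.round(100.0 + rng.normal(0.0, 1.0, 100_000))   # ~1 LSB read noise

for push in (1, 4, 16):
    pushed = raw * push
    step = np.diff(np.unique(pushed)).min()   # effective quantisation step
    print(f"push x{push:2d}: noise ~{pushed.std():5.1f}, step {step:.0f}, "
          f"ratio {pushed.std() / step:.2f}")
# The noise/step ratio is the same at every push, so the "holes" sit well
# below the noise level and are invisible in the output.
```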
 
As J A C S pointed out, the deepest shadows of the A7s images are much darker,
ILCE-7SM3 has its ISO calibrated differently from ILCE-9, too.
Now, we can somewhat correct the A9 image by pulling things down
Responsivity differences corrected after the exposure has ended bring a lot of uncertainty, especially when combined with non-linear raw conversion, such as Adobe converters perform by default.

Such comparison of processed JPEGs isn't something I would rely upon when comparing cameras and sensors.
 
Why do all manufacturers not move in the ISO-invariant direction, since it seems to be a simpler way and offers several stops of highlight detail compared to high ISO?
They don't design for ISO invariance; I doubt that it is even a factor in the specification of cameras. If they were concerned about it, they would move away from the ISO exposure management paradigm completely and use a method properly suited to digital.
A single setting (ISO) performs at least three essentially unrelated functions. This sounds like a botch to me.
 
And to come back to the initial problem: Exposure compensation means that you stretch the tonal steps of the darker parts of the image. That also means all the noise in the original gets stretched alongside and has no in-between values in the result. If you don't add noise, 14-bit data will invariably have larger steps between the tonal values than, for example, 16-bit data...
You are forgetting the effects of dither. The differences between 16-bit raw precision and 14-bit raw precision vary between non-existent and subtle.

I once had a Hasselblad H2D-39. It had 16-bit precision. The read noise was about 32 LSBs.
 
Guys, I think the OP has a point, even if he/she might not know all the internal details, and even if it takes several pages of discussion before you all agree on a common vocabulary. I think many of the comments miss the point.

As noted, few cameras, if any, are completely ISO-invariant, but some of them come very close, at least over part of the ISO scale. For example, the 7RIV and the D7200 both have substantial parts of their ISO ranges with almost constant read noise...
Almost constant input-referred read noise.
 
The problem is simple:
  • Processing is part of digital photography. Just talking about effects you see when analysing and comparing RAW data is certainly interesting, but without processing all that stuff is just data. Which means that a full analysis and understanding of digital photography must include the processing.
  • Processing is (strongly) non-linear and emphasizes the darker parts over the brighter ones, which is somewhat unfortunate if you have a weak signal there, for example because the pixels are small and there is only so much light to work with. It also means that one should be careful about using linear interpretations of RAW data to explain the results of the full processing chain.
Processing does not have to be "strongly" non-linear. In fact, a simple color transform followed by a tonal curve works quite well already. The tonal curve is mostly linear in the midtones and actually diminishes the contrast in the shadows, contrary to what you said. The gamma curve is not a factor here because it eventually gets reversed.

Small pixels are not a reason for a small signal (per unit area).
  • Quite a few things in the processing are done per pixel, meaning that the signal quality per pixel affects those calculations.
I am not sure what "per pixel" means here, but quality per pixel is a questionable metric.
  • Scaling happens towards the end of the processing, which means all the artifacts caused by a weak signal up to that point will not magically disappear just because you are downscaling, for example. This can be so bad that some of the artifacts are still visible in postage-stamp-sized low-res images.
Some will, some will not. I am not sure what your point is here.
And to come back to the initial problem: Exposure compensation means that you stretch the tonal steps of the darker parts of the image. That also means all the noise in the original gets stretched alongside and has no in-between values in the result. If you don't add noise, 14-bit data will invariably have larger steps between the tonal values than, for example, 16-bit data, and the effect will be even more pronounced if you have only 12 bits or, please avoid, 10 bits available. It also means the more you push, the more pronounced the problem becomes.
Why is that a problem? I can see evidence of gaps in the histogram for strongly pushed images. So what? The natural photon noise still dominates over most of the DR, and you are not going to notice the gaps. Also, they would be filled quickly by downsizing or even mild NR - so mild that you would not even notice it. I just tried it.
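For example, a small sketch of the downsizing effect (illustrative):

```python
# 2x2 box averaging of pushed pixels creates in-between tonal values.
import numpy as np

rng = np.random.default_rng(3)
shadows = np.round(rng.uniform(0.0, 8.0, (512, 512)) +
                   rng.normal(0.0, 0.7, (512, 512)))     # noisy quantised data
pushed = shadows * 16                                    # 4-stop push: steps of 16

down = pushed.reshape(256, 2, 256, 2).mean(axis=(1, 3))  # 2x2 box downsize

def smallest_step(a):
    a = a[(a >= 0) & (a < 256)]
    return np.diff(np.unique(a)).min()

print("smallest tonal step before downsizing:", smallest_step(pushed))  # 16.0
print("smallest tonal step after downsizing: ", smallest_step(down))    # 4.0
```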
 
If you want to answer so that I understand, then simplification and easy English is the way to go. I am not technically illiterate, but surely not on the level you guys are. Even if I have trouble following many of the comments 100%, I still pick up stuff and learn, so it isn't wasted, and I am very grateful for the involvement this topic has gotten.
I think you understood pretty well. Other people in this topic have obviously figured this out as well. I can summarize it like this. As an example, imagine a normal exposure made with the Sony 7RM4 at ISO 400.

You could also take a picture of the same subject with 4 stops less exposure, setting the camera at ISO 6400. This picture would obviously be noisier. You could also take the picture of the same subject with 4 stops less exposure, setting the camera at ISO 400. The ISO 6400 setting gives you almost no advantage, but you lose 4 stops of dynamic range. The penalty of the underexposed ISO 400 shot is that your preview and jpeg are much too dark.

The camera should be much smarter about this, and some cameras are. But you knew that already, because it's the whole point of your discussion. I just added some numbers.
 
Guys, I think the OP has a point, even if he/she might not know all the internal details, and even if it takes several pages of discussion before you all agree on a common vocabulary. I think many of the comments miss the point.

As noted, few cameras, if any, are completely ISO-invariant, but some of them come very close, at least over part of the ISO scale. For example, the 7RIV and the D7200 both have substantial parts of their ISO ranges with almost constant read noise...
Almost constant input-referred read noise.
Yes.

And I was very careful to specify the units (electrons), which removes any doubt, but I should have been clearer in that sentence.
 
Look at the posterization problem from pushing things around (too much) and people telling me there is none. "ISO-invariant! All the same!". Ok, then well, D810 ISO64 vs. ISO12800 maybe? Because it always helps to put things to the extreme and see if the hypothesis still holds.
Straw man argument. No one is claiming that ISO invariance holds over that range. The model is this: noise comes from pre-PGA sources in the camera, PGA and post-PGA sources in the camera, and shot noise (I'm leaving out some sources). A camera is ISOless when the PGA and post-PGA sources are negligible wrt the others.
But that is the often-stated conclusion drawn from ISOless sensors, which I just pushed to a more extreme example where it is obvious that it cannot really work. Which in turn means that as soon as you start pushing things up you will degrade image quality, and with 14 bits the limits are tighter than the analog range of most sensors, even if the sensor is ISOless (or close to it).
 
