The point is:
The sensor size defines the area of the image circle...
...the FOV is defined by the lens in conjunction with the sensor size.
-
Thus, we have to look at both...
No. If we want to compare sensor performance, we don't have to care about the optics at all.
Aha, but
without optics you get light on the sensor from angles up to 90 degrees - over the whole surface - which is pretty uncommon in normal use and completely useless if one wants to get a picture...
You miss the point. We don't need to consider lenses or optics when we consider the performance of the sensor.
This doesn't become true by repeating it...
Example:
A sensor whose highest sensitivity lies in the UV region is used in visible light...
...a sensor whose highest sensitivity lies in the IR is used in visible light...
...a sensor whose highest sensitivity lies in the visible range, but which also has microlenses
that filter the incoming light and reflect some of it...
...the color matrix that is needed to calculate a usable RGB image from the different signals...
...add different angles of incidence, different penetration depths, different pixel depths etc...
An example (best viewed as the original on the gallery page, due to color management issues):
a spectrum I took to identify and solve the problem of a camera that has too much IR sensitivity
But if you step in the door and claim that none of this needs to be considered, then it's true, just like that? :-D
...besides that, the pixels of a BSI sensor would have a huge advantage in this case
Not for the metrics we can measure at home without special tools - the full well capacity and read noise are not influenced by how the light is captured.
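To make that concrete, here is a minimal sketch of the kind of at-home measurement meant here, via the standard photon-transfer method. Python; the frames are assumed to be already loaded as numpy arrays, and the function names are mine, not from any library:

```python
import numpy as np

def read_noise_dn(bias1, bias2):
    """Read noise in DN from two bias frames (shortest exposure, lens cap on).

    Differencing the frames removes fixed-pattern noise; the variance of
    the difference is twice that of a single frame, hence the sqrt(2).
    """
    diff = bias1.astype(np.float64) - bias2.astype(np.float64)
    return diff.std() / np.sqrt(2)

def gain_e_per_dn(flat1, flat2, bias_mean):
    """Conversion gain (e-/DN) from two identical flat-field exposures.

    For a shot-noise-limited flat, variance(DN) = signal(DN) / gain,
    so gain = signal / variance. Same sqrt(2) trick as above.
    """
    signal = 0.5 * (flat1.mean() + flat2.mean()) - bias_mean
    variance = (flat1.astype(np.float64) - flat2.astype(np.float64)).var() / 2.0
    return signal / variance

# Full well in electrons then follows from the clipping point:
#   full_well_e = (saturation_DN - bias_mean) * gain_e_per_dn
# Note: no lens appears anywhere in this procedure.
```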
The biggest problem with measurements is defining and delivering reliable general conditions for the measurement - not the measurement itself...
...and I have great doubts that you can provide suitable general conditions for the measurement...
...no need to think about the measurement itself.
- especially those of the NX1, because of their shallow depth, which allows a higher efficiency at greater angles of incidence.
It sounds like you're copying this from a marketing brochure, as the NX1's BSI sensor doesn't have any inherent advantage over other BSI designs because of what you write. The only reason it may offer a better response to light hitting at large angles than the small BSI sensors do is the lateral size of its pixels.
But this is a bit silly, as you really read my words the way the devil reads the Bible.
No arguments, just accusations...
...which means no knowledge, just the need to write 'important' things. ;-)
...therefore we need the same FOV in our comparisons...
FOV is not relevant for sensor-related measurements. f/2 on a 30mm lens and f/2 on a 90mm lens both deliver the same amount of light per unit of sensor area.
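A quick toy calculation of that claim, under a simple thin-lens model (illustrative values only):

```python
# Thin-lens sanity check: the entrance pupil grows with focal length,
# but the illuminance on the sensor depends only on the f-number.
def relative_illuminance(f_number):
    return 1.0 / f_number ** 2   # exposure scales with 1/N^2

for focal_mm, n in [(30, 2.0), (90, 2.0)]:
    pupil_mm = focal_mm / n      # entrance pupil diameter
    print(f"{focal_mm}mm f/{n}: pupil {pupil_mm:.0f}mm, "
          f"relative illuminance {relative_illuminance(n):.2f}")
# Both lines print the same illuminance (0.25): same light per area.
```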
If you take two different copies of the same lens design, you already get differences in light density per pixel,
Not really. Considered statistically, the differences are minute. True, no two lenses are identical,
but this has nothing, absolutely nothing, to do with the topic. Again, just obfuscation.
*MOD EDITED*
because of the different Strehl ratio (due to manufacturing issues) - especially in the corners of the frame...
Strehl ratio is of course not a cause, but just a metric.
You think that the bottom of a bottle vs. an APO makes no difference...
...just look with your own eyes, instead of through a camera sensor, to see the difference.
And also totally irrelevant for this topic. Obfuscation again.
I see that you are obfuscated, but that is not my fault.
There is copy variation in lenses - yes - and what on earth that has to do with SNR or with comparisons between formats is beyond me, other than serving obfuscation purposes or psychological reasons.
If you had ever used a telescope to see the faintest stars, or to photograph them, you would know that the detection limit depends on the Strehl ratio of the optics used.
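As a toy model of that point (all numbers illustrative; only shot noise, sky background, and read noise are considered): the Strehl ratio scales the peak of the star's image, and the peak is what has to clear the noise floor.

```python
import numpy as np

def peak_pixel_snr(star_e, strehl, perfect_peak_fraction,
                   sky_e_per_px, read_noise_e):
    """Toy SNR of the brightest pixel of a star image.

    Aberrations (lower Strehl) spread the same total flux over more
    pixels, so the peak drops while the per-pixel noise floor stays.
    """
    peak = star_e * perfect_peak_fraction * strehl
    noise = np.sqrt(peak + sky_e_per_px + read_noise_e ** 2)
    return peak / noise

# Same star, same sky, same sensor - only the optics differ:
for strehl in (1.0, 0.8, 0.3):
    print(f"Strehl {strehl}: SNR {peak_pixel_snr(2000, strehl, 0.1, 50, 3.0):.1f}")
```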
But if you want to go down the lens-manufacturing road, you should consider that the tolerances for smaller formats are significantly finer than those for larger formats.
I have adjusted, calculated, designed and built optics...
...to get a hint of the light density (energy per area) that is collected.
Energy per area is irrelevant. It's much better to think of the total light as what forms the image. If you compare light density you will have to normalize afterwards, so why bother with the extra step?
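A minimal sketch of the normalization I mean, assuming a simple shot-noise model and nominal sensor areas:

```python
import math

# Same exposure (same f-number, same shutter) on two formats:
# identical light density, but total light scales with sensor area.
def total_light_and_snr(density_photons_per_mm2, area_mm2):
    total = density_photons_per_mm2 * area_mm2
    return total, math.sqrt(total)   # shot-noise-limited SNR ~ sqrt(N)

for name, area_mm2 in [("36x24mm", 864.0), ("23.5x15.7mm", 369.0)]:
    total, snr = total_light_and_snr(1e6, area_mm2)
    print(f"{name}: {total:.2e} photons, relative SNR {snr:.0f}")
```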
The image is formed by the light that hits the sensor in a defined way...
...through a suitable lens, at comparatively low angles of incidence...
...all contributing to the energy per area that is detectable by the sensor...
...thus, the efficiency of the sensor depends on the optics in front of it...
...thus the (usable) SNR depends on the optics.
The sensor's SNR over different saturations, as well as individual pixel saturations, can be measured and calculated, and they do not depend on the lens being used. You don't even need a lens at all for this purpose, though I prefer to keep my sensor clean, so...
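For example, a minimal model of SNR over saturation (illustrative full-well and read-noise values; the point being that no lens parameter enters the formula):

```python
import numpy as np

def snr_db(signal_e, read_noise_e):
    """Per-pixel SNR in dB for a mean signal in electrons.

    Only shot noise and read noise are modeled; no lens appears
    anywhere in the calculation.
    """
    snr = signal_e / np.sqrt(signal_e + read_noise_e ** 2)
    return 20 * np.log10(snr)

full_well_e, read_noise_e = 40000, 3.0   # illustrative values
for frac in (0.01, 0.1, 0.5, 1.0):       # fractions of full well
    print(f"{frac:>5.0%} of full well: {snr_db(frac * full_well_e, read_noise_e):5.1f} dB")
```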
You seem to have missed the point that the sensor will deliver different performance depending on the testing conditions...
...so the conditions that best match reality should be preferred.
But sure, different optics use the sensor's potential to a different degree - if we consider just SNR and ignore other possible issues, vignetting (both lens-based and pixel-based) will limit the performance. It's the same for all cameras.
The vignetting is nothing to worry about, because you can always use the center part of the frame with the same lens.
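For a rough idea of the falloff involved, a simple natural-vignetting (cos^4) sketch; real lenses add their own optical vignetting on top, which this ignores:

```python
import numpy as np

# Natural (cos^4) falloff: relative illuminance at a field angle theta.
def cos4_falloff(theta_deg):
    return np.cos(np.radians(theta_deg)) ** 4

for theta in (0, 10, 20, 30):
    print(f"{theta:2d} deg off-axis: {cos4_falloff(theta):.2f}x center illuminance")
```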
However - I have no idea why you think this is relevant to the subject unless you will now start to argue for superiority of lenses for one system over another.
That you have no idea has, by now, become sufficiently clear.
-
Same FOV, same output size and the same average brightness within the available dynamic range (including the same contrast curve)...
A contrast curve is not part of the topic if you're interested in the sensor's or the camera's performance. A contrast curve is something you add in processing, something that alters the very information we're interested in analyzing.
All RAW data needs a contrast curve to be usable as a picture, no?
No. RAW data has no contrast curves. It is just linear data.
"Need" is different to "has", isn't it?
*MOD EDITED*
FOV is only interesting if you either want to talk about equivalence from the point of view of comparing lens properties across different formats, or are taking test photographs. For analysing the relevant camera/sensor performance it's irrelevant and just obfuscates the discussion.
FOV is what photographers use and need to know if they want to frame something properly, without the need for excessive cropping.
You're missing the point. This thread was not about how to frame or anything like that.
If you want to review (*MOD EDITED*) theoretically possible performance when no image is taken, you should open your own thread...
FOV is not relevant when you want to analyse or compare the camera performance.
A sensor on its own is completely irrelevant without considering the general conditions.
(Unless, of course, you shoot a test target, for example, in which case it may be a good idea - but not necessarily, depending on the nature of the target.)