MCLV, joined on Jan 12, 2020
MCLV: Wow, 7-megapixel stacked CMOS sensor and achieves up to 0 stops of VR with compatible lenses!
Yes, it's on the third page, with the press release. Maybe some numbers got corrupted when it was imported; the official text at nikonusa.com seems to be fine.
Wow, 7-megapixel stacked CMOS sensor and achieves up to 0 stops of VR with compatible lenses!
MCLV: What's going on in images 5 and 6? Was there blurring of areas outside the main subject applied during postprocessing?
@BaggenPhotos: Thank you very much for your response.
I asked because I personally find this effect somewhat weird and distracting, and I would expect it to be even more visible on a large print.
MCLV: What's going on in images 5 and 6? Was there blurring of areas outside the main subject applied during postprocessing?
@Eric Hensel: The author already replied here that he is doing some postprocessing work as well.
MCLV: What's going on in images 5 and 6? Was there blurring of areas outside the main subject applied during postprocessing?
I don't think I'm mistaking grain for the texture of the asphalt, since the effect is also visible in the other photo, on the sign and in the sky. Since I can't attach images here, I made a new thread where I indicated the transition between the grainy and smooth areas: https://www.dpreview.com/forums/post/66510967
MCLV: What's going on in images 5 and 6? Was there blurring of areas outside the main subject applied during postprocessing?
If you look closely at both of these photos, there seems to be an oval area where film grain is visible; the rest of each photo is smooth and without grain. How could that be achieved by a tilt/swing movement?
MCLV: What's going on in images 5 and 6? Was there blurring of areas outside the main subject applied during postprocessing?
I don't think so. If you look below the car, there is a rather sharp boundary on the road between the area that is sharp and grainy and the area that is blurry and without grain. The same goes for the next photo, on the "ROW 1" sign: its lower-left part is sharp and grainy while the upper-right part is blurry and smooth.
What's going on in images 5 and 6? Was there blurring of areas outside the main subject applied during postprocessing?
And now we have one more standard: https://xkcd.com/927/
snapa: Since the human eye can only see a refresh rate of ~72-75 Hz, I think 240 Hz is a waste of money and hardware. But people like to get the best and fastest, like megapixels when they don't really need them. I guess while people have more money than sense, this kind of thing will continue.
@snapa Your original claim was that the human eye cannot see more than XY Hz, implying that we somehow can't see an improvement beyond that. But that's completely false.
Whether you need something is a completely different question from whether it makes a difference.
snapa: Since the human eye can only see a refresh rate of ~72-75 Hz, I think 240 Hz is a waste of money and hardware. But people like to get the best and fastest, like megapixels when they don't really need them. I guess while people have more money than sense, this kind of thing will continue.
@onlyfreeman, you don't need any special superhuman eyesight or brain power. You just need to be able to follow a moving object on the screen. If 1920 pixels per second is too fast, try a slower speed first. Since the abovementioned test always moves the image by an integer number of pixels, you might try setting your other screen to 120 Hz so you get exactly the same movement speed as on the 240 Hz one. The 240 Hz screen should provide sharper results at speeds of 240 pixels per second or faster. By the way, what exact models of LCDs do you have?
snapa: Since the human eye can only see a refresh rate of ~72-75 Hz, I think 240 Hz is a waste of money and hardware. But people like to get the best and fastest, like megapixels when they don't really need them. I guess while people have more money than sense, this kind of thing will continue.
@onlyfreeman, I guess it's about using the right test and knowing what to look for. Then it's much easier to spot in other, less controlled circumstances.
I have the following suggestions and remarks:
1) Check your refresh rate setting in the operating system (I know it sounds stupid, but sometimes when I connect my LCD to a new laptop, the highest refresh rate isn't selected).
2) Make sure that the content matches the refresh rate. The whole thing is about having many unique frames displayed per second; if your frame rate is low, a high refresh rate won't help.
3) Run this test https://testufo.com/framerates#count=3&background=stars&pps=1920 on your 240 Hz screen. You should get rows with 240, 120 and 60 fps, and you should be able to see different amounts of blur when you follow the moving UFO (a rough calculation of the expected blur is sketched after this list).
4) Try running the test on the 144 Hz screen and compare the results.
5) Slow GtG transitions can produce a lot of blur on top of persistence blur and can make distinguishing between different frame rates harder.
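As promised, here is a quick back-of-the-envelope sketch of the blur involved (my own simplified model, assuming the frame rate matches the refresh rate and that the eye tracks the UFO perfectly; slow GtG would add blur on top of this):

```python
# Eye-tracking (persistence) blur on a sample-and-hold display:
# each frame is held for 1/refresh seconds while the eye keeps moving,
# so the image smears across speed/refresh pixels on the retina.

def persistence_blur_px(speed_px_per_s: float, refresh_hz: float) -> float:
    return speed_px_per_s / refresh_hz

# The testufo settings above: UFO moving at 1920 px/s.
for hz in (60, 120, 240):
    print(f"{hz:>3} Hz: ~{persistence_blur_px(1920, hz):.0f} px of blur")
# 60 Hz: ~32 px, 120 Hz: ~16 px, 240 Hz: ~8 px
```

The differences between the rows should therefore be quite visible at 1920 px/s.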
snapa: Since the human eye can only see a refresh rate of ~72-75 Hz, I think 240 Hz is a waste of money and hardware. But people like to get the best and fastest, like megapixels when they don't really need them. I guess while people have more money than sense, this kind of thing will continue.
Actually, a frame rate much higher than 240 fps is necessary to remove the effects of finite frame rate on a sample-and-hold display; see https://www.dpreview.com/forums/post/66130214
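To put a rough number on "much higher" (same simple persistence model as above; my own illustration, not taken from the linked post):

```python
# Refresh rate needed on a sample-and-hold display so that a tracked
# object moving at a given speed smears across at most max_blur_px.
def min_refresh_hz(speed_px_per_s: float, max_blur_px: float = 1.0) -> float:
    return speed_px_per_s / max_blur_px

print(min_refresh_hz(1920))  # 1920.0 -> roughly 2 kHz for sub-pixel blur
```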
MCLV: I'm a little bit disappointed that DPReview perpetuates the myth that the image artifacts visible in Z9 images under flickering lighting are caused by multi-row readout and that they wouldn't be present if single-row readout were used.
That's absolutely not true. ES simply displays more pronounced and obtrusive banding than focal-plane MS because MS is positioned in front of the sensor, which significantly softens the edges of banding in the final image. This effect is aperture dependent; large apertures soften banding more than small ones. See this post for a demonstration: https://www.dpreview.com/forums/post/65887363
For an infinitely small aperture, ES and MS would behave identically. But such an aperture is rarely used in practice, so the performance of ES and MS isn't identical in practical use.
Thanks for looking that up. I checked that link and it is consistent with what I wrote: for high-frequency flicker, the exposure time should be an integer multiple of the flicker period. It is written and illustrated there in section 3-2.
MCLV: I'm a little bit disappointed that DPReview perpetuates the myth that the image artifacts visible in Z9 images under flickering lighting are caused by multi-row readout and that they wouldn't be present if single-row readout were used.
That's absolutely not true. ES simply displays more pronounced and obtrusive banding than focal-plane MS because MS is positioned in front of the sensor, which significantly softens the edges of banding in the final image. This effect is aperture dependent; large apertures soften banding more than small ones. See this post for a demonstration: https://www.dpreview.com/forums/post/65887363
For an infinitely small aperture, ES and MS would behave identically. But such an aperture is rarely used in practice, so the performance of ES and MS isn't identical in practical use.
@MikeRan: Interesting, would you mind sharing a link?
MCLV: I'm a little bit disappointed that DPReview perpetuates the myth that the image artifacts visible in Z9 images under flickering lighting are caused by multi-row readout and that they wouldn't be present if single-row readout were used.
That's absolutely not true. ES simply displays more pronounced and obtrusive banding than focal-plane MS because MS is positioned in front of the sensor, which significantly softens the edges of banding in the final image. This effect is aperture dependent; large apertures soften banding more than small ones. See this post for a demonstration: https://www.dpreview.com/forums/post/65887363
For an infinitely small aperture, ES and MS would behave identically. But such an aperture is rarely used in practice, so the performance of ES and MS isn't identical in practical use.
I see that there is a lot of trust in Ricci and Nikon, so let me elaborate a bit more.
A software update cannot move the ES working plane in front of the sensor, nor can it emulate the soft start and end of exposure of MS.
So the only thing that can be done is to adjust the exposure so that it equals an integer multiple of the light source's period.
You can do that by enabling the user to manually adjust the exposure in small steps and to select one where banding virtually disappears (the snapping logic is sketched below). However, this doesn't help shooters who have to move between areas with different lighting, or who have multiple LED boards with different frequencies in the background depending on where the camera is pointed (for example, when shooting a sports match).
This could be resolved by an automatic system, but cases like multiple LED boards with different frequencies in the background, which must be distinguished from other stripy objects, are definitely not trivial to handle correctly.
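As a minimal sketch of what that manual adjustment boils down to (my own illustration of the idea, not Nikon's implementation):

```python
# Snap an exposure time to the nearest integer multiple of the flicker
# period, so that every row integrates whole flicker cycles and banding
# (ideally) disappears.
def snap_exposure_to_flicker(exposure_s: float, flicker_hz: float) -> float:
    period = 1.0 / flicker_hz  # e.g. 1/100 s (50 Hz mains flickers at 100 Hz)
    cycles = max(1, round(exposure_s / period))
    return cycles * period

print(snap_exposure_to_flicker(1/80, 100))  # 0.01 -> snaps 1/80 s to 1/100 s
```

The hard part is what the post above describes: detecting the flicker frequency (or several at once) automatically and reliably.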
MCLV: I'm a little bit disappointed that DPReview perpetuates the myth that the image artifacts visible in Z9 images under flickering lighting are caused by multi-row readout and that they wouldn't be present if single-row readout were used.
That's absolutely not true. ES simply displays more pronounced and obtrusive banding than focal-plane MS because MS is positioned in front of the sensor, which significantly softens the edges of banding in the final image. This effect is aperture dependent; large apertures soften banding more than small ones. See this post for a demonstration: https://www.dpreview.com/forums/post/65887363
For an infinitely small aperture, ES and MS would behave identically. But such an aperture is rarely used in practice, so the performance of ES and MS isn't identical in practical use.
I'm sure that the situation can be improved in some cases (e.g. a room illuminated by a single flickering light source) by a software update, but it will be very difficult to make it work reliably in all cases. So I'm looking forward to such an update to see what Nikon's engineers come up with, but I wouldn't expect miracles.
I'm a little bit disappointed that DPReview perpetuates the myth that the image artifacts visible in Z9 images under flickering lighting are caused by multi-row readout and that they wouldn't be present if single-row readout were used.
That's absolutely not true. ES simply displays more pronounced and obtrusive banding than focal-plane MS because MS is positioned in front of the sensor, which significantly softens the edges of banding in the final image. This effect is aperture dependent; large apertures soften banding more than small ones. See this post for a demonstration: https://www.dpreview.com/forums/post/65887363
For an infinitely small aperture, ES and MS would behave identically. But such an aperture is rarely used in practice, so the performance of ES and MS isn't identical in practical use.
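As a toy illustration of why the MS edge is softer (my own simplified 1-D model with made-up numbers, not measured data): because the blades travel some distance in front of the sensor, their shadow edge is defocused into a penumbra whose width grows with the aperture diameter, while ES band edges stay hard.

```python
import numpy as np

rows = np.arange(1000)
band = ((rows // 100) % 2).astype(float)  # hard ES-style banding pattern

def ms_banding(pattern: np.ndarray, penumbra_px: int) -> np.ndarray:
    """Soften band edges with a box penumbra of the given width."""
    kernel = np.ones(penumbra_px) / penumbra_px
    return np.convolve(pattern, kernel, mode="same")

wide_open    = ms_banding(band, penumbra_px=80)  # large aperture: soft edges
stopped_down = ms_banding(band, penumbra_px=8)   # small aperture: nearly ES-like
```

As the penumbra shrinks toward zero (an infinitely small aperture), the MS result converges to the hard-edged ES pattern, exactly as described above.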
quintana: Wouldn’t it be easier to have a high-MP CMOS sensor with a Bayer array that uses 4 sensor pixels to produce 1 output pixel?
That way a 96 MP sensor could be used to produce a 24 MP output with every pixel having full color and brightness information.
If my theory is correct, this should give us the sharpness of Foveon technology with the colors and high-ISO capabilities of conventional CMOS sensors with a Bayer array.
Or am I wrong? There must be a reason why it hasn’t been done yet. I’ve been dreaming of such a technology for quite a while now. Maybe the processing power in current cameras isn’t enough to downsample every RAW from 96 to 24 MP while still maintaining a reasonable fps number (5 or more).
Apparently, 8K broadcasting cameras with 3 CMOS sensors exist: https://pro.sony/en_CZ/products/4k-and-hd-camera-systems/uhc-8300
However, a beam splitter for FF sensors would be a pretty huge chunk of glass, and it would need specially designed lenses to correct the chromatic aberration caused by the sheer thickness of glass in the optical path. Another point is that the optical path seems to be so long that mirrorless lenses certainly wouldn't reach focus, and even with SLR lenses it might not work. (As for the 4-to-1 binning itself, a conceptual sketch follows below.)
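A minimal sketch of the 4-to-1 idea quintana describes (a conceptual illustration only; real cameras bin on-sensor or resample far more carefully):

```python
import numpy as np

def quad_bayer_to_rgb(mosaic: np.ndarray) -> np.ndarray:
    """Collapse each RGGB quad of a Bayer mosaic into one RGB pixel:
    (2H, 2W) raw data -> (H, W, 3) image, averaging the two greens."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

rgb = quad_bayer_to_rgb(np.random.rand(96, 96))  # stand-in for raw sensor data
print(rgb.shape)  # (48, 48, 3)
```

Note that each output pixel still samples red, green and blue at slightly different spatial positions, so this doesn't fully match Foveon's co-sited color sampling.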
Baldur: Hi Richard,
Could you please clarify how this sensor can deliver colour information? The electrons seem to cascade down to the transistor together, losing the information about which layer they came from. In the Foveon, each layer has its own output, so there are three values from which to estimate the original colour.
So this is not really a Foveon competitor yet, because "a more complex circuit with dedicated red, green and blue sensors alongside the layered photodetector" seems reminiscent of using a CFA to resolve color.