DSPographer

Lives in the United States
Works as a digital signal processing engineer
Joined on Jan 10, 2005
About me:

Canon 5D Mark II camera.

Canon 28-135/3.5-5.6 IS USM, 24/2.8, 50/1.8 II, 100/2.8 USM macro, 200/2.8L lenses.

Sigma EF-500 DG Super flash.

Comments

Total: 9, showing: 1 – 9

I think this test is a good start for showing where the high-ISO advantage of the D7s should be expected: in dark tones, at exposures that call for very high ISO.

But there may also be some sensitivity difference at all ISO settings for very-large-aperture lenses, because it is easier to make large pixels accept extreme ray angles than small ones. So I would like to see this test also conducted at f/1.4, to see whether there is a noticeable sensitivity difference for bright tones at that f-stop.
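For a sense of the angles involved, here is a minimal sketch (my own numbers, assuming an ideal aberration-free lens at infinity focus, where the numerical aperture is 1/(2N)):

    import math

    def marginal_ray_half_angle_deg(f_number):
        # Ideal lens at infinity focus: NA = 1 / (2 * N),
        # so the marginal ray half-angle is arcsin(1 / (2 * N)).
        return math.degrees(math.asin(1.0 / (2.0 * f_number)))

    for n in (1.4, 2.8, 5.6):
        print(f"f/{n}: marginal ray half-angle = {marginal_ray_half_angle_deg(n):.1f} deg")
    # f/1.4 -> ~20.9 deg, f/2.8 -> ~10.3 deg, f/5.6 -> ~5.1 deg

At f/1.4 the pixels must accept rays at roughly twice the angle they see at f/2.8, which is where a pixel-size difference could plausibly show up.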

Direct link | Posted on Jun 24, 2014 at 13:02 UTC as 28th comment
On Pentax K-3 preview (961 comments in total)

This AA method was discussed almost five years ago in the forums here:
http://www.dpreview.com/forums/post/30005672

Direct link | Posted on Oct 8, 2013 at 13:35 UTC as 218th comment | 7 replies
In reply to:

falconeyes: This is one possible approach to decent low light capability in video.

The other approach is to stop the nonsense of subsampling sensors in video mode, reading out maybe 1 out of 6 pixels. This is what creates noise and aliasing artefacts in video. Unnecessarily so, as shown by a few cameras (Panasonic, Nokia) which don't subsample but create a video signal from all pixels. I.e., it is quite feasible.

Therefore, Thumbs Down for Canon for working around a problem they should rather solve.

The per-pixel read noise of CMOS sensors has been improving over time with process improvements, and the pixel size has been shrinking over time: but for a given process it is possible to design a large pixel with about the same per-pixel read noise as a small pixel. Right now that read noise is about 1.5 e- (or 1.5 h+ for PMOS sensors). Using large pixels therefore reduces the read noise per unit area. With a large pixel and a per-pixel read noise of about 1.5 e-, it becomes unnecessary to use photomultipliers for night-vision sensing: that is the purpose of this sensor. Here is another large-pixel sensor for this purpose:
http://www.imagesensors.org/Past%20Workshops/2013%20Workshop/2013%20Papers/05-12_029_Tower_Paper.pdf
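To make the read-noise-per-area point concrete, here is a toy calculation (the pixel pitches are illustrative; only the 1.5 e- figure comes from the comment above). Tiling the area of one large pixel with N small pixels adds their read noises in quadrature, so the aggregate noise over that area grows as sqrt(N):

    import math

    read_noise_e = 1.5       # per-pixel read noise (electrons), same for both designs
    large_pitch_um = 10.0    # hypothetical large night-vision pixel
    small_pitch_um = 2.5     # hypothetical small pixels tiling the same area

    n_small = (large_pitch_um / small_pitch_um) ** 2   # 16 small pixels per large-pixel area
    noise_small = read_noise_e * math.sqrt(n_small)    # quadrature sum over the area

    print(f"one large pixel:         {read_noise_e:.2f} e-")
    print(f"16 small pixels, summed: {noise_small:.2f} e-")
    # 1.50 e- vs 6.00 e-: the large pixel is 4x quieter per unit area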

Direct link | Posted on Sep 17, 2013 at 14:27 UTC
In reply to:

falconeyes: This is one possible approach to decent low light capability in video.

The other approach is to stop the nonsense of subsampling sensors in video mode, reading out maybe 1 out of 6 pixels. This is what creates noise and aliasing artefacts in video. Unnecessarily so, as shown by a few cameras (Panasonic, Nokia) which don't subsample but create a video signal from all pixels. I.e., it is quite feasible.

Therefore, Thumbs Down for Canon for working around a problem they should rather solve.

The 5D Mark III does NOT throw away pixels during video read-out. Instead, it uses both horizontal and vertical binning to read the sensor in video mode. That is why its low-light sensitivity is so much better than the D800's:
http://falklumo.blogspot.com/2012/04/lumolabs-nikon-d800-video-function.html
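A quick toy simulation of the difference (my own model, not Canon's actual readout; the numbers are illustrative): averaging a 3x3 block cuts the read-noise contribution per output pixel by 3x, while keeping 1 pixel out of 9 leaves it unchanged and discards the light from the other 8:

    import numpy as np

    rng = np.random.default_rng(0)
    read_noise = 1.5  # e- per pixel, illustrative
    frame = rng.normal(0.0, read_noise, size=(1200, 1800))  # dark frame: read noise only

    # 3x3 binning: average each block -> read noise drops by sqrt(9) = 3
    binned = frame.reshape(400, 3, 600, 3).mean(axis=(1, 3))

    # 1-of-9 subsampling: keep every third pixel -> read noise unchanged
    skipped = frame[::3, ::3]

    print(f"binned read noise:  {binned.std():.2f} e-")   # ~0.50
    print(f"skipped read noise: {skipped.std():.2f} e-")  # ~1.50

Charge-domain binning sums pixels before the read amplifier, so in practice the advantage over skipping can be even larger than this digital-averaging model suggests.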

Direct link | Posted on Sep 17, 2013 at 14:20 UTC
In reply to:

SHood: So how long until this global shutter sensor shows up in cameras? That is the big question.

You just need to step back a few years for global shutter in still cameras. Nikon had it a long time ago, and eliminated the circuitry for it because the negatives outweighed the positives. Notice that the global-shutter F55 has ISO 1250 sensitivity while the F5 is ISO 2000.
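For scale, that ISO gap works out to roughly two-thirds of a stop:

    import math

    # Native sensitivities quoted above: F5 (rolling shutter) vs F55 (global shutter)
    print(f"{math.log2(2000 / 1250):.2f} stops")  # ~0.68 stop sensitivity penalty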

Direct link | Posted on Nov 1, 2012 at 19:16 UTC
On Nikon D800 preview (1118 comments in total)
In reply to:

Joseph S Wisniewski: I have a simpler, and more likely, theory...

"This seems like an odd way of doing things; why not just remove the filter altogether? Our best guess is that it simply makes manufacturing the two models side-by-side easier: instead of having to make an entirely different filter stack for the D800E, Nikon just needs to change the first low-pass filter in the overall assembly."

I'm guessing Nikon is using a trick first seen on the Canon 5D II. The clear components in front of the sensor, because their index of refraction is higher than that of air, increase spherical aberration and therefore decrease resolution, an annoyance that is most noticeable with high-resolution sensors.

So, they eliminate one optical component by replacing the clear sensor cover glass with one of the two LiNbO3 filters. Because of that, you can't eliminate that filter without making up a new batch of sensors, which is expensive.

So, instead of eliminating it, you negate it. Cheaper than another sensor run.

Really? How does this work? If I cascade two FIR filters with coefficients of [.5 .5] and [.5 .5], I end up with a filter that is [.25 .5 .25]. I am having difficulty seeing how to combine the two images from the first slab without introducing two new faint images further offset from the central image.
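The cascade arithmetic is easy to check numerically, treating each birefringent slab's double image as a two-tap FIR filter:

    import numpy as np

    h1 = np.array([0.5, 0.5])   # first slab: two equal half-strength images
    h2 = np.array([0.5, 0.5])   # second slab
    print(np.convolve(h1, h2))  # [0.25 0.5  0.25]: three taps, i.e. three offset images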

Direct link | Posted on Feb 8, 2012 at 16:44 UTC
On Nikon D4 overview article (860 comments in total)

Question: What does the connector on the XQD card look like? Does it still use pins in the camera that can be bent, like CompactFlash? I find it strange that *none* of the pictures I could find of the XQD card show the connector end.

Direct link | Posted on Jan 6, 2012 at 14:42 UTC as 178th comment
In reply to:

Ashley Pomeroy: Coo - but is that 35mm full-frame, or another format?

24.5 mm x 13.5 mm, so about the same as an APS-C still sensor:
http://twitter.com/#!/mikeseymour/status/132248706174038017
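Working that out (the APS-C and full-frame figures are the usual nominal ones):

    import math

    w, h = 24.5, 13.5                 # quoted sensor dimensions, mm
    diag = math.hypot(w, h)           # ~28.0 mm
    ff_diag = math.hypot(36.0, 24.0)  # ~43.3 mm full-frame diagonal
    print(f"diagonal {diag:.1f} mm, crop factor {ff_diag / diag:.2f}")
    # diagonal 28.0 mm, crop factor 1.55 -- close to typical APS-C (~1.5)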

Direct link | Posted on Nov 4, 2011 at 02:12 UTC
In reply to:

Fine Art: The mission to fix the Hubble telescope cost $1.5 billion. If all the brainiacs at NASA couldn't do it in software for a billion, what are the chances it will be in your Photoshop upgrade for $300?

Presenting a perfect deconvolution as something you will get in your consumer software is plain fraud. Deconvolution is real, it works. I use it a lot. It is not going to do miracles.

You can buy deconvolution in other software now. I recommend Images Plus. I've been buying it since version 2. I get nothing for recommending it.

Actually, the Hubble problems spurred the refinement of one of the most popular deconvolution methods, Richardson-Lucy. The limitations of deconvolution meant that it was still worth the huge expense to fix the telescope. Note: you probably used a variation of this algorithm, since the Space Telescope Science Institute made the code available without restrictions. Here is one article about it:
http://adsabs.harvard.edu/full/1994ASPC...61..296S
The current Wikipedia article has a nice, brief introduction to deconvolution, including blind deconvolution:
http://en.wikipedia.org/wiki/Deconvolution
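For anyone curious, the Richardson-Lucy update itself is only a few lines. Here is a minimal 1-D numpy sketch (my own toy implementation, not the STScI code; real image software adds regularization and careful edge handling):

    import numpy as np

    def richardson_lucy(observed, psf, iterations=30):
        # Iterative update: estimate *= (observed / (estimate conv psf)) conv psf_mirrored
        estimate = np.full_like(observed, observed.mean())
        psf_mirror = psf[::-1]
        for _ in range(iterations):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
            estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
        return estimate

    # Toy example: two spikes blurred by a known PSF
    psf = np.array([0.25, 0.5, 0.25])
    truth = np.zeros(32)
    truth[10], truth[20] = 1.0, 0.5
    observed = np.convolve(truth, psf, mode="same")
    print(np.round(richardson_lucy(observed, psf), 2))  # spikes at 10 and 20 sharpen back up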

Direct link | Posted on Oct 20, 2011 at 13:23 UTC