DSPographer

Lives in United States United States
Works as a digital signal processing engineer
Joined on Jan 10, 2005
About me:

Canon 5D Mark II camera.

Canon 28-135/3.5-5.6 IS USM, 24/2.8, 50/1.8 II, 100/2.8 USM macro, 200/2.8L lenses.

Sigma EF-500 DG Super flash.

Comments

Total: 19, showing: 1 – 19
In reply to:

Peiasdf: The "sport" ISO value shouldn't be counted since NR at RAW level is performed. Very impressive sensor never the less. I think give A7R II's sensor to Nikon and Pantax and they will match Helium 8K in all but "sport" ISO

DxOMark explains that the Red Weapon results are likely due to temporal noise reduction. If not, then I say the results show something has gone wrong with the measurement. I don't think RED has repealed the laws of physics.

Link | Posted on Jan 11, 2017 at 20:59 UTC
On article Canon EOS M5 real-world sample gallery (240 comments in total)

Are there any examples coming with the 15-45mm kit lens?

Link | Posted on Dec 6, 2016 at 14:27 UTC as 52nd comment | 1 reply
On article Canon EOS M5: What you need to know (564 comments in total)
In reply to:

AnuraGuruge: Are you 100% sure that the "borrows a lot from the EOS 80D, including a Digic 7 processor ...". I thought, obviously incorrectly, that the 80D had a Digic 6.

One of the Canon videos mentioned it used the same *sensor* as the 80D.

Link | Posted on Sep 16, 2016 at 17:39 UTC
On article Modern Mirrorless: Canon EOS M5 Review (1590 comments in total)
In reply to:

jim bennett: So my takeaway is that the in body IS only works for video and only in conjunction with lenses that already have IS? So it will not benefit lenses without IS or legacy non stabilized lenses. If true that is a huge miss for Canon. Probably on purpose but still, that really limits the effectiveness and is different than how most in body stabilization cameras work.

I don't think this is real IBIS like Olympus or some Sony cameras have. It is just a frame-to-frame image shift to make video look more stable; each frame individually is just as blurred by motion as it would be with no stabilization. That is why it doesn't work for still photos.
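As a minimal pure-Python sketch of that kind of electronic (frame-shift) stabilization, illustrative only — the function and frame layout below are my own, not Canon's implementation:

```python
# Electronic stabilization: each output frame is a crop of the input,
# shifted to cancel the measured camera motion between frames. Blur
# *within* a frame is untouched -- only frame-to-frame jitter is removed.

def stabilize_crop(frame, dx, dy, crop_w, crop_h):
    """Crop `frame` (a list of rows) with the window shifted by the
    measured inter-frame motion (dx, dy) to counter it."""
    return [row[dx:dx + crop_w] for row in frame[dy:dy + crop_h]]

# A 6x6 "frame" whose pixel values encode their coordinates (10*y + x).
frame = [[10 * y + x for x in range(6)] for y in range(6)]

# With measured jitter of (1, 1), the stabilized crop starts at (1, 1):
steady = stabilize_crop(frame, 1, 1, 4, 4)
```

Because the output is just a shifted crop, the extra border pixels must be read out; real sensor-shift IBIS needs no such margin.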

Link | Posted on Sep 15, 2016 at 16:01 UTC
In reply to:

junk1: To reduce noise (shoot at 1/2 the ISO), I suppose if you had 2 of these cameras close together, you could shoot at 15FPS (instead of 30) and double the exposure time, and interleave the frames from each of them. Of course the cameras would need to be offset by 1/15th of a second...Since the stars are infinitely away, there would be no parallax, right?
At some point though you can't just keep increasing the exposure time due to the stars moving unless you also are tracking them, in which case why shoot a video.

The processing I am suggesting also results in 30 fps movies: frame 1 out = frame 1 + frame 2 in; frame 2 out = frame 2 + frame 3 in; etc.

The temporal characteristics of the shot noise are different: with two cameras the noise will be uncorrelated from frame to frame, so your two-camera noise would be less visible.

Unlike adaptive post-processed frame averaging, your technique has no way to improve the time resolution.

To reduce noise for far-field images with multiple cameras, you can just gen-lock them for simultaneous exposures then combine their images in post processing. This can also be made to work fairly well for near subjects by having the software estimate the parallax for each part of the image. Or as I mentioned you can create a stereo pair for stereo viewing that will work for near subjects.
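The overlapped averaging described above can be sketched in plain Python (illustrative only; frames are flat lists of linear pixel values, and I average rather than sum, which differs only by a constant gain):

```python
def overlapped_average(frames):
    """Average each frame with its successor: out[i] = (in[i] + in[i+1]) / 2.
    N input frames at 30 fps yield N-1 output frames, still at 30 fps,
    each gathering twice the light, so shot noise drops by sqrt(2)."""
    return [[(a + b) / 2 for a, b in zip(f1, f2)]
            for f1, f2 in zip(frames, frames[1:])]

frames = [[2.0, 4.0], [4.0, 8.0], [6.0, 4.0]]
out = overlapped_average(frames)
# out[0] == [3.0, 6.0], out[1] == [5.0, 6.0]
```

Note the trade-off: consecutive output frames share an input frame, so their noise (and motion) is correlated, unlike the two-camera interleave.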

Link | Posted on Sep 14, 2016 at 18:02 UTC
In reply to:

junk1: To reduce noise (shoot at 1/2 the ISO), I suppose if you had 2 of these cameras close together, you could shoot at 15FPS (instead of 30) and double the exposure time, and interleave the frames from each of them. Of course the cameras would need to be offset by 1/15th of a second...Since the stars are infinitely away, there would be no parallax, right?
At some point though you can't just keep increasing the exposure time due to the stars moving unless you also are tracking them, in which case why shoot a video.

It is simple post-processing to average consecutive 1/30 s exposures from a single camera to yield overlapped 1/15 s frames. Processing also exists to adapt the inter-frame averaging to how stable each region of the frame is, so motion doesn't get blurred. A better use of two cameras would be stereo; our brains tend to do a good job of ignoring noise that is present in only one eye.
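A toy version of that motion-adaptive averaging, in plain Python (my own illustrative sketch — real implementations use motion estimation per block, not a fixed per-pixel threshold):

```python
def adaptive_average(prev, cur, threshold):
    """Blend each pixel with the previous frame only where the scene is
    stable (small inter-frame difference); pass moving pixels through
    unblended so motion is not smeared."""
    return [(p + c) / 2 if abs(c - p) <= threshold else c
            for p, c in zip(prev, cur)]

prev = [10.0, 10.0, 200.0]
cur  = [12.0, 11.0,  20.0]   # the last pixel changed a lot (motion)
out = adaptive_average(prev, cur, threshold=5.0)
# static pixels get averaged, the moving pixel stays sharp: [11.0, 10.5, 20.0]
```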

Link | Posted on Sep 14, 2016 at 13:16 UTC
In reply to:

DSPographer: Great video with an impressive camera.

I am surprised that there isn't a mono-red monitoring mode for the screen so you don't kill your night vision.

A "rigged-for-red" mode could be created either by the camera or the monitor, or you could use a monochrome red-only monitor if you could find one. (The ones I have seen were designed for use on the bridge of ships.) Focus peaking would need to be designed for a monochrome viewer: you might need a mode that uses zebras for focus peaking instead of over-exposure. A scope view or histogram inset would then be useful for exposure.
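The core of such a mode is trivial — a sketch in plain Python (my own illustration; a real implementation might map full luma through the red primary instead of simply dropping G and B):

```python
def rigged_for_red(rgb_frame):
    """Show only the red channel, as red, so the display emits no
    green or blue light and the viewer's night vision is preserved."""
    return [(r, 0, 0) for (r, g, b) in rgb_frame]

frame = [(200, 120, 40), (10, 250, 90)]
out = rigged_for_red(frame)
# -> [(200, 0, 0), (10, 0, 0)]
```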

Link | Posted on Sep 13, 2016 at 13:19 UTC

Great video with an impressive camera.

I am surprised that there isn't a mono-red monitoring mode for the screen so you don't kill your night vision.

Link | Posted on Sep 13, 2016 at 12:48 UTC as 15th comment | 3 replies

Now 3D
I don't know if this happened with the Landsat-8 update, but my area of Connecticut is now fully rendered in 3-D. The houses and trees etc have height that can be rendered with perspective in the 3-D view. The last time I looked the imagery was completely flat.

Link | Posted on Jun 30, 2016 at 12:16 UTC as 4th comment | 3 replies
On article Small but mighty: hands on with the Panasonic GX85/GX80 (309 comments in total)
In reply to:

Joel Halbert: In section 5: "Like the GX8, the GX85 includes 3-axis in-body image stabilization and a Dual IS system which adds two extra axes if you use a lens with built-in stabilization."

This seems directly to contradict the press-release (and the DPR summary just before it), which says that the In-Body system is 5-axis even before you add the O.I.S. tandem assistance:
"Combining an O.I.S.(Optical Image Stabilizer, 2-axis) and a B.I.S.(Body Image Stabilizer, 5-axis)..."

It seems like this needs to be clarified, and possibly section 5's blurb should be re-written. But thanks for getting the three features out so fast.

This write-up talks about reading out extra pixels in video mode, which implies EIS. Real IBIS in video mode doesn't require extra pixels to be read and works completely differently from EIS. So, which type of stabilization is possible in video mode with an unstabilized lens: post-readout EIS, or sensor-shift IBIS?

Link | Posted on Apr 5, 2016 at 13:09 UTC
On article Opinion: Did Sony just do the impossible? (1071 comments in total)

So, Sony's badly implemented lossy raw is a big issue even though the probability of it damaging a photo is very low, but the lack of an AA filter is a plus, even though damaging moire is then fairly likely for photographers with decent lenses and good technique?

I would say the 5Ds has one significant IQ advantage over the A7rII, and it isn't the slight resolution increase or lossless raw: it is the AA filter.

Now, if Sony could use the IBIS hardware for multi-exposure high-resolution capture for static subjects to avoid moire, and for selectable AA in a single exposure with a small uncorrected motion, then they might fix this issue with just a firmware update.

Of course Sony should still fix their raw format.

Link | Posted on Jun 22, 2015 at 20:35 UTC as 205th comment | 3 replies
On article Photokina 2014: Hands-on with RED's Epic Dragon (50 comments in total)
In reply to:

Miki Nemeth: "pulling high-resolution frames from video footage is obvious, especially for photojournalists and documentary filmmakers" I guess at 1/100s shutter speed this works only on very slowly moving scenes. Do sports videos use higher shutter speed to avoid motion blur in still images from video footage?

If shooting motion for stills, then of course you would choose your shutter speed to optimize the still images instead of just using half the frame interval as video usually does. With the RED cameras you can set an HDR-X mode so the camera records a short-exposure and a long-exposure image simultaneously. Of course, you will need to boost the brightness of the shorter exposure if it looks under-exposed, so it is a good thing the Epic captures an image with fairly wide latitude.
Since the HDR-X mode doubles the data rate, you will need to halve the frame rate to use it.
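Matching the short exposure's brightness to the long one is just a linear gain equal to the exposure-time ratio — a sketch in plain Python (my own illustration, values in linear light normalized to 1.0 full scale):

```python
def match_exposure(short_frame, t_short, t_long, clip=1.0):
    """Scale the short exposure's linear values by the exposure-time
    ratio so it matches the long frame's brightness, clipping at full
    scale. Boosting the gain also boosts the visible noise, which is
    why wide sensor latitude matters here."""
    gain = t_long / t_short
    return [min(clip, v * gain) for v in short_frame]

# A 1/2000 s short frame brought up to match a 1/250 s long frame (8x gain):
boosted = match_exposure([0.01, 0.05, 0.2], 1 / 2000, 1 / 250)
# -> [0.08, 0.4, 1.0] (the last value clips)
```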

Link | Posted on Sep 19, 2014 at 20:05 UTC

I think this test is a good start to show where the high-ISO advantage of the D7s should be expected: dark tones at exposures for very high ISO.

But there may also be some sensitivity difference at all ISO settings for very large-aperture lenses. This is because it is easier to make large pixels accept extreme ray angles than it is to make small pixels do so. So, I would like to see this test also conducted at f/1.4 to see if there is a noticeable sensitivity difference for bright tones at that f-stop.

Link | Posted on Jun 24, 2014 at 13:02 UTC as 46th comment
In reply to:

falconeyes: This is one possible approach to decent low light capablity in video.

The other approach is to stop the nonsense to subsample sensors in video mode, reading out maybe 1 out of 6 pixels. This is what creates noise and aliasing artefacts in video. Unneccessarily so, as a few cameras (Panasonic, Nokia) show which don't subsample but create a video signal from all pixels. I.e., it is quite feasible.

Therefore, Thumbs Down for Canon to work around a problem they rather should solve.

The per-pixel read noise of CMOS sensors has been improving over time with process improvements, and pixel sizes have been shrinking; but for a given process it is possible to design a large pixel with about the same per-pixel read noise as a small one. Right now that read noise is about 1.5 e- (or 1.5 h+ for PMOS sensors). Using large pixels therefore reduces the read noise per unit area. With a large pixel and a read noise of about 1.5 e- per pixel, it becomes unnecessary to use photo-multipliers for night-vision sensing: that is the purpose of this sensor. Here is another large-pixel sensor designed for this purpose:
http://www.imagesensors.org/Past%20Workshops/2013%20Workshop/2013%20Papers/05-12_029_Tower_Paper.pdf
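The "read noise per area" argument is simple quadrature arithmetic — a back-of-envelope sketch (assuming, as above, both pixel designs reach about 1.5 e- per read on the same process):

```python
import math

READ_NOISE_E = 1.5  # e- per pixel read, figure from the text

def summed_read_noise(n_small):
    """Read noise over one large-pixel area when that area is instead
    tiled with n_small independently read small pixels whose values are
    summed: independent noises add in quadrature."""
    return READ_NOISE_E * math.sqrt(n_small)

one_large  = READ_NOISE_E            # 1.5 e- over the whole area
four_small = summed_read_noise(4)    # 3.0 e- over the same area
```

So replacing four small pixels with one large one halves the read noise collected over the same sensor area, with no process change.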

Link | Posted on Sep 17, 2013 at 14:27 UTC
In reply to:

falconeyes: This is one possible approach to decent low light capablity in video.

The other approach is to stop the nonsense to subsample sensors in video mode, reading out maybe 1 out of 6 pixels. This is what creates noise and aliasing artefacts in video. Unneccessarily so, as a few cameras (Panasonic, Nokia) show which don't subsample but create a video signal from all pixels. I.e., it is quite feasible.

Therefore, Thumbs Down for Canon to work around a problem they rather should solve.

The 5D Mark III does NOT throw away pixels during video read-out. Instead it uses both horizontal and vertical binning to read the sensor in video mode. That is why its low-light sensitivity is so much better than the D800's:
http://falklumo.blogspot.com/2012/04/lumolabs-nikon-d800-video-function.html
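The difference between binning and subsampling is easy to show in plain Python (an illustrative sketch, not the actual Canon readout path, which bins same-color samples in the analog domain):

```python
def bin_pixels(image, n):
    """Average n x n blocks: every pixel contributes, so light is not
    thrown away and aliasing is reduced, unlike reading 1 pixel in n*n."""
    h, w = len(image), len(image[0])
    return [[sum(image[y + dy][x + dx] for dy in range(n) for dx in range(n)) / (n * n)
             for x in range(0, w, n)]
            for y in range(0, h, n)]

img = [[1, 3, 5, 7],
       [3, 5, 7, 9],
       [2, 4, 6, 8],
       [4, 6, 8, 10]]
binned = bin_pixels(img, 2)
# -> [[3.0, 7.0], [4.0, 8.0]]
```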

Link | Posted on Sep 17, 2013 at 14:20 UTC
In reply to:

SHood: So how long until this global shutter sensor shows up in cameras? That is the big question.

You just need to step back a few years to find global shutter in still cameras. Nikon had it a long time ago; they eliminated the circuitry for it because the negatives outweigh the positives. Notice that the global-shutter F55 has ISO 1250 sensitivity while the F5 is ISO 2000.

Link | Posted on Nov 1, 2012 at 19:16 UTC
On article Nikon D4 overview (839 comments in total)

Question: What does the connector on the XQD card look like? Does it still use pins in the camera that can be bent, like CompactFlash? I find it strange that *none* of the pictures I could find of the XQD card show the connector end.

Link | Posted on Jan 6, 2012 at 14:42 UTC as 179th comment
In reply to:

Ashley Pomeroy: Coo - but is that 35mm full-frame, or another format?

24.5mm x 13.5mm. So about the same as an APS-C still sensor:
http://twitter.com/#!/mikeseymour/status/132248706174038017

Link | Posted on Nov 4, 2011 at 02:12 UTC
In reply to:

Fine Art: The mission to fix the Hubble telescope cost 1.5 Billion $. If all the braniacs at NASA couldn't do it in software for a billion what are the chances it will be in your photoshop upgrade for $300?

Presenting a perfect deconvolution as something you will get in your consumer software is plain fraud. Deconvolution is real, it works. I use it a lot. It is not going to do miracles.

You can buy deconvolution in other software now. I recommend Images Plus. Ive been buying it since version 2. I get nothing for recommending it.

Actually, the Hubble problems spurred the refinement of one of the most popular deconvolution methods, Richardson-Lucy. The limitations of deconvolution meant that it was still worth the huge expense to fix the telescope. Note: you probably used a variation of this algorithm, since the Space Telescope Science Institute made the code available without restrictions. Here is one article about it:
http://adsabs.harvard.edu/full/1994ASPC...61..296S
The current Wikipedia article has a nice brief introduction to deconvolution including blind deconvolution:
http://en.wikipedia.org/wiki/Deconvolution
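For a feel of how Richardson-Lucy works, here is a minimal 1-D sketch in plain Python (my own illustration on noiseless data; real implementations work in 2-D with FFT-based convolution and far more care at the edges):

```python
def correlate_same(x, k):
    """Same-size correlation with zero padding; k has odd length."""
    r = len(k) // 2
    return [sum(x[i + j - r] * k[j]
                for j in range(len(k)) if 0 <= i + j - r < len(x))
            for i in range(len(x))]

def richardson_lucy(observed, psf, iters=50, eps=1e-12):
    """Iteratively refine a nonnegative estimate so that estimate
    blurred by the PSF reproduces the observed data."""
    est = [0.5] * len(observed)   # flat, positive starting guess
    psf_rev = psf[::-1]           # mirrored PSF for the correction step
    for _ in range(iters):
        conv = correlate_same(est, psf)
        ratio = [o / (c + eps) for o, c in zip(observed, conv)]
        corr = correlate_same(ratio, psf_rev)
        est = [e * c for e, c in zip(est, corr)]
    return est

# Blur a sharp spike with a known PSF, then deconvolve:
truth = [0.0] * 15
truth[7] = 1.0
psf = [0.25, 0.5, 0.25]
blurred = correlate_same(truth, psf)
restored = richardson_lucy(blurred, psf)
```

On noisy data the iteration must be stopped early or regularized, which is exactly the "no miracles" limitation noted above.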

Link | Posted on Oct 20, 2011 at 13:23 UTC