ProfHankD

Lives in Lexington, United States
Works as a Professor
Has a website at http://aggregate.org/hankd/
Joined on Mar 27, 2008
About me:

Plan: to change the way people think about and use cameras by taking advantage of cameras as computing systems; engineering camera systems to provide new abilities and improved quality.

Comments

Total: 1396, showing: 81 – 100
On article Sony a9 banding issue: fact or fiction? (733 comments in total)
In reply to:

Aroart: Jared Polin is a photographer who could not hack it as a professional, so he decided to teach... This is not my conclusion; he said it himself in an interview with Scott Kelby... I would not take any photographer seriously when their catch phrase is "I shoot RAW and I don't crop"...

I believe the complete quote from Annie Hall (written by Woody Allen) is:
“Those who can't do, teach. And those who can't teach, teach gym.”

Fro doesn't teach gym... I think he does teach some useful stuff in an entertaining online sort of way. This A9 problem isn't really specific to the A9, but there's nothing wrong with him calling attention to it and others diagnosing it.

Link | Posted on Jun 30, 2017 at 03:05 UTC
On article Sony a9 banding issue: fact or fiction? (733 comments in total)
In reply to:

new boyz: The A9 II will have a global shutter, specifically to eliminate the banding issue. Probably.

Global shutter would NOT eliminate THIS TYPE of banding.

Link | Posted on Jun 30, 2017 at 02:13 UTC
On article Sony a9 banding issue: fact or fiction? (733 comments in total)
In reply to:

ProfHankD: Just two comments on this:

1. This isn't really a problem with the Sony A9, but a problem with the whole concept of frame capture. Even a perfect global electronic shutter would reveal temporal artifacts from LED refresh scans. The key to solving this is to make many captures over a longer time interval and recognize that the correct perceptual still rendering should average-out the flicker (which should be feasible using TDCI).

2. If this artifacting was a really important issue to many folks (I'm not convinced it is), I could implement credible computational repair on the A9 because the 12-line pattern is easy to recognize -- that makes repair easier than it would be for most cameras. Repair could be done much like how KARWY removes ARW compression artifacts, but could be done directly on the compressed ARW file by modifying 32-pixel compressed blocks in-place.

Jesse_Just_Him: Yup. That's why I'm saying the real fix is longer-term averaging, but the fixed 12-line timing properties would make computational recognition, and thus credible repair, easier.

Link | Posted on Jun 30, 2017 at 02:11 UTC
On article Sony a9 banding issue: fact or fiction? (733 comments in total)

Just two comments on this:

1. This isn't really a problem with the Sony A9, but a problem with the whole concept of frame capture. Even a perfect global electronic shutter would reveal temporal artifacts from LED refresh scans. The key to solving this is to make many captures over a longer time interval and recognize that the correct perceptual still rendering should average-out the flicker (which should be feasible using TDCI).

2. If this artifacting was a really important issue to many folks (I'm not convinced it is), I could implement credible computational repair on the A9 because the 12-line pattern is easy to recognize -- that makes repair easier than it would be for most cameras. Repair could be done much like how KARWY removes ARW compression artifacts, but could be done directly on the compressed ARW file by modifying 32-pixel compressed blocks in-place.
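As a rough sense of what such recognition and repair could look like (only a sketch of the general idea, operating on a decoded raw frame rather than in-place on compressed ARW blocks as described above, and assuming the banding appears as a multiplicative row-gain pattern repeating every 12 lines, uncorrelated with scene content):

```python
import numpy as np

# Only a sketch, NOT the in-place compressed-ARW repair described above.
PERIOD = 12  # the repeating line count mentioned in the comment

def estimate_phase_gains(raw, period=PERIOD):
    """One gain per phase (row index mod period), estimated from row medians
    and normalized so the average correction is 1.0."""
    row_med = np.median(raw.astype(np.float64), axis=1)
    gains = np.array([np.median(row_med[p::period]) for p in range(period)])
    return gains / gains.mean()

def repair_banding(raw, period=PERIOD):
    """Divide each row by the estimated gain of its phase in the pattern."""
    gains = estimate_phase_gains(raw, period)
    phases = np.arange(raw.shape[0]) % period
    return raw / gains[phases][:, None]
```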

Link | Posted on Jun 30, 2017 at 01:53 UTC as 131st comment | 11 replies
In reply to:

Nick8: Fully articulated screen becomes standard for Canon.
Congratulations for that.

Yup. That's pretty much THE news here and actually very significant -- especially for a camera without an EVF.

Link | Posted on Jun 29, 2017 at 11:46 UTC
In reply to:

Marty4650: Nice photos. Well written article too.

I really think Canon EOS M is the real sleeper in the MILC world. This is a very capable camera that is only hampered by being priced way too high, and having such a limited selection of native AF lenses.

Once those two issues are resolved, it may end up being the dominant crop sensor MILC format, due to the huge installed base of Canon users, their brand loyalty, and the overall competence of Canon in engineering, design and marketing.

Sony, Fuji, Olympus and Panasonic should be very grateful that Canon chooses to price EOS M cameras and lenses so high.

Gotta disagree here. Canon is still using seriously outdated guts in most of their cameras, and that really hurts the M line. Of course, the old guts give Canon a big profit margin, are why you get a "refined" feel (years of minor tweaks), and enable hacking -- but Magic Lantern only supports the original EOS-M (which I have). There is work on a CHDK port to a newer model....

The only real win I see in the M line is that it has a relatively large diameter mount, which does increase flexibility by avoiding potential vignetting with some styles of lenses and potentially larger sensors.

In any case, I keep hoping that Canon will wise up and (1) start supporting/working with the communities that add features via ML and CHDK to define what future cameras should do and (2) upgrade the guts to be more competitive. I think those are the only ways Canon has a long-term future in cameras. I don't think they have to make things cheaper, because brand loyalty....

Link | Posted on Jun 28, 2017 at 15:09 UTC
In reply to:

ProfHankD: Ok, so they're making a 4-element fast 85mm for soft portraits. Nothing wrong with that (except maybe the price is a bit high).

However, why are at least some of their sample images so HEAVILY postprocessed? It's particularly obvious from the grain pattern in the Venedict shot at https://lensbaby.com/product/velvet-85/ . This isn't an STF lens, so the softness is primarily coming from deliberately undercorrected SA, which is obvious in the glow in most of the shots (e.g., look at the woman's chin in the Venedict shot); most of us who have used an old fast 50 know what SA glow looks like.

PS: I love the CarolineJensen shot, which doesn't look very postprocessed and seems an excellent example of what one can do with this lens and a little creativity. This lens apparently also does macro -- wonder why no shots showing that?

Murdey and dash2k8: Coma isn't quite the same thing as SA: https://www.lensrentals.com/blog/2010/10/the-seven-deadly-aberrations/

Link | Posted on Jun 28, 2017 at 03:04 UTC
In reply to:

ProfHankD: Ok, so they're making a 4-element fast 85mm for soft portraits. Nothing wrong with that (except maybe the price is a bit high).

However, why are at least some of their sample images so HEAVILY postprocessed? It's particularly obvious from the grain pattern in the Venedict shot at https://lensbaby.com/product/velvet-85/ . This isn't an STF lens, so the softness is primarily coming from deliberately undercorrected SA, which is obvious in the glow in most of the shots (e.g., look at the woman's chin in the Venedict shot); most of us who have used an old fast 50 know what SA glow looks like.

PS: I love the CarolineJensen shot, which doesn't look very postprocessed and seems an excellent example of what one can do with this lens and a little creativity. This lens apparently also does macro -- wonder why no shots showing that?

Bromberger: Not an old lens user, eh? ;-)

SA is spherical aberration, which causes a halo-like effect around bright areas because the rays coming through the edges of the lens don't focus to the same point as those coming through the middle. Thus, you get a sharp focus from the central rays, but with lots of glow around it. Stopping down clips those marginal rays, removing the glowing halos.

Normally, SA is carefully corrected. However, leaving it slightly undercorrected makes the bokeh for things past the focus point look smoother, so old fast 50s and some portrait lenses deliberately leave SA undercorrected. Velvet clearly leaves SA very significantly undercorrected.
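To see why that looks the way it does, here's a toy model (purely illustrative, not a real optical simulation): treat the point spread function as a sharp core from the central rays plus a broad, faint halo from the marginal rays; stopping down amounts to shrinking the halo's weight toward zero.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sa_glow(image, halo_sigma=12.0, halo_weight=0.35):
    """Toy undercorrected-SA look on a single-channel image: blend the sharp
    image (central rays) with a heavily blurred copy (marginal-ray halo)."""
    sharp = np.asarray(image, dtype=np.float64)
    halo = gaussian_filter(sharp, sigma=halo_sigma)
    return (1.0 - halo_weight) * sharp + halo_weight * halo

# "Wide open": visible glow around bright areas.  "Stopped down": set
# halo_weight near 0 (the marginal rays are clipped) and the glow vanishes.
```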

Link | Posted on Jun 27, 2017 at 23:36 UTC

Ok, so they're making a 4-element fast 85mm for soft portraits. Nothing wrong with that (except maybe the price is a bit high).

However, why are at least some of their sample images so HEAVILY postprocessed? It's particularly obvious from the grain pattern in the Venedict shot at https://lensbaby.com/product/velvet-85/ . This isn't an STF lens, so the softness is primarily coming from deliberately undercorrected SA, which is obvious in the glow in most of the shots (e.g., look at the woman's chin in the Venedict shot); most of us who have used an old fast 50 know what SA glow looks like.

PS: I love the CarolineJensen shot, which doesn't look very postprocessed and seems an excellent example of what one can do with this lens and a little creativity. This lens apparently also does macro -- wonder why no shots showing that?

Link | Posted on Jun 27, 2017 at 21:22 UTC as 18th comment | 8 replies

Kodachrome... it gives such nice bright colors... everything looks worse in black and white. Yeah, it does.

Really sad to think that nobody has an alternative chemistry that could at least recover some color information. I wonder how much undeveloped film like that is still sitting around with unseen latent images....

Link | Posted on Jun 27, 2017 at 15:09 UTC as 22nd comment | 2 replies
In reply to:

ProfHankD: Nice little project/demo. Foolish me, I would have moved the film plane to focus... the electrowetting is so much cuter. ;-)

E Dinkla: Nice bit of history. I was aware of various uses of fluid-filled pouches as lenses, but hadn't seen anything as old as what you cited....

Link | Posted on Jun 23, 2017 at 13:21 UTC

Nice little project/demo. Foolish me, I would have moved the film plane to focus... the electrowetting is so much cuter. ;-)

Link | Posted on Jun 21, 2017 at 23:15 UTC as 48th comment | 4 replies
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

mosswings: The primary temporal artifact in video is normally that a small shutter angle leaves temporal gaps in the image data, and pretending that a 1/500s shutter sampling of a 24FPS frame actually covers the full 1/24s is the problem -- TDCI interpolates the intervals for which the shutter isn't sampling.

There is a different set of issues when scene motion is faster than sampling rate, which we assume isn't the case for TDCI in general. There is also yet another issue in display motion fusion/flicker, which has nothing to do with TDCI, but is why theaters traditionally display 24FPS frames triple-flashing each frame.

Link | Posted on Jun 21, 2017 at 18:50 UTC
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

MrBrightSide: Let's say you have a pixel sampled at 10FPS with a 180-degree shutter angle (1/20s shutter speed). Suppose at time T=1s a sample reads 200 and at T=1.1s a sample reads 180. Our simplest TDCI model treats this as: from T=1 to 1.05s the value is 200; from T=1.05 to 1.1s the pixel goes linearly from 200 to 180 (average value 190); and from T=1.1 to 1.15s the value is 180. A virtual exposure basically integrates the area under that curve; for example, for T=1.05 to 1.075s, we would say the pixel value is 195. In other words, we interpolate between the samples in the time domain.

The other odd aspect of TDCI is that it's all based on a detailed error model. Thus, for example, if the error model told us that 200 and 180 were really differing only by sampling noise (i.e., combination of photon shot noise and camera noise), we'd say the pixel really had a value of 190 for the entire 1.0-1.15s interval.
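Here's that worked example as a minimal sketch (my own illustration of the piecewise model just described, not the actual TIK code; the hold-then-ramp knot construction matches the T=1-1.05 / 1.05-1.1 / 1.1-1.15 behavior above):

```python
import numpy as np

def tdci_knots(samples):
    """Knot points for the simple model above: each sampled value holds from
    its sample time until the midpoint of the next sample, then ramps
    linearly to the next value (and holds one half-interval past the end)."""
    ts, vs = zip(*sorted(samples))
    kt, kv = [], []
    for i, (t, v) in enumerate(zip(ts, vs)):
        kt.append(t); kv.append(v)
        if i + 1 < len(ts):
            kt.append((t + ts[i + 1]) / 2.0)      # hold until the midpoint
            kv.append(v)
    kt.append(ts[-1] + (ts[-1] - ts[-2]) / 2.0)   # hold past the last sample
    kv.append(vs[-1])
    return np.array(kt), np.array(kv)

def virtual_exposure(samples, t0, t1, n=100001):
    """Average value over [t0, t1]: the 'area under the curve' divided by the
    interval length, evaluated on a dense time grid."""
    kt, kv = tdci_knots(samples)
    t = np.linspace(t0, t1, n)
    return np.interp(t, kt, kv).mean()

samples = [(1.0, 200.0), (1.1, 180.0)]
print(virtual_exposure(samples, 1.05, 1.075))  # ~195, as in the example above
print(virtual_exposure(samples, 1.0, 1.15))    # ~190 over the whole span
# Per the error-model point: if |200 - 180| were within expected sampling
# noise, you would merge the samples to their mean (190) before building the
# knots, giving a flat 190 over the whole 1.0-1.15s interval.
```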

Link | Posted on Jun 21, 2017 at 17:39 UTC
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

mosswings: It is certainly possible to do motion estimation (that tech was well developed for MPEG), but it's not necessary if frame rate > rate of scene appearance change. The temporal processing is basically creating smooth waveforms for each pixel's value over time and refining the estimates of at what specific times the value changes. Here's another EI2017 paper, this one discussing temporal superresolution processing: http://aggregate.org/DIT/ei2017TSR.pdf
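As a toy illustration of the "refine at what times the value changes" idea (my own simplification, not the algorithm in the linked paper): if a pixel steps from one value to another and a sample's exposure straddles the step, the measured value is a mix of the two, and the mixing fraction localizes the step within the exposure.

```python
def refine_step_time(sample_value, before, after, t_open, t_close):
    """Toy temporal superresolution: assume the pixel held `before` until some
    instant within the exposure [t_open, t_close], then held `after`; the
    sample reads a time-weighted mix, so the mix fraction gives the instant."""
    frac_before = (sample_value - after) / (before - after)
    return t_open + frac_before * (t_close - t_open)

# e.g. pixel steps from 200 to 100; a 1/100s exposure opening at t=0.50s
# reads 175, i.e. 75% of the exposure was spent at 200, so the step was
# at about t=0.5075s -- finer than the frame time.
print(refine_step_time(175.0, 200.0, 100.0, 0.50, 0.51))
```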

Link | Posted on Jun 20, 2017 at 22:27 UTC
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

MrBrightSide: The generic answer is that there isn't any blur per se, but processing is done at a much finer temporal granularity (actually, 1ns resolution in the TIK software).

With a custom sensor, each pixel is independently sampling (not synced to others) and temporal interpolation can resolve scene appearance change events to a limit of about 1us.

With conventional sensors, we still do temporal interpolation, but the fact that pixel samples are temporally correlated does cause artifacting unless the frame rate is faster than the rate of scene content change; empirically, by about 240FPS artifacting is rarely an issue. Keep in mind this is using high framerates for ordinary scenes, not high-speed events. Alternatively, we can make use of the temporal skew of pixel samples by rolling electronic shuttering and of deliberate temporal skews using multiple cameras to capture the same scene (e.g., as in our FourSee camera array).

Link | Posted on Jun 20, 2017 at 18:20 UTC
In reply to:

Matsu: Interesting, but you can't fully converge video and stills shooting, even when the device is fully capable of both. That's because your choice of shutter speeds/angles will differ depending on whether your priority is to freeze a frame or produce smooth motion capture. Same is true of your lighting regimen where lighting for motion vs stills will differ again based on the shutter speeds required for each.

See my TDCI post above. With TDCI you get full control of virtual exposures after capture.

Link | Posted on Jun 20, 2017 at 14:33 UTC

The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

Link | Posted on Jun 20, 2017 at 14:32 UTC as 44th comment | 14 replies
On article Olympus TG-5 gallery updated (71 comments in total)
In reply to:

Ben Herrmann: On the surface, these images look great - with a nice color tonality and clarity typically missing from cameras of this genre. But regardless of MP count, you can't escape the limitations of these small sensors.

Enlarge each image to 100% and you'll see the typical compression artifacts, noise, or what have you that has plagued most of these much smaller sensors, regardless of brand. But Olympus did a great job on this one. I can definitely see bringing this camera along when a heavy duty pocket camera will do.

And wallaaaaa...it has RAW capabilities which will help quite a bit in order to achieve the best IQ possible for the genre.

Enders Shadow: You're right; underwater is an option there. In truth, the awkwardness comes from me mixing underwater and just above the water all the time, making white balance changes by any mechanism a pain....

Link | Posted on Jun 18, 2017 at 22:27 UTC
On article Olympus TG-5 gallery updated (71 comments in total)
In reply to:

Ben Herrmann: On the surface, these images look great - with a nice color tonality and clarity typically missing from cameras of this genre. But regardless of MP count, you can't escape the limitations of these small sensors.

Enlarge each image to 100% and you'll see the typical compression artifacts, noise, or what have you that has plagued most of these much smaller sensors, regardless of brand. But Olympus did a great job on this one. I can definitely see bringing this camera along when a heavy duty pocket camera will do.

And wallaaaaa...it has RAW capabilities which will help quite a bit in order to achieve the best IQ possible for the genre.

Barry Stewart: Underwater photos are a bit of an exception -- JPEGs discard a lot of color information (but not luminance), which is a bad thing if the scene has a very strong color cast that you don't compensate for. At least on the TG-860, it's unfortunate that underwater color balance is a mode, not a setting you can apply to any mode. Still, I've not had any major issues correcting for shallow-depth blue tint on JPEGs in post....
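For what it's worth, even a simple gray-world correction handles a mild blue cast reasonably well (a generic technique, not necessarily the exact post workflow referred to above; file names are placeholders):

```python
import numpy as np
from PIL import Image

def gray_world(path_in, path_out):
    """Scale each channel so its mean matches the overall mean, boosting the
    red channel that water absorbs first."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float64)
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gains = means.mean() / means             # weak channels get gain > 1
    out = np.clip(img * gains, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(path_out)

gray_world("snorkel_shot.jpg", "snorkel_shot_wb.jpg")  # hypothetical filenames
```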

Link | Posted on Jun 18, 2017 at 17:35 UTC