ProfHankD

Lives in Lexington, United States
Works as a Professor
Has a website at http://aggregate.org/hankd/
Joined on Mar 27, 2008
About me:

Plan: to change the way people think about and use cameras by taking advantage of cameras as computing systems; engineering camera systems to provide new abilities and improved quality.

Comments

Total: 1306, showing: 1 – 20
In reply to:

ProfHankD: Nice little project/demo. Foolish me, I would have moved the film plane to focus... the electrowetting is so much cuter. ;-)

Reply to E Dinkla: Nice bit of history. I was aware of various uses of fluid-filled pouches as lenses, but hadn't seen anything as old as what you cited....

Link | Posted on Jun 23, 2017 at 13:21 UTC

Nice little project/demo. Foolish me, I would have moved the film plane to focus... the electrowetting is so much cuter. ;-)

Link | Posted on Jun 21, 2017 at 23:15 UTC as 46th comment | 4 replies
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

Reply to mosswings: The primary temporal artifact in video is normally that a small shutter angle leaves temporal gaps in the image data; pretending that a 1/500s shutter sample of a 24FPS frame actually covers the full 1/24s is the problem -- TDCI interpolates the intervals during which the shutter isn't sampling.

There is a different set of issues when scene motion is faster than the sampling rate, which we assume isn't the case for TDCI in general. Display motion fusion/flicker is yet another issue; it has nothing to do with TDCI, but it is why theaters traditionally triple-flash each frame when projecting 24FPS film.
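
To make that gap concrete: the sampled fraction of each frame interval is just shutter time × frame rate (equivalently, shutter angle / 360). A quick sketch of the arithmetic for the 1/500s-at-24FPS example above:

```python
# Fraction of each frame interval actually sampled by the shutter.
# Illustrative arithmetic only, using the numbers from the comment above.
fps = 24.0
shutter = 1.0 / 500.0                   # exposure time per frame, seconds

frame_interval = 1.0 / fps              # 1/24 s between frame starts
coverage = shutter / frame_interval     # fraction of time actually sampled
shutter_angle = 360.0 * coverage        # the same fraction in degrees

print(f"coverage = {coverage:.1%}, shutter angle = {shutter_angle:.1f} deg")
# -> coverage = 4.8%, shutter angle = 17.3 deg; the other ~95% of each
#    1/24 s interval is the gap that TDCI interpolates.
```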

Link | Posted on Jun 21, 2017 at 18:50 UTC
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

Reply to MrBrightSide: Let's say you have a pixel sampled at 10FPS with a 180 degree shutter angle (1/20s shutter speed). Suppose at time T=1s a sample reads 200 and at T=1.1s a sample reads 180. Our simplest TDCI model treats this as: from T=1 to 1.05s the pixel value is 200; from T=1.05 to 1.1s it goes linearly from 200 to 180 (average value 190); and from T=1.1 to 1.15s it is 180. A virtual exposure basically integrates the area under that curve; for example, over T=1.05-1.075s the pixel value would be 195. In other words, we interpolate between the samples in the time domain.

The other odd aspect of TDCI is that it's all based on a detailed error model. For example, if the error model told us that 200 and 180 differed only by sampling noise (i.e., the combination of photon shot noise and camera noise), we'd say the pixel really held a value of 190 for the entire 1.0-1.15s interval.
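
A minimal sketch of that model and integration (my toy code, not the actual TIK implementation; the numbers match the example above):

```python
# Toy TDCI pixel model: hold the old value until midway between samples,
# ramp linearly to the new value, then hold. A virtual exposure is the
# average of this waveform over the requested time interval.

def pixel_value(t, t0=1.0, v0=200.0, t1=1.1, v1=180.0):
    """Piecewise-linear waveform built from samples (t0,v0) and (t1,v1)."""
    mid = (t0 + t1) / 2.0                # 1.05s: the ramp starts here
    if t <= mid:
        return v0
    if t >= t1:
        return v1
    return v0 + (v1 - v0) * (t - mid) / (t1 - mid)

def virtual_exposure(t_start, t_end, steps=10000):
    """Integrate the waveform over [t_start, t_end] and normalize."""
    dt = (t_end - t_start) / steps
    total = sum(pixel_value(t_start + (i + 0.5) * dt) for i in range(steps))
    return total / steps

print(virtual_exposure(1.05, 1.075))     # ~195, as in the example
print(virtual_exposure(1.05, 1.10))      # ~190, the ramp's average
```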

Link | Posted on Jun 21, 2017 at 17:39 UTC
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

Reply to mosswings: It is certainly possible to do motion estimation (that tech was well developed for MPEG), but it's not necessary if the frame rate exceeds the rate of scene appearance change. The temporal processing basically creates a smooth waveform for each pixel's value over time and refines the estimates of the specific times at which the value changes. Here's another EI2017 paper, this one discussing temporal superresolution processing: http://aggregate.org/DIT/ei2017TSR.pdf
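
As a toy illustration of why refining change times works (my construction, not the TSR paper's algorithm): each pixel's samples bracket a change between its last "old" sample and first "new" sample, and intersecting brackets from temporally skewed pixels localizes the change far better than any single pixel's frame period:

```python
# Several pixels see the same abrupt scene change but sample at different
# phase offsets; intersecting their brackets narrows the change-time
# estimate well below one frame period. (Toy construction only.)

def bracket(sample_times, change_t):
    """Interval between the last sample before and first sample after."""
    before = max(t for t in sample_times if t < change_t)
    after = min(t for t in sample_times if t >= change_t)
    return before, after

period, true_change = 1.0 / 30.0, 0.5123    # 30FPS pixels, unknown event
phases = [0.0, 0.011, 0.019, 0.027]         # per-pixel sampling skews (s)

lo, hi = 0.0, float("inf")
for p in phases:
    times = [p + k * period for k in range(40)]
    b_lo, b_hi = bracket(times, true_change)
    lo, hi = max(lo, b_lo), min(hi, b_hi)   # intersect the brackets

print(f"change in ({lo:.4f}, {hi:.4f}] s, width {hi - lo:.4f} s")
# One pixel alone localizes the change only to a 1/30 s window; the
# skewed set above narrows it to ~8 ms.
```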

Link | Posted on Jun 20, 2017 at 22:27 UTC
In reply to:

ProfHankD: The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf

Reply to MrBrightSide: The generic answer is that there isn't any blur per se, but processing is done at a much finer temporal granularity (1ns resolution in the TIK software).

With a custom sensor, each pixel samples independently (not synced to the others), and temporal interpolation can resolve scene appearance change events down to a limit of about 1us.

With conventional sensors, we still do temporal interpolation, but the fact that pixel samples are temporally correlated does cause artifacting unless the frame rate is faster than the rate of scene content change; empirically, by about 240FPS artifacting is rarely an issue. Keep in mind this means using high framerates for ordinary scenes, not high-speed events. Alternatively, we can exploit the temporal skew of pixel samples produced by rolling electronic shutters, or deliberate temporal skews across multiple cameras capturing the same scene (e.g., as in our FourSee camera array).
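
For the conventional-sensor case, the rolling-shutter skew gives every row its own sample time; a minimal sketch of recovering per-row timestamps (parameter values are assumptions, not any particular camera's specs):

```python
# With a rolling electronic shutter, rows are read out sequentially, so
# each row of each frame is sampled at a distinct time. Treating rows as
# independently timed samples is how the skew becomes useful rather than
# just an artifact source. (Assumed example parameters.)

def row_timestamp(frame_index, row, fps=240.0, rows=1080,
                  readout_time=1.0 / 480.0):
    """Time at which `row` of frame `frame_index` was sampled.

    readout_time: how long the readout takes to sweep all rows.
    """
    frame_start = frame_index / fps
    return frame_start + (row / rows) * readout_time

# Top and bottom rows of frame 0 are ~2ms apart within a single frame:
print(row_timestamp(0, 0), row_timestamp(0, 1079))
```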

Link | Posted on Jun 20, 2017 at 18:20 UTC
In reply to:

Matsu: Interesting, but you can't fully converge video and stills shooting, even when the device is fully capable of both. That's because your choice of shutter speeds/angles will differ depending on whether your priority is to freeze a frame or produce smooth motion capture. Same is true of your lighting regimen where lighting for motion vs stills will differ again based on the shutter speeds required for each.

See my TDCI post above. With TDCI you get full control of virtual exposures after capture.

Link | Posted on Jun 20, 2017 at 14:33 UTC

The temporal trick discussed here is a subset of what I've been publishing in my research for several years: TDCI -- Time Domain Continuous Imaging. The difference is that TDCI doesn't just allow you to pick which frame after the fact, but lets you pick any arbitrary time interval for a virtual exposure after the fact. In other words, you can change BOTH when you took the photo and what the shutter speed was; of course, you can also select any virtual framerate and shutter angle for video rendering.

Although TDCI ideally uses a new type of sensor, it can be done using conventional cameras (preferably with high-speed video modes) and our free TIK software (which is nearly ready for full open source release). Our EI2017 paper on TIK is http://aggregate.org/DIT/ei2017TIK.pdf
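
To make "any virtual framerate and shutter angle" concrete, here's a minimal sketch of how the per-frame integration windows could be derived (my illustration of the idea; see the EI2017 TIK paper linked above for the actual pipeline):

```python
# Each rendered frame is a virtual exposure over a window whose length is
# set by the chosen shutter angle; rendering then just integrates each
# pixel's modeled waveform over its window, all chosen after capture.

def virtual_frame_windows(n_frames, fps=24.0, shutter_angle=180.0):
    """Yield (t_open, t_close) in seconds for each rendered frame."""
    frame_interval = 1.0 / fps
    exposure = frame_interval * (shutter_angle / 360.0)
    for i in range(n_frames):
        t_open = i * frame_interval
        yield t_open, t_open + exposure

for window in virtual_frame_windows(3, fps=24.0, shutter_angle=180.0):
    print(window)    # 24FPS at 180 degrees -> 1/48s virtual exposures
```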

Link | Posted on Jun 20, 2017 at 14:32 UTC as 43rd comment | 14 replies
On article Olympus TG-5 gallery updated (62 comments in total)
In reply to:

Ben Herrmann: On the surface, these images look great - with a nice color tonality and clarity typically missing from cameras of this genre. But regardless of MP count, you can't escape the limitations of these small sensors.

Enlarge each image to 100% and you'll see the typical compression artifacts, noise, or what have you that has plagued most of these much smaller sensors, regardless of brand. But Olympus did a great job on this one. I can definitely see bringing this camera along when a heavy duty pocket camera will do.

And wallaaaaa...it has RAW capabilities which will help quite a bit in order to achieve the best IQ possible for the genre.

Reply to Enders Shadow: You're right; underwater is an option there. In truth, the awkwardness comes from me mixing underwater and just-above-the-water shooting all the time, which makes white balance changes by any mechanism a pain....

Link | Posted on Jun 18, 2017 at 22:27 UTC
On article Olympus TG-5 gallery updated (62 comments in total)
In reply to:

Ben Herrmann: On the surface, these images look great - with a nice color tonality and clarity typically missing from cameras of this genre. But regardless of MP count, you can't escape the limitations of these small sensors.

Enlarge each image to 100% and you'll see the typical compression artifacts, noise, or what have you that has plagued most of these much smaller sensors, regardless of brand. But Olympus did a great job on this one. I can definitely see bringing this camera along when a heavy duty pocket camera will do.

And wallaaaaa...it has RAW capabilities which will help quite a bit in order to achieve the best IQ possible for the genre.

Reply to Barry Stewart: Underwater photos are a bit of an exception -- JPEGs discard a lot of color information (but not luminance), which is a bad thing if the scene has a very strong color cast that you don't compensate for. At least on the TG-860, it's unfortunate that underwater color balance is a mode, not a setting you can apply to any mode. Still, I've not had any major issues correcting the shallow-depth blue tint on JPEGs in post....
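
Correcting a uniform blue tint in post is essentially per-channel gain; a minimal sketch using a gray-world assumption (my example; the filename is a placeholder):

```python
# Gray-world white balance: scale each channel so its mean matches the
# overall mean, pulling a blue/cyan underwater cast back toward neutral.
# Works on gamma-encoded JPEG values, which is fine for a quick fix.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("underwater.jpg"), dtype=np.float64)
means = img.reshape(-1, 3).mean(axis=0)   # per-channel means (R, G, B)
gains = means.mean() / means              # boosts the attenuated red channel
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("underwater_wb.jpg")
```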

Link | Posted on Jun 18, 2017 at 17:35 UTC
On article Olympus TG-5 gallery updated (62 comments in total)
In reply to:

Ben Herrmann: On the surface, these images look great - with a nice color tonality and clarity typically missing from cameras of this genre. But regardless of MP count, you can't escape the limitations of these small sensors.

Enlarge each image to 100% and you'll see the typical compression artifacts, noise, or what have you that has plagued most of these much smaller sensors, regardless of brand. But Olympus did a great job on this one. I can definitely see bringing this camera along when a heavy duty pocket camera will do.

And wallaaaaa...it has RAW capabilities which will help quite a bit in order to achieve the best IQ possible for the genre.

"And wallaaaaa...it has RAW capabilities which will help quite a bit in order to achieve the best IQ possible for the genre."

Actually, not so much benefit to raw in these. Two reasons:

1. JPEGs can reasonably represent about 9EV of DR, and these small sensors really don't deliver more than that. The lenses aren't all that sharp at the pixel level either, so JPEG's DCT compression doesn't smear much real detail. Overall, there's just not much data lost to high-quality JPEG compression here. Put another way, having per-pixel JPEG data for such small pixels is a lot like having raw data for a similar-sized sensor with somewhat larger, better pixels.

2. In-camera processing has become very finely tuned and no longer suffers much from a lack of computing resources. Under various circumstances, it's actually getting difficult to produce images as good as the camera JPEGs by uncalibrated manual postprocessing of raws.

Link | Posted on Jun 18, 2017 at 16:27 UTC
On article Olympus TG-5 gallery updated (62 comments in total)

I still use a TG-860 for this sort of thing. The nicer things about the TG-860 are that it goes down to 21mm equiv. and has a flip-up rear LCD that can even be used for selfies -- but, most importantly, the flip-up LCD means you can still frame your shot when it's impossible to get your face behind the camera.

Yeah, IQ isn't great, but the biggest issue is that immediately after the camera has been underwater (e.g., while swimming), a few droplets on the lens will interfere with above-water photos. It's better about this than my Stylus 1030SW was, and blowing at the lens helps push the droplets off, but it's still a huge IQ issue. Actually, I have the same problem, even worse, with my bigger cameras in waterproof housings.

Link | Posted on Jun 18, 2017 at 12:56 UTC as 14th comment
On article ICYMI you can print your own lens hoods for free (11 comments in total)
In reply to:

photonius: hmm, I thought it's 3D models for 3D printers. Instead an old site ... maybe they thought better late than never ...

This is more of a job for my $200 2.5W laser cutter or programmable paper cutter.

Although I do tons of 3D printing of camera parts, I've not printed many lens hoods because they are big and awkward to use compared to the alternative of holding my left hand in the correct spot... but a flat-storing hood might be nice.
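
For anyone sizing their own hood, the limiting geometry is the corner ray: a straight hood can only extend until its wall reaches that ray. A rough sketch of the calculation (thin-lens approximation, ignoring pupil position, so treat the result as a starting point):

```python
# Maximum depth of a straight (cylindrical) hood before it vignettes the
# frame corners. The corner ray leaves the front of the lens at the
# half-angle set by the sensor diagonal and focal length.
import math

def max_hood_depth(hood_radius_mm, front_radius_mm, focal_mm,
                   sensor_diag_mm=43.3):
    """Depth at which the hood wall meets the corner ray (FF default)."""
    half_angle = math.atan((sensor_diag_mm / 2.0) / focal_mm)
    return (hood_radius_mm - front_radius_mm) / math.tan(half_angle)

# e.g., a 50mm FF lens with a 35mm-radius front opening, 40mm-radius hood:
print(f"{max_hood_depth(40.0, 35.0, 50.0):.1f} mm")   # ~11.5 mm
```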

Link | Posted on Jun 17, 2017 at 12:05 UTC
On article Sony a9 Full Review: Mirrorless Redefined (2671 comments in total)
In reply to:

ProfHankD: Very much a camera in the Minolta "9" style: very innovative, yet oddly conservative and "no frills" for its target pro market. As such, it isn't that interesting to me. However, it also gives a peek at what technologies are coming -- e.g., that focal plane mechanical shutters aren't really needed anymore -- and it makes very clear Sony's commitment to defining what FF cameras should evolve into. Good for Sony, good for photographers; let's hope this also inspires some more forward thinking from competitors....

Reply to cosmicnode: You're wrong about flash -- if you want to be picky about it, focal plane shutters can't do fast flash sync either. Most older SLRs had the shutter move horizontally (which produces less artifacting), and the longer travel actually gave them slower flash sync speeds than the A9's electronic shutter is capable of. The fastest vertical focal plane shutters are only about twice as fast as the A9's electronic shutter... and they aren't getting faster, while in one 2-year FF sensor generation Sony's electronic shutter got something like 5X faster.

Of course, as any medium-format user will tell you, the correct answer for fast flash sync is a leaf shutter: even some little $80 PowerShot compacts (under CHDK) can flash sync at speeds up to 1/30000s! Then again, electronic global shutters or the electronic TDCI sensors (my research) can effectively sync at shutter speeds of 1/1000000s or faster....
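
The sync limit here falls straight out of travel time: the flash can only fire while the whole frame is uncovered, which requires the exposure time to be at least the curtain (or electronic readout) travel time. A quick sketch of the arithmetic, with illustrative (not measured) travel times:

```python
# X-sync limit: the fastest shutter speed at which the entire frame is
# open at once is roughly 1 / travel_time. Times below are illustrative.

def max_sync_speed(travel_time_s):
    """Fastest whole-frame-open shutter speed, as the x in 1/x seconds."""
    return 1.0 / travel_time_s

for name, travel in [("horizontal cloth curtain", 1 / 60),
                     ("fast vertical curtain", 1 / 300),
                     ("fast electronic readout", 1 / 150)]:
    print(f"{name}: sync up to ~1/{max_sync_speed(travel):.0f} s")
```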

Link | Posted on Jun 15, 2017 at 11:10 UTC
On article Sony a9 Full Review: Mirrorless Redefined (2671 comments in total)

Very much a camera in the Minolta "9" style: very innovative, yet oddly conservative and "no frills" for its target pro market. As such, it isn't that interesting to me. However, it also gives a peek at what technologies are coming -- e.g., that focal plane mechanical shutters aren't really needed anymore -- and it makes very clear Sony's commitment to defining what FF cameras should evolve into. Good for Sony, good for photographers; let's hope this also inspires some more forward thinking from competitors....

Link | Posted on Jun 14, 2017 at 23:24 UTC as 176th comment | 4 replies
On article First pictures from the new Nikon 8-15mm fisheye (139 comments in total)
In reply to:

Old Cameras: This is such an awesome concept. Circular and full frame fisheye lens all in one. I'd love to have one, just might have to.

The catch is that a fisheye is a very special-purpose lens, and this one isn't cheap. The budget way to get both is to buy either style of fisheye and add a teleconverter or focal reducer to produce the other style. For example, an 8mm Samyang on FF gives a roughly circular image (slightly clipped, covering about 190 degrees), but adding a 1.4X-1.5X teleconverter makes it cover corner to corner.
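
The trick works because a teleconverter magnifies the image circle along with everything else; a quick sketch of the arithmetic (the image-circle diameter is my rough assumption, not a measured spec):

```python
# A teleconverter scales the fisheye's image circle by its factor. A
# circle slightly larger than the 24mm frame height reads as a (slightly
# clipped) circular fisheye; once the scaled circle exceeds the 43.3mm
# full-frame diagonal, the image covers corner to corner.
FF_HEIGHT, FF_DIAG = 24.0, 43.3    # mm
circle = 31.0                      # assumed 8mm Samyang circle diameter, mm

for tc in (1.0, 1.4, 1.5):
    d = circle * tc
    kind = ("diagonal fisheye" if d >= FF_DIAG
            else "circular (slightly clipped)" if d > FF_HEIGHT
            else "circular")
    print(f"TC {tc}x: circle {d:.1f} mm -> {kind}")
```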

Link | Posted on Jun 14, 2017 at 01:41 UTC
On article How water droplets came to life for a Gatorade ad (103 comments in total)

Looks like something my brother (Paul Dietz) might have done...
https://www.researchgate.net/publication/266658188_The_PumpSpark_fountain_development_kit

Link | Posted on Jun 12, 2017 at 23:10 UTC as 27th comment
In reply to:

Tom K.: I guess I don't get it. That first picture looks like poor photoshop bokeh simulation, with the right side looking like bad use of the cloning tool, to boot.

Well, that bokeh is "special" alright; it shows more of the lens defects I discuss in my cameras as computing systems course than any other single image I've seen. I will admit, however, that the close-up images look a lot better... so it was smart for them to let it focus closer. You will not see me buying one of these.

Link | Posted on Jun 8, 2017 at 22:46 UTC
On article Phase One introduces 'Styles Packs' for Capture One (62 comments in total)

Oooh... extra money for postprocessing that looks like each of the overly-extreme built-in styles that I don't use in any of my cameras. Not impressed.

Link | Posted on Jun 8, 2017 at 13:14 UTC as 26th comment
On article Throwback Thursday: The Canon PowerShot G3 (94 comments in total)

The G1 started the series and I give it the credit. BTW, I still have my G1 and G5.

Aside from starting the whole sequence, the G1 is significant in that it used CYGM rather than RGB Bayer filtering and supported raw images... as did the Pro 70 (which, if I'm not mistaken, was the camera that led Dave Coffin to create dcraw).

Link | Posted on Jun 8, 2017 at 12:05 UTC as 62nd comment