ProfHankD

Lives in Lexington, United States
Works as a Professor
Has a website at http://aggregate.org/hankd/
Joined on Mar 27, 2008
About me:

Plan: to change the way people think about and use cameras by taking advantage of cameras as computing systems; engineering camera systems to provide new abilities and improved quality.

Comments

Total: 1284, showing: 61 – 80
In reply to:

LoScalzo: I thought it's "Gear Acquisition Syndrome," not "Gear Addiction Syndrome." Which is it?

The most common form of Gear Acquisition Syndrome at DPReview is LBA -- Lens Buying Addiction -- so that's where the Addiction term creeps in.... ;-)

Link | Posted on Apr 2, 2017 at 16:00 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

Well, then it's completely wrong -- being out of focus does not cause blur!

The applicable dictionary definition of "blur" involves "smearing," but a significantly OOF PSF does not smear anything, nor does it convolve. The OOF PSF simply causes rays from the same scene point, seen from different points of view (all within the aperture of the lens), to land in different locations on the sensor. The visual ambiguity comes from each sensor point summing rays from many scene points, but only non-occluded rays are summed. This is why I and others are able to recover stereo depth from single images (e.g., Lytro does it using plenoptics; I do it using single-lens anaglyph capture, which is really a variant of what's often called coded aperture capture).

Blur does occur in images, but true blur only arises from motion. Thus, if bokeh just meant "blur" it certainly would apply to motion blur... which I have never seen anyone claim. Two rounds of Google translate isn't a valid definition. ;-)
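The ray geometry described above (rays from one scene point, passing through different points of the aperture, landing at different sensor locations) can be sketched with the thin-lens equation. This is a minimal illustration of the geometric model only, not code from the comment, and the lens numbers are made up:

```python
import math

def image_distance(f, s):
    """Thin-lens equation 1/f = 1/s + 1/v, solved for the image distance v."""
    return 1.0 / (1.0 / f - 1.0 / s)

def oof_psf_radius(f, aperture_diam, focus_dist, point_dist):
    """Radius (geometric, ignoring occlusion) of the disc on the sensor
    over which rays from an out-of-focus point spread."""
    v_sensor = image_distance(f, focus_dist)  # sensor sits at this distance
    v_point = image_distance(f, point_dist)   # the point's rays converge here
    # Similar triangles: spread scales with the sensor's offset from v_point.
    return (aperture_diam / 2.0) * abs(v_sensor - v_point) / v_point

# Hypothetical 50mm f/2 lens (25mm aperture) focused at 2m; point at 4m.
r = oof_psf_radius(f=50.0, aperture_diam=25.0, focus_dist=2000.0, point_dist=4000.0)
```

An in-focus point (point_dist equal to focus_dist) collapses the spread to zero; everything else lands as a disc whose size grows with defocus and aperture.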

Link | Posted on Apr 1, 2017 at 00:22 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

I think it's pretty clear that bokeh was meant to refer to qualities, of which size is one, and of only approximate importance (i.e., it takes a major change in PSF radius to make a qualitatively significant change). Of course, at very small sizes you really can't see the other qualities; beyond that, if your OOF PSFs are overexposed (which is common), size might be the only property that is obvious.

I hadn't seen Marianne Oelund's stuff, but Vcam looks interesting for modeling nearly-in-focus PSF from some simple "summary" parameters (as opposed to modeling the optics directly). I've been more interested in relating detailed lens or scene properties to very-OOF PSF, and have published on various aspects at Electronic Imaging. The most accessible overview is probably the slides from my "Poorly Focused Talk": http://aggregate.org/DIT/ccc20140116.pdf

Link | Posted on Mar 31, 2017 at 13:05 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

To me, bokeh refers to the summative effect of the OOF PSFs for all OOF points in the image, which can easily be quantified. It is straightforward to derive the bokeh in an image by applying a simple painter's algorithm (depth-order painting) to the measured OOF PSFs. However, not many folks actually measure OOF PSFs -- it isn't a standard thing to measure, and my set of measurements, covering 150 or so lenses, is probably the largest database of OOF PSFs. Beyond that, I can predict OOF PSF from bokeh and vice versa, but individual preferences are very qualitative.
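The painter's-algorithm derivation described above can be sketched as follows. This is my simplified illustration, not the author's actual method: a uniform disc stands in for a measured OOF PSF, and a nearer point simply overwrites what it covers (occlusion rather than blending):

```python
import numpy as np

def render_bokeh(points, focus_depth, size=64, k=6.0):
    """Depth-ordered ('painter's algorithm') splatting: paint far-to-near
    so nearer points occlude farther ones. Each point is (x, y, depth,
    intensity); its energy spreads over a disc that grows with defocus."""
    img = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for x, y, depth, intensity in sorted(points, key=lambda p: -p[2]):
        r = max(0.5, k * abs(depth - focus_depth))  # PSF radius grows when OOF
        disc = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
        area = disc.sum()
        if area:
            img[disc] = intensity / area  # overwrite = occlusion, not blending
    return img

pts = [(20, 20, 1.0, 100.0),   # at the focus depth: stays a point
       (44, 44, 3.0, 100.0)]   # well OOF: a large, dim disc
out = render_bokeh(pts, focus_depth=1.0)
```

The in-focus point keeps all its energy in one pixel, while the OOF point spreads the same energy over hundreds of pixels, which is why overexposed highlights are often the only OOF PSFs you notice.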

Link | Posted on Mar 30, 2017 at 23:29 UTC

Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

Link | Posted on Mar 30, 2017 at 20:23 UTC as 107th comment | 9 replies

Interesting. How do you make the paper cuts onsite? Being unskilled but geeky, I use a programmable paper cutter or a cheap laser cutter for anything approaching that complexity of cuts....

Link | Posted on Mar 30, 2017 at 11:11 UTC as 35th comment | 1 reply
In reply to:

alanh42: Even though the Leica Monochrom models don't produce color moiré artifacts, they can still produce false detail and moiré patterns when the frequency of the information in the subject is above the Nyquist frequency of the sensor.

The current fad of "no analog domain low-pass anti-alias filter" is a scientifically and technically flawed implementation, as has been obvious since the discovery/invention of the Nyquist-Shannon sampling theorem in the late 1940s. AA filters prevent false detail from appearing in the outputs.

Yeah, you can do some post-processing to remove moiré effects, but what if you're taking a picture of something that actually looks like a moiré pattern? The correction algorithms have no way to know what's real detail and what's false detail.

Actually, there's a lot of controversial stuff, including "compressed sensing," wildly violating Nyquist these days by using "priors" to drive image reconstruction -- basically assumptions about what the scene must have looked like. Many work shockingly well.

However, you're right about AA filters generally being necessary to meet Nyquist sampling constraints, and this article also is a bit loose on what Nyquist really means. In the worst case (it usually does MUCH better), a 24MP Bayer sensor can only guarantee correct reconstruction of a 1.5MP full-color source image... which sounds terrible, but that's about what a lot of 135-format film delivered, and it's good enough for most (especially WWW) uses.
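The worst-case arithmetic behind that 1.5MP figure can be made explicit. A minimal sketch, assuming the red/blue Bayer channels (the sparsest) set the guaranteed limit:

```python
def guaranteed_fullcolor_mp(sensor_mp):
    """Worst-case alias-free full-color resolution of a Bayer sensor.
    Red and blue sample only every 2nd photosite in each axis, and
    Nyquist needs 2 samples per resolved cycle, so the safe output
    pitch is 2 * 2 = 4 photosites in each axis: sensor_mp / 16."""
    red_blue_pitch = 2   # photosites between like-colored samples
    nyquist_factor = 2   # samples needed per resolved cycle
    return sensor_mp / (red_blue_pitch * nyquist_factor) ** 2

mp = guaranteed_fullcolor_mp(24)  # 24MP Bayer -> 1.5MP guaranteed
```

Real demosaicing does much better than this worst case because scene color usually correlates across channels; the 1.5MP number is only what can be guaranteed for arbitrary input.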

Link | Posted on Mar 30, 2017 at 01:38 UTC

The manual segmentation applications in their paper look interesting. This actually seems a lot like some of the old methods for colorizing B&W films....

Link | Posted on Mar 28, 2017 at 15:05 UTC as 14th comment

Back when I shot film, it was my Minolta MC W Rokkor Si 28mm f/2.5 -- and it is still a favorite. If it's one lens for everything, my Sigma 28-200mm f/3.5-f/5.6 Macro D Aspherical IF, a small and surprisingly sharp lens that cost me under $20, is my current FF answer. If the question is which lens do I actually use most, on APS-C the answer is my Sigma 8-16mm; a lens that is hard to use well, but regularly produces stunning images if used wisely.

However, I have a LOT of (mostly manual) lenses and don't really play favorites. I use what seems right for the circumstances, and deliberately rotate otherwise so I keep aware of my choices. It's really great that mirrorless cameras can use so many different lenses so well and so many old lenses now sell for tiny fractions of their utility-based value (most of mine were under $25).

Link | Posted on Mar 27, 2017 at 13:58 UTC as 176th comment

So, how do you know that there is actually a drone out there? It would be fairly easy to computationally render your flight from a terrain database. Think of it as a sort of "Touring" test. ;-)

Link | Posted on Mar 25, 2017 at 13:08 UTC as 29th comment | 3 replies
On article Re-make/Re-model: Leica Summaron 28mm F5.6 Samples (202 comments in total)

Well, that renders very much like a lens I just got: a 1990s Sigma 18-35mm f/3.5-4.5 zoom that cost me $32. Ok, the Sigma's actually better -- less vignetting and sharper corners -- but has similar colors and handles flare almost identically. I see "incredibly slim" motivation for this Summaron. Is this Leica's version of a lens in a body cap? :-(

Link | Posted on Mar 24, 2017 at 12:08 UTC as 71st comment
In reply to:

ProfHankD: The real advantage of Sony's rather-undersized "medium format" sensor should be that it could be used with FF lenses for alternative aspect ratios without needing a larger image circle. For example, a 30.6x30.6mm square format would fit the same FF-lens image circle while delivering 33MP. You could also do 2.35:1 cinemascope aspect ratio with 39.6x16.8mm at 24MP. It would be very easy for a camera to show these crops in the EVF and tag the EXIF data. However, nobody seems to be doing this. Why the heck not?

Karl Persson: that's a trick that I documented works very well for APS-C lenses on FF (in a paper I published in Electronic Imaging 2016). However, the aspect ratio is different here, and taking advantage of that does more than the teleconverter trick would do.

Link | Posted on Mar 22, 2017 at 11:05 UTC

The real advantage of Sony's rather-undersized "medium format" sensor should be that it could be used with FF lenses for alternative aspect ratios without needing a larger image circle. For example, a 30.6x30.6mm square format would fit the same FF-lens image circle while delivering 33MP. You could also do 2.35:1 cinemascope aspect ratio with 39.6x16.8mm at 24MP. It would be very easy for a camera to show these crops in the EVF and tag the EXIF data. However, nobody seems to be doing this. Why the heck not?
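The crop geometry above is easy to verify. A quick sketch, assuming the 43.8x32.9mm, 50MP sensor dimensions commonly quoted for these bodies (my assumption, not stated in the comment):

```python
import math

FF_DIAG = math.hypot(36.0, 24.0)  # ~43.27mm full-frame image circle

def fits_ff_circle(w, h):
    """Does a w x h (mm) format fit inside the FF image circle?"""
    return math.hypot(w, h) <= FF_DIAG + 1e-9

def crop_megapixels(w, h, sensor_w=43.8, sensor_h=32.9, sensor_mp=50.0):
    """MP of a w x h (mm) crop at this sensor's pixel density."""
    return w * h * sensor_mp / (sensor_w * sensor_h)

# Largest square inside the FF circle: diagonal = side * sqrt(2).
square = FF_DIAG / math.sqrt(2)  # ~30.6mm
```

Both the ~30.6mm square (roughly 32-33MP at this density) and the 39.6x16.8mm cinemascope crop fit inside the 43.27mm FF image circle, which is the whole point: no larger image circle needed.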

Link | Posted on Mar 22, 2017 at 02:16 UTC as 258th comment | 5 replies
In reply to:

ProfHankD: The phrase "effectively producing full-frame coverage on non-full-frame sensors" doesn't sound to me like a new claim for any of the focal reducers out there... why the "doesn’t work out exactly though" comment here and not on others? It is worth noting, BTW, that focal reducers do vary; of the five I've measured, the Speed Boosters provide the greatest reduction and the Lens Turbo II the least, with a spread from about 0.71x to 0.73x.

Let's put that in context. It turns out that APS-C isn't all the same size. Sensors in most brands actually have a crop factor of 1.52-1.54x while Canon's APS-C is 1.61x. BTW, actual APS film, C format, is 1.43x crop -- so they're all undersize. For that matter, 135 film wasn't always exactly 36x24, and standard slide mounts covered at least 2% of that area (which is part of why a lot of SLRs used to have 98% viewfinders).

Anyway, focal reducers vary in coverage by less than 3% -- while Canon's APS-C is about 6% smaller than most digital APS-C. Meh.

badi: Missed again. The "Meh" is about this particular news item emphasizing a claim that the manufacturer didn't even really make. I.e., there isn't a significant magnification difference BETWEEN DIFFERENT FOCAL REDUCERS.

Link | Posted on Mar 17, 2017 at 15:41 UTC
In reply to:

ProfHankD: The phrase "effectively producing full-frame coverage on non-full-frame sensors" doesn't sound to me like a new claim for any of the focal reducers out there... why the "doesn’t work out exactly though" comment here and not on others? It is worth noting, BTW, that focal reducers do vary; of the five I've measured, the Speed Boosters provide the greatest reduction and the Lens Turbo II the least, with a spread from about 0.71x to 0.73x.

Let's put that in context. It turns out that APS-C isn't all the same size. Sensors in most brands actually have a crop factor of 1.52-1.54x while Canon's APS-C is 1.61x. BTW, actual APS film, C format, is 1.43x crop -- so they're all undersize. For that matter, 135 film wasn't always exactly 36x24, and standard slide mounts covered at least 2% of that area (which is part of why a lot of SLRs used to have 98% viewfinders).

Anyway, focal reducers vary in coverage by less than 3% -- while Canon's APS-C is about 6% smaller than most digital APS-C. Meh.

badi: I think you missed my point: these numbers are all much more approximate than you think; the stated precision is much higher than the accuracy. In fact, both the Lens Turbo and Lens Turbo II quote the same 0.726 magnification factor. Looking up my actual measurements, the SB is actually 0.71x, the LT is 0.73x, and the LTII is 0.74x (even a little more variation than I remembered above).

The same inaccuracy applies to the focal length of most lenses; several % error is normal. In fact, for some lenses the measured focal length is off by over 10% from the published length. Various zooms that supposedly go to 300mm really stop at more like 270mm, many fast "50mm" lenses are more like 53mm, and quite a few wide-angle lenses now are much wider than quoted (in order to compensate for loss of view angle when distortion corrections are applied). On the other hand, there are some "180-degree" fisheyes with more like 150-degree view angles.

So, don't worry about a few % unless you measure everything. ;-)
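The focal-length check described above amounts to inverting the rectilinear view-angle formula. A sketch with hypothetical numbers (the 37.6-degree measurement is made up for illustration):

```python
import math

def focal_from_hfov(sensor_width_mm, hfov_deg):
    """Rectilinear focal length implied by a measured horizontal view
    angle: f = (w / 2) / tan(hfov / 2)."""
    return (sensor_width_mm / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

# A "50mm" that measures a 37.6-degree horizontal FOV on full frame
# actually behaves like a ~53mm lens -- a few % longer than marked.
f = focal_from_hfov(36.0, 37.6)
```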

Link | Posted on Mar 17, 2017 at 15:07 UTC

The phrase "effectively producing full-frame coverage on non-full-frame sensors" doesn't sound to me like a new claim for any of the focal reducers out there... why the "doesn’t work out exactly though" comment here and not on others? It is worth noting, BTW, that focal reducers do vary; of the five I've measured, the Speed Boosters provide the greatest reduction and the Lens Turbo II the least, with a spread from about 0.71x to 0.73x.

Let's put that in context. It turns out that APS-C isn't all the same size. Sensors in most brands actually have a crop factor of 1.52-1.54x while Canon's APS-C is 1.61x. BTW, actual APS film, C format, is 1.43x crop -- so they're all undersize. For that matter, 135 film wasn't always exactly 36x24, and standard slide mounts covered at least 2% of that area (which is part of why a lot of SLRs used to have 98% viewfinders).

Anyway, focal reducers vary in coverage by less than 3% -- while Canon's APS-C is about 6% smaller than most digital APS-C. Meh.
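The crop factors quoted above follow directly from the sensor diagonals. A quick check, using nominal dimensions (22.3x14.9mm for Canon APS-C, 23.5x15.6mm for most other digital APS-C, 25.1x16.7mm for APS film C format):

```python
import math

def crop_factor(w, h):
    """Diagonal crop factor of a w x h (mm) format relative to 36x24mm."""
    return math.hypot(36.0, 24.0) / math.hypot(w, h)

canon_apsc = crop_factor(22.3, 14.9)  # ~1.61x
other_apsc = crop_factor(23.5, 15.6)  # ~1.53x
aps_film_c = crop_factor(25.1, 16.7)  # ~1.43x
```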

Link | Posted on Mar 17, 2017 at 12:23 UTC as 23rd comment | 6 replies
On article Throwback Thursday: Our first cameras (391 comments in total)

Ok, I'll play too. Technically, my first couple of cameras were cheap plastic junk. The Konica C35V was my first serious camera... at least it was serious enough that I won a photo contest using it, shot my first published photos with it, and sold quite a few images shot with it. Always wished I had the rangefinder version, but zone focus worked with a (surprisingly good) 38mm that was f/2.8 wide open. After I moved on to Minolta SRT101 and XK, I still appreciated the little C35V's utter unobtrusiveness as a street shooter, but alas, it was stolen many years ago.

I used digitized video for a while (anyone remember the video Snappy?), but my first digital was a Casio QV100. It had terrible IQ, but was really an impressively digital camera... not just in the lack of an optical viewfinder and the pivoting lens; I even gave a few lectures using slides uploaded to it! Actually, using the QV100 was a lot like using the C35V; it was also a very unobtrusive street shooter.

Link | Posted on Mar 16, 2017 at 11:46 UTC as 281st comment
In reply to:

sleibson: Sentence the van driver to community service... on the salt-flat repair team.

The $5000 fine and 6 months in jail combo does seem less appropriate than having to pay for the towing and restoration effort and being assigned significant community service. Perhaps this is something to prosecute for civil damages, not just as breaking the law?

Link | Posted on Mar 15, 2017 at 13:06 UTC
In reply to:

AlexanderHorl: Does the app save pictures as raw or is just jpeg possible?

Actually, much as I like raw too, the fact that the RESULT is A raw means it can't be doing all that wonderful computational alignment Sony does in most multi-shot capture modes. This is why you need a tripod. Ideal would be a JPEG of the final image and/or the set of raws from which it was constructed along with a specification of how to merge them.

Link | Posted on Mar 15, 2017 at 12:40 UTC
In reply to:

ProfHankD: $30 for something that could easily be a free app or even a firmware update. This is unfortunately copying an old Minolta thing (program cards) too closely. :-(

It shouldn't be too long now before the Open Memories interface starts producing more interesting and useful apps for free....

_Frederico_: I'm a lot more revolutionary than that, and implement lots of stuff inside commodity cameras. ;-) For example, my current work has largely centered on being able to capture and process image data so that you can adjust the time interval represented by an exposure after the fact (TDCI -- time domain continuous imaging) and we've actually implemented this inside Canon PowerShots using CHDK, as reported in a paper at Electronic Imaging 2017.
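The TDCI idea described above can be caricatured as re-integrating a stack of short exposures over an interval chosen after capture. This toy model is my simplification for illustration, not the CHDK implementation from the Electronic Imaging 2017 paper:

```python
import numpy as np

def rerender_exposure(frames, frame_times, t0, t1):
    """Synthesize the image for an arbitrary interval [t0, t1], chosen
    after capture, by weighting each short exposure in the stack by its
    temporal overlap with the interval."""
    out = np.zeros_like(frames[0], dtype=float)
    total = 0.0
    for frame, (fs, fe) in zip(frames, frame_times):
        overlap = max(0.0, min(t1, fe) - max(t0, fs))
        out += overlap * frame
        total += overlap
    return out / total if total > 0 else out

# Three 0.1s "frames"; re-render an exposure straddling the first two.
frames = [np.full((2, 2), v) for v in (10.0, 20.0, 30.0)]
times = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.3)]
img = rerender_exposure(frames, times, 0.05, 0.15)
```

Real TDCI models each pixel's value as a continuous function of time rather than a frame stack, which is what makes arbitrary after-the-fact intervals possible without fixed frame boundaries.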

I'm sure this is a nice app; I just don't think Sony should restrict app development to themselves nor do they need to charge $30 for one.

Link | Posted on Mar 15, 2017 at 12:21 UTC