ProfHankD

Lives in Lexington, United States
Works as a Professor
Has a website at http://aggregate.org/hankd/
Joined on Mar 27, 2008
About me:

Plan: to change the way people think about and use cameras by taking advantage of cameras as computing systems; engineering camera systems to provide new abilities and improved quality.

Comments

Total: 1253, showing: 21 – 40
In reply to:

Tungsten Nordstein: 'Foveon sensors don't directly capture red, green and blue information'

Is this an accurate statement? One layer per RGB channel surely means that they do directly capture red, green and blue information. See this diagram:

https://library.creativecow.net/articles/gondek_mike/Foveon/foveonchip.gif

Anyway, the fact that Sigma quattro cameras can produce RAW is good.

They are separate layers (photosites), but they aren't filtered very precisely.

Link | Posted on Apr 9, 2017 at 14:04 UTC
In reply to:

Tungsten Nordstein: 'Foveon sensors don't directly capture red, green and blue information'

Is this an accurate statement? One layer per RGB channel surely means that they do directly capture red, green and blue information. See this diagram:

https://library.creativecow.net/articles/gondek_mike/Foveon/foveonchip.gif

Anyway, the fact that Sigma quattro cameras can produce RAW is good.

The top layer sees all colors, but light of different wavelengths statistically penetrates to different depths. Thus, you directly get very high quality monochrome data for every pixel site, but rather sloppy color sampling that requires significant computation to clean up. They are now doing that compute in camera to make a 12-bit uncompressed TIFF (which is marked as DNG).
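
To make the "sloppy color sampling" point concrete, here's a minimal Python sketch. The crosstalk matrix is made up purely for illustration (real layer responses are measured per sensor design), but it shows why untangling heavily-overlapping layer responses takes real computation and amplifies noise:

```python
import numpy as np

# Hypothetical spectral-crosstalk matrix for a Foveon-style stacked sensor:
# each row is how strongly one layer (top/mid/bottom) responds to scene
# R, G, B light. Real layer responses overlap far more than Bayer filters do.
M = np.array([[0.4, 0.4, 0.2],   # top layer: sees all colors
              [0.3, 0.5, 0.2],   # middle layer
              [0.2, 0.3, 0.5]])  # bottom layer: mostly longer wavelengths

rgb_true = np.array([0.8, 0.5, 0.2])   # scene color at one pixel site
layers = M @ rgb_true                  # what the three stacked layers record

# The "monochrome" signal is directly high quality: just sum the layers.
luma = layers.sum()

# Recovering color means inverting the crosstalk -- and because M is far
# from orthogonal, sensor noise gets amplified in the process.
rgb_est = np.linalg.solve(M, layers)
print(luma, rgb_est)   # rgb_est matches rgb_true only for noise-free input
```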

Link | Posted on Apr 9, 2017 at 11:18 UTC

In other words, Sigma has discovered that they can spit out a color-interpolated "uncompressed" TIFF file like many cameras did 15 years ago. DNG is just one of many variants of TIFF, and all that using the DNG marking buys here is the ability to use 12 bits per color channel per pixel, whereas uncompressed TIFFs were normally 8-bit (or 16-bit).
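
Incidentally, 12-bit samples are part of why plain TIFF writers stuck to 8 or 16 bits: 12-bit values aren't byte-aligned. A toy Python packer (not Sigma's actual file layout, just the idea):

```python
def pack12(samples):
    """Pack 12-bit samples (0..4095) into bytes, two samples per three
    bytes -- the bit-fiddling a 12-bit-per-channel TIFF/DNG reader or
    writer must do, unlike byte-aligned 8/16-bit. (Odd trailing sample
    ignored for brevity.)"""
    out = bytearray()
    for a, b in zip(samples[0::2], samples[1::2]):
        out += bytes(((a >> 4) & 0xFF,               # high 8 bits of a
                      ((a & 0xF) << 4) | (b >> 8),   # low 4 of a, high 4 of b
                      b & 0xFF))                     # low 8 bits of b
    return bytes(out)

assert pack12([0xABC, 0xDEF]) == b'\xAB\xCD\xEF'
```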

The code in dcraw for Foveon interpolation carries some restrictions that are problematic for tools built using dcraw code (which is nearly all software that can process raws). I think the better answer for Sigma would be to distribute raw decode source code without restrictions....

Link | Posted on Apr 9, 2017 at 11:12 UTC as 70th comment | 4 replies
In reply to:

falconeyes: Interestingly, the first image from their "Try our samples" bar (the one with keywords "business, smiling, woman") scores 0.0%. It looks like the typical stock photo, though. So, it must be telling us something about how EA rated photos in the training.

The keywording is impressive. Fair enough, they had a large training set to fetch keywords from. Still, their feature vector generation must be useful. Which may be their real asset. Too bad they did not publish their feature vector creation algorithm.

I tried two of my images; excellent keywording, scores of 0% and 12%. To put it bluntly, the scoring seems a lot less "refined" than the keywording. Anyway, useful for the keywording alone, I suppose....

Link | Posted on Apr 8, 2017 at 11:21 UTC
On article Canon EOS Rebel T7i / EOS 800D Sample Gallery (110 comments in total)

Interesting phrase: "the midrange camera in Canon's lower-end DSLR lineup" -- perhaps Canon's making too many models? They certainly are in the PowerShot line, with a new crop every year that's nearly identical across multiple models and years, but has a fleet of new model names and minor differences. Also, is $750 body only really a lower-end price now?

Link | Posted on Apr 7, 2017 at 11:28 UTC as 36th comment | 3 replies

"With the help of world famous development engineers" apparently none of whom want their names associated with this lens?

It says "will be available for practically all mirrorless cameras" -- for which sensor formats? Everything from Pentax Q to Hasselblad X1D and Fujifilm GFX 50S? The stated "84 degree angle of view" would imply full frame for 24mm, but most mirrorless cameras aren't full frame, and optimizing the design for different coverage, pixel density, and cover-glass thickness will generate different optical designs. It's not a big deal to mount a manual lens on most mirrorless bodies without tuning the design -- an M42 thread (or some duct tape) can do that.
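
For the curious, the 84-degrees-implies-full-frame arithmetic is easy to check with the standard rectilinear angle-of-view formula (assuming they mean the diagonal angle):

```python
import math

def diag_aov_deg(focal_mm, sensor_diag_mm=43.27):  # 36x24mm diagonal
    """Diagonal angle of view of a rectilinear lens focused at infinity."""
    return 2 * math.degrees(math.atan(sensor_diag_mm / (2 * focal_mm)))

print(diag_aov_deg(24))        # ~84.1 degrees: matches the claimed 84
print(diag_aov_deg(24, 28.4))  # same lens on APS-C: only ~61 degrees
```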

"We are striving for technical perfection with this lens – but we will not make any compromises when it comes to the creative part of photography. Personality and character are the most important features of all our lenses." What the heck is that supposed to mean? I think it means it will do well on Kickstarter. :-(

Link | Posted on Apr 7, 2017 at 04:02 UTC as 26th comment | 2 replies
In reply to:

TMHKR: With the possibility of the CHDK team releasing the hack for it in the future (with RAW support), it would make a night-and-day difference, regardless of the sensor size!

Distortion is VERY heavy on the wide end on the PowerShots -- they are MUCH wider, computationally undistorted, and very conservatively cropped. However, the IQ is actually surprisingly good overall, so the primary benefit of raw is being able to crop the wide end less. For an example with a raw: http://aggregate.org/CACS/elph115is.html
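
If you want to see roughly what that in-camera undistortion does, here's a toy Python version with made-up radial coefficients; Canon's real per-lens parameters live in the firmware:

```python
import numpy as np

def undistort(img, k1=-0.25, k2=0.05):
    """Toy radial undistortion: for each output pixel, sample the source at
    r_src = r * (1 + k1*r^2 + k2*r^4), with r normalized to the half-diagonal.
    Real cameras use per-lens calibrated coefficients, then crop the edges."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.mgrid[0:h, 0:w]
    u, v = (x - cx), (y - cy)
    r2 = (u * u + v * v) / (cx * cx + cy * cy)
    scale = 1 + k1 * r2 + k2 * r2 * r2
    xs = np.clip((cx + u * scale).round().astype(int), 0, w - 1)
    ys = np.clip((cy + v * scale).round().astype(int), 0, h - 1)
    return img[ys, xs]   # nearest-neighbor resample, good enough for a sketch

print(undistort(np.random.rand(120, 160)).shape)  # (120, 160)
```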

The primary benefit to CHDK as I see it is programmability. You can easily make a $100 CHDK PowerShot do lots of things no other camera can do.

Link | Posted on Apr 6, 2017 at 23:28 UTC
In reply to:

TMHKR: With the possibility of the CHDK team releasing the hack for it in the future (with RAW support), it would make a night-and-day difference, regardless of the sensor size!

CHDK already works on more than a few Canon superzooms and, yes, that is a wonderful thing. Not many fully-programmable superzoom competitors... in fact, none I'm aware of. ;-) BTW, I use the Toshiba FlashAir cards for bidirectional wifi communication with CHDK powershots.

Link | Posted on Apr 6, 2017 at 11:51 UTC
In reply to:

princecody: Has Sony perfected the Art of the sensor? Is that why 80% of the camera brands use Sony sensors?

Dudes, they are grinding down medium-format sensors for BSI in volume production! That they can get practical yields doing that is amazing... and only a couple of years after making FF BSI economically viable. I'm very impressed.

Does this mean sensor tech has reached a stable point? NO -- and that's why this is so exciting: Sony is making very significant hybrid fab improvements at a time when the big digital chipmakers (e.g., Intel) are kind of stuck.

Link | Posted on Apr 5, 2017 at 01:38 UTC
In reply to:

LoScalzo: I thought it's "Gear Acquisition Syndrome," not "Gear Addiction Syndrome." Which is it?

The most common form of Gear Acquisition Syndrome at DPReview is LBA -- Lens Buying Addiction -- so that's where the Addiction term creeps in.... ;-)

Link | Posted on Apr 2, 2017 at 16:00 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

Well, then it's completely wrong -- being out of focus does not cause blur!

The applicable dictionary definition of "blur" involves "smearing," but significantly OOF PSFs do not smear anything, nor do they convolve. The OOF PSF simply causes rays from the same scene point seen from different points of view (all within the aperture of the lens) to land in different locations on the sensor. The visual ambiguity comes from each sensor point summing rays from many scene points, but only non-occluded rays are summed. This is why I and others are able to recover stereo depth from single images (e.g., Lytro does it using plenoptics; I do it using single-lens anaglyph capture, which is really a variant of what's often called coded aperture capture).

Blur does occur in images, but true blur only arises from motion. Thus, if bokeh just meant "blur," it certainly would apply to motion blur... which I have never seen anyone claim. Two rounds of Google Translate isn't a valid definition. ;-)
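
If the ray-summing model isn't clear, here's a tiny Python sketch: sample points across the aperture, shift each sample's view of an OOF scene point accordingly, and sum. You get an image of the aperture (the OOF PSF), not a smear:

```python
import numpy as np

# Minimal ray-summing view of defocus (not a convolution model): each
# sample point across the lens aperture sees the scene from a slightly
# different position, and an out-of-focus scene point lands at a
# correspondingly shifted sensor location. Summing the samples paints
# an image of the aperture -- the OOF PSF.
N = 65
sensor = np.zeros((N, N))
disparity = 12                        # pixels of shift per unit aperture offset
for ay in np.linspace(-1, 1, 33):     # sample the aperture...
    for ax in np.linspace(-1, 1, 33):
        if ax * ax + ay * ay > 1:     # ...keeping a circular pupil
            continue
        # a centered OOF scene point lands shifted by aperture offset * disparity
        sensor[N // 2 + int(round(ay * disparity)),
               N // 2 + int(round(ax * disparity))] += 1
print((sensor > 0).sum())  # a filled disc of ~pi*12^2 pixels: the OOF PSF
```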

Link | Posted on Apr 1, 2017 at 00:22 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

I think it's pretty clear that bokeh was meant to refer to qualities, of which size is just one, and one of only approximate importance (i.e., it takes a major change in PSF radius to make a qualitatively significant change). Of course, at very small sizes you really can't see the other qualities; beyond that, if your OOF PSFs are overexposed (which is common), size might be the only property that is obvious.

I hadn't seen Marianne Oelund's stuff, but Vcam looks interesting for modeling nearly-in-focus PSF from some simple "summary" parameters (as opposed to modeling the optics directly). I've been more interested in relating detailed lens or scene properties to very-OOF PSF, and have published on various aspects at Electronic Imaging. The most accessible overview is probably the slides from my "Poorly Focused Talk": http://aggregate.org/DIT/ccc20140116.pdf

Link | Posted on Mar 31, 2017 at 13:05 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

To me, bokeh refers to the summative effect of the OOF PSFs for all OOF points in the image, which can readily be quantified. It is straightforward to derive the bokeh in an image by applying a simple painter's algorithm (depth-order painting) to the measured OOF PSFs. However, not that many folks actually measure OOF PSFs -- it isn't a standard thing to measure, and the 150 or so lenses I've measured probably constitute the largest database of OOF PSFs. Beyond that, I can predict OOF PSF from bokeh and vice versa, but individual preferences are very qualitative.
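
The painter's algorithm really is that simple; here's a toy Python version, with discs standing in for measured OOF PSFs and a made-up defocus-to-radius mapping:

```python
import numpy as np

def render_bokeh(points, H=128, W=128):
    """Toy painter's-algorithm bokeh render: sort scene points far-to-near,
    then splat each one's OOF PSF (a disc here; a measured PSF in practice),
    letting nearer splats overwrite farther ones so occlusion is respected.
    Each point is (x, y, depth, luminance); focal plane assumed at depth 1.0."""
    img = np.zeros((H, W))
    for x, y, depth, lum in sorted(points, key=lambda p: -p[2]):
        r = max(1, int(8 * abs(depth - 1.0)))  # hypothetical defocus->radius
        yy, xx = np.ogrid[0:H, 0:W]
        disc = (xx - x) ** 2 + (yy - y) ** 2 <= r * r
        img[disc] = lum / disc.sum()           # nearer points overwrite
    return img

# two points: one near the focal plane, one far behind it
print(render_bokeh([(40, 40, 1.05, 1.0), (90, 90, 3.0, 1.0)]).max())
```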

Link | Posted on Mar 30, 2017 at 23:29 UTC

Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

Link | Posted on Mar 30, 2017 at 20:23 UTC as 107th comment | 9 replies

Interesting. How do you make the paper cuts onsite? Being unskilled but geeky, I use a programmable paper cutter or a cheap laser cutter for anything approaching that complexity of cuts....

Link | Posted on Mar 30, 2017 at 11:11 UTC as 35th comment | 1 reply
In reply to:

alanh42: Even though the Leica Monochrom models don't produce color moiré artifacts, they can still produce false detail and moiré patterns when the frequency of the information in the subject is above the Nyquist frequency of the sensor.

The current fad of "no analog-domain low-pass anti-alias filter" is scientifically and technically flawed, as has been obvious since the discovery/invention of the Nyquist-Shannon sampling theorem in the late 1940s. AA filters prevent false detail from appearing in the outputs.

Yeah, you can do some post-processing to remove moiré effects, but what if you're taking a picture of something that actually looks like a moiré pattern? The correction algorithms have no way to know what's real detail and what's false detail.

Actually, there's a lot of controversial stuff, including "compressed sensing," wildly violating Nyquist these days by using "priors" -- basically assumptions about what the scene must have looked like -- to drive image reconstruction. Many of these methods work shockingly well.

However, you're right that AA filters are generally necessary to meet Nyquist sampling constraints, and this article is also a bit loose about what Nyquist really means. In the worst case (it usually does MUCH better), a 24MP Bayer sensor can only guarantee correct reconstruction of a 1.5MP full-color source image... which sounds terrible, but that's about what a lot of 135-format film delivered, and it's good enough for most (especially WWW) uses.
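
Here's a one-dimensional Python sketch of why you can't fix this in post: a real pattern above the Nyquist limit and the false detail it aliases into produce literally identical samples, so no algorithm can tell them apart:

```python
import numpy as np

fs = 100                      # samples per unit length (think sensor pitch)
x = np.arange(200) / fs
real_detail = np.cos(2 * np.pi * 70 * x)    # 70 cyc/unit; Nyquist is 50
false_detail = np.cos(2 * np.pi * 30 * x)   # what it masquerades as
print(np.allclose(real_detail, false_detail))   # True: indistinguishable

# An analog-domain AA (low-pass) filter removes the 70 cyc/unit component
# *before* sampling -- the only place it can be removed.
```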

Link | Posted on Mar 30, 2017 at 01:38 UTC

The manual segmentation applications in their paper look interesting. This actually seems a lot like some of the old methods for colorizing B&W films....

Link | Posted on Mar 28, 2017 at 15:05 UTC as 14th comment

Back when I shot film, it was my Minolta MC W Rokkor Si 28mm f/2.5 -- and it is still a favorite. If it's one lens for everything, my Sigma 28-200mm f/3.5-5.6 Macro D Aspherical IF, a small and surprisingly sharp lens that cost me under $20, is my current FF answer. If the question is which lens I actually use most, on APS-C the answer is my Sigma 8-16mm, a lens that is hard to use well but regularly produces stunning images when used wisely.

However, I have a LOT of (mostly manual) lenses and don't really play favorites. I use whatever seems right for the circumstances, and otherwise deliberately rotate through them so I stay aware of my choices. It's really great that mirrorless cameras can use so many different lenses so well, and that so many old lenses now sell for tiny fractions of their utility-based value (most of mine were under $25).

Link | Posted on Mar 27, 2017 at 13:58 UTC as 176th comment

So, how do you know that there is actually a drone out there? It would be fairly easy to computationally render your flight from a terrain database. Think of it as a sort of "Touring" test. ;-)

Link | Posted on Mar 25, 2017 at 13:08 UTC as 29th comment | 3 replies
On article Re-make/Re-model: Leica Summaron 28mm F5.6 Samples (202 comments in total)

Well, that renders very much like a lens I just got: a 1990s Sigma 18-35mm f/3.5-4.5 zoom that cost me $32. Ok, the Sigma's actually better -- less vignetting and sharper corners -- but has similar colors and handles flare almost identically. I see "incredibly slim" motivation for this Summaron. Is this Leica's version of a lens in a body cap? :-(

Link | Posted on Mar 24, 2017 at 12:08 UTC as 71st comment