ProfHankD

Lives in Lexington, United States
Works as a Professor
Has a website at http://aggregate.org/hankd/
Joined on Mar 27, 2008
About me:

Plan: to change the way people think about and use cameras by taking advantage of cameras as computing systems; engineering camera systems to provide new abilities and improved quality.

Comments

Total: 1280, showing: 41 – 60
In reply to:

stevo23: SCOTCH whiskey. I'm just sayin'

Now if it were a bottle of the real deal Pappy Van Winkle...watch out

It's true. Top-quality Scotch whisky is literally made by reusing barrels that are no longer fit for making Kentucky bourbon (bourbon is always matured in fresh casks to extract the maximum flavor). Nice to celebrate photographers, but speaking as a Kentucky resident, the "kit" packaging seems a bit desperate.... ;-)

Link | Posted on Apr 14, 2017 at 01:15 UTC
On article Panasonic Lumix DC-GH5 Review (1185 comments in total)
In reply to:

snapa: It still amazes me how a 'm4/3' camera can cost $2,000 (without any lens) and be considered such a great camera. Once you start adding Pro lenses, then think about how good the IQ is at higher ISO levels with still pictures, it does not make any sense to me.

Maybe if you are looking for a very good 'video camera', it would be a good solution if you are looking to get very good videos. Still, a m4/3-sensor camera for $4-5,000 (with lenses) that takes just OK still pictures seems like quite a bit of money to me, considering its competition from larger-sensor cameras.

BTW, how many professional videographers seriously look to buy m4/3 cameras for shooting serious video?

Astrotripper: all your complaints are about video, and I think they're a bit overstated; heck, the GH5 doesn't even come with log video support by default (and it has obviously poorer DR). In my opinion, the A6500 beats every micro4/3 body for stills by a much larger margin than the margin by which it loses on video. I also think $1400 is a lot less than $2000... and if that difference doesn't matter, then an A7SII, A7RII, or A99II isn't unreasonable either (the GH5 is 42% more than the A6500, and the A7RII is 45% more than the GH5).

In sum, the GH5 is an impressive enough camera, but $2K is too much for what is ultimately a big, video-centric body wrapped around a small sensor whose 4:3 aspect ratio discards at least another 25% of the sensor area when shooting video. Your opinion may, and is welcome to, vary. ;-)

Link | Posted on Apr 12, 2017 at 14:54 UTC
On article Panasonic Lumix DC-GH5 Review (1185 comments in total)
In reply to:

snapa: It still amazes me how a 'm4/3' camera can cost $2,000 (without any lens) and be considered such a great camera. Once you start adding Pro lenses, then think about how good the IQ is at higher ISO levels with still pictures, it does not make any sense to me.

Maybe if you are looking for a very good 'video camera', it would be a good solution if you are looking to get very good videos. Still, a m4/3-sensor camera for $4-5,000 (with lenses) that takes just OK still pictures seems like quite a bit of money to me, considering its competition from larger-sensor cameras.

BTW, how many professional videographers seriously look to buy m4/3 cameras for shooting serious video?

This camera benefits greatly from its DPReview classification, which doesn't pit it directly against, for example, the MUCH CHEAPER Sony A6500. This seems like a very good camera, and excellent for its sensor size, but it trails pretty significantly as a $2K body-only stills camera. As a video camera with rolling shutter issues, $2K isn't really a bargain either, although it's more competitive there. I don't think the "gold" rating is justified at this price point (especially when the A6500 is "silver"), but micro4/3 definitely has a dedicated following willing to pay more for feature-rich big cameras with little sensors....

Link | Posted on Apr 12, 2017 at 03:08 UTC
In reply to:

Internet Enzyme: Sees Headline.

5K video oh man that sounds real fancy but I'm positive it has a horrible bitrate since all of these 360 degree and drone cameras have horrendously low bit rates.

Reads article.
"Videos are recorded using the H.264 codec with a bit rate of 60Mbps,...".

Come on people. When will they learn that high resolution doesn't matter if the bitrate is so low?

The real quality issue is stitch errors. This camera seems to do very well, with only one stitch error obvious to me in the sample video. The rule of thumb is no closer than 30X the separation between the camera reference points, which here I'd guess means about 5-6 feet... so no 360-degree macros unless they are carefully aligned within one camera's view. It is kind of odd they don't support a larger battery/storage unit for longer runtimes and higher bitrates....
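
For anyone who wants to plug in their own numbers, here's a minimal Python sketch of that 30X rule of thumb; the 2-inch (~5 cm) lens separation is just my assumed figure for a typical dual-lens 360 camera, not a published spec:

```python
# Rough minimum subject distance from the "30x baseline" parallax rule of thumb.
# The baseline value below is an assumed example, not a measured spec.
def min_stitch_distance_m(baseline_m, factor=30):
    """Closest subject distance (m) before parallax makes stitch errors obvious."""
    return factor * baseline_m

baseline = 0.05  # ~2 inches between the two lenses' reference points (assumed)
print(min_stitch_distance_m(baseline))  # ~1.5 m, i.e. about 5 feet
```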

Link | Posted on Apr 11, 2017 at 01:06 UTC
In reply to:

ProfHankD: In other words, Sigma has discovered that they can spit out a color-interpolated "uncompressed" TIFF file like many cameras did 15 years ago. DNG is just one of many variants of TIFF, and all that using the DNG marking buys here is the ability to use 12 bits per color channel per pixel, while uncompressed TIFFs normally were 8-bit (or 16-bit).

The code in dcraw for Foveon interpolation carries some restrictions that are problematic for tools built using dcraw code (which is nearly all software that can process raws). I think the better answer for Sigma would be to distribute raw decode source code without restrictions....

Mgrum, the code doesn't look all that clever to me. However, there's a good chance the issue is neither cleverness nor giving competitors a leg up, but rather a bit of legal ambiguity about who gets to open it. Quite often, code isn't owned outright, but is purchased as IP with restrictions; it also wouldn't surprise me if some legal oddness came about when Foveon became part of Sigma. Anyway, it's really not helping Sigma/Foveon to have it closed.

Link | Posted on Apr 10, 2017 at 15:07 UTC
In reply to:

ProfHankD: In other words, Sigma has discovered that they can spit out a color-interpolated "uncompressed" TIFF file like many cameras did 15 years ago. DNG is just one of many variants of TIFF, and all that using the DNG marking buys here is the ability to use 12 bits per color channel per pixel, while uncompressed TIFFs normally were 8-bit (or 16-bit).

The code in dcraw for Foveon interpolation carries some restrictions that are problematic for tools built using dcraw code (which is nearly all software that can process raws). I think the better answer for Sigma would be to distribute raw decode source code without restrictions....

From Wikipedia: "Tagged Image File Format, abbreviated TIFF or TIF."

DPReview tends to think in terms of Adobe raw processing being the standard. Adobe does what Adobe does, and since they own DNG, it shouldn't be shocking that they handle it well. Adobe couldn't just lift the Foveon code from dcraw (as they essentially did for other raw formats) because of the Sigma use restrictions, but using dcraw has long worked fine for Foveon interpolation, so image editing tools literally using dcraw as a plug-in have never had a problem.

Dave Coffin is really good about adding support for new raw formats to dcraw, so I'm sure pixel shift stuff will be in there soon... actually: https://www.dpreview.com/forums/post/55912470
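
For what it's worth, this is why dcraw-based front ends never had trouble with X3F files: they just call dcraw (or its code) to do the interpolation. A minimal sketch of the command-line route, assuming dcraw is installed and on your PATH (the filename is hypothetical):

```python
# Decode a Sigma .X3F raw by shelling out to the dcraw command-line tool,
# roughly what plug-in-style front ends accomplish with dcraw's code directly.
import subprocess

def decode_with_dcraw(raw_path):
    # -w: use camera white balance, -T: write a TIFF, -6: 16-bit output
    subprocess.run(["dcraw", "-w", "-T", "-6", raw_path], check=True)
    # dcraw writes the result next to the input, e.g. foo.X3F -> foo.tiff

decode_with_dcraw("SDQH0001.X3F")  # hypothetical filename
```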

Link | Posted on Apr 10, 2017 at 02:04 UTC
On article Sigma sd Quattro H real world samples gallery (108 comments in total)
In reply to:

Gary Dean Mercer Clark: Here is a review with sliders that compare the output of the Quattro H with the Canon 5Dr. This camera is definitely shooting above its pay grade. :) If you use the Chrome browser and enable Google Translate, it will automatically translate this article. I'm pretty impressed with how the Sigma H images hold up to the 50MP Canon images. Excellent job Sigma!
https://www.focus-numerique.com/hybride/tests/sigma-sd-quattro-h-2977.html

Good review you pointed at there. Basically, this camera is a real outlier -- fundamentally very different from the Canons, Sonys, Fujis, etc. If what you want is monochrome resolution or freedom from moire, this is a winner -- and resolution/$ is great. If you want very accurate colors or low noise (especially at high ISOs), it's beaten by APS-C bodies costing 1/3 what it does. Arguably, the comparison with 24MP APS-C noise would be fairer if the Quattro H images were SINC-scaled down to about 6MP and then SINC-scaled back up to 24MP... they then look a lot more competitive in both noise and effective resolution.
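
If anyone wants to try that comparison themselves, here's a rough sketch using Pillow's Lanczos filter as a practical windowed-sinc resampler; the filenames are hypothetical, and the 6MP/24MP targets are just the ones mentioned above:

```python
# Down-then-up resample so per-pixel noise statistics are comparable before judging.
from PIL import Image

def sinc_roundtrip(path, mid_mp=6, out_mp=24):
    im = Image.open(path)
    down = (mid_mp / (im.width * im.height / 1e6)) ** 0.5   # linear scale factor to ~6MP
    small = im.resize((round(im.width * down), round(im.height * down)), Image.LANCZOS)
    up = (out_mp / mid_mp) ** 0.5                           # linear scale factor back to ~24MP
    return small.resize((round(small.width * up), round(small.height * up)), Image.LANCZOS)

sinc_roundtrip("quattro_h_sample.tif").save("quattro_h_roundtrip_24mp.tif")
```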

Link | Posted on Apr 9, 2017 at 14:37 UTC
In reply to:

Tungsten Nordstein: 'Foveon sensors don't directly capture red, green and blue information'

Is this an accurate statement? One layer per RGB channel surely means that they do directly capture red, green and blue information. See this diagram:

https://library.creativecow.net/articles/gondek_mike/Foveon/foveonchip.gif

Anyway, the fact that Sigma quattro cameras can produce RAW is good.

They are separate layers (photosites), but they aren't filtered very precisely.

Link | Posted on Apr 9, 2017 at 14:04 UTC
In reply to:

Tungsten Nordstein: 'Foveon sensors don't directly capture red, green and blue information'

Is this an accurate statement? One layer per RGB channel surely means that they do directly capture red, green and blue information. See this diagram:

https://library.creativecow.net/articles/gondek_mike/Foveon/foveonchip.gif

Anyway, the fact that Sigma quattro cameras can produce RAW is good.

The top layer sees all colors, but light of different wavelengths statistically penetrates to different depths. Thus, you directly get very high quality monochrome data for every pixel site, but rather sloppy color sampling that requires significant computation to clean up. They are now doing that compute in camera to make a 12-bit uncompressed TIFF (which is marked as DNG).
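
To give a feel for what that cleanup involves, here's a toy sketch: because all three stacked layers respond broadly across the spectrum, you need a strong matrix (the numbers below are completely made up, not Sigma's) to separate R, G, and B, and that matrix amplifies noise, which is where most of the real processing effort goes:

```python
# Toy layer-to-RGB separation for a stacked (Foveon-style) sensor.
# The matrix values are illustrative only -- NOT Sigma's actual coefficients.
import numpy as np

# Rows produce output R, G, B; columns weight the top, middle, bottom layer signals.
M = np.array([[-0.1, -0.8,  1.9],   # red comes mostly from the deepest layer
              [-0.3,  1.8, -0.5],   # green mostly from the middle
              [ 1.7, -0.6, -0.1]])  # blue mostly from the top layer

def layers_to_rgb(top, mid, bottom):
    """top/mid/bottom: per-pixel layer signals, arrays of identical shape."""
    layers = np.stack([top, mid, bottom], axis=-1)
    return layers @ M.T   # large off-diagonal terms = heavy noise amplification
```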

Link | Posted on Apr 9, 2017 at 11:18 UTC

In other words, Sigma has discovered that they can spit out a color-interpolated "uncompressed" TIFF file like many cameras did 15 years ago. DNG is just one of many variants of TIFF, and all that using the DNG marking buys here is the ability to use 12 bits per color channel per pixel, while uncompressed TIFFs normally were 8-bit (or 16-bit).

The code in dcraw for Foveon interpolation carries some restrictions that are problematic for tools built using dcraw code (which is nearly all software that can process raws). I think the better answer for Sigma would be to distribute raw decode source code without restrictions....
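
Since a DNG really is just a tagged TIFF, you can confirm the bit depth and (lack of) compression with any generic TIFF reader. A quick sketch with the Python tifffile package; the filename is hypothetical, and depending on how Sigma lays out the file the full-resolution image may sit in a SubIFD rather than the first IFD:

```python
# Dump per-IFD bit depth and compression from a DNG using a generic TIFF reader.
import tifffile

with tifffile.TiffFile("sigma_quattro_sample.dng") as tif:
    for page in tif.pages:
        bps = page.tags.get("BitsPerSample")
        comp = page.tags.get("Compression")
        print(page.shape,
              bps.value if bps else None,    # expect 12 for the main image data
              comp.value if comp else None)  # 1 == uncompressed
```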

Link | Posted on Apr 9, 2017 at 11:12 UTC as 71st comment | 4 replies
In reply to:

falconeyes: Interestingly, the first image from their "Try our samples" bar (the one with keywords "business, smiling, woman") scores 0.0%. It looks like the typical stock photo, though. So, it must be telling us something about how EA rated photos in the training.

The keywording is impressive. Fair enough, they had a large training set to fetch keywords from. Still, their feature vector generation must be useful, which may be their real asset. Too bad they did not publish their feature vector creation algorithm.

I tried two of my images; excellent keywording, scores of 0% and 12%. To put it bluntly, the scoring seems a lot less "refined" than the keywording. Anyway, useful for the keywording alone, I suppose....

Link | Posted on Apr 8, 2017 at 11:21 UTC
On article Canon EOS Rebel T7i / EOS 800D Sample Gallery (110 comments in total)

Interesting phrase: "the midrange camera in Canon's lower-end DSLR lineup" -- perhaps Canon's making too many models? They certainly are in the PowerShot line, with a new crop every year that's nearly identical across multiple models and years, but has a fleet of new model names and minor differences. Also, is $750 body only really a lower-end price now?

Link | Posted on Apr 7, 2017 at 11:28 UTC as 36th comment | 3 replies

"With the help of world famous development engineers" apparently none of whom want their names associated with this lens?

It says "will be available for practically all mirrorless cameras" -- which sensor formats; everything from Pentax Q to Hasselblad X1D and Fujifilm GFX 50S? The stated "84 degree angle of view" would imply full frame for 24mm, but most mirrorless cameras aren't full frame and optimizing the design for different coverage, pixel density, and cover-glass thickness will generate different optical designs. It's not a big deal to mount a manual lens on most mirrorless bodies without tuning the design -- an M42 thread (or some duct tape) can do that.

"We are striving for technical perfection with this lens – but we will not make any compromises when it comes to the creative part of photography. Personality and character are the most important features of all our lenses." What the heck is that supposed to mean? I think it means it will do well on Kickstarter. :-(

Link | Posted on Apr 7, 2017 at 04:02 UTC as 27th comment | 2 replies
In reply to:

TMHKR: With the possibility of the CHDK team releasing the hack for it in the future (with RAW support), it would make a night and day difference, regardless of the sensor size!

Distortion is VERY heavy on the wide end on the PowerShots -- the raw capture is MUCH wider, computationally undistorted, and then very conservatively cropped. However, the IQ is actually surprisingly good overall, so the primary benefit of raw is being able to crop the wide end less. For an example with a raw: http://aggregate.org/CACS/elph115is.html
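
For anyone curious what "computationally undistorted" means in practice, here's a bare-bones sketch of the usual radial-polynomial correction; the k1/k2 coefficients below are placeholders I made up, not Canon's firmware values:

```python
# Radial (Brown-Conrady-style) correction applied to normalized image coordinates.
# Coefficients are illustrative placeholders, not any camera's actual calibration.
import numpy as np

def correct_radial(xy_norm, k1=-0.15, k2=0.05):
    """xy_norm: Nx2 coordinates centered on the optical axis, unit focal length.
    Returns corrected coordinates; k1/k2 are assumed fit for this direction."""
    r2 = np.sum(xy_norm**2, axis=1, keepdims=True)
    return xy_norm * (1 + k1 * r2 + k2 * r2**2)
```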

The primary benefit to CHDK as I see it is programmability. You can easily make a $100 CHDK PowerShot do lots of things no other camera can do.

Link | Posted on Apr 6, 2017 at 23:28 UTC
In reply to:

TMHKR: With the possibility of the CHDK team releasing the hack for it in the future (with RAW support), it would make a night and day difference, regardless of the sensor size!

CHDK already works on more than a few Canon superzooms and, yes, that is a wonderful thing. There aren't many fully-programmable superzoom competitors... in fact, none I'm aware of. ;-) BTW, I use Toshiba FlashAir cards for bidirectional wifi communication with CHDK PowerShots.

Link | Posted on Apr 6, 2017 at 11:51 UTC
In reply to:

princecody: Has Sony perfected the Art of the sensor? Is that why 80% of the camera brands use Sony sensors?

Dudes, they are grinding down medium-format sensors for BSI in volume production! That they can get practical yields doing that is amazing... and only a couple of years after making FF BSI economically viable. I'm very impressed.

Does this mean sensor tech has reached a stable point? NO -- and that's why this is so exciting: Sony is making very significant hybrid fab improvements at a time when the big digital chipmakers (e.g., Intel) are kind of stuck.

Link | Posted on Apr 5, 2017 at 01:38 UTC
In reply to:

LoScalzo: I thought it's "Gear Acquisition Syndrome," not "Gear Addiction Syndrome." Which is it?

The most common form of Gear Acquisition Syndrome at DPReview is LBA -- Lens Buying Addiction -- so that's where the Addiction term creeps in.... ;-)

Link | Posted on Apr 2, 2017 at 16:00 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

Well, then it's completely wrong -- being out of focus does not cause blur!

The applicable dictionary definition of "blur" involves "smearing," but significantly out-of-focus PSFs do not smear anything, nor do they convolve. The OOF PSF simply causes rays from the same scene point, seen from different points of view (all within the aperture of the lens), to land in different locations on the sensor. The visual ambiguity comes from each sensor point summing rays from many scene points, but only non-occluded rays are summed. This is why I and others are able to recover stereo depth from single images (e.g., Lytro does it using plenoptics; I do it using single-lens anaglyph capture, which is really a variant of what's often called coded aperture capture).

Blur does occur in images, but true blur only arises from motion. Thus, if bokeh just meant "blur" it certainly would apply to motion blur... which I have never seen anyone claim. Two rounds of Google Translate isn't a valid definition. ;-)
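
If it helps make the geometry concrete, here's a minimal thin-lens sketch of the point: each ray from an out-of-focus scene point passes through a different part of the aperture and lands at a different spot on the sensor, so the OOF PSF is a scaled image of the aperture, and occluding part of the aperture simply removes those rays -- nothing gets smeared back in. The focal length, distances, and pupil size are arbitrary example values:

```python
# Thin-lens ray model: where rays from an on-axis out-of-focus point land on the sensor.
import numpy as np

def oof_psf_hits(obj_dist, focus_dist, f, aperture_xy):
    """aperture_xy: Nx2 positions (mm) where rays cross the lens aperture."""
    sensor_d = 1.0 / (1.0 / f - 1.0 / focus_dist)  # lens-to-sensor distance (focused plane)
    image_d = 1.0 / (1.0 / f - 1.0 / obj_dist)     # where the OOF point actually focuses
    return aperture_xy * (1.0 - sensor_d / image_d)  # a scaled (possibly inverted) aperture copy

# Example: 50mm f/2 lens focused at 1m, scene point at 2m, circular 25mm pupil.
pts = np.random.uniform(-12.5, 12.5, (2000, 2))
pts = pts[np.sum(pts**2, axis=1) <= 12.5**2]
hits = oof_psf_hits(obj_dist=2000.0, focus_dist=1000.0, f=50.0, aperture_xy=pts)
print(np.ptp(hits, axis=0))  # PSF extent on the sensor, roughly 0.66mm across here
```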

Link | Posted on Apr 1, 2017 at 00:22 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

I think it's pretty clear that bokeh was meant to refer to qualities, of which size is just one, and of only approximate importance (i.e., it takes a major change in PSF radius to make a qualitatively significant change). Of course, at very small sizes you really can't see the other qualities; beyond that, if your OOF PSF are overexposed (which is common), size might be the only property that is obvious.

I hadn't seen Marianne Oelund's stuff, but Vcam looks interesting for modeling nearly-in-focus PSF from some simple "summary" parameters (as opposed to modeling the optics directly). I've been more interested in relating detailed lens or scene properties to very-OOF PSF, and have published on various aspects at Electronic Imaging. The most accessible overview is probably the slides from my "Poorly Focused Talk": http://aggregate.org/DIT/ccc20140116.pdf

Link | Posted on Mar 31, 2017 at 13:05 UTC
In reply to:

ProfHankD: Just to be safe, in formal talks (e.g., the Electronic Imaging conference) I have been saying "OOF PSF" (out-of-focus point spread function). ;-)

Actually, I continue to be amazed by how many people in computational photography don't even know the word. I'm also happy you posted this... especially since it seems that I actually was pronouncing "bokeh" correctly. :-)

To me, bokeh refers to the summative effect of the OOF PSF for all OOF points in the image, which is easily quantified. It is straightforward to derive the bokeh in an image by applying a simple painter's algorithm (depth-ordered painting) to the measured OOF PSF. However, not that many folks actually measure OOF PSFs -- it isn't a standard thing to measure, and the 150 or so lenses I've measured probably constitute the largest database of OOF PSFs. Beyond that, I can predict OOF PSF from bokeh and vice versa, but individual preferences are very qualitative.
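
To be clear about what I mean by that painter's algorithm, here's a stripped-down sketch of the idea (not my actual implementation): splat each scene point's measured OOF PSF into the image in far-to-near order, so nearer points' PSFs occlude farther ones. Real code blends energy rather than overwriting and handles image boundaries; this just shows the skeleton:

```python
# Depth-ordered ("painter's algorithm") splatting of measured OOF PSF kernels.
import numpy as np

def render_bokeh(points, psfs, shape):
    """points: iterable of (x, y, depth, intensity); larger depth = farther away.
    psfs: dict mapping each point's depth to a 2D PSF kernel (numpy array)."""
    img = np.zeros(shape)
    for x, y, depth, inten in sorted(points, key=lambda p: p[2], reverse=True):  # far first
        k = psfs[depth]
        kh, kw = k.shape
        y0, x0 = y - kh // 2, x - kw // 2
        img[y0:y0 + kh, x0:x0 + kw] = inten * k  # nearer PSFs painted over farther ones
    return img  # boundary handling deliberately omitted for brevity
```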

Link | Posted on Mar 30, 2017 at 23:29 UTC