Mike Davis

Lives in Dallas, TX, United States
Has a website at http://www.accessz.com
Joined on Jun 12, 2002

Comments

Total: 57, showing: 1 – 20
In reply to:

mmarian: No disrespect to Adams' achievement and his status in the history of photography, but the glorification of triviality in this video is a bit sad. Apart from the multiple-light-source enlarger head with individual switches, whose usefulness is debatable, the rest is just plain common knowledge and a very typical darkroom setup. I had such hand-made dodge-and-burn patches myself, and the strip exposure test was commonplace in those days. I worked on a 10x8" enlarger on horizontal rails in the darkroom floor, with remote control and a magnetic wall, in a professional colour laboratory many years ago. I was making prints, colour and B&W, up to 15 feet long and 4 feet wide, with the paper held by magnets. The only guide in total darkness was the tiny fluorescent patches stuck to the magnets. The magnifying focusing tool seen in the video was something commonly used in those days as well. So, what else? That achieving the desired final print took a long time and many trials and errors? Well, those were the times of darkrooms, silver halide paper and chemicals. We even used to tone B&W photos by immersing them in two hand-prepared chemical dilutions in a two-stage process to get a blue tone, etc. Not very healthy, I have to admit... And we used to dry the large prints by taping them to the walls to achieve "that look" and, when dry, cutting the brown gummed paper tape with a Stanley knife along the edge of the print. Anyway, the video might sound very fascinating to the current generation, who have only experienced the marvels of digital technology, but to folks from the old days, talking about common darkroom equipment and methods with such awe and wonder sounds a bit odd and almost off-putting.

Agreed. I should cut him some slack, but as I watched him reaching for words, it struck me that he isn't familiar with the vocabulary of darkroom technique. He would do well to read "The Negative" and "The Print" before conducting any more interviews in his father's darkroom.

Link | Posted on Jul 10, 2016 at 10:54 UTC
On article Virtual Reality: It's not just for gamers anymore (155 comments in total)
In reply to:

jaxson: 3D via this method is always odd for the brain. Normally we focus at different 'depths' into a scene, but with 3D we're constantly focusing very close to our eyes, and the designer determines where the focus point in the scene is. Not sure if it's a major issue, but it's actually quite different when you think about it.

I'm also still concerned that the tools to view just aren't there yet. I'm not keen on heading to a theatre where I'm wearing head gear someone else just wore.

Pushing the virtual image out to at least 30 inches would greatly reduce this problem, as would increasing viewer brightness to "stop down" our pupils, avoiding the very unrealistic use of selective focus (which only makes sense when shooting for 2D), and, lastly, putting an end to the excessively frequent and unrealistic use of negative parallax seen in so many 3D films, where otherwise seasoned and accomplished 2D cinematographers just cannot resist poking us in the eyes with everything from shotgun barrels to saw-toothed piranhas. 3D content will never be perceived as real until it becomes completely transparent to both the storytelling and the visual experience. We don't walk around in the real world saying, "Cool! I can see in 3D!" Neither should such thoughts come to mind when experiencing VR. Less is more!

Link | Posted on Jun 10, 2016 at 16:59 UTC
On article Virtual Reality: It's not just for gamers anymore (155 comments in total)
In reply to:

Mike Davis: As pointed out in part II of this article, the greatest limitation in achieving full "immersion" is that of resolution in the digital displays. To varying degrees, we resist the notion that VR is "real" when the display resolution is so far below that which healthy human eyes can naturally perceive.

THX recommendations for theater and TV viewing distances support the generally accepted figure of 1 arc-minute (1/60th of one degree) of angular resolution as the limit of human visual acuity. None of today's head-mounted displays are getting anywhere close to that.

Even if we could purchase a viewer that delivers "4k" to each eye (using two 4k smartphones?), we would only be getting 2160x4096, or 8.85 MP, to each eye, which just isn't enough to emulate reality when it is spread across angles of view as great as those expected in VR headsets.

Sadly, the entire digital display market is driven by the video formats streamed into our homes. We're stuck at "4k" for now, and it's unlikely any of the display manufacturers are going to start cranking out "4k" (8.8 MP) displays with a 2-inch diagonal anytime soon, much less 760 MP displays with a 2-inch diagonal.

But... Revisiting the idea of a 40-degree FoV, both horizontal and vertical - again, similar to an 8-foot-square hole in a wall viewed from 10 feet away - we can, today, experience resolutions as high as 1 arc-minute (the equivalent of 8 lp/mm in an 8-inch print viewed at 10 inches) using a handheld, illuminated stereoscope made for viewing stills (sorry, no video) captured on medium format color transparency film. Digital display technology has fallen way behind sensor resolutions - it can't compete with analog displays when it comes to presenting a lot of subject detail in small, handheld or head-mounted viewers.

Link | Posted on Jun 10, 2016 at 16:23 UTC
On article Virtual Reality: It's not just for gamers anymore (155 comments in total)
In reply to:

Mike Davis: As pointed out in part II of this article, the greatest limitation in achieving full "immersion" is that of resolution in the digital displays. To varying degrees, we resist the notion that VR is "real" when the display resolution is so far below that which healthy human eyes can naturally perceive.

THX recommendations for theater and TV viewing distances support the generally accepted figure of 1 arc-minute (1/60th of one degree) of angular resolution as the limit of human visual acuity. None of today's head-mounted displays are getting anywhere close to that.

Even if we could purchase a viewer that delivers "4k" to each eye (using two 4k smartphones?), we would only be getting 2160x4096, or 8.85 MP, to each eye, which just isn't enough to emulate reality when it is spread across angles of view as great as those expected in VR headsets.

A digital VR display would have to present at least 50 MP to each eye, within an angle of view that replicates looking at only an 8-inch-square print held 10 inches in front of your face, in order to get the angular resolution all the way up to 1 arc-minute. With both the horizontal and vertical angles of view coming to about 40 degrees, it would be equivalent to looking out onto a natural landscape with your naked eyes through a square hole in the wall of a black room, with the hole measuring 8 feet square while you're standing 10 feet away. That's not too bad, but a 40-degree FoV is nowhere near as "real" as the nearly 180-degree horizontal and 135-degree vertical FoV of the human eye. To truly satisfy the 1 arc-minute acuity of healthy human vision, we would have to capture and display more than 15.2 times as many pixels as could be presented with dual 50 MP displays. We're talking 760 MP per eye.
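
Here is a minimal Python sketch of the scaling arithmetic above. The 50 MP, 40-degree, 180-degree and 135-degree figures all come from the comment; the code itself is only illustrative.

```python
# Scale a per-eye pixel count from a 40-degree-square field of view up to the
# roughly 180 x 135 degree field of view of the human eye, holding angular
# resolution (pixels per degree) constant.

def required_megapixels(base_mp, base_fov_deg, target_h_deg, target_v_deg):
    scale = (target_h_deg / base_fov_deg) * (target_v_deg / base_fov_deg)
    return base_mp * scale, scale

mp, scale = required_megapixels(base_mp=50, base_fov_deg=40,
                                target_h_deg=180, target_v_deg=135)
print(f"FoV scale factor: {scale:.1f}x")      # ~15.2x
print(f"Per-eye pixels needed: {mp:.0f} MP")  # ~759 MP, i.e. the ~760 MP above
```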

Link | Posted on Jun 10, 2016 at 16:23 UTC
On article Virtual Reality: It's not just for gamers anymore (155 comments in total)

As pointed out in part II of this article, the greatest limitation in achieving full "immersion" is that of resolution in the digital displays. To varying degrees, we resist the notion that VR is "real" when the display resolution is so far below that which healthy human eyes can naturally perceive.

THX recommendations for theater and TV viewing distances support the generally accepted figure of 1 arc-minute (1/60th of one degree) of angular resolution as the limit of human visual acuity. None of today's head-mounted displays are getting anywhere close to that.

Even if we could purchase a viewer that delivers "4k" to each eye (using two 4k smartphones?), we would only be getting 2160x4096, or 8.85 MP, to each eye, which just isn't enough to emulate reality when it is spread across angles of view as great as those expected in VR headsets.
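
A rough Python sketch of why "4k" per eye falls short. The 100-degree horizontal field of view (typical of headsets of this era) and the 60 pixels per degree needed for 1 arc-minute per pixel are my assumptions, not figures from the comment above.

```python
ONE_ARCMIN_PPD = 60       # pixels per degree for 1 arc-minute per pixel (assumed target)
headset_fov_deg = 100     # assumed horizontal FoV of a contemporary headset
panel_width_px = 4096     # "4k" panel width devoted to one eye

actual_ppd = panel_width_px / headset_fov_deg
print(f"Delivered: {actual_ppd:.0f} px/deg vs. {ONE_ARCMIN_PPD} px/deg needed")
# Delivered: 41 px/deg vs. 60 px/deg needed
```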

Link | Posted on Jun 10, 2016 at 16:21 UTC as 11th comment | 3 replies
On photo Tactical Approach in the Wildlife - Birds in Flight challenge (35 comments in total)
In reply to:

gbdz: You need to be a bit lucky to get a shot like this as well.
Congratulations.

It's obvious you didn't mean to diminish the skill involved here, but "lucky" is what people say when a golfer makes a hole-in-one, which I've only ever witnessed once and never done myself. The fact remains: every hole-in-one is the result of someone's absolute intent to put the ball into that distant cup. This shot is a hole-in-one!

Link | Posted on Apr 16, 2016 at 14:52 UTC
On photo Tactical Approach in the Wildlife - Birds in Flight challenge (35 comments in total)

What they said!!

Link | Posted on Apr 16, 2016 at 14:45 UTC as 4th comment
On photo On the Cat Walk in the My Best Photo of the Week challenge (13 comments in total)

A great capture and deserving of first place, but your editing was a bit sloppy along the length of the right leg (seen at 100%).

Link | Posted on Mar 30, 2016 at 21:03 UTC as 1st comment
On challenge ND, or not too ND? (6 comments in total)

You win! Thanks for giving us a nice comparison.

Link | Posted on Mar 30, 2016 at 20:38 UTC as 1st comment
On photo The road to heaven in the The Ice road challenge (1 comment in total)

In my impotent opinion, this image deserved 1st Place among the others. It's superbly crafted and it inspires me.

Link | Posted on Nov 20, 2015 at 16:26 UTC as 1st comment

Quoting the article above: "The reporter is also amazed that the screw heads in the body of JH Darumeya's stereo camera from 1860 are all perfectly aligned."

JH Darumeya? LOL

His name was JH Dallmeyer - search for it.

"Darumeya" is how an English-speaking Japanese national would pronounce "Dallmeyer." The reporter needs to do his homework.

Link | Posted on Sep 20, 2015 at 00:06 UTC as 2nd comment | 1 reply
In reply to:

flektogon: Achievements in electronics are growing almost exponentially, but what about optical achievements? A sensor with such density would be able to register details equivalent to up to 200 lpm of lens resolution. Does Canon (or anyone else) have such an extremely sharp lens? The best lenses for the 35mm film format went up to 100 lpm. How much improvement can we expect to see in lens design?

The f-Number at which diffraction will begin to inhibit a desired print resolution, at an anticipated enlargement factor, can be calculated as:

f-N = 1 / (desired print resolution in lp/mm x enlargement factor x 0.00135383)

If (and please don't overlook this "if") we wanted to produce an uncropped, unresampled 360 dpi print (the equivalent of 5 lp/mm after enlargement) from this 19,580x12,600 pixel sensor, it would measure 54.4 x 35.0 inches, requiring 44x enlargement. Thus, the f-Number at which diffraction would begin to inhibit our desired resolution of 5 lp/mm in the final print would be:

f-N = 1 / (5 x 44 x 0.00135383) = 3.36

So we would not be able to use f/4 or larger f-Numbers without causing diffraction to inhibit our desired print resolution. (This formula assumes the print resolution has been selected for viewing at a distance of 10 inches; if you are willing to view the print no closer than 20 inches, you will perceive equivalent detail and can shoot at f/7.2, nearly f/8, without fear of diffraction.)
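
The formula above translates directly into a few lines of Python. This is only a sketch of the comment's own arithmetic (the 5 lp/mm and 44x values are taken from the example above), not an authoritative diffraction calculator.

```python
CONSTANT_MM = 0.00135383  # constant used in the comment's formula

def max_f_number(print_res_lp_mm, enlargement):
    """f-Number at which diffraction just begins to inhibit the desired
    print resolution at the anticipated enlargement factor."""
    return 1.0 / (print_res_lp_mm * enlargement * CONSTANT_MM)

print(f"f/{max_f_number(5, 44):.2f}")  # f/3.36 -- larger f-Numbers start to erode 5 lp/mm
```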

Link | Posted on Sep 9, 2015 at 23:51 UTC
On photo Waterfall in the Assynt in the Tilt Shift challenge (3 comments in total)

Hi gidgetto!

At this point, with 24 submissions thus far, yours is the ONLY submission to this challenge that makes traditional use of Tilt-Shift.

For that, I thank you!

This is just my opinion, of course, but the Tilt-Shift miniaturization fad is getting old, real old.

Link | Posted on Apr 7, 2015 at 17:51 UTC as 3rd comment
On article CP+ 2015: Canon shows off prototype 120MP CMOS sensor (255 comments in total)
In reply to:

mike earussi: With a pixel size of 2.2um, diffraction will start to destroy the resolution after f/2.8, so I really don't see this having any practical application for regular photographers. Nor do I think any lens currently on the market can shoot at this level of resolution at f/2.8. So this is a technical achievement, not a practical one, unless it's considered for its PR value or for bragging rights.

I'd be far more impressed by Canon increasing the DR of its sensors, which would have practical value, instead of just their MP count.

Well said, though you didn't specify the enlargement factor and desired print resolution at which f/2.8 would begin to be an issue.

Link | Posted on Feb 15, 2015 at 16:47 UTC
On article CP+ 2015: Canon shows off prototype 120MP CMOS sensor (255 comments in total)
In reply to:

John Crawley: When is the industry going to learn that more MP isn't the answer?

I agree with James123 and mosc. Your posts are like a breath of fresh air.

Link | Posted on Feb 15, 2015 at 16:44 UTC
On article CP+ 2015: Canon shows off prototype 120MP CMOS sensor (255 comments in total)
In reply to:

Frank_BR: Many good lenses produce details in the center of the field which can only be revealed by a sensor resolution of 200 to 500 MP. Therefore, the increase in sensor resolution is most welcome. Many people who use the argument of diffraction against increasing sensor resolution forget that the impact of diffraction is gradual, and that much of the fall-off in response can be compensated for via digital processing.

8 lp/mm is generally accepted as the highest resolution any adult with healthy vision can appreciate at a viewing distance of 10 inches. So, even if you desire a print resolution of 8 lp/mm (in a non-resampled 576 dpi print), there's no point in having a 120 MP sensor if you intend to make prints smaller than 15.9 x 23.1 inches - the size you'd get using all 120 MP to secure 8 lp/mm. And remember, even a print of this size, at this resolution, demands that you avoid stopping down beyond f/4.6 - diffraction makes all those extra pixels useless if you do.

If you intend to make prints with less than 5 lp/mm of resolution at that enlargement factor with a 120 MP sensor, then again, you don't need 120 MP.

In short, the only way to actually take advantage of all those pixels on so small a sensor is to forget about using most of the f-Numbers offered by your lenses - thanks to diffraction. Never mind the signal-to-noise ratios suffered with such a tiny pixel pitch.
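
As a quick check of the print-size claim above, here is a short Python sketch. The 29.2 x 20.2 mm sensor size is quoted elsewhere in these comments; the 13,280 x 9,184 pixel dimensions are my assumption about the prototype, and the 576 dpi figure is the comment's own equivalent of 8 lp/mm after AA and Bayer losses.

```python
MM_PER_INCH = 25.4
CONSTANT_MM = 0.00135383       # constant from the max f-Number formula used above

px_w, px_h = 13280, 9184       # assumed pixel array of the 120 MP prototype
sensor_w_mm = 29.2             # sensor width quoted in these comments
dpi = 576                      # 8 lp/mm equivalent, per the comment above

print_w_in = px_w / dpi
print_h_in = px_h / dpi
enlargement = (print_w_in * MM_PER_INCH) / sensor_w_mm
max_f = 1.0 / (8 * enlargement * CONSTANT_MM)

print(f"{print_h_in:.1f} x {print_w_in:.1f} in at 576 dpi")                  # ~15.9 x 23.1 in
print(f"enlargement ~{enlargement:.1f}x, diffraction limit ~f/{max_f:.1f}")  # ~20.1x, ~f/4.6
```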

Link | Posted on Feb 15, 2015 at 16:24 UTC
On article CP+ 2015: Canon shows off prototype 120MP CMOS sensor (255 comments in total)
In reply to:

Frank_BR: Many good lenses produce details in the center of the field which can only be revealed by a sensor resolution of 200 to 500 MP. Therefore, the increase in sensor resolution is most welcome. Many people who use the argument of diffraction against increasing sensor resolution forget that the impact of diffraction is gradual, and that much of the fall-off in response can be compensated for via digital processing.

At f/9, with this 32.1x enlargement factor, you might as well have used a 60MP sensor, because diffraction will reduce your print resolution to 2.5 lp/mm, the equivalent of 180 dpi after AA and RGBG losses.

At f/18, you'd have been fine with a 30MP sensor, because diffraction will reduce your print resolution to 1.25 lp/mm, the equivalent of 90 dpi after AA and RGBG losses.

Is the print too large in my example above? If you intend to make smaller prints with a 120 MP sensor, then you don't need 120 MP, unless you desire more than the 5 lp/mm (360 dpi) print resolution used in my example. Most people make large prints at lower resolutions anyway, either because they don't have the pixels to warrant larger prints or because they assume no one is going to scrutinize them from a distance of only 10 inches.
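
The f/9 and f/18 figures above follow from inverting the max f-Number formula used earlier on this page: achievable print resolution = 1 / (f-Number x enlargement x 0.00135383). A minimal Python sketch of that arithmetic, using the comment's own 32.1x enlargement factor:

```python
CONSTANT_MM = 0.00135383
enlargement = 32.1   # 25.5 x 36.9 in print at 360 dpi from the 20.2 x 29.2 mm sensor

for f_number in (9, 18):
    res_lp_mm = 1.0 / (f_number * enlargement * CONSTANT_MM)
    print(f"f/{f_number}: ~{res_lp_mm:.2f} lp/mm in the print")
# f/9:  ~2.56 lp/mm (the ~2.5 lp/mm / 180 dpi case above)
# f/18: ~1.28 lp/mm (the ~1.25 lp/mm / 90 dpi case above)
```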

Link | Posted on Feb 15, 2015 at 16:24 UTC
On article CP+ 2015: Canon shows off prototype 120MP CMOS sensor (255 comments in total)
In reply to:

Frank_BR: Many good lenses produce details in the center of the field which can only be revealed by a sensor resolution of 200 to 500 MP. Therefore, the increase in sensor resolution is most welcome. Many people who use the argument of diffraction against increasing sensor resolution forget that the impact of diffraction is gradual, and that much of the fall-off in response can be compensated for via digital processing.

Yes, the effect of diffraction comes on gradually, but the f-Number at which diffraction will just begin to inhibit a desired print resolution (in lp/mm) at an anticipated enlargement factor can be calculated as follows:

Max. f-Number = 1 / (desired print resolution x anticipated enlargement factor x 0.00135383)

Running the numbers for a desired print resolution of 5 lp/mm (equivalent to 360 dpi after AA and RGBG losses), we get

Max. f-Number = 1 / (5 x 32.1 x 0.00135383) = 4.6

Thus, with a 32.1x enlargement factor (for a 25.5 x 36.9-inch 360 dpi print), any attempt to use f-Numbers larger than f/4.6 will reduce the print resolution, thanks to the gradual onset of diffraction.

No amount of post-processing can restore genuine subject detail that was lost due to diffraction at the time of exposure. Acuity (edge sharpness) can be improved, but lost resolution (actual subject detail) cannot be created from nothing.

Link | Posted on Feb 15, 2015 at 16:24 UTC
On article CP+ 2015: Canon shows off prototype 120MP CMOS sensor (255 comments in total)
In reply to:

Frank_BR: Many good lenses produce details in the center of the field which can only be revealed by a sensor resolution of 200 to 500 MP. Therefore, the increase in sensor resolution is most welcome. Many people who use the argument of diffraction against increasing sensor resolution forget that the impact of diffraction is gradual, and that much of the fall-off in response can be compensated for via digital processing.

Despite the gradual onset of diffraction, if anyone tries to use a 120 MP capture to make non-resampled 360 dpi prints (equivalent to 5 lp/mm after losses in resolution caused by the AA filter and RGBG algorithm), the resulting print would measure 25.5 x 36.9 inches, suffering an outrageous enlargement factor of 32.1x from the smaller-than-full-frame sensor, which measures only 20.2 x 29.2 mm.

At that enlargement factor, diffraction's Airy disks, for any given f-Number, will be magnified so much in the final print as to force the use of f-Numbers no greater than f/4.6 to deliver a desired print resolution of 5 lp/mm - to actually make use of the resolution promised by the 120 MP sensor.
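
For what it's worth, the 0.00135383 constant in the formula used throughout these comments appears to be 2.44 x 0.000555 mm, i.e. the first-minimum Airy disk diameter per unit f-Number at roughly 555 nm light - that reading is my own assumption, not something stated in the comments. Under that assumption, a short Python sketch shows the magnified Airy disk at f/4.6 and 32.1x just filling one line pair at 5 lp/mm:

```python
wavelength_mm = 0.000555   # ~555 nm, peak of photopic vision (assumed)
f_number = 4.6
enlargement = 32.1

airy_on_sensor_mm = 2.44 * wavelength_mm * f_number   # first-minimum Airy disk diameter
airy_on_print_mm = airy_on_sensor_mm * enlargement    # ~0.20 mm in the print
print(f"Airy disk in print: {airy_on_print_mm:.2f} mm; "
      f"one line pair at 5 lp/mm spans {1 / 5:.2f} mm")
```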

Link | Posted on Feb 15, 2015 at 16:23 UTC
On article CP+ 2015: Canon shows off new EOS 5DS and 5DS R (124 comments in total)
In reply to:

MarkByland: It's sort of a false representation to take 100 photographs, stitch them together, and present them as something that would come straight out of the camera. Why not do a side-by-side 1:1 series taken with a Mk III? Or, wait, a D810? Show us what you've got, not what can be done with major digital processing.

Also, does this 5DS R come with a free, multi-terabyte, cloud-based storage account?

I would nevertheless be irritated at seeing those stitched prints. Show me what the camera itself can do with a single exposure.

Link | Posted on Feb 15, 2015 at 15:27 UTC