Are insights in imaging sciences helpful for photographers?

There was a somewhat depressing thread over at the medium format forum: https://www.dpreview.com/forums/post/67589152

I started to ask myself whether a basic understanding of the technology involved in making photographs is beneficial to photographers.

Best regards

Erik
Well, I think so. If you want to control the medium to get a particular result, I think you need to understand its foundational concepts and mechanisms.

Now, the required depth of that understanding varies a bit, depending on how much control you need. For me, understanding the basic concepts of saturation and the noise floor is essential to managing a good exposure, but I don't think I need to understand the particulars of semiconductor layering used to make sensor wells if the essential behavior can be abstracted. Different needs for different outcomes...
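To make that concrete, here's a minimal sketch of how saturation and the noise floor bound exposure latitude; the full-well and read-noise figures are purely illustrative, not measurements of any particular sensor:

```python
import math

# Illustrative numbers only -- not measurements from any real sensor.
full_well_e = 50_000   # saturation point (full-well capacity), in electrons
read_noise_e = 3.0     # noise floor (read noise), in electrons RMS

# Engineering dynamic range: the ratio of the largest recordable signal
# to the noise floor, expressed in stops (powers of two).
dr_stops = math.log2(full_well_e / read_noise_e)
print(f"Usable dynamic range ~ {dr_stops:.1f} stops")  # ~14.0 stops
```

Whatever the specific numbers, the game is the same: keep important highlights below saturation while keeping the shadows comfortably above the noise floor.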
 
Yeah, I think a basic understanding of how imaging works is essential to being able to produce the photographs one wishes to produce, for instance, understanding the EV system. If instead one just wants to snap away, these days auto-everything cameras and smartphones get you a decent picture.
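As a rough sketch of what the EV system encodes (settings only; scene brightness and ISO then determine which EV is appropriate):

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV of a settings pair: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Sunny-16 sanity check: f/16 at 1/125 s sits at roughly EV 15,
# the textbook value for a frontlit sunny scene at ISO 100.
print(round(exposure_value(16, 1 / 125), 1))   # -> 15.0

# Equivalent exposures share (nearly) the same EV: open up about one stop
# and halve the shutter time, and the EV barely moves.
print(round(exposure_value(11, 1 / 250), 1))   # -> 14.9
```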

Beyond that some of us are curious. What exactly does deconvolution do? How does it do it? Is it just a perceptual effect or does it get us closer to the ground truth? What are undesired side effects? How can I make it work at its best? Are there better alternatives? Etc. But that's just a fun way to spend a rainy afternoon ;-)

Jack
 
Thanks again for your discussion of deconvolution.
 
I think a photographer needs to understand the "how," but not the "why."

For example, one does not need to know why an automatic transmission works in order to drive a car with one; only how to shift it. And one does not need to know why increasing the ISO setting works, only that it makes the image "brighter and noisier."


And I think it depends on whether a photographer is more of a technician or more of an artist... there are photographers of both types, and everything in between. There are even artists who use photographic media as part of their work; but I wouldn't classify them as "photographers" per se, and their need for understanding is even lower.

--
https://www.flickr.com/photos/skersting/
 
I think that it depends on what one is trying to achieve.

For example, if one uses the AI noise reduction that Lightroom now has, do I need to understand how it's programmed, or is knowledge of the interface and its effect sufficient?

I like to understand how such things work because I am an engineer, but I am not sure it makes the delivered results any better.

So, nice to understand but perhaps not required?
 
AI is a bit of a conundrum in that regard. In a neural network-based tool, for example, there's no understanding of the network itself to be had, and the coded algorithms are arbitrary when considered in a domain context.

Instead, I'd imagine that knowing about the training datasets would be more pertinent. That sort of insight would probably be useful in debugging cases where the tool doesn't work so well...
 
+1

To expand on it a bit: first, there is no good understanding of why neural networks, ML, AI, etc., work when they do. Second, the whole point of those methods is to "learn" the outcome, not the reasons for it.

Similarly, a good or even a great photographer can just act on experience, intuition, etc., without understanding the technical part, much as AI does. In fact, AI methods try to simulate the way (uneducated but experienced :-) ) humans act, and they are becoming more sophisticated every day.
 
There was a somewhat depressing thread over at the medium format forum: https://www.dpreview.com/forums/post/67589152

I started to ask myself whether a basic understanding of the technology involved in making photographs is beneficial to photographers.
Benefit implies utility. Do you know how the ice machine works in your fridge? Would it matter if you did? It only matters when the fridge stops (or won't stop) making ice.

Additional knowledge lets you solve problems and experiment more efficiently. However, with enough time, one would discover these things even in the absence of any enhanced understanding of the fundamentals. Much of photography is rules of thumb that summarize the complex underlying technical realities. Very few of the most famous photographers whose work you are familiar with, I would suggest, were physicists, chemists, opticians, etc. They were artists for whom the technology was a means to an end.

-- Bob
http://bob-o-rama.smugmug.com -- Photos
http://www.vimeo.com/boborama/videos -- Videos
 
There was a somewhat depressing thread over at the medium format forum: https://www.dpreview.com/forums/post/67589152

I started to ask myself whether a basic understanding of the technology involved in making photographs is beneficial to photographers.

Best regards

Erik
It sure helps me make better pictures. Ansel Adams taught me that it's worth paying careful attention to the technical matters, and I didn't stop learning after that.

For a start, technical understanding helps me get the right exposure. In fact, I was just working out the optimum settings for the eclipse.

There are many other ways as well. I remember one day taking a macro picture of a flower or something, being unsatisfied with the background, and realizing that I had another lens in my bag that would blur the background better. I had never thought about this before, but I knew to do it because I understand how lenses work. Right now I'm working on quite a difficult photographic problem, trying to do what no one else does or can do. (It might be expensive for me.)
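For the curious, here's a thin-lens sketch of why reaching for the longer lens helps; the focal lengths and distances are hypothetical, just to show the trend:

```python
def background_blur_mm(focal_mm: float, f_number: float, subject_mm: float) -> float:
    """Approximate diameter (on the sensor, in mm) of the blur disc for a
    background at infinity, with the lens focused on a subject at subject_mm.
    Thin-lens approximation: b = f^2 / (N * (s - f))."""
    return focal_mm ** 2 / (f_number * (subject_mm - focal_mm))

# Same subject distance (0.5 m) and f-number, two hypothetical lenses:
print(round(background_blur_mm(50, 2.8, 500), 2))    # ~1.98 mm blur disc
print(round(background_blur_mm(100, 2.8, 500), 2))   # ~8.93 mm -- a far smoother background
```

Roughly speaking, the blur disc of a distant background scales with the entrance pupil diameter times the magnification, which is why swapping to the longer lens works.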

Some people don't want to know, and others want to know but won't get it right. That's not your problem. By the way, I have noticed real advances in understanding at DPR. Concepts that most of us didn't realize at first are now common knowledge.
 
Beyond that some of us are curious. What exactly does deconvolution do? How does it do it? Is it just a perceptual effect or does it get us closer to the ground truth? What are undesired side effects? How can I make it work at its best? Are there better alternatives? Etc. But that's just a fun way to spend a rainy afternoon
Good question. Ground truth? That concerns me, and it's worth discussing. I've come to depend on one of the new AI programs. It supposedly doesn't do generative AI, but in the only comparison with ground truth I have done, the results were spectacularly bad. A discussion for another day.
 
Beyond that some of us are curious. What exactly does deconvolution do? How does it do it? Is it just a perceptual effect or does it get us closer to the ground truth? What are undesired side effects? How can I make it work at its best? Are there better alternatives? Etc. But that's just a fun way to spend a rainy afternoon ;-)
That is just one algorithm for approximate deconvolution, given a positivity constraint and knowledge of the convolution kernel and the character of the noise. It does not claim to reconstruct the ground truth - just to estimate it, roughly speaking. There are many other methods; see the bottom of this page:

Deconvolution - Wikipedia: https://en.wikipedia.org/wiki/Deconvolution

None of them restores the ground truth; they all get close with some luck, depending on some additional assumptions, often not explicit.
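Richardson-Lucy is one of the classic methods on that list, and it matches the description above (positivity constraint, known kernel, Poisson noise model). A minimal sketch on toy synthetic data, just to show the shape of the algorithm:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Richardson-Lucy: a maximum-likelihood estimate under a Poisson noise
    model, with positivity enforced by the multiplicative update. It converges
    toward *an* estimate consistent with the blur and the noise, not to the
    ground truth."""
    psf_flipped = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = np.maximum(estimate * fftconvolve(ratio, psf_flipped, mode="same"), 0.0)
    return estimate

# Toy demo with a synthetic blur (illustrative only).
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20:44, 30] = 1.0                                       # a thin bright line
psf = np.ones((5, 5)) / 25.0                                 # box-blur kernel
blurred = np.clip(fftconvolve(truth, psf, mode="same"), 0, None)
noisy = rng.poisson(blurred * 200) / 200.0                   # shot noise
restored = richardson_lucy(noisy, psf)
```

Swap the kernel, the noise model, or the regularizer and you get the other entries on that Wikipedia list; none of them recovers the original scene exactly.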
 
I believe a basic understanding is helpful. However, I believe thinking about the science while shooting is a huge detriment; at least I've found that's the case for me. I attribute it to right-left brain modal thinking, a concept I know has been stretched, but it definitely applies to me. If I'm thinking about any technical aspects while shooting then my photography suffers - I'm not able to creatively "see" nearly as well. Quite simply, my creative brain doesn't work if I'm having any technical thoughts.

And I don't even mean deep technical things like readout noise or sensor readout speeds, etc... Even basic thoughts like exposure interfere with my creative process. That's why I'm a huge fan of camera automation (AF, exposure, etc...), and also why I believe many of the best photos taken today are with smartphones. Photographers sometimes say "that's a great photo in spite of it being taken with a smartphone". I say that's a great photo *because* it was taken with a smartphone. Camera automation liberates me from the workload of shooting, and more importantly, keeps my brain hemispheres separated, allowing me to focus on just seeing.
 
One way to look at a product and judge whether it is good or not is to consider whether the user needs to understand how the internals work in order to use it effectively.

If users can get results with little understanding of the internal workings, then the designers deserve a round of applause.
 
And I don't even mean deep technical things like readout noise or sensor readout speeds, etc... Even basic thoughts like exposure interfere with my creative process. That's why I'm a huge fan of camera automation (AF, exposure, etc...), and also why I believe many of the best photos taken today are with smartphones. Photographers sometimes say "that's a great photo in spite of it being taken with a smartphone". I say that's a great photo *because* it was taken with a smartphone. Camera automation liberates me from the workload of shooting, and more importantly, keeps my brain hemispheres separated, allowing me to focus on just seeing.
For a long time, I've tended to distrust camera automation, because it frequently gets things wrong. I still almost universally distrust autoexposure, partly because I know roughly how most camera AE algorithms work and I disagree with them. (Specifically, very few cameras offer effective highlight metering.)

With smartphones (especially Google's smartphones), they have published multiple papers describing various aspects of their pipeline.

I know that Google's phones meter exposures to preserve highlights. I agree with this approach.

I know that metering for highlights carries a risk of reduced SNR not only in the shadows but also in midtones. I also know from reading their published research that Google compensates for this risk by taking multiple exposures and stacking them.
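A toy illustration of why stacking recovers what conservative metering gives up; this assumes pure Gaussian per-frame noise, no motion, and made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # "true" pixel value, arbitrary units
noise_sigma = 10.0    # per-frame noise, illustrative only
n_frames = 16

frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 100_000))

print(frames[0].std())            # ~10   -> single-frame SNR ~10
print(frames.mean(axis=0).std())  # ~2.5  -> noise falls by sqrt(16); SNR ~40
```

Real pipelines have to deal with motion between frames, which is exactly where the next point comes in.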

I know that exposure merging can have issues with motion unless subpixel alignment and/or local motion estimation and compensation (as opposed to global motion estimation/compensation) are used. I know from reading their published research that their newer implementations support this. (Legacy HDR+ did not; the modern MFSR that originally appeared in Night Sight and is now the default in all modes does.)

I know that exposure compensating/tonemapping can have tradeoffs that an automated system can get wrong. I also know that I have found historically that Mertens' exposure fusion algorithm (aka enfuse) happens to be highly robust and rarely gets things "wrong" in a way that I dislike. (yes, I know this is a matter of taste, but my personal opinion is that I'm almost always happy with the results from enfuse with little to no need to tweak things significantly). Last, I know from Google's published research that they use a variation of Mertens' algorithm to perform their local tonemapping.
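For anyone who wants to see what plain exposure fusion does, here is a sketch using the stock OpenCV implementation of Mertens' algorithm (not Google's variant; the filenames are hypothetical):

```python
import cv2
import numpy as np

# Three hypothetical brackets of the same scene: under-, mid-, and over-exposed.
frames = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]

# Rough global alignment only; real pipelines handle local motion far more carefully.
cv2.createAlignMTB().process(frames, frames)

# Mertens exposure fusion: per-pixel weights from contrast, saturation and
# well-exposedness, blended across a multi-scale pyramid. No HDR radiance
# map and no separate tonemapping step.
fused = cv2.createMergeMertens().process(frames)       # float32, roughly 0..1
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```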

I distrust neural networks, but I know that Google only uses them for preview and for AWB in their "default" (e.g. no Magic Eraser or whatever) pipelines/settings.

This knowledge of the image processing fundamentals, and of the design of the automation system, helps me gain trust in the automation. This trust has led me to use my Pixel far more often than my Sonys nowadays, because the Pixel is so much more compact and I always have it with me, and I trust most of the fundamental decisions made in its design. (I disagree with Google's stance on what Adobe calls the "LinearRAW" photometric interpretation, e.g. storing an image that has been demosaiced without color conversion. Apple does this one better as Google's approach is inferior when your stacking algorithm does subpixel alignment.)
 
I definitely find it helpful to understand things like flange focal distance, optical design, weather sealing, the why behind chromatic aberration, and the mechanisms behind achieving focus.

Examples:

I wanted to start using vintage lenses after my Canon M50's lens mount was discontinued, so understanding flange focal distance helped me understand why I couldn't mount R-mount glass to my camera, and why some mounts' adapters are much longer than others'.
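A back-of-the-envelope sketch of the adapter logic; the flange figures below are the commonly published ones, worth double-checking before buying anything:

```python
# Commonly published flange focal distances, in mm.
FLANGE_MM = {
    "Canon EF-M": 18.0,    # the M50's mount
    "Canon RF":   20.0,
    "Canon EF":   44.0,
    "Nikon F":    46.5,
    "M42":        45.46,
}

def adapter_thickness_mm(lens_mount: str, body_mount: str) -> float:
    """A plain (glassless) adapter has to make up the difference in register:
    lens-mount flange distance minus body-mount flange distance."""
    return FLANGE_MM[lens_mount] - FLANGE_MM[body_mount]

print(adapter_thickness_mm("Nikon F", "Canon EF-M"))   # 28.5 mm -> plenty of room for an adapter
print(adapter_thickness_mm("Canon RF", "Canon EF-M"))  # 2.0 mm  -> no practical room for one
```

The bigger the positive difference, the more room the adapter has; small or negative differences are why some combinations simply can't work without corrective optics.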

Understanding the basics of optical design also helps me identify trends of lenses I really like; I'm less sad about the eventual loss of my 32mm f1.4, because I can tell the Fuji 33mm f1.4 and 35mm f1.4 are optically similar and provide similar rendering. I'm also never sad when a fast lens is big, because I understand why that has to be the case. Dropping to f2 or f2.8 for portability never feels like a compromise now, given that it's governed by physics, not camera manufacturers.

Knowing it's impossible to weather seal a rotating element makes me not feel grouchy that Voigtlander's manual focus lenses aren't weather sealed, and helps me know not to use weather-sealed lenses in manual focus mode in poor conditions, lest I defeat the sealing. Also, shopping for vintage lenses and seeing the prevalence of dust makes me appreciate weather sealing and internally focusing zooms.

Understanding why chromatic aberration happens helps me know when a lens is worth taking a chance on vs. skipping, because I know my personal preference is always for lenses with the lowest CA in their class. That purple fringing drives me nuts.

Understanding how focus is achieved made me better at using manual focus, because I can visualize the plane of focus moving even before I make a choice to turn the focus ring.

There are many similar instances of this with video; knowing that a frame pulled from 4K footage is only about 8 MP makes me understand the importance of taking dedicated stills on set rather than pulling frames from video.
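The arithmetic behind that, for reference:

```python
# Frame sizes in pixels; a "4K" still is modest next to a typical stills sensor.
uhd_mp = 3840 * 2160 / 1e6    # ~8.3 MP (UHD, what most cameras call 4K)
dci_mp = 4096 * 2160 / 1e6    # ~8.8 MP (DCI 4K)
print(round(uhd_mp, 1), round(dci_mp, 1))   # 8.3 8.8
```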

As a guitarist, I found it enriching to understand my preferences for certain types of amps, strings, clipping distortions, signal paths, etc. and I've found that similarly, my enjoyment of photography is enriched by understanding the tools I'm using and knowing why they were designed the way they were.
 
Obviously not. The first question to ask when buying a new camera is how good the autofocus is.

I was reading on the FM forums about a guy who sold the latest MF camera and bought a Nikon Z9, and it wasn't just the advanced AF that made him switch.
 
No - they are only helpful to gearheads arguing on fora
 
If all you know is the science and technology, then you are likely to take boring photos. But if you don't know the tech, you are likely to take blurry photos. You need both art and technology.

Even traditional painting needs some technical knowledge about pigments, fading, and media.

Don
 
A very quotable response. Thanks, Don.
 
