Tom-Thomas

Joined on Jul 8, 2018

Comments

Total: 19, showing: 1 – 19
On article DPReview TV: Scan film negatives with the Nikon ES-2 (279 comments in total)

I just don't understand why so many people insist on using the misnomer. Everyone knows that this isn't scanning. Nobody would accept it if I said, "I am going to scan some photos at the wedding with my new Z6." Nor would it be acceptable to say, "I am printing this photo on my 4K big-screen TV." A strange world.

Link | Posted on Mar 19, 2020 at 17:54 UTC as 7th comment
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

Imagine a flashlight shining onto a piece of cardboard from a certain distance. Without changing position, the circle of light produced by this flashlight is the same regardless of the size of the cardboard. The only thing the size of the cardboard affects is how much of the circle of light the cardboard can catch. The properties of the flashlight are not altered by the cardboard, just as the properties of the lens are not altered by the sensor.

Link | Posted on Jul 4, 2019 at 18:36 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

It's very confusing because these terms are arbitrary jargon. On top of that, even many camera manufacturers use the terms interchangeably, because their goal is to sell merchandise, not to educate. It is immensely easier for them to just use the terms interchangeably than to explain them.

I think the more accurate and clearer way is to think about them in terms of the concepts they refer to rather than the terminology itself (see post below):

AOV is the "amount" of the SCENE a LENS can cover. A shorter focal length lens produces an image circle that covers a wider scene (hence, short FL lenses are called wide-angle lenses) than a longer focal length lens. This doesn't change no matter what sensor you put behind the lens, because the lens produces the same image circle regardless of what is behind it.

FOV is the "amount" of the IMAGE CIRCLE a SENSOR can cover. This coverage, of course, is affected by the sensor size. The larger the sensor, the more of the image circle the sensor can cover.
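To put rough numbers on that idea of "coverage", here is a minimal Python sketch; the sensor dimensions are the usual nominal values, and it assumes (purely for illustration) that the lens's usable image circle is just big enough for a full-frame sensor:

```python
import math

def diagonal_mm(width_mm, height_mm):
    """Sensor diagonal in mm."""
    return math.hypot(width_mm, height_mm)

# Nominal sensor sizes (assumed values for illustration).
full_frame = diagonal_mm(36.0, 24.0)   # ~43.3 mm
aps_c      = diagonal_mm(23.5, 15.6)   # ~28.2 mm (Nikon DX)

# Assume the lens's usable image circle is just big enough for full frame.
image_circle = full_frame

for name, diag in [("Full frame", full_frame), ("APS-C", aps_c)]:
    print(f"{name}: {diag:.1f} mm diagonal, "
          f"covers {diag / image_circle:.0%} of the image circle")

print(f"Diagonal crop factor: {full_frame / aps_c:.2f}")
```

The lens and its image circle are identical in both rows; only how much of that circle each sensor covers changes.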

Link | Posted on Jul 4, 2019 at 18:35 UTC
In reply to:

nonuniform: Why does anyone care how an image was created? Except for images that claim to be documents of actual events, the rest of it doesn’t matter when the overwhelming majority of images are viewed as jpegs on a digital device. Who cares if it was painted with oils, or a photographic composite? What difference does it make.

I ask because so many people have responded with very rigid views, but I wonder truly, why it matters.

nonuniform,

While I agree that art shouldn't have boundaries and labels can sometimes be silly, a painter ought to paint and a sculptor ought to sculpt. There is no doubt that his works are art, but the question is whether or not they're photography.

No matter how much post-image-capture manipulation is involved, photography still starts with capturing an image. If you don't even capture the image yourself, how can you call yourself a photographer and your work photography? It's as if I cut up other people's paintings and pasted them together into a composite: could I call myself a painter? Karcz's works may be great art, just not photography, that's all. The whole debate could be avoided if people just called his works what they really are: composite art of found photographs.

Link | Posted on Jun 28, 2019 at 03:34 UTC
In reply to:

photolando: Having gone a to school where the great Jerry Uelsmann was a frequent speaker ( I was in Daytona. He taught at UF) I don't really have a problem with composites. But I do have problem if YOU didn't take all the images in the composite. Well, I don't have problem if you're calling yourself a "compositor" but please don't call yourself a photographer. You're nothing of the sort. And especially using copyrighted material.

If YOU are not using light as part of the process to create your art, you are not a photographer. Doesn't mean you're not an "artist" but it's not "photo-" (aka "light") graphy. Plain and simple.

Kevin Barrett,

I don't think the debate is about whether or not Karcz is an artist, or about whether or not his works are art. He is an artist and his works are art.

The debate also isn't about whether or not post-processing (darkroom or digital) after image capture is part of the entire photography process. It is.

But Karcz isn't a photographer and his work isn't photography. Think about it this way: if I cut up other people's paintings and glue them together into a composite, can I honestly call myself a painter?

Link | Posted on Jun 28, 2019 at 03:05 UTC
In reply to:

Constantin V: > I agree with them, manipulated images are not photography.

I somehow doubt that people who liked that on dpreview are never liked news about new photoshop plugin or new low ISO achievements (which are basically new computation algorithms powered by new CPU) or leica cutting their jobs in favor of image manipulation team... Generally speaking I doubt these digital camera users are totally consistent in their opinion. Digital is all about easiness of manipulation even if it's a latent manipulation you don't see. Considering photography is a hunt in some sense, I would call autofocus an auto rifle aiming and image manipulation is like making a haircut to lion you've just killed.

p.s.
Ok, a small manipulation counts for cleaning him for bragging.

p.p.s.
Nothing against the guy in the topic or what he is drawing. Collage is as old as the world.

NancyP, you really can't compare Karcz with Jerry Uelsmann. (1) Uelsmann takes every photo he uses in his composites. They are all his. (2) Uelsmann's works are hand-crafted in the darkroom. That takes a tremendous amount of skill and talent. As a heavy Photoshop user myself, I am NOT saying that it doesn't take skill and talent to do photo manipulations in PS. It does. But it is a far cry from Uelsmann's manual craft.

It's like comparing a gourmet meal crafted by hand from scratch-made ingredients with a dish assembled with the help of automated kitchen tools and store-bought ingredients.

Link | Posted on Jun 28, 2019 at 02:50 UTC

What? The same product is offered to different customers at different prices? Based on what exactly? Huh, I am not a lawyer, so I don't know. If there are any lawyers here, maybe you can tell me whether this is legal or not.

Link | Posted on Jun 9, 2019 at 00:29 UTC as 14th comment
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

Richard, I've had more time to make a proper illustration to explain my points: https://i.imgur.com/6Mb3ifR.jpg

I know I am a little off topic right now because I'm talking about the "equivalent focal length" concept, but it is just a pet peeve of mine. I have seen so many people talking about how a lens on an APS-C camera will have a longer reach. No, it doesn't. It's just a misconception and an illusion. The "reach" of a lens is determined by the focal length of the lens, which doesn't change with the sensor size. Explanation: https://i.imgur.com/hUpif5I.jpg
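To put a number on "reach", here is a minimal sketch using the small-angle, distant-subject approximation; the subject height, distance, and focal length are made-up illustration values:

```python
def projected_size_mm(focal_length_mm, subject_size_m, distance_m):
    """Approximate size of a distant subject's image on the sensor:
    image size ~= focal length * (subject size / distance)."""
    return focal_length_mm * (subject_size_m / distance_m)

# Made-up example: a 1.8 m tall subject, 60 m away, 300 mm lens.
image_height = projected_size_mm(300, 1.8, 60)   # ~9 mm, on ANY sensor

for name, sensor_height_mm in [("Full frame (FX)", 24.0), ("APS-C (DX)", 15.6)]:
    print(f"{name}: subject is {image_height:.1f} mm tall on the sensor, "
          f"filling {image_height / sensor_height_mm:.0%} of the frame height")
```

The projected size, the actual "reach", is the same 9 mm either way; the smaller sensor just makes that 9 mm fill more of the frame, which is where the impression of extra reach comes from.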

Film people don't have this problem because film people don't see an enlarged image filling up an LCD screen. They look at the images on the different formats of film and understand what is really going on.

Link | Posted on Jun 1, 2019 at 15:29 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

Actually, with the diagram I was trying to illustrate how it is impossible for the light, after passing through the lens, to get bent inward to cover ONLY the smaller sensor area and nothing more. Oh well, it isn't a good illustration of what I want to say because I didn't spend a lot of time on it. Anyway, at this point I think I just have to agree to disagree.

Link | Posted on May 31, 2019 at 21:01 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

We are still not communicating. I edited my post above. See the better illustration here: https://i.imgur.com/rCtDFj0.jpg

Link | Posted on May 31, 2019 at 18:08 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

Look at the diagram again in that Panavision article. The "cone of light" projected by the lens won't change when a smaller sensor is placed there instead (because the focal distance won't change); in other words, the "cone of light" won't get bent further inward after the light has passed the lens. The calculation there only demonstrates how one would reverse-calculate the AOV assuming the sensor covers the entire width of the image circle. See the illustration here: https://i.imgur.com/rCtDFj0.jpg
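For reference, the reverse calculation those articles use is just triangle geometry, angle = 2 * arctan(d / (2f)); here is a minimal sketch (a 50 mm focal length is assumed purely for illustration) showing what you get depending on which width d you plug in:

```python
import math

def coverage_angle_deg(d_mm, focal_length_mm):
    """Angle subtended by a width d at the focal plane: 2 * atan(d / (2f))."""
    return math.degrees(2 * math.atan(d_mm / (2 * focal_length_mm)))

f = 50.0  # assumed focal length, for illustration only
print(coverage_angle_deg(43.3, f))  # full image-circle diameter -> ~46.8 deg
print(coverage_angle_deg(36.0, f))  # full-frame sensor width    -> ~39.6 deg
print(coverage_angle_deg(23.5, f))  # APS-C sensor width         -> ~26.5 deg
```

Whether the smaller-width results should be called an AOV or a FOV is exactly the terminology dispute in this thread; the formula itself doesn't care which dimension you feed it.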

Link | Posted on May 31, 2019 at 17:52 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

I edited my post while you were posting. New edit here:

The calculation you mentioned doesn't mean that the AOV is dependent on the sensor size. What it shows is how you may calculate the AOV when the sensor covers the entire diameter of the image circle. You really need to read the entire article instead of just one page.

Link | Posted on May 31, 2019 at 17:23 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

You are not getting what I am saying ... and IMO, you aren't getting the explanation from the 2 articles either ... The calculation you mentioned doesn't mean that the AOV is dependent on the sensor size. What it shows is how you may calculate the AOV when the sensor covers the entire diameter of the image circle. You really need to read the entire article instead of just one page.

Link | Posted on May 31, 2019 at 17:11 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

Correction: In the illustration linked above, the text should read "This gives the illusion of a ... " instead of "This gives the illustration of a ..."

This is the link for the corrected illustration: https://i.imgur.com/yx2zmVT.jpg

Link | Posted on May 31, 2019 at 15:48 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

The sensor size affects the FOV but not the AOV. Think about it this way: with a zoom lens, you get a wider AOV on the short end and a narrower AOV on the long end. You accomplish this by turning the zoom ring. When you turn that ring, the glass elements in the lens move and change position to achieve different focal lengths (hence different AOVs). This is the only way the focal length of a lens will change. For a prime lens, the glass elements don't move (except for focusing, which doesn't change the focal length). Clearly, the sensor won't cause the glass elements in a lens to move; hence, the sensor won't change the focal length of the lens.

The whole "equivalent" focal length thing is a misconception. It's just a "shortcut" to assist you to mentally "visualize" the FOV captured by different sizes sensors. Sensor don't change a len's AOV. See this illustration: https://i.imgur.com/7r4KExU.jpg
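To make the "shortcut" nature of that number concrete, here is a minimal sketch (nominal sensor diagonals assumed) that treats the so-called equivalent focal length purely as a framing bookkeeping number; nothing about the physical lens changes:

```python
FULL_FRAME_DIAG_MM = 43.3  # nominal full-frame diagonal, assumed

def equivalent_focal_length(actual_f_mm, sensor_diag_mm):
    """The full-frame focal length that would frame a similar FOV.
    A bookkeeping number only; the lens's real focal length is unchanged."""
    crop_factor = FULL_FRAME_DIAG_MM / sensor_diag_mm
    return actual_f_mm * crop_factor

print(equivalent_focal_length(50, 28.2))   # on APS-C: ~77 "equivalent" mm
print(equivalent_focal_length(50, 21.6))   # on Micro Four Thirds: ~100 "equivalent" mm
```

In both lines the lens is still a 50 mm lens; the "equivalent" number only describes the framing the crop produces.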

Link | Posted on May 31, 2019 at 15:19 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

You are confusing ANGLE of View (AOV) with FIELD of View (FOV). I understand that many people, including camera manufacturers, often use the 2 terms interchangeably when they really shouldn't. Read the 2 articles below to understand the difference:

http://pl.panavision.com/sites/default/files/docs/documentLibrary/2%20Sensor%20Size%20FOV%20l.pdf

https://books.google.com/books?id=Jr4YsaTrIBgC&pg=PA116&lpg=PA116&dq=AOV+angle+of+view&source=bl&ots=rt21Hb7DVu&sig=BWLclnvxzjbA2nciSsaM0pLaYoM&hl=en&sa=X&ei=5OT7U6OWG4K3iwLiqYDIBA&ved=0CEwQ6AEwBTgK#v=onepage&q=AOV%20angle%20of%20view&f=false

Link | Posted on May 31, 2019 at 15:18 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

[Continued from the last post]

With the same lens, in order for a DX sensor to capture the same FOV an FX sensor would capture, the DX camera needs to shoot from farther away. The inverse square law dictates that the light reaching the DX camera would be lower in intensity, and thus you need to adjust your exposure settings in order to get the same exposure as you would with an FX camera positioned at a closer distance. This happens only because you insist on getting the same FX FOV with a DX sensor, not because any light is "lost".
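Taking this comment's premise at face value (step back by the crop factor to match the FX framing, then apply point-source inverse-square falloff), the arithmetic it appeals to works out roughly like this; the 1.5x crop factor is the usual nominal DX value:

```python
import math

crop_factor = 1.5              # nominal DX-vs-FX crop factor, assumed
distance_ratio = crop_factor   # shoot from ~1.5x farther to match the FX framing

# Inverse-square falloff for a point source at the larger distance,
# which is the relationship the comment invokes.
intensity_ratio = 1 / distance_ratio ** 2          # ~0.44x
equivalent_stops = math.log2(distance_ratio ** 2)  # ~1.2 stops

print(f"Distance ratio to match framing: {distance_ratio:.2f}x")
print(f"Point-source intensity ratio:    {intensity_ratio:.2f}x (~{equivalent_stops:.1f} stops)")
```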

Link | Posted on May 30, 2019 at 21:18 UTC
On article The effect of pixel size on noise (145 comments in total)
In reply to:

Tom-Thomas: Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

You say, "If you use a (longer FL) lens to give you the same field of view as you're getting on the D7000, then you get to exploit that extra sensor region, and gain additional light/information about the scene, if you're shooting at the same F-number."

But now you are comparing 2 different images taken with 2 different FL lenses. Different FL means different ANGLE of View (AOV) and thus different Image Circle (IC) — or the "cone of light" in your way of explaining it. You are comparing apples with oranges.

With the same FL lens, the AOV remains the same no matter what size sensor is placed within the Image Circle (IC) produced by this lens. The AOV and thus the IC are not affected by sensor size, but the Field of View (FOV) — i.e., the portion of the IC a sensor covers — would be. [Continues in the next post].

Link | Posted on May 30, 2019 at 21:16 UTC
On article The effect of pixel size on noise (145 comments in total)

Your explanation baffles me. You say, "... the smaller sensor can only see the inner, bronze-colored cone of light, the rest of the original yellow cone of light is no longer available for capture. ... It's not condensed down onto the smaller sensor (if it were, then you'd still get the same field-of-view), it's just lost."

The light that falls outside of the smaller sensor, and is therefore not captured by it, isn't needed to form the image to begin with. That light isn't, as you say, "lost". It simply isn't needed.

Imagine a 6" sq stencil with a 3" sq cutout in the middle of it. You spray paint over this stencil to get a 3" sq image. Any paint that falls outside the 3" sq cutout doesn't matter to the final 3" sq image, because the areas outside the cutout aren't part of that image. And it doesn't matter to the final 3" sq image whether the 3" sq cutout sits in a 6" sq, 4" sq, or 50" sq stencil, or any size of stencil for that matter.

Link | Posted on May 27, 2019 at 18:55 UTC as 12th comment | 23 replies