Where to set resolution during workflow?

Flycaster (Senior Member, Boynton Beach, FL, US)
I have just started using FastRawViewer to cull my RAWs (from the FZ1000); I then move the selected images to PhotoLab 4 Elite and finish off in Photoshop Elements 18. I've noticed that by the time the images get to PSE, they are at 72 pixels/inch resolution. I would like to have them at 260 pixels/inch. How do I accomplish this? Thanks.
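Worth noting up front: the ppi value is just a metadata tag, so changing it does not resample the image at all. In PSE this is typically done via Image > Resize > Image Size with "Resample Image" unchecked. A minimal sketch of the idea, using Python and Pillow (not part of the workflow above; the file sizes and numbers are illustrative):

```python
from io import BytesIO
from PIL import Image  # assumes Pillow is installed

# Stand-in for a camera JPEG: 4000x3000 pixels, tagged 72 ppi.
buf72 = BytesIO()
Image.new("RGB", (4000, 3000)).save(buf72, format="JPEG", dpi=(72, 72))

# "Change the resolution" by rewriting only the metadata tag.
# No resampling happens: the pixel grid stays exactly the same.
buf260 = BytesIO()
Image.open(buf72).save(buf260, format="JPEG", dpi=(260, 260))

out = Image.open(buf260)
print(out.info["dpi"])  # (260, 260) -- the new tag
print(out.size)         # (4000, 3000) -- pixels unchanged
```

The tag only matters when something downstream (a printer driver, a layout program) converts pixels to inches; the image data itself is identical before and after.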
 
Late to this party, but I recently wrote a blog post attempting to clear up the PPI/DPI question and maybe put a stake through its heart:

https://jimhphoto.com/index.php/2020/12/21/dpi-a-concept-you-should-forget/
This isn't quite right:

"A photo from a camera has just pixels. That’s all – no physical size and therefore, no inches. It has pixel numbers, like 6000×4000, depending on its sensor size, but the image has no physical size. So it can’t have a DPI or PPI value. It just can’t."

Before demosaicking it's a Bayer raster, so one can say those are dots, like the dots in a CMYK raster. One can also argue that the image does have a physical size: the size of the sensor's image area. That makes a dpi metric valid for a raw image, and a ppi metric valid for demosaicked images.
You're really clutching at straws with that scenario.

A camera produces raw data. Nothing more. It's up to the software that data is handed to to produce something resembling an image on your camera's or computer's monitor, and even that image doesn't have a "size."
The reason size makes sense is that, for example, the effect of diffraction, as well as artifacts that have fixed real-world dimensions, scales with size. Imagine a 0.1 mm dust speck on m43 and FF sensors with the same pixel counts: if the images are printed at the same size, the magnification differs, and the speck will look larger on the print from m43.
The physical speck that lands on your sensor may well have measurable dimensions, but that same speck, as it appears in an image, is no longer that measurable speck but a representation of it, part of the raw data I mentioned above.
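The magnification argument above can be put in numbers. A rough sketch (the assumed sensor widths of ~17.3 mm for Four Thirds and 36 mm for full frame, the pixel count, and the print size are all illustrative):

```python
# Worked version of the 0.1 mm dust-speck example (all numbers illustrative).
# Assumed sensor widths: Four Thirds ~17.3 mm, full frame 36 mm.
pixels_across = 5184      # same pixel count on both sensors
speck_mm = 0.1            # physical size of the speck on the sensor
print_width_in = 12.0     # both images printed at the same width

for name, sensor_mm in [("m43", 17.3), ("FF", 36.0)]:
    speck_px = speck_mm / sensor_mm * pixels_across    # speck size in pixels
    magnification = print_width_in * 25.4 / sensor_mm  # sensor -> print scale
    print(f"{name}: speck covers {speck_px:.1f} px, "
          f"{speck_mm * magnification:.2f} mm on the print")

# Same speck, same pixel count, same print size: the speck comes out
# about 36/17.3 ~ 2.1x larger on the m43 print.
```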


"It's good to be . . . . . . . . . Me!"
 
You're really clutching at straws with that scenario.
ROTFLMAO
 
I don't know exactly who Mr. Borg is, but he rarely, if ever, calls it wrong.
 
the metrics of a sensor chip have nothing to do with the DPI tag in an image file which is, for the OP's purpose, absolutely meaningless.
You've misunderstood me.

I was referring to "the image has no physical size".

My point is this: how do we apply vignetting, angular response, and other mathematical/physical corrections for optical effects without knowing the geometry of the sensor and a rule for calculating the (polar) coordinates of a pixel? And what do we need to calculate those coordinates, if not a notion (explicit or implicit) of pixel density / physical image size?

--
http://www.libraw.org/
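To illustrate the point about needing physical coordinates, here is a sketch only: the cos^4 falloff model, the sensor geometry, and the pupil distance are all illustrative numbers, not how any particular raw converter works.

```python
import math

# Sketch: pixel index -> physical (polar) coordinates -> optical correction.
# Assumed numbers: 36x24 mm sensor, 6000x4000 pixels, so a 6 um pitch;
# the cos^4 falloff model and the 50 mm pupil distance are illustrative.
PITCH_MM = 36.0 / 6000
PUPIL_MM = 50.0

def cos4_gain(x_px, y_px, w=6000, h=4000):
    # Without a pixel pitch there is no way to get from pixel indices
    # to the physical coordinates this correction is defined in.
    dx_mm = (x_px - w / 2) * PITCH_MM
    dy_mm = (y_px - h / 2) * PITCH_MM
    r_mm = math.hypot(dx_mm, dy_mm)       # radial distance from center
    theta = math.atan(r_mm / PUPIL_MM)    # off-axis angle
    return 1.0 / math.cos(theta) ** 4     # inverse of cos^4 falloff

print(cos4_gain(3000, 2000))  # 1.0 at the center
print(cos4_gain(0, 0))        # ~1.41 in the corner: needs brightening
```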
 
Iliah, I agree with you 100%. It just has nothing to do with the OP's question about the DPI tag in an image, and diverts this thread to a completely different place than the OP was asking about. Forum threads do that a lot, and I am not complaining. I simply wanted to make that clear to the OP.
 
The link that was suggested to OP contained something that I wanted to address.
 
I could make a case that "pixel" and "dot" are names for visual attributes, meaningless outside the context of a human observer.

Let's say that, internal to the camera, there's significance to the physical dimensions of the sensor and the density, and even the two-dimensional area, of its "cells".

--
www.jimhphoto.com
 
Although my posting has led to further discussion that is beyond my ken or needs, I have no problem with that, as I got the responses I was looking for. And you guys got to discuss stuff at a higher level. So, all is well...
 
