sethmarshall
Well-known member
Under what circumstances in Photoshop do you convert to 16-bit or 32-bit mode? Since the default is 8-bit, are the higher modes reserved for images that actually contain more than 8 bits per channel? Since all JPEGs are 8-bit (correct?), is there any benefit to working on these images in Photoshop in 16-bit mode? And would that benefit remain if you save back to JPEG, or only if you keep the file as a 16-bit PSD, TIFF, or equivalent?
I'm unsure whether RAW photos from all cameras are higher than 8-bit, so I don't want to assume the benefits apply only to TIFF or RAW images. My Canon 5D Mark II claims: "Its 14-bit analog/digital conversion allows you to create stunning 16-bit TIFF images from RAW 14-bit data." But when I open images in ACR, the readout at the bottom always says "Adobe RGB (1998); 8 bit; 5616 by 3744 (21.0MP); 240ppi". At what point am I supposed to see that these RAW files really are 14-bit?
When it comes to HDR imaging, I've read that higher bit depth becomes necessary. I understand that more bits per channel allow an exponentially greater number of shades per channel (and therefore colors), but I'm a little confused about when to actually use it. Will someone please elaborate on the principles and practices of higher-bit imaging?
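For concreteness, here's the shades-per-channel arithmetic I'm referring to, as a quick sketch (my own illustration, just assuming 2^n integer levels per channel for the bit depths mentioned above; I know Photoshop's 32-bit mode is actually floating point, so it doesn't follow this formula):

```python
# Rough illustration: discrete levels per channel at common bit depths,
# assuming a simple integer encoding with 2**n levels.
for bits in (8, 14, 16):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:,} levels per channel, "
          f"{levels ** 3:,} possible RGB combinations")
```

By that math, 8-bit gives 256 levels per channel, 14-bit gives 16,384, and 16-bit gives 65,536, so a 14-bit RAW file would already have 64 times more tonal steps per channel than an 8-bit JPEG, which I assume is where the editing headroom comes from.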
Thank you