A pixel is not a little square, revisited

I see way too many oversharpened images on DPR, and this may be one of the reasons.
I get that, and I dislike oversharpened images as well. Sharpening is something that needs to be done either in the file, in the printing process, or at both stages, but with the right amount, and that amount is a function of lens, camera, and process.

Maybe there are other things I did not think of ...

Recently I had been reading about Hawking radiation, and I did not get it until I found a YouTube video that explained the dilemma involving the first and second laws of thermodynamics and what it would imply if the paradox were real. In a nutshell, Hawking resolved a dilemma that would otherwise have pointed to a universe where fundamental things are wrong, in which case we would have to suspect that nothing is explainable, or at least that a lot of interpretations are questionable.

Sharpening is a matter of HONESTY with yourself - you can't expect people to compete with others and not overdo it - it is probably mostly a social media phenomenon.

Thanks for your explanation - I did not get the intention at first. I sharpen moderately in LrC and again at printing - it is very visible if you overdo it. But most people don't print at home, and most don't print (very) large at all.
 
A pixel is not a little colored tile. It is a number ... that represents a sample taken from an underlying continuous image. [...]

A better way to think of pixels is as infinitesimal points on a smooth landscape.
Good post Jim, true of both the input and output chains. When the continuous image on the sensing plane is digitized we can envision pixels as infinitesimal points, as opposed to little squares.
TBH this discussion of how we envisage it is a lot less helpful than Jim's post "consequences of this confusion"

As I said above there is the "rain falling into buckets" analogy. The bucket has an area - like a photoreceptor - in which raindrops (photons) land. We are interested in the amount of water (charge) resulting from the rainfall (light). We can pour the water into a measuring vessel or we could use a dipstick - it doesn't much matter which, but we get one number per bucket. Ditto we get one number per receptor.
That's pretty good James, though probably a better metaphor would be a leaky, uneven cheese cloth funneling rain to a particular spot, where a flow meter sits.

However, the point of my post was the low-pass function that pixel aperture applies to the pre-capture image, which is then complemented by picking the intensity of the so-smoothed image at the center of gravity of each pixel aperture with a grid of infinitesimal delta functions.

The insight here is that, absent further processing, the best we can aim for is the reconstruction of the image smoothed by pixel aperture - because all we have is a grid of evenly spaced point measurements of it (as opposed to lots of little buckets in equivalent positions in the pre-capture image).

This ties into Jim's post showing the effect of different reconstruction kernels on point samples. Either way, no squares or buckets, just points.
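To make that concrete, here is a minimal 1-D sketch in Python (numpy/scipy assumed; signal, pitch, and fill factor all made up): a box "pixel aperture" low-pass, point sampling at the aperture centers, and reconstruction from the samples. Whatever kernel is used, the target one can hope to recover is the aperture-smoothed image, not the pre-capture one.

```python
# Minimal 1-D sketch (numpy/scipy assumed, everything made up): box "pixel aperture"
# low-pass, point sampling at the aperture centers, reconstruction from the samples.
import numpy as np
from scipy.ndimage import uniform_filter1d

x = np.linspace(0, 1, 10_000)                          # fine grid standing in for the continuous image
scene = np.sin(2*np.pi*12*x) + 0.3*np.sin(2*np.pi*47*x)

pitch = 100                                            # fine-grid points per pixel pitch
smoothed = uniform_filter1d(scene, size=pitch)         # low-pass by a 100% fill-factor box aperture

centers = np.arange(pitch // 2, scene.size, pitch)     # delta comb at the aperture centers of gravity
samples = smoothed[centers]                            # one number per pixel: the raw point samples

# Reconstruct from the point samples (linear here; nearest, cubic, or sinc plug in the same way).
recon = np.interp(np.arange(scene.size), centers, samples)

print("RMS vs aperture-smoothed image:", np.sqrt(np.mean((recon - smoothed)**2)))
print("RMS vs pre-capture image:      ", np.sqrt(np.mean((recon - scene)**2)))
```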

Jack
 
As I said above there is the "rain falling into buckets" analogy. The bucket has an area - like a photoreceptor - in which raindrops (photons) land. We are interested in the amount of water (charge) resulting from the rainfall (light). We can pour the water into a measuring vessel or we could use a dipstick - it doesn't much matter which, but we get one number per bucket. Ditto we get one number per receptor.

NOW: some people will think of "Pixel" as the bucket, a little square (or circle or any shape we like); we say a sensor is "50 megapixels" because it is divided into 50 million little squares. IF we stop there, we only have instructions for building a mosaic when we come to turn those readings back into something we can see.
On the other hand, IF we say 50 million pixels is 50 million samples ... it opens the door to more intelligent processing into ink dots or screen dots. If we only ever think of them as instructions for placing 50 million tesserae, it leads to doing daft things during processing.
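A minimal sketch (Python with numpy/scipy, synthetic data) of those two readings of "50 million pixels": each bucket yields one number, and that number can either be replicated as a tessera or treated as a sample to be reconstructed.

```python
# Minimal sketch (numpy/scipy assumed, synthetic data): one number per bucket,
# then either replicated as tesserae or reconstructed as samples.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
fine = rng.random((64, 64))                   # stand-in for the light landing on the sensor plane

# "Rain into buckets": block-average each 8x8 area -> one number per bucket.
buckets = fine.reshape(8, 8, 8, 8).mean(axis=(1, 3))

mosaic = np.kron(buckets, np.ones((8, 8)))    # tesserae: paint each value as a little square
smooth = zoom(buckets, 8, order=3)            # samples: cubic-spline reconstruction to the same size

print(mosaic.shape, smooth.shape)             # both (64, 64); only the reconstruction differs
```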
The "pixel is just a value, NO, REALLY, DON'T YOU UNDERSTAND THAT THE PIXEL IS JUST A VALUE???" thing feels a bit like an elaborate piece of sophistry whose purpose is to demonstrate that photodiodes - physical "pixels" - don't actually exist.
As a 22-year veteran of DPR who recently went medium format and had the audacity to show his face in this forum, I'm surprised how many "elaborate pieces of sophistry" there are here, which try to prove... I don't know what, other than the cleverness of those who put them forward.

James O'Neill wrote:
TBH this discussion of how we envisage it is a lot less helpful than Jim's post "consequences of this confusion"
The implication of that whole discussion is that the observer literally cannot see what is in front of his own eyes, that he is oblivious to the way the final physical output actually looks. Only if one perceives that something isn't right with the output does the question arise of how best to tune the on-screen processing analog to facilitate achieving optimal output in the chosen physical medium at the desired scale.
Indeed.

 
I think actually it does show that "Pixel" is a vague term. Is it a photodiode, or a value, or a dot on a display, or a spot of ink? "Pixelated" and "Pixel peeping" and colloquial uses of the term may be correct. But then what?
Not vague. Ambiguous. There are a few common usages of the word, as you note. Jim is clarifying the technical usage, which is important to know if you want to understand how digital systems actually work, and if you want to avoid some common mistakes.

The other usages are fine. Just be clear about which one you're using. Language is full of these "overdetermined" words. Careful speaking and writing helps us survive them.
 
Indeed. As I've explained in this thread, I'm talking here about what the values in the file mean, and how to properly reconstruct an image. I should have made that clearer in the initial post.

That phenomenon is called polysemy: the coexistence of multiple related meanings for a single word.

In the case of pixel, all the meanings share a common conceptual core: the idea of a small element in an image. But the referent changes depending on context: numeric sample, sensor element, or display emitter. That makes pixel polysemous, not homonymous (homonyms are words with entirely unrelated meanings, like bat the animal and bat used in baseball).

You could also describe the situation more generally as terminological overloading, a term often used in technical or programming contexts when one label is applied to distinct but related entities.

In a digital file, a pixel is simply a number, or a small set of numbers, that records the result of sampling a continuous image. It has no physical size or shape. The pixel in this sense is an abstraction: an address in an array containing values that describe color or intensity at that location. It’s a data sample, not a dot or a square.

On a camera sensor, the word refers to something entirely physical. A pixel there is a photosite, a tiny light-sensitive element that converts photons into electrical charge. Each photosite responds to a limited region of the image, and in a color-filter array sensor it records only one color component. These sensor pixels define the sampling density of the captured image, but they’re not yet colors in the photographic sense. They’re measurements.

On a display, a pixel is again physical, but now it’s an emitter rather than a sensor. It’s the smallest addressable element that produces light on the screen, typically composed of red, green, and blue subpixels. Together, these emitters reproduce the color values derived from the file’s digital pixels as mediated by a reconstruction algorithm. Their spacing and size determine the screen’s resolution and how finely the image can be rendered.
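A tiny illustration of the first sense (Python/numpy assumed, values made up): in the file, a pixel is nothing but numbers at an array address.

```python
# Minimal sketch (numpy assumed): in the file sense, a pixel is just numbers at an
# array address -- it has no size or shape until something reconstructs it.
import numpy as np

image = np.zeros((2000, 3000, 3), dtype=np.uint16)   # rows, columns, RGB samples
image[1000, 1500] = (52000, 31000, 9000)             # "the pixel at (1000, 1500)" is this triple

print(image[1000, 1500])   # -> [52000 31000  9000]; geometry only appears when a display
                           #    or printer turns these samples back into light or ink
```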

--
https://blog.kasson.com
 
I enjoyed the article, even if I found it a bit elliptical. The question that needs to be answered is “so what”?
 
I enjoyed the article, even if I found it a bit elliptical.
Want to take a crack at a more linear version?
The question that needs to be answered is “so what”?
Reprinted from above.
One of the more misleading habits encouraged by modern image editors is the belief that the pixel-level view, the grid of colored squares you see when you zoom in to 100 percent or beyond, represents something physically meaningful or visually optimal. It does not. That view is a diagnostic aid, not an aesthetic or technical goal. The colored squares are artifacts of the display system’s nearest-neighbor reconstruction, not the actual structure of the image. Judging an image at that level tempts photographers to fix problems that do not exist in the continuous image and to pursue spurious sharpness that will not survive proper resampling or printing.

The most common result is oversharpening. When the viewer sees slightly soft or blended transitions between those squares, the natural impulse is to increase sharpening until each boundary looks crisp. But those boundaries are not features of the image. They are artifacts of the display. Sharpening adjusted to make the pixel grid look snappy exaggerates high-frequency components that lie above the sensor’s sampling limit, creating halos, false textures, and brittle-looking detail. These effects may look dramatic when magnified on screen, but they degrade the photograph when viewed at its intended size.

Another problem is that the pixel grid’s hard-edged representation exaggerates noise and demosaic artifacts, encouraging needless denoising or selective blur. The photographer ends up optimizing the image for the wrong domain, the screen’s grid of samples, rather than for the continuous image that will eventually be reconstructed, resized, and printed. The cure is to evaluate images at an appropriate viewing scale, ideally one that matches the intended print or display size and resolution. Only then do sharpening, noise reduction, and tonal adjustments correspond to what will be seen in the finished photograph, not to the misleading staircase pattern of a zoomed-in pixel view.
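To see the "colored squares are a display artifact" point in code, here is a minimal sketch (Pillow and numpy assumed, synthetic samples): the same data enlarged with nearest-neighbor, which is roughly what a pixel-level zoom shows, and with a Lanczos filter, which is closer to a proper reconstruction.

```python
# Minimal sketch (Pillow + numpy assumed, synthetic data): the blocky view is a
# property of nearest-neighbor reconstruction, not of the samples themselves.
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)
samples = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in for image samples
im = Image.fromarray(samples, mode="L")

blocky = im.resize((256, 256), Image.NEAREST)   # roughly what a "pixel view" zoom shows
smooth = im.resize((256, 256), Image.LANCZOS)   # a better reconstruction of the same samples

blocky.save("zoom_nearest.png")
smooth.save("zoom_lanczos.png")
```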
 
This is my favourite bot’s version:

We're all familiar with the idea of a pixel. Zoom in far enough on any digital image, and you'll see it—the building block of the digital world, a tiny, colored square. This mental model is simple and intuitive, but it's also wrong. And understanding why it's wrong is the key to understanding digital imaging.
A pixel is not a little square. A pixel is a sample measurement.
Think of a digital image as a recording of a real-world scene. Just as a microphone samples a sound wave at various points in time, a digital camera's sensor samples a scene at various points in space. Each of these samples is a pixel, a measurement of the light and color at a single point. The "squares" you see are just a convenient way for software to display these samples.
So why does this distinction matter?
Because if you think of pixels as squares, you'll wrongly judge image quality by the sharpness of those squares. But if you think of pixels as samples, you'll understand that image quality is determined by how well the scene has been sampled.
If the sampling is too coarse (the pixels are too far apart), you get aliasing. This is the cause of digital artifacts like moiré patterns (the strange, wavy lines you see on patterned fabrics) and jagged edges. Image processing techniques like sharpening and scaling are forms of reconstruction, where the software tries to guess what's in the gaps between the samples. The better the original sampling, the better the reconstruction, and the better the final image.
This is especially important when it comes to printing and displaying images. Different software (like Photoshop, Lightroom, or QImage) have different algorithms for resampling images, and their effectiveness can vary greatly. Understanding that a pixel is a sample will help you make better decisions about how to capture, process, and display your images, leading to more realistic and higher-quality results.
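A minimal sketch of that aliasing point (numpy assumed, frequencies made up): sampled too coarsely, a fine pattern becomes indistinguishable from a much coarser one.

```python
# Minimal sketch (numpy assumed): sampling a 9-cycle pattern with only 10 samples
# makes it indistinguishable from a 1-cycle pattern -- that masquerade is aliasing.
import numpy as np

n = 10                                   # samples across the frame: far too coarse
t = np.arange(n) / n
fine_detail = np.cos(2 * np.pi * 9 * t)  # 9 cycles, above the Nyquist limit of n/2 = 5
alias = np.cos(2 * np.pi * 1 * t)        # 1 cycle, the alias folded down below Nyquist

print(np.allclose(fine_detail, alias))   # True: the samples cannot tell them apart
```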
 
I tried to explain this a bit better here:
https://www.strollswithmydog.com/sampling-in-imaging/
Good job, Jack.
 
Here's Alvy Ray Smith's paper for those who haven't seen it:

https://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

I'm going to push back just a bit. At the level of the physical structure of our sampling device - the image sensor in our cameras - the pixel *is* a rectangle, not a dimensionless point. (If it were a dimensionless point, it wouldn't be able to record anything.)
That's a good viewpoint for folks to hear, but I'm gonna haggle a bit about the terminology used...

First off, the thing that does the sampling or capture is called a sensel, and yeah, they are often square. Then again, the microlens that typically sits over each one sort of isn't. What's more, even in a monochrome sensor, there are light-insensitive gaps between sensels (the active area percentage is called the fill factor) and there is usually an AA (antialias) filter that tries to bring that sampling process closer to meeting Nyquist constraints... In other words, even the sampling process is complicated.

Jim's assertion that a pixel refers to the sampled value is certainly not wrong, but the term is ambiguous because it also is used to describe the mechanism used to render a sampled value. Display pixels or output pixels can have a range of strange properties, especially in color displays. For example, each color LCD pixel typically consists of Red, Green, and Blue stripes within a square area -- and the stripes are typically vertical so that the finer horizontal structures in text can be rendered more precisely by subpixel rendering (e.g., what Microsoft falsely claimed to have invented when they introduced "ClearType"). However, a pixel in a JPEG file is arguably even stranger, undergoing colorspace transformations and frequency-domain compression across samples; remember that JPEG compression is based on human perception, so output to a JPEG file really is a rendering process.

Personally, I usually use sensel for the measuring device samples, and pixel for the unit of output render processing. When I'm talking about the data, which is what us computer types would call elements of a matrix or an array, I use any of several terms that more specifically describe how the data has been processed. Minimally-processed samples constitute raw data. However, lots of modern systems perform extensive processing of samples, for example using spatio-temporal interpolation to create a scene appearance model (or less formally image data). That is my preferred term for data that has been significantly transformed, but not yet rendered for viewing. Scene appearance models can be created by processes as complicated as sub-pixel alignment and weighted averaging of multiple raw captures (stitching) or as simple as CFA (color filter array) demosaicking. Certainly correcting "bad pixel" samples, color bias, and lens issues like vignetting and distortion are not 1:1 mappings from sensel samples...

In sum, I would describe the flow as:

[flow diagram]
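As a rough illustration of those stages in code (rawpy assumed; the file name is hypothetical, and real pipelines do far more), one possible sketch:

```python
# Minimal sketch (rawpy assumed; "capture.dng" is a hypothetical file): sensel
# samples -> raw data -> demosaicked image data -> pixels rendered for output.
import rawpy

with rawpy.imread("capture.dng") as raw:
    sensel_samples = raw.raw_image.copy()     # one number per sensel, still behind the CFA
    image_data = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
    # ^ demosaicking, white balance, etc. turn the samples into something like a
    #   scene appearance model (image data), not yet rendered for viewing

# Rendering those values to a screen or printer (tone curve, resampling, subpixels)
# is a further step, and only there do "pixels" acquire a physical size and shape.
print(sensel_samples.shape, image_data.shape)
```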
 
Jim's assertion that a pixel refers to the sampled value is certainly not wrong, but the term is ambiguous because it also is used to describe the mechanism used to render a sampled value. Display pixels or output pixels can have a range of strange properties, especially in color displays. For example, each color LCD pixel typically consists of Red, Green, and Blue stripes within a square area -- and the stripes are typically vertical so that the finer horizontal structures in text can be rendered more precisely by subpixel rendering (e.g., what Microsoft falsely claimed to have invented when they introduced "ClearType"). However, a pixel in a JPEG file is arguably even stranger, undergoing colorspace transformations and frequency-domain compression across samples; remember that JPEG compression is based on human perception, so output to a JPEG file really is a rendering process.
I don't think the takeaway is that someone is wrong to use these other definitions.

But it's important to understand what a pixel means in the mathematical / sampling theory sense, so you don't make bad assumptions when working with digital files. For example, you should know that the square boxes you see when you zoom in on a Photoshop file are a display rendering artifact, and not a predictor of how pixels will be rendered by a printer.
 
Personally, I usually use sensel for the measuring device samples, and pixel for the unit of output render processing. [...]
Makes, ahem, sense, and that's what I did until about 10 years ago when I was corrected by Eric Fossum.

For now, I'm happy with polysemy.

--
 
Makes, ahem, sense, and that's what I did until about 10 years ago when I was corrected by Eric Fossum.

For now, I'm happy with polysemy.
Fair enough. ;-)

There are a lot of folks at the Electronic Imaging conferences who insist on calling whichever one of the above things they are working with "pixels". I think the problem is that not so many people -- even highly technical people -- regularly deal with the full breadth of what has been labeled as pixels. Almost everybody works in a subdomain in which there is only one common definition of a pixel, and they tend to forget that the other layers work significantly differently.

Eric primarily works on sensors, for which he is the leading authority (and you're not the only one he has corrected on some things ;-) ). However, you and I regularly touch more pieces of the pipeline, so the ambiguity is more problematic for us. Anyway, a lot of raw data doesn't look like an array of pixels, from so-called single-pixel imagers to the data stream from an event sensor or QIS Jot, so for now I'll keep up the good fight on the terminology...
 
ProfHankD wrote: [...] However, you and I regularly touch more pieces of the pipeline, so the ambiguity is more problematic for us. Anyway, a lot of raw data doesn't look like an array of pixels, from so-called single-pixel imagers to the data stream from an event sensor or QIS Jot, so for now I'll keep up the good fight on the terminology...
Point well taken, though since the scientists that work with imaging sensors, not to mention their literature, mostly call the sensing elements pixels, I think we should stick with that around here for consistency:

Pixels -> Raw Data -> resampling/processing -> Pixels.

So in a photography context the raw data represents point samples of the optical image on the sensing plane smoothed by (convolved with) ... pixel aperture, taken at the center of each ... pixel. From which under certain circumstances one can obtain a perfect reconstruction of the continuous convolved image:

[Figure: noiseless intensity profile of two stars separated by the Rayleigh criterion, sampled at the Nyquist rate. From the earlier link.]

Thereby removing the need for one-to-one correspondence with the original pixels in processing.
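A minimal numeric sketch of that claim (numpy assumed; 1-D, periodic, noiseless, band-limited by construction): ideal band-limited interpolation of Nyquist-rate point samples, done here by zero-padding the spectrum, reproduces the continuous signal at positions that have no one-to-one correspondence with the original samples.

```python
# Minimal sketch (numpy assumed): band-limited interpolation of evenly spaced point
# samples taken above the Nyquist rate recovers the continuous periodic signal.
import numpy as np

N = 32                                                         # samples per period, > 2 * highest harmonic
n = np.arange(N)
samples = np.sin(2*np.pi*3*n/N) + 0.5*np.cos(2*np.pi*7*n/N)    # harmonics 3 and 7 < N/2

# Reconstruct on a 16x finer grid by zero-padding the spectrum (periodic sinc interpolation).
M = 16 * N
spec = np.fft.rfft(samples)
fine = np.fft.irfft(spec, n=M) * (M / N)

x = np.arange(M) / M
truth = np.sin(2*np.pi*3*x) + 0.5*np.cos(2*np.pi*7*x)
print(np.max(np.abs(fine - truth)))        # ~1e-14: exact up to rounding
```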

Jack
 
ProfHankD wrote: [...] However, you and I regularly touch more pieces of the pipeline, so the ambiguity is more problematic for us. Anyway, a lot of raw data doesn't look like an array of pixels, from so-called single-pixel imagers to the data stream from an event sensor or QIS Jot, so for now I'll keep up the good fight on the terminology...
Point well taken, though since the scientists that work with imaging sensors, not to mention their literature, mostly call the sensing elements pixels, I think we should stick with that around here for consistency:

Pixels -> Raw Data -> resampling/processing -> Pixels.
My complaint is that even at Electronic Imaging people essentially say:

Pixels -> Pixels -> Pixels -> Pixels

which I find about as satisfying as the "turtles all the way down" explanation of how the planet Earth is supported in space. ;-)

Incidentally, this confusion has caused real harm. I've found that a lot of researchers are using pixels of the wrong type. For example, a lot of people using the OpenCV library don't realize that the JPEG they are reading in has an unspecified gamma; they assume it is linear (raw).
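A minimal sketch of the step such code usually skips (OpenCV and numpy assumed; the file name is hypothetical, and this assumes the common case of an sRGB-encoded JPEG):

```python
# Minimal sketch (cv2 + numpy assumed; "photo.jpg" is a hypothetical file):
# JPEG pixel values are typically sRGB-encoded, not linear, so undo the curve first.
import cv2
import numpy as np

bgr = cv2.imread("photo.jpg")                     # uint8, sRGB-encoded, BGR channel order
srgb = bgr[..., ::-1].astype(np.float64) / 255.0  # to RGB in [0, 1]

# Inverse of the sRGB transfer function (IEC 61966-2-1).
linear = np.where(srgb <= 0.04045,
                  srgb / 12.92,
                  ((srgb + 0.055) / 1.055) ** 2.4)

# Only `linear` is (approximately) proportional to scene radiance; averaging, MTF
# measurements, photometric comparisons, etc. belong on it, not on the encoded values.
print(linear.min(), linear.max())
```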
So in a photography context the raw data represents point samples of the optical image on the sensing plane smoothed by (convolved with) ... pixel aperture, taken at the center of each ... pixel.
In fact, not always so. "Single pixel" imaging samples the sum of a random group of "pixel sites" in each reading and then solves the resulting system of linear equations for the individual contributions. The scary part is that the set of equations isn't large enough to solve directly, so the equations are used as constraints, combined with "priors", to synthesize the image data.
From which under certain circumstances one can obtain a perfect reconstruction of the continuous convolved image:

[Figure: noiseless intensity profile of two stars separated by the Rayleigh criterion, sampled at the Nyquist rate. From the earlier link.]

Thereby losing the need for one-to-one correspondence with the original pixels in processing.
That is how Nyquist sampling works with a correctly-sampled array. Unfortunately, most sensels don't exactly sample the right stuff for Nyquist to apply, so Sinc reconstruction is only approximately right.

Then again, we're estimating scene appearance using photons as the sampling mechanism, and photon emission rates are stochastic as is photon absorption at the detector (i.e., it is all noisy). Every time I try to get very precise about things I find that there is some aspect that cannot be precisely known. ;-)
 
Every time I try to get very precise about things I find that there is some aspect that cannot be precisely known. ;-)
Right? Heisenberg would be proud.
 
Hi,

So certain, are you? :P

My thinking regarding all this is: Good Enough for Government Work. Given it all came about for Federal Sector needs. My initial introduction to Kodak Digital back in the early 1980s was through a Federal Sector project which they and IBM were both a part of.

So, call it a Pixel or a Sensel, doesn't really matter. Heck, I call them Photodiodes. Just don't think the structure on the sensor is what that screen shows it as being.

Stan
 
That is how Nyquist sampling works with a correctly-sampled array. Unfortunately, most sensels don't exactly sample the right stuff for Nyquist to apply, so Sinc reconstruction is only approximately right.
The sinc reconstruction assumes ideal point sampling preceded by a perfect low-pass filter with a sharp cutoff at the Nyquist frequency. If your actual sampling aperture deviates from that, you can first perform an ideal sinc reconstruction to bring the sampled data back into a continuous domain, then apply a deconvolution filter matched to the aperture’s MTF.

Conceptually, the sinc step restores the sampled signal under the assumption of ideal sampling, and the deconvolution step then compensates for the real-world prefiltering that occurred because of the pixel aperture and optics. The two together approximate the theoretical inverse of the system’s combined MTF, limited to frequencies below Nyquist to avoid alias amplification.

So while a single, globally optimal filter could in theory incorporate both effects at once, the two-step “sinc then deconvolve” approach is usually a good and stable practical approximation.
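A minimal 1-D sketch of that two-step idea (numpy assumed; signal, pitch, and regularization all made up): work with the spectrum of the samples, which is only trustworthy below Nyquist, and divide out the box-aperture MTF with a small regularizer so the high-frequency boost stays bounded.

```python
# Minimal 1-D sketch (numpy assumed): "sinc then deconvolve" -- use the spectrum of
# the samples (valid below Nyquist) and divide out the pixel-aperture MTF.
import numpy as np

N, pitch = 256, 1.0                        # number of samples and pixel pitch (arbitrary units)
n = np.arange(N)
samples = np.sin(2*np.pi*5*n/N) + 0.3*np.sin(2*np.pi*40*n/N)   # stand-in sampled data

f = np.fft.rfftfreq(N, d=pitch)            # cycles per unit, 0 .. Nyquist (0.5)
aperture_mtf = np.abs(np.sinc(f * pitch))  # MTF of a 100% fill-factor box aperture

eps = 1e-3                                 # regularization: don't blow up where the MTF is small
spec = np.fft.rfft(samples)
deconvolved = np.fft.irfft(spec * aperture_mtf / (aperture_mtf**2 + eps), n=N)

# `deconvolved` approximates the pre-aperture signal below Nyquist; above Nyquist
# nothing can be recovered, which is why the boost is confined to these bins.
print(deconvolved.shape)
```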
 
Yeah, I think ProfHank did not understand what I wrote, or I did not understand what he wrote:

in photography the raw data is always a record of point samples of a band limited continuous image, so Nyquist-Shannon always potentially applies (see the link upthread Prof, if you have questions about this I am happy to entertain them in the PST). Under certain stringent conditions the continuous image can be reconstructed virtually perfectly, as shown in the earlier ideal two-star example captured by a 100% Fill Factor square pixel.

It's just that the continuous image is not the optical image projected by the lens onto the sensor: the reconstructed continuous image is the optical image convolved with the pixel aperture function (in the example a square), as mentioned previously.

If we then apply deconvolution in PS or RawTherapee, typically RL with a Gaussian kernel, we are imperfectly undoing some of that, as well as diffraction, aberrations, etc. So we end up with something that is in between the optical and the geometrical image, including errors.
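For reference, a bare-bones sketch of RL deconvolution with a Gaussian kernel (numpy/scipy assumed; blur width, kernel size, and iteration count all made up), in the spirit of what those tools do:

```python
# Minimal sketch (numpy + scipy assumed): Richardson-Lucy deconvolution with a
# Gaussian PSF standing in for lumped aperture/diffraction/aberration blur.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return g / g.sum()

def richardson_lucy(observed, psf, iters=30):
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

rng = np.random.default_rng(2)
truth = rng.random((128, 128))                       # stand-in for the "geometrical" image
psf = gaussian_psf(sigma=1.2)
observed = fftconvolve(truth, psf, mode="same")      # lumped blur applied to it

restored = richardson_lucy(observed, psf, iters=30)  # imperfectly undoes the lumped blur
print("MSE blurred :", np.mean((observed - truth)**2))
print("MSE restored:", np.mean((restored - truth)**2))   # typically lower in this noiseless toy case
```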

Jack
 