pixel density - camera companies fool the people

2) if the pixels are not fully activated by photons, images won't be
created fully. All the pixels should be activated by photons to
create a proper output. What more explanation is needed?
You could explain where you got that fictitious model of sensors from.

All pixels are equally activated. Some collect more photons than others. When they don't collect any, that's because they're not supposed to.

--
John

 
I think I have to use your sentence only:
"photosite produces a signal that is commensurate with the amount of
light that was collected during exposure" ---> this is what I meant
by fully activated. Whatever is required should be received by the pixels.
Nothing is required. Your model is fictitious.
If the pixels are tightly packed, some pixels won't get enough photons,
which are required. So this can lead to underexposure or not the desired
output.
Not at all.

Imagine that you had a handful of beads. You toss them onto a square box with 9 smaller squares within it. You get beads in every smaller box. Now imagine that you made the same toss, but the box, while the same size, had 16 partitions instead of 9. This time, one of the partitions didn't get any beads.

That doesn't matter, because now we have more accurate information about the shape of the bead-toss. The empty partition should be empty.
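If anyone wants to play with the analogy numerically, here's a minimal numpy sketch (the bead count and grid sizes are just numbers I picked): the same random toss is binned into a 3x3 grid and a 4x4 grid. The totals are identical; the finer grid simply records the toss with more spatial detail, and an empty cell is information, not a defect.

```python
# Bead-toss analogy: same toss, coarse vs fine partitioning.
import numpy as np

rng = np.random.default_rng(0)
beads = rng.uniform(0.0, 1.0, size=(20, 2))   # 20 beads tossed onto a unit square

# Bin the very same toss into 3x3 and 4x4 partitions.
coarse, _, _ = np.histogram2d(beads[:, 0], beads[:, 1], bins=3, range=[[0, 1], [0, 1]])
fine,   _, _ = np.histogram2d(beads[:, 0], beads[:, 1], bins=4, range=[[0, 1], [0, 1]])

print("3x3 partition:\n", coarse.astype(int))
print("4x4 partition:\n", fine.astype(int))
print("same total beads:", int(coarse.sum()) == int(fine.sum()))
print("empty 4x4 cells:", int((fine == 0).sum()))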
If the G10 had been 8 megapixels, it would have created better output
than it does with its 14 megapixels.
There has been no evidence of such a trend.

Most of the newer, denser P&S cameras that do RAW give lower noise and higher DR than their less dense predecessors. People, in their ignorance, judge cameras by 100% pixel view, which zooms more into denser images, and the companies respond by giving ridiculous noise reduction levels.

--
John

 
It can, but very low.
Okay. Why very low? Why do you need the VHF antenna so that it's not very low?

My point is that complex technologies are not so easily equated sometimes. What's true for one type of wave may not be true for another.
 
I knew I shouldn't have followed AMradioreview's advice!
Well, it certainly doesn't hurt to have a big antenna. I used to have a wire several hundred feet long (an abandoned telephone wire) connected to an AM car radio, and got a station at every allocated frequency, even during the daytime, some stations up to 450 miles by groundwave (thousands at night with the ionosphere, but you don't need a big antenna for that). But that's a bigger sensor, not just a bigger pixel.

--
John

 
Then why can’t a UHF antenna pick up VHF signals?
Most UHF antennas are closed loops, so a longer wavelength will have a node in the center of the loop and a very weak signal at the two terminals. It's virtually a short circuit for VHF.

--
John

 
My point is that complex technologies are not so easily equated
sometimes. What's true for one type of wave may not be true for
another.
The bottom line is whether or not you lose efficiency. If the photon wave has an effective offset due to phase, sampling these offsets at a higher rate should never make the record less accurate; it should simply give diminishing returns in accuracy.

--
John

 
I knew I shouldn't have followed AMradioreview's advice!
Well, it certainly doesn't hurt to have a big antenna. I used to
have a wire several hundred feet long (an abandoned telephone wire)
connected to an AM car radio, and got a station at every allocated
frequency, even during the daytime, some stations up to 450 miles by
groundwave (thousands at night with the ionosphere, but you don't
need a big antenna for that). But that's a bigger sensor, not just a
bigger pixel.

--
but doesn't sound too practical for long trips.

--
-------------------------------------------------------
My Galleries: http://webs.ono.com/igonzalezbordes/index.html
 
So why do they have to smear detail away at lowest ISO? Why can't I get a new P&S that can shoot indoors without noisy shadows? Why can I shoot RAW with my old 5mp 1/1.8" camera without using any noise reduction? Why does it have more dynamic range?

I agree sensors have improved in noise, as have the NR algorithms, but they have raised the pixel density so much that it has more than offset the progress.

I remember back in 1999 when I bought a 2mp Kodak, thinking how great digital cameras would be in 10 years. Boy, was I wrong.
 
the parameter "pixel density" will confuse people.
1/2.5 sensor with 15 megapixels is going to perform BAD compared to
1/2.5 sensor with 10 megapixel. its a trade off.

more packed pixels in small sensor means some pixels wont be
activated by photons. so output will not be good.

so dont go for higher megapixel cameras until u hav big sensors

camera companys are fooling the people just by increasing megapixels
only

--
sangu
Sure all the pixels are activated. A high MP count can deliver a good image in bright light. However, the world is not always bright sunny days. NR that turns detail into mush and loss of DR make me hate these pixel-stuffed toys.

They work perfectly for the consumer who never prints photos, or only prints 4x6" or an occasional 5x7". The image looks fine for them and they like the big numbers on the camera.
 
Sure all the pixels are activated. A high MP count can deliver a good
image in bright light. However, the world is not always bright sunny
days. NR that turns detail into mush and loss of DR make me hate
these pixel-stuffed toys.
What evidence do you have that a high MP count sensor delivers a worse image (note, IMAGE, not 100 % crops) with less detail and DR than a low MP count sensor of the same size?

If you get less detail from a sensor with more resolution, you or the conversion algorithm is doing something terribly wrong.
 
I remember back in 1999 when I bought a 2mp Kodak, thinking how great
digital cameras would be in 10 years. Boy, was I wrong.
You were right. You are wrong. Upsample an image from your 2MP to 14/15MP and compare the same shot to a current 14/15MP P&S. The former will look like relative cr*p.

--
John

 
By using them.

Sure I can down sample to match, but why not stay at the lower resolution to begin with?

What about dynamic range? I must admit that I can shoot RAW on my old camera and it makes a difference. The dense sensors are down to what, 8 or 9 stops of range now. Ugh.
 
By using them.

Sure I can down sample to match, but why not stay at the lower
resolution to begin with?
Because a 10 MP image is better for bigger prints, and also for some PP operations. Just print an A3 from your 2 MP camera and from the modern one, and see if you still prefer the 2 MP image. You don't have to downsample to match anything; it's the end product that matters, to me at least. By downsampling one can demonstrate that the 2 MP image is in no way superior to the information contained in the high-MP image, that is all.
What about dynamic range? I must admit that I can shoot RAW on my old
camera and it makes a difference. The dense sensors are down to what,
8 or 9 stops of range now. Ugh.
 
Graystar wrote:
If the statistical variation is due to photon counting statistics,
then yes, the statistics are the same upon downsampling. This may help:
http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922352
http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922793
Oh please. I hardly consider dpreview forum postings to be
authoritative references.
I'll remember that whenever I'm reading one of your posts. Did you even look? It was a simple explanation in layman's terms of why aggregating samples lowers sample fluctuations.

But the basic point is that for any sampling obeying Poisson statistics, the standard deviation of a sample is the square root of the mean count. If you aggregate samples, you have a larger sample: S is higher, N = sqrt(S) is higher but not by as much, and S/N = sqrt(S) grows. If one had started off with coarser samples whose size was that of the aggregates, the statistics would be the same, since the statistical fluctuations only depend on the mean count, and not on how that count is put together.
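For anyone who wants to check the arithmetic, here is a minimal simulation sketch (numpy; the mean count and number of trials are arbitrary choices of mine): four small pixels with mean count lam are binned into one super-pixel and compared with a single big pixel that collects the same mean count 4*lam. The binned and big-pixel statistics come out identical.

```python
# Shot-noise statistics: binning four small Poisson pixels vs one big pixel.
import numpy as np

rng = np.random.default_rng(1)
lam, trials = 100, 200_000                      # assumed mean count per small pixel

small = rng.poisson(lam, size=(trials, 4))      # four small pixels per trial
binned = small.sum(axis=1)                      # 2x2 binning of the small pixels
big = rng.poisson(4 * lam, size=trials)         # one pixel with 4x the collecting area

for name, x in [("binned 2x2", binned), ("single big pixel", big)]:
    s, n = x.mean(), x.std()
    print(f"{name:17s} S={s:7.1f}  N={n:6.2f}  sqrt(S)={np.sqrt(s):6.2f}  S/N={s/n:6.1f}")
```

Both lines print S around 400, N around 20 = sqrt(400), and S/N around 20: the fluctuations depend only on the mean count, not on how that count was assembled.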
Also, the size of the pixel relative to the wavelength has little to
do with it. Radio waves have a wavelength of many meters for the AM
band, that doesn't mean you need an AM radio the size of a football
field to detect a signal.
Then why can’t a UHF antenna pick up VHF signals? A radio signal
won’t follow a waveguide but a microwave will.
For that sort of frequency range, the wavelength is of the order of the size of the antenna, and the antenna has a resonant response to the signal. Off resonance (i.e. for wavelengths other than the one it was designed for) there is a response, but it is not nearly as big.
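As a rough back-of-the-envelope check (the frequencies below are just illustrative picks, not anything from this thread), the half-wave resonance relation L ≈ λ/2 = c/(2f) already shows the size mismatch between the two bands:

```python
# Half-wave element length for example VHF and UHF frequencies.
C = 299_792_458.0   # speed of light, m/s

for band, f_hz in [("VHF (~100 MHz)", 100e6), ("UHF (~600 MHz)", 600e6)]:
    wavelength = C / f_hz
    print(f"{band}: wavelength {wavelength:5.2f} m, half-wave element {wavelength/2:5.2f} m")
```

A resonant element sized for ~0.25 m is far off resonance for a ~3 m wave, so the VHF response of a UHF antenna is weak even though it isn't strictly zero.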

A radio wave can follow a waveguide; power transmission lines are waveguides, and 60 Hz is at the very long end of the radio spectrum.

But your point is well taken: how materials behave is strongly frequency dependent. Your bathroom mirror is highly reflective in the optical but transparent to hard UV; salt water transmits optical frequencies but absorbs radio.

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
Sure I can down sample to match, but why not stay at the lower
resolution to begin with?
Because there are better ways of reducing the noise than simply downsampling: ways which filter the noise in a manner that preserves much of the image detail and resolution.

The only difference in noise between a higher- and lower-resolution image is noise at frequencies beyond the resolution of the coarser sensor. One need only filter out that high-frequency noise, leaving the high-frequency detail, to do better than the lower-resolution sensor.
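Here is a toy 1-D sketch of that frequency argument (numpy; the parameters are my own, and the brick-wall low-pass is only a stand-in for a real detail-preserving filter): pure white noise is sampled finely, then either binned down 4x or kept at full resolution with only the frequencies above the coarse sensor's Nyquist removed. The residual noise comes out essentially the same either way, while the filtered version still has the finer sampling available for real detail below the cut.

```python
# White noise: 4x binning vs removing only the frequencies beyond the coarse Nyquist.
import numpy as np

rng = np.random.default_rng(2)
n, factor = 1 << 14, 4
noise = rng.normal(0.0, 1.0, n)                 # stand-in for photon/read noise

coarse = noise.reshape(-1, factor).mean(axis=1) # coarse sensor: 4x binning

spec = np.fft.rfft(noise)
cut = len(spec) // factor                       # coarse sensor's Nyquist bin
spec[cut:] = 0.0                                # drop only the noise beyond it
filtered = np.fft.irfft(spec, n)

print(f"fine, unfiltered noise std: {noise.std():.3f}")
print(f"coarse (4x binned)     std: {coarse.std():.3f}")
print(f"fine, low-passed       std: {filtered.std():.3f}")
```

Both reduced versions end up near 0.5, i.e. half the original standard deviation, because each keeps only a quarter of the noise power.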
What about dynamic range? I must admit that I can shoot RAW on my old
camera an it makes a difference. The dense sensors are down to what,
8 or 9 stops of range now. Ugh.
DR and noise go hand in hand. Dynamic range depends on the image scale at which it's being measured; since finer resolution comes with more high-frequency noise, DR is lower. That does not mean that DR at lower frequencies, such as the pixel scale (Nyquist frequency) of the coarser sensor, is worse for the higher-resolution sensor. People are all too easily fooled by what they see in the image at 100% views, not taking into account how what they are seeing will scale as the viewing size is scaled (with or without downsampling).

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
My original digital camera was a Canon A70, a 3MP 1/2.7 sensor, from 2003. I gave it to my son years ago, but recently I took a shot with it at ISO 400 (the max possible on this camera). Enlarged to 100%, it looks hideous compared to my current Canon A570IS (7MP, 1/2.5) at 50% viewing (more or less the same image size). But I guess the "more MP is bad" myth will never die.
Bert
 
Oh please. I hardly consider dpreview forum postings to be
authoritative references.
I'll remember that whenever I'm reading one of your posts.
Good move. Everything read in forums should be independently verified.
Did you
even look? It was a simple explanation in layman's terms of why
aggregating samples lowers sample fluctuations.
Yes I’ve read it, as well as your other comments on the subject.
But the basic point is that for any sampling obeying Poisson
statistics, the standard deviation of a sample is the square root of
the mean count.
Right.
If you aggregate samples, you have a larger sample: S is higher,
N = sqrt(S) is higher but not by as much, and S/N = sqrt(S) grows.

This is the part I’m having trouble with. Doesn’t a Poisson distribution govern events over time? Aggregating isn’t an event. Why should it follow a Poisson distribution?

It’s possible that you will try to bin 4 pixels into one, where each pixel is at its max Poisson determined value. When you bin them the variation in signal to noise will be greater than the Poisson variance of a single larger pixel. That would likely be a very rare aggregate, but the fact that it is possible means that the variance curve of the binned pixels should be slightly wider than the Poisson distribution curve of larger pixels.

You’d have to explain how it would be impossible to end up binning 4 pixels that are at their max variance.
 
This is the part I’m having trouble with. Doesn’t a Poisson
distribution govern events over time? Aggregating isn’t an event.
Why should it follow a Poisson distribution?
It’s possible that you will try to bin 4 pixels into one, where each
pixel is at its max Poisson determined value. When you bin them the
variation in signal to noise will be greater than the Poisson
variance of a single larger pixel. That would likely be a very rare
aggregate, but the fact that it is possible means that the variance
curve of the binned pixels should be slightly wider than the Poisson
distribution curve of larger pixels.
You’d have to explain how it would be impossible to end up binning 4
pixels that are at their max variance.
You really need to describe your hypothetical issues better.

Are they at maximum value?

Then they would have 4x the value in a 2x2 binning, just as they would with a bigger pixel.

If you're talking about clipping, look at the link emil posted, where someone charted a concept I had been describing in prose: smaller maximum photon counts per pixel (or any source of greater minimum noise near saturation, for that matter) lower the level at which the sensor enters a non-linear response just below clipping, and raise the top end of that non-linear range, above mean clipping.
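A rough simulation sketch of that clipping behaviour (the full-well value and exposure levels are assumptions of mine, not from emil's link): four small pixels that each clip at C are binned and compared with one big pixel that clips at 4*C. Near saturation the binned response starts to roll off at a lower exposure and only reaches full clipping at a higher one, i.e. the non-linear region starts lower and extends higher than for the single big pixel.

```python
# Binned small pixels vs one big pixel near saturation, with shot noise and clipping.
import numpy as np

rng = np.random.default_rng(3)
C, trials = 1000, 100_000                       # assumed small-pixel full well

for frac in (0.90, 0.95, 1.00, 1.05, 1.10):     # exposure as a fraction of full well
    lam = frac * C
    small = np.minimum(rng.poisson(lam, size=(trials, 4)), C).sum(axis=1)
    big = np.minimum(rng.poisson(4 * lam, size=trials), 4 * C)
    print(f"exposure {frac:4.2f}*FW  ideal {4 * lam:7.0f}  "
          f"binned mean {small.mean():7.1f}  big-pixel mean {big.mean():7.1f}")
```

Around 0.95*FW the binned mean already sits below the ideal value while the big pixel is still essentially linear; just above 1.00*FW the big pixel is pinned at 4*C while the binned output is still creeping up toward it.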

--
John

 
