G7, FZ50. wanna ride a pig or a horse??

paddmannen

What is the major problem with the G7??
A lot of people talk about the absence of RAW support and a swivelling screen.

What is the major problem with the Panasonic FZ50??
Well, apparently it's the noise.

Wanting to buy one of these cameras, I have been spending some time comparing images. I am not a professional photographer, but I am not an idiot either (I hope...).

I think that the real issue with BOTH cameras is the intrusive noise in images from ISO 200 and up. Furthermore, I can't understand how anyone can say that the FZ50 is noisier than the G7. The TRUTH (according to me) is that they are BOTH incredibly useless at higher ISO settings, and whatever differences exist are not worth talking about. You could perhaps say that they are worthless in different ways...

Look at these pictures at ISO 400:

G7: [ISO 400 sample image]

FZ50: [ISO 400 sample image]

To illustrate my point that the picture quality of these cameras is basically the same, look at the same subject shot with a Canon 400D at ISO 400.

400D: [ISO 400 sample image]

For me, arguing about whether the G7 or the FZ50 is best is like quarrelling about whether it's better to ride a pig or a goat when what you need is a horse, if you want to ride at all. DON'T say that nothing can be done about this, because it can. Canon and Panasonic could build, if not a horse, then at least a pony for us today if they were concerned with picture quality instead of marketing megapixels.

Look at these figures:

FZ50: sensor size 7.18 x 5.32 mm, 10.4 MP -> ~272,000 pixels per square mm

G7: sensor size 7.18 x 5.32 mm, 10.4 MP -> ~272,000 pixels per square mm

Canon 400D: sensor size 22.8 x 14.8 mm, 10.5 MP -> ~31,000 pixels per square mm

Pentax K100D: sensor size 23.5 x 15.7 mm, 6.31 MP -> ~17,000 pixels per square mm
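If you want to check the density figures yourself, here is a quick Python sketch; the sensor dimensions and pixel counts are the ones quoted above, and the rounding is mine:

    # pixels per square millimetre = pixel count / sensor area
    sensors = {
        "FZ50":         (7.18, 5.32, 10.4e6),
        "G7":           (7.18, 5.32, 10.4e6),
        "Canon 400D":   (22.8, 14.8, 10.5e6),
        "Pentax K100D": (23.5, 15.7, 6.31e6),
    }
    for name, (w, h, pixels) in sensors.items():
        print(f"{name}: {pixels / (w * h):,.0f} pixels/mm^2")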

You don't have to be a rocket scientist to understand the figures...

Imagine an FZ50/G7 with 6 MP (which is enough to print an A3 image) and a sensor size somewhere around 10 x 7.5 mm. With some engineering the camera would not be much bigger because of this, but the pixel density would fall to about 80,000 per square mm...

We would have an ISO disadvantage of only a stop or so compared to a 400D... in a camera that you can actually carry with you.
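In stop terms: each halving of pixel density doubles the light collected per pixel, which is one ISO stop. A rough check of the claim, using the densities above (the assumption that noise is purely shot-noise-limited is mine):

    import math
    # ISO handicap in stops = log2(pixel density ratio),
    # assuming light gathered per pixel scales inversely with density
    hypothetical = 80_000   # 6 MP on a ~10 x 7.5 mm sensor
    canon_400d = 31_000     # from the figures above
    print(math.log2(hypothetical / canon_400d))  # ~1.37, i.e. a stop and a bit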

My conclusion: The technology to build the perfect compact enthusiast camera exists already, but the camera does not, so I'll bide my time. Sooner or later somebody will wake up and build my pony.
 
> My conclusion: The technology to build the perfect compact enthusiast camera exists already, but the camera does not, so I'll bide my time. Sooner or later somebody will wake up and build my pony.
I personally think that the perfect compact enthusiast digital camera is a DSLR with a compact prime, and an option to put on a standard zoom when compactness is less of an issue.

From the rest of your post, you seem to have in mind a digicam with a fixed lens and a large sensor, rather than an SLR, probably for reasons of size. In my opinion, the problem is that if a manufacturer opts for not providing a zoom, it ends up creating a camera of limited (as opposed to universal) use. The Ricoh GR-D has a nice 28 mm prime, and this means that it is not very appropriate for portraits. What I mean is that a zoom is expected in a universal camera, and if possible a zoom with good close-up capability, and one which starts at a reasonably wide angle.

Such a camera exists. It is the Sony R1, and it is everything but compact - compare it with an Olympus E-400 with a pancake lens. And there is not much one can do: a camera with a large sensor and a lens with similar flexibility to the one in the G7 will always be bulky. Actually, almost as bulky as an SLR. Why not buy an SLR instead, then?
  • Armand
 
Well, let's keep the sensor size of the G7 but reduce the pixels to 5 MP. We would still gain one full ISO stop. That is worth a lot already. An SLR with a pancake lens is compact, but not as versatile as a G7 (or an FZ50).

Then there are some bizarre drawbacks with SLRs: you can't use the LCD screen for previewing, and there is no video. Does anyone have an explanation for this?
 
The FZ50 pic actually looks better than the G7's, as far as chroma noise is concerned! But there is very little room in the pic to evaluate shadow noise. In any case, both of those pics could be used for regular prints.

Another surprise: the shot from the 400D looks much softer than the other ones. Focus issue, processing issue, lens issue? I don't know, but even though the noise is better, detail seems to be lacking, which does not make much sense...
--
bdery

Québec city, Canada
C A N O N S 2
C O O L P I X S Q
http://s108.photobucket.com/albums/n13/bdery/
 
> Well, let's keep the sensor size of the G7 but reduce the pixels to 5 MP. We would still gain one full ISO stop.
You get the same, regarding signal/noise, by downsampling the 10 MP image. Except that in good light, the 10 MP sensor has the ability to record more detail (and maybe less dynamic range).

By the way, the 6.2 MP sensor in the Fuji F10 and successors is larger than the G7 sensor. The pixel size is not very different from the hypothetical camera you describe.
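A quick way to see this is a little numpy experiment under pure shot noise (read noise and the Bayer mosaic deliberately ignored, so this is only the first approximation mentioned above): summing pairs of small photosites gives the same signal-to-noise as one photosite of twice the area.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    small = rng.poisson(100, size=(n, 2))  # two small sites, 100 photons each
    big = rng.poisson(200, size=n)         # one site with twice the area

    summed = small.sum(axis=1)             # "downsample" the pair into one pixel
    print(summed.mean() / summed.std())    # SNR ~14.1
    print(big.mean() / big.std())          # SNR ~14.1, identical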
  • Armand
 
But RAW makes a big difference:

FZ50, 100% crop, ISO 400 RAW: [image]

And the JPEG file (noise reduction low): [image]

--
Regards,

Robert
 
> You get the same, regarding signal/noise, by downsampling the 10 MP image. Except that in good light, the 10 MP sensor has the ability to record more detail (and maybe less dynamic range).
>   • Armand
I wish this were correct. You could only recover signal from noise if you have enough samples of the SAME signal. That's not the case, as every pixel can be different from those surrounding it in an image. If we had a section of the image that contained, say, 100 pixels, and all were supposed to be a uniform color, then IF the noise is random (and it often isn't), we could sample and recover the true color. But, we seldom have this situation in the most critical parts of an image. Also, when we get close to the darkest parts of an image, since the color cannot go below black (no negative values) any averaging ends up with a false color. The same is true at the high end, though our signal to noise ratio is typically better there.
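The point about averaging near black is easy to demonstrate numerically; in this sketch (values invented for illustration) the true level is 2, but clipping at zero drags the average of the noisy samples up to about 3.2:

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(2.0, 5.0, 1_000_000)  # true level 2, noise std 5
    clipped = np.clip(samples, 0, None)        # a sensor can't go below black
    print(samples.mean())   # ~2.0: averaging unclipped noise recovers the truth
    print(clipped.mean())   # ~3.2: clipping biases the average upward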

In theory, one could record multiple images of the same scene and average them together to reduce noise -- since each pixel would have multiple samples (again assuming random noise.) But that would require a very stationary camera and fixed scene.
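Frame averaging is indeed standard practice (astrophotographers call it stacking); a minimal sketch with simulated frames and random Gaussian noise, showing the noise falling with the square root of the frame count:

    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.uniform(0, 255, size=(100, 100))        # the "true" image

    def shoot():
        return scene + rng.normal(0, 10, scene.shape)   # one noisy exposure

    single = shoot()
    stacked = np.mean([shoot() for _ in range(16)], axis=0)
    print((single - scene).std())    # ~10
    print((stacked - scene).std())   # ~2.5 = 10 / sqrt(16)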

This all means that there is a definite trade-off in using a larger number of smaller photosites. As reduced exposure lowers the signal, one eventually passes a point at which a sensor with fewer, larger photosites stores more useful information than a sensor with more, smaller photosites.

--
Bitplayer

To err is human, to post-process -- divine.
 
>> You get the same, regarding signal/noise, by downsampling the 10 MP image. Except that in good light, the 10 MP sensor has the ability to record more detail (and maybe less dynamic range).
>>   • Armand

> I wish this were correct.

It is. At least as a first approximation.

> You could only recover signal from noise if you have enough samples of the SAME signal. That's not the case, as every pixel can be different from those surrounding it in an image.
Would you explain how the signal impacting a photosite that has a given capture area is different from the signal impacting two photosites, which, taken together, have the same capture area? It is the same signal, isn't it?

There are reasons why what I wrote is an approximation, but yours is not one.
  • Armand
 
There is such a beast: the Fuji F30. Compare the studio shot right here on DPReview against the G7; the Fuji's ISO 800 is about as clean as the Canon's ISO 80.
 
Your figures don't make sense, try again.
--
Jerry
 
Wow, that's the worst case of noise reduction I've ever seen. I hope it's not business as usual for the Pana camera.

--
Stephane

 
>>> You get the same, regarding signal/noise, by downsampling the 10 MP image. Except that in good light, the 10 MP sensor has the ability to record more detail (and maybe less dynamic range).
>>>   • Armand

>> I wish this were correct.

> It is. At least as a first approximation.

>> You could only recover signal from noise if you have enough samples of the SAME signal. That's not the case, as every pixel can be different from those surrounding it in an image.

> Would you explain how the signal impacting a photosite that has a given capture area is different from the signal impacting two photosites, which, taken together, have the same capture area? It is the same signal, isn't it?
Gosh, no. If it were, the two resulting pixels would have the same color. (By the way, we're both ignoring the whole "Bayer thing" -- but that is understood.) Each position in the sensor gets different information from a different part of the scene. With larger photosites you are averaging the signals over a wider area of the scene. Until you get down to photon size, each small area of the sensor gets its own, unique signal from the scene. If not, more pixels would not increase resolution.

Two cases:

1) 1 photosite, signal X, noise N. The signal to noise ratio is X/N.

2) That photosite split in half. Each half gets a signal that sums with the other photosite's signal to X. Each could be anywhere from 0 to X. On average each is X/2 -- which is actually the best case. Each then has a signal-to-noise ratio of (X/2)/N, or half the first case. In the other possibilities, one photosite has a better signal-to-noise ratio and the other has one that is worse. The signal-to-noise ratio of our best case is then half what it was. These pixels do not average together unless you are talking about a large, uniform area (such as part of the sky). Once you mix in noise, it is difficult to take out. You must be able to characterize the noise (random, periodic, etc.) and have many samples -- the more the better.

The basis of all of this is that when you split the photosite the signal is cut in half but the noise remains the same. Electronic noise will be the same (or worse!), but sampling error noise goes with the square root of the signal. Either way, the S/N ratio is worse.
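Under the model stated here, where each photosite carries the same fixed electronic noise, the penalty can be put in numbers. A rough sketch (signal and noise values invented for illustration, shot noise deliberately left out): the per-site S/N halves, and the recombined pair still loses to the single large site because the two noise doses add in quadrature.

    import numpy as np

    rng = np.random.default_rng(0)
    X, N, n = 100.0, 5.0, 1_000_000            # signal, read noise per site, trials
    one_big = X + rng.normal(0, N, n)          # one large photosite
    halves = X / 2 + rng.normal(0, N, (n, 2))  # the same site split in two

    print(X / one_big.std())                   # SNR ~20
    print((X / 2) / halves[:, 0].std())        # SNR ~10 per small site
    print(X / halves.sum(axis=1).std())        # SNR ~14.1 even after summing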

I'm probably not doing the best job of explaining this. Anyway, give it some thought.
> There are reasons why what I wrote is an approximation, but yours is not one.
>   • Armand
--
Bitplayer

To err is human, to post-process -- divine.
 
Having owned a G7 for about 3 weeks, I am beginning to suspect that too much pixel peeping can be misleading. I find the G7 does produce a fair bit of noise at higher ISO settings, but what I am now appreciating is the beautiful prints I am getting. So much better than my G5: image detail and colour balance are streets ahead. Macro on the G7 is also much better. I also find the G7 has a large fun factor about it, plus I can pop it into my pocket.
 
>>>> You get the same, regarding signal/noise, by downsampling the 10 MP image. Except that in good light, the 10 MP sensor has the ability to record more detail (and maybe less dynamic range).
>>>>   • Armand

>>> I wish this were correct.

>> It is. At least as a first approximation.

>>> You could only recover signal from noise if you have enough samples of the SAME signal. That's not the case, as every pixel can be different from those surrounding it in an image.

>> Would you explain how the signal impacting a photosite that has a given capture area is different from the signal impacting two photosites, which, taken together, have the same capture area? It is the same signal, isn't it?

> Gosh, no. If it were, the two resulting pixels would have the same color.
No. Read again, please: I am talking about the hypothetical case where the light impacting one large photosite is the same as the light impacting the two smaller photosites, taken together. It is the same light, the same photons, the same signal. In one case, you detect it using a single photodetector, and in the other one, with a pair of detectors. If those had the ability to perfectly count single photons, they would count exactly the same number of hits, taken together, as the single, larger detector does.
> The basis of all of this is that when you split the photosite the signal is cut in half but the noise remains the same. Electronic noise will be the same (or worse!), but sampling error noise goes with the square root of the signal. Either way, the S/N ratio is worse.
Within the simplified model sensor we are talking about here, random noise from sampling (shot noise) will get no worse when doubling the number of photosites, provided one sums the signal between adjacent sites.

Readout noise and dark current may indeed worsen the S/N ratio for the sensor with more, smaller photosites, but that was beside your initial point, I would say. Dark current can be dealt with by dark-frame subtraction, and compact digicams routinely do it for longer exposures.
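Both effects can be put in one toy model: Poisson shot noise plus a fixed Gaussian read noise per site (the numbers are invented for illustration). The shot-noise part is identical for one large site and two summed halves; only the doubled dose of read noise costs anything:

    import numpy as np

    rng = np.random.default_rng(0)
    photons, read_noise, n = 200, 4.0, 1_000_000
    big = rng.poisson(photons, n) + rng.normal(0, read_noise, n)
    small_pair = (rng.poisson(photons // 2, (n, 2))
                  + rng.normal(0, read_noise, (n, 2))).sum(axis=1)

    print(photons / big.std())         # SNR ~13.6
    print(photons / small_pair.std())  # SNR ~13.1, slightly worse from read noise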
  • Armand
 
