12 million is 12 million is still 12 million

The Y axis is saturation level.

This chart is from RAW results; in JPG the dynamic range is above
10.5 EV. Everyone on this site knows that RAW has "lost" dynamic range
when converted to JPG.
I know that Saturação means saturation, but what are you expressing there: dB, volts, photons? Are you linear or log?

What is the delta in DR, in stops, from the S2 to the S3?

Thanks for confirming that JPEGs lose DR (unless you artificially squeeze the range), because a lot of people seem not to understand that.

--
Regards
Gabriele
California, CA
 
Even though I disagree with your proposition, I respect your opinion. You are owed that courtesy.

But remember, Leo, the empty drum often makes the most noise!

An American idiom. I'm sure you have something similar in Portuguese.
The Y axis is saturation level.

This chart is from RAW results; in JPG the dynamic range is above
10.5 EV. Everyone on this site knows that RAW has "lost" dynamic range
when converted to JPG.
--

This info has been brought to you by a 100% genuine S3 user/owner. Beware of substitutes!

Ghosts ghosts go away, please don't come back another day...
 
Converted by S7RAW and Hyper Utility 2.



All look similar in the histogram, but Wide has more consistent transitions and textures.



In this sample I used S7RAW to make a picture in STD mode and one in Wide mode, and finally I created an R-only image and mixed it in PS with the STD-mode image, using the same process used to create Sinar Dual Scan 44MP images.

 
It is made with color saturation levels, like film (I have no way to measure dB or volts in my camera :( ).
This is a linear measure.

If you have another methodology with other units, and that methodology won't destroy my camera, I can do it too. :)
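Since the question above is about linear versus log units: a linear saturation reading can be restated in stops (EV) or decibels with a one-line conversion. This is a minimal sketch, not Leo's actual spreadsheet, and the reference level is a hypothetical choice:

```python
import math

def linear_to_stops(value, reference):
    """Express a linear saturation reading in stops (EV) above a reference level."""
    return math.log2(value / reference)

def linear_to_db(value, reference):
    """The same ratio in decibels, treating the reading as an amplitude-style quantity."""
    return 20.0 * math.log10(value / reference)

# A reading 4x the reference is exactly 2 stops (about 12 dB) above it.
print(linear_to_stops(4.0, 1.0))         # → 2.0
print(round(linear_to_db(4.0, 1.0), 2))  # → 12.04
```

With a conversion like this, any two measurements (say, S2 vs. S3 saturation points) can be compared directly in stops.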
 
I do see the slightest bit of difference in detail and tones in the wide conversions in the midtones. I almost went blind trying to see the difference, but I do see a very slight difference.

Have you compared the standard file to the wide file with the R file extracted out in s7raw?

Take any of your wide files, and convert with s7raw, but pull out only the S pixel. Show only the S pixel.

Compare that to the standard file. I'll bet the two have different exposures, and this probably accounts for what you are seeing.

Not that this is not a real advantage, just that it isn't due to enhanced detail resolution, simply tonal separation.
Converted by S7RAW and Hyper Utility 2.



All look similar in the histogram, but Wide has more consistent transitions and textures.



In this sample I used S7RAW to make a picture in STD mode and one in Wide mode, and finally I created an R-only image and mixed it in PS with the STD-mode image, using the same process used to create Sinar Dual Scan 44MP images.

 
The S2 has 6million+ photosites. It's a 6MP camera.
The S3 has 6million+ photosites. It's a 6MP camera. The only
difference is that it also has 6million+ photosites for highlights
which are blended into the image to provide a wider dynamic range.
It's still a 6MP camera.
Totally correct. You can't have it both ways. It is mathematics. If
you use the extra information for extra DR, you can't also have 12MP
of resolution. Period. End of story. Finito.
You are conveniently forgetting the test results, especially those results from Anders Uschold that showed an actual resolution increase that is beyond normal interpolation.

I am going to go out on a limb here - but I suspect the reason that you are stuck on the math is because you are not taking into account the need to turn the 45 degree mosaic into a rectilinear pattern before blending the images. I suspect that this is where the extra information is coming from. The two sensor types occupy different positions; even though they are under the same lens, the illumination will not be uniform. Depending on how you "fill in the blanks" when you create a 12mp image from each set of photosites, you can get the extra information.

When theory contradicts evidence, the theory is flawed. Basic scientific method. (This is where scientists do repeated experiments to replicate results. In this case Anders came late to the party but wound up doing perhaps the most thorough job at verifying the fuji hypothesis.)

In a thought experiment, try this - for each set of photosites, create a 12mp image and instead of interpolating data, put a black pixel in the empty space. When you do this, you wind up with a checkerboard pattern. The two checkerboards from the two sets of sensors will be complementary, rather than identical. Blend them by letting the illuminated pixel from one checkerboard replace the black pixel from the other. Voilà, you have an image with more information.

This is my admittedly amateur attempt to understand where the extra data comes from. (actually an attempt to explain the discrepancy between experimental results and simple math 6mp reasoning)
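The checkerboard thought experiment above can be sketched in a few lines of NumPy. This is a toy illustration under the post's own assumptions (complementary S and R checkerboards, black elsewhere), not Fuji's actual demosaic:

```python
import numpy as np

def checkerboard(values, parity):
    """Keep 'values' only on cells whose (row+col) parity matches; others stay black (0)."""
    h, w = values.shape
    mask = (np.indices((h, w)).sum(axis=0) % 2) == parity
    board = np.zeros((h, w))
    board[mask] = values[mask]
    return board

rng = np.random.default_rng(0)
s_pixels = rng.uniform(0.1, 1.0, size=(4, 4))  # stand-in for the S photosites
r_pixels = rng.uniform(0.1, 1.0, size=(4, 4))  # stand-in for the R photosites

s_board = checkerboard(s_pixels, 0)  # lit on one set of cells
r_board = checkerboard(r_pixels, 1)  # lit on the complementary set

# Blend: wherever one board is black, take the lit pixel from the other.
blended = np.where(s_board > 0, s_board, r_board)
```

Every cell of `blended` now carries a real sample from one of the two sets, which is the sense in which the merged image holds more information than either checkerboard alone.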
--
Best regards,
Jonathan Kardell
'Enlightenment isn't anywhere near as much fun as I thought it would be'
 
I think Satori misunderstood what the poster meant. The original poster was talking about what you and I are talking about. Satori is making the point that the R pixels can be used for DR, OR resolution, but not both at the same time.
The S2 has 6million+ photosites. It's a 6MP camera.
The S3 has 6million+ photosites. It's a 6MP camera. The only
difference is that it also has 6million+ photosites for highlights
which are blended into the image to provide a wider dynamic range.
It's still a 6MP camera.
Totally correct. You can't have it both ways. It is mathematics. If
you use the extra information for extra DR, you can't also have 12MP
of resolution. Period. End of story. Finito.
You are conveniently forgetting the test results, especially those
results from Anders Uschold that showed an actual resolution
increase that is beyond normal interpolation.

I am going to go out on a limb here - but I suspect the reason that
you are stuck on the math is because you are not taking into
account the need to turn the 45 degree mosaic into a rectilinear
pattern before blending the images. I suspect that this is where
the extra information is coming from. The two sensor types occupy
different positions; even though they are under the same lens, the
illumination will not be uniform. Depending on how you "fill in
the blanks" when you create a 12mp image from each set of
photosites, you can get the extra information.

When theory contradicts evidence, the theory is flawed. Basic
scientific method. (This is where scientists do repeated
experiments to replicate results. In this case Anders came late to
the party but wound up doing perhaps the most thorough job at
verifying the fuji hypothesis.)

In a thought experiment, try this - for each set of photosites,
create a 12mp image and instead of interpolating data, put a black
pixel in the empty space. When you do this, you wind up with a
checkerboard pattern. The two checkerboards from the two sets of
sensors will be complementary, rather than identical. Blend them by
letting the illuminated pixel from one checkerboard replace the
black pixel from the other. Voilà, you have an image with more
information.

This is my admittedly amateur attempt to understand where the extra
data comes from. (actually an attempt to explain the discrepancy
between experimental results and simple math 6mp reasoning)
--
Best regards,
Jonathan Kardell
'Enlightenment isn't anywhere near as much fun as I thought it
would be'
 
My STD files in this test with S7RAW are only from the S pixels (I had disabled the R pixels).

S+R is from S and R with level 100 of mixing.

I can see the texture difference in the midtones (by my tests, at the extremes of the dynamic range we can't see more detail; you can see it only in the midtones :) ).

I have observed a side effect too: the S3 is the only camera with noise in the highlights, and in the PS mixing this problem can be observed too.
 
It is made with color saturation levels, like film (I have no way to
measure dB or volts in my camera :( ).
This is a linear measure.
You made a spreadsheet, you plugged in numbers. These numbers must have some measurement value one way or another. They must be an expression of something, right?

Did you take the RAW and analyze the value of each pixel expressed in 14/16 bits? I don't know what you are measuring or what it represents beside the generic definition of saturation, and I'm just trying to understand how to interpret your chart and when the camera starts to show initial sensitivity. Is it when the first horizontal line is touched?
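To make the question concrete: one way to analyze the pixel values directly is to load the RAW data into an array and measure how many photosites sit at the white level, which is the point where the curve "touches the horizontal line." A hypothetical sketch on synthetic data (16383 assumed as the 14-bit full-scale value; a real file would need a RAW decoder):

```python
import numpy as np

WHITE_14BIT = 16383  # full-scale value for 14-bit linear data

def clipping_fraction(levels, white_level=WHITE_14BIT):
    """Fraction of photosites at or above the saturation (white) level."""
    return float(np.mean(np.asarray(levels) >= white_level))

# Synthetic stand-in for real sensor data: a linear ramp that overshoots
# full scale, so the top of the ramp clips to the white level.
ramp = np.clip(np.linspace(0, 20000, 1000), 0, WHITE_14BIT)
print(round(clipping_fraction(ramp), 3))  # → 0.181
```

Run over a bracketed exposure series, a statistic like this shows at which exposure the sensor stops responding, which is one linear, unit-free way to locate the saturation point the chart is trying to express.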

--
Regards
Gabriele
California, CA
 
My STD files in this test with S7RAW are only from the S pixels (I had
disabled the R pixels).
Yes, but were the R pixels disabled in camera (Normal mode), or did you make your standard files by turning off the R pixel in s7raw?
S+R is from S and R with level 100 of mixing.

I can see the texture difference in the midtones (by my tests, at the
extremes of the dynamic range we can't see more detail; you can see it
only in the midtones :) ).
Interesting. I'd like to see more.

 
Why not both?
The information is there.

You can take that highlight information with bracketing; Sinar uses dual scan to make 44MP files in its backs. If I can underexpose the R pixels and use the RAW converter to recover the highlights with them, I will have both pieces of information (pixel position with color, and highlight).
 
I think Satori misunderstood what the poster meant. The original
poster was talking about what you and I are talking about. Satori is
making the point that the R pixels can be used for DR, OR
resolution, but not both at the same time.
Yes, but my point is that you can actually use them for both, however you will wind up "blending" away much of the resolution increase because of the need to equalize luminance between the two sets of pixels. Otherwise you would get a "screen door" effect from the two sets of checkerboards. I can think of no other explanation for the slight bump in resolution that the S3 gets which is beyond that accounted for by simple interpolation.
--
Best regards,
Jonathan Kardell
'Enlightenment isn't anywhere near as much fun as I thought it would be'
 
I think Satori misunderstood what the poster meant. The original
poster was talking about what you and I are talking about. Satori is
making the point that the R pixels can be used for DR, OR
resolution, but not both at the same time.
Yes, but my point is that you can actually use them for both,
however you will wind up "blending" away much of the resolution
increase because of the need to equalize luminance between the two
sets of pixels. Otherwise you would get a "screen door" effect
from the two sets of checkerboards. I can think of no other
explanation for the slight bump in resolution that the S3 gets
which is beyond that accounted for by simple interpolation.
How does the S3 do that horse and pony trick without any R pixels?

There's been a ton written here. Some say it's just a measured increase in the vertical and horizontal at the expense of the diagonal, because the diagonal pattern has a greater pixel density in those directions, which is true. Others say, like Fuji, that it is these enhanced vertical and horizontal numbers that matter, since the human optical system (eye/retina/brain) is supposedly more sensitive to detail in those directions.
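The geometry behind the vertical/horizontal claim is simple arithmetic: rotating a square lattice 45 degrees shrinks the effective sampling pitch along the horizontal and vertical axes by a factor of sqrt(2), raising the Nyquist limit in those directions by the same factor, at the expense of the diagonals. A sketch with a placeholder pitch, not Fuji's published figure:

```python
import math

def nyquist_lp_per_mm(pitch_mm):
    """Nyquist limit in line pairs per mm for a given sampling pitch."""
    return 1.0 / (2.0 * pitch_mm)

pitch = 0.0078  # hypothetical photosite pitch in mm

square = nyquist_lp_per_mm(pitch)                  # axes of an upright square grid
rotated = nyquist_lp_per_mm(pitch / math.sqrt(2))  # same grid rotated 45 degrees

print(round(rotated / square, 3))  # → 1.414: the sqrt(2) gain along H/V
```

The same ratio applies whatever the actual pitch, which is why the argument is usually stated as a pure factor rather than in line pairs per mm.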
--
Best regards,
Jonathan Kardell
'Enlightenment isn't anywhere near as much fun as I thought it
would be'
 
I don't know the words used to explain this in English :(
I tried to measure using the film color-saturation method. :/

But yes, when the lines touch the horizontal line, you don't have any more sensitivity in the sensor.
 
How does the S2** do that horse and pony trick without any R pixels?

 
Totally correct. You can't have it both way. It is mathematic. If
you use the extra information for extra DR you can't now have 12MP
of resolution. Period, End of the story, Finito.
You are conveniently forgetting the test results, especially those
results from Anders Uschold that showed an actual resolution
increase that is beyond normal interpolation.

I am going to go out on a limb here - but I suspect the reason that
you are stuck on the math is because you are not taking into
account the need to turn the 45 degree mosaic into a rectilinear
pattern before blending the images. I suspect that this is where
the extra information is coming from. The two sensor types occupy
different positions,
Jonathan, you are conveniently mixing two things. The 45-degree positioning of the pixels is what was originally referred to by Anders Uschold, and it gave the S2 (not the S3) its explanation for augmented resolution. I do have an S2 and I agree with that. With the S3 you can't get 12MP of spatial information, because the extra pixel is giving its info to the DR definition (rightly so). In order to augment the DR info you need to sum the big pixel's and the small pixel's analog info into a single piece of information. Also remember this very important thing: the small pixel in the S3 is so small that it will have diffraction problems in almost any condition if used as an individual pixel for resolution. This is a fact; hence no extra resolution.

If you read my past posts as the intelligent and open-minded man that you are (unlike some of my detractors, who are just looking at what is convenient for them), you will see that I wrote a post where I talk about the S3 having extra resolution in real terms when compared to the S2, because of the extra DR. This is what is giving a bit better resolution on the charts that Leo from Brazil is publishing here. If you have extra DR, chances are that you have better contrast, or if you will, an extra piece of information that helps the eyes in decoding resolution. This is a fact as well. Extra DR can (depending on light conditions, colors, etc.) be equivalent to another piece of resolution, but again, it is a collateral fact: important, but one that has nothing to do with the second pixel per se. Actually, if you could make a CCD as good in DR even without the second pixel, you would have this extra information.

Don't be fooled into saying that you have 12MP here. You DEFINITELY have extra resolution when compared to 6MP, no contest here. It has been said a million times that 8 is probably a reasonable number, but when you compare a real 12MP camera you will see a HUGE difference in resolution. This speaks for itself; I don't think it needs extra comments. Pick up the resolution charts of the D2X and you'll see what I'm talking about.

I think our disagreement was related to two different things: you thinking that I was considering the S3 equivalent in resolution to any other 6MP camera, and me arguing that there is no real 12MP equivalency. But I never denied that you get better resolution with an S3 (or S2) than with a D100.

--
Regards
Gabriele
California, CA
 
