S3 Important discoveries!!

I haven't had a chance to do any testing myself, but you may be on to something here. As I have mentioned here before, when I tried to overexpose with recoverable highlights, I always felt that I was getting a sharper and more detailed image. This may or may not be true, and may or may not be related to what you have come up with, but I have felt that way for a while.

Thanks for the very interesting thread. When I get a chance, I'll probably read everything here and do my own tests.
Best regards.
 
Hi Cyrus.

Yes! Cha-ching! I look forward to seeing your results too, Cyrus. This is making the S3 even more drool-worthy, indeed! :P :D

Sincerely,
Huy (sounds like "we")
 
Zarathustra wrote:
snip
Sure you can do that, but the whole point of the S3 sensor is that
you can get in one shot all the goodies; the thing is to find that
key exposure that balances S&R nicely.
snip
I have read through this thread with interest

while I find it a dubious proposition that third party converters offer great insight into the workings of the S/R sensor, I think you demonstrated well what I have long advocated about shooting the S3

this is a radically different DSLR which requires users to abandon much of what held true for most other DSLRs ...the biggest advantage of the S3's sensor is indeed in getting all the goodies in one shot and one conversion ...indeed it is very hard to beat the S3's jpgs when the settings are aRGB org org off AutoDR 12 MB Fine and one shoots to the right, even pushing the histogram a bit up, which is to say overexposing

I think Fujifilm's engineers designed the sensor to be used in this fashion, as in life details are lost in the dazzle of highlights

it seems they did an outstanding job of combining the output from the two sensors provided one shoots to push the histogram to the right ...in RAW of course one can adjust the mix of S & R input manually, but only rarely do I find this helps the final output, and perhaps only a bit less rarely does one lose any data shooting jpgs in Auto DR (in which the camera decides the mix)

I really think the S3's advantage in jpg stems as much from its 14-bit ADC (presenting better quality data to the jpg engine) as from the extended DR
--
pbase & dpreview supporter
Fuji SLRT forum member since 5/2001
http://www.pbase.com/artichoke
 
I'm glad you are reading.
As an S3 veteran, perhaps you can shed some more light here.
Sure you can do that, but the whole point of the S3 sensor is that
you can get in one shot all the goodies; the thing is to find that
key exposure that balances S&R nicely.
snip
I have read through this thread with interest
while I find it a dubious proposition that third party converters
offer great insight into the workings of the S/R sensor, I think
you demonstrated well what I have long advocated about shooting the
S3
Well, I dare to disagree here.
S7RAW is a heck of a RAW converter.

AFAIK, this is the only program that, at the click of a checkmark, allows you to see what is going on with the S & R channels while you are adjusting the RAW settings.
Please, give it a shot. It truly is a great program.
The more I use it, the more I realize how good it is.
We all have this thing in our minds that if it's free it must be bad.
Well, S7RAW is one of those exceptions, trust me.
this is a radically different DSLR which requires users to abandon
much of what held true for most other DSLRs ...the biggest
advantage of the S3's sensor is indeed in getting all the goodies
in one shot and one conversion ...indeed it is very hard to beat
the S3's jpgs when the settings are aRGB org org off AutoDR 12 MB
Fine and one shoots to the right, even pushing the histogram a bit
up, which is to say overexposing
I think Fujifilm's engineers designed the sensor to be used in this
fashion, as in life details are lost in the dazzle of highlights
Indeed; too bad the S3 doesn't have separate S and R histograms, so you could really get the most out of the exposures.
But as Leo Terra said, the trick is to expose for the shadows.
Phil Askey said something along those lines too, in the DPR S3 review.

But I believe he kept using the S3 like any other ordinary camera, exposing just for the S pixels in the sample and test shots.

From my testing I now know why ACR was delivering more noise: most of the time the R pixels were being underexposed.
S7RAW provided that window to see it happening.
it seems they did an outstanding job of combining the output from
the two sensors provided one shoots to push the histogram to the
right ...in RAW of course one can adjust the mix of S & R input
manually, but only rarely do I find this helps the final output,
and perhaps only a bit less rarely does one lose any data shooting
jpgs in Auto DR (in which the camera decides the mix)
Yes, just in RAW, but I'm clueless as to why you, the S3 master, can't see the benefits of custom S+R mixing.
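
Conceptually, the custom mixing is just a weighted sum of the two channels once the R data has been gained up to match. Here is a minimal Python sketch of the idea; it is not Fuji's or S7RAW's actual algorithm, and the 16x gain figure and the pre-aligned linear arrays are assumptions for illustration:

import numpy as np

def mix_sr(s_img, r_img, r_weight=0.3, r_gain=16.0):
    # s_img, r_img: linear float arrays in [0, 1], already demosaiced
    # and registered; r_gain compensates for the R photodiodes' lower
    # sensitivity (the 16x figure is a placeholder, not a Fuji spec).
    r_scaled = np.clip(r_img * r_gain, 0.0, 1.0)
    # r_weight = 0.0 uses only S data, 1.0 only the gained-up R data.
    return (1.0 - r_weight) * s_img + r_weight * r_scaled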

Please, Arti, go and install S7RAW and spend some time fiddling with it; try it with shots at different exposures, with low-contrast and high-contrast subjects.

Big tip: use the S and R checkmarks to see those 'channels' directly while adjusting settings, and study the Detail tab carefully; it is hugely important for a quality output.
I'm sure you'll like it.
I really think the S3's advantage in jpg stems as much from its
14-bit ADC (presenting better quality data to the jpg engine) as
from the extended DR.
Yeah, have you seen how most if not all of the other cameras struggle to capture flowers accurately?

Flowers are such a difficult subject because many reflect a wide light spectrum, running from the infrared to the ultraviolet.

Even though the S3 is designed for 'visible' light, I think it is the best equipped to deal with such large gamuts.

IMHO, the reason the S3 is not much better for the layman is that producing compressed HDR in an RGB jpeg requires a lot of in-camera processing power, let alone the subjective trade-offs each subject requires.
The good thing is that that power is available when using RAW.

--
 
Been on holiday and just read through all the posts re S/R, and it's very interesting indeed; it certainly highlights the value of this forum, thank you.
 
I was going to stay out of this discussion but I feel this comment may add a different perspective. First, it is an interesting thread and shows a lot of dedicated experimental work by several participants.

But I think one needs to be clear what is being claimed or demonstrated. You (Zarathustra) have said that the SR sensor is equivalent to RrGgBb and I entirely agree. But 6 million sets of RrGgBb is not geometrically equivalent to 12 million sets of RGB. True geometrical 12mp resolution depends on an array of 12 million logically square pixels in a rectangular array. There is no geometrical way in which a 12mp rectangular RGB array can be deduced from a 6mp rectangular array of RrGgBb.

So any perceived increase in clarity through the alternative processing under discussion must arise from a different reason. That reason is to do with the "quality" (in information terms) of the data available to the 6mp array. No camera is able to produce an image which displays physical resolution at the theoretical limit of an electronically and optically perfect array of its nominal number of pixels. This is due to noise, electronic distortion and optical aberrations.

By overexposing and then combining overexposed data from s and r pixels, you will, in certain circumstances, obtain data, averaged across each Rr pair (and similarly for Gg and Bb), such that distortion and noise are reduced relative to either R or r alone or to the default combination method. Thus, adjacent real world points which could not be resolved in the default processed image due to noise or distortion may be resolved in the new image due to reduction of obscuring confusions.

The effect is similar to the improvement gained by scanning a negative at double resolution and downsampling, as compared to scanning at original resolution. Errors are averaged out in the image of higher sample rate. In your example, the samples being averaged are the S and R pixel values for the same geometrical point.
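
The averaging claim is easy to check numerically. A small Python sketch with made-up numbers (not real sensor data), just to show the magnitude of the effect:

import numpy as np

rng = np.random.default_rng(0)
true_value = 0.5                # scene value at one geometric point
sigma = 0.05                    # per-photosite noise (arbitrary)

s = true_value + rng.normal(0.0, sigma, 100_000)   # "S pixel" samples
r = true_value + rng.normal(0.0, sigma, 100_000)   # "R pixel" samples

print(s.std())              # about 0.050: noise of a single sample
print(((s + r) / 2).std())  # about 0.035: averaging the pair cuts noise
                            # by sqrt(2), the same effect as scanning at
                            # double resolution and downsampling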

The additional noise you demonstrated in the S+3EV image may be electronic or optical distortion caused by overload of the s pixel and its amplification chain.

What you are establishing looks like an interesting way of getting a clearer 6mp image in certain circumstances but it is still a 6mp image at below the theoretical limit of 6mp resolution, let alone anywhere near the 12mp limit. That is not to deny that the resulting image may be an improvement with regard to perceived resolution.
 
firstly, I am hardly a Fuji master
but I did try S7RAW, which is indeed free
we discussed this converter a great deal here about 1 year ago

after fairly intensive testing, Dillon James (who for reasons inexplicable was banned from this forum) felt it did a superior job with RAW conversions at high ISO only ...I played around with it, but found it inferior to HU2 for color transitions and tonal range at low ISO (as did Dillon), and furthermore found the S3's jpgs superior at high ISO when shot in aRGB org org off 12MB AutoDR Fine ...I may not have played around with it sufficiently, but since I am quite pleased with the high ISO jpgs I get and would almost never shoot high ISO in RAW with the S3, I just don't see the reason to try it out further

I still think Fujifilm has a lock on how to convert files from the sensor they designed

S7RAW's main advantage seems to be its price, though the HU2 converter came bundled with my camera
--
pbase & dpreview supporter
Fuji SLRT forum member since 5/2001
http://www.pbase.com/artichoke
 
Library well stocked! I too have CS2, plus HU2, Nikon Capture and now S7RAW. Forgive me for making such a silly suggestion -- I just thought if you really wanted to try out a program available only for PC, it might work to satisfy your curiosity, even if it was painfully slow.

On my dual-core PC laptop with 2 gigs of RAM, HU and S7RAW conversions only take about 7-10 seconds to process, even if I'm running Illustrator, Firefox, PS and the raw converter at the same time. So I would imagine that a similarly outfitted Apple would certainly move that fast, possibly even through a virtual PC program. What kind of computer are you using? If your performance is lagging, I highly suggest the new dual-core technology -- it will fulfill any need for speed :).

Regards,
Crystal

--
http://treehuggergirl.zenfolio.com/
 
Library well stocked! I too have CS2, plus HU2, Nikon Capture and
now S7RAW. Forgive me for making such a silly suggestion -- I just
thought if you really wanted to try out a program available only
for PC, it might work to satisfy your curiosity, even if it was
painfully slow.
It's not a silly suggestion, but it might be a long way to go to find out the store is closed... know what I'm saying? ;-)
On my dual-core PC laptop with 2 gigs of RAM, HU and S7RAW
conversions only take about 7-10 seconds to process, even if I'm
running Illustrator, Firefox, PS and the raw converter at the same
time. So I would imagine that a similarly outfitted Apple would
certainly move that fast, possibly even through a virtual PC
program. What kind of computer are you using? If your performance
is lagging, I highly suggest the new dual-core technology -- it
will fulfill any need for speed :).
1.67GHz dual, 2 gigs of RAM... waiting for the release of PS 10, which will run native on dual-core Intel ;-)
By then it will also be OS 10.5, and I doubt Virtual PC will be supported.
FOTOMAT
http://www.fotomat.net/screensaver/collection01_gal.html
 
Little head? Why the rudeness?

Both Phil Askey and Thom Hogan have said that the S3 DOES NOT use an adjacent R AND S pixel for resolution. It's the R OR the S pixel. Thom Hogan speculated that someone might come up with a way to use adjacent pixels for resolution. But as far as I know, S7RAW does not do this. S7RAW simply pulls two separate S and R images, and then blends them. Why don't you email the creator of S7RAW and ask him?

The R pixels are quite limited in what they can do. There's a whole lot of noise in those tiny digicam-sized pixels, so they are most useful at low ISO, i.e., bright conditions. That's why HU applies such heavy-handed noise reduction, as documented by Phil Askey in his excellent review. Other reviewers have commented on this detail smearing as well.

And there is absolutely NO way your F700 could ever do that, as the pixels are both under one microlens, both receiving the same spatial data. I know, I have an F700 as well.


As you can see, the R pixel in the S3 is like another pixel,
occupying an exclusive space in the sensor, just like any other
photosite, but with less sensitivity.
If you make them active by overexposure they will contribute
more resolution and less noise.

In addition this should be even more true for the S3 than the F700,
because the F700's SR sensor shares the same microlens for each SR
pair.

Theories aside, even having a shared microlens I can see from my
tests the improvement in resolution and noise.

The idea is to take the exposure in which both S & R grab the
detail and overlap in the area of the image where you need the
extra detail.

I don't need to argue with you about this.
Download S7RAW, do your tests; the truth is right there.
The two sets of sensors, R and S, have nothing to do with resolution, only with dynamic range. If you're underexposing, the noise will interfere with perceived resolution (perceived resolution is definitely a function of noise). That is why, no matter what the camera, you should always expose as far to the right of the histogram as possible, without overexposing. This way, you get more dynamic range where it counts, at the bottom, where there is very little data devoted to shadow areas. That's the way digital works. The extra set of sensors helps achieve dynamic range, nothing more. Resolution may appear greater when you expose more carefully, because you're recording more USABLE data, but not more resolution. One big help for better resolution is better lenses. Use the very best lenses you can for digital. Crappy lenses really show, sometimes very badly. A great lens always shines.
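
The "very little data devoted to shadow areas" point can be made concrete: with a linear ADC, each stop down the histogram gets half the code values of the stop above. A quick Python illustration for a 14-bit converter like the S3's (the stop boundaries are idealized):

# Code values available per stop for an idealized linear 14-bit ADC.
total = 2 ** 14                      # 16384 levels
for stop in range(6):
    upper = total // (2 ** stop)
    lower = total // (2 ** (stop + 1))
    print(f"stop {stop} below clipping: {upper - lower} levels")
# Prints 8192, 4096, 2048, 1024, 512, 256: the deep shadows get a tiny
# fraction of the levels, which is why exposing to the right records
# more usable data.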
--
 
Little head? Why the rudeness?
Dude, chill out, I didn't really mean it.
Both Phil Askey and Thom Hogan have said that the S3 DOES NOT use
an adjacent R AND S pixel for resolution. It's the R or** the S
pixel. Thom Hogan speculated that someone might come up with a
way to use adjacent pixels for resolution. But as far as I know,
s7raw does not do this. S7 raw simply pulls two seperate S and R
images, and then blends them. Why don't you email the creator of
S7raw and ask him?
If you heard that DIRECTLY from the S7RAW author, then that would put an end to the discussion about the independence of the R photosites.
But, chances are that he doesn't even know.
He might be using some SDK or DLLs supplied by Fuji.
Or some kind of DLL hack.
The R pixels are quite limited in what they can do. There's a whole
lot of noise in those tiny digicam-sized pixels, so therefore, they
are most useful at low ISO, i.e, bright conditions. That's why HU
applies such heavy handed noise reduction, as documented by Phil
Askey in his excellent review. Other reviewers have commented on
this detail smearing as well.
Not a secret. I don't see the point here.
And there is absolutely NO way your F700 could ever do that, as
the pixels are both under one microlens, both receiving the same
spatial data. I know, I have an F700 as well.
Well, going by your logic, no sensor would be able to resolve more than one pixel, since all the pixels are under one big lens.

What the microlens does is compensate for the area lost to the adjacent supporting circuitry. IOW, microlenses help the sensor increase its 'fill factor'.

They just 'funnel' the light of that area onto the discrete pair of S & R photodiodes, just like a regular camera lens does with the whole sensor or film.

As long as the Circle of Confusion is equal to or smaller than the R photodiodes, I don't see why the SR sensor can't resolve them as discrete pixels.
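
For what it's worth, the diffraction-limited spot size is easy to estimate. A rough Python sketch, where the 5 micron R photodiode pitch is purely an assumed figure for illustration, not a published Fuji spec:

# Airy disk diameter (to first minimum): d = 2.44 * wavelength * N.
wavelength = 550e-9     # green light, in metres
pitch = 5e-6            # assumed R photodiode pitch (illustrative only)

for n in (2.8, 4.0, 5.6, 8.0, 11.0):
    airy = 2.44 * wavelength * n
    verdict = "<= pitch" if airy <= pitch else "> pitch"
    print(f"f/{n}: Airy disk {airy * 1e6:.1f} um ({verdict})")
# Wherever the spot stays at or below the pitch, there is no optical
# reason the R sites could not be resolved discretely.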

Now, judging by the amount of moire I can see when I uncheck the moire reduction in S7RAW, I am led to believe that the AA filter in the F700 is very weak, weak enough to let the lens reach a small CoC, as small as the R diodes.

BTW, have you ever heard about sub-pixel demosaicing algorithms?

They are probably doing something similar here, given the peculiar array of the Fuji Super CCD SR sensors.

Slick!

--
 
Interesting.
This could be a possible explanation.

I was also wondering if the demosaicing routines are introducing this blur-like look due to interpolation of 'missing data'.

Thanks for posting.
Sorry the images are so big but I wanted y'all to easily see the
difference.
To me it looks like your second image suffers from a lot of motion
blur. To see what I mean, look at what the Focus Magic "Motion Blur"
filter can do with a little trial and error to find the best angle
and radius to correct it (it could be further improved with a
little more patience):



The first image seems to have some blur caused by camera shake as
well, but seems to be a little out of focus too.

Can you redo your tests with a tripod and manual focus (or lock
focus after autofocusing), just to be sure there are no focus or
movement-induced blur differences between the images?

Thanks,

Marcos
--
 
I was going to stay out of this discussion but I feel this comment
may add a different perspective. First, it is an interesting thread
and shows a lot of dedicated experimental work by several
participants.
Thanks
But I think one needs to be clear what is being claimed or
demonstrated. You (Zarathustra) have said that the SR sensor is
equivalent to RrGgBb and I entirely agree. But 6million sets of
RrGgBb is not geometrically equivalent to 12 million sets of RGB.
True geometrical 12mp resolution depends on an array of 12million
logically square pixels in a rectangular array. There is no
geometrical way in which a 12mp rectangular RGB array can be
deduced from a 6mp rectangular array of RrGgBb.
Who cares? Nobody is counting!

The bottom line is that, for whatever reason, a bit of extra quality can be achieved by overexposing; the reason why is just an intellectual exercise, or rather a speculation exercise, given the lack of official technical information about the SR sensor technology.
So any perceived increase in clarity through the alternative
processing under discussion must arise from a different reason.
That reason is to do with the "quality" (in information terms) of
the data available to the 6mp array. No camera is able to produce
an image which displays physical resolution at the theoretical
limit of an electronically and optically perfect array of its
nominal number of pixels. This is due to noise, electronic
distortion and optical aberrations.
I can agree with this.
Less noise = more detail, simple.
By overexposing and then combining overexposed data from s and r
pixels, you will, in certain circumstances obtain data, averaged
across each Rr pair (and similarly for Gg and Bb) such that
distortion and noise are reduced relative to either R or r alone or
to the default combination method. Thus, adjacent real world points
which could not be resolved in the default processed image due to
noise or distortion may be resolved in the new image due to
reduction of obscuring confusions.

The effect is similar to the improvement gained by scanning a
negative at double resolution and downsampling, as compared to
scanning at original resolution. Errors are averaged out in the
image of higher sample rate. In your example, the samples being
averaged are the S and R pixel values for the same geometrical
point.

The additional noise you demonstrated in the S+3EV image may be
electronic or optical distortion caused by overload of the s pixel
and its amplification chain.
That is what I thought. Something along those lines.
What you are establishing looks like an interesting way of getting
a clearer 6mp image in certain circumstances but it is still a 6mp
image at below the theoretical limit of 6mp resolution, let alone
anywhere near the 12mp limit. That is not to deny that the
resulting image may be an improvement with regard to perceived
resolution.
How come you are so sure that the S and R pixels are being resolved in pairs rather than as discrete pixels?

The 45-degree tilted array and its octagonal formation have nothing to do with the ability to read those SR pixels separately while keeping track of their discrete physical locations.

Sony has some sensors with RGBE, and others have used complementary CYGM filters.

Some printers have eight inks, CcMmYyKk, and that doesn't stop them from using those droplets to resolve more dots independently. They don't have to pair C with c, Y with y, etc.
You can think of the SR sensor as having a lighter RGB set and a darker rgb set.

Unless someone can confirm it from an official source, or someone has a way to prove it, all we can do is argue about its theoretical feasibility.

Bottom line:
The reasons are secondary; the facts remain:
Overexposing F700 and S3 shots increases quality.

Too bad more people aren't posting comparison tests to confirm and understand this issue.

--
 
I did shoot on a tripod (albeit a not so good one) at 1/20 sec. But now I think there should be no reason to have to use a slow shutter speed -- we should be able to get similar results at faster shutter speeds. When I have a chance, sometime this weekend, I will reshoot (once at the same shutter speed, once fast enough to avoid motion blur), using a shutter delay to see if mirror shake is playing a role, too.

--
http://treehuggergirl.zenfolio.com/
 
I was also wondering if the demosaicing routines are introducing
this blur-like look due to interpolation of 'missing data'.
Don't think so. I can't think of an explanation for a 5-pixel blur at 115 degrees resulting from demosaicing. Fuji's demosaicing is actually quite good, even for cameras that don't have R pixels, and images from "normal" SuperCCD cameras do not look as blurred as Crystal's examples, except in cases of camera shake or misfocus. That's why I would like to see the test repeated under more controlled conditions.

Marcos
 
Hello folks,
Took this shot in RAW at ISO 800 and exposed it one stop over.
I made the following conversions:
1- A regular conversion in S7RAW with 100% mixing of S&R.

2- A double conversion in S7RAW, one for the S and one for the R pixels, with the results mixed manually.

3- A conversion in CS for highlights and a conversion for midtones, with the two conversions mixed.

Now, I'm NOT a good blender, as I have very limited experience blending exposures. I did the two conversions in CS just because someone may say: well, if you have to go through the trouble of mixing S and R, why not mix two exposures in CS?

No curves or color enhancement were applied in these conversions. One thing I found out after uploading is that for the CS conversion, contrast was set at 10 and sharpening and color noise reduction at 25; sharpening and contrast were set at 0 in S7RAW.
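
For anyone curious what the manual blend amounts to, here is a rough Python sketch using a simple luminosity mask. The file names are made up, and both conversions are assumed to be aligned 8-bit RGB files; a real blend would of course be done more selectively:

import numpy as np
from PIL import Image

# Hypothetical file names for the two aligned conversions.
mid = np.asarray(Image.open("conversion_midtones.tif"), dtype=np.float32) / 255.0
hil = np.asarray(Image.open("conversion_highlights.tif"), dtype=np.float32) / 255.0

# Luminosity mask from the midtone conversion: the brighter the pixel,
# the more weight the highlight-preserving conversion receives.
mask = mid.mean(axis=2, keepdims=True)
blend = (1.0 - mask) * mid + mask * hil

Image.fromarray(np.uint8(np.clip(blend, 0.0, 1.0) * 255.0)).save("blended.tif")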

I know which one seems the best of the bunch to me, and I may try it in the future with some shots. You may consider any one of them to have produced a better result if it looks better to you. There were two observations, though:

1- When the highlights start to overexpose, it seems to be the S pixels that give up first; the R pixels stayed robust well over 2 1/2 stops. Will we see three sets of photosites in the S4: R, S and M for midtones?! :)
2- You can notice the much-discussed CS noise in the R pixels.

The first shot is a regular S7RAW conversion, the second is a manually blended S&R, the third is the CS conversion to preserve the highlights, and the last is a 100% comparison between the shot blended from the S&R conversions and the shot blended from the CS conversions; I think you can tell them apart.
No other changes in post were made, except for resizing.
Best regards.
 
