bobn2

Lives in Worcestershire, United Kingdom
Joined on Aug 28, 2007

Comments

Total: 115, showing: 1 – 20
On article Take a look inside Leica's factory in Wetzlar, Germany (136 comments in total)

Thank you especially for photo 15, which should come in useful the next time a forum expert decides to tell me that Leica works in Imperial units.

Link | Posted on Nov 10, 2017 at 14:33 UTC as 2nd comment
In reply to:

bobn2: 'How did Sony do this given the already low levels of read noise its known for? Possibly by going to better or higher native bit-depth ADCs,' - unlikely, if, as you reported, the sensor is the same. More likely they have cleaned up the power supplies a bit, which has lowered the reset noise. It has often been the case that Nikon gets a little more out of Sony sensors than Sony does, and the speculation at the time was that they used a different version. It has emerged since that they didn't (for instance, the D3X used the same sensor as the A900, not a Nikon special); a lot of the difference was cleaner power supplies, which are quite critical in these sensor chips.

'Has it been confirmed that the D850 sensor is manufactured by Sony?'
Who's going to confirm it, until TechInsights does its teardown and finds 'Sony' on the silicon? All the evidence is that it is a Sony sensor. It has DRPix technology, which is available only to Sony and ON Semiconductor. It is a FF BSI sensor, which so far only Sony has managed technologically. The layout of the chip looks just the way Sony's design team always does it, which is nothing like anyone else's. It performs almost identically, at the pixel electronic level, to the D500 sensor, which is confirmed as Sony. Sony provides sensors for most of Nikon's cameras. It's a Sony.

"If the Nikon ISO 64 is due to the ability of the 36MP and 46MP sensors to accept more total charge, why is it not possible for Sony to do the same with the 42MP sensor?"
Who says it isn't possible? It looks like Sony just decided not to do it.

Link | Posted on Nov 3, 2017 at 20:41 UTC

At your next editorial meeting, perhaps you could discuss a small amendment to your house style to use a better notation for f-numbers. The 'F2.8' notation you use confuses a lot of people into thinking that aperture is measured in units of F. Hasselblad gets it right: in their release above they refer to 'The XCD 135mm f/2.8' and so on. This is easy for people to understand as a formula, where 'f' is the focal length. I discussed this once with one of your staff (as I remember, Richard), who agreed that the 'f/2.8' notation is clearer, but said he couldn't do much because it was your house style. Maybe you could change your house style?
A small point maybe, but anything that makes it easier for people to understand should be welcomed.
Sorry, a bit OT here, but seeing the contrast between what you wrote and Hasselblad's release brought it to mind.

Link | Posted on Nov 3, 2017 at 16:24 UTC as 26th comment | 4 replies
In reply to:

bobn2: 'The a7R III, like many Sony predecessors, has a second gain step at the pixel level that amplifies signal, at the cost of higher tones, to preserve higher signal, and less noise, in dark tones.' - that's not what it's doing. It's a change in capacitance of the pixel, which changes the relationship between the pixel charge and the downstream voltage, thus making the electronic noise look like fewer photoelectrons. It isn't a second gain step.

Hi Rishi, 'conversion gain' is a bit of a misleading term, though a common one. The DRPix circuit is quite simple. The relationship of charge to voltage is simply set by the capacitance at the read node, via the well-known equation V = Q/C, so the smaller C is, the more V you get for each Q (in this case, the charge of one electron). So the idea of 'gain' is quite misleading (it almost always is, with respect to sensors). What matters here is how much apparent electron noise each dollop of electronic noise is worth: if C is small, a volt of electronic noise looks like not much electron noise; if it's large, it looks like more.
The reason you can't just make a sensor with low C is that it then can't accept much charge, so its base ISO is high. The DRPix trick gives the best of both by allowing an extra capacitor to be switched in: big C for low base ISO, small C for low apparent electron noise in low light.
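
To put rough numbers on that, here's a minimal Python sketch; the capacitance and noise figures are invented purely for illustration, not taken from any real sensor:

```python
# Minimal illustration of input-referred noise versus sense capacitance.
# All figures are invented for illustration, not measurements of any sensor.
Q_ELECTRON = 1.602e-19  # charge of one electron, in coulombs

def apparent_noise_electrons(noise_uV, cap_fF):
    """Express a fixed downstream voltage noise as equivalent electrons.

    From V = Q/C, one electron is worth q/C volts at the read node, so the
    same voltage noise corresponds to fewer electrons when C is small.
    """
    uV_per_electron = Q_ELECTRON / (cap_fF * 1e-15) * 1e6
    return noise_uV / uV_per_electron

downstream_noise_uV = 300.0  # the same electronic noise in both modes
for label, cap_fF in [("big C (low base ISO mode)", 4.0),
                      ("small C (low-light mode)", 1.0)]:
    electrons = apparent_noise_electrons(downstream_noise_uV, cap_fF)
    print(f"{label}: {electrons:.1f} e- of apparent noise")
```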

Link | Posted on Oct 31, 2017 at 21:35 UTC

'The a7R III, like many Sony predecessors, has a second gain step at the pixel level that amplifies signal, at the cost of higher tones, to preserve higher signal, and less noise, in dark tones.' - that's not what it's doing. It's a change in capacitance of the pixel, which changes the relationship between the pixel charge and the downstream voltage, thus making the electronic noise look like fewer photoelectrons. It isn't a second gain step.

Link | Posted on Oct 31, 2017 at 20:13 UTC as 59th comment | 2 replies

'How did Sony do this given the already low levels of read noise its known for? Possibly by going to better or higher native bit-depth ADCs,' - unlikely, if, as you reported, the sensor is the same. More likely they have cleaned up the power supplies a bit, which has lowered the reset noise. It has often been the case that Nikon gets a little more out of Sony sensors than Sony does, and the speculation at the time was that they used a different version. It has emerged since that they didn't (for instance, the D3X used the same sensor as the A900, not a Nikon special); a lot of the difference was cleaner power supplies, which are quite critical in these sensor chips.

Link | Posted on Oct 31, 2017 at 20:13 UTC as 60th comment | 2 replies
On article Here's why your beloved film SLR is never going digital (285 comments in total)

Nice article, Richard. Actually this is something I've looked at hard. In fact, it's not too hard to convert your old SLR to digital (the first DSLRs were just that); what would be more difficult would be to make a drop-in digital film which doesn't involve butchering the camera. What the 'I'm Back' seems to do is place a translucent screen in the film gate and then have a digital camera take a photo of that, which is not an excellent solution. I wondered, though, whether there would be a market for a converted SLR. There's a fellow called Huw Finney who once went halfway through a conversion of a Leica M2. The real problem with it all, though, is that what you end up with would be less functional and more expensive than a modern DSLR.

Link | Posted on Oct 11, 2017 at 13:31 UTC as 98th comment
On article Yashica is teasing a comeback to the camera market (299 comments in total)

Look over here:
https://www.yashica.com/ourglory
It seems there is a Yashica product: a wide-angle adapter for smartphones with claimed 4K capability.
It gives some specs:
- High Resolution: up to 20M pixels, meets all shooting requirements
- Unique Aspheric Lens Design: eliminates dark corners & blurry edges
- Crisp & Colourful Image: HD optical glass with an extra-high transmission rate
- Universal Clip: for any smartphone
- Multi-functional Clip Bag: suede clip bag can be used for both storage & lens cleaning

Link | Posted on Sep 15, 2017 at 17:57 UTC as 89th comment | 1 reply
On article Nikon D850 Review (2116 comments in total)
In reply to:

calson: Not a proper test. If there is noise at ISO 64 when the EV is adjusted in processing then the image was underexposed. Nikon DSLRs since the D3 (2007) have produced better image files from overexposure than underexposure. Recognition of this is why the advice has been to expose to the right on the histogram.

Take any camera and underexpose by 1EV and then overexpose by 1EV and the overexposures will nearly always produce the better image after converting the RAW file. With an underexposure there is less real data in the file and so adjusting the EV after the exposure will of course increase the visible noise in the file.

The D850 is evidently more forgiving of a bad exposure made by its user. Nikon with the D3 and subsequent cameras designed them to be more forgiving of overexposed elements in a scene (less data lost) as when photographing a bride in her white dress in full sunlight and not blowing out her dress.

@calson - 'If there is noise at ISO 64 when the EV is adjusted in processing then the image was underexposed.'
However the image is exposed, there will be a low end of the exposure range. The test just shows how much noise there is in that low end.
'Nikon with the D3 and subsequent cameras designed them to be more forgiving of overexposed elements in a scene (less data lost) as when photographing a bride in her white dress in full sunlight and not blowing out her dress.'
I think you're just referring to the raw headroom available. Nikon's is pretty much average for the class.

Link | Posted on Sep 11, 2017 at 19:34 UTC
On article Nikon D850 Review (2116 comments in total)
In reply to:

HenryDJP: The DR on my D810 is amazing! I love it, but if I was considering the D850 I would be even more excited with this camera had Nikon gave it a hybrid viewfinder. The best of both worlds of EVF and OVF. Truth be told, even with mirrorless advances I would go for an OVF any day of the week first. I never understand why Canon users continue to buy Canon's DSLR's when the DR sucks so badly?

I think that a hybrid finder would have made the VF quite a bit larger and resulted in a loss of light from the DSLR VF. Essentially it would need a beam splitter before the eyepiece (thus the light loss) and a display squeezed in there somewhere.

Link | Posted on Sep 11, 2017 at 16:08 UTC
On article Nikon D850 Review (2116 comments in total)

"Theoretically we might also see some benefit in performances in the corners with wide-angle lenses, but that doesn't come into play with the 85mm lens we use for these tests."
It's not so much to do with the angle of view as with the position of the exit pupil, and pretty much every wide angle designed for digital has its exit pupil a long way from the sensor (that's why they are so big). Flip the mirror up and put on a Hologon and you might see a difference.
The advantage of the BSI sensor might be less low-f-number shading, due to faster microlenses. It would be interesting if you could find a way to test for this. It's known that manufacturers correct for this shading in the signal processing, so what you'd want to look for is the noise: is it producing less noise than a D810 at f/1.4 and the same shutter speed?
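
For what it's worth, a rough sketch of such a test, assuming the rawpy package and two hypothetical raw files shot at the same f-number and shutter speed, might look like this:

```python
# Rough sketch of the suggested test: compare corner-patch noise in two raw
# files shot at the same f-number and shutter speed. The file names are
# hypothetical, and a proper test would photograph a uniform target and
# treat the Bayer channels separately.
import numpy as np
import rawpy

def corner_noise(path, patch=200):
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        black = np.mean(raw.black_level_per_channel)
        corner = data[:patch, :patch] - black  # top-left corner patch
        return corner.std()  # crude noise proxy for a flat patch

print("D850 corner noise:", corner_noise("d850_f1p4.NEF"))
print("D810 corner noise:", corner_noise("d810_f1p4.NEF"))
```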

Link | Posted on Sep 11, 2017 at 16:05 UTC as 297th comment

I notice also that if you go to the Impossible Project website, it says 'The Impossible online store will be closed from September 10th – September 13th. Watch this space.' Interesting coincidence of dates. Looks like it will be rebranded.

Link | Posted on Sep 9, 2017 at 21:00 UTC as 9th comment

Looking at the diagrams, those don't look like a FF 52mm and 36mm; the angle of view looks quite small, more as though they are telephotos for the 1 system, especially given that they are both telephoto designs with negative power at the back. Mind you, they have huge negative power at the front, too. But the patent does give an image height of 21.6mm, which would be FF. Odd.

Link | Posted on Sep 7, 2017 at 19:33 UTC as 35th comment | 3 replies
In reply to:

photoMEETING: I can't follow the line of arguments in that article.

DR is depending on two factors:
1. Full well capacity of the sensor [e-]
2. Readout noise of the A/D electronics [e-]

There is a formula to calculate DR from this two values.

Nothing about bit depth so far. :)

Another story is how many distinctive brightness levels are beeing saved in that given physical range of the sensor. The more bits per pixel you have, the more different shades of gray you can differentiate.

I can easily imagine an image of a sensor with a DR of only 8EV, but an 16 bit output with a rich differentiation of the brightness shades.

There can also be an image with a 14EV DR, but an 8bit output only. Many 8-bit-JPGs, developed from a ISO 100 NEF file out of a Nikon D750, are pretty good examples!

Probably, the artice has to be revised.

@photoMEETING
"I have not forget the issue of noise.
To keep it simple, I made some qualitative, not quantitative statements."
Then let me rephrase. You don't understand the consequences of noise.
"Adding some readout noise may changing some numbers, but not the quintessence."
In fact, it entirely changes the 'quintessence', since in the presence of noise you cannot get an accurate determination of what the lower bits are.
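
A quick simulation, using invented figures for the full well and read noise, shows the point: below the noise floor the low-order bits carry essentially no information.

```python
# Toy demonstration, with invented numbers: in the presence of read noise the
# lowest bits of the digitised value are set by the noise, not the signal.
import numpy as np

rng = np.random.default_rng(0)
full_well_e = 60000    # hypothetical full-well capacity, electrons
read_noise_e = 4.0     # hypothetical read noise, electrons RMS
print(f"engineering DR: {np.log2(full_well_e / read_noise_e):.1f} stops")

true_signal_e = 3.0    # a signal sitting near the noise floor
samples = rng.normal(true_signal_e, read_noise_e, size=100_000)
digitised = np.round(samples).astype(int)   # pretend 1 count = 1 electron
lsb = digitised & 1                         # the least significant bit
print(f"fraction of odd LSBs: {lsb.mean():.3f}  (~0.5, i.e. pure noise)")
```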

Link | Posted on Sep 2, 2017 at 16:34 UTC
In reply to:

Kubicide: Sorry but this article simply doesn't make sense! A sensor DR cannot 'outstrip it's bit depth' whatever that means.

Dynamic range is determined by the source (the sensor). The bit depth or sampling rate will not affect the sensor's dynamic range by either expanding it or reducing it. Using higher bit depths is always better as it helps when reproducing the image precision and accuracy (of the color, brightness, darkness, etc.). Increasing the bit depth means the sampling rate has changed so that the subtle shifts in color/tint/hue/etc. within the dynamic range of the scene can be reproduced more faithfully. But using a 12-bit or 14-bit RAW won't result in different dynamic ranges of the same image; if it's xx dB then it will remain xx dB.

Perhaps the author has confused terms, or is trying to describe something other than dynamic range or bit depth (?)

@Kubicide
'But back to DR: it will not be affected by changing the menu setting from 12-bit to 14-bit RAW.'

Typically DR is indeed affected by changing the menu setting from 12-bit to 14-bit raw, if the camera is capable of delivering more than 12 stops of DR in the first place. In fact, the effect is not straightforward, because what happens is that at the 12-bit setting the read noise gets undersampled, which effectively means that quantisation 'noise' is added on top of the read noise (and any other 'noises' that hang around at the noise floor). The result is that the character of the shadow noise changes somewhat if you use 12 bits on a camera capable of high DR.
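
A toy simulation of that undersampling, with made-up numbers (3 e- of read noise, and a 12-bit step four times coarser than the 14-bit one), might look like this:

```python
# Toy simulation, with made-up numbers, of read noise being undersampled by a
# coarse 12-bit quantiser versus a finer 14-bit one. Not modelled on any
# particular camera.
import numpy as np

rng = np.random.default_rng(1)
read_noise_e = 3.0        # hypothetical read noise, electrons RMS
shadow_signal_e = 10.0    # a deep-shadow mean signal, electrons
signal = rng.normal(shadow_signal_e, read_noise_e, size=1_000_000)

for label, e_per_dn in [("14-bit", 1.0), ("12-bit", 4.0)]:  # assumed gains
    dn = np.round(signal / e_per_dn)       # quantise to raw counts
    recovered = dn * e_per_dn              # back to electrons
    total = recovered.std()
    added = np.sqrt(max(total**2 - read_noise_e**2, 0.0))
    print(f"{label}: shadow noise {total:.2f} e- (quantisation added {added:.2f} e-)")
```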

Link | Posted on Sep 2, 2017 at 12:09 UTC
In reply to:

Kubicide: Sorry but this article simply doesn't make sense! A sensor DR cannot 'outstrip it's bit depth' whatever that means.

Dynamic range is determined by the source (the sensor). The bit depth or sampling rate will not affect the sensor's dynamic range by either expanding it or reducing it. Using higher bit depths is always better as it helps when reproducing the image precision and accuracy (of the color, brightness, darkness, etc.). Increasing the bit depth means the sampling rate has changed so that the subtle shifts in color/tint/hue/etc. within the dynamic range of the scene can be reproduced more faithfully. But using a 12-bit or 14-bit RAW won't result in different dynamic ranges of the same image; if it's xx dB then it will remain xx dB.

Perhaps the author has confused terms, or is trying to describe something other than dynamic range or bit depth (?)

@Jetfly - I would caution against getting 'the basics' from Cambridge in Colour; they are very often wrong, though in the quote you use I can't see an error, and in any case it's not particularly relevant to this discussion.

@theongu: "No!!!! You take the entire Dynamic Range (considering perceptual encoding, sampling, etc.) Then you divide that range by 4096 for 12-bit or divide by 16384 for 14-bit. That is the number of steps to get from darkest to brightest in that original range (Dynamic Range). Adding an extra bit does not add 1EV!"
I'm beginning to see why we're going round in circles here. The qualifier that Richard didn't add is 'if the sensor information is to be completely encoded'. Where he's also right is that the unencoded information will always be the least significant bits.
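
To make the encoding point concrete, here's a tiny sketch (ignoring noise, and counting stops down from clipping) of how a linear raw encoding shares out its codes; the exact counts depend on where you draw the stop boundaries:

```python
# How a linear encoding allocates its codes: the brightest stop takes half of
# them and each stop further down gets half as many, so extra bits mainly
# extend the encodable range at the dark end.
for bits in (12, 14):
    codes_per_stop = [2 ** (bits - n) for n in range(1, 13)]  # stops 1..12 from the top
    print(f"{bits}-bit, codes per stop (brightest first): {codes_per_stop}")
```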

Link | Posted on Sep 2, 2017 at 11:57 UTC
In reply to:

pulsar123: There are situations when having more bits than what is strictly necessary from the dynamic range perspective can be beneficial. For example in stacked photographs (astrophotography etc.), where one can recover useful signal hiding well below the "noise floor". Similarly, one can do pixel binning of a single noisy photo to reduce the effective signal-to-noise ratio (by sacrificing the effective resolution) - this again requires the noise to be resolved with as many bits as possible.

Yes indeed. The exposure is increased by adding additional exposures. We are in agreement, just looking from slightly different angles.

Link | Posted on Sep 2, 2017 at 11:49 UTC
In reply to:

LP0WELL: Pretty typical of a DPReview article, starts out with factual information, but drops the ball at the end. The final conclusion, "If your camera doesn’t capture more than 12 stops of DR, you probably shouldn’t clamor for 14-bit Raw" is myopic and misleading.

When the camera encodes a 12-stop DR in a 12-bit RAW file, just as the article states, the brightest stop of highlights is encoded using half the steps of the 4096-step 12-bit range. But take a look at that darkest 12th stop of shadow detail (first diagram) - it's encoded with just 2 steps of detail, which makes it nothing more than a dark silhouette. With a 14-bit RAW image, that 12th stop of DR would be encoded with 8 levels of precision, preserving four times as much shadow detail as a 12-bit RAW image.

As for misnaming noise 'random modulation', that is exactly what noise is. If you want to assert otherwise, please do give us your definition of noise. I'm sure we could all do with some light relief.
As for 'a dilettante's grasp of DSP engineering', you haven't a clue how silly you are being. One of the strange things about this kind of conversation is the ridiculous debating tactics used by people who insist on spreading misinformation, which seems to include you.

Link | Posted on Sep 2, 2017 at 11:19 UTC
In reply to:

LP0WELL: Pretty typical of a DPReview article, starts out with factual information, but drops the ball at the end. The final conclusion, "If your camera doesn’t capture more than 12 stops of DR, you probably shouldn’t clamor for 14-bit Raw" is myopic and misleading.

When the camera encodes a 12-stop DR in a 12-bit RAW file, just as the article states, the brightest stop of highlights is encoded using half the steps of the 4096-step 12-bit range. But take a look at that darkest 12th stop of shadow detail (first diagram) - it's encoded with just 2 steps of detail, which makes it nothing more than a dark silhouette. With a 14-bit RAW image, that 12th stop of DR would be encoded with 8 levels of precision, preserving four times as much shadow detail as a 12-bit RAW image.

"LOL that is the most clueless question I've ever read on DPReview. What do you think the data read from an image sensor is comprised of, pixie dust?"
Before writing something like that, it's probably best to ensure that you know what you're talking about; otherwise you make a fool of yourself. Of course, you might be happily unaware of that, but it happens anyhow.
A pixel describes the exposure at a point; it is a scalar, so it can't describe 'detail'. That happens in the context of multiple samples over a distance, and the frequency of those samples (what we in signal processing call the 'sampling frequency') determines how much 'detail' can be captured.

Link | Posted on Sep 2, 2017 at 11:19 UTC
In reply to:

Raist3d: @Richard/Bob2- interesting article Richard and something tricky to explain.
Question for you and Bob2- I am still having a hard time with the "more bits do not improve tonality" since if we do a mental exercise of having raw files from less bit depth it becomes very obvious fast we can only represent less and less tones.

Or maybe you just mean that at the point we are dealing with 12 bits compared to 14, the importance in tonality differences is very small compared to the DR retain advantage? (Asumming sensor capable of capturing that range of dr). Actually in a third read ;-) I think this is what you meant? That would make sense to me.

Thanks for the article and thanks Bob2 for chirping in with other explanations.

'Yes, fundamentally if the camera can capture only 16384 levels of gradation. Adding more bits will not add more details. But that is not what other commenters object to.'
Different commenters are objecting to different things. None of them seems to understand the information-theoretic background, and instead they are making up misleading thought experiments.
But I see from your comments that we are essentially in agreement; you just have a bit of an objection to the way Richard has crafted the article. I write articles on similar subjects, and I know how hard it is to pace and explain things for the lay reader. He's done a very good job, in my opinion.

Link | Posted on Sep 2, 2017 at 10:35 UTC