The Nikon Z9 Launches a New Era for Photography....

Can someone explain the difference to me? They sound like the same goals are meant to be achieved, so is this more marketing or something real? Thanks.
Lossless compressed = 14-bit, full-res RAW files with no loss in quality whatsoever.

High Efficiency * = 14-bit, full-res RAW files which Nikon says is indistinguishable from lossless, but some data is being thrown away.

High Efficiency = 14-bit, full-res RAW files with losses that may be perceived.

I'll let anyone decide if this is marketing. I don't really consider it any more marketing than my D750 not explicitly saying "lossy" for the "compressed RAW" option.
 
Can someone explain the difference to me? They sound like the same goals are meant to be achieved, so is this more marketing or something real? Thanks.
Lossless compressed = 14-bit, full-res RAW files with no loss in quality whatsoever.

High Efficiency * = 14-bit, full-res RAW files which Nikon says is indistinguishable from lossless, but some data is being thrown away.

High Efficiency = 14-bit, full-res RAW files with losses that may be perceived.

I'll let anyone decide if this is marketing. I don't really consider it any more marketing than my D750 not explicitly saying "lossy" for the "compressed RAW" option.
Yes, Ricci is still doing tests, but he says he’s not really seeing a considerable difference between Lossless compressed and High Efficiency. When people hear that, they seem to think it’s some kind of gimmick, but if that’s really true, that’s a major shift.

Full resolution 45 MP images with significantly reduced file sizes? Yes, please.
 
Can someone explain the difference to me? They sound like the same goals are meant to be achieved, so is this more marketing or something real? Thanks.
Lossless compressed = 14-bit, full-res RAW files with no loss in quality whatsoever.

High Efficiency * = 14-bit, full-res RAW files which Nikon says is indistinguishable from lossless, but some data is being thrown away.

High Efficiency = 14-bit, full-res RAW files with losses that may be perceived.

I'll let anyone decide if this is marketing. I don't really consider it any more marketing than my D750 not explicitly saying "lossy" for the "compressed RAW" option.
Here is what the Nikon Z9 brochure says:

"High Efficiency RAW which retains the same level of high image quality as the conventional uncompressed RAW in an approx. 1/3 smaller* file size — making RAW files easier to handle than ever."

And the licensed intoPIX TicoRAW:

Lossless quality of CFA data.
 
D3 Moment ?

Much more, because it equates more to the Nikon D1 launch and, arguably, the events that unfolded into digital photography...

The loss of the mechanical shutter to an entirely on-sensor electronic shutter presages more and more integration of imaging functions in silico...

https://bythom.com/newsviews/yes-the-camera-world-change.html

and https://www.zsystemuser.com/nikon-z-system-news-and/winners-and-losers.html
I don't think so; they did that with the launch of the Z6/7: "Reinventing Mirrorless", if I remember correctly.

A new sensor with an updated processor has them possibly catching/surpassing the CanSon AF levels. We don't know until the release firmware is finished.

With all new flagships there come a few unique features and that's GREAT, especially for Nikon shooters. For shooters of other systems, THEIR companies have to respond, so everyone benefits.
 
Can someone explain the difference to me? They sound like the same goals are meant to be achieved, so is this more marketing or something real? Thanks.
Lossless compressed = 14-bit, full-res RAW files with no loss in quality whatsoever.

High Efficiency * = 14-bit, full-res RAW files which Nikon says is indistinguishable from lossless, but some data is being thrown away.

High Efficiency = 14-bit, full-res RAW files with losses that may be perceived.

I'll let anyone decide if this is marketing. I don't really consider it any more marketing than my D750 not explicitly saying "lossy" for the "compressed RAW" option.
Yes, Ricci is still doing tests, but he says he’s not really seeing a considerable difference between Lossless compressed and High Efficiency. When people hear that, they seem to think it’s some kind of gimmick, but if that’s really true, that’s a major shift.

Full resolution 45 MP images with significantly reduced file sizes? Yes, please.
Well, if one starts with, say, 60MB worth of data and ends up with 30MB or 20MB, something went out the window. Technically, either there is extra stuff which wasn’t needed, or something was lost, we will know soon. My guess, it’s like going from 14 to 12 bits, which is, technically, a reduction to 1/4th of the original info. But one may think that some parts of the scene do not need so much info collected, others need more, thus a smart compression would be able to result in minimal actual cost.

--
Renato.
http://www.flickr.com/photos/rhlpedrosa/
OnExposure member
http://www.onexposure.net/
Good shooting and good luck
(after Ed Murrow)
 
I'm sure it will be a great camera, but so many seem to be over-playing the shutter removal. Sure, it is unique, but it is not as if the A9, A1, 1DX4, even Fuji X-T3s, Olympus E-M1s etc. have not been making significant use of ES mode. Some, like the A1, are obviously intended mostly to be used in ES mode... but they do have a shutter when higher DR is needed.

Maybe Nikon solved most, if not all, of the issues with a faster readout, but I bet they could have squeezed even more DR out if they had left that shutter in for the landscape shooters, etc.

Removing it at this point seems more of a disadvantage than any advantage beyond marketing and, of course, allowing them to replace it with that stronger protective shutter, which is quite amazing.
It's not like they're completely removing the mechanical shutter for all their cameras. There are also the lower tier cameras which will not have stacked sensors for a long time.

This is just starting a change in how the companies think about making cameras, but it's going to be a long, slow and protracted process before it can be translated down to the masses.
 
Yes, now that the top-end beast is here we'll see the processor in the next-generation Z* and Z** models.

The D500 was and still is a very good camera, but a Zee version should come, I'd guess, next year? Maybe with a stacked sensor and no mechanical shutter.

Then the follow-up bodies to the Z5/6/7, just with BSI; maybe the Z5 will stick to FSI to fill the entry FF place? So non-stacked, with a mechanical shutter?
 
Exactly, and there's more. It's clear from many responses that the core implications are not recognized. It doesn't need an MBA to understand that lowered production costs (including testing and tuning mechanical shutters) mean a significantly lower retail price.

1. the last surviving major mechanical component of the ILC is removed,

2. Nikon R&D can shut the door firmly on the dead-end returns of striving to increase fps from 14 to 16 fps, etc...

3. Integrating the shutter into the sensor allows new features with high temporal precision. Core R&D product development can be cloned widely into a patented range of products,

4. Competitors are suddenly at a major disadvantage, hereafter at the back of the queue struggling to catch up,

5. New generation of stacked sensors will be prioritized for mid range and prosumer MILCs,

6. Major step into the increasing trend for 3D multi-tasking sensors in ILCs, which is already happening in leading smartphones. Nikon is now leading this trend in the ILC industry... as happened with the D1

The longer-term aim of moving to a purely electronic shutter likely originated in primary R&D, which interfaced early on with the strategic planning of the Z system within Nikon to move to a stacked sensor - the demanding R&D lead time points to this being decided a few years before the launch of the Z9, at least.

Their 1" industrial stacked sensor was the first public evidence of the eventual outcome.

But these complicated sensors must surely have been the outcome of a protracted process, which relied on strategic partnerships to fab prototype sensors and, ultimately, the final product. Simulations and highly advanced in silico models of circuits etc. can only get so far.

I'm sure it will be a great camera, but so many seem to be over-playing the shutter removal. Sure, it is unique, but it is not as if the A9, A1, 1DX4, even Fuji X-T3s, Olympus E-M1s etc. have not been making significant use of ES mode. Some, like the A1, are obviously intended mostly to be used in ES mode... but they do have a shutter when higher DR is needed.

Maybe Nikon solved most, if not all, of the issues with a faster readout, but I bet they could have squeezed even more DR out if they had left that shutter in for the landscape shooters, etc.

Removing it at this point seems more of a disadvantage than any advantage beyond marketing and, of course, allowing them to replace it with that stronger protective shutter, which is quite amazing.
It's not like they're completely removing the mechanical shutter for all their cameras. There are also the lower tier cameras which will not have stacked sensors for a long time.

This is just starting a change in how the companies think about making cameras, but it's going to be a long, slow and protracted process before it can be translated down to the masses.
 
The throat/width architecture of the Z mount is highly strategic. The design space has been taken to its limits with the 16mm flange distance, as tolerances are very tight to fit the shutter and protective screening(s) and to minimize damage when lenses are changed. The very tight tolerances of the Z mount have probably taken the dimensions of the ILC lens mount to the absolute minimum. One outcome is that it is impossible to adapt a Z lens to work on a rival mount.

This video is the latest in accumulating evidence that Nikon thought out the Z system with great care for the long term.

Shared by an industry engineer in Tokyo on nikongear: https://nikongear.net/revival/index.php?action=post;quote=177194;topic=10206.90;last_msg=177274

Re: Z9 Release Thread
« Reply #103 on: October 31, 2021, 19:13:34 »
"This Japanese YouTuber is an retired engineer who apparently had worked for Panasonic until fairly recently. He whole-heartedly admires the Nikon managers and engineers for having developed technologies incorporated int


(His narration is Japanese.)
He suspects that the sensor was made by Tower Semiconductor which originally was part of Panasonic.

https://towersemi.com

The follow-up post in the same NG thread, by the technical expert in Japan who shared this video, kindly translates the key points of the video:

"...Here are the comments in the video I consider essential.

He suspects that the sensor was developed by Tower Semiconductor (formerly called TowerJazz) based on the technology they had established when they developed their 1” stacked BSI sensor.

He calculates that the electronic shutter “curtain” should run only slightly slower than a mechanical 1/8000 shutter unit, which at the same time keeps its rolling shutter effect at almost the same level as that caused by a mechanical 1/8000 shutter.

And the 1/32,000 sec. exposure time is enabled thanks to the fact that an electronic shutter is free from the mechanical instability which makes it difficult to maintain the extremely narrow slit needed for a 1/32,000 sec. exposure.

He was impressed by the fact that the brave decision to eliminate the mechanical shutter altogether was made neither by Canon nor Sony but by Nikon! He also points out that the development of a mechanical shutter of this level would have been extremely costly, and its manufacturing process would be extremely complicated, which would raise the production cost even more. (I think this is part of the reason for the relatively low price of the Z9 for a flagship model packed with game-changing new technology.)

He also suspects that the engineers could concentrate on the development of other technologies like the image stabilization because they didn’t need to worry about the development of a new shutter unit.

He also assesses the Expeed 7 processor to be at least comparable to the image processor of Canon’s R3, enabling 8K video, fast read-out of the sensor data and AI-enhanced AF. The Expeed 7 can take full advantage of the blindingly fast sensor.

The resolution of the EVF is 3.69MP, which is relatively low compared to those of the higher-end models of other manufacturers. He thinks that this resolution could be the highest possible at the data rate needed to enable the “dual streaming” processing.

In the video, he doesn’t make an elaborate comparison to the Canon R3, the potential direct rival of the Z9, because the Z9 is worlds apart from the R3!

Based on his own experience of developing sensors and image processing engines (probably at Panasonic), he believes that the vision behind the planning and design of the Z9 is extremely well focused. And he highly admires that the cooperation between the managers and the engineers has worked very well.

It is funny that he goes so far as to say that Panasonic should have been the first camera manufacturer ever to develop the technology to eliminate the mechanical shutter, because it was the first company to offer an interchangeable-lens mirrorless camera. He says he would have been happy if only his bosses had been like the ones at Nikon.

In the previous videos assessing Z6 and Z7, he pointed out that a flange back of 16mm could be very challenging for the engineers to pack the mechanical shutter unit, UV/IR-cut filter and the IBIS mechanism, compared to Canon and Sony whose flange backs are around 22mm. But now he admires the decision on the short 16mm flange back, if Nikon envisioned the omission of the shutter unit.

He considers that the Z9 is great proof of Nikon’s solid fundamental technology. Apparently, he almost compares Nikon, having endured the severe situation of being left behind Sony and Canon, to Le Comte de Monte-Cristo.

He also says that such a ground-breaking camera may suffer from unbalanced performance caused by the uneven level of development of the technology behind each function of the camera. But he would choose to admire the challenging aspects of the Z9 rather than nit-pick the unbalanced performance."
 
Nikon applied for an interesting patent in mid-2021, commented on at asobinet.com. Thom Hogan has commented a couple of times over the past year or more on a noticeable increase in Nikon patents in sensor technology.

https://asobinet.com/info-patent-nikon-global-shutter-for-af/

This is intriguing, considering the timing of submission (March 2021): https://www.j-platpat.inpit.go.jp/s0100
JP,2021-100287,A

https://www.zsystemuser.com/nikon-z-system-news-and/did-nikon-just-provide-a-z9.html

https://www.fredmiranda.com/forum/topic/1707731/0#15639102
 
Well, if one starts with, say, 60MB worth of data and ends up with 30MB or 20MB, something went out the window.
Not necessarily, no. That is how lossless compression works: By exploiting redundancy in the original data you can reduce the total file size without removing any information.
Technically, either there is extra stuff which wasn’t needed, or something was lost, we will know soon.
It will be a combination of both no doubt. Any redundancy in the file will be exploited, and there will be some massaging of the data to make it even easier to compress.
My guess, it’s like going from 14 to 12 bits, which is, technically, a reduction to 1/4th of the original info.
Again with this misconception :(

Storing 14 bits takes up 14 bits of storage. Storing 12 bits takes up 12 bits of storage. That's a reduction in storage requirements of about 15%, not really enough to explain the difference.

I will give Nikon's engineers some credit and assume they have cooked up something a bit more sophisticated ;)
But one may think that some parts of the scene do not need so much info collected, others need more, thus a smart compression would be able to result in minimal actual cost.
Exactly. Compression is often about realizing how the data is interpreted by the observer, and which details will be noticed and which will not.

As an example, whenever you have noise in an image, this noise will require a lot of data to accurately represent (try adding noise to a JPG image and see how it affects the file size). But we, as observers, don't really care how the noise looks on a pixel level. If the codec is able to store the image without noise, and just 'fill in' the noise when you open the file, then it would reduce file size a lot without making the image look different to a human observer.
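To make both points above concrete (redundancy compresses away losslessly, noise barely compresses at all), here is a minimal sketch using plain Python and zlib. It has nothing to do with Nikon's actual codec; it just shows the general behaviour of any lossless compressor:

```python
# Minimal sketch: lossless compression exploits redundancy, and noise resists it.
# Generic zlib behaviour, not anything specific to NEF/TicoRAW.
import zlib
import random

random.seed(0)

# Highly redundant data: a slow ramp where each byte value repeats 256 times.
smooth = bytes((i // 256) % 256 for i in range(65536))

# Incompressible data: same length, but every byte is random "noise".
noisy = bytes(random.randrange(256) for _ in range(65536))

for name, data in (("smooth", smooth), ("noisy", noisy)):
    packed = zlib.compress(data, level=9)
    # Lossless means the round trip is bit-exact.
    assert zlib.decompress(packed) == data
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```

The smooth buffer shrinks to a tiny fraction of its size while the noisy one stays essentially full size, which is exactly why a codec that can treat noise separately gains so much.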
 
How about a specialized battery grip that can hold an M.2 SSD with 4 PCIe lanes and write speeds of 3300MB/s? It would instantly drop storage costs and improve performance. A third party would do it in an instant if there were a PCIe pipeline to that part of the camera.
You don't need that. All you need is for them to finally support writing out to the USB-C port. In the last Matt Irwin video Ricci was asked about this, and he said HE keeps asking for it, but it's not there yet.

If they start writing out you don't need anything more than a fast external drive. That means No more card readers. Just plug into the computer and start editing.

Smaller companies have been offering this for years.
 
rhlpetrus wrote:
Can someone explain the difference to me? They sound like the same goals are meant to be achieved, so is this more marketing or something real? Thanks.
Lossless compressed = 14-bit, full-res RAW files with no loss in quality whatsoever.

High Efficiency * = 14-bit, full-res RAW files which Nikon says is indistinguishable from lossless, but some data is being thrown away.

High Efficiency = 14-bit, full-res RAW files with losses that may be perceived.

I'll let anyone decide if this is marketing. I don't really consider it any more marketing than my D750 not explicitly saying "lossy" for the "compressed RAW" option.
Yes, Ricci is still doing tests, but he says he’s not really seeing a considerable difference between Lossless compressed and High Efficiency. When people hear that, they seem to think it’s some kind of gimmick, but if that’s really true, that’s a major shift.

Full resolution 45 MP images with significantly reduced file sizes? Yes, please.
Well, if one starts with, say, 60MB worth of data and ends up with 30MB or 20MB, something went out the window. Technically, either there is extra stuff which wasn’t needed, or something was lost, we will know soon. My guess, it’s like going from 14 to 12 bits, which is, technically, a reduction to 1/4th of the original info. But one may think that some parts of the scene do not need so much info collected, others need more, thus a smart compression would be able to result in minimal actual cost.
Lossless compression exists; this is being described by the designers as the first RAW codec. It doesn't seem to start with a raw file and compress it further; it's a different raw file. Lossless compressed codecs also exist and can be very effective. You don't just have to get rid of stuff. You can plot thousands of points to describe a still imperfect circle, or just find a better way to encode the same circle.
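The circle analogy can be made literal with a toy sketch (hypothetical Python, not anything resembling how TicoRAW actually works): the same circle can be stored as thousands of sampled points or as the handful of parameters that generate it, with no information about the shape lost.

```python
# Toy example of "a better way to encode the same circle".
import json
import math

cx, cy, r = 512.0, 384.0, 100.0

# Naive encoding: 10,000 points sampled along the circle.
points = [(cx + r * math.cos(2 * math.pi * k / 10000),
           cy + r * math.sin(2 * math.pi * k / 10000)) for k in range(10000)]
naive = json.dumps(points)

# Smarter encoding: just the centre and radius that define the same circle.
smart = json.dumps({"cx": cx, "cy": cy, "r": r})

print(len(naive), "bytes naive vs", len(smart), "bytes smart")
```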
 
Can someone explain the difference to me? They sound like the same goals are meant to be achieved, so is this more marketing or something real? Thanks.
Lossless compression is a specific technical term from computer science. It means that you can decompress the file and get precisely the original file — not a single bit is allowed to differ.

As an aside, processors are now so fast that it can be faster (and generate less heat) to compress and then store the compressed file than simply to store the uncompressed file.

Say you have a 45 MP image and change the R value of a single pixel from 15,498 to 15,499; you’d then have a different image. But you wouldn’t be able to tell, especially if that change was in a noisy or unimportant part of the image.


Now, the claim for HE* compression is that while the odd bit here and there may differ from the original, you won’t be able to tell the difference in terms of noise, dynamic range or any artefacts.

If you don’t mind the odd bit being different, you get much better compression ratios (ie. faster shooting, effectively unlimited buffer, and space for a lot more images on your card. And less heat. And faster transfer to the computer.)

If that’s the case (ie. that you can’t tell the difference between lossless raw and HE*) then I guess a lot of people would be perfectly happy to use it.
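A rough way to see why tolerating "the odd bit being different" buys better ratios, assuming nothing about Nikon's actual HE* algorithm: quantising away the two least-significant bits of simulated 14-bit samples (so no value moves by more than 3 counts out of 16,384) lets a generic compressor do noticeably better than a bit-exact pass over the same data.

```python
# Sketch of the lossless vs. "visually lossless" trade-off (NOT Nikon's HE* codec).
import zlib
import random
from array import array

random.seed(0)

# Simulated 14-bit samples: a slow gradient plus a little sensor-like noise.
samples = [max(0, min(16383, (i // 8) % 16384 + random.randrange(-3, 4)))
           for i in range(65536)]

# Bit-exact (lossless) compression of the 16-bit-packed samples.
lossless = zlib.compress(array("H", samples).tobytes(), level=9)

# Tolerant version: zero the two least-significant bits before compressing,
# so each value changes by at most 3 counts out of 16,384.
quantised = [v & ~0x3 for v in samples]
tolerant = zlib.compress(array("H", quantised).tobytes(), level=9)

print("bit-exact:", len(lossless), "bytes   tolerant:", len(tolerant), "bytes")
```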
 
Well, if one starts with, say, 60MB worth of data and ends up with 30MB or 20MB, something went out the window.
Not necessarily, no. That is how lossless compression works: By exploiting redundancy in the original data you can reduce the total file size without removing any information.
Technically, either there is extra stuff which wasn’t needed, or something was lost, we will know soon.
It will be a combination of both no doubt. Any redundancy in the file will be exploited, and there will be some massaging of the data to make it even easier to compress.
My guess, it’s like going from 14 to 12 bits, which is, technically, a reduction to 1/4th of the original info.
Again with this misconception :(

Storing 14 bits takes up 14 bits of storage. Storing 12 bits takes up 12 bits of storage. That's a reduction in storage requirements of about 15%, not really enough to explain the difference.
Can you explain that a little further? Genuine doubt. Thinking mathematically, each bit doubles the number of levels/data points, so how does it end up with only a 15% reduction in size?
I will give Nikon's engineers some credit and assume they have cooked up something a bit more sophisticated ;)
But one may think that some parts of the scene do not need so much info collected, others need more, thus a smart compression would be able to result in minimal actual cost.
Exactly. Compression is often about realizing how the data is interpreted by the observer, and which details will be noticed and which will not.

As an example, whenever you have noise in an image, this noise will require a lot of data to accurately represent (try adding noise to a JPG image and see how it affects the file size). But we, as observers, don't really care how the noise looks on a pixel level. If the codec is able to store the image without noise, and just 'fill in' the noise when you open the file, then it would reduce file size a lot without making the image look different to a human observer.
The rest is OK. I did not say there would be losses proportional to the storage reduction, only that some loss would follow; otherwise, why have lossless compressed? These would be the new lossless compressed. I’m curious to see Bill Claff test these modes in terms of PDR.
 
Can someone explain the difference to me? They sound like the same goals are meant to be achieved, so is this more marketing or something real? Thanks.
Lossless compressed = 14-bit, full-res RAW files with no loss in quality whatsoever.

High Efficiency * = 14-bit, full-res RAW files which Nikon says is indistinguishable from lossless, but some data is being thrown away.

High Efficiency = 14-bit, full-res RAW files with losses that may be perceived.

I'll let anyone decide if this is marketing. I don't really consider it any more marketing than my D750 not explicitly saying "lossy" for the "compressed RAW" option.
Here is what the Nikon Z9 brochure says:

"High Efficiency RAW which retains the same level of high image quality as the conventional uncompressed RAW in an approx. 1/3 smaller* file size — making RAW files easier to handle than ever."

And the licensed intoPIX TicoRAW:

Lossless quality of CFA data.
Is Nikon using the intoPIX TicoRAW in this case? They just recently announced a new version:

 
Well, if one starts with, say, 60MB worth of data and ends up with 30MB or 20MB, something went out the window.
Not necessarily, no. That is how lossless compression works: By exploiting redundancy in the original data you can reduce the total file size without removing any information.
Technically, either there is extra stuff which wasn’t needed, or something was lost, we will know soon.
It will be a combination of both no doubt. Any redundancy in the file will be exploited, and there will be some massaging of the data to make it even easier to compress.
My guess, it’s like going from 14 to 12 bits, which is, technically, a reduction to 1/4th of the original info.
Again with this misconception :(

Storing 14 bits takes up 14 bits of storage. Storing 12 bits takes up 12 bits of storage. That's a reduction in storage requirements of about 15%, not really enough to explain the difference.
Can you explain that a little further? Genuine doubt. Thinking mathematically, each bit doubles the number of levels/data points, so how does it end up with only a 15% reduction in size?
For example, say you want to be able to store the numbers 0 through to 7. You need 3 bits for that:

0 = 000

1 = 001

2 = 010

3 = 011

4 = 100

5 = 101

6 = 110

7 = 111

You may have spotted that the last bit changes every row, the middle bit changes every 2 rows, and the first bit changes every 4 rows. If you add another bit, it changes every 8 rows. The other bits just keep flipping according to the pattern they already have. Each bit combination is unique for that number. So by adding 1 more bit, you can encode (that's what we call this) twice as many numbers.

We call the leftmost bit the most significant bit, and the rightmost bit the least significant bit.
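Tying this back to the 14-bit vs 12-bit question above: dropping two bits quarters the number of representable levels, but the storage saved is only two bits out of fourteen. A quick arithmetic check (no claim about actual Z9 file sizes):

```python
# Levels vs. storage cost for 14-bit and 12-bit samples (pure arithmetic).
levels_14, levels_12 = 2 ** 14, 2 ** 12
print(levels_12 / levels_14)        # 0.25 -> only a quarter of the levels remain

storage_saving = (14 - 12) / 14
print(f"{storage_saving:.1%}")      # ~14.3% -> the "about 15%" mentioned above
```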
 
Good. I hope a used Z9 will be $2000.
 
Well, if one starts with, say, 60MB worth of data and ends up with 30MB or 20MB, something went out the window.
Not necessarily, no. That is how lossless compression works: By exploiting redundancy in the original data you can reduce the total file size without removing any information.
Technically, either there is extra stuff which wasn’t needed, or something was lost, we will know soon.
It will be a combination of both no doubt. Any redundancy in the file will be exploited, and there will be some massaging of the data to make it even easier to compress.
My guess, it’s like going from 14 to 12 bits, which is, technically, a reduction to 1/4th of the original info.
Again with this misconception :(

Storing 14 bits takes up 14 bits of storage. Storing 12 bits takes up 12 bits of storage. That's a reduction in storage requirements of about 15%, not really enough to explain the difference.
Can you explain that a little further? Genuine doubt. Thinking mathematically, each bit doubles the number of levels/data points, so how does it end up with only a 15% reduction in size?
For example, say you want to be able to store the numbers 0 through to 7. You need 3 bits for that:

0 = 000

1 = 001

2 = 010

3 = 011

4 = 100

5 = 101

6 = 110

7 = 111

You may have spotted that the last bit changes every row, the middle bit changes every 2 rows, and the first bit changes every 4 rows. If you add another bit, it changes every 8 rows. The other bits just keep flipping according to the pattern they already have. Each bit combination is unique for that number. So by adding 1 more bit, you can encode (that's what we call this) twice as many numbers.

We call the leftmost bit the most significant bit, and the rightmost bit the least significant bit.
I know base-2 math. And I think I know how it works, as a byte is 8 bits. Thanks.
 
