Thom Hogan
Forum Pro
The above is an important thing to understand. Much of the progress on image sensors is "enabled" by something. For instance, you can't have stacked sensors without BSI, and you couldn't have BSI CMOS without first having CMOS. A stacked sensor provides faster offloading of data from the sensor to the data pipelines and storage. It serves as an enabler for a global shutter.
No one will show you their proprietary charts of interlocking parts without an NDA (and probably a big contract in place), but an image sensor today is a series of technology components that are integrated together, sometimes in new ways. Sony Semiconductor has huge libraries of options, including the basic EXMOR set that's often used as a base. Nikon, too, has libraries of options they've designed and prototyped. So do Fujifilm, Olympus, and Panasonic, and so does Sony Imaging.
You have to think of sensor development these days in terms of chefs (the individual companies and their individual and combined libraries of ingredients), while the fabs are the line cooks that put the meal (sensor) together. One of the critical questions I look at is: who is taping out the sensor? If it's the fab, then you're likely looking at a standardized sensor offered to anyone, with minor tweaks. If it's the camera company, then it's truly their recipe (which may involve stock ingredients).
Sony Semiconductor, in particular, has a huge pantry full of possible ingredients. Many of those came from things that the camera companies themselves created. In essence, Sony Semiconductor wants to be able to "make anyone's meal." Thus they've been aggressive about licensing things that they can stuff into the pantry for possible use.
Nikon, for instance, has an ingredient that we've not yet seen in a commercial sensor, but have seen in a research project: using a stacked sensor (the enabler) to produce localized dynamic range (e.g., each 16×16 block of pixels can be exposed differently). They couldn't have made that ingredient without someone first making stacked sensors possible. Nikon is also in a unique position because their manufacturing equipment (steppers) is often used to create image sensors, so sometimes you can't make a new enabling technology without first making it possible on the equipment.
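To make the localized dynamic range idea concrete, here's a minimal sketch of how per-block exposure might work conceptually. This is my own illustration, not Nikon's actual design: the block size, exposure ladder, and function names are all assumptions. Each 16×16 tile picks the longest exposure that doesn't clip, then readout normalizes the tiles back to a common scale:

```python
import numpy as np

# Hypothetical illustration of localized dynamic range on a stacked
# sensor: each 16x16 block gets its own exposure, then blocks are
# normalized back into one linear image. Parameters are invented.

BLOCK = 16
FULL_WELL = 1.0  # linear value at which a pixel clips

def capture_local_exposure(scene, exposures=(1.0, 0.25, 0.0625)):
    """Per 16x16 block: pick the longest exposure that avoids clipping,
    simulate the capture, then rescale to scene-referred values."""
    h, w = scene.shape
    out = np.empty_like(scene)
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            tile = scene[y:y+BLOCK, x:x+BLOCK]
            # longest exposure whose brightest pixel still fits the well
            for t in exposures:
                if tile.max() * t <= FULL_WELL:
                    break
            raw = np.clip(tile * t, 0, FULL_WELL)  # what the block records
            out[y:y+BLOCK, x:x+BLOCK] = raw / t    # normalize at readout
    return out

# A scene with one block 16x brighter than the rest: a single global
# exposure would have to clip it or underexpose everything else.
scene = np.full((32, 32), 0.5)
scene[:16, :16] = 8.0
result = capture_local_exposure(scene)
```

In this toy version the bright tile is captured at a shorter exposure and recovered exactly, while a single global exposure would have clipped it at the full-well limit. The real trick, as described above, is that stacked sensors provide the per-block logic and bandwidth to do this on-chip.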
So, it's entirely possible that Sony Imaging, with the A9 Mark III, decided to let the stacked technique enable global shutter, while Nikon Imaging might decide to have it enable dynamic range extension instead.
Finally, note that the volume of image sensors being produced for dedicated cameras is now small compared to the entire market for image sensors. This has flipped where the innovations and changes (enablers) come from. Instead of always having to come up with the enabler themselves, the camera companies can now look at what auto, security, and smartphones have created and pick and choose the things that make dedicated cameras better. I don't think it's a surprise that Nikon dropped the mechanical shutter on their top models, or that Sony went for the highest possible bandwidth, once stacked became an enabler. With reduced volume, you tend to have to do the "new sensor" thing at the top of the lineup, where the R&D payback will happen with fewer sales.