To discuss dynamic range, I think we first need to define what the object is, what the signal is, and what the application is.
In practical terms, I like to plot resolution loss against exposure.
On a normally exposed real-life shot, green pixels may come very close to clipping while blue and red often do not, and the balance differs between sensors and between light SPDs. Reducing sensor DR performance to a single number may not be all that's needed.
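To make the per-channel point concrete, here is a minimal sketch with hypothetical numbers (the clip level and channel peaks are invented for illustration, not measured from any particular sensor): given the raw peak value of each channel under some light SPD, the remaining headroom in stops differs substantially between channels.

```python
import math

# Hypothetical raw channel maxima (DN) for one sensor under one daylight SPD;
# green is nearly clipped while red and blue retain headroom.
clip_level = 16383          # 14-bit raw clipping point (assumed)
channel_peak = {"R": 9200, "G": 16100, "B": 7400}

for ch, peak in channel_peak.items():
    headroom_stops = math.log2(clip_level / peak)
    print(f"{ch}: {headroom_stops:.2f} stops of headroom")
```

With these invented numbers, green has essentially no headroom left while blue still has over a stop, so a single DR figure hides which channel actually limits the exposure.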
What to make of pixel DR if we are trying to apply it to sensors with different pixel densities, sizes, and counts?
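One common way to compare sensors of different pixel counts is to normalize to a common output size: averaging k pixels raises signal capacity k-fold but noise only sqrt(k)-fold, adding 0.5*log2(k) stops. A rough sketch, with entirely hypothetical full-well and read-noise figures and an assumed 8 MP reference size:

```python
import math

def pixel_dr_stops(full_well_e, read_noise_e):
    """Engineering DR of one pixel: full-well / read noise, in stops."""
    return math.log2(full_well_e / read_noise_e)

def normalized_dr_stops(full_well_e, read_noise_e, n_pixels, ref_pixels=8e6):
    """DR after downsampling to a common reference size: averaging k pixels
    adds 0.5 * log2(k) stops over the per-pixel figure."""
    k = n_pixels / ref_pixels
    return pixel_dr_stops(full_well_e, read_noise_e) + 0.5 * math.log2(k)

# Hypothetical sensors: many small pixels vs. fewer large ones
print(normalized_dr_stops(30_000, 3.0, 60e6))   # small, dense pixels
print(normalized_dr_stops(90_000, 4.5, 24e6))   # large, sparse pixels
```

With these made-up numbers the large-pixel sensor leads by a full stop per pixel, but the gap shrinks once both are normalized to the same output size, which is exactly why per-pixel DR alone is hard to interpret across pixel densities.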
Even for film, print size implies a different enlargement of the same-sized negative. Is anyone claiming that a given film emulsion has different dynamic range between 35mm and 120 film?
Not sure what the question is. Is there a fundamental difference in the effect of photon shot noise depending on the media?
How do we define dynamic range for a stochastic raster? Suppose I use a very contrasty film, and all I have on it after developing a shot of a step wedge is Dmax and Dmin, with no intermediate densities. If I print it small, I can get many perceivable shades of grey, essentially a half-tone print. If I print it large, I will have only 2 perceivable shades (at some viewing distance).
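The half-tone effect above can be simulated numerically. This is a toy model, not a film model: each grain site is binary (Dmax or Dmin), a 50%-grey patch is a random binary raster, and "printing small" is modeled as the eye averaging over blocks of grains.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "very contrasty film": each grain site is Dmax or Dmin only.
# Model a 50%-grey patch as a binary raster (1 = clear, 0 = dense).
raster = rng.random((1024, 1024)) < 0.5

# Large print / close viewing: every grain is resolved -> 2 levels only.
print(len(np.unique(raster)))           # 2

# Small print / far viewing: the eye averages 16x16 blocks of grains.
block = 16
small = raster.reshape(64, block, 64, block).mean(axis=(1, 3))
print(len(np.unique(small)))            # many intermediate greys appear
```

The resolved raster carries exactly two levels, but the block-averaged version carries dozens of distinct grey values, which is the printing-size dependence described above.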