Unless one is fixated on pixels and not on images, high DR and high pixel density are not in conflict.
Consider camera A, whose pixels are half the linear size of those of camera B (one quarter the area), but with the same quantum efficiency (light-gathering ability per unit area). With sensors of equal area, camera A has four times as many megapixels as camera B. In the space taken by one pixel of camera B there are four pixels of camera A, so the photons that would be captured by one pixel of camera B are spread over four pixels of camera A.
Naively, a measurement of DR would say that camera A has two stops less DR than camera B, since a single pixel captures only 1/4 the light, and for a given technology the smaller pixels saturate at a level proportional to their area. Yet both sensors capture the same number of photons per unit area; the only difference is that camera A resolves them better spatially.
Camera A has the same light-gathering capacity as camera B, and better resolution. Its main drawback is the size of its raw files and the processing overhead (both in-camera and in post) that this entails.
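
To make the arithmetic concrete, here is a toy sketch in Python. The full-well and photon counts are invented for illustration, not real sensor specs:

    # Toy comparison of per-pixel vs. per-area dynamic range.
    # All numbers are illustrative assumptions, not measured specs.
    import math

    full_well_B = 40000            # electrons per pixel of camera B (assumed)
    full_well_A = full_well_B / 4  # A's pixels have 1/4 the area, so 1/4 the capacity
    photons_per_B_area = 10000     # photons landing on one B-pixel's worth of area

    # Each of A's four pixels sees 1/4 of those photons.
    photons_per_A_pixel = photons_per_B_area / 4

    # Naive per-pixel comparison: A saturates two stops sooner.
    print(math.log2(full_well_B / full_well_A))           # -> 2.0 stops

    # Per-area comparison: aggregate A's four pixels over B's pixel area.
    print(4 * full_well_A == full_well_B)                 # -> True: same capacity per area
    print(4 * photons_per_A_pixel == photons_per_B_area)  # -> True: same light per area
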
Dynamic range specs should be quoted on a per-area basis if you want to compare them fairly across camera models with different pixel densities. Perhaps one should also divide by the square of the crop factor, to normalize DR relative to overall sensor area.
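
Here is one way such a normalization could be written down, as a hedged sketch: the normalized_dr helper, the 8-micron reference pitch, and every number below are my own illustrative assumptions, and read-noise effects are ignored just as in the photon-counting argument above:

    # Hedged sketch of the proposed normalization.  DR figures are in stops,
    # pixel pitches in microns; the specific values are invented examples.
    import math

    def normalized_dr(dr_pixel_stops, pixel_pitch_um, crop_factor,
                      ref_pitch_um=8.0):
        # Per-area DR, further scaled by sensor area via the crop factor.
        # Aggregating k small pixels over a reference area is credited with
        # log2(k) extra stops, and the whole figure is docked
        # 2*log2(crop_factor) stops (i.e. divided by crop^2 in linear terms).
        k = (ref_pitch_um / pixel_pitch_um) ** 2   # pixels per reference area
        return dr_pixel_stops + math.log2(k) - 2 * math.log2(crop_factor)

    # Camera B: 8 um pixels, full frame; camera A: 4 um pixels, full frame.
    dr_B = 12.0          # assumed per-pixel DR of camera B, in stops
    dr_A = dr_B - 2.0    # naive measurement: two stops less per pixel
    print(normalized_dr(dr_B, 8.0, 1.0))  # -> 12.0
    print(normalized_dr(dr_A, 4.0, 1.0))  # -> 12.0  (the penalty disappears)

On this accounting, the two-stop per-pixel penalty vanishes once both cameras are quoted over the same area, which is the point of the comparison above.
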
--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/