1. Modern cameras (even quite cheap ones) have sensors with native DR of 12-13 stops (several have more and the trend is upwards) while virtually all makers restrict their JPG output to about 9 stops. It's true that there are plenty of times when the scene DR is 9 stops or under, but also plenty where it's over 9 stops. Shooting JPG is throwing away up to 1/3 of the luminance data that you bought your camera to record.
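The "up to 1/3" figure follows from simple stop arithmetic. A minimal sketch, using illustrative numbers from the paragraph above (13 stops of sensor DR, 9 stops in the JPG):

```python
# Rough arithmetic behind the "up to 1/3" claim (illustrative numbers,
# not any specific camera's measured DR).
sensor_stops = 13   # native DR of a good modern sensor
jpg_stops = 9       # typical in-camera JPG output

discarded = sensor_stops - jpg_stops    # stops of luminance range lost
fraction = discarded / sensor_stops     # as a share of what the sensor captured
print(f"{discarded} stops discarded, about {fraction:.0%} of the range")
```

With a 12-stop sensor the same arithmetic gives 3 of 12 stops, i.e. exactly 1/4; hence "up to 1/3" for the higher-DR bodies.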
That's interesting. I need to correct highlights and shadows on most of my shots, hence retrieving data that are present in the RAW files. I'm constantly wondering why this idiot (my camera) clipped them. Is there a rationale for camera makers downgrading the JPGs compared with what the sensor actually recorded from the scene, using only part of the DR?
I don't know this for certain but it's what I believe is the explanation.
The final output picture is seen on a medium, whether it be paper, screen or whatever, that has a relatively low tonal range. Put this another way - the contrast between the darkest point on the picture and the brightest is quite small.
Our eyes don't register tonal range as such; they work by detecting differences between neighbouring sensors (rods and/or cones). Those differences are, of course, contrast. What matters most to us looking at a scene is local contrast in quite small patches of the total scene. Our brains blend those patches into an apparently seamless whole view.
Come back to photographs, though, and (unless we resort to selective editing in PP) we can't control local contrast but only global contrast. Now, if the top and bottom of the tonal range are fixed - as the output device makes them - there's only a given amount of global contrast available. Spread the wide DR (= tonal range) of a whole scene into the narrow tonal range of the output and at any point the local contrast is reduced.
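That squeeze can be put in numbers. A minimal sketch (my figures, chosen only for illustration): compress a wide scene linearly in log space, and every local contrast difference shrinks by the same factor.

```python
# Hypothetical example: a 12-stop scene mapped linearly (in log-luminance
# space) into a 9-stop output range. Every local difference is scaled
# by the same factor, so local contrast drops everywhere.
scene_stops = 12    # total scene DR
output_stops = 9    # tonal range the output medium can reproduce
scale = output_stops / scene_stops   # factor applied to every log difference

local_contrast_in = 1.5              # stops between two neighbouring patches
local_contrast_out = local_contrast_in * scale
print(f"{local_contrast_in} stops of local contrast becomes "
      f"{local_contrast_out:.3f} stops")
```

This is why a straight linear rendering of a high-DR scene looks flat: the global range fits, but each patch has lost a quarter of its punch.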
As a general rule we tend to notice the mid-tones more than the tonal extremes; that's why all conversions from raw data need a tone response curve that is some sort of S-shape. Experience has shown that, for the viewing devices usually available, the DR of the image (file or negative) works well if it is held to about 9 EV.
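To see what an S-shape buys you, here is a hedged sketch using smoothstep as the curve (my choice for illustration, not any camera's actual tone curve): the slope of the curve is the local contrast it delivers, and an S-curve steepens the mid-tones at the expense of the extremes.

```python
# Illustrative S-curve (smoothstep) on normalized tone values in [0, 1].
# The slope at a point is the local contrast the curve delivers there:
# > 1 means contrast is boosted, < 1 means it is compressed.
def s_curve(x):
    return 3 * x**2 - 2 * x**3

eps = 0.001  # step for a numerical slope estimate
mid_slope = (s_curve(0.5 + eps) - s_curve(0.5 - eps)) / (2 * eps)
shadow_slope = (s_curve(0.05 + eps) - s_curve(0.05 - eps)) / (2 * eps)
print(f"mid-tone slope: {mid_slope:.2f}, deep-shadow slope: {shadow_slope:.2f}")
```

Mid-tone contrast comes out boosted (slope above 1) while the deep shadows are compressed (slope well below 1), which is exactly the trade-off described above.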
Photos would look nicer with good looking skies and more detailed shadows. Do they want to make them bland?
No; it's actually the other way round. What they want is for most scenes to look contrasty - the opposite of bland - so they compress the total DR to achieve this. If we put the clock back a bit, film is quite tolerant of overexposure of the skies, so many photos survived the narrow tonal range of film and paper at a relatively modest cost - shadow areas became blocked.
This was so common for so long that many people came to accept that photos miss shadow detail (my sister-in-law once complained that it's wrong to open shadows because we don't see that way). Early digital cameras also had narrow DR, so this way of seeing things was perpetuated.
It's only quite recently that getting decent detail in shadows and good colours in skies, at the same time as acceptable mid-tone contrast, has become possible - and then only by using tone curves different from the typical in-camera JPG curve.