20 years later: Why are digital cameras still emulating instant film?

Started Jul 6, 2012 | Discussions thread
DigVis | Forum Member | Posts: 97
Jul 6, 2012

I guess most can relate to the two concepts of exposure and development. To me, they mean:

Exposure: the capture of light. The aim of a good exposure is to record, as well as possible, the part (and form) of the visual information that the photographer is interested in. "As well as possible" will often mean with the highest signal-to-noise ratio (other aspects are also relevant).

Development: the presentation of an image captured by an exposure. There is no right or wrong way to develop an image; it's entirely up to the photographer and his or her artistic intent.

With the distinction above, it seems to me that even after almost 20 years of consumer digital cameras, all of them are designed backwards. Their design means that, like an instant-film Polaroid, the standard (JPEG) mode forces you to choose how to develop an image before taking it. You are limited to a few predefined development profiles, and have to choose an exposure that fits the selected profile.

The automatic functions of the cameras then become unnecessarily complex, as they have to solve the convoluted problem of fitting the exposure to the predefined development profile ("the exposure problem"), rather than the other way around. More and more complex algorithms are implemented to solve the wrong problem. Some modern cameras implement elaborate scene-classification algorithms and contain thousands of reference photos just for this purpose. Even face-detection algorithms are used to compute the exposure.

The RAW format still feels like an afterthought. While it allows proper separation of exposure from development, the camera does its best to be unhelpful. The automatic exposure functions still assume a particular development profile and incorrectly try to solve the exposure problem.

Things could have been so much better. With a digital sensor, ETTR (expose to the right) is often an excellent exposure strategy, and it would have been dead easy to implement automatically in Live View mode. A few other selective clipping modes would probably cover the needs of 99% of all photos.
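To make the point concrete, here is a rough sketch of what such an automatic ETTR loop could look like, written in Python against an imaginary Live View interface. The names capture_raw_preview() and set_exposure(), the 14-bit clip level and the highlight tolerance are all assumptions for illustration, not any real camera's API.

```python
import numpy as np

CLIP_LEVEL = 16383           # assumed 14-bit sensor full scale
HEADROOM_STOPS = 0.3         # keep the brightest wanted pixels this far below clipping
MAX_CLIPPED_FRACTION = 1e-4  # tolerate a few specular highlights

def ettr_adjustment(raw_frame: np.ndarray) -> float:
    """Return the exposure change, in stops, that pushes the brightest
    wanted highlights just below sensor saturation."""
    # Look at the near-maximum of the raw data, ignoring a tiny fraction
    # of pixels so isolated speculars don't dictate the exposure.
    peak = np.quantile(raw_frame, 1.0 - MAX_CLIPPED_FRACTION)
    target = CLIP_LEVEL / (2 ** HEADROOM_STOPS)
    return float(np.log2(target / max(peak, 1)))

def auto_ettr(capture_raw_preview, set_exposure, current_stops=0.0, iterations=4):
    """Iterate over a few live-view frames until the adjustment converges."""
    for _ in range(iterations):
        frame = capture_raw_preview()          # hypothetical raw live-view readback
        delta = ettr_adjustment(frame)
        if abs(delta) < 0.1:                   # within a tenth of a stop: good enough
            break
        current_stops += delta
        set_exposure(current_stops)            # hypothetical exposure control
    return current_stops
```

The point is how little is needed: read back the raw preview, find where the highlights sit, and nudge the exposure so they land just below clipping, no scene classification or face detection required.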

Why do cameras still try their best to make my work harder by:

  • Forcing a workflow based on histogram chimping rather than automating this simple task

  • Presenting the wrong data in these histograms (developed rather than exposed data; see the sketch after this list)

  • Supplying blinking modes that do the wrong thing (blinking only when all channels, rather than any single channel, are saturated, and again based on the wrong data)

  • Presenting image previews based on incorrectly assumed development profiles (for example when deliberately "underexposing" (the wrong word) a scene at maximum analogue-gain ISO)

  • Forcing film emulation with pointless digitally scaled "ISO" modes

  • Digitally scaling and thereby artificially clipping the R and B color channels

  • and so on…
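For the histogram and blinking points above, here is a minimal sketch of what raw, per-channel clipping warnings and histograms could look like, computed from the exposed (undemosaiced) data rather than the developed JPEG. The RGGB layout, even sensor dimensions, and the 14-bit clip level are assumptions for illustration.

```python
import numpy as np

CLIP_LEVEL = 16383  # assumed 14-bit full scale

def raw_channel_planes(bayer: np.ndarray):
    """Split an RGGB Bayer mosaic (even dimensions assumed) into its four channel planes."""
    return {
        "R":  bayer[0::2, 0::2],
        "G1": bayer[0::2, 1::2],
        "G2": bayer[1::2, 0::2],
        "B":  bayer[1::2, 1::2],
    }

def clipping_mask(bayer: np.ndarray) -> np.ndarray:
    """True wherever ANY channel in a 2x2 Bayer cell has reached saturation."""
    planes = raw_channel_planes(bayer)
    any_clipped = np.zeros_like(planes["R"], dtype=bool)
    for plane in planes.values():
        any_clipped |= plane >= CLIP_LEVEL
    return any_clipped

def raw_histograms(bayer: np.ndarray, bins: int = 256):
    """Per-channel histograms of the exposed data, not the developed JPEG."""
    return {name: np.histogram(plane, bins=bins, range=(0, CLIP_LEVEL))[0]
            for name, plane in raw_channel_planes(bayer).items()}
```

A blinkies overlay built on clipping_mask() would warn as soon as a single raw channel saturates, which is exactly the information the photographer needs when exposing to the right.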

I'm sure the camera manufacturers are not unaware of these artificial limitations, and they must understand how much better a camera could be if it made a break with the past. Is it the fear of change that is holding them back?
