ProfHankD, you are making the prospect of apps sound appealing. What are the chances that there will be reasonably convenient access to a quite raw level of pixel information, allowing app-controllable pixel binning (and possibly new native file storage types as well) at reasonable speed? Be fun to have a 6 megapixel Nex 7 option that stored .DNG files directly for example.
Pixel binning works better in the analog domain, which I doubt is controllable. However, the manipulation of the raw capture buffer is a piece of cake...
if and only if Sony wants to allow it.
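To illustrate why digital binning over the raw buffer is "a piece of cake," here's a minimal sketch: it sums 2x2 blocks of a monochrome buffer into a half-resolution image. This is a simplification I'm assuming for clarity; real raw data is Bayer-mosaiced, so same-color photosites would have to be combined instead of direct neighbors, and analog binning (combining charge on the sensor) is a different beast entirely.

```c
#include <stdint.h>
#include <stddef.h>

/* Digital 2x2 binning: sum each 2x2 block of a monochrome raw buffer
   into one output value.  w and h are assumed even.  A Bayer mosaic
   would need stride-2 sampling to bin same-color sites. */
static void bin2x2(const uint16_t *src, size_t w, size_t h, uint32_t *dst)
{
    for (size_t y = 0; y + 1 < h; y += 2)
        for (size_t x = 0; x + 1 < w; x += 2)
            dst[(y / 2) * (w / 2) + x / 2] =
                (uint32_t)src[y * w + x] + src[y * w + x + 1] +
                (uint32_t)src[(y + 1) * w + x] + src[(y + 1) * w + x + 1];
}
```

Summing (rather than averaging) preserves precision, which is why the output type is wider than the input.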
The real questions are:
1.
Who gets to write apps? If it's just Sony, it's pointless. I've been suggesting for years that the right way to do this is to allow
anyone to write and run their own apps, but to mark the camera warranty as void the first time an app not approved by Sony is run. That way, you'd have "Sony approved" (digitally signed) apps and a very easy and open path for developers. A more likely alternative would be a process for becoming an approved developer, which may require NDAs; this is how Canon distributes its SDK for accessing their cameras from Windows machines, for example.
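The "approved app" check could be as simple as comparing an app image's digest against a signed list. Here's a toy sketch of the idea; the FNV-1a hash and function names are purely illustrative stand-ins for real cryptographic signature verification, which is what Sony would actually use.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Toy digest of an app image (FNV-1a); a real camera would verify a
   cryptographic signature over the image instead. */
static uint64_t digest(const uint8_t *app, size_t len)
{
    uint64_t h = 1469598103934665603ULL;       /* FNV-1a offset basis */
    for (size_t i = 0; i < len; i++)
        h = (h ^ app[i]) * 1099511628211ULL;   /* FNV-1a prime */
    return h;
}

/* True if the digest is on the approved list.  In the scheme described
   above, running an unapproved app would set a persistent
   "warranty void" flag rather than being blocked outright. */
static bool app_approved(uint64_t d, const uint64_t *approved, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (approved[i] == d)
            return true;
    return false;
}
```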
2.
How does an app hook into the camera firmware? CHDK uses several different methods: scripts written in Lua or uBASIC, compiled C code run as an application (e.g., reversi) or invoked by a command in a script, and code triggered by events (e.g., a screen update causing zebra highlighting, or storing an image triggering conversion to DNG). It is actually quite awkward to run your own C code on a CHDK camera, and a bug will easily crash (but not brick) the camera. I'm hoping Sony is using one of the cleaner interfaces available under Linux to let apps do any of these types of operations; I'd love an SDK allowing compiled C code to securely access things via standard Linux mechanisms (sockets, dev files, etc.) -- that way, no app would have to be part of the system firmware. I also hope Sony doesn't restrict things to a simple scripting interface; that was the mistake Digita made -- you could run code in your camera, but that code had no access to the camera per se... for example, on my Kodak DC260, a script couldn't even click the shutter! This is why Digita never caught on....
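The event-triggered style of hook described above amounts to a dispatch table: the firmware raises an event and any handler an app registered for it gets called. Here's a minimal sketch of that pattern; the event names and the registration API are my own hypothetical illustrations, not CHDK's or Sony's actual interface.

```c
#include <stdio.h>

/* Hypothetical firmware events an app might hook, in the CHDK style. */
enum cam_event { EV_SCREEN_UPDATE, EV_IMAGE_STORED, EV_COUNT };

typedef void (*hook_fn)(void);
static hook_fn hooks[EV_COUNT];

/* An app installs a handler for one event. */
static void register_hook(enum cam_event ev, hook_fn fn) { hooks[ev] = fn; }

/* Firmware calls this when the event occurs; returns 1 if a hook ran. */
static int dispatch(enum cam_event ev)
{
    if (ev < EV_COUNT && hooks[ev]) { hooks[ev](); return 1; }
    return 0;
}

/* Example app handlers, mirroring the CHDK examples in the text. */
static void zebra(void)    { puts("overlay zebra highlights"); }
static void save_dng(void) { puts("convert raw buffer to DNG"); }
```

The appeal of a Linux-style interface is that this dispatch could live behind a socket or dev file, so the app process stays outside the firmware entirely.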
3.
How much compute resource is available? Processors in cameras are fairly beefy, and many have a little custom accelerator or two (e.g., hardware JPEG compression engine), but cameras are real-time systems with limited resources. For example, the depth-from-focus that my students implemented in a CHDK PowerShot took no more than a few seconds to capture the scene, but something like 30 seconds to generate the depth map. Perhaps Sony will allow us to get around this by offloading compute to another system via wireless... not an option for CHDK.
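To give a feel for why the depth map took so long: depth-from-focus searches every pixel across every slice of a focus stack for the sharpest one. This sketch uses a squared horizontal gradient as the contrast measure; it's only the core idea, not my students' implementation, and a real version would use a windowed measure and smoothing, multiplying the cost further.

```c
/* Depth-from-focus in miniature: for each pixel, pick the focus-stack
   slice with the highest local contrast (squared horizontal gradient).
   stack holds `slices` images of w*h pixels, slice-major. */
static void depth_map(const int *stack, int slices, int w, int h, int *depth)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int best = 0, best_c = -1;
            for (int s = 0; s < slices; s++) {
                int i = s * w * h + y * w + x;
                int g = (x + 1 < w) ? stack[i + 1] - stack[i]
                                    : stack[i] - stack[i - 1];
                if (g * g > best_c) { best_c = g * g; best = s; }
            }
            depth[y * w + x] = best;   /* slice index ~ focus distance */
        }
    }
```

That inner loop runs slices * w * h times, which is exactly the kind of per-pixel work a camera's real-time firmware has little headroom for.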
