Grant Hutchins: This may be a stupid question, but aside from button clicks to set the camera vs. mouse clicks to set panel attributes, wouldn't post-processing a non-camera-anti-aliased image be preferable? Of course, that assumes you have photo editing software, so I guess my question is to photogs and graphic designers with Photoshop. Given the multitude of different blur filters, de-speckle, etc. in PShop, wouldn't you have more ultimate control to just leave anti-aliasing off on the camera, and adjust as needed (if needed) in Photoshop?
"wouldn't post processing a non-camera anti-aliased image be preferable"
Theoretically, no. "Proper" anti-aliasing must be carried out before you sample the signal to be effective. Think of a political poll: if the selection of participants is biased, you have a flawed dataset. You can run all kinds of advanced statistics on that dataset, but the proper solution is to gather the data correctly in the first place.
Practically, there might be software solutions that work "well enough" for real-world images/sensors/lenses/humans and (like you say) add flexibility. I am aware of none.
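A minimal sketch of why the filtering has to happen before sampling (toy 1-D numbers, nothing camera-specific): a 90 Hz sine sampled at 100 Hz produces exactly the same samples as a 10 Hz sine, so once the aliased samples exist, no post-processing can tell the two signals apart.

```python
import numpy as np

fs = 100.0                # sampling rate (Hz); Nyquist limit is 50 Hz
n = np.arange(200)        # sample indices
t = n / fs

high = np.sin(2 * np.pi * 90 * t)    # 90 Hz tone, above Nyquist
alias = -np.sin(2 * np.pi * 10 * t)  # the 10 Hz alias it folds down to

# The two sampled sequences are numerically identical: after sampling,
# no amount of post-processing can recover which signal was present.
print(np.allclose(high, alias))  # → True

# The only fix is a low-pass filter BEFORE sampling, which in a camera
# is the optical low-pass (anti-aliasing) filter in front of the sensor.
```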
danijel973: This is not really impressive as I duplicated this result with a simple "sharpen" command in Gimp. Also, you can't get more information than you put in, meaning that you can't create detail from blur. You can clarify detail that's already there, but I would always prefer to do it optically to the maximum possible extent, and only then use software to try to go even further. Intentionally designing bad lenses and relying on software to make them mediocre is not a good idea.
>> Also, you can't get more information than you put in
Right.
>> meaning that you can't create detail from blur
Wrong.
If you encrypt your hard drive, the bits will look like a blurry mess. Given the right algorithm and key, you can get all of the information back, though.
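A toy illustration of that point (a seeded XOR keystream, not real cryptography): the scrambled bytes look like noise, yet the right "key" recovers every bit.

```python
import random

data = b"a perfectly sharp image"

rng = random.Random(42)                    # the "key" is the seed
keystream = bytes(rng.randrange(256) for _ in data)

# XOR with the keystream: the result looks like random noise.
scrambled = bytes(d ^ k for d, k in zip(data, keystream))

# Re-generate the same keystream from the same key; XOR is its own inverse.
rng2 = random.Random(42)
keystream2 = bytes(rng2.randrange(256) for _ in data)
recovered = bytes(s ^ k for s, k in zip(scrambled, keystream2))

print(recovered == data)  # → True: no information was lost
```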
ProfHankD: I've been studying PSFs for several years now. The biggest problem with deconvolution is that the PSFs are not really convolved in the first place -- especially for out-of-focus regions of the image. Still, there's lots one can do with better computational methods; I use genetic algorithms for this sort of thing.
What do you mean by the PSFs "not really convolved"? Does it mean that the idealized model of a (slowly varying) linear convolution does not describe the errors contributed by the lens? If so, what kind of physical process is it?
If you had access to highly detailed info about the lens (e.g. sweep monochromatic light from 400–800 nm over a target print of impulses (or wavelets) distributed across the frame, and sweep this target from the close-focus limit towards infinity), how much better could things be? Is it fundamentally a problem of gathering enough data, or of finding the right algorithms to apply?
If the lens designers know that a given lens correction is available, they might be able to "tailor-make" a PSF that is easy to correct (no deep spectral zeros, Gaussian-like?), rather than a PSF that is as small as possible.
Perhaps that would allow better system performance for a given cost/size/weight?
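A sketch of the "no deep zeros" idea, with made-up kernels: a box-like PSF has near-zeros in its frequency response, so inverting it must amplify some frequencies enormously (blowing up noise), while a Gaussian-like PSF of similar width stays safely above zero everywhere.

```python
import numpy as np

N = 64
x = np.arange(N)

box = np.zeros(N)
box[:4] = 0.25                       # box-like PSF, width 4

g = np.exp(-0.5 * ((x - 8) / 1.0) ** 2)
gauss = g / g.sum()                  # Gaussian-like PSF, sigma = 1

H_box = np.abs(np.fft.fft(box))      # frequency responses
H_gauss = np.abs(np.fft.fft(gauss))

# Naive inversion multiplies each frequency by 1/|H|: deep zeros mean
# near-infinite noise amplification at those frequencies.
print(H_box.min())    # ~0: the box PSF has deep spectral zeros
print(H_gauss.min())  # clearly > 0: invertible in principle
```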
123Mike: I think the example is fake because there are details in the "improved" version that do not exist in the "original".
Visual inspection is not sufficient to determine that such examples are fake.
Kirppu: So it can magically guess the texture patterns that objects have even if it originally was just a blur... I would like to see that happen. I bet it would have some funny end results. :)
And didn't Adobe already do this deblurring thingy?
The key is that the "just a blur" thingy can be (more or less) accurately described as a function of the original, sharp image. Find that function, find a suitable inverse, and you can remove some blur.
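A minimal sketch of that idea (1-D, circular convolution, known kernel, no noise — real images need regularization such as a Wiener filter): blurring is a function of the sharp signal, and dividing by the kernel's spectrum inverts it.

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random(64)                          # the unknown "sharp image"

g = np.exp(-0.5 * (np.arange(64) - 8.0) ** 2)   # Gaussian blur kernel
kernel = g / g.sum()

# Blur = circular convolution = multiplication in the frequency domain.
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel)))

# Deconvolution: divide by the kernel's spectrum. Safe here because the
# spectrum has no zeros and there is no noise.
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(kernel)))

print(np.allclose(recovered, sharp))  # → True
```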
Bill Bentley: I can't see a tripod mount anywhere. There certainly is the space for one. I think it would be helpful for a device like this.
The most positive part of this product: it might force Sony to update the PlayMemories application (or even open up its APIs), something that would be good for my RX100M2 camera.
Interesting on a technological level, uninteresting as a photography tool (for me).
I wonder why they essentially made a "camera sans display". It would be interesting to see what they could do if they instead shipped just (lens + sensor), moved the image processing into an Android app, and relied on the phone for power (if possible). This might slow down processing but also reduce cost and size/weight.
On a personal level, I would be willing to shell out the cost just to tinker, if I could have RX100M2-quality image acquisition delivered raw and in real time to a simple API on my Android phone, for developing my own image-development and control apps in high-level languages with rich infrastructure, sensor and touch support, in a small package.
ezradja: Canon should at least raise the MP to 24MP, IMO
I have a long list of things that I would like to be improved in my 7D. MP count is somewhere near the bottom of that list.
Josh152: Personally I am more interested to see if Canon has finally caught up on dynamic range and color depth than I am in the on sensor phase detection.
@Josh152: I agree that image quality is still a concern.
@cs hauser: I agree that this AF has the potential to be really nice.
I don't see the point in derogatory remarks about those whose expectations and use cases differ from yours.
I am positive.
photogeek: Still no GPU acceleration. Epic fail.
GPU acceleration is hard. It can give tremendous speedups for a few "well-behaved" problems, and less so for others. It also leads to more hardware dependencies, more intricate code, harder-to-test applications, and generally slower feature development.
I think it is fair to say that most people outside of Adobe can only guess at the cost and benefit of moving Lightroom's processing to GPUs. I sure don't know. Is it a fixed-point or floating-point pipeline? Is it compute-bound or memory-bound? Are most (time-consuming) operations per-pixel, or is there a lot of spatial dependency in reads/writes? Is it written in C?
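To illustrate the per-pixel vs. spatial-dependency distinction (toy operations, nothing to do with Lightroom's actual pipeline): a gamma curve touches each pixel independently and parallelizes trivially, while a recursive filter makes each output depend on the previous output and is inherently sequential along the scan.

```python
import numpy as np

img = np.linspace(0.0, 1.0, 8)   # a tiny "scanline" of linear values

# Per-pixel: each output depends on exactly one input -> GPU-friendly.
gamma = img ** (1 / 2.2)

# Recursive filter (as used in some blur/tone-propagation schemes):
# each output depends on the previous OUTPUT -> hard to parallelize.
a = 0.5
ema = np.empty_like(img)
ema[0] = img[0]
for i in range(1, len(img)):
    ema[i] = a * img[i] + (1 - a) * ema[i - 1]

print(gamma[-1], ema[-1])
```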
jon404: Great. Just what we need... a new technology allowing even-smaller viewfinders. Hey, camera makers! We baby boomers, the only segment left with some savings, have poor eyesight! All the pixels in the world can't help if the display device is physically very small...
I guess it makes sense that an EVF should use less energy than a regular LCD screen: it is smaller and shielded from stray light, meaning that less backlight brightness is needed to present a usable image?
Color-sequential display is known from DLP projectors, but how is it used in a transmissive LCD? Three sets of r-g-b LED backlights that take turns lighting an achromatic LCD panel?
120 Hz is interesting from a latency/movement perspective. Even though you need three cycles to present a full-color image, perhaps an early (if color-incomplete) indication of what is happening in the scene is worth the "rainbow effect"?
marike6: DxOMark just tested the D7100, and its total score of 83 is the second best of all APS-C cameras (one point behind the D5200).
Color Depth and High ISO are improved over the D7000, while DR is .2 EV less.
Nice update on the already superb D7000.
@hiro_pro: The D7100 sensor is not 50% larger than the D7000 sensor; they are the same size.
tkbslc: Most cell-phone camera shooters barely take the time to focus before the shot, you think they will care about it afterward?
Doing "autofocus" after the fact might make their images look better without requiring any user intervention.
Software-based lens tilt?
This is most easily solved by the market. If hardcore camera customers refused to purchase any camera without DNG output, the manufacturers would get the hint. The fact that we don't do this suggests that it is really not that important to us.
You can still convert any ACR-supported raw file to DNG? This will be a well-documented file that anyone with access to old DNG specification printouts, the DNG file itself, and an interest in image processing and programming will be able to decipher?
Did anyone notice that Adobe Lightroom adds lots of meta-info to exported JPEGs? It seems to me to be a complete "cookbook" of how the user edited the file?
From the article: "But retaining support that's already provided shouldn't be an issue."
This is not true. Keeping old, hard-to-test, seldom-used code forever is in itself a problem for software.
I think that for important stuff it makes sense to render a JPEG version. You lose the ability to reprocess the raw file, but if the image was that important to you, it seems safe to assume that you put lots of energy into making the settings look good.
CAcreeks: If cameras wrote 16-bit per color lossless files, such as JPEG 2000, we would have no need for RAW. Also now that cameras produce too many megapixels for most purposes, downsampling is common practice. JPEG is sufficient for that, so perhaps we no longer need RAW anyway.
The main problem with JPEG/JPEG 2000 is not that the formats are lossy. The main problem is that they discard information necessary to do proper after-the-fact white balancing, recovery of clipped highlights, etc.
What you need (for full flexibility in editing) is a lossless representation of what the camera sensor recorded, plus any relevant camera settings. This means a non-standard colorspace, Bayer (or other) pattern, etc.
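A toy numeric sketch of why this matters for white balance (made-up gains, linear values with 1.0 as the clip point; real JPEGs also gamma-encode and quantize): once the camera has applied its gain and clipped, a different white balance can no longer be computed correctly for bright pixels, whereas from the raw value it can.

```python
import numpy as np

raw_red = np.array([0.20, 0.80])   # linear sensor values, 1.0 = clip point
cam_gain = 1.5                     # WB gain the camera baked in
new_gain = 1.1                     # WB gain we want after the fact

jpeg_red = np.clip(raw_red * cam_gain, 0.0, 1.0)   # what a JPEG stores

# Re-white-balance from the JPEG: undo camera gain, apply the new gain.
from_jpeg = np.clip(jpeg_red / cam_gain * new_gain, 0.0, 1.0)

# Re-white-balance from raw: the unclipped sensor value is still there.
from_raw = np.clip(raw_red * new_gain, 0.0, 1.0)

print(from_jpeg)  # the bright pixel is wrong: its true value was destroyed
print(from_raw)
```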
Even if this were properly interfaced/programmed to use the internal bus/USB to give functionality comparable to what you can get with a Windows PC...
Would it be preferable to a smartphone/tablet with USB host functionality (not too many out there) doing similar things? Then you would have a large touchscreen to control everything.
Gordon Brown: Good on Sony for their development of this technology. I have held the opinion for some time now that ALL camera manufacturers should have been doing extensive R&D of EVFs. After all, shouldn't it be the ultimate goal to be able to see exactly what the sensor is seeing? Surely. The OVF with the mirror box arrangement was developed for a specific need in film cameras and does not have the same relevance with today's technology. I sincerely hope that this sparks a trend towards developing the ultimate viewfinder. Well done SONY!
Najinsky: What would the benefit of looking "through the sensor" be if the sensor exceeded human vision anyway?
Just like looking through the lens lets you assess the "distortions" made by the lens, looking through the sensor lets you assess its distortions. The important point is that the EVF should be "better" than the image sensor, and that in-camera raw development should be close to your typical raw development. Both are difficult goals, I guess.