0.01 lux is -8 EV.
I strongly doubt it was this dark, as that would correspond to roughly a moonless starry night. But then the fireflies would illuminate the scenery (or at least their immediate surroundings) as the main light source, and I didn't notice that effect in the video.
Anyway, while dark, -8 EV corresponds to (F/1.4, 1/24s) at ISO 1,200,000. I saw extreme noise at 640px web resolution. Scaling the web resolution up to the full 24 MP, the noise I saw corresponds to the per-pixel noise of a 24 MP sensor at ISO 14,000. I'd say a normal full-frame 24 MP dSLR doesn't have this much pixel noise at ISO 14,000.
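For those who want to check the arithmetic, a minimal sketch (my own back-of-the-envelope math, not Canon's material; the EV/lux relation and the downscaling factor are standard):

```python
# Standard relations: EV100 = log2(lux / 2.5) and
# ISO = 100 * 2^(log2(N^2 / t) - EV100) for a correct exposure.
from math import log2

ev100 = log2(0.01 / 2.5)                   # 0.01 lux -> ≈ -8 EV
N, t = 1.4, 1 / 24                         # aperture, shutter time (s)
iso = 100 * 2 ** (log2(N**2 / t) - ev100)  # ≈ 1,200,000

# 24 MP downscaled to ~640x427 averages ~88 pixels into one, cutting
# noise variance by ~88x -- like shooting full resolution at ISO 1.2M/88:
print(f"ISO ≈ {iso:,.0f}, full-res equivalent ≈ {iso / (24e6 / (640 * 427)):,.0f}")
```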
Therefore, I conclude that the demo shown by Canon is technically lame, more of a marketing gig than anything else.
This is one possible approach to decent low-light capability in video.
The other approach is to stop the nonsense of subsampling sensors in video mode, reading out maybe 1 out of 6 pixels. This is what creates noise and aliasing artefacts in video. Unnecessarily so, as a few cameras (Panasonic, Nokia) show which don't subsample but create the video signal from all pixels. I.e., it is quite feasible (see the sketch below).
Therefore, Thumbs Down for Canon for working around a problem they should rather solve.
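To illustrate the subsampling point, a toy model (mine, not any camera's actual pipeline) comparing a 1-of-6 pixel-skipping readout with a full readout that is binned afterwards:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.sin(np.linspace(0, 60 * np.pi, 6000))   # clean fine detail
frame = scene + rng.normal(0, 0.2, scene.shape)    # add sensor noise

skipped = frame[::6]                         # pixel-skipping readout
binned = frame.reshape(-1, 6).mean(axis=1)   # full readout, then binned

print("noise (skipped):", (skipped - scene[::6]).std())                   # ≈ 0.20
print("noise (binned): ", (binned - scene.reshape(-1, 6).mean(1)).std())  # ≈ 0.08
```

Binning cuts the noise by roughly sqrt(6); skipping keeps the full per-pixel noise and additionally aliases any detail finer than the coarser sampling grid.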
I am sorry, but I cannot read the most important bit from the Adobe release, i.e., that the rental fee is fixed for a lifetime. The Adobe FAQ says:
> Customers who sign up by December 31, 2013 will be able to continue their membership. This price is not a special introductory price for your first year only; it is the standard price for this level of membership. But if you cancel your membership in the future, you will not be able to re-join at this special price.
So, Adobe says that $9.99 is the "standard" price. It does NOT say it will never increase. Where did DPR read this most important bit from?
Without that, this newest offer is no different from previous ones for PS CC alone.
I would have been keen to learn about AF consistency using the new 70D's dual pixel live view AF.
There is further evidence not mentioned in the DPR article:
Both designs (the patent according to Egami, and Olympus according to their website) use a 10-elements-in-9-groups design with 3 ED and 2 HR elements.
The use of the very same ED and HR elements cannot be pure coincidence (even if similar software optimizations were applied), and Olympus would then infringe Sigma's patent anyway.
So yes, this is a Sigma patented design licensed to Olympus.
InTheMist: Hm, the Bigma tested better than I expected.
That's not a criticism of DXOMark, I think it's good testing.
That's true. I always go to the profile tabs to see the actual measurements. Both the Score and the P-Mpix value are to be ignored totally. With this in mind, their tests are a good resource.
Go to the profiles tab for the Sigma and the Nikkor and compare 500/8 vs. 400/8 directly. The difference is whopping. Another example that DxO's lens tests are useful while their scores are not.
wisep01: Would anyone here at DPReview care to address a perceived contradiction in the test results of this lens--on the D800, it receives both a lower overall and a lower sharpness score than on the D600.
DxO scores are blind to very high resolutions because of their weighting. Go to the profile tabs to really read their results. And keep in mind that D800 pixels are smaller, so similar acutance means much higher sharpness.
How many more times does this need to be said?
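For the numbers, a quick sketch (pixel counts and sensor width are the public specs; the equal-acutance premise is the assumption being illustrated):

```python
# D800: 7360 px across 35.9 mm; D600: 6016 px across 35.9 mm.
# Equal per-pixel acutance on the denser grid -> more lp/mm resolved.
sensor_mm = 35.9
nyq_d800 = 7360 / sensor_mm / 2    # ≈ 102 lp/mm
nyq_d600 = 6016 / sensor_mm / 2    # ≈ 84 lp/mm
print(f"{nyq_d800:.0f} vs {nyq_d600:.0f} lp/mm: "
      f"~{100 * (nyq_d800 / nyq_d600 - 1):.0f}% more resolved detail")
```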
It makes no sense to compare 35mm-equivalent focal lengths and not compare 35mm-equivalent apertures. Sensor size is not even mentioned in the table, so it is impossible to draw one's own conclusions. Even if all cameras share one size, this should have been said.
The article would have been of value if it contained a hint about the cost to OEMs of sensors of varying sizes (smartphone to FF). In its current form, the article contains zero bits of information.
After all, a driving force for "larger" would be the falling cost of large sensors.
> though he acknowledges the industry needs a better way of describing sensor size than the current obscure 'inch-type' naming system.
There ALREADY is a better way! The industry simply has to stop the nonsense of mixing equivalent (normalized) focal lengths with unnormalized aperture and ISO ratings. The latter have no meaning whatsoever without taking the sensor size into account. The normalized, equivalent ratings, however, shine with increasing sensor size and bring their benefit to the customer's awareness without even talking about the sensor size, which is in itself an implementation detail and means nothing without taking the aperture into account, etc.
E.g., describe the Sony RX100 II as 28-100mm F/4.9-13.4 ISO1200+ and everybody will understand immediately what kind of camera it is. No need to mention a cryptic 1" size ...
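The conversion itself is trivial; a sketch (the RX100 II lens data is public; the base ISO of 160 and the exact 1"-type crop factor of 2.73 are my assumptions):

```python
def equivalent(focal_mm, f_number, iso, crop=2.73):  # 1"-type crop factor
    """35mm-equivalent focal length, aperture, and ISO rating."""
    return focal_mm * crop, f_number * crop, iso * crop**2

w, t = equivalent(10.4, 1.8, 160), equivalent(37.1, 4.9, 160)
print(f"{w[0]:.0f}-{t[0]:.0f}mm F/{w[1]:.1f}-{t[1]:.1f} ISO{w[2]:.0f}+")
# -> 28-101mm F/4.9-13.4 ISO1192+
```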
I invite everybody to have a look at a lecture given by one of this project's authors last year:
-> http://www.mpi-inf.mpg.de/departments/d4/areas/giana/Teaching/ComputationalPhotographySS2012/
It has some interesting ideas about how the art of photography may change in the future, esp. in the studio. E.g., scene illumination may be added after the shot, etc. ...
Sweet stuff :)
I know the institute which did it, and most of their research is actually top notch. I'd say the main application isn't what they describe (the conversion of a high-end consumer camera into an industrial one), even though that may be an important application for German machine engineering.
I'd say the main application is as a research and prototype device to pave the way to lens-array-based smartphones (which most likely will produce an array of 3x3 images, each of at least HD quality, too). Interesting for high-end smartphone lens makers like Zeiss.
A 3x3 lens-array-based smartphone reduces the crop factor by 3 (such as from 4.5 to APS-C) and brings smartphones on par with dSLRs. Esp. as the array allows for parallax-accelerated autofocus (before the shot) and plenoptics-like focus tuning after the shot, beating phase-detect AF. This device from the Max Planck Institute will help explore stuff like this.
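The crop-factor arithmetic behind that claim, as a sketch (the 4.5 starting point is a typical high-end phone sensor, assumed here):

```python
phone_crop, n = 4.5, 3   # assumed phone crop factor, 3x3 lens array
# Stitching an NxN array acts like a sensor N times larger per dimension:
print(phone_crop / n)    # 1.5, i.e. APS-C equivalent
```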
SiriusDoggy: Serious astrophotographers have known about this tech for years. In the astronomy community it's called adaptive optics and works almost exactly the same. https://www.sbig.com/products/adaptive-optics/ao-8t/
Thanks for the link. But ...
1. The SBIG product is an optical image stabilizer, not adaptive optics (AO deforms a mirror's surface aspherically in real time in an attempt to correct for wavefront errors). SBIG abuses the term AO here ...
-> http://upload.wikimedia.org/wikipedia/commons/thumb/6/65/Adaptive_optics_system_full.svg/1000px-Adaptive_optics_system_full.svg.png
2. The SBIG product works at 10 Hz rather than 1000 Hz.
20-1200 mm F/16-33
and according to optyczne.pl, it isn't even sharper at F/33 than F/47.
Can't see any other application than bright daylight video from a tripod ...
The work is from 2011. The same institute demoed a fast contrast-detect AF (CDAF) system in 2009:
-> http://www.k2.t.u-tokyo.ac.jp/mvf/FocusingVision/index-e.html
A video where CDAF tracks an object in real time is here:
-> http://www.k2.t.u-tokyo.ac.jp/mvf/FocusingVision/FV_FocusTracking.wmv
(AF time is 16 ms.)
Just a brute-force approach, but nice to see anyway.
Here is additional genuine information:
-> http://www.mechatronic.me/69-projection-mapping-system-lumipen
It includes a complete description of the optics of their so-called pupil-shift system:
-> http://www.mechatronic.me/images/a/13/06/Lumipen_Saccade_Mirror.jpg
It is composed of three lenses. The article links to the research papers too:
-> http://www.k2.t.u-tokyo.ac.jp/members/okumura/pdf/okumura_icra11.pdf (engl., with illustrations)
JoKing: I wonder if the system blows up if someone throws another ping-pong ball into the mix? :)
It might lose track, or switch tracking between balls when their trajectories overlap. That's all.
Tracking is the hard part in this demo. It needs computing power and a lot of light. And, in this case, a simplified task where the target stands out (a white football against green grass might do as well). A similar situation to tracking a PlayStation Move controller.
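The "target stands out" simplification makes the per-frame work almost trivial; a toy sketch (mine, nowhere near the 1000 Hz system shown) that thresholds the brightest pixels and takes their centroid:

```python
import numpy as np

def track_bright_target(frame, thresh=0.9):
    """Return (row, col) centroid of pixels above thresh, or None if lost."""
    ys, xs = np.nonzero(frame > thresh)
    if len(xs) == 0:
        return None                      # lost track
    return float(ys.mean()), float(xs.mean())

frame = np.zeros((120, 160))
frame[40:44, 70:74] = 1.0                # fake white ball on dark background
print(track_bright_target(frame))        # (41.5, 71.5)
```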
falconeyes: Unfortunately, this technology is old and has no theoretical advantage.
It is a variant within the class of possible Bayer filter spectra, which may be wider or narrower. Clarity+ just uses an extremely wide green filter, one with 100% transmission at all wavelengths.
However, current Bayer filter spectra are ALREADY optimized to be in a sweet spot: make them narrower and luminance noise will increase; make them wider and color noise will increase.
You actually see it in the Clarity+ sample image if you look at the white text's color artefacts.
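The mechanism, as a minimal sketch (matrices invented for illustration, not measured from any camera): wider, more overlapping filter spectra need a stronger color correction matrix, and that matrix amplifies chroma noise:

```python
import numpy as np

narrow = np.array([[ 1.2, -0.1, -0.1],   # nearly diagonal: little mixing
                   [-0.1,  1.2, -0.1],
                   [-0.1, -0.1,  1.2]])
wide   = np.array([[ 2.5, -1.0, -0.5],   # heavy mixing to recover color
                   [-1.0,  2.5, -0.5],
                   [-0.5, -0.5,  2.0]])

for name, m in [("narrow", narrow), ("wide", wide)]:
    # noise gain per output channel, assuming unit white noise per raw channel
    print(name, np.sqrt((m**2).sum(axis=1)).round(2))
```

The wider spectra collect more photons per pixel (less luminance noise), but pay for it in the matrix.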
In a DxO test, this sensor would score better in the landscape and sports scores, but worse in the portrait score. DxO anticipates vendors playing tricks with the color matrix, which is why their portrait score has a relatively high overall weight.
Other tests such as DPR's may be fooled though.
Nevertheless, this technology brings no progress whatsoever.
To promise +1EV better sensitivity is a false news statement. Sorry, DPReview ...
You seem to be biased wrt DxO. So, forget my comments regarding DxO.
The Clarity+ approach increases chroma noise and reduces luminance noise (leaving aside the additional problem of a higher native ISO to avoid clipping of whites).
This is and always was possible, by using a wider transmission spectrum for the Bayer filter colors (which is what Clarity+ does, and what Canon does to some extent too). It can't even be licensed, because the original patent does not restrict the spectra one may apply.
Some may not care about chroma noise. However, I think chroma noise is much worse because it is ugly and does not look like analog grain, which can be artistically pleasant. Chroma noise destroys portraits; grain does not.
The current RGGB Bayer filter is an optimized balance between luminance (resolution/noise) and color (resolution, which is only half, and noise). The Clarity+ idea is old. I imagine it would be a step back for any color camera.
OTOH, a monochrome camera removes the Bayer filter.