Is it April 1st, or is DxO killing all objectivity in sensor testing?
I'm also shocked that they chose to manipulate the score of their own product by image stacking.
DxO, to me you no longer work as a neutral source for sensor tests.
qwertyasdf: Well....the last time I heard Pentax delaying some stuff (Mysterious voice: FF camera), it was delayed a decade +....
The Pentax MZ-D didn't get delayed; it was cancelled outright and never brought back to life. A FF camera didn't make sense at the time (around the year 2000) due to cost, financing and competition. But that of course isn't set in stone forever. Things change. Now it makes sense, and Pentax started a new FF project around 2010, not based on the MZ-D. That's what was announced in February, and it will probably bear fruit in October or November.
JustDavid: ...and this new 70-200mm F2.8 example was due to be the first full frame lens to be graced with the Star designation... Not exactly correct: there were Pentax FF* (FA*) lenses back in the film era, and some of them are still available to buy. This would be the first digitally optimised Pentax FF* lens (D FA*).
Maybe the article's writer, Damien Demolder, forgot about the long history of the Pentax Star series and simply forgot to write "first D FA* ...". A minor glitch. Let's hope he rediscovers Pentax soon; there is a lot to like and love.
Cameron R Hood: Pentax is doomed...DOOMED!
Sell your Pentax stock to me for nothing :dribble:
Pentax will grow a lot with FF.
Mika Y.: Personally I've been happy for many years with a plain basic no-name GPS module that I just keep in my camera bag and geotag the images later based on the logs on my computer with a free application. The Canon's unit has a few nice additional features such as adjusting the camera clock and also providing direction information in addition to basic location data, but they're just not worth the price difference for me personally.
With today's prices and sizes of GPS chips, we should all be freed from the extra unit, the extra cost and the extra steps of merging in the location data.
People take for granted that a $200 phone has GPS. Why shouldn't we take for granted that $1000 cameras have it?
arhmatic: I vote to boycott these products. Is there a good reason for GPS not to be included in any camera? - other than greedy camera makers hoping to charge for them separately?
GPS is now part of almost every little tiny mobile phone. The chip is tiny and inexpensive... It could be included with minimal effort... Additionally, I personally refuse to have a dongle attached to my already big and bulky SLR, AND it takes up a camera port that could be used for something else.
Look at what Pentax did with their GPS attachment, the O-GPS1, when they upgraded the K-3 to the K-3 II. The GPS unit got integrated, and not just free of charge: the newer camera got a $200 lower price tag than the K-3 without GPS.
Who doesn't love what they did there, and the signal it sends to the competition?
Earth Art: I saw this article and thought the light was much smaller than it actually is, until I took a look at their website. This would be cool for light painting objects in landscapes, for much softer lighting compared to a small headlamp.
It might help to mention these produce over 1100 lumens, which is really damn bright! My biking headlamp is 800 lumens and is capable of burning retinas. :)
A really neat product for sure.
Edit: I think the lumen output I mentioned was for the old model. If the new one is 50% brighter, then wow. Very impressive. Portable tanning salon.
There is a difference between lumens and lux. Lumens measure the total amount of light a source puts out. Lux measures how concentrated that light is when it hits a surface at a given distance. Holding an 800 lm flashlight right up against your eyes is certainly going to hurt them. An 1100 lumen source lighting up a wall a few meters away delivers far fewer lux and doesn't hurt your eyes.
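The lumen/lux distinction can be made concrete with a small sketch (a simplified point-source model; the beam angles and distances below are made-up illustration values, not the actual specs of either light):

```python
import math

def lux_at_distance(lumens: float, distance_m: float,
                    beam_angle_deg: float = 360.0) -> float:
    """Illuminance (lux) at distance_m from a source emitting the given
    luminous flux. 360 degrees treats the source as isotropic; a narrower
    beam concentrates the same flux into a smaller solid angle."""
    if beam_angle_deg >= 360.0:
        solid_angle = 4.0 * math.pi  # full sphere
    else:
        half = math.radians(beam_angle_deg / 2.0)
        solid_angle = 2.0 * math.pi * (1.0 - math.cos(half))  # cone
    return lumens / (solid_angle * distance_m ** 2)

# 1100 lm spread over a wide 120-degree panel lighting a wall at 3 m,
# versus an 800 lm headlamp with a tight 15-degree beam at 0.5 m:
print(round(lux_at_distance(1100, 3.0, 120)))  # ≈ 39 lux, gentle
print(round(lux_at_distance(800, 0.5, 15)))    # ≈ 59000 lux, ouch
```

The same flux in a tight beam at close range delivers over a thousand times the illuminance of the wide panel at a few meters, which is exactly why the panel is comfortable and the headlamp isn't.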
geo444: How about a massive 5600 mm F/5 for... $800? A sample with the M1 Crab Nebula (6 x 4 arc minutes apparent dimension): www.dpreview.com/galleries/7467909648/photos/3128975
What you need for the challenge: a Pentax Q or Q10 ($200), a Pentax K for Q adapter ($250) and a SkyWatcher 200/1000 Newton ($350).
That's not 5600mm f/5. It's 1000mm f/5.
Or a 35mm-equivalent 5600mm f/28, if 35 mm is your flavor of equivalency.
Scottelly: Fankly, I'm surprised it's less than $100,000 dollars. You'd think something made by NASA or for NASA would be really expensive, just because of what it represents. Imagine having one of Henry Ford's cars or a suit, horse, or hat belonging to George Washington? How about a pen used by Thomas Jefferson or better yet, how about a quill and ink well used by Shakespeare? One day this stuff will be legendary, and people in the future will pay big money to get anything from Earth, let alone something like a telescopic lens, which represents our curiosity about space and other planets and stars, solar systems, and galaxies, where most humans will live one day, in maybe as little as 100 years. Give something like this to a child, who is 5 or 6 now, but will be 20 or 30 when given something like this, and that child may get a large fortune for it in 100 years, when people are living an average of 150 years or more and space travel is as normal as air travel is today.
NASA throws away lots of junk every day. Most of it is less interesting and never reaches eBay. NASA is not Shakespeare.
There are many untold stories about investment items that simply turned uninteresting and valueless over the years. Only time will tell whether the seller here comes out of this better than the buyer.
spikey27: Minimum focusing distance?
My 1325 mm f/13 Maksutov-Cassegrain has a minimum focusing distance of about 10 meters. That's more than close enough: at that distance a sparrow will fit in the image frame, but a seagull won't.
I would guess the 2450 mm has a minimum focusing distance of 20-30 meters.
Besides, it would be impractical for macro photography, and I can't imagine who really needs long-distance macro.
Hugo808: What's the EQV focal length on micro 4/3s?
Focal length is a physical length. It's 2450 mm regardless of what sensor sits at the focal plane.
Focal length equivalency is about field of view. This 2450 mm lens will give you the same 2450 mm focal length on mFT as any other 2450 mm lens on mFT: about the same FoV as a 4900 mm lens on 35mm, roughly an 8900 mm lens on 6x6 cm sensor/film (crop factor about 0.55), or about an 875 mm lens on a Pentax Q.
But this will most likely be used with a 0.79x-crop CMOS medium format camera like the Pentax 645Z, meaning you get the same FoV as with a 1930mm lens on FF or a 1300mm lens on APS-C. That FoV happens to be just marginally larger than a super full moon.
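The equivalence arithmetic in these replies can be captured in a small sketch (the crop factors are approximate diagonal ratios, and the format labels are my own):

```python
# Crop factors relative to the 35 mm format (approximate diagonal ratios;
# assumed round values, not exact specs).
CROP = {
    "Pentax Q": 5.6,
    "mFT": 2.0,
    "APS-C": 1.5,
    "35mm FF": 1.0,
    "44x33 MF (645Z)": 0.79,
}

def ff_equivalent(focal_mm: float, f_number: float, fmt: str):
    """35 mm equivalent focal length and f-number (same FoV and DoF)."""
    c = CROP[fmt]
    return focal_mm * c, f_number * c

def same_fov_focal(focal_mm: float, from_fmt: str, to_fmt: str) -> float:
    """Focal length on to_fmt that gives the same field of view."""
    return focal_mm * CROP[from_fmt] / CROP[to_fmt]

print(ff_equivalent(1000, 5, "Pentax Q"))                  # (5600.0, 28.0)
print(same_fov_focal(2450, "44x33 MF (645Z)", "35mm FF"))  # ≈ 1935 mm
print(same_fov_focal(2450, "44x33 MF (645Z)", "APS-C"))    # ≈ 1290 mm
```

The first line reproduces the geo444 telescope discussion above (1000mm f/5 on a Pentax Q behaves like 5600mm f/28 in 35mm terms), and the other two reproduce the 645Z figures.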
VidJa: does anyone know where we are in efficiency of the sensors? with other words, how many of the available photons do we measure with current sensors and how far can we expect to improve?
martin, you have got your numbers very, very wrong.
The QE of a modern FF CMOS sensor without a color filter array can reach about 90%. The color filters intentionally do not block the two other colors completely; in practice they block about 50% of the light. Transmission losses in the lens are about 10-30%, where the 30% figure accounts for sub-optimal light angles and reflections in the filter stack or in the sensor itself (assuming an FSI-type sensor). The loss of electrons on the chip, between the well and the ADC, is actually very low: from a few percent in the worst case to less than a percent in the best case.
For comparison, 70s-style color film makes use of only about 2% of the light hitting it.
Another comparison: cheap commercial multicrystalline solar cells have an efficiency of about 10-25%.
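Multiplying these loss factors together gives a rough photon budget (the individual figures are the approximate round numbers from my reply above, not measurements of any particular sensor):

```python
# Rough photon-to-electron budget for a modern FF CMOS sensor,
# using assumed round figures:
lens_transmission = 0.80   # 10-30% lens/filter-stack loss; take 20%
cfa_transmission  = 0.50   # Bayer filters pass roughly half the light
sensor_qe         = 0.90   # QE of the bare sensor
readout_survival  = 0.99   # ~1% electron loss between well and ADC

overall = (lens_transmission * cfa_transmission
           * sensor_qe * readout_survival)
print(f"{overall:.0%}")          # 36% of photons end up counted
print(f"{overall / 0.02:.0f}x")  # ~18x better than ~2% color film
```

So even with every loss in the chain included, a modern camera counts roughly a third of the photons reaching the lens, nowhere near martindpr's 1/1000 figure.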
martindpr: The most important thing is the type of those test tubes you've got there, i.e. their efficiency. Because today, with our tech, translated into layman's terms: they have a small opening, a bad angle towards the rain, and those raindrops evaporate too soon, before reaching the bottom and collecting in the sampler. And all manufacturers use this same bad recipe. In more technical terms: the color filter reduces the light by at least 2/3 (practically 95%), the CMOS samples only 1/100 of the light transmitted by the filter while the rest is reflected out or lost in the process, and then at least 30-40% is lost in the circuitry. Then the JPEG loses an additional 80% of the information, and we're left with 1/1000 of the light information of the original scene. Solution? A refractive type of color filter, and graphene instead of CMOS, for no visible noise at any sensor size.
Where did you get that 1000x figure from?
The numbers you are presenting are nowhere near reality.
bwana4swahili: "once captured, the signal-to-noise ratio of any tone can't be improved upon." Maybe for a particular image but stacking multiple images of the same scene can improve the signal-to-noise ratio (SNR) by the SqRt of the # of images, i.e.: stacking 4 images will give a 2x improvement in SNR. This approach is very effective and used extensively in low light situations such as astro/nightscape photography.
I just use a single exposure at ISO 80 if max DR is a priority. I live in northern Norway, and here the summer night is too bright for long exposures, so I only shoot real night photos in cold weather, meaning low thermal noise. I usually don't take exposures longer than a few minutes, so thermal noise is not a problem, and I don't mind shooting those as a single exposure. If I travel south of the polar circle to someplace with warm, dark nights, I might try the comparison again: a single long exposure versus the same total length chopped up into smaller pieces/exposures.
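The square-root-of-N improvement bwana4swahili describes is easy to demonstrate with a quick simulation (a shot-noise-only toy model; the signal level and frame count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Flat patch with mean signal 100 e- and pure shot (Poisson) noise.
# Averaging N frames should improve SNR by sqrt(N): 4 frames -> 2x.
signal_e = 100.0
n_frames, n_pixels = 4, 100_000

frames = rng.poisson(signal_e, size=(n_frames, n_pixels)).astype(float)

snr_single = frames[0].mean() / frames[0].std()
stacked = frames.mean(axis=0)
snr_stacked = stacked.mean() / stacked.std()

print(round(snr_single, 1))   # ≈ 10.0 (sqrt of the 100 e- signal)
print(round(snr_stacked, 1))  # ≈ 20.0 (2x better with 4 frames)
```

Real stacked exposures also carry read noise and dark current per frame, which is exactly why splitting an exposure is not always free; see the read-noise versus dark-current discussion further down.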
bakhtyar kurdi: From experience I found something interesting, but I didn't know the reason. As we know, stopping down the aperture gives sharper images until diffraction starts, and we have a sweet spot depending on the pixel density of the sensor, usually around f/8. There is something similar related to exposure time (shutter speed): with longer shutter speeds you get more saturated, less noisy images, but something similar to diffraction happens at long shutter speeds. The sweet spot is between 4-15 seconds; after that it suddenly gets worse and worse. To make it simpler: if we take two images of the same scene, the first at ISO 100, f/8, 1/125 sec, and the second using ND filters until our settings become ISO 100, f/8, 8 seconds, the second image is more saturated, has less noise, better DR and much richer files. I think this article explains my findings. What do you think?
1/125 - 1/60 - 1/30 - 1/15 - 1/8 - 1/4 - 1/2 - 1 - 2 - 4 - 8s.
Seems like 10 stops to me. ;)
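Counting the stops is just a base-2 logarithm of the exposure-time ratio:

```python
import math

# Stops between 1/125 s and 8 s: each stop doubles the exposure time.
stops = math.log2(8 / (1 / 125))
print(round(stops, 2))  # 9.97, i.e. 10 stops
```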
falconeyes: Interesting and important article.
However, it should have used fewer words. The article makes a simple matter look more complicated than it really is, and that may discourage some from reading it.
Everybody who thinks that noise is (mostly) a camera artefact should read the article, though.
DR might be improved by having a non-uniform microlens array, directing more light to some pixels and less to others. Almost like having ND filters on select pixels, except that it's not filtering away useful light, just redirecting it. Thanks to the Fujifilm SuperCCD SR for inspiring this thought. I think this would be more useful than the Magic Lantern dual-ISO approach because it doesn't clip highlights in half of the pixel array.
The way QE is calculated (electrons/photon), it's actually possible to exceed 100% QE if one photon releases more than one electron. That is possible with highly energetic photons, such as UV light. The solar cell industry already experiments with multi-junction technology to exploit this. It's not yet economical for solar cells, but imaging sensors are priced far higher per unit area and might use this technology to achieve higher QE soon.
FWC, or to be more exact, FWC per unit area, is also something to improve. Even NIR might be used to improve noise levels.
Color filter efficiency might be further improved by letting two colors through at each pixel instead of one. A subtractive technique would then convert the image back to RGB.
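A purely illustrative sketch of that subtractive idea, assuming ideal two-color filters (cyan = G+B, magenta = R+B, yellow = R+G, which real dyes are not): recovering RGB is then just inverting a linear system.

```python
import numpy as np

# Each row maps a scene (R, G, B) to one subtractive sample.
M = np.array([[0.0, 1.0, 1.0],   # cyan-filtered pixel sees G + B
              [1.0, 0.0, 1.0],   # magenta-filtered pixel sees R + B
              [1.0, 1.0, 0.0]])  # yellow-filtered pixel sees R + G

def cmy_to_rgb(cmy: np.ndarray) -> np.ndarray:
    """Recover RGB from a neighbourhood of C, M, Y samples."""
    return np.linalg.solve(M, cmy)

scene_rgb = np.array([0.8, 0.5, 0.2])
samples = M @ scene_rgb                  # what the sensor would record
print(np.round(cmy_to_rgb(samples), 3))  # [0.8 0.5 0.2]
```

Each filter passes two-thirds of the spectrum instead of one-third, which is where the efficiency gain would come from; the cost is that the inversion mixes noise between channels.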
With pixel sizes approaching the diffraction limit, it might be useful to start thinking about bigger/different color arrays than RGGB. As a start I would suggest four pixels: RGIR - RBIR - GBIR - RGBIR.
tlinn: Great article, Richard. A couple questions:
1) Does shot noise present as primarily luminance noise or color noise too?
2) Am I correct to infer that part 2 of this series will answer the question of whether or not it is beneficial to ETTR at ISOs other than the base ISO?
All photons have a single distinct color, or wavelength, so physically luminance noise doesn't exist: it's all chroma noise. But color filter arrays don't filter exact wavelengths; each filter covers a spectrum of wavelengths, even overlapping the other colors. It's the width of these spectra and the amount of overlap that sets the balance between chroma and luminance noise in a raw image; software then adjusts that balance further.
There might also be an effect of mirror shake and shutter shock making longer exposures sharper. These shake sources affect the image most between 1/100 and 1/10 s.
At medium-long exposures (1/10 - 10 s), wind might matter a lot. At longer than 10 s, dark current and soft ground under the tripod might affect sharpness. A warm camera and lens in a cold environment might also cause thermal focus creep at very long exposures.
Trying to reduce shot noise by taking longer exposures gets us into a battle between dark current noise and read noise. This is very relevant when taking nightscapes and astrophotos: averaging many short exposures accumulates a lot of read noise, while one extremely long exposure accumulates much dark current noise.
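That trade-off can be sketched with a toy noise model (all numbers are made-up illustration values; note that this simplification holds the dark-current rate constant, while in reality one continuous long exposure warms the sensor and raises the rate, which is the dark-current penalty described above):

```python
import math

def total_noise(signal_e, total_s, n_frames, dark_rate_e_s, read_noise_e):
    """Total noise in electrons for total_s seconds of integration split
    into n_frames equal sub-exposures. Simplified model: shot and
    dark-current noise depend only on total time, while read noise
    is paid once per frame."""
    shot_var = signal_e                      # Poisson: variance = signal
    dark_var = dark_rate_e_s * total_s       # accumulated dark current
    read_var = n_frames * read_noise_e ** 2  # one readout per frame
    return math.sqrt(shot_var + dark_var + read_var)

# 10-minute total integration, 0.1 e-/s dark current, 3 e- read noise:
print(round(total_noise(1000, 600, 1, 0.1, 3), 1))   # single long: ≈ 32.7
print(round(total_noise(1000, 600, 60, 0.1, 3), 1))  # 60 x 10 s: 40.0
```

In this sketch the split exposure loses because of the repeated read noise; once the rising dark-current rate of a warm sensor is added, the balance can tip the other way, which is exactly the battle described above.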
This was analyzed with various settings over at Clarkvision.com some years back: http://www.clarkvision.com/articles/night.and.low.light.photography/