Earth Art: I saw this article and thought the light was much smaller than it actually is, until I took a look at their website. This would be cool for light painting objects in landscapes, giving much softer lighting than a small headlamp.
It might help to mention these produce over 1100 lumens which is really damn bright! My biking headlamp is 800 lumens and is capable of burning retinas. :)
A really neat product for sure.
Edit: I think the lumen output I mentioned was for the old model. If the new one is 50% brighter, then wow. Very impressive. Portable tanning salon.
There is a difference between lumens and lux. Lumens measure the total amount of light a source puts out. Lux measures how concentrated that light is when it hits a surface at a given distance (lumens per square meter). Holding an 800 lm flashlight right against your eyes is certainly going to hurt. An 1100 lumen source lighting up a wall a few meters away is much lower in lux and doesn't hurt your eyes.
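Roughly, lux is just lumens spread over the illuminated area, so the inverse-square law does the work. A minimal sketch (the uniform-beam model and the 60 degree beam angle are my assumptions, not any product's specs):

```python
import math

def lux_from_lumens(lumens, beam_angle_deg, distance_m):
    """Approximate illuminance (lux) for a conical beam hitting a flat spot.

    Assumes all lumens land uniformly inside the beam cone -- a rough
    model; real beams have hot spots.
    """
    radius = distance_m * math.tan(math.radians(beam_angle_deg / 2))
    area = math.pi * radius ** 2  # illuminated spot area in m^2
    return lumens / area          # lux = lumens per square meter

# Same 1100 lm light: painful up close, gentle at a distance.
close = lux_from_lumens(1100, 60, 0.1)  # 10 cm from the eyes
far = lux_from_lumens(1100, 60, 3.0)    # lighting a wall 3 m away
```

Doubling the distance quarters the lux, which is why the wall-washing use case is so much easier on the eyes.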
geo444: How about a massive 5600 mm f/5 for... $800? Here's a sample with the M1 Crab Nebula (6 x 4 arc minutes apparent dimension): www.dpreview.com/galleries/7467909648/photos/3128975
What you need for the challenge: a Pentax Q or Q10 ($200), a Pentax K for Q Adapter ($250), and a SkyWatcher 200/1000 Newton ($350).
That's not 5600 mm f/5. It's 1000 mm f/5.
Or a 35 mm equivalent 5600 mm f/28, if 35 mm is your flavor of equivalency.
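The conversion is just multiplication by the crop factor. A quick sketch (the ~5.6x crop factor for the original Pentax Q's 1/2.3" sensor is the commonly quoted approximation):

```python
def equivalent(focal_mm, f_number, crop_factor):
    """Scale focal length and f-number by the crop factor to get the
    35 mm 'equivalent' framing and total-light behavior."""
    return focal_mm * crop_factor, f_number * crop_factor

# A 1000 mm f/5 Newtonian on a Pentax Q (~5.6x crop) frames like
# roughly a 5600 mm f/28 lens on full frame.
eq_focal, eq_fnum = equivalent(1000, 5.0, 5.6)
```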
Scottelly: Frankly, I'm surprised it's less than $100,000. You'd think something made by NASA or for NASA would be really expensive, just because of what it represents. Imagine having one of Henry Ford's cars, or a suit, horse, or hat belonging to George Washington. How about a pen used by Thomas Jefferson, or better yet, a quill and inkwell used by Shakespeare? One day this stuff will be legendary, and people in the future will pay big money to get anything from Earth, let alone something like a telescopic lens, which represents our curiosity about space and other planets, stars, solar systems, and galaxies, where most humans will live one day, maybe in as little as 100 years. Give something like this to a child who is 5 or 6 now but will be 20 or 30 when given it, and that child may get a large fortune for it in 100 years, when people are living an average of 150 years or more and space travel is as normal as air travel is today.
NASA throws away lots of junk every day. Most of it is less interesting and never reaches eBay. NASA is not Shakespeare.
There are many untold stories about investment items that just turned uninteresting and valueless over the years. Only time will tell whether the seller here comes out better than the buyer.
spikey27: Minimum focusing distance?
My 1325 mm f/13 Maksutov-Cassegrain has a minimum focusing distance of about 10 meters. That's more than close enough. At that distance a sparrow will fit in the frame, but a seagull will not.
I guess the 2450 mm has a minimum focusing distance of 20-30 meters.
Besides, it would be impractical for macro photography, and I can't imagine who really needs long-distance macro.
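The sparrow-vs-seagull claim can be sanity-checked with a simple magnification approximation. A sketch (the 23.5 mm APS-C sensor width is my assumption, and focus breathing is ignored):

```python
def frame_width_mm(sensor_width_mm, focal_mm, distance_mm):
    """Approximate scene width captured at a given subject distance,
    using the thin-lens magnification ratio distance/focal."""
    return sensor_width_mm * distance_mm / focal_mm

# 1325 mm lens, APS-C sensor, 10 m minimum focus distance:
width = frame_width_mm(23.5, 1325, 10_000)
# roughly 18 cm across -- room for a sparrow (~15 cm), not a seagull (~60 cm)
```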
Hugo808: What's the EQV focal length on micro 4/3s?
Focal length is a physical length. It's 2450 mm regardless of what sensor sits at the focal plane.
Focal length equivalency is about field of view. This 2450 mm behaves like any other 2450 mm lens on mFT: about the same FoV as a 4900 mm on 35 mm, a 9800 mm on a 6x6 cm sensor/film, or an 875 mm lens on Pentax Q.
But this will most likely be used with a 0.79x CMOS medium format camera like the Pentax 645Z, meaning you get the same FoV as with a 1930 mm on FF or a 1300 mm on APS-C. That FoV happens to be just marginally larger than a super full moon.
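These cross-format numbers all come from ratios of crop factors. A sketch (the crop factors are the usual approximate values):

```python
# Approximate crop factors relative to 35 mm full frame.
CROP = {"FF": 1.0, "APS-C": 1.5, "mFT": 2.0, "645Z": 0.79, "Q": 5.6}

def matching_focal(focal_mm, mounted_on, target):
    """Focal length on `target` that gives the same field of view as
    `focal_mm` mounted on `mounted_on`."""
    return focal_mm * CROP[mounted_on] / CROP[target]

# 2450 mm on the 0.79x 645Z frames roughly like 1935 mm on full frame
ff = matching_focal(2450, "645Z", "FF")        # ~1935 mm
apsc = matching_focal(2450, "645Z", "APS-C")   # ~1290 mm
```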
VidJa: Does anyone know where we are in the efficiency of sensors? In other words, how many of the available photons do we measure with current sensors, and how far can we expect to improve?
martin, you have got your numbers very, very wrong.
The QE of a modern FF CMOS sensor without a color filter array can reach about 90%. The color filters intentionally don't block the two other colors completely; overall they block about 50% of the light. Transmission losses in the lens are about 10-30%, where the 30% figure accounts for sub-optimal light angles and reflections in the filter stack or in the sensor itself (assuming an FSI-type sensor). The loss of electrons on the chip, between the well and the ADC, is actually very low: from a few percent in the worst case to less than a percent in the best case.
For comparison, 70s-style color film makes use of only about 2% of the light hitting it.
Another comparison: cheap commercial multi-crystalline solar cells have an efficiency of about 10-25%.
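Multiplying the loss factors above gives a rough photon budget. A sketch using the figures quoted in this comment (best-case lens transmission assumed):

```python
def overall_efficiency(lens_t=0.8, cfa_t=0.5, qe=0.9, readout_t=0.99):
    """Fraction of photons entering the lens that end up as counted
    electrons: lens transmission x color-filter transmission x quantum
    efficiency x on-chip readout efficiency."""
    return lens_t * cfa_t * qe * readout_t

eff = overall_efficiency()  # about 0.36, i.e. roughly a third of the light
```

Even with pessimistic inputs this lands far above the "1/1000" claim below.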
martindpr: The most important thing is the type of those test tubes you've got there, i.e. their efficiency. Because today, with our tech, translated into layman's terms: they have a small opening, a bad angle towards the rain, and those raindrops evaporate too soon before reaching the bottom and collecting in the sampler. And all manufacturers use this same bad recipe. In more technical terms: the color filter cuts at least 2/3 of the light (practically 95%), the CMOS samples only 1/100 of the light transmitted by the filter while the rest is reflected out or lost in the process, and then at least 30-40% is lost in the circuitry. Then the JPEG loses an additional 80% of the information, and we're left with 1/1000 of the light information of the original scene. Solution? A refractive type of color filter, and graphene instead of CMOS, for no visible noise at any sensor size.
Where did you get that 1000x figure from?
The numbers you are presenting are nowhere near reality.
bwana4swahili: "once captured, the signal-to-noise ratio of any tone can't be improved upon." Maybe for a particular image but stacking multiple images of the same scene can improve the signal-to-noise ratio (SNR) by the SqRt of the # of images, i.e.: stacking 4 images will give a 2x improvement in SNR. This approach is very effective and used extensively in low light situations such as astro/nightscape photography.
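The sqrt(N) improvement is easy to demonstrate with a toy shot-noise simulation (pure Poisson model; the signal level and pixel count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 1000.0   # mean photon count per pixel per frame
pixels = 100_000

def snr(frames):
    """SNR of the per-pixel average of `frames` shot-noise-limited frames."""
    stack = rng.poisson(signal, size=(frames, pixels)).mean(axis=0)
    return stack.mean() / stack.std()

# Stacking 4 frames should roughly double the SNR (sqrt(4) = 2).
ratio = snr(4) / snr(1)
```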
I just use a single exposure at ISO 80 if max DR is a priority. I live in northern Norway, and here the summer night is too bright for long exposures, so I only shoot real night photos in cold weather, meaning low thermal noise. I usually don't take exposures longer than a few minutes, so thermal noise is not a problem, and I don't mind shooting those as a single exposure. If I travel south of the polar circle to someplace with warm, dark nights, I might try to compare again: a single long exposure versus the same total length chopped up into smaller exposures.
bakhtyar kurdi: From experience I found something interesting, but I didn't know the reason. As we know, stopping down the aperture gives sharper images until diffraction starts, and we have a sweet spot depending on the pixel density of the sensor, usually around f/8. There is something similar related to exposure time (shutter speed). With longer shutter speeds you get more saturated, less noisy images, but something similar to diffraction happens at longer shutter speeds; the sweet spot is between 4-15 seconds, after which it suddenly gets worse and worse. To make it simpler: if we take two images of the same scene, the first at ISO 100, f/8, 1/125 sec, and the second using ND filters until our settings become ISO 100, f/8, 8 seconds, the second image is more saturated, has less noise, better DR, and much richer files. I think this article explains my findings. What do you think?
1/125 - 1/60 - 1/30 - 1/15 - 1/8 - 1/4 - 1/2 - 1 - 2 - 4 - 8s.
Seems like 10 stops to me. ;)
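For the record, the stop count is just a base-2 logarithm of the exposure-time ratio:

```python
import math

def stops_between(t1_s, t2_s):
    """Number of stops between two shutter speeds; each stop doubles
    the exposure time."""
    return math.log2(t2_s / t1_s)

stops = stops_between(1 / 125, 8)  # log2(1000), just under 10 stops
```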
falconeyes: Interesting and important article.
However, it should have used fewer words. The article makes a simple matter look more complicated than it really is, and that may discourage some from reading it.
Everybody who thinks that noise is (mostly) a camera artefact should read the article, though.
DR might be improved by having a non-uniform micro lens array, directing more light to some pixels and less to others. Almost like having ND filters on select pixels, except that it's not filtering away useful light, just redirecting it. Thanks to the Fujifilm SuperCCD SR for the inspiration for this thought. I think this would be more useful than the Magic Lantern dual-ISO approach because it doesn't clip highlights from half of the pixel array.
The way QE is calculated (electrons/photon), over 100% QE is actually possible if one photon releases more than one electron. That is possible with energetic photons like green, blue and UV. The solar cell industry already experiments with multi-junction technology to exploit this. It's not economical for solar cells yet, but imaging sensors are priced far higher per unit area and might use this technology to achieve higher QE soon.
FWC, or to be more exact, FWC per unit area, is also something to improve. Even NIR might be used to improve noise levels.
Color filter efficiency might be further improved by letting two colors in at each pixel instead of one. A subtractive technique can then convert the image to RGB.
With pixel sizes approaching the diffraction limit, it might be useful to start thinking about bigger/different color arrays than RGGB. As a start I would suggest four pixels: RGIR - RBIR - GBIR - RGBIR.
tlinn: Great article, Richard. A couple questions:
1) Does shot noise present as primarily luminance noise or color noise too?
2) Am I correct to infer that part 2 of this series will answer the question of whether or not it is beneficial to ETTR at ISOs other than the base ISO?
All photons have a single distinct color, or wavelength, so physically luminance noise doesn't exist; it's all chroma noise. But color filter arrays don't filter exact wavelengths. Each filter covers a wavelength spectrum, even overlapping the other colors. It's the width of the color spectra and the amount of overlap that sets the balance between chroma and luminance noise in a raw image; software further adjusts that balance.
There might also be the effect of mirror shake and shutter shake making longer exposures sharper. These shake sources affect the image most between 1/100 and 1/10 s.
At medium-long exposures, 1/10 - 10 s, wind might matter a lot. At longer than 10 s, dark current and soft ground under the tripod might affect sharpness. A warm camera and lens in a cold environment might also give thermal focus creep at very long exposures.
Trying to improve shot noise with longer exposures gets us into a battle between dark current noise and read noise. This is very relevant when taking nightscapes and astrophotos. Averaging many short exposures gives a lot of read noise, while one extremely long exposure gives much dark current noise.
This was analyzed over at ClarkVision.com some years back with variable settings: http://www.clarkvision.com/articles/night.and.low.light.photography/
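The trade-off can be put in a toy formula: stacked frames pay read noise once per frame, while the dark-current signal scales with total time either way. A sketch with illustrative, made-up noise figures (not measured from any camera):

```python
import math

def total_noise_e(total_s, frames, signal_eps=5.0, read_e=3.0, dark_eps=0.1):
    """RMS noise in electrons for `frames` stacked exposures totalling
    `total_s` seconds: photon shot noise + dark-current shot noise
    + read noise paid once per frame, added in quadrature."""
    signal = signal_eps * total_s   # accumulated signal electrons
    dark = dark_eps * total_s       # accumulated dark-current electrons
    return math.sqrt(signal + dark + frames * read_e ** 2)

one_long = total_noise_e(600, frames=1)     # one 10-minute frame
many_short = total_noise_e(600, frames=60)  # sixty 10-second frames
# Many frames add read noise; in a real camera the single long frame
# may also run warmer, raising its dark current beyond this model.
```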
"There are three factors that affect how much light is available for your sensor to capture: your shutter speed, f-number and the size of your sensor."
I don't quite like this formulation, for two reasons:
1. You forgot to take the amount of light in the scene into account.
2. F-number and sensor size should be rephrased as effective aperture area (including lens transmission losses) and field of view. I think this is more informative and more directly describes why sensor size matters (larger aperture area for a given field of view).
It should also be noted that sensor sizes don't come on a finite scale ranging from 1/3" to full frame. It's a continuous scale, with some physical limitations (from quantum physics to astrophysics) and practical current technological limitations: from about 1/10" to about the diameter of a large wafer. And some economical limitations reduce max size to medium format for DPR readers. (NASA doesn't look for telescope sensors here.)
SimenO1: Nobody has commented on the price yet? The K-3 was introduced at $1299. The K-3 II adds a $199 O-GPS1 unit (and other goodies) and is priced DOWN to $1099. That's a bargain!
Makes me wonder if the pricing makes room for a ~$1500 full frame later this year. I can't believe Pentax will introduce only a low-cost FF model. They will probably introduce a high-end model with class-leading features too, at maybe $2500.
Alex: Pentax is underpricing the competition on high-end APS-C. I think they will do the same on both low-end and high-end FF. Maybe not $1500, but maybe $1800?
DVT80111: Sorry for my ignorance. How does it work at slow speeds? Do I have to wait for the camera to finish 4 pictures? What if the object is moving? Will I get a noiseless but blurry picture?
The 8.3 fps is probably limited by bandwidth and A/D circuits, not by the shutter. And stacked APS-C BSI chips with built-in burst memory are not coming yet; I guess it's more than three years before that happens. The Samsung NX1 has APS-C BSI without stacking, and Sony has a 1/2.3" BSI with stacking, but neither is stacked with burst memory (which requires TSV and more layers). That's several years into the future.
zakaria: I assume there will be a Mark IIs version including a built-in flash and Wi-Fi. This is what Ricoh did with the K-5 series; there were three K-5s on the market at the same time.
This IS the mark II (noted by the II after K-3). Do you mean a K-3 (mark) III?
wolfie: So no increased MP from sensor shift?
Better RESOLUTION, not MP. Just like Foveon has resolution advantages over a Bayer sensor with the same MP count.