At its 2020 X Summit in London earlier this week, Fujifilm announced it's working on a 400-megapixel 'pixel shift' mode for its GFX 100 medium format mirrorless camera system.
The impending feature was teased as Fujifilm engineers talked about adding a new function to the GFX 100 (around the 39:30 mark in the above video). According to the engineers, the ultra-resolution mode would use 'sub-μm order control' of the sensor position to create 400MP stills from the 100MP sensor inside the GFX 100. Specifically, the engineers noted the new technology would be able to control pixel shift with 10x more precision than is currently possible with the in-body image stabilization system.
Aside from the aforementioned details, no other information was given as Fujifilm engineers moved on through the 2020 X Summit. So, until Fujifilm reveals more, it's simply a matter of waiting to see how long it takes for the new feature to find its way to the company's 100MP medium format mirrorless camera.
One would need a $50,000 Mac Pro just to process the files. But I imagine that those who can afford a $10,000 camera body can also afford a $50,000 computer.
I currently use the GFX100 with a 10-core 2017 iMac. When you start to layer the files in PS you can easily hit 2GB files. That 2017 iMac handles them fine. With the 2020 iMacs having more processing power, and cores getting cheaper, the Mac Pro would be overkill.
400MP! Blimey, that's roughly an 800MB file. You'll need the latest Mac Pro fully loaded @ $50K plus about 16 monitors just to view the image! For 99.9% of people that's not viable... for the 0.1%, happy days!
Sounds exciting, but leaves questions. If it requires a tripod, it makes little sense to me. 4 times higher resolution for a 4 times larger frame? Also pretty useless. It makes sense only if the same frame gets 4x the pixel density.
Fujifilm is living its own dream still thinking more megapixels will sell more cameras.
Those who wanted a GFX100 by now have one, and sales of this camera have since stalled. No 400MP pixel shift technology is going to change that, unless Fujifilm plans to sell this camera below the 3500 dollar mark.
You have insight into Fuji's order books? The demand in the beginning was much higher than Fuji anticipated, and it was quite a long time until the camera was actually available in stores in the US and Europe. This new feature might raise that demand even further.
That demand has stalled, MirKoK. It is easy to see in the turnover figures recently released by Fujifilm for Q3: a 20% drop in camera sales and a 53% drop in revenue compared to Q3 the year before.
Funny - as if a camera needs competition for sales to fall... No, there is just no interest any longer, and yes, the sales have stalled. Why else give a rebate of 2000 euros on a new GFX100? ;)
When such rebates are given, you are desperate for customers.
Fuji themselves stated that GFX100 sales are still solid, but you know what, I don't care. I have my GFX100, I am happy with it, and I will be even happier once the multi-shot feature arrives. Have a good day, Duncan.
Usually these pixel-shift modes produce images that are a bit blurry... I hope Fujifilm will also offer a more reasonable output option, like 200 megapixels, but a sharp 200 megapixels...
@Roland non square pixels? You can easily reduce a 400MP image to 200MP and it will look sharper... Just open Photoshop and try... No need for non square pixels LOL... The pixel shift implementations I have seen try to get too much resolution...
So Fuji is actually improving their cameras. Good to know that Fuji cares about more than in-camera JPEG quality and options. Their medium format cameras show that they are moving forward instead of sticking with old ways.
I hope the same for Canon and Olympus. They seem to have an obsession with old ways.
Most of us do not, of course, but there is interest in fields like artwork documentation and analysis for preservation and restoration. For this, it might be the greater color accuracy achieved by the "color oversampling" that is more important than the increased resolution.
But it's very useful for Big Brother, to watch big crowds and strange persons; it can be useful... In a consumer camera? It gives a nice holiday picture of the pool :-)
By the way, the "400MP" is a bit misleading: even though 400 million pixel values need to be output to contain all the information, it is not equivalent to a single shot 400MP sensor. Instead the clearest benefits seen so far in multi-shot implementations are improving color accuracy and reducing aliasing.
They still don't get it? High-res modes are gimmicky marketing nonsense. Simple image stitching beats all the high-res modes. Because it doesn't need tripods and dead-still subjects.
Spoken like someone who hasn't actually used it. I'm a stitching fan, but HiRes is at least as useful and less trouble. Both have uses, and both have limits.
"stitching requires longer lenses" - Stitching eliminates the need for wider lenses.
"and sometimes a panoramic head" - No. It only requires a normal head with a working brain in it.
"HiRes is at least as useful and less trouble" - It is useless for anything that is moving. And it's a lot of trouble. Specially when trying to fix it in post.
"both have limits" - See, this HR mode has a 400mp limit. Stitching has no limits.
"bayer filter mechanism and its disadvantages" - Compared to what exactly? If you'd understand what the 'image quality' is, then you'd care more about Bayer advantages.
"And how do you do that without a panoramic head?" - With my hands. I've got two.
"stitching *and* high-res for each shot" = PITA^2
Look, stitching FF images gives me Medium Format results with plenty of megapixels. Why would I need HR mode? Besides, I rarely stitch into square(ish) images. I shoot verticals (portrait orientation), which possibly helps keep the F-point in place (give or take) by turning it with my wrist (which also puts the potentially softer corners at the top and bottom edge of the stitched frame). Normally it doesn't require critical accuracy. Why don't you try it? It is very easy. Way easier than dealing with HR modes, trust me. I've stitched trees at close distance. It works fine.
I would really prefer to capture it in one shot. But the HR mode doesn't solve the problem. It is too unusable (in most scenarios). Even if I could afford a Medium Format system (which is like a $12+K investment), I think I would still need to do some stitching (less than with FF, of course).
@LoneTree1 Only if you are shooting long exposures. And you can't really do those with HR modes while not in studio or some sort of controlled environment. Normally it takes 5-10sec.
Most people don't give a damn about quality. I get it. What I don't get, though, is why you care about HR at all. Why do you need those 400mp if you don't care about all the messy artifacts it produces?
Problems with stitching:
1. Takes a long time relative to in-camera high resolution modes.
2. You cannot use a lens that distorts much, or you end up with lousy images.
3. The tedium of doing it in post-process. How much time do people have on their hands?
Positives of stitching: allows for slightly less weird effects on moving objects (though you can get "duplicates" of people walking, etc.). You can do as many images as your computer can handle, therefore theoretically infinite resolution.
@IdM photography Don't forget that you might need to use special software to open those pixel-shifted HR RAWs and convert them to DNG or TIF for processing.
@Hunter_C Stitching needs a computer and stitching-capable software. Now, shooting for stitching doesn't necessarily require a tripod and still subjects. It's not mandatory. - How??? - Elementary, my dear Watson :). Aim and shoot.
The latest Olympus camera doesn't need a tripod to do high-resolution shots, though there are some constraints on it. But to do stitching, you are better off using a tripod with an indexed alt-azimuth head of some sort to allow uniform shooting. There are also motorized heads that will do it automatically. Panoramas are easy, but if you really want high resolution you need to shoot rows and rows of images.
Re: "special software" - I'm sure that's a huge problem for the .0001% of photographers who know how to stitch but don't know how to process raw files.
@LoneTree1 #1. Not true. And I really doubt that everyone here has a computer capable of dealing with 400mp RAW files and processing them fast. #2. You can easily use distorting lenses if your stitching software corrects those distortions. #3. Time is always a problem. And if you want to use the HR mode for fast snapshots (in-camera compiled JPEGs), then we have absolutely different goals. I don't care about such a gimmicky and extremely specialized feature for shooting oversized snapshots. It doesn't make any sense to me.
1. I'm referring to the time it takes to acquire the image in the first place. HR takes seconds. A high-res image from stitching takes much longer. 2. You can correct distortions, but you cannot maintain the same quality as with non-distorting lenses, and you also end up with a cropped image. 3. No one I've seen is using the HR mode for fast snapshots. People tend to use it for specific shots and not grab shots.
@TomFid Like I said, time is a problem. "Special software" is an extra time-consuming hassle.
@LoneTree1 Clearly, you have no idea how easy stitching really is in 2020. You don't need special heads or tripods or too much time for it. "if you really want high resolution you need to shoot rows and rows of images" - What I really don't need is a 400mp 4:3(!) image with over 50% throwaway megapixels and limited ultra-wide options. I would rather stitch 5 vertical shots into a 400mp larger-format panorama. Kapish?
"HR takes seconds" if it's even possible. And then you need to take a few of those HR ... just to increase your chances of success and not producing a total failure, only slightly messed up result. For me (or any serious photographer), that's not acceptable, considering the cost of the GFX system.
I don't really get what you are talking about. HR shots are just like normal shots but they take a second or two. 8 images (Olympus) are taken in rapid succession and combined in-camera. They are then rendered as RAW or JPEG files (big ones) that are treated like any other file. I have no idea why you would think you would need to take "a few" shots to get one good one.
@ecka The special software to convert pixel shift files to DNG is Sony's (bad) method; hopefully Fujifilm will come out with a better one: in-camera processing...
"I don't really get what you are talking about" - That sums it up nicely :).
"8 images (Olympus) are taken in rapid succession and combined in-camera" - And during that time, wind blows, trees move, water flows, people walk, birds fly, etc. With HR, every pixel is a potential problem. While with stitching, only some few stitching lines might produce visual artifacts, which can be corrected much easier than HR mess.
"They are then rendered as RAW or JPEG files (big ones) that are treated like any other file" - Which is the case with Olympus' 20mp sensors. Sony HR modes require special software.
Re "special software" - like I said, spoken like a non-user. Panasonic and Olympus HR files don't need anything unusual. Presumably Fuji will use the same approach, because it's just not that complicated. They require less time for me to use than stitching (but I still stitch many things).
I think your entire argument can be summed up as, "stitching is great, and hires doesn't work FOR ME, therefore it's a marketing gimmick FOR EVERYONE."
"Panasonic and Olympus HR files don't need anything unusual" - And they are pretty useless too. No better than a single FF shot. A waste of time really. "Fuji will use the same approach" - It's not about the approach. It's about what's possible with current level of technology. "I still stitch many things" - You are either doing it wrong, or lying. Otherwise you wouldn't argue.
With nearly every innovation in digital, Olympus has done it right. From HR to live view, to IBIS to sensor cleaning. Not their fault if Sony went the stupid route on HR.
I don't really have a good impression of Fujifilm because the "film" in "Fujifilm" makes me think that it's focused on film rather than digital. Can anybody prove me wrong please? I mean it!
Fujifilm is different from Kodak. Kodak's camera problems began LONG before the current state of the industry we see now. Kodak had 62% of the entire compact camera market, but were making little to no profit off it. They also inexplicably abandoned higher-end cameras after helping to create the market. It's like they purposely committed economic suicide.
I just mentioned in another article that, except for some bad HDR and the more recent Panasonic and Sony models' pixel shift options, SLRs and MILCs have not embraced any computational photography, which I consider a huge missed opportunity.
To me, 400 MP seems like way overkill, but I can't see the problem in having it as an offering. I'm still stunned at the Panasonic and Sony examples shown in the DPR comparison tool and could see using pixel shifting for select, static situations (e.g., cataloging one's artwork, detailed forensic shots of the side of a building before adjacent construction for potential damages claims, etc.).
That is very interesting, because I shoot pixel shift regularly for archiving photos and you can see a big difference between the pixel shift images and the single-shot files after they are exported from Capture One. Apple's Finder doesn't handle them at all, but Capture One seems to.
Wouldn't it use Fujifilm software as a plugin? Like how I currently go in and out of different programmes like PTGui and Photoshop. The actual merging would surely be best done in Fujifilm's own software, then the output imported into C1.
Most people need more megapixels like they need another hole in their head. Most people don't need more than 20-24MP for sharing on the Internet or printing up to around ~16x24" prints. I suppose there is that .05% of photogs that need 400MP, but for the rest of us, it's a waste of HD space and money. I wonder how many lenses will properly resolve all 400MP? Does anyone know the answer to that question?
This is a free software update and owners of this camera don't have to use it if they don't want to.
But let's face it: the market for a 100MP MF $10,000 camera is in and of itself a rather specialized niche. I would think that a good number of the owners of this camera might be interested in such a feature.
In fact, if you make the rather safe assumption that owners of this camera are really into high resolution imagery, then this feature makes probably more sense with this camera than with most.
So while I agree with you that most people don't need this ultra high resolution, the purchasers of this camera are not most people.
This isn't really about megapixels, as the final image will be the same size as a 100mpx image. It is about having separate shots for each color and then combining them. This results in the elimination of false color (moiré), clean color channels, and incredible resolution.
Most people don't need more than 3MP, because 90% of us are still using FHD monitors. Even if they shoot for 4K display, 12MP is enough. 20MP and above is for prints.
OK, I agree, for the .05% of people who need this free upgrade and can make billboard-size prints, it is a great upgrade. But, like I said, for the rest of us normal photogs that don't need it, it's just another worthless clickbait article and FW upgrade. BTW, are any of you going to use it? I rest my case.
For viewing and sharing on the internet you don't need more than 2MP. I wonder how many people actually print out any of their photos very often.
As for the use of mega mega pixel cameras, there are people, organisations and companies that do require these types of cameras in order to produce very large prints, among other reasons.
These are specialised units, no different to any other specialised pieces of equipment used in other specialised areas. (Notice the overuse of the word specialised. Done to highlight the fact that this will not be a common, everyday piece of equipment used by everyday people.)
It is not about higher resolution so much as moiré elimination. Look at the studio test images of cameras that have pixel shift but poor moiré-elimination computational algorithms, like the Sony a7R IV. Look at the small text.
- Extremely good computation: Panasonic S1R. Also has pixel shift, but uses better math than average when operating without it.
- Mediocre computation: Sony.
- No computation: Hasselblad X1D does moiré elimination selectively with a mask in Phocus. Of course the dpreview studio image didn't do that, so you see raw extreme moiré.
You rested your case before hearing any answers... Yes, I will use it for studio work and product shots where I control the subject's movement... It's a fantastic capability... As someone already said, people who buy a 100mpix camera are not shy about using more megapixels when they are available... even if only for very specific use cases...
It really isn't about megapixels. But here, reading isn't fundamental. But sure - I guess someone doing web posting isn't going to drop $10,000 on a medium format 100mpxl camera. For the love of mike...
@Snapa .. I'm wondering if you understand the context when they say "400mp"; it's not the same as a 400mp single shot.
** .. and for the love of peaches, it doesn't take a lot of pixels to make a silly billboard. That's one thing you DON'T need a lot of pixels for.
@EricAotearoa : "For viewing and sharing on the internet you don't need more 2MP"
(sigh)... So you're saying that, on the web, a 2mp macro image of the human eye on a 33" monitor looks the same as a 100mp macro image of the human eye?
You're saying that a 2mp image of a postage stamp has the same detail on the web as a 150mp image of a postage stamp?
I think your statement would be more accurate if you said "I don't need more than 2mp" rather than speaking for others, when the difference between high and low resolution cameras is evident on the web.
@Snapa - "BTW, are any of you going to use it? I rest my case."
Please don't be so simple. As you said, very few people need this. It's also true very few people need a GFX100. It's not for you. It's not for me. But for those who it is, this is a very nice upgrade.
Probably you should go watch a proper explanation of megapixels. No lens can resolve all the megapixels there are. And even if one lens could, you would have to be taking a picture in which everything is, at the same time, exactly red, exactly blue, and exactly green.
Yep, I used to shoot the Canon 6D and the only time that I needed more megapixels was when the client wanted to shoot wide and crop in. The Canon EOS R is as much as I will ever need.
To me pixel shift is not a means to get more resolution, but a means to negate the need for Bayer interpolation. Is this accurate? This is why I like Pentax's application... because you end up with the same-sized file, but with most of the Bayer interpolation errors (which can be seen when pixel peeping) eliminated. It also makes it easier to process because the image sizes are manageable.
@gravis92 - that is kind of correct. The most important result is the cleaner image. But the Bayer interpolator needs to blur the image somewhat to avoid Bayer artifacts, so you also get more resolution.
It isn't a question of "handling". Whenever a lens OR sensor is improved, you get a better image - period. Of course you will be able to get more out of the sensor with better lenses, but the same can be said if you turn the equation around.
I've worked with Phase One, Hasselblad digital, and Fuji. Fuji's lenses are fantastic! Their standard focal length lens is considerably better than the Phase One offering but it does cover a smaller image circle. Yes, 400mp is a LOT to ask of a lens. All of these systems are often used with large format lenses designed for digital.
I know, and not only that, it is a fake Bill Gates joke. Bill Gates never said that 640 KB would be enough for all the future. He is far too knowledgeable for that. It is a myth.
I know you're being facetious, however... It would be amazing to see a crop of a macro of an artist's reproduction work demonstrating how authentic the layered repro work looks. Or showing off the detail in sculpted work. Both would be plainly visible on an iPhone, because the detail in question is small enough to be easily discernible on a small screen, and it can also be appreciated as a much larger image on a larger screen.
The smaller the detail, the easier it is for viewing on a smaller screen. Conversely, I wouldn't want to view a wide-angle, 150mp aerial view of the Amazon, on my phone... even though the same phone allows me to view minute tissue detail of the human eye when taken with a high resolution camera.
Hmmmm... what happens with sub-pixel pixel shift aliasing? All images will have aliasing from 100 MP. And then you combine them. Will that give the same aliasing as a 400 MP image?
@roland For multishot you don't want a heavy AA filter. The aliasing components that enter each of the shots are precisely the reason that multi-shot offers increased resolution when combined.
Most people do not get the benefit from this. Nobody wants a 400MP output! But if you develop a 400MP image and at the end resize it to, let's say, 8K resolution, you have better IQ than if you take the 100MP image and resize it to 8K resolution. You get huge reductions in noise and Bayer artifacts.
That is the reason why megapixel numbers count!
Anybody can test that easily themselves if they take the raw test images supplied by dpreview from different-resolution cameras, resize them to 4K or 8K images, and compare at pixel level.
And yes, of course the photographer makes the good picture and not the camera.
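For anyone who wants to run that test, here is a minimal Pillow sketch (the file names are placeholders for whichever studio-scene exports you download, not real files):

```python
from PIL import Image

# Downsample exports from two cameras to a common 8K width and
# compare them at pixel level, per the test described above.
for path in ["gfx100_export.tif", "400mp_pixelshift_export.tif"]:  # placeholder names
    img = Image.open(path)
    target_w = 7680                                    # 8K UHD width
    target_h = round(img.height * target_w / img.width)
    img.resize((target_w, target_h), Image.LANCZOS).save(path.replace(".tif", "_8k.tif"))
# Open the two *_8k.tif files side by side at 100% and compare
# noise and Bayer artifacts in fine-detail areas.
```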
@gianstam: Google Maps does not use 400MPix images. Advertising companies do not need ultra-high-resolution images: they would not find anywhere to print them. (For example, giant ads are very low resolution! When I asked professionals what resolution would BENEFIT the clients, all of them I could find said that about 30MPix cameras would saturate the ACTUAL needs of the clients.)
This leaves science — and amateurs. An amateur with $30 budget can print a 30in×60in image with a very-high-quality rendition of 110MPix. With 3:2 aspect ratio of a camera, this is a 150MPix RENDERED image. To have a high-quality 150MPix output, you need about 300MPix Bayer input.
(This assumes a perfect lens; with real lenses, you better have yet-higher resolution sensor to grab EVERYTHING which the lens can deliver.)
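A quick sanity check of that arithmetic (a minimal sketch; the print size, the ~2x Bayer input factor and the 110MPix figure all come from the comment above, not from any official source):

```python
import math

# Rough check of the print-resolution arithmetic above.
print_w_in, print_h_in = 60, 30           # 30in x 60in print
rendered_mp = 110e6                       # claimed very-high-quality rendition

# Implied print density (pixels per inch):
dpi = math.sqrt(rendered_mp / (print_w_in * print_h_in))
print(f"implied density: {dpi:.0f} dpi")            # ~247 dpi

# A 3:2 camera frame covering the 60in width spans 60in x 40in,
# so the full captured frame needs:
captured_mp = rendered_mp * (40 / 30)
print(f"captured frame: {captured_mp/1e6:.0f} MP")  # ~147 MP

# Rule of thumb from the comment: ~2x Bayer input for a given output.
print(f"Bayer input: {2*captured_mp/1e6:.0f} MP")   # ~293 MP
```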
400MPX pixel shift from a 100MPX sensor does not mean a 400% resolution increase. The resolution increase alone with pixel shift is ca. 50%, which modern lenses are possibly capable of resolving. Even if not, the sharpness/microcontrast increase will still be substantial. Additionally, there are equally positive benefits like dynamic range, true color readout and much-reduced noise.
It depends. Fuji is claiming ten times better precision with their system so resolution should be better than previous pixel shift efforts. Of course the proof of the pudding is in the eating so we'll just have to wait and see.
100MP is enough, I think many of the lenses could not resolve 100MP even in this "medium" format (that is not Medium Format).
Real-world 100MP is still exciting for me as a portraitist. I use a 36MP camera with a lens that definitely has the needed definition, and it's a breakthrough from my previous APS-C camera with its 12MP.
100MP or more gives a portraitist the ability to dig into the beautiful scars life makes on one's face, their eyes, and reproduce it in a larger-scale print, with incredible, lively details. A lifetime in a single shot, a dream!
If you can tolerate the loss of some tonality the images would compress down pretty well using Adobe's lossy DNG compression. A sample 240-megapixel Sony A7r IV 16-shot pixel shift uncompressed raw file I shot comes in at around 1.82GB. That compressed down to only 73MB using DNG lossy compression (full resolution).
Very good point. Now, think back to March 8, 1983, the day IBM launched the PC/XT, the first serious desktop computer to include an internal hard drive. It was a 5.25-inch, full-height drive that held all of 10 megabytes. We used to wonder how in the world we'd ever fill it up. That wouldn't even hold one JPG from today's cameras. And the first CF card I bought for my 8.2MP Canon 20D in 2004 was 1GB, and cost about $150.
@DotCom, I still remember reports of Bill Gates saying nobody would ever need more than 640KB of RAM. He denies he ever said it, but the legend lives on :)
@Horshack I don't remember Bill saying that, and I covered him for years as a technology journalist with many trips to Redmond each year. But it sounds like something he would have said back in the DOS 2.1 days with the 8-bit 8088 or 8086 processor. Once 16-bit Windows arrived, starting with Win 3.0 in May 1990, you needed megabytes of RAM to get any decent performance. With the 16-bit bus limitation, Windows had to do memory-address bank switching to access larger amounts of RAM.
My first computer, in 1983, was the original dual-floppy (720 KB capacity if dual-sided) IBM PC Model 5150, which maxed out at 256 kilobytes of RAM on the motherboard. I bought an AST Six Pack Plus board to bring RAM up to a whopping 320 KB (and add parallel and serial ports, which the PC did not natively have). It was expensive. Remember, a bank of 64 KB was actually 9 DIP chips that you had to insert into separate sockets. (The ninth chip was for parity checking.) I have thousands of stories from those days.
I vividly remember upgrading a bunch of old IBM PCs with either 16 or 64kb native RAM, two full-height 256k floppy drives and a clock speed of 4.7MHz (I think). They got accelerator cards with 8MHz processors and an additional 576kb of RAM to bring a 64k machine up to 640k; we installed a 1.2mb floppy, a 20mb hard drive and a new power supply. And somehow that made economic sense. I felt like I was dropping a 426 Hemi into a Toyota Corolla.
Well, well, well. The X-T4 is coming in a few weeks, as Fujifilm has confirmed. It is rumored to have IBIS. And Fujifilm is adding pixel shift to their latest camera with IBIS.
Any bets on this making its way to the X-T4? (only it won't be 400MP)
Probably not. They would need to switch to the Bayer pattern first. Which is possible, because with multi-shot the moiré problem is solved and X-Trans is not needed any more.
At some point this exercise in reaching for ever more pixels of resolution must become pointless in practical use. For example, 400MP may actually exceed the resolving capability of even the finest glass that lens manufacturing can produce.
@Imageof, Thanks for your reply. Unfortunately I'm still having trouble working it out. For example, some images posted online are crops, which benefit from higher-resolution imagery, especially in focal-length-limited scenarios.
@landscaper1 It doesn't matter if 400mp exceeds what the best lenses can resolve. All this technique needs for 400mp is for the lenses to be able to resolve 100mp, that's it.
What matters is angular movement, and how quickly it can make the captures in between. In theory, if you can shoot at a shutter speed as long as or longer than the entire capture time without getting any blur, it should work. In practice it's a little different: when I say "without getting any blur" I mean "if you zoom in to 400% and scan across the entire image and can't find any area of blur at all", and many people read it to mean "looks sharp to me", which is nowhere near enough. Also, if there are small vibrations during capture (from the shutter, etc.) they can exacerbate the pixel shift errors more than a single long exposure.
(Been working with pixel shift on Sinar and Hasselblads for coming on 15 years now).
Foveon sensors produce incredible images with real light. Other sensors fake it when the light gets slow, and fill in the gaps with algorithms. It's just like film. Foveon sensors are for film shooters who want more convenience occasionally. If you don't primarily shoot film, you'll miss the point of these cameras.
A lot cheaper than the other 400MP sensor-shift camera out there (the $46,000 Hasselblad H6D-400c MS). Though the Hasselblad does have a noticeably larger sensor: 53.4x40mm vs 44x33mm on the Fuji.
Can someone point to an article that explains how a camera's shutter speed and pixel shift work together? Is the implementation drastically different between manufacturers? Thanks.
The camera takes 4 shots, similar to an HDR, just with the same exposure on each and moving the sensor a tiny amount each time. So your chosen shutter speed is what each shot is taken at, with a tiny bit of time between each shot for the sensor to move. So for this iteration of High Res Mode overall time for the shot is shutter speed x 4 + small time for sensor to move.
Different approaches exist regarding the number of shots, not the shutter speed; Olympus for example takes 8 shots, not 4 like Fuji.
As far as I am aware, Olympus have the most complete approach to high res, with the E-M1X even offering Hand-Held High Res, where no tripod is needed, as it can stabilise and move the sensor the required amount at the same time.
I don't know what your question about shutter speed is referring to, but I'll say one thing about it... no matter what your shutter speed is, it will take substantially longer to capture a pixel shift image than just the length of the shutter speed. Each shot is taken at the shutter speed, and then there is time between each shot to actually move the sensor and deal with things like clearing the data from the sensor to the buffer or resetting the sensor/shutter.
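A minimal sketch of that timing arithmetic (the 50ms per-move gap is a made-up placeholder, not a measured figure for any camera):

```python
# Total capture time for a multi-shot high-res mode:
# each of the N frames is exposed at the chosen shutter speed,
# with a gap between frames for sensor movement and readout.
def high_res_capture_time(shutter_s: float, n_shots: int, gap_s: float) -> float:
    return n_shots * shutter_s + (n_shots - 1) * gap_s

# e.g. 4 shots at 1/125s with a (hypothetical) 50ms gap per move:
print(f"{high_res_capture_time(1/125, 4, 0.05):.3f} s")   # ~0.182 s
# vs. 16 shots at 1s each, where the gaps barely matter:
print(f"{high_res_capture_time(1.0, 16, 0.05):.2f} s")    # ~16.75 s
```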
There are several different implementations of pixel shift. But let's start with a single shot:
Single shot (no pixel shift): the camera takes a single image under a Bayer filter, with 1/4 of the pixels being red, 1/2 green, and 1/4 blue, and interpolates the 2 missing colors of any pixel from the adjacent pixels. There is twice as much green because human vision sees brightness (and sharpness) based around green.
Actual pixel shift, 4-shot uninterpolated: the sensor shifts exactly one photosite so it gets red, green, and blue at every location. The file is the same number of megapixels as the sensor (and TIFFs are the same size), but with 2x the amount of green the image appears twice as sharp, and the extra red and blue minimize the color fringing caused by moiré.
16-shot: the simplest (mathematically) way to double the resolution in each direction (or quadruple the MP) is to take the 4-shot mode and then move 1/2 a pixel in each direction from that. It takes a lot of time to capture, but the image processing is simpler and it keeps the uninterpolated (sharpness/moiré) benefits of the 4-shot.
6- or 8-shot: start with something like the 4-shot uninterpolated method, then move a 1/2 pixel in a few directions. Not enough to get full color at every pixel, but enough to interpolate as we do with the Bayer filter and still give you a 4x MP count. While it's a little softer than 16-shot, it's quicker.
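To make the 4-shot idea concrete, here's a toy numpy sketch of the sampling geometry (a simplified model with wraparound edges and perfect registration, not any manufacturer's actual pipeline):

```python
import numpy as np

# Toy full-color scene, H x W x 3 (R=0, G=1, B=2).
rng = np.random.default_rng(0)
scene = rng.random((8, 8, 3))

def bayer_sample(scene, dy, dx):
    """Sample the scene through an RGGB Bayer filter, sensor shifted by (dy, dx) whole pixels."""
    h, w, _ = scene.shape
    vals = np.zeros((h, w))        # one value per photosite
    chan = np.zeros((h, w), int)   # which color that photosite captured
    for y in range(h):
        for x in range(w):
            c = [[0, 1], [1, 2]][y % 2][x % 2]   # RGGB layout
            vals[y, x] = scene[(y + dy) % h, (x + dx) % w, c]
            chan[y, x] = c
    return vals, chan

# Four exposures, moving the sensor one full pixel between them, so every
# scene position is eventually seen by an R, a B, and two G photosites.
h, w, _ = scene.shape
full_color = np.zeros_like(scene)
hits = np.zeros_like(scene)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    vals, chan = bayer_sample(scene, dy, dx)
    for y in range(h):
        for x in range(w):
            sy, sx = (y + dy) % h, (x + dx) % w
            full_color[sy, sx, chan[y, x]] += vals[y, x]
            hits[sy, sx, chan[y, x]] += 1

full_color /= hits
print(np.allclose(full_color, scene))  # True: real R, G and B at every pixel, no demosaicing
```

The 16-shot variant described above would repeat this with additional half-pixel offsets, which is the part that actually grows the pixel count.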
How fun. Good for Fuji for figuring this out. I'm impressed given the mass of the assembly that they have the precision to do it. I hope the raw files are fully assembled in camera for easy consumption - .RAF files for Fuji, right?
There are two types of sensor pixel shift on Bayer sensors - one which shifts the sensor by 1 full pixel in each direction (4 shifts), and one which shifts the sensor by a sub-pixel amount (8 or 16 shifts).
The full-pixel shift method results in a raw with the same resolution as a standard, non-shifted raw. The difference is that each pixel within the raw will have 3 separate color samples - one for each color of the bayer sensor (Red, Blue, and Green x2 averaged), whereas a standard bayer photo can only sample a single color at each pixel. This results in a raw with better color resolution and lower artifacts.
The sub-pixel shift method results in a raw with more resolution, because data is being sampled "between" pixels (for lack of a better analogy). This has the potential to further reduce artifacts and moiré, although it may ultimately be limited by the resolving power of the lens since the effective sampled pitch is much finer.
The resolving ability of lenses is measured as the number of uniquely identifiable pairs of lines per millimeter of projected image (LP/mm), independent of the sampling medium the lens image is projected upon. Digital cameras use image sensors as the sampling medium, and their method of sampling is a grid of evenly spaced fixed pixels. The density of these pixels determines the maximum identifiable pairs of lines per millimeter of sensor area that can be sampled from the image projected by the lens. In sub-pixel shifting, the effective number of pixels sampled per millimeter of sensor area increases, since the sensor is being shifted by distances smaller than the fixed pitch of the pixels. The additional number of line pairs that can be sampled from this increased sampling per millimeter is ultimately capped by the LP/mm resolving ability of the lens.
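As a rough illustration of those sampling limits, using the GFX 100's published sensor dimensions (the derived numbers are back-of-envelope, not Fujifilm's):

```python
# Back-of-envelope sampling limits for the GFX 100 sensor.
sensor_w_mm = 43.8          # sensor width in mm
px_across = 11648           # horizontal pixel count (~102MP total)

pitch_mm = sensor_w_mm / px_across           # ~0.00376 mm (3.76 um)
nyquist = 1 / (2 * pitch_mm)                 # lp/mm the static grid can sample
print(f"pitch: {pitch_mm*1000:.2f} um, Nyquist: {nyquist:.0f} lp/mm")   # ~133 lp/mm

# Half-pixel shifting halves the effective pitch, doubling the sampling limit:
print(f"sub-pixel-shift limit: {1/(2*(pitch_mm/2)):.0f} lp/mm")         # ~266 lp/mm
# ...but only if the lens actually delivers contrast at those frequencies.
```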
Can't speak for Horshack, but take a look at this Wikipedia article, https://en.wikipedia.org/wiki/Super-resolution_imaging, particularly 2.2.3. There is also an article by the Google team that does photography explaining how their zoom feature works: https://arxiv.org/abs/1905.03277. They don't do sensor shift though, just rely on multiple handheld shots, which adds additional challenges (alignment). Now, when one says "it works" it just means "there is a clear improvement over not doing anything"; it doesn't mean it exactly matches the output of a higher-res sensor or telephoto.
Full-pixel/4-shot pixel shift gives you the same MP file, but an apparent doubling of the sharpness (humans see luminance based around green, so 2x the green makes the image appear twice as sharp). In most cases the extra red and blue resolution isn't as noticeable, but it helps a lot in cases where you would get moiré.
An interesting experiment to help understand the importance of more green is to take a color image (it doesn't have to have a ton of color in it, but it needs to be RGB) in Photoshop. Go to the channels and blur the Red channel by something like 8 pixels, then look at the RGB view. Undo it, then do the same to just the Blue. Then undo that and do the same to the Green. You'll see that the image doesn't look nearly as soft when you blur the red or blue channels as when you blur the green... human vision can be weird.
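The same experiment can be scripted; a minimal Pillow sketch ('photo.jpg' is a placeholder for whatever RGB image you have handy):

```python
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("RGB")   # any RGB test image

# Blur one channel at a time and save the recombined result,
# mimicking the Photoshop channel experiment described above.
for i, name in enumerate(["red", "green", "blue"]):
    channels = list(img.split())
    channels[i] = channels[i].filter(ImageFilter.GaussianBlur(8))
    Image.merge("RGB", channels).save(f"blurred_{name}.jpg")
# Viewed side by side, the green-blurred version looks noticeably
# softer than the red- or blue-blurred ones.
```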
@ozturert, The only way you can demonstrate your observation that your resolution increased in line with pixel-shift subpixels is by plotting the MTF of a regular vs sub-pixel shifted image. Short of that your observation is only empirical in nature.
I don't think anybody here claimed that the resolution increase would be in line with the 'pixel' increase. Here's a test with images illustrating the difference: http://www.wrotniak.net/photo/m43/em1.2-hires.html
(See also his follow-up article linked at the end where he compares different lenses.)
Horshack, it was your claim: "although it may ultimately be limited by the resolving power of the lens since the effective sampled pitch is much finer."
I asked for demonstration, you couldn't. I'd like to see picture examples because we are talking about photography, not theoretical physics.
@ozturert, How would a 'picture example' demonstrate the full increase in resolution? You can't measure resolution in that fashion to validate your assertion that the full increase in sub-pixel shift resolution is achieved. If you'd like some actual data of the resolution increase from sub-pixel shifting I believe Jim Kasson has some MTF measurements on his site.
Yes, it's just not "limited" in the sense that if the lens doesn't resolve x lines on any sensor, it won't be able to resolve, say, 1.2x lines with super-resolution techniques. That's what these techniques are for.
Horshack, you can measure MTF values maybe. If you can't measure this, then why claim something you cannot prove? You just throw something in the air that you cannot show.
@ozturert, I said the sampling resolution of a sensor may ultimately be limited by the resolving ability of a lens. Lenses don't have unlimited resolving ability. I'm not sure why that's a concept you have trouble accepting. And why can't it be measured?
@piccolbo Luckily nobody ever claimed that specifically: “If the lens doesn't resolve x lines on any sensor it won't be able to resolve say 1.2 x lines with super-resolution techniques.”
Horshack just said that this super resolution technique might run quickly into diminishing returns if the lens doesn’t have enough resolution to start with. Meaning with a lower IQ lens one might be hard-pressed to see much of a difference at all and only with the best lenses will one see a clear enough difference.
You could go as precise as you want (1600MP, 6400MP, extrapolate m43 pixel shift precision to MF size), but you would need:
- a lens capable of resolving it
- while not being limited by diffraction (wide-ish open)
- unless you're shooting test charts, enough DoF (so stopped-down-ish)
- and a subject (and camera) still enough to not move a bit during the multiple exposures.
Not an easy task. And I would have use for such system (just not funds probably).
400Mpixel... being low... how do you mean... why do you need more than 400mpixel? And not because you want it... have a real reason... because I don't hear any...
While you can move in as many sub-pixel steps as you want and upscale the images to, eventually, terapixels, the actual detector size will act as a low-pass filter, so you will eventually not get more resolution. I guess it is not so easy to get more than twice the resolution, i.e. four times the number of pixels. Maybe three times the resolution (i.e. nine times as many pixels) would also get an increase in actual resolution, or maybe not.
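For reference, the low-pass effect being described is the pixel-aperture MTF. For a square photosite of width $a$ (assuming a 100% fill factor), a standard form is

$$\mathrm{MTF}_{\text{pixel}}(f) = \left|\frac{\sin(\pi a f)}{\pi a f}\right|,$$

which first reaches zero at $f = 1/a$, i.e. at twice the Nyquist frequency of the un-shifted grid (whose pitch equals $a$). So sub-pixel shifting can at best roughly double the usable linear resolution before the photosite's own aperture erases everything finer, which matches the "not much more than twice" estimate above.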
Not to worry man... you good. It will instantly notify you of "upload failed" which should take a huge worry off your mind. Nothing like instant gratification or lack of...... :-)
@jovdm I help a local museum with digitization of artwork. We currently stitch multiple shots and it takes time. A lot of time. Having it automated (and compatible with external lighting) is a very nice time saver.
We need higher resolutions especially for sharing photos on the web. Consider that they will pretty much always zoom into the image to examine the details. Overall it is best to have more pixels to peep at when it comes to general web/ social media sharing. For more professional work nothing on the market comes close to providing enough detail, thus camera makers need to improve their lenses and sensors.
I myself digitise artwork for museums and artists... but it's for use in catalogues... where 20mpixels is already overkill in res... and we choose what details they need... I guess having a 400Mpixel image would help the color accuracy... and leave some res for details... but never, ever, has someone asked me for more resolution (knowing that most professionals I know doing the same have lower resolutions at hand...). Most of us cater for the situation at hand... but now I understand why you would like higher res... I suppose you shoot with a view camera (the likes of Sinar and such) so you can move the back around to shoot your reproduction in different pieces to stitch... it can/will help you if you do that, to make sure the stitch itself isn't affected by lens faults & perspective changes...
And professionals that need more detail? Please... they need to learn how to frame, not to crop from a high-res image... that's what a real pro does... a pro shoots the needed image, with a small bit extra (for trimming in print...), but there's no need to crop a 10Mpixel image out of 400 because they forgot to shoot the right frame :-D
Actually, right now, in a pilot study, it's a lowly 28MP Samsung NX500 and an automated pano head. I have been burned by estimating too low in the past (we'll never need more than 720p video, right?) so we try to get as much data as feasible now. We have settled on at least 600dpi for flat-ish art (preferably 1200 for "important" or smaller items). This comes out to 200+MP on average.
This is a large undertaking and it's simpler to do it right than to repeat it. They are currently evaluating Panasonic and Sony offerings, but this throws a wrench in the plans. We'll just have to wait and see.
The camera (and lens) is just a part of the problem; lighting and color correction are very hard to do right reliably and repeatably. It's a learning process, but it's very rewarding for me.
Of course... 720p will be enough (I don't see anything 'needed' in 8K video...), but I hear what you are saying... I see what you are working with (and looking at working with). Color correction is at least as important as the camera and lens in these cases... Why do they even consider Panasonic and Sony?! I mean... get something better... medium format is the way to go for this type of work... For instance, Hasselblad has the H6D-400c MS, and a colleague who is using the 100Mpixel version in studio tells me he can calibrate his shots by just shooting a reference... letting Hasselblad Phocus measure that... and the basic color is set... for all images... Get a reliable set of flashes: Broncolor... does multi-shot like a charm... the generators tell you when your heads are no longer providing the right color output (more yellow as flash tubes age), so you can switch them in time before you get differences in K between your light sources... Really... who is deciding on these low-tech solutions there?!
@otto k "a lens capable of resolving it" => you don't need the lens to resolve that, you get extra resolution by combining many images. It's not linear, so you need a lot of images to get just 2x the resolution, but the lens doesn't need to have 2x the resolving power for example. Same for 4x.
If you want to read a pro's take on this camera and large files, check out this: https://luminous-landscape.com/a-tale-of-two-fujis-part-i-introduction-and-the-gfx-100/ "Prints larger than that are the domain of the GFX 100. There are places where they're needed – large institutional installations where it is possible to approach the print closely come to mind. If you are placing 60×80" images in the hallways of hospitals, banks or similar clients, the GFX 100 is the perfect camera for that task." Of course you may or may not agree with that, but he pays his bills this way. I think nobody pays attention to the images in a corporate environment, and he says museums don't request sizes that require 100MP+. So there may be applications where you need wall-sized 300dpi prints, but not an awful lot of them.
@pullup - Unfortunately it doesn't work that way. Regardless of the method of sampling (static sensor, shifting sensor, film, eye, anything really) you can not get more information than the lens provides.
While that is the case, the hope would be to pixel shift enough that the lens, and not the sensor, is the only bottleneck to image detail.
The lens is a kind of inefficient AA filter. The lens is sharper in the middle, and it has a soft cut-off. And it has different cut-offs tangentially and radially.
400MP is a 2.2GB TIFF, which is bigger than most scans I made dealing with 4x5 and 5x7 color transparencies. In many cases we didn't scan much higher than around 500MB, which is around the 80MP range. Maybe a little higher for ultra-fine-grain 50 ASA film.
Where I have done much larger files was from APS-C and 135 format digital SLRs by stitching monster panoramas.
Depends on what I'm working on. I needed a wall done in extreme detail but not have stitching errors, so had to build a 3D model of it to avoid parallax error (as we had to move the camera to shoot around objects). The final TIFF might only be 1.8GB, but the files used to build it are around 60GB. These are for documenting artwork.
Doing pixel shift with X-Trans is much more difficult than with Bayer. Maybe Fuji will move to Bayer because of that. With pixel shift the moiré problems are strongly reduced, so X-Trans is no longer needed.
@kb2zuz: Why exactly do you think that I'm wrong? The only thing I said is that even with pixel shift, the lens does not need to resolve more than the 100 MP of the sensor.
That's why they only managed to shoehorn it into a single model (and one that flopped). That doesn't mean it won't be in future X mount cameras, but they made it harder on themselves if they choose to.
@bluevellet: so please also read the article if you post it! They just said, at that time, that the diameter of the X-mount is too small to shift the sensor without vignetting. Everybody knows this was just an excuse in 2016 not to have IBIS. The diameter is large enough for movement, also compared to competitors with IBIS. So the choice of the diameter of the X-mount was not a wrong choice.
The X-H1 flopped not because of its very good IBIS, but because of its sensor and processor, which were quickly outdated as the X-T3 became available shortly after. Another reason is that the design of the camera is not in the X-mount tradition.
Do me and others a favour and do not push the argumentation in a wrong direction.
There's not a single lens for a medium format camera that can resolve such high resolution; I bet it will only help reduce moiré in a very few limited situations. Not sure if it's really worth the significantly increased file size, however.
That's the trick with pixel shift technology. The lens does not need to resolve a 400 MP file. The final image is merged from 4 pictures, each having "only" 100 MP.
Mnemon and Toni... you're both wrong. The 4-shot mode that is uninterpolated 100MP has the pixels in the same places, but you do pick up twice as much green, and if the lens was soft and the Bayer interpolation was hiding the softness, you're not going to gain much there. But the bigger issue is when you go to 400MP: you're shifting the sensor halfway between the pixels... this leads to sub-sampling just as if the sensor had 400MP worth of pixels. So yes, the lens does need to resolve 400MP to truly get the full advantage.
That said... there isn't a magic cutoff where the file looks exactly the same past a certain point of lens resolution. And finally, you can fudge a little sharpness by adding sharpening.
I have the Hasselblad 400MP cameras in the studio and most of the time they stay in 100MP 4-shot mode, but there have been cases where 400MP has shown to pull out just a little more detail. So clearly there are lenses that can at least resolve something more than 100MP, if not the full 400MP.
@Mnemon: you are absolutely wrong. If a lens does not pass through any info about a small detail, no manipulation of the sensor will restore those details. With 400MPix sensor shifts, you'd better have a lens "with 400MPix resolution" (whatever this means for you — there is no such term in optics).
@kb2zuz: the problem with the Hasselblad implementation is that it is a COMPLETE HOAX! If one looks at the geometry of the sensor shifts, it is clear that they read information FROM ONLY 200M positions of photosites. THIS is why the increment w.r.t. the 4-shift mode is not very strong…
(IIRC, to add insult to injury, they read RGBG from only 100M positions; then they have RG from 50M other positions, and BG from the remaining 50M positions…)
I am curious, isn't another benefit that a 400MP image could be downsized to 100MP with greater DR and less noise than the standard 100MP image?
Luckily for this camera, when pixel shift is not an option or with motion and moving subjects, one still gets wonderful 100MP stills.
Sorry to go off topic, but for perspective, if you stack 2 8K monitors (we all have those right?) you get a display with an 8640 pixel height. This camera's 100MP images are 8736 pixels tall. So, I think 100MP is enough for me. :)
Absolutely. For a limited number of photographers who do need the little bit of added quality or the increase in print size and can handle it, this is a valuable tool. For the rest this does not bring any additional benefit.
Tony Northrup made an instructive video about the limits of pixel shift. When it works, it's brilliant, but the slightest ground shake ruins the shot.
Unless Fuji has figured out a way to mask motion artefacts - or at least installs motion detection in-camera to warn if the shot needs to be repeated. The Sony a7R IV has neither...
Jim, it can do both. Shifting the sensor in full pixel widths gets an uninterpolated image, doubling the green, which increases apparent sharpness even without increasing megapixels or file size. Having red and blue at every photosite also means you practically eliminate moiré.
But many cameras can shift the sensor by less than a full pixel, sampling between the photosites as if the sensor had an even higher resolution. This is how you get 400MP out of a 100MP sensor: it actually captures 400MP of information.
You are absolutely right about the limitations, but it has been very valuable on our Hasselblads in the studio. Even then, when working with something that is draped/hanging, where it might blow in a slight draft from the air vents, it has had challenges (but then we can either just go back to regular single images, or blend in a single-shot capture just in the area where movement occurs).
@JimKasson I understand your point, but as @kb2zuz mentions, I'd like to point out that it does increase sharpness.
The easiest example is the fact that a Bayer sensor captures 1/3 of the data (2/3 is estimated). Pixel shift allows a real data increase (even though the resolution is the same).
The same principle can be applied to sub-pixel shift, as more real information composes the final image (thus the sharpness increase).
In summary, sensor shift principally minimises a sensor's limitation (which is principally not a lens limitation).
Guys, ON ONE HAND what Jim says is 100% correct. ON THE OTHER HAND, I strongly disagree with his claims! Just go to his site, and see his comparisons of 100MPix of Fuji vs. 16×61MPix of Sony: the latter images are MUCH sharper.
The reason for the double-think is the meaning of the words. Typically, "sharpness" describes something like "resolution of tiny details". There are two obstacles to such resolution:
• "blurriness" created by the diffraction and aberrations of the lens, and by averaging over a sensel;
• "artefacts" created by de-Bayering and/or interpolation from the sensel grid.
With these terms, what Jim says can be restated as:
• blurriness remains the same;
• artefacts are almost completely avoided.
CONCLUSION: the sharpness may be strongly enhanced.
Furthermore: a significant part of the blurriness can be computationally eliminated by an appropriate post-processing. (The price is a strong noise amplification — so one should better start with little noise!)