The Sony Alpha 1's sensor is so fast that it got Jordan thinking... Could the new a1 use computational photography like a smartphone to create even better photos? We explore this idea, and what it might mean for future cameras.
Mirrorless without a mechanical shutter, now and in the future? I've had that on my 1999 FOVEON Prism camera, with solid-state memory storage, for 22 years and counting! (FOVEON is now owned by Sigma.)
I'm glad to see the world is finally catching up to the work of Carver Mead, Richard Merrill, and Richard F. Lyon in making a great camera (for studio use, in their case) such a long time ago. But it was $50,000 back then, so not many people were interested!
No. Computational photography will NEVER appear in cameras and you people have only yourselves to blame. In 2013 Sony gave you a thrilling glimpse at the possibilities with the DSC-QX100 and QX10 lens-style cameras, but everyone copped a very superior attitude and didn't buy them. Why would any well-run company make the same mistake twice?
The Achilles heel of that tech was the stupidly LONG lag time to take a photo with the smartphone display. So basically you are touting old technology that didn't cut the mustard and didn't sell. But you can't blame Sony. They are a giant that tries to do everything small with big results. 20 years ago, Sony was making 80% of the world's small sensors. They are still way ahead of the pack!
Sadly, you're right. The engineers might have had to make performance trade-offs that users found frustrating, but if we had met them halfway I think they would have developed the lens-style camera into something really useful.
Given that current cameras lack the computational power for processing images this way, and it will be a while before they do, maybe the next step is to provide better interfaces and options within the camera to 'set up' the captures ready for post-processing. For example, if a 1s exposure is needed in a given situation but you're hand-holding, maybe you have the option to split this exposure into a burst of 10x 1/10s shots, giving the same overall exposure but with each frame being sharper. These are grouped on the card in some way (folder or metadata) so they can easily be identified, stacked and post-processed accordingly. This is just one example, and I'm sure similar things have been done before on other cameras. I'm just saying that there's lots of scope for improving the way cameras allow us to make multiple captures intended for further computation, rather than trying to do the computation at the point of capture.
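Just to illustrate, here's a minimal sketch of the stacking half of that workflow, assuming the grouped burst has already been developed into ten aligned 16-bit TIFFs (the file names are made up):

```python
# Sum ten 1/10s frames to approximate a single 1s exposure (a rough sketch).
# Assumes the frames are already aligned; clipping handles blown highlights.
import cv2
import numpy as np

frames = [cv2.imread(f"burst_{i:02d}.tif", cv2.IMREAD_UNCHANGED).astype(np.float64)
          for i in range(10)]

# Summing collects the same total light as the long exposure, while each short
# frame stays sharp hand-held; random noise grows only as sqrt(N), not N.
stacked = np.clip(np.sum(frames, axis=0), 0, 65535)

cv2.imwrite("stacked_1s_equivalent.tif", stacked.astype(np.uint16))
```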
Take this technology to the 1-inch sensor, where it would be more affordable... Sony's disincentive to do this is that they would then not sell high-end bodies and lenses. Google or Apple processing of a 1-inch image sensor would make ILCs as we know them obsolete.
I guess you are not following sensor evolution. Sony has been going in the opposite, but logical, direction: from small sensors to larger ones. Their 1" sensors have had many of the A1's features for a long time. The RX10 and RX100 series take advantage of the readout speed, for instance capturing roughly 1000 fps video, or shooting tens of fps stills with the electronic shutter. We have a lot to learn from Sony's wisdom in the camera/sensor business. The first lesson is to start small, and maybe timidly, then grow progressively, step by step. The result is the subject of this discussion.
That software processing you are referring to is still far away. A serious photographer can't allow processing that would change the actual material of a photographed subject, such as the type of wood or something similar.
Yes, if you had Apple or Google apply their computational photography techniques to even a 1-inch sensor, the results would be phenomenal.
But as noted by a former iOS engineer below, so much more is required than just a sensor with fast readout. You need a very powerful processor, far more powerful than what current cameras have, you need the OS and the computational algorithms, and you need the power management.
The fast read speed of the sensor is only one part of a very complex equation. Currently no camera company is close.
Looking at the dreadful results from the stills DPReview have posted, I am confident my 2015 20MP Lumia 950 would do better in most cases, just set to Pro Auto 50 and DNG. It cost £120 and it also has an f1.9 26mm lens that, in my copy, is sharper than the lenses used for these review samples. Until noise levels equal the Sony A7 III's, regardless of the advantages claimed here, and the prices, I know which one I would use.
Computational photography throws out the exposure triangle as far as the final image is concerned, and the EXIF data will be meaningless, causing the results to be classified as virtual reality rather than photography.
The a1 gallery looks fine, but there's nothing you can't achieve or surpass, IQ-wise, with much cheaper FF cameras from, for instance, Nikon or Canon. I'm also pretty amazed at the amount of noise in the ISO 1250 image in the gallery.
So this is actually the camera (sensor) that killed the "decisive moment", and we are all going to use "frame grabs" in the future. Photography is actually getting more uninteresting with every iteration of sensor. Why use a rifle when hunting when you can use Hellfire missiles or cluster bombs?
This kind of technology has only created another kind of medium between photography and video: animated images. We are all familiar with GIF files as low-resolution and choppy, and I hope they will be done justice by the latest advances in tech.
Cameras can't kill the decisive moment, because it's life that's happening, it's not a photographic phenomenon.
If you want to chase the decisive moment one button push at a time, well, nobody is stopping you, but it's guaranteed that you won't be capturing it very well in a photo.
If you are talking about doing it with video, it probably won't be with the same raw picture quality you get in a photo; video can't do that. For instance, raw video is all compressed by a factor of at least 3:1, probably over 5:1 on cameras like the R5.
Sony didn't invest a bundle in photography for nothing. Clearly they realize they have a major tech advantage and a plan that the competition will struggle with. Computational photography is a part of that.
The problem is this: spend $6-8k on this camera, and there will be a much 'better' one in terms of specs in a year, from Sony or some other company; two years at the outside. And smartphones? They will almost certainly offer FF image quality by TODAY'S standards within a few years, albeit through digital trickery. Buying a new camera is fast becoming a mug's game. How many people, even pros, really know the differences between models, let alone need those differences?
No other company makes stacked sensors for ILCs, so who besides Sony is going to improve on what Sony did?
This isn't smartphones; the ILC market is so tiny that we had to wait over 4 years to get an update to the a9/a9 II stacked sensor.
The a1 won't be bested anytime soon, and probably not then by Canon, given that Canon can't even seem to make a BSI sensor for their most expensive cameras.
The current generation of mirrorless or DSLR cameras has no GPU at all and has much less CPU computing capability than any modern mobile device. I strongly doubt that any complex algorithm can be implemented on such devices.
What is certainly doable is to get a fast burst of images and then post-process it in applications like Lightroom to generate panoramas and HDR images. Indeed, being able to capture frames closely spaced in time minimizes inter-frame subject motion, allowing for more successful post-processing.
I also wanted to point out that the image fusion and denoising algorithms currently implemented in desktop software are really slow and far less sophisticated than those implemented by either Apple or Google. Just think of how many fusion artifacts you get in Lightroom's HDR and compare that to the latest generations of iPhone HDR or OIS (iPhone X and later).
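For comparison, the kind of alignment a desktop stack typically relies on is much cruder than what Apple or Google ship; a toy version with OpenCV (frame file names are hypothetical) looks roughly like this:

```python
# Crude burst merge: estimate a single global shift per frame, warp, then average.
# Real smartphone pipelines align per tile and reject moving content, which is
# exactly where desktop HDR/stacking tools tend to show fusion artifacts.
import cv2
import numpy as np

ref = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
aligned = [ref]
for i in range(1, 5):
    frame = cv2.imread(f"frame_{i}.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    (dx, dy), _ = cv2.phaseCorrelate(ref, frame)        # global (x, y) offset
    shift = np.float32([[1, 0, -dx], [0, 1, -dy]])      # warp frame back onto ref
    aligned.append(cv2.warpAffine(frame, shift, (ref.shape[1], ref.shape[0])))

stacked = np.mean(aligned, axis=0)
cv2.imwrite("stacked.png", np.clip(stacked, 0, 255).astype(np.uint8))
```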
Thank you for your insight based on real world knowledge and experience.
I think you nail it with this summary: "The issue with computational photography is computation: there is lots of it."
Right now the dedicated camera companies lack the processor chips, the OS, and the software algorithms to do this. The readout speed of the sensor is but one part of the puzzle.
Yes, and also the power budget: battery life is already a big issue in mirrorless cameras, and if you had to perform a whole bunch of computation for every frame captured, the power issue would be even bigger :P
It's control over presets, pretty much. iPhone, Pixel and Samsung work on presets, and if conditions match they can do a good job. Most of the time the end result is somewhere in between, really. I've seen them do something properly, but I've also seen them fail miserably. Aggressive post-processing in camera is not the way to go; what camera manufacturers need to do is provide desktop software that can work in conjunction with extensive metadata and give the user a choice over how the effects are applied.
@Supercool: Can you give any idea of what such "extensive metadata" would contain? You already have the RAW from the sensor, a timestamp, lens identifier and the camera settings used: what other information could the camera provide that would be useful in post-processing?
Are you suggesting that the cameras need to be outfitted with extra sensors to be logged alongside the above?
Having worked on some computational photography aspects of the iPhone for several years (2011 to 2016) I have a few remarks for Jordan.
The issue with computational photography is computation: there is lots of it.
The iPhone comes with a really powerful mobile GPU, and for that reason at Apple we were able to implement features like HDR, advanced denoising, image fusion (OIS), etc., to produce high-quality images in reasonably "real time".
Even with the availability of such computing resources, it takes many months (years) for very skilled engineers first of all to come up with the algorithms in question, and then to put together an implementation that is nimble and fast, utilizes a minimum of resources, and generates high-quality images in a few hundred milliseconds.
"it takes many months (years) for very skilled engineers to first of all to come up with the algorithms..."
Or a few weeks for their masters to reach an agreement to sell/share existing technology (yes, I know Apple will not do that but they are not the only source). Sony camera division could have a meeting with Sony phone division.
It is a bit puzzling that it is taking so long to migrate phone tech to standalone cameras. Maybe because the standalone camera market is a pittance compared to phones and just not worth the bother. Adapting existing technology still has startup costs that are not spread out over huge volumes, as they are in phones. And maybe phone tech works great for tiny sensors but isn't all that beneficial for large sensors?
Lol, you overestimate "their masters". These algorithms are really complex and highly tuned to the hardware they run on; porting them to a different platform might just be more difficult than writing them from scratch. And yes, the hardware platform of a camera is way less sophisticated than that of a phone.
Congrats DPR. You went straight to the core. And you made it clear that the FF stacked sensor in the A1 represents one more technological hurdle the industry has overcome. And it took just 7 minutes, not the 16 that opportunistic YouTubers have taken, to explain the marvels that future cameras may bring, starting with this one. People are biting their tongues trying to compare a camera that has not even been released, missing the main point that you brilliantly addressed. And, I know, this is just a first impression.
What is computational photography? "Everywhere, including Wikipedia, you get a definition like this: computational photography is digital image capture and processing techniques that use digital computation instead of optical processes. Everything is fine with it except that it's BS. The fuzziness of the official definitions kinda indicates that we still have no idea what we are doing.
Stanford professor and pioneer of computational photography Marc Levoy (he was also behind many of the innovations in Google's Pixel cameras) gives another definition: computational imaging techniques enhance or extend the capabilities of digital photography in which the output is an ordinary photograph, but one that could not have been taken by a traditional camera. I like it more, and in the article I will follow this definition."
Funny thing about DPR: they can cuss in their articles, but if you repost the same article they won't allow it until you remove their cuss words, a.k.a. B and S. BS.
I would say we are already doing computational photography of a sort in larger cameras. The moment that happened, it became just a matter of how far it will go.
Is in-camera lens distortion correction cheating or not?
From ZDNet (https://www.zdnet.com, Mobility, Jun 2, 2010): one way camera manufacturers can make digital cameras ever smaller and less expensive is by using in-camera software to correct for lens distortion.
Sony had a mode that combined images for an improved night scene in camera. I don't know if it is still on newer cameras. Olympus has their hand-held HR mode that produces high-resolution RAW files with improved IQ. They and others also have in-camera focus stacking for macros with greater DoF. I would say these appear to fit that definition.
One has to wonder how the courts of the future will view, "photography in which the output is an ordinary photograph, but one that could not have been taken by a traditional camera."
Putting computational photography in such a high-end camera has one drawback: it's far from perfect. You still get a lot of artefacts in phone cameras. Would you tolerate these in a $6,500 camera?
And you can still do summing and calculation stuff in post. Just shoot a fast burst of 4-10 pictures and align them; DR will rise significantly. You already have the multi-shot mode for high-res pictures (I hope it will be supported in useful RAW software in the near future). And I bet there is a pretty nice auto bracketing mode for HDR (see the sketch below).
Would be GREAT if the DPReview crew would make a special and show us some of these calculations in real life: how much benefit they bring and how cumbersome the handling is.
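In the meantime, the bracketing half is already easy to try on a PC. A minimal exposure-fusion sketch with OpenCV's built-in Mertens merge (the bracket file names are made up) would be roughly:

```python
# Merge a hand-held 3-frame bracket with exposure fusion (Mertens); no tone
# mapping or camera response recovery needed, and alignment is the rough MTB kind.
import cv2
import numpy as np

imgs = [cv2.imread(f) for f in ("bracket_m2ev.jpg", "bracket_0ev.jpg", "bracket_p2ev.jpg")]

cv2.createAlignMTB().process(imgs, imgs)          # rough alignment of the bracket
fused = cv2.createMergeMertens().process(imgs)    # float output, roughly 0..1

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```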
I don't mean to take a shortcut, but the first computational processing that comes to my mind is focus stacking. It's the pleasure and workhorse of many today. For a long time I have imagined that this could be done instantly, straight from capture. I guess it would not be difficult to do right now in the A1. Linear-motor focusing lenses are quite helpful in this regard, as are the hundreds of focus points. And a future global shutter could make the computing work easier. I just wonder if this would be a cost-effective implementation.
@entropy512 For unusual applications, I agree with you. But for many everyday practical solutions, such as focus stacking, selective focus, perspective correction or differential exposure, every in-camera implementation will be most welcome. We already have the simplest and most basic ones, like picture effects, lens corrections, dynamic range control (DRO, HDR, XLOG...) and the one most praised by average photographers: the OOC JPEG. As with the latter, the camera may never beat the computer, but it may offer a faster solution when and where needed.
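For what it's worth, the core of a naive focus stack is simple enough that in-camera implementations don't seem far-fetched; a toy sketch (no alignment, made-up file names) might be:

```python
# Toy focus stack: for each pixel, keep the frame with the strongest local
# contrast (Laplacian response). Real implementations also align the frames
# and blend seams; this just shows the basic selection step.
import cv2
import numpy as np

frames = [cv2.imread(f"focus_{i}.jpg") for i in range(8)]
grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]

sharpness = np.stack([cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
                      for g in grays])            # per-pixel sharpness, lightly smoothed

best = np.argmax(sharpness, axis=0)               # index of the sharpest frame per pixel
rows, cols = np.indices(best.shape)
result = np.stack(frames)[best, rows, cols]       # pick that frame's pixel everywhere

cv2.imwrite("focus_stacked.jpg", result)
```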
Smartphone sensors are not as fast as you think; the majority read out at 30 fps, yet they can still do interesting tricks. It's not about the speed, it's about whether the company needs these tricks to sell the camera. I'd expect Panasonic to pull something off first, since they are not in very good shape but not as broke as Pentax.
Would you PLEASE start a podcast?? I know, I know, you are not bored right now, but I would really love to hear some smart folks talk about photography gear, DPReview style.
Phone cameras rely on fake detail because very often the sensor is too small to record good real detail. The Nokia 808 PureView tried to improve phone camera image quality with a bigger sensor, but it seems most people didn't like the bulkier form. So the only option is to make a convincing lie, and that goes well with modern trends, which are heading towards lies anyway. It is like more and more people are volunteering to live in a utopia because it makes them feel better.
Putting more complex image software inside the camera just compromises power consumption, heat emission and hardware requirements. An external computer will easily outperform cameras at that. It's a minor convenience to use an inferior solution in a camera that isn't built with convenience as its top priority.
Personally, I would like to see custom camera control scripts to optimize speed and accuracy of shooting different types of photo stacking.
I always shoot raw, as I see no reason to do the computations in camera. A computer has far more processing power and a bigger screen, and you can take the time you want afterwards for processing. Saving a bunch of pictures and then processing them together afterwards could give us even better pictures than smartphones' small sensors. So software combining the shots is what we need. This is only valid until cameras have enough processing power to do it in camera.
... also I'd argue that computational photography helps create better photos. Personally I can tolerate those "enhancements" in my phone, but not in my main camera.
I'm very distracted by Richard's nice speaker(s)... ;)
Anyway, my body is ready for this (pun fully intended). I think once stacked sensors get more affordable it's gonna open up the floodgates for more experimentation with computational approaches... I'm not sure manufacturers will be so willing to experiment like that on high-end pro bodies where said features might come off as gimmicky tho; there's a lot of skepticism, which Jordan hinted at.
I've got one question regarding the mechanical shutter, or rather the space it takes up: instead of just leaving it empty or pocketing the savings in complexity, wouldn't it make more sense to replace it with something else? Would an in-body ND feature not slot in nicely there? Heck, chuck in a more robust trap door if nothing else, to prevent more dust from entering the body...
To me, the question is why this is not implemented yet. AI and machine learning are finding their way into smartphones that cost a fraction of this new Sony camera. Surely it would not be that difficult technically, or not very expensive, to add a similar chip to a hugely expensive, top-end digital camera.
The readout speed has been the impediment, or the reason it hasn't been implemented; that's what the whole video was about. It's not about a lack of processing power after the fact... It's about being able to capture frames fast enough that there's little subject/camera motion in between them, in order to have a better stack of frames to work with.
Well, but then why can a smartphone with a 48MP sensor downsampled to 12MP output read out and process this, while a high-powered camera body with DSPs dedicated to image processing can't?
Because of the size of the sensor, the way sensors are read out when using an electronic shutter for super fast bursts is line by line top to bottom, and that's simply quicker (and/or easier) to do on a smaller sensor that's putting out far less data.
It's the same reason M4/3 & APS-C sensors have faster readout speeds (1/40-1/60) than most FF sensors (1/10-1/30) outside of the pricier stacked sensors Sony has in the A1/A9. So larger sensors are catching up, but it's a costly endeavor.
@impulses - also as sensors get larger, advanced manufacturing techniques such as stacked BSI (key to the A9/A9II, VENICE, and A1 performance) become much more expensive - and it's a nonlinear function of silicon area.
Which is why stacked BSI (Exmor RS in Sony terminology) has been standard for smartphone sensors for years, while until the A1 only one or two Exmor RS sensors larger than 1" were in production in the world (it's still not entirely clear whether the A9/A9 II share a sensor with VENICE), and now only two or three.
The readout speed of the A9 was insane by mirrorless/DSLR standards, but it was "yawn" by smartphone standards. The A1 closes the gap, but there's still a significant gap between that and "can continuously read out multiple frames with reset immediately following readout, eliminating almost any gap between frames" like we see in Pixels.
Also it's a difference in use cases - these mirrorless cameras are professional tools, and their job is to provide image data that can be used in post, not do all of the work in-camera. There's nothing preventing you from taking any current camera, putting it in continuous drive mode, holding down the shutter, and feeding the result to something like Tim Brooks' HDR+ algorithm, except:
All cameras, even the A9, have significant interframe dead time in burst mode, and significant differences between individual frames start straining some algorithms. Even the A1's 30 fps is insufficient to eliminate motion blur within a frame AND not have significant dead time between frames. Also, at least Tim Brooks' implementation of Google HDR+ doesn't handle rotational motion very well.
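A back-of-envelope number on that dead time, assuming a 30 fps burst shot at a 1/200s exposure (just illustrative values, not measurements):

```python
# Rough dead-time estimate for a 30 fps burst shot at 1/200s.
fps = 30
frame_period_ms = 1000 / fps      # ~33.3 ms between frame starts
exposure_ms = 1000 / 200          # 5 ms of actual light gathering per frame

dead_time_ms = frame_period_ms - exposure_ms
print(f"~{dead_time_ms:.1f} ms of every {frame_period_ms:.1f} ms goes unrecorded "
      f"({dead_time_ms / frame_period_ms:.0%} of the timeline)")
# -> roughly 28 ms, i.e. ~85%; smartphone burst pipelines assume far smaller gaps.
```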
But isn't this old tech? When you extract an image from a video file it benefits from temporal NR, which combines the frames before and after. That's what I found shooting 4K on my E-M1 II and extracting images: the video extract had much less noise than the still image taken at ISO 6400.
Stills extracted from video have other downsides tho... Lower res, usually JPEGs with less processing leeway, the video has to be shot with stills exposure in mind so it's not as useful as an actual video, more battery drain, etc.
Sony's smaller sensors (like the M4/3 one in the E-M1 II) can hit a similar fps to the A1, but the readout rate is still several times slower, even during video where the whole sensor isn't read, so it's more prone to rolling shutter artefacts... The E-M1 II/III have a 1/60 readout for stills, vs 1/200 on the A1 (or 1/160 on the A9?).
There's no downside; you either get the shot or you don't. At 30 frames per second with a 1/200 shutter I can get good 8MP dance concert images that could never be had shooting stills. I guess your needs are different than mine :-)
I described the downsides; you're just ignoring them, which is fine, but that doesn't mean they aren't there... And rolling shutter in particular would heavily impact the ability to "get the shot" if the subject is moving at a decent clip.
Just want to comment on the video-making process here.
The cutaways didn't work for me because everyone is in a different location. Usually cutaways are done within the same environment, i.e. an interviewer and interviewee inside the same room.
First thing: Richard Butler is a Brit? Rishi an Indian? Chris Nichols a Canadian? And Jordan an American? Wow!
OK, what Jordan is telling us in the video is that "smartphones are giving waaay better images than they should." There you are, from Jordan. Smartphones give better images than they should through computational techniques, competing with Micro Four Thirds image quality, because the bigger sensors lag in readout. What Jordan is telling us is that when it comes to sensor readout, the larger sensors/larger cameras are far behind smartphones technologically.
So smartphone image quality almost matches, or surpasses, Micro Four Thirds image quality.
Smartphones are doing some amazing stuff. Among other things, I am still impressed by what night mode does on my iPhone.
And of course the major smartphone players, like Apple and Google, have far more resources to throw at the problem than any camera company, and far more financial incentive to do so.
So we can expect these companies to gallop ahead and advance far more rapidly than any dedicated camera company. Apple especially has an advantage in designing their own custom silicon chips, which are class-leading.
Hey Handsome, why didn't you ask how he knows Richard is a Brit? Or that Jordan is Canadian? It's possible to have a British accent and not be British, and I'm pretty sure Chris is Canadian too. So why ask about Rishi and not the others?
@KAAMBIC, Because I’m already familiar with them. All I know is Rishi did a PhD and then joined DPR. I don’t know where he was born, what accent he has etc etc.
How is this news to you? This isn't the first video to feature any of the fine folks at DPR, and you've been around a good while... :P This isn't the first article where they've suggested smartphone IQ is encroaching on formats like M4/3 & APS-C either... This one's from 2018:
" Such 'merged' Raw files represent a major threat to traditional cameras. The math alone suggests that, solely based on sensor size, 15 averaged frames from the Pixel 3 sensor should compete with APS-C sized sensors in terms of noise levels. " -Rishi
" 2 Our own signal:noise ratio analyses of Raw files from the Pixel 4 and representative APS-C and four-thirds cameras show the Pixel 4, in Night Sight mode, to be competitive against both classes of cameras, even slightly out-performing four-thirds cameras (for static scene elements). See our full signal:noise analysis here. " -Rishi
That last bit ("for static scene elements") is pretty key tho. Smartphones tend to struggle more with lots of motion, or, say, if you happen to wanna shoot at any focal length other than that of the main camera module (which tends to be the brightest and best optimized for the computational tricks). If you care about things like compression or simply prefer much longer/wider FLs, phone output and options start falling apart...
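The math in the first quote is easy to sanity-check; a back-of-envelope sketch, assuming shot noise dominates and using only rough sensor areas (not exact figures):

```python
# Back-of-envelope check of the "15 averaged Pixel frames vs APS-C" claim.
import math

phone_area_mm2 = 25     # roughly a 1/2.55"-type smartphone sensor
apsc_area_mm2 = 367     # roughly 23.5 x 15.6 mm

frames = 15
light_ratio = frames * phone_area_mm2 / apsc_area_mm2   # total light vs one APS-C frame
snr_gain = math.sqrt(frames)                             # improvement over one phone frame

print(f"15 stacked frames gather ~{light_ratio:.1f}x the light of one APS-C frame")
print(f"and improve SNR ~{snr_gain:.1f}x over a single phone frame")
# -> ~1.0x and ~3.9x: consistent with the quoted claim, for static scenes only.
```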
Nothing new; if you stack enough pinhole shots they will outperform medium format SNR. The question is, how wide is the use envelope? For most phone users it's fine, which is why it's on phones.
@handsome, how do you know what the OP meant by Indian? Did he mean nationality or ethnicity? Which were you questioning? Given Rishi's physical appearance, it's not odd to assume perhaps he's of Indian heritage. Is that somehow wrong to assume?
It is tough to know, indeed. There are many people who hold an American passport and were born and brought up in America. Rishi, though, is an Indian name, so he is probably guessing. Or DPR might have a profile that says so.
Interesting idea. So now that the image sensor has a fast enough readout speed... that to me leaves two more problems to be solved in order to accomplish the advanced machine learning that smartphones use on their images:
1) The processor has to have enough computational power to handle these suggested functions, and these computational photography algorithms require a lot of processing power. Look at Apple's A14 SoC: it has what they call a neural engine to handle this type of task, and that has 16 cores and is capable of up to 11 trillion ops per second. And the far larger FF image sensor will mean even more data to be processed (some rough numbers below).
2) The software has to be created and that is not trivial. Companies like Apple and Google have some of the best machine learning software teams in the world.
Now this doesn't mean that computational photography can't be done on large-sensor cameras, just that it requires more than fast sensor readout.
Because Sony make smartphones (even if few buy them), they in theory should be well positioned to try this approach. But whether they will remains to be seen, and of course it may not be practical to do this given the other constraints.
But if any camera company does bring the tricks of computational photography to the much larger sensor cameras, then it should be very interesting.
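To put a rough number on "even more data to be processed" (bit depths and resolutions are approximations, not spec-sheet values):

```python
# Rough raw data-rate comparison: 12MP phone burst vs a 50MP full-frame burst.
def raw_rate_gb_per_s(megapixels, bits_per_pixel, fps):
    return megapixels * 1e6 * bits_per_pixel * fps / 8 / 1e9

phone = raw_rate_gb_per_s(12, 10, 30)   # typical smartphone burst, ~0.45 GB/s
ff_a1 = raw_rate_gb_per_s(50, 14, 30)   # 50MP full frame at 30 fps, ~2.6 GB/s

print(f"phone burst: ~{phone:.2f} GB/s, 50MP FF burst: ~{ff_a1:.2f} GB/s "
      f"(~{ff_a1 / phone:.0f}x more data to move and fuse per second)")
```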
I don't think the processing power is as much of a roadblock as your second point... Apple has some impressive hardware but it may well be overkill for some of these things.
Google dropped their so-called Neural Core from its latest phones, and even with some decidedly midrange SoCs (no faster than what was found in the two-year-old Pixel 3) they're still able to accomplish all the same computational tricks they've always had in their cameras; processing might take half a second longer when you go to play back images, but that's about it.
The software and the massive R&D behind those algorithms, well, that's another story...
"The software has to be created and that is not trivial. Companies like Apple and Google have some of the best machine learning software teams in the world."
Google actually publishes fairly decent descriptions of their algorithm (although their MFSR paper seems to deliberately leave certain definitions very vague), which is why implementations of the older HDR+ algorithm exist. https://www.timothybrooks.com/tech/hdr-plus/ - attempts to implement Google's MFSR algo seem to have so far stalled.
There's also the fact that this sort of thing should not be done in camera. A camera is a data collection tool. Get the sensor data, feed it to a computer for postprocessing. Although as has been discussed elsewhere, the readout rate of larger sensor cameras leads to differences from smartphones that can strain some algorithms. The A1 starts to close that gap though, with full resolution RAW capture at 20 FPS.
That said, camera manufacturers need to improve connectivity for getting that data to a PC. Sony's Wi-Fi protocols are crippled (no RAW) and their USB protocols are undocumented (impossible to integrate reliably without using a bloated, Sony-specific, architecture-limited blob).
I think current smartphone CPUs like Snapdragons can handle it. If these chips can already process 100-megapixel images from tiny sensors, what difference will it make if it's from a full-frame one? They are still 100MP. A larger sensor simply means more light-gathering capacity, so perhaps the CPU doesn't even have to process that many images in the buffer, since they are much 'cleaner' than the ones coming from smaller sensors, resulting in even faster operation.
^ Doesn't quite work like that; large-sensor files still contain a ton more data regardless of megapixel count (just compare file sizes as a crude measure of that), and the files would still undergo a lot of the same processing, because the need to chop each frame into tiles and sort/compare them before re-assembling and stacking would be the same if the goal is, well, to actually do that.
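To make that concrete, here's a toy version of the tile bookkeeping (synthetic stand-in frames, just to show how the work scales with pixel count; nothing like Google's actual merge):

```python
# Toy tile comparison: split two frames into tiles and score how well each tile
# matches. A real merge would also search per-tile offsets and blend by weight;
# the point is that this bookkeeping scales with the number of pixels.
import numpy as np

def tile_mismatch(ref, alt, tile=64):
    h, w = ref.shape
    scores = np.zeros((h // tile, w // tile))
    for ty in range(h // tile):
        for tx in range(w // tile):
            a = ref[ty * tile:(ty + 1) * tile, tx * tile:(tx + 1) * tile]
            b = alt[ty * tile:(ty + 1) * tile, tx * tile:(tx + 1) * tile]
            scores[ty, tx] = np.mean((a - b) ** 2)   # per-tile MSE
    return scores

rng = np.random.default_rng(0)
ref = rng.random((512, 512), dtype=np.float32)       # stand-ins for real frames
alt = ref + rng.normal(0, 0.01, ref.shape).astype(np.float32)

moving_tiles = tile_mismatch(ref, alt) > 0.001       # tiles to down-weight when stacking
print(f"{moving_tiles.sum()} of {moving_tiles.size} tiles flagged as 'moved'")
```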
^ It doesn't have to be 100MP for the CPU to crunch all that data. A workable and manageable resolution would be somewhere between 12 and 24MP from a larger sensor. APS-C or M4/3 could be the Goldilocks zone for large-sensor computational potential.
Sure, I wasn't debating that; I don't think the processing power is the biggest roadblock either way tho (as I said in my earlier comment)... Google's low-end $350 smartphone manages to get by just fine without a "Neural Core" and with a midrange SoC that's slower than some high-end SoCs from 3+ years ago.
Developing the software/algorithms to do all this *and* convincing people (especially some old-school enthusiasts) is probably more of a hurdle than the processing power; the next biggest hurdle is the readout rate, which is what the article starts with... There are some M4/3 bodies/sensors that already have a readout rate (1/60) that just might be fast enough.
It's gonna take a lot of initiative and some software R&D to leverage that. I think Oly was more inclined in that direction than Pana, but I don't see OMDS throwing money at features like that. Pana has shown itself to be pretty nimble in the past tho (copying features from Oly and implementing them better, iterating quickly on IBIS, etc.).
"Developing the software/algorithms to do all this *and* convincing people (specially some old school enthusiasts) is probably more of a hurdle than the processing power"
There's a lot of untapped potential for computational optics to make it into dedicated cameras. The target demographic would be the smartphone users who want to go beyond the limits of their own phones (larger sensor, responsive autofocus, lens selection) without having to take a course on photography, while still expecting good results. It would also make taking pictures enjoyable and experimental for the non-technical crowd. Camera and lens makers can benefit from expanding their market to this demographic or category of camera users.
Oh, I agree completely. I came to ILCs from phones and P&S and I'm <40; when I discovered mirrorless I was honestly surprised I hadn't been exposed to it earlier. Photography is more popular than ever despite the market conditions for dedicated camera makers.
I just haven't seen much evidence that they're prepared to make that leap or get behind the marketing push it would require, but the potential is absolutely there... Maybe I'm just being pessimistic, but I almost see photography becoming more niche (like HiFi audio). I sure hope not tho; if nothing else, the pro market will always push some level of development.
"Andy Jassy, the chief executive of Amazon Web Services, will take over as CEO of Amazon. Jeff Bezos said Tuesday that he will step down as chief executive of Amazon, leaving the helm of the company he founded 27 years ago.1 hour ago"
Why would anyone want to spend his time selling stuff, even tons of stuff, when there are so many more interesting things to do? Maybe he is preparing to run for President :)
I think it'll be a few years; hopefully, by the time sensors like this start showing up in consumer cameras, the manufacturers will have had time to work on the software and computational techniques to leverage them to great effect.
Oh you want one that is full frame, stacked, and sub-$3000? Good luck with that. Stacking gets harder as the silicon gets larger which is why a grand total of 3 (possibly only 2, VENICE may share with A9) stacked BSI sensors larger than 1" have ever been released to production.
Let's suppose an "A6700" with a stacked DRAM CMOS sensor. It would end up at a similar price to the A7 series, or even a bit higher. It is not worth it, considering the quality difference between APS-C and FF.
I would definitely consider it. Cropping is inevitable when shooting wildlife, birds, etc., especially if your lens budget is limited; that means you aren't using the full sensor anyway, so why pay for it?
Question: Do you think the improvements in processing these faster images will be primarily hardware or software? Is there any necessity for them being hardware-based? That is, could the A1 be improved in future upgrades by simply downloading (and paying for) different processing algorithms into the camera?
Software, and just download your images to a PC and stack them (a bare-bones example is sketched below).
Tim Brooks' HDR+; Kandao RAW+.
At least two implementations of Google's HDR+ multiframe stacking exist, likely more.
Google MFSR is right now impossible to find outside of a Google phone, partly because their MFSR paper on arXiv has errors (or may be deliberately vague) - see https://github.com/timothybrooks/hdr-plus/issues/40 for a discussion of some of the known errors. I've read the arXiv paper, and for example, it defines a "normalized dynamic range" of D with a value from 0-1 - but doesn't define how this value is calculated. I can't think of any dynamic range metric that normalizes to a 0-1 range, nor how they are "normalizing" the DR of the scene.
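And to answer the original question in spirit: the "download and stack" route needs nothing exotic. A bare-bones sketch (made-up file names, frames assumed already aligned) that also rejects some ghosting:

```python
# Bare-bones robust stack of a downloaded burst: a per-pixel median suppresses
# transient subjects (ghosts) that a plain average would smear into the result.
# Nowhere near HDR+, but it's the zero-effort baseline.
import cv2
import numpy as np

frames = np.stack([cv2.imread(f"burst_{i:02d}.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)
                   for i in range(8)])

stacked = np.median(frames, axis=0)
cv2.imwrite("stacked_median.tif", stacked.astype(np.uint16))
```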
I wonder if they've managed to improve their color science. Burnt skin tones out of the box were the main reason I ditched my Sony a7 III. Switching back to Canon was a relief.
Crazy? No, this is more of a "duh." They're probably doing at least a tiny bit of that (perhaps the flicker sync stuff?), but I think the target audience for the A1 needs provably-undoctored images. It's also worth noting that Sony makes a lot of those fancy cell phone sensors, and the RX100V got such a sensor in 2016.
Now to where Sony has messed up....
If you think about it, this is THE SENSOR TO MAKE APP PROGRAMMABLE CAMERAS WITH... ideally this sensor in a cheaper mechanical-shutter-less A7C body.
Sony used to have an Android-based in-camera app environment (PlayMemories); bring that back & open to 3rd-party developers... even professors like me who make open source software for cameras. ;-) Perhaps connectivity is now good enough to do apps outside the camera with Sony's new API, and that would give you bigger computing resources, but I'm not sure you can use that interface to do things like changing focus or exposure on the fly within a 30FPS burst....
It is quite possible that this stacked sensor alone costs more than an A7C or A7 III camera, so it is very unlikely we will see technology like this in lower-end cameras anytime soon. The higher cost comes from the fact that the sensor is two separate semiconductor dies bonded together. That's twice the silicon area, plus the process of bonding them.
What jekabs said... yield on stacking probably goes south as die size increases and tiny errors get magnified... Also the huge stacked chip's cost may get really large if it's in a recent technology...
Jekabs: I didn't say I expect it to be as cheap as an A7C. Right now, I'd not be surprised if the sensor costs more than the A1 is selling for... but fab yield improvements usually come quickly and change everything.
There are some real savings in leaving-out stuff like the mechanical shutter and other "pro" frills from an A7C form. In any case, I'd expect a 2-year design cycle, so if it isn't already in the pipe, it's probably a while off. I don't really know how the costs scale for stacking with larger sensors, but it ought not be that bad (as compared to BSI, which I understand was terrible to scale to bigger sensors). I'd actually expect more cost to be associated with buffer memory size than with stacking tech, and there were (are?) pandemic-caused supply shortages.
Anyway, I've been expecting somebody to show the right way to drop the mechanical shutter from a FF body. The Sigma fp isn't it, and the A7C gets pretty close with half a shutter... so what's next, Sony? :-)
Agreed, there was a lot of untapped potential in PlayMemories, and if they'd opened it up to developers, even more so. That would also benefit Sony by further locking people into their Alpha ecosystem: if your camera can run all these cool apps you can't get on competitors, you're less likely to switch brands.
I still think it's better to improve connectivity than onboard processing.
People wouldn't be begging for on-camera apps if their Wi-Fi remote protocol weren't garbage (it can't transfer RAW images) and their USB remote protocol weren't undocumented, leading to all third-party implementations being unreliable.
"Could the new a1 use computational photography like a smartphone to create even better photos?"
How about using it in bridge cameras and the like, the market they seem to care little to nothing about these days? I don't always want to drag out the big guns and a variety of lenses. Those bridge cameras and even the Sony RX100 series could use it. A ton.
So it's almost as if we are nearing the point of blurring the line between actual photography and the camera doing all the work. So-called pros take photos, then edit OUT key details, yet get rewarded for composition. Then there's TurboSquid, a 3D asset marketplace acquired by stock image company Shutterstock.
I shoot raw, and all this AI stuff is not necessarily good. It makes for lazy photography.
I suspect the new sensor will have even more read noise than the stacked CMOS inside the A9; faster readout means more read noise. A mechanical shutter is still useful for high-res cameras aiming for the best image quality. I expect that low-ISO image quality is not that important for the users targeted by the A1.
Who would have thought that Sony pushing out groundbreaking camera after groundbreaking camera and pushing the industry forward unlike any company in decades would make other fanboys so mad? At least 70% of Sony comments are butthurt Canon fans. You should be out there shooting with the R6, not downplaying this amazing tech. Just like with the A7III shutting down the crappy EOS R, this camera is going to push Canon to make their next camera even better.
I agree that the trolling is really obnoxious, but to be fair it's no worse than the one that accompanied the Canon R5 release, or practically every m4/3 article.