The coolest technology to come out of Adobe MAX is, sadly, not the technology we already have access to. Like Adobe's Project Cloak we showed you earlier today, it's the incredible 'Sneaks' sneak peeks that really wow the audience. Case in point: check out Project Deep Fill, a much more powerful, AI-driven version of Content Aware Fill that makes the current tool look like crap... to put it lightly.
Deep Fill is powered by the Adobe Sensei technology—which "uses artificial intelligence (AI), machine learning and deep learning"—and trained using millions of real-world images. So while Content Aware Fill has to work with the pixels at hand to 'guess' what's behind the object or person you're trying to remove, Deep Fill can use its training images to much more accurately create filler de novo.
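The distinction here, borrowing only from pixels already in the frame versus synthesizing genuinely new content, can be made concrete with a deliberately naive sketch (a toy stand-in to illustrate the limitation, not Adobe's algorithm or anything Sensei actually does):

```python
# Toy illustration of a pixel-borrowing fill: it can only reuse values
# already present in the image, which is why it fails whenever the
# hidden content differs from the surroundings. A learned in-painter
# would instead synthesize plausible new content from training data.
def naive_fill(image, mask):
    """Fill masked cells with the mean of the unmasked cells.

    image: list of lists of numbers; mask: same shape, True = fill.
    Returns a new grid.
    """
    known = [image[r][c]
             for r in range(len(image))
             for c in range(len(image[0]))
             if not mask[r][c]]
    mean = sum(known) / len(known)
    return [[mean if mask[r][c] else image[r][c]
             for c in range(len(image[0]))]
            for r in range(len(image))]

img = [[10, 10, 10],
       [10, 99, 10],
       [10, 10, 10]]
msk = [[False, False, False],
       [False, True,  False],
       [False, False, False]]
filled = naive_fill(img, msk)
print(filled[1][1])  # the hole becomes the surround's mean: 10.0
```

Whatever was actually behind the masked pixel is unrecoverable by this approach; it can only ever produce some blend of what is visible, which is the gap a trained network is meant to close.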
The examples used in the demo video above are impressive to say the least:
And just when you thought the demo was over, you find out that Deep Fill can also take user inputs—like sketching—into account to completely alter an image:

In this way it's a lot more than a 'fill' feature. In fact, Adobe calls it "a new deep neural network-based image in-painting system." Check out the full demo for yourself above, and then read all about the other 'Sneaks' presented at Adobe MAX here.
The result of "deep fill" on the people in the rock arch was pretty awful, requiring at the very least use of the clone stamp. And it would be very simple to remove those people against a blue sky without this chicanery anyway.
Looks great, if you can understand what he is talking about. What happened to "one exposure, one image", where the skill is in the person who presses the shutter? Remember the day when, to enter a competition, one had to submit an affidavit to confirm that the image had not been altered in any way. AI is not photography. It is incredibly clever, but not real.
I like photography but I am neither a pro nor an artist. Just having fun.
After thousands and thousands of photographs it appears to me that the appeal of AI photography is, at its very best, extremely limited. It may be useful to me once every 2218 photos.
The last time I had an award-winning picture that required a pimple removed was in 2008. Oh, I forgot, but yes, in 2013 I had this insane photo of my girlfriend, and right in front of her nose there was this ball. If Deep Fill had been available, then I could have restored her nose, and she would have been fitted with a new nose, learned by analyzing literally millions of nose photos.
And right after this, Google will propose to replace our neurons with AI neurons, trained by the analysis of literally billions of thoughts.
A new age, a new dawn has arrived. Welcome to the Deep Fill World.
"Here's how our current product's feature sucks. But there's a version that relies on being online or having access to some massive computing power and database. You guys excited yet?"
Man, just make CAF less stupid with ordinary computers.
You take a photograph of a group of people. There are 9 white people and one black person in the group (or vice versa). Automated AI grabs hold of your image and says to itself "something's wrong here" and changes all the people in the group to the same colour.
You take a landscape and there are a couple of electricity pylons within the frame. Automated AI grabs hold of your image and says to itself "something's wrong here", and adds another 5 pylons to make the composition "correct".
Whatever you do, don't watch the video. Form your own conclusions based on nonsense your brain made up about how it works, not on actually how it works.
Why? Stuck in the dark ages... Adobe CC is £8 a month for me with Lightroom, Bridge and Photoshop; that is a lot cheaper than it used to be when buying it every year or two.
Constant updates, syncing with different devices and apps etc
So thrown off by how many people thought that, before the invention of content aware fill, photographs told the truth. This is Day 1 stuff in any photography class: no photograph has ever told the truth. Every little decision you make projects your viewpoint onto the scene. We're talking about the sophistication of the tools you use to do that, nothing more.
Ah, the whole photos as record vs. art thing again....
Nothing wrong with this as art, everything is wrong with it as record. If you look closely at the resulting images, you'll see that it doesn't just remove the intended item, but actually changes everything within the rectangle. This is most obvious in the video where they remove the second group of people in the second image and the rock structure behind them changes dramatically, but it's also visible in the last image (upper right corner of the rectangle). If you can select a more precise crop region (shaped to the subject to remove), then presumably it will not alter the surroundings, but changing things that aren't occluded is qualitatively different from hallucinating things that were not visible.
There are still various ways to affirm the out-of-camera validity of an image, for example, the methods used by http://www.fourandsix.com/
How about building 3D/VR environments from multiple images?
You can already calculate depth from multiple photos, and map these photos together in to a rough looking 3D environment.
What this tech does is allows the computer to fill in the gaps. Result: Realistic 3D environment you can actually navigate around.
Okay, it's not traditional photography, but I'd love to be able to do this. Imagine being able to recreate the internals of the house you lived in 20 years ago in full 3D....
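The claim above that depth can already be calculated from multiple photos rests on the classic two-view geometry behind photogrammetry; a minimal sketch (hypothetical numbers, and real pipelines use many views plus bundle adjustment, not this formula alone):

```python
# Two-view stereo relation behind multi-photo 3D reconstruction:
# a feature that shifts ("disparity") between two camera positions
# reveals its depth via depth = focal_length * baseline / disparity.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return scene depth in metres for one matched feature."""
    if disparity_px <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Feature shifted 50 px between two shots taken 0.1 m apart,
# with a focal length of 1000 px:
print(depth_from_disparity(1000, 0.1, 50))  # 2.0 metres away
```

Gap-filling tech like Deep Fill would then, in the commenter's scenario, invent plausible texture for surfaces no photo actually saw.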
mgrum - You have made some very good points. As with most technological advances, there are always benefits that critics overlook. The uses you describe are certainly legitimate and desirable by many.
However, it's likely that a high proportion of camera users (note I don't use the term Photographers) will decide that it's easier to produce images by fakery.
That encourages laziness at the taking stage. People will make even less effort to compose a photograph properly. That will take away much of the enjoyment and most of the skill needed to take a good photograph.
Eventually the process will become automated, in image editors and later in cameras. Users will initially have the choice whether to opt out, but even that will eventually disappear, and the art of photography will suffer.
It's already very difficult to find an image that hasn't been altered in some way. Photographs are becoming increasingly untruthful, and before long will become a complete lie.
People have been decrying new technologies due to what they see as the inevitable losses of older practices since Plato, who (in writing) essentially predicted the death of memory because of the growing popularity of writing. New technologies change things and we adapt to those changes. The same predictions you are making here were also made about calculators, computers and ... digital cameras!
phototransformations - I can tell from your handle that you like to transform rather than to capture! I prefer photographs to be an accurate representation of the beauty of the scene or subject that prompted me to photograph it.
My #1 use for content aware fill is when an 8x10 or 5x7 crop cuts off parts of the subject!!! It's a massive time saver. A better content aware fill will be even more of a win.
@entoman, do you have the same objections to the manipulations photographers such as Ansel Adams did in the darkroom? All that dodging and burning, not to mention using red or yellow filters to turn blue skies black, and for that matter black-and-white photography itself (and long lenses or wide lenses), are conscious manipulations of the image that most people seem to feel are fine, except in journalism. This doesn't seem substantially different to me. It's just another tool to try to achieve a vision, which you can use or not use as you wish.
phototransformations - I see a fundamental difference. I see nothing wrong with using filters or with dodging and burning, as these are used to INTERPRET and ENHANCE an image artistically, but do not alter it.
What I don't like is the prospect that at some stage in the fairly near future, editing software, and at a later date cameras themselves, will use AI techniques that take control away from the user.
AI uses intelligent learning, which means that it will learn your preferences and use them on future images. You will be presented with a "before" and "after" image and be asked to choose one or the other. If you frequently choose the image in which AI has added or removed "blemishes", the software will "learn" and will apply that preference to all of your images.
At first you will have something that "you can use or not use as you wish", but after a time that choice will be taken away from you.
@entoman - "That choice will be taken away from you" is not what I've experienced so far in my lifetime (and I'm 66) with technological advances.
I still drive a stick shift car, though automatic transmission vehicles have been around for generations. I still wash dishes by hand, though dishwashers have also been around for generations. And I can still shoot in manual mode, with everything the same as it was in 1969, when I bought my first SLR except I'm shooting digital. "Program" mode "takes away" my choices and "Auto" even more so, but I can choose not to use them. Maybe generations from now the camera will do everything for us, we won't have cars we can drive ourselves, and we'll live in Wall-E world, but I don't see this happening soon. There're always people who like to do things themselves, particularly in creative fields, and therefore there's always going to be a market for products we can control ourselves, even if we're considered "retro" when we do that.
Coming soon: cameras that don't have memory cards or inbuilt memory, but instead transmit all your images directly to "the cloud", with the manufacturers charging you a monthly fee to access your pictures!
I propose a new term "snapograph" to describe images that have been produced with little thought, and can only be "rescued" by adding or removing conspicuous features such as people in the wrong place, or trees growing out of people's heads.
Another term "journograph" could describe images that are intended as factual representations, and are absolutely unaltered, with the exception of a limited degree of cropping.
I hereby define a "photograph" as a previsualised image, carefully composed, and (where desired) subjected to manipulation of colour balance, brightness, contrast, and saturation, with the intention of producing something with artistic merit. A very limited degree of "healing" is permitted to remove dust specks or unwanted artefacts such as out of focus highlights that render as aperture hexagons.
See also: when cinema camera companies show demo reels full of shots from major motion pictures which are like 80% CGI. What do you think I learned about you from a Transformers shot except that you know Michael Bay?
Easy to answer, it was the day you stopped using film! Every pixel in every digital photograph you've ever taken goes through a computer algorithm that manipulates it before it gets stored on your memory card. Even if you are shooting RAW.
How to lie with photo technology. I object to image filling methods, whether AI based or not. Once you start removing objects from a photo, you are creating a lie. If you want to create art, then do so and say so to your viewers.
The power of photography loses something important when viewers cannot trust that what they see is real. Thumbs down to Adobe's Project 'Deep Fill'.
David, I understand where you are coming from, but historically photography has always involved manipulating images to some extent -- it's just gotten easier and more refined. One of the most amazing prints I have ever seen is an image titled "When the Day's Work is Done" by British photographer Henry Peach Robinson in 1877. The image is a composite from six negatives. I agree that photographers who substantially alter their images should be clear to viewers that that is the case. That said, I have always admired the creative work of photographers like Jerry Uelsmann -- precisely because he makes no pretense about the realism of his images.
I think the biggest concern of people is not really people who want to make "Art" and edit it like many other photographers already did back then or, even more, painters (they could always easily add/remove features from the scene they were portraying).
The issue here, IMHO, is creating fake NEWS pictures, for example, even more than has always been done. Or pictures that win prizes and such. Situations where one would expect to see a real thing and instead is getting a completely manipulated image.
CC is going to have more and more value for subscribers. Initially, CC made other software solutions "unnecessary". But in the near future, buying a camera won't be necessary to make beautiful "photographs".
And now we know the technical reason why Adobe is moving Lightroom to a suite of Cloud tools. Their next features will require Big Data processing impossible without a CPU farm. Of course, there's the question of whether WE actually need features like this.
I don't work for adobe, and I don't appreciate your ad hominem attack. You'll notice that I questioned the need for such a functionality, which is something that an adobe employee would be stupid to say.
I didn't perceive giorgionerd's comment as an ad hominem attack - rather as just another way to say 'we all work for adobe' by putting all our images into Adobe's cloud.
There is no technical reason for them to move to a suite of cloud tools other than profit. You would have to completely delude yourself to think otherwise.
Yes, there is. It's called Sensei. It's the same reason why Google search isn't a tiny app on your computer or phone. Google is crawling the entire web continuously to map webpages and content. That takes warehouse sized server farms. Then all you do is use a thin client on your device to enter the search target and retrieve the results. Adobe's Sensei is the same idea, only using images. It is a learning-type of tool that requires vast amounts of input to understand content relationships.
But, yes, if Adobe has spent the time to develop big data driven tools like Sensei, then it sees an opportunity to profit from it. But that's why any company develops products.
Now, if you're ticked off by Adobe's subscription model, say so. I'd agree with you that their handling of the cloud transition has not been transparent or trust-inducing. But part of their justification for going to a cloud of micro-tools does make sense.
I know perfectly well about the various ways to use deep learning on big data, and it has absolutely nothing to do with Adobe's decision to go to a subscription model (where they have to offer some type of cloud services to justify the massive increase in cost). You're conflating them finding ways to monetize their newfound data with their reasoning for having cloud services in the first place, which was to charge people a monthly fee.
Yes, it could go either way. Certainly Adobe was struggling to maintain steady income growth with a suite of products entering maturity and therefore less and less able to entice customers to upgrade. The subscription model nicely deals with that nasty little problem. I'm sure that if the camera manufacturers could do the same, they would. The cloud/rental model allows software manufacturers to effectively enforce the ownership rights that they have always claimed - that's what a license is; we as customers have never really owned our software tools but only used them for a specified length of time and purpose. Adobe's original justification for moving to the CC model was not tools but reliable and regular updates - more frequent, more targeted, and better tested than the massive yearly or biennial version pushes they had been doing. So their cloud connection was really a phone-home link. I guess the question is one of what is a fair price for the capability. Certainly $240/year is ouch.
mosswings - "I'm sure that if the camera manufacturers could do the same, they would"
OMG, PLEASE don't put thoughts into the heads of camera manufacturers!!!
Imagine Canon, Nikon or Fuji designing future cameras that didn't have memory cards or inbuilt memory, but instead transmitted all your images directly to 'the cloud" and charged you a monthly fee to access your pictures!
Certainly we accept the idea of renting a car - it's called a lease - as long as we accept that the low cost of acquisition means that even though we might drive the latest iteration of a car, we never own it - and the car manufacturers accept that in order to enforce their ownership rights they'd have to send repo agents to confiscate their now-stolen property.
It's only in the software industry that we can expect to be shut down automatically and effortlessly by vendors at their whim - but with the Internet of Things rapidly evolving, I would not be surprised that our connected toys and vehicles and such will eventually demand their monthly fee and the repo agents will be obsolete.
It's all about reversing the nature of the free market - instead of buyers choosing their manufacturers, now the reverse is becoming ever more true.
I'd also note that the up and coming generation places little value on ownership of - anything, really. What is important to them is access. Ride-sharing services and self-driving cars are just examples of this new thing-as-service world. I can't say I disagree with it...the supposed freedom and convenience that a personal car provides comes with a staggering price tag for most folks. Far better is to rent what you need when you need it, and leave the maintenance costs to the rental vendor. Ride-hailing and self-driving cars will only quicken the transition to a tiny-house just-in-time way of life.
I don't know why, but from the title I was expecting a content aware fill tool for movie creators. It obviously isn't, so you see me a bit disappointed here ...
ozturert - Yes, I've noticed a recent pronounced tendency for dpr to use sensationalist and sometimes misleading titles. It seems to be a recent management decision, designed to grab attention and boost dpr readership.
The fact that it also whips up a frenzy on the forums is of course even better for dpr. The more time people spend here, the more likely they are to buy something via the Amazon links on every page.
Not a bad thing really - it's what pays dpr staff wages, and enables us to have access to all the reviews.
Look, for example, at the top left corner of the window, where there is a 90 degree angle behind the flagpole. Or the bottom metal plate that is behind the supports, etc.
Yet the algorithm finds the lines of the glass frames, finds the straight line for the metal plate, finds the 90 degree corner behind the flagpole, and rebuilds it all.
And that is without Adobe's "AI Deep Fill" or the requirement to lease software.
It only samples what is in the picture already, which is the same as all current algorithms. If it works, an AI-based solution would create new pixels based on what it has learnt from other images. I am sceptical, because a lot of auto-fix software works really well in examples but less well in the real world. But if it did work, it would be completely different from the sample you have posted. Why not try it on the first image from the article to see what happens?
It would be nice to try, if you can find the original source file of the sample photo in the video. As it stands it has the white box around the "obstacle" that needs to be removed, so it is not a fair test at all.
And many of the photos there, Affinity would definitely clear up just as nicely.
But where Adobe really has the edge is the "reshape" driven by a drawn shape. On Affinity that requires using tools like the Liquify Persona afterwards with a layer mask. So what Adobe does is combine two tools into one.
Some marketeer came up with the buzzword 'deep'. Now it has been glommed onto and applied to almost everything marketing is trying to sell. And don't get me started on fake AI claims. Put on your hip boots - it's getting "deep."
@mattd007 "Unfortunately, the deep-learning catch-phrase is now morphing into the more general and ambiguous term of AI or artificial intelligence. The problem is that terms like ‘learning’ and ‘AI’ are overloaded with human preconceptions and assumptions – and wildly so in the case of AI."
The real tragedy of it all is that soon we won't need to bother taking pictures at all anymore, whatsoever. As everything gets more manipulated and faked, the act of taking a picture of anything for the purpose of portraying any kind of reality becomes obsolete. We just need 3D to become better, and eventually good enough to compose any image with whatever elements we want in it.
And hey, I have nothing against it and progress surely is inevitable. Just that the time is soon coming to call "photography" something else as it irreversibly fades out from what it once meant or was supposed to be.
3D CGI has been mature for quite some time now and is able to produce real-life quality fairly easily. The only problem is the time and resources it takes; it's still a lot quicker and easier to capture and PP a photograph than to build it in 3D. But I agree, we will see that change very soon.
Yes, "authentic" photography may become a niche, however, such status might amplify its appeal to some of us who like "swimming against the current". Kind of like film photography now, or vinyl discs in music. So I think real photography will survive.
To me, this is no different from adding artificial flavours and colors to food. As long it is "stated on the label", I see no problem with it. The problem of course that, unlike food packaging, there is no full disclosure of the degree of manipulation with photographs right now. I think a simple scale of what percentage of the image content was "artificially added" would be a fair addition to the exif data. Yes, I know exif data could be altered or erased, blah, blah. Still, I think we would all feel better if we had some sort of way of knowing whether it is an "organic" or "processed with artificial additives" photograph.
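The "percentage artificially added" label the commenter imagines could be computed crudely, assuming you have both the straight-out-of-camera and edited versions as equal-sized pixel grids (a hypothetical helper for the idea, not any existing EXIF field or standard):

```python
# Rough sketch of a "percent of image content altered" metric:
# compare the original and edited versions pixel by pixel and report
# the fraction that changed beyond a tolerance.
def percent_altered(original, edited, tolerance=0):
    """Return the percentage of pixels differing by more than `tolerance`.

    original, edited: equal-shaped lists of lists of pixel values.
    """
    flat_a = [p for row in original for p in row]
    flat_b = [p for row in edited for p in row]
    changed = sum(1 for a, b in zip(flat_a, flat_b)
                  if abs(a - b) > tolerance)
    return 100.0 * changed / len(flat_a)

before = [[0, 0], [0, 0]]
after  = [[0, 0], [0, 255]]  # one of four pixels replaced
print(percent_altered(before, after))  # 25.0
```

A real scheme would of course need the untouched original to survive somewhere trustworthy, which is exactly the hard part the comment alludes to.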
Tried quite a few of these content aware fill apps over the years, by far the best for at least 3 years was the one in Pixelmator. Adobe Photoshop being one of the worst.
This is so at odds with the recent Getty ban on manipulated body appearance. Imagine the possibilities. Should be a boon for the porn industry; now everyone, anywhere, is a model!
PS has had things like multi-step undo for a very long time: It's called the history panel and you can select how many steps you want saved - hundreds if you'd like, then with a click, you can back up as far as you'd like. It's far better than hitting control-Z many many times. Certainly no application is perfect, but you're criticizing them for lacking something that they've had for years.
Ctrl+Z is an HCI standard for going back through the history of actions, and Ctrl+Y for going forward.
Now some have tried to change that to Ctrl+Z and Ctrl+Shift+Z, but it is a terrible idea.
And then Adobe, over a decade ago, decided to introduce the new way where Ctrl+Z only undoes once, requiring other functions to "step back"!
At least Affinity Photo follows Ctrl+Z correctly, and is anyway more powerful and flexible than Adobe's.
To me, content aware fill was the best innovation PS offered in 10 years. This takes it to a new level. I'm not sure how much I would use it (traditional CAF works fine for 90% of my requirements), but it will be nice to have when I need it.
Content aware fill mainly made the work faster. If you used a clone stamp, or made a simple selection, added it to another layer, and then came in with a layer mask to clean the edges, plus a little clone stamp and curves or levels for colour (like skies etc.), you often got the same result. But when the difference was a couple of minutes vs. a couple of seconds, it was a time saver.
Absolutely right. Most of the time it nails it for removing annoying objects from scenes, or even the odd sensor dust spot. Improving an already useful tool is very welcome.
On this line, you really ought to try out the Affinity Photo healing tool.
Instead of repeatedly trying to outwit Adobe's 'guesses'/'ai' cleaning things near a dark boundary, you find a small _window_ moving with the mouse so that you can see exactly what you're going to get.
As well, the tool has smart boundary-noticing, which lets you go right up to the edge of a cleanable area, and with a clever rolloff, correct just as you need.
I found this out after many minutes getting nowhere with a 'let's just use Photoshop quickly' foray -- and then took beneath-flowerbox rust stains off a stucco outside wall, preserving its weathered look, in one pass and a few seconds only.
Other people can indeed do very clever optical algorithms, and with great user experience.
@NarrBL, thanks for the Affinity tip. I pull Affinity out from time to time for its path clipping abilities which work better than PS in many cases. I'll also try out this competency you just mentioned.
As a water photographer, sometimes there are unnatural droplets in the way, or flare/ghosting from direct sunlight. This kind of tool would be a real time saver for me! Can't wait to try it out.
Well for some, recording the truth "is not" the most valuable aspect. With most of my shots, the most valuable aspect is simply the art of it. Reference shots usually bore me.
Never once in history has a photograph recorded the "truth". Every image is one view manipulated by the lens, sensor, perspective and in today's digital world, the internal processing the engineers at the camera company decided to put in the camera for you. Personally I am not interested in photography for documentation. It's a medium of expression, not a xerox machine.
I'm absolutely for recording the truth. Fake content is just lame. BUT! There are occasions when you have a great shot but one freaking item destroys it. I recently had a spruce in the wrong place that pulled the eye on it. There wasn't really a different spot as this patch of extreme red just grew there. I needed several passes of Gimp-Resynthesize to remove the spruce. Having a better content-aware fill would have resulted in a better photo and likely faster.
And hopefully having such tools at hand makes it less likely that people just remove plants or parts of them in the field to get a better composition. Which drives me crazy, especially when "photographers" advertise such "techniques" in their videos.
I agree completely with you - that is, if it were not for the fact that there is no 'truth'. During perception, just 10% of the information processed stems from our senses. The larger part comes from our previous experience and is used in processing and interpreting the incoming stuff.
But 'honest photography' is still something to aim for. I'm starting to wonder whether slide (transparency) photography could be part of my future - and not only of my past.
"Adobe's Project 'Deep Fill' is an incredible, AI-powered Content Aware Fill" "When we can do that with a single line draw already in a second, this ain't so impressive what Adobe will do." Dr Phil.... "Somebody's lyin' here." =)
Don't you think it is a bad way to start a discussion, or any communication, by default assuming the other person is lying unless proven otherwise? It says more about your own character, being as dishonest as you expect others to be....
It's Dr Phil's character you need to check on.... He's the one quoted.... I bet if he was sittin' between you and the person that wrote that title, that's exactly what he'd say. Gotta make sure those arrows are pointed in the right direction.... :=)
It is you who is making the quotation.... So get real. When YOU call another a liar without proof, it makes you dishonest.
However much YOU try to be clever, it is YOU who is hinting at such things. Do you know how rumors work? Someone dishonest starts them, and other dishonest people repeat them.
That is also how the media judges someone, by asking questions it shouldn't or reporting information that is incomplete.
Sarcasm and jokes, trying to be funny at others' expense, are also the qualities of a dishonest person.
So you had better back up your claims, as I have....
Finger editing with a touchscreen, using a 4th gen i3-4020Y processor (compared to today's 8th generation), which is why it is slightly slower to process (the progress bar is not visible in the recording).
I remember some years ago seeing a similarly impressive demonstration of the "Shake Reduction" option under the Filter menu. I had some minor success using it but after a while gave it up since more often than not the results were poor. I hope this new feature doesn't disappoint me down the road too.
I assume that is in response to where I said, with tongue firmly in cheek "And what about all those lovely people shots ruined by having landscapes behind them"
Well, actually there are many such scenarios - the most well known one being the portrait in which a tree or pole appears to be growing out of the subject's head. Think about it, and you'll realise that there are many situations where hurried and thoughtless snapography results in distractions behind (or in front) of the person being snapographed.
Real photography is about LOOKING at what you are doing, and making sure that you COMPOSE the photograph in such a way that the distracting object doesn't interfere.
This entire thread reminds me of the constant b/s of film vs digital, JPEG vs RAW, zoom vs prime, Nikon vs Canon, and on and on.
Photos have always been altered. Always. The lens, film, camera and filters, and now the possibility of extreme extended post, have pretty much been there since the beginning.
HDR is not a new thing, it was done in the 1800s. "Stitching" panoramas was also done. Using lenses other than 35-50mm in SLR format is a form of alteration. Yet even 35-50mm lenses do not capture what our eyes fully see. It can be argued that if you do not shoot a 180 degree or so pano you are not capturing what we see with our eyes if that is what is meant by a "true" photo.
Photography today is a set of tools just like it always has been. The tools today are more diverse and more powerful than ever. Use the ones you want to, drop those that don't fit your needs. Stop debating worthless topics like this.
Exif tag 0xA301 -> a "photograph" (sums it up nicely, I think)
Anything else -> not a "photograph"
i.e. if you add or remove something that was not present (if I said "visible" someone would bring up IR/UV) in the original scene (even if you had to turn your head to see it... like enjoying a panoramic view...) then it's something other than a photograph. IMO - but I realize that stating that it's my *opinion* will not stop someone chiming in to tell me I'm wrong *sigh*
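The tag the commenter cites is the EXIF SceneType field (0xA301), whose value 1 means "directly photographed image". Their rule reads as a one-liner; this is a sketch over an already-parsed EXIF dict with plain integer values (an assumption — some readers return raw bytes), not a full EXIF parser:

```python
# The commenter's definition as code: an image counts as a
# "photograph" only if EXIF SceneType (tag 0xA301) says it was
# directly photographed (value 1). Anything else -> not a photograph.
SCENE_TYPE = 0xA301  # EXIF SceneType tag

def is_photograph(exif):
    """True only when SceneType is present and equals 1."""
    return exif.get(SCENE_TYPE) == 1

print(is_photograph({SCENE_TYPE: 1}))  # True: a straight capture
print(is_photograph({}))               # False: tag missing or stripped
```

In practice the rule is easy to defeat, since editors can leave the tag untouched or rewrite it, which is the usual objection to EXIF-based authenticity claims.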
Uber software engineer Phillip Wang has created a website that shows a portrait of a person that doesn't actually exist by using AI to merge multiple faces together.
Want to know more about the Canon EOS RP? Dying to ask a question that hasn't been addressed anywhere else online? Join the editors of DPReview for a live Q&A about this new camera next Tuesday, Feb. 19 on our YouTube channel. Click through for details.
Got a couple of minutes? Then you have all the time you need to learn about Canon's second full-frame mirrorless camera body – and why it's a compelling option for someone stepping into full-frame for the first time.
A quick glance at the spec sheet doesn't make the Canon EOS RP look that exciting. But having shot with it, we've become oddly fond of this little full framer.
Pixelmator Pro has received an update with new and improved features, including support for Portrait Masks with images captured by the iPhone's Portrait Mode.
Alongside the EOS RP, Canon showed us mockups of the six lenses it says are in development for 2019. There's a distinct high-end flavor to the options in the works.
The new X-T30 may not be Fujifilm's flagship model, but it arrives with some very impressive features and specifications. Chris and Jordan have been shooting it for a few days and share their first impressions, along with a look at an iconic new building in their hometown of Calgary.
We don't often get excited about $900 cameras, but the Fujifilm X-T30 has really impressed us thus far. Find out what's new, what it's like to use and how it compares to its peers in our review in progress.
The Fujifilm X-T30 is equipped with the same 26.1MP X-Trans sensor and X-Processor 4 Quad Core CPU as the X-T3, along with some autofocus improvements. The new camera arrives in March for $900 body-only.
Fujifilm's XF 16mm F2.8 is one of the widest lenses in the company's lineup of compact primes for its X-series interchangeable lens cameras. We've been up and down the streets of snowy Seattle - a rare sight - to see just what our pre-production copy of this petite prime is capable of.
Canon has unveiled its second full-frame mirrorless camera: the entry-level EOS RP. Touting its compact size and approachability for beginners, the RP uses a 26.2MP sensor and will sell for $1300 body-only this March.
A pre-launch event gave us a chance to shoot a sample gallery to show what sort of image quality you can expect from the least-expensive digital full frame camera ever launched.
Nikon has taken the wraps off a new standard zoom lens for mirrorless, the Z 24-70mm F2.8 Z. The new 24-70mm has been on Nikon's Z-series roadmap since the mount was announced last August, and it will ship in spring for $2299.
Canon has announced the development of six RF lenses, including the incredibly compact RF 70-200mm F2.8L IS USM, two variations of an RF 85mm F1.2L USM, plus a 15-35mm F2.8L IS USM, 15-35mm F2.8L IS USM and 24-240mm F4-6.3 IS USM.
Nikon has announced more details of firmware in development for the Z6 and Z7. As previously reported, firmware is being planned that will add Eye-detection AF, CFexpress support and Raw video over HDMI.
Comments