Olympus to use Apple portrait mode someday?

Terry Breedlove

Veteran Member · Forks, US
I wonder if it would make any sense for Olympus to use that Apple (and others') technology for more bokeh. Maybe in camera, and maybe even on the computer for more control. If it worked well, it might make the f/1.2 lenses seem too expensive for many, so maybe they won't.
 
Terry, I don't know much about computer technology, but it seems to me that software built into a m4/3 camera could look at the focal length, the f-stop selected, and the area in sharp focus, and then automatically soften the out-of-focus areas to closely simulate what a FF photo would look like. I could picture a selection on the super control panel that would let you choose "FF" or "m4/3". That way, if you wanted "the look of FF", you could have it.

I'm not sure just how important super-thin DOF is to most of us anyway. For what I do, I've never really missed it. In fact, with my little 45mm f/1.8, I have to be careful doing head-and-shoulders portraits to make sure everything I want in focus actually is. For most of what I do, I find having more DOF is better than having less. Often I'm far more concerned about getting a higher shutter speed at a given f-stop than I am about DOF.

However, it would be nice to have such a feature as an option, and the iPhone proves it can be done... though I do think on a m4/3 camera it would need to be a good bit more sophisticated.
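For what it's worth, the amount of extra blur such an "FF look" setting would need can be estimated from data the camera already records. A back-of-the-envelope sketch (my own illustration, not anything Olympus has published; thin-lens approximation with the background at infinity, crop factor 2):

```python
def background_blur_mm(focal_mm, f_number, subject_dist_mm):
    """Diameter on the sensor of the blur disc for a background at infinity
    (thin-lens approximation)."""
    return focal_mm ** 2 / (f_number * (subject_dist_mm - focal_mm))

CROP = 2.0           # m4/3 crop factor relative to full frame
M43_WIDTH_MM = 17.3  # m4/3 sensor width

def extra_blur_fraction(focal_mm, f_number, subject_dist_mm):
    """Blur (as a fraction of frame width) software would need to add so a
    m4/3 shot matches a FF shot with the same framing and f-number."""
    native = background_blur_mm(focal_mm, f_number, subject_dist_mm) / M43_WIDTH_MM
    # Same framing on FF: focal length and sensor width both scale by CROP,
    # so the blur as a fraction of the frame also scales by CROP.
    target = native * CROP
    return target - native

# Example: 45mm f/1.8 head-and-shoulders portrait at 2 m
print(round(extra_blur_fraction(45, 1.8, 2000) * 100, 2))  # prints 3.33 (percent of frame width)
```

In other words, at a crop factor of 2 the software would have to add roughly as much background blur again as the lens delivers natively.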
 
I wonder if it would make any sense for Olympus to use that Apple (and others') technology for more bokeh.
Well, for now that's impossible, unless they added a second lens and sensor to the camera. The iPhone 7 Plus and up (and the Galaxy Note 8) use their dual cameras to generate a depth map (3D), and then apply the blur based on the information in that map. That's a much more sophisticated way of mimicking bokeh than anything that can be done in software from a single 2D photo.

Future generations of smartphones will get "dual pixel" type sensors, in which each pixel is split in two halves, each with a slightly different perspective. That lets the camera see depth too, and it can use that to imitate bokeh realistically. I doubt that technology will also make it into regular camera sensors, though.
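The depth-map approach described above boils down to: estimate a distance for every pixel, then blur each pixel in proportion to how far its depth sits from the focal plane. A minimal sketch (my own illustration, assuming a grayscale image and a depth map normalized to 0–1; real phone pipelines are far more elaborate):

```python
import numpy as np

def box_blur(img, radius):
    """Very naive separable-free box blur, for illustration only."""
    if radius == 0:
        return img
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def synthetic_bokeh(img, depth, focus_depth, strength=8):
    """Blur each pixel in proportion to how far its depth is from the
    focal plane, the way dual-camera phones use their depth map."""
    out = np.zeros_like(img, dtype=float)
    levels = 4  # quantize |depth - focus| into a few blur bands
    dist = np.abs(depth - focus_depth)
    band = np.minimum((dist * levels).astype(int), levels - 1)
    for level in range(levels):
        radius = (strength * level) // (levels - 1)
        blurred = box_blur(img, radius)
        out[band == level] = blurred[band == level]
    return out
```

Pixels at the focal plane fall in band 0 (radius 0) and pass through untouched; the farthest band gets the full blur radius.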
 
I wonder if it would make any sense for Olympus to use that Apple (and others') technology for more bokeh.
Well, for now that's impossible, unless they added a second lens and sensor to the camera. The iPhone 7 Plus and up (and the Galaxy Note 8) use their dual cameras to generate a depth map (3D), and then apply the blur based on the information in that map. That's a much more sophisticated way of mimicking bokeh than anything that can be done in software from a single 2D photo.

Future generations of smartphones will get "dual pixel" type sensors, in which each pixel is split in two halves, each with a slightly different perspective. That lets the camera see depth too, and it can use that to imitate bokeh realistically. I doubt that technology will also make it into regular camera sensors, though.
Wasn't there a forgotten Panasonic 3D lens for m4/3? Perhaps that could be an alternative worth exploring further, with a software update to make it work. :-D
 
In reality, I prefer to get real, adjustable bokeh (when needed) from a good fast lens. The best I have for smooth, creamy bokeh is the Sigma 30mm f/1.4.
 
There's no reason why a camera or post-processing software couldn't be programmed to do exactly that.

The camera already records the f-stop used and can manage focus depth (the E-M1 has a focus limiter), so it wouldn't be much of a stretch to add a setting that blurs anything beyond a certain distance when shooting at maximum aperture.

We already have focus stacking to maximise depth of field, so doing the opposite is not inconceivable. Mind you, it would be intolerable for FF aficionados, as it would put another nail in the "FF is better" coffin. :)
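Computing that "certain distance" is straightforward: the far limit of the depth of field follows from the focal length, f-stop, and focus distance the camera already records. A sketch using the standard thin-lens DoF formulas (the 0.015 mm circle of confusion for m4/3 is my assumption):

```python
def dof_limits(focal_mm, f_number, focus_dist_mm, coc_mm=0.015):
    """Near and far limits of acceptable sharpness (standard DoF formulas).
    Beyond the far limit a camera could start applying synthetic blur."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = (focus_dist_mm * (hyperfocal - focal_mm)
            / (hyperfocal + focus_dist_mm - 2 * focal_mm))
    if focus_dist_mm >= hyperfocal:
        far = float("inf")  # everything beyond the near limit is sharp
    else:
        far = (focus_dist_mm * (hyperfocal - focal_mm)
               / (hyperfocal - focus_dist_mm))
    return near, far

# 45mm at f/1.8 focused at 2 m:
near, far = dof_limits(45, 1.8, 2000)
```

For that example the zone of sharpness is only about 10 cm deep, which matches the head-and-shoulders caution earlier in the thread.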
 
In reality, I prefer to get real, adjustable bokeh (when needed) from a good fast lens. The best I have for smooth, creamy bokeh is the Sigma 30mm f/1.4.
Of course, m4/3 lenses that can do that are already available; I have my f/0.95 Voigtländers for exactly that.

I was referring to Danielvr's comment about the dual cameras in those smartphones: perhaps a dual lens like the Panasonic 3D lens could create a similar effect.
 
I wonder if it would make any sense for Olympus to use that Apple (and others') technology for more bokeh.
Well, for now that's impossible, unless they added a second lens and sensor to the camera. The iPhone 7 Plus and up (and the Galaxy Note 8) use their dual cameras to generate a depth map (3D), and then apply the blur based on the information in that map.
Um, the Pixel 2 does it all with one camera.

It beats the iPhone at it too.
 
Um, the Pixel 2 does it all with one camera.
That's because it doesn't do depth detection, but face detection:


"The Pixel 2 has been programmed to detect faces of people in Portrait Mode, which means that for now it doesn't recognize much else as the subject. Apparently dogs fall in the 'face' category because it seemed to do well with the two pooches we photographed, but objects were either out of focus entirely, as you can see with the playing cards in the shot below, or not in portrait mode at all."
It beats the iPhone at it too.
Matter of opinion. Same article:

"The iPhone nailed the effect almost every time, even when dealing with inanimate objects like food or plants. It's not perfect, but I would choose it over the Pixel for portraits because it's consistent and easy to use."
 
There's no reason why a camera or post-processing software couldn't be programmed to do exactly that.

The camera already records the f-stop used and can manage focus depth (the E-M1 has a focus limiter), so it wouldn't be much of a stretch to add a setting that blurs anything beyond a certain distance when shooting at maximum aperture.

We already have focus stacking to maximise depth of field, so doing the opposite is not inconceivable. Mind you, it would be intolerable for FF aficionados, as it would put another nail in the "FF is better" coffin. :)
 
If only there were a way to somehow move the camera's sensor to allow depth information to be calculated.
Good thinking! :-)

I'm not sure, though, whether Olympus would be willing to offer a faux bokeh mode... firstly because it might be beneath them as an optical company, and secondly because they've just introduced a line of pricey f/1.2 lenses! When phone cameras offer more glorious bokeh than the typical m4/3 kit, though, they may have to give in to market forces. Sensor shift would be a great way to collect 3D info, if it could be done quickly enough.
 
If only there were a way to somehow move the camera's sensor to allow depth information to be calculated.
Good thinking! :-)

I'm not sure, though, whether Olympus would be willing to offer a faux bokeh mode... firstly because it might be beneath them as an optical company, and secondly because they've just introduced a line of pricey f/1.2 lenses! When phone cameras offer more glorious bokeh than the typical m4/3 kit, though, they may have to give in to market forces. Sensor shift would be a great way to collect 3D info, if it could be done quickly enough.
You don't need sensor shift. The camera can already determine focus depth through the AF limiter control.

All the camera would need to know is what type of scene you're shooting. Tell it that you're shooting portraits and it can then determine what needs to be done.

Whether this is actually done in-camera or via software such as OV3 is a different matter. The data would be recorded in the image, so either could output an appropriate result.
 
The camera can already determine focus depth through the AF limiter control.
I don't see how. And anyway, the question is not how far away the lens was focused, but what the distance of every pixel in the image is.
All the camera would need to know is what type of scene you're shooting. Tell it that you're shooting portraits and it can then determine what needs to be done.
Some apps and cameras use face recognition to isolate the main subject and then blur the rest, but the results are mediocre compared to technologies that use per-pixel depth information to determine the required blur.
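That face-recognition shortcut amounts to blurring everything outside a subject mask. A toy version (my own illustration, not any vendor's actual pipeline): with no depth map, anything the detector misses, playing cards included, gets blurred along with the background.

```python
import numpy as np

def portrait_mode(img, subject_mask, radius=6):
    """Blur everything outside the subject mask (the face-detection
    shortcut): a single binary mask, no per-pixel depth."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    # Keep masked (detected-subject) pixels sharp, blur the rest uniformly.
    return np.where(subject_mask, img, blurred)
```

Because the blur is uniform outside the mask, there is no graceful falloff with distance, which is exactly the weakness the article describes.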
 
Don’t forget the Nikon DC lenses. That technology worked very well.
 
= fake bokeh.
 
