
I am confused. Anyone understand this new 3-D Facebook app?

Started Apr 20, 2019 | Discussions
Gerry Siegel Veteran Member • Posts: 3,244
I am confused. Anyone understand this new 3-D Facebook app?

Sounds like some kind of simulation. All I have seen is a wiggle-GIF-ish effect that is designed for single-eye viewing. If so, does it qualify as stereoscopic?

Language gets loosey-goosey, huh? My idea of 3D is still separate images, one for each eyeball. Anybody have a clue what is going on?

The world will not fall apart. One eye works; two eyes are better. There is already lots of confoosing lingo... will we see more? Got to keep up with the millennials... :-)

Any info?

 Gerry Siegel's gear list:
Panasonic ZS100 Panasonic Lumix DMC-GX7 Olympus E-M1 Panasonic Lumix DMC-GX8 Panasonic Lumix DC-G9 +4 more
Brian F Flint Regular Member • Posts: 184
Re: I am confused. Anyone understand this new 3-D Facebook app?

I think only one image is taken and some clever software does a 2D-to-3D conversion, and as the smartphone tilts you can see slightly behind the main object at the front of the image. What is interesting is that you can not only see around the sides of the object (as in a standard stereo wiggle effect) but also see over the top of it as the smartphone is tilted so that its top comes towards you. Very clever.
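The parallax effect described above can be sketched in a few lines: given one image and a per-pixel depth map, shift each pixel horizontally in proportion to its depth, so near things move more than far things. This is a toy illustration under assumed conventions (depth 1.0 = near, -1 marks holes), not Facebook's actual code:

```python
import numpy as np

def synthesize_view(image, depth, shift_px):
    """Warp a single image toward a slightly different viewpoint.

    Each pixel moves horizontally in proportion to its depth value
    (0.0 = far, 1.0 = near), so near objects move more than far ones,
    which is the parallax that makes the effect read as 3D. Pixels
    that become uncovered are left as holes (-1) that a real renderer
    would have to "guess" at and fill in.
    """
    h, w = image.shape
    out = np.full((h, w), -1.0)              # -1 marks disoccluded holes
    for y in range(h):
        # paint far pixels first so nearer pixels overwrite them
        for x in np.argsort(depth[y]):
            new_x = x + int(round(shift_px * depth[y, x]))
            if 0 <= new_x < w:
                out[y, new_x] = image[y, x]
    return out

# Toy scene: a bright "subject" strip (near) on a dark background (far).
img = np.zeros((4, 8)); img[:, 3:5] = 1.0
dep = np.zeros((4, 8)); dep[:, 3:5] = 1.0

shifted = synthesize_view(img, dep, shift_px=2)
# The subject lands two pixels to the right; holes open where it was.
```

The holes along the subject's old position are exactly where real renderers produce edge artifacts, since the true background there was never photographed.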

I found this info, which gives some insight into what is going on.

**********************************

Take Better 3D Photos for Facebook

You'll quickly notice that not all of your portraits will look good as 3D Photos on Facebook. To make sure yours look better, follow the tips below, which will also improve your portrait photos in general.

  1. Put your main subject at least three or four feet away from your phone.
  2. Capture scenes with at least three layers of depth: foreground, your subject, and background.
  3. Don't let your subject blend in with the background — use contrasting colors to make them stand out, and therefore pop in 3D.
  4. Make sure the subject has some texture, as it won't pop as much without it.
  5. Shoot subjects with solid edges, so there is a clear line of separation between other depth layers.
  6. Avoid shiny objects, whose reflections can confuse depth estimation.
  7. Avoid transparent objects, such as glass or plastic, which could fool depth sensors.
  8. Avoid added effects, like scene lighting. Portrait Lighting mode on iPhones may also work against you.
  9. Avoid mono-style lighting that drowns out all of the colors that make 3D photos stand out.
 Brian F Flint's gear list:
Panasonic Lumix DMC-ZS60 Sony RX10 III Canon MP-E 65mm f/2.5 1-5x Macro
3D Gunner Senior Member • Posts: 1,031
Re: I am confused. Anyone understand this new 3-D Facebook app?

The short story is that they use an AI trained to reconstruct volumetric environments from pictures taken with a single camera. Multiple layers (photos focused at different depths) are recommended to acquire more precise volumetric information.

Then the volumetric info is loaded into an app which offers the effect under discussion.

From the volumetric information it is easy to generate stereoscopic images with an effect even more complex than a single stereoscopic pair, but it seems that is not their intention. Yet.

OP Gerry Siegel Veteran Member • Posts: 3,244
Re: I am confused. Anyone understand this new 3-D Facebook app?

Processors and storage being what they are, I can see how manipulation can render things any which way now. My lingering question is this: can one, via this method, see solid visual depth without separating the incoming data to each eye? ...so there are physical constraints, maybe. Evolution made two eyes to provide binocular vision, is what I have been given to accept.

It tells me this much, though: I need to go back and start my learning all over. I will say that conversions in movies look pretty darn good. But the eye-separation thing still puzzles me.

 Gerry Siegel's gear list:
Panasonic ZS100 Panasonic Lumix DMC-GX7 Olympus E-M1 Panasonic Lumix DMC-GX8 Panasonic Lumix DC-G9 +4 more
3D Gunner Senior Member • Posts: 1,031
Re: I am confused. Anyone understand this new 3-D Facebook app?

The volumetric environment is perceived inside the brain by fusing the information gathered by binocular vision. From a static point of view, monocular vision cannot properly evaluate the relative positions of objects situated at different distances from the observer.
But monocular vision can perceive a volumetric environment much better if the observation point is in motion. In motion, the brain receives many successive images collected from different points of view.
So the entire story is based on the idea of producing a volumetric impression through movement inside a single image on screen (not through a stereoscopic pair).

Open Google Earth and approach an area with some mountains: you will see a flat picture which suggests mountain relief. Then drag the image in any direction and soon you will perceive the 3D depth of the relief.

I am not a native English speaker, so it can be difficult for me to explain some complex technical aspects.

MOD Turbguy1 Senior Member • Posts: 1,467
Re: I am confused. Anyone understand this new 3-D Facebook app?

The "new" Facebook 3D photos (where moving the mouse over the photo seems to move the viewpoint) are typically generated from a 2D image obtained by the main high-res rear camera plus a depth map obtained using a secondary low-res rear camera with a (very) small interaxial separation. Software generates the depth map (two files are produced).

Many modern phones have this arrangement, which is typically used to obtain a false bokeh (blurring of distant elements), as well as for some other computational-photography purposes.

These files are uploaded (they might be combined) and Facebook performs the rest.

There are frequent artifacts generated by this technique, particularly around edges, where the software has to "guess" at filling in obscurations, or where other errors (such as partial pseudo-stereo sections) arise.
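That edge "guessing" can be illustrated: when foreground pixels shift, background that was hidden behind them is revealed, and the renderer has to invent those pixels. A crude hypothetical stand-in (real systems use far smarter inpainting) is to stretch the nearest valid neighbor into the hole:

```python
import numpy as np

def fill_holes(row, hole=-1.0):
    """Fill disoccluded pixels in one image row with the nearest valid
    neighbor to the left (back-filling from the right for leading
    holes). This is the kind of "guess" that produces the smeared
    edge artifacts seen in Facebook 3D photos.
    """
    out = row.copy()
    last = None
    for i in range(len(out)):
        if out[i] == hole:
            if last is not None:
                out[i] = last        # stretch the last seen valid pixel
        else:
            last = out[i]
    # leading holes have no left neighbor; borrow from the right instead
    for i in range(len(out) - 1, -1, -1):
        if out[i] == hole and i + 1 < len(out):
            out[i] = out[i + 1]
    return out

# A warped row with holes (-1) where the foreground used to be.
row = np.array([-1.0, 0.2, -1.0, -1.0, 0.8, 0.5])
filled = fill_holes(row)
```

Because the filled values are copies rather than the true scene content, edges look smeared exactly where the real background was never captured.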

Here's a side-by-side crosseye example of two views (captured as screenshots) of the extreme left and right renderings of an actual "Facebook 3D photo". While it works well, note the pseudo rendering of the closest stones at the bottom...

Note the pseudo problem at the bottom... also, the background has been computationally blurred by the software.

It's obvious that, with the close separation, it works best for close subjects.

Is it 3D?  Yes.

Is it "stereoscopic"? It can be, if you accept some computational errors.

There is currently a difference in meaning between "3D" and "stereoscopic".

 Turbguy1's gear list:
Minolta DiMAGE 7 Konica Minolta DiMAGE Z5 Konica Minolta DiMAGE A2 Fujifilm FinePix Real 3D W3 Nikon D300 +3 more
OP Gerry Siegel Veteran Member • Posts: 3,244
Re: I am confused. Anyone understand this new 3-D Facebook app?

So there are two images taken by the camera. I would need to see a diagram, I guess. And the term "depth map" is new to me; I need to understand it better. I keep thinking of the Avatar movie, where multiple cameras shot markers placed on the subjects to get a spatial profile, which was then placed in a computer, like a clay model of the subject with pins at various targeted places held in spatial relationship, ready to be transferred to the final bronze casting... Maybe I am at least getting closer to the concept. Thanks for the help, to all who replied.

 Gerry Siegel's gear list:
Panasonic ZS100 Panasonic Lumix DMC-GX7 Olympus E-M1 Panasonic Lumix DMC-GX8 Panasonic Lumix DC-G9 +4 more
3D Gunner Senior Member • Posts: 1,031
Re: I am confused. Anyone understand this new 3-D Facebook app?

This was just the beginning. They can now transform paintings and photos into animations with AI.

"Researchers from the University of Washington and Facebook recently released a paper that shows a deep learning-based system that can transform still images and paintings into animations." 

MOD Turbguy1 Senior Member • Posts: 1,467
Re: I am confused. Anyone understand this new 3-D Facebook app?

Here's an image of a modern smartphone with dual rear cameras:

Note the separation is only about 1/2".

One camera is 13 MP color, the other 2 MP B&W. This arrangement is sufficient to generate a depth map file, but is FAR from being capable of true stereo photography.

One smartphone was made that DOES produce true stereo photos: the HTC EVO 3D. It also had a lenticular (glasses-free) stereo screen for stereo viewing.

Its cameras are equal, 5 MP color, with a separation of about 1 1/4". Note that the name used here is 3D, and it actually is stereo!

Oh, and it takes stereo videos as well as stills...

About $30 or less (used) on eBay.

About depth maps (also called disparity maps)...

Here is what a depth map image looks like. Note that the shade of grey represents the values of the "Z" axis...kinda crude!

BTW, StereoPhotoMaker WILL generate a depth map, and enable uploading the "3D" files to Facebook.
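As a small illustration of what such a grayscale depth map encodes: each grey level is just a Z value that gets rescaled into a horizontal pixel shift (disparity) between the two synthesized views. The 255 = nearest convention below is an assumption; tools differ on which end is near.

```python
import numpy as np

# A depth map is just a grayscale image; here we assume the common
# convention that 255 = nearest and 0 = farthest (tools differ).
depth_u8 = np.array([[0, 128, 255]], dtype=np.uint8)

def disparity_from_depth(depth_u8, max_disp_px):
    """Rescale 8-bit depth values to a horizontal pixel shift, i.e.
    how far each pixel moves between the two generated views."""
    return depth_u8.astype(np.float64) / 255.0 * max_disp_px

disp = disparity_from_depth(depth_u8, max_disp_px=10)
# farthest pixel -> 0 px shift, nearest pixel -> the full 10 px shift
```

With only 256 grey levels, depth is heavily quantized, which is one reason these maps look "kinda crude" compared with true stereo capture.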

 Turbguy1's gear list:
Minolta DiMAGE 7 Konica Minolta DiMAGE Z5 Konica Minolta DiMAGE A2 Fujifilm FinePix Real 3D W3 Nikon D300 +3 more
MOD Turbguy1 Senior Member • Posts: 1,467
Re: I am confused. Anyone understand this new 3-D Facebook app?

Here's another crazy idea: sixteen lenses to produce one image!

The Light 16 camera.

 Turbguy1's gear list:
Minolta DiMAGE 7 Konica Minolta DiMAGE Z5 Konica Minolta DiMAGE A2 Fujifilm FinePix Real 3D W3 Nikon D300 +3 more
3D Gunner Senior Member • Posts: 1,031
Re: I am confused. Anyone understand this new 3-D Facebook app?

Turbguy1 wrote:

Here's an image of a modern smartphone with dual rear cameras:

Note the separation is only about 1/2".

One camera is 13 MP color, the other 2 MP B&W. This arrangement is sufficient to generate a depth map file, but is FAR from being capable of true stereo photography.

BTW, StereoPhotoMaker WILL generate a depth map, and enable uploading the "3D" files to Facebook.

An advanced AI trained at DeepMind is capable of generating the 3D environment from any single picture, so no special camera (with multiple lenses), no multiple shots, and no depth map are needed.

This AI has the ability to create full-fledged 3D scenes merely after observing them in 2D images. It was trained to guess what things look like from angles it has not yet seen.

A very impressive fact to note is that the AI does not use any human-labeled input!

MOD Turbguy1 Senior Member • Posts: 1,467
Re: I am confused. Anyone understand this new 3-D Facebook app?

Certainly "possible". Computer networks can recognize the depth cues we learned in childhood. These cues include:

Obscurations

Shading/shadowing

Object size "diminution" with distance

Reflections

We (humans) can certainly step into a furnished room with our eyes closed, open one eye, and immediately be able to navigate around by applying those same (and other) depth cues. We can drive a car (and fly an aircraft) with sight in only one eye.

I suspect it works "best" with simple "scenes", such as a handful of computer-generated "objects" in a computer-generated "room", with computer-generated "colors" and "lighting". REAL WORLD everyday scenes could be a challenge, generating a lot of stereo "errors" (?)

Got any real world examples?

Currently, the Facebook App requires a depth mapping file...

 Turbguy1's gear list:
Minolta DiMAGE 7 Konica Minolta DiMAGE Z5 Konica Minolta DiMAGE A2 Fujifilm FinePix Real 3D W3 Nikon D300 +3 more
3D Gunner Senior Member • Posts: 1,031
Re: I am confused. Anyone understand this new 3-D Facebook app?

Turbguy1 wrote:

I suspect it works "best" with simple "scenes", such as a handful of computer-generated "objects" in a computer-generated "room", with computer-generated "colors" and "lighting". REAL WORLD everyday scenes could be a challenge, generating a lot of stereo "errors" (?)

Got any real world examples?

Waiting for new releases...

"What’s happened this year is just a small wave of ML tools, mostly doing primitive kinds of things which roughly relate to non-domain-specific research papers. Looking at the amount of money that’s been invested into domain-specific VFX-AI R&D over the past year, I’d say there is a tsunami on the horizon of the next two or three years." (rossdawson)

Real-life applications are in development. Facebook 3D is one of them, but it is still primitive.

OP Gerry Siegel Veteran Member • Posts: 3,244
Re: I am confused. Anyone understand this new 3-D Facebook app?

Hi, and thanks all. I am a little less confused. I just read a brief description on the NSA site, which filled me in more. Quite interesting to keep up with the power of information tech. It has examples and how-tos, though the process has a number of steps. Likely not for me personally; I am pleased with two-image results without mapping, if I understand correctly. But such learning is always of some value.

https://www.facebook.com/groups/11288931988/?ref=bookmarks

 Gerry Siegel's gear list:
Panasonic ZS100 Panasonic Lumix DMC-GX7 Olympus E-M1 Panasonic Lumix DMC-GX8 Panasonic Lumix DC-G9 +4 more
3D Gunner Senior Member • Posts: 1,031
Re: I am confused. Anyone understand this new 3-D Facebook app?

I hope that good, fast conversion from any 2D picture or video, combined with micro-LED technology, will pave the way for mass adoption of high-resolution glasses-free 3D TVs and other display devices in the not-too-distant future.
