The third dimension at a glance
- Two images of the scene are taken by the eyes at the same time
- Disparity and parallax yield the perception of depth
- Stereoscopic depth perception is limited to the region where the two visual fields overlap
What's the point?
We live in a three-dimensional world ...
- ... we gather information about the width, height and depth of objects
- ... we avoid obstacles by sensing real distances and movement
- ... we sense negative space
The brain can be tricked
3D films exploit stereopsis so that your brain reconstructs depth and distances
Head-mounted displays show a slightly displaced image to each eye
A commercial explosion
How can machines compute depth?
How can machines see?
Digital cameras simulate the image-perception capabilities of the human eye, and they keep getting cheaper, smaller and more accurate.
Cameras are available everywhere
Your imagination is the limit!
However, hardware alone isn't enough ...
... there are several ways of computing depth, depending on the number of available views
If the relative positions of the lenses are fixed and known, then depth is inversely proportional to the disparity between corresponding pixels.
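This inverse relationship comes from the pinhole stereo model, Z = f * B / d, where f is the focal length in pixels, B the baseline between the lenses, and d the disparity. A minimal sketch (the focal length and baseline values below are illustrative, not from any real camera):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth is inversely proportional to disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Halving the disparity doubles the estimated depth.
near = depth_from_disparity(700.0, 0.12, 40.0)  # 2.1 m
far = depth_from_disparity(700.0, 0.12, 20.0)   # 4.2 m
print(near, far)
```

Note the consequence for nearby versus distant objects: large disparities (near objects) give precise depth, while small disparities (far objects) make the estimate very sensitive to matching errors.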
Multiple View Stereo
But, could we perform real-time 3D reconstruction with only one lens?
- stereopsis helps us to perceive depth
- digital cameras are the "eyes" of robots
- algorithms to retrieve depth heavily rely on the available hardware
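To make the two-view idea concrete, here is an illustrative sketch (not a production stereo matcher) of the core matching step: for each pixel on one scanline of a rectified pair, search the other view for the best match by sum of absolute differences, and take the winning shift as the disparity. All data below is synthetic:

```python
def match_scanline(left, right, max_disp, window=1):
    """Per-pixel disparity along one rectified scanline via SAD block matching."""
    n = len(left)
    disparities = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        # A point at x in the left view appears at x - d in the right view.
        for d in range(min(max_disp, x) + 1):
            cost = sum(
                abs(left[min(max(x + k, 0), n - 1)]
                    - right[min(max(x - d + k, 0), n - 1)])
                for k in range(-window, window + 1)  # small matching window
            )
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# The right view is the left view shifted by 2 pixels, so the matcher
# should recover a disparity of 2 in the textured region.
left = [0, 0, 10, 80, 10, 0, 0, 0]
right = [10, 80, 10, 0, 0, 0, 0, 0]
print(match_scanline(left, right, max_disp=4))
```

Untextured regions (the flat zeros at the borders) match everywhere equally well, which is why real stereo pipelines add regularization and confidence checks on top of raw block matching.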
Thanks for your attention!
And finally, this is what you get