MIT News | Massachusetts Institute of Technology

Using artificial intelligence to generate 3D holograms in real-time


Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing even though the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi believes the new approach, which the team calls “tensor holography,” will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.

Shi worked on the study, published today in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).

The quest for better 3D

A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.

In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. So, while a photograph of Monet’s “Water Lilies” can highlight the paintings’ color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
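As a toy numerical illustration of that distinction (all values are arbitrary), a wave field can be stored as a complex array whose magnitude is the brightness and whose argument is the phase; a photograph keeps only the intensity, discarding the phase:

```python
import numpy as np

# A hologram sample can be modeled as a complex wave field:
# field = amplitude * exp(1j * phase).  Illustrative values only.
amplitude = np.array([[0.8, 0.5], [1.0, 0.2]])
phase = np.array([[0.0, np.pi / 2], [np.pi, -np.pi / 4]])

field = amplitude * np.exp(1j * phase)

# What an ordinary camera records: intensity alone (phase is lost).
intensity = np.abs(field) ** 2

# From the complex field, both components remain recoverable.
assert np.allclose(np.abs(field), amplitude)
assert np.allclose(np.angle(field), phase)
```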

First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard copies only, making them difficult to reproduce and share.
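The split-beam recording can be summarized in one line of wave optics: the film stores the intensity of the interfering object and reference waves, and it is the cross term that preserves the object wave's phase relative to the reference. A toy numerical check (arbitrary sample values):

```python
import numpy as np

# Optical recording: the film stores |O + R|^2, the intensity of the
# interference between object wave O and reference wave R.
# |O + R|^2 = |O|^2 + |R|^2 + 2*Re(O * conj(R)); the last (cross)
# term carries the phase information.  Toy single-sample values.
O = 0.6 * np.exp(1j * 1.2)   # object wave sample
R = 1.0 * np.exp(1j * 0.0)   # reference wave sample

recorded = np.abs(O + R) ** 2
cross = 2 * np.real(O * np.conj(R))

assert np.isclose(recorded, np.abs(O) ** 2 + np.abs(R) ** 2 + cross)
```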

Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don’t model occlusion with photorealistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.
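A minimal sketch of why those physics-based simulations are slow, assuming the simplest point-based formulation (each scene point contributes a spherical wavefront, i.e. a Fresnel zone plate, at every hologram pixel; parameters are illustrative and occlusion is ignored):

```python
import numpy as np

def point_based_hologram(points, amplitudes, pitch=8e-6, res=64,
                         wavelength=520e-9):
    """Naive point-based CGH: superpose a spherical wavefront from
    every scene point at every hologram pixel.  The cost is
    O(points * pixels), which is why physics-based CGH is slow.
    Toy parameters; no occlusion handling."""
    k = 2 * np.pi / wavelength
    xs = (np.arange(res) - res / 2) * pitch
    X, Y = np.meshgrid(xs, xs)
    field = np.zeros((res, res), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # Fresnel zone plate term
    return field

# Two points at different depths, each needing its own propagation.
pts = [(0.0, 0.0, 0.01), (5e-5, -5e-5, 0.02)]
holo = point_based_hologram(pts, [1.0, 0.5])
print(holo.shape)  # (64, 64)
```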

They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.

The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.

By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised even the researchers.
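The training procedure described above can be sketched in miniature. Here a single linear layer stands in for the CNN and synthetic data stands in for the hologram pairs, so every name and value is illustrative; only the loop structure (predict, compare to target, adjust parameters) matches the article's description:

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(4, 4))      # the "physics" to be learned
inputs = rng.normal(size=(256, 4))    # stand-in for RGB-D inputs
targets = inputs @ W_true             # stand-in for target holograms

W = np.zeros((4, 4))                  # trainable parameters
lr = 0.05
for _ in range(500):
    pred = inputs @ W                            # predict holograms
    grad = inputs.T @ (pred - targets) / len(inputs)  # dL/dW for MSE loss
    W -= lr * grad                               # tweak parameters

# After training, the learned parameters match the target mapping.
assert np.allclose(W, W_true, atol=1e-3)
```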

“We are amazed at how well it performs,” says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.

The research “shows that true 3D holographic displays are practical with only moderate computational requirements,” says Joel Kollin, a principal optical architect at Microsoft who was not involved with the research. He adds that “this paper shows marked improvement in image quality over previous work,” which will “add realism and comfort for the viewer.” Kollin also hints at the possibility that holographic displays like this could even be customized to a viewer’s ophthalmic prescription. “Holographic displays can correct for aberrations in the eye. This makes it possible for a display to produce an image sharper than what the user could see with contacts or glasses, which only correct for low-order aberrations like focus and astigmatism.”

“A considerable leap”

Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.

“It’s a considerable leap that could completely change people’s attitudes toward holography,” says Matusik. “We feel like neural networks were born for this task.”

The work was supported, in part, by Sony.



Published: 10 March 2021

Towards real-time photorealistic 3D holography with deep neural networks

  • Liang Shi (ORCID: orcid.org/0000-0002-4442-4679)
  • Beichen Li (ORCID: orcid.org/0000-0002-9271-0055)
  • Changil Kim
  • Petr Kellnhofer
  • Wojciech Matusik (ORCID: orcid.org/0000-0003-0212-5643)

Nature volume 591, pages 234–239 (2021)


  • Computational science
  • Computer science
  • Optical manipulation and tweezers

An Author Correction to this article was published on 26 April 2021

This article has been updated

The ability to present three-dimensional (3D) scenes with continuous depth sensation has a profound impact on virtual and augmented reality, human–computer interaction, education and training. Computer-generated holography (CGH) enables high-spatio-angular-resolution 3D projection via numerical simulation of diffraction and interference [1]. Yet, existing physically based methods fail to produce holograms with both per-pixel focal control and accurate occlusion [2,3]. The computationally taxing Fresnel diffraction simulation further places an explicit trade-off between image quality and runtime, making dynamic holography impractical [4]. Here we demonstrate a deep-learning-based CGH pipeline capable of synthesizing a photorealistic colour 3D hologram from a single RGB-depth image in real time. Our convolutional neural network (CNN) is extremely memory efficient (below 620 kilobytes) and runs at 60 hertz for a resolution of 1,920 × 1,080 pixels on a single consumer-grade graphics processing unit. Leveraging low-power on-device artificial intelligence acceleration chips, our CNN also runs interactively on mobile (iPhone 11 Pro at 1.1 hertz) and edge (Google Edge TPU at 2.0 hertz) devices, promising real-time performance in future-generation virtual and augmented-reality mobile headsets. We enable this pipeline by introducing a large-scale CGH dataset (MIT-CGH-4K) with 4,000 pairs of RGB-depth images and corresponding 3D holograms. Our CNN is trained with differentiable wave-based loss functions [5] and physically approximates Fresnel diffraction. With an anti-aliasing phase-only encoding method, we experimentally demonstrate speckle-free, natural-looking, high-resolution 3D holograms. Our learning-based approach and the Fresnel hologram dataset will help to unlock the full potential of holography and enable applications in metasurface design [6,7], optical and acoustic tweezer-based microscopic manipulation [8–10], holographic microscopy [11] and single-exposure volumetric 3D printing [12,13].
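The Fresnel diffraction that the pipeline approximates is conventionally simulated with a transfer-function method such as the angular spectrum method. A minimal sketch of free-space propagation in that style (toy parameters; the band-limiting used in practice is omitted):

```python
import numpy as np

def angular_spectrum_propagate(field, distance, wavelength=520e-9,
                               pitch=8e-6):
    """Free-space propagation by the angular spectrum method: FFT the
    field, multiply by the propagation transfer function, inverse FFT.
    Toy parameters, square field, no band-limiting."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clamp evanescent part
    H = np.exp(1j * kz * distance)           # unit-magnitude transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

src = np.zeros((128, 128), dtype=complex)
src[64, 64] = 1.0                            # point source on the input plane
out = angular_spectrum_propagate(src, 0.005)

# |H| = 1 everywhere here, so propagation preserves total energy.
assert np.isclose(np.sum(np.abs(out) ** 2), np.sum(np.abs(src) ** 2))
```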


Data availability

Our hologram dataset (MIT-CGH-4K) and the trained CNN model will be made publicly available (on GitHub) along with the paper.

Code availability

The code to evaluate the trained CNN model will be made publicly available (on GitHub) along with the paper. Additional code is available from the corresponding authors upon reasonable request.

Change history

26 April 2021

A Correction to this paper has been published: https://doi.org/10.1038/s41586-021-03476-5

References

1. Benton, S. A. & Bove, V. M. Jr. Holographic Imaging (John Wiley & Sons, 2008).

2. Maimone, A., Georgiou, A. & Kollin, J. S. Holographic near-eye displays for virtual and augmented reality. ACM Trans. Graph. 36, 85:1–85:16 (2017).

3. Shi, L., Huang, F.-C., Lopes, W., Matusik, W. & Luebke, D. Near-eye light field holographic rendering with spherical waves for wide field of view interactive 3D computer graphics. ACM Trans. Graph. 36, 236:1–236:17 (2017).

4. Tsang, P. W. M., Poon, T.-C. & Wu, Y. M. Review of fast methods for point-based computer-generated holography [Invited]. Photon. Res. 6, 837–846 (2018).

5. Sitzmann, V. et al. End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. ACM Trans. Graph. 37, 114:1–114:13 (2018).

6. Lee, G.-Y. et al. Metasurface eyepiece for augmented reality. Nat. Commun. 9, 4562 (2018).

7. Hu, Y. et al. 3D-integrated metasurfaces for full-colour holography. Light Sci. Appl. 8, 86 (2019).

8. Melde, K., Mark, A. G., Qiu, T. & Fischer, P. Holograms for acoustics. Nature 537, 518–522 (2016).

9. Smalley, D. et al. A photophoretic-trap volumetric display. Nature 553, 486–490 (2018).

10. Hirayama, R., Plasencia, D. M., Masuda, N. & Subramanian, S. A volumetric display for visual, tactile and audio presentation using acoustic trapping. Nature 575, 320–323 (2019).

11. Rivenson, Y., Wu, Y. & Ozcan, A. Deep learning in holography and coherent imaging. Light Sci. Appl. 8, 85 (2019).

12. Shusteff, M. et al. One-step volumetric additive manufacturing of complex polymer structures. Sci. Adv. 3, eaao5496 (2017).

13. Kelly, B. E. et al. Volumetric additive manufacturing via tomographic reconstruction. Science 363, 1075–1079 (2019).

14. Levoy, M. & Hanrahan, P. Light field rendering. In Proc. 23rd Annual Conference on Computer Graphics and Interactive Techniques 31–42 (ACM, 1996).

15. Waters, J. P. Holographic image synthesis utilizing theoretical methods. Appl. Phys. Lett. 9, 405–407 (1966).

16. Leseberg, D. & Frère, C. Computer-generated holograms of 3-D objects composed of tilted planar segments. Appl. Opt. 27, 3020–3024 (1988).

17. Tommasi, T. & Bianco, B. Computer-generated holograms of tilted planes by a spatial frequency approach. J. Opt. Soc. Am. A 10, 299–305 (1993).

18. Matsushima, K. & Nakahara, S. Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method. Appl. Opt. 48, H54–H63 (2009).

19. Symeonidou, A., Blinder, D., Munteanu, A. & Schelkens, P. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling. Opt. Express 23, 22149–22161 (2015).

20. Lucente, M. E. Interactive computation of holograms using a look-up table. J. Electron. Imaging 2, 28–35 (1993).

21. Lucente, M. & Galyean, T. A. Rendering interactive holographic images. In Proc. 22nd Annual Conference on Computer Graphics and Interactive Techniques 387–394 (ACM, 1995).

22. Lucente, M. Interactive three-dimensional holographic displays: seeing the future in depth. Comput. Graph. 31, 63–67 (1997).

23. Chen, J.-S. & Chu, D. P. Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications. Opt. Express 23, 18143–18155 (2015).

24. Zhao, Y., Cao, L., Zhang, H., Kong, D. & Jin, G. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method. Opt. Express 23, 25440–25449 (2015).

25. Makey, G. et al. Breaking crosstalk limits to dynamic holography using orthogonality of high-dimensional random vectors. Nat. Photon. 13, 251–256 (2019).

26. Yamaguchi, M., Hoshino, H., Honda, T. & Ohyama, N. in Practical Holography VII: Imaging and Materials Vol. 1914 (ed. Benton, S. A.) 25–31 (SPIE, 1993).

27. Barabas, J., Jolly, S., Smalley, D. E. & Bove, V. M. Jr in Practical Holography XXV: Materials and Applications Vol. 7957 (ed. Bjelkhagen, H. I.) 13–19 (SPIE, 2011).

28. Zhang, H., Zhao, Y., Cao, L. & Jin, G. Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues. Opt. Express 23, 3901–3913 (2015).

29. Padmanaban, N., Peng, Y. & Wetzstein, G. Holographic near-eye displays based on overlap-add stereograms. ACM Trans. Graph. 38, 214:1–214:13 (2019).

30. Shimobaba, T., Masuda, N. & Ito, T. Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane. Opt. Lett. 34, 3133–3135 (2009).

31. Wakunami, K. & Yamaguchi, M. Calculation for computer generated hologram using ray-sampling plane. Opt. Express 19, 9086–9101 (2011).

32. Häussler, R. et al. Large real-time holographic 3D displays: enabling components and results. Appl. Opt. 56, F45–F52 (2017).

33. Hamann, S., Shi, L., Solgaard, O. & Wetzstein, G. Time-multiplexed light field synthesis via factored Wigner distribution function. Opt. Lett. 43, 599–602 (2018).

34. Nair, V. & Hinton, G. E. Rectified linear units improve restricted Boltzmann machines. In Proc. International Conference on Machine Learning (ICML) 807–814 (Omnipress, 2010).

35. Sinha, A., Lee, J., Li, S. & Barbastathis, G. Lensless computational imaging through deep learning. Optica 4, 1117–1125 (2017).

36. Metzler, C. et al. prDeep: robust phase retrieval with a flexible deep network. In Proc. International Conference on Machine Learning (ICML) 3501–3510 (JMLR, 2018).

37. Eybposh, M. H., Caira, N. W., Chakravarthula, P., Atisa, M. & Pégard, N. C. in Optics and the Brain BTu2C–2 (Optical Society of America, 2020).

38. Rivenson, Y., Zhang, Y., Günaydın, H., Teng, D. & Ozcan, A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci. Appl. 7, 17141 (2018).

39. Ren, Z., Xu, Z. & Lam, E. Y. Learning-based nonparametric autofocusing for digital holography. Optica 5, 337–344 (2018).

40. Wu, Y. et al. Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery. Optica 5, 704–710 (2018).

41. Horisaki, R., Takagi, R. & Tanida, J. Deep-learning-generated holography. Appl. Opt. 57, 3859–3863 (2018).

42. Peng, Y., Choi, S., Padmanaban, N. & Wetzstein, G. Neural holography with camera-in-the-loop training. ACM Trans. Graph. 39, 185:1–185:14 (2020).

43. Jiao, S. et al. Compression of phase-only holograms with JPEG standard and deep learning. Appl. Sci. 8, 1258 (2018).

44. Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S. & Vedaldi, A. Describing textures in the wild. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 3606–3613 (IEEE, 2014).

45. Dai, D., Riemenschneider, H. & Gool, L. V. The synthesizability of texture examples. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 3027–3034 (IEEE, 2014).

46. Kim, C., Zimmer, H., Pritch, Y., Sorkine-Hornung, A. & Gross, M. Scene reconstruction from high spatio-angular resolution light fields. ACM Trans. Graph. 32, 73:1–73:12 (2013).

47. Matsushima, K. & Shimobaba, T. Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields. Opt. Express 17, 19662–19673 (2009).

48. Shimobaba, T. & Ito, T. A color holographic reconstruction system by time division multiplexing with reference lights of laser. Opt. Rev. 10, 339–341 (2003).

49. Hsueh, C. K. & Sawchuk, A. A. Computer-generated double-phase holograms. Appl. Opt. 17, 3874–3883 (1978).

50. Mendoza-Yero, O., Mínguez-Vega, G. & Lancis, J. Encoding complex fields by using a phase-only optical element. Opt. Lett. 39, 1740–1743 (2014).

51. Xiao, L., Kaplanyan, A., Fix, A., Chapman, M. & Lanman, D. DeepFocus: learned image synthesis for computational displays. ACM Trans. Graph. 37, 200:1–200:13 (2018).

52. Wang, Y., Sang, X., Chen, Z., Li, H. & Zhao, L. Real-time photorealistic computer-generated holograms based on backward ray tracing and wavefront recording planes. Opt. Commun. 429, 12–17 (2018).

53. Hasegawa, N., Shimobaba, T., Kakue, T. & Ito, T. Acceleration of hologram generation by optimizing the arrangement of wavefront recording planes. Appl. Opt. 56, A97–A103 (2017).

54. Sifatul Islam, M. et al. Max-depth-range technique for faster full-color hologram generation. Appl. Opt. 59, 3156–3164 (2020).

55. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR) (2015).

56. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI) 234–241 (Springer, 2015).

57. Yu, F., Koltun, V. & Funkhouser, T. Dilated residual networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 472–480 (IEEE, 2017).


Acknowledgements

We thank K. Aoyama and S. Wen (from Sony) for discussions; J. Minor, T. Du, M. Foshey, L. Makatura, W. Shou and T. Erps from MIT for improving/editing the manuscript; R. White for the administration of the project; X. Ju for the design of the iPhone demo; and P. Ma for providing an iPhone 11 Pro for the mobile demo. We acknowledge funding from the Sony Research Award Program.

Author information

Authors and Affiliations

Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA

Liang Shi, Beichen Li, Changil Kim, Petr Kellnhofer & Wojciech Matusik

Electrical Engineering and Computer Science Department, Massachusetts Institute of Technology, Cambridge, MA, USA


Contributions

L.S. conceived the idea, implemented the proposed framework, built the display prototype, performed experimental validation, and conducted the iPhone and Edge TPU demo. B.L. performed the pipeline evaluation and made the Supplementary Videos. B.L., C.K. and P.K. were involved in the design of the proposed framework. L.S. and P.K. led the writing and revision of the manuscript. W.M. supervised the work. All authors discussed ideas and results, and contributed to the manuscript.

Corresponding authors

Correspondence to Liang Shi or Wojciech Matusik .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature thanks Tomoyoshi Shimobaba and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Visualization of masked Fresnel zone plates computed by OA-PBM and performance comparison of foreground occlusion.

a , A depth image cropped from a frame of Big Buck Bunny . Three regions with different depth landscapes are highlighted in different colours. b , Masked Fresnel zone plates computed for the centre pixel of each highlighted region. Three pixels are propagated for the same distance for ease of comparison. The flat depth landscape around the green pixel results in a non-occluded Fresnel zone plate. The masked Fresnel zone plates of red and blue pixels contain sharp cutoffs at their long-distance separated occlusion boundaries, and freeform shapes at occlusion boundaries with moderate distance separation and varying depth distribution. c , Comparison of foreground reconstruction by the PBM, OA-PBM and Fresnel diffraction. The scene is a cropped modulation transfer function bar target with a step depth profile. The PBM leaks a considerable portion of the background into the foreground due to a lack of occlusion handling. The artefacts are clearly visible in the original unmagnified view. The OA-PBM removes a considerable portion of the artefacts and the remaining artefacts are visually inconsequential in the unmagnified view. d , Comparison of focal stacks reconstructed by the PBM and OA-PBM for the Big Buck Bunny . The orange bounding boxes mark the background leakage in the PBM reconstructions. a , d , Images reproduced from www.bigbuckbunny.org (© 2008, Blender Foundation) under a Creative Commons licence ( https://creativecommons.org/licenses/by/3.0/ ).

Extended Data Fig. 2 Samples of the MIT-CGH-4K dataset and comparison with the DeepFocus dataset.

a , The RGB-D image, amplitude and phase of two samples from the MIT-CGH-4K dataset. The RGB image records the amplitude of the scene (directly visualized in sRGB space) and consists of large variations in colour, texture, shading and occlusion. The pixel depth has a statistically uniform distribution throughout the view frustum. The phase presents high-frequency features at both occlusion boundaries and texture edges to accommodate rapid depth and colour changes. b , A sample RGB-D image from the DeepFocus dataset 51 . c , Histograms of pixel depth distribution computed for the MIT-CGH-4K dataset and the DeepFocus dataset. b , Image reproduced from ‘3D Scans from Louvre Museum’ by Benjamin Bardou under a Creative Commons licence ( https://creativecommons.org/licenses/by-nc/4.0/ ).

Extended Data Fig. 3 Schematic of the midpoint hologram calculation.

a , A holographic display magnified through a diverging point light source. b , A holographic display unmagnified through the thin-lens formula. c , The target hologram in this example is propagated to the centre of the unmagnified view frustum to produce the midpoint hologram. The width of the maximum subhologram is considerably reduced.

Extended Data Fig. 4 Evaluation of tensor holography CNN on model architecture and test patterns.

a , Performance comparison of different CNN architectures. b , Performance comparison of different CNN miniaturization methods. c , CNN prediction of two standard test pattern (USAF-1951 and RCA Indian-head) variants made by the authors.

Extended Data Fig. 5 Evaluation of tensor holography CNN on additional computer-rendered scenes.

a , b , CNN prediction of amplitude and phase along with focused reconstructions for holograms of a living room scene from the DeepFocus dataset 51 ( a ) and a night landscape scene from the Stanford light field dataset 29 ( b ). a , Certain still images from ‘ArchVizPRO Vol. 2’ were used to render new images for inclusion in this publication with the permission of the copyright holder (© Corridori Ruggero 2018), under a Creative Commons licence ( https://creativecommons.org/licenses/by-nc/4.0/ ). Panel b reproduced with permission from ref. 29 , ACM.

Extended Data Fig. 6 Evaluation of tensor holography CNN on real-world captured scenes.

a , b , CNN prediction of amplitude and phase along with focused reconstructions for holograms of a statue scene ( a ) and a mansion scene ( b ). Both scenes are from the ETH light field dataset 46 .

Extended Data Fig. 7 Comparison of the original DPM and the AA-DPM.

Reconstruction of two real-world scenes from the encoded phase-only holograms. The couch scene is focused on the mouse toy and the statue scene is focused on the black statue. Orange bounding boxes highlight regions with strong high-frequency artefacts. Left: DPM. Right: AA-DPM.
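For context, the double-phase method (DPM) compared in this figure rests on a simple identity: any complex value A·exp(iφ) with A ≤ 1 equals the average of two unit-magnitude phasors with phases φ ± arccos(A), which a phase-only modulator can display on interleaved pixels. A toy numerical check (arbitrary values; the anti-aliasing pre-filter that distinguishes the AA-DPM is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
# Random complex field with amplitude normalized into (0, 1].
field = (rng.uniform(0.1, 1.0, 16)
         * np.exp(1j * rng.uniform(-np.pi, np.pi, 16)))

phi = np.angle(field)
alpha = np.arccos(np.abs(field))        # requires amplitude in [0, 1]
theta1, theta2 = phi + alpha, phi - alpha

# Averaging the two unit phasors reproduces the complex field exactly:
# (e^{i(phi+a)} + e^{i(phi-a)}) / 2 = e^{i phi} * cos(a) = field.
reconstructed = 0.5 * (np.exp(1j * theta1) + np.exp(1j * theta2))

assert np.allclose(reconstructed, field)
```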

Extended Data Fig. 8 Holographic display prototype used for the experimental results shown in this paper.

The control box of the laser, Labjack DAQ and camera are not visualized in the figure.

Extended Data Fig. 9 Additional experimental demonstration of 3D holographic projection (part 1).

The RGB-D input can be found in Extended Data Fig. 6 .

Extended Data Fig. 10 Additional experimental demonstration of 3D holographic projection (part 2).

The RGB-D inputs can be found in Extended Data Fig. 6 for a , and Extended Data Fig. 4 for b . Panel a reproduced with permission from ref. 29 , ACM.

Supplementary information

This video demonstrates a simulated focal sweep of a CNN predicted hologram computed for a real-world captured 3D couch scene. The image resolution is 1080p.

This video demonstrates a simulated focal sweep of a CNN predicted hologram computed for a computer-rendered 3D living room scene. The image resolution is 1024 × 1024.

This video demonstrates a photographed focal sweep of a CNN predicted hologram computed for a real-world captured 3D couch scene. The video is captured by a Sony A7 Mark III mirrorless camera paired with a Sony GM 16–35 mm f/2.8 camera lens at 4K/30 Hz and downsampled to 1080p. Only the green channel is visualized for temporal stability.

This video demonstrates real-time 3D hologram computation on a NVIDIA TITAN RTX GPU. The video is captured by a Panasonic GH5 mirrorless camera with a Lumix 10-25 mm f/1.7 lens at 4K/60 Hz (a colour frame rate of 20 Hz) and downsampled to 1080P. The color is obtained field sequentially.

This video demonstrates interactive hologram computation on an iPhone 11 Pro using a mini version of tensor holography CNN (see Fig. 2 caption for network architecture details).

This video demonstrates a simulated focal sweep of a CNN predicted hologram computed for a 3D Star test pattern. The image resolution is 1550*1462.

Rights and permissions

Reprints and permissions

About this article

Cite this article.

Shi, L., Li, B., Kim, C. et al. Towards real-time photorealistic 3D holography with deep neural networks. Nature 591 , 234–239 (2021). https://doi.org/10.1038/s41586-020-03152-0

Download citation

Received : 22 April 2020

Accepted : 21 December 2020

Published : 10 March 2021

Issue Date : 11 March 2021

DOI : https://doi.org/10.1038/s41586-020-03152-0


This article is cited by

Liquid lens based holographic camera for real 3D scene hologram acquisition using end-to-end physical model-driven network

  • Zhao-Song Li
  • Qiong-Hua Wang

Light: Science & Applications (2024)

Waveguide holography for 3D augmented reality glasses

  • Changwon Jang
  • Kiseung Bang
  • Douglas Lanman

Nature Communications (2024)

Intelligent optoelectronic processor for orbital angular momentum spectrum measurement

PhotoniX (2023)

Laser nanoprinting of 3D nonlinear holograms beyond 25000 pixels-per-inch for inter-wavelength-band information processing

  • Pengcheng Chen

Nature Communications (2023)

Deep learning-based incoherent holographic camera enabling acquisition of real-world holograms for holographic streaming system

  • Hyeonseung Yu
  • Youngrok Kim
  • Hong-Seok Lee


The 3D holographic projection technology based on three-dimensional computer graphics

Sharper 3D Holograms Come Into Focus

New projection method is key to depth control and reducing image bleed-through

A green three-dimensional dolphin shape emerges from a blow out of two squares of material.

According to new research, dynamic holographic projection of 3D objects is possible using ultrahigh-density successive planes—achieving both fine-grained depth control and reduced crosstalk between planes. This approach could enable realistic representations for use in virtual reality and other applications.

True 3D holograms that are neither blurry nor fuzzy, yet still appear to have real depth, may be achievable in a projected medium, according to a new study. The researchers, based in China and Singapore, exerted a new level of control over the hologram projection’s scattering medium.

Much like flying cars or warp-speed travel , holograms are a kind of technology that science fiction overpromised and reality has underdelivered. Today this technology is advanced enough to resurrect pop stars, like Whitney Houston , for convincing stage shows, but the limited depth of these projections means that the hologram experience lacks convincing three-dimensionality. Low axial resolution (equivalent to the distance from the nearest image plane in focus to the farthest plane in focus, also called depth of field) and high levels of crosstalk interference between projection planes have long prevented 3D holograms from achieving finer depth control.

One of the innovations the team developed is a modulating medium for projecting images—similar to what LCD display screens use.

Now, a research team from the University of Science and Technology of China and the National University of Singapore has reported a new technique that solves both of these problems at once to create ultrahigh-density 3D holograms.

“Our work presents a new paradigm towards realistic 3D holograms that can deliver an exceptional representation of the 3D world around us,” says senior author Lei Gong , an associate professor of optical engineering at the University of Science and Technology of China. Gong and colleagues call this method 3D scattering-assisted dynamic holography.

“The new method might benefit real-life applications such as 3D printing, optical encryption, imaging and sensing, and more,” he continues.

Large-scale 3D holograms are typically created by scattering a projection across many planes to create a stack of pixels that, when viewed together, give the impression of a virtual 3D object. Stacking these image planes close together can create high-density images. However, increasing the plane density can also generate interference in the form of crosstalk, Gong says.

“In short, cross talk is the mutual-intensity interference between images projected at different depths,” he says.

Even though a projection is broken into separate image fields, interference can occur when light from one plane filters through into the others. Like a pen bleeding through from the front of a sheet of paper to the back, this filtering can cause interference because the light for each plane contains unique pixel information. As a result, blurred images are created when this light bleeds through.

“[This] restricts the depth control of 3D projections,” Gong says.

In order to increase image-plane density without creating blurry images, Gong and the team focused on how to shape the projection photons before they are scattered across multiple planes. Typically, the hologram projection is controlled by passing light through a spatial light modulator: an SLM is a device that modulates a light beam’s amplitude, phase, or intensity pixel by pixel. A TV with a liquid crystal display (LCD) is one example of an SLM.
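To make the role of an SLM concrete, here is a minimal, idealized sketch (not the authors' method) of the wrapped phase pattern a phase-only SLM would display to act as a lens, focusing light at a single depth plane. The pixel pitch, wavelength, and focal length are arbitrary example values.

```python
import numpy as np

def fresnel_lens_phase(n=512, pitch=8e-6, wavelength=532e-9, focal=0.1):
    """Wrapped phase pattern (radians in [0, 2*pi)) that an idealized
    phase-only SLM would display to focus a plane wave at distance `focal`."""
    coords = (np.arange(n) - n / 2) * pitch          # SLM pixel coordinates (m)
    x, y = np.meshgrid(coords, coords)
    # Paraxial (Fresnel) approximation of a converging-lens phase profile.
    phase = -np.pi * (x**2 + y**2) / (wavelength * focal)
    return np.mod(phase, 2 * np.pi)

pattern = fresnel_lens_phase()
print(pattern.shape)  # (512, 512)
```

Displaying a different wrapped phase pattern retargets the focus to a different depth, which is the basic mechanism behind multiplane holographic projection.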

However, these SLMs alone can limit the final hologram’s resolution. To solve this and the cross-talk problem, the team introduced a scattering medium made from zinc oxide nanoparticles to help further scatter the projection’s light. Introducing this additional medium served to increase the total amount of scattering the light experienced and created a greater range of diffraction angles. This greater range of angles made it possible to decorrelate, or disentangle, the image fields to reduce pixel correlation between planes. As a result, cross-talk interference via light bleeding through the image layers was reduced.
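The decorrelation idea can be illustrated with a toy numerical experiment (not the authors' zinc-oxide setup; all parameters here are arbitrary): placing a random phase mask over an aperture turns smooth defocus into speckle, and the speckle intensity patterns at two nearby depths are far less correlated than the un-scattered ones.

```python
import numpy as np

def propagate(field, z, wavelength=532e-9, pitch=8e-6):
    """Angular-spectrum propagation of a sampled complex field over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0)
    kz = 2 * np.pi / wavelength * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def plane_correlation(with_diffuser, n=256):
    """Correlation between the intensity patterns at two depth planes."""
    rng = np.random.default_rng(0)
    x = np.arange(n) - n / 2
    X, Y = np.meshgrid(x, x)
    field = (X**2 + Y**2 < (n // 4) ** 2).astype(complex)  # circular aperture
    if with_diffuser:
        # Random phase mask standing in for a scattering medium.
        field *= np.exp(2j * np.pi * rng.random((n, n)))
    i1 = np.abs(propagate(field, 1e-3)) ** 2
    i2 = np.abs(propagate(field, 3e-3)) ** 2
    return np.corrcoef(i1.ravel(), i2.ravel())[0, 1]

# The diffuser decorrelates the depth planes (lower inter-plane correlation).
print(plane_correlation(False) > plane_correlation(True))  # True
```

Lower correlation between planes is exactly what reduces the bleed-through described above: light belonging to one plane no longer looks like the light belonging to its neighbours.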

With the cross-talk issue addressed, the team was then able to stack the image planes more densely, with less depth between them, which helps to increase the 3D experience of the projection.

To verify the benefit of this approach, the team used both simulated and experimental scenarios. Where other scattering approaches limited the number of image layers to only 32 at a depth interval of 3.75 millimeters, simulations of the new scattering approach could generate 125 image layers at a depth interval of just under 1 mm.

In experimental trials, the researchers also reported reduced depth intervals and increased plane density compared to existing scattering techniques, while generating minimal cross talk.

In addition to improving naked-eye 3D hologram visualizations and the viewing angles of holographic VR headsets, Gong says that this technology could also be put to use beyond virtual reality.

“For the biomedical field, it might be used to project the 3D medical images that help the diagnosis and treatment.”

But before these advancements can take place, the technology will need to move from point-cloud projections to solid ones. In order to bridge this gap, Gong says it will be necessary to develop new algorithms to handle the complexity of these 3D images.

“A much higher pixel-count hologram is required to project complicated 3D scenes,” Gong says. “New algorithms, such as learning-based methods, should be developed for this purpose.”

  • Tired of Zoom Calls? Try Beaming in on a Hologram ›
  • Deep Learning Enables Real-Time 3D Holograms On a Smartphone ›
  • Impossible Photo Feat Now Possible Via Holography - IEEE Spectrum ›
  • Holography - Wikipedia ›
  • Using artificial intelligence to generate 3D holograms in real-time ... ›

Sarah Wells is a science and technology journalist interested in how innovation and research intersect with our daily lives. She has written for a number of national publications, including Popular Mechanics, Popular Science, and Motherboard.



April 6, 2023


Technology advance paves way to more realistic 3D holograms for virtual reality and more


Researchers have developed a new way to create dynamic ultrahigh-density 3D holographic projections. By packing more details into a 3D image, this type of hologram could enable realistic representations of the world around us for use in virtual reality and other applications.

"A 3D hologram can present real 3D scenes with continuous and fine features," said Lei Gong, who led a research team from the University of Science and Technology of China. "For virtual reality , our method could be used with headset-based holographic displays to greatly improve the viewing angles, which would enhance the 3D viewing experience. It could also provide better 3D visuals without requiring a headset."

Producing a realistic-looking holographic display of 3D objects requires projecting images with a high pixel resolution onto a large number of successive planes, or layers, that are spaced closely together. This achieves high depth resolution, which is important for providing the depth cues that make the hologram look three dimensional.

Gong's team and Chengwei Qiu's research team at the National University of Singapore describe their new approach, called three-dimensional scattering-assisted dynamic holography (3D-SDH), in the journal Optica . They show that it can achieve a depth resolution more than three orders of magnitude greater than state-of-the-art methods for multiplane holographic projection.

"Our new method overcomes two long-existing bottlenecks in current digital holographic techniques—low axial resolution and high interplane crosstalk—that prevent fine depth control of the hologram and thus limit the quality of the 3D display," said Gong. "Our approach could also improve holography-based optical encryption by allowing more data to be encrypted in the hologram."


Producing more detailed holograms

Creating a dynamic holographic projection typically involves using a spatial light modulator (SLM) to modulate the intensity and/or phase of a light beam. However, today's holograms are limited in terms of quality because current SLM technology allows only a few low-resolution images to be projected onto separate planes with low depth resolution.

To overcome this problem, the researchers combined an SLM with a diffuser, which enables multiple image planes to be separated by a much smaller distance without being constrained by the properties of the SLM. By exploiting light scattering and wavefront shaping to also suppress crosstalk between the planes, this setup enables ultrahigh-density 3D holographic projection.

To test the new method, the researchers first used simulations to show that it could produce 3D reconstructions with a much smaller depth interval between each plane. For example, they were able to project a 3D rocket model with 125 successive image planes at a depth interval of 0.96 mm in a single 1000×1000-pixel hologram, compared to 32 image planes with a depth interval of 3.75 mm using another recently developed approach known as random vector-based computer-generated holography.
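A quick check of the figures quoted above shows that the two configurations cover essentially the same depth volume; the gain is in how finely that volume is sampled. This treats plane count times spacing as the covered depth, which is an approximation.

```python
# Figures quoted in the text: 32 planes at 3.75 mm vs 125 planes at 0.96 mm.
planes_old, interval_old = 32, 3.75   # random vector-based CGH
planes_new, interval_new = 125, 0.96  # 3D-SDH

# Approximate depth volume covered by each configuration (mm).
span_old = planes_old * interval_old
span_new = planes_new * interval_new
print(round(span_old, 2), round(span_new, 2))  # 120.0 120.0
print(round(interval_old / interval_new, 2))   # 3.91
```

In other words, 3D-SDH packs roughly four times as many planes into the same roughly 120 mm depth range, at about a 3.9× finer depth interval.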


To validate the concept experimentally, they built a prototype 3D-SDH projector to create dynamic 3D projections and compared it to a conventional state-of-the-art setup for 3D Fresnel computer-generated holography. They showed that 3D-SDH achieved an improvement in axial resolution of more than three orders of magnitude over the conventional counterpart.

The 3D holograms the researchers demonstrated are all point-cloud 3D images, meaning they cannot present the solid body of a 3D object. Ultimately, the researchers would like to be able to project a collection of 3D objects with a hologram, which would require a higher pixel-count hologram and new algorithms.

Journal information: Optica

Provided by Optica


Holography, and the future of 3D display


  • Light: Advanced Manufacturing 2 , Article number: (2021)
  • Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, USA
  • Corresponding author: Pierre-Alexandre Blanche ( [email protected] )
  • Received: 06 May 2021; Revised: 05 November 2021; Accepted: 19 November 2021; Accepted article preview online: 25 November 2021; Published online: 20 December 2021

doi: https://doi.org/10.37188/lam.2021.028

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .



Research Summary

A review of holography for 3D display applications

The pioneers of holography, Gabor, Leith, Upatnieks, and Denisyuk, predicted very early that the ultimate 3D display would be based on this technique. This conviction was rooted in the fact that holography is the only approach that can render all of the optical cues interpreted by the human visual system. In this review article, Dr. Pierre-Alexandre Blanche from the University of Arizona, USA, discusses the recent accomplishments made in the field of holographic 3D display: specifically, the real-time computation of holograms with neural networks and other algorithms, the efficient transmission and distribution of the information over long distances, and the new diffractive elements allowing holograms to be rendered in various 3D display setups.


Abstract: The pioneers of holography, Gabor, Leith, Upatnieks, and Denisyuk, predicted very early that the ultimate 3D display would be based on this technique. This conviction was rooted in the fact that holography is the only approach that can render all of the optical cues interpreted by the human visual system. Holographic 3D displays have been a dream chased for many years, facing challenges on all fronts: computation, transmission, and rendering. With numbers such as 6.6 × 10^15 flops required for computation, 3 × 10^15 b/s data rates, and 1.6 × 10^12 phase pixels, the task has been daunting. This article reviews the recent accomplishments made in the field of holographic 3D display: specifically, the new developments in machine learning and neural network algorithms demonstrating that computer-generated holograms are approaching real-time processing. A section also discusses the problem of data transmission, which can arguably be solved using clever compression algorithms and optical-fibre transmission lines. Finally, we introduce the last obstacle to holographic 3D display, which is the rendering hardware. Here, however, there is no further mystery. With larger and faster spatial light modulators (SLMs), holographic projection systems are constantly improving. The pixel count on liquid crystal on silicon (LCoS) as well as microelectromechanical systems (MEMS) phase displays is increasing by the millions, and new photonic integrated circuit phased arrays are achieving real progress. It is only a matter of time before these systems leave the laboratory and enter the consumer world. The future of 3D displays is holographic, and it is happening now.

Not long after the first demonstration of holographic images by Leith and Upatnieks 1 , as well as by Denisyuk 2 , a similar technique was used by De Bitetto and others to display motion picture holograms 3 , 4 . As the name indicates, a motion picture hologram uses a rapid succession of static holographic images to reproduce the effect of movement in 3D. Because of these successes, it was hoped at the time that a holographic television would be developed soon after. Unfortunately, more than 50 years later, there is still no holographic television in sight.

Compared with motion picture holograms, a holographic display system needs to capture, transmit, and render 3D information. In the case of an interactive display system, such as a computer screen, there is the additional constraint that real-time manipulation of 3D data is required. This makes a universal holographic 3D display much more challenging to develop than the simple successive projection of pre-recorded holograms.


It should be noted that elements should be approximately 10 times smaller than predicted by equation 1 to achieve high-efficiency blazed diffraction gratings.


Fig. 1 plots the data rate in bits per second for different telecommunication systems according to the time of their introduction. The optical telegraph (or Chappe's semaphore), presented to Napoleon Bonaparte in 1798, had a typical transmission rate of approximately 2 to 3 symbols (from a vocabulary of 196) per minute, or 0.4 b/s. Subsequently, the electrical telegraph, popularized in the early 1840s using Samuel Morse's code, achieved a rate of approximately 100 b/s. Graham Bell's telephone was introduced in 1876 and supported voice frequency transmission up to 64 kb/s 6 . The early NTSC black-and-white electronic television, available in the 1940s, had 525 interlaced lines and displayed images at a rate of 29.97 frames per second, for a bit rate of 26 Mb/s 7 . The color NTSC format was introduced 10 years later and tripled the black-and-white bandwidth to accommodate red, green, and blue channels 8 . More recently, the digital video formats make it easier to establish the bit rate based on pixel count, with HDTV 1080p@3 Gb/s in 1990, ultra-HDTV 2160p(4K)@12.7 Gb/s in 2010, and currently 4320p(8K)@47.8 Gb/s. Note that these values are for uncompressed data feeds and, for the sake of comparison, do not include any type of compression algorithm.

Fig. 1   Stairway to holography: approximate bit rate magnitude of various telecommunication devices according to their year of introduction.
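The digital-video rates on the stairway above are consistent with a simple pixel-count calculation, assuming 24-bit colour at 60 frames per second (and taking the 4K figure to correspond to the DCI width of 4096 pixels; these assumptions are ours, not stated in the text):

```python
def uncompressed_rate_gbps(width, height, fps=60, bits_per_pixel=24):
    """Raw (uncompressed) video bit rate in Gb/s."""
    return width * height * bits_per_pixel * fps / 1e9

# Assumed resolutions: HDTV 1920x1080, DCI 4K 4096x2160, 8K 7680x4320.
for name, (w, h) in {"1080p": (1920, 1080),
                     "4K": (4096, 2160),
                     "8K": (7680, 4320)}.items():
    print(f"{name}: {uncompressed_rate_gbps(w, h):.1f} Gb/s")
# 1080p: 3.0 Gb/s
# 4K: 12.7 Gb/s
# 8K: 47.8 Gb/s
```

The same arithmetic applied to the 1.6 × 10^12 phase pixels of a full holographic display is what produces the daunting petabit-scale rates quoted in the abstract.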


In this manuscript, we investigate the reasons why holography is still perceived to be the ultimate technique to develop a commercial 3D display, review the progress that has been accomplished toward this goal, and discuss the missing technologies that are still needed to promote the emergence of such a 3D display.

Understanding the human visual system and how it perceives the third dimension is key to developing a 3D display 9 − 11 . The human visual system takes input from many different cues to determine depth perception. It should be noted that most of these cues originate from 2D phenomena. Among these are shading, shadowing, perspective, relative size, occlusion, blurriness, and haze. The example presented in Fig. 2 shows how three simple discs, presented on whatever 2D display you are reading this article on, are interpreted as 3D balls owing to these cues.


Fig. 2   Examples of some of the 2D visual cues that affect depth perception.

Because these 2D cues are processed by the human visual system to determine the depth of a scene, a painting, a photograph, or a movie is intelligible as long as the cues are correctly reproduced. When they are not, this leads to optical illusions such as infinite staircases and other impossible shapes.

The same applies to any 3D display system, which must, first and foremost, represent these 2D cues before introducing any additional cues. Additional 3D cues are stereo disparity, motion parallax, and accommodation. We briefly review these cues in the following sections.

Stereo Disparity

Stereo disparity is the change in parallax of the scene observed between the left and right eyes. It only requires that two images be reproduced, and as such, is the most technologically manageable 3D cue. It is so manageable in fact that the introduction of stereoscopic displays pre-dates the invention of photography. The first system was invented by Sir Charles Wheatstone in the early 1830s using drawn images 12 . This was then followed by taking pictures from two positions, or with a camera having two objectives.

When a stereo projection is meant for a single individual, such as a head-worn display, it is relatively easy to keep the left and right views separated. Images are separated by simply introducing a physical partition between both eyes 13 . For a larger audience, the separation between left and right views is often achieved by having the viewers use eyewear with different left and right lenses. The left and right image coding can be achieved using color (anaglyphs), orthogonal polarization, or alternating shutters 14 , 15 .

From a user perspective, the eyewear requirement for stereo display has been accepted in special venues such as theaters, where large productions continue to be released in stereoscopic 3D. However, the commercial failure of stereoscopic 3D television seems to indicate that for everyday experience, the public is not enthusiastic about wearing special glasses in their own living rooms 16 .

Autostereoscopy

Autostereoscopic displays achieve stereoscopy without the need for special glasses. The left and right views are projected directly toward the corresponding eyes of the viewer using parallax barriers or a microlens array 17 − 19 . To ensure that each eye receives the correct projection, autostereoscopic systems require the viewer to be located at a particular position. This inconvenience has proven sufficient to limit the adoption of autostereoscopic 3D television by the consumer market 20 . It should also be noted that autostereoscopic systems with an eye-tracking mechanism that mitigates the fixed viewing zones have been developed, but they have not achieved wide popularity 21 − 23 .

Motion Parallax

Motion parallax requires many views to be projected, allowing the viewer to see the correct parallax even when moving in front of the display. The density of the different views that are projected needs to be such that the autostereoscopic information is correctly reproduced. Therefore, at least two views per inter-pupillary distance are required. However, to achieve a smooth transition from one perspective to the next, a much larger density of views is required 24 . The optimum view density depends on the exact configuration of the display and the expected viewer distance, but numbers are on the order of one view per degree 25 − 27 .
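As a rough illustration of these view-count requirements, the following sketch compares the stereo minimum of two views per inter-pupillary distance with the one-view-per-degree density needed for smooth parallax. The viewing distance, inter-pupillary distance, and field of view are illustrative assumptions, not values from the text.

```python
import math

def views_for_stereo(fov_deg, viewing_distance_mm=600.0, ipd_mm=65.0):
    """Minimum view count: two views per inter-pupillary distance across the zone."""
    zone_width_mm = 2 * viewing_distance_mm * math.tan(math.radians(fov_deg) / 2)
    return math.ceil(2 * zone_width_mm / ipd_mm)

def views_for_smooth_parallax(fov_deg, views_per_degree=1.0):
    """Smooth motion parallax: on the order of one view per degree."""
    return math.ceil(fov_deg * views_per_degree)

# Hypothetical 60-degree viewing zone observed from 600 mm:
print(views_for_stereo(60))           # stereo minimum
print(views_for_smooth_parallax(60))  # smooth parallax
```

Even this toy calculation shows the jump in view count between basic autostereoscopy and smooth motion parallax.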

In most of the literature, a display that reproduces motion parallax is called a "multiview" or "multi-view" display while a "light-field" display reconstructs 3D images based on the concept of ray-optics and integral imaging 28 − 32 . In a multiview display, the display is designed such that the motion parallax can be reproduced smoothly when a viewer’s position changes. This is considered a multiview-type autostereoscopic display. However, when the display is also capable of reconstructing virtual or real images, it is usually called a light-field display.


When the viewer remains motionless in front of a multiview display, the observed parallax provides an experience similar to that of autostereoscopic displays 33 . However, because of the much larger number of views, a light-field display is not subject to the same limited number of view zones as autostereoscopic systems 34 . Therefore, user experience is much better, and acceptance is more likely.

Considering their more manageable data rate and their advantages over autostereoscopy, multi-view and light-field displays are currently the subject of intense research 35 − 40 . This technology certainly represents the next 3D display platform that will appear in the marketplace, and some specialized applications have already started to emerge 41 .

The Vergence-Accommodation Conflict

The vergence-accommodation conflict is the Achilles heel of all the display systems introduced thus far: stereoscopic, autostereoscopic, multiview, and light-field (with some exceptions for the latter). It occurs when mismatched visual 3D cues are presented to an observer. The images projected by these displays are located at a fixed distance, producing a constant accommodation cue that cannot be adjusted, whereas vergence is provided by the parallax, which can be reproduced and thus may vary within a scene. The disparity between the accommodation and vergence cues creates a conflict in perception, which leads to visual discomfort that is well documented in the literature 42 − 45 .

Light-field displays can reproduce some amount of accommodation when the ray density is sufficiently large. This condition is often referred to as super-multiview 46 , 47 . Accommodation occurs in a light-field display because the image plane can be moved in and out of the display plane. This is achieved by directing the light rays from different sections of the panel toward one voxel region, as shown in Fig. 3a .


Fig. 3   Illustration of the projection of a voxel out of the emission plane by a a light-field display, and b a holographic display.

However, there is some belief that if the view density keeps increasing in a light-field display, the accommodation distance can be extended at will. This belief arises from the extrapolation that light-field displays approximate a wavefront curvature by using line segments. If these segments are sufficiently small, they may become indistinguishable from the true wavefront curvature. Unfortunately, this ray-tracing simplification does not occur because diffraction along the pixel edges takes place, limiting the voxel resolution. Even with a pixel density in the 100s per degree, when an object is projected too far from the plane of the light-field display, it becomes blurry because of the diffraction among pixels. This diffraction effect cannot be avoided and intrinsically reduces the depth resolution and accommodation of light-field displays 48 , 49 .

To eliminate the diffraction phenomena experienced with smaller pixel sizes, strong coherence among pixels is required so that the light-field display becomes indistinguishable from holography.

The difficulty of reproducing accommodation induces visual discomfort because it forces the depth of field of the display to be limited. To reproduce a voxel out of the plane of the display, the light should be focused at that point by the optical system. Without the capability to refocus subpixels at will, the light-field display can only produce a flat wavefront from the emission plane. As presented in Fig. 3a , when a light-field display attempts to reproduce a voxel that is too far away from the emission plane, the voxel invariably becomes blurry.

To address this problem, researchers have developed multiplane light-field displays 50 − 52 . This is possible because the plane of emission can be refocused by optical elements and moved along the view depth. However, this requires some multiplexing to generate different planes in time or space, which increases the bandwidth required by the system. Another aspect that should not be overlooked is that occlusions between different planes are difficult to control when there are many view zones 53 .

Volumetric Displays

Volumetric displays have voxels located in 3D space and are affected by the same occlusion problem as a multi-plane light-field display. For both systems, the occlusions can only be correctly reproduced for one point of view 54 , 55 . Some systems (both volumetric and light-field) use an eye tracking mechanism to re-calculate the occlusions and present the correct image wherever the viewer is located 56 . However, only one correct perspective can be achieved, precluding its application for multiple observers.

In a volumetric display, the occlusion problem occurs because the emission of the voxel is omnidirectional, and there is no absorptive voxel. Nevertheless, volumetric displays have the advantage of being able to reproduce the field depth without resolution loss. They can be somewhat more natural to view when they do not use a screen to display an image. In this case, the image appears floating in thin air, which has a dramatic effect on the viewer’s perception 55 , 57 , 58 .

Volumetric displays also have the disadvantage of not being capable of projecting images outside a limited volume. The image depth is bounded by that volume, and a deep landscape or object that seemingly reaches out of the display cannot be reproduced 54 .

The bit rate of a volumetric display is computed simply by multiplying the resolution of a 2D screen by the third dimension, the refresh rate, and the dynamic range; Fig. 1 reports the resulting data rate for a 4K volumetric display.
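To make the multiplication concrete, here is a hedged back-of-the-envelope evaluation; the depth count, refresh rate, and bit depth are our own illustrative assumptions, not figures taken from Fig. 1.

```python
# Illustrative arithmetic only: the voxel depth count, refresh rate, and bit
# depth below are assumptions, not values from the article.

def volumetric_bit_rate(h_px, v_px, depth_px, refresh_hz, bits_per_voxel):
    """Bit rate = 2D resolution x depth x refresh rate x dynamic range."""
    return h_px * v_px * depth_px * refresh_hz * bits_per_voxel

# A hypothetical 4K-class volumetric display with 1000 depth planes:
rate = volumetric_bit_rate(3840, 2160, 1000, 30, 24)
print(f"{rate / 1e12:.1f} Tbit/s")  # prints 6.0 Tbit/s
```

Even with modest assumptions, the result lands in the terabit-per-second range, which is why L983's point about starting with lower-resolution systems matters in practice.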

However, because volumetric display setups are easily scalable, lower-resolution systems can be readily used to showcase the potential of the technology 59 − 61 .

Since the studies by Leith, Upatnieks, and Denisyuk, static holograms have demonstrated that the technique is capable of reproducing all the cues that the human visual system uses to comprehend the three dimensions 1 , 2 . By using high quality photosensitive materials, it is now possible to copy existing artifacts and display convincing holographic reproductions in full color 62 , 63 . The question that remains is how to do the same with a refreshable display.

There are three fundamental problems to be solved to create a holographic television: the computation of the holographic pattern from the 3D information, the transmission of the data from where it is captured to where it needs to be displayed, and the reproduction of the holographic pattern on the screen to display the 3D image.

Computer-Generated Holograms


The Fourier transform used to compute a hologram yields a real and an imaginary solution. These two components correspond to the amplitude and phase values of the hologram. Most of the time, the element used to display the diffractive pattern (such as a spatial light modulator) can only reproduce one or the other, but not both. This means that the result from a single Fourier transform will have a significant amount of noise when reproducing an image. Other sources of noise in holograms originate from the quantization error of the phase levels, diffraction in the pixel structure, and speckle caused by the random phase 70 .
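A small numerical sketch of this noise source (the test image and sizes are arbitrary): reconstructing from the full complex field is exact, while keeping only the phase, as a phase-only modulator must, leaves a clearly measurable error.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((64, 64))               # arbitrary target amplitude image

field = np.fft.ifft2(target)                # complex hologram: amplitude and phase
phase_only = np.exp(1j * np.angle(field))   # a phase-only SLM keeps phase, drops amplitude

recon_full = np.abs(np.fft.fft2(field))        # exact reconstruction
recon_phase = np.abs(np.fft.fft2(phase_only))  # noisy reconstruction

def nrmse(a, b):
    """Error between images after normalizing each to unit energy."""
    return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

err_full = nrmse(recon_full, target)    # ~ machine precision
err_phase = nrmse(recon_phase, target)  # substantially larger
```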

To reduce the noise and boost the signal, Gerchberg and Saxton developed an iterative algorithm (GS) in 1972 71 . However, the GS algorithm only works for 2D input images and does not accept 3D information. Nevertheless, to obtain some image depth, it is possible to compute individual holograms for different discrete planes 72 − 74 . This solution renders both vergence and accommodation. However, because the holograms for the different image planes have been computed separately from one another, and not as a whole 3D scene, occlusions can only be reproduced for a single perspective. This is the same problem previously mentioned regarding volumetric displays. More recently, some algorithms have been developed to address the occlusion problem in multi-plane hologram computations. Such algorithms have demonstrated the capability to render correct occlusions in a limited view zone 75 − 77 .
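A minimal NumPy sketch of the GS loop, alternating between the image-plane amplitude constraint and the phase-only hologram constraint (the function and variable names are our own, not from the original paper):

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Iterate between image plane and hologram plane, keeping only the phase
    in the hologram plane (phase-only SLM constraint) and imposing the target
    amplitude in the image plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    holo_phase = np.zeros_like(phase)
    for _ in range(n_iter):
        image = target_amp * np.exp(1j * phase)        # impose target amplitude
        holo_phase = np.angle(np.fft.ifft2(image))     # back-propagate, keep phase
        phase = np.angle(np.fft.fft2(np.exp(1j * holo_phase)))  # forward again
    return holo_phase

# Toy 2D target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
holo = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * holo)))
```

After a few tens of iterations the reconstruction concentrates most of its energy inside the square; note that, as the text says, the algorithm accepts only a 2D target.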

For 3D displays, hologram computations directly based on 3D models can be accomplished 78 . Although the detailed description of these algorithms is beyond the scope of this article, it should be noted that these algorithms can be separated into two broad categories: wavefront-based methods, and ray-based methods.

In the ray-based technique, the hologram is calculated from incoherently captured 2D image planes of a 3D scene, relying on the geometric-optics formalism of light propagation. The ray-based methods fall into two distinct categories: holographic stereogram (HS) and multiple viewpoint projection (MVP) 79 − 81 . Because they do not compute the wavefront propagation, HS and MVP techniques are much faster than wavefront-based methods and can render photorealistic images 82 . However, because the full wavefront of the object is not taken into account, ray-based methods can have difficulty rendering some of the 3D optical cues. Moreover, HS techniques experience some limitations in depth of field because the different views are combined incoherently 48 . Another drawback of the MVP approach is the need to capture or render a large number of images with small increments in the camera position; otherwise, the motion parallax can be jumpy, and occlusions are not represented well. In a sense, HS and MVP holograms are hybrids falling between light-field displays and holographic displays.

In the wavefront-based methods, the propagation of the light wave is computed starting from a point light source that is illuminating either a point cloud or a polygon representation of the object. The CGH is calculated by simulating the interference between the wavefront coming from the object and another point light source or reference beam. The advantage of these methods is that they natively consider occlusion and parallax cues, so their rendering is accurate. However, this accuracy comes at the cost of high processing demand, as described previously 48 , 83 , 76 . This processing demand is driven by the fact that each point of the CGH must take into account the interference between each and every beam projected from the portion of the object that is visible from that location in the hologram plane. When the point on the hologram moves, the perspective of the object also moves correspondingly.
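The per-point summation described above can be sketched in a few lines for a paraxial point-cloud model. The wavelength, pixel pitch, and the two object points below are illustrative assumptions; a real implementation would also add the occlusion and visibility tests discussed in the text.

```python
import numpy as np

# Hedged sketch of wavefront-based CGH from a point cloud (paraxial Fresnel model).
wavelength = 532e-9                 # assumed green laser, metres
pitch = 8e-6                        # assumed SLM pixel pitch, metres
n = 256                             # hologram is n x n pixels
k = 2 * np.pi / wavelength

ys, xs = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
x = (xs - n / 2) * pitch
y = (ys - n / 2) * pitch

# Object points: (x, y, z, amplitude), z measured from the hologram plane.
points = [(0.0, 0.0, 0.10, 1.0), (5e-4, -5e-4, 0.12, 0.8)]

field = np.zeros((n, n), dtype=complex)
for px, py, pz, amp in points:
    r = pz + ((x - px) ** 2 + (y - py) ** 2) / (2 * pz)  # paraxial path length
    field += amp * np.exp(1j * k * r)                    # spherical wavelet

# Interfere with an on-axis plane-wave reference and keep the intensity,
# as a classical amplitude hologram would record it.
hologram = np.abs(field + 1.0) ** 2
```

The cost is visible in the structure of the loop: every hologram pixel accumulates a contribution from every visible object point, which is exactly the processing demand the text describes.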

Some of the computations for generating the CGH can be stored upfront in a look-up table 84 . This information can be retrieved much faster than if it is computed multiple times. There is a fine balance to achieve between large memory storage and computational complexity. To accomplish this balance, different types of algorithms have taken advantage of look-up tables at different stages of CGH computations 85 − 87 .
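The look-up-table trade-off can be sketched with a depth-quantized cache of Fresnel zone patterns: each depth plane's kernel is computed once and reused, via a lateral shift, for every point at that depth. All names and parameters here are our own illustration, not a published LUT scheme.

```python
import numpy as np
from functools import lru_cache

N, PITCH, WAVELENGTH = 128, 8e-6, 532e-9   # assumed hologram size, pitch, wavelength

@lru_cache(maxsize=32)
def zone_pattern(z_mm):
    """Complex Fresnel kernel for a point at depth z (quantized to whole mm)."""
    z = z_mm * 1e-3
    c = np.arange(N) - N / 2
    x, y = np.meshgrid(c * PITCH, c * PITCH)
    return np.exp(1j * np.pi * (x ** 2 + y ** 2) / (WAVELENGTH * z))

# Points sharing a depth reuse the cached kernel; shifting it places the point.
h = np.zeros((N, N), complex)
for (ix, iy, z_mm) in [(10, -5, 100), (-20, 8, 100), (0, 0, 150)]:
    h += np.roll(zone_pattern(z_mm), (iy, ix), axis=(0, 1))
```

The cache size is the memory/computation dial the text mentions: more cached depth planes mean more storage but fewer kernel evaluations.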

To handle the massive amount of data required to compute 3D holograms in a reasonable amount of time, computations are sometimes performed on specially built hardware accelerators 88 . This hardware is dedicated to the calculation of the Fresnel phase pattern 86 , 88 − 91 .

Despite recent advances in the field of computer holography, judging from the published images available, the realism of the projected 3D images computed with wavefront-based algorithms still requires significant improvement to become convincing (see Fig. 4 ) 75 , 92 . This is by no means a criticism of the notable achievements accomplished in that discipline; it is rather a testament to how difficult it is to reproduce a fully detailed holographic image.


Fig. 4   Examples of optical reconstructions of wavefront-based computer generated holograms from recently published articles.

In many cases, the holographic image computed using a wavefront-based method lacks texture (see Fig. 4(2) ). This should not be surprising, considering that texture rendering involves the finest details of the material surface, a level of detail that cannot yet be achieved by computer. A parallel can be drawn with the more familiar world of 2D animation: early movies depicted blocky characters lacking luster, whereas modern production teams can use any level of photorealism that suits the artistic needs of the story (see for example, 93 ).

Techniques such as machine learning, neural networks, and artificial intelligence have recently been applied with great success to the computation of holograms 94 − 98 . In the most general sense, the algorithms associated with these techniques work by training a computing network on input images (ground truth) while a camera observes the holograms generated by an optical system. The parameters of the network are optimized to minimize a loss function that forces the holograms to converge toward the original images. Once training is complete, the parameters are frozen, and the algorithm is capable of calculating the hologram of any input image. These techniques are particularly effective at performing fast computations once the training period is complete and at solving the problems associated with texture rendering 99 , 96 . For the most part, these algorithms currently work with 2D images, but they are expected to soon be extended to 3D.

Transmission of Holograms

The image captured for a holographic display only needs to satisfy the minimum requirements of the human eye; unlike a static hologram, it does not have to be recorded with coherent illumination or resolve nanometric interference fringes.

To match the accommodation of the human eye, the 3D information to be reproduced can have a depth resolution of only a few centimeters, instead of the nanometers achieved with holography 10 . Such an image can even be compressed into a 3D mesh model overlaid with a texture pattern, like those used in modern video games. A game engine handles this information, together with the location of a virtual camera, to display a 2D image; in the same way, it could drive a 3D image if the display required it. As a case in point, video games can be adapted to, and played on, stereoscopic virtual reality headsets.


Because the holographic pattern is much larger than the 3D scene it encodes, it may be much more efficient to transmit the 3D image/model rather than the holographic pattern 100 , 101 . In this case, the computation of the hologram should be performed at the client (receiver) location. This model is named the "thick client" because the computation is performed locally to avoid overwhelming the long-distance transmission medium. This means that the local site requires significant computational power to support this decoding (see the above section regarding Computer-Generated Holograms).


A model of the transmission and reproduction of holographic images is presented in Fig. 5 , along with the different orders of magnitude of the computation and data rate needed at each stage.


Fig. 5   Schematic of a holographic television transmission process. Comparison between thick client and lean client architectures.


There is no clear advantage between the thick and lean client models, mainly because the need for the transmission of 3D holographic images and movies does not yet exist. It should be noted, however, that compression algorithms for hologram storage and transmission are not as effective as algorithms for natural images, such as JPEG and MPEG. This is because the resolution of a diffraction pattern cannot be decreased without destroying the light interference it is supposed to generate, and therefore the holographic image. Diffraction patterns need to be compressed using a near-lossless algorithm 104 − 107 .

Another important point regarding the transmission of holograms is that the computation of the interference pattern is specific to the display architecture. For the proper reproduction of a hologram, the computation of the interference pattern must consider whether the display is operating in full parallax versus horizontal parallax only, what are the exact illumination wavelengths, and what is the pixel density (among other parameters). Likewise, legacy displays that will be operating at the same time, such as 2D televisions, stereoscopic, auto-stereoscopic, and eventually volumetric displays, must be considered. To ensure compatibility among all devices, a lean client configuration would have to send the various display parameters to the server and receive the pre-calculated data in return. In the case of a thick client architecture, the server can invariably send the same model to the client, which is then further transformed locally. From this perspective, the thick client is simply another type of display that can be integrated into a lean client network, making these two concepts complementary rather than antagonistic.

Holographic Display Setups

Spatio-Temporal Product


The enormous pixel counts implied by the spatio-temporal product (STP) of a full-size display again point to the difficulty of the task at hand. Nevertheless, researchers have demonstrated that tiling multiple modulators to reach these numbers actually works, although on a smaller scale 108 − 111 .


Another possible approach to reduce the STP of a full holographic system is to limit the eye box in which the hologram is projected. Using this technique, the light is directed toward the viewer using an eye-tracking system or a head-mounted display (AR/VR headset) 116 − 120 . Knowing the location of the viewer dramatically reduces the calculation of the hologram because only a limited number of viewpoints need to be taken into account. Likewise, if the viewer is within a predefined region, the angular extent of the hologram (its diffraction angle) can be narrowed, and the number of diffractive pixels decreases. The advantage of this technique is that it does not sacrifice image quality or 3D cues.

Spatial Light Modulators and Phase Array Devices


To increase the STP of SLMs, it is possible to move away from the LCoS technology and use microelectromechanical systems (MEMS) instead. MEMS are composed of micro-mirrors that can be tilted or moved to interact with light. They can have the same number of pixels and approximately the same pixel pitch as LCoS. However, their refresh rate can be orders of magnitude higher 122 , 123 . This increases their STP by the same factor, reducing the number of units needed to create a holographic display 124 − 126 .

Early MEMS examples include micro-ribbons developed by Sony that were used to construct a diffractive light modulator (or grating light valve). This technology boasted an impressive switching speed of 20 ns 127 , 128 . However, micro-ribbons are one-dimensional, which requires another scanning mechanism to form a 2D image.

At about the same time, Texas Instruments experimented with a phase modulator that moved pixels up and down to modulate the phase 129 . Unfortunately, this MEMS modulator was not commercialized. Instead, Texas Instruments invested in one of the most popular MEMS, the digital light processor (DLP) 130 .


Most recently, Texas Instruments has revived its earlier attempt at a phase modulator and is developing a piston MEMS capable of achieving a much higher efficiency 132 − 134 . This phase light modulator (PLM) should be extremely useful in the development of holographic 3D display systems. If the PLM is capable of operating at 20 kHz, as some DLPs can, it will increase the STP of this MEMS by a factor of 100 compared to typical LCoS SLMs.
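As a hedged illustration of that factor, the comparison below takes the spatio-temporal product as pixel count times refresh rate; the panel resolution and the 200 Hz LCoS figure are our own assumptions, chosen only to reproduce the roughly 100x ratio quoted in the text.

```python
# Illustrative STP comparison; resolutions and refresh rates are assumptions.

def stp_pixels_per_s(n_pixels, refresh_hz):
    """Spatio-temporal product taken as pixel count x refresh rate."""
    return n_pixels * refresh_hz

lcos = stp_pixels_per_s(3840 * 2160, 200)     # assumed fast LCoS panel, ~200 Hz
plm = stp_pixels_per_s(3840 * 2160, 20_000)   # assumed piston MEMS PLM at 20 kHz
print(plm / lcos)  # prints 100.0
```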

Another approach that can increase the intensity of a hologram using low-efficiency devices is to use a refreshable holographic material. Refreshable materials, such as photorefractive polymers, can record the wavefront generated by the SLM and, owing to their high diffraction efficiency, amplify the intensity of the hologram. Video-rate holographic projection, as well as large holographic displays, have been demonstrated using this type of material 34 , 135 , 136 . However, it should be noted that these materials currently rely on an electronically addressable device (SLM, DLP, or other) to display the initial holographic pattern.

Considering that the STP is the key to unlocking a practical holographic 3D display, the approach taken by a group of researchers at MIT and BYU (initiated before the DLP was available) was to start with the device that had the largest STP at the time, the acousto-optic modulator (AOM) 137 , 138 . In acousto-optic materials, the propagation of a sound wave creates a density modulation that diffracts light. If the sound wave is correctly programmed, the diffracted light can form a holographic image. In its waveguide format, the acousto-optic modulator allows for a longer interaction length between the light and the acoustic wave, which further increases the STP 139 − 141 . A single leaky acousto-optic waveguide can have a 50 MHz usable bandwidth per color, which corresponds to 1.67 Mpixels at 30 Hz. By fabricating multiple waveguide channels in a single crystal, these numbers can be increased by a factor of several thousand to reach an STP of 50 Gpixels/s. Although the initial demonstrations using AOMs provided horizontal parallax only, it is theoretically feasible to feed the different waveguides using a single laser source and control the phase such that horizontal and vertical coherent beam steering can be achieved 142 , 143 .
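The waveguide numbers quoted above are easy to verify in a couple of lines; the thousand-channel count is the assumption needed to reach the quoted aggregate figure.

```python
# Quick check of the acousto-optic waveguide numbers quoted in the text:
# 50 MHz of usable bandwidth per colour, refreshed at 30 Hz.
bandwidth_hz = 50e6
refresh_hz = 30
pixels_per_frame = bandwidth_hz / refresh_hz   # ~1.67 Mpixels per channel
channels = 1000                                # assumed channel count
stp = bandwidth_hz * channels                  # aggregate pixels per second
print(f"{pixels_per_frame / 1e6:.2f} Mpixels, {stp / 1e9:.0f} Gpixels/s")
```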

Another high-STP device is the phased array photonic integrated circuit (PIC) 144 , 145 . In this approach, nanophotonic phased arrays are built by recording branching waveguides on a photonic wafer (see Fig. 6 ). The waveguides are organized such that they distribute light projected from a single source over a 2-dimensional grid. The phase at the end of each waveguide can be adjusted by an electro-optic or thermo-optic phase regulator. The light is extracted orthogonally from the wafer by a grating outcoupler terminating each waveguide. Analogous to phased array radar, the grating outcoupler is also called an optical antenna.


Fig. 6   Schematic of a photonic integrated circuit optical phased array. A single coherent laser source is directed inside a waveguide, from which light is extracted by multiple grating couplers (acting as light antennae). The phase at each antenna can be tuned using a phase modulator to create a hologram.


The preferred material for PICs is silicon, which does not transmit visible light; for display purposes, other materials with better transmission at visible wavelengths must be used. Silicon nitride and silica platforms have already been explored for optical phased arrays in the literature, but these remain experimental 145 , 146 , 148 , 149 .

Compared to MEMS and LCoS, which have a fill factor above 90%, the fill factor of the phased array is fairly low, at approximately 25%. The fill factor affects the diffraction efficiency owing to the presence of side-lobe emissions that cannot be canceled if the antennae are too far apart. This separation is due to the limited turn radius of the waveguide and the required separation between waveguide elements to avoid cross-coupling 150 . Both factors, turn radius and waveguide separation, are dictated by the difference in the index of refraction between the inside and outside of the waveguide. A larger index difference would allow for a larger fill factor.

The phase control of the pixels is better in LCoS than in both MEMS and phased arrays. The LCoS phase is analog and proportional to the applied voltage and is therefore uniform across pixels. In contrast, current MEMS micro-mirror levels are discrete and limited to 4 bits, and exhibit some nonlinearity 134 . For phased arrays, the phase control is analog and accurate, but has to be characterized for each element individually owing to manufacturing inconsistencies 145 .
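The cost of discrete phase levels mentioned here can be estimated with a short Monte-Carlo sketch (our own construction, not from the article): quantize random phases to N levels and measure how much of the field amplitude survives, which matches the classical sinc²(1/N) prediction.

```python
import numpy as np

def quantization_efficiency(n_levels, n_samples=100_000, seed=0):
    """Diffraction-efficiency estimate for an n-level phase hologram."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, n_samples)
    step = 2 * np.pi / n_levels
    error = np.round(phase / step) * step - phase   # error lies in [-step/2, step/2]
    return np.abs(np.mean(np.exp(1j * error))) ** 2

eta_16 = quantization_efficiency(16)   # 4-bit MEMS-style phase: close to 0.99
eta_2 = quantization_efficiency(2)     # binary phase: close to 0.41
```

This is why 4-bit MEMS phase control, despite being discrete, costs only about one percent of efficiency relative to the analog control of LCoS.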

In summary, none of the current SLM technologies is sufficiently mature to meet all the criteria required for a large-size, high-definition holographic 3D display. This should not overshadow the considerable progress that has been made over the past years, bringing the end goal ever closer.

Holography is still considered the ultimate technology that will enable rendering of all the optical cues needed for the human visual system to see projected images in 3D. All other technologies, such as (auto)stereoscopy, light-field, or volumetric displays, suffer from trade-offs that limit 3D rendering. Nonetheless, these technologies will likely prove to be stepping stones toward better visual comfort until holographic displays are achieved.

Several of the obstacles that made holographic television impossible only a few years ago have already been removed. The fast computation of 3D holograms that properly control occlusions and parallax is now within reach, as is a solution to the problem of data transmission. The exact architecture of the network (thick or lean client) is unclear, but higher compression rates and ever-faster telecommunication infrastructures supporting the Internet and mobile communications make streaming the data for a holographic television feasible, if not yet accessible.

However, some challenges remain to be solved. The two main obstacles at the time this manuscript was written are the computation of photorealistic 3D holograms in a reasonable amount of time, and a suitable electronic device for the reproduction of large holographic 3D images with high resolution.

For the pioneers of holography, Gabor, Leith, Upatnieks, and Denisyuk, the challenge of controlling diffractive pixels could only have been a physicist’s dream. This dream has now been transformed into an engineering nightmare: devices that can project holographic images exist, but scaling the format to project large images is surprisingly difficult. Computational time becomes prohibitive, and controlling trillions of pixels currently requires an extremely large number of graphics cards.

Nonetheless, the difficulty of projecting large holograms will soon no longer be an engineering problem, but rather an economic one. Initially, the hardware needed to build a holographic television will be too expensive to be successfully commercialized as a television set. Once the price becomes affordable, the holographic television will face the same hurdle that every new medium runs into: the availability of correctly formatted content. There is no advantage in owning a holographic television if creators are still producing 2D movies exclusively. For these reasons, it is likely that the market will move incrementally, starting with HPO multiview, expanding to light-field and full parallax, and finally reaching holography.

To achieve this dream of the perfect display, we should remember that the bit rate progression reported in Fig. 1 is not an immovable fact of nature; it is a testament to human ingenuity and hard work. This exponential growth can be influenced in one way or another by our own actions. Ultimately, where there is a will, there is a way, and the desire for a truly immersive visual experience is ingrained in human nature. It is this desire that will make the holographic television a reality sooner rather than later.


  • Fig. 1 Stairway to holography: approximate bit rate magnitude of various telecommunication devices according to their year of introduction.
  • Fig. 2 Examples of some of the 2D visual cues that affect depth perception.
  • Fig. 3 Illustration of the projection of a voxel out of the emission plane by a a light-field display, and b a holographic display.
  • Fig. 4 Examples of optical reconstructions of wavefront-based computer generated holograms from recently published articles. (1) Reproduced from [ 75 ]. Numerical and optical reconstruction results when focusing on the a, c head and b, d tail of the dragon. (2) From [ 93 ] presenting rendering images and optical reconstruction images of different surfaces a, d rough surface b, e smooth surface c, f rough surface with texture.
  • Fig. 5 Schematic of a holographic television transmission process. Comparison between thick client and lean client architectures.
  • Fig. 6 Schematic of a photonic integrated circuit optical phased array. A single coherent laser source is directed inside a waveguide, from which light is extracted by multiple grating couplers (acting as light antennae). The phase at each antenna can be tuned using a phase modulator to create a hologram.
  • Open access
  • Published: 04 March 2020

Holographic capture and projection system of real object based on tunable zoom lens

  • Di Wang 1 , 2 ,
  • Chao Liu 1 , 2 ,
  • Chuan Shen 3 ,
  • Yan Xing 1 &
  • Qiong-Hua Wang 1 , 2

PhotoniX volume  1 , Article number:  6 ( 2020 ) Cite this article


In this paper, we propose a holographic capture and projection system for real objects based on tunable zoom lenses. Unlike the traditional holographic system, a liquid lens-based zoom camera and a digital conical lens serve as the key parts for holographic capture and projection, respectively. The zoom camera, built by combining liquid lenses and solid lenses, has the advantages of fast response and light weight. By electrically controlling the curvature of the liquid-liquid interface, the focal length of the zoom camera can be changed easily. As another tunable zoom element, the digital conical lens has a large focal depth, an optical property well suited to adaptive projection in the holographic system, especially for multilayer imaging. By loading the phase of the conical lens onto the spatial light modulator, the reconstructed image can be projected with a large depth. With the proposed system, holographic zoom capture and color reproduction of real objects can be achieved with a simple structure. Experimental results verify the feasibility of the proposed system, which is expected to find application in micro-projection and three-dimensional display technology.

Introduction

With the rapid development of the information age, the demand for information display is growing steadily. Next-generation display technologies such as virtual reality, micro-projection and three-dimensional (3D) display are appearing in a growing range of applications [1, 2, 3]. Traditional micro-projectors are based on amplitude modulation and usually use multiple solid lenses to form a projection lens [4, 5]. In contrast, holographic projectors offer higher light efficiency and can produce true 3D images by encoding the corresponding grayscale images on a spatial light modulator (SLM) [6, 7]. Micro-projection based on holography has therefore attracted much attention. Holographic capture and projection of real objects in real time has important applications in military, medical and other fields.

Although holographic projection technology has made progress, several issues remain to be solved:

It is difficult to acquire the image source in real time. For the capture process, achieving a high-quality holographic projection requires reproducing real or virtual objects with full color and at the desired size. During image acquisition, virtual objects can be obtained by modeling, while real objects are usually captured by a CCD camera [8]. The traditional zoom camera changes the distances between its solid lenses [9] so that detailed information in the image can be captured, but its bulk makes it unsuitable for micro-projectors. To acquire a 3D object, multiple cameras are required [10, 11]. Current cameras therefore struggle to meet the needs of real-time acquisition across different scenes [12].

The size of the reproduced image is relatively small. In holographic reconstruction, a solid lens is usually used to implement the Fourier transform [13]. When the position of the receiving screen changes, the image becomes unclear, so the position and size of the reproduced image must be adjusted by changing the position or focal length of the solid lens [14, 15]. Some researchers have used scaled Fresnel diffraction to realize zoomable holographic projection [16]. A liquid crystal lens has also been used in a holographic system to adjust the size of the reproduction [17]. However, the liquid crystal lens is small and suffers from aberrations, which makes color reproduction difficult.

Chromatic aberration in the system also degrades the holographic projection. Color reproduction based on time-division multiplexing loads red, green and blue holograms in turn onto the same SLM [18, 19]; this requires the switching of the light source and the hologram to be strictly synchronized. Color reproduction based on spatial multiplexing divides one SLM into three parts or uses three SLMs. When three color beams illuminate the corresponding regions, a color reconstructed image is seen provided the components coincide accurately in space [20, 21]. Some researchers have proposed frequency-shift and image-shift methods to achieve perfect coincidence of the color images [22]. However, that system uses a 4f lens arrangement and the size of the reproduction cannot be changed. At present, achieving color zoom projection without chromatic aberration usually requires a complex system. In addition, existing holographic zoom systems reproduce virtual objects; if a real scene must also be acquired, the system becomes even larger.

Adaptive liquid lenses have been studied in recent years for their unique advantages of large focal-length tuning range, fast response and light weight [23, 24, 25, 26]. In 2014, two liquid lenses were combined with a digital lens to realize a holographic projection system with an optical zoom function [27]. In 2018, an optical see-through head-mounted display using a liquid lens was proposed [28]. Although the liquid lens can adaptively change the size and position of the reproduced image, the depth of the reproduced image is fixed: when the receiving screen moves away from the focal plane, the reconstructed image becomes blurred.

To solve the above problems, in this paper we propose a holographic capture and projection system for real objects based on adaptive lenses. Unlike the traditional holographic system, a tunable zoom camera built from liquid lenses captures the real objects. The liquid lens is electrically driven, so the zoom camera responds quickly. Real objects can be captured and their details optically magnified by adjusting the focal length of the zoom camera. Moreover, in the holographic reproduction we use a digital conical lens instead of other lenses. By loading the phase of the conical lens on the SLM and adjusting the focal length for each color, color holographic projection of the real object can be realized without chromatic aberration. Compared with previous systems that use a liquid lens or digital lens for reconstruction [29], the size and position of the reconstructed image can be changed easily without any additional optical components. The structure of the proposed system is greatly simplified, and the reconstructed image can be projected with a large depth.

Structure and operating principle

Structure of the proposed system

Figure 1 is the schematic diagram of the proposed system. It consists of a zoom camera, three lasers, three filters, three solid lenses, a mirror, three beam splitters (BSs), an SLM, a computer and a receiving screen. During image acquisition, the zoom camera captures the image of the real object and, being connected to the computer, transfers the object information to it; the hologram of the object is then generated on the computer. The lasers, filters and solid lenses generate the collimated light, and the mirror and BSs steer it so that it illuminates the SLM. When the hologram is loaded on the SLM, the diffracted light is reflected by the BS, and the reconstructed image appears on the receiving screen.

figure 1

Schematic diagram of the proposed system

Principle of the zoom camera

In the image acquisition part, a zoom camera based on adaptive liquid lenses is designed in Zemax; its structure is shown in Fig. 2. The zoom camera consists of two electrowetting liquid lenses and four solid lenses. The two liquid lenses not only act as the zoom element but also keep the image plane fixed during the zoom process. Both are actuated by electrowetting and provide variable focal length: the focal length changes as the liquid-liquid interface is tuned by the electrowetting effect. According to the Young-Lippmann equation, the relationship between the contact angle and the applied voltage U can be described as follows:

where θ0 is the initial contact angle without external voltage, θY is the contact angle when the voltage is applied to the liquid lens, U is the external voltage applied to the ITO electrode, d is the thickness of the dielectric insulator, ε is the dielectric constant of the dielectric insulator, and γ12 is the surface tension between the two liquids filling the liquid lens chamber.
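The Young-Lippmann relation with the symbols defined above can be sketched numerically as follows; the parameter values used are illustrative, not those of the lens in the paper:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def contact_angle_deg(theta0_deg, U, d, eps_r, gamma12):
    """Young-Lippmann relation for an electrowetting liquid lens:
    cos(theta_Y) = cos(theta_0) + eps * U**2 / (2 * d * gamma12),
    where eps = eps_r * EPS0 is the permittivity of the dielectric insulator.

    theta0_deg : initial contact angle without voltage (degrees)
    U          : external voltage on the ITO electrode (V)
    d          : dielectric insulator thickness (m)
    eps_r      : relative dielectric constant of the insulator
    gamma12    : liquid-liquid interfacial tension (N/m)
    """
    cos_theta_y = (math.cos(math.radians(theta0_deg))
                   + eps_r * EPS0 * U ** 2 / (2 * d * gamma12))
    # clamp to the physical range (contact-angle saturation at high voltage)
    cos_theta_y = min(cos_theta_y, 1.0)
    return math.degrees(math.acos(cos_theta_y))
```

Increasing U lowers the contact angle, which changes the curvature of the liquid-liquid interface and hence the focal length of the lens.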

figure 2

Structure of the zoom camera. a Zoom state when effective focal length is F 1 ; b zoom state when effective focal length is F 2

The effective focal length f of the two variable liquid lenses can be expressed by the following equations:

where n1 and n2 are the refractive indices of the two liquids filled in the liquid lens, l is the distance between the two liquid lenses, and r1, r2 are the radii of the liquid-liquid interfaces under the voltages U1 and U2, as shown in Fig. 2a. When the two liquid lenses are driven with different voltages U3 and U4, the effective focal length of the system changes from F1 to F2, as shown in Fig. 2b. The proposed zoom camera can thus vary its optical power while keeping the back focal distance L fixed.
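The paper's exact focal-length equations are not reproduced above, so as a hedged illustration the combined power can be sketched with the standard thin-lens (Gullstrand) combination formula, treating each liquid lens as a single refracting liquid-liquid interface of power (n1 − n2)/r. This is an approximation, not the authors' exact expression:

```python
def interface_power(n1, n2, r):
    """Thin-lens power (1/m) of one spherical liquid-liquid interface of
    radius r between liquids of refractive indices n1 and n2."""
    return (n1 - n2) / r

def effective_focal_length(r1, r2, n1, n2, l):
    """Gullstrand's equation for two thin lenses separated by distance l:
    1/f = P1 + P2 - l * P1 * P2."""
    p1 = interface_power(n1, n2, r1)
    p2 = interface_power(n1, n2, r2)
    return 1.0 / (p1 + p2 - l * p1 * p2)
```

Re-optimizing the pair (r1, r2) at each zoom setting is what allows the camera to change its effective focal length while holding the image plane fixed.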

Principle of the holographic reconstruction

In the holographic reconstruction part, the three colors of collimated light each illuminate one third of the SLM area. When the collimated light illuminates the SLM loaded with the hologram, the reconstructed image is displayed on the receiving screen after modulation by the SLM. In the traditional Fourier holographic system, a solid lens (with focal length f0) performs the Fourier transform and the receiving screen is placed at the focal plane of that lens. The light field on the receiving screen Uf0(u, v) can then be expressed as follows:

where λ is the wavelength of the collimated light source, k = 2π/λ, and U0(x, y) is the hologram distribution. Different wavelengths correspond to different focal lengths (fr, fg, fb for red, green and blue, respectively), so the positions and sizes of the three color reconstructed images differ, as shown in Fig. 3.
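The wavelength dependence described above can be made concrete with a discrete model: up to constant phase factors, the field at the back focal plane of the lens is the 2-D Fourier transform of the SLM field, sampled at spacing λf0/(N·pitch). A minimal sketch (the 6.4 μm pitch happens to match the SLM used in the experiments; the other numbers are illustrative):

```python
import numpy as np

def focal_plane_field(U0, wavelength, f0, pitch):
    """Discrete lens Fourier transform: the field in the back focal plane
    of a lens of focal length f0 is, up to constant phase factors, the 2-D
    Fourier transform of the N x N SLM field U0 (pixel pitch `pitch`).
    Returns the centered field and the focal-plane sample spacing
    du = wavelength * f0 / (N * pitch); du grows with wavelength, which is
    why the three color reconstructions differ in size and position."""
    N = U0.shape[0]
    # ifftshift/fftshift keep the optical axis at the array center
    Uf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U0)))
    du = wavelength * f0 / (N * pitch)
    return Uf, du
```

For the same hologram and lens, the red (671 nm) reconstruction is sampled on a coarser grid than the blue (473 nm) one, i.e. it comes out larger, exactly the mismatch sketched in Fig. 3.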

figure 3

Principle of the chromatic aberration

In the proposed system, a digital conical lens is used to replace the traditional solid lens. Figure 4 shows the principle of the conical lens. The phase of the conical lens φ(dc) with focal length f0 can be expressed as follows:

where dc is the radial coordinate, z is the focal depth, and r is the radius of the conical lens. High lateral resolution requires a large numerical aperture, while a long focal depth requires a small one; a conical lens combines both advantages. When the conical lens is used in the holographic reconstruction, a clear image can therefore be seen throughout the focal depth z. In the proposed system, the SLM records the phase information of the conical lens, calculated according to Eq. (6). The final phase φ′ loaded on the SLM can be expressed as follows:

where φ o is the phase information of the recorded object. In this way, we can use the SLM to realize the function of the digital conical lens.
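Since the paper's exact expression for φ(dc) is not reproduced here, the sketch below uses one common model of a digital conical lens, a lens whose focal length grows linearly with radius from f0 to f0 + z, and then combines it with an object phase as described above. Treat the radial phase profile as an assumption, not the authors' Eq. (6):

```python
import numpy as np

def conical_lens_phase(N, pitch, wavelength, f0, z):
    """Phase map of a digital conical lens sampled on an N x N SLM.
    Assumed model: the local focal length rises linearly with the radial
    coordinate d_c, from f0 at the center to f0 + z at the edge, spreading
    the focus over a depth z.  The lens radius r is taken as the largest
    sampled d_c."""
    x = (np.arange(N) - N / 2) * pitch
    X, Y = np.meshgrid(x, x)
    d_c = np.hypot(X, Y)
    r = d_c.max()
    f_local = f0 + z * d_c / r            # radially varying focal length
    phase = -np.pi * d_c ** 2 / (wavelength * f_local)
    return np.mod(phase, 2 * np.pi)

def slm_phase(phase_object, phase_conical):
    """Final phase loaded on the SLM: the object hologram phase plus the
    conical lens phase, wrapped to [0, 2*pi)."""
    return np.mod(phase_object + phase_conical, 2 * np.pi)
```

Loading `slm_phase(...)` on the SLM replaces the physical Fourier lens entirely, which is what removes the extra optical components from the projection arm.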

figure 4

Principle of the conical lens

To keep the three color reconstructed images at the same position, the focal depth z of the conical lens is used to compensate for the differences between the focal lengths of the three color images. According to Fig. 3, the blue reconstructed image has the shortest reconstruction distance, so the focal depth z of the conical lens must satisfy z ≥ fr − fb. By setting the corresponding parameters, the digital conical lens is generated and added to the holograms of the three colors. In this way, the three color reconstructed images coincide at the same position without axial chromatic aberration. Moreover, by changing the focal length of the digital conical lens, the size and position of the image can be changed easily.
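The compensation condition z ≥ fr − fb amounts to checking that every color's reconstruction distance falls inside the conical lens's clear-imaging range; a small helper, with hypothetical focal lengths used purely for illustration:

```python
def covers_all_colors(f0, z, color_focal_lengths):
    """True when every color's reconstruction distance lies inside the
    conical lens's clear-imaging range [f0, f0 + z].  Choosing f0 at the
    blue (shortest) distance reduces this to the paper's condition
    z >= f_r - f_b."""
    return all(f0 <= f <= f0 + z for f in color_focal_lengths)
```

For hypothetical distances fb = 450 mm, fg = 470 mm, fr = 500 mm, a focal depth of 50 mm covers all three colors while 30 mm does not.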

Simulation, experiments and results

To verify the feasibility of the proposed system, an optical experiment is set up. The wavelengths of the lasers used in the experiment are 671 nm, 532 nm and 473 nm. The lasers are manufactured by Changchun New Industries Optoelectronics Technology Co., Ltd. The filters and solid lenses are manufactured by Daheng New Epoch Technology Inc.; the focal length of the solid lens is 300 mm. The SLM is manufactured by Xi'an CAS Microstar Optoelectronic Technology Co., Ltd., with a refresh rate of 60 Hz, a resolution of 1920 × 1080 and a pixel size of 6.4 μm. We chose a commercial liquid lens, the Arctic 39N0 produced by Corning, US. The CCD size is 1/2.5″ and its pixel size is 2.2 μm.

Simulation and experiments of the tunable zoom camera

In the Zemax simulation, the focal length of the proposed zoom camera can be tuned from ~26.08 mm to ~32.26 mm. During the zoom process, the f/#, Airy disk and spot diameter vary with the focal length, as shown in Table 1. As can be seen from Table 1, the spot diameters remain smaller than the Airy disk throughout the focal-length tuning process, indicating that the zoom camera has reasonably high image quality. We also simulate the point spread function (PSF) and modulation transfer function (MTF) at effective focal lengths of 26.08 mm, 27.90 mm and 32.26 mm, as shown in Fig. 5. From Fig. 5 we can see that the spatial frequency reaches 60 lp/mm at MTF > 0.5 in the center field during zooming, which means the camera has high resolution there. In the outer fields, however, the zoom camera shows relatively large astigmatism, especially at the focal length of 32.26 mm, as shown in Fig. 5c. More liquid lenses could be added to the zoom camera to balance this aberration.

figure 5

PSF and MTF of the zoom camera with different effective focal lengths. a F  = 26.08 mm; b F  = 27.90 mm; c F  = 32.26 mm

We then fabricate the zoom camera to evaluate its optical performance. It consists of a liquid lens cavity, a back lens group with two solid lenses, a front lens group with two solid lenses, two commercial liquid lenses and a CCD circuit board, as shown in Fig. 6.

figure 6

Zoom camera fabrication

In the first experiment, we place two panda dolls 100 mm from the zoom camera and optimize the radii of the two liquid lenses in Zemax to obtain solutions for six focal lengths, as shown in Fig. 7a. The applied voltages are then derived from these optimized solutions. When the optimized voltages are applied to the two electrowetting liquid lenses, we obtain the magnified images shown in Figs. 7b-f. During the driving process, the effective focal length varies from 26.08 mm to 32.26 mm. A dynamic response video of the image capture process is included in Additional file 1: Media S1. The curve radius of the two liquid lenses during focal-length tuning is also measured, as shown in Fig. 8.

figure 7

Captured images of two panda dolls with different effective focal lengths. a F  = 26.08 mm; b F  = 27.31 mm; c F  = 27.90 mm; d F  = 29.97 mm; e F  = 31.87 mm; f F  = 32.26 mm

figure 8

Curve radius of the two liquid lenses during the focal length varying

Experiments of the holographic reconstruction

In the second experiment, once the object is captured, the scene information is separated into red, green and blue components. The holograms of the recorded object for the three colors are generated by the iterative Fourier transform algorithm, and the phase information of the digital conical lens is generated according to Eq. (5). The final hologram is obtained by adding the phase of the digital conical lens to that of the recorded object, as shown in Fig. 9.
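The paper names the iterative Fourier transform algorithm but does not list it; below is a minimal Gerchberg-Saxton-style sketch for a phase-only SLM. Details such as the random initialization and the fixed iteration count are assumptions, not the authors' exact variant:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Iterative Fourier transform (Gerchberg-Saxton) phase retrieval:
    find an SLM phase whose far-field intensity approximates the target
    amplitude pattern."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # forward: unit-amplitude phase-only SLM field -> reconstruction plane
        field = np.fft.fft2(np.exp(1j * phase))
        # enforce the target amplitude, keep the reconstructed phase
        field = target_amplitude * np.exp(1j * np.angle(field))
        # back-propagate and keep only the phase (phase-only SLM constraint)
        phase = np.angle(np.fft.ifft2(field))
    return np.mod(phase, 2 * np.pi)
```

Running one retrieval per color channel, then adding the conical lens phase, yields the three final holograms loaded on the SLM.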

figure 9

Process of the hologram

To verify the advantage of the digital conical lens, a solid lens and a digital lens with the same focal length are used for experimental comparison. The focal lengths of the solid lens, digital lens and digital conical lens are all set to 500 mm, and the focal depth of the digital conical lens is set to 200 mm. The reconstructed image can then be seen on the receiving screen, as shown in Fig. 10. When the receiving screen is placed at the focal plane of the corresponding lens, the results for the solid lens, digital lens and digital conical lens are shown in Figs. 10a-c, respectively. When the receiving screen moves backward from the focal plane, the results are shown in Figs. 10d-f. It is clear that the reproduced images of the solid lens and digital lens become blurred, while the image produced with the digital conical lens remains clear: the reconstructed image can be projected over a wider depth range. We then change the parameters of the digital conical lens and compare the reconstructed image of the panda, setting the focal length to 600 mm and the focal depth to 500 mm. When the position of the receiving screen changes, the details of the panda are still reproduced clearly over this larger depth, as shown in Fig. 11. Thus, by changing the focal length and focal depth of the conical lens, the size and depth of the reconstructed image can be adjusted easily.

figure 10

Results of the reproduced image with different lenses. a Result with the solid lens when the receiving screen is in the focal plane; b result with the digital lens when the receiving screen is in the focal plane; c result with the digital conical lens when the receiving screen is in the focal plane; d result with the solid lens when the receiving screen moves backwards; e result with the digital lens when the receiving screen moves backwards; f result with the digital conical lens when the receiving screen moves backwards

figure 11

Results of the reproduced image when the parameters of the digital conical lens are changed. a Result when the reproduction distance is 700 mm; b Result when the reproduction distance is 800 mm; c Result when the reproduction distance is 900 mm

To eliminate lateral chromatic aberration, the sizes of the three color components are scaled during color separation. We verified the three colors separately. Since the conical lens has a large focal depth, the reproduced images of all three colors are displayed clearly at a fixed receiving-screen position, as shown in Figs. 12a-c; axial chromatic aberration is therefore eliminated. To achieve color coincidence with a single SLM, the SLM is divided spatially into three parts, each illuminated with the corresponding color, as shown in Fig. 12d. Figure 12e shows the color reconstructed image: the three color images coincide at the same position without chromatic aberration. When the focal length of the zoom camera changes, the size of the captured object changes accordingly, so a magnified view of the object can be captured. The holographic reconstructions on the receiving screen for different captured views are shown in Fig. 13, which shows partial reconstructions of the object. With the proposed system, we can capture details of the object by optical zoom and project them simultaneously.
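The color-component scaling is not spelled out in the paper; one hedged reading is that, because the reconstruction size grows with wavelength, each channel is pre-scaled by λref/λ so all three reconstructions come out the same size. A nearest-neighbor resampling sketch of that idea:

```python
import numpy as np

def prescale_channel(img, wavelength, ref_wavelength):
    """Resample one color channel by the factor ref_wavelength / wavelength
    (nearest neighbor, about the image center) so that, after holographic
    reconstruction, all three color images come out the same size.
    A scale factor < 1 shrinks the channel into the center of the canvas."""
    scale = ref_wavelength / wavelength
    n, m = img.shape
    cy, cx = n // 2, m // 2
    # source coordinates that each output pixel samples from
    ys = np.round((np.arange(n) - cy) / scale + cy).astype(int)
    xs = np.round((np.arange(m) - cx) / scale + cx).astype(int)
    ok_y = (ys >= 0) & (ys < n)
    ok_x = (xs >= 0) & (xs < m)
    out = np.zeros_like(img)
    out[np.ix_(ok_y, ok_x)] = img[np.ix_(ys[ok_y], xs[ok_x])]
    return out
```

With blue as the reference, the red channel (longest wavelength, largest reconstruction) is shrunk the most before hologram computation.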

figure 12

Results of three color-reconstructed images. a Green result; b blue result; c red result; d experimental diagram of the SLM; e color result

figure 13

Results of the reproduced images when the focal length of the zoom camera changes. a Reconstructed image when F  = 26.08 mm; b reconstructed image when F  = 31.87 mm

The proposed zoom camera has a reasonably fast response time (within 200 ms), so it can also be used as a depth-acquisition camera. When only one liquid lens is actuated, under voltages of 36 V and 55 V, the captured images are as shown in Fig. 14. It can be seen clearly that, by adjusting the focal length of the liquid lens, objects at different depths can be photographed. A dynamic response video of the image capture process with a single liquid lens is included in Additional file 2: Media S2. Figure 15 shows the reconstructed images of the captures made with one actuated liquid lens. At present, the switching time of a single liquid lens is ~200 ms; when the switching time becomes fast enough, it could be used to acquire the information of a 3D object. Technologies already exist that bring the response time of a liquid lens down to a few tens of milliseconds. We believe that, with further optimization of the system, zoom cameras can be applied to the acquisition of 3D objects in the future.

figure 14

Results of the captured images when only one liquid lens is actuated. a Applied voltage U  = 36 V; b applied voltage U  = 55 V

figure 15

Reconstructed images when only one liquid lens is actuated. a Applied voltage U  = 36 V; b applied voltage U  = 55 V

In the proposed system, since the digital conical lens has a large focal depth, the reconstructed image of the object is seen clearly throughout that depth, as shown in Fig. 10c and f. By changing the focal length of the digital conical lens, the size and position of the reconstructed image can also be adjusted easily, as shown in Fig. 11.

In the holographic reconstruction, the reproduced image is disturbed by zero-order light and higher-order diffraction images; Figs. 10, 11, 12 and 13 show the first-order diffraction images. We can load an offset on the hologram to separate the reproduced image from the zero-order light, after which the undesirable light can be eliminated with an aperture or filter in the system.

Compared with existing holographic projection systems, the proposed system is designed around a zoom camera that is very small and fast in response, so it can easily capture details of the object without moving the camera. In addition, compared with previous systems, we use a digital conical lens instead of a solid lens or other lens for projection. For the same focal length, the digital conical lens has a large depth of focus, and the projected image is clear throughout that range. Color holographic projection is realized without chromatic aberration, and the size and position of the projected image can be changed easily as required.

In generating the hologram, the iterative Fourier transform algorithm calculates the phase information of the object; with a GPU or other acceleration methods, the calculation could be faster still. For the zoom camera, we are developing a circuit board and control software to tune the focal lengths of the liquid lenses synchronously. Through these methods, the overall response time of the system can be improved effectively. As the switching time of the zoom camera decreases and the hologram calculation improves, real-time capture and projection of 3D objects can eventually be realized. In future work, we will continue to improve the performance of the system. We believe our work can promote the development of micro-projection and 3D display technology.
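The offset mentioned above for separating the reproduced image from the zero-order light can be sketched as a linear phase ramp (carrier) added to the hologram; the carrier frequency below is an arbitrary illustrative choice:

```python
import numpy as np

def add_offset(phase, carrier_cycles=(8, 0)):
    """Superimpose a linear phase ramp (carrier) on a phase-only hologram.
    The first diffraction order is then steered `carrier_cycles` frequency
    cells away from the undiffracted zero-order spot, so the unwanted light
    can be blocked with an aperture or filter."""
    n, m = phase.shape
    ky, kx = carrier_cycles
    y, x = np.mgrid[0:n, 0:m]
    ramp = 2 * np.pi * (ky * y / n + kx * x / m)
    return np.mod(phase + ramp, 2 * np.pi)
```

A flat hologram with this carrier reconstructs to a spot at frequency cell (8, 0) instead of the origin, leaving the zero-order light spatially separated.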

Conclusion

In this paper, a holographic capture and projection system for real objects based on zoomable lenses is proposed. A liquid lens-based zoom camera and a digital conical lens serve as the key parts for holographic capture and projection, respectively. The liquid lens is electrically driven, so the zoom camera offers fast response and light weight. As another tunable zoom element, the digital conical lens has a large focal depth and is used in the holographic system for adaptive projection. By adding the phase of the conical lens to that of the captured object, the reconstructed image can be projected with a large depth. With the proposed system, holographic zoom capture and color reproduction of real objects are achieved with a simple structure. The proposed system is expected to be applied to the real-time acquisition and reproduction of 3D objects.

Availability of data and materials

All data generated or analyzed during this study are included in this published article and its additional files.

References

1. Bastug E, Bennis M, Medard M, Debbah M. Toward interconnected virtual reality: opportunities, challenges, and enablers. IEEE Commun Mag. 2017;55:110-7.

2. Wakunami K, Hsieh PY, Oi R, Senoh T, Sasaki H, Ichihashi Y, Okui M, Huang YP, Yamamoto K. Projection-type see-through holographic three-dimensional display. Nat Commun. 2016;7:12954.

3. Hirayama R, Plasencia DM, Masuda N, Subramanian S. A volumetric display for visual, tactile and audio presentation using acoustic trapping. Nature. 2019;575:320-3.

4. Griffiths AD, Herrnsdorf J, Strain MJ, Dawson MD. Scalable visible light communications with a micro-LED array projector and high-speed smartphone camera. Opt Express. 2019;27:15585-94.

5. Zhang H, Li L, Mccray DL, Yao D, Yi AY. A microlens array on curved substrates by 3D micro projection and reflow process. Sens Actuators A Phys. 2012;179:242-50.

6. Wang Z, Chen RS, Zhang X, Lv GQ, Feng QB, Hu ZA, Ming H, Wang AT. Resolution-enhanced holographic stereogram based on integral imaging using moving array lenslet technique. Appl Phys Lett. 2018;113:221109.

7. Li G, Lee D, Jeong Y, Cho J, Lee B. Holographic display for see-through augmented reality using mirror-lens holographic optical element. Opt Lett. 2016;41:2486-9.

8. Wang YJ, Lin YH. An optical system for augmented reality with electrically tunable optical zoom function and image registration exploiting liquid crystal lenses. Opt Express. 2019;27:21163-72.

9. Li M, Lavest JM. Some aspects of zoom lens camera calibration. IEEE T Pattern Anal. 1996;18:1105-10.

10. Park J, Lee K, Park Y. Ultrathin wide-angle large-area digital 3D holographic display using a non-periodic photon sieve. Nat Commun. 2019;10:1304.

11. Kozacki T, Kujawińska M, Finke G, Zaperty W, Hennelly B. Holographic capture and display systems in circular configurations. J Disp Technol. 2012;8:225-32.

12. Kakue T, Wagatsuma Y, Yamada S, Nishitsuji T, Endo Y, Nagahama Y, Hirayama R, Shimobaba T, Ito T. Review of real-time reconstruction techniques for aerial-projection holographic displays. Opt Eng. 2018;57:061621.

13. Buckley E. Holographic projector using one lens. Opt Lett. 2010;35:3399-401.

14. Wang D, Liu C, Wang QH. Holographic zoom system having controllable light intensity without undesirable light based on multifunctional liquid device. IEEE Access. 2019;7:99900-6.

15. Ducin I, Shimobaba T, Makowski M, Kakarenko K, Kowalczyk A, Suszek J, Bieda M, Kolodziejczyk A, Sypek M. Holographic projection of images with step-less zoom and noise suppression by pixel separation. Opt Commun. 2015;340:131-5.

16. Shimobaba T, Makowski M, Kakue T, Oikawa M, Okada N, Endo Y, Hirayama R, Ito T. Lensless zoomable holographic projection using scaled Fresnel diffraction. Opt Express. 2013;21:25285-90.

17. Lin HC, Collings N, Chen MS, Lin YH. A holographic projection system with an electrically tuning and continuously adjustable optical zoom. Opt Express. 2012;20:27222-9.

18. Lee JS, Kim YK, Won YH. Time multiplexing technique of holographic view and Maxwellian view using a liquid lens in the optical see-through head mounted display. Opt Express. 2018;26:2149-59.

19. Yang SJ, Allen WE, Kauvar I, Andalman AS, Young NP, Kim CK, Marshel JH, Wetzstein G, Deisseroth K. Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing. Opt Express. 2015;23:32573-81.

20. Sando Y, Barada D, Yatagai T. Full-color holographic 3D display with horizontal full viewing zone by spatiotemporal-division multiplexing. Appl Opt. 2018;57:7622-6.

21. Senoh T, Mishina T, Yamamoto K, Oi R, Kurita T. Viewing-zone-angle-expanded color electronic holography system using ultra-high-definition liquid crystal displays with undesirable light elimination. J Disp Technol. 2011;7:12060091.

22. Lin SF, Cao HK, Kim ES. Single SLM full-color holographic three dimensional video display based on image and frequency-shift multiplexing. Opt Express. 2019;27:15926-42.

23. Malyuk AY, Ivanova NA. Varifocal liquid lens actuated by laser-induced thermal Marangoni forces. Appl Phys Lett. 2018;112:103701.

24. Liu C, Wang D, Wang QH. Variable aperture with graded attenuation combined with adjustable focal length lens. Opt Express. 2019;27:14075-83.

25. Dong L, Agarwal AK, Beebe DJ, Jiang H. Adaptive liquid microlenses activated by stimuli-responsive hydrogels. Nature. 2006;442:551-4.

26. Ren H, Wu ST. Variable-focus liquid lens. Opt Express. 2007;15:5931-6.

27. Chen MS, Collings N, Lin HC, Lin YH. A holographic projection system with an electrically adjustable optical zoom and a fixed location of zeroth-order diffraction. J Disp Technol. 2014;10:450-5.

28. Lee JS, Kim YK, Lee MY, Won YH. Enhanced see-through near-eye display using time-division multiplexing of a Maxwellian-view and holographic display. Opt Express. 2019;27:689-701.

29. Wang D, Liu C, Wang QH. Method of chromatic aberration elimination in holographic display based on zoomable liquid lens. Opt Express. 2019;27:10058-66.

Download references

Acknowledgments

Not applicable.

This work is financially supported by the National Natural Science Foundation of China under Grant No. 61805130, 61805169 and 61535007.

Author information

Di Wang and Chao Liu contributed equally to this work.

Authors and Affiliations

School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China

Di Wang, Chao Liu, Yan Xing & Qiong-Hua Wang

Beijing Advanced Innovation Center for Big Data-based Precision Medicine, Beihang University, Beijing, 100191, China

Di Wang, Chao Liu & Qiong-Hua Wang

Key Laboratory of Intelligent Computing & Signal Processing, Ministry of Education, Anhui University, Hefei, 230039, China


Contributions

DW and CL conceived the initial idea and performed the experiments. CS and YX analyzed the data. Q-HW discussed the results and supervised the project. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Qiong-Hua Wang .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Dynamic response video of the image capture process.

Additional file 2.

Dynamic response video of the image capture process with single liquid lens.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Cite this article

Wang, D., Liu, C., Shen, C. et al. Holographic capture and projection system of real object based on tunable zoom lens. PhotoniX 1, 6 (2020). https://doi.org/10.1186/s43074-020-0004-3

Received: 07 December 2019

Accepted: 23 December 2019

Published: 04 March 2020

DOI: https://doi.org/10.1186/s43074-020-0004-3


Keywords

  • Spatial light modulator
  • Holographic system
  • Zoom lenses
  • Display technology

    The most advantage of this technique is the possibility to observe 3D images without using glasses. The quality of created images by this method has surprised everyone. In this paper, the ...