
Signal Processing: Image Communication 70 (2019) 114–130


Signal processing challenges for digital holographic video display systems


David Blinder a,b,∗, Ayyoub Ahar a,b, Stijn Bettens a,b, Tobias Birnbaum a,b, Athanasia Symeonidou a,b, Heidi Ottevaere c, Colas Schretter a,b, Peter Schelkens a,b
a Dept. of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, B-1050 Brussels, Belgium
b imec, Kapeldreef 75, B-3001 Leuven, Belgium
c Dept. of Applied Physics and Photonics, Brussels Photonics Team (B-PHOT), Vrije Universiteit Brussel (VUB), Pleinlaan 2, B-1050 Brussels, Belgium

ARTICLE INFO

MSC: 00-01, 99-00

Keywords: Digital holography, Computer generated holography, Holographic visualization, Holographic imaging, Data compression, Quality assessment

ABSTRACT

Holography is considered to be the ultimate display technology since it can account for all human visual cues such as stereopsis and eye focusing. Aside from hardware constraints for building holographic displays, there are still many research challenges regarding holographic signal processing that need to be tackled. In this overview, we delineate the steps needed to realize an end-to-end chain from digital content acquisition to display, involving the efficient generation, representation, coding and quality assessment of digital holograms. We discuss the current state of the art and what hurdles remain to be taken to pave the way towards realistic visualization of dynamic holographic content.

1. Introduction

Holography is a technique that can record and reconstruct the full wavefield of light. Its invention in 1948 is attributed to Denis Gabor, who initially developed the technique for improving the quality of electron microscopes [1]. At the time, no adequate coherent light source was available, forcing him to arrange everything along one axis, known today as in-line holography. Because image quality was poor, holography remained obscure. With the invention of the laser and Leith and Upatnieks' off-axis holography [2] in the 1960s, high-quality holograms became possible.

However, holography was still purely analogue, since everything had to be recorded on photographic film. This changed in the late 1980s: with the emergence of Spatial Light Modulator (SLM) technologies, Charge-Coupled Device (CCD) image sensors and increasingly powerful computers, it finally became possible to digitally capture, process and display holograms. Digital holography is used today for various purposes, such as microscopy [3], interferometry [4], surface measurements [5,6], storage [7] and three-dimensional (3D) display systems [8]. In this paper, we focus in particular on the realization of the latter.

3D displays enable depth perception, thereby showing a scene in its three dimensions. Contrary to conventional two-dimensional (2D) displays or stereoscopic displays, which only provide a monoscopic or a stereoscopic view, 3D displays provide different visual information depending on the viewer's eye position and gaze. This technology is an essential component in many visualization applications in entertainment, medicine and industry.

Current solutions for 3D displays, like those based on optical light fields [9,10], can only provide a subset of the required visual cues due to inherent limitations, such as a limited number of discrete views. By contrast, digital holography [11] is considered to be the ultimate display technology since it can account for all visual cues, including stereopsis, occlusion, non-Lambertian shading and continuous parallax. Representing the full light wavefront avoids eye vergence–accommodation conflicts [12].

However, current holographic displays do not yet offer the resolutions and angular Field-of-View (FoV) required for acceptable visual quality — but this may change soon. Although significant technological hurdles still need to be overcome, steady developments in photonics, microelectronics and computer engineering have led to the prospect of realizing full-parallax digital holography displays with acceptable rendering quality [13,14]. The extreme required image resolutions, paired with the video frame rates needed for dynamic video holography, could lead up to terabytes-per-second data rates in plain form [15].

Much of the prior research effort on holographic display systems focused on the hardware challenges, namely designing and developing the relevant optical and electronic technologies. Examples include high-bandwidth Spatial Light Modulators (SLM) and specialized optics [16–18]. Thorough overviews of the latest developments of 3D displays can be found in recent papers [8,14,19–21].

∗ Corresponding author at: Dept. of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, B-1050 Brussels, Belgium.
E-mail address: dblinder@etrovub.be (D. Blinder).

https://doi.org/10.1016/j.image.2018.09.014
Received 14 May 2018; Received in revised form 24 September 2018; Accepted 27 September 2018
Available online 4 October 2018
0923-5965/© 2018 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Fig. 1. Pipeline summarizing the challenges in signal processing that should be solved to enable high-quality holographic display systems.

An aspect which has received much less attention by comparison to hardware design is how the potentially huge data volumes should be handled for efficient generation, storage and transmission. The signal statistics of a hologram differ considerably from those of regular natural images and video. Hence, conventional representation and encoding algorithms, such as the standard JPEG and MPEG families of codecs, are suboptimal. When generating synthetic holographic content by deploying Computer Generated Holography (CGH) techniques, algorithms are required that differ considerably from those needed for classical synthetic image rendering, which are based on ray-space physics and a particle model for light transport. Indeed, holography adheres to a wave-based transport model of coherent light instead.

This overview paper focuses on the signal processing challenges in realizing holographic video systems from source signal generation to display. We identify challenges in hardware and software and collect a set of important research targets to address them. The steps that need to be undertaken are illustrated in Fig. 1; a globally comprehensive approach is required for designing a consistent holographic video display system.

The body of this document is organized as follows:

• Digital holographic display systems are currently very premature and heterogeneous in design, so there exists no established standard that specifies how to supply holographic data to the display. Hence, it is important to have a signal processing framework that is sufficiently generic to allow for compatibility across display systems (see Section 2).
• Recording digital holograms at high resolutions is difficult. Not only do these acquisitions need specialized optical setups, but they require expertise to build and operate. Furthermore, the size of objects that can be recorded is limited (see Section 3).
• Computer-generated holography (CGH) is much more calculation-intensive than classic image rendering: every point in the scene can potentially affect every hologram pixel. This many-to-many relationship, compounded with the large hologram resolutions, is too costly to compute by brute force (see Section 4).
• Novel transforms and coding technologies are needed for digital holograms. Conventional transforms and representations perform suboptimally because they do not match the statistical properties of holographic signals. Standard codecs such as JPEG and MPEG are designed solely for natural photography and video and operate poorly on holographic content; novel transforms and motion estimation & compensation techniques need to be integrated (see Section 5).
• Modelling perceptual visual quality for holographic content requires accurate models for subjective quality assessment. These metrics will be needed to steer the other components of the holographic pipeline (CGH, coders/decoders, displays) so as to optimize the global quality of experience (see Section 6).

It is important to understand that all challenges outlined above are further complicated by the large computational load and memory/signalling bandwidth associated with the realization of full-parallax, wide-viewing-angle dynamic digital holographic display systems [22]. Before digging deeper into the physics of holography (Section 3), we first address the high-level properties of holographic displays to better position the challenges faced by the modules that process the holographic signal prior to display.

2. Holographic displays

At present, most commercially available 3D displays are stereoscopic: two different 2D images are fed to the viewer, one per eye (see Fig. 2(a)). This is achieved either with auto-stereoscopic displays or with specialized eye wear, e.g. passive glasses (anaglyph, polarized), active glasses (shutters), or Head-Mounted Displays (HMD). However, the 3D effect is still limited, because several cues of the human visual system are not accounted for. Such displays lead to eye strain and nausea after prolonged wear. A mismatch between the vergence and accommodation of the eyes during viewing leads to the ''vergence–accommodation conflict'', which results in discomfort, see Fig. 3.

The important cues needed for 3D perception [19,23] can be divided into two categories: monocular and binocular. Monocular cues are:

• Accommodation — controlled by muscular strain for adjusting the eye's focal length;
• Motion parallax — the observed relative velocity of objects with respect to the eye's position;
• Occlusion — indicating the relative distance of objects to the viewer due to the observed partial overlap;
• Shading — providing visual information on the shape and 3D position of objects.

Binocular cues are:

• Vergence — the angular difference between the gaze directions of the eyes as a function of viewing depth;
• Stereopsis — emerging from correspondences between the projected images on the retinas of both eyes.


Fig. 2. Different 3D display types: (a) stereoscopic display, (b) multi-view or light-field
display, (c) holographic display.

Fig. 4. Illustration relating the dimensions, pixel pitch, viewing angle and resolution of
holographic displays. This diagram assumes a screen ratio of 16:9 and a wavelength of
𝜆 = 460 nm (blue light). The graphs indicate how the parameters are related for display
diagonals ranging from 1 to 40 inch. The green regions approximately indicate the desired
properties of holographic displays with screen sizes for HMD, mobile phones, tablets and
table top.

Fig. 3. Illustration of the vergence–accommodation conflict. In (a), e.g. in holographic displays, the accommodation and vergence always match up. In (b), e.g. in stereoscopic displays, accommodation is constrained at a fixed focal distance, yielding a mismatch.

Many types of 3D display technologies exist that attempt to better address the important cues with varying degrees of efficacy. 3D displays come in many forms, such as volumetric displays [24] and multi-focal displays [25,26]. Among the most popular are multi-view (or light-field) displays. They generalize the stereoscopy approach by increasing the number of 2D views from two to many more (Fig. 2(b)). These displays [27,28] discretize the light field into many views emitted in multiple directions.

Because of the current physical limitations in terms of screen resolution, and consequently on how many views can be displayed at once, the viewing experience is degraded. The main noticeable deficits are the step-wise changes in the motion parallax and accommodation–vergence mismatches. Furthermore, typically only horizontal parallax is supported.

Multi-view displays come in many forms, including 360° tabletops [28] and HMDs [29–31]. It has to be noted though that, in principle, future light field displays which exhibit sufficiently high spatial and angular resolutions enable the super multi-view principle [32,33]: this happens when the angular sampling pitch is small enough so that multiple rays emanating from the same screen point enter the viewer's pupil, hence alleviating the above deficits. Nonetheless, the ray bundle size dictates a trade-off between the maximal spatial and angular resolution, which is not present in holography [34].

Holographic displays, however, hold the promise of accounting for all visual cues. In principle they can display virtual objects indistinguishable from real ones. But realizing high-quality holographic display systems comes with many software and hardware challenges.

2.1. Types of holographic displays

We will focus on dynamic digital pixel-based holographic displays, also known as ''electro-holographic displays''. Contrary to e.g. printed or analogue film holograms, they can be electronically updated at video rates. Further guidance on holographic displays and alternatives can be found in the introductory paper by Dodgson [35], the recent summary [34], or the elaborate review [20].

One of the first digital holographic displays was developed by the MIT Media Lab [36]. It utilizes bulk acousto-optical modulators to create successive horizontal-parallax-only line holograms for bandwidth reduction. From then on, electro-holographic displays have come in many shapes and sizes, varying widely in resolution, FoV and image size. Today's SLM market is dominated by Liquid-Crystal on Silicon (LCoS) and Micro-ElectroMechanical Systems (MEMS) elements [17].

Although a true holographic display should modulate both the amplitude and phase of a wavefront, full complex modulation in a single device is still hard to achieve, though some examples exist [37]. Most SLMs can modulate either amplitude or phase [17]. The latter is often preferred, since phase-only SLMs have the advantage of high diffraction efficiency, which theoretically can reach 100%. The phase component of a hologram is also generally more important for image quality than the amplitude.

Current commercially available SLMs typically have resolutions of up to 1920 × 1080 (Full HD) or 3840 × 2160 (4K UHD), which do not yet reach the resolutions needed for large, full-parallax realistic displays: depending on the screen dimensions and viewing angle requirements, the needed resolution can surpass 10¹² pixels per frame. Moreover, pixel pitches of commercial solutions are currently limited to approximately 4 μm, cf. Fig. 4; the pixel pitch is the distance between the centres of two adjacent pixels. More details are given in Section 2.3. Hence, engineers need to be resourceful with the available bandwidth. This has led to several types of holographic displays, which we can categorize into three groups (see Figs. 5 and 6). The three following sections describe each of them.


Fig. 5. Different types of holographic displays: (a) head-mounted holographic display, (b) external single-user holographic desktop display (with gaze tracking), (c) multi-user holographic display.

Fig. 7. Diagram of the valid viewing zone of a holographic display by illuminating the
SLM with either (a) planar wave or (b) spherical wave illumination. The opening angle of
the blue cones is given by the grating equation 𝜃𝑚𝑎𝑥 , and their intersection forms the valid
viewing zone. Note that for the same viewing distance, a larger 𝜃𝑚𝑎𝑥 is needed for planar
wave illumination (and hence a smaller pixel pitch 𝑝).

2.1.1. Head-mounted displays (HMD)
HMDs have applications in personalized virtual and augmented reality. Holographic head-up displays were already proposed early on for aircraft [45], cars [46] and as an aid for medical diagnostics and surgery [47]. More recent usage scenarios [48] involve human–machine interfaces for visual content creation, computer games, CAD/CAE design, training simulators, education and 3D-TV. Virtual environments can be rendered with high fidelity, and virtual objects can be seamlessly integrated with the real environment, providing an immersive experience. Usually these display types use two monoscopic holographic displays [34,41,49,50]. Because only a small subset of the information needs to be transported per eye to the user's eyebox, the required bandwidth is drastically reduced. Head-mounted holographic displays will thus most likely be the first kind of holographic displays available to consumers in the near future.

2.1.2. Single-user desktop displays (SUDD)
These displays are similar to the previous class, but the devices are either too small in display size or field-of-view to be used effectively by multiple users simultaneously [38,44,51]. Some displays even steer the content solely to the eyes of a viewer by tracking his/her gaze in real time [42,52]. Due to screen size and/or tracking area limitations, these displays are meant to be operated by a single user with a rather limited motion range, e.g. when seated or standing in a specific area. The range of applications extends most usages of head-mounted displays to scenarios where the display does not have to be wearable, so larger optical setups (and bandwidth) can be allowed for; examples include high-end 3D-TV or simulations needing advanced calculation-intensive CGH algorithms, for which the portability requirement of HMDs would be a hindrance. This class is projected to mature shortly after head-mounted displays become commercially available, because many of the challenges overlap.

Fig. 6. Classification of various holographic displays based on their pixel count and screen size.
Source: Bilkent [38], ETRI [39], Microsoft [34], NICT [40], NVIDIA [41], Seereal [42], UTokyo [43], WUT [44].

2.1.3. Multi-user displays (MUD)
These displays can present full-parallax holograms at extreme resolutions to multiple viewers at once. They are highly useful when multiple users need to simultaneously observe and interact with 3D models, especially over long periods of time, enabling applications in design, manufacturing, medicine, sports, etc. [53]. Currently, horizontal-parallax-only tabletop displays such as [43,54,55] can give a first impression of this ultimate class of holographic displays. However, present implementations are still limited in resolution and can often only display pre-rendered holograms of small objects, a few centimetres across. They are mostly based on many multiplexed SLMs [40,56,57], or on ultra-fast SLMs combined with rotating optical elements [39].

2.2. Limits of holographic displays

Although steady progress is being made in SLM technology, satisfactory holographic displays are still not realizable. The large amount of information contained in a hologram brings current hardware to its limits: Giga/Tera-pixel resolutions, pixel sizes approaching the wavelength of light (i.e. < 1 μm), high interconnection bandwidths and sufficient switching speeds for time-multiplexed methods (requiring up to 10 kHz refresh rates) are not yet supported.

The relationship between the pixel pitch 𝑝 and the supported angular FoV is given by the grating equation: for every spatial frequency 𝜈, the associated diffraction angle 𝜃 is given by

𝜆𝜈 = sin(𝜃)    (1)
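To make the grating equation concrete, the following short Python sketch evaluates Eq. (1) for a few assumed parameter values (a 532 nm green laser and a 10 × 10 cm screen, matching the worked example in Section 4; the numbers are purely illustrative, not display specifications):

```python
import numpy as np

# Assumed example values: a green laser and a 10 cm x 10 cm screen.
wavelength = 532e-9      # metres
screen_size = 0.10       # metres (one side)

def max_diffraction_angle(pixel_pitch):
    """Maximum diffraction (half-)angle supported by a pixel pitch, from Eq. (1)."""
    return np.degrees(np.arcsin(wavelength / (2.0 * pixel_pitch)))

def pitch_for_angle(theta_max_deg):
    """Pixel pitch needed to reach a desired maximum diffraction angle."""
    return wavelength / (2.0 * np.sin(np.radians(theta_max_deg)))

p_slm = 4e-6                          # approx. pitch of current commercial SLMs
p_wide = pitch_for_angle(40.0)        # pitch needed for a 40 degree half-angle
pixels = (screen_size / p_wide) ** 2  # pixel count for the 10 cm x 10 cm screen

print(f"4 um SLM      -> theta_max ~ {max_diffraction_angle(p_slm):.1f} deg")
print(f"40 deg target -> pitch ~ {p_wide * 1e6:.3f} um, ~{pixels / 1e9:.0f} Gigapixels")
```

Running the sketch reproduces the orders of magnitude discussed in this paper: a 4 μm SLM only supports a diffraction half-angle of roughly 3.8°, while a 40° half-angle requires a pitch around 0.41 μm and tens of Gigapixels for a modest screen.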


Fig. 9. Simplified diagram of a holographic on-axis recording setup in transmission mode. A laser beam is expanded as to fully illuminate the sample, then split into a reference beam and an object beam. Both beams are recombined after the object beam passes through the object and form an interference pattern on the image sensor.
Fig. 8. Diffraction of an incoming planar wave going through slits, resulting in point-spread functions. Regions where the wavefield 𝑈 has phase 𝜑 = 2𝑛𝜋, 𝑛 ∈ Z are drawn with red lines. (a) Single point-spread function. (b) Young's double slit experiment, with alternating constructive and destructive interference.

The maximal spatial frequency determined by the hologram's pixel pitch 𝑝 will thus correspond to a maximum diffraction angle given by sin(𝜃𝑚𝑎𝑥) = 𝜆∕(2𝑝). In this context, we also define the space-bandwidth product (SBP), which defines in the Wigner domain the product of the spatial and spectral footprint of the holographic signal [58,59]:

𝑆𝐵𝑃 = (𝑊𝑥 × 𝑊𝑦) ∕ (𝑝𝑥 × 𝑝𝑦)    (2)

where 𝑊𝑥 and 𝑊𝑦 denote the spatial width and height of the hologram, respectively, and 𝑝𝑥 and 𝑝𝑦 the pixel pitch in both dimensions. A large spatial size (e.g. a 44-inch display) combined with a large pixel pitch of e.g. 127 μm basically results in a 2D 4K colour television screen, while a small pixel pitch of e.g. 230 nm would produce a 10 Terapixel holographic display with a maximum angular FoV of 180° (see Fig. 4).

The maximum diffraction angle imposes limits on the valid viewing zone geometry. This zone is delineated by all positions that are reachable by rays emitted from every hologram point: namely the intersection of all cones placed at every hologram point with an opening angle 𝜃𝑚𝑎𝑥. Their orientations (and, by consequence, the resulting shape of the valid viewing zone) will depend on the shape of the illuminating reference wave. This is shown in Fig. 7 for two typical examples: planar wave and spherical wave illumination. The former will have an infinitely extended viewing zone, but at a larger minimal viewing distance. The latter allows for a wider viewing angle closer to the holographic screen, but generally limits the valid viewing zone to a small, fixed region. That is why spherical wave illumination is better suited for HMDs, while planar illumination might be preferred for full-parallax holographic displays.

2.3. Challenges

The following challenges in the design of holographic displays impact signal processing requirements:

Ultra-high resolution. A very small pixel pitch is required for the display of macroscopic objects at a reasonable level of detail. Sub-wavelength pixel sizes enable the display of higher frequencies, i.e. finer resolved fringe patterns, which allow for large viewing angles. At the same time, smaller pixel sizes further intensify the resolution problem by increasing the number of pixels required for a hologram of the same physical size (see Fig. 4). This search for high-resolution and wide-angle holograms results in large space-bandwidth products.

Light modulation efficiency. Digital SLMs have a limited number of degrees of freedom per pixel, expressed by the number of bits used per pixel to specify amplitude or phase, i.e. bit-depth, and by the extent to which phase and amplitude can be modulated, if at all. Furthermore, the diffraction efficiency of a device will affect the display quality. Other issues pop up too when the pixel density increases, such as cross-talk and noise. There is some trade-off between modulation efficiency and pixel count and density. All these aspects will need to be taken into consideration when optimizing these variables for designing the best possible display. Accurate simulations of these variables could help to model their impact on visual quality. Moreover, this may require the application and design of (new) algorithms to post-process the hologram so as to obtain better visual quality, such as dithering, or the Gerchberg–Saxton algorithm [60] to generate phase-only holograms.

Multi-colour display. The laws of diffraction are wavelength-dependent and colours are not interchangeable. Blue light has the shortest wavelength, meaning it will have a lower diffraction angle for the same maximal SLM spatial frequency than the other colours, which can lead to problems at large viewing angles. Another consequence is that higher wavelengths can carry more information per unit area. An additional problem is that lasers have (by design) a very narrow bandwidth in the electromagnetic spectrum. This can lead to perceptual colour distortion and to highly visible chromatic aberrations, because the individual colour components will be much better distinguishable than in the incoherent white light case.

Speckle-noise reduction. Highly coherent light sources give rise to speckle noise, which can nonetheless be mitigated by various approaches. Partially coherent light sources can reduce speckle, but will cause some blur and some loss of depth perception due to their reduced coherence length. More information on the relation between coherence and image quality can be found in [61]. Other solutions involve diffusers, temporal modulation of the laser beams, frequency shifting over time, superposition of multiple reconstructions, and quickly repositioning the laser beam.

Embedded signal processing. Due to the vast number of pixels that needs to be processed, it is to be expected that internal signalling channels with intrinsic bandwidth constraints will be under high stress. Hence, processing – e.g. decoding and rendering – will need to be parallelized and hierarchically distributed such that data can be processed locally (cf. JPEG Pleno requirements [62]), as is also the case for high-end light field displays [33].

3. Digital hologram recording

The holograms to be rendered on the holographic display can be obtained either by recording the hologram with an optical capturing setup, or by numerically calculating the hologram using computer-generated holography algorithms. In this section, we focus on the first approach; Section 4 addresses the latter. To understand the requirements and challenges, we first summarize the underlying fundamental physics. Thereafter, the acquisition of high-resolution holograms and the connected inverse reconstruction methods are discussed.


Fig. 11. (a) The intensity of an off-axis hologram of a speckle phantom, (b) its Fourier representation with annotated terms and (c) the reconstructed hologram amplitude.
Source: [63].

Fig. 10. These diagrams illustrate the solution spaces, per pixel, that are consistent with the measurements. (a) For a known value of 𝑅 and a measured intensity 𝐼, the valid solutions for 𝑂 lie on a circle centred at 𝑅 with radius √𝐼. (b) When taking two measurements, there still is some ambiguity (two valid solutions), so (c) at least three different 𝑅𝑖 are needed to find a unique solution for 𝑂.

3.1. Physics of holography

Holography relies on the wave model of light governed by the laws of diffraction. This representation is dual to the ray (or particle) model of light. The wave nature of light is captured by Young's famous double-slit experiment, see Fig. 8. Although electromagnetic waves are generally described by Maxwell's equations, in a sufficiently large, homogeneous, dielectric, isotropic medium the equations can be simplified to a scalar model.

Holography is thus typically described by scalar diffraction theory: the wavefield of a monochromatic coherent light source is modelled by a scalar field of complex-valued amplitudes, describing any component of the electromagnetic field for every point in space. The magnitude denotes the amplitude, and the angle denotes the (relative) phase delay w.r.t. some reference wavefield. This contrasts with conventional imaging techniques, which only measure intensity (squared amplitude), but lack the phase data needed to encode depth.

Because holograms can capture all visible features of light waves, they can provide all needed 3D cues. An illuminated object in space will induce interference patterns that can be captured by the hologram to encode the light field emanating from the object.

Mathematically, diffraction can be expressed as an (infinite) collection of point emissions. A single point spread function is given by:

𝑂(𝐩) = 𝑈(𝐱) 𝑒^(𝑖𝑘‖𝐩−𝐱‖) ∕ ‖𝐩 − 𝐱‖    (3)

evaluating the scalar field 𝑈 at a point 𝐩 in the wavefield 𝑂 on the hologram plane, for a spherical wave centred at a luminous point 𝐱; 𝜆 is the wavelength of the light, and 𝑘 = 2𝜋∕𝜆 is the wave number. For an arbitrary surface, we can integrate over all points using the Huygens–Fresnel principle [59]:

𝑂(𝐩) = (1 ∕ 𝑖𝜆) ∬_𝑆 𝑈(𝐱) [𝑒^(𝑖𝑘‖𝐩−𝐱‖) ∕ ‖𝐩 − 𝐱‖²] 𝐧 ⋅ (𝐩 − 𝐱) 𝑑𝐱    (4)

where 𝐱 ∈ 𝑆 are points on the surface 𝑆 over which is integrated and 𝐧 is the surface normal on 𝑆 in 𝐱.

Since no physical detector can measure the phase directly, as phase carries no energy, this information needs to be captured indirectly by means of interference. A typical holographic recording setup (Fig. 9) consists of a laser, whose beam is expanded so as to fully illuminate the object to be recorded. Next, the beam is split into a known reference beam 𝑅 and the sought object beam 𝑂, which will illuminate the object and encode its information. These beams are superimposed and the intensity of the two interfering beams is measured on the detector. The measured 2D intensity fringe pattern 𝐼 is given as:

𝐼 = |𝑅 + 𝑂|² = |𝑅|² + |𝑂|² + 𝑅𝑂* + 𝑅*𝑂    (5)

where * denotes the complex conjugate. The last term 𝑅*𝑂 contains the sought 𝑂. Since 𝑅 is known, we can in principle retrieve the complex-valued 𝑂. The interferogram further consists of several other undesired terms. For example, since the sought 𝑂 is complex-valued, the squared magnitude term |𝑂|² will not contain any of the important phase information.

Because of the multiple valid solutions for 𝑂 (see Fig. 10), additional steps need to be taken to unambiguously retrieve 𝑂. The types of setups can be classified into two categories: single- and multiple-exposure methods. One of the most popular single-exposure techniques is called ''off-axis holography'' [2] (Fig. 11). By using a tilted plane wave for 𝑅, the terms in Eq. (5) get separated in the frequency domain; the desired term can then subsequently be extracted using a bandpass filter. The disadvantage of this technique is that the bandwidth of 𝑂 will be only a fraction of the camera bandwidth, because most of it will be discarded by the filter.

Multi-exposure methods, on the other hand, consist of taking several measurements in succession. The best known example is phase-shifting digital holography [64], where generally at least 3 exposures are needed to unambiguously retrieve the complex amplitudes (cf. Fig. 10(c)). More exposures can be taken to improve noise robustness. The main drawback is that such methods cannot easily capture dynamic phenomena, reducing their application domain. Combinations such as parallel phase-shifting holography [64] exist as well.

3.2. Acquiring high-resolution holograms

One of the current bottlenecks for acquiring high-quality holograms is the limited image sensor resolution: typical digital sensors consist of multiple megapixels, which is several orders of magnitude below the resolutions needed for large, high angular FoV displays.

A straightforward optical solution is to use multiplexing: by using an array of sensors and/or moving the camera between successive acquisitions, a large synthetic aperture can be created by numerically stitching many sub-holograms together [65,66]. This has applications beyond display, such as in holographic tomography and microscopy [67,68]. However, the setup will become very complex and acquisitions time-consuming, thereby precluding the capture of dynamic scenes. Other optical solutions involve modifications to the capturing system to extract more information from the recorded intensity pattern, such as utilizing a coded aperture [69,70].

On the other hand, by accounting for the statistical features of holographic signals, it is possible to enhance the resolution of acquired holograms without modifying the recording setup. This can be done with inverse methods, whose goal is to extract a maximum of information from relatively small sets of intensity measurements, thereby improving the resolution without increasing acquisition setup complexity. The number of intensity measurements is generally considered insufficient according to the Shannon–Nyquist sampling theorem, resulting in many holographic signals that can explain the measurements.

In this context, three questions arise. First, we have to identify what constitutes a good set of measurements. Secondly, we have to select one of the holographic signals that best explains the intensity measurements (e.g. the maximum likelihood solution). To make a substantiated choice, prior information about the holographic signal may be used by inverse-problem reconstruction approaches.
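To illustrate Eq. (5) and the multi-exposure retrieval described above, the following Python sketch simulates a toy recording and recovers the object wavefield with a common four-step phase-shifting variant. All fields are synthetic and the setup is idealized (no noise, perfectly known reference); it is a minimal illustration, not the reconstruction pipeline of any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object wavefield O on the sensor and a known unit-amplitude reference R0.
N = 256
O = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R0 = np.ones((N, N), dtype=complex)

def interferogram(R, O):
    """Recorded intensity of Eq. (5): I = |R + O|^2."""
    return np.abs(R + O) ** 2

# Four exposures with reference phase shifts of 0, pi/2, pi and 3*pi/2.
I = [interferogram(R0 * np.exp(1j * phi), O)
     for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

# Recombine the intensities to isolate the complex-valued object beam.
O_est = ((I[0] - I[2]) + 1j * (I[1] - I[3])) / (4 * np.conj(R0))

print(np.allclose(O_est, O))   # True: amplitude and phase are both recovered
```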


3.3. Inverse reconstruction methods

One example of a class of inverse methods is compressed sensing (CS), which is a generic theory in signal processing that deals with sparse signal recovery from an underdetermined set of linear measurements [71]. Free space diffraction provides a natural signal mixing operator, which makes CS an excellent fit for holography.

CS has been successfully applied to holographic tomography [72], 3D scene segmentation [73] and measuring 3D refractive index distributions of cells. In recent work [74,75], the groundwork was laid on how to determine optimal measurement conditions for holographic recording setups and which CDF wavelet bases are best suited for image reconstruction in compressed sensing.

In the case of off-axis geometries, a separation is created in the Fourier domain. Direct image reconstruction approaches will manipulate the frequency domain and apply a spatial filter to recover only the complex amplitude information of the object beam, discarding the complex conjugate twin image as well as the zero-order illumination. These frequency-domain manipulation techniques are straightforward if the object wavefield is bandlimited [76] and may yield exact reconstruction. However, in practice, the sample contains fine structures and details, independent of the lateral resolution of the detector. Recently, regularized inverse reconstruction approaches have been developed for holographic signals and impressive reconstruction quality may be obtained [77–79], at the price of an increase in computation.

3.4. Challenges

Optically recording digital holograms still comes with several challenges:

Resolution limitations. The maximum obtainable resolution is limited by the deployed detector. It can be extended by deploying inverse methods if a good signal model can be defined that exploits the signal's sparsity properties. However, results will still be limited by the signal statistics.

Physical limitations. Aside from resolution, these setups have physical limitations on the types of objects and scene sizes that can be captured. Tomographic solutions can be utilized for static scenes, but these techniques are hardly applicable to dynamic scenes. Hence, recording holograms requires specialized equipment, and optically captured holograms for holographic display might only be suitable for very particular use cases, e.g. (bio)medical imaging and non-destructive testing.

Optical distortions. Recorded holograms are typically contaminated with speckle noise and optical aberrations. Hence, additional denoising and aberration compensation techniques need to be deployed.

Fortunately, there is another way of acquiring holographic data, namely by numerically synthesizing holograms from a 3D scene model using computer-generated holography. Its benefits and challenges will be discussed in the next section.

Fig. 12. Illustration of the virtual setup geometry for CGH with planar illumination. Scene objects should be positioned inside the aliasing-free zone. Objects outside of this zone will emit rays with incident angles on the hologram surpassing 𝜃𝑚𝑎𝑥, causing aliasing.

4. Computer-generated holography (CGH)

Computer-generated holography (CGH) is an alternative to optical hologram acquisition. It has the important advantage that the object information can be obtained by means of conventional (multi-)camera setups, point cloud data, or even computer graphics. One of the main challenges of CGH is to generate realistic holograms within acceptable computation times. This can be inferred from the Huygens–Fresnel principle in Eq. (4): computing the hologram by brute force is inefficient, since the integral has to be evaluated over all scene points, for every pixel.

4.1. Types of CGH methods

The computational load of the many-to-many mapping of the diffraction model is compounded with another difficulty: realistic holograms need very high resolutions. This fact can be derived from Eq. (1): suppose we have a green laser (𝜆 = 532 nm) and a desired viewing angle of 𝜃𝑚𝑎𝑥 = 40°. This amounts to a pixel pitch 𝑝 of at most 0.414 μm. For a display of 10 × 10 cm, we then need a massive ultra-high definition display of 58 Gigapixels.

Aside from the valid viewing zone, this also imposes restrictions on the scene geometry and the CGH calculations: incident light at too large angles will otherwise result in aliasing, see Fig. 12. Furthermore, Eq. (4) only accounts for free-space propagation, not for effects such as occlusion, shadows, refraction, etc. That is why, over the years, many different techniques were developed to accelerate and/or improve the realism of CGH. They can be classified as follows.

4.1.1. Point cloud methods
Point clouds (Fig. 13(a)) represent objects as a collection of discrete luminous points [80–84]. This discretizes Eq. (4) into a summation over all points. These methods can be augmented using look-up tables with precomputed point-spread functions, or even surface elements with a precomputed light distribution. Occlusion can be cumbersome to handle, but it can be achieved e.g. by attributing to each point a small volume that blocks incident light.

4.1.2. Polygon methods
Surface elements such as triangles (Fig. 13(b)) leverage the fact that diffraction between planes can be efficiently computed using a convolution [85–87], e.g. with the Angular Spectrum Method (ASM). These methods are particularly efficient when the number of hologram pixels greatly exceeds the polygon count. Generally, the polygons will have an associated complex-valued wavefield, where the amplitude encodes the texture and the phase encodes the light emission distribution (e.g. diffuse/specular). Occlusion is more straightforward, but can be computationally demanding.

4.1.3. RGB+Depth methods
With a depth map (Fig. 13(c)), we can encode holograms with pairs of colour images and per-pixel depth information [88]. They are computationally efficient and can be of high quality, but they are restricted in the supported viewing angles. For example, objects cannot be placed behind others with a single depth map. These methods are especially suitable when the viewer gazes at the display from a fixed position. This shortcoming may be addressed, though, by using more depth maps for several viewpoints.

Fig. 13. Comparison of various types of CGH algorithms. The object (blue) emits wavefronts (red) that propagate to the virtual hologram (black) at the bottom. There is a dependence
between scene representations and their associated numerical light propagation methods.

4.1.4. Ray-based methods


Ray-space representations (Fig. 13(d)) will approximate the holo-
gram by a discretized light field, which is converted into a holo-
gram by assigning an incident-angle-dependent impulse response to
every ray [89–94]. One notable subset are the phase-added stereogram
methods: the hologram is subdivided into blocks, each individually
represented in its FFT domain (similar to the blockwise DCT). Every
frequency component within a single block will approximately corre-
spond to a ray of light. That way, a set of rays can be sampled for every
point/coefficient, followed by a block-wise inverse Fourier transform.
This mutual conversion between the hologram and discrete light field
representation is also known as ‘‘ray-wavefront conversion’’ [95]. The
advantage is that those methods are comparatively fast and can use
existing ray-tracing software; however, discretizing the light field will reduce the
quality of the generated hologram, degrading some of the 3D cues such
as continuous parallax.
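The Angular Spectrum Method mentioned in Section 4.1.2 (and used in Section 4.2 below to propagate wavefront recording planes to the hologram plane) can be sketched in a few lines. This is a standard band-limited free-space propagator with assumed sampling parameters, not code from the referenced works:

```python
import numpy as np

def angular_spectrum_propagate(field, pitch, wavelength, distance):
    """Propagate a sampled complex wavefield between two parallel planes
    separated by `distance`, using the angular spectrum of plane waves."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Propagating-wave constraint; evanescent components are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: propagate a WRP-like field over 2 mm (assumed values).
wrp = np.ones((256, 256), dtype=complex)
hologram_plane = angular_spectrum_propagate(wrp, 1e-6, 532e-9, 2e-3)
```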

4.2. Sparse acceleration techniques

CGH algorithms can be accelerated further using sparsifying transforms. Sparsity is a well-known concept in the signal processing literature. It refers to the property that signals drawn from a particular source can be well approximated by a few coefficients in a certain, sparse basis. This property is very useful for many applications, such as compression, denoising, and compressed sensing. In particular, the notion of sparsity is highly useful for CGH. The following subsections describe three common acceleration strategies: (1) using an intermediate wavefront recording plane (WRP), (2) using a stack of WRPs and (3) computing PSFs (Point Spread Functions) in the STFT domain.

Fig. 14. Diagram illustrating how the distance to a plane will affect the point spread function support size. The maximum diffraction angle 𝜃𝑚𝑎𝑥 will determine the opening angle of the cone of influence, 2⋅𝜃𝑚𝑎𝑥, shown on the left. The classic WRP method will reduce the support of the point spread function w.r.t. the hologram plane, but placing the WRP in the middle of the point cloud is even more sparse.

4.2.1. Single wavefront recording plane acceleration
Although the wavefield in the hologram plane is very dense, it can become highly sparse when expressed in the right transform domain. This can be leveraged for computational efficiency, since then only a small subset of the coefficients needs to be updated. The best known example is called Wavefront Recording Planes (WRPs) [82]. It utilizes the property that points close to the hologram plane have a small spatial footprint. By placing a WRP close to the object in 3D space, only a small subset of the WRP pixels needs to be updated (Fig. 14), resulting in high sparsity. After all PSF additions, the WRP can be efficiently propagated to the hologram plane using e.g. the angular spectrum method.

4.2.2. Multiple wavefront recording planes acceleration
The WRP methods can be extended further by using multiple parallel equidistant WRPs, so as to optimize calculation times. An efficient CGH method which exploits multiple WRPs has been proposed in [83]. As shown in Fig. 15, the object is segmented into WRP zones and the points are allocated to their closest WRP. The WRPs are computed sequentially, starting from the one furthest from the hologram plane. Furthermore, this method is combined with an occlusion processing technique locally applied per point. The wavefield is progressively processed from the back to the front WRP and finally propagated to the hologram plane [96,97].

Another important acceleration technique relies on look-up tables (LUTs). That way, scene elements can be precomputed and stored in any transform domain. For example, PSFs quantized at discrete depth levels around the WRP can be reused for all the WRP zones. But LUTs are not limited to storing PSFs. This approach has been extended to diffuse surfaces by modulating the PSFs with a random phase factor [98], resulting in a simulated diffuse scattering of the light. More recently, an extension which utilizes the Phong illumination model and shadow mapping has been proposed [97]. Here, each point contributes to the WRP according to the properties of its associated surface, enabling the generation of high quality colour holograms, as shown in Fig. 16.

4.2.3. Wavelet and STFT-based accelerations
A more recent development is to compute PSFs in non-spatial transform domains. PSFs are sparse in the wavelet domain (WASABI method) [100], and even sparser in the Short-Time Fourier Transform (STFT) domain [101].

High realism can be achieved using less than 2% of the highest-magnitude coefficients, with reported 30-fold speedups. Actually, the (phase-added) stereogram method can be viewed as a particular case of sparse STFT-based CGH. Note that the aforementioned classes are not mutually exclusive, and many existing methods are hybrids [95,102].

4.3. Challenges

CGH methods are a compromise between calculation time, realism and supported scene parameters (e.g. dimensions, viewing angles, occlusion, lighting, transparency). Optimizing CGH methods both for offline (video) rendering and for real-time display is still a very active domain of research. The main challenges to overcome are:

Synthesizing large CGH. The CGH generation must be computationally effective, especially for full-parallax holographic displays. Generating holograms in real time at video rates requires specialized software/hardware implementations. Solutions involve adapting the algorithms for multi-threading [103], implementations on Graphics Processing Units (GPUs) [84,91], high-performance computer clusters [104] or FPGA systems [105].

Local CGH updates. Local memory and caching methods are indispensable regarding computational complexity, and principles of efficient transforms and sparse representations may be used. This challenge overlaps with the transform design problem for compression (cf. Section 5).

Realistic scene rendering. The aim is to obtain photo-realistic CGH. Although steady progress has been made, the realism is still limited compared to its state-of-the-art counterpart in computer graphics. Aiming to go beyond Phong shading would require adequate models for shadows, transparency and refraction, complex lighting, volumetric light scattering, and many others.

Fig. 15. Illustration of the multiple WRP CGH method showing the smaller support required per point, compared to the single WRP method in Fig. 14. Additionally, the variation of the support of the PSF per depth level of the LUT is shown, which is determined by the distance to the WRP and the maximum diffraction angle.

Fig. 16. View reconstructions extracted from a 65,536-by-65,536 pixel hologram generated with the multiple WRP method. Reconstructions (a) and (b) are two views taken with a small aperture (6.5 mm – 8k × 8k pixels); the two reconstructions (c) and (d) are refocused respectively at the front and back using a larger aperture (13.1 mm – 16k × 16k pixels). The Biplane model is courtesy of ScanLAB Projects [99].

5. Transforms and coding

High-resolution holograms need large volumes of data to be represented, especially for dynamic holography: e.g. the 10 × 10 cm display considered in Section 4 needs 58 Gigapixels. Supposing we have a frame rate of 30 fps and a phase-only SLM with a 1 byte per pixel bit-depth, the required bandwidth would amount to 1.75 TB/s. Because of these data rates, dynamic hologram transmission will be unfeasible with currently available (or near-future) hardware unless we have adequate compression techniques. Efficiently coding holograms will thus be of utmost importance for viewing dynamic content.

Conventional image and video codecs are unfortunately suboptimal for holograms, notably for macroscopic objects; this is because the characteristics of holographic signals differ substantially from natural image content. This becomes immediately clear when one tries to decompose a hologram with wavelets: in Fig. 17, the decomposition is not sparse. This is why novel codec architectures need to be developed to adequately represent holographic data. One of the core codec components that needs to be modified is the chosen transform. Furthermore, the relationship between the objective hologram distortion and the perceived subjective distortion is still poorly understood.

5.1. Coding of static holograms

A number of different coding solutions for static holograms have been proposed over the years, and until recently most of them concerned relatively rudimentary solutions [22]. We can categorize these codecs in four classes, coding: (1) the input content for CGH, (2) the complex amplitude data in the hologram plane, (3) the backpropagated hologram in the object plane and (4) an intermediate representation.


Fig. 17. Example of a hologram transformed with a dyadic wavelet transform. The transform coefficients are clearly not sparse, which explains why conventional image codecs will not perform well on holographic data. Hologram: courtesy of b-com [106].

Fig. 18. Amplitudes of a reconstructed dice hologram using various compression methods at 0.25 bpp. The upper row of subfigures is focused at the back plane, the lower row is focused at the front plane. Hologram: courtesy of b-com [106].
Source: [101].

5.1.1. Coding of CGH input
These methods encode the 3D scene representations (e.g. RGB+Depth, meshes, point clouds, light fields) to circumvent the problem of efficiently compressing the holographic signals. Thereafter, CGH will generate the output hologram. For example, the RGB+Depth method can be used with compressed textures and depth maps to generate holograms on-the-fly [107]. The advantage is that the 3D content itself can be efficiently compressed with existing solutions, resulting in small file sizes. The main disadvantage is that the hologram generation is shifted to the display side: CGH has a high computational load, making real-time holography only possible for simple scenes. Many types of holograms (especially optically acquired ones) cannot be encoded with these methods. This solution will thus be impractical for broadcast scenarios. Moreover, the encoded 3D content will still contain redundancies, since only a part of the scene's information will reach the hologram due to e.g. occlusion.

5.1.2. Coding in hologram plane
These methods, at the other extreme, treat the raw holograms as a special type of image. They generally rely on existing image or video codecs, which are modified and/or extended to better match the statistics of holographic signals so as to improve their rate–distortion behaviour w.r.t. the default unmodified codec. They mostly involve transforms that are better tailored to handle the high frequencies and strong directionality caused by the interference fringes. For static holograms, various transform extensions were proposed: directional-adaptive wavelets and arbitrary packet decompositions [108], vector quantization lifting schemes [109], wavelet–bandelets [110], the wave atom transform [111], and mode-dependent directional transform-based HEVC [112,113]. Another family concerns Gabor wavelets, which obtain an optimal compromise between the spatial and angular resolution permitted by the Heisenberg principle [114]. It was shown by Viswanathan et al. [115] that this transform can be used for adaptive view-dependent coding. El Rhammad et al. specified a matching pursuit approach for an overcomplete Gabor wavelet dictionary [116] and further expanded their support for efficient view-point dependent decoding in support of holographic HMDs [117].

Many holographic displays are driven by phase-only SLMs. For these displays, one can discard the amplitude, since we need to code only the argument of the complex-valued wavefield, also known as the ''wrapped phase'', typically represented by values between 0 and 2𝜋. Because the wrapped phase behaves non-linearly, it should not be processed with conventional linear transforms such as the DCT or wavelets, since they will incorrectly treat phase wraps as edges and wrongly model phase distance. For example, for some very small 𝜖 > 0, the values 0 and (2𝜋 − 𝜖) will be treated as highly different intensity values, even though they are very close in phase distance.

One possible solution is to use phase unwrapping [118] to linearize the signal, but this is an ill-posed problem, computationally intensive, and can drastically increase the dynamic range of the signal. Other solutions are the use of nonlinear phase-only transforms, such as modulo wavelets [119,120]. The drawback is that lossy coding is more tricky; care has to be taken when quantizing the signal because of the transforms' non-linearity.

Methods coding the hologram plane data can in principle represent any content and are fast, since they leverage existing coding architectures and do not involve any generation of holographic data. Their main drawback is that it becomes much harder to efficiently compress the data, primarily because the transforms lack awareness of the 3D scene content.
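The wrapped-phase issue discussed in Section 5.1.2 can be made concrete with a small numeric check; `phase_distance` below is an illustrative helper (not part of any cited codec) showing why a linear transform mis-judges the distance between 0 and 2𝜋 − 𝜖:

```python
import numpy as np

def phase_distance(a, b):
    """Distance between two wrapped-phase values in [0, 2*pi), accounting for
    the wrap-around: 0 and 2*pi - eps are nearly identical phases."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

eps = 1e-3
print(np.abs(0.0 - (2 * np.pi - eps)))       # ~6.282: large "intensity" difference
print(phase_distance(0.0, 2 * np.pi - eps))  # ~0.001: tiny true phase distance
```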


primarily because the transforms lack awareness of the 3D scene con-


tent. This problem becomes even trickier for holographic video: unlike
for regular video content, local (e.g. block-based) motion estimation and
compensation techniques are inadequate rendering them ineffective,
because small motion in the 3D space will influence all hologram pixels.
Another important issue is the rate–distortion behaviour: as the content
is modelled indirectly, the relationship between objective and perceived
(subjective) distortions is not straightforward.

5.1.3. Coding in object plane


This class of compression methods utilizes a single backpropagation,
typically numerical Fresnel diffraction or ASM, which is efficiently
computable with 1 or 2 FFTs. When the hologram has a large depth-
of-focus (small aperture, large pixel pitch) or a scene which is (nearly)
flat, then a backpropagation to the right depth will effectively refocus
the entire scene. That way, the refocused hologram will resemble an
image, making it subsequently better compressible by conventional
codecs [121]. This parallels the use of the WRP in CGH, which utilizes
its sparsity in the object plane for simplifying computations.
A notable example of this approach are the Fresnelets [122]: they are
a closed-form mathematical expression of Fresnel diffraction combined Fig. 19. Representations of a holographic signal propagating at various depths, with
their associated time–frequency diagram. The energy in the time–frequency domain is
with B-spline wavelets, with demonstrated use for compression [123].
delineated by a blue polygon, whose shape will progressively shear as the signal is
Object-plane coding was recently investigated by combining it with propagating forward. The red and green light rays will correspond to points in the time–
modern codecs as well [113,124]. However, an important limitation of frequency domain, as shown on the diagram.
using a global Fresnel (or ASM) transform is that it is ill-suited for deep
scenes: multiple objects or single extended-depth objects cannot all be
completely brought in focus simultaneously.

5.1.4. Coding of intermediate representations


Coding in the object plane works well for flat scenes like for example
microscopic data; however, when encoding deep scenes, it is impossible
to identify one single object plane that is suitable for globally encoding
the data. Choosing a particular object plane results in large parts of the
scene being out of focus. Consequently, this gives rise to poor encoding
performance due to a weak match with natural image statistics –
significant fringe patterns are still present – and to loss of the depth
information. This is illustrated for the Fresnel backpropagation case in
Fig. 18(b) where numerical reconstructions of the decoded holograms,
with either focus on back or front planes of the scene, are poor.
For comparison purposes, encoding in the hologram plane with plain JPEG 2000 is shown as well.
To discuss the design of an intermediate representation that allows for the preservation of depth information, we need to introduce the time–frequency representation – a.k.a. the space–frequency representation, the Wigner representation or phase space – in which the properties of holographic signals can elegantly be represented visually and mathematically. It is a key tool to build a deeper understanding of the characteristics of interferometric holographic signals. For simplicity, we restrict the graphical portrayal in Fig. 19 to 1D signals. Similarly to a music score, time (or space) and frequency are jointly represented on the horizontal (𝑥) and vertical (𝜔) axes, respectively.
Because frequencies correspond to diffraction angles given by Eq. (1), points in the time–frequency domain will correspond to individual rays. Note that we cannot have perfect localization in both space and frequency simultaneously due to the Heisenberg uncertainty principle, so in practice we cannot extract individual rays from a wave-based representation such as holography.
A mathematically elegant solution to address the intermediate representation problem is to use Linear canonical transforms (LCTs). They form a class of unitary transforms, i.e., linear operations on the time–frequency domain. LCTs can be used to model Fresnel free-space backpropagation as a simple global shear in the time–frequency domain. Unfortunately, LCTs cannot adequately represent deep scenes, similarly to the backpropagation case.
To that end, Blinder et al. [101] derived the reversibility conditions on the scene depth map needed for having valid nonlinear canonical transforms. Furthermore, they proposed an efficiently computable subset using a piecewise linear approximation of the depth profile of the scene surface (Fig. 20). This was achieved by segmenting the hologram and applying a series of (complementary) unitary transforms based on the depth profile of the object or scene, which as such seamlessly tile the time–frequency domain. This results in an intermediate representation where all segments are brought into focus and hence a classic image or video coding strategy can be deployed. These transforms do not exhibit redundancy, but can be more computationally costly and have some restrictions on the depth maps they can represent. The proposed transform shows markedly improved reconstructions and demonstrates the importance of modelling non-linearities for successfully capturing depth information from scenes (see Fig. 18(c)).

Fig. 20. Diagram of how non-planar diffraction warps the time–frequency domain. The left column shows the depth (map) of a surface. The right column shows the corresponding time–frequency warping. The top row shows a nonlinear canonical transform using a piece-wise linear approximation of the surface. The bottom row shows the closest LCT by taking the average constant depth, which is insufficient for efficient coding. Source: [101].
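To make the shear behaviour sketched in Figs. 19 and 20 concrete, the following 1D illustration (Python/NumPy, with illustrative parameters; this is not the nonlinear transform of [101]) propagates a wave packet centred at spatial frequency f0 with the paraxial Fresnel kernel. Its envelope shifts laterally by roughly λ·z·f0, i.e. each frequency slides along the space axis in proportion to its value, which is precisely a shear of the time–frequency plane.

```python
import numpy as np

# Illustrative parameters; a wave packet at carrier frequency f0 should shift
# laterally by about wavelength * z * f0 after Fresnel propagation over z.
wavelength, pitch, z = 633e-9, 2e-6, 0.02
n = 4096
x = (np.arange(n) - n // 2) * pitch
f0 = 5e4                                            # carrier spatial frequency [1/m]
packet = np.exp(-(x / 50e-6) ** 2) * np.exp(2j * np.pi * f0 * x)

fx = np.fft.fftfreq(n, d=pitch)
H = np.exp(-1j * np.pi * wavelength * z * fx ** 2)  # paraxial Fresnel kernel
propagated = np.fft.ifft(np.fft.fft(packet) * H)

def centroid(u):
    """Intensity-weighted centre of a 1D field."""
    w = np.abs(u) ** 2
    return np.sum(x * w) / np.sum(w)

print(centroid(propagated) - centroid(packet))      # measured lateral shift
print(wavelength * z * f0)                          # predicted shear, about 0.63 mm
```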
5.2. Coding of dynamic holograms

Conventional video codecs will attempt to compensate for motion by using localized prediction schemes (e.g. block matching algorithms).


This is ineffective for holography, because even small motions will cause pervasive changes to the signal because of the nature of diffraction, i.e. for a larger angular FoV, one can expect that every pixel contains information about every point in the scene, and hence also information about all motion present in the scene. Consequently, motion estimation will fail. This fact was e.g. noticeable in the work of Xing et al. [125], where holographic videos were compressed with a standard HEVC codec: inter-frame decorrelation brought little benefit for compression.
Hence, due to the problematic nature of motion estimation, it makes more sense to inherit motion information from the source data that was used to computer-generate the holograms. The motion vectors can then subsequently be used to steer the motion compensation process. For translational and rotational motion parallel to the hologram plane this can easily be facilitated, but out-of-plane rotations/translations need operations in the frequency domain. Note that since full-parallax holographic data contains information for different viewing angles, information is inherently present to support compensation for rotations around the horizontal and vertical axes defining the hologram plane. Translational motion along the axis perpendicular to the hologram plane results in shearing of the spectrum in the time–frequency domain (cf. Fig. 19).

Fig. 21. Motion compensation for dynamic holographic video using affine time–frequency transforms. Both object and viewer motion can be accurately compensated, resulting in sparse difference frames.

A solution recently proposed by Blinder et al. [126] is to generalize LCTs using affine symplectic time–frequency transforms, enabling the modelling of all small rigid-body motions (6 DOF) in 3D space as particular unitary transforms. That way, the transform can be used for motion compensation (Fig. 21). This was successfully utilized for holographic video coding, resulting in substantial gains surpassing 5 dB over standard HEVC codecs using a simple global motion compensation scheme [126]. Even higher gains are expected when using more advanced motion compensation architectures. Furthermore, the same scheme can compensate for the motion of users when viewing dynamic holographic content with HMDs.
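The idea of reusing known scene motion for prediction can be illustrated with a deliberately simplified sketch. The Python/NumPy fragment below handles only the easiest case, a global translation parallel to the hologram plane, and is not the affine symplectic transform of [126]; frames and parameters are synthetic. The previous frame is shifted by the motion vector inherited from the CGH source data using a Fourier-domain phase ramp (allowing sub-pixel shifts), and the residual left after this prediction is what a codec would actually have to encode.

```python
import numpy as np

def shift_field(field, dx, dy, pitch):
    """Translate a complex hologram frame by (dx, dy) metres via the Fourier
    shift theorem; sub-pixel shifts are supported."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pitch), np.fft.fftfreq(ny, d=pitch))
    ramp = np.exp(-2j * np.pi * (FX * dx + FY * dy))
    return np.fft.ifft2(np.fft.fft2(field) * ramp)

# Synthetic frames: frame t+1 equals frame t translated in-plane by a known
# motion vector, plus a small amount of uncorrelated change.
rng = np.random.default_rng(1)
pitch = 2e-6
frame_t = rng.standard_normal((512, 512)) + 1j * rng.standard_normal((512, 512))
dx, dy = 3.0e-6, -1.5e-6                       # motion vector from the source data [m]
frame_t1 = shift_field(frame_t, dx, dy, pitch)
frame_t1 += 0.01 * (rng.standard_normal((512, 512)) + 1j * rng.standard_normal((512, 512)))

# Motion-compensated prediction and residual (the part left to encode).
prediction = shift_field(frame_t, dx, dy, pitch)
residual = frame_t1 - prediction
print(np.linalg.norm(residual) / np.linalg.norm(frame_t1))   # small residual energy
```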
5.3. Challenges

There is as of yet no generic transform or representation that is applicable to holograms with arbitrary scene content. Ultimately, the goal is to build a codec tuned for static and dynamic digital holograms, efficient both in terms of compression performance as well as in encoding/decoding time. To conclude, we list some of the important challenges to be tackled concerning hologram coding:

Efficient representations. Coding strategies based on intermediate representations are a promising approach, particularly those based on the recently proposed nonlinear canonical transforms. However, more research is necessary to lay sound mathematical foundations as well as time–frequency domain segmentation approaches. In addition, further investigation of sparse signal representations is a necessity, and research is welcomed on intra-frame prediction techniques as deployed in many image and video coders, but particularly tuned to holographic signals.

Motion compensation and estimation. Accounting for scene-space motion is needed because every 3D scene point can potentially affect every hologram pixel, and classical local motion models on the hologram (such as block-based motion compensation) will be ineffective. Novel algorithms will be needed to compensate for arbitrary motion of multiple independently moving objects. Moreover, optically acquired holograms present a challenge because the ground-truth motion is unknown, and will thus have to be estimated. Automated general motion estimation in digital holography for complex scenes has not been solved yet either.

Coding complex values. Holograms are complex-valued and new approaches are needed for efficiently coding wrapped phase data for phase-only holograms, or for the phase components of complex-valued signals. An additional difficulty will be achieving lossy coding, because of the inherent non-linearity of the phase information.

Speckle reduction. Speckles arise on diffuse surface elements and are difficult to capture due to the random phase distribution at the point of emission. It is hard to separate the speckle noise realization from the underlying signal, which will not only impact compression performance but quality assessment as well. For example, diffuse surfaces have a random phase delay distribution which will significantly increase the entropy of the hologram, but this cannot be eliminated as it would remove the diffuseness, even though the precise random instance is unimportant for visualization. Potential solutions involve encoding the phase probability distribution instead, from which many random instances can be generated, thereby reducing the entropy considerably.

Scalable coding. View and position dependent coding will be important to support holographic video streams across displays with differing screen sizes and angular resolutions. This is especially relevant for SUDDs and HMDs, where only a fraction of the holographic signal will reach the viewer’s pupil. This implies that any encoding scheme satisfying these requirements will require sufficient locality both in time as well as in frequency.

Computational complexity. Reducing computation through adapted approximations is of course a generic requirement that returns for every component of the holographic processing pipeline. Parallelization of methods and distributed processing are additional requirements.

6. Quality assessment

Assessing the impact of distortions introduced in the holographic signal processing pipeline (Fig. 1) on the visual quality of the reconstructed hologram is not as straightforward as for regular images. In the latter case, distortions introduced in the spatial or frequency domain can often be easily linked to reconstruction artefacts. As previously discussed, for holographic content this connection is much more obfuscated: information related to one particular point source in space is spread throughout the hologram, particularly for large angular FoVs.

Hence, when it comes to quality prediction, digital holography has an extra level of complexity compared to regular imaging modalities. When the wavefield gets distorted by some form of image processing or compression, one is not primarily interested in changes of the holographic fringe pattern, but rather in how the decoded objects will appear after reconstruction (cf. Section 6.4). Furthermore, the visual quality depends on the chosen perspective as well: e.g. the highest frequency components contain information for the largest viewing angles; quantizing these components more coarsely will result in an inhomogeneous distortion across viewing angles, and potentially a reduction of the angular FoV.

Unfortunately, so far only initial attempts have been made to develop visual quality metrics for holographic data, and only few efforts have been undertaken to design subjective quality assessment procedures. Both also require suitable holographic test data. Let us take a closer look at these three aspects.


6.1. Holographic test data

Over the past years, several research groups have set up open public databases containing various holograms [106,127,128], see Fig. 22. Their goal is to provide a representative data set of high-resolution holograms, both optically recorded and computer-generated, containing both static and dynamic content. At the time of writing, the databases are still modest in size and content: each one contains at most a dozen holograms, consists almost entirely of static holograms, and often encodes simple scenes with modest resolutions ranging up to 8K and only sporadically up to 16K or larger. An important goal is to expand these databases by adding more holograms with higher resolutions and more heterogeneous and dynamic content. Currently, the JPEG Pleno standardization initiative [129], initiated by the JPEG committee, attempts to set up a database ‘‘JPEG PlenoDB’’ [99], collecting publicly available data sets, and to define guidelines for the selection of suitable test content.

Fig. 22. Numerical reconstructions of holograms from various databases: (a) b-com hologram repository [106], (b) Interfere-III database [127], (c) EmergImg-HoloGrail v2 database [128].

6.2. Objective visual quality measures

Objective visual quality measures are typically classified into two categories: (1) signal fidelity measures that have a clear physical meaning, but which are typically poor in terms of predicting the quality perceived by the human visual system, and (2) perceptual visual quality measures, typically modelled and trained via subjective quality experiments [130].

In digital holography, popular signal fidelity measures are Euclidean distance-based error measures, in particular the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). These are used predominantly both for error analysis and as a measure of reconstruction quality. However, the MSE technically intends only to measure the global energy variation. Other well-known regular imagery measures like the Structural Similarity Index Measure (SSIM) have been used separately on the Cartesian components of the complex wavefield [109,131]. Even though a polar representation would be mathematically equivalent, directly applying fidelity measures on those channels was found to be less accurate [113] (e.g. due to the non-linear response of wrapped phase to distortions). This behaviour is already known from regular image processing [132–135] and persists for holograms, where the phase encodes, among others, the depth information. The Cartesian representation is much less sensitive to compression artefacts as it avoids non-linearities by splitting the holographic information into real and imaginary parts [113].

In addition, the quality can be measured in the object plane, in the hologram plane and after numerical reconstruction. The first two approaches have the advantage that they measure the global quality of the hologram with respect to the reference hologram in terms of PSNR or SSIM. The latter can be deployed if the numerical reconstruction quality for a particular viewing angle and focus plane needs to be examined [131].
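The gap between Cartesian and polar channels can be reproduced in a few lines. The sketch below (Python/NumPy; the random field merely stands in for a hologram and the psnr helper is defined here for illustration) applies one and the same additive complex distortion and scores it on the real/imaginary parts as well as on the amplitude and wrapped phase; the phase error has to be re-wrapped and still responds non-linearly to the underlying complex-valued error.

```python
import numpy as np

def psnr(ref, dist):
    """PSNR between two real-valued arrays, using the reference peak magnitude."""
    mse = np.mean((ref - dist) ** 2)
    peak = np.max(np.abs(ref))
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
shape = (512, 512)
reference = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
distorted = reference + 0.05 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

# Cartesian channels: real and imaginary parts.
print("real:", psnr(reference.real, distorted.real))
print("imag:", psnr(reference.imag, distorted.imag))

# Polar channels: amplitude and wrapped phase. The phase difference must be
# re-wrapped to (-pi, pi] before averaging, and its statistics no longer relate
# linearly to the complex-valued error.
print("amplitude:", psnr(np.abs(reference), np.abs(distorted)))
phase_error = np.angle(np.exp(1j * (np.angle(distorted) - np.angle(reference))))
print("phase MSE:", np.mean(phase_error ** 2))
```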
Recently in [136], a new versatile similarity measure (VSM) was introduced, aiming to address the mentioned shortcomings for complex-valued error measurement. Another novel quality metric, the Sparseness Significance Ranking Measure (SSRM) [137], utilizing sparse coding and a ranking system based on the coefficient magnitudes, was shown to be more effective than the PSNR or SSIM on holographic data as well [138].

To the knowledge of the authors, no perceptual visual quality measures have been proposed yet for digital holography. To construct them, subjective test procedures are a necessity.

6.3. Subjective test procedures

A standardized testing methodology to measure the perceptual visual quality – i.e. the visual quality as perceived by the Human Visual System (HVS) – for holography is still an open problem. The challenge is two-fold: (1) the heterogeneity of currently existing holographic display systems, differing in many aspects, such as light source properties, type of SLMs, resolution, pixel pitch, viewing zone geometry and optical aberrations; and (2) the high complexity of 3D input data, with content observable from many angles and at different focal depths. We consider two categories: direct assessment, where users directly view and rate holograms on a holographic display, and indirect assessment, which relies on a substitute display (e.g. a light field display) showing views extracted from the hologram to be analysed.

For direct assessment to be reproducible, one would have to establish what the most relevant parameters are. On top of the existing requirements for quality assessment on regular screens, aspects to consider are: (1) minimal screen capabilities (eyebox dimensions, viewing angle, bit-depth) and (2) viewer positioning (should the viewer move freely, go to specific positions or stay still?). However, given that holographic displays are evolving rapidly, care should be taken that the requirements are sufficiently general to remain relevant in the years to come.

It is also possible to reliably display high-quality synthetic holograms of up to even hundreds of gigapixels by printing them in e.g. glass plates [139]. Although this provides an experience which is very close to what a future high-end holographic display could show, it becomes impractical and expensive to print a large quantity of instances to compare e.g. various codecs, distortions, CGH algorithms etc. Furthermore, this medium cannot be used for displaying dynamic content.

Alternatively, these obstacles can be partially overcome using indirect assessment techniques, by utilizing conventional 2D displays or even light-field displays to render various reconstructions of the hologram. Nonetheless, this will come at the cost of ignoring several visual cues, especially the ones pertaining to depth perception and parallax. Initial work on this problem was done in [136,140], where a large 4K 2D screen was used to rate the holograms of the public data set found in [127]. In this test, the goal was to rate only the overall quality of central view reconstructions of the holograms. Such a setup allows for adapting to the currently available standard testing methodologies with minimal modifications.

For more advanced and thorough quality assessments, one may take inspiration from quality assessment procedures used for light fields: multiple views and focal depths can be shown in succession in a video format to simulate scene perception from many different viewpoints [141]. The advantage of this approach is that the testing conditions for subjective quality measurements have been meticulously standardized for classical 2D displays [142–144]. Another, similar approach would be to allow users to actively explore the hologram content. This can be achieved on a screen as well, or could be experienced more naturally with HMDs.

In recent efforts [98], a high-end light field display was used to render a set of colour CGHs. The display provided the possibility of rendering a wide field of view, but only with horizontal parallax. Such a display setup can be utilized for subjective tests, where the observers will be able to freely move within the FoV of the screen, observe the reconstructed hologram objects, and score the perceptual visual quality from different points of view.


6.4. Challenges

Consequently, quality assessment of holographic data still faces many challenges to be resolved:

Public holographic databases. Extended databases should be created by adding more holograms with higher resolutions and more heterogeneous and dynamic content. These holograms should depict a good utilization of the time–frequency space. If large swathes of the time–frequency domain are zero, this means that the hologram capacity is underused. However, solely looking at the spatial or Fourier representation can be misleading, since both can have a dense signal yet have a very sparse time–frequency representation. This will happen when e.g. an object surface is insufficiently diffuse. Secondly, objects should be placed close enough to the hologram plane. This is needed to enable proper evaluation of depth cues such as accommodation and parallax. Finally, the bounds of the aliasing-free zone should be respected (cf. Fig. 7). The hologram parameters dictate the allowed scene geometries [145]. Care should be taken to correctly place objects in the scene, otherwise the useable viewing zone will shrink and unwanted artifacts will appear. For example, objects too close to the hologram plane will vanish at steep viewing angles, having the effect as if the viewer is looking at the scene through a keyhole.

These databases can subsequently be deployed to develop quality metrics that are targeted at assessing the perceptual visual quality. These metrics face the following challenges.

Visualization artefacts. Visual distortions are entirely different from those in natural imagery: e.g. because of the non-local nature of the hologram’s information content, the effect of local point defects on the hologram will be less visible than for natural imagery and have a more distributed impact on the reconstructed hologram [15]. This requires an extra effort to study the effect of different distortion types affecting the wavefield on the reconstructed hologram. Examples include errors induced by lossy codecs, display limitations (bit-depth, phase/amplitude only) and optical aberrations.

Speckle noise perception. Speckle arises naturally in holography and will affect the assessment of any reconstructed views, whether optically recorded or synthetic, by reducing the perceived reconstruction quality. Quality metrics should either account for or ignore this noise depending on the use case. As explained in the previous section, two generated holograms can possess highly different signals because of the different random speckle instances for diffuseness, yet be visually nearly identical after reconstruction, making hologram comparison difficult.

View position and angle dependencies. Although traditionally a single quality score is expected to represent the overall visual quality, this may not suffice for 3D content. For wide FoV holograms of deep scenes, a human subject may experience a different perceptual visual quality depending on the viewing position, angle and focal depth. For example, standard JPEG compression with perceptual quantization will favour low frequencies, thereby better preserving the centre view than larger viewing angles [123]. Visual quality measures are therefore expected to provide not only an overall quality prediction per hologram, but perspective-dependent predictions as well. As far as depth resolution is concerned, compression technologies might affect the depth of field as well, see Fig. 18.

Subjective quality assessment. To model and train perceptual quality metrics, extensive subjective testing is required to facilitate their design. New procedures on holographic displays should be developed that account for aspects such as minimal screen capabilities and viewing position. An additional challenge would be to design these assessment procedures such that they are sufficiently flexible to support successive generations of holographic displays.

Alternative displays. Adapting subjective quality assessment procedures will be required as long as sufficiently mature holographic displays are lacking. These methodologies are based on displaying numerically reconstructed holograms on nowadays available displays, ranging from classical 2D displays to light field displays, the latter being capable of rendering large, wide-angle numerical hologram reconstructions. Modelling the relationship between quality scores collected via such ‘‘pseudo-holographic’’ setups and quality scores obtained on early holographic displays will be a necessity to validate predictions from these experiments.

7. Conclusions

Realizing digital holographic display systems still faces many challenges on the signal processing side. In this paper, we provided an overview of the state-of-the-art in holographic displays, computer-generated holography, efficient transforms and coding solutions, and quality assessment for digital holograms. Because of its multidisciplinary nature, several challenges remain to be addressed:

Holographic displays do not yet reach the requisite visual quality, screen sizes and viewing angles needed for truly high-quality experiences. Various attributes of holographic displays will need to be leveraged to get the best signal given the technological constraints. Examples are light modulation efficiency, cross-talk, bit-depth, amplitude/phase modulation, colour display accuracy, speckle noise and embedded signal processing within the display.

Recording digital holograms at high resolutions, especially for large scenes, remains cumbersome. Hence, for holographic display, computer-generated holography solutions are preferred. CGH algorithms have to overcome many hurdles to reach an acceptable computational cost. Numerical evaluation of diffraction is highly computationally expensive for modelling realistic-looking content, all while needing resolutions orders of magnitude larger than those of current 2D displays. Good candidates are sparse CGH methods, which exploit spatial, temporal and frequency redundancies in holographic signals without compromising quality.

Transforms and codecs require new efficient representations, which remains an open problem for holographic signals with large space-bandwidth products. Optimized transforms and prediction schemes will have to strike a balance between sparsity, computational efficiency, 3D scene-awareness and scalability requirements. Because holographic signals are highly non-stationary, we expect that the best transforms will model scene content in phase space, i.e. the time–frequency domain. Not only will this track the behaviour of diffraction more closely, but it will also allow for selective ROI decoding of the time–frequency space so as to only process the displayable subset of the holographic scene or to extract specific views.

Quality assessment is important for designing holographic systems. Adequate quality measures will help to steer the joint design of holographic codecs, CGH algorithms and display systems. The first steps to achieve this goal will be (1) to design large, varied public data sets containing holograms of highly realistic objects that can account for all human visual cues; (2) to design initial quality assessment procedures for numerically reconstructed holograms on regular 2D screens or light field displays at various viewing angles and focal depths; and (3) to evaluate the relevant hardware parameters affecting quality perception in holographic display systems, aiding the establishment of standardized testing procedures.

In this context, it is encouraging that standardization bodies are also already addressing some of these challenges. In particular, the JPEG committee is taking the lead by developing the JPEG Pleno standard that will facilitate the capturing, representation and exchange of not solely light field, but also point cloud and holographic imaging modalities [129,146,147]. The standard aims at (1) defining tools for improved compression while


providing advanced functionalities at system level, (2) supporting data [15] C. Slinger, C. Cameron, M. Stanley, Computer-generated holography as a generic
and metadata manipulation, editing, random access and interaction, display technology, Computer 38 (8) (2005) 46–53.
[16] G. Lazarev, A. Hermerschmidt, S. Krger, S. Osten, LCOS spatial light modulators:
protection of privacy and ownership rights as well as other security
trends and applications, Opt. Imaging Metrol.: Adv. Technol. (2012) 1–29.
mechanisms and providing methodologies for quality assessment. [17] T. Haist, W. Osten, Holography using pixelated spatial light modulators - part 1:
In summary, we can state that holographic display systems are theory and basic considerations, J. Micro/Nanolithogr. MEMS MOEMS 14 (2015)
steadily coming within reach [8,14,19–21]. However, to enable this type 041310, http://dx.doi.org/10.1117/1.JMM.14.4.041310.
of visualization, significant efforts are still required, not solely from the [18] T. Haist, W. Osten, Holography using pixelated spatial light modulators - Part
2: applications, J. Micro/Nanolithogr. MEMS MOEMS 14 (4) (2015) 041311,
optics, photonics and nano-electronics communities, but also from the
http://dx.doi.org/10.1117/1.JMM.14.4.041311.
signal processing community. Moreover, all developed algorithms will [19] J. Geng, Three-dimensional display technologies, Adv. Opt. Photon. 5 (4) (2013)
have to run at an acceptable computational cost, which in the end might 456–535, http://dx.doi.org/10.1364/AOP.5.000456.
be one of the highest hurdles to take. Nonetheless, while accounting for [20] J.Y. Son, H. Lee, B.R. Lee, K.H. Lee, Holographic and light-field imaging as future
3-d displays, Proc. IEEE 105 (5) (2017) 789–804, http://dx.doi.org/10.1109/
these many challenges, we can still expect on a relatively short-term
JPROC.2017.2666538.
holographic head-mounted displays to hit the market, competing with [21] D. Smalley, T.-C. Poon, H. Gao, J. Kvavle, K. Qaderi, Volumetric displays: Turning
light field inspired HMD solutions for augmented reality applications. 3-D inside-out, Opt. Photonics News 29 (6) (2018) 26–33, http://dx.doi.org/10.
However, high-resolution multi-user displays with large angular FoV 1364/OPN.29.6.000026.
support, will at least still take a decade before commercial solutions [22] F. Dufaux, Y. Xing, B. Pesquet-Popescu, P. Schelkens, Compression of digital
holographic data: an overview, Proc. SPIE 9599 (2015) http://dx.doi.org/10.
will become available.
1117/12.2190997.
[23] E. Goldstein, Sensation and Perception, Cengage Learning, 2013.
Acknowledgements [24] B.G. Blundell, 3D Displays and Spatial Interaction: From Perception to Technology
Volume I Exploring the Science, Art, Evolution and Use of 3D Technologies,
Walker & Wood Limited, New Zealand, 2011.
The authors would like to thank b-com, ScanLAB Projects and Nicolas
[25] K. Akeley, S.J. Watt, A.R. Girshick, M.S. Banks, A stereo display prototype with
Pavillon for providing the holograms and 3D models used in this paper. multiple focal distances, ACM Trans. Graph. 23 (3) (2004) 804–813, http://dx.
doi.org/10.1145/1015706.1015804.
Funding [26] N. Padmanaban, R. Konrad, T. Stramer, E.A. Cooper, G. Wetzstein, Optimizing
virtual reality for all users through gaze-contingent and adaptive focus displays,
Proc. Natl. Acad. Sci. 114 (9) (2017) 2183–2188, http://dx.doi.org/10.1073/
The research leading to these results has received funding from
pnas.1617251114.
the European Research Council under the European Union’s Seventh [27] T. Balogh, P.T. Kovács, Z. Dobrányi, A. Barsi, Z. Megyesi, Z. Gaál, G. Balogh, The
Framework Programme (FP7/2007–2013)/ERC Grant Agreement Nr. holovizio system-new opportunity offered by 3d displays, in: Proceedings of the
617779 (INTERFERE), and from the Research Foundation – Flanders TMCE, 2008.
[28] S. Yoshida, fvision: 360-degree viewable glasses-free tabletop 3d display com-
(FWO), Belgium project Nr. G024715N.
posed of conical screen and modular projector arrays, Opt. Express 24 (12) (2016)
13194–13203, http://dx.doi.org/10.1364/OE.24.013194.
References [29] T.H. Yoshihiro Kajiki, Hiroshi Yoshikawa, Hologramlike video images by 45-
view stereoscopic display, Proc. SPIE 3012 (1997) http://dx.doi.org/10.1117/12.
[1] D. Gabor, A new microscopic principle, Nature 161 (4098) (1948) 777–778, 274452.
http://dx.doi.org/10.1038/161777a0. [30] Y. Takaki, K. Tanaka, J. Nakamura, Super multi-view display with a lower
[2] E.N. Leith, J. Upatnieks, Reconstructed wavefronts and communication theory, J. resolution flat-panel display, Opt. Express 19 (5) (2011) 4129–4139, http://dx.
Opt. Soc. Amer. 52 (10) (1962) 1123–1128, http://dx.doi.org/10.1364/JOSA.52. doi.org/10.1364/OE.19.004129.
001123. [31] H. Mizushina, J. Nakamura, Y. Takaki, H. Ando, Super multi-view 3d displays
[3] E. Cuche, P. Marquet, C. Depeursinge, Simultaneous amplitude-contrast and reduce conflict between accommodative and vergence responses, J. Soc. Inf. Disp.
quantitative phase-contrast microscopy by numerical reconstruction of Fresnel 24 (12) (2016) 747–756, http://dx.doi.org/10.1002/jsid.520.
off-axis holograms, Appl. Opt. 38 (34) (1999) 6994–7001, http://dx.doi.org/10. [32] Y. Takaki, High-density directional display for generating natural three-
1364/AO.38.006994. dimensional images, Proc. IEEE 94 (3) (2006) 654–663, http://dx.doi.org/10.
[4] F. Charrière, J. Kühn, T. Colomb, F. Montfort, E. Cuche, Y. Emery, K. Weible, P. 1109/JPROC.2006.870684.
Marquet, C. Depeursinge, Characterization of microlenses by digital holographic [33] Z. Alpaslan, H. El-Ghoroury, P. Schelkens, Information processing challenges of
microscopy, Appl. Opt. 45 (5) (2006) 829–835, http://dx.doi.org/10.1364/AO. full parallax light field displays, in: SPIE Photonics Europe, Optics, Photonics
45.000829. and Digital Technologies for Multimedia Applications, Vol. 10679, SPIE, 2018,
[5] E. Cuche, F. Bevilacqua, C. Depeursinge, Digital holography for quantitative http://dx.doi.org/10.1117/12.2316659.
phase-contrast imaging, Opt. Lett. 24 (5) (1999) 291–293. [34] A. Maimone, A. Georgiou, J.S. Kollin, Holographic near-eye displays for virtual
[6] I. Yamaguchi, J.-i. Kato, S. Ohta, Surface shape measurement by phase-shifting and augmented reality, ACM Trans. Graph. 36 (4) (2017) 85:1–85:16, http://dx.
digital holography, Opt. Rev. 8 (2) (2001) 85–89. doi.org/10.1145/3072959.3073624.
[7] H.J. Coufal, D. Psaltis, G.T. Sincerbox, et al., Holographic Data Storage, Vol. 8, [35] N.A. Dodgson, Autostereo displays: 3D without glasses, in: Electronic Information
Springer, 2000. Displays, 1997.
[8] F. Yaraş, H. Kang, L. Onural, State of the art in holographic displays: A survey, J. [36] J.S. Kollin, S.A. Benton, M.L. Jepsen, Real-time display of 3-d computed holograms
Display Technol. 6 (10) (2010) 443–454, http://dx.doi.org/10.1109/JDT.2010. by scanning the image of an acousto-optic modulator, Proc. SPIE 1136 (1989)
2045734. http://dx.doi.org/10.1117/12.961683.
[9] M. Levoy, P. Hanrahan, Light field rendering, in: Proceedings of the 23rd Annual [37] S. Reichelt, R. Häussler, G. Fütterer, N. Leister, H. Kato, N. Usukura, Y. Kanbayashi,
Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96, Full-range, complex spatial light modulator for real-time holography, Opt. Lett. 37
ACM, New York, NY, USA, 1996, pp. 31–42, http://dx.doi.org/10.1145/237170. (11) (2012) 1955–1957, http://dx.doi.org/10.1364/OL.37.001955.
237199. [38] F. Yaraş, H. Kang, L. Onural, Circular holographic video display system, Opt.
[10] M. Yamaguchi, Light-field and holographic three-dimensional displays (Invited), Express 19 (10) (2011) 9147–9156, http://dx.doi.org/10.1364/OE.19.009147.
J. Opt. Soc. Amer A 33 (12) (2016) 2348–2364, http://dx.doi.org/10.1364/ [39] Y. Lim, K. Hong, H. Kim, H.-E. Kim, E.-Y. Chang, S. Lee, T. Kim, J. Nam, H.-G.
JOSAA.33.002348. Choo, J. Kim, J. Hahn, 360-degree tabletop electronic holographic display, Opt.
[11] Stephan Reichelt, Holographic 3-d displays - electro-holography within the grasp Express 24 (22) (2016) 24999–25009, http://dx.doi.org/10.1364/OE.24.024999.
of commercialization, in: R. Haussler, N. Leister (Eds.), Advances in Lasers [40] H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, T. Senoh, Large size
and Electro Optics, IntechOpen, Rijeka, 2010, http://dx.doi.org/10.5772/8650, three-dimensional video by electronic holography using multiple spatial light
(Chapter 29). modulators, Sci. Rep. 4 (2014) 6177, http://dx.doi.org/10.1038/srep06177.
[12] D.M. Hoffman, A.R. Girshick, K. Akeley, M.S. Banks, Vergenceaccommodation [41] L. Shi, F.-C. Huang, W. Lopes, W. Matusik, D. Luebke, Near-eye light field
conflicts hinder visual performance and cause visual fatigue, J. Vis. 8 (3) (2008) holographic rendering with spherical waves for wide field of view interactive
33–33. 3d computer graphics, ACM Trans. Graph. 36 (6) (2017) 236:1–236:17, http:
[13] J. Son, W. Son, S. Kim, K. Lee, B. Javidi, Three-dimensional imaging for creating //dx.doi.org/10.1145/3130800.3130832.
real-world-like environments, Proc. IEEE 101 (1) (2013) 190–205, http://dx.doi. [42] R. Häussler, Y. Gritsai, E. Zschau, R. Missbach, H. Sahm, M. Stock, H. Stolle, Large
org/10.1109/JPROC.2011.2178052. real-time holographic 3D displays: enabling components and results, Appl. Opt.
[14] M. Lucente, The first 20 years of holographic videoand the next 20, 2011. 56 (13) (2017) F45–F52, http://dx.doi.org/10.1364/AO.56.000F45.


[43] T. Inoue, Y. Takaki, Table screen 360-degree holographic display using circular [71] D. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (4) (2006) 1289–
viewing-zone scanning, Opt. Express 23 (5) (2015) 6533–6542, http://dx.doi.org/ 1306, http://dx.doi.org/10.1109/TIT.2006.871582.
10.1364/OE.23.006533. [72] L. Williams, G. Nehmetallah, P.P. Banerjee, Digital tomographic compressive holo-
[44] T. Kozacki, G. Finke, P. Garbat, W. Zaperty, M. Kujawińska, Wide angle holo- graphic reconstruction of three-dimensional objects in transmissive and reflective
graphic display system with spatiotemporal multiplexing, Opt. Express 20 (25) geometries, Appl. Opt. 52 (8) (2013) 1702–1710, http://dx.doi.org/10.1364/AO.
(2012) 27473–27481, http://dx.doi.org/10.1364/OE.20.027473. 52.001702.
[45] D. Close, M. Ernstoff, W. Hoffman, E. Opittek, R. Winner, Holographic lens and [73] D.J. Brady, K. Choi, D.L. Marks, R. Horisaki, S. Lim, Compressive holography, Opt.
liquid crystal image source for head-up display, US Patent 3, 915, 548, 1975. Express 17 (15) (2009) 13040–13049, http://dx.doi.org/10.1364/OE.17.013040.
[46] G.E. Moss, Holographic display panel for a vehicle windshield, US Patent 4, 790, [74] S. Bettens, C. Schretter, N. Deligiannis, P. Schelkens, Bounds and conditions
613 A, 1987. for compressive digital holography using wavelet sparsifying bases, IEEE Trans.
[47] W.J.A. Charles, I. Dumoulin, Robert D. Darrow, Display system for enhancing Comput. Imaging 3 (4) (2017) 592–604, http://dx.doi.org/10.1109/TCI.2017.
visualization of body structures during medical procedures, US Patent 5, 526, 812 2761742.
A, 1995. [75] S. Bettens, H. Yan, D. Blinder, H. Ottevaere, C. Schretter, P. Schelkens, Studies on
[48] H. Peng, D. Cheng, J. Han, C. Xu, W. Song, L. Ha, J. Yang, Q. Hu, Y. Wang, the sparsifying operator in compressive digital holography, Opt. Express 25 (16)
Design and fabrication of a holographic head-up display with asymmetric field (2017) 18656–18676, http://dx.doi.org/10.1364/OE.25.018656.
of view, Appl. Opt. 53 (29) (2014) H177–H185, http://dx.doi.org/10.1364/AO. [76] N. Pavillon, C.S. Seelamantula, J. Kühn, M. Unser, C. Depeursinge, Suppression
53.00H177. of the zero-order term in off-axis digital holography through nonlinear filter-
[49] H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, J.-H. Park, ing, Appl. Opt. 48 (34) (2009) H186–H195, http://dx.doi.org/10.1364/AO.48.
3D holographic head mounted display using holographic optical elements with 00H186.
astigmatism aberration compensation, Opt. Express 23 (25) (2015) 32025–32034, [77] J.A. Fessler, S. Sotthivirat, Simplified digital holographic reconstruction using
http://dx.doi.org/10.1364/OE.23.032025. statistical methods, in: 2004 International Conference on Image Processing
[50] G. Li, D. Lee, Y. Jeong, J. Cho, B. Lee, Holographic display for see-through (ICIP’04), Vol. 4, IEEE, 2004, pp. 2435–2438, http://dx.doi.org/10.1109/ICIP.
augmented reality using mirror-lens holographic optical element, Opt. Lett. 41 2004.1421593.
(11) (2016) 2486–2489, http://dx.doi.org/10.1364/OL.41.002486. [78] A. Bourquard, N. Pavillon, E. Bostan, C. Depeursinge, M. Unser, A practical
[51] J. Khan, C. Can, A. Greenaway, I. Underwood, A real-space interactive holographic inverse-problem approach to digital holographic reconstruction, Opt. Express 21
display based on a large-aperture HOE, Proc. SPIE 8644 (2013) http://dx.doi.org/ (3) (2013) 3417–3433, http://dx.doi.org/10.1364/OE.21.003417.
10.1117/12.2021633. [79] C. Schretter, D. Blinder, S. Bettens, H. Ottevaere, P. Schelkens, Regularized non-
[52] R. Hussler, S. Reichelt, N. Leister, E. Zschau, R. Missbach, A. Schwerdtner, Large convex image reconstruction in digital holographic microscopy, Opt. Express 25
real-time holographic displays: from prototypes to a consumer product, Proc. SPIE (14) (2017) 16491–16508, http://dx.doi.org/10.1364/OE.25.016491.
7237 (2009) http://dx.doi.org/10.1117/12.805873. [80] M.E. Lucente, Interactive computation of holograms using a look-up table, J.
[53] M.L. Huebschman, B. Munjuluri, H.R. Garner, Dynamic holographic 3-d image
Electron. Imaging 2 (1) (1993) 28–34, http://dx.doi.org/10.1117/12.133376.
projection, Opt. Express 11 (5) (2003) 437–445, http://dx.doi.org/10.1364/OE.
[81] S.-C. Kim, E.-S. Kim, Effective generation of digital holograms of three-
11.000437.
dimensional objects using a novel look-up table method, Appl. Opt. 47 (19) (2008)
[54] X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, W. Shen, A 360-degree floating
D55–D62, http://dx.doi.org/10.1364/AO.47.000D55.
3d display based on light field regeneration, Opt. Express 21 (9) (2013) 11237–
[82] T. Shimobaba, N. Masuda, T. Ito, Simple and fast calculation algorithm for
11247, http://dx.doi.org/10.1364/OE.21.011237.
computer-generated hologram with wavefront recording plane, Opt. Lett. 34 (20)
[55] J. Kim, Y. Lim, K. Hong, E.-Y. Chang, H. gon Choo, 360-degree tabletop color
(2009) 3133–3135, http://dx.doi.org/10.1364/OL.34.003133.
holographic display, in: Digital Holography and Three-Dimensional Imaging,
[83] A. Symeonidou, D. Blinder, A. Munteanu, P. Schelkens, Computer-generated holo-
Optical Society of America, 2017, p. W3A.1, http://dx.doi.org/10.1364/DH.2017.
grams by multiple wavefront recording plane method with occlusion culling, Opt.
W3A.1.
Express 23 (17) (2015) 22149–22161, http://dx.doi.org/10.1364/oe.23.022149.
[56] M. Oikawa, T. Shimobaba, T. Yoda, H. Nakayama, A. Shiraki, N. Masuda, T. Ito,
[84] D. Blinder, P. Schelkens, Accelerated computer generated holography using sparse
Time-division color electroholography using one-chip rgb led and synchronizing
bases in the STFT domain, Opt. Express 26 (2) (2018) 1461–1473, http://dx.doi.
controller, Opt. Express 19 (13) (2011) 12008–12013, http://dx.doi.org/10.
org/10.1364/OE.26.001461.
1364/OE.19.012008.
[85] K. Matsushima, S. Nakahara, Extremely high-definition full-parallax computer-
[57] W. Zaperty, T. Kozacki, M. Kujawińska, Multi-SLM color holographic 3D display
generated hologram created by the polygon-based method, Appl. Opt. 48 (34)
based on RGB spatial filter, J. Display Technol. 12 (12) (2016) 1724–1731.
(2009) H54–H63, http://dx.doi.org/10.1364/AO.48.000H54.
[58] A.W. Lohmann, R.G. Dorsch, D. Mendlovic, Z. Zalevsky, C. Ferreira, Space-
[86] K. Matsushima, M. Nakamura, S. Nakahara, Silhouette method for hidden surface
bandwidth product of optical signals and systems, J. Opt. Soc. Amer A 13 (3)
removal in computer holography and its acceleration using the switch-back tech-
(1996) 470–473, http://dx.doi.org/10.1364/JOSAA.13.000470.
nique, Opt. Express 22 (20) (2014) 24450–24465, http://dx.doi.org/10.1364/OE.
[59] J.W. Goodman, Introduction to Fourier Optics, Roberts & Company, 2004.
22.024450.
[60] R.W. Gerchberg, W.O. Saxton, A practical algorithm for the determination of the
phase from image and diffraction plane pictures, Optik 35 (1972) 237–246. [87] H. Nishi, K. Matsushima, Rendering of specular curved objects in polygon-based
[61] Y. Deng, D. Chu, Coherence properties of different light sources and their effect computer holography, Appl. Opt. 56 (13) (2017) F37–F44, http://dx.doi.org/10.
on the image sharpness and speckle of holographic displays, Sci. Rep. 7 (1) (2017) 1364/AO.56.000F37.
5893. [88] N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue,
[62] WG1, JPEG Pleno Scope, Use Cases and Requirements Ver.1.7, P. Schelkens (Ed.), N. Masuda, T. Ito, Band-limited double-step fresnel diffraction and its application
Doc. N74020, 74th JPEG Meeting, Geneva, Switzerland, 2017. to computer-generated holograms, Opt. Express 21 (7) (2013) 9192–9197, http:
[63] N. Pavillon, C. Arfire, I. Bergoënd, C. Depeursinge, Iterative method for zero-order //dx.doi.org/10.1364/OE.21.009192.
suppression in off-axis digital holography, Opt. Express 18 (15) (2010) 15318– [89] T. Yatagai, Stereoscopic approach to 3-d display using computer-generated holo-
15331, http://dx.doi.org/10.1364/OE.18.015318. grams, Appl. Opt. 15 (11) (1976) 2722–2729, http://dx.doi.org/10.1364/AO.15.
[64] I. Yamaguchi, T. Zhang, Phase-shifting digital holography, Opt. Lett. 22 (16) 002722.
(1997) 1268–1270, http://dx.doi.org/10.1364/OL.22.001268. [90] M. Yamaguchi, H. Hoshino, T. Honda, N. Ohyama, Phase-added stereogram:
[65] V. Mico, Z. Zalevsky, P. García-Martínez, J. García, Synthetic aperture superres- calculation of hologram using computer graphics technique, Proc. SPIE 1914
olution with multiple off-axis holograms, J. Opt. Soc. Amer A 23 (12) (2006) (1993) 25–31, http://dx.doi.org/10.1117/12.155027.
3162–3170, http://dx.doi.org/10.1364/JOSAA.23.003162. [91] R.H.-Y. Chen, T.D. Wilkinson, Computer generated hologram with geometric
[66] A. Pelagotti, M. Paturzo, M. Locatelli, A. Geltrude, R. Meucci, A. Finizio, P. occlusion using gpu-accelerated depth buffer rasterization for three-dimensional
Ferraro, An automatic method for assembling a large synthetic aperture digital display, Appl. Opt. 48 (21) (2009) 4246–4255, http://dx.doi.org/10.1364/AO.48.
hologram, Opt. Express 20 (5) (2012) 4830–4839, http://dx.doi.org/10.1364/OE. 004246.
20.004830. [92] H. Zhang, Y. Zhao, L. Cao, G. Jin, Fully computed holographic stereogram based
[67] A.E.T. James R. Fienup, Gigapixel synthetic-aperture digital holography, Proc. algorithm for computer-generated holograms with accurate depth cues, Opt.
SPIE 8122 (2011) http://dx.doi.org/10.1117/12.894903. Express 23 (4) (2015) 3901–3913, http://dx.doi.org/10.1364/OE.23.003901.
[68] M. Kujawinska, Spatio-temporal multiplexing in digital holographic microscopy [93] L. Shi, F.-C. Huang, W. Lopes, W. Matusik, D. Luebke, Near-eye light field
and holographic displays, in: Digital Holography and Three-Dimensional Imaging, holographic rendering with spherical waves for wide field of view interactive
Optical Society of America, 2017, p. W1A.1, http://dx.doi.org/10.1364/DH.2017. 3d computer graphics, ACM Trans. Graph. 36 (6) (2017) 236:1–236:17, http:
W1A.1. //dx.doi.org/10.1145/3130800.3130832.
[69] M. Paturzo, F. Merola, S. Grilli, S.D. Nicola, A. Finizio, P. Ferraro, Super-resolution [94] S. Igarashi, T. Nakamura, K. Matsushima, M. Yamaguchi, Efficient tiled calculation
in digital holography by a two-dimensional dynamic phase grating, Opt. Express of over-10-gigapixel holograms using ray-wavefront conversion, Opt. Express 26
16 (21) (2008) 17107–17118, http://dx.doi.org/10.1364/OE.16.017107. (8) (2018) 10773–10786, http://dx.doi.org/10.1364/OE.26.010773.
[70] R. Horisaki, Y. Ogura, M. Aino, J. Tanida, Single-shot phase imaging with a coded [95] K. Wakunami, H. Yamashita, M. Yamaguchi, Occlusion culling for computer
aperture, Opt. Lett. 39 (22) (2014) 6466–6469, http://dx.doi.org/10.1364/OL. generated hologram based on ray-wavefront conversion, Opt. Express 21 (19)
39.006466. (2013) 21811–21822, http://dx.doi.org/10.1364/OE.21.021811.


[96] A. Symeonidou, D. Blinder, B. Ceulemans, A. Munteanu, P. Schelkens, Three- [121] E. Darakis, J.J. Soraghan, Reconstruction domain compression of phase-shifting
dimensional rendering of computer-generated holograms acquired from point- digital holograms, Appl. Opt. 46 (3) (2007) 351–356, http://dx.doi.org/10.1364/
clouds on light field displays, Proc. SPIE 9971 (2016) http://dx.doi.org/10.1117/ AO.46.000351.
12.2237581. [122] M. Liebling, T. Blu, M. Unser, Fresnelets: New multiresolution wavelet bases for
[97] A. Symeonidou, D. Blinder, P. Schelkens, Colour computer-generated holography digital holography, Trans. Image Proc. 12 (1) (2003) 29–43, http://dx.doi.org/
for point clouds utilizing the Phong illumination model, Opt. Express 26 (8) (2018) 10.1109/TIP.2002.806243.
10282–10298, http://dx.doi.org/10.1364/OE.26.010282. [123] E. Darakis, T.J. Naughton, J.J. Soraghan, Compression defects in different recon-
[98] A. Symeonidou, D. Blinder, A. Ahar, C. Schretter, A. Munteanu, P. Schelkens, structions from phase-shifting digital holographic data, Appl. Opt. 46 (21) (2007)
Speckle noise reduction for computer generated holograms of objects with diffuse 4579–4586, http://dx.doi.org/10.1364/AO.46.004579.
surfaces, Proc. SPIE 9896 (2016) http://dx.doi.org/10.1117/12.2225201. [124] M.P. Marco, V. Bernardo, Antnio M.G. Pinheiro, Benchmarking coding standards
[99] ISO/IEC JTC1/SC29/WG1 (JPEG) Website – JPEG Pleno Database, https://jpeg. for digital holography represented on the object plane, Proc. SPIE 10679 (2018)
org/jpegpleno/plenodb.html (Accessed: 10.08.18). http://dx.doi.org/10.1117/12.2315361.
[100] T. Shimobaba, T. Ito, Fast generation of computer-generated holograms using [125] Y. Xing, B. Pesquet-Popescu, F. Dufaux, Compression of computer generated
wavelet shrinkage, Opt. Express 25 (1) (2017) 77–87, http://dx.doi.org/10.1364/ phase-shifting hologram sequence using avc and hevc, Proc. SPIE 8856 (2013)
OE.25.000077. http://dx.doi.org/10.1117/12.2027148.
[101] D. Blinder, C. Schretter, H. Ottevaere, A. Munteanu, P. Schelkens, Unitary [126] D. Blinder, C. Schretter, P. Schelkens, Global motion compensation for com-
transforms using time-frequency warping for digital holograms of deep scenes, pressing holographic videos, Opt. Express 26 (20) (2018) 25524–25533, http:
IEEE Trans. Comput. Imaging 4 (2) (2018) 206–218, http://dx.doi.org/10.1109/ //dx.doi.org/10.1364/OE.26.025524.
TCI.2018.2813167. [127] D. Blinder, A. Ahar, A. Symeonidou, Y. Xing, T. Bruylants, C. Schretter, B.
[102] A. Gilles, P. Gioia, R. Cozot, L. Morin, Computer generated hologram from Pesquet-Popescu, F. Dufaux, A. Munteanu, P. Schelkens, Open access database
Multiview-plus-Depth data considering specular reflections, in: 2016 IEEE In- for experimental validations of holographic compression engines, in: 2015 Sev-
ternational Conf. Multimedia Expo Workshops (ICMEW), 2016, pp. 1–6, http: enth International Workshop on Quality of Multimedia Experience (QoMEX),
//dx.doi.org/10.1109/ICMEW.2016.7574699. 2015, pp. 1–6, http://dx.doi.org/10.1109/QoMEX.2015.7148145, URL http://
[103] T. Shimobaba, H. Nakayama, N. Masuda, T. Ito, Rapid calculation algorithm erc-interfere.eu/downloads.html.
of Fresnel computer-generated-hologram using look-up table and wavefront- [128] EmergImg-holograil holographic database. URL http://emergimg.di.ubi.pt/
recording plane methods for three-dimensional display, Opt. Express 18 (19) downloads.html.
(2010) 19504–19509, http://dx.doi.org/10.1364/OE.18.019504. [129] T. Ebrahimi, S. Foessel, F. Pereira, P. Schelkens, Jpeg pleno: Toward an efficient
[104] N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. representation of visual reality, IEEE MultiMedia 23 (4) (2016) 14–20, http:
Masuda, T. Ito, Fast high-resolution computer-generated hologram computation //dx.doi.org/10.1109/MMUL.2016.64.
using multiple graphics processing unit cluster system, Appl. Opt. 51 (30) (2012) [130] W. Lin, C.-C. Jay Kuo, Perceptual visual quality metrics: A survey, J. Vis. Commun.
7303–7307, http://dx.doi.org/10.1364/AO.51.007303. Image Represent. 22 (4) (2011) 297–312, http://dx.doi.org/10.1016/j.jvcir.2011.
[105] T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. 01.005.
[131] A. Ahar, D. Blinder, T. Bruylants, C. Schretter, A. Munteanu, P. Schelkens, Sub-
Ichihashi, A. Shiraki, M. Oikawa, N. Takada, et al., High-performance parallel
jective quality assessment of numerically reconstructed compressed holograms,
computing for next-generation holographic imaging, Nat. Electron. 1 (4) (2018)
in: SPIE Optical Engineering+ Applications, International Society for Optics and
254.
Photonics, 2015, 95990K–95990K.
[106] b-com hologram repository, URL https://hologram-repository.labs.b-com.com/
[132] A.V. Oppenheim, J.S. Lim, The importance of phase in signals, Proc. IEEE 69 (5)
#/hologram-repository.
(1981) 529–541.
[107] T. Senoh, K. Wakunami, Y. Ichihashi, H. Sasaki, R. Oi, K. Yamamoto, Multiview
[133] W.H. Hsiao, R.P. Millane, Effects of fourier-plane amplitude and phase errors on
image and depth map coding for holographic tv system, Opt. Eng. 53 (11) (2014)
image reconstruction. i. small amplitude errors, J. Opt. Soc. Amer A 24 (10) (2007)
112302, http://dx.doi.org/10.1117/1.OE.53.11.112302.
3180–3188.
[108] D. Blinder, T. Bruylants, H. Ottevaere, A. Munteanu, P. Schelkens, JPEG 2000-
[134] X.S. Ni, X. Huo, Statistical interpretation of the importance of phase information
based compression of fringe patterns for digital holographic microscopy, Opt. Eng.
in signal and image reconstruction, Statist. Probab. Lett. 77 (4) (2007) 447–454.
53 (12) (2014) 123102, http://dx.doi.org/10.1117/1.OE.53.12.123102.
[135] M. Narwaria, W. Lin, I.V. McLoughlin, S. Emmanuel, L.T. Chia, Fourier transform-
[109] Y. Xing, M. Kaaniche, B. Pesquet-Popescu, F. Dufaux, Adaptive nonseparable
based scalable image quality measure, IEEE Trans. Image Process. 21 (8) (2012)
vector lifting scheme for digital holographic data compression, Appl. Opt. 54 (1)
3364–3377.
(2015) A98–A109, http://dx.doi.org/10.1364/AO.54.000A98.
[136] A. Ahar, T. Birnbaum, C. Jaeh, P. Schelkens, A new similarity measure for complex
[110] L.T. Bang, Z. Ali, P.D. Quang, J.-H. Park, N. Kim, Compression of digital hologram
valued data, in: Digital Holography and Three-Dimensional Imaging, Optical
for three-dimensional object using Wavelet-Bandelets transform, Opt. Express 19
Society of America, 2017, p. Tu1A.6, http://dx.doi.org/10.1364/DH.2017.Tu1A.
(9) (2011) 8019–8031, http://dx.doi.org/10.1364/OE.19.008019.
6.
[111] T. Birnbaum, D. Blinder, C. Schretter, P. Schelkens, Compressing macroscopic [137] A. Ahar, A. Barri, P. Schelkens, From sparse coding significance to perceptual
near-field digital holograms with wave atoms, in: Imaging and Applied Optics quality: A new approach for image quality assessment, IEEE Trans. Image Process.
2018, Optical Society of America, 2018, p. DW2F.5, http://dx.doi.org/10.1364/ 27 (2) (2018) 879–893, http://dx.doi.org/10.1109/TIP.2017.2771412.
DH.2018.DW2F.5. [138] A. Ahar, T. Birnbaum, D. Blinder, A. Symeonidou, P. Schelkens, Performance
[112] J.P. Peixeiro, C. Brites, J. Ascenso, F. Pereira, Holographic data coding: Bench- evaluation of sparseness significance ranking measure (ssrm) on holographic
marking and extending hevc with adapted transforms, IEEE Trans. Multimed. 20 content, in: Imaging and Applied Optics 2018, Optical Society of America, 2018,
(2) (2018) 282–297, http://dx.doi.org/10.1109/TMM.2017.2742701. p. JTu4A.10, http://dx.doi.org/10.1364/3D.2018.JTu4A.10.
[113] M.V. Bernardo, P. Fernandes, A. Arrifano, M. Antonini, E. Fonseca, P.T. Fiadeiro, [139] T.M. Lehtimäki, M. Niemelä, R. Näsänen, R.G. Reilly, T.J. Naughton, Using
A.M. Pinheiro, M. Pereira, Holographic representation: Hologram plane vs. object traditional glass plate holograms to study visual perception of future digital
plane, Signal Process., Image Commun. 68 (2018) 193–206, http://dx.doi.org/10. holographic displays, in: Imaging and Applied Optics 2016, Optical Society of
1016/j.image.2018.08.006. America, 2016, p. JW4A.20, http://dx.doi.org/10.1364/3D.2016.JW4A.20.
[114] L.M. Kartik Viswanathan, Patrick Gioia, Wavelet compression of digital holo- [140] A. Ahar, D. Blinder, T. Bruylants, C. Schretter, A. Munteanu, P. Schelkens, Sub-
grams: Towards a view-dependent framework, Proc. SPIE 8856 (2013) http: jective quality assessment of numerically reconstructed compressed holograms,
//dx.doi.org/10.1117/12.2027199. Proc. SPIE 9599 (2015) http://dx.doi.org/10.1117/12.2189887.
[115] K. Viswanathan, P. Gioia, L. Morin, A framework for view-dependent hologram [141] T.E. Irene Viola, A new approach to subjectively assess quality of plenoptic
representation and adaptive reconstruction, in: 2015 IEEE International Confer- content, Proc. SPIE 9971 (2016) http://dx.doi.org/10.1117/12.2240279.
ence on Image Processing (ICIP), 2015, pp. 3334–3338, http://dx.doi.org/10. [142] Rec. ITU-R P.910 Subjective video quality assessment methods for multimedia
1109/ICIP.2015.7351421. applications, International Telecommunication Union, 2008.
[116] A. El Rhammad, P. Gioia, A. Gilles, A. Cagnazzo, B. Pesquet-Popescu, Color digital [143] Rec. ITU-R P.913 Methods for the subjective assessment of video quality, audio
hologram compression based on matching pursuit, Appl. Opt. 57 (17) (2018) quality and audiovisual quality of internet video and distribution quality television
4930–4942, http://dx.doi.org/10.1364/AO.57.004930. in any environment, International Telecommunication Union, 2016.
[117] Anas El Rhammad, Patrick Gioia, Antonin Gilles, Marco Cagnazzo, Batrice [144] Rec. ITU-R BT.500-13 Methodology for the subjective assessment of the quality
Pesquet-Popescu, View-Dependent Compression of Digital Hologram Based on of television pictures, International Telecommunication Union, 2012.
Matching Pursuit, Vol. 10679, SPIE, 2018, p. 106790L, http://dx.doi.org/10. [145] G. Finke, M. Kujawińska, T. Kozacki, Visual perception in multi SLM holographic
1117/12.2315233. displays, Appl. Opt. 54 (12) (2015) 3560–3568, http://dx.doi.org/10.1364/AO.
[118] D.C. Ghiglia, M.D. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algo- 54.003560.
rithms, and Software, Vol. 4, Wiley New York, 1998. [146] P. Schelkens, Z.Y. Alpaslan, T. Ebrahimi, K.-J. Oh, F.M.B. Pereira, A.M.G. Pinheiro,
[119] D. Blinder, Efficient Representation, Generation and Compression of Digital I. Tabus, Z. Chen, JPEG Pleno: a standard framework for representing and
Holograms (Ph.D. thesis), Vrije Universiteit Brussel, 2018. signalling plenoptic modalities, in: Proc. of SPIE, Optics and Photonics 2018,
[120] D. Blinder, H. Ottevaere, A. Munteanu, P. Schelkens, Efficient multiscale phase Applications of Digital Image Processing XLI, Vol. 10752.
unwrapping methodology with modulo wavelet transform, Opt. Express 24 (20) [147] ISO/IEC JTC1/SC29/WG1 (JPEG) Website, https://www.jpeg.org.
(2016) 23094–23108, http://dx.doi.org/10.1364/OE.24.023094.

