SPRINGER BRIEFS IN COMPUTER SCIENCE

Image Quality Assessment of Computer-generated Images Based on Machine Learning and Soft Computing
SpringerBriefs in Computer Science
Series editors
Stan Zdonik, Brown University, Providence, Rhode Island, USA
Shashi Shekhar, University of Minnesota, Minneapolis, Minnesota, USA
Xindong Wu, University of Vermont, Burlington, Vermont, USA
Lakhmi C. Jain, University of South Australia, Adelaide, South Australia, Australia
David Padua, University of Illinois Urbana-Champaign, Urbana, Illinois, USA
Xuemin Sherman Shen, University of Waterloo, Waterloo, Ontario, Canada
Borko Furht, Florida Atlantic University, Boca Raton, Florida, USA
V. S. Subrahmanian, University of Maryland, College Park, Maryland, USA
Martial Hebert, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Katsushi Ikeuchi, University of Tokyo, Tokyo, Japan
Bruno Siciliano, Università di Napoli Federico II, Napoli, Italy
Sushil Jajodia, George Mason University, Fairfax, Virginia, USA
Newton Lee, Newton Lee Laboratories, LLC, Burbank, California, USA
More information about this series at http://www.springer.com/series/10028
André Bigand
LISIC, Université du Littoral Côte d'Opale
Calais Cedex, France

Julien Dehos
Université du Littoral Côte d'Opale
Dunkirk, France

Christophe Renaud
Université du Littoral Côte d'Opale
Dunkirk, France

Joseph Constantin
Faculty of Sciences II, Lebanese University
Beirut, Lebanon
Preface

The measurement of image (and video) quality remains a research challenge and a very active field of investigation in image processing. One solution consists of assigning a subjective score to the image quality (with respect to a reference, or without one) obtained from human observers. Setting up such psycho-visual tests is very expensive (in time and human organization) and requires clear and strict procedures. Algorithmic solutions (objective scores) have been developed to avoid such tests. Some of these techniques are based on modeling the Human Visual System (HVS) to mimic human behavior, but they are complex. In the case of natural scenes, a great number of image (or video) quality databases exist, which makes it possible to validate these different techniques. Soft computing (machine learning, fuzzy logic, etc.), widely used in many scientific fields such as biology, medicine, management sciences, financial sciences, plant control, etc., is also a very useful cross-disciplinary tool in image processing. These tools have been used to establish image quality and they are now well known.
Emerging topics in recent years concern image synthesis, applied in virtual reality, augmented reality, movie production, interactive video games, etc. For example, unbiased global illumination methods based on stochastic techniques can provide photo-realistic images whose content is indistinguishable from real photography. But there is a price: these images are prone to noise that can only be reduced by increasing the number of computed samples of the involved methods, and consequently their computation time. The problem of finding the number of samples required to ensure that most observers cannot perceive any noise is still open, since the ideal image is unknown.
Image Quality Assessment (IQA) is well known for natural-scene images. Image quality (or noise) evaluation of computer-generated images is slightly different, since image generation is different and databases are not yet developed. In this short book, we address this problem by focusing on the visual perception of noise. Rather than use known perceptual models, we investigate the use of soft computing approaches classically used in Artificial Intelligence (AI), such as full-reference and reduced-reference metrics. We propose to use
Experiments presented in this book (Chaps. 2 and 3) were carried out using the
CALCULCO computing platform, supported by SCoSI/ULCO (Service COmmun
du Système d’Information de l’Université du Littoral Côte d’Opale), and the
open-source renderer PBRT-v3 by Matt Pharr, Wenzel Jakob, and Greg Humphreys
(http://pbrt.org).
Contents

1 Introduction
  1.1 Natural-Scene Images, Computer-generated Images
  1.2 Image Quality Assessment Models
  1.3 Organization of the Book
  References
2 Monte Carlo Methods for Image Synthesis
  2.1 Introduction
  2.2 Light Transport
    2.2.1 Radiometry
    2.2.2 Formulation of Light Transport
  2.3 Monte Carlo Integration
    2.3.1 Monte Carlo Estimator
    2.3.2 Convergence Rate
    2.3.3 Variance Reduction Using Importance Sampling
  2.4 Path Tracing
    2.4.1 Random Walk
    2.4.2 The Path-Tracing Algorithm
    2.4.3 Global Illumination
  2.5 Conclusion
  References
3 Visual Impact of Rendering on Image Quality
  3.1 Introduction
  3.2 Influence of Rendering Parameters
    3.2.1 Path Length
    3.2.2 Number of Path Samples
About the Authors

André Bigand (IEEE Member) received a Ph.D. from the University Paris 6 and the HdR degree from the Université du Littoral Côte d'Opale (ULCO, France). He is currently a senior associate professor at ULCO. His current research interests include uncertainty modeling and machine learning with applications to image processing and image synthesis (particularly noise modeling and filtering). He is currently with the LISIC Laboratory (ULCO). He is author and coauthor of scientific papers in international journals and books, and of communications to conferences with reviewing committees. He has many years of experience in teaching and lecturing. He is a visiting professor at the Lebanese University (UL), where he teaches machine learning and pattern recognition in the research master STIP. E-mail: bigand@lisic.univ-littoral.fr. Website: http://www-lisic.univ-littoral.fr/~bigand/.
Chapter 1
Introduction

Image Quality Assessment (IQA) aims to characterize the visual quality of an image.
Indeed, there are many sources of image degradation, for example, optical distortion,
sensor noise, compression algorithms, etc., so IQA is useful to evaluate the perceived
quality of an image or to optimize an imaging process. IQA has been well studied for
natural-scene images (captured by a camera) but there is far less work for computer-
generated images (rendered from a virtual scene). This book aims to review the recent
advances in Image Quality Assessment for computer-generated images.
1.1 Natural-Scene Images, Computer-generated Images

Natural-scene images are obtained by sampling and digitizing the light coming from a natural scene with a sensor (CCD, CMOS, etc.). Many aspects are important to obtain "good quality" images: lighting conditions, the optical system of the camera, sensor quality, etc. An exhaustive presentation of these topics is given in Xu et al. (2015). The authors present the methods involved in subjective and objective visual quality assessment. In particular, they present image and video quality databases, which are very important for comparing the obtained scores, and they address the interest of machine learning for IQA. We will therefore not cover these topics again, and we recommend that the reader consult this presentation if necessary.
High-quality computer-generated images are obtained from computer simulations of light transport in virtual 3D scenes. Computing such a photo-realistic image requires modeling the virtual scene precisely: light sources, object geometries, object materials, virtual camera, etc. It also requires using a physically based rendering algorithm which accurately simulates the light propagation in the virtual scene and the light–matter interactions.

(© The Author(s) 2018. A. Bigand et al., Image Quality Assessment of Computer-generated Images, SpringerBriefs in Computer Science, https://doi.org/10.1007/978-3-319-73543-6_1)

Today, the vast majority of physically based renderers are based on stochastic methods. Path tracing (Kajiya 1986) is a core rendering
algorithm which generates many random paths from the camera to a light source,
through the virtual scene. Since the paths are chosen randomly, the light contribution
can change greatly from one path to another, which can generate high-frequency
color variations in the rendered image (Shirley et al. 1996) known as perceptual
noise. The Monte Carlo theory ensures that this process will converge to the correct
image when the number of sample paths grows; however, this may require a great
number of paths and a high computation time (typically hours per image). Thus, to
render an image in an acceptable time, it is important to compute a number of paths
as small as possible. However, it is difficult to predict how many sample paths are
really required to obtain a “good quality” image or which random paths are the best
for increasing the convergence rate. Moreover, it is difficult even to determine whether a rendered image has sufficiently converged.
To summarize, the main differences between natural-scene images and computer-
generated images (for IQA) are the following:
• Since perceptual noise is intrinsic to the image generation process, a computer-
generated image is converged when no perceptual noise is noticeable in the final
image.
• Image databases for computer-generated images are limited and costly to obtain
(psycho-visual index obtained from human observers).
• Noise features are the most important image features to consider for computer-
generated images.
The final use of computer-generated images is to be seen by human observers, who are generally very sensitive to image artifacts. The Human Visual System (HVS) is remarkably effective but also a very complex process. Consequently, perception-driven approaches were proposed to determine if rendered images are converged. The main idea of such approaches is to replace the human observer by a vision model. By mimicking the HVS, such techniques can provide important improvements for rendering. They can be used to drive rendering algorithms toward visually satisfactory images and to focus on visually important features (Mitchell 1987; Farrugia and Péroche 2004; Longhurst et al. 2006). HVS models provide interesting results but are complex, still incomplete, difficult to set up, and generally require relatively long computation times. Therefore, the methods presented in this book focus on the use of a new noise-based perceptual index to replace the psycho-visual index in the perception-driven model assessment. Perceptual noise is considered from a machine learning point of view (noise features) or a soft computing point of view (fuzzy entropy used to estimate the noise level).
1.2 Image Quality Assessment Models

Image quality assessment models are very important to characterize the visual quality of an image. For example, they are of great interest for image compression (JPEG models) and natural image characterization (Lahoudou et al. 2010). In the literature,
IQA models are usually classified into three families (see Lahoudou et al. 2011 and Beghdadi et al. 2013 for a brief review of IQA and machine learning):
• Full-reference models, which use the original version of the image to estimate the quality of the processed version. These models are the most widely used methods to evaluate image quality (for example, the well-known PSNR and SSIM). They are easy to compute in real time and correlate with human subjective appreciation, but they require a reference image. Unfortunately, these models are not applicable to computer-generated images since the final reference image is not yet known during the image generation process.
• No-reference models, which evaluate the quality of images without access to reference images. Some recent papers (Ferzli and Karam 2005; Zhang et al. 2011) proposed no-reference quality assessment methods with good results but limited to JPEG images. Other methods were proposed for computer-generated images with some success (Delepoulle et al. 2012), but a complete framework has yet to be defined.
• Reduced-reference models, which analyze the processed image using some relevant information to calculate the quality of the resulting image. This family seems particularly interesting for our study, as we will show in the remainder of the book.
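To make the full-reference family concrete, here is a minimal sketch of PSNR computed from a reference image and a processed image. This is our illustration, not code from the book; the function name and the 8-bit peak value of 255 are assumptions, and real IQA pipelines typically rely on library implementations.

```python
import math

def psnr(reference, processed, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-sized
    grayscale images, given as lists of pixel rows."""
    count = 0
    squared_error = 0.0
    for ref_row, proc_row in zip(reference, processed):
        for a, b in zip(ref_row, proc_row):
            squared_error += (a - b) ** 2
            count += 1
    mse = squared_error / count  # mean squared error
    if mse == 0.0:
        return float("inf")      # identical images
    return 10.0 * math.log10(peak * peak / mse)

# A uniform error of 10 gray levels gives MSE = 100:
reference = [[0, 0], [0, 0]]
noisy = [[10, 10], [10, 10]]
print(round(psnr(reference, noisy), 2))
```

Higher PSNR means the processed image is closer to the reference, which is exactly why the metric is unusable during rendering: the converged reference image is the unknown we are trying to compute.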
In the last decade, numerous IQA methods for computer-generated images have been proposed, but the resulting models are limited in practice and are still under investigation. Currently, the classical way to characterize image quality remains psycho-visual experiments (human-in-the-loop experiments (Faugeras 1979)).
In this book, we assume that the reader is familiar with the basic aspects of machine learning and image processing, and we focus only on the visual quality assessment of computer-generated images using soft computing. We present recent techniques to assess whether a photo-realistic computer-generated image is noisy, based on full-reference, reduced-reference, and no-reference image quality methods, using learning machines and fuzzy sets. These techniques make it possible to quantify the uncertainty introduced by the perceptual noise affecting the image synthesis process. Note that we mainly focus on grayscale images, or more precisely the "L" component of "Lab" color images, since noise mainly affects this component (Carnet et al. 2008).
1.3 Organization of the Book

In Chap. 2, we introduce image rendering to the reader. We present the basic
notions of light transport and the equations that formalize rendering. We then recall
the Monte Carlo method and detail the path-tracing algorithm which is the core of
many renderers currently used in the computer graphics industry.
In Chap. 3, we study the visual impact of the rendering process on the quality of
the rendered image. We present experimental results obtained from a path-tracing
renderer and illustrate the influence of several parameters (virtual scene and rendering
parameters) on the visual quality of the rendered image.
Chapter 4 introduces image quality evaluation using full-reference methods.
We present a conventional way to obtain noise attributes from computer-generated
images and also introduce the use of deep learning to automatically extract them.
We then present how to use Support Vector Machines (SVM) and Relevance Vector
Machines (RVM) as image quality metrics.
Chapter 5 introduces image quality evaluation using reduced-reference methods.
We present Fast Relevance Vector Machines (FRVM) and explain image quality
evaluation using FRVM and inductive learning. Both methods are then compared on
experimental results.
Chapter 6 introduces no-reference methods using fuzzy sets. We present the
Interval-Valued Fuzzy Set (IVFS) and an entropy based on IVFS. We then detail
an image noise estimation method which uses IVFS, and we present promising experimental results obtained with computer-generated images.
In conclusion, Chap. 7 summarizes the important notions presented in this book
and gives some perspectives.
References
Shirley P, Wang C, Zimmerman K (1996) Monte Carlo techniques for direct lighting calculations.
ACM Trans Graph 15(1):1–36
Xu L, Lin W, Kuo CCJ (2015) Visual Quality Assessment by Machine Learning, vol 28. Springer
Brief, London
Zhang J, Ong S, Thinh M (2011) Kurtosis-based no-reference quality assessment of JPEG2000 images. Sig Process Image Commun 26(1):13–23
Chapter 2
Monte Carlo Methods for Image Synthesis
2.1 Introduction
Image synthesis (also called rendering) consists in generating an image from a virtual
3D scene (composed of light sources, objects, materials, and a camera). Numerous
rendering algorithms have been proposed since the 1970s: z-buffer (Catmull 1974),
ray tracing (Whitted 1980), radiosity (Goral et al. 1984), path tracing (Kajiya 1986),
and Reyes (Cook et al. 1987)…
Physically based rendering algorithms (also called photo-realistic rendering algo-
rithms) try to satisfy the physical rules describing the light transport. These algorithms
are commonly used to generate high-quality images (see Fig. 2.1), for example, in
the cinema industry, and include path tracing, photon mapping (Jensen 2001), bidi-
rectional path tracing (Lafortune and Willems 1993; Veach and Guibas 1994), and
metropolis light transport (Veach and Guibas 1997)…
In this book, we only consider the path-tracing algorithm since it is widely used
in modern renderers and is the basis of many other rendering algorithms. In this
chapter, we present the fundamental notions of light transport, which physically
describes rendering. Then, we present the Monte Carlo method, which is the core
computing method used in physically based rendering algorithms. Finally, we detail
the path-tracing algorithm.
2.2 Light Transport

2.2.1 Radiometry

Detailed presentations of radiometry can be found in the literature (Pharr and Humphreys 2010; Jakob 2013). The notations used in this chapter are mainly inspired by Eric Veach's PhD thesis (Veach 1997).
2.2.1.1 Radiant Flux

Radiant flux (Φ) is the quantity of energy per unit of time (watt):

$$\Phi = \frac{dQ}{dt} \quad [\mathrm{W}] \qquad (2.1)$$

Radiant flux measures the light received or emitted by a point of the scene (see Fig. 2.2).
2.2.1.2 Radiance

Radiance (L) is the flux per unit of area and per unit of projected solid angle (watt per square meter per steradian):

$$L(x \to x') = \frac{d^2 \Phi(x \to x')}{G(x \leftrightarrow x')\, dA(x)\, dA(x')} \quad [\mathrm{W\,m^{-2}\,sr^{-1}}] \qquad (2.2)$$
Fig. 2.2 The radiant flux is the quantity of light emitted from a point or received by a point

Fig. 2.3 Radiance is the flux emitted or received through a beam in a given direction
where G is the geometric function between the emitting surface and the receiving surface. The notation x → x′ indicates the direction of light flow. The notation G(x ↔ x′) indicates a symmetric function.

Radiance measures the flux received or emitted by a point through a beam (see Fig. 2.3). It is particularly useful for describing light transport in a scene.
$$f_s(x \to x' \to x'') = \frac{dL(x' \to x'')}{L(x \to x')\, G(x \leftrightarrow x')\, dA(x)} \quad [\mathrm{sr^{-1}}] \qquad (2.3)$$
The BRDF is useful for defining how a material reflects light (see Fig. 2.4).
2.2.2 Formulation of Light Transport

Using the previous radiometric quantities, we can formulate light transport, from the sources of the scene to the camera, and thus synthesize an image. Note that light transport can be formulated from light sources to camera as well as from camera to light sources, since it satisfies energy conservation.
Rendering consists in computing the radiance received by each pixel of the camera. The intensity I of a given pixel is defined by the measurement equation:

$$I = \int_{\mathcal{M} \times \mathcal{M}} W_e(x \to x')\, L(x \to x')\, G(x \leftrightarrow x')\, dA(x)\, dA(x') \qquad (2.4)$$

where $\mathcal{M}$ is the set of all points in the scene and $W_e$ the response of the camera. The measurement equation simply states that the intensity of a pixel is the sum of the radiances from all points x of the scene to all points x′ on the pixel (see Fig. 2.5).
Fig. 2.6 The rendering equation defines how the light is reflected from all incoming directions to an outgoing direction (a). It can be applied recursively for incoming directions to fully compute light transport in the scene (b)
The measurement equation describes how a point x of the scene contributes to the intensity of a pixel at a point x′. To synthesize an image, we also have to compute the radiance from the scene point toward the pixel point; this is described by the rendering equation (written here for the radiance from x′ to x″):

$$L(x' \to x'') = L_e(x' \to x'') + \int_{\mathcal{M}} f_s(x \to x' \to x'')\, L(x \to x')\, G(x \leftrightarrow x')\, dA(x) \qquad (2.5)$$

where $L_e$ is the light emitted at point x′ (light source). Thus, the radiance received by x″ from x′ is the sum of two terms: the light emitted by x′ toward x″, and the light coming from all points x of the scene and reflected at x′ toward x″ (see Fig. 2.6a).

Thus, we can compute the light at x′ using the rendering equation. However, this requires computing the light coming from other points x, i.e., computing the rendering equation recursively at these points (see Fig. 2.6b).
2.3 Monte Carlo Integration

The measurement equation (2.4) and the rendering equation (2.5) are well-defined integral equations. However, they are difficult to solve using analytic solutions or deterministic numerical methods, due to the complexity of the integrands and the high number of dimensions. Stochastic methods, such as Monte Carlo integration, are more suitable for computing such equations. Monte Carlo integration is the core of many physically based rendering algorithms such as path tracing.
2.3.1 Monte Carlo Estimator

The Monte Carlo estimator of an integral $I = \int_{\Omega} f(x)\, d\mu(x)$ over a domain Ω is:

$$I_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)} \qquad (2.7)$$

where $X_1, \ldots, X_N$ are points of Ω sampled independently using the density function p. We can show the validity of this estimator by computing the expected value of $I_N$:
$$\begin{aligned}
E[I_N] &= E\left[\frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}\right] \\
&= \frac{1}{N} \sum_{i=1}^{N} E\left[\frac{f(X_i)}{p(X_i)}\right] \\
&= \frac{1}{N} \sum_{i=1}^{N} \int_{\Omega} \frac{f(x)}{p(x)}\, p(x)\, d\mu(x) \\
&= \int_{\Omega} f(x)\, d\mu(x) \\
&= I
\end{aligned} \qquad (2.8)$$
2.3.2 Convergence Rate

The variance of the Monte Carlo estimator decreases linearly with the number of samples:
$$\begin{aligned}
V[I_N] &= V\left[\frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}\right] \\
&= \frac{1}{N^2}\, V\left[\sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}\right] \\
&= \frac{1}{N^2} \sum_{i=1}^{N} V\left[\frac{f(X_i)}{p(X_i)}\right] \\
&= \frac{1}{N}\, V\left[\frac{f(X)}{p(X)}\right]
\end{aligned} \qquad (2.9)$$
Thus, the RMS error converges at a rate of $O(1/\sqrt{N})$. This convergence rate is slow (increasing the number of samples by a factor of four only reduces the integration error by a factor of two), but it is not affected by the number of dimensions.
2.3.3 Variance Reduction Using Importance Sampling

The variance of the estimator vanishes if the sampling density p is proportional to f, so that the ratio is constant:

$$\frac{f(X)}{p(X)} = c \qquad (2.12)$$
In practice, we cannot choose such a density function p since the required constant
c is the value we are trying to compute. However, variance can be reduced by choosing
a density function which has a shape similar to f . In physically based renderers,
density functions are carefully implemented by considering the position of light
sources and the reflectance of materials.
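As a toy illustration of this idea (ours, with hypothetical function names), compare uniform sampling with importance sampling for f(x) = x² on [0, 1], using the density p(x) = 2x, which has a shape similar to f and can be sampled by inversion as x = √u:

```python
import random

def mc_uniform(f, n, rng):
    """Plain estimator: p(x) = 1 on [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_importance(f, n, rng):
    """Importance-sampled estimator with p(x) = 2x on [0, 1]."""
    total = 0.0
    for _ in range(n):
        x = rng.random() ** 0.5    # inverse-CDF sampling of p(x) = 2x
        total += f(x) / (2.0 * x)  # weight each sample by 1/p(x)
    return total / n

f = lambda x: x * x                # exact integral: 1/3
print(mc_uniform(f, 10_000, random.Random(0)))
print(mc_importance(f, 10_000, random.Random(0)))
```

Both estimators converge to 1/3, but here the per-sample variance drops from 4/45 ≈ 0.089 (uniform) to 1/72 ≈ 0.014 (importance sampling), roughly a 6× reduction, because f/p = x/2 varies much less than f itself.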
2.4 Path Tracing

2.4.1 Random Walk

Using the measurement equation (2.4), we can compute a pixel by integrating the radiance coming from all directions. To compute the radiance in a given direction, we can trace a light ray in this direction until an object is reached and compute the reflected radiance using the rendering equation (2.5). However, this equation requires integrating the radiance coming from all directions. This means that we have to trace many rays (for all these directions) and that we have to repeat this process recursively each time one of these rays reaches an object (i.e., tracing new supplementary rays). This naive approach has a huge memory cost and is unfeasible in practice.
The basic idea of the path-tracing algorithm is to randomly sample only one direction for evaluating the rendering equation. Thus, we can sample a path $x_1, \ldots, x_K$ from the camera to a light source and compute the contribution of this path to the pixel value (see Fig. 2.7). This can be seen as a random walk, which means we can estimate the value of a pixel by randomly sampling many paths $X_i$ and by computing the mean value of the contributions:

$$f(X_i) = W_e(x_1, x_2) \left[\prod_{k=2}^{K-1} f_s(x_{k+1}, x_k, x_{k-1})\, G(x_k, x_{k-1})\right] L_e(x_K, x_{K-1})\, G(x_K, x_{K-1}) \qquad (2.13)$$
2.4.2 The Path-Tracing Algorithm

The path-tracing algorithm was proposed by James T. Kajiya (Kajiya 1986). This algorithm implements a random walk for solving the rendering equation. It is currently used in many physically based renderers.

A pseudo-code implementation of path tracing is given in Algorithm 1. As explained previously, the algorithm computes each pixel by randomly sampling paths and computing the mean contribution of the paths for the pixel.
Fig. 2.7 A path (for example $X_i = x_1, x_2, x_3, x_4$) models the light transport from a light source ($x_1$) to a camera ($x_4$) after reflection in the scene ($x_2$ and $x_3$). The contribution of the path can be computed by developing the rendering equation and the measurement equation: $F(X_i) = L_e(x_1 \to x_2)\, G(x_1 \leftrightarrow x_2)\, f_s(x_1 \to x_2 \to x_3)\, G(x_2 \leftrightarrow x_3)\, f_s(x_2 \to x_3 \to x_4)\, G(x_3 \leftrightarrow x_4)\, W_e(x_3 \to x_4)$
Algorithm 1: Path Tracing (using N paths per pixel and a probability density function p)

for all pixels in the image do
  IN ← 0  {initialize the computed intensity of the pixel}
  for i ← 1 to N do
    sample a point x1 in the pixel
    Pi ← p(x1)  {initialize the probability of the path}
    Fi ← 1  {initialize the contribution of the path}
    loop
      sample a reflected direction and compute the corresponding point xk in the scene
      Pi ← Pi × p(xk, xk−1)
      if xk is on a light source then
        exit loop
      else
        Fi ← Fi × fs(xk+1, xk, xk−1) × G(xk, xk−1)
      end if
    end loop
    Fi ← Fi × We(x1, x2) × Le(xK, xK−1) × G(xK, xK−1)
    IN ← IN + Fi / (N × Pi)
  end for
  pixel ← IN
end for
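Algorithm 1 cannot be run without a full scene model, but its F/P ratio-estimator structure can be exercised on a drastically simplified one-dimensional analog (entirely our construction: a single "material" with constant reflectance, and at each bounce the path reaches the light with a fixed probability). The quantity being estimated is the geometric series emission × (1 + r + r² + …) = emission / (1 − reflectance):

```python
import random

def render_pixel(emission, reflectance, p_light, n_paths, rng):
    """Ratio estimator F/P from Algorithm 1, in a 1D toy setting.
    Each bounce multiplies the contribution F by the reflectance and the
    path probability P by (1 - p_light); reaching the light multiplies F
    by the emission and P by p_light. The expected result is the
    geometric series emission / (1 - reflectance)."""
    total = 0.0
    for _ in range(n_paths):
        F, P = 1.0, 1.0
        while True:
            if rng.random() < p_light:  # the path reaches the light source
                F *= emission
                P *= p_light
                break
            F *= reflectance            # the path bounces on a surface
            P *= 1.0 - p_light
        total += F / P
    return total / n_paths

rng = random.Random(0)
print(render_pixel(1.0, 0.5, 0.5, 1_000, rng))  # prints 2.0 exactly
```

With p_light = 0.5 matching reflectance = 0.5, the ratio F/P equals 2 for every path, so the estimator has zero variance, a 1D analog of perfect importance sampling; a mismatched termination probability (e.g. p_light = 0.3) still converges to the same value, only more noisily.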
2.5 Conclusion
References
Catmull EE (1974) A subdivision algorithm for computer display of curved surfaces. Ph.D. thesis,
The University of Utah
Cook RL, Carpenter L, Catmull E (1987) The Reyes image rendering architecture. In: Proceedings of the 14th annual conference on computer graphics and interactive techniques, SIGGRAPH '87, pp 95–102
Glassner AS (1994) Principles of digital image synthesis. Morgan Kaufmann Publishers Inc., San
Francisco
Goral CM, Torrance KE, Greenberg DP, Battaile B (1984) Modeling the interaction of light between
diffuse surfaces. In: Proceedings of the 11th annual conference on computer graphics and inter-
active techniques, SIGGRAPH ’84, pp 213–222
Jakob W (2013) Light transport on path-space manifolds. Ph.D. thesis, Cornell University
Jensen HW (2001) Realistic image synthesis using photon mapping. A. K. Peters Ltd., Natick
Kajiya J (1986) The rendering equation. ACM Comput Graph 20(4):143–150
Lafortune EP, Willems YD (1993) Bi-directional path tracing. In: Proceedings of third international
conference on computational graphics and visualization techniques (compugraphics ’93), Alvor,
Portugal, pp 145–153
Nayar SK, Krishnan G, Grossberg MD, Raskar R (2006) Fast separation of direct and global
components of a scene using high frequency illumination. ACM Trans Graph 25(3):935–944
Nicodemus FE, Richmond JC, Hsia JJ, Ginsberg IW, Limperis T (1977) Geometric considerations
and nomenclature for reflectance. National Bureau of Standards
Pharr M, Humphreys G (2010) Physically based rendering: from theory to implementation, 2nd
edn. Morgan Kaufmann Publishers Inc., San Francisco
Veach E (1997) Robust Monte Carlo methods for light transport simulation. Ph.D. thesis, Stanford
University
Veach E, Guibas LJ (1994) Bidirectional estimators for light transport. In: Eurographics rendering
workshop, pp 147–162
Veach E, Guibas LJ (1997) Metropolis light transport. Comput Graph 31(Annual Conference
Series):65–76
Vorba J, Křivánek J (2016) Adjoint-driven Russian roulette and splitting in light transport simulation. ACM Trans Graph 35(4):1–11
Whitted T (1980) An improved illumination model for shaded display. Commun ACM 23(6):343–
349