SPRINGER BRIEFS IN COMPUTER SCIENCE

André Bigand · Julien Dehos · Christophe Renaud · Joseph Constantin

Image Quality Assessment of Computer-generated Images Based on Machine Learning and Soft Computing
SpringerBriefs in Computer Science

Series editors
Stan Zdonik, Brown University, Providence, Rhode Island, USA
Shashi Shekhar, University of Minnesota, Minneapolis, Minnesota, USA
Xindong Wu, University of Vermont, Burlington, Vermont, USA
Lakhmi C. Jain, University of South Australia, Adelaide, South Australia, Australia
David Padua, University of Illinois Urbana-Champaign, Urbana, Illinois, USA
Xuemin Sherman Shen, University of Waterloo, Waterloo, Ontario, Canada
Borko Furht, Florida Atlantic University, Boca Raton, Florida, USA
V. S. Subrahmanian, University of Maryland, College Park, Maryland, USA
Martial Hebert, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Katsushi Ikeuchi, University of Tokyo, Tokyo, Japan
Bruno Siciliano, Università di Napoli Federico II, Napoli, Italy
Sushil Jajodia, George Mason University, Fairfax, Virginia, USA
Newton Lee, Newton Lee Laboratories, LLC, Burbank, California, USA
More information about this series at http://www.springer.com/series/10028
André Bigand · Julien Dehos · Christophe Renaud · Joseph Constantin

Image Quality Assessment of Computer-generated Images Based on Machine Learning and Soft Computing

André Bigand, LISIC, Université du Littoral Côte d’Opale, Calais Cedex, France
Julien Dehos, Université du Littoral Côte d’Opale, Dunkirk, France
Christophe Renaud, Université du Littoral Côte d’Opale, Dunkirk, France
Joseph Constantin, Faculty of Sciences II, Lebanese University, Beirut, Lebanon

ISSN 2191-5768 ISSN 2191-5776 (electronic)


SpringerBriefs in Computer Science
ISBN 978-3-319-73542-9 ISBN 978-3-319-73543-6 (eBook)
https://doi.org/10.1007/978-3-319-73543-6
Library of Congress Control Number: 2018932548

© The Author(s) 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

The measurement of image (and video) quality remains a research challenge and a very active field of investigation in image processing. One solution consists in assigning a subjective quality score to the image (with or without a reference), obtained from human observers. The setting of such psycho-visual tests is very expensive (in terms of time and human organization) and needs clear and strict procedures. Algorithmic solutions have been developed (objective scores) to avoid such tests. Some of these techniques are based on modeling the Human Visual System (HVS) to mimic human behavior, but they are complex. In the case of natural scenes, a great number of image (or video) quality databases exist, which makes the validation of these different techniques possible. Soft computing (machine learning, fuzzy logic, etc.), widely used in many scientific fields such as biology, medicine, management sciences, financial sciences, plant control, etc., is also a very useful cross-disciplinary tool in image processing. These tools have been used to assess image quality and are now well established.
Emerging topics of recent years concern image synthesis, applied in virtual reality, augmented reality, movie production, interactive video games, etc. For example, unbiased global illumination methods based on stochastic techniques can provide photo-realistic images whose content is indistinguishable from real photography. But there is a price: these images are prone to noise that can only be reduced by increasing the number of samples computed by the involved methods, and consequently their computation time. The problem of finding the number of samples required to ensure that most observers cannot perceive any noise is still open, since the ideal image is unknown.
Image Quality Assessment (IQA) is well studied for natural-scene images. Image quality (or noise) evaluation of computer-generated images is slightly different, since image generation is different and databases are not yet developed. In this short book, we address this problem by focusing on the visual perception of noise. Rather than using known perceptual models, we investigate the use of soft computing approaches, classically used in Artificial Intelligence (AI), as full-reference and reduced-reference metrics. We propose to use


such approaches to create a machine learning model, based on learning machines such as SVMs and RVMs, in order to predict whether an image exhibits perceptual noise. We also investigate the use of interval-valued fuzzy sets as a no-reference metric. Learning is performed through the use of an example database built from experiments on noise perception with human users. These models can then be used in any progressive stochastic global illumination method in order to find the visual convergence threshold of different parts of any image.
This short book is organized as follows: after a brief introduction (Chap. 1), Chap. 2 describes the Monte Carlo methods for image synthesis we use, and Chap. 3 briefly describes the visual impact of rendering on image quality and the interest of a noise model. In Chap. 4, image quality evaluation using SVMs and RVMs is introduced, and in Chap. 5 new learning algorithms that can be applied with interesting results are presented. Chapter 6 introduces an original method obtained from the application of fuzzy set entropy. Finally, the short book is summarized with some conclusions in Chap. 7.
The goal of this book is to present an emerging topic, that is to say IQA for computer-generated images, to students and practitioners of image processing and related areas such as computer graphics and visualization. In addition, students and practitioners should be familiar with the underlying techniques that make this possible (basics of image processing, machine learning, fuzzy sets). This monograph will be of interest to all people involved in image generation, virtual reality, augmented reality, and the new trends emerging around these topics.

Calais Cedex, France      André Bigand
Dunkirk, France           Julien Dehos
Dunkirk, France           Christophe Renaud
Beirut, Lebanon           Joseph Constantin
Acknowledgements

Experiments presented in this book (Chaps. 2 and 3) were carried out using the
CALCULCO computing platform, supported by SCoSI/ULCO (Service COmmun
du Système d’Information de l’Université du Littoral Côte d’Opale), and the
open-source renderer PBRT-v3 by Matt Pharr, Wenzel Jakob, and Greg Humphreys
(http://pbrt.org).

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Natural-Scene Images, Computer-generated Images . . . . . . . . . . . 1
1.2 Image Quality Assessment Models . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Organization of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Monte Carlo Methods for Image Synthesis . . . . . . . . . . . . . . . . . . . . 7
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Light Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Radiometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.2 Formulation of Light Transport . . . . . . . . . . . . . . . . . . . . 10
2.3 Monte Carlo Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.1 Monte Carlo Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.2 Convergence Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.3 Variance Reduction Using Importance Sampling . . . . . . . . 13
2.4 Path Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.1 Random Walk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.2 The Path-Tracing Algorithm . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.3 Global Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3 Visual Impact of Rendering on Image Quality . . . . . . . . . . . . . . . . . 19
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Influence of Rendering Parameters . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2.1 Path Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2.2 Number of Path Samples . . . . . . . . . . . . . . . . . . . . . . . . . 21


3.3 Influence of the Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


3.3.1 Light Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.2 Scene Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.3 Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4 Full-Reference Methods and Machine Learning . . . . . . . . . . . . . . . . 29
4.1 Image Quality Metrics Using Machine Learning Methods . . . . . . . 29
4.2 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.2 Data Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.3 Psycho-visual Scores Acquisition . . . . . . . . . . . . . . . . . . . 32
4.3 Noise Features Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.1 Classical Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.2 Pooling Strategies and Deep Learning Process . . . . . . . . . . 35
4.4 Image Quality Metrics Based on Supervised Learning
Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.1 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.2 Relevance Vector Machines . . . . . . . . . . . . . . . . . . . . . . . 41
4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5 Reduced-Reference Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2 Fast Relevance Vector Machine . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.3 Image Quality Evaluation (IQE) . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.3.1 IQE Using FRVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.3.2 IQE Using Inductive Learning . . . . . . . . . . . . . . . . . . . . . 55
5.4 Experimental Results and Discussion . . . . . . . . . . . . . . . . . . . . . . 57
5.4.1 Design of the Inductive Model Noise Features Vector . . . . 57
5.4.2 Inductive SVM Model Selection . . . . . . . . . . . . . . . . . . . . 58
5.4.3 Experiments Using Inductive Learning . . . . . . . . . . . . . . . 60
5.4.4 Comparison with the Fast Relevance Vector Machine . . . . 65
5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6 No-Reference Methods and Fuzzy Sets . . . . . . . . . . . . . . . . . . . . . . . 71
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.2 Interval-Valued Fuzzy Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.2.1 Uncertainty Representation . . . . . . . . . . . . . . . . . . . . . . . . 74
6.2.2 IVFS Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

6.3 IVFS for Image Noise Estimation . . . . . . . . . . . . . . . . . . . . . . . . 76


6.3.1 Design of the IVFS Image Noise Estimation . . . . . . . . . . . 76
6.3.2 Proposed Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.3.3 Algorithm and Implementation . . . . . . . . . . . . . . . . . . . . . 78
6.4 Experimental Results with a Computer-generated Image . . . . . . . . 80
6.4.1 Image Database for Noise Estimation . . . . . . . . . . . . . . . . 80
6.4.2 Performances of the Proposed Method . . . . . . . . . . . . . . . 82
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
7 General Conclusion and Perspectives . . . . . . . . . . . . . . . . . . . . . . . . 87
7.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.2 Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
About the Authors

André Bigand (IEEE Member) received a Ph.D. from the University Paris 6 and the HdR degree from the Université du Littoral Côte d’Opale (ULCO, France). He is currently a senior associate professor at ULCO. His current research interests include uncertainty modeling and machine learning with applications to image processing and image synthesis (particularly noise modeling and filtering). He is currently with the LISIC Laboratory (ULCO). He is the author or coauthor of scientific papers in international journals and books, and of communications to conferences with reviewing committees. He has years of experience in teaching and lecturing. He is a visiting professor at UL (Lebanese University), where he teaches “machine learning and pattern recognition” in the research master STIP. E-mail: bigand@lisic.univ-littoral.fr. Website: http://www-lisic.univ-littoral.fr/~bigand/.

Joseph Constantin obtained an M.S. in Software Engineering and Systems Modeling from the Lebanese University in 1997 and a Ph.D. in Automatic and Robotic Control from the Picardie Jules Verne University, France, in 2000. Since 2001, he has been a senior associate professor at the Lebanese University, Faculty of Sciences, and a researcher in the Applied Physics Laboratory of the Doctoral School of Sciences and Technology at the Lebanese University. His current research interests are in the fields of machine learning, image processing, robot dynamics and control, diagnosis systems, and biomedical engineering.

Christophe Renaud is Full Professor of Computer Science at Université du Littoral Côte d’Opale (ULCO). He received a Ph.D. in 1993 from the University of Lille and the HdR degree in 2002 from ULCO. His current research interests focus on photo-realistic rendering and on image processing and artificial intelligence applied to rendering techniques. He currently develops collaborations in the area of digital humanities with art historians and psychologists.

Julien Dehos is an Associate Professor of Computer Science at Université du Littoral Côte d’Opale (ULCO). His research interests include image synthesis, image processing, and artificial intelligence. He received an engineer’s degree from the ENSEIRB school and an M.S. from the University Bordeaux 1 in 2007, and his Ph.D. from ULCO in 2010.
Chapter 1
Introduction

Image Quality Assessment (IQA) aims to characterize the visual quality of an image.
Indeed, there are many sources of image degradation, for example, optical distortion,
sensor noise, compression algorithms, etc., so IQA is useful to evaluate the perceived
quality of an image or to optimize an imaging process. IQA has been well studied for
natural-scene images (captured by a camera) but there is far less work for computer-
generated images (rendered from a virtual scene). This book aims to review the recent
advances in Image Quality Assessment for computer-generated images.

1.1 Natural-Scene Images, Computer-generated Images

Natural-scene images are obtained by sampling and digitizing the light coming from
a natural scene, with a sensor (CCD, CMOS, etc.). Many aspects are important to
obtain “good quality” images: lighting conditions, optical system of the camera,
sensor quality, etc. An exhaustive presentation about those topics is given in (Xu
et al. 2015). The authors present the methods involved in subjective and objective visual quality assessment. In particular, they also present image and video quality databases, which are very important for comparing the obtained scores, and they address the interest of machine learning for IQA. So, we will not cover these topics again, and we recommend that the reader consult this presentation if necessary.
High-quality computer-generated images are obtained from computer simulations of light transport in virtual 3D scenes. Computing such a photo-realistic image requires modeling the virtual scene precisely: light sources, object geometries, object materials, virtual camera, etc. It also requires using a physically based rendering algorithm which accurately simulates the light propagation in the virtual scene and the light–matter interactions. Today, the vast majority of physically based renderers

are based on stochastic methods. Path tracing (Kajiya 1986) is a core rendering
algorithm which generates many random paths from the camera to a light source,
through the virtual scene. Since the paths are chosen randomly, the light contribution
can change greatly from one path to another, which can generate high-frequency
color variations in the rendered image (Shirley et al. 1996) known as perceptual
noise. The Monte Carlo theory ensures that this process will converge to the correct
image when the number of sample paths grows; however, this may require a great
number of paths and a high computation time (typically hours per image). Thus, to
render an image in an acceptable time, it is important to compute a number of paths
as small as possible. However, it is difficult to predict how many sample paths are
really required to obtain a “good quality” image or which random paths are the best
for increasing the convergence rate. Moreover, it is even difficult to determine if a
rendered image is sufficiently converged.
To summarize, the main differences between natural-scene images and computer-
generated images (for IQA) are the following:
• Since perceptual noise is intrinsic to the image generation process, a computer-
generated image is converged when no perceptual noise is noticeable in the final
image.
• Image databases for computer-generated images are limited and costly to obtain
(psycho-visual index obtained from human observers).
• Noise features are the most important image features to consider for computer-
generated images.
The final use of computer-generated images is to be seen by human observers, who are generally very sensitive to image artifacts. The Human Visual System (HVS) has powerful capabilities but is a very complex process. Consequently, perception-driven approaches were proposed to determine whether rendered images are converged. The main idea of such approaches is to replace the human observer by a vision model. By mimicking the HVS, such techniques can provide important improvements for rendering. They can be used for driving rendering algorithms to visually satisfactory images and for focusing on visually important features (Mitchell 1987; Farrugia and Péroche 2004; Longhurst et al. 2006). HVS models provide interesting results but are complex, still incomplete, and difficult to set up, and they generally require relatively long computation times. Therefore, the methods presented in this book focus on the use of a new noise-based perceptual index to replace the psycho-visual index in perception-driven model assessment. Perceptual noise is considered from a machine learning point of view (noise features) or a soft computing point of view (fuzzy entropy used to estimate the noise level).

1.2 Image Quality Assessment Models

Image quality assessment models are very important to characterize the visual quality
of an image. For example, they are of great interest for image compression (JPEG
models) and natural image characterization (Lahoudou et al. 2010). In the literature,
IQA models are usually classified into three families (see (Lahoudou et al. 2011;
Beghdadi et al. 2013) for a brief review of IQA and machine learning):
• Full-reference models that use the original version of the image for estimating the quality of the processed version. These models are the most used methods to evaluate image quality (for example, the well-known PSNR and SSIM; a minimal PSNR computation is sketched after this list). They are easy to compute in real time and correlated with human subjective appreciation, but they require a reference image. Unfortunately, these models are not applicable for computer-generated images since the final reference image is not yet known during the image generation process.
• No-reference models that evaluate the quality of images without access to reference images. Some recent papers (Ferzli and Karam 2005; Zhang et al. 2011) proposed no-reference quality assessment methods with good results but limited to JPEG images. Other methods were proposed for computer-generated images with some success (Delepoulle et al. 2012), but a complete framework has yet to be defined.
• Reduced-reference models that analyze the processed image using some relevant information to calculate the quality of the resulting image. This kind of model seems to be particularly interesting for our study, as we will show in the rest of the book.
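To make the full-reference idea concrete, here is a minimal PSNR computation in Python (an illustrative sketch, not taken from the book; the image size and the noise level in the usage example are arbitrary choices):

import numpy as np

def psnr(reference, test, max_value=255.0):
    # Peak Signal-to-Noise Ratio (in dB) between two grayscale images; higher is better
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

# Toy usage: compare a synthetic reference image with a noisy version of it
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(reference + rng.normal(0.0, 10.0, size=reference.shape), 0.0, 255.0)
print("PSNR (dB):", psnr(reference, noisy))

Such a metric is only usable when a reference image is available, which is precisely what is missing while a computer-generated image is still being rendered.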
In the last decade, numerous IQA methods for computer-generated images have been proposed, but the resulting models are limited in practice and are still under investigation. Currently, the classical way to characterize image quality remains psycho-visual experiments (human-in-the-loop experiments (Faugeras 1979)).

1.3 Organization of the Book

In this book, we assume that the reader is familiar with the basic aspects of machine learning and image processing, and we focus only on the visual quality assessment of computer-generated images using soft computing. We present recent techniques to assess whether such a photo-realistic computer-generated image is noisy or not, based on full-reference, reduced-reference, and no-reference image quality methods, using learning machines and fuzzy sets. These techniques make it possible to handle the uncertainty brought by the perceptual noise affecting the image synthesis process. Note that we mainly focus on grayscale images, or more precisely the “L” component of “Lab” color images, since noise mainly affects this component (Carnet et al. 2008).
In Chap. 2, we introduce image rendering to the reader. We present the basic
notions of light transport and the equations that formalize rendering. We then recall
the Monte Carlo method and detail the path-tracing algorithm which is the core of
many renderers currently used in the computer graphics industry.

In Chap. 3, we study the visual impact of the rendering process on the quality of
the rendered image. We present experimental results obtained from a path-tracing
renderer and illustrate the influence of several parameters (virtual scene and rendering
parameters) on the visual quality of the rendered image.
Chapter 4 introduces image quality evaluation using full-reference methods.
We present a conventional way to obtain noise attributes from computer-generated
images and also introduce the use of deep learning to automatically extract them.
We then present how to use Support Vector Machines (SVM) and Relevance Vector
Machines (RVM) as image quality metrics.
Chapter 5 introduces image quality evaluation using reduced-reference methods.
We present Fast Relevance Vector Machines (FRVM) and explain image quality
evaluation using FRVM and inductive learning. Both methods are then compared on
experimental results.
Chapter 6 introduces no-reference methods using fuzzy sets. We present the
Interval-Valued Fuzzy Set (IVFS) and an entropy based on IVFS. We then detail
an image noise estimation method which uses IVFS and presents promising experi-
mental results obtained with computer-generated images.
In conclusion, Chap. 7 summarizes the important notions presented in this book
and gives some perspectives.

References

Beghdadi A, Larabi M, Bouzerdoum A, Iftekharuddin K (2013) A survey of perceptual image processing methods. Sig Process Image Commun 28:811–831
Carnet M, Callet PL, Barba D (2008) Objective quality assessment of color images based on a
generic perceptual reduced reference. Sig Process Image Commun 23(4):239–256
Delepoulle S, Bigand A, Renaud C (2012) A no-reference computer-generated images quality
metrics and its application to denoising. In: IEEE intelligent systems IS’12 conference, vol 1, pp
67–73
Farrugia J, Péroche B (2004) A progressive rendering algorithm using an adaptive perceptually
based image metric. Comput Graph Forum 23(3):605–614
Faugeras O (1979) Digital color image processing within the framework of a human visual model.
IEEE Trans ASSP 27:380–393
Ferzli R, Karam L (2005) No-reference objective wavelet based noise immune image sharpness
metric. In: International conference on image processing
Kajiya J (1986) The rendering equation. Comput Graph ACM 20(4):143–150
Lahoudou A, Viennet E, Haddadi M (2010) Variable selection for image quality assessment using
a neural network based approach. In: 2nd European workshop on visual information processing
(EUVIP), pp 45–49
Lahoudou A, Viennet E, Bouridane A, Haddadi M (2011) A complete statistical evaluation of
state of the art image quality measures. In: The 7th international workshop on systems, signal
processing and their applications, pp 219–222
Longhurst P, Debattista K, Chalmers A (2006) A GPU based saliency map for high-fidelity selective
rendering. In: AFRIGRAPH 2006 4th international conference on computer graphics. Virtual
reality, visualisation and interaction in Africa, pp 21–29
Mitchell D (1987) Generating antialiased images at low sampling densities. In: Proceedings of
SIGGRAPH’87, New York, NY, USA, pp 65–72

Shirley P, Wang C, Zimmerman K (1996) Monte Carlo techniques for direct lighting calculations.
ACM Trans Graph 15(1):1–36
Xu L, Lin W, Kuo CCJ (2015) Visual Quality Assessment by Machine Learning, vol 28. Springer
Brief, London
Zhang J, Ong S, Thinh M (2011) Kurtosis based no-reference quality assessment of jpeg2000
images. Sig Process Image Commun 26(1):13–23
Chapter 2
Monte Carlo Methods for Image Synthesis

2.1 Introduction

Image synthesis (also called rendering) consists in generating an image from a virtual
3D scene (composed of light sources, objects, materials, and a camera). Numerous
rendering algorithms have been proposed since the 1970s: z-buffer (Catmull 1974),
ray tracing (Whitted 1980), radiosity (Goral et al. 1984), path tracing (Kajiya 1986),
and Reyes (Cook et al. 1987)…
Physically based rendering algorithms (also called photo-realistic rendering algo-
rithms) try to satisfy the physical rules describing the light transport. These algorithms
are commonly used to generate high-quality images (see Fig. 2.1), for example, in
the cinema industry, and include path tracing, photon mapping (Jensen 2001), bidi-
rectional path tracing (Lafortune and Willems 1993; Veach and Guibas 1994), and
metropolis light transport (Veach and Guibas 1997)…
In this book, we only consider the path-tracing algorithm since it is widely used
in modern renderers and is the basis of many other rendering algorithms. In this
chapter, we present the fundamental notions of light transport, which physically
describes rendering. Then, we present the Monte Carlo method, which is the core
computing method used in physically based rendering algorithms. Finally, we detail
the path-tracing algorithm.

2.2 Light Transport

2.2.1 Radiometry

Radiometry is the science of measurement of electromagnetic radiation, including visible light. It is particularly useful for describing light transport and rendering algorithms (Nicodemus et al. 1977; Glassner 1994; Jensen 2001; Pharr and Humphreys 2010; Jakob 2013). The notations used in this chapter are mainly inspired from Eric Veach's PhD thesis (Veach 1997).

Fig. 2.1 Physically based algorithms can render high-quality images from a virtual 3D scene

2.2.1.1 Radiant Flux

Radiant flux (Φ) is the quantity of energy per unit of time (watt):

\[ \Phi = \frac{dQ}{dt} \quad [\mathrm{W}] \tag{2.1} \]
Radiant flux measures the light received or emitted by a point of the scene (see
Fig. 2.2).

2.2.1.2 Radiance

Radiance (L) is the flux per unit of area and per unit of projected solid angle (watt
per square meter per steradian):

\[ L(x \to x') = \frac{d^2 \Phi(x \to x')}{G(x \leftrightarrow x')\, dA(x)\, dA(x')} \quad [\mathrm{W \cdot m^{-2} \cdot sr^{-1}}] \tag{2.2} \]

where G is the geometric function between the emitting surface and the receiving surface. The notation x → x' indicates the direction of light flow. The notation G(x ↔ x') indicates a symmetric function.
Radiance measures the flux received or emitted by a point through a beam (see Fig. 2.3). It is particularly useful for describing light transport in a scene.

Fig. 2.2 The radiant flux is the quantity of light emitted from a point (a) or received by a point (b)

Fig. 2.3 Radiance is the flux emitted or received through a beam in a given direction: (a) beam at point x in direction ω, (b) emitted radiance, (c) received radiance

2.2.1.3 Bidirectional Reflectance Distribution Function

The Bidirectional Reflectance Distribution Function (BRDF) f_s describes the ratio of radiance reflected from an incoming direction to an outgoing direction:

\[ f_s(x \to x' \to x'') = \frac{dL(x' \to x'')}{L(x \to x')\, G(x \leftrightarrow x')\, dA(x)} \quad [\mathrm{sr^{-1}}] \tag{2.3} \]

The BRDF is useful for defining how a material reflects light (see Fig. 2.4).

Fig. 2.4 The BRDF describes how a material reflects light from an incoming direction (x → x') toward an outgoing direction (x' → x'')
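As a standard illustration (not taken from the book), an ideal diffuse (Lambertian) material has a constant BRDF; in the usual directional formulation it is written

\[ f_s(\omega_i \to \omega_o) = \frac{\rho}{\pi}, \qquad 0 \le \rho \le 1, \]

where ρ is the albedo of the surface. The factor 1/π ensures that the cosine-weighted integral of f_s over the hemisphere equals ρ, so such a material never reflects more energy than it receives.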

2.2.2 Formulation of Light Transport

Using the previous radiometric quantities, we can formulate light transport, from the
sources of the scene to the camera, and thus synthesize an image. Note that light
transport can be formulated from light sources to camera as well as from camera to
light sources, since it satisfies energy conservation.

2.2.2.1 Measurement Equation

Rendering consists in computing the radiance received by each pixel of the camera.
The intensity I of a given pixel is defined by the measurement equation:

\[ I = \int_{\mathcal{M} \times \mathcal{M}} W_e(x' \to x'')\, L(x' \to x'')\, G(x' \leftrightarrow x'')\, dA(x')\, dA(x'') \tag{2.4} \]

where M is the set of all points in the scene and W_e is the response of the camera. The measurement equation simply states that the intensity of a pixel is the sum of the radiances from all points x' of the scene to all points x'' on the pixel (see Fig. 2.5).

Fig. 2.5 The intensity of a pixel can be computed using the measurement equation, i.e., the integral of radiance from scene points to pixel points

Fig. 2.6 The rendering equation defines how the light is reflected from all incoming directions to an outgoing direction (a). It can be applied recursively for incoming directions to fully compute light transport in the scene (b)

2.2.2.2 Rendering Equation

The measurement equation describes how a point x' of the scene contributes to the intensity of a pixel at a point x''. To synthesize an image, we also have to compute the radiance from the scene point x' toward the pixel point x'', which is described by the rendering equation:

\[ L(x' \to x'') = L_e(x' \to x'') + \int_{\mathcal{M}} f_s(x \to x' \to x'')\, L(x \to x')\, G(x \leftrightarrow x')\, dA(x) \tag{2.5} \]

where L_e is the light emitted at point x' (light source). Thus, the radiance received by x'' from x' is the sum of two terms: the light emitted by x' toward x'' and the light coming from all points x of the scene and reflected at x' toward x'' (see Fig. 2.6a).
Thus, we can compute the light at x' using the rendering equation. However, this requires computing the light coming from other points x, i.e., computing the rendering equation recursively at these points (see Fig. 2.6b).

2.3 Monte Carlo Integration

The measurement Eq. 2.4 and the rendering Eq. 2.5 are well-defined integral equa-
tions. However, they are difficult to solve using analytic solutions or deterministic
numerical solutions, due to the complexity of the integrands and the high number of
dimensions. Stochastic methods, such as Monte Carlo integration, are more suitable
for computing such equations. Monte Carlo integration is the core of many physically
based rendering algorithms such as path tracing.

2.3.1 Monte Carlo Estimator

Monte Carlo integration aims at evaluating the integral:

\[ I = \int_{\Omega} f(x)\, d\mu(x) \tag{2.6} \]

where dμ is a measure on the domain Ω. This integral can be estimated by a random variable I_N:

\[ I_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)} \tag{2.7} \]

where X_1, ..., X_N are points of Ω sampled independently using the density function p. We can show the validity of this estimator by computing the expected value of I_N:

\[ E[I_N] = E\left[\frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N}\sum_{i=1}^{N}E\left[\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N}\sum_{i=1}^{N}\int_{\Omega}\frac{f(x)}{p(x)}\,p(x)\,d\mu(x) = \int_{\Omega} f(x)\,d\mu(x) = I \tag{2.8} \]

using the linearity and the definition of the expected value.
Thus, Monte Carlo integration converges to the correct solution. Moreover, it is simple to implement since it only requires evaluating f and sampling points according to p. Finally, integrating high-dimensional functions is straightforward and only requires sampling all dimensions of the domain.
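As a simple illustration of the estimator of Eq. 2.7 (a sketch that is not part of the book; the integrand and the sample counts are arbitrary choices), the following Python code estimates a one-dimensional integral over [0, 1] using uniform sampling, i.e., p(x) = 1:

import math
import random

def mc_estimate(f, n_samples):
    # Monte Carlo estimator of the integral of f over [0, 1] with uniform sampling (p(x) = 1)
    total = 0.0
    for _ in range(n_samples):
        x = random.uniform(0.0, 1.0)  # sample X_i according to p
        total += f(x) / 1.0           # accumulate f(X_i) / p(X_i)
    return total / n_samples

if __name__ == "__main__":
    f = lambda x: math.sin(math.pi * x)  # exact integral over [0, 1] is 2/pi, about 0.6366
    for n in (100, 1000, 10000):
        print(n, mc_estimate(f, n))

Increasing the number of samples brings the estimate closer to the exact value, at the cost of more evaluations of f.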

2.3.2 Convergence Rate

The variance of the Monte Carlo estimator is inversely proportional to the number of samples:

\[ V[I_N] = V\left[\frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N^2}\,V\left[\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N^2}\sum_{i=1}^{N}V\left[\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N}\,V\left[\frac{f(X)}{p(X)}\right] \tag{2.9} \]

Hence the standard deviation:

\[ \sigma[I_N] = \sqrt{V[I_N]} = \sqrt{\frac{1}{N}\,V\left[\frac{f(X)}{p(X)}\right]} = \frac{1}{\sqrt{N}}\,\sigma\left[\frac{f(X)}{p(X)}\right] \tag{2.10} \]

Thus, the RMS error converges at a rate of O(1/√N). This convergence rate is slow (increasing the number of samples by a factor of four only reduces the integration error by a factor of two) but it is not affected by the number of dimensions.
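This O(1/√N) behavior can be checked empirically. The following sketch (again an illustration, not taken from the book) repeats the uniform-sampling estimator of the previous sketch over many independent runs and prints the observed RMS error for sample counts that grow by a factor of four; the error should roughly halve at each step:

import math
import random

def mc_estimate(f, n):
    # uniform sampling on [0, 1], so p(x) = 1
    return sum(f(random.uniform(0.0, 1.0)) for _ in range(n)) / n

def rms_error(f, exact, n, runs=2000):
    # empirical RMS error of the estimator over independent runs
    return math.sqrt(sum((mc_estimate(f, n) - exact) ** 2 for _ in range(runs)) / runs)

if __name__ == "__main__":
    f = lambda x: math.sin(math.pi * x)
    exact = 2.0 / math.pi
    for n in (100, 400, 1600):  # each step multiplies N by 4
        print(n, rms_error(f, exact, n))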

2.3.3 Variance Reduction Using Importance Sampling

Many variance reduction techniques have been proposed to improve the convergence rate of Monte Carlo methods. One of them, importance sampling, is classically implemented in physically based renderers.
The basic idea of the importance sampling technique is to sample important regions of the domain with a higher probability. Ideally, we would choose a density function p proportional to f:

\[ p(x) \propto f(x) \tag{2.11} \]

which leads to a zero-variance estimator, i.e., constant for all samples X:

\[ \frac{f(X)}{p(X)} = c \tag{2.12} \]

In practice, we cannot choose such a density function p since the required constant
c is the value we are trying to compute. However, variance can be reduced by choosing
a density function which has a shape similar to f . In physically based renderers,
density functions are carefully implemented by considering the position of light
sources and the reflectance of materials.
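To illustrate the variance reduction (a toy sketch that is not part of the book; the integrand f(x) = 3x^2 and the proposal density p(x) = 2x are arbitrary choices with similar shapes), the following Python code compares uniform sampling with importance sampling for the same integral over [0, 1]:

import math
import random
import statistics

def f(x):
    return 3.0 * x * x  # integrand on [0, 1]; its exact integral is 1

def uniform_samples(n):
    # uniform sampling: p(x) = 1, so each sample contributes f(X)
    return [f(random.uniform(0.0, 1.0)) for _ in range(n)]

def importance_samples(n):
    # importance sampling with p(x) = 2x, whose shape is similar to f;
    # X = sqrt(1 - U), with U uniform on [0, 1), follows p and is never zero
    out = []
    for _ in range(n):
        x = math.sqrt(1.0 - random.random())
        out.append(f(x) / (2.0 * x))  # contribution f(X) / p(X)
    return out

if __name__ == "__main__":
    n = 100000
    for name, samples in (("uniform", uniform_samples(n)), ("importance", importance_samples(n))):
        print(name, "estimate:", sum(samples) / n, "variance of f(X)/p(X):", statistics.pvariance(samples))

Both estimators converge to the same value, but the per-sample variance is noticeably smaller with the importance-sampling density, which is exactly what physically based renderers exploit when they sample directions according to the light sources and the materials.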

2.4 Path Tracing

2.4.1 Random Walk

Using the measurement Eq. 2.4, we can compute a pixel by integrating radiance
coming from all directions. To compute radiance in a given direction, we can trace
a light ray in this direction until an object is reached and compute the reflected
radiance using the rendering Eq. 2.5. However, this equation requires integrating radiance coming from all directions. This means that we have to trace many rays (for all these directions) and that we have to repeat this process recursively each time one of these rays reaches an object (i.e., tracing additional rays). This naive approach has a huge memory cost and is unfeasible in practice.
The basic idea of the path-tracing algorithm is to randomly sample only one direction for evaluating the rendering equation. Thus, we can sample a path x_1, ..., x_K from the camera to a light source and compute the contribution of this path to the pixel value (see Fig. 2.7). This can be seen as a random walk, which means we can estimate the value of a pixel by randomly sampling many paths X_i and by computing the mean value of the contributions:

\[ f(X_i) = W_e(x_1, x_2) \left[ \prod_{k=2}^{K-1} f_s(x_{k+1}, x_k, x_{k-1})\, G(x_k, x_{k-1}) \right] L_e(x_K, x_{K-1})\, G(x_K, x_{K-1}) \tag{2.13} \]

2.4.2 The Path-Tracing Algorithm

The path-tracing algorithm was proposed by James T. Kajiya (Kajiya 1986).
This algorithm implements a random walk for solving the rendering equation. It is
currently used in many physically based renderers.
A pseudo-code implementation of path tracing is given in Algorithm 1. As
explained previously, the algorithm computes each pixel by randomly sampling paths
and computing the mean contribution of the paths for the pixel.

Fig. 2.7 A path (for example X_i = x_1, x_2, x_3, x_4) models the light transport from a light source (x_1) to a camera (x_4) after reflection in the scene (x_2 and x_3). The contribution of the path can be computed by developing the rendering equation and the measurement equation: F(X_i) = L_e(x_1 → x_2) G(x_1 ↔ x_2) f_s(x_1 → x_2 → x_3) G(x_2 ↔ x_3) f_s(x_2 → x_3 → x_4) G(x_3 ↔ x_4) W_e(x_3 → x_4)

Algorithm 1: Path Tracing (using N paths per pixel and a probability density function p)
for all pixels in the image do
    I_N ← 0    {initialize the computed intensity of the pixel}
    for i ← 1 to N do
        sample a point x_1 in the pixel
        P_i ← p(x_1)    {initialize the probability of the path}
        F_i ← 1    {initialize the contribution of the path}
        loop
            sample a reflected direction and compute the corresponding point x_k in the scene
            P_i ← P_i × p(x_k, x_{k-1})
            if x_k is on a light source then
                exit loop
            else
                F_i ← F_i × f_s(x_{k+1}, x_k, x_{k-1}) G(x_k, x_{k-1})
            end if
        end loop
        F_i ← F_i × W_e(x_1, x_2) L_e(x_K, x_{K-1}) G(x_K, x_{K-1})
        I_N ← I_N + F_i / (N × P_i)
    end for
    pixel ← I_N
end for

2.4.3 Global Illumination

Algorithm 1 is a straightforward but inefficient implementation of path tracing and can be improved in many ways. A major source of inefficiency lies in the fact that reflected directions are sampled independently from the light sources. Indeed, a light source which directly lights a point of the scene is easy to compute and probably contributes greatly to the illumination of the point. On the contrary, light coming indirectly from a source, after several reflections on objects, is difficult to compute and may contribute little to the illumination of the point (see Fig. 2.8) (Nayar et al. 2006).

Fig. 2.8 Global illumination of a scene (a, full lighting) can be decomposed into direct lighting (b) and indirect lighting (c). The direct lighting is the light coming from a source to a point and reflected toward the camera. The indirect lighting is the light coming from a source and reflected several times before reaching the camera
Thus, a very common optimization implemented in path tracers consists in sam-
pling light sources directly (Vorba and Křivánek 2016): at each intersection point,
a ray is sent toward a light source to estimate direct lighting and the path is traced
recursively by sampling directions to estimate indirect lighting. This amounts to par-
titioning the integration domain in the rendering equation, which still gives valid
results while improving the convergence speed.

2.5 Conclusion

An image, captured by a camera or seen by the Human Visual System (HVS), is a measure of the light (radiance) propagated in the scene. Photo-realistic image syn-
thesis consists in computing an image from a virtual 3D scene, using physical laws
of light transport such as the rendering equation. Current rendering algorithms are
based on stochastic methods (Monte Carlo integration, Markov chain) to compute
realistic images. Such an algorithm gradually converges to the expected image of the
virtual scene but this generally requires a lot of computation time. Many improve-
ments have been proposed to speed up the convergence of the rendering algorithms.
The remainder of this book aims to characterize the noise present in rendered images (the resulting variance of the rendering algorithm).

References

Catmull EE (1974) A subdivision algorithm for computer display of curved surfaces. Ph.D. thesis,
The University of Utah
Cook RL, Carpenter L, Catmull E (1987) The reyes image rendering architecture. In: Proceedings
of the 14th annual conference on computer graphics and interactive techniques, SIGGRAPH ’87,
pp 95–102
Glassner AS (1994) Principles of digital image synthesis. Morgan Kaufmann Publishers Inc., San
Francisco
Goral CM, Torrance KE, Greenberg DP, Battaile B (1984) Modeling the interaction of light between
diffuse surfaces. In: Proceedings of the 11th annual conference on computer graphics and inter-
active techniques, SIGGRAPH ’84, pp 213–222
Jakob W (2013) Light transport on path-space manifolds. Ph.D. thesis, Cornell University
Jensen HW (2001) Realistic image synthesis using photon mapping. A. K. Peters Ltd., Natick
Kajiya J (1986) The rendering equation. ACM Comput Graph 20(4):143–150
Lafortune EP, Willems YD (1993) Bi-directional path tracing. In: Proceedings of third international
conference on computational graphics and visualization techniques (compugraphics ’93), Alvor,
Portugal, pp 145–153
Nayar SK, Krishnan G, Grossberg MD, Raskar R (2006) Fast separation of direct and global
components of a scene using high frequency illumination. ACM Trans Graph 25(3):935–944
Nicodemus FE, Richmond JC, Hsia JJ, Ginsberg IW, Limperis T (1977) Geometric considerations
and nomenclature for reflectance. National Bureau of Standards
Pharr M, Humphreys G (2010) Physically based rendering: from theory to implementation, 2nd
edn. Morgan Kaufmann Publishers Inc., San Francisco
Veach E (1997) Robust Monte Carlo methods for light transport simulation. Ph.D. thesis, Stanford
University
Veach E, Guibas LJ (1994) Bidirectional estimators for light transport. In: Eurographics rendering
workshop, pp 147–162
Veach E, Guibas LJ (1997) Metropolis light transport. Comput Graph 31(Annual Conference
Series):65–76
Vorba J, Křivánek J (2016) Adjoint-driven russian roulette and splitting in light transport simulation.
ACM Trans Graph 35(4):1–11
Whitted T (1980) An improved illumination model for shaded display. Commun ACM 23(6):343–
349