Intelligent Systems Reference Library 117

Diego Oliva
Erik Cuevas

Advances and Applications of Optimised Algorithms in Image Processing
Intelligent Systems Reference Library

Volume 117

Series editors
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
Lakhmi C. Jain, University of Canberra, Canberra, Australia;
Bournemouth University, UK;
KES International, UK
e-mails: jainlc2002@yahoo.co.uk; Lakhmi.Jain@canberra.edu.au
URL: http://www.kesinternational.org/organisation.php
About this Series

The aim of this series is to publish a Reference Library, including novel advances
and developments in all aspects of Intelligent Systems in an easily accessible and
well structured form. The series includes reference works, handbooks, compendia,
textbooks, well-structured monographs, dictionaries, and encyclopedias. It contains
well integrated knowledge and current information in the field of Intelligent
Systems. The series covers the theory, applications, and design methods of
Intelligent Systems. Virtually all disciplines such as engineering, computer science,
avionics, business, e-commerce, environment, healthcare, physics and life science
are included.

More information about this series at http://www.springer.com/series/8578


Diego Oliva and Erik Cuevas

Advances and Applications of Optimised Algorithms in Image Processing
Diego Oliva
Departamento de Electrónica, CUCEI
Universidad de Guadalajara
Guadalajara, Jalisco, Mexico

and

Tecnológico de Monterrey, Campus Guadalajara
Zapopan, Jalisco, Mexico

Erik Cuevas
Departamento de Electrónica, CUCEI
Universidad de Guadalajara
Guadalajara, Jalisco, Mexico

ISSN 1868-4394 ISSN 1868-4408 (electronic)


Intelligent Systems Reference Library
ISBN 978-3-319-48549-2 ISBN 978-3-319-48550-8 (eBook)
DOI 10.1007/978-3-319-48550-8
Library of Congress Control Number: 2016955427

© Springer International Publishing AG 2017


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my family and Gosia Kijak, you are
always my support
Foreword

This book brings together and explores possibilities for combining image
processing and artificial intelligence, with a focus on machine learning and
optimization, two relevant fields in computer science. Most books treat these
major topics separately rather than in conjunction, which gives this work a
special interest. The problems addressed in the different chapters were selected
to demonstrate the capabilities of optimization and machine learning for solving
different issues in image processing. They were chosen for their relevance in the
field and because they provide important cues on particular application domains.
The topics include different methods for image segmentation and, more
specifically, the detection of geometrical shapes and object recognition, whose
applications in medical image processing, based on the modification of
optimization algorithms with machine learning techniques, provide a new point of
view. In short, the book was written to show that optimization and machine
learning are attractive alternatives for image processing techniques, with
advantages over other existing strategies. Complex tasks can be addressed under
these approaches, providing new solutions or improving existing ones thanks to
the foundation they offer for solving problems in specific areas and applications.
Unlike other existing books in similar areas, this book introduces the reader to
current trends in the use of optimization and machine learning techniques applied
to image processing. Moreover, each chapter includes comparisons and updated
references that support the results obtained by the proposed approaches, while
also giving the reader a practical guide to the reference sources.
The book was designed for graduate and postgraduate education, where students
can find support for reinforcing or consolidating their knowledge, and for
researchers. Teachers can also find support for teaching in areas involving
machine vision, or use the chapters as examples of the main techniques addressed.
Additionally, professionals who want to explore the advances in concepts and
implementation of optimization- and learning-based algorithms applied to image
processing will find in this book an excellent guide for that purpose.
The content of this book is organized as an introduction to machine learning and
optimization, after which each chapter addresses and solves selected problems in
image processing. In this regard, Chaps. 1 and 2 provide introductions to machine
learning and optimization, respectively, where the basic concepts relevant to
image processing are addressed. Chapter 3 describes the electromagnetism-like
optimization (EMO) algorithm, including the modifications needed for it to work
properly in image processing; its advantages and shortcomings are also explored.
Chapter 4 addresses digital image segmentation as an optimization problem,
explaining how image segmentation is treated as an optimization problem using
different objective functions. Template matching using a physically inspired
algorithm is addressed in Chap. 5, where template matching is considered as an
optimization problem based on a modification of EMO that uses a memory to reduce
the number of function calls. Chapter 6 addresses the detection of circular
shapes in digital images, again formulated as an optimization problem.
A practical medical application is presented in Chap. 7, where the problem to be
solved is blood cell segmentation by circle detection. This chapter introduces a
new objective function to measure the match between candidate solutions and the
blood cells contained in the images. Finally, Chap. 8 proposes an improvement to
EMO applying the concept of opposition-based electromagnetism-like optimization.
This chapter analyzes a modification of EMO that uses a machine learning
technique to improve its performance. An important advantage of this structure is
that each chapter can be read separately. Although all chapters are
interconnected, Chap. 3 serves as the basis for several of them.
This concise yet comprehensive treatment of the topics addressed makes this work
an important reference in image processing, an area where a significant number of
technologies are continuously emerging and often remain scattered across the
literature. Congratulations to the authors for their diligence, oversight and
dedication in assembling the topics addressed in the book. The computer vision
community will be very grateful for this well-done work.

July 2016 Gonzalo Pajares


Universidad Complutense de Madrid
Preface

The use of cameras to obtain images or videos from the environment has become
widespread in recent years. These sensors are now present in our lives, from cell
phones to industrial, surveillance and medical applications. The tendency is
toward automatic applications that can analyze the images obtained with the
cameras. Such applications involve the use of image processing algorithms.
Image processing is a field in which the environment is analyzed using samples
taken with a camera. The idea is to extract features that permit the
identification of the objects contained in the image. To achieve this goal, it is
necessary to apply different operators that allow a correct analysis of a scene.
Most of these operations are computationally expensive. On the other hand,
optimization approaches are extensively used in different areas of engineering.
They are used to explore complex search spaces and obtain the most appropriate
solutions according to an objective function. This book presents a study of the
use of optimization algorithms in complex problems of image processing. The
selected problems range from the theory of image segmentation to the detection of
complex objects in medical images. The concepts of machine learning and
optimization are analyzed to provide an overview of their application in image
processing.
The aim of this book is to present a study of the use of new trends to solve
image processing problems. When we started working on these topics almost ten
years ago, the related information was sparse. Now we realize that researchers
were divided and closed within their fields; moreover, the use of cameras was not
as popular then. This book presents in a practical way the task of adapting the
traditional methods of a specific field so that they can be solved using modern
optimization algorithms. Moreover, in our study we noticed that optimization
algorithms can also be modified and hybridized with machine learning techniques.
Such modifications are also included in some chapters. The reader will see that
our goal is to show that there exists a natural link between image processing and
optimization. To achieve this objective, the first three chapters introduce the
concepts of machine learning, optimization, and the optimization technique used
to solve the problems. The remaining chapters each first present an introduction
to the problem to be solved and then explain the basic ideas and concepts of the
implementations.


The book was planned considering that the readers could be students, researchers
who are experts in the field, and practitioners who are not completely involved
with the topics.
This book has been structured so that each chapter can be read independently
from the others. Chapter 1 describes machine learning (ML), concentrating on its
elementary concepts. Chapter 2 explains the theory related to global optimization
(GO). Readers who are familiar with these topics may wish to skip these chapters.
In Chap. 3 the electromagnetism-like optimization (EMO) algorithm is intro-
duced as a tool to solve complex optimization problems. The theory of physics
behind the EMO operators is explained. Moreover, their pros and cons are widely
analyzed, including some of the most significant modifications.
Chapter 4 presents three alternative methodologies for image segmentation
considering different objective functions. The EMO algorithm is used to find the
best thresholds that can segment the histogram of a digital image.
In Chap. 5 the template matching problem is introduced, which consists of
detecting objects in an image using a template. Here the EMO algorithm optimizes
an objective function. Moreover, improvements that reduce the number of
evaluations and improve the convergence velocity are also explained.
Continuing with object detection, Chap. 6 shows how the EMO algorithm can be
applied to detect circular shapes embedded in digital images. Meanwhile, in
Chap. 7 a modified objective function is used to identify white blood cells in
medical images using EMO.
Chapter 8 shows how a machine learning technique could improve the perfor-
mance of an optimization algorithm without affecting its main features such as
accuracy or convergence.
Writing this book was a very rewarding experience where many people were
involved. We acknowledge Dr. Gonzalo Pajares for always being available to help
us. We express our gratitude to Prof. Lakhmi Jain, who so warmly sustained this
project. Acknowledgements also go to Dr. Thomas Ditzinger, who so kindly agreed
to its appearance.
Finally, it is necessary to mention that this book is a small piece in the puzzle
of image processing and optimization. We would like to encourage readers to
explore and expand their knowledge in order to create their own implementations
according to their own needs.

Zapopan, Mexico Diego Oliva


Guadalajara, Mexico Erik Cuevas
July 2016
Contents

1 An Introduction to Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Types of Machine Learning Strategies . . . . . . . . . . . . . . . . . . . . 2
1.3 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.1 Nearest Neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Parametric and Non-parametric Models . . . . . . . . . . . . . . . . . . . . . 4
1.5 Overfitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.6 The Curse of Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.7 Bias-Variance Trade-Off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.8 Data into Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 Definition of an Optimization Problem . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Classical Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Evolutionary Computation Methods . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Structure of an Evolutionary Computation Algorithm . . . . . 18
2.4 Optimization and Image Processing and Pattern Recognition . . . . . 19
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3 Electromagnetism-Like Optimization Algorithm: An
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1 Introduction to Electromagnetism-Like Optimization (EMO) . . . . . 23
3.2 Optimization Inspired in Electromagnetism . . . . . . . . . . . . . . . . . . 24
3.2.1 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.2 Local Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.3 Total Force Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.4 Movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 A Numerical Example Using EMO . . . . . . . . . . . . . . . . . . . . . . . . 30


3.4 EMO Modifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31


3.4.1 Hybridizing EMO with Descent Search (HEMO) . . . . . . . . 34
3.4.2 EMO with Fixed Search Pattern (FEMO) . . . . . . . . . . . . . . 37
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4 Digital Image Segmentation as an Optimization Problem . . . . . . . . . 43
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 Image Multilevel Thresholding (MT) . . . . . . . . . . . . . . . . . . . . . . . 45
4.2.1 Between-Class Variance (Otsu's Method) . . . . . . . . . . . . 46
4.2.2 Entropy Criterion Method (Kapur’s Method) . . . . . . . . . . . 48
4.2.3 Tsallis Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3 Multilevel Thresholding Using EMO (MTEMO) . . . . . . . . . . . . . . 51
4.3.1 Particle Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.2 EMO Implementation with Otsu’s and Kapur’s
Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.3 EMO Implementation with Tsallis Entropy . . . . . . . . . . . . . 52
4.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4.1 Otsu and Kapur Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4.2 Tsallis Entropy Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5 Template Matching Using a Physical Inspired Algorithm . . . . . . . . . 93
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Template Matching Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.3 New Local Search Procedure in EMO for Template Matching . . . . 98
5.3.1 The New Local Search Procedure for EMO . . . . . . . . . . . . 98
5.3.2 Fitness Estimation for Velocity Enhancement in TM . . . . . 102
5.3.3 Template Matching Using EMO . . . . . . . . . . . . . . . . . . . . . 102
5.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6 Detection of Circular Shapes in Digital Images. . . . . . . . . . . . . . . . . . 113
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.2 Circle Detection Using EMO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.2.1 Particle Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.2.2 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.2.3 EMO Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.3.1 Circle Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.3.2 Test on Shape Discrimination . . . . . . . . . . . . . . . . . . . . . . . 121
6.3.3 Multiple Circle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.3.4 Circular Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

6.3.5 Circle Localization from Occluded or Imperfect


Circles and Arc Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 124
6.3.6 Accuracy and Computational Time . . . . . . . . . . . . . . . . . . . 125
6.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7 A Medical Application: Blood Cell Segmentation
by Circle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.2 Circle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.2.1 Data Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.2.2 Particle Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.2.3 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.2.4 EMO Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.2.5 White Blood Cell Detection . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3 A Numerical Example of White Blood Cells Detection . . . . . . . . . 146
7.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7.5 Comparisons to Other Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7.5.1 Detection Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7.5.2 Robustness Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.5.3 Stability Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8 An EMO Improvement: Opposition-Based
Electromagnetism-Like for Global Optimization . . . . . . . . . . . . . . . . . 159
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.2 Opposition-Based Learning (OBL) . . . . . . . . . . . . . . . . . . . . . . . . 161
8.2.1 Opposite Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
8.2.2 Opposite Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
8.2.3 Opposition-Based Optimization . . . . . . . . . . . . . . . . . . . . . . 162
8.3 Opposition-Based Electromagnetism-Like Optimization
(OBEMO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.3.1 Opposition-Based Population Initialization . . . . . . . . . . . . . 163
8.3.2 Opposition-Based Production for New Generation . . . . . . . 165
8.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.4.1 Test Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.4.2 Parameter Settings for the Involved EMO Algorithms . . . . 166
8.4.3 Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Chapter 1
An Introduction to Machine Learning

1.1 Introduction

We are already in the era of big data. The overall amount of data is steadily
growing. There are about one trillion web pages; one hour of video is uploaded to
YouTube every second, amounting to 10 years of content every day. Banks handle
more than 1 M transactions per hour and have databases containing more than
2.5 petabytes ($2.5 \times 10^{15}$ bytes) of information; and so on [1].
In general, we define machine learning as a set of methods that can automatically
detect patterns in data, and then use the uncovered patterns to predict future data, or
to perform other kinds of decision making under uncertainty. Learning means that
novel knowledge is generated from observations and that this knowledge is used to
achieve defined objectives. Data itself is already knowledge. But for certain
applications and for human understanding, large data sets cannot directly be applied
in their raw form. Learning from data means that new condensed knowledge is
extracted from the large amount of information [2].
Some typical machine learning problems include, for example in bioinformatics,
the analysis of large genome data sets to detect illnesses and for the development of
drugs. In economics, the study of large data sets of market data can improve the
behavior of decision makers. Prediction and inference can help to improve planning
strategies for efficient market behavior. The analysis of share markets and stock
time series can be used to learn models that allow the prediction of future devel-
opments. There are thousands of further examples that require the development of
efficient data mining and machine learning techniques. Machine learning tasks vary
in various kinds of ways, e.g., the type of learning task, the number of patterns, and
their size [2].


1.2 Types of Machine Learning Strategies

Machine learning methods are usually divided into three main types: supervised,
unsupervised and reinforcement learning [3]. In the predictive or supervised
learning approach, the goal is to learn a mapping from inputs $x$ to outputs $y$,
given a labeled set of input-output pairs $D = \{(x_i, y_i)\}_{i=1}^{N}$, with
$x_i = (x_i^1, \ldots, x_i^d)$. Here $D$ is called the training data set, and $N$
represents the number of training examples.
In the simplest formulation, each training vector $x$ is a $d$-dimensional
vector, where each dimension represents a feature or attribute of $x$. Similarly,
$y_i$ symbolizes the category assigned to $x_i$. Such categories form a set
defined as $y_i \in \{1, \ldots, C\}$. When $y_i$ is categorical, the problem is
known as classification, and when $y_i$ is real-valued, the problem is known as
regression. Figure 1.1 shows a schematic representation of supervised learning.
The second main method of machine learning is unsupervised learning. In
unsupervised learning, it is only necessary to provide the data
$D = \{x_i\}_{i=1}^{N}$. Therefore, the objective of an unsupervised algorithm is
to automatically find patterns in the data that are not initially apparent. This
process is sometimes called knowledge discovery. Under such conditions, this is a
much less well-defined problem, since we are not told what kinds of patterns to
look for, and there is no obvious error metric to use (unlike supervised
learning, where we can compare our prediction of $y_i$ for a given $x_i$ to the
observed value). Figure 1.2 illustrates the process of unsupervised learning. In
the figure, data are automatically classified according to their distances into
two categories, as clustering algorithms do.
Reinforcement learning is the third method of machine learning. It is less
popular than supervised and unsupervised methods. Under reinforcement learning,
an agent learns to behave in an unknown scenario through the reward and
punishment signals provided by a critic. Unlike supervised learning, the reward
and punishment signals give less information, in most cases only failure or
success. The final objective of the agent is to maximize the total reward
obtained in a complete learning episode. Figure 1.3 illustrates the process of
reinforcement learning.

Fig. 1.1 Schematic representation of supervised learning (input x_i is fed to the learning algorithm, whose actual output is compared with the desired output y_i through an error signal)

Fig. 1.2 Process of unsupervised learning. Data are automatically classified according to their distances into two categories, as clustering algorithms do

Fig. 1.3 Process of reinforcement learning (an agent acts on an unknown scenario and receives reward/punishment signals computed by a critic)

1.3 Classification

Classification considers the problem of determining categorical labels for
unlabeled patterns based on observations. Let $(x_1, y_1), \ldots, (x_N, y_N)$ be
observations of $d$-dimensional continuous patterns, i.e., $x_i \in \mathbb{R}^d$,
with discrete labels $y_1, \ldots, y_N$. The objective in classification is to
obtain a functional model $f$ that allows a reasonable prediction of the unknown
class label $y'$ for a new pattern $x'$. Patterns without labels should be
assigned the labels of patterns that are sufficiently similar, e.g., that are
close to the target pattern in data space, that come from the same distribution,
or that lie on the same side of a separating decision function. But learning from
observed patterns can be difficult. Training sets can be noisy, important
features may be unknown, similarities between patterns may not be easy to define,
and observations may not be sufficiently described by simple distributions.
Further, learning functional models can be a tedious task, as classes may not be
linearly separable or may be difficult to separate with simple rules or
mathematical equations.

1.3.1 Nearest Neighbors

The nearest neighbor (NN) method is among the most popular methods used in
machine learning for classification. Its best characteristic is its simplicity.
It is based on the idea that the patterns closest to a target pattern $x'$, for
which we seek the label, deliver useful information about its description. Based
on this idea, NN assigns the class label of the majority of the $k$-nearest
patterns in data space. Figure 1.4 shows the classification process under the NN
method, considering a 4-nearest approach. Analyzing Fig. 1.4, it is clear that
the novel pattern $x'$ will be classified as an element of class A, since most of
the nearest elements belong to category A.
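To make the procedure concrete, the following minimal sketch implements a k-nearest-neighbor classifier from scratch with k = 4, as in Fig. 1.4. The toy feature values, the class labels and the use of the Euclidean distance are illustrative assumptions, not taken from the book.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=4):
    """Assign to x_new the majority label among its k nearest training patterns."""
    # Euclidean distances between the new pattern and every training pattern
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k closest patterns
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Hypothetical 2-D patterns of two classes, 'A' and 'B'
X_train = np.array([[1.0, 1.2], [1.1, 0.9], [0.8, 1.0],
                    [3.0, 3.2], [3.1, 2.9], [2.8, 3.0]])
y_train = np.array(['A', 'A', 'A', 'B', 'B', 'B'])

print(knn_predict(X_train, y_train, np.array([1.0, 1.1]), k=4))  # -> 'A'
```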

1.4 Parametric and Non-parametric Models

The objective of a machine learning algorithm is to obtain a functional model f that


allows a reasonable prediction or description of a data set. There are many ways to
define such models, but the most important distinction is this: does the model have
a fixed number of parameters, or does the number of parameters grow with the
amount of training data? The former is called a parametric model, and the latter is
called a nonparametric model. Parametric models have the advantage of often being
faster to use, but the disadvantage of making stronger assumptions about the nature
of the data distributions. Nonparametric models are more flexible, but often com-
putationally intractable for large datasets. We will give examples of both kinds of
models in the sections below. We focus on supervised learning for simplicity,
although much of our discussion also applies to unsupervised learning. Figure 1.5
represents graphically the architectures from both approaches.

Fig. 1.4 Classification process under the NN method, considering a 4-nearest approach

Fig. 1.5 Graphical representation of the learning process in parametric and non-parametric models (an adaptable system with fixed parameters vs. an adaptable system with variable parameters adjusted by calibration)

1.5 Overfitting

The objective of learning is to obtain better predictions as outputs, whether
they are class labels or continuous regression values. The way to know how
successfully the algorithm has learnt is to compare the actual predictions with
known target labels, which in fact is how the training is done in supervised
learning. If we want to generalize the performance of the learning algorithm to
examples that were not seen during the training process, we obviously cannot test
by using the same data set used in the learning stage. Therefore, a different
data set, a test set, is necessary to prove the generalization ability of the
learning method. The test set is fed to the learning algorithm and its outputs
are compared with the known targets; during this test, the parameters obtained in
the learning process are not modified.
In fact, during the learning process, there is at least as much danger in
over-training as there is in under-training. The number of degrees of variability in
most machine learning algorithms is huge—for a neural network there are lots of
weights, and each of them can vary. This is undoubtedly more variation than there
is in the function we are learning, so we need to be careful: if we train for too long,
then we will overfit the data, which means that we have learnt about the noise and
inaccuracies in the data as well as the actual function. Therefore, the model that we
learn will be much too complicated, and won’t be able to generalize.
Figure 1.6 illustrates this problem by plotting the predictions of some algorithm
(as the curve) at two different points in the learning process. In Fig. 1.6a the
curve fits the overall trend of the data well (it has generalized to the
underlying general function), but the training error would still not be that
close to zero, since it passes near, but not through, the training data. As the
network continues to learn, it will eventually produce a much more complex model
that has a lower training error (close to zero), meaning that it has memorized
the training examples, including any noise component, so that it has overfitted
the training data (see Fig. 1.6b).

Fig. 1.6 Examples of (a) generalization and (b) overfitting

We want to stop the learning process before the algorithm overfits, which means
that we need to know how well it is generalizing at each iteration. We can’t use the
training data for this, because we wouldn’t detect overfitting, but we can’t use the
testing data either, because we’re saving that for the final tests. So we need a third
set of data to use for this purpose, which is called the validation set because we’re
using it to validate the learning so far. This is known as cross-validation in statistics.
It is part of model selection: choosing the right parameters for the model so that it
generalizes as well as possible.
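The following short sketch shows one way to realize this three-way split; the 60/20/20 proportions and the use of a single random permutation are illustrative assumptions rather than a prescription from the text.

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle the data once and cut it into training, validation and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test_idx = idx[:n_test]                # held out for the final evaluation
    val_idx = idx[n_test:n_test + n_val]   # used to monitor overfitting while training
    train_idx = idx[n_test + n_val:]       # used to fit the model parameters
    return ((X[train_idx], y[train_idx]),
            (X[val_idx], y[val_idx]),
            (X[test_idx], y[test_idx]))
```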

1.6 The Curse of Dimensionality

The NN classifier is simple and can work quite well when it is given a
representative distance metric and enough training data. In fact, it can be shown
that the NN classifier can come within a factor of 2 of the best possible
performance as $N \to \infty$.
However, the main problem with NN classifiers is that they do not work well with
high-dimensional data $x$. The poor performance in high-dimensional settings is
due to the curse of dimensionality.
To explain the curse, we give a simple example. Consider applying an NN
classifier to data where the inputs are uniformly distributed in the
$d$-dimensional unit cube. Suppose we estimate the density of class labels around
a test point $x'$ by "growing" a hyper-cube around $x'$ until it contains a
desired fraction $F$ of the data points. The expected edge length of this cube
will be $e_d(F) = F^{1/d}$. If $d = 10$ and we want to base our estimate on 10%
of the data, we have $e_{10}(0.1) = 0.8$, so we need to extend the cube 80% along
each dimension around $x'$. Even if we only use 1% of the data, we find
$e_{10}(0.01) = 0.63$; see Fig. 1.7. Since the entire range of the data is only 1
along each dimension, we see that the method is no longer very local, despite the
name "nearest neighbor". The trouble with looking at neighbors that are so far
away is that they may not be good predictors of the behavior of the input-output
function at a given point.

Fig. 1.7 Illustration of the curse of dimensionality. (a) We embed a small cube of side s inside a larger unit cube. (b) We plot the edge length of the cube needed to cover a given volume of the unit cube as a function of the number of dimensions (curves for d = 1, 2, 3, 5, 7; axes: fraction of data in neighborhood vs. edge length of cube)
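The curves in Fig. 1.7b can be reproduced with a few lines of code. A minimal sketch, assuming matplotlib is available, simply evaluates $e_d(F) = F^{1/d}$ for several dimensions.

```python
import numpy as np
import matplotlib.pyplot as plt

F = np.linspace(0.01, 1.0, 200)          # fraction of data the neighborhood must contain
for d in (1, 2, 3, 5, 7, 10):
    plt.plot(F, F ** (1.0 / d), label=f"d={d}")   # expected edge length e_d(F) = F^(1/d)

plt.xlabel("Fraction of data in neighborhood")
plt.ylabel("Edge length of cube")
plt.legend()
plt.show()
# For d = 10: 0.1 ** (1/10) ≈ 0.79 and 0.01 ** (1/10) ≈ 0.63, matching the text.
```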

1.7 Bias-Variance Trade-Off

An inflexible model is defined as a mathematical formulation that involves few
parameters. Because of the few parameters, such models have problems fitting data
well. On the other hand, flexible models integrate several parameters and have
better modeling capacities [4]. The sensitivity of inflexible models to
variations during the learning process is relatively moderate in comparison to
flexible models. Similarly, inflexible models have comparatively limited modeling
capacities, but are often easy to interpret. For example, linear models formulate
linear relationships between their parameters, which are easy to describe with
their coefficients, e.g.,

$f(x_i) = a_0 + a_1 x_i^1 + \cdots + a_d x_i^d$    (1.1)

The coefficients $a$ represent the adaptable parameters and formulate
relationships that are easy to interpret. Optimizing the coefficients of a linear
model is easier than fitting an arbitrary function with several adaptable
parameters.
Linear models do not suffer much from overfitting, as they are less dependent on
slight changes in the training set. They have low variance, which is the amount
by which the model changes when different training data sets are used. However,
such models have large errors when approximating a complex problem, corresponding
to a high bias. Bias is a measure of the inability to fit the model to the
training patterns. In contrast, flexible methods have high variance, i.e., they
vary a lot when the training set changes, but have low bias, i.e., they adapt
better to the observations.

Fig. 1.8 Illustration of the bias-variance compromise (expected error, variance and bias plotted against model flexibility)

Figure 1.8 illustrates the bias-variance trade-off. On the x-axis the model com-
plexity increases from left to right. While a method with low flexibility has a low
variance, it usually suffers from high bias. The variance increases while the bias
decreases with increasing model flexibility. The effect changes in the middle of the
plot, where variance and bias cross. The expected error is minimal in the middle of
the plot, where bias and variance reach a similar level. For practical problems and
data sets, the bias-variance trade-off has to be considered when the decision for a
particular method is made.
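A quick way to observe this trade-off empirically is to fit polynomials of increasing degree to repeated noisy samples of the same target and measure how much the fits move around. The sketch below does this with NumPy's polyfit; the sine target, the noise level and the degrees compared are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
x_grid = np.linspace(0, 1, 50)

def fit_on_random_sample(degree):
    """Draw a fresh noisy training set and return the fitted curve on a fixed grid."""
    x = rng.uniform(0, 1, 20)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 20)   # noisy observations
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, x_grid)

for degree in (1, 9):
    fits = np.array([fit_on_random_sample(degree) for _ in range(200)])
    variance = fits.var(axis=0).mean()                    # spread across training sets
    bias_sq = ((fits.mean(axis=0) - np.sin(2 * np.pi * x_grid)) ** 2).mean()
    print(f"degree {degree}: variance ≈ {variance:.3f}, squared bias ≈ {bias_sq:.3f}")
# The inflexible degree-1 model shows low variance but high bias; degree 9 shows the opposite.
```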

1.8 Data into Probabilities

Figure 1.9 shows the measurements of a feature $x$ for two different classes,
$C_1$ and $C_2$. Members of class $C_2$ tend to have larger values of feature $x$
than members of class $C_1$, but there is some overlap between the two classes.
Under such conditions, the correct class is easy to predict at the extremes of
the range of each class, but it is unclear what to do in the middle [3].

Fig. 1.9 Histograms of feature x values against their probability p(x) for the two classes C1 and C2 (x ranging roughly from 30 to 90)

Suppose that we are trying to classify the written letters 'a' and 'b' based on
their height (as shown in Fig. 1.10). Most people write their letter 'a' smaller
than their 'b', but not everybody. However, in this example, another kind of
information can be used to solve the classification problem. We know that in
English texts the letter 'a' is much more common than the letter 'b'. If we see a
letter that is either an 'a' or a 'b' in normal writing, then there is a 75%
chance that it is an 'a'. We are using prior knowledge to estimate the
probability that the letter is an 'a': in this example, $p(C_1) = 0.75$,
$p(C_2) = 0.25$. If we were not allowed to see the letter at all, and just had to
classify it, then by picking 'a' every time we would be right 75% of the time.
In order to give a prediction, it is necessary to know the value $x$ of the
discriminant feature. It would be a mistake to use only the occurrence (a priori)
probabilities $p(C_1)$ and $p(C_2)$. Normally, a classification problem is
formulated through the definition of a data set which contains a set of values of
$x$ and the class of each exemplar. Under such conditions, it is easy to
calculate the value of $p(C_1)$ (we just count how many times out of the total
the class was $C_1$ and divide by the total number of examples), and also another
useful measurement: the conditional probability of $C_1$ given that $x$ has value
$X$: $p(C_1|X)$. The conditional probability tells us how likely it is that the
class is $C_1$ given that the value of $x$ is $X$. So in Fig. 1.9 the value of
$p(C_1|X)$ will be much larger for small values of $X$ than for large values.
Clearly, this is exactly what we want to calculate in order to perform
classification. The question is how to get to this conditional probability, since
we cannot read it directly from the histogram. The first thing that we need to do
to get these values is to quantize the measurement $x$, which just means that we
put it into one of a discrete set of values $\{X\}$, such as the bins in a
histogram. This is exactly what is plotted in Fig. 1.9. Now, if we have lots of
examples of the two classes, and the histogram bins that their measurements fall
into, we can compute $p(C_i, X_j)$, which is the joint probability and tells us
how often a measurement of $C_i$ fell into histogram bin $X_j$. We do this by
looking in histogram bin $X_j$, counting the number of elements of $C_i$, and
dividing by the total number of examples of any class.

Fig. 1.10 Letters "a" and "b" in the pixel context

We can also define $p(X_j | C_i)$, which is a different conditional probability
and tells us how often (in the training set) there is a measurement of $X_j$
given that the example is a member of class $C_i$. Again, we can get this
information from the histogram by counting the number of examples of class $C_i$
in histogram bin $X_j$ and dividing by the number of examples of that class
(in any bin).
So we have now worked out two things from our training data: the joint
probability $p(C_i, X_j)$ and the conditional probability $p(X_j | C_i)$. Since
we actually want to compute $p(C_i | X_j)$, we need to know how to link these
things together. As some of you may already know, the answer is Bayes' rule,
which is what we are now going to derive. There is a link between the joint
probability and the conditional probability. It is:

$p(C_i, X_j) = p(X_j | C_i) \times p(C_i)$    (1.2)

or equivalently:

$p(C_i, X_j) = p(C_i | X_j) \times p(X_j)$    (1.3)

Clearly, the right-hand sides of these two equations must be equal to each other,
since they are both equal to $p(C_i, X_j)$, and so with one division we can write:

$p(C_i | X_j) = \dfrac{p(X_j | C_i)\, p(C_i)}{p(X_j)}$    (1.4)

This is Bayes' rule. If you don't already know it, learn it: it is the most
important equation in machine learning. It relates the posterior probability
$p(C_i | X_j)$ to the prior probability $p(C_i)$ and the class-conditional
probability $p(X_j | C_i)$. The denominator (the term on the bottom of the
fraction) acts to normalize everything, so that all the probabilities sum to 1.
It might not be clear how to compute this term. However, if we notice that any
observation $X_k$ has to belong to some class $C_i$, then we can marginalize over
the classes to compute:

$p(X_k) = \sum_i p(X_k | C_i)\, p(C_i)$    (1.5)

The reason why Bayes' rule is so important is that it lets us obtain the
posterior probability, which is what we actually want, by calculating things that
are much easier to compute. We can estimate the prior probabilities by looking at
how often each class appears in our training set, and we can get the
class-conditional probabilities from the histogram of the feature values for the
training set. We can use the posterior probability to assign each new observation
to one of the classes by picking the class $C_i$ for which:

$p(C_i | x) > p(C_j | x) \quad \forall i \neq j$    (1.6)

where $x$ is a vector of feature values instead of just one feature. This is
known as the maximum a posteriori or MAP hypothesis, and it gives us a way to
choose which class to output. The question is whether this is the right thing to
do. There has been quite a lot of research in both the statistical and machine
learning literatures into what is the right question to ask about our data in
order to perform classification, but we are going to skate over it very lightly.
The MAP question is: what is the most likely class given the training data?
Suppose that there are three possible output classes, and for a particular input
the posterior probabilities of the classes are $p(C_1|x) = 0.35$,
$p(C_2|x) = 0.45$, $p(C_3|x) = 0.2$. The MAP hypothesis therefore tells us that
this input is in class $C_2$, because that is the class with the highest
posterior probability. Now suppose that, based on the class that the data is in,
we want to do something. If the class is $C_1$ or $C_3$ then we do action 1, and
if the class is $C_2$ then we do action 2. As an example, suppose that the inputs
are the results of a blood test, the three classes are different possible
diseases, and the output is whether or not to treat with a particular antibiotic.
The MAP method has told us that the output is $C_2$, and so we will not treat the
disease. But what is the probability that the input does not belong to class
$C_2$, and so should have been treated with the antibiotic? It is
$1 - p(C_2|x) = 0.55$. So the MAP prediction seems to be wrong: we should treat
with the antibiotic, because overall it is more likely. This method, where we
take into account the final outcomes of all of the classes, is called the Bayes'
optimal classification. It minimizes the probability of misclassification rather
than maximizing the posterior probability.
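A minimal sketch of this pipeline is given below: class priors and class-conditional histograms are estimated by counting, Bayes' rule (Eq. 1.4) gives the posterior, and the MAP rule (Eq. 1.6) picks the class. The synthetic feature values and the bin edges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 1-D feature: class C1 ('a' heights) tends to be smaller than C2 ('b' heights)
x = np.concatenate([rng.normal(50, 8, 750), rng.normal(70, 8, 250)])
y = np.array([0] * 750 + [1] * 250)            # class indices for C1 and C2
bins = np.linspace(20, 100, 17)                # quantization of the measurement x

priors = np.array([np.mean(y == c) for c in (0, 1)])                 # p(C_i)
cond = np.array([np.histogram(x[y == c], bins=bins)[0] / np.sum(y == c)
                 for c in (0, 1)])                                    # p(X_j | C_i)

def map_class(value):
    """Posterior via Bayes' rule (Eq. 1.4), then MAP decision (Eq. 1.6)."""
    j = np.clip(np.digitize(value, bins) - 1, 0, len(bins) - 2)  # histogram bin of the value
    evidence = np.sum(cond[:, j] * priors)                        # p(X_j), Eq. 1.5
    posterior = cond[:, j] * priors / evidence                    # p(C_i | X_j)
    return np.argmax(posterior), posterior

print(map_class(45.0))   # small values -> class 0 (C1) with high posterior
print(map_class(75.0))   # large values -> class 1 (C2)
```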

References

1. Kramer, O.: Machine Learning for Evolution Strategies. Springer, Switzerland (2016)
2. Murphy, K.P.: Machine Learning: A Probabilistic Perspective. MIT Press, London (2011)
3. Marsland, S.: Machine Learning: An Algorithmic Perspective. CRC Press, Boca Raton (2015)
4. Smola, A., Vishwanathan, S.V.N.: Introduction to Machine Learning. Cambridge University Press,
Cambridge (2008)
Chapter 2
Optimization

2.1 Definition of an Optimization Problem

The vast majority of image processing and pattern recognition algorithms use some
form of optimization, as they intend to find a solution which is "best" according
to some criterion. From a general perspective, an optimization problem is a
situation that requires deciding on a choice from a set of possible alternatives
in order to reach a predefined or required benefit at minimal cost [1].
Consider a public transportation system of a city, for example. Here the system
has to find the “best” route to a destination location. In order to rate alternative
solutions and eventually find out which solution is “best,” a suitable criterion has to
be applied. A reasonable criterion could be the distance of the routes. We then
would expect the optimization algorithm to select the route of shortest distance as a
solution. Observe, however, that other criteria are possible, which might lead to
different “optimal” solutions, e.g., number of transfers, ticket price or the time it
takes to travel the route leading to the fastest route as a solution.
Mathematically speaking, optimization can be described as follows: given a
function $f : S \to \mathbb{R}$, which is called the objective function, find the
argument which minimizes $f$:

$x^* = \arg\min_{x \in S} f(x)$    (2.1)

$S$ defines the so-called solution set, which is the set of all possible
solutions for the optimization problem. Sometimes, the unknown(s) $x$ are
referred to as design variables. The function $f$ describes the optimization
criterion, i.e., it enables us to calculate a quantity which indicates the
"quality" of a particular $x$.
In our example, $S$ is composed of the subway trajectories, bus lines, etc.,
stored in the database of the system, $x$ is the route the system has to find,
and the optimization criterion $f(x)$ (which measures the quality of a possible
solution) could calculate the ticket price or the distance to the destination
(or a combination of both), depending on our preferences.
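As a minimal illustration of Eq. 2.1 with a discrete solution set, the sketch below picks the "best" route from a handful of candidates using a weighted cost of price and distance; the candidate routes, their attributes and the weights are invented for the example.

```python
# Hypothetical candidate routes S: (name, ticket price, distance in km)
routes = [("metro+bus", 2.5, 12.0), ("direct bus", 1.8, 16.0), ("two metros", 3.0, 10.2)]

def cost(route, w_price=1.0, w_dist=0.2):
    """Objective function f(x): weighted sum of ticket price and distance."""
    _, price, dist = route
    return w_price * price + w_dist * dist

best = min(routes, key=cost)   # x* = arg min_{x in S} f(x) over the finite set S
print(best[0], cost(best))
```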
Sometimes there also exist one or more additional constraints which the solution
$x$ has to satisfy. In that case we talk about constrained optimization (as
opposed to unconstrained optimization, if no such constraint exists). In summary,
an optimization problem has the following components:
• One or more design variables x for which a solution has to be found
• An objective function f(x) describing the optimization criterion
• A solution set S specifying the set of possible solutions x
• (optional) One or more constraints on x
In order to be of practical use, an optimization algorithm has to find a solution in
a reasonable amount of time with reasonable accuracy. Apart from the performance
of the algorithm employed, this also depends on the problem at hand itself. If we
can hope for a numerical solution, we say that the problem is well-posed. For
assessing whether an optimization problem is well-posed, the following conditions
must be fulfilled:
1. A solution exists.
2. There is only one solution to the problem, i.e., the solution is unique.
3. The relationship between the solution and the initial conditions is such that
small perturbations of the initial conditions result in only small variations of $x^*$.

2.2 Classical Optimization

Once a task has been transformed into an objective function minimization problem,
the next step is to choose an appropriate optimizer. Optimization algorithms can be
divided in two groups: derivative-based and derivative-free [2].
In general, $f(x)$ may have a nonlinear form with respect to the adjustable
parameter $x$. Due to the complexity of $f(\cdot)$, classical methods often use
an iterative algorithm to explore the input space effectively. In iterative
descent methods, the next point $x_{k+1}$ is determined by a step down from the
current point $x_k$ in a direction vector $d$:

$x_{k+1} = x_k + \alpha d$    (2.2)

where $\alpha$ is a positive step size regulating to what extent to proceed in
that direction. When the direction $d$ in Eq. 2.2 is determined on the basis of
the gradient $g$ of the objective function $f(\cdot)$, such methods are known as
gradient-based techniques.
The method of steepest descent is one of the oldest techniques for optimizing a
given function. This technique represents the basis for many derivative-based
methods. Under such a method, Eq. 2.2 becomes the well-known gradient formula:

$x_{k+1} = x_k - \alpha\, g(f(x_k))$    (2.3)

However, classical derivative-based optimization can be effective as long as the
objective function fulfills two requirements:
– The objective function must be two-times differentiable.
– The objective function must be uni-modal, i.e., have a single minimum.
A simple example of a differentiable and uni-modal objective function is

$f(x_1, x_2) = 10 - e^{-(x_1^2 + 3x_2^2)}$    (2.4)

Figure 2.1 shows the function defined in Eq. 2.4.
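To connect Eqs. 2.3 and 2.4, the following sketch applies steepest descent to this uni-modal function using its analytic gradient; the step size, starting point and iteration count are illustrative choices.

```python
import numpy as np

def f(x):
    """Uni-modal objective of Eq. 2.4."""
    return 10.0 - np.exp(-(x[0] ** 2 + 3.0 * x[1] ** 2))

def grad_f(x):
    """Analytic gradient of Eq. 2.4."""
    e = np.exp(-(x[0] ** 2 + 3.0 * x[1] ** 2))
    return np.array([2.0 * x[0] * e, 6.0 * x[1] * e])

x = np.array([0.8, -0.6])        # initial point
alpha = 0.5                       # step size
for _ in range(200):              # x_{k+1} = x_k - alpha * g(f(x_k)), Eq. 2.3
    x = x - alpha * grad_f(x)

print(x, f(x))                    # converges towards the minimum at (0, 0), where f = 9
```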


Unfortunately, under such circumstances, classical methods are only applicable to
a few types of optimization problems. For combinatorial optimization, there is no
definition of differentiation.
Furthermore, there are many reasons why an objective function might not be
differentiable. For example, the “floor” operation in Eq. 2.5 quantizes the function
in Eq. 2.4, transforming Fig. 2.1 into the stepped shape seen in Fig. 2.2. At each
step’s edge, the objective function is non-differentiable:
 
$f(x_1, x_2) = \operatorname{floor}\!\left(10 - e^{-(x_1^2 + 3x_2^2)}\right)$    (2.5)

Even in differentiable objective functions, gradient-based methods might not


work. Let us consider the minimization of the Griewank function as an example.

Fig. 2.1 Uni-modal objective function



Fig. 2.2 A non-differentiable, quantized, uni-modal function

 
minimize $f(x_1, x_2) = \dfrac{x_1^2 + x_2^2}{4000} - \cos(x_1)\cos\!\left(\dfrac{x_2}{\sqrt{2}}\right) + 1$
subject to $-30 \le x_1 \le 30$, $-30 \le x_2 \le 30$    (2.6)

From the optimization problem formulated in Eq. 2.6, it is quite easy to see that
the global optimal solution is $x_1 = x_2 = 0$. Figure 2.3 visualizes the
function defined in Eq. 2.6. As Fig. 2.3 shows, the objective function has many
local optimal solutions (it is multimodal), so that gradient methods started from
a randomly generated initial solution will converge to one of them with a large
probability.
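The following sketch makes this concrete by running the same steepest-descent loop as before on the Griewank function of Eq. 2.6 from several random starting points; the step size, iteration budget and number of restarts are illustrative assumptions.

```python
import numpy as np

def griewank(x):
    """2-D Griewank function of Eq. 2.6; global minimum f = 0 at (0, 0)."""
    return (x[0] ** 2 + x[1] ** 2) / 4000.0 - np.cos(x[0]) * np.cos(x[1] / np.sqrt(2)) + 1.0

def grad_griewank(x):
    """Analytic gradient of the 2-D Griewank function."""
    g0 = x[0] / 2000.0 + np.sin(x[0]) * np.cos(x[1] / np.sqrt(2))
    g1 = x[1] / 2000.0 + np.cos(x[0]) * np.sin(x[1] / np.sqrt(2)) / np.sqrt(2)
    return np.array([g0, g1])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.uniform(-30, 30, size=2)        # random initial solution in the feasible box
    for _ in range(2000):                    # plain gradient descent, Eq. 2.3
        x = x - 0.5 * grad_griewank(x)
    print(np.round(x, 3), round(griewank(x), 4))
# Most runs end in a local minimum with f > 0 instead of the global optimum at (0, 0).
```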
Considering these limitations of gradient-based methods, image processing and
pattern recognition problems are difficult to integrate with classical
optimization methods. Instead, other techniques which do not make such
assumptions and which can be applied to a wide range of problems are required [3].
Fig. 2.3 The Griewank multi-modal function



2.3 Evolutionary Computation Methods

Evolutionary computation (EC) methods [4] are derivative-free procedures that do not
require the objective function to be either two-times differentiable or uni-modal.
Therefore, EC methods, as global optimization algorithms, can deal
with non-convex, nonlinear, and multimodal problems subject to linear or nonlinear
constraints with continuous or discrete decision variables.
The field of EC has a rich history. With the development of computational
devices and the demands of industrial processes, the necessity to solve optimization
problems arose even though there was not sufficient prior knowledge (hypotheses)
about the problem to apply a classical method. In fact, in the majority of image
processing and pattern recognition cases, the problems are highly nonlinear,
characterized by a noisy fitness, or lack an explicit analytical expression, as the
objective function might be the result of an experimental or simulation process.
In this context, EC methods have been proposed as optimization alternatives.
An EC technique is a general method for solving optimization problems. It uses
an objective function in an abstract and efficient manner, typically without exploiting
deeper insights into its mathematical properties. EC methods require neither
hypotheses about the optimization problem nor any kind of prior knowledge of the
objective function. This treatment of objective functions as “black boxes” [5] is the
most prominent and attractive feature of EC methods.
EC methods obtain knowledge about the structure of an optimization problem by
exploiting information gathered from the candidate solutions evaluated in the past.
This knowledge is used to construct new candidate solutions that are likely to be of
better quality, as the sketch below illustrates.
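To make this black-box idea concrete, the sketch below implements an elementary (μ+λ) evolution strategy; it is not one of the algorithms cited in this chapter, and the population sizes, mutation strength, and decay factor are arbitrary illustrative choices. Only function evaluations of past candidate solutions are used to construct new candidates.

```python
import numpy as np

def griewank(x):
    # Treated purely as a black box: no gradient or structural information is used.
    return (x[0]**2 + x[1]**2) / 4000.0 - np.cos(x[0]) * np.cos(x[1] / np.sqrt(2.0)) + 1.0

rng = np.random.default_rng(1)
lower, upper = -30.0, 30.0
mu, lam, sigma = 10, 40, 3.0                    # parents, offspring, mutation strength

pop = rng.uniform(lower, upper, size=(mu, 2))   # initial candidate solutions
for gen in range(200):
    # New candidates are perturbations of randomly chosen past solutions.
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = np.clip(parents + rng.normal(0.0, sigma, size=(lam, 2)), lower, upper)
    # Selection keeps the mu best candidates found so far (knowledge from past evaluations).
    combined = np.vstack([pop, offspring])
    fitness = np.array([griewank(x) for x in combined])
    pop = combined[np.argsort(fitness)[:mu]]
    sigma *= 0.98                               # slowly reduce the mutation strength

print("best solution:", np.round(pop[0], 3), "f =", griewank(pop[0]))
```

No differentiability or uni-modality is assumed anywhere in this loop, which is precisely why such procedures can be applied to problems like Eq. 2.6, where gradient-based methods fail.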
Recently, several EC methods have been proposed with interesting results. Such
approaches use as inspiration our scientific understanding of biological, natural, or
social systems, which at some level of abstraction can be represented as optimization
processes [6]. These methods include the social behavior of bird flocking and fish
schooling, as in the Particle Swarm Optimization (PSO) algorithm [7]; the cooperative
behavior of bee colonies, as in the Artificial Bee Colony (ABC) technique [8]; the
improvisation process that occurs when a musician searches for a better state of
harmony, as in Harmony Search (HS) [9]; the emulation of bat behavior, as in the Bat
Algorithm (BA) [10]; the mating behavior of firefly insects, as in the Firefly
(FF) method [11]; social-spider behavior, as in Social Spider Optimization (SSO) [12];
the simulation of animal behavior in a group, as in Collective Animal Behavior [13];
the emulation of immunological systems, as in the Clonal Selection Algorithm
(CSA) [14]; the simulation of the electromagnetism phenomenon, as in the
Electromagnetism-Like algorithm [15]; and the emulation of differential and
conventional evolution in species, as in Differential Evolution (DE) [16] and
Genetic Algorithms (GA) [17], respectively.
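As a representative of these algorithms, the following sketch outlines the basic global-best PSO scheme in the form in which it is commonly described; it is not the implementation of [7], and the swarm size, inertia weight, and acceleration coefficients are conventional but arbitrary choices for this illustration.

```python
import numpy as np

def griewank(X):
    # 2-D Griewank of Eq. 2.6, evaluated row-wise for a swarm X of shape (n, 2)
    return (X[:, 0]**2 + X[:, 1]**2) / 4000.0 \
        - np.cos(X[:, 0]) * np.cos(X[:, 1] / np.sqrt(2.0)) + 1.0

rng = np.random.default_rng(2)
n, dim = 30, 2                      # swarm size and problem dimensionality
w, c1, c2 = 0.7, 1.5, 1.5           # inertia weight and acceleration coefficients
lower, upper = -30.0, 30.0

X = rng.uniform(lower, upper, size=(n, dim))   # particle positions
V = np.zeros((n, dim))                         # particle velocities
pbest, pbest_f = X.copy(), griewank(X)         # best position seen by each particle
gbest = pbest[np.argmin(pbest_f)]              # best position seen by the swarm

for it in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = np.clip(X + V, lower, upper)
    f = griewank(X)
    improved = f < pbest_f                     # update personal bests
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]          # update the global best

print("best position:", np.round(gbest, 3), "f =", griewank(gbest[None, :])[0])
```

The swarm exchanges information only through evaluated positions (pbest and gbest), so the same loop applies unchanged to any objective that can be evaluated, differentiable or not.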