Machine Learning with Python Cookbook, 2nd Edition (First Early Release), Kyle Gallatin
Machine Learning with Python Cookbook
SECOND EDITION
Practical Solutions from Preprocessing to Deep Learning
With Early Release ebooks, you get books in their earliest form—the author’s
raw and unedited content as they write—so you can take advantage of these
technologies long before the official release of these titles.
1.0 Introduction
NumPy is a foundational tool of the Python machine learning stack.
NumPy allows for efficient operations on the data structures often
used in machine learning: vectors, matrices, and tensors. While
NumPy is not the focus of this book, it will show up frequently
throughout the following chapters. This chapter covers the most
common NumPy operations we are likely to run into while working
on machine learning workflows.
1.1 Creating a Vector
Problem
You need to create a vector.
Solution
Use NumPy to create a one-dimensional array:
# Load library
import numpy as np

# Create a vector as a row
vector = np.array([1, 2, 3])
Discussion
NumPy’s main data structure is the multidimensional array. A vector
is just an array with a single dimension. In order to create a vector,
we simply create a one-dimensional array. Just like vectors, these
arrays can be represented horizontally (i.e., rows) or vertically (i.e.,
columns).
See Also
Vectors, Math Is Fun
Euclidean vector, Wikipedia
1.2 Creating a Matrix
Problem
You need to create a matrix.
Solution
Use NumPy to create a two-dimensional array:
# Load library
import numpy as np
# Create a matrix
matrix = np.array([[1, 2],
[1, 2],
[1, 2]])
Discussion
To create a matrix we can use a NumPy two-dimensional array. In
our solution, the matrix contains three rows and two columns (a
column of 1s and a column of 2s).
NumPy actually has a dedicated matrix data structure:

# Create a matrix object
np.mat([[1, 2],
        [1, 2],
        [1, 2]])
matrix([[1, 2],
        [1, 2],
        [1, 2]])

Note, however, that the matrix class is discouraged in modern NumPy; arrays are the standard data structure, and most NumPy operations return arrays, not matrix objects.
See Also
Matrix, Wikipedia
Matrix, Wolfram MathWorld
1.3 Creating a Sparse Matrix
Problem
Given data with very few nonzero values, you want to efficiently
represent it.
Solution
Create a sparse matrix:
# Load libraries
import numpy as np
from scipy import sparse
# Create a matrix
matrix = np.array([[0, 0],
                   [0, 1],
                   [3, 0]])

# Create compressed sparse row (CSR) matrix
matrix_sparse = sparse.csr_matrix(matrix)
Discussion
A frequent situation in machine learning is having a huge amount of
data; however, most of the elements in the data are zeros. For
example, imagine a matrix where the columns are every movie on
Netflix, the rows are every Netflix user, and the values are how many
times a user has watched that particular movie. This matrix would
have tens of thousands of columns and millions of rows! However,
since most users do not watch most movies, the vast majority of
elements would be zero.
A sparse matrix is a matrix in which most elements are 0. Sparse
matrices only store nonzero elements and assume all other values
will be zero, leading to significant computational savings. In our
solution, we created a NumPy array with two nonzero values, then
converted it into a sparse matrix. If we view the sparse matrix we
can see that only the nonzero values are stored:
  (1, 1)	1
  (2, 0)	3
If we create a much larger matrix with many more zero elements and then convert it to a sparse matrix:

# Create larger matrix
matrix_large = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                         [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
                         [3, 0, 0, 0, 0, 0, 0, 0, 0, 0]])

# Create compressed sparse row (CSR) matrix
matrix_large_sparse = sparse.csr_matrix(matrix_large)

# View sparse matrix
print(matrix_large_sparse)
  (1, 1)	1
  (2, 0)	3

As we can see, despite the fact that we added many more zero elements in the larger matrix, its sparse representation is exactly the same as our original sparse matrix. That is, the addition of zero elements did not change the size of the sparse matrix.
There are many different types of sparse matrices, such as compressed sparse row, compressed sparse column, list of lists, and dictionary of keys. While an explanation of the different types and their implications is outside the scope of this book, it is worth noting that there is no "best" sparse matrix type: there are meaningful differences between them, and we should be conscious about why we are choosing one type over another.
See Also
Sparse matrices, SciPy documentation
101 Ways to Store a Sparse Matrix
1.4 Preallocating NumPy Arrays
Problem
You need to pre-allocate arrays of a given size with some value.
Solution
NumPy has functions for generating vectors and matrices of any size
using 0s, 1s, or values of your choice.
# Load library
import numpy as np

# Generate a vector of length 5 containing all zeros
vector = np.zeros(shape=5)
Discussion
Generating arrays prefilled with data is useful for a number of
purposes, such as making code more performant or having synthetic
data to test algorithms with. In many programming languages, pre-
allocating an array of default values (such as 0s) is considered
common practice.
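A short sketch of the other generators the solution mentions, np.ones and np.full (the shapes and fill values here are arbitrary examples):

```python
import numpy as np

# A 3x3 matrix of ones
ones_matrix = np.ones(shape=(3, 3))

# A 3x3 matrix filled with an arbitrary value of our choice
full_matrix = np.full(shape=(3, 3), fill_value=7)

print(full_matrix[0, 0])  # 7
```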
1.5 Selecting Elements
Problem
You need to select one or more elements in a vector or matrix.
Solution
NumPy’s arrays make it easy to select elements in vectors or
matrices:
# Load library
import numpy as np

# Create row vector
vector = np.array([1, 2, 3, 4, 5, 6])

# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
Discussion
Like most things in Python, NumPy arrays are zero-indexed, meaning
that the index of the first element is 0, not 1. With that caveat,
NumPy offers a wide variety of methods for selecting (i.e., indexing
and slicing) elements or groups of elements in arrays:
# Select all elements of a vector
vector[:]
array([1, 2, 3, 4, 5, 6])

# Select everything up to and including the third element
vector[:3]
array([1, 2, 3])

# Select everything after the third element
vector[3:]
array([4, 5, 6])

# Select the last element
vector[-1]
6

# Reverse the vector
vector[::-1]
array([6, 5, 4, 3, 2, 1])

# Select the first two rows and all columns of the matrix
matrix[:2, :]
array([[1, 2, 3],
       [4, 5, 6]])

# Select all rows and the second column
matrix[:, 1:2]
array([[2],
       [5],
       [8]])
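Beyond slicing, NumPy also supports selection with a boolean mask; this sketch is not part of the original recipe but uses only standard NumPy indexing:

```python
import numpy as np

vector = np.array([1, 2, 3, 4, 5, 6])

# Build a boolean mask and use it to select elements
mask = vector > 3
selected = vector[mask]

print(selected)  # [4 5 6]
```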
1.6 Describing a Matrix
Problem
You want to describe the shape, size, and dimensions of the matrix.
Solution
Use the shape, size, and ndim attributes of a NumPy object:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]])
# View number of rows and columns
matrix.shape
(3, 4)

# View number of elements (rows * columns)
matrix.size
12

# View number of dimensions
matrix.ndim
2
Discussion
This might seem basic (and it is); however, time and again it will be
valuable to check the shape and size of an array both for further
calculations and simply as a gut check after some operation.
1.7 Applying Functions Over Each Element
Problem
You want to apply some function to all elements in an array.
Solution
Use NumPy’s vectorize method:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# Create function that adds 100 to something
add_100 = lambda i: i + 100

# Create vectorized function
vectorized_add_100 = np.vectorize(add_100)

# Apply function to all elements in matrix
vectorized_add_100(matrix)
array([[101, 102, 103],
       [104, 105, 106],
       [107, 108, 109]])
Discussion
NumPy’s vectorize class converts a function into a function that
can apply to all elements in an array or slice of an array. It’s worth
noting that vectorize is essentially a for loop over the elements
and does not increase performance. Furthermore, NumPy arrays
allow us to perform operations between arrays even if their
dimensions are not the same (a process called broadcasting). For
example, we can create a much simpler version of our solution using
broadcasting:

# Add 100 to all elements using broadcasting
matrix + 100
array([[101, 102, 103],
       [104, 105, 106],
       [107, 108, 109]])
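Broadcasting also works between arrays of different shapes, as long as their trailing dimensions are compatible; a small sketch (not from the original recipe):

```python
import numpy as np

matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# A (3,) vector is broadcast across each row of the (3, 3) matrix
row_offsets = np.array([10, 20, 30])
result = matrix + row_offsets

print(result[0])  # [11 22 33]
```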
1.8 Finding the Maximum and Minimum Values
Problem
You need to find the maximum or minimum value in an array.
Solution
Use NumPy's max and min methods:

# Load library
import numpy as np

# Create matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

# Return maximum element
np.max(matrix)
9

# Return minimum element
np.min(matrix)
1
Discussion
Often we want to know the maximum and minimum value in an
array or subset of an array. This can be accomplished with the max
and min methods. Using the axis parameter we can also apply the
operation along a certain axis:
# Find maximum element in each column
np.max(matrix, axis=0)
array([7, 8, 9])

# Find maximum element in each row
np.max(matrix, axis=1)
array([3, 6, 9])
1.9 Calculating the Average, Variance, and Standard Deviation
Problem
You want to calculate some descriptive statistics about an array.
Solution
Use NumPy’s mean, var, and std:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
# Return mean
np.mean(matrix)
5.0
# Return variance
np.var(matrix)
6.666666666666667

# Return standard deviation
np.std(matrix)
2.581988897471611
Discussion
Just like with max and min, we can easily get descriptive statistics
about the whole matrix or do calculations along a single axis:

# Find the mean value in each column
np.mean(matrix, axis=0)
array([4., 5., 6.])
1.10 Reshaping Arrays
Problem
You want to change the shape (number of rows and columns) of an
array without changing the element values.
Solution
Use NumPy’s reshape:
# Load library
import numpy as np

# Create 4x3 matrix
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9],
                   [10, 11, 12]])

# Reshape matrix into 2x6 matrix
matrix.reshape(2, 6)
array([[ 1,  2,  3,  4,  5,  6],
       [ 7,  8,  9, 10, 11, 12]])
Discussion
reshape allows us to restructure an array so that we maintain the
same data but it is organized as a different number of rows and
columns. The only requirement is that the shape of the original and
new matrix contain the same number of elements (i.e., the same
size). We can see the size of a matrix using size:
matrix.size
12
One useful argument in reshape is -1, which effectively means "as many as needed," so reshape(1, -1) means one row and as many columns as needed:

matrix.reshape(1, -1)
array([[ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12]])

Finally, if we provide one integer, reshape will return a one-dimensional array of that length:

matrix.reshape(12)
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12])
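One caveat worth noting (not covered in the recipe): reshape returns a view of the original data when possible, so modifying the reshaped array can modify the original. A sketch:

```python
import numpy as np

matrix = np.arange(1, 13).reshape(4, 3)

# reshape returns a view onto the same underlying data when possible
reshaped = matrix.reshape(2, 6)
reshaped[0, 0] = 99

print(matrix[0, 0])  # 99: the original was modified too
```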
1.11 Transposing a Vector or Matrix
Problem
You need to transpose a vector or matrix.
Solution
Use the T attribute:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
# Transpose matrix
matrix.T
array([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
Discussion
Transposing is a common operation in linear algebra where the
column and row indices of each element are swapped. One nuanced
point that is typically overlooked outside of a linear algebra class is
that, technically, a vector cannot be transposed because it is just a
collection of values:
# Transpose vector
np.array([1, 2, 3, 4, 5, 6]).T
array([1, 2, 3, 4, 5, 6])
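That said, it is common to refer to transposing a row vector into a column vector; with an explicit second dimension, a two-dimensional row vector does transpose. A sketch:

```python
import numpy as np

# Transposing a 1-D array is a no-op
flat = np.array([1, 2, 3]).T
print(flat.shape)  # (3,)

# A 2-D row vector transposes into a column vector
column = np.array([[1, 2, 3]]).T
print(column.shape)  # (3, 1)
```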
1.12 Flattening a Matrix
Problem
You need to transform a matrix into a one-dimensional array.
Solution
Use flatten:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
# Flatten matrix
matrix.flatten()
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
Discussion
flatten is a simple method to transform a matrix into a one-
dimensional array. Alternatively, we can use reshape to create a
row vector:
matrix.reshape(1, -1)
array([[1, 2, 3, 4, 5, 6, 7, 8, 9]])
Another common way to flatten arrays is ravel. Unlike flatten, which returns a copy, ravel returns a view when possible and can also flatten a list of arrays:

# Create two matrices
matrix_a = np.array([[1, 2],
                     [3, 4]])
matrix_b = np.array([[5, 6],
                     [7, 8]])

# Flatten a list of matrices
np.ravel([matrix_a, matrix_b])
array([1, 2, 3, 4, 5, 6, 7, 8])
1.13 Finding the Rank of a Matrix
Problem
You need to know the rank of a matrix.
Solution
Use NumPy’s linear algebra method matrix_rank:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 1, 1],
                   [1, 1, 10],
                   [1, 1, 15]])

# Return matrix rank
np.linalg.matrix_rank(matrix)
2
Discussion
The rank of a matrix is the dimensions of the vector space spanned
by its columns or rows. Finding the rank of a matrix is easy in
NumPy thanks to matrix_rank.
See Also
The Rank of a Matrix, CliffsNotes
1.14 Getting the Diagonal of a Matrix
Problem
You need to get the diagonal elements of a matrix.
Solution
Use diagonal:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 2, 3],
[2, 4, 6],
[3, 8, 9]])
# Return diagonal elements
matrix.diagonal()
array([1, 4, 9])
Discussion
NumPy makes getting the diagonal elements of a matrix easy with
diagonal. It is also possible to get a diagonal off from the main
diagonal by using the offset parameter:
# Return diagonal one above the main diagonal
matrix.diagonal(offset=1)
array([2, 6])

# Return diagonal one below the main diagonal
matrix.diagonal(offset=-1)
array([2, 8])
1.15 Calculating the Trace of a Matrix
Problem
You need to calculate the trace of a matrix.
Solution
Use trace:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 2, 3],
[2, 4, 6],
[3, 8, 9]])
# Return trace
matrix.trace()
14
Discussion
The trace of a matrix is the sum of the diagonal elements and is
often used under the hood in machine learning methods. Given a
NumPy multidimensional array, we can calculate the trace using
trace. We can also return the diagonal of a matrix and calculate its
sum:
# Return diagonal and sum its elements
sum(matrix.diagonal())
14
See Also
The Trace of a Square Matrix
1.16 Calculating Dot Products
Problem
You need to calculate the dot product of two vectors.
Solution
Use NumPy’s dot:
# Load library
import numpy as np

# Create two vectors
vector_a = np.array([1, 2, 3])
vector_b = np.array([4, 5, 6])

# Calculate dot product
np.dot(vector_a, vector_b)
32
Discussion
The dot product of two vectors, a and b, is defined as the sum of the products of their corresponding elements: Σᵢ aᵢbᵢ. In our solution that is 1×4 + 2×5 + 3×6 = 32. In Python 3.5+ we can also use the @ operator:

# Calculate dot product
vector_a @ vector_b
32
See Also
Vector dot product and vector length, Khan Academy
Dot Product, Paul’s Online Math Notes
1.17 Adding and Subtracting Matrices
Problem
You want to add or subtract two matrices.
Solution
Use NumPy’s add and subtract:
# Load library
import numpy as np
# Create matrix
matrix_a = np.array([[1, 1, 1],
[1, 1, 1],
[1, 1, 2]])
# Create matrix
matrix_b = np.array([[1, 3, 1],
[1, 3, 1],
[1, 3, 8]])
# Add two matrices
np.add(matrix_a, matrix_b)
array([[ 2,  4,  2],
       [ 2,  4,  2],
       [ 2,  4, 10]])

# Subtract two matrices
np.subtract(matrix_a, matrix_b)
array([[ 0, -2,  0],
       [ 0, -2,  0],
       [ 0, -2, -6]])
Discussion
Alternatively, we can simply use the + and - operators:

# Add two matrices
matrix_a + matrix_b
array([[ 2,  4,  2],
       [ 2,  4,  2],
       [ 2,  4, 10]])
1.18 Multiplying Matrices
Problem
You want to multiply two matrices.
Solution
Use NumPy’s dot:
# Load library
import numpy as np
# Create matrix
matrix_a = np.array([[1, 1],
[1, 2]])
# Create matrix
matrix_b = np.array([[1, 3],
[1, 2]])
# Multiply two matrices
np.dot(matrix_a, matrix_b)
array([[2, 5],
       [3, 7]])
Discussion
Alternatively, in Python 3.5+ we can use the @ operator:

# Multiply two matrices
matrix_a @ matrix_b
array([[2, 5],
       [3, 7]])

If we want to do element-wise multiplication, we can use the * operator:

# Multiply two matrices element-wise
matrix_a * matrix_b
array([[1, 3],
       [1, 4]])
See Also
Array vs. Matrix Operations, MathWorks
1.19 Inverting a Matrix
Problem
You want to calculate the inverse of a square matrix.
Solution
Use NumPy’s linear algebra inv method:
# Load library
import numpy as np
# Create matrix
matrix = np.array([[1, 4],
[2, 5]])
# Calculate inverse of matrix
np.linalg.inv(matrix)
array([[-1.66666667,  1.33333333],
       [ 0.66666667, -0.33333333]])
Discussion
The inverse of a square matrix, A, is a second matrix, A⁻¹, such that AA⁻¹ = I, where I is the identity matrix. In NumPy we can use linalg.inv to calculate A⁻¹ if it exists. To see this in action, we can multiply a matrix by its inverse, and the result is the identity matrix:

# Multiply matrix and its inverse
matrix @ np.linalg.inv(matrix)
array([[1., 0.],
       [0., 1.]])
See Also
Inverse of a Matrix
1.20 Generating Random Values
Problem
You want to generate pseudorandom values.
Solution
Use NumPy’s random:
# Load library
import numpy as np
# Set seed
np.random.seed(0)

# Generate three random floats between 0.0 and 1.0
np.random.random(3)
array([0.5488135 , 0.71518937, 0.60276338])

We can also generate integers:

# Generate three random integers between 0 and 10
np.random.randint(0, 11, 3)
array([3, 7, 9])
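For new code, NumPy now recommends the Generator API over the legacy np.random functions shown above; a small sketch (the seed value is arbitrary):

```python
import numpy as np

# Create a seeded generator for reproducible results
rng = np.random.default_rng(seed=0)

# Three floats in [0, 1) and three integers between 0 and 10
floats = rng.random(3)
integers = rng.integers(low=0, high=11, size=3)

print(floats.shape, integers.shape)
```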
2.0 Introduction
The first step in any machine learning endeavor is to get the raw
data into our system. The raw data might be a logfile, dataset file,
database, or cloud blob store such as Amazon S3. Furthermore,
often we will want to retrieve data from multiple sources.
The recipes in this chapter look at methods of loading data from a
variety of sources, including CSV files and SQL databases. We also
cover methods of generating simulated data with desirable
properties for experimentation. Finally, while there are many ways to
load data in the Python ecosystem, we will focus on using the
pandas library’s extensive set of methods for loading external data,
and using scikit-learn—an open source machine learning library in
Python—for generating simulated data.
2.1 Loading a Sample Dataset
Problem
You want to load a preexisting sample dataset from the scikit-learn
library.
Solution
scikit-learn comes with a number of popular datasets for you to use:
load_iris
Contains 150 observations on the measurements of Iris flowers.
It is a good dataset for exploring classification algorithms.
load_digits
Contains 1,797 observations from images of handwritten digits. It
is a good dataset for teaching image classification.
To see more details on any of the datasets above, you can print the
DESCR attribute:
.. _digits_dataset:
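The DESCR line above is the first line of the digits dataset's description. A sketch of loading that dataset and pulling out its feature matrix and target vector (data and target are the standard scikit-learn Bunch fields):

```python
from sklearn import datasets

# Load the digits dataset
digits = datasets.load_digits()

# Create the features matrix and target vector
features = digits.data
target = digits.target

# The DESCR attribute holds the dataset description
print(digits.DESCR.splitlines()[0])  # .. _digits_dataset:
```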
See Also
scikit-learn toy datasets
The Digit Dataset
2.2 Creating a Simulated Dataset
Problem
You need to generate a dataset of simulated data.
Solution
scikit-learn offers many methods for creating simulated data. Of
those, three methods are particularly useful: make_regression,
make_classification, and make_blobs.
When we want a dataset designed to be used with linear regression,
make_regression is a good choice:
# Load library
from sklearn.datasets import make_regression

# Generate features matrix, target vector, and the true coefficients
features, target, coefficients = make_regression(n_samples = 100,
                                                 n_features = 3,
                                                 n_informative = 3,
                                                 n_targets = 1,
                                                 noise = 0.0,
                                                 coef = True,
                                                 random_state = 1)

# View feature matrix and target vector
print('Feature Matrix\n', features[:3])
print('Target Vector\n', target[:3])
Feature Matrix
 [[ 1.29322588 -0.61736206 -0.11044703]
 [-2.793085    0.36633201  1.93752881]
 [ 0.80186103 -0.18656977  0.0465673 ]]
Target Vector
 [-10.37865986  25.5124503   19.67705609]
If we are interested in creating a simulated dataset for classification, we can use make_classification:

# Load library
from sklearn.datasets import make_classification

# Generate features matrix and target vector
features, target = make_classification(n_samples = 100,
                                       n_features = 3,
                                       n_informative = 3,
                                       n_redundant = 0,
                                       n_classes = 2,
                                       weights = [.25, .75],
                                       random_state = 1)
Finally, if we want a dataset designed to work well with clustering techniques, scikit-learn offers make_blobs:

# Load library
from sklearn.datasets import make_blobs

# Generate feature matrix and target vector
features, target = make_blobs(n_samples = 100,
                              n_features = 2,
                              centers = 3,
                              cluster_std = 0.5,
                              shuffle = True,
                              random_state = 1)

# View feature matrix and target vector
print('Feature Matrix\n', features[:3])
print('Target Vector\n', target[:3])
Feature Matrix
 [[ -1.22685609   3.25572052]
 [ -9.57463218  -4.38310652]
 [-10.71976941  -4.20558148]]
Target Vector
 [0 1 1]
Discussion
As might be apparent from the solutions, make_regression
returns a feature matrix of float values and a target vector of float
values, while make_classification and make_blobs return a
feature matrix of float values and a target vector of integers
representing membership in a class.
scikit-learn’s simulated datasets offer extensive options to control the
type of data generated. scikit-learn’s documentation contains a full
description of all the parameters, but a few are worth noting.
In make_regression and make_classification,
n_informative determines the number of features that are used
to generate the target vector. If n_informative is less than the
total number of features (n_features), the resulting dataset will
have redundant features that can be identified through feature
selection techniques.
In addition, make_classification contains a weights
parameter that allows us to simulate datasets with imbalanced
classes. For example, weights = [.25, .75] would return a
dataset with 25% of observations belonging to one class and 75% of
observations belonging to a second class.
For make_blobs, the centers parameter determines the number
of clusters generated. Using the matplotlib visualization library,
we can visualize the clusters generated by make_blobs:
# Load library
import matplotlib.pyplot as plt
# View scatterplot
plt.scatter(features[:,0], features[:,1], c=target)
plt.show()
See Also
make_regression documentation
make_classification documentation
make_blobs documentation
2.3 Loading a CSV File
Problem
You need to import a comma-separated values (CSV) file.
Solution
Use the pandas library’s read_csv to load a local or hosted CSV
file:
# Load library
import pandas as pd
# Create URL
url = 'https://raw.githubusercontent.com/chrisalbon/sim_data/master/data.csv'

# Load dataset
dataframe = pd.read_csv(url)
Discussion
There are two things to note about loading CSV files. First, it is often
useful to take a quick look at the contents of the file before loading.
It can be very helpful to see how a dataset is structured beforehand
and what parameters we need to set to load in the file. Second,
read_csv has over 30 parameters and therefore the documentation
can be daunting. Fortunately, those parameters are mostly there to
allow it to handle a wide variety of CSV formats. For example, CSV
files get their names from the fact that the values are literally
separated by commas (e.g., one row might be 2,"2015-01-01
00:00:00",0); however, it is common for “CSV” files to use other
characters as separators, like tabs. pandas’ sep parameter allows us
to define the delimiter used in the file. Although it is not always the
case, a common formatting issue with CSV files is that the first line
of the file is used to define column headers (e.g., integer,
datetime, category in our solution). The header parameter
allows us to specify if or where a header row exists. If a header row
does not exist, we set header=None.
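As a sketch of the sep and header parameters described above (using an in-memory tab-separated example rather than the hosted file):

```python
import io
import pandas as pd

# A tab-separated "file" with no header row
tsv_data = "1\t2015-01-01 00:00:00\t0\n2\t2015-01-01 00:00:01\t1\n"

# sep defines the delimiter; header=None says no header row exists
dataframe = pd.read_csv(io.StringIO(tsv_data), sep="\t", header=None)

print(dataframe.shape)  # (2, 3)
```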
2.4 Loading an Excel File
Problem
You need to import an Excel spreadsheet.
Solution
Use the pandas library’s read_excel to load an Excel spreadsheet:
# Load library
import pandas as pd
# Create URL
url = 'https://raw.githubusercontent.com/chrisalbon/sim_data/master/data.xlsx'
# Load data
dataframe = pd.read_excel(url, sheet_name=0, header=1)
# View the first two rows
dataframe.head(2)

   5  2015-01-01 00:00:00  0
0  5  2015-01-01 00:00:01  0
1  9  2015-01-01 00:00:02  0
Discussion
This solution is similar to our solution for reading CSV files. The main difference is the additional parameter, sheet_name, that specifies which sheet in the Excel file we wish to load. sheet_name can accept both strings containing the name of the sheet and integers pointing to sheet positions (zero-indexed). If we need to load multiple sheets, we include them as a list. For example, sheet_name=[0, 1, 2, "Monthly Sales"] will return a dictionary of pandas DataFrames containing the first, second, and third sheets and the sheet named Monthly Sales.
2.5 Loading a JSON File
Problem
You need to load a JSON file for data preprocessing.
Solution
The pandas library provides read_json to convert a JSON file into
a pandas object:
# Load library
import pandas as pd
# Create URL
url = 'https://raw.githubusercontent.com/chrisalbon/sim_data/master/data.json'
# Load data
dataframe = pd.read_json(url, orient='columns')
Discussion
Importing JSON files into pandas is similar to the last few recipes we
have seen. The key difference is the orient parameter, which
indicates to pandas how the JSON file is structured. However, it
might take some experimenting to figure out which argument
(split, records, index, columns, and values) is the right one.
Another helpful tool pandas offers is json_normalize, which can
help convert semistructured JSON data into a pandas DataFrame.
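A sketch of json_normalize on a small nested record (the data here is illustrative, not from the book's hosted file):

```python
import pandas as pd

# Semistructured records with a nested field
records = [
    {"id": 1, "user": {"name": "a", "age": 30}},
    {"id": 2, "user": {"name": "b", "age": 25}},
]

# Flatten the nested "user" dict into columns like "user.name"
dataframe = pd.json_normalize(records)

print(list(dataframe.columns))  # ['id', 'user.name', 'user.age']
```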
See Also
json_normalize documentation
2.6 Loading a Parquet File
Problem
You need to load a parquet file.
Solution
The pandas read_parquet function allows us to read in parquet
files:
# Load library
import pandas as pd
# Create URL
url = 'https://machine-learning-python-cookbook.s3.amazonaws.com/data.parquet'
# Load data
dataframe = pd.read_parquet(url)
Discussion
Parquet is a popular data storage format in the big data space. It is often used with big data tools such as Hadoop and Spark. While PySpark is outside the focus of this book, it's highly likely that companies operating at a large scale will use an efficient data storage format such as Parquet, so it's valuable to know how to read it into a DataFrame and manipulate it.
See Also
Apache Parquet Documentation
2.7 Querying a SQL Database
Problem
You need to load data from a database using the structured query language (SQL).
Solution
pandas' read_sql_query allows us to make a SQL query to a database and load it:

# Load libraries
import pandas as pd
from sqlalchemy import create_engine

# Create a connection to the database
database_connection = create_engine('sqlite:///sample.db')

# Load data
dataframe = pd.read_sql_query('SELECT * FROM data', database_connection)
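A self-contained sketch of the same pattern using Python's built-in sqlite3 module as the database connection (the table name and values here are illustrative):

```python
import sqlite3
import pandas as pd

# Create an in-memory SQLite database with a small table
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE data (integer INTEGER, category INTEGER)")
connection.executemany("INSERT INTO data VALUES (?, ?)", [(5, 0), (9, 0)])

# Query the table into a DataFrame
dataframe = pd.read_sql_query("SELECT * FROM data", connection)

print(dataframe.shape)  # (2, 2)
```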