Pradeepta Mishra

Explainable AI Recipes
Implement Solutions to Model Explainability and
Interpretability with Python
Pradeepta Mishra
Bangalore, Karnataka, India

ISBN 978-1-4842-9028-6 e-ISBN 978-1-4842-9029-3


https://doi.org/10.1007/978-1-4842-9029-3

© Pradeepta Mishra 2023

Apress Standard

The use of general descriptive names, registered names, trademarks,


service marks, etc. in this publication does not imply, even in the
absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general
use.

The publisher, the authors, and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the
material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Apress imprint is published by the registered company APress


Media, LLC, part of Springer Nature.
The registered company address is: 1 New York Plaza, New York, NY
10004, U.S.A.
I dedicate this book to my late father; my mother; my lovely wife, Prajna;
and my daughters, Priyanshi (Aarya) and Adyanshi (Aadya). This work
would not have been possible without their inspiration, support, and
encouragement.
Introduction
Artificial intelligence plays a crucial role in determining the decisions
businesses make. When a machine makes a decision, humans usually
want to understand whether the decision is authentic or whether it was
generated in error. If business stakeholders are not convinced by a
decision, they will not trust the machine learning system, and
artificial intelligence adoption will gradually decline within that
organization. To make the decision process more transparent,
developers must be able to document the explainability of AI and ML
model decisions. This book provides a series of solutions to problems
that require explainability and interpretability. Adopting an AI model
and developing a responsible AI system require explainability as a
component.
This book covers model interpretation for supervised learning
linear models, including important features for regression and
classification models, partial dependency analysis for regression and
classification models, and influential data point analysis for both
classification and regression models. Supervised learning with
nonlinear models is explored using state-of-the-art frameworks such as
SHAP, covering global explanations with SHAP values and local
interpretation with LIME. This book will also give you an understanding
of bagging- and boosting-based ensemble models for supervised learning
tasks such as regression and classification, as well as explainability for
time-series models using LIME and SHAP, and for natural language
processing tasks such as text classification and sentiment analysis using
ELI5 and ALIBI.
The most complex models for classification and regression, such as
neural network models and deep learning models, are explained using
the CAPTUM framework, which shows feature attribution, neuron
attribution, and activation attribution.
This book attempts to make AI models explainable to help
developers increase the adoption of AI-based models within their
organizations and bring more transparency to decision-making. After
reading this book, you will be able to use Python libraries such as Alibi,
SHAP, LIME, Skater, ELI5, and CAPTUM. Explainable AI Recipes provides
a problem-solution approach to demonstrate each machine learning
model, and shows how to use Python’s XAI libraries to answer
questions of explainability and build trust with AI models and machine
learning models. All source code can be downloaded from
github.com/apress/explainable-ai-recipes.
Any source code or other supplementary material referenced by the
author in this book is available to readers on GitHub
(https://github.com/Apress). For more detailed information, please
visit www.apress.com/source-code.
Acknowledgments
I would like to thank my wife, Prajna, for her continuous inspiration
and support and for sacrificing her weekends to help me complete this
book; and my daughters, Aarya and Aadya, for being patient throughout
the writing process.
A big thank-you to Celestin Suresh John and Mark Powers for fast-
tracking the whole process and guiding me in the right direction.
I would like to thank the authors of the Appliances Energy
Prediction dataset (http://archive.ics.uci.edu/ml) for
making it available: D. Dua and C. Graff. I use this dataset in the book to
show how to develop a model and explain the predictions generated by
a regression model for the purpose of model explainability using
various explainable libraries.
Table of Contents
Chapter 1: Introducing Explainability and Setting Up Your
Development Environment
Recipe 1-1. SHAP Installation
Problem
Solution
How It Works
Recipe 1-2. LIME Installation
Problem
Solution
How It Works
Recipe 1-3. SHAPASH Installation
Problem
Solution
How It Works
Recipe 1-4. ELI5 Installation
Problem
Solution
How It Works
Recipe 1-5. Skater Installation
Problem
Solution
How It Works
Recipe 1-6. Skope-rules Installation
Problem
Solution
How It Works
Recipe 1-7. Methods of Model Explainability
Problem
Solution
How It Works
Conclusion
Chapter 2: Explainability for Linear Supervised Models
Recipe 2-1. SHAP Values for a Regression Model on All
Numerical Input Variables
Problem
Solution
How It Works
Recipe 2-2. SHAP Partial Dependency Plot for a Regression
Model
Problem
Solution
How It Works
Recipe 2-3. SHAP Feature Importance for Regression Model
with All Numerical Input Variables
Problem
Solution
How It Works
Recipe 2-4. SHAP Values for a Regression Model on All Mixed
Input Variables
Problem
Solution
How It Works
Recipe 2-5. SHAP Partial Dependency Plot for Regression
Model for Mixed Input
Problem
Solution
How It Works
Recipe 2-6. SHAP Feature Importance for a Regression Model
with All Mixed Input Variables
Problem
Solution
How It Works
Recipe 2-7. SHAP Strength for Mixed Features on the Predicted
Output for Regression Models
Problem
Solution
How It Works
Recipe 2-8. SHAP Values for a Regression Model on Scaled Data
Problem
Solution
How It Works
Recipe 2-9. LIME Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 2-10. ELI5 Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 2-11. How the Permutation Model in ELI5 Works
Problem
Solution
How It Works
Recipe 2-12. Global Explanation for Logistic Regression Models
Problem
Solution
How It Works
Recipe 2-13. Partial Dependency Plot for a Classifier
Problem
Solution
How It Works
Recipe 2-14. Global Feature Importance from the Classifier
Problem
Solution
How It Works
Recipe 2-15. Local Explanations Using LIME
Problem
Solution
How It Works
Recipe 2-16. Model Explanations Using ELI5
Problem
Solution
How It Works
Conclusion
References
Chapter 3: Explainability for Nonlinear Supervised Models
Recipe 3-1. SHAP Values for Tree Models on All Numerical
Input Variables
Problem
Solution
How It Works
Recipe 3-2. Partial Dependency Plot for Tree Regression Model
Problem
Solution
How It Works
Recipe 3-3. SHAP Feature Importance for Regression Models
with All Numerical Input Variables
Problem
Solution
How It Works
Recipe 3-4. SHAP Values for Tree Regression Models with All
Mixed Input Variables
Problem
Solution
How It Works
Recipe 3-5. SHAP Partial Dependency Plot for Regression
Models with Mixed Input
Problem
Solution
How It Works
Recipe 3-6. SHAP Feature Importance for Tree Regression
Models with All Mixed Input Variables
Problem
Solution
How It Works
Recipe 3-7. LIME Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 3-8. ELI5 Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 3-9. How the Permutation Model in ELI5 Works
Problem
Solution
How It Works
Recipe 3-10. Global Explanation for Decision Tree Models
Problem
Solution
How It Works
Recipe 3-11. Partial Dependency Plot for a Nonlinear Classifier
Problem
Solution
How It Works
Recipe 3-12. Global Feature Importance from the Nonlinear
Classifier
Problem
Solution
How It Works
Recipe 3-13. Local Explanations Using LIME
Problem
Solution
How It Works
Recipe 3-14. Model Explanations Using ELI5
Problem
Solution
How It Works
Conclusion
Chapter 4: Explainability for Ensemble Supervised Models
Recipe 4-1. Explainable Boosting Machine Interpretation
Problem
Solution
How It Works
Recipe 4-2. Partial Dependency Plot for Tree Regression
Models
Problem
Solution
How It Works
Recipe 4-3. Explain an Extreme Gradient Boosting Model with
All Numerical Input Variables
Problem
Solution
How It Works
Recipe 4-4. Explain a Random Forest Regressor with Global and
Local Interpretations
Problem
Solution
How It Works
Recipe 4-5. Explain the Catboost Regressor with Global and
Local Interpretations
Problem
Solution
How It Works
Recipe 4-6. Explain the EBM Classifier with Global and Local
Interpretations
Problem
Solution
How It Works
Recipe 4-7. SHAP Partial Dependency Plot for Regression
Models with Mixed Input
Problem
Solution
How It Works
Recipe 4-8. SHAP Feature Importance for Tree Regression
Models with Mixed Input Variables
Problem
Solution
How It Works
Recipe 4-9. Explaining the XGBoost Model
Problem
Solution
How It Works
Recipe 4-10. Random Forest Regressor for Mixed Data Types
Problem
Solution
How It Works
Recipe 4-11. Explaining the Catboost Model
Problem
Solution
How It Works
Recipe 4-12. LIME Explainer for the Catboost Model and
Tabular Data
Problem
Solution
How It Works
Recipe 4-13. ELI5 Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 4-14. How the Permutation Model in ELI5 Works
Problem
Solution
How It Works
Recipe 4-15. Global Explanation for Ensemble Classification
Models
Problem
Solution
How It Works
Recipe 4-16. Partial Dependency Plot for a Nonlinear Classifier
Problem
Solution
How It Works
Recipe 4-17. Global Feature Importance from the Nonlinear
Classifier
Problem
Solution
How It Works
Recipe 4-18. XGBoost Model Explanation
Problem
Solution
How It Works
Recipe 4-19. Explain a Random Forest Classifier
Problem
Solution
How It Works
Recipe 4-20. Catboost Model Interpretation for Classification
Scenario
Problem
Solution
How It Works
Recipe 4-21. Local Explanations Using LIME
Problem
Solution
How It Works
Recipe 4-22. Model Explanations Using ELI5
Problem
Solution
How It Works
Recipe 4-23. Multiclass Classification Model Explanation
Problem
Solution
How It Works
Conclusion
Chapter 5: Explainability for Natural Language Processing
Recipe 5-1. Explain Sentiment Analysis Text Classification
Using SHAP
Problem
Solution
How It Works
Recipe 5-2. Explain Sentiment Analysis Text Classification
Using ELI5
Problem
Solution
How It Works
Recipe 5-3. Local Explanation Using ELI5
Problem
Solution
How It Works
Conclusion
Chapter 6: Explainability for Time-Series Models
Recipe 6-1. Explain Time-Series Models Using LIME
Problem
Solution
How It Works
Recipe 6-2. Explain Time-Series Models Using SHAP
Problem
Solution
How It Works
Conclusion
Chapter 7: Explainability for Deep Learning Models
Recipe 7-1. Explain MNIST Images Using a Gradient Explainer
Based on Keras
Problem
Solution
How It Works
Recipe 7-2. Use Kernel Explainer–Based SHAP Values from a
Keras Model
Problem
Solution
How It Works
Recipe 7-3. Explain a PyTorch-Based Deep Learning Model
Problem
Solution
How It Works
Conclusion
Index
About the Author
Pradeepta Mishra
is an AI/ML leader, experienced data
scientist, and artificial intelligence
architect. He currently heads NLP, ML,
and AI initiatives for five products at
FOSFOR by LTI, a leading-edge innovator
in AI and machine learning based out of
Bangalore, India. He has expertise in
designing artificial intelligence systems
for performing tasks such as
understanding natural language and
making recommendations based on
natural language processing. He has filed
12 patents as an inventor and has
authored and coauthored five books,
including R Data Mining Blueprints (Packt Publishing, 2016), R: Mining
Spatial, Text, Web, and Social Media Data (Packt Publishing, 2017),
PyTorch Recipes (Apress, 2019), and Practical Explainable AI Using
Python (Apress, 2023). There are two courses available on Udemy
based on these books.
Pradeepta presented a keynote talk on the application of
bidirectional LSTM for time-series forecasting at the 2018 Global Data
Science Conference. He delivered the TEDx talk “Can Machines Think?”
on the power of artificial intelligence in transforming industries and job
roles across industries. He has also delivered more than 150 tech talks
on data science, machine learning, and artificial intelligence at various
meetups, technical institutions, universities, and community forums. He
is on LinkedIn (www.linkedin.com/in/pradeepta/) and Twitter
(@pradmishra1).
About the Technical Reviewer
Bharath Kumar Bolla
has more than ten years of experience
and is currently working as a senior data
science engineer consultant at Verizon,
Bengaluru. He has a PG diploma in data
science from Praxis Business School and
an MS in life sciences from Mississippi
State University. He previously worked
as a data scientist at the University of
Georgia, Emory University, and Eurofins
LLC & Happiest Minds. At Happiest
Minds, he worked on AI-based digital
marketing products and NLP-based
solutions in the education domain. Along
with his day-to-day responsibilities,
Bharath is a mentor and an active
researcher. To date, he has published ten articles in journals and peer-
reviewed conferences. He is particularly interested in unsupervised and
semisupervised learning and efficient deep learning architectures in
NLP and computer vision.
© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
P. Mishra, Explainable AI Recipes
https://doi.org/10.1007/978-1-4842-9029-3_1

1. Introducing Explainability and Setting Up Your Development Environment
Pradeepta Mishra1
(1) Bangalore, Karnataka, India

Industries in which artificial intelligence has been applied include banking, financial services, insurance,
healthcare, manufacturing, retail, and pharmaceuticals. Some of these industries have regulatory
requirements that mandate model explainability. Artificial intelligence involves tasks such as classifying
objects and recognizing patterns to detect fraud. Every learning system requires three things: input data,
processing, and an output. If the performance of a learning system improves over time by learning from
new examples or data, it is called a machine learning system. When the number of features or the volume of
data for a machine learning task increases, applying classical machine learning techniques becomes
time-consuming; that is when deep learning techniques are used.
Figure 1-1 represents the relationships between artificial intelligence, machine learning, and deep
learning.

Figure 1-1 Relationships among ML, DL, and AI

After preprocessing and feature creation, there can be hundreds of thousands of features that need
to be computed to produce the output. Training a supervised machine learning model on such data takes
significant time to produce the model object. To achieve scalability in this task, we need deep learning
algorithms, such as a recurrent neural network. This is how artificial intelligence is connected to deep
learning and machine learning.
In the classical predictive modeling scenario, a function is identified, and the input data is usually fit to
that predetermined function to produce the output. In a modern predictive modeling scenario, the input
data and output are both shown to a group of functions, and the machine identifies the function that best
approximates the output for a given set of inputs. There is a need to explain the output of machine learning
and deep learning models when performing regression- and classification-related tasks. These are the
reasons why explainability is required:
Trust: To gain users’ trust on the predicted output
Reliability: To make the user rely on the predicted output
Regulatory: To meet regulatory and compliance requirements
Adoption: To increase AI adoption among the users
Fairness: To remove any kind of discrimination in prediction
Accountability: To establish ownership of the predictions
There are various ways that explainability can be achieved: using statistical properties, probabilistic
properties and associations, and causality among the features. Broadly, model explanations can
be classified into two categories: global explanations and local explanations. The objective of a local
explanation is to understand the inference generated for one sample at a time by comparing it with the
nearest possible data points; a global explanation provides an idea of the overall model behavior.
The goal of this chapter is to introduce how to install various explainability libraries and interpret the
results generated by those explainability libraries.

Recipe 1-1. SHAP Installation


Problem
You want to install the SHAP (SHapley Additive exPlanations) library.

Solution
The solution to this problem is to use the simple pip or conda option.

How It Works
Let’s take a look at the following script examples. The SHAP Python library is based on a game theoretic
approach that attempts to explain local and as well as global explanations.

pip install shap

or

conda install -c conda-forge shap

Looking in indexes: https://pypi.org/simple, https://us-


python.pkg.dev/colab-wheels/public/simple/
Collecting shap
Downloading shap-0.41.0-cp37-cp37m-
manylinux_2_12_x86_64.manylinux2010_x86_64.whl (569 kB)
|████████████████████████████████| 569 kB 8.0 MB/s
Requirement already satisfied: tqdm>4.25.0 in /usr/local/lib/python3.7/dist-
packages (from shap) (4.64.1)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-
packages (from shap) (1.3.5)
Collecting slicer==0.0.7
Downloading slicer-0.0.7-py3-none-any.whl (14 kB)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.7/dist-
packages (from shap) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-
packages (from shap) (1.7.3)
Requirement already satisfied: scikit-learn in
/usr/local/lib/python3.7/dist-packages (from shap) (1.0.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-
packages (from shap) (1.21.6)
Requirement already satisfied: numba in /usr/local/lib/python3.7/dist-
packages (from shap) (0.56.2)
Requirement already satisfied: packaging>20.9 in
/usr/local/lib/python3.7/dist-packages (from shap) (21.3)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in
/usr/local/lib/python3.7/dist-packages (from packaging>20.9->shap) (3.0.9)
Requirement already satisfied: llvmlite<0.40,>=0.39.0dev0 in
/usr/local/lib/python3.7/dist-packages (from numba->shap) (0.39.1)
Requirement already satisfied: setuptools<60 in
/usr/local/lib/python3.7/dist-packages (from numba->shap) (57.4.0)
Requirement already satisfied: importlib-metadata in
/usr/local/lib/python3.7/dist-packages (from numba->shap) (4.12.0)
Requirement already satisfied: typing-extensions>=3.6.4 in
/usr/local/lib/python3.7/dist-packages (from importlib-metadata->numba-
>shap) (4.1.1)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-
packages (from importlib-metadata->numba->shap) (3.8.1)
Requirement already satisfied: python-dateutil>=2.7.3 in
/usr/local/lib/python3.7/dist-packages (from pandas->shap) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in
/usr/local/lib/python3.7/dist-packages (from pandas->shap) (2022.2.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-
packages (from python-dateutil>=2.7.3->pandas->shap) (1.15.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in
/usr/local/lib/python3.7/dist-packages (from scikit-learn->shap) (3.1.0)
Requirement already satisfied: joblib>=0.11 in
/usr/local/lib/python3.7/dist-packages (from scikit-learn->shap) (1.1.0)
Installing collected packages: slicer, shap
Successfully installed shap-0.41.0 slicer-0.0.7

Recipe 1-2. LIME Installation


Problem
You want to install the LIME Python library.

Solution
You can install the LIME library using pip or conda.

How It Works
Let’s take a look at the following example script:

pip install lime

or

conda install -c conda-forge lime

Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-


wheels/public/simple/
Collecting lime
Downloading lime-0.2.0.1.tar.gz (275 kB)
|████████████████████████████████| 275 kB 7.5 MB/s
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-pac
(from lime) (3.2.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages
(from lime) (1.21.6)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages
(from lime) (1.7.3)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages
lime) (4.64.1)
Requirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.7/
packages (from lime) (1.0.2)
Requirement already satisfied: scikit-image>=0.12 in /usr/local/lib/python3.7/
packages (from lime) (0.18.3)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-
packages (from scikit-image>=0.12->lime) (2.6.3)
Requirement already satisfied: PyWavelets>=1.1.1 in /usr/local/lib/python3.7/d
packages (from scikit-image>=0.12->lime) (1.3.0)
Requirement already satisfied: pillow!=7.1.0,!=7.1.1,>=4.3.0 in
/usr/local/lib/python3.7/dist-packages (from scikit-image>=0.12->lime) (7.1.2)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist
packages (from scikit-image>=0.12->lime) (2.9.0)
Requirement already satisfied: tifffile>=2019.7.26 in
/usr/local/lib/python3.7/dist-packages (from scikit-image>=0.12->lime) (2021.1
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/d
packages (from matplotlib->lime) (1.4.4)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-
packages (from matplotlib->lime) (0.11.0)
Requirement already satisfied: python-dateutil>=2.1 in
/usr/local/lib/python3.7/dist-packages (from matplotlib->lime) (2.8.2)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in
/usr/local/lib/python3.7/dist-packages (from matplotlib->lime) (3.0.9)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/d
packages (from kiwisolver>=1.0.1->matplotlib->lime) (4.1.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packa
(from python-dateutil>=2.1->matplotlib->lime) (1.15.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in
/usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.18->lime) (3.1.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-
packages (from scikit-learn>=0.18->lime) (1.1.0)
Building wheels for collected packages: lime
Building wheel for lime (setup.py) ... done
Created wheel for lime: filename=lime-0.2.0.1-py3-none-any.whl size=283857
sha256=674ceb94cdcb54588f66c5d5bef5f6ae0326c76e645c40190408791cbe4311d5
Stored in directory:
/root/.cache/pip/wheels/ca/cb/e5/ac701e12d365a08917bf4c6171c0961bc880a8181359c
Successfully built lime
Installing collected packages: lime
Successfully installed lime-0.2.0.1

Recipe 1-3. SHAPASH Installation


Problem
You want to install SHAPASH.

Solution
If you want to use a combination of functions from both the LIME library and the SHAP library, then you can
use the SHAPASH library. You just have to install it, which is simple.

How It Works
Let’s take a look at the following code to install SHAPASH. This is not available on the Anaconda distribution;
the only way to install it is by using pip.

pip install shapash

Recipe 1-4. ELI5 Installation


Problem
You want to install ELI5.

Solution
Since this is a Python library, you can use pip.

How It Works
Let’s take a look at the following script:

pip install eli5


Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-
wheels/public/simple/
Collecting eli5
Downloading eli5-0.13.0.tar.gz (216 kB)
|████████████████████████████████| 216 kB 6.9 MB/s
Requirement already satisfied: attrs>17.1.0 in /usr/local/lib/python3.7/dist-
packages (from eli5) (22.1.0)
Collecting jinja2>=3.0.0
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
|████████████████████████████████| 133 kB 42.7 MB/s
Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.7/dist-
packages (from eli5) (1.21.6)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages
(from eli5) (1.7.3)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages
eli5) (1.15.0)
Requirement already satisfied: scikit-learn>=0.20 in /usr/local/lib/python3.7/
packages (from eli5) (1.0.2)
Requirement already satisfied: graphviz in /usr/local/lib/python3.7/dist-packa
(from eli5) (0.10.1)
Requirement already satisfied: tabulate>=0.7.7 in /usr/local/lib/python3.7/dis
packages (from eli5) (0.8.10)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.7/dis
packages (from jinja2>=3.0.0->eli5) (2.0.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-
packages (from scikit-learn>=0.20->eli5) (1.1.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in
/usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.20->eli5) (3.1.0)
Building wheels for collected packages: eli5
Building wheel for eli5 (setup.py) ... done
Created wheel for eli5: filename=eli5-0.13.0-py2.py3-none-any.whl size=10774
sha256=3e02d416bd1cc21aebce604207129919a096a92128d7d27c50be1f3a97d3b1de
Stored in directory:
/root/.cache/pip/wheels/cc/3c/96/3ead31a8e6c20fc0f1a707fde2e05d49a80b1b4b30096
Successfully built eli5
Installing collected packages: jinja2, eli5
Attempting uninstall: jinja2
Found existing installation: Jinja2 2.11.3
Uninstalling Jinja2-2.11.3:
Successfully uninstalled Jinja2-2.11.3
ERROR: pip's dependency resolver does not currently take into account all the
packages that are installed. This behavior is the source of the following
dependency conflicts.
flask 1.1.4 requires Jinja2<3.0,>=2.10.1, but you have jinja2 3.1.2 which is
incompatible.
Successfully installed eli5-0.13.0 jinja2-3.1.2

Recipe 1-5. Skater Installation


Problem
You want to install Skater.

Solution
Skater is an open-source framework to enable model interpretation for various kinds of machine learning
models. The Python-based Skater library provides both global and local interpretations and can be installed
using pip.

How It Works
Let’s take a look at the following script:

pip install skater

Recipe 1-6. Skope-rules Installation


Problem
You want to install Skope-rules.

Solution
Skope-rules offers a trade-off between the interpretability of a decision tree and the modeling power of a
random forest model. The solution is simple; you use the pip command.

How It Works
Let’s take a look at the following code:

pip install skope-rules


Looking in indexes: https://pypi.org/simple, https://us-
python.pkg.dev/colab-wheels/public/simple/
Collecting skope-rules
Downloading skope_rules-1.0.1-py3-none-any.whl (14 kB)
Requirement already satisfied: numpy>=1.10.4 in
/usr/local/lib/python3.7/dist-packages (from skope-rules) (1.21.6)
Requirement already satisfied: scikit-learn>=0.17.1 in
/usr/local/lib/python3.7/dist-packages (from skope-rules) (1.0.2)
Requirement already satisfied: pandas>=0.18.1 in
/usr/local/lib/python3.7/dist-packages (from skope-rules) (1.3.5)
Requirement already satisfied: scipy>=0.17.0 in
/usr/local/lib/python3.7/dist-packages (from skope-rules) (1.7.3)
Requirement already satisfied: pytz>=2017.3 in
/usr/local/lib/python3.7/dist-packages (from pandas>=0.18.1->skope-rules)
(2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in
/usr/local/lib/python3.7/dist-packages (from pandas>=0.18.1->skope-rules)
(2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-
packages (from python-dateutil>=2.7.3->pandas>=0.18.1->skope-rules) (1.15.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in
/usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.17.1->skope-
rules) (3.1.0)
Requirement already satisfied: joblib>=0.11 in
/usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.17.1->skope-
rules) (0.11)
Installing collected packages: skope-rules
Successfully installed skope-rules-1.0.1

Recipe 1-7. Methods of Model Explainability


Problem
There are various explainability libraries and methods. You want to identify the right method of model
explainability for your audience.

Solution
The choice of explainability method depends on who consumes the model output. If the consumer is the
business or senior management, the explanation should be simple and in plain English, without any
mathematical formulas. If the consumer is a data scientist or a machine learning engineer, the explanation
may include the mathematical formulas.

How It Works
The levels of transparency of the machine learning models can be categorized into three buckets, as shown
in Figure 1-2.

Figure 1-2 Methods of model explainability

Textual explanations require explaining the mathematical formula in plain English, which can help
business users or senior management. The interpretations can be designed based on model type and model
variant and can draw inferences from the model outcome. A template to draw inferences can be designed
and mapped to the model types, and then the templates can be filled in using some natural language
processing methods.
A visual explainability method can be used to generate charts, graphs such as dendrograms, or any other
types of graphs that best explain the relationships. The tree-based methods use if-else conditions on the
back end; hence, it is simple to show the causality and the relationship.
Using common examples and business scenarios from day-to-day operations and drawing parallels
between them can also be useful.
Which method you should choose depends on the problem that needs to be solved and the consumer of
the solution where the machine learning model is being used.

Conclusion
In various AI projects and initiatives, machine learning models generate predictions. Usually, to trust the
outcomes of a model, a detailed explanation is required. Many people are not comfortable explaining
machine learning model outcomes and cannot reason about the decisions of a model, which restricts AI
adoption. Explainability is required from a regulatory standpoint as well as from an auditing and
compliance point of view. In high-risk use cases such as medical imaging, object detection or pattern
recognition, and financial prediction and fraud detection, explainability is required to justify the decisions
of the machine learning model.
In this chapter, we set up the environment by installing various explainable AI libraries. Machine
learning model interpretability and explainability are the key focuses of this book. We are going to use
Python-based libraries, frameworks, methods, classes, and functions to explain the models.
In the next chapter, we are going to look at the linear models.
© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
P. Mishra, Explainable AI Recipes
https://doi.org/10.1007/978-1-4842-9029-3_2

2. Explainability for Linear Supervised Models


Pradeepta Mishra1
(1) Bangalore, Karnataka, India

A supervised learning model is trained to map input data to output data. A
supervised learning model can be of two types: regression or classification. In a regression scenario, the
output variable is numerical, whereas with classification, the output variable is binary or multinomial. A
binary output variable has two outcomes, such as true and false, accept and reject, or yes and no. In the
case of multinomial output variables, there can be more than two outcomes, such as high, medium, and low. In
this chapter, we are going to use explainability libraries to explain a regression model and a classification
model, both trained as linear models.
In the classical predictive modeling scenario, a function is identified, and the input data is usually
fit to that predetermined function to produce the output. In a modern predictive modeling scenario, the
input data and output are both shown to a group of functions, and the machine identifies the function that
best approximates the output for a given set of inputs. There is a need to explain the output of machine
learning and deep learning models when performing regression and classification tasks. Linear regression
and linear classification models are simpler to explain.
The goal of this chapter is to introduce various explainability libraries for linear models such as feature
importance, partial dependency plot, and local interpretation.

Recipe 2-1. SHAP Values for a Regression Model on All Numerical Input
Variables
Problem
You want to explain a regression model built on all the numeric features of a dataset.

Solution
A regression model on all the numeric features is trained, and then the trained model will be passed through
SHAP to generate global explanations and local explanations.

How It Works
Let’s take a look at the following script. The Shapely value can be called the SHAP value. It is used to explain
the model. It uses the impartial distribution of predictions from a cooperative game theory to attribute a
feature to the model’s predictions. Input features from the dataset are considered as players in the game.
The models function is considered the rules of the game. The Shapely value of a feature is computed based
on the following steps:
1. SHAP requires model retraining on all feature subsets; hence, usually it takes time if the explanation has
to be generated for larger datasets.

2. Identify a feature set from a list of features (let’s say there are 15 features, and we can select a subset
with 5 features).

3. For any particular feature, two models using the subset of features will be created, one with the feature
and another without the feature.
4. Then the prediction differences will be computed.

5. The differences in prediction are computed for all possible subsets of features.

6. The weighted average value of all possible differences is used to populate the feature importance.

If the weight of the feature is 0.000, then we can conclude that the feature is not important and does not
contribute to the model. If it is not equal to 0.000, then we can conclude that the feature has a role to play
in the prediction process.
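
To make the subset-and-difference procedure concrete, the following minimal sketch (not from the book) computes exact Shapley values for a toy two-feature linear model by enumerating every feature subset; the model, coefficients, and background values are illustrative assumptions.

import itertools
import math

# toy "model": f(x) = 3*x1 + 2*x2, explained for one sample against a
# zero-valued background that stands in for "absent" features
coef = {"x1": 3.0, "x2": 2.0}
sample = {"x1": 4.0, "x2": 5.0}
background = {"x1": 0.0, "x2": 0.0}

def value(subset):
    # model output when only the features in `subset` are "present"
    return sum(coef[f] * (sample[f] if f in subset else background[f])
               for f in coef)

features = list(coef)
n = len(features)
for f in features:
    others = [g for g in features if g != f]
    phi = 0.0
    for k in range(n):
        for subset in itertools.combinations(others, k):
            # Shapley weight for a subset of size k
            weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            # prediction difference with and without feature f
            phi += weight * (value(set(subset) | {f}) - value(set(subset)))
    print(f, "Shapley value =", phi)  # x1 -> 12.0, x2 -> 10.0

The weighted average of the prediction differences over all subsets is exactly the feature attribution described in the steps above.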
We are going to use a dataset from the UCI machine learning repository. The URL to access the dataset is
as follows:
https://archive.ics.uci.edu/ml/datasets/Appliances+energy+prediction
The objective is to predict the appliances’ energy use in Wh, using the features from sensors. There are
27 features in the dataset, and here we are trying to understand what features are important in predicting
the energy usage. See Table 2-1.

Table 2-1 Feature Description from the Energy Prediction Dataset

Feature Name Description Unit

Appliances Energy use In Wh
lights Energy use of light fixtures in the house In Wh
T1 Temperature in kitchen area In Celsius
RH_1 Humidity in kitchen area In %
T2 Temperature in living room area In Celsius
RH_2 Humidity in living room area In %
T3 Temperature in laundry room area In Celsius
RH_3 Humidity in laundry room area In %
T4 Temperature in office room In Celsius
RH_4 Humidity in office room In %
T5 Temperature in bathroom In Celsius
RH_5 Humidity in bathroom In %
T6 Temperature outside the building (north side) In Celsius
RH_6 Humidity outside the building (north side) In %
T7 Temperature in ironing room In Celsius
RH_7 Humidity in ironing room In %
T8 Temperature in teenager room 2 In Celsius
RH_8 Humidity in teenager room 2 In %
T9 Temperature in parents room In Celsius
RH_9 Humidity in parents room In %
T_out Temperature outside (from the Chievres weather station) In Celsius
Press_mm_hg Pressure (from the Chievres weather station) In mm Hg
RH_out Humidity outside (from the Chievres weather station) In %
Windspeed Wind speed (from the Chievres weather station) In m/s
Visibility Visibility (from the Chievres weather station) In km
Tdewpoint Dew point temperature (from the Chievres weather station) In Celsius
rv1 Random variable 1 Nondimensional
rv2 Random variable 2 Nondimensional
import pandas as pd
df_lin_reg = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/00374/energydata_complete.csv')
del df_lin_reg['date']
df_lin_reg.info()
df_lin_reg.columns
Index(['Appliances', 'lights', 'T1', 'RH_1', 'T2', 'RH_2', 'T3', 'RH_3',
'T4', 'RH_4', 'T5', 'RH_5', 'T6', 'RH_6', 'T7', 'RH_7', 'T8', 'RH_8', 'T9',
'RH_9', 'T_out', 'Press_mm_hg', 'RH_out', 'Windspeed', 'Visibility',
'Tdewpoint', 'rv1', 'rv2'], dtype='object')

#y is the dependent variable, that we need to predict


y = df_lin_reg.pop('Appliances')
# X is the set of input features
X = df_lin_reg

import pandas as pd
import shap
import sklearn

# a simple linear model initialized


model = sklearn.linear_model.LinearRegression()

# linear regression model trained


model.fit(X, y)

print("Model coefficients:\n")
for i in range(X.shape[1]):
print(X.columns[i], "=", model.coef_[i].round(5))

Model coefficients:

lights = 1.98971
T1 = -0.60374
RH_1 = 15.15362
T2 = -17.70602
RH_2 = -13.48062
T3 = 25.4064
RH_3 = 4.92457
T4 = -3.46525
RH_4 = -0.17891
T5 = -0.02784
RH_5 = 0.14096
T6 = 7.12616
RH_6 = 0.28795
T7 = 1.79463
RH_7 = -1.54968
T8 = 8.14656
RH_8 = -4.66968
T9 = -15.87243
RH_9 = -0.90102
T_out = -10.22819
Press_mm_hg = 0.13986
RH_out = -1.06375
Windspeed = 1.70364
Visibility = 0.15368
Tdewpoint = 5.0488
rv1 = -0.02078
rv2 = -0.02078

# compute the SHAP values for the linear model


explainer = shap.Explainer(model.predict, X)

# SHAP value calculation


shap_values = explainer(X)
Permutation explainer: 19736it [16:15, 20.08it/s]
This part of the script takes time, as it is a computationally intensive process. The explainer function
calculates permutations, which means taking a feature set and generating the difference in prediction
between the presence and the absence of each feature. For faster calculation, we can reduce the sample
size to a smaller set, say 1,000 or 2,000 records (a sketch of this shortcut follows). In the previous script,
we are using the entire population of 19,735 records to calculate the SHAP values. This part of the script
could also be sped up with Python multiprocessing, which is beyond the scope of this chapter.
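
As a sketch of that shortcut (an assumption, not the book's code), the background data passed to the explainer can be a sample rather than the full dataset; shap.utils.sample is one way to draw it:

# use a 1,000-row sample as the background data instead of all 19,735 rows
X_sample = shap.utils.sample(X, 1000, random_state=0)
explainer_fast = shap.Explainer(model.predict, X_sample)
shap_values_fast = explainer_fast(X[:2000])  # explain a subset of rows as well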
The SHAP value for a specific feature i is just the difference between the expected model output and the
partial dependence plot at the feature's value x_i. One of the fundamental properties of Shapley values is that
they always sum to the difference between the game outcome when all players are present and the game
outcome when no players are present. For machine learning models, this means that the SHAP values of all the
input features will always sum to the difference between the baseline (expected) model output and the
current model output for the prediction being explained.
SHAP values have three objects: (a) the SHAP value for each feature, (b) the base value, and (c) the
original training data. As there are 27 features, we can expect 27 SHAP values per record.

import numpy as np
pd.DataFrame(np.round(shap_values.values, 3)).head(3)

# the average prediction value is called the base value
pd.DataFrame(np.round(shap_values.base_values, 3)).head(3)

pd.DataFrame(np.round(shap_values.data, 3)).head(3)
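
As a quick sanity check of the additivity property described earlier (a sketch, not from the book), the per-row SHAP values plus the base value should reproduce the model predictions:

# sum of the 27 per-feature SHAP values plus the base value for each row
reconstructed = shap_values.values.sum(axis=1) + shap_values.base_values
print(np.allclose(reconstructed, model.predict(X)))  # expected: True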

Recipe 2-2. SHAP Partial Dependency Plot for a Regression Model


Problem
You want to get a partial dependency plot from SHAP.

Solution
The solution to this problem is to use the partial dependency method (partial_dependence_plot)
from the model.

How It Works
Let’s take a look at the following example. There are two ways to get the partial dependency plot, one with a
particular data point superimposed and the other without any reference to the data point. See Figure 2-1.

# make a standard partial dependence plot for lights on the predicted
# output for row number 20 from the training dataset
sample_ind = 20
shap.partial_dependence_plot(
    "lights", model.predict, X, model_expected_value=True,
    feature_expected_value=True, ice=False,
    shap_values=shap_values[sample_ind:sample_ind+1,:]
)

Figure 2-1 Correlation between feature light and predicted output of the model

The partial dependency plot is a way to explain the individual predictions and generate local
interpretations for the sample selected from the dataset; in this case, the sample 20th record is selected from
the training dataset. Figure 2-1 shows the partial dependency superimposed with the 20th record in red.

shap.partial_dependence_plot(
    "lights", model.predict, X, ice=False,
    model_expected_value=True, feature_expected_value=True
)
Figure 2-2 Partial dependency plot between lights and predicted outcome from the model

# the waterfall plot shows how we get from shap_values.base_values to
# model.predict(X)[sample_ind]
shap.plots.waterfall(shap_values[sample_ind], max_display=14)

Figure 2-3 Local interpretation for record number 20

The local interpretation for record number 20 from the training dataset is displayed in Figure 2-3. The
predicted output for the 20th record is 140 Wh. The most influential features impacting the 20th record are
RH_1, the humidity in the kitchen area in percent, and RH_2, the humidity in the living room area. At the
bottom of Figure 2-3 are the 14 features that are not very important for the 20th record's predicted value.
X[20:21]
model.predict(X[20:21])
array([140.26911466])

Recipe 2-3. SHAP Feature Importance for Regression Model with All
Numerical Input Variables
Problem
You want to calculate the feature importance using the SHAP values.

Solution
The solution to this problem is to use SHAP absolute values from the model.

How It Works
Let’s take a look at the following example. SHAP values can be used to show the global importance of
features. Importance features means features that have a larger importance in predicting the output.

#computing shap importance values for the linear model


import numpy as np
feature_names = shap_values.feature_names
shap_df = pd.DataFrame(shap_values.values, columns=feature_names)
vals = np.abs(shap_df.values).mean(0)
shap_importance = pd.DataFrame(list(zip(feature_names, vals)), columns=
['col_name', 'feature_importance_vals'])
shap_importance.sort_values(by=['feature_importance_vals'], ascending=False,
inplace=True)

print(shap_importance)
col_name feature_importance_vals
2 RH_1 49.530061
19 T_out 43.828847
4 RH_2 42.911069
5 T3 41.671587
11 T6 34.653893
3 T2 31.097282
17 T9 26.607721
16 RH_8 19.920029
24 Tdewpoint 17.443688
21 RH_out 13.044643
6 RH_3 13.042064
15 T8 12.803450
0 lights 11.907603
12 RH_6 7.806188
14 RH_7 6.578015
7 T4 5.866801
22 Windspeed 3.361895
13 T7 3.182072
18 RH_9 3.041144
23 Visibility 1.385616
10 RH_5 0.855398
20 Press_mm_hg 0.823456
1 T1 0.765753
8 RH_4 0.642723
25 rv1 0.260885
26 rv2 0.260885
9 T5 0.041905
The feature importance values are not scaled; hence, the values across all features will not sum to 100.
The beeswarm chart in Figure 2-4 shows the impact of SHAP values on model output. The blue dot
shows a low feature value, and a red dot shows a high feature value. Each dot indicates one data point from
the dataset. The beeswarm plot shows the distribution of feature values against the SHAP values.

shap.plots.beeswarm(shap_values)

Figure 2-4 Impact on model output
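
The same ranking can also be drawn with SHAP's built-in bar plot (assuming the shap_values object computed earlier), which aggregates the mean absolute SHAP value per feature:

# built-in global importance bar chart; mirrors the manual table above
shap.plots.bar(shap_values, max_display=15)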

Recipe 2-4. SHAP Values for a Regression Model on All Mixed Input Variables
Problem
You want to estimate SHAP values when categorical variables are introduced along with the numerical
variables, that is, a mixed set of input features.

Solution
Mixed input variables, with numeric features as well as categorical or binary features, can be modeled
together. As the number of features increases, the time to compute all the permutations will also increase.

How It Works
We are going to use an automobile public dataset with some modifications. The objective is to predict the
price of a vehicle given the features such as make, location, age, etc. It is a regression problem that we are
going to solve using a mix of numeric and categorical features.

df =
pd.read_csv('https://raw.githubusercontent.com/pradmishra1/PublicDatasets/main
df.head(3)
df.columns
Index(['Price', 'Make', 'Location', 'Age', 'Odometer', 'FuelType', 'Transmissi
'Mileage', 'EngineCC', 'PowerBhp'], dtype='object')

We cannot use string-based or categorical features in the model directly, as matrix multiplication is
not possible on string features; hence, the string-based features need to be transformed into dummy
variables or binary features with 0 and 1 flags. The transformation step is skipped here because many data
scientists already know how to do this data transformation (a sketch is shown below for reference). We are
importing another, already transformed dataset directly.
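
As a reference, a minimal sketch of the skipped step might look like the following; the column list is an assumption inferred from the transformed column names shown next, and the book's exact encoding may differ:

# hypothetical one-hot encoding of the string-based columns into 0/1 dummies;
# drop_first drops one level per column to avoid perfect multicollinearity
import pandas as pd

categorical_cols = ["Location", "FuelType", "Transmission", "OwnerType"]  # assumed names
df_dummies = pd.get_dummies(df, columns=categorical_cols, drop_first=True, dtype=int)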

df_t =
pd.read_csv('https://raw.githubusercontent.com/pradmishra1/PublicDatasets/main
del df_t['Unnamed: 0']
df_t.head(3)
df_t.columns
Index(['Price', 'Age', 'Odometer', 'mileage', 'engineCC', 'powerBhp', 'Locatio
'Location_Chennai', 'Location_Coimbatore', 'Location_Delhi', 'Location_Hyderab
'Location_Kochi', 'Location_Kolkata', 'Location_Mumbai', 'Location_Pune', 'Fue
'FuelType_Electric', 'FuelType_LPG', 'FuelType_Petrol', 'Transmission_Manual',
Above', 'OwnerType_Second', 'OwnerType_Third'], dtype='object')

#y is the dependent variable, that we need to predict


y = df_t.pop('Price')
# X is the set of input features
X = df_t

import pandas as pd
import shap
import sklearn

# a simple linear model initialized


model = sklearn.linear_model.LinearRegression()

# linear regression model trained


model.fit(X, y)

print("Model coefficients:\n")
for i in range(X.shape[1]):
print(X.columns[i], "=", model.coef_[i].round(5))
Model coefficients:

Age = -0.92281
Odometer = 0.0
mileage = -0.07923
engineCC = -4e-05
powerBhp = 0.1356
Location_Bangalore = 2.00658
Location_Chennai = 0.94944
Location_Coimbatore = 2.23592
Location_Delhi = -0.29837
Location_Hyderabad = 1.8771
Location_Jaipur = 0.8738
Location_Kochi = 0.03311
Location_Kolkata = -0.86024
Location_Mumbai = -0.81593
Location_Pune = 0.33843
FuelType_Diesel = -1.2545
FuelType_Electric = 7.03139
FuelType_LPG = 0.79077
FuelType_Petrol = -2.8691
Transmission_Manual = -2.92415
OwnerType_Fourth & Above = 1.7104
OwnerType_Second = -0.55923
OwnerType_Third = 0.76687
To compute the SHAP values, we can use the explainer function with the training dataset X and the model's
predict function. The SHAP value calculation happens using a permutation approach; it took about 5 minutes.

# compute the SHAP values for the linear model


explainer = shap.Explainer(model.predict, X)

# SHAP value calculation


shap_values = explainer(X)
Permutation explainer: 6020it [05:14, 18.59it/s]

import numpy as np
pd.DataFrame(np.round(shap_values.values,3)).head(3)

# the average prediction value is called the base value


pd.DataFrame(np.round(shap_values.base_values,3)).head(3)

0
0 11.933
1 11.933
2 11.933

pd.DataFrame(np.round(shap_values.data,3)).head(3)

Recipe 2-5. SHAP Partial Dependency Plot for Regression Model for Mixed
Input
Problem
You want to plot the partial dependency plot and interpret the graph for numeric and categorical dummy
variables.

Solution
The partial dependency plot shows the correlation between the feature and the predicted output of the
target variables. There are two ways we can showcase the results, one with a feature and expected value of
the prediction function and the other with superimposing a data point on the partial dependency plot.

How It Works
Let’s take a look at the following example (see Figure 2-5):

shap.partial_dependence_plot(
    "powerBhp", model.predict, X, ice=False,
    model_expected_value=True, feature_expected_value=True
)
Figure 2-5 Partial dependency plot for powerBhp and predicted price of the vehicle
The linear blue line shows the positive correlation between the price and the powerBhp. The powerBhp
is a strong feature: the higher the bhp, the higher the price of the car. This is a continuous or numeric
feature; let's look at the binary or dummy features. There are two dummy features indicating whether the
car is registered in the Bangalore location or in the Kolkata location. See Figure 2-6.

shap.partial_dependence_plot(
    "Location_Bangalore", model.predict, X, ice=False,
    model_expected_value=True, feature_expected_value=True
)

Figure 2-6 Dummy variable Bangalore location versus SHAP value

If the location of the car is Bangalore, then the price would be higher, and vice versa. See Figure 2-7.

shap.partial_dependence_plot(
    "Location_Kolkata", model.predict, X, ice=False,
    model_expected_value=True, feature_expected_value=True
)
Figure 2-7 Dummy variable Location_Kolkata versus SHAP value
If the location is Kolkata, then the price is expected to be lower. The reason for the difference between
the two locations lies in the data that is being used to train the model. The previous three figures show the
global importance of a feature versus the prediction function. As an example, only two dummy features are
taken into consideration; we can use all features one by one and display many graphs to get more
understanding about the predictions (a small looping sketch follows).
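
A small looping sketch (feature names taken from the training columns above; the exact selection is illustrative):

# draw one partial dependence plot per feature of interest
for feature in ["Age", "mileage", "Transmission_Manual"]:
    shap.partial_dependence_plot(
        feature, model.predict, X, ice=False,
        model_expected_value=True, feature_expected_value=True
    )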
Now let’s look at a sample data point superimposed on a partial dependence plot to display local
explanations. See Figure 2-8.

# make a standard partial dependence plot for powerBhp on the predicted output
sample_ind = 20  # 20th record from the dataset
shap.partial_dependence_plot(
    "powerBhp", model.predict, X, model_expected_value=True,
    feature_expected_value=True, ice=False,
    shap_values=shap_values[sample_ind:sample_ind+1,:]
)

Figure 2-8 Power bhp versus prediction function

The vertical dotted line shows the average powerBhp, and the horizontal dotted line shows the average
value predicted by the model. The small blue bar dropping from the black dot reflects the placement of
record number 20 from the dataset. Local interpretation means that for any sample record from the dataset,
we should be able to explain the prediction. Figure 2-9 shows the importance of features for the
selected record.

# the waterfall plot shows how we get from shap_values.base_values to
# model.predict(X)[sample_ind]
shap.plots.waterfall(shap_values[sample_ind], max_display=14)

Figure 2-9 Local interpretation of the 20th record and corresponding feature importance
For the 20th record, the predicted price is 22.542; the powerBhp stands out as the most important feature,
and manual transmission is the second most important feature.

X[20:21]
model.predict(X[20:21])
array([22.54213017])

Recipe 2-6. SHAP Feature Importance for a Regression Model with All Mixed
Input Variables
Problem
You want to get the global feature importance from SHAP values using mixed-input feature data.

Solution
The solution to this problem is to use absolute values and sort them in descending order.

How It Works
Let’s take a look at the following example:

#computing shap importance values for the linear model


import numpy as np
# feature names from the training data
feature_names = shap_values.feature_names
#combining the shap values with feature names
shap_df = pd.DataFrame(shap_values.values, columns=feature_names)
#taking the absolute shap values
vals = np.abs(shap_df.values).mean(0)
#creating a dataframe view
shap_importance = pd.DataFrame(list(zip(feature_names, vals)), columns=
['col_name', 'feature_importance_vals'])
#sorting the importance values
shap_importance.sort_values(by=['feature_importance_vals'], ascending=False,
inplace=True)
print(shap_importance)
col_name feature_importance_vals
4 powerBhp 6.057831
0 Age 2.338342
18 FuelType_Petrol 1.406920
19 Transmission_Manual 1.249077
15 FuelType_Diesel 0.618288
7 Location_Coimbatore 0.430233
9 Location_Hyderabad 0.401118
2 mileage 0.270872
13 Location_Mumbai 0.227442
5 Location_Bangalore 0.154706
21 OwnerType_Second 0.154429
6 Location_Chennai 0.133476
10 Location_Jaipur 0.127807
12 Location_Kolkata 0.111829
14 Location_Pune 0.051082
8 Location_Delhi 0.049372
22 OwnerType_Third 0.021778
3 engineCC 0.020145
1 Odometer 0.009602
11 Location_Kochi 0.007474
20 OwnerType_Fourth & Above 0.002557
16 FuelType_Electric 0.002336
17 FuelType_LPG 0.001314
At a high level, for the linear model used to predict automobile prices, these are the important features, the strongest being powerBhp, followed by the age of the car, the petrol fuel type, and the manual transmission type. The tabular output above shows global feature importance.
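The same ranking can also be drawn directly. As a minimal sketch (assuming the same shap_values object as above), SHAP’s built-in bar plot computes the mean absolute SHAP value per feature internally:

# equivalent built-in view: mean(|SHAP value|) per feature
shap.plots.bar(shap_values, max_display=12)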

Recipe 2-7. SHAP Strength for Mixed Features on the Predicted Output for
Regression Models
Problem
You want to know the impact of a feature on the model function.

Solution
The solution to this problem is to use a beeswarm plot, which displays each feature’s SHAP values as points colored from blue (low feature value) to red (high feature value).

How It Works
Let’s take a look at the following example (see Figure 2-10). The beeswarm plot shows a positive relationship between powerBhp and its SHAP value: as the powerBhp value increases, the SHAP value (and hence the predicted price) increases, and vice versa. The age feature shows the opposite trend, reflecting the negative correlation between the age of a car and its price.

shap.plots.beeswarm(shap_values)

Figure 2-10 The SHAP value impact on the model output
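The beeswarm plot orders features by the mean absolute SHAP value by default. If features driven by a few extreme records matter more in your setting, a variant worth trying (a sketch based on the Explanation API’s abs and max operations) orders them by the maximum instead:

# order features by their maximum absolute SHAP value across records
shap.plots.beeswarm(shap_values, order=shap_values.abs.max(0))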

Recipe 2-8. SHAP Values for a Regression Model on Scaled Data
Problem
You want to know whether computing SHAP values on scaled data is any better than using the unscaled numerical data.

Solution
The solution to this problem is to use a numerical dataset and generate local and global explanations after
applying the standard scaler to the data.

How It Works
Let’s take a look at the following script:

import pandas as pd
import shap
import sklearn.linear_model
import sklearn.preprocessing

df_lin_reg = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/00374/energydata_complete.csv')
del df_lin_reg['date']
# y is the dependent variable that we need to predict
y = df_lin_reg.pop('Appliances')
# X is the set of input features
X = df_lin_reg
# create standardized features
scaler = sklearn.preprocessing.StandardScaler()
scaler.fit(X)
# transform the dataset
X_std = scaler.transform(X)
# a simple linear model, initialized
model = sklearn.linear_model.LinearRegression()
# linear regression model trained on the standardized features
model.fit(X_std, y)
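As a minimal sketch of the step this recipe is building toward (assuming the variables defined above), the SHAP values on the standardized features can then be computed as follows; wrapping X_std in a DataFrame is an assumption made here so that the plots keep the original feature names:

# wrap the scaled array in a DataFrame so the feature names survive scaling
X_std_df = pd.DataFrame(X_std, columns=X.columns)
# build an explainer for the linear model with the scaled background data
explainer = shap.Explainer(model, X_std_df)
shap_values = explainer(X_std_df)
# global explanation on the scaled features
shap.plots.beeswarm(shap_values)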
1064 A few pits like those of Mount Caburn, and containing
similar relics, were found at Cissbury and Winkelbury
(Archaeol. Journal, xli, 1884, p. 76).

1065 Archaeologia, xlii, 1869, pp. 39, 48-50; xlvi, 1881, pp. 450-
1, 456-8.

1066 Oppidum autem Britanni vocant, cum silvas impeditas vallo
atque fossa munierunt, quo incursionis hostium vitandae
causa convenire consuerunt. B. G., v, 21, § 3.

1067 See p. 136, supra, and Archaeologia, xlvi, 1881, p. 458.

1068 B. G., vii, 30, § 4.

1069 Archaeol. Cambr., 6th ser., vi, 1906, pp. 266-7. Two forts
with defences of this kind are known in Peebles-shire.

1070 B. G., ii, 29, § 2; vi, 32, § 4. Cf. Mém. de la Soc. nat. des
ant. de France, 4e sér., ii, 1871, pp. 141-2.

1071 See p. 70, supra.

1072 Proc. Soc. Ant. Scot., xxxiii, 1899, pp. 29-30.

1073 See p. 138, supra, and also Archaeologia, xlvi, 1881, pp.
438-9, 467; A. Pitt-Rivers, Excavations in Cranborne
Chase, ii, 238-9; Archaeol. Cambr., 5th ser., xvi, 1899, pp.
106-8, 130; xvii, 1900, pp. 189, 195, 206, 209; Archaeol.
Journal, lvii, 1900, pp. 52-6, 60-3, 66-7; Journ. Roy. Inst.
Cornwall, xvi, 1904, pp. 73-83; and Guide to the Ant. of
the Early Iron Age (Brit. Museum), pp. 122-4.

1074 See p. 134, supra.

1075 Trans. Hon. Soc. Cymmrodorion, 1898-9 (1900), p. 20.

1076 See Proc. Soc. Ant. Scot., xxix, 1895, pp. 131, 149-50.
1077 B. G., vii, 22.

1078 Proc. Soc. Ant. Scot., xxv, 1891, pp. 428, 438, 440, 444-5.

1079 Ib., xxxiii, 1899, pp. 15, 20-3, 26-32; xxxiv, 1900, p. 74. A
similar method of fortification was practised by the
Dacians (Congrès archéol. de France, 1874 (1876), p. 444),
‘in the Danne-werk at Korborg, near Schleswig’ (A. Pitt-
Rivers, Excavations in Cranborne Chase, iii, 254), and in
Nassau (Rev. de synthèse hist., iii, 1901, p. 45).

The well-known camp on Herefordshire Beacon is
interesting because, like Old Sarum (Sorbiodunum), it
contains a citadel. Though it is locally described as a
‘British camp’, its date is at present uncertain. While most
of the objects which have been found in it are
comparatively late, Pitt-Rivers (Journ. Anthr. Inst., x,
1881, p. 331) pointed out that the pottery seemed to
indicate its Celtic origin; but the citadel presents a
difficulty. Was it a later addition? See also F. J. Haverfield,
Archaeol. Survey of Herefordshire, 1896, pp. 3-4.

The ‘vitrified’ stone forts of the British Isles demand a
brief notice. There are none in England, but many in the
northern and western counties of Scotland and some in
France. It is very doubtful whether any exist in Wales or
Ireland (Archaeol. Journal, xxxvii, 1880, pp. 227, 234; D.
Christison, Early Fortifications in Scotland, pp. 187, 190).
The question is whether the vitrifaction, which was due to
fire, was accidental or designed; and in some cases the
only way of settling this is to ascertain by excavation the
extent of the vitrifaction (ib., p. 192). The best authorities
have concluded that when the vitrified part of the fort is
small the phenomenon may be safely ascribed to accident,
—perhaps to a beacon fire; but that when it may be
traced almost all round the rampart it was intentional (ib.,
pp. 186-7; Archaeol. Journal, xxxvii, 1880, pp. 240-1; R.
Munro, Prehist. Scotland, pp. 382-3). Probably the
builders intended to give cohesion to the walls and make
it impossible for assailants to demolish them (L’Anthr., xiv,
1903, pp. 330-1); or when the vitrifaction was confined to
the upper surface the defenders would have secured firm
foothold while the assailants would have stumbled over
loose stones (D. Christison, op. cit., pp. 186-7). [See
Addenda]

1080 Reports Archit. Soc. of ... Lincoln, &c., xviii, 1885-6, pp.
53-61; Archaeologia, lii, 1890, pp. 382-4; Vict. Hist. of ...
Northampton, i, 147-9, 151-2. At Beansale and Claverdon
in Warwickshire there are camps which in many respects
resemble that of Hunsbury, but have not been excavated
(Vict. Hist. of ... Warwick, i, 350).

Professor T. McKenny Hughes (Archaeologia, liii, 1892, p.
484) suggests that Offa’s Dyke may have ‘belonged to the
defensive system of the Britons’. All we know is that those
dykes which have been excavated—Bokerly Dyke and
Wansdyke—were Roman or post-Roman (A. Pitt-Rivers,
Excavations in Cranborne Chase, iii, p. xiii); and it is in the
last degree improbable that earthworks which extend over
territory that belonged to several tribes should have been
constructed at a time when tribes only combined for brief
periods and in the presence of urgent and common peril.
Cf. F. J. Haverfield, Archaeol. Survey of Herefordshire,
1896, p. 7, and Eng. Hist. Rev., xvii, 1902, pp. 628-9.

1081 See p. 93, supra.

1082 Journ. Derbyshire Archaeol. and Nat. Hist. Soc., xiii, 1891,
pp. 194-9; xiv, 1892, pp. 247-8; xvii, 1895, p. 76; Vict.
Hist. of ... Derby, i, 231-42. Cf. Association franç. pour
l’avancement des sc., 32e sess., 1903, 2e partie, p. 890.
1083 Vict. Hist. of ... Bedford, i, 172. See also p. 84, n. 1, supra.

1084 Geogr., iv, 4, § 3. Cf. Caesar, B. G., v, 12, § 3, 43, § 1, and
Diodorus Siculus, v, 21, § 5. Woodcuts, one of the
Romano-British villages explored by Pitt-Rivers, was
constructed and chiefly occupied by Britons (Excavations
in Cranborne Chase, ii, 65, iii, 3); but, as Prof. Haverfield
has pointed out (The Romanization of Roman Britain, pp.
18-9), ‘the material life was Roman’.

1085 B. G., v, 12, § 3.

1086 Athenaeus, iv, 36. Cf. Diodorus Siculus, v, 28, §§ 4-5 and
Strabo, iv, 4, § 3.

1087 Proc. Soc. Ant., 2nd ser., iv, 1867-70, pp. 164-70; Journ.
Brit. Archaeol. Association, xxxvi, 1880, pp. 254-61; J.
Anderson, Scotland in Pagan Times,—the Iron Age, p.
207; R. Munro, Prehist. Scotland, pp. 348-9; B. C. A.
Windle, Remains of the Prehist. Age, p. 266; Proc. Soc.
Ant. Scot., xxxviii, 1904, pp. 541-7. It must be admitted
that conclusive evidence is wanting to prove that any of
the Cornish subterranean dwellings were inhabited before
the Roman occupation (see Vict. Hist. of ... Cornwall, i,
367-9). The ‘hut-clusters’ of Cornwall, of which Chrysoister
is a good example (W. C. Lukis, Prehist. Stone Monuments
of the Brit. Isles,—Cornwall, p. 19) were probably later
than the hut-circles of the same county. Some may have
been built before the Christian era, but they were certainly
inhabited in Roman times (Vict. Hist. of ... Cornwall, i,
370).

1088 Archaeol. Journal, x, 1853, pp. 212, 215-9, 221-2; xviii,
1861, pp. 39-46; Proc. Soc. Ant. Scot., iii, 1863, pp. 128,
134-8, 141; xxxviii, 1904, pp. 102-22, 173-89, 548-58; Sir
A. Mitchell, The Past in the Present, p. 58; Trans. Glasgow
Archaeol. Soc., N. S., iv, 1902, pp. 189-90.

1089 Proc. Soc. Ant. Scot., xxxv, 1901, pp. 116-7, 119, 147;
xxxviii, 1904, p. 558.

1090 Proc. Soc. Ant. Scot., xxxv, 1901, pp. 146-8; Guide to the
Ant. of the Bronze Age (Brit. Museum), pp. 35-6; A. Lang,
The Clyde Mystery, p. 41.

1091 Journ. Anthr. Inst., xv, 1886, pp. 463-5; xxviii, 1899, pp.
150-4; R. Munro, The Lake-Dwellings of Europe, pp. 454,
459, 461, 475, 493. Dr. Munro (ib., pp. 490-2) observes
that ‘in the early centuries of the Christian era the
distribution of crannogs in Scotland and Ireland closely
coincides with a well-defined area in which the Celtic
language was spoken’, though he admits that ‘they have
not been found in the south-eastern provinces of
Scotland’. ‘In this wider area’ [including Southern Britain],
he continues, ‘on the supposition that the Celts were the
introducers or founders of the system, we ought to find
some vestiges of these dwellings.... This is precisely what
the general researches into British lake-dwellings have
shown in the stray remnants of them that have been
found in Llangorse, Holderness, the meres of Norfolk and
Suffolk, Cold Ash Common, etc. All these, with perhaps
the exception of the pile-structures at London Wall,
appear to be older than the majority of the crannogs of
Scotland and Ireland.... Taking all these facts into account
... I am inclined to believe that we have here evidence of
a widely distributed custom which underlies the
subsequent [to Caesar] great development which the
lake-dwellings assumed in Scotland and Ireland. Moreover,
I believe it probable that the early Celts had got this
knowledge from contact with the inhabitants of the pile-
dwellings of Central Europe.’
Llangorse is the only Welsh site at which a lake-dwelling
has been found (ib., p. 464). I venture to ask the doctor
why lake-dwellings are so rare in England and Wales,
where, on his theory, they ought to abound; why the
Scottish and Irish Celts did not apply their ‘knowledge’ for
some centuries after they reached the British Isles; and
why lake-dwellings are non-existent (ib., p. 493) in Spain
and Portugal, where Celts were numerous (G. Dottin,
Manuel pour servir à l’étude de l’ant. celt., pp. 324, 329-
31, 349)? And, seeing that there are pile-dwellings in New
Guinea and Central Africa, is it not conceivable that those
of the British Isles had no connexion with Central Europe?

1092 Cf. Tacitus, Germania, 24, and Archaeologia, xliii, 1871, pp.
439-40.

1093 Vict. Hist. of ... Somerset, i, 198.

1094 Report of ... the Brit. Association, 1893 (1894), p. 903;
1894, pp. 431-4; 1898, pp. 694-5; 1904 (1905), pp. 324-
30; Proc. Somerset. Archaeol. and Nat. Hist. Soc., xlix,
1903, pp. 103, 107-8, 114-5, 120-1; l, 1904, pp. 68-93;
li, 1905, pp. 77-104; Journ. Anthr. Inst., xxxv, 1905, p.
395; Guide to the Ant. of the Early Iron Age (Brit.
Museum), pp. 126-7.

1095 Ib., p. 127; Crania Britannica, ii, pl. 6 and 7, p. 4.

1096 Diodorus Siculus, v, 30, § 1; Strabo, iv, 4, § 3; C. Elton,
Origins of Eng. Hist., 1890, p. 110; Rice Holmes, Caesar’s
Conquest of Gaul, 1903, p. 10; Rev. arch., 4e sér., i, 1903,
pp. 337-42; H. d’A. de Jubainville, Les Celtes, pp. 337-42.

1097 J. O. Westwood, Lapidarium Walliae, 1876-9, p. 37, and pl.
xxv, fig. 3; J. Rhys, The Welsh People, 1902, p. 567.

1098 B. G., v, 14, § 3.


1099 Proc. Soc. Ant., 2nd ser., xx, 1904-5, pp. 345-6; Guide to
the Ant. of the Early Iron Age (Brit. Museum), pp. 50, 135.

1100 B. G., vi, 14, § 3.

1101 Bibl. Hist., v, 28, § 6.

1102 B. G., i, 29, § 1.

1103 Ib., v, 48, §§ 3-4. Cf. my Caesar’s Conquest of Gaul, 1899,
p. 715.

1104 J. Evans, Coins of the Anc. Britons, p. 171. Cf. p. 368,
infra, and F. J. Haverfield, The Romanization of Roman
Britain, 1906, p. 9.

1105 Diodorus Siculus, v, 31, § 2; Strabo, iv, 4, § 4; Athenaeus,
iv, 37, vi, 49; Ammianus Marcellinus, xv, 9, § 8.

1106 Guide to the Ant. of the Early Iron Age (Brit. Museum), p.
144. Cf. Proc. Soc. Ant., 2nd ser., xviii, 1901, p. 373.

1107 Vict. Hist. of ... Lancs, i, 246. Only one has come to light in
Durham (Vict. Hist. of ... Durham, i. 209).

1108 Excavations in Cranborne Chase, iv, 11, 59-61. A bronze
socketed celt has been found at Cann, near Shaftesbury,
in association with British silver coins (J. Evans, Coins of
the Anc. Britons, p. 102).

1109 Archaeologia, xvi, 1812, pp. 348-9; Guide to the Ant. of
the Early Iron Age (Brit. Museum), pp. 83, 103-4. If it is
true that coins formed part of the Hagbourne Hill deposit,
bronze implements must have continued in use in
Berkshire to a very late date.
1110 May the rarity of British iron weapons be partly accounted
for by supposing that during the greater part of the Late
Celtic Period swords and spear-heads were still in many
cases made of bronze? In the Homeric Age implements
were of iron, but the weapons which the poet mentions
were all of bronze, doubtless because the armourers had
not yet learned to temper iron (Rev. arch., 4e sér., vii,
1906, pp. 284, 290-1, 294).

1111 B. G., v, 14, § 2.

1112 See pp. 161, 189, supra.

1113 F. J. Haverfield, The Romanization of Roman Britain, pp. 7-
9. Cf. Vict. Hist. of ... Derby, i, 191-2, and see also
Solinus, 22, 12 (ed. Th. Mommsen, p. 234).

I hardly know whether it is worth while to notice the
statements of Diodorus (v, 32, § 3) and Strabo (iv, 5, § 4)
in regard to the prevalence of cannibalism in certain parts
of the British Isles. If there is any truth in them, the
cannibals had doubtless inherited the custom from
neolithic times (p. 113, supra). Strabo’s remark, which, as
he himself warns us, does not rest upon good authority,
refers only to Ireland. Diodorus says that some of the
Britons were cannibals; but this observation may also
refer to the Irish. A mound-dwelling near Kirkwall
(Archaeol. Journal, x, 1853, p. 217) is said to have
contained broken human bones mingled with those of
sheep, which may or may not be evidence of cannibalism;
and every scholar knows the speech that Caesar puts into
the mouth of Critognatus, one of the Arvernian chiefs who
was blockaded in Alesia (B. G., vii, 77, § 12). As for the
unnatural vices with which Diodorus (v, 32, § 7), Strabo
(iv, 4, § 6), and others charge the Celts, they are rife
among the civilized nations of modern Europe.
1114 See H. J. Mackinder, Britain and the British Seas, pp. 177-
9.

1115 See p. 288, infra.

1116 B. G., v, 9, § 4; 11, § 9.

1117 Agricola, 12.

1118 Journ. Brit. Archaeol. Association, xxviii, 1872, p. 42;
Archaeologia, xlvi, 1881, p. 467; lii, 1890, pp. 761-2;
Trans. Epping forest ... Field Club, ii, 1882, p. 65; C. W.
Dymond and H. G. Tomkins, Worlebury, 1886, p. 78; Sir J.
Evans, Anc. Stone Implements, 1897, pp. 419-20;
Archaeol. Journal, lix, 1902, pp. 213-6.

1119 See Rev. arch., 3e sér., xli, 1902, p. 428, and my Caesar’s
Conquest of Gaul, 1903, p. 12, n. 1.

1120 See B. G., i, 18, §§ 6-7.

1121 Tacitus, Ann., xii, 36.

1122 See p. 296, infra.

1123 B. G., vi, 19, §§ 1-2. Cf. my Caesar’s Conquest of Gaul,
1899, pp. 521-2.

1124 B. G., vi, 19, § 3. M. d’Arbois de Jubainville (Études sur le
droit celt., i, 1895, p. 241) holds that if uxores means
‘wives’, Caesar’s statement is inconsistent with the custom
which regulated the administration of dowries, and
accordingly gives the word the sense of ‘concubines’. It
seems to me equally rash to assume that Caesar was
mistaken, and that uxores means ‘wives’ in § 1 and
‘concubines’ in § 3. May we not suppose that the
husband’s power was checked by public opinion?
1125 B. G., vi, 19, § 3.

1126 See my Caesar’s Conquest of Gaul, 1903, pp. 12-5.

1127 Ib., 1899, pp. 525-7.

1128 See W. Robertson Smith, The Religion of the Semites,
1901, pp. 31-2, 38-9.

1129 Ausonius, Clarae urbes, xiv, 31-2; Gildas, Hist., 2. Cf. J.
Rhys, Celtic Heathendom, 1888, p. 106; Sir A. Lyall,
Asiatic Studies, i, 1899, pp. 12, 20-2; and E. B. Tylor, Prim.
Culture, ii, 1903, pp. 212-4.

1130 See J. G. Frazer, Early Hist. of the Kingship, p. 154.

1131 Corpus Inscr. Lat., vii, 507; J. Rhys, Celtic Heathendom, p.
104.

1132 See Rev. celt., ii, 1873-5, p. 1; iv, 1879-80, pp. 57-8; xviii,
1897, p. 259; E. B. Tylor, Prim. Culture, ii, 1903, pp. 221,
228.

1133 Cf. J. Rhys, Celtic Heathendom, p. 106, with G. Dottin, La
rel. des Celtes, 1904, p. 60.

1134 M. Jullian (Rev. des études anc., iv, 1902, p. 101) points
out that the texts fall into two groups, one of which, all
posterior to 100 B.C., deals with the Transalpine Celts, and
the other, mostly earlier, with all the others, except the
Britons.

1135 Rev. celt., xii, 1891, p. 316; Rev. num., 3e sér., ii, 1884, pp.
179-202; Rev. des études anc., iv, 1902, p. 279, n. 2.

1136 ‘On se tromperait beaucoup,’ says M. Dottin (La rel. des
Celtes, pp. 7-8), ‘si l’on croyait que tous les anciens
Mercuriacus de France, devenus aujourd’hui Mercuray,
Mercurey, Mercoirey, Mercury, sont dérivés du nom de
dieu Mercurius. Ils proviennent plus vraisemblablement du
gentilice romain Mercurius, assez fréquent dans les
inscriptions, et dénomment simplement le fundus, la
propriété d’un Gallo-Romain du nom de Mercurius.’

1137 J. Rhys, Celtic Heathendom, p. 235. See also Rev. celt., iv,
1879-80, p. 45; x, 1889, pp. 485, 487, 489; H. Gaidoz,
Esquisse de la rel. des Gaulois, 1879, p. 11, Études de
mythologie gaul.,—Le dieu gaul. du soleil, 1886, pp. 90-1,
93; Rev. num., 3e sér., ii, 1884, p. 201, n. 1; Archaeol.
Review, ii, 1889, p. 124; Journ. Brit. Archaeol. Association,
l, 1894, pp. 105-9; and G. Dottin, La rel. des Celtes, pp.
5-16, 56-7, 60.

1138 Caesar does not say that Mercury was actually the
supreme deity of the Gauls, but only the most fervently
worshipped: he expressly says that they regarded their
Jupiter as the lord of the celestials. ‘It must not be
supposed,’ says Sir Alfred Lyall (Asiatic Studies, i, 1899, p.
121), ‘that even the uppermost gods of Hinduism have
retired behind mere ceremonial altars, like constitutional
monarchs.... But there seem to be many grades of
accessibility among them, from Brahma—who, since he
created the world, has taken no further trouble about it,
and is naturally rewarded by possessing only one or two
of the million temples to Hindu gods,’ &c.

1139 B. G., vi, 17.

1140 De divin., i, 41, § 90. Cf. my Caesar’s Conquest of Gaul,
1899, p. 532, n. 13.

1141 H. Gaidoz, Études de mythol. gaul.,—Le dieu gaul. du
soleil, p. 91. Cf. E. B. Tylor, Prim. Culture, ii, 1903, pp.
252, 254.

1142 De his eandem fere quam reliquae gentes habent
opinionem. B. G., vi, 17, § 2.

1143 See Rev. des études anc., vi, 1904, p. 329. Cf. Sir A. Lyall,
Asiatic Studies, i, 1899, pp. 2-3, 6.

1144 See W. Robertson Smith, The Religion of the Semites,
1901, pp. 16-8, 29, 253-6, 263.

1145 See pp. 273 n. 7, 284, infra, and G. Boissier, La rel. des
Romains, i, 1892, pp. 335, 340-1.

1146 Folk-Lore, xvii, 1906, pp. 32, 324. See Mr. A. B. Cook’s
series of articles in the same volume and in the first
number of vol. xviii.

1147 W. Warde Fowler, The Roman Festivals, 1899, p. 333.

1148 Ib., p. 347. Cf. W. Robertson Smith, The Religion of the
Semites, 1901, p. 64.

1149 J. Rhys, Celtic Heathendom, p. 49; G. Dottin, La rel. des
Celtes, p. 12. Mercury was also reverenced more than any
other god by the Germans of whom Tacitus wrote (Germ.,
9).

1150 B. G., v, 22, § 3.

1151 H. d’A. de Jubainville, Les Celtes, pp. 39-40, 44. Cf. J.
Rhys, Celtic Heathendom, p. 220.

1152 J. Rhys, Celtic Heathendom, pp. 39, 41-2.

1153 M. Camille Jullian (Rev. des études anc., iv, 1902, p. 109,
n. 1) points out that in vol. vii [p. 331] of the Corpus inscr.
Lat. there are sixty-one inscriptions in honour of Mars [of
which, however, eight are uncertain], and only eight in
honour of Mercury; and the greater popularity of Mars is
also apparent in the supplements published in Ephemeris
epigraphica (iii, 1877, pp. 125, 128; iv, 1881, p. 196; vii,
1892, pp. 289, 299, 313, 324, 332, 334, 352). But no
account should be taken of those inscriptions in which the
name of Mars is not coupled with that of a Celtic deity,
though even with this reservation the ascendancy of Mars
remains unaffected.

1154 See Rev. des études anc., iv, 1902, p. 109, n. 1. Even in
Gaul the cult of Mars appears to have preponderated
among the Aquitani (ib., pp. 106-7, and Corpus inscr. Lat.,
xiii, 87, 108-17, 209-13).

1155 B. G., vi, 17, §§ 3-5. Cf. J. Rhys, Celtic Heathendom, pp.
49-50.

1156 Corpus inscr. Lat., vii, 84.

1157 Pharsalia, i, 445-6.

1158 There is no trace of the worship of Esus in the British Isles,
unless M. d’Arbois de Jubainville (Les Celtes, p. 63) is right
in thinking that Esus was a god whose surname was
Smertullos, and that Smertullos, the Celtic Pollux, is to be
identified with the Irish Cuchulainn (see also Fragm. hist.
Graec., ed. Didot, i, 1841, p. 194, fr. 6; Diodorus Siculus,
iv, 56, § 4; Corpus inscr. Lat., xiii, 3026 c; and H. d’A. de
Jubainville, Principaux auteurs à consulter sur l’hist. des
Celtes, p. 88). Esus is depicted as a woodman in the act
of felling a tree on No. 2 of four altars which were
discovered at Paris in 1710; while Smertullos appears on
the right of No. 3, threatening a serpent with a club. M.
d’Arbois is a little rash in concluding (La civilisation des
Celtes, 1899, p. 173) that because there was a Briton
called Esunectus, who may have been an immigrant from
Gaul, Esus was worshipped in Britain. The name AESV
occurs on a coin of the Iceni; but its meaning is uncertain
(J. Evans, Coins of the Anc. Britons, p. 386). The
scholiasts of Lucan identified Esus with Mercury; but their
authority on such a matter is worthless (see Rev. celt.,
xviii, 1897, p. 117). Prof. Rhys, however, has recently
examined an inscription (Celtic Inscr. in France and Italy,
1907, p. 56), which leads him to give a qualified support
to the identification.

1159 Corpus inscr. Lat., vii, 747, 1114d; H. Gaidoz, Esquisse de
la rel. des Gaulois, p. 12; W. H. Roscher, Lex. der griech.
und röm. Mythol., i, 1884-6, col. 1286-93; Rev. arch., 3e
sér., xxvi, 1895, pp. 309, 317; 4e sér., ii, 1903, pp. 348-50;
Rev. des études anc., vii, 1905, pp. 234-8.

1160 Rev. celt., xviii, 1897, pp. 140-1.

1161 Corpus inscr. Lat., vii, 168.

1162 My criticism of M. S. Reinach’s theory is supported, I am
glad to see, by M. Jullian (Rev. des études anc., v, 1903,
pp. 217-9).

1163 H. Gaidoz, Études de mythologie gaul.,—Le dieu gaul. du
soleil, &c., pp. 96-7.

1164 H. Gaidoz, Études de mythologie gaul.,—Le dieu gaul. du
soleil, &c., pp. 7, 61-3, 66, 92, 96; Corpus inscr. Lat., vii,
879, 882; J. Rhys, Celtic Heathendom, pp. 55-6; Class.
Rev., xvii, 1903, p. 420; Guide to the Ant. of the Early Iron
Age (Brit. Museum), pp. 60, 136, 152; Rev. des études
anc., vii, 1905, pp. 156-7; Folk-Lore, xvi, 1905, p. 272, n.
9. The supposition that the wheels were money is no
longer admitted by competent antiquaries (A. Blanchet,
Traité des monn. gaul., pp. 27-8).
1165 J. G. Frazer, Golden Bough, iii, 1900, p. 326.

1166 J. Rhys, Celtic Heathendom, pp. 74-5.

1167 Corpus inscr. Lat., vii, 200, 203, 875, 1062. Cf. W. H.
Roscher, Lex. der griech. und röm. Myth., i, 1884-6, col.
819, and H. d’A. de Jubainville, Les Celtes, p. 35.

1168 Ib., p. 33. Cf. J. Rhys, Celtic Inscr. in France and Italy, p.
11.

1169 Corpus inscr. Lat., vii, 1345; Trans. Cumberland and
Westmorland Ant. and Archaeol. Soc., xv, 1899, p. 463.

1170 Corpus inscr. Lat., vii, 1082. ‘On se tromperait
grandement,’ says M. d’A. de Jubainville (Les Druides,
1906, p. 68), ‘si l’on croyait qu’il y eut entre le dieu gaulois
Belenus ... et les dieux gaulois Grannos et Borvo [all of
whom were assimilated to Apollo] ... une analogie
quelconque ... Le dieu Maponus, “jeune fils”, n’avait
probablement de commun avec Apollon que la jeunesse
éternelle.’

1171 Prof. Rhys (Celtic Heathendom, p. 126) says that ‘most of
the remains of antiquity connected with his temple make
him a sort of Jupiter’, but adds (ib., p. 130) that he ‘was
not simply a Neptune ... he was also a Mars, as the
inscriptions at Lydney testify’. But the testimony of the
inscriptions (Corpus inscr. Lat., vii, 138-40) consists simply
in the letter M; and Hübner, to whom the professor
appeals, queries his own suggestion that M stands for
Marti. [I learn from one of Mr. A. B. Cook’s articles in Folk-
Lore (xvii, 1906, p. 39, n. 1) that Hübner (Jahrbuch des
Vereins von Alterthumsfreunden im Rheinlande, Heft lxvi,
1879, pp. 29-46) corrected and supplemented the account
of Nodons which he had given in the Corpus, and
interpreted D. M. NODONTI as d(eo) m(agno)—‘the great
god’—a reading which would authorize us to regard him,
with Mr. Cook, as ‘a Jupiter and a Neptune rolled into
one’.]

1172 Folk-Lore, xvii, 1906, pp. 30, 39.

1173 H. d’A. de Jubainville, Les Celtes, pp. 33-5.

1174 Ib., pp. 54-6.

1175 J. Rhys, Celtic Inscr. in France and Italy, p. 14.

1176 B. G., vi, 18, § 1. Cf. Tacitus, Germ., 2.

1177 C. Jullian in Daremberg and Saglio, Dict. des ant. grecques
et rom., ii, 1892, p. 280. Cf. Bull. de l’Acad. des inscr.,
1887, p. 443, and Rev. arch., xx, 1892, pp. 208, 213.

1178 Rev. celt., xvii, 1896, pp. 45-59. Cf. G. Dottin, La rel. des
Celtes, pp. 21-2. The Celtic name of the god on the altar
at Sarrebourg was Sucellos.

1179 C. de Clarac, Musée de sculpture ant. et mod.,—Planches,
t. iii, 1832-4, pl. 398 [670]; Comptes rendus ... de l’Acad.
des inscr., 4e sér., xv, 1887, p. 444.

1180 S. Reinach, Antiquités nat.,—Descr. raisonnée du musée de
St. Germain-en-Laye, pp. 137, 156-68; H. Gaidoz, Le
grand dieu gaul. chez les Allobroges, 1902, p. vi. Cf. J.
Rhys, Celtic Heathendom, p. 81, and Folk-Lore, xvi, 1905,
p. 273. Dis Pater is identified by Professor Rhys and M. G.
Bloch (E. Lavisse, Hist. de France, i, 51-2) with Cernunnos
(see p. 284, infra). Cf. W. Warde Fowler, The Roman
Festivals, p. 286.

M. H. Gaidoz (Rev. arch., 3e sér., xx, 1892, p. 213) says
that the worship of Dis Pater in Britain is attested—it
hardly needs attestation—by two inscriptions (Corpus
inscr. Lat., vii, 154, 250). The former is not worth quoting.
The latter—one of many inscriptions addressed to the Di
Manes which are contained in the Corpus and in
Ephemeris epigraphica (vols. iii and vii)—contains the
words Secreti Manes qui regna Acherusia Ditis incolitis.

1181 B. G., vi, 21, § 2.

1182 Germ., 9.

1183 Rev. des études anc., iv, 1902, p. 228; v, 1903, p. 106.

1184 See G. Boissier, La rel. rom., i, 6.

1185 Class. Rev., xviii, 1904, pp. 361, 367-72, 375; Folk-Lore,
xv, 1904, p. 264; xvi, 1905, p. 321; xvii, 1906, p. 30.

1186 Rev. des études anc., iv, 1902, p. 221.

1187 Ib., v, 1903, p. 110.

1188 Ib., vi, 1904, pp. 111 n. 1, 134 n. 4; A. Holder, Alt-
celtischer Sprachschatz, ii, 1805-6.

1189 Folk-Lore, xvii, 1906, pp. 59, 71.

1190 Rev. des études anc., iv, 1902, pp. 110-4.

1191 G. Dottin, Manuel pour servir à l’étude de l’ant. celt., pp.
234-5.

1192 See Rev. celt., xxv, 1904, pp. 130-1.

1193 Corpus inscr. Lat., vii, 168a, 221, 348, 559; Ephemeris
epigr., iii, 1877, p. 120; iv, 1881, p. 198a; Rev. des études
anc., viii, 1906, pp. 53-8.
1194 Rev. celt., i, 1870-2, pp. 306-19.

1195 Corpus inscr. Lat., xiii, pars i, fasc. i, p. 249.

1196 J. Rhys, Celtic Heathendom, p. 99.

1197 Rev. celt., i, 1870-2, pp. 306-19.

1198 Diodorus Siculus, v, 29, § 4; Rev. celt., viii, 1887, pp. 47,
59, n. 13; H. d’A. de Jubainville, La civilisation des Celtes,
pp. 374-5; Rev. des études anc., v, 1903, p. 252.

1199 See E. B. Tylor, Prim. Culture, ii, 1903, pp. 229-34.

1200 J. Evans, Coins of the Ancient Britons, p. 121, Suppl., p.
477; Cf. Rev. celt., xxi, 1900, pp. 297-9.

1201 B. G., vii, 88, § 4; E. Desjardins, Géogr. de la Gaule rom.,
iii, 1890, pl. xii; S. Reinach, Répertoire de la statuaire
grecque et rom., ii, 746-7; H. d’A. de Jubainville, La
civilisation des Celtes, 1899, pp. 390-1; Rev. des études
anc., vi, 1904, p. 48.

1202 Corpus inscr. Lat., xiii, 3026 b, c. Cf. G. Dottin, La rel. des
Celtes, pp. 20-1, 28, and Rev. celt., xxvi, 1905, p. 199. M.
d’Arbois de Jubainville (ib., p. 195) thinks that the original
Epona was the mare deified, and that the woman in the
statues was a Greek addition. Cf. A. Lang, Custom and
Myth, 1885, pp. 118-20, and Sir A. Lyall’s Asiatic Studies,
i, 1899, p. 18.

1203 xxii, 57, § 10; xxiii, 24, § 11.

1204 ii, 32, § 6.

1205 B. G., vi, 13, § 10, 17, § 5; Tac., Ann., xiv, 30; Dion
Cassius, lxii, 7, § 3. Cf. G. Dottin, La rel. des Celtes, p. 30.
Strabo (iv, 4, § 6), Diodorus Siculus (v, 27, § 4), Plutarch
(Caesar, 26), and Suetonius (Divus Iulius, 54) speak of
temples in Transalpine Gaul; but all archaeologists would
admit that the words which they used—τέμενος, ἱερόν,
fanum, and templum—did not denote roofed edifices. I
think, however, that Livy (xxii, 57, § 10, xxiii, 24, § 11)
had such buildings in mind. Whether he was well informed
is another question. Cf. Rev. des études anc., iv, 1902, pp.
279-80.

1206 Tacitus, Germ., 9.

1207 Livy, i, 31, § 3. Cf. W. Warde Fowler, The Roman Festivals,


pp. 338-9, and J. G. Frazer, Early Hist. of the Kingship, pp.
210-1.

1208 Rev. celt., xiii, 1892, pp. 190-3. Cf. vol. xi, 1890, p. 225. M.
d’A. de Jubainville (Rev. arch., 4e sér., viii, 1906, p. 146)
says that ‘la vie de Saint Samson désigne par le mot
simulacrum une pierre levée, lapis stans, qui était l’objet
d’un culte en Grande-Bretagne au milieu du VIe siècle’, &c.

1209 Pausanias, vii, 22, § 4.

1210 M. Jullian (Rev. des études anc., iv, 1902, pp. 284 n. 6,
285 n. 1), referring to the passage in which Lucan (iii,
412-3) describes the Druids’ grove near Massilia,—

simulacraque maesta deorum
Arte carent caesisque exstant informia truncis,

and interpreting it differently from M. Reinach, argues that
Caesar’s simulacra ‘ne peut signifier que des objets ayant
déjà vaguement l’aspect de forme humaine’. In regard to
the ‘statues—menhirs’, which the abbé Hermet (Congrès
internat. d’anthr. et d’archéol. préhist., 1900 (1903), pp. 335-
8) regards as figures of divinities, see p. 200, supra, and
cf. E. B. Tylor, Prim. Culture, ii, 1903, p. 168.

1211 Rev. celt., xiii, 1892, p. 199.

1212 M. d’A. de Jubainville (ib., xxvii, 1906, p. 122) argues that


the absence of pre-Roman Gallic statues is due not to
Druidical influence but to the fact that the Gauls built their
houses not of stone but of wood, and were therefore
ignorant of the art of sculpture! But houses built of stone
have been found at Bibracte. See Congrès internat.
d’anthr. et d’archéol. préhist., 1900 (1902), pp. 418-9.

1213 Augustine, De civ. Dei, iv, 31.

1214 Germ., 9.

1215 G. Boissier, La rel. rom., 1892, pp. 8, 35. Cf. Ovid, Fasti, vi,
295.

1216 See Guide to the Ant. of the Early Iron Age (Brit.
Museum), p. 115.

M. Camille Jullian (Rev. des études anc., v, 1903, p. 251,
n. 1) maintains that Caesar (B. G., vi, 19, § 4) does not
say that the rich were cremated, but only their slaves. M.
Jullian’s interpretation of this well-known passage is, I
believe, unique; anyhow, the statement in the text rests
upon certain archaeological evidence. See Rev. celt., xx,
1899, pp. 119-20; Rev. de synthèse hist., 1901, p. 50; and
Guide to the Ant. of the Early Iron Age (Brit. Museum), p.
84.

1217 Archaeologia, lii, 1890, pp. 320, 322, 325.

1218 J. R. Mortimer, Forty Years’ Researches, p. 357.


1219 J. Anderson, Scotland in Pagan Times,—the Bronze and
Stone Ages, p. 229.

1220 Crania Britannica, ii, pl. 6 and 7, pp. 1-3; Archaeol.
Journal, xliv, 1887, p. 271; Archaeol. Cant., xxvi, 1904, pp.
11-2; Guide to the Ant. of the Early Iron Age (Brit.
Museum), p. 109.

1221 Guide to the Ant. of the Early Iron Age (Brit. Museum), pp.
106-7, 110-1. Cf. Crania Britannica, ii, pl. 6 and 7, p. 6. Mr.
Reginald Smith (Guide, &c., p. 112) remarks, in regard to
the ‘Danes’ Graves’ near Driffield, in the East Riding of
Yorkshire, that ‘the bodies lay indifferently on the right or
left side, though the majority had the head at the north
end of the grave: there was thus’, he adds, ‘no tendency
to face the sun, as in the Bronze period’. Since the bodies,
on whichever side they lay, would have faced either the
morning or the afternoon sun, Mr. Smith’s observation
apparently assumes that in the Bronze period corpses
were laid so as to face the morning sun, which was far
from being an invariable rule. See pp. 188-9, supra, and
the authorities there cited; also Wilts Archaeol. and Nat.
Hist. Mag., x, 1866, p. 101. Unhappily Sir R. C. Hoare,
from whom we learn that in Wiltshire corpses were
generally laid with their heads pointing northward, omits
to say whether they were laid on the right or the left side.
[See Addenda.]

1222 J. Romilly Allen, Celtic Art, pp. 63-71; Guide to the Ant. of
the Early Iron Age (Brit. Museum), pp. 104-20.

1223 Ib., p. 112.

1224 Ib., p. 122; W. Greenwell, Brit. Barrows, pp. 208-12.

1225 B. G., vi, 19, § 4.


1226 Or, as Dr. Evans, who mentions both alternatives, suggests
(Archaeologia, lii, 1890, p. 326), for the introduction of
food. See pp. 115-6, supra.

1227 Archaeologia, lii, 1890, pp. 324-7. Cf. Guide to the Ant. of
the Early Iron Age (Brit. Museum), pp. 82-3, and see also
W. C. Borlase, Nenia Cornubiae, pp. 247-51.

1228 Vitae phil., ed. Didot, p. 2, ll. 22-3.

1229 B. G., vi, 13, § 11.

1230 Diogenes Laertius, ed. Didot, p. 1, l. 11.

1231 B. G., vi, 21, § 1.

1232 Ib., 16, § 1.

1233 See Rev. des études anc., iv, 1902, p. 102.

1234 B. G., vi, 21, § 1.

1235 Arrian, De venatione, 34, §§ 1-3.

1236 See Rev. des études anc., vi, 1904, pp. 47-8, 53, 55, 59-
60.

1237 See p. 291, n. 2, infra.

1238 ‘The political condition of the people of Brythonic Britain,’
says Prof. Rhys (Celtic Britain, 3rd ed., 1904, pp. 57, 61),
‘towards the end of the Early Iron Age and the close of
their independence, is best studied in connection with that
of Gaul as described by Caesar.... The state of things,
politically speaking, which existed in Gaul, existed also
most likely among the Belgic tribes in Britain.’ That is to
say, the professor accepts the political part of Caesar’s
description as applying to the Belgic and the other
Brythonic tribes of both Gaul and Britain. Yet he insists
that that part of the same description which deals with
Druidism, and which is indissolubly connected with the
political part, has nothing to do either with the Belgae or
the other Brythons.

1239 Professor Rhys virtually admits this when he says that the
Brythonic dialect was largely influenced by the language
of the aborigines. See p. 452, n. 8, infra.

1240 The problem of the origin of Druidism is interesting as an
example of the divergence which exists among Celtic
scholars upon almost every important question of Celtic
religion, and also because it once more illustrates the
working of that powerful but erratic engine,—the mind of
Professor Rhys. The first known mention of Druidism, the
substance of which is reproduced in Diogenes Laertius’s
Lives of the Philosophers, occurred in a work by Sotion of
Alexandria, who lived about 200 B.C. From this, M. d’Arbois
de Jubainville (Principaux auteurs de l’ant. à consulter sur
l’hist. des Celtes, 1902, pp. 187-8) infers that the Belgic
invaders of Britain found Druidism flourishing there about
that date, and transplanted it into the country which they
had left, but with which they kept up a constant
intercourse. M. d’Arbois has consistently maintained this
view for many years; and under his influence Professor
Rhys affirmed in 1879 (Lectures on Welsh Philology, 2nd
ed., pp. 83-4) that Druidism reached Gaul ‘undoubtedly
through the Belgae who had settled in Britain’. Now,
however, the professor rightly holds that the Belgae were
preceded in Britain by other Brythons (Celtic Britain, 1904,
p. 4); and it would seem therefore that the date of the
first mention of Druidism gives no clue as to the place
where it originated. Moreover, Professor Rhys has long
been of opinion that there is ‘no proof that any Belgic or
Brythonic people ever had Druids’ (ib., 2nd ed., 1884, p.
69; 3rd ed., 1904, p. 69; Report of ... the Brit. Association,
1900, p. 894). In 1901, accordingly, he argued (Celtic
Folk-lore, ii, 623, 685) that the Goidelic invaders of Britain
(whose existence, I must remind the reader, is denied by
some Celtic scholars) ‘got their magic and druidism’ from
‘the [imaginary] dwarf race of the sids’ (see p. 391, infra).
But in 1900 (The Welsh People, p. 83) and again in 1902
(ib., 3rd ed.) he affirmed that Druidism had been ‘evolved
by the Continental Goidels, or rather accepted by them
from the Aborigines’. Presumably, then, they already had
Druids when they invaded Britain, and had no need to
borrow them from the sids. By 1904, however, the
professor appears to have concluded that Druidism
originated independently among the aborigines both of
Gaul and of Britain, and that with both it was an
inheritance from common ancestors; for, after telling us
(Celtic Britain, 3rd ed., p. 69) that Druidism ‘may be
surmised to have had its origin’ among ‘the non-Celtic
natives’ of Britain, he goes on to say that it ‘possessed
certain characteristics which enabled it to make terms
with the Celtic conqueror, both in Gaul and in the British
islands’; while on page 73 he remarks that ‘it is hard to
accept the belief ... that druidism originated here’, and
concludes that ‘the Celts found it both here and there [in
Gaul] the common religion of some of the aboriginal
inhabitants’. But the weary student who hopes to be
allowed to acquiesce in this conclusion is distracted by
finding that on page 4 of this very book, in which the
professor insists that ‘there is no proof that any ...
Brythonic people ever had Druids’, he affirms that ‘traces
of [the Goidels] are difficult to discover on the Continent’
(Celtic Britain, p. 4). This time the conclusion would seem
to be that the Gauls, whose Druids Caesar described,
were neither Goidels nor Brythons! It is hardly necessary
to add that the professor has since satisfied himself (see