Deep Learning and Linguistic Representation
Chapman & Hall/CRC Machine Learning & Pattern Recognition

Introduction to Machine Learning with Applications in Information Security
Mark Stamp

A First Course in Machine Learning
Simon Rogers, Mark Girolami

Statistical Reinforcement Learning: Modern Machine Learning Approaches
Masashi Sugiyama

Sparse Modeling: Theory, Algorithms, and Applications
Irina Rish, Genady Grabarnik

Computational Trust Models and Machine Learning
Xin Liu, Anwitaman Datta, Ee-Peng Lim

Regularization, Optimization, Kernels, and Support Vector Machines
Johan A.K. Suykens, Marco Signoretto, Andreas Argyriou

Machine Learning: An Algorithmic Perspective, Second Edition
Stephen Marsland

Bayesian Programming
Pierre Bessiere, Emmanuel Mazer, Juan Manuel Ahuactzin, Kamel Mekhnacha

Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data
Haiping Lu, Konstantinos N. Plataniotis, Anastasios Venetsanopoulos

Data Science and Machine Learning: Mathematical and Statistical Methods
Dirk P. Kroese, Zdravko Botev, Thomas Taimre, Radislav Vaisman

Deep Learning and Linguistic Representation
Shalom Lappin

For more information on this series please visit: https://www.routledge.com/Chapman--Hall-CRC-Machine-Learning--Pattern-Recognition/book-series/CRCMACLEAPAT
Deep Learning and Linguistic Representation

Shalom Lappin
First edition published 2021
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

and by CRC Press


2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2021 Shalom Lappin

CRC Press is an imprint of Taylor & Francis Group, LLC

The right of Shalom Lappin to be identified as author of this work has been asserted by him in accor-
dance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

Reasonable efforts have been made to publish reliable data and information, but the author and pub-
lisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced
in this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know so
we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information stor-
age or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access
www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please
contact mpkbookspermissions@tandf.co.uk

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data


Names: Lappin, Shalom, author.
Title: Deep learning and linguistic representation / Shalom Lappin.
Description: Boca Raton : CRC Press, 2021. | Includes bibliographical
references and index.
Identifiers: LCCN 2020050622 | ISBN 9780367649470 (hardback) | ISBN
9780367648749 (paperback) | ISBN 9781003127086 (ebook)
Subjects: LCSH: Computational linguistics. | Natural language processing
(Computer science) | Machine learning.
Classification: LCC P98 .L37 2021 | DDC 410.285--dc23
LC record available at https://lccn.loc.gov/2020050622

ISBN: 978-0-367-64947-0 (hbk)


ISBN: 978-0-367-64874-9 (pbk)
ISBN: 978-1-003-12708-6 (ebk)

Typeset in Latin Modern font


by KnowledgeWorks Global Ltd.
In memory of my mother, Ada Lappin, who taught me what love of language is.
Contents

Preface xi

Chapter 1  Introduction: Deep Learning in Natural Language Processing 1

1.1 OUTLINE OF THE BOOK 1
1.2 FROM ENGINEERING TO COGNITIVE SCIENCE 4
1.3 ELEMENTS OF DEEP LEARNING 7
1.4 TYPES OF DEEP NEURAL NETWORKS 10
1.5 AN EXAMPLE APPLICATION 17
1.6 SUMMARY AND CONCLUSIONS 21

Chapter 2  Learning Syntactic Structure with Deep Neural Networks 23

2.1 SUBJECT-VERB AGREEMENT 23
2.2 ARCHITECTURE AND EXPERIMENTS 24
2.3 HIERARCHICAL STRUCTURE 34
2.4 TREE DNNS 39
2.5 SUMMARY AND CONCLUSIONS 42

Chapter 3  Machine Learning and the Sentence Acceptability Task 45

3.1 GRADIENCE IN SENTENCE ACCEPTABILITY 45
3.2 PREDICTING ACCEPTABILITY WITH MACHINE LEARNING MODELS 51
3.3 ADDING TAGS AND TREES 62
3.4 SUMMARY AND CONCLUSIONS 66

Chapter 4  Predicting Human Acceptability Judgements in Context 69

4.1 ACCEPTABILITY JUDGEMENTS IN CONTEXT 69
4.2 TWO SETS OF EXPERIMENTS 75
4.3 THE COMPRESSION EFFECT AND DISCOURSE COHERENCE 78
4.4 PREDICTING ACCEPTABILITY WITH DIFFERENT DNN MODELS 80
4.5 SUMMARY AND CONCLUSIONS 87

Chapter 5  Cognitively Viable Computational Models of Linguistic Knowledge 89

5.1 HOW USEFUL ARE LINGUISTIC THEORIES FOR NLP APPLICATIONS? 89
5.2 MACHINE LEARNING MODELS VS FORMAL GRAMMAR 92
5.3 EXPLAINING LANGUAGE ACQUISITION 96
5.4 DEEP LEARNING AND DISTRIBUTIONAL SEMANTICS 100
5.5 SUMMARY AND CONCLUSIONS 108

Chapter 6  Conclusions and Future Work 113

6.1 REPRESENTING SYNTACTIC AND SEMANTIC KNOWLEDGE 113
6.2 DOMAIN-SPECIFIC LEARNING BIASES AND LANGUAGE ACQUISITION 119
6.3 DIRECTIONS FOR FUTURE WORK 121

REFERENCES 123

Author Index 139

Subject Index 145


Preface

Over the past 15 years deep learning has produced a revolution in artifi-
cial intelligence. In natural language processing it has created robust,
large coverage systems that achieve impressive results across a wide
range of applications, where these were resistant to more traditional
machine learning methods, and to symbolic approaches. Deep neural
networks (DNNs) have become dominant throughout many domains of
AI in general, and in NLP in particular, by virtue of their success as
engineering techniques.
Recently, a growing number of computational linguists and cognitive
scientists have been asking what deep learning might teach us about the
nature of human linguistic knowledge. Unlike the early connectionists of
the 1980s, these researchers have generally avoided making claims about
analogies between deep neural networks and the operations of the brain.
Instead, they have considered the implications of these models for the
cognitive foundations of natural language, in nuanced and indirect ways.
In particular, they are interested in the types of syntactic structure that
DNNs identify, and the semantic relations that they can recognise. They
are concerned with the manner in which DNNs represent this informa-
tion, and the training procedures through which they obtain it.
This line of research suggests that it is worth exploring points of
similarity and divergence between the ways in which DNNs and humans
encode linguistic information. The extent to which DNNs approach (and,
in some cases, surpass) human performance on linguistically interesting
NLP tasks, through efficient learning, gives some indication of the capac-
ity of largely domain general computational learning devices for language
learning. An obvious question is whether humans could, in principle, ac-
quire this knowledge through similar sorts of learning processes.
This book draws together work on deep learning applied to natural
language processing that I have done, together with colleagues, over the
past eight years. It focusses on the question of what current methods
of machine learning can contribute to our understanding of the way in
which humans acquire and represent knowledge of the syntactic and
semantic properties of their language. The book developed out of two
online courses that I gave recently. I presented the first as a series of
talks for students and colleagues from the Centre for Linguistic Theory
and Studies in Probability (CLASP) at the University of Gothenburg,
in June 2020. The second was an invited course for the Brandeis Web
Summer School in Logic, Language, and Information, in July 2020. I
am grateful to the participants of both forums for stimulating discussion
and helpful comments.
I am deeply indebted to the colleagues and students with whom I
did the joint experimental work summarised here. I use it as the basis
for addressing the broader cognitive issues that constitute the book’s
focus. As will become clear from the co-authored publications that I cite
throughout the following chapters, they have played a central role in the
development of my ideas on these issues. I am enormously grateful to
them for guiding me through much of the implementation of the work
that we have done together. I wish to express my appreciation to Carlos
Armendariz, Jean-Philippe Bernardy, Yuri Bizzoni, Alex Clark, Adam
Ek, Gianluca Giorgolo, Jey Han Lau, and Matt Purver for their very
major contributions to our joint work.
The students and research staff at CLASP have provided a won-
derfully supportive research environment. Their scientific activity and,
above all, their friendship, have played a significant role in facilitating
the work presented here. Stergios Chatzikyriakidis and Bill Noble have
been an important source of feedback for the development of some of
the ideas presented here. I am grateful to my colleagues in the Cognitive
Science Group in the School of Electronic Engineering and Computer
Science at Queen Mary University of London for helpful discussion of
some of the questions that I take up in this book. Stephen Clark and
Pat Healey have provided helpful advice on different aspects of my re-
cent research. I would also like to thank Devdatt Dubhashi, head of the
Machine Learning Group at Chalmers University of Technology in Gothen-
burg, for introducing me to many of the technical aspects of current
work in deep learning, and for lively discussion of its relevance to NLP.
I am particularly grateful to Stephen Clark for detailed comments on
an earlier draft of this monograph. He caught many mistakes, and sug-
gested valuable improvements. Needless to say, I bear sole responsibility
for any errors in this book. My work on the monograph is supported by
grant 2014-39 from the Swedish Research Council, which funds CLASP.
Elliott Morsia, my editor at Taylor and Francis, has provided superb
help and support. Talitha Duncan-Todd, my production person, and
Shashi Kumar, the LaTeX support person, have given me much needed
assistance throughout the production process.
My family, particularly my children and grandchildren, are the source
of joy and wider sense of purpose needed to complete this, and many
other projects. While they share the consensus that I am a hopeless nerd,
they assure me that the scientific issues that I discuss here are worth-
while. They frequently ask thoughtful questions that help to advance my
thinking on these issues. They remain permanently surprised that some-
one so obviously out of it could work in such a cool field. In addition to
having them, this is indeed one of the many blessings I enjoy. Above all,
my wife Elena is a constant source of love and encouragement. Without
her none of this would be possible.
The book was written in the shadow of the Covid-19 pandemic. This
terrible event has brought at least three phenomena clearly into view.
First, it has underlined the imperative of taking the results of scientific
research seriously, and applying them to public policy decisions. Leaders
who dismiss well motivated medical advice, and respond to the crisis
through denial and propaganda, are inflicting needless suffering on their
people. By contrast, governments that allow themselves to be guided by
well supported scientific work have been able to mitigate the damage
that the crisis is causing.
Second, the crisis has provided a case study in the damage that
large scale campaigns of disinformation, and defamation, can cause to
the health and the well-being of large numbers of people. Unfortunately,
digital technology, some of it involving NLP applications, has provided
the primary devices through which these campaigns are conducted. Com-
puter scientists working on these technologies have a responsibility to
address the misuse of their work for socially destructive purposes. In
many cases, this same technology can be applied to filter disinformation
and hate propaganda. It is also necessary to insist that the agencies for
which we do this work be held accountable for the way in which they
use it.
Third, the pandemic has laid bare the devastating effects of extreme
economic and social inequality, with the poor and ethnically excluded
bearing the brunt of its effects. Nowhere has this inequality been more
apparent than in the digital technology industry. The enterprises of this
industry sustain much of the innovative work being done in deep learn-
ing. They also instantiate the sharp disparities of wealth, class, and
opportunity that the pandemic has forced into glaring relief. The engi-
neering and scientific advances that machine learning is generating hold
out the promise of major social and environmental benefit. In order for
this promise to be realised, it is necessary to address the acute deficit in
democratic accountability, and in equitable economic arrangements that
the digital technology industry has helped to create.
Scientists working in this domain can no longer afford to treat these
problems as irrelevant to their research. The survival and stability of
the societies that sustain this research depend on finding reasonable
solutions to them.
NLP has blossomed into a wonderfully vigorous field of research.
Deep learning is still in its infancy, and it is likely that the architec-
tures of its systems will change radically in the near future. By using it
to achieve perspective on human cognition, we stand to gain important
insight into linguistic knowledge. In pursuing this work it is essential
that we pay close attention to the social consequences of our scientific
research.

Shalom Lappin
London
October, 2020
CHAPTER 1

Introduction: Deep Learning in Natural Language Processing

1.1 OUTLINE OF THE BOOK


In this chapter I will briefly introduce some of the main formal and archi-
tectural elements of deep learning systems.1 I will provide an overview of
the major types of DNN used in NLP. We will start with simple feed for-
ward networks and move on to different types of Recurrent Neural Net-
works (RNNs), specifically, simple RNNs and Long Short-Term Memory
RNNs. We will next look at Convolutional Neural Networks (CNNs),
and then conclude with Transformers. For the latter type of network we
will consider GPT-2, GPT-3, and BERT. I conclude the chapter with
a composite DNN that Bizzoni and Lappin (2017) construct for para-
phrase assessment, in order to illustrate how these models are used in
NLP applications.
Chapter 2 is devoted to recent work on training DNNs to learn syn-
tactic structure for a variety of tasks. The first application that I look
at is predicting subject-verb agreement across sequences of possible NP
controllers (Bernardy & Lappin, 2017; Gulordava, Bojanowski, Grave,
Linzen, & Baroni, 2018; Linzen, Dupoux, & Goldberg, 2016). It is neces-
sary to learn hierarchical syntactic structure to succeed at this task, as
linear order does not, in general, determine subject-verb agreement. An
important issue here is whether unsupervised neural language models
(LMs) can equal or surpass the performance of supervised LSTM mod-
els. I then look at work comparing Tree RNNs, which encode syntactic
tree structure, with sequential (non-tree) LSTMs, across several other
applications.

1. For an excellent detailed guide to the mathematical and formal concepts of deep learning, consult Goodfellow, Bengio, and Courville (2016).

In Chapter 3, I present work by Lau, Clark, and Lappin (2017) on
using a variety of machine learning methods, including RNNs, to predict
mean human sentence acceptability judgements. They give experimental
evidence that human acceptability judgements are individually, as well
as aggregately, gradient, and they test several machine learning models
on crowd-sourced annotated corpora. These include naturally occurring
text from the British National Corpus (BNC) and Wikipedia, which is
subjected to round-trip machine translation through another language,
and back to English, to introduce infelicities into the test corpus. Lau,
Clark, and Lappin (2017) extend these experiments to other languages.
They also test their models on a set of linguists’ examples, which they
annotate through crowd sourcing. I conclude this chapter with a discus-
sion of Ek, Bernardy, and Lappin (2019), which reports the results of
an experiment in which LSTMs are trained on Wikipedia corpora en-
riched with a variety of syntactic and semantic markers, and then tested
for predicting mean human acceptability judgements, on Lau, Clark, and
Lappin (2017)’s annotated BNC test set. The sentence acceptability task
is linguistically interesting because it measures the capacity of machine
learning language models to predict the sorts of judgements that have
been widely used to motivate linguistic theories. The accuracy of a lan-
guage model in this task indicates the extent to which it can acquire the
sort of knowledge that speakers use in classifying sentences as more or
less grammatical and well-formed in other respects.
Chapter 4 looks at recent work by Bernardy, Lappin, and Lau (2018)
and Lau, Armendariz, Lappin, Purver, and Shu (2020) on extending the
sentence acceptability task to predicting mean human judgements of
sentences presented in different sorts of document contexts. The crowd-
source experiments reported in these papers reveal an unexpected com-
pression effect, in which speakers assessing sentences in both real and
random contexts raise acceptability scores, relative to out of context rat-
ings, at the lower end of the scale, but lower them at the high end. Lau
et al. (2020) control for a variety of confounds in order to identify the
factors that seem to produce the compression effect. This chapter also
presents the results of Lau’s new total least squares regression work,
which confirms that this effect is a genuine property of the data, rather
than regression to the mean.
Lau et al. (2020) expand the set of neural language models to
include unidirectional and bidirectional transformers. They find that
bidirectional, but not unidirectional, transformers approach a plausible
estimated upper bound on individual human prediction of sentence ac-
ceptability, across context types. This result raises interesting questions
concerning the role of directionality in human sentence processing.
In Chapter 5 I discuss whether DNNs, particularly those described
in previous chapters, offer cognitively plausible models of linguistic rep-
resentation and language acquisition. I suggest that if linguistic theories
provide accurate explanations of linguistic knowledge, then NLP systems
that incorporate their insights should perform better than those that
do not, and I explore whether these theories, specifically those of for-
mal syntax, have, in fact, made significant contributions to solving NLP
tasks. Answering this question involves looking at more recent DNNs
enriched with syntactic structure. I also compare DNNs with grammars,
as models of linguistic knowledge. I respond to criticisms that Sprouse,
Yankama, Indurkhya, Fong, and Berwick (2018) raise against Lau, Clark,
and Lappin (2017)’s work on neural language models for the sentence
acceptability task to support the view that syntactic knowledge is proba-
bilistic rather than binary in nature. Finally, I consider three well-known
cases from the history of linguistics and cognitive science in which the-
orists reject an entire class of models as unsuitable for encoding human
linguistic knowledge, on the basis of the limitations of a particular mem-
ber of the class. The success of more sophisticated models in the class
has subsequently shown these inferences to be unsound. They represent
influential cases of overreach, in which convincing criticism of a fairly
simple computational model is used to dismiss all models of a given
type, without considering straightforward improvements that avoid the
limitations of the simpler system.
I conclude Chapter 5 with a discussion of the application of deep
learning to distributional semantics. I first briefly consider the type the-
oretic model that Coecke, Sadrzadeh, and Clark (2010) and Grefenstette,
Sadrzadeh, Clark, Coecke, and Pulman (2011) develop to construct com-
positional interpretations for phrases and sentences from distributional
vectors, on the basis of the syntactic structure specified by a pregroup
grammar. This view poses a number of conceptual and empirical prob-
lems. I then suggest an alternative approach on which semantic interpre-
tation in a deep learning context is an instance of sequence to sequence
(seq2seq) machine translation. This involves mapping sentence vectors
into multimodal vectors that represent non-linguistic events and situa-
tions.
Chapter 6 presents the main conclusions of the book. It briefly takes
up some of the unresolved issues, and the questions raised by the research
discussed here. I consider how to pursue these in future work.

1.2 FROM ENGINEERING TO COGNITIVE SCIENCE


Over the past ten years the emergence of powerful deep learning (DL)
models has produced significant advances across a wide range of AI tasks
and domains. These include, among others, image classification, face
recognition, medical diagnostics, game playing, and autonomous robots.
DL has been particularly influential in NLP, where it has yielded sub-
stantial progress in applications like machine translation, speech recog-
nition, question-answering, dialogue management, paraphrase identifica-
tion and natural language inference (NLI). In these areas of research, it
has displaced other machine learning methods to become the dominant
approach.
The success of DL as an engineering method raises important cog-
nitive issues. DNNs constitute domain general learning devices, which
apply the same basic approach to learning, data processing, and repre-
sentation to all types of input data. If they are able to approximate or
surpass human performance in a task, what conclusions, if any, can we
draw concerning the nature of human learning and representation for
that task?
Lappin and Shieber (2007) and A. Clark and Lappin (2011) suggest
that computational learning theory provides a guide to determining how
much linguistic knowledge can be acquired through different types of ma-
chine learning models. They argue that relatively weak bias models can
efficiently learn complex grammar classes suitable for natural language
syntax. Their results do not entail that humans actually use these models
for language acquisition. But they do indicate the classes of grammars
that humans can, in principle, acquire through domain general methods
of induction, from tractable quantities of data, in plausible amounts of
time.
Early connectionists (Rumelhart, McClelland, & PDP Research Group,
1986) asserted that neural networks are modelled on the human brain.
Few people working in DL today make this strong claim. The extent,
if any, to which human learning and representation resemble those of a
DNN can only be determined by neuroscientific research. A weak view of
DL takes it to show what sorts of knowledge can be acquired by domain
general learning procedures. To the degree that domain-specific learning
biases must be added to a DNN, either through architectural design or
enrichment of training data with feature annotation, in order to succeed
at an AI task, domain general learning alone is not sufficient to achieve
knowledge of this task.
The distinction between strong and weak views of DNNs is tangen-
tially related to the difference between strong and weak AI. On the
strong view of AI, an objective of research in artificial intelligence is to
construct computational agents that reproduce general human intelli-
gence and are fully capable of human reasoning. The weak view of AI
takes its objective to be the development of computational devices that
achieve functional equivalence to human problem solving abilities.2 But
while related, these two pairs of notions are distinct, and it is important
not to confuse them. The strong vs weak construal of DL turns not on
the issue of replicating general intelligence, but on whether or not one
regards DNNs as models of the brain.
In this book I am setting aside the controversy between strong vs
weak AI, while adopting the weak view of DL. If a DNN is able to ap-
proach or surpass human performance on a linguistic task, then this
shows how domain general learning mechanisms, possibly supplemented
with additional domain bias factors, can, in principle, acquire this knowl-
edge efficiently. I am not concentrating on grammar induction, but the
more general issues of language learning and the nature of linguistic
representation.
Many linguists and cognitive scientists express discomfort with DNNs
as models of learning and representation, on the grounds that they are
opaque in the way in which they produce their output. They are fre-
quently described as black box devices that are not accessible to clear
explanation, in contrast to more traditional machine learning models and
symbolic, rule-based systems. Learning theorists also observe that older
computational learning theories, like PAC (Valiant, 1984) and Bayesian
models (Berger, 1985), prove general results on the class of learnable
objects. These results specify the rates and complexity of learning, in
proportion to resources of time and data, for their respective frame-
works.3 In general, such results are not yet available for different classes
of DNN.

2. See Bringsjord and Govindarajulu (2020) for a recent discussion of strong and weak approaches to AI. Turing (1950) played a significant role in defining the terms of debate between these two views of AI by suggesting a test of general intelligence on which humans would be unable to distinguish an artificial from a human interlocutor in dialogue. For discussions of the Turing test, see Shieber (2007) and the papers in Shieber (2004).

It is certainly the case that we still have much to discover about how
DNNs function and the formal limits of their capacities. However, it is
not accurate to describe them as sealed apparatuses whose inner working
is shrouded in mystery. We design their architecture, implement them
and control their operation. A growing number of researchers are devis-
ing methods to illuminate the internal patterns through which DNNs
encode, store, and process information of different kinds. The papers in
Linzen, Chrupala, and Alishahi (2018) and Linzen, Chrupala, Belinkov,
and Hupkes (2019) address the issue of explainability of DL in NLP, as
part of an ongoing series of workshops devoted to this question.
At the other extreme one encounters hyperbole and unrealistic ex-
pectations that DL will soon yield intelligent agents able to engage in
complex reasoning and inference, at a level that equals or surpasses hu-
man performance. This has given rise to fears of powerful robots that
can become malevolent AI actors.4 This view is not grounded in the re-
ality of current AI research. DNNs have not achieved impressive results
in domain general reasoning tasks. A breakthrough in this area does not
seem imminent with current systems, nor is it likely in the foreseeable
future. The relative success of DNNs with natural language inference
test suites relies on pattern matching and analogy, achieved through
training on large data sets of the same type as the sets on which they
are tested. Their performance is easily disrupted by adversarial testing
with substitutions of alternative sentences in these test sets. Talman and
Chatzikyriakidis (2019) provide interesting experimental results of this
sort of adversarial testing.
DL is neither an ad hoc engineering tool that applies impenetrable
processing systems to vast amounts of data to achieve results through in-
explicable operations, nor a technology that is quickly approaching the
construction of agents with powerful general intelligence. It is a class
of remarkably effective learning models that have made impressive ad-
vances across a wide range of AI applications. By studying the abilities
and the limitations of these models in handling linguistically interesting
NLP tasks, we stand to gain useful insights into possible ways of encoding
and representing natural language as part of the language learning pro-
cess. We will also deepen our understanding of the relative contributions
of domain general induction procedures on the one hand, and language-
specific learning biases on the other, to the success and efficiency of this
process.

3. See Lappin and Shieber (2007) and A. Clark and Lappin (2011) for the application of PAC and statistical learning theory to grammar induction.

4. See Dubhashi and Lappin (2017) for arguments that these concerns are seriously misplaced.

1.3 ELEMENTS OF DEEP LEARNING


DNNs learn an approximation of a function f(x) = y which maps input
data x to an output value y. These values generally involve classifying an
object relative to a set of possible categories or determining the condi-
tional probability of an event, given a sequence of preceding occurrences.
Deep Feed Forward Networks consist of

(i) an input layer where data are entered,

(ii) one or more hidden layers, in which units (neurons) compute the
weights for components of the data, and

(iii) an output layer that generates a value for the function.

DNNs can, in principle, approximate any function and, in particular,
non-linear functions. Sigmoid functions are commonly used to determine
the activation threshold for a neuron. The graph in Fig 1.1 represents a
set of values that a sigmoid function specifies. The architecture of a feed
forward network is shown in Fig 1.2.

Figure 1.1 Sigmoid function.



Figure 1.2 Feed Forward Network.


From Tushar Gupta, “Deep Learning: Feedforward Neural Network”, Towards Data
Science, January 5, 2017.
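To make this architecture concrete, the following minimal Python sketch (with arbitrary layer sizes and randomly initialised weights, purely for illustration) passes an input vector through hidden layers that each apply a linear map followed by the sigmoid activation of Fig 1.1:

import numpy as np

def sigmoid(z):
    # Squash each value into (0, 1); this is the activation graphed in Fig 1.1.
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, weights, biases):
    # One pass through a feed forward network: each layer applies a linear
    # map to the previous layer's output and then the sigmoid activation.
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(W @ h + b)
    return h

# A toy network: 4 input units, one hidden layer of 3 units, 2 output units.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(2, 3))]
biases = [np.zeros(3), np.zeros(2)]
print(feed_forward(np.array([1.0, 0.5, -0.3, 2.0]), weights, biases))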

Training a DNN involves comparing its predicted function value to
the ground truth of its training data. Its error rate is reduced in cycles
(epochs) through back propagation. This process involves computing
the gradient of a loss (error) function and proceeding down the slope,
by specified increments, to an estimated optimal level, determined by
stochastic gradient descent.
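The following toy sketch illustrates the core of this procedure: repeated downhill steps on a loss surface. A one-parameter quadratic loss stands in here for the network's actual loss, and the learning rate is an arbitrary illustrative value.

# Gradient descent on a one-parameter quadratic loss L(w) = (w - 3)^2.
# Each step moves w a fixed increment down the slope of the loss,
# which is what a single weight update does during training.
def loss(w):
    return (w - 3.0) ** 2

def gradient(w):
    return 2.0 * (w - 3.0)

w = 0.0                      # initial weight
learning_rate = 0.1          # size of each downhill step
for epoch in range(50):
    w -= learning_rate * gradient(w)
print(w, loss(w))            # w has moved close to 3, the minimum of the loss

In a real DNN the gradient is computed with respect to every weight by back propagation, but each update has the same form: subtract the gradient scaled by the learning rate.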
Cross Entropy is a function that measures the difference between two
probability distributions P and Q through the formula:

H(P, Q) = -\mathbb{E}_{x \sim P} \log Q(x)

H(P, Q) is the cross entropy of the distribution Q relative to the distri-
bution P. -\mathbb{E}_{x \sim P} \log Q(x) is the negative of the expected value, for x,
given P, of the natural logarithm of Q(x). Cross entropy is widely used
as a loss function for gradient descent in training DNNs. At each epoch
in the training process cross entropy is computed, and the values of the
weights assigned to the hidden units are adjusted to reduce error along
the slope identified by gradient descent. Training is concluded when the
distance between the network’s predicted distribution and that projected
from the training data reaches an estimated optimal minimum.
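As a small illustration, cross entropy can be computed directly for discrete distributions given as probability vectors; the one-hot vector p below plays the role of the ground truth from the training data, and q the network's predicted distribution (the particular values are invented for the example).

import numpy as np

def cross_entropy(p, q):
    # H(P, Q) = -E_{x ~ P}[log Q(x)] for discrete distributions given as
    # probability vectors over the same set of outcomes.
    return -np.sum(p * np.log(q))

p = np.array([1.0, 0.0, 0.0])   # ground-truth (one-hot) distribution
q = np.array([0.7, 0.2, 0.1])   # the model's predicted distribution
print(cross_entropy(p, q))      # the loss shrinks as q concentrates on the true class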
In many cases the hidden layers of a DNN will produce a set of non-
normalised probability scores for the different states of a random variable
corresponding to a category judgement, or the likelihood of an event.
The softmax function maps the vector of these scores into a normalised
probability distribution whose values sum to 1. The function is defined
as follows:

\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}}
This function applies the exponential function to each input value, and
normalises it by dividing it with the sum of the exponentials for all the
inputs, to ensure that the output values sum to 1. The softmax function
is widely used in the output layer of a DNN to generate a probability
distribution for a classifier, or for a probability model.
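A direct implementation of the definition takes only a few lines of Python. Subtracting the maximum score before exponentiating is a standard numerical stability device rather than part of the definition, and the input scores below are invented for illustration.

import numpy as np

def softmax(z):
    # Subtracting the maximum score is a numerical stability device;
    # it leaves the resulting distribution unchanged.
    exp_z = np.exp(z - np.max(z))
    return exp_z / np.sum(exp_z)

scores = np.array([2.0, 1.0, 0.1])   # non-normalised scores from the hidden layers
probs = softmax(scores)
print(probs, probs.sum())            # a probability distribution summing to 1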
Words are represented in a DNN by vectors of real numbers. Each
element of the vector expresses a distributional feature of the word.
These features are the dimensions of the vectors, and they encode its
co-occurrence patterns with other words in a training corpus. Word em-
beddings are generally compressed into low dimensional vectors (200–300
dimensions) that express similarity and proximity relations among the
words in the vocabulary of a DNN model. These models frequently use
large pre-trained word embeddings, like word2vec (Mikolov, Kombrink,
Deoras, Burget, & Černocký, 2011) and GloVe (Pennington, Socher, &
Manning, 2014), compiled from millions of words of text.
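The similarity and proximity relations that embeddings encode are standardly measured with cosine similarity between vectors. The sketch below uses tiny hand-written 4-dimensional vectors purely for illustration; pre-trained word2vec or GloVe embeddings are loaded from files and have hundreds of dimensions.

import numpy as np

# Hand-written 4-dimensional "embeddings", purely for illustration; real
# word2vec or GloVe vectors have 200-300 dimensions estimated from corpora.
embeddings = {
    "cat":  np.array([0.9, 0.1, 0.3, 0.0]),
    "dog":  np.array([0.8, 0.2, 0.35, 0.1]),
    "bank": np.array([0.0, 0.9, 0.1, 0.8]),
}

def cosine_similarity(u, v):
    # 1.0 for vectors pointing in the same direction, 0.0 for orthogonal ones.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high: similar contexts
print(cosine_similarity(embeddings["cat"], embeddings["bank"]))   # low: dissimilar contexts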
In supervised learning a DNN is trained on data annotated with
the features that it is learning to predict. For example, if the DNN is
learning to identify the objects that appear in graphic images, then its
training data may consist of large numbers of labelled images of the
objects that it is intended to recognise in photographs. In unsupervised
learning the training data are not labelled.5 A generative neural language
model may be trained on large quantities of raw text. It will generate the
most likely word in a sequence, given the previous words, on the basis
of the probability distribution over words, and sequences of words, that
it estimates from the unlabelled training corpus.
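The prediction task itself can be illustrated with a count-based sketch: estimate the conditional probability of the next word from co-occurrence counts in raw, unlabelled text, and rank the possible continuations. A neural language model estimates these probabilities with a DNN trained by gradient descent rather than with counts, but the objective is the same; the toy corpus here is invented.

from collections import Counter, defaultdict

# Estimate P(next word | previous word) from raw, unlabelled text and use
# it to rank possible continuations; no feature annotation is required.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_distribution(word):
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))   # probability of each word that follows "the"
print(next_word_distribution("sat"))   # "on" is the only observed continuation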

1.4 TYPES OF DEEP NEURAL NETWORKS


Feed Forward Neural Networks take data encoded in vectors of fixed size
as input, and they yield output vectors of fixed size. Recurrent Neural
Networks (RNNs) (Elman, 1990) apply to sequences of input vectors,
producing a string of output vectors. They retain information from pre-
vious processing phases in a sequence, and so they have a memory over
the span of the input. RNNs are particularly well suited to processing
natural language, whose units of sound and text are structured as or-
dered strings. Fig 1.3 shows the architecture of an RNN.

Figure 1.3 Recurrent Neural Network: a single cell A unrolled over an input sequence x_0, x_1, ..., x_t, producing hidden states h_0, h_1, ..., h_t.
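A simple RNN cell of the kind unrolled in Figure 1.3 can be sketched as follows, with tanh as the activation and arbitrary illustrative dimensions; the same weight matrices are applied at every position of the sequence.

import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    # The same cell is applied at every position: the new hidden state h_t
    # depends on the current input x_t and the previous state h_{t-1}, which
    # is how the network keeps a memory over the span of the input.
    h = np.zeros(W_hh.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return states

rng = np.random.default_rng(0)
inputs = [rng.normal(size=4) for _ in range(3)]   # a sequence of three input vectors
W_xh = rng.normal(size=(5, 4))                    # input-to-hidden weights
W_hh = rng.normal(size=(5, 5))                    # hidden-to-hidden (recurrent) weights
b_h = np.zeros(5)
print(len(rnn_forward(inputs, W_xh, W_hh, b_h)))  # one hidden state per input vector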

Simple RNNs preserve information from previous states, but they
do not effectively control this information. They have difficulties repre-
senting long-distance dependencies between elements of a sequence. A

5. See A. Clark and Lappin (2010) for a detailed discussion of supervised and unsupervised learning in NLP.