Probability Theory and Statistical
Inference
Doubt over the trustworthiness of published empirical results is not unwarranted
and is often a result of statistical misspecification: invalid probabilistic assumptions
imposed on data. Now in its second edition, this bestselling textbook offers
a comprehensive course in empirical research methods, teaching the probabilistic
and statistical foundations that enable the specification and validation of statistical
models, providing the basis for an informed implementation of statistical procedures
to secure the trustworthiness of evidence. Each chapter has been thoroughly
updated, accounting for developments in the field and the author’s own research.
The comprehensive scope of the textbook has been expanded by the addition of
a new chapter on linear regression and related statistical models. This new
edition is now more accessible to students of disciplines beyond economics and
includes more pedagogical features, with an increased number of examples as well
as review questions and exercises at the end of each chapter.
ARIS SPANOS is Wilson E. Schmidt Professor of Economics at Virginia Polytechnic
Institute and State University. He is the author of Statistical Foundations
of Econometric Modelling (Cambridge, 1986) and, with D. G. Mayo, Error and
Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the
Objectivity and Rationality of Science (Cambridge, 2010).
Probability Theory
and Statistical
Inference
Empirical Modeling with
Observational Data
Second Edition
Aris Spanos
Virginia Tech
(Virginia Polytechnic Institute & State University)
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906
Cambridge University Press is part of the University of Cambridge.
It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.
www.cambridge.org
Information on this title: www.cambridge.org/9781107185142
DOI: 10.1017/9781316882825
© Aris Spanos 2019
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 1999
Third printing 2007
Second edition 2019
Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Spanos, Aris, 1952– author.
Title: Probability theory and statistical inference : empirical modelling
with observational data / Aris Spanos (Virginia Tech).
Description: Cambridge ; New York, NY : Cambridge University Press, 2019. |
Includes bibliographical references and index.
Identifiers: LCCN 2019008498 (print) | LCCN 2019016182 (ebook) | ISBN 9781107185142 | ISBN
Subjects: LCSH: Probabilities – Textbooks. | Mathematical
statistics – Textbooks.
Classification: LCC QA273 (ebook) | LCC QA273 .S6875 2019 (print) | DDC 519.5–dc23
LC record available at https://lccn.loc.gov/2019008498
ISBN 978-1-107-18514-2 Hardback
ISBN 978-1-316-63637-4 Paperback
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
To my grandchildren Nicholas, Jason, and Evie,
my daughters Stella, Marina, and Alexia, and my
wife Evie for their unconditional love and support
Contents
Preface to the Second Edition page xix
1 An Introduction to Empirical Modeling 1
1.1 Introduction 1
1.2 Stochastic Phenomena: A Preliminary View 3
1.2.1 Chance Regularity Patterns 3
1.2.2 From Chance Regularities to Probabilities 7
1.2.3 Chance Regularity Patterns and Real-World Phenomena 11
1.3 Chance Regularities and Statistical Models 12
1.4 Observed Data and Empirical Modeling 14
1.4.1 Experimental vs. Observational Data 14
1.4.2 Observed Data and the Nature of a Statistical Model 15
1.4.3 Measurement Scales and Data 16
1.4.4 Measurement Scale and Statistical Analysis 18
1.4.5 Cross-Section vs. Time Series, is that the Question? 20
1.4.6 Limitations of Economic Data 22
1.5 Statistical Adequacy 23
1.6 Statistical vs. Substantive Information∗ 25
1.7 Looking Ahead 27
1.8 Questions and Exercises 28
2 Probability Theory as a Modeling Framework 30
2.1 Introduction 30
2.1.1 Primary Objective 30
2.1.2 Descriptive vs. Inferential Statistics 30
2.2 Simple Statistical Model: A Preliminary View 32
2.2.1 The Basic Structure of a Simple Statistical Model 33
2.2.2 The Notion of a Random Variable: A Naive View 34
2.2.3 Density Functions 35
2.2.4 A Random Sample: A Preliminary View 36
2.3 Probability Theory: An Introduction 40
2.3.1 Outlining the Early Milestones of Probability Theory 40
2.3.2 Probability Theory: A Modeling Perspective 42
2.4 A Simple Generic Stochastic Mechanism 42
2.4.1 The Notion of a Random Experiment 42
2.4.2 A Bird’s-Eye View of the Unfolding Story 44
2.5 Formalizing Condition [a]: The Outcomes Set 45
2.5.1 The Concept of a Set in Set Theory 45
2.5.2 The Outcomes Set 45
2.5.3 Special Types of Sets 46
2.6 Formalizing Condition [b]: Events and Probabilities 48
2.6.1 Set-Theoretic Operations 48
2.6.2 Events vs. Outcomes 51
2.6.3 Event Space 51
2.6.4 A Digression: What is a Function? 58
2.6.5 The Mathematical Notion of Probability 59
2.6.6 Probability Space (S, ℑ, P(.)) 63
2.6.7 Mathematical Deduction 64
2.7 Conditional Probability and Independence 65
2.7.1 Conditional Probability and its Properties 65
2.7.2 The Concept of Independence Among Events 69
2.8 Formalizing Condition [c]: Sampling Space 70
2.8.1 The Concept of Random Trials 70
2.8.2 The Concept of a Statistical Space 72
2.8.3 The Unfolding Story Ahead 74
2.9 Questions and Exercises 75
3 The Concept of a Probability Model 78
3.1 Introduction 78
3.1.1 The Story So Far and What Comes Next 78
3.2 The Concept of a Random Variable 79
3.2.1 The Case of a Finite Outcomes Set: S = {s1 , s2 , . . . , sn } 80
3.2.2 Key Features of a Random Variable 81
3.2.3 The Case of a Countable Outcomes Set:
S = {s1 , s2 , . . . , sn , . . .} 85
3.3 The General Concept of a Random Variable 86
3.3.1 The Case of an Uncountable Outcomes Set S 86
3.4 Cumulative Distribution and Density Functions 89
3.4.1 The Concept of a Cumulative Distribution Function 89
3.4.2 The Concept of a Density Function 91
3.5 From a Probability Space to a Probability Model 95
3.5.1 Parameters and Moments 97
3.5.2 Functions of a Random Variable 97
3.5.3 Numerical Characteristics of Random Variables 99
3.5.4 Higher Moments 102
3.5.5 The Problem of Moments* 110
3.5.6 Other Numerical Characteristics 112
3.6 Summary 118
3.7 Questions and Exercises 119
Appendix 3.A: Univariate Distributions 121
3.A.1 Discrete Univariate Distributions 121
3.A.2 Continuous Univariate Distributions 123
4 A Simple Statistical Model 130
4.1 Introduction 130
4.1.1 The Story So Far, a Summary 130
4.1.2 From Random Trials to a Random Sample: A First View 130
4.2 Joint Distributions of Random Variables 131
4.2.1 Joint Distributions of Discrete Random Variables 131
4.2.2 Joint Distributions of Continuous Random Variables 133
4.2.3 Joint Moments of Random Variables 136
4.2.4 The n Random Variables Joint Distribution 138
4.3 Marginal Distributions 139
4.4 Conditional Distributions 142
4.4.1 Conditional Probability 142
4.4.2 Conditional Density Functions 143
4.4.3 Continuous/Discrete Random Variables* 146
4.4.4 Conditional Moments 146
4.4.5 A Digression: Other Forms of Conditioning 148
4.4.6 Marginalization vs. Conditioning 150
4.4.7 Conditioning on Events vs. Random Variables 151
4.5 Independence 155
4.5.1 Independence in the Two Random Variable Case 155
4.5.2 Independence in the n Random Variable Case 156
4.6 Identical Distributions and Random Samples 158
4.6.1 Identically Distributed Random Variables 158
4.6.2 A Random Sample of Random Variables 160
4.7 Functions of Random Variables 161
4.7.1 Functions of One Random Variable 161
4.7.2 Functions of Several Random Variables 162
4.7.3 Ordered Sample and its Distributions* 165
4.8 A Simple Statistical Model 166
4.8.1 From a Random Experiment to a Simple Statistical Model 166
4.9 The Statistical Model in Empirical Modeling 167
4.9.1 The Concept of a Statistical Model: A Preliminary View 167
4.9.2 Statistical Identification of Parameters 168
4.9.3 The Unfolding Story Ahead 169
4.10 Questions and Exercises 170
Appendix 4.A: Bivariate Distributions 171
4.A.1 Discrete Bivariate Distributions 171
4.A.2 Continuous Bivariate Distributions 172
5 Chance Regularities and Probabilistic Concepts 176
5.1 Introduction 176
5.1.1 Early Developments in Graphical Techniques 176
5.1.2 Why Do We Care About Graphical Techniques? 177
5.2 The t-Plot and Independence 178
5.3 The t-Plot and Homogeneity 184
5.4 Assessing Distribution Assumptions 189
5.4.1 Data that Exhibit Dependence/Heterogeneity 189
5.4.2 Data that Exhibit Normal IID Chance Regularities 195
5.4.3 Data that Exhibit Non-Normal IID Regularities 196
5.4.4 The Histogram, the Density Function, and Smoothing 201
5.4.5 Smoothed Histograms and Non-Random Samples 206
5.5 The Empirical CDF and Related Graphs* 206
5.5.1 The Concept of the Empirical cdf (ecdf) 207
5.5.2 Probability Plots 208
5.5.3 Empirical Example: Exchange Rate Data 215
5.6 Summary 218
5.7 Questions and Exercises 219
Appendix 5.A: Data – Log-Returns 220
6 Statistical Models and Dependence 222
6.1 Introduction 222
6.1.1 Extending a Simple Statistical Model 222
6.2 Non-Random Sample: A Preliminary View 224
6.2.1 Sequential Conditioning: Reducing the Dimensionality 225
6.2.2 Keeping an Eye on the Forest! 227
6.3 Dependence and Joint Distributions 228
6.3.1 Dependence Between Two Random Variables 228
6.4 Dependence and Moments 229
6.4.1 Joint Moments and Dependence 229
6.4.2 Conditional Moments and Dependence 232
6.5 Joint Distributions and Modeling Dependence 233
6.5.1 Dependence and the Normal Distribution 234
6.5.2 A Graphical Display: The Scatterplot 236
6.5.3 Dependence and the Elliptically Symmetric Family 240
6.5.4 Dependence and Skewed Distributions 245
6.5.5 Dependence in the Presence of Heterogeneity 257
6.6 Modeling Dependence and Copulas* 258
6.7 Dependence for Categorical Variables 262
6.7.1 Measurement Scales and Dependence 262
6.7.2 Dependence and Ordinal Variables 263
6.7.3 Dependence and Nominal Variables 266
6.8 Conditional Independence 268
6.8.1 The Multivariate Normal Distribution 269
6.8.2 The Multivariate Bernoulli Distribution 271
6.8.3 Dependence in Mixed (Discrete/Continuous) Variables 272
6.9 What Comes Next? 273
6.10 Questions and Exercises 274
7 Regression Models 277
7.1 Introduction 277
7.2 Conditioning and Regression 279
7.2.1 Reduction and Conditional Moment Functions 279
7.2.2 Regression and Skedastic Functions 281
7.2.3 Selecting an Appropriate Regression Model 288
7.3 Weak Exogeneity and Stochastic Conditioning 292
7.3.1 The Concept of Weak Exogeneity 292
7.3.2 Conditioning on a σ -Field 295
7.3.3 Stochastic Conditional Expectation and its Properties 297
7.4 A Statistical Interpretation of Regression 301
7.4.1 The Statistical Generating Mechanism 301
7.4.2 Statistical vs. Substantive Models, Once Again 304
7.5 Regression Models and Heterogeneity 308
7.6 Summary and Conclusions 310
7.7 Questions and Exercises 312
8 Introduction to Stochastic Processes 315
8.1 Introduction 315
8.1.1 Random Variables and Orderings 316
8.2 The Concept of a Stochastic Process 318
8.2.1 Defining a Stochastic Process 318
8.2.2 Classifying Stochastic Processes; What a Mess! 320
8.2.3 Characterizing a Stochastic Process 322
8.2.4 Partial Sums and Associated Stochastic Processes 324
8.2.5 Gaussian (Normal) Process: A First View 328
8.3 Dependence Restrictions (Assumptions) 329
8.3.1 Distribution-Based Concepts of Dependence 329
8.3.2 Moment-Based Concepts of Dependence 330
8.4 Heterogeneity Restrictions (Assumptions) 331
8.4.1 Distribution-Based Heterogeneity Assumptions 331
8.4.2 Moment-Based Heterogeneity Assumptions 333
8.5 Building Block Stochastic Processes 335
8.5.1 IID Stochastic Processes 335
8.5.2 White-Noise Process 336
8.6 Markov and Related Stochastic Processes 336
8.6.1 Markov Process 336
8.6.2 Random Walk Processes 338
8.6.3 Martingale Processes 340
8.6.4 Martingale Difference Process 342
8.7 Gaussian Processes 345
8.7.1 AR(p) Process: Probabilistic Reduction Perspective 345
8.7.2 A Wiener Process and a Unit Root [UR(1)] Model 349
8.7.3 Moving Average [MA(q)] Process 352
8.7.4 Autoregressive vs. Moving Average Processes 353
8.7.5 The Brownian Motion Process* 354
8.8 Counting Processes* 360
8.8.1 The Poisson Process 361
8.8.2 Duration (Hazard-Based) Models 363
8.9 Summary and Conclusions 364
8.10 Questions and Exercises 367
Appendix 8.A: Asymptotic Dependence and Heterogeneity
Assumptions* 369
8.A.1 Mixing Conditions 369
8.A.2 Ergodicity 370
9 Limit Theorems in Probability 373
9.1 Introduction 373
9.1.1 Why Do We Care About Limit Theorems? 374
9.1.2 Terminology and Taxonomy 375
9.1.3 Popular Misconceptions About Limit Theorems 376
9.2 Tracing the Roots of Limit Theorems 377
9.2.1 Bernoulli’s Law of Large Numbers: A First View 377
9.2.2 Early Steps Toward the Central Limit Theorem 378
9.2.3 The First SLLN 381
9.2.4 Probabilistic Convergence Modes: A First View 381
9.3 The Weak Law of Large Numbers 383
9.3.1 Bernoulli’s WLLN 383
9.3.2 Poisson’s WLLN 385
9.3.3 Chebyshev’s WLLN 386
9.3.4 Markov’s WLLN 387
9.3.5 Bernstein’s WLLN 388
9.3.6 Khinchin’s WLLN 389
9.4 The Strong Law of Large Numbers 390
9.4.1 Borel’s (1909) SLLN 390
9.4.2 Kolmogorov’s SLLN 391
9.4.3 SLLN for a Martingale 392
9.4.4 SLLN for a Stationary Process 394
9.4.5 The Law of Iterated Logarithm* 395
9.5 The Central Limit Theorem 396
9.5.1 De Moivre–Laplace CLT 397
9.5.2 Lyapunov’s CLT 399
9.5.3 Lindeberg–Feller’s CLT 399
9.5.4 Chebyshev’s CLT 401
9.5.5 Hajek–Sidak CLT 401
9.5.6 CLT for a Martingale 402
9.5.7 CLT for a Stationary Process 402
9.5.8 The Accuracy of the Normal Approximation 403
9.5.9 Stable and Other Limit Distributions* 404
9.6 Extending the Limit Theorems* 406
9.6.1 A Uniform SLLN* 409
9.7 Summary and Conclusions 409
9.8 Questions and Exercises 410
Appendix 9.A: Probabilistic Inequalities 412
9.A.1 Probability 412
9.A.2 Expectation 413
Appendix 9.B: Functional Central Limit Theorem 414
10 From Probability Theory to Statistical Inference 421
10.1 Introduction 421
10.2 Mathematical Probability: A Brief Summary 422
10.2.1 Kolmogorov’s Axiomatic Approach 422
10.2.2 Random Variables and Statistical Models 422
10.3 Frequentist Interpretation(s) of Probability 423
10.3.1 “Randomness” (Stochasticity) is a Feature of the Real
World 423
10.3.2 Model-Based Frequentist Interpretation of Probability 424
10.3.3 Von Mises’ Frequentist Interpretation of Probability 426
10.3.4 Criticisms Leveled Against the Frequentist Interpretation 427
10.3.5 Kolmogorov Complexity: An Algorithmic Perspective 430
10.3.6 The Propensity Interpretation of Probability 431
10.4 Degree of Belief Interpretation(s) of Probability 432
10.4.1 “Randomness” is in the Mind of the Beholder 432
10.4.2 Degrees of Subjective Belief 432
10.4.3 Degrees of “Objective Belief”: Logical Probability 435
10.4.4 Which Interpretation of Probability? 436
10.5 Frequentist vs. Bayesian Statistical Inference 436
10.5.1 The Frequentist Approach to Statistical Inference 436
10.5.2 The Bayesian Approach to Statistical Inference 440
10.5.3 Cautionary Notes on Misleading Bayesian Claims 443
10.6 An Introduction to Frequentist Inference 444
10.6.1 Fisher and Neglected Aspects of Frequentist Statistics 444
10.6.2 Basic Frequentist Concepts and Distinctions 446
10.6.3 Estimation: Point and Interval 447
10.6.4 Hypothesis Testing: A First View 449
10.6.5 Prediction (Forecasting) 450
10.6.6 Probability vs. Frequencies: The Empirical CDF 450
10.7 Non-Parametric Inference 453
10.7.1 Parametric vs. Non-Parametric Inference 453
10.7.2 Are Weaker Assumptions Preferable to Stronger Ones? 454
10.7.3 Induction vs. Deduction 457
10.7.4 Revisiting Generic Robustness Claims 458
10.7.5 Inference Based on Asymptotic Bounds 458
10.7.6 Whither Non-Parametric Modeling? 460
10.8 The Basic Bootstrap Method 461
10.8.1 Bootstrapping and Statistical Adequacy 462
10.9 Summary and Conclusions 464
10.10 Questions and Exercises 466
11 Estimation I: Properties of Estimators 469
11.1 Introduction 469
11.2 What is an Estimator? 469
11.3 Sampling Distributions of Estimators 472
11.4 Finite Sample Properties of Estimators 474
11.4.1 Unbiasedness 474
11.4.2 Efficiency: Relative vs. Full Efficiency 475
11.4.3 Sufficiency 480
11.4.4 Minimum MSE Estimators and Admissibility 485
11.5 Asymptotic Properties of Estimators 488
11.5.1 Consistency (Weak) 488
11.5.2 Consistency (Strong) 490
11.5.3 Asymptotic Normality 490
11.5.4 Asymptotic Efficiency 491
11.5.5 Properties of Estimators Beyond the First Two Moments 492
11.6 The Simple Normal Model: Estimation 493
11.7 Confidence Intervals (Interval Estimation) 498
11.7.1 Long-Run “Interpretation” of CIs 499
11.7.2 Constructing a Confidence Interval 499
11.7.3 Optimality of Confidence Intervals 501
11.8 Bayesian Estimation 502
11.8.1 Optimal Bayesian Rules 503
11.8.2 Bayesian Credible Intervals 504
11.9 Summary and Conclusions 505
11.10 Questions and Exercises 507
12 Estimation II: Methods of Estimation 510
12.1 Introduction 510
12.2 The Maximum Likelihood Method 511
12.2.1 The Likelihood Function 511
12.2.2 Maximum Likelihood Estimators 514
12.2.3 The Score Function 517
12.2.4 Two-Parameter Statistical Model 519
12.2.5 Properties of Maximum Likelihood Estimators 524
12.2.6 The Maximum Likelihood Method and its Critics 532
12.3 The Least-Squares Method 534
12.3.1 The Mathematical Principle of Least Squares 534
12.3.2 Least Squares as a Statistical Method 535
12.4 Moment Matching Principle 536
12.4.1 Sample Moments and their Properties 539
12.5 The Method of Moments 543
12.5.1 Karl Pearson’s Method of Moments 543
12.5.2 The Parametric Method of Moments 544
12.5.3 Properties of PMM Estimators 546
12.6 Summary and Conclusions 547
12.7 Questions and Exercises 549
Appendix 12.A: Karl Pearson’s Approach 551
13 Hypothesis Testing 553
13.1 Introduction 553
13.1.1 Difficulties in Mastering Statistical Testing 553
13.2 Statistical Testing Before R. A. Fisher 555
13.2.1 Francis Edgeworth’s Testing 555
13.2.2 Karl Pearson’s Testing 556
13.3 Fisher’s Significance Testing 558
13.3.1 A Closer Look at the p-value 561
13.3.2 R. A. Fisher and Experimental Design 563
13.3.3 Significance Testing: Empirical Examples 565
13.3.4 Summary of Fisher’s Significance Testing 568
13.4 Neyman–Pearson Testing 569
13.4.1 N-P Objective: Improving Fisher’s Significance Testing 569
13.4.2 Modifying Fisher’s Testing Framing: A First View 570
13.4.3 A Historical Excursion 574
13.4.4 The Archetypal N-P Testing Framing 575
13.4.5 Significance Level α vs. the p-value 578
13.4.6 Optimality of a Neyman–Pearson Test 580
13.4.7 Constructing Optimal Tests: The N-P Lemma 586
13.4.8 Extending the Neyman–Pearson Lemma 588
13.4.9 Constructing Optimal Tests: Likelihood Ratio 591
13.4.10 Bayesian Testing Using the Bayes Factor 594
13.5 Error-Statistical Framing of Statistical Testing 596
13.5.1 N-P Testing Driven by Substantively Relevant Values 596
13.5.2 Foundational Issues Pertaining to Statistical Testing 598
13.5.3 Post-Data Severity Evaluation: An Evidential Account 600
13.5.4 Revisiting Issues Bedeviling Frequentist Testing 603
13.5.5 The Replication Crises and Severity 609
13.6 Confidence Intervals and their Optimality 610
13.6.1 Mathematical Duality Between Testing and CIs 610
13.6.2 Uniformly Most Accurate CIs 612
13.6.3 Confidence Intervals vs. Hypothesis Testing 613
13.6.4 Observed Confidence Intervals and Severity 614
13.6.5 Fallacious Arguments for Using CIs 614
13.7 Summary and Conclusions 615
13.8 Questions and Exercises 617
Appendix 13.A: Testing Differences Between Means 620
13.A.1 Testing the Difference Between Two Means 620
13.A.2 What Happens when Var(X1t) ≠ Var(X2t)? 621
13.A.3 Bivariate Normal Model: Paired Sample Tests 622
13.A.4 Testing the Difference Between Two Proportions 623
13.A.5 One-Way Analysis of Variance 624
14 Linear Regression and Related Models 625
14.1 Introduction 625
14.1.1 What is a Statistical Model? 625
14.2 Normal, Linear Regression Model 626
14.2.1 Specification 626
14.2.2 Estimation 628
14.2.3 Fitted Values and Residuals 633
14.2.4 Goodness-of-Fit Measures 635
14.2.5 Confidence Intervals and Hypothesis Testing 635
14.2.6 Normality and the LR Model 642
14.2.7 Testing a Substantive Model Against the Data 643
14.3 Linear Regression and Least Squares 648
14.3.1 Mathematical Approximation and Statistical
Curve-Fitting 648
14.3.2 Gauss–Markov Theorem 651
14.3.3 Asymptotic Properties of OLS Estimators 653
14.4 Regression-Like Statistical Models 655
14.4.1 Gauss Linear Model 655
14.4.2 The Logit and Probit Models 655
14.4.3 The Poisson Regression-Like Model 657
14.4.4 Generalized Linear Models 657
14.4.5 The Gamma Regression-Like Model 658
14.5 Multiple Linear Regression Model 658
14.5.1 Estimation 660
14.5.2 Linear Regression: Matrix Formulation 661
14.5.3 Fitted Values and Residuals 662
14.5.4 OLS Estimators and their Sampling Distributions 665
14.6 The LR Model: Numerical Issues and Problems 666
14.6.1 The Problem of Near-Collinearity 666
14.6.2 The Hat Matrix and Influential Observations 673
14.6.3 Individual Observation Influence Measures 674
14.7 Conclusions 675
14.8 Questions and Exercises 677
Appendix 14.A: Generalized Linear Models 680
14.A.1 Exponential Family of Distributions 680
14.A.2 Common Features of Generalized Linear Models 681
14.A.3 MLE and the Exponential Family 682
Appendix 14.B: Data 683
15 Misspecification (M-S) Testing 685
15.1 Introduction 685
15.2 Misspecification and Inference: A First View 688
15.2.1 Actual vs. Nominal Error Probabilities 688
15.2.2 Reluctance to Test the Validity of Model Assumptions 691
15.3 Non-Parametric (Omnibus) M-S Tests 694
15.3.1 The Runs M-S Test for the IID Assumptions [2]–[4] 694
15.3.2 Kolmogorov’s M-S Test for Normality ([1]) 695
15.4 Parametric (Directional) M-S Testing 697
15.4.1 A Parametric M-S Test for Independence ([4]) 697
15.4.2 Testing Independence and Mean Constancy ([2] and [4]) 698
15.4.3 Testing Independence and Variance Constancy ([2]
and [4]) 700
15.4.4 The Skewness–Kurtosis Test of Normality 700
15.4.5 Simple Normal Model: A Summary of M-S Testing 701
15.5 Misspecification Testing: A Formalization 703
15.5.1 Placing M-S Testing in a Proper Context 703
15.5.2 Securing the Effectiveness/Reliability of M-S Testing 704
15.5.3 M-S Testing and the Linear Regression Model 705
15.5.4 The Multiple Testing (Comparisons) Issue 706
15.5.5 Testing for t-Invariance of the Parameters 707
15.5.6 Where do Auxiliary Regressions Come From? 707
15.5.7 M-S Testing for Logit/Probit Models 710
15.5.8 Revisiting Yule’s “Nonsense Correlations” 710
15.5.9 Respecification 713
15.6 An Illustration of Empirical Modeling 716
15.6.1 The Traditional Curve-Fitting Perspective 716
15.6.2 Traditional ad hoc M-S Testing and Respecification 718
15.6.3 The Probabilistic Reduction Approach 721
15.7 Summary and Conclusions 729
15.8 Questions and Exercises 731
Appendix 15.A: Data 734
References 736
Index 752
Preface to the Second Edition
The original book, published 20 years ago, has been thoroughly revised with two objectives
in mind. First, to make the discussion more compact and coherent by avoiding repetition and
many digressions. Second, to improve the methodological coherence of the proposed
empirical modeling framework by including material pertaining to foundational issues that
has been published by the author over the last 20 years or so in journals on econometrics,
statistics, and philosophy of science. In particular, this revised edition brings out more clearly
several crucial distinctions that elucidate empirical modeling, including (a) the statistical
vs. the substantive information/model, (b) the modeling vs. the inference facet of statistical
analysis, (c) testing within and testing outside the boundary of a statistical model, and (d)
pre-data vs. post-data error probabilities. These distinctions shed light on several
foundational issues and suggest solutions. In addition, the comprehensiveness of the book has been
improved by adding Chapter 14 on the linear regression and related models.
The current debates on the “replication crises” render the methodological framework
articulated in this book especially relevant for today’s practitioner. A closer look at the
debates (Mayo, 2018) reveals that the non-replicability of empirical evidence problem is,
first and foremost, a problem of untrustworthy evidence routinely published in prestigious
journals. The current focus of that literature on the abuse of significance testing is rather
misplaced, because it is only a part of a much broader problem relating to the mechanical
application of statistical methods without a real understanding of their assumptions,
limitations, proper implementation, and interpretation of their results. The abuse and
misinterpretation of the p-value is just symptomatic of the same uninformed implementation
that contributes majorly to the problem of untrustworthy evidence. Indeed, the same
uninformed implementation often ensures that untrustworthy evidence is routinely replicated,
when the same mistakes are repeated by equally uninformed practitioners! In contrast to the
current conventional wisdom, it is argued that a major contributor to the untrustworthy
evidence problem is statistical misspecification: invalid probabilistic assumptions imposed on
one’s data, another symptom of the same uninformed implementation. The primary
objective of this book is to provide the necessary probabilistic foundation and the overarching
modeling framework for an informed and thoughtful application of statistical methods, as
well as the proper interpretation of their inferential results. The emphasis is placed less on
the mechanics of the application of statistical methods, and more on understanding their
assumptions, limitations, and proper interpretation.
Key Features of the Book
● It offers a seamless integration of probability theory and statistical inference with a view
to elucidating the interplay between deduction and induction in “learning from data”
about observable phenomena of interest using statistical procedures.
● It develops frequentist modeling and inference from first principles by emphasizing the
notion of a statistical model and its adequacy (the validity of its probabilistic
assumptions vis-à-vis the particular data) as the cornerstone for reliable inductive inference and
trustworthy evidence.
● It presents frequentist inference as well-grounded procedures whose optimality is
assessed by their capacity to achieve genuine “learning from data.”
● It focuses primarily on the skills and the technical knowledge one needs to be able to
begin with substantive questions of interest, select the relevant data carefully, and
proceed to establish trustworthy evidence for or against hypotheses or claims relating to the
questions of interest. These skills include understanding the statistical information
conveyed by data plots, selecting appropriate statistical models, as well as validating them
using misspecification testing before any inferences are drawn.
● It articulates reasoned responses to several charges leveled against several aspects of
frequentist inference by addressing the underlying foundational issues, including the
use and abuse of p-values and confidence intervals, Neyman–Pearson vs. Fisher
testing, and inference results vs. evidence that have bedeviled frequentist inference since
the 1930s. The book discusses several such foundational issues/problems and proposes
ways to address them using an error statistical perspective grounded in the concept of
severity. Methodological issues discussed in this book include rebuttals to widely used,
ill-thought-out arguments for ignoring statistical misspecification, as well as principled
responses to certain Bayesian criticisms of the frequentist approach.
● Its methodological perspective differs from the traditional textbook perspective by
bringing out the perils of curve-fitting and focusing on the key question: How can empirical
modeling lead to “learning from data” about phenomena of interest by giving rise to
trustworthy evidence?
NOTE: All sections marked with an asterisk (∗) can be skipped at first reading without any
serious interruption in the flow of the discussion.
Acknowledgments
More than any other person, Deborah G. Mayo, my colleague and collaborator on many
foundational issues in statistical inference, has helped to shape my views on several
methodological issues addressed in this book; for that and the constant encouragement, I’m most
grateful to her. I’m also thankful to Clark Glymour, the other philosopher of science with
whom I had numerous elucidating and creative discussions on many philosophical issues
discussed in the book. Thanks are also due to Sir David Cox for many discussions that
helped me appreciate the different perspectives on frequentist inference. Special thanks are
also due to my longtime collaborator, Anya McGuirk, who contributed majorly in puzzling
out several thorny issues discussed in this book. I owe a special thanks to Julio Lopez for
his insightful comments, as well as his unwavering faith in the coherence and value of the
proposed approach to empirical modeling. I’m also thankful to Jesse Bledsoe for helpful
comments on chapter 13 and Mariusz Kamienski for invaluable help on the front cover
design.
I owe special thanks to several of my former and current students over the last 20 years,
who helped to improve the discussion in this book by commenting on earlier drafts and
finding mistakes and typos. They include Elena Andreou, Andros Kourtellos, Carlos Elias,
Maria Heracleous, Jason Bergtold, Ebere Akobundu, Andreas Koutris, Alfredo Romero,
Niraj Pouydal, Michael Michaelides, Karo Solat, and Mohammad Banasaz.
Symbols
N – set of natural numbers N:={1, 2, ..., n, ...}
R – the set of real numbers; the real line (−∞, ∞)
Rn := R × R × · · · × R (n times)
R+ – the set of positive real numbers; the half real line (0, ∞)
f (x; θ ) – density function of X with parameters θ
F(x; θ ) – cumulative distribution function of X with parameters θ
N(μ, σ²) – Normal distribution with mean μ and variance σ²
E – Random Experiment (RE)
S – outcomes set (sample space)
ℑ – event space (a σ-field)
P(.) – probability set function
σ (X) – minimal sigma-field generated by X
Acronyms
AR(p) – Autoregressive model with p lags
CAN – Consistent, Asymptotically Normal
cdf – cumulative distribution function
CLT – Central Limit Theorem
ecdf – empirical cumulative distribution function
GM – Generating Mechanism
IID – Independent and Identically Distributed
LS – Least-Squares
ML – Maximum Likelihood
M-S – Mis-Specification
N-P – Neyman-Pearson
PMM – Parametric Method of Moments
SLLN – Strong Law of Large Numbers
WLLN – Weak Law of Large Numbers
UMP – Uniformly Most Powerful
1 An Introduction to Empirical Modeling
1.1 Introduction
Empirical modeling, broadly speaking, refers to the process, methods, and strategies
grounded on statistical modeling and inference whose primary aim is to give rise to
“learning from data” about stochastic observable phenomena, using statistical models. Real-world
phenomena of interest are said to be “stochastic,” and thus amenable to statistical modeling,
when the data they give rise to exhibit chance regularity patterns, irrespective of whether
they arise from passive observation or active experimentation. In this sense, empirical
modeling has three crucial features:
(a) it is based on observed data that exhibit chance regularities;
(b) its cornerstone is the concept of a statistical model that describes a probabilistic
generating mechanism that could have given rise to the data in question;
(c) it provides the framework for combining the statistical and substantive
information with a view to elucidating (understanding, predicting, explaining) phenomena of
interest.
Statistical vs. substantive information. Empirical modeling across different disciplines
involves an intricate blending of substantive subject matter and statistical information. The
substantive information stems from a theory or theories pertaining to the phenomenon of
interest that could range from simple conjectures to intricate substantive (structural)
models. Such information has an important and multifaceted role to play by demarcating the
crucial aspects of the phenomenon of interest (suggesting the relevant variables and data),
as well as enhancing the learning from data when it meliorates the statistical information
without belying it. In contrast, statistical information stems from the chance regularities in
data. Scientific knowledge often begins with substantive conjectures based on subject matter
information, but it becomes knowledge when its veracity is firmly grounded in real-world
data. In this sense, success in “learning from data” stems primarily from a harmonious
blending of these two sources of information into an empirical model that is both statistically and
substantively “adequate”; see Sections 1.5 and 1.6.
Empirical modeling as curve-fitting. The current traditional perspective on empirical
modeling largely ignores the above distinctions by viewing the statistical problem as “quantifying
theoretical relationships presumed true.” From this perspective, empirical modeling is
viewed as a curve-fitting problem, guided primarily by goodness-of-fit. The substantive
model is often imposed on the data in an attempt to quantify its unknown parameters. This
treats the substantive information as established knowledge, and not as tentative
conjectures to be tested against data. The end result of curve-fitting is often an estimated model
that is misspecified, both statistically (invalid probabilistic assumptions) and substantively;
it doesn’t elucidate sufficiently the phenomenon of interest. This raises a thorny problem
in philosophy of science known as Duhem’s conundrum (Mayo, 1996), because there is
no principled way to distinguish between the two types of misspecification and apportion
blame. It is argued that the best way to address this impasse is (i) to disentangle the
statistical from the substantive model by unveiling the probabilistic assumptions (implicitly
or explicitly) imposed on the data (the statistical model) and (ii) to separate the modeling
from the inference facet of empirical modeling. The modeling facet includes specifying and
selecting a statistical model, as well as appraising its adequacy (the validity of its
probabilistic assumptions) using misspecification testing. The inference facet uses a statistically
adequate model to pose questions of substantive interest to the data. Crudely put, conflating
the modeling with the inference facet is analogous to mistaking the process of constructing a
boat to preset specifications with sailing it in a competitive race; imagine trying to construct
the boat while sailing it in a competitive race.
Early cautionary note. It is likely that some scholars in empirical modeling will mock and
criticize the introduction of new terms and distinctions in this book as “mounds of
gratuitous jargon,” symptomatic of an ostentatious display of pedantry. As a pre-emptive response
to such critics, allow me to quote R. A. Fisher’s 1931 reply to Arne Fisher’s [American
mathematician/statistician] complaining about his
“introduction in statistical method of some outlandish and barbarous technical terms. They stand
out like quills upon the porcupine, ready to impale the sceptical critic. Where, for instance, did
you get that atrocity, a statistic?”
His serene response was:
I use special words for the best way of expressing special meanings. Thiele and Pearson were
quite content to use the same words for what they were estimating and for their estimates of it.
Hence the chaos in which they left the problem of estimation. Those of us who wish to distinguish
the two ideas prefer to use different words, hence ‘parameter’ and ‘statistic’. No one who does not
feel this need is under any obligation to use them. Also, to Hell with pedantry. (Bennett, 1990,
pp. 311–313) [emphasis added]
A bird’s-eye view of the chapter. The rest of this chapter elaborates on the crucial features
of empirical modeling (a)–(c). In Section 1.2 we discuss the meaning of stochastic
observable phenomena and why such phenomena are amenable to empirical modeling. Section 1.3
focuses on the relationship between data from stochastic phenomena and statistical models.
Section 1.4 discusses several important issues relating to observed data, including their
different measurement scales, nature, and accuracy. In Section 1.5 we discuss the important
notion of statistical adequacy: whether the postulated statistical model “accounts fully for”
the statistical systematic information in the data. Section 1.6 discusses briefly the connection
between a statistical model and the substantive information of interest.
1.2 Stochastic Phenomena: A Preliminary View
This section provides an intuitive explanation for the notion of a stochastic phenomenon as
it relates to the concept of a statistical model, discussed in the next section.
1.2.1 Chance Regularity Patterns
The chance regularities denote patterns that are usually revealed using a variety of
graphical techniques and careful preliminary data analysis. The essence of chance regularity, as
suggested by the term itself, comes in the form of two entwined features:
chance: an inherent uncertainty relating to the occurrence of particular outcomes;
regularity: discernible regularities associated with an aggregate of many outcomes.
TERMINOLOGY: The term “chance regularity” is used in order to avoid possible confusion
with the more commonly used term “randomness.”
At first sight these two attributes might appear to be contradictory, since “chance” is often
understood as the absence of order and “regularity” denotes the presence of order. However,
there is no contradiction because the “disorder” exists at the level of individual outcomes
and the order at the aggregate level. The two attributes should be viewed as inseparable for
the notion of chance regularity to make sense.
Example 1.1 To get some idea about “chance regularity” patterns, consider the data given
in Table 1.1.
Table 1.1 Observed data
3 10 11 5 6 7 10 8 5 11 2 9 9 6 8 4 7 6 5 12
7 8 5 4 6 11 7 10 5 8 7 5 9 8 10 2 7 3 8 10
11 8 9 5 7 3 4 9 10 4 7 4 6 9 7 6 12 8 11 9
10 3 6 9 7 5 8 6 2 9 6 4 7 8 10 5 8 7 9 6
5 7 7 6 12 9 10 4 8 6 5 4 7 8 6 7 11 7 8 3
A glance at Table 1.1 suggests that the observed data constitute integers between 2 and 12,
but no real patterns are apparent, at least at first sight. To bring out any chance regularity
patterns we use a graph as shown in Figure 1.1, t-plot: {(t, xt ), t = 1, 2, . . . , n}.
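The book itself presents no computer code, but a minimal sketch of how such a t-plot can be produced may be helpful. The snippet below is an illustration of our own, not part of the original text: it enters the Table 1.1 values and plots the pairs {(t, xt), t = 1, 2, . . . , n} in Python; the choice of the matplotlib library and all variable names are assumptions, not the book's.

    # Illustrative sketch: t-plot of the Table 1.1 data (cf. Figure 1.1).
    import matplotlib.pyplot as plt

    table_1_1 = """
    3 10 11 5 6 7 10 8 5 11 2 9 9 6 8 4 7 6 5 12
    7 8 5 4 6 11 7 10 5 8 7 5 9 8 10 2 7 3 8 10
    11 8 9 5 7 3 4 9 10 4 7 4 6 9 7 6 12 8 11 9
    10 3 6 9 7 5 8 6 2 9 6 4 7 8 10 5 8 7 9 6
    5 7 7 6 12 9 10 4 8 6 5 4 7 8 6 7 11 7 8 3
    """
    x = [int(v) for v in table_1_1.split()]   # the 100 observations of Table 1.1
    t = range(1, len(x) + 1)                  # index t = 1, 2, ..., n

    plt.plot(t, x, marker="o", markersize=3, linewidth=1)
    plt.xlabel("Index")
    plt.ylabel("x")
    plt.title("t-Plot of the Table 1.1 data (cf. Figure 1.1)")
    plt.show()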
The first distinction to be drawn is that between chance regularity patterns and
deterministic regularities, which are easy to detect.
Deterministic regularity. When a t-plot exhibits a clear pattern which would enable one
to predict (guess) the value of the next observation exactly, the data are said to exhibit
deterministic regularity. The easiest way to think about deterministic regularity is to visualize
the graphs of mathematical functions. If a t-plot of data can be depicted by a mathematical
function, the numbers exhibit deterministic regularity; see Figure 1.2.
Fig. 1.1 t-Plot of a sequence of 100 observations (y-axis: x; x-axis: Index, t = 1, 2, . . . , 100)
Fig. 1.2 Graph of x = 1.5 cos((π/3)t + (π/3)) (y-axis: x; x-axis: Index, t = 1, 2, . . . , 100)
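As a brief illustration (again a sketch of our own, not taken from the book), the series in Figure 1.2 can be generated exactly from its formula, and every value is perfectly predictable: the cosine argument advances by π/3 per step, so the pattern repeats every six observations.

    # Illustrative sketch: the deterministic series of Figure 1.2 is perfectly predictable.
    import math

    def x_det(t: int) -> float:
        # x_t = 1.5 cos((pi/3) t + pi/3)
        return 1.5 * math.cos((math.pi / 3) * t + math.pi / 3)

    series = [x_det(t) for t in range(1, 101)]

    # the argument advances by pi/3 per step, so the pattern repeats every 6 steps
    assert all(abs(x_det(t + 6) - x_det(t)) < 1e-12 for t in range(1, 95))
    print([round(v, 3) for v in series[:12]])   # first two full cycles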
In contrast to deterministic regularities, to detect chance patterns one needs to perform a
number of thought experiments.
Thought experiment 1–Distribution regularity. Associate each observation with identical
squares and rotate Figure 1.1 anti-clockwise by 90◦ , letting the squares fall vertically to form
a pile on the x-axis. The pile represents the well-known histogram (see Figure 1.3).
The histogram exhibits a clear triangular shape, reflecting a form of regularity often
associated with stable (unchanging) relative frequencies (RF) expressed as percentages (%).
Fig. 1.3 Histogram of the data in Figure 1.1 (y-axis: relative frequency (%); x-axis: x = 2, 3, . . . , 12)
Each bar of the histogram represents the frequency of each of the integers 2–12.
For example, since the value 3 occurs five times in this data set, its relative frequency is
RF(3)=5/100 = .05. The relative frequency of the value 7 is RF(7)=17/100 = .17, which is
the highest among the values 2–12. For reasons that will become apparent shortly, we name
this discernible distribution regularity.
[1] Distribution: After a large enough number of trials, the relative frequency of the
outcomes forms a seemingly stable distribution shape.
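A small sketch (ours, not the book's) makes the relative-frequency calculation concrete: it re-enters the Table 1.1 values so that the snippet stands alone, recovers the RF(3) = 0.05 and RF(7) = 0.17 quoted above, and prints a crude text histogram corresponding to Figure 1.3.

    # Illustrative sketch: relative frequencies of the Table 1.1 values (cf. Figure 1.3).
    from collections import Counter

    table_1_1 = """
    3 10 11 5 6 7 10 8 5 11 2 9 9 6 8 4 7 6 5 12
    7 8 5 4 6 11 7 10 5 8 7 5 9 8 10 2 7 3 8 10
    11 8 9 5 7 3 4 9 10 4 7 4 6 9 7 6 12 8 11 9
    10 3 6 9 7 5 8 6 2 9 6 4 7 8 10 5 8 7 9 6
    5 7 7 6 12 9 10 4 8 6 5 4 7 8 6 7 11 7 8 3
    """
    x = [int(v) for v in table_1_1.split()]
    n = len(x)                                  # n = 100
    counts = Counter(x)
    rf = {value: counts[value] / n for value in range(2, 13)}   # relative frequencies

    assert abs(rf[3] - 0.05) < 1e-9 and abs(rf[7] - 0.17) < 1e-9   # RF(3), RF(7) as in the text
    for value in range(2, 13):
        print(f"{value:2d}: RF = {rf[value]:.2f}  {'#' * counts[value]}")  # crude text histogram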
Thought experiment 2. In Figure 1.1, one would hide the observations beyond a certain
value of the index, say t = 40, and try to guess the next outcome on the basis of the
observations up to t = 40. Repeat this along the x-axis for different index values and if it turns out
that it is more or less impossible to use the previous observations to narrow down the
potential outcomes, conclude that there is no dependence pattern that would enable the modeler
to guess the next observation (within narrow bounds) with any certainty. In this experiment
one needs to exclude the extreme values of 2 and 12, because following these values one
is almost certain to get a greater or a smaller value, respectively. This type of predictability
is related to the distribution regularity mentioned above. For reference purposes we name
the chance regularity associated with the unpredictability of the next observation, given the
previous observations, as follows.
[2] Independence: In a sequence of trials, the outcome of any one trial does not
influence and is not influenced by the outcome of any other.
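One crude numerical counterpart to this thought experiment (an illustration of our own, not a procedure from the book) is the lag-1 sample autocorrelation: if knowing the previous observation does not help to narrow down the next one, this quantity should be close to zero. For self-containment the sketch simulates an IID series on the same support (sums of two fair dice); with the actual Table 1.1 values the same code applies unchanged.

    # Illustrative sketch: lag-1 sample autocorrelation as a crude check of [2] Independence.
    import random

    random.seed(1)
    n = 100
    x = [random.randint(1, 6) + random.randint(1, 6) for _ in range(n)]   # simulated IID, support 2-12

    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    lag1_cov = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n)) / n
    r1 = lag1_cov / var

    print(f"lag-1 sample autocorrelation: {r1:+.3f}")   # close to zero for an IID series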
Thought experiment 3. In Figure 1.1 take a wide enough frame (to cover the spread of the
fluctuations) that is also long enough (roughly less than half the length of the horizontal axis)
and let it slide from left to right along the horizontal axis, looking at the picture inside the
frame as it slides along. In cases where the picture does not change significantly, the data
exhibit the chance regularity we call homogeneity; otherwise, heterogeneity is present; see
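A numerical analogue of this sliding-frame exercise (again an illustrative sketch of our own, not the book's) computes the mean and standard deviation inside a moving window: for a homogeneous series these stay roughly constant as the frame slides from left to right. The simulated series and the window width below are assumed choices for illustration only.

    # Illustrative sketch: sliding-window mean and standard deviation as a crude homogeneity check.
    import random

    random.seed(2)
    n, width = 100, 40                          # a frame roughly half the axis, as in the text
    x = [random.randint(1, 6) + random.randint(1, 6) for _ in range(n)]   # simulated IID, support 2-12

    def window_stats(values):
        m = sum(values) / len(values)
        s = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
        return m, s

    for start in range(0, n - width + 1, 10):   # slide the frame left to right
        m, s = window_stats(x[start:start + width])
        print(f"t = {start + 1:3d}-{start + width:3d}: mean = {m:.2f}, std = {s:.2f}")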
Another Random Scribd Document
with Unrelated Content
Töin tuskin pojat malttoivat tervehtiä.
Työ keskeytyi patruunin puhutellessa isäntärenkiä, ja paikalla
livahti Lassilan Aukusti, joka vierimäisellä saralla etumaisinna
leikkasi, kappaleen edelle.
Isäntärenki sen kyllä älysi. Vaikka hän taas alottaessaan näytti
entiseen tapaansa levollisin ottein niittelevän, huomasi Ville
viikatteen hujauttavan jalkaa pitemmälti. Ilmonen ja tallirenki, jotka
tulivat perästä, lisäsivät vauhtia, mutta eivät toisenkaan saran miehet
mielellään heitä edelle päästäneet. Ei vain niillä kahdella saralla,
vaan kohta koko pellolla kilpailtiin. Kukaan ei enää malttanut puhella,
ja kaljakorvot, joita ei enää kukaan siirtänyt, jäivät niin kauvas, että
ne sinne unohtuivat. Leikkaamaton ruis ei enää ollutkaan suorana
viivana järven rannasta metsän laitaan. Lahermia ja niemekkeitä oli
sinne ilmestynyt, ja etäisimmässä käressä seurasi isäntärengin
kookas vartalo hitain, tasaisin liikkein Krymmin marssia.
Pojat ehättivät ehättämistään, ja pari kertaa täytyi Villen muistuttaa
Ollia, ettei hän viskelisi lyhteitä. Innoissaan he eivät muistaneetkaan
väsymystä eivätkä janoa. Ja kun vain voi olla ajattelematta niin
hyödyttömiä kuin väsymystä ja janoa, niin jaksaa kestää
uskomattomasti.
Hullummasti kävi vanhalle Tanelille. Hän juosta hynttyytti lyhde
molemmissa kainaloissa ja kaksi, kummassakin kädessä, mutta
sittenkään hän ei ennättänyt tasassa kuhilaastaa. Mutta ei hätää niin
kauvan, kun hän pysyi toisten kuhilaastajain edellä.
— "Olipa vahinko", sanoi Ville isäntärengille, kun Lentolan
päivälliskello soi, "nythän se sujui niin rivakasti."
— "Tuolla metsänlaidassa kai me syömme", arveli Olli. "Sinne
näkyvät hirmuisesti ruokaa kantavan."
Ei yksikään ajatellut, että Ville ja Olli ja Rainar menisivät
päivälliselle kotiin. Mäenrinteessä ei ainoastaan ollut sijaa heille
niinkuin kaikille muillekkin, mutta sen lisäksi oli taloudenhoitaja, joka
kaikki järjesti, hakenut heille kauneimman paikan, ja sinne oli ihan
heitä varten viety iso viilipytty.
Siinäkös hilskettä ja puheen pärinää, kunnes kaikilla oli paikkansa!
Mutta sitten kaikki vaikenivat. Ei kuulunut muuta kuin puulusikkain
hiljainen kapse viilipytyn laitoja vasten. Jos joku silloin olisi kulkenut
pitkin metsäpolkua raja-aidan takana eikä olisi sattunut pellolle päin
silmäämään, olisi hänen ollut mahdotonta arvata toista sataa henkeä
istuvan siinä ihan lähellä — niin hiljaista oli.
Rainar oli ihmeissään. "Jospa tietäisitte, millaista melskettä on
päivällisillä Helsingissä" hän sanoi.
— "Täällä syödään aina hiljaa", selitti Ville. "Eiköhän se tulle siitä,
että ennenvanhaan pidettiin ruoka pyhänä. Sen tähden syödäänkin
avopäin. Katsohan, ei kellään ole päähinettä."
Rainar huomasi Villen ja Ollinkin olevan paljain päin, ja hänkin
silpasi paikalla hatun päästään.
— "Sanotaan tukan lähtevän hatun mukana, jos syö hattu
päässä", tiesi
Olli.
Peipposet, jotka olivat kokoutuneet suureen koivuun pellon
likettyelle, ja jotka ensin olivat sekä hämmästyneet että säikähtäneet,
alkoivat heti uudelleen harjottaa jäähyväislaulujaan. Luultavasti ne
syksyllä, ennenkuin jättivät kauniin isänmaansa, aikoivat jonakin
iltana laulaa pääkaupungissa joko Kaisaniemessä tai
Kaivopuistossa.
Pojat olivat mielestään harvoin syöneet niin hyvää viilipyttyä. Mutta
pienempikin olisi riittänyt. Ei edes Olli luullut heidän jaksavan sitä
lopettaa, niin syvä se oli, ja sääli oli jättää tähteeksi. Silloin hän ei
tiennyt, että oli vielä savustettua silavaa ja perunoita. Sitäkin
vähemmän hän osasi aavistaa muutakin tulevan. Hän luuli
taloudenhoitajan tarjoovan kuumempia perunoita, kun hän toisen
kerran kantoi lautasliinalla peitettyä vatia.
— "Jättäkää vain piimä ja syökää viili", hän kehotti huomatessaan,
miten syvään Olli pisti lusikkansa. Sitten hän levitti lautasliinan
nurmikolle ja pani siihen paksun, kauniin pannukakun.
— "Olenpa jotensakin kylläinen", tuumaili Olli, laskeutuen
päivällisen jälkeen pitkäkseen nurmikolle. "Miten kyennee
kyyristymään, en ymmärrä."
— "No, kyllä se menee, kun alkuun pääsee", arveli Ville
haukotellen.
Hän ja Rainar heittäytyivät myöskin pitkälle pituuttaan.
Hetkisen perästä taloudenhoitaja palasi hakemaan viilipyttyä ja
vatia. Hän otti suuren esiliinansa edestään ja levitti sen parille
puunoksalle, ettei aurinko paistaisi suoraan Ollin silmiin. Esiliinaa
asettaessa sattui havunneulanen putoamaan ihan Ollin poskelle.
Mutta hän nukkui jo niin sikeästi, ettei huomannut taloudenhoitajan
hellävaroen ottavan sitä. — "Se päivillistuntipa hurahti", sanoi Ville
hypäten pystyyn kellon soidessa. "Juuri ikään nukahdin."
Olli vain kääntäytyi kylelleen ja korsasi. Hän ei silmiäänkään
raottanut, kun Ville häntä jalasta puisti. "Makaa sitten ja häpeä.
Toiset alottavat. Krymmi jo viuluaan virittää."
— "Ei kiireellä kauvas päästä", virkkoi Olli kohoten istualleen.
"Kyllähän se tiedetään, miten kauvan Krymmi virittää."
He ennättivät paraiksi pellolle, kun Krymmi taas käyrällä viulua
naputti ja katseli ympärilleen. Viulun juoksutusten tahdissa viikatteet
taas heiluivat ja kullankeltaisena aallokkona niitetty vilja odotti
kokoojaansa ja sitojaansa.
Eipä totta tosiaan ollut aikaa tuumiskella, voiko kumartua. Ei siinä
joutanut siekailemaan. Ennenkuin pojat huomasivatkaan, oli kilpailu
yhtä vilkas kuin ennenkin päivällistä. Mutta samassa oli myöskin
sirkiselvä, ettei toisilla saroilla heistä edelle päästy.
Kahvia juodessa selvisi toinenkin seikka.
— "Me panemme koko pellon puhtaaksi tänään", ihasteli
isäntärenki.
— "Ja sitä ei ole ennen tällä palstalla tapahtunut", jatkoi vanha
Taneli ylpeänä.
— "Me olemme onnen myyriä", sanoi Ville Ollille ja Rainarille.
"Kotona leikkuu loppui ennemmin kuin ennen ja täällä niinikään. Kun
vain nyt kaikki uuten uhkain ahertaisivat viimeiset."
— "Ei hoppu hyväksi", sanoi Olli ja huokasi nähdessään, miten
pitkälti vielä oli Lentolan puutarhanaitaan, johon pelto loppui.
— "Hyvin tehty, paljo voitu", sanoi Lentolan patruuni, kun hän
illemmalla pellolle tuli. Eikä hän liiaksi kehassut. Vilja oli tarkoin
korjattu, kuhilaat kunnolliset ja jo ennenkuin aurinko painui
Vanajaveden toisella puolella olevan metsän taa, asetettiin viimeinen
viikate puutarhanaidalle, ja kaikki auttoivat sitoessa viimeisiä lyhteitä
ja pystyttäessä viimeisiä kuhilaita. Isäntäänkin työinto tarttui. Hän
kantoi lyhteitä ja kuhilaita. Mutta kun hän taittoi hatun muutamalle
viimeiselle kuhilaalle, kävi niin surkeasti, että se vierähti maahan,
kussa ikään hän oli selkänsä kääntänyt.
Vanha Taneli nosti lyhteen ja taittoi sen uudestaan.
— "Kaikki poikaset tässä yrkäilevät, kun eivät kuitenkaan pysty",
hän sanoi niin kovasti, että Lentolan tallirenki sen kuulisi. Taneli luuli
näet hänen kyhänneen hattua kuhilaalle. Kaikki purskahtivat
nauramaan, ja lentolainen, joka heti yskän ymmärsi, nauroi itsekkin,
niin että vesi silmistä tippui.
— "Vanha Taneli, poikasuus on vierähtänyt", hän sanoi. "Mutta
jollet sinä itse tällä samalla pellolla neljäkymmentä vuotta sitten olisi
opettanut minua kuhilaalle hattua panemaan, niin luulisinpa, etten
sitä osaa. Sinä tiedät, että minä osaan. Taneli, sinä tiedät sen", hän
lisäsi, tarttuen vanhan ukko-Tanelin käteen.
Taneli nolostui niin, ettei saanut suustaan luotuista sanaa, ja
kyynelkarpalo vierähti hikihelmien sekaan hänen päivettyneelle
poskelleen.
— "Reippain marssisi nyt, Krymmi", jatkoi lentolainen, "ja sitten
taloon joka kynsi!"
Pitkä oli jono. Etumaisinna aivan Kryrumin jälkeen kulkivat
Lentolan patruuni ja vanha Taneli käsi kädessä. Eikä hän hellittänyt,
ennenkuin Taneli istui, pirtin pitkän pöydän yläpäässä eikä tarvinnut
kuin kättään ojentaa valitakseen, minkä halusi tarjottimelle kasatuista
jättiläisvoileivistä.
Pöytiä ja rahia jo pirtistä kannettiin, että tanssimaan sovitaan. Ja
Olli oli juuri lopettanut kolmannen voileipänsä, kun käsi laskeutui
hänen olalleen. Karoliina oli tullut.
— "Kas vain! Kotipuolen ihmisiähän täällä näkee", sanoi Karoliina.
"Tulin karjakkoa tervehtimään, veneellä, ja satuin paraiksi taikoihin.
Mennäänkö kotiin yhtä matkaa?"
— "Kyllä Karoliina tunnetaan", virkkoi Ville hänen poistuttuaan.
"Minä olen ihan varma, että hän on tullut meidän tautta. Ja niin
tottumaton soutaja. Ajatelkaapas, nyt vain soutaa hurautamme tästä
emmekä tarvitse maanteitse kiertää. Emmekö sentään anna hänen
tanssia, minkä ikinä jaksaa?"
Kun Krymmi alkoi soittaa, olivat Karoliina ja Ville ensimäisiä
lattialla. Olivathan he vähän kuin paripuolet, sillä Karoliina oli melkein
kaksi vertaa Villeä suurempi, ehkä enemmänkin, jos painosta
päätetään. Mutta ei siitä kukaan välittänyt.
Kun he olivat lopettanut, tuli Olli paikalla kumartamaan. Karoliinalla
oli jo ihan kuuma, aniharvoin hän tanssi, mutta kun Olli pyysi, ei hän
mitenkään voinut kieltäytyä.
Kukapa uskoisi, että Ville hetkinen sitten oli houkutellut Ollin
jäämään. Olli ei totta tosiaan ollut väsynyt. Vasta sitten kun Krymmi
muutamin hitain vetäisyin lopetti valssin, Ollikin lopetti.
— "Ville ja Olli ja Rainar eivät ole reippaita ainoastaan pellolla,
vaan myöskin oiva tanssittajia", sanoi Lentolan patruuni, otsaansa
pyyhkien.
Ehkei hän ollut huomannut Rainarin monasti sotkeutuneen
tanssiessaan isäntärengin Liisun kanssa. Ja kerran hän suin päin
kaatui joutuessaan nurkassa liian lähelle Karoliinaa ja Ollia. Sellaista
saattaa sattua paraimmallekkin tanssijalle semmoisessa ahdingossa
ja tanssinhurakassa. Kaakkuri meni mainiosti. Ville olisi suonut
Malmin ja Toivosen olevan näkemässä, sillä he sen olivat opettaneet
heille. Rainar, joka ei sitä tanssia osannut, päätti jo huomenna
opetella. Sama oli, tanssiko hän sinä iltana tai ei. Ainoa, jota hän
mielellään olisi tanssittanut, ei kuitenkaan ollut siellä. — Heidän
kotiin tullessaan hän kai jo nukkui. Ehkäpä jo nyt näki unta.
*****
The dining-room clock, which had already been moved over from the old main room, struck eleven. Mother was standing on a chair in front of the tall oak china cupboard and had just set the last piece of the company china, handed up to her by Elsa, on the topmost shelf, when they both started at the sound of fiddle music from the shore.
"Karoliina and the boys are coming back," said Mother, "and they have taken Krymmi into the boat. It gave me quite a start. Well, the day's work is done. Elsa, run and light the lamp in the boys' room, and I will close up the doors here."
Elsa was already outside.
The garden was so dark that Ville had to walk ahead to guide Krymmi, who was a complete stranger to the place. From the fiddle, at least, one could not tell how many times Krymmi stumbled when he strayed too close to the edge of the path. Grandly and beautifully Napoleon's march over the Alps rang out in the quiet, mysterious garden.
Karoliina and Olli walked arm in arm. Behind them came Fiina of Isotalo and the young master of Mattila, who had also found a place in the boat. Last of all walked Rainar, who was weary and had slept in the boat, just as Olli had. He was wondering what they would say at the house about the music, and above all what a certain person there would think of it, when he suddenly noticed Elsa's light dress beside the path a few steps away.
Elsa and he walked last. And when Rainar, quite as if by accident, took her hand, she let him hold it as far as the garden gate. But there Elsa snatched her hand away and hurried past the others to light the lamp.
10.
CHRISTMAS.
The winter titmice were the last to move into the new house. Ville and Olli already believed the fire had frightened them so badly that they would never again dare to settle there. But one October morning a scratching and tapping was heard from the ventilation pipe.
"Winter is coming," Malmi predicted.
And once again he was right.
A couple of days later the whole bay below the house gleamed like a dark-blue mirror. The first thin ice was so poor that Olli almost at once plumped knee-deep into the lake when he tried whether it would bear his other foot as well.
The following week Plyhti was already driving across the ice to the mill. He drove carefully, and carefully Kimo set its hooves on the ice. But it was stronger than anyone had supposed. Full winter had come again.
At once the north wind, too, arrived from the Arctic Sea. It prowled around the corners, rattled the doors and windows and probed every crack and joint. "In this house you would never guess what the weather is like outside," Mother often marvelled when, sitting at her sewing, she saw the snow racing along the ice or the squall clouds sweeping over the fields.
It was a fine house in every way. And how little firewood it needed compared with the old one! Often Ville and Olli had finished hauling wood so early that they had time for a good long ride before Karoliina's lantern glimmered from the milk room.
"Earnings from work must be poor these days," said Karoliina, filling their milk jugs to the brim.
"Indeed they are," Ville replied, "far worse than last year."
"On the other hand there is more work of another kind," Olli explained, thinking of school.
But in truth that term slipped by more quickly and easily than they had expected. The strange thing was that most subjects grew the more enjoyable the further one got in them.
It must also be said that the conduct book was needed only very rarely. True, there was that one time with the clatter of the ink bottle and Olli's sleeve: while the teacher was at coffee, Olli had wanted to see how Leeni wrote, for Leeni too had begun her schooling. But nobody remembered anything of the kind as Christmas drew near, least of all on Christmas Eve itself. "As encouragement for diligence and good progress," Father had written inside the cover of Oikea Robinpoika Kruuse, the Robinson Crusoe book that Olli received as a Christmas present.
Uncle Vihtori had arrived. His travelling bag was heavier than ever before, or so it seemed to the boys who carried it into the new guest room. Besides that, the sleigh had held a whole heap of parcels. Leeni's arms were so full that one large parcel rolled onto the hall floor. She was the unhappiest of the unhappy. What if it had been something breakable! She did not even dare to think of a certain object that might very well have been in just such a box.
Ville tried to comfort her. He shook the parcel gently and listened. "It is packed so well that you needn't worry at all," he said. "Just remember how roughly Miina flings the Christmas presents about, and nothing has ever broken. And in the shops they certainly know how to wrap things. They often pack breakables in wood shavings."
Leeni believed that nothing was broken. "Masurkka was packed in cotton wool," she consoled herself, and wiped her tears on the wrong side of her apron.
Rainar had sent a large hamper by express freight. It arrived on the afternoon of Christmas Eve, at the very last moment. If the stationmaster had not telephoned, it would not have been fetched until after the holidays. But Antti soon whisked it home with Eira.
It was addressed to Ville, but Olli helped to unpack it. It held Christmas presents, neatly wrapped in white paper and carefully sealed with wax, for Mother and Father and the girls and Karoliina and Hanna and Miina. On two parcels was written: Foreman Malmi, Metsola. One of them was surely a book, but the other, a tiny little one, felt like a box wrapped in many layers of paper. "Perhaps it holds a watch chain as fine as the one the foreman at Lentola has," guessed Ville, who knew more than he let on.
There was a parcel for Antti too, and even Plyhti was not left without one.
"Turn your head the other way," Ville urged when he noticed Olli's name on a large parcel.
Olli moved over to the window. He pressed his face to the pane and, shading his eyes with his hands, tried to peer into the darkness. Ville slipped the parcel into the basket where they were gathering their Christmas presents.
"The Christmas tree in the great hall at Lentola has surely been lit by now," said Olli. "There will be light shining from every window."
"Here is a parcel addressed to you," Ville teased. "It says: to be given to Olli at once."
Then it was Olli's turn to try to get the two parcels meant for Ville into the basket unnoticed.
At the bottom of the hamper was a long letter for them both. "It is settled!" Ville shouted when he had read it. "He is coming right after New Year, he will start studying with us, and he will stay in the country for good."
"If we had only known a little sooner, I would have made him a spade for Christmas instead of a rake," Olli lamented. "You were wise to make an axe handle. That is always needed."
"It is truly a good thing to have a rake of one's own," Ville consoled him. "Winter will slip by soon enough."
At that moment the door opened a crack and Leeni peeped into the room. "Are you ready, boys? Mother will light the tree in a moment."
She was so fine and so neat, so pressed and combed, that she hardly knew what to do with her hands.
"From Leeni, at least, you can see that it is Christmas now," Ville remarked.
Uncle Vihtori and Father sat talking in the parlour. Leeni had left the door open, and every word could be heard.
"Hard to endure! What!" said Uncle Vihtori heatedly. "Conditions are becoming quite intolerable. Just think of the latest appointments. It is not a question of enduring, but of whether one crawls in among the fortune-hunters or..."
"Now, now," came Mother's cheerful voice. "No politics on Christmas Eve. I am going to have the folk called in from the servants' hall, and then we will light the tree and once more be children among children."
"Or one resigns," Father added as he rose.
"And remains an honest and free man who will not bend," said Uncle Vihtori.
"Whom people can never bend," Father added, "and who moves to the country, gets a plot of his own and ditches the marshes."
"For the good of future generations," laughed Uncle Vihtori.
"Exactly so," said Father, "for the good of future generations."
"What if Uncle Vihtori were to buy Sulkola when he leaves his post," Ville put in. "Are you ready, boy? Then let's go. You put out the lamp, and I will empty our present basket into the big hamper in the hall."
On his way Olli pushed the dining-room door shut, and the winter titmouse in the ventilation pipe heard nothing more. It had just fallen asleep when a dazzling flood of light, reflected from the snow, woke it. Startled, it sprang up. But then it remembered: of course, it was the Christmas tree.
Singing could be heard. Elsa's and Ville's clear voices stood out plainly, and Father's fine singing. But best of all carried Antti's deep voice, devoutly drawing out the old, beautiful Christmas hymn in so steady a rhythm that anyone who had even once heard him imitate the bass horn of a dragoon regiment's band could not help picturing the man from Maalahti on the back of his white horse.
The singing fell silent. All was still. The winter titmouse fluffed itself up for the night, tucked its head deep between its wings and fell asleep. It did not even hear Ville and Olli come into the room, though they chattered and laughed. Safely and sweetly it slept, until the sleigh bells began to jingle on the road as the churchgoers drove through the grey dawn to the Christmas service.
A winter titmouse, a good friend of the one that lives in the ventilation pipe of the boys' room, told me much of this while I was staying at Metsola.
Part of it I heard from a tired, belated young starling which, on a foggy autumn evening near Hiidenmaa, settled on the ship's upper bridge just as I was on watch, and to which I then gave a free passage to Stettin.
Part of it the wagtails and garden warblers and swallows and the other migratory birds of Finland told me when, on their way to Africa, they rested in the almond tree above that old stone wall (you remember it, don't you?) where the lizard father and the lizard mother were so content that they never longed to be anywhere else.
The rest I have seen in lovely dreams of happy children who honour their father and their mother and grow up to be the pride of their land.
*** END OF THE PROJECT GUTENBERG EBOOK METSOLAN
POJAT: MAALAISELÄMÄÄ ***
Updated editions will replace the previous one—the old editions will
be renamed.
Creating the works from print editions not protected by U.S.
copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying copyright
royalties. Special rules, set forth in the General Terms of Use part of
this license, apply to copying and distributing Project Gutenberg™
electronic works to protect the PROJECT GUTENBERG™ concept
and trademark. Project Gutenberg is a registered trademark, and
may not be used if you charge for an eBook, except by following the
terms of the trademark license, including paying royalties for use of
the Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is very
easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.
START: FULL LICENSE
THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
To protect the Project Gutenberg™ mission of promoting the free
distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.
Section 1. General Terms of Use and
Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund from
the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.
1.B. “Project Gutenberg” is a registered trademark. It may only be
used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law in
the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name
associated with the work. You can easily comply with the terms of
this agreement by keeping this work in the same format with its
attached full Project Gutenberg™ License when you share it without
charge with others.
1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the terms
of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.
1.E. Unless you have removed all references to Project Gutenberg:
1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears, or
with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.
1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.
1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning of
this work.
1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.
1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1 with
active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or expense
to the user, provide a copy, a means of exporting a copy, or a means
of obtaining a copy upon request, of the work in its original “Plain
Vanilla ASCII” or other form. Any alternate format must include the
full Project Gutenberg™ License as specified in paragraph 1.E.1.
1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.
1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:
• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”
• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™ License.