Generalized Linear Models
and Extensions
Fourth Edition

James W. Hardin
Department of Epidemiology and Biostatistics, University of South Carolina

Joseph M. Hilbe
Statistics, School of Social and Family Dynamics, Arizona State University

A Stata Press Publication
StataCorp LLC
College Station, Texas

Copyright © 2001, 2007, 2012, 2018 by StataCorp LLC


All rights reserved. First edition 2001
Second edition 2007
Third edition 2012
Fourth edition 2018

Published by Stata Press, 4905 Lakeway Drive, College Station, Texas 77845

Typeset in LaTeX 2ε

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

Print ISBN-10: 1-59718-225-7

Print ISBN-13: 978-1-59718-225-6

ePub ISBN-10: 1-59718-226-5


ePub ISBN-13: 978-1-59718-226-3

Mobi ISBN-10: 1-59718-227-3

Mobi ISBN-13: 978-1-59718-227-0

Library of Congress Control Number: 2018937959

No part of this book may be reproduced, stored in a retrieval system, or transcribed, in any form or by any
means—electronic, mechanical, photocopy, recording, or otherwise—without the prior written permission
of StataCorp LLC.

Stata, Stata Press, Mata, and NetCourse are registered trademarks of StataCorp LLC.

Stata and Stata Press are registered trademarks with the World Intellectual Property Organization of the
United Nations.

NetCourseNow is a trademark of StataCorp LLC.

LaTeX 2ε is a trademark of the American Mathematical Society.


In all editions of this text, this dedication page was written as the final
deliverable to the editor—after all the work and all the requested
changes and additions had been addressed. In every previous edition,
Joe and I co-dedicated this book to our wives and children. This time,
I write the dedication alone.

In memory of

Joseph M. Hilbe

who passed away after responding to the editor’s final change


requests, but before the specification of our dedication. Joe was a dear
friend and colleague. We worked closely on this new edition, and he
was as cheerful and tireless as always. We worked a long time to put
this latest edition together, and he would want the reader to know that
he is very proud of our collaboration, but even more proud of his
family: Cheryl, Heather, Michael, and Mitchell.
Contents

Figures

Tables
Listings

Preface
1 Introduction
1.1 Origins and motivation
1.2 Notational conventions
1.3 Applied or theoretical?
1.4 Road map
1.5 Installing the support materials
I Foundations of Generalized Linear Models

2 GLMs
2.1 Components
2.2 Assumptions
2.3 Exponential family
2.4 Example: Using an offset in a GLM
2.5 Summary
3 GLM estimation algorithms
3.1 Newton–Raphson (using the observed Hessian)
3.2 Starting values for Newton–Raphson
3.3 IRLS (using the expected Hessian)
3.4 Starting values for IRLS
3.5 Goodness of fit
3.6 Estimated variance matrices
3.6.1 Hessian
3.6.2 Outer product of the gradient
3.6.3 Sandwich
3.6.4 Modified sandwich
3.6.5 Unbiased sandwich
3.6.6 Modified unbiased sandwich
3.6.7 Weighted sandwich: Newey–West
3.6.8 Jackknife
Usual jackknife
One-step jackknife
Weighted jackknife
Variable jackknife
3.6.9 Bootstrap
Usual bootstrap
Grouped bootstrap
3.7 Estimation algorithms
3.8 Summary
4 Analysis of fit
4.1 Deviance
4.2 Diagnostics
4.2.1 Cook’s distance
4.2.2 Overdispersion
4.3 Assessing the link function
4.4 Residual analysis
4.4.1 Response residuals
4.4.2 Working residuals
4.4.3 Pearson residuals
4.4.4 Partial residuals
4.4.5 Anscombe residuals
4.4.6 Deviance residuals
4.4.7 Adjusted deviance residuals
4.4.8 Likelihood residuals
4.4.9 Score residuals
4.5 Checks for systematic departure from the model
4.6 Model statistics
4.6.1 Criterion measures
AIC
BIC
4.6.2 The interpretation of R² in linear regression
Percentage variance explained
The ratio of variances
A transformation of the likelihood ratio
A transformation of the F test
Squared correlation
4.6.3 Generalizations of linear regression R² interpretations
Efron’s pseudo-R²
McFadden’s likelihood-ratio index
Ben-Akiva and Lerman adjusted likelihood-ratio index
McKelvey and Zavoina ratio of variances
Transformation of likelihood ratio
Cragg and Uhler normed measure
4.6.4 More R² measures
The count R²
The adjusted count R²
Veall and Zimmermann R²
Cameron–Windmeijer R²
4.7 Marginal effects
4.7.1 Marginal effects for GLMs
4.7.2 Discrete change for GLMs
II Continuous Response Models

5 The Gaussian family


5.1 Derivation of the GLM Gaussian family
5.2 Derivation in terms of the mean
5.3 IRLS GLM algorithm (nonbinomial)
5.4 ML estimation
5.5 GLM log-Gaussian models
5.6 Expected versus observed information matrix
5.7 Other Gaussian links
5.8 Example: Relation to OLS
5.9 Example: Beta-carotene
6 The gamma family
6.1 Derivation of the gamma model
6.2 Example: Reciprocal link
6.3 ML estimation
6.4 Log-gamma models
6.5 Identity-gamma models
6.6 Using the gamma model for survival analysis
7 The inverse Gaussian family
7.1 Derivation of the inverse Gaussian model
7.2 Shape of the distribution
7.3 The inverse Gaussian algorithm
7.4 Maximum likelihood algorithm
7.5 Example: The canonical inverse Gaussian
7.6 Noncanonical links
8 The power family and link
8.1 Power links
8.2 Example: Power link
8.3 The power family
III Binomial Response Models

9 The binomial–logit family


9.1 Derivation of the binomial model
9.2 Derivation of the Bernoulli model
9.3 The binomial regression algorithm
9.4 Example: Logistic regression
9.4.1 Model producing logistic coefficients: The heart data
9.4.2 Model producing logistic odds ratios
9.5 GOF statistics
9.6 Grouped data
9.7 Interpretation of parameter estimates
10 The general binomial family
10.1 Noncanonical binomial models
10.2 Noncanonical binomial links (binary form)
10.3 The probit model
10.4 The clog-log and log-log models
10.5 Other links
10.6 Interpretation of coefficients
10.6.1 Identity link
10.6.2 Logit link
10.6.3 Log link
10.6.4 Log complement link
10.6.5 Log-log link
10.6.6 Complementary log-log link
10.6.7 Summary
10.7 Generalized binomial regression
10.8 Beta binomial regression
10.9 Zero-inflated models
11 The problem of overdispersion
11.1 Overdispersion
11.2 Scaling of standard errors
11.3 Williams’ procedure
11.4 Robust standard errors
IV Count Response Models

12 The Poisson family


12.1 Count response regression models
12.2 Derivation of the Poisson algorithm
12.3 Poisson regression: Examples
12.4 Example: Testing overdispersion in the Poisson model
12.5 Using the Poisson model for survival analysis
12.6 Using offsets to compare models
12.7 Interpretation of coefficients
13 The negative binomial family
13.1 Constant overdispersion
13.2 Variable overdispersion
13.2.1 Derivation in terms of a Poisson–gamma mixture
13.2.2 Derivation in terms of the negative binomial probability function
13.2.3 The canonical link negative binomial parameterization
13.3 The log-negative binomial parameterization
13.4 Negative binomial examples
13.5 The geometric family
13.6 Interpretation of coefficients
14 Other count-data models
14.1 Count response regression models
14.2 Zero-truncated models
14.3 Zero-inflated models
14.4 General truncated models
14.5 Hurdle models
14.6 Negative binomial(P) models
14.7 Negative binomial(Famoye)
14.8 Negative binomial(Waring)
14.9 Heterogeneous negative binomial models
14.10 Generalized Poisson regression models
14.11 Poisson inverse Gaussian models
14.12 Censored count response models
14.13 Finite mixture models
14.14 Quantile regression for count outcomes
14.15 Heaped data models
V Multinomial Response Models
15 Unordered-response family
15.1 The multinomial logit model
15.1.1 Interpretation of coefficients: Single binary predictor
15.1.2 Example: Relation to logistic regression
15.1.3 Example: Relation to conditional logistic regression
15.1.4 Example: Extensions with conditional logistic regression
15.1.5 The independence of irrelevant alternatives
15.1.6 Example: Assessing the IIA
15.1.7 Interpreting coefficients
15.1.8 Example: Medical admissions—introduction
15.1.9 Example: Medical admissions—summary
15.2 The multinomial probit model
15.2.1 Example: A comparison of the models
15.2.2 Example: Comparing probit and multinomial probit
15.2.3 Example: Concluding remarks
16 The ordered-response family
16.1 Interpretation of coefficients: Single binary predictor
16.2 Ordered outcomes for general link
16.3 Ordered outcomes for specific links
16.3.1 Ordered logit
16.3.2 Ordered probit
16.3.3 Ordered clog-log
16.3.4 Ordered log-log
16.3.5 Ordered cauchit
16.4 Generalized ordered outcome models
16.5 Example: Synthetic data
16.6 Example: Automobile data
16.7 Partial proportional-odds models
16.8 Continuation-ratio models
16.9 Adjacent category model
VI Extensions to the GLM
17 Extending the likelihood
17.1 The quasilikelihood
17.2 Example: Wedderburn’s leaf blotch data
17.3 Example: Tweedie family variance
17.4 Generalized additive models
18 Clustered data
18.1 Generalization from individual to clustered data
18.2 Pooled estimators
18.3 Fixed effects
18.3.1 Unconditional fixed-effects estimators
18.3.2 Conditional fixed-effects estimators
18.4 Random effects
18.4.1 Maximum likelihood estimation
18.4.2 Gibbs sampling
18.5 Mixed-effect models
18.6 GEEs
18.7 Other models
19 Bivariate and multivariate models
19.1 Bivariate and multivariate models for binary outcomes
19.2 Copula functions
19.3 Using copula functions to calculate bivariate probabilities
19.4 Synthetic datasets
19.5 Examples of bivariate count models using copula functions
19.6 The Famoye bivariate Poisson regression model
19.7 The Marshall–Olkin bivariate negative binomial regression model
19.8 The Famoye bivariate negative binomial regression model
20 Bayesian GLMs
20.1 Brief overview of Bayesian methodology
20.1.1 Specification and estimation
20.1.2 Bayesian analysis in Stata
20.2 Bayesian logistic regression
20.2.1 Bayesian logistic regression—noninformative priors
20.2.2 Diagnostic plots
20.2.3 Bayesian logistic regression—informative priors
20.3 Bayesian probit regression
20.4 Bayesian complementary log-log regression
20.5 Bayesian binomial logistic regression
20.6 Bayesian Poisson regression
20.6.1 Bayesian Poisson regression with noninformative priors
20.6.2 Bayesian Poisson with informative priors
20.7 Bayesian negative binomial likelihood
20.7.1 Zero-inflated negative binomial logit
20.8 Bayesian normal regression
20.9 Writing a custom likelihood
20.9.1 Using the llf() option
Bayesian logistic regression using llf()
Bayesian zero-inflated negative binomial logit regression using llf()
20.9.2 Using the llevaluator() option
Logistic regression model using llevaluator()
Bayesian clog-log regression with llevaluator()
Bayesian Poisson regression with llevaluator()
Bayesian negative binomial regression using llevaluator()
Zero-inflated negative binomial logit using llevaluator()
Bayesian gamma regression using llevaluator()
Bayesian inverse Gaussian regression using llevaluator()
Bayesian zero-truncated Poisson using llevaluator()
Bayesian bivariate Poisson using llevaluator()
VII Stata Software

21 Programs for Stata


21.1 The glm command
21.1.1 Syntax
21.1.2 Description
21.1.3 Options
21.2 The predict command after glm
21.2.1 Syntax
21.2.2 Options
21.3 User-written programs
21.3.1 Global macros available for user-written programs
21.3.2 User-written variance functions
21.3.3 User-written programs for link functions
21.3.4 User-written programs for Newey–West weights
21.4 Remarks
21.4.1 Equivalent commands
21.4.2 Special comments on family(Gaussian) models
21.4.3 Special comments on family(binomial) models
21.4.4 Special comments on family(nbinomial) models
21.4.5 Special comment on family(gamma) link(log) models
22 Data synthesis
22.1 Generating correlated data
22.2 Generating data from a specified population
22.2.1 Generating data for linear regression
22.2.2 Generating data for logistic regression
22.2.3 Generating data for probit regression
22.2.4 Generating data for complementary log-log regression
22.2.5 Generating data for Gaussian variance and log link
22.2.6 Generating underdispersed count data
22.3 Simulation
22.3.1 Heteroskedasticity in linear regression
22.3.2 Power analysis
22.3.3 Comparing fit of Poisson and negative binomial
22.3.4 Effect of missing covariate on in Poisson regression
A Tables
References

Author index

Subject index
Figures
5.1 Pearson residuals obtained from linear model
5.2 Normal scores versus sorted Pearson residuals obtained from linear model
5.3 Pearson residuals versus kilocalories; Pearson residuals obtained from linear
model
5.4 Pearson residuals obtained from log-Gaussian model (two outliers removed)
5.5 Pearson residuals versus fitted values from log-Gaussian model (two outliers
removed)
5.6 Pearson residuals from lognormal model (log-transformed outcome, two
outliers removed, and zero outcome removed)
5.7 Pearson residuals versus fitted values from lognormal model (log-
transformed outcome, two outliers removed, and zero outcome removed)
5.8 Normal scores versus sorted Pearson residuals obtained from lognormal
model (log-transformed outcome, two outliers removed, and zero outcome
removed)
5.9 Pearson residuals versus kilocalories; Pearson residuals obtained from
lognormal model (log-transformed outcome, two outliers removed, and zero
outcome removed)
6.1 Anscombe residuals versus log (variance)
7.1 Inverse Gaussian ( , )
7.2 Inverse Gaussian ( , )
7.3 Inverse Gaussian ( , )
7.4 Inverse Gaussian ( , )
7.5 Inverse Gaussian ( , )
7.6 Inverse Gaussian ( , )
9.1 Sample proportions of girls reaching menarche for each age category
9.2 Predicted probabilities of girls reaching menarche for each age category
9.3 Predicted probabilities and sample proportions of girls reaching menarche
for each age category
10.1 Probit and logit functions
10.2 Predicted probabilities for probit and logit link function in grouped binary
models. The observed (sample) proportions are included as well.
10.3 Complementary log-log and log-log functions
10.4 Probit, logit, and identity functions
10.5 Observed proportion of carrot fly damage for each treatment (see
table 10.3)
13.1 Frequency of occurrence versus LOS
14.1 Probability mass functions for negative binomial models
14.2 Histogram of response variables created as a mixture of scaled Poissons
14.3 Graphs are organized for the conditional distribution of the outcome
conditional on the covariates ( ). The values of the covariates are (0,0) in
the upper left, (0,1) in the upper right, (1,0) in the lower left, and (1,1) in the
lower right. Bars represent the empirical distribution of the outcome variable.
Circles represent the estimated probabilities evaluated at
generated by the fitted Poisson regression model. Triangles
represent the estimated probabilities evaluated at
of the fitted heaped Poisson regression model.
15.1 Length of stay versus admission type for elective admissions
15.2 Length of stay versus admission type for urgent admissions
15.3 Length of stay versus admission type for emergency admissions
17.1 Pearson residuals versus linear predictor
17.2 Pearson residuals versus log(variance)
17.3 Pearson residuals versus linear predictor
17.4 Pearson residuals versus log(variance)
18.1 Simulation–extrapolation results
20.1 Centered number of doctor visits
20.2 Centered age in years
20.3 Centered number of doctor visits
20.4 Centered age in years
20.5 Out of work
20.6 Standardized age in years
Tables
2.1 Predicted values for various choices of variance function
9.1 Binomial regression models
9.2 Common binomial link functions
9.3 Variables from heart01.dta
10.1 Common binomial noncanonical link functions
10.2 Noncanonical binomial link functions ( )
10.3 1964 microplot data of carrot fly damage
10.4 Survivors among different categorizations of passengers on the Titanic
14.1 Other count-data models
14.2 Variance functions for count-data models; , , , , , and are
constants
14.3 Poisson and negative binomial panel-data models
14.4 Types of censoring for outcome
15.1 Multinomial (three-levels) logistic regression with one binary predictor
15.2 Multinomial (three-levels) logistic regression with one binary predictor
where the coefficients of the reference outcome ( ) are set to zero
16.1 Ordered (three-levels) logistic regression with one binary predictor
16.2 Ordered (three-levels) logistic regression with one binary predictor where
and
16.3 Cumulative logits outcome versus outcomes
16.4 Cumulative logits outcomes versus outcome
18.1 Stata commands for mixed-effects modeling
18.2 Equivalent commands in Stata
18.3 Equivalent random-effects logistic regression commands in Stata
19.1 Bivariate copula functions
19.2 Programs for generating bivariate outcomes with rejectsample
20.1 Built-in support for the bayes prefix
20.2 Additional built-in support for the bayes prefix
20.3 Built-in log likelihoods
20.4 Illustrated (user-specified) log likelihoods
20.5 Built-in prior distributions
20.6 Results from bayesmh with informative and noninformative priors
21.1 Resulting standard errors
21.2 Statistics for predict
21.3 Equivalent Stata commands
A.1 Variance functions
A.2 Link and inverse link functions ( )
A.3 First derivatives of link functions ( )
A.4 First derivatives of inverse link functions ( )
A.5 Second derivatives of link functions where and

A.6 Second derivatives of inverse link functions where and

A.7 Log likelihoods


A.8 Weight functions (kernels) for weighted sandwich variance estimates
A.9 Pearson residuals
A.10 Anscombe residuals
A.11 Squared deviance residuals and deviance adjustment factors
A.12 Kullback–Leibler (K–L) divergence (Cameron and Windmeijer 1997)
A.13 Cameron–Windmeijer (1997)
A.14 Interpretation of power links
Listings
3.1 IRLS algorithm
3.2 Newton–Raphson algorithm
5.1 IRLS algorithm for nonbinomial models
5.2 IRLS algorithm for Gaussian models
5.3 IRLS algorithm (reduced) for Gaussian models
5.4 IRLS algorithm for GLM using OIM
5.5 IRLS algorithm for log-Gaussian models using OIM
6.1 IRLS algorithm for gamma models
7.1 IRLS algorithm for inverse Gaussian models
7.2 IRLS algorithm for log-inverse Gaussian models using OIM
9.1 IRLS algorithm for grouped logistic regression
9.2 IRLS algorithm for binary logistic regression
10.1 IRLS algorithm for binary probit regression
10.2 IRLS algorithm for binary probit regression using OIM
10.3 IRLS algorithm for binary clog-log regression using EIM
10.4 IRLS algorithm for binary clog-log regression using OIM
10.5 IRLS algorithm for binary log-log regression
11.1 Williams’ procedure algorithm
11.2 IRLS algorithm for Williams’ procedure with optimal scale
12.1 IRLS algorithm for Poisson regression
13.1 IRLS algorithm for negative binomial regression
13.2 IRLS algorithm for log-negative binomial regression
13.3 IRLS algorithm for log-negative binomial regression using OIM
13.4 IRLS algorithm for log-negative binomial regression with estimation of
13.5 IRLS algorithm for geometric regression
13.6 IRLS algorithm for log-geometric regression
17.1 Code for user-written binsq variance program
17.2 Code for user-written Tweedie variance program
20.1 blogitll evaluator for logistic regression
20.2 bclogll evaluator for complementary log-log regression
20.3 bpoill evaluator for Poisson regression
20.4 bnbll evaluator for negative binomial regression
20.5 bzinbll evaluator for log-link zero-inflated negative binomial
20.6 bgammall evaluator for gamma regression
20.7 bivgll evaluator for inverse Gaussian regression
20.8 bpoi0ll evaluator for zero-truncated Poisson regression
21.1 Skeleton code for a user-written variance program
21.2 Skeleton code for a user-written link program
21.3 Skeleton code for user-written Newey–West kernel weights program
21.4 Example code for user-written Tukey–Hanning weights kernel
Preface
We have added several new models to the discussion of extended generalized
linear models (GLMs). We have included new software and discussion of
extensions to negative binomial regression due to Waring and Famoye. We
have also added discussion of heaped data and bias-corrected GLMs due to
Firth. There are two new chapters on multivariate outcomes and Bayesian
GLMs. In addition, we have expanded the clustered-data discussion to cover
more of the commands available in Stata.

We now include even more examples using synthetically created models to


illustrate estimation results, and we illustrate to readers how to construct
synthetic Monte Carlo models for binomial and major count models. Code for
creating synthetic Poisson, negative binomial, zero-inflated, hurdle, and finite
mixture models is provided and further explained. We have enhanced discussion
of marginal effects and discrete change for GLMs.

This fourth edition of Generalized Linear Models and Extensions is written


for the active researcher as well as for the theoretical statistician. Our goal has
been to clarify the nature and scope of GLMs and to demonstrate how all the
families, links, and variations of GLMs fit together in an understandable whole.

In a step-by-step manner, we detail the foundations and provide working


algorithms that readers can use to construct and better understand models that
they wish to develop. In a sense, we offer readers a workbook or handbook of
how to deal with data using GLM and GLM extensions.

This text is intended as a textbook on GLMs and as a handbook of advice for


researchers. We continue to use this book as the required text for a web-based
short course through Statistics.com (also known as the Institute for Statistical
Education); see http://www.statistics.com. The students of this six-week course
include university professors and active researchers from hospitals, government
agencies, research institutes, educational concerns, and other institutions across
the world. This latest edition reflects the experiences we have had in
communicating to our readers and students the relevant materials over the past
decade.

Many people have contributed to the ideas presented in the new edition of
this book. John Nelder has been the foremost influence. Other important and
influential people include Peter Bruce, David Collett, David Hosmer, Stanley
Lemeshow, James Lindsey, J. Scott Long, Roger Newson, Scott Zeger, Kung-
Yee Liang, Raymond J. Carroll, H. Joseph Newton, Henrik Schmiediche,
Norman Breslow, Berwin Turlach, Gordon Johnston, Thomas Lumley, Bill
Sribney, Vince Wiggins, Mario Cleves, William Greene, Andrew Robinson,
Heather Presnal, and others. Specifically, for this edition, we thank Tammy
Cummings, Chelsea Deroche, Xinling Xu, Roy Bower, Julie Royer, James
Hussey, Alex McLain, Rebecca Wardrop, Gelareh Rahimi, Michael G. Smith,
Marco Geraci, Bo Cai, and Feifei Xiao.

As always, we thank William Gould, president of StataCorp, for his


encouragement in this project. His statistical computing expertise and his
contributions to statistical modeling have had a deep impact on this book.

We are grateful to StataCorp’s editorial staff for their equanimity in reading


and editing our manuscript, especially to Patricia Branton and Lisa Gilmore for
their insightful and patient contributions in this area. Finally, we thank Kristin
MacDonald and Isabel Canette-Fernandez, Stata statisticians at StataCorp, for
their expert assistance on various programming issues, and Nikolay Balov,
Senior Statistician and Software Developer at StataCorp, for his helpful
assistance with chapter 20 on Bayesian GLMs. We would also like to thank Rose
Medeiros, Senior Statistician at StataCorp, for her assistance in the final passes
of this edition.

Stata Press allowed us to dictate some of the style of this text. In writing this
material in other forms for short courses, we have always included equation
numbers for all equations rather than only for those equations mentioned in text.
Although this is not the standard editorial style for textbooks, we enjoy the
benefits of students being able to communicate questions and comments more
easily (and efficiently). We hope that readers find this practice as beneficial as
our short-course participants have found it.

Errata, datasets, and supporting Stata programs (do-files and ado-files) may
be found at the publisher’s site
http://www.stata-press.com/books/generalized-linear-models-and-extensions/.
We also maintain these materials on the author
sites at http://www.thirdwaystat.com/jameshardin/ and at
https://works.bepress.com/joseph_hilbe/. We are very pleased to be able to
produce this newest edition. Working on this text from the first edition in 2001
over the past 17 years has been a tremendously satisfying experience.

James W. Hardin
Joseph M. Hilbe
March 2018
Chapter 1
Introduction
In updating this text, our primary goal is to convey the practice of analyzing data
via generalized linear models to researchers across a broad spectrum of scientific
fields. We lay out the framework used for describing various aspects of data and
for communicating tools for data analysis. This initial part of the text contains no
examples. Rather, we focus on the lexicon of generalized linear models used in
later chapters. These later chapters include examples from fields such as
biostatistics, economics, and survival analysis.

In developing analysis tools, we illustrate techniques via their genesis in


estimation algorithms. We believe that motivating the discussion through the
estimation algorithms clarifies the origin and usefulness of all generalized linear
models. Instead of detailed theoretical exposition, we refer to texts and papers
that present such material so that we may focus our detailed presentations on the
algorithms and their justification. Our detailed presentations are mostly
algebraic; we have minimized matrix notation whenever possible.

We often present illustrations of models using data that we synthesize.


Although it is preferable to use real data to illustrate interpretation in context,
there is a distinct advantage to examples using simulated data. The advantage of
worked examples relying on data synthesis is that the data-generating process
offers yet another glimpse into the associations between variables and outcomes
that are to be captured in the procedure. The associations are thus seen from both
the results of the model and the origins of data generation.
1.1 Origins and motivation

We wrote this text for researchers who want to understand the scope and
application of generalized linear models while being introduced to the
underlying theory. For brevity’s sake, we use the acronym GLM to refer to the
generalized linear model, but we acknowledge that GLM has been used elsewhere
as an acronym for the general linear model. The latter usage, of course, refers to
the area of statistical modeling based solely on the normal or Gaussian
probability distribution.

We take GLM to be the generalization of the general, because that is precisely


what GLMs are. They are the result of extending ordinary least-squares (OLS)
regression, or the normal model, to a model that is appropriate for a variety of
response distributions, specifically to those distributions that compose the single
parameter exponential family of distributions. We examine exactly how this
extension is accomplished. We also aim to provide the reader with a firm
understanding of how GLMs are evaluated and when their use is appropriate. We
even advance a bit beyond the traditional GLM and give the reader a look at how
GLMs can be extended to model certain types of data that do not fit exactly within
the GLM framework.
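For orientation, the single-parameter exponential family referred to here has the standard form used throughout the GLM literature (stated in standard notation; the full derivation is the subject of chapter 2):

```latex
f(y;\theta,\phi) \;=\; \exp\!\left\{ \frac{y\,\theta - b(\theta)}{a(\phi)} + c(y,\phi) \right\},
\qquad
\operatorname{E}(y) = b'(\theta), \quad \operatorname{Var}(y) = b''(\theta)\,a(\phi)
```

Each GLM family corresponds to a particular choice of the cumulant function b(θ), and the canonical link is the one that sets the linear predictor equal to θ itself.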

Nearly every text that addresses a statistical topic uses one or more statistical
computing packages to calculate and display results. We use Stata exclusively,
though we do refer occasionally to other software packages—especially when it
is important to highlight differences.

Some specific statistical models that make up GLMs are often found as
standalone software modules, typically fit using maximum likelihood methods
based on quantities from model-specific derivations. Stata has several such
commands for specific GLMs including poisson, logistic, and regress. Some
of these procedures were included in the Stata package from its first version.
More models have been addressed through commands written by users of Stata’s
programming language leading to the creation of highly complex statistical
models. Some of these community-contributed commands have since been
incorporated into the official Stata package. We highlight these commands and
illustrate how to fit models in the absence of a packaged command; see
especially chapter 14.
Stata’s glm command was originally created as a community-contributed
command (Hilbe 1993b) and then officially adopted into Stata two years later as
part of Stata 4.0. Examples of the glm command in this edition reflect
StataCorp’s continued updates to the command.

Readers of technical books often need to know about prerequisites,


especially how much math and statistics background is required. To gain full
advantage from this text and follow its every statement and algorithm, you
should have an understanding equal to a two-semester calculus-based course on
statistical theory. Without a background in statistical theory, the reader can
accept the presentation of the theoretical underpinnings and follow the (mostly)
algebraic derivations that do not require more than a mastery of simple
derivatives. We assume prior knowledge of multiple regression but no other
specialized knowledge is required.

We believe that GLMs are best understood if their computational basis is clear.
Hence, we begin our exposition with an explanation of the foundations and
computation of GLMs; there are two major methodologies for developing
algorithms. We then show how simple changes to the base algorithms lead to
different GLM families, links, and even further extensions. In short, we attempt to
lay the GLM open to inspection and to make every part of it as clear as possible.
In this fashion, the reader can understand exactly how and why GLM algorithms
can be used, as well as altered, to better model a desired dataset.

Perhaps more than any other text in this area, we alternately examine two
major computational GLM algorithms and their modifications:

1. Iteratively reweighted least squares

2. Newton–Raphson

Interestingly, some of the models we present are calculated only by using one
of the above methods. Iteratively reweighted least squares is the more
specialized technique and is applied less often. Yet it is typically the algorithm of
choice for quasilikelihood models such as generalized estimating equations
(GEEs). On the other hand, truncated models that do not fit neatly into the
exponential family of distributions are modeled using Newton–Raphson methods
—and for this, too, we show why. Again, focusing on the details of calculation
should help the reader understand both the scope and the limits of a particular
model.
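The book's examples use Stata; purely as a language-neutral illustration of the first algorithm (our sketch, not code from the text), here is a minimal Python implementation of iteratively reweighted least squares for a log-link Poisson model. At each step the working response and weights are recomputed from the current fit:

```python
import numpy as np

def irls_poisson(X, y, tol=1e-10, max_iter=100):
    """Iteratively reweighted least squares for a log-link Poisson GLM.

    Each step solves a weighted least-squares problem in the working
    response z = eta + (y - mu)/mu with weights W = diag(mu); for the
    canonical log link this iteration coincides with Newton-Raphson.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta            # linear predictor
        mu = np.exp(eta)          # mean via the inverse link
        z = eta + (y - mu) / mu   # working (adjusted) response
        XtW = X.T * mu            # X'W, with W = diag(mu)
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Simulated data: intercept plus one covariate (values are made up).
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1])).astype(float)
beta_hat = irls_poisson(X, y)
# At convergence the score equations X'(y - mu_hat) = 0 are satisfied.
score = X.T @ (y - np.exp(X @ beta_hat))
```

Swapping in a different variance function and link (and hence different weights and working response) is all that is needed to move between GLM families, which is the sense in which simple changes to this base algorithm yield the whole framework.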

Whenever possible, we present the log likelihood for the model under
discussion. In writing the log likelihood, we include offsets so that interested
programmers can see how those elements enter estimation. In fact, we attempt to
offer programmers the ability to understand and write their own working GLMs,
plus many useful extensions. As programmers ourselves, we believe that there is
value in such a presentation; we would have much enjoyed having it at our
fingertips when we first entered this statistical domain.
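To illustrate how an offset enters estimation (a Python sketch of ours, not the book's Stata code): the offset is added to the linear predictor with its coefficient fixed at 1, here for a Poisson log likelihood with a hypothetical exposure variable:

```python
import numpy as np
from math import lgamma

def poisson_loglik(beta, X, y, offset=None):
    """Poisson log likelihood; an offset enters the linear predictor
    additively, with its coefficient constrained to 1."""
    eta = X @ beta if offset is None else X @ beta + offset
    mu = np.exp(eta)
    log_yfact = np.array([lgamma(v + 1.0) for v in y])
    return float(np.sum(y * eta - mu - log_yfact))

# Made-up data: three observations with differing exposure (person-time).
X = np.array([[1.0], [1.0], [1.0]])
y = np.array([2.0, 0.0, 3.0])
exposure = np.array([1.0, 2.0, 4.0])
ll = poisson_loglik(np.array([0.1]), X, y, offset=np.log(exposure))
```

Evaluating the same likelihood with log(exposure) included instead as a column of the design matrix whose coefficient is held at 1 gives an identical value, which is exactly the sense in which an offset is a covariate with a known coefficient.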
1.2 Notational conventions

We use L to denote the likelihood and the script ℓ to denote the log likelihood.
We use X to denote the design matrix of independent (explanatory) variables.
When appropriate, we use boldface type to emphasize that we are referring to
a matrix; a lowercase letter with a subscript, x_i, will refer to the ith row from the
matrix X.

We use y to denote the dependent (response) variable and refer to the vector β
as the coefficients of the design matrix. We use β̂ when we wish to discuss or
emphasize the fitted coefficients. Throughout the text, we discuss the role of the
(vector) linear predictor η = Xβ. In generalizing this concept, we also refer to
the augmented (by an offset) version of the linear predictor, η = Xβ + offset.

Finally, we use the notation E(·) to refer to the expectation of a random
variable and the notation V(·) to refer to the variance of a random variable. We
describe other notational conventions at the time of their first use.
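Assuming the standard symbols above (design matrix X with rows x_i, coefficient vector β, and an offset), a small numeric sketch of the linear predictor and its offset-augmented version, with made-up values:

```python
import numpy as np

# Design matrix X (rows x_i): first column is the intercept.
X = np.array([[1.0, 2.0],
              [1.0, 0.5],
              [1.0, -1.0]])
beta = np.array([0.3, 1.2])          # coefficient vector
offset = np.array([0.0, 0.1, 0.2])   # known, fixed per-observation term

eta = X @ beta           # linear predictor: eta_i = x_i . beta
eta_aug = eta + offset   # offset-augmented linear predictor
```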
1.3 Applied or theoretical?

A common question regarding texts concerns their focus. Is the text applied or
theoretical? Our text is both. However, we would argue that it is basically
applied. We show enough technical details for the theoretician to understand the
underlying basis of GLMs. However, we believe that understanding the use and
limitations of a GLM includes understanding its estimation algorithm. For some,
dealing with formulas and algorithms appears thoroughly theoretical. We believe
that it aids in understanding the scope and limits of proper application. Perhaps
we can call the text a bit of both and not worry about classification. In any case,
for those who fear formulas, each formula and algorithm is thoroughly
explained. We hope that by book’s end the formulas and algorithms will seem
simple and meaningful. For completeness, we give the reader references to texts
that discuss more advanced topics and theory.
1.4 Road map

Part I of the text deals with the basic foundations of GLM. We detail the various
components of GLM, including various family, link, variance, deviance, and log-
likelihood functions. We also provide a thorough background and detailed
particulars of both the Newton–Raphson and iteratively reweighted least-squares
algorithms. The chapters that follow highlight this discussion, which describes
the framework through which the models of interest arise.

We also give the reader an overview of GLM residuals, introducing some that
are not widely known, but that nevertheless can be extremely useful for
analyzing a given model’s worth. We discuss the general notion of goodness of
fit and provide a framework through which you can derive more extensions to
GLM. We conclude this part with discussion and illustration of simulation and
data synthesis.

We often advise participants only interested in the application and
interpretation of models to skip this first part of the book. Even those interested
in the theoretical underpinnings will find that this first part of the book can serve
more as an appendix. That is, the information in this part often turns out to be
most useful in subsequent readings of the material.

Part II addresses the continuous family of distributions, including the
Gaussian, gamma, inverse Gaussian, and power families. We derive the related
formulas and relevant algorithms for each family and then discuss the ancillary
or scale parameters appropriate to each model. We also examine noncanonical
links and generalizations to the basic model. Finally, we give examples, showing
how a given dataset may be analyzed using each family and link. We give
examples dealing with model application, including discussion of the appropriate
criteria for the analysis of fit. We have expanded the number of examples in this
new edition to highlight both model fitting and assessment.

Part III addresses binomial response models. It includes exposition of the
general binomial model and of the various links. Major links described include
the canonical logit, as well as the noncanonical links probit, log-log, and
complementary log-log. We also cover other links. We present examples and
criteria for analysis of fit throughout the part. This new edition includes
extensions to generalized binomial regression resulting from a special case of
much salt as may be required should be added to the stock when the
head first begins to boil in it: the cook must regulate also by the taste
the exact proportion of cayenne, mace, and catsup, which will flavour
the soup agreeably. The fragments of the head, with the bones and
the residue of the beef used for stock, if stewed down together with
some water and a few fresh vegetables, will afford some excellent
broth, such as would be highly acceptable, especially if well
thickened with rice, to many a poor family during the winter months.
31. Unless very good and pure in flavour, we cannot recommend the addition of
this or of any other catsup to soup or gravy.

Stock: shin of beef, 6 to 7 lbs.; water, 5 quarts: stewed down (with
vegetables, &c.) till reduced nearly half. Boned half-head with skin
on stewed in stock: 1-1/2 hour. Soup: stock, 5 pints; tongue, skin of
head, and part of flesh: 15 to 40 minutes, or more if not quite tender.
Rice-flour, 6 to 8 oz.; cayenne, quarter-teaspoonful; mace, twice as
much; mushroom catsup, 1/2 wineglassful: 10 minutes. Sherry, 2
wineglassesful, forcemeat-balls, 20 to 30.
SOUP DES GALLES.

Add to the liquor in which a knuckle of veal has been boiled the
usual time for table as much water as will make altogether six quarts,
and stew in it gently sixpennyworth of beef bones and sixpennyworth
of pork-rinds. When the boiling is somewhat advanced, throw in the
skin of a calf’s head; and in an hour afterwards, or when it is quite
tender, lift it out and set it aside till wanted. Slice and fry four large
mild onions, stick into another eight or ten cloves, and put them into
the soup after it has stewed from six to seven hours. Continue the
boiling for two or three hours longer, then strain off the soup, and let
it remain until perfectly cold. When wanted for table, take it quite
clear from the fat and sediment, and heat it anew with the skin of the
calf’s head cut into dice, three ounces of loaf sugar, four
tablespoonsful of strained lemon-juice, two of soy, and three
wineglassesful of sherry; give it one boil, skim it well, and serve it as
hot as possible. Salt must be added to it sparingly in the first
instance on account of the soy: a proper seasoning of cayenne or
pepper must not, of course, be omitted.
This receipt was given to the writer, some years since, as a
perfectly successful imitation of a soup which was then, and is still,
she believes, selling in London at six shillings the quart. Never
having tasted the original Soupe des Galles she cannot say how far
it is a correct one; but she had it tested with great exactness when
she received it first, and found the result a very good soup prepared
at an extremely moderate cost. The pork-rinds, when long boiled,
afford a strong and flavourless jelly, which might be advantageously
used to give consistence to other soups. They may be procured
during the winter, usually at the butcher’s, but if not, at the
porkshops: they should be carefully washed before they are put into
the soup-pot. When a knuckle of veal cannot conveniently be had, a
pound or two of the neck and a morsel of scrag of mutton may
instead be boiled down with the beef-bones; or two or three pounds
of neck or shin of beef: but these will, of course, augment the cost of
the soup.
POTAGE À LA REINE.

(A Delicate White Soup.)

Should there be no strong veal broth, nor any white stock in
readiness, stew four pounds of the scrag or knuckle of veal, with a
thick slice or two of lean ham, a faggot of sweet herbs, two
moderate-sized carrots, and the same of onions, a large blade of
mace, and a half-teaspoonful of white peppercorns, in four quarts of
water until reduced to about five pints; then strain the liquor, and set
it by until the fat can be taken entirely from it. Skin and wash
thoroughly, a couple of fine fowls, or three young pullets, and take
away the dark spongy substance which adheres to the insides; pour
the veal broth to them, and boil them gently from three quarters of an
hour to an hour; then lift them out, take off all the white flesh, mince it
small, pound it to the finest paste, and cover it with a basin until
wanted for use. In the mean time let the bodies of the fowls be put
again into the stock, and stewed gently for an hour and a half; add
as much salt and cayenne, as will season the soup properly, strain it
off when sufficiently boiled, and let it cool; skim off every particle of
fat; steep, in a small portion of it, which should be boiling, four
ounces of the crumb of light stale bread sliced thin, and when it has
simmered a few minutes, drain or wring the moisture from it in a
clean cloth, add it to the flesh of the chickens, and pound them
together until they are perfectly blended; then pour the stock to them
in very small quantities at first, and mix them smoothly with it; pass
the whole through a sieve or tammy, heat it in a clean stewpan, stir
to it from a pint to a pint and a half of boiling cream, and add, should
it not be sufficiently thick, an ounce and a half of arrow-root, quite
free from lumps, and moistened with a few spoonsful of cold milk or
stock.
Remark.—This soup, and the two which immediately follow it, if
made with care and great nicety by the exact directions given here
for them, will be found very refined and excellent. For stock: veal, 4
lbs.; ham, 6 oz.; water, 4 quarts; bunch of herbs; carrots, 2; onions,
2; mace, large blade; peppercorns, 1/2 teaspoonful; salt: 5 hours.
Fowls, 2, or pullets, 3: 3/4 to 1 hour; stewed afterwards 1 to 1-1/2
hour. Crumb of bread, 4 oz.; cream, 1 to 1-1/2 pint; arrow-root (if
needed), 1-1/2 oz.
Obs.—Some cooks pound with the bread and chickens the yolks
of three or four hard-boiled eggs, but these improve neither the
colour nor the flavour of the potage.
WHITE OYSTER SOUP.

(or Oyster Soup à la Reine.)


When the oysters are small, from two to three dozens for each pint
of soup should be prepared, but this number can of course be
diminished or increased at pleasure. Let the fish (which should be
finely conditioned natives) be opened carefully; pour the liquor from
them, and strain it; rinse them in it well, and beard them; strain the
liquor a second time through a lawn sieve or folded muslin, and pour
it again over the oysters. Take a portion from two quarts of the palest
veal stock, and simmer the beards in it from twenty to thirty minutes.
Heat the soup, flavour it with mace and cayenne, and strain the
stock from the oyster-beards into it. Plump the fish in their own
liquor, but do not let them boil; pour the liquor to the soup, and add to
it a pint of boiling cream; put the oysters into the tureen, dish the
soup, and send it to table quickly. Should any thickening be required,
stir briskly to the stock an ounce and a half of arrow-root entirely free
from lumps, and carefully mixed with a little milk or cream; or, in lieu
of this, when a rich soup is liked, thicken it with four ounces of fresh
butter well blended with three of flour.
Oysters, 8 to 12 dozens; pale veal stock, 2 quarts; cream, 1 pint;
thickening, 1 oz. arrow-root, or butter, 4 oz., flour, 3 oz.
RABBIT SOUP À LA REINE.

Wash and soak thoroughly three young rabbits, put them whole
into the soup-pot, and pour on them seven pints of cold water or of
clear veal broth; when they have stewed gently about three quarters
of an hour lift them out, and take off the flesh of the backs, with a
little from the legs should there not be half a pound of the former;
strip off the skin, mince the meat very small, and pound it to the
smoothest paste; cover it from the air, and set it by. Put back into the
soup the bodies of the rabbits, with two mild onions of moderate
size, a head of celery, three carrots, a faggot of savoury herbs, two
blades of mace, a half-teaspoonful of peppercorns, and an ounce of
salt. Stew the whole softly three hours; strain it off, let it stand to
settle, pour it gently from the sediment, put from four to five pints into
a clean stewpan, and mix it very gradually while hot with the
pounded rabbit-flesh; this must be done with care, for if the liquid be
not added in very small portions at first, the meat will gather into
lumps and will not easily be worked smooth afterwards. Add as
much pounded mace and cayenne as will season the soup
pleasantly, and pass it through a coarse but very clean sieve; wipe
out the stewpan, put back the soup into it, and stir in when it boils, a
pint and a quarter of good cream[32] mixed with a tablespoonful of
the best arrow-root: salt, if needed, should be thrown in previously.
32. We give this receipt exactly as we had it first compounded, but less cream
and rather more arrow-root might be used for it, and would adapt it better to
the economist.

Young rabbits, 3; water, or clear veal broth, 7 pints: 3/4 of an hour.
Remains of rabbits; onions, 2; celery, 1 head; carrots, 3; savoury
herbs; mace, 2 blades; white peppercorns, a half-teaspoonful; salt, 1
oz.: 3 hours. Soup, 4 to 5 pints; pounded rabbit-flesh, 8 oz.; salt,
mace, and cayenne, if needed; cream, 1-1/4 pint; arrow-root, 1
tablespoonful (or 1-1/2 ounce).
BROWN RABBIT SOUP.

Cut down into joints, flour, and fry lightly, two full grown, or three
young rabbits; add to them three onions of moderate size, also fried
to a clear brown; on these pour gradually seven pints of boiling
water, throw in a large teaspoonful of salt, clear off all the scum with
care as it rises, and then put to the soup a faggot of parsley, four not
very large carrots, and a small teaspoonful of peppercorns; boil the
whole very softly from five hours to five and a half; add more salt if
needed, strain off the soup, let it cool sufficiently for the fat to be
skimmed clean from it, heat it afresh, and send it to table with
sippets of fried bread. Spice, with a thickening of rice-flour, or of
wheaten flour browned in the oven, and mixed with a spoonful or two
of very good mushroom catsup, or of Harvey’s sauce, can be added
at pleasure to the above, with a few drops of eschalot-wine, or
vinegar; but the simple receipt will be found extremely good without
them.
Rabbits, 2 full grown, or 3 small; onions fried, 3 middling-sized;
water, 7 pints; salt, 1 large teaspoonful or more; carrots, 4, a faggot
of parsley; peppercorns, 1 small teaspoonful: 5 to 5-1/2 hours.
SUPERLATIVE HARE SOUP.

Cut down a hare into joints, and put into a soup-pot, or large
stewpan, with about a pound of lean ham, in thick slices, three
moderate-sized mild onions, three blades of mace, a faggot of
thyme, sweet marjoram, and parsley, and about three quarts of good
beef stock. Let it stew very gently for full two hours from the time of
its first beginning to boil, and more, if the hare be old. Strain the soup
and pound together very fine the slices of ham and all the flesh of
the back, legs, and shoulders of the hare, and put this meat into a
stewpan with the liquor in which it was boiled, the crumb of two
French rolls, and half a pint of port wine. Set it on the stove to
simmer twenty minutes; then rub it through a sieve, place it again on
the stove till very hot, but do not let it boil: season it with salt and
cayenne, and send it to table directly.
Hare, 1; ham, 12 to 16 oz.; onions, 3 to 6; mace, 3 blades; faggot
of savoury herbs; beef stock, 3 quarts: 2 hours. Crumb of 2 rolls; port
wine, 1/2 pint; little salt and cayenne: 20 minutes.
A LESS EXPENSIVE HARE SOUP.[33]
33. The remains of a roasted hare, with the forcemeat and gravy, are admirably
calculated for making this soup.

Pour on two pounds of neck or shin of beef and a hare well
washed and carved into joints, one gallon of cold water, and when it
boils and has been thoroughly skimmed, add an ounce and a half of
salt, two onions, one large head of celery, three moderate-sized
carrots, a teaspoonful of black peppercorns, and six cloves.
Let these stew very gently for three hours, or longer, should the
hare not be perfectly tender. Then take up the principal joints, cut the
meat from them, mince, and pound it to a fine paste, with the crumb
of two penny rolls (or two ounces of the crumb of household bread)
which has been soaked in a little of the boiling soup, and then
pressed very dry in a cloth; strain, and mix smoothly with it the stock
from the remainder of the hare; pass the soup through a strainer,
season it with cayenne, and serve it when at the point of boiling; if
not sufficiently thick, add to it a tablespoonful of arrow-root
moistened with a little cold broth, and let the soup simmer for an
instant afterwards. Two or three glasses of port wine, and two
dozens of small forcemeat-balls, may be added to this soup with
good effect.
Beef, 2 lbs.; hare, 1; water, 1 gallon; salt, 1-1/2 oz.; onions, 2;
celery, 1 head; carrots, 3; bunch of savoury herbs; peppercorns, 1
teaspoonful; cloves, 6: 3 hours, or more. Bread, 2 oz.; cayenne,
arrow-root (if needed), 1 tablespoonful.
ECONOMICAL TURKEY SOUP.

The remains of a roast turkey, even after they have supplied the
usual mince and broil, will furnish a tureen of cheap and excellent
soup with the addition of a little fresh meat. Cut up rather small two
pounds of the neck or other lean joint of beef, and pour to it five pints
of cold water. Heat these very slowly; skim the liquor when it begins
to boil, and add to it an ounce of salt, a small, mild onion (the
proportion of all the vegetables may be much increased when they
are liked), a little celery, and the flesh and bones of the turkey, with
any gravy or forcemeat that may have been left with them. Let these
boil gently for about three hours; then strain off the soup through a
coarse sieve or cullender, and let it remain until the fat can be
entirely removed from it. It may then be served merely well thickened
with rice[34] which has previously been boiled very dry as for currie,
and stewed in it for about ten minutes; and seasoned with one large
heaped tablespoonful or more of minced parsley, and as much salt
and pepper or cayenne as it may require. This, as the reader will
perceive, is a somewhat frugal preparation, by which the residue of a
roast turkey may be turned to economical account; but it is a
favourite soup at some good English tables, where its very simplicity
is a recommendation. It can always be rendered more expensive,
and of richer quality, by the addition of lean ham or smoked beef,[35]
a larger weight of fresh meat, and catsup or other store-sauces.
34. It will be desirable to prepare six ounces of rice, and to use as much of it as
may be required, the reduction of the stock not being always equal, and the
same weight of rice therefore not being in all cases sufficient. Rice-flour can
be substituted for the whole grain and used as directed for Rice Flour Soup,
page 15.

35. As we have stated in our chapter of Foreign Cookery, the Jewish smoked
beef, of which we have given particulars there, imparts a superior flavour to
soups and gravies; and it is an economical addition to them, as a small
portion of it will much heighten their savour.

Turkey soup à la reine is made precisely like the Potage à la Reine
of fowls or pullets, of which the receipt will be found in another part
of this chapter.
PHEASANT SOUP.

Half roast a brace of well-kept pheasants, and flour them rather
thickly when they are first laid to the fire. As soon as they are nearly
cold take all the flesh from the breasts, put it aside, and keep it
covered from the air; carve down the remainder of the birds into
joints, bruise the bodies thoroughly, and stew the whole gently from
two to three hours in five pints of strong beef broth; then strain off the
soup, and press as much of it as possible from the pheasants. Let it
cool; and in the mean time strip the skins from the breasts, mince
them small, and pound them to the finest paste, with half as much
fresh butter, and half of dry crumbs of bread; season these well with
cayenne, sufficiently with salt, and moderately with pounded mace
and grated nutmeg, and add, when their flavour is liked, three or four
eschalots previously boiled tender in a little of the soup, left till cold,
and minced before they are put into the mortar. Moisten the mixture
with the yolks of two or three eggs, roll it into small balls of equal
size, dust a little flour upon them, skim all the fat from the soup, heat
it in a clean stewpan, and when it boils throw them in and poach
them from ten to twelve minutes, but first ascertain that the soup is
properly seasoned with salt and cayenne. We have recommended
that the birds should be partially roasted before they are put into the
soup-pot, because their flavour is much finer when this is done than
when they are simply stewed; they should be placed rather near to a
brisk fire that they may be quickly browned on the surface without
losing any of their juices, and the basting should be constant. A slight
thickening of rice-flour and arrow-root can be added to the soup at
pleasure, and the forcemeat-balls may be fried and dropped into the
tureen when they are preferred so. Half a dozen eschalots lightly
browned in butter, and a small head of celery, may also be thrown in
after the birds begin to stew, but nothing should be allowed to prevail
over the natural flavour of the game itself; and this should be
observed equally with other kinds, as partridges, grouse, and
venison.
Pheasants, 2: roasted 20 to 25 minutes. Strong beef broth, or
stock, 5 pints: 2 to 3 hours. Forcemeat-balls: breasts of pheasants,
half as much dry bread-crumbs and of butter, salt, mace, cayenne;
yolks of 2 or 3 eggs (and at choice 3 or 4 boiled eschalots).
Obs.—The stock may be made of six pounds of shin of beef, and
four quarts of water reduced to within a pint of half. An onion, a large
carrot, a bunch of savoury herbs, and some salt and spice should be
added to it: one pound of neck of veal or of beef will improve it.
ANOTHER PHEASANT SOUP.

Boil down the half-roasted birds as directed in the foregoing
receipt, and add to the soup, after it is strained and re-heated, the
breasts pounded to the finest paste with nearly as much bread
soaked in a little of the stock and pressed very dry; for the proper
manner of mixing them, see Potage à la Reine, page 29. Half a pint
of small mushrooms cleaned as for pickling, then sliced rather
thickly, and stewed from ten to fifteen minutes without browning, in
an ounce or two of fresh butter, with a slight seasoning of mace,
cayenne, and salt, then turned into the mortar and pounded with the
other ingredients, will be found an excellent addition to the soup,
which must be passed through a strainer after the breasts are added
to it, brought to the point of boiling, and served with sippets à la
Reine, or with others simply fried of a delicate brown and well dried.
We have occasionally had a small quantity of delicious soup made
with the remains of birds which have been served at table; and
where game is frequently dressed, the cook, by reserving all the
fragments for the purpose, and combining different kinds, may often
send up a good tureen of such, made at a very slight cost.
Pheasants, 2; stock, 5 pints; bread soaked in gravy (see Panada,
Chapter VIII), nearly as much in bulk as the flesh of the breasts of
the birds; mushrooms, 1/2 pint, stewed in one or two oz. of butter 10
to 15 minutes, then pounded with flesh of pheasants. Salt, cayenne
and mace, to season properly.
PARTRIDGE SOUP.

This is, we think, superior in flavour to the pheasant soup. It
should be made in precisely the same manner, but three birds
allowed for it instead of two. Grouse and partridges together will
make a still finer one; the remains of roast grouse even, added to a
brace of partridges, will produce a very good effect.
MULLAGATAWNY SOUP.

Slice, and fry gently in some good butter three or four large
onions, and when they are of a fine equal amber-colour lift them out
with a slice and put them into a deep stewpot, or large thick
saucepan; throw a little more butter into the pan, and then brown
lightly in it a young rabbit, or the prime joints of two, or a fowl cut
down small, and floured. When the meat is sufficiently browned, lay
it upon the onions, pour gradually to them a quart of good boiling
stock, and stew it gently from three quarters of an hour to an hour;
then take it out, and pass the stock and onions through a fine sieve
or strainer. Add to them two pints and a half more of stock, pour the
whole into a clean pan, and when it boils stir to it two tablespoonsful
of currie-powder mixed with nearly as much of browned flour, and a
little cold water or broth, put in the meat, and simmer it for twenty
minutes or longer should it not be perfectly tender, add the juice of a
small lemon just before it is dished, serve it very hot, and send boiled
rice to table with it. Part of a pickled mango cut into strips about the
size of large straws, is sometimes served in this soup, after being
stewed in it for a few minutes; a little of the pickle itself should be
added with it. We have given here the sort of receipt commonly used
in England for mullagatawny, but a much finer soup may be made by
departing from it in some respects. The onions, of which the
proportion may be increased or diminished to the taste, after being
fried slowly and with care, that no part should be overdone, may be
stewed for an hour in the first quart of stock with three or four ounces
of grated cocoa-nut,[36] which will impart a rich mellow flavour to the
whole. After all of this that can be rubbed through the sieve has been
added to as much more stock as will be required for the soup, and
the currie-powder and thickening have been boiled in it for twenty
minutes, the flesh of part of a calf’s head,[37] previously stewed
almost tender, and cut as for mock turtle, with a sweetbread also
parboiled or stewed in broth, and divided into inch-squares, will
make an admirable mullagatawny, if simmered in the stock until they
have taken the flavour of the currie-seasoning. The flesh of a couple
of calves’ feet, with a sweetbread or two, may, when more
convenient, be substituted for the head. A large cupful of thick
cream, first mixed and boiled with a teaspoonful of flour or arrow-root
to prevent its curdling, and stirred into the soup before the lemon-
juice, will enrich and improve it much.
36. That our readers to whom this ingredient in soups is new, may not be misled,
we must repeat here, that although the cocoa-nut when it is young and fresh
imparts a peculiarly rich flavour to any preparation, it is not liked by all eaters,
and is better omitted when the taste of a party is not known, and only one
soup is served.

37. The scalp or skin only of a calf’s head will make excellent mullagatawny, with
good broth for stock; and many kinds of shell-fish also.

Rabbit, 1, or the best joints of, 2, or fowl, 1; large onions, 4 to 6;
stock, 1 quart: 3/4 to 1 hour. 2-1/2 pints more of stock; currie-powder,
2 heaped tablespoonsful, with 2 of browned flour; meat and all
simmered together 20 minutes or more; juice of lemon, 1 small; or
part of pickled mango stewed in the soup 3 to 4 minutes.
Or,—onions, 3 to 6; cocoa-nut, 3 to 4 oz.; stock, 1 quart; stewed 1
hour. Stock, 3 pints (in addition to the first quart); currie-powder and
thickening each, 2 large tablespoonsful: 20 minutes. Flesh of part of
calf’s head and sweetbread, 15 minutes or more. Thick cream, 1
cupful; flour or arrow-root, 1 teaspoonful; boiled 2 minutes, and
stirred to the soup. Chili vinegar, 1 tablespoonful, or lemon-juice, 2
tablespoonsful.
Obs. 1.—The brain of the calf’s head stewed for twenty minutes in
a little of the stock, then rubbed through a sieve, diluted gradually
with more of the stock, and added as thickening to the soup, will be
found an admirable substitute for part of the flour.
Obs. 2.—Three or four pounds of a breast of veal, or an equal
weight of mutton, free from bone and fat, may take the place of
rabbits or fowls in this soup, for a plain dinner. The veal should be
cut into squares of an inch and a half, or into strips of an inch in
width, and two in length; and the mutton should be trimmed down in
the same way, or into very small cutlets.
Obs. 3.—For an elegant table, the joints of rabbit or of fowl should
always be boned before they are added to the soup, for which, in this
case, a couple of each will be needed for a single tureen, as all the
inferior joints must be rejected.
TO BOIL RICE FOR MULLAGATAWNY SOUPS, OR FOR
CURRIES.

The Patna, or small-grained rice, which is not so good as the
Carolina, for the general purposes of cookery, ought to be served
with currie. First take out the unhusked grains, then wash the rice in
several waters, and put it into a large quantity of cold water; bring it
gently to boil, keeping it uncovered, and boil it softly for fifteen
minutes, when it will be perfectly tender, and every grain will remain
distinct. Throw it into a large cullender, and let it drain for ten minutes
near the fire; should it not then appear quite dry, turn it into a dish,
and set it for a short time into a gentle oven, or let it steam in a clean
saucepan near the fire. It should neither be stirred, except just at
first, to prevent its lumping while it is still quite hard, nor touched with
either fork or spoon; the stewpan may be shaken occasionally,
should the rice seem to require it, and it should be thrown lightly from
the cullender upon the dish. A couple of minutes before it is done,
throw in some salt, and from the time of its beginning to boil remove
the scum as it rises.
Patna rice, 1/2 lb.; cold water, 2 quarts: boiled slowly, 15 minutes.
Salt, 1 large teaspoonful.
Obs.—This, of all the modes of boiling rice which we have tried,
and they have been very numerous, is indisputably the best. The
Carolina rice answers well dressed in the same manner, but requires
four or five minutes longer boiling: it should never be served until it is
quite tender. One or two minutes, more or less, will sometimes, from
the varying quality of the grain, be requisite to render it tender.
GOOD VEGETABLE MULLAGATAWNY.

Dissolve in a large stewpan or thick iron saucepan, four ounces of
butter, and when it is on the point of browning, throw in four large
mild onions sliced, three pounds weight of young vegetable marrow
cut in large dice and cleared from the skin and seeds, four large or
six moderate-sized cucumbers, pared, split, and emptied likewise of
their seeds, and from three to six large acid apples, according to the
taste; shake the pan often, and stew these over a gentle fire until
they are tolerably tender; then strew lightly over and mix well
amongst them, three heaped tablespoonsful of mild currie powder,
with nearly a third as much of salt, and let the vegetables stew from
twenty to thirty minutes longer; then pour to them gradually sufficient
boiling water (broth or stock if preferred) to just cover them, and
when they are reduced almost to a pulp press the whole through a
hair-sieve with a wooden spoon, and heat it in a clean stewpan, with
as much additional liquid as will make two quarts with that which was
first added. Give any flavouring that may be needed, whether of salt,
cayenne, or acid, and serve the soup extremely hot. Should any
butter appear on the surface, let it be carefully skimmed off, or stir in
a small dessertspoonful of arrow-root (smoothly mixed with a little
cold broth or water) to absorb it. Rice may be served with this soup
at pleasure, but as it is of the consistence of winter peas soup, it
scarcely requires any addition. The currie powder may be altogether
omitted for variety, and the whole converted into a plain vegetable
potage; or it may be rendered one of high savour, by browning all the
vegetables lightly, and adding to them rich brown stock. Tomatas,
when in season, may be substituted for the apples, after being
divided, and freed from their seeds.
Butter, 4 oz.; vegetable marrow, pared and scooped, 3 lbs.; large
mild onions, 4; large cucumbers, 4; or middling-sized, 6; apples, or
large tomatas, 3 to 6; 30 to 40 minutes. Mild currie-powder, 3 heaped
tablespoonsful; salt, 1 small tablespoonful: 20 to 30 minutes.
Water, broth, or good stock, 2 quarts.
CUCUMBER SOUP.

Pare, split, and empty from eight to twenty[38] fine, well grown,
but not old cucumbers,—those which have the fewest seeds are best
for the purpose; throw a little salt over them, and leave them for an
hour to drain, then put them with the white part only of a couple of
mild onions into a deep stewpan or delicately clean saucepan, cover
them nearly half an inch with pale but good veal stock, and stew
them gently until they are perfectly tender, which will be in from three
quarters of an hour to an hour and a quarter; work the whole through
a hair-sieve, and add to it as much more stock as may be needed to
make the quantity of soup required for table; and as the cucumbers,
from their watery nature, will thicken it but little, stir to it when it boils,
as much arrow-root, rice-flour, or tous les mois (see page 1), as will
bring it to a good consistence; add from half to a whole pint of boiling
cream, and serve the soup immediately. Salt and cayenne sufficient
to season it, should be thrown over the cucumbers while they are
stewing. The yolks of six or eight eggs, mixed with a dessertspoonful
of chili vinegar, may be used for this soup instead of cream; three
dessertspoonsful of minced parsley may then be strewed into it a
couple of minutes before they are added: it must not, of course, be
allowed to boil after they are stirred in.
38. This is a great disparity of numbers; but some regard must be had to
expense, where the vegetable cannot be obtained with facility.