Generalized Linear Models
and Extensions
Fourth Edition
James W. Hardin
Department of Epidemiology and Biostatistics, University of South Carolina
Joseph M. Hilbe
Statistics, School of Social and Family Dynamics, Arizona State University
Published by Stata Press, 4905 Lakeway Drive, College Station, Texas 77845
Typeset in LaTeX 2ε
10 9 8 7 6 5 4 3 2 1
No part of this book may be reproduced, stored in a retrieval system, or transcribed, in any form or by any
means—electronic, mechanical, photocopy, recording, or otherwise—without the prior written permission
of StataCorp LLC.
Stata, Stata Press, Mata, and NetCourse are registered trademarks of StataCorp LLC.
Stata and Stata Press are registered trademarks with the World Intellectual Property
Organization of the United Nations.
In memory of
Joseph M. Hilbe
Figures
Tables
Listings
Preface
1 Introduction
1.1 Origins and motivation
1.2 Notational conventions
1.3 Applied or theoretical?
1.4 Road map
1.5 Installing the support materials
I Foundations of Generalized Linear Models
2 GLMs
2.1 Components
2.2 Assumptions
2.3 Exponential family
2.4 Example: Using an offset in a GLM
2.5 Summary
3 GLM estimation algorithms
3.1 Newton–Raphson (using the observed Hessian)
3.2 Starting values for Newton–Raphson
3.3 IRLS (using the expected Hessian)
3.4 Starting values for IRLS
3.5 Goodness of fit
3.6 Estimated variance matrices
3.6.1 Hessian
3.6.2 Outer product of the gradient
3.6.3 Sandwich
3.6.4 Modified sandwich
3.6.5 Unbiased sandwich
3.6.6 Modified unbiased sandwich
3.6.7 Weighted sandwich: Newey–West
3.6.8 Jackknife
Usual jackknife
One-step jackknife
Weighted jackknife
Variable jackknife
3.6.9 Bootstrap
Usual bootstrap
Grouped bootstrap
3.7 Estimation algorithms
3.8 Summary
4 Analysis of fit
4.1 Deviance
4.2 Diagnostics
4.2.1 Cook’s distance
4.2.2 Overdispersion
4.3 Assessing the link function
4.4 Residual analysis
4.4.1 Response residuals
4.4.2 Working residuals
4.4.3 Pearson residuals
4.4.4 Partial residuals
4.4.5 Anscombe residuals
4.4.6 Deviance residuals
4.4.7 Adjusted deviance residuals
4.4.8 Likelihood residuals
4.4.9 Score residuals
4.5 Checks for systematic departure from the model
4.6 Model statistics
4.6.1 Criterion measures
AIC
BIC
4.6.2 The interpretation of R² in linear regression
Percentage variance explained
The ratio of variances
A transformation of the likelihood ratio
A transformation of the F test
Squared correlation
4.6.3 Generalizations of linear regression R² interpretations
Efron’s pseudo-R²
McFadden’s likelihood-ratio index
Ben-Akiva and Lerman adjusted likelihood-ratio index
McKelvey and Zavoina ratio of variances
Transformation of likelihood ratio
Cragg and Uhler normed measure
4.6.4 More R² measures
The count R²
The adjusted count R²
Veall and Zimmermann R²
Cameron–Windmeijer R²
4.7 Marginal effects
4.7.1 Marginal effects for GLMs
4.7.2 Discrete change for GLMs
II Continuous Response Models
Author index
Subject index
Figures
5.1 Pearson residuals obtained from linear model
5.2 Normal scores versus sorted Pearson residuals obtained from linear model
5.3 Pearson residuals versus kilocalories; Pearson residuals obtained from linear
model
5.4 Pearson residuals obtained from log-Gaussian model (two outliers removed)
5.5 Pearson residuals versus fitted values from log-Gaussian model (two outliers
removed)
5.6 Pearson residuals from lognormal model (log-transformed outcome, two
outliers removed, and zero outcome removed)
5.7 Pearson residuals versus fitted values from lognormal model (log-
transformed outcome, two outliers removed, and zero outcome removed)
5.8 Normal scores versus sorted Pearson residuals obtained from lognormal
model (log-transformed outcome, two outliers removed, and zero outcome
removed)
5.9 Pearson residuals versus kilocalories; Pearson residuals obtained from
lognormal model (log-transformed outcome, two outliers removed, and zero
outcome removed)
6.1 Anscombe residuals versus log(variance)
7.1 Inverse Gaussian ( , )
7.2 Inverse Gaussian ( , )
7.3 Inverse Gaussian ( , )
7.4 Inverse Gaussian ( , )
7.5 Inverse Gaussian ( , )
7.6 Inverse Gaussian ( , )
9.1 Sample proportions of girls reaching menarche for each age category
9.2 Predicted probabilities of girls reaching menarche for each age category
9.3 Predicted probabilities and sample proportions of girls reaching menarche
for each age category
10.1 Probit and logit functions
10.2 Predicted probabilities for probit and logit link function in grouped binary
models. The observed (sample) proportions are included as well.
10.3 Complementary log-log and log-log functions
10.4 Probit, logit, and identity functions
10.5 Observed proportion of carrot fly damage for each treatment (see
table 10.3)
13.1 Frequency of occurrence versus LOS
14.1 Probability mass functions for negative binomial models
14.2 Histogram of response variables created as a mixture of scaled Poissons
14.3 Graphs are organized for the conditional distribution of the outcome
conditional on the covariates (x1, x2). The values of the covariates are (0,0) in
the upper left, (0,1) in the upper right, (1,0) in the lower left, and (1,1) in the
lower right. Bars represent the empirical distribution of the outcome variable.
Circles represent the estimated probabilities from the fitted Poisson
regression model. Triangles represent the estimated probabilities from the
fitted heaped Poisson regression model.
15.1 Length of stay versus admission type for elective admissions
15.2 Length of stay versus admission type for urgent admissions
15.3 Length of stay versus admission type for emergency admissions
17.1 Pearson residuals versus linear predictor
17.2 Pearson residuals versus log(variance)
17.3 Pearson residuals versus linear predictor
17.4 Pearson residuals versus log(variance)
18.1 Simulation–extrapolation results
20.1 Centered number of doctor visits
20.2 Centered age in years
20.3 Centered number of doctor visits
20.4 Centered age in years
20.5 Out of work
20.6 Standardized age in years
Tables
2.1 Predicted values for various choices of variance function
9.1 Binomial regression models
9.2 Common binomial link functions
9.3 Variables from heart01.dta
10.1 Common binomial noncanonical link functions
10.2 Noncanonical binomial link functions
10.3 1964 microplot data of carrot fly damage
10.4 Survivors among different categorizations of passengers on the Titanic
14.1 Other count-data models
14.2 Variance functions for count-data models
14.3 Poisson and negative binomial panel-data models
14.4 Types of censoring for outcome
15.1 Multinomial (three-level) logistic regression with one binary predictor
15.2 Multinomial (three-level) logistic regression with one binary predictor
where the coefficients of the reference outcome are set to zero
16.1 Ordered (three-level) logistic regression with one binary predictor
16.2 Ordered (three-level) logistic regression with one binary predictor
where additional coefficient constraints are imposed
16.3 Cumulative logits: outcome 1 versus outcomes 2 and 3
16.4 Cumulative logits: outcomes 1 and 2 versus outcome 3
18.1 Stata commands for mixed-effects modeling
18.2 Equivalent commands in Stata
18.3 Equivalent random-effects logistic regression commands in Stata
19.1 Bivariate copula functions
19.2 Programs for generating bivariate outcomes with rejectsample
20.1 Built-in support for the bayes prefix
20.2 Additional built-in support for the bayes prefix
20.3 Built-in log likelihoods
20.4 Illustrated (user-specified) log likelihoods
20.5 Built-in prior distributions
20.6 Results from bayesmh with informative and noninformative priors
21.1 Resulting standard errors
21.2 Statistics for predict
21.3 Equivalent Stata commands
A.1 Variance functions
A.2 Link and inverse link functions
A.3 First derivatives of link functions
A.4 First derivatives of inverse link functions
A.5 Second derivatives of link functions
Preface
Many people have contributed to the ideas presented in the new edition of
this book. John Nelder has been the foremost influence. Other important and
influential people include Peter Bruce, David Collett, David Hosmer, Stanley
Lemeshow, James Lindsey, J. Scott Long, Roger Newson, Scott Zeger, Kung-
Yee Liang, Raymond J. Carroll, H. Joseph Newton, Henrik Schmiediche,
Norman Breslow, Berwin Turlach, Gordon Johnston, Thomas Lumley, Bill
Sribney, Vince Wiggins, Mario Cleves, William Greene, Andrew Robinson,
Heather Presnal, and others. Specifically, for this edition, we thank Tammy
Cummings, Chelsea Deroche, Xinling Xu, Roy Bower, Julie Royer, James
Hussey, Alex McLain, Rebecca Wardrop, Gelareh Rahimi, Michael G. Smith,
Marco Geraci, Bo Cai, and Feifei Xiao.
Stata Press allowed us to dictate some of the style of this text. In writing this
material in other forms for short courses, we have always included equation
numbers for all equations rather than only for those equations mentioned in text.
Although this is not the standard editorial style for textbooks, we enjoy the
benefits of students being able to communicate questions and comments more
easily (and efficiently). We hope that readers find this practice as beneficial as
our short-course participants have found it.
Errata, datasets, and supporting Stata programs (do-files and ado-files) may
be found at the publisher’s site
http://www.stata-press.com/books/generalized-linear-models-and-extensions/.
We also maintain these materials on the author sites at
http://www.thirdwaystat.com/jameshardin/ and at
https://works.bepress.com/joseph_hilbe/. We are very pleased to be able to
produce this newest edition. Working on this text over the past 17 years, from
the first edition in 2001 to the present, has been a tremendously satisfying
experience.
James W. Hardin
Joseph M. Hilbe
March 2018
Chapter 1
Introduction
In updating this text, our primary goal is to convey the practice of analyzing data
via generalized linear models to researchers across a broad spectrum of scientific
fields. We lay out the framework used for describing various aspects of data and
for communicating tools for data analysis. This initial part of the text contains no
examples. Rather, we focus on the lexicon of generalized linear models used in
later chapters. These later chapters include examples from fields such as
biostatistics, economics, and survival analysis.
We wrote this text for researchers who want to understand the scope and
application of generalized linear models while being introduced to the
underlying theory. For brevity’s sake, we use the acronym GLM to refer to the
generalized linear model, but we acknowledge that GLM has been used elsewhere
as an acronym for the general linear model. The latter usage, of course, refers to
the area of statistical modeling based solely on the normal or Gaussian
probability distribution.
Nearly every text that addresses a statistical topic uses one or more statistical
computing packages to calculate and display results. We use Stata exclusively,
though we do refer occasionally to other software packages—especially when it
is important to highlight differences.
Some specific statistical models that make up GLMs are often found as
standalone software modules, typically fit using maximum likelihood methods
based on quantities from model-specific derivations. Stata has several such
commands for specific GLMs including poisson, logistic, and regress. Some
of these procedures were included in the Stata package from its first version.
More models have been addressed through commands that users wrote in Stata’s
programming language, leading to the creation of highly complex statistical
models. Some of these community-contributed commands have since been
incorporated into the official Stata package. We highlight these commands and
illustrate how to fit models in the absence of a packaged command; see
especially chapter 14.
Stata’s glm command was originally created as a community-contributed
command (Hilbe 1993b) and then officially adopted into Stata two years later as
part of Stata 4.0. Examples of the glm command in this edition reflect
StataCorp’s continued updates to the command.
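As a brief illustration of this equivalence (a sketch using hypothetical
variable names y, x1, and x2 rather than a dataset from this text), a model
fit with a packaged command can also be fit with glm by specifying the
matching family and link:

    * Hypothetical data: y is a count outcome; x1 and x2 are covariates
    poisson y x1 x2
    * The same model fit through the glm command
    glm y x1 x2, family(poisson) link(log)

Because the log link is canonical for the Poisson family, the observed and
expected information matrices coincide, so the two commands should produce
the same coefficient estimates and, with default options, the same standard
errors.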
We believe that GLMs are best understood if their computational basis is clear.
Hence, we begin our exposition with an explanation of the foundations and
computation of GLMs; there are two major methodologies for developing
algorithms. We then show how simple changes to the base algorithms lead to
different GLM families, links, and even further extensions. In short, we attempt to
lay the GLM open to inspection and to make every part of it as clear as possible.
In this fashion, the reader can understand exactly how and why GLM algorithms
can be used, as well as altered, to better model a desired dataset.
Perhaps more than any other text in this area, we alternately examine two
major computational GLM algorithms and their modifications:
1. Iteratively reweighted least squares (IRLS)
2. Newton–Raphson
Interestingly, some of the models we present are calculated only by using one
of the above methods. Iteratively reweighted least squares is the more
specialized technique and is applied less often. Yet it is typically the algorithm of
choice for quasilikelihood models such as generalized estimating equations
(GEEs). On the other hand, truncated models that do not fit neatly into the
exponential family of distributions are modeled using Newton–Raphson methods
—and for this, too, we show why. Again, focusing on the details of calculation
should help the reader understand both the scope and the limits of a particular
model.
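In standard GLM notation (a sketch in the spirit of chapter 3, not the book’s
full derivation), the two updates at iteration r may be written as

    \beta^{(r)} = \beta^{(r-1)}
        - \left( \frac{\partial^2 \mathcal{L}}{\partial\beta\,\partial\beta^{\mathsf{T}}} \right)^{-1}
          \frac{\partial \mathcal{L}}{\partial\beta}
        \qquad \text{(Newton--Raphson, observed Hessian)}

    \beta^{(r)} = \left( X^{\mathsf{T}} W^{(r-1)} X \right)^{-1}
          X^{\mathsf{T}} W^{(r-1)} z^{(r-1)}
        \qquad \text{(IRLS, expected Hessian)}

where the derivatives of the log likelihood \mathcal{L} are evaluated at
\beta^{(r-1)}, W = \mathrm{diag}\{(\partial\mu/\partial\eta)^2 / V(\mu)\} is the
weight matrix implied by the variance function V(\mu), and
z = \eta + (y - \mu)\,\partial\eta/\partial\mu is the working response.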
Whenever possible, we present the log likelihood for the model under
discussion. In writing the log likelihood, we include offsets so that interested
programmers can see how those elements enter estimation. In fact, we attempt to
offer programmers the ability to understand and write their own working GLMs,
plus many useful extensions. As programmers ourselves, we believe that there is
value in such a presentation; we would have much enjoyed having it at our
fingertips when we first entered this statistical domain.
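For example, in exponential-family form (a generic sketch; the family-specific
versions appear in later chapters), an offset enters the log likelihood only
through the linear predictor:

    \mathcal{L} = \sum_{i=1}^{n} \left\{ \frac{y_i \theta_i - b(\theta_i)}{a(\phi)}
        + c(y_i, \phi) \right\}, \qquad
    g(\mu_i) = \eta_i = x_i \beta + \mathrm{offset}_i

where \mu_i = b'(\theta_i) ties the canonical parameter \theta_i to the linear
predictor through the link function g.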
1.2 Notational conventions
We use L to denote the likelihood and the script 𝓛 to denote the log likelihood.
We use X to denote the design matrix of independent (explanatory) variables.
When appropriate, we use boldface type to emphasize that we are referring to
a matrix; a lowercase letter with a subscript will refer to the ith row from the
matrix X.
We use y to denote the dependent (response) variable and refer to the vector
β as the coefficients of the design matrix. We use β̂ when we wish to discuss or
emphasize the fitted coefficients. Throughout the text, we discuss the role of the
(vector) linear predictor η = Xβ. In generalizing this concept, we also refer to
the augmented (by an offset) version of the linear predictor η = Xβ + offset.
1.3 Applied or theoretical?
A common question regarding texts concerns their focus. Is the text applied or
theoretical? Our text is both. However, we would argue that it is basically
applied. We show enough technical details for the theoretician to understand the
underlying basis of GLMs. However, we believe that understanding the use and
limitations of a GLM includes understanding its estimation algorithm. For some,
dealing with formulas and algorithms appears thoroughly theoretical. We believe
that it aids in understanding the scope and limits of proper application. Perhaps
we can call the text a bit of both and not worry about classification. In any case,
for those who fear formulas, each formula and algorithm is thoroughly
explained. We hope that by book’s end the formulas and algorithms will seem
simple and meaningful. For completeness, we give the reader references to texts
that discuss more advanced topics and theory.
1.4 Road map
Part I of the text deals with the basic foundations of GLM. We detail the various
components of GLM, including various family, link, variance, deviance, and log-
likelihood functions. We also provide a thorough background and detailed
particulars of both the Newton–Raphson and iteratively reweighted least-squares
algorithms. The chapters that follow highlight this discussion, which describes
the framework through which the models of interest arise.
We also give the reader an overview of GLM residuals, introducing some that
are not widely known, but that nevertheless can be extremely useful for
analyzing a given model’s worth. We discuss the general notion of goodness of
fit and provide a framework through which you can derive more extensions to
GLM. We conclude this part with discussion and illustration of simulation and
data synthesis.