Machine Learning Applications
IEEE Press
445 Hoes Lane
Piscataway, NJ 08854
Edited by
Indranath Chatterjee
Department of Computer Engineering
Tongmyong University
Busan, South Korea
Sheetal Zalte
Department of Computer Science
Shivaji University
Kolhapur, Maharashtra, India
Copyright © 2024 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form
or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee
to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax
(978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should
be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ
07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons,
Inc. and/or its affiliates in the United States and other countries and may not be used without written
permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc.
is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts
in preparing this book, they make no representations or warranties with respect to the accuracy
or completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by
sales representatives or written sales materials. The advice and strategies contained herein may not
be suitable for your situation. You should consult with a professional where appropriate. Further,
readers should be aware that websites listed in this work may have changed or disappeared between
when this work was written and when it is read. Neither the publisher nor authors shall be liable for
any loss of profit or any other commercial damages, including but not limited to special, incidental,
consequential, or other damages.
For general information on our other products and services or for technical support, please contact our
Customer Care Department within the United States at (800) 762-2974, outside the United States at
(317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print
may not be available in electronic formats. For more information about Wiley products, visit our web
site at www.wiley.com.
1 Statistical Similarity in Machine Learning

1.1 Introduction
Bicego 2020; Florêncio et al. 2020; Nanni et al. 2020; Silva et al. 2020, etc.). Surveys of these issues are provided in Costa et al. (2020) and Hämäläinen et al. (2020). All these methods use the Euclidean distance. Therefore, they are unsuitable for solving the problem stated above: to classify objects represented by matrices of independent, identically distributed random values.
Our goal is to extend the featureless approach to similarity-based classification using a nonparametric similarity measure and a nonparametric two-sample test of homogeneity. Owing to the nonparametric nature of these tools, we make no assumption about a hypothetical distribution of the training sample. Also, as we shall demonstrate below, these tools are universal in the sense that, using the proposed test, we can test the homogeneity hypothesis in all possible variants: different location parameters and the same scale parameter, the same location parameter and different scale parameters, and both different location and different scale parameters. The proposed similarity measure is also universal because it is applicable to samples both with and without ties (duplicates).
Consider training samples a = (a1, a2, …, an) ∈ A and b = (b1, b2, …, bn) ∈ B from populations A and B obeying absolutely continuous distributions F and G. The classification problem for a test sample c = (c1, c2, …, cn) reduces to testing the homogeneity of c and a, and of c and b. There are various nonparametric two-sample tests of homogeneity (Derrick et al. 2019). However, every test has its own drawbacks. For example, the Kolmogorov–Smirnov test is universal in the sense that it tests the general hypothesis F = G, but it is very sensitive to outliers and requires large samples. The Wilcoxon signed-rank test is not universal because it tests only the hypothesis about a location shift (i.e. whether E(a) significantly differs from E(c)). In our opinion, the most effective and universal tool was developed in Klyushin and Petunin (2003).
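Before turning to that tool, note that the standard test mentioned above is readily available in SciPy; the following minimal snippet (the data here are arbitrary, chosen only to illustrate the call) runs a two-sample Kolmogorov–Smirnov test of the general hypothesis F = G:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=100)  # sample from F
c = rng.normal(loc=0.5, scale=1.0, size=100)  # sample from G, with a location shift

# Two-sample Kolmogorov-Smirnov test of H0: F = G
statistic, p_value = stats.ks_2samp(a, c)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.4f}")

A small p-value leads to rejecting the homogeneity hypothesis F = G for the pair of samples.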
P(x ∈ (a(i), a(j))) = (j − i) / (n + 1),   j > i,   (1.1)

where a(i) and a(j) denote the ith and jth order statistics of the sample a.
Note that h in (1.3) is also a binomial proportion. Therefore, the test for homogeneity may be formulated in the following way: the samples are considered homogeneous if the confidence interval for the binomial proportion h covers 0.95; otherwise, the homogeneity hypothesis is rejected.
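The full construction, including the confidence interval of (1.3) (not reproduced here), is given in Klyushin and Petunin (2003). The sketch below is an illustration only: for every pair of order statistics of the first sample, it checks whether a confidence interval for the proportion of the second sample falling between them covers the expected value (j − i)/(n + 1) from (1.1). The use of a Wilson interval and all function names are our assumptions, not the authors' exact formulas:

import numpy as np
from scipy import stats

def wilson_interval(k, n, conf=0.95):
    # Wilson confidence interval for a binomial proportion k/n
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

def p_statistic(a, c, conf=0.95):
    # Fraction of pairs (i, j), i < j, of order statistics of `a` whose
    # interval (a_(i), a_(j)) captures elements of `c` at a rate compatible
    # with the expected probability (j - i) / (n + 1) from equation (1.1)
    a_sorted = np.sort(np.asarray(a))
    c = np.asarray(c)
    n, m = a_sorted.size, c.size
    hits = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            k = int(np.sum((c > a_sorted[i]) & (c < a_sorted[j])))
            low, high = wilson_interval(k, m, conf)
            if low <= (j - i) / (n + 1) <= high:
                hits += 1
    return hits / total

rng = np.random.default_rng(0)
print(p_statistic(rng.normal(0, 1, 50), rng.normal(0, 1, 50)))  # near 0.95
print(p_statistic(rng.normal(0, 1, 50), rng.normal(2, 1, 50)))  # well below 0.95

For two homogeneous samples, h should land near 0.95; per the rule above, one would then check whether the confidence interval for h covers 0.95.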
Figure 1.1 Behavior of the P- and KS-statistics in testing the location and scale shift hypothesis (statistic value, 0.0–0.8, versus the parameter α, 0.0–1.0).
The null hypothesis supposes that the distributions have the same standard deviations. The results are presented in Figure 1.1.
In Figure 1.1, we see that the p-statistics decreases as α increases. The point where we can reject the null hypothesis about both location and scale shift is α = 0.3. The Kolmogorov–Smirnov test detects the location shift when α = 0.1 and the scale shift when α = 0.2.
The graph demonstrates the very high sensitivity of both tests. However, the high sensitivity of the KS-statistics has a negative side: for α > 0.2, the p-value of the KS-test stays close to 0. Therefore, it merely recognizes the fact that the distributions are different but does not estimate the magnitude of this difference. In contrast, the graph of the p-statistics is monotonic and may be used as a similarity measure between samples over the whole range of the parameter α. These results may be easily reproduced for pairs of samples drawn from various distributions (lognormal, uniform, gamma, etc.). Examples are given in Klyushin and Martynenko (2021).
The test based on the p-statistics successfully estimated the similarity and difference between the feature samples of patients with breast cancer (Andrushkiw et al. 2007), detected change points in time series (Klyushin and Martynenko 2021), and compared forecasting models for the COVID-19 epidemic curve (Klyushin 2021). The applications of the p-statistics are not limited to the abovementioned problems. It may be useful, for instance, for testing whether two rankings come from the same distribution (Balázs et al. 2022) and for constructing a statistical depth (Goibert et al. 2022). In the best case, the proposed test is more effective due to its universal nature; in the worst case, it is as effective as the Kolmogorov–Smirnov and other tests (Klyushin 2021). The p-statistics is a so-called “soft” similarity measure. In contrast to other tests, the p-statistics is stable with respect to outliers and anomalies. Therefore, it is a natural measure of similarity between two samples.
1.6 Summary
References
Andrushkiw, R.I., Boroday, N.V., Klyushin, D.A., and Petunin, Y.I. (2007). Computer-
Aided Cytogenetic Method of Cancer Diagnosis. New York: Nova Publishers.
Balázs, R., Baranyi, M., and Héberger, K. (2022). Testing rankings with cross-validation. arXiv preprint. https://doi.org/10.48550/arXiv.2105.11939.
Bicego, M. (2020). Dissimilarity random forest clustering. IEEE International
Conference on Data Mining (ICDM), Sorrento, Italy (17–20 November 2020),
pp. 936–941. IEEE. https://doi.org/10.1109/ICDM50108.2020.00105.
Caldas, W.L., Gomes, J.P.P., and Mesquita, D.P.P. (2018). Fast Co-MLM: an efficient
semi-supervised co-training method based on the minimal learning machine. New
Generation Computing 36: 41–58. https://doi.org/10.1007/s00354-017-0027-x.
Cao, H., Bernard, S., Sabourin, R., and Heutte, L. (2019). Random forest dissimilarity based multi-view learning for radiomics application. Pattern Recognition 88: 185–197. https://doi.org/10.1016/j.patcog.2018.11.011.
Costa, Y.M.G., Bertolini, D., Britto, A.S. et al. (2020). The dissimilarity approach: a
review. Artificial Intelligence Review 53: 2783–2808. https://doi.org/10.1007/
s10462-019-09746-z.
Derrick, B., White, P., and Toher, D. (2019). Parametric and non-parametric tests for
the comparison of two samples which both include paired and unpaired
observations. Journal of Modern Applied Statistical Methods 18: eP2847.
https://doi.org/10.22237/jmasm/1556669520.
Duin, R.P.W., de Ridder, D., and Tax, D.N.J. (1997). Experiments with a featureless
approach to pattern recognition. Pattern Recognition Letters 18: 1159–1166.
https://doi.org/10.1016/S0167-8655(97)00138-4.
Duin, R.P.W., Pekalska, E., and de Ridder, D. (1999). Relational discriminant analysis.
Pattern Recognition Letters 20: 1175–1181. https://doi.org/10.1016/S0167-
8655(99)00085-9.
Florêncio, J.A., Dias, M.L.D., and de Souza Júnior, A.H. (2018). A fuzzy c-means-based approach for selecting reference points in minimal learning machines. In: Fuzzy Information Processing (ed. G.A. Barreto and R. Coelho), 398–407. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-95312-0_34.
Florêncio, J.A., Oliveira, S.A., Gomes, J.P., and da Rocha Neto, A.R. (2020). A new perspective for minimal learning machines: a lightweight approach. Neurocomputing 401. https://doi.org/10.1016/j.neucom.2020.03.088.
Goibert, M., Clémençon, S., Irurozki, E., and Mozharovskyi, P. (2022). Statistical depth functions for ranking distributions: definitions, statistical learning and applications. Proceedings of the 25th International Conference on Artificial Intelligence and Statistics AISTATS 2022, Valencia, Spain (28–30 March 2022). https://hal.archives-ouvertes.fr/hal-03537148/document. https://doi.org/10.48550/arXiv.2201.08105.
Hämäläinen, J., Alencar, A., Kärkkäinen, T. et al. (2020). Minimal learning machine:
theoretical results and clustering-based reference point selection. Journal of
Machine Learning Research 21: 1–29. http://jmlr.org/papers/v21/19-786.html.
Hill, B.M. (1968). Posterior distribution of percentiles: Bayes' theorem for sampling from a population. Journal of the American Statistical Association 63: 677–691.
Kärkkäinen, T. (2019). Extreme minimal learning machine: ridge regression with
distance-based basis. Neurocomputing 342: 33–48. https://doi.org/10.1016/j.
neucom.2018.12.078.
Klyushin, D. (2021). Non-parametric k-sample tests for comparing forecasting
models. Polibits 62: 33–41. http://www.polibits.gelbukh.com/2020_62/Non-
Parametric%20k-Sample%20Tests%20for%20Comparing%20Forecasting%20Models.
pdf. https://doi.org/10.17562/PB-62-4.
Klyushin, D. and Martynenko, I. (2021). Nonparametric test for change point detection in time series. Proceedings of the 3rd International Workshop ‘Modern Machine Learning Technologies and Data Science’, MoMLeT&DS 2021. Volume I: Main Conference, Lviv-Shatsk, Ukraine (5–6 June 2021), pp. 117–127. https://ceur-ws.org/Vol-2917/paper11.pdf (accessed 12 November 2022).
Klyushin, D.A. and Petunin, Y.I. (2003). A nonparametric test for the equivalence of
populations based on a measure of proximity of samples. Ukrainian Mathematical
Journal 55: 181–198. https://doi.org/10.1023/A:1025495727612.
Kulis, B. (2013). Metric learning: a survey. Foundations and Trends in Machine
Learning 5: 287–364. https://doi.org/10.1561/2200000019.
Maia, A.N., Dias, M.L.D., Gomes, J.P.P., and da Rocha Neto, A.R. (2018). Optimally
selected minimal learning machine. In: Intelligent Data Engineering and
Automated Learning – IDEAL (ed. H. Yin, D. Camacho, P. Novais, and A.J.
Tallón-Ballesteros), 670–678. Cham: Springer International Publishing. https://doi.
org/10.1007/978-3-030-33617-2.
Mesquita, D.P.P., Gomes, J.P.P., and de Souza Junior, A.H. (2017). Ensemble of
efficient minimal learning machines for classification and regression. Neural
Processing Letters 46: 751–766. https://doi.org/10.1007/s11063-017-9587-5.
Mottl, V., Dvoenko, S., Seredin, O. et al. (2001). Featureless pattern recognition in an
imaginary Hilbert space and its application to protein fold classification. Machine
Learning and Data Mining in Pattern Recognition, Leipzig, Germany (25–27 July
2001), pp. 322–336. Lecture Notes in Computer Science, 2123. https://doi.org/
10.1007/3-540-44596-X_26.
Mottl, V., Seredin, O., Dvoenko, S. et al. (2002). Featureless pattern recognition in an
imaginary Hilbert space. International Conference on Pattern Recognition 2: 88–91.
https://doi.org/10.1109/ICPR.2002.1048244.
Mottl, V., Seredin, O., and Krasotkina, O. (2017). Compactness hypothesis, potential
functions, and rectifying linear space. Machine Learning: International Conference
Commemorating the 40th Anniversary of Emmanuil Braverman’s Decease, Boston,
MA, USA (28–30 April 2017), Invited Talks. https://doi.org/10.1007/978-3-319-
99492-5_3.
Nanni, L., Rigo, A., Lumini, A., and Brahnam, S. (2020). Spectrogram classification
using dissimilarity space. Applied Sciences 10: 4176. https://doi.org/10.3390/
app10124176.
Pekalska, E. and Duin, R.P.W. (2001). On combining dissimilarity representations. In: Multiple Classifier Systems, LNCS 2096 (ed. J. Kittler and F. Roli), 359–368. Berlin: Springer-Verlag. https://doi.org/10.1007/3-540-48219-9_36.
Pekalska, E. and Duin, R.P.W. (2005). The Dissimilarity Representation for Pattern
Recognition, Foundations and Applications. Singapore: World Scientific.
Pires, A.M. and Amado, C. (2008). Interval estimators for a binomial proportion: comparison of twenty methods. REVSTAT–Statistical Journal 6 (2): 165–197. https://doi.org/10.57805/revstat.v6i2.63.
Seredin, O., Mottl, V., Tatarchuk, A. et al. (2012). Convex support and relevance vector machines for selective multimodal pattern recognition. Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan (11–15 November 2012), pp. 1647–1650. IEEE.
da Silva, A.C.F., Saïs, F., Waller, E., and Andres, F. (2020). Dissimilarity-based
approach for identity link invalidation. IEEE 29th International Conference on
Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE),
Bayonne, France (10–13 September 2020), pp. 251–256. IEEE. https://doi.
org/10.1109/WETICE49692.2020.00056.
de Souza Junior, A.H., Corona, F., Barreto, G.A. et al. (2015). Minimal learning machine: a novel supervised distance-based approach for regression and classification. Neurocomputing 164: 34–44. https://doi.org/10.1016/j.neucom.2014.11.073.
2 Development of ML-Based Methodologies for Adaptive Intelligent E-Learning Systems
2.1 Introduction
Machine learning (ML), a subfield of artificial intelligence (AI), focuses on creating and studying software that can teach itself new skills. ML is the study of how to program computers to learn and make decisions in ways that are indistinguishable from human intelligence (Sarker 2021). The term “machine learning” refers to a technique whereby a computer is taught to optimize a performance metric by analyzing and learning from examples. Generalization and representation are at the heart of ML. The system's ability to generalize to novel data samples is a key feature. According to Herbert Simon, “learning” is the process through which a system undergoes adaptive alterations that improve its performance on a given task or collection of activities the next time it is used. Tom Mitchell explains that “a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.” In other words, if the program's performance on tasks in class T improves with experience E, as measured by the performance measure P, then we say that the program has learned from its past performance and can apply that knowledge to future performance. Robots with AI can learn from their experiences, identify patterns, and infer their meaning (Patel and Patel 2016).
ML and AI have become so pervasive in our daily lives that they are no longer the purview of specialized researchers trying to crack a difficult issue. Far from being a fluke, this development has a very natural feel to it. Organizations are now able to harness massive amounts of data in developing solutions with far-reaching impact.
The purpose of supervised learning is to infer a function from data that have previously been labeled. The training data consist of instructional examples, each represented by a pair of an input value and a class. Supervised learning algorithms take a training set of examples and utilize it to infer a function that can be used to map further instances. In the best case, the algorithm can reliably assign accurate labels to newly encountered instances. The challenge of unsupervised learning in ML is to classify data without labels into groups with a predefined degree of similarity (Bharti et al. 2021). The lack of a clear error signal while looking at the provided examples prevents the learner from focusing on a single best approach. Of the aforementioned learning problems, reinforcement learning is the broadest. Instead of being instructed about what to do by a supervisor, a reinforcement learning agent must learn by experience. To solve a problem, the learner takes actions in its environment and receives feedback in the form of a reward or punishment. The system discovers the best plan of action by trial and error. According to research, the most beneficial plan of action may consist of a series of actions carefully crafted to maximize returns (Bottou 2010). In many real-world learning domains, such as biology or text processing, there is a lot of unlabeled data but not much labeled data. As a rule, it takes a lot of effort and money to create appropriately labeled data. Semi-supervised learning (SSL) therefore refers to the method of learning from a combination of labeled and unlabeled information. This kind of learning combines features of both supervised and unsupervised methods. Semi-supervised learning excels in scenarios where there is more unlabeled data available than labeled data. This occurs when the cost of collecting data points is low, while the cost of obtaining labels is high.
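To make the semi-supervised idea concrete, the minimal self-training sketch below fits a classifier on a few labeled points, adopts its most confident predictions on unlabeled points as pseudo-labels, and repeats. This is one common SSL scheme, not a method prescribed by this chapter; the data, the 0.95 confidence threshold, and the model choice are all illustrative assumptions:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Two Gaussian classes: ten labeled points, four hundred unlabeled ones
X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(4, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])

model = LogisticRegression()
for _ in range(5):  # a few self-training rounds
    model.fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = model.predict_proba(X_unl)
    confident = proba.max(axis=1) > 0.95  # keep only confident pseudo-labels
    if not confident.any():
        break
    # Move the confidently pseudo-labeled points into the labeled set
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unl = X_unl[~confident]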
2.2 Methodological Advancement
of Machine Learning
Students come from a variety of backgrounds and have varying learning styles and
pedagogical requirements. The primary goal of an adaptive e-learning system is to
identify the specific requirements of each learner and then, following the training
process, supply that learner with content that is tailored to his or her specific
needs. Using ML and deep learning (DL) models with the right dataset may make
the training process of an e-learning system more robust. In addition, an efficient
intelligent mechanism is needed to automatically classify this content as belong-
ing to the learner’s category in a reasonable amount of time. This reduces the time
spent by the learner searching through the vast amounts of content available
within the e-learning environment to find something relevant to their specific
needs. By doing so, we can tailor the information to each user. Moreover, a multi-agent approach can be used in an e-learning system to tailor e-content to each student by tracking how they engage with the system and gathering data on their preferred methods of instruction (Araque et al. 2017).
A student's profile in the profiling system has a record of their academic and social background. The user's history is modeled to create the model of the pupil. Students receive customized course materials based on their unique learning plans, which are generated using data from both the student and content models.
A learner's point of view may be defined as their current mental or factual condition. All the information from the current window pertains to a certain issue domain. Multi-perspective learning is a process wherein individual perspectives such as P1, P2, …, and Pn are combined to help in decision-making.
Bayes' theorem was used to determine the posterior probability P(G|L) of the class (Content Category), as follows:

P(G|L) = P(L|G) × P(G) / P(L)
In this formula, P(L|G) denotes the likelihood of the predictor category L given the content G, and the prior probabilities of the content, P(G), and of the predictor category, P(L), are used to determine the posterior probability of the class given the content. The conditional probability between the categories and the content is calculated from the historical data.
The procedure uses LabelEncoder().fit_transform() to generate a numeric value from a data set, which permits a string to be converted into a machine-readable numerical form. The data are then represented in the form preferred by the Naive Bayes model.
For example, the call dataframe['Term'] = number.fit_transform(dataframe['Term']) has been used, which represents the term “Software” with the number “10,” “Multimedia” with the number “8,” and so on.
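A minimal sketch of this encoding-plus-classification step is shown below, using scikit-learn's LabelEncoder and a naive Bayes classifier. The toy data, column names, and the choice of CategoricalNB are our assumptions; the chapter's actual dataset, feature/target layout, and model configuration may differ:

import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import CategoricalNB

# Toy data: a content term and the learner category it is associated with
df = pd.DataFrame({
    "Term": ["Software", "Multimedia", "Software", "Networks", "Multimedia"],
    "Category": ["Visual", "Visual", "Textual", "Textual", "Visual"],
})

term_enc, cat_enc = LabelEncoder(), LabelEncoder()
# As in the text: convert each string column to machine-readable numbers
X = term_enc.fit_transform(df["Term"]).reshape(-1, 1)
y = cat_enc.fit_transform(df["Category"])

# Naive Bayes estimates the posterior P(G|L) = P(L|G) * P(G) / P(L) from the data
clf = CategoricalNB().fit(X, y)
query = term_enc.transform(["Software"]).reshape(-1, 1)
print(cat_enc.inverse_transform(clf.predict(query)))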
The data must be in a numerical format so that they can be used as input to our model. The function np.random.uniform() has been employed, allowing us to produce random numbers (here, eight numbers for each cluster, yielding 96 materials for all clusters) in a range from 0 to 1, to which a fixed number is subsequently added.
For example, the statement data['Software'] = np.random.uniform(0, 1, 8) + 2 has been used. The k-means clustering algorithm then partitions the learning-material adaptability model into clusters, which can be represented visually. When introducing a brand-new set of topics and a fresh set of student profiles, unexpected complications might occur. To account for these variations, the frequency table database is dynamically updated. This introduces less inaccuracy into the system's predictions of relevant material based on a user's profile (Bayar and Stamm 2018). A reduced sketch of this generation-and-clustering step follows.
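The sketch below mirrors the data generation described above and then clusters the materials with k-means. The chapter implies 12 clusters of eight materials each (96 in total); for brevity this version uses three hypothetical topics, and the topic names and offsets are illustrative assumptions:

import numpy as np
from sklearn.cluster import KMeans

np.random.seed(0)
topics = ["Software", "Multimedia", "Networks"]  # hypothetical topic columns
columns = {}
for offset, topic in enumerate(topics, start=2):
    # As in the text, e.g. data['Software'] = np.random.uniform(0, 1, 8) + 2:
    # eight uniform values per cluster plus a fixed, topic-specific offset
    columns[topic] = np.random.uniform(0, 1, 8) + offset

# Stack all materials into one feature column and partition them
X = np.concatenate(list(columns.values())).reshape(-1, 1)
kmeans = KMeans(n_clusters=len(topics), n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                  # cluster assignment of each material
print(kmeans.cluster_centers_.ravel())  # one center per topic offset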
Time series are mathematical constructs that denote a series of data ordered and indexed by time. In addition, a time series is a collection of measurements yt taken at regular intervals in the past, each of which has a real value and a time stamp. The importance of the data's ordering across time is what sets time series apart from other types of information. Time series values are typically collected by keeping track of some process over time, with measurements taken at set intervals. One mathematical definition of a time series is Y = {yt : t = 1, 2, …, n}, where yt is the observation recorded at time t.
Since a time series can only be observed a finite number of times, the underlying process can be assumed to be a set of random variables in n dimensions. In addition, it is beneficial to assume the underlying process is a stochastic one, which allows for an infinite number of observations. When the observed time series data can be described by a mathematical function of time alone, yt = f(time), the series is said to be deterministic. The time series is said to be nondeterministic, or stochastic, when the data are described by the function yt = f(time, ϵ), where ϵ is a random term. Furthermore, stationarity is an important feature of time series. The properties (such as statistical properties) of a stationary time series remain constant across time (Birajdar and Mankar 2013).
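To make the stationarity notion concrete, the rough sketch below compares a white-noise series (whose mean and variance stay constant) with a random walk (whose variance grows with time). This is an informal half-by-half comparison for illustration only, not a formal stationarity test:

import numpy as np

rng = np.random.default_rng(42)
eps = rng.normal(0, 1, 400)

y_noise = eps            # white noise: statistical properties constant in time
y_walk = np.cumsum(eps)  # random walk: variance grows with time (non-stationary)

# Crude check: compare mean and variance across the two halves of each series
for name, y in [("white noise", y_noise), ("random walk", y_walk)]:
    first, second = y[:200], y[200:]
    print(f"{name}: means {first.mean():+.2f} / {second.mean():+.2f}, "
          f"variances {first.var():.2f} / {second.var():.2f}")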
Moreover, when we talk about statistical properties, we are referring to quantities such as the time series' mean value, autocorrelation, and variance. Univariate time series (UTS) and multivariate time series (MTS) are the two primary classifications of time series data. One way to think about an MTS is as a combination of numerous UTSs. Both UTS and MTS are widely available now because of the
That afternoon about four o’clock the paper came out, and right on
the front page of it was a big piece about Sands Jones and Darkie
Patt and the painting-race. Mr. Cuppy had done himself proud.
Everything was there that Catty had told him and a lot of things
Catty never thought of at all.
“This event,” said Editor Cuppy, “constitutes one of the most
remarkable examples of civic and business ingenuity ever manifested
in our midst. Our village will thrill at the prospect of such a contest
between such well-known citizens as Mr. Patt and Mr. Jones. There
have been horse-races and foot-races and balloon-races and dog-
races, but never to our knowledge has the earth seen a painting-
race. It remained for our town to set the lead in this new realm of
sport, and it remained for our new and valued citizens, Atkins & Son,
painters and decorators and contractors, to bring this honor to us. It
represents true enterprise. We should all extend the hand of
welcome to these progressive citizens. It is to be hoped that the
town will take formal notice of this event and that some sort of
celebration will be arranged to mark the start of the race. The least
that could be done would be to organize a parade to the place of the
contest, and to hear some words of congratulation and patriotism
spoken before the gladiators lay on their brushes.” There was a lot
more of it and Catty was tickled to death.
“I guess I git my ladders now,” he said.
“How?”
“Wait and see,” says he.
We walked over to Mr. Manning’s warehouse where Mr. Atkins was
mixing paints. He was about done when we got there, and Catty
grabbed onto him and told him to come along.
“Where?” says Mr. Atkins. “I want a chance to git off and rest and
look at birds a-flyin’ and clouds a-scuddin’ by.”
“After this,” says Catty, “about the only time you git to do that is
Sundays. You’re goin’ to be too busy the rest of the week.”
“I be, be I? Wa-al, where we goin’ now?”
“Barber’s,” says Catty.
“Hair-cuttin’ barber’s?”
“That’s the feller, Dad.”
“I’m goin’ to git my hair cut?”
“Whiskers, too.”
“Not clean off?” says Mr. Atkins, and his eyes got kind of frightened.
“Naw,” says Catty, “not off. Them whiskers is valuable, pervidin’
they’re used right. I’ve been thinkin’ up what kind of whiskers looks
most respectable and dignified and sich-like, and I got it all planned
out. Let’s hustle, Dad.”
So we went to the barber’s, and Catty herded his Dad into the chair,
and then told the barber just what he wanted done and how he
wanted it. He had a picture he had cut out of an old magazine of
some man that was president of a railroad, and he was about the
most dignified-looking man I ever see. His whiskers come down to a
sharp point and was that neat and handsome you wouldn’t believe
it. Catty held this picture up to the barber and told him to make his
Dad look as much like that as he could.
The barber he went to work slow and careful. Every little while he
would stand off and look at Mr. Atkins with his head on one side and
whistle through his teeth. Then he would sort of rush in and snip off
a chunk of hair and then stand off again and take another look. Mr.
Atkins sat like he was frozen solid and looked at the barber hard and
then looked in the glass, and then grunted down in his throat. It
took the barber ’most an hour to git through, but when he was done
you wouldn’t have known Mr. Atkins. He looked like he was ten years
younger and a million dollars richer. Why, if a man with whiskers like
his were fixed should stop you on the street and ask you to get him
change for a million-dollar bill, you would be surprised that he was
bothering with such small change.
Mr. Atkins looked at himself and waggled his head; then he looked at
himself some more, sideways, hideways, and wideways, and
mumbled and looked discontented.
“’Tain’t me,” says he. “Now, when I git up in the mornin’ and wash
my face and look in the glass I’ll have to git interduced or I’ll think
there’s a stranger a-hangin’ around. I got used to my face and I kind
of liked it. Now I got to start in all over to git used to this one.”
“It hain’t only your face that’s changed,” says Catty. “It’s all of you.
You’re respectable now. How does it feel?”
“Can’t say as yet. Can’t say as yet.... Goodness gracious, Peter! Now,
honest, Catty, is that me?”
“It’s you, Dad.”
“But that feller in the glass looks as if he liked to work, and all that.”
“He does,” says Catty.
“Then, ’tain’t me. I knowed it.... I wisht I had back my whiskers.”
Well, we went out of there and walked down the street, and all at
once I noticed that folks were pointing at us and whispering.
Everywhere you looked there was men reading the paper and talking
about it. It was almost like the night before election. The town was
stirred up, and when our town gets stirred it gets stirred clean to the
bottom. That painting-race had hit us right between the eyes, and I
could see that something was going to happen sure.
Dad had told me I could eat with Catty and his Dad, which I did. We
had fish cooked in the coals and water and bread and cheese. It was
a mighty fine meal. After supper we sat around awhile helping Mr.
Atkins get used to his whiskers, and then Catty says it was time to
go to court.
The court was in a room over the fire-engine hall, and when we got
there there was a crowd. It looked like all the town had been
arrested for something. There was women there, too, and one of
them was Mrs. Gage, the justice’s wife. I figured she was to blame
for trying to get Mr. Atkins chased out of town, and had come down
to make sure her husband did it. We went in and sat down inside
the railing, and pretty soon everybody else came in, and then Mr.
Gage sat down in his chair behind the desk and cleared his throat
and scowled at everybody as important as all-git-out.
“Case of the People against Atkins,” he says. “Is the defendant
present?”
“I be,” says Mr. Atkins.
“You’re charged with being a vagrant. Guilty or not guilty?”
“Wa-al,” says Mr. Atkins, looking like a banker that was thinking
about lending fifty thousand dollars, “there’s two ways of lookin’ at
it.”
“What two ways?” says Mr. Gage.
“If you look at it from the point of view that what I’m doin’ makes
me a vagrant, then I be one; but if you look at it from the point of
view that what I’m doin’ don’t make me a vagrant, then I hain’t.”
I looked back, and you could see heads nodding all over the room.
Those words of Mr. Atkins’s coming right out of that kind of whiskers
sounded as if they were a little wiser than Solomon.
“What do you think?” says the judge.
“I think I hain’t,” says Mr. Atkins.
“Defendant pleads not guilty,” says Mr. Gage. “Town-marshal
Piddlecomb, take the stand.”