Reinforcement
Learning
With Open AI, TensorFlow and
Keras Using Python

Abhishek Nandy
Manisha Biswas
Reinforcement Learning
Abhishek Nandy, Kolkata, West Bengal, India
Manisha Biswas, North 24 Parganas, West Bengal, India
ISBN-13 (pbk): 978-1-4842-3284-2
ISBN-13 (electronic): 978-1-4842-3285-9
https://doi.org/10.1007/978-1-4842-3285-9
Library of Congress Control Number: 2017962867
Copyright © 2018 by Abhishek Nandy and Manisha Biswas
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole
or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical
way, and transmission or information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark
symbol with every occurrence of a trademarked name, logo, or image we use the names, logos,
and images only in an editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the
date of publication, neither the authors nor the editors nor the publisher can accept any legal
responsibility for any errors or omissions that may be made. The publisher makes no warranty,
express or implied, with respect to the material contained herein.
Cover image by Freepik (www.freepik.com)
Managing Director: Welmoed Spahr
Editorial Director: Todd Green
Acquisitions Editor: Celestin Suresh John
Development Editor: Matthew Moodie
Technical Reviewer: Avirup Basu
Coordinating Editor: Sanchita Mandal
Copy Editor: Kezia Endsley
Compositor: SPi Global
Indexer: SPi Global
Artist: SPi Global
Distributed to the book trade worldwide by Springer Science+Business Media New York,
233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505,
e-mail orders-ny@springer-sbm.com, or visit www.springeronline.com. Apress Media,
LLC is a California LLC and the sole member (owner) is Springer Science + Business Media
Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail rights@apress.com, or visit
http://www.apress.com/rights-permissions.
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook
versions and licenses are also available for most titles. For more information, reference our
Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is
available to readers on GitHub via the book’s product page, located at www.apress.com/
978-1-4842-3284-2. For more detailed information, please visit http://www.apress.com/
source-code.
Printed on acid-free paper
Contents

About the Authors���������������������������������������������������������������������������� vii


About the Technical Reviewer���������������������������������������������������������� ix
Acknowledgments���������������������������������������������������������������������������� xi
Introduction������������������������������������������������������������������������������������ xiii


■Chapter 1: Reinforcement Learning Basics������������������������������������ 1
What Is Reinforcement Learning?����������������������������������������������������������� 1
Faces of Reinforcement Learning����������������������������������������������������������� 6
The Flow of Reinforcement Learning������������������������������������������������������ 7
Different Terms in Reinforcement Learning�������������������������������������������� 9
Gamma������������������������������������������������������������������������������������������������������������������� 10
Lambda������������������������������������������������������������������������������������������������������������������� 10

Interactions with Reinforcement Learning�������������������������������������������� 10


RL Characteristics�������������������������������������������������������������������������������������������������� 11
How Reward Works������������������������������������������������������������������������������������������������ 12
Agents�������������������������������������������������������������������������������������������������������������������� 13
RL Environments����������������������������������������������������������������������������������������������������� 14

Conclusion��������������������������������������������������������������������������������������������� 18

■Chapter 2: RL Theory and Algorithms������������������������������������������� 19
Theoretical Basis of Reinforcement Learning��������������������������������������� 19
Where Reinforcement Learning Is Used������������������������������������������������ 21
Manufacturing�������������������������������������������������������������������������������������������������������� 22
Inventory Management������������������������������������������������������������������������������������������� 22


Delivery Management��������������������������������������������������������������������������������������������� 22
Finance Sector�������������������������������������������������������������������������������������������������������� 23

Why Is Reinforcement Learning Difficult?��������������������������������������������� 23


Preparing the Machine�������������������������������������������������������������������������� 24
Installing Docker����������������������������������������������������������������������������������� 36
An Example of Reinforcement Learning with Python���������������������������� 39
What Are Hyperparameters?���������������������������������������������������������������������������������� 41
Writing the Code����������������������������������������������������������������������������������������������������� 41

What Is MDP?���������������������������������������������������������������������������������������� 47
The Markov Property���������������������������������������������������������������������������������������������� 48
The Markov Chain��������������������������������������������������������������������������������������������������� 49
MDPs���������������������������������������������������������������������������������������������������������������������� 53

SARSA��������������������������������������������������������������������������������������������������� 54
Temporal Difference Learning�������������������������������������������������������������������������������� 54
How SARSA Works�������������������������������������������������������������������������������������������������� 56

Q Learning��������������������������������������������������������������������������������������������� 56
What Is Q?�������������������������������������������������������������������������������������������������������������� 57
How to Use Q���������������������������������������������������������������������������������������������������������� 57
SARSA Implementation in Python��������������������������������������������������������������������������� 58
The Entire Reinforcement Logic in Python������������������������������������������������������������� 64

Dynamic Programming in Reinforcement Learning������������������������������ 68


Conclusion��������������������������������������������������������������������������������������������� 69

■Chapter 3: OpenAI Basics������������������������������������������������������������� 71
Getting to Know OpenAI������������������������������������������������������������������������ 71
Installing OpenAI Gym and OpenAI Universe����������������������������������������� 73
Working with OpenAI Gym and OpenAI������������������������������������������������� 75
More Simulations���������������������������������������������������������������������������������� 81


OpenAI Universe������������������������������������������������������������������������������������ 84
Conclusion��������������������������������������������������������������������������������������������� 87

■Chapter 4: Applying Python to Reinforcement Learning�������������� 89
Q Learning with Python������������������������������������������������������������������������� 89
The Maze Environment Python File������������������������������������������������������������������������ 91
The RL_Brain Python File��������������������������������������������������������������������������������������� 94
Updating the Function�������������������������������������������������������������������������������������������� 95

Using the MDP Toolbox in Python���������������������������������������������������������� 97


Understanding Swarm Intelligence����������������������������������������������������� 109
Applications of Swarm Intelligence���������������������������������������������������������������������� 109
Swarm Grammars������������������������������������������������������������������������������������������������� 111
The Rastrigin Function������������������������������������������������������������������������������������������ 111
Swarm Intelligence in Python������������������������������������������������������������������������������� 116

Building a Game AI������������������������������������������������������������������������������ 119


The Entire TFLearn Code��������������������������������������������������������������������������������������� 124

Conclusion������������������������������������������������������������������������������������������� 128
■Chapter 5: Reinforcement Learning with Keras, TensorFlow, and ChainerRL��������� 129
What Is Keras?������������������������������������������������������������������������������������ 129
Using Keras for Reinforcement Learning�������������������������������������������� 130
Using ChainerRL���������������������������������������������������������������������������������� 134
Installing ChainerRL���������������������������������������������������������������������������������������������� 134
Pipeline for Using ChainerRL�������������������������������������������������������������������������������� 137

Deep Q Learning: Using Keras and TensorFlow����������������������������������� 145


Installing Keras-rl������������������������������������������������������������������������������������������������� 146
Training with Keras-rl������������������������������������������������������������������������������������������� 148

Conclusion������������������������������������������������������������������������������������������� 153


■Chapter 6: Google’s DeepMind and the Future of Reinforcement Learning������������� 155
Google DeepMind�������������������������������������������������������������������������������� 155
Google AlphaGo����������������������������������������������������������������������������������� 156
What Is AlphaGo?�������������������������������������������������������������������������������������������������� 157
Monte Carlo Search���������������������������������������������������������������������������������������������� 159
Man vs. Machines������������������������������������������������������������������������������� 161
Positive Aspects of AI������������������������������������������������������������������������������������������� 161
Negative Aspects of AI������������������������������������������������������������������������������������������ 161

Conclusion������������������������������������������������������������������������������������������� 163

Index���������������������������������������������������������������������������������������������� 165

About the Authors

Abhishek Nandy has a B.Tech. in information
technology and considers himself a constant learner.
He is a Microsoft MVP for the Windows platform, an
Intel Black Belt Developer, and an Intel Software
Innovator. Abhishek has a keen interest in artificial
intelligence, IoT, and game development. He is
currently serving as an application architect at an IT
firm, consults in AI and IoT, and does projects in AI,
Machine Learning, and deep learning. He is also
an AI trainer and drives the technical part of the Intel AI
student developer program. He was involved in the first
Make in India initiative, where he was among the top
50 innovators and was trained at IIMA.

Manisha Biswas has a B.Tech. in information
technology and currently works as a software developer
at InSync Tech-Fin Solutions Ltd in Kolkata, India. She
is involved in several areas of technology, including
web development, IoT, soft computing, and artificial
intelligence. She is an Intel Software Innovator and was
awarded the Shri Dewang Mehta IT Awards 2016 by
NASSCOM, a certificate of excellence for top academic
scores. She very recently formed a “Women in
Technology” community in Kolkata, India to empower
women to learn and explore new technologies. She
likes to invent things, create something new, and
give a new look to old things. When not in front
of her terminal, she is an explorer, a foodie, a doodler,
and a dreamer. She is always passionate about sharing
her knowledge and ideas with others. She follows
that passion by sharing her experiences with the community so that others can
learn, which led her to become the Google Women Techmakers Kolkata Chapter Lead.

About the Technical
Reviewer

Avirup Basu is an IoT application developer at
Prescriber360 Solutions. He is a researcher in robotics
and has published papers through the IEEE.

Acknowledgments

I want to dedicate this book to my parents.


—Abhishek Nandy

I want to dedicate this book to my mom and dad. Thank you to my teachers and my
co-author, Abhishek Nandy. Thanks also to Abhishek Sur, who mentors me at work
and helps me adapt to new technologies. I would also like to dedicate this book to my
company, InSync Tech-Fin Solutions Ltd., where I started my career and have grown
professionally.

—Manisha Biswas

Introduction

This book is primarily based on a subset of Machine Learning known as Reinforcement
Learning. We cover the basics of Reinforcement Learning with the help of the Python
programming language and touch on several aspects, such as Q learning, MDP, RL with
Keras, and OpenAI Gym and OpenAI environments, and also cover algorithms related
to RL.
Readers need a basic understanding of programming in Python to benefit from this
book.
The book is meant for people who want to get into Machine Learning and learn more
about Reinforcement Learning.

CHAPTER 1

Reinforcement Learning
Basics

This chapter is a brief introduction to Reinforcement Learning (RL) and includes some
key concepts associated with it.
In this chapter, we talk about Reinforcement Learning as a core concept and then
define it further. We show a complete flow of how Reinforcement Learning works and
discuss exactly where Reinforcement Learning fits into artificial intelligence (AI). After
that, we define key terms related to Reinforcement Learning. We start with agents,
then touch on environments, and finally talk about the connection between agents
and environments.

What Is Reinforcement Learning?


We use Machine Learning to constantly improve the performance of machines or
programs over time. Reinforcement Learning (RL) is a simple way of implementing a
process that improves machine performance with time. Reinforcement Learning is an
approach through which intelligent programs, known as agents, work in a known or
unknown environment to constantly adapt and learn based on feedback points. The
feedback might be positive, also known as rewards, or negative, also called punishments.
Considering the interaction between the agents and the environment, we then determine
which action to take.
In a nutshell, Reinforcement Learning is based on rewards and punishments.
Some important points about Reinforcement Learning:
• It differs from normal Machine Learning, as we do not look at
training datasets.
• Interaction happens not with data but with environments,
through which we depict real-world scenarios.

© Abhishek Nandy and Manisha Biswas 2018


A. Nandy and M. Biswas, Reinforcement Learning,
https://doi.org/10.1007/978-1-4842-3285-9_1

• As Reinforcement Learning is based on environments, many
parameters come into play. It takes lots of information to learn
and act accordingly.
• Environments in Reinforcement Learning are real-world
scenarios that might be 2D or 3D simulated worlds or game-
based scenarios.
• Reinforcement Learning is broader in a sense because the
environments can be large in scale and there might be a lot of
factors associated with them.
• The objective of Reinforcement Learning is to reach a goal.
• Rewards in Reinforcement Learning are obtained from the
environment.
The Reinforcement Learning cycle is depicted in Figure 1-1 with the help of a robot.

Figure 1-1. Reinforcement Learning cycle


A maze is a good example that can be studied using Reinforcement Learning, in
order to determine the exact right moves to complete the maze (see Figure 1-2).

Figure 1-2. Reinforcement Learning can be applied to mazes

In Figure 1-3, we apply Reinforcement Learning; we call this the Reinforcement
Learning box because the process of RL works within its vicinity. RL starts with
intelligent programs, known as agents, and when they interact with environments,
there are rewards and punishments associated. An environment can be either known
or unknown to the agents. The agents take actions to move to the next state in order to
maximize rewards.


Figure 1-3. Reinforcement Learning flow

In the maze, the central concept is to keep moving. The goal is to clear the maze
and reach the end as quickly as possible.
The following concepts of Reinforcement Learning and the working scenario are
discussed later in this chapter.
• The agent is the intelligent program
• The environment is the maze
• The state is the place in the maze where the agent is
• The action is the move we take to move to the next state
• The reward is the points associated with reaching a particular
state. It can be positive, negative, or zero
We use the maze example to apply concepts of Reinforcement Learning. We will be
describing the following steps:

1. The concept of the maze is given to the agent.
2. There is a task associated with the agent and Reinforcement
Learning is applied to it.
3. The agent receives a -1 reinforcement (a small penalty) for
every move it makes from one state to another.
4. There is a reward system in place for the agent when it moves
from one state to another.


Reward predictions are made iteratively: we update the value of each state in the
maze based on the value of the best subsequent state and the immediate reward
obtained. This is called the update rule.
The constant movement of the Reinforcement Learning process is based on
decision-making.
Reinforcement Learning works on a trial-and-error basis because it is very difficult to
predict which action to take in a given state. From the maze problem itself, you can
see that in order to get the optimal path for the next move, you have to weigh a lot of
factors. It is always on the basis of states, actions, and rewards. For the maze, we have
to compute and account for the probability of taking each step.
The maze also does not consider the reward of the previous step; it is specifically
considering the move to the next state. The concept is the same for all Reinforcement
Learning processes.
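As a sketch, the iterative update rule just described can be written in a few lines of Python. The tiny 2x2 maze, its rewards, and the transition logic below are invented for illustration; they are not code from the book:

```python
# A minimal sketch of the iterative value-update rule for a maze.
# The maze layout, rewards, and actions below are illustrative only.

REWARDS = {  # immediate reward for entering each state
    (0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 10,  # (1, 1) is the goal
}
ACTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up
GAMMA = 0.9  # discount factor

def neighbors(state):
    """States reachable in one move (staying put if we would leave the maze)."""
    result = []
    for dr, dc in ACTIONS:
        nxt = (state[0] + dr, state[1] + dc)
        result.append(nxt if nxt in REWARDS else state)
    return result

# Start every state at 0 and repeatedly apply the update rule:
# V(s) <- max over next states s' of [reward(s') + GAMMA * V(s')]
values = {s: 0.0 for s in REWARDS}
for _ in range(50):  # iterate until the values settle
    values = {
        s: max(REWARDS[n] + GAMMA * values[n] for n in neighbors(s))
        for s in values
    }

print(values[(0, 0)])  # value of the start state
```

After enough iterations, states closer to the goal end up with higher values, which is exactly the information the agent needs to pick its next move.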
Here are the steps of this process:
1. We have a problem.
2. We have to apply Reinforcement Learning.
3. We consider applying Reinforcement Learning as a
Reinforcement Learning box.
4. The Reinforcement Learning box contains all essential
components needed for applying the Reinforcement Learning
process.
5. The Reinforcement Learning box contains agents,
environments, rewards, punishments, and actions.
Reinforcement Learning works well with intelligent agent programs that receive rewards
and punishments when interacting with an environment.
The interaction happens between the agents and the environments, as shown in
Figure 1-4.

Figure 1-4. Interaction between agents and environments

From Figure 1-4, you can see that there is a direct interaction between the agent and
its environment. This interaction is very important because, through these exchanges,
the agent adapts to the environment. When a Machine Learning program, robot, or
Reinforcement Learning program starts working, the agent is exposed to a known or
unknown environment and the Reinforcement Learning technique allows the agent to
interact and adapt according to the environment’s features.
Accordingly, the agent works and the Reinforcement Learning robot learns. In order
to get to a desired position, we assign rewards and punishments.


Now, the program has to work out the optimal path to get maximum rewards; if it
fails, it takes punishments, that is, receives negative points. In order to reach a new
position, which is also known as a state, it must perform what we call an action.
To perform an action, we implement a function, also known as a policy. A policy is
therefore a function that maps states to actions.
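As a toy illustration (the state and action names here are made up, not taken from the book), a policy can be as simple as a Python function backed by a lookup table:

```python
# A policy maps states to actions. Here is a minimal, hand-written
# example for a tiny maze; the state names are illustrative only.

policy = {
    "start":     "move_right",
    "corridor":  "move_right",
    "junction":  "move_down",
    "near_goal": "move_down",
}

def choose_action(state):
    """The policy function: given a state, return the action to perform."""
    return policy[state]

print(choose_action("junction"))  # -> move_down
```

In practice the table is not written by hand; learning a good policy is exactly what the RL algorithms in later chapters do.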

Faces of Reinforcement Learning


As you see from the Venn diagram in Figure 1-5, Reinforcement Learning sits at the
intersection of many different fields of science.

Figure 1-5. All the faces of Reinforcement Learning


The intersection points reveal a very strong feature of Reinforcement Learning: it is
the science of decision-making. If we have two paths and have to decide which
path to take so that some goal is met, a scientific decision-making process can be
designed.
Reinforcement Learning is the fundamental science of optimal decision-making.
If we focus on the computer science part of the Venn diagram in Figure 1-5, we
see that learning falls under the category of Machine Learning, which specifically
maps to Reinforcement Learning.
Reinforcement Learning can be applied to many different fields of science. In
engineering, we have devices that focus mostly on optimal control. In neuroscience, we
are concerned with how the brain works as a stimulant for making decisions and study
the reward system at work in the brain (the dopamine system).
Psychologists can apply Reinforcement Learning to determine how animals make
decisions. In mathematics, Reinforcement Learning is widely applied in
operations research.

The Flow of Reinforcement Learning


Figure 1-6 connects agents and environments.

Figure 1-6. RL structure

The interaction happens from one state to another. The exact connection starts
between an agent and the environment. Rewards are given on a regular basis, and
we take appropriate actions to move from one state to another.
The key points of consideration after going through the details are the following:
• The Reinforcement Learning cycle works in an interconnected
manner.
• There is distinct communication between the agent and the
environment.
• The distinct communication happens with rewards in mind.
• The object or robot moves from one state to another.
• An action is taken to move from one state to another.


Figure 1-7 simplifies the interaction process.

Figure 1-7. The entire interaction process

An agent is always learning and finally makes a decision. An agent is a learner, which
means there might be different paths to explore. When the agent starts training, it starts to
adapt and intelligently learn from its surroundings.
The agent is also a decision maker because it tries to take the action that will get it the
maximum reward.
When the agent starts interacting with the environment, it can choose an action and
respond accordingly.
From then on, new scenes are created. When the agent changes from one place to
another in an environment, every change results in some kind of modification. These
changes are depicted as scenes. The transition that happens in each step helps the agent
solve the Reinforcement Learning problem more effectively.


Let’s look at another scenario of state transitioning, as shown in Figures 1-8 and 1-9.

Figure 1-8. Scenario of state changes

Figure 1-9. The state transition process

Learn to choose actions that maximize the following discounted sum of rewards:

r0 + γr1 + γ²r2 + … , where 0 ≤ γ ≤ 1

At each state transition, the reward is a different value, hence we describe the reward
with varying values in each step, such as r0, r1, r2, etc. Gamma (γ) is called the discount
factor and it determines how much weight future rewards get:
• A gamma value of 0 means the reward is associated with the
current state only
• A gamma value of 1 means that the reward is long-term
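The discounted sum above is easy to compute directly in Python; the reward sequence in this sketch is made up for illustration:

```python
# Compute the discounted return r0 + gamma*r1 + gamma^2*r2 + ...
# for an illustrative reward sequence.

def discounted_return(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))

rewards = [1, 0, 0, 10]  # rewards received at time frames t = 0, 1, 2, 3

print(discounted_return(rewards, 0.0))  # only the current reward counts: 1.0
print(discounted_return(rewards, 1.0))  # all rewards count equally: 11.0
print(discounted_return(rewards, 0.9))  # future rewards are discounted
```

With γ = 0.9 the final reward of 10 contributes only 0.9³ × 10 = 7.29, showing how gamma trades off immediate against long-term reward.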

Different Terms in Reinforcement Learning


Now we cover some common terms associated with Reinforcement Learning.
There are two constants that are important in this case—gamma (γ) and lambda (λ),
as shown in Figure 1-10.


Figure 1-10. Showing values of constants

Gamma is common in Reinforcement Learning problems but lambda is used
generally in temporal difference problems.

Gamma
Gamma is used in each state transition and is a constant value at each state change.
Gamma allows you to give information about the type of reward you will be getting in
every state. Generally, the values determine whether we are looking for reward values in
each state only (in which case, it’s 0) or if we are looking for long-term reward values (in
which case it’s 1).

Lambda
Lambda is generally used when we are dealing with temporal difference problems. It is
more involved with predictions in successive states.
Increasing values of lambda in each state show that our algorithm is learning fast;
the faster algorithm yields better results when using Reinforcement Learning techniques.
As you’ll learn later, temporal differences can be generalized to what we call
TD(λ). We discuss it in greater depth later.

Interactions with Reinforcement Learning


Let’s now talk about Reinforcement Learning and its interactions. As shown in
Figure 1-11, the interactions between the agent and the environment occur with a reward.
We need to take an action to move from one state to another.


Figure 1-11. Reinforcement Learning interactions

Reinforcement Learning is a way of learning how to map situations to actions
so as to maximize reward.
The machine or robot is not told which actions to take, as in other forms of
Machine Learning; instead, the machine must discover which actions yield the
maximum reward by trying them.
In the most interesting and challenging cases, actions affect not only the immediate
reward but also the next situation and all subsequent rewards.

RL Characteristics
We talk about the characteristics next. The characteristics are generally what the agent
does to move to the next state: the agent considers which approach works best to make
the next move.
The two characteristics are:
• Trial-and-error search
• Delayed reward
As you probably have gathered, Reinforcement Learning works on three things
combined:

(S,A,R)

Where S represents state, A represents action, and R represents reward.


If you are in a state S and perform an action A, you receive a reward R at time
frame t+1. Now, the most important part is the move to the next state. In this case,
we do not use the reward we just earned to decide where to move next: each transition
has its own reward, and no reward from a previous state is used to determine the next
move. See Figure 1-12.


Figure 1-12. State change with time

The change in T (the time frame) is important in terms of Reinforcement Learning.
Every step we take is a combination of what we perform in terms
of states, actions, and rewards. See Figure 1-13.

Figure 1-13. Another way of representing the state transition
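The loop shown in these figures can be written out directly. Here is a minimal sketch of the agent-environment interaction over (S, A, R) triples; the two-state environment and its rewards are invented purely for illustration:

```python
import random

# Sketch: the basic interaction loop producing (state, action, reward) triples.
def step(state, action):
    """Toy transition function: returns (next_state, reward)."""
    if state == "S0" and action == "right":
        return "S1", 1.0
    return "S0", 0.0

state = "S0"
history = []
for t in range(5):
    action = random.choice(["left", "right"])  # a random policy, for now
    next_state, reward = step(state, action)   # the reward arrives at t+1
    history.append((state, action, reward))
    state = next_state

print(history)  # a trajectory of (S, A, R) triples
```

Replacing the random policy with one that learns from the collected rewards is exactly what the rest of the book builds toward.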

How Reward Works


A reward is some motivator we receive when we transition from one state to another. It
can be points, as in a video game. The more we train, the more accurate we become, and
the greater our reward.


Agents
In terms of Reinforcement Learning, agents are the software programs that make
intelligent decisions. Agents should be able to perceive what is happening in the
environment. Here are the basic principles of agents:
1. When the agent can perceive the environment, it can make
better decisions.
2. The decisions the agent takes result in actions.
3. The actions the agent performs should be the best, that is,
optimal, ones.
Software agents might be autonomous or they might work together with other agents
or with people. Figure 1-14 shows how the agent works.

Figure 1-14. The flow of the environment
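The perceive-decide-act cycle above can be expressed as a minimal agent interface. This is only a sketch; the class and method names are illustrative, not from any particular library:

```python
# Sketch: a minimal agent that perceives its environment and decides on an action.
class Agent:
    def __init__(self, actions):
        self.actions = actions   # the actions available to the agent
        self.state = None

    def perceive(self, observation):
        """Store what the agent currently sees in the environment."""
        self.state = observation

    def decide(self):
        """Pick the action believed to be best; here, trivially the first one."""
        return self.actions[0]

agent = Agent(actions=["left", "right"])
agent.perceive("S0")      # step 1: perceive the environment
action = agent.decide()   # steps 2-3: decide on the (hopefully optimal) action
print(action)
```

A real agent would replace `decide` with a learned policy that ranks actions by expected reward.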


RL Environments
The environments in the Reinforcement Learning space comprise certain factors
that determine their impact on the Reinforcement Learning agent. The agent must
adapt to the environment accordingly. These environments can be 2D worlds or grids or even a
3D world.
Here are some important features of environments:
• Deterministic
• Observable
• Discrete or continuous
• Single-agent or multi-agent

Deterministic
If we can infer and predict what will happen in a certain scenario in the future, we say
the scenario is deterministic.
Deterministic settings make RL problems easier because each action has a single,
immediate effect on the state transition; we do not have to reason about chance
outcomes when moving from one state to another.
When we are dealing with RL, the state model we get will be either deterministic or
non-deterministic. That means we need to understand the mechanisms behind how DFA
and NDFA work.

DFA (Deterministic Finite Automata)


A DFA has a finite number of states and, for each state and input, exactly one
transition: given the current state and an input, the next state is fully determined. See
Figure 1-15.

Figure 1-15. Showing DFA
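A DFA's transition table can be sketched as a simple dictionary; the states and inputs below are made up for illustration:

```python
# Sketch: a tiny DFA as a transition table. For each (state, input) pair
# there is exactly one next state; that is what "deterministic" means.
transitions = {
    ("q0", "a"): "q1",
    ("q0", "b"): "q0",
    ("q1", "a"): "q1",
    ("q1", "b"): "q0",
}

def run_dfa(start, inputs, accepting):
    """Run the DFA over the input string and report whether it accepts."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]  # one and only one choice
    return state in accepting

print(run_dfa("q0", "aab", accepting={"q0"}))  # ends in q0: True
print(run_dfa("q0", "aa", accepting={"q0"}))   # ends in q1: False
```

In an NDFA, by contrast, a (state, input) pair could map to a set of possible next states rather than exactly one.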
