Reinforcement Learning
With Open AI, TensorFlow and Keras Using Python

Abhishek Nandy
Manisha Biswas
Reinforcement Learning
Abhishek Nandy, Kolkata, West Bengal, India
Manisha Biswas, North 24 Parganas, West Bengal, India
ISBN-13 (pbk): 978-1-4842-3284-2
ISBN-13 (electronic): 978-1-4842-3285-9
https://doi.org/10.1007/978-1-4842-3285-9
Library of Congress Control Number: 2017962867
Copyright © 2018 by Abhishek Nandy and Manisha Biswas
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole
or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical
way, and transmission or information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark
symbol with every occurrence of a trademarked name, logo, or image we use the names, logos,
and images only in an editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the
date of publication, neither the authors nor the editors nor the publisher can accept any legal
responsibility for any errors or omissions that may be made. The publisher makes no warranty,
express or implied, with respect to the material contained herein.
Cover image by Freepik (www.freepik.com)
Managing Director: Welmoed Spahr
Editorial Director: Todd Green
Acquisitions Editor: Celestin Suresh John
Development Editor: Matthew Moodie
Technical Reviewer: Avirup Basu
Coordinating Editor: Sanchita Mandal
Copy Editor: Kezia Endsley
Compositor: SPi Global
Indexer: SPi Global
Artist: SPi Global
Distributed to the book trade worldwide by Springer Science+Business Media New York,
233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505,
e-mail orders-ny@springer-sbm.com, or visit www.springeronline.com. Apress Media,
LLC is a California LLC and the sole member (owner) is Springer Science + Business Media
Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail rights@apress.com, or visit
http://www.apress.com/rights-permissions.
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook
versions and licenses are also available for most titles. For more information, reference our
Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is
available to readers on GitHub via the book’s product page, located at www.apress.com/
978-1-4842-3284-2. For more detailed information, please visit http://www.apress.com/
source-code.
Printed on acid-free paper
Contents

About the Authors ..... vii
About the Technical Reviewer ..... ix
Acknowledgments ..... xi
Introduction ..... xiii

Chapter 1: Reinforcement Learning Basics ..... 1
  What Is Reinforcement Learning? ..... 1
  Faces of Reinforcement Learning ..... 6
  The Flow of Reinforcement Learning ..... 7
  Different Terms in Reinforcement Learning ..... 9
    Gamma ..... 10
    Lambda ..... 10
  Interactions with Reinforcement Learning ..... 10
    RL Characteristics ..... 11
    How Reward Works ..... 12
    Agents ..... 13
    RL Environments ..... 14
  Conclusion ..... 18

Chapter 2: RL Theory and Algorithms ..... 19
  Theoretical Basis of Reinforcement Learning ..... 19
  Where Reinforcement Learning Is Used ..... 21
    Manufacturing ..... 22
    Inventory Management ..... 22
    Delivery Management ..... 22
    Finance Sector ..... 23
  Why Is Reinforcement Learning Difficult? ..... 23
  Preparing the Machine ..... 24
  Installing Docker ..... 36
  An Example of Reinforcement Learning with Python ..... 39
    What Are Hyperparameters? ..... 41
    Writing the Code ..... 41
  What Is MDP? ..... 47
    The Markov Property ..... 48
    The Markov Chain ..... 49
    MDPs ..... 53
  SARSA ..... 54
    Temporal Difference Learning ..... 54
    How SARSA Works ..... 56
  Q Learning ..... 56
    What Is Q? ..... 57
    How to Use Q ..... 57
    SARSA Implementation in Python ..... 58
    The Entire Reinforcement Logic in Python ..... 64
  Dynamic Programming in Reinforcement Learning ..... 68
  Conclusion ..... 69

Chapter 3: OpenAI Basics ..... 71
  Getting to Know OpenAI ..... 71
  Installing OpenAI Gym and OpenAI Universe ..... 73
  Working with OpenAI Gym and OpenAI ..... 75
  More Simulations ..... 81
  OpenAI Universe ..... 84
  Conclusion ..... 87

Chapter 4: Applying Python to Reinforcement Learning ..... 89
  Q Learning with Python ..... 89
    The Maze Environment Python File ..... 91
    The RL_Brain Python File ..... 94
    Updating the Function ..... 95
  Using the MDP Toolbox in Python ..... 97
  Understanding Swarm Intelligence ..... 109
    Applications of Swarm Intelligence ..... 109
    Swarm Grammars ..... 111
    The Rastrigin Function ..... 111
    Swarm Intelligence in Python ..... 116
  Building a Game AI ..... 119
    The Entire TFLearn Code ..... 124
  Conclusion ..... 128

Chapter 5: Reinforcement Learning with Keras, TensorFlow, and ChainerRL ..... 129
  What Is Keras? ..... 129
  Using Keras for Reinforcement Learning ..... 130
  Using ChainerRL ..... 134
    Installing ChainerRL ..... 134
    Pipeline for Using ChainerRL ..... 137
  Deep Q Learning: Using Keras and TensorFlow ..... 145
    Installing Keras-rl ..... 146
    Training with Keras-rl ..... 148
  Conclusion ..... 153

Chapter 6: Google's DeepMind and the Future of Reinforcement Learning ..... 155
  Google DeepMind ..... 155
  Google AlphaGo ..... 156
    What Is AlphaGo? ..... 157
    Monte Carlo Search ..... 159
  Man vs. Machines ..... 161
    Positive Aspects of AI ..... 161
    Negative Aspects of AI ..... 161
  Conclusion ..... 163

Index ..... 165
About the Authors

Abhishek Nandy has a B.Tech. in information technology and considers himself a constant learner. He is a Microsoft MVP for the Windows platform, an Intel Black Belt Developer, and an Intel Software Innovator. Abhishek has a keen interest in artificial intelligence, IoT, and game development. He is currently serving as an application architect at an IT firm and consults in AI and IoT, as well as doing projects in AI, Machine Learning, and deep learning. He is also an AI trainer and drives the technical part of the Intel AI student developer program. He was involved in the first Make in India initiative, where he was among the top 50 innovators and was trained at IIMA.

Manisha Biswas has a B.Tech. in information technology and currently works as a software developer at InSync Tech-Fin Solutions Ltd in Kolkata, India. She is involved in several areas of technology, including web development, IoT, soft computing, and artificial intelligence. She is an Intel Software Innovator and was awarded the Shri Dewang Mehta IT Awards 2016 by NASSCOM, a certificate of excellence for top academic scores. She recently formed a "Women in Technology" community in Kolkata, India to empower women to learn and explore new technologies. She likes to invent things, create something new, and give a new look to old things. When not in front of her terminal, she is an explorer, a foodie, a doodler, and a dreamer. She is always passionate about sharing her knowledge and ideas with others; following that passion led her to become the Google Women Techmakers Kolkata Chapter Lead.

About the Technical Reviewer

Avirup Basu is an IoT application developer at Prescriber360 Solutions. He is a researcher in robotics and has published papers through the IEEE.

Acknowledgments

I want to dedicate this book to my parents.


—Abhishek Nandy

I want to dedicate this book to my mom and dad. Thank you to my teachers and my
co-author, Abhishek Nandy. Thanks also to Abhishek Sur, who mentors me at work
and helps me adapt to new technologies. I would also like to dedicate this book to my
company, InSync Tech-Fin Solutions Ltd., where I started my career and have grown
professionally.

—Manisha Biswas

Introduction

This book is primarily based on a Machine Learning subset known as Reinforcement Learning. We cover the basics of Reinforcement Learning with the help of the Python programming language and touch on several aspects, such as Q learning, MDP, RL with Keras, and OpenAI Gym and the OpenAI environment, and also cover algorithms related to RL.
Users need a basic understanding of programming in Python to benefit from this book.
The book is meant for people who want to get into Machine Learning and learn more about Reinforcement Learning.

CHAPTER 1

Reinforcement Learning Basics

This chapter is a brief introduction to Reinforcement Learning (RL) and includes some
key concepts associated with it.
In this chapter, we talk about Reinforcement Learning as a core concept and then
define it further. We show a complete flow of how Reinforcement Learning works. We
discuss exactly where Reinforcement Learning fits into artificial intelligence (AI). After
that we define key terms related to Reinforcement Learning. We start with agents and
then touch on environments and then finally talk about the connection between agents
and environments.

What Is Reinforcement Learning?


We use Machine Learning to constantly improve the performance of machines or
programs over time. A simplified way of implementing a process that improves
machine performance over time is Reinforcement Learning (RL). Reinforcement
Learning is an approach through which intelligent programs, known as agents, work
in a known or unknown environment to constantly adapt and learn based on the
points they receive. The feedback might be positive, also known as rewards, or negative, also
called punishments. Considering the interaction between the agents and the environment, we then
determine which action to take.
In a nutshell, Reinforcement Learning is based on rewards and punishments.
Some important points about Reinforcement Learning:
• It differs from normal Machine Learning, as we do not look at
training datasets.
• Interaction happens not with data but with environments,
through which we depict real-world scenarios.


• As Reinforcement Learning is based on environments, many
parameters come into play. It takes lots of information to learn
and act accordingly.
• Environments in Reinforcement Learning are real-world
scenarios that might be 2D or 3D simulated worlds or game-
based scenarios.
• Reinforcement Learning is broader in a sense because the
environments can be large in scale and there might be a lot of
factors associated with them.
• The objective of Reinforcement Learning is to reach a goal.
• Rewards in Reinforcement Learning are obtained from the
environment.
The Reinforcement Learning cycle is depicted in Figure 1-1 with the help of a robot.

Figure 1-1. Reinforcement Learning cycle


A maze is a good example that can be studied using Reinforcement Learning, in
order to determine the exact right moves to complete the maze (see Figure 1-2).

Figure 1-2. Reinforcement Learning can be applied to mazes

In Figure 1-3, we are applying Reinforcement Learning, and we call it the
Reinforcement Learning box because the process of RL works within its vicinity. RL starts
with intelligent programs, known as agents, and when they interact with environments,
there are rewards and punishments associated. An environment can be either known
or unknown to the agents. The agents take actions to move to the next state in order to
maximize rewards.


Figure 1-3. Reinforcement Learning flow

In the maze, the central concept is to keep moving. The goal is to clear the maze
and reach the end as quickly as possible.
The following concepts of Reinforcement Learning and the working scenario are
discussed later in this chapter:
• The agent is the intelligent program
• The environment is the maze
• The state is the place in the maze where the agent is
• The action is the move we take to move to the next state
• The reward is the points associated with reaching a particular
state. It can be positive, negative, or zero
We use the maze example to apply concepts of Reinforcement Learning. We will be
describing the following steps:

1. The concept of the maze is given to the agent.
2. There is a task associated with the agent, and Reinforcement
Learning is applied to it.
3. The agent receives a -1 reinforcement for every move it
makes from one state to another.
4. There is a reward system in place for the agent when it moves
from one state to another.


The rewards predictions are made iteratively, where we update the value of each
state in a maze based on the value of the best subsequent state and the immediate reward
obtained. This is called the update rule.
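
To make this update rule concrete, here is a minimal sketch in Python. The tiny maze, the -1 reward per move, and the discount factor below are illustrative assumptions, not code from this book.

# A minimal sketch of the iterative value-update rule for a tiny maze.
# The states, the -1 reward per move, and gamma are illustrative assumptions.
gamma = 0.9
neighbors = {"S0": ["S1"], "S1": ["S0", "GOAL"], "GOAL": []}
reward = {"S0": -1.0, "S1": -1.0, "GOAL": 0.0}
V = {s: 0.0 for s in neighbors}          # current value estimate of each state

for _ in range(50):                      # repeat the update until the values settle
    for s, nxt in neighbors.items():
        if nxt:                          # terminal states keep their value
            V[s] = max(reward[s] + gamma * V[s2] for s2 in nxt)

print(V)                                 # states closer to the goal end up with higher values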
The constant movement of the Reinforcement Learning process is based on
decision-making.
Reinforcement Learning works on a trial-and-error basis because it is very difficult to
predict which action to take when the agent is in one state. From the maze problem itself, you can
see that in order to get the optimal path for the next move, you have to weigh a lot of factors.
It is always on the basis of states, actions, and rewards. For the maze, we have to compute
and account for the probability of taking each step.
The maze also does not consider the reward of the previous step; it is specifically
considering the move to the next state. The concept is the same for all Reinforcement
Learning processes.
Here are the steps of this process:
1. We have a problem.
2. We have to apply Reinforcement Learning.
3. We consider applying Reinforcement Learning as a
Reinforcement Learning box.
4. The Reinforcement Learning box contains all essential
components needed for applying the Reinforcement Learning
process.
5. The Reinforcement Learning box contains agents,
environments, rewards, punishments, and actions.
Reinforcement Learning works well with intelligent program agents that receive rewards
and punishments when interacting with an environment.
The interaction happens between the agents and the environments, as shown in
Figure 1-4.

Figure 1-4. Interaction between agents and environments

From Figure 1-4, you can see that there is a direct interaction between the agents and
their environments. This interaction is very important because, through these exchanges,
the agent adapts to the environments. When a Machine Learning program, robot, or
Reinforcement Learning program starts working, the agents are exposed to known or
unknown environments and the Reinforcement Learning technique allows the agents to
interact and adapt according to the environment’s features.
Accordingly, the agents work and the Reinforcement Learning robot learns. In order
to get to a desired position, we assign rewards and punishments.


Now, the program has to work out the optimal path to get the maximum rewards; if
it fails, it takes punishments (that is, it receives negative points). In order to reach a new
position, which is also known as a state, it must perform what we call an action.
To perform an action, we implement a function, also known as a policy. A policy is
therefore a function that selects an action for a given state.
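
As a minimal sketch, a policy can be written as an ordinary Python function from a state to an action. The state names, the action list, and the random and greedy strategies below are illustrative assumptions.

import random

# A minimal sketch of two policies: functions that map a state to an action.
# The actions, states, and Q-value table here are illustrative assumptions.
ACTIONS = ["up", "down", "left", "right"]

def random_policy(state):
    """Ignore the state and pick any action (pure exploration)."""
    return random.choice(ACTIONS)

def greedy_policy(state, q_values):
    """Pick the action with the highest estimated value in this state."""
    return max(ACTIONS, key=lambda a: q_values.get((state, a), 0.0))

print(random_policy("S0"))
print(greedy_policy("S0", {("S0", "right"): 1.0}))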

Faces of Reinforcement Learning


As you see from the Venn diagram in Figure 1-5, Reinforcement Learning sits at the
intersection of many different fields of science.

Figure 1-5. All the faces of Reinforcement Learning


The intersection points reveal a very strong feature of Reinforcement Learning: it
shows the science of decision-making. If we have two paths and have to decide which
path to take so that some goal is met, a scientific decision-making process can be
designed.
Reinforcement Learning is the fundamental science of optimal decision-making.
If we focus on the computer science part of the Venn diagram in Figure 1-5, we
see that the learning part falls under the category of Machine Learning, which maps
specifically to Reinforcement Learning.
Reinforcement Learning can be applied to many different fields of science. In
engineering, we have devices that focus mostly on optimal control. In neuroscience, we
are concerned with how the brain works as a stimulant for making decisions and study
the reward system that works in the brain (the dopamine system).
Psychologists can apply Reinforcement Learning to determine how animals make
decisions. In mathematics, we have a lot of work applying Reinforcement Learning in
operations research.

The Flow of Reinforcement Learning


Figure 1-6 connects agents and environments.

Figure 1-6. RL structure

The interaction happens from one state to another. The exact connection starts
between an agent and the environment. Rewards are given on a regular basis.
We take appropriate actions to move from one state to another.
The key points of consideration after going through the details are the following:
• The Reinforcement Learning cycle works in an interconnected
manner.
• There is distinct communication between the agent and the
environment.
• The distinct communication happens with rewards in mind.
• The object or robot moves from one state to another.
• An action is taken to move from one state to another


Figure 1-7 simplifies the interaction process.

Figure 1-7. The entire interaction process

An agent is always learning and finally makes a decision. An agent is a learner, which
means there might be different paths. When the agent starts training, it starts to adapt and
intelligently learns from its surroundings.
The agent is also a decision maker because it tries to take an action that will get it the
maximum reward.
When the agent starts interacting with the environment, it can choose an action and
respond accordingly.
From then on, new scenes are created. When the agent changes from one place to
another in an environment, every change results in some kind of modification. These
changes are depicted as scenes. The transition that happens in each step helps the agent
solve the Reinforcement Learning problem more effectively.


Let’s look at another scenario of state transitioning, as shown in Figures 1-8 and 1-9.

Figure 1-8. Scenario of state changes

Figure 1-9. The state transition process

Learn to choose actions that maximize the following:

r0 + γr1 + γ²r2 + ..., where 0 < γ < 1

At each state transition, the reward is a different value, hence we describe the reward
with varying values in each step, such as r0, r1, r2, etc. Gamma (γ) is called a discount
factor and it determines what type of future reward we get (a short sketch of computing this discounted sum follows the list below):
• A gamma value of 0 means the reward is associated with the
current state only
• A gamma value of 1 means that the reward is long-term
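
As promised above, here is a minimal sketch of computing the discounted sum; the reward sequence and the gamma value are illustrative assumptions.

# A minimal sketch of the discounted return r0 + γ*r1 + γ²*r2 + ...
# The reward sequence and the gamma value are illustrative assumptions.
rewards = [1.0, 0.0, 0.0, 5.0]     # r0, r1, r2, r3 collected along one path
gamma = 0.9                        # discount factor, 0 < gamma < 1

discounted_return = sum((gamma ** t) * r for t, r in enumerate(rewards))
print(discounted_return)           # 1.0 + 0.9**3 * 5.0 = 4.645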

Different Terms in Reinforcement Learning


Now we cover some common terms associated with Reinforcement Learning.
There are two constants that are important in this case—gamma (γ) and lambda (λ),
as shown in Figure 1-10.


Figure 1-10. Showing values of constants

Gamma is common in Reinforcement Learning problems, but lambda is generally used
in temporal difference problems.

Gamma
Gamma is used in each state transition and is a constant value at each state change.
Gamma allows you to give information about the type of reward you will be getting in
every state. Generally, the values determine whether we are looking for reward values in
each state only (in which case, it’s 0) or if we are looking for long-term reward values (in
which case it’s 1).

Lambda
Lambda is generally used when we are dealing with temporal difference problems. It is
more involved with predictions in successive states.
Increasing values of lambda in each state show that our algorithm is learning quickly.
A faster algorithm yields better results when using Reinforcement Learning techniques.
As you’ll learn later, temporal differences can be generalized to what we call
TD(Lambda). We discuss it in greater depth later.
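For reference, the generalization mentioned here is usually written as the lambda-return below, a weighted average of n-step returns; the notation G_t^{(n)} for the n-step return is standard but is not defined in this chapter.

G_t^{\lambda} = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_t^{(n)}, \qquad 0 \le \lambda \le 1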

Interactions with Reinforcement Learning


Let’s now talk about Reinforcement Learning and its interactions. As shown in
Figure 1-11, the interactions between the agent and the environment occur with a reward.
We need to take an action to move from one state to another.


Figure 1-11. Reinforcement Learning interactions

Reinforcement Learning is a way of learning how to map situations to actions
so as to maximize the rewards that can be obtained.
The machine or robot is not told which actions to take, as with other forms of
Machine Learning, but instead the machine must discover which actions yield the
maximum reward by trying them.
In the most interesting and challenging cases, actions affect not only the immediate
reward but also the next situation and all subsequent rewards.

RL Characteristics
We talk about characteristics next. The characteristics are generally what the agent does
to move to the next state. The agent considers which approach works best to make the
next move.
The two characteristics are
• Trial and error search.
• Delayed reward.
As you probably have gathered, Reinforcement Learning works on three things
combined:

(S,A,R)

Where S represents state, A represents action, and R represents reward.


If you are in a state S, you perform an action A so that you get a reward R at time
frame t+1. Now, the most important part is when you move to the next state. In this case,
we do not use the reward we just earned to decide where to move next. Each transition
has a unique reward and no reward from any previous state is used to determine the next
move. See Figure 1-12.


Figure 1-12. State change with time
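
To make the combination of states, actions, and rewards over time concrete, here is a minimal sketch of one episode using OpenAI Gym (introduced in Chapter 3). The CartPole environment and the random choice of actions are illustrative assumptions.

# A minimal sketch of the S, A, R loop over time using OpenAI Gym (see Chapter 3).
# The CartPole environment and the random actions are illustrative assumptions.
import gym

env = gym.make("CartPole-v0")
state = env.reset()                                    # initial state S at t = 0
total_reward = 0.0

for t in range(200):
    action = env.action_space.sample()                 # choose an action A in state S
    next_state, reward, done, info = env.step(action)  # reward R arrives at time t + 1
    total_reward += reward
    state = next_state                                 # only the new state guides the next move
    if done:
        break

print("Steps:", t + 1, "Total reward:", total_reward)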

The T change (the time frame) is important in terms of Reinforcement Learning.
Every occurrence of what we do is always a combination of what we perform in terms
of states, actions, and rewards. See Figure 1-13.

Figure 1-13. Another way of representing the state transition

How Reward Works


A reward is some motivator we receive when we transition from one state to another. It
can be points, as in a video game. The more we train, the more accurate we become, and
the greater our reward.


Agents
In terms of Reinforcement Learning, agents are the software programs that make
intelligent decisions. Agents should be able to perceive what is happening in the
environment. Here are the basic steps of the agents:
1. When the agent can perceive the environment, it can make
better decisions.
2. The decision the agents take results in an action.
3. The action that the agents perform must be the best, the
optimal, one.
Software agents might be autonomous or they might work together with other agents
or with people. Figure 1-14 shows how the agent works.

Figure 1-14. The flow of the environment


RL Environments
The environments in the Reinforcement Learning space are composed of certain factors
that determine the impact on the Reinforcement Learning agent. The agent must adapt
to the environment accordingly. These environments can be 2D worlds or grids or even a
3D world.
Here are some important features of environments:
• Deterministic
• Observable
• Discrete or continuous
• Single or multiagent.

Deterministic
If we can infer and predict what will happen with a certain scenario in the future, we say
the scenario is deterministic.
It is easier for RL problems to be deterministic because we don’t rely on the
decision-making process to change state. It’s an immediate effect that happens with state
transitions when we are moving from one state to another. The life of a Reinforcement
Learning problem becomes easier.
When we are dealing with RL, the state model we get will be either deterministic or
non-deterministic. That means we need to understand the mechanisms behind how DFA
and NDFA work.

DFA (Deterministic Finite Automata)


A DFA goes through a finite number of steps. For a given state and input, it can perform only one transition. See
Figure 1-15.

Figure 1-15. Showing DFA


We are showing a state transition from a start state to a final state with the help of
a diagram. It is a simple depiction where we can say that, with some input value that is
assumed as 1 and 0, the state transition occurs. The self-loop is created when it gets a
value and stays in the same state.

NDFA (Nondeterministic Finite Automaton)


If we are working in a scenario where we don’t know exactly which state a machine will
move into, this is a case of NDFA. See Figure 1-16.

Figure 1-16. NDFA

The working principle of the state diagram in Figure 1-16 can be explained as
follows. In an NDFA, the issue is that when we are transitioning from one state to another, there is
more than one option available, as we can see in Figure 1-16. From state S0, after getting
an input such as 0, the machine can stay in state S0 or move to state S1. There is decision-making
involved here, so it becomes difficult to know which action to take.
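
As a minimal sketch, the difference can be expressed as a transition table that returns exactly one next state (DFA) versus a set of possible next states (NDFA); the states, inputs, and tables below are illustrative assumptions.

# A minimal sketch contrasting deterministic and nondeterministic transitions.
# The states, inputs, and transition tables are illustrative assumptions.
dfa  = {("S0", 0): "S0", ("S0", 1): "S1", ("S1", 0): "S1", ("S1", 1): "S0"}
ndfa = {("S0", 0): {"S0", "S1"}, ("S0", 1): {"S1"},
        ("S1", 0): {"S1"}, ("S1", 1): {"S0"}}

print(dfa[("S0", 0)])   # exactly one next state: 'S0'
print(ndfa[("S0", 0)])  # more than one possible next state: {'S0', 'S1'}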

Observable
If we can say that the environment around us is fully observable, we have a perfect
scenario for implementing Reinforcement Learning.
An example of perfect observability is a chess game. An example of partial
observability is a poker game, where some of the cards are unknown to any one player.


Discrete or Continuous
If there is an unlimited (uncountable) range of choices for transitioning to the next state, that is a continuous
scenario. When there are a limited number of choices, that's called a discrete scenario.

Single Agent and Multiagent Environments


Solutions in Reinforcement Learning can be of single agent types or multiagent types.
Let’s take a look at multiagent Reinforcement Learning first.
When we are dealing with complex problems, we use multiagent Reinforcement
Learning. Complex problems might have different environments where each agent is doing
a different job, and the agents also need to interact with each other. This introduces
different complications in determining state transitions.
Multiagent solutions are based on the non-deterministic approach.
They are non-deterministic because when the multiagents interact, there might be
more than one option to change or move to the next state and we have to make decisions
based on that ambiguity.
In multiagent solutions, the interactions between the agents and the different environments are
enormous. They are enormous because the amount of activity involved is very large: the
environments might be of different types, and the agents might have different tasks to do
in each state transition.
The difference between single-agent and multiagent solutions are as follows:
• Single-agent scenarios involve intelligent software in which the
interaction happens in one environment only. If there is another
environment simultaneously, it cannot interact with the first
environment.
• Multiagent scenarios arise when there is a bit of convergence in Reinforcement
Learning. Convergence is when the agent needs to interact far
more often with different environments to make a decision. This
scenario is tackled by multiagents, as single agents cannot tackle
convergence; a single agent would have to connect to other environments
in which there might be different scenarios involving simultaneous
decision-making.
• Multiagents have dynamic environments compared to
single agents. Dynamic environments can involve changes in the
environments the agents interact with.


Figure 1-17 shows the single-agent scenario.

Figure 1-17. Single agent

Figure 1-18 shows how multiagents work. There is an interaction between two agents
in order to make the decision.


Figure 1-18. Multiagent scenario

Conclusion
This chapter touched on the basics of Reinforcement Learning and covered some key
concepts. We covered states and environments and how the structure of Reinforcement
Learning looks.
We also touched on the different kinds of interactions and learned about single-
agent and multiagent solutions.
The next chapter covers algorithms and discusses the building blocks of
Reinforcement Learning.

themselves they would say only that they had found a dead body.

Day after day, they kept seeing and hearing about the case on the
videaud, and pledged each other to silence. Then suddenly one of
the boys had a horrible thought—they had forgotten that the brick
would show their fingerprints!... They had come desperately to
search for it when Margret overheard them. Kazazian's men found it
without any difficulty; it had been just out of the gardeners' regular
track.
In view of the accidental nature of the whole affair, and the boys' full
confession, they got off easy. They were sentenced to only five
years' confinement in a psychiatric retraining school.
The suspects against whom nothing could be proved were released
and kept under surveillance. Pol Akkra, and all the proved Naturists,
were sentenced to prefrontal lobotomies. Margret Akkra, in return for
her help in solving the mystery, secured permission to take her father
home with her. A purged and docile man, he was quite capable of
the routine duties of housekeeping.
The killing of Madolin Akkra was solved. But one question remained:
how and why had she been in Central Park at all?
The answer, when it came, was surprising and embarrassingly
simple. And this is the part that has never been told before.
Pol Akkra, a mere simulacrum of the man he had been, no longer
knew his living daughter or remembered his dead one. But in the
recesses of his invaded brain some faint vestiges of the past
lingered, and occasionally and unexpectedly swam up to his
dreamlike consciousness.
One day he said suddenly: "Didn't I once know a girl named
Madolin?"
"Yes, father," Margret answered gently, tears in her eyes.
"Funny about her." He laughed his ghastly Zombie chuckle. "I told
her that was a foolish idea, even if it was good Nat—Nat-something
theory."
"What idea was that?"
"I—I've forgotten," he said vaguely. Then he brightened. "Oh, yes, I
remember. Stand barefoot in fresh soil for an hour in the light of the
full moon and you'll never catch cold again.
"She was subject to colds, I think." (About the only disease left we
have as yet no cure for.) He sighed. "I wonder if she ever tried it."
THE END
*** END OF THE PROJECT GUTENBERG EBOOK THE AKKRA CASE ***
