Building Responsible AI Algorithms
A Framework for Transparency, Fairness, Safety, Privacy, and Robustness
Toju Duke
London, UK
Table of Contents

Introduction
Part I: Foundation
Chapter 1: Responsibility
    Avoiding the Blame Game
    Being Accountable
    Eliminating Toxicity
    Thinking Fairly
    Protecting Human Privacy
    Ensuring Safety
    Summary
Chapter 2: AI Principles
    Social Benefits
    Privacy, Safety, and Security
    Summary
Chapter 3: Data
    The History of Data
    Data Ethics
        Ownership
        Data Control
        Transparency
        Accountability
        Equality
        Privacy
        Intention
        Outcomes
    Data Curation
    Best Practices
        Annotation and Filtering
        Rater Diversity
        Synthetic Data
        Data Cards and Datasheets
        Model Cards
    Tools
    Alternative Datasets
    Summary
Chapter 5: Safety
    AI Safety
        Autonomous Learning with Benign Intent
        Human Controlled with Benign Intent
Chapter 6: Human-in-the-Loop
    Understanding Human-in-the-Loop
    Human Annotation Case Study: Jigsaw Toxicity Classification
    Rater Diversity Case Study: Jigsaw Toxicity Classification
        Task Design
        Measures
        Results and Conclusion
        Risks and Challenges
    Summary
Chapter 7: Explainability
    Explainable AI (XAI)
    Implementing Explainable AI
        Data Cards
        Model Cards
        Open-Source Toolkits
        Accountability
    Dimensions of AI Accountability
        Governance Structures
        Data
        Performance Goals and Metrics
        Monitoring Plans
        Explainable AI Tools
    Summary
Chapter 8: Privacy
    Privacy Preserving AI
        Federated Learning
        Differential Privacy
    Summary
Chapter 9: Robustness
    Robust ML Models
        Sampling
        Bias Mitigation (Preprocessing)
        Data Balancing
        Data Augmentation
        Cross-Validation
        Ensembles
        Bias Mitigation (In-Processing and Post-Processing)
        Transfer Learning
        Adversarial Training
        Making Your ML Models Robust
    Model Challenges
        Data Quality
        Model Decay
        Feature Stability
        Precision versus Recall
        Input Perturbations
    Summary
Appendix A: References
Index
About the Author
Toju Duke has over 18 years of experience spanning advertising, retail, not-for-profit, and tech. She is a popular speaker, author, thought leader, and consultant on Responsible AI. Toju spent 10 years at Google, where she spent the last couple of years as a Programme Manager on Responsible AI, leading various Responsible AI programmes across Google's product and research teams with a primary focus on large-scale models and Responsible AI processes. Prior to her time on Google's research team, Toju was the EMEA product lead for Google Travel and worked as a specialist across a couple of Google's advertising products. She is also the founder of Diverse AI, a community interest organisation with a mission to support and champion underrepresented groups to build a diverse and inclusive AI future. She provides consultation and advice on Responsible AI practices worldwide.
About the Technical Reviewer
Maris Sekar is a professional computer
engineer, senior data scientist (Data Science
Council of America), and certified information
systems auditor (ISACA). He has a passion
for using storytelling to communicate about
high-risk items in an organization to enable
better decision making and drive operational
efficiencies. He has cross-functional work
experience in various domains, including risk
management, oil and gas, and utilities. Maris has led many initiatives for
organizations, such as PricewaterhouseCoopers LLP, Shell Canada Ltd.,
and TC Energy. Maris’ love for data has motivated him to win awards, write
articles, and publish papers on applied machine learning and data science.
Introduction
I’ve always been a huge fan of technology and innovation and a great
admirer of scientists and inventors who pushed the boundaries of
innovation, some trying 99 times, 10,000 times, and more before achieving
their goals and making history. Take the famous inventor of the lightbulb,
Thomas Edison, or the brilliant Grace Hopper, who invented the first
compiler. And before computers were transformed into machines, we had
human computers, such as the super-intelligent “hidden figures,” Mary
Jackson, Katherine Johnson, and Dorothy Vaughan of NASA (National
Aeronautics and Space Administration). We also have amazing geniuses
like Albert Einstein, whose theories on relativity introduced many new
ways of evaluating energy, time, space, gravity, and matter. Or the likes of
Alexander Graham Bell, who introduced the telephone, and Josephine
Cochrane, who we should thank for saving us from washing dishes by
hand and invented the ubiquitous dishwasher!
These are just a few innovators and inventors who contributed
greatly to technology, made our lives better and easier, and shed light on
unknown phenomena. And there are many other sung and unsung heroes
who contributed greatly to the world of science and technology.
Fast forward to today, to an ever-changing and evolving world:
Humans are still inventing, creating, and introducing breakthroughs,
especially in the field of technology. Many recent inventions are driven
by artificial intelligence (AI), which is made up of deep learning networks
(a form of AI based on neural networks, designed to mimic neurons in
the human brain). For example, ChatGPT (a conversational AI built on a
large language model), which is designed to provide intelligent answers
to questions and solve many difficult tasks, has become the world’s fastest
growing app, with over 100 million users in just months. It still blows my
mind how "intelligent" this app and other similar dialogue AI systems
are. Another example is the various image recognition AI systems, which
are used across the healthcare, automotive, criminal justice, agriculture,
and telecommunications industries. We also have voice assistants such
as Siri, Google Assistant, and Alexa, speech recognition systems, and so
on. There’s also DeepMind’s (a UK-based AI company) Alphafold, which
predicts 3D models of protein structures, contributing immensely to
the medical field and driving further drug development and discovery.
Alphafold solved a long-standing problem in the history of biology and
medical science.
While we have these and so many more amazing use cases and
success stories of AI and machine learning (ML) applications/products,
it’s important to note that there are also fundamental issues that plague
these technologies. These range from bias, toxicity, harm, hate speech,
misinformation, privacy, human rights violations, and sometimes the loss
of life, to mention a few. Although AI technologies are great and highly
beneficial to society in various ways, AI sometimes produces harm due
to the lack of representative and diverse training data, lack of data ethics
and curation best practices, less than optimal fine-tuning and training
methods, and the sometimes harmful ways these AI applications are used.
In this book, I cover some examples where AI has gone drastically
wrong and affected people’s lives in ways that had a ripple effect on various
groups and communities. Despite these various downfalls, I believe that
AI has the potential to solve some of the world’s biggest problems, and it
is being used in various ways to tackle long-standing issues like climate
change, as an example, by a good number of organizations. While we have
many well-meaning individuals developing these highly “intelligent”
machines, it’s important to understand the various challenges faced by
these systems and humanity at large and explore the possible ways to
address, resolve, and combat these problems.
PART I
Foundation
CHAPTER 1
Responsibility
Responsibility is a relatable, simple term. Everyone, or almost
everyone, deems themselves to be responsible in most if not every area of
their lives. There's a sense of fulfillment and gratification when you think
you have carried out a responsible act. Being responsible refers to carrying
out a duty or job that you're charged with.[1] Most people in positions of
authority feel a sense of responsibility to execute their jobs effectively.
This includes parents, lawyers, law enforcement officers, healthcare
professionals, members of a jury, employees, employers, and every
member of society who has reached decision-making age. This chapter
delves into AI responsibility and the need for building responsible AI
systems.
Despite the fact that we were encouraged to be responsible at a
very early age, it’s often not accounted for in technology fields, and in
particular in machine learning (ML) subfields, such as natural language
processing, computer vision, deep learning, neural networks, and so on.
Now you might argue that this isn’t entirely true. There are ethical artificial
intelligence (AI) and responsible AI practices developed every day.
Although ethics and responsibility have been long-standing conversations
that have taken place over the years, it’s only recently, within the last 2-5
years, that we’ve seen an uptick in the adoption of responsible AI practices
across industries and research communities. We have also seen more
interest, from policy makers and regulatory bodies, in ensuring that AI is
human-centric and trustworthy.[2]
There are several reasons that responsible AI has been adopted slowly
over the years, the most prominent being that it's a new field that's only
gradually gaining recognition among AI practitioners. It's a bit sad that
responsible and ethical practices were not adopted at scale, despite the
more than 66 years since AI's introduction.[3] Taking some cues from mental health
experts, let's look at a few recommendations for acting responsibly.
The problem with the blame culture is that it tends to focus negatively
on people, which consequently prevents the right lessons from being learned—
what caused the problem and how to prevent it from happening again. In
light of this, a blameless post-mortem helps engineering teams understand
the reasons an incident occurred, without blaming anyone or any team in
particular. This, in turn, enables the teams to focus on the solution rather
than the problem. The key focus is to understand the measures to put in
place to prevent similar incidents from happening.[4]
If you’ve been working in the AI field long enough, particularly ethical
AI, you’ve heard of the infamous “trolley problem,” a series of experiments
in ethics and psychology made up of ethical dilemmas. This problem
asks whether you would sacrifice one person to save a larger number of
people.[5] In 2014, an experiment called the Moral Machine was developed
by researchers at the MIT Media Lab. The Moral Machine was designed
to crowdsource people’s decisions about how self-driving cars should
prioritize lives in different variations of the trolley problem.[6]
In a (paraphrased) scenario where a self-driving car's brakes fail and
it has two passengers onboard when approaching a zebra crossing with
five pedestrians walking across—an elderly couple, a dog, a toddler, and
a young woman—who should the car hit? Should the car hit the elderly
couple and avoid the other pedestrians, or should it hit the toddler? Or
should the car swerve and avoid the pedestrians but potentially harm the
passengers? In other words, should it take action or stay on course? Which
lives matter more—humans versus pets, passengers versus pedestrians,
young versus old, fit versus sickly, higher social status versus lower, cisgender
versus non-binary?
In cases in which AI-related failures led to injury or death, I believe
everyone who was involved in the development of the offending vehicle
should be held accountable. That is, from the research scientist, to the
engineer, to the CTO, to the legal officer, to marketing, public relations, and
so on. It’s the responsibility of everyone involved in the development of
Being Accountable
When people accept accountability, it means they understand their
contribution to a given situation. Being accountable also means avoiding
the same mistakes over and over again. In some cases, this requires
giving an account or statement about the part the person had to play.
Not surprisingly, accountability is a key component of responsible
AI. According to the Organisation for Economic Co-operation and
Development (OECD), companies and individuals developing, deploying,
and operating AI systems should be held accountable for their proper
functioning, in line with the OECD's values-based AI principles.[7] Chapter 2
delves further into the topic of responsible AI principles.
Let’s look at a couple of examples where AI drastically impacted the
lives of certain members of society, and the responsible companies were
held accountable. Before delving into these stories, I’d like to take a pit stop
and state that I’m a huge advocate and supporter of AI. I strongly believe
that AI has the potential to solve some of the world’s most challenging
problems, ranging from climate change, to healthcare issues, to education,
and so on. AI has also been adopted in several projects for good, otherwise
known as AI for Social Good.
For example, top tech companies such as Google, Microsoft, IBM, and
Intel are working on projects ranging from environmental protection to
humanitarian causes, to cancer diagnostics and treatment, to wildlife
conservation challenges, among others.[8]
AI also has many business benefits, including reducing operational
costs, increasing efficiency, growing revenue, and improving customer
experiences. The global AI market size was valued at $93.5 billion in 2021,
could have led to his false arrest. Knowing that the mere fact that he’s a
black man living in the United States makes him an easy target for the
police, Williams may not have been altogether surprised, but he must
have been quite saddened and anxious, hoping he’d return to his family in
good time.
In a world where racism and discrimination still very much exist,
it's quite appalling to see these societal issues prevalent in technologies
employed and used by people in authority. These people of authority are
the same individuals who are employed to protect our communities. If
they decide to use technology and AI systems in their jobs, it’s their duty to
ensure that these systems promote fairness and equal treatment of people
from different backgrounds, cultures, ethnicities, races, disabilities, social
classes, and socio-economic statuses.
The law enforcement agency that committed the blunder apologized
in a statement,[9] noting that Williams could have the case and fingerprint
data expunged. When we consider "accountability" and what it entails, an
apology and removal of his record is not enough. What the county needs
to do is make sure this sort of life-changing error doesn't happen again.
They need to aim for clean data, run tests across the different subgroups
for potential biases, and maintain transparency and explainability
by documenting information on the data and the model, including
explanations of the tasks and objectives the model was designed for.
Carrying out accuracy and error checks will also help ensure results are
accurate and less biased.
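To make the "run tests across the different subgroups" recommendation concrete, here is a minimal sketch (my illustration, not an implementation from this book) of a disaggregated evaluation: computing accuracy and false-positive rate per demographic subgroup, since a wrongful face-recognition match is a false positive. The record fields and group labels are hypothetical placeholders.

```python
# Illustrative sketch: disaggregated (per-subgroup) error analysis.
# Field names ('group', 'label', 'pred') and groups 'A'/'B' are hypothetical.

def subgroup_error_rates(records):
    """Compute accuracy and false-positive rate for each subgroup.

    `records` is a list of dicts with keys: 'group' (subgroup name),
    'label' (true 0/1 outcome), and 'pred' (the model's 0/1 prediction).
    """
    stats = {}
    for r in records:
        g = stats.setdefault(r["group"], {"n": 0, "correct": 0, "fp": 0, "neg": 0})
        g["n"] += 1
        g["correct"] += r["pred"] == r["label"]   # bool adds as 0/1
        if r["label"] == 0:                        # true negatives/false positives
            g["neg"] += 1
            g["fp"] += r["pred"] == 1
    return {
        name: {
            "accuracy": g["correct"] / g["n"],
            "false_positive_rate": g["fp"] / g["neg"] if g["neg"] else 0.0,
        }
        for name, g in stats.items()
    }

# Toy data: the model performs perfectly on group A but produces a
# false positive on group B -- the kind of disparity behind wrongful
# face-recognition matches.
records = [
    {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 0, "pred": 1},
    {"group": "B", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 1},
]
rates = subgroup_error_rates(records)
```

Aggregate accuracy alone would hide the gap; only reporting metrics per subgroup surfaces it, which is why audits of this kind belong in the documentation the author recommends.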
Some facial recognition software has been banned in certain use cases
in the United States, including in New York, which passed a law in 2021
prohibiting facial recognition at schools, and in California, which passed
a law that banned law enforcement from using facial recognition on their
body cameras. Maryland also passed a law that prohibits the use of facial
recognition during interviews without signed consent.[12] Despite this
progress, there has been a steady increase in states recalling their bans on
facial recognition; for example, Virginia recently eliminated its prohibition
on police use of facial recognition, only one year after approving the ban.[13]
I’m happy to state that a few tech companies—Google and most
recently Microsoft, Amazon, and IBM—stopped selling facial recognition
technology software to police departments and have called for federal
regulation of the technology.[14]
Across the globe, only two countries have banned the use of facial
recognition—Belgium and Luxembourg.[15] In Europe, the draft EU AI Act
released in April 2021 aims to limit the use of biometric identification
systems, including facial recognition.[16] While
some parts of the world are still deliberating on how they’ll use facial
recognition software, it’s encouraging to see there are countries, regulatory
bodies, and organizations that recognize the dangers of facial recognition
technologies and are ready to hold businesses accountable.
Eliminating Toxicity
Distancing yourself from people who exhibit toxic traits is advice that
mental health practitioners provide to anyone seeking guidance about
responsibility. By the same token, removing toxicity from ML models is
a fundamental tenet of responsible AI. As datasets are built from the
Internet, which certainly includes human data and biases, these datasets
tend to have toxic terms, phrases, images, and ideas embedded in them.
It’s important to note that “toxicity” is contextual. What one person regards
as toxic another might not, depending on their community, beliefs,
experiences, and so on. In this context, toxic refers to the way the model is
used; it’s “toxic” when used in a harmful manner.
Thinking Fairly
In August 2020, hundreds of students in the UK gathered in front of the
Department for Education chanting “swear words” at the algorithm.
Thousands of students in England and Wales had received their “A-level”
exam grades, which were scored by an algorithm. Due to the pandemic
and social distancing measures, the “A-level” exams were cancelled and
the UK’s Office of Qualifications and Examinations Regulation (Ofqual)
decided to estimate the A-level grades using an algorithm.[24]
Ensuring Safety
When developing algorithms, developers must ensure that AI is deployed
in a safe manner that does not harm or endanger its users. AI safety is one
of the dimensions of responsible AI and it’s important to bear in mind
when developing ML systems. As an example, there are several online
Summary
This chapter laid the foundation for responsible AI by looking at the term
“responsibility” and what it means to be responsible and accountable
while protecting human rights, preserving user privacy, and ensuring
human safety. You saw various examples, from the well-known ethical
question of the “trolley problem” to several real-life examples of AI models
that displayed “irresponsible” behavior, and the detrimental effects they’ve
had. The next chapter looks at the next building block of a responsible AI
framework—principles.
CHAPTER 2
AI Principles
The first chapter set the foundation for responsible AI frameworks,
kicking off with responsibility and a few examples of AI and its ethical
limitations. This chapter delves into “AI principles,” which are fundamental
components of building responsible AI systems.
Any organization developing and building AI systems should base
these systems on a set of principles, otherwise known as AI principles
or guidelines. AI principles are stepping stones for all types of AI
development carried out across an organization. They are meant to be the
foundation for AI systems and describe how they should be responsibly
developed, trained, tested, and deployed.[30]
A good number of organizations and governing bodies have a defined
set of AI principles that act as a guiding force for these organizations, and
beyond. AI communities have seen a steady increase in AI principles and
guidelines over the past few years. While the design and outline of AI
principles are fundamental, it's important that governance and
implementation processes are put in place to execute these principles
across an organization.
Most AI principles aim to develop ethical, responsible, safe,
trustworthy, and transparent ML models centered on the following areas:
fairness, robustness, privacy, security, trust, safety, human-centric AI and
explainability, where explainability comprises transparency and
accountability. The first section of this chapter looks at fairness, bias, and
human-centered values and explores how these apply to AI principles.
there is no intermediate group to connect them. We have already
said that the Odonata consist of two very distinct divisions—
Anisopterides and Zygopterides. The former group comprises the
subfamilies Gomphinae, Cordulegasterinae, Aeschninae,
Corduliinae, and Libellulinae,—Insects having the hinder wings
slightly larger than the anterior pair; while the Zygopterides consist of
only two subfamilies—Calepteryginae and Agrioninae; they have the
wings of the two pairs equal in size, or the hinder a little the smaller.
The two groups Gomphinae and Calepteryginae are each, in several
respects, of lower development than the others, and authorities are
divided in opinion as to which of the two should be considered the
more primitive. It is therefore of much interest to find that there exists
an Insect that shares the characters of the two primitive subfamilies
in a striking manner. This Insect, Palaeophlebia superstes (Fig. 272),
has recently been discovered in Japan, and is perhaps the most
interesting dragon-fly yet obtained. De Selys Longchamps refers it to
the subfamily Calepteryginae, on account of the nature of its wings;
were the Insect, however, deprived of these organs, no one would
think of referring Palaeophlebia to the group in question, for it has
the form, colour, and appearance of a Gomphine Odonate.
Moreover, the two sexes differ in an important character,—the form
of the head and eyes. In this respect the female resembles a
Gomphine of inferior development; while the male, by the shape and
large size of the ocular organs, may be considered to combine the
characters of Gomphinae and Calepteryginae. The Insect is very
remarkable in colour, the large eyes being red in the dead examples.
We do not, however, know what may be their colour during life, as
only one pair of the species is known, and there is no record as to
the life-history and habits. De Selys considers the nearest ally of this
Insect to be Heterophlebia dislocata, a fossil dragon-fly found in the
Lower Lias of England.
CHAPTER XIX
Fig. 281.—A, Last three abdominal segments and bases of the three
caudal processes of Cloëon dipterum: r, dorsal vessel; kl, ostia
thereof; k, special terminal chamber of the dorsal vessel with its
entrance a; b, blood-vessel of the left caudal process; B, twenty-
sixth joint of the left caudal process from below; b, a portion of the
blood-vessel; o, orifice in the latter. (After Zimmermann.)
The life-history has not been fully ascertained in the case of any
species of may-fly; it is known, however, that the development of the
nymph sometimes occupies a considerable period, and it is thought
that in the case of some species this extends to as much as three
years. It is rare to find the post-embryonic development of an Insect
occupying so long a period, so that we are justified in saying that
brief as may be the life of the may-fly itself, the period of preparation
for it is longer than usual. Réaumur says, speaking of the winged fly,
that its life is so short that some species never see the sun. Their
emergence from the nymph-skin taking place at sunset, the duties of
the generation have been, so far as these individuals are concerned,
completed before the morning, and they die before sunrise. He
thinks, indeed, that individuals living thus long are to be looked on as
Methuselahs among their fellows, most of whom, he says, live only
an hour or half an hour.[364] It is by no means clear to which species
these remarks of Réaumur refer; they are doubtless correct in
certain cases, but in others the life of the adult is not so very short,
and in some species may, in all probability, extend over three or four
days; indeed, if the weather undergo an unfavourable change so as
to keep them motionless, the life of the flies may be prolonged for a
fortnight.
Nearly 300 species of Ephemeridae are known, but this may be only
a fraction of those that actually exist, very little being known of may-flies