Vertically Integrated Architectures
Versioned Data Models, Implicit Services, and Persistence-Aware Programming

Jos Jong
Vertically Integrated Architectures: Versioned Data Models, Implicit
Services, and Persistence-Aware Programming
Jos Jong
AMSTELVEEN, The Netherlands

ISBN-13 (pbk): 978-1-4842-4251-3
ISBN-13 (electronic): 978-1-4842-4252-0


https://doi.org/10.1007/978-1-4842-4252-0
Library of Congress Control Number: 2018966806
Copyright © 2019 by Jos Jong
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way,
and transmission or information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark
symbol with every occurrence of a trademarked name, logo, or image, we use the names, logos,
and images only in an editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the author nor the editors nor the publisher can accept any legal
responsibility for any errors or omissions that may be made. The publisher makes no warranty,
express or implied, with respect to the material contained herein.
Managing Director, Apress Media LLC: Welmoed Spahr
Acquisitions Editor: Louise Corrigan
Development Editor: James Markham
Coordinating Editor: Nancy Chen
Cover designed by eStudioCalamar
Cover image designed by Freepik (www.freepik.com)
Distributed to the book trade worldwide by Springer Science+Business Media New York,
233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505,
e-mail orders-ny@springer-sbm.com, or visit www.springeronline.com. Apress Media, LLC is a
California LLC and the sole member (owner) is Springer Science+Business Media Finance Inc
(SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail rights@apress.com, or visit www.apress.com/
rights-permissions.
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook
versions and licenses are also available for most titles. For more information, reference our Print
and eBook Bulk Sales web page at www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is
available to readers on GitHub via the book’s product page, located at www.apress.com/
9781484242513. For more detailed information, please visit www.apress.com/source-code.
Printed on acid-free paper
I dedicate this book to the worldwide software
engineering community.
Stay curious, keep innovating, and push our
profession forward.
Table of Contents
About the Author ... ix
Acknowledgments ... xi
Preface ... xiii

Chapter 1: The Problem ... 1
  Never-Ending Complexity ... 2
  Data Models Everywhere ... 4
  Growing Insecurity ... 5
  Architectural Fog ... 6
  Language Wars ... 8
  Frameworks by the Dozen ... 10
  Summary ... 11

Chapter 2: History: How Did We Get Here? ... 15
  How and Why JSON Conquered ... 16
  How and Why We (Used to) Love Tables ... 19
  How and Why We Reverted to 3GL ... 23
  Summary ... 28

Chapter 3: Analysis: What’s Going Wrong? ... 29
  To DRY or Not to DRY ... 30
  Serving Business Logic ... 33
  Tiers That Divide Us ... 36
  One Level Too Low ... 40
  Limited by Frameworks ... 41
  Human Factors ... 44
  Summary ... 46

Chapter 4: Five Dogmas That Hold Us Back ... 49
  We Need Three Tiers ... 50
  Layers Are Good ... 53
  Dependencies Are Bad ... 56
  Monolithic Is Ugly ... 60
  4GL Is Dead ... 62

Chapter 5: The Solution: Vertical Integration ... 67
  A Vertically Integrated Architecture ... 68
  A Unified Conceptual Data Model ... 73
  Implicit Services ... 77
  A Persistence-Aware Language ... 79
  Challenges ... 82
  Summary ... 85

Chapter 6: The Art of Querying ... 87
  It’s All About Relationships ... 89
  Relationships or Properties? ... 94
  Property Chaining ... 101
  Filters ... 108
  Summary ... 111

Chapter 7: The IR Model ... 113
  Unifying Ownership ... 114
  The IR Model ... 118
  Archiving Instead of Deleting ... 127
  Associations ... 131
  Attributes ... 135
  Values vs. Items ... 136
  Putting It All Together ... 141
  Inheritance? ... 146
  Summary ... 151

Chapter 8: Implicit Services ... 153
  Perfect Granularity ... 154
  The Service Request ... 156
  Access Control ... 160
  Update vs. Read Services ... 161
  Schema Evolution vs. Service Versioning ... 162
  Inverse Schema Mappings ... 167
  Mapping Exemptions ... 172
  Summary ... 173

Chapter 9: Persistence-Aware Programming ... 175
  Introducing Functions ... 177
  About Variables ... 182
  Control Flow and Subcontexts ... 183
  Procedures vs. Functions ... 186
  Making It Perform ... 188
  Beyond the von Neumann Model ... 192
  Exploiting Referential Transparency ... 194
  The Importance of Intrinsics ... 196
  The Contribution of Set Orientation ... 198
  Source Code Reconsidered ... 199
  The Role of Internal Memory ... 206
  Summary ... 209

Chapter 10: User Interface Integration ... 211
  Query Aggregation ... 213
  Handling Uncommitted Data ... 220
  Generic Query Tools ... 230
  End-User Development ... 233
  Summary ... 236

Index ... 239

About the Author
Jos Jong is a self-employed independent
senior software engineer and software
architect. He has been developing software
for more than 35 years, in both technical and
enterprise environments. His knowledge
ranges from mainframes, C++, and Smalltalk
to Python, Java, and Objective-C. He has
worked with numerous different platforms
and kept studying to learn about other
programming languages and concepts. In
addition to developing many generic components, some code generators,
and advanced data synchronization solutions, he has prototyped several
innovative database and programming language concepts. He is an abstract
thinker who loves to study the fundamentals of software engineering and is
always eager to reflect on new trends. You can find out more about Jos on
his blog (https://josjong.com/) or connect with him on LinkedIn
(www.linkedin.com/in/jos-jong/) and Twitter (@jos_jong_nl).

Acknowledgments
There are many people who encouraged me to push forward with my ideas
and eventually write this book. I’d like to thank all of them for inspiring
me. Colleagues who were skeptical at the time helped me to rethink and
refine certain aspects of my vision. I want to thank them for the interesting
discussions. To the members of Know-IT, a group I have been a member
of for ten years, my special thanks for all the patience you have shown
me when I was suggesting better solutions again and again in whatever
discussions we were having. I want to thank the people who read my early
drafts: my good friends Marc, Rudolf, Edwin, Winfried, Peter-Paul, and
especially Remco for doing most of the initial translations and essentially
being the first full peer-reviewer. I also would like to thank Robbert, for all
the inspirational words and for setting deadlines. And special thanks to my
sister, Marian, my parents, my good friends Ger, Wilma, Nina, Sudais, and
others who supported me.

Preface
I guess most books start with lots of half-related notes and ideas. So far,
so good. But my first notes and drawings date back 30 years. During my
studies, I learned about real databases and how they magically hide a lot of
technical details from the programmer.
With SQL, I saw the beauty of a fully thought through conceptual data
model, brought to life by a neat and powerful query language. However,
I also remember asking myself whether tables are really such a good
choice to represent data. The relational model was obviously better than
anything else out there. But influenced by other methods I studied, such
as Sjir Nijssen’s natural language information analysis method (NIAM), I
imagined data more as a network of abstract objects (facts) joined together
by relationships. In SQL, you have to specify the actual relationships, based
on attributes, with every query, again and again. And because applications
are mostly not built using SQL, every query also requires its own glue code,
to fit inside the accompanying 3GL programming language. Why? These
early thought experiments eventually became the main premise of this
book.
Why doesn’t the user interface understand the underlying data model,
so that a lot of things can be arranged automatically? Why do we program
in two, three, or four languages to build a single application? And why do
we manually have to pass around strings with pieces of keys and data, as
we do with JSON nowadays?
My inspiration to resolve these dilemmas over and over is born of
frustration, experimentation, study, and lots of discussions within my peer
group. I never was a computer scientist and, as practical as I like to be,
loved working on concrete projects. But I used every slightly more generic
challenge in any project to think and experiment with potential solutions.


It always helped me to go beyond merely passing around records
between screens, for example, with generic data reporting solutions,
code generation, when useful, and fancy synchronization solutions. I
also started studying scientific papers on related subjects. All this comes
together in this book.
In an attempt to convince people that two-tier architectures and
the ideas behind 4GL/RAD-languages deserve a second chance, I start
with a thorough analysis of where we stand. Although I agree that most
contemporary architectural principles were born out of necessity, I will
explain how they eventually led to mostly disconnected tiers that we have
to cobble together repeatedly. True, this gives us a lot of flexibility, but it
forces us to write a lot of code that could be deduced from the system’s
data model. It also results in a lot of code duplication across layers and
tiers. I believe that at least 70%–80% of what we write does not concern the
business logic the application is about.
At the same time, I recognize the problems with 4GL and RAD that
made them fail. And although it helps that platforms such as OutSystems
and Mendix reintroduced the 4GL approach under the name low-code,
I still see problems. Code generation cannot optimize for every real-life
scenario, merging lots of existing techniques sounds compatible but is very
constraining at the same time, and the versioning of external interfaces is
still troublesome, as in the nineties.
What we must pursue are fundamental new concepts that are general-
purpose and flexible at the same time, not just trying to mimic what we
currently do manually. I would like to preach going back to the drawing
board, getting rid of the anxiety to create a totally new programming
language, build an actual compiler, and escape today’s dogmas.
I’m convinced that the second half of my book introduces, or at least
seeds, solutions to escape the current dilemmas. With a single unified
conceptual data model, we can build what I call implicit services and a
persistence-aware programming language to express only pure business
logic. I show that what has made two-tier architectures inflexible and not general-purpose so far is the lack of support for data model versioning and
a more conceptual approach to data modeling.
I hope my book will inspire experienced developers to explore these
ideas. I believe that the challenges will be an interesting pursuit for
computer science students. Software development is still in its infancy.
I hope that my contribution will steer us away from the endless stream of
frameworks that we see today. Trying to solve each individual problem
with a separate framework mostly brought us more complexity and
certainly did not increase developer productivity for the last decade or so.
Most of my ideas take the form of what-if proposals. That is not
because I haven’t experimented with some of them. For example, I built
a prototype to explore the persistence-aware programming language that
I present. It impressed some people, but, for now, it is not a real product.
But who knows what the future will bring.

CHAPTER 1

The Problem
Problems are not stop signs, they are guidelines.
—Robert H. Schuller

Like being in a swamp. That is how it must feel if you end up in a software
development team, after switching from another profession. Just when
you think you have a firm grasp of things, someone comes along with yet
another new concept, principle, or framework. You just want to finish
that one screen you’re working on. But there are larger stakes. And, to be
honest, after listening to all the arguments, you’re on the verge of being
convinced. Another framework gets added to the project.
The accumulation of frameworks year after year must pay off.
You would expect things to have gotten super easy. And yet, every time
a new team member is added, it becomes apparent that the team has
created its own little universe. The new guy or gal has to absorb all the
frameworks he or she is not familiar with and learn a whole new set of
architectural principles.
I am not talking here about commonsense principles every self-
respecting software engineer is expected to know. The problem lies in the
never-ending avalanche of new ideas and experiments that manifest in the
form of still more new architectural principles and frameworks.


As well-intended as they may be, it is impossible to master them all. Every new person on the team has to be brought up to speed before being able to contribute. Even the developers who were present when the architecture or framework was introduced will have had lengthy discussions about the how, what, and why of the most recent implementation. It may have looked very promising in PowerPoint, but reality can be tough. Once the hatchet is buried, two factions remain. One has poured so much energy into the defense of all the principles that it cannot let go. That faction will continue to preach that if you apply them properly, the gains are huge. The other faction is glad the discussion is over. It gets back to work, to get things done, partly exercising architecture in name only.
Maybe we shouldn’t make such a fuss about this. The information and communications technology (ICT) industry is in its infancy. Although we have been
developing software for several decades now, there is still so much room
for improvement that we should actually praise people for trying new
things. But this comes with pitfalls. Many initiatives could be categorized
as old wine in new bottles. We don’t need yet another implementation
of an existing concept. We need better concepts that last longer than
the typical three-to-five-year life expectancy of many frameworks and
architectural principles. We need a more fundamental approach.

Never-Ending Complexity
Developing software is a complex endeavor. While one would have
expected it to have gotten easier over time, the exact opposite seems true.
Back in the day, one could build a whole system using Turbo Pascal, C#,
Java, and some SQL. But today, before you know it, you’re once again
Googling the latest features of HTML, to see which browser does or does
not support them. Your CSS files are getting out of hand, so you start
generating them with Sass. And while you were using Angular previously,
you’re thinking about switching to React. Your CV is growing and growing.


One of your team members has a lot of experience with Node.js. He makes sure to mention this every lunchtime. The idea emerges to build
that one new service with Node.js instead of with Java: “That’s one of
the benefits of microservices: to choose the best-fitting implementation
for each service.” We read about how efficient Node.js can be in
asynchronously accessing multiple external services. Naturally, some team
members, after ten years of doing Java projects, are up for something new.
They get excited!
The Java faction has not been resting on its laurels either. It left behind
a past with J2EE, Entity Beans, JSP, Hibernate, JSF, and numerous other
frameworks and concepts. But after first adopting several flavors of Spring,
the itch starts again. Someone on your team suggests implementing
certain logic using the Actor model. The proof of concept was successful.
The developers’ screens now show information about the Actor framework
more and more often. Challenge accepted.
The polyglotting school of thought embraces the idea of mastering
multiple programming languages. And, of course, it makes sense to take a
look at another programming language every now and then. You can learn
about different programming styles. And that generally makes you a better
programmer. The question, however, arises: What to do in five or ten years,
when systems built with multiple languages have to be maintained? Sure,
if you’re cynical, this can be viewed as a guarantee of job security, but it
can really burden the customer with additional costs and frustrations.
It is already difficult to find an experienced programmer, let alone a
full stack developer with knowledge of exactly the languages, concepts,
and frameworks utilized in a particular project. Naturally, part of the
increased complexity results from the fact that we expect more from the
applications we build. User interfaces today are much more advanced than
the character-based screens of yesteryear. We access data from multiple
sources, and we may have to serve millions of concurrent users. Despite all
this, most of the complexity we encounter can be attributed to an industry
that’s rapidly evolving.


Let’s conclude that it is nobody’s fault. We are all involved in a quest to
simplify software development, and we certainly don’t want to go back to
the old days. But the smorgasbord of concepts and techniques fragments
the industry. The Internet has enabled the exchange of knowledge but
also led to an ocean of new ideas, both good and bad. Instead of getting
simpler, building an application is getting more complex.

Data Models Everywhere


One of the creeds of software engineering is DRY—don’t repeat yourself.
However, if you take a look at the source code of any randomly selected
system, you will see repetition happening everywhere, within layers
and across layers and tiers. A huge proportion of the code we write is
essentially just retrieving, converting, and returning the exact same data
items, albeit in different contexts and in a variety of formats. Why are we
writing all that code?
At its core, virtually all software consists of storing, processing, and
displaying information. The structure of that information can be described
as entities, attributes, and relationships. Every item in such a model will be
reflected in the software you develop—in the names and types of columns
of a table; in the names of classes, getters, and setters, and in the data types they receive and return; in the JSON structures that are exchanged
with the client or other systems; and I could go on. In the UI client itself,
the data model will probably be reflected in HTML elements, JavaScript
structures, source code, and even partly in CSS styles. All this results in the
definition of a given entity, attribute, or relationship being expressed in,
let’s say, five to ten places, if not more.
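To make this concrete, consider a small, purely illustrative sketch (the names are hypothetical, not taken from this book) of how a single attribute, an email address on a Customer entity, typically has to be spelled out over and over across the stack. The Java fragment compiles on its own; the other layers are shown as comments because they live in other languages and tiers.

// Hypothetical example: one attribute ("email" on Customer) and some of the
// places where it has to be declared and handled by hand.

// 1. In the database schema (SQL DDL):
//      CREATE TABLE customer (id BIGINT PRIMARY KEY, email VARCHAR(255) NOT NULL);

// 2. In the entity class, with its field, getter, and setter:
public class Customer {
    private long id;
    private String email;

    public long getId() { return id; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// 3. In the data-access code that maps a table row to the object:
//      customer.setEmail(resultSet.getString("email"));

// 4. In the JSON exchanged with the client:
//      { "id": 42, "email": "jane@example.com" }

// 5. In the HTML form and the JavaScript that reads it:
//      <input type="email" name="email">
//      document.querySelector("[name=email]").value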
This means that for every entity, attribute, or relationship added to
the system, we have to write multiple pieces of code in different layers,
probably even in different programming languages. The actual amount
may depend on the development environment at hand. However, it is
and remains manual labor. Adding an attribute might not be such a big
deal, but with adding entities and relationships, there is typically more
complexity involved.
The fact that a data model manifests itself in so many ways also leads
to the duplication of code taking care of validations and constraints. Users
prefer to have immediate feedback from the user interface when data
entered does not fit certain constraints. But to be sure to never store invalid
data, you want the same check to be repeated in the service layer as well.
And maybe you would even have the database check it again. Now imagine
we develop multiple UI clients (web, mobile), and we end up with three to
five places in which we do the exact same validations for which we write,
and maintain, specific pieces of code.
It is fair to say that while we mastered the art of code reuse in most
programming languages, there is still one aspect that leads to a lot of code
repetition, and that is the data model itself.

Growing Insecurity
With half the planet now sharing its personal and business information
via online services, you would expect software security to be rock solid.
Instead, the number of data breaches has been on the rise for years (1,579
reported cases and 178 million accounts exposed in 2017 in the United
States alone).[1]
How can it be that an industry with so much money and prestige at stake makes and accepts so many blunders? Undeniably, one reason is the
aforementioned ever-increasing complexity. No matter how easy it can be
to avoid a particular security threat, risks lurk at every level. Any nontrivial

service is made up of several software components: load balancers,


hypervisors, virtual machines, operating systems, containers, application
servers, databases, frameworks, let alone its own code. It requires a lot of
joint effort from knowledgeable people to keep secure every corner of such
a software stack. A problem can arise from something as little as a bug in
the application’s source code. But a suboptimal configuration of any of the
involved components or not updating certain parts of the software stack
can also result in a huge security scandal.
Specific security risks exist in the realm of authentication and
authorization. While there are numerous standard solutions for these
mechanisms, many are still, at least partly, coded by hand, sometimes just
to avoid dependence on a third party, but also because the integration
with a standard solution may be more difficult to do. As a consequence,
the average programmer is often involved in low-level security, to a point
where it becomes notoriously unsafe.
Finally, there is the risk of URL vulnerabilities. A URL can reveal a lot about the structure of a system. It often happens that people modify a URL and gain access to data they are not authorized to see. Once again, this problem exists because programmers have to identify and cater to these risks themselves.
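The pattern alluded to here is often called an insecure direct object reference. The sketch below uses hypothetical names and no particular framework; the point is merely that the ownership check is something the programmer has to remember to write by hand, for this endpoint and for every one like it.

// Hypothetical types, just enough to make the example self-contained.
record Invoice(long id, long ownerId, String contents) {}

interface InvoiceRepository {
    Invoice findById(long invoiceId);
}

class InvoiceEndpoint {
    private final InvoiceRepository repository;

    InvoiceEndpoint(InvoiceRepository repository) {
        this.repository = repository;
    }

    // Vulnerable: GET /invoices/1234 happily returns invoice 1234 to any
    // authenticated user who edits the number in the URL.
    Invoice getInvoiceUnsafe(long invoiceId) {
        return repository.findById(invoiceId);
    }

    // Safer: the ownership check is added by hand. Nothing forces it to be
    // here, and it has to be repeated for every similar endpoint.
    Invoice getInvoice(long invoiceId, long authenticatedUserId) {
        Invoice invoice = repository.findById(invoiceId);
        if (invoice == null || invoice.ownerId() != authenticatedUserId) {
            throw new SecurityException("Not authorized to read this invoice");
        }
        return invoice;
    }
}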
There is obviously no excuse for being lazy about software security. But with the current state of ICT, it is as if we are trying to keep the general public out of private rooms in an office building that is a labyrinth of floors and little rooms with doors and windows everywhere, while we attempt to construct some of the key locks ourselves.

[1] Statista, “Annual number of data breaches and exposed records in the United States from 2005 to 2018 (in millions),” www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/, 2018.

Architectural Fog
Software architecture can be both the solution and the cause of problems.
As mentioned, no matter how good the intentions of architects, they
may cast a team into a fog-laden land, and fairly often, the benefits they
promise are never realized.


Architecture is about having a blueprint, in the form of guidelines, for how to construct a system. Building software is a relatively new phenomenon, and especially because of the continuous influx of new ideas, it is good to agree on a number of principles. The first goal is to achieve consistency, in the sense of preventing every programmer from doing things in his or her own way; the second is to gain advantages in terms of scalability, performance, and future-proofing. Sadly, there are many ways in which this can get derailed.
The preferences of the particular architect play a part. If such a person
is close to the software development process, is experienced, is able to
maintain a good overview, and is able to translate this to a blueprint for
the team, there is potential for a solid foundation. Not every architect,
however, is that skilled, let alone capable of conveying a vision to the team.
Plenty of things can go wrong in this regard.
I’ve already discussed the fact that we are bombarded by so many
different architectural visions and that some architectures can fuel a
lot of debate. Take the idea of microservices, for example. Everybody
understands the wisdom of partitioning a bigger system into subsystems,
either because there are multiple owners or stakeholders or simply because
the whole is too large to be managed by a single team. But microservices
take this idea one step further, resulting in an ongoing debate on how
small, tiny, or microscopic a microservice should or must be.
Combining different architectural concepts can be another challenge.
Ideas may work in isolation, but having them work together in a
meaningful way might introduce some dilemmas. Or they may not fit a particular environment very well. Perhaps a concept is very useful for a certain industry but far too heavyweight for your current project.
One could see architecture as an unsolidified framework. Precisely because we have so much freedom with the current third-generation programming languages (3GLs), there is a need for blueprints.
The more a language and its accompanying frameworks steer you in a
particular direction, the less need there is for an architectural story.


The freedom to choose a different architecture for each project is cool and exciting. However, when it comes to quickly building robust and secure
systems, it’s rather sad that every team has to reinvent the wheel.

Language Wars
Software exists by virtue of programming languages. So, you would expect
them to improve every year, to help us out with all the issues mentioned
so far in this chapter. The general perception is that they do. But do they?
It is also commonly accepted that frameworks can be used to extend a
programming language. But is that really true?
Virtually all software is built with 3GLs today. And these languages
did indeed evolve. Object orientation, garbage collection, and exception
handling became a standard feature for many languages. And such things
as closures, coroutines, and functional extensions are getting more and
more popular. However, object orientation was invented with Simula and
Smalltalk in the sixties and seventies. The same is true for closures and
functional programming. And coroutines have been at the core of Erlang
since the eighties. So, what we mostly see is cherry-picking from other
programming languages, both old and contemporary. In that sense, the my-language-is-better-than-yours approach is mostly about syntactic sugar. There is nothing wrong with that. But what about our more fundamental needs?
While almost no software system can do without storing data and
network communications, 3GLs are only concerned with in-memory data
handling and processing. That’s why we still deal with data persistency as
a second-class citizen. The same is true for client-server communication.
This is where code plumbing comes in. A lot of source code deals with
the marshaling and unmarshaling of messages, filling and extracting
screen data, or assembling and processing database queries. Recall that
fourth-generation languages (4GL) in the nineties delivered in this area.
There may be good reasons why we have stuck to 3GLs since then. It is still
interesting, nonetheless, to see what we can learn from that era.
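To illustrate what such plumbing looks like in practice, here is a minimal, hypothetical sketch using plain JDBC and hand-built JSON (the connection URL is a placeholder, and proper JSON escaping is omitted). None of it is business logic; it only moves the same data between a table row, local variables, and a string.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Typical 3GL plumbing: a query, a manual row-to-variable mapping, and manual
// marshaling to JSON. There is no business logic anywhere in sight.
public class CustomerPlumbing {

    public static String fetchCustomerAsJson(long customerId) throws SQLException {
        String sql = "SELECT id, email FROM customer WHERE id = ?";
        try (Connection connection =
                     DriverManager.getConnection("jdbc:h2:mem:example"); // placeholder URL
             PreparedStatement statement = connection.prepareStatement(sql)) {

            statement.setLong(1, customerId);
            try (ResultSet resultSet = statement.executeQuery()) {
                if (!resultSet.next()) {
                    return null;
                }
                long id = resultSet.getLong("id");
                String email = resultSet.getString("email");

                // Marshal it straight back out again as a JSON string.
                return "{\"id\": " + id + ", \"email\": \"" + email + "\"}";
            }
        }
    }
}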


You may think that a smart 3GL-based framework can be as good as a language with integrated persistency and data communication. And, yes,
frameworks can soften the sharp edges, in this respect, but they will never
fundamentally solve all the related challenges.
Frameworks have been used forever, to simplify the interaction with
the database or to invoke remote services. From Entity Beans to Hibernate,
and from EJB to SOAP, all have been implemented using frameworks,
either by using runtime reflection or code generation, each with their own
range of problems and added complexity. Deeper integration to solve
these issues can never be achieved, because one simply cannot augment a
language at that level.
This is because frameworks play a particular role within a
programming language. The language itself offers the programmer a
host of abstractions and features that, in theory, allow the development
of every conceivable construct. A framework, on the other hand,
is basically just a collection of data structures and accompanying
functions. This makes it impossible to add a truly new abstraction to a
language by utilizing a framework.
Consider memory management. To add garbage collection to a
language, one has to gain control over every creation of an object and,
possibly, every reference to an object. In most languages (C++ could be
regarded as an exception here), this is utterly impossible to achieve.
Another example is the concept of coroutines. Think of coroutines
as mini-processes, in that they are suspended when waiting for blocking
I/O, while giving other coroutines the chance to continue their execution.
The implementation of coroutines is typically based on the concept of
segmented stacks, which can be enlarged on demand. How software deals
with a stack is so intrinsically linked to the compiler and accompanying
runtime that such an abstraction cannot really be implemented with a
framework.


So, adding frameworks cannot compensate for a language lacking certain features. Some features just require a more fundamental and
integrated approach.
This is certainly true for data persistency and data communication.
As long as we keep trying to use frameworks to implement these aspects,
we will never be able to write source code in terms of persistent or remote
data. A programming language not being aware of what is really happening
in this sense is like having a constant impedance mismatch between the
intentions of the programmer and the programming language, with code
plumbing and suboptimal execution as a result.
So, yes, languages did evolve. Take garbage collection. That has proven
to be a great success. It prevents memory leaks and simplifies our code
dramatically. We also see functional programming features that made their
way to 3GL, for example, making it easier to write compact expressions to
handle collections. But persistency and network communication, despite being central to almost any application, have remained the neglected stepchildren and are the main cause of large amounts of plumbing
recurring in every software project.

Frameworks by the Dozen


Frameworks can keep us from reinventing the wheel, but, as mentioned,
the market is flooded with them. Their quality varies greatly, and because
the framework du jour is replaced by something better the following year,
we could be creating legacy code every day.
Now and then, this forces every software developer and architect to answer a few questions responsibly: Do I stay with the old? Do I use that hot new framework everybody is raving about? Or do I become a frontrunner by pouring my heart and soul into something completely new?


One factor in this balancing act is the perceived quality of these pieces
of art. It is a plus when a framework is widely used. This makes it stand
up under scrutiny. But even then, the question remains whether it is
maintained well enough, and how much priority is given to fixing bugs. It
is a good thing for something to be open source, but it is still tough to tell
your client that a given bug is caused by a framework, and you either have
to dive into the dark secrets of that framework or wait for others to fix it.
Besides this, popularity and quality are not constant over time. Perhaps
the initiators have already moved on to the next great idea. They take a few
shortcuts, and along the way the quality of the product begins to suffer, all
while there is a huge lock-in for your own project.
A framework may also constrain you. Not every framework plays well
with other frameworks. That means the choice of a framework cannot be
viewed independently of that of others. Sometimes frameworks overlap;
sometimes they interfere; and sometimes they force you into using other
frameworks.
As mentioned at the beginning of this chapter, frameworks may be
intended to simplify things, but they can just as easily increase overall
complexity. While it is nice to have these extensive lists on our résumés, if
we’re not careful, we are creating legacy code every single day. It requires
a lot of experience and a pragmatic approach to not bog down a project in
unnecessary complexity. The big question that we should ask ourselves is
why do we need so many frameworks anyway?

Summary
In this chapter, I have discussed the following:

• The ICT landscape is very immature and is, therefore, constantly on the move.

• Software development, in a way, is getting more complex rather than simpler.

• There is a lot of code repetition, owing to the data model having to be represented in different ways in multiple layers and tiers.

• Poor software security can partly be blamed on the complexity of the platforms we use and on the low-level code we sometimes have to write with regard to authorization and authentication.

• Although the whole point of software architecture is to provide a steady foundation, new architectural ideas come along every year.

• 3GLs do not cater to two of the most essential aspects of any software system: data persistency and data communication. To use frameworks to compensate for that is a suboptimal solution.

• To help us out with some of these issues, we use a lot of frameworks. But that creates new problems and legacy every day.

After this pile-up of misery, some people may wonder why anyone
would still want to become a programmer. But that’s not how it works with
professionals. Developing software is an inherently complex profession
that requires a certain drive to get ahead. That we have to write more code
than what would strictly be necessary is not the end of the world. And the
fact that the complexity of the trade continues to increase is perhaps even
a bonus for all those professionals who by their nature always want to learn
new things and thrive on challenges.
However, something lurks beneath the surface. The vast amount of
copycat code that we write again and again undoubtedly takes time. And
time equals money. It makes projects take longer. More code also increases
the risk of introducing bugs, which raises the demand for testing and

leads to higher operational costs. More lines of code also make it harder
to modify a system, because it implies more dependencies and a bigger
impact by any change requested.
Besides all this, we cannot ignore the fact that it becomes more and
more difficult to find good programmers. And things won’t improve if we
continue to increase the complexity of the profession, expecting candidates
to have résumés with endless lists of acronyms. It is cool to be a full stack
developer who knows all the tricks of the trade, but an ivory tower getting
higher and higher is not going to benefit our clients in the long term.
The big question in this book is how to get out of this impasse.
Therefore, in Chapter 3, I will provide a complete analysis of all the
problems mentioned so far. But because we can certainly learn from
mistakes made in the past, I will first delve into a bit of history in Chapter 2.

CHAPTER 2

History: How Did We Get Here?

A people without the knowledge of their past history, origin
and culture is like a tree without roots.
—Marcus Garvey

The landscape of programming languages, concepts, and frameworks that we see today did not appear out of thin air. And there is plenty to learn
from this past. Therefore, before I start talking about solutions, in the next
chapters, it is of value to study why we do things the way we do. It will help
us to broaden our view and give us the opportunity to learn from both
historic failures and successes.
In the nineties, when fourth-generation languages (4GL) and Rapid
Application Development (RAD) tools gained popularity, to add a new
attribute to an application, you could just change the database schema
and adapt the related screen definitions. There was no such thing as a
service layer to be rewritten or deployed. And with the introduction of
object databases later on, there finally seemed to be a solution for the
impedance mismatch between object-oriented programming languages
and databases. There are so many concepts we can still draw inspiration
from, if only by understanding why these concepts exited stage left.


Some may just have been ahead of their time. Others were limited in functionality, got too complex in certain scenarios, lacked compatibility and openness, or simply became less popular owing to changing market situations.
It is no different with all the hundreds of frameworks that are added
to GitHub every year. Of course, all these initiatives are praiseworthy,
but even frameworks end up in the garbage can more often than not.
Some may be built on a great idea but were badly implemented, lacking
sufficient functionality, or did not appear as simple as the README.md
seemed to suggest.
We must realize that sometimes true progress cannot be made
unless we take a few steps back. Back to the drawing board, as they say.
Innovation is not fueled by simply stacking upon existing techniques. Once
in a while, we must return to the roots of earlier developments and take
another direction.

How and Why JSON Conquered


System integration with what we now call services is not a recent invention.
As far back as the seventies, solutions were available in various flavors
of remote procedure calls (RPCs). Later attempts to further abstract and
standardize such remote interfaces, such as CORBA, EJB, and SOAP, in the
end, gave way to plain and simple XML or JSON over HTTP. Let’s dig into
the hows and whys of these attempts and see what we can learn.
Connecting two software systems boils down to the exchange of
data, in the sense of a message sent back and forth. If that happens
in a synchronous way, we can view it as an RPC. In the seventies and
eighties, there were numerous standards in vogue that operated on this
principle, usually in the form of plain function calls, without an abstract
concept such as a component, as was later introduced with CORBA.


They were mostly built around libraries, to take care of the marshaling
and unmarshaling of messages into binary formats. But these standards largely lacked compatibility with one another, because they were
often associated with a particular programming language or application
development environment. This didn’t fly in an IT landscape that became
increasingly heterogeneous. They were also relatively low level, with little
or no support for higher-level abstractions, objects, or security.
In response to this, solutions emerged that bridged the gap between
both different operating systems (OSs) and programming environments,
while at the same time, striving for a higher level of abstraction. Instead
of simply invoking a remote procedure, one could now think in terms of
objects or components. This was the realm of CORBA, EJB, and OLE—all
of which appeared very promising at the time. Some were even designed in
such a way that it did not matter whether a function was invoked locally or
remotely, apart from the huge difference in performance, obviously.
But, as we know, this is not the end of the story. Many developers
floundered when faced with the complexity of these standards. CORBA
especially became notorious for its many concepts and accompanying
acronyms—seemingly, way too much for something that essentially comes
down to sending messages back and forth. EJB and OLE had the additional
drawback of still being bound to a specific programming environment
(Java and Windows/COM, respectively).
One of the more fundamental problems was that there was no easy
way to deal with multiple versions of components and functions—a
serious issue in a world where the essence of having separate systems is
that they can, or sometimes must, have their own release schedule. It was
also difficult to stay away from a lot of complexity, such as generating stubs
and skeletons or different ways of handling transactions. All this, while
simplifying system-to-system communication was the core goal of these
developments.

Exploring the Variety of Random
Documents with Different Content
observed time of fall and the mean time of fall , that is, the square
of the average fluctuation in the time of fall through the distance ,
we obtain after replacing the ideal time by the mean time

In any actual work will be kept considerably less than ⅒ the


mean time if the irregularities due to the observer’s errors are not
to mask the irregularities due to the Brownian movements, so that
(29) is sufficient for practically all working conditions.[88]
The work of Mr. Fletcher and of the author was done by both of
the methods represented in equations (28) and (29). The 9 drops
reported upon in Mr. Fletcher’s paper in 1911[89] yielded the results
shown below in which is the number of displacements used in
each case in determining or .
TABLE XIV

1.68 125
1.67 136
1.645 321
1.695 202
1.73 171
1.65 200
1.66 84
1.785 411
1.65 85
When weights are assigned proportional to the number of
observations taken, as shown in the last column of Table XIV, there
results for the weighted mean value which represents an average of
1,735 displacements, or
, as against ,
the value found in electrolysis. The agreement between theory and
experiment is then in this case about as good as one-half of 1 per
cent, which is well within the limits of observational error.
This work seemed to demonstrate, with considerably greater
precision than had been attained in earlier Brownian-movement work
and with a minimum of assumptions, the correctness of the Einstein
equation, which is in essence merely the assumption that a particle
in a gas, no matter how big or how little it is or out of what it is made,
is moving about with a mean translatory kinetic energy which is a
universal constant dependent only on temperature. To show how
well this conclusion has been established I shall refer briefly to a few
later researches.
In 1914 Dr. Fletcher, assuming the value of which I had
published[90] for oil drops moving through air, made new and
improved Brownian-movement measurements in this medium and
solved for the original Einstein equation, which, when modified
precisely as above by replacing by and

becomes

He took, all told, as many as 18,837 ’s, not less than 5,900 on a
single drop, and obtained . This cannot be
regarded as an altogether independent determination of , since it
involves my value. Agreeing, however, of as well as it does with
my value of , it does show with much conclusiveness that both
Einstein’s equation and my corrected form of Stokes’s equation
apply accurately to the motion of oil drops of the size here used,
namely, those of radius from cm. to cm.
.
In 1915 Mr. Carl Eyring tested by equation (29) the value of
on oil drops, of about the same size, in hydrogen and came out
within .6 per cent of the value found in electrolysis, the probable
error being, however, some 2 per cent.
Precisely similar tests on substances other than oils were made
by Dr. E. Weiss[91] and Dr. Karl Przibram.[92] The former worked with
silver particles only half as large as the oil particles mentioned
above, namely, of radii between 1 and . and obtained
instead of 9,650, as in
electrolysis. This is indeed 11 per cent too high, but the limits of error
in Weiss’s experiments were in his judgment quite as large as this.
K. Przibram worked on suspensions in air of five or six different
substances, the radii varying from 200 to 600 , and though his
results varied among themselves by as much as 100 per cent, his
mean value came within 6 per cent of 9,650. Both of the last two
observers took too few displacements on a given drop to obtain a
reliable mean displacement, but they used so many drops that their
mean still has some significance.
It would seem, therefore, that the validity of Einstein’s Brownian-
movement equation had been pretty thoroughly established in
gases. In liquids too it has recently been subjected to much more
precise test than had formerly been attained. Nordlund,[93] in 1914,
using minute mercury particles in water and assuming Stokes’s Law
of fall and Einstein’s equations, obtained . While in
1915 Westgren at Stockholm[94] by a very large number of
measurements on colloidal gold, silver, and selenium particles, of
diameter from 65 to 130 ( ), obtained a
result which he thinks is correct to one-half of 1 per cent, this value is
, which agrees perfectly with the
value which I obtained from the measurements on the isolation and
measurement of the electron.
It has been because of such agreements as the foregoing that
the last trace of opposition to the kinetic and atomic hypotheses of
matter has disappeared from the scientific world, and that even
Ostwald has been willing to make such a statement as that quoted
on p. 10.
CHAPTER VIII
IS THE ELECTRON ITSELF
DIVISIBLE?
It would not be in keeping with the method of modern science to
make any dogmatic assertion as to the indivisibility of the electron.
Such assertions used to be made in high-school classes with respect
to the atoms of the elements, but the far-seeing among physicists,
like Faraday, were always careful to disclaim any belief in the
necessary ultimateness of the atoms of chemistry, and that simply
because there existed until recently no basis for asserting anything
about the insides of the atom. We knew that there was a smallest
thing which took part in chemical reactions and we named that thing
the atom, leaving its insides entirely to the future.
Precisely similarly the electron was defined as the smallest
quantity of electricity which ever was found to appear in electrolysis,
and nothing was then said or is now said about its necessary
ultimateness. Our experiments have, however, now shown that this
quantity is capable of isolation and exact measurement, and that all
the kinds of charges which we have been able to investigate are
exact multiples of it. Its value is .

I. A SECOND METHOD OF OBTAINING e


I have presented one way of measuring this charge, but there is
an indirect method of arriving at it which was worked out
independently by Rutherford and Geiger[95] and Regener.[96] The
unique feature in this method consists in actually counting the
number of α-particles shot off per second by a small speck of radium
or polonium through a given solid angle and computing from this the
number of these particles emitted per second by one gram of the
radium or polonium. Regener made his determination by counting
the scintillations produced on a diamond screen in the focal plane of
his observing microscope. He then caught in a condenser all the
α-particles emitted per second by a known quantity of his polonium
and determined the total quantity of electricity delivered to the
condenser by them. This quantity of electricity divided by the number
of particles emitted per second gave the charge on each particle.
Because the α-particles had been definitely proved to be helium
atoms[97] and the value of e/m found for them showed that if they
were helium they ought to carry double the electronic charge,
Regener divided his result by 2 and obtained

He estimated his error at 3 per cent. Rutherford and Geiger made
their count by letting the α-particles from a speck of radium shoot
into a chamber and produce therein sufficient ionization by collision
to cause an electrometer needle to jump every time one of them
entered. These authors measured the total charge as Regener did
and, dividing by 2 the charge on each α-particle, they obtained

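The arithmetic behind either count is brief enough to set out. The following minimal sketch, written in modern notation with purely invented figures standing in for the actual counts and measured charges, shows how the charge per particle, and from it e, falls out of the two measurements:

    # Sketch of the Rutherford-Geiger / Regener counting arithmetic.
    # Both input figures are hypothetical placeholders, not the historical data.
    alphas_per_second = 3.4e4     # assumed: alpha-particles counted per second
    charge_per_second = 3.2e-5    # assumed: total charge they deliver per second, in e.s.u.

    charge_per_alpha = charge_per_second / alphas_per_second
    e = charge_per_alpha / 2      # each alpha particle (helium) carries two electronic charges

    print("charge per alpha-particle:", charge_per_alpha, "e.s.u.")
    print("inferred electronic charge e:", e, "e.s.u.")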
All determinations of e from radioactive data involve one or the
other of these two counts, namely, that of Rutherford and Geiger or
that of
Regener. Thus, Boltwood and Rutherford[98] measured the total
weight of helium produced in a second by a known weight of radium.
Dividing this by the number of α-particles (helium atoms) obtained
from Rutherford and Geiger’s count, they obtain the mass of one
atom of helium from which the number in a given weight, or volume
since the gas density is known, is at once obtained. They published
for the number of molecules in a gas per cubic centimeter at 0° and
76 cm., , which corresponds to .
This last method, like that of the Brownian movements, is actually a
determination of N, rather than of e, since e is obtained from it only
through the relation .
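The chain of reasoning just described can be put into a few lines of arithmetic. The sketch below uses invented figures for the helium yield and the particle count; only the order of the steps, atom mass, then N, then e through the electrolysis relation, is the point:

    # Sketch of the Boltwood-Rutherford route from a particle count to N and then to e.
    # The first two figures are hypothetical placeholders, not the published measurements.
    helium_grams_per_second = 5.6e-18   # assumed: mass of helium produced per second
    alphas_per_second = 8.4e5           # assumed: alpha-particles (helium atoms) emitted per second

    mass_of_one_helium_atom = helium_grams_per_second / alphas_per_second
    N = 4.0 / mass_of_one_helium_atom   # atoms in a gram-molecule, taking helium's atomic weight as 4
    e_emu = 9650.0 / N                  # e through the electrolysis relation quoted in this chapter
    e_esu = e_emu * 2.998e10            # conversion from electromagnetic to electrostatic units

    print("N =", N, " e =", e_esu, "e.s.u.")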
Indeed, this is true of all methods of estimating e, so far as I am
aware, except the oil-drop method and the Rutherford-Geiger-
Regener method, and of these two the latter represents the
measurement of the mean charge on an immense number of α-
particles. Thus a person who wished to contend that the unit charge
appearing in electrolysis is only a mean charge which may be made
up of individual charges which vary widely among themselves, in
much the same way in which the atomic weight assigned to neon
has recently been shown to be a mean of the weights of at least two
different elements inseparable chemically, could not be gainsaid,
save on the basis of the evidence contained in the oil-drop
experiments; for these constitute the only method which has been
found of measuring directly the charge on each individual ion. It is of
interest and significance for the present discussion, however, that
the mean charge on an α-particle has been directly measured and
that it comes out, within the limits of error of the measurement, at
exactly two electrons—as it should according to the evidence
furnished by measurements on the -particles.

II. THE EVIDENCE FOR THE EXISTENCE OF A SUB-ELECTRON


Now, the foregoing contention has actually been made, and
evidence has been presented which purports to show that electric
charges exist which are much smaller than the electron. Since this
raises what may properly be called the most fundamental question of
modern physics, the evidence needs very careful consideration. This
evidence can best be appreciated through a brief historical review of
its origin.
The first measurements on the mobilities in electric fields of
swarms of charged particles of microscopically visible sizes were
made by H. A. Wilson[99] in 1903, as detailed in chap. III. These
measurements were repeated with modifications by other observers,
including ourselves, during the years immediately following. De
Broglie’s modification, published in 1908,[100] consisted in sucking
the metallic clouds discovered by Hemsalech and De Watteville,[101]
produced by sparks or arcs between metal electrodes, into the focal
plane of an ultra-microscope and observing the motions of the
individual particles in this cloud in a horizontal electrical field
produced by applying a potential difference to two vertical parallel
plates in front of the objective of his microscope. In this paper De
Broglie first commented upon the fact that some of these particles
were charged positively, some negatively, and some not at all, and
upon the further fact that holding radium near the chamber caused
changes in the charges of the particles. He promised quantitative
measurements of the charges themselves. One year later he fulfilled
the promise,[102] and at practically the same time Dr. Ehrenhaft[103]
published similar measurements made with precisely the
arrangement described by De Broglie a year before. Both men, as
Dr. Ehrenhaft clearly pointed out,[104] while observing individual
particles, obtained only a mean charge, since the different
measurements entering into the evaluation of were made on
different particles. So far as concerns , these measurements, as
everyone agrees, were essentially cloud measurements like
Wilson’s.
In the spring and summer of 1909 I isolated individual water
droplets and determined the charges carried by each one,[105] and in
April, 1910, I read before the American Physical Society the full
report on the oil-drop work in which the multiple relations between
charges were established, Stokes’s Law corrected, and e accurately
determined.[106] In the following month (May, 1910) Dr. Ehrenhaft,
[107] having seen that a vertical condenser arrangement made
possible, as shown theoretically and experimentally in the 1909
papers mentioned above, the independent determination of the
charge on each individual particle, read the first paper in which he
had used this arrangement in place of the De Broglie arrangement
which he had used theretofore. He reported results identical in all
essential particulars with those which I had published on water drops
the year before, save that where I obtained consistent and simple
multiple relations between charges carried by different particles he
found no such consistency in these relations. The absolute values of
these charges obtained on the assumption of Stokes’s Law
fluctuated about values considerably lower than .
Instead, however, of throwing the burden upon Stokes’s Law or upon
wrong assumptions as to the density of his particles, he remarked in
a footnote that Cunningham’s theoretical correction to Stokes’s Law,
[108] which he (Ehrenhaft) had just seen, would make his values
come still lower, and hence that no failure of Stokes’s Law could be
responsible for his low values. He considered his results therefore as
opposed to the atomic theory of electricity altogether, and in any
case as proving the existence of charges much smaller than that of
the electron.[109]
The apparent contradiction between these results and mine was
explained when Mr. Fletcher and myself showed[110] experimentally
that Brownian movements produced just such apparent fluctuations
as Ehrenhaft observed when the charge is computed, as had been done in
his work, from one single observation of a speed under gravity and a
corresponding one in an electric field. We further showed that the
fact that his values fluctuated about too low an average value meant
simply that his particles of gold, silver, and mercury were less dense
because of surface impurities, oxides or the like, than he had
assumed. The correctness of this explanation would be well-nigh
demonstrated if the values of computed by equations (28) or
(29) in chap. VII from a large number of observations on Brownian
movements always came out as in electrolysis, for in these
equations no assumption has to be made as to the density of the
particles. As a matter of fact, all of the nine particles studied by us
and computed by Mr. Fletcher[111] showed the correct value of ,
while only six of them as computed by me fell on, or close to, the line
which pictures the law of fall of an oil drop through air (Fig. 5, p.
106). This last fact was not published in 1911 because it took me
until 1913 to determine with sufficient certainty a second
approximation to the complete law of fall of a droplet through air; in
other words, to extend curves of the sort given in Fig. 5 to as large
values of as correspond to particles small enough to show large
Brownian movements. As soon as I had done this I computed all the
nine drops which gave correct values of and found that two of
them fell way below the line, one more fell somewhat below, while
one fell considerably above it. This meant obviously that these four
particles were not spheres of oil alone, two of them falling much too
slowly to be so constituted and one considerably too rapidly. There
was nothing at all surprising about this result, since I had explained
fully in my first paper on oil drops[112] that until I had taken great
precaution to obtain dust-free air “the values of came out
differently, even for drops showing the same velocity under gravity.”
In the Brownian-movement work no such precautions to obtain dust-
free air had been taken because we wished to test the general
validity of equations (28) and (29). That we actually used in this test
two particles which had a mean density very much smaller than that
of oil and one which was considerably too heavy, was fortunate since
it indicated that our result was indeed independent of the material
used.
It is worthy of remark that in general, even with oil drops, almost
all of those behaving abnormally fall too slowly, that is, they fall
below the line of Fig. 5 and only rarely does one fall above it. This is
because the dust particles which one is likely to observe, that is,
those which remain long in suspension in the air, are either in
general lighter than oil or else expose more surface and hence act
as though they were lighter. When one works with particles made of
dense metals this behavior will be still more marked, since all
surface impurities of whatever sort will diminish the density. The
possibility, however, of freeing oil-drop experiments from all such
sources of error is shown by the fact that although during the year
1915-16 I studied altogether as many as three hundred drops, there
was not one which did not fall within less than 1 per cent of the line
of Fig. 5. It will be shown, too, in this chapter, that in spite of the
failure of the Vienna experimenters, it is possible under suitable
conditions to obtain mercury drops which behave, even as to law of
fall, in practically all cases with perfect consistency and normality.
When E. Weiss in Prag and K. Przibram in the Vienna laboratory
itself, as explained in chap. VII, had found that for all the
substances which they worked with, including silver particles like
those used by Ehrenhaft, gave about the right value of , although
yielding much too low values of when the latter was computed from
the law of fall of silver particles, the scientific world practically
universally accepted our explanation of Ehrenhaft’s results and
ceased to concern itself with the idea of a sub-electron.[113]
In 1914 and 1915, however, Professor Ehrenhaft[114] and two of
his pupils, F. Zerner[115] and D. Konstantinowsky,[116] published new
evidence for the existence of such a sub-electron and the first of
these authors has kept up some discussion of the matter up to the
present. These experimenters make three contentions. The first is
essentially that they have now determined for their particles by
equation (29); and although in many instances it comes out as in
electrolysis, in some instances it comes out from 20 per cent to 50
per cent too low, while in a few cases it is as low as one-fourth or
one-fifth of the electrolytic value. Their procedure is in general to
publish, not the value of , but, instead, the value of obtained
from by inserting Perrin’s value of ( ) in (29) and
then solving for . This is their method of determining “from the
Brownian movements.”
Their second contention is the same as that originally advanced,
namely, that, in some instances, when is determined with the aid of
Stokes’s Law of fall (equation 12, p. 91), even when Cunningham’s
correction or my own (equation 15, p. 101) is employed, the result
comes out very much lower than . Their third claim is
that the value of , determined as just explained from the Brownian
movements, is in general higher than the value computed from the
law of fall, and that the departures become greater and greater the
smaller the particle. These observers conclude therefore that we oil-
drop observers failed to detect sub-electrons because our droplets
were too big to be able to reveal their existence. The minuter
particles which they study, however, seem to them to bring these
sub-electrons to light. In other words, they think the value of the
smallest charge which can be caught from the air actually is a
function of the radius of the drop on which it is caught, being smaller
for small drops than for large ones.
Ehrenhaft and Zerner even analyze our report on oil droplets and
find that these also show in certain instances indications of sub-
electrons, for they yield in these observers’ hands too low values of
, whether computed from the Brownian movements or from the law
of fall. When the computations are made in the latter way is found,
according to them, to decrease with decreasing radius, as is the
case in their experiments on particles of mercury and gold.

III. CAUSES OF THE DISCREPANCIES


Now, the single low value of which these authors find in the
oil-drop work is obtained by computing from some twenty-five
observations on the times of fall, and an equal number on the times
of rise, of a particle which, before we had made any
computations at all, we reported upon[117] for the sake of showing
that the Brownian movements would produce just such fluctuations
as Ehrenhaft had observed when the conditions were those under
which he worked. When I compute by equation (29), using
merely the twenty-five times of fall, I find the value of comes out
26 per cent low, just as Zerner finds it to do. If, however, I omit the
first reading it comes out but 11 per cent low. In other words, the
omission of one single reading changes the result by 15 per cent.
Furthermore, Fletcher[118] has shown that these same data, though
treated entirely legitimately, but with a slightly different grouping than
that used by Zerner, can be made to yield exactly the right value of
. This brings out clearly the futility of attempting to test a
statistical theorem by so few observations as twenty-five, which is
nevertheless more than Ehrenhaft usually uses on his drops.
Furthermore, I shall presently show that unless one observes under
carefully chosen conditions, his own errors of observation and the
slow evaporation of the drop tend to make obtained from
equation (29) come out too low, and these errors may easily be
enough to vitiate the result entirely. There is, then, not the slightest
indication in any work which we have thus far done on oil drops that
comes out too small.
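The statistical point made above about twenty-five readings can also be put numerically. For roughly Gaussian displacements the fractional uncertainty of a mean-square displacement built from n independent readings is about the square root of 2/n; the little computation below is a textbook illustration of that rule, not a figure taken from any of the papers cited:

    # Expected statistical scatter of a mean-square displacement from n readings.
    from math import sqrt

    for n in (25, 1500):
        scatter = 100.0 * sqrt(2.0 / n)
        print(n, "readings: about", round(scatter, 1), "per cent")

Twenty-five readings therefore leave the mean square uncertain by something like a quarter of its value, while series of the length used by Schmid, mentioned below, bring the purely statistical uncertainty down to a few per cent.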
Next consider the apparent variation in when it is computed
from the law of fall. Zerner computes from my law of fall in the case
of the nine drops published by Fletcher, in which came out as in
electrolysis, and finds that one of them yields ,
one , one , one
, while the other five yield about the right value,
namely, . In other words (as stated on p. 165 above),
five of these drops fall exactly on my curve (Fig. 5), one falls
somewhat above it, one somewhat below, while two are entirely off
and very much too low. These two, therefore, I concluded were not
oil at all, but dust particles. Since Zerner computes the radius from
the rate of fall, these two dust particles which fall much too slowly,
and therefore yield too low values of , must, of course, yield
correspondingly low values of . Since they are found to do so,
Zerner concludes that our oil drops, as well as Ehrenhaft’s mercury
particles, yield decreasing values of with decreasing radius. His
own tabulation does not show this. It merely shows three erratic
values of , two of which are very low and one rather high. But a
glance at all the other data which I have published on oil drops
shows the complete falsity of this position,[119] for these data show
that after I had eliminated dust all of my particles yielded exactly the
same value of “ ” whatever their size[120]. The only possible
interpretation then which could be put on these two particles which
yielded correct values of , but too slow rates of fall, was that
which I put upon them, namely, that they were not spheres of oil.
As to the Vienna data on mercury and gold, Dr. Ehrenhaft
publishes, all told, data on just sixteen particles and takes for his
Brownian-movement calculations on the average fifteen times of fall
and fifteen of rise on each, the smallest number being 6 and the
largest 27. He then computes his statistical average from
observations of this sort. Next he assumes Perrin’s value of ,
namely, , which corresponds to , and obtains
instead by the Brownian-movement method, i.e., the method,
the following values of , the exponential term being omitted for the
sake of brevity: 1.43, 2.13, 1.38, 3.04, 3.5, 6.92, 4.42, 3.28, .84.
Barring the first three and the last of these, the mean value of is
just about what it should be, namely, 4.22 instead of 4.1. Further, the
first three particles are the heaviest ones, the first one falling
between his cross-hairs in 3.6 seconds, and its fluctuations in time of
fall are from 3.2 to 3.85 seconds, that is, three-tenths of a second on
either side of the mean value. Now, these fluctuations are only
slightly greater than those which the average observer will make in
timing the passage of a uniformly moving body across equally
spaced cross-hairs. This means that in these observations two
nearly equally potent causes were operating to produce fluctuations.
The observed ’s were, of course, then, larger than those due to
Brownian movements alone, and might easily, with but a few
observations, be two or three times as large. Since appears in
the denominator of equation (29), it will be seen at once that
because of the observer’s timing errors a series of observed ’s
will always tend to be larger than the due to Brownian
movements alone, and hence that the Brownian-movement method
always tends to yield too low a value of , and accordingly too low
a value of . It is only when the observer’s mean error is wholly
negligible in comparison with the Brownian-movement fluctuations
that this method will not yield too low a value of . The overlooking of
this fact is, in my judgment, one of the causes of the low values of
recorded by Dr. Ehrenhaft.
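The bias argued in the last paragraph is easy to exhibit with a small numerical experiment. In the hedged sketch below, with every parameter invented, a random timing error is added to a genuine Brownian fluctuation; the observed mean square, which stands in the denominator of equation (29), then comes out systematically too large, and the computed e correspondingly too small:

    # Monte Carlo sketch: observer timing errors inflate the apparent mean-square fluctuation.
    import random

    random.seed(1)
    brownian_sigma = 0.30    # assumed r.m.s. Brownian fluctuation of a time of fall, in seconds
    timing_sigma = 0.25      # assumed r.m.s. error of the observer's readings, in seconds

    def mean_square(n, with_timing_error):
        total = 0.0
        for _ in range(n):
            value = random.gauss(0.0, brownian_sigma)
            if with_timing_error:
                value += random.gauss(0.0, timing_sigma)
            total += value * value
        return total / n

    true_ms = mean_square(200000, with_timing_error=False)
    observed_ms = mean_square(200000, with_timing_error=True)

    # e varies inversely with the mean-square fluctuation in equation (29), so this
    # ratio is roughly the factor by which e would come out too low.
    print("apparent e / true e is roughly", round(true_ms / observed_ms, 2))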
Again, in the original work on mercury droplets which I produced
both by atomizing liquid mercury and by condensing the vapor from
boiling mercury,[121] I noticed that such droplets evaporated for a
time even more rapidly than oil, and other observers who have since
worked with mercury have reported the same behavior.[122] The
amount of this effect may be judged from the fact that one particular
droplet of mercury recently under observation in this laboratory had
at first a speed of 1 cm. in 20 seconds, which changed in half an
hour to 1 cm. in 56 seconds. The slow cessation, however, of this
evaporation indicates that the drop slowly becomes coated with
some sort of protecting film. Now, if any evaporation whatever is
going on while successive times of fall are being observed—and as
a matter of fact changes due to evaporation or condensation are
always taking place to some extent—the apparent will be larger
than that due to Brownian movements, even though these
movements are large enough to prevent the observer from noticing,
in taking twenty or thirty readings, that the drop is continually
changing. These changes combined with the fluctuations in due to
the observer’s error are sufficient, I think, to explain all of the low
values of e obtained by Dr. Ehrenhaft by the Brownian-movement
method. Indeed, I have myself repeatedly found coming out less
than half of its proper value until I corrected for the evaporation of
the drop, and this was true when the evaporation was so slow that its
rate of fall changed but 1 or 2 per cent in a half-hour. But it is not
merely evaporation which introduces an error of this sort. The
running down of the batteries, the drifting of the drop out of focus, or
anything which causes changes in the times of passage across the
equally spaced cross-hairs tends to decrease the apparent value of
. There is, then, so far as I can see, no evidence at all in any of
the data published to date that the Brownian-movement method
actually does yield too low a value of “ ”, and very much positive
evidence that it does not was given in the preceding chapter.
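A slow drift of the kind just described can be imitated numerically as well. In the sketch below, with every figure invented, the time of fall of an evaporating drop creeps upward during the series; the apparent mean-square fluctuation then comes out well above the purely Brownian value, with the same depressing effect on the computed e as before:

    # Sketch: a steady evaporation drift masquerades as extra Brownian fluctuation.
    import random

    random.seed(2)
    n_readings = 30
    brownian_sigma = 0.15    # assumed genuine Brownian scatter of a time of fall, in seconds
    total_drift = 0.6        # assumed slow change of the time of fall over the series, in seconds

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    steady = [20.0 + random.gauss(0.0, brownian_sigma) for _ in range(n_readings)]
    drifting = [20.0 + total_drift * i / n_readings + random.gauss(0.0, brownian_sigma)
                for i in range(n_readings)]

    print("apparent mean-square fluctuation, steady drop:     ", round(variance(steady), 3))
    print("apparent mean-square fluctuation, evaporating drop:", round(variance(drifting), 3))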
Indeed, the same type of Brownian-movement work which
Fletcher and I did upon oil-drops ten years ago (see preceding
chapter) has recently been done in Vienna with the use of particles
of selenium, and with results which are in complete harmony with our
own. The observer, E. Schmid,[123] takes as many as 1,500 “times of
fall” upon a given particle, the radius of which is in one case as low
as —quite as minute as any used by Dr. Ehrenhaft—
and obtains in all cases values of by “the Brownian-movement
method” which are in as good agreement with our own as could be
expected in view of the necessary observational error. This complete
check of our work in Vienna itself should close the argument so far
as the Brownian movements are concerned.
That and computed from the law of fall become farther and
farther removed from the values of and computed from the
Brownian movements, the smaller these particles appear to be, is
just what would be expected if the particles under consideration have
surface impurities or non-spherical shapes or else are not mercury at
all.[124] If, further, exact multiple relations hold for them, as at least a
dozen of us, including Dr. Ehrenhaft himself, now find that they
invariably do, there is scarcely any other interpretation possible
except that of incorrect assumptions as to density.[see footnote 124]
Again, the fact that these data are all taken when the observers are
working with the exceedingly dense substances, mercury and gold,
volatilized in an electric arc, and when, therefore, anything not
mercury or gold, but assumed to be, would yield very low values of
and , is in itself a very significant circumstance. The further fact that
Dr. Ehrenhaft implies that normal values of e very frequently appear
in his work,[125] while these low erratic drops represent only a part of
the data taken, is suggestive. When one considers, too, that in place
of the beautiful consistency and duplicability shown in the oil-drop
work, Dr. Ehrenhaft and his pupils never publish data on any two
particles which yield the same value of , but instead find only
irregularities and erratic behavior,[126] just as they would expect to
do with non-uniform particles, or with particles having dust specks
attached to them, one wonders why any explanation other than the
foreign-material one, which explains all the difficulties, has ever been
thought of. As a matter of fact, in our work with mercury droplets, we
have found that the initial rapid evaporation gradually ceases, just as
though the droplets had become coated with some foreign film which
prevents further loss. Dr. Ehrenhaft himself, in speaking of the
Brownian movements of his metal particles, comments on the fact
that they seem at first to show large movements which grow smaller
with time.[127] This is just what would happen if the radius were
increased by the growth of a foreign film.
Now what does Dr. Ehrenhaft say to these very obvious
suggestions as to the cause of his troubles? Merely that he has
avoided all oxygen, and hence that an oxide film is impossible. Yet
he makes his metal particle by striking an electric arc between metal
electrodes. This, as everyone knows, brings out all sorts of occluded
gases. Besides, chemical activity in the electric arc is tremendously
intense, so that there is opportunity for the formation of all sorts of
higher nitrides, the existence of which in the gases coming from
electric arcs has many times actually been proved. Dr. Ehrenhaft
says further that he photographs big mercury droplets and finds
them spherical and free from oxides. But the fact that some drops
are pure mercury is no reason for assuming that all of them are, and
it is only the data on those which are not which he publishes.
Further, because big drops which he can see and measure are of
mercury is no justification at all for assuming that sub-microscopic
particles are necessarily also spheres of pure mercury. In a word, Dr.
Ehrenhaft’s tests as to sphericity and purity are all absolutely
worthless as applied to the particles in question, which according to
him have radii of the order .—a figure a hundred times
below the limit of sharp resolution.

IV. THE BEARING OF THE VIENNA WORK ON THE QUESTION OF THE
EXISTENCE OF A SUB-ELECTRON

But let us suppose that these observers do actually work with
particles of pure mercury and gold, as they think they do, and that
the observational and evaporational errors do not account for the low
values of . Then what conclusion could legitimately be drawn
from their data? Merely this and nothing more, that (1) Einstein’s
Brownian-movement equation is not universally applicable, and (2)
that the law of motion of their very minute charged particles through
air is not yet fully known.[128] So long as they find exact multiple
relationships, as Dr. Ehrenhaft now does, between the charges
carried by a given particle when its charge is changed by the capture
of ions or the direct loss of electrons, the charges on these ions must
be the same as the ionic charges which I have accurately and
consistently measured and found equal to
; for they, in their experiments,
capture exactly the same sort of ions, produced in exactly the same
way as those which I captured and measured in my experiments.
That these same ions have one sort of a charge when captured by a
big drop and another sort when captured by a little drop is obviously
absurd. If they are not the same ions which are caught, then in order
to reconcile the results with the existence of the exact multiple
relationship found by Dr. Ehrenhaft as well as ourselves, it would be
necessary to assume that there exist in the air an infinite number of
different kinds of ionic charges corresponding to the infinite number
of possible radii of drops, and that when a powerful electric field
drives all of these ions toward a given drop this drop selects in each
instance just the charge which corresponds to its particular radius.
Such an assumption is not only too grotesque for serious
consideration, but it is directly contradicted by my experiments, for I
have repeatedly pointed out that with a given value of I obtain
exactly the same value of , whether I work with big drops or with
little ones.
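The force of the multiple-relation argument can be seen in miniature with a few invented numbers. If every charge caught on a drop is an exact multiple of one unit, that unit can be recovered by simple trial division, as in the hedged sketch below; the listed charges are illustrative only and are taken from no table in this book:

    # Sketch: recovering a common unit from charges that are exact multiples of it.
    charges = [9.54e-10, 14.31e-10, 4.77e-10, 23.85e-10]   # hypothetical measured charges, in e.s.u.

    def common_unit(values, tolerance=0.02):
        smallest = min(values)
        # try the smallest charge divided by 1, 2, 3, ... as the candidate unit
        for k in range(1, 20):
            unit = smallest / k
            if all(abs(v / unit - round(v / unit)) < tolerance for v in values):
                return unit
        return None

    print("common unit:", common_unit(charges), "e.s.u.")

Whatever the size of the drop that caught them, every charge in such a list is then an integral multiple of the one recovered unit, which is precisely the behavior the oil-drop observations show.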

V. NEW PROOF OF THE CONSTANCY OF e


For the sake of subjecting the constancy of to the most
searching test, I have made new measurements of the same kind as
those heretofore reported, but using now a range of sizes which
overlaps that in which Dr. Ehrenhaft works. I have also varied
through wide limits the nature and density of both the gas and the
drops. Fig. 13 (I) contains new oil-drop data taken in air; Fig. 13 (II)
similar data taken in hydrogen. The radii of these drops, computed
by the very exact method given in the Physical Review[129] vary
tenfold, namely, from .000025 cm. to .00023 cm. Dr. Ehrenhaft’s
range is from .000008 cm. to .000025 cm. It will be seen that these
drops fall in every instance on the lines of Fig. 13, I and II, and hence
that they all yield exactly the same value of , namely,
. The details of the measurements, which are just like
those previously given, will be entirely omitted. There is here not a
trace of an indication that the value of “ ” becomes smaller as “ ”
decreases. The points on these two curves represent consecutive
series of observations, not a single drop being omitted in the case of
either the air or the hydrogen. This shows the complete uniformity
and consistency which we have succeeded in obtaining in the work
with oil drops.
That mercury drops show a similar behavior was somewhat
imperfectly shown in the original observations which I published on
mercury.[130] I have since fully confirmed the conclusions there
reached. That mercury drops can with suitable precautions be made
to behave practically as consistently as oil is shown in Fig. 13 (III),
which represents data obtained by blowing into the observing
chamber above the pinhole in the upper plate a cloud of mercury
droplets formed by the condensation of the vapor arising from boiling
mercury. These results have been obtained in the Ryerson
Laboratory with my apparatus by Mr. John B. Derieux. Since the
pressure was here always atmospheric, the drops progress in the
order of size from left to right, the largest having a diameter about
three times that of the smallest, the radius of which is .00003244 cm.
The original data may be found in the Physical Review, December,
1916. In Fig. 13 (IV) is found precisely similar data taken with my
apparatus by Dr. J. Y. Lee on solid spheres of shellac falling in air.
[131] Further, very beautiful work, of this same sort, also done with
my apparatus, has recently been published by Dr. Yoshio Ishida
(Phys. Rev., May, 1923), who, using many different gases, obtains a
group of lines like those shown in Fig. 13, all of which though of
different slopes, converge upon one and the same value of “ ”,
namely, .
Fig. 13

These results establish with absolute conclusiveness the
correctness of the assertion that the apparent value of the electron is
not in general a function of the gas in which the particle falls, of the
materials used, or of the radius of the drop on which it is caught,
even when that drop is of mercury, and even when it is as small as
some of those with which Dr. Ehrenhaft obtained his erratic results. If
it appears to be so with his drops, the cause cannot possibly be
found in actual fluctuations in the charge of the electron without
denying completely the validity of my results. But these results have
now been checked, in their essential aspects, by scores of
observers, including Dr. Ehrenhaft himself. Furthermore, it is not my
results alone with which Dr. Ehrenhaft’s contention clashes. The
latter is at variance also with all experiments like those of Rutherford
and Geiger and Regener on the measurement of the charges carried
by α- and β-particles, for these are infinitely smaller than any
particles used by Dr. Ehrenhaft; and if, as he contends, the value of
the unit out of which a charge is built up is smaller and smaller the
smaller the capacity of the body on which it is found, then these -
particle charges ought to be extraordinarily minute in comparison
with the charges on our oil drops. Instead of this, the charge on the