Vertically Integrated Architectures: Versioned Data Models, Implicit Services, and Persistence-Aware Programming
Jos Jong
AMSTELVEEN, The Netherlands
Table of Contents

Limited by Frameworks
Human Factors
Summary
Associations
Attributes
Values vs. Items
Putting It All Together
Inheritance?
Summary
Index
About the Author
Jos Jong is a self-employed independent
senior software engineer and software
architect. He has been developing software
for more than 35 years, in both technical and
enterprise environments. His knowledge
ranges from mainframes, C++, and Smalltalk
to Python, Java, and Objective-C. He has
worked with numerous different platforms
and kept studying to learn about other
programming languages and concepts. In
addition to developing many generic components, some code generators,
and advanced data synchronization solutions, he has prototyped several
innovative database and programming language concepts. He is an abstract
thinker who loves to study the fundamentals of software engineering and is
always eager to reflect on new trends. You can find out more about Jos on
his blog (https://josjong.com/) or connect with him on LinkedIn
(www.linkedin.com/in/jos-jong/) and Twitter (@jos_jong_nl).
Acknowledgments
There are many people who encouraged me to push forward with my ideas
and eventually write this book. I’d like to thank all of them for inspiring
me. Colleagues who were skeptical at the time helped me to rethink and
refine certain aspects of my vision. I want to thank them for the interesting
discussions. To the members of Know-IT, a group I have been a member
of for ten years, my special thanks for all the patience you have shown
me when I was suggesting better solutions again and again in whatever
discussions we were having. I want to thank the people who read my early
drafts: my good friends Marc, Rudolf, Edwin, Winfried, Peter-Paul, and
especially Remco for doing most of the initial translations and essentially
being the first full peer-reviewer. I also would like to thank Robbert, for all
the inspirational words and for setting deadlines. And special thanks to my
sister, Marian, my parents, my good friends Ger, Wilma, Nina, Sudais, and
others who supported me.
Preface
I guess most books start with lots of half-related notes and ideas. So far,
so good. But my first notes and drawings date back 30 years. During my
studies, I learned about real databases and how they magically hide a lot of
technical details from the programmer.
With SQL, I saw the beauty of a fully thought-through conceptual data
model, brought to life by a neat and powerful query language. However,
I also remember asking myself whether tables are really such a good
choice to represent data. The relational model was obviously better than
anything else out there. But influenced by other methods I studied, such
as Sjir Nijssen’s natural language information analysis method (NIAM), I
imagined data more as a network of abstract objects (facts) joined together
by relationships. In SQL, you have to specify the actual relationships, based
on attributes, with every query, again and again. And because applications
are mostly not built using SQL, every query also requires its own glue code,
to fit inside the accompanying 3GL programming language. Why? These
early thought experiments eventually became the main premise of this
book.
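To make that premise concrete, here is a minimal JDBC sketch. The customer/order schema and the query are invented for illustration; the point is that each query restates the same relationship in its join condition, and then needs its own glue code to move the results into the surrounding 3GL:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical schema: customer(id, name) and "order"(customer_id, amount).
// Every query restates the customer-order relationship in its join
// condition, and every query needs its own glue code to extract results.
final class OrderTotals {
    static void print(Connection conn) throws SQLException {
        String sql = "SELECT c.name, SUM(o.amount) "
                   + "FROM customer c JOIN \"order\" o ON o.customer_id = c.id "
                   + "GROUP BY c.name";
        try (PreparedStatement stmt = conn.prepareStatement(sql);
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                // Glue code: column-by-column extraction into 3GL variables.
                String name = rs.getString(1);
                long total = rs.getLong(2);
                System.out.println(name + ": " + total);
            }
        }
    }
}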
Why doesn’t the user interface understand the underlying data model,
so that a lot of things can be arranged automatically? Why do we program
in two, three, or four languages to build a single application? And why do
we manually have to pass around strings with pieces of keys and data, as
we do with JSON nowadays?
My inspiration to resolve these dilemmas over and over is born of
frustration, experimentation, study, and lots of discussions within my peer
group. I never was a computer scientist and, as practical as I like to be,
loved working on concrete projects. But I used every slightly more generic
logic. I show that what has made two-tier architectures inflexible and
not general-purpose so far is the lack of support for data model versioning
and a more conceptual approach to data modeling.
I hope my book will inspire experienced developers to explore these
ideas. I believe that the challenges will be an interesting pursuit for
computer science students. Software development is still in its infancy.
I hope that my contribution will steer us away from the endless stream of
frameworks that we see today. Trying to solve each individual problem
with a separate framework mostly brought us more complexity and
certainly did not increase developer productivity for the last decade or so.
Most of my ideas take the form of what-if proposals. That is not
because I haven’t experimented with some of them. For example, I built
a prototype to explore the persistence-aware programming language that
I present. It impressed some people, but, for now, it is not a real product.
But who knows what the future will bring.
CHAPTER 1
The Problem
Problems are not stop signs, they are guidelines.
—Robert H. Schuller
Like being in a swamp. That is how it must feel if you end up in a software
development team, after switching from another profession. Just when
you think you have a firm grasp of things, someone comes along with yet
another new concept, principle, or framework. You just want to finish
that one screen you’re working on. But there are larger stakes. And, to be
honest, after listening to all the arguments, you’re on the verge of being
convinced. Another framework gets added to the project.
The accumulation of frameworks year after year must pay off.
You would expect things to have gotten super easy. And yet, every time
a new team member is added, it becomes apparent that the team has
created its own little universe. The new guy or gal has to absorb all the
frameworks he or she is not familiar with and learn a whole new set of
architectural principles.
I am not talking here about commonsense principles every
self-respecting software engineer is expected to know. The problem lies in the
never-ending avalanche of new ideas and experiments that manifest in the
form of still more new architectural principles and frameworks.
Never-Ending Complexity
Developing software is a complex endeavor. While one would have
expected it to have gotten easier over time, the exact opposite seems true.
Back in the day, one could build a whole system using Turbo Pascal, C#,
Java, and some SQL. But today, before you know it, you’re once again
Googling the latest features of HTML, to see which browser does or does
not support them. Your CSS files are getting out of hand, so you start
generating them with Sass. And while you were using Angular previously,
you’re thinking about switching to React. Your CV is growing and growing.
and remains manual labor. Adding an attribute might not be such a big
deal, but adding entities and relationships typically involves more
complexity.
The fact that a data model manifests itself in so many ways also leads
to the duplication of code taking care of validations and constraints. Users
prefer to have immediate feedback from the user interface when data
entered does not fit certain constraints. But to be sure to never store invalid
data, you want the same check to be repeated in the service layer as well.
And maybe you would even have the database check it again. Now imagine
we develop multiple UI clients (web, mobile); we end up with three to
five places performing the exact same validations, each with its own
specific piece of code to write and maintain.
It is fair to say that while we mastered the art of code reuse in most
programming languages, there is still one aspect that leads to a lot of code
repetition, and that is the data model itself.
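As a minimal sketch of this duplication (the age rule, the entity, and the layer split below are all invented for illustration), the same constraint tends to be restated, and maintained, once per layer:

// Hypothetical rule: a customer's age must be between 18 and 120.

// 1. In the UI client, for immediate feedback while entering data.
final class CustomerForm {
    static boolean isAgeValid(int age) {
        return age >= 18 && age <= 120;   // copy #1 of the rule
    }
}

// 2. In the service layer, to make sure invalid data is never stored.
final class CustomerService {
    void register(String name, int age) {
        if (age < 18 || age > 120) {      // copy #2 of the rule
            throw new IllegalArgumentException("age out of range");
        }
        // ... persist the customer ...
    }
}

// 3. And often once more in the database schema:
// ALTER TABLE customer ADD CONSTRAINT chk_age CHECK (age BETWEEN 18 AND 120);

A change to the rule now means finding and updating every copy, in every client, which is exactly the kind of repetition the data model keeps forcing on us.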
Growing Insecurity
With half the planet now sharing its personal and business information
via online services, you would expect software security to be rock solid.
Instead, the number of data breaches has been on the rise for years (1,579
reported cases and 178 million accounts exposed in 2017 in the United
States alone).1
How can it be that an industry with so much money and prestige at
stake makes and accepts so many blunders? Undeniably, one reason is the
aforementioned ever-increasing complexity. No matter how easy it can be
aforementioned ever-increasing complexity. No matter how easy it can be
to avoid a particular security threat, risks lurk at every level. Any nontrivial
1. Statista, "Annual number of data breaches and exposed records in the United States from 2005 to 2018 (in millions)," www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/, 2018.
Architectural Fog
Software architecture can be both the solution and the cause of problems.
As mentioned, no matter how good the intentions of architects, they
may cast a team into a fog-laden land, and fairly often, the benefits they
promise are never realized.
Language Wars
Software exists by virtue of programming languages. So, you would expect
them to improve every year, to help us out with all the issues mentioned
so far in this chapter. The general perception is that they do. But do they?
It is also commonly accepted that frameworks can be used to extend a
programming language. But is that really true?
Virtually all software is built with 3GLs today. And these languages
did indeed evolve. Object orientation, garbage collection, and exception
handling became standard features in many languages. And such things
as closures, coroutines, and functional extensions are getting more and
more popular. However, object orientation was invented with Simula and
Smalltalk in the sixties and seventies. The same is true for closures and
functional programming. And coroutines have been at the core of Erlang
since the eighties. So, what we mostly see is cherry-picking from other
programming languages, both old and contemporary. In that sense, the
my-language-is-better-than-yours approach is mostly about syntactic sugaring.
There is nothing wrong with that. But what about our more fundamental needs?
While almost no software system can do without data storage and
network communication, 3GLs are only concerned with in-memory data
handling and processing. That's why we still deal with data persistence as
a second-class citizen. The same is true for client-server communication.
This is where code plumbing comes in. A lot of source code deals with
the marshaling and unmarshaling of messages, filling and extracting
screen data, or assembling and processing database queries. Recall that
fourth-generation languages (4GLs) in the nineties delivered in this area.
There may be good reasons why we have stuck to 3GLs since then. It is still
interesting, nonetheless, to see what we can learn from that era.
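As a small illustration of such plumbing (a sketch only, assuming the org.json library and an invented two-field customer message), note how the data model is restated by hand in both directions:

import org.json.JSONObject;  // assumes the org.json library is available

// Hypothetical two-field customer message. The data model is restated
// field by field, in both directions: classic plumbing code.
final class CustomerJson {
    static String marshal(String name, int age) {
        JSONObject obj = new JSONObject();
        obj.put("name", name);                // field restated on the way out
        obj.put("age", age);
        return obj.toString();
    }

    static void unmarshal(String json) {
        JSONObject obj = new JSONObject(json);
        String name = obj.getString("name");  // and again on the way in
        int age = obj.getInt("age");
        System.out.println(name + " (" + age + ")");
    }
}

None of this code expresses anything the data model does not already say; it merely repeats the model in yet another notation.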
One factor in this balancing act is the perceived quality of these pieces
of art. It is a plus when a framework is widely used. This makes it stand
up under scrutiny. But even then, the question remains whether it is
maintained well enough, and how much priority is given to fixing bugs. It
is a good thing for something to be open source, but it is still tough to tell
your client that a given bug is caused by a framework, and you either have
to dive into the dark secrets of that framework or wait for others to fix it.
Besides this, popularity and quality are not constant over time. Perhaps
the initiators have already moved on to the next great idea. They take a few
shortcuts, and along the way the quality of the product begins to suffer, all
while there is a huge lock-in for your own project.
A framework may also constrain you. Not every framework plays well
with other frameworks. That means the choice of a framework cannot be
viewed independently of that of others. Sometimes frameworks overlap;
sometimes they interfere; and sometimes they force you into using other
frameworks.
As mentioned at the beginning of this chapter, frameworks may be
intended to simplify things, but they can just as easily increase overall
complexity. While it is nice to have these extensive lists on our résumés, if
we’re not careful, we are creating legacy code every single day. It requires
a lot of experience and a pragmatic approach to not bog down a project in
unnecessary complexity. The big question that we should ask ourselves is
why we need so many frameworks anyway.
Summary
In this chapter, I have discussed the never-ending complexity of the trade,
the code repetition that follows from the data model, growing insecurity,
architectural fog, language wars, and the burden of choosing and
maintaining frameworks.
After this pile-up of misery, some people may wonder why anyone
would still want to become a programmer. But that’s not how it works with
professionals. Developing software is an inherently complex profession
that requires a certain drive to get ahead. That we have to write more code
than would strictly be necessary is not the end of the world. And the
fact that the complexity of the trade continues to increase is perhaps even
a bonus for all those professionals who by their nature always want to learn
new things and thrive on challenges.
However, something lurks beneath the surface. The vast amount of
copycat code that we write again and again undoubtedly takes time. And
time equals money. It makes projects take longer. More code also increases
the risk of introducing bugs, which raises the demand for testing and
leads to higher operational costs. More lines of code also make it harder
to modify a system, because they imply more dependencies and a bigger
impact from any change requested.
Besides all this, we cannot ignore the fact that it becomes more and
more difficult to find good programmers. And things won’t improve if we
continue to increase the complexity of the profession, expecting candidates
to have résumés with endless lists of acronyms. It is cool to be a full stack
developer who knows all the tricks of the trade, but an ivory tower getting
higher and higher is not going to benefit our clients in the long term.
The big question in this book is how to get out of this impasse.
Therefore, in Chapter 3, I will provide a complete analysis of all the
problems mentioned so far. But because we can certainly learn from
mistakes made in the past, I will first delve into a bit of history in Chapter 2.
CHAPTER 2
History: How Did We Get Here?
Some may just have been ahead of their time. Others were limited
in functionality, got too complex in certain scenarios, lacked
compatibility and openness, or simply became less popular, owing to
changing market situations.
It is no different with all the hundreds of frameworks that are added
to GitHub every year. Of course, all these initiatives are praiseworthy,
but even frameworks end up in the garbage can more often than not.
Some may have been built on a great idea but were badly implemented,
lacked sufficient functionality, or did not turn out as simple as the
README.md seemed to suggest.
We must realize that sometimes true progress cannot be made
unless we take a few steps back. Back to the drawing board, as they say.
Innovation is not fueled by simply stacking upon existing techniques. Once
in a while, we must return to the roots of earlier developments and take
another direction.
They were mostly built around libraries, to take care of the marshaling
and unmarshaling of messages into binary formats. But these standards
largely lacked compatibility with one another, because they were
often associated with a particular programming language or application
development environment. This didn’t fly in an IT landscape that became
increasingly heterogeneous. They were also relatively low level, with little
or no support for higher-level abstractions, objects, or security.
In response to this, solutions emerged that bridged the gap between
both different operating systems (OSs) and programming environments,
while at the same time striving for a higher level of abstraction. Instead
of simply invoking a remote procedure, one could now think in terms of
objects or components. This was the realm of CORBA, EJB, and OLE—all
of which appeared very promising at the time. Some were even designed in
such a way that it did not matter whether a function was invoked locally or
remotely, apart from the huge difference in performance, obviously.
But, as we know, this is not the end of the story. Many developers
floundered when faced with the complexity of these standards. CORBA
especially became notorious for its many concepts and accompanying
acronyms—seemingly, way too much for something that essentially comes
down to sending messages back and forth. EJB and OLE had the additional
drawback of still being bound to a specific programming environment
(Java and .NET, respectively).
One of the more fundamental problems was that there was no easy
way to deal with multiple versions of components and functions—a
serious issue in a world where the essence of having separate systems is
that they can, or sometimes must, have their own release schedule. It was
also difficult to stay away from a lot of complexity, such as generating stubs
and skeletons or different ways of handling transactions. All this, while
simplifying system-to-system communication was the core goal of these
developments.