Deploy Machine Learning Models to Production: With Flask, Streamlit, Docker, and Kubernetes on Google Cloud Platform, 1st Edition, Pramod Singh
Apress Standard
The publisher, the authors and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the
material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
In this first chapter, we are going to discuss some of the fundamentals of machine learning and deep
learning. We are also going to look at different business verticals that are being transformed by machine
learning. Finally, we are going to go over the traditional steps of training and building a relatively simple
machine learning model and a deep learning model on a cloud platform (Databricks) before moving on to the
next set of chapters on productionization. If you are aware of these concepts and already feel comfortable with
your level of expertise in machine learning, I encourage you to skip the next two sections and move on to the
last section, where I describe the development environment and give pointers to the book’s accompanying
codebase and data download information so that you are able to set up the environment appropriately. This
chapter is divided into three sections. The first section covers the fundamentals of machine learning. The
second section dives into the basics of deep learning and the details of widely used deep neural networks. Each
of the first two sections is followed by the code to build a model on the cloud platform. The final section covers
the requirements and environment setup for the remainder of the chapters in the book.
History
Machine learning/deep learning is not new; in fact, it goes back to the 1940s, when for the first time an attempt
was made to build something that had some amount of built-in intelligence. The great Alan Turing worked
on building a unique machine that could decrypt German code during World War II. That was the
beginning of the machine intelligence era, and within a few years, researchers started exploring this field in
great detail across many countries. ML/DL was considered to be significantly powerful in terms of
transforming the world at that time, and enormous amounts of funding were granted to bring it to life.
Nearly everybody was very optimistic. By the late 1960s, people were already working on machine vision
and developing robots with machine intelligence.
While it all looked good on the surface, there were some serious challenges impeding progress in this field.
Researchers were finding it extremely difficult to create intelligence in machines, primarily for a couple of
reasons. One was that the processing power of computers in those days was not enough to handle and
process large amounts of data; the other was the limited availability of relevant data itself. Despite the support
of governments and the availability of sufficient funds, ML/AI research hit a roadblock from the late 1960s to
the early 1990s. This block of time is also known as the “AI winters” among community members.
In the 1980s, corporations once again became interested in AI. The Japanese government unveiled
plans to develop a fifth-generation computer to advance machine learning. AI enthusiasts believed that
computers would soon be able to carry on conversations, translate languages, interpret pictures, and reason
like people. In 1997, IBM’s Deep Blue became the first computer to beat a reigning world chess champion,
Garry Kasparov. Some AI funding dried up when the dot-com bubble burst in the early 2000s. Yet machine
learning continued its march, largely thanks to improvements in computer hardware.
Rise in Data
The first and most prominent reason for this trend is the massive rise in data generation over the past couple of
decades. Data was always present, but it’s important to understand the exact reason behind this abundance
of data. In the early days, data was generated by the employees or workers of particular organizations as
they saved it into systems, but there were limited data points holding only a few variables. Then
came the revolutionary Internet, and generic information was made accessible to virtually everyone using
the Internet. With the Internet, users got the ability to enter and generate their own data. This was a
colossal shift, as the total number of Internet users in the world grew at an exploding rate, and the amount of
data created by these users grew at an even higher rate. All of this data—login/sign-up forms capturing user
details, photo and video uploads on various social platforms, and other online activities—led to the
coining of the term Big Data. As a result, the challenges that ML and AI researchers had faced in earlier times
due to a lack of data points were completely eliminated, and this proved to be a major enabler for the adoption
of ML and AI.
Finally, from a data perspective, we have already reached the next level, as machines are now generating and
accumulating data. Every device around us—cars, buildings, mobile phones, watches, and flight engines—is
capturing data. These devices are embedded with multiple monitoring sensors and record data every second.
This data is even greater in magnitude than the user-generated data and is commonly referred to as Internet of
Things (IoT) data.
Improved ML Algorithms
Over the last few years, there has been tremendous progress in terms of the availability of new and
upgraded algorithms that have not only improved prediction accuracy but also solved multiple
challenges that traditional ML faced. In the first phase, which was rule-based systems, one had to define all
the rules first and then design the system within that set of rules. It became increasingly difficult to control
and update the number of rules as the environment was too dynamic. Hence, traditional ML came into the
picture to replace rule-based systems. The challenge with this approach was that the data scientist had to
spend a lot of time hand-designing the features for building the model (known as feature engineering), and
there was an upper bound on prediction accuracy that these models could never exceed, no matter how much
the input data size increased. The third phase was the introduction of deep neural networks, where the
network figures out the most important features on its own and also outperforms other ML
algorithms. In addition, some other approaches that have been creating a lot of buzz over the last few years
are as follows:
Meta learning
Transfer learning (nano nets)
Capsule networks
Deep reinforcement learning
Generative adversarial networks (GANs)
Machine Learning
Now that we know a little bit of the history of machine learning, we can go over its fundamentals. We can
break ML down into four parts, as shown in Figure 1-1.
Supervised machine learning
Unsupervised machine learning
Semi-supervised machine learning
Reinforcement machine learning
In supervised learning, where the model learns from data that already contains the right answers (labels),
classification refers to the case when the output variable is a discrete value or categorical in nature.
Classification comes in two types.
Binary classification
Multiclass classification
When the target class has two categories, it is referred to as binary classification, and when it has more than
two classes, it is known as multiclass classification, as shown in Figure 1-4.
Figure 1-4 Binary versus multiclass
Another property of supervised learning is that the model’s performance can be evaluated. Depending on the
type of model (classification or regression), the appropriate evaluation metric can be applied and the
performance measured. This is done mainly by splitting the training data into two sets (a train set and a
validation set), training the model on the train set, and testing its performance on the validation set, since
we already know the right label/outcome for the validation set.
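As a minimal sketch of this evaluation workflow, the following uses scikit-learn on synthetic data; the dataset, the choice of logistic regression, and the split size are illustrative assumptions rather than anything prescribed by this chapter.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic labeled data stands in for a real supervised dataset
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold out a validation set whose labels are known but hidden from training
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Because the validation labels are known, performance can be measured directly
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))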
Unsupervised Learning
Unsupervised learning is another category of machine learning that is used heavily in business applications.
It differs from supervised learning in terms of the output labels. In unsupervised learning, we build models
on the same sort of data as in supervised learning, except that the dataset does not contain any label or
outcome column. Essentially, we apply the model to the data without any right answers. In unsupervised
learning, the machine tries to find hidden patterns and useful signals in the data that can later be used for
other applications. The main objective is to probe the data and come up with hidden patterns and a similarity
structure within the dataset, as shown in Figure 1-5. One of the use cases is to find patterns within customer
data and group the customers into different clusters (a minimal sketch follows the list below). It can also
identify the attributes that distinguish any two groups. From a validation perspective, there is no measure of
accuracy for unsupervised learning. The clustering done by person A can be totally different from that of
person B based on the parameters used to build the model. There are different types of unsupervised
learning.
K-means clustering
Mapping of nearest neighbor
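For instance, a minimal K-means sketch along these lines; the synthetic customer-like data and the number of clusters are assumptions made purely for illustration.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic, unlabeled data standing in for customer attributes
X, _ = make_blobs(n_samples=500, centers=3, n_features=4, random_state=42)

# Group the records into three clusters; no labels are involved at any point
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(kmeans.labels_[:10])       # cluster assignment for the first ten records
print(kmeans.cluster_centers_)   # attribute profile of each cluster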
Semi-supervised Learning
As the name suggests, semi-supervised learning lies somewhere between supervised and unsupervised
learning; in fact, it uses both techniques. This type of learning is mainly relevant in scenarios where we are
dealing with a mixed sort of dataset that contains both labeled and unlabeled data. Sometimes the data is
completely unlabeled, and we label some part of it manually. The whole idea of semi-supervised learning is to
use this small portion of labeled data to train a model and then use that model to label the remaining part of
the data, which can then be used for other purposes. This is also known as pseudo-labeling, as it labels the
unlabeled data using the predictions made by the supervised model. To give a simple example, say we have
lots of images of different brands from social media and most of them are unlabeled. Using semi-supervised
learning, we can label some of these images manually and then train our model on the labeled images. We
then use the model’s predictions to label the remaining images, transforming the unlabeled data into fully
labeled data.
The next step in semi-supervised learning is to retrain the model on the entire labeled dataset. The
advantage this offers is that the model gets trained on a bigger dataset than before and is therefore more
robust and better at predictions. The other advantage is that semi-supervised learning saves a lot of the effort
and time that would otherwise go into manually labeling the data. The flip side is that it is difficult to achieve
high performance with pseudo-labeling, as it uses only a small portion of labeled data to make the
predictions. However, it is still a better option than manually labeling all the data, which can be both
expensive and time-consuming. This is how semi-supervised learning uses both supervised and unsupervised
learning to generate labeled data. Businesses that face challenges regarding the costs associated with the
labeling process usually go for semi-supervised learning.
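A minimal pseudo-labeling sketch might look like the following; the split sizes, the random forest model, and the synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Pretend only a small fraction of the data has been labeled manually
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_labeled, y_labeled = X[:200], y[:200]      # small manually labeled portion
X_unlabeled = X[200:]                        # the rest has no labels

# Step 1: train on the small labeled portion
model = RandomForestClassifier(random_state=42).fit(X_labeled, y_labeled)

# Step 2: pseudo-label the unlabeled portion using the model's predictions
pseudo_labels = model.predict(X_unlabeled)

# Step 3: retrain on the full (manually labeled + pseudo-labeled) dataset
X_full = np.vstack([X_labeled, X_unlabeled])
y_full = np.concatenate([y_labeled, pseudo_labels])
model = RandomForestClassifier(random_state=42).fit(X_full, y_full)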
Reinforcement Learning
Reinforcement learning is the fourth kind of learning and is a little different in terms of data usage and
predictions. Reinforcement learning is a big research area in itself, and an entire book could be written just
on it. The main difference is that the other kinds of learning need data, mainly historical data, to train the
models, whereas reinforcement learning works on a reward system, as shown in Figure 1-6. It is primarily
decision-making based on certain actions that the agent takes to change its state while trying to maximize
the rewards. Let’s break this down into its individual elements using a visualization.
Autonomous agent: This is the main character in the whole learning process and is responsible for taking
actions. If it is a game, the agent makes the moves to finish or reach the end goal.
Actions: These are the set of possible steps that the agent can take to move forward in the task. Each action
will have some effect on the state of the agent and can result in either a reward or a penalty. For example, in a
game of tennis, the actions might be to serve, return, move left or right, etc.
Reward: This is the key to making progress in reinforcement learning. Rewards enable the agent to take
actions based on whether they result in positive rewards or penalties. This instant feedback mechanism
differentiates reinforcement learning from traditional supervised and unsupervised learning techniques.
Environment: This is the territory in which the agent operates. The environment decides whether the
actions that the agent takes result in rewards or penalties.
State: The position the agent is in at any given point in time defines its state. To move forward or reach the
end goal, the agent has to keep changing states in a positive direction to maximize the rewards.
The unique thing about reinforcement learning is that there is an immediate feedback mechanism that
drives the next behavior of the agent based on a reward system. Most of the applications that use
reinforcement learning are in navigation, robotics, and gaming. However, it can also be used to build
recommender systems.
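The agent-environment loop described above can be sketched in a few lines; the toy environment, reward values, and purely random action selection here are illustrative assumptions, not a real learning algorithm.
import random

# Toy environment: the agent starts at state 0 and tries to reach state 5
state, goal = 0, 5
actions = [-1, +1]                            # move left or right

for step in range(20):
    action = random.choice(actions)           # a real agent would learn a policy instead
    next_state = max(0, state + action)       # the environment decides the new state
    reward = 1 if next_state == goal else 0   # instant feedback from the environment
    state = next_state
    if reward:
        print(f"Goal reached at step {step}")
        break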
Now let’s go over some of the important concepts in machine learning, as it’s critical to have a good
understanding of these aspects before moving on to machine learning in production.
Gradient Descent
At the end of the day, a machine learning model is only as good as the loss it is able to minimize in its
predictions. There are different types of loss functions pertaining to specific categories of problems, and
most often, in typical classification or regression tasks, we try to minimize the mean squared error or
log loss during training and cross-validation. If we think of the loss as a curve, as shown in Figure 1-7,
gradient descent helps us reach the point where the loss value is at its minimum. We start at a random point
based on the initial weights or parameters of the model and move in the direction in which the loss starts
reducing. One thing worth remembering here is that gradient descent takes big steps when it is far away from
the actual minimum, whereas once it reaches a nearby value, the step sizes become very small so as not to
miss the minimum.
To move toward the minimum value point, gradient descent starts by taking the derivative of the error with
respect to the parameters/coefficients (weights in the case of neural networks) and tries to find the point
where the slope of this error curve is equal to zero. One of the important components in gradient descent is
the learning rate, as it decides how quickly or how slowly the algorithm descends toward the lowest error
value. If the learning rate is set too high, chances are that it might skip the lowest value, and on the contrary,
if the learning rate is too small, it will take a long time to converge. Hence, the learning rate is an important
part of the overall gradient descent process.
The overall aim of gradient descent is to reach the combination of input coefficients that reflects the minimum
error based on the training data. So, in a way, we try to change these coefficient values from their earlier
values to achieve minimum loss. This is done by subtracting the product of the learning rate and the slope
(the derivative of the error with respect to the coefficient) from the old coefficient value. This alteration in
coefficient values keeps happening until there is no more change in the coefficients/weights of the model,
which signifies that gradient descent has reached the minimum value point on the loss curve.
Another type of gradient descent technique is stochastic gradient descent (SGD), which takes a similar
approach to minimizing the error but works with subsets of data points instead of considering all the data in
one go. It takes samples from the input data and applies gradient descent on them to find the point of lowest
error.
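The coefficient-update rule described above can be written out directly. The following sketch fits a single coefficient with plain gradient descent on a mean squared error loss; the learning rate, the toy data, and the iteration count are illustrative assumptions.
import numpy as np

# Toy data generated from y = 3x plus a little noise
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 3 * x + rng.normal(scale=0.1, size=200)

w, lr = 0.0, 0.1                        # initial coefficient and learning rate
for _ in range(100):
    error = w * x - y
    grad = 2 * np.mean(error * x)       # derivative of the MSE with respect to w
    w -= lr * grad                      # new weight = old weight - learning rate * slope
print(w)                                # should end up close to 3

# Stochastic/mini-batch variant: compute the gradient on a random sample instead of all data
batch = rng.choice(len(x), size=32, replace=False)
grad = 2 * np.mean((w * x[batch] - y[batch]) * x[batch])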
Performance Metrics
There are different ways in which the performance of a machine learning model can be evaluated, depending
on the nature of the algorithm used. As mentioned previously, there are broadly two categories of models:
regression and classification. For models that predict a continuous target, metrics such as R-squared and root
mean squared error (RMSE) can be used, whereas for classification, accuracy is the standard metric.
However, in cases where there is class imbalance and the business needs to focus on only one of the positive
or negative classes, measures such as precision and recall can be used.
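As a quick sketch of these metrics using scikit-learn; the labels and predictions below are made up purely for illustration.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             mean_squared_error, r2_score)

# Classification metrics on made-up labels and predictions
y_true = [0, 0, 0, 0, 1, 1, 1, 0, 0, 1]
y_pred = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1]
print(accuracy_score(y_true, y_pred))    # overall fraction of correct predictions
print(precision_score(y_true, y_pred))   # of predicted positives, how many were right
print(recall_score(y_true, y_pred))      # of actual positives, how many were found

# Regression metrics on made-up continuous values
y_true_reg = [2.5, 0.0, 2.1, 7.8]
y_pred_reg = [3.0, -0.1, 2.0, 8.0]
print(mean_squared_error(y_true_reg, y_pred_reg) ** 0.5)   # RMSE
print(r2_score(y_true_reg, y_pred_reg))                    # R-squared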
Now that we have gone over the fundamentals and important concepts in machine learning, it’s time for
us to build a simple machine learning model on a cloud platform, namely, Databricks.
Databricks is an easy and convenient way to get started with cloud infrastructure for building and running
machine learning models (single-threaded as well as distributed). I have given a detailed introduction to the
Databricks platform in a couple of my earlier books (Machine Learning Using PySpark and Learn PySpark).
The objective of this section is to give you a flavor of how to get up and running with ML on the cloud by
simply signing up with any of the major cloud service providers (Google, Amazon, Microsoft, Databricks).
Most of these platforms allow users to simply sign up and use the ML services (in some cases with limited
capabilities) for a predefined period or until the free credits are exhausted. Databricks lets you use the
community edition of its platform, which offers a cluster of up to 6 GB. We are going to use the community
edition to build and understand a decision tree model on a fake currency dataset. The dataset contains four
attributes of currency notes that can be used to detect whether a note is genuine or fake. Since we are using
the community edition, there is a limitation on the size of the dataset, and hence it has been kept relatively
small for demo purposes.
Note Sign up for the Databricks community edition to run this code.
The first step is to start a new cluster with the default settings, as we are not building a complicated model
here. Once the cluster is up and running, we simply need to upload the data to Databricks from the local
system. The next step is to create a new notebook and attach it to the cluster we created earlier. We then
import all the required libraries and confirm that the data was uploaded successfully.
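A minimal set of imports that covers the steps shown below might look like this, assuming pandas and scikit-learn are available on the cluster.
[In]: import pandas as pd
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import classification_report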
[In]: display(dbutils.fs.ls("/FileStore/tables/"))
The next step is to create a Spark dataframe from the table and later convert it to a pandas dataframe to
build the model.
[In]: sparkDF=spark.read.csv('/FileStore/tables/currency_note_data.csv', header="true", inferSchema="true")
[In]: df=sparkDF.toPandas()
We can take a look at the top five rows of the dataframe by using the pandas head function. This confirms
that we have a total of five columns including the target column (Class).
[In]: df.head(5)
[Out]:
As mentioned earlier, the data size is relatively small, and we can see that it contains just 1,372 records
in total, but the target class seems to be reasonably well balanced, and hence we are not dealing with a class
imbalance problem.
[In]: df.shape
[Out]: (1372, 5)
[In]: df.Class.value_counts()
[Out]:
0 762
1 610
We can also check whether there are any missing values in the dataframe by using the info function. The
dataframe seems to contain no missing values as such.
[In]: df.info()
[Out]:
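Assuming the four attribute columns are the input features and Class is the target column, the feature/target separation can be done as follows; this is a sketch of a step not shown above.
[In]: X = df.drop('Class', axis=1)
[In]: y = df['Class']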
The next step is to split the data into training and test sets using the train_test_split functionality.
[In]: X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=42)  # any fixed seed value works here
Now that we have the training set separated out, we can build a decision tree with default
hyperparameters to keep things simple. Remember, the objective of building this model is simply to
introduce the process of training a model on a cloud platform. If you want to train a much more complicated
model, please feel free to add your own steps such as enhanced feature engineering, hyperparameter tuning,
baseline models, visualization, or more. We are going to build much more complicated models that include
all the previous steps in later chapters of this book.
[In]: dec_tree=DecisionTreeClassifier().fit(X_train,y_train)
[In]: dec_tree.score(X_test,y_test)
[Out]: 0.9854227405247813
We can see that the decision tree seems to be doing incredibly well on the test data. We can also go over
the other performance metrics apart from accuracy using the classification_report function.
[In]: print(classification_report(y_test, dec_tree.predict(X_test)))
[Out]:
Deep Learning
In this section of the chapter, we will go over the fundamentals of deep learning and its underlying operating
principles. Deep learning has been in the limelight for quite a few years now and is improving by leaps and
bounds in terms of solving various business challenges. From image captioning to language translation to
self-driving cars, deep learning has become an important component in the larger scheme of things. To give
you an example, Google products such as Gmail, YouTube, Search, Maps, and Assistant all use deep learning
in some way or another in the background, due to its incredible ability to provide far better results compared
to some of the traditional machine learning algorithms.
But what exactly is deep learning? Well, before getting into deep learning, we must understand what
neural networks are. Deep learning is, in fact, an extension of neural networks. As mentioned earlier in the
chapter, neural networks are not new, but they didn’t take off due to various limitations. Those limitations no
longer exist, and businesses and the research community are now able to leverage the true power of neural
networks.
In supervised learning settings, there is a specific input and corresponding output. The objective of the
machine learning algorithms is to use this data and approximate the relationship between input and output
variables. In some cases, this relationship is evident and easy to capture, but in realistic scenarios, the
relationship between the input and output variables is complex and nonlinear in nature. To give an example,
for a self-driving car, the input variables could be as follows:
Terrain
Distance from nearest object
Traffic light
Sign boards
The output needs to be an action such as turning, driving fast or slowly, applying the brakes, etc. As you might
imagine, the relationship between the input variables and the output variables is pretty complex in nature.
Hence, traditional machine learning algorithms find it hard to map this kind of relationship. Deep learning
outperforms machine learning algorithms in such situations, as it is able to learn those nonlinear features as
well.
Let’s say we have two binary inputs (X1, X2) and the weights of their respective connections (W1, W2).
The weights can be considered similar to the coefficients of the input variables in traditional machine
learning. These weights indicate how important a particular input feature is in the model. The summation
function calculates the total sum of the inputs. The activation function then takes this summed value and
produces a certain output, as shown in Figure 1-12. Activation is sort of a decision-making function. Based on
the type of activation function used, it gives an output accordingly. There are different types of activation
functions that can be used in a neural network layer.
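A single neuron of this kind can be written in a few lines; the input values, the weights, and the choice of a sigmoid activation below are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))      # squashes any value into the range (0, 1)

x = np.array([1, 0])                 # two binary inputs X1, X2
w = np.array([0.7, 0.3])             # weights W1, W2 for the two connections
b = 0.0                              # bias term

z = np.dot(w, x) + b                 # summation function: total weighted input
output = sigmoid(z)                  # activation function decides the output
print(output)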
Activation Functions
Activation functions play a critical role in neural networks as the output varies based on the type of
activation function used. There are typically four main activation functions that are widely used. We will
briefly cover these in this section.
Hyperbolic Tangent
The other activation function is known as the hyperbolic tangent activation function , or tanh. This function
ensures the value remains between -1 to 1 irrespective of the output, as shown in Figure 1-14. The formula
of the tanh activation function is as follows:
The next step is to pass this sum through an activation function. Let’s consider using the sigmoid function,
which returns values between 0 and 1 irrespective of the input:
sigmoid(x) = 1 / (1 + e^-x)
The sigmoid function would calculate the output value for this sum as shown in Figure 1-18.
Neural Network
When we combine multiple neurons, we end up with a neural network. The simplest and most basic neural
network can be built using just the input and output neurons, as shown in Figure 1-19.
Figure 1-19 Simple network
The challenge with using a neural network like this is that it can only learn linear relationships and cannot
perform well in cases where the relationship between the input and the output is nonlinear. As we have
already seen, in real-world scenarios, the relationship is hardly ever that simple and linear. Hence, we need to
introduce an additional layer of neurons between the input and output layers to increase the network’s
capability to learn different kinds of nonlinear relationships as well. This additional layer of neurons is known
as the hidden layer, as shown in Figure 1-20. It is responsible for introducing nonlinearities into the learning
process of the network. Neural networks are also known as universal approximators, since they have the
ability to approximate any relationship between the input and output variables, no matter how complex and
nonlinear it is in nature. A lot depends on the number of hidden layers in the network and the total number of
neurons in each hidden layer. Given enough hidden layers, the network can perform incredibly well at
mapping this relationship.
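To make the effect of a hidden layer concrete, here is a minimal sketch on the classic XOR problem, which no purely linear model can solve; the layer size, activation, solver, and seed are arbitrary choices for illustration.
from sklearn.neural_network import MLPClassifier

# XOR: the relationship between the inputs and the output is nonlinear
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# One hidden layer of a few neurons lets the network capture the nonlinearity
mlp = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh', solver='lbfgs',
                    max_iter=5000, random_state=42).fit(X, y)
print(mlp.predict(X))   # ideally recovers [0, 1, 1, 0]; results can vary with the seed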
Training Process
A neural network is all about the various connections between neurons (shown as red lines in the figures) and
the different weights associated with these connections. Training a neural network primarily involves
adjusting these weights in such a way that the model can predict with a higher degree of accuracy. To
understand how neural networks are trained, let’s break down the steps of network training.
Step 1: Take the input values, as shown in Figure 1-21, and calculate the output values that are passed to the
hidden neurons. The weights used for the first iteration of the sum calculation are generated randomly.
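This first step can be sketched directly in NumPy; the layer sizes, the example input, and the use of sigmoid activations are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, 0.8, 0.2])             # example input values

# Weights for the first iteration are generated randomly
W_hidden = rng.normal(size=(3, 4))        # 3 inputs -> 4 hidden neurons
W_output = rng.normal(size=(4, 1))        # 4 hidden neurons -> 1 output

hidden = sigmoid(x @ W_hidden)            # values passed to the hidden neurons
output = sigmoid(hidden @ W_output)       # the network's prediction for this input
print(output)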