Learn Data Analysis with Python: Lessons in Coding, First Edition, by A.J. Henley
A. J. Henley and Dave Wolf
Dave Wolf
Sterling Business Advantage, LLC, Adamstown, Maryland, USA
While the advice and information in this book are believed to be true
and accurate at the date of publication, neither the authors nor the
editors nor the publisher can accept any legal responsibility for any
errors or omissions that may be made. The publisher makes no
warranty, express or implied, with respect to the material contained
herein.
What Is Anaconda?
Getting Started
Your Turn
Your Turn
Your Turn
Your Turn
Your Turn
Your Turn
Your Turn
Cleaning Data
Standardizing Dates
Binning Data
Selecting Columns
Your Turn
Your Turn
Sorting Data
Your Turn
Correlation
Your Turn
Regression
Your Turn
Your Turn
Your Turn
Your Turn
Your Turn
Your Turn
Your Turn
Graph a Dataset: Histogram
Your Turn
Your Turn
Graph a Dataset: Scatter Plot
Your Turn
Analysis Exercise 1
Analysis Exercise 2
Analysis Exercise 3
Analysis Exercise 4
Analysis Project
Required Deliverables
Index
About the Authors and About the Technical Reviewer
About the Authors
A. J. Henley
Dave Wolf
If you are already using Python for data analysis, just browse this
book’s table of contents. You will probably find a bunch of things
that you wish you knew how to do in Python. If so, feel free to turn
directly to that chapter and get to work. Each lesson is, as much as
possible, self-contained.
What Is Anaconda?
Anaconda is the easiest way to ensure that you don’t spend all day
installing Jupyter. Simply download the Anaconda package and run
the installer. The Anaconda software package contains everything
you need to create a Python development environment. Anaconda
comes in two versions: one for Python 2.7 and one for Python 3.x.
For the purposes of this guide, install the one for Python 3.x, since
Python 2 is no longer maintained.
Anaconda is an open source data-science platform. It contains
over 100 packages for use with Python, R, and Scala. You can
download and install Anaconda quickly with minimal effort. Once
installed, you can update the packages or Python version or create
environments for different projects.
Getting Started
1. Download and install Anaconda from https://www.anaconda.com/download.
import zipfile
path_to_zip_file = "datasets.zip"
directory_to_extract_to = ""
# Extract every file in the archive into the current directory.
with zipfile.ZipFile(path_to_zip_file, 'r') as zip_ref:
    zip_ref.extractall(directory_to_extract_to)
The first stage of data analysis is getting the data. Moving your data
from where you have it stored into your analytical tools and back out
again can be a difficult task if you don't know what you are doing.
Python and its libraries try to make it as easy as possible.
With just a few lines of code, you will be able to import and
export data in the following formats:
CSV
Excel
SQL
import pandas as pd
Location = "datasets/smallgradesh.csv"
df = pd.read_csv(Location, header=None)
df.head()
As you can see, our dataframe lacks column headers. Or, rather,
there are headers, but they weren't loaded as headers; they were
loaded as row one of your data. To load data that includes headers,
you can use the code shown in Listing 2-3.
import pandas as pd
Location = "datasets/gradedata.csv"
df = pd.read_csv(Location)
df.head()
If you have a dataset that doesn't include headers, you can add
them afterward. To add them, we can use one of the options shown
in Listing 2-5.
import pandas as pd
Location = "datasets/smallgrades.csv"
# To add headers as we load the data...
df = pd.read_csv(Location, names=
['Names','Grades'])
# To add headers to a dataframe
df.columns = ['Names','Grades']
Listing 2-5 Loading Data from CSV File and Adding Headers
Your Turn
Can you make a dataframe from a file you have uploaded and
imported on your own? Let's find out. Go to the following website,
which contains U.S. Census data (
http://census.ire.org/data/bulkdata.html ), and
download the CSV datafile for a state. Now, try to import that data
into Python.
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
grades = [76,95,77,78,99]
GradeList = list(zip(names,grades))
df = pd.DataFrame(data = GradeList, columns=['Names','Grades'])
df.to_csv('studentgrades.csv', index=False, header=False)
The first lines create the dataframe; the final to_csv call exports
the dataframe df to a CSV file called studentgrades.csv.
The only parameters we use are index and header. Setting
these parameters to false will prevent the index and header names
from being exported. Change the values of these parameters to get
a better understanding of their use.
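To see what those two parameters do, here is a quick sketch (the file names `bare.csv` and `full.csv` are hypothetical; any paths work):

```python
import pandas as pd

# The same small dataframe pattern used above.
df = pd.DataFrame({'Names': ['Bob', 'Jessica'], 'Grades': [76, 95]})

# index=False, header=False: only the raw values reach the file.
df.to_csv('bare.csv', index=False, header=False)

# The defaults (index=True, header=True) add the index column
# and the header row to the output.
df.to_csv('full.csv')

print(open('bare.csv').read())
print(open('full.csv').read())
```

Comparing the two files side by side makes the effect of each parameter obvious.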
If you want in-depth information about the to_csv method, you
can, of course, use the code shown in Listing 2-7.
df.to_csv?
Your Turn
Can you export the dataframe created by the code in Listing 2-8 to
CSV?
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
grades = [76,95,77,78,99]
bsdegrees = [1,1,0,0,1]
msdegrees = [2,1,0,0,0]
phddegrees = [0,1,0,0,0]
Degrees = list(zip(names,grades,bsdegrees,msdegrees,phddegrees))
columns = ['Names','Grades','BS','MS','PhD']
df = pd.DataFrame(data = Degrees, columns=columns)
df
import pandas as pd
Location = "datasets/gradedata.xlsx"
df = pd.read_excel(Location)
df.head()
df.columns = ['first','last','sex','age','exer','hrs','grd','addr']
df.head()
Your Turn
Can you make a dataframe from a file you have uploaded and
imported on your own? Let's find out. Go to
https://www.census.gov/support/USACdataDownloads.h
tml and download one of the Excel datafiles at the bottom of the
page. Now, try to import that data into Python.
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
grades = [76,95,77,78,99]
GradeList = list(zip(names,grades))
df = pd.DataFrame(data = GradeList, columns=['Names','Grades'])
writer = pd.ExcelWriter('dataframe.xlsx', engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
writer.save()  # in pandas 2.0 and later, use writer.close() instead
Listing 2-12 Exporting a Dataframe to Excel
If you wish, you can save different dataframes to different
sheets, and with one .save() you will create an Excel file with
multiple worksheets (see Listing 2-13).
writer = pd.ExcelWriter('dataframe.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name="Sheet1")
df2.to_excel(writer, sheet_name="Sheet2")
writer.save()
Your Turn
Can you export the dataframe created by the code shown in Listing
2-14 to Excel?
import pandas as pd
names = ['Nike','Adidas','New Balance','Puma','Reebok']
prices = [176,59,47,38,99]
PriceList = list(zip(names,prices))
df = pd.DataFrame(data = PriceList, columns=['Names','Prices'])
import pandas as pd
import numpy as np
# DataFrame.append was removed in pandas 2.0; pd.concat does the same job.
frames = [pd.read_excel("datasets/data1.xlsx"),
          pd.read_excel("datasets/data2.xlsx"),
          pd.read_excel("datasets/data3.xlsx")]
all_data = pd.concat(frames, ignore_index=True)
all_data.describe()
import pandas as pd
import numpy as np
import glob
# Gather the matching files, then concatenate them in one step
# (DataFrame.append was removed in pandas 2.0).
frames = []
for f in glob.glob("datasets/data*.xlsx"):
    frames.append(pd.read_excel(f))
all_data = pd.concat(frames, ignore_index=True)
all_data.describe()
Your Turn
In the datasets/weekly_call_data folder, there are 104 files of
weekly call data for two years. Your task is to try to load all of that
data into one dataframe.
import pandas as pd
from sqlalchemy import create_engine
# Connect to sqlite db
db_file = r'datasets/gradedata.db'
engine = create_engine(r"sqlite:///{}".format(db_file))
sql = ('SELECT * FROM test '
       'WHERE Grades IN (76,77,78)')
sales_data_df = pd.read_sql(sql, engine)
sales_data_df
Once you know the name of a table you wish to view (let's say it
was test), if you want to know the names of the fields in that
table, you can change your SQL statement to that shown in Listing
2-19.
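The listing itself did not survive in this copy. One way to do the job, sketched against a throwaway in-memory database (the table name test comes from the text; the column names here are made up):

```python
import sqlite3
import pandas as pd

# Build a disposable SQLite database containing a table named test.
con = sqlite3.connect(':memory:')
pd.DataFrame({'Names': ['Bob'], 'Grades': [76]}).to_sql('test', con, index=False)

# Selecting zero rows still brings back the table's column names.
fields = pd.read_sql('SELECT * FROM test LIMIT 0', con).columns.tolist()
print(fields)
```

The `LIMIT 0` trick avoids pulling any data while still exposing the field names.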
pd.read_sql?
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
grades = [76,95,77,78,99]
GradeList = zip(names,grades)
df = pd.DataFrame(data = GradeList,
columns=['Names', 'Grades'])
df
import os
import sqlite3 as lite
db_filename = r'mydb.db'
con = lite.connect(db_filename)
# Note: the old flavor='sqlite' argument has been removed from pandas.
df.to_sql('mytable',
          con,
          schema=None,
          if_exists='replace',
          index=True,
          index_label=None,
          chunksize=None,
          dtype=None)
con.close()
df.to_sql?
Your Turn
This might be a little tricky, but can you create a sqlite table that
contains the data found in datasets/gradedata.csv?
import pandas as pd
from numpy import random
from numpy.random import randint
names = ['Bob','Jessica','Mary','John','Mel']
random.seed(500)
Listing 2-25 Seeding Random Generator
This seeds the random number generator. If you use the same
seed, you will get the same "random" numbers.
What we will try to do is this:
1. randint(low=0,high=len(names))
Generates a random integer between zero and one less than the
length of the list names (a valid index).
2. names[n]
Selects the name where its index is equal to n.
3. for i in range(n)
Loops n times, with i taking the values 0, 1, 2, …, n-1.
4. random_names =
Selects a random name from the name list and does this n
times.
randnames = []
for i in range(1000):
    name = names[randint(low=0, high=len(names))]
    randnames.append(name)
births = []
for i in range(1000):
    births.append(randint(low=0, high=1000))
Listing 2-27 Selecting 1000 Random Numbers
And, finally, zip the two lists together and create the dataframe
(Listing 2-28).
BabyDataSet2 = list(zip(randnames,births))
df = pd.DataFrame(data = BabyDataSet2,
columns=['Names', 'Births'])
df
Listing 2-28 Creating Dataset from the Lists of Random Names and Numbers
Your Turn
Create a dataframe called parkingtickets with 250 rows
containing a name and a number between 1 and 25.
© A.J. Henley and Dave Wolf 2018
A.J. Henley and Dave Wolf, Learn Data Analysis with Python,
https://doi.org/10.1007/978-1-4842-3486-0_3
The second step of data analysis is cleaning the data. Getting data
ready for analytical tools can be a difficult task. Python and its
libraries try to make it as easy as possible.
With just a few lines of code, you will be able to get your data
ready for analysis. You will be able to
clean the data;
create new variables; and
organize the data.
Cleaning Data
To be useful for most analytical tasks, data must be clean. This
means it should be consistent, relevant, and standardized. In this
chapter, you will learn how to
remove outliers;
remove inappropriate values;
remove duplicates;
remove punctuation;
remove whitespace;
standardize dates; and
standardize text.
Let's see what these look like (Listings 3-1 and 3-2).
import pandas as pd
Location = "datasets/gradedata.csv"
df = pd.read_csv(Location)
meangrade = df['grade'].mean()
stdgrade = df['grade'].std()
toprange = meangrade + stdgrade * 1.96
botrange = meangrade - stdgrade * 1.96
copydf = df.copy()
copydf = copydf.drop(copydf[copydf['grade'] > toprange].index)
copydf = copydf.drop(copydf[copydf['grade'] < botrange].index)
copydf
import pandas as pd
Location = "datasets/gradedata.csv"
df = pd.read_csv(Location)
q1 = df['grade'].quantile(.25)
q3 = df['grade'].quantile(.75)
iqr = q3-q1
toprange = q3 + iqr * 1.5
botrange = q1 - iqr * 1.5
copydf = df.copy()
copydf = copydf.drop(copydf[copydf['grade'] > toprange].index)
copydf = copydf.drop(copydf[copydf['grade'] < botrange].index)
copydf
Your Turn
Load the dataset datasets/outlierdata.csv. Can you remove
the outliers? Try it with both methods.
import pandas as pd
df = pd.read_csv("datasets/gradedatamissing.csv")
df.head()
df_no_missing = df.dropna()
df_no_missing
import numpy as np
df['newcol'] = np.nan
df.head()
To drop any columns that contain nothing but empty values, see
Listing 3-6.
df.dropna(axis=1, how="all")
df.fillna(0)
To fill in missing grades with the mean value of grade, see Listing
3-8.
df["grade"].fillna(df["grade"].mean(), inplace=True)
df["grade"].fillna(df.groupby("gender")["grade"].transform("mean"), inplace=True)
Your Turn
Load the dataset datasets/missinggrade.csv. Your mission, if
you choose to accept it, is to delete rows with missing grades and to
replace the missing values in hours of exercise by the mean value for
that gender.
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
grades = [76,-2,77,78,101]
GradeList = zip(names,grades)
df = pd.DataFrame(data = GradeList,
columns=['Names', 'Grades'])
df
Your Turn
Using the dataset from this section, can you replace all the subzero
grades with a grade of zero?
import pandas as pd
names = ['Jan','John','Bob','Jan','Mary','Jon','Mel','Mel']
grades = [95,78,76,95,77,78,99,100]
GradeList = zip(names,grades)
df = pd.DataFrame(data = GradeList,
columns=['Names', 'Grades'])
df
Listing 3-14 Creating Dataset with Duplicates
To indicate the duplicate rows, we can simply run the code seen
in Listing 3-15.
df.duplicated()
df.drop_duplicates()
You might be asking, “What if the entire row isn't duplicated, but I
still know it's a duplicate?” This can happen if someone takes your
survey twice or retakes an exam, so the name is the same but the
observation is different. In this case, where we know that a duplicate
name means a duplicate entry, we can use the code seen in Listing
3-17.
df.drop_duplicates(['Names'], keep="last")
Listing 3-17 Drop Rows with Duplicate Names, Keeping the Last Observation
Your Turn
Load the dataset datasets/dupedata.csv. We figure people
with the same address are duplicates. Can you drop the duplicated
rows while keeping the first?
import pandas as pd
Location = "datasets/gradedata.csv"
df = pd.read_csv(Location)
df.head()
import string
exclude = set(string.punctuation)
def remove_punctuation(x):
    try:
        x = ''.join(ch for ch in x if ch not in exclude)
    except:
        pass
    return x
df.address = df.address.apply(remove_punctuation)
df
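The chapter's list also promised whitespace removal. That listing is missing from this copy, but here is a sketch in the same spirit as remove_punctuation (the sample addresses are made up; this trims only leading and trailing whitespace):

```python
import pandas as pd

# Hypothetical data with stray spaces around the values.
df = pd.DataFrame({'address': ['  12 Main St ', ' 9 Oak Ave']})

def remove_whitespace(x):
    # Trim leading/trailing whitespace; leave interior spaces alone.
    try:
        x = x.strip()
    except AttributeError:
        pass  # non-string values pass through unchanged
    return x

df.address = df.address.apply(remove_whitespace)
print(df.address.tolist())
```

If you want to drop interior whitespace as well, `''.join(x.split())` is the usual one-liner.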
Standardizing Dates
One of the problems with consolidating data from different sources
is that different people and different systems can record dates
differently. Maybe they use 01/03/1980, or 01/03/80, or even
1980/01/03. Even though they all refer to January 3,
1980, analysis tools may not recognize them all as dates if you are
switching back and forth between the different formats in the same
column (Listing 3-22).
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
grades = [76,95,77,78,99]
bsdegrees = [1,1,0,0,1]
msdegrees = [2,1,0,0,0]
phddegrees = [0,1,0,0,0]
bdates = ['1/1/1945','10/21/76','3/3/90',
'04/30/1901','1963-09-01']
GradeList = list(zip(names,grades,bsdegrees,msdegrees,phddegrees,bdates))
columns = ['Names','Grades','BS','MS','PhD','bdates']
df = pd.DataFrame(data = GradeList, columns=columns)
df
Listing 3-22 Creating Dataframe with Different Date Formats
Listing 3-23 shows a function that standardizes dates to a single
format.
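The function's body was lost in this copy; here is a sketch that handles the mixed formats above by leaning on pandas' date parser (emitting MM/DD/YYYY is an assumption, not necessarily the book's choice):

```python
import pandas as pd

def standardize_date(the_date):
    # Let pandas infer the incoming format (it copes with the mixed
    # styles above), then re-emit every date in one consistent format.
    return pd.to_datetime(the_date).strftime('%m/%d/%Y')

print(standardize_date('1963-09-01'))
```

Note that two-digit years like '76' are resolved by the parser's century rules, which may or may not match your data's intent.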
df.bdates = df.bdates.apply(standardize_date)
df
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
grades = [76,95,77,78,99]
bsdegrees = [1,1,0,0,1]
msdegrees = [2,1,0,0,0]
phddegrees = [0,1,0,0,0]
ssns = ['867-53-0909','333-22-4444','123-12-1234',
        '777-93-9311','123-12-1423']
GradeList = list(zip(names,grades,bsdegrees,msdegrees,phddegrees,ssns))
columns = ['Names','Grades','BS','MS','PhD','ssn']
df = pd.DataFrame(data = GradeList, columns=columns)
df
Listing 3-26 Remove Hyphens from SSNs and Add Leading Zeros if Necessary
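The cleanup step the caption describes is missing from this copy; here is a sketch using pandas string methods (the sample values are made up, and the nine-digit padding matches the caption's "leading zeros"):

```python
import pandas as pd

# A hypothetical sample in the same shape as the ssn column above.
df = pd.DataFrame({'ssn': ['867-53-0909', '3-22-4444']})

# Strip the hyphens, then left-pad with zeros to the standard nine digits.
df['ssn'] = (df['ssn'].astype(str)
             .str.replace('-', '', regex=False)
             .str.zfill(9))
print(df['ssn'].tolist())
```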
Binning Data
Sometimes, you will have continuous data that you need to group into
discrete bins. (Think: converting numeric grades to letter grades.) In
this lesson, we will learn about binning (Listing 3-27).
import pandas as pd
Location = "datasets/gradedata.csv"
df = pd.read_csv(Location)
df.head()
Now that the data is loaded, we need to define the bins and
group names (Listing 3-28).
Notice that there is one more bin value than there are
group_names. This is because there needs to be a top and bottom
limit for each bin.
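The definitions themselves did not survive extraction; here is a sketch of what the text describes, with standard letter-grade cut points assumed (the sample grades are made up):

```python
import pandas as pd

# Hypothetical grades standing in for the gradedata.csv column.
df = pd.DataFrame({'grade': [55, 67, 73, 88, 95]})

# Six bin edges define five bins: one more edge than there are group names.
bins = [0, 60, 70, 80, 90, 100]
group_names = ['F', 'D', 'C', 'B', 'A']

# pd.cut assigns each grade to its bin and labels it with the group name.
df['lettergrade'] = pd.cut(df['grade'], bins, labels=group_names)
print(df['lettergrade'].tolist())
```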
pd.value_counts(df['lettergrade'])
Your Turn
Recreate the dataframe from this section and create a column
classifying the row as pass or fail. This is for a master's program that
requires a grade of 80 or above for a student to pass.
import pandas as pd
Location = "datasets/gradedata.csv"
df = pd.read_csv(Location)
df.head()
Then, we use binning to divide the data into letter grades (Listing
3-32).
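Listing 3-32 itself is missing from this copy; here is a sketch of that step, assuming standard letter-grade cut points (the grades and hours values are made up):

```python
import pandas as pd

# Hypothetical stand-in for gradedata.csv: a grade and hours of study per row.
df = pd.DataFrame({'grade': [55, 67, 73, 88, 95],
                   'hours': [2, 4, 6, 8, 10]})

bins = [0, 60, 70, 80, 90, 100]
letters = ['F', 'D', 'C', 'B', 'A']
df['letterGrades'] = pd.cut(df['grade'], bins, labels=letters)

# Average hours of study for each letter grade.
means = df.groupby('letterGrades', observed=False)['hours'].mean()
print(means)
```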
df.groupby('letterGrades')['hours'].mean()
gender_preScore = df['grade'].groupby(df['gender'])
gender_preScore.mean()
Your Turn
Import the datasets/gradedata.csv file and create a new
binned column of the 'status' as either passing (> 70) or failing
(<=70). Then, compute the mean hours of exercise of the female
students with a 'status' of passing.
import pandas as pd
Location = "datasets/gradedata.csv"
df = pd.read_csv(Location)
df.head()
If we want to find the rows with the lowest grades, we will need
to rank all rows in ascending order by grade. Listing 3-37 shows the
code to create a new column that is the rank of the value of grade in
ascending order.
df['graderanked'] = df['grade'].rank(ascending=1)
df.tail()
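With the rank column in place, pulling the lowest-ranked rows is one filter away. A sketch on made-up data (the column names here are hypothetical stand-ins for gradedata.csv):

```python
import pandas as pd

# Hypothetical stand-in for gradedata.csv.
df = pd.DataFrame({'fname': ['Bob', 'Jessica', 'Mary'],
                   'grade': [76, 95, 77]})
df['graderanked'] = df['grade'].rank(ascending=1)

# Keep the two lowest-ranked grades, smallest first.
lowest = df[df['graderanked'] <= 2].sort_values('graderanked')
print(lowest['fname'].tolist())
```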
W. L. Crawford
A leading criminal lawyer of Texas. Upon leaving Jefferson he moved
to Dallas.
Hector McKay
Hector McKay, born in Tennessee, came to Texas with his mother
and family when very young, settled near Elysian Fields, where the
family remained many years. The old McKay burying ground is there.
He was a member of Ector’s Brigade during the Civil War, enlisting at
Marshall. He attained the rank of Captain. After the war, he practiced
law in Marshall where he was a law partner of Judge Mabry and later
of W. T. Armistead. Captain McKay was one of the prominent lawyers
of early days of Jefferson.
Captain Moss
Captain Moss, the grandfather of Mrs. Will Sims, of Jefferson, in
1836 operated and owned one of the finest steamboats on the river
—The Hempstead. He assisted Captain Shreve in blowing out the
rafts to make Cypress Bayou navigable to Jefferson and during the
Mexican war he transported soldiers across the river into Texas.
Mr. T. L. Lyon
Mr. T. L. Lyon, with his family, came to Jefferson during the summer
of 1867. For many years Captain Lyon was a member of the firm,
Mooring and Lyon, buying cotton and doing general mercantile
business on Dallas Street. They commanded a wide scope of
business in the palmy days of the city.
Royal A. Ferris
Mr. Ferris was a leading lawyer of Dallas, Texas, and he too was
reared in Jefferson.
Nelson Phillips
Chief Justice of the Supreme Court of Texas, and a leading lawyer of
the State, made his home in Dallas and was a product of Marion
County, living near Jefferson.
W. B. Harrison
A leading business man and banker of Ft. Worth, Texas for many
years started his business career in Jefferson during the palmy days.
Jefferson is truly the “mother city” of the State of Texas as it was the
largest and most noted in the early ’70’s and gave to the balance of
the State leading business and professional men to make “The Lone
Star State” great.
The old families, who were not so famous, but were the real stamina
of the town down through the ages, when prosperity had passed on
to other fields and living was hard—yet they lived on and kept the
home fires burning until today Jefferson seems doomed to again
come to be, and is known the state over as a promising oil center—
with prosperity again in view. These with their children may be
numbered by the hundreds. Among them we find:
Dr. B. J. Terry
Dr. T. H. Stallcup
W. B. Stallcup
Ward Taylor
Dick Terry
J. H. Rowell
S. W. Moseley
Shep Haywood
T. L. Torrans
W. P. Schluter
Louis Schluter
R. B. Walker
W. B. Kennon
J. B. Zachery
W. B. Sims
D. C. Wise
A. Urquhart
S. A. Spellings
Capt. Lyon
A. Stutz
T. J. Rogers
W. J. Sedberry
Sam Moseley
W. B. Ward
Sam Ward
J. M. DeWare
B. J. Benefield
J. C. Preston
P. Eldridge
I. Goldberg
M. Bower
J. J. Rives
H. Rives
These with many others have done much for Jefferson and
Texas. So “Come to Texas” and be sure you come to
Jefferson.
Benj. H. Epperson
Benj. H. Epperson was born in Mississippi in about 1828. He was
educated in North Carolina and at Princeton University, New Jersey.
He came to Texas and settled at Clarksville sometime in the ’40’s. He
studied law, was admitted to the bar and practiced with marked
ability and success. He was an active Whig politician before and after
the war and was the candidate of his party for governor in 1851 at a
time when he was below the constitutional age. In 1852 he was at
the head of the Texas delegation to the Whig National convention.
He served in the Legislature practically from 1858 until his death. He
was a personal friend of Sam Houston and was consulted by
Houston on numerous affairs of state.
W. P. Torrans
W. P. Torrans, born in Mobile, Alabama, in 1849 moved to Houston,
Texas and in 1850 came to Jefferson. He established a general
mercantile business on Austin Street, in the building next to the
present Goldberg Feed Store.
The Torrans business has run continuously all these years and is
known as the Torrans Manufacturing Company, a very flourishing
business, owned and operated by T. L. Torrans, who is one of
Jefferson’s most prominent and active citizens, and a son of W. P.
Torrans. Mr. T. L. Torrans married Miss Elizabeth Schluter, daughter
of the late Mr. and Mrs. Louis Schluter, who was a very prominent
lawyer in this city.
Tom Lee Torrans, Jr., a son of Mr. Torrans, is now active manager of
the store. Mrs. Kelly Spearman and Louis Torrans are also children of
Mr. and Mrs. Tom Torrans.
Taken from the “Dallas and Texas fifty years ago” column in the
Dallas News on Jan. 25, the story above was sent to The Jefferson
Journal by Ollie B. Webb, Texas and Pacific official, with the notation
that “it seems to me will be of interest to many in Jefferson.”
Both were men who stood out as leaders in their time. Both they
and their descendants have many friends in Jefferson.
Mr. Ballauf came to Jefferson in 1867, he and his wife making the
trip by boat.
He opened a general merchandise on the corner of Marshall and
Austin streets, later moving to Walnut and Lafayette and later to
Austin street. The G. A. Kelly foundry of Kellyville was purchased by
Mr. Ballauf and the material used for all manufactured articles was
secured in Marion county. Mr. Ballauf operated the foundry until
1895. His mercantile business was later devoted entirely to hardware
and mill machinery.
Mr. Ballauf sold his business to his son Fred and his son-in-law
Eugene Meyer in 1897, the banking business was discontinued and
Mr. Ballauf retired from business having spent thirty years without a
failure, assignment or compromise with creditors.
Robert Potter
March 3, 1843, Senator Robert Potter, a signer of the Texas
Declaration of Independence and first secretary of the Navy of the
Republic, was murdered at his home on Caddo Lake.
Rose was near by with some slaves clearing a woodland and when
he saw Potter’s men, he lay upon the ground while one of his slaves,
“Uncle Jerry,” piled brush over him and effectually concealed him
from view.
Shortly after Mr. Spellings became president, the bank became one of
the honor banks of the United States and has maintained this position
to the present time. Throughout the most depressing period the banks
have ever faced, the Rogers National Bank of Jefferson, under Mr.
Spellings’ guidance, maintained more than its legal reserve, willingly
met the demands made upon it, and was never embarrassed to the
least extent.
The fact that this bank has achieved this distinction stamps it as one
of the strongest institutions for its size in the whole United States.
You can see, therefore, that it does mean a great deal to you as a
depositor or as a possible depositor, that this is a “Roll of Honor”
bank. In addition to giving you “more than the law requires” in
protection, we are only striving to give you a “double measure” of
courteous and friendly service.
His brilliant mind and sterling qualities of character won for him the
title of “Honest Dave.” He was one of the lawyers in the famous Diamond
Bessie-Rothchild case.
The Rev. D. B. Culberson, Sr., the father of Col. Culberson, was one
of the early pastors of the First Baptist Church of Jefferson.
CLUBS
The first choruses “Carmen” and “In Old Madrid” were presented at
the District meeting of Federated Women’s Clubs.
The Wednesday Music Club was a member of the East Texas Music
Festival during its seven years of existence and with the May Belle
Hale Symphony Orchestra, as Co-Hostess, entertained the Festival in
May 1924.
Mrs. G. T. Haggard was the Club president last year and she and
Mrs. Murph Smith DeWare are the only members who have served
the club, uninterruptedly from its beginning.
The Club had the honor and pleasure of taking part in the wedding
of one of its members, Miss Ethel Leaf to Mr. J. M. DeWare by
rendering “Lohengrin’s Wedding March.” This really was a “few”
years back but a happy memory to those present on such a joyous
occasion.
MARION COUNTY
Possibly few counties have the distinction of having been a part of so
many other counties as has Marion County, so no wonder she is so
“tiny” in size after having been sliced and served to six different
others.
The records at Austin, Texas, tell us that Marion County was first a
part of Red River County, later a part of Shelby, Bowie, Titus, Cass
and Harrison. Cass County was for ten years known as Davis County
before again taking the name of Cass, so really another “slice” may
have been taken off Marion.
Marion County today has an abundant supply of high grade iron ore;
saw mills, chair factory, an abundant supply of the purest and best
artesian water to be found anywhere. The county is well adapted to
the raising of hogs and cattle, and produces the most delicious sweet
potatoes, fruit, and berries of all kinds. Mayhaws grow wild, and from these is
made a most palatable and beautiful jelly; in fact almost anything
will do well in Marion County. There are many kinds of clover
growing wild.
In the gift of this splendid piece of work lay the life time love of
Jefferson, a devotion of a little immigrant girl grown to womanhood,
and the gratitude of her children to a little city that had given Mother
and Father happiness.
The fountain is entirely of purest bronze and is 13½ feet high, with
bowls 7½ feet broad, and has a statue six feet tall representing
“L’Éducation,” the total cost being $4,000.
The fountain is still used, as was originally intended, for the good of
man, stock and dogs, and the pure water that flows through it was
given the ladies of Jefferson by the late W. B. Ward in appreciation
for work done in the prohibition election many years ago.
Just before Mr. Stern’s death their old servant “Aunt Caroline” and he
were talking and he told her that he thanked God he had set the
colored people free, and she replied, “But thanks be to him mos’en
fer giben me my good marsar and misses, who gib me my close, my
vittles and my medicine.”
WALNUT GROVE
Five miles south of Linden there stands today an immense walnut
grove. Planted on both sides of the old dirt road, one hundred or
more of these trees are all that are left of the 320 planted by Mr. Jim
Lockett, more than 60 years ago. The trees make a dense shade and
a beautiful lane.
The story is, that Mr. Lockett in a reminiscent mood, thought, that
the country some day would run out of split rails, with which to
make fences. Realizing that wire would some day be used for
making fences he knew that fence posts would be needed, so he
ordered his farm hands to plant in every other corner of the rail
fence a slim seedling walnut tree to be used for future fence posts.
They are standing today waiting for the wire and we are told that
when the new highway was built that it was moved over 200 feet to
keep from injuring the roots of Mr. Lockett’s trees.
Mr. Lockett passed away more than 20 years ago, but his Walnut line
is still a joy to the many who pass that way and many people gather
the Walnuts by the bushel each fall.
Another interesting thing that Mr. Lockett had on his farm was his
water gin. One of the neighbors said, “that in its day it could really
go after the cotton.”
The water was brought to the gin through a series of ditches and
water troughs a mile and a quarter long.
From overhead and controlled by a gate, the water fell onto the top
of a large wooden wheel 36 feet in diameter. Around the wheel were
attached buckets holding 15 gallons of water each, and when
enough buckets were filled with water the wheel began turning and
the gin ran. “She would launch out five bales a day, if you got going
by daylight.”
MURDER ALLEY
“Murder Alley” may be reached by taking the left where Line Street
divides, going south to the river, leaving the Barbee home on the
right.
The name “Murder Alley” was derived from the fact that one and
often two dead bodies would be found each morning in this alley.
Amid all the dissipation into which her betrayer had thrust her, the
woman had remained true and steadfast to the man she loved, for
whom she had given up her innocence and home. Through all this
time she had relied upon Rothchild’s promise to make her his wife
and she prayed that the promise might be fulfilled.
The girl believed him and blessed him, and they left Cincinnati
together, traveling westward and passing through Texarkana.
From the moment Rothchild promised to make Bessie Moore his wife
he had been planning the woman’s murder. They left the Texas and
Pacific railway at Kildare, Rothchild telling the woman that they
would go through Linden, the County seat of Cass County, to be
married, choosing that spot, he said, because it was so obscure that
news of the marriage would not be heard outside the little town in
which the ceremony was to be performed. His real intention was to
murder the woman on the road. He was thwarted in this by being
compelled to make the trip on a public coach, there being no such
thing as private conveyances in Kildare.