Python for Data Analysis
Wes McKinney
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions
are also available for most titles (http://my.safaribooksonline.com). For more information, contact our
corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.
Editors: Julie Steele and Meghan Blanchette
Production Editor: Melanie Yarbrough
Copyeditor: Teresa Exley
Proofreader: BIM Publishing Services
Indexer: BIM Publishing Services
Cover Designer: Karen Montgomery
Interior Designer: David Futato
Illustrator: Rebecca Demarest
Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trademarks of
O’Reilly Media, Inc. Python for Data Analysis, the cover image of a golden-tailed tree shrew, and related
trade dress are trademarks of O’Reilly Media, Inc.
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as
trademarks. Where those designations appear in this book, and O’Reilly Media, Inc., was aware of a
trademark claim, the designations have been printed in caps or initial caps.
While every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.
ISBN: 978-1-449-31979-3
Table of Contents

Preface

1. Preliminaries
    What Is This Book About?
    Why Python for Data Analysis?
    Python as Glue
    Solving the “Two-Language” Problem
    Why Not Python?
    Essential Python Libraries
    NumPy
    pandas
    matplotlib
    IPython
    SciPy
    Installation and Setup
    Windows
    Apple OS X
    GNU/Linux
    Python 2 and Python 3
    Integrated Development Environments (IDEs)
    Community and Conferences
    Navigating This Book
    Code Examples
    Data for Examples
    Import Conventions
    Jargon
    Acknowledgements

2. Introductory Examples
    1.usa.gov data from bit.ly
    Counting Time Zones in Pure Python
    Counting Time Zones with pandas
    MovieLens 1M Data Set
    Measuring rating disagreement
    US Baby Names 1880-2010
    Analyzing Naming Trends
    Conclusions and The Path Ahead

4. NumPy Basics: Arrays and Vectorized Computation
    Operations between Arrays and Scalars
    Basic Indexing and Slicing
    Boolean Indexing
    Fancy Indexing
    Transposing Arrays and Swapping Axes
    Universal Functions: Fast Element-wise Array Functions
    Data Processing Using Arrays
    Expressing Conditional Logic as Array Operations
    Mathematical and Statistical Methods
    Methods for Boolean Arrays
    Sorting
    Unique and Other Set Logic
    File Input and Output with Arrays
    Storing Arrays on Disk in Binary Format
    Saving and Loading Text Files
    Linear Algebra
    Random Number Generation
    Example: Random Walks
    Simulating Many Random Walks at Once

5. Getting Started with pandas
    Other pandas Topics
    Integer Indexing
    Panel Data

8. Plotting and Visualization
    A Brief matplotlib API Primer
    Figures and Subplots
    Colors, Markers, and Line Styles
    Ticks, Labels, and Legends
    Annotations and Drawing on a Subplot
    Saving Plots to File
    matplotlib Configuration
    Plotting Functions in pandas
    Line Plots
    Bar Plots
    Histograms and Density Plots
    Scatter Plots
    Plotting Maps: Visualizing Haiti Earthquake Crisis Data
    Python Visualization Tool Ecosystem
    Chaco
    mayavi
    Other Packages
    The Future of Visualization Tools?

Index
Preface
The scientific Python ecosystem of open source libraries has grown substantially over
the last 10 years. By late 2011, I had long felt that the lack of centralized learning
resources for data analysis and statistical applications was a stumbling block for new
Python programmers engaged in such work. Key projects for data analysis (especially
NumPy, IPython, matplotlib, and pandas) had also matured enough that a book written
about them would likely not go out-of-date very quickly. Thus, I mustered the nerve
to embark on this writing project. This is the book that I wish existed when I started
using Python for data analysis in 2007. I hope you find it useful and are able to apply
these tools productively in your work.
How to Contact Us
Please address comments and questions concerning this book to the publisher:
O’Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-998-9938 (in the United States or Canada)
707-829-0515 (international or local)
707-829-0104 (fax)
We have a web page for this book, where we list errata, examples, and any additional
information. You can access this page at http://oreil.ly/python_for_data_analysis.
To comment or ask technical questions about this book, send email to
bookquestions@oreilly.com.
For more information about our books, courses, conferences, and news, see our website
at http://www.oreilly.com.
Find us on Facebook: http://facebook.com/oreilly
Follow us on Twitter: http://twitter.com/oreillymedia
Watch us on YouTube: http://www.youtube.com/oreillymedia
CHAPTER 1
Preliminaries
Why Python for Data Analysis?
For many people (myself among them), the Python language is easy to fall in love with. Since its first appearance in 1991, Python has become one of the most popular dynamic programming languages, along with Perl, Ruby, and others. Python and Ruby have become especially popular in recent years for building websites using their numerous web frameworks, like Rails (Ruby) and Django (Python). Such languages are often called scripting languages, as they can be used to write quick-and-dirty small programs, or scripts. I don’t like the term “scripting language,” as it carries a connotation that such languages cannot be used for building mission-critical software. Among interpreted languages, Python is distinguished by its large and active scientific computing community. Adoption of Python for scientific computing in both industry applications and academic research has increased significantly since the early 2000s.
For data analysis and interactive, exploratory computing and data visualization, Python
will inevitably draw comparisons with the many other domain-specific open source
and commercial programming languages and tools in wide use, such as R, MATLAB,
SAS, Stata, and others. In recent years, Python’s improved library support (primarily
pandas) has made it a strong alternative for data manipulation tasks. Combined with
Python’s strength in general purpose programming, it is an excellent choice as a single
language for building data-centric applications.
Python as Glue
Part of Python’s success as a scientific computing platform is the ease of integrating C, C++, and FORTRAN code. Most modern computing environments share a similar set of legacy FORTRAN and C libraries for doing linear algebra, optimization, integration, fast Fourier transforms, and other such algorithms. The same story has held true for many companies and national labs that have used Python to glue together 30 years’ worth of legacy software.
Most programs consist of small portions of code where most of the time is spent, with
large amounts of “glue code” that doesn’t run often. In many cases, the execution time
of the glue code is insignificant; effort is most fruitfully invested in optimizing the
computational bottlenecks, sometimes by moving the code to a lower-level language
like C.
In the last few years, the Cython project (http://cython.org) has become one of the
preferred ways of both creating fast compiled extensions for Python and also interfacing
with C and C++ code.
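To make the glue idea concrete, here is a minimal sketch (not an example from this book) that uses the standard library's ctypes module to call a function in the system C math library; the library lookup assumes a Unix-like system:

import ctypes
import ctypes.util

# Load the C math library (assumes a Unix-like system where
# find_library('m') resolves to libm)
libm = ctypes.CDLL(ctypes.util.find_library('m'))

# Declare the C signature of cos: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print libm.cos(0.0)  # prints 1.0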
Solving the “Two-Language” Problem

In many organizations, it is common to research, prototype, and test new ideas using a more domain-specific computing language like MATLAB or R, and then later port those ideas to be part of a larger production system written in, say, Java, C#, or C++. What people are increasingly finding is that Python is a suitable language not only for doing research and prototyping but also for building production systems. I believe that more and more companies will go down this path, as there are often significant organizational benefits to having both scientists and technologists using the same set of programmatic tools.
pandas
pandas provides rich data structures and functions designed to make working with structured data fast, easy, and expressive. It is, as you will see, one of the critical ingredients enabling Python to be a powerful and productive data analysis environment. The primary object in pandas that will be used in this book is the DataFrame, a two-dimensional, tabular, column-oriented data structure with both row and column labels:
>>> frame
total_bill tip sex smoker day time size
1 16.99 1.01 Female No Sun Dinner 2
2 10.34 1.66 Male No Sun Dinner 3
3 21.01 3.5 Male No Sun Dinner 3
4 23.68 3.31 Male No Sun Dinner 2
5 24.59 3.61 Female No Sun Dinner 4
6 25.29 4.71 Male No Sun Dinner 4
7 8.77 2 Male No Sun Dinner 2
8 26.88 3.12 Male No Sun Dinner 4
9 15.04 1.96 Male No Sun Dinner 2
10 14.78 3.23 Male No Sun Dinner 2
pandas combines the high performance array-computing features of NumPy with the
flexible data manipulation capabilities of spreadsheets and relational databases (such
as SQL). It provides sophisticated indexing functionality to make it easy to reshape,
slice and dice, perform aggregations, and select subsets of data. pandas is the primary
tool that we will use in this book.
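As a small taste of these operations, here is a minimal sketch (with invented data, not the tips data set shown above) of building a DataFrame, selecting a subset of rows, and computing a grouped aggregation:

from pandas import DataFrame

# Build a DataFrame from a dict of equal-length columns (invented data)
frame = DataFrame({'total_bill': [16.99, 10.34, 21.01],
                   'tip': [1.01, 1.66, 3.50],
                   'sex': ['Female', 'Male', 'Male']})

frame[frame['sex'] == 'Male']       # boolean selection of rows
frame.groupby('sex')['tip'].mean()  # mean tip by group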
For financial users, pandas features rich, high-performance time series functionality
and tools well-suited for working with financial data. In fact, I initially designed pandas
as an ideal tool for financial data analysis applications.
For users of the R language for statistical computing, the DataFrame name will be
familiar, as the object was named after the similar R data.frame object. They are not
the same, however; the functionality provided by data.frame in R is essentially a strict
subset of that provided by the pandas DataFrame. While this is a book about Python, I
will occasionally draw comparisons with R as it is one of the most widely-used open
source data analysis environments and will be familiar to many readers.
The pandas name itself is derived from panel data, an econometrics term for multidimensional structured data sets, and Python data analysis itself.
matplotlib
matplotlib is the most popular Python library for producing plots and other 2D data
visualizations. It was originally created by John D. Hunter (JDH) and is now maintained
by a large team of developers. It is well-suited for creating plots suitable for publication.
It integrates well with IPython (see below), thus providing a comfortable interactive
environment for plotting and exploring data. The plots are also interactive; you can
zoom in on a section of the plot and pan around the plot using the toolbar in the plot
window.
IPython
IPython is the component in the standard scientific Python toolset that ties everything together. It provides a robust and productive environment for interactive and exploratory computing. It is an enhanced Python shell designed to accelerate the writing, testing, and debugging of Python code. It is particularly useful for interactively working with data and visualizing data with matplotlib. IPython is usually involved in the majority of my Python work, including running, debugging, and testing code.
Aside from the standard terminal-based IPython shell, the project also provides
• A Mathematica-like HTML notebook for connecting to IPython through a web
browser (more on this later).
• A Qt framework-based GUI console with inline plotting, multiline editing, and
syntax highlighting
• An infrastructure for interactive parallel and distributed computing
I will devote a chapter to IPython and how to get the most out of its features. I strongly
recommend using it while working through this book.
• Scientific Python base: NumPy, SciPy, matplotlib, and IPython. These are all included in EPDFree.
• IPython Notebook dependencies: tornado and pyzmq. These are included in EPDFree.
• pandas (version 0.8.2 or higher).
At some point while reading you may wish to install one or more of the following packages: statsmodels, PyTables, PyQt (or equivalently, PySide), xlrd, lxml, basemap, pymongo, and requests. These are used in various examples. Installing these optional libraries is not necessary, and I would suggest waiting until you need them. For example, installing PyQt or PyTables from source on OS X or Linux can be rather arduous. For now, it’s most important to get up and running with the bare minimum: EPDFree and pandas.
For information on each Python package and links to binary installers or other help,
see the Python Package Index (PyPI, http://pypi.python.org). This is also an excellent
resource for finding new Python packages.
Windows
To get started on Windows, download the EPDFree installer from http://www.enthought.com, which should be an MSI installer named like epd_free-7.3-1-win-x86.msi. Run the installer and accept the default installation location C:\Python27. If
you had previously installed Python in this location, you may want to delete it manually
first (or using Add/Remove Programs).
Next, you need to verify that Python has been successfully added to the system path
and that there are no conflicts with any prior-installed Python versions. First, open a
command prompt by going to the Start Menu and starting the Command Prompt application, also known as cmd.exe. Try starting the Python interpreter by typing
python. You should see a message that matches the version of EPDFree you installed:
C:\Users\Wes>python
Python 2.7.3 |EPD_free 7.3-1 (32-bit)| (default, Apr 12 2012, 14:30:37) on win32
Type "credits", "demo" or "enthought" for more information.
>>>
If you installed other versions of Python, be sure to delete any other Python-related directories from both the system and user Path variables. After making a path alteration, you have to restart the command prompt for the changes to take effect.
Once you can launch Python successfully from the command prompt, you need to
install pandas. The easiest way is to download the appropriate binary installer from
http://pypi.python.org/pypi/pandas. For EPDFree, this should be pandas-0.9.0.win32-py2.7.exe. After you run this, let’s launch IPython and check that things are installed
correctly by importing pandas and making a simple matplotlib plot:
C:\Users\Wes>ipython --pylab
Python 2.7.3 |EPD_free 7.3-1 (32-bit)|
Type "copyright", "credits" or "license" for more information.
In [2]: plot(arange(10))
If successful, there should be no error messages and a plot window will appear. You
can also check that the IPython HTML notebook can be successfully run by typing:
$ ipython notebook --pylab=inline
EPDFree on Windows contains only 32-bit executables. If you want or need a 64-bit
setup on Windows, using EPD Full is the most painless way to accomplish that. If you
would rather install from scratch and not pay for an EPD subscription, Christoph
Gohlke at the University of California, Irvine, publishes unofficial binary installers for
all of the book’s necessary packages (http://www.lfd.uci.edu/~gohlke/pythonlibs/) for 32-
and 64-bit Windows.
Apple OS X
To get started on OS X, you must first install Xcode, which includes Apple’s suite of
software development tools. The necessary component for our purposes is the gcc C
and C++ compiler suite. The Xcode installer can be found on the OS X install DVD
that came with your computer or downloaded from Apple directly.
Once you’ve installed Xcode, launch the terminal (Terminal.app) by navigating to
Applications > Utilities. Type gcc and press enter. You should hopefully see something like:
$ gcc
i686-apple-darwin10-gcc-4.2.1: no input files
Now you need to install EPDFree. Download the installer which should be a disk image
named something like epd_free-7.3-1-macosx-i386.dmg. Double-click the .dmg file to
mount it, then double-click the .mpkg file inside to run the installer.
When the installer runs, it automatically appends the EPDFree executable path to
your .bash_profile file. This is located at /Users/your_uname/.bash_profile:
# Setting PATH for EPD_free-7.3-1
PATH="/Library/Frameworks/Python.framework/Versions/Current/bin:${PATH}"
export PATH
Should you encounter any problems in the following steps, you’ll want to inspect
your .bash_profile and potentially add the above directory to your path.
Now, it’s time to install pandas. Execute this command in the terminal:
$ sudo easy_install pandas
Searching for pandas
Reading http://pypi.python.org/simple/pandas/
Reading http://pandas.pydata.org
Reading http://pandas.sourceforge.net
Best match: pandas 0.9.0
Downloading http://pypi.python.org/packages/source/p/pandas/pandas-0.9.0.zip
Processing pandas-0.9.0.zip
Writing /tmp/easy_install-H5mIX6/pandas-0.9.0/setup.cfg
Running pandas-0.9.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-H5mIX6/
pandas-0.9.0/egg-dist-tmp-RhLG0z
Adding pandas 0.9.0 to easy-install.pth file
Installed /Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/
site-packages/pandas-0.9.0-py2.7-macosx-10.5-i386.egg
Processing dependencies for pandas
Finished processing dependencies for pandas
To verify everything is working, launch IPython in Pylab mode and test importing pandas, then making a plot interactively:
In [2]: plot(arange(10))
If this succeeds, a plot window with a straight line should pop up.
GNU/Linux
Linux details will vary a bit depending on your Linux flavor, but here I give details for
Debian-based GNU/Linux systems like Ubuntu and Mint. Setup is similar to OS X with
the exception of how EPDFree is installed. The installer is a shell script that must be
executed in the terminal. Depending on whether you have a 32-bit or 64-bit system,
you will either need to install the x86 (32-bit) or x86_64 (64-bit) installer. You will then
have a file named something similar to epd_free-7.3-1-rh5-x86_64.sh. To install it,
execute this script with bash:
$ bash epd_free-7.3-1-rh5-x86_64.sh
After accepting the license, you will be presented with a choice of where to put the
EPDFree files. I recommend installing the files in your home directory, say /home/wesm/
epd (substituting your own username for wesm).
Once the installer has finished, you need to add EPDFree’s bin directory to your
$PATH variable. If you are using the bash shell (the default in Ubuntu, for example), this
means adding the following path addition in your .bashrc:
export PATH=/home/wesm/epd/bin:$PATH
Obviously, substitute the installation directory you used for /home/wesm/epd/. After
doing this you can either start a new terminal process or execute your .bashrc again
with source ~/.bashrc.
You need a C compiler such as gcc to move forward; many Linux distributions include
gcc, but others may not. On Debian systems, you can install gcc by executing:
sudo apt-get install gcc
If you type gcc on the command line it should say something like:
$ gcc
gcc: no input files
If you installed EPDFree as root, you may need to add sudo to the command and enter
the sudo or root password. To verify things are working, perform the same checks as
in the OS X section.
I encourage you to download the data and use it to replicate the book's code examples and experiment with the tools presented in each chapter. I will happily accept contributions, scripts, IPython notebooks, or any other materials you wish to contribute to the book's repository for all to enjoy.
Code Examples
Most of the code examples in the book are shown with input and output as it would
appear executed in the IPython shell.
In [5]: code
Out[5]: output
At times, for clarity, multiple code examples will be shown side by side. These should
be read left to right and executed separately.
In [5]: code In [6]: code2
Out[5]: output Out[6]: output2
Import Conventions
The Python community has adopted a number of naming conventions for commonly used modules:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
This means that when you see np.arange, this is a reference to the arange function in
NumPy. This is done as it’s considered bad practice in Python software development
to import everything (from numpy import *) from a large package like NumPy.
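For example (a trivial illustration), after the conventional import the NumPy namespace is reached through the np prefix:

In [5]: import numpy as np

In [6]: np.arange(4)
Out[6]: array([0, 1, 2, 3])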
Jargon
I’ll use some terms common both to programming and data science that you may not
be familiar with. Thus, here are some brief definitions:
Munge/Munging/Wrangling
Describes the overall process of manipulating unstructured and/or messy data into
a structured or clean form. The word has snuck its way into the jargon of many
modern day data hackers. Munge rhymes with “lunge”.
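As a toy sketch of the idea (an invented example, not one of the book's data sets), munging might mean coercing messy survey responses into clean, typed values:

# Messy string responses -> floats, with None marking missing values
raw = [' 3.5 ', 'N/A', '4.0', '']
clean = [float(x) if x.strip() not in ('', 'N/A') else None
         for x in raw]
# clean is now [3.5, None, 4.0, None]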
Acknowledgements
It would have been difficult for me to write this book without the support of a large
number of people.
On the O’Reilly staff, I’m very grateful for my editors Meghan Blanchette and Julie
Steele who guided me through the process. Mike Loukides also worked with me in the
proposal stages and helped make the book a reality.
I received a wealth of technical review from a large cast of characters. In particular, Martin Blais and Hugh White were incredibly helpful in improving the book’s examples, clarity, and organization from cover to cover. James Long, Drew Conway, Fernando Pérez, Brian Granger, Thomas Kluyver, Adam Klein, Josh Klein, Chang She, and Stéfan van der Walt each reviewed one or more chapters, providing pointed feedback from many different perspectives.
I got many great ideas for examples and data sets from friends and colleagues in the
data community, among them: Mike Dewar, Jeff Hammerbacher, James Johndrow,
Kristian Lum, Adam Klein, Hilary Mason, Chang She, and Ashley Williams.
I am of course indebted to the many leaders in the open source scientific Python community who’ve built the foundation for my development work and gave encouragement while I was writing this book: the IPython core team (Fernando Pérez, Brian Granger, Min Ragan-Kelley, Thomas Kluyver, and others), John Hunter, Skipper Seabold, Travis Oliphant, Peter Wang, Eric Jones, Robert Kern, Josef Perktold, Francesc Alted, Chris Fonnesbeck, and too many others to mention. Several other people provided a great deal of support, ideas, and encouragement along the way: Drew Conway, Sean Taylor, Giuseppe Paleologo, Jared Lander, David Epstein, John Krowas, Joshua Bloom, Den Pilsworth, John Myles-White, and many others I’ve forgotten.
I’d also like to thank a number of people from my formative years. First, my former AQR colleagues who’ve cheered me on in my pandas work over the years: Alex Reyfman, Michael Wong, Tim Sargen, Oktay Kurbanov, Matthew Tschantz, Roni Israelov, Michael Katz, Chris Uga, Prasad Ramanan, Ted Square, and Hoon Kim. Lastly, my academic advisors Haynes Miller (MIT) and Mike West (Duke).
On the personal side, Casey Dinkin provided invaluable day-to-day support during the
writing process, tolerating my highs and lows as I hacked together the final draft on
top of an already overcommitted schedule. Lastly, my parents, Bill and Kim, taught me
to always follow my dreams and to never settle for less.
CHAPTER 2
Introductory Examples
This book teaches you the Python tools to work productively with data. While readers
may have many different end goals for their work, the tasks required generally fall into
a number of different broad groups:
Interacting with the outside world
Reading and writing with a variety of file formats and databases.
Preparation
Cleaning, munging, combining, normalizing, reshaping, slicing and dicing, and
transforming data for analysis.
Transformation
Applying mathematical and statistical operations to groups of data sets to derive
new data sets. For example, aggregating a large table by group variables.
Modeling and computation
Connecting your data to statistical models, machine learning algorithms, or other computational tools.
Presentation
Creating interactive or static graphical visualizations or textual summaries.
In this chapter I will show you a few data sets and some things we can do with them.
These examples are just intended to pique your interest and thus will only be explained
at a high level. Don’t worry if you have no experience with any of these tools; they will
be discussed in great detail throughout the rest of the book. In the code examples you’ll
see input and output prompts like In [15]:; these are from the IPython shell.
In the case of the hourly snapshots, each line in each file contains a common form of web data known as JSON, which stands for JavaScript Object Notation. For example, if we read just the first line of a file, you may see something like:
In [15]: path = 'ch02/usagov_bitly_data2012-03-16-1331923249.txt'
In [16]: open(path).readline()
Out[16]: '{ "a": "Mozilla\\/5.0 (Windows NT 6.1; WOW64) AppleWebKit\\/535.11
(KHTML, like Gecko) Chrome\\/17.0.963.78 Safari\\/535.11", "c": "US", "nk": 1,
"tz": "America\\/New_York", "gr": "MA", "g": "A6qOVH", "h": "wfLQtf", "l":
"orofrog", "al": "en-US,en;q=0.8", "hh": "1.usa.gov", "r":
"http:\\/\\/www.facebook.com\\/l\\/7AQEFzjSi\\/1.usa.gov\\/wfLQtf", "u":
"http:\\/\\/www.ncbi.nlm.nih.gov\\/pubmed\\/22415991", "t": 1331923247, "hc":
1331822918, "cy": "Danvers", "ll": [ 42.576698, -70.954903 ] }\n'
Python has numerous built-in and 3rd party modules for converting a JSON string into
a Python dictionary object. Here I’ll use the json module and its loads function invoked
on each line in the sample file I downloaded:
import json
path = 'ch02/usagov_bitly_data2012-03-16-1331923249.txt'
records = [json.loads(line) for line in open(path)]
If you’ve never programmed in Python before, the last expression here is called a list comprehension, which is a concise way of applying an operation (like json.loads) to a collection of strings or other objects. Conveniently, iterating over an open file handle gives you a sequence of its lines.
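For readers new to the syntax, the comprehension above is equivalent to this more verbose loop (an explanatory sketch, not code from the book):

records = []
for line in open(path):
    records.append(json.loads(line))

The resulting object records is now a list of Python dicts: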
In [18]: records[0]
Out[18]:
{u'a': u'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like
Gecko) Chrome/17.0.963.78 Safari/535.11',
u'al': u'en-US,en;q=0.8',
u'c': u'US',
u'cy': u'Danvers',
u'g': u'A6qOVH',
u'gr': u'MA',
u'h': u'wfLQtf',
u'hc': 1331822918,
u'hh': u'1.usa.gov',
u'l': u'orofrog',
u'll': [42.576698, -70.954903],
u'nk': 1,
u'r': u'http://www.facebook.com/l/7AQEFzjSi/1.usa.gov/wfLQtf',
u't': 1331923247,
u'tz': u'America/New_York',
u'u': u'http://www.ncbi.nlm.nih.gov/pubmed/22415991'}
1. http://www.usa.gov/About/developer-resources/1usagov.shtml
The u here in front of the quotation stands for unicode, a standard form of string encoding. Note that IPython shows the time zone string object representation here rather than its print equivalent:
In [20]: print records[0]['tz']
America/New_York
Suppose, though, that we wanted a list of all of the time zones in the data set; the obvious list comprehension fails:

In [25]: time_zones = [rec['tz'] for rec in records]
KeyError: 'tz'
Oops! Turns out that not all of the records have a time zone field. This is easy to handle
as we can add the check if 'tz' in rec at the end of the list comprehension:
In [26]: time_zones = [rec['tz'] for rec in records if 'tz' in rec]
In [27]: time_zones[:10]
Out[27]:
[u'America/New_York',
u'America/Denver',
u'America/New_York',
u'America/Sao_Paulo',
u'America/New_York',
u'America/New_York',
u'Europe/Warsaw',
u'',
u'',
u'']
Just looking at the first 10 time zones we see that some of them are unknown (empty).
You can filter these out also but I’ll leave them in for now. Now, to produce counts by
time zone I’ll show two approaches: the harder way (using just the Python standard
library) and the easier way (using pandas). One way to do the counting is to use a dict
to store counts while we iterate through the time zones:
def get_counts(sequence):
    counts = {}
    for x in sequence:
        if x in counts:
            counts[x] += 1
        else:
            counts[x] = 1
    return counts
If you know a bit more about the Python standard library, you might prefer to write
the same thing more briefly:
from collections import defaultdict
def get_counts2(sequence):
counts = defaultdict(int) # values will initialize to 0
for x in sequence:
counts[x] += 1
return counts
I put this logic in a function just to make it more reusable. To use it on the time zones, just pass the time_zones list:

In [31]: counts = get_counts(time_zones)
In [32]: counts['America/New_York']
Out[32]: 1251
In [33]: len(time_zones)
Out[33]: 3440
If we wanted the top 10 time zones and their counts, we have to do a little bit of dictionary acrobatics:
def top_counts(count_dict, n=10):
value_key_pairs = [(count, tz) for tz, count in count_dict.items()]
value_key_pairs.sort()
return value_key_pairs[-n:]
We have then:
In [35]: top_counts(counts)
Out[35]:
[(33, u'America/Sao_Paulo'),
(35, u'Europe/Madrid'),
(36, u'Pacific/Honolulu'),
(37, u'Asia/Tokyo'),
(74, u'Europe/London'),
(191, u'America/Denver'),
(382, u'America/Los_Angeles'),
(400, u'America/Chicago'),
(521, u''),
(1251, u'America/New_York')]
The collections.Counter class in the Python standard library makes this task even easier:

In [49]: from collections import Counter

In [50]: counts = Counter(time_zones)

In [51]: counts.most_common(10)
Out[51]:
[(u'America/New_York', 1251),
(u'', 521),
(u'America/Chicago', 400),
(u'America/Los_Angeles', 382),
(u'America/Denver', 191),
(u'Europe/London', 74),
(u'Asia/Tokyo', 37),
(u'Pacific/Honolulu', 36),
(u'Europe/Madrid', 35),
(u'America/Sao_Paulo', 33)]
Counting Time Zones with pandas

Creating a DataFrame out of the original set of records is as simple as passing the list of records to pandas’s DataFrame function:

In [290]: from pandas import DataFrame, Series

In [291]: frame = DataFrame(records)

In [292]: frame
Out[292]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3560 entries, 0 to 3559
Data columns:
_heartbeat_ 120 non-null values
a 3440 non-null values
al 3094 non-null values
c 2919 non-null values
cy 2919 non-null values
g 3440 non-null values
gr 2919 non-null values
h 3440 non-null values
hc 3440 non-null values
hh 3440 non-null values
kw 93 non-null values
l 3440 non-null values
ll 2919 non-null values
nk 3440 non-null values
r 3440 non-null values
t 3440 non-null values
tz 3440 non-null values
In [293]: frame['tz'][:10]
Out[293]:
0 America/New_York
1 America/Denver
2 America/New_York
3 America/Sao_Paulo
4 America/New_York
5 America/New_York
6 Europe/Warsaw
7
8
9
Name: tz
The output shown for the frame is the summary view, shown for large DataFrame objects. The Series object returned by frame['tz'] has a method value_counts that gives us what we’re looking for:
In [294]: tz_counts = frame['tz'].value_counts()
In [295]: tz_counts[:10]
Out[295]:
America/New_York 1251
521
America/Chicago 400
America/Los_Angeles 382
America/Denver 191
Europe/London 74
Asia/Tokyo 37
Pacific/Honolulu 36
Europe/Madrid 35
America/Sao_Paulo 33
Then, we might want to make a plot of this data using matplotlib. You can do a bit of munging to fill in a substitute value for unknown and missing time zone data in the records. The fillna method can replace missing (NA) values, while unknown (empty string) values can be replaced by boolean array indexing:
In [296]: clean_tz = frame['tz'].fillna('Missing')

In [297]: clean_tz[clean_tz == ''] = 'Unknown'

In [298]: tz_counts = clean_tz.value_counts()
In [299]: tz_counts[:10]
Out[299]:
America/New_York 1251
Unknown 521
America/Chicago 400
America/Los_Angeles 382
America/Denver 191
Missing 120
Making a horizontal bar plot can be accomplished using the plot method on the
counts objects:
In [301]: tz_counts[:10].plot(kind='barh', rot=0)
See Figure 2-1 for the resulting figure. We’ll explore more tools for working with this
kind of data. For example, the a field contains information about the browser, device,
or application used to perform the URL shortening:
In [302]: frame['a'][1]
Out[302]: u'GoogleMaps/RochesterNY'
In [303]: frame['a'][50]
Out[303]: u'Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2'
In [304]: frame['a'][51]
Out[304]: u'Mozilla/5.0 (Linux; U; Android 2.2.2; en-us; LG-P925/V10e Build/FRG83G) AppleWebKit/533.1 (K
Parsing all of the interesting information in these “agent” strings may seem like a
daunting task. Luckily, once you have mastered Python’s built-in string functions and
regular expression capabilities, it is really not so bad. For example, we could split off
the first token in the string (corresponding roughly to the browser capability) and make
another summary of the user behavior:
In [305]: results = Series([x.split()[0] for x in frame.a.dropna()])
In [306]: results[:5]
Out[306]:
0 Mozilla/5.0
1 GoogleMaps/RochesterNY
2 Mozilla/4.0
3 Mozilla/5.0
4 Mozilla/5.0
Now, suppose you wanted to decompose the top time zones into Windows and non-
Windows users. As a simplification, let’s say that a user is on Windows if the string
'Windows' is in the agent string. Since some of the agents are missing, I’ll exclude these
from the data:
In [308]: cframe = frame[frame.a.notnull()]

In [309]: operating_system = np.where(cframe['a'].str.contains('Windows'),
   .....:                             'Windows', 'Not Windows')
In [310]: operating_system[:5]
Out[310]:
0 Windows
1 Not Windows
2 Windows
3 Not Windows
4 Windows
Name: a
Then, you can group the data by its time zone column and this new list of operating
systems:
In [311]: by_tz_os = cframe.groupby(['tz', operating_system])
The group counts, analogous to the value_counts function above, can be computed
using size. This result is then reshaped into a table with unstack:
In [312]: agg_counts = by_tz_os.size().unstack().fillna(0)
In [313]: agg_counts[:10]
Out[313]:
a Not Windows Windows
tz
245 276
Africa/Cairo 0 3
Africa/Casablanca 0 1
Africa/Ceuta 0 2
Africa/Johannesburg 0 1
Africa/Lusaka 0 1
America/Anchorage 4 1
America/Argentina/Buenos_Aires 1 0
Finally, let’s select the top overall time zones. To do so, I construct an indirect index
array from the row counts in agg_counts:
# Use to sort in ascending order
In [314]: indexer = agg_counts.sum(1).argsort()
In [315]: indexer[:10]
Out[315]:
tz
24
Africa/Cairo 20
Africa/Casablanca 21
Africa/Ceuta 92
Africa/Johannesburg 87
Africa/Lusaka 53
America/Anchorage 54
America/Argentina/Buenos_Aires 57
America/Argentina/Cordoba 26
America/Argentina/Mendoza 55
I then use take to select the rows in that order, then slice off the last 10 rows:
In [316]: count_subset = agg_counts.take(indexer)[-10:]
In [317]: count_subset
Out[317]:
a Not Windows Windows
tz
America/Sao_Paulo 13 20
Europe/Madrid 16 19
Pacific/Honolulu 0 36
Asia/Tokyo 2 35
Europe/London 43 31
America/Denver 132 59
America/Los_Angeles 130 252
America/Chicago 115 285
245 276
America/New_York 339 912
This data can then be plotted in a bar plot; I’ll make it a stacked bar plot by passing stacked=True (see Figure 2-2):
In [319]: count_subset.plot(kind='barh', stacked=True)
The plot doesn’t make it easy to see the relative percentage of Windows users in the
smaller groups, but the rows can easily be normalized to sum to 1 then plotted again
(see Figure 2-3):
In [321]: normed_subset = count_subset.div(count_subset.sum(1), axis=0)

In [322]: normed_subset.plot(kind='barh', stacked=True)
Figure 2-3. Percentage Windows and non-Windows users in top-occurring time zones
All of the methods employed here will be examined in great detail throughout the rest
of the book.
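MovieLens 1M Data Set

Loading the three MovieLens 1M tables with pandas.read_table looks like this (a sketch: the paths assume the archive was unpacked into an ml-1m/ directory, and the :: separator and column names follow the data set's README):

import pandas as pd

# Each .dat file is '::'-separated with no header row
unames = ['user_id', 'gender', 'age', 'occupation', 'zip']
users = pd.read_table('ml-1m/users.dat', sep='::', header=None, names=unames)

rnames = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table('ml-1m/ratings.dat', sep='::', header=None, names=rnames)

mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('ml-1m/movies.dat', sep='::', header=None, names=mnames)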
You can verify that everything succeeded by looking at the first few rows of each DataFrame with Python's slice syntax:
In [334]: users[:5]
Out[334]:
user_id gender age occupation zip
0 1 F 1 10 48067
1 2 M 56 16 70072
2 3 M 25 15 55117
3 4 M 45 7 02460
4 5 M 25 20 55455
In [335]: ratings[:5]
Out[335]:
user_id movie_id rating timestamp
0 1 1193 5 978300760
1 1 661 3 978302109
2 1 914 3 978301968
3 1 3408 4 978300275
4 1 2355 5 978824291
In [336]: movies[:5]
Out[336]:
movie_id title genres
0 1 Toy Story (1995) Animation|Children's|Comedy
1 2 Jumanji (1995) Adventure|Children's|Fantasy
2 3 Grumpier Old Men (1995) Comedy|Romance
3 4 Waiting to Exhale (1995) Comedy|Drama
4 5 Father of the Bride Part II (1995) Comedy
In [337]: ratings
Out[337]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000209 entries, 0 to 1000208
Data columns:
user_id 1000209 non-null values
movie_id 1000209 non-null values
rating 1000209 non-null values
timestamp 1000209 non-null values
dtypes: int64(4)
Note that ages and occupations are coded as integers indicating groups described in
the data set’s README file. Analyzing the data spread across three tables is not a simple
task; for example, suppose you wanted to compute mean ratings for a particular movie
by sex and age. As you will see, this is much easier to do with all of the data merged
together into a single table. Using pandas’s merge function, we first merge ratings with users, then merge that result with the movies data. pandas infers which columns to use as the merge (or join) keys based on overlapping names:
In [338]: data = pd.merge(pd.merge(ratings, users), movies)
In [339]: data
Out[339]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000209 entries, 0 to 1000208
Data columns:
user_id 1000209 non-null values
movie_id 1000209 non-null values
rating 1000209 non-null values
timestamp 1000209 non-null values
gender 1000209 non-null values
age 1000209 non-null values
occupation 1000209 non-null values
zip 1000209 non-null values
title 1000209 non-null values
genres 1000209 non-null values
dtypes: int64(6), object(4)
In [340]: data.ix[0]
Out[340]:
user_id 1
movie_id 1
rating 5
timestamp 978824268
gender F
age 1
occupation 10
zip 48067
title Toy Story (1995)
genres Animation|Children's|Comedy
Name: 0
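The mean_ratings table below is produced with the DataFrame pivot_table method; a sketch of the call, using the rows/cols keyword names from the pandas 0.x API that this edition targets:

In [341]: mean_ratings = data.pivot_table('rating', rows='title',
   .....:                                 cols='gender', aggfunc='mean')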
In [342]: mean_ratings[:5]
Out[342]:
gender F M
title
$1,000,000 Duck (1971) 3.375000 2.761905
'Night Mother (1986) 3.388889 3.352941
'Til There Was You (1997) 2.675676 2.733333
'burbs, The (1989) 2.793478 2.962085
...And Justice for All (1979) 3.828571 3.689024
This produced another DataFrame containing mean ratings with movie titles as row labels and gender as column labels. First, I’m going to filter down to movies that received at least 250 ratings (a completely arbitrary number); to do this, I group the data by title and use size() to get a Series of group sizes for each title:
In [343]: ratings_by_title = data.groupby('title').size()
In [344]: ratings_by_title[:10]
Out[344]:
title
$1,000,000 Duck (1971) 37
'Night Mother (1986) 70
'Til There Was You (1997) 52
'burbs, The (1989) 303
...And Justice for All (1979) 199
1-900 (1994) 2
10 Things I Hate About You (1999) 700
101 Dalmatians (1961) 565
101 Dalmatians (1996) 364
12 Angry Men (1957) 616
In [345]: active_titles = ratings_by_title.index[ratings_by_title >= 250]

In [346]: active_titles
Out[346]:
Index(['burbs, The (1989), 10 Things I Hate About You (1999),
101 Dalmatians (1961), ..., Young Sherlock Holmes (1985),
Zero Effect (1998), eXistenZ (1999)], dtype=object)
The index of titles receiving at least 250 ratings can then be used to select rows from
mean_ratings above:
In [347]: mean_ratings = mean_ratings.ix[active_titles]
In [348]: mean_ratings
Out[348]:
<class 'pandas.core.frame.DataFrame'>
Index: 1216 entries, 'burbs, The (1989) to eXistenZ (1999)