Mastering PostgreSQL 15

Advanced techniques to build and manage scalable, reliable, and fault-tolerant database applications

Hans-Jürgen Schönig

BIRMINGHAM—MUMBAI
Mastering PostgreSQL 15
Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means, without the prior written permission of the publisher, except in the case of brief quotations
embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented.
However, the information contained in this book is sold without warranty, either express or implied. Neither the
author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged
to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products
mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the
accuracy of this information.

Group Product Manager: Reshma Raman


Publishing Product Manager: Devika Battike
Senior Editor: Nazia Shaikh
Content Development Editor: Priyanka Soam
Technical Editor: Sweety Pagaria
Copy Editor: Safis Editing
Project Coordinator: Farheen Fathima
Proofreader: Safis Editing
Indexer: Sejal Dsilva
Production Designer: Vijay Kamble
Marketing Coordinator: Nivedita Singh

First published: Jan 2018


Second edition: Oct 2018
Third edition: Nov 2019
Fourth edition: Nov 2020
Fifth edition: Jan 2023

Production reference: 1270123

Published by Packt Publishing Ltd.


Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-80324-834-9
www.packtpub.com
Contributors

About the author


Hans-Jürgen Schönig has 20 years’ experience with PostgreSQL. He is the CEO of a PostgreSQL
consulting and support company called CYBERTEC PostgreSQL International GmbH. It has successfully
served countless customers around the globe. Before founding CYBERTEC PostgreSQL International
GmbH in 2000, he worked as a database developer at a private research company that focused on the
Austrian labor market, where he primarily worked on data mining and forecast models. He has also
written several books about PostgreSQL.
About the reviewers
Burhan Akbulut is the co-founder of PostgresTech. It is a company that provides PostgreSQL consultancy
and support to start-ups and enterprise companies. Burhan Akbulut started his career as a PostgreSQL
consultant at CookSoft, a well-known PostgreSQL consulting firm founded by Şahap Aşçı, where he
provided consultancy and support to many international customers. Before founding PostgresTech,
he worked at Vodafone as an open source database senior specialist responsible for all PostgreSQL
databases. He has especially focused on database management with IaC, management of cloud databases,
and migration from other databases to PostgreSQL during his career.

I would like to thank my colleague Şeyma Mintaş who helped me review the book.

Marcelo Diaz is a software engineer with more than 15 years of experience, with a special focus on
PostgreSQL. He is passionate about open source software and has promoted its application in critical
and high-demand environments, working as a software developer and consultant for both private and
public companies. He currently works very happily at Cybertec and as a technical reviewer for Packt
Publishing. He enjoys spending his leisure time with his daughter, Malvina, and his wife, Romina.
He also likes playing football.

Dinesh Kumar Chemuduru works as a principal architect (OSS) at Tessell Inc. He has been working
with PostgreSQL since 2011, and he also worked as a consultant at AWS. He is also an author and
contributor to a few popular open source solutions. He co-authored PostgreSQL High Performance
Cookbook 9.6, which was released in 2016. He loves to code in Dart, Go, Angular, and C++ and loves
to deploy them in Kubernetes.

Thanks and love to my wife, Manoja Reddy, and my kids, Yashvi and Isha.
Table of Contents

Preface  xiii

1
PostgreSQL 15 Overview  1
Making use of DBA-related features  1
    Removing support for old pg_dump  1
    Deprecating Python 2  2
    Fixing the public schema  2
    Adding pre-defined roles  2
    Adding permissions to variables  3
    Improving pg_stat_statements  3
    New wait events  4
    Adding logging functionality  4
Understanding developer-related features  7
    Security invoker views  7
    ICU locales  7
    Better numeric  8
    Handling ON DELETE  9
    Working around NULL and UNIQUE  9
    Adding the MERGE command to PostgreSQL  10
Using performance-related features  11
    Adding multiple compression algorithms  11
    Handling parallel queries more efficiently  12
    Improved statistics handling  12
    Prefetching during WAL recovery  12
Additional replication features  12
    Two-phase commit for logical decoding  12
    Adding row and column filtering  13
    Improving ALTER SUBSCRIPTION  13
    Supporting compressed base backups  14
    Introducing archiving libraries  15
Summary  15

2
Understanding Transactions and Locking  17
Working with PostgreSQL transactions  17
    Handling errors inside a transaction  21
    Making use of SAVEPOINT  22
    Transactional DDLs  23
Understanding basic locking  24
    Avoiding typical mistakes and explicit locking  26
Making use of FOR SHARE and FOR UPDATE  30
Understanding transaction isolation levels  33
    Considering serializable snapshot isolation transactions  35
Observing deadlocks and similar issues  36
Utilizing advisory locks  38
Optimizing storage and managing cleanup  39
    Configuring VACUUM and autovacuum  41
    Watching VACUUM at work  43
    Limiting transactions by making use of snapshot too old  47
    Making use of more VACUUM features  47
Summary  48
Questions  48

3
Making Use of Indexes  49
Understanding simple queries and the cost model  50
    Making use of EXPLAIN  51
    Digging into the PostgreSQL cost model  53
    Deploying simple indexes  55
    Making use of sorted output  56
    Using bitmap scans effectively  59
    Using indexes in an intelligent way  59
    Understanding index de-duplication  62
Improving speed using clustered tables  62
    Clustering tables  66
    Making use of index-only scans  67
Understanding additional B-tree features  68
    Combined indexes  68
    Adding functional indexes  69
    Reducing space consumption  70
    Adding data while indexing  72
Introducing operator classes  72
    Creating an operator class for a B-tree  74
Understanding PostgreSQL index types  80
    Hash indexes  81
    GiST indexes  81
    GIN indexes  84
    SP-GiST indexes  85
    BRINs  86
    Adding additional indexes  88
Achieving better answers with fuzzy searching  90
    Taking advantage of pg_trgm  90
    Speeding up LIKE queries  92
    Handling regular expressions  93
Understanding full-text searches  94
    Comparing strings  95
    Defining GIN indexes  95
    Debugging your search  96
    Gathering word statistics  98
    Taking advantage of exclusion operators  98
Summary  99
Questions  100

4
Handling Advanced SQL  101
Supporting range types  102
    Querying ranges efficiently  103
    Handling multirange types  105
    When to use range types  107
Introducing grouping sets  107
    Loading some sample data  108
    Applying grouping sets  109
    Investigating performance  111
    Combining grouping sets with the FILTER clause  113
Making use of ordered sets  114
Understanding hypothetical aggregates  116
Utilizing windowing functions and analytics  117
    Partitioning data  118
    Ordering data inside a window  119
    Using sliding windows  121
    Abstracting window clauses  128
    Using on-board windowing functions  129
Writing your own aggregates  137
    Creating simple aggregates  137
    Adding support for parallel queries  141
    Improving efficiency  142
    Writing hypothetical aggregates  144
Handling recursions  146
    UNION versus UNION ALL  147
    Inspecting a practical example  148
Working with JSON and JSONB  150
    Displaying and creating JSON documents  150
    Turning JSON documents into rows  152
    Accessing a JSON document  153
Summary  157

5
Log Files and System Statistics  159
Gathering runtime statistics  159
    Working with PostgreSQL system views  160
Creating log files  184
    Configuring the postgresql.conf file  184
Summary  191
Questions  191

6
Optimizing Queries for Good Performance  193
Learning what the optimizer does  193
    A practical example – how the query optimizer handles a sample query  194
Understanding execution plans  209
    Approaching plans systematically  209
    Spotting problems  211
Understanding and fixing joins  217
    Getting joins right  217
    Processing outer joins  219
    Understanding the join_collapse_limit variable  220
Enabling and disabling optimizer settings  221
    Understanding genetic query optimization  225
Partitioning data  226
    Creating inherited tables  226
    Applying table constraints  229
    Modifying inherited structures  231
    Moving tables in and out of partitioned structures  232
    Cleaning up data  232
    Understanding PostgreSQL 15.x partitioning  233
Handling partitioning strategies  233
    Using range partitioning  234
    Utilizing list partitioning  236
    Handling hash partitions  238
Adjusting parameters for good query performance  239
    Speeding up sorting  243
    Speeding up administrative tasks  246
Making use of parallel queries  247
    What is PostgreSQL able to do in parallel?  252
    Parallelism in practice  252
Introducing JIT compilation  253
    Configuring JIT  254
    Running queries  255
Summary  257

7
Writing Stored Procedures  259
Understanding stored procedure languages  259
    Understanding the fundamentals of stored procedures versus functions  261
    The anatomy of a function  261
Exploring various stored procedure languages  265
    Introducing PL/pgSQL  267
    Writing stored procedures in PL/pgSQL  290
    Introducing PL/Perl  292
    Introducing PL/Python  300
Improving functions  304
    Reducing the number of function calls  304
Using functions for various purposes  307
Summary  309
Questions  309

8
Managing PostgreSQL Security  311
Managing network security  311
    Understanding bind addresses and connections  312
    Managing the pg_hba.conf file  316
    Handling instance-level security  321
    Defining database-level security  326
    Adjusting schema-level permissions  328
    Working with tables  331
    Handling column-level security  332
Configuring default privileges  334
Digging into row-level security  335
Inspecting permissions  340
Reassigning objects and dropping users  344
Summary  345
Questions  346

9
Handling Backup and Recovery  347
Performing simple dumps  347
    Running pg_dump  348
    Passing passwords and connection information  349
    Extracting subsets of data  352
    Handling various formats  352
Replaying backups  355
Handling global data  356
Summary  357
Questions  357

10
Making Sense of Backups and Replication  359
Understanding the transaction log  360
    Looking at the transaction log  360
    Understanding checkpoints  361
    Optimizing the transaction log  362
Transaction log archiving and recovery  363
    Configuring for archiving  364
    Using archiving libraries  365
    Configuring the pg_hba.conf file  365
    Creating base backups  366
    Replaying the transaction log  371
    Cleaning up the transaction log archive  375
Setting up asynchronous replication  376
    Performing a basic setup  377
    Halting and resuming replication  379
    Checking replication to ensure availability  380
    Performing failovers and understanding timelines  383
    Managing conflicts  385
    Making replication more reliable  387
Upgrading to synchronous replication  388
    Adjusting durability  389
Making use of replication slots  391
    Handling physical replication slots  392
    Handling logical replication slots  394
Making use of the CREATE PUBLICATION and CREATE SUBSCRIPTION commands  397
Setting up an HA cluster using Patroni  400
    Understanding how Patroni operates  400
    Installing Patroni  401
    Creating Patroni templates  406
Summary  418
Questions  419

11
Deciding on Useful Extensions  421
Understanding how extensions work  421
    Checking for available extensions  423
Making use of contrib modules  426
    Using the adminpack module  426
    Applying bloom filters  428
    Deploying btree_gist and btree_gin  431
    dblink – considering phasing out  432
    Fetching files with file_fdw  433
    Inspecting storage using pageinspect  435
    Investigating caching with pg_buffercache  437
    Encrypting data with pgcrypto  439
    Prewarming caches with pg_prewarm  439
    Inspecting performance with pg_stat_statements  441
    Inspecting storage with pgstattuple  441
    Fuzzy searching with pg_trgm  443
    Connecting to remote servers using postgres_fdw  443
Other useful extensions  449
Summary  449

12
Troubleshooting PostgreSQL  451
Approaching an unknown database  451
Inspecting pg_stat_activity  452
    Querying pg_stat_activity  452
Checking for slow queries  455
    Inspecting individual queries  456
    Digging deeper with perf  457
Inspecting the log  458
Checking for missing indexes  459
Checking for memory and I/O  460
Understanding noteworthy error scenarios  462
    Facing clog corruption  462
    Understanding checkpoint messages  463
    Managing corrupted data pages  464
    Careless connection management  465
    Fighting table bloat  465
Summary  466
Questions  466

13
Migrating to PostgreSQL  467
Migrating SQL statements to PostgreSQL  467
    Using LATERAL joins  468
    Using grouping sets  468
    Using the WITH clause – common table expressions  469
    Using the WITH RECURSIVE clause  470
    Using the FILTER clause  471
    Using windowing functions  472
    Using ordered sets – the WITHIN GROUP clause  472
    Using the TABLESAMPLE clause  473
    Using limit/offset  474
    Using the OFFSET clause  475
    Using temporal tables  475
    Matching patterns in time series  476
Moving from Oracle to PostgreSQL  476
    Using the oracle_fdw extension to move data  476
    Using ora_migrator for fast migration  479
    CYBERTEC Migrator – migration for the “big boys”  480
    Using Ora2Pg to migrate from Oracle  481
    Common pitfalls  483
Summary  485

Index  487

Other Books You May Enjoy  500


Preface
Mastering the art of handling data is an increasingly important skill. In a digital
world, “data” is more or less the “new oil” – an important asset that drives the world. Every sector of
IT is data-driven. It does not matter whether you are at the forefront of machine learning or whether
you are working on bookkeeping software – at the end of the day, IT is all about data.
PostgreSQL has become a hot technology in the area of open source, and it is an excellent tool for
storing and processing data in the most efficient way possible. This book will teach you how to use
PostgreSQL in the most professional way and explain how to operate, optimize, and monitor this core
technology, which has become so popular over the years.
By the end of the book, you will be able to use PostgreSQL to its utmost capacity by applying advanced
technology and cutting-edge features.

Who this book is for


This book is ideal for PostgreSQL developers and administrators alike who want to familiarize themselves
with the technology. It will provide you with deep insights and explain advanced technologies such
as clustering, modern analytics, and a lot more.
Prior exposure to PostgreSQL and basic SQL knowledge is required to follow along.

What this book covers


Chapter 1, PostgreSQL 15 Overview, guides you through the most important features that have made
it into the new release of PostgreSQL and explains how those features can be used.
Chapter 2, Understanding Transactions and Locking, explains the fundamental concepts of transactions
and locking. Both topics are key requirements to understand storage management in PostgreSQL.
Chapter 3, Making Use of Indexes, introduces the concept of indexes, which are the key ingredient
when dealing with performance in general. You will learn about simple indexes as well as more
sophisticated concepts.
Chapter 4, Handling Advanced SQL, unleashes the full power of SQL and outlines the most advanced
functionality a query language has to offer. You will learn about windowing functions, ordered sets,
hypothetical aggregates, and a lot more. All those techniques will open a totally new world of functionality.
Chapter 5, Log Files and System Statistics, explains how you can use runtime statistics collected by
PostgreSQL to make operations easier and to debug the database. You will be guided through the
internal information-gathering infrastructure.

Chapter 6, Optimizing Queries for Good Performance, is all about good query performance and
outlines optimization techniques that are essential to bringing your database up to speed to handle
even bigger workloads.
Chapter 7, Writing Stored Procedures, introduces you to the concept of server-side code such as functions,
stored procedures, and a lot more. You will learn how to write triggers and dive into server-side logic.
Chapter 8, Managing PostgreSQL Security, helps you to make your database more secure, and explains
what can be done to ensure safety and data protection at all levels.
Chapter 9, Handling Backup and Recovery, helps you to make copies of your database to protect yourself
against crashes and database failure.
Chapter 10, Making Sense of Backups and Replication, follows up on backups and recovery and explains
additional techniques, such as streaming replication, clustering, and a lot more. It covers the most
advanced topics.
Chapter 11, Deciding on Useful Extensions, explores extensions and additional useful features that can
be added to PostgreSQL.
Chapter 12, Troubleshooting PostgreSQL, completes the circle of topics and explains what can be done
if things don’t work as expected. You will learn how to find the most common issues and understand
how problems can be fixed.
Chapter 13, Migrating to PostgreSQL, teaches you how to move your databases to PostgreSQL efficiently
and quickly. It covers the most common database systems people will migrate from.

To get the most out of this book


This book has been written for a broad audience. However, some basic knowledge of SQL is necessary to
follow along and make full use of the examples presented. In general, it is also a good idea to familiarize
yourself with basic Unix commands as most of the book has been produced on Linux and macOS.

Software/hardware covered in the book    Operating system requirements
pgAdmin 4                                Windows, macOS, or Linux
PostgreSQL 15
SQL Shell (psql)

Note:
Parts of Chapters 8, 9, 10, 11, 12, and 13 are mostly dedicated to Unix/Linux and macOS users;
the rest runs fine on Windows.

Download the example code files


You can download the example code files for this book from GitHub at https://github.com/
PacktPublishing/Mastering-PostgreSQL-15-. If there’s an update to the code, it will
be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://
github.com/PacktPublishing/. Check them out!

Conventions used
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file
extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “You
cannot run it inside a SELECT statement. Instead, you have to invoke CALL. The following listing
shows the syntax of the CALL command.”
A block of code is set as follows:

test=# \h CALL
Command:    CALL
Description: invoke a procedure
Syntax:
CALL name ( [ argument ] [, ...] )
URL: https://www.postgresql.org/docs/15/sql-call.html

When we wish to draw your attention to a particular part of a code block, the relevant lines or items
are set in bold:

openssl req -x509 -in server.req -text
  -key server.key -out server.crt

Any command-line input or output is written as follows:

# - Connection Settings -

#listen_addresses = 'localhost'   # what IP address(es) to listen on;
                                  # comma-separated list of addresses;
                                  # defaults to 'localhost'; use '*' for all
                                  # (change requires restart)

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance,
words in menus or dialog boxes appear in bold. Here is an example: “Select System info from the
Administration panel.”

Tips or important notes


Appear like this.

Get in touch
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at customercare@
packtpub.com and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen.
If you have found a mistake in this book, we would be grateful if you would report this to us. Please
visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would
be grateful if you would provide us with the location address or website name. Please contact us at
copyright@packtpub.com with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you
are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts


Once you’ve read Mastering PostgreSQL 15, we’d love to hear your thoughts! Please click here to go
straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we’re delivering
excellent quality content.

Download a free PDF copy of this book


Thanks for purchasing this book!
Do you like to read on the go but are unable to carry your print books everywhere?
Is your eBook purchase not compatible with the device of your choice?
Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.
Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical
books directly into your application.
The perks don’t stop there; you can get exclusive access to discounts, newsletters, and great free content
in your inbox daily.
Follow these simple steps to get the benefits:

1. Scan the QR code or visit the link below

https://packt.link/free-ebook/9781803248349

2. Submit your proof of purchase


3. That’s it! We’ll send your free PDF and other benefits to your email directly
1
PostgreSQL 15 Overview
A full year has passed, and the PostgreSQL community has released version 15 of the database, which
includes various powerful new features that will benefit a large user base. The PostgreSQL project
has come a long way, and improvements are constantly being added at a rapid pace, making the
database more useful, more efficient, and more widely accepted. The times when PostgreSQL was an
unknown, obscure project are long gone. PostgreSQL has reached the data center and has been widely
adopted by many large companies as well as by governments around the world.

In this chapter, you will learn about the new features that made it into PostgreSQL 15. These features
add new capabilities, better performance, improved security, and enhanced usability.
The following topics are covered in this chapter:

• DBA-related features
• Developer-related features
• Performance-related features
• Additional replication features

Of course, there is always more stuff. However, let us focus on the most important changes affecting
most users.

Making use of DBA-related features


In PostgreSQL 15, a couple of DBA-related features were added. Some features were also finally
deprecated and removed from the server. In this section, we will go through the most
important changes.

Removing support for old pg_dump


One of the first things that is worth noting is that support for really old databases has been removed
from pg_dump. PostgreSQL databases that are older than PostgreSQL 9.2 are not supported anymore.

Considering that PostgreSQL 9.2.0 was released to the PostgreSQL community FTP server on September
10, 2012, most people should have gotten rid of their PostgreSQL 9.1 (and older) systems by now.
If you have not been able to upgrade since then, we highly recommend doing that. It is still possible
to upgrade from such an old version to PostgreSQL 15. However, you will need an intermediate step
and will have to use pg_dump twice.

Deprecating Python 2
PostgreSQL allows developers to write stored procedures in various languages. This includes Python
but is not limited to it. The trouble is that Python 2.x has been deprecated for a long time already.
Starting with version 15, the PostgreSQL community has also dropped support for PL/Python2U and
only supports version 3 from now on.
This means that all code that is still in Python 2 should be moved to Python 3 in order to function properly.
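
If you still rely on server-side Python code, it now has to be created with (or migrated to) the plpython3u language. The following is a minimal sketch of what such a function looks like; the function name and body are made up for illustration and assume that the PL/Python 3 extension is available on your server:

test=# CREATE EXTENSION IF NOT EXISTS plpython3u;
CREATE EXTENSION
test=# CREATE FUNCTION py_add(a int, b int) RETURNS int AS
$$
  # plain Python 3 code running inside the database
  return a + b
$$ LANGUAGE plpython3u;
CREATE FUNCTION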

Fixing the public schema


Up to PostgreSQL 14, the public schema that exists in every database was available to every user.
This caused various security concerns among the user base. The fix was basically easy; all it took was
the following:

REVOKE ALL ON SCHEMA public FROM public;

However, this was rarely done, which caused security leaks that people were generally not aware of. With
the introduction of PostgreSQL 15, the situation has changed. The public schema is, from now on, not
available to the general public; you have to be granted permission to use it. The new behavior will make
applications a lot safer and ensure that permissions are not granted accidentally.
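
If an application still expects ordinary users to create objects in public, the database owner (or a superuser) now has to hand that right back explicitly. Here is a minimal sketch using a hypothetical role called app_user:

-- PostgreSQL 15 no longer grants CREATE on the public schema to everybody
GRANT CREATE ON SCHEMA public TO app_user;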

Adding pre-defined roles


In recent versions of PostgreSQL, more and more pre-defined roles have been added. The core idea
is to ensure that people do not have to use their superuser accounts so often. For security reasons, it
is not recommended to use superusers unless explicitly needed. However, with the introduction of
pre-defined roles, it is a lot easier to run things without superusers:

test=# SELECT rolname
       FROM pg_authid
       WHERE oid < 16384
       AND rolname <> CURRENT_USER;
          rolname
---------------------------
 pg_database_owner
 pg_read_all_data
 pg_write_all_data
 pg_monitor
 pg_read_all_settings
 pg_read_all_stats
 pg_stat_scan_tables
 pg_read_server_files
 pg_write_server_files
 pg_execute_server_program
 pg_signal_backend
 pg_checkpointer
(12 rows)

With the introduction of PostgreSQL 15, a new role has been added: pg_checkpointer, which allows
users to run checkpoints manually if needed.
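
As a brief example of how this can be put to use (the role name joe is made up for illustration), a superuser can delegate the privilege, and the member of the role can then trigger checkpoints without any superuser rights:

-- as a superuser: delegate the privilege
GRANT pg_checkpointer TO joe;

-- later, connected as joe: no superuser rights required
CHECKPOINT;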

Adding permissions to variables


However, there is more. It is now possible to define permissions on variables. This was not possible
before version 15. Here is an example:

GRANT SET ON PARAMETER track_functions TO hans;

This new feature allows administrators to disable bad behavior and prohibit bad parameter settings
that can compromise the availability of the entire server.
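
Permissions can be granted for SET (session-level changes) as well as for ALTER SYSTEM (cluster-wide changes written to postgresql.auto.conf), and they can be revoked again just as easily. A short sketch, reusing the role from the example above:

GRANT ALTER SYSTEM ON PARAMETER track_functions TO hans;
REVOKE SET ON PARAMETER track_functions FROM hans;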

Improving pg_stat_statements
Every new version also provides us with some improvements related to pg_stat_statements,
which, in my judgment, is the key to good performance. Consider the following code snippet:

test=# \d pg_stat_statements
                      View "public.pg_stat_statements"
         Column         |       Type       | ...
------------------------+------------------+ ...
 userid                 | oid              | ...
 ...
 jit_functions          | bigint           | ...
 jit_generation_time    | double precision | ...
 jit_inlining_count     | bigint           | ...
 jit_inlining_time      | double precision | ...
 jit_optimization_count | bigint           | ...
 jit_optimization_time  | double precision | ...
 jit_emission_count     | bigint           | ...
 jit_emission_time      | double precision | ...

The module is now able to display information about the JIT compilation process and helps to detect
JIT-related performance problems. Those problems are not too frequent – however, it can happen that
once in a while, a JIT compilation process takes too long. This is especially true if you are running a
query containing hundreds of columns.
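
One way to spot such cases is to relate the new JIT counters to the overall execution time. The following query is merely a sketch (the ordering and LIMIT are arbitrary choices, not taken from the book):

SELECT queryid, calls, total_exec_time,
       jit_generation_time + jit_inlining_time +
       jit_optimization_time + jit_emission_time AS jit_time
FROM pg_stat_statements
ORDER BY jit_time DESC
LIMIT 10;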

New wait events


What is also new in PostgreSQL 15 is a couple of wait events that give you some insight into where time
is lost. The following events have been added to the system:

• ArchiveCommand
• ArchiveCleanupCommand
• RestoreCommand
• RecoveryEndCommand

Those events complement the existing wait event infrastructure and give some insights into
replication-related issues.
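
Like all wait events, they show up in pg_stat_activity, so you can quickly check whether, for example, an archiver or recovery process is currently stuck waiting for an external command. A minimal sketch:

SELECT pid, backend_type, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event IN ('ArchiveCommand', 'ArchiveCleanupCommand',
                     'RestoreCommand', 'RecoveryEndCommand');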

Adding logging functionality


PostgreSQL 15 comes with a spectacular new feature: JSON logging. While JSON output is larger
than the standard log format, it comes with a couple of advantages, such as easy parsing.
Let us configure JSON logging in postgresql.conf:

log_destination = 'jsonlog'     # Valid values are combinations of
                                # stderr, csvlog, jsonlog, syslog, and
                                # eventlog, depending on platform.
                                # csvlog and jsonlog require
                                # logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on          # Enable capturing of stderr, jsonlog,
                                # and csvlog into log files. Required
                                # to be on for csvlogs and jsonlogs.
                                # (change requires restart)

The output might look as follows:

[hs@hansmacbook log]$ head postgresql-Fri.json
{"timestamp":"2022-11-04 08:50:59.000 CET","pid":32183,"session_id":"6364c462.7db7","line_num":1,"session_start":"2022-11-04 08:50:58 CET","txid":0,"error_severity":"LOG","message":"ending log output to stderr","hint":"Future log output will go to log destination \"jsonlog\".","backend_type":"postmaster","query_id":0}
{"timestamp":"2022-11-04 08:50:59.000 CET","pid":32183,"session_id":"6364c462.7db7","line_num":2,"session_start":"2022-11-04 08:50:58 CET","txid":0,"error_severity":"LOG","message":"starting PostgreSQL 15.0 on x86_64-apple-darwin21.6.0, compiled by Apple clang version 13.1.6 (clang-1316.0.21.2.5), 64-bit","backend_type":"postmaster","query_id":0}

Reading a tightly packed file containing millions of JSON documents is not really user-friendly, so I
recommend using a tool such as jq to make the stream more readable and easier to process:

[hs@hansmacbook log]$ tail -f postgresql-Fri.json | jq


{
  "timestamp": "2022-11-04 08:50:59.000 CET",
  "pid": 32183,
  "session_id": "6364c462.7db7",
  "line_num": 1,
  "session_start": "2022-11-04 08:50:58 CET",
  "txid": 0,
  "error_severity": "LOG",
  "message": "ending log output to stderr",
  "hint": "Future log output will go to log destination
\"jsonlog\".",
  "backend_type": "postmaster",
  "query_id": 0
}
{
  "timestamp": "2022-11-04 08:50:59.000 CET",
  "pid": 32183,
  "session_id": "6364c462.7db7",
  "line_num": 2,
  "session_start": "2022-11-04 08:50:58 CET",
  "txid": 0,
  "error_severity": "LOG",
  "message": "starting PostgreSQL 15.0 on x86_64-apple-
darwin21.6.0, compiled by Apple clang version
13.1.6 (clang-1316.0.21.2.5), 64-bit",
  "backend_type": "postmaster",
  "query_id": 0
}
{
  "timestamp": "2022-11-04 08:50:59.006 CET",
  "pid": 32183,
  "session_id": "6364c462.7db7",
  "line_num": 3,
  "session_start": "2022-11-04 08:50:58 CET",
  "txid": 0,
  "error_severity": "LOG",
  "message": "listening on IPv6 address \"::1\", port 5432",
  "backend_type": "postmaster",
  "query_id": 0
}
...

In general, it is recommended to not use JSON logs excessively as they occupy a fair amount of space.