Performance Tips
Query performance can be affected by many things. Some of these can
be manipulated by the user, while others are fundamental to the underlying
design of the system. This chapter provides some hints about understanding
and tuning PostgreSQL performance.
Using EXPLAIN
PostgreSQL devises a query
plan for each query it is given. Choosing the right
plan to match the query structure and the properties of the data
is absolutely critical for good performance. You can use the
EXPLAIN command to see what query plan the system
creates for any query.
Plan-reading is an art that deserves an extensive tutorial, which
this is not; but here is some basic information.
The numbers that are currently quoted by EXPLAIN are:
- Estimated start-up cost (time expended before the output scan can start,
  e.g., time to do the sorting in a sort node)
- Estimated total cost (if all rows are retrieved, which they may not
  be --- a query with a LIMIT clause will stop short of paying the total
  cost, for example)
- Estimated number of rows output by this plan node (again, only if
  executed to completion)
- Estimated average width (in bytes) of rows output by this plan node
The costs are measured in units of disk page fetches. (CPU effort
estimates are converted into disk-page units using some
fairly arbitrary fudge factors. If you want to experiment with these
factors, see the list of run-time configuration parameters in the
Administrator's Guide.)
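For instance, you can inspect these factors with SHOW and override them
for the current session with SET; the parameter names below are the
standard planner cost settings:

SHOW cpu_tuple_cost;
SHOW random_page_cost;
-- Experiment: tell the planner that a random page fetch costs only
-- twice as much as a sequential one (affects this session only)
SET random_page_cost = 2;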
It's important to note that the cost of an upper-level node includes
the cost of all its child nodes. It's also important to realize that
the cost only reflects things that the planner/optimizer cares about.
In particular, the cost does not consider the time spent transmitting
result rows to the frontend --- which could be a pretty dominant
factor in the true elapsed time, but the planner ignores it because
it cannot change it by altering the plan. (Every correct plan will
output the same row set, we trust.)
Rows output is a little tricky because it is not the number of rows
processed/scanned by the query --- it is usually less, reflecting the
estimated selectivity of any WHERE-clause constraints that are being
applied at this node. Ideally the top-level rows estimate will
approximate the number of rows actually returned, updated, or deleted
by the query.
Here are some examples (using the regression test database after a
VACUUM ANALYZE, and 7.3 development sources):
regression=# EXPLAIN SELECT * FROM tenk1;
QUERY PLAN
-------------------------------------------------------------
Seq Scan on tenk1 (cost=0.00..333.00 rows=10000 width=148)
This is about as straightforward as it gets. If you do
SELECT * FROM pg_class WHERE relname = 'tenk1';
you will find out that tenk1 has 233 disk
pages and 10000 rows. So the cost is estimated at 233 page
reads, defined as costing 1.0 apiece, plus 10000 * cpu_tuple_cost which is
currently 0.01 (try SHOW cpu_tuple_cost).
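Spelled out, the arithmetic behind the 333.00 total is:

(233 disk pages * 1.0 per sequential page read)
+ (10000 rows * 0.01 cpu_tuple_cost)
= 233 + 100 = 333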
Now let's modify the query to add a WHERE condition:
regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000;
QUERY PLAN
------------------------------------------------------------
Seq Scan on tenk1 (cost=0.00..358.00 rows=1033 width=148)
Filter: (unique1 < 1000)
The estimate of output rows has gone down because of the WHERE clause.
However, the scan will still have to visit all 10000 rows, so the cost
hasn't decreased; in fact it has gone up a bit to reflect the extra CPU
time spent checking the WHERE condition.
The actual number of rows this query would select is 1000, but the
estimate is only approximate. If you try to duplicate this experiment,
you will probably get a slightly different estimate; moreover, it will
change after each ANALYZE command, because the
statistics produced by ANALYZE are taken from a
randomized sample of the table.
Modify the query to restrict the condition even more:
regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50;
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using tenk1_unique1 on tenk1 (cost=0.00..179.33 rows=49 width=148)
Index Cond: (unique1 < 50)
and you will see that if we make the WHERE condition selective
enough, the planner will
eventually decide that an index scan is cheaper than a sequential scan.
This plan will only have to visit 50 rows because of the index,
so it wins despite the fact that each individual fetch is more expensive
than reading a whole disk page sequentially.
Add another clause to the WHERE condition:
regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50 AND
regression-# stringu1 = 'xxx';
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using tenk1_unique1 on tenk1 (cost=0.00..179.45 rows=1 width=148)
Index Cond: (unique1 < 50)
Filter: (stringu1 = 'xxx'::name)
The added clause stringu1 = 'xxx' reduces the
output-rows estimate, but not the cost because we still have to visit the
same set of rows. Notice that the stringu1 clause
cannot be applied as an index condition (since this index is only on
the unique1> column). Instead it is applied as a filter on
the rows retrieved by the index. Thus the cost has actually gone up
a little bit to reflect this extra checking.
Let's try joining two tables, using the fields we have been discussing:
regression=# EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50
regression-# AND t1.unique2 = t2.unique2;
QUERY PLAN
----------------------------------------------------------------------------
Nested Loop (cost=0.00..327.02 rows=49 width=296)
-> Index Scan using tenk1_unique1 on tenk1 t1
(cost=0.00..179.33 rows=49 width=148)
Index Cond: (unique1 < 50)
-> Index Scan using tenk2_unique2 on tenk2 t2
(cost=0.00..3.01 rows=1 width=148)
Index Cond: ("outer".unique2 = t2.unique2)
In this nested-loop join, the outer scan is the same index scan we had
in the example before last, and so its cost and row count are the same
because we are applying the unique1 < 50 WHERE clause at that node.
The t1.unique2 = t2.unique2 clause is not relevant yet, so it doesn't
affect the row count of the outer scan. For the inner scan, the unique2
value of the current outer-scan row is plugged into the inner index scan
to produce an index condition like t2.unique2 = constant. So we get the
same inner-scan plan and costs that we'd get from, say, EXPLAIN SELECT
* FROM tenk2 WHERE unique2 = 42. The costs of the loop node are then set
on the basis of the cost of the outer scan, plus one repetition of the
inner scan for each outer row (49 * 3.01, here), plus a little CPU
time for join processing.
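Spelled out with the numbers above, that is roughly:

179.33 (outer scan) + 49 * 3.01 (inner scan, once per outer row) = 326.82

with the remaining 0.20 or so of the 327.02 total being the CPU charge
for join processing.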
In this example the loop's output row count is the same as the product
of the two scans' row counts, but that's not true in general, because
in general you can have WHERE clauses that mention both relations and
so can only be applied at the join point, not to either input scan.
For example, if we added WHERE ... AND t1.hundred < t2.hundred,
that would decrease the output row count of the join node, but not change
either input scan.
One way to look at variant plans is to force the planner to disregard
whatever strategy it thought was the winner, using the enable/disable
flags for each plan type. (This is a crude tool, but useful. See
also the section on explicit JOIN clauses below.)
regression=# SET enable_nestloop = off;
SET
regression=# EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50
regression-# AND t1.unique2 = t2.unique2;
QUERY PLAN
--------------------------------------------------------------------------
Hash Join (cost=179.45..563.06 rows=49 width=296)
Hash Cond: ("outer".unique2 = "inner".unique2)
-> Seq Scan on tenk2 t2 (cost=0.00..333.00 rows=10000 width=148)
-> Hash (cost=179.33..179.33 rows=49 width=148)
-> Index Scan using tenk1_unique1 on tenk1 t1
(cost=0.00..179.33 rows=49 width=148)
Index Cond: (unique1 < 50)
This plan proposes to extract the 50 interesting rows of tenk1
using ye same olde index scan, stash them into an in-memory hash table,
and then do a sequential scan of tenk2, probing into the hash table
for possible matches of t1.unique2 = t2.unique2 at each tenk2 row.
The cost to read tenk1 and set up the hash table is entirely start-up
cost for the hash join, since we won't get any rows out until we can
start reading tenk2. The total time estimate for the join also
includes a hefty charge for the CPU time to probe the hash table
10000 times. Note, however, that we are not charging 10000 times 179.33;
the hash table setup is only done once in this plan type.
It is possible to check on the accuracy of the planner's estimated costs
by using EXPLAIN ANALYZE. This command actually executes the query,
and then displays the true run time accumulated within each plan node
along with the same estimated costs that a plain EXPLAIN shows.
For example, we might get a result like this:
regression=# EXPLAIN ANALYZE
regression-# SELECT * FROM tenk1 t1, tenk2 t2
regression-# WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2;
QUERY PLAN
-------------------------------------------------------------------------------
Nested Loop (cost=0.00..327.02 rows=49 width=296)
(actual time=1.18..29.82 rows=50 loops=1)
-> Index Scan using tenk1_unique1 on tenk1 t1
(cost=0.00..179.33 rows=49 width=148)
(actual time=0.63..8.91 rows=50 loops=1)
Index Cond: (unique1 < 50)
-> Index Scan using tenk2_unique2 on tenk2 t2
(cost=0.00..3.01 rows=1 width=148)
(actual time=0.29..0.32 rows=1 loops=50)
Index Cond: ("outer".unique2 = t2.unique2)
Total runtime: 31.60 msec
Note that the actual time values are in milliseconds of real time,
whereas the cost estimates are expressed in arbitrary units of disk
fetches; so they are unlikely to match up. The thing to pay attention
to is the ratios.
In some query plans, it is possible for a subplan node to be executed more
than once. For example, the inner index scan is executed once per outer
row in the above nested-loop plan. In such cases, the loops value reports
the total number of executions of the node, and the actual time and rows
values shown are averages per-execution. This is done to make the numbers
comparable with the way that the cost estimates are shown. Multiply by the
loops value to get the total time actually spent in the node.
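For instance, the inner index scan above shows actual time=0.29..0.32
with loops=50, so roughly 0.32 msec * 50 = 16 msec of the total run time
was spent in that node.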
The Total runtime shown by EXPLAIN ANALYZE includes
executor start-up and shut-down time, as well as time spent processing
the result rows. It does not include parsing, rewriting, or planning
time. For a SELECT query, the total run time will normally be just a
little larger than the total time reported for the top-level plan node.
For INSERT, UPDATE, and DELETE commands, the total run time may be
considerably larger, because it includes the time spent processing the
result rows. In these commands, the time for the top plan node
essentially is the time spent computing the new rows and/or locating
the old ones, but it doesn't include the time spent making the changes.
It is worth noting that EXPLAIN results should not be extrapolated
to situations other than the one you are actually testing; for example,
results on a toy-sized table can't be assumed to apply to large tables.
The planner's cost estimates are not linear and so it may well choose
a different plan for a larger or smaller table. An extreme example
is that on a table that only occupies one disk page, you'll nearly
always get a sequential scan plan whether indexes are available or not.
The planner realizes that it's going to take one disk page read to
process the table in any case, so there's no value in expending additional
page reads to look at an index.
Statistics Used by the Planner
As we saw in the previous section, the query planner needs to estimate
the number of rows retrieved by a query in order to make good choices
of query plans. This section provides a quick look at the statistics
that the system uses for these estimates.
One component of the statistics is the total number of entries in each
table and index, as well as the number of disk blocks occupied by each
table and index. This information is kept in
pg_class's reltuples
and relpages columns. We can look at it
with queries similar to this one:
regression=# SELECT relname, relkind, reltuples, relpages FROM pg_class
regression-# WHERE relname LIKE 'tenk1%';
relname | relkind | reltuples | relpages
---------------+---------+-----------+----------
tenk1 | r | 10000 | 233
tenk1_hundred | i | 10000 | 30
tenk1_unique1 | i | 10000 | 30
tenk1_unique2 | i | 10000 | 30
(4 rows)
Here we can see that tenk1 contains 10000
rows, as do its indexes, but the indexes are (unsurprisingly) much
smaller than the table.
For efficiency reasons, reltuples
and relpages are not updated on-the-fly,
and so they usually contain only approximate values (which is good
enough for the planner's purposes). They are initialized with dummy
values (presently 1000 and 10 respectively) when a table is created.
They are updated by certain commands, presently VACUUM,
ANALYZE, and CREATE INDEX. A stand-alone
ANALYZE, that is, one not part of VACUUM,
generates an approximate reltuples value
since it does not read every row of the table.
Most queries retrieve only a fraction of the rows in a table, due
to having WHERE clauses that restrict the rows to be examined.
The planner thus needs to make an estimate of the
selectivity of WHERE clauses, that is, the fraction of
rows that match each clause of the WHERE condition. The information
used for this task is stored in the pg_statistic
system catalog. Entries in pg_statistic are
updated by the ANALYZE and VACUUM ANALYZE commands,
and are always approximate even when freshly updated.
Rather than look at pg_statistic directly,
it's better to look at its view pg_stats
when examining the statistics manually. pg_stats
is designed to be more easily readable. Furthermore,
pg_stats is readable by all, whereas
pg_statistic is only readable by the superuser.
(This prevents unprivileged users from learning something about
the contents of other people's tables from the statistics. The
pg_stats view is restricted to show only
rows about tables that the current user can read.)
For example, we might do:
regression=# SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'road';
attname | n_distinct | most_common_vals
---------+------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | -0.467008 | {"I- 580 Ramp","I- 880 Ramp","Sp Railroad ","I- 580 ","I- 680 Ramp","I- 80 Ramp","14th St ","5th St ","Mission Blvd","I- 880 "}
thepath | 20 | {"[(-122.089,37.71),(-122.0886,37.711)]"}
(2 rows)
The table below shows the columns that exist in pg_stats.
pg_stats Columns

tablename (name)
    Name of the table containing the column.

attname (name)
    Column described by this row.

null_frac (real)
    Fraction of the column's entries that are null.

avg_width (integer)
    Average width in bytes of the column's entries.

n_distinct (real)
    If greater than zero, the estimated number of distinct values in the
    column. If less than zero, the negative of the number of distinct
    values divided by the number of rows. (The negated form is used when
    ANALYZE believes that the number of distinct values is likely to
    increase as the table grows; the positive form is used when the
    column seems to have a fixed number of possible values.) For example,
    -1 indicates a unique column in which the number of distinct values
    is the same as the number of rows.

most_common_vals (text[])
    A list of the most common values in the column. (Omitted if no
    values seem to be more common than any others.)

most_common_freqs (real[])
    A list of the frequencies of the most common values, i.e., the number
    of occurrences of each divided by the total number of rows.

histogram_bounds (text[])
    A list of values that divide the column's values into groups of
    approximately equal population. The most_common_vals, if present,
    are omitted from the histogram calculation. (Omitted if the column
    data type does not have a < operator, or if the most_common_vals
    list accounts for the entire population.)

correlation (real)
    Statistical correlation between physical row ordering and logical
    ordering of the column values. This ranges from -1 to +1. When the
    value is near -1 or +1, an index scan on the column will be estimated
    to be cheaper than when it is near zero, due to reduction of random
    access to the disk. (Omitted if the column data type does not have a
    < operator.)
The maximum number of entries in the most_common_vals
and histogram_bounds arrays can be set on a
column-by-column basis using the ALTER TABLE SET STATISTICS
command. The default limit is presently 10 entries. Raising the limit
may allow more accurate planner estimates to be made, particularly for
columns with irregular data distributions, at the price of consuming
more space in pg_statistic and slightly more
time to compute the estimates. Conversely, a lower limit may be
appropriate for columns with simple data distributions.
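For example, to raise the limit for a single column and then refresh the
statistics (using the regression-database table from the earlier examples):

-- keep up to 100 most-common values / histogram entries for this column
ALTER TABLE tenk1 ALTER COLUMN stringu1 SET STATISTICS 100;
-- re-gather statistics so the new target takes effect
ANALYZE tenk1;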
Controlling the Planner with Explicit JOIN Clauses
Beginning with PostgreSQL 7.1 it has been possible
to control the query planner to some extent by using the explicit JOIN
syntax. To see why this matters, we first need some background.
In a simple join query, such as
SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;
the planner is free to join the given tables in any order. For
example, it could generate a query plan that joins A to B, using
the WHERE condition a.id = b.id, and then
joins C to this joined table, using the other WHERE
condition. Or it could join B to C and then join A to that result.
Or it could join A to C and then join them with B --- but that
would be inefficient, since the full Cartesian product of A and C
would have to be formed, there being no applicable condition in the
WHERE clause to allow optimization of the join. (All
joins in the PostgreSQL executor happen
between two input tables, so it's necessary to build up the result
in one or another of these fashions.) The important point is that
these different join possibilities give semantically equivalent
results but may have hugely different execution costs. Therefore,
the planner will explore all of them to try to find the most
efficient query plan.
When a query only involves two or three tables, there aren't many join
orders to worry about. But the number of possible join orders grows
exponentially as the number of tables expands. Beyond ten or so input
tables it's no longer practical to do an exhaustive search of all the
possibilities, and even for six or seven tables planning may take an
annoyingly long time. When there are too many input tables, the
PostgreSQL planner will switch from exhaustive
search to a genetic probabilistic search
through a limited number of possibilities. (The switch-over threshold is
set by the GEQO_THRESHOLD run-time
parameter described in the Administrator's Guide.)
The genetic search takes less time, but it won't
necessarily find the best possible plan.
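For example, to see or change the switch-over point for the current
session (geqo_threshold is the SQL spelling of the GEQO_THRESHOLD
parameter mentioned above):

SHOW geqo_threshold;
-- use exhaustive search for joins of up to 11 tables
SET geqo_threshold = 12;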
When the query involves outer joins, the planner has much less freedom
than it does for plain (inner) joins. For example, consider
SELECT * FROM a LEFT JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);
Although this query's restrictions are superficially similar to the
previous example, the semantics are different because a row must be
emitted for each row of A that has no matching row in the join of B and C.
Therefore the planner has no choice of join order here: it must join
B to C and then join A to that result. Accordingly, this query takes
less time to plan than the previous query.
The PostgreSQL query planner treats all
explicit JOIN syntaxes as constraining the join order, even though
it is not logically necessary to make such a constraint for inner
joins. Therefore, all of these queries give the same result:
SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;
SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id;
SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);
but the second and third take less time to plan than the first. This effect
is not worth worrying about for only three tables, but it can be a
lifesaver with many tables.
You do not need to constrain the join order completely in order to
cut search time, because it's OK to use JOIN operators in a plain
FROM list. For example,
SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...;
forces the planner to join A to B before joining them to other tables,
but doesn't constrain its choices otherwise. In this example, the
number of possible join orders is reduced by a factor of 5.
If you have a mix of outer and inner joins in a complex query, you
might not want to constrain the planner's search for a good ordering
of inner joins inside an outer join. You can't do that directly in the
JOIN syntax, but you can get around the syntactic limitation by using
subselects. For example,
SELECT * FROM d LEFT JOIN
(SELECT * FROM a, b, c WHERE ...) AS ss
ON (...);
Here, joining D must be the last step in the query plan, but the
planner is free to consider various join orders for A, B, C.
Constraining the planner's search in this way is a useful technique
both for reducing planning time and for directing the planner to a
good query plan. If the planner chooses a bad join order by default,
you can force it to choose a better order via JOIN syntax --- assuming
that you know of a better order, that is. Experimentation is recommended.
Populating a Database
One may need to insert a large amount of data when first
populating a database. Here are some tips and techniques for making that as
efficient as possible.
Disable Autocommit
Turn off autocommit and just do one commit at
the end. (In plain SQL, this means issuing BEGIN
at the start and COMMIT at the end. Some client
libraries may do this behind your back, in which case you need to
make sure the library does it when you want it done.)
If you allow each insertion to be committed separately,
PostgreSQL is doing a lot of work for each
record added.
An additional benefit of doing all insertions in one transaction
is that if the insertion of one record fails, then all records
inserted up to that point are rolled back, so you won't be stuck
with partially loaded data.
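A minimal sketch in plain SQL (the table and rows are hypothetical):

BEGIN;
-- mytable is a hypothetical two-column table
INSERT INTO mytable VALUES (1, 'one');
INSERT INTO mytable VALUES (2, 'two');
-- ... many more insertions ...
COMMIT;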
Use COPY FROM
Use COPY FROM STDIN to load all the records in one
command, instead of using
a series of INSERT commands. This reduces parsing,
planning, etc.
overhead a great deal. If you do this then it is not necessary to turn
off autocommit, since it is only one command anyway.
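For example, in psql the data rows follow the command directly,
tab-separated, and a line containing only \. ends the input (table and
data hypothetical):

COPY mytable FROM STDIN;
1	one
2	two
\.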
Remove Indexes
If you are loading a freshly created table, the fastest way is to
create the table, bulk-load with COPY, then create any
indexes needed
for the table. Creating an index on pre-existing data is quicker than
updating it incrementally as each record is loaded.
If you are augmenting an existing table, you can DROP
INDEX, load the table, then recreate the index. Of
course, the database performance for other users may be adversely
affected during the time that the index is missing. One should also
think twice before dropping unique indexes, since the error checking
afforded by the unique constraint will be lost while the index is missing.
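A sketch of the drop-and-recreate pattern (the index, table, and column
names are hypothetical):

DROP INDEX mytable_col_idx;
-- bulk-load the new records here, e.g. with COPY
CREATE INDEX mytable_col_idx ON mytable (col);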
Run ANALYZE Afterwards
It's a good idea to run ANALYZE or VACUUM
ANALYZE anytime you've added or updated a lot of data,
including just after initially populating a table. This ensures that
the planner has up-to-date statistics about the table. With no statistics
or obsolete statistics, the planner may make poor choices of query plans,
leading to bad performance on queries that use your table.
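For example, after a bulk load into the hypothetical table used above:

VACUUM ANALYZE mytable;
-- or, if there is no dead space to reclaim, simply:
ANALYZE mytable;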