Using Explain Analyze in Postgres

In the last post, we discussed EXPLAIN and how it can be used to obtain the query plan for a query. EXPLAIN ANALYZE is a variation of EXPLAIN that provides additional information about how the query actually executes. In addition to displaying all of the output EXPLAIN does, EXPLAIN ANALYZE also runs the query, tracks some metrics about the execution of each part of the plan, and displays that information. Let’s use EXPLAIN ANALYZE to run the query from the last post, which fetches the pet with the oldest owner:

> EXPLAIN ANALYZE
SELECT pets.name
FROM people, pets
WHERE people.id = pets.owner_id
ORDER BY people.age DESC
LIMIT 1;
                                                                          QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.86..6.20 rows=1 width=36) (actual time=261.981..261.981 rows=1 loops=1)
   ->  Nested Loop  (cost=0.86..5343066.52 rows=1000000 width=36) (actual time=261.980..261.980 rows=1 loops=1)
         ->  Index Scan Backward using people_age_idx on people  (cost=0.43..514360.60 rows=10000054 width=8) (actual time=0.021..150.869 rows=100238 loops=1)
         ->  Index Scan using pets_owner_id_idx on pets  (cost=0.42..0.46 rows=2 width=36) (actual time=0.001..0.001 rows=0 loops=100238)
               Index Cond: (owner_id = people.id)
 Planning time: 0.296 ms
 Execution time: 262.009 ms
(7 rows)

(It may help to re-read the second-to-last paragraph of the last post to remind yourself how the query executes.)

In the output, for each stage of the plan, there are two sets of numbers displayed, each surrounded by parentheses. The first set of numbers (the ones starting with ‘cost=’) contains the estimates made by Postgres. As mentioned in the last post, these estimates are useless most of the time and you are better off ignoring them. The second set of numbers (the ones starting with ‘actual time=’) contains the actual runtime statistics gathered by Postgres during query execution. Those are the numbers you will want to pay attention to. The format of those numbers is:

(actual time=STARTUP TIME..TOTAL TIME rows=ROW COUNT loops=LOOP COUNT)

Let’s start with an explanation of the loop count since the other numbers depend on it. The loop count, as the name suggests, is the number of times that node (by node I mean part of the plan) was executed. Most nodes will only be executed once, but there are cases where a node can be executed more than once. That mainly happens when a node is a child of a nested loop node. In the plan above, all of the nodes were executed only once except for the index scan on pets_owner_id_idx, which was executed 100,238 different times. This is because Postgres uses the index scan to look up the pets for each person as it iterates through the people from oldest to youngest.

The row count is fairly straightforward. It’s the average number of rows returned by the node each time it is executed. In other words, it’s the total number of rows returned by the node divided by the number of times the node was executed. This is why the index scan on pets_owner_id_idx displays rows=0 even though the query did find a pet: it returned a total of roughly one row across 100,238 executions, and that average rounds down to zero.

As for the total time, the name is somewhat misleading. The total time is the average amount of time spent executing the node on each execution, not the time spent across all executions. For example, the index scan on pets_owner_id_idx has a total time of 0.001ms, meaning each execution took 0.001ms on average. Since the node was executed 100,238 times, a total of ~100ms was spent executing that node.

Last is the startup time. The startup time is the amount of time it takes for the node to return its first row. Like the other numbers, this is averaged across every execution of the node. Personally, I have never found the startup time useful, and you’ll probably want to ignore it.

Together these numbers make it possible to determine where all of the time is spent during query execution. Looking at the plan above, most of the time is spent in two parts of the query: the two index scans. A total of ~150ms is spent in the first index scan (150ms total time * 1 loop) and a total of ~100ms (0.001ms total time * 100,238 loops) is spent in the second. That makes up ~250ms of the 262ms total execution time.

If you recall, the query executes by iterating over the people in order from oldest to youngest and checking whether each person has a pet. Once it finds a person with a pet, the name of that pet is returned as the result of the query. Based on the output of EXPLAIN ANALYZE, it appears Postgres has to iterate through 100,238 people before it finds one that has a pet. Since most of the time is spent fetching the rows for those people from the people table and then looking up their pets in the pets table, if we could avoid going through so many people, we may be able to dramatically speed up the query.

One way to do this would be to include the age of the owner of each pet in the pets table. If we then indexed that field, we could quickly find the pet with the oldest owner and avoid going through all 100,237 people that don’t have pets. In theory this would allow Postgres to execute the query much more quickly than it can currently. Let’s see what happens when we add an owner_age field with the age of the owner, create an index on it, and rewrite the query to take advantage of it.
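The schema change would look something like the following. This is a minimal sketch: the owner_age column and pets_owner_age_idx index names match the plan below, but the column type and the backfill statement are my assumptions, not something from the original setup.

-- Add the denormalized column (type assumed to be integer).
ALTER TABLE pets ADD COLUMN owner_age integer;

-- Backfill it from the people table (assumed approach).
UPDATE pets
SET owner_age = people.age
FROM people
WHERE people.id = pets.owner_id;

-- Index the new column so the query can scan it backward.
CREATE INDEX pets_owner_age_idx ON pets (owner_age);

With the column populated and indexed, the rewritten query looks like this: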

> EXPLAIN ANALYZE
SELECT pets.name
FROM pets
ORDER BY owner_age DESC
LIMIT 1;
                                                                    QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.42..0.49 rows=1 width=36) (actual time=0.021..0.021 rows=1 loops=1)
   ->  Index Scan Backward using pets_owner_age_idx on pets  (cost=0.42..61468.43 rows=1000000 width=36) (actual time=0.020..0.020 rows=1 loops=1)
 Planning time: 0.142 ms
 Execution time: 0.054 ms
(4 rows)

We can see the query now executes almost 5000x faster (262ms down to 0.054ms). All Postgres needs to do now is read the last entry in pets_owner_age_idx. Thanks to EXPLAIN ANALYZE, we were able to both determine where all of the time was being spent for the query and figure out how to optimize it.

3 thoughts on “Using Explain Analyze in Postgres”

  1. Hi,

    Thanks for the explanation.

    I am curious about the optimization at the end of this post. Now the new pets.owner_age column contains the same value as the people.age column. When a row in the people table is inserted, updated, or deleted, the pets table also needs to be updated.

    Is this considered a good database schema design?

    1. Copying data like this is a strategy called “denormalization”. It’s generally a tradeoff between safety and speed, and it’s considered good design if you need to make your queries as fast as possible.
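       For what it’s worth, one common way to keep such a denormalized column in sync is a trigger. The following is a minimal sketch, not something from the post itself; it only handles updates to people.age, and inserts and deletes would need similar handling:

       -- Hypothetical trigger to keep pets.owner_age in sync when a
       -- person's age changes.
       CREATE FUNCTION sync_pets_owner_age() RETURNS trigger AS $$
       BEGIN
         UPDATE pets SET owner_age = NEW.age WHERE owner_id = NEW.id;
         RETURN NEW;
       END;
       $$ LANGUAGE plpgsql;

       CREATE TRIGGER people_age_sync
       AFTER UPDATE OF age ON people
       FOR EACH ROW EXECUTE PROCEDURE sync_pets_owner_age();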
