Time for An Intermission

I’ve been writing a lot these past two months. I decided I’m going to take a break for a little bit. I plan on starting to write continuously again within the next 2-4 weeks. That is all.

Postgres JSONB

JSONB is a nifty Postgres type that allows you to store unstructured data inside of Postgres. A common use case of JSONB is to represent a mapping from a set of keys to arbitrary values. JSONB is nice for this because the set of keys can be completely different for each value. It is also possible to express hierarchical data through JSONB.
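
As a quick, made-up illustration of the hierarchical side, the ->, ->> and #>> operators let you drill into nested documents (the JSON here is hypothetical, not from any particular schema):

SELECT '{"user": {"name": "Ada", "address": {"city": "London"}}}'::jsonb
           #>> '{user,address,city}' AS city;

-- The same lookup spelled out with -> (returns jsonb) and ->> (returns text):
SELECT (('{"user": {"name": "Ada", "address": {"city": "London"}}}'::jsonb
           -> 'user') -> 'address') ->> 'city' AS city;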

As an example of where JSONB is incredibly useful, the company I work for, Heap, makes heavy use of JSONB. At Heap, we use JSONB to store events that happen on our customers’ websites. These events include pageviews and clicks, as well as custom events created by our customers. All of these different kinds of events have completely different properties. This makes JSONB a great tool for our use case. More concretely, we have an events table with a fairly simple schema:

CREATE TABLE events (
    user_id bigint,
    time bigint, 
    data jsonb
);

With JSONB, this simple schema is able to take care of most of our use cases. For example, a click event on the “login” button may look something like the following:

INSERT INTO events
SELECT 0 AS user_id,
       1498800692837 AS time,
       '{"type": "click",
         "target_text": "login",
         "page": "/login"}'::jsonb AS data;

And a pageview on the homepage may look like:

INSERT INTO events
SELECT 1 AS user_id,
       1498800692837 AS time,
       '{"type": "pageview",
         "page": "/home",
         "referrer": "www.google.com"}'::jsonb AS data;

JSONB lets us easily express all of these different kinds of events, and when we want to query the data, it’s fairly easy to pull fields out of the data column. For example, to see which pages are viewed the most frequently, we can run a query such as:

SELECT (data ->> 'page'), count(*)
FROM events
WHERE (data ->> 'type') = 'pageview'
GROUP BY (data ->> 'page');

We use this same general idea to power all of the analysis Heap is able to perform. This includes funnels (of people that did A, how many later did B) as well as retention queries (of people that did A, how many people did B within N weeks).
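
As a rough illustration (a minimal sketch, not Heap’s actual implementation), a funnel over the events table above can be written as a self join: of the users who viewed the homepage, how many later clicked the “login” button?

SELECT count(DISTINCT a.user_id)
FROM events a
JOIN events b
  ON b.user_id = a.user_id
 AND b.time > a.time
WHERE (a.data ->> 'type') = 'pageview'
  AND (a.data ->> 'page') = '/home'
  AND (b.data ->> 'type') = 'click'
  AND (b.data ->> 'target_text') = 'login';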

Of course, JSONB isn’t free. Due to our heavy use of it, we’ve run into a decent number of issues. One problem is that the keys need to be repeated in every event, which winds up wasting a lot of space. I ran an experiment where I pulled most of the fields we store in JSONB out into regular columns and found that we could save ~30% of our disk usage by not using JSONB!

A much worse problem is the lack of statistics. Normally, Postgres collects statistics about the columns of a table, including a histogram of each column and an estimate of the number of distinct values in it. At query time, Postgres uses these statistics to decide which query plan to use. Currently, Postgres has no way of collecting statistics over the keys inside a JSONB column, and in certain cases this leads it to choose some very bad query plans. My manager goes into both of these issues in more depth in a blog post he wrote on our company blog.
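
If you’re curious what statistics Postgres does have, you can peek at the pg_stats view. For a jsonb column you’ll only find statistics over the column as a whole, nothing about the individual keys inside it, which isn’t much help for queries like the ones above:

SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'events';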

Depending on your exact needs, JSONB can be a godsend. It makes it easy to store whatever data you want in Postgres without worrying about an overarching common format for all of your data.

track_io_timing

The parameter track_io_timing is a relatively unknown, but super helpful parameter when optimizing queries. As the name suggests, when the parameter is turned on, Postgres will track how long I/O takes. Then, when you run a query with EXPLAIN (ANALYZE, BUFFERS), Postgres will display how much time was spent just performing I/O.

You normally don’t want to leave track_io_timing on all the time since it incurs a significant amount of overhead. To get around this, when you want to see how long a query spends performing I/O, you can wrap the query in a transaction that runs SET LOCAL track_io_timing = on;. This enables track_io_timing only for the duration of the transaction. As a specific example, here’s a simple query over a table I have lying around:

> BEGIN; SET track_io_timing = ON; EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pets; COMMIT;
                                                QUERY PLAN                                                 
-----------------------------------------------------------------------------------------------------------
 Seq Scan on pets  (cost=0.00..607.08 rows=40008 width=330) (actual time=8.318..38.126 rows=40009 loops=1)
   Buffers: shared read=207
   I/O Timings: read=30.927
 Planning time: 161.577 ms
 Execution time: 42.104 ms

The I/O Timings field shows us that of the 42ms spent executing the query, ~31ms was spent performing I/O. Now if we perform the query again when the data is cached:

> BEGIN; SET track_io_timing = ON; EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pets; COMMIT;
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Seq Scan on pets  (cost=0.00..607.08 rows=40008 width=330) (actual time=0.004..7.504 rows=40009 loops=1)
   Buffers: shared hit=207
 Planning time: 0.367 ms
 Execution time: 11.478 ms

We can see the query is just about 31ms faster! This time the query does not show any information about the I/O timing since no time was spent performing I/O, due to the data being cached.
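
Spelled out, the SET LOCAL pattern mentioned above looks like the following. The setting automatically reverts to its previous value when the transaction ends, so there’s no risk of leaving the overhead enabled:

BEGIN;
SET LOCAL track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pets;
COMMIT;
-- Outside of the transaction, track_io_timing is back to its old value.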

When benchmarking queries, I also make sure to use track_io_timing so I can see whether the expensive part of the query is the I/O, or something else entirely.

Avoiding Torn Pages

For the Postgres file layout, Postgres reads and writes data to disk 8kb at a time. Most operating systems use a smaller page size, such as 4kb. If Postgres is running on one of these operating systems, an interesting edge case can occur. Since Postgres writes to disk in units of 8kb and the OS writes to disk in units of 4kb, if the power goes out at just the right time, it is possible that only 4kb of an 8kb write Postgres was performing actually made it to disk. This edge case is sometimes referred to as “torn pages”. Postgres does have a way of working around torn pages, but it does increase the amount of I/O Postgres needs to perform.

Under normal circumstances, Postgres uses a technique called write-ahead logging (WAL) to prevent data loss. At a high level, WAL works by creating a log on disk of all changes made by a transaction to the database, before the changes themselves are persisted to disk. Since appending to a single continuous log on disk is much cheaper than performing random writes to disk, WAL reduces the amount of I/O Postgres needs to perform without the risk of data loss. If Postgres crashes, it will be able to recover all of the changes that weren’t persisted to disk by replaying the WAL.

Although keeping track of all of the changes made does allow Postgres to recover from ordinary crashes, in which every write was either done completely or not at all, it does not let Postgres recover from a torn page. Due to the specifics of how the WAL is implemented, in the case of a torn page the logged changes alone do not give Postgres enough information to determine what changes should be applied to each half of the page.

To recover from torn pages, Postgres does something called “full-page writes”. The first time Postgres modifies a page after a checkpoint, it writes a full copy of the page to the WAL. That way, when using the WAL to recover from a crash, Postgres does not need to trust the contents of the page stored on disk: it can recover the entire state of the page from the WAL alone, sidestepping the problem of torn pages entirely! To avoid constantly writing full copies of every page, later modifications of the same page only log the changes themselves, since recovery starts from the last checkpoint and will replay the full-page image before them.

There is actually a parameter, full_page_writes, that allows you to disable this behavior. If you care about preventing data corruption, there are very few cases in which you should disable it. The only real case is if the OS/filesystem has built-in protection against torn pages. For example, the ZFS filesystem provides its own transactional guarantees and prevents torn pages, largely due to its copy-on-write nature.
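
For what it’s worth, you can check the setting, and change it without editing postgresql.conf, if you’re certain your storage stack protects you. A quick sketch:

-- full_page_writes is on by default:
SHOW full_page_writes;

-- Only do this if the filesystem guarantees it can recover torn pages (e.g. ZFS):
ALTER SYSTEM SET full_page_writes = off;
SELECT pg_reload_conf();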

Postgres TOAST

TOAST, aka The Oversized Attribute Storage Technique, aka the best thing since sliced bread, is a technique Postgres uses to store large values. Under the normal Postgres data layout, every row is stored on a single 8kb page, with no row spanning multiple pages. To support rows larger than 8kb, Postgres makes use of TOAST.

If Postgres is about to store a row and the row is over 2kb, TOAST kicks in. Postgres will first attempt to shrink the row by compressing its large variable width fields. If the row is still over 2kb after compression, Postgres will then repeatedly move large fields out of the row until the row is under 2kb. To store a field outside of the row, Postgres splits the compressed value into individual chunks of roughly 2kb. Each of those chunks is then stored in a “TOAST table”. Every regular Postgres table has a corresponding TOAST table in which the TOASTed values are stored. A value stored in this manner is commonly referred to as “TOASTed”.

Every TOAST table has three columns. The first is chunk_id, which identifies the TOASTed value each chunk belongs to: all chunks of the same TOASTed value have the same chunk_id. The chunk_id is what is stored in the original row, and it is what allows Postgres to find the chunks for a given value. The second column is chunk_seq, which determines the ordering of the chunks with the same chunk_id. The first chunk of a TOASTed value has chunk_seq=0, the second has chunk_seq=1, and so on. The last column is chunk_data, which contains the actual data for the TOASTed value.

At query time, when a TOASTed value is needed, Postgres uses an index on the TOAST table on (chunk_id, chunk_seq) to look up all of the chunks with a given chunk_id, sorted by chunk_seq. From there, it can stitch all of the chunks back together and decompress the result to obtain the original value of the field.

Under certain circumstances, TOAST can actually make queries faster. If a TOASTed field isn’t needed to answer a query, Postgres doesn’t have to read the chunks for the TOASTed value, and can skip reading the value into memory. In some cases, this will dramatically reduce the amount of disk I/O Postgres needs to perform to answer a query.

You can actually access the TOAST table for a given table directly and inspect the values stored in it. As a demonstration, let’s create a table messages that has a single column message:

CREATE TABLE messages (message text);

We can then insert a few random strings to be TOASTed:

INSERT INTO messages
SELECT (SELECT 
        string_agg(chr(floor(random() * 26)::int + 65), '')
        FROM generate_series(1,10000)) 
FROM generate_series(1,10);

Now that we have a table with values we know are TOASTed, we need to find the name of the TOAST table for the messages table. We can look it up with the following query:

> SELECT reltoastrelid::regclass 
> FROM pg_class 
> WHERE relname = 'messages';
      reltoastrelid      
-------------------------
 pg_toast.pg_toast_59611
(1 row)

The query pulls its information from pg_class, the catalog table in which Postgres stores metadata about every table.

Now that we have the name of the TOAST table, we can read from it just like any other table:

> SELECT * FROM pg_toast.pg_toast_59611;
 chunk_id | chunk_seq | chunk_data
----------+-----------+------------
    59617 |         0 | \x4c4457...
    59617 |         1 | \x424d4b...
...

Note that chunk_data is the binary representation of the compressed field, so it isn’t exactly human readable.
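
You can also see the effect of the compression without reading the TOAST table at all: pg_column_size reports the number of bytes a value actually occupies (after compression and TOASTing), while octet_length reports the size of the uncompressed text:

SELECT octet_length(message) AS uncompressed_bytes,
       pg_column_size(message) AS stored_bytes
FROM messages
LIMIT 1;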

Overall, TOAST is a clever technique that reuses the ordinary Postgres storage technique to store larger values. It’s completely transparent to the user, yet if you really want to, you can dig into your TOAST tables and see exactly how Postgres is storing your data.

The File Layout of Postgres Tables

Although Postgres may seem magical, it really isn’t. When data is stored in Postgres, Postgres in turn stores that data in regular files in the filesystem. In this blog post, we’ll take a look at how Postgres uses files to represent data stored in the database.

First of all, each table in Postgres is represented by one or more underlying files, with each 1GB chunk of the table stored in a separate file. It is pretty easy to find the underlying files for a table. To do so, you first need to find the Postgres data directory, which is the directory in which Postgres keeps all of your data. You can find where the data directory is by running SHOW DATA_DIRECTORY;. When I run it locally, I see the following:

> SHOW DATA_DIRECTORY;
        data_directory        
------------------------------
 /var/lib/postgresql/9.5/main
(1 row)

Now that you know where the Postgres data directory is, you need to find where the files for the specific table you are looking for are located. To do so, you can use the pg_relation_filepath function with the name of the table. The function returns the path of the table’s files relative to the data directory. Here is what I see when I run it on a table I have locally:

> SELECT pg_relation_filepath('people');
 pg_relation_filepath 
----------------------
 base/16387/51330
(1 row)

Together with the location of the data directory, this gives us the location of the files for the people table. All of the files are stored in /var/lib/postgresql/9.5/main/base/16387/. The first GB of the table is stored in a file called 51330, the second in a file called 51330.1, the third in 51330.2, and so on. You can read and write these files yourself, but I heavily suggest not doing so, as you will most likely wind up corrupting your database.

Now that we’ve found the actual files, let’s walk through how each file is laid out. Each file is broken up into 8kb chunks, called “pages”. For example, a 1.5GB table will be stored across two files and 196,608 pages, and will look like the following:

Each row is stored on a single page (except when a row is too large, in which case a technique called TOAST is used). Pages are the unit in which Postgres reads and writes data to the filesystem. Whenever Postgres needs a row from disk to answer a query, it reads the entire page the row is on. When Postgres writes a row on a page, it writes a whole new copy of the entire page to disk at once. Postgres operates this way for numerous reasons that are outside the scope of this blog post.
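
As a sanity check of the page math, you can divide the size of a table on disk by the block size to get the number of pages (the people table here is just my local example):

SELECT pg_relation_size('people') AS bytes,
       pg_relation_size('people') / current_setting('block_size')::int AS pages;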

Pages themselves have the following format:

The header is 24 bytes and contains various metadata about the page, including a checksum and information necessary for WAL. The row offsets section is a list of pointers into the rows section, with the Nth pointer pointing to the Nth row. The offsets can be used to quickly look up an arbitrary row of a page. If we emphasize the individual rows on the page, the page winds up looking like:

The first thing you likely noticed is that the first rows are stored at the back of the page. That is so the offsets and the actual row data can both grow towards the middle. If a new row is inserted, we can allocate a new offset from the front of the free space, and allocate the space for the row from the back of the free space.
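
If you want to see the header and the row offsets for yourself, the pageinspect extension (assuming it is installed and you are a superuser) can decode a raw page. A minimal sketch against the people table from earlier:

CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Decode the 24 byte header of the first page of the table:
SELECT * FROM page_header(get_raw_page('people', 0));

-- The row offsets (line pointers); note that lp_off for the first
-- rows points near the end of the 8kb page:
SELECT lp, lp_off, lp_len
FROM heap_page_items(get_raw_page('people', 0));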

Each row, in turn, has a format that looks like the following:

The header of each row is 23 bytes and includes the transaction ids for MVCC as well as other metadata about the row. Based on the table schema, each field of the row is either a fixed width type or a variable width type. If the field is fixed width, Postgres already knows how long the field is and just stores the field data directly in the row.

If the field is variable width, there are two possibilities for how it is stored. Under normal circumstances, it is stored directly in the row with a header detailing how large the field is. In certain special cases, or when it’s impossible to store the field directly in the row, the field will be stored outside of the row using a technique called TOAST, which we will take a look at in my next post.

To recap, each row is stored on an 8kb page along with several other rows. Each page in turn is part of a 1GB file. While processing a query, when Postgres needs to fetch a row from disk, Postgres will read the entire page the row is stored on. This is, at a high level, how Postgres represents data stored in it on disk.

How to Write a Postgres SQL UDF

Being able to write a Postgres UDF (user-defined function) is a simple skill that goes a long way. SQL UDFs let you give a name to part or all of a SQL query and use that name to refer to that SQL code. It works just like any user-defined function in your favorite programming language.

As a simple example, in the last post we came up with a query for incrementing a counter in a table of counters:

INSERT INTO counters
SELECT <id> AS id, <amount> AS VALUE
    ON CONFLICT (id) DO
    UPDATE SET value = counters.value + excluded.value;

When we used this query multiple times, we had to copy and paste it once for each time we used it. To avoid this problem, we could define a UDF that runs the query and then only increment the counters through the UDF. In general, most of the time when you define a SQL UDF, you’ll use code like the following:

CREATE OR REPLACE FUNCTION <function name>(<arguments>)
RETURNS <return type> AS $$
  <queries to run>
$$ LANGUAGE SQL;

This will define a UDF with the given name that runs the queries in the body whenever it is called. Inside of the queries, you’ll be able to refer to any of the arguments passed to the function. If we convert the query we had for incrementing a counter into a UDF, we wind up with the following UDF definition:

CREATE OR REPLACE FUNCTION 
increment_counter(counter_id bigint, amount bigint)
-- Use void as the return type because this function 
-- returns no value.
RETURNS void AS $$
  INSERT INTO counters
  SELECT counter_id AS id, amount AS value
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + excluded.value;
$$ LANGUAGE SQL;

With this UDF we can now use the UDF instead of the original query:

> SELECT * FROM counters;
 id | value 
----+-------
(0 rows)

> SELECT increment_counter(1, 10);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    10
(1 row)

> SELECT increment_counter(1, 5);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    15
(1 row)

> SELECT increment_counter(2, 5);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    15
  2 |     5
(2 rows)

> SELECT increment_counter(3, 20);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    15
  2 |     5
  3 |    20
(3 rows)

This is much better than what we had before.

One of the more interesting classes of UDFs are those that return rows instead of a single result. To define such a UDF, you use RETURNS TABLE (<columns>) in place of a scalar return type. For example, if we wanted a UDF that returned the top N counters, we could define one as such:

CREATE OR REPLACE FUNCTION top_counters(n bigint)
RETURNS TABLE (id bigint, value bigint) AS $$
  SELECT * FROM counters ORDER BY value DESC LIMIT n;
$$ LANGUAGE SQL;

Then we can use it like:

> SELECT * FROM top_counters(2);
 id | value 
----+-------
  3 |    20
  1 |    15
(2 rows)

You can then use the function as part of a larger SQL query. For example, if you wanted to find the sum of the values of the top 10 counters, you could do that with the following straightforward SQL query:

SELECT sum(value) FROM top_counters(10);

To recap, UDFs are a great way to simplify SQL queries. I find them to be especially useful when I am reusing the same subquery in a bunch of different places.

Postgres Upserts

Since version 9.5, Postgres has supported a useful feature called UPSERT. For a reason I can’t figure out, this feature is referred to as UPSERT even though there is no UPSERT SQL command. In addition to being a useful feature, UPSERT is fairly interesting from a “behind the scenes” perspective as well.

If you haven’t noticed yet, the word “upsert” is a portmanteau of the words “update” and “insert”. As a feature, UPSERT allows you to insert new data if it does not already exist and to specify an action to perform instead if it does. More specifically, when there is a unique constraint on a column (a constraint specifying that all values of the column are distinct from each other), UPSERT allows you to say “insert this row if it does not violate the unique constraint, otherwise perform this action to resolve the conflict”.

As an example, let’s say we have a counters table where each row represents a counter. The table has two columns, id and value, where id specifies the counter we are referring to and value is the number of times the counter has been incremented. It would be nice if we could increment a counter without needing to create the counter in advance. This is exactly the kind of problem UPSERT solves. First let’s create the table:

CREATE TABLE counters (id bigint UNIQUE, value bigint);

It’s important that the id column is marked as unique. Without that, we would be unable to use UPSERT.

To write an UPSERT query, you first write a normal INSERT for the case when the constraint is not violated. In this case, when a counter with a given id does not already exist, we want to create a new counter with the given id and the value 1. An INSERT that does this looks like:

INSERT INTO counters (id, value)
SELECT <id> AS id, 1 AS value;

Then, to make it an UPSERT, you add ON CONFLICT (<unique column>) DO <action> to the end of it. The action can either be NOTHING, in which case the conflicting row is simply skipped, or it can be UPDATE SET <column1> = <expr1>, <column2> = <expr2>, …, which modifies the existing row, setting the listed columns to the new values. In this case we want to use the UPDATE form to increment the value of the counter. The whole query winds up looking like:

INSERT INTO counters
SELECT <id> AS id, 1 AS value
    ON CONFLICT (id) DO 
    UPDATE SET value = counters.value + 1;

When you run the above command with a given id, it will create a new counter with the value 1 if a counter with the id does not already exist. Otherwise it will increment the value of the existing counter. Here’s some examples of its use:

> SELECT * FROM counters;
 id | value
----+-------
(0 rows)

> INSERT INTO counters
  SELECT 0 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     1
(1 row)

> INSERT INTO counters
  SELECT 0 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     2
(1 row)

> INSERT INTO counters
  SELECT 0 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     3
(1 row)

> INSERT INTO counters
  SELECT 1 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     3
  1 |     1
(2 rows)

> INSERT INTO counters
  SELECT 1 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     3
  1 |     2

One last bit about UPSERT: you can use the faux table excluded to refer to the new row being inserted. This is useful if you either want to replace the values of the old row with the values of the new row, or want to make the row a combination of the old and new values. As an example, let’s say we want to extend the counter example to increment by an arbitrary amount. That can be done with:

INSERT INTO counters
SELECT <id> AS id, <amount> AS VALUE
    ON CONFLICT (id) DO
    UPDATE SET value = counters.value + excluded.value;

This even works if you are incrementing multiple counters simultaneously all by different amounts.
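
For example, a single statement that bumps several counters at once, each by a different amount, might look like this (the ids and amounts are made up):

INSERT INTO counters (id, value)
VALUES (1, 10), (2, 3), (3, 7)
    ON CONFLICT (id) DO
    UPDATE SET value = counters.value + excluded.value;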

What makes UPSERT so interesting to me is that it works even in concurrent situations. UPSERT still works even if other INSERT and UPDATE queries are running simultaneously! Prior to the UPSERT feature, there was a fairly complex method to emulate it, which involved using PL/pgSQL to alternate between running INSERT and UPDATE statements until one of them succeeded. The statements needed to run in a loop because a different INSERT could run before the emulated UPSERT’s INSERT, and a row could be deleted before its UPDATE could run. The UPSERT feature takes care of all of this for you, while at the same time providing a single command for the common pattern of inserting data if it does not already exist and otherwise modifying the old data!

Postgres Transaction Isolation Levels

This post is part 3 in a three part series exploring transaction isolation issues in Postgres. Part 1 introduced some of the common problems that come up with transaction isolation in Postgres. Part 2 introduced row level locks, one way of dealing with some of the problems around transaction isolation. This part introduces the different transaction isolation levels and how they affect the different problems introduced in part 1.

The main, and easiest, way to deal with the transaction isolation issues introduced in part 1 is to change the transaction isolation level. In Postgres, each transaction has its own isolation level, which determines the isolation semantics of the transaction. In other words, the isolation level determines what state of the database the statements run within the transaction see, as well as how concurrent modifications to the same row are resolved.

Postgres provides a total of three different transaction isolation levels: read committed, repeatable read, and serializable. Read committed is the default and provides the semantics we’ve been discussing so far. The isolation semantics described in part 1 around SELECT, UPDATE, and DELETE are the semantics provided under read committed. The repeatable read and serializable isolation levels each provide a different set of semantics for SELECT, UPDATE, and DELETE. The semantics under repeatable read are as follows:

SELECT

Whenever a repeatable read transaction starts, it takes a snapshot of the current state of the database, and all queries within the transaction use this snapshot. This is in contrast to read committed, where each query takes its own snapshot. The effects of all transactions that committed before the transaction started are visible to all queries within the transaction, while transactions that either failed or committed after the transaction started are invisible to all statements run in the transaction.

UPDATE and DELETE

Under repeatable read, UPDATE and DELETE use the snapshot created when the transaction started to find all rows to be modified. When an UPDATE or DELETE statement attempts to modify a row currently being modified by another transaction, it waits until the other transaction commits or aborts. If the other transaction aborts, the statement modifies the row and continues. On the other hand, if the other transaction commits, the UPDATE/DELETE statement will abort.

This behavior is completely different from that provided by read committed and somewhat surprising. In read committed, if two statements attempt to modify the same row at the same time, the modifications will be performed one after the other. In repeatable read, one of the statements will be aborted to prevent any isolation issues! This is generally the reason why people prefer read committed over repeatable read. When someone uses repeatable read, their code has to be prepared to retry transactions if any of them fail.


On its own, repeatable read prevents all of the transaction isolation problems but one. The only class of issues repeatable read does not prevent is serialization anomalies. That is what the serializable transaction isolation level is for. The serializable isolation level behaves exactly like repeatable read, except it specifically detects when two transactions will not serialize correctly and aborts one of them to prevent a serialization anomaly. Like repeatable read, if you use serializable you should have code ready to retry aborted transactions.

To change the isolation level Postgres uses, you can set default_transaction_isolation to the desired level. All of the examples in the rest of this post use the repeatable read isolation level, unless explicitly mentioned otherwise.
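
Concretely, you can change the default for the session, or pick the level for a single transaction when you begin it (these examples just reuse the ints table from part 1):

-- Every new transaction in this session will use repeatable read:
SET default_transaction_isolation = 'repeatable read';

-- Or choose the level for just one transaction:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM ints;
COMMIT;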

Now that we’ve got an understanding of the semantics behind the other isolation levels, let’s take a look at how they affect the examples in part 1.

Non-Repeatable Reads

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints;
 n 
---
 1

S2> BEGIN;
S2> UPDATE ints SET n = 2;
S2> COMMIT;

S1> SELECT * FROM ints;
 n 
---
 1

S1> COMMIT;

S3> BEGIN;
S3> SELECT * FROM ints;
 n 
---
 2

S3> COMMIT;

With the repeatable read isolation level, S1 avoids the non-repeatable read since, unlike read committed, the second query makes use of a snapshot created at the start of the transaction. This is where repeatable read gets its name: it can be used to avoid non-repeatable reads.

Lost Updates

Due to the semantics around conflicting updates, repeatable read does prevent lost updates, but it does so in a less than ideal way:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints;
 n
---
 1

S2> BEGIN;
S2> SELECT * FROM ints;
 n
---
 1

S2> UPDATE ints SET n = 2; -- Computed server side.
S2> COMMIT;

S1> UPDATE ints SET n = 2; -- Computed server side.
ERROR:  could not serialize access due to concurrent UPDATE

S1> ROLLBACK;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 2

S3> COMMIT;

What happened here is that since two UPDATEs attempted to modify the same row at the same time, Postgres aborted one of them to prevent a lost update. For this reason, if you are just trying to avoid lost updates, you should prefer to use row level locks with SELECT … FOR UPDATE under read committed. That will allow both UPDATEs to be performed, without either UPDATE being lost or aborted.

Phantom Reads

Repeatable read eliminates phantom reads for the same reason it eliminates non-repeatable reads:

S0> BEGIN;
S0> SELECT count(*) FROM ints;
 count
-------
     0

S1> BEGIN;
S1> INSERT INTO ints SELECT 1;
S1> COMMIT;

S0> SELECT count(*) FROM ints;
 count
-------
     0

S0> COMMIT;

S2> BEGIN;
S2> SELECT COUNT(*) FROM ints;
 count
-------
     1

S2> COMMIT;

Preventing phantom reads is one reason why you would prefer to use the repeatable read isolation level instead of row level locks.

Skipped Modification

Just like with a lost update, repeatable read will abort one of the transactions in a skipped modification:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> INSERT INTO ints SELECT 2;
S0> SELECT * FROM ints;
 n
---
 1
 2

S0> COMMIT

S1> BEGIN;
S1> UPDATE ints SET n = n+1;

S2> BEGIN;
S2> DELETE FROM ints WHERE n = 2;
-- S2 blocks since the DELETE is trying to modify a row
-- currently being updated.

S1> COMMIT;
-- S2 aborts with the error:
ERROR:  could not serialize access due to concurrent update

S2> ROLLBACK;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 2
 3

S3> COMMIT;

S2 is aborted for the same reason S1 is aborted in the lost update example. The DELETE tries to modify a row which was modified after the snapshot was taken. Since the version of the row in the snapshot is out of date, Postgres aborts the transaction to prevent any isolation issues.

Serialization Anomalies

Unlike all of the other interactions, repeatable read does not eliminate serialization anomalies:

S0> BEGIN;
S0> SELECT count(*) FROM ints;
 count
-------
     0
(1 row)
 
S1> BEGIN;
S1> SELECT count(*) FROM ints;
 count
-------
     0
(1 row)
 
S1> INSERT INTO ints SELECT 1;
S1> COMMIT;
 
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

Fortunately, if you do want to completely prevent serialization anomalies, you can use the serializable isolation level. If we use serializable in the example instead of repeatable read, here is what happens:

S0> BEGIN;
S0> SELECT count(*) FROM ints;
 count
-------
     0

S1> BEGIN;
S1> SELECT count(*) FROM ints;
 count
-------
     0

S1> INSERT INTO ints SELECT 1;
S1> COMMIT;

S0> INSERT INTO ints SELECT 1;
ERROR:  could not serialize access due to read/write dependencies among transactions
DETAIL:  Reason code: Canceled on identification as a pivot, during write.
HINT:  The transaction might succeed if retried.

S0> ROLLBACK;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 1

S3> COMMIT;

What happened here is that Postgres detected that the pattern of reads and writes wouldn’t serialize properly, so it aborted one of the transactions.


Like row level locks, the repeatable read and serializable isolation levels are simple, yet at the same time they introduce a lot of complexity. Use of either repeatable read or serializable dramatically increases the chances that any given transaction will fail, making code that interacts with the database much more complicated, and database performance much less predictable.

In general, if you can, you should try to use the read committed isolation level and write your code in such a way that you don’t run into the different isolation issues mentioned in part 1. If you absolutely have to, you can use the tools mentioned in these last two posts to fend off all of the isolation issues.

Postgres Row Level Locks

This post is part 2 in a three part series exploring transaction isolation in Postgres. Part 1 introduced some of the problems that come up under the default isolation settings. This part and the next one introduce common ways of eliminating those problems.

As mentioned in part 1, there are many different cases where concurrent transactions can interact with each other. One of the most basic ways to avoid a few of these interactions is to use row level locks. Row level locks are a way of preventing concurrent modification to a row. Let’s take a look at how row level locks change the behavior of each of the examples in part 1:

Non-Repeatable Reads

To recap, a non-repeatable read is when a single transaction reads a single row twice and finds it to be different the second time. Here’s the example from part 1 demonstrating non-repeatable reads:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints;
 n
---
 1

S2> BEGIN;
S2> UPDATE ints SET n = 2;
S2> COMMIT;

S1> SELECT * FROM ints;
 n
---
 2

S1> COMMIT;

If S1 were to acquire a row level lock on the row as soon as it first read it, S2 would be unable to update the row for as long as S1 holds the lock. To acquire row level locks with a SELECT statement, you just add FOR SHARE to the end of the SELECT. The locks are released once the transaction commits. Rewriting the example with FOR SHARE, it now looks like:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints FOR SHARE;
 n
---
 1

S2> BEGIN;
S2> UPDATE ints SET n = n+1;
-- Blocks because the row being updated is locked by 
-- another transaction.

S1> SELECT * FROM ints;
 n
---
 1

S1> COMMIT;
-- The UPDATE in S2 completes since S1 released its lock
-- on the row.

S2> COMMIT;

What FOR SHARE does is acquire a “read lock” on each of the rows returned by the SELECT statement. Once the read locks are acquired, no other transaction will be able to update or delete the rows until the transaction holding the read locks commits and releases the locks. Multiple transactions can possess read locks on the same row, and it is impossible for any transaction to update a row as long as at least one other transaction has a read lock on the row.

Lost Updates

Since multiple transactions can acquire read locks on a single row at the same time, FOR SHARE doesn’t exactly handle lost updates well. Here’s the example of a lost update from part 1 rewritten to use FOR SHARE:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints;
 n
---
 1

S2> BEGIN;
S2> SELECT * FROM ints;
 n
---
 1

S2> UPDATE ints SET n = 2; -- Computed server side.
-- Blocks because S1 has a read lock on the row.

S1> UPDATE ints SET n = 2; -- Computed server side.
ERROR:  deadlock detected

S1> ROLLBACK;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 2

S3> COMMIT;

What happened here is the read lock acquired by S1 is preventing the UPDATE in S2 from running and the read lock acquired by S2 is preventing the UPDATE in S1 from running. Postgres detects that this is a deadlock and aborts the transaction in S1.

To get around this issue, you’ll want to use a variation of FOR SHARE called FOR UPDATE. FOR UPDATE also acquires locks on the rows being selected, but instead of acquiring read locks, it acquires write locks. A write lock is similar to a read lock, but as long as a transaction has a write lock on a row, no other transaction can have a read or write lock on the same row. In fact, a write lock is what UPDATE and DELETE grab before they modify a row. Let’s take a look at what happens when we use FOR UPDATE instead of FOR SHARE:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints FOR UPDATE;
 n
---
 1

S2> SELECT * FROM ints FOR UPDATE;
-- Blocks because S1 has a write lock.

S1> UPDATE ints SET n = 2; -- Computed server side.
S1> COMMIT;
-- S1 releases the write lock and S2 unblocks. 
-- The SELECT in S2 returns:
 n
---
 2

S2> UPDATE ints SET n = 3; -- Computed server side.
S2> COMMIT;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 3

S3> COMMIT;

By using FOR UPDATE, S1 signals it’s about to modify the rows it is selecting. S2 also wants to modify the row, so when it sees S1 is about to modify the row, it waits for S1 to complete before reading the row. This guarantees that S2 sees an up to date value. In effect, by using FOR UPDATE, S1 makes the read and write performed on the row happen in a single step.

Phantom Reads

Although row level locks are able to prevent non-repeatable reads, they are unable to prevent phantom reads, even though the two seem to be similar issues. If we add FOR SHARE to the example of a phantom read from part 1:

S0> BEGIN;

S0> SELECT count(*) 
S0> FROM (SELECT * FROM ints FOR SHARE) sub;
 count
-------
     0

S1> BEGIN;
S1> INSERT INTO ints SELECT 1;
S1> COMMIT;

S0> SELECT count(*) FROM ints;
 count
-------
     1

S0> COMMIT;

The phantom read still happens because FOR SHARE only acquires locks on the rows already in the table. It does not prevent new rows from being inserted into the table.

Skipped Modification

Row level locks don’t help here. The UPDATE and DELETE statements already grab row level write locks on the rows being modified. FOR SHARE and FOR UPDATE cannot be used since no SELECT statement is involved in a skipped modification.

Serialization Anomalies

Row level locks do not help here for the same reason they don’t help phantom reads. There’s nothing to prevent the INSERT statements from inserting the new rows.


Overall, row level locks are an easy way of handling two of the isolation issues that come up, and fortunately those two are the ones that come up most commonly. The downside of row level locks is that, while easy to use, they add a fair amount of complexity and overhead. You now have to worry about one transaction holding a lock for too long, which you mostly wouldn’t have to worry about otherwise. Next, we’ll take a look at a different approach to solving some of the interactions: the other transaction isolation levels.