How to Improve Your Productivity as a Working Programmer

For the past few weeks, I’ve been obsessed with improving my productivity. During this time, I’ve continuously been monitoring the amount of work I’ve been getting done and have been experimenting with changes to make myself more productive. After only two months, I can now get significantly more work done than I did previously in the same amount of time.

If you had asked me my opinion on programmer productivity before I started this process, I wouldn’t have had much to say. After looking back and seeing how much more I can get done, I now think that understanding how to be more productive is one of the most important skills a programmer can have. Here are a few changes I’ve made in the past few weeks that have had a noticeable impact on my productivity:

Eliminating Distractions

One of the first and easiest changes I made was eliminating as many distractions as possible. Previously, I would spend a nontrivial portion of my day reading through Slack/email/Hacker News. Nearly all of that time could have been used much more effectively if I had spent it focused on getting my work done.

To eliminate as many distractions as possible, I first eliminated my habit of pulling out my phone whenever I got a marginal amount of work done. Now, as soon as I take my phone out of my pocket, I immediately put it back in. To make Slack less of a distraction, I left every Slack room that I did not derive immediate value from. Currently I’m only in a few rooms that are directly relevant to my team and the work I do. In addition, I only allow myself to check Slack at specific times throughout the day. These times are before meetings, as well as before lunch and at the end of the day. I specifically do not check Slack when I first get into the office and instead immediately get started working.

Getting into the Habit of Getting into Flow

Flow is that state of mind where all of your attention is focused solely on the task at hand, sometimes referred to as “the zone”. I’ve worked on setting up my environment to maximize the amount of time I’m in flow. I moved my desk over to the quiet side of the office and try to set up long periods of time where I won’t be interrupted. When I want to get into flow, I’ll put on earmuffs, close all of my open tabs, and focus all of my energy on the task in front of me.

Scheduling My Day Around When I’m Most Productive

When I schedule my day, there are now two goals I have in mind. The first is to arrange all of my meetings together. This is to maximize the amount of time I can get into flow. The worst possible schedule I’ve encountered is having several meetings, all 30 minutes apart from each other. 30 minutes isn’t enough time for me to get any significant work done before being interrupted by my next meeting. Instead, by aligning all of my meetings right next to each other, I go straight from one to the next. This way I have fewer, larger blocks of time where I can get into flow and stay in flow.

The second goal I aim for is to arrange my schedule so I am working at the times of the day when I am most productive. I usually find myself most productive in the mornings. By the time 4pm rolls around, I am typically exhausted and have barely enough energy to get any work done at all. To reduce the effect this had on my productivity, I now schedule meetings specifically at the times of the day when I’m least productive. It doesn’t take a ton of energy to sit through a meeting, and scheduling my day this way allows me to work when I’m most productive. Think of it this way: if I can move a single 30 minute meeting from a time when I’m most productive to a time when I’m least productive, I just added 30 minutes of productive time to my day.

Watching Myself Code

One incredibly useful exercise I’ve found is to watch myself program. Throughout the week, I have a program running in the background that records my screen. At the end of the week, I’ll watch a few segments from the previous week. Usually I will watch the segments where it felt like completing some task took a lot longer than it should have. While watching them, I’ll pay attention to exactly where the time went and figure out what I could have done better. When I first did this, I was really surprised at where all of my time was going.

For example, previously when writing code, I would write all of the code for a new feature up front and then test all of the code collectively. When testing code this way, I would have to isolate which function a bug was in and then debug that individual function. After watching a recording of myself writing code, I realized I was spending about a quarter of the total time implementing the feature just tracking down which functions the bugs were in! This was completely non-obvious to me and I wouldn’t have found it out without recording myself. Now that I’m aware I was spending so much time isolating which function a bug was in, I test each function as I write it to make sure it works. This allows me to write code a lot faster as it dramatically reduces the amount of time it takes to debug my code.

Tracking My Progress and Implementing Changes

At the end of every day, I spend 15 minutes thinking about my day. I think about what went right, as well as what went wrong and how I could have done better. At the end of the 15 minutes, I’ll write up my thoughts. Every Saturday, I’ll reread what I wrote for the week and implement changes based on any patterns I noticed.

As an example of a simple change that came out of this, previously on weekends I would spend an hour or two every morning on my phone before getting out of bed. That was time that would have been better used doing pretty much anything else. To eliminate that problem, I put my phone far away from my bed at night. Then when I wake up, I force myself to get straight into the shower without checking my phone. This makes it extremely difficult for me to waste my morning in bed on my phone, saving me several hours every week.

Being Patient

I didn’t make all of these changes at once. I only introduced one or two of them at a time. If I had tried to implement all of these changes at once, I would have quickly burned out and given up. Instead, I was able to make a lot more changes by introducing each change more slowly. It only takes one or two changes each week for things to quickly snowball. After only a few weeks, I’m significantly more productive than I was previously. Making any change at all is a lot better than making no change. I think Stanford professor John Ousterhout’s quote describes this aptly. In his words, “a little bit of slope makes up for a lot of y-intercept”.

Time for An Intermission

I’ve been writing a lot these past two months. I decided I’m going to take a break for a little bit. I plan on starting to write continuously again within the next 2-4 weeks. That is all.

Postgres JSONB

JSONB is a nifty Postgres type that allows you to store unstructured data inside of Postgres. A common use case of JSONB is to represent a mapping from a set of keys to arbitrary values. JSONB is nice for this because the set of keys can be completely different for each value. It is also possible to express hierarchical data through JSONB.

As an example of where JSONB is incredibly useful, the company I work for, Heap, makes heavy use of JSONB. At Heap, we use JSONB to store events that happen on our customers’ websites. These events include pageviews and clicks, as well as custom events created by our customers. All of these different kinds of events have completely different properties. This makes JSONB a great tool for our use case. More concretely, we have an events table with a fairly simple schema:

CREATE TABLE events (
    user_id bigint,
    time bigint, 
    data jsonb
);

With JSONB, this simple schema is able to take care of most of our use cases. For example, a click event on the “login” button may look something like the following:

INSERT INTO events
SELECT 0 AS user_id,
       1498800692837 AS time,
       '{"type": "click",
         "target_text": "login",
         "page": "/login"}'::jsonb AS data;

And a pageview on the homepage may look like:

INSERT INTO events
SELECT 1 AS user_id,
       1498800692837 AS time,
       '{"type": "pageview",
         "page": "/home",
         "referrer": "www.google.com"}'::jsonb AS data;

JSONB lets us easily express all of these different kinds of events. Then when we want to query the events, it’s fairly easy to get the data out of the data column. For example, if we want to see which pages are viewed most frequently, we can run a query such as:

SELECT (data ->> 'page'), count(*)
FROM events
WHERE (data ->> 'type') = 'pageview'
GROUP BY (data ->> 'page');

We use this same general idea to power all of the analysis Heap is able to perform. This includes funnels (of people that did A, how many later did B) as well as retention queries (of people that did A, how many people did B within N weeks).
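As a rough illustration (a simplified sketch of my own, not the actual queries Heap runs), a funnel question like “of the users who viewed the homepage, how many later clicked the login button?” can be expressed directly against this schema by joining the events table to itself:

-- Of the users who viewed /home, how many later clicked the "login" button?
SELECT count(DISTINCT clicks.user_id)
FROM events AS views
JOIN events AS clicks
  ON clicks.user_id = views.user_id
 AND clicks.time > views.time
WHERE (views.data ->> 'type') = 'pageview'
  AND (views.data ->> 'page') = '/home'
  AND (clicks.data ->> 'type') = 'click'
  AND (clicks.data ->> 'target_text') = 'login';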

Of course JSONB isn’t free. Due to our heavy use of JSONB, we’ve run into a decent number of issues with it. One problem is that the keys need to be repeated in every event, which winds up wasting a lot of space. I did an experiment where I pulled out most of the fields we store in JSONB and found that we could save ~30% of our disk usage by not using JSONB!
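The experiment was essentially to promote the common properties to regular columns so their key names aren’t repeated in every row. A rough sketch of what that alternative schema could look like (hypothetical, not our actual schema):

-- Common properties become ordinary columns; only the long tail of
-- rarely used properties stays in JSONB.
CREATE TABLE events_extracted (
    user_id bigint,
    time    bigint,
    type    text,   -- formerly data ->> 'type'
    page    text,   -- formerly data ->> 'page'
    data    jsonb   -- remaining, less common properties
);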

Another problem that is much worse is the lack of statistics. Normally Postgres collects statistics about the different columns of a table. This includes a histogram of each column as well as an estimate of the number of distinct elements in the column. At query time, Postgres uses these statistics to determine what query plan to use. Currently, Postgres has no way of collecting statistics over JSONB columns. In certain cases, this leads Postgres to choose some very bad query plans. My manager goes into both of these issues in more depth in a blog post he wrote on our company blog.

Depending on your exact needs, JSONB can be a godsend. JSONB makes it easy to store whatever data you want in Postgres without worrying about an overarching common format for all of your data.

track_io_timing

The parameter track_io_timing is a relatively unknown, but super helpful parameter when optimizing queries. As the name suggests, when the parameter is turned on, Postgres will track how long I/O takes. Then, when you run a query with EXPLAIN (ANALYZE, BUFFERS), Postgres will display how much time was spent just performing I/O.

You normally don’t want to have track_io_timing always on since it incurs a significant amount of overhead. To get around this, when you want to time how long a query spends performing I/O, you can use a transaction with SET LOCAL track_io_timing = on;. This will enable track_io_timing only for the duration of the transaction. As a specific example, here’s a simple query over a table I have lying around:

> BEGIN; SET track_io_timing = ON; EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pets; COMMIT;
                                                QUERY PLAN                                                 
-----------------------------------------------------------------------------------------------------------
 Seq Scan on pets  (cost=0.00..607.08 rows=40008 width=330) (actual time=8.318..38.126 rows=40009 loops=1)
   Buffers: shared read=207
   I/O Timings: read=30.927
 Planning time: 161.577 ms
 Execution time: 42.104 ms

The I/O Timings field shows us that of the 42ms spent executing the query, ~31ms was spent performing I/O. Now if we perform the query again when the data is cached:

> BEGIN; SET track_io_timing = ON; EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pets; COMMIT;
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Seq Scan on pets  (cost=0.00..607.08 rows=40008 width=330) (actual time=0.004..7.504 rows=40009 loops=1)
   Buffers: shared hit=207
 Planning time: 0.367 ms
 Execution time: 11.478 ms

We can see the query is just about 31ms faster! This time the query does not show any information about the I/O timing since no time was spent performing I/O, due to the data being cached.

When benchmarking queries, I also make sure to make use of track_io_timing so I can see whether the expensive part of the query is performing I/O, or if the expensive part is something else entirely.
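As a side note, the transcripts above flip the setting on with a plain SET inside the transaction. A version that follows the SET LOCAL approach described earlier would look like the following; the setting then reverts automatically when the transaction ends:

BEGIN;
SET LOCAL track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pets;
COMMIT;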

Avoiding Torn Pages

Postgres reads and writes data to disk 8kb at a time. Most operating systems use a smaller page size, such as 4kb. If Postgres is running on one of these operating systems, an interesting edge case can occur. Since Postgres writes to disk in units of 8kb and the OS writes to disk in units of 4kb, if the power goes out at just the right time, it is possible that only 4kb of an 8kb write Postgres was performing was written to disk. This edge case is sometimes referred to as “torn pages”. Postgres does have a way of working around torn pages, but it does increase the amount of I/O Postgres needs to perform.

Under normal circumstances, Postgres uses a technique called write-ahead logging (WAL) to prevent data loss. At a high level, WAL works by creating a log on disk of all changes made by a transaction to the database, before the changes themselves are persisted to disk. Since appending to a single continuous log on disk is much cheaper than performing random writes to disk, WAL reduces the amount of I/O Postgres needs to perform without the risk of data loss. If Postgres crashes, it will be able to recover all of the changes that weren’t persisted to disk by replaying the WAL.

Although keeping track of all of the changes made does allow Postgres to recover from common crashes in which every write was either done completely or not at all, it does not let Postgres recover from a torn page. Due to the specifics of the implementation of the WAL, in the case of a torn page, the changes alone do not provide Postgres with enough information to determine what changes should be applied to each half of the page.

To recover from torn pages, Postgres does something called “full-page writes”. Whenever Postgres makes a change to a page, it writes a full copy of the page to the WAL. That way, when using the WAL to recover from a crash, Postgres does not need to pay attention to the contents of the page stored on disk. Postgres is able to recover the entire state of the page just from the WAL, sidestepping the problem of torn pages entirely! To avoid constantly writing full copies of every page to the WAL, Postgres checks whether a full copy of the page was recently written to the WAL, and if so, skips writing another full copy since it will still be able to recover the complete page from the WAL.

There is actually a parameter, full_page_writes, that allows you to disable this behavior. If you care about preventing data corruption, there are very few cases in which you should disable it. The only real case is if the OS/filesystem has built-in protection against torn pages. For example, the ZFS filesystem provides its own transactional guarantees and prevents torn pages, largely due to its copy-on-write nature.
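If you want to check or change the setting, it works like any other Postgres parameter. A minimal sketch (again, only disable it if your filesystem truly guarantees it cannot produce torn pages):

-- Full-page writes are on by default.
SHOW full_page_writes;

-- Disable them only when the OS/filesystem prevents torn pages (e.g. ZFS),
-- then reload the configuration.
ALTER SYSTEM SET full_page_writes = off;
SELECT pg_reload_conf();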

Postgres TOAST

TOAST, aka The Oversized Attribute Storage Technique, aka the best thing since sliced bread, is a technique Postgres uses to store large values. Under the normal Postgres data layout, every row is stored on an 8kb page, with no row spanning multiple pages. To support rows that can be larger than 8kb, Postgres makes use of TOAST.

If Postgres is about to store a row and the row is over 2kb, TOAST will kick in and Postgres will first attempt to shrink the row by compressing its large variable width fields. If the row is still over 2kb after compression, Postgres will then move the large fields out of the row, one at a time, until the row is under 2kb. To do so, Postgres splits up each compressed value into individual chunks of ~2kb. Each of those chunks is then stored in a “TOAST table”. Every regular Postgres table has a corresponding TOAST table in which the TOASTed values are stored. A value stored in this manner is commonly referred to as “TOASTed”.

Every TOAST table has three columns. It has a column called chunk_id which identifies which TOASTed value each chunk belongs to. All chunks of the same TOASTed value have the same chunk_id. The chunk_id is what is stored in the original row and is what allows Postgres to determine which chunks belong to a given value. The second column of a TOAST table is chunk_seq, which determines the ordering of the chunks with the same chunk_id. The first chunk of a TOASTed value has chunk_seq=0, the second has chunk_seq=1, and so on. The last column is chunk_data, which contains the actual data for the TOASTed value.

At query time, when a TOASTed value is needed, Postgres uses an index on the TOAST table on (chunk_id, chunk_seq) to look up all of the chunks with a given chunk_id, sorted by chunk_seq. From there, it can stitch all of the chunks back together and decompress the result to obtain the original value.

Under certain circumstances, TOAST can actually make queries faster. If a TOASTed field isn’t needed to answer a query, Postgres doesn’t have to read the chunks for the TOASTed value, and can skip reading the value into memory. In some cases, this will dramatically reduce the amount of disk I/O Postgres needs to perform to answer a query.
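As a sketch of this (using a hypothetical documents table with a large, TOASTed body column), a query that never touches the TOASTed field can skip the TOAST table entirely:

-- Hypothetical table where "body" is large enough to be TOASTed.
CREATE TABLE documents (id bigint, body text);

-- count(*) never needs the "body" values, so the chunks in the TOAST table
-- are not read at all.
SELECT count(*) FROM documents;

-- This query has to fetch and decompress every "body" value.
SELECT id, length(body) FROM documents;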

You can actually access the TOAST table for a given table directly and inspect the values stored in it. As a demonstration, let’s create a table messages that has a single column message:

CREATE TABLE messages (message text);

We can then insert a few random strings to be TOASTed:

INSERT INTO messages
SELECT (SELECT 
        string_agg(chr(floor(random() * 26)::int + 65), '')
        FROM generate_series(1,10000)) 
FROM generate_series(1,10);

Now that we have a table with values we know are TOASTed, we need to look up the name of the TOAST table corresponding to the messages table, which we can do with the following query:

> SELECT reltoastrelid::regclass 
> FROM pg_class 
> WHERE relname = 'messages';
      reltoastrelid      
-------------------------
 pg_toast.pg_toast_59611
(1 row)

The query pulls information from the pg_class Postgres table, which is a table where Postgres stores metadata about tables.

Now that we have the name of the TOAST table, we can read from it just like any other table:

> SELECT * FROM pg_toast.pg_toast_59611;
 chunk_id | chunk_seq | chunk_data
----------+-----------+------------
    59617 |         0 | \x4c4457...
    59617 |         1 | \x424d4b...
...

Note that chunk_data contains the binary representation of the compressed field, so it isn’t exactly human readable.

Overall, TOAST is a clever technique that reuses the ordinary Postgres storage technique to store larger values. It’s completely transparent to the user, yet if you really want to, you can dig into your TOAST tables and see exactly how Postgres is storing your data.

The File Layout of Postgres Tables

Although Postgres may seem magical, it really isn’t. When data is stored in Postgres, Postgres in turn stores that data in regular files in the filesystem. In this blog post, we’ll take a look at how Postgres uses files to represent data stored in the database.

First of all, each table in Postgres is represented by one or more underlying files. Each 1GB chunk of the table is stored in a separate file. It is actually pretty easy to find the underlying files for a table. To do so, you first need to find the Postgres data directory, which is the directory in which Postgres keeps all of your data. You can find where the data directory is by running SHOW DATA_DIRECTORY;. When I run it locally, I see the following:

> SHOW DATA_DIRECTORY;
        data_directory        
------------------------------
 /var/lib/postgresql/9.5/main
(1 row)

Now that you know where the Postgres data directory is, you need to find where the files for the specific table you are looking for are located. To do so, you can use the pg_relation_filepath function with the name of the table you want to find the files for. The function will return the filepath of the files relative to the data directory. Here is what I see when I run the command on a table I have locally:

> SELECT pg_relation_filepath('people');
 pg_relation_filepath 
----------------------
 base/16387/51330
(1 row)

Together with the location of the data directory, this gives us the location of the files for the people table. All of the files are stored in /var/lib/postgresql/9.5/main/base/16387/. The first GB of the table is stored in a file called 51330, the second in a file called 51330.1, the third in 51330.2, and so on. You can actually read and write data to these files yourself, but I strongly suggest not doing so as you will most likely wind up corrupting your database.
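If you want, you can combine the two lookups into a single query (just a convenience; it uses the same functions shown above):

-- Absolute path of the first file of the people table.
SELECT current_setting('data_directory')
       || '/'
       || pg_relation_filepath('people') AS table_file;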

Now that we’ve found the actual files, let’s walk through how each file is laid out. Each file is broken up into 8kb chunks, called “pages”. For example, a 1.5GB table will be stored across two files, totaling 196,608 pages.

Each row is stored on a single page (except when a row is too large, in which case a technique called TOAST is used). Pages are the unit in which Postgres reads and writes data to the filesystem. Whenever Postgres needs to read a row from disk to answer a query, it reads the entire page the row is on. Likewise, when Postgres writes to a row on a page, it writes a whole new copy of the entire page to disk at once. Postgres operates this way for numerous reasons, which are outside the scope of this blog post.

Pages themselves have the following format:

The header is 24 bytes and contains various metadata about the page, including a checksum and information necessary for WAL. The row offsets are a list of pointers into the rows area, with the Nth pointer pointing to the Nth row. The offsets can be used to quickly look up an arbitrary row of a page.

One detail that may surprise you is that the rows are stored starting at the back of the page. This is so the offsets and the actual row data can both grow towards the middle. If a new row is inserted, a new offset is allocated from the front of the free space, and the space for the row itself is allocated from the back of the free space.
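If you want to see this layout for yourself, the pageinspect contrib extension can dump the page header and the row offsets of any page. A quick sketch against the people table from earlier:

CREATE EXTENSION IF NOT EXISTS pageinspect;

-- The 24 byte header of the first page of the people table.
SELECT * FROM page_header(get_raw_page('people', 0));

-- One row per row offset, showing where each row starts on the page
-- (lp_off) and how long it is (lp_len).
SELECT lp, lp_off, lp_len
FROM heap_page_items(get_raw_page('people', 0));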

As for the rows themselves, each row has the following format:

The header of each row is 23 bytes and includes the transaction ids for MVCC as well as other metadata about the row. Based on the table schema, each field of the row is either a fixed width type or a variable width type. If the field is fixed width, Postgres already knows how long the field is and just stores the field data directly in the row.

If the field is variable width, there are two possibilities for how the field is stored. Under normal circumstances, it is stored directly in the row with a header detailing how large the field is. In certain special cases, or when it’s impossible to store the field directly in the row, the field will be stored outside of the row using a technique called TOAST, which we will take a look at in my next post.

To recap, each row is stored on an 8kb page along with several other rows. Each page in turn is part of a 1GB file. While processing a query, when Postgres needs to fetch a row from disk, Postgres will read the entire page the row is stored on. This is, at a high level, how Postgres represents data stored in it on disk.

How to Write a Postgres SQL UDF

Being able to write a Postgres UDF (user-defined function) is a simple skill that goes a long way. SQL UDFs let you give a name to part or all of a SQL query and use that name to refer to that SQL code. It works just like any user-defined function in your favorite programming language.

As a simple example, in the last post we came up with a query for incrementing a counter in a table of counters:

INSERT INTO counters
SELECT <id> AS id, <amount> AS VALUE
    ON CONFLICT (id) DO
    UPDATE SET value = counters.value + excluded.value;

When we used this query multiple times, we had to copy and paste it once for each time we used it. To avoid this problem, we could define a UDF that runs the query and then only increment the counters through the UDF. In general, most of the time when you define a SQL UDF, you’ll use code like the following:

CREATE OR REPLACE FUNCTION <function name>(<arguments>)
RETURNS <return type> AS $$
  <queries to run>
$$ LANGUAGE SQL;

This will define a UDF with the given name that runs the queries in the body whenever it is called. Inside of the queries, you’ll be able to refer to any of the arguments passed to the function. If we convert the query we had for incrementing a counter into a UDF, we wind up with the following UDF definition:

CREATE OR REPLACE FUNCTION 
increment_counter(counter_id bigint, amount bigint)
-- Use void as the return type because this function 
-- returns no value.
RETURNS void AS $$
  INSERT INTO counters
  SELECT counter_id AS id, amount AS value
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + excluded.value;
$$ LANGUAGE SQL;

With this UDF we can now use the UDF instead of the original query:

> SELECT * FROM counters;
 id | value 
----+-------
(0 rows)

> SELECT increment_counter(1, 10);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    10
(1 row)

> SELECT increment_counter(1, 5);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    15
(1 row)

> SELECT increment_counter(2, 5);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    15
  2 |     5
(2 rows)

> SELECT increment_counter(3, 20);

> SELECT * FROM counters;
 id | value 
----+-------
  1 |    15
  2 |     5
  3 |    20
(3 rows)

This is much better than what we had before.

One of the more interesting classes of UDFs is those that return rows instead of a single result. To define such a UDF, you specify TABLE (<columns>) as the return type. For example, if we wanted a UDF that returned the top N counters, we could define one as such:

CREATE OR REPLACE FUNCTION top_counters(n bigint)
RETURNS TABLE (id bigint, value bigint) AS $$
  SELECT * FROM counters ORDER BY value DESC LIMIT n;
$$ LANGUAGE SQL;

Then we can use it like:

> SELECT * FROM top_counters(2);
 id | value 
----+-------
  3 |    20
  1 |    15
(2 rows)

You can then use the function as part of a larger SQL query. For example, if you wanted to find the sum of the values of the top 10 counters, you could do that with the following straightforward SQL query:

SELECT sum(value) FROM top_counters(10);

To recap, UDFs are a great way to simplify SQL queries. I find them to be especially useful when I am reusing the same subquery in a bunch of different places.

Postgres Upserts

Since version 9.5, Postgres has supported a useful feature called UPSERT. For a reason I can’t figure out, this feature is referred to as UPSERT even though there is no UPSERT SQL command. In addition to being a useful feature, UPSERT is fairly interesting from a “behind the scenes” perspective as well.

If you haven’t noticed yet, the word “upsert” is a portmanteau of the words “update” and “insert”. As a feature, UPSERT allows you to insert new data if that data does not already exist, and to specify an action to be performed instead if it does. More specifically, when there is a unique constraint on a column (a constraint specifying that all values of the column are distinct from each other), UPSERT allows you to say “insert this row if it does not violate the unique constraint, otherwise perform this action to resolve the conflict”.

As an example, let’s say we have a counters table where each row represents a counter. The table has two columns, id and value, where the id specifies the counter we are referring to, and value is the number of times the counter has been incremented. It would be nice if we could increment a counter without needing to create the counter in advance. This is exactly the kind of problem UPSERT solves. First let’s create the table:

CREATE TABLE counters (id bigint UNIQUE, value bigint);

It’s important that the id column is marked as unique. Without that we would be unable to use UPSERT.

To write an UPSERT query, you first write a normal INSERT for the case when the constraint is not violated. In this case, when a counter with a given id does not already exist, we want to create a new counter with the given id and the value 1. An INSERT that does this looks like:

INSERT INTO counters (id, value)
SELECT <id> AS id, 1 AS value;

Then to make it an UPSERT, you add ON CONFLICT (<unique column>) DO <action> to the end of it. The action can either be NOTHING, in which case the conflicting row is simply not inserted, or it can be UPDATE SET <column1> = <expr1>, <column2> = <expr2> …, which will modify the existing row and update the given columns to the new values. In this case we want to use the UPDATE form to increment the value of the counter. The whole query winds up looking like:

INSERT INTO counters
SELECT <id> AS id, 1 AS value
    ON CONFLICT (id) DO 
    UPDATE SET value = counters.value + 1;

When you run the above command with a given id, it will create a new counter with the value 1 if a counter with the id does not already exist. Otherwise it will increment the value of the existing counter. Here’s some examples of its use:

> SELECT * FROM counters;
 id | value
----+-------
(0 rows)

> INSERT INTO counters
  SELECT 0 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     1
(1 row)

> INSERT INTO counters
  SELECT 0 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     2
(1 row)

> INSERT INTO counters
  SELECT 0 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     3
(1 row)

> INSERT INTO counters
  SELECT 1 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     3
  1 |     1
(2 rows)

> INSERT INTO counters
  SELECT 1 AS id, 1 AS VALUE
      ON CONFLICT (id) DO
      UPDATE SET value = counters.value + 1;

> SELECT * FROM counters;
 id | value
----+-------
  0 |     3
  1 |     2

One last bit about UPSERT: you can use the faux table excluded to refer to the new row being inserted. This is useful if you either want to replace the values of the old row with the values of the new row, or make the values of the row a combination of the old and new values. As an example, let’s say we want to extend the counter example to increment by an arbitrary amount. That can be done with:

INSERT INTO counters
SELECT <id> AS id, <amount> AS VALUE
    ON CONFLICT (id) DO
    UPDATE SET value = counters.value + excluded.value;

This even works if you are incrementing multiple counters simultaneously all by different amounts.

What makes UPSERT so interesting to me is that it works even in concurrent situations. UPSERT still works even if other INSERT and UPDATE queries are all running simultaneously! Prior to the UPSERT feature, there was a fairly complex method to emulate UPSERT. That method involved using PL/pgSQL to alternate between running INSERT and UPDATE statements until one of them succeeded. The statements need to be run in a loop because it is possible for a different INSERT to run before the UPSERT’s INSERT is run, and a row could be deleted before the UPDATE can be run. The UPSERT feature takes care of all of this for you, while at the same time providing a single command for the common pattern of inserting data if it does not already exist and otherwise modifying the old data!
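For comparison, here is a rough sketch of that older emulation, loosely following the example in the Postgres documentation (increment_counter_legacy is just a name I made up for the sketch). It loops, trying an UPDATE and then an INSERT, until one of them succeeds:

CREATE OR REPLACE FUNCTION increment_counter_legacy(counter_id bigint, amount bigint)
RETURNS void AS $$
BEGIN
    LOOP
        -- First try to update an existing counter.
        UPDATE counters SET value = value + amount WHERE id = counter_id;
        IF found THEN
            RETURN;
        END IF;
        -- No counter with this id yet, so try to insert one. Another session
        -- may insert the same id first, in which case we loop and retry the
        -- UPDATE.
        BEGIN
            INSERT INTO counters (id, value) VALUES (counter_id, amount);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing; loop back and try the UPDATE again.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;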

Postgres Transaction Isolation Levels

This post is part 3 in a three part series exploring transaction isolation issues in Postgres. Part 1 introduced some of the common problems that come up with transaction isolation in Postgres. Part 2 introduced row level locks, one way of dealing with some of the problems around transaction isolation. This part introduces the different transaction isolation levels and how they affect the different problems introduced in part 1.

The main and easiest way to deal with the transaction isolation issues introduced in part 1 is to change the transaction isolation level. In Postgres, each transaction has its own isolation level. The isolation level of a transaction determines the isolation semantics of the transaction. In other words, the transaction isolation level determines what state of the database the statements run within the transaction see, as well as how concurrent modifications to the same row are resolved.

Postgres provides a total of three different transaction isolation levels: read committed, repeatable read, and serializable. Read committed is the default and provides the semantics we’ve been discussing so far. The isolation semantics described in part 1 around SELECT, UPDATE, and DELETE are the semantics provided under read committed. The repeatable read and serializable isolation levels each provide a different set of semantics for SELECT, UPDATE, and DELETE. The semantics under repeatable read are as follows:

SELECT

Whenever a repeatable read transaction starts, it takes a snapshot of the current state of the database. All queries within the transaction use this snapshot. This is opposed to the behavior of read committed, in which each query has its own snapshot. The effects of all transactions that committed before the transaction started are visible to all queries within the transaction, while all transactions that either failed or committed after the transaction started are invisible to all statements run in the transaction.

UPDATE and DELETE

Under repeatable read, UPDATE and DELETE use the snapshot created when the transaction started to find all rows to be modified. When an UPDATE or DELETE statement attempts to modify a row currently being modified by another transaction, it waits until the other transaction commits or aborts. If the other transaction aborts, the statement modifies the row and continues. On the other hand, if the other transaction commits, the UPDATE/DELETE statement will abort.

This behavior is completely different from that provided by read committed and somewhat surprising. In read committed, if two statements attempt to modify the same row at the same time, the modifications will be performed one after the other. In repeatable read, one of the statements will be aborted to prevent any isolation issues! This is generally the reason why people prefer read committed over repeatable read. When someone uses repeatable read, their code has to be prepared to retry transactions if any of them fail.


On its own, repeatable read prevents all of the transaction isolation problems but one. The only class of issues repeatable read does not prevent is serialization anomalies. That is what the serializable transaction isolation level is for. The serializable isolation level behaves exactly like repeatable read, except it specifically detects when two transactions will not serialize correctly and aborts one of them to prevent a serialization anomaly. Like repeatable read, if you use serializable you should have code ready to retry aborted transactions.

To change the isolation level Postgres uses, you can set default_transaction_isolation to the desired level. All of the examples in the rest of this post use the repeatable read isolation level, unless explicitly mentioned otherwise.
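For reference, here is what changing the level looks like in practice, either for a single transaction or as the session default:

-- For a single transaction:
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- ... statements ...
COMMIT;

-- Or as the default for the current session:
SET default_transaction_isolation = 'repeatable read';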

Now that we’ve got an understanding of the semantics behind the other isolation levels, let’s take a look at how they affect the examples in part 1.

Non-Repeatable Reads

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints;
 n 
---
 1

S2> BEGIN;
S2> UPDATE ints SET n = 2;
S2> COMMIT;

S1> SELECT * FROM ints;
 n 
---
 1

S1> COMMIT;

S3> BEGIN;
S3> SELECT * FROM ints;
 n 
---
 2

S3> COMMIT;

With the repeatable read isolation level, S1 avoids the non-repeatable read since, unlike read committed, the second query makes use of the snapshot created at the start of the transaction. This is where repeatable read gets its name: it can be used to avoid non-repeatable reads.

Lost Updates

Due to the semantics around conflicting updates, repeatable read does prevent lost updates, but it does so in a less than ideal way:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

S1> BEGIN;
S1> SELECT * FROM ints;
 n
---
 1

S2> BEGIN;
S2> SELECT * FROM ints;
 n
---
 1

S2> UPDATE ints SET n = 2; -- Computed server side.
S2> COMMIT;

S1> UPDATE ints SET n = 2; -- Computed server side.
ERROR:  could not serialize access due to concurrent update

S1> ROLLBACK;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 2

S3> COMMIT;

What happened here is that since two UPDATEs attempted to modify the same row at the same time, Postgres aborted one of them to prevent a lost update. For this reason, if you are just trying to avoid lost updates, you should prefer to use row level locks with SELECT … FOR UPDATE under read committed. That will allow both UPDATEs to be performed, without either UPDATE being lost or aborted.
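For reference, a sketch of that row level lock approach from part 2: the SELECT … FOR UPDATE blocks the other transaction instead of aborting it, so both increments get applied one after the other:

BEGIN;
-- Lock the row; any other transaction trying to modify it waits until we
-- commit or roll back.
SELECT * FROM ints WHERE n = 1 FOR UPDATE;
-- Compute the new value from the row we just read, then write it back.
UPDATE ints SET n = 2 WHERE n = 1;
COMMIT;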

Phantom Reads

Repeatable read eliminates phantom reads for the same reason it eliminates non-repeatable reads:

S0> BEGIN;
S0> SELECT count(*) FROM ints;
 count
-------
     0

S1> BEGIN;
S1> INSERT INTO ints SELECT 1;
S1> COMMIT;

S0> SELECT count(*) FROM ints;
 count
-------
     0

S0> COMMIT;

S2> BEGIN;
S2> SELECT COUNT(*) FROM ints;
 count
-------
     1

S2> COMMIT;

Preventing phantom reads is one reason why you would prefer to use the repeatable read isolation level instead of row level locks.

Skipped Modification

Just like with a lost update, repeatable read will abort one of the transactions in a skipped modification:

S0> BEGIN;
S0> INSERT INTO ints SELECT 1;
S0> INSERT INTO ints SELECT 2;
S0> SELECT * FROM ints;
 n
---
 1
 2

S0> COMMIT;

S1> BEGIN;
S1> UPDATE ints SET n = n+1;

S2> BEGIN;
S2> DELETE FROM ints WHERE n = 2;
-- S2 blocks since the DELETE is trying to modify a row
-- currently being updated.

S1> COMMIT;
-- S2 aborts with the error:
ERROR:  could not serialize access due to concurrent update

S2> ROLLBACK;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 2
 3

S3> COMMIT;

S2 is aborted for the same reason S1 is aborted in the lost update example. The DELETE tries to modify a row which was modified after the snapshot was taken. Since the version of the row in the snapshot is out of date, Postgres aborts the transaction to prevent any isolation issues.

Serialization Anomalies

Unlike all of the other interactions, repeatable read does not eliminate serialization anomalies:

S0> BEGIN;
S0> SELECT count(*) FROM ints;
 count
-------
     0
(1 row)
 
S1> BEGIN;
S1> SELECT count(*) FROM ints;
 count
-------
     0
(1 row)
 
S1> INSERT INTO ints SELECT 1;
S1> COMMIT;
 
S0> INSERT INTO ints SELECT 1;
S0> COMMIT;

Fortunately, if you do want to completely prevent serialization anomalies, you can use the serializable isolation level. If we use serializable in the example instead of repeatable read, here is what happens:

S0> BEGIN;
S0> SELECT count(*) FROM ints;
 count
-------
     0

S1> BEGIN;
S1> SELECT count(*) FROM ints;
 count
-------
     0

S1> INSERT INTO ints SELECT 1;
S1> COMMIT;

S0> INSERT INTO ints SELECT 1;
ERROR:  could not serialize access due to read/write dependencies among transactions
DETAIL:  Reason code: Canceled on identification as a pivot, during write.
HINT:  The transaction might succeed if retried.

S0> ROLLBACK;

S3> BEGIN;
S3> SELECT * FROM ints;
 n
---
 1

S3> COMMIT;

What happened here is that Postgres detected that the pattern of reads and writes wouldn’t serialize properly, so it aborted one of the transactions.


Like row level locks, the repeatable read and serializable isolation levels are simple, yet at the same time they introduce a lot of complexity. Use of either repeatable read or serializable dramatically increases the chances that any given transaction will fail, making code that interacts with the database much more complicated, and database performance much less predictable.

In general, if you can, you should try to use the read committed isolation level and write your code in such a way that you don’t run into the different isolation issues mentioned in part 1. If you absolutely have to, you can use the tools mentioned in these last two posts to fend off all of the isolation issues.