This is part one of a three-part series on the main ways Postgres can fetch rows from a table.
Of the three main ways Postgres can fetch rows from a table, a sequential scan is the most basic. To execute a sequential scan, Postgres literally iterates through the table one row at a time and returns the rows requested by the query. As an example, let’s say we have a table people that looks like the following:
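(The original sample table is not shown here; as a stand-in, here is a minimal hypothetical version of it, with made-up names and ages:)

CREATE TABLE people (name text, age int);

INSERT INTO people (name, age) VALUES
    ('Alice', 32),
    ('Bob',   40),
    ('Carol', 51);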
And the query:
SELECT * FROM people WHERE age > 35;
If Postgres executes this query with a sequential scan, it will iterate through each person in the people table, check whether that person is older than 35, and include the person in the result only if they pass the filter.
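The row-at-a-time logic is simple enough to sketch in a few lines. This is a toy in-memory model, not Postgres's actual executor (which works over disk pages), and the sample rows are made up for illustration:

```python
# Hypothetical sample data standing in for the people table.
people = [
    {"name": "Alice", "age": 32},
    {"name": "Bob", "age": 40},
    {"name": "Carol", "age": 51},
]

def seq_scan(table, predicate):
    """Visit every row in order, yielding only rows that pass the filter."""
    for row in table:
        if predicate(row):
            yield row

# Rough equivalent of: SELECT * FROM people WHERE age > 35;
result = list(seq_scan(people, lambda row: row["age"] > 35))
```

Note that the scan must visit every row, even the ones the filter rejects; that is what makes it "sequential".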
There are two main reasons why Postgres will execute sequential scans. The first is that a sequential scan is always possible. No matter what the schema of the table is, or what indexes exist on the table, Postgres always has the option of executing a sequential scan.
The other main reason is that in some cases a sequential scan is actually faster than the alternatives. When reading data from disk, reading it sequentially is usually much faster than reading it in random order. If a query will return a large portion of the table, a sequential scan usually winds up being the cheapest way to execute it, because a sequential scan performs sequential I/O whereas the other options mostly perform random I/O.
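One way to see which strategy Postgres picked is EXPLAIN. Assuming the people table above, the plan for our query would look roughly like the following (the cost, rows, and width figures vary by installation and table size, so they are elided here):

EXPLAIN SELECT * FROM people WHERE age > 35;

--                     QUERY PLAN
-- --------------------------------------------------------
--  Seq Scan on people  (cost=... rows=... width=...)
--    Filter: (age > 35)

The "Seq Scan" node tells us the planner chose a sequential scan, and the Filter line shows the condition applied to each row as it is read.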