When paging through data that comes from a DB, you need to know how many pages there will be to render the page jump controls.
Currently I do that by running the query twice: once wrapped in a count() to determine the total number of results, and a second time with a LIMIT applied to get back just the results I need for the current page.
This seems inefficient. Is there a better way to determine how many results would have been returned before LIMIT was applied?
I am using PHP and Postgres.
You can use a window function for this. Just add it to your SELECT list:

SELECT foo, count(*) OVER() AS full_count
FROM   bar
WHERE  <some condition>
ORDER  BY <some col>
LIMIT  <pagesize>
OFFSET <offset>;
Note that this can be considerably more expensive than the same query without the total count: all rows have to be counted, and a possible shortcut of taking just the top rows from a matching index may not help anymore. Doesn't matter much with small tables or full_count <= OFFSET + LIMIT. Matters for a substantially bigger full_count.
Corner case: when OFFSET is at least as great as the number of rows from the base query, no row is returned, so you also get no full_count. Possible alternative:
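One way around the corner case (a sketch, not the only option) is to compute the count over the filtered rows in a CTE and RIGHT JOIN the requested page to it, so the count row survives even when LIMIT / OFFSET selects nothing:

WITH cte AS (
   SELECT *
   FROM   bar
   WHERE  <some condition>
   )
SELECT sub.foo, c.full_count
FROM  (
   SELECT foo
   FROM   cte
   ORDER  BY <some col>
   LIMIT  <pagesize>
   OFFSET <offset>
   ) sub
RIGHT  JOIN (SELECT count(*) FROM cte) c(full_count) ON true;  -- count row always survives

With an offset past the end of the result, this returns a single row with foo set to NULL and the correct full_count.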
Consider the sequence of events (annotated on the example query below):

1. The WHERE clause (and JOIN conditions, but none here) filters qualifying rows from the base table(s). (GROUP BY and aggregate functions would go here.)
2. Window functions are applied, considering all qualifying rows (depending on the OVER clause and the frame specification of the function). The simple count(*) OVER() is based on all rows.
3. ORDER BY sorts the qualifying rows. (DISTINCT or DISTINCT ON would go here.)
4. LIMIT / OFFSET are applied based on the established order to select rows to return.
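To make that order of operations concrete, here is the query from above again with each clause annotated by the step it belongs to (placeholders as before):

SELECT foo,
       count(*) OVER() AS full_count  -- step 2: window function sees all qualifying rows
FROM   bar
WHERE  <some condition>               -- step 1: filter qualifying rows
ORDER  BY <some col>                  -- step 3: sort
LIMIT  <pagesize>                     -- step 4: select rows to return ...
OFFSET <offset>;                      -- ... after skipping <offset> rows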
LIMIT / OFFSET becomes increasingly inefficient with a growing number of rows in the table, because every skipped row still has to be produced and discarded. Consider alternative approaches, such as the keyset pagination described in the last answer below, if you need better performance.
There are completely different approaches to get the count of affected rows (not the full count before LIMIT & OFFSET were applied). Postgres has internal bookkeeping for how many rows were affected by the last SQL command. Some clients can access that information or count rows themselves (like psql).
For instance, you can retrieve the number of affected rows in plpgsql immediately after executing an SQL command with:
GET DIAGNOSTICS integer_var = ROW_COUNT;
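For context, a minimal PL/pgSQL sketch (the function name fetch_page and its parameters are made up for illustration; table and columns as in the question):

CREATE OR REPLACE FUNCTION fetch_page(_pagesize int, _offset int)
  RETURNS int
  LANGUAGE plpgsql AS
$$
DECLARE
   _ct int;
BEGIN
   -- Run the paged query. PERFORM discards the rows; a real function
   -- would return or process them instead.
   PERFORM foo
   FROM   bar
   ORDER  BY foo
   LIMIT  _pagesize
   OFFSET _offset;

   GET DIAGNOSTICS _ct = ROW_COUNT;  -- rows affected by the last command
   RETURN _ct;  -- rows on this page, not the full count before LIMIT
END
$$;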
Or you can use pg_num_rows in PHP, or similar functions in other clients.
As I describe on my blog, MySQL has a feature called SQL_CALC_FOUND_ROWS. This removes the need to do the query twice, but it still needs to do the query in its entirety, even if the LIMIT clause would have allowed it to stop early.
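For reference, the MySQL usage looks like this, as two statements on the same connection (note that SQL_CALC_FOUND_ROWS has since been deprecated in MySQL 8.0):

SELECT SQL_CALC_FOUND_ROWS foo
FROM   bar
WHERE  <some condition>
ORDER  BY <some col>
LIMIT  10;

-- Immediately afterwards, in the same session:
SELECT FOUND_ROWS();  -- full count the first query would have returned without LIMIT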
As far as I know, there is no similar feature for PostgreSQL.

One thing to watch out for when doing pagination (the most common thing LIMIT is used for, IMHO): doing an "OFFSET 1000 LIMIT 10" means that the DB has to fetch at least 1010 rows, even if it only gives you 10. A more performant way is to remember the value of the column you are ordering by for the last row of the previous page (the 1000th in this case) and rewrite the query like this: "... WHERE order_row > value_of_1000_th LIMIT 10" (see the sketch below). The advantage is that "order_row" is most probably indexed (if not, you've got a problem). The disadvantage is that if new elements are added between page views, this can get a little out of sync (but then again, it may not be observable by visitors, and it can be a big performance gain).
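A minimal sketch of that keyset approach, assuming an index on order_row and using the same placeholder style as above:

-- First page: no predicate needed.
SELECT *
FROM   bar
ORDER  BY order_row
LIMIT  10;

-- Next page: start right after the last order_row value seen on the
-- previous page instead of skipping rows with OFFSET.
SELECT *
FROM   bar
WHERE  order_row > <last order_row from previous page>
ORDER  BY order_row
LIMIT  10;

One caveat of the rewrite: it assumes order_row values are unique; with duplicates you need a tie-breaker column in both the ORDER BY and the WHERE condition.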