How to implement Pagination properly

by Daniel Lübke

Pagination is a very important pattern in API design. Thus, it is part of our Microservice API Patterns project, where you can also find a good explanation of it. However, I often see this pattern implemented without reaping all of its benefits with regard to efficiency. In this blog post I want to discuss two common pitfalls and options for addressing them.

Pagination Pattern


The Pagination pattern is essentially a mechanism for chunking a large result set and giving the consumer of an API a way to request those chunks. Chunks in this pattern are called pages - thus its name - and consist of a number of records to be returned. This allows clients to request a manageable amount of data that, due to its reduced size, can be a) transferred quickly over the network and b) processed with less memory and possibly fewer other computing resources on the client side. As a user, you often experience this pattern when large search results, e.g., in Google or any reasonable online shop, are paged and you can navigate through the result pages.

But this pattern can also help to reduce server-side resource consumption, an aspect that is unfortunately neglected quite frequently. Let's discuss two design decisions you have to make.

Pitfall 1: Filtering on the Application Server

The first problem, which has only become more prevalent with the introduction of Java's lambda expressions and the Stream API, is to fetch all data from the database server and filter it on the application server. A typical example of code that I regularly find is as follows:


List<Result> results = someDao.getResults(filter1, filter2, filter3);
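// Everything below operates on the full result set, which has already been loaded into application-server memory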
int totalCount = results.size();
List<Result> page = results.stream().filter(...).skip(...).limit(...).collect(toList());

However, by transferring the whole dataset from the database to the application server, this solution consumes network bandwidth between the database and the application servers, increases the load on the database server, and will eventually exceed the heap size of your application server when the number of users or the data size grows. Consequently, every efficient implementation of the Pagination pattern must use efficient queries that return only the rows necessary for creating the paginated response. Unfortunately, database servers use different SQL extensions for achieving this: MySQL has a LIMIT x, y clause, Microsoft SQL Server offers OFFSET ... FETCH and ROW_NUMBER(), Oracle has ROWNUM, and so on. This means that you must either learn your database's syntax or get to know your O/R mapping framework's way of handling this. For example, JPA and Hibernate offer the setFirstResult and setMaxResults methods on a Query object. It does not matter how you implement your database access, but be sure (and check) that only the necessary data is fetched from the database! I have frequently seen software - also infrastructure components - that uses pagination on the web frontend but crashes with out-of-memory errors when operated on large datasets. This is always a hint of a bad implementation.

Hint: If you see excessive use of for-loops and streams on database results, it is a clear smell that the queries are not performed optimally!
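
For illustration, here is a minimal sketch of database-side paging with JPA. The Result entity, the category filter and the ResultDao class are made up for this example; the essential point is that setFirstResult and setMaxResults push the skipping and limiting down into the generated SQL:

import java.util.List;
import javax.persistence.EntityManager; // jakarta.persistence in newer JPA versions

// Hypothetical DAO, for illustration only.
public class ResultDao {

    private final EntityManager em;

    public ResultDao(EntityManager em) {
        this.em = em;
    }

    // Returns exactly one page; skipping and limiting happen in the database, not in application memory.
    public List<Result> getResultPage(String filter, int page, int pageSize) {
        return em.createQuery(
                "SELECT r FROM Result r WHERE r.category = :filter ORDER BY r.id", Result.class)
            .setParameter("filter", filter)
            .setFirstResult(page * pageSize) // rows to skip (offset); a stable ORDER BY keeps pages deterministic
            .setMaxResults(pageSize)         // rows per page (limit)
            .getResultList();
    }
}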

Pitfall 2: Number of Total Results

Another frequent problem is fetching the total number of records, which is a very common requirement. Unfortunately, this is often the reason for the wrong implementation described in Pitfall 1 above: developers argue that they need to load the whole result set into memory in order to return parts of the data and calculate the total number of results. Fortunately, this is not the case.

There are essentially two options for implementing this without risking running out of heap memory:

  1. Use a separate query to fetch the count of the result (a sketch follows right after this list), or
  2. Use a combined query to fetch both with native SQL where available (an example follows further below).
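
As a sketch of the first option, a separate count query could be added to the hypothetical ResultDao shown above; again, entity and filter names are made up:

// Option 1 (sketch): a dedicated COUNT query; the database computes the total and no rows are transferred.
public long countResults(String filter) {
    return em.createQuery(
            "SELECT COUNT(r) FROM Result r WHERE r.category = :filter", Long.class)
        .setParameter("filter", filter)
        .getSingleResult();
}

The page itself would still be fetched with getResultPage from the sketch above, so this option needs two round trips to the database.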

Neither option consumes additional memory on the application server. However, they have different benefits and drawbacks with respect to the following quality goals:

  • Database Load & Response Time: Using a combined query reduces the database load because only one query is run, which returns everything that is needed. With two separate queries, the query logic is effectively executed twice, which increases the response time of your API call (it will not double, though, because the count query transfers far less data). This gets more important the more complex the query for fetching the data is.
  • Data Consistency: Running two separate queries without a transaction can return a wrong total count because the data might change between the two queries. However, this is (usually) not of much concern in practice, because the data could just as well change before the client requests the next page anyway.
  • Concurrency: The two queries for the result and the total count can be combined within a transaction to guarantee consistency between the total number and the result. However, this results in longer-held and possibly larger locks on the affected tables, thereby reducing concurrency.
  • Developer Skills: Astonishingly, many developers are not proficient in SQL and can "only" use JPA or other frameworks to access data. Such developers can be overwhelmed when they have to use specialized SQL like this.
  • Database Access Consistency: If all database queries are performed using an O/R mapper like JPA, it might be a worthwhile trade-off to keep that consistency and not use native SQL.
  • Portability: The required native SQL is neither standardized nor portable. If you have to support many different databases, you will likely choose the option with two queries. However, like strong data consistency in this(!) case, portability across databases is also often overrated, because database migrations are very rare and many software systems will only ever use one type of database anyway.
  • Available Database: Not every database offers the possibility to fetch the chunked result and the total count in one query. In that case your implementation is left with two queries.
In my current project we opted to use the native functionality built into the database server in order to provide the most efficient implementation. Portability was not an issue. MS SQL Server is used, and there is a very good overview of pagination-with-count options by Andrey Zavadskiy. A word of warning, though: we also ran our own measurements for our use case in our environment, and the efficiency of these options differed from his analysis. This means that the results are likely highly influenced by your environment and your use case. As a result, you have to run the experiment yourself to determine which option is best for you.
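
To make the combined-query option more concrete, here is one possible variant for Microsoft SQL Server, again added to the hypothetical ResultDao sketched above. It illustrates the technique and is not necessarily the variant that performed best in our measurements; table and column names are made up. COUNT(*) OVER() is evaluated before OFFSET/FETCH limits the rows, so every returned row carries the total count:

// Option 2 (sketch): one native query returns the requested page plus the total count on SQL Server.
@SuppressWarnings("unchecked")
public PageWithCount fetchPageWithCount(String filter, int page, int pageSize) {
    List<Object[]> rows = em.createNativeQuery(
            "SELECT r.id, r.name, COUNT(*) OVER() AS total_count " +
            "FROM results r WHERE r.category = ?1 " +
            "ORDER BY r.id OFFSET ?2 ROWS FETCH NEXT ?3 ROWS ONLY")
        .setParameter(1, filter)
        .setParameter(2, page * pageSize)
        .setParameter(3, pageSize)
        .getResultList();
    // Note: if the requested page lies beyond the end of the result set, no rows (and thus no count) come back.
    long totalCount = rows.isEmpty() ? 0 : ((Number) rows.get(0)[2]).longValue();
    return new PageWithCount(rows, totalCount);
}

// Minimal holder for the example (requires Java 16+; a small POJO works just as well).
public record PageWithCount(List<Object[]> rows, long totalCount) {}

Which of the possible variants is fastest depends, as said above, on your data and your environment, so measure before you commit to one.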

With these two pitfalls in mind you can write (or review) a paginated service implementation that won't break under increased load (which, as we all know, never happens on the development machine but only later in production...). But also as a tester you are now aware of possible problems: try to load-test paginated user interfaces or APIs with large datasets and watch the heap consumption of the application.

I hope you found this article useful and you can now avoid some problems in your API implementations. Please consider subscribing to notifications for new articles below and forward this article to fellow developers who are concerned with implementing paginated APIs or user interfaces. Also, have a look at the Microservice API Patterns (MAP) website because it has more information about the Pagination pattern, as well as many other patterns to structure, improve and evolve service interface contracts.

If you want to comment, please drop me an email or engage on Twitter.

