Hugh's Blog

Handling Pagination with Async Iterators

When you're interacting with a server from your frontend JavaScript code, you might need to handle paging. Paging is a technique API designers use to avoid enormous (and sometimes impossibly large) responses when giving clients access to large collections of information. Instead of returning every single item in a collection as a response to a request, an API might return the first 50 items, along with a message to the client saying "this isn't all the items in the collection. If you want to get the next 50 items, here's how".

That's what the Spotify API does. When you need to get a list of albums by particularly prolific performers, you won't necessarily be able to get them all in one page, and will have to handle pagination to get all the albums.

It's possible to interact with pagination in an imperative way.

let artistId = '6sFIWsNpZYqfjUpaCgueju';

async function loadAlbums(artistId, authToken) {
  let endpoint = `https://api.spotify.com/v1/artists/${artistId}/albums?limit=20&include_groups=album`;

  let albums = [];
  // We'll set endpoint to the `next` URL when we receive it in the response.
  // When there is no more data, the API will set `next` to null, and we'll
  // escape this while loop.
  while (endpoint) {
    const response = await fetch(endpoint, {
      headers: {
        "Authorization": `Bearer ${authToken}`
      }
    });

    if (!response.ok) {
      throw new Error("Request failed");
    }

    const page = await response.json();

    albums = albums.concat(page.items);

    endpoint = page.next;
  }

  return albums;
}

for (let album of (await loadAlbums(artistId, YOUR_OWN_AUTH_TOKEN))) {
  console.log(album.name);
}

This code works, but there are some problems with it.

The code that is consuming the data is mixed with the code that handles pagination.

You can extract the code that handles the pagination by converting the whole block into an async function. But since a function can only return data once, you have to wait until all the requests are finished before you can return albums and use them.

This is where async generators come in. Generators are functions that can yield multiple results, rather than just one. Asynchronous (async) generators are analogous to Promises that can resolve multiple times. The language also provides syntactic sugar for iterating over the yielded values: the for await ... of syntax.
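To make that concrete, here's a minimal, self-contained sketch (the names countTo and collect are mine, not part of any API): an async generator yields several values one at a time, and a consumer reads them with for await ... of.

```javascript
// A minimal async generator: it can yield many values, pausing at each
// `yield` until the consumer asks for the next one.
async function* countTo(limit) {
  for (let i = 1; i <= limit; i++) {
    // In real code you might await a network request here before yielding.
    yield i;
  }
}

// Consume the yielded values with `for await ... of`.
async function collect(asyncIterable) {
  const values = [];
  for await (const value of asyncIterable) {
    values.push(value);
  }
  return values;
}

collect(countTo(3)).then((values) => console.log(values)); // logs [ 1, 2, 3 ]
```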

Async iterators are one solution to this problem - observables are another, but they haven't made it into the ECMAScript specification.

The following example code demonstrates how to use a recursive async generator to yield each page of albums one by one until we're out of pages. You'll see how the code that consumes the albums uses the for await ... of syntax to access the results of the generator.

async function* pageThroughResource(endpoint, authToken) {
  async function* makeRequest(_endpoint) {
    const response = await fetch(_endpoint, {
      "headers": {
        "Authorization": `Bearer ${authToken}`
      }
    });

    if (!response.ok) {
      throw new Error(await response.text());
    }

    const page = await response.json();

    yield page;

    if (page.next) {
      yield* makeRequest(page.next);
    }
  }

  yield* makeRequest(endpoint);
}

async function* loadAlbums(artistId, authToken) {
  const endpoint = `https://api.spotify.com/v1/artists/${artistId}/albums?limit=20&include_groups=album`;
  const result = pageThroughResource(endpoint, authToken);

  for await (const page of result) {
    for (let album of page.items) {
      yield album;
    }
  }
}

for await (const album of loadAlbums("6sFIWsNpZYqfjUpaCgueju", YOUR_OWN_AUTH_TOKEN)) {
  console.log(album.name);
}

In this example, the code that's responsible for making requests to the paginated external service is abstract - the behavior responsible for managing the pagination (the pageThroughResource function) doesn't know about what it's paginating through. The logic that knows about loading albums (the loadAlbums function) is what handles the specific details of the API that we're calling. The only assumption that the pageThroughResource function makes is that the response object from the API returns a field called next, which provides the URL of the next page of the resource listing. This means that you can re-use the pageThroughResource function on any API call you need to make that has the same pagination design.

The code achieves the separation of these two distinct behaviors by creating functions that return asynchronous iterators. pageThroughResource returns an asynchronous iterator, but also internally defines another function, makeRequest, that also returns an asynchronous iterator. pageThroughResource uses the yield* syntax to delegate to the async iterator that makeRequest returns. The code is organized this way so that makeRequest is able to call itself recursively. Inside makeRequest, the JSON result of the API call is yielded first, so the user can use it immediately. After that, only if the response contains a next field, makeRequest delegates control of the generator to another instance of itself, made to handle the next page. While that request is being made, the calling code already has access to the result of the first page. That means we don't have to wait until all the pages are loaded before we can start using the information we get from the API.
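The delegation step is easier to see in isolation. In this tiny sketch (inner and outer are my own names, purely for illustration), the outer generator hands control to the inner one with yield*, so a consumer sees the inner generator's values inline:

```javascript
async function* inner() {
  yield "first";
  yield "second";
}

async function* outer() {
  yield "start";
  // `yield*` delegates to `inner`: its values are yielded to our consumer
  // as if they were our own, and control returns here when it's exhausted.
  yield* inner();
  yield "end";
}

(async () => {
  for await (const value of outer()) {
    console.log(value); // logs "start", "first", "second", "end"
  }
})();
```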

These specific functions make a few assumptions, including:

  • the API you're calling will return JSON
  • the JSON that your API returns will contain a field called next, which provides the next page of the resource listing for you to call

But you can use this pattern in your own code, tailored to however your API handles response types and pagination data. You could even use this pattern to page through a resource in a GraphQL API.
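As a rough sketch of how the pattern adapts, here's a version where the page-fetching logic is injected and pagination is driven by a cursor. Both fetchPage and nextCursor are invented names for illustration - swap in whatever your own API uses - and the "service" here is just an in-memory stand-in:

```javascript
// The same recursive-generator pattern, generalized: `fetchPage` and
// `nextCursor` are hypothetical names, not any real API's fields.
async function* pageThrough(fetchPage, cursor) {
  const page = await fetchPage(cursor);
  yield page;
  if (page.nextCursor != null) {
    yield* pageThrough(fetchPage, page.nextCursor);
  }
}

// An in-memory stand-in for a paginated service: three pages of two items.
const fakePages = [
  { items: ["a", "b"], nextCursor: 1 },
  { items: ["c", "d"], nextCursor: 2 },
  { items: ["e", "f"], nextCursor: null },
];
const fakeFetchPage = async (cursor) => fakePages[cursor];

(async () => {
  for await (const page of pageThrough(fakeFetchPage, 0)) {
    console.log(page.items); // each page is available as soon as it's "fetched"
  }
})();
```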

One specific drawback to point out: iterators in JavaScript don't have the map, reduce, and filter methods that you might know from arrays - you'll have to use the for await ... of syntax to handle their output. Maybe one day we'll get that interface!
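In the meantime, you can write small helpers of your own. This is a sketch - asyncMap and asyncFilter are my own names, not standard library functions - showing how one async iterable can wrap another:

```javascript
// Hand-rolled stand-ins for the array methods async iterators lack.
// `asyncMap` and `asyncFilter` are invented names, not built-ins.
async function* asyncMap(asyncIterable, fn) {
  for await (const value of asyncIterable) {
    yield fn(value);
  }
}

async function* asyncFilter(asyncIterable, predicate) {
  for await (const value of asyncIterable) {
    if (predicate(value)) {
      yield value;
    }
  }
}

// A small async source to demonstrate with.
async function* numbers() {
  yield 1; yield 2; yield 3; yield 4;
}

(async () => {
  const doubledEvens = asyncMap(
    asyncFilter(numbers(), (n) => n % 2 === 0),
    (n) => n * 2
  );
  for await (const n of doubledEvens) {
    console.log(n); // logs 4, then 8
  }
})();
```

Because each helper returns a new async iterable, they compose lazily: nothing is pulled from the source until the final for await ... of loop asks for it.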

I hope this helps you keep your code nice and maintainable!