🖥 TECH >> Testing performance of bulk writes in IndexedDB

Gautam Panickar SS
4 min read · Dec 12, 2020
Photo by Brett Sayles from Pexels

I had been adding objects to the IDB store one at a time, until I noticed a case where my app took a little longer than expected to add about 250 objects to the DB. That’s how I ended up looking for an alternative way to add objects in bulk to IDB. Like most sapiens, I ran the following search query on Google:

bulk add objects into IndexedDB store

Like every developer, I was offered an answer on Stack Overflow. I found that IDB doesn’t offer a bulk-add API. So, one could either choose the conventional method of iterating over the data and adding each object individually, or use a third-party IDB wrapper that provides a bulk-add API with good performance.

One such third-party wrapper is Dexie.js. It offers a bulkAdd method which serves the purpose. So, I decided to delve into their GitHub repository for more information.
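For reference, this is roughly how bulkAdd is used (a minimal sketch; the database name, schema, and records here are illustrative, not this article’s test setup):

```js
import Dexie from 'dexie';

// Declare a database with a "contacts" store keyed on "id" (illustrative schema)
const db = new Dexie('ContactsDB');
db.version(1).stores({ contacts: 'id' });

// bulkAdd inserts the whole array within a single transaction
await db.contacts.bulkAdd([
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
]);
```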

Dexie.js bulkAdd method

The area of interest here is the mutate method. Let’s see what it looks like. By the way, GitHub’s code navigation is such an awesome feature to have.

Dexie.js mutate method

The method is a little long, but it can be easily read and understood. The addition happens in the loop inside the isAddOrPut condition, and that’s all we need. Under the hood, Dexie too uses the same add API of IDB.
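Greatly simplified, the pattern inside mutate boils down to something like this (a paraphrase for illustration, not Dexie’s actual code):

```js
// Inside a single transaction, loop over the values and call the
// native IndexedDB add (or put) for each one
if (isAddOrPut) {
  for (const value of values) {
    store.add(value); // put-type mutations call store.put(value) instead
  }
}
```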

So, let the testing begin!

I am going to use Chrome snippets for interacting with IDB. If you are unaware of them, check out my previous article. All tests involve inserting at least 1000 objects into the store.

I am testing on a 64-bit Ubuntu OS with 4 GB RAM, and the browser used is Chrome 87.

The store DB initialisation and test data can be found at https://gist.github.com/GautamPanickar/dd91435378af0ca07eeb28a24725492c.

Execute the scripts given below and check the browser console.
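If you’d rather not open the gist, here is a minimal sketch of that kind of setup (the database and store names are illustrative; the actual script lives in the gist above):

```js
// Open (or create) the database with a "contacts" store keyed on "id"
let db;
const request = indexedDB.open('ContactsDB', 1);

request.onupgradeneeded = (event) => {
  db = event.target.result;
  if (!db.objectStoreNames.contains('contacts')) {
    db.createObjectStore('contacts', { keyPath: 'id' });
  }
};

request.onsuccess = (event) => {
  db = event.target.result;
  console.log('DB ready');
};

request.onerror = (event) => console.error('DB open failed', event.target.error);

// Test data: 1000 simple contact objects
const contacts = Array.from({ length: 1000 }, (_, i) => ({
  id: i + 1,
  name: `Contact ${i + 1}`,
}));
```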

Test 1 >> Bulk write objects in individual transactions

It took a little more than 3 seconds to add 1000 objects to the store. This was because each object was added in a separate transaction, resulting in a total of 1000 transactions.
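A sketch of this test, reusing the assumed db and contacts from the initialization sketch above (the actual script is in the gist):

```js
// Test 1: one readwrite transaction per object (roughly 3 s for 1000 objects here)
console.time('individual transactions');
let pending = contacts.length;

for (const contact of contacts) {
  const tx = db.transaction('contacts', 'readwrite'); // a fresh transaction each time
  tx.objectStore('contacts').add(contact);
  const done = () => {
    if (--pending === 0) console.timeEnd('individual transactions');
  };
  tx.oncomplete = done;
  tx.onabort = done; // a failed add aborts only this one transaction
}
```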

The benefit of having individual transactions is that when any addition to the store fails for a particular reason, only its corresponding transaction fails, and the rest remain unaffected by the failure. This can be tested by modifying the script to throw errors while inserting into the store (for instance, by adding an already added contact again).

Test 2 >> Bulk write objects in a single transaction

Wow! The result after execution is even more surprising. It took less than 250 ms to clear and insert 1000 objects. This is mainly because the whole addition took place in a single transaction.
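Again a sketch, with the same assumed db and contacts (the actual script is in the gist):

```js
// Test 2: clear the store and insert all 1000 objects in one transaction
console.time('single transaction');
const tx = db.transaction('contacts', 'readwrite');
const store = tx.objectStore('contacts');

store.clear();
for (const contact of contacts) {
  store.add(contact);
}

tx.oncomplete = () => console.timeEnd('single transaction');
tx.onabort = () => console.error('Transaction aborted, nothing was written:', tx.error);
```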

This also means that if a single insertion fails, the rest are affected as well: the whole transaction fails, and nothing will be inserted or cleared. You can test it the way I mentioned in Test 1.

When there are more than 1000 records to write, it’d be better to divide the writes into batches of 500 or 1000 and allot each batch to a separate transaction, as sketched below.
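A simple way to do that (the chunk size and names are illustrative):

```js
// Split records into chunks and give each chunk its own transaction,
// so one bad record can only abort its own chunk
function bulkAddInChunks(db, records, chunkSize = 1000) {
  for (let i = 0; i < records.length; i += chunkSize) {
    const tx = db.transaction('contacts', 'readwrite');
    const store = tx.objectStore('contacts');
    for (const record of records.slice(i, i + chunkSize)) {
      store.add(record);
    }
    tx.onabort = () => console.error(`Chunk starting at ${i} aborted`, tx.error);
  }
}
```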

Test 3 >> Bulk write objects in 3 separate transactions for read-update-delete, without clearing the store

This test case is more on the logical side: it replaces add with put wherever necessary and deletes stale objects. It uses 3 transactions, one each for read, update, and delete.

First, the existing records in the store are fetched via a get, and common objects are identified by comparing the id. The common records are updated with a put, the objects that no longer exist in the incoming data are removed with a delete, and the rest are added using the add API.
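A sketch of that logic, assuming an incoming array holding the 1250 records to write (the actual script is in the gist):

```js
console.time('read-update-delete');

// Transaction 1: read all existing records
const readTx = db.transaction('contacts', 'readonly');
const getAllRequest = readTx.objectStore('contacts').getAll();

getAllRequest.onsuccess = () => {
  const existingIds = new Set(getAllRequest.result.map((c) => c.id));
  const incomingIds = new Set(incoming.map((c) => c.id)); // "incoming" holds the 1250 records

  // Transaction 2: put the common records, add the new ones
  const writeTx = db.transaction('contacts', 'readwrite');
  const writeStore = writeTx.objectStore('contacts');
  for (const record of incoming) {
    if (existingIds.has(record.id)) writeStore.put(record); // update
    else writeStore.add(record); // insert
  }

  writeTx.oncomplete = () => {
    // Transaction 3: delete records absent from the incoming data
    const deleteTx = db.transaction('contacts', 'readwrite');
    const deleteStore = deleteTx.objectStore('contacts');
    for (const id of existingIds) {
      if (!incomingIds.has(id)) deleteStore.delete(id);
    }
    deleteTx.oncomplete = () => console.timeEnd('read-update-delete');
  };
};
```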

For writing 1250 objects (1000 existing and 250 new) to an existing store of 1000 objects, it took under 350 ms. That’s really good, given the fact that we are not clearing the DB initially as in Tests 1 and 2.

I leave it up to you to choose the approach that suits you best. If I have missed any other test cases for evaluating the performance, please leave a comment.
