Bulk Update/Insert of Database Tables in Java Using Hibernate – An Optimized Way

Hibernate is the most popular ORM framework used to interact with databases in Java. In this article, we will look at the various ways bulk selection and updates can be done on a table, and at which way is most effective when using the Hibernate framework in Java.

I experimented with the following three approaches:

  • Using Hibernate’s Query.list() method.
  • Using ScrollableResults with FORWARD_ONLY scroll mode.
  • Using ScrollableResults with FORWARD_ONLY scroll mode in a StatelessSession.

To decide which one gives the best performance for our use case, I performed the following test with each of the three approaches listed above.

  • Select and update 1000 rows.

Let’s look at the code and results of applying each of the three approaches to this operation, one by one.

Using Hibernate’s Query.list() method.

Code executed:

List rows;
Session session = getSession();
Transaction transaction = session.beginTransaction();
try {
    Query query = session.createQuery("FROM PersonEntity WHERE id > :maxId ORDER BY id")
            .setParameter("maxId", MAX_ID_VALUE);
    query.setMaxResults(1000);
    rows = query.list();
    int count = 0;
    for (Object row : rows) {
        PersonEntity personEntity = (PersonEntity) row;
        personEntity.setName(randomAlphaNumeric(30));
        session.saveOrUpdate(personEntity);
        // Flush and clear the session every 50 rows (the jdbc_batch_size specified in hibernate.properties)
        if (++count % 50 == 0) {
            session.flush();
            session.clear();
        }
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}
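
The flush/clear interval of 50 used above is meant to match the JDBC batch size configured for Hibernate. As a rough sketch (the exact values are assumptions, not taken from the measurements here), the relevant entries in hibernate.properties could look like this:

# Assumed batch configuration: flushing/clearing every 50 rows only makes sense
# if the JDBC batch size matches that interval
hibernate.jdbc.batch_size=50
hibernate.order_updates=true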

Test results:

  • Time taken: 360s to 400s
  • Heap pattern: gradually increased from 13 MB to 51 MB (measured using JConsole)

Using ScrollableResults with FORWARD_ONLY scroll mode.

With this approach, we expect it to consume less memory than the first one. Let’s see the results.

Code executed:

Session session = getSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(1000).scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
try {
    while (scrollableResults.next()) {
        PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
        personEntity.setName(randomAlphaNumeric(30));
        session.saveOrUpdate(personEntity);
        if (++count % 50 == 0) {
            session.flush();
            session.clear();
        }
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}

Test results:

  • Time taken: 185s to 200s
  • Heap pattern: gradually increased from 13 MB to 41 MB (again measured using JConsole)

Using ScrollableResults with FORWARD_ONLY scroll mode in a StatelessSession.

A stateless session does not implement a first-level cache nor interact with any second-level cache, nor does it implement transactional write-behind or automatic dirty checking, nor do operations cascade to associated instances. Collections are ignored by a stateless session. Operations performed via a stateless session bypass Hibernate’s event model and interceptors.   

This type of session is recommended for bulk updates, since we really do not need the overhead of these Hibernate features in such use cases.
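
The snippets below use a getStatelessSession() helper that is not shown in this article. A minimal sketch of such a helper, assuming a SessionFactory has already been built elsewhere, could look like this:

// Hypothetical helper assumed by the snippets below: a StatelessSession is
// opened from an already-configured SessionFactory (setup not shown here).
private StatelessSession getStatelessSession() {
    return sessionFactory.openStatelessSession();
}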

Code executed:

StatelessSession session = getStatelessSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(TRANSACTION_BATCH_SIZE).scroll(ScrollMode.FORWARD_ONLY);
try {
    while (scrollableResults.next()) {
        PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
        personEntity.setName(randomAlphaNumeric(20));
        session.update(personEntity);
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}

Test results:

  • Time taken: 185s to 200s
  • Heap pattern: gradually increased from 13 MB to 39 MB

I also performed the same tests with 2000 rows, and the results were as follows:

Results:

  • Using list(): time taken approx. 750s, heap gradually increased from 13 MB to 74 MB
  • Using ScrollableResults: time taken approx. 380s, heap gradually increased from 13 MB to 46 MB
  • Using StatelessSession: time taken approx. 380s, heap gradually increased from 13 MB to 43 MB

Blocker Problem with All the Above Approaches

ScrollableResults with a regular Session and with a StatelessSession give almost the same performance, which is much better than Query.list(). But there is still one problem with all the above approaches: locking. All of them select and update the data in the same transaction, which means that for as long as the transaction is running, the rows being updated remain locked and any other operation on them has to wait for the transaction to finish.

Solution:

There are two things we should do here to solve this problem:

  • Select and update the data in different transactions.
  • Perform these updates in batches.

So I performed the same tests again, but this time the updates were performed in a separate transaction which was committed in batches of 50.

Note: in the ScrollableResults and StatelessSession cases we also need a separate session for the updates, because the original session and transaction are needed to keep scrolling through the results.

Results using Batch Processing

  • Using list(): time taken approx. 400s, heap gradually increased from 13 MB to 61 MB
  • Using ScrollableResults: time taken approx. 380s, heap gradually increased from 13 MB to 51 MB
  • Using StatelessSession: time taken approx. 190s, heap gradually increased from 13 MB to 44 MB

Observation: with batch processing, the time taken by ScrollableResults degraded to almost the same as Query.list(), while the performance of the StatelessSession approach remained almost unchanged.

Summary and Conclusion

From all of the above experimentation, in cases where we need to do bulk selection and updates, the best approach in terms of both memory consumption and time is as follows:

  • Use ScrollableResults in a StatelessSession.
  • Perform the selection and the updates in different transactions, in batches of 20 to 50 (batch processing). Note: the ideal batch size can vary from case to case.

Sample code with the best approach

StatelessSession session = getStatelessSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(TRANSACTION_BATCH_SIZE).scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
try {
    // A separate session and transaction are used for the updates, so that
    // updated rows are committed (and unlocked) in batches while we keep scrolling.
    StatelessSession updateSession = getStatelessSession();
    Transaction updateTransaction = updateSession.beginTransaction();
    try {
        while (scrollableResults.next()) {
            PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
            personEntity.setName(randomAlphaNumeric(5));
            updateSession.update(personEntity);
            // Commit every 50 updates and start a new transaction for the next batch
            if (++count % 50 == 0) {
                updateTransaction.commit();
                updateTransaction = updateSession.beginTransaction();
            }
        }
        // Commit the final partial batch, if any
        updateTransaction.commit();
    } finally {
        updateSession.close();
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}

With Java frameworks like Spring, this code can be even smaller, for example because you do not need to take care of closing sessions yourself. The code above is written in plain Java using Hibernate.

Please try this with larger data sets and share your results in the comments. Also, if you have a better approach, please share it as well.

Thank you for reading the article.

Tuning PostgreSQL Database Memory Configuration Parameters to Optimize Performance

Introduction 

Out of the box, the default PostgreSQL configuration is not tuned for any particular workload. Default values are set to ensure that PostgreSQL runs everywhere, with the least resources it can consume and so that it doesn’t cause any vulnerabilities. It has default settings for all of the database parameters. It is primarily the responsibility of the database administrator or developer to tune PostgreSQL according to their system’s workload. In this blog, we will establish basic guidelines for setting PostgreSQL database parameters to improve database performance according to workload.

Bear in mind that while optimizing PostgreSQL server configuration improves performance, a database developer must also be diligent when writing queries for the application. If queries perform full table scans where an index could be used or perform heavy joins or expensive aggregate operations, then the system can still perform poorly even if the database parameters are tuned. It is important to pay attention to performance when writing database queries.

Nevertheless, database parameters are very important too, so let’s take a look at the ones with the greatest potential to improve performance. But before that, it is important to understand the basics of PostgreSQL’s memory architecture:

Memory in PostgreSQL can be classified into two categories:

  1. Local Memory area: It is allocated by each backend process for its own use.
  2. Shared memory area: It is used by all processes of a PostgreSQL server.

Now I will explain these tunable parameters and what their values should be:

shared_buffers

PostgreSQL uses its own buffer and also uses kernel-buffered IO. That means data is stored in memory twice: first in the PostgreSQL buffer and then in the kernel buffer. Unlike other databases, PostgreSQL does not provide direct IO; this is called double buffering. The PostgreSQL buffer is controlled by shared_buffers, which is the most effective tunable parameter for most operating systems. This parameter sets how much dedicated memory PostgreSQL will use for its cache.

The default value of shared_buffers is set very low and you will not get much benefit from it. It is set low because certain machines and operating systems do not support higher values, but on most modern machines you need to increase this value for optimal performance.

The recommended value is 25% of your total machine RAM. You should also try some lower and higher values, because in some cases good performance is achieved with a setting over 25%. The configuration really depends on your machine and the working data set. If your working data set can easily fit into your RAM, then you might want to increase the shared_buffers value enough to contain your entire database, so that the whole working set of data can reside in cache. That said, you obviously do not want to reserve all RAM for PostgreSQL.

In production environments, it is observed that a large value for shared_buffers gives really good performance, though you should always benchmark to find the right balance.

On the other hand, while a larger shared_buffers value can increase performance for read-heavy use cases, a very large value can be detrimental for write-heavy use cases, as the entire contents of shared_buffers must be processed during writes.

Please note that the database server needs to be restarted after this change.

testdb=# SHOW shared_buffers;
shared_buffers
----------------
128MB
(1 row)
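
The default shown above is 128MB. As an illustration of the 25% guideline, a hedged sketch for a dedicated server with about 16GB of RAM (the 4GB figure is an assumption for this example, not a universal recommendation) would be:

-- Example only: 4GB assumes a dedicated server with ~16GB RAM (25% guideline)
testdb=# ALTER SYSTEM SET shared_buffers = '4GB';
-- shared_buffers only changes at server start, so restart PostgreSQL afterwards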

wal_buffers

Write-Ahead Logging (WAL) is a standard method for ensuring integrity of data. Much like in the shared_buffers setting, PostgreSQL writes WAL records into buffers and then these buffers are flushed to disk.

The size of these buffers is controlled by the wal_buffers setting; by default it is sized automatically as a fraction of shared_buffers, capped at one 16MB WAL segment. If the system being tuned has a large number of concurrent connections, a higher value for wal_buffers can provide better performance.
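
As a sketch, a higher value could be applied like this (the 64MB figure is an assumption used only for illustration; wal_buffers can only be changed at server start, so a restart is required):

-- Example only: raise wal_buffers for a system with many concurrent connections
testdb=# ALTER SYSTEM SET wal_buffers = '64MB';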

effective_cache_size

effective_cache_size has the reputation of being a confusing PostgreSQL setting, and as such, it is often left at its default value.

The effective_cache_size value provides a rough estimate of how much memory is available for disk caching by the operating system and within the database itself, after taking into account what is used by the OS and other applications.

This value is used only by the PostgreSQL query planner to figure out whether plans it’s considering would be expected to fit in RAM or not. As such, it’s a bit of a fuzzy number to define for general use cases.

A conservative value for  effective_cache_size  would be ½(50%) of the total memory available on the system. Most commonly, the value is set to 75% of the total system memory on a dedicated DB server, but can vary depending on the specific discrete needs on a particular server workload.

If the value for effective_cache_size  is too low, then the query planner may decide not to use some indexes, even if they would help greatly increase query speed.

In conclusion, the general recommendation for effective_cache_size is as follows (a short sketch follows this list):

  • Set the value to the amount of file system cache available. On UNIX/Linux-like systems, add the free + cached numbers from the free or top commands to get an estimate.
  • If you don’t know, set the value to 50% to 75% of total system memory, especially on a dedicated DB server.
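
For example, a hedged sketch for a dedicated DB server with about 16GB of RAM (the 12GB figure is an assumption based on the 75% guideline above):

-- Example only: 12GB assumes a dedicated DB server with ~16GB RAM (75% guideline)
testdb=# ALTER SYSTEM SET effective_cache_size = '12GB';
testdb=# SELECT pg_reload_conf();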

work_mem

This configuration is used for complex sorting. If you have to do complex sorts, increase the value of work_mem for good results. In-memory sorts are much faster than sorts spilling to disk. Setting a very high value can cause a memory bottleneck for your deployment environment, because this amount can be allocated per sort operation and per connection. Therefore, if many users are executing sort operations at the same time, the system may allocate roughly work_mem multiplied by the total number of concurrent sort operations. Setting this parameter high globally can cause very high memory usage, so it is highly recommended to modify it at the session level.

testdb=# SET work_mem TO '2MB';
testdb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Gather Merge  (cost=509181.84..1706542.14 rows=10000116 width=24)
   Workers Planned: 4
   ->  Sort  (cost=508181.79..514431.86 rows=2500029 width=24)
         Sort Key: b
         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)
(5 rows)

The initial query’s sort node has an estimated cost of 514431.86. Cost is an arbitrary unit of computation. For the above query, we have a work_mem of only 2MB. For testing purposes, let’s increase this to 256MB and see if there is any impact on cost.

testdb=# SET work_mem TO '256MB';
testdb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Gather Merge  (cost=355367.34..1552727.64 rows=10000116 width=24)
   Workers Planned: 4
   ->  Sort  (cost=354367.29..360617.36 rows=2500029 width=24)
         Sort Key: b
         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)

The query cost is reduced to 360617.36 from 514431.86 — a 30% reduction.

In conclusion, setting the value higher generally results in better sorting and hashing, but setting it locally for individual queries is recommended. Set the value high in sessions where you expect heavy sorting; otherwise, a low global value is good.

maintenance_work_mem

maintenance_work_mem is a memory setting used for maintenance tasks. The default value is 64MB. Setting a large value helps in tasks like VACUUM, RESTORE, CREATE INDEX, ADD FOREIGN KEY and ALTER TABLE.

It is necessary to remember that when autovacuum runs, up to autovacuum_max_workers times this memory may be allocated, so be careful not to set the default value too high.

The general recommendation for setting maintenance_work_mem is as follows:

  • Set the value to 10% of system memory, up to 1GB.
  • You can set it even higher if you are having VACUUM problems.

We can also temporarily increase this memory while creating indexes, restoring dumps, or performing full vacuums, as in the sketch below.
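
A sketch of doing that for a single session before building a large index (the 1GB value and the index definition are assumptions used only for illustration):

-- Example only: raise maintenance_work_mem just for this session while indexing
testdb=# SET maintenance_work_mem TO '1GB';
testdb=# CREATE INDEX idx_bar_b ON bar (b);
testdb=# RESET maintenance_work_mem;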

synchronous_commit

This setting enforces that a commit will wait for the WAL to be written to disk before returning a success status to the client. It is a trade-off between performance and reliability. If your application is designed such that performance is more important than reliability, turn off synchronous_commit. With it off, there is a time gap between the success status and a guaranteed write to disk, so in the case of a server crash, data might be lost even though the client received a success message on commit. Transactions then commit very quickly because they do not wait for the WAL file to be flushed, but reliability is compromised.
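
Where the data allows it, this trade-off can also be made per session or per transaction instead of globally; a minimal sketch:

-- Example only: accept possible loss of the most recent commits for this session
testdb=# SET synchronous_commit TO off;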

temp_buffers

This parameter sets the maximum number of temporary buffers used by each database session. The session local buffers are used only for access to temporary tables. The setting of this parameter can be changed within individual sessions but only before the first use of temporary tables within the session.

The PostgreSQL database utilizes this memory area for holding the temporary tables of each session; these are cleared when the connection is closed.

The default value of temp_buffers is 8MB.
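
A sketch of raising it at the start of a session that will work with large temporary tables (the 32MB value and the table used are assumptions; remember it must be set before the first use of a temporary table in the session):

-- Example only: must be issued before the session touches any temporary table
testdb=# SET temp_buffers TO '32MB';
testdb=# CREATE TEMP TABLE tmp_bar AS SELECT * FROM bar;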

Conclusion

There are more parameters that can be tuned to gain better performance, but they have less impact than the ones highlighted here. In the end, we must always keep in mind that not all parameters are relevant for all application types. Some applications perform better after tuning a parameter and some don’t. Tuning PostgreSQL database parameters must be done for the specific needs of an application and the OS it runs on.

Also, performance tuning does not depend only on PostgreSQL configuration parameters; there are many system parameters on which PostgreSQL performance also depends. In my next blog, I will explain some system parameters that can affect PostgreSQL performance.