In this blog, we will look at an issue I debugged on one of our production servers, where our Java application was becoming slow due to frequent GC pauses.
I will explain one particular allocation pattern that can be a cause of frequent GC pauses.
To follow this article, you need basic knowledge of Java's G1GC garbage collection algorithm.
Don't worry if you are not familiar with G1GC; I will write articles on the basics of G1GC later, and you can then read this article again.
So, let's start with the issue we were facing.
Issue: the application becomes unresponsive very frequently.
Analysis:
Debugging the JVM-level stats dumped from the JMX beans made it clear that GC collection time was spiking in between.
Heap usage was also increasing.
After that, we enabled the GC log by adding -Xlog:gc=debug:file=/tmp/gc.log to the JVM arguments when starting the application.
Analyzing gc.log, we found that Full GC was triggering many times, and whenever a Full GC triggers it stops the application for some time; in Java terminology this is called an STW (Stop the World) pause.
Generally, there are the following types of events in G1GC:
Minor: Eden + Survivor From -> Survivor To
Mixed: Minor + (# reclaimable Tenured regions / -XX:G1MixedGCCountTarget) regions of Tenured
Full GC: All regions evacuated
Minor/Mixed + To-space exhaustion: Minor/Mixed + rollback + Full GC
In a smoothly running application, you should only see batches of Minor events alternating with batches of Mixed events. Full GC and To-space exhaustion events are things you absolutely don't want to see when running G1GC; they need to be detected and eliminated, and if they occur at all they should be caused only by explicit external actions (jstack, jmap, etc.).
For in-depth details of these events, as already stated, I will write a blog series explaining G1GC concepts; for now you can search the net.
Now, coming back to our debugging:
We checked that no external command for taking a thread dump, heap dump, or histogram was run that could have initiated a Full GC event.
So the question now was: why is this Full GC triggering?
On further research we found that Humongous objects can be one of the reasons for triggering Full GC events.
Now, what are Humongous objects?
A brief definition: any single allocation ≥ G1HeapRegionSize/2 is considered a Humongous object, which is allocated out of contiguous regions of Free space and then added to Tenured. Because Humongous objects are allocated out of Free space, allocation failures there trigger GC events. If an allocation failure from Free space triggers GC, the GC event will be a Full GC, which is very undesirable in most circumstances. To avoid Full GC events in an application with lots of Humongous objects, one must ensure the Free space pool is large enough compared to Eden that Eden will always fill up first.
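For example (a hypothetical snippet; the exact threshold depends on your region size, which G1 picks based on heap size unless you set -XX:G1HeapRegionSize):

// With -XX:G1HeapRegionSize=2m, any single allocation >= 1 MB is humongous.
// This 2 MB array therefore lands in contiguous Free regions, not in Eden:
byte[] payload = new byte[2 * 1024 * 1024];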
So, we started checking whether our application was generating Humongous objects.
And from gc.log we found that lots of Humongous objects were being created, and they were the reason for triggering Full GC events.
I put together some commands to check for Humongous objects, specifically on Linux.
They give you the size of the humongous objects generated by the application.
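A sketch of what such a check can look like (the exact log line format varies across JDK versions and flags, so adjust the grep pattern to your gc.log):

# count humongous allocation records in the GC log
grep -ci "humongous" /tmp/gc.log

# extract the requested allocation sizes (bytes) and sum them
grep -o "allocation request: [0-9]*" /tmp/gc.log \
  | awk '{sum += $3} END {print sum " bytes requested as humongous"}'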
We were on a version older than Oracle JDK 8u45; for Java versions newer than this, the release notes state that Humongous objects also get collected during Minor events.
Search for "G1 now collects unreachable Humongous objects during young collections" in the JDK 8u45 release notes.
Hibernate is the most popular ORM framework used to interact with databases in Java. In this article we will look at the various ways bulk selection and updation can be done on a table, and which way is the most effective when using Hibernate in Java.
I experimented with three ways which are as follows :
Using Hibernate’s Query.list() method.
Using ScrollableResults with FORWARD_ONLY scroll mode.
Using ScrollableResults with FORWARD_ONLY scroll mode in a StatelessSession.
To decide which one gives the best performance for our use case, I performed the following test using the three ways listed above:
Select and update 1000 rows.
Let's see the code and results of applying the three approaches to this operation, one by one.
Using Hibernate’s Query.list() method.
Code Executed:
List rows;
Session session = getSession();
Transaction transaction = session.beginTransaction();
try {
    Query query = session.createQuery("FROM PersonEntity WHERE id > :maxId ORDER BY id")
            .setParameter("maxId", MAX_ID_VALUE);
    query.setMaxResults(1000);
    rows = query.list();
    int count = 0;
    for (Object row : rows) {
        PersonEntity personEntity = (PersonEntity) row;
        personEntity.setName(randomAlphaNumeric(30));
        session.saveOrUpdate(personEntity);
        // Always flush and clear the session after updating 50 rows
        // (50 = jdbc_batch_size specified in hibernate.properties)
        if (++count % 50 == 0) {
            session.flush();
            session.clear();
        }
    }
    // Commit inside try so a failed run is rolled back instead of committed
    transaction.commit();
} catch (RuntimeException e) {
    transaction.rollback();
    throw e;
} finally {
    if (session != null && session.isOpen()) {
        session.close();
    }
}
Test Results:
Time taken: 360s to 400s
Heap pattern: gradually increased from 13 MB to 51 MB (observed from JConsole).
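The flush/clear cadence of 50 rows assumes JDBC batching is actually enabled. A sketch of the relevant hibernate.properties entries (the value 50 is taken from the comment in the code above):

hibernate.jdbc.batch_size=50
hibernate.order_updates=true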
Using ScrollableResults with FORWARD_ONLY scroll mode.
With this, we expect it to consume less memory than the first approach. Let's see the results.
Code Executed:
Session session = getSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(1000).scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
try {
    while (scrollableResults.next()) {
        PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
        personEntity.setName(randomAlphaNumeric(30));
        session.saveOrUpdate(personEntity);
        // Flush and clear every 50 rows, matching the JDBC batch size
        if (++count % 50 == 0) {
            session.flush();
            session.clear();
        }
    }
    transaction.commit();
} catch (RuntimeException e) {
    transaction.rollback();
    throw e;
} finally {
    if (session != null && session.isOpen()) {
        session.close();
    }
}
Test Results:
Time taken: 185s to 200s
Heap pattern: gradually increased from 13 MB to 41 MB (measured the same way, using JConsole).
Using ScrollableResults with FORWARD_ONLY scroll mode in a StatelessSession.
A stateless session does not implement a first-level cache nor interact with any second-level cache, nor does it implement transactional write-behind or automatic dirty checking, nor do operations cascade to associated instances. Collections are ignored by a stateless session. Operations performed via a stateless session bypass Hibernate’s event model and interceptors.
This type of session is always recommended for bulk updation, as we really do not need the overhead of these Hibernate features in such use cases.
Code Executed:
StatelessSession session = getStatelessSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(TRANSACTION_BATCH_SIZE).scroll(ScrollMode.FORWARD_ONLY);
try {
    while (scrollableResults.next()) {
        PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
        personEntity.setName(randomAlphaNumeric(20));
        // A stateless session has no first-level cache, so no flush/clear is needed
        session.update(personEntity);
    }
    transaction.commit();
} catch (RuntimeException e) {
    transaction.rollback();
    throw e;
} finally {
    if (session != null && session.isOpen()) {
        session.close();
    }
}
Test Results:
Time taken: 185s to 200s
Heap pattern: gradually increased from 13 MB to 39 MB.
I also performed the same tests with 2000 rows, and the results obtained were as follows:
Results:
Using list(): time taken approx. 750s; heap pattern: gradually increased from 13 MB to 74 MB
Using ScrollableResults: time taken approx. 380s; heap pattern: gradually increased from 13 MB to 46 MB
Using Stateless: time taken approx. 380s; heap pattern: gradually increased from 13 MB to 43 MB
Blocker problem with all the approaches tried above
ScrollableResults and stateless ScrollableResults give almost the same performance, which is much better than Query.list(). But there is still one problem with all the above approaches: locking. All of them select and update the data in the same transaction, which means that for as long as the transaction is running, the rows being updated are locked, and any other operations on those rows will have to wait for the transaction to finish.
Solution:
There are two things we should do here to solve the above problem:
Select and update the data in different transactions.
Perform updates of this kind in batches.
So I performed the same tests again, but this time the update was performed in a different transaction, which was committed in batches of 50.
Note: in the case of Scrollable and Stateless we also need a separate session for the updates, as the original session and transaction are needed to scroll through the results.
Results using batch processing
Using list(): time taken approx. 400s; heap pattern: gradually increased from 13 MB to 61 MB
Using ScrollableResults: time taken approx. 380s; heap pattern: gradually increased from 13 MB to 51 MB
Using Stateless: time taken approx. 190s; heap pattern: gradually increased from 13 MB to 44 MB
Observation: the time performance of ScrollableResults dropped to become almost equal to Query.list(), but the performance of Stateless remained almost the same.
Summary and Conclusion
From all the above experimentation, in cases where we need to do bulk selection and updation, the best approach in terms of memory consumption and time is as follows:
Use ScrollableResults in a StatelessSession.
Perform selection and updation in different transactions, in batches of 20 to 50 (batch processing). (Note: the right batch size varies from case to case.)
Sample Code with the best approach
StatelessSession session = getStatelessSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(TRANSACTION_BATCH_SIZE).scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
try {
    // Separate session and transaction for the updates, so rows are locked
    // only for the duration of one 50-row batch
    StatelessSession updateSession = getStatelessSession();
    Transaction updateTransaction = updateSession.beginTransaction();
    try {
        while (scrollableResults.next()) {
            PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
            personEntity.setName(randomAlphaNumeric(5));
            updateSession.update(personEntity);
            // Commit every 50 rows and start a fresh transaction
            if (++count % 50 == 0) {
                updateTransaction.commit();
                updateTransaction = updateSession.beginTransaction();
            }
        }
        // Commit any remaining rows of the last partial batch
        updateTransaction.commit();
    } finally {
        updateSession.close();
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit(); // the read-only scroll transaction
        session.close();
    }
}
With Java frameworks like Spring, this code can be even smaller, e.g. you do not need to take care of closing sessions yourself. The code above is written in plain Java using Hibernate.
Please try this with large data sets and share your results in the comments; also, if you have a better approach, please comment.
In our last blog we learned about the need for and usage of hypothetical indexes in PostgreSQL. Now we can easily check, even in a live environment, whether a particular index will be helpful. But how do we know which index to test? That requires in-depth knowledge of indexing and experience with PostgreSQL. However, in PostgreSQL we can get automatic index recommendations for specific queries by using three extensions: hypopg, pg_stat_statements, and pg_qualstats.
Now let's move directly to the practical part and see how we can use this feature in Postgres.
I am doing these experiments on Postgres 10 installed on CentOS 7.
Set the sample rate of pg_qualstats to 1 in postgresql.conf. This rate defines how frequently queries are monitored and analyzed; the value '1' means keep track of all queries.
pg_qualstats.sample_rate = '1'
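For completeness, a sketch of enabling the three extensions (this assumes the packages are already installed on the server; pg_stat_statements and pg_qualstats must be preloaded, which requires a restart):

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements, pg_qualstats'

-- in the target database, after the restart
CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION pg_qualstats;
CREATE EXTENSION hypopg;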
Create the function that will be used to detect usable indexes:
CREATE OR REPLACE FUNCTION find_usable_indexes()
RETURNS VOID AS
$$
DECLARE
l_queries record;
l_querytext text;
l_idx_def text;
l_bef_exp text;
l_after_exp text;
hypo_idx record;
l_attr record;
/* l_err int; */
BEGIN
CREATE TABLE IF NOT EXISTS public.idx_recommendations (queryid bigint,
query text, current_plan jsonb, recmnded_index text, hypo_plan jsonb);
FOR l_queries IN
SELECT t.relid, t.relname, t.queryid, t.attnames, t.attnums,
pg_qualstats_example_query(t.queryid) as query
FROM
(
SELECT qs.relid::regclass AS relname, qs.relid AS relid, qs.queryid,
string_agg(DISTINCT attnames.attnames,',') AS attnames, qs.attnums
FROM pg_qualstats_all qs
JOIN pg_qualstats q ON q.queryid = qs.queryid
JOIN pg_stat_statements ps ON q.queryid = ps.queryid
JOIN pg_amop amop ON amop.amopopr = qs.opno
JOIN pg_am ON amop.amopmethod = pg_am.oid,
LATERAL
(
SELECT pg_attribute.attname AS attnames
FROM pg_attribute
JOIN unnest(qs.attnums) a(a) ON a.a = pg_attribute.attnum
AND pg_attribute.attrelid = qs.relid
ORDER BY pg_attribute.attnum) attnames,
LATERAL unnest(qs.attnums) attnum(attnum)
WHERE NOT
(
EXISTS
(
SELECT 1
FROM pg_index i
WHERE i.indrelid = qs.relid AND
(arraycontains((i.indkey::integer[])[0:array_length(qs.attnums, 1) - 1],
qs.attnums::integer[]) OR arraycontains(qs.attnums::integer[],
(i.indkey::integer[])[0:array_length(i.indkey, 1) + 1]) AND i.indisunique)))
GROUP BY qs.relid, qs.queryid, qs.qualnodeid, qs.attnums) t
GROUP BY t.relid, t.relname, t.queryid, t.attnames, t.attnums
LOOP
/* RAISE NOTICE '% : is queryid',l_queries.queryid; */
execute 'explain (FORMAT JSON) '||l_queries.query INTO l_bef_exp;
execute 'select hypopg_reset()';
execute 'SELECT indexrelid,indexname FROM hypopg_create_index(''CREATE INDEX on '||l_queries.relname||'('||l_queries.attnames||')'')' INTO hypo_idx;
execute 'explain (FORMAT JSON) '||l_queries.query INTO l_after_exp;
execute 'select hypopg_get_indexdef('||hypo_idx.indexrelid||')' INTO l_idx_def;
INSERT INTO public.idx_recommendations (queryid,query,current_plan,recmnded_index,hypo_plan)
VALUES (l_queries.queryid,l_queries.query,l_bef_exp::jsonb,l_idx_def,l_after_exp::jsonb);
END LOOP;
execute 'select hypopg_reset()';
END;
$$ LANGUAGE plpgsql;
Usage
Now let's create a table with 100 million (10 crore) rows, as follows:
testdb=# create table orders as select s as orderno, md5(random()::text) as orderitem , now() as order_created from generate_Series(1,100000000) s;
SELECT 100000000
Now let's run a query for which we want to check whether an index is needed:
testdb=# select * from orders where orderno = 80000 ;
-[ RECORD 1 ]-+---------------------------------
orderno | 80000
orderitem | 03b41c2f32d99e9a597010608946c4c6
order_created | 2021-05-22 17:52:21.435936+05:30
Now run the following queries to find out which indexes are recommended by this extension, and the percentage improvement from applying each index hypothetically:
testdb=# select find_usable_indexes();
find_usable_indexes
---------------------
(1 row)
testdb=# select b.query, a.recmnded_index,round((((a.current_plan->0->'Plan'->>'Total Cost')::numeric-(hypo_plan->0->'Plan'->>'Total Cost')::numeric)*100/(a.current_plan->0->'Plan'->>'Total Cost')::numeric),2) as percent_improvd FROM idx_recommendations a JOIN pg_stat_statements b ON a.queryid = b.queryid WHERE round((((current_plan->0->'Plan'->>'Total Cost')::numeric-(hypo_plan->0->'Plan'->>'Total Cost')::numeric)*100/(current_plan->0->'Plan'->>'Total Cost')::numeric),2) > 0 order by 3 desc ;
query | recmnded_index | percent_improvd
-----------------------------------------------------+------------------------------------------------------------------+-----------------
select * from orders where orderno = $1 | CREATE INDEX ON public.orders USING btree (orderno) | 100.00
The above analysis was done internally by creating the indexes hypothetically, not by creating real indexes.
Please note: do not rely fully on automatic index recommendations. There is no doubt this is a very useful feature, but do also check logically why the recommended indexes are useful and whether you should really create them.
Now you can try this feature with more complex queries and comment on this article with your results. So let's experiment and comment.
In this blog we will first cover what hypothetical indexes are and why this type of index is needed, and then we will look at their practical usage.
Need for Hypothetical Indexes:
As the name suggests, these are not real indexes; they are hypothetical, i.e. virtual, indexes that the PostgreSQL query planner does not use when actually executing queries.
Now the question arises: where are these hypothetical indexes useful?
First let's discuss one scenario. We have a large table currently in the production environment, and we need to create an index on the live DB, but we are not sure whether that index will be useful. We don't even know whether creating that index could bring our production environment down!
So, the possible solutions to the above problem are the following:
Ignore the risk and create the index on the live table, which can lead to these scenarios:
First of all, it will take a lot of time, depending on the data present in the table.
Live queries may be affected badly if we are not sure whether the index we are creating will increase or decrease query cost.
We also do not know the size the index will be; it may be too large, which can again impact the production database server.
Another solution is to replicate the production database to a local dev environment, do all the hit-and-trial there, and then apply the result to the production environment. This seems a very safe and effective approach in almost all cases, but it takes too much time to set things up and test against the actual scenario.
The third solution is hypothetical indexes: this functionality creates imaginary indexes, not real ones. But there are some things to note about these indexes:
It creates an index in our connection's private memory. None of the catalog tables nor the original tables are actually touched.
The only way to see whether we can benefit from an index is by running a plain EXPLAIN <QUERY>.
If you run EXPLAIN ANALYZE, which actually executes the SQL and provides runtime stats, it will not be able to use the hypothetical index, because a hypothetical index does not exist in reality.
Currently these indexes work on BTREE only; however, you can try whether they work on other index types.
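To get started, create the extension (a sketch; it assumes the hypopg package is already installed on the server):

testdb=# CREATE EXTENSION hypopg;
CREATE EXTENSION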
On creating the extension, the following functions will be created:
testdb=# select proname from pg_proc where proname ilike '%hyp%';
proname
----------------------
hypopg_reset_index
hypopg_reset
hypopg_create_index
hypopg_drop_index
hypopg
hypopg_list_indexes
hypopg_relation_size
hypopg_get_indexdef
Usage
Now let's create a table with 100 million (10 crore) rows, as follows:
testdb=# create table orders as select s as orderno, md5(random()::text) as orderitem , now() as order_created from generate_Series(1,100000000) s;
SELECT 100000000
Now check the cost of a query by running EXPLAIN:
testdb=# explain select * from orders where orderno > 80000 order by order_created desc limit 100 ;
QUERY PLAN
--------------------------------------------------------------------------------
Limit (cost=3600088.98..3600089.23 rows=100 width=44)
-> Sort (cost=3600088.98..3688095.27 rows=35202513 width=44)
Sort Key: order_created DESC
-> Seq Scan on orders (cost=0.00..2254674.25 rows=35202513 width=44)
Filter: (orderno > 80000)
(5 rows)
Now create the hypothetical index:
testdb=# SELECT * FROM hypopg_create_index('create index order_created_idx on orders(order_created)');
indexrelid | indexname
------------+-----------------------------------
24797 | <24797>btree_orders_order_created
(1 row)
Now run EXPLAIN again to check whether the above index would be useful:
testdb=# explain select * from orders where orderno > 80000 order by order_created desc limit 100 ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.07..4.29 rows=100 width=45)
-> Index Scan Backward using "<24797>btree_orders_order_created" on orders (cost=0.07..4215496.19 rows=99917459 width=45)
Filter: (orderno > 80000)
(3 rows)
From both EXPLAIN outputs we can clearly see the difference in cost, and also that the planner is using the newly created hypothetical index.
We can drop the index as follows:
testdb=# select * from hypopg_drop_index(24797);
hypopg_drop_index
-------------------
t
(1 row)
We can also check the estimated size of the virtually created index as follows:
testdb=# SELECT * FROM hypopg_create_index('create index order_created_idx on orders(order_created)');
indexrelid | indexname
------------+-----------------------------------
24798 | <24798>btree_orders_order_created
(1 row)
testdb=# select * from pg_size_pretty(hypopg_relation_size(24798));
pg_size_pretty
----------------
2990 MB
(1 row)
Now let's create the actual index and see its real size:
testdb=# create index order_created_idx on orders(order_created);
CREATE INDEX
testdb=# \di+ order_created_idx
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+-------------------+-------+----------+--------+---------+-------------
public | order_created_idx | index | postgres | orders | 2142 MB |
(1 row)
Regarding the load average shown in Linux, there is a lot of confusion, for example:
What is the load average shown in the top command?
What do these values represent?
When is it high, and when is it low?
When should it be considered critical?
In which scenarios can it increase?
In this blog we will talk about the answers to all of these.
What are the three load-average values shown by top?
The three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages), and lower numbers are better. Higher numbers represent a problem or an overloaded machine.
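An illustrative top header (the numbers are made up for the example; the three load averages appear at the end of the first line, and the %Cpu(s) fields referred to later in this post are on the third line):

top - 10:15:32 up 30 days,  2:11,  1 user,  load average: 0.52, 0.58, 0.59
Tasks: 215 total,   1 running, 214 sleeping,   0 stopped,   0 zombie
%Cpu(s): 12.0 us,  3.0 sy,  0.0 ni, 84.0 id,  0.8 wa,  0.0 hi,  0.2 si,  0.0 st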
Now, before getting into what is a good value, what is a bad value, and which factors can affect these values, let's understand them on a machine with a single-core processor.
The traffic analogy
A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator … sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they’re in for delays.
So, Bridge Operator, what numbering system are you going to use? How about:
0.00 means there’s no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there’s no backup, and an arriving car will just go right on.
1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
over 1.00 means there’s backup. How much? Well, 2.00 means that there are two lanes worth of cars total — one lane’s worth on the bridge, and one lane’s worth waiting. 3.00 means there are three lanes worth total — one lane’s worth on the bridge, and two lanes’ worth waiting. Etc.
Like the bridge operator, you’d like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also, like the bridge operator, you are still ok if you get some temporary spikes above 1.00 … but when you’re consistently above 1.00, you need to worry.
So you’re saying the ideal load is 1.00?
Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins draw the line at 0.70: if the load average is consistently above 0.70, it is time to investigate before things get worse.
But nowadays we have multi-core and multi-processor systems.
Got a quad-processor system? It’s still healthy with a load of 3.00.
On a multi-processor system, the load is relative to the number of processor cores available. The “100% utilization” mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.
If we go back to the bridge analogy, the “1.00” really means “one lane’s worth of traffic”. On a one-lane bridge, that means it’s filled up. On a two-lane bridge, a load of 1.00 means it’s at 50% capacity — only one lane is full, so there’s another whole lane that can be filled.
Same with CPUs: a load of 1.00 is 100% CPU utilization on a single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.
Which leads us to two new Rules of Thumb:
The “number of cores = max load” Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.
The “cores is cores” Rule of Thumb: How the cores are spread out over CPUs doesn’t matter. Two quad-cores == four dual-cores == eight single-cores. It’s all eight cores for these purposes.
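To relate the load average to capacity, you first need the core count; standard Linux commands for both (the load averages also appear in the first line of top):

nproc                                  # number of cores visible to the OS
grep -c ^processor /proc/cpuinfo       # same information from /proc
uptime                                 # prints the 1-, 5-, and 15-minute load averages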
But what should we conclude here: if the load goes beyond the number of cores, are we short of cores?
Well, not exactly. For this we need to dig further into the top command output to reach a conclusion.
In the top output, the us and sy fields of the %Cpu(s) line show the CPU used by user processes (us) and by system processes (sy). If these values are around 99-100%, it means there is a crunch of CPU cores on your system, or some process is consuming too much CPU. In that case, either increase the cores or optimize the application that is consuming more CPU.
Now let’s take another scenario :
The wa field of the same line shows the amount of time the CPU spends waiting on input/output (I/O). If this value goes above, say, 80%, the load average on the server will also increase. It means either your disk read/write speed is slow, or your applications are reading/writing more than the system can handle. In this case, either diagnose your hard disk read/write speed or check why your application is reading/writing so much.
Now let’s take one more scenario :
If the si field goes beyond a certain limit, it means soft interrupts (softirqs) are consuming CPU. This can be due to network/disk interrupts: either they are not getting enough CPU, or there is some misconfiguration of your ethernet ports due to which interrupts cannot keep up with the packets being received or transmitted. These problems occur more often in VM environments than on physical machines.
Now, let's take one last scenario:
The st field is the one that helps in a virtual machine environment.
If %st (steal time) increases beyond a certain limit, say it stays above 50%, it means you are getting only half of the CPU time from the base machine and someone else is consuming your CPU time, as you may be on shared CPU infrastructure. In this case too, the load can increase.
I hope this covers all the major scenarios in which the load average can increase.
Many points are still open here, like how to check read/write speed, how to check which application is consuming more CPU, and how to check which interrupts are causing problems. For all of these, stay tuned to Hello Worlds.
In my previous article I explained tuning PostgreSQL database memory configuration parameters to optimize performance, and as I said there, database performance does not depend only on PostgreSQL configuration but also on system parameters. Poorly configured OS kernel parameters can degrade database server performance, so it is imperative that they are configured according to the database server and its workload. In this article I will talk specifically about CentOS/Red Hat Linux systems.
Story
I will start the article with a small story. At one of our clients there was a huge amount of writes, and the customer had provided 200 GB of RAM for that dedicated database server, so resources were not a problem.
What was happening was that after some time the system load would increase enormously, and on debugging we found no unusual query around the times the load increased. Somewhere on the internet we found that clearing the system cache regularly should resolve the issue.
We then scheduled a cron job to clear the system cache at regular intervals, and the issue was resolved.
Now the question is: why did the issue stop occurring after this?
The answer is that, due to the large cache size (we had so much RAM available), lots of dirty data was accumulating in RAM (gigabytes of it), and when all of this data flushed out to disk at once, the system load spiked.
From this we learned that we also need to tune some system parameters to optimize system and database (PostgreSQL) performance.
In the above case we tuned vm.dirty_background_ratio and vm.dirty_ratio, two system (OS) parameters, to resolve the issue.
Kernel parameter tuning
Now, what values did we set for the two parameters described in the story, and what are the other important Linux kernel parameters that can affect database server performance? They are described below.
vm.dirty_background_ratio / vm.dirty_background_bytes
The vm.dirty_background_ratio is the percentage of memory filled with dirty pages that need to be flushed to disk. Flushing is done in the background. The value of this parameter ranges from 0 to 100; however, a value lower than 5 may not be effective, and some kernels do not internally support it. The default value is 10 on most Linux systems. You can gain performance for write-intensive operations with a lower ratio, which means Linux flushes dirty pages in the background sooner.
Alternatively, you can set vm.dirty_background_bytes as an absolute value, depending on your disk speed (the ratio and bytes forms are counterparts: setting one disables the other).
There are no universally "good" values for these two parameters since both depend on the hardware. However, setting vm.dirty_background_ratio to 5, or vm.dirty_background_bytes to about 25% of your disk speed, improves performance by up to ~25% in most cases.
vm.dirty_ratio / vm.dirty_bytes
This is the same as vm.dirty_background_ratio / vm.dirty_background_bytes, except that the flushing is done in the foreground, blocking the application. So vm.dirty_ratio should be set higher than vm.dirty_background_ratio; this ensures that background flushing kicks in before foreground flushing, avoiding blocking the application as much as possible. You can tune the difference between the two ratios depending on your disk IO load.
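A sketch of how these two parameters can be set (the values are examples of what suited our write-heavy workload, not prescriptions):

# /etc/sysctl.conf
vm.dirty_background_ratio = 5
vm.dirty_ratio = 15

# apply without a reboot
$ sysctl -p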
vm.swappiness
vm.swappiness is another kernel parameter that can affect the performance of the database. This parameter is used to control the swappiness (swapping pages to and from swap memory into RAM) behavior on a Linux system. The value ranges from 0 to 100. It controls how much memory will be swapped or paged out. Zero means disable swap and 100 means aggressive swapping.
You may get good performance by setting lower values.
Setting a value of 0 in newer kernels may cause the OOM Killer (out of memory killer process in Linux) to kill the process. Therefore, you can be on the safe side and set the value to 1 if you want to minimize swapping. The default value on a Linux system is 60. A higher value causes the MMU (memory management unit) to utilize more swap space than RAM, whereas a lower value preserves more data/code in memory.
A smaller value is a good bet to improve performance in PostgreSQL.
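For example, to minimize swapping on a dedicated database server (the value 1 follows the reasoning above):

$ sysctl -w vm.swappiness=1
# persist it across reboots in /etc/sysctl.conf:
vm.swappiness = 1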
vm.overcommit_memory / vm.overcommit_ratio
Applications acquire memory and free that memory when it is no longer needed. But in some cases, an application acquires too much memory and does not release it. This can invoke the OOM killer. Here are the possible values for vm.overcommit_memory parameter with a description for each:
0 — Heuristic overcommit: do it intelligently (the default), based on kernel heuristics.
1 — Always overcommit: never refuse an allocation.
2 — Don't overcommit: never commit more virtual memory than swap plus overcommit_ratio percent of RAM.
vm.overcommit_ratio is the percentage of RAM that is available for overcommitment. For example, with a value of 50 on a system with 2 GB of RAM and 2 GB of swap, up to 3 GB (swap plus 50% of RAM) may be committed.
A value of 2 for vm.overcommit_memory yields better performance for PostgreSQL. This value maximizes RAM utilization by the server process without any significant risk of it being killed by the OOM killer (the out-of-memory killer process in Linux). An application will still be able to overcommit, but only within the overcommit ratio, thus reducing the risk of having the OOM killer kill a process. Hence a value of 2 gives better performance than the default value of 0. Reliability is improved further by ensuring that memory beyond the allowable range is never overcommitted, which avoids the risk of a process being killed by the OOM killer.
On systems without swap, one may experience problems when vm.overcommit_memory is 2.
Generally speaking, almost all applications that use a lot of memory depend on this setting; for example, for Redis, setting this value to 1 is best.
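A sketch for a dedicated PostgreSQL box with swap configured (the ratio value is illustrative):

# /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80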
Turn On Huge Pages
Linux, by default, uses 4K memory pages, BSD has Super Pages, whereas Windows has Large Pages. A page is a chunk of RAM that is allocated to a process. A process may own more than one page depending on its memory requirements. The more memory a process needs the more pages that are allocated to it. The OS maintains a table of page allocation to processes. The smaller the page size, the bigger the table, the more time required to look up a page in that page table. Therefore, huge pages make it possible to use a large amount of memory with reduced overheads; fewer page lookups, fewer page faults, faster read/write operations through larger buffers. This results in improved performance.
PostgreSQL has support for bigger pages on Linux only. By default, Linux uses 4K memory pages, so in cases where there are too many memory operations, there is a need to set bigger pages. Performance gains have been observed by using huge pages with sizes of 2 MB and up to 1 GB. The size of a huge page can be set at boot time. You can easily check the huge page settings and utilization on your Linux box using the cat /proc/meminfo | grep -i huge command.
Get HugePage Info – On Linux (only)
Note: this is only for Linux; for other OSes this operation is ignored.

$ cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

In this example, although the huge page size is set to 2,048 kB (2 MB), the total number of huge pages is 0, which signifies that huge pages are disabled.
Script to quantify Huge Pages
Below is a simple script that returns the number of huge pages required. Execute it on your Linux box while PostgreSQL is running, and ensure that the $PGDATA environment variable is set to PostgreSQL's data directory.
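A sketch of such a script (it estimates the requirement from the postmaster's peak virtual memory; treat the result as a starting point, not an exact answer):

#!/bin/bash
# Estimate the number of huge pages needed by the running PostgreSQL instance.
pid=$(head -1 "$PGDATA/postmaster.pid")          # postmaster PID
echo "Pid:            $pid"
peak=$(grep ^VmPeak /proc/$pid/status | awk '{ print $2 }')
echo "VmPeak:         $peak kB"
hps=$(grep ^Hugepagesize /proc/meminfo | awk '{ print $2 }')
echo "Hugepagesize:   $hps kB"
echo "Suggested vm.nr_hugepages: $((peak / hps))"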
After setting vm.nr_hugepages to the suggested value and restarting PostgreSQL with huge_pages = on, checking /proc/meminfo again will show that most of the huge pages are in use.
Note: The sample value for HugePages used here is very low, which is not a normal value for a big production machine. Please assess the required number of pages for your system and set those accordingly depending on your system’s workload and resources.
Now, tuning PostgreSQL parameters and kernel parameters alone is not enough for good PostgreSQL performance; there are many other factors, like
How you write your queries
Proper indexing — for this you can follow the indexing series on our blog
Proper partitioning and sharding according to the business use case
and many more.
Stay tuned for more blogs on optimizing PostgreSQL performance.
Out of the box, the default PostgreSQL configuration is not tuned for any particular workload. Default values are set to ensure that PostgreSQL runs everywhere, with the least resources it can consume and so that it doesn’t cause any vulnerabilities. It has default settings for all of the database parameters. It is primarily the responsibility of the database administrator or developer to tune PostgreSQL according to their system’s workload. In this blog, we will establish basic guidelines for setting PostgreSQL database parameters to improve database performance according to workload.
Bear in mind that while optimizing PostgreSQL server configuration improves performance, a database developer must also be diligent when writing queries for the application. If queries perform full table scans where an index could be used or perform heavy joins or expensive aggregate operations, then the system can still perform poorly even if the database parameters are tuned. It is important to pay attention to performance when writing database queries.
Nevertheless, database parameters are very important too, so let's take a look at the eight that have the greatest potential to improve performance. But before that, it is important to understand the basics of the PostgreSQL memory architecture:
Memory in PostgreSQL can be classified into two categories:
Local Memory area: It is allocated by each backend process for its own use.
Shared memory area: It is used by all processes of a PostgreSQL server.
So now I will explain the tunable parameters and what their values should be.
shared_buffers
PostgreSQL uses its own buffer and also uses kernel buffered IO. That means data is stored in memory twice, first in the PostgreSQL buffer and then in the kernel buffer. Unlike other databases, PostgreSQL does not provide direct IO. This is called double buffering. The PostgreSQL buffer is named shared_buffers, and it is the most effective tunable parameter for most operating systems. This parameter sets how much dedicated memory will be used by PostgreSQL for caching.
The default value of shared_buffers is set very low, and you will not get much benefit from it. It is set low because certain machines and operating systems do not support higher values. But on most modern machines, you need to increase this value for optimal performance.
The recommended value is 25% of your machine's total RAM. You should also try some lower and higher values, because in some cases good performance is achieved with a setting over 25%. The configuration really depends on your machine and the working data set. If your working set of data can easily fit into your RAM, then you might want to increase the shared_buffers value to contain your entire database, so that the whole working set of data can reside in cache. That said, you obviously do not want to reserve all RAM for PostgreSQL.
In production environments, it is observed that a large value for shared_buffer gives really good performance, though you should always benchmark to find the right balance.
On the other hand, while a larger shared_buffers value can increase performance for read-heavy use cases, a large shared_buffers value can be detrimental for write-heavy use cases, as the entire contents of shared_buffers must be processed during writes.
Please note that the database server needs to be restarted after this change.
testdb=# SHOW shared_buffers;
 shared_buffers
----------------
 128MB
(1 row)
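A sketch of changing it (the 25%-of-RAM value here assumes a dedicated 16 GB server; this parameter requires a restart to take effect):

testdb=# ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM
-- then restart PostgreSQL, e.g. systemctl restart postgresql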
wal_buffers
Write-Ahead Logging (WAL) is a standard method for ensuring integrity of data. Much like in the shared_buffers setting, PostgreSQL writes WAL records into buffers and then these buffers are flushed to disk.
The default size of the buffer is set by the wal_buffers setting, initially at 16 MB. If the system being tuned has a large number of concurrent connections, then a higher value for wal_buffers can provide better performance.
effective_cache_size
effective_cache_size has a reputation for being a confusing PostgreSQL setting, and as such, it is often left at the default value.
The effective_cache_size value provides a rough estimate of how much memory is available for disk caching by the operating system and within the database itself, after taking into account what is used by the OS and other applications.
This value is used only by the PostgreSQL query planner to figure out whether plans it’s considering would be expected to fit in RAM or not. As such, it’s a bit of a fuzzy number to define for general use cases.
A conservative value for effective_cache_size would be ½(50%) of the total memory available on the system. Most commonly, the value is set to 75% of the total system memory on a dedicated DB server, but can vary depending on the specific discrete needs on a particular server workload.
If the value for effective_cache_size is too low, then the query planner may decide not to use some indexes, even if they would help greatly increase query speed.
So, conclusively, the general recommendation for effective_cache_size is as follows:
Set the value to the amount of file system cache available. On UNIX/Linux-like systems, add the free+cached numbers from the free or top commands to get an estimate.
If you don't know, set the value to 50% or 75% of total system memory, especially on a dedicated DB server.
work_mem
This configuration is used for complex sorting. If you have to do complex sorting then increase the value of work_mem for good results. In-memory sorts are much faster than sorts spilling to disk. Setting a very high value can cause a memory bottleneck for your deployment environment because this parameter is per user sort operation. Therefore, if you have many users trying to execute sort operations, then the system will allocate work_mem * total sort operations for all users. Setting this parameter globally can cause very high memory usage. So it is highly recommended to modify this at the session level.
testdb=# SET work_mem TO '2MB';
testdb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Gather Merge  (cost=509181.84..1706542.14 rows=10000116 width=24)
   Workers Planned: 4
   ->  Sort  (cost=508181.79..514431.86 rows=2500029 width=24)
         Sort Key: b
         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)
(5 rows)
The initial query’s sort node has an estimated cost of 514431.86. Cost is an arbitrary unit of computation. For the above query, we have a work_mem of only 2MB. For testing purposes, let’s increase this to 256MB and see if there is any impact on cost.
testdb=# SET work_mem TO '256MB';
testdb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Gather Merge  (cost=355367.34..1552727.64 rows=10000116 width=24)
   Workers Planned: 4
   ->  Sort  (cost=354367.29..360617.36 rows=2500029 width=24)
         Sort Key: b
         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)
(5 rows)
The query cost is reduced to 360617.36 from 514431.86 — a 30% reduction.
So, conclusively: setting the value higher always gives better sorting and hashing, but setting it locally for specific queries is recommended. Set the value high only in sessions or queries where you expect heavy sorting; otherwise keep the global value low.
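A sketch of scoping the higher value to a single transaction instead of setting it globally:

BEGIN;
SET LOCAL work_mem = '256MB';      -- applies only inside this transaction
SELECT * FROM bar ORDER BY bar.b;  -- the heavy sort
COMMIT;                            -- work_mem reverts automatically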
maintenance_work_mem
maintenance_work_mem is a memory setting used for maintenance tasks. The default value is 64MB. Setting a large value helps in tasks like VACUUM, RESTORE, CREATE INDEX, ADD FOREIGN KEY and ALTER TABLE.
It is necessary to remember that when autovacuum runs, up to autovacuum_max_workers times this memory may be allocated, so be careful not to set the default value too high.
The default value of maintenance_work_mem = 64MB.
The general recommendation for setting maintenance_work_mem is as follows:
Set the value to 10% of system memory, up to 1 GB.
You can set it even higher if you are having VACUUM problems.
We can also temporarily increase this memory while creating indexes, during dump restores, or while performing full vacuums.
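For example (the table and index names are illustrative):

SET maintenance_work_mem TO '1GB';              -- just for this session
CREATE INDEX idx_orders_orderno ON orders (orderno);
RESET maintenance_work_mem;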
synchronous_commit
This is used to enforce that commit will wait for WAL to be written on disk before returning a success status to the client. This is a trade-off between performance and reliability. If your application is designed such that performance is more important than the reliability, then turn off synchronous_commit. This means that there will be a time gap between the success status and a guaranteed write to disk. In the case of a server crash, data might be lost even though the client received a success message on commit. In this case, a transaction commits very quickly because it will not wait for a WAL file to be flushed, but reliability is compromised.
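A sketch of relaxing it per session for low-value, high-volume writes where losing the last few transactions on a crash is acceptable:

SET synchronous_commit TO off;  -- commit returns before WAL reaches disk
-- ... perform the bulk writes ...
SET synchronous_commit TO on;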
temp_buffers
This parameter sets the maximum number of temporary buffers used by each database session. These session-local buffers are used only for access to temporary tables. The setting can be changed within individual sessions, but only before the first use of temporary tables within the session.
PostgreSQL uses this memory area for holding the temporary tables of each session; it is cleared when the connection is closed.
The default value of temp_buffers = 8MB.
Conclusion
There are more parameters that can be tuned to gain better performance but those have less impact than the ones highlighted here. In the end, we must always keep in mind that not all parameters are relevant for all applications types. Some applications perform better by tuning a parameter and some don’t. Tuning PostgreSQL Database Parameters must be done for the specific needs of an application and the OS it runs on.
Also, performance tuning does not depend only on Postgres configuration parameters; there are many system parameters on which PostgreSQL performance also depends. So, in my next blog, I will explain some system parameters that can affect PostgreSQL performance.
Session leaks are very common in developers' lives. In this article, I will explain some consequences of session leaks that I faced when an issue came to me, and how I got to the root cause of the issue.
Issue and Analysis
Issue: load is increasing on the server, and Postgres queries are consuming CPU and taking a long time.
I had to resolve this issue and find the root cause of it.
Analysis: After all my debugging, I came to the following analysis:
There is a table that has 200 rows, but the number of live tuples showing there is much higher (around 60K). We are using PostgreSQL 9.3.
The following are the queries that I ran.
select count(*) from subscriber_offset_manager;
 count
-------
   200
(1 row)
SELECT schemaname,relname,n_live_tup,n_dead_tup FROM pg_stat_user_tables where relname='subscriber_offset_manager' ORDER BY n_dead_tup;
 schemaname |          relname          | n_live_tup | n_dead_tup
------------+---------------------------+------------+------------
 public     | subscriber_offset_manager |      61453 |          5
(1 row)
But as seen from pg_stat_activity and pg_locks, we were not able to track any open connection holding this table.
SELECT query, state,locktype,mode FROM pg_locks JOIN pg_stat_activity USING (pid) WHERE relation::regclass = 'subscriber_offset_manager'::regclass ;
query | state | locktype | mode
-------+-------+----------+------
(0 rows)
I also tried a full vacuum on this table. The results were as follows:
No rows were ever removed.
Many times, all the live tuples became dead tuples.
Here is the output of running the vacuum command:

vacuum FULL VERBOSE ANALYZE subscriber_offset_manager;
INFO:  vacuuming "public.subscriber_offset_manager"
INFO:  "subscriber_offset_manager": found 0 removable, 67920 nonremovable row versions in 714 pages
DETAIL:  67720 dead row versions cannot be removed yet. CPU 0.01s/0.06u sec elapsed 0.13 sec.
INFO:  analyzing "public.subscriber_offset_manager"
INFO:  "subscriber_offset_manager": scanned 710 of 710 pages, containing 200 live rows and 67720 dead rows; 200 rows in sample, 200 estimated total rows
VACUUM
After that, I checked the live and dead tuples for the table again:

SELECT schemaname,relname,n_live_tup,n_dead_tup FROM pg_stat_user_tables where relname='subscriber_offset_manager' ORDER BY n_dead_tup;
 schemaname |          relname          | n_live_tup | n_dead_tup
------------+---------------------------+------------+------------
 public     | subscriber_offset_manager |        200 |      67749
(1 row)
After 10 seconds:
SELECT schemaname,relname,n_live_tup,n_dead_tup FROM pg_stat_user_tables where relname='subscriber_offset_manager' ORDER BY n_dead_tup;
 schemaname |          relname          | n_live_tup | n_dead_tup
------------+---------------------------+------------+------------
 public     | subscriber_offset_manager |      68325 |        132
(1 row)
All the dead tuples had moved back to live tuples, instead of being cleaned up.
One more interesting observation: when I stopped my Java app and then ran a full vacuum, it worked fine (the number of rows and live tuples became equal). So there was something wrong with how we were continuously selecting and updating from the Java app.
After all the research and analysis, with help from Stack Overflow and many other links, I found the following root cause.
Root Cause:
When there is one long-running transaction or a database session leak, then dead tuples are created after the start time of that transaction and will not be cleaned up by the vacuum for all the tables for that database. This is due to the PostgreSQL vacuum process that checks for a transaction ID less than the transaction ID of the oldest transaction for cleaning dead rows. The transaction ID is generated globally.
When I checked, I found a transaction that had been open for far too long; when I killed it, the vacuum worked fine.
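A sketch of how to find such long-open transactions and kill the offending backend (pg_stat_activity columns vary slightly across versions; this form works on 9.3):

-- list backends with an open transaction, oldest first
SELECT pid, state, xact_start, now() - xact_start AS open_for, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;

-- terminate the offender by its pid
SELECT pg_terminate_backend(<pid>);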
Please read the below-given links for detailed info and effects of not ending database transaction and some Postgres internals. These links helped me a lot in solving the issue.
Many times, we face heap dump issues in our Java servers. Heap usage increases because the system is generating many objects; if their lifecycle is long and the system keeps generating them, they will consume all the RAM and a heap dump may occur. This may happen due to memory leaks, bad coding, etc.
But these are not the only reasons for heap dumps. Once, I faced an issue with a customer where heap dumps were occurring regularly, and after debugging, I found that the root cause of the issue was fine-level logging. You may be surprised how logging can cause a heap dump.
Too much logging can cause a heap dump even if the logs are fine-level, not info-level. The surprising fact is that fine logging was not even enabled.
Why it happened is as follows:
Generally, we write fine logging like in this example (someValue stands for whatever dynamic data is being logged):

logger.fine("fine logs " + someValue);

Now, what happens behind the scenes (in the JVM) is that even if fine logs are disabled, the message string is still built and passed in as an argument, because the check for whether fine logging is enabled happens inside the fine() method. So the concatenation runs and the resulting objects are created, only to be thrown away.
If there are too many fine logs in the system and many threads are executing them, a large number of char[]/String objects are created, filling the heap/RAM.
The Solution
There are two ways we should do fine logging.
1. Check the "if" condition before every fine log, as follows (the java.util.logging check is Logger.isLoggable(Level.FINE)):

if (logger.isLoggable(Level.FINE)) {
    logger.fine("your log " + someValue);
}

But this does not look clean (lots of "if"s in the code...).
2. With Java 8, the Logger API gained overloads that do not take a String as an argument; they take a Supplier<String>, which is evaluated inside the fine() method only when the level is actually loggable. This way, the message string is not created until the supplier is invoked.
Here is an example:

logger.fine(() -> "fine logs " + someValue);
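A minimal self-contained sketch contrasting the two styles (the class and helper method names are made up for the demo):

import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLogDemo {
    private static final Logger LOGGER = Logger.getLogger(LazyLogDemo.class.getName());

    public static void main(String[] args) {
        // Eager: buildPayload() runs and the message string is allocated,
        // even though FINE is disabled under the default INFO level.
        LOGGER.fine("payload: " + buildPayload());

        // Lazy: the lambda is invoked only if FINE is actually loggable,
        // so nothing is allocated here under the default level.
        LOGGER.fine(() -> "payload: " + buildPayload());
    }

    private static String buildPayload() {
        return new String(new char[1024]); // stands in for an expensive message
    }
}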
With functional programming, the code looks nice and the problem was resolved.