
My Mule application consists of 2 nodes running in a cluster, and it listens to an IBM MQ cluster (essentially connecting to 2 queue managers). There are situations where one Mule node pulls more than 80% of the messages from the MQ cluster while the other node picks up the remaining 20%. This causes CPU performance issues on the busier node. We have double-checked that the load balancing is configured correctly, yet the CPU problem still occurs occasionally. Can anybody give some ideas on what the possible reason could be?

Example: in the most recent occurrence there were 200,000 messages in the queue, and the node 2 Mule server picked up 92% of them within a few minutes.
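For reference, each node's consumer is conceptually equivalent to a plain IBM MQ JMS client like the sketch below (the host, port, channel, queue manager and queue names are placeholders, not our actual configuration):

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class NodeConsumer {
    public static void main(String[] args) throws Exception {
        // Each Mule node is pinned 1-to-1 to "its" queue manager in the MQ cluster.
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setHostName("wmq-node01.example.com"); // placeholder host
        cf.setPort(1414);                         // placeholder listener port
        cf.setQueueManager("WMQ_NODE01");         // placeholder queue manager
        cf.setChannel("APP.SVRCONN");             // placeholder server-connection channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        try (Connection conn = cf.createConnection();
             Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
            MessageConsumer consumer = session.createConsumer(session.createQueue("ABCD"));
            conn.start();
            // MQ hands each message to whichever connected consumer is ready first,
            // so the split between the two nodes is not guaranteed to be 50/50.
            Message msg;
            while ((msg = consumer.receive(5000)) != null) {
                // hand the message off to the flow's processing logic
            }
        }
    }
}
```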

  • Does this answer your question? How does IBM MQ QM distribute messages over multiple consumers Commented Dec 19, 2019 at 14:34
  • The answer to the other question describes why you see more messages consumed on one of your two Mule servers. Commented Dec 19, 2019 at 14:35
  • Just to be sure, regarding the link provided: my application stores persistent messages in MQ. One Mule flow puts messages on queue ABCD, and another Mule flow gets messages from the same queue ABCD. So do you mean a retention lock is possible on the queue when the message count is > 200,000 or the message size is > 4 MB? Commented Dec 19, 2019 at 15:41
  • Based on my analysis of the last 5 months, we had the same problem 14 times, and every time it was Mule server node 2 that raised the CPU usage alert; node 1 was fine throughout the year. If a queue contention lock occurred, I would expect it to affect both nodes... Commented Dec 19, 2019 at 16:17
  • I'm talking specifically about Mark Taylor's answer on why distribution is uneven. Mark is from IBM. Messages will not be given to each of the two servers 50/50; they will be given to the most recent consumer that is ready to accept a new message. Commented Dec 19, 2019 at 16:18

2 Answers


This issue has been fixed now. We got to the root cause: our Mule application running on MULE_NODE01 reads from and writes to WMQ_NODE01, and likewise for node 2. One of the Mule nodes (let's say MULE_NODE02) reads from the Linux/Windows file system and puts huge messages onto its corresponding WMQ_NODE02. IBM MQ then tries to push the bulk of that load to the other WMQ node to balance the workload. That's why MULE_NODE01 ends up reading all those large messages from WMQ_NODE01 and triggers the CPU usage alerts.

@JoshMc, your clue helped a lot in understanding the issue. Thanks a lot for helping.

It's the WMQ node in the cluster that tries to push the excess load to the other WMQ node; this seems to be how MQ works internally.

To solve this, we are now connecting our Mule nodes to an MQ gateway instead of keeping the 1-to-1 connectivity.
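As a rough illustration of what changed (assuming the IBM MQ JMS client; the gateway endpoint, channel and queue manager names below are placeholders, not our real values), the connection factory now points at the gateway endpoint instead of a single pinned queue manager:

```java
import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class GatewayConnectionFactory {
    static MQConnectionFactory create() throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        // Before: cf.setHostName("wmq-node01.example.com"); cf.setQueueManager("WMQ_NODE01");
        // After: go through the gateway endpoint(s) and let MQ place the work.
        cf.setConnectionNameList("mq-gateway.example.com(1414)"); // placeholder gateway endpoint(s)
        cf.setQueueManager("GATEWAY_QM");                         // placeholder gateway queue manager
        cf.setChannel("APP.SVRCONN");                             // placeholder channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        return cf;
    }
}
```

A client channel definition table (CCDT) listing the gateway endpoints would be another way to achieve the same decoupling from individual queue managers.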



This could be solved by avoiding the race condition caused by multiple listeners: configure the listener in the cluster to run on the primary node only, republish the message to a persistent VM queue, and move the processing logic to another flow that is triggered via a VM listener, letting the Mule cluster do the load balancing.
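The real configuration for this is Mule XML (a listener restricted to the primary node, a vm:publish, and a separate flow with a vm:listener). As a language-neutral sketch of the same decoupling idea, a single reader feeding a work queue that several workers drain, here is a plain Java illustration; the class, queue and message names are made up for the example, and the in-memory queue stands in for the persistent VM queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class SingleReaderWorkerPool {
    public static void main(String[] args) {
        // Stand-in for the persistent VM queue (in Mule it survives failover;
        // here it is just an in-memory queue for illustration).
        BlockingQueue<String> workQueue = new LinkedBlockingQueue<>();

        // One "primary node" reader takes messages from the source queue...
        ExecutorService reader = Executors.newSingleThreadExecutor();
        reader.submit(() -> {
            for (int i = 0; i < 100; i++) {
                workQueue.offer("message-" + i); // in Mule: vm:publish after the MQ listener
            }
        });

        // ...and a pool of workers (in Mule: the flow with the vm:listener,
        // load-balanced by the cluster) picks the messages up and processes them.
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int w = 0; w < 4; w++) {
            workers.submit(() -> {
                try {
                    String msg;
                    while ((msg = workQueue.poll(2, TimeUnit.SECONDS)) != null) {
                        System.out.println(Thread.currentThread().getName() + " processed " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        reader.shutdown();
        workers.shutdown();
    }
}
```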

