
I’m running MySQL server on Linux.

I have a long-standing issue on my database server: it keeps consuming an excessive amount of memory for no clear reason. As a result, I am forced to periodically restart it to bring the system back to a stable state. The attached screenshot shows the memory usage trend over the past 90 days. Here are some details:

OS: CentOS / RHEL 7

MySQL version: 5.7.x (exact version available on request)

Configuration file: /etc/my.cnf.d/myserver.cnf

[mysqld@myserver]
net_read_timeout = 4800
tmp_table_size = 512M
innodb_lock_wait_timeout = 120
myisam_sort_buffer_size = 3G
read_rnd_buffer_size = 16M
max_heap_table_size = 512M
max_connect_errors = 10000
binlog_format = row
innodb_buffer_pool_size = 500M
myisam_max_sort_file_size = 1MB
thread_cache_size = 16
max_connections = 6000
max_allowed_packet = 256M
log_error = error.log
binlog_cache_size = 16m
innodb_log_buffer_size = 8M
slow_query_log_file = slow.log
innodb_max_dirty_pages_pct = 10
default_storage_engine = innodb
back_log = 50
explicit_defaults_for_timestamp = 1
innodb_log_file_size = 50331648
ft_min_word_len = 4
sql_mode = NO_ENGINE_SUBSTITUTION
thread_stack = 192K
wait_timeout = 3000
bind_address = 0.0.0.0
log_bin_trust_function_creators = 1
net_write_timeout = 4800
innodb_flush_log_at_trx_commit = 0
bulk_insert_buffer_size = 64M
transaction_isolation = REPEATABLE-READ
table_open_cache = 10000
sort_buffer_size = 4M
read_buffer_size = 4M
long_query_time = 2
default_authentication_plugin = mysql_native_password
default_password_lifetime = 0
key_buffer_size = 32M
join_buffer_size = 8M
log_slave_updates = 0
character_set_client_handshake = False
character_set_server = utf8mb4
collation_server = utf8mb4_unicode_ci
mysqlx = 0
skip_name_resolve
innodb_file_per_table
replicate_ignore_table = mysql.user
replicate_ignore_table = mysql.db
datadir = /var/lib/mysql_multi_instance/myserver
log_bin = myserver-bin
relay_log = myserver-relay
port = 32729
socket = /var/lib/mysql_multi_instance/myserver/mysql.sock
server_id = 3112729

Monitoring shows steady growth in RES and VIRT in top. No large queries or import jobs are running. Restarting the MySQL servers temporarily fixes the issue, but memory slowly grows again.

Things I’ve already checked:

- No obvious long-running queries (SHOW PROCESSLIST is clean)
- innodb_buffer_pool_size is set reasonably
- No memory leaks from the client side (connections normal)
- Error log doesn't show anything critical

My questions:

1. What could cause MySQL to allocate memory unexpectedly even under low load?
2. What tools or metrics can I use to identify which component (e.g., InnoDB, query cache, memory fragmentation, etc.) is consuming it?
3. Are there recommended debugging steps for MySQL memory growth in multi-instance setups?

Any advice or diagnostic steps would be appreciated.
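
For reference, the kind of per-component breakdown I am hoping to get would come from performance_schema's memory instrumentation. A sketch of what I understand that to look like (on 5.7 most memory/% instruments are disabled by default and only track allocations made after they are enabled):

UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'memory/%';

-- Top allocators by currently allocated bytes
SELECT EVENT_NAME,
       CURRENT_NUMBER_OF_BYTES_USED / 1024 / 1024 AS current_mb
  FROM performance_schema.memory_summary_global_by_event_name
 ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
 LIMIT 20;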

UPDATE

To clarify — we are using InnoDB as the storage engine.

We have a single large physical server that runs several Java applications and four separate MySQL instances. Two of those database instances behave normally and maintain stable memory usage, but the other two continuously allocate additional memory over time.

What’s confusing is that all four MySQL servers share the same configuration, yet only two of them exhibit this memory growth. The databases themselves are different, but their configuration parameters (including InnoDB buffer pool, query cache, etc.) are identical.

Here’s a snapshot from the server showing the memory footprint of each instance:

$ ps -aux | grep mysqld
mysql     79974  0.5  4.8 9767484 6330264 ?     Ssl  Oct08  39:06 /usr/sbin/mysqld --defaults-group-suffix=@11129
mysql     80517  0.5  6.7 12127612 8852076 ?    Ssl  Oct08  42:00 /usr/sbin/mysqld --defaults-group-suffix=@11126
mysql    116894  0.4  1.2 3841544 1670672 ?     Ssl  Sep25 116:47 /usr/sbin/mysqld --defaults-group-suffix=@11119
mysql    118122  0.4  1.1 3826188 1535424 ?     Ssl  Sep25 119:09 /usr/sbin/mysqld --defaults-group-suffix=@11118

As you can see, two instances (@11126 and @11129) are consuming a significantly higher amount of memory compared to the others, even though they are configured the same way.

We’re trying to determine what could cause these specific instances to allocate more memory — whether it’s related to workload patterns, internal InnoDB behavior, or a memory leak in MySQL itself.
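
One way to narrow that down should be to connect through each instance's socket (e.g. mysql --socket=/var/lib/mysql_multi_instance/<instance>/mysql.sock, adjusting the path per instance) and diff the same workload counters between a growing instance and a stable one; something along these lines:

SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Max_used_connections', 'Threads_connected', 'Threads_created',
   'Created_tmp_tables', 'Created_tmp_disk_tables', 'Prepared_stmt_count');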

If you need any additional information, please leave a comment.

  • Is your database actually using the InnoDB Storage Engine? You didn't tell us. You're using a decade-old version of MySQL that reached end of support two years ago, so I wouldn't be surprised if you still use the MyISAM engine. In that case key_buffer_size is important. I do notice that you've set the memory limits very low, which is strange if you really have 128 GB RAM. I also cannot tell, from the picture, that it is MySQL taking up all the memory. Commented Oct 13 at 13:47
  • I think that phpMyAdmin can already give you some basic insight and advice. I'm, of course, saying that because I'm using it. I don't pretend to know all the tools for MySQL. Commented Oct 13 at 13:48
  • I appreciate you taking the time to look into my issue. I've updated the question to answer yours; please take a look and tell me if you need more information. Commented Oct 13 at 15:55
  • Check Java; it eats up memory, then does a garbage collection. Commented Oct 13 at 16:59
  • man7.org/linux/man-pages/man1/ps.1.html says it's in KiB for both RSS and VSZ. As for adding up to the picture, I think the memory graph shown in the question includes four mysqld instances, as well as multiple Java applications with unknown memory size, as well as all other processes. As you said, there's not enough specific information to conclude the memory consumption problem is in MySQL Server. Commented Oct 14 at 15:12

1 Answer


There are a lot of red flags in your config.

You said you are using InnoDB rather than MyISAM, yet you have these set:

myisam_sort_buffer_size = 3G
myisam_max_sort_file_size = 1MB
key_buffer_size = 32M
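
If you want to confirm that nothing is still on MyISAM (in which case these can be left at their defaults), a query along these lines should list any remaining MyISAM tables outside the system schemas:

SELECT TABLE_SCHEMA, TABLE_NAME
  FROM information_schema.TABLES
 WHERE ENGINE = 'MyISAM'
   AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');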

These are per temporary table, so they can lead to huge memory usage quickly if your code isn't cleaning up after itself:

tmp_table_size = 512M
max_heap_table_size = 512M
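
You can see how often internal temporary tables are created, and how often they spill to disk, with:

SHOW GLOBAL STATUS LIKE 'Created_tmp%';

If Created_tmp_tables climbs quickly, remember each in-memory temporary table can grow up to that 512M cap before it is converted to an on-disk table.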

All per-connection memory can be multiplied by this number:

max_connections = 6000
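
As a rough illustration (back-of-the-envelope only, since these buffers are not all allocated at once or always at full size), the per-connection buffers alone multiply out like this:

-- sort_buffer_size + read_buffer_size + read_rnd_buffer_size
--   + join_buffer_size + thread_stack + binlog_cache_size, in MB
SELECT 6000 * (4 + 4 + 16 + 8 + 0.1875 + 16) AS worst_case_mb;
-- ≈ 289,125 MB (~282 GB), before the buffer pool or any in-memory temp tables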

This is absurdly small for 6000 connections:

thread_cache_size = 16
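
You can check whether the thread cache is keeping up by comparing how many threads have been created against how many connections have been made; a high ratio means threads are constantly being built and torn down:

SHOW GLOBAL STATUS LIKE 'Threads_%';
SHOW GLOBAL STATUS LIKE 'Connections';
-- Thread cache miss rate ≈ Threads_created / Connections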

Per connection settings:

max_allowed_packet = 256M
bulk_insert_buffer_size = 64M
sort_buffer_size = 4M
read_buffer_size = 4M
join_buffer_size = 8M

join_buffer_size in particular is a great way to ruin your performance and memory usage, and if you find that you need it, you need to fix your indexing instead.
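
You can spot it in query plans: if EXPLAIN shows "Using join buffer (Block Nested Loop)" in the Extra column, that join has no usable index and will allocate a join buffer. The table names here are just placeholders:

EXPLAIN
SELECT o.id, c.name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id;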

You probably shouldn't be doing this unless you have very specifically benchmarked it to make a difference.

read_rnd_buffer_size = 16M

You almost certainly don't need this:

binlog_cache_size = 16m
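
The default (32K) is enough unless your transactions routinely overflow it, which you can verify with:

SHOW GLOBAL STATUS LIKE 'Binlog_cache%';
-- If Binlog_cache_disk_use stays near zero relative to Binlog_cache_use,
-- the default cache size is already large enough.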

It looks like you are falling into the very common trap of pre-emptively overtuning.
