I am using GridDB CE to handle time-series data and run long-running analytical queries. During these queries, I observe significant memory usage spikes that degrade performance or lead to errors. Here is my setup:

  • GridDB Version: 4.6.0.
  • Cluster Configuration: 3 nodes with 8GB RAM each.
  • Data Characteristics: ~10 million rows per container.

Example query:

  SELECT AVG(temperature)  
  FROM sensor_data  
  WHERE timestamp BETWEEN TIMESTAMPADD(DAY, -7, NOW()) AND NOW();
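
The two columns this query touches correspond to a simple time-series row; as a rough sketch, the mapping in the GridDB Java client would look something like this (simplified; the container's remaining sensor columns are omitted):

  import java.util.Date;

  import com.toshiba.mwcloud.gs.RowKey;

  // Simplified sketch of the sensor_data row type: the TIMESTAMP row key plus
  // the temperature column used by the aggregate; other columns are omitted.
  public class SensorData {
      @RowKey
      public Date timestamp;     // row key of the time-series container
      public double temperature;
  }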

Memory usage increases with query duration, especially for aggregates over large ranges. How can I optimize memory usage? Specifically:

  1. Are there configuration parameters (e.g., cache size) that I should adjust?
  2. Does indexing reduce query memory usage for time-series containers?
  3. Should queries be chunked into smaller parts, or are there built-in GridDB features to manage memory better?
  4. Does adding nodes or rebalancing the cluster help distribute memory load?

I’ve tried adding indexes and breaking queries into smaller chunks, but I’m looking for specific GridDB strategies to improve performance while minimizing memory use.
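
For what it's worth, the chunked variant I tried looks roughly like the sketch below (GridDB Java client; the connection settings, the one-day window size, and the TOTAL/COUNT pairing are just what I experimented with, not a recommended pattern):

  import java.util.Date;
  import java.util.Properties;

  import com.toshiba.mwcloud.gs.Aggregation;
  import com.toshiba.mwcloud.gs.AggregationResult;
  import com.toshiba.mwcloud.gs.GridStore;
  import com.toshiba.mwcloud.gs.GridStoreFactory;
  import com.toshiba.mwcloud.gs.Row;
  import com.toshiba.mwcloud.gs.TimeSeries;

  public class ChunkedAverage {
      public static void main(String[] args) throws Exception {
          // Placeholder connection settings; replace with your own cluster values.
          Properties props = new Properties();
          props.setProperty("notificationMember", "127.0.0.1:10001");
          props.setProperty("clusterName", "myCluster");
          props.setProperty("user", "admin");
          props.setProperty("password", "admin");
          GridStore store = GridStoreFactory.getInstance().getGridStore(props);
          TimeSeries<Row> ts = store.getTimeSeries("sensor_data");

          long dayMs = 24L * 60 * 60 * 1000;
          long endMs = System.currentTimeMillis();
          long startMs = endMs - 7 * dayMs;

          double sum = 0;
          long count = 0;

          // Aggregate one day at a time so each call only scans a bounded row range,
          // then rebuild the overall average from the per-window TOTAL and COUNT.
          for (long winStart = startMs; winStart < endMs; winStart += dayMs) {
              long winEnd = Math.min(winStart + dayMs, endMs);
              // Upper bound is winEnd - 1 ms so adjacent windows do not double-count
              // a row that falls exactly on a boundary.
              AggregationResult total = ts.aggregate(
                  new Date(winStart), new Date(winEnd - 1), "temperature", Aggregation.TOTAL);
              AggregationResult cnt = ts.aggregate(
                  new Date(winStart), new Date(winEnd - 1), "temperature", Aggregation.COUNT);
              Long c = (cnt == null) ? null : cnt.getLong();
              if (total != null && c != null && c > 0) {
                  sum += total.getDouble();
                  count += c;
              }
          }

          System.out.println("7-day AVG(temperature): " + (count > 0 ? sum / count : Double.NaN));
          store.close();
      }
  }

The per-window TOTAL/COUNT pairing is only there so the overall average can be recombined exactly; a plain per-window AVERAGE cannot be merged correctly without the row counts.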

  • Comment (Dec 11, 2024 at 17:08): Have you tried a compound index on [timestamp, temperature]?
