I have the following table in my MySQL DB:
CREATE TABLE messages (
  id VARCHAR(36) NOT NULL PRIMARY KEY,
  chat_id VARCHAR(36) NOT NULL,
  author_id VARCHAR(36) NOT NULL,
  content VARCHAR(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  visible TINYINT NOT NULL DEFAULT 1,
  request_id VARCHAR(128),
  created_at BIGINT SIGNED NOT NULL,
  updated_at BIGINT SIGNED NOT NULL,
  UNIQUE INDEX messages_chat_id_created_at (chat_id, created_at DESC)
);
It has a size of ~400 GB and holds ~700 million rows.
The only query I run against this table is the following:
SELECT *
FROM messages
WHERE chat_id = :chatId
  AND created_at <= :createdAt
  AND visible = 1
ORDER BY created_at DESC
LIMIT 20
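One variant I have been wondering about (an untested sketch, index name mine) is extending the index to cover the visible filter as well, so that rows failing visible = 1 are rejected inside the index instead of after a row lookup. Since both chat_id and visible are equality predicates, created_at DESC would still serve the ORDER BY:

```sql
-- Untested idea: with (chat_id, visible) fixed by equality,
-- the created_at DESC suffix still matches ORDER BY created_at DESC,
-- and invisible rows never trigger a clustered-index lookup.
ALTER TABLE messages
  ADD INDEX messages_chat_id_visible_created_at (chat_id, visible, created_at DESC);
```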
The table grows continuously, and although in ~90% of cases only the most recent data is fetched, I have to keep old messages in the DB, both due to our retention policy and to support users who come back to their old conversations.
The problem I have is that although p99 latency oscillates around 60 ms, I see a pretty consistent MAX latency of as much as 750 ms. A sample slow execution looks like this:
Index range scan on m using messages_chat_id_created_at, with index condition: ((m.chat_id = ?) and (m.created_at))
Rows returned: 10
Latency: 370.9 ms
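If it helps, a plan like the one above can be inspected directly with EXPLAIN ANALYZE (available since MySQL 8.0.18); a representative invocation with made-up placeholder values in place of the bind parameters:

```sql
-- Placeholder values only; real runs use the application's parameters.
EXPLAIN ANALYZE
SELECT *
FROM messages
WHERE chat_id = '00000000-0000-0000-0000-000000000000'
  AND created_at <= 1700000000000
  AND visible = 1
ORDER BY created_at DESC
LIMIT 20;
```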
Is there a quick win I can apply to speed things up a little?
Comments:

- You filter with created_at <= :createdAt, but you say: "in 90% of cases only the most recent data is fetched". This doesn't seem to match? I don't understand.
- (OP) The client initially calls with createdAt = NOW, and if the user wants to load more messages, the client calls the same method with createdAt set to the timestamp of the oldest message he has loaded so far.
- created_at or the primary key?
- Why are the *_id columns strings and not integers? For VARCHAR(36) with a utf8mb4 collation, up to 144 bytes must be compared, instead of 4 or 8 bytes for INT or BIGINT.
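On the VARCHAR(36) point raised in the comments: a hedged sketch of what a BINARY(16) key layout could look like, using MySQL 8.0's UUID_TO_BIN/BIN_TO_UUID (table and column set here are illustrative, and this would be a full migration rather than a quick win):

```sql
-- Illustrative only: BINARY(16) UUIDs compare byte-wise, avoiding the
-- up-to-144-byte collation-aware comparisons of utf8mb4 VARCHAR(36) keys.
CREATE TABLE messages_bin (
  id BINARY(16) NOT NULL PRIMARY KEY,
  chat_id BINARY(16) NOT NULL,
  created_at BIGINT SIGNED NOT NULL,
  UNIQUE INDEX messages_bin_chat_id_created_at (chat_id, created_at DESC)
);

-- The second argument `1` swaps the UUIDv1 time components so values
-- index more sequentially; reads convert back with BIN_TO_UUID.
INSERT INTO messages_bin (id, chat_id, created_at)
VALUES (UUID_TO_BIN(UUID(), 1), UUID_TO_BIN('00000000-0000-0000-0000-000000000000', 1), 1700000000000);

SELECT BIN_TO_UUID(id, 1), created_at
FROM messages_bin
WHERE chat_id = UUID_TO_BIN('00000000-0000-0000-0000-000000000000', 1);
```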