
I use the socket option SO_TIMESTAMP to get kernel timestamps on received UDP datagrams with recvmsg().

It works well on Linux but not on macOS. In practice, it seems that macOS does return the timestamp in the ancillary data, but uses an unexpected message type to identify it. Expert advice would be appreciated.

The following sample program listens for a single UDP datagram, using the socket option SO_TIMESTAMP, and looks for the timestamp in the ancillary data returned by recvmsg(). The level, type, and size of every message found in the ancillary data are displayed.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>   // struct timeval (payload of the SO_TIMESTAMP ancillary data)
#include <sys/socket.h>
#include <arpa/inet.h>

void check(int success, const char* msg)
{
    if (!success) {
        perror(msg);
        exit(EXIT_FAILURE);
    }
}

int main(int argc, char* argv[])
{
    // Get local port number from command line.
    int port = argc < 2 ? 0 : atoi(argv[1]);
    check(port > 0, "specify a UDP port");

    int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    check(sock >= 0, "socket()");

    int optval = 1;
    check(setsockopt(sock, SOL_SOCKET, SO_TIMESTAMP, &optval, sizeof(optval)) == 0, "setsockopt(SO_TIMESTAMP)");

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr)); // also clears sin_zero (and sin_len on BSD/macOS)
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY; // listen on all local addresses
    addr.sin_port = htons(port);
    check(bind(sock, (struct sockaddr*)(&addr), sizeof(addr)) == 0, "bind()");

    char indata[1024];
    char ancil_data[1024];

    struct iovec vec;
    vec.iov_base = indata;
    vec.iov_len = sizeof(indata);

    struct msghdr hdr;
    memset(&hdr, 0, sizeof(hdr));
    hdr.msg_iov = &vec;
    hdr.msg_iovlen = 1; // number of iovec structures
    hdr.msg_control = ancil_data;
    hdr.msg_controllen = sizeof(ancil_data);

    printf("waiting for message on UDP port %d (use echo foo >/dev/udp/127.0.0.1/%d)\n", port, port);

    ssize_t insize = recvmsg(sock, &hdr, 0);
    check(insize >= 0, "recvmsg()");

    for (struct cmsghdr* cmsg = CMSG_FIRSTHDR(&hdr); cmsg != NULL; cmsg = CMSG_NXTHDR(&hdr, cmsg)) {
        printf("cmsg_level: %d, cmsg_type: %d, cmsg_len: %d\n", cmsg->cmsg_level, cmsg->cmsg_type, (int)(cmsg->cmsg_len));
        if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SO_TIMESTAMP && cmsg->cmsg_len >= sizeof(struct timeval)) {
            const struct timeval* ts = (const struct timeval*)(CMSG_DATA(cmsg));
            printf("timestamp: %d.%06d\n", (int)(ts->tv_sec), (int)(ts->tv_usec));
        }
    }

    close(sock);
}

On Ubuntu 25.04 (Linux kernel 6.14.0), we get this:

$ ./timestamp 12345
waiting for message on UDP port 12345 (use echo foo >/dev/udp/127.0.0.1/12345)
cmsg_level: 1, cmsg_type: 29, cmsg_len: 32
timestamp: 1760214400.671861
  • cmsg_level 1 is SOL_SOCKET
  • cmsg_type 29 is SO_TIMESTAMP

On macOS 15.6.1 (Darwin 24.6.0, XNU kernel 11417.140.69~1), we get this:

$ ./timestamp 12345
waiting for message on UDP port 12345 (use echo foo >/dev/udp/127.0.0.1/12345)
cmsg_level: 65535, cmsg_type: 2, cmsg_len: 28
  • cmsg_level 65535 is SOL_SOCKET
  • cmsg_type 2 is SO_ACCEPTCONN (the expected SO_TIMESTAMP is 1024 on macOS)

There is no timestamp, and I don't understand the SO_ACCEPTCONN entry, especially with a length of 28. SO_ACCEPTCONN is documented as a "value indicating whether or not this socket has been marked to accept connections with listen(2)": just an integer, read with getsockopt(), not something expected in recvmsg() ancillary data, and certainly not 28 bytes long.
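For reference, the numeric values above can be double-checked directly against the platform's <sys/socket.h> with a trivial program (independent from the code above):

#include <stdio.h>
#include <sys/socket.h>

// Print the numeric values of the constants discussed above, as defined by
// this platform's headers. On macOS, this is expected to print:
//   SOL_SOCKET=65535 SO_TIMESTAMP=1024 SO_ACCEPTCONN=2
int main(void)
{
    printf("SOL_SOCKET=%d SO_TIMESTAMP=%d SO_ACCEPTCONN=%d\n",
           SOL_SOCKET, SO_TIMESTAMP, SO_ACCEPTCONN);
    return 0;
}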

If I remove the setsockopt(...SO_TIMESTAMP...), there is no longer any control message, no more SO_ACCEPTCONN.

So, we can suspect that the SO_ACCEPTCONN entry is in fact an SO_TIMESTAMP entry with an incorrect value in cmsg_type. To confirm this, I modified the code to accept SO_ACCEPTCONN instead of SO_TIMESTAMP (therefore interpreting its data as a struct timeval), loop on message reception, and compute and display the differences between successive supposed timestamps (see the sketch below). When I send a UDP message every two seconds, the displayed difference is exactly 2,000,000 microseconds.
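The modified receive loop looks roughly like this (a simplified sketch, not the exact code I ran; it reuses sock, indata, ancil_data, and check() from the program above):

// Accept cmsg_type SO_ACCEPTCONN, interpret the payload as a struct timeval,
// and print the difference with the previously received "timestamp".
struct timeval prev = {0, 0};
for (;;) {
    struct iovec vec = { .iov_base = indata, .iov_len = sizeof(indata) };
    struct msghdr hdr;
    memset(&hdr, 0, sizeof(hdr));
    hdr.msg_iov = &vec;
    hdr.msg_iovlen = 1;
    hdr.msg_control = ancil_data;
    hdr.msg_controllen = sizeof(ancil_data);
    check(recvmsg(sock, &hdr, 0) >= 0, "recvmsg()");

    for (struct cmsghdr* cmsg = CMSG_FIRSTHDR(&hdr); cmsg != NULL; cmsg = CMSG_NXTHDR(&hdr, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SO_ACCEPTCONN && // sic
            cmsg->cmsg_len >= CMSG_LEN(sizeof(struct timeval))) {
            struct timeval ts;
            memcpy(&ts, CMSG_DATA(cmsg), sizeof(ts)); // copy out of the control buffer
            if (prev.tv_sec != 0) {
                long long diff_us = (long long)(ts.tv_sec - prev.tv_sec) * 1000000
                                  + (ts.tv_usec - prev.tv_usec);
                printf("difference: %lld microseconds\n", diff_us);
            }
            prev = ts;
        }
    }
}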

This shows that macOS does return the timestamps, but with the wrong cmsg_type: the value of SO_ACCEPTCONN instead of SO_TIMESTAMP.

Do you think that this is the right interpretation?

How does one report macOS bugs? The "Apple Feedback Assistant" seems to be more application-oriented. Is it appropriate for developers?

1 Answer


Quoting the FreeBSD man page for setsockopt (as the macOS man page doesn't seem to include this information):

The cmsghdr fields have the following values for TIMESTAMP by default:

   cmsg_len   = CMSG_LEN(sizeof(struct timeval));
   cmsg_level = SOL_SOCKET;
   cmsg_type  = SCM_TIMESTAMP;

Notice that cmsg_type uses SCM_TIMESTAMP, not SO_TIMESTAMP. The value of SCM_TIMESTAMP is defined as 0x02 in <sys/socket.h>.
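In other words, the check in the question's receive loop only needs a one-token change; a minimal sketch, keeping the rest of the program as-is:

// SCM_TIMESTAMP identifies the timestamp in ancillary data;
// SO_TIMESTAMP is only the option name used with setsockopt().
if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_TIMESTAMP &&
    cmsg->cmsg_len >= CMSG_LEN(sizeof(struct timeval))) {
    struct timeval ts;
    memcpy(&ts, CMSG_DATA(cmsg), sizeof(ts));
    printf("timestamp: %lld.%06d\n", (long long)ts.tv_sec, (int)ts.tv_usec);
}

With that change, the timestamp should be printed on macOS as well as on Linux.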


2 Comments

Thank you @robertklep. The confusing part is that Linux has this: "#define SCM_TIMESTAMP SO_TIMESTAMP", hiding the error of using SO_TIMESTAMP instead of SCM_TIMESTAMP on Linux.
@ThierryLelegard so you were just lucky that it worked on Linux ;D
