
I am implementing a UDP listen server based on this https://linux.m2osw.com/c-implementation-udp-clientserver. I noticed that when establishing a receive timeout, the author passes "f_socket + 1" as the first argument to the select call. What exactly does this do? Any explanation would be helpful, thank you!

Excerpt of the function from the above link:

    FD_ZERO(&s);
    FD_SET(f_socket, &s);
    struct timeval timeout;
    timeout.tv_sec = max_wait_ms / 1000;
    timeout.tv_usec = (max_wait_ms % 1000) * 1000;
    int retval = select(f_socket + 1, &s, &s, &s, &timeout); 
  • Probably an array of sockets. (comment, Apr 22, 2020)

1 Answer


See https://pubs.opengroup.org/onlinepubs/007908799/xsh/select.html

The nfds argument specifies the range of file descriptors to be tested. The select() function tests file descriptors in the range of 0 to nfds-1.

Thus, that argument should be set to 1 greater than the maximum file descriptor you want to monitor.
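In the excerpt above, f_socket is the only descriptor placed in the sets, so f_socket + 1 already covers it. As a minimal sketch of the general case (the descriptor names sock_a and sock_b are hypothetical, not from the linked code), here is how you would compute that argument when monitoring more than one descriptor:

    #include <sys/select.h>
    #include <sys/time.h>

    /* Sketch: wait up to max_wait_ms for either of two hypothetical
     * sockets, sock_a or sock_b, to become readable.  The first argument
     * to select() must be one greater than the highest-numbered
     * descriptor in any of the sets. */
    int wait_readable(int sock_a, int sock_b, long max_wait_ms)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sock_a, &readfds);
        FD_SET(sock_b, &readfds);

        int maxfd = (sock_a > sock_b) ? sock_a : sock_b;

        struct timeval timeout;
        timeout.tv_sec  = max_wait_ms / 1000;
        timeout.tv_usec = (max_wait_ms % 1000) * 1000;

        /* "maxfd + 1" tells select() to test descriptors 0 .. maxfd. */
        return select(maxfd + 1, &readfds, NULL, NULL, &timeout);
    }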


1 Comment

FYI, (e)poll() is generally preferred over select(). Then you don't have to worry about this: (e)poll() is given an array of specific socket descriptors to work with, so it doesn't have to deal with ranges of descriptors.
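
For comparison, a minimal poll()-based sketch of the same timed wait (again using a hypothetical f_socket; this is not from the linked article):

    #include <poll.h>

    /* Sketch: wait up to max_wait_ms for f_socket to become readable.
     * poll() takes an array of descriptors, so there is no
     * "highest descriptor + 1" argument to get right. */
    int wait_readable_poll(int f_socket, int max_wait_ms)
    {
        struct pollfd pfd;
        pfd.fd = f_socket;
        pfd.events = POLLIN;

        /* Returns >0 if readable, 0 on timeout, -1 on error. */
        return poll(&pfd, 1, max_wait_ms);
    }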
