fd_set in select(2) crashes when you try to set an fd of 1024 or greater

Introduction

Explanations on the net often say that select can only monitor up to 1024 fds because the fd_set argument only holds 1024 fds, but in reality it **does not accept any fd of 1024 or greater**. Even a single FD_SET of a number 1024 or greater will bring your program down. That is the whole conclusion, but let's trace through how it happens.

If you look closely, this is stated right in the select man page:

An fd_set is a fixed size buffer. Executing FD_CLR() or FD_SET() with a value of fd that is negative or is equal to or larger than FD_SETSIZE will result in undefined behavior.

There it is: a frightening declaration of undefined behavior.

select(2)

select(2) is a commonly used system call for programs that monitor file descriptors and socket descriptors, typically server programs and other programs that cannot know when input will arrive, to wait for incoming events.

int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *exceptfds, struct timeval *timeout);

Leaving the details aside, roughly speaking, it is a system call that monitors the fds (file descriptors) registered in an fd_set. By combining select(2) with read(2), data can be read in an event-driven way that is more efficient than busy-waiting on read. You could call it the ancestor of boost::asio::async_read.
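As a minimal sketch of that pattern (assuming nothing beyond the standard headers; watching stdin is just an illustration), it looks like this:

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    fd_set readfds;
    FD_ZERO(&readfds);              /* clear the set */
    FD_SET(STDIN_FILENO, &readfds); /* watch fd 0 for readability */

    /* nfds is the highest fd in any set plus one; a NULL timeout blocks forever */
    if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, NULL) < 0) {
        perror("select");
        return 1;
    }
    if (FD_ISSET(STDIN_FILENO, &readfds)) {
        char buf[256];
        ssize_t n = read(STDIN_FILENO, buf, sizeof(buf)); /* won't block now */
        printf("read %zd bytes\n", n);
    }
    return 0;
}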

fd_set and FD_SET

As the name implies, fd_set is a structure that represents a set of fds. Looking at my sys/select.h (heavily abridged):

typedef long int __fd_mask;
# define __FD_SETSIZE 1024
# define __NFDBITS (8 * (int)sizeof(__fd_mask))

typedef struct
  {
    __fd_mask fds_bits[__FD_SETSIZE / __NFDBITS];
# define __FDS_BITS(set) ((set)->fds_bits)
  } fd_set;

Reading through this, on 64-bit Linux a long is 64 bits, so it boils down to:

typedef struct {
    long int fds_bits[16];
} fd_set;
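You can sanity-check this on your own machine with a throwaway sketch (the numbers below assume 64-bit glibc):

#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    /* On 64-bit glibc this prints 128 bytes = 1024 bits, and FD_SETSIZE = 1024 */
    printf("sizeof(fd_set) = %zu bytes (%zu bits), FD_SETSIZE = %d\n",
           sizeof(fd_set), sizeof(fd_set) * 8, FD_SETSIZE);
    return 0;
}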

So in the end it is a structure with 1024 bits of storage (64 bits * 16). And it is the FD_SET macro that sets an fd's bit in this area.

#define	__FD_ELT(d) ((d) / __NFDBITS)
#define	__FD_MASK(d) ((__fd_mask) (1UL << ((d) % __NFDBITS)))
#define __FD_SET(d, set) ((void) (__FDS_BITS (set)[__FD_ELT (d)] |= __FD_MASK (d)))
#define	FD_SET(fd, fdsetp) __FD_SET (fd, fdsetp)

It's a little hard to follow, so expanding the macros gives:

fdsetp->fds_bits[fd/64] |= (__fd_mask) (1UL << (fd % 64));

In short, it sets the bit at position fd inside the fd_set that fdsetp points to.
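As a quick worked example (a standalone sketch, not part of the header; 64 is __NFDBITS on 64-bit Linux), you can print where a given fd lands:

#include <stdio.h>

int main(void)
{
    /* fd = 100: 100 / 64 = 1 (word index), 100 % 64 = 36 (bit position) */
    int fd = 100;
    printf("fd %d -> fds_bits[%d], bit %d\n", fd, fd / 64, fd % 64);
    return 0;
}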

Do you see the problem?

That's right. The moment fd reaches 1024, FD_SET writes past the end of the buffer. Moreover, FD_SETSIZE is hard-coded with a #define and is too dangerous to change; Red Hat's QA page also says you should not redefine FD_SETSIZE.

The `ulimit` on open files is still often 1024 on modern major distributions, but raising that limit is perfectly natural tuning for anyone writing a server program. With Docker in particular, lifting the restriction is just an option away. And yet, if some library your program uses happens to be implemented with select(2), it will fall over out of nowhere!
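To see the failure concretely, here is a small sketch that forces a descriptor number up to the limit. The dup2 trick and fd 1024 are just one way to get there, and you need `ulimit -n` raised above 1024 for it to succeed. Since this is undefined behavior, what happens varies: glibc builds with _FORTIFY_SOURCE typically abort with a "buffer overflow detected" message, while others silently corrupt memory.

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    /* Duplicate stdout onto fd 1024; fails unless `ulimit -n` is above 1024 */
    int fd = dup2(STDOUT_FILENO, 1024);
    if (fd < 0) {
        perror("dup2 (raise ulimit -n first)");
        return 1;
    }

    fd_set readfds;
    FD_ZERO(&readfds);
    /* Undefined behavior: fd >= FD_SETSIZE writes past fds_bits[15].
       With _FORTIFY_SOURCE glibc aborts here; otherwise memory is corrupted. */
    FD_SET(fd, &readfds);
    puts("if you got here, the overflow went undetected");
    return 0;
}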

Summary

If you raise the `ulimit` and a program that uses select(2) ends up with a descriptor numbered 1024 or higher, FD_SET will blow up. In fact, I ran into this problem while writing a program that used a library for a certain network protocol. If you write low-layer network programs, avoid select(2) from now on. Please use poll(2)!
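For comparison, here is a minimal poll(2) sketch that waits on a descriptor the same way; unlike fd_set, the pollfd array puts no ceiling on the fd value itself (watching stdin is again just an illustration):

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* poll takes an array of pollfd, so the fd *value* can be arbitrarily
       large; only the number of entries matters. */
    struct pollfd fds[1];
    fds[0].fd = STDIN_FILENO;   /* any fd, even one >= 1024, is fine */
    fds[0].events = POLLIN;

    if (poll(fds, 1, -1) < 0) { /* -1 = block until an event arrives */
        perror("poll");
        return 1;
    }
    if (fds[0].revents & POLLIN)
        puts("stdin is readable");
    return 0;
}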
