Re: strange select() behavior

On Sep 16, 4:20 am, pac...@xxxxxxxxxxxxx (Alan Curry) wrote:

Why? Suppose, for example, a UDP send won't block because there's
space in the interface buffer. Another process, even one that shares
no descriptors with us, might then fill that interface buffer. In this
case, 'select' accurately reports that a send would not have blocked,
but a subsequent 'send' should block.

So now the topic changes to sending... a global (i.e. not specific to a
single socket) buffer is full? Isn't that what ENOBUFS is for?

What do you do when a blocking socket operation returns ENOBUFS?

The premise was that the implementation chose to block until buffer
space was available. So 'select' triggers when buffer space is
available because the operation would not block. Later, the buffer
space is consumed. Now the operation will block.
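
Concretely, the pattern at issue looks roughly like this sketch (a
blocking, connected UDP socket assumed; error handling trimmed and the
function name is mine):

  #include <stddef.h>
  #include <sys/select.h>
  #include <sys/socket.h>

  /* Wait until select() says the socket is writable, then send on a
   * socket that is still in blocking mode. */
  static void send_when_writable(int fd, const void *msg, size_t len)
  {
      fd_set wfds;

      FD_ZERO(&wfds);
      FD_SET(fd, &wfds);

      if (select(fd + 1, NULL, &wfds, NULL, NULL) > 0) {
          /* Window: the buffer space select() saw can be consumed by
           * someone else before we get here... */
          (void)send(fd, msg, len, 0);   /* ...so this may block after all */
      }
  }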

I already explained several ways. Suppose another process globally
enables UDP checksum validation. Suppose another process shrinks UDP
memory and the implementation decides to drop your datagram.

Yes, other processes can be relevant without access to the specific file
descriptor, if they change some global state, including memory pressure.
Anyway, for every scenario you can think of, the answer is very simple. It's
just one rule: if select promised you a read with no blocking, anything that
prevents a successful read is an error. If you don't like the name
ESORRYSELECTLIEDTOYOU, then EIO. EIO is vague enough, and it can already
happen anywhere, anytime, so you have to be ready to deal with it.

Right, but as already explained, that's impossible. There is no
unambiguous way to figure out which read it is that's not supposed to
block.

EIO doesn't help, because it's a semantic change. Your argument is
supposed to be that we can change this in a way that makes existing
code "just work". Semantic changes will break existing code. Rather
than checking existing code for breakage with the semantic change, it
makes more sense just to fix the code to work with the existing
semantics.

The problem of unambiguously identifying the "subsequent operation" is
provably insoluble. The best solution is simply to require the
application to identify what operation it thinks is subsequent. This
is precisely what the O_NONBLOCK flag already does.
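
For illustration, that approach looks something like the following
sketch (error handling trimmed; the function names are mine and 'fd'
is whatever socket the application obtained elsewhere):

  #include <errno.h>
  #include <fcntl.h>
  #include <sys/select.h>
  #include <sys/socket.h>
  #include <sys/types.h>

  /* The application, not the kernel, declares which operations must
   * not block: it marks the descriptor non-blocking up front. */
  static int make_nonblocking(int fd)
  {
      int flags = fcntl(fd, F_GETFL, 0);
      if (flags == -1)
          return -1;
      return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
  }

  static void read_one(int fd)
  {
      char buf[2048];
      fd_set rfds;
      ssize_t n;

      FD_ZERO(&rfds);
      FD_SET(fd, &rfds);
      if (select(fd + 1, &rfds, NULL, NULL, NULL) <= 0)
          return;

      n = recv(fd, buf, sizeof buf, 0);
      if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
          /* The readability indication went stale (checksum failure,
           * memory pressure, another reader, ...); nothing hangs, the
           * caller just tries again later. */
          return;
      }
      /* ... process n bytes ... */
  }

A stale indication then shows up as EAGAIN instead of a hang, and no
other code's reads change meaning.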

This is not a "predict the future" model. It's a "maintain consistency with
the past" model.

It is predict the future, because it requires the implementation to
ensure that a future operation, done with the system in an unknown
state, will have a particular result.

You are simply incorrect, and you still haven't answered the simplest
of questions: find a reliable way to handle the case where UDP
checksumming is globally enabled in between 'select' and 'recvmsg'.

I told you. Inside select(), set a "predicted readability" status bit in the
socket object for every fd that is being indicated as readable. Inside read()
or recvmsg(), change every instance of go_to_sleep_now() to
  if(thesocket->predicted_readability)  /* select() promised readability */
    return EIO;                         /* fail instead of sleeping */
  else
    go_to_sleep_now();                  /* no outstanding promise; block as usual */
(Also you have to clear the bit before all returns. Probably you'd copy its
value into a local variable at the top of the function, then clear it and
check the copy later.)

This is a semantic change that breaks existing code. So instead of
fixing existing code to sensibly handle EIO (which, I admit, it might
already do), save yourself all the trouble and just fix existing code
to set the socket non-blocking (which, of course, it also might
already do). So this accomplishes *nothing*. It would purely be a
change for no good reason, introducing a bizarre quirk in 'recvmsg'
handling.

It implements precisely this:

  After select() indicates readability, the next read on the object will not
  sleep.

It doesn't attempt to implement your straw man proposals. I'm not sure what
you think is unreliable about it.

What is unreliable about it is that the application chose to leave the
socket blocking and is expecting the next operation to block if it
cannot complete immediately. This is simply asking for broken
behavior.

Note that it is broken behavior to assume the 'recvmsg' that followed
the 'select' should get different semantics. The 'select' and

You may define "dropped unread packet because of obscure networking reasons"
as not an error. That's a questionable definition.

UDP packets can always be dropped for any reason.

'recvmsg' might be called by related code that should be able to rely
on your supposed guarantee. Or it might be called by unrelated code
that is relying on a blocking socket to block if there are no
datagrams ready.

Yes, you glorious Straw Man Constructor, there are types of programs that
could not make use of the predictive select model I have described. That does
not make it useless to all possible programs. (I'm getting tired of
explaining this.) It would have been exactly the thing needed to make inetd
not hang. Instead of constructing ever more elaborate examples of user code
that would not benefit from the ESORRYSELECTLIEDTOYOU error, and demanding
that I explain how they can benefit from it, why don't you look at how my
predictive select would interact with the "buggy" inetd select loop (real
code, not straw men). Would the denial of service bug disappear, or wouldn't
it?

You know for a fact that 'inetd' would sensibly handle the EIO?
Setting the socket non-blocking would also fix the bug.
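
Roughly, that fix applied to an inetd-style wait loop would look like
this (a simplified sketch, not inetd's actual source; 'udp_fd' is the
already-bound datagram socket, made non-blocking with O_NONBLOCK):

  #include <errno.h>
  #include <sys/select.h>
  #include <sys/socket.h>
  #include <sys/types.h>

  static void serve(int udp_fd)
  {
      char buf[4096];

      for (;;) {
          fd_set rfds;
          ssize_t n;

          FD_ZERO(&rfds);
          FD_SET(udp_fd, &rfds);
          if (select(udp_fd + 1, &rfds, NULL, NULL, NULL) <= 0)
              continue;

          n = recvfrom(udp_fd, buf, sizeof buf, 0, NULL, NULL);
          if (n == -1) {
              if (errno == EAGAIN || errno == EWOULDBLOCK)
                  continue;   /* the datagram vanished; no hang, just loop */
              continue;       /* other errors: keep serving */
          }
          /* ... dispatch the datagram ... */
      }
  }

A spurious wakeup costs one extra trip around the loop instead of a
hung daemon.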

operation. And if the operation that seems next is unrelated, its
semantics cannot change. This is an iron-clad argument.

You act like you don't understand how programming works. The relationship
between the select and whatever happens next is not random. The select call
and the following read may have been written by different people at different
times, but they are together in one program. (Or, with a shared file
descriptor, a closely related group of programs with an agreement amongst
themselves as to how file descriptors are to be managed.) When they were put
together, it was for a logical purpose. There's no such thing as "unrelated".

Huh? What if "select" was called purely for statistical purposes on
every socket the program has? And what if the subsequent "read" or
"recvmsg" comes from a library that has no idea the application called
"select"?

You are missing the point that the "select" function is not supposed
to change anything and programs should be able to rely on that.

Please address the impossibility argument without breaking the case
where the subsequent operation comes from unrelated code. You cannot
do it.

"Unrelated code" is nonsense. You don't call select because your foot itches.
You call select because you want to know when some file descriptors are
ready, because you intend to do something with them. Whatever you intend to
do -- regardless of whether it's in the same function, or in a library
that you call into, or in another process that you have shared your fds
with -- if
select tells you that it can be done, and then later it turns out that it
can't be done, it's not unreasonable to consider the change to be an error.

You call "select" for many reasons, including just to see what
operations are possible and what aren't. There is not, and never has
been, a requirement that "select" be followed by an operation from the
same code. You are breaking solid code to "fix" broken code. That's
just wrong.

DS