linux-kernel.vger.kernel.org archive mirror
From: Steve Hocking <shocking@pgs.com>
To: Stephen Hemminger <shemminger@osdl.org>
Cc: David Miller <davem@redhat.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: Possible FIFO race in lock_sock()
Date: Mon, 03 Mar 2003 14:38:45 -0600	[thread overview]
Message-ID: <200303032038.h23KcjY24553@mugwump.hstn.tensor.pgs.com> (raw)
In-Reply-To: Message from Stephen Hemminger <shemminger@osdl.org>  of "28 Feb 2003 13:25:50 PST." <1046467548.30194.258.camel@dell_ss3.pdx.osdl.net>

> While doing a review to understand socket locking, I found a race by
> inspection, but I don't have a test that reproduces the problem.
> 
> It appears lock_sock() basically reinvents a semaphore in order to have
> FIFO wakeup and to allow test semantics for the bottom half.  The
> problem is that although the wakeup of queued requesters is FIFO,
> another requester can sneak in through the window between when the
> owner is cleared and when the next queued requester is woken up.
> 
> I don't know what this impacts; perhaps out-of-order data on a pipe?
> 
> Scenario:
> 	Multiple requesters (A, B, C, D) and new requester N
> 	
> 	Assume A gets the socket lock and becomes owner.
> 	B, C, D are on the wait queue.
> 	A calls release_sock(), which ends up waking B.
> 	Before B runs and acquires the socket lock:
> 	   N requests the socket lock, sees owner is NULL, and grabs it.
> 
> The patch just checks the wait queue before proceeding with the fast path.
> 
> Other alternatives:
> 1. Ignore it; we aren't guaranteeing FIFO anyway.
> 	- but then why bother with the waitq when a spin lock would do?
> 2. Replace the socket lock with a semaphore
> 	- with changes to the BH handling to get the same semantics
> 3. Implement a better FIFO spin lock

We may have seen a problem related to this in the 2.4.x series of kernels. It 
presents itself in various comms libraries (MPI, PVM) as an invalid message, 
and occasionally as data being sent to the wrong process on a node. It's very 
intermittent, and seems to occur less often as the speed of the machine in 
question increases. We have some 1400 dual-CPU nodes, a mixture of P3 and P4 
boxes, with jobs spanning up to a hundred nodes, so it tends to pop out rather 
more frequently than most people would see it.


	Stephen
-- 
  The views expressed above are not those of PGS Tensor.

    "We've heard that a million monkeys at a million keyboards could produce
     the Complete Works of Shakespeare; now, thanks to the Internet, we know
     this is not true."            Robert Wilensky, University of California



Thread overview: 3 messages
2003-02-28 21:25 Possible FIFO race in lock_sock() Stephen Hemminger
2003-03-01  1:01 ` David S. Miller
2003-03-03 20:38 ` Steve Hocking [this message]
