From: Lou Langholtz <ldl@aros.net>
To: Paul Clements <Paul.Clements@SteelEye.com>
Cc: Andrew Morton <akpm@osdl.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] 2.6.0 NBD driver: remove send/recieve race for request
Date: Fri, 08 Aug 2003 00:30:46 -0600 [thread overview]
Message-ID: <3F334396.7030008@aros.net> (raw)
In-Reply-To: <3F332ED7.712DFE5D@SteelEye.com>
Paul Clements wrote:
>. . .
>Here's the patch to fix up several race conditions in nbd. It requires
>reverting the already included (but admittedly incomplete)
>nbd-race-fix.patch that's in -mm5.
>
>Andrew, please apply.
>
>Thanks,
>Paul
>
>------------------------------------------------------------------------
>
>--- linux-2.6.0-test2-mm4-PRISTINE/drivers/block/nbd.c Sun Jul 27 12:58:51 2003
>+++ linux-2.6.0-test2-mm4/drivers/block/nbd.c Thu Aug 7 18:02:23 2003
>@@ -416,11 +416,19 @@ void nbd_clear_que(struct nbd_device *lo
> BUG_ON(lo->magic != LO_MAGIC);
> #endif
>
>+retry:
> do {
> req = NULL;
> spin_lock(&lo->queue_lock);
> if (!list_empty(&lo->queue_head)) {
> req = list_entry(lo->queue_head.next, struct request, queuelist);
>+ if (req->ref_count > 1) { /* still in xmit */
>+ spin_unlock(&lo->queue_lock);
>+ printk(KERN_DEBUG "%s: request %p: still in use (%d), waiting...\n",
>+ lo->disk->disk_name, req, req->ref_count);
>+ schedule_timeout(HZ); /* wait a second */
>
Isn't there something more deterministic you can use here than waiting a
second and hoping things clear up? How about not clearing the queue unless
lo->sock is NULL, and taking whatever lock it is now that protects
lo->sock? That way the queue-clearing race can be eliminated too.
>+ goto retry;
>+ }
> list_del_init(&req->queuelist);
> }
> spin_unlock(&lo->queue_lock);
>@@ -490,6 +498,7 @@ static void do_nbd_request(request_queue
> }
>
> list_add(&req->queuelist, &lo->queue_head);
>+ req->ref_count++; /* make sure req does not get freed */
> spin_unlock(&lo->queue_lock);
>
> nbd_send_req(lo, req);
>@@ -499,12 +508,14 @@ static void do_nbd_request(request_queue
> lo->disk->disk_name);
> spin_lock(&lo->queue_lock);
> list_del_init(&req->queuelist);
>+ req->ref_count--;
> spin_unlock(&lo->queue_lock);
> nbd_end_request(req);
> spin_lock_irq(q->queue_lock);
> continue;
> }
>
>+ req->ref_count--;
> spin_lock_irq(q->queue_lock);
>
Since ref_count isn't atomic, shouldn't it only ever be changed while the
queue_lock is held?
Thread overview: 16+ messages
2003-08-05 16:51 [PATCH] 2.6.0 NBD driver: remove send/recieve race for request Lou Langholtz
2003-08-05 19:37 ` Paul Clements
2003-08-05 22:48 ` Lou Langholtz
2003-08-06 0:51 ` Paul Clements
2003-08-06 7:34 ` Lou Langholtz
2003-08-08 5:02 ` Paul Clements
2003-08-08 5:27 ` Andrew Morton
2003-08-08 17:05 ` Paul Clements
2003-08-08 6:30 ` Lou Langholtz [this message]
2003-08-08 6:43 ` Andrew Morton
2003-08-08 6:59 ` Jens Axboe
2003-08-08 15:00 ` Paul Clements
2003-08-25 9:58 ` Jens Axboe
2003-08-08 16:47 ` Paul Clements
2003-08-08 20:07 ` [PATCH 2.6.0-test2-mm] nbd: fix send/receive/shutdown/disconnect races Paul Clements
2003-08-09 22:10 ` [PATCH 2.4.22-pre] nbd: fix race conditions Paul Clements