From: Oleg Nesterov <oleg@redhat.com>
To: Andrew Morton <akpm@linux-foundation.org>,
Markus Pargmann <mpa@pengutronix.de>
Cc: Tejun Heo <tj@kernel.org>,
nbd-general@lists.sourceforge.net, linux-kernel@vger.kernel.org
Subject: [PATCH 1/1] kthread: introduce kthread_get_run() to fix __nbd_ioctl()
Date: Sun, 25 Oct 2015 15:27:13 +0100 [thread overview]
Message-ID: <20151025142713.GA30965@redhat.com> (raw)
In-Reply-To: <20151025142655.GA30961@redhat.com>
It is not safe to use the task_struct returned by kthread_run(threadfn)
if threadfn() can exit before the "owner" calls kthread_stop(): nothing
else protects this task_struct, so it can be freed as soon as the thread exits.
This makes __nbd_ioctl() look buggy: a killed nbd_thread_send() can exit
and free its task_struct, after which kthread_stop() operates on
freed/reused memory.
Add a new trivial helper, kthread_get_run(), which pins the task_struct
before waking the thread. Hopefully it will gain more users; this patch
converts __nbd_ioctl() as an example.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
drivers/block/nbd.c | 5 +++--
include/linux/kthread.h | 12 ++++++++++++
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 93b3f99..b85e7a0 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -754,8 +754,8 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
else
blk_queue_flush(nbd->disk->queue, 0);
- thread = kthread_run(nbd_thread_send, nbd, "%s",
- nbd_name(nbd));
+ thread = kthread_get_run(nbd_thread_send, nbd, "%s",
+ nbd_name(nbd));
if (IS_ERR(thread)) {
mutex_lock(&nbd->tx_lock);
return PTR_ERR(thread);
@@ -765,6 +765,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
error = nbd_thread_recv(nbd);
nbd_dev_dbg_close(nbd);
kthread_stop(thread);
+ put_task_struct(thread);
mutex_lock(&nbd->tx_lock);
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 13d5520..b0465cc 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -37,6 +37,18 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
__k; \
})
+/* Same as kthread_run() but also pin the task_struct */
+#define kthread_get_run(threadfn, data, namefmt, ...) \
+({ \
+ struct task_struct *__k \
+ = kthread_create(threadfn, data, namefmt, ## __VA_ARGS__); \
+ if (!IS_ERR(__k)) { \
+ get_task_struct(__k); \
+ wake_up_process(__k); \
+ } \
+ __k; \
+})
+
void kthread_bind(struct task_struct *k, unsigned int cpu);
void kthread_bind_mask(struct task_struct *k, const struct cpumask *mask);
int kthread_stop(struct task_struct *k);
--
1.5.5.1