From: Xie Yongji <xieyongji@bytedance.com>
To: mst@redhat.com, jasowang@redhat.com, stefanha@redhat.com,
sgarzare@redhat.com, parav@nvidia.com, bob.liu@oracle.com,
hch@infradead.org, rdunlap@infradead.org, willy@infradead.org,
viro@zeniv.linux.org.uk, axboe@kernel.dk, bcrl@kvack.org,
corbet@lwn.net
Cc: virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, kvm@vger.kernel.org, linux-aio@kvack.org,
linux-fsdevel@vger.kernel.org
Subject: [RFC v3 01/11] eventfd: track eventfd_signal() recursion depth separately in different cases
Date: Tue, 19 Jan 2021 12:59:10 +0800
Message-ID: <20210119045920.447-2-xieyongji@bytedance.com>
In-Reply-To: <20210119045920.447-1-xieyongji@bytedance.com>
Currently a global percpu counter limits the recursion depth of
eventfd_signal() in order to avoid both deadlock and stack overflow.
In the stack-overflow case, however, it is safe to allow a deeper
recursion if needed. So add a percpu counter to eventfd_ctx to limit
the recursion depth for the deadlock case, which makes it possible to
safely increase the global percpu counter's limit later.
Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
---
fs/aio.c | 3 ++-
fs/eventfd.c | 20 +++++++++++++++++++-
include/linux/eventfd.h | 5 +----
3 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 1f32da13d39e..5d82903161f5 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1698,7 +1698,8 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
list_del(&iocb->ki_list);
iocb->ki_res.res = mangle_poll(mask);
req->done = true;
- if (iocb->ki_eventfd && eventfd_signal_count()) {
+ if (iocb->ki_eventfd &&
+ eventfd_signal_count(iocb->ki_eventfd)) {
iocb = NULL;
INIT_WORK(&req->work, aio_poll_put_work);
schedule_work(&req->work);
diff --git a/fs/eventfd.c b/fs/eventfd.c
index e265b6dd4f34..2df24f9bada3 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -25,6 +25,8 @@
#include <linux/idr.h>
#include <linux/uio.h>
+#define EVENTFD_WAKE_DEPTH 0
+
DEFINE_PER_CPU(int, eventfd_wake_count);
static DEFINE_IDA(eventfd_ida);
@@ -42,9 +44,17 @@ struct eventfd_ctx {
*/
__u64 count;
unsigned int flags;
+ int __percpu *wake_count;
int id;
};
+bool eventfd_signal_count(struct eventfd_ctx *ctx)
+{
+ return (this_cpu_read(*ctx->wake_count) ||
+ this_cpu_read(eventfd_wake_count) > EVENTFD_WAKE_DEPTH);
+}
+EXPORT_SYMBOL_GPL(eventfd_signal_count);
+
/**
* eventfd_signal - Adds @n to the eventfd counter.
* @ctx: [in] Pointer to the eventfd context.
@@ -71,17 +81,19 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
* it returns true, the eventfd_signal() call should be deferred to a
* safe context.
*/
- if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
+ if (WARN_ON_ONCE(eventfd_signal_count(ctx)))
return 0;
spin_lock_irqsave(&ctx->wqh.lock, flags);
this_cpu_inc(eventfd_wake_count);
+ this_cpu_inc(*ctx->wake_count);
if (ULLONG_MAX - ctx->count < n)
n = ULLONG_MAX - ctx->count;
ctx->count += n;
if (waitqueue_active(&ctx->wqh))
wake_up_locked_poll(&ctx->wqh, EPOLLIN);
this_cpu_dec(eventfd_wake_count);
+ this_cpu_dec(*ctx->wake_count);
spin_unlock_irqrestore(&ctx->wqh.lock, flags);
return n;
@@ -92,6 +104,7 @@ static void eventfd_free_ctx(struct eventfd_ctx *ctx)
{
if (ctx->id >= 0)
ida_simple_remove(&eventfd_ida, ctx->id);
+ free_percpu(ctx->wake_count);
kfree(ctx);
}
@@ -423,6 +436,11 @@ static int do_eventfd(unsigned int count, int flags)
kref_init(&ctx->kref);
init_waitqueue_head(&ctx->wqh);
+ ctx->wake_count = alloc_percpu(int);
+ if (!ctx->wake_count) {
+ kfree(ctx);
+ return -ENOMEM;
+ }
ctx->count = count;
ctx->flags = flags;
ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);
diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
index fa0a524baed0..1a11ebbd74a9 100644
--- a/include/linux/eventfd.h
+++ b/include/linux/eventfd.h
@@ -45,10 +45,7 @@ void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);
DECLARE_PER_CPU(int, eventfd_wake_count);
-static inline bool eventfd_signal_count(void)
-{
- return this_cpu_read(eventfd_wake_count);
-}
+bool eventfd_signal_count(struct eventfd_ctx *ctx);
#else /* CONFIG_EVENTFD */
--
2.11.0