From: Xie Yongji <xieyongji@bytedance.com>
To: mst@redhat.com, jasowang@redhat.com, stefanha@redhat.com,
sgarzare@redhat.com, parav@nvidia.com, akpm@linux-foundation.org,
rdunlap@infradead.org, willy@infradead.org,
viro@zeniv.linux.org.uk, axboe@kernel.dk, bcrl@kvack.org,
corbet@lwn.net
Cc: virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, kvm@vger.kernel.org, linux-aio@kvack.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC v2 02/13] eventfd: track eventfd_signal() recursion depth separately in different cases
Date: Tue, 22 Dec 2020 22:52:10 +0800 [thread overview]
Message-ID: <20201222145221.711-3-xieyongji@bytedance.com> (raw)
In-Reply-To: <20201222145221.711-1-xieyongji@bytedance.com>

Currently a global percpu counter is used to limit the recursion depth
of eventfd_signal() and thus avoid both deadlock and stack overflow.
For the stack-overflow case, however, it should be safe to allow a
deeper recursion if needed. So add a percpu counter to eventfd_ctx
that limits the recursion depth for the deadlock case (re-signalling
the same eventfd), which makes it possible to raise the global percpu
limit in a later patch.
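
For illustration only (not part of the patch), the two cases the
counters distinguish look roughly like this; my_wake_handler and
struct my_ctx are hypothetical:

/*
 * Deadlock case: a wakeup handler registered on an eventfd's wait queue
 * signals the *same* eventfd again.  The nested eventfd_signal() would
 * try to re-take ctx->wqh.lock, which is already held, so the per-ctx
 * counter must refuse it regardless of the global depth limit.
 */
static int my_wake_handler(struct wait_queue_entry *wait, unsigned mode,
			   int sync, void *key)
{
	struct my_ctx *mc = container_of(wait, struct my_ctx, wait);

	eventfd_signal(mc->eventfd, 1);	/* nested signal on the same ctx */
	return 0;
}

/*
 * Stack-overflow case: eventfd A's wakeup signals eventfd B, B's wakeup
 * signals eventfd C, and so on.  No lock is re-taken, but each level adds
 * stack frames, so only the chain length needs to be bounded by the
 * global per-CPU counter (EVENTFD_WAKE_DEPTH).
 */
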
Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
---
 fs/aio.c                |  3 ++-
 fs/eventfd.c            | 20 +++++++++++++++++++-
 include/linux/eventfd.h |  5 +----
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 1f32da13d39e..5d82903161f5 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1698,7 +1698,8 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
 		list_del(&iocb->ki_list);
 		iocb->ki_res.res = mangle_poll(mask);
 		req->done = true;
-		if (iocb->ki_eventfd && eventfd_signal_count()) {
+		if (iocb->ki_eventfd &&
+		    eventfd_signal_count(iocb->ki_eventfd)) {
 			iocb = NULL;
 			INIT_WORK(&req->work, aio_poll_put_work);
 			schedule_work(&req->work);
diff --git a/fs/eventfd.c b/fs/eventfd.c
index e265b6dd4f34..2df24f9bada3 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -25,6 +25,8 @@
 #include <linux/idr.h>
 #include <linux/uio.h>
 
+#define EVENTFD_WAKE_DEPTH 0
+
 DEFINE_PER_CPU(int, eventfd_wake_count);
 
 static DEFINE_IDA(eventfd_ida);
@@ -42,9 +44,17 @@ struct eventfd_ctx {
 	 */
 	__u64 count;
 	unsigned int flags;
+	int __percpu *wake_count;
 	int id;
 };
 
+bool eventfd_signal_count(struct eventfd_ctx *ctx)
+{
+	return (this_cpu_read(*ctx->wake_count) ||
+		this_cpu_read(eventfd_wake_count) > EVENTFD_WAKE_DEPTH);
+}
+EXPORT_SYMBOL_GPL(eventfd_signal_count);
+
 /**
  * eventfd_signal - Adds @n to the eventfd counter.
  * @ctx: [in] Pointer to the eventfd context.
@@ -71,17 +81,19 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
 	 * it returns true, the eventfd_signal() call should be deferred to a
 	 * safe context.
 	 */
-	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
+	if (WARN_ON_ONCE(eventfd_signal_count(ctx)))
 		return 0;
 
 	spin_lock_irqsave(&ctx->wqh.lock, flags);
 	this_cpu_inc(eventfd_wake_count);
+	this_cpu_inc(*ctx->wake_count);
 	if (ULLONG_MAX - ctx->count < n)
 		n = ULLONG_MAX - ctx->count;
 	ctx->count += n;
 	if (waitqueue_active(&ctx->wqh))
 		wake_up_locked_poll(&ctx->wqh, EPOLLIN);
 	this_cpu_dec(eventfd_wake_count);
+	this_cpu_dec(*ctx->wake_count);
 	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
 
 	return n;
@@ -92,6 +104,7 @@ static void eventfd_free_ctx(struct eventfd_ctx *ctx)
 {
 	if (ctx->id >= 0)
 		ida_simple_remove(&eventfd_ida, ctx->id);
+	free_percpu(ctx->wake_count);
 	kfree(ctx);
 }
 
@@ -423,6 +436,11 @@ static int do_eventfd(unsigned int count, int flags)
 
 	kref_init(&ctx->kref);
 	init_waitqueue_head(&ctx->wqh);
+	ctx->wake_count = alloc_percpu(int);
+	if (!ctx->wake_count) {
+		kfree(ctx);
+		return -ENOMEM;
+	}
 	ctx->count = count;
 	ctx->flags = flags;
 	ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);
diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
index fa0a524baed0..1a11ebbd74a9 100644
--- a/include/linux/eventfd.h
+++ b/include/linux/eventfd.h
@@ -45,10 +45,7 @@ void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);
 
 DECLARE_PER_CPU(int, eventfd_wake_count);
 
-static inline bool eventfd_signal_count(void)
-{
-	return this_cpu_read(eventfd_wake_count);
-}
+bool eventfd_signal_count(struct eventfd_ctx *ctx);
 
 #else /* CONFIG_EVENTFD */
 
--
2.11.0
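
For callers outside this patch, the intended usage pattern mirrors the
aio_poll_wake() change above: check the per-context depth first and, if
signalling directly is not safe, defer to process context. A minimal
sketch, assuming a hypothetical driver structure (my_dev,
my_irq_inject_work, my_inject_irq):

struct my_dev {
	struct eventfd_ctx *call_ctx;
	struct work_struct irq_work;	/* INIT_WORK(&dev->irq_work, my_irq_inject_work) at setup */
};

static void my_irq_inject_work(struct work_struct *work)
{
	struct my_dev *dev = container_of(work, struct my_dev, irq_work);

	/* Workqueue context: no nesting possible here, safe to signal directly. */
	eventfd_signal(dev->call_ctx, 1);
}

static void my_inject_irq(struct my_dev *dev)
{
	if (eventfd_signal_count(dev->call_ctx))
		schedule_work(&dev->irq_work);	/* recursion too deep, defer */
	else
		eventfd_signal(dev->call_ctx, 1);
}
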
Thread overview: 55+ messages
2020-12-22 14:52 [RFC v2 00/13] Introduce VDUSE - vDPA Device in Userspace Xie Yongji
2020-12-22 14:52 ` [RFC v2 01/13] mm: export zap_page_range() for driver use Xie Yongji
2020-12-22 15:44 ` Christoph Hellwig
2020-12-22 14:52 ` Xie Yongji [this message]
2020-12-22 14:52 ` [RFC v2 03/13] eventfd: Increase the recursion depth of eventfd_signal() Xie Yongji
2020-12-22 14:52 ` [RFC v2 04/13] vdpa: Remove the restriction that only supports virtio-net devices Xie Yongji
2020-12-22 14:52 ` [RFC v2 05/13] vdpa: Pass the netlink attributes to ops.dev_add() Xie Yongji
2020-12-22 14:52 ` [RFC v2 06/13] vduse: Introduce VDUSE - vDPA Device in Userspace Xie Yongji
2020-12-23 8:08 ` Jason Wang
2020-12-23 14:17 ` Yongji Xie
2020-12-24 3:01 ` Jason Wang
2020-12-24 8:34 ` Yongji Xie
2020-12-25 6:59 ` Jason Wang
2021-01-08 13:32 ` Bob Liu
2021-01-10 10:03 ` Yongji Xie
2020-12-22 14:52 ` [RFC v2 07/13] vduse: support get/set virtqueue state Xie Yongji
2020-12-22 14:52 ` [RFC v2 08/13] vdpa: Introduce process_iotlb_msg() in vdpa_config_ops Xie Yongji
2020-12-23 8:36 ` Jason Wang
2020-12-23 11:06 ` Yongji Xie
2020-12-24 2:36 ` Jason Wang
2020-12-24 7:24 ` Yongji Xie
2020-12-22 14:52 ` [RFC v2 09/13] vduse: Add support for processing vhost iotlb message Xie Yongji
2020-12-23 9:05 ` Jason Wang
2020-12-23 12:14 ` [External] " Yongji Xie
2020-12-24 2:41 ` Jason Wang
2020-12-24 7:37 ` Yongji Xie
2020-12-25 2:37 ` Yongji Xie
2020-12-25 7:02 ` Jason Wang
2020-12-25 11:36 ` Yongji Xie
2020-12-25 6:57 ` Jason Wang
2020-12-25 10:31 ` Yongji Xie
2020-12-28 7:43 ` Jason Wang
2020-12-28 8:14 ` Yongji Xie
2020-12-28 8:43 ` Jason Wang
2020-12-28 9:12 ` Yongji Xie
2020-12-29 9:11 ` Jason Wang
2020-12-29 10:26 ` Yongji Xie
2020-12-30 6:10 ` Jason Wang
2020-12-30 7:09 ` Yongji Xie
2020-12-30 8:41 ` Jason Wang
2020-12-30 10:12 ` Yongji Xie
2020-12-31 2:49 ` Jason Wang
2020-12-31 5:15 ` Yongji Xie
2020-12-31 5:49 ` Jason Wang
2020-12-31 6:52 ` Yongji Xie
2020-12-31 7:11 ` Jason Wang
2020-12-31 8:00 ` Yongji Xie
2020-12-22 14:52 ` [RFC v2 10/13] vduse: grab the module's references until there is no vduse device Xie Yongji
2020-12-22 14:52 ` [RFC v2 11/13] vduse/iova_domain: Support reclaiming bounce pages Xie Yongji
2020-12-22 14:52 ` [RFC v2 12/13] vduse: Add memory shrinker to reclaim " Xie Yongji
2020-12-22 14:52 ` [RFC v2 13/13] vduse: Introduce a workqueue for irq injection Xie Yongji
2020-12-23 6:38 ` [RFC v2 00/13] Introduce VDUSE - vDPA Device in Userspace Jason Wang
2020-12-23 8:14 ` Jason Wang
2020-12-23 10:59 ` Yongji Xie
2020-12-24 2:24 ` Jason Wang