From: Paolo Bonzini <pbonzini@redhat.com>
To: Thomas Gleixner <tglx@linutronix.de>,
Jason Wang <jasowang@redhat.com>,
Daniel Bristot de Oliveira <bristot@redhat.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
"Michael S. Tsirkin" <mst@redhat.com>,
Juri Lelli <jlelli@redhat.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
Al Viro <viro@zeniv.linux.org.uk>, He Zhe <zhe.he@windriver.com>,
Jens Axboe <axboe@fb.com>
Subject: Re: 5.13-rt1 + KVM = WARNING: at fs/eventfd.c:74 eventfd_signal()
Date: Wed, 28 Jul 2021 12:21:20 +0200
Message-ID: <810e01ef-9b71-5b44-8498-b8a377d4e51b@redhat.com>
In-Reply-To: <87pmv23lru.ffs@nanos.tec.linutronix.de>
On 28/07/21 10:06, Thomas Gleixner wrote:
> On Wed, Jul 14 2021 at 12:35, Paolo Bonzini wrote:
>> On 14/07/21 11:23, Jason Wang wrote:
>>
>> +static local_lock_t eventfd_wake_lock = INIT_LOCAL_LOCK(eventfd_wake_lock);
>> DEFINE_PER_CPU(int, eventfd_wake_count);
>>
>> static DEFINE_IDA(eventfd_ida);
>> @@ -71,8 +73,11 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>> * it returns true, the eventfd_signal() call should be deferred to a
>> * safe context.
>> */
>> - if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
>> + local_lock(&eventfd_wake_lock);
>> + if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count))) {
>> + local_unlock(&eventfd_wake_lock);
>> return 0;
>> + }
>>
>> spin_lock_irqsave(&ctx->wqh.lock, flags);
>> this_cpu_inc(eventfd_wake_count);
>> @@ -83,6 +88,7 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>> wake_up_locked_poll(&ctx->wqh, EPOLLIN);
>> this_cpu_dec(eventfd_wake_count);
>> spin_unlock_irqrestore(&ctx->wqh.lock, flags);
>> + local_unlock(&eventfd_wake_lock);
>
> Yes, that cures it, but if you think about what this wants to prevent,
> then having the recursion counter per CPU is at least suboptimal.
>
> Something like the untested below perhaps?
Yes, that works (the new task_struct bit should just be under #ifdef CONFIG_EVENTFD).
On !PREEMPT_RT the percpu variable consumes memory while your patch uses
none (there are plenty of spare bits in current), but it is otherwise
basically the same. On PREEMPT_RT the local_lock is certainly more
expensive.
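
(Just to illustrate the placement I mean -- an untested sketch of the
sched.h side, not an actual hunk:

    #ifdef CONFIG_EVENTFD
    	/* Recursion prevention for eventfd_signal() */
    	unsigned			in_eventfd_signal:1;
    #endif

inside struct task_struct, next to the other recursion-prevention bits,
so the bit costs nothing when eventfd is not built in.)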
Thanks,
Paolo
> Thanks,
>
> tglx
> ---
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1695,7 +1695,7 @@ static int aio_poll_wake(struct wait_que
> list_del(&iocb->ki_list);
> iocb->ki_res.res = mangle_poll(mask);
> req->done = true;
> - if (iocb->ki_eventfd && eventfd_signal_count()) {
> + if (iocb->ki_eventfd && !eventfd_signal_allowed()) {
> iocb = NULL;
> INIT_WORK(&req->work, aio_poll_put_work);
> schedule_work(&req->work);
> --- a/fs/eventfd.c
> +++ b/fs/eventfd.c
> @@ -25,8 +25,6 @@
> #include <linux/idr.h>
> #include <linux/uio.h>
>
> -DEFINE_PER_CPU(int, eventfd_wake_count);
> -
> static DEFINE_IDA(eventfd_ida);
>
> struct eventfd_ctx {
> @@ -67,21 +65,21 @@ struct eventfd_ctx {
> * Deadlock or stack overflow issues can happen if we recurse here
> * through waitqueue wakeup handlers. If the caller users potentially
> * nested waitqueues with custom wakeup handlers, then it should
> - * check eventfd_signal_count() before calling this function. If
> - * it returns true, the eventfd_signal() call should be deferred to a
> + * check eventfd_signal_allowed() before calling this function. If
> + * it returns false, the eventfd_signal() call should be deferred to a
> * safe context.
> */
> - if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
> + if (WARN_ON_ONCE(current->in_eventfd_signal))
> return 0;
>
> spin_lock_irqsave(&ctx->wqh.lock, flags);
> - this_cpu_inc(eventfd_wake_count);
> + current->in_eventfd_signal = 1;
> if (ULLONG_MAX - ctx->count < n)
> n = ULLONG_MAX - ctx->count;
> ctx->count += n;
> if (waitqueue_active(&ctx->wqh))
> wake_up_locked_poll(&ctx->wqh, EPOLLIN);
> - this_cpu_dec(eventfd_wake_count);
> + current->in_eventfd_signal = 0;
> spin_unlock_irqrestore(&ctx->wqh.lock, flags);
>
> return n;
> --- a/include/linux/eventfd.h
> +++ b/include/linux/eventfd.h
> @@ -43,11 +43,9 @@ int eventfd_ctx_remove_wait_queue(struct
> __u64 *cnt);
> void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);
>
> -DECLARE_PER_CPU(int, eventfd_wake_count);
> -
> -static inline bool eventfd_signal_count(void)
> +static inline bool eventfd_signal_allowed(void)
> {
> - return this_cpu_read(eventfd_wake_count);
> + return !current->in_eventfd_signal;
> }
>
> #else /* CONFIG_EVENTFD */
> @@ -78,9 +76,9 @@ static inline int eventfd_ctx_remove_wai
> return -ENOSYS;
> }
>
> -static inline bool eventfd_signal_count(void)
> +static inline bool eventfd_signal_allowed(void)
> {
> - return false;
> + return true;
> }
>
> static inline void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt)
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -863,6 +863,8 @@ struct task_struct {
> /* Used by page_owner=on to detect recursion in page tracking. */
> unsigned in_page_owner:1;
> #endif
> + /* Recursion prevention for eventfd_signal() */
> + unsigned in_eventfd_signal:1;
>
> unsigned long atomic_flags; /* Flags requiring atomic access. */
>
>
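For reference, the caller-side pattern the aio.c hunk above relies on is
roughly the untested sketch below; struct my_req, its fields and the
function names other than eventfd_signal_allowed()/eventfd_signal() are
only illustrative, not real kernel code:

    #include <linux/eventfd.h>
    #include <linux/wait.h>
    #include <linux/workqueue.h>

    struct my_req {
    	struct eventfd_ctx	*eventfd;
    	struct wait_queue_entry	wait;
    	struct work_struct	work;
    };

    static void my_deferred_signal(struct work_struct *work)
    {
    	struct my_req *req = container_of(work, struct my_req, work);

    	/*
    	 * Process context: recursion through waitqueue wakeup handlers
    	 * is no longer possible here, so signal directly.
    	 */
    	eventfd_signal(req->eventfd, 1);
    	eventfd_ctx_put(req->eventfd);
    }

    /* Wakeup callback that may itself run under an eventfd waitqueue. */
    static int my_poll_wake(struct wait_queue_entry *wait, unsigned mode,
    			int sync, void *key)
    {
    	struct my_req *req = container_of(wait, struct my_req, wait);

    	if (eventfd_signal_allowed()) {
    		/* Not nested inside eventfd_signal(): signal directly. */
    		eventfd_signal(req->eventfd, 1);
    		eventfd_ctx_put(req->eventfd);
    	} else {
    		/* Would recurse into eventfd_signal(): defer to a safe context. */
    		INIT_WORK(&req->work, my_deferred_signal);
    		schedule_work(&req->work);
    	}
    	return 1;
    }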