From: Markus Armbruster <armbru@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>,
	 qemu-devel@nongnu.org,  Fam Zheng <fam@euphon.net>,
	 qemu-block@nongnu.org,
	 Emanuele Giuseppe Esposito <eesposit@redhat.com>,
	 Kevin Wolf <kwolf@redhat.com>,
	 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	 Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 5/6] hmp: convert handle_hmp_command() to AIO_WAIT_WHILE_UNLOCKED()
Date: Thu, 02 Mar 2023 17:25:41 +0100	[thread overview]
Message-ID: <87jzzzi0hm.fsf@pond.sub.org> (raw)
In-Reply-To: <20230302154834.GA2497705@fedora> (Stefan Hajnoczi's message of "Thu, 2 Mar 2023 10:48:34 -0500")

Stefan Hajnoczi <stefanha@redhat.com> writes:

> On Thu, Mar 02, 2023 at 04:02:22PM +0100, Markus Armbruster wrote:
>> Stefan Hajnoczi <stefanha@redhat.com> writes:
>> 
>> > On Thu, Mar 02, 2023 at 08:17:43AM +0100, Markus Armbruster wrote:
>> >> Stefan Hajnoczi <stefanha@redhat.com> writes:
>> >> 
>> >> > The HMP monitor runs in the main loop thread. Calling
>> >> 
>> >> Correct.
>> >> 
>> >> > AIO_WAIT_WHILE(qemu_get_aio_context(), ...) from the main loop thread is
>> >> > equivalent to AIO_WAIT_WHILE_UNLOCKED(NULL, ...) because neither unlocks
>> >> > the AioContext and the latter's assertion that we're in the main loop
>> >> > succeeds.
>> >> >
>> >> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> >> > ---
>> >> >  monitor/hmp.c | 2 +-
>> >> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >> >
>> >> > diff --git a/monitor/hmp.c b/monitor/hmp.c
>> >> > index 2aa85d3982..5ecbdac802 100644
>> >> > --- a/monitor/hmp.c
>> >> > +++ b/monitor/hmp.c
>> >> > @@ -1167,7 +1167,7 @@ void handle_hmp_command(MonitorHMP *mon, const char *cmdline)
>> >> >          Coroutine *co = qemu_coroutine_create(handle_hmp_command_co, &data);
>> >> >          monitor_set_cur(co, &mon->common);
>> >> >          aio_co_enter(qemu_get_aio_context(), co);
>> >> > -        AIO_WAIT_WHILE(qemu_get_aio_context(), !data.done);
>> >> > +        AIO_WAIT_WHILE_UNLOCKED(NULL, !data.done);
>> >> >      }
>> >> >  
>> >> >      qobject_unref(qdict);
>> >> 
>> >> Acked-by: Markus Armbruster <armbru@redhat.com>
>> >> 
>> >> For an R-by, I need to understand this in more detail.  I'm not familiar
>> >> with the innards of AIO_WAIT_WHILE() & friends, so I need to go real
>> >> slow.
>> >> 
>> >> We change
>> >> 
>> >>     ctx from qemu_get_aio_context() to NULL
>> >>     unlock from true to false
>> >> 
>> >> in
>> >> 
>> >>     bool waited_ = false;                                          \
>> >>     AioWait *wait_ = &global_aio_wait;                             \
>> >>     AioContext *ctx_ = (ctx);                                      \
>> >>     /* Increment wait_->num_waiters before evaluating cond. */     \
>> >>     qatomic_inc(&wait_->num_waiters);                              \
>> >>     /* Paired with smp_mb in aio_wait_kick(). */                   \
>> >>     smp_mb();                                                      \
>> >>     if (ctx_ && in_aio_context_home_thread(ctx_)) {                \
>> >>         while ((cond)) {                                           \
>> >>             aio_poll(ctx_, true);                                  \
>> >>             waited_ = true;                                        \
>> >>         }                                                          \
>> >>     } else {                                                       \
>> >>         assert(qemu_get_current_aio_context() ==                   \
>> >>                qemu_get_aio_context());                            \
>> >>         while ((cond)) {                                           \
>> >>             if (unlock && ctx_) {                                  \
>> >>                 aio_context_release(ctx_);                         \
>> >>             }                                                      \
>> >>             aio_poll(qemu_get_aio_context(), true);                \
>> >>             if (unlock && ctx_) {                                  \
>> >>                 aio_context_acquire(ctx_);                         \
>> >>             }                                                      \
>> >>             waited_ = true;                                        \
>> >>         }                                                          \
>> >>     }                                                              \
>> >>     qatomic_dec(&wait_->num_waiters);                              \
>> >>     waited_; })
>> >> 
>> >> qemu_get_aio_context() is non-null here, correct?
>> >
>> > qemu_get_aio_context() always returns the main loop thread's AioContext.
>> 
>> So it's non-null.
>
> Yes. Sorry, I should have answered directly :).
>
>> > qemu_get_current_aio_context() returns the AioContext that was most
>> > recently set in the my_aiocontext thread-local variable for IOThreads,
>> > the main loop's AioContext for BQL threads, or NULL for threads
>> > that don't use AioContext at all.
>> >
>> >> What's the value of in_aio_context_home_thread(qemu_get_aio_context())?
>> >
>> > This function checks whether the given AioContext is associated with
>> > this thread. In a BQL thread it returns true if the context is the main
>> > loop's AioContext. In an IOThread it returns true if the context is the
>> > IOThread's AioContext. Otherwise it returns false.
>> 
>> I guess that means in_aio_context_home_thread(qemu_get_aio_context()) is
>> true in the main thread.
>
> Yes.
>
>> Before the patch, the if's condition is true, and we execute
>> 
>>            while ((cond)) {                                           \
>>                aio_poll(ctx_, true);                                  \
>>                waited_ = true;                                        \
>>            }                                                          \
>> 
>> Afterwards, it's false, and we instead execute
>> 
>>            assert(qemu_get_current_aio_context() ==                   \
>>                   qemu_get_aio_context());                            \
>>            while ((cond)) {                                           \
>>                if (unlock && ctx_) {                                  \
>>                    aio_context_release(ctx_);                         \
>>                }                                                      \
>>                aio_poll(qemu_get_aio_context(), true);                \
>>                if (unlock && ctx_) {                                  \
>>                    aio_context_acquire(ctx_);                         \
>>                }                                                      \
>>                waited_ = true;                                        \
>>            }                                                          \
>> 
>> The assertion is true: both operands of == are the main loop's
>> AioContext.
>
> Yes.
>
>> The if conditions are false, because unlock is.
>> 
>> Therefore, we execute the exact same code.
>> 
>> All correct?
>
> Yes, exactly.

Thank you!

Reviewed-by: Markus Armbruster <armbru@redhat.com>



