* xen-block: race condition when stopping the device (WAS: Re: [Xen-devel] [xen-4.13-testing test] 144736: regressions - FAIL)
       [not found]         ` <a65ae7dca64f4f718f116b9174893730@EX13D32EUC003.ant.amazon.com>
@ 2019-12-14  0:34           ` Julien Grall
  2019-12-16  9:34             ` Durrant, Paul
  0 siblings, 1 reply; 4+ messages in thread
From: Julien Grall @ 2019-12-14  0:34 UTC (permalink / raw)
  To: Durrant, Paul, Ian Jackson
  Cc: Jürgen Groß,
	Stefano Stabellini, qemu-devel, osstest service owner,
	Anthony Perard, xen-devel

Hi Paul,

On 13/12/2019 15:55, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
>> Julien Grall
>> Sent: 13 December 2019 15:37
>> To: Ian Jackson <ian.jackson@citrix.com>
>> Cc: Jürgen Groß <jgross@suse.com>; xen-devel@lists.xenproject.org; Stefano
>> Stabellini <sstabellini@kernel.org>; osstest service owner <osstest-
>> admin@xenproject.org>; Anthony Perard <anthony.perard@citrix.com>
>> Subject: Re: [Xen-devel] [xen-4.13-testing test] 144736: regressions -
>> FAIL
>>
>> +Anthony
>>
>> On 13/12/2019 11:40, Ian Jackson wrote:
>>> Julien Grall writes ("Re: [Xen-devel] [xen-4.13-testing test] 144736:
>>> regressions - FAIL"):
>>>> AMD Seattle boards (laxton*) are known to fail to boot from time to
>>>> time because of a PCI training issue. We have a workaround for it
>>>> (involving a longer power cycle), but it is not 100% reliable.
>>>
>>> This wasn't a power cycle.  It was a software-initiated reboot.  It
>>> does appear to hang in the firmware somewhere.  Do we expect the PCI
>>> training issue to occur in this case?
>>
>> The PCI training happens at every reset (including software). So I may
>> have confused the workaround for firmware corruption with the PCI
>> training. We definitely have a workaround for the former.
>>
>> For the latter, I can't remember if we used a new firmware or just
>> hoped it would not happen often.
>>
>> I think we had a thread on infra@ about the workaround some time last
>> year. Sadly this was sent to my Arm e-mail address and I didn't archive
>> it before leaving :(. Can you have a look to see if you can find the thread?
>>
>>>
>>>>>>     test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 144673
>>>>>
>>>>> That one is strange. A qemu process seems to have died, producing
>>>>> a core file, but I couldn't find any log containing any other
>>>>> indication of a crashed program.
>>>>
>>>> I haven't found anything interesting in the log. @Ian could you set up
>>>> a repro for this?
>>>
>>> There is some heisenbug where qemu crashes with very low probability.
>>> (I forget whether only on arm or on x86 too).  This has been around
>>> for a little while.  I doubt this particular failure will be
>>> reproducible.
>>
>> I can't remember such a bug being reported on Arm before. Anyway, I managed
>> to get the stack trace from gdb:
>>
>> Core was generated by `/usr/local/lib/xen/bin/qemu-system-i386 -xen-domid 1 -chardev socket,id=libxl-c'.
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  0x006342be in xen_block_handle_requests (dataplane=0x108e600)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/hw/block/dataplane/xen-block.c:531
>> 531     /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/hw/block/dataplane/xen-block.c: No such file or directory.
>> [Current thread is 1 (LWP 1987)]
>> (gdb) bt
>> #0  0x006342be in xen_block_handle_requests (dataplane=0x108e600)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/hw/block/dataplane/xen-block.c:531
>> #1  0x0063447c in xen_block_dataplane_event (opaque=0x108e600)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/hw/block/dataplane/xen-block.c:626
>> #2  0x008d005c in xen_device_poll (opaque=0x107a3b0)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/hw/xen/xen-bus.c:1077
>> #3  0x00a4175c in run_poll_handlers_once (ctx=0x1079708, timeout=0xb1ba17f8)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/util/aio-posix.c:520
>> #4  0x00a41826 in run_poll_handlers (ctx=0x1079708, max_ns=8000, timeout=0xb1ba17f8)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/util/aio-posix.c:562
>> #5  0x00a41956 in try_poll_mode (ctx=0x1079708, timeout=0xb1ba17f8)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/util/aio-posix.c:597
>> #6  0x00a41a2c in aio_poll (ctx=0x1079708, blocking=true)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/util/aio-posix.c:639
>> #7  0x0071dc16 in iothread_run (opaque=0x107d328)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/iothread.c:75
>> #8  0x00a44c80 in qemu_thread_start (args=0x1079538)
>>     at /home/osstest/build.144736.build-armhf/xen/tools/qemu-xen-dir/util/qemu-thread-posix.c:502
>> #9  0xb67ae5d8 in ?? ()
>> Backtrace stopped: previous frame identical to this frame (corrupt stack?)
>>
>> This feels like a race condition between the init/free code and the
>> handler. Anthony, does it ring a bell?
>>
> 
>  From that stack bt it looks like an iothread managed to run after the sring was NULLed. This should not be able to happen as the dataplane should have been moved back onto QEMU's main thread context before the ring is unmapped.

My knowledge of this code is fairly limited, so correct me if I am wrong.

blk_set_aio_context() would set the context for the block aio. AFAICT, 
the only aio for the block is xen_block_complete_aio().

In the stack above, we are not dealing with a block aio but an aio tied
to the event channel (see the call from xen_device_poll). So I don't
think blk_set_aio_context() would affect this aio.

So it would be possible for the iothread to run because we received
a notification on the event channel while we are stopping the block
(i.e. xen_block_dataplane_stop()).

If xen_block_dataplane_stop() grabs the context lock first, then the
iothread dealing with the event may wait on the lock until it is released.

By the time the lock is grabbed, we may have freed all the resources
(including the srings). So the event iothread will end up dereferencing
a NULL pointer.

It feels to me we need a way to quiesce all the iothreads (blk, 
event,...) before continuing. But I am a bit unsure how to do this in QEMU.
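
For clarity, this is the interleaving I have in mind (illustration only:
the call chain is the one from the backtrace above, but the exact ordering
inside xen_block_dataplane_stop() is my assumption):

  /*
   * main loop thread                      IOThread
   * ----------------                      --------
   * xen_block_dataplane_stop()
   *   move blk back to the main context    event channel fd fires
   *   unmap the ring, sring = NULL         aio_poll()
   *                                          -> xen_device_poll()
   *                                            -> xen_block_dataplane_event()
   *                                              -> xen_block_handle_requests()
   *                                                 reads the NULL sring -> SIGSEGV
   */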

Cheers,

-- 
Julien Grall



* RE: xen-block: race condition when stopping the device (WAS: Re: [Xen-devel] [xen-4.13-testing test] 144736: regressions - FAIL)
  2019-12-14  0:34           ` xen-block: race condition when stopping the device (WAS: Re: [Xen-devel] [xen-4.13-testing test] 144736: regressions - FAIL) Julien Grall
@ 2019-12-16  9:34             ` Durrant, Paul
  2019-12-16  9:50               ` [Xen-devel] xen-block: race condition when stopping the device (WAS: " Durrant, Paul
  0 siblings, 1 reply; 4+ messages in thread
From: Durrant, Paul @ 2019-12-16  9:34 UTC (permalink / raw)
  To: Julien Grall, Ian Jackson
  Cc: Jürgen Groß,
	xen-devel, Stefano Stabellini, osstest service owner,
	Anthony Perard, qemu-devel

> -----Original Message-----
[snip]
> >>
> >> This feels like a race condition between the init/free code and the
> >> handler. Anthony, does it ring a bell?
> >>
> >
> >  From that stack bt it looks like an iothread managed to run after the
> > sring was NULLed. This should not be able to happen as the dataplane
> > should have been moved back onto QEMU's main thread context before the
> > ring is unmapped.
> 
> My knowledge of this code is fairly limited, so correct me if I am wrong.
> 
> blk_set_aio_context() would set the context for the block aio. AFAICT,
> the only aio for the block is xen_block_complete_aio().

Not quite. xen_block_dataplane_start() calls xen_device_bind_event_channel(), and that adds the event channel fd into the aio context, so the iothread polls the shared ring as well as handling block i/o completion.
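
Roughly, the registration looks like this (a sketch of the xen-bus code,
not the exact source: the XenEventChannel field name "xeh" and the read
handler name xen_device_event() are assumptions on my part, while
aio_set_fd_handler() is the real helper):

  /* Called with the dataplane's IOThread AioContext as @ctx. */
  static void bind_event_channel_sketch(AioContext *ctx,
                                        XenEventChannel *channel)
  {
      /*
       * From here on the evtchn fd is polled by whichever thread runs
       * @ctx, i.e. the IOThread rather than QEMU's main loop.
       */
      aio_set_fd_handler(ctx, xenevtchn_fd(channel->xeh),
                         false, /* not "external" I/O */
                         xen_device_event,    /* read handler */
                         NULL,                /* no write handler */
                         xen_device_poll,     /* poll handler (frame #2 above) */
                         channel);
  }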

> 
> In the stack above, we are not dealing with a block aio but an aio tied
> to the event channel (see the call from xen_device_poll). So I don't
> think blk_set_aio_context() would affect this aio.
> 

For the reason I outlined above, it does.

> So it would be possible for the iothread to run because we received
> a notification on the event channel while we are stopping the block
> (i.e. xen_block_dataplane_stop()).
> 

We should assume an iothread can essentially run at any time, as it is a polling entity. It should eventually block polling on fds assigned to its aio context, but I don't think the abstraction guarantees that it cannot be awoken for other reasons (e.g. off a timeout). However, an event from the frontend will certainly cause the evtchn fd poll to wake up.
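
The loop itself is roughly (simplified sketch, not the exact source;
iothread_should_run() stands in for the real termination check):

  static void *iothread_run_sketch(void *opaque)
  {
      IOThread *iothread = opaque;

      while (iothread_should_run(iothread)) {
          /*
           * Blocks/polls on every fd, BH and timer attached to the
           * context - including the xen-block event channel fd - so a
           * frontend notification can wake it up at any point.
           */
          aio_poll(iothread->ctx, true /* blocking */);
      }
      return NULL;
  }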

> If xen_block_dataplane_stop() grabs the context lock first, then the
> iothread dealing with the event may wait on the lock until it is released.
> 
> By the time the lock is grabbed, we may have freed all the resources
> (including the srings). So the event iothread will end up dereferencing
> a NULL pointer.
> 

I think the problem may actually be that xen_block_dataplane_event() does not acquire the context and thus is not synchronized with xen_block_dataplane_stop(). The documentation in multiple-iothreads.txt is not clear whether a poll handler called by an iothread needs to acquire the context though; TBH I would not have thought it necessary.
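
If it did have to serialize itself, it would look something like this
(sketch only, not a proposal; the ctx/sring field names are assumptions):

  static void xen_block_dataplane_event_sketch(void *opaque)
  {
      XenBlockDataPlane *dataplane = opaque;

      aio_context_acquire(dataplane->ctx);   /* serialise with _stop() */
      if (dataplane->sring) {                /* re-check under the lock */
          xen_block_handle_requests(dataplane);
      }
      aio_context_release(dataplane->ctx);
  }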

> It feels to me we need a way to quiesce all the iothreads (blk,
> event,...) before continuing. But I am a bit unsure how to do this in
> QEMU.
> 

Looking at virtio-blk.c I see that it does seem to close off its evtchn equivalent from iothread context via aio_wait_bh_oneshot(). So I wonder whether the 'right' thing to do is to call xen_device_unbind_event_channel() using the same mechanism to ensure xen_block_dataplane_event() can't race.

  Paul

> Cheers,
> 
> --
> Julien Grall


* RE: [Xen-devel] xen-block: race condition when stopping the device (WAS: Re: [xen-4.13-testing test] 144736: regressions - FAIL)
  2019-12-16  9:34             ` Durrant, Paul
@ 2019-12-16  9:50               ` Durrant, Paul
  2019-12-16 10:24                 ` Durrant, Paul
  0 siblings, 1 reply; 4+ messages in thread
From: Durrant, Paul @ 2019-12-16  9:50 UTC (permalink / raw)
  To: Durrant, Paul, Julien Grall, Ian Jackson
  Cc: Jürgen Groß,
	Stefano Stabellini, qemu-devel, osstest service owner,
	Anthony Perard, xen-devel

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Durrant, Paul
> Sent: 16 December 2019 09:34
> To: Julien Grall <julien@xen.org>; Ian Jackson <ian.jackson@citrix.com>
> Cc: Jürgen Groß <jgross@suse.com>; Stefano Stabellini
> <sstabellini@kernel.org>; qemu-devel@nongnu.org; osstest service owner
> <osstest-admin@xenproject.org>; Anthony Perard
> <anthony.perard@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [Xen-devel] xen-block: race condition when stopping the
> device (WAS: Re: [xen-4.13-testing test] 144736: regressions - FAIL)
> 
> [snip]
>
> Looking at virtio-blk.c I see that it does seem to close off its evtchn
> equivalent from iothread context via aio_wait_bh_oneshot(). So I wonder
> whether the 'right' thing to do is to call
> xen_device_unbind_event_channel() using the same mechanism to ensure
> xen_block_dataplane_event() can't race.

Digging around the virtio-blk history I see:

commit 1010cadf62332017648abee0d7a3dc7f2eef9632
Author: Stefan Hajnoczi <stefanha@redhat.com>
Date:   Wed Mar 7 14:42:03 2018 +0000

    virtio-blk: fix race between .ioeventfd_stop() and vq handler

    If the main loop thread invokes .ioeventfd_stop() just as the vq handler
    function begins in the IOThread then the handler may lose the race for
    the AioContext lock.  By the time the vq handler is able to acquire the
    AioContext lock the ioeventfd has already been removed and the handler
    isn't supposed to run anymore!

    Use the new aio_wait_bh_oneshot() function to perform ioeventfd removal
    from within the IOThread.  This way no races with the vq handler are
    possible.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Reviewed-by: Fam Zheng <famz@redhat.com>
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-id: 20180307144205.20619-3-stefanha@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

...so I think xen-block has exactly the same problem. I think we may also be missing a qemu_bh_cancel() to make sure block aio completions are stopped. I'll prep a patch.
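
Something along these lines (a rough sketch of the idea, not the patch
itself; the dataplane field names (xendev, event_channel, bh) and the
error handling are simplifications/assumptions):

  static void xen_block_dataplane_stop_bh(void *opaque)
  {
      XenBlockDataPlane *dataplane = opaque;

      /* Runs in the IOThread, so it cannot race xen_block_dataplane_event(). */
      xen_device_unbind_event_channel(dataplane->xendev,
                                      dataplane->event_channel,
                                      &error_abort);
      dataplane->event_channel = NULL;

      /* Drop any block aio completion work that is still queued. */
      qemu_bh_cancel(dataplane->bh);
  }

  /* In xen_block_dataplane_stop(), with dataplane->ctx acquired: */
  aio_wait_bh_oneshot(dataplane->ctx, xen_block_dataplane_stop_bh, dataplane);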

  Paul



* RE: [Xen-devel] xen-block: race condition when stopping the device (WAS: Re: [xen-4.13-testing test] 144736: regressions - FAIL)
  2019-12-16  9:50               ` [Xen-devel] xen-block: race condition when stopping the device (WAS: " Durrant, Paul
@ 2019-12-16 10:24                 ` Durrant, Paul
  0 siblings, 0 replies; 4+ messages in thread
From: Durrant, Paul @ 2019-12-16 10:24 UTC (permalink / raw)
  To: Julien Grall, Ian Jackson
  Cc: Jürgen Groß,
	Stefano Stabellini, qemu-devel, osstest service owner,
	Anthony Perard, xen-devel

> -----Original Message-----
> From: Durrant, Paul <pdurrant@amazon.com>
> Sent: 16 December 2019 09:50
> To: Durrant, Paul <pdurrant@amazon.com>; Julien Grall <julien@xen.org>;
> Ian Jackson <ian.jackson@citrix.com>
> Cc: Jürgen Groß <jgross@suse.com>; Stefano Stabellini
> <sstabellini@kernel.org>; qemu-devel@nongnu.org; osstest service owner
> <osstest-admin@xenproject.org>; Anthony Perard
> <anthony.perard@citrix.com>; xen-devel@lists.xenproject.org
> Subject: RE: [Xen-devel] xen-block: race condition when stopping the
> device (WAS: Re: [xen-4.13-testing test] 144736: regressions - FAIL)
> 
> [snip]
>
> ...so I think xen-block has exactly the same problem. I think we may also
> be missing a qemu_bh_cancel() to make sure block aio completions are
> stopped. I'll prep a patch.
> 

Having discussed with Julien off-list, we agreed that the oneshot handler may be overly elaborate for our purposes, and that actually destroying the event channel at that point would still pose problems for pending aio. What we really need is an equivalent of blk_set_aio_context() for event channels.
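
i.e. something of this shape (purely hypothetical sketch; the helper name
and the XenEventChannel field names are made up here):

  void xen_device_set_event_channel_context_sketch(XenEventChannel *channel,
                                                   AioContext *ctx)
  {
      /* Stop polling the evtchn fd in the context it currently lives in... */
      aio_set_fd_handler(channel->ctx, xenevtchn_fd(channel->xeh),
                         false, NULL, NULL, NULL, NULL);

      /*
       * ...and re-register it in @ctx (e.g. back on qemu_get_aio_context()
       * while the dataplane is being stopped), mirroring what
       * blk_set_aio_context() does for the block backend.
       */
      channel->ctx = ctx;
      aio_set_fd_handler(ctx, xenevtchn_fd(channel->xeh),
                         false, xen_device_event, NULL, xen_device_poll,
                         channel);
  }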

  Paul
