* [Qemu-devel] Recent patches for 2.4
@ 2015-07-31  8:12 Peter Lieven
  2015-07-31  8:22 ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-07-31  8:12 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

Hi Paolo, hi Stefan,

you submitted some fixes for 2.4 recently. None of the following had qemu-stable in CC. Is this not stable material?

ca96ac4 AioContext: force event loop iteration using BH
a076972 AioContext: avoid leaking BHs on cleanup
fed105e virtio-blk-dataplane: delete bottom half before the AioContext is freed
05e514b AioContext: optimize clearing the EventNotifier
21a03d1 AioContext: fix broken placement of event_notifier_test_and_clear
eabc977 AioContext: fix broken ctx->dispatching optimization
52c91da memory: do not add a reference to the owner of aliased regions
edec47c main-loop: fix qemu_notify_event for aio_notify optimization
deb809e memory: count number of active VGA logging clients
ab28bd2 rcu: actually register threads that have RCU read-side critical sections
9172f42 qemu-char: handle EINTR for TCP character devices

Thanks,
Peter


* Re: [Qemu-devel] Recent patches for 2.4
  2015-07-31  8:12 [Qemu-devel] Recent patches for 2.4 Peter Lieven
@ 2015-07-31  8:22 ` Paolo Bonzini
  2015-07-31  8:35   ` Peter Lieven
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-07-31  8:22 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 31/07/2015 10:12, Peter Lieven wrote:
> Hi Paolo, hi Stefan,
> 
> you submitted some fixes for 2.4 recently. None of the following had qemu-stable in CC. Is this not stable material?

In general I send fewer and fewer patches to qemu-stable as we get closer
to the new release, because I'm not sure anymore whether they apply.

Let's look at your list:

> ca96ac4 AioContext: force event loop iteration using BH
> a076972 AioContext: avoid leaking BHs on cleanup
> fed105e virtio-blk-dataplane: delete bottom half before the AioContext is freed
> 05e514b AioContext: optimize clearing the EventNotifier
> 21a03d1 AioContext: fix broken placement of event_notifier_test_and_clear
> eabc977 AioContext: fix broken ctx->dispatching optimization

These could in principle be backported, but the bug was only visible on
aarch64 hosts and only in relatively special circumstances (UEFI
firmware, KVM, virtio-scsi, many disks) so I didn't Cc qemu-stable.

> 52c91da memory: do not add a reference to the owner of aliased regions

This could be backported, yes.  Feel free to send it to qemu-stable.
However, the bug was only visible with virtio 1.

> edec47c main-loop: fix qemu_notify_event for aio_notify optimization

Part of the above AioContext series.

> deb809e memory: count number of active VGA logging clients

Not this one, it's new in 2.5.

> ab28bd2 rcu: actually register threads that have RCU read-side critical sections

Probably has conflicts, but can be sent to qemu-stable.

> 9172f42 qemu-char: handle EINTR for TCP character devices

Can be sent to qemu-stable if it applies to 2.4.

Paolo


* Re: [Qemu-devel] Recent patches for 2.4
  2015-07-31  8:22 ` Paolo Bonzini
@ 2015-07-31  8:35   ` Peter Lieven
  2015-07-31  9:29     ` Paolo Bonzini
  2015-08-04 18:20     ` Michael Roth
  0 siblings, 2 replies; 23+ messages in thread
From: Peter Lieven @ 2015-07-31  8:35 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

On 31.07.2015 at 10:22, Paolo Bonzini wrote:
>
> On 31/07/2015 10:12, Peter Lieven wrote:
>> Hi Paolo, hi Stefan,
>>
>> you submitted some fixes for 2.4 recently. None of the following had qemu-stable in CC. Is this not stable material?
> In general I send fewer and fewer patches to qemu-stable as we get closer
> to the new release, because I'm not sure anymore whether they apply.

My background was that Michael Roth just sent an announcement for an upcoming 2.3.1 release.

>
> Let's look at your list:
>
>> ca96ac4 AioContext: force event loop iteration using BH
>> a076972 AioContext: avoid leaking BHs on cleanup
>> fed105e virtio-blk-dataplane: delete bottom half before the AioContext is freed
>> 05e514b AioContext: optimize clearing the EventNotifier
>> 21a03d1 AioContext: fix broken placement of event_notifier_test_and_clear
>> eabc977 AioContext: fix broken ctx->dispatching optimization
> These could in principle be backported, but the bug was only visible on
> aarch64 hosts and only in relatively special circumstances (UEFI
> firmware, KVM, virtio-scsi, many disks) so I didn't Cc qemu-stable.

I was not aware that all those patches belong to a single bug on aarch64.

>
>> 52c91da memory: do not add a reference to the owner of aliased regions
> This could be backported, yes.  Feel free to send it to qemu-stable.
> However, the bug was only visible with virtio 1.

Applies cleanly to 2.3.1-staging.

>
>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
> Part of the above AioContext series.

So either the whole series or none of them I guess?

>
>> deb809e memory: count number of active VGA logging clients
> Not this one, it's new in 2.5.
>
>> ab28bd2 rcu: actually register threads that have RCU read-side critical sections
> Probably has conflicts, but can be sent to qemu-stable.

Has conflicts and I am not 100% sure how to resolve them.

>
>> 9172f42 qemu-char: handle EINTR for TCP character devices
> Can be sent to qemu-stable if it applies to 2.4.

Applies cleanly to 2.3.1-staging.

Peter


* Re: [Qemu-devel] Recent patches for 2.4
  2015-07-31  8:35   ` Peter Lieven
@ 2015-07-31  9:29     ` Paolo Bonzini
  2015-08-04  9:22       ` [Qemu-devel] [Qemu-stable] " Peter Lieven
  2015-09-03 13:11       ` Peter Lieven
  2015-08-04 18:20     ` Michael Roth
  1 sibling, 2 replies; 23+ messages in thread
From: Paolo Bonzini @ 2015-07-31  9:29 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 31/07/2015 10:35, Peter Lieven wrote:
> On 31.07.2015 at 10:22, Paolo Bonzini wrote:
>>> 52c91da memory: do not add a reference to the owner of aliased regions
>> This could be backported, yes.  Feel free to send it to qemu-stable.
>> However, the bug was only visible with virtio 1.
> 
> Applies cleanly to 2.3.1-staging.

Good.

>>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
>> Part of the above AioContext series.
> 
> So either the whole series or none of them I guess?

It's a separate bug, and theoretically it's there in 2.3.1 as well, but
no one ever reproduced it (it would hang in make check) so not worthwhile.

>>> 9172f42 qemu-char: handle EINTR for TCP character devices
>> Can be sent to qemu-stable if it applies to 2.4.
> 
> Applies cleanly to 2.3.1-staging.

Good.  (I messed up 2.3 vs. 2.4 of course).


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-07-31  9:29     ` Paolo Bonzini
@ 2015-08-04  9:22       ` Peter Lieven
  2015-08-04 11:53         ` Paolo Bonzini
  2015-09-03 13:11       ` Peter Lieven
  1 sibling, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-08-04  9:22 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

On 31.07.2015 at 11:29, Paolo Bonzini wrote:
>
> On 31/07/2015 10:35, Peter Lieven wrote:
>> On 31.07.2015 at 10:22, Paolo Bonzini wrote:
>>>> 52c91da memory: do not add a reference to the owner of aliased regions
>>> This could be backported, yes.  Feel free to send it to qemu-stable.
>>> However, the bug was only visible with virtio 1.
>> Applies cleanly to 2.3.1-staging.
> Good.
>
>>>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
>>> Part of the above AioContext series.
>> So either the whole series or none of them I guess?
> It's a separate bug, and theoretically it's there in 2.3.1 as well, but
> no one ever reproduced it (it would hang in make check) so not worthwhile.

Can you give me a pointer to what the symptoms were?
I have a qemu-img convert job on x86_64 that reproducibly hangs on bdrv_drain_all
at the end of the convert process.
I convert from nfs:// to local storage here. I am trying to figure out which BS
reports busy. Qemu here is still 2.2.1.

Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-04  9:22       ` [Qemu-devel] [Qemu-stable] " Peter Lieven
@ 2015-08-04 11:53         ` Paolo Bonzini
  2015-08-04 11:57           ` Peter Lieven
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-08-04 11:53 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 04/08/2015 11:22, Peter Lieven wrote:
>>>>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
>>>> Part of the above AioContext series.
>>> So either the whole series or none of them I guess?
>> It's a separate bug, and theoretically it's there in 2.3.1 as well, but
>> no one ever reproduced it (it would hang in make check) so not
>> worthwhile.
> 
> Can you give me a pointer to what the symptoms were?

If a thread tries to wake up the main thread using qemu_notify_event(),
the main thread will never wake up.  This could happen, for example, if
the first thread calls qemu_set_fd_handler() or timer_mod().
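
To make the race concrete, here is a simplified sketch of this kind of
lost wakeup. It is my own illustration, not the actual QEMU code;
check_for_work() and block_in_ppoll() are hypothetical stand-ins for the
main loop's dispatch and poll steps.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool notified;

extern bool check_for_work(void);   /* hypothetical: timers, fds, BHs */
extern void block_in_ppoll(void);   /* hypothetical: sleep until woken */

/* Called from another thread to wake the main loop. The buggy
 * optimization only sets the flag and skips the expensive eventfd
 * write, assuming the loop will notice -- it won't if it is just
 * about to block. */
void qemu_notify_event_sketch(void)
{
    atomic_store(&notified, true);
}

void main_loop_iteration_sketch(void)
{
    bool have_work = check_for_work();
    atomic_store(&notified, false);  /* BUG: may clear a notify that
                                        arrived after the check */
    if (!have_work) {
        block_in_ppoll();            /* sleeps although work is pending */
    }
}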

> I have a qemu-img convert job on x86_64 that reproducibly hangs on
> bdrv_drain_all at the end of the convert process.
> I convert from nfs:// to local storage here. I am trying to figure out which BS
> reports busy. Qemu here is still 2.2.1.

qemu-img does not use main-loop, so this cannot be the cause.

The AioContext bugs only happen when you have a thread executing the
main loop and one thread executing aio_poll, so they can also be
excluded as the cause of qemu-img problems.

Paolo


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-04 11:53         ` Paolo Bonzini
@ 2015-08-04 11:57           ` Peter Lieven
  2015-08-04 12:09             ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-08-04 11:57 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

On 04.08.2015 at 13:53, Paolo Bonzini wrote:
>
> On 04/08/2015 11:22, Peter Lieven wrote:
>>>>>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
>>>>> Part of the above AioContext series.
>>>> So either the whole series or none of them I guess?
>>> It's a separate bug, and theoretically it's there in 2.3.1 as well, but
>>> no one ever reproduced it (it would hang in make check) so not
>>> worthwhile.
>> Can you give me a pointer to what the symptoms were?
> If a thread tries to wake up the main thread using qemu_notify_event(),
> the main thread will never wake up.  This could happen, for example, if
> the first thread calls qemu_set_fd_handler() or timer_mod().
>
>> I have a qemu-img convert job on x86_64 that reproducibly hangs on
>> bdrv_drain_all at the end of the convert process.
>> I convert from nfs:// to local storage here. I am trying to figure out which BS
>> reports busy. Qemu here is still 2.2.1.
> qemu-img does not use main-loop, so this cannot be the cause.
>
> The AioContext bugs only happen when you have a thread executing the
> main loop and one thread executing aio_poll, so they can also be
> excluded as the cause of qemu-img problems.

Okay, what I found out is that in aio_poll I get revents = POLLIN for
the nfs file descriptor. But there is no data available on the socket.
As a consequence, progress is true and we loop here forever.

I have seen that it is a common bug in Linux to return POLLIN on an fd
even if there is no data available. I don't have this problem in general
(otherwise no qemu-img or qemu process would ever terminate when NFS is
involved), but in this special case it happens reproducibly.

Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-04 11:57           ` Peter Lieven
@ 2015-08-04 12:09             ` Paolo Bonzini
  2015-08-04 12:29               ` Peter Lieven
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-08-04 12:09 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 04/08/2015 13:57, Peter Lieven wrote:
> 
> Okay, what I found out is that in aio_poll I get revents = POLLIN for
> the nfs file descriptor. But there is no data available on the socket.

Does read return 0 or EAGAIN?

If it returns EAGAIN, the bug is in the QEMU main loop or the kernel.
It should never happen that poll returns POLLIN and read returns EAGAIN.

If it returns 0, it means the other side called shutdown(fd, SHUT_WR).
Then I think the bug is in the libnfs driver or more likely libnfs.  You
should stop polling the POLLIN event after read has returned 0 once.

The same bug could be in libiscsi as well.
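
As a minimal sketch of the distinction drawn here (my own illustration,
not QEMU or libnfs code), a non-blocking reader can classify the three
outcomes like this:

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Returns 1 if data was consumed, 0 if the peer shut down its write
 * side (stop polling POLLIN or reconnect), -1 for spurious readiness
 * (just go back to poll). Assumes fd is non-blocking. */
int classify_pollin(int fd, char *buf, size_t len)
{
    ssize_t n = recv(fd, buf, len, 0);
    if (n > 0) {
        return 1;      /* POLLIN was genuine */
    }
    if (n == 0) {
        return 0;      /* orderly shutdown by the peer */
    }
    if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) {
        return -1;     /* poll lied: nothing to read after all */
    }
    return 0;          /* hard error: treat like a disconnect */
}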

Paolo

> As a consequence, progress is true and we loop here forever.
> 
> I have seen that it is a common bug in Linux to return POLLIN on an fd
> even if there is no data available. I don't have this problem in general
> (otherwise no qemu-img or qemu process would ever terminate when NFS is
> involved), but in this special case it happens reproducibly.


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-04 12:09             ` Paolo Bonzini
@ 2015-08-04 12:29               ` Peter Lieven
  2015-08-04 12:53                 ` Peter Lieven
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-08-04 12:29 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, ronnie sahlberg, qemu-stable

On 04.08.2015 at 14:09, Paolo Bonzini wrote:
>
> On 04/08/2015 13:57, Peter Lieven wrote:
>> Okay, what I found out is that in aio_poll I get revents = POLLIN for
>> the nfs file descriptor. But there is no data available on the socket.
> Does read return 0 or EAGAIN?
>
> If it returns EAGAIN, the bug is in the QEMU main loop or the kernel.
> It should never happen that poll returns POLLIN and read returns EAGAIN.
>
> If it returns 0, it means the other side called shutdown(fd, SHUT_WR).
> Then I think the bug is in the libnfs driver or more likely libnfs.  You
> should stop polling the POLLIN event after read has returned 0 once.

You might be right. Ronnie originally used the FIONREAD ioctl before every read and considered
the socket as disconnected if the number of available bytes returned was 0.
I found that I get available bytes == 0 from that ioctl even if the socket was not closed.
This seems to be some kind of bug in Linux - at least that is what I thought.

See BUGS in the select(2) manpage.

        Under Linux, select() may report a socket file descriptor as "ready
        for reading", while nevertheless a subsequent read blocks. This could
        for example happen when data has arrived but upon examination has
        wrong checksum and is discarded. There may be other circumstances in
        which a file descriptor is spuriously reported as ready. Thus it may
        be safer to use O_NONBLOCK on sockets that should not block.

I will debug further, but it seems that I receive a POLLIN even if there is no data available. I see 0 bytes from the recv call inside libnfs and continue without a deadlock - at least
so far.

Would it be a good idea to count the number of 0 bytes from recv and react after I have received 0 bytes a number of consecutive times?

And then: stop polling POLLIN or reconnect?

Thanks,
Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-04 12:29               ` Peter Lieven
@ 2015-08-04 12:53                 ` Peter Lieven
  2015-08-04 23:23                   ` ronnie sahlberg
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-08-04 12:53 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, ronnie sahlberg, qemu-stable

On 04.08.2015 at 14:29, Peter Lieven wrote:
> On 04.08.2015 at 14:09, Paolo Bonzini wrote:
>>
>> On 04/08/2015 13:57, Peter Lieven wrote:
>>> Okay, what I found out is that in aio_poll I get revents = POLLIN for
>>> the nfs file descriptor. But there is no data available on the socket.
>> Does read return 0 or EAGAIN?
>>
>> If it returns EAGAIN, the bug is in the QEMU main loop or the kernel.
>> It should never happen that poll returns POLLIN and read returns EAGAIN.
>>
>> If it returns 0, it means the other side called shutdown(fd, SHUT_WR).
>> Then I think the bug is in the libnfs driver or more likely libnfs.  You
>> should stop polling the POLLIN event after read has returned 0 once.
>
> You might be right. Ronnie originally used the FIONREAD ioctl before every read and considered
> the socket as disconnected if the number of available bytes returned was 0.
> I found that I get available bytes == 0 from that ioctl even if the socket was not closed.
> This seems to be some kind of bug in Linux - at least that is what I thought.
>
> See BUGS in the select(2) manpage.
>
>        Under Linux, select() may report a socket file descriptor as "ready
>        for reading", while nevertheless a subsequent read blocks. This could
>        for example happen when data has arrived but upon examination has
>        wrong checksum and is discarded. There may be other circumstances in
>        which a file descriptor is spuriously reported as ready. Thus it may
>        be safer to use O_NONBLOCK on sockets that should not block.
>
> I will debug further, but it seems that I receive a POLLIN even if there is no data available. I see 0 bytes from the recv call inside libnfs and continue without a deadlock - at least
> so far.
>
> Would it be a good idea to count the number of 0 bytes from recv and react after I have received 0 bytes a number of consecutive times?
>
> And then: stop polling POLLIN or reconnect?

Okay, got it. Ronnie was using FIONREAD without checking for EAGAIN or EINTR.

I will send a patch for libnfs to reconnect if count == 0. Libiscsi is not affected, it reconnects if count is 0.

Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-07-31  8:35   ` Peter Lieven
  2015-07-31  9:29     ` Paolo Bonzini
@ 2015-08-04 18:20     ` Michael Roth
  1 sibling, 0 replies; 23+ messages in thread
From: Michael Roth @ 2015-08-04 18:20 UTC (permalink / raw)
  To: Peter Lieven, Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

Quoting Peter Lieven (2015-07-31 03:35:38)
> On 31.07.2015 at 10:22, Paolo Bonzini wrote:
> >
> > On 31/07/2015 10:12, Peter Lieven wrote:
> >> Hi Paolo, hi Stefan,
> >>
> >> you submitted some fixes for 2.4 recently. None of the following had qemu-stable in CC. Is this not stable material?
> > In general I send fewer and fewer patches to qemu-stable as we get closer
> > to the new release, because I'm not sure anymore whether they apply.
> 
> My background was that Michael Roth just sent an announcement for an upcoming 2.3.1 release.
> 
> >
> > Let's look at your list:
> >
> >> ca96ac4 AioContext: force event loop iteration using BH
> >> a076972 AioContext: avoid leaking BHs on cleanup
> >> fed105e virtio-blk-dataplane: delete bottom half before the AioContext is freed
> >> 05e514b AioContext: optimize clearing the EventNotifier
> >> 21a03d1 AioContext: fix broken placement of event_notifier_test_and_clear
> >> eabc977 AioContext: fix broken ctx->dispatching optimization
> > These could in principle be backported, but the bug was only visible on
> > aarch64 hosts and only in relatively special circumstances (UEFI
> > firmware, KVM, virtio-scsi, many disks) so I didn't Cc qemu-stable.
> 
> I was not aware that all those patches belong to a single bug on aarch64.
> 
> >
> >> 52c91da memory: do not add a reference to the owner of aliased regions
> > This could be backported, yes.  Feel free to send it to qemu-stable.
> > However, the bug was only visible with virtio 1.
> 
> Applies cleanly to 2.3.1-staging.

I've gone ahead and applied locally but I may end up dropping it if I
can verify this is specifically a virtio-1 issue.

> 
> >
> >> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
> > Part of the above AioContext series.
> 
> So either the whole series or none of them I guess?
> 
> >
> >> deb809e memory: count number of active VGA logging clients
> > Not this one, it's new in 2.5.
> >
> >> ab28bd2 rcu: actually register threads that have RCU read-side critical sections
> > Probably has conflicts, but can be sent to qemu-stable.
> 
> Has conflicts and I am not 100% sure how to resolve them.
> 
> >
> >> 9172f42 qemu-char: handle EINTR for TCP character devices
> > Can be sent to qemu-stable if it applies to 2.4.
> 
> Applies cleanly to 2.3.1-staging.

Also applied locally, thanks!

> 
> Peter
> 


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-04 12:53                 ` Peter Lieven
@ 2015-08-04 23:23                   ` ronnie sahlberg
  2015-08-05  8:39                     ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: ronnie sahlberg @ 2015-08-04 23:23 UTC (permalink / raw)
  To: Peter Lieven; +Cc: Paolo Bonzini, qemu-devel, Stefan Hajnoczi, qemu-stable


On Tue, Aug 4, 2015 at 5:53 AM, Peter Lieven <pl@kamp.de> wrote:

> On 04.08.2015 at 14:29, Peter Lieven wrote:
>
>> On 04.08.2015 at 14:09, Paolo Bonzini wrote:
>>
>>>
>>> On 04/08/2015 13:57, Peter Lieven wrote:
>>>
>>>> Okay, what I found out is that in aio_poll I get revents = POLLIN for
>>>> the nfs file descriptor. But there is no data available on the socket.
>>>>
>>> Does read return 0 or EAGAIN?
>>>
>>> If it returns EAGAIN, the bug is in the QEMU main loop or the kernel.
>>> It should never happen that poll returns POLLIN and read returns EAGAIN.
>>>
>>> If it returns 0, it means the other side called shutdown(fd, SHUT_WR).
>>> Then I think the bug is in the libnfs driver or more likely libnfs.  You
>>> should stop polling the POLLIN event after read has returned 0 once.
>>>
>>
>> You might be right. Ronnie originally used the FIONREAD ioctl before
>> every read and considered
>> the socket as disconnected if the number of available bytes returned was 0.
>> I found that I get available bytes == 0 from that ioctl even if the
>> socket was not closed.
>>
>
You only get >0 from this call if there are actual bytes available to read.

For context, the problem was that

   75 static void nfs_process_read(void *arg)
   76 {
   77     NFSClient *client = arg;
   78     nfs_service(client->context, POLLIN);
   79     nfs_set_events(client);
   80 }

(block/nfs.c in qemu.git: http://git.qemu.org/?p=qemu.git;a=blob;f=block/nfs.c;h=c026ff6883311f2a9b16a4326ad1da97c6c3d4f4;hb=2be4f242b50a84bf360df02480b173bfed161107#l75)

sometimes triggers and calls nfs_service(POLLIN) even though the socket is
not readable.
I verified this by adding an extra call to poll() at around line 78
to check whether POLLIN was actually set on the fd or not. Sometimes it
would not be, but I got lost in the sources and could not find if or where
this happens, or even whether qemu guarantees "only call the POLLIN
callbacks if the file descriptor is actually readable".


The old code in libnfs used to assume that if we are called for POLLIN and
ioctl(FIONREAD) returns that there are 0 bytes available to read,
then there was a problem with the socket.

:-(
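
A rough reconstruction of that old pattern (my own sketch, not the
actual libnfs source) shows why it misfires:

#include <sys/ioctl.h>

/* Old-style liveness check: treat "0 bytes readable" as disconnect.
 * BUG: a spurious POLLIN, or data discarded after e.g. a checksum
 * error, also leaves 0 bytes readable on a healthy socket, so this
 * tears down live connections. Only recv() returning 0 proves the
 * peer shut down its write side. */
static int fionread_says_disconnected(int fd)
{
    int avail = 0;
    if (ioctl(fd, FIONREAD, &avail) < 0) {
        return 1;          /* ioctl failed: assume broken */
    }
    return avail == 0;     /* wrong: 0 bytes does not imply EOF */
}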




>> This seems to be some kind of bug in Linux - at least that is what I thought.
>>
>> See BUGS in the select(2) manpage.
>>
>>        Under Linux, select() may report a socket file descriptor as "ready
>>        for reading", while nevertheless a subsequent read blocks. This
>>        could for example happen when data has arrived but upon examination
>>        has wrong checksum and is discarded. There may be other circumstances
>>        in which a file descriptor is spuriously reported as ready. Thus it
>>        may be safer to use O_NONBLOCK on sockets that should not block.
>>
>> I will debug further, but it seems that I receive a POLLIN even if
>> there is no data available. I see 0 bytes from the recv call inside libnfs
>> and continue without a deadlock - at least
>> so far.
>>
>> Would it be a good idea to count the number of 0 bytes from recv and
>> react after I have received 0 bytes a number of consecutive times?
>>
>> And then: stop polling POLLIN or reconnect?
>>
>
> Okay, got it. Ronnie was using FIONREAD without checking for EAGAIN or
> EINTR.
>
> I will send a patch for libnfs to reconnect if count == 0. Libiscsi is not
> affected, it reconnects if count is 0.


Thanks, and merged.


>
>
> Peter
>
>



* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-04 23:23                   ` ronnie sahlberg
@ 2015-08-05  8:39                     ` Paolo Bonzini
  2015-08-06  7:51                       ` Peter Lieven
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-08-05  8:39 UTC (permalink / raw)
  To: ronnie sahlberg, Peter Lieven; +Cc: qemu-devel, Stefan Hajnoczi, qemu-stable



On 05/08/2015 01:23, ronnie sahlberg wrote:
> 
> You only get >0 from this call if there are actual bytes available to read.
> 
> For context,  the problem was that
> 
> 
>   75 static void nfs_process_read(void *arg)
>   76 {
>   77     NFSClient *client = arg;
>   78     nfs_service(client->context, POLLIN);
>   79     nfs_set_events(client);
>   80 }
> 
> sometimes triggers and calls nfs_service(POLLIN) even though the socket is
> not readable.

Does read() return -1/EAGAIN or 0?

If it returns 0, then this is expected: it means that the other side has
shut down the socket's write side.  libnfs should either reconnect, or
disable POLLIN, treat all pending operations as aborted and stop
submitting more.

Paolo

> I verified this by adding an extra call to poll() at around line 78
> to check whether POLLIN was actually set on the fd or not. Sometimes it
> would not be, but I got lost in the sources and could not find if or
> where this happens, or even whether qemu guarantees "only call the POLLIN
> callbacks if the file descriptor is actually readable".
> 


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-08-05  8:39                     ` Paolo Bonzini
@ 2015-08-06  7:51                       ` Peter Lieven
  0 siblings, 0 replies; 23+ messages in thread
From: Peter Lieven @ 2015-08-06  7:51 UTC (permalink / raw)
  To: Paolo Bonzini, ronnie sahlberg; +Cc: qemu-devel, Stefan Hajnoczi, qemu-stable

On 05.08.2015 at 10:39, Paolo Bonzini wrote:
>
> On 05/08/2015 01:23, ronnie sahlberg wrote:
>> You only get >0 from this call if there are actual bytes available to read.
>>
>> For context,  the problem was that
>>
>>
>>    75 static void nfs_process_read(void *arg)
>>    76 {
>>    77     NFSClient *client = arg;
>>    78     nfs_service(client->context, POLLIN);
>>    79     nfs_set_events(client);
>>    80 }
>>
>> sometimes triggers and calls nfs_service(POLLIN) even though the socket is
>> not readable.
> Does read() return -1/EAGAIN or 0?
>
> If it returns 0, then this is expected: it means that the other side has
> shut down the socket's write side.  libnfs should either reconnect, or
> disable POLLIN, treat all pending operations as aborted and stop
> submitting more.

I also went on reproducing and can confirm that I see FIONREAD return
0 bytes available when booting a CDROM in Qemu with libnfs before commit
cf420d3 (socket: use FIONREAD ioctl only for UDP). But a lot has changed
in rpc_read_from_socket in libnfs since we dropped FIONREAD. Most importantly,
we changed from recv without flags to recv with MSG_DONTWAIT. With the
current master I never see spurious readiness any more. So there is nothing
we need to fix here except handling recv returning 0 bytes and then
reconnecting. That fix is already in the current master of libnfs, and it
solves the deadlock in qemu-img I have observed.
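
For illustration, the fixed read path described above would look roughly
like this; it is a sketch under my own assumptions, and
reconnect_and_requeue() is a hypothetical stand-in for whatever
reconnect hook libnfs actually uses:

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

extern void reconnect_and_requeue(int fd);  /* hypothetical */

/* recv with MSG_DONTWAIT: a spurious POLLIN yields EAGAIN instead of
 * blocking, and a return of 0 is taken as "peer closed" and triggers
 * a reconnect. */
void service_read(int fd, char *buf, size_t len)
{
    ssize_t n = recv(fd, buf, len, MSG_DONTWAIT);
    if (n > 0) {
        /* hand n bytes to the RPC record parser ... */
    } else if (n == 0) {
        reconnect_and_requeue(fd);   /* peer shut down its write side */
    } else if (errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR) {
        reconnect_and_requeue(fd);   /* hard socket error */
    }
    /* on EAGAIN/EINTR: spurious readiness, wait for the next POLLIN */
}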

Thanks for your help,
Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-07-31  9:29     ` Paolo Bonzini
  2015-08-04  9:22       ` [Qemu-devel] [Qemu-stable] " Peter Lieven
@ 2015-09-03 13:11       ` Peter Lieven
  2015-09-07 16:28         ` Paolo Bonzini
  1 sibling, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-09-03 13:11 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

On 31.07.2015 at 11:29, Paolo Bonzini wrote:
>
> On 31/07/2015 10:35, Peter Lieven wrote:
>> On 31.07.2015 at 10:22, Paolo Bonzini wrote:
>>>> 52c91da memory: do not add a reference to the owner of aliased regions
>>> This could be backported, yes.  Feel free to send it to qemu-stable.
>>> However, the bug was only visible with virtio 1.
>> Applies cleanly to 2.3.1-staging.
> Good.
>
>>>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
>>> Part of the above AioContext series.
>> So either the whole series or none of them I guess?
> It's a separate bug, and theoretically it's there in 2.3.1 as well, but
> no one ever reproduced it (it would hang in make check) so not worthwhile.

I may have found another case where this bugfix seems to help. For various
reasons I'm still on 2.2.1 stable with some CVE fixes manually applied.

I am currently working on the problem that a hanging NFS server, from which
I mount my CDROM ISOs, takes down the whole vServer because the main loop
hangs sooner or later.

I already put some patches on the list which help to improve the situation:

ide/atapi: make PIO read requests async
block/io: allow AIOCB without callback
ide/atapi: partially avoid deadlock if the storage backend is dead
block/nfs: cache allocated filesize for read-only files

However, I ended up with one case where Qemu deadlocks in bdrv_drain_one.

Scenario:
  - Windows 2012R2 with a random ISO mounted.
  - Block communication to NFS via Firewall
  - execute 'eject ide1-cd0' in the HMP
  - restore connectivity.

The HMP hangs forever even if the NFS connectivity itself is restored.

Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-03 13:11       ` Peter Lieven
@ 2015-09-07 16:28         ` Paolo Bonzini
  2015-09-07 19:11           ` Peter Lieven
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-09-07 16:28 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 03/09/2015 15:11, Peter Lieven wrote:
>>
>>>>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
>>>> Part of the above AioContext series.
>>> So either the whole series or none of them I guess?
>> It's a separate bug, and theoretically it's there in 2.3.1 as well, but
>> no one ever reproduced it (it would hang in make check) so not
>> worthwhile.
> 
> I may have found another case where this bugfix seems to help. For various
> reasons I'm still on 2.2.1 stable with some CVE fixes manually applied.

Don't be ashamed to say that! :)

> I am currently working on the problem that a hanging NFS server, from which
> I mount my CDROM ISOs, takes down the whole vServer because the main loop
> hangs sooner or later.
> 
> I already put some patches on the list which help to improve the situation:
> 
> ide/atapi: make PIO read requests async
> block/io: allow AIOCB without callback
> ide/atapi: partially avoid deadlock if the storage backend is dead
> block/nfs: cache allocated filesize for read-only files
> 
> However, I ended up with one case where Qemu deadlocks in bdrv_drain_one.
> 
> Scenario:
>  - Windows 2012R2 with a random ISO mounted.
>  - Block communication to NFS via Firewall
>  - execute 'eject ide1-cd0' in the HMP
>  - restore connectivity.
> 
> The HMP hangs forever even if the NFS connectivity itself is restored.

If that patch helps, it's not a problem to include it in other stable
releases.

Paolo


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-07 16:28         ` Paolo Bonzini
@ 2015-09-07 19:11           ` Peter Lieven
  2015-09-07 19:14             ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-09-07 19:11 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

On 07.09.2015 at 18:28, Paolo Bonzini wrote:
>
> On 03/09/2015 15:11, Peter Lieven wrote:
>>>>>> edec47c main-loop: fix qemu_notify_event for aio_notify optimization
>>>>> Part of the above AioContext series.
>>>> So either the whole series or none of them I guess?
>>> It's a separate bug, and theoretically it's there in 2.3.1 as well, but
>>> no one ever reproduced it (it would hang in make check) so not
>>> worthwhile.
>> I may have found another case where this bugfix seems to help. For various
>> reasons I'm still on 2.2.1 stable with some CVE fixes manually applied.
> Don't be ashamed to say that! :)
>
>> I am currently working on the problem that a hanging NFS server, from which
>> I mount my CDROM ISOs, takes down the whole vServer because the main loop
>> hangs sooner or later.
>>
>> I already put some patches on the list which help to improve the situation:
>>
>> ide/atapi: make PIO read requests async
>> block/io: allow AIOCB without callback
>> ide/atapi: partially avoid deadlock if the storage backend is dead
>> block/nfs: cache allocated filesize for read-only files
>>
>> However, I ended up with one case where Qemu deadlocks in bdrv_drain_one.
>>
>> Scenario:
>>  - Windows 2012R2 with a random ISO mounted.
>>  - Block communication to NFS via Firewall
>>  - execute 'eject ide1-cd0' in the HMP
>>  - restore connectivity.
>>
>> The HMP hangs forever even if the NFS connectivity itself is restored.
> If that patch helps, it's not a problem to include it in other stable
> releases.

It helps, but I have a small issue when my backport of the patch
is applied.

I launch qemu with a cmdline like this to probe for enforceable CPU types.

echo quit | qemu -enable-kvm -monitor stdio -nodefaults -nographic -cpu Haswell,enforce -S

Qemu hangs when entering the quit command. If I omit the -S it works. And without the
fix it also works with -S.

My backport (Linux only and without tests etc.) of the original fix is here:
https://github.com/plieven/qemu/commit/0ddcdc62a85f705017df16421d769e82b70f9b37

Maybe I made a mistake.

Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-07 19:11           ` Peter Lieven
@ 2015-09-07 19:14             ` Paolo Bonzini
  2015-09-07 21:05               ` Peter Lieven
  2015-09-21  7:40               ` Peter Lieven
  0 siblings, 2 replies; 23+ messages in thread
From: Paolo Bonzini @ 2015-09-07 19:14 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 07/09/2015 21:11, Peter Lieven wrote:
> It helps, but I have a small issue when my backport of the patch
> is applied.
> 
> I launch qemu with a cmdline like this to probe for enforceable CPU types.
> 
> echo quit | qemu -enable-kvm -monitor stdio -nodefaults -nographic -cpu Haswell,enforce -S
> 
> Qemu hangs when entering the quit command. If I omit the -S it works. And without the
> fix it also works with -S.
> 
> My backport (Linux only and without tests etc.) of the original fix is here:
> https://github.com/plieven/qemu/commit/0ddcdc62a85f705017df16421d769e82b70f9b37

Could you be missing edec47c?

Paolo


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-07 19:14             ` Paolo Bonzini
@ 2015-09-07 21:05               ` Peter Lieven
  2015-09-21  7:40               ` Peter Lieven
  1 sibling, 0 replies; 23+ messages in thread
From: Peter Lieven @ 2015-09-07 21:05 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, Stefan Hajnoczi, qemu-stable



> On 07.09.2015 at 21:14, Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
> 
> 
>> On 07/09/2015 21:11, Peter Lieven wrote:
>> It helps, but I have a small issue when my backport of the patch
>> is applied.
>> 
>> I launch qemu with a cmdline like this to probe for enforceable CPU types.
>> 
>> echo quit | qemu -enable-kvm -monitor stdio -nodefaults -nographic -cpu Haswell,enforce -S
>> 
>> Qemu hangs when entering the quit command. If I omit the -S it works. And without the
>> fix it also works with -S.
>> 
>> My backport (Linux only and without tests etc.) of the original fix is here:
>> https://github.com/plieven/qemu/commit/0ddcdc62a85f705017df16421d769e82b70f9b37
> 
> Could you be missing edec47c?

Yes, as well as possibly all other related patches. Is it easy for you to figure out which patches should be backported?

Thanks,
Peter 


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-07 19:14             ` Paolo Bonzini
  2015-09-07 21:05               ` Peter Lieven
@ 2015-09-21  7:40               ` Peter Lieven
  2015-09-21  9:41                 ` Paolo Bonzini
  1 sibling, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-09-21  7:40 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

On 07.09.2015 at 21:14, Paolo Bonzini wrote:
>
> On 07/09/2015 21:11, Peter Lieven wrote:
>> It helps, but I have a small issue when my backport of the patch
>> is applied.
>>
>> I launch qemu with a cmdline like this to probe for enforceable CPU types.
>>
>> echo quit | qemu -enable-kvm -monitor stdio -nodefaults -nographic -cpu Haswell,enforce -S
>>
>> Qemu hangs when entering the quit command. If I omit the -S it works. And without the
>> fix it also works with -S.
>>
>> My backport (Linux only and without tests etc.) of the original fix is here:
>> https://github.com/plieven/qemu/commit/0ddcdc62a85f705017df16421d769e82b70f9b37
> Could you be missing edec47c?

Indeed, this one fixed the deadlock with -S on the cmdline.
Are there other important fixes from the series?

I currently have these two:
  main-loop: fix qemu_notify_event for aio_notify optimization
  AioContext: fix broken ctx->dispatching optimization

Thanks,
Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-21  7:40               ` Peter Lieven
@ 2015-09-21  9:41                 ` Paolo Bonzini
  2015-09-21  9:52                   ` Peter Lieven
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-09-21  9:41 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 21/09/2015 09:40, Peter Lieven wrote:
>>>
>>>
>>> My backport (Linux only and without tests etc.) of the original fix
>>> is here:
>>> https://github.com/plieven/qemu/commit/0ddcdc62a85f705017df16421d769e82b70f9b37
>>>
>> Could you be missing edec47c?
> 
> Indeed, this one fixed the deadlock with -S on the cmdline.
> Are there other important fixes from the series?
> 
> I currently have these two:
>  main-loop: fix qemu_notify_event for aio_notify optimization
>  AioContext: fix broken ctx->dispatching optimization

The full list is

e4efd8a488d0a68b0af34d8ee245463df7c8bdf4 qemu-timer: initialize "timers_done_ev" to set
12d69ac03b45156356b240424623719f15d8143e tests: remove irrelevant assertions from test-aio
6493c975af75be5b8d9ade954239bdf5492b7911 aio-win32: reorganize polling loop
eabc977973103527bbb8fed69c91cfaa6691f8ab AioContext: fix broken ctx->dispatching optimization
21a03d17f2edb1e63f7137d97ba355cc6f19d79f AioContext: fix broken placement of event_notifier_test_and_clear
05e514b1d4d5bd4209e2c8bbc76ff05c85a235f3 AioContext: optimize clearing the EventNotifier
edec47cfef96209987cb7922286cb384916aae02 main-loop: fix qemu_notify_event for aio_notify optimization
fed105e2756dde98efa5e80baca02ae516dd1e51 virtio-blk-dataplane: delete bottom half before the AioContext is freed
a076972a4d36381d610a854f0c336507650a1d34 AioContext: avoid leaking BHs on cleanup
ca96ac44dcd290566090b2435bc828fded356ad9 AioContext: force event loop iteration using BH

Paolo


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-21  9:41                 ` Paolo Bonzini
@ 2015-09-21  9:52                   ` Peter Lieven
  2015-09-21 12:16                     ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Lieven @ 2015-09-21  9:52 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable

On 21.09.2015 at 11:41, Paolo Bonzini wrote:
>
> On 21/09/2015 09:40, Peter Lieven wrote:
>>>>
>>>> My backport (Linux only and without tests etc.) of the original fix
>>>> is here:
>>>> https://github.com/plieven/qemu/commit/0ddcdc62a85f705017df16421d769e82b70f9b37
>>>>
>>> Could you be missing edec47c?
>> Indeed this one fixed the deadlock with -S in cmdline.
>> Are there other important fixes from the series?
>>
>> I currently have these two:
>>   main-loop: fix qemu_notify_event for aio_notify optimization
>>   AioContext: fix broken ctx->dispatching optimization
> The full list is
>
> e4efd8a488d0a68b0af34d8ee245463df7c8bdf4 qemu-timer: initialize "timers_done_ev" to set
> 12d69ac03b45156356b240424623719f15d8143e tests: remove irrelevant assertions from test-aio
> 6493c975af75be5b8d9ade954239bdf5492b7911 aio-win32: reorganize polling loop
> eabc977973103527bbb8fed69c91cfaa6691f8ab AioContext: fix broken ctx->dispatching optimization
> 21a03d17f2edb1e63f7137d97ba355cc6f19d79f AioContext: fix broken placement of event_notifier_test_and_clear
> 05e514b1d4d5bd4209e2c8bbc76ff05c85a235f3 AioContext: optimize clearing the EventNotifier
> edec47cfef96209987cb7922286cb384916aae02 main-loop: fix qemu_notify_event for aio_notify optimization
> fed105e2756dde98efa5e80baca02ae516dd1e51 virtio-blk-dataplane: delete bottom half before the AioContext is freed
> a076972a4d36381d610a854f0c336507650a1d34 AioContext: avoid leaking BHs on cleanup
> ca96ac44dcd290566090b2435bc828fded356ad9 AioContext: force event loop iteration using BH
>
> Paolo

Thanks, I will have a look at them. If I understand correctly, I should consider backporting all of them?

Peter


* Re: [Qemu-devel] [Qemu-stable] Recent patches for 2.4
  2015-09-21  9:52                   ` Peter Lieven
@ 2015-09-21 12:16                     ` Paolo Bonzini
  0 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2015-09-21 12:16 UTC (permalink / raw)
  To: Peter Lieven, Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable



On 21/09/2015 11:52, Peter Lieven wrote:
>>
>> e4efd8a488d0a68b0af34d8ee245463df7c8bdf4 qemu-timer: initialize "timers_done_ev" to set
>> 12d69ac03b45156356b240424623719f15d8143e tests: remove irrelevant assertions from test-aio
>> 6493c975af75be5b8d9ade954239bdf5492b7911 aio-win32: reorganize polling loop
>> eabc977973103527bbb8fed69c91cfaa6691f8ab AioContext: fix broken ctx->dispatching optimization
>> 21a03d17f2edb1e63f7137d97ba355cc6f19d79f AioContext: fix broken placement of event_notifier_test_and_clear
>> 05e514b1d4d5bd4209e2c8bbc76ff05c85a235f3 AioContext: optimize clearing the EventNotifier
>> edec47cfef96209987cb7922286cb384916aae02 main-loop: fix qemu_notify_event for aio_notify optimization
>> fed105e2756dde98efa5e80baca02ae516dd1e51 virtio-blk-dataplane: delete bottom half before the AioContext is freed
>> a076972a4d36381d610a854f0c336507650a1d34 AioContext: avoid leaking BHs on cleanup
>> ca96ac44dcd290566090b2435bc828fded356ad9 AioContext: force event loop iteration using BH
>>
>> Paolo
> 
> Thanks, I will have a look at them. If I understand correctly, I should
> consider backporting all of them?

It's the safest bet, yes.

Paolo


end of thread

Thread overview: 23+ messages
2015-07-31  8:12 [Qemu-devel] Recent patches for 2.4 Peter Lieven
2015-07-31  8:22 ` Paolo Bonzini
2015-07-31  8:35   ` Peter Lieven
2015-07-31  9:29     ` Paolo Bonzini
2015-08-04  9:22       ` [Qemu-devel] [Qemu-stable] " Peter Lieven
2015-08-04 11:53         ` Paolo Bonzini
2015-08-04 11:57           ` Peter Lieven
2015-08-04 12:09             ` Paolo Bonzini
2015-08-04 12:29               ` Peter Lieven
2015-08-04 12:53                 ` Peter Lieven
2015-08-04 23:23                   ` ronnie sahlberg
2015-08-05  8:39                     ` Paolo Bonzini
2015-08-06  7:51                       ` Peter Lieven
2015-09-03 13:11       ` Peter Lieven
2015-09-07 16:28         ` Paolo Bonzini
2015-09-07 19:11           ` Peter Lieven
2015-09-07 19:14             ` Paolo Bonzini
2015-09-07 21:05               ` Peter Lieven
2015-09-21  7:40               ` Peter Lieven
2015-09-21  9:41                 ` Paolo Bonzini
2015-09-21  9:52                   ` Peter Lieven
2015-09-21 12:16                     ` Paolo Bonzini
2015-08-04 18:20     ` Michael Roth
