From: Tiwei Bie <tiwei.bie@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: jasowang@redhat.com, alex.williamson@redhat.com,
	pbonzini@redhat.com, stefanha@redhat.com, qemu-devel@nongnu.org,
	virtio-dev@lists.oasis-open.org, cunming.liang@intel.com,
	dan.daly@intel.com, jianfeng.tan@intel.com,
	zhihong.wang@intel.com, xiao.w.wang@intel.com
Subject: Re: [Qemu-devel] [PATCH v3 2/6] vhost-user: introduce shared vhost-user state
Date: Thu, 24 May 2018 23:22:01 +0800	[thread overview]
Message-ID: <20180524152201.GC26913@debian> (raw)
In-Reply-To: <20180524173015-mutt-send-email-mst@kernel.org>

On Thu, May 24, 2018 at 05:30:45PM +0300, Michael S. Tsirkin wrote:
> On Thu, May 24, 2018 at 06:59:36PM +0800, Tiwei Bie wrote:
> > On Thu, May 24, 2018 at 10:24:40AM +0800, Tiwei Bie wrote:
> > > On Thu, May 24, 2018 at 07:21:01AM +0800, Tiwei Bie wrote:
> > > > On Wed, May 23, 2018 at 06:43:29PM +0300, Michael S. Tsirkin wrote:
> > > > > On Wed, May 23, 2018 at 06:36:05PM +0300, Michael S. Tsirkin wrote:
> > > > > > On Wed, May 23, 2018 at 04:44:51PM +0300, Michael S. Tsirkin wrote:
> > > > > > > On Thu, Apr 12, 2018 at 11:12:28PM +0800, Tiwei Bie wrote:
> > > > > > > > When multi queue is enabled e.g. for a virtio-net device,
> > > > > > > > each queue pair will have a vhost_dev, and the only thing
> > > > > > > > shared between vhost devs currently is the chardev. This
> > > > > > > > patch introduces a vhost-user state structure which will
> > > > > > > > be shared by all vhost devs of the same virtio device.
> > > > > > > > 
> > > > > > > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
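The shared state being introduced is, at its core, a small wrapper around the chardev backend that every queue pair's vhost_dev can point to. A rough sketch of its shape (illustrative only, not necessarily the verbatim contents of the new include/hw/virtio/vhost-user.h):

typedef struct VhostUserState {
    CharBackend *chr;   /* the single chardev shared by all queue pairs */
} VhostUserState;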
> > > > > > > 
> > > > > > > Unfortunately this patch seems to cause crashes.
> > > > > > > To reproduce, simply run
> > > > > > > make check-qtest-x86_64
> > > > > > > 
> > > > > > > Sorry that it took me a while to find - it triggers in 90% of runs but not
> > > > > > > 100%, which complicates bisection somewhat.
> > > > 
> > > > It's my fault for not noticing this bug before.
> > > > I'm very sorry. Thank you so much for finding
> > > > the root cause!
> > > > 
> > > > > > > 
> > > > > > > > ---
> > > > > > > >  backends/cryptodev-vhost-user.c     | 20 ++++++++++++++++++-
> > > > > > > >  hw/block/vhost-user-blk.c           | 22 +++++++++++++++++++-
> > > > > > > >  hw/scsi/vhost-user-scsi.c           | 20 ++++++++++++++++++-
> > > > > > > >  hw/virtio/Makefile.objs             |  2 +-
> > > > > > > >  hw/virtio/vhost-stub.c              | 10 ++++++++++
> > > > > > > >  hw/virtio/vhost-user.c              | 31 +++++++++++++++++++---------
> > > > > > > >  include/hw/virtio/vhost-user-blk.h  |  2 ++
> > > > > > > >  include/hw/virtio/vhost-user-scsi.h |  2 ++
> > > > > > > >  include/hw/virtio/vhost-user.h      | 20 +++++++++++++++++++
> > > > > > > >  net/vhost-user.c                    | 40 ++++++++++++++++++++++++++++++-------
> > > > > > > >  10 files changed, 149 insertions(+), 20 deletions(-)
> > > > > > > >  create mode 100644 include/hw/virtio/vhost-user.h
> > > > [...]
> > > > > > > >          qemu_chr_fe_set_handlers(&s->chr, NULL, NULL,
> > > > > > > >                                   net_vhost_user_event, NULL, nc0->name, NULL,
> > > > > > > > @@ -319,6 +336,15 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
> > > > > > > >      assert(s->vhost_net);
> > > > > > > >  
> > > > > > > >      return 0;
> > > > > > > > +
> > > > > > > > +err:
> > > > > > > > +    if (user) {
> > > > > > > > +        vhost_user_cleanup(user);
> > > > > > > > +        g_free(user);
> > > > > > > > +        s->vhost_user = NULL;
> > > > > > > > +    }
> > > > > > > > +
> > > > > > > > +    return -1;
> > > > > > > >  }
> > > > > > > >  
> > > > > > > >  static Chardev *net_vhost_claim_chardev(
> > > > > > > > -- 
> > > > > > > > 2.11.0
> > > > > > 
> > > > > > So far I figured out that commenting out the free of
> > > > > > the structure removes the crash, so we seem to
> > > > > > be dealing with a use-after-free here.
> > > > > > I suspect that in a MQ situation, one queue gets
> > > > > > closed and attempts to free the structure
> > > > > > while others still use it.
> > > > > > 
> > > > > > diff --git a/net/vhost-user.c b/net/vhost-user.c
> > > > > > index 525a061..6a1573b 100644
> > > > > > --- a/net/vhost-user.c
> > > > > > +++ b/net/vhost-user.c
> > > > > > @@ -157,8 +157,8 @@ static void net_vhost_user_cleanup(NetClientState *nc)
> > > > > >          s->vhost_net = NULL;
> > > > > >      }
> > > > > >      if (s->vhost_user) {
> > > > > > -        vhost_user_cleanup(s->vhost_user);
> > > > > > -        g_free(s->vhost_user);
> > > > > > +        //vhost_user_cleanup(s->vhost_user);
> > > > > > +        //g_free(s->vhost_user);
> > > > > >          s->vhost_user = NULL;
> > > > > >      }
> > > > > >      if (nc->queue_index == 0) {
> > > > > > @@ -339,8 +339,8 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
> > > > > >  
> > > > > >  err:
> > > > > >      if (user) {
> > > > > > -        vhost_user_cleanup(user);
> > > > > > -        g_free(user);
> > > > > > +        //vhost_user_cleanup(user);
> > > > > > +        //g_free(user);
> > > > > >          s->vhost_user = NULL;
> > > > > >      }
> > > > > >  
> > > > > 
> > > > > 
> > > > > So the following at least gets rid of the crashes.
> > > > > I am not sure it does not leak memory though,
> > > > > and not sure there aren't any configurations where
> > > > > the 1st queue gets cleaned up first.
> > > > > 
> > > > > Thoughts?
> > > > 
> > > > Thank you so much for catching and fixing
> > > > it! I'll keep your SoB there. I really
> > > > appreciate it!
> > > > 
> > > > You are right. This structure is freed multiple
> > > > times when multi-queue is enabled.
> > > 
> > > After digging deeper, I get your point now.
> > > It could be a use-after-free instead of a double
> > > free. As it's safe to deinit the char, which is
> > > shared by all queue pairs, when cleaning up the 1st
> > > queue pair, it should be safe to free the vhost-user
> > > structure there too.
> > 
> > One thing that worries me is that I can't reproduce
> > the crash with `make check-qtest-x86_64`.
> 
> I sent a patch that will make the test fail on a qemu coredump.

That would be very useful!

Without your explanation, I guess I wouldn't have been
able to notice such a crash in this test.

Best regards,
Tiwei Bie

> 
> > I tried
> > to run it a lot of times, but the only output I
> > got each time was:
> > 
> >         CHK version_gen.h
> >   GTESTER check-qtest-x86_64
> > 
> > I took a quick glance at the `check-qtest-x86_64`
> > target in the makefile; it seems that the relevant
> > test is `tests/vhost-user-test`. So I also tried
> > to run this test directly:
> > 
> > make tests/vhost-user-test
> > while true; do
> >   QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 ./tests/vhost-user-test
> > done
> > 
> > And the only output in each loop I got was:
> > 
> > /x86_64/vhost-user/migrate: OK
> > /x86_64/vhost-user/multiqueue: OK
> > /x86_64/vhost-user/read-guest-mem/memfd: OK
> > /x86_64/vhost-user/read-guest-mem/memfile: OK
> > 
> > So I'm still not quite sure what caused the crash
> > on your side. But it does make more sense to free
> > the vhost-user structure only when cleaning up the
> > 1st queue pair (i.e. where the `chr` is deinit-ed).
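A minimal sketch of that guarded cleanup, assuming the NetVhostUserState fields visible in net/vhost-user.c at the time (vhost_net, watch, chr, vhost_user); this is illustrative rather than the exact hunk from the new series:

static void net_vhost_user_cleanup(NetClientState *nc)
{
    NetVhostUserState *s = DO_UPCAST(NetVhostUserState, nc, nc);

    if (s->vhost_net) {
        vhost_net_cleanup(s->vhost_net);
        g_free(s->vhost_net);
        s->vhost_net = NULL;
    }
    /* Only the 1st queue pair tears down the shared pieces: the
     * chardev and the vhost-user state are freed in one place. */
    if (nc->queue_index == 0) {
        if (s->watch) {
            g_source_remove(s->watch);
            s->watch = 0;
        }
        qemu_chr_fe_deinit(&s->chr, true);
        if (s->vhost_user) {
            vhost_user_cleanup(s->vhost_user);
            g_free(s->vhost_user);
            s->vhost_user = NULL;
        }
    }

    qemu_purge_queued_packets(nc);
}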
> > 
> > I have sent a new patch set with the above change:
> > 
> > http://lists.gnu.org/archive/html/qemu-devel/2018-05/msg05508.html
> > https://patchwork.kernel.org/bundle/btw/vhost-user-host-notifiers/
> > 
> > Because the above change comes from your diff
> > and is based on your analysis, I kept your SoB
> > in that patch (if you have any concerns about it,
> > please let me know).
> > 
> > In this patch set, I also introduced a protocol
> > feature to allow the slave to send fds to the master
> > via the slave channel.
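Since the slave channel is a UNIX domain socket, passing fds from the slave to the master boils down to SCM_RIGHTS ancillary data. A self-contained sketch of the receive side, using a hypothetical helper rather than QEMU's actual slave_read():

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Receive a payload plus up to max_fds descriptors; returns the number
 * of fds copied into fds[], or -1 if recvmsg() fails. */
static int recv_payload_with_fds(int sock, void *buf, size_t len,
                                 int *fds, size_t max_fds)
{
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    char control[CMSG_SPACE(8 * sizeof(int))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = control, .msg_controllen = sizeof(control),
    };

    if (recvmsg(sock, &msg, 0) < 0) {
        return -1;
    }

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS) {
            size_t nfds = (c->cmsg_len - CMSG_LEN(0)) / sizeof(int);
            if (nfds > max_fds) {
                nfds = max_fds;
            }
            memcpy(fds, CMSG_DATA(c), nfds * sizeof(int));
            return (int)nfds;
        }
    }
    return 0;
}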
> > 
> > If you still see crashes with the new patch set,
> > please provide a bit more detail, e.g. the
> > crash message. Thanks a lot!
> > 
> > Best regards,
> > Tiwei Bie


Thread overview: 98+ messages
2018-04-12 15:12 [Qemu-devel] [PATCH v3 0/6] Extend vhost-user to support registering external host notifiers Tiwei Bie
2018-04-12 15:12 ` [virtio-dev] " Tiwei Bie
2018-04-12 15:12 ` [Qemu-devel] [PATCH v3 1/6] vhost-user: add Net prefix to internal state structure Tiwei Bie
2018-04-12 15:12   ` [virtio-dev] " Tiwei Bie
2018-04-12 15:12 ` [Qemu-devel] [PATCH v3 2/6] vhost-user: introduce shared vhost-user state Tiwei Bie
2018-04-12 15:12   ` [virtio-dev] " Tiwei Bie
2018-05-23 13:44   ` [Qemu-devel] " Michael S. Tsirkin
2018-05-23 13:44     ` [virtio-dev] " Michael S. Tsirkin
2018-05-23 15:36     ` [Qemu-devel] " Michael S. Tsirkin
2018-05-23 15:36       ` [virtio-dev] " Michael S. Tsirkin
2018-05-23 15:43       ` [Qemu-devel] " Michael S. Tsirkin
2018-05-23 15:43         ` [virtio-dev] " Michael S. Tsirkin
2018-05-23 23:21         ` [Qemu-devel] " Tiwei Bie
2018-05-23 23:21           ` [virtio-dev] " Tiwei Bie
2018-05-24  2:24           ` [Qemu-devel] " Tiwei Bie
2018-05-24  2:24             ` [virtio-dev] " Tiwei Bie
2018-05-24  7:03             ` [Qemu-devel] " Tiwei Bie
2018-05-24  7:03               ` [virtio-dev] " Tiwei Bie
2018-05-24 10:59             ` [Qemu-devel] " Tiwei Bie
2018-05-24 10:59               ` [virtio-dev] " Tiwei Bie
2018-05-24 13:55               ` [Qemu-devel] " Michael S. Tsirkin
2018-05-24 13:55                 ` [virtio-dev] " Michael S. Tsirkin
2018-05-24 14:54                 ` [Qemu-devel] " Tiwei Bie
2018-05-24 14:54                   ` Tiwei Bie
2018-05-24 14:30               ` [Qemu-devel] " Michael S. Tsirkin
2018-05-24 14:30                 ` [virtio-dev] " Michael S. Tsirkin
2018-05-24 15:22                 ` Tiwei Bie [this message]
2018-05-24 15:22                   ` Tiwei Bie
2018-05-24 13:50             ` [Qemu-devel] " Michael S. Tsirkin
2018-05-24 13:50               ` [virtio-dev] " Michael S. Tsirkin
2018-04-12 15:12 ` [Qemu-devel] [PATCH v3 3/6] vhost-user: support receiving file descriptors in slave_read Tiwei Bie
2018-04-12 15:12   ` [virtio-dev] " Tiwei Bie
2018-05-23 21:25   ` [Qemu-devel] " Michael S. Tsirkin
2018-05-23 21:25     ` [virtio-dev] " Michael S. Tsirkin
2018-05-23 23:12     ` [Qemu-devel] " Tiwei Bie
2018-05-23 23:12       ` Tiwei Bie
2018-05-24 13:48       ` [Qemu-devel] " Michael S. Tsirkin
2018-05-24 13:48         ` Michael S. Tsirkin
2018-05-24 14:56         ` [Qemu-devel] " Tiwei Bie
2018-05-24 14:56           ` Tiwei Bie
2018-04-12 15:12 ` [Qemu-devel] [PATCH v3 4/6] virtio: support setting memory region based host notifier Tiwei Bie
2018-04-12 15:12   ` [virtio-dev] " Tiwei Bie
2018-04-12 15:12 ` [Qemu-devel] [PATCH v3 5/6] vhost: allow backends to filter memory sections Tiwei Bie
2018-04-12 15:12   ` [virtio-dev] " Tiwei Bie
2018-04-12 15:12 ` [Qemu-devel] [PATCH v3 6/6] vhost-user: support registering external host notifiers Tiwei Bie
2018-04-12 15:12   ` [virtio-dev] " Tiwei Bie
2018-04-18 16:34   ` [Qemu-devel] " Michael S. Tsirkin
2018-04-18 16:34     ` [virtio-dev] " Michael S. Tsirkin
2018-04-19 11:14     ` [Qemu-devel] " Tiwei Bie
2018-04-19 11:14       ` [virtio-dev] " Tiwei Bie
2018-04-19 12:43       ` [Qemu-devel] " Liang, Cunming
2018-04-19 12:43         ` [virtio-dev] " Liang, Cunming
2018-04-19 13:02         ` [Qemu-devel] " Paolo Bonzini
2018-04-19 13:02           ` Paolo Bonzini
2018-04-19 15:19           ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 15:19             ` Michael S. Tsirkin
2018-04-19 15:51             ` [Qemu-devel] " Paolo Bonzini
2018-04-19 15:51               ` Paolo Bonzini
2018-04-19 15:59               ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 15:59                 ` Michael S. Tsirkin
2018-04-19 16:07                 ` [Qemu-devel] " Paolo Bonzini
2018-04-19 16:07                   ` Paolo Bonzini
2018-04-19 16:48                   ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 16:48                     ` Michael S. Tsirkin
2018-04-19 16:24             ` [Qemu-devel] " Liang, Cunming
2018-04-19 16:24               ` Liang, Cunming
2018-04-19 16:55               ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 16:55                 ` Michael S. Tsirkin
2018-04-20  3:01                 ` [Qemu-devel] " Liang, Cunming
2018-04-20  3:01                   ` Liang, Cunming
2018-04-19 15:42         ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 15:42           ` [virtio-dev] " Michael S. Tsirkin
2018-04-19 15:52           ` [Qemu-devel] " Paolo Bonzini
2018-04-19 15:52             ` [virtio-dev] " Paolo Bonzini
2018-04-19 16:34             ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 16:34               ` [virtio-dev] " Michael S. Tsirkin
2018-04-19 16:52             ` [Qemu-devel] " Liang, Cunming
2018-04-19 16:52               ` [virtio-dev] " Liang, Cunming
2018-04-19 16:59               ` [Qemu-devel] " Paolo Bonzini
2018-04-19 16:59                 ` Paolo Bonzini
2018-04-19 17:27                 ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 17:27                   ` Michael S. Tsirkin
2018-04-19 17:35                   ` [Qemu-devel] " Paolo Bonzini
2018-04-19 17:35                     ` Paolo Bonzini
2018-04-19 17:39                     ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 17:39                       ` Michael S. Tsirkin
2018-04-19 17:00               ` [Qemu-devel] " Michael S. Tsirkin
2018-04-19 17:00                 ` [virtio-dev] " Michael S. Tsirkin
2018-04-19 23:05                 ` [Qemu-devel] " Liang, Cunming
2018-04-19 23:05                   ` [virtio-dev] " Liang, Cunming
2018-04-19 16:27           ` [Qemu-devel] " Liang, Cunming
2018-04-19 16:27             ` [virtio-dev] " Liang, Cunming
2018-05-02 10:32       ` [Qemu-devel] " Tiwei Bie
2018-05-02 10:32         ` [virtio-dev] " Tiwei Bie
2018-05-16  1:41 ` [Qemu-devel] [PATCH v3 0/6] Extend vhost-user to " Michael S. Tsirkin
2018-05-16  1:41   ` [virtio-dev] " Michael S. Tsirkin
2018-05-16  1:56   ` [Qemu-devel] " Tiwei Bie
2018-05-16  1:56     ` [virtio-dev] " Tiwei Bie
