From: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
To: Dan Williams <dan.j.williams-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Cc: Jan Kara <jack-AlSwsSmVLrQ@public.gmane.org>,
	KVM list <kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	David Hildenbrand <david-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	linux-nvdimm
	<linux-nvdimm-y27Ovi1pjclAfugRpC6u6w@public.gmane.org>,
	ross zwisler
	<ross.zwisler-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>,
	Qemu Developers
	<qemu-devel-qX2TKyscuCcdnm+yROfE0A@public.gmane.org>,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	Linux MM <linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org>,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ@public.gmane.org,
	"Michael S. Tsirkin"
	<mst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	Christoph Hellwig <hch-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>,
	Stefan Hajnoczi
	<stefanha-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>,
	Marcel Apfelbaum <marcel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	Nitesh Narayan Lal
	<nilal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	Rik van Riel <riel-ebMLmSuQjDVBDgjK7y7TUQ@public.gmane.org>,
	Stefan Hajnoczi
	<stefanha-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	Paolo Bonzini <pbonzini-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	Kevin Wolf <kwolf-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	xiaoguangrong eric
	<xiaoguangrong.eric-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>,
	Linux Kernel Mailing List
	<linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	Igor Mammedov <imammedo-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Subject: Re: [RFC v2 2/2] pmem: device flush over VIRTIO
Date: Thu, 26 Apr 2018 13:13:44 -0400 (EDT)	[thread overview]
Message-ID: <1302242642.23016855.1524762824836.JavaMail.zimbra@redhat.com> (raw)
In-Reply-To: <CAPcyv4jv-hJNKJxak98T7aCnWztVEDTE8o=8fjvOrVmrTfyjdA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>


> >
> >>
> >> On Wed, Apr 25, 2018 at 04:54:14PM +0530, Pankaj Gupta wrote:
> >> > This patch adds functionality to perform a
> >> > flush from guest to host over VIRTIO
> >> > when the 'ND_REGION_VIRTIO' flag is set on
> >> > the nd_region. The flag is set by the 'virtio-pmem'
> >> > driver.
> >> >
> >> > Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> >> > ---
> >> >  drivers/nvdimm/region_devs.c | 7 +++++++
> >> >  1 file changed, 7 insertions(+)
> >> >
> >> > diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
> >> > index a612be6..6c6454e 100644
> >> > --- a/drivers/nvdimm/region_devs.c
> >> > +++ b/drivers/nvdimm/region_devs.c
> >> > @@ -20,6 +20,7 @@
> >> >  #include <linux/nd.h>
> >> >  #include "nd-core.h"
> >> >  #include "nd.h"
> >> > +#include <linux/virtio_pmem.h>
> >> >
> >> >  /*
> >> >   * For readq() and writeq() on 32-bit builds, the hi-lo, lo-hi order is
> >> > @@ -1074,6 +1075,12 @@ void nvdimm_flush(struct nd_region *nd_region)
> >> >     struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
> >> >     int i, idx;
> >> >
> >> > +       /* call PV device flush */
> >> > +   if (test_bit(ND_REGION_VIRTIO, &nd_region->flags)) {
> >> > +           virtio_pmem_flush(&nd_region->dev);
> >> > +           return;
> >> > +   }
> >>
> >> How does libnvdimm know when flush has completed?
> >>
> >> Callers expect the flush to be finished when nvdimm_flush() returns but
> >> the virtio driver has only queued the request, it hasn't waited for
> >> completion!
> >
> > I tried to implement what nvdimm does right now. It just writes to
> > the flush hint address to make sure data persists.
> 
> nvdimm_flush() is currently expected to be synchronous. Currently it
> is sfence(); write to special address; sfence(). By the time the
> second sfence returns the data is flushed. So you would need to make
> this virtio flush interface synchronous as well, but that appears
> problematic to stop the guest for unbounded amounts of time. Instead,
> you need to rework nvdimm_flush() and the pmem driver to make these
> flush requests asynchronous and add the plumbing for completion
> callbacks via bio_endio().

o.k. 

> 
> > I just did not want to block guest write requests till host side
> > fsync completes.
> 
> You must complete the flush before bio_endio(), otherwise you're
> violating the expectations of the guest filesystem/block-layer.

sure!

> 
> >
> > be worse for operations on different guest files, because all these
> > operations would ultimately happen on the same file at the host.
> >
> > I think with the current approach we can achieve an asynchronous queuing
> > mechanism at the cost of not knowing exactly when fsync would complete,
> > but it is assured that it will happen. Also, it is an entire block
> > flush.
> 
> No, again, that's broken. We need to add the plumbing for
> communicating the fsync() completion relative to the WRITE_{FLUSH,FUA}
> bio in the guest.

Sure. Thanks Dan & Stefan for the explanation and review. 

Best regards,
Pankaj


  parent reply	other threads:[~2018-04-26 17:13 UTC|newest]

Thread overview: 80+ messages
2018-04-25 11:24 [RFC v2 0/2] kvm "fake DAX" device flushing Pankaj Gupta
2018-04-25 11:24 ` [Qemu-devel] " Pankaj Gupta
2018-04-25 11:24 ` Pankaj Gupta
     [not found] ` <20180425112415.12327-1-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-25 11:24   ` [RFC v2 1/2] virtio: add pmem driver Pankaj Gupta
2018-04-25 11:24     ` [Qemu-devel] " Pankaj Gupta
2018-04-25 11:24     ` Pankaj Gupta
     [not found]     ` <20180425112415.12327-2-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-25 14:21       ` Dan Williams
2018-04-25 14:21         ` [Qemu-devel] " Dan Williams
2018-04-25 14:21         ` Dan Williams
     [not found]         ` <CAPcyv4hvrB08XPTbVK0xT2_1Xmaid=-v3OMxJVDTNwQucsOHLA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2018-04-25 14:43           ` Dan Williams
2018-04-25 14:43             ` [Qemu-devel] " Dan Williams
2018-04-25 14:43             ` Dan Williams
     [not found]             ` <CAPcyv4hiowWozV527sQA_e4fdgCYbD6xfG==vepAqu0hxQEQcw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2018-04-26 12:27               ` Jeff Moyer
2018-04-26 12:27                 ` [Qemu-devel] " Jeff Moyer
2018-04-26 12:27                 ` Jeff Moyer
2018-04-26 12:27                 ` Jeff Moyer
     [not found]                 ` <x49o9i6885e.fsf-RRHT56Q3PSP4kTEheFKJxxDDeQx5vsVwAInAS/Ez/D0@public.gmane.org>
2018-04-26 17:15                   ` [Qemu-devel] " Pankaj Gupta
2018-04-26 17:15                     ` Pankaj Gupta
     [not found]                     ` <1499190564.23017177.1524762938762.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-26 17:24                       ` Jeff Moyer
2018-04-26 17:24                         ` Jeff Moyer
2018-04-25 14:52       ` Michael S. Tsirkin
2018-04-25 14:52         ` [Qemu-devel] " Michael S. Tsirkin
2018-04-25 14:52         ` Michael S. Tsirkin
     [not found]         ` <20180425174705-mutt-send-email-mst-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2018-04-25 15:11           ` [Qemu-devel] " Pankaj Gupta
2018-04-25 15:11             ` Pankaj Gupta
2018-04-26 13:12     ` Stefan Hajnoczi
     [not found]       ` <20180426131236.GA30991-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
2018-04-26 15:44         ` Pankaj Gupta
2018-04-26 15:44           ` Pankaj Gupta
2018-04-27 13:31           ` Stefan Hajnoczi
     [not found]             ` <20180427133146.GB11150-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
2018-04-28 10:48               ` Pankaj Gupta
2018-04-28 10:48                 ` Pankaj Gupta
2018-04-25 11:24   ` [RFC v2 2/2] pmem: device flush over VIRTIO Pankaj Gupta
2018-04-25 11:24     ` [Qemu-devel] " Pankaj Gupta
2018-04-25 11:24     ` Pankaj Gupta
     [not found]     ` <20180425112415.12327-3-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-25 14:23       ` Dan Williams
2018-04-25 14:23         ` [Qemu-devel] " Dan Williams
2018-04-25 14:23         ` Dan Williams
     [not found]         ` <CAPcyv4gpZzKfE7jY1peYOVd6sVhNz7jce1s_xNH_2Lt8AjRK-Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2018-04-25 14:47           ` Pankaj Gupta
2018-04-25 14:47             ` [Qemu-devel] " Pankaj Gupta
2018-04-25 14:47             ` Pankaj Gupta
2018-04-26 13:15     ` Stefan Hajnoczi
2018-04-26 13:15       ` [Qemu-devel] " Stefan Hajnoczi
     [not found]       ` <20180426131517.GB30991-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
2018-04-26 16:40         ` Pankaj Gupta
2018-04-26 16:40           ` [Qemu-devel] " Pankaj Gupta
2018-04-26 16:40           ` Pankaj Gupta
     [not found]           ` <58645254.23011245.1524760853269.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-26 16:57             ` Dan Williams
2018-04-26 16:57               ` [Qemu-devel] " Dan Williams
2018-04-26 16:57               ` Dan Williams
     [not found]               ` <CAPcyv4jv-hJNKJxak98T7aCnWztVEDTE8o=8fjvOrVmrTfyjdA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2018-04-26 17:13                 ` Pankaj Gupta [this message]
2018-04-26 17:13                   ` [Qemu-devel] " Pankaj Gupta
2018-04-26 17:13                   ` Pankaj Gupta
2018-04-25 11:24   ` [RFC v2] qemu: Add virtio pmem device Pankaj Gupta
2018-04-25 11:24     ` [Qemu-devel] " Pankaj Gupta
2018-04-25 11:24     ` Pankaj Gupta
     [not found]     ` <20180425112415.12327-4-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-25 11:35       ` [Qemu-devel] " no-reply-isE1Te71pDtAfugRpC6u6w
2018-04-25 11:35         ` no-reply
2018-04-25 11:35         ` no-reply
2018-04-25 11:58         ` Pankaj Gupta
2018-04-25 11:58           ` Pankaj Gupta
2018-04-25 14:23           ` Eric Blake
2018-04-25 14:23             ` Eric Blake
     [not found]             ` <79f72139-0fcb-3d5e-a16c-24f3b5ee1a07-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-25 14:51               ` Pankaj Gupta
2018-04-25 14:51                 ` Pankaj Gupta
2018-04-25 11:46     ` no-reply
2018-04-25 11:46       ` no-reply
2018-04-25 11:46       ` no-reply
2018-04-25 14:25     ` Eric Blake
2018-04-25 14:25       ` Eric Blake
     [not found]       ` <25f3e433-cfa6-4a62-ba7f-47aef1119dfc-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-04-25 14:55         ` Pankaj Gupta
2018-04-25 14:55           ` Pankaj Gupta
2018-04-26 13:24     ` Stefan Hajnoczi
2018-04-26 13:24       ` [Qemu-devel] " Stefan Hajnoczi
     [not found]       ` <20180426132406.GC30991-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
2018-04-26 16:43         ` Pankaj Gupta
2018-04-26 16:43           ` Pankaj Gupta
2018-06-01 12:24   ` [Qemu-devel] [RFC v2 0/2] kvm "fake DAX" device flushing Igor Mammedov
2018-06-01 12:24     ` Igor Mammedov
     [not found]     ` <20180601142410.5c986f13-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-06-04  5:56       ` Pankaj Gupta
2018-06-04  5:56         ` Pankaj Gupta
2018-06-04  9:55       ` David Hildenbrand
2018-06-04  9:55         ` David Hildenbrand
