NVDIMM Device and Persistent Memory development
* [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
@ 2021-07-15 22:35 Taylor Stark
  2021-07-19 20:36 ` Pankaj Gupta
  2021-07-19 21:17 ` Michael S. Tsirkin
  0 siblings, 2 replies; 10+ messages in thread
From: Taylor Stark @ 2021-07-15 22:35 UTC (permalink / raw)
  To: dan.j.williams, vishal.l.verma, dave.jiang, ira.weiny
  Cc: nvdimm, apais, tyhicks, jamorris, benhill, sunilmut, grahamwo, tstark

Update virtio-pmem to allow the pmem region to be specified either in
guest-absolute terms or as a PCI BAR-relative address. This is required
to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices
to operate on PCI memory ranges defined via BARs.

Virtio-pmem will check for a shared memory window and use it if found;
otherwise it will fall back to using the guest-absolute addresses in
virtio_pmem_config. This approach was chosen over defining a new feature
bit, since it mirrors how virtio-fs is configured.

Signed-off-by: Taylor Stark <tstark@microsoft.com>
---
 drivers/nvdimm/virtio_pmem.c | 21 +++++++++++++++++----
 drivers/nvdimm/virtio_pmem.h |  3 +++
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
index 726c7354d465..43c1d835a449 100644
--- a/drivers/nvdimm/virtio_pmem.c
+++ b/drivers/nvdimm/virtio_pmem.c
@@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
 	struct virtio_pmem *vpmem;
 	struct resource res;
 	int err = 0;
+	bool have_shm_region;
+	struct virtio_shm_region pmem_region;
 
 	if (!vdev->config->get) {
 		dev_err(&vdev->dev, "%s failure: config access disabled\n",
@@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
 		goto out_err;
 	}
 
-	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
-			start, &vpmem->start);
-	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
-			size, &vpmem->size);
+	/* Retrieve the pmem device's address and size. It may have been supplied
+	 * as a PCI BAR-relative shared memory region, or as a guest absolute address.
+	 */
+	have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
+						VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
+
+	if (have_shm_region) {
+		vpmem->start = pmem_region.addr;
+		vpmem->size = pmem_region.len;
+	} else {
+		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
+				start, &vpmem->start);
+		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
+				size, &vpmem->size);
+	}
 
 	res.start = vpmem->start;
 	res.end   = vpmem->start + vpmem->size - 1;
diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
index 0dddefe594c4..62bb564e81cb 100644
--- a/drivers/nvdimm/virtio_pmem.h
+++ b/drivers/nvdimm/virtio_pmem.h
@@ -50,6 +50,9 @@ struct virtio_pmem {
 	__u64 size;
 };
 
+/* For the id field in virtio_pci_shm_cap */
+#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0
+
 void virtio_pmem_host_ack(struct virtqueue *vq);
 int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
 #endif
-- 
2.32.0



* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-15 22:35 [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses Taylor Stark
@ 2021-07-19 20:36 ` Pankaj Gupta
  2021-07-20  6:35   ` Taylor Stark
  2021-07-19 21:17 ` Michael S. Tsirkin
  1 sibling, 1 reply; 10+ messages in thread
From: Pankaj Gupta @ 2021-07-19 20:36 UTC (permalink / raw)
  To: Taylor Stark
  Cc: Dan Williams, Vishal Verma, Dave Jiang, Ira Weiny, nvdimm, apais,
	tyhicks, jamorris, benhill, sunilmut, grahamwo, tstark,
	Michael S . Tsirkin, Pankaj Gupta

> Update virtio-pmem to allow for the pmem region to be specified in either
> guest absolute terms or as a PCI BAR-relative address. This is required
> to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices
> to operate on PCI memory ranges defined via BARs.
>
> Virtio-pmem will check for a shared memory window and use that if found,
> else it will fall back to using the guest-absolute addresses in
> virtio_pmem_config. This was chosen over defining a new feature bit,
> since it's similar to how virtio-fs is configured.
>
> Signed-off-by: Taylor Stark <tstark@microsoft.com>
> ---
>  drivers/nvdimm/virtio_pmem.c | 21 +++++++++++++++++----
>  drivers/nvdimm/virtio_pmem.h |  3 +++
>  2 files changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
> index 726c7354d465..43c1d835a449 100644
> --- a/drivers/nvdimm/virtio_pmem.c
> +++ b/drivers/nvdimm/virtio_pmem.c
> @@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
>         struct virtio_pmem *vpmem;
>         struct resource res;
>         int err = 0;
> +       bool have_shm_region;
> +       struct virtio_shm_region pmem_region;
>
>         if (!vdev->config->get) {
>                 dev_err(&vdev->dev, "%s failure: config access disabled\n",
> @@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
>                 goto out_err;
>         }
>
> -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> -                       start, &vpmem->start);
> -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> -                       size, &vpmem->size);
> +       /* Retrieve the pmem device's address and size. It may have been supplied
> +        * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> +        */
> +       have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> +                                               VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);

The current implementation of the virtio-pmem device in QEMU does not
expose it as a PCI BAR, so I can't test that path. Just curious: has the
device-side implementation also been tested for the asynchronous flush
case?

Thanks,
Pankaj

> +
> +       if (have_shm_region) {
> +               vpmem->start = pmem_region.addr;
> +               vpmem->size = pmem_region.len;
> +       } else {
> +               virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> +                               start, &vpmem->start);
> +               virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> +                               size, &vpmem->size);
> +       }
>
>         res.start = vpmem->start;
>         res.end   = vpmem->start + vpmem->size - 1;
> diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> index 0dddefe594c4..62bb564e81cb 100644
> --- a/drivers/nvdimm/virtio_pmem.h
> +++ b/drivers/nvdimm/virtio_pmem.h
> @@ -50,6 +50,9 @@ struct virtio_pmem {
>         __u64 size;
>  };
>
> +/* For the id field in virtio_pci_shm_cap */
> +#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0
> +
>  void virtio_pmem_host_ack(struct virtqueue *vq);
>  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
>  #endif
> --
> 2.32.0
>
>


* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-15 22:35 [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses Taylor Stark
  2021-07-19 20:36 ` Pankaj Gupta
@ 2021-07-19 21:17 ` Michael S. Tsirkin
  2021-07-20  6:41   ` Taylor Stark
  1 sibling, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2021-07-19 21:17 UTC (permalink / raw)
  To: Taylor Stark
  Cc: dan.j.williams, vishal.l.verma, dave.jiang, ira.weiny, nvdimm,
	apais, tyhicks, jamorris, benhill, sunilmut, grahamwo, tstark

On Thu, Jul 15, 2021 at 03:35:05PM -0700, Taylor Stark wrote:
> Update virtio-pmem to allow for the pmem region to be specified in either
> guest absolute terms or as a PCI BAR-relative address. This is required
> to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices
> to operate on PCI memory ranges defined via BARs.
> 
> Virtio-pmem will check for a shared memory window and use that if found,
> else it will fall back to using the guest-absolute addresses in
> virtio_pmem_config. This was chosen over defining a new feature bit,
> since it's similar to how virtio-fs is configured.
> 
> Signed-off-by: Taylor Stark <tstark@microsoft.com>

This needs to be added to the device spec too.
Can you send a spec patch please?
It's a subscriber-only list: virtio-comment@lists.oasis-open.org


> ---
>  drivers/nvdimm/virtio_pmem.c | 21 +++++++++++++++++----
>  drivers/nvdimm/virtio_pmem.h |  3 +++
>  2 files changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
> index 726c7354d465..43c1d835a449 100644
> --- a/drivers/nvdimm/virtio_pmem.c
> +++ b/drivers/nvdimm/virtio_pmem.c
> @@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
>  	struct virtio_pmem *vpmem;
>  	struct resource res;
>  	int err = 0;
> +	bool have_shm_region;
> +	struct virtio_shm_region pmem_region;
>  
>  	if (!vdev->config->get) {
>  		dev_err(&vdev->dev, "%s failure: config access disabled\n",
> @@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
>  		goto out_err;
>  	}
>  
> -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> -			start, &vpmem->start);
> -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> -			size, &vpmem->size);
> +	/* Retrieve the pmem device's address and size. It may have been supplied
> +	 * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> +	 */
> +	have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> +						VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
> +
> +	if (have_shm_region) {
> +		vpmem->start = pmem_region.addr;
> +		vpmem->size = pmem_region.len;
> +	} else {
> +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> +				start, &vpmem->start);
> +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> +				size, &vpmem->size);
> +	}
>  
>  	res.start = vpmem->start;
>  	res.end   = vpmem->start + vpmem->size - 1;
> diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> index 0dddefe594c4..62bb564e81cb 100644
> --- a/drivers/nvdimm/virtio_pmem.h
> +++ b/drivers/nvdimm/virtio_pmem.h
> @@ -50,6 +50,9 @@ struct virtio_pmem {
>  	__u64 size;
>  };
>  
> +/* For the id field in virtio_pci_shm_cap */
> +#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0
> +
>  void virtio_pmem_host_ack(struct virtqueue *vq);
>  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
>  #endif
> -- 
> 2.32.0
> 
> 



* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-19 20:36 ` Pankaj Gupta
@ 2021-07-20  6:35   ` Taylor Stark
  2021-07-20  6:51     ` Pankaj Gupta
  0 siblings, 1 reply; 10+ messages in thread
From: Taylor Stark @ 2021-07-20  6:35 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: Dan Williams, Vishal Verma, Dave Jiang, Ira Weiny, nvdimm, apais,
	tyhicks, jamorris, benhill, sunilmut, grahamwo, tstark,
	Michael S . Tsirkin, Pankaj Gupta

On Mon, Jul 19, 2021 at 10:36:34PM +0200, Pankaj Gupta wrote:
> > Update virtio-pmem to allow for the pmem region to be specified in either
> > guest absolute terms or as a PCI BAR-relative address. This is required
> > to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices
> > to operate on PCI memory ranges defined via BARs.
> >
> > Virtio-pmem will check for a shared memory window and use that if found,
> > else it will fall back to using the guest-absolute addresses in
> > virtio_pmem_config. This was chosen over defining a new feature bit,
> > since it's similar to how virtio-fs is configured.
> >
> > Signed-off-by: Taylor Stark <tstark@microsoft.com>
> > ---
> >  drivers/nvdimm/virtio_pmem.c | 21 +++++++++++++++++----
> >  drivers/nvdimm/virtio_pmem.h |  3 +++
> >  2 files changed, 20 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
> > index 726c7354d465..43c1d835a449 100644
> > --- a/drivers/nvdimm/virtio_pmem.c
> > +++ b/drivers/nvdimm/virtio_pmem.c
> > @@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> >         struct virtio_pmem *vpmem;
> >         struct resource res;
> >         int err = 0;
> > +       bool have_shm_region;
> > +       struct virtio_shm_region pmem_region;
> >
> >         if (!vdev->config->get) {
> >                 dev_err(&vdev->dev, "%s failure: config access disabled\n",
> > @@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> >                 goto out_err;
> >         }
> >
> > -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > -                       start, &vpmem->start);
> > -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > -                       size, &vpmem->size);
> > +       /* Retrieve the pmem device's address and size. It may have been supplied
> > +        * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> > +        */
> > +       have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> > +                                               VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
> 
> The current implementation of the virtio-pmem device in QEMU does not
> expose it as a PCI BAR, so I can't test that path. Just curious: has the
> device-side implementation also been tested for the asynchronous flush
> case?
> 
> Thanks,
> Pankaj

Yes, I tested the async flush case as well. We basically call
FlushFileBuffers on the backing file, which is Windows' equivalent of
fsync. I also briefly tested with QEMU to ensure it still works with
the patch.

Thanks,
Taylor
 
> > +
> > +       if (have_shm_region) {
> > +               vpmem->start = pmem_region.addr;
> > +               vpmem->size = pmem_region.len;
> > +       } else {
> > +               virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > +                               start, &vpmem->start);
> > +               virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > +                               size, &vpmem->size);
> > +       }
> >
> >         res.start = vpmem->start;
> >         res.end   = vpmem->start + vpmem->size - 1;
> > diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> > index 0dddefe594c4..62bb564e81cb 100644
> > --- a/drivers/nvdimm/virtio_pmem.h
> > +++ b/drivers/nvdimm/virtio_pmem.h
> > @@ -50,6 +50,9 @@ struct virtio_pmem {
> >         __u64 size;
> >  };
> >
> > +/* For the id field in virtio_pci_shm_cap */
> > +#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0
> > +
> >  void virtio_pmem_host_ack(struct virtqueue *vq);
> >  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
> >  #endif
> > --
> > 2.32.0
> >
> >


* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-19 21:17 ` Michael S. Tsirkin
@ 2021-07-20  6:41   ` Taylor Stark
  2021-07-20  9:46     ` Michael S. Tsirkin
  0 siblings, 1 reply; 10+ messages in thread
From: Taylor Stark @ 2021-07-20  6:41 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: dan.j.williams, vishal.l.verma, dave.jiang, ira.weiny, nvdimm,
	apais, tyhicks, jamorris, benhill, sunilmut, grahamwo, tstark

On Mon, Jul 19, 2021 at 05:17:00PM -0400, Michael S. Tsirkin wrote:
> On Thu, Jul 15, 2021 at 03:35:05PM -0700, Taylor Stark wrote:
> > Update virtio-pmem to allow for the pmem region to be specified in either
> > guest absolute terms or as a PCI BAR-relative address. This is required
> > to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices
> > to operate on PCI memory ranges defined via BARs.
> > 
> > Virtio-pmem will check for a shared memory window and use that if found,
> > else it will fall back to using the guest-absolute addresses in
> > virtio_pmem_config. This was chosen over defining a new feature bit,
> > since it's similar to how virtio-fs is configured.
> > 
> > Signed-off-by: Taylor Stark <tstark@microsoft.com>
> 
> This needs to be added to the device spec too.
> Can you send a spec patch please?
> It's a subscriber-only list virtio-comment@lists.oasis-open.org

Absolutely! I tried looking in the virtio-spec repo on GitHub but
I couldn't find a virtio-pmem spec to update. There is this
issue (https://github.com/oasis-tcs/virtio-spec/issues/78) which
seems to indicate the virtio-pmem spec hasn't been added yet.
Any suggestions for where to send a patch?

Thanks,
Taylor 
 
> > ---
> >  drivers/nvdimm/virtio_pmem.c | 21 +++++++++++++++++----
> >  drivers/nvdimm/virtio_pmem.h |  3 +++
> >  2 files changed, 20 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
> > index 726c7354d465..43c1d835a449 100644
> > --- a/drivers/nvdimm/virtio_pmem.c
> > +++ b/drivers/nvdimm/virtio_pmem.c
> > @@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> >  	struct virtio_pmem *vpmem;
> >  	struct resource res;
> >  	int err = 0;
> > +	bool have_shm_region;
> > +	struct virtio_shm_region pmem_region;
> >  
> >  	if (!vdev->config->get) {
> >  		dev_err(&vdev->dev, "%s failure: config access disabled\n",
> > @@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> >  		goto out_err;
> >  	}
> >  
> > -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > -			start, &vpmem->start);
> > -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > -			size, &vpmem->size);
> > +	/* Retrieve the pmem device's address and size. It may have been supplied
> > +	 * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> > +	 */
> > +	have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> > +						VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
> > +
> > +	if (have_shm_region) {
> > +		vpmem->start = pmem_region.addr;
> > +		vpmem->size = pmem_region.len;
> > +	} else {
> > +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > +				start, &vpmem->start);
> > +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > +				size, &vpmem->size);
> > +	}
> >  
> >  	res.start = vpmem->start;
> >  	res.end   = vpmem->start + vpmem->size - 1;
> > diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> > index 0dddefe594c4..62bb564e81cb 100644
> > --- a/drivers/nvdimm/virtio_pmem.h
> > +++ b/drivers/nvdimm/virtio_pmem.h
> > @@ -50,6 +50,9 @@ struct virtio_pmem {
> >  	__u64 size;
> >  };
> >  
> > +/* For the id field in virtio_pci_shm_cap */
> > +#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0
> > +
> >  void virtio_pmem_host_ack(struct virtqueue *vq);
> >  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
> >  #endif
> > -- 
> > 2.32.0


* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-20  6:35   ` Taylor Stark
@ 2021-07-20  6:51     ` Pankaj Gupta
  2021-07-21 22:08       ` Taylor Stark
  0 siblings, 1 reply; 10+ messages in thread
From: Pankaj Gupta @ 2021-07-20  6:51 UTC (permalink / raw)
  To: Taylor Stark
  Cc: Pankaj Gupta, Dan Williams, Vishal Verma, Dave Jiang, Ira Weiny,
	nvdimm, apais, tyhicks, jamorris, benhill, sunilmut, grahamwo,
	tstark, Michael S . Tsirkin

> > >
> > > -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > -                       start, &vpmem->start);
> > > -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > -                       size, &vpmem->size);
> > > +       /* Retrieve the pmem device's address and size. It may have been supplied
> > > +        * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> > > +        */
> > > +       have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> > > +                                               VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
> >
> > The current implementation of the virtio-pmem device in QEMU does not
> > expose it as a PCI BAR, so I can't test that path. Just curious: has the
> > device-side implementation also been tested for the asynchronous flush
> > case?
> >
> > Thanks,
> > Pankaj
>
> Yes, I tested the async flush case as well. We basically call
> FlushFileBuffers on the backing file, which is Windows' equivalent of
> fsync. I also briefly tested with QEMU to ensure it still works with
> the patch.

Thank you for the confirmation. This sounds really good.
I am also getting back to the pending items for virtio-pmem.

On a side note: do you have any plans for a Windows guest implementation
of virtio-pmem?

Thanks,
Pankaj


* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-20  6:41   ` Taylor Stark
@ 2021-07-20  9:46     ` Michael S. Tsirkin
  2021-07-21 21:18       ` Taylor Stark
  0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2021-07-20  9:46 UTC (permalink / raw)
  To: Taylor Stark
  Cc: dan.j.williams, vishal.l.verma, dave.jiang, ira.weiny, nvdimm,
	apais, tyhicks, jamorris, benhill, sunilmut, grahamwo, tstark

On Mon, Jul 19, 2021 at 11:41:03PM -0700, Taylor Stark wrote:
> On Mon, Jul 19, 2021 at 05:17:00PM -0400, Michael S. Tsirkin wrote:
> > On Thu, Jul 15, 2021 at 03:35:05PM -0700, Taylor Stark wrote:
> > > Update virtio-pmem to allow for the pmem region to be specified in either
> > > guest absolute terms or as a PCI BAR-relative address. This is required
> > > to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices
> > > to operate on PCI memory ranges defined via BARs.
> > > 
> > > Virtio-pmem will check for a shared memory window and use that if found,
> > > else it will fall back to using the guest-absolute addresses in
> > > virtio_pmem_config. This was chosen over defining a new feature bit,
> > > since it's similar to how virtio-fs is configured.
> > > 
> > > Signed-off-by: Taylor Stark <tstark@microsoft.com>
> > 
> > This needs to be added to the device spec too.
> > Can you send a spec patch please?
> > It's a subscriber-only list virtio-comment@lists.oasis-open.org
> 
> Absolutely! I tried looking in the virtio-spec repo on GitHub but
> I couldn't find a virtio-pmem spec to update. There is this
> issue (https://github.com/oasis-tcs/virtio-spec/issues/78) which
> seems to indicate the virtio-pmem spec hasn't been added yet.
> Any suggestions for where to send a patch?
> 
> Thanks,
> Taylor 

Just apply that patch (whichever you think is right), do yours on top,
and indicate this in the email. Send it to virtio-comment.

> > > ---
> > >  drivers/nvdimm/virtio_pmem.c | 21 +++++++++++++++++----
> > >  drivers/nvdimm/virtio_pmem.h |  3 +++
> > >  2 files changed, 20 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
> > > index 726c7354d465..43c1d835a449 100644
> > > --- a/drivers/nvdimm/virtio_pmem.c
> > > +++ b/drivers/nvdimm/virtio_pmem.c
> > > @@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> > >  	struct virtio_pmem *vpmem;
> > >  	struct resource res;
> > >  	int err = 0;
> > > +	bool have_shm_region;
> > > +	struct virtio_shm_region pmem_region;
> > >  
> > >  	if (!vdev->config->get) {
> > >  		dev_err(&vdev->dev, "%s failure: config access disabled\n",
> > > @@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> > >  		goto out_err;
> > >  	}
> > >  
> > > -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > -			start, &vpmem->start);
> > > -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > -			size, &vpmem->size);
> > > +	/* Retrieve the pmem device's address and size. It may have been supplied
> > > +	 * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> > > +	 */
> > > +	have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> > > +						VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
> > > +
> > > +	if (have_shm_region) {
> > > +		vpmem->start = pmem_region.addr;
> > > +		vpmem->size = pmem_region.len;
> > > +	} else {
> > > +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > +				start, &vpmem->start);
> > > +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > +				size, &vpmem->size);
> > > +	}
> > >  
> > >  	res.start = vpmem->start;
> > >  	res.end   = vpmem->start + vpmem->size - 1;
> > > diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> > > index 0dddefe594c4..62bb564e81cb 100644
> > > --- a/drivers/nvdimm/virtio_pmem.h
> > > +++ b/drivers/nvdimm/virtio_pmem.h
> > > @@ -50,6 +50,9 @@ struct virtio_pmem {
> > >  	__u64 size;
> > >  };
> > >  
> > > +/* For the id field in virtio_pci_shm_cap */
> > > +#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0
> > > +
> > >  void virtio_pmem_host_ack(struct virtqueue *vq);
> > >  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
> > >  #endif
> > > -- 
> > > 2.32.0



* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-20  9:46     ` Michael S. Tsirkin
@ 2021-07-21 21:18       ` Taylor Stark
  0 siblings, 0 replies; 10+ messages in thread
From: Taylor Stark @ 2021-07-21 21:18 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: dan.j.williams, vishal.l.verma, dave.jiang, ira.weiny, nvdimm,
	apais, tyhicks, jamorris, benhill, sunilmut, grahamwo, tstark

On Tue, Jul 20, 2021 at 05:46:30AM -0400, Michael S. Tsirkin wrote:
> On Mon, Jul 19, 2021 at 11:41:03PM -0700, Taylor Stark wrote:
> > On Mon, Jul 19, 2021 at 05:17:00PM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Jul 15, 2021 at 03:35:05PM -0700, Taylor Stark wrote:
> > > > Update virtio-pmem to allow for the pmem region to be specified in either
> > > > guest absolute terms or as a PCI BAR-relative address. This is required
> > > > to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices
> > > > to operate on PCI memory ranges defined via BARs.
> > > > 
> > > > Virtio-pmem will check for a shared memory window and use that if found,
> > > > else it will fall back to using the guest-absolute addresses in
> > > > virtio_pmem_config. This was chosen over defining a new feature bit,
> > > > since it's similar to how virtio-fs is configured.
> > > > 
> > > > Signed-off-by: Taylor Stark <tstark@microsoft.com>
> > > 
> > > This needs to be added to the device spec too.
> > > Can you send a spec patch please?
> > > It's a subscriber-only list virtio-comment@lists.oasis-open.org
> > 
> > Absolutely! I tried looking in the virtio-spec repo on GitHub but
> > I couldn't find a virtio-pmem spec to update. There is this
> > issue (https://github.com/oasis-tcs/virtio-spec/issues/78) which
> > seems to indicate the virtio-pmem spec hasn't been added yet.
> > Any suggestions for where to send a patch?
> > 
> > Thanks,
> > Taylor 
> 
> Just apply that patch (whichever you think is right), do yours on top,
> and indicate this in the email. Send it to virtio-comment.

Posted the patch here for those that want to follow along:
https://lists.oasis-open.org/archives/virtio-comment/202107/msg00111.html
 
> > > > ---
> > > >  drivers/nvdimm/virtio_pmem.c | 21 +++++++++++++++++----
> > > >  drivers/nvdimm/virtio_pmem.h |  3 +++
> > > >  2 files changed, 20 insertions(+), 4 deletions(-)
> > > > 
> > > > diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
> > > > index 726c7354d465..43c1d835a449 100644
> > > > --- a/drivers/nvdimm/virtio_pmem.c
> > > > +++ b/drivers/nvdimm/virtio_pmem.c
> > > > @@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> > > >  	struct virtio_pmem *vpmem;
> > > >  	struct resource res;
> > > >  	int err = 0;
> > > > +	bool have_shm_region;
> > > > +	struct virtio_shm_region pmem_region;
> > > >  
> > > >  	if (!vdev->config->get) {
> > > >  		dev_err(&vdev->dev, "%s failure: config access disabled\n",
> > > > @@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> > > >  		goto out_err;
> > > >  	}
> > > >  
> > > > -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > > -			start, &vpmem->start);
> > > > -	virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > > -			size, &vpmem->size);
> > > > +	/* Retrieve the pmem device's address and size. It may have been supplied
> > > > +	 * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> > > > +	 */
> > > > +	have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> > > > +						VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
> > > > +
> > > > +	if (have_shm_region) {
> > > > +		vpmem->start = pmem_region.addr;
> > > > +		vpmem->size = pmem_region.len;
> > > > +	} else {
> > > > +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > > +				start, &vpmem->start);
> > > > +		virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > > +				size, &vpmem->size);
> > > > +	}
> > > >  
> > > >  	res.start = vpmem->start;
> > > >  	res.end   = vpmem->start + vpmem->size - 1;
> > > > diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> > > > index 0dddefe594c4..62bb564e81cb 100644
> > > > --- a/drivers/nvdimm/virtio_pmem.h
> > > > +++ b/drivers/nvdimm/virtio_pmem.h
> > > > @@ -50,6 +50,9 @@ struct virtio_pmem {
> > > >  	__u64 size;
> > > >  };
> > > >  
> > > > +/* For the id field in virtio_pci_shm_cap */
> > > > +#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0
> > > > +
> > > >  void virtio_pmem_host_ack(struct virtqueue *vq);
> > > >  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
> > > >  #endif
> > > > -- 
> > > > 2.32.0


* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-20  6:51     ` Pankaj Gupta
@ 2021-07-21 22:08       ` Taylor Stark
  2021-07-22  4:40         ` Pankaj Gupta
  0 siblings, 1 reply; 10+ messages in thread
From: Taylor Stark @ 2021-07-21 22:08 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: Pankaj Gupta, Dan Williams, Vishal Verma, Dave Jiang, Ira Weiny,
	nvdimm, apais, tyhicks, jamorris, benhill, sunilmut, grahamwo,
	tstark, Michael S . Tsirkin

On Tue, Jul 20, 2021 at 08:51:04AM +0200, Pankaj Gupta wrote:
> > > >
> > > > -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > > -                       start, &vpmem->start);
> > > > -       virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
> > > > -                       size, &vpmem->size);
> > > > +       /* Retrieve the pmem device's address and size. It may have been supplied
> > > > +        * as a PCI BAR-relative shared memory region, or as a guest absolute address.
> > > > +        */
> > > > +       have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region,
> > > > +                                               VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION);
> > >
> > > The current implementation of the virtio-pmem device in QEMU does not
> > > expose it as a PCI BAR, so I can't test that path. Just curious: has the
> > > device-side implementation also been tested for the asynchronous flush
> > > case?
> > >
> > > Thanks,
> > > Pankaj
> >
> > Yes, I tested the async flush case as well. We basically call
> > FlushFileBuffers on the backing file, which is Windows' equivalent of
> > fsync. I also briefly tested with QEMU to ensure it still works with
> > the patch.
> 
> Thank you for the confirmation. This sounds really good.
> I am also getting back to pending items for virtio-pmem.
> 
> On a side question: Do you guys have any or plan for Windows guest
> implementation
> for virtio-pmem?

Unfortunately, my team doesn't currently have any plans to add a Windows
virtio-pmem implementation. My team is primarily focused on virtualization
in client environments, which is a little different from server environments.
For our Windows-based scenarios, dynamically sized disks are important. It's
tricky to get that to work with pmem+DAX given that Windows isn't state-separated.



* Re: [PATCH v2 1/2] virtio-pmem: Support PCI BAR-relative addresses
  2021-07-21 22:08       ` Taylor Stark
@ 2021-07-22  4:40         ` Pankaj Gupta
  0 siblings, 0 replies; 10+ messages in thread
From: Pankaj Gupta @ 2021-07-22  4:40 UTC (permalink / raw)
  To: Taylor Stark
  Cc: Pankaj Gupta, Dan Williams, Vishal Verma, Dave Jiang, Ira Weiny,
	nvdimm, apais, tyhicks, jamorris, benhill, sunilmut, grahamwo,
	tstark, Michael S . Tsirkin

> > On a side note: do you have any plans for a Windows guest implementation
> > of virtio-pmem?
>
> Unfortunately, my team doesn't currently have any plans to add a Windows
> virtio-pmem implementation. My team is primarily focused on virtualization
> in client environments, which is a little different from server environments.
> For our Windows-based scenarios, dynamically sized disks are important. It's
> tricky to get that to work with pmem+DAX given that Windows isn't state-separated.

I see. Thank you for the details.

Best regards,
Pankaj


