From: Shivaprasad G Bhat <sbhat@linux.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: sbhat@linux.vnet.ibm.com, groug@kaod.org, qemu-ppc@nongnu.org,
	ehabkost@redhat.com, marcel.apfelbaum@gmail.com, mst@redhat.com,
	imammedo@redhat.com, xiaoguangrong.eric@gmail.com,
	qemu-devel@nongnu.org, linux-nvdimm@lists.01.org,
	kvm-ppc@vger.kernel.org, shivaprasadbhat@gmail.com,
	bharata@linux.vnet.ibm.com
Subject: Re: [PATCH v3 2/3] spapr: nvdimm: Implement H_SCM_FLUSH hcall
Date: Fri, 26 Mar 2021 19:15:32 +0530	[thread overview]
Message-ID: <8c642adb-7c07-41e1-07d0-f23bb6c2f865@linux.ibm.com> (raw)
In-Reply-To: <YFvsmKiXtb+h9HBO@yekko.fritz.box>

On 3/25/21 7:21 AM, David Gibson wrote:
> On Wed, Mar 24, 2021 at 09:34:06AM +0530, Aneesh Kumar K.V wrote:
>> On 3/24/21 8:37 AM, David Gibson wrote:
>>> On Tue, Mar 23, 2021 at 09:47:38AM -0400, Shivaprasad G Bhat wrote:
>>>> The patch adds support for the SCM flush hcall for the nvdimm devices.
...
>>>> collects all the hcall states from 'completed' list. The necessary
>>>> nvdimm flush specific vmstate structures are added to the spapr
>>>> machine vmstate.
>>>>
>>>> Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
>>> An overall question: surely the same issue must arise on x86 with
>>> file-backed NVDIMMs.  How do they handle this case?
>> On x86 we have different ways an nvdimm can be discovered: ACPI NFIT, the
>> e820 map and virtio_pmem. Among these, virtio_pmem always operates with
>> synchronous dax disabled, and neither ACPI nor e820 has the ability to
>> differentiate support for synchronous dax.
> Ok.  And for the virtio-pmem case, how are the extra flushes actually
> done on x86?


The virtio-pmem device has a virtqueue with virtio_pmem_flush() as the handler,
which gets called for all flush requests from the guest. virtio_pmem_flush()
offloads the flush to a thread pool, with a worker doing the fsync() and the
completion callback notifying the guest with the response.
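
To make that flow concrete, below is a minimal, self-contained sketch of the
offload pattern using plain POSIX threads and fsync(). It is only illustrative
and does not use QEMU's actual thread-pool or virtqueue APIs; the struct and
function names are made up for the example.

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stand-in for a flush request taken off the virtqueue. */
    struct flush_req {
        int backing_fd;   /* fd of the file backing the pmem region */
        int host_err;     /* fsync() result, to be reported to the guest */
    };

    /* Worker: runs on a pool thread and does the actual fsync(). */
    static void *flush_worker(void *opaque)
    {
        struct flush_req *req = opaque;

        req->host_err = fsync(req->backing_fd) ? -1 : 0;
        return req;
    }

    /* Completion: in QEMU this is where the response element would be
     * pushed back into the virtqueue and the guest notified. */
    static void flush_complete(struct flush_req *req)
    {
        printf("flush done, status=%d\n", req->host_err);
    }

    int main(void)
    {
        struct flush_req req;
        pthread_t worker;

        req.backing_fd = open("backing.img", O_CREAT | O_RDWR, 0600);
        req.host_err = 0;

        /* Offload the flush so the caller is not blocked on fsync(). */
        pthread_create(&worker, NULL, flush_worker, &req);
        pthread_join(&worker, NULL);
        flush_complete(&req);

        close(req.backing_fd);
        return 0;
    }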


>> With that I would expect users to use virtio_pmem when using file-backed
>> NVDIMMs.
> So... should we prevent advertising an NVDIMM through ACPI or e820 if
> it doesn't have sync-dax enabled?


Is it possible to have different defaults for sync-dax based on
architecture?

The behaviour on x86 is sync-dax=on for nvdimms, so it would be correct to
have the default as "on" for x86. For pseries, it should be "off" for new
machines.

Looking at the code, I didn't find many ways to achieve this cleanly. Can you
suggest what can be done?
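
The closest thing I could come up with is forcing the default from the machine
class options via a global/compat property. This is just an untested sketch; it
assumes the generic compat_props mechanism is acceptable for this and that the
property stays named "sync-dax" on the "nvdimm" device:

    /* Hypothetical snippet for the pseries machine class init in
     * hw/ppc/spapr.c: make sync-dax default to "off" for new pseries
     * machine types while other architectures keep their own default. */
    static void spapr_nvdimm_sync_dax_default(MachineClass *mc)
    {
        static GlobalProperty props[] = {
            { "nvdimm", "sync-dax", "off" },
        };

        compat_props_add(mc->compat_props, props, G_N_ELEMENTS(props));
    }

Would that be a reasonable direction, or is there a better hook for this?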
