From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>,
Shivaprasad G Bhat <sbhat@linux.ibm.com>
Cc: sbhat@linux.vnet.ibm.com, groug@kaod.org, qemu-ppc@nongnu.org,
ehabkost@redhat.com, marcel.apfelbaum@gmail.com, mst@redhat.com,
imammedo@redhat.com, xiaoguangrong.eric@gmail.com,
qemu-devel@nongnu.org, linux-nvdimm@lists.01.org,
kvm-ppc@vger.kernel.org, shivaprasadbhat@gmail.com,
bharata@linux.vnet.ibm.com
Subject: Re: [PATCH v3 2/3] spapr: nvdimm: Implement H_SCM_FLUSH hcall
Date: Wed, 24 Mar 2021 09:34:06 +0530 [thread overview]
Message-ID: <19b5aa0b-df85-256d-d4c4-eacd0ea8312e@linux.ibm.com> (raw)
In-Reply-To: <YFqs8M1dHAFhdCL6@yekko.fritz.box>
On 3/24/21 8:37 AM, David Gibson wrote:
> On Tue, Mar 23, 2021 at 09:47:38AM -0400, Shivaprasad G Bhat wrote:
>> This patch adds support for the SCM flush hcall for nvdimm devices,
>> to be made available to the guest through the next patch.
>>
>> The hcall semantics require the flush to return H_BUSY along with a
>> continue_token when the operation is expected to take longer. The
>> hcall is then called again with the continue_token to get the status.
>> So, all fresh requests are put into a 'pending' list and a flush
>> worker is submitted to the thread pool. The thread pool completion
>> callback moves each request to a 'completed' list, whose entries are
>> cleaned up after the status is reported to the guest in subsequent
>> hcalls.
>>
>> These semantics make it necessary to preserve the continue_tokens
>> and their return status even across migrations. So, the pre_save
>> handler for the device waits for the flush worker to complete and
>> collects all the hcall states from the 'completed' list. The
>> necessary nvdimm-flush-specific vmstate structures are added to the
>> spapr machine vmstate.
>>
>> Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
>
> An overall question: surely the same issue must arise on x86 with
> file-backed NVDIMMs. How do they handle this case?
On x86 there are different ways an nvdimm can be discovered: ACPI NFIT,
the e820 map, and virtio_pmem. Among these, virtio_pmem always operates
with synchronous DAX disabled, while neither ACPI nor e820 has the
ability to differentiate support for synchronous DAX.

With that, I would expect users to use virtio_pmem when using
file-backed NVDIMMs.
-aneesh
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org
next prev parent reply other threads:[~2021-03-24 4:04 UTC|newest]
Thread overview: 38+ messages
2021-03-23 13:47 [PATCH v3 0/3] spapr: nvdimm: Enable sync-dax property for nvdimm Shivaprasad G Bhat
2021-03-23 13:47 ` [PATCH v3 1/3] spapr: nvdimm: Forward declare and move the definitions Shivaprasad G Bhat
2021-03-24  2:30   ` David Gibson
2021-03-23 13:47 ` [PATCH v3 2/3] spapr: nvdimm: Implement H_SCM_FLUSH hcall Shivaprasad G Bhat
2021-03-24  3:07   ` David Gibson
2021-03-24  4:04     ` Aneesh Kumar K.V [this message]
2021-03-25  1:51       ` David Gibson
2021-03-26 13:45         ` Shivaprasad G Bhat
2021-03-29  9:23           ` Shivaprasad G Bhat
2021-03-30 23:57             ` David Gibson
2021-03-23 13:47 ` [PATCH v3 3/3] spapr: nvdimm: Enable sync-dax device property for nvdimm Shivaprasad G Bhat
2021-03-24  3:09   ` David Gibson
2021-03-24  4:09     ` Aneesh Kumar K.V