From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: Stefan Hajnoczi <stefanha@redhat.com>,
Shivaprasad G Bhat <sbhat@linux.ibm.com>
Cc: peter.maydell@linaro.org, mst@redhat.com, qemu-devel@nongnu.org,
linux-nvdimm@lists.01.org, armbru@redhat.com,
bharata@linux.vnet.ibm.com, haozhong.zhang@intel.com,
ehabkost@redhat.com, richard.henderson@linaro.org,
groug@kaod.org, kvm-ppc@vger.kernel.org, qemu-arm@nongnu.org,
imammedo@redhat.com, kwangwoo.lee@sk.com,
david@gibson.dropbear.id.au, xiaoguangrong.eric@gmail.com,
shameerali.kolothum.thodi@huawei.com, shivaprasadbhat@gmail.com,
qemu-ppc@nongnu.org, pbonzini@redhat.com
Subject: Re: [PATCH v4 0/3] nvdimm: Enable sync-dax property for nvdimm
Date: Thu, 29 Apr 2021 22:02:23 +0530 [thread overview]
Message-ID: <433e352d-5341-520c-5c57-79650277a719@linux.ibm.com> (raw)
In-Reply-To: <YIrW4bwbR1R0CWm/@stefanha-x1.localdomain>
On 4/29/21 9:25 PM, Stefan Hajnoczi wrote:
> On Wed, Apr 28, 2021 at 11:48:21PM -0400, Shivaprasad G Bhat wrote:
>> The nvdimm devices are expected to ensure write persistence in
>> power-failure scenarios.
>>
>> libpmem uses architecture-specific instructions (such as dcbf on POWER)
>> to flush cached data to the backing nvdimm device during normal writes,
>> followed by explicit flushes if the backing device is not synchronous-DAX
>> capable.
>>
>> QEMU virtual nvdimm devices are memory mapped. In the case of
>> file-backed v-nvdimms, the dcbf in the guest and the subsequent flush
>> do not translate to an actual flush of the backing file on the host.
>> On x86_64, virtio-pmem addresses this by translating explicit guest
>> flushes into fsync at the QEMU level.
>>
>> On SPAPR, the issue is addressed by adding a new hcall through which
>> the guest ndctl driver requests an explicit flush when the backing
>> nvdimm cannot ensure write persistence with dcbf alone. The approach
>> here is to convey, via a device tree property, when the hcall flush is
>> required. The guest makes the hcall when the property is found, instead
>> of relying on dcbf.
>
> Sorry, I'm not very familiar with SPAPR. Why add a hypercall when the
> virtio-nvdimm device already exists?
>
On virtualized ppc64 platforms, guests use the papr_scm.ko kernel driver
for persistent memory support. This was done so that one kernel driver
can support persistent memory across multiple hypervisors. To avoid
supporting multiple drivers in the guest, the -device nvdimm QEMU
command-line option results in QEMU using the PAPR SCM backend. What
this patch series does is make sure we expose the correct synchronous
fault support when we back such an nvdimm device with a file.
The existing PAPR SCM backend enables persistent memory support with the
help of multiple hypercalls:
#define H_SCM_READ_METADATA 0x3E4
#define H_SCM_WRITE_METADATA 0x3E8
#define H_SCM_BIND_MEM 0x3EC
#define H_SCM_UNBIND_MEM 0x3F0
#define H_SCM_UNBIND_ALL 0x3FC
Most of them are already implemented in QEMU. This patch series
implements the H_SCM_FLUSH hypercall.
-aneesh
Thread overview: 19+ messages
2021-04-29 3:48 [PATCH v4 0/3] nvdimm: Enable sync-dax property for nvdimm Shivaprasad G Bhat
2021-04-29 3:48 ` [PATCH v4 1/3] spapr: nvdimm: Forward declare and move the definitions Shivaprasad G Bhat
2021-05-03 18:23 ` Eric Blake
2021-05-04 1:21 ` David Gibson
2021-04-29 3:48 ` [PATCH v4 2/3] spapr: nvdimm: Implement H_SCM_FLUSH hcall Shivaprasad G Bhat
2021-04-29 3:49 ` [PATCH v4 3/3] nvdimm: Enable sync-dax device property for nvdimm Shivaprasad G Bhat
2021-05-03 18:27 ` Eric Blake
2021-04-29 15:55 ` [PATCH v4 0/3] nvdimm: Enable sync-dax " Stefan Hajnoczi
2021-04-29 16:32 ` Aneesh Kumar K.V [this message]
2021-04-30 4:27 ` David Gibson
2021-04-30 15:08 ` Stefan Hajnoczi
2021-04-30 19:14 ` Dan Williams
2021-05-01 13:55 ` Aneesh Kumar K.V
2021-05-03 14:05 ` Shivaprasad G Bhat
2021-05-03 19:41 ` Dan Williams
2021-05-04 4:59 ` Aneesh Kumar K.V
2021-05-04 5:43 ` Pankaj Gupta
2021-05-04 9:02 ` Aneesh Kumar K.V
2021-05-05 0:12 ` Dan Williams