From: Dan Williams <dan.j.williams@intel.com>
To: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>,
	Greg Kurz <groug@kaod.org>,
	qemu-ppc@nongnu.org, Eduardo Habkost <ehabkost@redhat.com>,
	marcel.apfelbaum@gmail.com, "Michael S. Tsirkin" <mst@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Eric Blake <eblake@redhat.com>,
	qemu-arm@nongnu.org, richard.henderson@linaro.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Haozhong Zhang <haozhong.zhang@intel.com>,
	shameerali.kolothum.thodi@huawei.com, kwangwoo.lee@sk.com,
	Markus Armbruster <armbru@redhat.com>,
	Qemu Developers <qemu-devel@nongnu.org>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	kvm-ppc@vger.kernel.org, shivaprasadbhat@gmail.com,
	bharata@linux.vnet.ibm.com
Subject: Re: [PATCH v4 0/3] nvdimm: Enable sync-dax property for nvdimm
Date: Mon, 3 May 2021 12:41:08 -0700	[thread overview]
Message-ID: <CAPcyv4hAOC89JOXr-ZCps=n8gEKD5W0jmGU1Enfo8ECVMf3veQ@mail.gmail.com> (raw)
In-Reply-To: <bca3512d-5437-e8e6-68ae-0c9b887115f9@linux.ibm.com>

On Mon, May 3, 2021 at 7:06 AM Shivaprasad G Bhat <sbhat@linux.ibm.com> wrote:
>
>
> On 5/1/21 12:44 AM, Dan Williams wrote:
> > Some corrections to terminology confusion below...
> >
> >
> > On Wed, Apr 28, 2021 at 8:49 PM Shivaprasad G Bhat <sbhat@linux.ibm.com> wrote:
> >> The nvdimm devices are expected to ensure write persistence during power
> >> failure kind of scenarios.
> > No, QEMU is not expected to make that guarantee. QEMU is free to lie
> > to the guest about the persistence guarantees of the guest PMEM
> > ranges. It's more accurate to say that QEMU nvdimm devices can emulate
> > persistent memory and optionally pass through host power-fail
> > persistence guarantees to the guest. The power-fail persistence domain
> > can be one of "cpu_cache", or "memory_controller" if the persistent
> > memory region is "synchronous". If the persistent range is not
> > synchronous, it really isn't "persistent memory"; it's memory mapped
> > storage that needs I/O commands to flush.
>
> Since this is a virtual nvdimm (v-nvdimm) backed by a file, the data is
> completely in the host pagecache, and we need a way to ensure that the
> host pagecache is flushed to the backend. This is analogous to the WPQ
> flush being offloaded to the hypervisor.

No, it isn't analogous. WPQ flush is an optional mechanism to force
data to a higher durability domain. The flush in this interface is
mandatory. It's a different class of device.
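
To be concrete about what that mandatory flush amounts to on the host
side, here is a rough sketch (the handler name and plumbing are invented
for illustration, this is not actual QEMU code): the guest's flush
request may only succeed once the host has fsync'd the backing file.

  /* Illustrative only: a hypothetical host-side handler for a guest
   * flush request.  Guest data may still be dirty in the host page
   * cache, so durability requires an fsync()/fdatasync() on the
   * backing file -- a mandatory step, not an optional WPQ-style
   * optimization. */
  #include <unistd.h>
  #include <errno.h>

  static int handle_guest_flush(int backing_fd)
  {
      if (fdatasync(backing_fd) < 0)
          return -errno;   /* failure must be reported to the guest */
      return 0;            /* only now may the hcall return success */
  }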

The proposal to use "sync-dax=unsafe" for non-PPC architectures is a
fundamental misrepresentation of how this is supposed to work. Rather
than make "sync-dax" a first-class citizen of the device-description
interface, I'm proposing that you make this a separate device-type.
This also solves the problem that "sync-dax" with an implicit
architecture backend assumption is imprecise; a new "non-nvdimm"
device type would make explicit what the host is advertising to the
guest.
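
To sketch what I mean by a separate device-type (the type name below is
invented purely for illustration, this is not a QEMU patch), a distinct
QOM type makes the contract explicit instead of overloading "nvdimm":

  /* Hypothetical sketch only -- the "hcall-flush-pmem" name is made up
   * to illustrate "separate device-type"; real QEMU code would differ. */
  #include "qemu/osdep.h"
  #include "hw/mem/pc-dimm.h"

  #define TYPE_HCALL_FLUSH_PMEM "hcall-flush-pmem"   /* invented name */

  static void hcall_flush_pmem_class_init(ObjectClass *oc, void *data)
  {
      DeviceClass *dc = DEVICE_CLASS(oc);

      /* Advertise exactly what this is: file-backed guest memory whose
       * durability depends on an explicit flush request to the host. */
      dc->desc = "file-backed memory requiring explicit host flush";
  }

  static const TypeInfo hcall_flush_pmem_info = {
      .name          = TYPE_HCALL_FLUSH_PMEM,
      .parent        = TYPE_PC_DIMM,
      .instance_size = sizeof(PCDIMMDevice),
      .class_init    = hcall_flush_pmem_class_init,
  };

  static void hcall_flush_pmem_register_types(void)
  {
      type_register_static(&hcall_flush_pmem_info);
  }

  type_init(hcall_flush_pmem_register_types)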

>
>
> Ref: https://github.com/dgibson/qemu/blob/main/docs/nvdimm.txt
>
>
>
> >
> >> The libpmem has architecture specific instructions like dcbf on POWER
> > Which "libpmem" is this? PMDK is a reference library not a PMEM
> > interface... maybe I'm missing what libpmem has to do with QEMU?
>
>
> I was referring to the semantics of flushing pmem cache lines, as in
> PMDK/libpmem.
>
>
> >
> >> to flush the cache data to backend nvdimm device during normal writes
> >> followed by explicit flushes if the backend devices are not synchronous
> >> DAX capable.
> >>
> >> Qemu - virtual nvdimm devices are memory mapped. The dcbf in the guest
> >> and the subsequent flush doesn't traslate to actual flush to the backend
> > s/traslate/translate/
> >
> >> file on the host in case of file backed v-nvdimms. This is addressed by
> >> virtio-pmem in case of x86_64 by making explicit flushes translating to
> >> fsync at qemu.
> > Note that virtio-pmem was a proposal for a specific optimization of
> > allowing guests to share page cache. The virtio-pmem approach is not
> > to be confused with actual persistent memory.
> >
> >> On SPAPR, the issue is addressed by adding a new hcall to
> >> request for an explicit flush from the guest ndctl driver when the backend
> > What is an "ndctl" driver? ndctl is userspace tooling, do you mean the
> > guest pmem driver?
>
>
> Oops, wrong terminology. I was referring to the guest libnvdimm and
> papr_scm kernel modules.
>
>
> >
> >> nvdimm cannot ensure write persistence with dcbf alone. So, the approach
> >> here is to convey when the hcall flush is required in a device tree
> >> property. The guest makes the hcall when the property is found, instead
> >> of relying on dcbf.
> >>
> >> A new device property sync-dax is added to the nvdimm device. When the
> >> sync-dax is 'writeback'(default for PPC), device property
> >> "hcall-flush-required" is set, and the guest makes hcall H_SCM_FLUSH
> >> requesting for an explicit flush.
> > I'm not sure "sync-dax" is a suitable name for the property of the
> > guest persistent memory.
>
>
> The sync-dax property translates to the ND_REGION_ASYNC flag being set/unset

Yes, I am aware, but that property alone is not sufficient to identify
the flush mechanism.
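
To illustrate, here is roughly how the guest-side wiring looks in a
papr_scm-style driver (simplified and approximate, not verbatim kernel
code): ND_REGION_ASYNC only marks the region as "not synchronous"; the
actual flush mechanism is whatever callback the platform driver
registers for the region.

  /* Approximate sketch: the flag says "not sync", the callback says
   * "how to flush"; the flag alone does not identify the mechanism. */
  #include <linux/libnvdimm.h>

  static int papr_scm_flush(struct nd_region *nd_region, struct bio *bio)
  {
      /* The mechanism lives here, e.g. an H_SCM_FLUSH hcall that the
       * hypervisor turns into an fsync() of the backing file. */
      return 0;   /* hcall plumbing elided */
  }

  static void papr_scm_region_setup(struct nd_region_desc *ndr_desc,
                                    bool hcall_flush_required)
  {
      if (hcall_flush_required) {
          set_bit(ND_REGION_ASYNC, &ndr_desc->flags);
          ndr_desc->flush = papr_scm_flush;
      }
  }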

>
> for the pmem region, and also to whether the nvdimm_flush callback is
> provided in papr_scm or not. As everything boils down to the synchronous
> nature of the device, I chose sync-dax for the name.
>
>
> >   There is no requirement that the
> > memory-backend file for a guest be a dax-capable file. It's also
> > implementation specific what hypercall needs to be invoked for a given
> > occurrence of "sync-dax". What does that map to on non-PPC platforms
> > for example?
>
>
> The backend file can be dax-capable; that is hinted using "sync-dax=direct".

All memory-mapped files are "dax-capable". "DAX" is an access
mechanism, not a data-integrity contract.
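
A quick illustration of that distinction: both mappings below give the
application load/store access to the file, but only the MAP_SYNC
mapping (which the kernel refuses with EOPNOTSUPP on files that do not
support synchronous DAX) carries the contract that CPU cache flushes
alone make the data durable; the ordinary mapping still needs
msync()/fsync().

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <stddef.h>

  /* Persistence contract: succeeds only on synchronous-DAX files. */
  void *map_for_pmem_semantics(int fd, size_t len)
  {
      return mmap(NULL, len, PROT_READ | PROT_WRITE,
                  MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
  }

  /* Access mechanism only: works on any mmap-able file, but durability
   * needs msync()/fsync() afterwards. */
  void *map_page_cache(int fd, size_t len)
  {
      return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  }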

> When the backend is not dax-capable, "sync-dax=writeback" is to be used,

No, the QEMU interface for this should be a separate device-type, not a
device property.

> so that the guest makes the hcall. On all non-PPC archs, with
> "sync-dax=writeback", QEMU errors out stating the lack of support.

There is no "lack of support" to be worried about on other archs if
the interface is explicit about the atypical flush arrangement.

>
>
> >   It seems to me that an "nvdimm" device presents the
> > synchronous usage model and a whole other device type implements an
> > async-hypercall setup that the guest happens to service with its
> > nvdimm stack, but it's not an "nvdimm" anymore at that point.
>
>
> In case the file backing the v-nvdimm is not dax-capable, we need flush
> semantics on the guest to be mapped to pagecache flush on the host side.
>
>
> >
> >> sync-dax is "unsafe" on all other platforms(x86, ARM) and old pseries machines
> >> prior to 5.2 on PPC. sync-dax="writeback" on ARM and x86_64 is prevented
> >> now as the flush semantics are unimplemented.
> > "sync-dax" has no meaning on its own, I think this needs an explicit
> > mechanism to convey both the "not-sync" property *and* the callback
> > method, it shouldn't be inferred by arch type.
>
>
> Yes. On all platforms, "sync-dax=unsafe" means that on host power
> failure the host pagecache is lost, and subsequently the data written by
> the guest will also be gone. This is the default for non-PPC.

The default to date has been for the guest to trust that an nvdimm is
an nvdimm with no explicit flushing required. It's too late now to
introduce an "unsafe" default.

>
>
> On PPC, the default is "sync-dax=writeback" - so ND_REGION_ASYNC is set
> for the region and the guest makes hcalls to issue fsync on the host.
>
> Are you suggesting I keep "unsafe" as the default for all architectures,
> including PPC, and let a user set it to "writeback" if desired?

No, I am suggesting that "sync-dax" is insufficient to convey this
property. This behavior warrants its own device type, not an ambiguous
property of the memory-backend-file with implicit architecture
assumptions attached.