From: Shivaprasad G Bhat <sbhat@linux.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: groug@kaod.org, qemu-ppc@nongnu.org, qemu-devel@nongnu.org,
	aneesh.kumar@linux.ibm.com, nvdimm@lists.linux.dev,
Subject: Re: [PATCH REBASED v5 2/2] spapr: nvdimm: Introduce spapr-nvdimm device
Date: Wed, 2 Feb 2022 03:11:22 +0530	[thread overview]
Message-ID: <342e1a28-b06d-5289-d431-de97e10f5cce@linux.ibm.com> (raw)
In-Reply-To: <YUl8e5NLb1Jnn5W6@yekko>

On 9/21/21 12:02, David Gibson wrote:
> On Wed, Jul 07, 2021 at 09:57:31PM -0500, Shivaprasad G Bhat wrote:
>> If the device backend for the nvdimm is not persistent memory, there is
>> a need for explicit IO flushes on the backend to ensure persistence.
>> On SPAPR, the issue is addressed by adding a new hcall to request an
>> explicit flush from the guest when the backend is not pmem. So, the
>> approach here is to convey when the hcall flush is required via a device
>> tree property. Once the guest knows the device backend is not pmem,
>> it makes the hcall whenever a flush is required.
>> To set the device tree property, the patch introduces a new PAPR-specific
>> device type inheriting the nvdimm device. When the backend doesn't have
>> pmem="yes", the device tree property "ibm,hcall-flush-required" is set,
>> and the guest makes the H_SCM_FLUSH hcall to request an explicit flush.
>> Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
>> @@ -91,6 +93,14 @@ bool spapr_nvdimm_validate(HotplugHandler *hotplug_dev, NVDIMMDevice *nvdimm,
>>           return false;
>>       }
>> +    if (object_dynamic_cast(OBJECT(nvdimm), TYPE_SPAPR_NVDIMM) &&
>> +        (memory_region_get_fd(mr) < 0)) {
>> +        error_setg(errp, "spapr-nvdimm device requires the "
>> +                   "memdev %s to be of memory-backend-file type",
>> +                   object_get_canonical_path_component(OBJECT(dimm->hostmem)));
> It's not obvious to me why the spapr nvdimm device has an additional
> restriction here over the regular nvdimm device.

For memory-backend-ram, the fd is set to -1, so the fdatasync() would fail
later. This restriction prevents that hcall failure up front. Maybe it is
intentionally allowed with regular nvdimms for testing purposes. Let me
know if you want me to allow it with a dummy success return from the hcall.

>> +        return false;
>> +    }
>> +
>>       return true;
>>   }
>> @@ -162,6 +172,21 @@ static int spapr_dt_nvdimm(SpaprMachineState *spapr, void *fdt,
>>                                "operating-system")));
>>       _FDT(fdt_setprop(fdt, child_offset, "ibm,cache-flush-required", NULL, 0));
>> +    if (object_dynamic_cast(OBJECT(nvdimm), TYPE_SPAPR_NVDIMM)) {
>> +        bool is_pmem = false;
>> +        PCDIMMDevice *dimm = PC_DIMM(nvdimm);
>> +        HostMemoryBackend *hostmem = dimm->hostmem;
>> +
>> +        is_pmem = object_property_get_bool(OBJECT(hostmem), "pmem",
>> +                                           &error_abort);
> Presenting to the guest a property of the backend worries me
> slightly.  How the backends are synchronized between the source and
> destination is out of scope for qemu: is there any possibility that we
> could migrate from a host where the backend is pmem to one where it is
> not (or the reverse).
> I think at the least we want a property on the spapr-nvdimm object
> which will override what's presented to the guest (which, yes, might
> mean lying to the guest).  I think that could be important for
> testing, if nothing else.

Mixed configurations can be attempted on a nested setup.

On a side note, using pmem=on with a non-pmem backend is being
deprecated, as that is an unsafe pretension, effective commit cdcf766d0b0.

I see your point; adding a "pmem-override" property (suggest a better
name if you have one) to the spapr-nvdimm device can be helpful. I am
adding it to the spapr-nvdimm device: with pmem-override=on, the device
tree property is added, allowing the flush hcall even when the backend
has pmem=on. This works for migration compatibility in such a setup.

>> +#endif
>> +        if (!is_pmem) {
>> +            _FDT(fdt_setprop(fdt, child_offset, "ibm,hcall-flush-required",
>> +                             NULL, 0));
>> +        }
>> +    }
>> +
>>       return child_offset;
>>   }
>> @@ -585,7 +610,16 @@ static target_ulong h_scm_flush(PowerPCCPU *cpu, SpaprMachineState *spapr,
>>       }
>>       dimm = PC_DIMM(drc->dev);
>> +    if (!object_dynamic_cast(OBJECT(dimm), TYPE_SPAPR_NVDIMM)) {
>> +        return H_PARAMETER;
>> +    }
> Hmm.  If you're going to make flushes specific to spapr nvdimms, you
> could put the queue of pending flushes into the spapr-nvdimm object,
> rather than having a global list in the machine.

Yes. I have changed the patches to move all the flush-specific data
structures into the spapr-nvdimm object.

>> +
>>       backend = MEMORY_BACKEND(dimm->hostmem);
>> +    if (object_property_get_bool(OBJECT(backend), "pmem", &error_abort)) {
>> +        return H_UNSUPPORTED;
> Could you make this not be UNSUPPORTED, but instead fake the flush for
> the pmem device?  Either as a no-op, or simulating the guest invoking
> the right cpu cache flushes?  That seems like it would be more useful:
> that way users who don't care too much about performance could just
> always do a flush hcall and not have to have another path for the
> "real" pmem case.

It would actually be wrong for the kernel to attempt that. The device
tree property is checked before setting up the flush callback in the
kernel, so making the hcall without the device tree property being set
would be mistaken/wrong usage.

With pmem-override=on, it is better to allow this as you suggested, along
with exposing the device tree property. I will call pmem_persist() for
pmem-backed devices, switching between pmem_persist() and fdatasync()
based on the backend type while flushing.

>> +    }
>> +#endif
>>       fd = memory_region_get_fd(&backend->mr);
>>       if (fd < 0) {
>> @@ -766,3 +800,15 @@ static void spapr_scm_register_types(void)


