From: Daniel Henrique Barboza <danielhb413@gmail.com>
To: qemu-devel@nongnu.org
Cc: Shivaprasad G Bhat <sbhat@linux.ibm.com>, aneesh.kumar@linux.ibm.com,
	Daniel Henrique Barboza <danielhb413@gmail.com>, groug@kaod.org,
	qemu-ppc@nongnu.org, david@gibson.dropbear.id.au
Subject: [RFC PATCH v2 6/7] spapr_numa, spapr_nvdimm: write secondary NUMA domain for nvdimms
Date: Tue, 15 Jun 2021 22:19:43 -0300
Message-ID: <20210616011944.2996399-7-danielhb413@gmail.com>
In-Reply-To: <20210616011944.2996399-1-danielhb413@gmail.com>

Using the new 'device-node' property, write it in the NVDIMM device
tree node to set a secondary domain for the persistent memory
operation mode. If 'device-node' isn't set, the secondary domain is
equal to the primary domain.

Note that this is only available with FORM2 affinity. FORM1 affinity
NVDIMMs aren't affected by this change.

CC: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c         | 20 ++++++++++++++++++++
 hw/ppc/spapr_nvdimm.c       |  5 +++--
 include/hw/ppc/spapr_numa.h |  3 +++
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 8678ff4272..e7d455d304 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -266,6 +266,26 @@ void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
                       sizeof(spapr->numa_assoc_array[nodeid]))));
 }
 
+void spapr_numa_write_nvdimm_assoc_dt(SpaprMachineState *spapr, void *fdt,
+                                      int offset, int nodeid,
+                                      int device_node)
+{
+    uint32_t *nvdimm_assoc_array = spapr->numa_assoc_array[nodeid];
+
+    /*
+     * 'device-node' is the secondary domain for NVDIMMs when
+     * using FORM2. The secondary domain for FORM2 in QEMU
+     * is 0x3.
+     */
+    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
+        nvdimm_assoc_array[0x3] = cpu_to_be32(device_node);
+    }
+
+    _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
+                      nvdimm_assoc_array,
+                      sizeof(spapr->numa_assoc_array[nodeid]))));
+}
+
 static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
                                            PowerPCCPU *cpu)
 {
diff --git a/hw/ppc/spapr_nvdimm.c b/hw/ppc/spapr_nvdimm.c
index 91de1052f2..7cc4e9a28f 100644
--- a/hw/ppc/spapr_nvdimm.c
+++ b/hw/ppc/spapr_nvdimm.c
@@ -92,7 +92,6 @@ bool spapr_nvdimm_validate(HotplugHandler *hotplug_dev, NVDIMMDevice *nvdimm,
     return true;
 }
 
-
 void spapr_add_nvdimm(DeviceState *dev, uint64_t slot)
 {
     SpaprDrc *drc;
@@ -126,6 +125,7 @@ static int spapr_dt_nvdimm(SpaprMachineState *spapr, void *fdt,
     uint64_t lsize = nvdimm->label_size;
     uint64_t size = object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_PROP,
                                             NULL);
+    int device_node = nvdimm->device_node != -1 ? nvdimm->device_node : node;
 
     drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, slot);
     g_assert(drc);
@@ -142,7 +142,8 @@ static int spapr_dt_nvdimm(SpaprMachineState *spapr, void *fdt,
 
     _FDT((fdt_setprop_string(fdt, child_offset, "compatible", "ibm,pmemory")));
     _FDT((fdt_setprop_string(fdt, child_offset, "device_type", "ibm,pmemory")));
-    spapr_numa_write_associativity_dt(spapr, fdt, child_offset, node);
+    spapr_numa_write_nvdimm_assoc_dt(spapr, fdt, child_offset,
+                                     node, device_node);
 
     buf = qemu_uuid_unparse_strdup(&nvdimm->uuid);
     _FDT((fdt_setprop_string(fdt, child_offset, "ibm,unit-guid", buf)));
diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
index adaec8e163..af25741e70 100644
--- a/include/hw/ppc/spapr_numa.h
+++ b/include/hw/ppc/spapr_numa.h
@@ -26,6 +26,9 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr);
 void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas);
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
                                        int offset, int nodeid);
+void spapr_numa_write_nvdimm_assoc_dt(SpaprMachineState *spapr, void *fdt,
+                                      int offset, int nodeid,
+                                      int device_node);
 int spapr_numa_fixup_cpu_dt(SpaprMachineState *spapr, void *fdt, int offset,
                             PowerPCCPU *cpu);
 int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
-- 
2.31.1
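
To make the FORM2 write above concrete, the following is a minimal,
standalone C sketch of the associativity-array manipulation the new
spapr_numa_write_nvdimm_assoc_dt() performs. It assumes the layout
spapr_numa.c used at the time of this series: entry 0 holds the number
of domains (MAX_DISTANCE_REF_POINTS, 4) and the last entry holds the
primary domain, i.e. the NUMA node id, with intermediate entries
zeroed. be32() is an illustrative stand-in for QEMU's cpu_to_be32() on
a little-endian host, and the node values are hypothetical.

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define MAX_DISTANCE_REF_POINTS 4

/* Illustrative stand-in for QEMU's cpu_to_be32() on a little-endian host. */
static uint32_t be32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0xff00) |
           ((v & 0xff00) << 8) | (v << 24);
}

int main(void)
{
    int nodeid = 0;        /* primary domain: the nvdimm 'node' property */
    int device_node = 2;   /* secondary domain: the new 'device-node'    */

    /*
     * Assumed layout: entry 0 is the domain count, the last entry is
     * the primary domain (the node id).
     */
    uint32_t assoc[MAX_DISTANCE_REF_POINTS + 1] = {
        be32(MAX_DISTANCE_REF_POINTS), 0, 0, 0, be32(nodeid)
    };

    /* The write this patch adds for FORM2 guests: index 0x3 receives
     * the secondary domain. */
    assoc[0x3] = be32(device_node);

    for (int i = 0; i < MAX_DISTANCE_REF_POINTS + 1; i++) {
        printf("assoc[%d] = 0x%08" PRIx32 "\n", i, assoc[i]);
    }
    return 0;
}

When 'device-node' is unset (-1), spapr_dt_nvdimm() falls back to
'node', so index 0x3 ends up equal to the primary domain, which is the
fallback behavior the commit message describes.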