iommu.lists.linux-foundation.org archive mirror
From: "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>
To: Greg KH <gregkh@linuxfoundation.org>
Cc: "rafael@kernel.org" <rafael@kernel.org>,
	Linuxarm <linuxarm@huawei.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"Zengtao \(B\)" <prime.zeng@hisilicon.com>,
	Robin Murphy <robin.murphy@arm.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: RE: [PATCH] driver core: platform: expose numa_node to users in sysfs
Date: Tue, 2 Jun 2020 07:02:03 +0000	[thread overview]
Message-ID: <B926444035E5E2439431908E3842AFD24D9167@DGGEMI525-MBS.china.huawei.com> (raw)
In-Reply-To: <20200602061112.GC2256033@kroah.com>

> >
> > On Tue, Jun 02, 2020 at 05:09:57AM +0000, Song Bao Hua (Barry Song) wrote:
> > > > >
> > > > > Platform devices are NUMA?  That's crazy, and feels like a total
> > > > > abuse of platform devices and drivers that really should belong
> > > > > on a "real" bus.
> > > >
> > > > I am not sure it is an abuse of platform devices. The SMMU is a
> > > > platform device, and drivers/iommu/arm-smmu-v3.c is a platform
> > > > driver. A typical ARM server may have multiple SMMU devices, which
> > > > provide I/O virtual addresses and page tables for other devices on
> > > > PCI-like buses. Each SMMU device may be close to a different NUMA
> > > > node; there really is a hardware topology.
> > > >
> > > > If you have multiple CPU packages in a NUMA server, some platform
> > > > devices might belong to CPU0 and some others to CPU1.
> > >
> > > Those devices are populated by acpi_iort for an ARM server:
> > >
> > > drivers/acpi/arm64/iort.c:
> > >
> > > static const struct iort_dev_config iort_arm_smmu_v3_cfg __initconst = {
> > >         .name = "arm-smmu-v3",
> > >         .dev_dma_configure = arm_smmu_v3_dma_configure,
> > >         .dev_count_resources = arm_smmu_v3_count_resources,
> > >         .dev_init_resources = arm_smmu_v3_init_resources,
> > >         .dev_set_proximity = arm_smmu_v3_set_proximity,
> > > };
> > >
> > > void __init acpi_iort_init(void)
> > > {
> > >         acpi_status status;
> > >
> > >         status = acpi_get_table(ACPI_SIG_IORT, 0, &iort_table);
> > >         ...
> > >         iort_check_id_count_workaround(iort_table);
> > >         iort_init_platform_devices();
> > > }
> > >
> > > static void __init iort_init_platform_devices(void)
> > > {
> > >         ...
> > >
> > >         for (i = 0; i < iort->node_count; i++) {
> > >                 if (iort_node >= iort_end) {
> > >                         pr_err("iort node pointer overflows, bad table\n");
> > >                         return;
> > >                 }
> > >
> > >                 iort_enable_acs(iort_node);
> > >
> > >                 ops = iort_get_dev_cfg(iort_node);
> > >                 if (ops) {
> > >                         fwnode = acpi_alloc_fwnode_static();
> > >                         if (!fwnode)
> > >                                 return;
> > >
> > >                         iort_set_fwnode(iort_node, fwnode);
> > >
> > >                         ret = iort_add_platform_device(iort_node, ops);
> > >                         if (ret) {
> > >                                 iort_delete_fwnode(iort_node);
> > >                                 acpi_free_fwnode_static(fwnode);
> > >                                 return;
> > >                         }
> > >                 }
> > >
> > >                 ...
> > >         }
> > > ...
> > > }
> > >
> > > The NUMA node is obtained from the ACPI IORT table:
> > >
> > > static int __init arm_smmu_v3_set_proximity(struct device *dev,
> > >                                             struct acpi_iort_node *node)
> > > {
> > >         struct acpi_iort_smmu_v3 *smmu;
> > >
> > >         smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
> > >         if (smmu->flags & ACPI_IORT_SMMU_V3_PXM_VALID) {
> > >                 int dev_node = acpi_map_pxm_to_node(smmu->pxm);
> > >
> > >                 ...
> > >
> > >                 set_dev_node(dev, dev_node);
> > >                 ...
> > >         }
> > >         return 0;
> > > }
> > >
> > > Barry
> >
> > That's fine, but those are "real" devices, not platform devices, right?
> >
> 
> Most platform devices are "real" memory-mapped hardware devices. On an
> embedded system, almost all "simple-bus" devices are populated from the
> device tree as platform devices. Only a minority of platform devices are
> not "real" hardware.
> 
> The SMMU is a memory-mapped device, just like most other platform devices:
> its registers sit in the CPU's physical address space, are mapped with
> ioremap(), and are accessed with readl()/writel().
> 
> > What platform device has this issue?  What one will show up this way
> > with the new patch?

Meanwhile, this patch only exposes numa_node for platform devices that are
backed by "real" hardware with ACPI support. For platform devices which are
not "real", the numa_node sysfs entry is not created, because
dev_to_node(dev) == NUMA_NO_NODE.
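The visibility check described above can be modeled in a few lines of
userspace C. This is only an illustrative sketch: struct device,
dev_to_node() and NUMA_NO_NODE are stubbed here, and the real attribute
wiring lives in drivers/base/platform.c, not in this snippet.

```c
/* Userspace model of the conditional sysfs attribute. struct device,
 * dev_to_node() and NUMA_NO_NODE are stand-ins for the kernel's
 * definitions. */
#define NUMA_NO_NODE (-1)

struct device {
	int numa_node;	/* set by e.g. arm_smmu_v3_set_proximity() */
};

static int dev_to_node(const struct device *dev)
{
	return dev->numa_node;
}

/* Mirrors an is_visible()-style callback: the numa_node attribute is
 * only created when the device sits on a known NUMA node. */
static int numa_node_visible(const struct device *dev)
{
	return dev_to_node(dev) != NUMA_NO_NODE;
}
```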

For instance, the SMMU is a platform device with a "real" hardware backend,
so it gets a numa_node:

root@ubuntu:/sys/bus/platform/devices/arm-smmu-v3.0.auto# cat numa_node
0

root@ubuntu:/sys/bus/platform/devices/arm-smmu-v3.7.auto# cat numa_node
2

snd-soc-dummy is a platform device without "real" hardware, so it gets no numa_node:

root@ubuntu:/sys/bus/platform/devices/snd-soc-dummy# ls
driver  driver_override  modalias  power  subsystem  uevent
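From userspace, a consumer can mirror the kernel's convention by treating a
missing numa_node file as NUMA_NO_NODE (-1). A hypothetical helper (the
function name and the fallback convention are mine, not part of the patch):

```c
#include <stdio.h>

/* Read a device's numa_node sysfs attribute. Returns the node id, or
 * -1 (NUMA_NO_NODE) when the attribute is absent or unreadable --
 * which is what happens for "non-real" platform devices. */
static int read_numa_node(const char *path)
{
	FILE *f = fopen(path, "r");
	int node = -1;

	if (!f)
		return -1;	/* attribute not created: no NUMA node */
	if (fscanf(f, "%d", &node) != 1)
		node = -1;
	fclose(f);
	return node;
}
```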

-barry

> 
> If a platform device could never be real hardware, then no platform device
> would have a hardware topology. But platform devices are "real" hardware
> most of the time. The SMMU is a "real" device, yet it is a platform device
> in Linux.
> 
> >
> > thanks,
> >
> > greg k-h
> 
> -barry

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Thread overview: 8+ messages
2020-06-02  3:01 [PATCH] driver core: platform: expose numa_node to users in sysfs Barry Song
2020-06-02  4:23 ` Greg KH
2020-06-02  4:42   ` Song Bao Hua (Barry Song)
2020-06-02  5:09   ` Song Bao Hua (Barry Song)
2020-06-02  6:11     ` Greg KH
2020-06-02  6:26       ` Song Bao Hua (Barry Song)
2020-06-02  7:02       ` Song Bao Hua (Barry Song) [this message]
2020-06-02  4:24 ` Greg KH
