From: "wangyanan (Y)" via <qemu-devel@nongnu.org>
To: Gavin Shan <gshan@redhat.com>, <qemu-arm@nongnu.org>
Cc: <qemu-devel@nongnu.org>, <imammedo@redhat.com>,
	<drjones@redhat.com>, <peter.maydell@linaro.org>,
	<richard.henderson@linaro.org>, <shan.gavin@gmail.com>,
	<zhenyzha@redhat.com>
Subject: Re: [PATCH v2 2/3] hw/acpi/aml-build: Use existing CPU topology to build PPTT table
Date: Fri, 18 Mar 2022 14:34:12 +0800
Message-ID: <12b4a089-b01f-f536-499e-d6029d0b1dea@huawei.com>
In-Reply-To: <20220303031152.145960-3-gshan@redhat.com>

Hi Gavin,

On 2022/3/3 11:11, Gavin Shan wrote:
> When the PPTT table is built, the CPU topology is re-calculated, but
> this is unnecessary because the CPU topology, except for the cluster
> IDs, has already been populated in virt_possible_cpu_arch_ids() on the
> arm/virt machine.
>
> Avoid re-calculating the CPU topology by reusing the existing one in
> ms->possible_cpus. However, the cluster ID for each CPU instance still
> has to be calculated dynamically because there is no corresponding
> field in struct CpuInstanceProperties. Currently, the only user of
> build_pptt() is the arm/virt machine.
>
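If I'm reading it right, the dynamic cluster ID calculation used below
assumes that thread_id is the flat CPU index arm/virt assigns in
virt_possible_cpu_arch_ids(), not a per-core thread index. For example,
with -smp cores=2,threads=2 (illustration only, not part of the patch):

    cluster_id = thread_id / (cores * threads);   /* = thread_id / 4 */
    /* thread_id 0..3 -> cluster 0, thread_id 4..7 -> cluster 1, ... */
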
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>   hw/acpi/aml-build.c | 106 ++++++++++++++++++++++++++++++++++----------
>   1 file changed, 82 insertions(+), 24 deletions(-)
>
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 8966e16320..572cf5fc00 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -2002,18 +2002,27 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>                   const char *oem_id, const char *oem_table_id)
>   {
>       MachineClass *mc = MACHINE_GET_CLASS(ms);
> +    CPUArchIdList *cpus = ms->possible_cpus;
> +    GQueue *socket_list = g_queue_new();
> +    GQueue *cluster_list = g_queue_new();
> +    GQueue *core_list = g_queue_new();
>       GQueue *list = g_queue_new();
>       guint pptt_start = table_data->len;
>       guint parent_offset;
>       guint length, i;
> -    int uid = 0;
> -    int socket;
> +    int n, id, socket_id, cluster_id, core_id, thread_id;
>       AcpiTable table = { .sig = "PPTT", .rev = 2,
>                           .oem_id = oem_id, .oem_table_id = oem_table_id };
>   
>       acpi_table_begin(&table, table_data);
>   
> -    for (socket = 0; socket < ms->smp.sockets; socket++) {
> +    for (n = 0; n < cpus->len; n++) {
> +        socket_id = cpus->cpus[n].props.socket_id;
> +        if (g_queue_find(socket_list, GUINT_TO_POINTER(socket_id))) {
> +            continue;
> +        }
> +
> +        g_queue_push_tail(socket_list, GUINT_TO_POINTER(socket_id));
>           g_queue_push_tail(list,
>               GUINT_TO_POINTER(table_data->len - pptt_start));
>           build_processor_hierarchy_node(
> @@ -2023,65 +2032,114 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>                * of a physical package
>                */
>               (1 << 0),
> -            0, socket, NULL, 0);
> +            0, socket_id, NULL, 0);
>       }
>   
>       if (mc->smp_props.clusters_supported) {
>           length = g_queue_get_length(list);
>           for (i = 0; i < length; i++) {
> -            int cluster;
> -
>               parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -            for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
> +            socket_id = GPOINTER_TO_UINT(g_queue_pop_head(socket_list));
> +
> +            for (n = 0; n < cpus->len; n++) {
> +                if (cpus->cpus[n].props.socket_id != socket_id) {
> +                    continue;
> +                }
> +
> +                /*
> +                 * We have to calculate the cluster ID because it isn't
> +                 * available in the CPU instance properties.
> +                 */
Since we need the cluster ID now, maybe we can simply add support for
it in the CPU instance properties.
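
For reference, a rough sketch of what that could look like (the field
names are illustrative; the real change would go through
qapi/machine.json, since CpuInstanceProperties is QAPI-generated):

    /* hypothetical addition to CpuInstanceProperties, mirroring the
     * existing has_socket_id/socket_id pair of optional members */
    bool has_cluster_id;
    int64_t cluster_id;

build_pptt() could then read cpus->cpus[n].props.cluster_id directly
instead of deriving it from thread_id.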

Thanks,
Yanan
> +                cluster_id = cpus->cpus[n].props.thread_id /
> +                             (ms->smp.cores * ms->smp.threads);
> +                if (g_queue_find(cluster_list, GUINT_TO_POINTER(cluster_id))) {
> +                    continue;
> +                }
> +
> +                g_queue_push_tail(cluster_list, GUINT_TO_POINTER(cluster_id));
>                   g_queue_push_tail(list,
>                       GUINT_TO_POINTER(table_data->len - pptt_start));
>                   build_processor_hierarchy_node(
>                       table_data,
>                       (0 << 0), /* not a physical package */
> -                    parent_offset, cluster, NULL, 0);
> +                    parent_offset, cluster_id, NULL, 0);
>               }
>           }
>       }
>   
>       length = g_queue_get_length(list);
>       for (i = 0; i < length; i++) {
> -        int core;
> -
>           parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -        for (core = 0; core < ms->smp.cores; core++) {
> -            if (ms->smp.threads > 1) {
> -                g_queue_push_tail(list,
> -                    GUINT_TO_POINTER(table_data->len - pptt_start));
> -                build_processor_hierarchy_node(
> -                    table_data,
> -                    (0 << 0), /* not a physical package */
> -                    parent_offset, core, NULL, 0);
> -            } else {
> +        if (!mc->smp_props.clusters_supported) {
> +            socket_id = GPOINTER_TO_UINT(g_queue_pop_head(socket_list));
> +        } else {
> +            cluster_id = GPOINTER_TO_UINT(g_queue_pop_head(cluster_list));
> +        }
> +
> +        for (n = 0; n < cpus->len; n++) {
> +            if (!mc->smp_props.clusters_supported &&
> +                cpus->cpus[n].props.socket_id != socket_id) {
> +                continue;
> +            }
> +
> +            /*
> +             * We have to calculate the cluster ID because it isn't
> +             * available in the CPU instance properties.
> +             */
> +            id = cpus->cpus[n].props.thread_id /
> +                (ms->smp.cores * ms->smp.threads);
> +            if (mc->smp_props.clusters_supported && id != cluster_id) {
> +                continue;
> +            }
> +
> +            core_id = cpus->cpus[n].props.core_id;
> +            if (ms->smp.threads <= 1) {
>                   build_processor_hierarchy_node(
>                       table_data,
>                       (1 << 1) | /* ACPI Processor ID valid */
>                       (1 << 3),  /* Node is a Leaf */
> -                    parent_offset, uid++, NULL, 0);
> +                    parent_offset, core_id, NULL, 0);
> +                continue;
>               }
> +
> +            if (g_queue_find(core_list, GUINT_TO_POINTER(core_id))) {
> +                continue;
> +            }
> +
> +            g_queue_push_tail(core_list, GUINT_TO_POINTER(core_id));
> +            g_queue_push_tail(list,
> +                GUINT_TO_POINTER(table_data->len - pptt_start));
> +            build_processor_hierarchy_node(
> +                table_data,
> +                (0 << 0), /* not a physical package */
> +                parent_offset, core_id, NULL, 0);
>           }
>       }
>   
>       length = g_queue_get_length(list);
>       for (i = 0; i < length; i++) {
> -        int thread;
> -
>           parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -        for (thread = 0; thread < ms->smp.threads; thread++) {
> +        core_id = GPOINTER_TO_UINT(g_queue_pop_head(core_list));
> +
> +        for (n = 0; n < cpus->len; n++) {
> +            if (cpus->cpus[n].props.core_id != core_id) {
> +                continue;
> +            }
> +
> +            thread_id = cpus->cpus[n].props.thread_id;
>               build_processor_hierarchy_node(
>                   table_data,
>                   (1 << 1) | /* ACPI Processor ID valid */
>                   (1 << 2) | /* Processor is a Thread */
>                   (1 << 3),  /* Node is a Leaf */
> -                parent_offset, uid++, NULL, 0);
> +                parent_offset, thread_id, NULL, 0);
>           }
>       }
>   
>       g_queue_free(list);
> +    g_queue_free(core_list);
> +    g_queue_free(cluster_list);
> +    g_queue_free(socket_list);
>       acpi_table_end(linker, &table);
>   }
>   
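The GQueue bookkeeping is the core of the approach, so here is a
self-contained sketch of the pattern (illustrative only, plain GLib,
not QEMU code): one queue carries the table offsets of the parent nodes
just written, while a second queue remembers which IDs already got a
node at the current level, with g_queue_find() de-duplicating the
GUINT_TO_POINTER-encoded IDs:

    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        GQueue *socket_list = g_queue_new();
        int socket_ids[] = { 0, 0, 1, 1 };  /* one entry per possible CPU */

        for (guint n = 0; n < G_N_ELEMENTS(socket_ids); n++) {
            gpointer key = GUINT_TO_POINTER(socket_ids[n]);

            /* skip sockets that already have a node at this level */
            if (g_queue_find(socket_list, key)) {
                continue;
            }
            g_queue_push_tail(socket_list, key);
            printf("emit socket node %d\n", socket_ids[n]);
        }

        g_queue_free(socket_list);
        return 0;
    }

One thing worth noting: g_queue_find() compares the stored pointers
directly, which is exactly what we want for small integer IDs encoded
with GUINT_TO_POINTER().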


