From: Igor Mammedov <imammedo@redhat.com>
To: Gavin Shan <gshan@redhat.com>
Cc: peter.maydell@linaro.org, drjones@redhat.com,
	richard.henderson@linaro.org, qemu-devel@nongnu.org,
	zhenyzha@redhat.com, wangyanan55@huawei.com, qemu-arm@nongnu.org,
	shan.gavin@gmail.com
Subject: Re: [PATCH v5 4/4] hw/acpi/aml-build: Use existing CPU topology to build PPTT table
Date: Wed, 13 Apr 2022 15:52:32 +0200	[thread overview]
Message-ID: <20220413155232.0a1f4d88@redhat.com> (raw)
In-Reply-To: <20220403145953.10522-5-gshan@redhat.com>

On Sun,  3 Apr 2022 22:59:53 +0800
Gavin Shan <gshan@redhat.com> wrote:

> When the PPTT table is built, the CPU topology is re-calculated, but
> that's unnecessary because the CPU topology has already been populated
> in virt_possible_cpu_arch_ids() on the arm/virt machine.
> 
> Rework build_pptt() to avoid the recalculation by reusing the existing
> topology in ms->possible_cpus. Currently, the only user of build_pptt()
> is the arm/virt machine.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  hw/acpi/aml-build.c | 100 +++++++++++++++++---------------------------
>  1 file changed, 38 insertions(+), 62 deletions(-)
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 4086879ebf..4b0f9df3e3 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -2002,86 +2002,62 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>                  const char *oem_id, const char *oem_table_id)
>  {
>      MachineClass *mc = MACHINE_GET_CLASS(ms);
> -    GQueue *list = g_queue_new();
> -    guint pptt_start = table_data->len;
> -    guint parent_offset;
> -    guint length, i;
> -    int uid = 0;
> -    int socket;
> +    CPUArchIdList *cpus = ms->possible_cpus;
> +    int64_t socket_id = -1, cluster_id = -1, core_id = -1;
> +    uint32_t socket_offset, cluster_offset, core_offset;
> +    uint32_t pptt_start = table_data->len;
> +    int n;
>      AcpiTable table = { .sig = "PPTT", .rev = 2,
>                          .oem_id = oem_id, .oem_table_id = oem_table_id };
>  
>      acpi_table_begin(&table, table_data);
>  
> -    for (socket = 0; socket < ms->smp.sockets; socket++) {
> -        g_queue_push_tail(list,
> -            GUINT_TO_POINTER(table_data->len - pptt_start));
> -        build_processor_hierarchy_node(
> -            table_data,
> -            /*
> -             * Physical package - represents the boundary
> -             * of a physical package
> -             */
> -            (1 << 0),
> -            0, socket, NULL, 0);
> -    }
> +    for (n = 0; n < cpus->len; n++) {

> +        if (cpus->cpus[n].props.socket_id != socket_id) {
> +            socket_id = cpus->cpus[n].props.socket_id;

This relies on cpus->cpus[n].props.*_id being sorted from the top level down.
I'd add an assert() here, and for the other container IDs as well, that checks
each specific ID only moves in one direction, so we can detect when that rule
is broken.

Otherwise one may silently end up with duplicate containers.
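
For illustration only, a minimal sketch of such a check at the socket level,
reusing the patch's socket_id tracking variable (the same pattern would apply
to cluster_id and core_id; the rest of the hunk stays as in the patch):

    if (cpus->cpus[n].props.socket_id != socket_id) {
        /*
         * possible_cpus is assumed to be sorted from the top level down;
         * a socket_id that moves backwards would otherwise silently
         * produce a duplicate container node.
         */
        assert(cpus->cpus[n].props.socket_id > socket_id);
        socket_id = cpus->cpus[n].props.socket_id;
        /* ... rest of the hunk as in the patch ... */
    }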

> +            cluster_id = -1;
> +            core_id = -1;
> +            socket_offset = table_data->len - pptt_start;
> +            build_processor_hierarchy_node(table_data,
> +                (1 << 0), /* Physical package */
> +                0, socket_id, NULL, 0);
> +        }
>  
> -    if (mc->smp_props.clusters_supported) {
> -        length = g_queue_get_length(list);
> -        for (i = 0; i < length; i++) {
> -            int cluster;
> -
> -            parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -            for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
> -                g_queue_push_tail(list,
> -                    GUINT_TO_POINTER(table_data->len - pptt_start));
> -                build_processor_hierarchy_node(
> -                    table_data,
> -                    (0 << 0), /* not a physical package */
> -                    parent_offset, cluster, NULL, 0);
> +        if (mc->smp_props.clusters_supported) {
> +            if (cpus->cpus[n].props.cluster_id != cluster_id) {
> +                cluster_id = cpus->cpus[n].props.cluster_id;
> +                core_id = -1;
> +                cluster_offset = table_data->len - pptt_start;
> +                build_processor_hierarchy_node(table_data,
> +                    (0 << 0), /* Not a physical package */
> +                    socket_offset, cluster_id, NULL, 0);
>              }
> +        } else {
> +            cluster_offset = socket_offset;
>          }
> -    }
>  
> -    length = g_queue_get_length(list);
> -    for (i = 0; i < length; i++) {
> -        int core;
> -
> -        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -        for (core = 0; core < ms->smp.cores; core++) {
> -            if (ms->smp.threads > 1) {
> -                g_queue_push_tail(list,
> -                    GUINT_TO_POINTER(table_data->len - pptt_start));
> -                build_processor_hierarchy_node(
> -                    table_data,
> +        if (ms->smp.threads <= 1) {

Why is <= used here instead of <?

> +            build_processor_hierarchy_node(table_data,
> +                (1 << 1) | /* ACPI Processor ID valid */
> +                (1 << 3),  /* Node is a Leaf */
> +                cluster_offset, n, NULL, 0);
> +        } else {
> +            if (cpus->cpus[n].props.core_id != core_id) {
> +                core_id = cpus->cpus[n].props.core_id;
> +                core_offset = table_data->len - pptt_start;
> +                build_processor_hierarchy_node(table_data,
>                      (0 << 0), /* not a physical package */
> -                    parent_offset, core, NULL, 0);
> -            } else {
> -                build_processor_hierarchy_node(
> -                    table_data,
> -                    (1 << 1) | /* ACPI Processor ID valid */
> -                    (1 << 3),  /* Node is a Leaf */
> -                    parent_offset, uid++, NULL, 0);
> +                    cluster_offset, core_id, NULL, 0);
>              }
> -        }
> -    }
> -
> -    length = g_queue_get_length(list);
> -    for (i = 0; i < length; i++) {
> -        int thread;
>  
> -        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -        for (thread = 0; thread < ms->smp.threads; thread++) {
> -            build_processor_hierarchy_node(
> -                table_data,
> +            build_processor_hierarchy_node(table_data,
>                  (1 << 1) | /* ACPI Processor ID valid */
>                  (1 << 2) | /* Processor is a Thread */
>                  (1 << 3),  /* Node is a Leaf */
> -                parent_offset, uid++, NULL, 0);
> +                core_offset, n, NULL, 0);
>          }
>      }
>  
> -    g_queue_free(list);
>      acpi_table_end(linker, &table);
>  }
>  


