From: Julien Grall <julien@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>
Cc: "Stefano Stabellini" <sstabellini@kernel.org>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>,
	"Wei Liu" <wl@xen.org>, "Jan Beulich" <JBeulich@suse.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/4] xen: Drop raw_smp_processor_id()
Date: Sat, 21 Mar 2020 10:14:37 +0000	[thread overview]
Message-ID: <7ee60956-f02c-d185-0df8-b69e9c3894cf@xen.org> (raw)
In-Reply-To: <20200320212453.21685-4-andrew.cooper3@citrix.com>
On 20/03/2020 21:24, Andrew Cooper wrote:
> There is only a single user of raw_smp_processor_id() left in the tree (and it
> is unconditionally compiled out).  Drop the alias from all architectures.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> ---
>   xen/arch/x86/cpu/microcode/amd.c | 2 +-
>   xen/include/asm-arm/smp.h        | 2 +-
>   xen/include/asm-x86/smp.h        | 2 +-
>   xen/include/xen/smp.h            | 2 --
>   4 files changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
> index a053e43923..0998a36b5c 100644
> --- a/xen/arch/x86/cpu/microcode/amd.c
> +++ b/xen/arch/x86/cpu/microcode/amd.c
> @@ -306,7 +306,7 @@ static int get_ucode_from_buffer_amd(
>       memcpy(mc_amd->mpb, mpbuf->data, mpbuf->len);
>   
>       pr_debug("microcode: CPU%d size %zu, block size %u offset %zu equivID %#x rev %#x\n",
> -             raw_smp_processor_id(), bufsize, mpbuf->len, *offset,
> +             smp_processor_id(), bufsize, mpbuf->len, *offset,
>                ((struct microcode_header_amd *)mc_amd->mpb)->processor_rev_id,
>                ((struct microcode_header_amd *)mc_amd->mpb)->patch_id);
>   
> diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
> index fdbcefa241..af5a2fe652 100644
> --- a/xen/include/asm-arm/smp.h
> +++ b/xen/include/asm-arm/smp.h
> @@ -12,7 +12,7 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
>   
>   #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
>   
> -#define raw_smp_processor_id() (get_processor_id())
> +#define smp_processor_id() get_processor_id()
>   
>   /*
>    * Do we, for platform reasons, need to actually keep CPUs online when we
> diff --git a/xen/include/asm-x86/smp.h b/xen/include/asm-x86/smp.h
> index 6150363655..f7485f602e 100644
> --- a/xen/include/asm-x86/smp.h
> +++ b/xen/include/asm-x86/smp.h
> @@ -53,7 +53,7 @@ int cpu_add(uint32_t apic_id, uint32_t acpi_id, uint32_t pxm);
>    * from the initial startup. We map APIC_BASE very early in page_setup(),
>    * so this is correct in the x86 case.
>    */
> -#define raw_smp_processor_id() (get_processor_id())
> +#define smp_processor_id() get_processor_id()
>   
>   void __stop_this_cpu(void);
>   
> diff --git a/xen/include/xen/smp.h b/xen/include/xen/smp.h
> index a64c9b3882..d5a3644611 100644
> --- a/xen/include/xen/smp.h
> +++ b/xen/include/xen/smp.h
> @@ -65,8 +65,6 @@ void smp_call_function_interrupt(void);
>   
>   void smp_send_call_function_mask(const cpumask_t *mask);
>   
> -#define smp_processor_id() raw_smp_processor_id()
> -
>   int alloc_cpu_id(void);
>   
>   extern void *stack_base[NR_CPUS];
> 

-- 
Julien Grall
