xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH 0/4] x86 MCE adjustments for AMD / general per-CPU accessor cleanup
@ 2019-06-14 15:33 Jan Beulich
  2019-06-14 15:35 ` [Xen-devel] [PATCH 1/4] x86/mcheck: allow varying bank counts per CPU Jan Beulich
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Jan Beulich @ 2019-06-14 15:33 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monne

After patch 1, which really is the one I was after here, I realized
that the number of remaining __get_cpu_var() uses had shrunk enough
that it seemed worthwhile to take the time to convert them all, such
that the construct could finally be dropped.

1: x86/mcheck: allow varying bank counts per CPU
2: x86/mcheck: replace remaining uses of __get_cpu_var()
3: x86: replace remaining uses of __get_cpu_var()
4: drop __get_cpu_var() and __get_cpu_ptr()

Jan




* [Xen-devel] [PATCH 1/4] x86/mcheck: allow varying bank counts per CPU
  2019-06-14 15:33 [Xen-devel] [PATCH 0/4] x86 MCE adjustments for AMD / general per-CPU accessor cleanup Jan Beulich
@ 2019-06-14 15:35 ` Jan Beulich
  2019-06-21 17:45   ` Andrew Cooper
  2019-06-14 15:37 ` [Xen-devel] [PATCH 2/4] x86/mcheck: replace remaining uses of __get_cpu_var() Jan Beulich
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2019-06-14 15:35 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monne

Up to now we've been assuming that all CPUs would have the same number
of reporting banks. However, on upcoming AMD CPUs this isn't the case,
and one can observe

(XEN) mce.c:666: Different bank number on cpu <N>

indicating that Machine Check support would not be enabled on the
affected CPUs. Convert the count variable to a per-CPU one, and adjust
code where needed to cope with the values not being the same. In
particular the mcabanks_alloc() invocations during AP bringup need to
now allocate maximum-size bitmaps, because the truly needed size can't
be known until we actually execute on that CPU, yet mcheck_init() gets
called too early to do any allocations itself.
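
In outline, the conversion amounts to each CPU recording its own count
during its own initialisation. A sketch only, not a hunk from this patch
(the helper name is made up; MASK_EXTR() and MCG_CAP_COUNT are the names
used in the diff below):

    DEFINE_PER_CPU_READ_MOSTLY(unsigned int, nr_mce_banks);

    static void record_bank_count(void) /* hypothetical helper */
    {
        uint64_t cap;

        rdmsrl(MSR_IA32_MCG_CAP, cap);
        /* MCG_CAP[7:0] is the bank count, which may differ per CPU. */
        this_cpu(nr_mce_banks) = MASK_EXTR(cap, MCG_CAP_COUNT);
    }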

Take the liberty and also
- make mca_cap_init() static,
- replace several __get_cpu_var() uses when a local variable suitable
  for use with per_cpu() appears,
- correct which CPU's cpu_data[] entry x86_mc_msrinject_verify() uses,
- replace a BUG() by panic().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -34,7 +34,7 @@ bool __read_mostly opt_mce = true;
 boolean_param("mce", opt_mce);
 bool __read_mostly mce_broadcast;
 bool is_mc_panic;
-unsigned int __read_mostly nr_mce_banks;
+DEFINE_PER_CPU_READ_MOSTLY(unsigned int, nr_mce_banks);
 unsigned int __read_mostly firstbank;
 uint8_t __read_mostly cmci_apic_vector;
 
@@ -120,7 +120,7 @@ void mce_recoverable_register(mce_recove
     mc_recoverable_scan = cbfunc;
 }
 
-struct mca_banks *mcabanks_alloc(void)
+struct mca_banks *mcabanks_alloc(unsigned int nr_mce_banks)
 {
     struct mca_banks *mb;
 
@@ -128,6 +128,13 @@ struct mca_banks *mcabanks_alloc(void)
     if ( !mb )
         return NULL;
 
+    /*
+     * For APs allocations get done by the BSP, i.e. when the bank count may
+     * not be known yet. A zero bank count is a clear indication of this.
+     */
+    if ( !nr_mce_banks )
+        nr_mce_banks = MCG_CAP_COUNT;
+
     mb->bank_map = xzalloc_array(unsigned long,
                                  BITS_TO_LONGS(nr_mce_banks));
     if ( !mb->bank_map )
@@ -319,7 +326,7 @@ mcheck_mca_logout(enum mca_source who, s
      */
     recover = mc_recoverable_scan ? 1 : 0;
 
-    for ( i = 0; i < nr_mce_banks; i++ )
+    for ( i = 0; i < this_cpu(nr_mce_banks); i++ )
     {
         /* Skip bank if corresponding bit in bankmask is clear */
         if ( !mcabanks_test(i, bankmask) )
@@ -565,7 +572,7 @@ void mcheck_mca_clearbanks(struct mca_ba
 {
     int i;
 
-    for ( i = 0; i < nr_mce_banks; i++ )
+    for ( i = 0; i < this_cpu(nr_mce_banks); i++ )
     {
         if ( !mcabanks_test(i, bankmask) )
             continue;
@@ -638,54 +645,56 @@ static void set_poll_bankmask(struct cpu
 
     if ( cmci_support && opt_mce )
     {
-        mb->num = per_cpu(no_cmci_banks, cpu)->num;
-        bitmap_copy(mb->bank_map, per_cpu(no_cmci_banks, cpu)->bank_map,
-                    nr_mce_banks);
+        const struct mca_banks *cmci = per_cpu(no_cmci_banks, cpu);
+
+        if ( unlikely(cmci->num < mb->num) )
+            bitmap_fill(mb->bank_map, mb->num);
+        bitmap_copy(mb->bank_map, cmci->bank_map, min(mb->num, cmci->num));
     }
     else
     {
-        bitmap_copy(mb->bank_map, mca_allbanks->bank_map, nr_mce_banks);
+        bitmap_copy(mb->bank_map, mca_allbanks->bank_map,
+                    per_cpu(nr_mce_banks, cpu));
         if ( mce_firstbank(c) )
             mcabanks_clear(0, mb);
     }
 }
 
 /* The perbank ctl/status init is platform specific because of AMD's quirk */
-int mca_cap_init(void)
+static int mca_cap_init(void)
 {
     uint64_t msr_content;
+    unsigned int nr, cpu = smp_processor_id();
 
     rdmsrl(MSR_IA32_MCG_CAP, msr_content);
 
     if ( msr_content & MCG_CTL_P ) /* Control register present ? */
         wrmsrl(MSR_IA32_MCG_CTL, 0xffffffffffffffffULL);
 
-    if ( nr_mce_banks && (msr_content & MCG_CAP_COUNT) != nr_mce_banks )
-    {
-        dprintk(XENLOG_WARNING, "Different bank number on cpu %x\n",
-                smp_processor_id());
-        return -ENODEV;
-    }
-    nr_mce_banks = msr_content & MCG_CAP_COUNT;
+    per_cpu(nr_mce_banks, cpu) = nr = MASK_EXTR(msr_content, MCG_CAP_COUNT);
 
-    if ( !nr_mce_banks )
+    if ( !nr )
     {
-        printk(XENLOG_INFO "CPU%u: No MCE banks present. "
-               "Machine check support disabled\n", smp_processor_id());
+        printk(XENLOG_INFO
+               "CPU%u: No MCE banks present. Machine check support disabled\n",
+               cpu);
         return -ENODEV;
     }
 
     /* mcabanks_alloc depends on nr_mce_banks */
-    if ( !mca_allbanks )
+    if ( !mca_allbanks || nr > mca_allbanks->num )
     {
-        int i;
+        unsigned int i;
+        struct mca_banks *all = mcabanks_alloc(nr);
 
-        mca_allbanks = mcabanks_alloc();
-        for ( i = 0; i < nr_mce_banks; i++ )
+        if ( !all )
+            return -ENOMEM;
+        for ( i = 0; i < nr; i++ )
             mcabanks_set(i, mca_allbanks);
+        mcabanks_free(xchg(&mca_allbanks, all));
     }
 
-    return mca_allbanks ? 0 : -ENOMEM;
+    return 0;
 }
 
 static void cpu_bank_free(unsigned int cpu)
@@ -702,8 +711,9 @@ static void cpu_bank_free(unsigned int c
 
 static int cpu_bank_alloc(unsigned int cpu)
 {
-    struct mca_banks *poll = per_cpu(poll_bankmask, cpu) ?: mcabanks_alloc();
-    struct mca_banks *clr = per_cpu(mce_clear_banks, cpu) ?: mcabanks_alloc();
+    unsigned int nr = per_cpu(nr_mce_banks, cpu);
+    struct mca_banks *poll = per_cpu(poll_bankmask, cpu) ?: mcabanks_alloc(nr);
+    struct mca_banks *clr = per_cpu(mce_clear_banks, cpu) ?: mcabanks_alloc(nr);
 
     if ( !poll || !clr )
     {
@@ -752,6 +762,7 @@ static struct notifier_block cpu_nfb = {
 void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
 {
     enum mcheck_type inited = mcheck_none;
+    unsigned int cpu = smp_processor_id();
 
     if ( !opt_mce )
     {
@@ -762,8 +773,7 @@ void mcheck_init(struct cpuinfo_x86 *c,
 
     if ( !mce_available(c) )
     {
-        printk(XENLOG_INFO "CPU%i: No machine check support available\n",
-               smp_processor_id());
+        printk(XENLOG_INFO "CPU%i: No machine check support available\n", cpu);
         return;
     }
 
@@ -771,9 +781,13 @@ void mcheck_init(struct cpuinfo_x86 *c,
     if ( mca_cap_init() )
         return;
 
-    /* Early MCE initialisation for BSP. */
-    if ( bsp && cpu_bank_alloc(smp_processor_id()) )
-        BUG();
+    if ( !bsp )
+    {
+        per_cpu(poll_bankmask, cpu)->num = per_cpu(nr_mce_banks, cpu);
+        per_cpu(mce_clear_banks, cpu)->num = per_cpu(nr_mce_banks, cpu);
+    }
+    else if ( cpu_bank_alloc(cpu) )
+        panic("Insufficient memory for MCE bank allocations\n");
 
     switch ( c->x86_vendor )
     {
@@ -1111,24 +1125,22 @@ bool intpose_inval(unsigned int cpu_nr,
     return true;
 }
 
-#define IS_MCA_BANKREG(r) \
+#define IS_MCA_BANKREG(r, cpu) \
     ((r) >= MSR_IA32_MC0_CTL && \
-    (r) <= MSR_IA32_MCx_MISC(nr_mce_banks - 1) && \
-    ((r) - MSR_IA32_MC0_CTL) % 4 != 0) /* excludes MCi_CTL */
+     (r) <= MSR_IA32_MCx_MISC(per_cpu(nr_mce_banks, cpu) - 1) && \
+     ((r) - MSR_IA32_MC0_CTL) % 4) /* excludes MCi_CTL */
 
 static bool x86_mc_msrinject_verify(struct xen_mc_msrinject *mci)
 {
-    struct cpuinfo_x86 *c;
+    const struct cpuinfo_x86 *c = &cpu_data[mci->mcinj_cpunr];
     int i, errs = 0;
 
-    c = &cpu_data[smp_processor_id()];
-
     for ( i = 0; i < mci->mcinj_count; i++ )
     {
         uint64_t reg = mci->mcinj_msr[i].reg;
         const char *reason = NULL;
 
-        if ( IS_MCA_BANKREG(reg) )
+        if ( IS_MCA_BANKREG(reg, mci->mcinj_cpunr) )
         {
             if ( c->x86_vendor == X86_VENDOR_AMD )
             {
@@ -1448,7 +1460,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_m
         break;
 
     case XEN_MC_msrinject:
-        if ( nr_mce_banks == 0 )
+        if ( !mca_allbanks || !mca_allbanks->num )
             return x86_mcerr("do_mca inject", -ENODEV);
 
         mc_msrinject = &op->u.mc_msrinject;
@@ -1461,6 +1473,9 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_m
             return x86_mcerr("do_mca inject: target offline",
                              -EINVAL);
 
+        if ( !per_cpu(nr_mce_banks, target) )
+            return x86_mcerr("do_mca inject: no banks", -ENOENT);
+
         if ( mc_msrinject->mcinj_count == 0 )
             return 0;
 
@@ -1521,7 +1536,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_m
         break;
 
     case XEN_MC_mceinject:
-        if ( nr_mce_banks == 0 )
+        if ( !mca_allbanks || !mca_allbanks->num )
             return x86_mcerr("do_mca #MC", -ENODEV);
 
         mc_mceinject = &op->u.mc_mceinject;
@@ -1533,6 +1548,9 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_m
         if ( !cpu_online(target) )
             return x86_mcerr("do_mca #MC: target offline", -EINVAL);
 
+        if ( !per_cpu(nr_mce_banks, target) )
+            return x86_mcerr("do_mca #MC: no banks", -ENOENT);
+
         add_taint(TAINT_ERROR_INJECT);
 
         if ( mce_broadcast )
@@ -1548,7 +1566,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_m
         cpumask_var_t cmv;
         bool broadcast = op->u.mc_inject_v2.flags & XEN_MC_INJECT_CPU_BROADCAST;
 
-        if ( nr_mce_banks == 0 )
+        if ( !mca_allbanks || !mca_allbanks->num )
             return x86_mcerr("do_mca #MC", -ENODEV);
 
         if ( broadcast )
@@ -1570,6 +1588,16 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_m
                         "Not all required CPUs are online\n");
         }
 
+        for_each_cpu(target, cpumap)
+            if ( cpu_online(target) && !per_cpu(nr_mce_banks, target) )
+            {
+                ret = x86_mcerr("do_mca #MC: CPU%u has no banks",
+                                -ENOENT, target);
+                break;
+            }
+        if ( ret )
+            break;
+
         switch ( op->u.mc_inject_v2.flags & XEN_MC_INJECT_TYPE_MASK )
         {
         case XEN_MC_INJECT_TYPE_MCE:
--- a/xen/arch/x86/cpu/mcheck/mce_amd.c
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.c
@@ -297,7 +297,7 @@ amd_mcheck_init(struct cpuinfo_x86 *ci)
     x86_mce_vector_register(mcheck_cmn_handler);
     mce_need_clearbank_register(amd_need_clearbank_scan);
 
-    for ( i = 0; i < nr_mce_banks; i++ )
+    for ( i = 0; i < this_cpu(nr_mce_banks); i++ )
     {
         if ( quirkflag == MCEQUIRK_K8_GART && i == 4 )
             mcequirk_amd_apply(quirkflag);
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -535,16 +535,16 @@ out:
 static void cmci_discover(void)
 {
     unsigned long flags;
-    int i;
+    unsigned int i, cpu = smp_processor_id();
     mctelem_cookie_t mctc;
     struct mca_summary bs;
 
-    mce_printk(MCE_VERBOSE, "CMCI: find owner on CPU%d\n", smp_processor_id());
+    mce_printk(MCE_VERBOSE, "CMCI: find owner on CPU%u\n", cpu);
 
     spin_lock_irqsave(&cmci_discover_lock, flags);
 
-    for ( i = 0; i < nr_mce_banks; i++ )
-        if ( !mcabanks_test(i, __get_cpu_var(mce_banks_owned)) )
+    for ( i = 0; i < per_cpu(nr_mce_banks, cpu); i++ )
+        if ( !mcabanks_test(i, per_cpu(mce_banks_owned, cpu)) )
             do_cmci_discover(i);
 
     spin_unlock_irqrestore(&cmci_discover_lock, flags);
@@ -557,7 +557,7 @@ static void cmci_discover(void)
      */
 
     mctc = mcheck_mca_logout(
-        MCA_CMCI_HANDLER, __get_cpu_var(mce_banks_owned), &bs, NULL);
+        MCA_CMCI_HANDLER, per_cpu(mce_banks_owned, cpu), &bs, NULL);
 
     if ( bs.errcnt && mctc != NULL )
     {
@@ -576,9 +576,9 @@ static void cmci_discover(void)
         mctelem_dismiss(mctc);
 
     mce_printk(MCE_VERBOSE, "CMCI: CPU%d owner_map[%lx], no_cmci_map[%lx]\n",
-               smp_processor_id(),
-               *((unsigned long *)__get_cpu_var(mce_banks_owned)->bank_map),
-               *((unsigned long *)__get_cpu_var(no_cmci_banks)->bank_map));
+               cpu,
+               per_cpu(mce_banks_owned, cpu)->bank_map[0],
+               per_cpu(no_cmci_banks, cpu)->bank_map[0]);
 }
 
 /*
@@ -613,24 +613,24 @@ static void cpu_mcheck_distribute_cmci(v
 
 static void clear_cmci(void)
 {
-    int i;
+    unsigned int i, cpu = smp_processor_id();
 
     if ( !cmci_support || !opt_mce )
         return;
 
-    mce_printk(MCE_VERBOSE, "CMCI: clear_cmci support on CPU%d\n",
-               smp_processor_id());
+    mce_printk(MCE_VERBOSE, "CMCI: clear_cmci support on CPU%u\n", cpu);
 
-    for ( i = 0; i < nr_mce_banks; i++ )
+    for ( i = 0; i < per_cpu(nr_mce_banks, cpu); i++ )
     {
         unsigned msr = MSR_IA32_MCx_CTL2(i);
         u64 val;
-        if ( !mcabanks_test(i, __get_cpu_var(mce_banks_owned)) )
+
+        if ( !mcabanks_test(i, per_cpu(mce_banks_owned, cpu)) )
             continue;
         rdmsrl(msr, val);
         if ( val & (CMCI_EN|CMCI_THRESHOLD_MASK) )
             wrmsrl(msr, val & ~(CMCI_EN|CMCI_THRESHOLD_MASK));
-        mcabanks_clear(i, __get_cpu_var(mce_banks_owned));
+        mcabanks_clear(i, per_cpu(mce_banks_owned, cpu));
     }
 }
 
@@ -826,7 +826,7 @@ static void intel_init_mce(void)
     intel_mce_post_reset();
 
     /* clear all banks */
-    for ( i = firstbank; i < nr_mce_banks; i++ )
+    for ( i = firstbank; i < this_cpu(nr_mce_banks); i++ )
     {
         /*
          * Some banks are shared across cores, use MCi_CTRL to judge whether
@@ -866,8 +866,9 @@ static void cpu_mcabank_free(unsigned in
 
 static int cpu_mcabank_alloc(unsigned int cpu)
 {
-    struct mca_banks *cmci = mcabanks_alloc();
-    struct mca_banks *owned = mcabanks_alloc();
+    unsigned int nr = per_cpu(nr_mce_banks, cpu);
+    struct mca_banks *cmci = mcabanks_alloc(nr);
+    struct mca_banks *owned = mcabanks_alloc(nr);
 
     if ( !cmci || !owned )
         goto out;
@@ -924,6 +925,13 @@ enum mcheck_type intel_mcheck_init(struc
         register_cpu_notifier(&cpu_nfb);
         mcheck_intel_therm_init();
     }
+    else
+    {
+        unsigned int cpu = smp_processor_id();
+
+        per_cpu(no_cmci_banks, cpu)->num = per_cpu(nr_mce_banks, cpu);
+        per_cpu(mce_banks_owned, cpu)->num = per_cpu(nr_mce_banks, cpu);
+    }
 
     intel_init_mca(c);
 
--- a/xen/arch/x86/cpu/mcheck/x86_mca.h
+++ b/xen/arch/x86/cpu/mcheck/x86_mca.h
@@ -125,7 +125,7 @@ static inline int mcabanks_test(int bit,
     return test_bit(bit, banks->bank_map);
 }
 
-struct mca_banks *mcabanks_alloc(void);
+struct mca_banks *mcabanks_alloc(unsigned int nr);
 void mcabanks_free(struct mca_banks *banks);
 extern struct mca_banks *mca_allbanks;
 
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2374,7 +2374,7 @@ static int svm_is_erratum_383(struct cpu
         return 0;
     
     /* Clear MCi_STATUS registers */
-    for (i = 0; i < nr_mce_banks; i++)
+    for (i = 0; i < this_cpu(nr_mce_banks); i++)
         wrmsrl(MSR_IA32_MCx_STATUS(i), 0ULL);
     
     rdmsrl(MSR_IA32_MCG_STATUS, msr_content);
--- a/xen/include/asm-x86/mce.h
+++ b/xen/include/asm-x86/mce.h
@@ -40,6 +40,6 @@ extern int vmce_rdmsr(uint32_t msr, uint
 extern bool vmce_has_lmce(const struct vcpu *v);
 extern int vmce_enable_mca_cap(struct domain *d, uint64_t cap);
 
-extern unsigned int nr_mce_banks;
+DECLARE_PER_CPU(unsigned int, nr_mce_banks);
 
 #endif





* [Xen-devel] [PATCH 2/4] x86/mcheck: replace remaining uses of __get_cpu_var()
  2019-06-14 15:33 [Xen-devel] [PATCH 0/4] x86 MCE adjustments for AMD / general per-CPU accessor cleanup Jan Beulich
  2019-06-14 15:35 ` [Xen-devel] [PATCH 1/4] x86/mcheck: allow varying bank counts per CPU Jan Beulich
@ 2019-06-14 15:37 ` Jan Beulich
  2019-06-21 17:46   ` Andrew Cooper
  2019-06-14 15:37 ` [Xen-devel] [PATCH 3/4] x86: " Jan Beulich
  2019-06-14 15:38 ` [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr() Jan Beulich
  3 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2019-06-14 15:37 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monne

this_cpu() is shorter, and when there are multiple uses in a function,
per_cpu() with the CPU number fetched just once is also more efficient.
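
As a sketch of the pattern being converted to (not a hunk from this
patch; surrounding declarations are omitted), compare:

    /* Before: each access re-derives this CPU's per-CPU base. */
    mcabanks_set(i, __get_cpu_var(mce_banks_owned));
    mcabanks_clear(i, __get_cpu_var(no_cmci_banks));

    /* After: the CPU number is fetched once and then reused. */
    unsigned int cpu = smp_processor_id();

    mcabanks_set(i, per_cpu(mce_banks_owned, cpu));
    mcabanks_clear(i, per_cpu(no_cmci_banks, cpu));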

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -473,7 +473,8 @@ void mcheck_cmn_handler(const struct cpu
     static atomic_t found_error = ATOMIC_INIT(0);
     static cpumask_t mce_fatal_cpus;
     struct mca_banks *bankmask = mca_allbanks;
-    struct mca_banks *clear_bank = __get_cpu_var(mce_clear_banks);
+    unsigned int cpu = smp_processor_id();
+    struct mca_banks *clear_bank = per_cpu(mce_clear_banks, cpu);
     uint64_t gstatus;
     mctelem_cookie_t mctc = NULL;
     struct mca_summary bs;
@@ -504,17 +505,17 @@ void mcheck_cmn_handler(const struct cpu
              * the telemetry after reboot (the MSRs are sticky)
              */
             if ( bs.pcc || !bs.recoverable )
-                cpumask_set_cpu(smp_processor_id(), &mce_fatal_cpus);
+                cpumask_set_cpu(cpu, &mce_fatal_cpus);
         }
         else if ( mctc != NULL )
             mctelem_commit(mctc);
         atomic_set(&found_error, 1);
 
         /* The last CPU will take care of check/clean-up etc */
-        atomic_set(&severity_cpu, smp_processor_id());
+        atomic_set(&severity_cpu, cpu);
 
-        mce_printk(MCE_CRITICAL, "MCE: clear_bank map %lx on CPU%d\n",
-                   *((unsigned long *)clear_bank), smp_processor_id());
+        mce_printk(MCE_CRITICAL, "MCE: clear_bank map %lx on CPU%u\n",
+                   *((unsigned long *)clear_bank), cpu);
         if ( clear_bank != NULL )
             mcheck_mca_clearbanks(clear_bank);
     }
@@ -524,14 +525,14 @@ void mcheck_cmn_handler(const struct cpu
 
     mce_barrier_enter(&mce_trap_bar, bcast);
     if ( mctc != NULL && mce_urgent_action(regs, mctc) )
-        cpumask_set_cpu(smp_processor_id(), &mce_fatal_cpus);
+        cpumask_set_cpu(cpu, &mce_fatal_cpus);
     mce_barrier_exit(&mce_trap_bar, bcast);
 
     /*
      * Wait until everybody has processed the trap.
      */
     mce_barrier_enter(&mce_trap_bar, bcast);
-    if ( lmce || atomic_read(&severity_cpu) == smp_processor_id() )
+    if ( lmce || atomic_read(&severity_cpu) == cpu )
     {
         /*
          * According to SDM, if no error bank found on any cpus,
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -492,6 +492,7 @@ static int do_cmci_discover(int i)
     unsigned msr = MSR_IA32_MCx_CTL2(i);
     u64 val;
     unsigned int threshold, max_threshold;
+    unsigned int cpu = smp_processor_id();
     static unsigned int cmci_threshold = 2;
     integer_param("cmci-threshold", cmci_threshold);
 
@@ -499,7 +500,7 @@ static int do_cmci_discover(int i)
     /* Some other CPU already owns this bank. */
     if ( val & CMCI_EN )
     {
-        mcabanks_clear(i, __get_cpu_var(mce_banks_owned));
+        mcabanks_clear(i, per_cpu(mce_banks_owned, cpu));
         goto out;
     }
 
@@ -512,7 +513,7 @@ static int do_cmci_discover(int i)
     if ( !(val & CMCI_EN) )
     {
         /* This bank does not support CMCI. Polling timer has to handle it. */
-        mcabanks_set(i, __get_cpu_var(no_cmci_banks));
+        mcabanks_set(i, per_cpu(no_cmci_banks, cpu));
         wrmsrl(msr, val & ~CMCI_THRESHOLD_MASK);
         return 0;
     }
@@ -522,13 +523,13 @@ static int do_cmci_discover(int i)
     {
         mce_printk(MCE_QUIET,
                    "CMCI: threshold %#x too large for CPU%u bank %u, using %#x\n",
-                   threshold, smp_processor_id(), i, max_threshold);
+                   threshold, cpu, i, max_threshold);
         threshold = max_threshold;
     }
     wrmsrl(msr, (val & ~CMCI_THRESHOLD_MASK) | CMCI_EN | threshold);
-    mcabanks_set(i, __get_cpu_var(mce_banks_owned));
+    mcabanks_set(i, per_cpu(mce_banks_owned, cpu));
 out:
-    mcabanks_clear(i, __get_cpu_var(no_cmci_banks));
+    mcabanks_clear(i, per_cpu(no_cmci_banks, cpu));
     return 1;
 }
 
@@ -648,7 +649,7 @@ static void cmci_interrupt(struct cpu_us
     ack_APIC_irq();
 
     mctc = mcheck_mca_logout(
-        MCA_CMCI_HANDLER, __get_cpu_var(mce_banks_owned), &bs, NULL);
+        MCA_CMCI_HANDLER, this_cpu(mce_banks_owned), &bs, NULL);
 
     if ( bs.errcnt && mctc != NULL )
     {
--- a/xen/arch/x86/cpu/mcheck/non-fatal.c
+++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
@@ -38,7 +38,8 @@ static void mce_checkregs (void *info)
 	struct mca_summary bs;
 	static uint64_t dumpcount = 0;
 
-	mctc = mcheck_mca_logout(MCA_POLLER, __get_cpu_var(poll_bankmask), &bs, NULL);
+	mctc = mcheck_mca_logout(MCA_POLLER, this_cpu(poll_bankmask),
+				 &bs, NULL);
 
 	if (bs.errcnt && mctc != NULL) {
 		adjust++;
@@ -93,7 +94,7 @@ static int __init init_nonfatal_mce_chec
 	if (!opt_mce || !mce_available(c))
 		return -ENODEV;
 
-	if (__get_cpu_var(poll_bankmask) == NULL)
+	if (!this_cpu(poll_bankmask))
 		return -EINVAL;
 
 	/*





* [Xen-devel] [PATCH 3/4] x86: replace remaining uses of __get_cpu_var()
  2019-06-14 15:33 [Xen-devel] [PATCH 0/4] x86 MCE adjustments for AMD / general per-CPU accessor cleanup Jan Beulich
  2019-06-14 15:35 ` [Xen-devel] [PATCH 1/4] x86/mcheck: allow varying bank counts per CPU Jan Beulich
  2019-06-14 15:37 ` [Xen-devel] [PATCH 2/4] x86/mcheck: replace remaining uses of __get_cpu_var() Jan Beulich
@ 2019-06-14 15:37 ` Jan Beulich
  2019-06-21 17:47   ` Andrew Cooper
  2019-06-14 15:38 ` [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr() Jan Beulich
  3 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2019-06-14 15:37 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monne

this_cpu() is shorter, and when there are multiple uses in a function,
per_cpu() with the CPU number fetched just once is also more efficient.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/cpuidle_menu.c
+++ b/xen/arch/x86/acpi/cpuidle_menu.c
@@ -146,7 +146,7 @@ static inline int which_bucket(unsigned
 
 static inline s_time_t avg_intr_interval_us(void)
 {
-    struct menu_device *data = &__get_cpu_var(menu_devices);
+    struct menu_device *data = &this_cpu(menu_devices);
     s_time_t    duration, now;
     s_time_t    avg_interval;
     unsigned int irq_sum;
@@ -187,7 +187,7 @@ static unsigned int get_sleep_length_us(
 
 static int menu_select(struct acpi_processor_power *power)
 {
-    struct menu_device *data = &__get_cpu_var(menu_devices);
+    struct menu_device *data = &this_cpu(menu_devices);
     int i;
     s_time_t    io_interval;
 
@@ -239,7 +239,7 @@ static int menu_select(struct acpi_proce
 
 static void menu_reflect(struct acpi_processor_power *power)
 {
-    struct menu_device *data = &__get_cpu_var(menu_devices);
+    struct menu_device *data = &this_cpu(menu_devices);
     u64 new_factor;
 
     data->measured_us = power->last_residency;
@@ -294,7 +294,8 @@ static struct cpuidle_governor menu_gove
 struct cpuidle_governor *cpuidle_current_governor = &menu_governor;
 void menu_get_trace_data(u32 *expected, u32 *pred)
 {
-    struct menu_device *data = &__get_cpu_var(menu_devices);
+    const struct menu_device *data = &this_cpu(menu_devices);
+
     *expected = data->expected_us;
     *pred = data->predicted_us;
 }
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -651,7 +651,7 @@ void irq_move_cleanup_interrupt(struct c
         unsigned int irq;
         unsigned int irr;
         struct irq_desc *desc;
-        irq = __get_cpu_var(vector_irq)[vector];
+        irq = per_cpu(vector_irq, me)[vector];
 
         if ((int)irq < 0)
             continue;
@@ -690,7 +690,7 @@ void irq_move_cleanup_interrupt(struct c
         TRACE_3D(TRC_HW_IRQ_MOVE_CLEANUP,
                  irq, vector, smp_processor_id());
 
-        __get_cpu_var(vector_irq)[vector] = ~irq;
+        per_cpu(vector_irq, me)[vector] = ~irq;
         desc->arch.move_cleanup_count--;
 
         if ( desc->arch.move_cleanup_count == 0 )
@@ -822,7 +822,7 @@ void do_IRQ(struct cpu_user_regs *regs)
     uint32_t          tsc_in;
     struct irq_desc  *desc;
     unsigned int      vector = (u8)regs->entry_vector;
-    int irq = __get_cpu_var(vector_irq[vector]);
+    int               irq = this_cpu(vector_irq)[vector];
     struct cpu_user_regs *old_regs = set_irq_regs(regs);
     
     perfc_incr(irqs);
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -68,12 +68,12 @@ DECLARE_PER_CPU(struct cpu_user_regs *,
 
 static inline struct cpu_user_regs *get_irq_regs(void)
 {
-	return __get_cpu_var(__irq_regs);
+	return this_cpu(__irq_regs);
 }
 
 static inline struct cpu_user_regs *set_irq_regs(struct cpu_user_regs *new_regs)
 {
-	struct cpu_user_regs *old_regs, **pp_regs = &__get_cpu_var(__irq_regs);
+	struct cpu_user_regs *old_regs, **pp_regs = &this_cpu(__irq_regs);
 
 	old_regs = *pp_regs;
 	*pp_regs = new_regs;






* [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr()
  2019-06-14 15:33 [Xen-devel] [PATCH 0/4] x86 MCE adjustments for AMD / general per-CPU accessor cleanup Jan Beulich
                   ` (2 preceding siblings ...)
  2019-06-14 15:37 ` [Xen-devel] [PATCH 3/4] x86: " Jan Beulich
@ 2019-06-14 15:38 ` Jan Beulich
  2019-06-17 15:27   ` Julien Grall
                     ` (2 more replies)
  3 siblings, 3 replies; 11+ messages in thread
From: Jan Beulich @ 2019-06-14 15:38 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Daniel de Graaf, Roger Pau Monne

this_cpu{,_ptr}() are shorter, and have previously been marked as
preferred in Xen anyway.
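
The substitution is purely mechanical; taking the rcupdate.c hunk below
as the example, a call site changes like this:

    rdp = &__get_cpu_var(rcu_data);   /* old spelling, dropped here */
    rdp = &this_cpu(rcu_data);        /* preferred spelling, same expansion */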

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -225,7 +225,7 @@ void call_rcu(struct rcu_head *head,
     head->func = func;
     head->next = NULL;
     local_irq_save(flags);
-    rdp = &__get_cpu_var(rcu_data);
+    rdp = &this_cpu(rcu_data);
     *rdp->nxttail = head;
     rdp->nxttail = &head->next;
     if (unlikely(++rdp->qlen > qhimark)) {
@@ -409,7 +409,7 @@ static void __rcu_process_callbacks(stru
 
 static void rcu_process_callbacks(void)
 {
-    __rcu_process_callbacks(&rcu_ctrlblk, &__get_cpu_var(rcu_data));
+    __rcu_process_callbacks(&rcu_ctrlblk, &this_cpu(rcu_data));
 }
 
 static int __rcu_pending(struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
--- a/xen/include/asm-arm/percpu.h
+++ b/xen/include/asm-arm/percpu.h
@@ -17,12 +17,12 @@ void percpu_init_areas(void);
 
 #define per_cpu(var, cpu)  \
     (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
-#define __get_cpu_var(var) \
+#define this_cpu(var) \
     (*RELOC_HIDE(&per_cpu__##var, READ_SYSREG(TPIDR_EL2)))
 
 #define per_cpu_ptr(var, cpu)  \
     (*RELOC_HIDE(var, __per_cpu_offset[cpu]))
-#define __get_cpu_ptr(var) \
+#define this_cpu_ptr(var) \
     (*RELOC_HIDE(var, READ_SYSREG(TPIDR_EL2)))
 
 #define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
--- a/xen/include/asm-x86/percpu.h
+++ b/xen/include/asm-x86/percpu.h
@@ -15,12 +15,12 @@ void percpu_init_areas(void);
 /* var is in discarded region: offset to particular copy we want */
 #define per_cpu(var, cpu)  \
     (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
-#define __get_cpu_var(var) \
+#define this_cpu(var) \
     (*RELOC_HIDE(&per_cpu__##var, get_cpu_info()->per_cpu_offset))
 
 #define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
 
-#define __get_cpu_ptr(var) \
+#define this_cpu_ptr(var) \
     (*RELOC_HIDE(var, get_cpu_info()->per_cpu_offset))
 
 #define per_cpu_ptr(var, cpu)  \
--- a/xen/include/xen/percpu.h
+++ b/xen/include/xen/percpu.h
@@ -13,11 +13,6 @@
 #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \
 	__DEFINE_PER_CPU(type, _##name, .read_mostly)
 
-/* Preferred on Xen. Also see arch-defined per_cpu(). */
-#define this_cpu(var)    __get_cpu_var(var)
-
-#define this_cpu_ptr(ptr)    __get_cpu_ptr(ptr)
-
 #define get_per_cpu_var(var)  (per_cpu__##var)
 
 /* Linux compatibility. */
--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -57,9 +57,9 @@ const struct selinux_class_perm selinux_
 #define AVC_CACHE_RECLAIM        16
 
 #ifdef CONFIG_XSM_FLASK_AVC_STATS
-#define avc_cache_stats_incr(field)                 \
-do {                                \
-    __get_cpu_var(avc_cache_stats).field++;        \
+#define avc_cache_stats_incr(field)    \
+do {                                   \
+    this_cpu(avc_cache_stats).field++; \
 } while (0)
 #else
 #define avc_cache_stats_incr(field)    do {} while (0)






* Re: [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr()
  2019-06-14 15:38 ` [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr() Jan Beulich
@ 2019-06-17 15:27   ` Julien Grall
  2019-06-18 18:00   ` Daniel De Graaf
  2019-06-21 17:49   ` Andrew Cooper
  2 siblings, 0 replies; 11+ messages in thread
From: Julien Grall @ 2019-06-17 15:27 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Daniel de Graaf, Roger Pau Monne

Hi Jan,

On 14/06/2019 16:38, Jan Beulich wrote:
> this_cpu{,_ptr}() are shorter, and have previously been marked as
> preferred in Xen anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <julien.grall@arm.com>

> [...]

-- 
Julien Grall


* Re: [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr()
  2019-06-14 15:38 ` [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr() Jan Beulich
  2019-06-17 15:27   ` Julien Grall
@ 2019-06-18 18:00   ` Daniel De Graaf
  2019-06-21 17:49   ` Andrew Cooper
  2 siblings, 0 replies; 11+ messages in thread
From: Daniel De Graaf @ 2019-06-18 18:00 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Roger Pau Monne

On 6/14/19 11:38 AM, Jan Beulich wrote:
> this_cpu{,_ptr}() are shorter, and have previously been marked as
> preferred in Xen anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>


* Re: [Xen-devel] [PATCH 1/4] x86/mcheck: allow varying bank counts per CPU
  2019-06-14 15:35 ` [Xen-devel] [PATCH 1/4] x86/mcheck: allow varying bank counts per CPU Jan Beulich
@ 2019-06-21 17:45   ` Andrew Cooper
  0 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2019-06-21 17:45 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Wei Liu, Roger Pau Monne

On 14/06/2019 16:35, Jan Beulich wrote:
> Up to now we've been assuming that all CPUs would have the same number
> of reporting banks. However, on upcoming AMD CPUs this isn't the case,
> and one can observe
>
> (XEN) mce.c:666: Different bank number on cpu <N>
>
> indicating that Machine Check support would not be enabled on the
> affected CPUs. Convert the count variable to a per-CPU one, and adjust
> code where needed to cope with the values not being the same. In
> particular the mcabanks_alloc() invocations during AP bringup need to
> now allocate maximum-size bitmaps, because the truly needed size can't
> be known until we actually execute on that CPU, yet mcheck_init() gets
> called too early to do any allocations itself.
>
> Take the liberty and also
> - make mca_cap_init() static,
> - replace several __get_cpu_var() uses when a local variable suitable
>   for use with per_cpu() appears,
> - correct which CPU's cpu_data[] entry x86_mc_msrinject_verify() uses,
> - replace a BUG() by panic().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [Xen-devel] [PATCH 2/4] x86/mcheck: replace remaining uses of __get_cpu_var()
  2019-06-14 15:37 ` [Xen-devel] [PATCH 2/4] x86/mcheck: replace remaining uses of __get_cpu_var() Jan Beulich
@ 2019-06-21 17:46   ` Andrew Cooper
  0 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2019-06-21 17:46 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Wei Liu, Roger Pau Monne

On 14/06/2019 16:37, Jan Beulich wrote:
> this_cpu() is shorter, and when there are multiple uses in a function,
> per_cpu() with the CPU number fetched just once is also more efficient.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [Xen-devel] [PATCH 3/4] x86: replace remaining uses of __get_cpu_var()
  2019-06-14 15:37 ` [Xen-devel] [PATCH 3/4] x86: " Jan Beulich
@ 2019-06-21 17:47   ` Andrew Cooper
  0 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2019-06-21 17:47 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Wei Liu, Roger Pau Monne

On 14/06/2019 16:37, Jan Beulich wrote:
> this_cpu() is shorter, and when there are multiple uses in a function,
> per_cpu() with the CPU number fetched just once is also more efficient.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr()
  2019-06-14 15:38 ` [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr() Jan Beulich
  2019-06-17 15:27   ` Julien Grall
  2019-06-18 18:00   ` Daniel De Graaf
@ 2019-06-21 17:49   ` Andrew Cooper
  2 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2019-06-21 17:49 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Tim Deegan, Ian Jackson, Julien Grall,
	Daniel de Graaf, Roger Pau Monne

On 14/06/2019 16:38, Jan Beulich wrote:
> this_cpu{,_ptr}() are shorter, and have previously been marked as
> preferred in Xen anyway.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


Thread overview: 11+ messages
2019-06-14 15:33 [Xen-devel] [PATCH 0/4] x86 MCE adjustments for AMD / general per-CPU accessor cleanup Jan Beulich
2019-06-14 15:35 ` [Xen-devel] [PATCH 1/4] x86/mcheck: allow varying bank counts per CPU Jan Beulich
2019-06-21 17:45   ` Andrew Cooper
2019-06-14 15:37 ` [Xen-devel] [PATCH 2/4] x86/mcheck: replace remaining uses of __get_cpu_var() Jan Beulich
2019-06-21 17:46   ` Andrew Cooper
2019-06-14 15:37 ` [Xen-devel] [PATCH 3/4] x86: " Jan Beulich
2019-06-21 17:47   ` Andrew Cooper
2019-06-14 15:38 ` [Xen-devel] [PATCH 4/4] drop __get_cpu_var() and __get_cpu_ptr() Jan Beulich
2019-06-17 15:27   ` Julien Grall
2019-06-18 18:00   ` Daniel De Graaf
2019-06-21 17:49   ` Andrew Cooper
