From: Jan Beulich <jbeulich@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH v4 2/2] x86: add accessors for scratch cpu mask
Date: Fri, 28 Feb 2020 11:16:55 +0100
Message-ID: <12c75d73-cc89-9b8c-011a-b6e11f6cf58d@suse.com>
In-Reply-To: <20200228093334.36586-3-roger.pau@citrix.com>

On 28.02.2020 10:33, Roger Pau Monne wrote:
> Current usage of the per-CPU scratch cpumask is dangerous since
> there's no way to figure out if the mask is already being used except
> for manual code inspection of all the callers and possible call paths.
> 
> This is unsafe and not reliable, so introduce a minimal get/put
> infrastructure to prevent nested usage of the scratch mask and usage
> in interrupt context.
> 
> Move the definition of scratch_cpumask to smp.c in order to place the
> declaration and the accessors as close as possible.

You've changed one instance of "declaration", but not the other.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -196,7 +196,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
>  {
>      unsigned int cpu, old_vector, irq = desc->irq;
>      unsigned int vector = desc->arch.vector;
> -    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
> +    cpumask_t *tmp_mask = get_scratch_cpumask();
>  
>      BUG_ON(!valid_irq_vector(vector));
>  
> @@ -208,6 +208,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
>          ASSERT(per_cpu(vector_irq, cpu)[vector] == irq);
>          per_cpu(vector_irq, cpu)[vector] = ~irq;
>      }
> +    put_scratch_cpumask(tmp_mask);
>  
>      desc->arch.vector = IRQ_VECTOR_UNASSIGNED;
>      cpumask_clear(desc->arch.cpu_mask);
> @@ -227,8 +228,9 @@ static void _clear_irq_vector(struct irq_desc *desc)
>  
>      /* If we were in motion, also clear desc->arch.old_vector */
>      old_vector = desc->arch.old_vector;
> -    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
>  
> +    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
> +    tmp_mask = get_scratch_cpumask();

Did you test this? It looks overwhelmingly likely that the two
lines need to be the other way around.
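I.e., presumably something like the below (my guess at the intended
order, not tested), so that the mask is acquired before anything is
written into it:

    tmp_mask = get_scratch_cpumask();
    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);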

> @@ -4384,6 +4389,17 @@ static int __do_update_va_mapping(
>          break;
>      }
>  
> +    switch ( flags & ~UVMF_FLUSHTYPE_MASK )
> +    {
> +    case UVMF_LOCAL:
> +    case UVMF_ALL:
> +        break;
> +
> +    default:
> +        put_scratch_cpumask(mask);
> +    }
> +
> +
>      return rc;

No two successive blank lines please.

> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -25,6 +25,31 @@
>  #include <irq_vectors.h>
>  #include <mach_apic.h>
>  
> +DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
> +
> +#ifndef NDEBUG
> +cpumask_t *scratch_cpumask(bool use)
> +{
> +    static DEFINE_PER_CPU(void *, scratch_cpumask_use);

I'd make this "const void *", btw.

> +    /*
> +     * Due to reentrancy scratch cpumask cannot be used in IRQ, #MC or #NMI
> +     * context.
> +     */
> +    BUG_ON(in_irq() || in_mce_handler() || in_nmi_handler());
> +
> +    if ( use && unlikely(this_cpu(scratch_cpumask_use)) )
> +    {
> +        printk("scratch CPU mask already in use by %ps (%p)\n",
> +               this_cpu(scratch_cpumask_use), this_cpu(scratch_cpumask_use));

Why the raw %p as well? We don't do so elsewhere, I think. Yes,
it's debugging code only, but I wonder anyway.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
