From: Jan Beulich <jbeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>, "Wei Liu" <wl@xen.org>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Ian Jackson" <iwj@xenproject.org>,
	"Julien Grall" <julien@xen.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
Date: Tue, 13 Oct 2020 17:28:15 +0200
Message-ID: <75c5328c-c061-7ddf-a34d-0cd8b93043fc@suse.com>
In-Reply-To: <20201012092740.1617-3-jgross@suse.com>

On 12.10.2020 11:27, Juergen Gross wrote:
> @@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
>  
>      d = v->domain;
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, v->vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_tryread_lock(chn) )
> +    {
> +        evtchn_port_set_pending(d, v->vcpu_id, chn);
> +        evtchn_read_unlock(chn);
> +    }
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);
> @@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
>          goto out;
>  
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_tryread_lock(chn) )
> +    {
> +        evtchn_port_set_pending(d, v->vcpu_id, chn);

Is this simply a copy-and-paste mistake (re-using the code from
send_guest_vcpu_virq()), or is there a reason you changed where the
vCPU to send to is obtained from? If the latter, this ought to be
mentioned in the description, and you could then use literal zero.
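
For reference, a sketch of what the hunk presumably meant to be,
assuming the pre-patch behavior (notifying chn->notify_vcpu_id, as in
the removed lines) was intended, and with the tail of the hunk
inferred from the parallel one in send_guest_vcpu_virq():

    if ( evtchn_tryread_lock(chn) )
    {
        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
        evtchn_read_unlock(chn);
    }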

> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>  #define bucket_from_port(d, p) \
>      ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>  
> +#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS

Isn't the ceiling on simultaneous readers the number of pCPUs,
meaning the value here would need to be NR_CPUS + 1 to accommodate
the maximum number of readers? Furthermore, since you drop the
disabling of interrupts, one pCPU can now acquire a read lock more
than once, when interrupting a locked region.
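
To illustrate the latter point with a hypothetical interleaving (not
taken from the patch):

    /* All on one pCPU, with interrupts enabled: */
    evtchn_tryread_lock(chn);       /* reader count: 1 */
    /* IRQ arrives; its handler raises an event on the same channel: */
        evtchn_tryread_lock(chn);   /* same pCPU, reader count: 2 */

Hence a single pCPU's contribution to the reader count can exceed one.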

> +static inline void evtchn_write_lock(struct evtchn *evtchn)
> +{
> +    int val;
> +
> +    /* No barrier needed, atomic_add_return() is full barrier. */
> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +          val != EVENT_WRITE_LOCK_INC;
> +          val = atomic_read(&evtchn->lock) )
> +        cpu_relax();
> +}
> +
> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
> +{
> +    arch_lock_release_barrier();
> +
> +    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +}
> +
> +static inline bool evtchn_tryread_lock(struct evtchn *evtchn)

The corresponding "generic" function is read_trylock() - I'd suggest
using the same base name, with the evtchn_ prefix.
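
Since the quote above elides the function body, here is merely a
plausible sketch of such an evtchn_read_trylock(), mirroring the
write-side counting (an assumption on my part, not the submitted
code):

    static inline bool evtchn_read_trylock(struct evtchn *evtchn)
    {
        /* Fail fast if a writer holds or is waiting for the lock. */
        if ( atomic_read(&evtchn->lock) >= EVENT_WRITE_LOCK_INC )
            return false;

        /* No barrier needed, atomic_add_return() is a full barrier. */
        if ( atomic_add_return(1, &evtchn->lock) < EVENT_WRITE_LOCK_INC )
            return true;

        /* A writer raced in; back out the reader increment. */
        atomic_sub(1, &evtchn->lock);
        return false;
    }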

> @@ -274,12 +312,12 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
>      if ( port_is_valid(d, port) )
>      {
>          struct evtchn *evtchn = evtchn_from_port(d, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&evtchn->lock, flags);
> -        if ( evtchn_usable(evtchn) )
> +        if ( evtchn_tryread_lock(evtchn) && evtchn_usable(evtchn) )
> +        {
>              rc = evtchn_is_pending(d, evtchn);
> -        spin_unlock_irqrestore(&evtchn->lock, flags);
> +            evtchn_read_unlock(evtchn);
> +        }
>      }

This needs to be two nested if()s, as you need to drop the lock
even when evtchn_usable() returns false.
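
That is, something along the lines of (a sketch derived from the
hunk above):

    if ( evtchn_tryread_lock(evtchn) )
    {
        if ( evtchn_usable(evtchn) )
            rc = evtchn_is_pending(d, evtchn);
        evtchn_read_unlock(evtchn);
    }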

Jan

