From: Jan Beulich <jbeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>, "Wei Liu" <wl@xen.org>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Ian Jackson" <iwj@xenproject.org>,
	"Julien Grall" <julien@xen.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
Date: Tue, 20 Oct 2020 11:28:13 +0200
Message-ID: <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
In-Reply-To: <20201016105839.14796-3-jgross@suse.com>

On 16.10.2020 12:58, Juergen Gross wrote:
> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>      if ( port_is_valid(guest, port) )
>      {
>          struct evtchn *chn = evtchn_from_port(guest, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&chn->lock, flags);
> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> -        spin_unlock_irqrestore(&chn->lock, flags);
> +        if ( evtchn_read_trylock(chn) )
> +        {
> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> +            evtchn_read_unlock(chn);
> +        }

Does this want some form of else, e.g. at least a printk_once()?
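
Just as a sketch of the shape I have in mind (the log level and
message text are purely illustrative, not a concrete request):

    if ( evtchn_read_trylock(chn) )
    {
        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
        evtchn_read_unlock(chn);
    }
    else
        printk_once(XENLOG_G_WARNING
                    "%pd: event for port %u dropped\n", guest, port);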

> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>      if ( rc )
>          goto out;
>  
> -    flags = double_evtchn_lock(lchn, rchn);
> +    double_evtchn_lock(lchn, rchn);

This introduces an unfortunate conflict with my conversion of
the per-domain event lock to an rw one: that conversion acquires
rd's lock in read mode only, while the requirements introduced
here would not allow doing so. (The same then applies in
evtchn_close().)

> @@ -736,7 +723,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
>  
>      lchn = evtchn_from_port(ld, lport);
>  
> -    spin_lock_irqsave(&lchn->lock, flags);
> +    if ( !evtchn_read_trylock(lchn) )
> +        return 0;

With this, the auxiliary call to xsm_evtchn_send() further up
from here should also go away again (possibly in a separate
follow-on, which would then likely be a clean revert).

> @@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
>  
>      d = v->domain;
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, v->vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_read_trylock(chn) )
> +    {
> +        evtchn_port_set_pending(d, v->vcpu_id, chn);
> +        evtchn_read_unlock(chn);
> +    }
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);
> @@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
>          goto out;
>  
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_read_trylock(chn) )
> +    {
> +        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
> +        evtchn_read_unlock(chn);
> +    }
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);

As said before, I think these lock uses can go away altogether.
I shall put together a patch.

And on the whole I'd really prefer it if we first convinced
ourselves that there's no way to simply get rid of the IRQ-safe
locking forms instead, before deciding to go with this model and
its extra constraints.

> @@ -1060,15 +1053,16 @@ int evtchn_unmask(unsigned int port)
>  {
>      struct domain *d = current->domain;
>      struct evtchn *evtchn;
> -    unsigned long flags;
>  
>      if ( unlikely(!port_is_valid(d, port)) )
>          return -EINVAL;
>  
>      evtchn = evtchn_from_port(d, port);
> -    spin_lock_irqsave(&evtchn->lock, flags);
> -    evtchn_port_unmask(d, evtchn);
> -    spin_unlock_irqrestore(&evtchn->lock, flags);
> +    if ( evtchn_read_trylock(evtchn) )
> +    {
> +        evtchn_port_unmask(d, evtchn);
> +        evtchn_read_unlock(evtchn);
> +    }

I think this wants mentioning together with send / query in the
description.

> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -105,6 +105,60 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>  #define bucket_from_port(d, p) \
>      ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>  
> +#define EVENT_WRITE_LOCK_INC    INT_MIN
> +
> +/*
> + * Lock an event channel exclusively. This is allowed only with holding
> + * d->event_lock AND when the channel is free or unbound either when taking
> + * or when releasing the lock, as any concurrent operation on the event
> + * channel using evtchn_read_trylock() will just assume the event channel is
> + * free or unbound at the moment.

I.e. I think this wants to end "... free or unbound at the moment
when the evtchn_read_trylock() returns false."
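
The comment might then read in full (a sketch; only the final
clause is my suggested addition, plus smoothing "with holding" to
"when holding"):

    /*
     * Lock an event channel exclusively. This is allowed only when
     * holding d->event_lock AND when the channel is free or unbound
     * either when taking or when releasing the lock, as any concurrent
     * operation on the event channel using evtchn_read_trylock() will
     * just assume the event channel is free or unbound at the moment
     * when evtchn_read_trylock() returns false.
     */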

> + */
> +static inline void evtchn_write_lock(struct evtchn *evtchn)
> +{
> +    int val;
> +
> +    /*
> +     * The lock can't be held by a writer already, as all writers need to
> +     * hold d->event_lock.
> +     */
> +    ASSERT(atomic_read(&evtchn->lock) >= 0);
> +
> +    /* No barrier needed, atomic_add_return() is full barrier. */
> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +          val != EVENT_WRITE_LOCK_INC;

The _INC suffix is slightly odd for this second use, but I guess
the dual use will make it so for just about any name you may
pick.

> +          val = atomic_read(&evtchn->lock) )
> +        cpu_relax();
> +}
> +
> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
> +{
> +    arch_lock_release_barrier();
> +
> +    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +}
> +
> +static inline bool evtchn_read_trylock(struct evtchn *evtchn)
> +{
> +    if ( atomic_read(&evtchn->lock) < 0 )
> +        return false;
> +
> +    /* No barrier needed, atomic_inc_return() is full barrier. */
> +    if ( atomic_inc_return(&evtchn->lock) >= 0 )

atomic_*_return() return the new value, so I think you mean ">"
here?
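
I.e. with the new-value semantics a successful reader increment
yields a strictly positive result, so something like (a sketch;
the failure path is my assumption, as it isn't visible in the
hunk):

    if ( atomic_inc_return(&evtchn->lock) > 0 )
        return true;

    /* Assumed failure path: undo the increment, as a writer got in. */
    atomic_dec(&evtchn->lock);
    return false;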

Jan

