From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Tamas K Lengyel <lengyelt@ainfosec.com>,
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
	Alexandru Isaila <aisaila@bitdefender.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with per-channel lock held
Date: Wed, 2 Dec 2020 21:10:35 +0000
Message-ID: <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
In-Reply-To: <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>

Hi Jan,

On 23/11/2020 13:30, Jan Beulich wrote:
> While there don't look to be any problems with this right now, the lock
> order implications from holding the lock can be very difficult to follow
> (and may be easy to violate unknowingly). The present callbacks don't
> (and no such callback should) have any need for the lock to be held.
> 
> However, vm_event_disable() frees the structures used by respective
> callbacks and isn't otherwise synchronized with invocations of these
> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> to wait to drop to zero before freeing the port (and dropping the lock).

AFAICT, this callback is not the only place where synchronization is 
missing in the VM event code.

For instance, vm_event_put_request() can also race against 
vm_event_disable().

So shouldn't we handle this issue properly in VM event?
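
For clarity, the interleaving I have in mind is roughly the following 
(simplified from my reading of the code; exact call sites elided):

    CPU A: vm_event_put_request()       CPU B: vm_event_disable()
      vm_event_check_ring(ved) -> true
                                          destroy_ring_for_helper(...)
                                          xfree(ved)
      spin_lock(&ved->ring_lock)          /* use-after-free */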

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Should we make this accounting optional, to be requested through a new
> parameter to alloc_unbound_xen_event_channel(), or derived from other
> than the default callback being requested?

Aside from the VM event use, do you see any value for the other callers?
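
For reference, I read the suggestion as something along the lines of the 
hypothetical extra parameter below (name purely illustrative):

    int alloc_unbound_xen_event_channel(
        struct domain *ld, unsigned int lvcpu, domid_t remote_domid,
        bool track_calls, /* hypothetical: opt in to active-calls accounting */
        xen_event_channel_notification_t notification_fn);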

> ---
> v3: Drain callbacks before proceeding with closing. Re-base.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -397,6 +397,7 @@ static long evtchn_bind_interdomain(evtc
>       
>       rchn->u.interdomain.remote_dom  = ld;
>       rchn->u.interdomain.remote_port = lport;
> +    atomic_set(&rchn->u.interdomain.active_calls, 0);
>       rchn->state                     = ECS_INTERDOMAIN;
>   
>       /*
> @@ -720,6 +721,10 @@ int evtchn_close(struct domain *d1, int
>   
>           double_evtchn_lock(chn1, chn2);
>   
> +        if ( consumer_is_xen(chn1) )
> +            while ( atomic_read(&chn1->u.interdomain.active_calls) )
> +                cpu_relax();
> +
>           evtchn_free(d1, chn1);
>   
>           chn2->state = ECS_UNBOUND;
> @@ -781,9 +786,15 @@ int evtchn_send(struct domain *ld, unsig
>           rport = lchn->u.interdomain.remote_port;
>           rchn  = evtchn_from_port(rd, rport);
>           if ( consumer_is_xen(rchn) )
> +        {
> +            /* Don't keep holding the lock for the call below. */
> +            atomic_inc(&rchn->u.interdomain.active_calls);
> +            evtchn_read_unlock(lchn);
>               xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
> -        else
> -            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);

atomic_dec() doesn't contain any memory barrier, so we will want one 
between xen_notification_fn() and atomic_dec() to avoid re-ordering. 
Otherwise, evtchn_close() may observe the count dropping to zero while 
accesses made by the callback are still outstanding.
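
Something along these lines (smp_mb() used for illustration; a decrement 
with release semantics would equally do, if we grow one):

    xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
    smp_mb(); /* order the callback's accesses before the decrement */
    atomic_dec(&rchn->u.interdomain.active_calls);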

> +            atomic_dec(&rchn->u.interdomain.active_calls);
> +            return 0;
> +        }
> +        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>           break;
>       case ECS_IPI:
>           evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -104,6 +104,7 @@ struct evtchn
>           } unbound;     /* state == ECS_UNBOUND */
>           struct {
>               evtchn_port_t  remote_port;
> +            atomic_t       active_calls;
>               struct domain *remote_dom;
>           } interdomain; /* state == ECS_INTERDOMAIN */
>           struct {
> 

Cheers,

-- 
Julien Grall

