xen-devel.lists.xenproject.org archive mirror
From: Paul Durrant <xadimgnik@gmail.com>
To: "'Jürgen Groß'" <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>
Subject: RE: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon, 12 Oct 2020 11:06:58 +0100	[thread overview]
Message-ID: <001201d6a07f$6c3d0f40$44b72dc0$@xen.org> (raw)
In-Reply-To: <4fec0346-6048-723c-f5c6-50c3f68f508a@suse.com>

> -----Original Message-----
> From: Jürgen Groß <jgross@suse.com>
> Sent: 12 October 2020 10:56
> To: paul@xen.org; xen-devel@lists.xenproject.org
> Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Ian
> Jackson' <iwj@xenproject.org>; 'Jan Beulich' <jbeulich@suse.com>; 'Julien Grall' <julien@xen.org>;
> 'Stefano Stabellini' <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>
> Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
> 
> On 12.10.20 11:48, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> >> Sent: 12 October 2020 10:28
> >> To: xen-devel@lists.xenproject.org
> >> Cc: Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> >> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Julien
> >> Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> >> Subject: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
> >>
> >> The queue for a fifo event depends on the vcpu_id and the priority
> >> of the event. When sending an event it might happen that the event
> >> needs to change queues, and the old queue needs to be kept in order
> >> to keep the links between queue elements intact. For this purpose
> >> the event channel contains last_priority and last_vcpu_id values
> >> which identify the old queue.
> >>
> >> In order to avoid races, always access last_priority and last_vcpu_id
> >> together with a single atomic operation, avoiding any inconsistencies.
> >>
> >> Signed-off-by: Juergen Gross <jgross@suse.com>
> >> ---
> >>   xen/common/event_fifo.c | 25 +++++++++++++++++++------
> >>   xen/include/xen/sched.h |  3 +--
> >>   2 files changed, 20 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
> >> index fc189152e1..fffbd409c8 100644
> >> --- a/xen/common/event_fifo.c
> >> +++ b/xen/common/event_fifo.c
> >> @@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
> >>       unsigned int num_evtchns;
> >>   };
> >>
> >> +union evtchn_fifo_lastq {
> >> +    u32 raw;
> >> +    struct {
> >> +        u8 last_priority;
> >> +        u16 last_vcpu_id;
> >> +    };
> >> +};
> >
> > I guess you want to s/u32/uint32_t, etc. above.
> 
> Hmm, yes, probably.
> 
> >
> >> +
> >>   static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
> >>                                                          unsigned int port)
> >>   {
> >> @@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
> >>       struct vcpu *v;
> >>       struct evtchn_fifo_queue *q, *old_q;
> >>       unsigned int try;
> >> +    union evtchn_fifo_lastq lastq;
> >>
> >>       for ( try = 0; try < 3; try++ )
> >>       {
> >> -        v = d->vcpu[evtchn->last_vcpu_id];
> >> -        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
> >> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> >> +        v = d->vcpu[lastq.last_vcpu_id];
> >> +        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
> >>
> >>           spin_lock_irqsave(&old_q->lock, *flags);
> >>
> >> -        v = d->vcpu[evtchn->last_vcpu_id];
> >> -        q = &v->evtchn_fifo->queue[evtchn->last_priority];
> >> +        v = d->vcpu[lastq.last_vcpu_id];
> >> +        q = &v->evtchn_fifo->queue[lastq.last_priority];
> >>
> >>           if ( old_q == q )
> >>               return old_q;
> >> @@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
> >>           /* Moved to a different queue? */
> >>           if ( old_q != q )
> >>           {
> >> -            evtchn->last_vcpu_id = v->vcpu_id;
> >> -            evtchn->last_priority = q->priority;
> >> +            union evtchn_fifo_lastq lastq;
> >> +
> >> +            lastq.last_vcpu_id = v->vcpu_id;
> >> +            lastq.last_priority = q->priority;
> >> +            write_atomic(&evtchn->fifo_lastq, lastq.raw);
> >>
> >
> > You're going to leak some stack here I think. Perhaps add a 'pad' field between 'last_priority' and
> > 'last_vcpu_id' and zero it?
> 
> I can do that, but why? This is not something a guest is supposed to
> see at any time.

True, but it would also be nice if the value of 'raw' were at least predictable. I guess just adding '= {}' to the declaration would actually be easiest.
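
For illustration, a rough (untested) sketch of the two alternatives being discussed, keeping the field names from the patch and using the fixed-width types mentioned earlier; 'write_atomic', 'v', 'q' and 'evtchn' are as in the existing code:

    /* Alternative 1: explicit padding, so all 32 bits of 'raw' are
     * accounted for (the pad byte still has to be zeroed).
     */
    union evtchn_fifo_lastq {
        uint32_t raw;
        struct {
            uint8_t  last_priority;
            uint8_t  pad;            /* zero explicitly before writing raw */
            uint16_t last_vcpu_id;
        };
    };

    /* Alternative 2: zero-initialise the local variable before filling
     * it in, e.g. in evtchn_fifo_set_pending():
     */
    union evtchn_fifo_lastq lastq = {};

    lastq.last_vcpu_id = v->vcpu_id;
    lastq.last_priority = q->priority;
    write_atomic(&evtchn->fifo_lastq, lastq.raw);

Either way the value written to evtchn->fifo_lastq no longer depends on uninitialised stack bytes.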

  Paul

> 
> 
> Juergen




Thread overview: 26+ messages
2020-10-12  9:27 [PATCH v2 0/2] XSA-343 followup patches Juergen Gross
2020-10-12  9:27 ` [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together Juergen Gross
2020-10-12  9:48   ` Paul Durrant
2020-10-12  9:56     ` Jürgen Groß
2020-10-12 10:06       ` Paul Durrant [this message]
2020-10-13 13:58   ` Jan Beulich
2020-10-13 14:20     ` Jürgen Groß
2020-10-13 14:26       ` Jan Beulich
2020-10-14 11:40         ` Julien Grall
2020-10-15 12:07           ` Jan Beulich
2020-10-16  5:46             ` Jürgen Groß
2020-10-16  9:36             ` Julien Grall
2020-10-16 12:09               ` Jan Beulich
2020-10-20  9:25                 ` Julien Grall
2020-10-20  9:34                   ` Jan Beulich
2020-10-20 10:01                     ` Julien Grall
2020-10-20 10:06                       ` Jan Beulich
2020-10-12  9:27 ` [PATCH v2 2/2] xen/evtchn: rework per event channel lock Juergen Gross
2020-10-13 14:02   ` Jan Beulich
2020-10-13 14:13     ` Jürgen Groß
2020-10-13 15:30       ` Jan Beulich
2020-10-13 15:28   ` Jan Beulich
2020-10-14  6:00     ` Jürgen Groß
2020-10-14  6:52       ` Jan Beulich
2020-10-14  7:27         ` Jürgen Groß
2020-10-16  9:51   ` Julien Grall
