From: Julien Grall <julien@xen.org>
To: "Jan Beulich" <jbeulich@suse.com>, "Jürgen Groß" <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
Andrew Cooper <andrew.cooper3@citrix.com>,
George Dunlap <george.dunlap@citrix.com>,
Ian Jackson <iwj@xenproject.org>,
Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Wed, 14 Oct 2020 12:40:57 +0100 [thread overview]
Message-ID: <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org> (raw)
In-Reply-To: <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
Hi Jan,
On 13/10/2020 15:26, Jan Beulich wrote:
> On 13.10.2020 16:20, Jürgen Groß wrote:
>> On 13.10.20 15:58, Jan Beulich wrote:
>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>> The queue for a fifo event depends on the vcpu_id and the
>>>> priority of the event. When sending an event it might happen that
>>>> the event needs to change queues, and the old queue needs to be
>>>> kept intact so the links between queue elements remain valid. For
>>>> this purpose the event channel contains last_priority and
>>>> last_vcpu_id values for identifying the old queue.
>>>>
>>>> In order to avoid races always access last_priority and last_vcpu_id
>>>> with a single atomic operation avoiding any inconsistencies.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> I seem to vaguely recall that at the time this seemingly racy
>>> access was done on purpose by David. Did you go look at the
>>> old commits to understand whether there really is a race which
>>> can't be tolerated within the spec?
>>
>> At least the comments in the code tell us that the race regarding
>> the writing of priority (not last_priority) is acceptable.
>
> Ah, then it was comments. I knew I read something to this effect
> somewhere, recently.
>
>> Especially Julien was rather worried by the current situation. In
>> case you can convince him the current handling is fine, we can
>> easily drop this patch.
>
> Julien, in the light of the above - can you clarify the specific
> concerns you (still) have?
Let me start with the assumption that evtchn->lock is not held when
evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.
From my understanding, the goal of lock_old_queue() is to return the
old queue used.
last_priority and last_vcpu_id may be updated separately and I could not
convince myself that it would not be possible to return a queue that is
neither the current one nor the old one.
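For reference, the patch under discussion sidesteps this by reading and
writing the pair as a single unit. A minimal standalone sketch of that idea
(hypothetical field layout and names, not the actual Xen struct, and plain
loads/stores standing in for read_atomic()/write_atomic()):

    #include <assert.h>
    #include <stdint.h>

    /*
     * Pack last_vcpu_id and last_priority into one 32-bit word so a
     * reader observes both with a single naturally-atomic access,
     * instead of two separate loads that can interleave with a writer.
     */
    union last_info {
        uint32_t raw;                /* accessed as one unit */
        struct {
            uint16_t last_vcpu_id;
            uint8_t  last_priority;
        };
    };

    int main(void)
    {
        union last_info cur = { .raw = 0 };

        /* Writer: fill in both fields, then publish with one store. */
        union last_info upd = { .raw = 0 };
        upd.last_vcpu_id  = 2;
        upd.last_priority = 7;
        cur.raw = upd.raw;           /* single 32-bit store */

        /* Reader: one 32-bit load yields a consistent pair. */
        union last_info snap = { .raw = cur.raw };
        assert(snap.last_vcpu_id == 2);
        assert(snap.last_priority == 7);
        return 0;
    }

With this layout there is no window in which a reader can see the new
vcpu_id paired with the old priority, which is exactly the window I am
worried about below.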
The following could happen if evtchn->priority and
evtchn->notify_vcpu_id keep changing between calls.
pCPU0                              | pCPU1
                                   |
evtchn_fifo_set_pending(v0,...)    |
                                   | evtchn_fifo_set_pending(v1, ...)
[...]                              |
/* Queue has changed */            |
evtchn->last_vcpu_id = v0          |
                                   | -> lock_old_queue()
                                   |      v = d->vcpu[evtchn->last_vcpu_id];
                                   |      old_q = ...
                                   |      spin_lock(old_q->...)
                                   |      v = ...
                                   |      q = ...
                                   |      /* q and old_q would be the same */
                                   |
evtchn->last_priority = priority   |
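The interleaving above can be replayed single-threaded. In this toy model
(hypothetical names, simplified types) the "reader" runs at exactly the
point where pCPU1 is shown interrupting pCPU0, i.e. between the two
separate stores:

    #include <assert.h>
    #include <stdint.h>

    /* Old queue identity: vcpu 0 at priority 4. */
    static uint16_t last_vcpu_id  = 0;
    static uint8_t  last_priority = 4;

    struct qid { uint16_t vcpu; uint8_t prio; };

    /* Two separate loads, as in the current lock_old_queue(). */
    static struct qid lock_old_queue_model(void)
    {
        struct qid q = { last_vcpu_id, last_priority };
        return q;
    }

    int main(void)
    {
        /* pCPU0 starts moving the event to (vcpu 1, prio 2): first store. */
        last_vcpu_id = 1;

        /* pCPU1 runs lock_old_queue() now, between the two stores. */
        struct qid got = lock_old_queue_model();

        /* pCPU0's second store lands afterwards. */
        last_priority = 2;

        /* pCPU1 saw (1, 4): neither the old (0, 4) nor the new (1, 2) queue. */
        assert(got.vcpu == 1);
        assert(got.prio == 4);
        return 0;
    }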
If my diagram is correct, then pCPU1 would return a queue that is
neither the current one nor the old one.
In that case, I think it would at least be possible to corrupt the
queue. From evtchn_fifo_set_pending():
    /*
     * If this event was a tail, the old queue is now empty and
     * its tail must be invalidated to prevent adding an event to
     * the old queue from corrupting the new queue.
     */
    if ( old_q->tail == port )
        old_q->tail = 0;
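To illustrate the corruption this invalidation prevents, here is a toy
linked-queue model (hypothetical, heavily simplified: ports carry a "next"
link, 0 means none, and I deliberately skip the tail invalidation when a
port changes queue):

    #include <assert.h>
    #include <stdint.h>

    #define NPORTS 8

    /* link_[p] models the "next" pointer in port p's event word. */
    static uint32_t link_[NPORTS];

    struct queue { uint32_t head, tail; };   /* 0 means "empty" */

    static void queue_add(struct queue *q, uint32_t port)
    {
        if ( q->tail )
            link_[q->tail] = port;           /* append behind the tail */
        else
            q->head = port;
        link_[port] = 0;
        q->tail = port;
    }

    int main(void)
    {
        struct queue q1 = { 0, 0 }, q2 = { 0, 0 };

        queue_add(&q1, 3);   /* port 3 is head and tail of q1 */

        /* Port 3 moves to q2, but q1's stale tail is NOT invalidated. */
        queue_add(&q2, 3);

        /* A later event for q1 is linked behind q1's stale tail (3)... */
        queue_add(&q1, 5);

        /* ...so port 5, which belongs to q1, now follows port 3 in q2. */
        assert(q2.head == 3);
        assert(link_[3] == 5);
        return 0;
    }

Walking q2 from its head would now reach port 5, an element of the other
queue, which is the kind of cross-queue corruption the quoted comment is
guarding against.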
Did I miss anything?
Cheers,
--
Julien Grall