xen-devel.lists.xenproject.org archive mirror
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "Jürgen Groß" <jgross@suse.com>,
	xen-devel@lists.xenproject.org,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Ian Jackson" <iwj@xenproject.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Wei Liu" <wl@xen.org>
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Fri, 16 Oct 2020 10:36:18 +0100	[thread overview]
Message-ID: <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org> (raw)
In-Reply-To: <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>



On 15/10/2020 13:07, Jan Beulich wrote:
> On 14.10.2020 13:40, Julien Grall wrote:
>> Hi Jan,
>>
>> On 13/10/2020 15:26, Jan Beulich wrote:
>>> On 13.10.2020 16:20, Jürgen Groß wrote:
>>>> On 13.10.20 15:58, Jan Beulich wrote:
>>>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>>>> The queue for a fifo event depends on the vcpu_id and the
>>>>>> priority of the event. When sending an event it can happen that
>>>>>> the event needs to change queues, and the old queue needs to be
>>>>>> kept so that the links between its queue elements stay intact.
>>>>>> For this purpose the event channel contains last_priority and
>>>>>> last_vcpu_id values which allow the old queue to be identified.
>>>>>>
>>>>>> In order to avoid races, always access last_priority and
>>>>>> last_vcpu_id together with a single atomic operation, avoiding any
>>>>>> inconsistencies.
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>
>>>>> I seem to vaguely recall that at the time this seemingly racy
>>>>> access was done on purpose by David. Did you go look at the
>>>>> old commits to understand whether there really is a race which
>>>>> can't be tolerated within the spec?
>>>>
>>>> At least the comments in the code tell us that the race regarding
>>>> the writing of priority (not last_priority) is acceptable.
>>>
>>> Ah, then it was comments. I knew I read something to this effect
>>> somewhere, recently.
>>>
>>>> Julien in particular was rather worried by the current situation. If
>>>> you can convince him that the current handling is fine, we can easily
>>>> drop this patch.
>>>
>>> Julien, in the light of the above - can you clarify the specific
>>> concerns you (still) have?
>>
>> Let me start with the assumption that evtchn->lock is not held when
>> evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.
> 
> But this isn't interesting - we know there are paths where it is
> held, and ones (interdomain sending) where it's the remote port's
> lock instead which is held. What's important here is that a
> _consistent_ lock be held (but it doesn't need to be evtchn's).

Yes, a _consistent_ lock *should* be sufficient. But it is better to
use the same lock everywhere so it is easier to reason about (see more
below).

> 
>>   From my understanding, the goal of lock_old_queue() is to return the
>> old queue used.
>>
>> last_priority and last_vcpu_id may be updated separately and I could not
>> convince myself that it would not be possible to return a queue that is
>> neither the current one nor the old one.
>>
>> The following could happen if evtchn->priority and
>> evtchn->notify_vcpu_id keep changing between calls.
>>
>> pCPU0				| pCPU1
>> 				|
>> evtchn_fifo_set_pending(v0,...)	|
>> 				| evtchn_fifo_set_pending(v1, ...)
>>    [...]				|
>>    /* Queue has changed */	|
>>    evtchn->last_vcpu_id = v0 	|
>> 				| -> lock_old_queue()
>> 				| v = d->vcpu[evtchn->last_vcpu_id];
>>     				| old_q = ...
>> 				| spin_lock(old_q->...)
>> 				| v = ...
>> 				| q = ...
>> 				| /* q and old_q would be the same */
>> 				|
>>    evtchn->last_priority = priority|
>>
>> If my diagram is correct, then pCPU1 would return a queue that is
>> neither the current one nor the old one.
> 
> I think I agree.
> 
>> In which case, I think it would at least be possible to corrupt the
>> queue. From evtchn_fifo_set_pending():
>>
>>           /*
>>            * If this event was a tail, the old queue is now empty and
>>            * its tail must be invalidated to prevent adding an event to
>>            * the old queue from corrupting the new queue.
>>            */
>>           if ( old_q->tail == port )
>>               old_q->tail = 0;
>>
>> Did I miss anything?
> 
> I don't think you did. The important point though is that a consistent
> lock is being held whenever we come here, so two racing set_pending()
> aren't possible for one and the same evtchn. As a result I don't think
> the patch here is actually needed.

I haven't yet read the rest of the patches in full detail, so I can't
say whether this is necessary or not. However, at first glance, I don't
think it is sane to rely on different locks to protect us. And don't get
me started on the lack of documentation...

Furthermore, the implementation of lock_old_queue() suggests that the
code was intended to be lockless. Why would you need the retry loop
otherwise?
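
For reference, this is roughly the shape of the loop I am referring to.
This is a simplified sketch from memory rather than the exact code; the
structure layout and helper names are approximate:

static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
                                                struct evtchn *evtchn,
                                                unsigned long *flags)
{
    struct vcpu *v;
    struct evtchn_fifo_queue *q, *old_q;
    unsigned int attempt;

    for ( attempt = 0; attempt < 3; attempt++ )
    {
        /* Read the "old queue" coordinates and map them to a queue. */
        v = d->vcpu[evtchn->last_vcpu_id];
        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];

        spin_lock_irqsave(&old_q->lock, *flags);

        /* Re-read after taking the lock: did the coordinates change? */
        v = d->vcpu[evtchn->last_vcpu_id];
        q = &v->evtchn_fifo->queue[evtchn->last_priority];

        if ( old_q == q )
            return old_q;

        /* Raced with a concurrent update; drop the lock and retry. */
        spin_unlock_irqrestore(&old_q->lock, *flags);
    }

    return NULL;
}

The re-read-and-retry only makes sense if last_vcpu_id/last_priority can
change behind our back, i.e. if no common lock is assumed to be held by
the callers.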

Therefore, regardless of the rest of the discussion, I think this patch
would be useful to have for our peace of mind.
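
To illustrate what I mean (this is only a sketch of the idea, not the
exact layout or helper names used by the patch): if the two fields share
a single word that is always read and written as a whole, readers can
never observe a mismatched pair. Something along the lines of:

union evtchn_old_queue_state {
    struct {
        uint16_t last_vcpu_id;
        uint8_t  last_priority;
    };
    uint32_t word;
};

/* Writer side, when evtchn_fifo_set_pending() switches queue: */
static inline void note_old_queue(union evtchn_old_queue_state *state,
                                  unsigned int vcpu_id,
                                  unsigned int priority)
{
    union evtchn_old_queue_state new = {
        .last_vcpu_id  = vcpu_id,
        .last_priority = priority,
    };

    /* Single store: no torn last_vcpu_id/last_priority pair. */
    write_atomic(&state->word, new.word);
}

/* Reader side, in lock_old_queue(): */
static inline union evtchn_old_queue_state get_old_queue_state(
    union evtchn_old_queue_state *state)
{
    return (union evtchn_old_queue_state){ .word = read_atomic(&state->word) };
}

With that, lock_old_queue() can take one snapshot of the pair and use it
consistently for both the vCPU and the priority lookup.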

> 
> If I take this further, then I think I can reason why it wasn't
> necessary to add further locking to send_guest_{global,vcpu}_virq():
> The virq_lock is the "consistent lock" protecting ECS_VIRQ ports. The
> spin_barrier() while closing the port guards that side against the
> port changing to a different ECS_* behind the sending functions' backs.
> And binding such ports sets ->virq_to_evtchn[] last, with a suitable
> barrier (the unlock).

This makes sense.
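
For my own understanding, the pattern you describe would look roughly
like the following (again a simplified sketch from memory, not the
exact code):

/* Simplified sketch of the per-vCPU VIRQ send path (approximate). */
void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
{
    unsigned long flags;
    unsigned int port;

    /* The virq_lock acts as the "consistent lock" for ECS_VIRQ ports. */
    spin_lock_irqsave(&v->virq_lock, flags);

    /*
     * Binding publishes virq_to_evtchn[] last (ordered by the unlock),
     * and close spin_barrier()s on the virq_lock before tearing the
     * port down, so a non-zero port seen here stays ECS_VIRQ until we
     * drop the lock.
     */
    port = v->virq_to_evtchn[virq];
    if ( port != 0 )
        evtchn_port_set_pending(v->domain, v->vcpu_id,
                                evtchn_from_port(v->domain, port));

    spin_unlock_irqrestore(&v->virq_lock, flags);
}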

> 
> Which leaves send_guest_pirq() before we can drop the IRQ-safe locking
> again. I guess we would need to work towards using the underlying
> irq_desc's lock as consistent lock here, but this certainly isn't the
> case just yet, and I'm not really certain this can be achieved.

I can't comment on the PIRQ code, but I think this is a risky approach
(see more above).

Cheers,

-- 
Julien Grall

