From: Jan Beulich <jbeulich@suse.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH v5 0/6] evtchn: (not so) recent XSAs follow-on
Date: Mon, 17 May 2021 09:15:53 +0200
Message-ID: <19a976e1-a4cb-16a9-cd53-1b7198859ddd@suse.com>
In-Reply-To: <YJ6XVmadaDbP3aUx@Air-de-Roger>

On 14.05.2021 17:29, Roger Pau Monné wrote:
> On Thu, Apr 22, 2021 at 10:53:05AM +0200, Jan Beulich wrote:
>> On 21.04.2021 17:56, Julien Grall wrote:
>>>
>>>
>>> On 21/04/2021 16:23, Jan Beulich wrote:
>>>> On 27.01.2021 09:13, Jan Beulich wrote:
>>>>> These are grouped into a series largely because of their origin,
>>>>> not so much because there are (heavy) dependencies among them.
>>>>> The main change from v4 is the dropping of the two patches trying
>>>>> to do away with the double event lock acquires in interdomain
>>>>> channel handling. See also the individual patches.
>>>>>
>>>>> 1: use per-channel lock where possible
>>>>> 2: convert domain event lock to an r/w one
>>>>> 3: slightly defer lock acquire where possible
>>>>> 4: add helper for port_is_valid() + evtchn_from_port()
>>>>> 5: type adjustments
>>>>> 6: drop acquiring of per-channel lock from send_guest_{global,vcpu}_virq()
>>>>
>>>> Only patch 4 here has got an ack so far - may I ask for clear feedback
>>>> as to whether at least some of these are acceptable? (I can see the last
>>>> one being controversial, and if it were the only one left I probably
>>>> wouldn't even ping, despite thinking that it helps reduce unnecessary
>>>> overhead.)
>>>
>>> I left feedback for the series on the previous version (see [1]). It
>>> would have been nice if that had been mentioned somewhere, as this is
>>> still unresolved.
>>
>> I will admit I forgot about the controversy on patch 1. I did, however,
>> reply to your concerns. What never materialized is the feedback from
>> others that you asked for.
>>
>> And of course there are 4 more patches here (one of them having an ack,
>> yes) which could do with feedback. I'm certainly willing, where possible,
>> to further re-order the series such that controversial changes are at its
>> end.
> 
> I think it would be easier to figure out whether the changes are correct
> if we had some kind of documentation about what the per-domain event_lock
> and the per-event locks protect and how they are supposed to be used. I
> can't seem to find any comments describing their intended usage.

I think especially in pass-through code there are a number of cases
where the per-domain lock really is being abused, simply because it is
available, without much further thought. I'm not convinced documenting
such abuse is going to help the situation. Yet of course I can see
that having documentation would make review easier ...
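
For the sake of discussion, here is a deliberately simplified sketch of
the nesting as I understand it (illustrative only - the function below
is made up, and the per-channel lock is shown as a plain spinlock for
brevity):

    /* Hypothetical example, not actual tree code. The per-domain
     * event_lock nests outside the per-channel lock; the latter alone
     * suffices when only a single channel's state is touched. */
    static void example_channel_update(struct domain *d, evtchn_port_t port)
    {
        struct evtchn *chn;

        spin_lock(&d->event_lock);   /* outer: port allocation/teardown */
        chn = evtchn_from_port(d, port);
        spin_lock(&chn->lock);       /* inner: this channel's state */
        /* ... update channel state ... */
        spin_unlock(&chn->lock);
        spin_unlock(&d->event_lock);
    }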

> Regarding the changes themselves in patch 1 (which I think has caused part
> of the controversy here), I'm unsure they are worth it, because the
> functions modified all seem to be non-performance-critical:
> evtchn_status, domain_dump_evtchn_info, flask_get_peer_sid.
> 
> So I would say that unless we have clear rules written down for what
> the per-domain event_lock protects, I would be hesitant to change any
> of the logic, especially for critical paths.

Okay, I'll drop patch 1 and also patch 6 for being overly controversial.
Some of the other patches still look worthwhile to me, though. I'll
also consider moving the spin->r/w lock conversion to the end of the
series.
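
To make the direction of that conversion concrete, a rough sketch (not
the actual patch - which paths end up as readers vs. writers below is
illustrative, not a claim about the final classification):

    /* struct domain: */
    -    spinlock_t event_lock;
    +    rwlock_t   event_lock;

    /* pure readers, e.g. status queries: */
    -    spin_lock(&d->event_lock);
    +    read_lock(&d->event_lock);

    /* paths that allocate or free ports keep exclusive access: */
    -    spin_lock(&d->event_lock);
    +    write_lock(&d->event_lock);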

Jan


Thread overview: 15+ messages
2021-01-27  8:13 [PATCH v5 0/6] evtchn: (not so) recent XSAs follow-on Jan Beulich
2021-01-27  8:15 ` [PATCH v5 1/6] evtchn: use per-channel lock where possible Jan Beulich
2021-01-27  8:16 ` [PATCH v5 2/6] evtchn: convert domain event lock to an r/w one Jan Beulich
2021-05-27 11:01   ` Roger Pau Monné
2021-05-27 11:16     ` Jan Beulich
2022-07-07 18:00   ` Julien Grall
2021-01-27  8:16 ` [PATCH v5 3/6] evtchn: slightly defer lock acquire where possible Jan Beulich
2021-01-27  8:16 ` [PATCH v5 4/6] evtchn: add helper for port_is_valid() + evtchn_from_port() Jan Beulich
2021-01-27  8:17 ` [PATCH v5 5/6] evtchn: type adjustments Jan Beulich
2021-01-27  8:17 ` [PATCH v5 6/6] evtchn: drop acquiring of per-channel lock from send_guest_{global,vcpu}_virq() Jan Beulich
2021-04-21 15:23 ` Ping: [PATCH v5 0/6] evtchn: (not so) recent XSAs follow-on Jan Beulich
2021-04-21 15:56   ` Julien Grall
2021-04-22  8:53     ` Jan Beulich
2021-05-14 15:29       ` Roger Pau Monné
2021-05-17  7:15         ` Jan Beulich [this message]
