From: Julien Grall <julien@xen.org>
To: "Jürgen Groß" <jgross@suse.com>, "Jan Beulich" <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>, "Wei Liu" <wl@xen.org>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Ian Jackson" <iwj@xenproject.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
Date: Wed, 4 Nov 2020 10:02:33 +0000
Message-ID: <cf028788-c70c-d946-72d2-583446494dc7@xen.org>
In-Reply-To: <9017a6fd-35ec-1d66-7a73-7901f64ea64e@suse.com>



On 04/11/2020 09:56, Jürgen Groß wrote:
> On 04.11.20 10:50, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 02/11/2020 15:26, Jürgen Groß wrote:
>>> On 02.11.20 16:18, Jan Beulich wrote:
>>>> On 02.11.2020 14:59, Jürgen Groß wrote:
>>>>> On 02.11.20 14:52, Jan Beulich wrote:
>>>>>> On 02.11.2020 14:41, Jürgen Groß wrote:
>>>>>>> On 20.10.20 11:28, Jan Beulich wrote:
>>>>>>>> On 16.10.2020 12:58, Juergen Gross wrote:
>>>>>>>>> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>>>>>>>         if ( rc )
>>>>>>>>>             goto out;
>>>>>>>>> -    flags = double_evtchn_lock(lchn, rchn);
>>>>>>>>> +    double_evtchn_lock(lchn, rchn);
>>>>>>>>
>>>>>>>> This introduces an unfortunate conflict with my conversion of
>>>>>>>> the per-domain event lock to an rw one: It acquires rd's lock
>>>>>>>> in read mode only, while the requirements here would not allow
>>>>>>>> doing so. (Same in evtchn_close() then.)
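For illustration, a minimal sketch of what a double_evtchn_lock() built on per-channel rwlocks taken for write might look like; the lock-ordering rule and the 'lock' field name are assumptions for this sketch, not necessarily the exact code in the series:

    /*
     * Sketch only: lock two event channels in a fixed order (lower
     * address first) to avoid ABBA deadlocks.  Assumes each struct
     * evtchn carries an rwlock_t 'lock'; no IRQ flags are saved or
     * returned, which is why the caller above no longer receives
     * 'flags'.
     */
    static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
    {
        struct evtchn *tmp;

        ASSERT(lchn != rchn);

        if ( lchn > rchn )
        {
            tmp = lchn;
            lchn = rchn;
            rchn = tmp;
        }

        write_lock(&lchn->lock);
        write_lock(&rchn->lock);
    }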
>>>>>>>
>>>>>>> Is it a problem to use write mode for those cases?
>>>>>>
>>>>>> "Problem" can have a wide range of meanings - it's not going to
>>>>>> be the end of the world, but I view any use of a write lock as
>>>>>> a problem when a read lock would suffice. This can still harm
>>>>>> parallelism.
>>>>>
>>>>> Both cases are very rare in the lifetime of an event channel. I
>>>>> don't think you'll ever be able to measure any performance impact from
>>>>> switching these cases to a write lock for any well-behaved guest.
>>>>
>>>> I agree as far as the lifetime of an individual port goes, but
>>>> we're talking about the per-domain lock here. (Perhaps my
>>>> choice of context in your patch wasn't the best one, as there
>>>> it is the per-channel lock of which two instances get acquired.
>>>> I'm sorry if this has led to any confusion.)
>>>
>>> Hmm, with the switch to an ordinary rwlock it should be fine to drop
>>> the requirement to hold the domain's event channel lock exclusively
>>> for taking the per-channel lock as a writer.
>>
>> I don't think you can drop d->event_lock. It protects us against 
>> allocating new ports while evtchn_reset() is called.
> 
> I wrote "exclusively" because, with a switch to an rwlock, it should be
> fine to hold it as a reader when the reset code takes it as a writer.
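
As a rough illustration of that reader/writer split (the function shapes and the helper below are assumptions, not the actual Xen code): port allocation could take d->event_lock as a reader, so several allocations may run in parallel, while evtchn_reset() takes it as a writer and thereby excludes them all:

    /* Sketch: the per-domain event_lock treated as an ordinary rwlock. */
    static int alloc_unbound_port(struct domain *d)
    {
        int port;

        read_lock(&d->event_lock);      /* many allocators may hold this */
        port = get_free_port(d);        /* hypothetical helper */
        read_unlock(&d->event_lock);

        return port;
    }

    /* Simplified shape of the reset path. */
    static void evtchn_reset(struct domain *d)
    {
        write_lock(&d->event_lock);     /* excludes all readers above */
        /* ... tear down and re-initialise the port space ... */
        write_unlock(&d->event_lock);
    }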

Oh I misread your comment. Sorry for the noise.

Cheers,

-- 
Julien Grall



Thread overview: 13+ messages
2020-10-16 10:58 [PATCH v3 0/2] XSA-343 followup patches Juergen Gross
2020-10-16 10:58 ` [PATCH v3 1/2] xen/events: access last_priority and last_vcpu_id together Juergen Gross
2020-11-04  9:42   ` Julien Grall
2020-10-16 10:58 ` [PATCH v3 2/2] xen/evtchn: rework per event channel lock Juergen Gross
2020-10-20  9:28   ` Jan Beulich
2020-11-02 13:41     ` Jürgen Groß
2020-11-02 13:52       ` Jan Beulich
2020-11-02 13:59         ` Jürgen Groß
2020-11-02 15:18           ` Jan Beulich
2020-11-02 15:26             ` Jürgen Groß
2020-11-04  9:50               ` Julien Grall
2020-11-04  9:56                 ` Jürgen Groß
2020-11-04 10:02                   ` Julien Grall [this message]
