From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
Date: Fri, 11 Dec 2020 11:32:47 +0100
Message-ID: <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
In-Reply-To: <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>

On 09.12.2020 12:54, Julien Grall wrote:
> On 23/11/2020 13:29, Jan Beulich wrote:
>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>       long           rc = 0;
>>   
>>    again:
>> -    spin_lock(&d1->event_lock);
>> +    write_lock(&d1->event_lock);
>>   
>>       if ( !port_is_valid(d1, port1) )
>>       {
>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>                   BUG();
>>   
>>               if ( d1 < d2 )
>> -            {
>> -                spin_lock(&d2->event_lock);
>> -            }
>> +                read_lock(&d2->event_lock);
> 
> This change made me realize that I don't quite understand how the
> rwlock is meant to work for event_lock. I was actually expecting this
> to be a write_lock(), given that state is changed in the d2 events.

Well, the protection needs to be against racing changes, i.e.
parallel invocations of this same function, or evtchn_close().
It is debatable whether evtchn_status() and
domain_dump_evtchn_info() should also be locked out
(the other read_lock() uses aren't applicable to interdomain
channels).
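
For illustration, here is a condensed sketch of what the quoted
evtchn_close() hunk boils down to (names as in the patch; the
d1 > d2 re-locking / retry path and all error handling are elided):

    /*
     * Sketch only: d1's channel state is being torn down, so d1's
     * lock is taken for writing.  The remote domain d2 merely needs
     * to be held off from a racing teardown of its end, so a read
     * lock suffices there.  Acquiring the two locks in a fixed
     * (pointer) order avoids ABBA deadlock when both ends of the
     * same interdomain channel are closed simultaneously.
     */
    write_lock(&d1->event_lock);
    /* ... validate port1, look up the remote end <d2, port2> ... */
    if ( d1 < d2 )
        read_lock(&d2->event_lock);
    /* ... else drop d1's lock, take both in order, and retry ... */

    /* ... unbind both ends of the channel ... */

    if ( d1 != d2 )
        read_unlock(&d2->event_lock);
    write_unlock(&d1->event_lock);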

> Could you outline how a developer can find out whether he/she should
> use read_lock() or write_lock()?

I could try to, but it would again be a port-type-dependent model,
just like for the per-channel locks. So I'd first like it clarified
whether you aren't instead indirectly asking for these to become
write_lock().

>> --- a/xen/common/rwlock.c
>> +++ b/xen/common/rwlock.c
>> @@ -102,6 +102,14 @@ void queue_write_lock_slowpath(rwlock_t
>>       spin_unlock(&lock->lock);
>>   }
>>   
>> +void _rw_barrier(rwlock_t *lock)
>> +{
>> +    check_barrier(&lock->lock.debug);
>> +    smp_mb();
>> +    while ( _rw_is_locked(lock) )
>> +        arch_lock_relax();
>> +    smp_mb();
>> +}
> 
> As I pointed out when this implementation was first proposed (see [1]), 
> there is a risk that the loop will never exit.

The [1] reference was missing, but I recall you saying so.

> I think the following implementation would be better (although it is ugly):
> 
> write_lock();
> /* do nothing */
> write_unlock();
> 
> This will act as a barrier between critical sections entered before
> and after the call.

Right, and back then I indicated agreement. When I actually got around
to carrying out the change, though, I realized that the less
restrictive check_barrier() can then no longer be used (or, to be
precise, it could be used, but the stronger check_lock() would
subsequently still come into play). This isn't a problem here, but it
would be for any IRQ-safe r/w lock that the barrier may want to be
used on down the road.

Thinking about it, though, a read_lock() / read_unlock() pair would
suffice. But that would still involve check_lock().

Given all of this, maybe it's better not to introduce the function
at all and instead open-code the read_lock() / read_unlock() pair at
the use site.
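
For illustration, such an open-coded pair at the use site might look
like this (a sketch only; d stands for whatever domain's event_lock
the barrier is needed on):

    /*
     * Sketch: an empty read-side critical section acts as the
     * barrier.  Taking the read lock waits for any writer already
     * holding the lock to drop it, so by the time the pair completes,
     * every writer that entered before this point has exited.  Unlike
     * the spin-until-unlocked loop in _rw_barrier() above, a queued
     * rwlock acquisition lines up behind current holders rather than
     * re-sampling the lock word, so it cannot be starved indefinitely.
     */
    read_lock(&d->event_lock);
    read_unlock(&d->event_lock);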

> As an aside, I think the introduction of rw_barrier() deserves to be
> in a separate patch, to help the review.

I'm aware there are differing views on this - to me, putting it in a
separate patch would amount to introducing dead code. But as per the
above, the function may now not get introduced at all anyway.

Jan

