* [PATCH v1.1] evtchn: add early-out to evtchn_move_pirqs()
From: Jan Beulich @ 2022-04-26 10:33 UTC
  To: xen-devel
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, Dario Faggioli

See the code comment. The higher the rate of vCPU-s migrating across
pCPU-s, the less useful this attempted optimization actually is. With
credit2 the migration rate looks to be unduly high even on mostly idle
systems, and hence on large systems lock contention here isn't very
difficult to observe (as was the case for a failed 4.12 osstest flight).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1559,6 +1559,16 @@ void evtchn_move_pirqs(struct vcpu *v)
     unsigned int port;
     struct evtchn *chn;
 
+    /*
+     * The work done below is an attempt to keep pIRQ-s on the pCPU-s that the
+     * vCPU-s they're to be delivered to run on. In order to limit lock
+     * contention, check for an empty list prior to acquiring the lock. In the
+     * worst case a pIRQ just bound to this vCPU will be delivered elsewhere
+     * until the vCPU is migrated (again) to another pCPU.
+     */
+    if ( !v->pirq_evtchn_head )
+        return;
+
     spin_lock(&d->event_lock);
     for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
     {
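
For illustration, the check-before-lock pattern the hunk above adds can be
modelled as a stand-alone user-space program; the toy_* names and the pthread
mutex below are invented stand-ins for Xen's struct vcpu / struct domain and
d->event_lock, not the real definitions:

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-ins for Xen's struct domain / struct vcpu. */
struct toy_port {
    unsigned int port;
    struct toy_port *next;
};

struct toy_domain {
    pthread_mutex_t event_lock;          /* models d->event_lock */
};

struct toy_vcpu {
    struct toy_domain *domain;
    struct toy_port *pirq_head;          /* models v->pirq_evtchn_head */
};

static void toy_move_pirqs(struct toy_vcpu *v)
{
    /*
     * Fast path: most vCPU migrations find no pIRQ bound to the vCPU, so
     * the per-domain lock isn't taken at all. Worst case, a pIRQ bound
     * concurrently is simply re-targeted on a later migration.
     */
    if ( !v->pirq_head )
        return;

    pthread_mutex_lock(&v->domain->event_lock);
    for ( struct toy_port *p = v->pirq_head; p; p = p->next )
        printf("would re-target pIRQ bound to port %u\n", p->port);
    pthread_mutex_unlock(&v->domain->event_lock);
}

int main(void)
{
    struct toy_domain d = { .event_lock = PTHREAD_MUTEX_INITIALIZER };
    struct toy_port p = { .port = 7, .next = NULL };
    struct toy_vcpu idle = { .domain = &d, .pirq_head = NULL }; /* fast path  */
    struct toy_vcpu busy = { .domain = &d, .pirq_head = &p };   /* takes lock */

    toy_move_pirqs(&idle);
    toy_move_pirqs(&busy);
    return 0;
}

The only point of the toy example is that the empty-list check happens before,
not after, the lock is acquired.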




* Re: [PATCH v1.1] evtchn: add early-out to evtchn_move_pirqs()
From: Julien Grall @ 2022-04-26 10:44 UTC
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, George Dunlap, Stefano Stabellini, Wei Liu,
	Dario Faggioli

Hi Jan,

On 26/04/2022 11:33, Jan Beulich wrote:
> See the code comment. The higher the rate of vCPU-s migrating across
> pCPU-s, the less useful this attempted optimization actually is. With
> credit2 the migration rate looks to be unduly high even on mostly idle
> systems, and hence on large systems lock contention here isn't very
> difficult to observe (as was the case for a failed 4.12 osstest flight).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1559,6 +1559,16 @@ void evtchn_move_pirqs(struct vcpu *v)
>       unsigned int port;
>       struct evtchn *chn;
>   
> +    /*
> +     * The work done below is an attempt to keep pIRQ-s on the pCPU-s that the
> +     * vCPU-s they're to be delivered to run on. In order to limit lock
> +     * contention, check for an empty list prior to acquiring the lock. In the
> +     * worst case a pIRQ just bound to this vCPU will be delivered elsewhere
> +     * until the vCPU is migrated (again) to another pCPU.
> +     */
> +    if ( !v->pirq_evtchn_head )
> +        return;

I was hoping Andrew would give some insight (which is why I haven't 
replied to your previous answer).

I am still not really convinced by this optimization. Aside from what I 
wrote about the IRQ raised on the "wrong" pCPU, the lock contention 
would still be present if an OS decides to spread the PIRQs across all 
the vCPUs.

So it seems to me that switching to an rwlock would help to address the 
contention in all of these cases.
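
As a stand-alone sketch of that idea (the toy_* names and the pthread rwlock
below are illustrative stand-ins, not the actual Xen change): readers walking
their per-vCPU list would take the lock shared, while binding or unbinding a
pIRQ would still take it exclusively.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

struct port_node {
    unsigned int port;
    struct port_node *next;
};

struct toy_domain {
    pthread_rwlock_t event_lock;    /* models an rwlock-converted d->event_lock */
};

struct toy_vcpu {
    struct toy_domain *domain;
    struct port_node *pirq_head;
};

/* Reader side: many vCPUs can walk their pIRQ lists concurrently. */
static void toy_move_pirqs_read(struct toy_vcpu *v)
{
    pthread_rwlock_rdlock(&v->domain->event_lock);
    for ( struct port_node *p = v->pirq_head; p; p = p->next )
        printf("re-target port %u\n", p->port);
    pthread_rwlock_unlock(&v->domain->event_lock);
}

/* Writer side: binding a new pIRQ still excludes all readers. */
static void toy_bind_pirq(struct toy_vcpu *v, struct port_node *n)
{
    pthread_rwlock_wrlock(&v->domain->event_lock);
    n->next = v->pirq_head;
    v->pirq_head = n;
    pthread_rwlock_unlock(&v->domain->event_lock);
}

int main(void)
{
    struct toy_domain d = { .event_lock = PTHREAD_RWLOCK_INITIALIZER };
    struct toy_vcpu v = { .domain = &d, .pirq_head = NULL };
    struct port_node n = { .port = 3, .next = NULL };

    toy_bind_pirq(&v, &n);
    toy_move_pirqs_read(&v);
    return 0;
}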

> +
>       spin_lock(&d->event_lock);
>       for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>       {
> 

Cheers,

-- 
Julien Grall



* Re: [PATCH v1.1] evtchn: add early-out to evtchn_move_pirqs()
From: Jan Beulich @ 2022-07-05 16:10 UTC
  To: Julien Grall
  Cc: Andrew Cooper, George Dunlap, Stefano Stabellini, Wei Liu,
	Dario Faggioli, xen-devel

On 26.04.2022 12:44, Julien Grall wrote:
> On 26/04/2022 11:33, Jan Beulich wrote:
>> See the code comment. The higher the rate of vCPU-s migrating across
>> pCPU-s, the less useful this attempted optimization actually is. With
>> credit2 the migration rate looks to be unduly high even on mostly idle
>> systems, and hence on large systems lock contention here isn't very
>> difficult to observe (as was the case for a failed 4.12 osstest flight).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1559,6 +1559,16 @@ void evtchn_move_pirqs(struct vcpu *v)
>>       unsigned int port;
>>       struct evtchn *chn;
>>   
>> +    /*
>> +     * The work done below is an attempt to keep pIRQ-s on the pCPU-s that the
>> +     * vCPU-s they're to be delivered to run on. In order to limit lock
>> +     * contention, check for an empty list prior to acquiring the lock. In the
>> +     * worst case a pIRQ just bound to this vCPU will be delivered elsewhere
>> +     * until the vCPU is migrated (again) to another pCPU.
>> +     */
>> +    if ( !v->pirq_evtchn_head )
>> +        return;
> 
> I was hoping Andrew would give some insight (which is why I haven't 
> replied to your previous answer).
> 
> I am still not really convinced by this optimization. Aside from what I 
> wrote about the IRQ raised on the "wrong" pCPU, the lock contention 
> would still be present if an OS decides to spread the PIRQs across all 
> the vCPUs.
> 
> So it seems to me that switching to an rwlock would help to address the 
> contention in all of these cases.

But that patch of mine didn't get any ack either, and hence at some
point I shelved it. The same is going to happen to this patch, and
sooner or later we'll re-observe the issue that osstest had hit (at
least) once.

Jan



* Re: [PATCH v1.1] evtchn: add early-out to evtchn_move_pirqs()
From: Julien Grall @ 2022-07-05 21:43 UTC
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Stefano Stabellini, Wei Liu,
	Dario Faggioli, xen-devel

Hi Jan,

On 05/07/2022 17:10, Jan Beulich wrote:
> On 26.04.2022 12:44, Julien Grall wrote:
>> On 26/04/2022 11:33, Jan Beulich wrote:
>>> See the code comment. The higher the rate of vCPU-s migrating across
>>> pCPU-s, the less useful this attempted optimization actually is. With
>>> credit2 the migration rate looks to be unduly high even on mostly idle
>>> systems, and hence on large systems lock contention here isn't very
>>> difficult to observe (as was the case for a failed 4.12 osstest flight).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
>>>
>>> --- a/xen/common/event_channel.c
>>> +++ b/xen/common/event_channel.c
>>> @@ -1559,6 +1559,16 @@ void evtchn_move_pirqs(struct vcpu *v)
>>>        unsigned int port;
>>>        struct evtchn *chn;
>>>    
>>> +    /*
>>> +     * The work done below is an attempt to keep pIRQ-s on the pCPU-s that the
>>> +     * vCPU-s they're to be delivered to run on. In order to limit lock
>>> +     * contention, check for an empty list prior to acquiring the lock. In the
>>> +     * worst case a pIRQ just bound to this vCPU will be delivered elsewhere
>>> +     * until the vCPU is migrated (again) to another pCPU.
>>> +     */
>>> +    if ( !v->pirq_evtchn_head )
>>> +        return;
>>
>> I was hoping Andrew would give some insight (which is why I haven't
>> replied to your previous answer).
>>
>> I am still not really convinced by this optimization. Aside from what I
>> wrote about the IRQ raised on the "wrong" pCPU, the lock contention
>> would still be present if an OS decides to spread the PIRQs across all
>> the vCPUs.
>>
>> So it seems to me that switching to an rwlock would help to address the
>> contention in all of these cases.
> 
> But that patch of mine didn't get any ack either, and hence at some
> point I shelved it.

Looking at v5, your latest answer suggested you were going to drop that 
patch. So I didn't bother to review/ack the latest version.

Now, in the context of this discussion, I think the rwlock patch is 
better than this approach. I will review the other patch.

Cheers,

-- 
Julien Grall



