* [PATCH] common/vm_event: Prevent guest locking with large max_vcpus
@ 2017-02-08  9:00 Razvan Cojocaru
From: Razvan Cojocaru @ 2017-02-08  9:00 UTC
  To: xen-devel; +Cc: tamas, Razvan Cojocaru

It is currently possible for the guest to lock up when subscribing
to synchronous vm_events if max_vcpus is larger than the number of
available ring buffer slots. This patch fixes the issue for that use
case by no longer blocking VCPUs that are already paused.

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
---
 xen/common/vm_event.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 82ce8f1..2005a64 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
      * See the comments above wake_blocked() for more information
      * on how this mechanism works to avoid waiting. */
     avail_req = vm_event_ring_available(ved);
-    if( current->domain == d && avail_req < d->max_vcpus )
+    if( current->domain == d && avail_req < d->max_vcpus &&
+        !atomic_read( &current->vm_event_pause_count ) )
         vm_event_mark_and_pause(current, ved);
 
     vm_event_ring_unlock(ved);
-- 
1.9.1
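
For context, here is a condensed sketch of vm_event_put_request() around
the changed hunk, with the ring-copy details elided (the helper names are
taken from the diff; the rest is an approximation of the surrounding
vm_event code of this era, not a verbatim excerpt):

    void vm_event_put_request(struct domain *d,
                              struct vm_event_domain *ved,
                              vm_event_request_t *req)
    {
        int avail_req;

        vm_event_ring_lock(ved);

        /* ... copy *req onto the ring and advance the producer index ... */

        /*
         * If the guest itself raised the event and free slots are scarcer
         * than vCPUs, pause the current vCPU so it cannot flood the ring.
         * The added check skips vCPUs that are already paused (e.g. for a
         * pending synchronous event): they cannot produce further
         * requests, and re-pausing them is what could leave the guest
         * stuck.
         */
        avail_req = vm_event_ring_available(ved);
        if ( current->domain == d && avail_req < d->max_vcpus &&
             !atomic_read(&current->vm_event_pause_count) )
            vm_event_mark_and_pause(current, ved);

        vm_event_ring_unlock(ved);
    }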



* Re: [PATCH] common/vm_event: Prevent guest locking with large max_vcpus
From: Tamas K Lengyel @ 2017-02-08 16:25 UTC
  To: Razvan Cojocaru; +Cc: Xen-devel


On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru <rcojocaru@bitdefender.com>
wrote:

> It is currently possible for the guest to lock up when subscribing
> to synchronous vm_events if max_vcpus is larger than the number of
> available ring buffer slots. This patch fixes the issue for that use
> case by no longer blocking VCPUs that are already paused.
>
> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> ---
>  xen/common/vm_event.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 82ce8f1..2005a64 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>       * See the comments above wake_blocked() for more information
>       * on how this mechanism works to avoid waiting. */
>      avail_req = vm_event_ring_available(ved);
> -    if( current->domain == d && avail_req < d->max_vcpus )
> +    if( current->domain == d && avail_req < d->max_vcpus &&
> +        !atomic_read( &current->vm_event_pause_count ) )
>          vm_event_mark_and_pause(current, ved);
>

Hi Razvan,
I would also like this patch to include the change that unblocks the
vCPUs as soon as a spot opens up on the ring. Doing only what this patch
does will not solve the problem if asynchronous events are used.
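
Roughly, the response-consumption path would wake blocked vCPUs as soon
as slots are reclaimed, in the spirit of the wake_blocked() mechanism
that the comment in the hunk refers to. A sketch only; the
blocked/pause_flag bookkeeping shown here is illustrative and would have
to match what vm_event_mark_and_pause() actually records:

    /* Called on the response path, with the ring lock held. */
    static void vm_event_wake_blocked(struct domain *d,
                                      struct vm_event_domain *ved)
    {
        struct vcpu *v;
        unsigned int avail_req = vm_event_ring_available(ved);

        if ( avail_req == 0 || ved->blocked == 0 )
            return;

        for_each_vcpu ( d, v )
        {
            /* Skip vCPUs that were never marked as blocked. */
            if ( !test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
                continue;

            vcpu_unpause(v);
            ved->blocked--;

            /* Stop once the freed slots are spoken for. */
            if ( --avail_req == 0 )
                break;
        }
    }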

Tamas


* Re: [PATCH] common/vm_event: Prevent guest locking with large max_vcpus
From: Razvan Cojocaru @ 2017-02-08 17:08 UTC
  To: Tamas K Lengyel; +Cc: Xen-devel

On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
> 
> 
> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
> <rcojocaru@bitdefender.com> wrote:
> 
>     It is currently possible for the guest to lock up when subscribing
>     to synchronous vm_events if max_vcpus is larger than the number of
>     available ring buffer slots. This patch fixes the issue for that use
>     case by no longer blocking VCPUs that are already paused.
> 
>     Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>     ---
>      xen/common/vm_event.c | 3 ++-
>      1 file changed, 2 insertions(+), 1 deletion(-)
> 
>     diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>     index 82ce8f1..2005a64 100644
>     --- a/xen/common/vm_event.c
>     +++ b/xen/common/vm_event.c
>     @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>           * See the comments above wake_blocked() for more information
>           * on how this mechanism works to avoid waiting. */
>          avail_req = vm_event_ring_available(ved);
>     -    if( current->domain == d && avail_req < d->max_vcpus )
>     +    if( current->domain == d && avail_req < d->max_vcpus &&
>     +        !atomic_read( &current->vm_event_pause_count ) )
>              vm_event_mark_and_pause(current, ved);
> 
> 
> Hi Razvan,
> I would also like this patch to include the change that unblocks the
> vCPUs as soon as a spot opens up on the ring. Doing only what this
> patch does will not solve the problem if asynchronous events are used.

Fair enough. I thought that might need more discussion and so planned
to put it in a subsequent patch, but I'll make that change as well,
give it a spin and submit V2.


Thanks,
Razvan


* Re: [PATCH] common/vm_event: Prevent guest locking with large max_vcpus
From: Andrew Cooper @ 2017-02-08 17:20 UTC
  To: Razvan Cojocaru, Tamas K Lengyel; +Cc: Xen-devel

On 08/02/17 17:08, Razvan Cojocaru wrote:
> On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
>>
>> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
>> <rcojocaru@bitdefender.com> wrote:
>>
>>     It is currently possible for the guest to lock up when subscribing
>>     to synchronous vm_events if max_vcpus is larger than the number of
>>     available ring buffer slots. This patch fixes the issue for that use
>>     case by no longer blocking VCPUs that are already paused.
>>
>>     Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>     ---
>>      xen/common/vm_event.c | 3 ++-
>>      1 file changed, 2 insertions(+), 1 deletion(-)
>>
>>     diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>>     index 82ce8f1..2005a64 100644
>>     --- a/xen/common/vm_event.c
>>     +++ b/xen/common/vm_event.c
>>     @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>>           * See the comments above wake_blocked() for more information
>>           * on how this mechanism works to avoid waiting. */
>>          avail_req = vm_event_ring_available(ved);
>>     -    if( current->domain == d && avail_req < d->max_vcpus )
>>     +    if( current->domain == d && avail_req < d->max_vcpus &&
>>     +        !atomic_read( &current->vm_event_pause_count ) )
>>              vm_event_mark_and_pause(current, ved);
>>
>>
>> Hi Razvan,
>> I would also like this patch to include the change that unblocks the
>> vCPUs as soon as a spot opens up on the ring. Doing only what this
>> patch does will not solve the problem if asynchronous events are used.
> Fair enough. I thought that might need more discussion and so planned
> to put it in a subsequent patch, but I'll make that change as well,
> give it a spin and submit V2.

If you are spinning a v2, please pull current out into a variable at the
start (more efficient), and atomic_read() doesn't need spaces inside.
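
Concretely, a sketch of the reworked hunk with both comments applied
(the local would be declared at the top of the function):

    struct vcpu *curr = current;

    ...

    avail_req = vm_event_ring_available(ved);
    if ( curr->domain == d && avail_req < d->max_vcpus &&
         !atomic_read(&curr->vm_event_pause_count) )
        vm_event_mark_and_pause(curr, ved);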

~Andrew


* Re: [PATCH] common/vm_event: Prevent guest locking with large max_vcpus
From: Razvan Cojocaru @ 2017-02-08 17:31 UTC
  To: Andrew Cooper, Tamas K Lengyel; +Cc: Xen-devel

On 02/08/2017 07:20 PM, Andrew Cooper wrote:
> On 08/02/17 17:08, Razvan Cojocaru wrote:
>> On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
>>>
>>> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
>>> <rcojocaru@bitdefender.com> wrote:
>>>
>>>     It is currently possible for the guest to lock up when subscribing
>>>     to synchronous vm_events if max_vcpus is larger than the number of
>>>     available ring buffer slots. This patch fixes the issue for that use
>>>     case by no longer blocking VCPUs that are already paused.
>>>
>>>     Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>>     ---
>>>      xen/common/vm_event.c | 3 ++-
>>>      1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>>     diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>>>     index 82ce8f1..2005a64 100644
>>>     --- a/xen/common/vm_event.c
>>>     +++ b/xen/common/vm_event.c
>>>     @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>>>           * See the comments above wake_blocked() for more information
>>>           * on how this mechanism works to avoid waiting. */
>>>          avail_req = vm_event_ring_available(ved);
>>>     -    if( current->domain == d && avail_req < d->max_vcpus )
>>>     +    if( current->domain == d && avail_req < d->max_vcpus &&
>>>     +        !atomic_read( &current->vm_event_pause_count ) )
>>>              vm_event_mark_and_pause(current, ved);
>>>
>>>
>>> Hi Razvan,
>>> I would also like this patch to include the change that unblocks the
>>> vCPUs as soon as a spot opens up on the ring. Doing only what this
>>> patch does will not solve the problem if asynchronous events are used.
>> Fair enough. I thought that might need more discussion and so planned
>> to put it in a subsequent patch, but I'll make that change as well,
>> give it a spin and submit V2.
> 
> If you are spinning a v2, please pull current out into a variable at the
> start (more efficient), and atomic_read() doesn't need spaces inside.

Will do.


Thanks,
Razvan

