xen-devel.lists.xenproject.org archive mirror
* [Patch V2] xen: release lock occasionally during ballooning
@ 2015-07-20 11:49 Juergen Gross
  2015-07-20 13:11 ` [Xen-devel] " David Vrabel
  0 siblings, 1 reply; 4+ messages in thread
From: Juergen Gross @ 2015-07-20 11:49 UTC (permalink / raw)
  To: linux-kernel, xen-devel, konrad.wilk, david.vrabel, boris.ostrovsky
  Cc: Juergen Gross

When dom0 is being ballooned, balloon_process() will hold the balloon
mutex until it is finished. This will block e.g. the creation of new
domains, as the device backends for the new domain need some
autoballooned pages for their ring buffers.

Avoid this by releasing the balloon mutex from time to time during
ballooning. Adjust the comment above balloon_process() regarding
multiple instances of balloon_process().

Instead of open coding the rescheduling, just use cond_resched().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/balloon.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index fd93369..bf4a23c 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -472,7 +472,7 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 }
 
 /*
- * We avoid multiple worker processes conflicting via the balloon mutex.
+ * As this is a work item it is guaranteed to run as a single instance only.
  * We may of course race updates of the target counts (which are protected
  * by the balloon lock), or with changes to the Xen hard limit, but we will
  * recover from these in time.
@@ -482,9 +482,10 @@ static void balloon_process(struct work_struct *work)
 	enum bp_state state = BP_DONE;
 	long credit;
 
-	mutex_lock(&balloon_mutex);
 
 	do {
+		mutex_lock(&balloon_mutex);
+
 		credit = current_credit();
 
 		if (credit > 0) {
@@ -499,17 +500,15 @@ static void balloon_process(struct work_struct *work)
 
 		state = update_schedule(state);
 
-#ifndef CONFIG_PREEMPT
-		if (need_resched())
-			schedule();
-#endif
+		mutex_unlock(&balloon_mutex);
+
+		cond_resched();
+
 	} while (credit && state == BP_DONE);
 
 	/* Schedule more work if there is some still to be done. */
 	if (state == BP_EAGAIN)
 		schedule_delayed_work(&balloon_worker, balloon_stats.schedule_delay * HZ);
-
-	mutex_unlock(&balloon_mutex);
 }
 
 /* Resets the Xen limit, sets new target, and kicks off processing. */
-- 
2.1.4


* Re: [Xen-devel] [Patch V2] xen: release lock occasionally during ballooning
  2015-07-20 11:49 [Patch V2] xen: release lock occasionally during ballooning Juergen Gross
@ 2015-07-20 13:11 ` David Vrabel
  0 siblings, 0 replies; 4+ messages in thread
From: David Vrabel @ 2015-07-20 13:11 UTC (permalink / raw)
  To: Juergen Gross, linux-kernel, xen-devel, konrad.wilk,
	david.vrabel, boris.ostrovsky

On 20/07/15 12:49, Juergen Gross wrote:
> When dom0 is being ballooned, balloon_process() will hold the balloon
> mutex until it is finished. This will block e.g. the creation of new
> domains, as the device backends for the new domain need some
> autoballooned pages for their ring buffers.
> 
> Avoid this by releasing the balloon mutex from time to time during
> ballooning. Adjust the comment above balloon_process() regarding
> multiple instances of balloon_process().
> 
> Instead of open coding the rescheduling, just use cond_resched().

Applied to for-linus-4.2, thanks.

David


* Re: [Patch V2] xen: release lock occasionally during ballooning
  2015-07-20 11:46 Juergen Gross
@ 2015-07-20 11:48 ` Juergen Gross
  0 siblings, 0 replies; 4+ messages in thread
From: Juergen Gross @ 2015-07-20 11:48 UTC (permalink / raw)
  To: linux-kernel, xen-devel, konrad.wilk, david.vrabel, boris.ostrovsky

Please ignore, forgot stg refresh...

Juergen

On 07/20/2015 01:46 PM, Juergen Gross wrote:
> When dom0 is being ballooned, balloon_process() will hold the balloon
> mutex until it is finished. This will block e.g. the creation of new
> domains, as the device backends for the new domain need some
> autoballooned pages for their ring buffers.
>
> Avoid this by releasing the balloon mutex from time to time during
> ballooning. Adjust the comment above balloon_process() regarding
> multiple instances of balloon_process().
>
> Instead of open coding the rescheduling, just use cond_resched().
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   drivers/xen/balloon.c | 19 +++++++++++++++----
>   1 file changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index fd93369..e6d9eee 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -481,9 +481,16 @@ static void balloon_process(struct work_struct *work)
>   {
>   	enum bp_state state = BP_DONE;
>   	long credit;
> +	static bool active;
>
>   	mutex_lock(&balloon_mutex);
>
> +	if (active) {
> +		mutex_unlock(&balloon_mutex);
> +		return;
> +	}
> +	active = true;
> +
>   	do {
>   		credit = current_credit();
>
> @@ -499,12 +506,16 @@ static void balloon_process(struct work_struct *work)
>
>   		state = update_schedule(state);
>
> -#ifndef CONFIG_PREEMPT
> -		if (need_resched())
> -			schedule();
> -#endif
> +		mutex_unlock(&balloon_mutex);
> +
> +		cond_resched();
> +
> +		mutex_lock(&balloon_mutex);
> +
>   	} while (credit && state == BP_DONE);
>
> +	active = false;
> +
>   	/* Schedule more work if there is some still to be done. */
>   	if (state == BP_EAGAIN)
>   		schedule_delayed_work(&balloon_worker, balloon_stats.schedule_delay * HZ);
>


* [Patch V2] xen: release lock occasionally during ballooning
@ 2015-07-20 11:46 Juergen Gross
  2015-07-20 11:48 ` Juergen Gross
  0 siblings, 1 reply; 4+ messages in thread
From: Juergen Gross @ 2015-07-20 11:46 UTC (permalink / raw)
  To: linux-kernel, xen-devel, konrad.wilk, david.vrabel, boris.ostrovsky
  Cc: Juergen Gross

When dom0 is being ballooned, balloon_process() will hold the balloon
mutex until it is finished. This will block e.g. the creation of new
domains, as the device backends for the new domain need some
autoballooned pages for their ring buffers.

Avoid this by releasing the balloon mutex from time to time during
ballooning. Adjust the comment above balloon_process() regarding
multiple instances of balloon_process().

Instead of open coding the rescheduling, just use cond_resched().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/balloon.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index fd93369..e6d9eee 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -481,9 +481,16 @@ static void balloon_process(struct work_struct *work)
 {
 	enum bp_state state = BP_DONE;
 	long credit;
+	static bool active;
 
 	mutex_lock(&balloon_mutex);
 
+	if (active) {
+		mutex_unlock(&balloon_mutex);
+		return;
+	}
+	active = true;
+
 	do {
 		credit = current_credit();
 
@@ -499,12 +506,16 @@ static void balloon_process(struct work_struct *work)
 
 		state = update_schedule(state);
 
-#ifndef CONFIG_PREEMPT
-		if (need_resched())
-			schedule();
-#endif
+		mutex_unlock(&balloon_mutex);
+
+		cond_resched();
+
+		mutex_lock(&balloon_mutex);
+
 	} while (credit && state == BP_DONE);
 
+	active = false;
+
 	/* Schedule more work if there is some still to be done. */
 	if (state == BP_EAGAIN)
 		schedule_delayed_work(&balloon_worker, balloon_stats.schedule_delay * HZ);
-- 
2.1.4

