* [PATCH V1] PM: In kernel power management domain_pm created for async schedules
       [not found] <CGME20171206120714epcms5p70081d5bc518c3bb5f9cca4f56b203abf@epcms5p7>
@ 2017-12-06 12:07 ` Vikas Bansal
  2017-12-06 13:50   ` Rafael J. Wysocki
  2017-12-06 14:12   ` gregkh
  2017-12-13  8:46   ` Vikas Bansal
  1 sibling, 2 replies; 9+ messages in thread
From: Vikas Bansal @ 2017-12-06 12:07 UTC (permalink / raw)
  To: rjw, len.brown, pavel, gregkh, linux-pm, linux-kernel

Description:

If there is a driver in the system that starts creating async schedules
right after resume (as in our case, where we hit this issue), then the
async_synchronize_full API in the PM core starts waiting for completion
of the async schedules created by that driver, even though those
schedules are in their own domain. Because of this, kernel resume time
is increased and the whole system is delayed.
This problem can be solved by creating a separate domain for
async schedules in the PM core (as we did in our case).
The patch below solves this problem.

Changelog:
1. Created async domain domain_pm.
2. Converted async_schedule to async_schedule_domain.
3. Converted async_synchronize_full to async_synchronize_full_domain.



Signed-off-by: Vikas Bansal <vikas.bansal@samsung.com>
Signed-off-by: Anuj Gupta <anuj01.gupta@samsung.com>
---
 drivers/base/power/main.c |   27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index db2f044..042b034 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -39,6 +39,7 @@
 #include "power.h"
 
 typedef int (*pm_callback_t)(struct device *);
+static ASYNC_DOMAIN(domain_pm);
 
 /*
  * The entries in the dpm_list list are in a depth first order, simply
@@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_noirq, dev);
+			async_schedule_domain(async_resume_noirq, dev, 
+			&domain_pm);
 		}
 	}
 
@@ -641,7 +643,7 @@ void dpm_noirq_resume_devices(pm_message_t state)
 		put_device(dev);
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&domain_pm);
 	dpm_show_time(starttime, state, 0, "noirq");
 	trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
 }
@@ -755,7 +757,8 @@ void dpm_resume_early(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_early, dev);
+			async_schedule_domain(async_resume_early, dev, 
+			&domain_pm);
 		}
 	}
 
@@ -780,7 +783,7 @@ void dpm_resume_early(pm_message_t state)
 		put_device(dev);
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&domain_pm);
 	dpm_show_time(starttime, state, 0, "early");
 	trace_suspend_resume(TPS("dpm_resume_early"), state.event, false);
 }
@@ -919,7 +922,7 @@ void dpm_resume(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume, dev);
+			async_schedule_domain(async_resume, dev, &domain_pm);
 		}
 	}
 
@@ -946,7 +949,7 @@ void dpm_resume(pm_message_t state)
 		put_device(dev);
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&domain_pm);
 	dpm_show_time(starttime, state, 0, NULL);
 
 	cpufreq_resume();
@@ -1156,7 +1159,7 @@ static int device_suspend_noirq(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_noirq, dev);
+		async_schedule_domain(async_suspend_noirq, dev, &domain_pm);
 		return 0;
 	}
 	return __device_suspend_noirq(dev, pm_transition, false);
@@ -1202,7 +1205,7 @@ int dpm_noirq_suspend_devices(pm_message_t state)
 			break;
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&domain_pm);
 	if (!error)
 		error = async_error;
 
@@ -1316,7 +1319,7 @@ static int device_suspend_late(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_late, dev);
+		async_schedule_domain(async_suspend_late, dev, &domain_pm);
 		return 0;
 	}
 
@@ -1361,7 +1364,7 @@ int dpm_suspend_late(pm_message_t state)
 			break;
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&domain_pm);
 	if (!error)
 		error = async_error;
 	if (error) {
@@ -1576,7 +1579,7 @@ static int device_suspend(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend, dev);
+		async_schedule_domain(async_suspend, dev, &domain_pm);
 		return 0;
 	}
 
@@ -1622,7 +1625,7 @@ int dpm_suspend(pm_message_t state)
 			break;
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&domain_pm);
 	if (!error)
 		error = async_error;
 	if (error) {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH V1] PM: In kernel power management domain_pm created for async schedules
  2017-12-06 12:07 ` [PATCH V1] PM: In kernel power management domain_pm created for async schedules Vikas Bansal
@ 2017-12-06 13:50   ` Rafael J. Wysocki
  2017-12-06 14:12   ` gregkh
  1 sibling, 0 replies; 9+ messages in thread
From: Rafael J. Wysocki @ 2017-12-06 13:50 UTC (permalink / raw)
  To: vikas.bansal, gregkh; +Cc: len.brown, pavel, linux-pm, linux-kernel

Greg, I'll take care of this if you don't mind.

On Wednesday, December 6, 2017 1:07:14 PM CET Vikas Bansal wrote:
> Description:
> 
> If there is a driver in system which starts creating async schedules
> just after resume (Same as our case, in which we faced issue).
> Then async_synchronize_full API in PM cores starts waiting for completion
> of async schedules created by that driver (Even though those are in a domain).
> Because of this kernel resume time is increased (We faces the same issue)
> and whole system is delayed.
> This problem can be solved by creating a domain for
> async schedules in PM core (As we solved in our case).
> Below patch is for solving this problem.

OK, it is clearer what's going on here.

First question, is the driver using the async things in the tree?

> Changelog:
> 1. Created Async domain domain_pm.
> 2. Converted async_schedule to async_schedule_domain.
> 3. Converted async_synchronize_full to async_synchronize_full_domain

This still isn't a proper changelog, but I can fix it up.

> Signed-off-by: Vikas Bansal <vikas.bansal@samsung.com>
> Signed-off-by: Anuj Gupta <anuj01.gupta@samsung.com>
> ---
>  drivers/base/power/main.c |   27 +++++++++++++++------------
>  1 file changed, 15 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
> index db2f044..042b034 100644
> --- a/drivers/base/power/main.c
> +++ b/drivers/base/power/main.c
> @@ -39,6 +39,7 @@
>  #include "power.h"
>  
>  typedef int (*pm_callback_t)(struct device *);
> +static ASYNC_DOMAIN(domain_pm);

Rename this to async_pm or similar (just use the word "async" in the name
at least or it will be super-confusing).

>  
>  /*
>   * The entries in the dpm_list list are in a depth first order, simply
> @@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
>  		reinit_completion(&dev->power.completion);
>  		if (is_async(dev)) {
>  			get_device(dev);
> -			async_schedule(async_resume_noirq, dev);
> +			async_schedule_domain(async_resume_noirq, dev, 
> +			&domain_pm);

This is not the right way to format the line above and you don't need to break
it (yes, it will be longer than 80 chars, but it will look better anyway).
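
Formatted the way suggested here, the call simply stays on one line (sketched with the domain name used in this version of the patch):

```c
			async_schedule_domain(async_resume_noirq, dev, &domain_pm);
```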

>  		}
>  	}
>  
> @@ -641,7 +643,7 @@ void dpm_noirq_resume_devices(pm_message_t state)
>  		put_device(dev);
>  	}
>  	mutex_unlock(&dpm_list_mtx);
> -	async_synchronize_full();
> +	async_synchronize_full_domain(&domain_pm);
>  	dpm_show_time(starttime, state, 0, "noirq");
>  	trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
>  }
> @@ -755,7 +757,8 @@ void dpm_resume_early(pm_message_t state)
>  		reinit_completion(&dev->power.completion);
>  		if (is_async(dev)) {
>  			get_device(dev);
> -			async_schedule(async_resume_early, dev);
> +			async_schedule_domain(async_resume_early, dev, 
> +			&domain_pm);

Same here.

>  		}
>  	}
>  

Thanks,
Rafael


* Re: [PATCH V1] PM: In kernel power management domain_pm created for async schedules
  2017-12-06 12:07 ` [PATCH V1] PM: In kernel power management domain_pm created for async schedules Vikas Bansal
  2017-12-06 13:50   ` Rafael J. Wysocki
@ 2017-12-06 14:12   ` gregkh
  2017-12-06 14:18     ` Rafael J. Wysocki
  1 sibling, 1 reply; 9+ messages in thread
From: gregkh @ 2017-12-06 14:12 UTC (permalink / raw)
  To: Vikas Bansal; +Cc: rjw, len.brown, pavel, linux-pm, linux-kernel

On Wed, Dec 06, 2017 at 12:07:14PM +0000, Vikas Bansal wrote:
> Description:

Why is this here?

> 
> If there is a driver in system which starts creating async schedules
> just after resume (Same as our case, in which we faced issue).
> Then async_synchronize_full API in PM cores starts waiting for completion
> of async schedules created by that driver (Even though those are in a domain).
> Because of this kernel resume time is increased (We faces the same issue)
> and whole system is delayed.
> This problem can be solved by creating a domain for
> async schedules in PM core (As we solved in our case).
> Below patch is for solving this problem.

Very odd formatting.

> 
> Changelog:
> 1. Created Async domain domain_pm.
> 2. Converted async_schedule to async_schedule_domain.
> 3. Converted async_synchronize_full to async_synchronize_full_domain

I'm confused.  Have you read kernel patch submissions?  Look at how they
are formatted.  The documentation in the kernel tree should help you out
a lot here.

Also, this is not v1, it has changed from the previous version.  Always
describe, in the correct way, the changes from previous submissions.


> 
> 
> 
> Signed-off-by: Vikas Bansal <vikas.bansal@samsung.com>
> Signed-off-by: Anuj Gupta <anuj01.gupta@samsung.com>
> ---
>  drivers/base/power/main.c |   27 +++++++++++++++------------
>  1 file changed, 15 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
> index db2f044..042b034 100644
> --- a/drivers/base/power/main.c
> +++ b/drivers/base/power/main.c
> @@ -39,6 +39,7 @@
>  #include "power.h"
>  
>  typedef int (*pm_callback_t)(struct device *);
> +static ASYNC_DOMAIN(domain_pm);
>  
>  /*
>   * The entries in the dpm_list list are in a depth first order, simply
> @@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
>  		reinit_completion(&dev->power.completion);
>  		if (is_async(dev)) {
>  			get_device(dev);
> -			async_schedule(async_resume_noirq, dev);
> +			async_schedule_domain(async_resume_noirq, dev, 

Always run your patches through scripts/checkpatch.pl so you do not
get grumpy maintainers telling you to use scripts/checkpatch.pl
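
(For reference, a typical invocation from the top of a kernel tree looks like this; the patch filename is hypothetical:)

```shell
./scripts/checkpatch.pl --strict 0001-PM-create-async-domain-for-PM-core.patch
```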

Stop.  Take some time.  Redo the patch in another day or so, and then
resend it later, _AFTER_ you have addressed the issues.  Don't rush,
there is no race here.

greg k-h


* Re: [PATCH V1] PM: In kernel power management domain_pm created for async schedules
  2017-12-06 14:12   ` gregkh
@ 2017-12-06 14:18     ` Rafael J. Wysocki
  0 siblings, 0 replies; 9+ messages in thread
From: Rafael J. Wysocki @ 2017-12-06 14:18 UTC (permalink / raw)
  To: gregkh, Vikas Bansal; +Cc: len.brown, pavel, linux-pm, linux-kernel

On Wednesday, December 6, 2017 3:12:38 PM CET gregkh@linuxfoundation.org wrote:
> On Wed, Dec 06, 2017 at 12:07:14PM +0000, Vikas Bansal wrote:
> > Description:
> 
> Why is this here?
> 
> > 
> > If there is a driver in system which starts creating async schedules
> > just after resume (Same as our case, in which we faced issue).
> > Then async_synchronize_full API in PM cores starts waiting for completion
> > of async schedules created by that driver (Even though those are in a domain).
> > Because of this kernel resume time is increased (We faces the same issue)
> > and whole system is delayed.
> > This problem can be solved by creating a domain for
> > async schedules in PM core (As we solved in our case).
> > Below patch is for solving this problem.
> 
> Very odd formatting.
> 
> > 
> > Changelog:
> > 1. Created Async domain domain_pm.
> > 2. Converted async_schedule to async_schedule_domain.
> > 3. Converted async_synchronize_full to async_synchronize_full_domain
> 
> I'm confused.  Have you read kernel patch submissions?  Look at how they
> are formatted.  The documentation in the kernel tree should help you out
> a lot here.
> 
> Also, this is not v1, it has changed from the previous version.  Always
> describe, in the correct way, the changes from previous submissions.
> 
> 
> > 
> > 
> > 
> > Signed-off-by: Vikas Bansal <vikas.bansal@samsung.com>
> > Signed-off-by: Anuj Gupta <anuj01.gupta@samsung.com>
> > ---
> >  drivers/base/power/main.c |   27 +++++++++++++++------------
> >  1 file changed, 15 insertions(+), 12 deletions(-)
> > 
> > diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
> > index db2f044..042b034 100644
> > --- a/drivers/base/power/main.c
> > +++ b/drivers/base/power/main.c
> > @@ -39,6 +39,7 @@
> >  #include "power.h"
> >  
> >  typedef int (*pm_callback_t)(struct device *);
> > +static ASYNC_DOMAIN(domain_pm);
> >  
> >  /*
> >   * The entries in the dpm_list list are in a depth first order, simply
> > @@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
> >  		reinit_completion(&dev->power.completion);
> >  		if (is_async(dev)) {
> >  			get_device(dev);
> > -			async_schedule(async_resume_noirq, dev);
> > +			async_schedule_domain(async_resume_noirq, dev, 
> 
> Always run your patches through scripts/checkpatch.pl so you do you not
> get grumpy maintainers telling you to use scripts/checkpatch.pl
> 
> Stop.  Take some time.  Redo the patch in another day or so, and then
> resend it later, _AFTER_ you have addressed the issues.  Don't rush,
> there is no race here.

Also it is not clear to me if this fixes a mainline kernel issue,
because the changelog mentions a driver doing something odd, but it
doesn't say which one it is and whether or not it is in the tree.

Thanks,
Rafael


* Re: [PATCH V3] PM: In kernel power management domain_pm created for async schedules
       [not found] <CGME20171206120714epcms5p70081d5bc518c3bb5f9cca4f56b203abf@epcms5p7>
@ 2017-12-13  8:46   ` Vikas Bansal
  2017-12-13  8:46   ` Vikas Bansal
  1 sibling, 0 replies; 9+ messages in thread
From: Vikas Bansal @ 2017-12-13  8:46 UTC (permalink / raw)
  To: Rafael J. Wysocki, gregkh, len.brown, pavel, linux-pm, linux-kernel

 
Sender : Rafael J. Wysocki <rjw@rjwysocki.net>
Date   : 2017-12-06 19:48 (GMT+5:30)
 
> On Wednesday, December 6, 2017 3:12:38 PM CET gregkh@linuxfoundation.org wrote:
> > On Wed, Dec 06, 2017 at 12:07:14PM +0000, Vikas Bansal wrote:
> > > Description:
> > 
> > Why is this here?
> > 
> > > 
> > > If there is a driver in system which starts creating async schedules
> > > just after resume (Same as our case, in which we faced issue).
> > > Then async_synchronize_full API in PM cores starts waiting for completion
> > > of async schedules created by that driver (Even though those are in a domain).
> > > Because of this kernel resume time is increased (We faces the same issue)
> > > and whole system is delayed.
> > > This problem can be solved by creating a domain for
> > > async schedules in PM core (As we solved in our case).
> > > Below patch is for solving this problem.
> > 
> > Very odd formatting.
> > 
> > > 
> > > Changelog:
> > > 1. Created Async domain domain_pm.
> > > 2. Converted async_schedule to async_schedule_domain.
> > > 3. Converted async_synchronize_full to async_synchronize_full_domain
> > 
> > I'm confused.  Have you read kernel patch submissions?  Look at how they
> > are formatted.  The documentation in the kernel tree should help you out
> > a lot here.
> > 
> > Also, this is not v1, it has changed from the previous version.  Always
> > describe, in the correct way, the changes from previous submissions.

Setting the correct version and changing the formatting.

> > 
> > 
> > > 
> > > 
> > > 
> > > Signed-off-by: Vikas Bansal <vikas.bansal@samsung.com>
> > > Signed-off-by: Anuj Gupta <anuj01.gupta@samsung.com>
> > > ---
> > >  drivers/base/power/main.c |   27 +++++++++++++++------------
> > >  1 file changed, 15 insertions(+), 12 deletions(-)
> > > 
> > > diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
> > > index db2f044..042b034 100644
> > > --- a/drivers/base/power/main.c
> > > +++ b/drivers/base/power/main.c
> > > @@ -39,6 +39,7 @@
> > >  #include "power.h"
> > >  
> > >  typedef int (*pm_callback_t)(struct device *);
> > > +static ASYNC_DOMAIN(domain_pm);
> > >  
> > >  /*
> > >   * The entries in the dpm_list list are in a depth first order, simply
> > > @@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
> > >                  reinit_completion(&dev->power.completion);
> > >                  if (is_async(dev)) {
> > >                          get_device(dev);
> > > -                        async_schedule(async_resume_noirq, dev);
> > > +                        async_schedule_domain(async_resume_noirq, dev, 
> > 
> > Always run your patches through scripts/checkpatch.pl so you do you not
> > get grumpy maintainers telling you to use scripts/checkpatch.pl
> > 
> > Stop.  Take some time.  Redo the patch in another day or so, and then
> > resend it later, _AFTER_ you have addressed the issues.  Don't rush,
> > there is no race here.
> 
> Also it is not clear to me if this fixes a mainline kernel issue,
> because the changelog mentions a driver doing something odd, but it
> doesn't say which one it is and whether or not it is in the tree.

No, this driver is not part of mainline yet.

Changing the patch and changelog as suggested. Changed the name of the domain
from "domain_pm" to "async_pm", but kept "domain_pm" in the subject to avoid
confusion.

>  
> Thanks,
> Rafael
 

If there is a driver in the system that starts creating async schedules right
after resume (as in our case, where we hit this issue), then the
async_synchronize_full API in the PM core starts waiting for completion of the
async schedules created by that driver, even though those schedules are in
their own domain. Because of this, kernel resume time is increased and the
whole system is delayed.

To solve this problem, the async domain async_pm was created, "async_schedule"
API calls were replaced with "async_schedule_domain", and
"async_synchronize_full" was replaced with "async_synchronize_full_domain".



Signed-off-by: Vikas Bansal <vikas.bansal@samsung.com>
Signed-off-by: Anuj Gupta <anuj01.gupta@samsung.com>
---
 drivers/base/power/main.c |   27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index db2f044..03b71e3 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -39,6 +39,7 @@
 #include "power.h"
 
 typedef int (*pm_callback_t)(struct device *);
+static ASYNC_DOMAIN(async_pm);
 
 /*
  * The entries in the dpm_list list are in a depth first order, simply
@@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_noirq, dev);
+			async_schedule_domain(async_resume_noirq, dev, 
+					      &async_pm);
 		}
 	}
 
@@ -641,7 +643,7 @@ void dpm_noirq_resume_devices(pm_message_t state)
 		put_device(dev);
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&async_pm);
 	dpm_show_time(starttime, state, 0, "noirq");
 	trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
 }
@@ -755,7 +757,8 @@ void dpm_resume_early(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_early, dev);
+			async_schedule_domain(async_resume_early, dev, 
+					      &async_pm);
 		}
 	}
 
@@ -780,7 +783,7 @@ void dpm_resume_early(pm_message_t state)
 		put_device(dev);
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&async_pm);
 	dpm_show_time(starttime, state, 0, "early");
 	trace_suspend_resume(TPS("dpm_resume_early"), state.event, false);
 }
@@ -919,7 +922,7 @@ void dpm_resume(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume, dev);
+			async_schedule_domain(async_resume, dev, &async_pm);
 		}
 	}
 
@@ -946,7 +949,7 @@ void dpm_resume(pm_message_t state)
 		put_device(dev);
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&async_pm);
 	dpm_show_time(starttime, state, 0, NULL);
 
 	cpufreq_resume();
@@ -1156,7 +1159,7 @@ static int device_suspend_noirq(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_noirq, dev);
+		async_schedule_domain(async_suspend_noirq, dev, &async_pm);
 		return 0;
 	}
 	return __device_suspend_noirq(dev, pm_transition, false);
@@ -1202,7 +1205,7 @@ int dpm_noirq_suspend_devices(pm_message_t state)
 			break;
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&async_pm);
 	if (!error)
 		error = async_error;
 
@@ -1316,7 +1319,7 @@ static int device_suspend_late(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_late, dev);
+		async_schedule_domain(async_suspend_late, dev, &async_pm);
 		return 0;
 	}
 
@@ -1361,7 +1364,7 @@ int dpm_suspend_late(pm_message_t state)
 			break;
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&async_pm);
 	if (!error)
 		error = async_error;
 	if (error) {
@@ -1576,7 +1579,7 @@ static int device_suspend(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend, dev);
+		async_schedule_domain(async_suspend, dev, &async_pm);
 		return 0;
 	}
 
@@ -1622,7 +1625,7 @@ int dpm_suspend(pm_message_t state)
 			break;
 	}
 	mutex_unlock(&dpm_list_mtx);
-	async_synchronize_full();
+	async_synchronize_full_domain(&async_pm);
 	if (!error)
 		error = async_error;
 	if (error) {
-- 
1.7.9.5
 
 



* Re: [PATCH V3] PM: In kernel power management domain_pm created for async schedules
  2017-12-13  8:46   ` Vikas Bansal
  (?)
@ 2017-12-13 22:26   ` Rafael J. Wysocki
  -1 siblings, 0 replies; 9+ messages in thread
From: Rafael J. Wysocki @ 2017-12-13 22:26 UTC (permalink / raw)
  To: vikas.bansal
  Cc: Rafael J. Wysocki, gregkh, len.brown, pavel, linux-pm, linux-kernel

On Wed, Dec 13, 2017 at 9:46 AM, Vikas Bansal <vikas.bansal@samsung.com> wrote:
>
> Sender : Rafael J. Wysocki <rjw@rjwysocki.net>
> Date   : 2017-12-06 19:48 (GMT+5:30)
>
>> On Wednesday, December 6, 2017 3:12:38 PM CET gregkh@linuxfoundation.org wrote:
>> > On Wed, Dec 06, 2017 at 12:07:14PM +0000, Vikas Bansal wrote:
>> > > Description:
>> >
>> > Why is this here?
>> >
>> > >
>> > > If there is a driver in system which starts creating async schedules
>> > > just after resume (Same as our case, in which we faced issue).
>> > > Then async_synchronize_full API in PM cores starts waiting for completion
>> > > of async schedules created by that driver (Even though those are in a domain).
>> > > Because of this kernel resume time is increased (We faces the same issue)
>> > > and whole system is delayed.
>> > > This problem can be solved by creating a domain for
>> > > async schedules in PM core (As we solved in our case).
>> > > Below patch is for solving this problem.
>> >
>> > Very odd formatting.
>> >
>> > >
>> > > Changelog:
>> > > 1. Created Async domain domain_pm.
>> > > 2. Converted async_schedule to async_schedule_domain.
>> > > 3. Converted async_synchronize_full to async_synchronize_full_domain
>> >
>> > I'm confused.  Have you read kernel patch submissions?  Look at how they
>> > are formatted.  The documentation in the kernel tree should help you out
>> > a lot here.
>> >
>> > Also, this is not v1, it has changed from the previous version.  Always
>> > describe, in the correct way, the changes from previous submissions.
>
> Setting the correct version and changing the formatting.
>
>> >
>> >
>> > >
>> > >
>> > >
>> > > Signed-off-by: Vikas Bansal <vikas.bansal@samsung.com>
>> > > Signed-off-by: Anuj Gupta <anuj01.gupta@samsung.com>
>> > > ---
>> > >  drivers/base/power/main.c |   27 +++++++++++++++------------
>> > >  1 file changed, 15 insertions(+), 12 deletions(-)
>> > >
>> > > diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
>> > > index db2f044..042b034 100644
>> > > --- a/drivers/base/power/main.c
>> > > +++ b/drivers/base/power/main.c
>> > > @@ -39,6 +39,7 @@
>> > >  #include "power.h"
>> > >
>> > >  typedef int (*pm_callback_t)(struct device *);
>> > > +static ASYNC_DOMAIN(domain_pm);
>> > >
>> > >  /*
>> > >   * The entries in the dpm_list list are in a depth first order, simply
>> > > @@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
>> > >                  reinit_completion(&dev->power.completion);
>> > >                  if (is_async(dev)) {
>> > >                          get_device(dev);
>> > > -                        async_schedule(async_resume_noirq, dev);
>> > > +                        async_schedule_domain(async_resume_noirq, dev,
>> >
>> > Always run your patches through scripts/checkpatch.pl so you do you not
>> > get grumpy maintainers telling you to use scripts/checkpatch.pl
>> >
>> > Stop.  Take some time.  Redo the patch in another day or so, and then
>> > resend it later, _AFTER_ you have addressed the issues.  Don't rush,
>> > there is no race here.
>>
>> Also it is not clear to me if this fixes a mainline kernel issue,
>> because the changelog mentions a driver doing something odd, but it
>> doesn't say which one it is and whether or not it is in the tree.
>
> No, this driver is not part of mainline yet.

So please submit it along with the driver that needs it, whenever that
one is ready.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH V3] PM: In kernel power management domain_pm created for async schedules
       [not found]   ` <CGME20171206120714epcms5p70081d5bc518c3bb5f9cca4f56b203abf@epcms5p6>
@ 2018-01-04  8:59     ` Vikas Bansal
  2018-01-04  9:25       ` gregkh
  0 siblings, 1 reply; 9+ messages in thread
From: Vikas Bansal @ 2018-01-04  8:59 UTC (permalink / raw)
  To: Rafael J. Wysocki, Rafael J. Wysocki, gregkh, len.brown, pavel,
	linux-pm, linux-kernel
  Cc: Anuj Gupta

[-- Attachment #1: Type: text/html, Size: 2624 bytes --]

[-- Attachment #2: 201602111742151_N3WZA6X7.png --]
[-- Type: image/png, Size: 33527 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH V3] PM: In kernel power management domain_pm created for async schedules
  2018-01-04  8:59     ` Vikas Bansal
@ 2018-01-04  9:25       ` gregkh
  0 siblings, 0 replies; 9+ messages in thread
From: gregkh @ 2018-01-04  9:25 UTC (permalink / raw)
  To: Vikas Bansal
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, len.brown, pavel, linux-pm,
	linux-kernel, Anuj Gupta

On Thu, Jan 04, 2018 at 08:59:00AM +0000, Vikas Bansal wrote:
> Sender : Rafael J. Wysocki <rafael@kernel.org>
> 
> Date : 2017-12-14 03:56 (GMT+5:30)
> 
> >
> 
> > On Wed, Dec 13, 2017 at 9:46 AM, Vikas Bansal <vikas.bansal@samsung.com>
> wrote:
> >>
> >> Sender : Rafael J. Wysocki <rjw@rjwysocki.net>
> >> Date   : 2017-12-06 19:48 (GMT+5:30)
> >>
> >>>
> >>> Also it is not clear to me if this fixes a mainline kernel issue,
> >>> because the changelog mentions a driver doing something odd, but it
> >>> doesn't say which one it is and whether or not it is in the tree.
> >>
> >> No, this driver is not part of mainline yet.
> >
> > So please submit it along with the driver that needs it, whenever that
> > one is ready.
> >
> > Thanks,
> > Rafael
> 
> I am not sure when that driver will be submitted upstream, and I don't even
> know whether the final implementation of the driver will use async schedules
> or not. I just found one optimization in the PM core because of that driver.
> If there is any issue with the patch, please let me know and I will correct it.

We can't take the patch without the driver at the same time, sorry.

greg k-h

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2018-01-04  9:25 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CGME20171206120714epcms5p70081d5bc518c3bb5f9cca4f56b203abf@epcms5p7>
2017-12-06 12:07 ` [PATCH V1] PM: In kernel power management domain_pm created for async schedules Vikas Bansal
2017-12-06 13:50   ` Rafael J. Wysocki
2017-12-06 14:12   ` gregkh
2017-12-06 14:18     ` Rafael J. Wysocki
2017-12-13  8:46 ` [PATCH V3] " Vikas Bansal
2017-12-13  8:46   ` Vikas Bansal
2017-12-13 22:26   ` Rafael J. Wysocki
     [not found]   ` <CGME20171206120714epcms5p70081d5bc518c3bb5f9cca4f56b203abf@epcms5p6>
2018-01-04  8:59     ` Vikas Bansal
2018-01-04  9:25       ` gregkh
