All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/3] PM / Wakeup: Fixes in wakeup.c
@ 2011-01-25  0:12 Rafael J. Wysocki
  2011-01-25  0:14 ` [PATCH 1/3] PM / Wakeup: Add missing memory barriers Rafael J. Wysocki
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Rafael J. Wysocki @ 2011-01-25  0:12 UTC (permalink / raw)
  To: Linux-pm mailing list; +Cc: LKML, Alan Stern

Hi,

The following three patches fix some issues I've recently noticed in
drivers/base/power/wakeup.c.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-25  0:12 [PATCH 0/3] PM / Wakeup: Fixes in wakeup.c Rafael J. Wysocki
  2011-01-25  0:14 ` [PATCH 1/3] PM / Wakeup: Add missing memory barriers Rafael J. Wysocki
@ 2011-01-25  0:14 ` Rafael J. Wysocki
  2011-01-26 20:21   ` Alan Stern
  2011-01-26 20:21   ` Alan Stern
  2011-01-25  0:15 ` [PATCH 2/3] PM / Wakeup: Make pm_save_wakeup_count() work as documented Rafael J. Wysocki
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 18+ messages in thread
From: Rafael J. Wysocki @ 2011-01-25  0:14 UTC (permalink / raw)
  To: Linux-pm mailing list; +Cc: LKML, Alan Stern

From: Rafael J. Wysocki <rjw@sisk.pl>

The memory barrier in wakeup_source_deactivate() is supposed to
prevent the callers of pm_wakeup_pending() and pm_get_wakeup_count()
from seeing the new value of events_in_progress (0, in particular)
and the old value of event_count at the same time.  However, if
wakeup_source_deactivate() is executed by CPU0 and, for instance,
pm_wakeup_pending() is executed by CPU1, where both processors can
reorder operations, the memory barrier in wakeup_source_deactivate()
doesn't affect CPU1, which can reorder reads.  In that case CPU1 may
very well decide to fetch event_count before it's modified and
events_in_progress after it's been updated, so pm_wakeup_pending()
may fail to detect a wakeup event.  This issue can be addressed by
adding a read memory barrier in pm_wakeup_pending() that forces
events_in_progress to be read before event_count.

For a similar reason, a read memory barrier should be added to
pm_get_wakeup_count().

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
---
 drivers/base/power/wakeup.c |   33 +++++++++++++++++++++++++--------
 1 file changed, 25 insertions(+), 8 deletions(-)

Index: linux-2.6/drivers/base/power/wakeup.c
===================================================================
--- linux-2.6.orig/drivers/base/power/wakeup.c
+++ linux-2.6/drivers/base/power/wakeup.c
@@ -568,12 +568,30 @@ static void pm_wakeup_update_hit_counts(
 }
 
 /**
+ * __pm_wakeup_pending - Check if there are any new wakeup events.
+ * @count: Expected number of wakeup events registered so far.
+ *
+ * Compare the current number of registered wakeup events with @count and return
+ * true if they are different.  Also return true if the current number of wakeup
+ * events being processed is different from zero.
+ */
+static bool __pm_wakeup_pending(unsigned int count)
+{
+	bool in_progress;
+
+	in_progress = !!atomic_read(&events_in_progress);
+	smp_rmb();
+	return in_progress || (unsigned int)atomic_read(&event_count) != count;
+}
+
+/**
  * pm_wakeup_pending - Check if power transition in progress should be aborted.
  *
- * Compare the current number of registered wakeup events with its preserved
- * value from the past and return true if new wakeup events have been registered
- * since the old value was stored.  Also return true if the current number of
- * wakeup events being processed is different from zero.
+ * If wakeup events detection is enabled, call __pm_wakeup_pending() for
+ * saved_count (preserved number of registered wakeup events from the past) and
+ * return its result.  If it is 'true' (i.e. new wakeup events have been
+ * registered since the last modification of saved_count), disable wakeup events
+ * detection and update the statistics of all wakeup sources.
  */
 bool pm_wakeup_pending(void)
 {
@@ -582,8 +600,7 @@ bool pm_wakeup_pending(void)
 
 	spin_lock_irqsave(&events_lock, flags);
 	if (events_check_enabled) {
-		ret = ((unsigned int)atomic_read(&event_count) != saved_count)
-			|| atomic_read(&events_in_progress);
+		ret = __pm_wakeup_pending(saved_count);
 		events_check_enabled = !ret;
 	}
 	spin_unlock_irqrestore(&events_lock, flags);
@@ -616,6 +633,7 @@ bool pm_get_wakeup_count(unsigned int *c
 	}
 
 	ret = !atomic_read(&events_in_progress);
+	smp_rmb();
 	*count = atomic_read(&event_count);
 	return ret;
 }
@@ -634,8 +652,7 @@ bool pm_save_wakeup_count(unsigned int c
 	bool ret = false;
 
 	spin_lock_irq(&events_lock);
-	if (count == (unsigned int)atomic_read(&event_count)
-	    && !atomic_read(&events_in_progress)) {
+	if (!__pm_wakeup_pending(count)) {
 		saved_count = count;
 		events_check_enabled = true;
 		ret = true;


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 2/3] PM / Wakeup: Make pm_save_wakeup_count() work as documented
  2011-01-25  0:12 [PATCH 0/3] PM / Wakeup: Fixes in wakeup.c Rafael J. Wysocki
                   ` (2 preceding siblings ...)
  2011-01-25  0:15 ` [PATCH 2/3] PM / Wakeup: Make pm_save_wakeup_count() work as documented Rafael J. Wysocki
@ 2011-01-25  0:15 ` Rafael J. Wysocki
  2011-01-25  0:16 ` [PATCH 3/3] PM / Wakeup: Don't update events_check_enabled in pm_get_wakeup_count() Rafael J. Wysocki
  2011-01-25  0:16 ` Rafael J. Wysocki
  5 siblings, 0 replies; 18+ messages in thread
From: Rafael J. Wysocki @ 2011-01-25  0:15 UTC (permalink / raw)
  To: Linux-pm mailing list; +Cc: LKML, Alan Stern

From: Rafael J. Wysocki <rjw@sisk.pl>

According to Documentation/ABI/testing/sysfs-power, the
/sys/power/wakeup_count interface should only make the kernel react
to wakeup events during suspend if the last write to it has been
successful.  However, if /sys/power/wakeup_count is written to two
times in a row, where the first write is successful and the second
is not, the kernel will still react to wakeup events during suspend
due to a bug in pm_save_wakeup_count().

Fix the bug by making pm_save_wakeup_count() clear
events_check_enabled unconditionally before checking if there are
any new wakeup events registered since the previous read from
/sys/power/wakeup_count.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
---
 drivers/base/power/wakeup.c |   13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

Index: linux-2.6/drivers/base/power/wakeup.c
===================================================================
--- linux-2.6.orig/drivers/base/power/wakeup.c
+++ linux-2.6/drivers/base/power/wakeup.c
@@ -644,23 +644,22 @@ bool pm_get_wakeup_count(unsigned int *c
  *
  * If @count is equal to the current number of registered wakeup events and the
  * current number of wakeup events being processed is zero, store @count as the
- * old number of registered wakeup events to be used by pm_check_wakeup_events()
- * and return true.  Otherwise return false.
+ * old number of registered wakeup events for pm_check_wakeup_events(), enable
+ * wakeup events detection and return 'true'.  Otherwise disable wakeup events
+ * detection and return 'false'.
  */
 bool pm_save_wakeup_count(unsigned int count)
 {
-	bool ret = false;
-
+	events_check_enabled = false;
 	spin_lock_irq(&events_lock);
 	if (!__pm_wakeup_pending(count)) {
 		saved_count = count;
 		events_check_enabled = true;
-		ret = true;
 	}
 	spin_unlock_irq(&events_lock);
-	if (!ret)
+	if (!events_check_enabled)
 		pm_wakeup_update_hit_counts();
-	return ret;
+	return events_check_enabled;
 }
 
 static struct dentry *wakeup_sources_stats_dentry;


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 3/3] PM / Wakeup: Don't update events_check_enabled in pm_get_wakeup_count()
  2011-01-25  0:12 [PATCH 0/3] PM / Wakeup: Fixes in wakeup.c Rafael J. Wysocki
                   ` (4 preceding siblings ...)
  2011-01-25  0:16 ` [PATCH 3/3] PM / Wakeup: Don't update events_check_enabled in pm_get_wakeup_count() Rafael J. Wysocki
@ 2011-01-25  0:16 ` Rafael J. Wysocki
  5 siblings, 0 replies; 18+ messages in thread
From: Rafael J. Wysocki @ 2011-01-25  0:16 UTC (permalink / raw)
  To: Linux-pm mailing list; +Cc: LKML, Alan Stern

From: Rafael J. Wysocki <rjw@sisk.pl>

Since pm_save_wakeup_count() has just been changed to clear
events_check_enabled unconditionally before checking if there are
any new wakeup events registered since the last read from
/sys/power/wakeup_count, the detection of wakeup events during
suspend may be disabled, after it's been enabled, by writing a
"wrong" value back to /sys/power/wakeup_count.  For this reason,
it is not necessary to update events_check_enabled in
pm_get_wakeup_count() any more.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
---
 drivers/base/power/wakeup.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

Index: linux-2.6/drivers/base/power/wakeup.c
===================================================================
--- linux-2.6.orig/drivers/base/power/wakeup.c
+++ linux-2.6/drivers/base/power/wakeup.c
@@ -616,17 +616,14 @@ bool pm_wakeup_pending(void)
  * Store the number of registered wakeup events at the address in @count.  Block
  * if the current number of wakeup events being processed is nonzero.
  *
- * Return false if the wait for the number of wakeup events being processed to
+ * Return 'false' if the wait for the number of wakeup events being processed to
  * drop down to zero has been interrupted by a signal (and the current number
- * of wakeup events being processed is still nonzero).  Otherwise return true.
+ * of wakeup events being processed is still nonzero).  Otherwise return 'true'.
  */
 bool pm_get_wakeup_count(unsigned int *count)
 {
 	bool ret;
 
-	if (capable(CAP_SYS_ADMIN))
-		events_check_enabled = false;
-
 	while (atomic_read(&events_in_progress) && !signal_pending(current)) {
 		pm_wakeup_update_hit_counts();
 		schedule_timeout_interruptible(msecs_to_jiffies(TIMEOUT));


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-25  0:14 ` Rafael J. Wysocki
  2011-01-26 20:21   ` Alan Stern
@ 2011-01-26 20:21   ` Alan Stern
  2011-01-26 20:36     ` Rafael J. Wysocki
                       ` (2 more replies)
  1 sibling, 3 replies; 18+ messages in thread
From: Alan Stern @ 2011-01-26 20:21 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux-pm mailing list, LKML

On Tue, 25 Jan 2011, Rafael J. Wysocki wrote:

> From: Rafael J. Wysocki <rjw@sisk.pl>
> 
> The memory barrier in wakeup_source_deactivate() is supposed to
> prevent the callers of pm_wakeup_pending() and pm_get_wakeup_count()
> from seeing the new value of events_in_progress (0, in particular)
> and the old value of event_count at the same time.  However, if
> wakeup_source_deactivate() is executed by CPU0 and, for instance,
> pm_wakeup_pending() is executed by CPU1, where both processors can
> reorder operations, the memory barrier in wakeup_source_deactivate()
> doesn't affect CPU1 which can reorder reads.  In that case CPU1 may
> very well decide to fetch event_count before it's modified and
> events_in_progress after it's been updated, so pm_wakeup_pending()
> may fail to detect a wakeup event.  This issue can be addressed by
> adding a read memory barrier in pm_wakeup_pending() that will enforce
> events_in_progress to be read before event_count.
> 
> For similar reason, a read memory barrier should be added to
> pm_get_wakeup_count().

How come this is implemented using memory barriers rather than a lock?  
Is it because this is potentially a fairly hot path?

New memory barriers are supposed to have comments present in the code, 
explaining why they are needed.

Ideally you could do away with the need for synchronization entirely.  
For example, events_in_progress and event_count could be stored as two 
16-bit values stuffed into a single atomic variable.  Then they could 
both be read or updated simultaneously.

Alan Stern


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-26 20:21   ` Alan Stern
  2011-01-26 20:36     ` Rafael J. Wysocki
@ 2011-01-26 20:36     ` Rafael J. Wysocki
  2011-01-26 22:53     ` Rafael J. Wysocki
  2 siblings, 0 replies; 18+ messages in thread
From: Rafael J. Wysocki @ 2011-01-26 20:36 UTC (permalink / raw)
  To: Alan Stern; +Cc: Linux-pm mailing list, LKML

On Wednesday, January 26, 2011, Alan Stern wrote:
> On Tue, 25 Jan 2011, Rafael J. Wysocki wrote:
> 
> > From: Rafael J. Wysocki <rjw@sisk.pl>
> > 
> > The memory barrier in wakeup_source_deactivate() is supposed to
> > prevent the callers of pm_wakeup_pending() and pm_get_wakeup_count()
> > from seeing the new value of events_in_progress (0, in particular)
> > and the old value of event_count at the same time.  However, if
> > wakeup_source_deactivate() is executed by CPU0 and, for instance,
> > pm_wakeup_pending() is executed by CPU1, where both processors can
> > reorder operations, the memory barrier in wakeup_source_deactivate()
> > doesn't affect CPU1 which can reorder reads.  In that case CPU1 may
> > very well decide to fetch event_count before it's modified and
> > events_in_progress after it's been updated, so pm_wakeup_pending()
> > may fail to detect a wakeup event.  This issue can be addressed by
> > adding a read memory barrier in pm_wakeup_pending() that will enforce
> > events_in_progress to be read before event_count.
> > 
> > For similar reason, a read memory barrier should be added to
> > pm_get_wakeup_count().
> 
> How come this is implemented using memory barriers rather than a lock?  
> Is it because this is potentially a fairly hot path?

Yes, that's the reason.

> New memory barriers are supposed to have comments present in the code, 
> explaining why they are needed.

Of course I can add them.

> Ideally you could do away with the need for synchronization entirely.  
> For example, events_in_progress and event_count could be stored as two 
> 16-bit values stuffed into a single atomic variable.  Then they could 
> both be read or updated simultaneously.

I thought about that too, but didn't actually implement it.

Well, I guess it would be better.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-26 20:21   ` Alan Stern
  2011-01-26 20:36     ` Rafael J. Wysocki
  2011-01-26 20:36     ` Rafael J. Wysocki
@ 2011-01-26 22:53     ` Rafael J. Wysocki
  2011-01-27 19:00       ` Alan Stern
  2011-01-27 19:00       ` Alan Stern
  2 siblings, 2 replies; 18+ messages in thread
From: Rafael J. Wysocki @ 2011-01-26 22:53 UTC (permalink / raw)
  To: Alan Stern; +Cc: Linux-pm mailing list, LKML

On Wednesday, January 26, 2011, Alan Stern wrote:
> On Tue, 25 Jan 2011, Rafael J. Wysocki wrote:
> 
> > From: Rafael J. Wysocki <rjw@sisk.pl>
> > 
> > The memory barrier in wakeup_source_deactivate() is supposed to
> > prevent the callers of pm_wakeup_pending() and pm_get_wakeup_count()
> > from seeing the new value of events_in_progress (0, in particular)
> > and the old value of event_count at the same time.  However, if
> > wakeup_source_deactivate() is executed by CPU0 and, for instance,
> > pm_wakeup_pending() is executed by CPU1, where both processors can
> > reorder operations, the memory barrier in wakeup_source_deactivate()
> > doesn't affect CPU1 which can reorder reads.  In that case CPU1 may
> > very well decide to fetch event_count before it's modified and
> > events_in_progress after it's been updated, so pm_wakeup_pending()
> > may fail to detect a wakeup event.  This issue can be addressed by
> > adding a read memory barrier in pm_wakeup_pending() that will enforce
> > events_in_progress to be read before event_count.
> > 
> > For similar reason, a read memory barrier should be added to
> > pm_get_wakeup_count().
> 
> How come this is implemented using memory barriers rather than a lock?  
> Is it because this is potentially a fairly hot path?
> 
> New memory barriers are supposed to have comments present in the code, 
> explaining why they are needed.
> 
> Ideally you could do away with the need for synchronization entirely.  
> For example, events_in_progress and event_count could be stored as two 
> 16-bit values stuffed into a single atomic variable.  Then they could 
> both be read or updated simultaneously.

OK, the patch below appears to work for me.  Can you have a look at it, please?

Rafael


---
 drivers/base/power/wakeup.c |   82 +++++++++++++++++++++++++++++++-------------
 1 file changed, 58 insertions(+), 24 deletions(-)

Index: linux-2.6/drivers/base/power/wakeup.c
===================================================================
--- linux-2.6.orig/drivers/base/power/wakeup.c
+++ linux-2.6/drivers/base/power/wakeup.c
@@ -24,12 +24,48 @@
  */
 bool events_check_enabled;
 
-/* The counter of registered wakeup events. */
-static atomic_t event_count = ATOMIC_INIT(0);
-/* A preserved old value of event_count. */
+#define EVENT_COUNT_BITS	(sizeof(atomic_t) * 4)
+#define MAX_EVENT_COUNT		((1 << EVENT_COUNT_BITS) - 1)
+
+/* Combined counters of registered wakeup events and events in progress. */
+static atomic_t combined_event_count = ATOMIC_INIT(0);
+
+static unsigned int split_counters(unsigned int *inpr, unsigned int *cnt)
+{
+	unsigned int comb = atomic_read(&combined_event_count);
+
+	*inpr = (comb >> EVENT_COUNT_BITS);
+	*cnt = comb & MAX_EVENT_COUNT;
+	return comb;
+}
+
+static unsigned int merge_counters(unsigned int inpr, unsigned int cnt)
+{
+	return (inpr << EVENT_COUNT_BITS) | cnt;
+}
+
+static void update_events_in_progress(void)
+{
+	unsigned int cnt, inpr, old, new;
+
+	do {
+		old = split_counters(&inpr, &cnt);
+		new = merge_counters(inpr + 1, cnt);
+	} while (atomic_cmpxchg(&combined_event_count, old, new) != old);
+}
+
+static void update_counters(void)
+{
+	unsigned int cnt, inpr, old, new;
+
+	do {
+		old = split_counters(&inpr, &cnt);
+		new = merge_counters(inpr - 1, cnt + 1);
+	} while (atomic_cmpxchg(&combined_event_count, old, new) != old);
+}
+
+/* A preserved old value of event counter. */
 static unsigned int saved_count;
-/* The counter of wakeup events being processed. */
-static atomic_t events_in_progress = ATOMIC_INIT(0);
 
 static DEFINE_SPINLOCK(events_lock);
 
@@ -333,7 +369,7 @@ static void wakeup_source_activate(struc
 	ws->timer_expires = jiffies;
 	ws->last_time = ktime_get();
 
-	atomic_inc(&events_in_progress);
+	update_events_in_progress();
 }
 
 /**
@@ -419,15 +455,7 @@ static void wakeup_source_deactivate(str
 
 	del_timer(&ws->timer);
 
-	/*
-	 * event_count has to be incremented before events_in_progress is
-	 * modified, so that the callers of pm_check_wakeup_events() and
-	 * pm_save_wakeup_count() don't see the old value of event_count and
-	 * events_in_progress equal to zero at the same time.
-	 */
-	atomic_inc(&event_count);
-	smp_mb__before_atomic_dec();
-	atomic_dec(&events_in_progress);
+	update_counters();
 }
 
 /**
@@ -582,8 +610,10 @@ bool pm_wakeup_pending(void)
 
 	spin_lock_irqsave(&events_lock, flags);
 	if (events_check_enabled) {
-		ret = ((unsigned int)atomic_read(&event_count) != saved_count)
-			|| atomic_read(&events_in_progress);
+		unsigned int inpr, cnt;
+
+		split_counters(&inpr, &cnt);
+		ret = (cnt != saved_count || inpr > 0);
 		events_check_enabled = !ret;
 	}
 	spin_unlock_irqrestore(&events_lock, flags);
@@ -605,19 +635,22 @@ bool pm_wakeup_pending(void)
  */
 bool pm_get_wakeup_count(unsigned int *count)
 {
-	bool ret;
+	unsigned int inpr, cnt;
 
 	if (capable(CAP_SYS_ADMIN))
 		events_check_enabled = false;
 
-	while (atomic_read(&events_in_progress) && !signal_pending(current)) {
+	for (;;) {
+		split_counters(&inpr, &cnt);
+		if (inpr == 0 || signal_pending(current))
+			break;
 		pm_wakeup_update_hit_counts();
 		schedule_timeout_interruptible(msecs_to_jiffies(TIMEOUT));
 	}
 
-	ret = !atomic_read(&events_in_progress);
-	*count = atomic_read(&event_count);
-	return ret;
+	split_counters(&inpr, &cnt);
+	*count = cnt;
+	return !inpr;
 }
 
 /**
@@ -631,11 +664,12 @@ bool pm_get_wakeup_count(unsigned int *c
  */
 bool pm_save_wakeup_count(unsigned int count)
 {
+	unsigned int inpr, cnt;
 	bool ret = false;
 
 	spin_lock_irq(&events_lock);
-	if (count == (unsigned int)atomic_read(&event_count)
-	    && !atomic_read(&events_in_progress)) {
+	split_counters(&inpr, &cnt);
+	if (cnt == count && inpr == 0) {
 		saved_count = count;
 		events_check_enabled = true;
 		ret = true;

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-26 22:53     ` Rafael J. Wysocki
  2011-01-27 19:00       ` Alan Stern
@ 2011-01-27 19:00       ` Alan Stern
  2011-01-27 21:19         ` Rafael J. Wysocki
  2011-01-27 21:19         ` Rafael J. Wysocki
  1 sibling, 2 replies; 18+ messages in thread
From: Alan Stern @ 2011-01-27 19:00 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux-pm mailing list, LKML

On Wed, 26 Jan 2011, Rafael J. Wysocki wrote:

> > Ideally you could do away with the need for synchronization entirely.  
> > For example, events_in_progress and event_count could be stored as two 
> > 16-bit values stuffed into a single atomic variable.  Then they could 
> > both be read or updated simultaneously.
> 
> OK, the patch below appears to work for me.  Can you have a look at it, please?
> 
> Rafael
> 
> 
> ---
>  drivers/base/power/wakeup.c |   82 +++++++++++++++++++++++++++++++-------------
>  1 file changed, 58 insertions(+), 24 deletions(-)
> 
> Index: linux-2.6/drivers/base/power/wakeup.c
> ===================================================================
> --- linux-2.6.orig/drivers/base/power/wakeup.c
> +++ linux-2.6/drivers/base/power/wakeup.c
> @@ -24,12 +24,48 @@
>   */
>  bool events_check_enabled;
>  
> -/* The counter of registered wakeup events. */
> -static atomic_t event_count = ATOMIC_INIT(0);
> -/* A preserved old value of event_count. */
> +#define EVENT_COUNT_BITS	(sizeof(atomic_t) * 4)

This should be sizeof(int), since atomic_t variables store int values.  
In principle, the atomic_t might include other things along with the 
stored value (it used to, on some architectures).

> +#define MAX_EVENT_COUNT		((1 << EVENT_COUNT_BITS) - 1)
> +
> +/* Combined counters of registered wakeup events and events in progress. */
> +static atomic_t combined_event_count = ATOMIC_INIT(0);
> +

Comment here, explaining that this is needed so that the in_progress 
and count parts can be operated on simultaneously.

> +static unsigned int split_counters(unsigned int *inpr, unsigned int *cnt)
> +{
> +	unsigned int comb = atomic_read(&combined_event_count);
> +
> +	*inpr = (comb >> EVENT_COUNT_BITS);
> +	*cnt = comb & MAX_EVENT_COUNT;

The inpr part is bounded, whereas the cnt part increments without 
limit.  Therefore inpr should occupy the lower bits and cnt should 
occupy the upper bits, where overflow isn't an issue.

> +	return comb;
> +}
> +
> +static unsigned int merge_counters(unsigned int inpr, unsigned int cnt)
> +{
> +	return (inpr << EVENT_COUNT_BITS) | cnt;
> +}
> +
> +static void update_events_in_progress(void)
> +{
> +	unsigned int cnt, inpr, old, new;
> +
> +	do {
> +		old = split_counters(&inpr, &cnt);
> +		new = merge_counters(inpr + 1, cnt);
> +	} while (atomic_cmpxchg(&combined_event_count, old, new) != old);
> +}

Just atomic_inc(&combined_event_count) -- after inpr has been moved to 
the lower bits.

> +
> +static void update_counters(void)
> +{
> +	unsigned int cnt, inpr, old, new;
> +
> +	do {
> +		old = split_counters(&inpr, &cnt);
> +		new = merge_counters(inpr - 1, cnt + 1);
> +	} while (atomic_cmpxchg(&combined_event_count, old, new) != old);
> +}

Just atomic_add(MAX_EVENT_COUNT, &combined_event_count).

The rest looks fine.

Alan Stern


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-27 19:00       ` Alan Stern
  2011-01-27 21:19         ` Rafael J. Wysocki
@ 2011-01-27 21:19         ` Rafael J. Wysocki
  2011-01-28 20:23           ` Alan Stern
  2011-01-28 20:23           ` Alan Stern
  1 sibling, 2 replies; 18+ messages in thread
From: Rafael J. Wysocki @ 2011-01-27 21:19 UTC (permalink / raw)
  To: Alan Stern; +Cc: Linux-pm mailing list, LKML

On Thursday, January 27, 2011, Alan Stern wrote:
> On Wed, 26 Jan 2011, Rafael J. Wysocki wrote:
> 
> > > Ideally you could do away with the need for synchronization entirely.  
> > > For example, events_in_progress and event_count could be stored as two 
> > > 16-bit values stuffed into a single atomic variable.  Then they could 
> > > both be read or updated simultaneously.
> > 
> > OK, the patch below appears to work for me.  Can you have a look at it, please?
> > 
> > Rafael
> > 
> > 
> > ---
> >  drivers/base/power/wakeup.c |   82 +++++++++++++++++++++++++++++++-------------
> >  1 file changed, 58 insertions(+), 24 deletions(-)
> > 
> > Index: linux-2.6/drivers/base/power/wakeup.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/base/power/wakeup.c
> > +++ linux-2.6/drivers/base/power/wakeup.c
> > @@ -24,12 +24,48 @@
> >   */
> >  bool events_check_enabled;
> >  
> > -/* The counter of registered wakeup events. */
> > -static atomic_t event_count = ATOMIC_INIT(0);
> > -/* A preserved old value of event_count. */
> > +#define EVENT_COUNT_BITS	(sizeof(atomic_t) * 4)
> 
> This should be sizeof(int), since atomic_t variables store int values.  
> In principle, the atomic_t might include other things along with the 
> stored value (it used to, on some architectures).

OK

> > +#define MAX_EVENT_COUNT		((1 << EVENT_COUNT_BITS) - 1)
> > +
> > +/* Combined counters of registered wakeup events and events in progress. */
> > +static atomic_t combined_event_count = ATOMIC_INIT(0);
> > +
> 
> Comment here, explaining that this is needed so that the in_progress 
> and count parts can be operated on simultaneously.

OK

> > +static unsigned int split_counters(unsigned int *inpr, unsigned int *cnt)
> > +{
> > +	unsigned int comb = atomic_read(&combined_event_count);
> > +
> > +	*inpr = (comb >> EVENT_COUNT_BITS);
> > +	*cnt = comb & MAX_EVENT_COUNT;
> 
> The inpr part is bounded, whereas the cnt part increments without 
> limit.  Therefore inpr should occupy the lower bits and cnt should 
> occupy the upper bits, where overflow isn't an issue.

Well, it wouldn't matter if merge_counters() weren't buggy (cnt should have
been masked before applying the | in there).  Plus the simplifications below
require inpr to be the lower bits, right?

> > +	return comb;
> > +}
> > +
> > +static unsigned int merge_counters(unsigned int inpr, unsigned int cnt)
> > +{
> > +	return (inpr << EVENT_COUNT_BITS) | cnt;
> > +}
> > +
> > +static void update_events_in_progress(void)
> > +{
> > +	unsigned int cnt, inpr, old, new;
> > +
> > +	do {
> > +		old = split_counters(&inpr, &cnt);
> > +		new = merge_counters(inpr + 1, cnt);
> > +	} while (atomic_cmpxchg(&combined_event_count, old, new) != old);
> > +}
> 
> Just atomic_inc(&combined_event_count) -- after inpr has been moved to 
> the lower bits.

OK

> > +
> > +static void update_counters(void)
> > +{
> > +	unsigned int cnt, inpr, old, new;
> > +
> > +	do {
> > +		old = split_counters(&inpr, &cnt);
> > +		new = merge_counters(inpr - 1, cnt + 1);
> > +	} while (atomic_cmpxchg(&combined_event_count, old, new) != old);
> > +}
> 
> Just atomic_add(MAX_EVENT_COUNT, &combined_event_count).

Right.

> The rest looks fine.

OK

Updated patch is appended.

Thanks,
Rafael

---
 drivers/base/power/wakeup.c |   61 ++++++++++++++++++++++++++++----------------
 1 file changed, 39 insertions(+), 22 deletions(-)

Index: linux-2.6/drivers/base/power/wakeup.c
===================================================================
--- linux-2.6.orig/drivers/base/power/wakeup.c
+++ linux-2.6/drivers/base/power/wakeup.c
@@ -24,12 +24,26 @@
  */
 bool events_check_enabled;
 
-/* The counter of registered wakeup events. */
-static atomic_t event_count = ATOMIC_INIT(0);
-/* A preserved old value of event_count. */
+/*
+ * Combined counters of registered wakeup events and wakeup events in progress.
+ * They need to be modified together atomically, so it's better to use one
+ * atomic variable to hold them both.
+ */
+static atomic_t combined_event_count = ATOMIC_INIT(0);
+
+#define IN_PROGRESS_BITS	(sizeof(int) * 4)
+#define MAX_IN_PROGRESS		((1 << IN_PROGRESS_BITS) - 1)
+
+static void split_counters(unsigned int *cnt, unsigned int *inpr)
+{
+	unsigned int comb = atomic_read(&combined_event_count);
+
+	*cnt = (comb >> IN_PROGRESS_BITS);
+	*inpr = comb & MAX_IN_PROGRESS;
+}
+
+/* A preserved old value of the events counter. */
 static unsigned int saved_count;
-/* The counter of wakeup events being processed. */
-static atomic_t events_in_progress = ATOMIC_INIT(0);
 
 static DEFINE_SPINLOCK(events_lock);
 
@@ -333,7 +347,8 @@ static void wakeup_source_activate(struc
 	ws->timer_expires = jiffies;
 	ws->last_time = ktime_get();
 
-	atomic_inc(&events_in_progress);
+	/* Increment the counter of events in progress. */
+	atomic_inc(&combined_event_count);
 }
 
 /**
@@ -420,14 +435,10 @@ static void wakeup_source_deactivate(str
 	del_timer(&ws->timer);
 
 	/*
-	 * event_count has to be incremented before events_in_progress is
-	 * modified, so that the callers of pm_check_wakeup_events() and
-	 * pm_save_wakeup_count() don't see the old value of event_count and
-	 * events_in_progress equal to zero at the same time.
+	 * Increment the counter of registered wakeup events and decrement the
+	 * counter of wakeup events in progress simultaneously.
 	 */
-	atomic_inc(&event_count);
-	smp_mb__before_atomic_dec();
-	atomic_dec(&events_in_progress);
+	atomic_add(MAX_IN_PROGRESS, &combined_event_count);
 }
 
 /**
@@ -582,8 +593,10 @@ bool pm_wakeup_pending(void)
 
 	spin_lock_irqsave(&events_lock, flags);
 	if (events_check_enabled) {
-		ret = ((unsigned int)atomic_read(&event_count) != saved_count)
-			|| atomic_read(&events_in_progress);
+		unsigned int cnt, inpr;
+
+		split_counters(&cnt, &inpr);
+		ret = (cnt != saved_count || inpr > 0);
 		events_check_enabled = !ret;
 	}
 	spin_unlock_irqrestore(&events_lock, flags);
@@ -605,19 +618,22 @@ bool pm_wakeup_pending(void)
  */
 bool pm_get_wakeup_count(unsigned int *count)
 {
-	bool ret;
+	unsigned int cnt, inpr;
 
 	if (capable(CAP_SYS_ADMIN))
 		events_check_enabled = false;
 
-	while (atomic_read(&events_in_progress) && !signal_pending(current)) {
+	for (;;) {
+		split_counters(&cnt, &inpr);
+		if (inpr == 0 || signal_pending(current))
+			break;
 		pm_wakeup_update_hit_counts();
 		schedule_timeout_interruptible(msecs_to_jiffies(TIMEOUT));
 	}
 
-	ret = !atomic_read(&events_in_progress);
-	*count = atomic_read(&event_count);
-	return ret;
+	split_counters(&cnt, &inpr);
+	*count = cnt;
+	return !inpr;
 }
 
 /**
@@ -631,11 +647,12 @@ bool pm_get_wakeup_count(unsigned int *c
  */
 bool pm_save_wakeup_count(unsigned int count)
 {
+	unsigned int cnt, inpr;
 	bool ret = false;
 
 	spin_lock_irq(&events_lock);
-	if (count == (unsigned int)atomic_read(&event_count)
-	    && !atomic_read(&events_in_progress)) {
+	split_counters(&cnt, &inpr);
+	if (cnt == count && inpr == 0) {
 		saved_count = count;
 		events_check_enabled = true;
 		ret = true;

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-27 21:19         ` Rafael J. Wysocki
@ 2011-01-28 20:23           ` Alan Stern
  2011-01-28 20:23           ` Alan Stern
  1 sibling, 0 replies; 18+ messages in thread
From: Alan Stern @ 2011-01-28 20:23 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux-pm mailing list, LKML

On Thu, 27 Jan 2011, Rafael J. Wysocki wrote:

> Updated patch is appended.
> 
> Thanks,
> Rafael
> 
> ---
>  drivers/base/power/wakeup.c |   61 ++++++++++++++++++++++++++++----------------
>  1 file changed, 39 insertions(+), 22 deletions(-)
> 
> Index: linux-2.6/drivers/base/power/wakeup.c
> ===================================================================
> --- linux-2.6.orig/drivers/base/power/wakeup.c
> +++ linux-2.6/drivers/base/power/wakeup.c
> @@ -24,12 +24,26 @@
>   */
>  bool events_check_enabled;
>  
> -/* The counter of registered wakeup events. */
> -static atomic_t event_count = ATOMIC_INIT(0);
> -/* A preserved old value of event_count. */
> +/*
> + * Combined counters of registered wakeup events and wakeup events in progress.
> + * They need to be modified together atomically, so it's better to use one
> + * atomic variable to hold them both.
> + */
> +static atomic_t combined_event_count = ATOMIC_INIT(0);
> +
> +#define IN_PROGRESS_BITS	(sizeof(int) * 4)
> +#define MAX_IN_PROGRESS		((1 << IN_PROGRESS_BITS) - 1)
> +
> +static void split_counters(unsigned int *cnt, unsigned int *inpr)
> +{
> +	unsigned int comb = atomic_read(&combined_event_count);
> +
> +	*cnt = (comb >> IN_PROGRESS_BITS);
> +	*inpr = comb & MAX_IN_PROGRESS;
> +}
> +
> +/* A preserved old value of the events counter. */
>  static unsigned int saved_count;
> -/* The counter of wakeup events being processed. */
> -static atomic_t events_in_progress = ATOMIC_INIT(0);
>  
>  static DEFINE_SPINLOCK(events_lock);
>  
> @@ -333,7 +347,8 @@ static void wakeup_source_activate(struc
>  	ws->timer_expires = jiffies;
>  	ws->last_time = ktime_get();
>  
> -	atomic_inc(&events_in_progress);
> +	/* Increment the counter of events in progress. */
> +	atomic_inc(&combined_event_count);
>  }
>  
>  /**
> @@ -420,14 +435,10 @@ static void wakeup_source_deactivate(str
>  	del_timer(&ws->timer);
>  
>  	/*
> -	 * event_count has to be incremented before events_in_progress is
> -	 * modified, so that the callers of pm_check_wakeup_events() and
> -	 * pm_save_wakeup_count() don't see the old value of event_count and
> -	 * events_in_progress equal to zero at the same time.
> +	 * Increment the counter of registered wakeup events and decrement the
> +	 * counter of wakeup events in progress simultaneously.
>  	 */
> -	atomic_inc(&event_count);
> -	smp_mb__before_atomic_dec();
> -	atomic_dec(&events_in_progress);
> +	atomic_add(MAX_IN_PROGRESS, &combined_event_count);
>  }
>  
>  /**
> @@ -582,8 +593,10 @@ bool pm_wakeup_pending(void)
>  
>  	spin_lock_irqsave(&events_lock, flags);
>  	if (events_check_enabled) {
> -		ret = ((unsigned int)atomic_read(&event_count) != saved_count)
> -			|| atomic_read(&events_in_progress);
> +		unsigned int cnt, inpr;
> +
> +		split_counters(&cnt, &inpr);
> +		ret = (cnt != saved_count || inpr > 0);
>  		events_check_enabled = !ret;
>  	}
>  	spin_unlock_irqrestore(&events_lock, flags);
> @@ -605,19 +618,22 @@ bool pm_wakeup_pending(void)
>   */
>  bool pm_get_wakeup_count(unsigned int *count)
>  {
> -	bool ret;
> +	unsigned int cnt, inpr;
>  
>  	if (capable(CAP_SYS_ADMIN))
>  		events_check_enabled = false;
>  
> -	while (atomic_read(&events_in_progress) && !signal_pending(current)) {
> +	for (;;) {
> +		split_counters(&cnt, &inpr);
> +		if (inpr == 0 || signal_pending(current))
> +			break;
>  		pm_wakeup_update_hit_counts();
>  		schedule_timeout_interruptible(msecs_to_jiffies(TIMEOUT));
>  	}
>  
> -	ret = !atomic_read(&events_in_progress);
> -	*count = atomic_read(&event_count);
> -	return ret;
> +	split_counters(&cnt, &inpr);
> +	*count = cnt;
> +	return !inpr;
>  }
>  
>  /**
> @@ -631,11 +647,12 @@ bool pm_get_wakeup_count(unsigned int *c
>   */
>  bool pm_save_wakeup_count(unsigned int count)
>  {
> +	unsigned int cnt, inpr;
>  	bool ret = false;
>  
>  	spin_lock_irq(&events_lock);
> -	if (count == (unsigned int)atomic_read(&event_count)
> -	    && !atomic_read(&events_in_progress)) {
> +	split_counters(&cnt, &inpr);
> +	if (cnt == count && inpr == 0) {
>  		saved_count = count;
>  		events_check_enabled = true;
>  		ret = true;

This looks okay.  Sure, it's a little clunky, but it's better than the 
alternatives.

Alan Stern


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 1/3] PM / Wakeup: Add missing memory barriers
  2011-01-27 21:19         ` Rafael J. Wysocki
  2011-01-28 20:23           ` Alan Stern
@ 2011-01-28 20:23           ` Alan Stern
  1 sibling, 0 replies; 18+ messages in thread
From: Alan Stern @ 2011-01-28 20:23 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux-pm mailing list, LKML

On Thu, 27 Jan 2011, Rafael J. Wysocki wrote:

> Updated patch is appended.
> 
> Thanks,
> Rafael
> 
> ---
>  drivers/base/power/wakeup.c |   61 ++++++++++++++++++++++++++++----------------
>  1 file changed, 39 insertions(+), 22 deletions(-)
> 
> Index: linux-2.6/drivers/base/power/wakeup.c
> ===================================================================
> --- linux-2.6.orig/drivers/base/power/wakeup.c
> +++ linux-2.6/drivers/base/power/wakeup.c
> @@ -24,12 +24,26 @@
>   */
>  bool events_check_enabled;
>  
> -/* The counter of registered wakeup events. */
> -static atomic_t event_count = ATOMIC_INIT(0);
> -/* A preserved old value of event_count. */
> +/*
> + * Combined counters of registered wakeup events and wakeup events in progress.
> + * They need to be modified together atomically, so it's better to use one
> + * atomic variable to hold them both.
> + */
> +static atomic_t combined_event_count = ATOMIC_INIT(0);
> +
> +#define IN_PROGRESS_BITS	(sizeof(int) * 4)
> +#define MAX_IN_PROGRESS		((1 << IN_PROGRESS_BITS) - 1)
> +
> +static void split_counters(unsigned int *cnt, unsigned int *inpr)
> +{
> +	unsigned int comb = atomic_read(&combined_event_count);
> +
> +	*cnt = (comb >> IN_PROGRESS_BITS);
> +	*inpr = comb & MAX_IN_PROGRESS;
> +}
> +
> +/* A preserved old value of the events counter. */
>  static unsigned int saved_count;
> -/* The counter of wakeup events being processed. */
> -static atomic_t events_in_progress = ATOMIC_INIT(0);
>  
>  static DEFINE_SPINLOCK(events_lock);
>  
> @@ -333,7 +347,8 @@ static void wakeup_source_activate(struc
>  	ws->timer_expires = jiffies;
>  	ws->last_time = ktime_get();
>  
> -	atomic_inc(&events_in_progress);
> +	/* Increment the counter of events in progress. */
> +	atomic_inc(&combined_event_count);
>  }
>  
>  /**
> @@ -420,14 +435,10 @@ static void wakeup_source_deactivate(str
>  	del_timer(&ws->timer);
>  
>  	/*
> -	 * event_count has to be incremented before events_in_progress is
> -	 * modified, so that the callers of pm_check_wakeup_events() and
> -	 * pm_save_wakeup_count() don't see the old value of event_count and
> -	 * events_in_progress equal to zero at the same time.
> +	 * Increment the counter of registered wakeup events and decrement the
> +	 * couter of wakeup events in progress simultaneously.
>  	 */
> -	atomic_inc(&event_count);
> -	smp_mb__before_atomic_dec();
> -	atomic_dec(&events_in_progress);
> +	atomic_add(MAX_IN_PROGRESS, &combined_event_count);
>  }
>  
>  /**
> @@ -582,8 +593,10 @@ bool pm_wakeup_pending(void)
>  
>  	spin_lock_irqsave(&events_lock, flags);
>  	if (events_check_enabled) {
> -		ret = ((unsigned int)atomic_read(&event_count) != saved_count)
> -			|| atomic_read(&events_in_progress);
> +		unsigned int cnt, inpr;
> +
> +		split_counters(&cnt, &inpr);
> +		ret = (cnt != saved_count || inpr > 0);
>  		events_check_enabled = !ret;
>  	}
>  	spin_unlock_irqrestore(&events_lock, flags);
> @@ -605,19 +618,22 @@ bool pm_wakeup_pending(void)
>   */
>  bool pm_get_wakeup_count(unsigned int *count)
>  {
> -	bool ret;
> +	unsigned int cnt, inpr;
>  
>  	if (capable(CAP_SYS_ADMIN))
>  		events_check_enabled = false;
>  
> -	while (atomic_read(&events_in_progress) && !signal_pending(current)) {
> +	for (;;) {
> +		split_counters(&cnt, &inpr);
> +		if (inpr == 0 || signal_pending(current))
> +			break;
>  		pm_wakeup_update_hit_counts();
>  		schedule_timeout_interruptible(msecs_to_jiffies(TIMEOUT));
>  	}
>  
> -	ret = !atomic_read(&events_in_progress);
> -	*count = atomic_read(&event_count);
> -	return ret;
> +	split_counters(&cnt, &inpr);
> +	*count = cnt;
> +	return !inpr;
>  }
>  
>  /**
> @@ -631,11 +647,12 @@ bool pm_get_wakeup_count(unsigned int *c
>   */
>  bool pm_save_wakeup_count(unsigned int count)
>  {
> +	unsigned int cnt, inpr;
>  	bool ret = false;
>  
>  	spin_lock_irq(&events_lock);
> -	if (count == (unsigned int)atomic_read(&event_count)
> -	    && !atomic_read(&events_in_progress)) {
> +	split_counters(&cnt, &inpr);
> +	if (cnt == count && inpr == 0) {
>  		saved_count = count;
>  		events_check_enabled = true;
>  		ret = true;

This looks okay.  Sure, it's a little clunky, but it's better than the 
alternatives.

Alan Stern


Thread overview: 18+ messages
2011-01-25  0:12 [PATCH 0/3] PM / Wakeup: Fixes in wakeup.c Rafael J. Wysocki
2011-01-25  0:14 ` [PATCH 1/3] PM / Wakeup: Add missing memory barriers Rafael J. Wysocki
2011-01-26 20:21   ` Alan Stern
2011-01-26 20:36     ` Rafael J. Wysocki
2011-01-26 22:53     ` Rafael J. Wysocki
2011-01-27 19:00       ` Alan Stern
2011-01-27 21:19         ` Rafael J. Wysocki
2011-01-28 20:23           ` Alan Stern
2011-01-25  0:15 ` [PATCH 2/3] PM / Wakeup: Make pm_save_wakeup_count() work as documented Rafael J. Wysocki
2011-01-25  0:16 ` [PATCH 3/3] PM / Wakeup: Don't update events_check_enabled in pm_get_wakeup_count() Rafael J. Wysocki