linux-pm.vger.kernel.org archive mirror
* [PATCH 1/1] PM / hibernate: memory_bm_find_bit -- tighten node optimisation
@ 2019-09-25 14:39 Andy Whitcroft
  2019-10-11  9:49 ` Rafael J. Wysocki
  0 siblings, 1 reply; 2+ messages in thread
From: Andy Whitcroft @ 2019-09-25 14:39 UTC (permalink / raw)
  To: linux-pm
  Cc: Rafael J. Wysocki, Len Brown, Pavel Machek, Andy Whitcroft,
	Andrea Righi, linux-kernel

When looking for a bit by number we make use of the cached result from
the preceding lookup to speed up the operation.  First we check whether
the requested pfn falls within the cached zone and, if not, look up the
new zone.  We then check whether the offset for that pfn falls within
the existing cached node.  This check happens regardless of whether the
node belongs to the zone we are now scanning.  With certain memory
layouts it is possible for this check to trigger falsely, temporarily
aliasing the pfn to a different bit.  This leads the hibernation code
to free memory which it never allocated, with the expected fallout.
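
To make the aliasing concrete, here is a minimal userspace sketch (not
kernel code; all zone and pfn values are hypothetical, and BM_BLOCK_MASK
is written out for 4 KiB pages, matching snapshot.c's
(1UL << (PAGE_SHIFT + 3)) - 1).  Two different pfns in two different
zones produce the same masked offset, so the stale cached node would be
reused:

#include <stdio.h>

#define BM_BLOCK_SHIFT	15UL	/* PAGE_SHIFT (12) + 3, as in snapshot.c */
#define BM_BLOCK_MASK	((1UL << BM_BLOCK_SHIFT) - 1)

int main(void)
{
	/* Lookup 1: a pfn in zone A; its node offset gets cached. */
	unsigned long zone_a_start = 0x0UL;
	unsigned long pfn_a = 0x8000UL;
	unsigned long cached_node_pfn = (pfn_a - zone_a_start) & ~BM_BLOCK_MASK;

	/* Lookup 2: a different pfn in a different zone ... */
	unsigned long zone_b_start = 0x100000UL;
	unsigned long pfn_b = 0x108000UL;
	unsigned long offset_b = (pfn_b - zone_b_start) & ~BM_BLOCK_MASK;

	/*
	 * ... yields the same masked offset, so without the zone check
	 * the cached node from zone A is wrongly reused and pfn_b's bit
	 * is taken from zone A's part of the bitmap.
	 */
	printf("cached %#lx, offset %#lx: %s\n", cached_node_pfn, offset_b,
	       cached_node_pfn == offset_b ? "false cache hit" : "cache miss");
	return 0;
}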

Ensure the zone we are scanning matches the cached zone before considering
the cached node.
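
Read as a predicate, the corrected fast-path test is simply the patched
condition from the hunk below, lifted here into a hypothetical helper
for clarity (the patch itself open-codes it):

/* Hypothetical helper, not part of the patch: the cached node is only
 * reusable when both the cached zone and the cached node offset match
 * this lookup. */
static bool cached_node_usable(struct memory_bitmap *bm,
			       struct mem_zone_bm_rtree *zone,
			       unsigned long pfn)
{
	return zone == bm->cur.zone &&
	       ((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn;
}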

Deep thanks go to Andrea for many, many, many hours of hacking and testing
that went into cornering this bug.

Reported-by: Andrea Righi <andrea.righi@canonical.com>
Tested-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
---
 kernel/power/snapshot.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 83105874f255..26b9168321e7 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -734,8 +734,15 @@ static int memory_bm_find_bit(struct memory_bitmap *bm, unsigned long pfn,
 	 * We have found the zone. Now walk the radix tree to find the leaf node
 	 * for our PFN.
 	 */
+
+	/*
+	 * If the zone we wish to scan is the current zone and the
+	 * pfn falls into the current node then we do not need to walk
+	 * the tree.
+	 */
 	node = bm->cur.node;
-	if (((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
+	if (zone == bm->cur.zone &&
+	    ((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
 		goto node_found;
 
 	node      = zone->rtree;
-- 
2.20.1



* Re: [PATCH 1/1] PM / hibernate: memory_bm_find_bit -- tighten node optimisation
  2019-09-25 14:39 [PATCH 1/1] PM / hibernate: memory_bm_find_bit -- tighten node optimisation Andy Whitcroft
@ 2019-10-11  9:49 ` Rafael J. Wysocki
  0 siblings, 0 replies; 2+ messages in thread
From: Rafael J. Wysocki @ 2019-10-11  9:49 UTC (permalink / raw)
  To: Andy Whitcroft
  Cc: linux-pm, Len Brown, Pavel Machek, Andrea Righi, linux-kernel

On Wednesday, September 25, 2019 4:39:12 PM CEST Andy Whitcroft wrote:
> When looking for a bit by number we make use of the cached result from
> the preceding lookup to speed up the operation.  First we check whether
> the requested pfn falls within the cached zone and, if not, look up the
> new zone.  We then check whether the offset for that pfn falls within
> the existing cached node.  This check happens regardless of whether the
> node belongs to the zone we are now scanning.  With certain memory
> layouts it is possible for this check to trigger falsely, temporarily
> aliasing the pfn to a different bit.  This leads the hibernation code
> to free memory which it never allocated, with the expected fallout.
> 
> Ensure the zone we are scanning matches the cached zone before considering
> the cached node.
> 
> Deep thanks go to Andrea for many, many, many hours of hacking and testing
> that went into cornering this bug.
> 
> Reported-by: Andrea Righi <andrea.righi@canonical.com>
> Tested-by: Andrea Righi <andrea.righi@canonical.com>
> Signed-off-by: Andy Whitcroft <apw@canonical.com>
> ---
>  kernel/power/snapshot.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 83105874f255..26b9168321e7 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -734,8 +734,15 @@ static int memory_bm_find_bit(struct memory_bitmap *bm, unsigned long pfn,
>  	 * We have found the zone. Now walk the radix tree to find the leaf node
>  	 * for our PFN.
>  	 */
> +
> +	/*
> +	 * If the zone we wish to scan is the current zone and the
> +	 * pfn falls into the current node then we do not need to walk
> +	 * the tree.
> +	 */
>  	node = bm->cur.node;
> -	if (((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
> +	if (zone == bm->cur.zone &&
> +	    ((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
>  		goto node_found;
>  
>  	node      = zone->rtree;
> 

Applying as 5.5 material, thanks!
