linux-kernel.vger.kernel.org archive mirror
* Re: Patch "mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem" has been added to the 3.3-stable tree
       [not found] <13325234973644@kroah.org>
@ 2012-03-29  5:36 ` Nikola Ciprich
  2012-03-29  9:31   ` Mel Gorman
  0 siblings, 1 reply; 2+ messages in thread
From: Nikola Ciprich @ 2012-03-29  5:36 UTC (permalink / raw)
  To: gregkh; +Cc: mel, stable, linux-kernel mlist


Hi,

I'm not 100% sure, but I think this one could go to 3.0.x as well, am I right?
If so, could I try to provide a backport? (it doesn't apply cleanly.)
Mel, would you care to review it then? Or do you plan to send your own backport?

cheers!

nik


On Fri, Mar 23, 2012 at 10:24:57AM -0700, gregkh@linuxfoundation.org wrote:
> 
> This is a note to let you know that I've just added the patch titled
> 
>     mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem
> 
> to the 3.3-stable tree which can be found at:
>     http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
> 
> The filename of the patch is:
>      mm-vmscan-forcibly-scan-highmem-if-there-are-too-many-buffer_heads-pinning-highmem.patch
> and it can be found in the queue-3.3 subdirectory.
> 
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable@vger.kernel.org> know about it.
> 
> 
> From cc715d99e529d470dde2f33a6614f255adea71f3 Mon Sep 17 00:00:00 2001
> From: Mel Gorman <mel@csn.ul.ie>
> Date: Wed, 21 Mar 2012 16:34:00 -0700
> Subject: mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem
> 
> From: Mel Gorman <mel@csn.ul.ie>
> 
> commit cc715d99e529d470dde2f33a6614f255adea71f3 upstream.
> 
> Stuart Foster reported on bugzilla that copying large amounts of data
> from NTFS caused an OOM kill on 32-bit x86 with 16G of memory.  Andrew
> Morton correctly identified the problem: NTFS was using a 512-byte block
> size, meaning each page had 8 buffer_heads in low memory pinning it.
> 
> In the past, direct reclaim used to scan highmem even if the allocating
> process did not specify __GFP_HIGHMEM, but it no longer does.  kswapd
> will no longer reclaim from zones that are above the high watermark.
> The intention in both cases was to minimise unnecessary reclaim.  The
> downside is that on machines with large amounts of highmem, lowmem can
> be fully consumed by buffer_heads with nothing trying to free them.
> 
> The following patch is based on a suggestion by Andrew Morton to extend
> the buffer_heads_over_limit case to force kswapd and direct reclaim to
> scan the highmem zone regardless of the allocation request or watermarks.
> 
> Addresses https://bugzilla.kernel.org/show_bug.cgi?id=42578
> 
> [hughd@google.com: move buffer_heads_over_limit check up]
> [akpm@linux-foundation.org: buffer_heads_over_limit is unlikely]
> Reported-by: Stuart Foster <smf.linux@ntlworld.com>
> Tested-by: Stuart Foster <smf.linux@ntlworld.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Christoph Lameter <cl@linux.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> 
> 
> ---
>  mm/vmscan.c |   42 +++++++++++++++++++++++++++++-------------
>  1 file changed, 29 insertions(+), 13 deletions(-)
> 
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1643,18 +1643,6 @@ static void move_active_pages_to_lru(str
>  	unsigned long pgmoved = 0;
>  	struct page *page;
>  
> -	if (buffer_heads_over_limit) {
> -		spin_unlock_irq(&zone->lru_lock);
> -		list_for_each_entry(page, list, lru) {
> -			if (page_has_private(page) && trylock_page(page)) {
> -				if (page_has_private(page))
> -					try_to_release_page(page, 0);
> -				unlock_page(page);
> -			}
> -		}
> -		spin_lock_irq(&zone->lru_lock);
> -	}
> -
>  	while (!list_empty(list)) {
>  		struct lruvec *lruvec;
>  
> @@ -1737,6 +1725,14 @@ static void shrink_active_list(unsigned
>  			continue;
>  		}
>  
> +		if (unlikely(buffer_heads_over_limit)) {
> +			if (page_has_private(page) && trylock_page(page)) {
> +				if (page_has_private(page))
> +					try_to_release_page(page, 0);
> +				unlock_page(page);
> +			}
> +		}
> +
>  		if (page_referenced(page, 0, mz->mem_cgroup, &vm_flags)) {
>  			nr_rotated += hpage_nr_pages(page);
>  			/*
> @@ -2235,6 +2231,14 @@ static bool shrink_zones(int priority, s
>  	unsigned long nr_soft_scanned;
>  	bool aborted_reclaim = false;
>  
> +	/*
> +	 * If the number of buffer_heads in the machine exceeds the maximum
> +	 * allowed level, force direct reclaim to scan the highmem zone as
> +	 * highmem pages could be pinning lowmem pages storing buffer_heads
> +	 */
> +	if (buffer_heads_over_limit)
> +		sc->gfp_mask |= __GFP_HIGHMEM;
> +
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist,
>  					gfp_zone(sc->gfp_mask), sc->nodemask) {
>  		if (!populated_zone(zone))
> @@ -2724,6 +2728,17 @@ loop_again:
>  			 */
>  			age_active_anon(zone, &sc, priority);
>  
> +			/*
> +			 * If the number of buffer_heads in the machine
> +			 * exceeds the maximum allowed level and this node
> +			 * has a highmem zone, force kswapd to reclaim from
> +			 * it to relieve lowmem pressure.
> +			 */
> +			if (buffer_heads_over_limit && is_highmem_idx(i)) {
> +				end_zone = i;
> +				break;
> +			}
> +
>  			if (!zone_watermark_ok_safe(zone, order,
>  					high_wmark_pages(zone), 0, 0)) {
>  				end_zone = i;
> @@ -2786,7 +2801,8 @@ loop_again:
>  				(zone->present_pages +
>  					KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
>  				KSWAPD_ZONE_BALANCE_GAP_RATIO);
> -			if (!zone_watermark_ok_safe(zone, order,
> +			if ((buffer_heads_over_limit && is_highmem_idx(i)) ||
> +				    !zone_watermark_ok_safe(zone, order,
>  					high_wmark_pages(zone) + balance_gap,
>  					end_zone, 0)) {
>  				shrink_zone(priority, zone, &sc);
> 
> 
> Patches currently in stable-queue which might be from mel@csn.ul.ie are
> 
> queue-3.3/mm-vmscan-forcibly-scan-highmem-if-there-are-too-many-buffer_heads-pinning-highmem.patch
> 

-- 
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava

tel.:   +420 596 603 142
fax:    +420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: servis@linuxbox.cz
-------------------------------------



* Re: Patch "mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem" has been added to the 3.3-stable tree
  2012-03-29  5:36 ` Patch "mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem" has been added to the 3.3-stable tree Nikola Ciprich
@ 2012-03-29  9:31   ` Mel Gorman
  0 siblings, 0 replies; 2+ messages in thread
From: Mel Gorman @ 2012-03-29  9:31 UTC (permalink / raw)
  To: Nikola Ciprich; +Cc: gregkh, stable, linux-kernel mlist

On Thu, Mar 29, 2012 at 07:36:39AM +0200, Nikola Ciprich wrote:
> Hi,
> 
> I'm not 100% sure, but I think this one could go to 3.0.x as well, am I right?
> If so, could I try to provide a backport? (it doesn't apply cleanly.)

If you want to try a backport, go right ahead. There is an indirect dependency
on [2bcf8879: mm: take pagevecs off reclaim stack] which is not suitable
for backporting to -stable, but handling that should be easy.

> Mel, would you care to review it then? Or do you plan to send your own backport?
> 

I had not planned my own backport because the type of bug made it a low
priority for me (copying NTFS with a small block size on 32-bit x86 with 16G
of RAM). If you backport it, I'll review the result. I will be at LSF/MM next
week, so I might be a bit slow to respond, but I'll get around to it. Thanks.

-- 
Mel Gorman
SUSE Labs

