From: Mark Hemment <markhe@veritas.com>
To: Linus Torvalds <torvalds@transmeta.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH] allocation looping + kswapd CPU cycles
Date: Tue, 8 May 2001 12:56:02 +0100 (BST)
Message-ID: <Pine.LNX.4.21.0105081225520.31900-100000@alloc>


  In 2.4.3pre6, the code in page_alloc.c:__alloc_pages() changed from:

	try_to_free_pages(gfp_mask);
	wakeup_bdflush();
	if (!order)
		goto try_again;
to
	try_to_free_pages(gfp_mask);
	wakeup_bdflush();
	goto try_again;


  This introduced the effect that a non-zero order, __GFP_WAIT allocation
(without PF_MEMALLOC set) never returns failure.  The allocation keeps
looping in __alloc_pages(), kicking kswapd, until it succeeds.

  If there is plenty of memory in the free-pools and on the inactive-lists,
free_shortage() returns false, so kswapd does nothing to 'improve' the
state of those free-pools/inactive-lists.

  If there is nothing else changing/improving the free-pools or
inactive-lists, the allocation loops forever (kicking kswapd).
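
  To make the failure mode concrete, below is a minimal user-space sketch
of the control flow as described above.  The names only loosely mirror the
kernel functions and the behaviour is simulated; it is not the actual
__alloc_pages() code.

/*
 * Simplified model: plenty of free pages overall, but the free lists are
 * too fragmented to satisfy a non-zero order request.
 */
#include <stdio.h>
#include <stdbool.h>

/* Stand-in: no overall shortage is reported. */
static bool free_shortage(void)
{
	return false;
}

/* Stand-in: kswapd only does work when free_shortage() reports one. */
static void kick_kswapd(void)
{
	if (free_shortage())
		printf("kswapd: reclaiming\n");
	/* else: nothing to do - the pools look 'healthy', just fragmented */
}

/* Stand-in: a fragmented free list never yields a contiguous block. */
static bool contiguous_block_available(unsigned int order)
{
	return order == 0;
}

static bool alloc_pages_model(unsigned int order, int max_loops)
{
	int loop = 0;

try_again:
	if (contiguous_block_available(order))
		return true;

	kick_kswapd();

	/* 2.4.3pre6 behaviour: retry unconditionally, so a non-zero order
	 * request spins here for ever.  The patch below bounds the retry. */
	if (!order || loop++ < max_loops)
		goto try_again;

	return false;
}

int main(void)
{
	/* With the bound, an order-3 attempt fails cleanly after the
	 * retries; drop the bound and this never returns. */
	printf("order-3 allocation %s\n",
	       alloc_pages_model(3, 2) ? "succeeded" : "failed");
	return 0;
}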

  Does anyone know why the 2.4.3pre6 change was made?

  The attached patch (against 2.4.5-pre1) fixes the looping symptom by
adding a counter and looping only twice for non-zero order allocations.

  The real fix is to measure fragmentation and the progress of kswapd, but
that is too drastic for 2.4.x.
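
  As an illustration only, a progress-bounded retry might look something
like the sketch below.  reclaim_progress() is an assumed hook that would
report how many blocks of the requested order reclaim has produced since
the last attempt; nothing like it exists in 2.4, which is part of why the
real fix is too drastic for the stable series.

/*
 * Hypothetical sketch: bound the retry by observed reclaim progress
 * instead of a fixed count.  The stubs only let the sketch compile.
 */
#include <stdio.h>
#include <stdbool.h>

/* Assumed hook: blocks of 'order' freed/coalesced since the last call. */
static unsigned long reclaim_progress(unsigned int order)
{
	(void)order;
	return 0;		/* fragmented and not improving */
}

/* Stand-in: the fragmented free list never yields a higher-order block. */
static bool contiguous_block_available(unsigned int order)
{
	return order == 0;
}

static bool alloc_bounded_by_progress(unsigned int order)
{
	for (;;) {
		if (contiguous_block_available(order))
			return true;

		/*
		 * Keep retrying only while reclaim is still making
		 * progress for this order; otherwise fail instead of
		 * spinning and kicking kswapd for ever.
		 */
		if (order && reclaim_progress(order) == 0)
			return false;
	}
}

int main(void)
{
	printf("order-3 attempt %s\n",
	       alloc_bounded_by_progress(3) ? "succeeded" : "failed");
	return 0;
}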

Mark


diff -ur linux-2.4.5-pre1/mm/page_alloc.c markhe-2.4.5-pre1/mm/page_alloc.c
--- linux-2.4.5-pre1/mm/page_alloc.c	Fri Apr 27 22:18:08 2001
+++ markhe-2.4.5-pre1/mm/page_alloc.c	Tue May  8 13:42:12 2001
@@ -275,6 +275,7 @@
 {
 	zone_t **zone;
 	int direct_reclaim = 0;
+	int loop;
 	unsigned int gfp_mask = zonelist->gfp_mask;
 	struct page * page;
 
@@ -313,6 +314,7 @@
 			&& nr_inactive_dirty_pages >= freepages.high)
 		wakeup_bdflush(0);
 
+	loop = 0;
 try_again:
 	/*
 	 * First, see if we have any zones with lots of free memory.
@@ -453,7 +455,8 @@
 		if (gfp_mask & __GFP_WAIT) {
 			memory_pressure++;
 			try_to_free_pages(gfp_mask);
-			goto try_again;
+			if (!order || loop++ < 2)
+				goto try_again;
 		}
 	}
 

