linux-kernel.vger.kernel.org archive mirror
* 2.4.8-pre7: still buffer cache problems
@ 2001-08-09 13:56 marc heckmann
  2001-08-09 16:09 ` Chris Mason
  2001-08-09 20:55 ` Rik van Riel
  0 siblings, 2 replies; 7+ messages in thread
From: marc heckmann @ 2001-08-09 13:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm

Hi.

While 2.4.8-pre7 definitely fixes the "dd if=/dev/zero of=bigfile bs=1000k
count=bignumber" case, the "dd if=/dev/hda of=/dev/null" case is still quite
broken for me. While I appreciate that this is a case of root doing
something stupid, it shouldn't mess up the system so badly. On 2.2.19 the
system is completely usable; on 2.4.8-pre7 it's thrashing swap like mad and
the buffer cache is huge. This is all on a PPC [G3] with 192MB of RAM and
200MB of swap, so no highmem is involved. vmstat output:

##############
2.2.19: [in X with a full GNOME session and Galeon; everything is very usable]

 r  b  w   swpd   free   buff  cache  si  so    bi  bo   in    cs  us  sy id
 2  0  0      0   2148  79160  25388   0   0 10112   4  174   639   6  17 76
 1  0  0      0   3164  78136  25388   0   0 10496   0  179   625   5  21 74
 1  0  0      0   2264  79208  25212   0   0 10112   1  187   612   4  21 75
 1  0  0      0   3032  78440  25212   0   0 10624   0  183   613   3  23 75
 2  0  0      0   2888  78568  25212   0   0 10496   2  184   617   5  23 73
 1  0  0      0   2768  78696  25212   0   0 10368   0  186   614   4  19  7

###############
2.4.8-pre7: [the buffer cache tends to max out at around 145MB, but the
system is still thrashing after we get there...]

 r  b  w   swpd free   buff  cache  si  so    bi    bo   in    cs  us  sy id
 2  1  0  57696 2648 135076   8172 648   0  3656     0  281   653  20  15 66
 0  2  0  57696 2404 135092   8172 784   0  2320     0  312   350  12   9 79
 1  1  0  58540 2096 135076   8172 360 1452   808  1452  175  274   0   4 96 
 1  2  0  59024 2036 135972   8172 772   0  1668     0  155   518  23  11 67
 1  0  0  59024 2152 136536   7936 200   0  6728     0  356   699  10  18 73
 1  2  1  59024 2656 135648   7956 368 1024  1224  1024  375  600  12   6  76
 2  0  0  59560 2216 137784   7912   8 380  6216   380  254   682  15  17 69
 2  2  0  60824 2720 137956   7732 100 1772  3044  1772  191   645  9  12 79  
 1  1  0  61928 2144 139372   7656 340 508  4720   508  286   746  15   1 71
 2  2  1  62856 2036 139992   7652   4 768  3344   768  245   649   9  11 81

Hope this helps. I'm willing to test.

Cheers,

-- 
	Marc Heckmann <heckmann@hbe.ca>
 	C3C5 9226 3C03 CDF7 2EF1  029F 4CAD FBA4 F5ED 68EB
	key: http://people.hbesoftware.com/~heckmann/




* Re: 2.4.8-pre7: still buffer cache problems
  2001-08-09 13:56 2.4.8-pre7: still buffer cache problems marc heckmann
@ 2001-08-09 16:09 ` Chris Mason
  2001-08-09 20:55 ` Rik van Riel
  1 sibling, 0 replies; 7+ messages in thread
From: Chris Mason @ 2001-08-09 16:09 UTC (permalink / raw)
  To: marc heckmann, linux-kernel; +Cc: linux-mm



On Thursday, August 09, 2001 09:56:31 AM -0400 marc heckmann
<heckmann@hbesoftware.com> wrote:

> Hi.
> 
> While 2.4.8-pre7 definitely fixes the "dd if=/dev/zero of=bigfile bs=1000k
> count=bignumber" case. The "dd if=/dev/hda of=/dev/null" is still quite
> broken for me. while I appreciate that it is a case of "root" doing
> something stupid, it shouldn't mess up the system so badly. On 2.2.19 the
> system is completely useable. on 2.4.8-pre7 it's thrashing swap like mad
> and the buffercache is huge. this is all on a PPC [G3] w/ 192Mb's of RAM
> and 200MB's of swap. so no highmem is involved. vmstat outputs:
>

Hmmm, perhaps it's because the buffer cache doesn't have any use-once or
drop-behind optimizations?

What happens when you do this instead (assuming your dd supports large
files; otherwise use 1000 instead of 9000)?

dd if=/dev/zero of=some_file seek=9000 bs=1MB count=1

Then, run your test again:

dd if=some_file of=/dev/null

-chris



* Re: 2.4.8-pre7: still buffer cache problems
  2001-08-09 13:56 2.4.8-pre7: still buffer cache problems marc heckmann
  2001-08-09 16:09 ` Chris Mason
@ 2001-08-09 20:55 ` Rik van Riel
  2001-08-10  0:20   ` marc heckmann
                     ` (2 more replies)
  1 sibling, 3 replies; 7+ messages in thread
From: Rik van Riel @ 2001-08-09 20:55 UTC (permalink / raw)
  To: marc heckmann; +Cc: linux-kernel, linux-mm

On Thu, 9 Aug 2001, marc heckmann wrote:

> While 2.4.8-pre7 definitely fixes the "dd if=/dev/zero
> of=bigfile bs=1000k count=bignumber" case. The "dd if=/dev/hda
> of=/dev/null" is still quite broken for me.

OK, there is no obvious way to do drop-behind on
buffer cache pages, but I think we can use a quick
hack to make the system behave well in the presence
of large amounts of buffer cache pages.

What we could do is, in refill_inactive_scan(), just
move buffer cache pages to the inactive list regardless
of page aging when there are too many buffer cache pages
around in the system.

Does the patch below help you?

regards,

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/


--- linux-2.4.7-ac7/mm/vmscan.c.buffer	Thu Aug  9 17:54:24 2001
+++ linux-2.4.7-ac7/mm/vmscan.c	Thu Aug  9 17:55:09 2001
@@ -708,6 +708,8 @@
  * This function will scan a portion of the active list to find
  * unused pages, those pages will then be moved to the inactive list.
  */
+#define too_many_buffers (atomic_read(&buffermem_pages) > \
+		(num_physpages * buffer_mem.borrow_percent / 100))
 int refill_inactive_scan(zone_t *zone, unsigned int priority, int target)
 {
 	struct list_head * page_lru;
@@ -770,6 +772,18 @@
 				page_active = 1;
 			}
 		}
+
+		/*
+		 * If the amount of buffer cache pages is too
+		 * high we just move every buffer cache page we
+		 * find to the inactive list. Eventually they'll
+		 * be reclaimed there...
+		 */
+		if (page->buffers && !page->mapping && too_many_buffers) {
+			deactivate_page_nolock(page);
+			page_active = 0;
+		}
+
 		/*
 		 * If the page is still on the active list, move it
 		 * to the other end of the list. Otherwise we exit if



* Re: 2.4.8-pre7: still buffer cache problems
  2001-08-09 20:55 ` Rik van Riel
@ 2001-08-10  0:20   ` marc heckmann
  2001-08-15 11:06     ` 2.4.8-pre7: still buffer cache problems[+2.4.9-pre3 comments] Marc Heckmann
  2001-08-10  1:52   ` 2.4.8-pre7: still buffer cache problems Ed Tomlinson
  2001-08-10  8:01   ` Zdenek Kabelac
  2 siblings, 1 reply; 7+ messages in thread
From: marc heckmann @ 2001-08-10  0:20 UTC (permalink / raw)
  To: riel; +Cc: heckmann, linux-kernel, linux-mm

> 
> OK, there is no obvious way to do drop-behind on
> buffer cache pages, but I think we can use a quick
> hack to make the system behave well under the presence
> of large amounts of buffer cache pages.
> 
> What we could do is, in refill_inactive_scan(), just
> moving buffer cache pages to the inactive list regardless
> of page aging when there are too many buffercache pages
> around in the system.
> 
> Does the patch below help you ?

Well, the buffer cache still got huge and the system still swapped out like
mad, but it seemed like the buffer cache grew _slower_ and the VM was
fairer towards other VM users. So interactivity was better, but still far
from 2.2. And then it oopsed [I don't think it was because of your patch,
though..]:


Oops: kernel access of bad area, sig: 11
NIP: C005DEDC XER: 00000000 LR: C005B78C SP: C1251E10 REGS: c1251d60 TRAP: 0300
Using defaults from ksymoops -t elf32-powerpc -a powerpc:common
MSR: 00009032 EE: 1 PR: 0 FP: 0 ME: 1 IR/DR: 11
TASK = c1250000[1386] 'vmstat' Last syscall: 3 
last math c4568000 last altivec 00000000
GPR00: 00002000 C1251E10 C1250000 C8CC8000 C7262000 C01536F0 C5549880 00000000 
GPR08: 00007262 7FFFE000 00000000 00000000 84004883 10019BEC 7FFFF678 7FFFF680 
GPR16: 00000000 00000000 C7262000 00000052 00000625 00000440 00000000 C8CC8232 
GPR24: C0003CE0 7FFFF634 0020D000 C1251EA8 C1251EA0 C681A67C C681A660 C8CC8000 
Call backtrace: 
C58631A0 C005B78C C003A980 C0003D3C 1000141C 10000E18 0FEB5308 
00000000 
Warning (Oops_read): Code line not seen, dumping what data is available

>>NIP; c005dedc <proc_pid_stat+104/300>   <=====
Trace; c58631a0 <_end+567567c/d64853c>
Trace; c005b78c <proc_info_read+74/19c>
Trace; c003a980 <sys_read+c8/114>
Trace; c0003d3c <ret_from_syscall_1+0/b4>
Trace; 1000141c Before first symbol
Trace; 10000e18 Before first symbol
Trace; 0feb5308 Before first symbol
Trace; 00000000 Before first symbol


20 warnings issued.  Results may not be reliable.

Cheers,

-marc

-- 
	Marc Heckmann <heckmann@hbe.ca>

	C3C5 9226 3C03 CDF7 2EF1  029F 4CAD FBA4 F5ED 68EB
	key: http://people.hbesoftware.com/~heckmann/




* Re: 2.4.8-pre7: still buffer cache problems
  2001-08-09 20:55 ` Rik van Riel
  2001-08-10  0:20   ` marc heckmann
@ 2001-08-10  1:52   ` Ed Tomlinson
  2001-08-10  8:01   ` Zdenek Kabelac
  2 siblings, 0 replies; 7+ messages in thread
From: Ed Tomlinson @ 2001-08-10  1:52 UTC (permalink / raw)
  To: Rik van Riel, marc heckmann; +Cc: linux-kernel, linux-mm

Hi Rik,

This has nice effects here.  With 320M of memory, starting a tob backup would put
about 120M in the buffer cache.  With this applied it peaks at about 60M, and the
system also remains more interactive.

If buffer_mem.borrow_percent is not used anywhere else, I suggest we reduce the
default percentage a bit more and see if things get even better.
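
One quick way to experiment with that, assuming /proc/sys/vm/buffermem is
still exported on this kernel and still holds the 2.2-style
"min_percent borrow_percent max_percent" triple (the values below are only
illustrative):

cat /proc/sys/vm/buffermem
# e.g. "2    10    60"

# halve borrow_percent, then re-run the dd test:
echo "2 5 60" > /proc/sys/vm/buffermem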

Thoughts?

Ed Tomlinson

On August 9, 2001 04:55 pm, Rik van Riel wrote:
> On Thu, 9 Aug 2001, marc heckmann wrote:
> > While 2.4.8-pre7 definitely fixes the "dd if=/dev/zero
> > of=bigfile bs=1000k count=bignumber" case. The "dd if=/dev/hda
> > of=/dev/null" is still quite broken for me.
>
> OK, there is no obvious way to do drop-behind on
> buffer cache pages, but I think we can use a quick
> hack to make the system behave well under the presence
> of large amounts of buffer cache pages.
>
> What we could do is, in refill_inactive_scan(), just
> moving buffer cache pages to the inactive list regardless
> of page aging when there are too many buffercache pages
> around in the system.
>
> Does the patch below help you ?
>
> regards,
>
> Rik
> --
> IA64: a worthy successor to the i860.
>
> 		http://www.surriel.com/
> http://www.conectiva.com/	http://distro.conectiva.com/
>
>
> --- linux-2.4.7-ac7/mm/vmscan.c.buffer	Thu Aug  9 17:54:24 2001
> +++ linux-2.4.7-ac7/mm/vmscan.c	Thu Aug  9 17:55:09 2001
> @@ -708,6 +708,8 @@
>   * This function will scan a portion of the active list to find
>   * unused pages, those pages will then be moved to the inactive list.
>   */
> +#define too_many_buffers (atomic_read(&buffermem_pages) > \
> +		(num_physpages * buffer_mem.borrow_percent / 100))
>  int refill_inactive_scan(zone_t *zone, unsigned int priority, int target)
>  {
>  	struct list_head * page_lru;
> @@ -770,6 +772,18 @@
>  				page_active = 1;
>  			}
>  		}
> +
> +		/*
> +		 * If the amount of buffer cache pages is too
> +		 * high we just move every buffer cache page we
> +		 * find to the inactive list. Eventually they'll
> +		 * be reclaimed there...
> +		 */
> +		if (page->buffers && !page->mapping && too_many_buffers) {
> +			deactivate_page_nolock(page);
> +			page_active = 0;
> +		}
> +
>  		/*
>  		 * If the page is still on the active list, move it
>  		 * to the other end of the list. Otherwise we exit if
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/


* Re: 2.4.8-pre7: still buffer cache problems
  2001-08-09 20:55 ` Rik van Riel
  2001-08-10  0:20   ` marc heckmann
  2001-08-10  1:52   ` 2.4.8-pre7: still buffer cache problems Ed Tomlinson
@ 2001-08-10  8:01   ` Zdenek Kabelac
  2 siblings, 0 replies; 7+ messages in thread
From: Zdenek Kabelac @ 2001-08-10  8:01 UTC (permalink / raw)
  To: Rik van Riel

Rik van Riel wrote:
> 
> On Thu, 9 Aug 2001, marc heckmann wrote:
> 
> > While 2.4.8-pre7 definitely fixes the "dd if=/dev/zero
> > of=bigfile bs=1000k count=bignumber" case. The "dd if=/dev/hda
> > of=/dev/null" is still quite broken for me.
> 
> OK, there is no obvious way to do drop-behind on
> buffer cache pages, but I think we can use a quick
> hack to make the system behave well under the presence

There is one simple way: allow a configurable maximum number of
cacheable pages.

I've been having this problem for a very looooong time -
even a simple trick like saying that the disk cache cannot
take more than 40MB would help a lot.
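
A minimal, untested sketch of that idea, reusing the shape of the
too_many_buffers test from Rik's patch earlier in the thread but against a
hypothetical fixed cap (max_buffer_pages is an invented name, not an
existing tunable):

/*
 * Hypothetical: hard-cap the buffer cache at a fixed number of
 * pages (here roughly 40MB worth) instead of a percentage of
 * num_physpages.
 */
static int max_buffer_pages = (40 * 1024 * 1024) / PAGE_SIZE;

#define over_buffer_cap \
	(atomic_read(&buffermem_pages) > max_buffer_pages)

	/* same place in refill_inactive_scan() as Rik's check: */
	if (page->buffers && !page->mapping && over_buffer_cap) {
		deactivate_page_nolock(page);
		page_active = 0;
	}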

kabi



* Re: 2.4.8-pre7: still buffer cache problems[+2.4.9-pre3 comments]
  2001-08-10  0:20   ` marc heckmann
@ 2001-08-15 11:06     ` Marc Heckmann
  0 siblings, 0 replies; 7+ messages in thread
From: Marc Heckmann @ 2001-08-15 11:06 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, linux-mm

On Thu, Aug 09, 2001 at 08:20:32PM -0400, marc heckmann wrote:
> > 
> > OK, there is no obvious way to do drop-behind on
> > buffer cache pages, but I think we can use a quick
> > hack to make the system behave well under the presence
> > of large amounts of buffer cache pages.
> > 
> > What we could do is, in refill_inactive_scan(), just
> > moving buffer cache pages to the inactive list regardless
> > of page aging when there are too many buffercache pages
> > around in the system.
> > 
> > Does the patch below help you ?
> 
> well, the buffer cache still got huge and the system still swapped out like
> mad, but it seemed like the buffer cache grew _slower_ and that the vm was
> more fair towards other vm users. so interactivity was better but still far
> from 2.2. and then it oops'ed [I don't think it was because of your patch
> though..]:
> 

I tried 2.4.8 final and it fixes the problem... could it be the
fs/buffer.c changes? Behaviour is now like 2.2 (good in this case). If I
have time I'll try 2.4.8-ac5 to see if it also fixes it. Thanks to whoever
is responsible for the fix.

I also tried 2.4.9-pre3 and it performs _much_ better [I'd say 10 times
better!] under high VM load, specifically when filling all RAM+swap. Where
2.4.8 used to thrash without making any progress whatsoever [I'd have to
reset], 2.4.9-pre3 will either oom_kill (the _right_ process) or manage
swap well enough to let processes run without thrashing. This is all on PPC
without any highmem (192MB of RAM + 200MB of swap).


	Cheers,

	-marc



