linux-kernel.vger.kernel.org archive mirror
* Re: Feedback on preemptible kernel patch
@ 2001-09-15 19:18 Robert Love
  2001-09-16  1:28 ` Daniel Phillips
  0 siblings, 1 reply; 39+ messages in thread
From: Robert Love @ 2001-09-15 19:18 UTC (permalink / raw)
  To: phillips; +Cc: linux-kernel

On Sun, 2001-09-09 at 23:24, Daniel Phillips wrote:
> This may not be your fault.  It's a GFP_NOFS recursive allocation - this
> comes either from grow_buffers or ReiserFS, probably the former.  In
> either case, it means we ran completely out of free pages, even though
> the caller is willing to wait.  Hmm.  It smells like a loophole in vm
> scanning.

Hi, Daniel.  If you remember, a few users of the preemption patch
reported instability and/or syslog messages such as:

Sep  9 23:08:02 sjoerd kernel: __alloc_pages: 0-order allocation failed (gfp=0x70/1).
Sep  9 23:08:02 sjoerd last message repeated 93 times
Sep  9 23:08:02 sjoerd kernel: cation failed (gfp=0x70/1).
Sep  9 23:08:02 sjoerd kernel: __alloc_pages: 0-order allocation failed (gfp=0x70/1).
Sep  9 23:08:02 sjoerd last message repeated 281 times

It now seems that all of them are indeed using ReiserFS.  There are no
other reported problems with the preemption patch, except from those
users...

I am beginning to muse over the source, looking at where kmalloc is
called with GFP_NOFS in ReiserFS, and then following the path that code
takes through the VM.

I assume the kernel VM code has a hole somewhere and the request is
falling through.  It should wait, even if no pages are free, right?
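
To make the question concrete, here is what I would expect to happen,
sketched out (this is not the actual __alloc_pages; the helper and the
exact scanner call are illustrative only):

	/*
	 * Schematic of the suspected hole.  A GFP_NOFS caller has
	 * __GFP_WAIT set, so it should block or retry rather than fail;
	 * but if the retry loop gives up whenever the page scanner
	 * reports no progress, a 0-order allocation can return NULL
	 * even though the caller was willing to wait.
	 */
	static struct page *alloc_pages_sketch(unsigned int gfp_mask,
					       unsigned int order)
	{
		struct page *page;

		for (;;) {
			page = grab_free_pages(order);	/* hypothetical helper */
			if (page)
				return page;

			if (!(gfp_mask & __GFP_WAIT))
				return NULL;	/* atomic caller: failing is fine */

			if (!try_to_free_pages(gfp_mask))
				return NULL;	/* <-- the suspect exit: we give
						 * up although we could wait */
		}
	}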

Where should I begin looking?  How does it relate to ReiserFS?  How is
preemption related?

Thank you very much,

-- 
Robert M. Love
rml at ufl.edu
rml at tech9.net


* Re: Feedback on preemptible kernel patch
@ 2001-09-14  2:47 Dieter Nützel
  0 siblings, 0 replies; 39+ messages in thread
From: Dieter Nützel @ 2001-09-14  2:47 UTC (permalink / raw)
  To: Robert Love; +Cc: Linux Kernel List

Robert Love wrote:

> Hi Arjan,
>
> first, highmem is fixed and the original patch you have from me is good.
> second, Daniel Phillips gave me some feedback on how to figure out the
> VM error.  I am working on it, although the VM is only one potential culprit

Good to hear.

> -- ReiserFS may be another problem.

Can't wait for that.

> third, you may be experiencing problems with a kernel optimized for
> Athlon.  this may or may not be related to the current issues with an
> Athlon-optimized kernel.  Basically, functions in arch/i386/lib/mmx.c
> seem to need some locking to prevent preemption.  I have a basic patch
> and we are working on a final one.
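
I imagine the fix looks roughly like this (just my guess at the shape,
not your actual patch; the save/restore helpers here are made up, and
only ctx_sw_off()/ctx_sw_on() are taken from your patches):

	/*
	 * Sketch: an MMX-accelerated copy saves the FPU/MMX state, runs
	 * the copy with the MMX registers live, and then restores the
	 * state.  If the task is preempted in between, another task can
	 * clobber those registers, so the whole section must run with
	 * preemption disabled.
	 */
	static void mmx_copy_guarded(void *to, const void *from, size_t len)
	{
		ctx_sw_off();			/* no preemption while MMX regs are live */
		save_mmx_state();		/* hypothetical: stash FPU/MMX state */
		do_mmx_copy(to, from, len);	/* hypothetical: the movq-based copy loop */
		restore_mmx_state();		/* hypothetical: restore the state */
		ctx_sw_on();
	}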

Can you please send this stuff along to me?
You know I own an Athlon (since yesterday an Athlon II 1 GHz :-) and need some 
input...

Mobo is MSI MS-6167 Rev 1.0B (AMD Irongate C4, yes the very first one)

Kernels with the preempt patch and mmx/3dnow! optimization crash randomly.
I never had that (without preempt) during the last two years.

Thanks,
	Dieter


* Re: Feedback on preemptible kernel patch
@ 2001-09-11 22:53 Robert Love
  0 siblings, 0 replies; 39+ messages in thread
From: Robert Love @ 2001-09-11 22:53 UTC (permalink / raw)
  To: iafilius, ledzep37; +Cc: linux-kernel

Arjan, Jordan, and anyone using preemption and highmem:

Can you _please_ test the following patch?  It is my final version of
the highmem patch, and the one I would like to include in the preempt
patch itself.

It should be faster than the patch you have been using before: the locks
in the previous patch were held for the entire time the page remained
mapped.  I don't see any reason for that.
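
For context, the sort of caller these functions have looks roughly like
this (a sketch, not code from the patch; fill_page_sketch is a made-up
name):

	/*
	 * Sketch: copy a buffer into a (possibly highmem) page.
	 * kmap_atomic() sets up a short-lived per-CPU mapping and
	 * kunmap_atomic() tears it down; the window between the two
	 * is meant to stay small.
	 */
	static void fill_page_sketch(struct page *page, const char *buf,
				     size_t len)
	{
		char *vaddr = kmap_atomic(page, KM_USER0);
		memcpy(vaddr, buf, len);
		kunmap_atomic(vaddr, KM_USER0);
	}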

I would appreciate it if you could patch your kernel and let me know.  I
am putting 2.4.10-pre8 patches up at http://tech9.net/rml/linux shortly. 
2.4.9-ac10 patches are there, too.

If you test, please reply with:

 - your kernel version,
 - whether you lock/oops with highmem enabled but no extra patch (you should),
 - whether you lock/oops with highmem enabled and the original highmem
   patch (you should not), and
 - whether you lock/oops/anything with this new highmem patch (I hope not).

Obviously enable CONFIG_PREEMPT and CONFIG_HIGHMEM, and test well. 
Please tell me if CONFIG_SMP is enabled (that is another bag of fun...).

Thank you...




--- linux-not-rml/include/asm-i386/highmem.h	Tue Sep 11 17:54:32 2001
+++ linux/include/asm-i386/highmem.h	Tue Sep 11 18:42:13 2001
@@ -88,6 +88,8 @@
 	if (page < highmem_start_page)
 		return page_address(page);
 
+	ctx_sw_off();
+
 	idx = type + KM_TYPE_NR*smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
 #if HIGHMEM_DEBUG
@@ -97,6 +99,8 @@
 	set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
 	__flush_tlb_one(vaddr);
 
+	ctx_sw_on();
+
 	return (void*) vaddr;
 }
 
@@ -106,6 +110,10 @@
 	unsigned long vaddr = (unsigned long) kvaddr;
 	enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
 
-	if (vaddr < FIXADDR_START) // FIXME
-		return;
+	ctx_sw_off();
+
+	if (vaddr < FIXADDR_START) { // FIXME
+		ctx_sw_on();	/* don't leave preemption disabled */
+		return;
+	}
 
@@ -118,6 +126,8 @@
 	 */
 	pte_clear(kmap_pte-idx);
 	__flush_tlb_one(vaddr);
+
+	ctx_sw_on();
 #endif
 }
 


-- 
Robert M. Love
rml at ufl.edu
rml at tech9.net


* Feedback on preemptible kernel patch
@ 2001-09-08  5:22 grue
  2001-09-08  5:47 ` Robert Love
  0 siblings, 1 reply; 39+ messages in thread
From: grue @ 2001-09-08  5:22 UTC (permalink / raw)
  To: Robert Love; +Cc: linux-kernel

I am running 2.4.10-pre4 with the rml-preempt patch.  I built and
rebooted it on my workstation yesterday when I saw the patch posted, and
it's been working great.

I'm running it on a dual P3-550 with 256MB RAM and CONFIG_SMP, with no
problems whatsoever, although it hasn't been worked 'real' hard yet
(load no higher than 4). ;)

Figured I'd give some positive feedback about the patch. If you want,
Rob, I could run some benchmarks on this against an unpatched kernel, or
if you have some ideas for me to really stress this thing to see if it
breaks, let me know.

--
Gregory Finch




Thread overview: 39+ messages
     [not found] <200109150444.f8F4iEG19063@zero.tech9.net>
     [not found] ` <200109140302.f8E32LG13400@zero.tech9.net>
2001-09-14  4:35   ` Feedback on preemptible kernel patch Robert Love
2001-09-15  4:25     ` Dieter Nützel
2001-09-15  5:14     ` Robert Love
2001-09-18  4:06       ` Dieter Nützel
2001-09-18  8:35         ` Daniel Phillips
2001-09-18 18:18         ` Roger Larsson
2001-09-18 23:31         ` Robert Love
2001-09-20  6:40           ` Dieter Nützel
2001-09-18 23:31       ` Robert Love
2001-09-15 19:18 Robert Love
2001-09-16  1:28 ` Daniel Phillips
2001-09-16  1:54   ` Daniel Phillips
     [not found] <Pine.LNX.4.33.0109140838040.21992-100000@sjoerd.sjoerdnet>
2001-09-14 15:04 ` Robert Love
2001-09-15  9:44   ` Arjan Filius
2001-09-15 10:38     ` Erik Mouw
2001-09-15 17:57     ` Robert Love
  -- strict thread matches above, loose matches on Subject: below --
2001-09-14  2:47 Dieter Nützel
2001-09-11 22:53 Robert Love
2001-09-08  5:22 grue
2001-09-08  5:47 ` Robert Love
2001-09-08 17:33   ` Arjan Filius
2001-09-08 18:22     ` safemode
2001-09-09  4:40     ` Robert Love
2001-09-09 17:09     ` Robert Love
2001-09-09 21:07       ` Arjan Filius
2001-09-09 21:26         ` Robert Love
2001-09-09 21:23       ` Arjan Filius
2001-09-09 21:37         ` Robert Love
2001-09-10  3:24           ` Daniel Phillips
2001-09-10  3:37             ` Jeremy Zawodny
2001-09-10  5:09           ` Robert Love
2001-09-10 18:25             ` Daniel Phillips
2001-09-10 21:29             ` Arjan Filius
2001-09-13 17:27               ` Robert Love
2001-09-14  7:30                 ` george anzinger
2001-09-14 15:01                 ` Robert Love
2001-09-11 19:47           ` Arjan Filius
2001-09-09 18:57   ` grue
2001-09-09 21:44     ` Robert Love
